CI-CD Automation with Cloud-to-Edge Pipeline for IoT Workloads
CI-CD Tools Ecosystem
The following tools and systems are used for CI-CD automation on the pipeline agent server.
Ubuntu 18.04 bootstrap installation script
This bootstrap script installs all the tools and systems required on Ubuntu 18.04 to create a CI-CD pipeline for Java microservice projects: AdoptOpenJDK 11, Maven, Git, Jenkins, Docker, Nginx and MicroK8s. Some systems, such as Jenkins and Git, require additional setup after the server is up and running.
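A condensed sketch of what such a bootstrap script might look like (the AdoptOpenJDK package name and the vendor repository setup are simplified assumptions; the actual bootstrap file is the source of truth):

```shell
#!/bin/bash
# Bootstrap sketch for an Ubuntu 18.04 CI-CD agent (simplified; run as root)
apt-get update
apt-get install -y git maven nginx docker.io        # Git, Maven, Nginx, Docker
snap install microk8s --classic                     # single-node Kubernetes
# AdoptOpenJDK 11 and Jenkins need their vendor apt repositories added first;
# those repo-setup steps are omitted here for brevity.
apt-get install -y adoptopenjdk-11-hotspot jenkins
usermod -aG docker,microk8s jenkins                 # let Jenkins use Docker/MicroK8s
```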
CI-CD with Jenkins Automation in Local VM Host
Ubuntu’s MicroK8s is simple and I am falling in love with it. Everything from installation to application deployment is straightforward. It is especially simple if you want to use it as a single-node Kubernetes cluster to run IoT workloads on your edge devices. (I don’t think Minikube is that simple and straightforward.)
- I have automated a Java microservice with Jenkins, from build (Maven) to deployment in Kubernetes (MicroK8s).
- All tools and systems are installed separately on my local VM host (Jenkins, Maven, Git, Docker and MicroK8s).
- Code is checked out from GitHub for the Maven build.
- The Jenkins job is triggered manually or periodically.
- Everything is done as code and automated (Pipeline as Code, Infrastructure as Code and Deployment as Code).
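The stages above can be sketched as the shell steps the pipeline runs. This is an illustrative outline, not the actual Jenkinsfile; the repository URL and manifest path are assumptions, while the image name, tag and namespace come from the console output further below.

```shell
# Illustrative outline of the pipeline stages (not the actual Jenkinsfile)
git clone https://github.com/<user>/vaccinesethu.git && cd vaccinesethu
mvn -B clean package                                        # Build stage
docker build -t localhost:32000/vaccinesethu:1.0.3 .        # Image build
docker push localhost:32000/vaccinesethu:1.0.3              # Push to MicroK8s registry
microk8s kubectl apply -f k8s/deployment.yaml -n cowin-dev  # Deploy stage
```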
Jenkins Pipeline Code
Jenkins Pipeline Stage View
Kubernetes resources after Jenkins pipeline completion
$ docker image ls | grep 'localhost:32000/vaccinesethu'
localhost:32000/vaccinesethu 1.0.3 695733971b23 44 minutes ago 588MB
$ microk8s ctr images ls | grep 'localhost:32000/vaccinesethu:1.0.3'
localhost:32000/vaccinesethu:1.0.3 application/vnd.docker.distribution.manifest.v2+json sha256:f9b3676b79750a03ec6f0c7b4bd0f9b0b7732df8606a121fa6d23834c226efbe 294.5 MiB linux/amd64 io.cri-containerd.image=managed
$ kubectl get deployment -n cowin-dev
NAME READY UP-TO-DATE AVAILABLE AGE
vaccinesethu 1/1 1 1 43m
$ kubectl get service -n cowin-dev
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
vaccinesethu NodePort 10.152.183.142 <none> 8085:32568/TCP,9095:32018/TCP 43m
$ kubectl get pod -n cowin-dev
NAME READY STATUS RESTARTS AGE
vaccinesethu-574df789f8-m428w 1/1 Running 4 43m
$ curl http://10.152.183.142:8085/api/vaccinesethu/index
<HTML><HEAD><H1>Cowin Vaccine Sethu Service</H1></HEAD></HTML>
$ curl http://10.152.183.142:9095/actuator/health
{"status":"UP","groups":["liveness","readiness"]}
Key Lessons Learnt with Local VM Automation
- The MicroK8s storage service will not work when the API server can’t be reached, which happens if firewall IN and OUT traffic is not allowed on the cbr0 interface. When storage is enabled, microk8s.status shows no error, but running microk8s.inspect displays warnings if specific firewall changes are needed. It is like a secret that is not documented anywhere in the MicroK8s documentation; it took nearly a day of troubleshooting and scrolling through forums. The MicroK8s team should automate this as part of the snap install itself.
- Don’t use the MicroK8s ctr service; even though it is simple to use, it is deprecated. It requires sudo permission, and it will not work from Jenkins in spite of adding the jenkins user to the microk8s and sudo groups.
- You can use the Kubernetes built-in registry for a dev environment. For that, enable the registry service in MicroK8s. Make sure the storage service and its hostpath-provisioner pods are running successfully before enabling the registry service. After the registry service is enabled, check that the PersistentVolumeClaim status is “Bound” instead of “Pending”.
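For the cbr0 firewall issue in the first point, the changes hinted at by microk8s.inspect can be applied with ufw. This is a sketch of the commonly suggested rules; verify them against the actual inspect warnings on your host:

```shell
# Allow pod traffic across the cbr0 bridge so the API server stays reachable
sudo ufw allow in on cbr0
sudo ufw allow out on cbr0
sudo ufw default allow routed
```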
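The “Bound” check in the last point can be scripted. A minimal sketch that parses sample kubectl output (the claim name and namespace below are assumptions; on MicroK8s the registry claim lives in the container-registry namespace, queried with `microk8s kubectl get pvc -n container-registry`):

```shell
# Sample output pasted inline for illustration; in practice capture it with:
#   pvc_output=$(microk8s kubectl get pvc -n container-registry)
pvc_output='NAME             STATUS   VOLUME        CAPACITY
registry-claim   Bound    pvc-9e1a      20Gi'

# Second line, second column is the claim status
status=$(printf '%s\n' "$pvc_output" | awk 'NR==2 {print $2}')
echo "$status"   # → Bound
```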
CI-CD with Jenkins Automation in AWS EC2 Host
AWS CloudFormation VPC EC2 Stack for Jenkins Pipeline Agent Setup
AWS Platform Used for the Pipeline Agent
Jenkins Pipeline Execution in AWS EC2 Agent
The same pipeline Groovy script has been used as-is in the AWS EC2 Jenkins agent as well, and it just worked.
Exposing the Microservice endpoint on EC2 public domain endpoint
Until the deployment of the microservice is completed in the Kubernetes cluster, its IP and port are not known, so they can’t be added to the VPC Security Group rules to allow traffic from outside.
So Nginx is used as a web server and reverse proxy to forward requests from the EC2 public domain to the internal Kubernetes cluster.
After the microservice is deployed in Kubernetes, obtain the cluster NodePort IP and the port on which the microservice is configured to run. Modify /etc/nginx/sites-enabled/default to add the following location blocks so that requests on port 80 are forwarded to the microservice endpoint running in the Kubernetes pods. The microservice should be deployed in Kubernetes with a NodePort spec instead of ClusterIP in order to expose the endpoint outside the cluster.
location /api/vaccinesethu/ {
    proxy_pass http://<kubernetes cluster nodeport ip>:<port>$request_uri;
}
location /actuator/ {
    proxy_pass http://<kubernetes cluster nodeport ip>:<port>$request_uri;
}
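The NodePort to put in the proxy_pass lines can be read from the kubectl service output shown earlier. A minimal sketch (the sample line below is pasted from that output for illustration; in practice pipe `kubectl get service -n cowin-dev` into the same filter):

```shell
# Sample line from `kubectl get service -n cowin-dev` (pasted for illustration)
svc_line='vaccinesethu NodePort 10.152.183.142 <none> 8085:32568/TCP,9095:32018/TCP 43m'

# Pull out the NodePort mapped to container port 8085
node_port=$(printf '%s\n' "$svc_line" | grep -oE '8085:[0-9]+' | cut -d: -f2)
echo "$node_port"   # → 32568
```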
Key lessons learnt with AWS EC2 Setup
- Make sure to allow traffic on port 8080 in the Security Group so that Jenkins can be accessed from the EC2 public domain.
- By default, the Ubuntu firewall is disabled in an AWS EC2 instance. Make sure firewall rules are set for all the ports/services required by Jenkins, Kubernetes, Docker and the microservices, so that internal IP routing and forwarding works, especially among the pods in the Kubernetes cluster.
- Traffic routed from outside into an EC2 instance in a public subnet is allowed in the MOST RESTRICTIVE manner, as per the combined effect of the Security Group rules and the firewall rules in the EC2 instance (if the firewall is enabled). That is, if the firewall is enabled in the EC2 instance, traffic from outside on a particular port or service must be ALLOWED both in the SG and in the EC2 instance firewall rules. For example, if the EC2 firewall allows traffic on port 8085 but the SG restricts it, traffic from outside will not reach the EC2 instance. Conversely, if traffic on port 22 (for SSH) is allowed in the SG but not in the EC2 firewall, SSH to the instance will fail. Similarly, if traffic on port 80 is allowed in the SG but not in the EC2 instance firewall settings, it will be blocked.
- Make sure the appropriate IAM permissions and policies are set for the resources created in the VPC stack, as well as for CloudFormation stack creation in the given AWS account.
- We can only have one SSH session with AWS EC2 instance at a time.
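The combined SG-plus-firewall effect described above is a logical AND, which the following toy shell function makes explicit (the function and its inputs are purely illustrative, not an AWS API):

```shell
# Toy model: traffic gets through only if BOTH the Security Group rule
# and the instance firewall rule allow the port (1 = allow, 0 = deny).
reachable() {
  [ "$1" -eq 1 ] && [ "$2" -eq 1 ] && echo open || echo blocked
}
reachable 1 1   # SG allows 8085, firewall allows 8085  -> open
reachable 1 0   # SG allows 22,   firewall denies 22    -> blocked
reachable 0 1   # SG denies 8085, firewall allows 8085  -> blocked
```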