Google Cloud Run with CI/CD

Recently I found it intriguing to see the Google announcement of Cloud Run (https://cloud.google.com/run/). I was not sure how it was a different solution, since Azure had already demonstrated a serverless container service using Virtual Kubelet and Amazon similarly has Fargate. So how different is this solution that it gets PREs excited!? I started out exploring the documentation and demos and then made my own repo to demonstrate how simple it can be.

You can find the source code of the example use case I refer to in this blog here: https://github.com/julianfrank/googlecloudrun

Pre-Requisites

  1. Google Cloud account
  2. Basic familiarity with the GCloud Console UI
  3. Basic understanding of Git and a working copy in your workspace
  4. An account on GitHub

Step 1: Copy the Repo

You can fork this repository to your account on GitHub before following the steps below.

Below is the option you could choose if you want Google Cloud to store your repo…

Under Connect External Repository, select the project where you want to host this repo and choose GitHub as the provider.

You will be asked to authorize with your GitHub credentials… Go ahead and then select the repo where you forked my repo…

Step 2: Understand the Next Steps

Let me quickly explain the files in this repo

main.go -> A simple Go file that runs an HTTP server on PORT. PORT is an environment variable that Google Cloud Run sets when the container is invoked. The main.go code serves the files in the public folder by default. In this repo the public folder has only one simple index.html file.
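
A minimal sketch of what such a main.go looks like; the repo's actual code may differ slightly:

```go
// Minimal static file server for Cloud Run.
// Assumes the layout described above: a ./public folder containing index.html.
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// Cloud Run injects the PORT environment variable at runtime.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080" // sensible default for local testing
	}

	// Serve everything under ./public, so index.html is returned for "/".
	http.Handle("/", http.FileServer(http.Dir("public")))

	log.Printf("Listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```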

Dockerfile -> A simple multi-stage Dockerfile definition that compiles main.go into the myapp executable. The final stage copies the myapp executable and the public folder into the final container image that gets built locally. This final stage uses a bare Alpine/scratch base layer, which results in a really compact and secure image of under 4 MB!
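
The general shape of such a multi-stage Dockerfile is shown below; the base image tags and paths are illustrative and may not match the repo's file exactly:

```dockerfile
# Stage 1: compile a static Go binary
FROM golang:alpine AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o myapp .

# Stage 2: copy only the binary and static assets onto a bare base image
FROM scratch
COPY --from=builder /src/myapp /myapp
COPY --from=builder /src/public /public
CMD ["/myapp"]
```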

cloudbuild.yaml -> This file specifies the CI and CD steps. Go ahead and replace $PROJECT_ID with your preferred project if it is not going to be your current project… Also change $REPO_NAME to something more user friendly if needed. But make sure you note down what you change and make the corresponding changes in the later steps.
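
For reference, a build file of this kind generally has a build step, a push step and a deploy step; the outline below is an assumption about the shape of the file (step order, region and flags may differ from what is actually in the repo):

```yaml
steps:
  # Build the container image from the Dockerfile
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME', '.']
  # Push the image to the registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME']
  # Deploy the pushed image to the Cloud Run service
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['beta', 'run', 'deploy', '$REPO_NAME',
           '--image', 'gcr.io/$PROJECT_ID/$REPO_NAME',
           '--region', 'us-central1']
images:
  - 'gcr.io/$PROJECT_ID/$REPO_NAME'
```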

Step 3: Setup the Endpoint

Choose “Cloud Run”

Click on “Create Service”

Provide the container URL… and the service name exactly as specified in the cloudbuild.yaml file. This is important!

Also ensure that “Allow unauthenticated invocations” has been ticked…

I also like to reduce the memory to 128 MB and the maximum requests per container to 4 to prevent misuse…

Hit Create and the endpoint should get created… The result will show an error since the target image has not been built yet… We are getting there…
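
If you prefer the CLI over the console, roughly the same service can be created with gcloud; the service and image names below are placeholders, so substitute whatever your cloudbuild.yaml uses, and note that flag names may vary slightly between gcloud versions:

```sh
# Create/deploy the service with reduced memory and concurrency,
# allowing unauthenticated access (names are placeholders).
gcloud beta run deploy my-service \
  --image gcr.io/MY_PROJECT/my-image \
  --region us-central1 \
  --memory 128Mi \
  --concurrency 4 \
  --allow-unauthenticated
```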

Step 4: Setup the Trigger

Now that the key ingredients are ready, it's time to set up the trigger that will make this whole engine run on its own…

Go to Cloud Build and select the Triggers option

Click on Add Trigger

Choose the Cloud Source Repository

Now give the trigger a name… Leave everything else on default… Change the Build Configuration to “Cloud Build Configuration file”. This tells the trigger to proceed and take the next steps as per the cloudbuild.yaml file definitions.

Now click on “Create Trigger”

You should now see the trigger listed

You can activate the trigger manually using the “Run Trigger” option… Let's hit it!

The History page will now show the status of the build process… Now go to the Cloud Run dashboard and you should find the endpoint with a URL associated with it…

Click on this URL and you should see your sample HTML page.

Now go back to your GitHub repo and change the code… You will see the build trigger the instant you commit and push the code back to the repository…

In summary, the initial planning and preparation is still time consuming, but once set up, everything is fluid and painless.

As of now (May 2019) the cloudbuild.yaml is unable to create the endpoint with public unauthenticated access enabled… While that is fine for internal-only apps, it is a problem for public-facing web pages and API-based solutions. Hopefully it gets ironed out by the time Cloud Run becomes GA.
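
One possible workaround, assuming the IAM route is acceptable for your setup, is to grant public access after the deploy with an IAM binding along these lines (service name and region are placeholders):

```sh
# Allow unauthenticated invocations on an already-deployed service.
gcloud beta run services add-iam-policy-binding my-service \
  --region us-central1 \
  --member="allUsers" \
  --role="roles/run.invoker"
```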

Rendezvous with Azure Container Instance

Microsoft released Azure Container Instances last week, and unlike the rest of the container-oriented cloud solutions on the market, this is a unique offering that I couldn't resist trying out. The quick start guide at https://docs.microsoft.com/en-us/azure/container-instances/container-instances-quickstart lets us experience how it can be used from the az CLI, both from the Azure website and from a laptop. The rest of the tutorials also help in understanding how a standalone app can be packaged, loaded into the Azure Container Registry and then run inside ACI.
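
The quick start essentially boils down to a couple of az commands of this shape (the resource group, names and image below are placeholders of the kind the guide uses):

```sh
# Create a resource group, then run a public container in it.
az group create --name myResourceGroup --location eastus
az container create \
  --resource-group myResourceGroup \
  --name mycontainer \
  --image microsoft/aci-helloworld \
  --ports 80 \
  --ip-address Public
```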

So, what is this service….

To start with, it is basically Docker Linux instances created on demand and destroyed, without the tenant having to invest in a fixed pool of virtual machines, which happens to be the case with other container services. This is not the first time this has been done, and there are other players in the market providing similar services (hyper.sh, now.sh, etc.) where customers can have their own orchestration on dedicated infrastructure and utilise flexible cloud infrastructure that can scale from ‘0’ to… a very large number. Of course, there are the serverless services provided by the big three cloud vendors… but they happen to be very opinionated and short-lived on a per-invocation basis. ACI, on the contrary, gives the serverless experience in an unopinionated way, perfectly ready for limited but long-running services.

To enable this capability to be controlled by customer-built orchestrators, Microsoft has already released the ACI Connector for Kubernetes (https://github.com/Azure/aci-connector-k8s). This connector runs as a container in the local Kubernetes infrastructure and proxies the requests to create and destroy containers as per the developer-provided YAML. From my testing this was not working on the day of launch but got fixed on 4 Aug 2017… it works perfectly now… I hope that in future this capability expands to other orchestrators and adds more features.
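
If I recall the connector's examples correctly, scheduling a pod onto ACI is simply a matter of pinning it to the connector's virtual node; the manifest below is an illustrative sketch along those lines rather than a copy of the project's sample:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-aci
spec:
  containers:
    - name: nginx
      image: nginx
  # Pin the pod to the virtual node registered by the ACI connector,
  # so it runs as an Azure Container Instance instead of on a local node.
  nodeName: aci-connector
```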

What would be good use cases for this?

Like similar container PaaS services available, this can be used to handle worker services that need more compute time than the serverless services can accept. Otherwise, at current pricing, it is not a good candidate to replace always-on VM-based services.

Containers Dissected … almost

HCL recently entered into a partnership with Mesosphere, and it has been a long time since I blogged… so I thought I ought to put something up. A discussion about increasing the ‘utilization efficiency’ of servers hit the brakes when the conversation turned to the difference between hypervisor-based VMs and containers. Verbal exchange apart, I thought this had to be the topic for today's blog… so here goes.

First let us look at what happens with hypervisors:

In a VM-based environment the hypervisor is fundamentally a virtual machine emulator that creates logical processor, memory, network, disk and other interfaces, completely ‘fooling’ the OS running in the guest into thinking it has control of the physical hardware. To do this, the hypervisor lets the OS boot from its own image and run its own kernel, user land and user applications as per the image on the boot disk. What the guest OS actually gets is ‘an’ isolated process on the host OS, which can be given access to some or all of the host's CPUs with policing. Similarly, the memory is completely isolated for the guest and, if needed, overprovisioned. Networking is provided using a variety of virtual networks (host-only, bridged, open, direct NIC access, etc.). Disks too are ultimately just files located either locally or remotely. Bottom line: the guest has no access to either its host or its peers (other guest VMs).

This is great, but things like boot-up speed and the physical size of the VM make it feel slow and restrict mobility. This is where containers come into the picture, providing speed and mobility with a few compromises.

In lieu of the hypervisor in the VM world, the container world has the ‘container engine’… Names vary by vendor, but let's call it the container engine, CE in short. Unlike the hypervisor, the CE does not perform any emulation but rather runs the user land in an isolated process on the host OS. It further uses the many tools available to control access between the application in the container and the host's resources. From a process perspective the host sees the container as just another application process forking sub-processes, and it retains the capability to manipulate the container's processes. The reverse is, however, blocked by the CE, so the container thinks it is the only process running. All other resources are similarly isolated: the container has access to host memory directly, but the CE can restrict the quantity of memory used. Networking is again provided using virtual networks, and physical network access is not available to the container. Access to disk is managed by restricting the container to a folder on the host OS's disk or by mounting a remote disk.
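
Using Docker's CLI as a concrete example, those per-container restrictions look roughly like this (the flags shown are standard Docker options; other container engines expose equivalents):

```sh
# Run a container with capped memory and CPU, an isolated bridge network,
# and disk access limited to a single host folder mounted at /data.
docker run -d \
  --memory 256m \
  --cpus 0.5 \
  --network bridge \
  -v /srv/appdata:/data \
  nginx
```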

Bottom line: containers ensure that applications run inside them are isolated from their peers and from the host, but effectively work on the host's kernel and its capabilities.

This technique, however, is not as unique as Docker and the other container players project it to be. Isolated execution environments have long been provided by JVM multi-tenancy, and non-isolated application execution has been provided by the many application servers (Tomcat, Windows Application Server, WebSphere, etc.).

The value that containers bring is the capability to specify the environment in which the developer wants the application to work, develop the app in that environment, and ensure that the exact same environment is built when the app runs in production. That is a capability that other multi-app serving solutions have not been able to provide for ages.

Before I finish I do want to mention one stray case here: VMware vSphere Integrated Containers (VIC). The architecture above fits well for Docker Engine, rkt, Warden, etc., but the architecture of VIC seems completely different. I will update this blog after further investigation.