Google Cloud Run with CI/CD

Recently I found it intriguing to see Google's announcement of Cloud Run (https://cloud.google.com/run/). I was not sure how it was a different solution, as Azure had already demonstrated a serverless container service using Virtual Kubelet, and Amazon similarly has Fargate. So how different is this solution, to make PREs excited!? I started out exploring the documentation and demos and then made my own repo to demonstrate how simple it can be.

You can find the source code of the example use case I refer to in this blog here: https://github.com/julianfrank/googlecloudrun

Pre-Requisites

  1. A Google Cloud account
  2. Basic familiarity with the GCloud Console UI
  3. Basic understanding of Git and a working copy in your workspace
  4. An account on GitHub

Step 1: Copy the Repo

You can fork this repository to your own GitHub account before the steps below

Below is the option you could choose if you want Google Cloud to store your repo…

Under Connect External Repository, select the project where you want to host this repo and choose GitHub as the provider

You will be asked to authorize with your GitHub credentials… Go ahead and then select the repo where you forked my repo…

Step 2: Understand the next Steps

Let me quickly explain the files in this repo

main.go -> A simple Go file that runs an HTTP server on PORT. PORT is an environment variable that Google Cloud Run sets when the container is invoked. The main.go code serves the files in the public folder by default. In this repo the public folder has only one simple index.html file.

Dockerfile -> This is a simple multistage Dockerfile definition that compiles main.go into the myapp executable. The final stage copies the myapp executable and the public folder into the final container image that gets built locally. This final stage uses a minimal scratch/Alpine base layer, which results in a really compact and secure <4 MB image!
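
A multistage build along these lines produces that tiny image. This is a sketch of the pattern, not the repo's exact file; stage names and paths are illustrative:

```dockerfile
# Stage 1: compile a fully static Go binary
FROM golang:alpine AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o myapp main.go

# Stage 2: copy only the binary and static assets into a bare image
FROM scratch
COPY --from=builder /src/myapp /myapp
COPY public/ /public/
CMD ["/myapp"]
```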

cloudbuild.yaml -> This file defines the CI/CD pipeline. Go ahead and replace $PROJECT_ID with your preferred project if it is not going to be your current project… Also change $REPO_NAME to something more user-friendly if needed. But make sure you note down what you change and make the suitable changes in the further steps accordingly.
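
For orientation, a Cloud Build pipeline for this build-push-deploy flow typically looks something like the sketch below. The exact step list, region and the beta deploy command are my assumptions based on the May 2019 era; the repo's actual cloudbuild.yaml is authoritative:

```yaml
steps:
  # Build the container image from the Dockerfile
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME', '.']
  # Push it to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME']
  # Deploy the pushed image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['beta', 'run', 'deploy', '$REPO_NAME',
           '--image', 'gcr.io/$PROJECT_ID/$REPO_NAME',
           '--region', 'us-central1']
images:
  - 'gcr.io/$PROJECT_ID/$REPO_NAME'
```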

Step 3: Setup the Endpoint

Choose “Cloud Run”

Click on “Create Service”

Provide the container image URL and the service name exactly as specified in the cloudbuild.yaml file. This is important!

Also ensure that “Allow unauthenticated invocations” has been ticked…

I also like to reduce the memory to 128 MB and the maximum requests per container to 4 to prevent misuse…

Hit Create and the endpoint should get created… The result will show an error, as the target image has not been built yet… We are getting there…

Step 4: Setup the Trigger

Now that the key ingredients are ready, it’s time to set up the trigger that will make this whole engine run on its own…

Go to Cloud Build and select the Triggers option

Click on Add Trigger

Choose the Cloud Source Repository

Now give the trigger a name… Leave everything at its default… Change the Build Configuration to “Cloud Build configuration file”. This tells the trigger to take the next steps as per the cloudbuild.yaml definitions.

Now click on “Create Trigger”

You should now see the trigger listed

You can activate the trigger manually using the “Run Trigger” option… Let’s hit it!

The History view will now show the status of the build process… Now go to the Cloud Run dashboard and you should find the endpoint with its URL…

Click on this URL and you should see your sample HTML page

Now go back to your GitHub repo and change the code… you will see the build trigger fire the instant you commit and push the code back to the repository…

In summary, the whole initial planning and preparation is still time-consuming, but once set up, everything is fluid and painless.

As of now (May 2019), cloudbuild.yaml is unable to deploy the endpoint with public unauthenticated access enabled… While this is fine for internal-only apps, it is a bad thing for public-facing web page / API based solutions. Hopefully it gets ironed out by the time Cloud Run becomes GA.
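
One manual workaround is to grant the invoker role to allUsers after each deploy. The service name and region below are placeholders, and the command was in beta at the time, so verify the flags against the current gcloud reference:

```shell
gcloud beta run services add-iam-policy-binding <service-name> \
    --member="allUsers" \
    --role="roles/run.invoker" \
    --region=us-central1
```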

Infrastructure as code using Terraform for Coder

Last week I came across an awesome open-source solution called Coder (https://coder.com), which allows you to host your own Visual Studio Code as a web application! I’ve spent very little time with it but am impressed by how quickly and easily this solution gets you up to speed… unlike other solutions such as Eclipse Che, for example…

At the same time, I have been trying to learn Terraform with AWS as the provider… So I decided to build a new multi-tiered environment using Terraform, with Coder getting auto-deployed. To add a twist to the spec, I wanted Coder to be the online SSH terminal for administering the private servers. After a few experiments I have now completed this and am sharing the code at https://github.com/julianfrank/coderonaws

Coder on AWS – Sample Design

The above diagram represents the environment generated… Interested? If yes, continue reading.

Step 1: Clone the repo

git clone https://github.com/julianfrank/coderonaws
cd coderonaws

Step 2: Download and Install terraform

The Terraform website has the install details for each OS… https://learn.hashicorp.com/terraform/getting-started/install.html#installing-terraform

Step 3: Provide your AWS details

You will find .tfvars.template files in each of the three directories (base, public and private)

Provide your AWS account details:

# Amazon AWS Access Key
aws_access_key = "<AWS Access Key>"
# Amazon AWS Secret Key
aws_secret_key = "<AWS Secret Key>"

You can change the other variables as well if you want… Check out the vardefs.tf file to see the list of variables you can change

Do not forget to rename the .tfvars.template files to .tfvars (a plain mv cannot rename by glob, so loop over the files):

for f in *.tfvars.template; do mv "$f" "${f%.template}"; done

Step 4: Conjure Up the Environment

Change directory back to the coderonaws folder and fire up start.bat if you are on Windows

./start.bat

On other OSes you need to go into each folder and invoke terraform apply manually (run terraform init on first use)

cd base
terraform init && terraform apply -auto-approve
cd ../public
terraform init && terraform apply -auto-approve
cd ../private
terraform init && terraform apply -auto-approve

If all went well, you should see a log file in the public and private folders with the URLs you can use to connect to your Coder instance

…github.com\julianfrank\coderonaws> cat .\public\public.log
{public_ssh: ssh -i "###.pem" ec2-user@ec2-54-213-122-60.us-west-2.compute.amazonaws.com}
{coder_url: http://ec2-54-213-122-60.us-west-2.compute.amazonaws.com}
{nat_ssh: ssh -i "###.pem" ec2-user@ip-172-16-0-121.us-west-2.compute.internal}
…github.com\julianfrank\coderonaws> cat .\private\private.log
{public_ssh: ssh -i \"###.pem\" ec2-user@172.16.1.145}

Open the coder_url in your favorite modern browser (not IE9)… You should be welcomed with a password challenge

By default I have used ‘PASSword’ as the password… To change this default password, edit the runcoder.sh file in the files folder and re-conjure the environment…

Click on the ‘ENTER IDE’ button and you will be presented with the Coder UI

Now press Ctrl+` to open the terminal

Now look up the log files for the command line of the server you want to access… and type it in the terminal to get SSH access… All interactive TUIs work perfectly fine in the terminal.

Cool right…

Step 5: Tear Down the Environment

Back in the local CLI, invoke ./destroy.bat to destroy the entire VPC and instances cleanly from your account…

If you don’t destroy, you may get billed for the instance usage…

Not bad right!

Just a cautionary note… the design in this repo is not really secured and ready for enterprise production use… so use with care

DevOps Environment Architecture – My Thoughts

After writing about my thoughts on what application architecture might look like in the future, I have been thinking about how CTOs would want to remodel their DevOps environment to cater to the whole new multi-cloud ecosystem, with completely new jargon flying around… Lemme illustrate: Cloud Native / 12-Factor Applications, Multi-Cloud, Hybrid-Cloud, Micro-Segmentation, Containers, ChatOps, NoOps, PaaS, FaaS, Serverless, TDD, BDD, CI/CD, Blue-Green, A/B, Canary… You get the picture, right? All of these were alien terms in the old waterfall model of application development but are now the new reality, and retrofitting the waterfall style of governance onto this ecosystem is a sure recipe for disaster!

So how can we approach this?

I see two dimensions by which we should approach the new estate

  1. The Environmental State Dimension – In this dimension we look at the work item in terms of its state in the modern agile life-cycle
  2. The Application Life-Cycle State Dimension – In this dimension we look at the work item in terms of its user-experience impact…

Let’s Explore the State Dimension…

I see four clear states that the code ultimately will go through in a multi-cloud CI/CD environment

Developer Station

  1. This is the environment the developer uses to write, perform local tests, branch, and sync with multiple developers’ work
  2. This can range from a completely unmanaged BYOD environment to a hyper secured VDI Client
  3. A few Options in increasing order of IT Control I can think of are as below:
    1. BYOD Laptop/Desktop with Developer’s own tools and environment
    2. IT provided Laptop/Desktop/Workstation with mix of IT and Developer installed tools
    3. Virtual App based IT supplied Environment on Developers Device
    4. VDI Client Accessible from Developer Device

Test Zone

  1. This would be the zone where the code gets committed for Integration Tests and Compliance Tests against the bigger SOA / MicroServices Environment
  2. This typically would be cloud-based to minimize cost, as the load would vary significantly with developers’ working slots and commit levels driven by application change loads
  3. Automation is inevitable and manual intervention is not advisable considering the maturity of testing tools automation available in the market

Staging Zone

  1. This zone would be a small scale replica of the Production zone in terms of Multi-Cloud Architecture, Storage distribution, Networking and Security
  2. The aim would be to test the application in terms of performance, UX and resilience under multiple cloud-failure scenarios. 100% automation is possible, and hence manual intervention should be avoided
  3. Observability assurance would be another important goalpost in this environment… though I personally have doubts about the maturity of automation capability here… Unless the developer adheres to corporate standards, observability would not be possible for the given code; automating this is doubtful and IMO may need manual intervention in certain scenarios…

Production Zone

  1. I don’t think this zone needs any introduction
  2. This is where the whole ITIL/IT4IT comes to play from a governance and management perspective
  3. This also would be the zone where multiple clouds thrive in an interconnected, secured and 100% IT Governed manner


Now to the other dimension…

Application Life-cycle

I have already talked about this in a previous blog (Digital {{Dev}} Lifecycle) …

But over time I believe there is more needed in an ever-changing multi-modal enterprise environment… That I leave for the next post… Till then, bye!

Future of Multi-Cloud using CLOAKS

It’s Pongal 2018! Feels good after trying out a veshti… that too the non-Velcro type. Getting a few trees planted was a bonus. Sadly, both these ventures are in their infancy, and they took my thoughts back to the IT world and the multi-cloud pitch, which is now slowly showing signs of maturing into a practical architecture.

I’ve always prophesied that the future of cloud is peer-to-peer multi-cloud, and it has been an aspiration ever since ‘cloud’ got popularized. The first use case that came to my mind was the capability to port applications/services across different service providers, geographies and partners. However, IMO we should look at more parameters to truly evaluate how things have been changing and what the future holds for us! Here is a CLOAKS-based attempt:

  1. Competition
    • Number of vendors that support the exact same architecture stack
  2. Location
    • Number of locations the Infrastructure Architecture can accommodate for the provided Applications
  3. Openness
    • The Level of Openness in the individual specifications
  4. Applications
    • Readiness of Applications to make full use of the Multi-Cloud Architecture
  5. Kinematics
    • Appreciation of the heavy-duty impact of the CAP implications on application design in the multi-cloud scenario
  6. Security
    • Maturity of security for in-scope workloads across data at rest, in motion and in compute, plus identity, policy and control of data leaks.

It’s been at least 5 years, but maturity across all these capabilities has not been demonstrated even close to my expectations. However, there is good news: we are seeing things change in the right direction, and I believe it would be interesting to look at these evolving across different ages as below:

Age 1) Service Provider defined

This became practical with AWS and Azure IaaS workloads providing network peering with on-premise workloads. Further, multi-region networking is provided to handle movement of workloads within the same provider 😉

Age 2) Platform Vendor Defined

We are currently in this age, with vendor-provided solutions that let enterprises scale their applications between their on-premise data center and the cloud. The VMware Cloud solutions for AWS and Bluemix are a step in the right direction but are still restricted to, and supported between, the same platform only. There is still a lot to happen in this space this year, and only time will tell what other vendors have in store!

Age 3) Community Defined

This I believe is the future: it will be built by communities of like-minded technocrats and disrupted by some new player who will force the cloud biggies to tear down the walls they have built to discourage interoperability between vendors and clouds.