Google Cloud Run with CI/CD

Recently I found it intriguing to see the Google announcement of Cloud Run. I was not sure how it was a different solution, as Azure had already demonstrated a serverless container service using Virtual Kubelet and, similarly, Amazon has Fargate. So what is different about this solution that gets PREs excited?! I started out exploring the documentation and demos and then made my own repo to demonstrate how simple it can be.

You can find the source code of the example use case I refer to in this blog here:


Prerequisites:

  1. A Google Cloud account
  2. Basic familiarity with the Google Cloud Console UI
  3. A basic understanding of Git and a working copy in your workspace
  4. An account on GitHub

Step 1: Copy the Repository

You can fork this repository to your account on GitHub before the steps below.

Below is the option you could choose if you want Google Cloud to store your repo…

Under "Connect external repository", select the project where you want to host this repo and choose GitHub as the provider.

You will be asked to authorize with your GitHub credentials. Go ahead, and then select the repo where you forked my repo…

Step 2: Understand the next Steps

Let me quickly explain the files in this repo

main.go -> A simple Go file to run an HTTP server on PORT. PORT is an environment variable that Google Cloud Run will pass in when the container is invoked. The main.go code serves the files in the public folder by default. In this repo the public folder has only one simple index.html file.
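A minimal version of such a server could look like the sketch below. This is an illustration of the idea rather than a copy of the repo's exact code:

package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// Cloud Run injects the PORT environment variable at runtime
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080" // sensible default for local testing
	}

	// Serve everything under ./public (including index.html) at the root path
	http.Handle("/", http.FileServer(http.Dir("./public")))

	log.Printf("Listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}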

Dockerfile -> This is a simple multi-stage Dockerfile definition that compiles main.go into the myapp executable. In the final stage it copies the myapp executable and the public folder into the final container image that gets built. This final stage uses a minimal Alpine/scratch base layer, which results in a really compact and secure image of under 4 MB!
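For illustration, a multi-stage build along those lines could look roughly like this (a hedged sketch, not the repo's exact Dockerfile; the Go base image tag is an assumption):

# Stage 1: compile a static Go binary
FROM golang:1.12-alpine AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o myapp main.go

# Stage 2: copy only the executable and the static assets into a tiny final image
FROM scratch
COPY --from=builder /app/myapp /myapp
COPY --from=builder /app/public /public
CMD ["/myapp"]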

cloudbuild.yaml -> This file specifies how to do the CI and CD. Go ahead and replace $PROJECT_ID with your preferred project if it is not going to be your current project… Also change $REPO_NAME to something more user-friendly if needed. But make sure you note down what you change and make suitable changes in the further steps accordingly.
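For reference, a minimal cloudbuild.yaml for this kind of build-push-deploy pipeline could look roughly like the sketch below. This is my own illustration, not the repo's file verbatim: it assumes the Cloud Run service is named after $REPO_NAME and that us-central1 is the region.

steps:
  # Build the container image from the Dockerfile
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME', '.']
  # Push the image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME']
  # Deploy the freshly built image to the Cloud Run service
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['beta', 'run', 'deploy', '$REPO_NAME',
           '--image', 'gcr.io/$PROJECT_ID/$REPO_NAME',
           '--region', 'us-central1', '--platform', 'managed']
images:
  - 'gcr.io/$PROJECT_ID/$REPO_NAME'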

Step 3: Setup the Endpoint

Choose “Cloud Run”

Click on “Create Service”

Provide the container URL… and the service name as per what is specified in the cloudbuild.yaml file. This is important!!

Also ensure that "Allow unauthenticated invocations" has been ticked…

I also like to reduce the memory to 128 MB and the maximum requests per container to 4 to prevent misuse…

Hit Create and the endpoint should get created… The result will show an error as the target image is still not built… We are getting there…
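If you prefer the gcloud CLI over the console, an equivalent one-off service creation could look something like this (service name, project and region are placeholders, and it assumes the target image already exists; like the console flow above, it will error out until the image has been built):

gcloud run deploy <service-name> \
    --image gcr.io/<project-id>/<repo-name> \
    --region us-central1 \
    --platform managed \
    --allow-unauthenticated \
    --memory 128Mi \
    --concurrency 4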

Step 4: Setup the Trigger

Now that the key ingredients are ready, it's time to set up the trigger that will make this whole engine run on its own…

Go to Cloud Build and select the Triggers option

Click on add Trigger

Choose the Cloud Source Repository

Now give the trigger a name… Leave everything else at the defaults… Change the Build Configuration to "Cloud Build configuration file". This will tell the trigger to proceed and take the next steps as per the cloudbuild.yaml file definitions.

Now click on “Create Trigger”

You should now see the trigger listed

You can activate the trigger manually using the “Run Trigger” Option… Let’s hit it!
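If you prefer the command line, the same pipeline defined in cloudbuild.yaml can also be submitted manually from your working copy (this bypasses the trigger but runs the identical steps; it assumes the gcloud SDK is installed and pointed at your project):

gcloud builds submit --config cloudbuild.yaml .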

The History view will now show the status of the build process… Now go to the Cloud Run dashboard and you should find the endpoint with the URL associated with it…

Click on this URL and you should see your sample HTML page.

Now go back to your GitHub repo and change the code… you will see the build trigger the instant you commit and push the code back to the repository…

In summary, the whole initial planning and preparation is still time-consuming, but once set up, everything is fluid and painless.

As of now (May 2019), the cloudbuild.yaml deployment is unable to create the endpoint with public unauthenticated access enabled… While this is fine for internal-only apps, it is a bad thing for public-facing web page / API based solutions. Hopefully it gets ironed out by the time Cloud Run becomes GA.
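A possible workaround, which I have not verified end to end, is to grant the public invoker role to the service after each deploy, either manually or as an extra step in the pipeline (service name and region are placeholders):

gcloud run services add-iam-policy-binding <service-name> \
    --region us-central1 \
    --member="allUsers" \
    --role="roles/run.invoker"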

Network for Humans

I’ve been thinking for a long time about how the whole concept of electronic networking could be made better, or at least different. This post on Twitter by @etherealmind made me regather everything going on in my head and put it into this blog… So here goes…

Why Change?

To be frank, I’m a bit lazy and tired of the nerd-ism surrounding CIDR. I was irritated by the whole static planning that needs to be done when working in infrastructure-as-code to isolate and control traffic between different application groups using CIDR. I liked the way Docker Swarm enforces network segmentation/isolation in a human-understandable manner. The VMware NSX micro-segmentation using Security Groups is also nice from an infra-as-code workflow. But both are virtual-network-only constructs and not applicable to the physical networking layer… I was not able to see any way out except to change the way we represent networking endpoints, which in the current situation is either IPv4/IPv6 or MAC. From a human reader’s perspective, these are either a set of 8-bit numbers (IPv4) or hexadecimal strings (IPv6/MAC)… neither of which I like…. So, what could be the way out… Well, I got this idea and am jotting it down as is… so expect changes later.

The change in Endpoint naming convention

To start with, I wanted to look at a naming convention that we humans use for objects. My first iteration was to use the English alphabet from A to Z, which is 26 in count, the numerals 0 to 9, and a few regularly used symbols found on all keyboards.

Alphabets    ->    a-z    ->    26

Numerals    ->    0-9    ->    10

The nearest bit count needed to accommodate this would be 6 => 2^6 = 64 slots. The alphabet and numerals eat up 36, leaving us with 28 slots for other characters. In my second iteration, I make the naming case-sensitive; then I need to reserve 52 slots for the alphabet: 26 for lowercase [a-z] and 26 for uppercase [A-Z].

This would then occupy 26+26+10 = 62 slots, leaving 2 slots for other characters… I propose we use these for commonly used symbols.

If I overlay this 6-bits-per-character plan on a 128-bit address space (as is used currently by IPv6), I can accommodate 21 characters (128 // 6 = 21), leaving 2 bits over (128 % 6 = 2). I propose to use these 2 bits for type indicators that I’ll describe later…. IMO, 21 slots is a reasonable count for a human-readable endpoint name.
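To make the 6-bits-per-character packing concrete, here is a minimal Go sketch of how a name plus the 2-bit endpoint type could be packed into a 128-bit address. The exact 64-character set (I have used '.' and '-' as the two spare symbols) and the bit ordering are my own assumptions, not a finalized scheme:

package main

import (
	"fmt"
	"strings"
)

// Hypothetical 64-character alphabet: digits, upper case, lower case plus two spare symbols.
const charset = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz.-"

// encode packs a 2-bit endpoint type and up to 21 six-bit characters
// into a 128-bit (16-byte) address.
func encode(endpointType uint8, name string) ([16]byte, error) {
	var addr [16]byte
	if len(name) > 21 {
		return addr, fmt.Errorf("name longer than 21 characters")
	}
	bitPos := 0
	writeBits := func(value uint8, width int) {
		for i := width - 1; i >= 0; i-- {
			if value&(1<<uint(i)) != 0 {
				addr[bitPos/8] |= 1 << uint(7-bitPos%8)
			}
			bitPos++
		}
	}
	writeBits(endpointType, 2) // first two bits carry the endpoint type (0-3)
	for _, c := range name {
		idx := strings.IndexRune(charset, c)
		if idx < 0 {
			return addr, fmt.Errorf("character %q not in charset", c)
		}
		writeBits(uint8(idx), 6) // each character occupies six bits
	}
	return addr, nil
}

func main() {
	addr, err := encode(2, "dbserver1") // 2 = private fixed
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", addr)
}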

Structure of the Endpoint Name

IPv4 & IPv6 are fairly unbiased and allow the network admin to structure the allotment as per their whims and fancy. This is great for static infrastructures but not great if the endpoints are going to be dynamic and vary both in rate of change and in count. I propose to use a biased naming structure here so that the name can be predictable within a certain boundary and leave enough space for dynamic naming. My first iteration is to use this structure:

Endpoint type (2 bits): 0 – public fixed, 1 – public mobile, 2 – private fixed, 3 – private mobile

Tier 0 – Celestial location: E (Earth)

Tier 1 – Latitudinal slot: with the North Pole as reference, divide latitude into 64 slices and give each slice a char code (0-61)

Tier 2 – Longitudinal slot: with GMT as reference, divide longitude into 64 slices and give each slice a char code (0-61)

Tier 3 – Country: IND for India, USA for the US… you get the drift, right?

Tier 4 – Region: the definition for this could vary by country or could just default to 00 for globally mobile endpoints

Tier 5 – Service provider: the legal owner of this endpoint, responsible for it

User-defined name: a name that I understand and the network does too

Errr! Isn’t this too complicated!

Yup… It is going to be too complicated if we use the full naming convention always. But what if I make the addressing a bit IPv6-ish and move the whole tier 0 to 5 into a `::`? The `::` can be filled in by the network device, and the user only specifies whether the target is public/private plus the human-understandable name…. Now users will be dealing with only names like "2::dbserver1" for private servers and "0::…" for public servers.

I must admit I haven’t thought this through, but I guess that’s enough for this weekend. More on this later if I get any more ideas…

Infrastructure as code using Terraform for Coder

Last week I came across the awesome open source solution called coder, which allows you to host your own Visual Studio Code as a web application! I’ve spent very little time with this but am impressed by how quickly and easily this solution gets you up to speed… unlike other solutions such as Eclipse Che, for example…

At the same time, I have been trying to learn Terraform with AWS as the provider…. So, I decided to make a new multi-tiered environment using Terraform, with Coder getting auto-deployed. To add a twist to the spec, I wanted coder to be the online SSH terminal used to administer the private servers. After a few experiments I have now completed this and am sharing the code in the coderonaws repo.

Coder on AWS – Sample Design

The above diagram represents the environment that gets generated… Interested? If yes, continue reading.

Step 1: Clone the repo

git clone
cd coderonaws

Step 2: Download and Install terraform

The terraform website has the details for each OS…

Step 3: Provide your aws details

You will find .tfvars template files in each of the three directories (base, public and private).

Provide your AWS account details:

# Amazon AWS Access Key
aws_access_key = "<AWS Access Key>"
# Amazon AWS Secret Key
aws_secret_key = "<AWS Secret Key>"

You can change the other variables as well if you want. To see the full list of variables you can change, check out the variables file in each directory.

Do not forget to rename the .tfvars.template files to .tfvars

# Rename each .tfvars.template to .tfvars (run from the coderonaws folder)
for f in */*.tfvars.template; do mv "$f" "${f%.template}"; done

Step 4: Start Conjuring the environment creation

Change directory back to the coderonaws folder and fire up the start.bat if you are on windows


On other OSes you would need to go into each folder and invoke terraform apply manually:

cd base
terraform apply -auto-approve
cd ../public
terraform apply -auto-approve
cd ../private
terraform apply -auto-approve

If all went well, you should see a log file in the public and private folders with the URL you can use to connect to your coder instance.

…\julianfrank\coderonaws> cat .\public\public.log
{public_ssh: ssh -i "###.pem"}
{nat_ssh: ssh -i "###.pem"}
…\julianfrank\coderonaws> cat .\private\private.log
{public_ssh: ssh -i \"###.pem\" ec2-user@}

Open the coder_url in your favorite modern browser (not IE9)… You should be welcomed with a password challenge.

By default I have used `PASSword` as the password… To change this, edit the default password in the file in the files folder and re-conjure the environment…

Click on the ‘ENTER IDE’ button and you will be presented with the Coder UI.

Now press Ctrl + ` to open the terminal.

Now look up the log files to see the command line for the server you want to access… and type it in the terminal to get SSH access… All interactive TUIs work perfectly fine in the terminal.

Cool right…

Step 5: Now to tear down the environment…

Back at the local CLI, invoke ./destroy.bat to destroy the entire VPC and instances cleanly from your account…
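On other OSes, where destroy.bat is not available, a manual equivalent would be to run terraform destroy in each folder (tearing down in the reverse order of creation is my assumption about the dependency order):

cd private
terraform destroy -auto-approve
cd ../public
terraform destroy -auto-approve
cd ../base
terraform destroy -auto-approve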

If you don’t destroy, you may get billed for the instance usage….

Not bad right!

Just a cautionary note… The design in this repo is not really secure or ready for enterprise production use…. So use with care.

The Next Frontier for Collaboration

So why did I start on this?

After a long time, last weekend, I came across a pitch I used to give on our POV for Communications way back in 2007… I had predicted that something called "Air PBX" would replace all the then-known, predominantly on-premise solutions by 2020…

Well, I’m now at the start of 2019, and to me it looks like the concept of Air PBX is now prevalent as Cloud PBX from various vendors and is the default starting point of voice architecture. Only laggards and some customers with genuine "consistent" quality requirements remain on legacy on-premise solutions. So much has changed in the industry, and I too have broadened my portfolio, now covering the entire range of Collaboration Technologies. My outlook has also changed, spearheaded now by the frontline collaboration products…ahem…ahem… I’m referring to the Teams duo (WebEx Teams and Microsoft Teams)…

This weekend I was thinking about what the collaboration scenario in the next decade would be… and this followed…

Architecture Highlights for Collaboration in 2033, IMHO

Plenty of architectures were reviewed and possibilities analyzed in my head, and I settled on a few highlights that could define the Collaboration Architecture of 2033…

  1. All functionality of Collaboration Services would be available and consumed from Public Clouds.
  2. On-Premise solutions would exist but not as “Production” Equipment. On-Premise Solutions would be built and maintained to handle DR/BCP aka “Cloud Fall” situations
  3. PBX, Voicemail, SMS and ACD may be terms that will be found only in the dictionary
  4. The Current Complexity of Multiple Products for Multiple Services will disappear, and Significant Convergence will happen on the Admin Front-End. Please do note that while “Services Endpoints” Collapse/unify, the “Consumption” Mechanisms will explode.
  5. The Diversity of “Communication and Collaboration” Management Teams will disappear and will be replaced by teams aligned to the prevalent vendors at that time. To Illustrate, the SME of vendor 1 will cover all technologies from voice, video, documents collaboration, messaging and the various modalities of consumption that will be normal by then…But this SME may not have any idea of how to get things done on vendor 2’s platform.
  6. Very few Enterprise SMEs will understand the backend complexity of the respective platforms and these too will be focused on managing the DR/BCP setups only.
  7. Identity, Privacy Policies and Data Protection used for communications and collaboration will be external to the vendor platforms unlike how it is tightly integrated currently.
  8. AI/ML based technologies will become a utility and serve well-understood services with full access to the user’s live and historical interactions. The Universal Policy Managers will ensure that Privacy is managed.

So what would the Collaboration Architecture of the Future look like?

It’s Feb 2019 and things could change significantly, either towards or away from what I believe will happen. I took a similar approach in 2007 when even Hosted PBX was not a normal practice. At that time UCaaS and Air PBX were terms with very few practical technologies available to make them a reality. But the market has moved in exactly the direction I predicted… I’m going to use a similar extrapolation this time… so here goes…

I believe the entire Architecture will be broadly clustered on four key Solution Units:

  1. Contact Service Providers
  2. Content Service Providers
  3. Security Policy Managers
  4. Consumption Technologies

Of these, IT SMEs will have deep knowledge of only the Consumption Technologies. The rest will be of "Talkonology" grade and will be well versed only in GUI/API based management. Only a few curious and ardent nerds will have knowledge of the inner workings, and their knowledge would be utilized for customers’ DR/BCP build and management purposes.

Contact Service Providers

In the Current ecosystem this is led by the likes of Skype for Business Server Editions, Cisco UC Servers and similar IP PBX/UC Servers from multiple UC Vendors. IMO these functionalities will move to cloud-based platforms like Skype for Business Online, Microsoft Teams, Cisco WebEx Teams and similar platforms…. Slowly and steadily these will build tight integration with Content Infrastructures in the backend.

The Contact Services themselves will become simplified with Unified Interfaces providing access to all Channels of Communications for the Users. The back-end however would be significantly more powerful and feature heavy than current UCaaS solutions.

Content Service Providers

In the current ecosystem this is led by Microsoft SharePoint, Exchange and the various Knowledge Management products in the market like Salesforce.

As mentioned above these would merge from being separate products to a unified product in the admin front-end. Please do note that in the back end they will continue to be different with each service doing what it does best. This Product will also be handling all the data used by the ML Engines deployed in both back-end and Consumption devices. Governance will be handled by Universal Privacy Policy Managers

Security Policy Managers

To be candid, our current ecosystem does have several wannabes in this product group, but none may be ready to go the whole nine yards.

The products in this group will be universal in the sense that they will work independently of the contact and content platforms. Unlike the contact and content products, this group may not be covered completely by any single platform either…

Consumption Technologies

This will be the most interesting group, which will flourish widely and be where the Architects and Administrators of the Future spend most of their time.

If you’ve been on this side of the business, then these shouldn’t be too new. The only major difference will be that by 2033 these will be normal and significantly less complex… Also, the legacy pieces may remain in some laggards’ IT portfolios….


I wanted to write a lot, but time is short and hence kept to a minimum… maybe I’ll write a follow-up in future…

To get an idea of how I was doing the extrapolation, you can check out my earlier blogs.

Happy Reading!