Posts by julianthefrank

I'm a simple human being in search of Utopia ;-)

So what’s going on outside the Core Contact Center?

I happened to be watching a few documentaries on how Einstein and other key scientists made the contributions for which they are now known… I looked back at my own work, saw a plethora of technologies, buzzwords and jargon thrown around everywhere, and started wondering whether it was possible to bring all of them under one logical roof. What I noticed was that the core contact center gets a lot of attention, but the surrounding soft processes and technologies outside the core have been ignored. Let me first explain what I mean by the core:

It has three primary components:

  1. The Connect Services – The focus here is on expanding the ways by which the customer is able to connect with the contact center. Nowadays this extends to finding ways for agents to perform their service from multiple channels other than the rather traditional phone + PC method.
  2. Self Service Services – These focus on diverting the interaction from human to machine. The motivations can be many, but the bottom line is that it’s a machine doing the human’s job. The rant about IVR Hell remains the most popular slogan in every CC salesperson’s narration, and it continues with the rise of the bots in service of humanity, the best bots of course coming from their garages.
  3. Automation Services – These focus on ensuring that the customer gets serviced by the right agent or bot, based on information collected during the interaction or from history.

All of these have been fundamental to any contact center solution for more than the past two decades, and hence I never got myself to blog about the various transformations happening there. What happens outside the core, however, is rarely discussed, and that is the key subject of this post.

Let’s visualize this Core Thingy

So the experience gained by the customer when interacting with the Connect services is what we call “Customer Experience”, aka “CX”. Similarly, the experience the core gives to agents becomes the “Agent Experience”, aka “AX” … Right?

Wrong… Let’s see why…

Let’s focus on the customer and see what is actually driving their CX… I hear your mind voice… you just thought about the new term “Omni-Channel”… and something else is coming up… “Customer Engagement”… Ah, now I hear something else… OK, stop… I’m here to tell you my opinion… not yours!

In my opinion Customer Experience is governed by three key activities

  1. Engineering – This is where the engineers tirelessly build the core and associated solutions block by block. After crossing the mindless desert of bureaucracy, the storm of politics and the whirlpools of bugs, the engineers bring solutions to production. This used to consist of lifelong projects in the SDLC era, but it has now been cut short by DevOps, so engineers cross smaller obstacles more often instead of the larger ones they faced before…
  2. Experience – Once the solutions are brought to production, the customer gets to use them, and hence you get “Customer Experience”. Thankfully there are tools which are able to quantitatively measure these customer experiences using DataOps. This used to be a laborious manual task in the past, but nowadays it has become automatic to a large extent, letting the data engineers focus on insights.
  3. Insight – Insight is the activity typically performed by supervisors, but business managers and marketing managers are now slowly getting into these tools as well, to gain insights that better their side of the business. These insights result in stories, which in turn fuel the next round of engineering.

Now let’s visualize what I’m talking about …

Now, in traditional environments this whole cycle would happen at most once a month, but the way things are moving in the digital economy it has actually moved on to an event-based model, thanks to AI…

On a similar note, the same cycle runs on the agent side as well, contributing to and improving the “Agent Experience” and “Agent Engagement”.

So what else could be happening here… All the engineering activity happens mostly on the CC platform, and the data about customer and agent experiences and interaction histories is stored in data stores.

So Let’s bring them all together:

So Let’s look at this new box called Platform we just added… It’s basically the core of the contact center exposed to Developers and Infrastructure Engineers.

The AppOps Team would use Observability Tools to understand the Services’ performance and bottlenecks.

The AIOps team, on the other hand, uses Experience Monitoring and Uptime Monitoring solutions along with Automated Remediation solutions.

For the developer there is the DevOps stack, with the Code Repository to store their configurations and code. Continuous Integration ensures that the ready-to-release software/configuration gets tested, both functionally and for security vulnerabilities, before landing on the platform.

So this is how all this would look like:

So the Platform has a lot of real-time and historical data in the Data Store… Let’s see what the Data Folks do with it…

So if you have a truly data-engineering-minded org, then the data engineers and scientists would like to have their own layer of lakes to handle the processed data in a usable form.

Most Orgs would use prebuilt Analytics solutions to serve business metrics to Business Managers and Contact Metrics to Supervisors…

There could and should be more outside the core that typically gets ignored in most orgs… If you know of anything I missed, please do let me know.

Migrate for Anthos

Wanna try migrating your VMs to Google Anthos? Check out “Migrate for Anthos” by Sreenivas M

Sreenivas Makam's Blog

Anthos is a hybrid/multi-cloud platform from GCP. Anthos allows customers to build their application once and run it in GCP or in any other private or public cloud. Anthos unifies the control, management and data planes when running a container-based application across on-premise and multiple clouds. Anthos was launched at last year’s NEXT18 conference and was made generally available recently. VMware integration is available now; integration with other clouds is planned on the roadmap. One of the components of Anthos is called “Migrate for Anthos”, which allows direct migration of VMs into containers running on GKE. This blog will focus on “Migrate for Anthos”. I will cover the need for “Migrate for Anthos” and the platform architecture, and move a simple application from a GCP VM into a GKE container. Please note that “Migrate for Anthos” is in BETA now and it is not ready for production.

Need for “Migrate for Anthos”

Modern application…


Spectrum of Machine Learning Skills

I’ve always been wondering what roles there are in the Machine Learning space. I just thought I should summarize, and here goes… Below is my view of how the ML skillset is spread out, with differing levels of math and machine knowledge.

Let me know if you disagree in the comments…

Google Cloud Run with CI/CD

Recently I found it intriguing to see Google’s announcement of Cloud Run (https://cloud.google.com/run/). I was not sure how it was a different solution, as Azure had already demonstrated a serverless container service using Virtual Kubelet, and similarly Amazon has Fargate. So how different is this solution to make PREs excited!? I started out exploring the documentation and demos, and then made my own repo to demonstrate how simple it can be.

You can find the source code of the example use case I refer to in this blog here: https://github.com/julianfrank/googlecloudrun

Pre-Requisites

  1. Google Cloud account
  2. Basic familiarity with the GCloud Console UI
  3. Basic understanding of Git and a working copy in your workspace
  4. An account on GitHub

Step 1: Copy the Repo

You can fork this repository to your account on GitHub before the steps below

Below is the option you could choose if you want Google Cloud to store your repo…

Under “Connect External Repository”, select the project where you want to host this repo and choose GitHub as the provider

You will be asked to authorize with your GitHub credentials… Go ahead, and then select the repo where you forked my repo…

Step 2: Understand the next Steps

Let me quickly explain the files in this repo

main.go -> A simple Go file that runs an HTTP server on PORT. PORT is an environment variable that Google Cloud Run provides when the container is invoked. The main.go code serves the files in the public folder by default. In this repo the public folder has only one simple index.html file.
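
For reference, here is a minimal sketch of what such a main.go could look like (an approximation of the idea using the standard library file server, not necessarily the exact code in the repo):

```go
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// Cloud Run provides the port to listen on via the PORT environment variable.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080" // fallback for local testing
	}

	// Serve everything under the public folder (index.html included).
	http.Handle("/", http.FileServer(http.Dir("public")))

	log.Printf("listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```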

Dockerfile -> This is a simple multi-stage Dockerfile definition that compiles main.go into the myapp executable. In the final stage it copies the myapp executable and the public folder into the final container image that gets built locally. This final stage uses a minimal scratch base layer, which results in a really compact and secure image of under 4 MB!
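
A multi-stage Dockerfile along these lines would look roughly like the sketch below (purely illustrative; the Go image version and a scratch final stage are assumptions, not necessarily what the repo uses):

```dockerfile
# Stage 1: build a statically linked Go binary
FROM golang:1.12-alpine AS builder
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o myapp main.go

# Stage 2: copy only the binary and the static assets onto a minimal base
FROM scratch
COPY --from=builder /app/myapp /myapp
COPY --from=builder /app/public /public
# The binary reads PORT at runtime, as Cloud Run expects
ENTRYPOINT ["/myapp"]
```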

cloudbuild.yaml -> This file specifies how the CI and CD happen. Go ahead and replace $PROJECT_ID with your preferred project if it is not going to be your current project… Also change $REPO_NAME to something more user-friendly if needed. But make sure you note down what you change and make the corresponding changes in the further steps.
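
To give a feel for it, a build-push-deploy pipeline of this shape generally looks something like the sketch below (a hedged illustration only; the builder steps, region and deploy flags here are my assumptions, not the repo’s actual cloudbuild.yaml):

```yaml
steps:
  # Build the container image from the Dockerfile in the repo root
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME', '.']
  # Push the image to the container registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME']
  # Deploy the freshly built image to the Cloud Run service
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['beta', 'run', 'deploy', '$REPO_NAME',
           '--image', 'gcr.io/$PROJECT_ID/$REPO_NAME',
           '--region', 'us-central1']
images:
  - 'gcr.io/$PROJECT_ID/$REPO_NAME'
```

Note that for the last step to succeed, the Cloud Build service account generally needs permission to deploy to Cloud Run.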

Step 3: Setup the Endpoint

Choose “Cloud Run”

Click on “Create Service”

Provide the container URL… and the service name exactly as specified in the cloudbuild.yaml file. This is important!!

Also ensure that “Allow unauthenticated invocations” has been ticked…

I also like to reduce the memory to 128MB and the maximum requests per container to 4, to prevent misuse…

Hit Create and the endpoint should get created… The result will show an error as the target image has not been built yet… We are getting there…

Step 4: Setup the Trigger

Now that the key ingredients are ready, it’s time to set up the trigger that will make this whole engine run on its own…

Go to Cloud Build and select the Triggers option

Click on “Add Trigger”

Choose the Cloud Source Repository

Now give the trigger a name… Leave everything at default… Change the Build Configuration to “Cloud Build Configuration file”. This will tell the trigger to proceed and take the next steps as per the cloudbuild.yaml file definitions.

Now click on “Create Trigger”

You should now see the trigger listed

You can activate the trigger manually using the “Run Trigger” Option… Let’s hit it!

The History view will now show the status of the build process… Now go to the Cloud Run dashboard and you should find the endpoint with the URL associated with it…

Click on this URL and you should see your sample HTML page

Now go back to your GitHub repo and change the code… You will see the build trigger the instant you commit and push the code back to the repository…

In summary, the whole initial planning and preparation is still time-consuming, but once set up, everything is fluid and painless.

As of now (May 2019), the cloudbuild.yaml is unable to deploy the endpoint with public, unauthenticated access enabled… While this is great for internal-only apps, it is a bad thing for public-facing web page / API based solutions. Hopefully it gets ironed out by the time Cloud Run becomes GA.