So what’s going on outside the Core Contact Center

I happened to be watching a few documentaries on how Einstein and other key scientists made the contributions for which they are now known. I looked back at my own work, saw a plethora of technologies, buzzwords and jargon thrown around everywhere, and started wondering whether it was possible to bring all of them under one logical roof. What I noticed was that the core contact center gets a lot of attention, while the surrounding soft processes and technologies outside the core have been ignored. Let me first explain what I mean by the core:

It has three primary components:

  1. The Connect Services – These focus on expanding the ways in which the customer can connect with the contact center. Nowadays this extends to finding ways for agents to deliver their service from multiple channels beyond the traditional phone + PC setup.
  2. Self Service – These focus on diverting the interaction from human to machine. The motivations can be many, but the bottom line is a machine doing a human's job. The rant about "IVR hell" remains the most popular slogan in every CC salesperson's narration, continuing into the rise of the bots in the service of humanity, with the best bots of course coming from their garages.
  3. Automation Services – These focus on ensuring that the customer gets serviced by the right agent or bot, based on information collected during the interaction or from history.

All of these have been fundamental to any contact center solution for more than the past two decades, which is why I never got around to blogging about the various transformations happening there. What happens outside the core, however, is rarely discussed, and hence it is the key subject of this blog.

Let’s visualize this Core Thingy

So the experience gained by the customer when interacting with the Connect services is what we call "Customer Experience" aka "CX". Similarly, the experience the core gives to agents becomes the "Agent Experience" aka "AX"… Right?

Wrong… Let’s see why…

Let’s focus on the Customer and see what actually is driving their CX… I hear your mind voice…you just thought about the new term “Omni-Channel” …. And Something Else is coming up… “Customer Engagement” …. Ah now I hear something else … Ok Stop… I’m here to tell my opinion …not Yours!

In my opinion Customer Experience is governed by three key activities

  1. Engineering – This is where the engineers tirelessly build the core and associated solutions block by block. After crossing the mindless desert of bureaucracy, the storm of politics and the whirlpools of bugs, the engineers bring solutions to production. This used to mean lifelong projects in the SDLC era but has now been cut short by DevOps, so engineers cross smaller obstacles than the larger ones of before…
  2. Experience – Once the solutions are in production, the customer actually uses them, and hence you get "Customer Experience". Thankfully there are tools that can quantitatively measure these customer experiences using DataOps. This used to be a laborious manual task, but it has now become largely automatic, letting the data engineers focus on insights.
  3. Insight – Insight is the activity typically performed by supervisors, but business managers and marketing managers are now also slowly getting into these tools to gain insights that better their side of the business. These insights result in stories, which in turn fuel the next round of engineering.

Now let’s visualize what I’m talking about …

In traditional environments this whole cycle would happen monthly at best, but the way things are moving in the digital economy it has shifted to an event-based model, thanks to AI…

On a similar note, the same cycle goes on on the agent side as well, contributing to and improving the "Agent Experience" and "Agent Engagement".

So what else could be happening here… All the engineering activity happens mostly on the CC platform, and the data about customer and agent experiences and interaction histories is stored in data stores.

So Let’s bring them all together:

So Let’s look at this new box called Platform we just added… It’s basically the core of the contact center exposed to Developers and Infrastructure Engineers.

The AppOps Team would use Observability Tools to understand the Services’ performance and bottlenecks.

The AIOps team, on the other hand, uses Experience Monitoring and Uptime Monitoring Solutions along with Automated Remediation Solutions.

For the developer there is the DevOps stack, with the code repository to store configurations and code. Continuous integration ensures that ready-to-release software/configuration gets tested both functionally and for security vulnerabilities before landing on the platform.

So this is how all this would look like:

So the Platform has a lot of real-time and historical data in the Data Store… Let’s see what the Data Folks do with it…

If you have a truly data-engineering-minded org, then the data engineers and scientists will want their own layer of lakes to hold the processed data in a usable form.

Most Orgs would use prebuilt Analytics solutions to serve business metrics to Business Managers and Contact Metrics to Supervisors…

There could and should be more outside the core that typically gets ignored in most orgs… If you know anything I missed please do let me know


Spectrum of Machine Learning Skills

I’ve been always wondering what roles are there in Machine Learning space. I just thought I should summarize and here goes… Below is my view of how the ML Skillset is spread out with differing levels of math and machine knowledge

Let me know if you disagree in the comments…

Google Cloud Run with CI/CD

Recently I found the Google announcement of Cloud Run (https://cloud.google.com/run/) intriguing. I was not sure how it was a different solution, as Azure had already demonstrated a serverless container service using Virtual Kubelet and Amazon similarly has Fargate. So how different is this solution that it gets PREs excited!? I started out exploring the documentation and demos and then made my own repo to demonstrate how simple it can be.

You can find the source code of the example use case I refer to in this blog here: https://github.com/julianfrank/googlecloudrun

Pre-Requisites

  1. Google Cloud account
  2. Basic familiarity with the GCloud Console UI
  3. Basic understanding of Git and a working copy in your workspace
  4. Account on GitHub

Step 1: Copy 🙂

You can fork this repository to your GitHub account before the steps below.

Below is the option you could choose if you want Google Cloud to store your repo…

Under Connect External Repository, select the project where you want to host this repo and choose GitHub as the provider.

You will be asked to authorize with your GitHub credentials… Go ahead and then select the repo where you forked my repo…

Step 2: Understand the next Steps

Let me quickly explain the files in this repo

main.go -> A simple Go file that runs an HTTP server on PORT. PORT is an environment variable that Google Cloud Run provides when the container is invoked. The main.go code serves the files in the public folder by default. In this repo the public folder has only one simple index.html file.
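For reference, a minimal sketch of what such a main.go could look like (the actual file in the repo may differ in its details):

```go
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// Cloud Run injects the listening port through the PORT environment variable.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	// Serve everything under ./public (index.html is picked up automatically).
	http.Handle("/", http.FileServer(http.Dir("public")))

	log.Printf("listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```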

Dockerfile -> A simple multi-stage Dockerfile definition that compiles main.go into the myapp executable. The final stage copies the myapp executable and the public folder into the final container image that gets built locally. This final stage uses a bare-bones scratch/alpine base, which results in a really compact and secure <4 MB image!
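Roughly along these lines (a sketch only; the Go version tag and paths are assumptions, and the repo's actual Dockerfile is authoritative):

```dockerfile
# Build stage: compile a static Go binary.
FROM golang:1.12-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /myapp main.go

# Final stage: an empty base image with just the binary and the static assets.
FROM scratch
COPY --from=build /myapp /myapp
COPY public/ /public/
ENTRYPOINT ["/myapp"]
```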

cloudbuild.yaml -> This file specifies how to do CI and CD. Go ahead and replace $PROJECT_ID with your preferred project if it is not going to be your current project… Also change $REPO_NAME to something more user-friendly if needed. But make sure you note down what you change and make the corresponding changes in the further steps.
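The gist of such a file is three steps: build, push, deploy. Below is a sketch; the service name 'myservice' and the region are placeholders I picked, and the cloudbuild.yaml in the repo remains the source of truth:

```yaml
steps:
  # Build the container image from the Dockerfile in the repo root.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/$REPO_NAME', '.']
  # Push the image to Container Registry.
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/$REPO_NAME']
  # Deploy the pushed image to the Cloud Run service (same name as the service you create in Step 3).
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['beta', 'run', 'deploy', 'myservice',
           '--image', 'gcr.io/$PROJECT_ID/$REPO_NAME',
           '--region', 'us-central1']
images:
  - 'gcr.io/$PROJECT_ID/$REPO_NAME'
```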

Step 3: Setup the Endpoint

Choose “Cloud Run”

Click on “Create Service”

Provide the container URL and the service name exactly as specified in the cloudbuild.yaml file. This is important!!

Also ensure that "Allow unauthenticated invocations" has been ticked…

I also like to reduce the memory to 128MB and Maximum Requests per container to 4 to prevent misuse…

Hit Create and the endpoint should get created… The result will show an error, as the target image has not been built yet… We are getting there…

Step 4: Setup the Trigger

Now that the key ingredients are ready, it's time to set up the trigger that will make this whole engine run on its own…

Go to Cloud Build and select the Triggers option

Click on add Trigger

Choose the Cloud Source Repository

Now give the trigger a name… Leave everything at its default… Change the Build Configuration to "Cloud Build configuration file". This tells the trigger to take the next steps as per the cloudbuild.yaml definitions.

Now click on “Create Trigger”

You should now see the trigger listed

You can activate the trigger manually using the “Run Trigger” Option… Let’s hit it!

The History view will now show the status of the build process… Now go to the Cloud Run dashboard and you should find the endpoint with a URL associated with it…

Click on this url and you should see your sample html page

Now go back to your GitHub repo and change the code… you will see the build trigger the instant you commit and push the code back to the repository…

In summary, the whole initial planning and preparation is still time consuming, but once set up, everything is fluid and painless.

As of now (May 2019), cloudbuild.yaml is unable to deploy the endpoint with public unauthenticated access enabled… While that is fine for internal-only apps, it is a bad thing for public-facing web page / API solutions. Hopefully this gets ironed out by the time Cloud Run becomes GA.

Network for Humans

I’ve been thinking for a long time about how the whole concept of electronic networking could be made better or at least different. This post on twitter [https://twitter.com/etherealmind/status/1110280603675566081] by @etherealmind made me regather everything going on in my head and put them in this blog… So here goes…

Why Change?

To be frank, I'm a bit lazy and tired of the nerdism surrounding CIDR. I was irritated by the whole static planning that needs to be done when working in infrastructure-as-code to isolate and control traffic between different application groups using CIDR. I like the way Docker Swarm enforces network segmentation/isolation in a human-understandable manner. VMware NSX micro-segmentation using Security Groups is also nice from an infra-as-code workflow. But both are virtual-network-only constructs and not applicable to the physical networking layer… I could not see any way out except to change the way we represent networking endpoints, which today is either IPv4/IPv6 or MAC. From a human reader's perspective both are either dotted 8-bit numbers (IPv4) or hexadecimal (IPv6/MAC)… neither of which I like… So, what could be the way out? Well, I got this idea and am jotting it down as is… so expect changes later.

The change in Endpoint naming convention

To start with, I wanted to look at a naming convention that we humans use for objects. My first iteration was to use the English alphabet a to z, numerals 0 to 9 and a few regularly used symbols found on all keyboards.

Alphabets -> a-z -> 24

Numerals -> 0-9 -> 10

The nearest bit count needed to accommodate this is 6 bits => 2^6 = 64 slots. The alphabets and numerals eat up 34, leaving 30 slots free for other characters. In my second iteration I make the naming case-sensitive, so I need to reserve 48 slots for alphabets: 24 for lower case [a-z] and 24 for capitals [A-Z].

This would then occupy 24 + 24 + 10 = 58 slots, leaving 6 slots for other characters… I propose we use: `+` `*` `/` `.` `\`

If I overlay this 6-bits-per-character plan on a 128-bit address space (as currently used by IPv6), I can accommodate 21 characters (128 // 6 = 21), leaving 2 bits (128 % 6 = 2). I propose to use those 2 bits for the type indicators I'll describe later… IMO 21 slots is a reasonable length for a human-readable endpoint name.

Structure of the Endpoint Name

IPv4 & IPv6 are fairly unbiased and allow the network admin to structure the allotment as per their whims and fancies. This is great for static infrastructures but not great if the endpoints are going to be dynamic, varying both in rate of change and in count. I propose to use a biased naming structure here, so that the name is predictable within a certain boundary while leaving enough space for dynamic naming. My first iteration is to use this structure:

| Tier | Name | Slots | Example | Notes |
| --- | --- | --- | --- | --- |
| NA | Endpoint type | 2 bits | 0 | 0 – public fixed, 1 – public mobile, 2 – private fixed, 3 – private mobile |
| 0 | Celestial location | 1 | E (earth) | |
| 1 | Latitudinal slot | 1 | N | With the North Pole as reference, divide latitude into 64 slices and give each slice a char code (0-61) |
| 2 | Longitudinal slot | 1 | G | With GMT as reference, divide longitude into 64 slices and give each slice a char code (0-61) |
| 3 | Country | 3 | IND | IND for India, USA for the US, … you get the drift, right… |
| 4 | Zone | 2 | TN | The definition could vary by country, or just default to 00 for globally mobile endpoints |
| 5 | Service provider | 3 | YBB | Legal owner responsible for this endpoint |
| 6 | User defined | 10 | JF-PC.home | A name that I understand, and the network too |

Errr! Isn’t this too complicated!

Yup… it is going to be too complicated if we always use the full naming convention. But what if I make the addressing a bit IPv6-ish and collapse the whole of tiers 0 to 5 into a `::`? The `::` can be filled in by the network device, and the user only specifies whether the target is public/private plus the human-understandable name… Users will then deal only with names like "2::dbserver1" for private servers and "0::google.com" for public servers.

I must admit I haven’t thought this through but I guess that’s enough for this weekend. More on this later if I get any more idea…

Infrastructure as code using Terraform for Coder

Last week I came across the awesome open source solution called coder (https://coder.com), which allows you to host your own Visual Studio Code as a web application! I've spent very little time with this but am impressed by how quickly and easily it gets you up to speed… unlike other solutions such as Apache Che, for example…

At the same time, I have been trying to learn Terraform with AWS as the provider… So I decided to build a new multi-tiered environment using Terraform, with Coder getting auto-deployed. To add a twist to the spec, I wanted Coder to be the online SSH terminal for administering the private servers. After a few experiments I have now completed this and am sharing the code at https://github.com/julianfrank/coderonaws

Coder on AWS – Sample Design

The above diagram represents the environment generated… Interested?? If yes continue reading

Step 1: Clone the repo

git clone https://github.com/julianfrank/coderonaws
cd coderonaws

Step 2: Download and Install terraform

The terraform website has the details for each OS… https://learn.hashicorp.com/terraform/getting-started/install.html#installing-terraform

Step 3: Provide your aws details

You will find .tfvars files in each of the three directories (base, public and private)

Provide your AWS account details:

# Amazon AWS Access Key
aws_access_key = "<AWS Access Key>"
# Amazon AWS Secret Key
aws_secret_key = "<AWS Secret Key>"

You can change the other variables as well if you want… Check out the vardefs.tf file to see the list of variables you can change.

Do not forget to rename the .tfvars.template files to .tfvars

mv *.tfvars.template *.tfvars
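On a Linux/macOS shell, the wildcard-to-wildcard mv above won't expand the way you might hope; a per-directory rename along these lines does the same job (the directory names are from the repo layout, the template file names are assumed):

```bash
# Rename every *.tfvars.template in each module directory, dropping the .template suffix.
for d in base public private; do
  for f in "$d"/*.tfvars.template; do
    mv "$f" "${f%.template}"
  done
done
```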

Step 4: Start Conjuring the environment creation

Change directory back to the coderonaws folder and fire up start.bat if you are on Windows:

./start.bat

On other OSes you would need to go into each folder and invoke terraform apply yourself:

cd base
terraform apply -auto-approve
cd ../public
terraform apply -auto-approve
cd ../private
terraform apply -auto-approve

If all went well, you should see a log file in the public and private folders with the url you can use to connect to your coder instance

…github.com\julianfrank\coderonaws> cat .\public\public.log
{public_ssh: ssh -i "###.pem" ec2-user@ec2-54-213-122-60.us-west-2.compute.amazonaws.com}
{coder_url: http://ec2-54-213-122-60.us-west-2.compute.amazonaws.com}
{nat_ssh: ssh -i "###.pem" ec2-user@ip-172-16-0-121.us-west-2.compute.internal}
…github.com\julianfrank\coderonaws> cat .\private\private.log
{public_ssh: ssh -i \"###.pem\" ec2-user@172.16.1.145}

Open the coder_url in your favorite modern browser (not ie9)… You should be welcomed with a Password challenge

By default I have used `PASSword` as the password… To change it, edit the default password in the runcoder.sh file in the files folder and re-conjure the environment…

Click on the ‘ENTER IDE’ button and you will be presented with the coder ui

Now press Ctrl + ` to open the terminal

Now look up the log files to find the command line for the server you want to access… and type it into the terminal to get SSH access… All interactive TUIs work perfectly fine in the terminal.

Cool right…

Step 5: Now to tear down the environment…

Back on the local CLI, invoke ./destroy.bat to destroy the entire VPC and its instances cleanly from your account…

If you don’t destroy you may get billed on the instance usage….

Not bad right!

Just a cautionary note… the design in this repo is not really secure or ready for enterprise production use… so use with care.

The Next Frontier for Collaboration

So why did I start on this?

After a long time, last weekend, I came across a pitch I used to give on our POV for communications way back in 2007… I had predicted that something called "Air PBX" would replace all the then-known, predominantly on-premise solutions by 2020…

Well I’m now in the start of 2019 and to me it looks like the concept of Air PBX is now prevalent as Cloud PBX from various vendors and is now the default start point of voice architecture. Only Laggards and some customers with genuine “Consistent” quality requirements remain on Legacy On-premise solutions. So much has changed in the Industry and I too have broadened my portfolio now covering the entire Collaboration Technologies. My outlook also has now changed and spearheaded by the frontline collaboration products…ahem…ahem… I’m referring to the Teams duo (WebEx Teams and Microsoft Teams) …

This weekend I was thinking about what the collaboration scenario in the next decade would be… and this followed…

Architecture Highlights for Collaboration in 2033, IMHO

Plenty of architectures were reviewed and possibilities analyzed in my head, and I settled on a few highlights that could define the collaboration architecture of 2033…

  1. All functionality of Collaboration Services would be available and consumed from Public Clouds.
  2. On-Premise solutions would exist but not as “Production” Equipment. On-Premise Solutions would be built and maintained to handle DR/BCP aka “Cloud Fall” situations
  3. PBX, Voicemail, SMS and ACD may be terms that will be found only in the dictionary
  4. The Current Complexity of Multiple Products for Multiple Services will disappear, and Significant Convergence will happen on the Admin Front-End. Please do note that while “Services Endpoints” Collapse/unify, the “Consumption” Mechanisms will explode.
  5. The Diversity of “Communication and Collaboration” Management Teams will disappear and will be replaced by teams aligned to the prevalent vendors at that time. To Illustrate, the SME of vendor 1 will cover all technologies from voice, video, documents collaboration, messaging and the various modalities of consumption that will be normal by then…But this SME may not have any idea of how to get things done on vendor 2’s platform.
  6. Very few Enterprise SMEs will understand the backend complexity of the respective platforms and these too will be focused on managing the DR/BCP setups only.
  7. Identity, Privacy Policies and Data Protection used for communications and collaboration will be external to the vendor platforms unlike how it is tightly integrated currently.
  8. AI/ML based technologies will become utility and serve well understood services with full access to the user’s live and historical interactions. The Universal Policy Managers will ensure that Privacy is managed.

So what would the Collaboration Architecture of the Future look like?

It's Feb 2019 and things could change significantly, either towards or away from what I believe will happen. I took a similar approach in 2007, when even hosted PBX was not a normal practice. At that time UCaaS and Air PBX were terms with very few practical technologies available to make them a reality. But the market has moved in exactly the direction I predicted… I'm going to use a similar extrapolation this time… so here goes…

I believe the entire Architecture will be broadly clustered on four key Solution Units:

  1. Contact Service Providers
  2. Content Service Providers
  3. Security Policy Managers
  4. Consumption Technologies

Of these, IT SMEs will have deep knowledge of only the Consumption Technologies. The rest will be of "Talkonology" grade, well versed only in GUI/API-based management. Only a few curious and ardent nerds will know the inner workings, and their knowledge will be utilized for customers' DR/BCP build and management purposes.

Contact Service Providers

In the Current ecosystem this is led by the likes of Skype for Business Server Editions, Cisco UC Servers and similar IP PBX/UC Servers from multiple UC Vendors. IMO these functionalities will move to cloud-based platforms like Skype for Business Online, Microsoft Teams, Cisco WebEx Teams and similar platforms…. Slowly and steadily these will build tight integration with Content Infrastructures in the backend.

The Contact Services themselves will become simplified with Unified Interfaces providing access to all Channels of Communications for the Users. The back-end however would be significantly more powerful and feature heavy than current UCaaS solutions.

Content Service Providers

In the current ecosystem this is led by Microsoft SharePoint, Exchange and the various knowledge management products in the market, like Salesforce.

As mentioned above these would merge from being separate products to a unified product in the admin front-end. Please do note that in the back end they will continue to be different with each service doing what it does best. This Product will also be handling all the data used by the ML Engines deployed in both back-end and Consumption devices. Governance will be handled by Universal Privacy Policy Managers

Security Policy Managers

To be candid, our current ecosystem does have several wannabes in this product group, but none may be ready to go the whole nine yards.

The products in this group will be universal in the sense that they will work independently of the contact and content platforms. Unlike the contact and content products, this group may not be covered completely by any single platform either…

Consumption Technologies

This will be the most interesting group; it will flourish widely and be where the architects and administrators of the future spend their time.

If you’ve been in this side of business, then these shouldn’t be too new. The only major difference will be that by 2033 these will be normal and significantly less complex… Also, the Legacy pieces may remain in some Laggards’ IT Portfolio….

Finally

I wanted to write a lot, but time is short and hence kept to a minimum… maybe I’ll write a follow-up in future…

To get an idea of how I was doing the extrapolation… You can check out my earlier blogs https://julianfrank.wordpress.com/2014/09/26/the-ucc-infrastructure/ and https://julianfrank.wordpress.com/2014/09/19/thoughts-on-ucc-first-a-recap-of-what-has-been-happening-so-far/ .

Happy Reading!

Theory of Pre-Sales Truthiness

Just a random thought about how I judge how true a sales pitch is, based on timing and type…

  • Slide Only -> 50% true
  • Recorded Demo -> 60% true
  • Canned Demo (Working Software from Sales Laptop) -> 70% true
  • Onsite Demo / POC -> 80% true
  • User Acceptance Handover -> 90% true … maybe it's too late
  • Support Renewal after 1 year -> 100% true … but does it matter

Just my view …

Enabling External Encoder in Microsoft Teams Live Events for Extreme Noobs

This is an exciting time in the team collaboration market that was triggered by Slack and has caused giants like Microsoft and Cisco to build and introduce their own versions of team collaboration solutions. Each one is trying to address this market with supposedly unique experiences. While I'm a big fan of Cisco Webex Teams for its completeness of vision, my favorite happens to be Microsoft Teams. The reason is the rebel stance it has taken against the traditional Office applications by not adhering to their architecture. Instead this team (Microsoft Teams' dev team) has gone with the open source ecosystem to the extent possible and kept the traditional .NET/Visual C++ copy-paste to a minimum. The efficiency benefit shows up in the relatively tiny installation file in the 70-80 MB range that can be installed by the user without admin rights… this is preposterous for any traditional Microsoft developer! I love this open attitude, and for one-year-old software Microsoft Teams is loaded with features and keeps coming up with new ones every month. I would advise you to check their Twitter feed @MicrosoftTeams if you don't believe me… In comparison, both the traditional Microsoft oldies and the other competition are just too slow to update their capabilities… Unlike a traditional admin, I'm a person who likes rapid change, and this fluidity of Microsoft Teams is something I love!

Getting back to the topic, Microsoft recently announced a new feature called Live Events as part of their Meetings capabilities. While regular Meetings are for many-to-many real-time multimedia collaboration…

Live Events is specifically geared for ‘Near Real-time’, ‘Some-to-Many’ Video Collaboration.

Bidirectional capabilities are restricted to text, not voice or video. On the flip side, the audience capacity is greatly increased beyond the 250-participant limit of regular Meetings. Further, the capability to bring in external encoders to make the event rich with studio-like production completely blasts all other competition out of the water!

If this were an audio/video blog you would be hearing a loud bomb sound now

So, great features, but how do they actually perform? The regular Live Events setup and run is pretty simple and well documented; you can check here (https://docs.microsoft.com/en-us/microsoftteams/teams-live-events/what-are-teams-live-events) for more details to get started quickly

Further links there will guide you through how to enable Live Events for all or selected users. Everything can be achieved over the GUI, which is boring, and hence I'm not going to blog about it here…

Now, when the time came to enable the External Encoder in my lab account, I had an interesting nerdish adventure, and I believe this would be of interest to anyone who has just started administering Microsoft Teams and has not faced PowerShell before. If you are an IT Pro who manages Skype for Business Online on a regular basis, then this article may be boring and you may want to stop reading…

For the rest of us, join me on a trip to Teams ‘PowerShell’ Wonderland

 

Getting Started

Typically, I wouldn’t have gone into this as I typically try out Office365 stuff from my desktop which is fully setup. This I tried on my new laptop with zero Office365 activity and that meant starting from scratch… Compared to the rest of Microsoft Teams administration, this one was old school and hence this blog

The first thing you need is a 'Windows' OS, preferably Windows 10 Creators Update or later… if you are on something older, then you may have some other adventures in addition to what I experienced 😉… Do let me know in the comments.

 

Install Skype Online PowerShell Modules

This usually is supposed to be a boring activity…Just head over to https://download.microsoft.com/download/2/0/5/2050B39B-4DA5-48E0-B768-583533B42C3B/SkypeOnlinePowerShell.Exe

Download and install….

Beyond the need for admin rights what could go wrong??? Wrong…

 

….the old world has to catch you by the throat and install its Goodies …

 

So, head back to https://aka.ms/vs/15/release/VC_redist.x64.exe

Download and install… with admin access of course… Now try to install the PowerShell modules again

 

After this you need to ‘Restart’! Yippee!

Power of the Shell be with You

Now, after the reboot, open the most favorite adventure app called Windows PowerShell… I like the ISE as it lets me interactively check documentation on modules and create scripts… You could have the same adventure as this blog with regular PowerShell as well…

Now we need to import the modules we ‘Installed’… Other shells don’t have such needs! Why! The explanation is a bit lengthy …but google it and you should get a good answer

 

We Import the modules using the following command

>Import-Module SkypeOnlineConnector

 

This sadly results in an error!

The reason is that by default the execution policy is set to Restricted, and hence mighty powerful magic like Import-Module is not allowed… So we need to relax it to 'RemoteSigned', which lets locally created scripts run while still requiring scripts downloaded from the internet (like this module) to be signed…

>Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

You should be presented with a confirmation if you have enough strength to wield such mighty powers and if you want to wield it always

I usually do ‘A’ but you would be safer with ‘Y’

 

Now let’s do the Import

>Import-Module SkypeOnlineConnector

We now get something going and a confirmation appears again if all the new magic skills are something you can handle?

I’m a pro so I say ‘A’ …again if you want to be careful, then choose ‘R’

 

Now we are all loaded up…Time to do some magic…

Let’s prepare to do some magic

First, let's authenticate ourselves… Let's get our credentials into a variable called $userCredential

>$userCredential = Get-Credential

cmdlet Get-Credential at command pipeline position 1

Supply values for the following parameters:

 

Awesome… Now create a session to build a bridge to the Ether World

>$sfbSession = New-CsOnlineSession -Credential $userCredential

> Import-PSSession $sfbSession

If you see this… then it means it is working!

 

ModuleType Version Name ExportedCommands

———- ——- —- —————-

Script 1.0 tmp_w5fa1s0p.qns {Clear-CsOnlineTelephoneNumberReservation, ConvertTo-JsonForPSWS, Copy-C…

 

Finally! let’s do the stuff we actually wanted to do

Check what the broadcast policy is set to globally:

>Get-CsTeamsMeetingBroadcastPolicy -identity Global

 

Darn, it asked for credentials again!

 

But something went wrong….

Creating a new session for implicit remoting of “Get-CsTeamsMeetingBroadcastPolicy” command…

New-PSSession : [admin3a.online.lync.com] Connecting to remote server admin3a.online.lync.com failed with the following error

message : The WinRM client cannot process the request. The authentication mechanism requested by the client is not supported by the

server or unencrypted traffic is disabled in the service configuration. Verify the unencrypted traffic setting in the service

configuration or specify one of the authentication mechanisms supported by the server. To use Kerberos, specify the computer name

as the remote destination. Also verify that the client computer and the destination computer are joined to a domain. To use Basic,

specify the computer name as the remote destination, specify Basic authentication and provide user name and password. Possible

authentication mechanisms reported by server: For more information, see the about_Remote_Troubleshooting Help topic.

At C:\Users\<removed>\AppData\Local\Temp\tmp_w5fa1s0p.qns\tmp_w5fa1s0p.qns.psm1:136 char:17

+ & $script:NewPSSession `

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : OpenError: (System.Manageme….RemoteRunspace:RemoteRunspace) [New-PSSession], PSRemotingTransportExce

ption

+ FullyQualifiedErrorId : AccessDenied,PSSessionOpenFailed

Exception calling “GetSteppablePipeline” with “1” argument(s): “No session has been associated with this implicit remoting module.”

At C:\Users\<removed>\AppData\Local\Temp\tmp_w5fa1s0p.qns\tmp_w5fa1s0p.qns.psm1:10423 char:13

+ $steppablePipeline = $scriptCmd.GetSteppablePipeline($myI …

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : NotSpecified: (:) [], ParentContainsErrorRecordException

+ FullyQualifiedErrorId : RuntimeException

Back to the Spell Book

A bit of googling later, it turns out that Import-PSSession only imports the ingredients of our spell, but the darn pentagram is stuck in the cloud! So, let's enter the cloud…

> Enter-PSSession $sfbSession

[admin3a.online.lync.com]: PS>

How do you know you are in the cloud…? You see the command prompt has changed! You may get a different server name… but if you reached here, you are doing good!

Now let’s check the global policy for TeamsMeetingBroadcast…

[admin3a.online.lync.com]: PS> Get-CsTeamsMeetingBroadcastPolicy -identity Global

Description :

AllowBroadcastScheduling : True

AllowBroadcastTranscription : False

BroadcastAttendeeVisibilityMode : EveryoneInCompany

BroadcastRecordingMode : AlwaysEnabled

Key :[{urn:schema:Microsoft.Rtc.Management.Policy.Teams.2017}TeamsMeetingBroadcastPolicy,Tenant{800fdedd-6533-43f5-9557-965b3eca76f6},Global]

ScopeClass : Global

Anchor : Microsoft.Rtc.Management.ScopeFramework.GlobalScopeAnchor

Identity : Global

TypedIdentity : Global

Element : <TeamsMeetingBroadcastPolicy xmlns="urn:schema:Microsoft.Rtc.Management.Policy.Teams.2017"

AllowBroadcastScheduling="true" AllowBroadcastTranscription="false"

BroadcastAttendeeVisibilityMode="EveryoneInCompany" BroadcastRecordingMode="AlwaysEnabled" />

We specifically need AllowBroadcastScheduling to be True… For me it is true, and if you have already fiddled with the GUI policies then it should be true for you too… else please go back to the GUI admin centre and set meeting scheduling to True in the global policy.

 

Are we there yet?

If you’ve come this far then now we are ready to do the magic we came all this way for

[admin3a.online.lync.com]: PS> Grant-CsTeamsMeetingBroadcastPolicy -Identity <type full user name here> -PolicyName $null -Verbose

 

Whoosh!

VERBOSE: Performing the operation “Grant-CsTeamsMeetingBroadcastPolicy” on target “<the username will appear here>”.

VERBOSE: Audit disabled on Cmdlet level

We finally did it!
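For reference, here is the whole spell end to end (the user name is a placeholder; this assumes the SkypeOnlineConnector module is installed and the execution policy is already relaxed as above):

```powershell
# Load the connector and sign in
Import-Module SkypeOnlineConnector
$userCredential = Get-Credential
$sfbSession = New-CsOnlineSession -Credential $userCredential
Import-PSSession $sfbSession

# Hop into the remote session so the cmdlets run in the cloud
Enter-PSSession $sfbSession

# Verify broadcast scheduling is allowed, then grant the policy to the user
Get-CsTeamsMeetingBroadcastPolicy -Identity Global
Grant-CsTeamsMeetingBroadcastPolicy -Identity "user@yourtenant.onmicrosoft.com" -PolicyName $null -Verbose
```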

 

How do I check?

Head back to the Stream portal and click on the Create drop-down… the user for whom you did the magic should be able to see 'Live Event (preview)'.

Now head back to Teams Client or Web Page and create a new Live Event Meeting and the user should be able to see the ‘External Encoder’ enabled…

Awesome! Thanks for being with me on this adventure! Now your user can configure External Encoder in their Live Events!

 

I wish the Microsoft Teams dev team would put in a little more effort, do away with this adventure, and let the administrator enable/disable the external encoder from the GUI itself… IMHO, PowerShell for this is overkill, as only a few people will ever be given this magic gauntlet

What Next? I want more adventure…

Now may be a good time to check out Luca Vitali's article on how to use OBS as an external encoder for your event at https://lucavitali.wordpress.com/2018/08/24/how-to-use-obs-studio-external-encoder-for-live-events/

For other more ‘Not Free’ solutions head on to https://docs.microsoft.com/en-us/stream/live-encoder-setup

All the Best!!

How could applications look like in the future? #2

This is in continuation to my previous blog with the same topic… https://julianfrank.wordpress.com/2018/01/21/how-could-future-apps-look-like/ … So why again??

One of my pet projects was getting complicated beyond control, to the extent that I hardly remember why I wrote the code a particular way :sigh: … Hence I've been looking at converting my tiny yet complicated monolithic application into a microservices application! I looked around and found Istio a few months back and love the level of policed flexibility it provides. This is great for enterprise environments with relatively large teams handling the whole new stack… Sadly, for my project that was way too much overhead and complexity for the relatively few features I was trying to build…

Then I discovered moleculer. This is a nice framework that has all the fundamental features of a microservices ecosystem ticked, and yet is relatively small and uncomplicated for my weekend project needs… You can read about its features at a glance at http://moleculer.services/docs/0.13/ … I moved some services -> got into trouble -> cried out for help -> got support from @icebobcsi and others in its gitter.im/moleculerjs/moleculer channel -> now my app is back on track… Very impressed!

To add to the benefits, the capability to merge and separate services across nodes means that I can code on a system with all services in one node… When deploying to production, I can split the services into separate nodes and make them run in separate container or VM instances… The framework cares nothing about where you host the services, as long as they can reach each other over the network. To keep things simple you can start with service discovery over TCP broadcast! Of course, for large-scale deployments, other transports like NATS, MQTT, Redis etc. would be a better fit. My only quibble is that this is locked to Node.js. I hope that in future other cloud-friendly languages like Go are also supported…
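To make the idea concrete, here is a minimal sketch of a moleculer broker hosting two co-located services (names and numbers are made up; based on the 0.13-era API):

```javascript
// Two tiny services on one broker, discovering each other over the TCP transporter.
const { ServiceBroker } = require("moleculer");

const broker = new ServiceBroker({ transporter: "TCP" });

// A "math" service exposing a single action...
broker.createService({
    name: "math",
    actions: {
        add(ctx) {
            return Number(ctx.params.a) + Number(ctx.params.b);
        }
    }
});

// ...and a "greeter" service that calls it like a local function.
broker.createService({
    name: "greeter",
    actions: {
        async hello(ctx) {
            const sum = await ctx.call("math.add", { a: 2, b: 3 });
            return `Hello! 2 + 3 = ${sum}`;
        }
    }
});

broker.start()
    .then(() => broker.call("greeter.hello"))
    .then(res => broker.logger.info(res));
```

The nice part is that the same two createService calls can later live in separate node processes or containers with a NATS/Redis transporter, and nothing in the calling code changes; that is exactly the merge/split flexibility I mean.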

Back to the topic… My idea of apps being Universally Discover-able is already a possibility Now!

What is left is for the service (which I referred to as the app in my previous blog) to become self-aware and adapt to working conditions as needed…

Just imagine a situation where we got five services as part of our application: A, B, C, D, E

Now imagine the app being bombarded by two users, with one user targeting A while the second uses a mix of services

A self-Aware application would in this situation split itself into multiple nodes, letting the first node focus on user 2 while creating copies to handle the high demand from user 1…

When the load is removed, the services move back to a minimal node instance back to original size….

Currently this is facilitated by K8s with careful engineering from IT. But in future, I believe, the framework will take care of deciding how the service behaves and handle the process of mitosis by itself… Am I just day-dreaming?? I don't know!!

Kubernetes YAML 3-page Bit Paper

Kubernetes YAML has been evolving like forever and still continues to change. However, after some time it starts making sense. The sheer number of specifications can nevertheless be overwhelming for anyone, so I came up with this bit paper that can save countless visits to the kubernetes.io website. It is neither exhaustive nor complete, but it has most of the commonly used specs and their nuances and can be printed on 3 sides.

This has been made for Kubernetes 1.10.

Hope you find this useful

M = Mandatory (must specify), O = Optional
| kind | Deployment | ReplicaSet | StatefulSet |
| --- | --- | --- | --- |
| apiVersion | apps/v1 | apps/v1 | apps/v1 |
| metadata.name | M | M | M |
| metadata.namespace | O | O | O |
| spec.selector | spec.matchLabels / spec.matchExpressions | spec.matchLabels | spec.matchLabels |
| spec.template | M (Pod template) | M (Pod template) | M (Pod template) |
| spec.template.spec.restartPolicy | Always only | Always only | Always only |
| spec.template.metadata.labels | M | M | M |
| spec.replicas | O (1) | O (1) | O (1) |
| spec.strategy.type | O (RollingUpdate), Recreate | | |
| spec.maxSurge | M (strategy == RollingUpdate) | | |
| spec.maxUnavailable | M (strategy == RollingUpdate) | | |
| spec.minReadySeconds | O (0) | | |
| spec.progressDeadlineSeconds | O (> minReadySeconds) | | |
| spec.revisionHistoryLimit | O (10) | | |
| spec.paused | O (False) | | |
| spec.serviceName | | | O |
| spec.volumeClaimTemplates | | | M |

Notes:
- kubectl rollout status deployments
- kubectl rollout history --record | --revision=n
- kubectl rollout undo | --to-revision=n
- kubectl scale deployment --replicas=n
- kubectl autoscale deployment --min= --max= --cpu-percent=
- kubectl rollout pause
- kubectl set resources deployment -c= --limits=cpu=,memory=
- kubectl rollout resume
- Allergic to the kubectl rolling-update command
- You can delete a ReplicaSet without affecting any of its pods, using kubectl delete with --cascade=false
| kind | DaemonSet | HorizontalPodAutoscaler | ReplicationController |
| --- | --- | --- | --- |
| apiVersion | apps/v1 | autoscaling/v1 | v1 |
| metadata.name | M | M | M |
| metadata.namespace | O | O | O |
| spec.selector | spec.matchLabels / spec.matchExpressions | | |
| spec.template | M (Pod template) | | M (Pod template) |
| spec.template.spec.restartPolicy | Always only | | Always only |
| spec.template.metadata.labels | M | | M |
| spec.template.spec.nodeSelector | O | | |
| spec.template.spec.affinity | O | | |
| Respects Taints and Tolerations | Yes | | |
| spec.replicas | | | O (1) |
| spec.scaleTargetRef | | M (ReplicaSet) | |
| spec.minReplicas | | ? | |
| spec.maxReplicas | | ? | |
| spec.targetCPUUtilizationPercentage | | ? | |

Notes:
- Allergic to the kubectl rolling-update command
- You can delete a ReplicationController without affecting any of its pods, using kubectl delete with --cascade=false
| kind | Job | CronJob |
| --- | --- | --- |
| apiVersion | batch/v1 | batch/v1beta1 |
| metadata.name | M | M |
| metadata.namespace | O | O |
| spec.selector | O | |
| spec.template | M (Pod template) | |
| spec.template.spec.restartPolicy | Never or OnFailure only | |
| spec.parallelism | O | |
| spec.completions | O: 1 = single (default), >1 = parallel jobs | |
| spec.backoffLimit | O | |
| spec.activeDeadlineSeconds | O | |
| spec.jobTemplate | | M (Job) |
| spec.startingDeadlineSeconds | | O (large number) |
| spec.concurrencyPolicy | | O: Allow (default), Forbid, Replace |
| spec.suspend | | O (False) |
| spec.successfulJobsHistoryLimit | | O (3) |
| spec.failedJobsHistoryLimit | | O (1) |

Notes:
- For non-parallel jobs: spec.parallelism & spec.completions = 1 (unset)
- Fixed completion count: spec.completions > 1
- Work queue: spec.parallelism > 1
As you must have noted by now, I'm only handling controllers right now… I will do something similar for Services, Pods and other infra components in a later post.

DevOps Environment Architecture – My Thoughts

After writing about my thoughts on how application architecture might look in the future, I have been thinking about how CTOs would want to remodel their DevOps environment to cater to the whole new multi-cloud ecosystem, with completely new jargon flying around… Let me illustrate: Cloud Native / 12-Factor Applications, Multi-Cloud, Hybrid-Cloud, Micro-Segmentation, Containers, ChatOps, NoOps, PaaS, FaaS, Serverless, TDD, BDD, CI/CD, Blue-Green, A/B, Canary… You get the picture, right… All of these were alien terms in the old waterfall model of application development but are now the new reality, and retrofitting the waterfall style of governance onto this ecosystem is a sure recipe for disaster!

So how can we approach this?

I see two dimensions by which we should approach the new estate

  1. The Environmental State Dimension – In this dimension we look from the context of the state of the work item in terms of modern agile Life-Cycle
  2. The Application Life-Cycle State Dimension – From this perspective we see the work item in terms of its user experience impact…

Let’s Explore the State Dimension…

I see four clear states that the code ultimately will go through in a multi-cloud CI/CD environment

Developer Station

  1. This is the environment that the developer uses to write code, perform local tests, branch and sync with other developers' work
  2. This can range from a completely unmanaged BYOD environment to a hyper-secured VDI client
  3. A few options, in increasing order of IT control, are below:
    1. BYOD laptop/desktop with the developer's own tools and environment
    2. IT-provided laptop/desktop/workstation with a mix of IT- and developer-installed tools
    3. Virtual-app-based, IT-supplied environment on the developer's device
    4. VDI client accessible from the developer's device

Test Zone

  1. This would be the zone where the code gets committed for integration tests and compliance tests against the bigger SOA / microservices environment
  2. This would typically be cloud based to minimize cost, as the load varies significantly with developers' working slots and with commit levels driven by application change loads
  3. Automation is inevitable, and manual intervention is not advisable considering the maturity of the test automation tools available in the market

Staging Zone

  1. This zone would be a small scale replica of the Production zone in terms of Multi-Cloud Architecture, Storage distribution, Networking and Security
  2. The Aim would be to Test the Application in terms of Performance, UX and Resilience on multiple Cloud Failure Scenarios. 100% Automation is Possible and hence manual intervention should be avoided
  3. Observability assurance would be another important goalpost in this environment… though I personally have doubts about the maturity of automation here… Unless the developer adheres to corporate standards, observability would not be possible for the given code; automating this is doubtful and IMO may need manual intervention in certain scenarios…

Production Zone

  1. I don’t think this zone needs any introduction
  2. This is where the whole ITIL/IT4IT comes to play from a governance and management perspective
  3. This also would be the zone where multiple clouds thrive in an interconnected, secured and 100% IT Governed manner

 

Now to the other dimension…

Application Life-cycle

I have already talked about this in a previous blog (Digital {{Dev}} Lifecycle) …

But over time I believe more is needed in an ever-changing, multi-modal enterprise environment… That I leave for the next post… Till then, bye!

Onedrive for Business (16) Greedy Cache Problem

For the past week I’ve been struggling with my c: filling up quickly despite me having cleared the temporary files and even moved my ost file to a different drive… I noticed that Onedrive4B was refusing to complete its sync and then it occurred to me that probably it was the culprit… Every time i freed up some space I found the pending file count to reduce and then stall while the OS started complaining of low space in c:

After some googling and a few hacks I found the main culprit: the OneDrive for Business cache cannot be moved to any location other than its pre-designated one! That capability IMO was there in pre-14 versions but is now not supported!

Anyway, I went to the C:\Users\<username>\AppData\Local\Microsoft\Office\16.0 folder and to my horror saw multiple OfficeFileCachexxx.old folders, each occupying GBs of space! I deleted all the *.old folders and my C: drive regained the plenty of spare GBs it started off with! Problem partially solved… OneDrive now syncs, maintains only one copy of the cache and leaves the rest of the space on C: alone… But why doesn't Microsoft allow the cache to be moved to a more spacious location? I wonder!

Rendezvous with Picroft

This weekend I decided to try out Picroft, which happens to be the simplified, software-only version of the Mycroft Project that is ready to run on any Raspberry Pi 3 hardware… Great! So I read through the instructions provided on their website… There were two deviations that I wanted to make:

  1. Use HDMI audio output instead of the hard-coded analog stereo output on the RasPi 3
  2. Use a microphone outside the list of mics mentioned on their site… the microphone in my ancient Microsoft LifeCam VX-6000, to be precise!

So I installed and connected per instruction… The microphone got detected without any problem and Mycroft learnt ‘my’ voice so well that it wouldn’t respond to my son!

Unfortunately the audio output is hard-coded to the on-board analog stereo jack, which I did test and which was working well, but I wanted output on my HDMI display, which has much better acoustics! A bit of googling and I found the solution on this site: https://mycroft.ai/documentation/picroft/picroft-audio/#how-to-output-audio-via-hdmi … It's simple… Just press

'Control-C'

to get out of the debug screen and then open the auto_run.sh file… Find the line that says sudo amixer cset numid=3 "1" … and change it to

sudo amixer cset numid=3 "2"

…that's it… back on the CLI, type

exit

and Picroft will reload with audio output on the display instead of the audio jack! Yippee!

Unfortunately it looks like the software is still under heavy development on the Picroft side… I found the response time to be slower than AVS, and the accuracy is good but needs improvement…

Next I need to find if it can work in a pure LAN environment without Internet! But … Weekend almost over … So that’s for some time later… Peace Out till then

An Approach to Cognify Enterprise Applications

I recently witnessed the setup of my brand new Windows 10 laptop and was surprised when Cortana guided the installation with voice recognition! This was happening before the OS was even on the laptop! … I wouldn't have imagined this 5 years ago, and I set off imagining how the experience would have been if the setup designer had decided to completely remove any mouse/keyboard inputs. Further, what if Cortana had matured enough to converse with me naturally, without pre-coded questions being asked in sequence! Instead of saying yes or no, I dabble on about how good the laptop looks and Cortana responds with affirmation or otherwise, while gently getting me to answer the key questions that need answering before the full-blown OS installation can start… It sounds cool, but in future releases this may be the reality!

Back to the topic of enterprise applications: conversational experiences are continuously being developed and improved, with the bots learning how to converse from both pre-built flows and historical conversation logs. In the enterprise context it now becomes important that CIOs and CTOs start thinking about how their business applications can be used on these conversational platforms. Enterprise leaders need to think carefully about how this gets architected and deployed so that it does not become something mechanical and irritating like traditional IVR solutions. To succeed in this endeavor we need to look not just at the new cognitive platform but also at the services expected to be enabled on the bot, and keep the experience exciting so it does not meet the same fate as IVR.

I see the following SUPER aspects of the solution that should be scrutinised carefully before project initiation:

  • Service – Look at where the service is currently performed and check the viability of integrating it with the cognitive platform
  • User Experience – Look at how complex the service is to execute over automated interfaces like phone, virtual assistants and chat UI
  • Peripherals – Look at the peripherals through which the services are currently provided and check whether they can be reused or need replacement. Oversight here could lead to urgent and expensive replacements later and decreased user adoption.
  • Environment – Different services are performed in different work conditions, and careful consideration should be given so that certain services are not offered in certain conditions. For example, speaking out a bank balance on a loud personal assistant could embarrass users and lead to privacy concerns of a different nature.
  • Reliability – Here the cognitive platform itself should be judged for fragility, not just in terms of uptime but in how it handles edge cases. This is where the continuous unsupervised learning capability needs to be looked at very carefully, to ensure that the platform builds up cognition over time.

Here is an approach of how Enterprise Leaders can start moving their workforce to embrace Cognitive Applications

Step 1) Service Audit – Perform an Audit of Services Being performed and the related applications.

Step 2) Cognitive Index Evaluation – Use the SUPER aspects to evaluate the cognification potential of each service.

Step 3) Build Road Map – Categorise the Services in terms of ease of introduction and ease of development and batch them in phases.

Step 4) Identify Rollout Strategy – Based on complexity and number of possible solutions and channels under consideration, one or more POCs may need to be initiated followed by bigger rollouts. In case of multiplicity of Business Applications needing to be integrated, then Business Abstraction Layer Solutions could be brought in to significantly boost Integration time.

Step 5) Monitor and Manage – While the cognitive solution reduces service tickets to IT, injecting capabilities like 'undirected engagement' requires monitoring and management of conversations in terms of ethics, privacy and corporate diversity policy.

What do you think?

How Could Future Apps Look Like?

I’ve been looking at entries in my diary I had made during college days and some interesting ideas popped up in terms of how future enterprise apps could look like. But first lets start from where we are right now

The Past

It is believed in common parlance that all enterprise apps are monoliths. This however is not true, and many orgs that I happened to work with at the start of my career (early 2000s) had already split their software stacks into layers and modules, irrespective of whether the interconnection mechanism was SOAP or just plain old file transfer! However, the individual application services were still carefully managed on dedicated servers.

The Present

Virtualisation, fuelled by the boom in web standards, has now made Service-Oriented Architecture the norm rather than the exception. Services are still maintained in dedicated environments but can be moved around (relatively) fast with minimal downtime. Cloud and PaaS have further made it relatively easy to distribute services across geographies and service providers. Serverless is the latest buzzword, which works great for the IoT boom and the unikernel infrastructure architectures that are slowly but steadily being implemented by service providers.

The Future (IMHO)

I believe that the next trend will be to make the services themselves self-aware, universally discoverable and self-portable! Let me explain these one by one:

Self-Aware

The Applications will be built to know their life and need in the system. They would also have security systems in place to let them realise if they are operating in the right location or not -AND- if they are servicing the correct service/humans. They would also have a distributed block-chain inspired credit system that will be used to decide if they need to remain active or self-destruct!

Universally Discover-able

Security standards are already being redesigned to be universal instead of perimeter-limited. The same will extend to making the services themselves discoverable, much in the same way we humans are slowly moving to national ID card systems. It goes without saying that they would also have some mechanism to disappear and replicate without causing confusion in the system! Bottom line: if I create a software service and it needs another service, it would be able to discover it and set up a contract to perform the service.

Self Portable

My service would have compute credits that would be shared with the services I call to perform my work! Once the credits are over, my service would self-destruct! But during its lifetime it would move across "certified" cloud domains, making itself available where necessary and leaving replicas to ensure distributed service.

These are not new ideas really, just a bad actor's tricks being used for good purposes… I'm referring to the lightweight viruses that for decades have been travelling and replicating across computers, including mine two decades ago, wiping out my programs… Aaaaw, how I hate viruses!

Anyway, they gave me some ideas to write about this week… Let's see if these come true!

Future of Multi-Cloud using CLOAKS

It's Pongal 2018! It feels good after trying out a veshti… that too the non-Velcro type. Getting a few trees planted was a bonus. Sadly both these ventures are in their infancy, which took my thoughts back to the IT world and the multi-cloud pitch, which is now slowly showing signs of maturing into a practical architecture.

I’ve always prophesied that the future of cloud is Peer to Peer Multi-Cloud and has always been an aspiration ever since the ‘cloud’ got popularized. The first use-case that came to my mind was the capability to port application/services across different service providers, geographies and partners. However IMO we should looks at more parameters to truly evaluate how things ave been changing and what does the future hold for us! Here is an CLOAKS based attempt:

  1. Competition
    • Number of vendors that support the exact same architecture stack
  2. Location
    • Number of locations the infrastructure architecture can accommodate for the provided applications
  3. Openness
    • The level of openness of the individual specifications
  4. Applications
    • Readiness of applications to make full use of the multi-cloud architecture
  5. Kinematics
    • Appreciation of the heavy-duty impact of CAP-theorem implications on application design in a multi-cloud scenario
  6. Security
    • Maturity of security for in-scope workloads: data at rest, in motion and in compute, plus identity, policy and data-leak control

It's been at least 5 years, but maturity across all these capabilities has not been demonstrated even close to my expectations. However, there is good news: things are changing in the right direction, and I believe it is interesting to look at this evolving in different ages, as below:

Age 1) Service Provider defined

This is something that became practical with AWS and Azure IaaS workloads providing network peering with on-premise workloads. Further multi-Region Networking is provided to handle movement of workloads within the same provider 😉

Age 2) Platform Vendor Defined

We are currently in this age, with vendor-provided solutions that let enterprises scale their applications between their on-premise data center and the cloud. The VMware Cloud solutions for AWS and Bluemix are a step in the right direction but are still restricted to, and supported only between, the same platform. There is still a lot to happen in this space this year, and only time will tell what other vendors have in store!

Age 3) Community Defined

This, I believe, is the future: it will be built by communities of like-minded technocrats and disrupted by some new player who will force the cloud biggies to tear down the walls they have built to discourage interoperability between vendors and clouds.

Migrate for Anthos

Want to try migrating your VMs to Google Anthos? Check out Migrate for Anthos by Sreenivas M.

Sreenivas Makam's Blog

Anthos is a hybrid/multi-cloud platform from GCP. Anthos allows customers to build their application once and run in GCP or in any other private or public cloud. Anthos unifies the control, management and data plane when running a container based application across on-premise and multiple clouds. Anthos was launched in last year’s NEXT18 conference and made generally available recently. VMWare integration is available now, integration with other clouds is planned in the roadmap. 1 of the components of Anthos is called “Migrate for Anthos” which allows direct migration of VM into Containers running on GKE. This blog will focus on “Migrate for Anthos”. I will cover the need for “Migrate for Anthos”, platform architecture and move a simple application from GCP VM into a GKE container. Please note that “Migrate for Anthos” is in BETA now and it is not ready for production.

Need for “Migrate for Anthos”

Modern application…
