So what’s going on outside the Core Contact Center

I happened to be watching a few documentaries on how Einstein and other key scientists made the discoveries for which they are now known… I looked back at my own work, saw a plethora of technologies, buzzwords and jargon thrown around everywhere, and started wondering if it was possible to bring all of them under one logical roof. What I noticed was that the core contact center gets a lot of attention, but the surrounding soft processes and technologies outside the core have been ignored. Let me first explain what I mean by the core:

It has three primary components:

  1. The Connect Services – The focus here is on expanding the ways by which the Customer is able to connect with the contact center. Nowadays this extends to finding ways for agents, too, to deliver their service from channels other than the rather traditional phone + PC combination.
  2. Self Service Services – These focus on diverting the interaction from human to machine. The motivations can be many, but bottom-line it's a machine doing the human's job. The rant about IVR Hell remains the most popular slogan in every CC salesperson's narration, continuing on to the rise of the bots in the service of humanity – with the best bots, naturally, coming from their garages.
  3. Automation Services – These focus on ensuring that the Customer gets serviced by the right agent or bot, based on information collected during the interaction or from past history.

All of these have been fundamental to any contact center solution for more than the past two decades, and hence I never got myself to blog about the various transformations happening there. What could happen outside the core, however, is never discussed, and that is the key subject of this blog.

Let’s visualize this Core Thingy

So the experience gained by the Customer when interacting with the Connect Services we call “Customer Experience” aka “CX”. Similarly, the experience the core gives to agents becomes the “Agent Experience” aka “AX” … Right?

Wrong… Let’s see why…

Let’s focus on the Customer and see what actually drives their CX… I hear your mind’s voice… you just thought of the new term “Omni-Channel”… and something else is coming up… “Customer Engagement”… Ah, now I hear something else… OK, stop… I’m here to tell you my opinion… not yours!

In my opinion, Customer Experience is governed by three key activities:

  1. Engineering – This is where the Engineers tirelessly build the core and associated solutions block by block. After crossing the mindless desert of bureaucracy, the storm of politics and the whirlpools of bugs, the Engineers bring solutions to production. This used to mean lifelong projects in the SDLC era, but DevOps has cut the journey short, so engineers now cross many smaller obstacles instead of the few huge ones of before…
  2. Experience – Once the solutions are brought to production, the customer gets to use them, and hence you get “Customer Experience”. Thankfully there are tools that can quantitatively measure these customer experiences using DataOps. This used to be a laborious manual task, but nowadays it has become automatic to a large extent, letting the Data Engineers focus on insights.
  3. Insight – Insight is the activity typically performed by Supervisors, but business managers and marketing managers are now slowly getting into these tools as well, to gain insights that better their side of the business. These insights result in stories, which in turn fuel the next round of Engineering.

Now let’s visualize what I’m talking about …

Now, in traditional environments this whole cycle would happen every month at best, but the way things are moving in the Digital Economy, it has shifted to an event-based model, thanks to AI…

On a similar note, the same cycle runs on the Agent side as well, contributing to and improving the “Agent Experience” and “Agent Engagement”.

So what else could be happening here… All the Engineering activity happens mostly on the CC Platform, and the data about Customer and Agent experiences and interaction histories is stored in Data Stores.

So let’s bring them all together:

So let’s look at this new box called Platform that we just added… It’s basically the core of the contact center exposed to Developers and Infrastructure Engineers.

The AppOps Team would use Observability Tools to understand the Services’ performance and bottlenecks.

The AIOps team, on the other hand, uses Experience Monitoring and Uptime Monitoring Solutions, backed by Automated Remediation Solutions.

For the Developer there is the DevOps Stack, with the Code Repository to store their configurations and code. Continuous Integration ensures that release-ready software/configuration gets tested, both functionally and for security vulnerabilities, before landing on the platform.
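To make that concrete, a CI gate for such a platform could be as small as the sketch below – PowerShell, with Pester standing in for the functional tests and a hypothetical security-scan.ps1 wrapper standing in for whatever scanner your stack actually uses:

# ci-gate.ps1 – a minimal CI gate sketch; tool choices are illustrative
$results = Invoke-Pester -Path .\tests -PassThru          # run functional tests, keep the result object
if ($results.FailedCount -gt 0) { Write-Error 'Functional tests failed'; exit 1 }

& .\tools\security-scan.ps1 -Path .\src                   # hypothetical wrapper around your SAST/dependency scanner
if ($LASTEXITCODE -ne 0) { Write-Error 'Security vulnerabilities found'; exit 1 }

Write-Output 'Gate passed – promoting build to the platform'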

So this is how all of this would look:

So the Platform has a lot of real-time and historical data in the Data Store… Let’s see what the Data Folks do with it…

So if you have a truly data-engineering-minded org, then the Data Engineers and Scientists would like to have their own layer of lakes to hold the processed data in a usable form.

Most Orgs would use prebuilt Analytics solutions to serve business metrics to Business Managers and Contact Metrics to Supervisors…

There could and should be more outside the core that typically gets ignored in most orgs… If you know of anything I missed, please do let me know!

Enabling External Encoder in Microsoft Teams Live Events for Extreme Noobs

These are exciting times in the Teams Collaboration market, triggered by Slack and now prompting giants like Microsoft and Cisco to build and introduce their own versions of team collaboration solutions. Each one is trying to address this market with supposedly unique experiences. While I’m a big fan of Cisco Webex Teams for its completeness of vision, my favorite happens to be Microsoft Teams. The reason is the rebel stance it has taken against the traditional Office applications by not adhering to their architecture. Instead, this team (Microsoft Teams’ dev team) has gone with the open-source ecosystem to the extent possible, keeping the traditional .NET/Visual C++ copy-paste to a minimum. The efficiency benefits show up in the relatively tiny installation file, in the 70-80 MB range, which can be installed by the user without admin rights… preposterous for any traditional Microsoft developer! I love this open attitude, and for one-year-old software, Microsoft Teams is loaded with features and keeps coming up with new ones every month. I would advise you to check their Twitter feed @MicrosoftTeams if you don’t believe me… In comparison, both the traditional Microsoft oldies and the other competition are just too slow in updating their capabilities… Unlike a traditional admin, I’m a person who likes rapid change, and this fluidity of Microsoft Teams is something I love!

Getting back to the topic: Microsoft recently announced a new feature called Live Events as part of their Meetings capabilities. While regular Meetings are for many-to-many, real-time, multimedia collaboration…

Live Events is specifically geared for ‘Near Real-time’, ‘Some-to-Many’ Video Collaboration.

Bidirectional capabilities are restricted to text – not voice or video. On the flip side, the audience capacity is greatly increased beyond the 250-participant limit of regular Meetings. Further, the capability to bring in external encoders, making the event rich with studio-like production, completely blasts all other competition out of the water!

If this were an audio/video blog, you would be hearing a loud bomb sound now.

So, great features – but how do you actually get them running? The regular Live Events setup is pretty simple and well documented; you can check here (https://docs.microsoft.com/en-us/microsoftteams/teams-live-events/what-are-teams-live-events) for more details to get started quickly.

Further links there will guide you through enabling Live Events for all or selected users. Everything can be achieved over the GUI – which is boring, and hence I’m not going to blog about it here…

Now, when the time came to enable the External Encoder in my lab account, I had an interesting nerdish adventure, and I believe it would be of interest to anyone who has just started administering Microsoft Teams and has not faced PowerShell before. If you are an IT Pro who manages Skype for Business Online on a regular basis, then this article may be boring and you may want to stop reading…

For the rest of us, join me on a trip to Teams ‘PowerShell’ Wonderland

 

Getting Started

Typically, I wouldn’t have gone into this, as I usually try out Office365 stuff from my desktop, which is fully set up. This time I tried on my new laptop with zero Office365 activity, and that meant starting from scratch… Compared to the rest of Microsoft Teams administration, this one was old school – and hence this blog.

The first thing you need is a ‘Windows’ OS, preferably Windows 10 Creators Update or later… if you are on something older, then you may have some other adventure in addition to what I experienced 😉… Do let me know in the comments.

 

Install Skype Online PowerShell Modules

This is usually supposed to be a boring activity… just head over to https://download.microsoft.com/download/2/0/5/2050B39B-4DA5-48E0-B768-583533B42C3B/SkypeOnlinePowerShell.Exe

Download and install….

Beyond the need for admin rights, nothing could possibly go wrong??? Wrong…

 

…the old world catches you by the throat and makes you install its goodies first…

 

So, head over to https://aka.ms/vs/15/release/VC_redist.x64.exe

Download and install… with admin access, of course… Now try to install the PowerShell modules again.
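(If you prefer to skip the clicky-clicky, the redistributable also accepts the standard silent switches from an elevated prompt:

>VC_redist.x64.exe /install /quiet /norestart

…but then you would miss part of the adventure.)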

 

After this you need to ‘Restart’! Yippee!

Power of the Shell be with You

Now, after the reboot, open that most favorite adventure app called Windows PowerShell… I like the ISE, as it lets me interactively check documentation on modules and create scripts… but you could have the same adventure as this blog with the regular PowerShell console as well…

Now we need to import the modules we ‘installed’… Other shells don’t have such needs! Why? The explanation is a bit lengthy… but google it and you should get a good answer.

 

We Import the modules using the following command

>Import-Module SkypeOnlineConnector

 

This sadly results in an error!

The reason is that, by default, the execution policy is set to Restricted, and hence mighty powerful magic like Import-Module is not allowed… So we need to relax it – not all the way to Unrestricted, but to ‘RemoteSigned’, which lets our local scripts run while still insisting that anything downloaded from the internet be signed…

>Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

You should be presented with a confirmation, asking whether you have enough strength to wield such mighty powers, and whether you want to wield them always.

I usually do ‘A’ but you would be safer with ‘Y’

 

Now let’s do the Import

>Import-Module SkypeOnlineConnector

We now get something going, and a confirmation appears again, asking whether all these new magic skills are something you can handle.

I’m a pro so I say ‘A’ …again if you want to be careful, then choose ‘R’

 

Now we are all loaded up…Time to do some magic…

Let’s prepare to do some magic

First, let’s authenticate ourselves… let’s get our credentials into a variable called $userCredential

>$userCredential = Get-Credential

cmdlet Get-Credential at command pipeline position 1

Supply values for the following parameters:

 

Awesome… Now create a session to build a bridge to the Ether World

>$sfbSession = New-CsOnlineSession -Credential $userCredential

> Import-PSSession $sfbSession

If you see this… then it means it is working!

 

ModuleType Version Name ExportedCommands

———- ——- —- —————-

Script 1.0 tmp_w5fa1s0p.qns {Clear-CsOnlineTelephoneNumberReservation, ConvertTo-JsonForPSWS, Copy-C…

 

Finally! Let’s do the stuff we actually wanted to do.

Check what Broadcast Policy is set globally:

>Get-CsTeamsMeetingBroadcastPolicy -identity Global

 

Darn, it asked for credentials again!

 

But something went wrong….

Creating a new session for implicit remoting of “Get-CsTeamsMeetingBroadcastPolicy” command…

New-PSSession : [admin3a.online.lync.com] Connecting to remote server admin3a.online.lync.com failed with the following error

message : The WinRM client cannot process the request. The authentication mechanism requested by the client is not supported by the

server or unencrypted traffic is disabled in the service configuration. Verify the unencrypted traffic setting in the service

configuration or specify one of the authentication mechanisms supported by the server. To use Kerberos, specify the computer name

as the remote destination. Also verify that the client computer and the destination computer are joined to a domain. To use Basic,

specify the computer name as the remote destination, specify Basic authentication and provide user name and password. Possible

authentication mechanisms reported by server: For more information, see the about_Remote_Troubleshooting Help topic.

At C:\Users\<removed>\AppData\Local\Temp\tmp_w5fa1s0p.qns\tmp_w5fa1s0p.qns.psm1:136 char:17

+ & $script:NewPSSession `

+ ~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : OpenError: (System.Manageme….RemoteRunspace:RemoteRunspace) [New-PSSession], PSRemotingTransportException

+ FullyQualifiedErrorId : AccessDenied,PSSessionOpenFailed

Exception calling “GetSteppablePipeline” with “1” argument(s): “No session has been associated with this implicit remoting module.”

At C:\Users\<removed>\AppData\Local\Temp\tmp_w5fa1s0p.qns\tmp_w5fa1s0p.qns.psm1:10423 char:13

+ $steppablePipeline = $scriptCmd.GetSteppablePipeline($myI …

+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+ CategoryInfo : NotSpecified: (:) [], ParentContainsErrorRecordException

+ FullyQualifiedErrorId : RuntimeException

Back to the Spell Book

A bit of googling later, it turns out that Import-PSSession only imports the ingredients of our spell, but the darn pentagram is stuck in the cloud! So, let’s enter the cloud…

> Enter-PSSession $sfbSession

[admin3a.online.lync.com]: PS>

How do you know you are in the cloud…? You see the command prompt has changed! You may get a different server name… but if you’ve reached here, you are doing good!

Now let’s check the global policy for TeamsMeetingBroadcast…

[admin3a.online.lync.com]: PS> Get-CsTeamsMeetingBroadcastPolicy -identity Global

Description :

AllowBroadcastScheduling : True

AllowBroadcastTranscription : False

BroadcastAttendeeVisibilityMode : EveryoneInCompany

BroadcastRecordingMode : AlwaysEnabled

Key :[{urn:schema:Microsoft.Rtc.Management.Policy.Teams.2017}TeamsMeetingBroadcastPolicy,Tenant{800fdedd-6533-43f5-9557-965b3eca76f6},Global]

ScopeClass : Global

Anchor : Microsoft.Rtc.Management.ScopeFramework.GlobalScopeAnchor

Identity : Global

TypedIdentity : Global

Element : <TeamsMeetingBroadcastPolicy xmlns="urn:schema:Microsoft.Rtc.Management.Policy.Teams.2017"

AllowBroadcastScheduling="true" AllowBroadcastTranscription="false"

BroadcastAttendeeVisibilityMode="EveryoneInCompany" BroadcastRecordingMode="AlwaysEnabled" />

We specifically need the status of AllowBroadcastScheduling to be True… For me it is, and if you have already fiddled with the GUI policies, then this should be true for you as well… else please go back to the GUI Admin Centre and set Meeting scheduling to True in the Global Policy.
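Alternatively, since we are already inside the shell, the same switch can be flipped with the matching Set- cmdlet – I took the GUI route myself, so treat this as an untested sketch:

[admin3a.online.lync.com]: PS> Set-CsTeamsMeetingBroadcastPolicy -Identity Global -AllowBroadcastScheduling $true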

 

Are we there yet?

If you’ve come this far, then we are now ready to do the magic we came all this way for:

[admin3a.online.lync.com]: PS> Grant-CsTeamsMeetingBroadcastPolicy -Identity <type full user name here> -PolicyName $null -Verbose

 

Whoosh!

VERBOSE: Performing the operation “Grant-CsTeamsMeetingBroadcastPolicy” on target “<the username will appear here>”.

VERBOSE: Audit disabled on Cmdlet level

We finally did it!

 

How do I check?

Head back to the Stream portal and click on the Create drop-down… the user for whom you did the magic should now be able to see ‘Live Event (preview)’.

Now head back to the Teams client or web page and create a new Live Event meeting; the user should be able to see ‘External Encoder’ enabled…

Awesome! Thanks for being with me on this adventure! Now your user can configure External Encoder in their Live Events!
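For reference, here is the whole spell compressed into one scroll – a minimal sketch that assumes the SkypeOnlinePowerShell module is already installed and your account has the required admin role:

# one-time setup: allow locally created scripts to run
Set-ExecutionPolicy RemoteSigned -Scope CurrentUser
Import-Module SkypeOnlineConnector

# build the bridge to the Ether World
$userCredential = Get-Credential
$sfbSession = New-CsOnlineSession -Credential $userCredential
Import-PSSession $sfbSession
Enter-PSSession $sfbSession          # interactive hop onto the Office365 server

# verify scheduling is allowed, then grant the policy to your user
Get-CsTeamsMeetingBroadcastPolicy -Identity Global
Grant-CsTeamsMeetingBroadcastPolicy -Identity <full user name> -PolicyName $null -Verbose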

 

I wish the Microsoft Teams dev team would put in a little more effort and do away with this adventure, letting the administrator enable/disable the External Encoder from the GUI itself… IMHO, PowerShell for this is overkill, as only a few people will ever be given this magic gauntlet.

What Next? I want more adventure…

Now may be a good time to check out Luca Vitali’s article on how to use OBS as an external encoder for your event at https://lucavitali.wordpress.com/2018/08/24/how-to-use-obs-studio-external-encoder-for-live-events/

For other, more ‘not free’ solutions, head over to https://docs.microsoft.com/en-us/stream/live-encoder-setup

All the Best!!

DevOps Environment Architecture – My Thoughts

After writing about my thoughts on how application architecture might look in the future, I have now been thinking about how CTOs would want to remodel their DevOps environment to cater to the whole new multi-cloud ecosystem, with completely new jargon flying around… Lemme illustrate: Cloud Native / 12 Factor Applications, Multi-Cloud, Hybrid-Cloud, Micro-Segmentation, Containers, ChatOps, NoOps, PaaS, FaaS, Serverless, TDD, BDD, CI/CD, Blue-Green, A/B, Canary… You get the picture, right? All of these were alien terms in the old Waterfall model of application development but are now the new reality – and retrofitting the waterfall style of governance onto this ecosystem is a sure recipe for disaster!

So how can we approach this?

I see two dimensions by which we should approach the new estate

  1. The Environmental State Dimension – In this dimension we look from the context of the state of the work item in terms of the modern agile life-cycle.
  2. The Application Life-Cycle State Dimension – In this dimension we see the work item in terms of its impact on user experience.

Let’s Explore the State Dimension…

I see four clear states that code will ultimately pass through in a multi-cloud CI/CD environment.

Developer Station

  1. This is the environment the developer uses to write code, perform local tests, branch, and sync with other developers’ work
  2. This can range from a completely unmanaged BYOD environment to a hyper-secured VDI client
  3. A few options, in increasing order of IT control, that I can think of are below:
    1. BYOD laptop/desktop with the developer’s own tools and environment
    2. IT-provided laptop/desktop/workstation with a mix of IT- and developer-installed tools
    3. Virtual-app-based, IT-supplied environment on the developer’s device
    4. VDI client accessible from the developer’s device

Test Zone

  1. This would be the zone where code gets committed for integration tests and compliance tests against the bigger SOA / microservices environment
  2. This would typically be cloud based to minimize cost, as the load would vary significantly with developers’ working hours and with commit volumes driven by application change load
  3. Automation is inevitable, and manual intervention is not advisable considering the maturity of test automation tools available in the market (a rough sketch of an ephemeral, cloud-based test run follows this list)
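To make the cost point concrete, the whole test zone can be ephemeral – spun up per commit and torn down right after the run. A rough sketch using the Az PowerShell module, where the resource names and the BUILD_ID variable are hypothetical:

# ephemeral-test-zone.ps1 – spin up, test, tear down (sketch)
Connect-AzAccount
New-AzResourceGroup -Name "rg-testzone-$env:BUILD_ID" -Location 'eastus'   # one environment per build
# …deploy the services under test here, e.g. from your ARM/Terraform templates…
Invoke-Pester -Path .\integration-tests                                    # integration + compliance tests
Remove-AzResourceGroup -Name "rg-testzone-$env:BUILD_ID" -Force            # stop paying the moment tests finish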

Staging Zone

  1. This zone would be a small-scale replica of the Production zone in terms of multi-cloud architecture, storage distribution, networking and security
  2. The aim would be to test the application in terms of performance, UX and resilience across multiple cloud-failure scenarios. 100% automation is possible here, and hence manual intervention should be avoided
  3. Observability assurance would be another important goalpost in this environment… though I personally have doubts about the maturity of automation here: unless the developer adheres to corporate standards, observability would not be possible for the given code, so automating this check is doubtful and IMO may need manual intervention in certain scenarios…

Production Zone

  1. I don’t think this zone needs any introduction
  2. This is where the whole ITIL/IT4IT stack comes into play from a governance and management perspective
  3. This also would be the zone where multiple clouds thrive in an interconnected, secured and 100% IT Governed manner

 

Now to the other dimension…

Application Life-cycle

I have already talked about this in a previous blog (Digital {{Dev}} Lifecycle) …

But over time I believe more is needed in an ever-changing, multi-modal enterprise environment… That, though, I leave for the next post… Till then, bye!

An Approach to Cognify Enterprise Applications

I recently witnessed the setup of my brand-new Windows 10 laptop and was surprised when Cortana guided the installation with voice recognition! This was happening before the OS was even on the laptop! … I wouldn’t have imagined this 5 years ago, and I set off imagining how the experience would have been if the setup designer had decided to completely remove mouse/keyboard inputs. Further, what if Cortana had matured enough to converse with me naturally, without pre-coded questions being asked in sequence! Instead of saying yes or no, I’d ramble about how good the laptop looks, and Cortana would respond with affirmation or otherwise, while gently getting me to answer the key questions that must be settled before the full-blown OS installation can start… It sounds far-fetched, but in future releases this may be the reality!

Back to the topic of enterprise applications: conversational experiences are continuously being developed and improved upon, with bots learning how to converse from both pre-built flows and historical conversation logs. In the enterprise context it now becomes important that CIOs & CTOs start thinking about how their business applications can be used on these conversational platforms. Enterprise leaders need to think carefully about how this gets architected and deployed so that it does not become something mechanical and irritating like traditional IVR solutions. To succeed in this endeavor, we need to look not just at the new cognitive platform but also at the services expected to be enabled on the bot, and keep the experience engaging so it does not meet the same fate as IVR.

I see the following SUPER aspects of the solution that should be scrutinised carefully before project initiation:

  • Service – Look at where the service is currently performed and check the viability of integrating it with the cognitive platform.
  • User Experience – Look at how complex the service is to execute over automated interfaces like phone, virtual assistants and chat UIs.
  • Peripherals – Look at the peripherals through which the services are currently provided and check whether they can be reused or would need replacement. An oversight here could lead to urgent and expensive replacements later, and to decreased user adoption.
  • Environment – Different services are performed in different work conditions, and careful consideration should be given to not offering certain services in certain conditions. For example, a voice assistant loudly speaking out a bank balance could embarrass users and raise privacy concerns of a different nature.
  • Reliability – Here the cognitive platform itself should be judged for fragility, not just in terms of uptime but in terms of handling edge cases. This is where the continuous unsupervised learning capability needs to be looked at very carefully, to ensure the platform builds up cognition over time.

Here is an approach of how Enterprise Leaders can start moving their workforce to embrace Cognitive Applications

Step 1) Service Audit – Perform an audit of the services being performed and the related applications.

Step 2) Cognitive Index Evaluation – Use the SUPER aspects to evaluate the cognification potential of each service (a toy scoring sketch follows Step 5).

Step 3) Build Road Map – Categorise the Services in terms of ease of introduction and ease of development and batch them in phases.

Step 4) Identify Rollout Strategy – Based on complexity and on the number of candidate solutions and channels under consideration, one or more POCs may need to be initiated, followed by bigger rollouts. Where a multiplicity of business applications needs to be integrated, Business Abstraction Layer solutions could be brought in to significantly cut integration time.

Step 5) Monitor and Manage – While the cognitive solution reduces service tickets to IT, injecting capabilities like ‘Undirected Engagement’ could create the need to monitor and manage conversations for ethics, privacy and corporate diversity policy.
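To illustrate Step 2, here is a toy scoring sketch – the services, scores and threshold below are entirely made up, and real numbers would come from your Step 1 audit:

# cognitive-index.ps1 – hypothetical SUPER scoring, 1-5 per aspect
$services = @(
    @{ Name = 'Balance Enquiry';  Service = 4; UX = 5; Peripherals = 4; Environment = 2; Reliability = 4 },
    @{ Name = 'Loan Application'; Service = 2; UX = 2; Peripherals = 3; Environment = 4; Reliability = 3 }
)
foreach ($s in $services) {
    # equal weights for simplicity; tune them to your org's priorities
    $index = ($s.Service + $s.UX + $s.Peripherals + $s.Environment + $s.Reliability) / 5
    $phase = if ($index -ge 3.5) { 'Phase 1 candidate' } else { 'Later phase' }   # feeds the Step 3 road map
    '{0}: Cognitive Index {1:N1} -> {2}' -f $s.Name, $index, $phase
}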

What do you think?