Enabling External Encoder in Microsoft Teams Live Events for Extreme Noobs

This is an exciting time in the Teams Collaboration market, triggered by Slack and now drawing giants like Microsoft and Cisco to build and introduce their own versions of Team Collaboration Solutions. Each one is trying to address this market with supposedly unique experiences. While I’m a big fan of Cisco Webex Teams for its completeness of vision, my favourite happens to be Microsoft Teams. The reason is the rebel stance it has taken against the traditional Office applications by not adhering to their architecture. Instead, this team (Microsoft Teams’ dev team) has gone with the open source ecosystem to the extent possible and kept the traditional .NET/Visual C++ copy-paste to a minimum. The efficiency benefit shows up in the relatively tiny installation file in the 70-80 MB range that can be installed by the user without admin rights… this is preposterous for any traditional Microsoft developer! I love this open attitude, and for a 1-year-old piece of software Microsoft Teams is loaded with features and keeps coming up with new ones every month. I would advise you to check their Twitter feed @MicrosoftTeams if you don’t believe me… In comparison, both the traditional Microsoft oldies and the rest of the competition are just too slow in updating their capabilities… Unlike a traditional admin, I’m a person who likes rapid change, and this fluidity of Microsoft Teams is something I love!

Getting back to the topic: Microsoft recently announced a new feature called Live Events as part of their Meetings capabilities. While regular Meetings are for Many-To-Many, Real-Time, Multi-Media Collaboration…

Live Events is specifically geared for ‘Near Real-time’, ‘Some-to-Many’ Video Collaboration.

Bidirectional capabilities are restricted to text, not voice or video. On the flip side, the audience capacity is greatly increased beyond the 250-participant limit of regular Meetings. Further, the capability to bring in External Encoders to make the event rich with studio-like production completely blasts all other competition out of the water!

If this were an audio/video blog, you would be hearing a loud bomb sound right now.

So, great features, but how do they actually perform? The regular Live Events setup and run is pretty simple and well documented; you can check here (https://docs.microsoft.com/en-us/microsoftteams/teams-live-events/what-are-teams-live-events) for more details to get started quickly.

Further links there will guide you through how to enable live events for all or selected users. Everything can be achieved over the GUI, which is boring, and hence I’m not going to blog about it here…

Now, when the time came to enable External Encoder on my lab account, I had an interesting nerdish adventure, and I believe it would be of interest to someone who has just started administering Microsoft Teams and has not faced PowerShell before. If you are an IT Pro who manages Skype for Business Online on a regular basis, then this article may be boring and you may want to stop reading…

For the rest of us, join me on a trip to Teams ‘PowerShell’ Wonderland

 

Getting Started

Typically, I wouldn’t have gone into this, as I usually try out Office365 stuff from my desktop which is fully set up. This time I tried on my new laptop with zero Office365 activity, and that meant starting from scratch… Compared to the rest of Microsoft Teams administration, this one was old school, and hence this blog.

The first thing you need to have is a ‘Windows’ OS, preferably Windows 10 Creators Update or later… if you are on something older, then you may have some other adventure in addition to what I experienced 😉… Do let me know in the comments.

 

Install Skype Online PowerShell Modules

This usually is supposed to be a boring activity… Just head over to https://download.microsoft.com/download/2/0/5/2050B39B-4DA5-48E0-B768-583533B42C3B/SkypeOnlinePowerShell.Exe

Download and install…

Beyond the need for admin rights, what could go wrong??? Plenty, as it turns out…

 

….the old world has to catch you by the throat and make you install its goodies first: the installer refuses to proceed without the Visual C++ Redistributable…

 

So, head back to https://aka.ms/vs/15/release/VC_redist.x64.exe

Download and install… with admin access, of course… Now try to install the PowerShell modules again.

 

After this you need to ‘Restart’! Yippee!

Power of the Shell be with You

Now, after the reboot, open that favourite adventure app called Windows PowerShell… I like the ISE, as it lets me interactively check documentation on modules and create scripts… You could have the same adventure as this blog with the regular PowerShell console as well…
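Before importing anything, it’s worth a quick sanity check that the module actually landed on the machine; a minimal check against the module name used above would be:

>Get-Module -ListAvailable -Name SkypeOnlineConnector

If nothing comes back, the installer (or its Visual C++ prerequisite) didn’t finish its job, and the import below will fail.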

Now we need to import the modules we ‘installed’… Other shells don’t have such needs! Why? The explanation is a bit lengthy… but google it and you should get a good answer.

 

We import the modules using the following command:

>Import-Module SkypeOnlineConnector

 

This sadly results in an error!

The reason is that by default the execution policy is set to Restricted, and hence mighty powerful magic like importing a downloaded script module is not allowed… So, we need to change it to ‘RemoteSigned’: local scripts can then run freely, while anything downloaded from the internet (like this module) must be signed by a trusted publisher…

>Set-ExecutionPolicy RemoteSigned -Scope CurrentUser

You will be presented with a confirmation asking whether you have enough strength to wield such mighty powers and whether you want to wield them always.

I usually answer ‘A’, but you would be safer with ‘Y’.
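If you want to confirm what you just changed, the scope-by-scope view is one command away; CurrentUser should now read RemoteSigned:

>Get-ExecutionPolicy -List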

 

Now let’s do the Import

>Import-Module SkypeOnlineConnector

We now get somewhere, and a confirmation appears again asking whether all these new magic skills are something you can handle.

I’m a pro, so I say ‘A’… again, if you want to be careful, then choose ‘R’.

 

Now we are all loaded up… Time to do some magic…

Let’s prepare to do some magic.

First, let’s authenticate ourselves… We get our credentials into a variable called $userCredential:

>$userCredential = Get-Credential

cmdlet Get-Credential at command pipeline position 1

Supply values for the following parameters:

 

Awesome… Now create a session to build a bridge to the Ether World

>$sfbSession = New-CsOnlineSession -Credential $userCredential

> Import-PSSession $sfbSession

If you see this… then it means it is working!

 

ModuleType Version    Name                 ExportedCommands
---------- -------    ----                 ----------------
Script     1.0        tmp_w5fa1s0p.qns     {Clear-CsOnlineTelephoneNumberReservation, ConvertTo-JsonForPSWS, Copy-C…

 

Finally! Let’s do the stuff we actually wanted to do.

Check what Broadcast Policy is set globally:

>Get-CsTeamsMeetingBroadcastPolicy -identity Global

 

Darn, it asked for credentials again!

 

But something went wrong….

Creating a new session for implicit remoting of “Get-CsTeamsMeetingBroadcastPolicy” command…

New-PSSession : [admin3a.online.lync.com] Connecting to remote server admin3a.online.lync.com failed with the following error message : The WinRM client cannot process the request. The authentication mechanism requested by the client is not supported by the server or unencrypted traffic is disabled in the service configuration. Verify the unencrypted traffic setting in the service configuration or specify one of the authentication mechanisms supported by the server. To use Kerberos, specify the computer name as the remote destination. Also verify that the client computer and the destination computer are joined to a domain. To use Basic, specify the computer name as the remote destination, specify Basic authentication and provide user name and password. Possible authentication mechanisms reported by server: For more information, see the about_Remote_Troubleshooting Help topic.
At C:\Users\<removed>\AppData\Local\Temp\tmp_w5fa1s0p.qns\tmp_w5fa1s0p.qns.psm1:136 char:17
+ & $script:NewPSSession `
+ ~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OpenError: (System.Manageme….RemoteRunspace:RemoteRunspace) [New-PSSession], PSRemotingTransportException
+ FullyQualifiedErrorId : AccessDenied,PSSessionOpenFailed

Exception calling “GetSteppablePipeline” with “1” argument(s): “No session has been associated with this implicit remoting module.”
At C:\Users\<removed>\AppData\Local\Temp\tmp_w5fa1s0p.qns\tmp_w5fa1s0p.qns.psm1:10423 char:13
+ $steppablePipeline = $scriptCmd.GetSteppablePipeline($myI …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : RuntimeException

Back to the Spell Book

A bit of googling later, it turns out that Import-PSSession only imports the ingredients of our spell, but the darn pentagram is stuck in the cloud! So, let’s enter the cloud…

> Enter-PSSession $sfbSession

[admin3a.online.lync.com]: PS>

How do you know you are in the cloud…? You see the command prompt has changed! You may get a different server name… but if you reached here, you are doing good!

Now let’s check the global policy for TeamsMeetingBroadcast…

[admin3a.online.lync.com]: PS> Get-CsTeamsMeetingBroadcastPolicy -identity Global

Description                     :
AllowBroadcastScheduling        : True
AllowBroadcastTranscription     : False
BroadcastAttendeeVisibilityMode : EveryoneInCompany
BroadcastRecordingMode          : AlwaysEnabled
Key                             : [{urn:schema:Microsoft.Rtc.Management.Policy.Teams.2017}TeamsMeetingBroadcastPolicy,Tenant{800fdedd-6533-43f5-9557-965b3eca76f6},Global]
ScopeClass                      : Global
Anchor                          : Microsoft.Rtc.Management.ScopeFramework.GlobalScopeAnchor
Identity                        : Global
TypedIdentity                   : Global
Element                         : <TeamsMeetingBroadcastPolicy xmlns="urn:schema:Microsoft.Rtc.Management.Policy.Teams.2017" AllowBroadcastScheduling="true" AllowBroadcastTranscription="false" BroadcastAttendeeVisibilityMode="EveryoneInCompany" BroadcastRecordingMode="AlwaysEnabled" />

We specifically need the status of AllowBroadcastScheduling to be True… For me it is true, and if you have already fiddled with the GUI policies then this should be true for you as well… else please go back to the GUI Admin Centre and set Meeting scheduling to True in the Global policy, or flip it right from the shell as sketched below.
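If you prefer to stay in the shell rather than go back to the Admin Centre, here is a minimal sketch of flipping that switch from the session we already have, assuming the Set- counterpart of the cmdlet above is exposed in your tenant’s module:

># Sketch: enable broadcast scheduling on the Global policy (verify in your tenant first)
>Set-CsTeamsMeetingBroadcastPolicy -Identity Global -AllowBroadcastScheduling $true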

 

Are we there yet?

If you’ve come this far, then we are now ready to do the magic we came all this way for:

[admin3a.online.lync.com]: PS> Grant-CsTeamsMeetingBroadcastPolicy -Identity <type full user name here> -PolicyName $null -Verbose

 

Whoosh!

VERBOSE: Performing the operation “Grant-CsTeamsMeetingBroadcastPolicy” on target “<the username will appear here>”.

VERBOSE: Audit disabled on Cmdlet level

We finally did it!

 

How do I check?

Head back to the Stream portal and click on the Create drop-down… the user for whom you did the magic should be able to see ‘Live Event (preview)’.

Now head back to the Teams client or web page and create a new Live Event meeting, and the user should be able to see the ‘External Encoder’ option enabled…
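If you’d rather confirm from the shell before bothering the user, a rough check of what the user ended up with could look like this; I’m assuming the assignment shows up on the online user object under the TeamsMeetingBroadcastPolicy attribute:

>Get-CsOnlineUser -Identity <type full user name here> | Select-Object DisplayName, TeamsMeetingBroadcastPolicy

An empty value here means the user falls back to the Global policy we looked at earlier.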

Awesome! Thanks for being with me on this adventure! Now your user can configure an External Encoder in their Live Events!

 

I wish the Microsoft Teams dev team would put in a little more effort and do away with this adventure, letting the administrator enable/disable the External Encoder from the GUI itself… IMHO, PowerShell for this is overkill, as only a few people will be given this magic gauntlet.

What Next? I want more adventure…

Now may be a good time to check out Luca Vitali’s article on how to use OBS as an external encoder for your event at https://lucavitali.wordpress.com/2018/08/24/how-to-use-obs-studio-external-encoder-for-live-events/

For other, more ‘Not Free’ solutions, head on to https://docs.microsoft.com/en-us/stream/live-encoder-setup

All the Best!!


How could applications look in the future? #2

This is a continuation of my previous blog on the same topic… https://julianfrank.wordpress.com/2018/01/21/how-could-future-apps-look-like/ … So why again??

One of my pet projects was getting complicated beyond control, to the extent that I could hardly remember why I wrote the code in a particular way :sigh: … Hence I’ve been looking at converting my tiny yet complicated monolithic application into a micro-service application! I looked around, found Istio a few months back, and loved the level of policed flexibility it provides. This is great for enterprise environments with relatively large teams handling the whole new stack… Sadly, for my project that was way too much overhead and complexity for the relatively few features I was trying to build…

Then I discovered moleculer. This is a nice framework that has all the fundamental features of a micro-service ecosystem ticked, and yet it is relatively small and uncomplicated for my weekend project needs… You can read about its features at a glance at http://moleculer.services/docs/0.13/ … I moved some services -> got into trouble -> cried out for help -> got support from @icebobcsi and others in its gitter.im/moleculerjs/moleculer channel -> now my app is back on track… Very impressed!

To add to the benefits, the capability to merge and separate services across nodes means that I can code on a system with all services in one node… When deploying to production, I can split the services into separate nodes and make them run in separate container or VM instances… The framework cares nothing about where you host the services, as long as they can reach each other over the network. To keep things simple you can start with service discovery over TCP broadcast! Of course, for large-scale deployments, other mechanisms like NATS, MQTT, Redis etc. would be a better fit. My only quibble is that this is locked to NodeJS. I hope that in future, other cloud-friendly languages like Golang are also supported…

Back to the topic… My idea of apps being universally discover-able is already a possibility now!

What is left is for the service (which I referred to as an app in my previous blog) to be self-aware and able to adapt to working conditions as needed…

Just imagine a situation where we got five services as part of our application: A, B, C, D, E

Now imagine that the app is bombarded by two users, with one user targeting A while the second user uses a mix of services.

A self-aware application would, in this situation, split itself across multiple nodes, letting the first node focus on user 2 while creating copies to handle the high demand from user 1…

When the load is removed, the services move back to a minimal node instance, back to the original size…

Currently this is facilitated by K8S with careful engineering from IT. But in future I believe the framework itself would take care of deciding how the service behaves and handle the process of mitosis by itself… Am I just day-dreaming?? I don’t know!!

Kubernetes YAML 3-page Bit Paper

The Kubernetes YAML has been evolving like forever and still continues to change. However, after some time it starts making sense. The sheer number of specifications can however be overwhelming for anyone, so I came up with this bit paper that can save countless visits to the kubernetes.io website. It is not exhaustive or complete, but it covers most of the commonly used specs and their nuances and can be printed on 3 sides.

This has been made for Kubernetes 1.10.

Hope you find this useful

M - Mandatory, Must Specify

O - Optional
| kind                             | Deployment                                | ReplicaSet        | StatefulSet       |
|----------------------------------|-------------------------------------------|-------------------|-------------------|
| apiVersion                       | apps/v1                                   | apps/v1           | apps/v1           |
| metadata.name                    | M                                         | M                 | M                 |
| metadata.namespace               | O                                         | O                 | O                 |
| spec.selector                    | spec.matchLabels / spec.matchExpressions  | spec.matchLabels  | spec.matchLabels  |
| spec.template                    | M (Pod template)                          | M (Pod template)  | M (Pod template)  |
| spec.template.spec.restartPolicy | Always only                               | Always only       | Always only       |
| spec.template.metadata.labels    | M                                         | M                 | M                 |
| spec.replicas                    | O (1)                                     | O (1)             | O (1)             |
| spec.strategy.type               | O (RollingUpdate), Recreate               |                   |                   |
| spec.maxSurge                    | M (strategy==RollingUpdate)               |                   |                   |
| spec.maxUnavailable              | M (strategy==RollingUpdate)               |                   |                   |
| spec.minReadySeconds             | O (0)                                     |                   |                   |
| spec.progressDeadlineSeconds     | O (> minReadySeconds)                     |                   |                   |
| spec.revisionHistoryLimit        | O (10)                                    |                   |                   |
| spec.paused                      | O (False)                                 |                   |                   |
| spec.serviceName                 |                                           |                   | O                 |
| spec.volumeClaimTemplates        |                                           |                   | M                 |

Notes:
kubectl rollout status deployments
kubectl rollout history --record | --revision=n
kubectl rollout undo | --to-revision=n
kubectl scale deployment | --replicas=n
kubectl autoscale deployment --min= --max= --cpu-percent=
kubectl rollout pause
kubectl set resources deployment -c= --limits=cpu=,memory=
kubectl rollout resume
Allergic to the kubectl rolling-update command.
You can delete a ReplicaSet without affecting any of its pods, using kubectl delete with --cascade=false.
| kind                                 | DaemonSet                                 | HorizontalPodAutoscaler | ReplicationController |
|--------------------------------------|-------------------------------------------|-------------------------|-----------------------|
| apiVersion                           | apps/v1                                   | autoscaling/v1          | v1                    |
| metadata.name                        | M                                         | M                       | M                     |
| metadata.namespace                   | O                                         | O                       | O                     |
| spec.selector                        | spec.matchLabels / spec.matchExpressions  |                         |                       |
| spec.template                        | M (Pod template)                          |                         | M (Pod template)      |
| spec.template.spec.restartPolicy     | Always only                               |                         | Always only           |
| spec.template.metadata.labels        | M                                         |                         | M                     |
| spec.template.spec.nodeSelector      | O                                         |                         |                       |
| spec.template.spec.affinity          | O                                         |                         |                       |
| Respects Taints and Tolerations      | Yes                                       |                         |                       |
| spec.replicas                        |                                           |                         | O (1)                 |
| spec.scaleTargetRef                  |                                           | M (ReplicaSet)          |                       |
| spec.minReplicas                     |                                           | ?                       |                       |
| spec.maxReplicas                     |                                           | ?                       |                       |
| spec.targetCPUUtilizationPercentage  |                                           | ?                       |                       |

Notes:
Allergic to the kubectl rolling-update command.
You can delete an rc without affecting any of its pods, using kubectl delete with --cascade=false.
| kind                             | Job                                           | CronJob                              |
|----------------------------------|-----------------------------------------------|--------------------------------------|
| apiVersion                       | batch/v1                                      | batch/v1beta1                        |
| metadata.name                    | M                                             | M                                    |
| metadata.namespace               | O                                             | O                                    |
| spec.selector                    | O                                             |                                      |
| spec.template                    | M (Pod template)                              |                                      |
| spec.template.spec.restartPolicy | Never or OnFailure only                       |                                      |
| spec.parallelism                 | O                                             |                                      |
| spec.completions                 | O; 1 = single (default), >1 = parallel jobs   |                                      |
| spec.backoffLimit                | O                                             |                                      |
| spec.activeDeadlineSeconds       | O                                             |                                      |
| spec.jobTemplate                 |                                               | M (Job)                              |
| spec.startingDeadlineSeconds     |                                               | O (large number)                     |
| spec.concurrencyPolicy           |                                               | O; Allow (default), Forbid, Replace  |
| spec.suspend                     |                                               | O (False)                            |
| spec.successfulJobsHistoryLimit  |                                               | O (3)                                |
| spec.failedJobsHistoryLimit      |                                               | O (1)                                |

Notes:
For non-parallel jobs: spec.parallelism & spec.completions = 1 (unset)
Fixed completion count: spec.completions > 1
Work queue: spec.parallelism > 1

As you must have noted by now, I’m only handling controllers right now… I will do something similar for Services, Pods and other infra components in a later post.

DevOps Environment Architecture – My Thoughts

After writing about my thoughts on how application architecture might look in the future, I have been thinking about how CTOs would want to remodel their DevOps environment to cater to the whole new multi-cloud ecosystem, with completely new jargon flying around… Lemme illustrate: Cloud Native / 12 Factor Applications, Multi-Cloud, Hybrid-Cloud, Micro-Segmentation, Containers, ChatOps, NoOps, PaaS, FaaS, Serverless, TDD, BDD, CI/CD, Blue-Green, A/B, Canary… You get the picture, right… All of these were alien terms in the old Waterfall model of application development but are now the new reality. Retrofitting the waterfall style of governance onto this ecosystem, however, is a sure recipe for disaster!

So how can we approach this?

I see two dimensions by which we should approach the new estate

  1. The Environmental State Dimension – In this dimension we look at the work item from the context of its state in the modern agile life-cycle
  2. The Application Life-Cycle State Dimension – From this perspective we see the work item in terms of its user experience impact…

Let’s Explore the State Dimension…

I see four clear states that the code ultimately will go through in a multi-cloud CI/CD environment

Developer Station

  1. This is the environment that the developer uses to write, perform local tests, branch and sync with other developers’ work
  2. This can range from a completely unmanaged BYOD environment to a hyper-secured VDI client
  3. A few Options in increasing order of IT Control I can think of are as below:
    1. BYOD Laptop/Desktop with Developer’s own tools and environment
    2. IT provided Laptop/Desktop/Workstation with mix of IT and Developer installed tools
    3. Virtual App based IT supplied Environment on Developers Device
    4. VDI Client Accessible from Developer Device

Test Zone

  1. This would be the zone where the code gets committed for Integration Tests and Compliance Tests against the bigger SOA / MicroServices Environment
  2. This typically would be cloud based to minimize cost, as the load would vary significantly with developers’ working slots and commit levels driven by application change loads
  3. Automation is inevitable and manual intervention is not advisable considering the maturity of testing tools automation available in the market

Staging Zone

  1. This zone would be a small-scale replica of the Production zone in terms of multi-cloud architecture, storage distribution, networking and security
  2. The aim would be to test the application in terms of performance, UX and resilience across multiple cloud failure scenarios. 100% automation is possible and hence manual intervention should be avoided
  3. Observability assurance would be another important goalpost in this environment… though I personally have doubts about the maturity of automation here… Unless the developer adheres to corporate standards, observability would not be possible for the given code; automating this check is doubtful and IMO may need manual intervention in certain scenarios…

Production Zone

  1. I don’t think this zone needs any introduction
  2. This is where the whole ITIL/IT4IT comes to play from a governance and management perspective
  3. This also would be the zone where multiple clouds thrive in an interconnected, secured and 100% IT Governed manner

 

Now to the other dimension…

Application Life-cycle

I have already talked about this in a previous blog (Digital {{Dev}} Lifecycle) …

But over time I believe there is more needed in an ever-changing multi-modal enterprise environment… But that I leave for the next post… Till then, bye!

Onedrive for Business (16) Greedy Cache Problem

For the past week I’ve been struggling with my C: drive filling up quickly despite having cleared the temporary files and even moved my OST file to a different drive… I noticed that OneDrive4B was refusing to complete its sync, and then it occurred to me that it was probably the culprit… Every time I freed up some space I found the pending file count reduce and then stall while the OS started complaining of low space on C:

After googling and a few hacks later, I found the main culprit. The OneDrive4B cache cannot be moved to any location other than its pre-designated one! IMO that capability was there in pre-14 versions but is no longer supported!

Anyway, I went to the C:\Users\<username>\AppData\Local\Microsoft\Office\16.0 folder and to my horror saw multiple OfficeFileCachexxx.old folders, each occupying GBs of space! I just deleted all the *.old folders and my C: drive regained the plenty of spare GBs it started off with! Problem partially solved… OneDrive now syncs and maintains only one copy of the cache and leaves the rest of the space on C: alone… But why doesn’t Microsoft allow the cache to be moved to more spacious locations? I wonder!
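For the record, a rough PowerShell sketch of the same clean-up, assuming your cache also lives under the default Office 16.0 path and the stale folders follow the same *.old naming I saw:

># Dry run first (-WhatIf); remove it once you’re happy with the list of folders it would delete
>Get-ChildItem "$env:LOCALAPPDATA\Microsoft\Office\16.0" -Directory -Filter 'OfficeFileCache*.old' | Remove-Item -Recurse -Force -WhatIf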

Rendezvous with Picroft

This weekend I decided to try out Picroft, which happens to be the simplified, software-only version of the Mycroft Project ready to run on any Raspberry Pi 3 hardware… Great! So I read through the instructions provided on their website… There were two deviations that I wanted to make:

  1. Use the HDMI output for audio instead of the hard-coded analog stereo output on the RasPi3
  2. Use a microphone outside the list of mics mentioned on their site… I’m referring to the microphone in my ancient Microsoft Lifecam VX-6000, to be precise!

So I installed and connected per instruction… The microphone got detected without any problem and Mycroft learnt ‘my’ voice so well that it wouldn’t respond to my son!

Unfortunately the audio output is hard-coded to the on-board analog stereo jack, which I did test and was working well, but I wanted output on my HDMI display which has much better acoustics! A bit of googling and I found the solution on this site: https://mycroft.ai/documentation/picroft/picroft-audio/#how-to-output-audio-via-hdmi … It’s simple… Just

'Control-C'

out of the debug screen and then open the auto_run.sh file… In the line that says sudo amixer cset numid=3 "1" … change it to

sudo amixer cset numid=3 "2"

…that’s it… on the CLI type

exit

and Picroft will reload with audio output on the screen instead of the audio jack! Yippee!

Unfortunately it looks like the software is still under heavy development on the Picroft side… I found the response time to be slower than AVS, and the accuracy, while good, needs improvement…

Next I need to find out if it can work in a pure LAN environment without Internet! But… the weekend is almost over… So that’s for some time later… Peace out till then.

An Approach to Cognify Enterprise Applications

I recently witnessed the setup of my brand new Windows 10 laptop and was surprised when Cortana guided the installation with voice recognition! This was happening before the OS was even on the laptop! … I wouldn’t have imagined this 5 years ago, and I set off imagining how the experience would have been if the setup designer had decided to completely remove any mouse/keyboard inputs. Further, what if Cortana had matured to converse with me naturally without any pre-coded questions being asked in sequence! Instead of saying yes or no, I ramble about how good the laptop looks and Cortana responds with affirmation or otherwise, while gently getting me to answer the key questions that need to be settled before the full-blown OS installation can start… It sounds cool, but in future releases this may be the reality!

Back to the topic of Enterprise Applications: conversational experiences are being continuously developed and improved, with bots learning how to converse from both pre-built flows and historical conversation logs. In the enterprise context it now becomes important that CIOs and CTOs start thinking about how their Business Applications can be used on these Conversational Platforms. Enterprise Leaders need to think carefully about how this gets architected and deployed so that it does not become something mechanical and irritating like traditional IVR solutions. To succeed in this endeavor we need to look not just at the new Cognitive Platform but also at the services expected to be enabled on the bot, and keep the experience exciting so it does not meet the same fate as IVR.

I see the following SUPER aspects of the solution that should be scrutinised carefully before project initiation:

  • Service – Look at where the service is currently performed and check the viability of integrating it with the Cognitive Platform
  • User Experience – Look at how complex the service is to execute over automated interfaces like phone, virtual assistants and chat UI
  • Peripherals – Look at the peripherals through which the services are currently provided and check whether the same can be reused or a replacement would be required. Oversight here could lead to urgent and expensive replacement later and decreased user adoption.
  • Environment – Different services are performed in different work conditions, and careful consideration should be given so that certain services are not offered in inappropriate conditions. For example, speaking out a bank balance aloud on a personal assistant could embarrass users and lead to privacy concerns of a different nature.
  • Reliability – Here the Cognitive Platform itself should be judged in terms of fragility, not just uptime but also handling of edge cases. This is where the continuous unsupervised learning capability needs to be looked at very carefully and evaluated to ensure that the platform builds up cognition over time.

Here is an approach for how Enterprise Leaders can start moving their workforce to embrace Cognitive Applications:

Step 1) Service Audit – Perform an audit of the services being performed and the related applications.

Step 2) Cognitive Index Evaluation – Use the SUPER aspects to evaluate the cognification potential of each service.

Step 3) Build Road Map – Categorise the Services in terms of ease of introduction and ease of development and batch them in phases.

Step 4) Identify Rollout Strategy – Based on the complexity and number of possible solutions and channels under consideration, one or more POCs may need to be initiated, followed by bigger rollouts. If a multiplicity of Business Applications needs to be integrated, then Business Abstraction Layer solutions could be brought in to significantly speed up integration.

Step 5) Monitor and Manage – While the Cognitive Solution brings a reduction in service tickets to IT, injection of capabilities like ‘Undirected Engagement’ could require monitoring and management of conversations in terms of Ethics, Privacy and Corporate Diversity Policy.

What do you think?

How Could Future Apps Look?

I’ve been looking at entries I made in my diary during college days, and some interesting ideas popped up in terms of how future enterprise apps could look. But first let’s start with where we are right now.

The Past

It is believed in common parlance that all enterprise apps are monoliths. This however is not true, and many orgs that I happened to work with at the start of my career (early 2000s) had already split their software stack into layers and modules, irrespective of whether the interconnection mechanism was SOAP or just plain old file transfer! However, the individual application services were still carefully managed on dedicated servers.

The Present

Virtualisation, fuelled by the boom in web standards, has now made the concept of Service-Oriented Architecture a norm rather than an exception. Now the services are being maintained in dedicated environments but can be moved around (relatively) fast with minimal downtime. Cloud and PaaS have further made it relatively easy to distribute the services across geographies and service providers. Serverless is the latest buzzword, which works great for the IoT boom and the uni-kernel infrastructure architectures that are slowly but steadily being implemented by service providers.

The Future (IMHO)

I believe that the next trend would be to make the services themselves self-aware, universally discover-able and self-portable! Let me explain these one by one:

Self-Aware

The applications will be built to know their life and need in the system. They would also have security systems in place to let them realise whether they are operating in the right location -AND- whether they are servicing the correct services/humans. They would also have a distributed, block-chain-inspired credit system that will be used to decide if they need to remain active or self-destruct!

Universally Discover-able

The security standards are already being redesigned to be universal instead of perimeter-limited. The same will extend to make the services themselves discover-able, much in the same way we humans are slowly moving into using national ID card systems. It goes without saying that they would have some mechanism to disappear and replicate without causing confusion in the system as well! Bottom line: if I create a software service and it needs a service, then it would be able to discover one and set up a contract to perform the service.

Self Portable

My service would have compute credits that would be shared with the services I call to perform my work! Once the credits are over, my service would self-destruct! But during its lifetime it would move across “certified” cloud domains, make itself available where necessary and leave replicas behind to ensure distributed service.

These are not new ideas really, just a bad actor’s trick being used for good purposes… I’m referring to the lightweight viruses that for decades have been travelling and replicating across computers, including mine two decades ago, wiping out my programs… AaaaaW how I hate viruses!

Anyway, they gave me some ideas to write about this week… Let’s see if these come true!

Future of Multi-Cloud using CLOAKS

It’s Pongal 2018! Feels good after trying out a veshti… that too the non-Velcro type. Getting a few trees planted was a bonus. Sadly, both these ventures are in their infancy, and that took my thoughts back to the IT world and the multi-cloud pitch, which is now slowly showing signs of maturing into a practical architecture.

I’ve always prophesied that the future of cloud is peer-to-peer multi-cloud, and it has been an aspiration ever since the ‘cloud’ got popularized. The first use-case that came to my mind was the capability to port applications/services across different service providers, geographies and partners. However, IMO we should look at more parameters to truly evaluate how things have been changing and what the future holds for us! Here is a CLOAKS-based attempt:

  1. Competition
    • Number of vendors that support the exact same architecture stack
  2. Location
    • Number of locations the infrastructure architecture can accommodate for the provided applications
  3. Openness
    • The level of openness of the individual specifications
  4. Applications
    • Readiness of applications to make full use of the multi-cloud architecture
  5. Kinematics
    • Appreciation of the heavy-duty impact of the CAP implications on application design in a multi-cloud scenario
  6. Security
    • Maturity of security for in-scope workloads covering data at rest, in motion and in compute, identity, policy and control of data leaks

It’s been at least 5 years, but maturity across all these capabilities has not been demonstrated anywhere close to my expectations. However, there is good news: we are seeing things changing in the right direction, and I believe it is interesting to look at these evolving in different ages as below:

Age 1) Service Provider defined

This is something that became practical with AWS and Azure IaaS workloads providing network peering with on-premise workloads. Further, multi-region networking is provided to handle movement of workloads within the same provider 😉

Age 2) Platform Vendor Defined

We are currently in this age, with vendor-provided solutions that let enterprises scale their applications between their on-premise data-center and the cloud. The VMware Cloud solutions for AWS and Bluemix are a step in the right direction, but still restricted to and supported between the same platform only. There is still a lot to happen in this space this year, and only time will tell what other vendors have in store!

Age 3) Community Defined

This I believe is the future, and it will be built by communities of like-minded technocrats and disrupted by some new player who will force the cloud biggies to tear down the walls they have built to discourage interoperability between vendors and clouds.

DC Planner Web Version

I had built an Excel Solver Add-in based DC Planner a long time ago. It was fine but limited to the capabilities of Excel… Recently, while exploring ML, I discovered a JavaScript-based constraint solver (https://github.com/Wizcorp/constrained) and thought I could use it to build an online version of the DC Planner. So today, using a bit of VueJS, I built one and hosted it on Heroku… check it out at https://dctopup.herokuapp.com/

It’s simple to use

Just plug in your numbers for the racks and the server types and click the Evaluate button… and the output table populates with the result if a solution is possible.

Got more racks and server types? Just add them and then click Evaluate…

If a solution is not possible, then you will get a “solution not feasible” error message…

Being the first version and made in a day, it is bound to have some bugs… Please do report them by raising an issue at https://github.com/julianfrank/dcplanner

If you like it, please do star the project on GitHub…

Thanks & Enjoy!

Andromeda: performance, isolation, and velocity at scale in cloud network virtualization

Want to know how GCP (Google Cloud Platform) does its virtual networking to achieve 30 Gbps? Read this article on Adrian Colyer’s blog.

the morning paper

Andromeda: performance, isolation, and velocity at scale in cloud network virtualization Dalton et al., NSDI’18

Yesterday we took a look at the Microsoft Azure networking stack, today it’s the turn of the Google Cloud Platform. (It’s a very handy coincidence to have two such experience and system design report papers appearing side by side so that we can compare). Andromeda has similar design goals to AccelNet: performance close to hardware, serviceability, and the flexibility and velocity of a software-based architecture. The Google team solve those challenges in a very different way though, being prepared to make use of host cores (which you’ll recall the Azure team wanted to avoid).

We opted for a high-performance software-based architecture instead of a hardware-only solution like SR-IOV because software enables flexible, high-velocity feature deployment… Andromeda consumes a few percent of the CPU and memory on-host. One physical CPU core is reserved for the Andromeda…
