DevOps Environment Architecture – My Thoughts

After writing about my thoughts on what application architecture might look like in the future, I have now been thinking about how CTOs would want to remodel their DevOps environment to cater to the whole new multi-cloud ecosystem, with completely new jargon flying around… Lemme illustrate: Cloud Native / 12-Factor Applications, Multi-Cloud, Hybrid Cloud, Micro-Segmentation, Containers, ChatOps, NoOps, PaaS, FaaS, Serverless, TDD, BDD, CI/CD, Blue-Green, A/B, Canary… You get the picture, right? All of these were alien terms in the old Waterfall model of application development but are now the new reality, and retrofitting the waterfall style of governance onto this ecosystem is a sure recipe for disaster!

So how can we approach this?

I see two dimensions by which we should approach the new estate:

  1. The Environmental State Dimension – In this dimension we look at the work item in terms of its state in the modern agile life-cycle
  2. The Application Life-Cycle State Dimension – From this perspective we see the work item in terms of its impact on user experience

Let’s Explore the State Dimension…

I see four clear states that code ultimately goes through in a multi-cloud CI/CD environment:

Developer Station

  1. This is the environment the developer uses to write code, perform local tests, branch, and sync with other developers’ work
  2. This can range from a completely unmanaged BYOD environment to a hyper-secured VDI client
  3. A few options, in increasing order of IT control, are below:
    1. BYOD laptop/desktop with the developer’s own tools and environment
    2. IT-provided laptop/desktop/workstation with a mix of IT- and developer-installed tools
    3. Virtual-app-based, IT-supplied environment on the developer’s device
    4. VDI client accessible from the developer’s device

Test Zone

  1. This is the zone where code gets committed for integration tests and compliance tests against the bigger SOA / microservices environment
  2. This would typically be cloud-based to minimise cost, as the load would vary significantly based on developers’ working slots and commit levels driven by application change loads
  3. Automation is inevitable here, and manual intervention is not advisable considering the maturity of the test-automation tools available in the market

Staging Zone

  1. This zone would be a small-scale replica of the Production zone in terms of multi-cloud architecture, storage distribution, networking and security
  2. The aim would be to test the application for performance, UX and resilience under multiple cloud-failure scenarios. 100% automation is possible, and hence manual intervention should be avoided
  3. Observability assurance would be another important goalpost in this environment… Though I personally have doubts about the maturity of automation here: unless the developer adheres to corporate standards, observability would not be possible for the given code, so automating this is doubtful and IMO may need manual intervention in certain scenarios

Production Zone

  1. I don’t think this zone needs any introduction
  2. This is where the whole ITIL/IT4IT comes to play from a governance and management perspective
  3. This also would be the zone where multiple clouds thrive in an interconnected, secured and 100% IT Governed manner
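The four zones above effectively form a promotion pipeline, with gates between them. A minimal sketch in Python of that idea — the gate checks and the promotion rule are my own illustration of the text, not any standard:

```python
from enum import Enum

class Zone(Enum):
    """The four environment states code moves through, in promotion order."""
    DEVELOPER_STATION = 1
    TEST = 2
    STAGING = 3
    PRODUCTION = 4

# Illustrative promotion gates: what must pass before code may leave each zone.
GATES = {
    Zone.DEVELOPER_STATION: ["local tests", "branch synced"],
    Zone.TEST: ["integration tests", "compliance tests"],
    Zone.STAGING: ["performance", "resilience", "observability"],
}

def promote(zone: Zone, passed: set) -> Zone:
    """Advance to the next zone only if every gate of the current zone passed."""
    if zone is Zone.PRODUCTION:
        return zone  # already at the last state
    required = set(GATES[zone])
    if not required <= passed:
        raise ValueError(f"cannot leave {zone.name}: missing {required - passed}")
    return Zone(zone.value + 1)
```

The point of the sketch is that each zone owns its own exit criteria — governance lives in the gates, not in a waterfall sign-off at the end.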


Now to the other dimension…

Application Life-cycle

I have already talked about this in a previous blog (Digital {{Dev}} Lifecycle) …

But over time I believe more is needed in an ever-changing multi-modal enterprise environment… That, though, I leave for the next post… Till then, bye!


OneDrive for Business (16) Greedy Cache Problem

For the past week I’ve been struggling with my C: drive filling up quickly, despite having cleared the temporary files and even moved my OST file to a different drive… I noticed that Onedrive4B was refusing to complete its sync, and then it occurred to me that it was probably the culprit… Every time I freed up some space, I found the pending file count reduce and then stall while the OS started complaining of low space on C:

After googling and a few hacks later, I found the main culprit: the Onedrive4B cache cannot be moved anywhere other than its pre-designated location! That was IMO a capability present in pre-14 versions but is now not supported!

Anyway, I went to the C:\Users\<username>\AppData\Local\Microsoft\Office\16.0 folder and to my horror saw multiple OfficeFileCachexxx.old folders, each occupying GBs of space! I deleted all the *.old folders and my C: drive regained the plenty of spare GBs it started off with! Problem partially solved… OneDrive now syncs, maintains only one copy of the cache, and leaves the rest of the space on C: alone… But why doesn’t Microsoft allow the cache to be moved to a more spacious location? I wonder!
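The manual hunt above can be scripted. Here is a minimal sketch that lists the stale cache folders and their sizes so you can verify before deleting — the path and the `OfficeFileCache*.old` naming pattern are simply what I observed on my machine and may differ on yours:

```python
from pathlib import Path

def find_old_cache_dirs(base: Path):
    """Return (folder, total bytes) for every OfficeFileCache*.old folder under base."""
    if not base.is_dir():
        return []
    results = []
    for d in base.glob("OfficeFileCache*.old"):
        if d.is_dir():
            size = sum(f.stat().st_size for f in d.rglob("*") if f.is_file())
            results.append((d, size))
    return results

if __name__ == "__main__":
    # Typical location of the Office 16 cache on Windows (as seen on my laptop).
    base = Path.home() / "AppData/Local/Microsoft/Office/16.0"
    for folder, size in find_old_cache_dirs(base):
        print(f"{folder} — {size / 1e9:.1f} GB")
        # Once verified, a folder can be removed with shutil.rmtree(folder).
```

Listing first and deleting second is deliberate: the live (non-.old) cache folder must be left alone or OneDrive will re-download everything.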

Rendezvous with Picroft

This weekend I decided to try out Picroft, which happens to be the simplified, software-only version of the Mycroft Project, ready to run on any Raspberry Pi 3… Great! So I read through the instructions provided on their website… There were two deviations I wanted to make:

  1. Use HDMI audio output instead of the hard-coded analog stereo output on the RasPi 3
  2. Use a microphone outside the list of mics mentioned on their site… I’m referring to the microphone in my ancient Microsoft LifeCam VX-6000, to be precise!

So I installed and connected everything per the instructions… The microphone was detected without any problem, and Mycroft learnt ‘my’ voice so well that it wouldn’t respond to my son!

Unfortunately the audio output is hard-coded to the on-board analog stereo jack, which I did test and found working well, but I wanted output on my HDMI display, which has much better acoustics! A bit of googling and I found the solution on this site: … It’s simple… Just


out of the debug screen and then open the file… In the line that says sudo amixer cset numid=3 "1" … change it to

sudo amixer cset numid=3 "2"

…that’s it… on the CLI, type


and Picroft will reload with audio output on the screen instead of the audio jack! Yippee!

Unfortunately it looks like the software is still under heavy development on the Picroft side… I found the response time to be slower than AVS, and the accuracy is good but needs improvement…

Next I need to find out if it can work in a pure LAN environment without Internet! But… the weekend is almost over… So that’s for some time later… Peace out till then!

An Approach to Cognify Enterprise Applications

I recently witnessed the setup of my brand-new Windows 10 laptop and was surprised when Cortana guided the installation with voice recognition! This was happening before the OS was even on the laptop! I wouldn’t have imagined this 5 years ago, and I set off imagining how the experience would have been if the setup designer had decided to completely remove any mouse/keyboard inputs. Further, what if Cortana had matured to converse with me naturally, without any pre-coded questions being asked in sequence! Instead of saying yes or no, I dabble on about how good the laptop looks, and Cortana responds with affirmation or otherwise while gently getting me to answer the key questions that need answering before the full-blown OS installation can start… It sounds cool, but in future releases this may be the reality!

Back to the topic of enterprise applications: conversational experiences are being continuously developed and improved upon, with bots learning how to converse from both pre-built flows and historical conversation logs. In the enterprise context it now becomes important that CIOs and CTOs start thinking about how their business applications can be used on these conversational platforms. Enterprise leaders need to think carefully about how this gets architected and deployed so that it does not become something mechanical and irritating like traditional IVR solutions. To succeed in this endeavour we need to look not just at the new cognitive platform but also at the services expected to be enabled on the bot, and keep the experience exciting so it does not meet the same fate as IVR.

I see the following SUPER aspects of the solution that should be scrutinised carefully before project initiation:

  • Service – Look at where the service is currently performed and check the viability of integrating it with the cognitive platform
  • User Experience – Look at how complex the service is to execute over automated interfaces like phone, virtual assistants and chat UI
  • Peripherals – Look at the peripherals through which the services are currently provided and check whether they can be reused or a replacement would be required. Oversight here could lead to urgent and expensive replacement later and decreased user adoption.
  • Environment – Different services are performed in different work conditions, and careful consideration should be given so that inappropriate services are not offered in certain conditions. For example, speaking out a bank balance aloud on a personal assistant could embarrass users and lead to privacy concerns of a different nature.
  • Reliability – Here the cognitive platform itself should be judged in terms of fragility, not just uptime but also handling of edge cases. This is where the continuous unsupervised learning capability needs to be looked at very carefully and evaluated, to ensure the platform builds up cognition over time.
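The SUPER aspects lend themselves to a simple scoring exercise. A minimal sketch — the 1–5 scale, the equal weighting, and the example scores are my own illustration, not a standard index:

```python
# Score each SUPER aspect from 1 (poor fit for cognification) to 5 (excellent fit).
SUPER_ASPECTS = ["service", "user_experience", "peripherals", "environment", "reliability"]

def cognitive_index(scores: dict) -> float:
    """Average the five SUPER aspect scores into a single 1-5 index."""
    missing = set(SUPER_ASPECTS) - scores.keys()
    if missing:
        raise ValueError(f"unscored aspects: {missing}")
    return sum(scores[a] for a in SUPER_ASPECTS) / len(SUPER_ASPECTS)

# Example: a balance-enquiry service that is easy to integrate but risky to
# speak aloud in public, hence the low 'environment' score.
balance_enquiry = {"service": 5, "user_experience": 4, "peripherals": 4,
                   "environment": 2, "reliability": 3}
```

A weighted average (e.g. weighting reliability higher for customer-facing services) would be a natural refinement; the equal weighting here just keeps the sketch readable.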

Here is an approach for how enterprise leaders can start moving their workforce to embrace cognitive applications:

Step 1) Service Audit – Perform an audit of the services being performed and the related applications.

Step 2) Cognitive Index Evaluation – Use the SUPER aspects to evaluate the cognifiability of each service.

Step 3) Build Road Map – Categorise the services in terms of ease of introduction and ease of development, and batch them into phases.

Step 4) Identify Rollout Strategy – Based on the complexity and number of possible solutions and channels under consideration, one or more POCs may need to be initiated, followed by bigger rollouts. If a multiplicity of business applications needs to be integrated, business abstraction layer solutions could be brought in to significantly cut integration time.

Step 5) Monitor and Manage – While the cognitive solution reduces service tickets to IT, the injection of capabilities like ‘undirected engagement’ could require monitoring and management of conversations in terms of ethics, privacy and corporate diversity policy.
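Steps 2 and 3 can be sketched as sorting services by an illustrative cognitive-index score and batching them into rollout phases — the service names and scores below are made up for the example:

```python
def build_roadmap(indexed: dict, batch_size: int = 3) -> list:
    """Order services by descending cognitive index (easiest wins first)
    and cut them into rollout phases of batch_size services each."""
    ordered = sorted(indexed, key=indexed.get, reverse=True)
    return [ordered[i:i + batch_size] for i in range(0, len(ordered), batch_size)]

# Hypothetical services with their (already computed) cognitive-index scores.
services = {"password_reset": 4.6, "leave_request": 4.2, "balance_enquiry": 3.6,
            "expense_claim": 3.0, "contract_review": 1.8}
phases = build_roadmap(services, batch_size=2)
# Phase 1 holds the highest-scoring services; the hardest land in the last phase.
```

Putting the easy, high-scoring services in phase 1 is the usual way to bank early wins before tackling services that need platform maturity.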

What do you think?

What Could Future Apps Look Like?

I’ve been looking at entries I made in my diary during college days, and some interesting ideas popped up about how future enterprise apps could look. But first, let’s start from where we are right now.

The Past

It is believed in common parlance that all enterprise apps are monoliths. This however is not true: many orgs I happened to work with at the start of my career (early 2000s) had already split their software stacks into layers and modules, irrespective of whether the interconnection mechanism was SOAP or just plain old file transfer! However, the individual application services were still carefully managed on dedicated servers.

The Present

Virtualisation, fuelled by the boom in web standards, has now made service-oriented architecture the norm rather than the exception. Services are still maintained in dedicated environments but can be moved around (relatively) fast with minimal downtime. Cloud and PaaS have further made it relatively easy to distribute services across geographies and service providers. Serverless is the latest buzzword, and it works great for the IoT boom and the unikernel infrastructure architectures that are slowly but steadily being implemented by service providers.

The Future (IMHO)

I believe the next trend will be to make the services themselves self-aware, universally discoverable and self-portable! Let me explain these one by one:

Self-Aware

The applications will be built to know their life and need in the system. They will also have security systems in place to let them realise whether they are operating in the right location -AND- whether they are servicing the correct services/humans. They will also have a distributed, blockchain-inspired credit system that will be used to decide whether they need to remain active or self-destruct!

Universally Discoverable

Security standards are already being redesigned to be universal instead of perimeter-limited. The same will extend to making the services themselves discoverable, much the same way we humans are slowly moving into using national ID card systems. It goes without saying that they will need some mechanism to disappear and replicate without causing confusion in the system! Bottom line: if I create a software service and it needs another service, it will be able to discover that service and set up a contract for it to perform the work.

Self-Portable

My service would have compute credits that would be shared with the services it calls to perform work on its behalf! Once the credits are over, my service would self-destruct! But during its lifetime it would move across “certified” cloud domains, making itself available where necessary and leaving replicas to ensure distributed service.
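The credit idea can be sketched in a few lines — the credit amounts and the self-destruct rule here are purely illustrative of the thought experiment, not any real protocol:

```python
class Service:
    """A toy self-aware service: it pays compute credits for every call it
    delegates and deactivates itself when it can no longer pay."""

    def __init__(self, name: str, credits: int):
        self.name = name
        self.credits = credits
        self.active = True

    def call(self, other: "Service", cost: int) -> bool:
        """Pay `cost` credits to `other` for performing work on our behalf."""
        if not self.active or cost > self.credits:
            self._self_destruct()
            return False
        self.credits -= cost
        other.credits += cost  # credits are shared with the called service
        return True

    def _self_destruct(self):
        # In the essay's terms: remove replicas and disappear from the system.
        self.active = False
```

A real system would of course need the blockchain-inspired ledger the post alludes to, so that credits cannot be forged or double-spent by a rogue replica.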

These are not new ideas really, just a bad actor being repurposed for good… I’m referring to the lightweight viruses that have been travelling and replicating across computers for decades, including mine two decades ago, wiping out my programs… Aaaaw, how I hate viruses!

Anyway, they gave me some ideas to write about this week… Let’s see if these come true!

Future of Multi-Cloud using CLOAKS

It’s Pongal 2018! It feels good after trying out a veshti, that too the non-Velcro type. Getting a few trees planted was a bonus. Sadly, both these ventures are in their infancy, and they took my thoughts back to the IT world and the multi-cloud pitch, which is now slowly showing signs of maturing into a practical architecture.

I’ve always prophesied that the future of cloud is peer-to-peer multi-cloud; it has been an aspiration ever since the ‘cloud’ got popularised. The first use-case that came to my mind was the capability to port applications/services across different service providers, geographies and partners. However, IMO we should look at more parameters to truly evaluate how things have been changing and what the future holds for us! Here is a CLOAKS-based attempt:

  1. Competition
    • Number of vendors that support the exact same architecture stack
  2. Location
    • Number of locations the Infrastructure Architecture can accommodate for the provided Applications
  3. Openness
    • The Level of Openness in the individual specifications
  4. Applications
    • Readiness of Applications to make full use of the Multi-Cloud Architecture
  5. Kinematics
    • Appreciation of the heavy impact of the CAP theorem’s implications on application design in a multi-cloud scenario
  6. Security
    • Maturity of security for in-scope workloads: data at rest, in motion and in compute, plus identity, policy and data-leak control.
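Since multi-cloud readiness is only as good as its weakest dimension, one way to read a CLOAKS assessment is as a minimum over the six scores. A sketch — the 0–5 scale and the example scores are my own illustration, not a published rating:

```python
CLOAKS = ["competition", "location", "openness", "applications", "kinematics", "security"]

def cloaks_maturity(scores: dict) -> tuple:
    """Rate each CLOAKS dimension 0-5; overall maturity is capped by the
    weakest dimension, which is also worth naming explicitly."""
    weakest = min(CLOAKS, key=lambda d: scores[d])
    return scores[weakest], weakest

# Illustrative scores for a platform-vendor-defined stack of the kind
# discussed in the ages below: decent security, poor openness.
age2 = {"competition": 2, "location": 3, "openness": 1,
        "applications": 2, "kinematics": 1, "security": 3}
```

The min rather than an average is deliberate: a stack that scores 5 on security but 1 on openness still cannot move workloads across vendors.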

It’s been at least 5 years, but maturity across all these capabilities has not been demonstrated even close to my expectations. However, there is good news: things are changing in the right direction, and I believe it is interesting to look at these evolving through different ages, as below:

Age 1) Service Provider defined

This became practical with AWS and Azure IaaS workloads providing network peering with on-premise workloads. Further, multi-region networking is provided to handle movement of workloads within the same provider 😉

Age 2) Platform Vendor Defined

We are currently in this age, with vendors providing solutions that let enterprises scale their applications between their on-premise data centres and the cloud. The VMware Cloud solutions for AWS and Bluemix are a step in the right direction but are still restricted to, and supported between, the same platform only. There is still a lot to happen in this space this year, and only time will tell what other vendors have in store!

Age 3) Community Defined

This, I believe, is the future: it will be built by communities of like-minded technocrats and disrupted by some new player who will force the cloud biggies to tear down the walls they have built to discourage interoperability between vendors and clouds.

Andromeda: performance, isolation, and velocity at scale in cloud network virtualization

Want to know how GCP (Google Cloud Platform) does its virtual networking to achieve 30 Gbps? Read this article on Adrian Colyer’s blog:

the morning paper

Andromeda: performance, isolation, and velocity at scale in cloud network virtualization Dalton et al., NSDI’18

Yesterday we took a look at the Microsoft Azure networking stack, today it’s the turn of the Google Cloud Platform. (It’s a very handy coincidence to have two such experience and system design report papers appearing side by side so that we can compare). Andromeda has similar design goals to AccelNet: performance close to hardware, serviceability, and the flexibility and velocity of a software-based architecture. The Google team solve those challenges in a very different way though, being prepared to make use of host cores (which you’ll recall the Azure team wanted to avoid).

We opted for a high-performance software-based architecture instead of a hardware-only solution like SR-IOV because software enables flexible, high-velocity feature deployment… Andromeda consumes a few percent of the CPU and memory on-host. One physical CPU core is reserved for the Andromeda…


Migrating from Avaya to Skype for Business – Part 2 – Discovery

For people moving from Avaya to a Skype4Business on-premise setup…


In Part 1 of this series I gave an overview of how to approach these types of projects. If you haven’t read Part 1, you can do so here.

As I said previously, the discovery is the most important part of these types of projects. You cannot just turn up and expect to migrate users, that’s going to end in disaster. When I perform the discovery phase I look at the following architectures

  1. Client layer
  2. Network layer
  3. Server layer
  4. Telephony layer
  5. Process layer

The reason I take a complete discovery across these layers is to avoid ignorance and decisions based on nothing more than assumptions and people’s say-so.

Client Layer

In this discovery I look at the end user state. My aim is to establish a baseline of what hardware, peripherals and working environment is used within the business. You’ll need to document the standard hardware profiles offered…
