
Gaurav Agarwaal

Senior Vice President | Cybersecurity and Digital Transformation Leader | Coach | Mentor | Chief Architect | Technology Thought Leader at Avanade

Redmond, United States

Gaurav Agarwaal, Senior Vice President, Global Cybersecurity Lead - Cloud, App, and Data at Avanade Inc

Gaurav is a technology thought leader and strategist in Cybersecurity, Cloud, and Digital Transformation with 26+ years of experience. He is known for an innovative, disruptive approach to driving Digital Transformation and for developing scalable practices across the Cybersecurity, Cloud, Application Modernization, Intelligent Experience, and IT outsourcing domains.

Prior to joining Avanade, Gaurav was Director, Technical Sales at Microsoft Singapore and India, where he was pivotal in developing innovative Cloud solutions and Cloud practices with Microsoft's Global SI and National SI partner ecosystems. As co-founder and Chief Architect of the Partner Enterprise Architect Team (PEAT) at Microsoft, he established a Global SI-led $500M Azure consumption business for Microsoft, and he built and led a large global team of Cloud Solution Architects and aspiring Solution Architects. Earlier, as Senior Architect at Microsoft India, he was pivotal in defining the solution architecture for first-of-its-kind projects such as the Direct Benefits Transfer scheme, BangaloreOne, eDistrict, Rural ICT, and Nemmadi. In addition to his technical excellence in delivering large-scale, complex projects, he has gained wide industry recognition for his consistent success in achieving sustained revenue and profit gains.

His 28+ years in IT span management, leadership of large architect teams, pre-sales, partner business development, and consulting.

Available For: Authoring, Consulting, Influencing, Speaking
Travels From: Seattle
Speaking Topics: Cybersecurity, Cloud Transformation, Application Modernization, Digital Transformation, Technology Leadership, 5G, Microsoft Azure

Speaking Fee $1,000 (In-Person)

Gaurav Agarwaal Points
Academic 0
Author 51
Influencer 248
Speaker 0
Entrepreneur 30
Total 329

Points based upon Thinkers360 patent-pending algorithm.

Thought Leader Profile

Portfolio Mix

Company Information

Company Type: Enterprise
Business Unit: Technology
Theatre: Global
Minimum Project Size: Undisclosed
Average Hourly Rate: Undisclosed
Number of Employees: Undisclosed
Company Founded Date: Undisclosed

Areas of Expertise

5G 30.11
Agile 30.04
AI 30.01
Big Data
Business Continuity 30.40
Business Strategy 30.24
Change Management
Cloud 33.28
COVID19 37.52
Customer Experience
Cybersecurity 30.83
Data Center 30.12
Design Thinking
Digital Disruption 30.14
Digital Transformation 30.76
Digital Twins 42.67
Emerging Technology
Entrepreneurship 30.08
Future of Work
Innovation 30.09
Leadership 30.66
Management 30.05
Predictive Analytics
Security 30.17
Supply Chain 30.04
Climate Change 30.22

Industry Experience

Federal & Public Sector
Financial Services & Banking
Professional Services


24 Article/Blogs
Securing Tomorrow: Unleashing the Power of Breach and Attack Simulation (BAS) Technology
September 13, 2023
As the cybersecurity landscape continues to evolve, the challenges associated with defending against cyber threats have grown exponentially. Threat vectors have expanded, and cyber attackers now employ increasingly sophisticated tools and methods. Moreover, the complexity of managing security in today's distributed hybrid/multi-cloud architecture, heavily reliant on high-speed connectivity for both people and IoT devices, further compounds the challenges of #cyberdefense.

See publication

Tags: Cybersecurity, Security

Decoding new SEC Regulations: A CISO's Guide to DOs and DON'Ts in Cybersecurity
September 09, 2023
The U.S. Securities and Exchange Commission (SEC) is the latest federal agency putting a spotlight on U.S. companies' cybersecurity practices and pushing boards and executive management teams to place a greater focus on cyber risk management. On July 26, 2023, the SEC adopted amendments (SEC Adopts Rules on Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure by Public Companies) to its rules to enhance disclosures regarding material cybersecurity incidents and cybersecurity risk management, strategy, and governance processes by registrants.

See publication

Tags: Cybersecurity

12 Secrets for Successful Digital Transformation
February 02, 2022
Here are 12 secrets to lead a successful digital transformation and deliver financial results without wasting critical tech resources.

See publication

Tags: Digital Transformation, Leadership

Letter from My Desk #15: 4 Managerial Archetypes that employees encounter in their career
November 15, 2021
The success pillars of organizations are their employees who dedicate themselves to achieving a company’s desired goals. Current employees are the future leaders who will lead the organizations. In my opinion, it is imperative to empower employees by developing their managerial and leadership skills to teach them a better leadership style.

See publication

Tags: Digital Transformation, Leadership

Letter From My Desk #14: 4 Innovation postures of organizations led by the pandemic
October 12, 2021
The COVID-19 pandemic has transformed nearly every aspect of life, be it personal or professional. For example, it significantly impacted how companies interacted with customers, customers' purchasing, and the supply chain delivery process.

See publication

Tags: Digital Transformation, Innovation, Supply Chain

Letter From My Desk #13: Specialize & Generalize - A mixed strategy to pivot your career
October 05, 2021
My mentees often ask me which is a better option for them when they are trying to make a successful career, being a generalist or a specialist?

See publication

Tags: Digital Transformation, Innovation, Leadership

Advice to my Younger Self as a Software Engineer
October 01, 2021
A piece of advice to my younger self — "Set all kinds of goals for what you want to achieve in life. No goal is too big with the right plan and vision." In the past 25 years, I had my fair share of success and failure. I find this journey memorable as it made me understand the importance…

See publication

Tags: Cloud, Cybersecurity, Digital Transformation

The Cloud Continuum
September 27, 2021
Organizations are reinventing their futures amid exceptional situations. Nowadays, change has become the new ordinary. That is why so many firms are reinventing themselves and moving systems and apps to the Cloud. And they're doing it while their industries and businesses are changing…

See publication

Tags: Cloud, Cybersecurity, Digital Transformation

Letter From My Desk #12: 4 strategies to maximize the benefits from brainstorming.
September 24, 2021
Every day, I attend countless meetings to create new solutions that benefit our organization and customers. For the past 25 years, I've been able to focus on starting my day with a good work plan. However, implementing that plan sometimes requires more effort, discussions, modifications, and whatnot.

See publication

Tags: Digital Transformation, Management, Leadership

Advice to my Younger Self as a Software Engineer
September 24, 2021
A piece of advice to my younger self - “Set all kinds of goals for what you want to achieve in life. No goal is too big with the right plan and vision.”

See publication

Tags: Cloud, Digital Transformation, Leadership

Thought Leadership
September 20, 2021
You want to be a Thought Leader. "Thought leadership" is a hot topic in the corporate world right now, and it's simple to understand why. When people turn to you for your knowledge and insight, you may establish yourself as a prominent industry expert and a well-known voice in your field…

See publication

Tags: Cloud, Cybersecurity, Digital Transformation

So You want to be a Thought Leader with Growth Mindset
September 17, 2021
"Thought leadership" is a hot topic in the corporate world right now, and it's simple to understand why. When people turn to you for your knowledge and insight, you may establish yourself as a prominent industry expert and a well-known voice in your field.

See publication

Tags: Cybersecurity, Digital Transformation, Leadership

Industry Cloud Is The Future Of Cloud Transformation And Realization
September 17, 2021
Today, the cloud underpins most new technological disruptions and has proven itself during times of uncertainty with its resiliency, scalability, flexibility, and speed. According to Gartner, Cloud adoption has expanded rapidly — more than 20% CAGR from 2020 to 2025 in total spending…

See publication

Tags: Cloud, Cybersecurity, Digital Transformation

Learning Path - Digital Transformation Strategy and Innovation
September 16, 2021
Welcome to the 4th part of the series on “Upskilling to Digital Architect” wherein I will focus on the first foundational skills block “Digital Transformation Strategy and Innovation” for digital architects.

See publication

Tags: Cybersecurity, Digital Transformation, Leadership

7 Mantras to Pivot Your Career
September 16, 2021
The coronavirus pandemic caused massive downturns in numerous industries across the world, resulting in unemployment, pay cuts, furloughs, and layoffs. This has changed the idea of a career pivot from something a professional considers for a change to a necessity for survival. Even employers have noticed and utilized the advantage of people who pivot their careers.

See publication

Tags: Cloud, Cybersecurity, Digital Transformation

Industry Cloud Is The Future Of Cloud Transformation And Realization
September 10, 2021
Certainly, there are reasons like regulatory compliance, security concerns, a shortage of cloud-skilled resources, technology debt, etc.; but in my view, so far Cloud providers have been solving the technology-style problem instead of providing real industry-specific, business-process-centric solutions.

See publication

Tags: Cloud, Digital Transformation, Leadership

7 Mantras to Pivot Your Career
September 10, 2021
The coronavirus pandemic caused massive downturns in numerous industries across the world resulting in unemployment, pay cuts, furloughs, and layoffs. This has changed the idea of a career pivot from something a professional considers for a change, to the necessity for survival. Even employers have noticed and utilized the advantage of people who pivot their careers.

See publication

Tags: Cybersecurity, Digital Transformation, Leadership

Microsoft Cloud for Retail – Architect Perspective
August 31, 2021
2020 was the year within which we experienced disruptive changes at a pace and a scale that we could never have imagined. COVID-19 caused disruptions in product supply and demand, in the labor pool, and consumer spending. But it also allowed many sectors to embrace digitalization like never before.

See publication

Tags: Cybersecurity, Digital Transformation, Leadership

Microsoft Cloud for Healthcare - Architect Perspective
August 30, 2021
The COVID-19 pandemic has impacted not just every aspect of people's lives but every aspect of the healthcare system. It's preventing healthcare delivery practices from operating at normal business levels, it's disrupting patient access to high-quality medical care, and it's forcing everyone to think about how to continue pushing forward in new and different ways.

See publication

Tags: Cloud, Digital Transformation, Leadership

Microsoft Cloud for Retail — Architect Perspective
August 30, 2021
2020 was the year within which we experienced disruptive changes at a pace and a scale that we could never have imagined. COVID-19 caused disruptions in product supply and demand, in the labor pool, and consumer spending. But it also allowed many sectors to embrace digitalization like never before.

See publication

Tags: Cloud, Cybersecurity, Digital Transformation

Microsoft Cloud for Healthcare — Architect Perspective
August 27, 2021
The COVID-19 pandemic has impacted not just every aspect of people's lives but every aspect of the healthcare system. It's preventing healthcare delivery practices from operating at normal business levels, it's disrupting patient access to high-quality medical care, and it's forcing everyone to think about how to continue pushing forward in new and different ways.

See publication

Tags: Cloud, Cybersecurity, Digital Transformation

Microsoft Cloud for Manufacturing — Architect Perspective
August 26, 2021
Manufacturers around the globe are becoming more agile and adaptable post-COVID. 2020 has been a year that we will not soon forget. This interference has led to high demand for innovation, fast delivery, and better user experience…

See publication

Tags: Cloud, Cybersecurity, Digital Transformation

Microsoft Cloud for Financial Services — Architect Perspective
August 25, 2021
Financial institutions have embraced digital innovation at a record pace to adopt new ways of working, serve the financial needs of customers, and keep the markets performing. Moreover, they have done so while still operating…

See publication

Tags: Cloud, Cybersecurity, Digital Transformation

Impact of AI on Software Development And Testing — Ethical And Productivity Implications Of…
July 15, 2021
AI technology is changing the working process of software engineers and test engineers. It is promoting productivity, quality, and speed. Businesses use AI algorithms to…

See publication

Tags: Cloud, Cybersecurity, Digital Transformation

1 Board Membership
Forbes Technology Council
Forbes Technology Council
December 01, 2020
Forbes Technology Council
Gaurav Aggarwal, Vice President, Global Lead for Everything on Cloud Strategy at Avanade Inc. Member of Forbes Councils

See publication

Tags: Cloud, Digital Transformation, Leadership

Thinkers360 Credentials

7 Badges



27 Article/Blogs
Securing Tomorrow: Unleashing the Power of Breach and Attack Simulation (BAS) Technology
September 14, 2023

As the cybersecurity landscape continues to evolve, the challenges associated with defending against cyber threats have grown exponentially. Threat vectors have expanded, and cyber attackers now employ increasingly sophisticated tools and methods. Moreover, the complexity of managing security in today's distributed hybrid/multi-cloud architecture, heavily reliant on high-speed connectivity for both people and IoT devices, further compounds the challenges of #cyberdefense.
One of the foremost concerns for corporate executives and boards of directors is the demonstrable effectiveness of cybersecurity investments. However, quantifying and justifying the appropriate level of spending remains a formidable obstacle for most enterprise security teams. Securing additional budget allocations to bolster an already robust security posture becomes particularly challenging in the face of a rising number of #cyberbreaches, which have inflicted substantial reputational and financial harm on companies across diverse industries.
The modern enterprise's IT infrastructure is an intricate web of dynamic networks, cloud resources, an array of software applications, and a multitude of endpoint devices. These enterprise IT ecosystems are vast and intricate, featuring a myriad of network solutions, a diverse array of endpoint devices, and a mix of Windows and Linux servers. Additionally, you'll find desktops and laptops running various versions of both Windows and macOS dispersed throughout this landscape. Each component within this architecture has its own set of #securitycontrols, making the enterprise susceptible to #cyberthreats due to even the slightest misconfiguration or a shift towards less secure configurations.
In this environment, a simple misconfiguration, or even a minor deviation towards less secure configurations, can provide attackers with the foothold they need to breach an organization's infrastructure, networks, devices, and software. It underscores the critical importance of maintaining a vigilant and proactive approach to cybersecurity in this ever-evolving digital era.
As organizations look for ways to demonstrate the effectiveness of their security spend and the policies and procedures put in place to remediate and respond to security threats, vulnerability testing can be an important component of a security team's vulnerability management activities. There are several testing approaches that organizations use as part of their vulnerability management practices. Four of the most common are listed below:
• Penetration testing: A common testing approach that enterprises employ to uncover vulnerabilities in their infrastructure. A pen test involves highly skilled security experts using the tools and attack methods employed by actual attackers to achieve a specific, pre-defined breach objective. Pen tests cover networks, applications, and endpoint devices.
• Red Teaming: A red team performs "ethical hacking" by imitating advanced threat actors to test an organization's cyber defenses. They employ stealthy techniques to identify security gaps, offering valuable insights to enhance defenses. The results from a red-teaming exercise help identify needed improvements in security controls.
• Blue Teaming: A blue team is an internal security team that actively defends against real attackers and responds to red team activities. Blue teams should be distinguished from standard security teams because of their mission to provide constant and continuous cyber defense against all forms of cyber-attacks.
• Purple Teaming: The objective of purple teams is to align red and blue team efforts. By leveraging insights from both sides, they provide a comprehensive understanding of cyber threats, prioritize vulnerabilities, and offer a realistic APT (Advanced Persistent Threat) experience to improve overall security.
Although these vulnerability testing approaches are commonly used by organizations, there are several challenges associated with them:
• These approaches are highly manual and resource-intensive, which for many organizations translates to high cost and a lack of skilled in-house resources to perform these tests.
• While the outcome of these vulnerability tests provides vital information back to the organization to act on, the tests are performed infrequently, due largely to the cost and lack of skilled resources mentioned previously.
• These methods provide a point-in-time view of an organization's security posture, which is becoming less effective for companies moving to a more dynamic cloud-based IT architecture with an increasing diversity of endpoints and applications.
Traditional vulnerability testing approaches yield very little value because the security landscape and enterprise IT architectures are dynamic and constantly changing. Since testing the cybersecurity posture of organizations is becoming a top priority, demand has increased for the latest and most comprehensive testing solutions. Moreover, it is almost impossible, from a practical standpoint, for multiple enterprise security teams to manually coordinate their work and optimize configurations for all the overlapping systems. Different teams have their own management tasks, mandates, and security concerns. Additionally, performing constant optimizations and manual testing imposes a heavy burden on already short-staffed security teams. This is why security teams are turning to Breach and Attack Simulation (BAS) to mitigate constantly emerging (and mostly self-inflicted) security weaknesses.

Definition - Breach and Attack Simulation (BAS)
Gartner defines Breach and Attack Simulation (BAS) technologies as tools "that allow enterprises to continually and consistently simulate the full attack cycle (including insider threats, lateral movement and data exfiltration) against enterprise infrastructure, using software agents, virtual machines and other means."
BAS tools replicate real-world cyber attacker tactics, techniques, and procedures (TTPs). They assist organizations in proactively identifying vulnerabilities, evaluating security controls, and improving incident response readiness. By simulating these attacks in a controlled environment, organizations gain valuable insights into security weaknesses, enabling proactive measures to strengthen overall #cybersecurity.
BAS automates the testing of threat vectors, including external and insider threats, lateral movement, and data exfiltration. While it complements red teaming and penetration testing, BAS cannot entirely replace them. It validates an organization's security posture by testing its ability to detect a range of simulated attacks using SaaS platforms, software agents, and virtual machines.
Most BAS solutions operate seamlessly on LAN networks without disrupting critical business operations. They produce detailed reports highlighting security gaps and prioritizing remediation efforts based on risk levels. Typical users of BAS technologies include financial institutions, insurance companies, and various other industries.
BAS Primary Functions
Although typical BAS offerings encompass much of what traditional vulnerability testing includes, they differ in one critical way: the testing is automated and continuous. At a high level, the primary functions of BAS are as follows:
• Attack (mimic / simulate real threats)
• Visualize (clear picture of threats and exposures)
• Prioritize (assign a severity or criticality rating to exploitable vulnerabilities)
• Remediate (mitigate / address gaps)
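The four functions above can be sketched as a toy loop. This is a minimal, illustrative Python sketch and not any vendor's BAS product: the scenario names, control labels, and severity scores are all hypothetical, and a real BAS platform would drive software agents and live telemetry rather than a lookup table.

```python
# Illustrative BAS-style loop: Attack -> Visualize -> Prioritize -> Remediate.
# All scenario names, control names, and severities below are hypothetical.

from dataclasses import dataclass

@dataclass
class SimulationResult:
    technique: str   # simulated attacker technique (ATT&CK-style label)
    blocked: bool    # did the existing control stop it?
    severity: int    # 1 (low) .. 10 (critical)

def run_simulations(controls: dict) -> list:
    """Attack: mimic a few threat vectors against the configured controls."""
    scenarios = [
        ("phishing-payload-delivery", "email_filtering", 7),
        ("lateral-movement-smb", "network_segmentation", 9),
        ("data-exfiltration-https", "dlp_inspection", 8),
    ]
    return [SimulationResult(t, controls.get(c, False), s) for t, c, s in scenarios]

def prioritize(results: list) -> list:
    """Prioritize: keep only exploitable (unblocked) findings, ranked by severity."""
    gaps = [r for r in results if not r.blocked]
    return sorted(gaps, key=lambda r: r.severity, reverse=True)

# Visualize / Remediate: report the gaps in priority order.
controls = {"email_filtering": True, "network_segmentation": False, "dlp_inspection": False}
for gap in prioritize(run_simulations(controls)):
    print(f"REMEDIATE severity={gap.severity}: {gap.technique}")
```

Because the loop is automated, it can run on every configuration change rather than as a point-in-time exercise, which is the property that distinguishes BAS from the manual approaches above.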

See blog

Tags: Cybersecurity

Digital Resilience Requires Changes In The Taxonomy Of Business IT Systems
June 02, 2022

We are living in a hyper-digitally dynamic ecosystem. As we move towards a digitally dependent future, the need for Digital Resilience is increasing rapidly.

Digital Resilience helps companies by providing several ways for businesses to use digital tools and systems to recover quickly from crises. Today, digital resilience and supply chain resilience no longer imply merely the ability to manage risk; managing risk now means being better positioned than competitors to deal with disruptions and even gain an advantage from them.

The need for Digital Resilience and sustainable supply chains has undoubtedly brought about a change in business IT taxonomy, enhancing business processes and performance. 

In this article, I intend to highlight these very transformational changes in IT systems that have happened over two decades and new IT systems that enterprises need to develop and transform to become more successful, productive, and efficient. 

Timeline View Of Changes In Taxonomy Of Business IT Systems 


· Systems of Records (SoR)

Systems of Records (SoR) are software solutions that serve as the backbone for business processes.

The power of SoR is that they are the ultimate source and therefore “record” of critical business data. Essentially, SoR can be understood as a data storage and retrieval system for a company that works as an authoritative data source for the entire organization. 

Systems of Records (SoR) are valuable to a company as they become a single source of truth that provides essential insights and information to a company's management teams.

Initially, SoRs were similar to enterprise resource planning (ERP) systems, when on-premises ERPs (like those from Oracle and SAP) were in full use. However, over time, companies started to realize the time and cost inefficiencies of incorporating ERP: it required dedicated IT teams to set up and had a sub-par user interface.

During the last decade, SoR took a turn by incorporating SaaS-based tools such as Workday, SuccessFactors, Salesforce, and Dynamics 365, which have proved somewhat more efficient than their predecessors. SoR remains critical for companies to enforce data integrity.

· Systems of Engagement (SoE)

Systems of Engagement (SoE) were introduced to help employees, customers, and partner ecosystems engage better with Systems of Records (SoR), data, and process flows. SoE are systems that essentially collect data and enable customers, employees, and partners to interact with the business and its associated processes. They are task-based systems or tools used to retrieve specific information and data. Enterprises incorporate SoE not for the software itself, but to introduce new, data-driven processes to talent acquisition, talent management, and business processes for operational excellence.

Organizations are seeing changes in their internal management systems and are now moving away from top-down management toward creating a more agile self-management system. Enterprises usually integrated Systems of Records (SoR) and Systems of Engagement (SoE) for better efficiency. Stand-alone traditional ERPs were becoming more and more expensive to maintain and couldn't keep up with fast-paced digital transformation and innovation. Hence, by integrating SoE (a two-tier approach), organizations could make their business operations much more agile and cheaper.

The idea is simple: 

Systems of Engagement are placed to “engage customers or engage with customers” and are supposed to be designed for flexibility and scalability. In contrast, Systems of Records (ERP, HRMS, ITSM, etc.) are just data repositories. 
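As a rough illustration of that two-tier split, the following Python sketch keeps the SoR as the single source of truth while a thin SoE layer handles user-facing tasks. All class names, method names, and the order example are hypothetical, invented only to show the separation of concerns:

```python
# Toy sketch of the SoR/SoE two-tier split: the SoE engages users,
# the SoR remains the authoritative data store. Names are illustrative.

class SystemOfRecord:
    """Authoritative data store: the single source of truth."""
    def __init__(self):
        self._records = {}

    def commit(self, key, value):
        self._records[key] = value   # only the SoR mutates master data

    def lookup(self, key):
        return self._records.get(key)

class SystemOfEngagement:
    """Flexible, task-based front end that engages users but delegates
    all authoritative reads and writes to the SoR behind it."""
    def __init__(self, sor):
        self.sor = sor

    def place_order(self, customer, item):
        order_id = f"ord-{customer}-{item}"
        self.sor.commit(order_id, {"customer": customer, "item": item})
        return order_id

    def order_status(self, order_id):
        record = self.sor.lookup(order_id)
        return "found" if record else "unknown"

sor = SystemOfRecord()
soe = SystemOfEngagement(sor)
order_id = soe.place_order("acme", "widget")
print(soe.order_status(order_id))   # the SoE engages; the SoR holds the truth
```

The design choice mirrored here is the one the paragraph describes: the engagement layer can be redesigned for flexibility and scale without touching the record layer, because the SoR owns the data.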


As innovation accelerated and businesses started automating and utilizing analytical tools in their operations, the need for different systems grew. As data-driven companies grew, so did the need for high-quality data and data-modeling tools that yield actionable insights for better business decisions. Hence, enterprises began adding Systems of Innovation and Intelligence (SoII) to the IT systems that support decision-making.

· Systems of Innovation and Intelligence (SoII)

Systems of Innovation and Intelligence (SoII) gather data from Systems of Record (SoR) and Systems of Engagement (SoE) and derive insights. They analyze the accumulated data and suggest improvements to enhance business performance and decisions. Earlier, businesses examined and observed SoR and SoE data separately. Now, SoII converges this data to derive better insights that can be used to create better business outcomes or to innovate new products.

According to Forrester's research, businesses are "drowning in data" and failing to gather insights. Big data, agile business intelligence, data analytics, and the like do give enterprises insight, but they solve the problem only partially. The solution lies in companies employing a structured system that harnesses the actual value from gathered data: embedding "closed-loop systems", which are nothing but Systems of Innovation and Intelligence (SoII).

With SoII, enterprises can perform different types of analysis on collected data through predictive analytics, descriptive analytics, cognitive analytics, etc. Although enterprises have incorporated BI and analytics to gather insight, that won't be enough without proper systems that convert those insights into actions. It is crucial to test what works and what doesn't, and to understand the required changes. With data, you need to act intelligently.
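To make the closed loop concrete, here is a minimal, hypothetical Python sketch: descriptive analytics summarize what happened, a deliberately naive predictive step extrapolates the trend, and the resulting insight is converted directly into an action rather than left as a report. The order numbers, function names, and capacity threshold are invented for illustration.

```python
# Minimal "closed-loop" SoII sketch: describe -> predict -> act.
# Weekly order counts are assumed to flow in from SoR/SoE systems.

from statistics import mean

def descriptive(orders):
    """Descriptive analytics: summarize what happened."""
    return {"avg": mean(orders), "latest": orders[-1]}

def predictive(orders):
    """Naive predictive analytics: extrapolate the last observed change."""
    return orders[-1] + (orders[-1] - orders[-2])

def act_on_insight(orders, capacity):
    """Close the loop: turn the forecast into a concrete decision."""
    forecast = predictive(orders)
    return "scale-up" if forecast > capacity else "steady-state"

weekly_orders = [120, 135, 150, 170]
print(descriptive(weekly_orders))          # summary of past demand
print(act_on_insight(weekly_orders, 180))  # forecast exceeds capacity -> act
```

A real SoII would replace the naive extrapolation with proper predictive models; the point of the sketch is the final step, where the insight triggers an action instead of stopping at a dashboard.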

Benefits of Systems of Innovation and Intelligence (SoII) 

  1. Better Customer Experience and Employee Experience: SoII enables businesses to offer customization and hyper-personalization in interactions with stakeholders (customers, employees, vendors) by providing a continuous understanding of their data and usage patterns and, most importantly, proactive communication of activities likely to occur.
  2. Data-Driven Decisions: SoII helps businesses discover essential insights and analyze, measure, and learn from the results. SoII adds value to the equation by supporting the other systems with the assistance of AI to refine data, grow more imaginative, and take the decision-making journey ahead. 
  3. Reduced cost: Through continuous improvement by utilizing the correct data, a business can avoid significant failures or anomalies. Mistakes can be prevented, saving time, effort, and money. Although setting up SoII may be expensive, the cost of business failures is much higher; the payoff is what matters here.
  4. Faster Time to Market
  5. Innovation Catalyst: Data- and insight-driven decisions are the key ingredients to the success of any digital business today. SoII frees digital businesses from worrying about converting data into insights, so they can allocate that time and energy to innovating and scaling their business.
  6. Insight-to-execution process: SoII is generally delivered by Agile teams, both technical and business-oriented, who collaborate to turn analytical insights into better decisions. This closed-loop system of continuous improvement is what gives a competitive advantage in today's ever-evolving technological landscape.

What Needs To Change Going Forward?  

To be successful in the new normal world, it is crucial to adapt to the fast-paced digital transformation and take a holistic approach to enhance business agility and performance. Here’s what needs to change going forward, in my opinion:

· Systems of Records (SoR) → Systems of Records and Intelligence

Systems of Records (SoR) should modernize with a cloud-first approach to transform them into Systems of Records and Intelligence. 

Modernizing an existing SoR into a System of Records and Intelligence will require a view of the workloads from an underlying infrastructure perspective, along with core application architecture characteristics, to determine suitability to operate on the cloud. But a cloud-first approach will leverage a rich ecosystem of services from the cloud marketplace, enabling rapid development of SoE applications.

· Systems of Engagement (SoE) → Systems of Engagement and Experience (SoEE)

Systems of Engagement should now evolve to offer different experiences for employees and customers, at least from the perspective of:

  1. different generations (Gen Z, Gen X)
  2. new experiences driven by Augmented Reality (AR) and Virtual Reality (VR)
  3. various form factors and Natural User Interfaces

It is essential to know whether employees and customers can seamlessly adapt to these changes. For example, employees of different generations (Gen Z to Gen X) engage through different form factors and experience mediums, so organizations must offer them different experiences; this can happen by refining Systems of Engagement into new Systems of Engagement and Experience.

· Systems of Innovation (SoI)

Systems of Innovation (SoI) should now define the enterprise's future. Enterprises need to focus more on establishing a System of Innovation (SoI) with a cloud-first mindset. SoI should enable fast-paced innovation for developing new Sustainable and Resilient Products and Services.

· Systems of Insights and Compliance (SoIC)

Enterprises need to ensure they leverage Data as Assets and implement Systems of Insights to support Data-Driven Decision making across the entire Supply Chain and the broad spectrum of business processes and functions. Today enterprises are under immense pressure from Regulatory Authorities, Cyberattacks, and the pivot in customer buying patterns to prefer Trusted, Responsible and Sustainable Products. This effectively means enterprises need to look at Security and Compliance by design – which is best implemented by transitioning to Systems of Insights and Compliance.

· System of Knowledge and Learning (SoKL)

Skills and Talent are the new currency of business. It is critical to capture knowledge and experiences across the company, both to improve productivity and to achieve faster time to market.

Many enterprises have implemented Learning Management Systems (LMS) in some shape or form. Still, these were seen as secondary systems for Talent Retention and tracking employee training. To succeed in this new era of fast-paced innovation, businesses should now plan to implement Systems of Knowledge and Learning (SoKL).

SoKL reduces the costs of inefficiency by making company knowledge more available, accessible, and accurate. 

Benefits of Systems of Knowledge and Learning

Some of the benefits of Systems of knowledge and learning are:

  1. Employee Productivity
  2. Talent Acquisition and Retention
  3. Shorter cycle from Hire to Job Ready stage
  4. Making knowledge available to facilitate more innovative product and service development, and giving the workforce easy access to relevant ideas, knowledge, job aids, and connections to Subject Matter Experts
  5. Facilitate the creation of an active and vibrant knowledge-sharing community network 
  6. Better Customer experience powered by re-use of knowledge and better employee experience 
  7. Better management of innovation, learning, and problem-solving


Digital transformation has revolutionized businesses and IT Systems that support decision-making. Building digital Resilience into every aspect of IT infrastructure, systems, and software will enable organizations to rapidly meet changing market and customer needs and create sustainable competitive advantages in this new reality. 

To be successful in this hyper-dynamic digital ecosystem, enterprises need to re-examine their Business IT systems and transform the outdated Systems of Records and Systems of Engagement taxonomy into differentiated Systems of Records and Intelligence, Systems of Engagement and Experience (SoEE), Systems of Innovation (SoI), Systems of Insights and Compliance (SoIC), and Systems of Knowledge and Learning (SoKL).


Tags: Cloud, Digital Transformation, Digital Twins

How AI Democratization Helped Against COVID-19
June 01, 2022

AI not only helped in data gathering but also in data processing, data analysis, number crunching, genome sequencing, and making the all-important automated protein molecule binding predictions. AI’s use will not end with the vaccine’s discovery and distribution; it will also be used to study the side effects of the billions of vaccinations.

Many countries have rolled out coronavirus vaccines and many are conducting dry runs to check the preparedness for vaccination drives. The World Health Organisation has extended emergency use approval to the Pfizer/BioNTech vaccine. This has paved the way for developing countries, which do not have the infrastructure to run vaccine trials. They can now begin immunizing their populations against Covid-19.

The world was quick to realize the importance of coming together to share genome sequencing data and other technical know-how, which accelerated the pace of vaccine development. However, this would have been impossible without the presence of cloud computing and Artificial Intelligence (AI).

AI helped not only in data gathering but also in data processing, data analyses, number crunching, genome sequencing, and making the all-important automated protein molecule binding prediction.

Coronavirus, as we know, is a cousin of Severe Acute Respiratory Syndrome (SARS), which caused many deaths over a decade ago. Researchers predicted that the pathogen may have been transmitted through animals then. These kinds of predictions could only be made with the help of AI.

Sanjay Sehgal, Chairman and CEO, MSys Group said, “In case of coronavirus, the first prediction was done by a Canadian firm BlueDot, which specializes in infectious disease investigation through AI. The firm used its AI-powered system to go through animal implant disease networks. It also used AI to collect information and predict the outfall of the virus and warned its clients to retrain their travel activities much before governments declared it officially.”

CNBC reported that BlueDot had spotted COVID-19 nine days before the World Health Organisation released its statement alerting people to the emergence of a novel coronavirus. AI and cloud computing have, in fact, been helping the pharma sector for some time now.

Gaurav Aggarwal, VP, Global Cloud Solutions Strategy and GTM Lead, Avanade, stated that the democratization of AI and Machine Learning (ML) in the public cloud has revolutionized science and engineering. The pharma industry is slowly maturing to leverage the same. “The advent of AI as an adaptive and predictive technology coupled with democratization of AI/ML, Augmented Reality/Virtual Reality (AR/VR) technology by public cloud providers such as Microsoft, Google, AWS offers the possibility for radical optimization of core research, business processes, reshaping market opportunities for pharmaceutical companies and challenging the status quo on access to affordable medicine worldwide,” he added.

Speed Matters 

The process of drug discovery requires running complex mathematical models of behavior using high-performance computing (HPC). Modern data analysis tools, such as cloud and AI, accelerated the process of identifying molecular stimulators for further evaluation.

These tools helped in the search for antibodies that would prevent and fight coronavirus. Additionally, AI-powered research databases such as the COVID-19 Open Research Dataset (CORD-19) helped researchers in their studies.

These technologies aren’t 100% accurate, though. In 2008, Google launched an AI-powered service to track flu outbreaks by tracing people’s search queries. The data collated comprised people’s supermarket purchases, browsing patterns, and the theme and rate of private messages.

“Though Google’s AI service predicted the flu outbreak much before government and its agencies, its reports had to be pulled down after being found that the service had been consistently over-estimating the pervasiveness of the disease,” Sehgal pointed out.

Based on case studies such as these, it should be noted that AI-run algorithms can help distill the huge amounts of data from many experiments, uncovering patterns that a human brain might miss, but in the end AI still cannot predict the success of a vaccine on humans. “We will have to wait and watch how the vaccine and its effects unfold,” Sehgal said.

While we can never expect overnight success when dealing with something as complex as vaccine development, we can act now by using AI, ML, and public cloud to optimize the overall process and remove some of the constraints and bottlenecks. “Amplifying progress in creating new medications for diseases is among the most profound near-term objectives of AI and Covid-19 vaccines availability in less than 12 months, is an example of how AI can help in crisis response,” said Aggarwal. 

The use of AI is not going to end with the discovery and distribution of the vaccine. It is also going to be used to study the side effects of the billions of vaccinations. Fortune recently reported that the UK arm of Genpact has been asked to design a machine learning system that can ingest reports of side effects and pick up on potential safety concerns.

Using AI to study the side effects of drugs has been a focus of academic researchers for several years. Many governments, apart from the UK, are also using AI to study coronavirus vaccine side effects. The quick rollout of the vaccine has proved to be a huge success for AI and ML, as it has paved the way for greater use of these tools in the health and pharma sector.


Tags: Cloud, COVID19, Digital Twins

Systematic and Chaotic Testing: A Way to Achieve Cloud Resilience
May 27, 2022

In today’s digital technology era, where downtime translates to shutdown, it is imperative to build resilient cloud structures. During the pandemic, for example, IT maintenance teams could no longer be on-premises to reboot a server in the data center. When on-premises hardware is down, this becomes a big hindrance to accessing data or software, halting productivity and creating overall business loss. The solution is to transition IT operations to cloud infrastructure, backed by 24/7 tech support from remote team members. Cloud essentially poses as a savior here.

Recently, companies have been utilizing the full potency of the cloud; hence, observability and resilience of cloud operations become imperative, as downtime now equates to disconnection and business loss.

A cloud failure in today’s technology-driven business economy would be disastrous. Any fault or disruption can lead to a domino effect, hampering the company’s system performance. Hence, it becomes essential for organizations and companies to build resilience into their cloud structures through chaotic and systematic testing. In this blog, I will take you through what resilience and observability mean, and why resilience and chaos testing are vital to avoid downtime.

To avoid cloud failure, enterprises must build resilience into their cloud architecture by testing it in continuous and chaotic ways.

1.) Observability

Observability can be understood through two lenses. One is control theory, which defines observability as the process of understanding the state of a system by inference from its external outputs. The other lens treats observability as a discipline and approach built to gauge uncertainties and unknowns.

It helps us understand the properties of a system or an application. For cloud computing, observability is a prerequisite that leverages end-to-end monitoring across various domains, scales, and services. Observability shouldn’t be confused with monitoring: monitoring tells you when something goes wrong, whereas observability helps you understand why it went wrong. They each serve a different purpose but certainly complement one another.

Observability, along with resilience, is needed for cloud systems to ensure less downtime, faster application velocity, and more.  
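As a rough illustration of the monitoring-versus-observability distinction, consider this minimal Python sketch (all service names, thresholds, and fields here are hypothetical): a bare threshold check tells you when latency is bad, while a structured event carries the context needed to reason about why.

```python
import json
import time

def emit_event(name, **context):
    """Emit a structured event; rich context is what lets you ask 'why', not just 'when'."""
    record = {"event": name, "ts": time.time(), **context}
    print(json.dumps(record, sort_keys=True))

def check_latency(service, latency_ms, threshold_ms=500):
    # Monitoring: a simple threshold tells you *when* something is wrong...
    if latency_ms > threshold_ms:
        # ...observability adds the surrounding context needed to ask *why*.
        emit_event("latency_breach", service=service,
                   latency_ms=latency_ms, threshold_ms=threshold_ms,
                   region="us-west", dependency="payments-db")
        return True
    return False
```

In practice, these structured events would flow into a tracing or log-analytics backend rather than stdout, but the principle is the same: capture enough context at emission time to answer questions you did not anticipate.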

2.) Resilience

Resilience answers questions such as:

  • Is it on/reachable?
  • Will it work the way it should, consistently and when we need it to?
  • Is it reliably accessible from anywhere, at any time?
  • How does the system respond to challenges so that it's reliably available?

Every enterprise migrating to cloud infrastructure should test its systems for stability, reliability, availability, and resilience, with resilience at the top of the hierarchy. Stability ensures that systems and servers do not crash often; availability ensures system uptime by distributing applications across different locations to ease the workload; reliability ensures that cloud systems function efficiently and remain available. But if the enterprise wants to tackle unforeseen problems, then constantly testing resilience becomes indispensable.

Resilience is the expectation that something will go wrong and that the system is tested in a way to address and maneuver itself to tackle that problem. The resilience of a system isn’t automatically achieved. A resilient system acknowledges complex systems and problems and works to progressively take steps to counter errors. It requires constant testing to reduce the impact of a problem or a failure. Continuous testing avoids cloud failure, assuring higher performance and efficiency.

Resilience can be achieved through site-resilient design and systematic testing approaches such as chaos testing.

Conventional Testing and Why It Is Not Enough

Conventional testing ensures a seamless setup and migration of applications into cloud systems and additionally monitors that they perform and work efficiently. This is adequate to ensure that the cloud system does not change application performance and functions in accordance with design considerations.

Conventional testing doesn’t suffice, as it is inefficient at uncovering hidden architectural issues and anomalies. Some faults remain dormant, becoming visible only when specific conditions are triggered.

High Availability Promises of Cloud

“We see a faster rate of evolution in the digital space. Cloud lets us scale up at the pace of Moore’s Law, but also scale out rapidly and use less infrastructure,” says Scott Guthrie on the future and high promises of cloud. As a result of the pandemic and everyone being forced to work from home, there has been a surge in cloud usage. Due to this unprecedented demand, all hyperscalers had to bring in throttling and prioritization controls, which is against the on-demand elasticity principle of the public cloud.

The public cloud isn’t invincible when it comes to outages and downtime. For example, the recent Google outage that halted multiple Google services like Gmail and YouTube showcases how the public cloud isn’t necessarily free of system downtime either. Hence, I would say the pandemic has added a couple of additional perspectives to resilient cloud systems:

  1. The system must operate smoothly and unaltered even when it receives an unexpected surge in online traffic.
  2. The system must look for alternate ways to manage the functionality and resource pool in case additional resource allocation requests are declined or throttled by the cloud provider.
  3. The system should be accessible and secure to handle unknown locations and the shift to hybrid work environments (with many endpoints connecting from outside the network firewall).

The pandemic has highlighted the value of continuous and chaotic testing of even resilient cloud systems. A resilient and thoroughly tested system will be able to manage that extra congested traffic in a secure, seamless, and stable way. In order to detect the unknowns, chaos testing and chaos engineering are needed.

Cloud-Native Application Design Alone Is Not Sufficient for Resiliency

In the public cloud world, architecting for application resiliency is more critical due to the gaps in base capabilities provided by cloud providers, the multi-tier/multiple technology infrastructure, and the distributed nature of cloud systems. This can cause cloud applications to fail in unpredictable ways even though the underlying infrastructure availability and resiliency are provided by the cloud provider.

To establish a good base for application resiliency, cloud engineers should adopt the following strategies during design to test, evaluate, and characterize application-layer resilience:

  1. Leverage a well-architected framework for overall Solution Architecture and adopt the cloud-native capabilities for availability and disaster recovery.
  2. Collaborate with cloud architects and technology architects to define availability goals and derive application and database layer resilience attributes. 
  3. Along with threat modeling, define hypothetical failure models based on expected or observed usage patterns, and establish a testing plan for these failure modes based on business impact.

By adopting an architecture-driven testing approach, organizations can gain insight into the base level of cloud application resiliency well before going live, allowing them to allot sufficient time for remediation activities. But you would still need to test the application for unknown failures and the multiple failure points of cloud-native application design.
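As one illustrative application-layer resilience pattern (a sketch under assumed names, not a prescribed implementation), a retry with exponential backoff and jitter absorbs transient faults that would otherwise surface as user-visible failures:

```python
import random
import time

def with_retries(op, attempts=3, base_delay=0.1):
    """Retry a flaky operation with exponential backoff and jitter.

    A basic application-layer resilience pattern: transient dependency
    errors are retried with increasing delays before being surfaced.
    """
    for attempt in range(attempts):
        try:
            return op()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # retries exhausted; surface the failure to the caller
            # Exponential backoff (2**attempt) with random jitter to avoid
            # synchronized retry storms against a recovering dependency.
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)
```

Patterns like this (along with circuit breakers and bulkheads) complement, but do not replace, the architecture-driven failure-mode testing described above.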

Chaos Testing and Engineering

Chaos testing is an approach that intentionally induces stress and anomalies into the cloud structure to systematically test the resilience of the system.

Firstly, let me make it clear that chaos testing is not a replacement for conventional testing; it is another way to surface errors. By introducing degradations into the system, IT teams can see what happens and how it reacts. Most importantly, it helps them gauge the gaps in the observability and resilience of the system: the things that initially went under the radar.

This robust testing approach was pioneered by Netflix during its migration to the cloud back in 2011, and the method has since become well established. Chaos testing brings inefficiencies to light and pushes the development team to change, measure, and improve resilience, and it helps cloud architects better understand and refine their designs.

Constant, systematic, and chaotic testing increases the resilience of cloud infrastructure, which ultimately boosts the confidence of managerial and operational teams in the systems they’re building.

A resilient enterprise must create resilient IT systems partly or entirely on cloud infrastructure.

Using chaos and site reliability engineering helps enterprises to be resilient across:

  • Cloud and infrastructure resilience
  • Data resilience via continuous monitoring.
  • User and customer experience resilience by ensuring user interfaces hold up under high-stress conditions
  • Resilient cybersecurity by integrating security with governance and control mechanisms
  • Resilient support for infrastructure, applications, and data

To establish complete application resiliency, in addition to the cloud application design aspects mentioned earlier, the solution architect needs to adopt architecture patterns that allow injecting specific faults to trigger internal errors, simulating failures during the development and testing phase.

Some common examples of fault triggers are delays in response, resource hogging, network outages, transient conditions, and extreme actions by users. With those in place, the approach is to:

  1. Plan for continuous monitoring and management, and automate the incident response for common identified scenarios
  2. Establish chaos testing framework and environment
  3. Inject faults with varying severity and combination and monitor application-layer behavior
  4. Identify anomalous behavior and iterate the above steps to confirm criticality
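The fault-injection loop in steps 2 and 3 can be sketched in Python as follows. This is a toy harness with hypothetical names; real chaos tooling injects faults at the infrastructure layer rather than wrapping functions, but the idea of varying severity and observing the failure rate is the same:

```python
import random

def fail_randomly(op, rate):
    """Wrap an operation so it fails at a given rate, simulating a
    transient dependency outage of configurable severity."""
    def wrapped(*args, **kwargs):
        if random.random() < rate:
            raise ConnectionError("injected fault")
        return op(*args, **kwargs)
    return wrapped

def _raises(op):
    # Helper: did a single invocation of the faulty operation fail?
    try:
        op()
        return False
    except ConnectionError:
        return True

def run_experiment(op, severities, trials=100):
    """Inject faults of varying severity and record the observed
    failure rate at each level (step 3: monitor behavior)."""
    results = {}
    for name, rate in severities.items():
        faulty = fail_randomly(op, rate)
        errors = sum(1 for _ in range(trials) if _raises(faulty))
        results[name] = errors / trials
    return results
```

Comparing observed failure rates against the application's expected degradation at each severity level is what feeds step 4, identifying anomalous behavior.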

How to Perform the Chaos Test

Chaos testing can be done by introducing an anomaly into any of the seven layers of the cloud structure, which helps you assess the impact on resilience.

When Netflix announced its resiliency tool, Chaos Monkey, in 2011, many development teams adopted it for chaos engineering. Another tool, Gremlin, does essentially the same thing. But if you’re looking to perform a chaos test in the current context of COVID-19, you can do so using a GameDay. A GameDay simulates an anomaly such as a sudden increase in traffic; for example, many customers accessing a mobile application at the same time. The goal of a GameDay is not just to test resilience but also to enhance the reliability of the system.

The steps you need to take to ensure successful chaos testing are the following:

  1. Identify: Identify key weaknesses within your system and create a hypothesis along with an expected outcome. Engineers need to identify and assess what kind of failures to inject within the hypothesis framework.
  2. Simulate: Inject anomalies during production based on real-life events. This ensures that you include situations that may happen within your systems. This could entail an application or network disruption or node failure.
  3. Automate: You need to automate these experiments, which could run every hour, week, etc. This ensures continuity, a critical factor in chaos engineering.
  4. Continuous feedback and refinement: There are two outcomes to your experiment. It could either assure resilience or detect a problem that needs to be solved. Both are good results from which you can take feedback to refine your system.
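The four steps above can be condensed into a hypothesis-driven experiment runner, sketched here in Python with hypothetical callbacks (a real implementation would drive infrastructure APIs rather than in-process functions):

```python
def run_chaos_experiment(hypothesis, steady_state, inject_fault, rollback):
    """Hypothesis-driven chaos experiment: verify steady state,
    inject a fault, observe whether the hypothesis survives,
    then always roll back (steps 1-4 above)."""
    if not steady_state():
        # Never run experiments against a system that is already unhealthy.
        return {"hypothesis": hypothesis, "run": False, "survived": None}
    try:
        inject_fault()
        survived = steady_state()  # does the system still meet its steady state?
    finally:
        rollback()  # restore the system regardless of the outcome
    return {"hypothesis": hypothesis, "run": True, "survived": survived}
```

Both outcomes are useful: a surviving hypothesis builds confidence, and a failed one is exactly the feedback that step 4 feeds back into refining the system.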

Other specific ways to induce faults in the system include:

  1. Adding network latency
  2. Cutting off scheduled tasks
  3. Cutting off microservices
  4. Disconnecting system from the datacenter
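As a toy illustration of the third item (all names here are hypothetical), "cutting off" a microservice can be simulated by swapping its client for a failing stub and then checking that callers degrade gracefully instead of failing outright:

```python
class ServiceUnavailable(Exception):
    pass

def cut_off(registry, name):
    """Simulate cutting off a microservice by replacing its
    client in the service registry with a failing stub."""
    def down(*args, **kwargs):
        raise ServiceUnavailable(name)
    registry[name] = down

def get_recommendations(registry, user):
    # A resilient caller degrades gracefully when its dependency
    # has been cut off, rather than propagating the outage.
    try:
        return registry["recommender"](user)
    except ServiceUnavailable:
        return []  # fallback: empty recommendations rather than an error page
```

The same pattern, applied at the network layer with real chaos tooling, is how teams verify that losing one microservice does not cascade into a full outage.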


In today’s digital age, where cloud transition and cloud usage are surging, it is imperative to enhance cloud resilience for the effective performance of your applications. Continuous and systematic testing is imperative not only in the life cycle of a project, but also to ensure cloud resiliency at a time when even the public cloud is overburdened. By preventing lengthy outages and future disruptions, businesses save significant costs, preserve goodwill, and assure service durability for customers. Chaos engineering, therefore, becomes a must for large-scale distributed systems.


Tags: Climate Change, Cloud, Digital Twins

Changing The AI Landscape Through Professionalization
May 24, 2022

Artificial Intelligence has been the talk of the town for a while. But why does AI matter? How can an organization successfully scale AI? And what role does professionalization play in successful AI deployment? Read this blog to learn more about the data-driven AI landscape.

In the past three years, we have seen companies spend more than $300B on AI applications, and this has turned a spotlight on the AI landscape, making it a high-stakes business priority. 

According to a Forrester report, organizations that scale AI are 7x more likely to be the fastest-growing businesses in their industry. In a study by Accenture, it was found that 75% of global executives believe that if they don’t scale AI, they risk going out of business in just 5 years.

While scaling AI is crucial, most companies are still running pilots and experiments, struggling to achieve the value they expected.

So, Why Does AI Matter? 

In a study by Accenture, 84% of C-suite executives recognize the need to leverage AI to achieve their business goals. AI applications, with the help of machine learning and deep learning, can utilize the data in real-time and adapt to new changes to ensure that the business benefit is compounded. In this way, AI enables businesses to ensure agility with a regular stream of insights to drive innovation and competitive advantage.

As the innovation in the AI landscape progresses, we are inching towards an era where algorithms tell us all about our tastes and preferences. Looking at this, we can say that AI in a leadership position doesn’t seem like a wild fantasy anymore.

The Covid-19 pandemic has left organizations vulnerable and has exposed gaps in their daily operations. This has amplified the need for real-time insights and opened our eyes to the gaps in the capabilities to access, mobilize, and utilize data. Additionally, the air around AI is still not clear, and this is causing challenges for business leaders who are geared up to scale with AI but are yet to introduce their teams to the “scale or fail” approach.

How Do We Scale AI?

To scale business processes, organizations must cultivate confidence in AI and design the right governance structure to allow an ethical collaboration between humans and machines. Additionally, it is important to define business and technical challenges that AI can help solve, and the efficiencies for stakeholders across organizations that AI can help achieve. Based on these, C-suite executives should prioritize the following technology and human capital investments to achieve their long-term goals:

1. Establish a Data-Driven Culture and AI Strategy

Here, we talk about one of the most critical investments for an organization: Human Capital. It is necessary to create a company of believers, and for that, an organization needs to rally everyone around its goal of data-driven reinvention.

Organizations should work with data architects, business owners, and solution architects to develop an AI strategy underpinned by data strategy and data taxonomy, analyzing the value that the company can and wishes to create. After all, “Establishing a Data-Driven culture is the key—and often the biggest challenge—to scaling artificial intelligence across your organization.”

While your technology enables business, your workforce is the essential driving force. It is crucial to democratize data and AI literacy by encouraging skilling, upskilling, and reskilling. Employees would need to shift their mindset from experience-based, leadership-driven decision making to data-driven decision making, augmenting their intuition and judgment with AI algorithms’ recommendations to arrive at better answers than either humans or machines could reach on their own.

My recommendation would be to carve out “System of Knowledge and Learning” as a separate stream in overall Enterprise Architecture, along with System of Records, Systems of Engagement and Experiences, Systems of Innovation and Insight.

AI and data literacy will help in increasing employee satisfaction because the organization is allowing its workforce to identify new areas for professional development. This culture aims to educate employees to adopt an “out of the box” approach to facing rapid and unprecedented changes. 

2. Choose a Simple Technological Ecosystem

Clients today need organizations that value simplification of their system and vendor ecosystem. Enterprises should prioritize choosing the right AI/ML technology provider partner, like Microsoft, with a capable partner and ISV ecosystem. To simplify these ecosystems, an organization needs to identify the functional gaps that exist, evaluate the applications that align with the business strategy, and streamline the infrastructure for ongoing operations. 

3. Leveraging a “Universal Language of Business”

Who doesn’t hate a typical case of Chinese whispers? Organizations need to define a common taxonomy of business terms, including KPIs, ORA, leading indicators, and the domain model. This avoids the need for an interpreter between two different users, so that everyone in the business (including the extended partner ecosystem in the supply chain) speaks the same language and makes the right decisions without confusion. This unified taxonomy should be pushed consistently across the “System of Knowledge and Learning”, System of Records, Systems of Engagement and Experiences, and Systems of Innovation and Insight.

4. Reduce Data “Noise” to Capture the Right Information

More data is not always better. In a world where data is proliferating and data begets more data, it can be tempting to gather more and more. Having a strong data strategy ensures you’re curating the right data to deliver the desired outcome and then capturing its insights to fuel an AI strategy that delivers that outcome at speed and scale.

5. Recognizing the Need to Professionalize AI

In a study by Accenture, three out of four C-suite leaders believed that if they fail to scale AI in the coming years, they will risk their business. As professionalization is the precursor to successful AI scaling, this has encouraged organizations to employ professionalization techniques like establishing multidisciplinary teams and clear lines of accountability. 

The pandemic has sharpened the contrast between those who have professionalized their AI capabilities and those who have not, fueling the need for AI scaling. Businesses are competing with each other to embrace new data capabilities and return to sustainable growth, which is possible through successful professionalization. 

Explore the Benefits of Professionalization

a.) When organizations adopt a professionalized approach of deploying trained, interdisciplinary teams to work on these applications, they can successfully maximize the value of their AI investment.

b.) Professionalization helps organizations to achieve consistency in results when performing the same or similar actions in the future. Trained data practitioners build cutting-edge technologies across use cases by leveraging repeatability. 

c.) Professionalization of AI processes contributes to making technological applications more ethical and transparent. This helps in building a culture that encourages trust. Companies need accountable processes to leverage successful responsible AI.

6. Leadership Training 

There is a lack of consensus among our world leaders, and we are not paying enough attention to training them. This includes good leadership education for our business leaders, our political leaders, and our societal leaders. While scaling AI, many executives struggle to make sense of the business cases for how AI can bring value to their organizations. These leaders follow the herd of contemporaries who cite surveys highlighting the importance of engaging in AI adoption, but building a unique business case is not among their priorities. 

The need of the hour dictates that our leaders be able to adapt and stay agile to cope with unprecedented circumstances. Leadership needs to define AI value for today, with a vision for tomorrow.

7. Exploring Composite AI

AI will become the new co-worker. It will be critical for organizations to clearly define where in the loop of the business process they should automate, where they should depend solely on machines, and where they should ensure collaboration between humans and machines, so that automation and the use of AI don’t lead to a work culture where humans feel like subordinates of the machines. Humans believe in building a culture where they communicate and represent the values of the company to create business value.

Leadership is about dealing with change. You need to understand what it means to be human: you can have human concerns, you can be compassionate, and you can be humane. At the same time, leaders should be able to imagine strategies for collaboration between machines and humans. This collaboration will be used to build strategies to combat the unprecedented and to brainstorm ways in which processes can be adjusted to create the same value. A leader needs to be able to make an abstraction of this, and AI cannot.

With a long-term view, some of the other aspects that organizations need to plan for when scaling AI are: 

a.) Transition from siloed work to interdisciplinary collaboration, where business, operational, IT, and analytics experts work side by side, by bringing a diversity of perspectives and ensuring initiatives address organizational priorities.

b.) Establish a strong AIOps practice for managing the processes of developing, deploying, and governing AI 

c.) Shift from traditional, rigid, and risk-averse leader-only decision making to an agile, experimental, and adaptable mindset by creating a minimum viable product in weeks rather than months and embracing a test-and-learn mindset.

d.) Define and follow the Ethical AI framework and principles

e.) Ensure Data Security and Trust in the data

f.) Organize for scale – divide key roles between a central “Analytics Hub” (typically led by a chief data officer) and “spokes” (business units, functions, or geographies).

g.) Reinforce the change – With most AI transformations taking 2-3 years to complete, leaders must also take steps to keep the momentum for AI going during the journey by tracking the adoption, celebrating small successes, and providing incentives for change.

The AI landscape is dynamic thanks to constant technological innovation, and C-suite executives recognize the need to leverage AI for a data-driven reinvention. The secret to scaling AI is cultivating confidence in it and designing the right governance structure to allow ethical collaboration between humans and machines.

Professionalization is an integral part of scaling your AI and data practices. Enterprises that have leveraged professionalization to scale their AI processes are leading their industry when compared to their contemporaries who are still deliberating over ways to adopt responsible AI. By a clear understanding of what professionalization can do for the AI landscape, exploring the benefits, and employing correct leadership who can successfully delegate composite AI, an organization can make a considerable mark in the field of technological innovations.

How does your organization employ professionalization to scale AI processes?

See blog

Tags: AI, Cloud, Digital Twins

Edge Computing: The Future of Cloud
April 25, 2022

IDC forecasts the global edge computing market will reach $250 billion by 2024, with compound annual growth of 12.5%. No wonder the industry is talking about Edge Computing.

Edge computing is one of the “new revolutionary technologies” that can transform organizations wanting to break free from the limitations of traditional cloud-based networks. The next 12–18 months will prove to be a natural inflection point for edge computing, as practical applications are finally emerging where this architecture can bring real benefits.

91% of our data today is created and processed in centralized data centers. Cloud computing will continue to contribute to businesses in terms of cost optimization, agility, resiliency, and as an innovation catalyst. But in the future, the “Internet of Behaviors (IoB)” will power the next level of growth, with endless new possibilities to re-imagine products and services, user experiences, and operational excellence. The IoB is one of the most sought-after and spoken-about strategic technology trends of 2021. As per Gartner, the IoB has ethical and societal implications depending on the goals and outcomes of individual uses, and it is concerned with utilizing data to change behaviors. For instance, data gathered from connected technologies can influence behaviors through feedback loops, as seen during COVID-19 protocol monitoring.

IoT, IIoT, AI, ML, Digital Twin, and Edge computing are at the core of the Internet of Behaviors. As per Gartner’s research, about 75% of all data will require analysis and action at the Edge by 2022. Organizations have been debating what separates edge computing from traditional data processing solutions; whether it is right for their business, and to what extent, is a hot topic.

The foundational principles of edge computing are relatively simple to comprehend, but understanding its benefits can be complex. Edge computing can provide a direct on-ramp to a business’ cloud platform of choice and the flexibility needed for a seamless IT infrastructure.

What Is Edge Computing?

It is a distributed computing model in which computation is performed close to the physical location where data is collected and analyzed, rather than relying on a centralized server or the Cloud. The improved infrastructure uses sensors to gather data, while edge servers safely process data on-site in real-time.

By miniaturizing processing and storage technology, the network architecture landscape has experienced a massive shift in the right direction, and businesses can worry less about data security. Present-day IoT devices can gather, store, and process far greater volumes of data than they could before. This creates more opportunities for businesses to integrate and update their networks, relocating processing functions close to where data is gathered at the network edge so it can be assessed and applied in real-time, closer to the intended users.

Why is Edge Computing Relevant Now?

Edge computing is essential now because it enables global businesses to improve operational efficiency, boost performance, and ensure data safety. It will also facilitate the automation of core business processes and bring about the “always-on” capability. Edge computing holds the key to total digital transformation, enabling business to be conducted more efficiently.

Edge technology is relevant today as it’s empowered by new technologies such as 5G, Digital Twin, and Cloud-native Application, Database, and Integration platforms.

Key Edge Enablers

1. 5G — Speed and Low Latency

By 2025, we will witness 1.2 billion 5G connections covering 34% of the global population. Highly reliable low latency is the new currency of the networking universe, underpinning capabilities in many industries that were previously impossible. With 5G, we’ll see a whole new range of applications enabled by low latency and the proliferation of edge computing, transforming the art of the possible.

Moreover, Private 5G Networks will fuel Edge computing and push enterprises to the Edge. Forrester sees immediate value in private 5G — a network dedicated to a specific business or locale like a warehouse, shipyard, or factory.

2. Need for Near Real-Time Response

Response time is an absolute necessity for AI/ML-powered solutions, especially those deployed in remote locations or serving users on the move. Even milliseconds of delay in the algorithms of a hospital’s remote patient monitoring system could cost someone their life. Companies that render data-driven services cannot afford to lag in speed, as doing so can have severe consequences for brand reputation and the customer’s quality of experience.

3. Containers

Container technologies like Docker, together with orchestrators like Kubernetes, allow companies to run prepackaged software containers quickly, reliably, and efficiently. Armed with these technologies, companies can set up and scale Micro Clouds wherever and however they want.

4. Service and Data mesh

Service and Data mesh provide a channel to release and query data or services distributed across datastores and containers at the Edge, making them critical enablers. They also allow bulk queries across the entire population of edge devices rather than querying each device individually.

5. Software-Defined Networking (SDN)

Software-defined networking enables the configuration of the overlay networks by users, making it simpler to customize routing and bandwidth to determine a way to connect edge devices and the Cloud.

6. Digital Twin

The digital twin is a crucial enabler responsible for organizing physical-to-digital and cloud-to-edge, letting domain experts (not just software engineers) configure their applications to observe, think, and act at the Edge.

7. Maturity and adoption of IIoT for OT and IT convergence

The maturity of IIoT platforms and Edge AI pave the way for IT-OT convergence, thereby offering an innovation advantage to the business.

8. Industrialization of IoT Sensors

The industrial Internet of Things or IIoT sensors provides a more significant business advantage such as greater productivity and efficiency and cost reduction for data collection, analysis, and exchange.

9. Multi-Access Edge Computing (MEC)

MEC transforms the topology and architecture of mobile networks from a pure communication network for voice and data into an application platform for services. MEC complements and enables the service environment that will characterize 5G. Examples: Connected Cars, Industry 4.0, Remote Patient Monitoring, eHealth.

10. Extended Reality (XR)

XR represents an immersive interface for work collaboration in a virtualized environment. With the help of edge computing, these experiences become even more detailed and interactive.

11. Heterogeneous Hardware and Neuromorphic Processors

Innovation in heterogeneous hardware and ruggedized HCI / Edge devices is making Edge computing more pervasive, as these devices process greater volumes of data quickly while using less power. Integrating this specialized hardware at the Edge enables efficient computation within physical environments while accelerating response rates.

Hyperscalers, along with 5G and chip OEM, are innovating at speed to capture the market. Azure Percept is Microsoft’s latest edge computing platform, bringing the best hardware, software, and cloud services to the Edge. Azure Percept is an excellent device for makers and builders to build and prototype intelligent IoT applications powered by Azure Cognitive Services and Azure Machine Learning Services.

12. Neuromorphic System Architectures:

  1. Allow devices to adapt to changes in context.
  2. Are several orders of magnitude more energy-efficient than general-purpose computing architectures used earlier
  3. Excel at processing continuous data streams and deploying neuromorphic processors at the Edge reduces the delay to analysis.
  4. Have enabled rapid learning from little data, a capability beyond most conventional AI systems.

13. Privacy-Oriented Technology

New privacy-oriented technologies include techniques and hardware that enable data to be processed without exposing sensitive aspects. Data is already encrypted during storage and transmission, but privacy-preserving technology safeguards data even in the computing stage, making it more trustworthy for other lines of the organization and its partners, especially when data must be processed at the Edge.

14. Robotics

Robotics can be configured to act on signals and updates from the Edge. This has been seen in life-saving surgical procedures where agility and precision are of utmost importance. Both Edge and Cloud are essential to control the robot’s movements and executions through stored data while ensuring no lag between movements.

How Edge Computing Will Drive Cloud Computing

From what we are witnessing so far, Edge Computing represents the future as an extension of cloud technology, making it more resilient. Discussed below are a few ways we may see this continuum manifest:

1. Extension of AI and IoT

A substantial amount of computing is already carried out at the Edge in manufacturing units, hospitals, and retail, where the majority of operations involve the most sensitive data. The Edge also powers the most critical systems, those required to function safely and reliably, and can facilitate decision-making on these core functional systems when AI and IoT are given the opportunity to tap into them.

2. Value Creation by Multi-Partner / Multi-Company Solutions

Understanding and assuming control of the Edge also gives you control of the closest point of data action. Utilizing this unique opportunity to deliver differentiated services can help a business forge valuable partnerships.

For instance, edge computing benefits the automobile manufacturer and the insurance vendor, the companies that provide energy and utilities, and city planners alike. Edge computing can offer your business new data, and you can offer greater value to your partners, a win-win scenario. The new edge-friendly data and services are processed in the Cloud, integrating with other organizational applications and data.

3. Revolutionizes New Tech like 5G, Robotics, XR, and Other Connected Devices

Edge computing is the need of the hour to maximize the returns of next-generation technologies, whose current scope needs to be broadened. As time passes, so does the need for a better technological support system for data processing that is faster, smarter, and more efficient. Their collective effect can deliver new capabilities such as voice input in your vehicle or remote operations using teleoperation. Edge provides the control and programmability required to link these capabilities into an organization.

Does Edge Computing Replace Cloud Computing?

Today’s Cloud world is characterized by a limited number of mega data centers in remote locations. Data traverses from a device to the Cloud and back to execute a computation or data analysis, and this round trip typically takes 50 to 100 milliseconds over today’s 4G networks.

Data traveling over 5G at less than five milliseconds makes the edge cloud, and the new services it empowers, possible.
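
To make the latency arithmetic concrete, here is a small illustrative sketch in Python. The round-trip figures come from the text above; the 33 ms deadline is a hypothetical requirement for a 30-frames-per-second video analytics workload, and the 10 ms processing time is likewise an assumption:

```python
# Illustrative latency-budget check: does the network round trip plus
# processing time fit inside a real-time deadline? Figures are examples.

FRAME_DEADLINE_MS = 1000 / 30  # ~33.3 ms budget per frame at 30 fps

def fits_deadline(network_rtt_ms, processing_ms=10.0):
    """Return True if round-trip latency plus processing meets the deadline."""
    return network_rtt_ms + processing_ms <= FRAME_DEADLINE_MS

# Round trips from the text: 50-100 ms over 4G to a remote data center,
# under 5 ms over 5G to a nearby edge site.
print(fits_deadline(75.0))  # centralized cloud over 4G -> False
print(fits_deadline(5.0))   # edge site over 5G -> True
```

Under these assumptions, only the edge path leaves headroom for real-time processing.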

Decentralizing traditional IT infrastructure is at the core of edge computing and complementary to centralized cloud computing.

Distributed Cloud

One of the three origins of distributed Cloud is edge computing, making it highly relevant for the future. CIOs can use distributed cloud models to target the location-dependent cloud use cases that will be required. As per Gartner, by 2024 most cloud service platforms will offer at least some distributed cloud services that execute at the point of need. Distributed Cloud retains the benefits of cloud computing while extending the range of use cases for the Cloud.


Today, everything is getting “smart” or “intelligent” because of technology. From home appliances and automobiles to industrial equipment, products and services of all kinds employ AI to interpret commands, analyze data, recognize patterns, and make decisions for us. Most of the processing that powers today’s intelligent products is handled remotely (in the Cloud or a data center), where there’s enough computing power to run the required algorithms.

Edge, combined with 5G’s higher bandwidth and the Distributed Cloud’s low-latency computation, is a future that was imagined less than a decade ago and is now within our reach. What is impressive is how quickly these technological leaps have compounded; edge computing feels like science fiction materializing, except the experience is full of even greater possibilities. Not only will your business face a new generation of success, but using the Edge will help you run your organization more efficiently, innovate faster, and derive better value from partnerships.

See blog

Tags: Cloud, COVID19, Digital Twins

Data Governance — A QuickStart With Azure Purview
April 22, 2022

“When we talk about assets on the balance sheet, Data deserves its row” — Satya Nadella, Microsoft CEO.

As an organization, you face a big question: “How do we handle users’ data?” That data can be used to support your business or to give your end-users a better experience.

With enough data and a roadmap to use that data effectively, you can accelerate your company’s growth. But using data effectively is impossible without data governance. Here’s every “Why? How? Where?” you need to know about data governance and Azure Purview.

Why Data Governance?

Data is the new currency of the digital age, but data within organizations is growing at exponential rates. 90% of today’s data was created in just the last two years, and by 2025, 80% of data will be unstructured. This influx of data has multiplied the challenges organizations face.

To get real business value from Data, the organization needs to know:

  1. What Data exists within the organization?
  2. Who owns the Data? Who can access the data?
  3. For what purposes can they use the Data responsibly and ethically?
  4. Data lineage (traceability of data flow and its usage in solutions)
  5. Duplicate data
  6. Quality of data and common taxonomy
  7. Security and compliance for the data captured
  8. Where and How the Data is stored or archived (and overall lifespan of data)
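
The questions above map naturally onto the fields of a catalog record. As a purely hypothetical sketch (not any product’s actual schema), a minimal data-asset record might look like this in Python:

```python
from dataclasses import dataclass, field

# Hypothetical minimal catalog record covering the governance questions:
# ownership, access, classification, lineage, and lifespan.
@dataclass
class DataAsset:
    name: str
    owner: str                 # who owns the data
    allowed_roles: list        # who can access the data
    classification: str        # e.g. "Confidential", "Public"
    lineage: list = field(default_factory=list)   # upstream sources
    retention_days: int = 365  # overall lifespan of the data

orders = DataAsset(
    name="sales.orders",
    owner="finance-team",
    allowed_roles=["analyst", "finance"],
    classification="Confidential",
    lineage=["crm.raw_orders"],
)
print(orders.owner, orders.retention_days)
```

A real catalog adds quality metrics, taxonomy links, and duplicate detection on top of records like this.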

Lack of understanding of any of the above can create operational inefficiencies, confusion as data and information are distributed internally and externally, and poor business decisions based on flawed or misunderstood data. And that’s only part of the problem, as regulators are cracking down on companies for lapses in compliance, data privacy, and data sovereignty (and I won’t be surprised if we soon start seeing regulations around the ethical use of data).

What is Data Governance?

According to Gartner, “Data governance is the specification of decision rights and an accountability framework to ensure the appropriate behavior in the valuation, creation, consumption, and control of data and analytics.”

Data governance helps ensure the data is usable, accessible, and protected. It also enables more informed data analytics, helping the organization reach well-founded conclusions. Data governance also improves the consistency of the data, removes redundancies, and helps weed out garbage data, which can save an organization from serious decision-making problems.

Data governance also provides organizations with:

  • Data consistency.
  • Reduced data management costs.
  • Increased data access for everyone involved for better data-driven decision-making. 
  • Improved employee experience (thus higher engagement level and Productivity).
  • Improved customer experience, by enabling faster insights into customer behavior and patterns and facilitating 360-degree views to drive personalized experiences at scale.
  • Improved overall brand value.

What’s Microsoft Azure Purview?

Microsoft Azure Purview is a fully managed, unified data governance service that helps you manage and govern your on-premises, multi-cloud, and SaaS data. Purview creates a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. Purview empowers data consumers to find valuable, trustworthy data.

It’s built on Apache Atlas, an open-source project for metadata management and governance of data assets. Azure Purview also has a data-sharing mechanism that securely shares data with external business partners without setting up extra FTP nodes or creating redundant large datasets. Azure Purview does not move or store customer data outside the region in which it is deployed.
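
Because Purview exposes Apache Atlas 2.0-compatible APIs, catalog searches can be composed programmatically. The sketch below only builds a request rather than sending it; the account name is hypothetical, the endpoint path and payload shape are assumptions based on the Atlas v2 basic-search API, and authentication is omitted entirely:

```python
import json

ACCOUNT = "contoso-purview"  # hypothetical Purview account name

def build_basic_search(keywords, classification=None):
    """Compose an Atlas v2 basic-search request (built here, not sent)."""
    url = f"https://{ACCOUNT}.purview.azure.com/catalog/api/atlas/v2/search/basic"
    payload = {"query": keywords, "limit": 25}
    if classification:
        payload["classification"] = classification
    return url, json.dumps(payload)

# "PII" is a placeholder classification name, not a verified built-in.
url, body = build_basic_search("customer", classification="PII")
print(url)
```

In practice the request would be sent with an Azure AD bearer token attached.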

Purview is Available for Public Preview

There is currently no licensing cost associated with Purview; you pay for what you use. The pay-per-use model offered by Microsoft as part of Public Preview is exciting for Microsoft customers looking to move quickly without having to create a business case to secure an additional budget. Azure Purview reduces costs on multiple fronts, including cutting down on manual and custom efforts to discover and classify data and eliminating hidden and explicit costs of maintaining homegrown systems and Excel-based solutions.

Data Sources Supported by Azure Purview

It supports the following type of data sources at the time of writing:

  1. SQL Server on-premises
  2. Azure Data Lake Storage Gen1
  3. Azure Data Lake Storage Gen2
  4. Azure Blob Storage
  5. Azure Data Explorer
  6. Azure SQL DB
  7. Azure SQL DB Managed Instance
  8. Azure Synapse Analytics (formerly SQL DW)
  9. Azure Cosmos DB
  10. Power BI
  11. Teradata
  12. ERP sources like SAP S/4 HANA and SAP ECC.
  13. Oracle DB as a data source
  14. Amazon S3 — Azure Purview customers can now scan and classify data residing in Amazon AWS S3 with the help of automated scanning, AI-powered built-in and custom classifiers, and Microsoft Information Protection sensitivity labels.

Critical Capabilities of Azure Purview

Azure Purview consists of the following main features:

1. Azure Purview Data Map

Azure Purview Data Map provides the foundation for data discovery and effective data governance. It’s a cloud-native PaaS service that captures metadata about enterprise data present in analytics and operational systems, on-premises and in the cloud. The Purview Data Map is automatically kept up to date with a built-in automated scanning and classification system. Business users can configure and use the Purview Data Map through an intuitive UI, and developers can programmatically interact with the Data Map using open-source Apache Atlas 2.0 APIs.

Purview Data Map powers the Purview Data Catalog and Purview Data insights as unified experiences within the Purview Studio.

Data Map extracts metadata, lineage, and classifications from existing data stores. It enables you to enrich your understanding by classifying data at cloud scale, using more than 100 built-in classifiers as well as your own custom classifiers. With Purview Data Map, organizations can centrally manage, publish, and inventory metadata at cloud scale and further extend it using the open Apache Atlas APIs.

The sensitive-data labeling feature is supported consistently across database servers, Azure, Microsoft 365, and Power BI. It also lets you easily integrate all your data systems using the open-source Apache Atlas APIs.
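
To illustrate the idea behind a custom classifier (a toy sketch only, not how Purview’s scanners are implemented), a pattern-based rule might tag a column whose sampled values mostly look like email addresses:

```python
import re

# Toy pattern-based classifier: tag a column as "Email" when at least
# `threshold` of its sampled values match a simple email regex.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.]+$")

def classify_column(samples, threshold=0.8):
    """Return 'Email' if enough samples match, otherwise None."""
    if not samples:
        return None
    hits = sum(1 for value in samples if EMAIL_RE.match(value))
    return "Email" if hits / len(samples) >= threshold else None

print(classify_column(["a@b.com", "c.d@mail.org", "x@y.io"]))  # Email
print(classify_column(["alice", "bob@mail.com", "carol"]))     # None
```

Production classifiers combine many such rules with confidence scoring and sensitivity labels.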

2. Purview Data catalog

With Data Catalog, Purview enables rich data discovery, letting users search business and technical terms and understand data by browsing associated technical, business, semantic, and operational metadata.

Data catalog, along with information on the data source and interactive data lineage visualization, empowers data scientists, engineers, and analysts with business context to drive BI, analytics, AI, and machine learning initiatives.

Purview helps companies to understand their data supply chain from raw data to business insights. From a Data lineage perspective, Purview currently supports:

  1. Scan your Power BI environment and Azure Synapse Analytics workspaces with a few clicks and automatically publish all discovered assets and lineage to the Purview Data Map.
  2. Connect Azure Purview to Azure Data Factory instances to automatically collect data integration lineage. Quickly determine which analytics and reports already exist without reinventing the wheel.

3. Purview Data Insights

Using Purview Data Insights, data officers and security officers can get a bird’s-eye view and, at a glance, understand what data is actively scanned, where sensitive data is, and how it moves.

The data governance component gives users a comprehensive view of the organization’s data landscape, quickly showing which analytics and reports already exist so stakeholders can use existing data efficiently instead of recreating it. This view surfaces crucial insights such as data distribution across environments, how data moves, and where sensitive data is stored.

4. Purview Studio

Purview Studio is the environment in which you work with Azure Purview services after creating an account. The studio is a central control area that allows developers, administrators, and end-users to work through Purview, and it is the next step in the process of using Azure Purview.

Challenges of Azure Purview

Azure Purview is in its early days and has a few gaps that need to be addressed. Here are a few limitations of Azure Purview:

  1. Purview has a minimal list of data sources; even most Azure data services are not accessible for scanning, not to mention other extensive management systems and BI tools.
  2. User Interface is missing basic data management capabilities in the data catalog. For example, once classified, assets cannot be deleted with the UI.
  3. No support for the classification of zip file content.
  4. No support for Data Marketplace
  5. No support for automation and alerting
  6. Relations between assets are set manually, and it’s not possible to specify the type or nature of the relationship.
  7. The maximum length of an asset name and classification name is just 4 KB
  8. Currently, Azure Purview only provides you with 10GB storage capacity for four capacity unit platforms and 40GB for 16 capacity unit platforms.

While Azure Purview is not currently a one-stop shop for enterprise-level data governance capabilities, based on the roadmap shared, it won’t be long before the Purview team closes enough gaps to make Azure Purview an enterprise-grade data governance suite.

How Azure Purview helps with Data as Asset

Azure Purview is there to help you manage your data better, and here’s how it will help you process your data and convert it into an asset:

a) Inventory

Azure Purview allows you to catalog your data and place customized tags on it, allowing you, the end-user, to locate and understand it better.

b) Quality Control

It also helps you maintain Data Quality in situations where your data must be complete, unique, valid, accurate, consistent, relevant, reliable, and accessible. Governance tools such as the data catalog will help you with this.

c) Security Compliance

As an organization, it falls on you to provide the utmost security for end-user data. Under government laws and data mandates, end-users can demand that companies remove their data from servers, or change its content, at any given point. Azure Purview lets you create an automated process that streamlines these service requests and produces the documentation required by law.

d) Unified Roadmap

It provides a unified map of your data assets. This helps in forming an effective data governance system.

e) Provides Semantic Search Options

You can run searches based on technical, business, and operational terms. One can identify the sensitivity level of the data and can understand the interactive data lineage.

f) Constant Update of Data Running Through the System

Get continuous updates about the location of the data and continuous insight into its movement through your multi-layer data landscape. Along with this, Azure Purview provides you with services like a Data catalog and Business glossary.

g) Data Catalog

It is a core element of any data governance software: it can scan all registered data sources and identify, index, connect, and classify their data sets.

h) Business Glossary 

It is a collection of terms with brief definitions which connect to other terms. With Business Glossary, it’s possible to automate the process of classifying the data set and annotate them with correct business terms so end-users can understand them more simply. Any business glossary is the foundation of the semantic layer that an organization uses to define a medium of communication behind its business.

With features like these, Microsoft Azure Purview allows your data to become a crucial asset.


Data Governance is a must-have solution strategy for all enterprises to use Data as assets. Data Governance is a complex solution yet a foundational pillar in any enterprise’s data journey. Data governance helps to democratize data responsibly through accessible, trusted, and connected enterprise data at scale. 

Microsoft Azure Purview provides a good starting point for Cloud-native data governance solutions. Azure Purview helps answer the who, what, when, how, where, and why of data. From a feature standpoint, I would say it has the potential to be a game-changer with features like the Data catalog, Data insights, Data mapping, Business Glossary, and pipelines to manage your data sources and destinations.

Azure Purview has solid potential to shape a new Data Governance as a Service (DGaaS) industry and open up new opportunities for businesses to explore.

See blog

Tags: Cloud, COVID19, Digital Twins

Cloud Cost Optimization: A Pivotal Part of Cloud Strategy
April 20, 2022

According to Gartner, businesses will be spending about $333 billion by the end of 2022 on cloud infrastructure, and according to McKinsey, cloud spending will increase by 47% in the year 2021. These numbers are staggering and certainly depict a very positive picture here. However, cloud consumers need to assess the pay-off of such significant cloud spending. 

McKinsey also reported that companies exceeded their cloud budgets by 23% and that 30% of their outlays were wasted. This leads me to wonder whether businesses have been able to optimize operations from their cloud investments. Has the Cloud just added to their costs, or has it been good value for the money? And why do some companies still grapple with mismanaged or added costs during their cloud journey?

These pertinent questions need to be addressed at a time when companies are struggling to stay afloat and trying to mitigate their overall costs. Cloud costs don’t just mean IT costs; they include certain operational and managerial costs as well.

So, how do organizations harness the cloud cost optimization journey? Let me guide you through the same in this blog.

Why Cloud Cost Optimization?

According to Gartner, 45% of organizations that perform a ‘lift and shift’ to cloud architecture endure higher costs and end up overspending by 70% in the first year. McKinsey calculates that “80% of the enterprises believe that managing cloud spend poses a challenge,” and Flexera notes that “organizations waste an average of about 35% of their Cloud spend.”

Beyond high overhead costs, poor cost management reflects on business innovation and overall agility. Additionally, according to a Cloudability survey, more than 57% of organizations have experienced a negative business impact due to inefficient cloud cost management. This is because much of the importance is given only to cloud adoption, not cloud optimization. Organizations must look to save costs here and bring about a cultural and behavioral change to maintain fiscal discipline. As we enter the post-COVID world and the next stage of the economic cycle, IT leaders must work smart to ensure business efficiency through cloud cost management.

Six Challenges in the Cloud Cost Optimization Journey

Despite conceding the benefits derived from cloud cost optimization, many organizations struggle with it. It is essential to address the key challenges and hurdles cloud users face in optimizing cloud costs. Let me take you through some common ones:

  1. Provision-for-peak mindset: The easy access to point-and-click web consoles and APIs, in the absence of capacity constraints, can lead to “resource sprawl” and unexpected charges. Companies are often unable to anticipate how many resources they actually require versus how many are being allocated, and unused resources mean more costs. To maximize the Cloud’s value, it is essential to apply a “just in time, pay for what you need” mindset.
  2. Ease of use and lack of governance model: The inherent scalability, flexibility, and easy provisioning of cloud services can lead to resource sprawl and cost overruns, and a lack of governance for cloud resources adds a multiplier effect to those unexpected charges.
  3. Complex, multilayered pricing and billing structures from hyperscalers: Cloud consumption bills are challenging to understand and make it difficult to build “budget vs. forecast vs. actual usage” comparisons. Major cloud platforms such as AWS, Google Cloud Platform (GCP), and Microsoft Azure do not standardize billing models, billing formats, APIs, or services, and providers constantly change their billing and invoicing practices, adding further complexity.
  4. Low visibility: Few customers have deployed solutions for monitoring usage from a budget and financial-engineering perspective. You can’t manage what you can’t measure, which often results in uncontrolled consumption.
  5. Too many options in the cloud feature catalog: Complex cloud catalog options require careful consideration, and it’s not easy to find the best-suited feature with the lowest cost for a given context. Every year, cloud vendors announce hundreds of new services, features, instance types, pricing reductions, and even new pricing models. Organizations struggle to keep up with this pace and understand how each announcement affects their financials.
  6. Excess of alternative architectures: The same application can be built using many different architectures, services, and components, resulting in very different costs. Organizations can struggle to identify the most cost-effective alternative that delivers their requirements.
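
The “budget vs. forecast vs. actual usage” comparison mentioned in challenge 3 can be sketched as a simple variance report; all the figures below are hypothetical:

```python
# Hypothetical monthly spend per service, in dollars.
spend = {
    "compute": {"budget": 10_000, "actual": 13_500},
    "storage": {"budget": 4_000, "actual": 3_600},
    "network": {"budget": 2_000, "actual": 2_900},
}

def variance_report(spend, alert_pct=20.0):
    """Flag services whose actual spend exceeds budget by more than alert_pct."""
    flagged = []
    for service, figures in spend.items():
        overrun_pct = (figures["actual"] - figures["budget"]) / figures["budget"] * 100
        if overrun_pct > alert_pct:
            flagged.append((service, round(overrun_pct, 1)))
    return flagged

print(variance_report(spend))  # [('compute', 35.0), ('network', 45.0)]
```

Even a basic report like this turns an opaque consumption bill into an actionable alert list.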

Seven Mantras for Cloud Cost Optimization

Let’s investigate seven mantras that IT and Business leaders can use to accelerate their Cloud Cost Optimization journey.

1. Cloud-First Mindset

Cloud deployment entails some structural and systemic changes in an organization. A cloud-first mindset helps organizations become agile in bringing forth these changes, whether in business or revenue models. It also helps if IT teams can make decisions around the movement to the Cloud based on the dynamic needs of various groups. Investing in PaaS capabilities and cloud-native toolsets can help here.

2. Architect Solutions for Cloud Economics

Cloud optimization is not something you do per se but rather a mindset you inculcate. An organization needs to arrive at the most cost-effective cloud architecture to meet its requirements by factoring in what’s on offer in the cloud catalog, including newer features, and by interpreting usage trends from billing to know what resources to use.

In the past, organizations designed for availability, performance, and security delivered from a finite set of pre-provisioned resources planned for peak workload. The Cloud reverses this paradigm and allows for a more precise design, perfectly aligned to workload requirements. The architectural components in the Cloud carry a price tag, so optimal cloud architectures need to be designed with cost in mind.

Some of the core elements or principles of cloud economics today include:

  • Unlocking the true potential of any cloud provider by accessing IaaS, SaaS, and PaaS.
  • Ensuring cost transparency.
  • Right-sizing — During “lift and shift” migration to the Cloud, right-sizing is often overlooked or ignored. This can lead to oversized instances and unused resources, ultimately increasing costs.
  • Remove unused resources.
  • Workload usage-based optimization using autoscale up and down capabilities — as rightly pointed out by McKinsey, organizations need to inculcate a “consumption approach” and “continuously match their demand with the best-fitting cloud services.”
  • Aiming for Evergreen Cloud Managed Services (at least for infra, security, and app runtime) and always being on the lookout for advanced cloud features that may be more cost-efficient than the ones in use.
  • Everything as Code — To maximize cloud economics benefits, establish a strong automation foundation with everything as code (infra, security, configuration, network, documentation).
  • Automate your consumption forecasting and capacity management in the CI/CD pipeline.
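As a toy illustration of the right-sizing principle above, the following sketch flags under-utilized instances from exported monitoring data. The instance names, metrics, and 20% CPU threshold are all hypothetical.

```python
# A minimal rightsizing sketch over (hypothetical) utilization metrics.
# Real inputs would come from your monitoring or cloud-advisor tooling.

def rightsizing_candidates(instances, cpu_threshold=20.0):
    """Flag instances whose average CPU utilization is below the threshold."""
    return [i["name"] for i in instances if i["avg_cpu_pct"] < cpu_threshold]

instances = [
    {"name": "web-01",   "avg_cpu_pct": 55.0},
    {"name": "batch-02", "avg_cpu_pct": 4.5},   # likely oversized or idle
    {"name": "db-01",    "avg_cpu_pct": 35.0},
]

print(rightsizing_candidates(instances))  # ['batch-02']
```

In practice the threshold would be tuned per workload, and flagged instances reviewed before downsizing or removal.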

3. Adopt a Cloud Cost Optimization Framework

Many organizations need to rethink what the Cloud can do for their business in the current climate. Acceleration and optimization of the Cloud are critical components to a successful cloud journey; both must be considered and intertwined. Whether an organization looks to optimize first for maximum cost and consumption efficiencies or accelerate first for greater scalability, there is no “best way.” Moving to the Cloud could reduce IT costs if it is planned and managed correctly. When you optimize as you go, the savings are significant, controlled, and scalable.

3 Key Pillars of a Cloud Cost Optimization Approach
  1. Optimize Resources (rightsizing, right features)
  2. Create Visibility and Control (transparency of costs, usage, and forecasts)
  3. Establish Effective Governance (identify usage, ownership, and department allocation)

You cannot optimize cloud cost if you don’t have visibility into your spend and a baseline. A good starting point for a cloud optimization framework is to ensure visibility of your spending and control over cloud expenditure.

Organizing cloud costs can entail resource tagging, cost allocation, and chargeback and show-back models. Additionally, creating and using a clear BI dashboard for visibility and control can help your organization tremendously in the following ways:

  • It helps you understand what you are paying for.
  • It alerts you immediately when your resources are misused.
  • It helps you keep cloud costs under control.

To maintain an optimal state, you need to ensure that sound policies around budgeting are adhered to. In terms of Governance, the framework should also oversee resource creation permissions. Microsoft offers automation tools like Azure Advisor and Microsoft Cost Management to monitor your spending and catch cost spikes.

The journey for any cloud cost optimization starts with an initial analysis of the current cloud estate and identification of optimization opportunities across compute, network, storage, and other cloud-native features. Any cloud cost optimization framework needs a repository of cost levers with their associated architecture and feature trade-offs.

The key here is to focus on quick wins first, followed by dashboard creation for better visibility and control. Lastly, establish a Governance model to maintain an optimal state.

To maintain an optimal state, you will need:

  1. Governance — the policies around budget adherence, resource creation permissions, and more.
  2. Transparency — reports of costs, usage, and forecasts.
  3. Well-defined KPIs.
  4. Continuous review.

4. Continuous Cost Optimization Process

The Cloud is ever-evolving, and organizations must evolve their portfolios along with it. For example, automation, autoscaling, serverless services, and containers have changed the cloud game and, if adopted, can keep reducing costs over time. It is therefore of utmost importance to keep finding new optimization strategies and opportunities through continuous review. The key is not a one-time cloud cost optimization exercise but a continuous optimization cycle at every stage.

5. Implement a Chargeback Model

Cloud consumers should be responsible for what they consume. Enable them to create forecasts and pursue optimization opportunities. A good starting point is to develop a resource tagging model (e.g., usage, ownership, department, and cost center) to implement the chargeback model. With proper resource tagging, it is possible to associate resource cost with a resource owner and thus a cost center code.
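The tag-to-cost-center rollup described above can be sketched as follows. The resources, tag keys, and costs are hypothetical; real data would come from the provider’s billing export.

```python
# A minimal chargeback sketch: roll up resource costs to cost centers via tags.
from collections import defaultdict

def chargeback(resources):
    """Sum cost per cost-center tag; untagged spend is tracked separately."""
    totals = defaultdict(float)
    for r in resources:
        cc = r.get("tags", {}).get("cost_center", "UNTAGGED")
        totals[cc] += r["cost"]
    return dict(totals)

resources = [
    {"id": "vm-1", "cost": 120.0, "tags": {"cost_center": "CC-100", "owner": "alice"}},
    {"id": "db-1", "cost": 300.0, "tags": {"cost_center": "CC-200", "owner": "bob"}},
    {"id": "vm-2", "cost": 45.0},  # untagged: a governance gap to chase down
]

print(chargeback(resources))
```

Note how untagged spend surfaces as its own bucket; driving that bucket to zero is itself a useful governance KPI.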

6. Use the Right Sourcing, Pricing, and Discounting Model

Choose a suitable sourcing model from allocation-based and consumption-based services.

  • Choose the right pricing and discounting models: Reserved instances are pricing options or discounts based on upfront payment and time commitment. Reserved instances can be purchased for one or three years, so it is imperative to assess past usage and make the right decision. Microsoft (Azure Reserved VM Instances) and Amazon (Amazon EC2 Reserved Instances) both offer reserved-instance pricing models.
  1. Make reserved-instance (RI) decisions carefully, as they work on a contractual basis.
  2. Always be on the lookout for potential price reductions and grab the opportunity before it ceases to exist.
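A back-of-the-envelope comparison shows why assessing past usage matters before committing to an RI. The hourly rates and utilization below are hypothetical; real prices come from the provider’s pricing pages.

```python
# A minimal sketch comparing a reserved-instance commitment with pay-as-you-go.
# Rates and usage are hypothetical placeholders.

def ri_vs_payg(payg_hourly, ri_hourly_effective, expected_hours_per_month,
               term_months=12):
    """Compare total term cost of PAYG vs. a reserved instance.
    An RI bills for every hour of the term regardless of actual usage."""
    payg_total = payg_hourly * expected_hours_per_month * term_months
    ri_total = ri_hourly_effective * 730 * term_months  # ~730 hours per month
    return {
        "payg": round(payg_total, 2),
        "ri": round(ri_total, 2),
        "ri_saves": ri_total < payg_total,
    }

# A VM used only ~40% of the time: PAYG wins despite the RI discount.
print(ri_vs_payg(payg_hourly=0.10, ri_hourly_effective=0.06,
                 expected_hours_per_month=292))
```

The same function with near-constant usage (around 730 hours per month) would flip the answer in favor of the RI, which is exactly why past-usage analysis must precede the contract.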

7. Establish Cross-Functional Cloud FinOps Team

With the advent of pay-as-you-go (PAYG) models, financial decisions have been decentralized. In traditional IT models, only a few people were responsible for making financial decisions about infrastructure purchases. With the new pricing models, anyone can make cloud spending decisions, and cost management has become everyone’s responsibility.

Hence, it becomes imperative to integrate FinOps. FinOps is simply a combination of FINance (budgeting and cost models) and OPerations (infrastructure, apps, data).


Cloud cost optimization calls for a paradigm shift at the organizational level and at the behavioral level to ensure that cloud investments are utilized responsibly and optimally. It is not just an operational concern or merely about “cost reduction”; it’s a value-driven strategic move. The path toward it will not be linear and requires tight collaboration among governance, architecture, operations, product management, finance, and application development to be successful. 

With the right strategic interventions, controls, and operating model, the Cloud gives organizations excellent visibility into IT spend and is undoubtedly the most crucial and promising technology investment an organization can make.

See blog

Tags: Cloud, COVID19, Digital Twins

Cloud as an Enabler for Sustainability
April 19, 2022

Sustainability has been a buzzword and rightfully so owing to the accelerated global climate crisis. Corporations worldwide acknowledge the urgent need to act on climate change and have even pledged to set climate targets.

According to Gartner, Sustainability is defined as “An objective that guides decision making by incorporating economic, social & environmental impacts.”

According to the “Business Ambition for 1.5°C — Our Only Future” campaign, 177 corporations from across the sectors have come forward to reduce the rising global temperature. They are not just making empty promises but are taking concrete actions to bring policy changes within their own countries and regions. These are undoubtedly positive breakthroughs, and they need to be implemented now more than ever, as the period between now and 2030 is known as the Decade of Action.

Sustainability is pervading all facets of our lives, and it has undoubtedly pervaded the digital technology industry as well. No doubt, Digital transformation is disruptive and innovative, but is it truly sustainable? Well, I would say there are two sides to the coin here.

On the one hand, we can contend that software technologies are intelligent solutions built to support the environment. For example, Microsoft created the AI for Earth initiative to assist environmental organizations. On the other hand, it is also recorded that our digital technology usage is currently responsible for about 4% of global CO2 emissions. With ever more digital use, that number is only going to increase. So, where does the solution lie? How can the Cloud help with Sustainability?

The solution lies in utilizing Technology to deliver sustainable solutions, incorporating environmentally friendly IT practices, and making ethical choices about design and Technology that shape the internet’s broader ecological impact. My perspective is to harness the power of the Cloud, because cloud computing truly is the silver lining, or should we say the ‘green lining,’ here.

Therefore, in this blog, I intend to highlight cloud computing efficacy in driving sustainability goals.

4 Primary Ways Cloud Helps With Sustainability

1. Sustainable Platform for Infrastructure, Application, and Data

Microsoft and WSP USA conducted a study which found that Microsoft cloud computing was up to 93% more energy efficient, and produced up to 98% lower carbon emissions, than on-premises data centers. This comprehensive study highlights the efficiency of cloud infrastructure in advancing Sustainability and its numerous environmental benefits. A report published by Accenture Strategy also stated that “migrations to the public cloud can reduce CO2 emissions by 59 million tons per year which equates to taking 22 million cars off the road.” Such statistical data isn’t hyperbole but reality.

Let me take you through the different ways cloud infrastructure offers a sustainable option:

Green Datacenters

Typically, on-premises data centers consume an unreasonable amount of energy: they require a constant power supply, cooling systems to avoid overheating, and so on. Server underutilization adds to the waste, as idle servers burn energy and retired hardware builds up e-waste.

Public cloud providers, by contrast, can deploy ‘green data centers’ powered by alternative energy sources. For example, Microsoft’s data centers use renewables like wind, solar, and hydroelectricity.


Migrating to the Cloud replaces high carbon-emitting machines with virtual equivalents, significantly reducing a company’s carbon footprint. For example, the Cloud enables seamless virtual services like video streaming rather than heavy hardware that consumes more energy. Eliminating significant physical hardware from day-to-day operations drives dematerialization and reduces cost, waste, effort, and environmental impact.

2. Rapid Innovation for Sustainability Centric Solutions

Corporations have been leveraging technology to create more sustainable businesses while minimizing their environmental impact. Innovations built on cloud computing infrastructure have powered sustainability goals. For example, virtual meeting applications have been among the most notable cloud-powered innovations so far. Companies can now hold periodic employee meetings online, saving significant cost, energy, and time.

In research conducted by Microsoft, it was noted that the Cloud can provide scalable technological solutions, such as smart grids and intelligent buildings, to ICT sectors. Major enterprises are harnessing the power of the Cloud to find sustainable solutions as well. Take, for example, AGL, one of Australia’s leading energy companies, which used Microsoft’s Azure cloud platform to manage solar batteries remotely and efficiently derived a sustainable solution from cloud computing infrastructure.

These use cases certainly highlight that the cloud infrastructure isn’t just inherently a sustainable solution but also an infrastructure that powers rapid Innovation for sustainability-centric solutions.

3. Software as a Service (SaaS) Solutions

SaaS has transformed the way we work, communicate, and share data. With Sustainability, reporting requirements have become crucial and complex, making secure, accessible, and accurate data essential. A SaaS platform essentially provides a cloud application solution that drives business operations by managing and automating key activities.

4. Innovation and Investment From Hyperscalers

Hyperscalers can invest vast amounts of money in Innovation for energy-efficient Datacenters and Technology due to an increase in cloud consumption and the number of cloud users. For example, Microsoft is investing in building data centers based on new leading-edge designs (e.g., Microsoft has created a data center underwater) to improve the average PUE (Power Usage Effectiveness). Such investments in green infrastructures will significantly reduce the per-user footprint when cloud business applications are being used.

Sustainability Benefits of Cloud From Different CXOs' Perspectives

Sustainability initiatives are indeed being harnessed across all levels of the hierarchy, from CEOs to CFOs and CIOs. There has also been significant pressure from customers and stakeholders to take a stand on Sustainability.

Hence, by embracing the power of the Cloud, CXOs can efficiently harness growth and Innovation. Let me tell you how:

1. CEO Perspective

According to a report published by Accenture Strategy, 21% of CEOs and CXOs acknowledged the importance of embedding sustainability goals into their corporate strategy. However, less than half were able to integrate them into their business operations.

Incorporating sustainable goals doesn’t just ensure a competitive advantage but also allows companies to fight against climate change proactively.

Indeed, the uncertainties brought about by the pandemic have halted and distracted CEOs’ sustainability efforts. However, the accelerated migration to the sustainable Cloud has helped address this very problem.

Establish a Sustainable Ecosystem

Leaders of technology-driven businesses or other businesses can drive sustainable actions by incorporating Cloud computing technologies. Achieving sustainability goals doesn’t happen in a vacuum but requires bringing together different technologists, employees, and stakeholders to realize the importance of leveraging sustainable options into operations.

Aligning Sustainability With Profit Mindset

The goal is not to make profits from Sustainability but rather to make profits sustainably. CEOs must align Sustainability with profits, business operations, investments, Innovation, and growth. By embracing the power of the sustainable Cloud, they can quickly alleviate the pressures to implement Global Goals and concentrate on strategizing for business success.

2. CFO Perspective

CFOs have often viewed non-financial metrics like Sustainability as a cost rather than a source of value. This can be owed to the language barrier between CFOs and their Sustainability colleagues, as rightly pointed out by a Harvard Business Review study. However, the same research noted that “Non-financial metrics such as carbon emissions can reveal hundreds of millions of dollars in sustainability-related savings and growth.”

a) Sustainability investments have tangible and intangible benefits, and CFOs must realize this now more than ever.

b) To maximize the value of Sustainability initiatives, a model like the ROSI (Return on Sustainability Investment) analytical model is a great starting point.

c) Another important model is CISL’s “Net Zero Framework for Business” — designed for those companies tasked with delivering net zero in a business context and influencing this ambition’s societal transition. By drawing on a range of leading frameworks and CISL’s insights, it provides a ‘one-stop-shop’ for the essential tasks that need to be set in place to align with net-zero.

Source: Targeting Net Zero — framework — Cambridge Institute for Sustainability Leadership

Let us now understand how the Cloud can make a business case for CFOs:

Paradigm Shift From CapEx to OpEx for IT Infrastructure and Operations

This shift allows CFOs to use different financial-engineering approaches to fund core business operations instead of worrying about huge upfront cash outflows.

Cost Reduction on IT Systems, Operations

In the Smart 2020 report, it was estimated that technology-enabled energy efficiency would yield a total of $947 billion in cost savings. This is huge, as CFOs can channel these savings into Innovation, scalability, and growth. The Cloud also allows CXOs to shift their outlook and think ‘green,’ contributing to something larger than their companies. It is an opportunity to join the fight against climate change without prohibitive mitigation costs or risks.

Cost Saving From the Reduction in Carbon Footprint Offset the Expense

By migrating to the Cloud, CFOs can mitigate and avoid carbon-footprint expenses (such as emission taxes and penalties for non-compliance) that might otherwise be incurred later.

Faster Value Creation With Business Agility

The Cloud enables CFOs to move beyond immediate financial imperatives and engage in better value creation. By incorporating sustainability goals into business operations through the Cloud, CFOs can build a stronger Environmental, Social, and Governance (ESG) profile. This helps build stronger relationships with customers, shareholders, and broader stakeholders.

3. CIO and CTO Perspective

In addition to standard Cloud Economics and business agility benefits of Cloud adoption, the Sustainability benefits of Cloud help CIO and CTO with:

  1. A sustainable platform for Innovation
  2. It helps optimize applications through image-size reduction, caching, and data optimization, which reduces the amount of data transferred and, with it, cloud spend and energy use at a granular level.
  3. Cloud can help to control energy consumption levels by reducing unnecessary dependencies that consume extra storage or resources.
  4. Focus on applying Technology to solve core business challenges instead of managing secondary aspects of Energy Management, Power Supply, etc.
  5. Reduce Technology Debt along with freeing up current Datacenter footprint (reduction in carbon footprint for Energy, Travel, and other operational aspects)
  6. IT as an asset for Sustainability goals

4. Workforce Perspective

70% of employees now want to work at a company with strong environmental goals. This is reflected within the IT sector, where employees are urging their organizations to take greater responsibility and action on Sustainability. The sustainability benefits of cloud computing reach all levels of the hierarchy, including the workforce. With CEOs committing to corporate social responsibility (CSR), employees can make a collective, collaborative effort to minimize the carbon footprint; using new, sustainable productivity tools and IT systems, they can easily track and reduce their energy consumption.


Leading cloud providers like Microsoft are pledging to be carbon negative by 2030 and to match 100% of their global annual energy consumption with renewable energy. This highlights how serious cloud providers are about Sustainability and how much effort they are willing to put in to uphold their environmental credentials. Hence, it is time for CXOs to consider the sustainability benefits of the Cloud now more than ever, as Sustainability is no longer just a perspective but a business imperative. CXOs need to collaborate to align their operational goals with sustainability goals and build a corporate purpose to tackle climate change. Sustainability benefits can be truly harnessed only if the cross-divisional teams understand the urgency.

In my opinion, Cloud computing can definitely support the company’s sustainable efforts by saving billions of dollars in energy costs and reducing carbon emissions by millions of metric tons.

See blog

Tags: Cloud, Business Strategy, Digital Twins

How Is 5G Relevant in the Present Day?
April 18, 2022

As 2021 approached, 5G was predicted to reach around 34% of the global population in the next 4 years. How will 5G revolutionize the present scenario of the market? What are my predictions for a 5G-enabled future? What are the risks associated with 5G? Read this article to know what I expect from the 5G-enabled future.

5G networks are part of a major digital transformation trend impacting the consumer, public sector, and enterprise spaces. New devices and applications are emerging that take advantage of the dramatically reduced latency and much higher throughput that 5G offers. Examples include accelerated adoption of smart cities, smart factories, next-generation in-store experiences, healthcare services, autonomous cars, and much more. It’s an exciting time. By now, you are familiar with the excitement surrounding the 5G revolution.

A new technological revolution will begin with 5G, creating massive disruptions in perceiving and using tech. Let’s start by looking at what this technology is all about.

5G is no longer the technology of the future but a current reality. Globally, markets have already started to switch to 5G, which marks the beginning of a new era. This is the only technology created so far with the potential to elevate the use of the Internet of Things (IoT), foster an environment of interconnectivity, and sustain economic growth. 5G will bring along a plethora of benefits, such as increased data speed, lower latency in network response time, and higher reliability.

According to Ericsson’s Mobility Report released in June 2019, 5G subscriptions will reach 1.9 billion by the end of 2024, making up over 20% of all mobile subscriptions at that time. So, while we’re still quite early in the game, it is not too soon to start thinking about the 5G implications for your business, both positive and negative.

What Is 5G?

5G is the next generation of mobile networks that will allow us to have higher speed and lower latency. To put it in layman’s terms, we say 5G will allow us to do everything that we currently do, but at a much higher speed. It runs on radio frequencies ranging from below 1 GHz all the way up to very high frequencies, called “millimeter wave” (or mmWave). 

Imagine a world with smart glasses, VR, and drones co-operating to carry out emergency missions and communicating wirelessly with each other and ground base stations over 5G networks in real-time.

Key characteristics of 5G

Network Densification

Support for millions of devices per square mile. Until today, 4G supported about 2,000 devices per 0.38 sq miles (roughly 1 km²). 5G, on the other hand, will support 1 million devices in the same area. This means better and more affordable internet will be accessible with 5G!
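As a quick sanity check on the figures above (0.38 sq miles is roughly one square kilometre), the density claim works out like this:

```python
# Verify the densification comparison: ~2,000 devices/km² on 4G
# versus ~1,000,000 devices/km² on 5G.

SQ_MILE_TO_KM2 = 2.589988  # exact conversion factor

area_km2 = 0.38 * SQ_MILE_TO_KM2
print(round(area_km2, 2))          # 0.98, i.e. roughly one square kilometre
print(round(1_000_000 / 2_000))    # 500: 5G supports ~500x more devices
```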

High Speed

Look for speeds up to 10Gbps.

Lower Data Costs and Cost Savings

5G is expected to lower the cost per bit of data transfer while delivering sustained speeds of around 1 Gbps. That combination could enable always-on monitoring use cases, such as round-the-clock remote monitoring in the health sector, and make the internet cheaper and more accessible throughout the world.

Low Latency Rates

Lower latency means less time for data to travel from one point to another and fewer lags. Network latency generally falls as network speed rises. The 5G revolution will see an entirely new range of applications enabled by low latency and the expansion of edge computing, altering the art of the possible.

Being roughly 10 times faster, 5G will deliver better results with less latency; overall, this generation of wireless is expected to cut end-to-end latency by 10x.

Mobility and Reliability

Better mobility means a better network while traveling. 5G is designed to maintain connections at travel speeds of up to 500 km/h, more than enough for high-speed train travel. Along with that, 5G targets the reliability of 99.999% uptime and ultra-consistent network service.

The Horizon of Possibility With 5G

Until now, we have used technology to make our lives easier. You press a button on your screen, and numerous calculations deliver your foods and goods to your doorstep. Wearable technology, to improve the human lifespan, is just the beginning of the technological revolution we are about to witness.

According to a survey recently released by Gartner, two-thirds of organizations were planning to deploy 5G by 2020. Yet businesses want to embark on the 5G journey faster than communication vendors can provide it. Furthermore, they plan to use 5G networks mainly for IoT communications, with operational efficiency as a critical driver.

Barriers to 5G Adoption

  • Upfront investment
  • Security
  • User uptick
  • Change management
  • A “4G is good enough” mindset that underestimates the 5G disruption

How Mobile Network Operators Are Changing With 5G

We’re noticing a significant shift across industries and some quite interesting partnerships between mobile network operators (MNOs) and hyperscalers such as Microsoft, AWS, and Google. MNOs have an edge because they own the infrastructure for edge technologies. There is mounting evidence that AWS, Microsoft, and Google are moving fast to develop their own edge infrastructure.

How Are We Looking at 5G Currently?

With every passing day, we consume more data. Ever wondered how much data we create? Roughly 2.5 quintillion bytes per day as of 2020.

With such a mammoth demand in data, existing spectrum bands are becoming congested, leading to service breakdowns, particularly when many people in the same area are simultaneously accessing online mobile services. This has increased our need for a network spectrum that can handle the surge in demand.

At the end of 2020, 5G mobile networks had only about 5 to 10% market coverage. Many early commercial networks served demonstration purposes at major events, such as the Summer Olympics in Tokyo. Research shows that around 45% of carriers globally will have launched a commercial 5G network by 2025.

What Will the Future Look Like for 5G?

It is estimated that 1.5 billion 5G connections will be established, covering 34% of the global population and carrying 25% of all mobile data, influencing and creating new markets.

5G will bring a whole new range of products to the market, ranging from a simple 5G-enabled phone to a fridge that orders food from local grocery shops according to how your day is going in traffic.

Every app and website is moving towards a better user experience to keep us interested and will undergo a redesign. Technologies like VR and AR will become more common in regular use. Companies like Instagram that are redesigning their apps will build new dopamine-response cycles to keep users on their platforms longer as the technology gets faster.

5G Ecosystem

To be successful in a 5G world, partnering and collaboration with MNO and 5G ecosystem players would be of utmost importance, as operators sit at the center of new ecosystems developed around the ultra-reliable low latency, real-time data at scale and responsiveness that the "edge cloud" delivers.

All hyperscalers are working on establishing partnerships and ecosystems around 5G and edge computing. Microsoft is leading the pack with a couple of acquisitions (e.g., Metaswitch, Affirmed Networks) and broader ecosystem partnerships.

Risks With 5G

However significant 5G may seem right now, the industry is still very much in its infancy. A major downside of 5G adoption is that it puts the entire service provider ecosystem at great risk of cyberattacks. With consumers and businesses becoming steadily more reliant on digital services, security shouldn’t be an afterthought.

Securing applications and data across the network, endpoints, data centers, branch locations, and cloud all remain critical challenges. As key enablers of the 5G value chain, mobile network operators are in a position to enable the connection and security of the 5G digital economy.

Health Risk

There are a few potential health risks we may encounter during the transition to a 5G world. Some studies suggest that frequencies with power densities below 6 mW/cm² can contribute to various health issues, the most severe being DNA damage, and that long-term exposure to low-intensity EHF EMR affects non-specific immunity indexes.

Cybersecurity Risks

Along with the massive potential of a technological revolution, 5G also introduces increased cybersecurity risks. Attackers with even minimal knowledge of cellular paging protocols could gain the ability to intercept calls and track locations, leaving our privacy vulnerable.

5G networks rely on a software-defined, distributed digital routing approach, which increases the number of entry points for someone with ill intent to intercept data packets.

What Do I Predict for the 5G Future?

How 5G Could Change Cloud Computing

1. 5G Supports a Blend of Computing, Storage, and Networking at the Edge Called Multi-access Edge Computing (MEC).

To capitalize on the emergence of 5G, cloud providers have expanded their hybrid and edge offerings, partnered with telcos, and built or acquired 5G-specific services. AWS Outposts, Google Anthos, Microsoft Azure Stack Hub, and Azure Edge Zones are hybrid cloud appliances and services that naturally lend themselves to 5G MEC use cases. Cloud providers, software vendors, enterprises, and others will use MEC to run applications directly at the edge of a telco network.

2. 5G Will Enable Cloud Service to Reach Mobile Enterprise Customers Quickly and Reliably.

Access to virtual machines via phones will become common thanks to the more extensive computing and machine-to-machine communication 5G provides. Cloud computing providers will offer more features and options to mobile users, and hotspots will become faster, allowing remote workers to access cloud services even in places where fixed internet connectivity is lacking.

3. Cloud Providers, Software Vendors, Enterprises, and Others Will Use MECs To Run Applications Directly Within the Edge of a Telco Network. 

5G technology will bring significant improvements to the cloud computing world because many technology innovations are more efficient when cloud-based. 5G, in turn, improves that integration with its low latency, enabling smoother communications.

4. 5G Is Bringing AI To Change Your Life.  

So far, we have mostly seen AI in chatbots handling small tasks, but with the power of 5G's edge cloud and low latency, we will see drastic changes across businesses.

5. 5G Will Transform the Art of the Possible.

We will see accelerated adoption of Digital Twins, Self-Monitoring machines, Augmented Audits and inspections, and collaborative robots.  This will allow us to make better data-driven decisions and distribute our priorities in a more efficient way. AI & 5G will bring the “intelligent edge” to life.

6. Private 5G Network

To leverage the full potential of 5G, one of the biggest paradigm shifts will be the creation of private 5G networks, or Private Wireless Networks (PWN). PWNs deployed and managed by providers like Nokia, Ericsson, Huawei, and larger telcos will become quite common in 2021. PWNs will increasingly take root in enterprises involving public safety, gaming, airlines, entertainment, retail, home care, hospital settings, field service management, utilities, and similar industries.


5G is an evolution; 5G networks and devices continue to evolve. The arrival of 5G technologies will transform how consumers live and how businesses interact with customers. You should start planning for 5G now, and depending on your application, you may deploy 5G devices as early as 2021. The fifth generation of networking will revolutionize how we live and function in this world, and we are right in the middle of it.

2021 promises to be a year of exciting change. While a crystal ball would come in handy for predicting the future, we can be sure that change will come at an increasing pace, enabling new technologies to emerge. Insight into the innovations on the digital horizon will help your organization plan and stay ahead of the competition. What are your expectations for your organization from the 5G-enabled future?

See blog

Tags: Cloud, 5G, Digital Twins

Multiple Cloud Perspectives and Introduction to Azure Arc
April 15, 2022

Decoding Multiple Cloud Perspectives

In today’s day and age, business enterprises find it difficult to navigate the complex environments that run across data centers, the edge, and multiple clouds. While single cloud still holds relevance, most companies are adopting multi-cloud and hybrid cloud models. However, the terms hybrid cloud and multi-cloud are used inconsistently: a hybrid cloud combines private (on-premises) and public cloud infrastructure, while a multi-cloud strategy entails using cloud services from multiple providers based on how well each performs at certain tasks.

With multi-cloud and hybrid cloud infrastructures now a deployment reality, players like Microsoft, Google, and AWS have entered this market, propelling greater cloud innovation. All hyperscalers have built control planes for hybrid and multi-cloud deployment models that oversee the lifecycle of managed services such as Internet of Things (IoT), functions, databases, virtual machines, and observability.

I believe these control planes deliver the promise of robust hybrid/multi-cloud technologies in this ever-changing multi-cloud services infrastructure. Currently, Microsoft Azure Arc and Google Anthos are the most popular control planes in this domain. However, Microsoft Azure Arc stands out because of its unique design architecture.

In this article, I will deep dive and dissect the efficacy of Microsoft Azure Arc.

What is Azure Arc?

Azure Arc is a software solution that enables you to project your on-premises and other cloud resources, such as virtual or physical servers and Kubernetes clusters, into Azure Resource Manager.

Essentially, Azure Arc is an extension of Azure Resource Manager (ARM) that supports resources running outside of Azure. It uses ARM as a framework, extending its management capabilities and simplifying their use for customers across different hybrid and multi-cloud environments. Azure Arc extends the Azure control plane to manage resources beyond Azure, such as VMs and Kubernetes clusters, wherever they are, whether they're Windows, Linux, or any Cloud Native Computing Foundation-certified Kubernetes distro.

Organizations can manage resources even if they are not always connected to the internet. Thus, non-Azure deployments can be managed alongside Azure deployments using the same user interfaces and services, such as tags and policies.
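To make this concrete, a connected machine surfaces in Azure as an ordinary ARM resource of type `Microsoft.HybridCompute/machines`. A minimal Python sketch of how such a resource ID decomposes; the subscription GUID, resource group, and machine name here are invented for illustration:

```python
# Illustrative ARM resource ID for a hypothetical Arc-enabled server.
# The subscription GUID, resource group, and machine name are made up.
resource_id = (
    "/subscriptions/00000000-0000-0000-0000-000000000000"
    "/resourceGroups/rg-hybrid"
    "/providers/Microsoft.HybridCompute/machines/onprem-web-01"
)

def parse_resource_id(rid: str) -> dict:
    """Split an ARM resource ID into its key segments."""
    parts = rid.strip("/").split("/")
    # ARM IDs alternate segment names and values:
    # subscriptions/<id>/resourceGroups/<name>/providers/<namespace>/<type>/<name>
    return {
        "subscription": parts[1],
        "resource_group": parts[3],
        "provider": f"{parts[5]}/{parts[6]}",
        "name": parts[7],
    }

info = parse_resource_id(resource_id)
print(info["provider"])  # Microsoft.HybridCompute/machines
print(info["name"])      # onprem-web-01
```

Because the on-premises machine is just another ARM resource, the same tooling that targets Azure VMs (tags, policies, role-based access) applies to it unchanged.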


Azure Arc is a unique approach undertaken by Microsoft to accelerate innovation across hybrid and multi-cloud environments.

What Does Azure Arc Offer?

a.) Arc enables management and governance of resources that can live virtually anywhere (on-premises, in Azure, on Azure Stack, in a third-party cloud, or at the edge). These resources can be virtual machines, bare-metal servers, Kubernetes clusters, or even SQL databases. With Arc, you can use familiar Azure services and management capabilities, including Create, Read, Update, and Delete (CRUD) operations, policies, and role-based management.

b.) Arc provides a single pane of glass. Using the same scripting and tools, you can see those resources alongside everything else in Azure, and govern, monitor, and back up all these services no matter where they live.

c.) Arc enables customers to easily modernize on-premises and multi-cloud operations through a plethora of Azure management and governance services, and it supports asset organization and inventory.

d.) Arc can enforce organizational standards and assess compliance at scale for all your resources, anywhere, based on subscription, resource group, and tags.

e.) Arc also provides other cloud benefits such as fast deployment and automation at scale. For example, using Kubernetes-based orchestration, you can deploy a database in seconds by utilizing either GUI or CLI tools.

f.) Arc allows organizations to extend the adoption of a consistent toolset and frameworks for identity, DevOps/DevSecOps, automation, and security capabilities across hybrid/multi-cloud infrastructures, and lastly, to innovate everywhere.

g.) Arc supports GitOps-based configuration-as-code management, using repositories such as GitHub, to deploy applications and configuration across one or more clusters directly from source control.

h.) Arc helps organizations make the right decisions about cloud migrations. Using Azure Arc, you can gather workload data (discovery) and uncover insights to decide where your workloads should run, whether on-premises, in Azure, in a third-party cloud, or at the edge. This insight-driven approach can save significant time, effort, and migration cost.

i.) Arc offers a unified experience for viewing your Azure Arc-enabled resources, whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or the Azure REST API.
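The GitOps model in point g) boils down to a reconciliation loop: the desired state lives in source control, and an agent on each cluster continually converges the running state toward it. A simplified, dependency-free Python sketch of that idea; the workload names and specs are invented:

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Return the actions needed to converge actual state to desired state,
    the way a GitOps agent on an Arc-enabled cluster would."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))      # declared but not running
        elif actual[name] != spec:
            actions.append(("update", name))      # running but drifted
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))      # running but no longer declared
    return actions

# Desired state as declared in a Git repo vs. what is running on the cluster.
desired = {"web": {"replicas": 3}, "api": {"replicas": 2}}
actual = {"web": {"replicas": 1}, "legacy": {"replicas": 1}}
print(reconcile(desired, actual))
# [('update', 'web'), ('create', 'api'), ('delete', 'legacy')]
```

The real agent applies these actions through the Kubernetes API, but the core guarantee is the same: the repo, not an operator's shell session, is the source of truth.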

Key Features of Azure Arc

Azure Arc allows enterprises to manage the following resource types outside the realm of Azure:

1. Azure Arc for Servers

Azure Arc-enabled servers became generally available in September 2020.

Azure Arc supports servers, physical or virtual, running Windows or Linux; in that sense, Azure Arc-enabled servers are infrastructure agnostic. When connected, each machine is assigned an ID within a resource group and is treated as just another resource in Azure. Azure Arc servers enable various configuration management and monitoring tasks, making resource management easier for hybrid machines.

Additionally, service providers handling customers' or enterprises' in-house infrastructure can treat hybrid machines the way they treat native virtual machines, using Azure Lighthouse.
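One practical consequence of machines becoming ordinary Azure resources is that inventory queries stop caring where a machine physically runs. A small illustrative Python sketch of filtering a mixed fleet of native Azure VMs and Arc-enabled servers by tag, in the spirit of what Azure Resource Graph enables; the machine records are invented:

```python
# Invented inventory mixing native Azure VMs and Arc-enabled servers.
machines = [
    {"name": "az-vm-01", "type": "Microsoft.Compute/virtualMachines",
     "tags": {"env": "prod"}},
    {"name": "onprem-01", "type": "Microsoft.HybridCompute/machines",
     "tags": {"env": "prod"}},
    {"name": "aws-vm-07", "type": "Microsoft.HybridCompute/machines",
     "tags": {"env": "dev"}},
]

def by_tag(inventory: list, key: str, value: str) -> list:
    """Filter machines by tag, regardless of where each one actually runs."""
    return [m["name"] for m in inventory if m["tags"].get(key) == value]

print(by_tag(machines, "env", "prod"))  # ['az-vm-01', 'onprem-01']
```

One query, one tag taxonomy, and the on-premises box shows up next to the Azure VM.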

2. Azure Arc enabled Kubernetes

Managing Kubernetes applications with Azure Arc entails attaching and configuring Kubernetes clusters inside or outside of Azure. That can mean bare-metal clusters running on-premises, or managed clusters such as Google Kubernetes Engine (GKE) and Amazon EKS.

Azure Arc enabled Kubernetes lets you connect Kubernetes clusters to Azure to extend Azure's management capabilities, such as Azure Monitor and Azure Policy. By attaching external Kubernetes clusters, users get the same features for controlling those clusters as for Azure's own AKS clusters. But keep in mind that, unlike with AKS, maintenance of the underlying Kubernetes cluster itself is done by you.
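To picture the uniform governance this enables, here is a hedged Python sketch of evaluating one policy across clusters connected from different environments, in the spirit of Azure Policy for Arc-enabled Kubernetes; the cluster records and the rule are invented:

```python
# Invented fleet of connected clusters from different environments.
clusters = [
    {"name": "aks-prod", "distro": "AKS", "rbac_enabled": True},
    {"name": "gke-analytics", "distro": "GKE", "rbac_enabled": True},
    {"name": "onprem-edge", "distro": "k3s", "rbac_enabled": False},
]

def non_compliant(fleet: list, rule) -> list:
    """Apply one policy rule uniformly across every connected cluster
    and report the names that fail it."""
    return [c["name"] for c in fleet if not rule(c)]

# Example policy: RBAC must be enabled everywhere.
print(non_compliant(clusters, lambda c: c["rbac_enabled"]))  # ['onprem-edge']
```

The point is that the rule is written once and evaluated identically whether the cluster is AKS, GKE, or a k3s box at the edge.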

Azure Arc goes beyond a minimum viable feature approach with Kubernetes.

3. Azure Arc enabled Data Services

Azure Arc enabled data services let you run data services on your preferred infrastructure, on-premises and at the edge. Currently, Azure Arc enabled data services are available in preview for SQL Managed Instance and PostgreSQL Hyperscale; both can be run on AWS, Google Cloud Platform (GCP), or even in a private datacenter.

Azure Arc enabled data services such as Azure Arc enabled SQL Managed Instance and Azure Arc enabled PostgreSQL Hyperscale receive updates on a frequent basis, including servicing patches and all the new features in Azure. Updates are provided to you from the Microsoft Container Registry, and deployment cadences are set by you in accordance with your policies.

This way, on-premises databases can stay up to date while ensuring you maintain control. Because Azure Arc-enabled data services are subscription services, you will no longer face end-of-support situations for your databases. 

Azure Arc enabled Data Services also support cloud-like Elastic Scale, which can support burst scenarios that have volatile needs, including scenarios that require ingesting and querying data in real-time, at any scale, with sub-second response time. In addition, you can also scale out database instances using the unique hyper-scale deployment option of Azure Database for PostgreSQL Hyperscale. 

This capability gives data workloads an additional boost on capacity optimization, using unique scale-out reads and writes. Many of the services such as self-service provisioning, automated backups/restore, and monitoring can run locally in your infrastructure with or without a direct connection to Azure. 

I believe companies may find this an attractive service if they need cloud-based data tools outside of Microsoft's own cloud.

4. Azure Arc enabled SQL Server

Azure Arc enabled SQL Server is part of Azure Arc for servers. It extends Azure services to SQL Server instances hosted outside of Azure, in the customer's datacenter, at the edge, or in a multi-cloud environment.

Azure Arc vs. Azure Stack Hub

You must be wondering why Microsoft introduced Azure Arc when a hybrid cloud offering, Azure Stack, already exists. Azure Stack is a hardware solution that enables you to run an Azure environment on-premises, whereas Azure Arc is a software solution that enables you to project your on-premises and multi-cloud resources, such as virtual or physical servers and Kubernetes clusters, into Azure Resource Manager.

For applications that use a mix of on-premises software and Azure services, local deployment of Azure services through Azure Arc can reduce the communication latency to Azure, while providing the same deployment and management model as Azure. 

While Azure Stack Hub is still viable for some businesses, Azure Arc is a holistic strategy for organizations looking to run their workloads across both private and public clouds, off-premises and on-premises.

Azure Arc vs. Google Anthos vs. AWS Outposts

So, how does Azure Arc compare to other hyperscalers who are offering hybrid and multi-cloud strategies?

AWS Outposts is a fairly new solution and is currently more aligned to hybrid cloud deployment models. Google Anthos allows you to build and manage applications on-premises, on Google Cloud, and even on AWS and Microsoft Azure. Anthos does NOT make GCP services available in your own data center or in other clouds: to access GCP services (storage, databases, AI/ML services, etc.), the containers running in your data centers must reach back to Google Cloud.

Google Anthos and Azure Arc have very similar capabilities and approaches. Anthos is more focused on getting everything deployed to containers and has limited capabilities to manage VMs or servers running in your data center or in third-party clouds. Additionally, Google Anthos can currently be a costly component. Moreover, according to my analysis, Google Anthos is quite prescriptive: to run Anthos you require GKE (Google Kubernetes Engine), even when deploying outside Google Cloud.

This isn't the case with Microsoft's Azure Arc, as it goes beyond Kubernetes into areas like centralized discovery, a common toolset for security and configuration, and management of data services. It also offers more choice of Kubernetes environments, giving customers the option to choose their Kubernetes platform. Azure Arc offers more portability and less lock-in than Anthos.

Azure Arc Pricing

Azure Arc is offered at no additional cost when managing Azure Arc-enabled servers. Add-on Azure management services (Azure Monitor, etc.) may be charged differently for Azure VMs or Azure Arc enabled servers. Service by service pricing is available on the Azure Arc pricing page. Azure Arc enabled Kubernetes clusters and Azure Arc enabled data services are in preview and are offered at no additional cost at this time.

Roadmap of Azure Arc

The current roadmap as stated on the Microsoft website includes adding more resource infrastructures pertaining to servers and Kubernetes clusters. In the future, you can expect:

a.) Self-hostable gateway for API Management — allows management of APIs hosted outside of Azure using the Azure-hosted API Management service.

b.) Other database services, such as Cosmos DB, are likely to be supported by the data services feature.

c.) Furthermore, support for deploying other types of Azure services outside of Azure could be added to Arc in the future.


To encapsulate, public cloud providers are churning out services to earn a spot in your company's on-premises data center. The growing demand for hybrid cloud and multi-cloud platforms and services has prompted Microsoft to launch Azure Arc as part of its cloud strategy.

So, what does this innovation mean for IT infrastructures? With the demand for single management systems in multi-cloud environments soaring, I think it is more than a viable option, simply because, once registered with Azure, enterprises can jump on the hybrid cloud bandwagon regardless of whether they own an old version of Oracle on Linux or a modern one. I think this strategy is a game-changer, as it helps simplify complex and distributed systems across environments like on-premises, multi-cloud, and the edge. Additionally, Azure Arc is a compelling choice for enterprises that want to balance traditional VM-based workloads and modernized container-based workloads.

Azure Arc can hence distinguish itself as a management tool spanning legacy and hybrid cloud application infrastructure, propelling greater digital transformation. I feel the simplicity of Azure Arc will be enough to entice enterprises to adopt it.

See blog

Tags: Cloud, COVID19, Digital Twins

Microsoft Cloud for Manufacturing: An Architecture Perspective
April 14, 2022

Manufacturers around the globe are becoming more agile and adaptable. The COVID-hit year of 2020 is one we will not soon forget.

This disruption has led to high demand for innovation, fast delivery, and better user experience. Filled with unimaginable change caused by the pandemic, the manufacturing industry witnessed a perfect storm: significant disruption to business continuity, operational visibility, remote work, employee safety, and more. However, businesses have responded, adapted, and are recovering.

Manufacturing shows higher cloud adoption rates than other industries. As Information Technology (IT) and Operational Technology (OT) automate highly manual, specialized industrial operations, firms have shifted to the cloud to strategically store and operate on their data. Frost & Sullivan estimates that connected industrial devices are growing at a 15.5% CAGR through 2026, as manufacturers invest in Industrial Internet of Things (IIoT) architectures that extend the cloud to the edge.

Manufacturing firms are moving half of their workloads to the cloud (public or hosted private), and that share is expected to grow faster than in other industries over the next five years. Firms that leveraged the public cloud as a catalyst for innovation and a foundation to boost productivity and enable collaboration among remote and frontline employees achieved more success than those that followed traditional methods.

But, still, there is much to be done. So, I would say “Microsoft Cloud for Manufacturing” will be a game-changer for Manufacturing companies.

Five Manufacturing Trends in 2021

Before we investigate the unique value proposition of Microsoft Cloud for Manufacturing, let's look at the top five manufacturing trends of 2021.

1. Resilient Core Operations

Manufacturers need to protect the core operations of their business while building resilient and scalable operations. In addition, they should create flexibility to scale and drive new business models for resilient and diverse supply chains and supplier ecosystems.

2. Products and Services

Manufacturers will focus more on developing new products, sustainable and environment-friendly practices, new partnerships, and good corporate citizenship.

3. Operation Resilience

Manufacturing industries will increasingly turn to digital capabilities to build resilient, scalable operations. This includes operational resilience (supply chain, security, product development, risk management, etc.) and critical financial management (cost management, cash flow, spend analytics, supplier performance analysis, etc.).

4. Talent Agility Is the Key

All manufacturing industries will focus more on increasing agility in operations. Manufacturers can respond to the disruptions caused by the pandemic by investing in digital initiatives to optimize workplaces and factories. Employees will be empowered with the latest technologies, such as AI, mixed reality, and automation, which will transform how they interact with customers and shape the company's strategic direction.

5. Customer Operations

To fulfill the changing needs of customers and partners, manufacturers should create the capability to promote new channels. As a result, there will be a high requirement of an enhanced focus on customer and partner communications, security, sustainability, safety with good corporate citizenship, and customer management.

Cloud Imperative in Manufacturing

The 2020 Frost & Sullivan Global Cloud User Survey reveals the progress and challenges for manufacturers. Among the interesting findings of this year's survey:

As the digital transformation is picking up pace in the manufacturing industry, the requirement for flexible infrastructure to support applications and data is increasing. Manufacturers require modified data management tools and smooth cloud connections to pool, process, and protect the critical data assets that drive their intelligent operations. That is why manufacturing firms are shifting to the cloud to store and process the data collected.

Manufacturers use an average of two public cloud providers today, compared with just over three across all industries. This indicates that manufacturers prefer to develop deep relationships with the providers that best serve their needs. On average, manufacturing firms have placed just under half of their workloads in the cloud (public or hosted private). That number is expected to grow faster than other industries over the next five years.

Introducing Microsoft Cloud for Manufacturing (MCFM)

According to the survey, 45% of manufacturing firms use Microsoft Azure to run cloud applications and data. 94% say they are satisfied or delighted with their Azure services. 

I am sure you are wondering: if Microsoft Azure is already a leading cloud platform for manufacturing customers, then what's the buzz about "Microsoft Cloud for Manufacturing," and what makes it unique?

Microsoft Industry Clouds provide an on-ramp to the broader portfolio of Microsoft cloud services because they are designed to let customers start in the areas where demand for technology or business transformation is highest. Industry cloud offerings are about an integrated user experience, providing full support for the buyer's journey in that industry.

Microsoft Cloud for Manufacturing (MCFM) is designed to deliver capabilities that support the core processes and requirements of the industry. These end-to-end manufacturing solutions include new capabilities that seamlessly connect people, assets, workflows, and business processes, empowering organizations to be more resilient. In addition, MCFM commits to industry-specific standards and communities, such as the Open Manufacturing Platform, the OPC Foundation, and the Digital Twins Consortium, and to co-innovation with a rich ecosystem of partners.

The architecture of Microsoft Cloud for Manufacturing

Five Focus Areas of Microsoft Cloud for Manufacturing (MCFM)

Microsoft Cloud for Manufacturing (MCFM) aligns cloud services to manufacturing industry-specific requirements. It gives customers a starting point in the cloud that easily integrates into their existing operations. MCFM will help manufacturers connect experiences across their operations, workforce, design and engineering processes, customer engagements, and the end-to-end value chain. 

It focuses on the following five key areas where manufacturers are accelerating investments.

1. Transforming the Workforce 

MCFM is helping manufacturers transform their workforce to gain more productivity by: 

2. Building More Agile Factories

According to a pre-pandemic PwC study, 91% of industrial companies have invested in digital factories. However, only 6% of all respondents describe their factories as "fully digitized."

MCFM will help in building more agile factories by the following: 

3. Creating More Resilient Supply Chains

This Microsoft solution, via the Dynamics 365 Supply Chain Management app, gives new suppliers access to guided, easy-to-use, role-based tools to qualify a supplier and onboard the supplier's IT team for API data integration. It supports a consistent workflow and enables manufacturers to maintain their supplier business relationships and API data integrations.

4. Turbocharged Digital Innovation

Microsoft Cloud for Manufacturing accelerates innovation by allowing manufacturers to create, encourage, and validate sustainable operations and products by:

5. Engaging Customers in New Ways

The pandemic accelerated the need for product-as-a-service, and customer service organizations now offer dynamic service using AI. As a result, there is a high need for a fully connected system that provides a single view of customers and devices. MCFM aims to improve consumer satisfaction, engagement, and business value by:

Partner Play

The Microsoft Partner Network invests in its partners and offers them the resources, programs, and tools to help them train their teams, build innovative solutions, differentiate in the marketplace, and connect with customers. With access to a broad range of products and services, MCFM partners are empowered to build and deliver solutions that can address any customer scenario. In addition, Microsoft enables the digital transformation of an intelligent cloud to empower its partner ecosystem to achieve more.

How SI Needs to Change 

System integrators (SIs) need to support the traditional business model while being ready to help customers with new skills. The market is changing quickly, and SIs that are not prepared for it will be in serious difficulty. It has therefore become imperative for SIs to develop the skills required to be credible, reliable partners for the industries they serve.

The pandemic has forced organizations to become more agile and adapt quickly to changing market dynamics. Gartner’s research found that over two-thirds of corporate directors accelerated their digital business initiatives during 2020 and planned to increase their spend on IT and technology by an average of 6.9%. Operational flexibility has been – and will continue to be – a significant differentiator. According to the Institute for Supply Management, 97% of US businesses have been or will be impacted by supply chain disruptions due to the pandemic.

Suggested changes to the SI business model and solution design approach, moving from long-drawn custom development to:

  1. Faster and lean migration to Public Cloud and Distributed Cloud for cost savings and IT agility
  2. Outcome-based migration, modernization, and Managed Services for Cloud Workloads
  3. Invest in developing Regional Manufacturing, HSE, and Privacy compliance controls and associated automation for Security and Compliance as Code bundled with Managed SIEM and SOC solution or complete Managed Cybersecurity Services bundle
  4. Avoid bespoke solutions and adopt Composable Solution architecture based on the foundation provided by Manufacturing Cloud Solutions (e.g., MCFM) for innovation in business models; new resilient and sustainable supply chains; digitally connected plants; predictive maintenance; re-imagined products and services with Digital Twins and Digital Threads; re-imagined partnerships and customer experience (AR/VR/MR); PLM; customer onboarding; personalized marketing and pricing/subscription-based bundling; and integrated process integration and optimization.
  5. Accelerated development using Low Code/No Code platforms and Hyper-Automation:
    In addition to providing significant cost savings, these enterprise low-code solutions help streamline, automate, accelerate, and increase the efficiency and performance of the app development process. As a result, enterprises no longer need to rely solely on professional software developers.
  6. Deliveries aligned to faster time to market and market innovation on top of the industry Common Data Model: the Common Data Model simplifies data management and app development by unifying data into a known form and applying structural and semantic consistency across multiple apps and deployments.
  7. Bring the best niche industry partner ecosystem relationships and expertise to Composable Solutions: this concept enables computing, storage, and networking devices to be pooled and used as needed without physical reconfiguration, opening the door to a nimbler environment in which resources are not only accessed quickly but also used precisely.
  8. Bring in cross-industry integration and learnings, and develop new decentralized-computing-enabled business models and solutions.


The Covid-19 pandemic has taught us that businesses must accelerate their digital journey to thrive in the future. Manufacturers that have already begun their digital transformation will soon see dramatic gains in productivity, business value, employee engagement, and customer experience, building a more resilient and sustainable future as a result. To remain competitive, manufacturing firms must treat the cloud not as an end but as a foundation that enables them to quickly and effectively adopt any technology, today and into the future.

Microsoft Cloud for Manufacturing will accelerate the journey toward a resilient and sustainable future. It will allow manufacturers to transform their workforce, build agile factories, and create resilient supply chains without sacrificing innovation or essential resources. Microsoft Cloud for Manufacturing will be a game-changer for manufacturing companies.

See blog

Tags: Cloud, COVID19, Digital Twins

Mainframe Modernization to Cloud
April 12, 2022

Despite the efficacy and benefits of cloud-native development, the mainframe remains a core and valuable enterprise technology for large customers (especially in finance, insurance, manufacturing, retail, and the public sector), largely because mainframes offer resiliency, reliability, and trusted security. But that isn't enough to sustain them in today's disruptive business environment.

Faster time to market and better employee and customer experiences have become business imperatives, and mainframe systems are a significant barrier to achieving them. The flexibility and versatility of cloud DevOps overshadow the mainframe's ability to innovate in the current business environment. Cost considerations and changes in the workforce also impact the mainframe's viability as a long-term technology solution.

Moreover, the Covid-19 pandemic has highlighted the need for Mainframe modernization:

  1. Skills/talent availability, remote management, and access for handling and running the Mainframe System.
  2. Scalability and flexibility are required for unprecedented demands (user and transaction volumes).
  3. Remote monitoring and management for system support.

Public Cloud platforms like Microsoft Azure offer Mainframe alternatives capable of delivering equivalent functionality and features, thus eliminating the problems and costs incurred from utilizing a legacy Mainframe system.

Read through this article to learn more about the challenges and benefits of mainframe modernization to the cloud and to understand the right modernization approach to kickstart your migration journey.

What Is Mainframe Modernization?

Mainframe modernization is the process of migrating or improving mainframe-based IT operations to reduce IT spending and increase efficiency.

In the realm of improving, we can define Mainframe Modernization as the process of enhancing legacy infrastructure by incorporating modern interfaces, code modernization, and performance modernization. In terms of migration, it is the process of shifting the enterprise’s code and functionality to a newer platform technology like cloud systems. The strategy employed to modernize Mainframe structures relies on factors like business/customer objectives, IT budgets, and costs of running new technology vs. costs incurred from not modernizing.
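Those factors can be weighed explicitly when choosing between improving in place and migrating. A toy Python sketch of such a weighted comparison, where the factor names, weights, and 1-5 scores are all invented for illustration:

```python
# Invented decision factors, each scored 1-5 per option.
# factor: (weight, score for modernize-in-place, score for migrate-to-cloud)
factors = {
    "business_objectives": (0.4, 2, 5),
    "it_budget":           (0.3, 3, 4),
    "cost_of_inaction":    (0.3, 2, 4),
}

def score(option_index: int) -> float:
    """Weighted score for one option (0 = modernize in place, 1 = migrate)."""
    return round(sum(weight * scores[option_index]
                     for weight, *scores in factors.values()), 2)

print(score(0), score(1))  # the higher-scoring option wins
```

With these made-up numbers, migration wins; the value of the exercise is forcing the weights and scores into the open, where stakeholders can argue about them.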

Challenges of Mainframe Systems

Mainframe disadvantages result in high costs that are increasingly difficult to justify. Common challenges that businesses face with mainframe legacy systems are:

  1. Shrinking talent pool of mainframe expertise: According to a Forrester study commissioned by Compuware, 23% of the mainframe workforce has been lost, and 63% of those workers have not been replaced. This can leave mainframe environments vulnerable to a shrinking skills base and a material loss of knowledge about how these legacy systems work.
  2. High costs for hardware/storage: Mainframe systems are costly compared with the modern scale-out, on-demand, pay-as-you-go computing model offered by public cloud providers.
  3. Outdated programming language: Computing languages used by most Mainframes (COBOL, Fortran, PL/I, Natural) are considered obsolete. This makes finding or building innovative solutions for the Mainframe difficult.
  4. Tightly coupled integrations: Tightly coupled, complex mainframe architectures with many interdependencies make solutions difficult to maintain.
  5. Resistance to change
  6. Lack of documentation: Most enterprises today struggle with a lack of documentation (and sometimes do not even have the original source code) for mainframe systems.
  7. Challenges with business continuity and DR deployment for Mainframe Systems
  8. Platform issues and Inflexibility
  9. Lack of Business Agility offered by Mainframe systems (e.g., the need for faster time to market, new regulatory compliance, new SLAs, new business models, and integration with new core business systems developed outside Mainframes) — resulting in the inability to keep up with digital disruption.
  10. Limited Test environments
  11. Lack of Test Automation and DevOps processes

Due to the challenges mentioned above, most customers are stuck with legacy Mainframes, with technology debt increasing every year.

While mainframe modernization delivers significant benefits, it also has far-reaching impact on your technology, workforce, customer experience, and other related activities.

Benefits of Mainframe Modernization to Cloud

Cloud can offer economies of scale and new functions that are not available through mainframe computing. The benefits of cloud technologies and the law of diminishing returns in the Mainframe are calling for an increased demand for migration strategies. By taking a thoughtful, phased approach to modernizing your mainframe environment, you can overcome common obstacles and enjoy some of the advantages of cloud computing without putting core functionality at risk.

  1. Reduced cost: With modernization, you can reduce MIPS (Millions of Instructions Per Second) charges and spending on hardware and on support personnel (who are becoming scarcer and more expensive). As mentioned earlier, the greater the load on Mainframe structures, the higher the incurred costs.
  2. Faster Time to Market
  3. Unlock your Data: Data is the new currency for business and key for future growth. Businesses can unlock their data’s true potential (data as an asset for data-driven decision-making) with cloud-scale data warehouses, AI, and ML models.
  4. Support for innovation and experimentation required to sustain business in a digital age disrupted by unicorns and startups with new business models and alternative products/services.
  5. Business Agility: By modernizing your Mainframe environment by migrating to the cloud, you can reap the benefits and capabilities of this new system that ensures increased agility, scalability, and cost-efficiency.
  6. Adaptability and Faster Integration: With new application development happening worldwide, it is imperative to integrate modern technologies to sustain businesses; industry surveys suggest that some 85% of organizations view application development as a driving force of business growth. Embracing new technologies like IoT, Virtual Reality, etc., requires cloud infrastructures that offer platforms to implement them.
  7. Competitive Edge: Commencing your modernization journey affords you a competitive edge that puts you ahead in the market.
  8. Modern Skills and Productivity: Instead of halting your businesses and operations by suffering from skills risk or maintaining legacy systems, you can fully exploit the capabilities of cloud computing systems that offer you enhanced security and the means to work remotely.
  9. Better Employee and Customer Experience
  10. Near Realtime Business Continuity and DR

Considerations of Mainframe Modernization to Cloud 

Business Case Evaluation

The prerequisites in strategizing for Mainframe Modernization to Cloud or a co-existence journey should be based on a holistic and strategic view. According to a study published by Accenture, enterprises’ primary motivation to move to the cloud is the cost-efficiency benefit. Hence, to truly reap the benefits of cost-efficiency, businesses must consider the following steps laid down by the study to assess their mainframe-to-cloud cost business case:

  1. Assess the opportunity cost of not modernizing (e.g., missing out on delivering new features to customers).
  2. Analyze current mainframe costs: An enterprise shouldn’t necessarily consider the recharge costs here but should consider real mainframe spending, which includes hardware, software, DC space, electricity, and workforce supporting the Mainframe Systems.
  3. Define target state architecture: A business should define its desired architecture and mainframe modernization approach. They can genuinely assess and calculate the run costs such as — solution components, software, cloud providers, and internal headcount.
  4. Add transformation costs: Transformation costs should include — assessment, migration/modernization services cost, other third-party tools used in migration/modernization, tools required in the target state, internal changes (For example, costs incurred from management changes from transitioning to a new system), change management cost, any 3rd party costs for compliance and audit for the target architecture and managed services cost for the new solution.
  5. Evaluate the results from steps 1–4 to understand the effectiveness of your migration. Assess the TCO and ROI of modernizing to a new cloud system.
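The five steps above can be reduced to a back-of-the-envelope TCO/ROI comparison. Here is a minimal sketch in Python; the cost categories are the ones named in the steps, while the function name and all figures are purely hypothetical assumptions, not benchmarks:

```python
# A minimal sketch of the five-step business case above.
# All figures are hypothetical placeholders, not benchmarks.

def modernization_business_case(
    mainframe_annual_cost,   # step 2: hardware, software, DC space, power, staff
    cloud_annual_run_cost,   # step 3: target-state solution, cloud, headcount
    transformation_cost,     # step 4: assessment, migration, tools, change mgmt
    opportunity_cost,        # step 1: annual cost of NOT modernizing
    years=5,                 # planning horizon
):
    stay_tco = (mainframe_annual_cost + opportunity_cost) * years
    move_tco = transformation_cost + cloud_annual_run_cost * years
    savings = stay_tco - move_tco
    return {
        "stay_tco": stay_tco,
        "move_tco": move_tco,
        "savings": savings,
        "roi": savings / transformation_cost,  # step 5: TCO and ROI
    }

case = modernization_business_case(
    mainframe_annual_cost=4_000_000,
    cloud_annual_run_cost=1_500_000,
    transformation_cost=3_000_000,
    opportunity_cost=500_000,
)
print(case)
```

Even a toy model like this makes the trade-off explicit: modernization pays off only when the run-cost delta plus the opportunity cost outweighs the one-time transformation cost over the planning horizon.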

7R Approach for Mainframe Modernization

It’s clear that organizations should at least take steps to modernize their mainframe environment to meet current market conditions and customer expectations. The question is, how to do it and what might be getting in the way? This approach should consist of a planned strategy wherein you analyze all your applications and chalk out a systemic application strategy. 

This approach should be navigated using the 7R approach to ease your modernization process:

  1. Retain — Retain some of the existing mainframe capabilities and remediate or amend any specific pain points.
  2. Replace — Replace with a combination of COTS (commercial-off-the-shelf) or SaaS. Migrate data to cloud infrastructure.
  3. Rehost — Rehost to another less expensive location to gain cost benefits without carrying the risk resulting from programming language changes.
  4. Re-platform — Migrate to new platform/operating system infrastructure with minimal changes to code/programming languages.
  5. Refactor — Restructure existing code, or convert it from a legacy programming language to a modern one, to reduce technical debt and skills risk. You can then easily exploit new capabilities of modern operating systems.
  6. Rebuild — Rewrite components of your application based on new requirements without hampering individual specifications of it.
  7. Reimagine — Reimagine your business agility and performance by exploiting new cloud computing features. Use domain-driven design to rewrite your applications based on new requirements. This frees you from anchoring your application to current needs, allowing you to modernize both your technology and obsolete business processes.
A few points to keep in mind:

  • To be successful with Mainframe Modernization, enterprises need to avoid risky “big bang” conversions and migrations.
  • Each Mainframe application might require a different application disposition (a separate path on the 7R model).
  • For large monolithic apps, the journey might be a mix of 2–3 disposition paths (for example: Refactor the database, Rehost/Replatform the UI, and decouple the business logic).
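Choosing a disposition per application can be pictured as a simple portfolio-classification pass. This is a hedged sketch with a toy rule set; real assessments weigh many more attributes, and the attribute names and rules here are illustrative assumptions:

```python
# Hedged sketch: assigning a 7R disposition per application from a few
# toy attributes. Attribute names and rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    business_value: str        # "low" or "high"
    change_rate: str           # "low" or "high"
    has_saas_equivalent: bool = False
    is_monolith: bool = False

def disposition(app):
    if app.has_saas_equivalent:
        return ["Replace"]
    if app.business_value == "low" and app.change_rate == "low":
        return ["Retain"]
    if app.is_monolith:
        # Large monoliths often end up on a mix of 2-3 paths
        return ["Refactor (database)",
                "Rehost/Replatform (UI)",
                "Rebuild (business logic)"]
    if app.change_rate == "low":
        return ["Rehost"]
    return ["Refactor"]

portfolio = [
    App("batch-reporting", "low", "low"),
    App("hr-system", "high", "low", has_saas_equivalent=True),
    App("core-banking", "high", "high", is_monolith=True),
]
for app in portfolio:
    print(app.name, "->", ", ".join(disposition(app)))
```

The point of the sketch is the shape of the output, not the rules: every application gets its own path, and the monolith gets a mix of paths rather than a single disposition.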

During the Modernization journey, pay additional attention to the following aspects and common obstacles:

1. Begin with use cases that increase business value through improved agility and flexibility. Your efforts should gain credibility from smaller, successful initiatives.

2. The perception that there is no need for near real-time business continuity and DR.

3. There is zero room for error. Any outage during migration could have devastating consequences in terms of disrupting mission-critical operations.

4. The perception that the Mainframe is inherently secure.

5. Resistance to change: In a recent survey, 56% of respondents reported that their organizations resisted modernization of legacy systems. This stems from being very comfortable with existing systems. Also, because certain mission-critical applications are much more difficult to modernize owing to their interdependencies and complexities, organizations choose to leave them on their legacy systems.

6. The risk-averse nature of business stakeholders.

7. Going against the proven functionality and the “It works” effect of the Mainframe: Mainframes have proven functionality, and businesses that use them sit in a “comfort zone.” The attitude of “they’ve always been there, and they’ve always worked” can make businesses complacent.

8. Securing an executive sponsor beyond IT, and the availability of funds.

Typical Mainframe Migration/Modernization Process

The process for migration/modernization should entail these steps:

  1. Discover
  2. Assess and plan
  3. Proof of architecture
  4. Migrate/modernize
  5. Test
  6. Change management
  7. Go live and transition

During the migration/modernization process, you need to ensure the following:

1. Avoid a big-bang approach.

2. Perform detailed analysis and assessment.

3. Map the migration/modernization approach with the right partner: one with deep expertise in Mainframe systems and the target Cloud platform, and healthy relationships with Mainframe modernization ecosystem partners such as Raincode, Modern Systems, LzLabs, Micro Focus, Blu Age, TmaxSoft, etc.

4. Identify the right use cases to gain critical momentum and credibility: those that improve agility and flexibility.


To summarize, there is no ‘one-size-fits-all’ strategy when it comes to Mainframe Modernization. Each business case is unique, and enterprises must take a well-thought-out and thorough approach to modernizing their legacy Mainframe systems.

While many large, well-tenured enterprises still run on the Mainframe, modernizing remains a business-critical strategic imperative in today’s technological environment. It helps them keep abreast of ever-changing customer expectations and attain a more agile and straightforward architectural framework.

Public Cloud, as noted, is the way to modernize your legacy Mainframe environment, as cloud computing can offer up to 70% cost savings, plus the agility and resilience to accelerate Digital Transformation. There is no magical ‘lift-and-shift’ approach or silver bullet for modernizing your Mainframe systems or applications, but it can be done with a well-planned and well-researched cloud migration strategy.

I say the time to modernize is NOW and that the cloud benefits should warrant a re-think for companies that are still on the fence.

See blog

Tags: Cloud, COVID19, Digital Twins

12 Secrets for Successful Digital Transformation
April 11, 2022

Digital Transformation reforms the way an enterprise functions. Everest Group research found that 73% of companies failed to see any addition to their business value from their digital transformation efforts. In this blog, I will reveal 12 secrets to a perfectly executed digital transformation journey.

1. Define Your Ambition

Success starts when your organization can answer questions like — What is the desired outcome of your transformation? Are you looking for more sales, revenue, cost-saving, or selling to new/existing customers? Where is your transformation headed in the future?

When dealing with these questions, you need to:

  • Articulate your digital business strategy by ensuring broad organizational alignment.
  • Rethink your business for the digital age.
  • Define governance and prioritization for how the enterprise will balance transformation objectives.
  • Create a compelling communication strategy to sell the transformation story to the organization.
  • Clearly define the speed of adoption and the levels of risk (financial, regulatory, reputational) you are willing to take in this journey.

2. Establish the Office of Chief Digital and Innovation

The CDIO, or Chief Digital and Innovation Officer, is the central leader and integrator in the digital transformation process. The CDIO serves as the cardinal point for decision-making in hard and complicated situations that involve aligning cross-department efforts, resolving conflicts, and orchestrating the rollout of digital initiatives and capabilities. Here is why the CDIO should hold the reins on this project:

  • CDIO helps in fully integrating technology with the business and solving the performance gaps within and between various business units of the enterprise.
  • They help create a common language and decision-making framework.
  • They help establish an Innovation office to identify POCs and Pilots. These include innovation at the following levels:
      - Enterprise innovation by means of intelligent simulation
      - Functional innovation by means of automated routine work
      - Edge innovation by means of AI-augmented workflows
  • They identify separate workstreams and leads for:
      - Digitization / Digital Optimization
      - Employee Productivity
      - Customer Experience
      - Operations Excellence

3. Treat Digital Strategies as an Integral Part of Your Business Strategies

Digital transformation is more about strategy and mindset than about technology. It helps organizations evolve their operations and contribute towards the evolution of the business itself. Digital transformation is a long-drawn journey — the goals and target metrics will keep shifting and moving constantly. To ensure inclusion:

  • Establish your current baseline for Digital Maturity and develop a strategic roadmap.
  • Establish a simple but consolidated view of Digital Initiatives.
  • Quantify your digital progress and use Key Performance Indicators (KPIs) as metrics. Additionally, it would be helpful to create real-time dashboards for oversight.
  • Define how your governance will evolve to reflect the unique needs of your specific initiatives.

4. Focus on the Customer Journey

When developing an organization’s vision, I suggest being customer-first, with a focus on what the customer journey will look like. This opens the way to leveraging technology to create more relevant ways to engage with customers and deliver exceptional customer experience at all touchpoints along the journey. To create an effective customer relationship:

  • Create a deeper emotional connection with your customer.
  • Evaluate the impact of your digital business transformation strategy on your customers and your industry.
  • Rethink the meaning of customer loyalty, and learn to engage with customers as a digital-first business.
  • Establish Customer engagement hubs as a part of Digitally enhanced experiences for your customers.
  • Design your frameworks while ensuring that emerging customer needs are at the center of your efforts.
  • Monitor the environment to assess how and when changes occur that impact your digital ambitions.

5. Shift to Product Mindset and Everything-as-a-Service (XaaS) Model

What enables a digital transformation? My first and foremost guess would be a mindset shift. As enterprises look for effective, manageable entry points into digital transformation, they need to unleash their Inner Futurist, think big, and Shift to Product Mindset and Everything-as-a-Service (XaaS) model. Here's how to do it:

  • Business leaders should consider “dream customer journey scenarios” when setting up their long-term strategies.
  • Evaluate disruptive market and technology innovations in your industry to ensure your knowledge remains current.
  • Establish a culture of experimentation and shift to Product Mindset, Product consumption with Everything-as-a-Service (XaaS) model.

6. Employ a Proper Framework to Help Your Workforce Through the Change

  • Educate the workforce about the benefits of digital transformation. Change is usually met with resistance and it is important to warm your workforce to the upgrades that digital transformation will bring about.
  • Create a psychologically safe environment for the workforce and be transparent about the associated changes. It is crucial that you coordinate work according to the hierarchical structure of the organization and DO NOT surprise workforce and executive committee members with unplanned changes.
  • Drive the Change Mindset — Beat homeostasis and prepare your workforce to deal with change effectively. This can be enabled by properly educating your workforce about the expectations of the transition and how it will help their respective departments.
  • Find the right "champion for change".

7. Be Agile. Fail Fast, Learn Fast, Deliver Fast!

  • Establish the building block of DevOps/DevSecOps and consider Low Code/No-Code platforms as a viable alternative.
  • Stack your short-term goals along with your big projects and be agile with their deployment. Focus on the pilots, learn from the results, and apply changes accordingly. When you focus on the tiniest details of your pilot, you observe the red flags at the elementary level and learn from your mistakes.
  • Scope continuous improvements to keep your processes up-to-date. This will include the following measures:
      - Digital optimization: Lean and Agile workflows
      - Workforce analytics
      - Business Process analytics
  • Utilize application leaders who are responsible for developing digital business strategies to accomplish the following tasks:
      - Hack the culture with small but powerful steps, such as re-defining teams, roles, and personas to drive greater collaboration and innovation.
      - Embed a product-first thought into the development processes to encourage improvement and delivery along with customer value.
      - Invest in new multi-experience and low-code/no-code technologies that maximize Mesh App and Service Architecture (MASA).

8. Security as an Enabler For Digital Transformation

Gartner predicted that 60% of digital businesses would suffer major service failures by 2020 due to the inability of security teams to manage digital risk. The world has shifted to a virtual presence and companies are faced with compliance and regulatory challenges.

Customers are much more tech-savvy today and prefer interacting with enterprises that are ethical, compliant, and prioritize security. For enterprises to be successful in the journey, security CANNOT be an afterthought. Security has to be an elementary element of the design process. If you take my word, this will help you reduce unnecessary costs and minimize the need to re-engineer solutions late in the game.

9. Data as an Asset

“Data is the new currency of digital businesses.”

The leadership in an organization needs to invest resources in generating data that can be shared across departments and managed to create value in the digital era. Here's how mining and managing data will aid in your operations:

  • It helps with identifying hidden patterns and establishing a framework to capture the trend into many functions and contexts in the organization.
  • Customer-focused functions can use the information from data to enhance customer lead conversion, gain new customers, and retain existing clients.
  • Digital-native companies can invest resources to generate data that offers greater insight into product creation and future value-creation opportunities.
  • Establish the right mix of Data Analytics solutions (to answer What happened? and What’s happening?) and AI-powered Predictive Analytics (to predict What will happen? What If?).
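The split between Data Analytics (“What happened?”) and predictive analytics (“What will happen?”) mentioned above can be illustrated in a few lines of standard-library Python. The revenue series is hypothetical, and an ordinary-least-squares trend line stands in for a real AI model:

```python
# Hedged sketch: descriptive analytics answers "What happened?";
# predictive analytics extrapolates "What will happen?".
# Hypothetical monthly revenue; a least-squares line stands in for a real model.
from statistics import mean

revenue = [100, 104, 109, 115, 118, 124]  # last six months (hypothetical)

# Descriptive: what happened?
print("average:", round(mean(revenue), 1), "total growth:", revenue[-1] - revenue[0])

# Predictive: fit y = a + b*x by ordinary least squares, forecast next month.
xs = list(range(len(revenue)))
b = (sum(x * y for x, y in zip(xs, revenue)) - len(xs) * mean(xs) * mean(revenue)) / (
    sum(x * x for x in xs) - len(xs) * mean(xs) ** 2
)
a = mean(revenue) - b * mean(xs)
forecast = a + b * len(revenue)
print("next-month forecast:", round(forecast, 1))
```

The descriptive half only summarizes the past; the predictive half commits to a model and an extrapolation, which is where the trade-off between model accuracy and cloud resource consumption comes in.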

10. Own Your Digital Transformation

It can be quite tempting to fall into the vendor trap, especially with different moving parts, tools and techniques, technology platforms, and stakeholders pulling you in different directions. This is how you can own your destiny:

  • Manage vendor innovation.
  • Move beyond typical outsourcing to co-Innovate, co-invest models with System Integration partners or vendors.
  • Develop your capabilities in CDO, business units, and IT organization.
  • Measure all feedback — good and bad

11. Establish a Technology Roadmap and Strategic Technology Partnerships

  • Beware of falling into the trap of enticing technology (even the buzzwords!).
  • Align your Digital Transformation with a technology roadmap and identify the gaps.
  • Deliver innovation in terms of IT tools and design solution experimentation.
  • Support technology alignment for digital business transformation.
  • Develop strategic Technology partnerships instead of considering technology providers as vendors.
  • Develop a strategic partnership with a System Integration partner with the right outcome-based delivery model.
  • Define your Multi-Cloud/Distributed Cloud strategy.
  • Evaluate the use of open platforms.
  • Leverage Low Code/No Code Platforms.
  • Reduce your technical debt by adopting Digital Decoupling and 7R application disposition model and not just remortgaging with Robotics Process Automation (RPA).
  • Adopt Human Centric Design.
  • Don’t forget to have fun and celebrate success along the way.

12. Develop the Partnership Ecosystem for Network effect

  • Seek out strategic partnerships that align with your transformation vision and increase the speed and quality of your initiatives.
  • Digital disruption is blurring the lines between traditional industries. Most markets are being dramatically reshaped as agile new entrants (e.g., fintechs, startups, niche cloud-born companies) put increasingly empowered customers in control and realize the benefits of new business models. Leverage the network effect of strategic partnerships to build a powerful ecosystem that advances digital transformation, encourages collaboration, and delivers meaningful outcomes to customers.


While the failure of certain pilots in the digital transformation journey doesn’t spell the end of the process, it does cost the enterprise: money, wasted resources, wasted workforce effort, and delayed turnaround times. The remedy is to closely observe and analyze the results that the digital transformation processes yield.

The key to solving these issues lies in the tiny details of the project. Analyze the mistakes while keeping agility in mind — Fail Fast, Learn Fast, Deliver Fast.

See blog

Tags: Cloud, Digital Transformation, Digital Twins

The Green Lining in Cloud Computing
April 08, 2022

In current COVID times, the mantra for success includes a healthy mix of innovation with thoughtfulness and corporate social responsibility. As the efforts toward digital transformation are accelerating, so are the pressures to operate as responsible businesses. More and more CXOs are working on striking the right balance between accelerating digital transformation and their sustainability strategy, in addition to adopting a more digitized stand with a “cloud-first” approach.

Sustainability in the Cloud

Companies have historically driven financial, security, and agility benefits through the cloud, but sustainability is becoming imperative. According to the United Nations Global Compact-Accenture Strategy CEO Study on Sustainability, more than 99% of CEOs from large companies now agree that “sustainability issues are important to the future success of their businesses.” Two-thirds of the CEOs view the fourth industrial revolution (4IR) technologies as a critical factor for accelerating socio-economic impact. 59% of CEOs say that they are deploying low-carbon and renewable energy across their operations today. 

By embracing the power of a sustainable cloud, CXOs can alleviate these pressures and discover new sources of innovation and growth. Over the next five years, every enterprise will find itself having to respond to pressure around improved environmental, social, and governance (ESG) efforts. This pressure will come from diverse stakeholders, notably investors, regulators, and supply chain partners.

Moreover, as customers and consumers increasingly expect brands to act, organizations must now demonstrate that they are purposeful about sustainability, hold strong ethical standards, and operate responsibly in everything they do.

The UNGC-Accenture study also shows that 44% of CEOs see a net-zero future for their company in the next 10 years. While “Sustainability” is the favorite keyword of the season, leadership is moving beyond a “nice-to-think-about” approach and beyond simply buying carbon offset credits, and is trying to invest in a technological infrastructure that drives innovation as well as thoughtfulness.

A study conducted by Microsoft, Accenture, and WSP Environment & Energy shows that organizations can achieve an energy and carbon (CO2) emissions reduction of 30 to 90 percent by switching to cloud computing. Small businesses benefit the most by moving to the cloud, but even large companies can see substantial net energy and carbon savings.

Is it possible that migrating to cloud computing might help your business achieve its sustainability goals and positively affect its bottom line?


Cloud migration can deliver reduced costs and carbon emissions if it is approached from a sustainability perspective.

While the public cloud can help with an organization’s Sustainability goals, one needs to have a focused approach to cloud migration. This can help reduce global carbon (CO2) emissions, drive greater circularity and result in more sustainable products and services.

5 Considerations for a Sustainable Cloud-First Journey

1. Define Your Sustainability Goal and Strategy

  • The sustainable cloud journey involves different levels of ambition — the greater the ambition, the greater the benefits.
  • Establish a shared definition for Sustainability goals, and ambitions — Without a shared definition and perspective, the enterprise reduces the likelihood of a strategic and coherent approach, resulting in wasted time and money. Develop a shared and clear understanding of concepts and approaches that enable sustainability, and scope the enterprise’s approach, including corporate social responsibility, greenhouse gas (GHG) emissions, carbon neutrality, and the circular economy.
  • Establish governance and management processes
  • Have a dedicated team driving Sustainability agenda across corporate functions

While defining your strategy, take advantage of the current heightened focus on environmental sustainability to push disruptive approaches. Use the current push of “new normal” to accelerate your Cloud adoption and revisit your:

  • Security policies
  • Application modernization disposition strategy
  • Cloud Operating model
  • Cloud strategy

2. Select the Right Cloud Provider

The second step in the journey begins with selecting the right carbon-thoughtful provider. Cloud providers set different corporate commitments towards sustainability, which in turn determine how they plan, build, power, operate, and retire their data center. Carbon emissions can differ widely across providers even though all providers have focused on driving down energy consumption to standard benchmarks.

It is important to choose the right cloud partner, one whose corporate commitments to sustainability are compatible with your enterprise. All major hyperscalers have published their Sustainability goals and CO2 emissions facts: Microsoft, AWS, and Google.

In most cases, cloud providers also have greater renewable energy mixes than cloud users and minimize data center carbon footprints through renewable energy. For a typical organization, Public Cloud migrations can lead to an impressive 60%+ energy reduction and 80%+ carbon reduction.
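As a rough illustration, the reductions cited above translate into simple arithmetic. This sketch uses hypothetical baseline figures, not benchmarks:

```python
# Rough illustration of the reductions cited above (60%+ energy,
# 80%+ carbon). Baseline figures are hypothetical, not benchmarks.

baseline_energy_mwh = 1_000   # annual on-premises data-center energy use
baseline_carbon_t = 450       # annual CO2 emissions, tonnes

energy_after = baseline_energy_mwh * (1 - 0.60)   # 60% energy reduction
carbon_after = baseline_carbon_t * (1 - 0.80)     # 80% carbon reduction

print(f"energy after migration: {energy_after:.0f} MWh/yr")
print(f"carbon after migration: {carbon_after:.0f} t CO2/yr")
```

Note that the carbon reduction outpaces the energy reduction because cloud providers also have greener energy mixes, not just more efficient hardware.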

While selecting your Cloud partner, look for openness, transparency, and level of support provided for sustainability goals. Simple things can make a difference:

  • Commitment to technological innovation applied with Sustainability in Public Cloud Datacenters
  • Circular value chains of cloud provider’s hardware
  • Customer-facing services, like carbon calculators or granular cloud lifecycle emissions reporting, help companies monitor their cloud footprint. Example: Microsoft Sustainability Calculator
  • Partnership models like Transform to Net Zero

3. Plan Your Cloud Migration With the Right Capabilities

To achieve your sustainability goals, you need to plan your sustainable cloud migration carefully. For an enterprise that is new to this journey of digital transformation, it is important to get a consultant or an expert on board.

Some key areas that you might want to focus on are:

3a. Infrastructure as a service (IaaS) migration

  • Establish a balance between IaaS (Infrastructure as a Service) and CaaS (Container as a service) as a destination deployment model
  • Reduce the overlap of on-premise and cloud infra as much as possible — During your migration, evaluate approaches that include Simulation/Emulation based solutions for accelerated migrations and MIPS reduction.
  • Right-sizing
  • Plan the migration using the right level of automation

3b. Sustainable Software engineering practices

  • While IaaS migrations can reduce carbon emissions by more than 80% compared with conventional infrastructure, reductions can be pushed even higher, up to 95%, by designing applications specifically for the cloud.
  • Selecting the appropriate fit-for-purpose programming language and capabilities instead of best of breed options
  • Choosing the right application model
  • Level of customization vs adoption of standard capabilities
  • Choosing the right AI models, approach for model training, the balance between the accuracy of the analytical model and cloud resource consumption
  • Choosing the right Automated testing tools

You need to ensure that your solution is capable of the following factors -

  • Dynamic Provisioning — With dynamic provisioning, the cloud provider matches server capacity to your organization’s actual demand on an ongoing basis, so you are not running over-provisioned, idle infrastructure.
  • Choosing Multi-Tenancy
  • Selective Server Utilization — Footprint can be reduced by focusing on portions of a server’s capacity that are subjected to higher workloads. By following this procedure, cloud providers can encourage efficient processes along with producing a smaller infrastructure footprint.
  • Select the right partner and ISV ecosystem — Choose your partner and ISV solution with the right level of values in a circular economy — the one where the resources are kept functional for as long as possible. 

4. Develop Your Cloud Operating Model and Optimize Your Cloud Usage

The cloud operating model is your organization’s blueprint for delivering on its cloud strategy. I would suggest redefining your cloud operating model and adding Sustainability as a principle in it. The three pillars of your cloud operating model (People, Technology, and Processes) should co-exist with green initiatives.

  • Your organization should be well-versed with the concepts of sustainability and recycling. The enterprise should utilize processes that comply with sustainable governing policies and are in line with its broader sustainability goals.
  • Ensure Cloud Cost Optimization is an integral part of the Cloud operating model
  • Add Sustainability considerations in your Cloud cost optimization controls
  • Get your Cloud economics right with clear Sustainability goals associated with Cloud optimization strategy. Consider what your business needs in terms of backup and retention and develop a process to manage and maintain this data. 

5. Develop Your Own Well-Architected Framework

Most hyperscalers have developed a Well-Architected Framework (WAF) that helps your enterprise stay abreast of architecture best practices for designing and operating efficient cloud systems. A typical WAF consists of 5 major pillars:

  • Cost Optimization — The ability to deliver value at the lowest price point.
  • Security — The ability to protect data, systems, and assets to improve the security of the enterprise.
  • Reliability — The ability to perform the assigned functions correctly and consistently.
  • Operational Excellence — The ability to support the operations of the enterprise, learn from the experiences, and to improve the processes based on past learnings.
  • Performance Efficiency — The ability of technological resources to efficiently meet the system requirements and to maintain efficiency according to the trend of technological evolution.

I would suggest adding the following two pillars to the WAF to include the “green agenda” and achieve your sustainability goals:

5a. Sustainability pillar for a solution in Public Cloud

This brings us back to opting for Multi-tenancy in Public cloud and selective server utilization.

When your enterprise chooses the shared public cloud, you save on heavy infrastructure and maintenance costs while also cutting energy consumption and carbon emissions: there is no need to run a separate data center that consumes non-renewable resources, and you can focus on your operations.

Additionally, you can save more energy by shifting to a Selective Server Utilization approach, where heavily used servers carry the workload while less occupied sections of the server estate are allowed to rest.

5b. Cloud Migration and Managed Service Provider

In addition to selecting the right Cloud provider, you also need to select your Cloud Managed Services and migration partner carefully. While considering cloud-managed services, infrastructure, and app outsourcing providers, look for the right set of solutions in areas including levels of automation, sustainability by design in the toolset, office space, and workforce management.

What I describe as the green lining of these MSPs is when their “green agenda” aligns with your sustainability goals — this strikes the right chord for action-oriented sustainability policies.

Some of the things to watch out for when choosing your Cloud MSP include:

  • Corporate sustainability goals that go beyond carbon offset
  • Sustainable design and processes used for the workforce, Cloud Managed Services (CMS) solution and beyond
  • Level of automation used in CMS/CMP solution
  • Penetration of public cloud in internal IT of MSP
  • Adoption of DevSecOps practices, Infra as Code, Everything as Code (Document, Configuration, Security, etc.)
  • The ability of the Cloud MSP to adopt new capabilities from hyperscalers, as new capabilities are often cheaper and greener too.
  • Criteria for a successful green cloud migration — your Cloud MSP should understand your goals, curate the right set of solutions tailored to your processes, and focus on the right level of automation required for an action-oriented sustainability policy.

Is Cloud Computing for Sustainability Worth the Hype?

It’s time for enterprises to ensure sustainability goals are an integral part of corporate strategy and its purpose. Cloud is critical to unlocking greater financial, social, and environmental benefits through cloud-based circular operations and sustainable products and services.

By combining Cloud with 4th industrial revolution technologies, companies can drive better customer outcomes. Carefully tying a sustainability perspective to cloud computing and accelerating Cloud adoption can help an organization reduce energy use and the carbon footprint associated with running business applications.

It is important to choose the correct cloud service providers with the right level of automation and an action-oriented Sustainability policy that is compatible with your corporate responsibilities involving Sustainability. Additionally, organizations should focus on a circular economy where longevity and recyclability are ensured to make the most out of their resources.

Additional References:

  • T. Jena, J.R. Mohanty, and R. Sahoo, “Paradigm shift to green cloud computing”, J. Theor. Appl. Inform. Technol., vol. 77, no. 3, pp. 1–10, 2015.
  • P. Balasooriya, S. Wibowo, and M. Wells, “Green Cloud Computing and Economics of the Cloud”, Journal on Computing, 2016.
  • Microsoft Will Be Carbon Negative by 2030, Microsoft.
  • “What Is a Circular Economy?”, Ellen MacArthur Foundation.
  • “What Is the Paris Agreement?”, United Nations Climate Change.


Tags: Cloud, COVID19, Digital Twins

What Is .NET 5 And What It Means To You
April 07, 2022

Microsoft has released a major innovation called .NET 5, which provides a unified cross-platform development experience and overcomes the previous fragmentation in the .NET world. This is part of Microsoft’s deliberate strategy to unify and simplify the .NET platform in response to customer needs.

.NET 5 is the natural evolution of .NET Core 3.1 and the .NET Framework 4.6. This is a great transition, as the .NET Framework and .NET Standard will no longer be developed further (both have posed plenty of difficulties for .NET developers in the past). This makes .NET 5 a step towards a single platform for developing dynamic applications across all devices, right from mobile and desktop apps (Xamarin/WPF) to front-end web development (Blazor), including REST (ASP.NET), gaming (Unity), gRPC and web sockets (SignalR), AI apps (ML.NET, .NET for Apache Spark), and quantum programming (Q#).

You can download .NET 5.0 for Windows, macOS, and Linux, on x86, x64, Arm32, and Arm64. You will still be able to use the .NET Framework on older operating systems, but with Microsoft shortening its support life cycle, it may be retired sooner than expected.

So, what does this sharp turn in the .NET evolution mean for .NET developers and customers? That's what we will explore in this article.

As Microsoft describes it, .NET 5.0 is the first release in the .NET unification journey. It is a “current” release, which means it is supported for three months after .NET 6.0 ships, so support is expected to run through the middle of February 2022. .NET 6.0, in contrast, will be an LTS release supported for three years, just like .NET Core 3.1.


Top 7 New Features of .NET 5

One of the key features of .NET 5 is its ability to target platforms ranging across iOS, macOS, Windows, watchOS, Android, and tvOS. This new release builds on the improvements in .NET Core 3.1 and delivers better performance.

C# 9 and F# 5 are part of the .NET 5.0 release and are included in the .NET 5.0 SDK. Visual Basic is also included in the 5.0 SDK; it does not include language changes but has improvements to support the Visual Basic Application Framework on .NET Core. There have been several upgrades and added features over the older versions of .NET. Let me take you through how these upgrades overcome a few challenges faced in the previous platforms. For a complete list of new features, refer to What’s new in .NET 5 | Microsoft Docs.

  • Support For Windows Arm64

This is a significant improvement over .NET Core 3.1, which supported Linux ARM64 but lagged in performance and offered no native support for Windows ARM64. In .NET Core 3.1, one could optimize methods using x64/x86 intrinsics, but systems that couldn’t use these intrinsics performed below par and ran noticeably slower.

Hence, this new update is a sigh of relief for .NET developers: hardware intrinsics now afford better performance across the ARM architecture. For example, with native support for Windows ARM64, Windows Forms and WPF applications will be able to run on devices like the Surface Pro X.

Microsoft invested in delivering improvements in these areas:

  • .NET 5 provides JIT (just-in-time) compiler improvements and optimizations for ARM64.
  • Provides specialized instruction sets to support ARM64.
  • Tuned algorithms for customized performance.

  • Single File Applications

A great addition to .NET 5, and, I believe, an attractive upgrade for any application developer. Single file applications are deployed as a single file that includes the application and all its dependencies (including the .NET runtime). Single file applications were available in .NET Core 3.x as well; however, they have been optimized and enhanced in .NET 5. Previously, in .NET Core 3.0, when a user ran a single file application, the host would extract all the files temporarily to a directory. With this new upgrade, extraction is no longer required.

Single File deployment is available for both the framework-dependent deployment model and self-contained applications.
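As an illustrative sketch, single-file publish for a .NET 5 project can be enabled with a few MSBuild properties in the project file (the runtime identifier below is just an example target; pick the one matching your platform):

```xml
<!-- Illustrative .csproj fragment: publish the app as one self-contained file -->
<PropertyGroup>
  <TargetFramework>net5.0</TargetFramework>
  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
  <PublishSingleFile>true</PublishSingleFile>
  <SelfContained>true</SelfContained>
</PropertyGroup>
```

Running `dotnet publish -c Release` with these settings produces a single executable containing the application and its dependencies.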

  • App Trimming

One of the big differences between the .NET Core and .NET Framework is that .NET Core supports self-contained deployment — everything needed to run the application is bundled together. It doesn’t depend on having the framework separately installed. From an application developer perspective, this means that you know exactly which version of the runtime is being used, and the installation/setup is easier. The downside is the size — it pulls along a complete copy of the runtime & framework.

To resolve the size problem, Microsoft earlier supported an option to trim unused assemblies as part of publishing self-contained applications. In .NET 5, Microsoft has taken this further by cracking open the assemblies and removing the types and members that are not used by the application, which reduces the size even further.
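As a sketch, trimming is opted into via the project file when publishing a self-contained application; in .NET 5, the `TrimMode` property selects between the older assembly-level behavior (`copyused`) and the newer member-level trimming (`link`):

```xml
<!-- Illustrative .csproj fragment: trim unused code at publish time -->
<PropertyGroup>
  <PublishTrimmed>true</PublishTrimmed>
  <TrimMode>link</TrimMode> <!-- member-level trimming, new in .NET 5 -->
</PropertyGroup>
```

Note that trimming relies on static analysis, so reflection-heavy code may need annotations or explicit exclusions to avoid over-trimming.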

  • Support For C# 9

.NET 5 application developers have access to the new capabilities of C# 9. Some of the key features include pattern matching, records, top-level statements, etc. For instance, records introduce the with expression, which produces a modified copy of an immutable object rather than mutating it in place. Records are perfect for people working with data: the flexibility of a class, and the pragmatism of a struct. The compiler also generates the value-based equality (IEquatable) implementation for you, saving much time and energy!

With top-level statements, the boilerplate Main method can be omitted, making C# quicker to learn and easier to adopt.

Some of the other enhancements include:

  • New pattern-matching keywords and, or, and not.
  • Enhancements to the built-in JSON (System.Text.Json) support, keeping serialization independent of third-party libraries and closely tied to the framework’s evolution.

Overall, this has made the language more efficient to write and easier to learn, saving .NET developers from certain tedious tasks.
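To make these features concrete, here is a minimal, self-contained C# 9 sketch; the `Person` record and the values are my own illustration, not from any particular codebase:

```csharp
using System;

// Top-level statements (C# 9): no Program class or Main boilerplate.
var alice = new Person("Alice", 30);

// `with` expression: non-destructive mutation. It creates a copy of the
// immutable record with only the listed properties changed.
var older = alice with { Age = 31 };
Console.WriteLine(older);       // Person { Name = Alice, Age = 31 }
Console.WriteLine(alice.Age);   // 30 (the original is unchanged)

// Value-based equality is generated by the compiler: two records
// holding the same data compare equal.
Console.WriteLine(alice == new Person("Alice", 30)); // True

// New pattern combinators (and, or, not) with relational patterns.
Console.WriteLine(older.Age is not < 18); // True

static string Classify(int age) => age switch
{
    < 18           => "minor",
    >= 18 and < 65 => "adult",
    _              => "senior"
};
Console.WriteLine(Classify(older.Age)); // adult

// A positional record: constructor, deconstructor, value equality,
// and ToString are all synthesized by the compiler.
public record Person(string Name, int Age);
```

Note that in a top-level-statements file, the type declaration must come after the statements.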

  • Performance Enhancements

The Garbage Collector (GC) has been made significantly more efficient in .NET 5 and scales better on machines with higher core counts. Key GC improvements include:

  • Optimized decommitting of GC heap memory pages
  • Expensive memory resets are avoided
  • Vectorized sorting inside the GC
  • Significantly reduced time for the GC to suspend threads

Another performance enhancement is the Just-In-Time (JIT) compiler, which offers upgraded hardware intrinsics (e.g. SSE and AVX), rendering support for particular instruction sets of selected processors.

.NET 5 also offers an Ahead-of-Time (AOT) compiler alongside its JIT tooling to address the need for precompiled code for WebAssembly and mobile operating systems.

Lastly, there have been improvements in the .NET Compiler Platform as well, with the introduction of C# Source Generators and performance-focused analyzers. For more details on .NET 5 performance improvements, refer to Performance Improvements in .NET 5 | .NET Blog.

  • Cloud-Native Support 

In container workload scenarios, multiple enhancements have been implemented that improve overall performance in cloud container environments. Furthermore, container image sizes have been reduced, and there is now a larger selection of container images to choose from.

This is good news: the ASP.NET Docker image size is reduced significantly, by at least 40%, partly because Microsoft dropped the buildpack dependency (buildpack-deps). Lastly, to simplify working with the platform, the new update also supports new container APIs, ensuring faster and smoother operations overall.
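As a hedged sketch, a typical multi-stage Dockerfile for a .NET 5 web app might look like the following; the project name `MyWebApp` is hypothetical, while `mcr.microsoft.com/dotnet/sdk` and `mcr.microsoft.com/dotnet/aspnet` are the .NET 5 image repositories that replaced the older `dotnet/core` paths:

```dockerfile
# Build stage: compile and publish with the full SDK image
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: the slimmer ASP.NET runtime image keeps the final image small
FROM mcr.microsoft.com/dotnet/aspnet:5.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "MyWebApp.dll"]
```

The multi-stage split is what keeps the build toolchain out of the final image, compounding the size savings from the slimmer .NET 5 base images.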

  • Security

In terms of security, .NET 5 makes certain changes around the usage of OpenSSL, enabling support for TLS 1.3 on Linux. TLS 1.3, which wasn’t previously available, is now used by default where the underlying platform supports it.

Is the Migration/Upgrade Complex and Tedious?

Well, it is subjective and depends on individual cases. You need to analyze how many of the features and properties your applications depend on will hinder the process, or even fail to migrate, under the new .NET 5 update. Another important factor when you intend to upgrade and migrate is which .NET platform you are on today: migration from .NET Core to .NET 5 will be considerably easier than, say, from the .NET Framework to .NET 5.

However, the good thing about the upgrade is that it is free of cost (from the runtime/framework perspective). Essentially, the only thing it will cost you is time and effort, which again depends on the versions of .NET your applications use. Additionally, the upgrade is available on all supported versions of Windows Server. The one thing to keep an eye on is the life cycle of your existing server, because migrating from an old server will only make things more jarring and tedious.

WHEN and WHY to Migrate?

In my opinion, I’d say NOW. You don’t need to wait for .NET 6 to get started with migrations. The sooner you start, the better: it gives you time to deal with any issues that emerge. Stalling the upgrade/migration process will only complicate things and increase your technical debt, as .NET 5 is inevitable.

You should see .NET 5 as the first step in the .NET unification journey, one where you should start to take all that legacy code and decide what’s necessary to bring forward by porting and updating, and what needs to be completely replaced.

Let me chalk down the process for easier migration:

  • Assess Your Applications — analyze and assess which of your applications use .NET Framework and .NET Core so that you meticulously and efficiently migrate to this upgrade.
  • Consult Stakeholders — It’s always better to consult your stakeholders regarding the transition to see if they’re willing to invest in re-writing applications. Once a green signal is received, it makes things easier for developers to move on from there.
  • Pan Out a Plan — Keep tracking the cycle of the old .NET Framework and .NET Core and schedule a plan to simplify the migration process.
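For many projects already on .NET Core, the mechanical part of the migration is retargeting the project file, then re-running tests and updating any incompatible NuGet packages. A minimal sketch of the retarget:

```xml
<!-- Before: a .NET Core 3.1 project -->
<TargetFramework>netcoreapp3.1</TargetFramework>

<!-- After: retargeted to .NET 5 -->
<TargetFramework>net5.0</TargetFramework>
```

Projects on the .NET Framework typically need more work than this one-line change, which is why the assessment step above matters.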

.NET 5 Roadmap

.NET 5 is the biggest update since the introduction of .NET Core in 2016, a major and inevitable innovation leap by Microsoft. More features will be added in terms of runtime, framework, and tooling, along with an extension of the product’s scope as time progresses.

.NET 5 is not a long-term support (LTS) release. Microsoft intends to ship a new release every November, enhancing the various layers of the product in terms of language, runtime, compilers, and tools.

To know more about each feature’s implementation timeline, you can visit here.

What’s The Future?

What is especially intriguing to me is the new .NET 6 release and what it holds. One of the key features that I am looking forward to is .NET MAUI. It is a new framework that proposes a universal model for building UIs on mobile and desktop platforms.

Microsoft has announced that it will extend .NET support to Macs with Apple Silicon. .NET 5, together with this year’s .NET 6 release in November, will offer a wealth of improvements for .NET developers and app developers across cross-platform ecosystems. This unification will give .NET expansive capabilities and utilities while still maintaining simplicity.

I see a bright future on the horizon wherein one can use .NET across different operating systems, devices, or even chip architectures.

Rest assured, it is a game-changer!

In an upcoming article, I will go deeper into .NET 5 upgrade benefits, migration challenges, and the recommended upgrade process.


Tags: Cloud, Digital Transformation, Digital Twins

The State of Remote Working and Productivity During the Pandemic
April 01, 2022

As we cross into 2021, most of us are still working remotely and preparing for a hybrid work environment in which we will be partly expected to work from home. How can an organization determine productivity? What can we expect from remote working in 2021? How can leaders motivate employees who are low on motivation? Read on as I dissect the remote working environment of 2021.

In 2020, this digital revolution was accelerated due to the pandemic outbreak, when most of the global workforce was forced to work remotely from the safe boundaries of their homes. We can witness the change in the work environment when we look at the introduction of various collaboration software, which helps employees connect virtually over a meeting and streamline their processes, to cloud-based connectivity, and a data-centric approach to strategic decision-making powered by the synergy between artificial and human intelligence. These have helped organizations reimagine the way to work.

The pandemic introduced most of the organizations to the possibility of switching to remote work, partially or in full capacity. Take a look at this infographic I did earlier to understand the change in working habits enabled by the pandemic.

The extended crisis has left employees wondering about, and preparing for, the possibility of a hybrid work schedule — a combination of on-site work and remote working. Technological revolution and rapid digital transformation have enabled organizations to change the social capital management landscape. In a study by Humanyze, it was observed that working remotely has extended people’s working hours by an average of 10–20%. These employees also reported an increase in work-related and overall mental stress, increased confidence and focus on well-being, and a higher degree of formal and informal interaction with colleagues.

But, how do you determine the relative productivity of an organization compared to its competitors? Let’s take a look.

How Do You Determine the Productivity of an Organization?

In their book, Time, Talent, Energy: Overcome Organizational Drag and Unleash Your Team’s Productive Power, Michael Mankins and Eric Garton mention that “the effect of remote working on productivity cannot be generalized and varies across industries and the individual talent.” The book mentions three factors that affect the productivity level of an organization:

1. Time

Each employee, while working remotely, can be distracted by excessive e-communications, random virtual meetings, and/or administrative procedures and paperwork. This can affect the amount of time the employee spends doing productive work.

While remote working has freed up the time employees once spent commuting, it has also led them to invest additional time in their jobs. A study by Raffaella Sadun, Jeffrey Polzer, and others found that the length of the average workday increased by 48.5 minutes across 16 global cities in the early weeks of the lockdown. Meanwhile, miscommunication among colleagues and inefficient work practices have reduced productive time by 2% to 3% for most organizations.

2. Talent

Any organization’s best talent must be properly deployed, assigned to a proper team, and led by managers who help them bring out the best in them. The talent of an individual and the team in totality affects the productivity of the organization.

Organizations that have perfected the process of acquiring, developing, teaming, and leading scarce, difference-making talent have recorded a 20% increase in productivity compared to their counterparts. But the pandemic has caused a slump in demand for certain products and services, keeping some organizations out of the labor market and forcing them to lose their best talent. In a way, it can be said that COVID-19’s effect on talent management has had a negative impact on productivity.

3. Energy

Every job involves a certain amount of discretionary energy and willingness that each employee can choose to invest. This can dictate the level of productivity and success of the company, its customers, and other stakeholders.

In a study by Bain and Company, it was found that an engaged employee is 45% more productive than a merely satisfied worker. The pandemic and the subsequent work-from-home orders have forced organizations to find ways to keep their employees virtually engaged. For instance, in our firm senior executive leadership began conducting virtual town halls, a weekly video series for important Covid-19 and business updates, along with tips from fellow employees.

Observations That Are Defining Remote Work During the Pandemic

As hybrid working arrangements are becoming a matter of safety over convenience, let’s take a look at some trends that have transformed remote work in the present era.

1. Strong Virtual Human Connections

The market for collaboration software, like Zoom and Microsoft Teams, has seen exponential growth in adoption (both in the number of users and the number of hours spent online). Initially, these tools facilitated work continuity, but with the fast pace of agile innovation and Artificial Intelligence, these collaboration platforms are becoming cohabitation platforms that allow geographically distributed users to exist in the same space simultaneously.

With the help of these new tools (cohabitation platforms), we can forge deeper connections that make the virtual world more humane by going beyond simply collaborating — running businesses, visiting family, attending weddings, and educating our children through technology.

2. Encouraging a Flexible Professional Life

Most employees favor flexibility when it comes to their work schedules, and more and more professionals are choosing the hybrid work setting — a healthy mix of on-site and remote working. Gen Z employees, however, tend to favor on-site work, as they look to the workplace as a source of socialization, networking, and learning. Leaders must understand the importance of minimizing screen time, allowing parents to double as part-time teachers, and enabling a professional life that supports their personal life.

3. Technology as an Enabler of Inclusion

Virtual meetings are the great equalizer. Companies increasingly face pressure to be diverse, equal, and inclusive, and technology proves to be the biggest enabler in such situations. Virtual meetings on collaboration software make it difficult to engage in office politics or show off. Additionally, this software gives organizations the ability to capture, record, and analyze meeting data, enabling them to evaluate Diversity, Equality, and Inclusion in real time.

4. Social Capital Management Beyond Borders

The pandemic, along with forced remote working, has encouraged organizations to rely on talent residing in different corners of the globe. The field of software development saw this shift in social capital management well before the pandemic, and other industries have followed. It was further fueled by record-high unemployment in many areas of the world. The main enabler of this is technology: it has untethered talent from location. However, this is limited to specific industries that have the luxury of collaborating over video conferences.

The Negative Impact of Working Remotely

SAP, Qualtrics, and Mind Share Partners conducted a global study in 7 countries that found that over 40% of employees in these countries reported a decline in their mental health since the pandemic outbreak. In the same period, workers reported an increase in anxiety, stress, and fear related to the COVID-19 pandemic.

In a study titled "Cybersecurity in the Age of Coronavirus", conducted by Twingate, it was recorded that 40% of professionals have experienced mental exhaustion from virtual meetings. Additionally, 59% of employees felt their office was more cyber-secure than their home.

A survey by Doddle cited symptoms of burnout among employees: a full week of video conference meetings left 38% of employees feeling exhausted, while 30% felt stressed. Employees also experienced performance anxiety, with 63% saying they were likely to record and evaluate their virtual meetings to help them become better presenters.

Employees end up spending extra time working from home which drives them towards burnout. This overworking behavior has been mockingly called “Sleeping at work,” as employees start their day by opening their laptop screens and end their day by closing them. The willingness to be accessible after work hours has, in turn, increased their screen time, often related to an increase in stress and anxiety.

How Can Leaders Motivate Their Employees in a Time of Extended Crisis?

While the world is gradually inching towards normalcy, many of us are still working from within the walls of our homes. This stagnation in professional, personal, and social life is bound to flag motivation, performance, and well-being for many. In such times of extended crises, it is up to the leadership to keep their employees’ morale high. Leaders can provide structure, guidance, and regulation, and provide a healthy work environment where individuals can foster internal motivation by implementing the following:

  1. Be transparent when it comes to demands and provide the rationale behind them. This will encourage your employees to put in their full effort and take up responsibility.
  2. Validate your employees’ emotions as well as their reactions, and foster collaboration.
  3. Reduce team size to the number necessary, allow each member to shine in their capacity, and make them feel involved in decision-making processes.
  4. Minimize coercive controls, like unrealistic deadlines and micro-management. Find ways to engage your employees and increase their motivation through encouragement and positive feedback.
  5. Start emphasizing employees’ well-being and not just their productivity.
  6. Define new productivity metrics for hybrid work.
  7. Regularly discuss progress on individual goals and help your employees create a roadmap to meet them. When problems arise, get full feedback from the involved parties, and identify the biggest issues and obstacles.
  8. Define new performance management practices for hybrid work.

No one knows what the future holds for us or when we will go back to on-site practice.

But, in such situations, the onus lies on the leaders of the organization to look after their workforce. They should invest more in untethered social capital management and cohabitation platform usage. Productivity is an important aspect of the working environment, but it should not trump an employee’s well-being. This means that they should encourage the employees to set boundaries when it comes to working timings by educating them that more online/screen time does not relate to increased productivity. This is also the time for leadership to reimagine the social capital management landscape, the opportunity for hybrid work, and the possible innovations in the field of composite AI.

What other measures can managers take to help boost their employees’ productivity in these uncertain times?



Tags: Cloud, COVID19, Digital Twins

When Trouble Knocks: How System Resilience Can Be Built and Rebuilt During a Pandemic
March 31, 2022

The Covid-19 pandemic is, without doubt, the biggest challenge the world has encountered in this century, and arguably the biggest since the Second World War. As the year 2020 draws to a close with mixed news — positive on the vaccine development front, only to be offset by reports of a new, more powerful strain of the novel coronavirus emerging — the world will have to soldier on in a long battle ahead.

Given the ongoing COVID-19 crisis, the need for resilience has never been greater. Resilience is the ability to “sustain and recover quickly from difficult, uncertain scenarios.” However, amid chaos, fear, uncertainty, and often insufficient resources, the call to "be resilient" may feel like an impractical demand, especially if you view resilience as something one either has or does not have.

As uncertainty prevails and more efforts are made by governments and health researchers to find a lasting solution to the pandemic, the onus of adjusting to the new normal rests with enterprises and management. Leaders must consider the resilience of individuals, as well as the resilience of their teams, organization, culture, and system. Better systems resilience can help businesses limit the damage caused by the pandemic and create a future that is immune to such disruptions.

Business continuity hinges on the solutions provided by the technology wings to beat the Covid blues. It also depends on the innovations that chief information officers (CIOs) and IT managers carry out to make processes, systems, and businesses adapt better. In this blog, I will talk about certain frameworks needed for seamless operational continuity.

Data shows that enterprises that were already focusing on contingency plans and had synchronized their IT capabilities and supply chains in that direction absorbed the shock better. They were better equipped to deal with new realities such as remote working, a fall or a sudden spike in demand, supply chain disruptions, a lack of intra-organizational synergy, and communication challenges, to name a few.

According to Accenture’s Future Systems research survey of 8,300 companies conducted before the COVID-19 crisis, only a small minority of companies — the top 10 percent — had cracked the code on systems resilience.

Needless to say, the IT managers in these companies adapted much better and so did the business. Some reconfigured traffic to maximize capacity for critical applications when there was a surge in demand while others moved low-priority applications to the cloud to free valuable system and human resources. One large retailer, for example, was able to handle a massive surge in sales by offloading traffic from the core e-commerce site to a cloud-based coupon application, and in another case, a hospital adopted a new virtual assistant to manage the massive increase in incoming calls during the COVID-19 crisis. All these were possible because the IT infrastructure was in place and the managers put in quick thinking to come up with effective solutions in a short period.

For others, who are still not quite there, the three Rs — respond, reset and renew — that I have written about in the past can prove to be very effective.

There are a few areas that are integral to a resilient system and every organization needs to work towards these to ensure foolproof systems resilience. These are:

1. Adaptive Leadership and Purpose

When the business environment is dynamic to begin with, and disruptions such as Covid-19 further compound the equation, organizations need the right hands to steer the ship. The leadership needs to understand the culture, aspirations, and goals of the organization, and only adaptive leadership can create frameworks that provide the right momentum each time lethargy or panic sets in. That is because adaptive leadership is well aware of the purpose of the enterprise and can link individual aspirations to organizational goals.

2. Ability to Reconfigure, Re-Deploy and Repurpose Resources

While technology is critical, managing people is the bedrock of seamless functioning. In a connected world, businesses are more integrated in terms of workers, vendors, customers, partners, and suppliers, making business continuity and pandemic plans far more complex to test and carry out. Employees and business partners are the most important cogs in any organizational wheel, and making them familiar with crisis-induced changes is imperative. This is where once-efficient models from the past can become a bone of contention between those who rigidly defend them and others who want to do away with them completely. As such, employees have to be briefed about the changing needs, and the human resources wing has to skill and reskill them to reset their performance and align it better with the changing organizational goals.

Similarly, partners have to be told about the need for course correction and how the company has planned for that scenario. They have to be informed about the challenges the management expects as it tries to build a resilient system, and the roadmap to overcome them. Forming trusted, lasting bonds between the employer and the employees is paramount.

3. Ability to Re-balance Supply Chain

The intensity with which disruption can hit a supply chain varies, but limited exposure does not necessarily mean complete immunity. In an integrated world, even a small problem in the assembly line producing the tiniest of components can bring supply chains to a halt. A resilient system anticipates the scenario to make sure the controllables fall into place, and is quick to adapt when a crisis erupts. This can be achieved by investing in disaster-proof physical assets, diversifying the list of suppliers, designing products with common components, enabling swift movement of products and services across platforms, and strengthening industry ecosystems. All this may involve investment, but enterprises will save a lot in the long term. Resilience and efficiency should not come at the cost of one another.

4. Data-Driven Decision Making

Using newer technologies such as analytics and artificial intelligence helps with improving transparency and efficiency on the one hand, and responding to crises on the other. Data is a critical tool to identify vulnerabilities and can help predict scenarios to identify the most effective solution, should a crisis hit. Several Fortune 500 companies are already using data insights for inventory and risk management.
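To make this concrete, here is a minimal sketch of using data to spot trouble early in inventory management: flagging days whose demand deviates sharply from the recent trend. The data, window size, and threshold are purely illustrative, not drawn from any company mentioned here.

```python
# Sketch: flag unusual demand so inventory can be rebalanced before a
# stock-out. Data, window, and threshold are illustrative only.
from statistics import mean, stdev

def flag_demand_anomalies(daily_units, window=7, z_threshold=2.0):
    """Return indices of days whose demand deviates sharply from the
    trailing `window`-day average (a crude early-warning signal)."""
    anomalies = []
    for i in range(window, len(daily_units)):
        history = daily_units[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue  # flat history: no meaningful z-score
        z = (daily_units[i] - mu) / sigma
        if abs(z) > z_threshold:
            anomalies.append(i)
    return anomalies

demand = [100, 104, 98, 101, 99, 103, 97, 102, 240, 101]  # day 8 spikes
print(flag_demand_anomalies(demand))  # → [8]
```

In practice the same idea scales up to real analytics or ML pipelines; the point is that the alert comes from the data, not from someone noticing a problem after the fact.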

5. Innovation

Since the pandemic has hastened digital transformation, IT infrastructure and business decision-makers should also work towards achieving what is called a product mindset. This essentially means achieving better synergy between the verticals and coming up with faster solutions. The guiding principles here are minimizing time to value, solving for need, and excelling at change.

6. IT System Readiness for Resilience

  1. A flexible workplace: As the immediate aftermath of Covid-19 showed, many tasks once considered strictly office-bound moved to people’s homes. This made providing competent IT support a challenge, and many organizations had to build capacity in that direction.
  2. Hyper Automation: Human resources need to be freed up for more important work, and tasks that depend on repeating the same set of activities should be automated at the earliest. As the Covid crisis showed, an intelligent, automated process can always come in handy in tricky scenarios.
  3. Time to market for new apps and scenarios: With competition in the market intensifying, you have to stay ahead of the curve. Organizations have to make significant investments in showing clients and the outside world how their solutions are best suited to different, and often unpredictable, scenarios.
  4. Site resilience and performance engineering: It is high time that all existing legacy systems were modernized to deal with any scenario that may create roadblocks for business continuity. Through effective system architecture, applications can be scaled to their full potential, and site resiliency and performance constraints on existing IT infrastructure can be effectively resolved.
  5. Cloud adoption and optimization: By optimizing their workflows over the cloud, businesses can build effective systems to tackle demand fluctuations, come up with viable products and services, reduce costs, and integrate multiple verticals. The public cloud is a great way for an organization to establish a resilient IT foundation, offering elasticity, security, cost control, and geo-redundant options for disaster recovery, business continuity, and on-demand cloud burst.
  6. Service continuity: At a time when companies are looking to bring expertise on board faster than usual, and when identifying the right resources through virtual onboarding sessions is becoming the norm, companies need to identify and prioritize their requirements to ensure no disruption in service.

Organizations should reorient their IT processes to meet any new challenges that can logjam even the best of workflows.
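As one small illustration of the site-resilience idea in point 4 above, here is a sketch of wrapping calls to a fragile dependency with retries and exponential backoff, so transient failures do not cascade into an outage. The function and service names are hypothetical.

```python
# Sketch of resilience engineering: retry a flaky dependency with
# exponential backoff instead of failing on the first hiccup.
# All names here are hypothetical, for illustration only.
import time

def call_with_retries(operation, max_attempts=4, base_delay=0.1):
    """Retry `operation` with exponential backoff; re-raise on exhaustion."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # retries exhausted: surface the failure
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, 0.4s, ...

# Simulated flaky service that succeeds on the third call.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky_service))  # → ok
```

Real systems layer circuit breakers, timeouts, and health checks on top of this, but the backoff pattern is the building block.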

7. Cybersecurity

As IT infrastructure and CIOs are set to play a major role in building flexible systems, it is natural that vulnerabilities need to be understood and gaps plugged in time. Systems have to be able to react quickly to any external threat that comes their way. More robust systems and firewalls need to be put in place to avoid breaches. The effectiveness of existing cybersecurity mechanisms should be properly evaluated, and wherever possible, alternatives should be considered to minimize risks.

Adjusting to the new normal will continue in 2021. For all we know, the new normal may be significantly different from what we envisage. A lean and agile system, backed by proper allocation of resources and investments, will help build higher resilience in the existing framework. These measures, if implemented in time, will pay larger dividends for a long time to come and make life easier during the next crisis, whatever its form may be.

See blog

Tags: Cloud, COVID19, Digital Twins

5G-Powered Digital Transformation Scenarios
March 29, 2022

5G is now a standard for seamless industrial operations. But how will different industries be affected by this cellular standard? What innovations await humanity once the telecom operators finish the 5G rollout? Read this piece to test the compatibility of 5G with different industries.

In 2019, it was predicted that, by 2025, there would be 1.2 billion 5G connections covering 34% of the global population. Though the pandemic tried to douse the raging fire around the 5G hype, it was still the most talked-about tech phenomenon of 2020.

With 5G, we’ll see an entirely new range of applications enabled that will help with transforming the art of the possible.

My take — Enterprises will witness 5G opening new doors to services and product innovation creating new customer segments and revenue streams at scale.

In this article, I will dissect how 5G will transform particular industries while discussing their business cases.


1. Healthcare

An article in the New England Journal of Medicine mentions that healthcare owns 30% of the world’s stored data and that every patient typically generates about 80 MB of data each year. In another article, “How 5G Wireless (and Concomitant Technologies) Will Revolutionize Healthcare?”, the authors mention four major deficiencies afflicting the healthcare industry — lack of a patient-centric system, absence of personalization, deficiency of accessibility, and no focus on data.

Experts believe that the implications of 5G will address all these deficiencies through its low latency capability and high capacity to collect data. When hospitals upgrade to 5G, they will spend less time on capturing, transferring, and managing data in real-time, unlock the ability of remote patient care access by enabling mobile networks to support real-time high-quality video, and ensure strengthened cyber security through cloud-based data centers. With the flourishing wearable technology that is compatible with 5G, it would be easier for patients to engage with their physicians remotely and keep them updated with their real-time vital signs.

I, personally, cannot wait to see the strides that 5G takes when it comes to transforming the Healthcare scene. The Pandemic has added excessive pressure on our medical and para-medical community, and these frontline warriors deserve all the help we can gather.


2. Retail

According to Statista, e-tail revenues are forecast to rise to 6.54 trillion US dollars in 2022. With the introduction of 5G, retail chains all around the world will be transformed through proper connectivity — reliable and fast networks, sophisticated AR/VR use cases, and adequate network capability for the new-age applications that help warehouses reach their performance peaks.

So, the question remains — How can 5G aid the retail industry?

  1. 5G will enhance the engagement between the brand and the customer by enabling immersive experience tech, like Augmented Reality, Mixed Reality, and Virtual Reality.
  2. In-Store augmented experiences, like Magic Mirror and face-to-face virtual shopping assistance.
  3. Facial and object recognition solutions to simplify target marketing and to enable security measures against suspicious behavior or transactions.
  4. Sustainable and smart energy management, like smart lighting, to optimize the use of energy in-store.


3. Public Sector

With the growth of 5G comes the need for compliance with federal laws, and that would be the number one concern for the public sector. The public sector recognizes the impact that 5G will bring but, to deal with it appropriately, local governments have to understand the technology and get the most out of it while complying with federal laws. These laws will be driven by the constant threat of data breaches, given the massive amount of data that will be collected using the technology.

But, once the technology is under the government’s reach, we can start talking about the implementation of the following use cases:

  1. Enhanced Mobile Broadband that will help improve connection speeds for roles like Law enforcement officers, Human services caseworkers, and Postal Workers.
  2. Ultra-reliable Low-latency communications, as the name suggests, will provide high reliability and low latency to aid the development of smart ports, smart military stations, and broaden the health care system’s capacity through remote surgery.
  3. Massive machine-type communication for reimagined public works, like real-time information on leakages and blockages, and efficient resource allocation.


4. Media and Entertainment

In a new report commissioned by Intel, it was projected that the media and entertainment experiences enabled by 5G will generate up to $1.3 trillion in revenue by 2028. As smartphones increasingly win hearts, whether as a viewing device or as a hub to control in-home and industrial networks, the 5G technology stack is opening new avenues for a richer media experience on consumer devices. This will include lightning-fast data speeds, massive bandwidth, and low network latency.

Let me give you a little sneak peek at how 5G will reform the world of media and entertainment:

  1. Live and On-Demand shared real-time entertainment experiences of location-based events, like concerts, sporting events, and movie theatres, carried out across time zones and continents.
  2. Acceleration of Mixed Reality, Augmented Reality, and Virtual Reality for immersive content consumer experience.
  3. Re-imagined interactive gaming and mixed reality with the introduction of mobile cloud gaming.
  4. Redefined audience interactions as content creators focus on increased brand loyalty, better conversion as well as targeting.


5. Manufacturing

To understand how the 5G technology stack will transform the manufacturing industry, let’s take a look at the use cases that pertain to industrial operations:

  • Faster wireless communication and improved reliability will enable real-time insights using edge computing.
  • High-quality, real-time video feed for surveillance.
  • Remote control of the distributed production line from a central command center.
  • Improved monitoring and alert systems owing to the reliable and secure network.
  • Remote maintenance and training solutions using high-resolution AR/VR technology.

5G adoption goes beyond facilitating industrial operations and can extend seamlessly across the entire supply chain. Let’s review how that would work.


6. Supply Chain

In addition to its commendable speed, the 5G technology stack is also focused on device density and latency, which is set to transform the way industries work. This extends to the disruption of supply chains. For my take on how the pandemic has induced competition between supply chains, read my blog.

Let me take you through the use cases that 5G will enable for supply chains. Industries can have a reliable logistics operation with automated labeling, tracking, and recording shipments as opposed to the manual track and trace. This will help them solve challenges like lost cargo, misplaced containers, and counterfeiting. When it comes to inventory and warehouse management, 5G will help to optimize processes, enable remote maintenance and control, and deploy autonomous vehicles.

When it comes to the future of supply chain management, 5G gives us exactly what we want — transparency across the channel to ensure that the control lies in the central command node.


7. Smart Homes

Today’s smart home technology delivers a fragmented experience that 5G is here to solve. 5G aims to provide a seamless experience across all our home technologies — from our favorite room temperature to our preferred light shades, entertainment and education suites, fitness and health devices, and door security features. Along with seamless device orchestration, the technology stack will also focus more on increased data privacy and security from an ethical point of view.

8. AR/VR

A report by Ericsson projects that, by 2030, AR will account for almost 50% of all revenue from immersive video formats. What will be exciting about the use cases of AR is that they will take an in-venue experience and transform it into a digital experience.

When combined with AR/VR/Mixed Reality, experiences, like concerts, movie premieres, gaming, sporting events, retail fashion, and even home planning, education, and advertising, offer an irresistible prospect that many tech-junkies, including me, are waiting to experience in its full glory.


9. Transportation and Logistics

5G will make logistics life simpler! The highly reliable, low-latency technology stack will prove beneficial for the transportation industry through several use cases. Smart sensors can monitor the condition of roads and measure stress levels to determine when repairs are due, and can predict potholes proactively, allowing municipalities to take preventative measures. The technology will also enable cameras to provide real-time insights into traffic flow, redirecting vehicle and pedestrian traffic for efficiency and civic safety. Finally, 5G will accelerate the adoption of autonomous vehicles by unlocking the ability to support real-time responses on vehicle safety status and autonomous controls for collision avoidance.

10. IoT

The fifth-generation cellular standard has enabled new business cases for IoT. The tech industry realizes that it is difficult to curate an end-to-end IoT solution with cellular connectivity; it needs the right mix of several elements — expertise in embedded systems, connectivity, time-series-based systems, antenna design, cloud computing, and more. But I see telecom operators offering exactly such an opportunity, enabled by 5G.

So, what can the future business use cases for 5G-powered IoT be? The first is improved asset tracking: periodically capturing small amounts of data on energy usage or product condition, which further helps verify that the product is handled according to the safety and compliance requirements issued. Second is business-critical applications for command and control of AGVs and robots in small factories. The third is connecting assets in restaurants, cafes, and brownfield areas to the cloud to convert them into smart devices.


11. Storage

The camaraderie between 5G and high-speed flash storage will create many avenues of use cases for the storage industry. First, enabling virtual 5G networks in the cloud to ensure bandwidth, latency, and quality of service. Second, becoming the backbone of high-resolution video streaming by enabling a shift from 4K to 8K and beyond. Third, introducing full-on cloud gaming, with the opportunity to stream video games and play them anywhere.

Theoretically, 5G has been a successful champion in combating the present-day challenges faced by these industries, and it focuses on seamless productivity. Enterprises will witness 5G opening new doors to services and product innovation, creating new customer segments and revenue streams at scale. From the dissection above, it is clear that ultra-reliable low latency is the new currency of the network world, underpinning new capabilities in many industries that were previously impossible.

Now, it is all a waiting game for us to see how the actual deployment of 5G turns out to be.

What do you expect from the 5G rollout?

See blog

Tags: Digital Transformation, Digital Disruption, 5G

Top 15 Digital Transformation Trends of 2021
March 28, 2022

I can try to express the impact of the pandemic in a hundred different sentences, but the summary remains the same — COVID-19 has impacted the world in ways no one could have imagined or predicted. While many leaders are used to constant change, the disruption caused by COVID-19 is the hallmark of 2020. The pandemic has forced organizations to re-think, re-imagine, and re-evaluate their processes.

Organizations continue to respond & renew their strategy during the pandemic and explore new ways to operate and drive growth. In the coming year, the global Digital Transformation scene will focus on increased resilience and preparedness for the post-COVID effects. Keeping that in mind, I am sharing a list of 15 strategic trends that will transform the digital scene. These trends highlight areas of opportunity and ways for organizations to differentiate themselves from competitors.

So, What Does 2021 Have in Store for Us?  

1. 5G in the Ascendant

2020 was a huge year for 5G — we saw regular and agile 5G deployments on a global scale by Qualcomm, AT&T, Verizon, Nokia, Ericsson, and Huawei. While telecommunications is booming with the use of 5G, it will also be used in the advancement of edge computing, near real-time monitoring, and low-latency, high-speed application scenarios. In 2021, 5G will champion the disruption scene as it continues to transform every industry that affects our day-to-day living.

2. Intelligent Composable Enterprise

Composable Enterprise (CE) means diversifying business functions into microservices delivered through application networks, APIs, and beyond. Instead of providing a single product or service, the enterprise offers a variety of microservices to its customers.

The pandemic has taught enterprises to be agile and adaptable, and that is where composable enterprises come in handy. Enterprises with a single product or service will find it more difficult to adjust to unprecedented changes than composable enterprises, whose microservices enable the new business models powered by the New Normal.

Additionally, Composable enterprise has helped with the evolution of Everything as a Service (XaaS), where most IT functions are scalable as separate services for enterprise consumption.

3. Responsible Enterprise

To be successful in the “New Normal” era, enterprises will need to focus on explicit initiatives and technology adoption in areas such as Sustainability, Innovation as a Service, Enterprise Risk Management, and Transparency and Traceability.

4. Cloud First

According to Gartner, over half of the enterprise-generated data will be produced and processed outside traditional data centers or a single centralized cloud by 2022, compared to just 10% today. The conversation has evolved from choosing between a private or a public cloud. Enterprises are now focusing on a mix of Hybrid, Multi-cloud, and Distributed Cloud strategies.

In the post-COVID world of 2021, Cloud providers would be seen as strategic partners that help with cost reduction and better resilience by fuelling Cloud to Cloud Migration.

5. Internet of Behaviors

The Internet of Behaviors (IoB) is all about changing behaviors by using data. IoB is a major step in the evolution of how data is collected and used. There is a rise in technologies that collect data spanning the digital and the physical world — for example, facial recognition, location tracking, 5G-powered edge computing, and big data. The data collected can be used to influence behavior through feedback loops.

I have identified some of the common use cases and technologies in the IoB scene:

  • Everything Connected (XConnected)
  • Autonomous Things
  • Digital Me
  • Digital Twin
  • IIoT

6. Anywhere Operations

Anywhere Operations refers to the IT model that focuses on supporting customers, enabling employees, and orchestrating the deployment of business products and services from any geographic location.

This IT operation model also helps with the Hyperlocal business model where the enterprise collaborates with local businesses for the agile distribution of products and services from an offline location within the proximity of a few kilometers. This concept has helped revolutionize the conventional idea of Supply Chains. To understand how the pandemic has reformed the competition between supply chains, read my blog here.

A study by Gartner predicts that, by 2023, 40% of organizations will have adopted Anywhere Operations for an overall optimized customer and employee experience.

7. Data as an Asset

It is an undeniable fact that at the center of all digital transformation is the newest digital currency — DATA. Organizations that have been successful in adapting to the Pandemic changes have recorded data to analyze their past mistakes and their current operation, and to predict the trends of tomorrow.

With the help of big data and technologies like machine learning, the IT industry can effectively analyze data to help figure out an appropriate response to crises.

In 2021, organizations will be curating data, looking for ways to capture more for monetization, and using it indirectly for use cases like Responsible business, Supply chain resiliency, Employee experience, Customer Experience, New Pricing, Business models, and Security.

8. Responsible AI

I feel that the new year will be all about embracing AI not just as an innovation initiative, but as part of the core strategy for the company. CXOs need to consider how AI and Composite Architectures can work together, in sync, to help their companies solve the business challenges presented by the dynamic ecosystem of the pandemic.

The joint forces of Artificial Intelligence and Machine Learning will be the force multiplier driving new business models and insights. Other AI trends that will be popular, in my opinion, are Composite AI, Generative AI, Formative AI, AI Security, DataOps, ModelOps, DevOps, and AI Democratization.

Additionally, the concept of Algorithmic Trust will trend as you will see the development of Algorithm Digital Economy in 2021. Algorithms would be the way to differentiate products on the basis of enterprise business resiliency, marketing, and business models.

While AI will be trending like a raging fire, organizations will need to ensure that everything done in the AI area stays well within AI ethics, rules, and regulations.

9. Cybersecurity and Digital Fraud Prevention

Workforces will be collaborating with their colleagues remotely, and to make this digital journey possible, cybersecurity as a business imperative will play a crucial role. In 2021, enterprises will focus on privacy-enhancing computation, compliance, digital privacy, AI ethics, Distributed Digital Identity, and Zero Trust design.

Remote work has boosted the number of security breaches, and there is an increasing need to protect data. Zero Trust design ensures the protection of modern digital environments using network segmentation and Layer 7 threat prevention. To adopt a Zero Trust architecture, you don’t need to remodel or alter your existing technology; Zero Trust is highly dynamic, adapting to your existing processes and making them more efficient.
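A toy sketch of the core Zero Trust idea — "never trust, always verify" — where every request is evaluated on identity, device posture, and context rather than network location. The signals and thresholds here are illustrative, not from any particular product:

```python
# Minimal sketch of per-request Zero Trust evaluation. The signal names
# and the 0.5 risk threshold are hypothetical, for illustration only.
def authorize(request):
    checks = [
        request.get("identity_verified", False),  # strong auth, e.g. MFA
        request.get("device_compliant", False),   # patched, managed device
        request.get("risk_score", 1.0) < 0.5,     # contextual risk signal
    ]
    # Deny by default: every check must pass on every request.
    return all(checks)

req = {"identity_verified": True, "device_compliant": True, "risk_score": 0.2}
print(authorize(req))                                 # → True
print(authorize({**req, "device_compliant": False}))  # → False
```

The point of the default values is the deny-by-default posture: a request that cannot prove identity, device health, and acceptable risk is rejected, wherever it comes from.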

10. Low-Code/No-Code Platforms

Workforces no longer have the luxury of working out of the same location, and this has impacted the productivity of many enterprises. Restoring it can mean investing a lot of time, effort, and money in developing platforms to streamline business processes. This can be done with ease with a Low Code/No Code first strategy.

A prediction by Gartner projects that, by 2024, low-code will account for more than 65% of all application development activities. Low Code/No Code offers a robust model that allows interoperability amongst functions that may be necessary for scaling up operations in the future.

11. Hyper Automation with the Right Mix of Low-Code/No-Code 

In the last two years, Automation has evolved from the use of RPA and Infra as Code to Hyper Automation. This technology employs advanced technologies, like AI, Machine learning, and Robotic Process Automation, to automate processes that were carried out by the human workforce.

There is a need for businesses to automate their operations as much as possible. This is when I recommend businesses turn to the practice of Hyper Automation using tools like AI and machine learning, event-driven software, robotic process automation, and other kinds of decision-process and task-automation tools.
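A minimal sketch of the event-driven side of hyper automation: business events are routed to automated handlers by rule, with anything unmatched escalated to a person. The event types and handlers are hypothetical.

```python
# Sketch: rule-based routing of business events to automated handlers,
# removing repetitive manual triage. All names are hypothetical.
handlers = {}

def on(event_type):
    """Register a handler function for a given event type."""
    def register(fn):
        handlers[event_type] = fn
        return fn
    return register

@on("invoice.received")
def process_invoice(event):
    return f"invoice {event['id']} queued for matching"

@on("ticket.opened")
def triage_ticket(event):
    return f"ticket {event['id']} routed to tier 1"

def dispatch(event):
    handler = handlers.get(event["type"])
    # Unrecognized events fall back to a human, not a silent drop.
    return handler(event) if handler else "escalate to a human"

print(dispatch({"type": "invoice.received", "id": 42}))
# → invoice 42 queued for matching
```

Real hyper-automation suites add ML-based classification and RPA connectors on top, but the routing-plus-fallback structure is the same.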

It is crucial to “reach the customers where they are” — on their mobile devices. With Low Code/No Code, it is possible to automate the process of creating apps with ease. Other features, like Everything as Code and intelligent automation processes, will also make their presence felt in the coming year.

In 2021, it is predicted that more than 20% of IT workers will work alongside Personal Contextual Digital Assistants, a technology that goes beyond AI-infused apps and simple chatbots.

12. Remote Workforce and Talent Management

With virtual working taking a front seat, remote workforce and talent management is a vital necessity. With employees geographically dispersed, it takes a well-designed technology to streamline the workflow.

Rapid changes in business models, work environments, and customer expectations have left workers in need of tools that can help them maximize their productivity. These can range from individual changes and structural tweaks to the provision of effective technology. Such tools help employees understand their goals clearly and improve their efficiency accordingly.

Tools, like reliable video conferencing, cloud-based connectivity, digital collaboration tools, and a decision-making approach that is powered by the synergy between artificial and human intelligence, are helping the HR departments reimagine the way to manage talent globally.

13. New Computing Architectures and Ubiquitous Computing

I am focusing on two new architectural introductions in the arena of computing.

  • Quantum Computing, a design that has the potential to change the face of multiple industries — from telecommunications and cybersecurity to advanced manufacturing, finance, medicine, and beyond. The beauty of Quantum Computing lies in the use of quantum mechanics and the efficiency to process massive and complex datasets more efficiently than classical computers.
  • A secure Computing Environment is helpful when it comes to the implementation of safeguards, ethical regulations, and compliance requirements.

14. Human Augmentation

Unprecedented times call for an enhancement of a person’s cognitive and physical capabilities. This is where human augmentation comes in with its use of technology. Human Augmentation works on two different levels:

  • Physical Augmentation, where an inherent physical capability is altered by implanting or hosting technology within or on the body.
  • Cognitive augmentation helps an individual think better and make well-informed decisions. This involves exploiting information and applications to learn better from experience.

15. Intelligent Experience

Intelligent Experience is an amalgamation of Customer Experience, Employee Experience, Workplace Experience, User Experience, Multi Experiences, Digital Experience, and Emotional Experience.

In addition to the above trends, in 2021 we will also see a few technologies like Blockchain, 3D printing, and AR/VR move beyond pilots and become mainstream, enabling new business models, resilient supply chains, and the re-imagining of telemedicine and healthcare scenarios, remote monitoring, B2B data sharing, learning, training, and after-sales support.

2021 is all about resilience and bouncing back from the challenges posed by the pandemic. The major focus is placed on the health of the business and its stakeholders. Innovations in terms of business models, business operations, and security are the predicted highlights of the coming year.

What should be the first order of business for you? Evaluating the weak links in your enterprise, gathering data on your strong suits, and implementing these trends accordingly. That’s what progress is about — seizing opportunities out of crises.

Which trend, in your opinion, would rule the digital transformation scene in 2021?

See blog

Tags: Digital Disruption, Digital Transformation, Digital Twins

Competition Is No Longer Between Companies — It’s Between Supply Chains!
March 25, 2022

Disasters like the COVID-19 pandemic can wreak havoc on even the biggest of companies. Even when your employees and office space are secure, the difference in a time of calamity lies in having a secure and resilient supply chain. In this article, I explore why direct competition between brands is no longer the whole story, and why the competition that matters is between companies’ supply chains. I will also look at different ways leaders can build secure, resilient supply chains for their businesses.

When the news stories related to the pandemic started breaking on the internet, none of us could have realized the massive scale of the calamity. Given that such a disaster only happens once in a century, we can be forgiven for that. However, what we can’t forgive ourselves for is putting all the eggs in one basket when it comes to securing our business supplies.

The recent disruption of global business supply chains is mostly due to them being either concentrated in a single geographic area or following just-in-time manufacturing and lean production strategies to cut costs. Such businesses now find themselves in a precarious situation with a noticeable shift in the customer consumption patterns, where purchases are now more inclined towards local availability, followed by how responsibly the brand behaves, as against the earlier considerations of price or favorite brand/product.

While the saying goes that a chain is only as strong as its weakest link, the same holds true for any supply chain. The impact of the COVID-19 crisis on supply chains cannot be overstated. A recent McKinsey survey of senior-level supply chain executives found that 73% of the businesses surveyed had encountered issues with their suppliers, while 75% faced issues with production and distribution; 100% of the respondents in the food and consumer-goods industries had encountered production and distribution problems.

A separate IDC survey on "Supply Chain Agility in the Pharmaceutical Industry" found that 46% of respondents had faced drug shortages during the pandemic, while 70% agreed that their supply chain was very vulnerable or facing more problems with the continuation of the pandemic. 65% of the respondents in the survey also reported that they could no longer accurately plan supply and had lost faith in their demand forecasts, and a stark 43% of respondents lacked the necessary agility and redundancy to survive major business disruptions.

As businesses have globalized over the past few years, they face increasing challenges in acquiring customers, onboarding workers and vendors, finding partners and suppliers, and ensuring business continuity during the pandemic.

Today the biggest question all manufacturers need to answer is: which strategies should they employ to mitigate supply chain disruption risks? And where should they start?

Here is my handpicked 10-point approach for any such business looking to ensure continuity of service by building a secure and resilient supply chain:

1. Build an Early Warning System

To be resilient, a business needs to strike a balance between just-in-time manufacturing and lean production strategies on the one hand, and AI- and analytics-powered early warning models built into supply chain risk assessment tools on the other. Such tools can offer complete visibility across all tiers and sound the alarm whenever they identify a slowdown, interruption, or other issue. Early warning systems based on well-defined predictive models, KPIs, and leading indicators can also help businesses identify the weak links in their supply chain, so they can plan around them by building redundancies.
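As an illustration of the kind of leading-indicator logic such a system encodes, here is a minimal sketch (supplier names and lead-time figures are hypothetical) that flags suppliers whose latest lead time deviates sharply from their historical pattern:

```python
from statistics import mean, stdev

def lead_time_alerts(history, recent, z_threshold=2.0):
    """Flag suppliers whose recent lead time deviates sharply from history.

    history: dict supplier -> list of past lead times (days)
    recent:  dict supplier -> latest observed lead time (days)
    Returns a list of (supplier, z_score) pairs breaching the threshold.
    """
    alerts = []
    for supplier, past in history.items():
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            continue  # no variation on record, skip rather than divide by zero
        z = (recent[supplier] - mu) / sigma
        if z > z_threshold:
            alerts.append((supplier, round(z, 2)))
    return alerts

# Hypothetical data: "acme" suddenly takes 21 days against a ~10-day norm.
history = {"acme": [10, 11, 9, 10, 10, 11], "globex": [7, 8, 7, 7, 8, 7]}
recent = {"acme": 21, "globex": 8}
print(lead_time_alerts(history, recent))
```

A production system would of course use richer predictive models and many more indicators; the point is simply that a well-defined baseline per supplier turns raw telemetry into actionable alarms.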

2. Diversify Manufacturing and Supply Chain

While in the past two decades businesses have built their supply chains around cost optimization, the current pandemic has shown that this may not be the best practice. Lower levels of available inventory and disrupted supply chains have driven many businesses into the ground.

In the next few years, businesses should focus on simplifying their product portfolios and diversifying their manufacturing capability (for example, across different locations and different suppliers) to keep their manufacturing and supply chains differentiated. The ability to operate alternative manufacturing sites and secure supplies from multiple suppliers adds some extra cost (and requires discipline in quality assurance), but it provides insurance against disruption.
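A back-of-the-envelope comparison (all figures hypothetical) shows why paying that insurance premium can be rational: with two independent suppliers, supply halts only if both fail at once.

```python
# Hypothetical numbers: expected annual disruption loss, single vs dual sourcing.
p_disrupt = 0.05               # probability a given supplier/site is disrupted in a year
loss_if_down = 10_000_000      # revenue at risk if supply halts entirely
dual_source_premium = 150_000  # extra annual cost of a second qualified supplier

single_expected_loss = p_disrupt * loss_if_down
# With independent dual sourcing, supply halts only if BOTH suppliers fail.
dual_expected_loss = (p_disrupt ** 2) * loss_if_down + dual_source_premium

print(single_expected_loss, dual_expected_loss)
```

Under these assumed numbers the dual-sourced expected loss is well below the single-sourced one; real decisions would also weigh correlated failures (e.g. both suppliers in one region), which is exactly why geographic diversification matters.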

3. Create an Agile Supply Chain

Businesses need to create a much more agile supply chain. This can be done by collaborating and sharing data among all stakeholders in the supply chain. Both agility and resiliency can be improved through near real-time B2B integration and sharing data down the supply chain; with Tier 2 and Tier 3 suppliers in particular, providing more visibility into plans can allow them to anticipate demand and respond faster.

Creating a coherent digital strategy to combine global and local supply chain strategies, by using insights from data to derive competitive advantage and to achieve business outcomes, should become the new normal.

4. Fresh Commercial, Contract, and Delivery Strategies

The main issue during the pandemic has been restricted geographical access. I would therefore like to see businesses take a fresh approach to exploring and identifying new commercial strategies to fuel growth in areas left untouched before the present pandemic.

A Second Look at Contracts and Redundancies in the Supply Chain

COVID-19 has highlighted many vulnerabilities in the conventional strategies used to build supply chains. Businesses need to work on a more collaborative model and redefine contracts with their suppliers, vendors, and partner ecosystem. Contracts need to allow for a new set of KPIs, location independence, risk-and-reward mechanisms, elasticity in supply, cash flow, payment terms, and, most importantly, redundant paths around single points of failure in the chain, including alternate routing.

Identify New Last-Mile Delivery Channels

With things no longer as they were, businesses need to invest more in restructuring their last-mile delivery channels by identifying new ones, such as order online, pickup in-store, curbside pickup, and delivery using robots and drones.

5. Operate a Hub-Spoke Model for Vendor and Supply Chain Relationship

Organizations should centralize any information necessary for the core functioning of vendors and supply chains, to guard against the eventuality that the responsible managers are unavailable. They should also localize vendor relationships and delegate decisions to local teams.

6. Implement Resilience-as-a-Service

Businesses are being crippled by losses as insurance providers introduce new policy exclusions. They need to understand that traditional insurance is not adequate for pandemic-related business interruptions. The best insurance against such losses in the future is prevention itself. Building resilience-as-a-service into the system is now a necessity.

I would take this a step further and recommend that businesses apply a chaos testing model to pandemic readiness. Businesses need to test and probe their pandemic readiness regularly, much like the chaos testing frameworks used in the IT application domain, and implement a workplace pandemic preparedness plan in sync with their business continuity plans.
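The same idea can be sketched in code: a toy chaos test (plants, capacities, and failure probability are hypothetical) that randomly knocks out suppliers and measures how often total demand can still be met.

```python
import random

def survives(suppliers, demand, knocked_out):
    """Can the remaining suppliers still cover total demand?"""
    capacity = sum(c for name, c in suppliers.items() if name not in knocked_out)
    return capacity >= demand

def chaos_test(suppliers, demand, trials=1000, fail_prob=0.2, seed=42):
    """Randomly disrupt suppliers and return the fraction of trials
    in which demand was still met (a crude resilience score)."""
    rng = random.Random(seed)  # seeded for reproducible runs
    ok = 0
    for _ in range(trials):
        down = {s for s in suppliers if rng.random() < fail_prob}
        ok += survives(suppliers, demand, down)
    return ok / trials

# Hypothetical network: three plants, total capacity 150 against demand 100.
suppliers = {"plant_a": 60, "plant_b": 50, "plant_c": 40}
print(chaos_test(suppliers, demand=100))
```

Just as IT chaos frameworks reveal which service outages actually cascade, a score well below 1.0 here tells you the network cannot absorb plausible failures, before a real crisis tells you the same thing.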

7. Leverage Technology as a Business Enabler

Scale the Use of Responsible AI

A data-driven reinvention of business practices will be critical to post-pandemic growth. Businesses will need to embrace responsible AI capabilities to recover and return to their pre-pandemic growth plans, leveraging AI across their supply chain, business processes, customer experience, and employee experience. Who does this most professionally and ethically will be the key differentiator.

Accelerate Digitization

Businesses should adopt digital data and insights more extensively as part of their strategy. Cross-platform digitization can help them understand situations more clearly and quickly, to predict and prevent unnecessary blockages in their supply chain and vendor management structures.

Apply Systems Thinking

Leaders need to recognize systems thinking as the missing skill-set in this whole pandemic scenario. Businesses should use systems thinking to see the holistic picture of all the interconnected ecosystems; they will then be able to anticipate evolving complexities and the risks associated with them.

Define New Processes and Create Multidisciplinary Teams

To be successful in the future, businesses need to define more flexible, adaptive, and resilient processes and establish multi-disciplinary teams to define new processes and innovate for new products.

Adopt a Cloud-First Approach

As business stakeholders redefine products and services for new customer experiences, IT needs to keep pace in delivering new applications and functional requirements. Enterprises should take a cloud-first approach, which enables faster innovation at lower cost and lets enterprises focus on core functionality instead of building resilient IT platforms themselves.

Reduce Technology Debt

Organizations are looking to scale up their technological capabilities, including IT and cloud infrastructure, to be ready for the new normal. Some of the areas where I would recommend reducing technology debt are:

  1. Modernizing any legacy systems still in operation.
  2. Ensuring core applications are architected for resiliency and have a good business continuity and DR solution implemented.
  3. Reducing the skills risk of legacy systems on Mainframe, IBM I series, COBOL, DB2, Solaris, and older 4GL.
  4. Embracing DevSecOps.
  5. Adopting a low-code/no-code approach.

8. Develop a New Strategy for Remote Working and Social Workforce Management

Developing a remote working strategy ensures that everyone can securely access the tools they need to work remotely, including business systems such as HR, ERP, payroll, CRM, unified communication (UC) and collaboration tools, file storage, and email. At the same time, setting up an omnichannel workforce management system should also be a priority, with an increased focus on digital solutions to tackle the new challenges.

Focus on Upskilling and Skill Rotation

Increasingly, companies are discovering that the remote working scenario has created a mismatch between the skills that are needed and those their employees have. As business leaders turn to automation, digitization, and extracting value from data, the workforce must be able to complement the value added by the new technology.

Re-imagine Processes and Productivity for Composite AI

While businesses have been automating processes for efficiency in operations, they need to do so for every department in a post-COVID world. Utilizing composite AI to re-imagine the ways of doing business while factoring in various productivity scenarios will be an in-demand strategy in the new world.

Get HR Ready

For businesses to make remote working a more responsible process, HR teams have to develop their own remote working procedures that communicate organizational expectations to employees and ensure any upskilling that may be required. HR and IT teams need to work more cohesively in the new world.

9. Innovate for New Products and Services

In a newly interconnected world, to model realistic customer demand and sustain business continuity, organizations should look to create a ‘digital twin’ of the entire supply chain (or of key processes) to simulate and adjust for developing scenarios.
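As a minimal illustration of the idea, the sketch below (stock point, demand, and lead-time figures are all hypothetical) steps a toy "twin" of a single inventory node through a baseline and a demand-shock scenario; the shock run exposes a stock-out that the baseline forecast would never show.

```python
def simulate(twin_state, demand_forecast, reorder_point, order_qty, lead_time):
    """Step a toy digital twin of one stock point through a demand scenario.

    Returns the inventory level after each period; negative values mean stock-outs.
    """
    inventory = twin_state["inventory"]
    pipeline = []  # (arrival_period, qty) for replenishment orders in transit
    levels = []
    for t, demand in enumerate(demand_forecast):
        # Receive any orders whose lead time has elapsed, keep the rest in transit.
        pipeline, arrived = ([(a, q) for a, q in pipeline if a > t],
                             sum(q for a, q in pipeline if a <= t))
        inventory += arrived - demand
        if inventory <= reorder_point:
            pipeline.append((t + lead_time, order_qty))  # place a replenishment order
        levels.append(inventory)
    return levels

# Baseline: steady demand of 20 units per period.
baseline = simulate({"inventory": 100}, [20] * 8,
                    reorder_point=40, order_qty=80, lead_time=2)
# Shock scenario: demand triples for two periods mid-horizon.
shock = simulate({"inventory": 100}, [20, 20, 60, 60, 20, 20, 20, 20],
                 reorder_point=40, order_qty=80, lead_time=2)
print(baseline)
print(shock)
```

A real digital twin would model many nodes, transport links, and stochastic demand; the value of even this crude version is that reorder policies can be stress-tested against scenarios before they play out in the physical chain.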

Also, increased dependence on subscription-based models should help cater to evolving customer demands. In short, businesses need to shift to a product mindset and leverage technology as an enabler to define new services and subscription models.

10. Invest in Cybersecurity

Businesses need to ensure that their cybersecurity teams are geared up to protect data, applications, and resources from threats and to respond to alerts. As remote working finds a more permanent place in the business ecosystem, CISOs must draft new security policies (for data security, network security, app security, and remote wipe) to accommodate the increase in telework.

What’s in the Future for Global Supply Chains? 

While globalization was the talk of the town in the '90s, I feel global supply chains may eventually become a victim of their own success. As businesses over-relied on cost-cutting strategies, their supply chain structures became increasingly less flexible and resilient. This needs to change, and those who adapt quickly will be the new market leaders.


Tags: Business Strategy, Business Continuity, COVID19

Respond, Reset, and Renew Your Business Strategy Using Low Code/No Code Platform
March 24, 2022

Business leaders are being forced to rework their strategies and rethink their offerings in the aftermath of the COVID-19 crisis. Gone are the days when work followed long-established practices.

The pandemic has opened multiple Pandora’s boxes for every business. As the pandemic’s impact deepens, I look at the importance of the low code/no code approach to IT applications, which businesses can adopt for a faster recovery.

While every business goes through a cyclical change of events, the pandemic can easily throw a spanner in their plans. In the initial few days, businesses struggled to put together a plan of action. Now a year and a half into the pandemic, they look better positioned to implement new models to manage productivity with safety.

For any business to formulate an effective response to such a crisis, they need to adopt a three-phased approach:

  • Respond: Take immediate action to curb any bleeding losses, to protect customers, workforce, operations, and the supply chain.
  • Reset: Rework the action plan, restructure the business model, and identify capabilities to focus on scaling. These actions are geared to managing through an economic slowdown as a lighter-weight, more agile business.
  • Renew: Position to re-emerge strongly and gain share in the recovery by implementing learnings from the initial phases to build on this new foundation for growth. Reinvent the business model to address existing and new opportunities with a stronger, more resilient version of the enterprise.

Current State of Custom App Dev

  • Digital transformation is driving enterprise software to be increasingly consumed directly by customers and partners.
  • Enterprises increasingly need more effective applications/solutions to stay competitive and innovate in a digital era.
  • Custom apps and solutions have emerged as key points of competitive differentiation.
  • There is pressure to innovate faster and speed time to market.
  • Business units are frequently forced to address their custom application development needs independent of IT, often leading to shadow IT.
  • Many internal software projects are either not approved or prioritized because of low ROI (benefit vs effort/cost), long time to market, or limited development team capacity.
  • The legacy workforce may not possess digital skills.

While this may look simple on paper, the major impact of the current crisis has been on productivity, as teams no longer have the luxury of operating from the same location. The few big names in the industry can counter this easily; for many others, it means investing time, effort, and above all large sums of money in developing productivity platforms, unless they opt for a low code/no code-first strategy.

What Is Low Code/No Code?

Low code/no code is a software engineering approach for developing cloud-native applications and custom software quickly, with minimal hand-coding.

Why Should We Adopt Low Code/No Code?

Gartner reports predict that low code will account for more than 65 percent of all application development activities by 2024.

Most enterprises and governments have accelerated digitization, and the COVID-19 pandemic has increased the demand for near real-time, data-based decisions instead of offline surveys. While regularly staffed IT departments can handle a few requests, with almost every business function in need of digitization, businesses have their hands full implementing digital practices overnight.

The low code/no code platform-based applications can be created as quickly as they are needed. Also, they offer a robust model with avenues for interoperability amongst various devices and functions that may be necessary for scaling up operations in the future.

Low code application platforms (LCAP) have enabled two-way interaction between people and increased e-participation between employers and employees as well as governments and citizens.

Application-based data capture using a low code/no code platform allows for better protection of sensitive data and people’s privacy. It has also made it easy to implement activity-driven applications, enabling organizations and governments around the world to capture real data via apps and make better-informed decisions.

How to Select a Low Code/ No Code Platform for Your Requirements

Step 1: Identify Who Is Going to Work on the Platform.

Low code/no code platforms are usually classified under two broad categories: those for developers and those for business end-users. Before choosing a platform, companies must therefore clearly understand who is going to work on it. It’s best not to put a developer-oriented platform in front of ‘citizen developers’ — those with no programming skills — although choosing a developer-oriented platform does offer more customizable control.

Step 2: Figure Out How It Will Be Used.

Every tool differs in its capabilities, so choose a low code/no code product that offers the functions your requirements call for.

Step 3: Think Scalability and Governance.

You don’t want to end up with an application that offers no upgrade path or support. When choosing a low code/no code platform, exercise careful judgement: ensure it can offer a viable solution for the required duration and continuous room to scale as your requirements evolve.

Key Scenarios to Use Low Code/No Code Development

In this age of information, we are all racing against time. This is where low code/no code platforms can be utilized to the maximum. Let’s look at a few such scenarios:

  • For governments and companies that need AI-enabled decision making to process acquired data
  • Data-driven, quick response policy decisions in times of crisis
  • Operations processes
  • Contact tracing applications developed for any context (malls, offices, cities, hospitals, states, countries, etc.)
  • Awareness applications for information sharing
  • Activity-based apps like self-diagnosis automation apps, FAQ apps for pandemic response, remote customer service/remote consulting, telemedicine, and retail/restaurant-related delivery and pick-up applications
  • Business unit or departmental applications
  • Mobile-enabled applications
  • Composite applications/mashups
  • Systems of engagement
  • Application prototypes and MVPs
  • SaaS extensions
  • Opportunistic short duration applications

What Are the Challenges of Using Low Code/No Code Technology?

While low code/no code platforms offer faster time to market and faster development time, they can also be limited in their capabilities due to the block-model manner in which they are built. A few challenges include:

  • It’s relatively harder to debug an app built on low-code platforms as deep functionality changes are next to impossible.
  • Low code can at times bloat the codebase and result in slowing down the app.
  • Training the team on a particular low code platform can inhibit off-the-shelf hiring from institutes, as low-code training isn’t part of the usual curriculum.
  • Even tools that promise no coding skills are necessary may require some coding for tweaks and troubleshooting.
  • Configuring complex use cases might actually take longer on low code platforms than writing a simple piece of code.
  • Securing data from a low code platform to a local database can be tough or in some cases, impossible.
  • Cross-platform integration may suffer from limitations or may require you to spend more.
  • You need a proper governance model and operating model to avoid the sprawl of low code/no code apps.

With all that said, I firmly believe that low code/no code technology is the most promising way to scale application development faster. When implemented efficiently, it has proven effective beyond expectations.

What Is in Store for the Future of Low Code/No Code?

I see the "Renew phase" for businesses and governments absorbing learnings from the previous "Reset phase" and building on them to expand the scope of platforms and apps built on the low code/no code model.

With the increasing adoption of the cloud as a necessity rather than a luxury, and rapid progress being made on the cloud-development front itself, I expect more low code/no code platforms being utilized by organizations to respond to their fast-evolving needs. Workflow automation and anywhere-anytime solutions built on low code/no code platforms look set to grow their reach even further.

The adoption of 5G services, with faster speeds and lower latency, should fuel the rise of low code application platforms and a further increase in citizen developers.

I had often heard that necessity is the mother of invention, but perhaps the COVID-19 pandemic has led to a creation of many necessities, which are now leading to a digital transformation of our society. I see a bright future ahead for the low code/no code platform, because in this new future, software is going to be anyone’s game!


Tags: Business Continuity, Business Strategy, COVID19

Multiple Cloud Perspectives and Introduction to Azure Arc
March 17, 2022

Decoding Multiple Cloud Perspectives

In today’s day and age, business enterprises find it difficult to navigate complex environments that run across data centers, the edge, and multiple clouds. While single cloud still holds relevance, most companies are adopting multi-cloud and hybrid cloud models. However, the terms hybrid cloud and multi-cloud are used inconsistently.

A multi-cloud strategy entails using multiple cloud services from different providers based on their performance at certain tasks. A hybrid cloud strategy, by contrast, combines private infrastructure (on-premises or hosted) with public cloud services under a common architecture.

With multi-cloud and hybrid cloud infrastructures now a deployed reality, players like Microsoft, Google, and AWS have entered this market, propelling greater cloud innovation. All hyperscalers have built control planes for hybrid and multi-cloud deployment models that oversee the lifecycle of managed services such as Internet of Things (IoT), functions, databases, virtual machines, and observability.

I believe these control planes deliver the promise of robust hybrid/multi-cloud technologies in this ever-changing multi-cloud services landscape. Currently, Microsoft Azure Arc and Google Anthos are the most popular control planes in this domain; however, Microsoft Azure Arc stands out because of its unique design architecture.

In this article, I will deep dive and dissect the efficacy of Microsoft Azure Arc.

What is Azure Arc?

Azure Arc is a software solution that enables you to project your on-premises and other cloud resources, such as virtual or physical servers and Kubernetes clusters, into Azure Resource Manager.

Think about Azure Arc as a management and governance tool that enables you to manage your resources as if they’re running in Azure, using a single pane of glass across your estate. Essentially, Azure Arc is an extension of Azure Resource Manager (ARM) that supports resources running outside of Azure. It uses ARM as a framework, extending its management capabilities and simplifying its use for customers across different hybrid and multi-cloud environments.
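To make "projection into Azure Resource Manager" concrete: Arc-connected machines surface under the Microsoft.HybridCompute resource provider, so any ARM-aware tooling (tags, policy, RBAC) can address them by a standard resource ID. The sketch below simply constructs such an ID; the subscription, resource group, and machine names are hypothetical placeholders.

```python
def arc_machine_resource_id(subscription_id, resource_group, machine_name):
    """Build the ARM resource ID under which an Arc-enabled server is projected.

    Arc-enabled servers appear as Microsoft.HybridCompute/machines resources,
    which is what lets standard ARM tooling treat them like native Azure resources.
    """
    return (
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.HybridCompute/machines/{machine_name}"
    )

# Hypothetical identifiers, for illustration only.
rid = arc_machine_resource_id(
    "00000000-0000-0000-0000-000000000000", "hybrid-rg", "onprem-sql-01"
)
print(rid)
```

Because the on-premises machine is addressable by this ID, the same policies and role assignments that govern an Azure VM can be scoped to it without any special casing.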

Azure Arc is about extending the Azure control plane to manage resources beyond Azure, like VMs and Kubernetes clusters wherever they are, whether they run Windows, Linux, or any Cloud Native Computing Foundation-certified Kubernetes distribution. Organizations can manage resources even if they are not always connected to the internet. Thus, non-Azure deployments can be managed alongside Azure deployments using the same user interfaces and services, such as tags and policies.

Azure Arc is a unique approach undertaken by Microsoft to accelerate innovation across hybrid and multi-cloud environments. So, in a nutshell, what does Azure Arc offer?

a.) Arc enables management and governance of resources that can live virtually anywhere (on-premises, in Azure, Azure Stack, or in a third-party cloud or at the edge). These resources can be servers, virtual machines, bare-metal servers, Kubernetes clusters, or even SQL databases. With Arc, you can use familiar Azure services and management capabilities including Create, Read, Update and Delete (CRUD) policies and role-based management.

b.) Arc provides a single pane of glass. Using the same scripting and tools, you can see those resources alongside everything else in Azure, and you can govern, monitor, and back up these services no matter where they live.

c.) Arc enables customers to easily modernize on-premises and multi-cloud operations through a plethora of Azure management and governance services, and supports asset organization and inventory.

d.) Arc can enforce organizational standards and assess compliance at scale for all your resources, anywhere, based on subscription, resource group, and tags.

e.) Arc also provides other cloud benefits such as fast deployment and automation at scale. For example, using Kubernetes-based orchestration, you can deploy a database in seconds by utilizing either GUI or CLI tools.

f.) Arc allows organizations to extend a consistent toolset and frameworks for identity, DevOps/DevSecOps, automation, and security across hybrid/multi-cloud infrastructures, and ultimately to innovate everywhere.

g.) Arc supports the use of GitOps-based configuration as code management, such as GitHub, to deploy applications and configuration across one or more clusters directly from source control.

h.) Arc helps organizations to make the right decisions about cloud migrations. Using Azure Arc, you can gather the workload data (discovery) and uncover insights to decide where your workloads should run — whether on-premises, in Azure, or in a third-party cloud or at the edge. This insight-driven approach can save you significant time, effort and migration cost too.

i.) Arc provides a unified experience for viewing your Azure Arc enabled resources, whether you use the Azure portal, the Azure CLI, Azure PowerShell, or the Azure REST API.

Key Features of Azure Arc

Azure Arc allows enterprises to manage the following resource types outside the realm of Azure:

1. Azure Arc for Servers

Azure Arc-enabled servers became generally available in September 2020.

Servers, be they physical or virtual machines, running Windows or Linux, are supported by Azure Arc; for this reason, Azure Arc-enabled servers are considered infrastructure-agnostic. Once connected, these machines are given an ID within the resource group and are treated as just another resource in Azure. Azure Arc servers enable various configuration management and monitoring tasks, giving hybrid machines better resource management.

Additionally, service providers managing a customer’s or enterprise’s in-house infrastructure can treat hybrid machines the way they treat native virtual machines, using Azure Lighthouse.

2. Azure Arc enabled Kubernetes

Managing Kubernetes applications in Azure Arc entails attaching and configuring Kubernetes clusters inside or outside of Azure. These could be bare-metal clusters running on-premises, or managed clusters like Google Kubernetes Engine (GKE) and Amazon EKS.

Azure Arc enabled Kubernetes allows you to connect Kubernetes clusters to Azure, extending Azure management capabilities like Azure Monitor and Azure Policy to them. By attaching external Kubernetes clusters, users gain the features that let them control external clusters just like Azure’s own. But keep in mind that, unlike with AKS, maintenance of the underlying Kubernetes cluster itself is done by you.

3. Azure Arc enabled Data Services

Azure Arc enabled data services help you run data services on your preferred infrastructure, on-premises and at the edge. Currently, Azure Arc-enabled data services are available in preview for SQL Managed Instance and PostgreSQL Hyperscale, which can run on AWS, Google Cloud Platform (GCP), or even in a private datacenter.

Azure Arc enabled data services such as Azure Arc enabled SQL Managed Instance and Azure Arc enabled PostgreSQL Hyperscale receive updates on a frequent basis, including servicing patches and all the new features in Azure. Updates from the Microsoft Container Registry are provided to you, and deployment cadences are set by you in accordance with your policies.

Azure Arc enabled Data Services also support cloud-like Elastic Scale, which can support burst scenarios that have volatile needs, including scenarios that require ingesting and querying data in real-time, at any scale, with sub-second response time. In addition, you can also scale out database instances using the unique hyper-scale deployment option of Azure Database for PostgreSQL Hyperscale. 

This capability gives data workloads an additional boost on capacity optimization, using unique scale-out reads and writes. Many of the services such as self-service provisioning, automated backups/restore, and monitoring can run locally in your infrastructure with or without a direct connection to Azure. 

4. Azure Arc enabled SQL Server

Azure Arc enabled SQL Server is part of the Azure Arc for servers. It extends Azure services to SQL Server instances hosted outside of Azure in the customer’s datacenter, on the edge, or in a multi-cloud environment.

Azure Arc vs Azure Stack Hub

You must be wondering why Microsoft has introduced Azure Arc when there is already an existing hybrid cloud offering, Azure Stack.

Azure Stack is a hardware solution that enables you to run an Azure environment on-premises. Whereas Azure Arc is a software solution that enables you to project your on-premises and multi-cloud resources, such as virtual or physical servers and Kubernetes clusters, into Azure Resource Manager.

For applications that use a mix of on-premises software and Azure services, local deployment of Azure services through Azure Arc can reduce the communication latency to Azure, while providing the same deployment and management model as Azure.

While Azure Stack Hub is still viable for some businesses, Azure Arc is a more holistic strategy for organizations looking to distribute their workloads across both private and public clouds, off-premises and on-premises.

Azure Arc vs Google Anthos vs AWS Outposts

So, how does Azure Arc compare to other hyperscalers who are offering hybrid and multi-cloud strategies?

AWS Outposts is a fairly new solution and is currently more aligned to hybrid cloud deployment models. Google Anthos allows you to build and manage applications on-premises, on Google Cloud, and even on AWS and Microsoft Azure. Anthos does NOT make GCP services available in your own data center or in other clouds: to access GCP services (storage, databases, AI/ML services, etc.), the containers running in your data centers must reach back to Google Cloud.

Google Anthos and Azure Arc have very similar capabilities and approaches. Anthos is more focused on getting everything deployed to containers and has limited capabilities to manage VMs or servers running in your data center or in third-party clouds. Additionally, Google Anthos can currently be a costly component, and in my view it is quite prescriptive: to run Google Anthos you require GKE (Google Kubernetes Engine), whether you deploy to Google Cloud or on-premises.

This isn’t the case with Microsoft’s Azure Arc, which goes beyond Kubernetes into areas like centralized discovery and a common toolset for security, configuration, management, and data services. It also offers more choice of Kubernetes environments, letting customers pick the Kubernetes platform. Azure Arc offers more portability and less lock-in than Anthos; in essence, Azure Arc does everything Anthos does and more, making it the more versatile option to adopt.

Azure Arc Pricing

Azure Arc is offered at no additional cost when managing Azure Arc-enabled servers. Add-on Azure management services (Azure Monitor, etc.) may be charged differently for Azure VMs or Azure Arc enabled servers. Service by service pricing is available on the Azure Arc pricing page. Azure Arc enabled Kubernetes clusters and Azure Arc enabled data services are in preview and are offered at no additional cost at this time.

Roadmap Of Azure Arc

The current roadmap, as stated on the Microsoft website, includes adding more resource infrastructures for servers and Kubernetes clusters. In the future, you can expect:

a.) A self-hostable gateway for API Management, allowing management of APIs hosted outside of Azure using the Azure-hosted API Management service.

b.) Other database services, such as Cosmos DB, are likely to be supported by the data services feature.

c.) Furthermore, support for deploying other types of Azure services outside of Azure could be added to Arc in the future.


To encapsulate, public cloud providers are churning out services to win a spot in your company’s on-premises data center. The growing demand for hybrid cloud and multi-cloud platforms and services has prompted Microsoft to launch Azure Arc as part of its cloud strategy.

So, what does this innovation mean for IT infrastructures? Well, with demand for single management systems in multi-cloud environments soaring, I think it is more than a viable option. Simply put, once resources are registered with Azure, Azure Arc enables enterprises to jump on the hybrid cloud bandwagon regardless of whether they run an old version of Oracle on Linux or a modern one.

I think this strategy is a game-changer, as it helps to simplify complex and distributed systems across environments: on-premises, multi-cloud, and at the edge. Additionally, Azure Arc is a compelling choice for enterprises that want to strike a balance between traditional VM-based workloads and modernized container-based workloads.

Azure Arc can hence distinguish itself as a management tool for legacy and hybrid cloud application infrastructure, propelling greater digital transformation. I feel the simplicity of Azure Arc will be enough to lure enterprises to adopt it.

See blog

Tags: Cloud, Business Strategy, Data Center

Industry Cloud Is the Future of Cloud Transformation and Realization
March 16, 2022

Today, the cloud underpins most new technological disruptions and has proven itself during times of uncertainty with its resiliency, scalability, flexibility, and speed.

According to Gartner, cloud adoption has expanded rapidly, with total spending growing at more than 20% CAGR from 2020 to 2025. But guess what: total cloud spend still makes up ‘only’ about 10% of global enterprise IT spend. So where is the barrier? What’s holding back public cloud penetration and pervasive usage inside enterprises?
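As a back-of-the-envelope illustration of what that growth rate means, here is a short sketch. It uses only the 20% CAGR and five-year window quoted above; the starting base is normalized to 1.0 rather than any real spend figure.

```python
# Illustrative only: what a 20% CAGR compounds to over five years.
# The rate and window come from the Gartner figure cited above; the
# starting base of 1.0 is a normalization, not a real spend number.
def compound(base: float, cagr: float, years: int) -> float:
    """Future value of `base` growing at `cagr` per year for `years` years."""
    return base * (1 + cagr) ** years

growth_factor = compound(1.0, 0.20, 5)
print(f"Total spend multiple after 5 years at 20% CAGR: {growth_factor:.2f}x")
# → roughly 2.49x
```

Even with total spend roughly 2.5x-ing over the period, the 10% share of enterprise IT spend shows how much headroom remains.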

There are definitely reasons like regulatory compliance, security concerns, a shortage of cloud-skilled resources, technology debt, etc. My view is that, so far, cloud providers have been solving a technology-style problem instead of providing real industry-specific, business-process-centric solutions.

Business executives are demanding a path to digital operational excellence, and most enterprises are looking at the public cloud as the core foundational block for the next generation of innovation. To build a future-resilient and sustainable business, companies need to prepare for a paradigm shift to industry-specific solutions as the applied innovation testing platform.

Gartner estimates that about 5% of organizations currently use an Industry cloud solution. Broader adoption within enterprises will require more vertically targeted composable applications and business processes with “whole product” solutions designed for industry scenarios and process models, rather than technology-oriented solutions that enterprises must largely configure and integrate themselves.

Here is what makes industry clouds different from today’s cloud offerings:

  1. Industry clouds offer a combination of IaaS, PaaS, SaaS, and composable ISV/partner solutions to address specific vertical requirements through an extensible design. The “Composable Business” approach encourages modularity of business capabilities, teamed with autonomous operations and explicitly orchestrated workflows.
  2. By leveraging composability, industry clouds can offer greater adaptability than traditional SaaS applications, thus addressing the need for customization and differentiated functionality.
  3. Industry clouds are not a copy or split-off version of the public cloud with industry-specific or region-specific controls that must be maintained separately; they offer users the full set of capabilities.
  4. Industry clouds are a step toward Composable Business and “whole products”: a cloud that addresses every customer’s need.
  5. Industry Cloud enables customers to connect to a vast ecosystem of partners and suppliers that offer an expansive array of services. Organizations will increasingly use Industry cloud services to create agile, innovative business designs that enhance their core competencies.

With Industry Cloud

Customers stand to benefit from:

  1. Shared best practices and thought leadership offered through vertically specialized go-to-market and implementation partners.
  2. Compliance with the infrastructure and application platform with industry-specific regulations.
  3. Common Data model for analytical capabilities to mine the data from their existing applications — thus enabling Data-Driven Decision Making.
  4. An industry-specific marketplace of functional applications, collections of industry-specific functional building blocks, and the capability to choose a combination of the above as a (pre-)composed industry solution.
  5. Value creation for enterprises by bringing together traditionally separately purchased solutions into pre-integrated solutions, thus simplifying the sourcing, implementation, and integration process.
  6. A primary benefit is the amount of time and effort Industry Cloud will save the businesses in designing and transitioning to cloud environments as they have been specifically tailored for them.
  7. Industry Cloud can help deliver diverse customer experiences in days and not months. This means new revenue streams and greater operational efficiency and agility with higher customer connection and lifetime value.
  8. Industry Cloud would pave a faster, standardized path for cross-industry collaboration. This will also translate into new business models, services, and partnerships.

Microsoft Industry Cloud Offerings 

Microsoft is building clouds for priority industries by connecting and customizing public cloud services across firms with tailored products and service bundles.

  1. Microsoft Cloud for Financial Services: Unique templates, APIs, and additional industry-specific standards, along with multi-layered security and compliance coverage, help deliver differentiated customer experiences, improve employee collaboration and productivity, manage risk, and modernize core systems.
  2. Microsoft Cloud for Manufacturing: Businesses can connect experiences across operations, workforce, design and engineering processes, customer engagements, and the end-to-end value chain.
  3. Microsoft Cloud for Non-profit: It connects Microsoft’s cloud capabilities to the most common non-profit scenarios, such as constituent engagement, program design and delivery, volunteer management, and fundraising, all brought together by the non-profit Common Data Model.
  4. Microsoft Cloud for Healthcare: It empowers caregivers with a holistic, 360-degree view of patients’ care plans without having to manage multiple systems, streamlines workflows, enables seamless follow-ups and wellness guidance from care teams, and provides a highly secure care collaboration channel.
  5. Microsoft Cloud for Retail: It is designed to give retailers the flexibility to adopt the capabilities they need to address their most pressing business needs, be it knowing customers better, empowering employees, building resilient supply chains, or reimagining retail.
  6. Microsoft Cloud for Sustainability: It is designed to help customers incorporate sustainability goals into their cloud strategies by leveraging cloud providers’ environmental initiatives and advances that deliver additional business value.

Industry Clouds Make the Journey Easier

Today, every enterprise wants to create disruptive capabilities, specific to their business, that will help them stay differentiated and innovative. Industry Cloud would move the dial of the cloud conversation from a cost to a growth narrative by identifying new areas of opportunity that can be actioned.

With Industry Cloud, public cloud computing will transition to being the composable foundation for business innovation instead of just a technology style for delivering applications. It will redefine the baseline for long-lasting impact on cloud strategy and adoption by blurring the lines between established cloud services such as infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).

Industry Cloud will empower fast-paced innovation in new business models and revenue streams for its customers. It is the catalyst required for future cloud adoption and will accelerate the digital transformation journey.

See blog

Tags: Cloud, Digital Transformation, Business Strategy

Are Solution Architects Relevant in Post-Covid Time?
March 15, 2022

The unprecedented times have forced business needs to evolve with a digital-first approach. How can solution architects help with this transition? Read the blog as I answer your questions.

I have an analogy where I compare solution architects to lifeguards. The role of a solution architect is all about solving problems by orchestrating digital components to address an organization’s needs. It is all about planning strategies that are concerned with reducing costs, eliminating redundancies in technology and processes, and preparing for the impact of change through appropriate mitigation and management. 

While the pandemic has changed everything, I see it as having created a level playing field for everyone: we have an opportunity to start from a clean slate, redefine roles in the ecosystem of cloud computing, and broaden the scope of solution architecture and enterprise architecture.

Most of us wince at the mention of the “new normal”, but the pandemic has imposed drastic adjustments on our lives, changing our perceptions and altering our priorities. With this, we are witnessing an acceleration of digital adoption, and it has opened various avenues that are yet to be explored.

Businesses have considerably expanded the threat landscape by sending employees home, which has amplified the need to challenge and redefine all business processes and policies. For instance, due to on-site working restrictions, many organizations have turned to work from home (WFH). This means that unprotected networks have access to corporate data without proper vigilance, which calls for action to protect the data through Zero Trust design.

Additionally, the narrative in the cloud computing industry has changed from ‘build this huge architecture’ to ‘build a sustainable architecture that proves to be efficient’. It is much more than just Agile development with a series of Minimum Viable Products (MVPs). It is now a continuous process where Agile alone is not enough anymore. You must be capable of constantly shifting to a state of “being agile in using Agile development”.

While one cannot foresee the future, one can create logical analogies that are based on the current situation and our first responses. In my opinion, solution architects can take these observations into account while dealing with the current uncertainties -

  1. You need to move past the conventional approaches to solution development (Cloud Migration: Discover -> Assess -> POC/POA -> Pilot -> Build, or Solution Build: Planning -> Requirement Analysis -> Design -> Build -> Test -> Go Live). These, unfortunately, wouldn’t work in a modern, post-COVID setting. Organizations need to follow an “Act Fast, Fail Fast, Learn Fast” approach.
  2. Don’t be bound by earlier experience and limitations. Keep in mind that business priorities have changed, and every past policy and assumption can now be challenged.
  3. Get involved in the development of new capabilities that fit the new ways of working. At the very least, this will help you bless the design and ensure organizations are not creating significant technical debt.
  4. Do not focus on multiple long-term goals. If there is one thing that the pandemic has taught us, it is to make plans while keeping the scope of adaptability to uncertainties wide open.

As a result of the pandemic, businesses have changed. Customer expectations and buying patterns have changed. This calls for a thorough review of business capabilities, which change for many reasons: suspension of processes, adaptation to environmental changes, replacement or extension due to a merger or acquisition, expansion due to significant adoptions, modification attributed to improved business resilience, and the introduction of new products and services. Thus, solution architects need to look at the new realities and change their way of working. They need to unlearn old skills and embrace new ones.

By unlearning the old skills, I don’t mean abandoning the fundamentals of being a solution architect. My experience has taught me that there are 6 guiding principles that an ideal solution architect should swear by:

  1. You should always be a technical leader in the team.
  2. You should possess a thorough understanding of software development.
  3. You should possess the requisite technological and problem-solving expertise.
  4. You should try to minimize risk through negotiation, collaboration, and communication skills.
  5. You should aim to maximize returns through the cloud and financial engineering.
  6. You should possess a strong set of soft skills.

Let’s dissect these skills to understand them better.

1. Leadership Skills

A solution architect is essentially the leadership figure that molds business solutions to fit into the enterprise architecture. This requires an individual to bring technical leadership to the table which can be achieved only by staying abreast with all the innovations in the field of cloud computing and solution architecture, and spending resources to upskill and reskill yourself.

2. Technical Knowledge

With changing ecosystems, technology decisions take a front seat, and an architect’s capabilities play a big role in promoting the long-term strength and scalability of the organization. To stay relevant in the post-COVID world, solution architects need to upskill themselves and acquire knowledge in the following areas:

  • Adaptive integration scenarios with a new B2B Partnership
  • Site Resilient Architecture & Engineering
  • Distributed Cloud
  • Creativity and re-imagining business with the Internet of Behaviours (IoB), Human Augmentation, and 5G
  • Zero Trust Architecture: Virtual operations have exposed every organization’s functions to potential threats. This is where the knowledge of Zero Trust Architecture helps a solution architect.
  • Distributed Digital Identity: familiarity with identity standards such as Self-Sovereign Identity, DLT, biometrics, facial recognition, and cryptography
  • Architecture patterns like Digital Decoupling, Event-Driven Architecture, Observability
  • Architecture patterns beyond Data acquisition (Hot path, Cold path), Data Curation, Master Data Management, Data Governance, BI & Insights for elevating data estate conversation to “Data as Asset”
  • Responsible AI
  • Hyper Automation
  • Low Code/No Code platforms instead of developing prototypes and multiple POCs. This involves using Intelligent Process Automation, which works well when the future form of the target operating model is not yet clear.
  • Knowledge of Event Storming and how to apply it along with the traditional Design Thinking
  • An eye on the technological transformations and upcoming trends.
  • Responsible Enterprise

3. Understanding of Software Development Lifecycle

A Solution Architect needs to possess the know-how to achieve Rapid Solution Delivery at Scale. This can be possible by an increasing reliance on adaptive reuse, encouraging innovations in lightweight architecture, moving beyond simple Design Thinking, and expanding knowledge about DevOps/DevSecOps and AIOps.

4. Negotiations & Financial Engineering

Going forward, solution architects will be seen as owners of the overall business case for the solution in context, which effectively requires them to develop expertise in the area of financial engineering.

The catch to evolving successfully here is to learn how to build a business case. Solution architects need to think beyond technological solutions and learn how to build a business case with the entire operations of the organization in mind. While TCO and ROI are measurable metrics, solution architects need to think beyond the numbers and focus on the joint business case with technology vendors and cloud providers. It also helps to think beyond the app-level business case and start thinking at the portfolio level. I am, personally, for a shift to a quicker, shorter-term ROI of 2 years, instead of the usual 5-year ROI.
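To make the 2-year versus 5-year ROI point concrete, here is a minimal sketch. All figures in it (the $1M cost and $800K annual benefit) are hypothetical illustrations, not numbers from this blog:

```python
# Hypothetical numbers: a simple ROI comparison a solution architect
# might sketch when building a business case for a cloud initiative.
def roi(total_benefit: float, total_cost: float) -> float:
    """Simple ROI: net gain as a fraction of total cost."""
    return (total_benefit - total_cost) / total_cost

cost = 1_000_000           # assumed one-time migration cost
annual_benefit = 800_000   # assumed annual benefit

roi_2yr = roi(annual_benefit * 2, cost)  # 2-year horizon
roi_5yr = roi(annual_benefit * 5, cost)  # 5-year horizon
print(f"2-year ROI: {roi_2yr:.0%}, 5-year ROI: {roi_5yr:.0%}")
# → 2-year ROI: 60%, 5-year ROI: 300%
```

A shorter horizon produces a smaller headline ROI, but it forces the case to pay back quickly, which is exactly the discipline the shift to a 2-year ROI encourages.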

When you are a solution architecture leader, you are expected to collaborate with various teams and reach a middle ground: this is the art of negotiation. Every individual’s success depends on creating value for the organization, whether by gaining resources, solving a problem, or coordinating mergers, acquisitions, joint ventures, alliances, management buyouts, share issues, and financial restructuring. A solution architect needs to know how to get out of the misunderstandings that kill potential deals and stifle opportunities.

5. Functional skills for Cloud Computing

In the past, the solution architect role was seen as a bridge between the Infra Architect, Network Architect, Security Architect, Storage Architect, Application Architect, and Database Architect. With public cloud adoption, some of these roles have been shifting or becoming irrelevant, replaced by the new persona of Cloud Solution Architect. Going forward, solution architects need to transition to the Cloud Solution Architect persona: a jack of all trades for cloud technologies (with L300+ knowledge of their own and a network of subject matter experts with L400+ skills) and a risk-management expert for cloud solutions from the perspective of scalability, resiliency, sustainability, and cost optimization.

6. Soft Skills

Along with the technical skills, a solution architect needs to demonstrate the following soft skills to lead the technical vision of the enterprise -

  • Creativity
  • Collaboration
  • Adaptability
  • Decision making
  • High EQ
  • Problem Solving
  • Communication

Take a look at my blog to get a perspective on how these soft skills can help an ideal solution architect to achieve the peak of digital innovation in the current times.

See blog

Tags: Agile, Business Strategy, COVID19

