Transforming the Future of Technology | Data & Analytics Maestro, Intelligent Applications, Cybersecurity Innovator, and Cloud Champion
I'm a trailblazer in the tech world, celebrated for my forward-thinking approach to Data Modernization, GenAI, Intelligent Applications, Cybersecurity, and Cloud Transformations. As a Technology Strategist, Coach & Mentor, and Microsoft MVP, I've dedicated my career to revolutionizing how we interact with and leverage technology, consistently staying ahead of the curve.
Leadership Through Innovation
Currently steering the ship at Onix as the Senior Vice President, Global Lead - Data & Analytics and Intelligent Apps Practice, I specialize in pioneering solutions in Data Modernization, Data Governance, Advanced Analytics and AI/ML.
Earlier, my tenure at Avanade was marked by the successful creation of a $550 million Security Services practice focused on comprehensive cybersecurity.
A Track Record of Excellence
- Developed Avanade's 'Everything on Azure' initiative, generating an impressive $310 million portfolio.
- Established $520M Cloud and Modern Application business for Avanade (APAC and LATAM)
- Key player at Microsoft in developing a $1 billion Azure consumption business, leading a global team of Cloud Solution Architects.
Accolades and Industry Impact
My journey is peppered with significant revenue boosts and industry recognition, stemming from a deep understanding of technology paired with strategic business acumen.
Lifelong Learning and Expertise
I pride myself on my commitment to continuous learning, holding an array of certifications such as Microsoft Certified Architect (Solutions and Databases), TOGAF 9, and AWS Solutions Architect – Professional, among others.
My Core Specializations
- Championing Data Modernization, Data Governance, and AI/ML solutions for Data-Driven Business Transformation.
- Championing Cyber Resilience and spearheading Digital Evolution.
- Creating groundbreaking Solution offerings and Go-to-Market strategies.
- Cultivating scalable practices and nurturing high-performance teams.
- Rich industry experience spanning ITES, FSI, Retail, Public Sector, and Oil & Gas.
- Deep expertise in Azure, GCP, AWS, and AI/ML.
I believe in the power of looking forward and embracing technological challenges as opportunities. My mission is to lead at the vanguard of the tech industry, shaping solutions that not only meet current needs but also anticipate future trends.
Available For: Authoring, Consulting, Influencing, Speaking
Travels From: Seattle
Speaking Topics: Cybersecurity, Cloud Transformation, Application Modernization, Digital Transformation, Technology Leadership, 5G, Microsoft Azure
| Gaurav Agarwaal | Points |
| --- | --- |
| Academic | 0 |
| Author | 78 |
| Influencer | 249 |
| Speaker | 0 |
| Entrepreneur | 30 |
| Total | 357 |
Points based upon Thinkers360 patent-pending algorithm.
As the cybersecurity landscape continues to evolve, the challenges associated with defending
against cyber threats have grown exponentially. Threat vectors have expanded, and cyber
attackers now employ increasingly sophisticated tools and methods. Moreover, the
complexity of managing security in today's distributed hybrid/multi-cloud architecture,
heavily reliant on high-speed connectivity for both people and IoT devices, further
compounds the challenges of #cyberdefense.
One of the foremost concerns for corporate executives and boards of directors is
the demonstrable effectiveness of cybersecurity investments. However, quantifying and
justifying the appropriate level of spending remains a formidable obstacle for most enterprise
security teams. Securing additional budget allocations to bolster an already robust security
posture becomes particularly challenging in the face of a rising number of #cyberbreaches,
which have inflicted substantial reputational and financial harm on companies across diverse
industries.
The modern enterprise's IT infrastructure is an intricate web of dynamic networks,
cloud resources, an array of software applications, and a multitude of endpoint
devices. These enterprise IT ecosystems are vast and intricate, featuring a myriad of network
solutions, a diverse array of endpoint devices, and a mix of Windows and Linux servers.
Additionally, you'll find desktops and laptops running various versions of both Windows and
macOS dispersed throughout this intricate landscape. Each component within this
architecture boasts its own set of #securitycontrols, making the enterprise susceptible to
#cyberthreats due to even the slightest misconfiguration or a shift towards less secure
settings.
In this environment, a simple misconfiguration, or even a minor deviation towards less
secure configurations, can provide attackers with the foothold they need to breach an
organization's infrastructure, networks, devices, and software. It underscores the critical
importance of maintaining a vigilant and proactive approach to cybersecurity in this ever-evolving digital era.
As organizations look for ways to demonstrate the effectiveness of their security spend and
the policies and procedures put in place to remediate and respond to security
threats, vulnerability testing can be an important component of a security team’s
vulnerability management activities. There are several testing approaches that
organizations use as part of their vulnerability management practices. Four of the most
common are listed below:
• Penetration testing: a common testing approach that enterprises employ to uncover vulnerabilities in their infrastructure. A pen test involves highly skilled security experts using the tools and attack methods employed by actual attackers to achieve a specific, pre-defined breach objective. Pen tests cover networks, applications, and endpoint devices.
• Red Teaming: A red team performs “ethical hacking” by imitating advanced threat
actors to test an organization's cyber defenses. They employ stealthy techniques to
identify security gaps, offering valuable insights to enhance defenses. The results
from a red-teaming exercise help identify needed improvements in security controls.
• Blue Teaming: an internal security team that actively defends against real attackers and responds to red team activities. Blue teams are distinguished from standard security teams by their mission to provide constant, continuous cyber defense against all forms of cyber-attacks.
• Purple Teaming: The objective of purple teams is to align red and blue team efforts.
By leveraging insights from both sides, they provide a comprehensive understanding
of cyber threats, prioritize vulnerabilities, and offer a realistic APT (Advanced
Persistent Threat) experience to improve overall security.
Although these vulnerability testing approaches are commonly used by organizations,
there are several challenges associated with them:
• These approaches are highly manual and resource intensive, which for many
organizations translates to high cost and a lack of skilled in-house resources to
perform these tests.
• Although the outcome of these vulnerability tests provides vital information for the organization to act on, the tests are performed infrequently, due largely to the cost and lack of skilled resources mentioned previously.
• These methods provide a point-in-time view of an organization’s security
posture which is becoming less effective for companies moving to a more dynamic
cloud-based IT architecture with an increasing diversity of endpoints and applications.
Traditional vulnerability testing approaches yield very little value because the security
landscape and enterprise IT architectures are dynamic and constantly changing.
Since testing the cybersecurity posture of organizations is becoming a top priority, it has triggered increased demand for the latest and most comprehensive testing solutions. Moreover, it's almost impossible, from a practical standpoint, for multiple
enterprise security teams to manually coordinate their work and optimize configurations for
all the overlapping systems. Different teams have their own management tasks, mandates,
and security concerns. Additionally, performing constant optimizations and manual testing
imposes a heavy burden on already short-staffed security teams. This is why security teams
are turning to Breach and Attack Simulation (BAS) to mitigate constantly emerging (and
mostly self-inflicted) security weaknesses.
Definition - Breach and Attack Simulation (BAS)
Gartner defines Breach and Attack Simulation (BAS) technologies as tools “that allow
enterprises to continually and consistently simulate the full attack cycle (including
insider threats, lateral movement and data exfiltration) against enterprise
infrastructure, using software agents, virtual machines and other means.”
BAS tools replicate real-world cyber attacker tactics, techniques, and procedures
(TTPs). They assist organizations in proactively identifying vulnerabilities, evaluating
security controls, and improving incident response readiness. By simulating these attacks in a
controlled environment, organizations gain valuable insights into security weaknesses,
enabling proactive measures to strengthen overall #cybersecurity.
BAS automates the testing of threat vectors, including external and insider threats, lateral
movement, and data exfiltration. While it complements red teaming and penetration testing,
BAS cannot entirely replace them. It validates an organization's security posture by testing its
ability to detect a range of simulated attacks using SaaS platforms, software agents, and
virtual machines.
Most BAS solutions operate seamlessly on LAN networks without disrupting critical
business operations. They produce detailed reports highlighting security gaps and prioritize
remediation efforts based on risk levels. Typical users of BAS technologies include financial
institutions, insurance companies, and various other industries.
BAS Primary Functions
Although typical BAS offerings encompass much of what traditional vulnerability testing includes, they differ in one critical way. At a high level, the primary functions of BAS are as follows (a brief sketch follows the list):
• Attack (mimic / simulate real threats)
• Visualize (clear picture of threat and exposures)
• Prioritize (assign a severity or criticality rating to exploitable vulnerabilities)
• Remediate (mitigate / address gaps)
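To make these four functions concrete, here is a minimal, hypothetical Python sketch of how an automated assessment loop might chain them together. It is an illustration only: the scenario data, function names, and severity scale are invented and do not reflect any specific BAS product.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    technique: str      # e.g. a MITRE ATT&CK technique ID
    target: str         # host or service the simulation ran against
    blocked: bool       # did existing controls stop the simulated attack?
    severity: int       # 1 (low) .. 5 (critical)

def attack(scenarios):
    """Mimic real threats: record the outcome of each benign simulation."""
    return [Finding(s["technique"], s["target"], s["blocked"], s["severity"])
            for s in scenarios]

def visualize(findings):
    """Give a clear picture of threats and exposures."""
    for f in findings:
        status = "BLOCKED" if f.blocked else "EXPOSED"
        print(f"{f.technique:<6} on {f.target:<12} -> {status}")

def prioritize(findings):
    """Assign criticality: unblocked, high-severity findings float to the top."""
    return sorted((f for f in findings if not f.blocked),
                  key=lambda f: f.severity, reverse=True)

def remediate(gaps):
    """Mitigate / address gaps -- here we only emit a remediation work item."""
    return [f"Ticket: harden {g.target} against {g.technique} (severity {g.severity})"
            for g in gaps]

# Example run with made-up scenario results.
scenarios = [
    {"technique": "T1059", "target": "web-frontend", "blocked": True,  "severity": 4},
    {"technique": "T1021", "target": "file-server",  "blocked": False, "severity": 5},
    {"technique": "T1048", "target": "db-server",    "blocked": False, "severity": 3},
]
findings = attack(scenarios)
visualize(findings)
for ticket in remediate(prioritize(findings)):
    print(ticket)
```

In a real platform, the attack step would be driven by software agents or virtual machines executing benign versions of known TTPs, and the remediation step would feed a ticketing or SOAR workflow rather than printing work items.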
Tags: Cybersecurity
We are living in a hyper-digitally dynamic ecosystem. As we are moving towards a digitally dependent future, the need for Digital Resilience is increasing rapidly.
Digital Resilience helps companies by providing several ways for businesses to use digital tools and systems to recover from crises quickly. Today, digital resilience and supply chain resilience no longer imply merely the ability to manage risk; they mean being better positioned than competitors to deal with disruptions and even gain an advantage from them.
The need for Digital Resilience and sustainable supply chains has undoubtedly brought about a change in business IT taxonomy, enhancing business processes and performance.
In this article, I intend to highlight these very transformational changes in IT systems that have happened over two decades and new IT systems that enterprises need to develop and transform to become more successful, productive, and efficient.
Systems of Records (SoR) are software solutions that serve as the backbone for business processes.
The power of SoR is that they are the ultimate source and therefore “record” of critical business data. Essentially, SoR can be understood as a data storage and retrieval system for a company that works as an authoritative data source for the entire organization.
Systems of Records (SoR) are valuable to a company because they become a single source of truth that provides essential insights and information to a company's management teams.
Initially, SoRs were similar to enterprise resource planning (ERP) systems, back when on-premises ERPs (such as those from Oracle and SAP) were in full use. Over time, however, companies started to realize the time and cost inefficiencies of incorporating ERP: it required dedicated IT teams to set up and had a sub-par user interface.
During the last decade, SoR took a turn by incorporating SaaS-powered software lock-in tools such as Workday, SuccessFactors, Salesforce, and Dynamics 365, which have proved somewhat more efficient than their predecessors. SoR is critical for companies to mandate data integrity.
Systems of Engagement (SoE) were introduced to help employees, customers, and partner ecosystems engage better with the Systems of Records (SoR), data, and process flow. SoE are systems that essentially collect data and enable customers, employees, and partners to interact with the business and its associated processes. They are task-based systems or tools used to retrieve specific information and data. Enterprises are incorporating SoE not for the software itself, but to introduce new, data-driven processes for talent acquisition, talent management, and business operations aimed at operational excellence.
Organizations see changes in their internal management systems and are now moving away from top-down management to creating a more agile self-management system. Enterprises usually integrated Systems of Records (SoR) and Systems of Engagement (SoE) for better efficiency. Stand-alone traditional ERPs were becoming more and more expensive to handle and couldn't keep up with fast-paced digital transformation and innovation. Hence, by integrating SoE (a two-tier approach), organizations could make their business operations much more agile and cheaper.
The idea is simple:
Systems of Engagement are placed to “engage customers or engage with customers” and are supposed to be designed for flexibility and scalability. In contrast, Systems of Records (ERP, HRMS, ITSM, etc.) are just data repositories.
As innovation accelerated and businesses started automating and utilizing analytical tools in their operations, the need for different systems grew. As data-driven companies began to grow, the need for high-quality data and data modeling tools grew alongside them, to gain actionable insights and make better business decisions. Hence, enterprises began adding Systems of Innovation and Intelligence (SoII) to the IT systems that support decision-making.
Systems of Innovation and Intelligence (SoII) gather data from Systems of Record (SoR) and Systems of Engagement (SoE) and derive insights. They analyze the accumulated data and suggest improvements to enhance business performance and decisions. Earlier, businesses examined and observed SoR and SoE separately. Now, however, SoII converges this data to derive better insights that can be used to create better business outcomes or to innovate new products.
According to Forrester's research, businesses are "drowning in data" and failing to gather insights. Big data, agile business intelligence, data analytics, and the like surely do give enterprises insight; however, they solve only part of the problem. The solution lies in companies employing a structured system that harnesses the actual value of the gathered data: embedding "closed-loop systems", in other words, Systems of Innovation and Intelligence (SoII).
With SoII, enterprises can perform different types of analysis on collected data through predictive analytics, descriptive analytics, cognitive analytics, etc. Although enterprises have incorporated BI and analytics to gather insight, it won’t be enough without proper systems that convert these insights into actions. It is crucial to test what works, what doesn’t and understand the required changes. With data, you need to act intelligently.
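To make the SoR/SoE convergence concrete, below is a small, hypothetical Python sketch in which authoritative records (SoR) and engagement signals (SoE) are joined to surface a churn-risk insight. The customers, fields, and thresholds are invented for illustration and do not reflect any specific product.

```python
# Hypothetical illustration: an SoII joining SoR and SoE data to derive an insight.
sor_orders = [  # System of Record: authoritative order and revenue data
    {"customer": "C1", "orders": 12, "revenue": 48000},
    {"customer": "C2", "orders": 3,  "revenue": 9000},
]
soe_engagement = [  # System of Engagement: recent interaction signals
    {"customer": "C1", "support_tickets": 5, "last_login_days": 40},
    {"customer": "C2", "support_tickets": 0, "last_login_days": 2},
]

# Converge the two views and flag customers that look at risk of churning.
engagement = {e["customer"]: e for e in soe_engagement}
for record in sor_orders:
    signals = engagement[record["customer"]]
    at_risk = signals["support_tickets"] > 3 or signals["last_login_days"] > 30
    if at_risk:
        print(f"Insight: {record['customer']} generates {record['revenue']} in revenue "
              f"but shows churn signals -> trigger a retention action")
```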
To be successful in the new normal world, it is crucial to adapt to the fast-paced digital transformation and take a holistic approach to enhance business agility and performance. Here’s what needs to change going forward, in my opinion:
Systems of Records (SoR) should modernize with a cloud-first approach to transform them into Systems of Records and Intelligence.
Modernizing an existing SoR into Systems of Records and Intelligence will require a view of the workloads from an underlying infrastructure perspective and of core application architecture characteristics to determine their suitability to operate on the cloud. A cloud-first approach will also leverage a rich ecosystem of services from the cloud marketplace, enabling rapid development of SoE applications.
Systems of Engagement should now transition and offer different experiences for employees and customers, at least from a perspective of:
It is essential to know whether employees and customers can seamlessly adapt to these or not. For example, employees of different generations (Gen Z to Gen X) engage through different form factors and experience mediums. Systems must offer differentiated experiences to employees, and this can happen by refining your Systems of Engagement into a new System of Engagement and Experience.
Systems of Innovation (SoI) should now define the enterprise's future. Enterprises need to focus more on establishing a System of Innovation (SoI) with a cloud-first mindset. SoI should enable fast-paced innovation for developing new Sustainable and Resilient Products and Services.
Enterprises need to ensure they leverage Data as Assets and implement Systems of Insights to support Data-Driven Decision making across the entire Supply Chain and the broad spectrum of business processes and functions. Today enterprises are under immense pressure from Regulatory Authorities, Cyberattacks, and the pivot in customer buying patterns to prefer Trusted, Responsible and Sustainable Products. This effectively means enterprises need to look at Security and Compliance by design – which is best implemented by transitioning to Systems of Insights and Compliance.
Skills and Talent are the new currency of the business. It is critical to capture knowledge and experiences across the company, both to improve productivity and to achieve faster time to market.
Many enterprises have implemented Learning Management Systems (LMS) in some shape or form. Still, these were seen as secondary systems for talent retention and tracking employee training. But to succeed in this new era of fast-paced innovation, businesses should now plan to implement Systems of Knowledge and Learning (SoKL).
SoKL reduces the costs of inefficiency by making company knowledge more available, accessible, and accurate.
Some of the benefits of Systems of knowledge and learning are:
Digital transformation has revolutionized businesses and IT Systems that support decision-making. Building digital Resilience into every aspect of IT infrastructure, systems, and software will enable organizations to rapidly meet changing market and customer needs and create sustainable competitive advantages in this new reality.
To be successful in the hyper-digitally dynamic ecosystem, enterprises need to relook at their business IT systems and transform the outdated Systems of Records and Systems of Engagement taxonomy into differentiated Systems of Records and Intelligence, Systems of Engagement and Experience (SoEE), Systems of Innovation (SoI), Systems of Insights and Compliance (SoIC), and Systems of Knowledge and Learning (SoKL).
Tags: Cloud, Digital Transformation, Digital Twins
AI not only helped in data gathering but also in data processing, data analysis, number crunching, genome sequencing, and making the all-important automated protein molecule binding predictions. AI's use will not end with the vaccine's discovery and distribution; it will also be used to study the side effects of the billions of vaccinations.
Many countries have rolled out coronavirus vaccines and many are conducting dry runs to check the preparedness for vaccination drives. The World Health Organisation has extended emergency use approval to the Pfizer/BioNTech vaccine. This has paved the way for developing countries, which do not have the infrastructure to run vaccine trials. They can now begin immunizing their populations against Covid-19.
The world was quick to realize the importance of coming together to share genome sequencing data and other technical know-how, which accelerated the pace of vaccine development. However, this would have been impossible without the presence of cloud computing and Artificial Intelligence (AI).
AI helped not only in data gathering but also in data processing, data analyses, number crunching, genome sequencing, and making the all-important automated protein molecule binding prediction.
Coronavirus, as we know, is a cousin of Severe Acute Respiratory Syndrome (SARS), which caused many deaths over a decade ago. Researchers predicted then that the pathogen may have been transmitted through animals. These kinds of predictions could only be made with the help of AI.
Sanjay Sehgal, Chairman and CEO, MSys Group, said, “In case of coronavirus, the first prediction was done by a Canadian firm BlueDot, which specializes in infectious disease investigation through AI. The firm used its AI-powered system to go through animal implant disease networks. It also used AI to collect information and predict the outfall of the virus and warned its clients to restrain their travel activities much before governments declared it officially.”
CNBC reported that BlueDot had spotted COVID-19 nine days before the World Health Organisation released its statement alerting people to the emergence of a novel coronavirus. AI and cloud computing have, in fact, been helping the pharma sector for some time now.
Gaurav Aggarwal, VP, Global Cloud Solutions Strategy and GTM Lead, Avanade, stated that the democratization of AI and Machine Learning (ML) in the public cloud has revolutionized science and engineering. The pharma industry is slowly maturing to leverage the same. “The advent of AI as an adaptive and predictive technology coupled with democratization of AI/ML, Augmented Reality/Virtual Reality (AR/VR) technology by public cloud providers such as Microsoft, Google, AWS offers the possibility for radical optimization of core research, business processes, reshaping market opportunities for pharmaceutical companies and challenging the status quo on access to affordable medicine worldwide,” he added.
The process of drug discovery requires running complex mathematical models of behavior using high-performance computing (HPC). The modern data analysis tools, such as cloud and AI, accelerated the process of identifying the molecular stimulators for further evaluation.
These tools helped in the search of antibodies that would prevent and fight Coronavirus. Additionally, research databases such as COVID Open Research Dataset (CORD-19) powered by AI helped researchers in their studies.
These technologies aren’t 100% accurate though. In 2008, Google launched an AI-powered service to track the flu outbreak using SEO and tracing people’s search queries. The data collated comprised people’s supermarket purchases, browsing patterns, and the theme and rate of private messages.
“Though Google’s AI service predicted the flu outbreak much before government and its agencies, its reports had to be pulled down after being found that the service had been consistently over-estimating the pervasiveness of the disease,” Sehgal pointed out.
Based on case studies such as these, it should be noted that AI-run algorithms can help simplify the huge amounts of data from several experiments, helping to discover patterns that a human brain might miss, but in the end AI still cannot predict the success of a vaccine on humans. “We will have to wait and watch how the vaccine and its effects unfold,” Sehgal said.
While we can never expect overnight success when dealing with something as complex as vaccine development, we can act now by using AI, ML, and the public cloud to optimize the overall process and remove some of the constraints and bottlenecks. “Amplifying progress in creating new medications for diseases is among the most profound near-term objectives of AI and Covid-19 vaccines availability in less than 12 months, is an example of how AI can help in crisis response,” said Aggarwal.
The use of AI is not going to end with the discovery and distribution of the vaccine. It is also going to be used to study the side effects of the billions of vaccinations. Fortune recently reported that the UK arm of Genpact has been asked to design a machine learning system that can ingest reports of side effects and pick up on potential safety concerns.
Using AI to study the side effects of drugs has been a focus of academic researchers for several years. Many governments, apart from the UK, are also using AI to study coronavirus vaccine side effects. The quick rollout of the vaccine has proved to be a huge success for AI and ML, as it has paved the way for greater use of these tools in the health and pharma sector.
Tags: Cloud, COVID19, Digital Twins
In today's digital technology era, where downtime translates to shutdown, it is imperative to build resilient cloud structures. For example, during the pandemic, IT maintenance teams can no longer be on-premises to reboot a server in the data center. If the on-premises hardware is down, this becomes a major hindrance to accessing data or software, halting productivity and creating overall business loss. The solution is to transition your IT operations to cloud infrastructure backed by 24/7 tech support from remote team members. The cloud essentially acts as a savior here.
Recently, companies have been making full use of the cloud's potential, and hence the observability and resilience of cloud operations become imperative, as downtime now equates to disconnection and business loss.
A cloud failure in today's technology-driven business economy would be disastrous. Any fault or disruption can set off a domino effect, hampering the company's system performance. Hence, it becomes essential for organizations and companies to build resilience into their cloud structures through chaotic and systematic testing. In this blog, I will take you through what resilience and observability mean, and why resilience and chaos testing are vital to avoid downtime.
To avoid cloud failure, enterprises must build resilience into their cloud architecture by testing it in continuous and chaotic ways.
Observability can be understood through two lenses. One is through control theory, which explains observability as the process of understanding the state of a system through the inference of its external outputs. Another lens explains the discipline and the approach of observability as being built to gauge uncertainties and unknowns.
It helps us understand the properties of a system or an application. For cloud computing, observability is a prerequisite that leverages end-to-end monitoring across various domains, scales, and services. Observability shouldn't be confused with monitoring: monitoring tells you when something goes wrong, whereas observability helps you understand why it went wrong. They each serve a different purpose but certainly complement one another.
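A small Python sketch (my own illustration, not tied to any particular tooling) of that difference: the monitoring check only tells you that a threshold was crossed, while the observability-style event carries enough structured context to ask why afterwards.

```python
import json, time

def monitor_latency(latency_ms, threshold_ms=500):
    """Monitoring: tells you *when* something is wrong."""
    if latency_ms > threshold_ms:
        print(f"ALERT: latency {latency_ms} ms exceeded {threshold_ms} ms")

def emit_event(**context):
    """Observability: rich, structured events you can slice later to ask *why*."""
    event = {"timestamp": time.time(), **context}
    print(json.dumps(event))

# The same slow request, seen through both lenses.
monitor_latency(720)
emit_event(service="checkout", route="/pay", latency_ms=720,
           region="us-west", dependency="payments-db", retries=2)
```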
Observability, along with resilience, is needed for cloud systems to ensure less downtime, faster application velocity, and more.
| Property | Key question |
| --- | --- |
| Stability | Is it on/reachable? |
| Reliability | Will it work the way it should, consistently and when we need it to? |
| Availability | Is it reliably accessible from anywhere, any time? |
| Resilience | How does the system respond to challenges so that it's available reliably? |
Every enterprise migrating to cloud infrastructure should ensure and test its systems for stability, reliability, availability, and resilience, with resilience being at the top of the hierarchy. Stability is to ensure that the systems and servers do not crash often; availability ensures system uptime by distributing applications across different locations to ease the workload; reliability ensures that cloud systems are efficiently functioning and available. But, if the enterprise wants to tackle unforeseen problems, then constantly testing resilience becomes indispensable.
Resilience is the expectation that something will go wrong and that the system is tested in a way to address and maneuver itself to tackle that problem. The resilience of a system isn’t automatically achieved. A resilient system acknowledges complex systems and problems and works to progressively take steps to counter errors. It requires constant testing to reduce the impact of a problem or a failure. Continuous testing avoids cloud failure, assuring higher performance and efficiency.
Resilience can be achieved through site resilient design and leveraging systematic testing approaches like chaos testing, etc.
Conventional testing ensures a seamless setup and migration of applications into cloud systems and additionally monitors that they perform and work efficiently. This is adequate to ensure that the cloud system does not change application performance and functions in accordance with design considerations.
However, conventional testing doesn't suffice, as it is inefficient at uncovering hidden underlying architectural issues and anomalies. Some faults remain dormant, becoming visible only when specific conditions are triggered.
“We see a faster rate of evolution in the digital space. Cloud lets us scale up at the pace of Moore's Law, but also scale out rapidly and use less infrastructure,” says Scott Guthrie on the future and high promises of cloud. As a result of the pandemic and everyone being forced to work from home, there has been a surge in cloud demand. Due to this unprecedented demand, all hyperscalers had to bring in throttling and prioritization controls, which runs against the on-demand elasticity principle of the public cloud.
The public cloud isn't invincible when it comes to outages and downtime. For example, the recent Google outage that halted multiple Google services like Gmail and YouTube shows that the public cloud isn't necessarily free of system downtime either. Hence, I would say the pandemic has added a couple of additional perspectives to resilient cloud systems:
The pandemic has highlighted the value of continuous and chaotic testing of even resilient cloud systems. A resilient and thoroughly tested system will be able to manage that extra congested traffic in a secure, seamless, and stable way. In order to detect the unknowns, chaos testing and chaos engineering are needed.
In the public cloud world, architecting for application resiliency is more critical due to the gaps in base capabilities provided by cloud providers, the multi-tier/multiple technology infrastructure, and the distributed nature of cloud systems. This can cause cloud applications to fail in unpredictable ways even though the underlying infrastructure availability and resiliency are provided by the cloud provider.
To establish a good base for application resiliency, during design the cloud engineers should adopt the following strategies to test, evaluate and characterize application layer resilience:
By adopting an architecture-driven testing approach, organizations can gain insights into the base level of cloud application resiliency well before going live, and they can allot sufficient time for performance remediation activities. But you would still need to test the application for unknown failures and for the multiple failure points inherent in cloud-native application design.
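As one example of the kind of application-layer resiliency pattern such architecture-driven reviews look for, a retry with exponential backoff and jitter lets an application absorb the transient failures that are common in distributed cloud systems. The sketch below is illustrative Python with invented names, not a prescription from any cloud provider.

```python
import random, time

def call_with_retries(operation, max_attempts=5, base_delay=0.2):
    """Retry a flaky call with exponential backoff and jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:
            if attempt == max_attempts:
                raise                                      # give up and surface the failure
            backoff = base_delay * (2 ** (attempt - 1)) * random.uniform(0.5, 1.5)
            time.sleep(backoff)                            # wait before retrying

def flaky_dependency():
    """Stand-in for a downstream service that fails transiently."""
    if random.random() < 0.6:
        raise ConnectionError("transient network failure")
    return "OK"

# Most runs succeed after a few retries; a run can still fail if every attempt errors.
print(call_with_retries(flaky_dependency))
```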
Chaos testing is an approach that intentionally induces stress and anomalies into the cloud structure to systematically test the resilience of the system.
Firstly, let me make it clear that chaos testing is not a replacement for actual testing systems. It’s just another way to gauge errors. By introducing degradations to the system, IT teams can see what happens and how it reacts. But, most importantly it helps them to gauge the gaps in the observability and resilience of the system — the things that went under the radar initially.
This robust testing approach was first emulated by Netflix during their migration to cloud systems back in 2011, and the method has since become well established. Chaos testing brings inefficiencies to light and pushes the development team to change, measure, and improve resilience, and it helps cloud architects better understand and refine their designs.
Constant, systematic, and chaotic testing increases the resilience of cloud infrastructure and ultimately boosts the confidence of managerial and operational teams in the systems that they're building.
A resilient enterprise must create resilient IT systems partly or entirely on cloud infrastructure.
Using chaos and site reliability engineering helps enterprises to be resilient across:
To establish complete application resiliency, in addition to the cloud application design aspects mentioned earlier, the solution architect needs to adopt architecture patterns that allow specific faults to be injected to trigger internal errors that simulate failures during the development and testing phases.
Some common examples of fault triggers are delayed responses, resource hogging, network outages, transient conditions, extreme actions by users, and many more.
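As a toy illustration (not Chaos Monkey, Gremlin, or any specific tool), the Python sketch below wraps an existing function so that calls are randomly delayed or fail with a transient error, letting a team observe how the surrounding system copes with the injected fault.

```python
import functools, random, time

def inject_fault(delay_s=0.5, error_rate=0.2):
    """Wrap a function so its calls are randomly slowed down or fail transiently."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            time.sleep(random.uniform(0, delay_s))      # injected delay in response
            if random.random() < error_rate:            # injected transient condition
                raise TimeoutError("injected fault: simulated network outage")
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@inject_fault(delay_s=0.3, error_rate=0.25)
def fetch_inventory(item_id):
    """Stand-in for a real service call."""
    return {"item": item_id, "in_stock": 42}

# Run the experiment and observe how callers cope with the injected degradation.
for i in range(5):
    try:
        print(fetch_inventory(i))
    except TimeoutError as exc:
        print(f"call {i} failed: {exc}")
```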
Chaos testing can be done by introducing an anomaly into any of the seven layers of the cloud structure, which helps you assess the impact on resilience.
When Netflix announced its resiliency tool, Chaos Monkey, in 2011, many development teams adopted it for chaos engineering of their test systems. Another tool developed by software engineers, Gremlin, essentially does the same thing. But if you're looking to perform a chaos test in the current context of COVID-19, you can do so by running a GameDay. This simulates an anomaly wherein there's a sudden increase in traffic; for example, customers accessing a mobile application at the same time. The goal of a GameDay is not just to test resilience but also to enhance the reliability of the system.
The steps you need to take to ensure a successful chaos testing are the following:
Other specific ways to induce a faulty attack and sequence on the system could be:
In today's digital age, where cloud transition and cloud usage are surging, it becomes imperative to enhance cloud resilience for the effective performance of your applications. Continuous and systematic testing is imperative not only in the life cycle of a project but also to ensure cloud resiliency at a time when even the public cloud is overburdened. By preventing lengthy outages and future disruptions, businesses save significant costs and goodwill, and additionally assure service durability for customers. Chaos engineering, therefore, becomes a must for large-scale distributed systems.
Tags: Climate Change, Cloud, Digital Twins
Artificial Intelligence has been the talk of the town for a while. But why does AI matter? How can an organization successfully scale AI? And what role does professionalization play in the process of successful AI deployment? Read this blog to learn more about the data-driven AI landscape.
In the past three years, we have seen companies spend more than $300B on AI applications, and this has turned a spotlight on the AI landscape, making it a high-stakes business priority.
According to a Forrester report, organizations that scale AI are 7x more likely to be the fastest-growing businesses in their industry. In a study by Accenture, 75% of global executives said they believe that if they don't scale AI, they risk going out of business within just five years.
While scaling AI is crucial, most companies are still at the stage of running pilots and experiments, struggling to achieve the value they expected.
In a study by Accenture, 84% of C-suite executives recognize the need to leverage AI to achieve their business goals. AI applications, with the help of machine learning and deep learning, can utilize the data in real-time and adapt to new changes to ensure that the business benefit is compounded. In this way, AI enables businesses to ensure agility with a regular stream of insights to drive innovation and competitive advantage.
As the innovation in the AI landscape progresses, we are inching towards an era where algorithms tell us all about our tastes and preferences. Looking at this, we can say that AI in a leadership position doesn’t seem like a wild fantasy anymore.
The Covid-19 pandemic has left organizations vulnerable and has exposed their daily operations. This has elevated the need for real-time insights and has opened our eyes to the gaps in our capabilities to access, mobilize, and utilize data. Additionally, the air around AI is still not clear, and this is causing challenges for business leaders who are geared up to scale with AI but are yet to introduce their teams to the “scale or fail” approach.
To scale business processes, organizations must cultivate confidence in AI and design the right governance structure to allow an ethical collaboration between humans and machines. Additionally, it is important to define business and technical challenges that AI can help solve, and the efficiencies for stakeholders across organizations that AI can help achieve. Based on these, C-suite executives should prioritize the following technology and human capital investments to achieve their long-term goals:
Here, we talk about one of the critical investments for an organization – Human Capital. It is necessary to create a company of believers and for that, an organization needs to leverage its goal of data-driven reinvention.
Organizations should work with data architects, business owners, and solution architects to develop their AI strategy underpinned by data strategy and data taxonomy, and by analyzing the value that their company can and wishes to create. Indeed, “Establishing a Data-Driven culture is the key—and often the biggest challenge—to scaling artificial intelligence across your organization.”
While your technology enables the business, your workforce is the essential driving force. It is crucial to democratize data and AI literacy by encouraging skilling, upskilling, and reskilling. Resources in the organization will need to change their mindset from experience-based, leadership-driven decision making to data-driven decision making, where employees augment their intuition and judgment with AI algorithms' recommendations to arrive at better answers than either humans or machines could reach on their own.
My recommendation would be to carve out “System of Knowledge and Learning” as a separate stream in overall Enterprise Architecture, along with System of Records, Systems of Engagement and Experiences, Systems of Innovation and Insight.
AI and data literacy will help in increasing employee satisfaction because the organization is allowing its workforce to identify new areas for professional development. This culture aims to educate employees to adopt an “out of the box” approach to facing rapid and unprecedented changes.
Clients today need organizations that value simplification of their system and vendor ecosystems. Enterprises should prioritize choosing the right AI/ML technology provider partner, like Microsoft, with a capable partner and ISV ecosystem. To simplify these ecosystems, an organization needs to identify the functional gaps that exist, evaluate the applications that align with the business strategy, and streamline the infrastructure for ongoing operations.
Who doesn't hate a typical case of Chinese whispers? Organizations need to define a common taxonomy of business terms, including the KPI, ORA, leading indicators, and domain model. This should be implemented to avoid the need for an interpreter between two different users, so that everyone in the business (including the extended partner ecosystem in the supply chain) speaks the same language and makes the right decision without any confusion. This unified taxonomy should be pushed through consistently across the "System of Knowledge and Learning", System of Records, Systems of Engagement and Experiences, and Systems of Innovation and Insight.
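One lightweight way to picture such a unified taxonomy is a shared, machine-readable glossary that every system and team reads from. The sketch below is hypothetical Python; the terms, owners, and definitions are invented examples, not a prescribed standard.

```python
# Hypothetical shared business glossary: one definition per term, used everywhere.
TAXONOMY = {
    "active_customer": {
        "definition": "Customer with at least one paid order in the last 90 days",
        "owner": "Sales Ops",
        "used_in": ["System of Records", "Systems of Engagement", "Systems of Insight"],
    },
    "on_time_delivery_rate": {
        "definition": "Orders delivered by the promised date / total orders shipped",
        "owner": "Supply Chain",
        "used_in": ["Systems of Insight", "System of Knowledge and Learning"],
    },
}

def describe(term):
    """Return the single agreed definition so every system speaks the same language."""
    entry = TAXONOMY[term]
    return f"{term}: {entry['definition']} (owner: {entry['owner']})"

print(describe("on_time_delivery_rate"))
```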
More data is not always better. In a world where data is proliferating and data begets more data, it can be tempting to gather more and more. Having a strong data strategy ensures you’re curating the right data to deliver the desired outcome and then capturing its insights to fuel an AI strategy that delivers that outcome at speed and scale.
In a study by Accenture, three out of four C-suite leaders believed that if they fail to scale AI in the coming years, they will risk their business. As professionalization is the precursor to successful AI scaling, this has encouraged organizations to employ professionalization techniques like establishing multidisciplinary teams and clear lines of accountability.
Fueling the need for AI scaling, the pandemic has sharpened the contrast between those who have professionalized their AI capabilities and those who have not. Businesses are competing against each other to embrace new data capabilities and return to sustainable growth, which is possible through successful professionalization.
a.) When organizations adopt a professionalized approach of deploying trained, interdisciplinary teams to work on these applications, they can successfully maximize the value of their AI investments.
b.) Professionalization helps organizations to achieve consistency in results when performing the same or similar actions in the future. Trained data practitioners build cutting-edge technologies across use cases by leveraging repeatability.
c.) Professionalization of AI processes contributes to making technological applications more ethical and transparent. This helps in building a culture that encourages trust. Companies need accountable processes to leverage successful responsible AI.
There is a lack of consensus among our world leaders, and we are not paying enough attention to training our leaders. This includes good leadership education for our business leaders, our political leaders, and our societal leaders. While scaling AI, many executives struggle to make sense of the business cases for how AI can bring value to their organizations. In the current world, these leaders are following a herd of their contemporaries who have referred to surveys that highlight the importance of engaging in AI adoption. But building a unique business case is not their top priority.
The need of the hour is for leaders who can adapt and stay agile to cope with unprecedented circumstances. Leadership needs to define AI value for today—with a vision for tomorrow.
AI will become the new co-worker. It will be critical for organizations to clearly define where in the business-process loop they should automate, where they should depend solely on machines, and where they should ensure collaboration between humans and machines, so that automation and the use of AI don't lead to a work culture where humans feel like subordinates of the machines. Humans believe in building a culture where they communicate and represent the values of the company to create business value.
Leadership is about dealing with change. You need to understand what it means to be human: you can have human concerns, you can be compassionate, and you can be humane. At the same time, leaders should be able to imagine strategies for collaboration between machines and humans. This collaboration will be used to build strategies to combat the unprecedented and to brainstorm ways in which processes can be adjusted to create the same value. A leader needs to be able to make an abstraction of this, and AI is not able to do that.
With a long-term view, some of the other aspects an organization needs to plan for when scaling AI are:
a.) Transition from siloed work to interdisciplinary collaboration, where business, operational, IT, and analytics experts work side by side, by bringing a diversity of perspectives and ensuring initiatives address organizational priorities.
b.) Establish a strong AIOps practice for managing the processes of development, deployment, and governance.
c.) Shift from traditional, rigid, risk-averse, leader-only decision making to an agile, experimental, and adaptable mindset by creating a minimum viable product in weeks rather than months and embracing a test-and-learn mindset.
d.) Define and follow the Ethical AI framework and principles
e.) Ensure Data Security and Trust in the data
f.) Organize for scale – divide key roles between a central “Analytics Hub” (typically led by a chief data officer) and “spokes” (business units, functions, or geographies).
g.) Reinforce the change – With most AI transformations taking 2-3 years to complete, leaders must also take steps to keep the momentum for AI going during the journey by tracking the adoption, celebrating small successes, and providing incentives for change.
The AI landscape is dynamic thanks to the constant technological innovations and C-suite executives recognize the need to leverage AI for a data-driven reinvention. The secret to scaling AI is cultivating confidence in AI and designing the right governance structure to allow an ethical collaboration between humans and machines.
Professionalization is an integral part of scaling your AI and data practices. Enterprises that have leveraged professionalization to scale their AI processes are leading their industry compared to contemporaries who are still deliberating over ways to adopt responsible AI. With a clear understanding of what professionalization can do for the AI landscape, by exploring its benefits, and by employing the right leadership who can successfully delegate composite AI, an organization can make a considerable mark in the field of technological innovation.
How does your organization employ professionalization to scale AI processes?
Tags: AI, Cloud, Digital Twins
IDC forecasts the global edge computing market to reach $250 billion by 2024, with compound annual growth of 12.5%. No wonder the industry is talking about edge computing.
Edge computing is one of the “new revolutionary technologies” that can transform organizations wanting to break free from the previous limitations of traditional cloud-based networks. The next 12–18 months will prove to be the natural inflection point for edge computing. Practical applications are finally emerging where this architecture can bring real benefits.
91% of our data today is created and processed in centralized data centers. Cloud computing will continue to contribute to businesses through cost optimization, agility, resiliency, and as an innovation catalyst. But in the future, the “Internet of Behaviors (IoB)” will power the next level of growth, with endless new possibilities to re-imagine products and services, user experiences, and operational excellence. The IoB is one of the most sought-after and spoken-about strategic technology trends of 2021. As per Gartner, the IoB has ethical and societal implications, depending on the goals and outcomes of individual uses, and it is concerned with utilizing data to change behaviors. For instance, as connected technologies increasingly gather data about people's daily lives, that information can be used to influence behaviors through feedback loops, as seen during COVID-19 monitoring.
IoT, IIoT, AI, ML, Digital Twin, and edge computing are at the core of the Internet of Behaviors. As per Gartner's research, about 75% of all data will require analysis and action at the Edge by 2022. Organizations have been debating what separates edge computing from other traditional data processing solutions. Whether it is right for their business, and to what extent, is also a hot topic.
The foundational principles of edge computing are relatively simple to comprehend but understanding its benefits can be complex. Edge computing can provide a direct on-ramp to a business’ cloud platform of choice and assists in achieving flexibility to facilitate a seamless IT infrastructure.
It is a distributed computing model where computing is conducted close to the geographical location where data is collected and analyzed, rather than relying on a centralized server or the cloud. The improved infrastructure uses sensors to gather data, while edge servers safely process the data on-site in real time.
By miniaturizing processing and storage technology, the network architecture landscape has experienced a massive shift in the right direction, where businesses can worry less about data security. Present-day IoT devices can gather, store, and process far more data than they could before. This creates more opportunities for businesses to integrate and update their networks, relocating processing functions closer to where data is gathered at the network edge, so that it can be assessed and applied in real time, closer to the intended users.
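A simplified Python sketch of that idea: raw sensor readings are filtered and aggregated on the edge node, and only a compact summary (or an urgent alert) is forwarded to the cloud. The function names, thresholds, and payloads are illustrative assumptions, not any vendor's API.

```python
import statistics

def process_at_edge(readings, alert_threshold=90.0):
    """Aggregate raw sensor data locally; send only summaries and alerts upstream."""
    summary = {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 1),
        "max": max(readings),
    }
    alerts = [r for r in readings if r > alert_threshold]
    return summary, alerts

def send_to_cloud(payload):
    print("uplink ->", payload)   # stand-in for a call to the central cloud platform

# 1,000 raw readings stay on the edge node; only a tiny summary travels upstream.
raw_readings = [72.0 + (i % 25) for i in range(1000)]
summary, alerts = process_at_edge(raw_readings)
send_to_cloud(summary)
if alerts:
    send_to_cloud({"alert": "threshold exceeded", "samples": alerts[:3]})
```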
Edge computing is essential now because it is an upgrade for global businesses to improve their operational efficiency, boost their performance, and ensure data safety. It will also facilitate the automation of all core business processes and bring about the “always-on” feature. Edge computing holds the key to achieving total digital transformation of conducting business more efficiently.
Edge technology is relevant today as it’s empowered by new technologies such as 5G, Digital Twin, and Cloud-native Application, Database, and Integration platforms.
By 2025, we will witness 1.2 billion 5G connections covering 34% of the global population. Highly reliable, low-latency connectivity is the new currency of the networking universe, underpinning capabilities in many industries that were previously impossible. With 5G, we'll see a whole new range of applications enabled by its low latency and the proliferation of edge computing, transforming the art of the possible.
Moreover, private 5G networks will fuel edge computing and push enterprises to the Edge. Forrester sees immediate value in private 5G — a network dedicated to a specific business or locale like a warehouse, shipyard, or factory.
Response time is an absolute necessity for AI/ML-powered solutions, especially those deployed in a remote location or where the user is on the move. If there is even a millisecond of delay in the algorithms of a remote patient monitoring system at a hospital, it could cost someone their life. Companies that render data-driven services cannot afford to lag in speed, as it can have severe consequences for brand reputation and customers' quality of experience.
Container technology like Docker and Kubernetes allows companies to run prepackaged software containers more quickly, reliably, and efficiently. Armed with these technologies, companies can set up and scale Micro Clouds wherever and however they want.
Service and Data mesh facilitate a channel to release and query data or services distributed through datastores and containers across the Edge, making it a critical enabler. It also allows bulk queries for the entire population within the Edge over each device, bringing greater ease.
Software-defined networking enables the configuration of the overlay networks by users, making it simpler to customize routing and bandwidth to determine a way to connect edge devices and the Cloud.
The digital twin is a crucial enabler responsible for organizing physical-to-digital and cloud-to-edge, letting domain experts (not just software engineers) configure their applications to observe, think and act according to the Edge.
The maturity of IIoT platforms and Edge AI pave the way for IT-OT convergence, thereby offering an innovation advantage to the business.
The industrial Internet of Things or IIoT sensors provides a more significant business advantage such as greater productivity and efficiency and cost reduction for data collection, analysis, and exchange.
MEC transforms the topology and architecture of mobile networks from a pure communication network for voice and data into an application platform for services. MEC complements and enables the service environment that will characterize 5G. Examples: Connected Cars, Industry 4.0, Remote Patient Monitoring, eHealth.
XR represents an immersive interface for work collaboration in a virtualized environment. With the help of edge computing, these experiences become even more detailed and interactive.
Innovation in heterogeneous hardware and ruggedized HCI / Edge devices is making edge computing more pervasive, as these devices process greater volumes of data quickly while using less power. Integrating this specialized hardware at the Edge enables efficient computation within physical environments while accelerating response rates.
Hyperscalers, along with 5G and chip OEM, are innovating at speed to capture the market. Azure Percept is Microsoft’s latest edge computing platform, bringing the best hardware, software, and cloud services to the Edge. Azure Percept is an excellent device for makers and builders to build and prototype intelligent IoT applications powered by Azure Cognitive Services and Azure Machine Learning Services.
New privacy-oriented technologies include techniques and hardware that enable data to be processed without exposing all the problematic aspects. Data is encrypted during storage and transmission. However, privacy-preserving tech is bound to safeguard data even in the computing stage, making it more reliable for other lines of the organization and its partners, especially when required to be processed on Edge.
Robotics can be configured to act following signals and updates by the Edge. This has been seen in life-saving surgical procedures where agility and precision are of utmost importance. Both Edge and Cloud are of utmost importance to control the robot’s moves and executions through stored data while ensuring no lag between movements.
From what we are witnessing so far, edge computing represents the future extension of cloud technology, making it more bulletproof. Discussed below are a few ways we may see this continuum manifest:
A substantial amount of computing is already being carried out at the Edge in manufacturing units, hospitals, and retail, where the majority operate on the most sensitive data. It also powers the most critical systems that are required to function safely and reliably. Edge can facilitate decision-making on these core functional systems when there is the opportunity for AI and IoT to tap into them.
Understanding and assuming control of the Edge also gives you control of the closest point of data action. Utilizing this unique opportunity to relay differentiated services can help a business in great ventures with valuable partnerships that branch out.
For instance, edge computing is beneficial to an automobile manufacturer and the insurance vendor, to the companies that provide energy and utilities, and to city planners. Edge computing can offer your business new data, and you can offer greater value to your partners, which is a win-win scenario. The new edge-friendly data and services are processed in the Cloud, integrating with other organizational applications and data.
Edge computing is the need of the hour to maximize the returns of next-generation technologies, as the current scope needs to be broadened anyway. As time passes, so does the need for a better technological support system for data processing that is faster, smarter, and more efficient. Their collective effect can deliver new features such as voice input to your vehicle or remote operations using teleoperation. Edge facilitates the control and programmability required to link these capabilities into an organization.
Today's cloud world is characterized by a limited number of mega data centers in remote locations. Data travels from a device to the Cloud and back to execute a computation or data analysis, and this round trip typically takes 50 to 100 milliseconds over today's 4G networks.
Data traveling over 5G in less than five milliseconds makes the edge cloud, and the new services it empowers, possible.
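To make that difference concrete, here is a tiny back-of-the-envelope sketch in Python using the round-trip figures above; the 100 ms control-loop deadline is a hypothetical value chosen only for illustration.

```python
# Rough comparison of how many cloud round trips fit into a tight control loop,
# using the figures quoted above: 50-100 ms over 4G vs. under 5 ms at the 5G edge.
CONTROL_LOOP_BUDGET_MS = 100  # hypothetical deadline for one sense-decide-act cycle

for label, rtt_ms in [("4G to a distant cloud", 75), ("5G to an edge cloud", 5)]:
    round_trips_per_second = 1000 / rtt_ms
    round_trips_per_cycle = CONTROL_LOOP_BUDGET_MS / rtt_ms
    print(f"{label}: ~{round_trips_per_second:.0f} round trips/s, "
          f"~{round_trips_per_cycle:.0f} trips per {CONTROL_LOOP_BUDGET_MS} ms cycle")
```

At 4G latencies roughly one exchange fits in such a cycle; at edge latencies there is room for about twenty, which is what makes interactive and safety-critical workloads feasible.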
Decentralizing traditional IT infrastructure is at the core of edge computing and complementary to centralized cloud computing.
Edge computing is one of the three origins of the distributed cloud, making it highly relevant going forward. CIOs can use distributed cloud models to target the location-dependent cloud use cases they will require in the future. Per Gartner, by 2024 most cloud service platforms will provide at least some distributed cloud services that execute at the point of need. The distributed cloud retains the benefits of cloud computing while extending the range and use cases of the Cloud, making it a better version.
Today, everything is getting “smart” / “intelligent” because of technology. From home appliances and automobiles to industrial equipment, substantial products and services are employing the aid of AI to interpret commands, analyze data, recognize patterns, and make decisions for us. Most of the processing that powers today’s intelligent products is handled remotely (in the Cloud or a data center), where there’s enough computing power to run the required algorithms.
Edge, combined with 5G's higher bandwidth and the distributed cloud's low-latency computation, is a future that was imagined less than a decade ago and is now within reach. What is impressive is how quickly several technological leaps have outgrown what we could imagine; edge computing feels like science fiction materializing, only with even greater possibilities. Not only can your business reach a new level of success, using the edge will also help you run your organization more efficiently, innovate faster, and derive better value from partnerships.
Tags: Cloud, COVID19, Digital Twins
"When we talk about assets on the balance sheet, Data deserves its row," says Satya Nadella, Microsoft CEO.
As an organization, you face a big question: how do you handle users' data? It can be used to support your business, or it can be used to give your end users a better experience.
With enough data and a roadmap to use that data effectively, you can accelerate your company's growth. But using data effectively is incomplete without data governance. Here is every "Why? How? Where?" you need to know about data governance and Azure Purview.
Data is the new currency of the digital age, but data within organizations is growing at exponential rates: 90% of today's data was created in just the last two years, and by 2025, 80% of data will be unstructured. This influx of data has multiplied organizational challenges many times over.
To get real business value from Data, the organization needs to know:
A lack of understanding of any of the above can create operational inefficiencies, confusion about the data and information being distributed internally and externally, and poor business decisions based on flawed or misunderstood data. And that is only part of the problem: regulators are cracking down on companies for compliance, data privacy, and data sovereignty violations (and I won't be surprised if we soon start seeing regulations around the ethical use of data).
According to Gartner, “Data governance is the specification of decision rights and an accountability framework to ensure the appropriate behavior in the valuation, creation, consumption, and control of data and analytics.”
Data governance helps ensure the data is usable, accessible, and protected. It also helps in more informed data analytics because an organization can come to a well-informed conclusion. Data governance also improves the consistency of the data, removes redundancies, and helps make sense of garbage data, which can save an organization from a big decision-making problem.
Data governance also provides organizations with:
Microsoft Azure Purview is a fully managed, unified data governance service that helps you manage and govern your on-premises, multi-cloud, and SaaS data. Purview creates a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. Purview empowers data consumers to find valuable, trustworthy data.
It is built on Apache Atlas, an open-source project for metadata management and governance of data assets. Azure Purview also has a data-sharing mechanism that securely shares data with external business partners without setting up extra FTP nodes or creating redundant large datasets. Azure Purview does not move or store customer data outside the region in which it is deployed.
There is currently no licensing cost associated with Purview; you pay for what you use. The pay-per-use model offered by Microsoft as part of Public Preview is exciting for Microsoft customers looking to move quickly without having to create a business case to secure an additional budget. Azure Purview reduces costs on multiple fronts, including cutting down on manual and custom efforts to discover and classify data and eliminating hidden and explicit costs of maintaining homegrown systems and Excel-based solutions.
It supports the following types of data sources at the time of writing:
Azure Purview consists of the following main features:
Azure Purview Data Map provides the foundation for data discovery and effective data governance. It is a cloud-native PaaS service that captures metadata about enterprise data in analytics and operational systems, both on-premises and in the cloud. The Purview Data Map is automatically kept up to date with a built-in automated scanning and classification system. Business users can configure and use the Purview Data Map through an intuitive UI, and developers can programmatically interact with the Data Map using the open-source Apache Atlas 2.0 APIs.
Purview Data Map powers the Purview Data Catalog and Purview Data insights as unified experiences within the Purview Studio.
The Data Map extracts metadata, lineage, and classifications from existing data stores. It lets you enrich your understanding by classifying data at cloud scale, using more than 100 built-in classifiers as well as your own custom classifiers. With the Purview Data Map, organizations can centrally manage, publish, and inventory metadata at cloud scale and extend it further using the open Apache Atlas APIs.
Sensitivity labeling is supported consistently across database servers, Azure, Microsoft 365, and Power BI, and the Apache Atlas open-source APIs let you integrate all your data systems easily.
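For developers, those Atlas APIs are the main programmatic entry point. The sketch below is a minimal illustration only; the catalog endpoint path and the token scope are my assumptions about how a Purview account typically exposes the Atlas 2.0 search API, and the account name is a placeholder.

```python
# Minimal sketch: basic search against the Purview Data Map via its Apache Atlas 2.0 API.
# Assumptions (not from this article): the /catalog/api/atlas/v2 path and the token scope.
import requests
from azure.identity import DefaultAzureCredential  # pip install azure-identity requests

account = "your-purview-account"  # hypothetical Purview account name
search_url = f"https://{account}.purview.azure.com/catalog/api/atlas/v2/search/basic"

token = DefaultAzureCredential().get_token("https://purview.azure.net/.default").token

# Find up to 10 assets whose name or description mentions "customer"
response = requests.get(
    search_url,
    params={"query": "customer", "limit": 10},
    headers={"Authorization": f"Bearer {token}"},
)
response.raise_for_status()
for entity in response.json().get("entities", []):
    print(entity.get("typeName"), entity.get("displayText"))
```

The same Atlas surface also exposes entity, lineage, and glossary endpoints, which is what makes it possible to integrate existing data systems without proprietary connectors.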
With the Data Catalog, Purview enables rich data discovery: you can search business and technical terms and understand data by browsing the associated technical, business, semantic, and operational metadata.
Data catalog, along with information on the data source and interactive data lineage visualization, empowers data scientists, engineers, and analysts with business context to drive BI, analytics, AI, and machine learning initiatives.
Purview helps companies to understand their data supply chain from raw data to business insights. From a Data lineage perspective, Purview currently supports:
Using Purview Data Insights, data officers and security officers get a bird's-eye view and can understand at a glance which data is actively scanned, where sensitive data resides, and how it moves.
The data governance component gives users a bird's-eye view of the organization's data landscape by quickly showing where analytics and reports are stored. It lets stakeholders maintain and use the organization's data efficiently, whether or not it has already been cataloged. This view surfaces crucial insights such as how data is distributed across environments, how it moves, and where sensitive data is stored.
Purview Studio is the environment you work in after creating an Azure Purview account. The studio is the central control area through which developers, administrators, and end users work with Purview, and it is the next step in using Azure Purview.
Azure Purview is in its early days and has a few gaps that need to be addressed. Here are a few limitations of Azure Purview:
While Azure Purview is not yet a one-stop shop for enterprise-level data governance, based on the roadmap that has been shared it won't be long before the Purview team closes enough gaps to make it an enterprise-grade data governance suite.
Azure Purview is there to help you manage your data better; here is how it helps you process it and turn your data into an asset:
Azure Purview lets you catalog your data and apply customized tags to it, so you, the end user, can locate and understand it more easily.
It also helps you maintain Data Quality in situations where your data must be complete, unique, valid, accurate, consistent, relevant, reliable, and accessible. Governance tools such as the data catalog will help you with this.
As an organization, it falls on you to provide the utmost security for end-user data. Under data-protection laws and mandates, end users can demand that their data be removed from company servers, or that its content be changed, at any given point; Azure Purview lets you create an automated process that streamlines these service requests and produces the documentation required by law.
It provides a unified map of your data assets. This helps in forming an effective data governance system.
You can run searches based on technical, business, and operational terms. One can identify the sensitivity level of the data and can understand the interactive data lineage.
Get continuous updates about the location of the data and continuous insight into its movement through your multi-layer data landscape. Along with this, Azure Purview provides you with services like a Data catalog and Business glossary.
The data catalog is a core element of any data governance software: it scans all registered data sources and identifies, indexes, connects, and classifies their data sets.
The business glossary is a collection of terms with brief definitions that connect to other terms. With the Business Glossary, it is possible to automate the classification of data sets and annotate them with the correct business terms so end users can understand them more easily. A business glossary is the foundation of the semantic layer an organization uses to define a common language for its business.
With features like these, Microsoft Azure Purview allows your data to become a crucial asset.
Data Governance is a must-have solution strategy for all enterprises to use Data as assets. Data Governance is a complex solution yet a foundational pillar in any enterprise’s data journey. Data governance helps to democratize data responsibly through accessible, trusted, and connected enterprise data at scale.
Microsoft Azure Purview provides a good starting point for cloud-native data governance solutions. Azure Purview helps answer the who, what, when, how, where, and why of data. From a feature standpoint, I would say it has the potential to be a game-changer, with capabilities like the Data Catalog, Data Insights, Data Map, Business Glossary, and pipelines to manage your data sources and destinations.
Azure Purview has solid potential to shape a new Data Governance as a Service (DGaaS) industry and open up new opportunities for businesses to explore.
Tags: Cloud, COVID19, Digital Twins
According to Gartner, businesses will be spending about $333 billion by the end of 2022 on cloud infrastructure, and according to McKinsey, cloud spending will increase by 47% in the year 2021. These numbers are staggering and certainly depict a very positive picture here. However, cloud consumers need to assess the pay-off of such significant cloud spending.
McKinsey also reported that companies exceeded their cloud budgets by 23% and that 30% of their outlays were wasted. This leads me to wonder whether businesses have been able to optimize operations from their cloud investments, whether the Cloud has just added to their costs or has been good value for the money, and why some companies still grapple with mismanaged or added costs during their cloud journey.
These pertinent questions need to be answered at a time when companies are struggling to stay afloat and trying to mitigate their overall costs. Cloud costs don't just mean IT costs; they also include certain operational and managerial costs.
So, how do organizations harness the cloud cost optimization journey? Let me guide you through the same in this blog.
According to Gartner, 45% of the organizations that perform a 'lift and shift' to cloud architecture endure higher costs and end up overspending by 70% in the first year. McKinsey calculates that "80% of the enterprises believe that managing cloud spend poses a challenge." Flexera noted that "organizations waste an average of about 35% of their Cloud spend."
Beyond high overhead costs, poor cost management also hurts business innovation and overall agility. Additionally, according to a Cloudability survey, more than 57% of respondents have experienced a negative business impact due to inefficient cloud cost management. This is because much of the emphasis is placed on cloud adoption rather than cloud optimization. Organizations must look to save costs here and bring about the cultural and behavioral change needed to maintain fiscal discipline. As we enter the post-COVID world and the next stage of the economic cycle, IT leaders must work smart to ensure business efficiency through cloud cost management.
Despite conceding to the benefits derived from cloud cost optimization, many organizations struggle with it. It is essential to address key challenges and hurdles faced by cloud users in optimizing cloud costs. Let me take you through some common ones:
Let’s investigate seven mantras that IT and Business leaders can use to accelerate their Cloud Cost Optimization journey.
Cloud deployment entails some structural and systemic changes in an organization. A cloud-first mindset helps organizations become agile in bringing forth these changes, whether in business or revenue models. It also helps if IT teams can make decisions around the movement of the Cloud-based on the dynamic needs of various groups. Investing in PaaS capabilities and cloud-native toolsets can help here.
Cloud optimization is not something you do per se but rather a mindset you inculcate. An organization needs to arrive at the most cost-effective cloud architecture to meet their requirements by factoring in what’s on offer in the cloud catalog, including newer features, and knowing what resources to use by interpreting usage trends from billing.
In the past, organizations designed for availability, performance, and security delivered from a finite set of pre-provisioned resources planned for peak workload. The Cloud reverses this paradigm and allows for a more precise design, aligned exactly to workload requirements. Every architectural component in the Cloud carries a price tag, so optimal cloud architectures need to be designed with cost in mind.
Some of the core elements or principles of cloud economics today include:
Many organizations need to rethink what the Cloud can do for their business in the current climate. Acceleration and optimization of the Cloud are critical components to a successful cloud journey; both must be considered and intertwined. Whether an organization looks to optimize first for maximum cost and consumption efficiencies or accelerate first for greater scalability, there is no “best way.” Moving to the Cloud could reduce IT costs if it is planned and managed correctly. When you optimize as you go, the savings are significant, controlled, and scalable.
You cannot optimize cloud cost without visibility into the spend and a baseline. A good starting point for a cloud optimization framework is to ensure visibility of your spending and control over cloud expenditure.
Organizing cloud costs could entail resource tagging, cost allocation, and chargeback and show-back models. Additionally, creating and using a clear BI dashboard for visibility and control can help your organizations tremendously in the following ways:
To maintain an optimal state, you need to ensure that sound budgeting policies are adhered to. In terms of governance, the framework should also oversee resource-creation permissions. Microsoft offers tools such as Azure Advisor and Azure Cost Management to monitor your spending and flag cost spikes.
The journey for any cloud cost optimization starts with initial analyses of current cloud estate and identifying optimization opportunities across compute, network, storage, and other cloud-native features. Any cloud cost optimization framework needs to have a repository of cost levers with associated architecture and feature trade-offs. Businesses would need governance — the policies around budget adherence, resource creation permissions, etc. — to maintain an optimal state.
The key here is to focus on quick wins first, followed by dashboard creation for better visibility and control. Lastly, establish a Governance model to maintain an optimal state.
To maintain an optimal state, you will need:
The Cloud is ever-evolving, and organizations must evolve their portfolios with it. For example, automation, autoscaling, serverless services, containers, and similar capabilities have changed the cloud game and, if adopted, can keep driving costs down. It is therefore of utmost importance to keep finding new optimization strategies and opportunities through continuous review. The key is not a one-time cloud cost optimization exercise but a continuous optimization cycle at every stage.
Cloud consumers should be responsible for what they consume. Enable them to create forecasts and pursue optimization opportunities. A good starting point is to develop a resource-tagging model (e.g., usage, ownership, department, and cost center) to implement chargeback. With proper resource tagging, it is possible to associate resource cost with a resource owner and thus a cost-center code.
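As an illustration of that tagging model, here is a minimal Python sketch using the Azure SDK; the resource group, tag names, and subscription ID are placeholders, and your own tagging taxonomy will differ.

```python
# Minimal sketch: stamping a resource group with chargeback/show-back tags so spend
# can later be grouped by owner, department, and cost center in cost reports.
from azure.identity import DefaultAzureCredential          # pip install azure-identity
from azure.mgmt.resource import ResourceManagementClient   # pip install azure-mgmt-resource

subscription_id = "<subscription-id>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

client.resource_groups.create_or_update(
    "rg-analytics-prod",  # hypothetical resource group
    {
        "location": "westus2",
        "tags": {
            "usage": "production",
            "owner": "data-platform-team",
            "department": "finance",
            "cost-center": "CC-1042",
        },
    },
)
```

Cost views and exports can then be filtered or grouped by these tags to attribute spend back to each cost center.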
Choose the suitable sourcing model from allocation-based and consumption-based services.
With the advent of pay-as-you-go (PAYG) models, financial decisions have been decentralized. In traditional IT models, only a few people were responsible for financial decisions about infrastructure purchases. With the new pricing models, anyone can make cloud spending decisions, and cost management has become everyone's responsibility.
Hence, it becomes imperative to integrate FinOps, which is simply the combination of FINance (budgeting and cost models) and OPerations (infrastructure, apps, data).
Cloud cost optimization calls for a paradigm shift at the organizational level and at the behavioral level to ensure that cloud investments are utilized responsibly and optimally. It is not just an operational concern or merely about “cost reduction”; it’s a value-driven strategic move. The path toward it will not be linear and requires tight collaboration among governance, architecture, operations, product management, finance, and application development to be successful.
With the right strategic interventions, control, and operating model, the Cloud provides excellent visibility to organizations on IT spends and is undoubtedly the most crucial and promising/futuristic technology investment an organization can make.
Tags: Cloud, COVID19, Digital Twins
Sustainability has been a buzzword and rightfully so owing to the accelerated global climate crisis. Corporations worldwide acknowledge the urgent need to act on climate change and have even pledged to set climate targets.
According to Gartner, Sustainability is defined as “An objective that guides decision making by incorporating economic, social & environmental impacts.”
According to the “Business Ambition for 1.5°C — Our Only Future” campaign, 177 corporations from across the sectors have come forward to reduce the rising global temperature. They are not just giving out empty promises but are also taking concrete actions to bring policy changes within their own countries and regions. These are undoubtedly positive breakthroughs and require to be implemented now more than ever as the time between now and 2030 is known as the Decade of Action.
Sustainability is pervading all facets of our lives, and it has undoubtedly pervaded the digital technology industry as well. No doubt, Digital transformation is disruptive and innovative, but is it truly sustainable? Well, I would say there are two sides to the coin here.
On the one hand, we can argue that software technologies are intelligent solutions built to support the environment. For example, Microsoft created the AI for Earth initiative to assist environmental organizations. On the other hand, our digital technology usage is currently estimated to be responsible for 4% of global CO2 emissions, and with ever more digital use, that number will only increase. So where does the solution lie? How can the Cloud help with Sustainability?
The solution lies in using technology to deliver sustainable solutions and to adopt environmentally friendly IT practices, making ethical choices about design and technology that shape the internet's broader ecological impact. My perspective is to harness the power of the Cloud, because cloud computing truly is the silver lining, or should we say the 'green lining', here.
Therefore, in this blog, I intend to highlight cloud computing efficacy in driving sustainability goals.
Microsoft and WSP USA conducted a study which found that the Microsoft cloud is up to 93% more energy efficient than on-premises data centers, with up to 98% lower carbon emissions. This comprehensive study highlights the efficiency of cloud infrastructure in supporting Sustainability and its numerous environmental benefits. A report published by Accenture Strategy also stated that "migrations to the public cloud can reduce CO2 emissions by 59 million tons per year, which equates to taking 22 million cars off the road." Such statistical data isn't just hyperbole but a reality.
Let me take you through the different ways the cloud infrastructure poses a sustainable option:
Typically, on-premises data centers consume an unreasonable amount of energy: they require a constant power supply, cooling systems to avoid overheating, and so on. Server underutilization wastes further energy, and servers left unused eventually add to e-waste.
Additionally, Public cloud providers can deploy ‘green data centers’ by utilizing other energy sources. For example, Microsoft’s data centers use renewable energies like wind, solar, and hydroelectricity.
Migrating to the Cloud will replace high carbon-emitting machines with virtual equivalents. Such a replacement ensures to reduce the company’s carbon footprint significantly. For example, the Cloud enables seamless virtual services like video streaming rather than utilizing heavy hardware equipment that consumes more energy. Elimination of significant physical hardware from the day-to-day operations ensures dematerialization and reduces cost, waste, effort, and environmental impact.
Corporations have been leveraging technologies to create more sustainable businesses while minimizing their environmental impact. Innovations built on cloud computing infrastructure have powered sustainability goals; virtual meeting applications, for example, have been one of the most notable cloud-powered innovations so far. Companies can now hold regular employee meetings online, saving significant cost, energy, and time.
In research conducted by Microsoft, it was noted that the Cloud can help provide scalable technological solutions such as smart grids and intelligent buildings to ICT sectors. Moreover, major enterprises are harnessing the power of the Cloud to find sustainable solutions as well. Take, for example, AGL, one of Australia's leading energy companies, which used Microsoft's Azure cloud platform to manage solar batteries remotely and efficiently derived a sustainable solution with the help of cloud computing infrastructure.
These use cases certainly highlight that the cloud infrastructure isn’t just inherently a sustainable solution but also an infrastructure that powers rapid Innovation for sustainability-centric solutions.
SaaS has transformed the way we work, communicate, and share data. With Sustainability, the reporting requirements have become crucial and complex. As such, it has become essential to have secure, accessible, and accurate data. SaaS platform essentially provides a cloud application solution that drives business operations by managing and automating key activities.
Hyperscalers can invest vast amounts of money in innovation for energy-efficient data centers and technology thanks to the growth in cloud consumption and the number of cloud users. For example, Microsoft is investing in data centers based on new leading-edge designs (it has even run a data center underwater) to improve average PUE (Power Usage Effectiveness). Such investments in green infrastructure significantly reduce the per-user footprint when cloud business applications are used.
Sustainability initiatives are being embraced across all levels of the hierarchy, from CEOs to CFOs and CIOs. There has also been significant pressure from customers and stakeholders to take a genuine stand on Sustainability.
Hence, by embracing the power of the Cloud, CXOs can efficiently harness growth and Innovation. Let me tell you how:
According to a report published by Accenture Strategy, 21% of CEOs and CXOs acknowledged the importance of embedding sustainability goals into their corporate strategy, yet less than half were able to integrate them into their business operations.
Incorporating sustainable goals doesn’t just ensure a competitive advantage but also allows companies to fight against climate change proactively.
Indeed, the uncertainties brought about by the pandemic have halted and distracted CEOs' sustainability efforts. However, the accelerated migration to the sustainable Cloud has also helped solve this very problem.
Leaders of technology-driven businesses or other businesses can drive sustainable actions by incorporating Cloud computing technologies. Achieving sustainability goals doesn’t happen in a vacuum but requires bringing together different technologists, employees, and stakeholders to realize the importance of leveraging sustainable options into operations.
The goal is not to make profits from Sustainability but rather to make profits sustainably. CEOs must align Sustainability with profits, business operations, investments, Innovation, and growth. By embracing the power of the sustainable Cloud, they can quickly alleviate the pressures to implement Global Goals and concentrate on strategizing for business success.
CFOs have traditionally viewed non-financial metrics like Sustainability as a cost rather than a source of value. This can be attributed to the language barrier between CFOs and their Sustainability colleagues, as rightly pointed out in a Harvard Business Review study. However, the same research noted that "non-financial metrics such as carbon emissions can reveal hundreds of millions of dollars in sustainability-related savings and growth."
a) Sustainability investments have tangible and intangible benefits, and CFOs must recognize this now more than ever.
b) To maximize the value of Sustainability initiatives, a model like the ROSI (Return on Sustainability Investment) analytical model is a great starting point.
c) Another important model is CISL's "Net Zero Framework for Business," designed for companies tasked with delivering net zero in a business context and influencing this ambition's societal transition. Drawing on a range of leading frameworks and CISL's insights, it provides a 'one-stop shop' for the essential tasks that need to be put in place to align with net zero.
Source: Targeting Net Zero — framework — Cambridge Institute for Sustainability Leadership
Let us now understand how the Cloud can make a business case for CFOs:
The Cloud allows CFOs to use financial engineering to drive operations in the core business instead of worrying about a huge upfront cash outlay.
The SMART 2020 report estimated that technology-enabled energy efficiency would deliver a total of $947 billion in cost savings. This is huge, as CFOs can channel these savings into Innovation, scalability, and growth. The Cloud also allows CXOs to shift their outlook and think 'green,' contributing to something larger than their own companies. It offers an opportunity to join the fight against climate change without prohibitive mitigation costs or risks.
By migrating to the Cloud, CFOs can mitigate and avoid carbon-footprint expenses (such as emission taxes and penalties for non-compliance) that might otherwise be incurred later.
The Cloud enables CFOs to move beyond immediate financial imperatives and engage in better value creation. By incorporating sustainability goals into business operations through the Cloud, CFOs can build a stronger Environmental, Social, and Governance (ESG) profile, which helps build better relationships with customers, shareholders, and broader stakeholders.
In addition to the standard cloud economics and business agility benefits of Cloud adoption, the Sustainability benefits of the Cloud help CIOs and CTOs with:
70% of employees now want to work at a company with strong environmental goals. This is reflected within IT, where employees are urging their organizations to take greater responsibility and action on Sustainability. The sustainability benefits of cloud computing reach all levels of the hierarchy, including the workforce. With CEOs committing to corporate social responsibility (CSR), employees can make a collective, collaborative effort to minimize the carbon footprint, and with newer, more sustainable productivity tools and IT systems they can more easily keep energy consumption in check.
Leading cloud providers like Microsoft Azure are pledging to be carbon negative by 2030 and to match 100% of their global annual energy consumption with renewable energy credits (RECs). This highlights how serious cloud providers are about Sustainability and how much effort they are willing to put in to uphold their environmental credentials. Hence, it is time for CXOs to consider the sustainability benefits of the Cloud now more than ever, as Sustainability is no longer just a perspective but a business imperative. CXOs need to collaborate to align their operational goals with sustainability goals and build a corporate purpose around tackling climate change. Sustainability benefits can be truly harnessed only if cross-divisional teams understand the urgency.
In my opinion, Cloud computing can definitely support the company’s sustainable efforts by saving billions of dollars in energy costs and reducing carbon emissions by millions of metric tons.
Tags: Cloud, Business Strategy, Digital Twins
As 2021 approached, 5G was predicted to reach around 34% of the global population in the next 4 years. How will 5G revolutionize the present scenario of the market? What are my predictions for a 5G-enabled future? What are the risks associated with 5G? Read this article to know what I expect from the 5G-enabled future.
5G networks are part of a major digital transformation trend that is impacting the consumer, public sector, and enterprise spaces. New devices and applications are emerging in the market that take advantage of the dramatically reduced latency and much higher throughput that 5G offers. Examples include accelerated adoption of smart cities, smart factories, next-generation in-store experiences, healthcare services, autonomous cars, and much more. It's an exciting time. By now, you are familiar with the excitement surrounding the 5G revolution.
A new technological revolution will begin with 5G, creating massive disruptions in perceiving and using tech. Let’s start by looking at what this technology is all about.
5G is no longer the technology of the future but a current reality. Markets globally have already started to switch to 5G, which marks the beginning of a new era. It is the only technology created so far with the potential to elevate the use of the Internet of Things (IoT), foster an environment of interconnectivity, and sustain economic growth at this scale. 5G brings a plethora of benefits, such as increased data speed, lower latency in network response time, and higher reliability.
According to Ericsson’s Mobility Report released in June 2019, 5G subscriptions will reach 1.9 billion by the end of 2024, making up over 20% of all mobile subscriptions at that time. So, while we’re still quite early in the game, it is not too soon to start thinking about the 5G implications for your business, both positive and negative.
5G is the next generation of mobile networks, offering higher speed and lower latency. In layman's terms, 5G will allow us to do everything we currently do, but at much higher speed. It runs on radio frequencies ranging from below 1 GHz all the way up to very high frequencies called "millimeter wave" (mmWave).
Imagine a world where smart glasses, VR headsets, and drones cooperate on emergency missions, communicating wirelessly with each other and with ground base stations over 5G networks in real time.
Support for millions of devices per square mile: until now, 4G has supported about 2,000 devices per 0.38 square miles (roughly one square kilometer), whereas 5G will support one million devices over the same area. This means better and more affordable internet access with 5G!
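A quick unit check on those figures (0.38 square miles is roughly one square kilometer) makes the jump easier to see; the snippet below simply restates the numbers quoted above.

```python
# Device-density comparison using the figures above; 0.38 sq mi is ~1 km².
KM2_PER_SQ_MI = 2.59

devices_per_km2_4g = 2_000        # ~2,000 devices per 0.38 sq mi on 4G
devices_per_km2_5g = 1_000_000    # ~1,000,000 devices per 0.38 sq mi on 5G

print(f"4G: ~{devices_per_km2_4g * KM2_PER_SQ_MI:,.0f} devices per square mile")
print(f"5G: ~{devices_per_km2_5g * KM2_PER_SQ_MI:,.0f} devices per square mile")
print(f"Increase: ~{devices_per_km2_5g // devices_per_km2_4g}x more devices per unit area")
```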
Look for speeds up to 10Gbps.
The fifth generation of the internet will deliver everyday speeds of around 1 Gbps. In the health sector alone, that would be revolutionary, enabling a bird's-eye view of patients and equipment 24/7.
With all this progress, you might be thinking we have already made the internet cheaper and more accessible throughout the world.
Lower latency means less time for data to travel from one point to another and a lower chance of experiencing lag; broadly, network latency falls as network speed rises. The 5G revolution will see an entirely new range of applications enabled by these low-latency characteristics and the expansion of edge computing, altering the art of the possible.
Being roughly ten times faster, 5G will give better results with lower latency. Overall, this generation of wireless is expected to deliver a 10x decrease in end-to-end latency.
Better mobility means a more dependable network while traveling: 5G is designed to keep devices connected at speeds of up to 500 km/h, more than enough for high-speed train travel. In addition, 5G targets 99.999% uptime and ultra-consistent network service.
Until now, we have used technology to make our lives easier. You press a button on your screen, and numerous calculations deliver your foods and goods to your doorstep. Wearable technology, to improve the human lifespan, is just the beginning of the technological revolution we are about to witness.
According to a survey recently released by Gartner, two-thirds of organizations were planning to deploy 5G by 2020. Yet businesses want to embark on the 5G journey faster than communication vendors can provide it. Furthermore, they plan to use 5G networks mainly for IoT communications, with operational efficiency as a critical driver.
We're noticing a significant shift in industries and quite interesting partnerships between mobile network operators (MNOs) and hyperscalers such as Microsoft, AWS, and Google. MNOs have an edge because they own the infrastructure for edge technologies, but there is mounting evidence that AWS, Microsoft, and Google are moving fast to develop their own edge infrastructure.
With every passing day, we are consuming more data. Ever wondered how much data we create? As of 2020, roughly 2.5 quintillion bytes every single day.
With such a mammoth demand in data, existing spectrum bands are becoming congested, leading to service breakdowns, particularly when many people in the same area are simultaneously accessing online mobile services. This has increased our need for a network spectrum that can handle the surge in demand.
At the end of 2020, 5G mobile networks covered only about 5 to 10% of the market, and many early commercial networks served demonstration purposes at major events, such as the Summer Olympics in Tokyo. Research shows that around 45% of carriers globally will have launched a commercial 5G network by 2025.
By then, an estimated 1.5 billion 5G connections, covering 34% of the global population and carrying 25% of all mobile data, will be well established, influencing and creating new markets.
5G will bring a whole new range of products to the market, ranging from a simple 5G-enabled phone to a fridge that orders food from local grocery shops according to how your day is going in traffic.
Every app and website is moving towards a better user experience to keep us engaged, and most will go through a redesign phase. Technologies like VR and AR will become more common in everyday use. As the technology gets faster, companies such as Instagram that are redesigning their apps will craft new dopamine response cycles to keep users on their platforms for longer.
To be successful in a 5G world, partnering and collaboration with MNO and 5G ecosystem players would be of utmost importance, as operators sit at the center of new ecosystems developed around the ultra-reliable low latency, real-time data at scale and responsiveness that the "edge cloud" delivers.
All hyperscalers are working on establishing partnerships and ecosystems around 5G and edge computing. Microsoft is leading the charge with a couple of acquisitions (e.g., Metaswitch, Affirmed Networks) and broader ecosystem partnerships.
However significant 5G may seem right now, the industry is still very much at the infant stage. A major downside of 5G adoption is that it puts the entire service provider ecosystem at great risk for cyberattacks. With consumers and businesses becoming steadily reliant on digital services, security shouldn’t be an afterthought.
Securing applications and data across the network, endpoints, data centers, branch locations, and cloud all remain critical challenges. As key enablers of the 5G value chain, mobile network operators are in a position to enable the connection and security of the 5G digital economy.
There are a few potential health risks we may encounter during the transition to a 5G world. Some studies suggest that frequencies with power densities below 6 mW/cm2 can contribute to various health issues, the most severe being DNA damage, and that long-term exposure to low-intensity EHF EMR affects non-specific immunity indexes.
Along with the massive potential of a technological revolution, 5G also raises the prospect of increased cybersecurity risk. Attackers with even minimal knowledge of cellular paging and protocols will be able to intercept calls and track locations, leaving our privacy vulnerable.
5G networks use a software-based, distributed digital-routing approach, which increases the number of entry points where someone with ill intentions can intercept data packets.
To capitalize on the emergence of 5G, cloud providers have expanded their hybrid and edge offerings, partnered with telcos, and built or acquired 5G-specific services. AWS Outposts, Google Anthos, Microsoft Azure Stack Hub, and Azure Edge Zones are hybrid cloud appliances and services that naturally lend themselves to 5G MEC use cases. Cloud providers, software vendors, enterprises, and others will use MEC to run applications directly at the edge of a telco network.
Access to virtual machines via phones will become common because of the more extensive computing and machine-to-machine communication provided by 5G. Cloud computing enterprises will offer more features and options to mobile users, and hotspots will become faster, allowing remote workers access to cloud services, even at places where internet connectivity is lacking.
5G technology will bring significant improvements to the cloud computing world because most technology innovations can be more efficient when cloud-dependent. 5G, in turn, improves that integration with its low latency, enabling smoother communications.
As of now, we’ve only witnessed AI working with Chatbots and figuring out minuscule tasks, but with the power of 5G’s Edge-Cloud and low latency rates, we will be introduced to some drastic changes across the businesses.
We will see accelerated adoption of Digital Twins, Self-Monitoring machines, Augmented Audits and inspections, and collaborative robots. This will allow us to make better data-driven decisions and distribute our priorities in a more efficient way. AI & 5G will bring the “intelligent edge” to life.
To leverage the full potential of 5G, one of the biggest paradigm shifts would be the creation of private 5G networks, or Private Wireless Networks (PWN). PWNs deployed and managed by providers like Nokia, Ericsson, Huawei, and larger Telco providers would be quite common in 2021. PWNs will increasingly take root in enterprises involving public safety, gaming, airline, entertainment, retail, home care, hospital settings, field service management, utilities, and similar industries.
5G is an evolution. 5G Networks and devices continue to evolve. The arrival of 5G technologies will transform how consumers live and how businesses interact with customers. You want to start planning for 5G now, but depending on your application, you may deploy 5G devices as early as 2021. The 5th Generation of networking will revolutionize how we live and function in this world, and we are right in the middle of it.
2021 promised to be a year of exciting change. While a crystal ball would come in handy for predicting the future, we can be sure that change will come at an increasing pace, enabling new technologies to emerge. Insight into the innovations on the digital horizon will help your organization plan and stay ahead of the competition. What are your expectations for your organization in the 5G-enabled future?
Tags: Cloud, 5G, Digital Twins
In today's day and age, business enterprises find it difficult to navigate the complex environments that run across data centers, the edge, and multiple clouds. While single cloud still holds relevance, most companies are adopting multi-cloud and hybrid cloud models. However, the terms hybrid cloud and multi-cloud are used inconsistently: a multi-cloud strategy entails using multiple cloud services from different providers based on how well each performs at certain tasks, while a hybrid cloud combines private (on-premises) infrastructure with public cloud services.
With multi-cloud and hybrid cloud infrastructures now a deployed reality, players like Microsoft, Google, and AWS have entered this market, propelling greater cloud innovation. All hyperscalers have built control planes for hybrid and multi-cloud deployment models that oversee the lifecycle of managed services such as the Internet of Things (IoT), functions, databases, virtual machines, and observability.
I believe these control planes deliver the promise of robust hybrid/multi-cloud technology in this ever-changing multi-cloud services landscape. Currently, Microsoft Azure Arc and Google Anthos are the most popular control planes in this domain, and Microsoft Azure Arc stands out because of its unique design architecture.
In this article, I will deep dive and dissect the efficacy of Microsoft Azure Arc.
Azure Arc is a software solution that enables you to project your on-premises and other cloud resources, such as virtual or physical servers and Kubernetes clusters, into Azure Resource Manager.
Essentially, Azure Arc is an extension of Azure Resource Management (ARM) that gives support to resources running outside of Azure. It uses ARM as a framework by extending its management capabilities and simplifying the use for customers across different hybrid and multi-cloud environments. Azure Arc is about extending the Azure control plane to manage resources beyond Azure, like VMs and Kubernetes clusters wherever they are, whether they’re Windows, Linux, or any Cloud Native Computing Foundation-Certified Kubernetes distro.
Organizations can manage resources even if they are not always connected to the internet. Thus, non-Azure deployments can be managed alongside Azure deployments using the same user interfaces and services, such as tags and policies.
Azure Arc is a unique approach undertaken by Microsoft to accelerate innovation across hybrid and multi-cloud environments.
a.) Arc enables management and governance of resources that can live virtually anywhere (on-premises, in Azure, Azure Stack, or in a third-party cloud or at the edge). These resources can be servers, virtual machines, bare-metal servers, Kubernetes clusters, or even SQL databases. With Arc, you can use familiar Azure services and management capabilities including Create, Read, Update and Delete (CRUD) policies and role-based management.
b.) Arc provides a single pane of glass: using the same scripting and tools, you can see these resources alongside everything else in Azure, and you can discover, monitor, and back them up no matter where they live (see the short sketch after this list).
c.) Arc enables customers to easily modernize on-premises and multi-cloud operations through a broad set of Azure management and governance services, and it supports asset organization and inventory.
d.) Arc can enforce organizational standards and assess compliance at scale for all your resources, anywhere, based on subscription, resource group, and tags.
e.) Arc also provides other cloud benefits such as fast deployment and automation at scale. For example, using Kubernetes-based orchestration, you can deploy a database in seconds by utilizing either GUI or CLI tools.
f.) Arc allows organizations to extend a consistent toolset and frameworks for identity, DevOps/DevSecOps, automation, and security across hybrid/multi-cloud infrastructures, and ultimately to innovate everywhere.
g.) Arc supports the use of GitOps-based configuration as code management, such as GitHub, to deploy applications and configuration across one or more clusters directly from source control.
h.) Arc helps organizations to make the right decisions about cloud migrations. Using Azure Arc, you can gather the workload data (discovery) and uncover insights to decide where your workloads should run — whether on-premises, in Azure, or in a third-party cloud or at the edge. This insight-driven approach can save you significant time, effort and migration cost too.
i.) Arc provides a unified experience for viewing your Azure Arc-enabled resources, whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or the Azure REST API.
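To show what that single pane of glass looks like programmatically, here is a minimal sketch that lists Arc-enabled servers through Azure Resource Manager the same way you would list any native Azure resource; the subscription ID is a placeholder, and Microsoft.HybridCompute/machines is the resource type Arc-enabled servers are projected as.

```python
# Minimal sketch: inventorying Azure Arc-enabled servers alongside other ARM resources.
from azure.identity import DefaultAzureCredential          # pip install azure-identity
from azure.mgmt.resource import ResourceManagementClient   # pip install azure-mgmt-resource

subscription_id = "<subscription-id>"  # placeholder
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# Arc-enabled servers are projected into ARM as Microsoft.HybridCompute/machines,
# so the same listing, tagging, and policy machinery applies to them.
for machine in client.resources.list(
    filter="resourceType eq 'Microsoft.HybridCompute/machines'"
):
    print(machine.name, machine.location, machine.tags)
```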
Azure Arc allows enterprises to manage the following resource types outside the realm of Azure:
Azure Arc-enabled servers became generally available in September 2020.
Servers, whether physical or virtual machines running Windows or Linux, are supported by Azure Arc; Azure Arc-enabled servers are in that sense infrastructure-agnostic. When connected, these machines are assigned a resource ID within a resource group and are treated as just another resource in Azure. Azure Arc-enabled servers enable various configuration management and monitoring tasks, making resource management easier for hybrid machines.
Additionally, service providers handling a customer's or enterprise's in-house infrastructure can treat hybrid machines much like native virtual machines using Azure Lighthouse.
Managing Kubernetes applications in Azure Arc entails attaching and configuring Kubernetes clusters inside or outside of Azure. This can include bare-metal clusters running on-premises as well as managed clusters such as Google Kubernetes Engine (GKE), Amazon EKS, and others.
Azure Arc-enabled Kubernetes lets you connect Kubernetes clusters to Azure and extend Azure's management capabilities, such as Azure Monitor and Azure Policy, to them. By attaching external Kubernetes clusters, users can control them with the same features Azure provides for its own clusters. Keep in mind, however, that unlike with AKS, maintaining the underlying Kubernetes cluster itself is your responsibility.
Azure Arc goes beyond a minimum-viable-feature approach with Kubernetes.
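To give a feel for the onboarding step, here is a minimal sketch that drives the Azure CLI's connectedk8s extension from Python; the cluster and resource group names are hypothetical, and it assumes the Azure CLI plus the connectedk8s extension are installed, you are logged in, and your kubeconfig points at the target cluster.

```python
# Minimal sketch: attaching an existing CNCF-conformant cluster to Azure Arc by invoking
# "az connectedk8s connect". Names below are placeholders.
import subprocess

resource_group = "rg-arc-demo"   # hypothetical resource group (must already exist)
cluster_name = "onprem-k8s-01"   # hypothetical name the cluster will have in Azure

subprocess.run(
    [
        "az", "connectedk8s", "connect",
        "--name", cluster_name,
        "--resource-group", resource_group,
    ],
    check=True,
)
# Once connected, the cluster appears in Azure Resource Manager, and Azure Policy,
# Azure Monitor, and GitOps configurations can be applied to it like any Azure resource.
```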
Azure Arc-enabled data services let you run data services on your preferred infrastructure, on-premises and at the edge. Currently, Azure Arc-enabled data services are available in preview for SQL Managed Instance and PostgreSQL Hyperscale, and these Arc-supported services can run on AWS, Google Cloud Platform (GCP), or even in a private datacenter.
Azure Arc-enabled data services such as Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL Hyperscale receive updates frequently, including servicing patches and the new features available in Azure. Updates are delivered from the Microsoft Container Registry, and you set the deployment cadence in line with your own policies.
This way, on-premises databases can stay up to date while ensuring you maintain control. Because Azure Arc-enabled data services are subscription services, you will no longer face end-of-support situations for your databases.
Azure Arc enabled Data Services also support cloud-like Elastic Scale, which can support burst scenarios that have volatile needs, including scenarios that require ingesting and querying data in real-time, at any scale, with sub-second response time. In addition, you can also scale out database instances using the unique hyper-scale deployment option of Azure Database for PostgreSQL Hyperscale.
This capability gives data workloads an additional boost on capacity optimization, using unique scale-out reads and writes. Many of the services such as self-service provisioning, automated backups/restore, and monitoring can run locally in your infrastructure with or without a direct connection to Azure.
I believe companies will find this an attractive service if they need to use cloud-based data tools outside of Azure itself.
Azure Arc-enabled SQL Server is part of Azure Arc for servers. It extends Azure services to SQL Server instances hosted outside of Azure, in the customer's datacenter, at the edge, or in a multi-cloud environment.
You may be wondering why Microsoft introduced Azure Arc when there is an already existing hybrid cloud offering, Azure Stack. Azure Stack is a hardware solution that enables you to run an Azure environment on-premises, whereas Azure Arc is a software solution that projects your on-premises and multi-cloud resources, such as virtual or physical servers and Kubernetes clusters, into Azure Resource Manager.
For applications that use a mix of on-premises software and Azure services, local deployment of Azure services through Azure Arc can reduce the communication latency to Azure, while providing the same deployment and management model as Azure.
While Azure Stack Hub is still viable for some businesses, Azure Arc is a more holistic strategy for organizations looking to run workloads across both private and public clouds, on-premises and off.
So, how does Azure Arc compare to other hyperscalers who are offering hybrid and multi-cloud strategies?
AWS Outposts is a fairly new solution and is currently aligned more to hybrid cloud deployment models. Google Anthos allows you to build and manage applications on-premises, on Google Cloud, and even on AWS and Microsoft Azure. Anthos does NOT make GCP services available in your own data center or in other clouds: to access GCP services (storage, databases, AI/ML services, etc.), the containers running in your data centers must reach back to Google Cloud.
Google Anthos and Azure Arc have very similar capabilities and approaches, but Anthos is more focused on getting everything deployed to containers and has limited capabilities for managing VMs or servers running in your data center or in third-party clouds. Additionally, Google Anthos can currently be a costly component and, in my analysis, it is quite prescriptive: to run Google Anthos you need GKE (Google Kubernetes Engine), even when deploying to Google Cloud.
That isn't the case with Microsoft's Azure Arc, which goes beyond Kubernetes into areas like centralized discovery, a common toolset for security and configuration, and management of data services. It also offers more choice of Kubernetes environments, letting customers pick their own Kubernetes platform. Azure Arc offers more portability and less lock-in than Anthos.
Azure Arc is offered at no additional cost when managing Azure Arc-enabled servers. Add-on Azure management services (Azure Monitor, etc.) may be charged differently for Azure VMs or Azure Arc enabled servers. Service by service pricing is available on the Azure Arc pricing page. Azure Arc enabled Kubernetes clusters and Azure Arc enabled data services are in preview and are offered at no additional cost at this time.
The current roadmap as stated on the Microsoft website includes adding more resource infrastructures pertaining to servers and Kubernetes clusters. In the future, you can expect:
a.) Self-hostable gateway for API Management — allows management of APIs hosted outside of Azure using the Azure-hosted API Management service.
b.) Other database services, such as Cosmos DB, are likely to be supported by the data services feature.
c.) Furthermore, support for deploying other types of Azure services outside of Azure could be added to Arc in the future.
To encapsulate, public cloud providers are churning out services to earn a spot in your company's on-premises data center. The growing demand for hybrid cloud and multi-cloud platforms and services has pushed Microsoft to launch Azure Arc as part of its cloud strategy.
So, what does this innovation mean for IT infrastructures? With the demand for a single management system across multi-cloud environments soaring, I think it is more than a viable option, simply because, once you register with Azure, Microsoft Azure Arc lets enterprises join the hybrid cloud bandwagon whether they run an old version of Oracle on Linux or a modern stack. I think this strategy is a game-changer because it simplifies complex, distributed systems across environments: on-premises, multi-cloud, and edge. Additionally, Azure Arc is a compelling choice for enterprises that want to balance traditional VM-based workloads with modernized container-based workloads.
Azure Arc can hence distinguish itself as a management tool for legacy and hybrid cloud application infrastructure, propelling greater digital transformation. I feel the simplicity of Azure Arc will be enough to persuade enterprises to adopt it.
Tags: Cloud, COVID19, Digital Twins
Manufacturers around the globe are becoming more agile and adaptable. 2020 has been a year that we will not soon forget.
This disruption has led to high demand for innovation, fast delivery, and better user experience. Filled with unimaginable change caused by the pandemic, the manufacturing industry witnessed a perfect storm: significant disruption to business continuity, operational visibility, remote work, employee safety, and more. However, businesses have responded, adapted, and are recovering.
The manufacturing industry shows higher cloud adoption rates than other industries. As Information Technology (IT) and Operations Technology (OT) automate highly manual, specialized industrial operations, firms have shifted to the cloud to store and operate on that data strategically. Frost & Sullivan estimates that connected industrial devices are growing at a 15.5% CAGR through 2026, as manufacturers invest in Industrial Internet of Things (IIoT) architectures that extend the cloud to the edge.
Manufacturing firms are moving roughly half of their workloads to the cloud (public or hosted private), and that share is expected to grow faster than in other industries over the next five years. Firms that leveraged the public cloud as a catalyst for innovation and as a foundation to boost productivity and enable collaboration among remote and frontline employees achieved more success than those that followed traditional methods.
But, still, there is much to be done. So, I would say “Microsoft Cloud for Manufacturing” will be a game-changer for Manufacturing companies.
Before we investigate the unique value proposition of Microsoft Cloud for Manufacturing, let’s look at the top five manufacturing trends of 2021.
Manufacturers need to protect the core operations of their business while building resilient and scalable operations. In addition, they should create flexibility to scale and drive new business models for resilient and diverse supply chains and supplier ecosystems.
Manufacturers will focus more on developing new products, sustainable, environment-friendly practices, new partnerships, and good corporate citizenship.
Going forward, manufacturers will turn to digital capabilities to build resilient businesses and scalable operations. This will include operational resilience (supply chain, security, product development, risk management, etc.) and critical financial management (cost management, cash flow, spend analytics, supplier performance analysis, etc.).
All manufacturing industries will focus more on increasing agility in operations. Manufacturers can respond to the disruptions caused by the pandemic by investing in digital initiatives to optimize workplaces and factories. Employees will be empowered with the latest technologies, such as AI, mixed reality, and automation, which will transform how they interact with customers and shape the company’s strategic direction.
To fulfill the changing needs of customers and partners, manufacturers should build the capability to support new channels. As a result, there will be an enhanced focus on customer and partner communications, security, sustainability, safety with good corporate citizenship, and customer management.
The 2020 Frost & Sullivan Global Cloud User Survey reveals both progress and challenges for manufacturers. Among the interesting findings of this year’s survey:
As digital transformation picks up pace in the manufacturing industry, the requirement for flexible infrastructure to support applications and data is increasing. Manufacturers require modern data management tools and smooth cloud connections to pool, process, and protect the critical data assets that drive their intelligent operations. That is why manufacturing firms are shifting to the cloud to store and process the data they collect.
Manufacturers use an average of two public cloud providers today, compared with just over three across all industries. This indicates that manufacturers prefer to develop deep relationships with the providers that best serve their needs. On average, manufacturing firms have placed just under half of their workloads in the cloud (public or hosted private). That number is expected to grow faster than other industries over the next five years.
According to the survey, 45% of manufacturing firms use Microsoft Azure to run cloud applications and data. 94% say they are satisfied or delighted with their Azure services.
I am sure you are wondering: if Microsoft Azure is already a leading cloud platform for manufacturing customers, then what’s the buzz about “Microsoft Cloud for Manufacturing,” and what makes it unique?
Microsoft Industry Clouds provide an on-ramp to the broader portfolio of Microsoft cloud services because they are designed to let customers start in the areas where the demand for technology or business transformation is highest. Industry Cloud offerings deliver an integrated user experience by supporting the full buyer journey for the industry.
Microsoft Cloud for Manufacturing (MCFM) is designed to deliver capabilities that support the core processes and requirements of the industry. These end-to-end manufacturing solutions include new capabilities that seamlessly connect people, assets, workflow, and business processes, empowering organizations to be more resilient. In addition, MCFM commits to industry-specific standards and communities, such as the Open Manufacturing Platform, the OPC Foundation, and the Digital Twins Consortium, and to co-innovation with a rich ecosystem of partners.
Microsoft Cloud for Manufacturing (MCFM) aligns cloud services to manufacturing industry-specific requirements. It gives customers a starting point in the cloud that easily integrates into their existing operations. MCFM will help manufacturers connect experiences across their operations, workforce, design and engineering processes, customer engagements, and the end-to-end value chain.
It focuses on five key areas where manufacturers are accelerating investments, covered in the subsections below.
MCFM is helping manufacturers transform their workforce to gain more productivity by:
According to a pre-pandemic PwC study, 91% of industrial companies have invested in digital factories. However, only 6% of all respondents describe their factories as “fully digitized.”
MCFM will help in building more agile factories by the following:
The Dynamics 365 Supply Chain Management app gives new suppliers access to a guided, easy-to-use, role-based tool to qualify a supplier and onboard the supplier’s IT team for API data integration. It supports a consistent workflow and enables manufacturers to maintain both the supplier business relationship and the API data integration.
Microsoft Cloud for Manufacturing accelerates innovation by allowing manufacturers to create and validate more sustainable operations and products through the following:
The pandemic accelerated the need for product-as-a-service, and customer service organizations are now offering dynamic service using AI. As a result, there is a strong need for a fully connected system that provides a single view of customers and their devices. MCFM aims to improve consumer satisfaction, engagement, and business value by:
The Microsoft Partner Network invests in its partners and offers the resources, programs, and tools to help them train their teams, build innovative solutions, differentiate in the marketplace, and connect with customers. With access to a broad range of products and services, MCFM partners are empowered to build and deliver solutions that can address any customer scenario. In addition, Microsoft enables digital transformation through the intelligent cloud to empower its partner ecosystem to achieve more.
System integrators (SIs) need to support the traditional business model while, at the same time, being ready to help customers with new capabilities. The market is changing quickly, and SIs that are not prepared for it will be in serious difficulty. Therefore, it has become imperative for SIs to develop the skills required to be credible, reliable partners for the industries they serve.
The pandemic has forced organizations to become more agile and adapt quickly to changing market dynamics. Gartner’s research found that over two-thirds of corporate directors accelerated their digital business initiatives during 2020 and planned to increase their spend on IT and technology by an average of 6.9%. Operational flexibility has been – and will continue to be – a significant differentiator. According to the Institute for Supply Management, 97% of US businesses have been or will be impacted by supply chain disruptions due to the pandemic.
Suggested changes in the SI business model and solution design approach, moving away from long-drawn custom development to:
The Covid-19 pandemic has taught us that businesses must accelerate their digital journey to thrive in the future. Manufacturers that have already begun their digital transformation journey will soon see dramatic gains in productivity, business value, employee engagement, and customer experience, and, as a result, a more resilient and sustainable future. To remain competitive, manufacturing firms must treat the cloud not as an end but as a foundation that enables them to adopt any technology quickly and effectively, today and in the future.
Microsoft Cloud for Manufacturing will help accelerate the journey toward a resilient and sustainable future. It will allow manufacturers to transform their workforce, build agile factories, and create resilient supply chains without sacrificing innovation or essential resources. “Microsoft Cloud for Manufacturing” will be a game-changer for manufacturing companies.
Tags: Cloud, COVID19, Digital Twins
Despite the efficacy and benefits of cloud-native development, the mainframe remains a core and valuable enterprise technology for large enterprise customers (especially in finance, insurance, manufacturing, retail, and the public sector), perhaps because the mainframe offers resiliency, reliability, and trusted security. But that isn’t enough to survive in today’s disruptive business environment.
Faster time to market and better employee and customer experiences have become business imperatives, and mainframe systems are a significant barrier to achieving them. The flexibility and versatility of cloud DevOps are overshadowing the mainframe’s ability to innovate in the current business environment. Cost considerations and changes in the workforce also impact the mainframe’s viability as a long-term technology solution.
Public Cloud platforms like Microsoft Azure offer Mainframe alternatives capable of delivering equivalent functionality and features, thus eliminating the problems and costs incurred from utilizing a legacy Mainframe system.
Read through this article to learn more about the challenges and benefits of mainframe modernization to the cloud and to understand the right modernization approach to kickstart your migration journey.
Mainframe modernization is the process of either improving or migrating legacy mainframe workloads so that IT operations run more efficiently and IT spending is reduced.
In the realm of improving, we can define Mainframe Modernization as the process of enhancing legacy infrastructure by incorporating modern interfaces, code modernization, and performance modernization. In terms of migration, it is the process of shifting the enterprise’s code and functionality to a newer platform technology like cloud systems. The strategy employed to modernize Mainframe structures relies on factors like business/customer objectives, IT budgets, and costs of running new technology vs. costs incurred from not modernizing.
Mainframe disadvantages result in high costs that are increasingly difficult to justify. Common challenges that businesses face with legacy mainframe systems are:
Due to the challenges mentioned above, most customers are stuck with legacy mainframes, with technical debt increasing every year.
While mainframe modernization delivers significant benefits, it also has far-reaching implications for your technology, workforce, customer experience, and other related activities.
Cloud can offer economies of scale and new functions that are not available through mainframe computing. The benefits of cloud technologies and the law of diminishing returns in the Mainframe are calling for an increased demand for migration strategies. By taking a thoughtful, phased approach to modernizing your mainframe environment, you can overcome common obstacles and enjoy some of the advantages of cloud computing without putting core functionality at risk.
The prerequisites in strategizing for Mainframe Modernization to Cloud or a co-existence journey should be based on a holistic and strategic view. According to a study published by Accenture, enterprises’ primary motivation to move to the cloud is the cost-efficiency benefit. Hence, to truly reap the benefits of cost-efficiency, businesses must consider the following steps laid down by the study to assess their mainframe-to-cloud cost business case:
It’s clear that organizations should at least take steps to modernize their mainframe environment to meet current market conditions and customer expectations. The question is, how to do it and what might be getting in the way? This approach should consist of a planned strategy wherein you analyze all your applications and chalk out a systemic application strategy.
This journey should be navigated using the 7R framework to ease your modernization process:
During the modernization journey, pay additional attention to the following aspects:
1. Begin with use cases that increase business value through improved agility and flexibility. Your efforts should gain credibility based on smaller, successful initiatives.
2. The perception that near real-time business continuity and DR are not needed.
3. There is zero room for error: any outage during migration could have devastating consequences for mission-critical operations.
4. The perception that the mainframe is secure.
5. Resistance to change: in a recent survey, 56% of respondents reported that their organizations resisted modernization of legacy systems. This stems from being very comfortable with existing systems; and because certain mission-critical applications are much more difficult to modernize, organizations choose to leave them on legacy systems owing to their interdependencies and complexities.
6. The risk-averse nature of business stakeholders.
7. The proven functionality and “It works” effect of the mainframe: businesses that use mainframes sit in a comfort zone, and the attitude of “they’ve always been there, and they’ve always worked” can make them complacent.
8. Securing an executive sponsor beyond IT, and the availability of funds.
The process for migration/modernization should entail these steps:
1. Avoid a big-bang approach.
2. Perform a detailed analysis and assessment.
3. Map the migration/modernization approach with the right partner: one with deep expertise in mainframe systems and the target cloud platform, and healthy relationships with mainframe modernization ecosystem partners like Raincode, ModernSystems, LzLabs, MicroFocus, BluAge, TMaxSoft, etc.
4. Identify the right use cases, ones that improve agility and flexibility, to gain critical momentum and credibility.
To summarize, there is no one-size-fits-all strategy when it comes to mainframe modernization. Each business case is unique and demands a well-thought-out, thorough approach to modernizing legacy mainframe systems.
While many large, well-tenured enterprises still run mainframes, modernizing remains a business-critical strategic imperative in today’s technology environment. It helps them keep abreast of ever-changing customer expectations and attain a more agile and straightforward architecture.
Public cloud, as noted, is the way to modernize your legacy mainframe environment: cloud computing can offer cost savings of up to 70%, along with the agility and resilience to accelerate digital transformation. Yes, there is no magical lift-and-shift approach or silver bullet for modernizing your mainframe systems and applications, but it can be done with a well-planned, well-researched cloud migration strategy.
I say the time to modernize is NOW and that the cloud benefits should warrant a re-think for companies that are still on the fence.
Tags: Cloud, COVID19, Digital Twins
Digital Transformation reforms the way an enterprise functions. An Everest Group research found that 73% of companies failed to see any addition to their business value from their digital transformation efforts. In this blog, I will reveal 12 secrets to a perfectly executed digital transformation journey.
Success starts when your organization can answer questions like — What is the desired outcome of your transformation? Are you looking for more sales, revenue, cost-saving, or selling to new/existing customers? Where is your transformation headed in the future?
When dealing with these questions, you need to:
The CDIO, or Chief Digital and Innovation Officer, is the central leader and integrator in the digital transformation process. The CDIO acts as the cardinal point for decision-making in hard, complicated situations that involve aligning cross-department efforts, resolving conflicts, and orchestrating the rollout of digital initiatives and capabilities. Why the CDIO should hold the reins on this project:
Digital transformation is more about strategy and mindset than about technology. It helps organizations evolve their operations and contribute towards the evolution of the business itself. Digital transformation is a long-drawn journey — the goals and target metrics will keep shifting and moving constantly. To ensure inclusion:
When developing an organization’s vision, I suggest being customer-first, with a focus on what the customer journey will look like. This paves the way for leveraging technology to create more relevant ways to engage with customers and to deliver an exceptional customer experience at every touchpoint along the journey. To create an effective customer relationship:
What enables a digital transformation? My first and foremost guess would be a mindset shift. As enterprises look for effective, manageable entry points into digital transformation, they need to unleash their Inner Futurist, think big, and Shift to Product Mindset and Everything-as-a-Service (XaaS) model. Here's how to do it:
Gartner predicted that 60% of digital businesses would suffer major service failures by 2020 due to the inability of security teams to manage digital risk. The world has shifted to a virtual presence and companies are faced with compliance and regulatory challenges.
Customers are much more tech-savvy today and prefer interacting with enterprises that are ethical, compliant, and prioritize security. For enterprises to be successful in this journey, security CANNOT be an afterthought: it has to be a fundamental element of the design process. Take my word, this will help you reduce unnecessary costs and minimize the need to re-engineer solutions late in the game.
“Data is the new currency of digital business.”
The leadership in an organization needs to invest resources in generating data that can be shared across departments and managed to create value in the digital era. Here's how mining and managing data will aid in your operations:
It can be quite tempting to fall into the vendor trap, especially with different moving parts, tools and techniques, technology platforms, and stakeholders pulling you in different directions. This is how you can own your destiny:
While the failure of certain pilots in the digital transformation journey doesn’t spell the end of the process, it costs the enterprise a lot: money, wasted resources, wasted workforce effort, and delays in turnaround time. The remedy is to closely observe the results the digital transformation processes yielded and analyze them.
The key to solving these issues lies in the tiny details of the project. Analyze the mistakes while keeping agility in mind — Fail Fast, Learn Fast, Deliver Fast.
Tags: Cloud, Digital Transformation, Digital Twins
In current COVID times, the mantra for success includes a healthy mix of innovation with thoughtfulness and corporate social responsibility. As the efforts toward digital transformation are accelerating, so are the pressures to operate as responsible businesses. More and more CXOs are working on striking the right balance between accelerating digital transformation and their sustainability strategy, in addition to adopting a more digitized stand with a “cloud-first” approach.
Companies have historically driven financial, security, and agility benefits through the cloud, but sustainability is becoming imperative. According to the United Nations Global Compact-Accenture Strategy CEO Study on Sustainability, more than 99% of CEOs from large companies now agree that “sustainability issues are important to the future success of their businesses.” Two-thirds of the CEOs view the fourth industrial revolution (4IR) technologies as a critical factor for accelerating socio-economic impact. 59% of CEOs say that they are deploying low-carbon and renewable energy across their operations today.
By embracing the power of a sustainable cloud, CXOs can alleviate the pressures and discover new sources of innovation and growth. Over the next five years, every enterprise will find itself having to respond to pressures around improved environmental, social, and governance (ESG) efforts. This pressure will come from diverse stakeholders, notably investors, regulators, and supply chain partners.
Moreover, as customers and consumers increasingly expect brands to act, organizations must now demonstrate that they are purposeful about sustainability, hold strong ethical standards, and operate responsibly in everything they do.
According to the UNGC-Accenture study, 44% of CEOs see a net-zero future for their company in the next 10 years. While “sustainability” is the favorite keyword of the season, leadership is moving beyond a “nice-to-think-about” approach and beyond buying carbon offset credits, and is investing in technology infrastructure that drives innovation as well as thoughtfulness.
A study conducted by Microsoft, Accenture, and WSP Environment & Energy shows that organizations can achieve an energy and carbon (CO2) emissions reduction of 30 to 90 percent by switching to cloud computing. Small businesses benefit the most by moving to the cloud, but even large companies can see substantial net energy and carbon savings.
Is it possible that migrating to cloud computing might help your business achieve its sustainability goals and positively affect its bottom line?
Yes!
Cloud migration can deliver reduced costs and carbon emissions if it is approached correctly from a sustainability perspective.
While the public cloud can help with an organization’s Sustainability goals, one needs to have a focused approach to cloud migration. This can help reduce global carbon (CO2) emissions, drive greater circularity and result in more sustainable products and services.
While defining your strategy, take advantage of the current heightened focus on environmental sustainability to push disruptive approaches. Use the current push of “new normal” to accelerate your Cloud adoption and revisit your:
The second step in the journey begins with selecting the right carbon-thoughtful provider. Cloud providers set different corporate commitments towards sustainability, which in turn determine how they plan, build, power, operate, and retire their data center. Carbon emissions can differ widely across providers even though all providers have focused on driving down energy consumption to standard benchmarks.
It is important to choose the right cloud partner, one whose corporate commitments toward sustainability are compatible with your enterprise’s. All major hyperscalers (Microsoft, AWS, and Google) have published their sustainability goals and CO2 emissions facts.
In most cases, cloud providers also have greater renewable energy mixes than cloud users and minimize data center carbon footprints through renewable energy. For a typical organization, Public Cloud migrations can lead to an impressive 60%+ energy reduction and 80%+ carbon reduction.
While selecting your Cloud partner, look for openness, transparency, and level of support provided for sustainability goals. Simple things can make a difference:
To achieve your sustainability goals, you need to plan your sustainable cloud migration carefully. For an enterprise that is new to this journey of digital transformation, it is important to get a consultant or an expert on board.
Some key areas that you might want to focus on are:
You need to ensure that your solution accounts for the following factors:
The cloud operating model is your organization’s blueprint for delivering on its cloud strategy. I would suggest redefining your cloud operating model and adding sustainability as a principle in it. The three core elements of your cloud operating model (people, processes, and technology) should co-exist with green initiatives.
Most hyperscalers have developed a Well-Architected Framework (WAF) that helps your enterprise stay abreast of architecture best practices for designing and operating efficient cloud systems. A typical WAF consists of five major pillars: operational excellence, security, reliability, performance efficiency, and cost optimization.
I would suggest adding the following two additional pillars to the WAF framework to include the “green agenda” and achieve our sustainability goals:
This brings us back to opting for Multi-tenancy in Public cloud and selective server utilization.
When your enterprise chooses the shared public cloud, you save on heavy infrastructure and maintenance costs, and you cut down on energy consumption and carbon emissions, because there is no need to run a separate data center on non-renewable resources; you can focus on your operations instead.
Additionally, you can save more energy by shifting to a selective server utilization approach, where busy servers carry the heavy workload and the less occupied sections of the server estate are allowed to rest.
In addition to selecting the right Cloud provider, you also need to select your Cloud Managed Services and migration partner carefully. While considering cloud-managed services, infrastructure, and app outsourcing providers, look for the right set of solutions in areas including levels of automation, sustainability by design in the toolset, office space, and workforce management.
What I describe as the green lining of these MSPs is when their “green agenda” is in line with your sustainability goals — this strikes the right chord of action-oriented sustainability policies.
Some of the things to watch out for when choosing your Cloud MSP include:
It’s time for enterprises to ensure sustainability goals are an integral part of corporate strategy and its purpose. Cloud is critical to unlocking greater financial, social, and environmental benefits through cloud-based circular operations and sustainable products and services.
By combining Cloud with 4th industrial revolution technologies, companies can drive better customer outcomes. Careful association of sustainability perspective to cloud computing and accelerated Cloud adoption can help an organization reduce energy use and the carbon footprint associated with running business applications.
It is important to choose the correct cloud service providers with the right level of automation and an action-oriented Sustainability policy that is compatible with your corporate responsibilities involving Sustainability. Additionally, organizations should focus on a circular economy where longevity and recyclability are ensured to make the most out of their resources.
Tags: Cloud, COVID19, Digital Twins
Microsoft has released a new innovation called .NET 5, which provides a cross-platform development experience and overcomes the previous fragmentation in the .NET world. This comes as part of Microsoft’s deliberate strategy to unify and simplify the .NET platform, a result of listening to customers’ needs.
.NET 5 is the natural development and evolution of .NET Core 3.1 and the .NET Framework. This is a great transition, as the separate .NET Framework and .NET Standard targets (which have posed plenty of difficulties for .NET developers in the past) will gradually fade from the picture. .NET 5 is a step toward a single platform for developing dynamic applications across all devices, from mobile and desktop apps (Xamarin/WPF) to front-end web development (Blazor), including REST (ASP.NET), gaming (Unity), gRPC and web sockets (SignalR), AI apps (ML.NET, .NET for Apache Spark), and quantum programming (Q#).
You can download .NET 5.0 for Windows, macOS, and Linux, on x86, x64, Arm32, and Arm64. You will still be able to use the .NET Framework on older operating systems, but with Microsoft focusing its investment on .NET 5 and later, it will receive only limited updates going forward.
So, what does this sharp turn in .NET’s evolution mean for .NET developers and customers? That’s what we will explore in this article.
.NET 5.0 is the first release in Microsoft’s .NET unification journey. It is a “current” release, which means it will be supported for three months after .NET 6.0 ships; Microsoft expects to support .NET 5.0 through roughly the middle of February 2022. .NET 6.0 will be an LTS release and will be supported for three years, just like .NET Core 3.1.
One of the key features of .NET 5 is its ability to target multiple platforms: iOS, macOS, Windows, watchOS, Android, tvOS, and more. This release builds on the improvements in .NET Core 3.1 and prepares the ground for better performance.
C# 9 and F# 5 are part of the .NET 5.0 release and are included in the .NET 5.0 SDK. Visual Basic is also included in the 5.0 SDK; it does not include language changes but has improvements to support the Visual Basic Application Framework on .NET Core. There have been several upgrades and added features compared with older versions of .NET. Let me take you through how these upgrades overcome a few challenges faced on the previous platforms. For a complete list of new features, refer to What’s new in .NET 5 | Microsoft Docs.
This is a significant improvement over .NET Core 3.1, which supported Linux ARM64 but lagged in performance and offered no native support for Windows ARM64. On .NET Core 3.1, one could optimize methods using x64/x86 hardware intrinsics, but systems that couldn’t use those intrinsics performed below par and delivered slower performance.
Hence, this update is a relief for .NET developers: ARM64 hardware intrinsics now yield better performance across the ARM architecture. For example, with native support for Windows ARM64, Windows Forms and WPF applications can run on devices like the Surface Pro X. A minimal sketch of the intrinsics pattern follows.
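As a rough illustration (my own sketch, not taken from Microsoft’s release notes), this is how the new ARM intrinsics surface to application code in .NET 5: the System.Runtime.Intrinsics.Arm.AdvSimd class exposes NEON instructions, guarded by an IsSupported check so the same code still runs on x64 with a scalar fallback.

```csharp
// Illustrative sketch: using ARM hardware intrinsics (new in .NET 5).
// On non-ARM hardware the IsSupported check fails and the scalar fallback runs.
using System;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.Arm;

class SimdDemo
{
    static void Main()
    {
        float[] a = { 1f, 2f, 3f, 4f };
        float[] b = { 10f, 20f, 30f, 40f };
        float[] sum = new float[4];

        if (AdvSimd.IsSupported)
        {
            // Add four floats at once using a 128-bit NEON register.
            Vector128<float> va = Vector128.Create(a[0], a[1], a[2], a[3]);
            Vector128<float> vb = Vector128.Create(b[0], b[1], b[2], b[3]);
            Vector128<float> vs = AdvSimd.Add(va, vb);
            for (int i = 0; i < 4; i++) sum[i] = vs.GetElement(i);
        }
        else
        {
            // Scalar fallback for x64/x86 or older runtimes.
            for (int i = 0; i < 4; i++) sum[i] = a[i] + b[i];
        }

        Console.WriteLine(string.Join(", ", sum)); // 11, 22, 33, 44
    }
}
```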
Microsoft invested in delivering improvements in these areas:
Single-file applications are a great addition to .NET 5 and, I believe, an attractive upgrade for any application developer. A single-file application is deployed as one file that includes the app and all of its dependencies (and, for self-contained apps, the .NET runtime). Single-file publishing was available with .NET Core 3.x as well, but it has been optimized and enhanced in .NET 5: previously, when a user ran the single file, the host would extract all the files temporarily to a directory, whereas with this upgrade extraction is no longer required.
Single File deployment is available for both the framework-dependent deployment model and self-contained applications.
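One practical detail worth knowing when you adopt single-file publishing (an illustration of my own, so verify it against the official single-file guidance): inside a .NET 5 single-file bundle, Assembly.Location returns an empty string, so file lookups relative to the executable should use AppContext.BaseDirectory instead.

```csharp
// Minimal sketch: resolving files next to the executable in a way that also
// works when the app is published as a single file (where Assembly.Location
// returns an empty string in .NET 5 single-file bundles).
using System;
using System.IO;
using System.Reflection;

class SingleFilePaths
{
    static void Main()
    {
        string assemblyLocation = Assembly.GetExecutingAssembly().Location;
        Console.WriteLine($"Assembly.Location: '{assemblyLocation}'"); // empty inside a single-file bundle

        // AppContext.BaseDirectory stays valid in both normal and single-file deployments.
        string configPath = Path.Combine(AppContext.BaseDirectory, "appsettings.json");
        Console.WriteLine($"Looking for config at: {configPath}");
    }
}
```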
One of the big differences between the .NET Core and .NET Framework is that .NET Core supports self-contained deployment — everything needed to run the application is bundled together. It doesn’t depend on having the framework separately installed. From an application developer perspective, this means that you know exactly which version of the runtime is being used, and the installation/setup is easier. The downside is the size — it pulls along a complete copy of the runtime & framework.
To resolve the size problem, Microsoft earlier supported an option to trim unused assemblies as part of publishing self-contained applications. In .NET 5, Microsoft has taken this further by cracking open the assemblies and removing the types and members that are not used by the application. This further reduces the size.
.NET 5 application developers have access to the new capabilities of C# 9. Some of the key features include enhanced pattern matching, records, and top-level statements. Records, for instance, are reference types with value-based equality, and the new with expression produces a modified copy of a record without mutating the original. Records are perfect for people working with data: the flexibility of a class with the pragmatism of a struct. The compiler also generates the equality members (IEquatable) for you, saving much time and energy!
With top-level statements, the explicit Main method can be omitted, making C# quicker to learn and easier to adopt. The sketch below shows these features together.
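Here is a small, self-contained sketch of these C# 9 features as they appear in a .NET 5 console app; the Person record and its property names are, of course, just illustrative.

```csharp
// A .NET 5 top-level program: no explicit Main method, a positional record,
// and a `with` expression producing a modified copy (non-destructive mutation).
using System;

var original = new Person("Ada", "Lovelace");
var renamed  = original with { LastName = "Byron" };   // copy with one property changed

Console.WriteLine(original);                // Person { FirstName = Ada, LastName = Lovelace }
Console.WriteLine(renamed);                 // Person { FirstName = Ada, LastName = Byron }
Console.WriteLine(original == renamed);     // False: value-based equality, generated by the compiler

// Positional record: the compiler generates the constructor, properties,
// Equals/GetHashCode, ToString, and the copy constructor used by `with`.
public record Person(string FirstName, string LastName);
```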
Some of the other enhancements include:
Overall, this has made the programming language efficient to implement and learn, saving .NET developers certain tedious tasks.
a) The Garbage Collector (GC) has been made significantly more efficient in .NET 5 and scales better on machines with higher core counts. Key GC improvements include:
Lastly, there have been improvements in the .NET Compiler platform as well, with the introduction of C# Source Generators and Performance Focused Analyzers. For more details on .NET 5 performance improvements refer to Performance Improvements in .NET 5 | .NET Blog (microsoft.com)
For container workloads, multiple enhancements improve overall performance in cloud container environments. Furthermore, Microsoft pushed for a reduction in container image size and extended the option to select from a larger set of container images.
This is good news: the ASP.NET Docker image size is reduced significantly, by at least 40%, as a result of Microsoft removing the buildpack dependency (buildpack-deps). Lastly, to simplify working with the platform in containers, the update also supports new container APIs, ensuring faster and smoother operations overall.
In terms of security, .NET 5 changes how OpenSSL is used, enabling support for TLS 1.3 on Linux; on Windows, TLS 1.3 is used where the operating system supports it. A hedged configuration sketch follows.
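For illustration (again, a sketch of my own rather than guidance from the .NET team), this is how you could pin outbound HTTPS calls to TLS 1.3 in .NET 5. In most cases it is better to leave EnabledSslProtocols at its default and let the OS negotiate the strongest protocol it supports; the URL below is just a placeholder.

```csharp
// Illustrative sketch: explicitly requiring TLS 1.3 for outbound HTTPS calls in .NET 5.
// This only succeeds where the underlying OS / OpenSSL build supports TLS 1.3.
using System;
using System.Net.Http;
using System.Net.Security;
using System.Security.Authentication;
using System.Threading.Tasks;

class Tls13Demo
{
    static async Task Main()
    {
        var handler = new SocketsHttpHandler
        {
            SslOptions = new SslClientAuthenticationOptions
            {
                EnabledSslProtocols = SslProtocols.Tls13 // reject anything older than TLS 1.3
            }
        };

        using var client = new HttpClient(handler);
        // Placeholder endpoint: any HTTPS service that negotiates TLS 1.3 will do.
        using var response = await client.GetAsync("https://example.com/");
        Console.WriteLine($"Status: {(int)response.StatusCode}");
    }
}
```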
Whether, and when, to upgrade is subjective and based on individual cases. You need to analyze how many of the features and APIs your applications depend on would hinder, or even block, migration to .NET 5. Another important factor is which .NET platform you are on today, because migrating from .NET Core to .NET 5 is considerably easier than migrating from the .NET Framework to .NET 5.
However, the good thing about the upgrade is that it is free of cost (from the runtime/framework perspective). Essentially, the only thing it will cost you is time and effort, which again depends on the versions of .NET your applications use. Additionally, the upgrade is available on all supported versions of Windows Server. The one thing to keep an eye on is the life cycle of your existing servers, because migrating from an old server only makes things more jarring and tedious.
In my opinion, the time to start is NOW. You don’t need to wait for .NET 6 to begin migrations; the sooner you start, the more time you have to deal with any issues that emerge. Stalling the upgrade/migration process will only complicate things and increase your technical debt, because the move to .NET 5 is inevitable.
You should see .NET 5 as the first step in the .NET unification journey, one where you should start to take all that legacy code and decide what’s necessary to bring forward by porting and updating, and what needs to be completely replaced.
Let me chalk out the process for an easier migration:
.NET 5 is the biggest update since the introduction of .NET Core in 2016. It is a considerable innovation leap by Microsoft, and an inevitable one. More features will be added over time across the runtime, framework libraries, and tooling, along with an expanding product scope.
.NET 5 is not a long-term support (LTS) release. Microsoft intends to ship a new release every November, enhancing various layers of the product: language, runtime, compilers, and tools.
To know more about each feature’s implementation timeline, you can visit here.
What is especially intriguing to me is the new .NET 6 release and what it holds. One of the key features that I am looking forward to is .NET MAUI. It is a new framework that proposes a universal model for building UIs on mobile and desktop platforms.
Microsoft announced that it will extend .NET support to Macs with Apple Silicon. .NET 5, together with this year’s .NET 6 release in November, will offer a repertoire of improvements for .NET and application developers across cross-platform ecosystems. This unification will give .NET expansive capabilities while still maintaining simplicity.
I see a bright future on the horizon wherein one can use .NET across different operating systems, devices, or even chip architectures.
Rest assured; it is a game-changer!
In an upcoming article, I will go deeper on .NET 5 Upgrade benefits, migration challenges and recommended Upgrade process.
Tags: Cloud, Digital Transformation, Digital Twins
As we move past 2020, most of us are still working remotely and preparing for a hybrid work environment in which we will split time between home and the office. How can an organization determine productivity? What can we expect from remote working in 2021? How can leaders motivate employees who are low on motivation? Read on as I dissect the remote working environment of 2021.
In 2020, the digital revolution accelerated due to the pandemic outbreak, when most of the global workforce was forced to work remotely from the safe boundaries of their homes. We can witness the change in the work environment in everything from the introduction of collaboration software, which helps employees connect virtually over meetings and streamline their processes, to cloud-based connectivity and a data-centric approach to strategic decision-making powered by the synergy between artificial and human intelligence. These have helped organizations reimagine the way we work.
The pandemic introduced most of the organizations to the possibility of switching to remote work, partially or in full capacity. Take a look at this infographic I did earlier to understand the change in working habits enabled by the pandemic.
The extended crisis has left employees wondering about, and preparing for, the possibility of a hybrid work schedule: a combination of on-site and remote working. Technological revolution and rapid digital transformation have enabled organizations to change the social capital management landscape. In a study by Humanyze, it was observed that working remotely extended people’s working hours by an average of 10-20%. These employees also reported an increase in work-related and overall mental stress, alongside a growing focus on well-being and a higher degree of formal and informal interaction with colleagues.
But, how do you determine the relative productivity of an organization compared to its competitors? Let’s take a look.
In their book, Time, Talent, Energy: Overcome Organizational Drag and Unleash Your Team’s Productive Power, Michael Mankins and Eric Garton mention that “the effect of remote working on productivity cannot be generalized and varies across industries and the individual talent.” The book mentions three factors that affect the productivity level of an organization:
Each employee, while working remotely, can be distracted by excessive e-communications, random virtual meetings, and/or administrative procedures and paperwork. This can affect the amount of time the employee spends doing productive work.
While remote working has freed up the time employees once spent commuting, it has also led them to invest additional time in their jobs. A study by Raffaella Sadun, Jeffrey Polzer, and others found that the length of the average workday increased by 48.5 minutes across 16 global cities in the early weeks of the lockdown. Miscommunication among colleagues and inefficient work practices have reduced productive time by 2% to 3% for most organizations.
Any organization’s best talent must be properly deployed, assigned to the right teams, and led by managers who help bring out the best in them. The talent of individuals, and of the team as a whole, affects the productivity of the organization.
Organizations that have perfected the process of acquiring, developing, teaming, and leading scarce, difference-making talent have recorded a 20% productivity advantage over their counterparts. But the pandemic caused a slump in demand for certain products and services, which pushed some firms out of the labor market and forced them to let go of their best talent. In that sense, COVID-19’s effect on talent management has had a negative impact on productivity.
Every job involves a degree of discretionary energy and willingness that each employee chooses to invest. This can dictate the level of productivity and success of the company, its customers, and other stakeholders.
In a study by Bain and Company, it was found that an engaged employee is 45% more productive than a merely satisfied worker. The pandemic and the subsequent work-from-home orders have forced organizations to find ways to keep their employees virtually engaged. For instance, in our firm senior executive leadership began conducting virtual town halls, a weekly video series for important Covid-19 and business updates, along with tips from fellow employees.
As hybrid working arrangements are becoming a matter of safety over convenience, let’s take a look at some trends that have transformed remote work in the present era.
The market for collaboration software, like Zoom and Microsoft Teams, has seen exponential growth in adoption (both in terms of the number of users and # of hours spent online). Initially, these tools helped in facilitating the work continuity but with the fast pace of agile innovation and Artificial Intelligence, these collaboration platforms are becoming more of cohabitation platforms that allow users, who are geographically distributed, to exist in the same space simultaneously.
With the help of these new tools (cohabitation platforms), we can forge deeper connections that make the virtual world more humane by going beyond simply collaborating — running businesses, visiting family, attending weddings, and educating our children through technology.
Most employees favor flexibility when it comes to their work schedules, and more and more professionals are choosing a hybrid work setting: a healthy mix of on-site and remote working. Gen Z employees, however, tend to favor on-site work, as they look to the workplace as a source of socialization, networking, and learning. Leaders must understand the importance of minimizing screen time, allowing parents to be part-time teachers, and enabling a professional life that supports their personal life.
Virtual meetings are the greatest equalizer. Companies are increasingly facing the pressure to be diverse, equal, and inclusive, and technology proves to be the biggest enabler for such situations. Virtual meetings on collaboration software make it difficult to engage in office politics or show-off. Additionally, this software gives organizations the privilege to capture, record, and analyze meeting data, and enables them to evaluate Diversity, Equality, and Inclusion in real-time.
The pandemic, along with forced remote working, has encouraged organizations to rely on talent residing in different corners of the globe. The field of software development saw this shift in social capital management well before the pandemic, and other industries have followed. It was further fueled by record-high unemployment in many areas of the world. The main enabler is technology: it has untethered talent from location. However, this is limited to industries that have the luxury of collaborating over video conference.
SAP, Qualtrics, and Mind Share Partners conducted a global study in 7 countries that found that over 40% of employees in these countries reported a decline in their mental health since the pandemic outbreak. In the same period, workers reported an increase in anxiety, stress, and fear related to the COVID-19 pandemic.
In a study titled "Cybersecurity in the Age of Coronavirus", conducted by Twingate, it was recorded that 40% of professionals have experienced mental exhaustion from virtual meetings. Additionally, 59% of employees felt their office was more cyber secure when compared to their home.
A survey by Doddle cited symptoms of burnout among employees: a full week of video conference meetings left 38% of employees feeling exhausted, while 30% felt stressed. Employees also experienced performance anxiety, with 63% saying they were likely to record and evaluate their virtual meetings to help them become better presenters.
Employees end up spending extra time working from home which drives them towards burnout. This overworking behavior has been mockingly called “Sleeping at work,” as employees start their day by opening their laptop screens and end their day by closing them. The willingness to be accessible after work hours has, in turn, increased their screen time, often related to an increase in stress and anxiety.
While the world is gradually inching towards normalcy, many of us are still working from within the walls of our homes. This stagnation in professional, personal, and social life is bound to flag motivation, performance, and well-being for many. In such times of extended crises, it is up to the leadership to keep their employees’ morale high. Leaders can provide structure, guidance, and regulation, and provide a healthy work environment where individuals can foster internal motivation by implementing the following:
No one knows what the future holds for us or when we will go back to on-site work.
But, in such situations, the onus lies on the leaders of the organization to look after their workforce. They should invest more in untethered social capital management and cohabitation platform usage. Productivity is an important aspect of the working environment, but it should not trump an employee’s well-being. This means that they should encourage the employees to set boundaries when it comes to working timings by educating them that more online/screen time does not relate to increased productivity. This is also the time for leadership to reimagine the social capital management landscape, the opportunity for hybrid work, and the possible innovations in the field of composite AI.
What other measures can managers take to help boost their employees’ productivity in these uncertain times?
Tags: Cloud, COVID19, Digital Twins
The Covid-19 pandemic is, without doubt, the biggest challenge the world has encountered in this century, and arguably the biggest since the Second World War. As 2020 draws to a close with mixed news (positive on the vaccine development front, offset by reports of a new, powerful strain of the novel coronavirus emerging), the world will have to soldier on in a long battle ahead.
Given the ongoing COVID-19 crisis, the need for resilience has never been greater. Resilience is the ability to “sustain and recover quickly from difficult, uncertain scenarios.” However, amid chaos, fear, uncertainty, and often insufficient resources, the call to "be resilient" may feel like an impractical demand, especially if you view resilience as something one either has or does not have.
As uncertainty prevails and more efforts are made by governments and health researchers to find a lasting solution to the pandemic, the onus of adjusting to the new normal rests with enterprises and management. Leaders must consider the resilience of individuals, as well as the resilience of their teams, organization, culture, and system. Better systems resilience can help businesses limit the damage caused by the pandemic and create a future that is immune to such disruptions.
Business continuity hinges on the solutions provided by the technology wings to beat the Covid blues. It also depends on the innovations that chief information officers (CIOs) and IT managers carry out to make processes, systems, and businesses adapt better. In this blog, I will talk about certain frameworks needed for seamless operational continuity.
Data shows that enterprises that were already focusing on contingency plans, and had synchronized their IT capabilities and supply chains in that direction, absorbed the shock better. They were better equipped to deal with new realities such as working remotely, a fall or sudden spike in demand, supply chain disruptions, a lack of intra-organizational synergy, and communication challenges, to name a few.
According to Accenture’s Future Systems research survey of 8,300 companies conducted before the COVID-19 crisis, only a small minority of companies — the top 10 percent — had cracked the code on systems resilience.
Needless to say, the IT managers in these companies adapted much better and so did the business. Some reconfigured traffic to maximize capacity for critical applications when there was a surge in demand while others moved low-priority applications to the cloud to free valuable system and human resources. One large retailer, for example, was able to handle a massive surge in sales by offloading traffic from the core e-commerce site to a cloud-based coupon application, and in another case, a hospital adopted a new virtual assistant to manage the massive increase in incoming calls during the COVID-19 crisis. All these were possible because the IT infrastructure was in place and the managers put in quick thinking to come up with effective solutions in a short period.
For others, who are still not quite there, the three Rs — respond, reset and renew — that I have written about in the past can prove to be very effective.
There are a few areas that are integral to a resilient system and every organization needs to work towards these to ensure foolproof systems resilience. These are:
When the business environment is dynamic to begin with, and disruptions such as Covid-19 further compound the equation, organizations need the right hands to steer the ship. Leadership needs to understand the culture, aspirations, and goals of the organization, and only adaptive leadership can create frameworks that provide the right momentum each time lethargy or panic sets in. That happens because adaptive leadership is well aware of the purpose of the enterprise and can link individual aspirations to organizational goals.
While technology is critical, managing people is the bedrock for seamless functioning. In a connected world, businesses are more integrated in terms of workers, vendors, customers, partners, and suppliers, making business continuity and pandemic plans far more complex to test and carry out. Employees and business partners are the most important cogs in any organizational wheel and to make them familiar with the crisis-induced changes is imperative. This is where proven models in the past that were considered efficient can prove to be a bone of contention between those who rigidly defend these models and others wanting to completely do away with them. As such employees have to be briefed about the changing needs, the human resource wing has to skill and reskill them to reset their performances and align those better with the changing organizational goals.
Similarly, partners have to be told about the need for course correction and how the company has planned for that scenario. They have to be informed about the challenges the management expects as it tries to build a resilient system and the roadmap to overcome those. Forming trusted bonds between the employer and the employees that last long is paramount.
The intensity with which disruption hits a supply chain can vary, but limited exposure does not necessarily mean complete immunity. In an integrated world, even a small problem in the assembly line producing the tiniest component can bring supply chains to a halt. A resilient system anticipates the scenario, makes sure the controllables fall into place, and adapts quickly when a crisis erupts. This can be achieved by investing in disaster-proof physical assets, diversifying the supplier list, designing products with common components, enabling the swift movement of products and services across platforms, and strengthening industry ecosystems. All this may involve investment, but enterprises will save a lot in the long term. Resilience and efficiency should not come at the cost of one another.
Using newer technologies, such as analytics and artificial intelligence, helps improve transparency and efficiency on the one hand and respond to the crisis on the other. Data is a critical tool for identifying vulnerabilities and can help predict scenarios to identify the most effective solution should a crisis hit. Several Fortune 500 companies are already using data insights for inventory and risk management.
Since the pandemic has hastened the digital transformation, the IT infrastructure and the business decision-makers should also work towards achieving what is called the Product mindset. This is essentially achieving better synergy between the verticals and coming up with faster solutions. The guiding principles to achieve these are minimizing time to value, solving for need, and excelling at change.
Service continuity: At a time when companies are looking to bring expertise on board in less time than usual, and when identifying the right resources through virtual onboarding sessions is becoming the norm, companies need to identify and prioritize their requirements to ensure no disruption in service.
Organizations should reorient their IT processes to meet any new challenges that can logjam even the best of workflows.
As IT infrastructure and CIOs are set to play a major role in building flexible systems, it is natural that the vulnerabilities need to be understood and gaps plugged in time. Systems have to be made fast to react in case of an external threat that might come in their way. More robust systems and firewalls need to be put in place to avoid any breaches. The effectiveness of the existing cybersecurity mechanisms should be properly evaluated and wherever possible, alternatives should be considered to minimize risks.
Adjusting to the new normal will continue in 2021. For all we know, the new normal may be significantly different from what we envisage. A lean and agile system, backed by proper allocation of resources and investments, will help build higher resilience in the existing framework. These measures, if implemented in time, will pay larger dividends for a long time to come and make life easier during the next crisis, whatever its form may be.
Tags: Cloud, COVID19, Digital Twins
5G is now a standard for seamless industrial operations. But how will different industries be affected by this cellular standard? What innovations await humanity once telecom operators finish the 5G rollout? Read this piece to test the compatibility of 5G with different industries.
In 2019, it was predicted that, by 2025, there will be 1.2 billion 5G connections covering 34% of the global population. Though the pandemic tried to douse the raging fire around the 5G hype, it still happens to be the most talked-about tech phenomenon in 2020.
With 5G, we'll see an entirely new range of applications enabled, helping transform the art of the possible.
My take — Enterprises will witness 5G opening new doors to services and product innovation creating new customer segments and revenue streams at scale.
In this article, I will dissect how 5G will transform particular industries while discussing their business cases.
An article in the New England Journal of Medicine mentions that healthcare owns 30% of the world's stored data and that every patient typically generates about 80 MB of data each year. In another article, "How 5G Wireless (and Concomitant Technologies) Will Revolutionize Healthcare?", the authors identify four major deficiencies affecting the healthcare industry: the lack of a patient-centric system, the absence of personalization, poor accessibility, and no focus on data.
Experts believe 5G will address all of these deficiencies through its low latency and high capacity for collecting data. When hospitals upgrade to 5G, they will spend less time capturing, transferring, and managing data in real time; unlock remote patient care by enabling mobile networks to support real-time, high-quality video; and strengthen cybersecurity through cloud-based data centers. With flourishing wearable technology that is compatible with 5G, it will be easier for patients to engage with their physicians remotely and keep them updated with real-time vital signs.
I, personally, cannot wait to see the strides that 5G takes when it comes to transforming the Healthcare scene. The Pandemic has added excessive pressure on our medical and para-medical community, and these frontline warriors deserve all the help we can gather.
According to Statista, e-tail revenues are forecast to rise to 6.54 trillion US dollars in 2022. With the introduction of 5G, retail chains all around the world will transform through proper connectivity: a reliable and fast network, sophisticated AR/VR use cases, and adequate network capacity for the new-age applications that help warehouses reach their performance peaks.
So, the question remains — How can 5G aid the retail industry?
With the growth of 5G comes the need for compliance with federal laws, and that will be the number one concern for the public sector. The public sector recognizes the impact that 5G will bring, but to deal with it appropriately, local governments have to understand the technology and get the most out of it while complying with federal laws. These laws will be driven by the constant threat of data breaches, given the massive amount of data that will be collected using the technology.
But, once the technology is under the government’s reach, we can start talking about the implementation of the following use cases:
In a new report commissioned by Intel, media and entertainment experiences enabled by 5G are projected to generate up to $1.3 trillion in revenue by 2028. As smartphones increasingly win hearts, whether as a viewing device or as a controller for in-home and industrial networks, the 5G technology stack is opening new avenues for a richer media experience on consumer devices. This will include lightning-fast data speeds, massive bandwidth, and low network latency.
Let me give you a little sneak peek at how 5G will reform the world of media and entertainment:
To understand how the 5G technology stack will transform the Manufacturing industry, let’s take a look at the use cases that pertain to industrial operations:
Let’s review how that would work.
In addition to its commendable speed, the 5G technology stack also focuses on device density and latency, which is expected to transform the way industries work. This extends to the disruption of supply chains. For my take on how the pandemic has turned competition between brands into competition between supply chains, read my blog.
Let me take you through the use cases that 5G will enable for supply chains. Industries can have a reliable logistics operation with automated labeling, tracking, and recording shipments as opposed to the manual track and trace. This will help them solve challenges like lost cargo, misplaced containers, and counterfeiting. When it comes to inventory and warehouse management, 5G will help to optimize processes, enable remote maintenance and control, and deploy autonomous vehicles.
When it comes to the future of supply chain management, 5G gives us exactly what we want — transparency across the channel to ensure that the control lies in the central command node.
Today's smart home technology delivers a fragmented experience, and 5G is here to solve that. 5G aims to provide a seamless experience across all our home technologies, from our favorite room temperature to our preferred light shades, entertainment and education suites, fitness and health devices, and door security features. Along with seamless device orchestration, the technology stack will also place more focus on data privacy and security from an ethical point of view.
In a report by Ericsson, by 2030, AR will account for almost 50% of all revenue from immersive video formats. What will be exciting about the use cases of AR is that they will take an in-venue experience and transform it into a digital experience.
When combined with AR/VR/Mixed Reality, experiences, like concerts, movie premieres, gaming, sporting events, retail fashion, and even home planning, education, and advertising, offer an irresistible prospect that many tech-junkies, including me, are waiting to experience in its full glory.
5G will make the logistics life simpler! The highly reliable, low-latency technology stack has proved beneficial for the transportation industry through several use cases. Smart sensors can monitor the condition of roads and measure stress levels to determine when repairs are due. Sensors can predict potholes proactively, allowing municipalities to take preventive measures. The technology will also enable cameras to provide real-time insights into traffic flow, redirecting vehicle and pedestrian traffic for efficiency and civic safety. 5G will accelerate the adoption of autonomous vehicles and unlock the ability to support real-time responses on vehicle safety status and autonomous controls for collision avoidance.
The fifth-generation cellular standard has enabled new business cases for IoT. The tech industry realizes that it is difficult to curate an end-to-end IoT solution with cellular connectivity; it needs the right mix of several elements, including expertise in embedded systems, connectivity, time-series-based systems, antenna design, cloud computing, and more. But I see telecom operators offering exactly that kind of opportunity, enabled by 5G.
So, what can the future business use cases for 5G-powered IoT be? The first is an improved asset tracking solution that periodically collects small amounts of data, such as energy usage or product condition; this also helps verify whether the product is being handled according to the safety and compliance requirements issued. The second is business-critical applications for command and control of AGVs and robots in small factories. The third is connecting assets in restaurants, cafes, and brownfield sites to the cloud to turn them into smart devices.
The camaraderie between 5G and high-speed flash storage will create a lot of avenues for use cases for the storage industry. First, enabling virtual 5G networks in the cloud to ensure bandwidth, latency, and quality of service. Second, becoming the backbone of high-resolution video streaming by enabling a shift from 4K to 8K and beyond. Third, introducing full-on cloud gaming with the opportunity to stream video games and play them anywhere.
Theoretically, 5G has been a successful champion in combating the present-day challenges faced by these industries, and it focuses on seamless productivity. Enterprises will witness 5G opening new doors to services and product innovation, creating new customer segments and revenue streams at scale. From the dissection above, it is clear that ultra-reliable low latency is the new currency of the network world, underpinning new capabilities in many industries that were previously impossible.
Now, it is all a waiting game for us to see how the actual deployment of 5G turns out to be.
What do you expect from the 5G rollout?
Tags: Digital Transformation, Digital Disruption, 5G
I could try to express the impact of the pandemic in a hundred different sentences, but the summary remains the same: COVID-19 has impacted the world in ways no one could have imagined or predicted. While many leaders are used to constant change, the disruption caused by COVID-19 is the hallmark of 2020. The pandemic has forced organizations to re-think, re-imagine, and re-evaluate their processes.
Organizations continue to respond & renew their strategy during the pandemic and explore new ways to operate and drive growth. In the coming year, the global Digital Transformation scene will focus on increased resilience and preparedness for the post-COVID effects. Keeping that in mind, I am sharing a list of 15 strategic trends that will transform the digital scene. These trends highlight areas of opportunity and ways for organizations to differentiate themselves from competitors.
2020 was a huge year for 5G: we saw regular and agile 5G deployments at global scale by Qualcomm, AT&T, Verizon, Nokia, Ericsson, and Huawei. While telecommunications is booming with the use of 5G, it will also drive advances in edge computing, near real-time monitoring, and low-latency, high-speed application scenarios. In 2021, 5G will champion the disruption scene as it continues to transform every industry that affects our day-to-day living.
A Composable Enterprise (CE) diversifies business functions into microservices delivered through application networks, APIs, and beyond. This means that instead of providing a single product or service, the enterprise offers a variety of microservices to its customers.
The pandemic has taught enterprises to be agile and adaptable, and that is where composable enterprises come in handy. Enterprises with a single product or service will find it harder to adjust to unprecedented changes than composable enterprises, whose microservices can be recombined into new business models powered by the New Normal.
Additionally, the composable enterprise has helped with the evolution of Everything as a Service (XaaS), where most IT functions are scalable as separate services for enterprise consumption.
To be successful in the "New Normal" era, enterprises will need to focus on explicit initiatives and technology adoption in areas such as Sustainability, Innovation as a Service, Enterprise Risk Management, and Transparency and Traceability.
According to Gartner, over half of the enterprise-generated data will be produced and processed outside traditional data centers or a single centralized cloud by 2022, compared to just 10% today. The conversation has evolved from choosing between a private or a public cloud. Enterprises are now focusing on a mix of Hybrid, Multi-cloud, and Distributed Cloud strategies.
In the post-COVID world of 2021, Cloud providers would be seen as strategic partners that help with cost reduction and better resilience by fuelling Cloud to Cloud Migration.
The Internet of Behavior (IoB) is all about changing behaviors by using data. IoB is a major step in the evolution of how data is collected and used. There is a rise in technologies that collect data spanning the digital and the physical worlds, for example facial recognition, location tracking, 5G-powered edge computing, and big data. The data collected can then be used to influence behavior through feedback loops.
I have identified some of the common use cases and technologies in the IoB scene -
Anywhere Operations refers to the IT model that focuses on supporting customers, enabling employees, and orchestrating the deployment of business products and services from any geographic location.
This IT operation model also helps with the Hyperlocal business model where the enterprise collaborates with local businesses for the agile distribution of products and services from an offline location within the proximity of a few kilometers. This concept has helped revolutionize the conventional idea of Supply Chains. To understand how the pandemic has reformed the competition between supply chains, read my blog here.
A study by Gartner predicts that, by 2023, 40% of organizations will have adopted Anywhere Operations for an overall optimized customer and employee experience.
It is an undeniable fact that at the center of all digital transformation is the newest digital currency — DATA. Organizations that have been successful in adapting to the Pandemic changes have recorded data to analyze their past mistakes and their current operation, and to predict the trends of tomorrow.
With the help of big data and advanced technologies like machine learning, the IT industry can analyze this data effectively and help figure out an appropriate response to crises.
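As a small, hypothetical illustration of the point (the weekly order counts are invented, and a least-squares trend line is only the simplest possible model), a few lines of Python with NumPy show how recent data can be turned into a forward-looking signal for planners:

```python
# Project near-term demand from recent order history using a simple linear trend.
# Weekly order counts are hypothetical; real models would be far richer.
import numpy as np

weekly_orders = np.array([120, 118, 131, 140, 155, 162])   # last six weeks
weeks = np.arange(len(weekly_orders))

slope, intercept = np.polyfit(weeks, weekly_orders, deg=1)  # least-squares trend line
next_week = slope * len(weekly_orders) + intercept
print(f"Trend: {slope:+.1f} orders/week, projected next week: {next_week:.0f}")
```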
In 2021, organizations will be curating data, looking for ways to capture more for monetization, and using it indirectly for use cases like Responsible business, Supply chain resiliency, Employee experience, Customer Experience, New Pricing, Business models, and Security.
I feel that the new year will be all about embracing AI not just as an innovation initiative, but as part of the core strategy for the company. CXOs need to consider how AI and Composite Architectures can work together, in sync, to help their companies solve the business challenges presented by the dynamic ecosystem of the pandemic.
The joint forces of Artificial Intelligence and Machine Learning will be the force multiplier that drives new business models and insights. Other AI trends that will be popular, in my opinion, are Composite AI, Generative AI, Formative AI, AI Security, DataOps, ModelOps, DevOps, and AI Democratization.
Additionally, the concept of Algorithmic Trust will trend as the algorithmic digital economy develops in 2021. Algorithms will become a way to differentiate products on the basis of enterprise business resiliency, marketing, and business models.
While AI will be trending like a raging fire, organizations need to ensure that everything done in the AI area stays well within the bounds of AI ethics, rules, and regulations.
Workforces will be collaborating with their colleagues remotely, and to make this digital journey possible, cybersecurity as a business imperative will play a crucial role. In 2021, enterprises will focus on privacy and compliance-enhancing computing, digital privacy and AI ethics, distributed digital identity, and Zero Trust design.
Remote work has boosted the number of security breaches, and there is an increasing need to protect data. Zero Trust design protects modern digital environments by using network segmentation and providing Layer 7 threat prevention. To adopt a Zero Trust architecture, you don't need to remodel or make major alterations to your existing technology. Zero Trust architecture is highly dynamic, as it has to adapt to your existing processes and make them more efficient.
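To illustrate the "never trust, always verify" idea in the simplest possible terms, here is a conceptual sketch of a deny-by-default access decision. It is illustrative pseudologic under assumed signal names (MFA status, device compliance, network segment), not the policy engine of any specific Zero Trust product:

```python
# Deny by default: a request is granted only when identity, device posture, and
# context all check out. Signal names and rules here are hypothetical.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_mfa_verified: bool
    device_compliant: bool        # e.g. disk encrypted, OS patched
    network_segment: str          # micro-segment the request originates from
    resource_sensitivity: str     # "low" or "high"

def authorize(req: AccessRequest) -> bool:
    """Grant access only when every signal is healthy; otherwise deny."""
    if not (req.user_mfa_verified and req.device_compliant):
        return False
    if req.resource_sensitivity == "high" and req.network_segment != "corp-restricted":
        return False              # segment-aware, least-privilege rule
    return True

print(authorize(AccessRequest(True, True, "corp-restricted", "high")))   # True
print(authorize(AccessRequest(True, False, "corp-restricted", "high")))  # False
```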
Workforces no longer have the luxury to work out of the same location, and this has impacted the productivity of a lot of enterprises. This can mean investing a lot of time, effort, and money to develop platforms to streamline the business processes. This can be done with ease when they opt for a Low Code/No Code first strategy.
A prediction by Gartner projects that, by 2024, low-code will account for more than 65% of all application development activities. Low Code/No Code offers a robust model that allows interoperability amongst functions that may be necessary for scaling up operations in the future.
In the last two years, Automation has evolved from the use of RPA and Infra as Code to Hyper Automation. This technology employs advanced technologies, like AI, Machine learning, and Robotic Process Automation, to automate processes that were carried out by the human workforce.
There is a need for businesses to automate their operations as much as possible. This is when I recommend businesses turn to the practice of Hyper Automation, using tools like AI, machine learning, event-driven software, robotic process automation, and other decision-process and task-automation tools.
It is crucial to "reach the customers where they are": on their mobile devices. With Low Code/No Code, it is possible to automate the process of creating apps with ease. Other practices, like Everything as Code and intelligent automation processes, will also make their presence felt in the coming year.
In 2021, it is predicted that more than 20% of IT workers will be working alongside Personal Contextual Digital Assistance, a technology that goes beyond AI-infused apps and simple chatbots.
With virtual working taking a front seat, remote workforce and talent management is a vital necessity. With employees geographically dispersed, it takes a well-designed technology to streamline the workflow.
Rapid changes in business models, work environments, and customer expectations have left workers in need of tools that can help them maximize their productivity. These tools can range from individual changes and structural tweaks to the provision of effective technology to meet the goal. They can help employees understand their goals clearly and improve their efficiency accordingly.
Tools, like reliable video conferencing, cloud-based connectivity, digital collaboration tools, and a decision-making approach that is powered by the synergy between artificial and human intelligence, are helping the HR departments reimagine the way to manage talent globally.
I am focusing on two new architectural introductions in the arena of computing.
Unprecedented times call for an enhancement of a person’s cognitive and physical capabilities. This is where human augmentation comes in with its use of technology. Human Augmentation works on two different levels:
Intelligent Experience is an amalgamation of Customer Experience, Employee Experience, Workplace Experience, User Experience, Multi Experiences, Digital Experience, and Emotional Experience.
In addition to the above trends, in 2021 we will also see technologies like Blockchain, 3D Printing, and AR/VR move beyond pilots and become mainstream, helping to develop new business models, resilient supply chains, and re-imagined telemedicine and healthcare scenarios, remote monitoring, B2B data sharing, learning, training, and after-sales support scenarios.
2021 is all about resilience and bouncing back from the challenges posed by the pandemic. The major focus is placed on the health of the business and its stakeholders. Innovations in terms of business models, business operations, and security are the predicted highlights of the coming year.
What should be the first order of business for you? Evaluating the weak links in your enterprise, gathering data on your strong suits, and implementing these trends accordingly. That’s what progress is about — seizing opportunities out of crises.
Which trend, in your opinion, would rule the digital transformation scene in 2021?
Tags: Digital Disruption, Digital Transformation, Digital Twins
Disasters like the COVID-19 pandemic can wreak havoc on the business of even the biggest companies. Even when your employees and office space are secure, the difference in a time of calamity lies in having a secure and resilient supply chain. In this article, I explore how direct competition between brands is no longer relevant, and why the only competition that remains is between companies' supply chains. I will also take a look at the different ways leaders can build secure, resilient supply chains for their businesses.
When the news stories related to the pandemic started breaking on the internet, none of us could have realized the massive scale of the calamity. Given that such a disaster only happens once in a century, we can be forgiven for that. However, what we can’t forgive ourselves for is putting all the eggs in one basket when it comes to securing our business supplies.
The recent disruption of global business supply chains is mostly due to them being either concentrated in a single geographic area or following just-in-time manufacturing and lean production strategies to cut costs. Such businesses now find themselves in a precarious situation with a noticeable shift in the customer consumption patterns, where purchases are now more inclined towards local availability, followed by how responsibly the brand behaves, as against the earlier considerations of price or favorite brand/product.
The saying goes that a chain is only as strong as its weakest link, and the same holds true for any supply chain. The impact of the COVID-19 crisis on supply chains cannot be stressed enough. A recent McKinsey survey of senior-level supply chain executives found that 73% of the businesses surveyed had encountered issues with their suppliers, while 75% faced issues with production and distribution; 100% of the respondents in food consumer-goods industries had encountered production and distribution problems.
A separate IDC survey on "Supply Chain Agility in the Pharmaceutical Industry" found that 46% of respondents had faced drug shortages during the pandemic, while 70% agreed that their supply chain was very vulnerable or facing more problems with the continuation of the pandemic. 65% of the respondents in the survey also reported that they could no longer accurately plan supply and had lost faith in their demand forecasts, and a stark 43% of respondents lacked the necessary agility and redundancy to survive major business disruptions.
With businesses in the past few years moving towards globalization, they are facing increasing challenges in terms of acquiring customers, onboarding workers and vendors, finding partners and suppliers, and ensuring continuity of business in the pandemic.
Today the biggest question that all manufacturers need to answer is — Which strategies should they employ to mitigate the disruption risks for supply chains? And where should organizations start?
Here is my handpicked 10-point approach for any such business looking to ensure continuity of service by building a secure and resilient supply chain:
To be resilient, a business needs to strike a balance between just-in-time manufacturing and lean production strategies on the one hand, and AI- and analytics-powered early warning models built into supply chain risk assessment tools on the other. Such tools can offer complete visibility across all levels and sound alarms whenever they identify a slowdown, interruption, or any other issue. Early warning systems based on a well-defined predictive model, KPIs, and leading indicators can also help businesses identify the weak links in their supply chain, so they can plan around them by building redundancies.
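As a rough sketch of what such an early warning rule can look like in practice (the indicator names, weights, and threshold below are hypothetical and would come from the organization's own risk model), a weighted score over a few leading indicators is often enough to trigger a review:

```python
# Combine a few normalized (0..1) leading indicators into a supplier risk score and
# raise an alert when it crosses a threshold. Weights and values are hypothetical.
WEIGHTS = {
    "on_time_delivery_drop": 0.40,
    "order_backlog_growth": 0.35,
    "region_risk_index": 0.25,
}

def risk_score(indicators: dict) -> float:
    """Weighted sum of leading indicators, each normalized to the 0..1 range."""
    return sum(WEIGHTS[name] * value for name, value in indicators.items())

supplier_signals = {"on_time_delivery_drop": 0.6, "order_backlog_growth": 0.8, "region_risk_index": 0.3}
score = risk_score(supplier_signals)
if score > 0.5:
    print(f"Early warning: supplier risk score {score:.2f} - review alternate sources")
```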
While in the past two decades businesses have built their supply chains around cost optimization, the current pandemic has shown that this may not be the best practice. Lower levels of available inventory and disrupted supply chains have driven many businesses into the ground.
In the next few years, businesses should focus on simplifying their product portfolios and diversifying their manufacturing capability (different locations, different suppliers) so that they keep their manufacturing and supply chains differentiated. The ability to operate alternative manufacturing sites and secure supplies from different suppliers may add some extra cost (and may require more discipline in quality assurance), but it provides insurance against disruption.
Businesses need to create a much more agile supply chain. This can be done by collaborating and sharing data between all the stakeholders involved in building the supply chain. Both agility and resiliency can be improved through new, near real-time B2B integration and by sharing data down the supply chain. Especially with Tier 2 and Tier 3 suppliers, providing more visibility of plans allows them to anticipate demand and respond faster.
Creating a coherent digital strategy to combine global and local supply chain strategies, by using insights from data to derive competitive advantage and to achieve business outcomes, should become the new normal.
The main issue with the pandemic has been restricted geographical access. Thus, I would like to see a much fresher approach by businesses to explore and identify new commercial strategies to fuel growth in areas that were left untouched before the present pandemic.
COVID-19 has highlighted many vulnerabilities in the conventional strategies used for building supply chains. Businesses need to work on a more collaborative model and redefine the contracts with their suppliers, vendors, and partner ecosystem. Contracts need to allow for a new set of KPIs, location independence, risk and reward mechanisms, elasticity in supply, cash flow, payment terms, and, most importantly, redundant paths for any single point of failure in the chain, including alternate routing.
With things no longer remaining as they were, businesses need to invest more in restructuring their last-mile delivery channels by identifying new ones, such as order online and pick up in-store, curbside pickup, and delivery using robots and drones.
Organizations should centralize any information necessary for the core functioning of the vendors and supply chains to prevent any eventuality where the concerned managers may not be available. They should also look to localize various vendor relationships and delegate decisions to local teams.
Businesses are being crippled by losses as insurance providers are introducing new insurance policy exclusions. They need to understand that traditional insurance is not adequate when dealing with pandemic-related business interruptions. The best insurance in the future against incurring any such losses is prevention itself. Building resilience-as-a-service into the system is now a necessity.
I would rather take it to the next level and recommend that businesses apply a chaos testing model for pandemic readiness. Businesses need to test and investigate their pandemic readiness regularly, much like the chaos testing frameworks used in the IT application domain. They need to implement a workplace pandemic preparedness plan in sync with their continuity plans.
A data-driven reinvention of business practices will be critical to post-pandemic growth. Businesses will need to embrace responsible AI capabilities to recover and return to their pre-pandemic growth plans. They will need to leverage AI across their supply chain, business processes, customer experience, and employee experience. Who does this most professionally and ethically will be the key differentiator.
Businesses should adopt the use of digital data and insights more extensively as part of their strategy. Cross-platform digitization can help them understand things more clearly and quickly, so they can predict and prevent unnecessary blockages in their supply chain and vendor management structures.
Leaders need to recognize systems thinking as the missing skill set in this whole pandemic scenario. Businesses should use systems thinking to look at a holistic picture of all the interconnected ecosystems. They will then be able to predict evolving complexities and the risks associated with them.
To be successful in the future, businesses need to define more flexible, adaptive, and resilient processes and establish multi-disciplinary teams to define new processes and innovate for new products.
As business stakeholders redefine products and services for new customer experiences, IT needs to keep up with the fast pace of delivering new applications and new functional requirements. Enterprises need to take a cloud-first approach, which allows for faster innovation at lower cost and lets enterprises focus on core functionality instead of building resilient IT platforms themselves.
Organizations are looking to scale up their technological capabilities, including IT and cloud infrastructure, to be ready for the new normal. Some of the areas where I would recommend reducing technology debt are:
Developing a remote working strategy for your business can ensure that everyone can securely access the tools they would require to work remotely, including any access to business systems — including HR, ERP, payroll, CRM, unified communication (UC), and collaboration tools, file storage, and email. At the same time, setting up an omnichannel workforce management system should also be a priority, with an increased focus on employing digital solutions to tackle the new challenges.
Increasingly, companies are discovering that the remote working scenario has created a mismatch between the skills that are needed and the skills their employees have. As business leaders turn to automation, digitization, and extracting value from data, the workforce should be able to complement the value being added by the new technology.
While businesses have been automating processes for efficiency in operations, they need to do so for every department in a post-COVID world. Utilizing composite AI to re-imagine the ways of doing business while factoring in various productivity scenarios will be an in-demand strategy in the new world.
For businesses to make remote working a more responsible process, the HR teams have to come up with their own set of remote working procedures that can communicate organizational expectations to employees and also ensure any upskilling that may be required on their part. HR and IT teams need to work in more cohesion in the new world.
In a new interconnected world, to drive realistic customer demand and to sustain continuity of business, organizations should look to create a ‘digital twin’ of the entire supply chain (or of key processes) to simulate and adjust for the developing scenarios.
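A full supply chain digital twin is a substantial modelling effort, but the core idea can be sketched in a few lines: replay demand against inventory while a key supplier is offline and see when a stock-out would occur. The figures below are hypothetical placeholders:

```python
# Toy "what-if" simulation: daily demand drains on-hand inventory while replenishment
# is paused for the duration of a supplier outage. All numbers are hypothetical.
from typing import Optional

def simulate_disruption(on_hand: int, daily_demand: int, daily_replenishment: int,
                        outage_days: int, horizon: int = 30) -> Optional[int]:
    """Return the first day a stock-out occurs within the horizon, or None."""
    stock = on_hand
    for day in range(1, horizon + 1):
        inbound = 0 if day <= outage_days else daily_replenishment
        stock += inbound - daily_demand
        if stock < 0:
            return day
    return None

stockout_day = simulate_disruption(on_hand=500, daily_demand=80, daily_replenishment=90, outage_days=10)
print(f"Stock-out on day {stockout_day}" if stockout_day else "No stock-out within the horizon")
# -> Stock-out on day 7: the safety stock does not cover a 10-day outage
```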
Also, increased dependence on subscription-based models should help cater to evolving customer demands. In short, businesses need to shift to a product mindset and leverage technology as an enabler to define new services and subscription models.
Businesses need to ensure that their cybersecurity teams are geared up to protect their data, applications, and resources from any kind of threat and to respond to alerts. As remote working finds a more comprehensive place in the business ecosystem, CISOs must draft new security policies (for data security, network security, app security, and remote wipe) to accommodate the increase in telework.
While globalization was the talk of the town in the '90s, I feel that eventually, global supply chains may end up becoming a victim of their own success. As businesses were over-reliant on strategies focused on cutting costs, they had increasingly become less flexible and resilient with their supply chain structures. This needs to change, and the ones to adapt quickly will be the new market leaders.
Tags: Business Strategy, Business Continuity, COVID19
Business leaders are being forced to rework their strategies and relook at their offerings in the aftermath of the COVID-19 crisis. Gone are the days when work took place as an established practice.
The pandemic has opened multiple Pandora's boxes for every business. As the pandemic's impact deepens, I take a look at the importance of a low code/no code approach to IT applications, which businesses can implement for a faster recovery.
While every business goes through a cyclical change of events, the pandemic can easily throw a spanner in their plans. In the initial few days, businesses struggled to put together a plan of action. Now a year and a half into the pandemic, they look better positioned to implement new models to manage productivity with safety.
For any business to formulate an effective response to such a crisis, they need to adopt a three-phased approach:
While this may look simple on paper, the major impact of the current crisis has been on productivity as teams no longer have the luxury to operate out of the same location. While this can be countered easily by the few big names in the industry, for many others, this may mean investing time, effort, and above all, large sums of money in developing platforms to boost productivity unless they opt for a low code/no code-first strategy.
Low code/no code is a software engineering approach for developing cloud-native applications/custom software apps fast and with minimal hand-coding.
Gartner reports predict that low code will account for more than 65 percent of all application development activities by 2024.
Most enterprises and governments have accelerated the process of digitization, and the COVID-19 pandemic has increased the demand for near real-time, data-based decisions instead of offline surveys. While regularly staffed IT departments are capable of handling a few requests, with almost every business function in need of digitization, businesses have their hands full implementing digital practices overnight.
The low code/no code platform-based applications can be created as quickly as they are needed. Also, they offer a robust model with avenues for interoperability amongst various devices and functions that may be necessary for scaling up operations in the future.
Low code application platforms (LCAP) have enabled two-way interaction between people and increased e-participation between employers and employees as well as governments and citizens.
Application-based data capture using a low code/no code platform allows for better protection of sensitive data and people's privacy. It has also made it easy to implement activity-driven applications, enabling organizations and governments around the world to capture real data via apps and make better-informed decisions.
Low code/no code platforms are usually classified into two broad categories: those for developers and those for business end-users. Before choosing a platform, companies must therefore clearly understand who is going to work on it. It's best not to put a developer-oriented platform in front of 'citizen developers', those with no programming skills, although a developer-oriented platform does offer more customizable control.
Choose a low code/no code product that offers more of the functions you need, as every tool differs in its capabilities.
You don't want to end up with an application that offers no upgrades or support. When choosing a low code/no code platform, exercise careful judgement to ensure that it can offer a viable solution for the required duration and continuous room to scale up as your requirements evolve.
In this age of information, we are all racing against time. This is where low code/no code platforms can be utilized to the maximum. Let’s look at a few such scenarios:
While low code/no code platforms offer faster time to market and faster development time, they can also be limited in their capabilities due to the block-model manner in which they are built. A few challenges include:
With all that said, I firmly believe that low code/no code technology is the most promising way to scale application development faster. When implemented efficiently, it has proven to be effective beyond expectations.
I see the "Renew phase" for businesses and governments absorbing learnings from the previous "Reset phase" and building on those to expand the scope of operation of platforms and apps built on the low code/no code model.
With the increasing adoption of the cloud as a necessity rather than a luxury, and rapid progress being made on the cloud-development front itself, I expect more low code/no code platforms being utilized by organizations to respond to their fast-evolving needs. Workflow automation and anywhere-anytime solutions built on low code/no code platforms look set to grow their reach even further.
The adoption of 5G services, which provide faster speeds and lower latency, should fuel the rise of LCAPs and increase the share of citizen developers.
I had often heard that necessity is the mother of invention, but perhaps the COVID-19 pandemic has led to the creation of many necessities, which are now driving a digital transformation of our society. I see a bright future ahead for the low code/no code platform, because in this new future, software is going to be anyone's game!
Tags: Business Continuity, Business Strategy, COVID19
In today’s day and age, business enterprises are finding it difficult to navigate through different complex environments that run across data centers, edge, and multiple clouds. While single cloud still holds relevance, most companies are adopting multi-cloud and hybrid cloud models. However, the terms hybrid cloud and multi-cloud are inconsistently used.
A multi-cloud strategy entails using multiple cloud services from different providers based on their performance levels at certain tasks.
With multi-cloud and hybrid cloud infrastructures now a deployed reality, players like Microsoft, Google, and AWS have entered this market, propelling greater cloud innovation. All hyperscalers have built control planes for hybrid and multi-cloud deployment models that oversee the lifecycle of managed services such as Internet of Things (IoT), functions, databases, virtual machines, and observability.
I believe these control planes deliver on the promise of robust hybrid/multi-cloud technologies in this ever-changing multi-cloud services landscape. Currently, Microsoft Azure Arc and Google Anthos are the most popular control planes in this domain. However, Microsoft Azure Arc stands out because of its unique design architecture.
In this article, I will deep dive and dissect the efficacy of Microsoft Azure Arc.
Azure Arc is a software solution that enables you to project your on-premises and other cloud resources, such as virtual or physical servers and Kubernetes clusters, into Azure Resource Manager.
Think about Azure Arc as a management and governance tool that enables you to manage your resources as if they’re running in Azure, using a single pane of glass for managing across your estate. Essentially, Azure Arc is an extension of Azure Resource Management (ARM) that gives support to resources running outside of Azure. It uses ARM as a framework by extending its management capabilities and simplifying the use for customers across different hybrid and multi-cloud environments.
Azure Arc is about extending the Azure control plane to manage resources beyond Azure, like VMs and Kubernetes clusters wherever they are, whether they're Windows, Linux, or any Cloud Native Computing Foundation-certified Kubernetes distro. Organizations can manage resources even if they're not always connected to the internet. Thus, non-Azure deployments can be managed alongside Azure deployments using the same user interfaces and services, such as tags and policies.
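As a small illustration of that single-control-plane idea, the sketch below assumes the Azure SDK for Python (azure-identity and azure-mgmt-resource) is installed and the caller is signed in to a subscription that already has Arc-connected machines; it lists Arc-enabled servers through the same Azure Resource Manager API used for native resources. Onboarding the machines themselves (via the Connected Machine agent) is a separate step not shown here:

```python
# Once servers are connected through Azure Arc, they surface in Azure Resource Manager
# as resources of type Microsoft.HybridCompute/machines and can be queried, tagged,
# and governed like native Azure resources. The subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

arc_servers = client.resources.list(
    filter="resourceType eq 'Microsoft.HybridCompute/machines'"
)
for machine in arc_servers:
    print(machine.name, machine.location, machine.tags)
```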
Azure Arc is a unique approach undertaken by Microsoft to accelerate innovation across hybrid and multi-cloud environments. So, in a nutshell, what does Azure Arc offer?
a.) Arc enables management and governance of resources that can live virtually anywhere (on-premises, in Azure or Azure Stack, in a third-party cloud, or at the edge). These resources can be servers, virtual machines, bare-metal servers, Kubernetes clusters, or even SQL databases. With Arc, you can use familiar Azure services and management capabilities, including Create, Read, Update, and Delete (CRUD) policies and role-based management.
b.) Arc provides a single pane of glass: using the same scripting and tools, you can see these resources alongside everything else in Azure. Furthermore, you can monitor, govern, and back up all these resources no matter where they live.
c.) Arc enables customers to easily modernize on-premises and multi-cloud operations through a plethora of Azure management and governance services. It also supports asset organization and inventory.
d.) Arc supports enforcing organizational standards and assessing compliance at scale for all your resources, anywhere, based on subscription, resource group, and tags.
e.) Arc also provides other cloud benefits such as fast deployment and automation at scale. For example, using Kubernetes-based orchestration, you can deploy a database in seconds by utilizing either GUI or CLI tools.
f.) Arc allows organizations to extend the adoption of a consistent toolset and frameworks for identity, DevOps/DevSecOps, automation, and security across hybrid/multi-cloud infrastructures, and, lastly, to innovate everywhere.
g.) Arc supports the use of GitOps-based configuration as code management, such as GitHub, to deploy applications and configuration across one or more clusters directly from source control.
h.) Arc helps organizations to make the right decisions about cloud migrations. Using Azure Arc, you can gather the workload data (discovery) and uncover insights to decide where your workloads should run — whether on-premises, in Azure, or in a third-party cloud or at the edge. This insight-driven approach can save you significant time, effort and migration cost too.
i.) Arc provides a unified experience for viewing your Azure Arc-enabled resources, whether you are using the Azure portal, the Azure CLI, Azure PowerShell, or the Azure REST API.
Azure Arc allows enterprises to manage the following resource types outside the realm of Azure:
Azure Arc-enabled servers became generally available in September 2020.
Servers, be they physical or virtual machines, running Windows or Linux, are supported by Azure Arc. Azure Arc-enabled servers are, in a sense, agnostic to infrastructure for this reason. These machines, when connected, are given a resource ID within a resource group and are treated as just another resource in Azure. Azure Arc-enabled servers enable various configuration management and monitoring tasks, making it easier to achieve better resource management for hybrid machines.
Additionally, service providers handling a customer's or enterprise's in-house infrastructure can treat hybrid machines much like native virtual machines using Azure Lighthouse.
Managing Kubernetes applications in Azure Arc entails attaching and configuring Kubernetes clusters inside or outside of Azure. This could include bare-metal clusters running on-premises, or managed clusters like Google Kubernetes Engine (GKE) and Amazon EKS.
Azure Arc-enabled Kubernetes allows you to connect Kubernetes clusters to Azure to extend Azure's management capabilities, like Azure Monitor and Azure Policy. By attaching external Kubernetes clusters, users get the features that let them control external clusters just like Azure's own internal clusters. But keep in mind that, unlike AKS, the maintenance of the underlying Kubernetes cluster itself is done by you.
Azure Arc-enabled data services help you run data services on your preferred infrastructure, on-premises and at the edge. Currently, Azure Arc-enabled data services are available in preview for SQL Managed Instance and PostgreSQL Hyperscale. Azure Arc-supported SQL Managed Instance and PostgreSQL Hyperscale can run on AWS, Google Cloud Platform (GCP), or even in a private datacenter.
Azure Arc-enabled data services such as Azure Arc-enabled SQL Managed Instance and Azure Arc-enabled PostgreSQL Hyperscale receive updates on a frequent basis, including servicing patches and all the new features in Azure. Updates are provided to you from the Microsoft Container Registry, and deployment cadences are set by you in accordance with your policies.
Azure Arc enabled Data Services also support cloud-like Elastic Scale, which can support burst scenarios that have volatile needs, including scenarios that require ingesting and querying data in real-time, at any scale, with sub-second response time. In addition, you can also scale out database instances using the unique hyper-scale deployment option of Azure Database for PostgreSQL Hyperscale.
This capability gives data workloads an additional boost on capacity optimization, using unique scale-out reads and writes. Many of the services such as self-service provisioning, automated backups/restore, and monitoring can run locally in your infrastructure with or without a direct connection to Azure.
Azure Arc enabled SQL Server is part of the Azure Arc for servers. It extends Azure services to SQL Server instances hosted outside of Azure in the customer’s datacenter, on the edge, or in a multi-cloud environment.
You must be wondering why Microsoft has introduced Azure Arc when there is already an existing hybrid cloud offering: Azure Stack.
Azure Stack is a hardware solution that enables you to run an Azure environment on-premises. Whereas Azure Arc is a software solution that enables you to project your on-premises and multi-cloud resources, such as virtual or physical servers and Kubernetes clusters, into Azure Resource Manager.
For applications that use a mix of on-premises software and Azure services, local deployment of Azure services through Azure Arc can reduce the communication latency to Azure, while providing the same deployment and management model as Azure.
While Azure Stack Hub is still viable for some businesses, Azure Arc becomes a holistic strategy for organizations looking to distribute their workloads across both private and public clouds, both off-premises and on-premises.
So, how does Azure Arc compare to other hyperscalers who are offering hybrid and multi-cloud strategies?
AWS Outposts is a fairly new solution and is currently more aligned to hybrid cloud deployment models. Google Anthos allows you to build and manage applications on-premises, on Google Cloud, and even on AWS and Microsoft Azure. Anthos does NOT make GCP services available in your own data center or in other clouds; to access GCP services (storage, databases, AI/ML services, etc.), the containers running in your data centers must reach back to Google Cloud.
Google Anthos and Azure Arc have very similar capabilities and approaches. Anthos is more focused on getting everything deployed to containers and has limited capabilities for managing VMs or servers running in your data center or in third-party clouds. Additionally, Google Anthos can currently be a costly component. Moreover, in my view, Google Anthos is quite prescriptive, because to run it you require GKE (Google Kubernetes Engine), whether you deploy to Google Cloud or on-premises.
This isn't the case with Microsoft's Azure Arc, as it goes beyond Kubernetes into areas like centralized discovery, a common toolset for security, configuration, and management, and data services. It also offers more choice of Kubernetes environments, giving customers the option to choose their Kubernetes platform. Azure Arc offers more portability and less lock-in than Anthos. Basically, Azure Arc does everything Anthos does and much more, making it the more versatile option to adopt.
Azure Arc is offered at no additional cost when managing Azure Arc-enabled servers. Add-on Azure management services (Azure Monitor, etc.) may be charged differently for Azure VMs or Azure Arc enabled servers. Service by service pricing is available on the Azure Arc pricing page. Azure Arc enabled Kubernetes clusters and Azure Arc enabled data services are in preview and are offered at no additional cost at this time.
The current roadmap as stated on the Microsoft website includes adding more resource infrastructures pertaining to servers and Kubernetes clusters. In the future, you can expect:
a.) Self-hostable gateway for API Management — allows management of APIs hosted outside of Azure using the Azure-hosted API Management service.
b.) Other database services, such as Cosmos DB, are likely to be supported by the data services feature.
c.) Furthermore, support for deploying other types of Azure services outside of Azure could be added to Arc in the future.
To encapsulate, public cloud providers are churning out services to win a spot in your company's on-premises data center. The growing demand for hybrid cloud and multi-cloud platforms and services has prompted Microsoft to launch Azure Arc as part of its cloud strategy.
So, what does this innovation mean to IT infrastructures? Well, with the demand for single management systems in multi-cloud environments soaring, I think it is more than a viable option. Simply because, once you register with Azure, Microsoft Azure Arc enables enterprises to jump on the hybrid cloud bandwagon regardless of whether they own an old version of Oracle on Linux or a modern one.
I think this strategy is a game-changer as it helps to simplify complex and distributed systems across various environments like on-premises, multi-cloud and on edge. Additionally, Azure Arc can be deemed as a compelling choice for enterprises that want to maintain balance by using traditional VM based workloads and modernized container-based workloads.
Azure Arc can hence distinguish itself as a management tool for hybrid cloud application infrastructure, legacy and modern alike, propelling greater digital transformation. I feel the simplicity of Azure Arc will be enough to lure enterprises to adopt it.
Tags: Cloud, Business Strategy, Data Center
Today, the cloud underpins most new technological disruptions and has proven itself during times of uncertainty with its resiliency, scalability, flexibility, and speed.
According to Gartner, cloud adoption has expanded rapidly, with total spending growing at more than 20% CAGR from 2020 to 2025. But guess what: total cloud spend still makes up 'only' about 10% of global enterprise IT spend. So where is the barrier? What is holding back public cloud penetration and pervasive usage inside enterprises?
Definitely, there are reasons like regulatory compliance, security concerns, shortage of Cloud skilled resources, technology debt, etc. My view is, so far Cloud providers have been solving the Technology Style problem instead of providing real industry-specific business process-centric solutions.
Business executives are demanding a path to digital operational excellence, and most enterprises are looking at the public cloud as the core foundational block for the next generation of innovation. To build a future-resilient and sustainable business, companies need to prepare for a paradigm shift to industry-specific solutions as the applied innovation testing platform.
Gartner estimates that about 5% of organizations currently use an Industry cloud solution. Broader adoption within enterprises will require more vertically targeted composable applications and business processes with “whole product” solutions designed for industry scenarios and process models, rather than technology-oriented solutions that enterprises must largely configure and integrate themselves.
Microsoft is building clouds for priority industries by connecting and customizing public cloud services across firms with tailored products and service bundles.
Today, every enterprise wants to create disruptive capabilities, specific to their business, that will help them stay differentiated and innovative. Industry Cloud would move the dial of the cloud conversation from a cost to a growth narrative by identifying a new area of opportunities that can be actioned.
With Industry Cloud, the public cloud will transition to becoming the composable foundation for business innovation instead of just being a technology style for delivering applications. It will redefine the baseline for long-lasting impact on cloud strategy and adoption as it blurs the lines between established cloud service models such as infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS).
Industry Cloud will empower fast-paced innovation in new business models and revenue streams for its customers. It is the catalyst required for future cloud adoption and will accelerate the digital transformation journey.
Tags: Cloud, Digital Transformation, Business Strategy
The unprecedented times have forced business needs to evolve with a digital-first approach. How can solution architects help with this transition? Read the blog as I answer your questions.
I have an analogy where I compare solution architects to lifeguards. The role of a solution architect is all about solving problems by orchestrating digital components to address an organization’s needs. It is all about planning strategies that are concerned with reducing costs, eliminating redundancies in technology and processes, and preparing for the impact of change through appropriate mitigation and management.
While the pandemic has changed everything, I see it as having created a new, level playing field for everyone: we have an opportunity to start from a clean slate, redefine roles in the cloud computing ecosystem, and broaden the scope of solution architecture and enterprise architecture.
The majority of the population winces at the mention of the "new normal", but the pandemic has imposed drastic adjustments on our lives by changing our perceptions and altering our priorities. With this, we can witness an acceleration of digital adoption, and it has opened various avenues that are yet to be explored.
Businesses have considerably expanded the threat landscape by sending employees home, which has amplified the need to challenge and redefine all business processes and policies. For instance, due to on-site working restrictions, many organizations have turned to the WFH (work-from-home) model. This means that unprotected networks have access to corporate data without proper vigilance, which calls for action to protect the data through Zero Trust design.
Additionally, the narrative in the cloud computing industry has changed from 'build this huge architecture' to 'build a sustainable architecture that proves to be efficient'. It is about much more than Agile development with a series of Minimum Viable Products (MVPs). It is now a continuous process where Agile alone is not enough anymore; you must be capable of constantly shifting to a state of "being agile in using Agile development".
While one cannot foresee the future, one can create logical analogies that are based on the current situation and our first responses. In my opinion, solution architects can take these observations into account while dealing with the current uncertainties -
As a result of the pandemic, businesses have changed. Customer expectations and buying patterns have changed. This calls for a thorough review of business capabilities, which change for many reasons: suspension of processes, adaptation to environmental changes, replacement or extension due to a merger or acquisition, expansion due to significant adoption, modification for improved business resilience, and the introduction of new products and services. Solution architects therefore need to look at the new realities and change their way of working. There is a need for solution architects to unlearn old skills and embrace new ones.
By unlearning the old skills, I don’t mean abandoning the fundamentals of being a solution architect. My experience has taught me that there are 6 guiding principles that an ideal solution architect should swear by:
Let’s dissect these skills to understand them better.
A solution architect is essentially the leadership figure that molds business solutions to fit into the enterprise architecture. This requires an individual to bring technical leadership to the table, which can be achieved only by staying abreast of all the innovations in the fields of cloud computing and solution architecture, and by investing resources in upskilling and reskilling yourself.
With changing ecosystems, technology decisions take the front seat, and an architect's capabilities play a big role in promoting the long-term strength and scalability of the organization. To be relevant in the post-COVID world, solution architects need to upskill themselves to acquire knowledge in the following areas:
A Solution Architect needs to possess the know-how to achieve Rapid Solution Delivery at Scale. This can be possible by an increasing reliance on adaptive reuse, encouraging innovations in lightweight architecture, moving beyond simple Design Thinking, and expanding knowledge about DevOps/DevSecOps and AIOps.
Going forward, Solution Architects will be seen as owners of the overall business case for the solution in context, which will effectively require them to develop expertise in the area of financial engineering.
The catch to evolving successfully here is to learn how to build a business case. Solution architects need to think beyond technological solutions and learn how to build a business case keeping the entire operations of the organization in mind. While TCO and ROI are measurable metrics, solution architects need to think beyond the numbers and focus on the joint business case with technology vendors and cloud providers. Another thing that helps is to think beyond the app-level business case and start thinking portfolio-level. I am, personally, for a shift to a quicker, shorter-term ROI of two years instead of the usual five-year ROI.
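As a back-of-the-envelope illustration of the shorter ROI window argued for above (every figure here is a hypothetical placeholder, not a benchmark), the arithmetic a solution architect would bring to a portfolio-level business case is straightforward:

```python
# Simple ROI and payback maths for a migration investment. All numbers are hypothetical.
investment = 400_000          # one-time migration / build cost
annual_benefit = 260_000      # run-cost savings plus new revenue enabled
horizon_years = 2             # the shorter ROI window discussed above

total_benefit = annual_benefit * horizon_years
roi = (total_benefit - investment) / investment
payback_months = investment / (annual_benefit / 12)

print(f"{horizon_years}-year ROI: {roi:.0%}, payback in ~{payback_months:.0f} months")
# -> 2-year ROI: 30%, payback in ~18 months
```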
When you are a solution architecture leader, you are expected to collaborate with various teams and get to a middle ground; this is the art of negotiation. Every individual's success depends on creating value for the organization by gaining resources, solving a problem, or coordinating mergers, acquisitions, joint ventures, alliances, management buyouts, share issues, and financial restructuring. A solution architect needs to know how to steer out of the misunderstandings that kill potential deals and stifle opportunities.
In the past, the solution architect role was seen as a bridge between the Infra Architect, Network Architect, Security Architect, Storage Architect, Application Architect, and Database Architect. With public cloud adoption, some of these roles are shifting or becoming irrelevant in favor of a new persona: the Cloud Solution Architect. Going forward, Solution Architects need to transition to the Cloud Solution Architect persona, becoming something of a jack-of-all-trades for cloud technologies (with L300+ knowledge of their own and a network of subject matter experts with L400+ skills) and a risk management expert for cloud solutions from the perspective of scalability, resiliency, sustainability, and cost optimization.
Along with the technical skills, a solution architect needs to demonstrate the following soft skills to lead the technical vision of the enterprise -
Take a look at my blog to get a perspective on how these soft skills can help an ideal solution architect to achieve the peak of digital innovation in the current times.
Tags: Agile, Business Strategy, COVID19