Quantum computing, insurance, insurtech, sustainability
Available For: Advising, Authoring, Influencing, Speaking
Travels From: London
Speaking Topics: Business Impact of Quantum Computing, The growth and evolution of Insurtech, Digital transformation in Insurance
| Paolo Cuomo | Points |
|---|---|
| Academic | 0 |
| Author | 117 |
| Influencer | 22 |
| Speaker | 130 |
| Entrepreneur | 65 |
| Total | 334 |
Points based upon Thinkers360 patent-pending algorithm.
I’ve had some interesting discussions recently about conference agendas for late 2024 and into 2025. “How do we avoid simply having the word AI everywhere?”
AI is clearly becoming ubiquitous, however much some elements are currently overhyped. Thus we can’t “not include it” in conferences and webinars. What we want is more specific use-cases that are detailed enough to be helpful but broad enough to inform a diverse audience.
AI-enabled agents feel like a topic that sits in that sweet spot. Not to mention fitting nicely with the Digital Minions vs Digital Sherpa trope that I’ve discussed with many of you.
So, if you’re planning a 2025 conference with tech elements, consider this as an agenda item.
For a good paper on the topic search for the recent one from McKinsey:
"Over the past couple of years, the world has marveled at the capabilities and possibilities unleashed by generative AI (gen AI). Foundation models such as large language models (LLMs) can perform impressive feats, extracting insights and generating content across numerous mediums, such as text, audio, images, and video. But the next stage of gen AI is likely to be more transformative.
We are beginning an evolution from knowledge-based, gen-AI-powered tools — say, chatbots that answer questions and generate content — to gen AI–enabled “agents” that use foundation models to execute complex, multistep workflows across a digital world. In short, the technology is moving from thought to action."
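The quoted shift "from thought to action" can be sketched as a tiny tool-using loop: instead of only answering, the system picks tools and chains steps toward a goal. Everything below is illustrative — the tools, the planner and the plan format are hypothetical stand-ins, not McKinsey's description or any real framework's API.

```python
# Minimal sketch of a gen-AI "agent": a multistep workflow where each step
# calls a tool and feeds its output into the next step. In a real agent the
# foundation model would choose the tools and produce the plan itself.

def lookup_policy(query):           # stand-in "retrieval" tool
    return f"policy text for {query!r}"

def draft_email(content):           # stand-in "action" tool
    return f"Draft email based on: {content}"

TOOLS = {"lookup": lookup_policy, "draft": draft_email}

def run_agent(goal, plan):
    """Execute a plan: a list of (tool name, input template) steps.
    Each template may reference {prev}, the previous step's result."""
    result = goal
    for tool_name, arg_template in plan:
        result = TOOLS[tool_name](arg_template.format(prev=result))
    return result

# A two-step workflow: retrieve first, then act on what was retrieved.
output = run_agent(
    goal="renewal terms",
    plan=[("lookup", "{prev}"), ("draft", "{prev}")],
)
print(output)
```

The point of the sketch is the chaining: the "thought" step (lookup) becomes an input to the "action" step (draft), which is the move from chatbot to agent.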
Tags: AI, Emerging Technology, Generative AI
Recently I was asked a tongue-twister of a question: “What are potential impacts that will arise from the cyber-security risks posed by cryptographically relevant quantum computers, and the impact on the insurance market”.
At first I thought it was just a list of buzzwords. On further consideration I realised it is a critically important topic. Below is the answer I gave.
The promise of quantum computing to create opportunities in a wide range of industries is an exciting one. The insurance industry is used to grappling with the impact of innovation and new technologies on its clients. Nowhere is that more true than in the London insurance market which, for centuries, has engaged with and insured every new technology that the world has invented, from the motorcar (which was initially classified by Lloyd’s of London underwriters as “a boat that travels on land”) to crypto wallets and the metaverse. As such, many of the changes from quantum computing will be handled in the same way underwriters have handled the arrival of mainframe computers, the internet, smartphones and the Cloud.
The potential for quantum computers to render much of our current encryption infrastructure obsolete is a specific challenge as it combines an unknown future date with a potential ‘cliff edge’ impact. Comparisons to the Y2K “Millennium Bug” fall short as the lack of a known Day 0 means it is harder to build a sense of urgency. Equally challenging is the fact — as every security expert knows — that you only need a single weakness in an end-to-end system; thus companies cannot resolve this alone but must seek an ecosystem approach.
The sophistication and experience of cyber underwriting has grown exponentially over the past decade, both in terms of understanding risks and engaging clients in risk mitigation activities. Similarly, the approach by insurers and regulators to understanding and preparing for systemic risk is constantly improving. Underwriters, brokers and cyber experts are increasingly considering the risk from QC-enabled decryption and beginning to engage clients on the topic. Market engagement has been ongoing since 2021, with events such as a presentation by the Lloyd’s Market Association and Quantum London to the CISO community, and a public webinar delivered by the Lloyd’s Lab and Quantum London with a panel of global experts. The IIL (Insurance Institute of London) is running a similar educational webinar in April 2023 combining views from academia, underwriting and broking. State-led initiatives such as the US government signing Post-Quantum Cybersecurity Guidelines into law in December 2022 are observed closely and taken into consideration by the underwriting community.
Without doubt, being unprepared for the arrival of cryptographically relevant quantum computers would lead to systemic challenges. The insurance industry globally will be working with technology and communications firms, governments, security experts, individual organisations’ CISOs, and academics to understand the risks and ensure clients take the necessary steps to mitigate them. Where some risks then crystallise into problems, and losses are incurred, insurers will be there to work with their clients to minimise the impact.
Tags: Cybersecurity, InsurTech, Quantum Computing
A picture paints 1000 words…
Most of you reading blog posts and articles online have by now seen lots of AI-generated imagery — even if you often weren’t aware that it was.
One of the better-known tools for making this art is called DALL·E (from OpenAI) and it is now available for free online via Microsoft — just go to Bing and select Copilot towards the top. This will take you to the same type of prompt that we’re already used to for #ChatGPT, but as well as asking it to create outputs of words (poems, homework assignments, away-day agendas) or to summarise documents or emails, you can now use it to create pictures.
It’s a little temperamental so if it doesn’t do your bidding first time do persevere a bit.
For the demo I gave I requested the obligatory examples of cute animals using the following prompts:
1. Please draw me a picture of a happy dog.
2. Please give it a Dutch theme.
3. Please make it more like a digital photo.
and for feline/canine balance
4. Please make a steampunk image of a bad-arse group of cats [x2]
I have only one professional use case so far — I had written a PowerPoint with a lot of images “from the web”. The night before my presentation the conference organiser said they would like to share the presentation afterwards on their website. I didn’t have the rights to several of the images and others had no licence to purchase. I therefore used the GenAI tool to create similar images that fitted the themes of my slides but which I could then use in a publicly-shareable presentation.
What other useful professional use-cases have people seen?
(Oh, and yes, I’m expecting some questions/thoughts (trolling even?) on the appropriateness of using images trained on other people’s artwork and photos. This is a critically important topic, but as things currently stand, what most GenAI tools produce can be freely used by the “creator”, so let’s have that debate at a separate time.)
Tags: AI, Generative AI, Innovation
This is a brief post in response to the current focus on the topic. Something longer will follow.
Q: Will quantum computers defeat encryption?
A: Yes — certain types of encryption, including many that are in common use today.
Q: Do we see that as an immediate, existential threat?
A: No. Or, at least we didn’t a few weeks ago as the timeline for suitable quantum computers to be here was “years away”. The recent paper published by researchers in China has raised some interesting questions.
The paper was published a few weeks back but the news got into the mainstream press on 4 Jan and my inbox has been buzzing ever since.
If you haven’t heard about it, this article by Alexander Martin in The Record gives a useful (albeit quite technical) summary, including the reasons experts are sceptical.
In 1994 Peter Shor spiced up the nascent world of quantum computing theory by presenting an approach (now simply referred to as Shor’s Algorithm) that would allow quantum computers to crack certain types of mainstream encryption.
This was important insofar as it was one of the first specific ‘use cases’ for quantum computers. It was also highly theoretical, as the hardware required to execute it lay in the far distant future.
What is important to understand is that just about any encryption can be “decoded” mathematically. It just takes an awful lot of calculations: even using today’s strongest supercomputers, decoding can take years, decades or even millennia. The way quantum computers do certain mathematical calculations makes them millions of times more efficient. However, the only quantum computers currently available are very early versions which have never been thought capable of applying Shor’s Algorithm in an effective manner.
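To make that “awful lot of calculations” concrete, here is a toy sketch of brute-force factoring — the kind of work that RSA-style encryption relies on being infeasibly slow, and that Shor’s Algorithm would make efficient. The numbers are illustrative, not real cryptography.

```python
# Toy illustration (not real cryptography): factoring by trial division,
# the brute-force approach whose cost explodes as numbers grow. RSA-style
# encryption rests on this asymmetry; Shor's Algorithm would collapse it.

def trial_division_factor(n):
    """Return (smallest factor of n, number of candidates tried).
    The work grows roughly with sqrt(n) -- around 2^512 candidate
    divisors for a 1024-bit key, which is why "mathematically
    decodable" can still mean millennia in practice."""
    d, steps = 2, 0
    while d * d <= n:
        steps += 1
        if n % d == 0:
            return d, steps
        d += 1
    return n, steps  # n itself is prime

# A tiny "key": 10403 = 101 * 103. Trivial here, hopeless at 2048 bits.
factor, steps = trial_division_factor(10403)
print(factor, steps)  # 101 100
```

Doubling the bit length of the key squares the classical search space, while Shor’s Algorithm scales only polynomially — which is the whole threat in one line.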
It’s a bit like what we see with data transfer: these days we get frustrated if it takes more than a minute or so to download a two-hour video to our phone or tablet. In the 1990s when I first started travelling for work I was excited to get a 14.4 kbps modem download speed. That meant synchronising Lotus Notes (there was no Outlook in those days) could require a full hour if there were even just 3 or 4 modest PowerPoint attachments. This was the norm. When broadband came along, measured in Mbps, things were far better for email, but we still didn’t dream of downloading full videos or music albums in moments.
That is where we are supposed to be now with quantum computers. Despite the incredible achievements of the past decade it is still early days. We call it the NISQ era (more here), as in Noisy Intermediate-Scale Quantum, because the number of qubits and their quality is low.
This is supposed to be the period in which we develop detailed ideas of how to use quantum computers in the business world once they are available. (See e.g., the work that people like Esperanza Cuenca Gomez are doing at Multiverse Computing.) It’s not the year we’re supposed to be seeing them truly “in action”.
Therefore the news that researchers have identified how to break a 2048-bit RSA key using a 372-qubit quantum computer is startling. IBM already has machines with more qubits than that.
It’s important to note that this is still theory: their work was on a 10-qubit device and they have ‘extrapolated’ the results, and one thing we know about the quantum domain (thank you, Marvel Cinematic Universe and Ant-Man!) is that it is far from linear in its behaviour.
Also, the publication hasn’t been peer reviewed, and, while there is precedent for deliberately false scientific publications, as we move beyond the post-truth world of the last few years it is hard to see why the authors would choose to publish something that is purposefully wrong. (Though possibly to generate some form of reaction and observe how governments and corporations respond.)
The expectation was (still is?) that when quantum computers are several orders of magnitude more powerful than today they will readily crack certain types of code.
This threat is well understood. The timeline isn’t. I recently wrote a piece (not yet published so no link) that touches on the similarities and differences to the Millennium Bug.
Similarity: Real chance of major (existential?) disruption to IT systems and communication
Similarity: If the right steps are taken and the problem averted, then all the commentators will say “gosh — all that stress for nothing. What a waste of money!”
Difference: We knew the timeline for Y2K. We have no idea for Y2Q.
And when there is no timeline it is far harder to get alignment, to get budgets, to engage the broader set of stakeholders.
Certain actions, such as last month’s signing into law by the US government of the Post-Quantum Cybersecurity Guidelines, are helpful for (a) getting headlines and (b) giving CISOs a lever for engaging their colleagues. But fundamentally no-one is excited about spending large amounts of money changing their encryption infrastructure, and even if they were, every infosec professional will remind you that there only needs to be one weakness: we don’t just need to fix what we own but must be confident that everything upstream and downstream is also made “quantum-proof”.
This piece I wrote back in 2021 with Anahita Zardoshti (with input from Emanuele Colonnella) gives you a sense of things from the point of view of an underwriter or security professional.
Calling in the experts
My go-to person on these topics is Professor Michele Mosca. His Quantum Threat Timeline Report is the leading annual view on this topic.
I’m sure he’ll have a word or two to say on the topic and I’m delighted in fact that we have a webinar planned with him in late March. A lot will have happened by then and I look forward to discussing it with him (and comparing to the previous discussions with him in December 2020 and in the Quantum London “Quantum Computing in Insurance” event with Lloyd’s of London in February 2021).
Even if this all turns out to be a misunderstanding, two problems still remain. One is the point above that we need to move all our IT and comms systems to Post-Quantum Encryption and that takes time and money (more of each than we would like).
The second is a clear and present problem that is currently here but not discussed broadly. Namely that some of that data that can be stolen today will still be of value when quantum computers can decrypt it (in 10+ years based on previously presumed timelines). As such we have the perfect ‘tomorrow crime’ where someone does something bad today but the pain only shows up many years hence.
We call this Harvest-today, Decrypt-tomorrow. It had a moment in the daylight in November 2021 after a Booz Allen Hamilton report talked about the concept, but little happened. I suspect this topic will get more attention in the coming weeks now.
What next?
Over the coming days I’m sure we’ll learn more about this — in particular about the scalability. There was a general request by the world at the back end of last year for no black swans in 2023. It will be scary but oh so interesting if a huge one arrived in the first week of the year.
Tags: Cybersecurity, Emerging Technology, Quantum Computing
Author’s Note. This post is short. The topic is incredibly important and nuanced and worthy of a much longer article. At this point though I’m keeping this short and snappy to ensure maximum attention.
One might think that with a decade’s warning there’s plenty of time to act. Well, a couple of questions?
Even if the answer to these is ‘not currently’ then one might think with a bit of a push momentum could increase.
That however is thinking only about the future risk. Not the present risk.
Most of the petabytes of data we produce every day are of little value to anyone beyond the creator and his/her intended audience. However, as we well know, there is data that can have value to third parties when stolen.
This takes multiple forms: fraud linked to theft of individuals’ social security or credit card details; corporate or personal ‘blackmail’ from confidential information; corporate secrets around product design and manufacturing processes; and so many more.
Most data has an “expiry date” — the point at which it ceases to have value. The expiry date on any piece of data may vary for different ‘users’ of the data.
Take a credit card number. Once the card is expired it has no obvious use to a criminal as it’s hard to make a purchase with an expired card. For the original owner, they may see value in that number at a later date if for example, they want to claim a refund on a product bought with the card.
Similarly, a young professional may at some point see no further value in photos from hedonistic college parties, while someone trying to discredit the individual in the future may find it worth holding onto those photos for a longer period.
Typically, though, data becomes less valuable over time. If that time is a few years then this topic is not relevant. If the data still has value in a decade or two, then someone having access in the future is a genuine (business) problem.
If you think about it, there is plenty of data that will still have value in a decade or more — medical data, banking data, military plans, government decisions, the recipe for Coke…
Data hacks are regularly in the news. In fact it has become so common that they rarely make headlines unless there is some kind of spin — e.g., a huge number of impacted people or particularly juicy data
Those are the ones we hear about. What about those we don’t?
Given most data is encrypted both at rest and in transit stealing it in a usable form requires either a degree of luck (searching for something that accidentally hasn’t been encrypted or where the decryption key is accessible) or careful planning (by inserting yourself at a point where the data is ‘in use’ and (appropriately) not encrypted).
As such, many cyber weaknesses are not a major problem, as the risk and cost — to the data owner, the responsible organization, the insurer etc — of losing well-encrypted data is far less than for unencrypted data (though reputational damage may still be significant).
Enter quantum computing and HTDT.
If we think back to our opening statements, we know that we are getting close to when people will have access to quantum computers that can decrypt many of the common encryption protocols.
Thus, what if fully-encrypted data was stolen and held onto? As long as the data expiry period is longer than the period before decryption is possible there is value to the thief and damage to the original owner.
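That trade-off is essentially the inequality Professor Michele Mosca (referenced below) uses to frame the quantum threat: trouble arises when the data's shelf-life plus the time needed to migrate to new encryption exceeds the years remaining before a cryptographically relevant quantum computer exists. The sketch below is a back-of-envelope version; the numbers in it are illustrative, not forecasts.

```python
# Back-of-envelope harvest-now-decrypt-later (HNDL) risk check, in the
# spirit of Mosca's inequality: if x (data shelf-life) + y (migration
# time) > z (years to a cryptographically relevant quantum computer),
# ciphertext harvested today is already at risk. Example values only.

def hndl_at_risk(shelf_life_years, migration_years, years_to_crqc):
    """True if data stolen today outlives the safety of its encryption."""
    return shelf_life_years + migration_years > years_to_crqc

# Medical records: valuable for decades, slow enterprise migration.
print(hndl_at_risk(shelf_life_years=25, migration_years=5, years_to_crqc=12))  # True
# Soon-to-expire card data: short shelf-life, little HNDL exposure.
print(hndl_at_risk(shelf_life_years=3, migration_years=5, years_to_crqc=12))   # False
```

The uncomfortable part is that two of the three terms (migration time, and especially years-to-CRQC) are estimates, which is exactly why the lack of a known Day 0 makes budgeting so hard.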
Furthermore, when a hack becomes public, changes are made to stop the problem. If data is being stolen and no noise made about it, how long might the security hole remain open allowing a continual siphoning off of that data?
This concept of harvesting data today, and then decrypting it in the future, is what makes this topic one of real urgency. Urgent for information security teams to think about; and urgent for cyber underwriters to consider.
This topic needs broader discussion, and while it may not appear as pressing as post-pandemic recovery, and rebooting the drive around sustainability, it is deserving of a place in your “key topics for 2021” discussion list.
Image from presentation by Prof Michele Mosca to Quantum London (see video here) showing why we need to start thinking now.
The millennium bug
Some commentators draw parallels to the so-called Millennium Bug. As you may recall, this was the fear that as the two-digit year counters in older IT systems moved from 99 to 00 at midnight on 31 December 1999, computers would get confused, with unpredictable and potentially devastating results.
As it turned out we all awoke on the morning of 1/1/2000 and the world was still there. Many decried the vast amounts of money spent on the problem; others pointed out that it was fine as a result of that spend!
The lack of just about any major failure back at the turn of the millennium will likely hamper communication on this risk with plenty of executives suggesting it is fear-mongering on the part of risk managers or CISOs who are looking for additional budget.
Time will tell how things play out.
Tags: Cybersecurity, Quantum Computing, Risk Management
In mid-March David Cabral posted this question on LinkedIn.
Is there a role for Space in ESG?
His post then continued as follows and tagged a number of experts and acquaintances. (Emphasis added by me.)
We are working on an exciting proposition and the role of Space has generated very heated discussions!
The Positives:
Space supports solutions that can be used to improve aspects of our world. A growing number of organisations own a fleet of small satellites that use infrared or radar to look through cloud cover and collect data to generate immediate insights into commodities, business development and environmental change (the price of micro-satellites is below $100 and dozens can be released in low orbit with a single rocket launch). These insights can: 1) generate predictions in areas such as crop yields, oil prices, prosperity; and 2) identify risks for certain industries at a corporate or local level, such as how much pollution a company’s factories emit.
The Negatives:
Using rockets to launch satellites is considered a “dirty” business.
Do the positives outweigh the negatives? Frank feedback would be appreciated!
This would have got me thinking generally. But in the context of the increasing discussions about #GreenTech & #ClimateTech my interest was doubly piqued.
Substantial comment followed as you can see if you go to the article. The parts that most stood out to me were about the low environmental impact of the launch and the plethora of examples of how satellite tech is supporting more sustainable ways of living.
Starting with the latter point: these start with the obvious — GPS allows vehicles to find the fastest route to their destination. While fastest and most fuel-efficient are not synonymous, no one can doubt the emission reduction that comes from being directed straight to your destination, rather than having to drive around for the last five minutes looking for street names and house numbers.
Maybe less obvious to most spectators is the use of satellites in farming. With such a huge share of inputs (energy and water) going into agriculture, any efficiency improvement can have a notable effect. (As a side point on agriculture, the interest in this briefest of articles on Google’s agritech moonshot (called Mineral) was interesting to see.)
A third area is of course the use of satellite (for decades now) to monitor and understand the Earth and — in theory — help us make decisions around how to keep things more sustainable. Many people would comment that the reaction to what these satellites have told us is inadequate.
For those in the insurtech space, the use of satellites to support rapid reviews of claims after events — both major catastrophe events and more localised incidents — both speeds up the claims handling (leading to social good) and can remove the need to fly experts half-way across the world simply to inspect a destroyed building. (Matthew Grant posted a link to the recent InsTech London insurtech podcast about ICEYE.)
Returning to the more novel point of the carbon footprint of a small satellite.
Charles Blanchet shared a link to an article in which Elon Musk says the carbon cost of a launch is offset by the sale of just a few dozen Tesla cars. Bearing in mind that a single launch carries multiple small satellites, it is clear (assuming you believe Elon’s maths) that from a pure launch-carbon perspective there are minimal concerns.
Elon Musk only has to sell 59 Teslas to offset the CO2 from a single SpaceX launch within a year
thenextweb.com
Has a through-lifecycle assessment of the environmental impact of the design, construction, launch and operation as well as deorbiting of a satellite or constellation of satellites been performed? It would be surprising if the launch itself were the major factor although it is the most visible one.
This lifecycle analysis also needs to include the end-of-life handling of satellites. In the case of microsats the expectation is that most will burn up as they fall out of orbit at end of life, but with lower orbits filling up with smaller satellites and legacy junk, there is an increasing need to launch large clean-up satellites, which in turn come with their own carbon footprint.
As with many sustainability-related topics, there is more complexity than the original question might suggest...
Tags: Climate Change, Emerging Technology, Sustainability