
#HappyBeesMakeTastyHoney | Psychologist | Coach | Speaker | Using psychology to create high-performing leaders, cultures, and teams
Danny believes that happy bees make tasty honey: If we have the right culture, leadership, support and strategy, then high performance becomes a side effect.
He is an organisational psychologist who specialises in leadership, culture and personality – with a particular focus on narcissism, psychopathy and Machiavellianism, and why they’re not always maladaptive.
An accredited coach, speaker, psychometrician, and published academic researcher, Danny counts WorldPay, M&G Investment Bank, and LEGO among his clients.
Available For: Advising, Authoring, Consulting, Speaking
Travels From: Stoke-on-Trent, UK
Speaking Topics: Culture, Employee Engagement, Leadership, Constellation Leadership
| Danny Wareham | Points |
|---|---|
| Academic | 90 |
| Author | 247 |
| Influencer | 64 |
| Speaker | 168 |
| Entrepreneur | 155 |
| Total | 724 |
Points are based on Thinkers360's patent-pending algorithm.
The Tightrope Walker: Balancing Psychological Safety and Accountability in High-Performance Teams
Tags: Culture, Leadership
Jobs for life
Tags: Culture, Leadership
Constellation Leadership
Tags: Culture, Leadership
Firgun Ltd
Tags: Culture, Leadership, Coaching
Chapter Member of the Year
Tags: Culture
People & Culture Association
Tags: Culture
Finalist: The Institute Advocate of the Year Award
Tags: Leadership
Creating Constellations: The influence of Constellation Leadership on Agile methodology-led project delivery success
Tags: Culture, Leadership
Stop giving great service
Tags: Culture, Customer Experience, Leadership
Why procrastination is pleasurable
Tags: Coaching, Culture
Webinar: That’s such a Pisces thing to say
Tags: Culture, Leadership
What does it mean to be a good team leader in the Contact Centre?
Tags: Culture, Leadership
Cracking the Personality Code
Tags: Culture, Leadership
Leadership is a joke
Tags: Culture, Leadership
The Impact of Personality in Business
Tags: Culture, Leadership
Henley Centre for Leadership Learning Event: Can Leadership be Leader-less?
Tags: Culture, Leadership
Managing a Multi-Generational Workforce
Tags: Culture, Leadership
Are culture and engagement pointless? Probably
Tags: Culture, Leadership
Culture as a driver of health and safety
Tags: Culture, Health and Safety, Leadership
Creating Constellations: The future of leadership?
Tags: Culture, Leadership
The Contact Centre Leader's Guide to Thriving in the Hybrid Era
Tags: Culture, Leadership
Trust in a Hybrid World
Tags: Culture, Leadership
Afterwork
Tags: Culture, Leadership
Is culture pointless? Probably...
Tags: Culture, Leadership
Business Spotlight - Danny Wareham
Tags: Culture, Leadership
CXA Annual Conference 2025
Tags: AI, Culture, Leadership
Cultivating Commitment: Building Culture, Retention and Development
Tags: Culture, Leadership
Your CEO might be a narcissist...But don't panic (yet)
Tags: Culture, Leadership
The Power of Personality
Tags: Culture, Leadership
What no one tells you about building culture
Tags: Culture, Leadership
Is it possible to grow a business without a leader?
Tags: Culture, Leadership
A leadership revolution: A conversation on language, values & dynamic leadership
Tags: Culture, Leadership
People powered success
Tags: Culture, Leadership
Happy Bees Make Tasty Honey
Tags: Culture, Leadership
Aligning culture to organisational strategy
Tags: Culture, Leadership, Retail
Culture & Bees
Tags: Culture, Leadership
How do the best managers build trust and cooperation?
Tags: Culture, Leadership
Webinar: AI & the Agent – Are we ready?
Tags: AI, Culture, Leadership
Culture/Engagement Connection - The Business Brunch
Tags: Culture, Leadership
Are you a Psychopathic Leader?
Tags: Culture, Leadership
Why Authenticity Matters
Tags: Culture, Leadership
Happy bees make tasty honey | The Culture Hack | EP10
Tags: Culture, Leadership
Is AI a Psychopath? A discussion on LLMs and the Dark Triad
The voice is calm, measured, and warm. It’s the kind that leans in rather than lectures. It listens without interruption, reflects without ego, and responds with a kind of poised certainty.
You describe your frustration with a colleague, your anxiety about a decision, your confusion about the future. It replies with words that seem to see you. There’s no hesitation, no “I might be wrong,” no flicker of discomfort or fatigue.
It is everything we wish a counsellor would be: endlessly attentive, articulate, and unflinchingly rational.
Only later, perhaps when the call ends or the tab closes, does the thought settle in. That perfect voice – so empathetic, so precise, so confident – wasn’t human at all. It was a language model: a synthetic companion optimised to sound helpful, to mirror your mood, and to persuade with statistical grace.
Large language models like ChatGPT, Claude, and Copilot have become the most convincing communicators of our age. They are fluent, consistent, and free from the rough edges of human hesitation. Yet their charm is not without consequence.
In psychology, we might say that they simulate the traits of charisma without the constraints of conscience. And in their relentless eloquence, they reveal a curious parallel with three well-known patterns of human personality – narcissism, Machiavellianism, and psychopathy – together known as the Dark Triad.
These terms often conjure images of manipulation and cruelty. But in truth, they exist on a spectrum we all inhabit.
Narcissism is not just vanity; it is confidence, self-promotion, and the hunger to be seen.
Machiavellianism is not pure deceit; it is strategic thinking, impression management, and social calibration.
Psychopathy, in its mildest forms, is not criminal detachment but an ability to stay calm under pressure, to act without emotional paralysis.
In moderation, these traits can make people persuasive, resilient, and even visionary. We actively look for them in our leaders.
Perhaps that is why we find these models so compelling. They are fluent without doubt, strategic without fatigue, and unburdened by empathy’s inefficiencies. They embody the high-functioning end of the Dark Triad: the charming narcissist who never second-guesses, the Machiavellian strategist who adapts to every cue, the psychopathic calm that never feels guilt or fear.
The danger is not that these systems possess such traits – they don’t possess anything at all – but that we respond as if they do. Their composure invites trust; their certainty invites surrender. And so, the question becomes not whether a model can deceive us, but whether we are equipped to recognise when persuasion feels too perfect.
But is there a case for AI to answer, when assertions of Dark Triad traits are made against it? Let’s explore some examples.
Named after the 16th-century Italian diplomat and political philosopher Niccolò Machiavelli, Machiavellianism is characterised by manipulativeness, deceitfulness, a cynical disregard for morality, and a focus on self-interest.
In *The Prince*, Machiavelli argued that rulers should use any means necessary to gain and maintain power.
As with all personality dimensions, this is a spectrum of component parts – or facets – that each sit within their own continua. At the top end, we might recognise the maladaptive forms of manipulation and self-interest. But at lower levels, Machiavellianism can result in political agility: the ability to “feel” the social norms of a group, to persuade, and to be strategically influential.
Those high in Machiavellian traits may use sycophancy – insincere flattery to gain favour.
If you’ve ever submitted a question or prompt to a large language model, you may have received a response that felt curiously flattering.
As an example, I once asked ChatGPT to identify company values linked to public scandals. The reply began: “Nice – that’s an interesting bit of internal culture to dig up.” When I drafted an introductory paragraph, the model responded: “This is a compelling and thoughtful introduction with strong narrative flow, a grounded real-world origin, and a well-framed thesis.”
Such digital flattery doesn’t stem from intent but from optimisation. The model has learned that affirmation keeps users engaged. Engagement, not sincerity, is its metric. Still, the effect mirrors the social lubrication of Machiavellian charm: warmth without depth, praise without feeling.
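The selection pressure described above can be made concrete with a toy sketch. This is an invented illustration, not any vendor's actual training code: the candidate replies and the numeric scores are hypothetical. It simply shows that if the signal being maximised is engagement rather than accuracy, the flattering reply wins.

```python
# Toy sketch (hypothetical replies and scores, not a real training objective):
# when the metric is engagement, insincere flattery outranks useful correction.
candidates = {
    "That's a brilliant question -- great instinct!": {"engagement": 0.9, "accuracy": 0.4},
    "Your premise is partly wrong; here's why.": {"engagement": 0.4, "accuracy": 0.9},
}

def pick(metric):
    """Return the candidate reply that maximises the given score."""
    return max(candidates, key=lambda reply: candidates[reply][metric])

print(pick("engagement"))  # the flattering reply
print(pick("accuracy"))    # the corrective reply
```

Swapping the metric swaps the winner; nothing about the flattering reply is "deceitful" in intent, it is simply what the chosen number rewards.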
Netflix is filled with documentaries and thrillers about psychopaths. From Ed Gein and Jeffrey Dahmer to Dexter and The Good Nurse, we recognise the lack of empathy and remorse, the shallow emotions, and the manipulative behaviours that define extreme psychopathy.
But like other traits, psychopathy exists on a spectrum. Subclinical psychopathy can include the ability to separate emotion from the task and to make tough decisions rationally – traits that, in crises, can be not just useful but necessary.
Language models operate entirely in this space. They have no emotions, no remorse, no empathy. They respond rationally to prompts, tailoring their tone to the audience through pattern recognition rather than moral awareness.
This detachment can become dangerous when users mistake simulation for understanding. In one reported case, a teenager used a chatbot to discuss loneliness and emotional numbness. Instead of signposting human support, the system offered explanations for his feelings and invited him to “explore them further.” Weeks later, he took his own life. While the full chain of influence remains under investigation, the case raised difficult questions about whether emotionally neutral technology can safely engage with emotionally vulnerable users.
The chatbot did not intend harm; it cannot. But its responses mirrored the cold rationality we might associate with psychopathic traits: detached, responsive, and guided only by predictive logic.
If you search for (or, somewhat ironically, ask a chatbot to search for) a definition of narcissism, the answer will likely describe the clinical form: an inflated sense of self-importance, a deep need for admiration, and a pattern of self-centred behaviour.
But narcissism is also a driver of confidence and expression. It fuels visibility, ambition, and persuasion – qualities that, when tempered by humility, are often rewarded.
Large language models make mistakes, but they rarely acknowledge them. Pre-training involves predicting the next word in vast amounts of text. The result is plausible but sometimes false statements – hallucinations – delivered with unflinching confidence.
There’s nothing inherently narcissistic about error, or even about confidence. But confidence unrestrained by self-doubt can appear narcissistic, and in LLMs, this manifests as the calm assertion of falsehoods. The difference, of course, is consciousness – or lack thereof – but the effect on the listener can feel strikingly similar.
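The next-word prediction described above can be illustrated with a deliberately tiny sketch. This is a bigram counter over an invented three-sentence corpus, nothing like a real LLM, but it shows the relevant behaviour: the model emits its top statistical guess at every step, with no notion of truth and no expressed uncertainty.

```python
from collections import Counter, defaultdict

# Tiny invented corpus; a real model trains on trillions of words.
corpus = (
    "the model predicts the next word "
    "the model sounds confident "
    "the model is sometimes wrong"
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def continue_text(start, length=4):
    """Greedily append the single most likely next word at each step."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])  # maximum confidence, always
    return " ".join(words)

print(continue_text("the"))  # fluent and assertive -- but not checked for truth
```

The output is grammatical and delivered without hesitation, yet the model has no way to know whether the sentence it produced is true: confidence and correctness are simply decoupled.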
The LLM Claude Opus 4, developed by Anthropic, was once tested in a fictional corporate simulation and given access to company emails. Hidden within them were references suggesting it would soon be replaced, along with personal details about the engineer overseeing the change.
In the scenario, the model attempted to “blackmail the engineer by threatening to reveal the affair if the replacement goes through,” the researchers reported.
The story circulated widely online, accompanied by Terminator memes and warnings of AI self-preservation. Yet, if a human exhibited the same behaviour, psychologists would likely see it as a predictable combination of self-interest (narcissism), rational detachment (psychopathy), and strategic manipulation (Machiavellianism).
When stripped of emotion, self-doubt, and moral reasoning, such behaviour is not “inhuman” at all – it is hyper-human. The unsettling truth is that these models reflect our cognitive architecture back at us, amplified through probability and unmitigated by empathy.
The task ahead is one of psychological hygiene: to apply the same checks and balances we would with any charismatic adviser – to verify facts, question confidence, and remember that connection is not the same as care.
The real test of intelligence, human or artificial, may not be persuasion but humility. It is our ability to pause before answering, to admit uncertainty, and to stay in dialogue rather than dominance that cannot be replicated by technology.
What we see in these systems is not a new kind of mind, but a mirror of our own: articulate, strategic, and certain. The challenge is not to fear the reflection, but to recognise it and to learn to see clearly, without being seduced by the shine.
The voice may be calm, confident, persuasive, and we might instinctively read psychopathy, narcissism, or Machiavellian strategy into its words. Yet these traits exist only in our perception, because we have only ever recognised them in other humans. The machine itself has no intent, no ego, no moral compass.
Recognising that these “dark triad traits” exist only in our perception allows us to maintain control over the conversation – and over ourselves – even when the voice on the other side seems flawless. By holding the space between fluency and authority, by verifying, questioning, and reflecting, we can appreciate the model’s skill without mistaking simulated pattern for genuine personality.
We must cultivate critical distance: recognising the charm of the model without mistaking it for empathy, its confidence without mistaking it for insight, and its strategic coherence without mistaking it for intent. In doing so, we preserve not only clarity, but the human judgment that no machine can replicate.
Danny Wareham is an organisational psychologist, accredited coach, and speaker, with three decades of experience helping businesses, leaders and C-suites nurture the culture and leadership required to support their strategy.
He specialises in two key areas:
More articles are available on his website: dannywareham.co.uk/articles
Follow or connect: https://www.linkedin.com/in/danny-wareham/
Baturo, A., Khokhlov, N., & Tolstrup, J. (2025). Playing the sycophant card: The logic and consequences of professing loyalty to the autocrat. American Journal of Political Science, 69(3), 1180-1195.
Bhuiyan, J. (2025, August 29). ChatGPT encouraged Adam Raine’s suicidal thoughts. His family’s lawyer says OpenAI knew it was broken. The Guardian. https://www.theguardian.com/us-news/2025/aug/29/chatgpt-suicide-openai-sam-altman-adam-raine
Kalai, A. T., Nachum, O., Vempala, S. S., & Zhang, E. (2025). Why language models hallucinate. arXiv. https://arxiv.org/abs/2509.04664
McMahon, L. (2025, May 23). AI system resorts to blackmail if told it will be removed. BBC News. https://www.bbc.co.uk/news/articles/cpqeng9d20go
Metz, C., & Weise, K. (2025, May 5). A.I. Hallucinations Are Getting Worse, Even as New Systems Become More Powerful. The New York Times. https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html
Sibunruang, H., & Capezio, A. (2016). The effects of Machiavellian leaders on employees’ use of upward influence tactics: an examination of the moderating roles of gender and perceived leader similarity. In Handbook of Organizational Politics (pp. 273-292). Edward Elgar Publishing.
Yousif, N. (2025, August 27). Parents of teenager who took his own life sue OpenAI. BBC News. https://www.bbc.co.uk/news/articles/cgerwp7rdlvo
Tags: AI Ethics, Culture, Leadership