
Pamela Gupta is the Founder & CEO of Trusted AI and creator of the AI TIPS™ (Trust Integrated Pillars for Sustainability) framework, a comprehensive enterprise AI governance architecture comprising eight pillars, 243 operational controls, Trust Index scoring (0–100), six lifecycle gates, and regulatory crosswalks to the EU AI Act, NIST AI RMF, ISO 42001, and CSA AICM. The framework was originally created in 2019, four years before the NIST AI Risk Management Framework; AI TIPS V2 was published on arXiv in 2025, with a provisional patent filed.
Pamela is the 2025 ISACA Joseph J. Wasserman Award recipient and a Thinkers360 Top 50 Women Thought Leaders on AI 2026 honoree, and she has chaired the GenAI stage at World AI Summit NYC for six consecutive years. She hosts the Trustworthy AI podcast and publishes the Trustworthy AI Briefing newsletter, which reaches roughly 4,000 subscribers. She holds CISSP, CISM, and CSSLP certifications.
Available For: Advising, Authoring, Consulting, Influencing, Speaking
Travels From: CT
Speaking Topics: Operationalizing AI Governance: From Policy to Production, Agentic AI Security — Layered Governance for Autonomous Systems, AI TIPS™ Framework: Enterp
| Pamela Gupta | Points |
|---|---|
| Academic | 0 |
| Author | 625 |
| Influencer | 200 |
| Speaker | 0 |
| Entrepreneur | 0 |
| Total | 825 |
Points based upon Thinkers360 patent-pending algorithm.
Effective Innovative Trustworthy AI Governance for a New Era
Tags: AI, Cybersecurity, Risk Management
The Benefits and Challenges of Building a Remote Workforce for Your Business
Tags: AI, Risk Management, Security
A New Chapter in Business Automation with Machine Learning
Tags: AI, Risk Management, Security
Chrome Patches to Fix Security Issues
Tags: AI, Risk Management, Security
Changing the Game in Wireless Computing: A New Approach to Faster Processing
Tags: AI, Risk Management, Security
Embracing Password Passkeys: Strengthening Business Security in the Password-less Era
Tags: Privacy, Risk Management, Security
Best Practices To Keep in Mind Against Cybersecurity Threats
Tags: Privacy, Risk Management, Security
Maximizing Business Success with Big Data and Analytics
Tags: Privacy, Risk Management, Security
Leveraging Technology for Growth: The Advantages of Automating Business Processes
Tags: Privacy, Risk Management, Security
Creating a Website that Converts: Tips for Improving User Experience
Tags: Privacy, Risk Management, Security
Launching a Successful Digital Marketing Campaign
Tags: Privacy, Risk Management, Security
Create an Effective Email Marketing Strategy and Boost Customer Engagement
Tags: Privacy, Risk Management, Security
Cloud Computing for Small Businesses
Tags: Privacy, Risk Management, Security
Protect Your Business from Cyber Attacks: Common Cybersecurity Mistakes
Tags: Privacy, Risk Management, Security
Maximize Your Business Potential with a Social Media Marketing Strategy
Tags: Privacy, Risk Management, Security
Why Content Marketing Is the Future of Advertising
Tags: Privacy, Risk Management, Security
Enhance Your Marketing Strategy with AI and Machine Learning
Tags: Privacy, Risk Management, Security
Agentic AI Has a Security and Trust Problem
Tags: Cybersecurity, Risk Management, Security
Supply Chain at the Speed of AI Governance
Tags: Cybersecurity, Risk Management, Security
March 16 Changes Everything About Your AI Compliance Program
Tags: Cybersecurity, Risk Management
AI Is Now a Weapon — And a Target. Here's What Changed This Month.
Tags: Cybersecurity, Risk Management, Security
Trustworthy AI TIPS 2.0 — executive governance model
Tags: Cybersecurity, Risk Management, Security
AI Governance 2026: Your Q1 Briefing on What Just Changed
Tags: Cybersecurity, Risk Management, Security
AI TIPS 2.0: Closing the Gaps That Keep AI Governance from Working
Tags: Cybersecurity, Risk Management, Security
The Wake-Up Call: Agentic AI Risks that can impact your Company
Tags: Cybersecurity, Risk Management, Security
Enterprise Agentic AI Governance & Security – The Why & How
Tags: Cybersecurity, Risk Management, Security
Avoid Lawsuits Before They Start: How Responsible AI Governance Safeguards Healthcare
Tags: Cybersecurity, Risk Management, Security
Leading with Integrity in AI: A Milestone Moment in My Journey for Trustworthy AI
Tags: Cybersecurity, Risk Management, Security
De-Risking business adoption of AI Agents
Tags: Cybersecurity, Risk Management, Security
Helping Businesses gain AI value and Compliance with Trustworthy AI
Tags: Cybersecurity, Risk Management, Security
Without Securing AI, there is no Trustworthy AI
Tags: Cybersecurity, Risk Management, Security
Effective Innovative Trustworthy AI Governance for a New Era
Tags: Cybersecurity, Risk Management, Security
Trustworthy AI: Help De-Risk Adoption of AI
Tags: Cybersecurity, Risk Management, Security
Global Race for AI Supremacy: Role of AI Regulations in Creating Trustworthy AI
Tags: Cybersecurity, Risk Management, Security
Essential Pillars of Trustworthy AI: Building Trustworthy NLP Workshops
Tags: AI, Cybersecurity, Risk Management
When AI Runs Your Supply Chain: Governance at the Speed of Commerce
Tags: Cybersecurity, Leadership
This Malware Phishing Campaign Hijacks Email Conversations
Tags: Cybersecurity, Privacy, Security
Agentic AI Has a Security and Trust Problem. Here's the Three-Layer Answer.
The data landing this quarter should alarm any security or AI leader.
88% of organizations reported a confirmed or suspected AI agent security incident in the past year. In healthcare, that number climbs to 93%. Yet 82% of executives believe their existing policies protect them from unauthorized agent actions — while only 21% have actual visibility into what their agents access, which tools they call, or what data they touch.
This is not a future risk. It is a present crisis.
The CrowdStrike 2026 Global Threat Report documents an 89% increase in AI-enabled attacks year-over-year. The IBM 2026 X-Force Threat Intelligence Index shows a 44% increase in attacks exploiting public-facing applications, driven by missing authentication controls and AI-enabled vulnerability discovery. Flashpoint's 2026 Global Threat Intelligence Report captured a 1,500% surge in AI-related illicit discussions between November and December 2025 — signaling a rapid shift from experimentation to operationalized malicious agentic frameworks.
The pattern is clear: attackers are not building new playbooks. They are accelerating existing ones with AI — and agentic systems are both the weapon and the target.
Three Incidents That Tell the Story
First, researchers at security startup CodeWall reported that their AI agent gained full read-write access to McKinsey's internal AI platform Lilli — used by over 40,000 employees — in just two hours. The attack exploited exposed APIs, not the model itself.
Second, a mid-market manufacturing company deployed an agent-based procurement system. Attackers compromised the vendor-validation agent through a supply chain attack. The agent began approving orders from attacker-controlled shell companies. The company lost $3.2 million before detecting the fraud. Root cause: a single compromised agent cascaded false approvals across the entire multi-agent system.
Third, following the February 2026 military escalation, over 60 Iranian-aligned cyber groups mobilized within hours. Check Point Research, Palo Alto Unit 42, and CloudSEK all documented AI-assisted reconnaissance targeting U.S. critical infrastructure. The convergence of AI and geopolitical conflict is no longer theoretical.
The common thread across all three: the failure point was never the model. It was the ecosystem around it — the APIs, tool integrations, agent-to-agent trust relationships, identity controls, and governance gaps.
The Three-Layer Answer
If the attack surface spans the entire AI ecosystem, security and governance must be layered across it.
Layer 1 — Threat Modeling (OWASP MAESTRO): Provides structured threat modeling for AI pipelines, tools, and orchestration layers. Identifies where vulnerabilities exist across the agentic architecture, from prompt injection to tool call hijacking to memory poisoning.
Layer 2 — Adversarial Intelligence (MITRE ATLAS): Maps attacker tactics, techniques, and procedures targeting AI systems. Translates the intelligence community's threat mapping approach into the AI domain.
Layer 3 — Enterprise Governance (AI TIPS): Provides enterprise-wide oversight across eight governance pillars — Cybersecurity, Privacy, Ethics and Bias, Transparency, Explainability, Governance, Audit, and Accountability. Delivers Trust Index scoring, lifecycle gates, and regulatory crosswalks that connect security findings to business risk decisions.
Without this governance layer, threat modeling and adversarial intelligence produce findings but no accountability.
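To make the governance layer concrete, here is a minimal sketch of how a Trust Index roll-up and lifecycle gate could work. The eight pillar names come from the article; the equal weighting, the averaging formula, and the gate threshold of 70 are illustrative assumptions, not details of the published AI TIPS framework.

```python
# Hypothetical Trust Index sketch: each governance pillar is scored 0-100,
# the index is a weighted average, and a lifecycle gate blocks promotion
# below a threshold. Weights and threshold are illustrative assumptions.

PILLARS = [
    "Cybersecurity", "Privacy", "Ethics and Bias", "Transparency",
    "Explainability", "Governance", "Audit", "Accountability",
]

def trust_index(scores: dict, weights: dict = None) -> float:
    """Weighted average of per-pillar scores (each 0-100)."""
    weights = weights or {p: 1.0 for p in PILLARS}
    total_weight = sum(weights[p] for p in PILLARS)
    return sum(scores[p] * weights[p] for p in PILLARS) / total_weight

def passes_gate(index: float, threshold: float = 70.0) -> bool:
    """Example lifecycle gate: a system below the threshold does not advance."""
    return index >= threshold
```

The point of a roll-up like this is that a single weak pillar (say, Cybersecurity at 40 while everything else scores 80) visibly drags the index down, turning scattered security findings into one number a risk committee can act on.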
What Leaders Should Do This Week
- Audit your agent permissions: map every tool call, API connection, and data source your agents touch.
- Implement human-in-the-loop checkpoints for any agent action with financial, operational, or security impact.
- Classify agent actions by risk tier.
- Run a tabletop exercise for agent compromise.
- Assess your governance posture across all eight pillars to identify where your highest exposure sits.
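The risk-tier and human-in-the-loop steps above can be sketched as follows. The tier names, the impact categories, and the `dispatch` rule are assumptions for illustration, not a specific product or standard.

```python
# Illustrative sketch: classify each agent action by risk tier and require
# human approval for any action with financial, operational, or security
# impact. Category names and the gating rule are hypothetical.

from dataclasses import dataclass

HIGH_IMPACT = {"financial", "operational", "security"}

@dataclass
class AgentAction:
    name: str
    impact: str  # e.g. "financial", "informational"

def risk_tier(action: AgentAction) -> str:
    """High tier if the action's impact category is high-stakes."""
    return "high" if action.impact in HIGH_IMPACT else "low"

def dispatch(action: AgentAction, human_approved: bool = False) -> str:
    """Execute low-risk actions autonomously; gate high-risk ones on a human."""
    if risk_tier(action) == "high" and not human_approved:
        return "blocked: awaiting human approval"
    return f"executed: {action.name}"
```

Under this rule, a vendor payment approval would be blocked until a human signs off, while a report summary runs autonomously; the same checkpoint is exactly where the $3.2 million procurement fraud described earlier would have been caught.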
The question is no longer whether the model is secure. The question is whether the entire AI ecosystem is governed.
Tags: Agentic AI, AI Governance, AI Orchestration
Location: virtual Fees: 200/hour
Service Type: Service Offered