Thinkers360

Why Your AI Ethics Policy is Most Probably a Paper Tiger

Jan



Today, I remembered a conversation I recently had in a pretty cold corner of a private lounge in South Jakarta. The hum of the city’s relentless traffic felt far away, but the tension inside the room was palpable. Across from me sat a commissioner of one of Indonesia’s largest family-owned conglomerates. Between sips of an over-extracted black coffee, he pointed to a thick, glossy binder on the table, the company’s brand-new "AI Ethics and Governance Framework."

"We’ve spent six months on this with a top-tier consultancy," he said, looking genuinely relieved. "Every value is there. Transparency. Fairness. Inclusivity. We’re fully covered, aren’t we?"

I was looking out at the afternoon gridlock on Sudirman Street and thought about a hot chocolate teapot. The binder was sophisticated. It was posh. It looked fantastic in the annual report. And during a real technological crisis, it was utterly useless. It was a classic case of “CEO’s New Clothes." In the rush to look "AI-ready," many of our CxOs in Jakarta and beyond are walking into a digital storm stark naked, draped only in the fine silk of PR-friendly buzzwords.

Sudirman Scramble: Speed vs. Substance

Let’s be brutally honest. Most AI ethics policies in our country today are what I call "Paper Tigers." Designed by marketing and legal teams to appease shareholders and regulators, not by GRC (Governance, Risk, and Compliance) experts to manage the messy, unpredictable reality of machine learning. We are currently in the middle of a digital gold rush in Indonesia. From Fintech startups in the Mega Kuningan area to legacy banking giants in Thamrin, everyone wants a piece of GenAI pie. But in this scramble for the "first-mover advantage," safety is often treated like a seatbelt in a Jakarta online taxi. Present for appearance, but rarely actually clicked into place.

The problem? Agentic AI doesn't care about your decks or your vision and mission statement. When you make a bold move from simple chatbots and start deploying autonomous agents, systems that can execute trades, manage customer databases, or negotiate with vendors without a human in the loop, you aren't just "upgrading your tech." You are delegating your corporate authority to an algorithm. And if your governance framework is purely aspirational, you have essentially handed the keys of the company’s multi-decade reputation to a black-box system that doesn't understand the concept of a "fiduciary duty."

"Sungkan" Factor: Silent Killer of (IT) Governance

In Indonesia, we have a cultural nuance often called "sungkan" a.k.a “gak enakan”. A deep-seated reluctance to challenge authority, deliver bad news, or "correct" a superior’s vision. In the boardroom, this translates to a dangerous, expensive silence. When the CTO, CIO, or a flashy external vendor says the new AI model is "fully optimised and ready for deployment," very few Directors have the technical confidence or cultural "permission" to ask uncomfortable questions.

I saw this play out recently with a multinational retail banking giant. They had implemented an AI model to "predict" customer creditworthiness and automate loan approvals. Technically speaking, it was a masterclass of operational efficiency. They were cooking. In reality, the model had developed a subtle bias against applicants from certain rural provinces outside Java. Simply not due to the developer’s team being unconsciously biased, but because the training data fed was from old-school credit gatekeeping and regional economic disparities.

Because of the "sungkan" culture, the junior tech resources who noticed the drift didn't feel empowered to stop the launch. The human reviewers, lulled into a false sense of security by the "trusted" AI, were rubber-stamping the machine’s output. This is what we call Automation Bias, and it is a GRC nightmare. It took a massive spike in non-performing loans and a brewing PR scandal for them to call for a deep-dive audit finally. It required a hard-coded intervention. A recalibration of the risk logic and a complete overhaul of their data governance.

Anatomy of a Real AI Audit: Five Pillars for C-Suite

If you are a commissioner or a director, you should stop looking at high-level checklists and start demanding "live" audits. In my experience, a robust AI audit must rest on these five non-negotiable pillars:

1. Data Lineage and Provenance ("Where" and "Why")

In Indonesia’s corporate world, data is often a "rojak" of fragmented legacy systems. If you don't know exactly where the data originated, and whether it was obtained ethically and legally, you cannot govern the AI. An AI is only as honest as its training data.

2. Adversarial Red Teaming

You need to hire people whose only job is to be "naughty." They should try to break your AI, trick it into leaking confidential board minutes, or bypass safety filters. If a bored teenager can trick your corporate chatbot into giving away trade secrets by using a clever "jailbreak" prompt, your 40-page Ethics Policy isn't worth the paper it’s printed on.

3. Localised Bias Testing

Global AI models are often trained on Western datasets. They don't understand the nuances of Indonesian culture, our varied dialects, or our socio-economic realities. Testing for "fairness" in a London or San Francisco context is functionally useless for a business operating in Surabaya, Medan, or Makassar.

4. Explainability (XAI Factor)

If the AI rejects a customer’s application or flags a transaction as fraudulent, can your staff explain the "Why"? A "black box" that says "Trust Me" is a legal and regulatory liability that no Director should ever sign off on.

5. Model Drift and Continuous Monitoring

AI is not a "set and forget" asset like a laptop or a desk. It is more like a living organism. It changes as it interacts with new data. You need something like a permanent pulse check. A dashboard that shows the "health" of the AI in real-time. Not just a one-off certificate from a vendor.

The Shadow AI Pandemic: Beat the Traffic, Breach the Data

While you are sitting in committee meetings debating high-level strategy, your staff are already using AI in ways that would make your Chief Risk Officer have a heart attack. This is the "Shadow AI" pandemic.

Think about the typical overworked analyst in a Kuningan office. They want to beat the 5 PM Jakarta traffic. To save three hours of work, they copy-paste a messy, confidential Excel sheet containing sensitive, or confidential client data into a free, public version of ChatGPT to "clean it up and summarise." It feels harmless. Their productivity boosts. It feels efficient.

But that data is now part of a global, public training set. Your company’s intellectual property has just been leaked into the wild, and you don't even have a record of it happening. Cybersecurity in 2026 isn't just about firewalls and antivirus; it’s about Data Sovereignty. It’s about creating "Walled Gardens". Secure, enterprise-grade AI environments where your employees, all of them, including you, can be productive without leaking the "crown jewels." If you don't provide the tools, your employees will go over the fence to find them.

Cloud Computing: "Shared Responsibility" Trap

I often hear a continuous, almost charmingly naive myth in Jakarta’s boardrooms: "We’ve moved to the Cloud (AWS, Google, or Azure), so security and compliance are now their problem."

This is a dangerous lie that has led to some of the most significant data breaches in recent history. In the industry, we call it the Shared Responsibility Model. In short, the cloud provider is responsible for the "security of the cloud" (the hardware, data centres, and physical pipes). You, on the other hand, as the cloud consumer, are responsible for "security in the cloud" – the data, access logs, and AI agents you utilised included.

Therefore, if you integrate an AI agent into your “cloud stack” without a solid, firm Identity and Access Management (IAM) policy and procedure, you, truthfully, leave your back door wide open. Having seen the case in which a poorly configured AI agent, designed to "optimise" cloud costs, accidentally granted itself administrative privileges and deleted a backup server because it deemed it "redundant." A cyber-attack from outside? Absolutely, no. A governance failure from the inside, it is.

Agentic AI and Kill-Switch Culture

As we move toward Agentic AI, as the systems that have the "agency" to act on our behalf, the concept of a "Kill-Switch" becomes paramount. We are talking about AI that can book flights, move funds between accounts, or change a manufacturing blueprint in a factory in Cikarang.

The question for the Board is: Who has the finger on the button and wants to roll up their sleeves?

First and foremost, IT Governance must evolve to accommodate Human-in-the-Loop or, at the very least, Human-on-the-loop approaches for high-stakes and or strategic decisions. Wouldn't you hire a procurement officer and give them a 1-billion-rupiah credit limit without their first line manager’s signature? Then, why would we provide a similar authority to an algorithm that doesn't feel the weight of responsibility? Accountability? We need to foster a "Kill-Switch Culture" where stopping vague, ambiguous processes is celebrated as much as launching a new feature.

AGI: Preparing for Final Frontier

The conversation inevitably turns to Artificial General Intelligence (AGI). While some dismiss it as "sci-fi faffing," the rapid trajectory of Agentic AI suggests we are closer to the "Ghost in the Machine" than many are comfortable admitting. For a policymaker or a commissioner, AGI is the ultimate governance challenge because it represents a shift from "Narrow AI" (doing one thing well) to "General AI" (doing everything as well as, or better than, a human).

If we cannot govern a simple chatbot that occasionally hallucinates legal advice today, how on earth do we expect to govern a system that matches human intelligence across every domain?

The preparation for AGI doesn't start with futuristic laws; it starts with fixing your GRC basics today. It begins with cleaning up your data silos. Start with Data Governance. Then continue with a Cybersecurity posture that "assumes breach" rather than "hopes for the best." And most importantly, it starts with a culture of Informed Scepticism. We need Directors who aren't afraid to look like the "slowest" person in the room by asking for a technical explanation of how a decision was reached.

Indonesian Context: Leading or Following?

As a nation, Indonesia has a choice. We can either be a "testing ground" or merely a market for global AI companies, taking their black-box models and hoping for the best, or we can put ourselves in the global Trusted AI maps.

Our regulators are watching. OJK and Indonesian Central Bank increasingly focus on digital operations, not only on transformation but also on resilience. The organisations that will thrive in this new era are those that can demonstrate their AI is safe, ethical, and governed. In the global marketplace, Trust is the new currency. If you can’t prove your AI won't hallucinate a fake financial report or leak customer data, nobody will want to do business with you.

Final Thoughts: Putting Paper Tigers Away

So, as you head into your following strategic review or board meeting in one of those sleek Sudirman towers, I challenge you to look at your AI Ethics policy with fresh eyes.

Is it a living, breathing part of your GRC framework, integrated into your Cybersecurity response plan and your IT Governance protocols? Or is it just "corporate wallpaper"? Something that looks nice and reassuring but doesn't actually hold anything up when the wind starts to blow.

"CEO’s New Clothes" is a cautionary tale about the dangers of vanity and the fear of appearing "un-hip" or "un-tech." But in the world of high-stakes corporate leadership, that vanity can lead to a multi-million dollar fine, a destroyed reputation, and a permanent loss of customer trust.

Time to put away the paper tigers. Time for some proper, international-standard rigour. At the end of the day, when the regulators come knocking, and the algorithms start acting up, "we meant well" simply won't make the cut.

Let’s stop the faffing. Let’s get to work. Keep going and keep building secure and innovative AI.

By Goutama Bachtiar, MAIB, MBA, FRSA, FFIN, FPT, MAICD, TAISE

Keywords: AI Ethics, AI Governance, GRC

Share this article
Search
How do I climb the Thinkers360 thought leadership leaderboards?
What enterprise services are offered by Thinkers360?
How can I run a B2B Influencer Marketing campaign on Thinkers360?