Interview: Monique Hodges - The Human-Agent Contract: Leading Organizations Through the AI Shift
The real AI challenge is not technology. It is leadership.
Today, we speak with Monique R. Hodges, whom I had the pleasure of meeting in Shanghai during a GEMBA program on innovation opportunities in China, delivered jointly by IESE Business School and China Europe International Business School. Our conversations there already reflected the themes that define Monique’s work today: organizational transformation, leadership under uncertainty, and the human dimensions of strategic change.
Monique is the Founder & CEO of Gebanah, a consultancy focused on culture, organizational optimization, and responsible transformation. Drawing on more than 15 years of experience across market expansion, product launches, board restructuring, and employee engagement, she approaches leadership not as a static management discipline, but as an evolving practice of alignment between people, systems, incentives, and long-term value creation.
Together, we explore the ideas behind Monique’s manifesto, The Human-Agent Contract, and her vision for the next generation of organisations. In this conversation, Monique argues that the AI debate is often framed too narrowly around tools and productivity. The deeper challenge, she suggests, is organizational: governance, alignment, human capital, and the operating systems companies use to make decisions under uncertainty.
The Illusion of AI Transformation
Your manifesto argues that many companies are automating chaos rather than creating value. What are leaders still getting fundamentally wrong about AI adoption?
Leaders are missing the substance within the noise — and the trap is deceptively simple. Without a complete investment thesis and clearly outlined desired outcomes, AI adoption is directionless. If chaos already exists within the chain of command, AI will not fix it; it will operationalize it.
Ask yourself: If your leaders are living and dying by the quarter, what about AI will change that? If there is little coordination and collaboration in how leaders communicate to their teams today, AI will not fix it. If the answer to an unfavorable P&L is to cut costs, lay off people, or add more KPIs, AI will not rectify that either. This is precisely what I mean by automating chaos.
The Human-Agent Contract addresses this directly. It frames agentic AI as three things simultaneously: a tool, an investment, and a synthetic workforce. Treating it as all three requires massive, strategic changes to operational efficiency — not a rushed rollout.
“AI will not fix chaos; it will operationalize it.”
That distinction between automation and alignment becomes central throughout the conversation — particularly in how organizations think about culture, governance, and execution.
In your view, why will some companies use AI to compound advantage while others use it to accelerate existing dysfunction?
The differentiator is understanding the difference between efficiency and effectiveness.
Recognizing that AI is a tool, an investment, and a synthetic workforce is the starting point. But some leaders will know this and still only chase “efficiency” — and I put that in quotes because it is usually a manufactured KPI anchored to last quarter, or year-over-year at best. Those leaders will accelerate existing dysfunction.
The leaders who capture compound advantage will pursue AI effectiveness alongside efficiency impact. They understand that organizational change is already difficult, and those same challenges are amplified with AI. We all have access to the same LLMs. It is just artificial intelligence, not magic.
Creating compound advantage means using AI output as the raw material for increasingly advanced qualitative decisions — driven by human interpretation, direction, and judgment fed back into the model over time.
And here is what most leaders miss entirely: the compound advantage is a function of the cultural complexity of the company whose language the model digested. Not the model’s design. Not the departments it touches. Not how many people have access to it. The culture. That cannot be faked or shortcut.
Culture, Governance, and Human Capital
For Monique, the underlying issue is not technological maturity alone, but organizational maturity — specifically how companies govern human capital, decision-making, and strategic alignment.
You frame culture not as a soft topic, but as a direct value driver. What changes when a CEO starts treating alignment as a financial discipline rather than an HR issue?
Every CEO knows culture and value are intertwined, but most do not see the cause and effect running throughout their entire organization. There are three types of capital: financial, physical, and human. Only human capital directly drives financial and physical outcomes — and that changes everything about how you govern it.
Culture is not a department metric. It is an output of all business practices, compounding over time with both intrinsic and extrinsic value — not unlike how goodwill is calculated. It is not owned by HR. Human capital encompasses the entire enterprise: collaborators, stakeholders, business units, and then HR. Human capital surpasses and envelops HR, not the other way around.
When a CEO treats alignment as a financial discipline, they stop asking HR to fix what is fundamentally a leadership and governance problem. That shift alone changes the trajectory.
“Culture is not a department metric. It is an output of how the business actually runs.”
What is the real organizational risk of deploying AI before the company has clarity on decision rights, priorities, and accountability?
There are three serious risks: legal, reputational, and financial.
Because AI is a tool that drives the entire business, its impact cannot be contained to a single department. A single AI use case in a factory, for example, simultaneously touches supply chain efficiency, marketing investment, and sales output. Without guardrails, there is no way to guarantee that AI deployment will not affect R&D, insurance liability, regulatory scrutiny, or your ability to grow.
This risk compounds further when you consider how language models actually work. LLMs are models of social reasoning with the human removed. Anthropic’s own research indicates that only 8.7% of users pause to verify what the model produces. So what happens to your legal, reputational, and financial exposure when the social reasoning that produced the training data begins to thin?
This is not a hypothetical. It will happen — and organizations that have not established clear decision rights and accountability will have no defense when it does.
The Human-Agent Contract
These tensions ultimately lead to the core framework behind Monique’s work: what she calls The Human-Agent Contract.
You introduce the idea of a Human-Agent Contract. What does that mean in practice for executive teams making high-stakes decisions today?
In practice, it means leaders must actively choose their AI capacity — and that choice cannot be deferred.
We are all operating in extremely uncertain, globally volatile times, and decisions still must be made. The Human-Agent Contract asks executive teams two foundational questions: Does your team have the skills to interpret and act on AI outputs? And is the executive team aligned on the company’s topline priorities?
As we move into Q3, what is your North Star? Are decisions consistently being made according to those same priorities across every level of the organization?
We have not seen a technological shift of this magnitude since the internet. I believe executive alignment will be the differentiator. Just as executive teams govern physical and financial capital, they must govern human capital as a measurable asset and treat culture as an output that can be architected and scaled.
“Alignment is the new multiplier.”
What will separate the winning executive teams is their readiness to leverage AI as a driver of business — not a dependency to do business. Decision-making must be grounded in the quality of context, outputs, and historical understanding.
And this is really exciting — the most forward-thinking work I am seeing right now involves organizations deploying experienced people who understand legacy systems to vet AI outputs, lead recoding efforts, and practice what I call digital archaeology: surfacing latent intelligence that would otherwise be lost. That is where the real value transfer happens.
Boards, Execution, and Strategic Readiness
If AI is becoming part of the operating structure of the enterprise, the next question is whether leadership teams and boards are actually prepared to govern it at scale.
Where do you see the biggest gap between how boards talk about AI and how organizations are actually prepared to operationalize it?
The gap is in understanding how language models work — and what that means for long-term value creation.
Boards are largely focused on how much money AI can save. That conversation is incomplete and, over time, dangerous. The board-level conversation must also prepare for two things: the plateau and the implications.
The plateau: Agentic AI depends on the social complexity of human language production. But when the human is progressively removed from that process, models quickly begin training on outputs generated by other language models — or themselves. The result is compounding, statistically average output that is plausible but hollow.
Boards must ask: how will the business continue to replenish the stream of human-generated data that the model depends on?
The implications: Systematic AI deployment narrows the diversity of outputs over time. You lose minority viewpoints — the ones that create market disruption. You lose rare knowledge — the kind that drives differentiation. You lose unusual formulations that find efficiencies, and edge-case perspectives that power value propositions. These do not disappear all at once. They gradually thin and vanish.
And this is what I believe boards are not yet reckoning with: when AI deployment systematically reduces the social complexity it depends on — through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work — the technology begins undermining the very conditions that made it valuable in the first place.
“The dangerous part is not the failures. It is the successes.”
Every efficiency gain, every layer of human judgment removed, quietly narrows the substrate the model feeds on. By the time the results show up in your P&L, it is too late.
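Monique’s “plateau” echoes what machine-learning researchers call model collapse: when each generation of a model trains largely on the previous generation’s most typical outputs, the diversity of the data narrows. The toy simulation below is purely illustrative, not part of Monique’s framework. A Gaussian stands in for the distribution of human-generated language, and the truncation step mimics a model favoring its most probable outputs.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human" data, with a wide spread of viewpoints.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for gen in range(1, 9):
    # Fit a simple model to whatever data currently exists.
    mu, sigma = data.mean(), data.std()
    # The next generation trains only on the model's own output and,
    # like truncated (top-p style) sampling, keeps typical productions.
    raw = rng.normal(mu, sigma, size=40_000)
    data = raw[np.abs(raw - mu) < 1.5 * sigma][:10_000]
    print(f"generation {gen}: spread = {data.std():.3f}")
```

After a handful of generations the spread collapses toward the mean: a quantitative shadow of the minority viewpoints, rare knowledge, and edge-case perspectives Monique describes thinning away.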
Many leaders are under pressure to move fast on AI. How should they distinguish between responsible speed and reckless adoption?
Start by searching for signal over noise — and if a full investment thesis sounds like too much, think of it as a pro/con list you can work through over morning coffee.
Ask yourself: To what end will this investment improve the business? What are my existing challenges — and will AI address them or amplify them? What is our exit plan if it does not work? What is our company strategy, and does this AI investment serve it? How would I describe public perception of my company, and what is driving it?
Now read your answers back. Does the AI investment help or hurt? Does your board have enough context to know the difference? How far down the leadership chain can you go and get the same answers you just wrote?
That last question is where most organizations discover the real problem. Responsible speed requires that the answer is consistent from the boardroom to the floor. Reckless adoption is when only the C-suite can answer it.
What are the clearest signs an organization is not ready for agentic AI?
All businesses are ready for agentic AI. Their leaders, however, must choose at what capacity they want to deploy it — and be able to handle that choice.
The clearest warning sign is when agentic AI is treated as a magic solution rather than an operating-model change. When an organization already has misaligned priorities, unclear decision rights, weak governance, and low trust, those conditions do not disappear with AI deployment.
They show up as vague mission statements, broken messaging across levels, shadow processes, weak governor roles, and constant human rework of AI outputs.
Culture is not a standalone problem. It reflects how the business actually runs. Any existing leak will be amplified — not corrected — by autonomous systems operating at scale.
From Theory to Practice
Beyond diagnosis, Monique’s work through Gebanah focuses on translating these organizational questions into measurable intervention and operational change.
How does Gebanah’s methodology help leaders move from abstract concern to measurable intervention, concrete use cases, and real proof of business impact?
Leaders set clear goals in Q1, and then the year accelerates and those plans drift. Strategy should not. What I do is help leaders maintain alignment so they can keep moving toward their North Star regardless of conditions.
The leaders I work with want their executive teams, shareholders, and cross-functional groups aligned — from the boardroom to the innovation lab — with the coordination and collaboration required to scale innovation across the organization.
They know alignment matters. What they do not have is the time to manage it cross-functionally while also running the business. That is where Gebanah comes in: turning concern about misalignment into measurable intervention and real business impact.
It starts by identifying where the leak is happening and what it is costing — not through abstract culture talk, but through hard numbers.
The Level One Diagnostic provides the roadmap: a Capital and Culture Risk Assessment and Remediation plan that quantifies exposure, maps priorities at risk, and links improvements directly to governance, valuation, retention, and operational stability.
The Gebanah Strategic Alignment framework is a linear, sequential delivery model — designed so that each phase builds on the last and every intervention can be tied to a specific business outcome.
The Next Generation of Organizations
Ultimately, the interview returns to a larger question: what kinds of organizations will emerge successfully from this transition — and what kinds will quietly lose differentiation along the way?
Looking ahead three to five years, what will define the leaders and companies that successfully evolve their organization and culture through AI?
The leaders and companies that win will treat AI as an operating-model redesign — not a tool rollout. Alignment is the new multiplier.
The winning organizations will have a true Human-Agent Contract: strong governance, clear decision rights, disciplined deployment, trusted data, and leaders who can translate strategy into day-to-day execution. The best leaders will not ask how much AI they can add. They will show measurable gains in productivity, quality, speed, retention, and valuation.
That will be achieved through three things:
1. Mission clarity. High performers already stand out in governance, deployment, and data availability. They will rank priorities clearly so AI optimizes for real value — not just activity.
2. Cascading communication. Strategy will be translated cleanly from boardroom to frontline so teams and systems act on the same intent. The strongest organizations will know precisely what humans own, what agents own, and where escalation or override is required.
3. Governor talent at scale. Managers will evolve from doers into orchestrators — supervising AI, monitoring risk, and protecting critical priorities. They will reduce shadow processes, disengagement, and rework by building credibility and consistency into how work gets done.
And here is what the data is already telling us: over time, AI makes experiences and stories less unique. The leaders who win will leverage humans, not eliminate them. You may look at your own numbers and believe your company is winning — but when you zoom out, you may find you have lost your competitive edge and become a “me too” product in a field of companies that all fed from the same models. That is individual gain and collective loss. The separation between winning and losing organizations will happen in real time, and the differentiator will not be the AI. It will be the humans directing it.
This conversation with Monique R. Hodges ultimately reframes AI not as a technology story, but as a leadership and organizational design challenge.
Across governance, culture, accountability, and execution, Monique argues that the companies creating durable advantage will not necessarily be those deploying the most AI, but those most capable of aligning human judgment, strategic clarity, and operational discipline around it. Her central warning is equally clear: AI does not neutralize dysfunction. It scales it. Misaligned incentives, weak governance, fragmented communication, and short-term thinking do not disappear under automation; they become embedded into the operating system of the enterprise itself.
In the years ahead, the dividing line between successful and struggling organizations may not be access to the same models or technologies. It may be whether leaders understand that the true differentiator was never the AI itself, but the quality of the human systems directing it.