“Artificial Integrity: AI Beyond Short-Termism”

Interview with Hamilton Mann


Does ‘AI for good’ matter and why should we care? To find out, we spoke to the author of “Artificial Integrity”, Hamilton Mann, a Tech Executive, Digital and AI for Good Pioneer, keynote speaker, and the originator of the concept of Artificial Integrity.

xBN: In a world driven by short-termism, how does ‘AI for good’ matter?

Hamilton Mann: The question isn’t whether ‘AI for good’ matters. The question is, can we afford to build AI for anything else? We should push the boundaries of AI, neither shying away from the remarkable advancements it can offer society nor from the equally great responsibility it imposes on us; the latter is inseparable from the former. The excitement and rush toward AI is no excuse for irresponsibility; quite the opposite.
Relying on AI only makes responsible sense if the system is built to deliver performance while being fundamentally guided by integrity first—especially in outcomes that may, more often than we think, be life-altering.

‘AI for good’ is a responsibility to design AI that elevates humanity, not diminishes it. It’s about more than just creating something new, trendy, or fashionable; given the implications this technology has for society, it’s about creating something that is safe and sustainable.

If we let short-term thinking drive us, we end up with AI technologies that, in the best case, solve the wrong problems or no problems at all, and that, most of the time, create or exacerbate new ones. AI development should focus on what truly matters, not just what is expedient. In inventing the car, we didn’t aim to make the fastest carriage; in harnessing and utilizing electricity, we didn’t aim to make the biggest oil lamp. The uncharted thinking of those who led us to achieve some of the most transformative inventions that have positively changed society should serve as a testament to how we build AI. With such inspirations in mind, we must draw both on the benefits of what has brought good and on the lessons from what has had negative implications for society, to develop AI within the framework of a new definition of what progress truly means, not just for our time but also for the next generation.

AI should amplify human potential, not just serve quarterly profits or the latest trend. Sure, short-termism is tempting. It’s easier to optimize for clicks or quick wins. But the world doesn’t remember those who chase the immediate; it remembers those who redefine what’s possible. ‘AI for good’ matters because it’s about leaving the world better than we found it. It doesn’t have to matter to each and every single person in the world—the fact that it could matter to a few is a necessary and sufficient condition to make meaningful change happen. The business and society case for this is not about temporary or short-term gains but about a lasting ‘return on investment’ that all of us will increasingly seek—as customers, users, and citizens—wherever we are, whatever our differences, preferences, daily needs, or constraints: artificial integrity over artificial intelligence.

xBN: Integrity is often the great unseen. Do you think today’s economy rewards it, or is it overlooked?

Hamilton Mann: Today’s economy does not so much reward integrity as it permanently banishes businesses that lack it. In that sense, having it is not always treated as a bonus, even though this is evolving. It is, however, a necessary condition to last, and that makes integrity a quality no business should overlook. Companies have long recognized that brand reputation and customer loyalty depend on uncompromising, integrity-driven social proof as a do-or-die imperative. The entire history of business is filled with examples of integrity lapses that led ‘Achilles-type’ companies to collapse, such as Enron, Lehman Brothers, WorldCom, Arthur Andersen, and, more recently, WeWork, Theranos, and FTX.
Yet, as businesses integrate AI into their operations—from customer service to marketing and decision-making—all eyes are fixed on the promise of productivity and efficiency gains, and many overlook a critical factor: the integrity of their AI systems’ outcomes.

What could be more irresponsible? Without this, companies face considerable risks, from regulatory scrutiny to legal repercussions to brand reputation erosion to potential collapse. The rule in business has always been performance, but performance achieved through amoral behavior is neither profitable nor sustainable. As Warren Buffett famously said, ‘In looking for people to hire, look for three qualities: integrity, intelligence, and energy. But if they don’t have the first, the other two will kill you.’
While ‘hiring’ AI to run their operations or to deliver value to their customers, leaders must ensure that it is not operating unchecked regarding integrity for the sake of the company’s reputation, values and value. The urgency for artificial integrity oversight to guide AI in businesses is anything but artificial.

xBN: How is Artificial Integrity different from AI Ethics?

Hamilton Mann: Artificial Integrity goes beyond AI ethical guidelines. It represents a self-regulating quality embedded within the AI system itself. Artificial Integrity is about incorporating ethical principles into AI design to guide its functioning and outcomes, much like how human integrity guides behavior and impact even without external oversight, to mobilize intelligence for good. It fills the critical gap that ethical guidelines alone cannot address by enabling several important shifts: 

Shifting from inputs to outcomes: 
  • AI ethical guidelines are typically rules, codes, or frameworks established by external entities such as governments, organizations, or oversight bodies. They are often imposed on AI systems from the outside as an input, requiring compliance without being an integral part of the system’s core functioning. 
  • Artificial Integrity is an inherent, self-regulating quality embedded within the AI system itself. Rather than merely following externally imposed rules, an AI with integrity “understands” and automatically incorporates ethical principles into its decision-making processes. This internal compass ensures that the AI acts in line with ethical values even when external oversight is minimal or absent, maximizing the delivery of integrity-led outcomes. 
Shifting from compliance to core functioning:
  • AI ethical guidelines focus on compliance and adherence. AI systems might meet these guidelines by following a checklist or performing certain actions when prompted. However, this compliance is often reactive and surface-level, requiring monitoring and enforcement. 
  • Artificial Integrity represents a built-in core function within the AI. It operates proactively and continuously, guiding decisions based on ethical principles without needing to refer to a rule book. It’s similar to how human integrity guides someone to do the right thing even when no one is watching.
Shifting from fixed stances to contextual sensitivity: 
  • AI ethical guidelines are often rigid and can struggle to account for nuanced or rapidly changing situations. They are typically designed for broad applicability and might not adapt well to every context an AI system encounters. 
  • Artificial Integrity is adaptable and context-sensitive, allowing AI to apply ethical reasoning dynamically in real-time scenarios. An AI with integrity would weigh the ethical implications of different options in context, making decisions that align with core values rather than rigidly applying rules that may not fully address the situation’s complexity.
Shifting from reactive to proactive decision-making: 
  • AI ethical guidelines are often applied reactively, after a potential issue or ethical violation is identified. They are used to correct behavior or prevent repeated errors. However, by the time these guidelines come into play, harm may have already occurred. 
  • Artificial Integrity operates proactively, assessing potential risks and ethical dilemmas before they arise. Instead of merely avoiding punishable actions, an AI with integrity seeks to align every decision with ethical principles from the outset, minimizing the likelihood of harmful outcomes.
Shifting from enforcement to autonomy: 
  • AI ethical guidelines require enforcement mechanisms, like audits, regulations, or penalties, to ensure that AI systems adhere to them. The AI doesn’t inherently prioritize these rules. 
  • Artificial Integrity autonomously enforces its ethical standards. It doesn’t require external policing, because its ethical considerations are intrinsic to its decision-making architecture. This kind of system would, for example, refuse to act on commands that violate fundamental ethical principles, even without explicit human intervention.
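
As a purely illustrative sketch of that last shift, the following Python snippet (not from the book; the principle names, fields, and thresholds are invented assumptions) shows what it means for a refusal to live inside the decision path itself rather than in an external enforcement layer:

```python
# Minimal, hypothetical sketch: an embedded "integrity check" that runs before any
# action is taken, rather than an external audit applied after the fact.
# The principles, names, and scoring are illustrative assumptions, not a real API.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    affects_vulnerable_group: bool = False
    irreversible_harm_risk: float = 0.0  # 0.0 (none) to 1.0 (certain)

FUNDAMENTAL_PRINCIPLES = {
    "no_irreversible_harm": lambda a: a.irreversible_harm_risk < 0.2,
    "protect_vulnerable_groups": lambda a: not a.affects_vulnerable_group,
}

def act(action: Action) -> str:
    """Refuse, autonomously, any command that violates a fundamental principle."""
    violated = [name for name, ok in FUNDAMENTAL_PRINCIPLES.items() if not ok(action)]
    if violated:
        # No external policing: the refusal is part of the decision path itself.
        return f"REFUSED: violates {', '.join(violated)}"
    return f"EXECUTED: {action.description}"

print(act(Action("send routine reminder email")))
print(act(Action("deny elderly patient care automatically", affects_vulnerable_group=True)))
```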
Artificial Integrity also goes beyond AI guardrails. 

These mechanisms, while foundational, exhibit limitations that highlight the need for a transformative shift towards an approach grounded in Artificial Integrity. Current guardrails such as content filters, output optimizers, process orchestrators, and governance layers aim to identify, correct, and manage issues in AI outputs while ensuring compliance with ethical standards. 

Content filters function by detecting offensive, biased, or harmful language, but they often rely on static, predefined rules that fail to adapt to complex or evolving contexts.
Output optimizers address errors identified by filters, refining AI-generated responses, yet their reactive nature limits their ability to anticipate problems before they arise. Process orchestrators coordinate iterative interactions between filters and optimizers, ensuring that outputs meet thresholds, but these systems are resource-intensive and prone to delivering suboptimal results if corrections are capped.

Governance layers provide oversight and logging, enabling accountability, but they depend heavily on initial ethical frameworks, which can be rigid and prone to bias, particularly in unanticipated scenarios.
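
For concreteness, here is a minimal, hypothetical sketch of how such a reactive guardrail loop is often wired together; the filter rules, optimizer, correction cap, and log are invented placeholders rather than any particular vendor’s stack:

```python
# Illustrative sketch only: a reactive guardrail loop (filter -> optimize -> re-check),
# with a governance log and a cap on correction attempts. The rules, optimizer, and
# cap value are hypothetical; real stacks vary widely.

BANNED_TERMS = {"slur_example"}          # static, predefined rule set (content filter)
MAX_CORRECTIONS = 3                      # process orchestrator cap

def content_filter(text: str) -> bool:
    """Return True if the output passes the static rules."""
    return not any(term in text.lower() for term in BANNED_TERMS)

def output_optimizer(text: str) -> str:
    """Naively redact flagged terms after the fact (reactive, not anticipatory)."""
    for term in BANNED_TERMS:
        text = text.replace(term, "[redacted]")
    return text

def governed_generate(raw_output: str, audit_log: list) -> str:
    """Orchestrate filter/optimizer iterations and log decisions (governance layer)."""
    candidate = raw_output
    for attempt in range(MAX_CORRECTIONS):
        if content_filter(candidate):
            audit_log.append(f"passed after {attempt} correction(s)")
            return candidate
        audit_log.append(f"attempt {attempt}: flagged, optimizing")
        candidate = output_optimizer(candidate)
    audit_log.append("cap reached: returning best effort")  # possibly suboptimal
    return candidate

log = []
print(governed_generate("a response containing slur_example", log))
print(log)
```

Note that every step in this sketch reacts to output that already exists, which is exactly the limitation discussed next.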
Despite their contributions, these guardrails expose critical gaps in the broader mission to create ethical AI systems. Their reactive design means they address problems only after they occur, rather than preventing them. They lack the contextual awareness necessary to navigate nuanced or situational ethics, which often leads to outputs that are ethically sound in isolation but problematic in context. They rely heavily on static, human-defined standards, which risks perpetuating systemic biases rather than challenging or correcting them. Furthermore, their iterative processes are computationally intensive, raising concerns about energy inefficiency and scalability in real-world applications. The limitations of these mechanisms point to the need for a new paradigm that embeds integrity-led reasoning into the core of AI systems. 

Artificial Integrity represents this shift by moving beyond the rule-based constraints of guardrails or the static-based constraints of ethical guidelines to systems capable of proactive ethical reasoning, contextual awareness, and dynamic adaptation to evolving societal norms.

Unlike existing AI systems, Artificial Integrity allows AI to anticipate ethical dilemmas and adapt its outputs to align with human values, even in complex or unforeseen situations. By focusing on contextual understanding, AI systems with Artificial Integrity can make nuanced decisions that balance ethical considerations with operational goals, avoiding the pitfalls of rigid compliance models. 

Artificial Integrity also addresses the pervasive issue of bias by enabling systems to self-evaluate and refine their ethical frameworks based on continuous learning. This adaptability ensures that AI systems remain aligned with diverse user needs and societal expectations rather than reinforcing pre-existing inequalities.

By embedding these safeguards into the AI’s core logic, Artificial Integrity eliminates the inefficiencies of iterative guardrail processes, delivering outputs that are ethically sound and resource-efficient in real time. The transition from ethical AI guidelines and guardrails to Artificial Integrity is a new AI frontier that includes AI Ethics but goes beyond, enabling AI systems to mimic ethical, social, and moral intelligence.

xBN: Your Artificial Integrity Metamodel offers a new framework. How can organizations practically implement it?

Hamilton Mann: To systematically address the challenges of Artificial Integrity, organizations can adopt a framework structured around three pillars: the Society Values Model, the AI Core Model, and the Human and AI Co-Intelligence Model. These pillars reinforce one another, and each focuses on a different aspect of integrity, from AI conception to real-world application.

The Society Values Model revolves around the core values and integrity-led standards that an AI system is expected to uphold. This model requires organizations to start doing the following:

  • Clearly define integrity principles that align with human rights, societal values, and sector-specific regulations to ensure that the AI’s operation is always responsible, fair, and sustainable. 
  • Consider broader societal impacts, such as energy consumption and environmental sustainability, ensuring that AI systems are designed to operate efficiently and with minimal environmental footprint, while still maintaining integrity-led standards. 
  • Embed these values into AI design by incorporating integrity principles into the AI’s objectives and decision-making logic, ensuring that the system reflects and upholds these values in all its operations while prioritizing value alignment over performance when optimizing its behavior. 
  • Integrate autonomous auditing and self-monitoring mechanisms directly into the AI system, enabling real-time evaluation against integrity-led standards and automated generation of transparent reports that stakeholders can access to assess compliance, integrity, and sustainability. 

This is about building the “Outer” perspective of the AI systems. 
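
As a purely illustrative example of the self-monitoring point above, a built-in self-audit might look something like the sketch below; the metric names and thresholds are assumptions made for illustration, not standards proposed in the book:

```python
# Hypothetical sketch: an AI service that self-audits against declared integrity
# standards and publishes a transparent report. Metrics and thresholds are invented
# for illustration.

from datetime import datetime, timezone

INTEGRITY_STANDARDS = {
    "max_demographic_error_gap": 0.05,   # fairness: error-rate gap across groups
    "max_kwh_per_1k_requests": 2.0,      # sustainability: energy budget
}

def self_audit(observed_metrics: dict) -> dict:
    """Evaluate live metrics against declared standards and produce a report."""
    findings = {
        name: {
            "observed": observed_metrics[name],
            "limit": limit,
            "compliant": observed_metrics[name] <= limit,
        }
        for name, limit in INTEGRITY_STANDARDS.items()
    }
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "findings": findings,
        "overall_compliant": all(f["compliant"] for f in findings.values()),
    }

report = self_audit({"max_demographic_error_gap": 0.08, "max_kwh_per_1k_requests": 1.4})
print(report["overall_compliant"])  # False: the fairness gap exceeds the declared standard
```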

The AI Core Model addresses the design of built-in mechanisms that ensure safety, explicability, and transparency, upholding the accountability of the systems and improving their ability to safeguard against misuse over time. Key components may include: 

  • Implementing robust data governance frameworks that not only ensure data quality but also actively mitigate biases and ensure fairness across all training and operational phases of the AI system. 
  • Designing explainable and interpretable AI models that allow stakeholders, both technical and non-technical, to understand the AI’s decision-making process, increasing trust and transparency.
  • Establishing built-in safety mechanisms that actively prevent harmful use or misuse, such as the generation of unsafe content, unethical decisions, or bias amplification. These mechanisms should operate autonomously, detecting potential risks and blocking harmful outputs in real time. 
  • Creating adaptive learning frameworks where the AI is regularly retrained and updated to accommodate new data, address emerging integrity concerns, and continuously correct any biases or errors with regard to the value model that may occur over time. 

This is about building the “Inner” perspective of the AI systems. 
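
Again for illustration only, a built-in safety gate with an attached explanation could be sketched as follows; the risk estimator is a stub and the names are invented, standing in for what would in practice be learned components:

```python
# Illustrative only: a built-in safety gate that runs inside the model-serving path,
# blocks flagged outputs in real time, and returns a human-readable rationale.
# Risk scoring here is a stub; names are hypothetical.

def risk_score(output: str) -> float:
    """Stub risk estimator; a real system would use a learned classifier."""
    risky_markers = ("self-harm", "weapon instructions")
    return 1.0 if any(m in output.lower() for m in risky_markers) else 0.1

def explain(score: float, blocked: bool) -> str:
    """Expose the decision so technical and non-technical stakeholders can inspect it."""
    return (f"risk={score:.2f}, threshold=0.5, "
            f"decision={'blocked' if blocked else 'released'}")

def safe_respond(model_output: str) -> dict:
    score = risk_score(model_output)
    blocked = score >= 0.5
    return {
        "output": None if blocked else model_output,
        "explanation": explain(score, blocked),
    }

print(safe_respond("Here are weapon instructions ..."))
print(safe_respond("Here is a summary of your report."))
```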

The Human and AI Co-Intelligence Model emphasizes the symbiotic relationship between humans and AI, highlighting the need for AI systems to function with a balance between “Human Value Added” and “AI Value Added”, where the synergy between human and technology redefines the core design of our society while preserving societal integrity. Such systems would be able to function across four distinct operating modes: 

Marginal Mode: In the context of Artificial Integrity, Marginal Mode refers to situations where neither human input nor AI involvement adds meaningful value. These are tasks or processes that have become obsolete, overly routine, or inefficient to the point where they no longer contribute positively to an organization’s or society’s goals. In this mode, the priority is not to use AI to enhance human capabilities, but to identify areas where both human and AI involvement has ceased to add value. 
One of the key roles of Artificial Integrity in Marginal Mode is the proactive detection of signals indicating when a process or task no longer contributes to the organization.

For example, if a customer support system’s workload drastically decreases due to automation or improved self-service options, AI could recognize the diminishing need for human involvement in that area, helping the organization to take action to prepare the workforce for more value-driven work.

AI-First Mode: Here, AI’s strength in processing vast amounts of data with speed and accuracy takes precedence over the human contribution. Artificial Integrity would ensure that, even in these AI-dominated processes, integrity-led standards like fairness and cultural context are embedded.

When Artificial Integrity prevails, an AI system that analyzes patient data to identify health trends would be able to explain how it arrives at its conclusions (e.g., a recommendation for early cancer screening), ensuring transparency. The system would also be designed to avoid bias—for example, by ensuring that the model considers diverse populations, ensuring that conclusions drawn from predominantly one demographic group don’t lead to biased or unreliable medical advice.

Human-First Mode: This mode prioritizes human cognitive and emotional intelligence, with AI serving in a supportive role to assist human decision-making. Artificial Integrity ensures that AI systems here are designed to complement human judgment without overriding it, protecting humans from any form of interference with the healthy functioning of their cognition, such as avoiding influences that exploit vulnerabilities in our brain’s reward system, which can lead to addiction. 

In legal settings, AI can assist judges by analyzing previous case law, but should not replace a judge’s moral and ethical reasoning. The AI system would need to ensure explainability, by showing how it arrived at its conclusions while adhering to cultural context and values that apply differently across regions or legal systems, while ensuring that human agency is not compromised regarding the decisions being made.

Fusion Mode: This is the mode where Artificial Integrity involves a synergy between human intelligence and AI capabilities, combining the best of both worlds. In autonomous vehicles operating in Fusion Mode, AI would manage a vehicle’s operations, such as speed, navigation, and obstacle avoidance, while human oversight, potentially through emerging technologies like brain-computer interfaces (BCIs) would offer real-time input on complex ethical dilemmas. For instance, in unavoidable crash situations, a BCI could enable direct communication between the human brain and AI, allowing ethical decision-making to occur in real time, blending AI’s precision with human moral reasoning. These kinds of advanced integrations between human and machine will require Artificial Integrity at its highest level of maturity. Artificial Integrity would ensure not only technical excellence but also ethical, moral, and social soundness, guarding against the potential exploitation or manipulation of neural data and prioritizing the preservation of human safety, autonomy, and agency.  

Finally, Artificial Integrity systems would be able to perform in each mode, while transitioning from one mode to another, depending on the situation, the need, and the context in which they operate. 
Considering the Marginal Mode (where limited AI contribution and human intelligence are required—think of it as “less is more”), AI-First Mode (where AI takes precedence over human intelligence), Human-First Mode (where human intelligence takes precedence over AI), and Fusion Mode (where a synergy between human intelligence and AI is required), the Human and AI Co-Intelligence Model ensures that: 

  • Human oversight remains central in all critical decision-making processes, with AI serving to complement human intelligence rather than replace it, especially in areas where ethical judgment and accountability are paramount. 
  • AI usage promotes responsible and integrity-driven behavior, ensuring that its deployment is aligned with both organizational and societal values, fostering an environment where AI systems contribute positively without causing harm. 
  • AI usage establishes continuous feedback loops between human insights and AI learning, where these inform each other’s development. Human feedback enhances AI’s integrity-driven intelligence, while AI’s data-driven insights help refine human decision-making, leading to mutual improvement in performance and integrity-led outcomes. 
  • AI systems are able to perform in each mode, while transitioning from one mode to another, depending on the situation, the need, and the context in which they operate.

Reinforced by the cohesive functioning of the two previous models, the Human and AI Co-Intelligence Model reflects the “Inter” relations, dependencies, mediation, and connectedness between humans and AI systems. 
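
To make the four modes, and the transitions between them, concrete, here is a toy routing sketch; the task attributes and thresholds are hypothetical simplifications, far cruder than anything a real deployment would use:

```python
# Toy sketch of routing a task to one of the four co-intelligence modes.
# The Task fields and thresholds are illustrative assumptions only.

from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    MARGINAL = auto()     # neither human nor AI adds meaningful value
    AI_FIRST = auto()     # AI takes precedence, integrity standards embedded
    HUMAN_FIRST = auto()  # human judgment leads, AI assists
    FUSION = auto()       # tight human-AI synergy

@dataclass
class Task:
    value_added: float        # 0..1, how much the task still matters
    ethical_stakes: float     # 0..1, weight of moral/ethical judgment
    data_intensity: float     # 0..1, how much large-scale data processing helps

def select_mode(task: Task) -> Mode:
    """Route a task to a mode; re-running this as context changes models mode transitions."""
    if task.value_added < 0.2:
        return Mode.MARGINAL
    if task.ethical_stakes > 0.7 and task.data_intensity > 0.7:
        return Mode.FUSION
    if task.ethical_stakes > 0.7:
        return Mode.HUMAN_FIRST
    return Mode.AI_FIRST

print(select_mode(Task(value_added=0.9, ethical_stakes=0.9, data_intensity=0.9)))  # FUSION
print(select_mode(Task(value_added=0.9, ethical_stakes=0.2, data_intensity=0.8)))  # AI_FIRST
```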

This is the aim of Artificial Integrity. Systems designed with this purpose will embody Artificial Integrity, emphasizing AI’s alignment with human-centered values. 
This necessitates a holistic approach to AI development and deployment, considering not just AI’s capabilities but its impact on human and societal values. It’s about building AI systems that are not only intelligent but also understand the broader implications of their actions. 

xBN: What is the concept of Economic AIquity?

Hamilton Mann: In my book, I define “Economic AIquity” as the pursuit of fairness and equity in artificial intelligence applications, ensuring AI systems do not perpetuate or exacerbate social inequalities.
This term underscores the importance of designing and implementing AI technologies that are inclusive and fair, promoting equal opportunities and treatment for all individuals.

Some recent news has highlighted concerns about Economic AIquity. A class action lawsuit filed on November 14, 2023, in the U.S. District Court in Minnesota alleges that UnitedHealth Group and its subsidiaries, UnitedHealthcare and Navihealth, used AI technology “in place of real medical professionals to wrongfully deny elderly patients care.” The lawsuit claims that the companies were aware the AI model had a 90% error rate in evaluating claims and had overridden determinations made by patients’ physicians, raising serious ethical and economic equity issues in healthcare.

Another illustration has recently come from the UK, where an artificial intelligence system used by the government to detect welfare fraud has been found to exhibit bias based on factors such as age, disability, marital status, and nationality, according to an investigation by The Guardian. An internal assessment of the machine-learning program, which evaluates thousands of universal credit claims across England, revealed that it disproportionately flagged individuals from certain groups for investigation, leading to concerns about fairness and discrimination.

Considering the multifactorial implications AI has in society, the Economic AIquity dimension must be assessed and well-balanced—not as an afterthought but as a core design challenge for the AI applications we aim to deploy in society.

xBN: What advice would you offer to business leaders and policymakers to advance integrity-driven AI?

Hamilton Mann: Not advice so much as a call for leadership: it is no longer enough to create systems that compute value; we must create systems that comprehend values. Integrity must become the north star of every AI algorithm, a fundamental requirement that ensures AI technologies don’t just serve us but serve us well, and don’t just do well but do it right.

What truly matters is Artificial Integrity over Artificial Intelligence, as no amount of the latter will ever replace the necessity of the former. From now on, true leadership readiness must ensure that machines don’t just work with us—but work for us, aligning with our highest ideals.

xBN: Thank you very much for taking the time to share your precious insights.

The interview was conducted by xBN Publisher Isabella Mader at the Global Peter Drucker Forum in November 2024.


Short bio

Hamilton Mann is a Tech Executive, Digital and AI for Good Pioneer, keynote speaker, and the originator of the concept of Artificial Integrity. 
He serves as Group Vice President at Thales, where he co-leads the AI initiative and Digital Transformation while also overseeing global Digital Marketing activities. He also serves as a Senior Lecturer at INSEAD and HEC Paris, as well as a mentor at the MIT Priscilla King Gray (PKG) Center. He is a doctoral researcher in AI at École Nationale des Ponts et Chaussées – Institut Polytechnique de Paris. He writes regularly for Forbes as an AI Columnist, and has published articles about AI and its societal implications in prominent academic, business, and policy outlets such as Stanford Social Innovation Review (SSIR), Knowledge@Wharton, Leader to Leader (Wiley), Dialogue Duke Corporate Education, I by IMD, INSEAD Knowledge, the Harvard Business Review France, Polytechnique Insights, and the European Business Review. He hosts The Hamilton Mann Conversation, a podcast on Digital and AI for Good, ranked in the Top 10 for technology thought leadership by Technology Magazine. He was inducted into the Thinkers50 Radar as one of the 30 most prominent rising business thinkers. He has contributed to the books Driving Sustainable Innovation (Brightline Project Management Institute and Thinkers50, 2024) and Connectedness (Thinkers50 and Wiley, 2024). He’s the author of Artificial Integrity: The Paths to Leading AI Toward a Human-Centered Future (Wiley, 2024).
