AI Trust and Accountability: Navigating the Challenges of the ‘Black Box’

Trust is the foundation of both business and personal interactions, holding together investments, deals, and workforce relationships. For cloud service providers, trust from customers is essential for innovation, while managers rely on it to empower employees without the need for constant oversight. In both scenarios, trust enables creativity and strategic thinking. However, trust is not built on faith alone—processes and accountability are what truly inspire it.

The rapid adoption of artificial intelligence (AI) in businesses across the Gulf Cooperation Council (GCC) region, particularly in retail, has led many companies to trust AI systems. According to McKinsey’s 2023 research, 75% of regional retail companies now use AI to enhance at least one business function. But what is the basis of this trust, especially when AI’s inner workings often resemble a ‘black box’—opaque and difficult to interpret?

Many AI systems, particularly generative models, behave non-deterministically and offer little visibility into their internal reasoning, making it hard to trace a decision back to its source. Yet when these systems deliver successful results, decision-makers may trust them without thoroughly understanding how those insights were generated. While this may seem harmless, it reverses the natural order: trust should follow due diligence, not precede it.
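To make the point about non-determinism concrete, the minimal Python sketch below imitates temperature-based sampling, a mechanism generative models commonly use to choose their next word: identical input scores can yield different outputs on every run. The function name, logit values, and temperature setting are illustrative assumptions, not the internals of any particular model.

```python
import numpy as np

def sample_next_token(logits, temperature=0.8, rng=None):
    """Sample a token index from model scores using temperature-scaled softmax."""
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# The same prompt produces the same scores for four hypothetical candidate tokens...
logits = [2.0, 1.5, 0.3, -1.0]

# ...yet repeated sampling can return different choices each run.
print([sample_next_token(logits) for _ in range(5)])
print([sample_next_token(logits) for _ in range(5)])
```

Even in this toy setting, the output cannot be reproduced or traced step by step after the fact, which is the crux of the accountability problem the article describes.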

Large language models (LLMs), the technology behind generative AI, are powerful tools that can transform decision-making processes across industries. According to PwC’s Strategy& consultancy, the GCC stands to generate nearly $10 for every dollar invested in generative AI. These models let non-technical users query large datasets and derive valuable insights, fostering an eagerness to trust AI’s capabilities.

However, in highly regulated industries such as banking, financial services, and insurance (BFSI), trust comes with a need for transparency. BFSI processes are built on strict requirements for data consistency and privacy, and the ‘black box’ nature of AI systems makes it difficult to explain how they handle sensitive data, undermining accountability and, with it, trust.

Regulation is one solution to this challenge. The EU’s Artificial Intelligence Act, which became law in August 2024, aims to ensure AI systems are accountable and operate in line with human rights and fundamental values. It categorizes AI risks into four distinct levels, from minimal to unacceptable, and bans harmful practices such as social scoring.

For AI to fulfill its potential, trust must be earned through transparent governance and clear data-handling processes. GCC governments are already taking steps to regulate AI, but businesses can also play a part by adhering to three key principles: data transparency, division of AI tasks, and dynamic scaling. By doing so, they can minimize the risks posed by AI’s ‘black box’ problem and promote safe, effective innovation.

Ultimately, while trust is essential to economic growth, it is equally important to demand that AI earns this trust through accountability.
