Opinion

Can We Ever Trust AI Agents?

Decentralized AI gives us a path to trusting the agents that will soon populate our digital lives, says Marko Stokic, Head of AI at Oasis.

Updated Sep 30, 2024, 5:06 p.m. Published Sep 30, 2024, 5:06 p.m.
AI, brain, artificial intelligence (Growtika/Unsplash)

The renowned Harvard psychologist B.F. Skinner once opined that the "real problem is not whether machines think but whether men do." This witty observation underscored what was, for a long time, the crucial point: our trust in technology hinges on human judgment. It's not machine intelligence we should worry about, but the wisdom and responsibility of those who control it. Or at least that was the case.

With software like ChatGPT now an integral part of many work lives, Skinner's insight seems almost quaint. The meteoric rise of AI agents – software entities capable of perceiving their environment and taking actions to achieve specific goals – has fundamentally shifted the paradigm. These digital assistants, born from the consumer AI boom of the early 2020s, now permeate our digital lives, handling tasks from scheduling appointments to making investment decisions.

What are AI agents?

AI agents differ significantly from large language models (LLMs) like ChatGPT in their capacity for autonomous action. While LLMs primarily process and generate text, AI agents are designed to perceive their environment, make decisions, and take actions to achieve specific goals. These agents combine various AI technologies, including natural language processing, computer vision, and reinforcement learning, allowing them to adapt and learn from their experiences.

But as AI agents proliferate and iterate, so too does a gnawing unease. Can we ever truly trust these digital entities? The question is far from academic. AI agents operate in complex environments, making decisions based on vast datasets and intricate algorithms that even their creators struggle to fully comprehend. This inherent opacity breeds mistrust. When an AI agent recommends a medical treatment or predicts market trends, how can we be certain of the reasoning behind its choices?

The consequences of misplaced trust in AI agents could be dire. Imagine an AI-powered financial advisor that inadvertently crashes markets due to a misinterpreted data point, or a healthcare AI that recommends incorrect treatments based on biased training data. The potential for harm is not limited to individual sectors; as AI agents become more integrated into our daily lives, their influence grows exponentially. A misstep could ripple through society, affecting everything from personal privacy to global economics.

At the heart of this trust deficit lies a fundamental issue: centralization. The development and deployment of AI models have largely been the purview of a handful of tech giants. These centralized AI models operate as black boxes, their decision-making processes obscured from public scrutiny. This lack of transparency makes it virtually impossible to trust their decisions in high-stakes operations. How can we rely on an AI agent to make critical choices when we cannot understand or verify its reasoning?

Decentralization as the answer

However, a solution to these concerns does exist: decentralized AI, a paradigm that offers a path towards more transparent and trustworthy AI agents. This approach leverages the strengths of blockchain technology and other decentralized systems to create AI models that are not only powerful but also accountable.

The tools for building trust in AI agents already exist. Blockchains can enable verifiable computation, ensuring that AI actions are auditable and traceable. Every decision an AI agent makes could be recorded on a public ledger, allowing for unprecedented transparency. Concurrently, advanced cryptographic techniques like trusted execution environment machine learning (TeeML) can protect sensitive data and maintain model integrity, achieving both transparency and privacy.
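To make the auditability idea concrete, here is a minimal sketch (not any particular chain's API) of how an agent's decisions could be chained together so that tampering with any past entry is detectable. The `AuditableDecisionLog` class and its field names are illustrative assumptions, not a real ledger implementation:

```python
import hashlib
import json


class AuditableDecisionLog:
    """Toy append-only log: each entry commits to the previous entry's
    hash, so altering any recorded decision breaks the chain."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def _digest(self, agent, decision, prev_hash):
        # Canonical JSON so the same content always hashes identically.
        payload = json.dumps(
            {"agent": agent, "decision": decision, "prev": prev_hash},
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()

    def record(self, agent, decision):
        prev_hash = self.entries[-1]["hash"] if self.entries else self.GENESIS
        entry = {
            "agent": agent,
            "decision": decision,
            "prev": prev_hash,
            "hash": self._digest(agent, decision, prev_hash),
        }
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        """Anyone holding the log can replay the hash chain and detect
        any modification, insertion, or deletion."""
        prev_hash = self.GENESIS
        for entry in self.entries:
            if entry["prev"] != prev_hash:
                return False
            if self._digest(entry["agent"], entry["decision"], prev_hash) != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```

On a real blockchain the chain of commitments is maintained by consensus rather than a single process, but the auditing property is the same: a verifier replays the record and checks that nothing was silently rewritten.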

As AI agents increasingly operate adjacent to or directly on public blockchains, the concept of verifiability becomes crucial. Traditional AI models may struggle to prove the integrity of their operations, but blockchain-based AI agents can provide cryptographic guarantees of their behavior. This verifiability is not just a technical nicety; it's a fundamental requirement for trust in high-stakes environments.

Confidential computing techniques, particularly trusted execution environments (TEEs), offer an important layer of assurance. TEEs provide a secure enclave where AI computations can occur, isolated from potential interference. This technology ensures that even the operators of the AI system cannot tamper with or spy on the agent's decision-making process, further bolstering trust.
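The trust that TEEs provide rests on remote attestation: the hardware signs a measurement of the code it is running, so a third party can check that a result came from the expected program inside the enclave. The sketch below simulates that idea in plain Python; the enclave key, the HMAC-based "quote," and the function names are all stand-ins for real hardware mechanisms (such as those used by Intel SGX-style enclaves), not an actual TEE interface:

```python
import hashlib
import hmac

# Stand-in for a hardware root-of-trust key that, in a real TEE,
# never leaves the processor.
_ENCLAVE_KEY = b"simulated-hardware-root-of-trust"


def enclave_run(model_code: str, inputs: str):
    """Simulate running a computation inside an enclave, returning the
    output plus an attestation quote binding code, inputs and output."""
    output = f"decision-for:{inputs}"  # stand-in for model inference
    measurement = hashlib.sha256(model_code.encode()).hexdigest()
    report = f"{measurement}|{inputs}|{output}"
    quote = hmac.new(_ENCLAVE_KEY, report.encode(), hashlib.sha256).hexdigest()
    return output, quote


def verify_quote(model_code: str, inputs: str, output: str, quote: str) -> bool:
    """A relying party recomputes the expected report and checks the
    quote, confirming the output came from the expected code."""
    measurement = hashlib.sha256(model_code.encode()).hexdigest()
    report = f"{measurement}|{inputs}|{output}"
    expected = hmac.new(_ENCLAVE_KEY, report.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, quote)
```

In real confidential computing the verifier checks a signature chained to the chip vendor's keys rather than a shared secret, but the shape of the guarantee is the same: if the code or the output is altered, verification fails.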

Frameworks like the Oasis Network’s Runtime Off-chain Logic (ROFL) represent the cutting edge of this approach, enabling seamless integration of verifiable AI computation with on-chain auditability and transparency. Such innovations expand the possibilities for AI-driven applications while maintaining the highest standards of trust and transparency.

Towards a trustworthy AI future

The path to trustworthy AI agents is not without challenges. Technical hurdles remain, and widespread adoption of decentralized AI systems will require a shift in both industry practices and public understanding. However, the potential rewards are immense. Imagine a world where AI agents make critical decisions with full transparency, where their actions can be verified and audited by anyone, and where the power of artificial intelligence is distributed rather than concentrated in the hands of a few corporations.

There is also the chance to unlock significant economic growth. One 2023 study out of Beijing found that a 1% increase in AI penetration leads to a 14.2% increase in total factor productivity (TFP). However, most AI productivity studies focus on general LLMs, not AI agents. Autonomous AI agents capable of performing multiple tasks independently could potentially yield greater productivity gains. Trustworthy and auditable AI agents would likely be even more effective.

Perhaps it's time to update Skinner's famous quote. The real problem is no longer whether machines think, but whether we can trust their thoughts. With decentralized AI and blockchain, we have the tools to build that trust. The question now is whether we have the wisdom to use them.

Note: The views expressed in this column are those of the author and do not necessarily reflect those of CoinDesk, Inc. or its owners and affiliates.

Marko Stokic

Marko Stokic is the Head of AI at the Oasis Protocol Foundation, where he works with a team focused on developing cutting-edge AI applications integrated with blockchain technology. With a business background, Marko's interest in crypto was sparked by Bitcoin in 2017 and deepened through his experiences during the 2018 market crash. He pursued a master's degree and gained expertise in venture capital, concentrating on enterprise AI startups before transitioning to a decentralized identity startup, where he developed privacy-preserving solutions. At Oasis, he merges strategic insight with technical knowledge to advocate for decentralized AI and confidential computing, educating the market on Oasis' unique capabilities and fostering partnerships that empower developers. As an engaging public speaker, Marko shares insights on the future of AI, privacy, and security at industry events, positioning Oasis as a leader in responsible AI innovation.
