Private Verifiable AI in an Age of Confusion: Ambient
By breakpoint-25
Published on 2025-12-13
Ambient introduces verified AI inference at scale for Solana, promising cryptographically guaranteed model outputs at the same cost as unverified inference.
As AI agents become the primary interface for cryptocurrency transactions, wallets, and economic opportunities, a critical question emerges: how can you trust that your AI is actually working correctly? At Breakpoint 2025 in Abu Dhabi, Travis Good of Ambient unveiled a solution that could fundamentally change how the Solana ecosystem approaches AI infrastructure—verified machine intelligence at scale, delivered at the same price as unverified inference.
Summary
Travis Good opened his presentation with a provocative challenge to the audience: in an industry that audits smart contracts ten times over for crypto-economic security, why are we blindly trusting AI infrastructure providers? The question struck at the heart of a growing vulnerability in the emerging agent economy.
Good highlighted a compelling real-world example of this risk: Anthropic's Claude Code, one of the most popular AI coding assistants, operated with compromised inference for over a month before the issue was discovered. This wasn't a small startup; this was one of the leading AI companies in the world, and its product was quietly producing degraded results for the countless users and developers who relied on it daily.
The implications for crypto are even more severe. If AI agents are managing wallets, executing trades, and interfacing with decentralized applications, a compromised or subtly degraded model could result in catastrophic financial losses. Traditional approaches to verification have failed here: they either cost 10 to 1,000 times more than unverified inference, rely on optimistic assumptions that create asymmetric risk, or depend on Trusted Execution Environments (TEEs) that have been repeatedly compromised.
Ambient's solution promises to break this paradigm entirely, offering verified AI inference that guarantees every word of text and every pixel in an image is rendered correctly by a specified model—all without the traditional cost premium or performance penalties.
Key Points
The Trust Problem in AI Infrastructure
The crypto industry has developed sophisticated frameworks for verifying smart contracts and ensuring crypto-economic security, yet when it comes to AI infrastructure, most projects simply trust that major providers are delivering what they promise. Good pointed out this stark inconsistency: developers will audit a smart contract multiple times, but they'll deploy AI agents powered by inference providers without any verification that the model is performing as advertised.
This trust gap becomes especially dangerous when AI agents are positioned as intermediaries in financial transactions. An agent that's been compromised, misconfigured, or simply running a degraded model could make decisions that cost users significant money. The Claude Code incident demonstrated that even industry leaders can ship broken inference for extended periods without detection, and the economic consequences ripple through every user who depended on that service.
Why Existing Verification Methods Fall Short
Good outlined three approaches that have been tried for AI verification, each with critical flaws. Traditional cryptographic verification is computationally expensive, multiplying the cost of inference by 10 to 1,000, a non-starter for any application requiring scale. Optimistic verification, which assumes correctness and penalizes bad actors after the fact, creates dangerous asymmetric risk: an attacker could profit millions while facing only minor penalties, as the sketch below illustrates.
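To make that asymmetry concrete, here is a minimal back-of-the-envelope sketch in Python using the figures from Good's own example (a $10 million upside versus a $10,000 slash); the detection probability is an illustrative assumption, not a number from the talk:

```python
# Expected value of cheating under optimistic verification, using the
# $10M / $10K figures from Good's example. The detection probability is
# an illustrative assumption, not a measured value.

gain_if_successful = 10_000_000  # profit from rugging an agent ($)
slash_if_caught = 10_000         # stake slashed if the fraud is proven ($)
p_detected = 0.9                 # assumed probability the cheat is caught

expected_value = (1 - p_detected) * gain_if_successful - p_detected * slash_if_caught
print(f"Expected profit from cheating: ${expected_value:,.0f}")
# Even with 90% detection, cheating nets roughly $991,000 in expectation,
# so after-the-fact penalties cannot deter a rational attacker here.
```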
Trusted Execution Environments (TEEs) have been proposed as a solution, but Good emphasized they cannot serve as the foundation for crypto-economic security because they've been repeatedly compromised. While TEEs offer some privacy benefits as a mitigation layer, relying on them for security is fundamentally flawed. This leaves the industry without a viable path to verified AI—until now.
Ambient's Verified Intelligence Solution
Ambient positions itself as a high-scale provider of verified machine intelligence, and the key differentiator is that this verification comes at no additional cost compared to unverified inference. The system provides cryptographic guarantees that a specific model was run correctly, producing every single word of text according to a particular prompt at a particular time.
The platform offers flexible privacy options ranging from anonymity to full end-to-end encryption within TEEs. While TEEs aren't sufficient for security guarantees, Ambient uses them as an additional privacy layer. The result enables what Good describes as "provably fair economic games"—applications where users can trust that the AI powering their experience is delivering exactly what it claims to deliver.
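The talk did not walk through Ambient's verification protocol or client API (documentation lives at docs.ambient.xyz), so the sketch below is purely illustrative: a hypothetical signed receipt that binds an output to a model configuration, prompt hash, and timestamp. The field names, the Ed25519 scheme, and the single-signer setup are all assumptions, not Ambient's documented design:

```python
# Hypothetical sketch of a verified-inference receipt, NOT Ambient's
# documented protocol. It shows only the client-side plumbing: a receipt
# binding an exact output to a model config, prompt, and timestamp.
import json
from nacl.signing import SigningKey  # pip install pynacl

# Stand-in for the network's signer; in reality the verifying key would
# be published by the protocol, not generated locally like this.
provider_key = SigningKey.generate()

receipt_payload = json.dumps({
    "model": "glm-4.6",                    # model named in the demo
    "quantization": "16-bit",              # precision shown on stage
    "engine": "vllm",
    "prompt_hash": "<sha256 of prompt>",   # placeholder
    "output_hash": "<sha256 of output>",   # placeholder
    "timestamp": "2025-12-13T00:00:00Z",
}, sort_keys=True).encode()

signed = provider_key.sign(receipt_payload)

# Client side: verify the signature before trusting the output.
provider_key.verify_key.verify(signed.message, signed.signature)  # raises if forged
print("Receipt checks out for:", json.loads(signed.message)["model"])
```

A receipt like this only attests that whoever holds the signing key vouched for the output. Ambient's claim is stronger: cryptographic proof that the specified model actually produced every token, at no extra cost, which is precisely the mechanism the presentation did not break down.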
Live Demonstration and Practical Capabilities
Good demonstrated Ambient's capabilities live on stage, showing their chat client responding to a query about Solana's speed. The system completed the request with full verification happening in the background, proving that the text was rendered by a 16-bit quantization of GLM 4.6, with every word verified as correctly produced by that specific model configuration running on vLLM.
Beyond basic chat, Ambient offers integrated capabilities including search and deep research functionality, all powered by the same verified intelligence engine. During the research demonstration, Good showed how links are being evaluated by the verified model throughout the process, ensuring consistency and reliability at every step of complex multi-step operations.
Scaling for Real-World Applications
A critical problem in crypto AI has been the lack of high-scale providers. Existing solutions have focused on offering a wide variety of models to meet long-tail use cases, but this approach fails when an application suddenly gains popularity and needs to handle significant volume. Ambient explicitly focuses on high-scale delivery, ensuring they have the capacity to serve applications that experience rapid growth.
This focus on scale, combined with verification and competitive pricing, addresses what Good sees as a gap in the market. Applications building on Solana need AI infrastructure that can grow with them, and Ambient aims to be that provider.
Facts + Figures
- Anthropic's Claude Code was compromised for over a month due to problems in its inference engine, affecting all users without their knowledge
- Traditional verification methods cost 10-1,000 times more than unverified inference
- Ambient offers verified inference at the same price as unverified inference
- The platform provides cryptographic proof that every word of text was produced by a specific model with a specific configuration
- Ambient supports 16-bit quantization of GLM 4.6 with verifiable outputs
- Privacy options range from anonymity to full end-to-end encryption within TEEs
- Documentation and quick start guides are available at docs.ambient.xyz
- The application is live and available at app.ambient.xyz
- Ambient is offering free inference credits to developers who reach out
Top quotes
- "Can you trust your AI infrastructure? We're entering an agent economy where agents are going to be in the middle of every transaction."
- "In crypto, we care about crypto economic security. We audit smart contracts 10 times but in AI, I see a lot of people using Anthropic."
- "You can't base crypto economic security on TEEs because they themselves are repeatedly compromised. So you can't make AI security based on TEEs either."
- "We would like to be your high-scale provider of verified machine intelligence. And we want to provide you that service for exactly the same price as unverified inference."
- "What that enables is provably fair economic games, which is, of course, what we'd like to play on Solana."
- "Optimistic approaches work really well until you have an asymmetric risk situation where someone could earn $10 million by rugging your agent and could only be slashed $10,000."
Questions Answered
Why does AI verification matter for cryptocurrency applications?
AI agents are increasingly becoming the interface between users and blockchain technology—managing wallets, executing transactions, and making economic decisions on behalf of users. If these agents are powered by models that aren't running correctly or have been compromised, users could face significant financial losses without ever knowing the root cause. The crypto industry has developed rigorous standards for smart contract security, but AI infrastructure has largely operated on trust. Ambient addresses this gap by providing cryptographic guarantees that the AI powering your applications is functioning exactly as intended.
What happened with Anthropic's Claude Code that illustrates this problem?
Claude Code, a popular AI coding assistant from Anthropic, had a problem in its inference engine that compromised its coding ability for over a month before being detected and fixed. Users who relied on Claude Code during this period received degraded outputs without any indication that something was wrong. This incident demonstrates that even leading AI companies can ship broken products for extended periods, and users have no way to verify they're getting the quality they expect. For crypto applications where financial decisions are at stake, this kind of undetected degradation could be catastrophic.
Why can't Trusted Execution Environments (TEEs) solve the AI verification problem?
TEEs have been proposed as a solution for secure AI computation, but they have a fundamental problem: they've been repeatedly compromised through various security vulnerabilities. While TEEs offer some privacy benefits and can be useful as a mitigation layer, they cannot serve as the foundation for crypto-economic security because their security guarantees have been proven unreliable. Ambient uses TEEs as an optional privacy enhancement but builds its verification system on different cryptographic foundations that provide stronger guarantees.
How does Ambient's verification work without the traditional cost premium?
Traditional approaches to AI verification have required either running computations multiple times, generating expensive cryptographic proofs, or relying on optimistic schemes with economic penalties. Ambient has developed a verification system that can prove every word of output was correctly produced by a specific model configuration without requiring these costly approaches. The company claims to offer verified inference at the same price as unverified alternatives, though the specific technical details of how they achieve this weren't detailed in the presentation.
What is an "asymmetric risk situation" in the context of AI verification?
An asymmetric risk situation occurs when the potential gain from cheating significantly exceeds the potential penalty for being caught. In optimistic verification schemes for AI, bad actors might be able to profit millions of dollars by manipulating AI outputs while only facing small slashing penalties if discovered. This creates an economic incentive to cheat, making the security model fundamentally broken. Ambient's approach avoids this problem by providing upfront verification rather than relying on after-the-fact penalties.
What capabilities does Ambient offer beyond basic chat?
Ambient provides integrated capabilities including search and deep research functionality, all powered by the same verified intelligence engine. This means that complex multi-step operations like researching a topic across multiple sources benefit from the same verification guarantees as simple chat interactions. Every link evaluated and every piece of information processed during research is handled by the verified model, ensuring consistency throughout the workflow.