U.S. Commodity Futures Trading Commission Chairman Michael Selig said blockchain could play an important role in verifying AI-generated content, arguing that the technology could help distinguish between real media and synthetic output amid growing concerns about misinformation.
During an appearance on The Pump Podcast on Thursday, Selig was asked by host Anthony Pompliano about the use of AI-generated memes and images in the market, and whether intent matters or whether such content should be completely restricted. He told Pompliano:
The private market has solutions. Blockchain technology is amazing. If we can time-stamp and see that each meme or AI-generated post has an identifier, we can verify whether it is real or generated by an AI. It’s important to implement these technologies here in the United States.
He said regulators are focused on maintaining U.S. leadership in the cryptocurrency space, adding that “AI is not possible without blockchain.”
Asked how regulators are approaching AI agents as autonomous trading becomes more prevalent in financial markets, and how they distinguish automated tools from fully autonomous agents when deciding how to regulate the latter, Selig responded:
I’m concerned that we’re over-regulating and constricting some technologies here in the United States…I’m taking a very minimal effective regulatory approach. There, we are making sure that we are regulating…actors, not software developers. Software developers build tools but are not actually involved in financial transactions.
Selig said the CFTC is evaluating how AI models are used in the market and stressed that enforcement should focus on participants engaged in financial activities.
Related: AI and stablecoins are winning despite the crypto market downturn in 2026
Blockchain and identity tools emerge for AI content verification
A central challenge as artificial intelligence proliferates is distinguishing real content from synthetic media. Selig’s comments reflect a broader push among policymakers and developers to use blockchain for content verification and provenance.
One approach is an identity verification system that aims to confirm that an account belongs to a real, unique person and not a bot. The most notable example is Sam Altman’s World, whose World ID protocol allows users to prove their humanity without revealing personal data. The system uses encrypted biometric iris scans stored on the user’s device, but has drawn criticism for privacy risks and possible coercion.
In March, World launched AgentKit, a toolkit that allows AI agents to prove they are linked to an authenticated human while interacting with online services. It integrates the x402 micropayments protocol developed by Coinbase and Cloudflare with proof-of-identity credentials, allowing agents to pay for access while presenting encrypted proof that an authenticated human stands behind them.
Ethereum co-founder Vitalik Buterin has proposed using cryptography and blockchain to make online systems more verifiable. These tools include zero-knowledge proofs and on-chain timestamps that can help verify how content is produced and distributed without exposing sensitive data.
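The timestamping idea behind these proposals can be illustrated in a few lines. The sketch below is a hypothetical toy model, not any real protocol or chain: a publisher registers a hash of each piece of media with a timestamp in an append-only registry (standing in for an on-chain log), and anyone can later check whether a given file matches a registered entry. All class and variable names here are invented for illustration.

```python
import hashlib
import time

class ProvenanceRegistry:
    """Toy append-only registry mapping content hashes to timestamps."""

    def __init__(self):
        self._entries = {}  # content hash (hex) -> registration timestamp

    def register(self, content: bytes) -> str:
        """Record a SHA-256 hash of the content; return the hash."""
        digest = hashlib.sha256(content).hexdigest()
        # Only the hash is stored, so the registry never sees the raw media.
        self._entries.setdefault(digest, time.time())
        return digest

    def verify(self, content: bytes) -> bool:
        """True only if this exact content was registered earlier."""
        return hashlib.sha256(content).hexdigest() in self._entries

registry = ProvenanceRegistry()
original = b"photo bytes from a verified camera"
registry.register(original)

print(registry.verify(original))                 # True: matches a record
print(registry.verify(b"tampered photo bytes"))  # False: no matching hash
```

Any single-bit change to the media produces a different hash, so a registered original can be told apart from an altered or AI-generated copy; what a real deployment would add is a tamper-resistant ledger and signatures tying each hash to a publisher.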
The proposal comes as U.S. policymakers consider broader AI regulation. On March 20, the Trump administration announced a national framework calling for a uniform federal approach, warning that a patchwork of state laws could hinder innovation and competitiveness.
Magazine: Agents wasted 14 hours of scammer’s time, LLM was ‘poisoned’ by Iran: AI Eye

