CFTC chair Selig says blockchain could help verify AI-generated content
Michael Selig, chair of the US Commodity Futures Trading Commission, said blockchain could play a key role in verifying AI-generated content, contending the technology can help distinguish authentic media from synthetic outputs as concerns over misinformation grow.
During an appearance on The Pomp Podcast on Thursday, Selig was asked by host Anthony Pompliano about the use of AI-generated memes and images in markets, and whether intent matters or such content should be restricted altogether. He told Pompliano:
The private markets have solutions — blockchain technology is a great one. If you can timestamp things and make sure there’s an identifier for each meme or AI-generated post, you can verify if it’s real or generated by AI… Having these technologies here in the US is critical.
He said regulators are focused on maintaining US leadership in crypto, adding that “you can’t have AI without blockchain.”
Pompliano also asked how regulators are approaching AI agents. As autonomous trading becomes more prevalent in financial markets, authorities are being pressed to distinguish between automated tools and fully autonomous agents, and to decide how the latter should be regulated. Selig responded:
I’m concerned that we over-regulate and strangle some of the technology here in the US… I’m taking a very much minimum effective dose of regulation approach, where we’re… making sure that we’re regulating the actors… and not the software developers. The software developers are the ones building the tools, but they’re not actually engaging in the financial transactions.
Selig said the CFTC is assessing how AI models are used in markets, emphasizing that enforcement should focus on participants engaging in financial activity.
Related: AI and stablecoins are winning despite 2026 crypto market slump
Blockchain and proof-of-personhood tools emerge for AI verification
A central challenge amid the surge in artificial intelligence use is distinguishing real content from synthetic media. Selig’s comments could be seen to reflect a broader push among policymakers and developers to use blockchain for content verification and provenance.
One approach is proof-of-personhood systems, which aim to confirm that an account belongs to a real, unique human rather than a bot. The most prominent example is Sam Altman’s World, whose World ID protocol allows users to prove their humanity without revealing personal data. The system uses encrypted biometric iris scans stored on the user’s device, though it has drawn criticism over privacy risks and potential coercion.
In March, World launched AgentKit, a toolkit that allows AI agents to prove they are linked to a verified human while interacting with online services. It integrates proof-of-personhood credentials with the x402 micropayments protocol developed by Coinbase and Cloudflare, enabling agents to pay for access while presenting cryptographic proof of human backing.
Ethereum co-founder Vitalik Buterin has proposed using cryptography and blockchain to make online systems more verifiable, including through zero-knowledge proofs and onchain timestamps that could help validate how content is generated and distributed without exposing sensitive data.
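The timestamp-and-identifier approach Selig and Buterin describe can be illustrated with a minimal sketch: hashing a piece of content yields a stable identifier, and pairing it with a timestamp produces a provenance record that anyone can later check against the original bytes. The function names below are illustrative, not from any specific protocol, and a real deployment would anchor the record on a blockchain rather than hold it in memory.

```python
import hashlib
import time

def content_id(data: bytes) -> str:
    """Derive a stable identifier by hashing the raw content."""
    return hashlib.sha256(data).hexdigest()

def make_record(data: bytes, creator: str) -> dict:
    """Build a provenance record; a real system would anchor this onchain."""
    return {
        "content_id": content_id(data),
        "creator": creator,
        "timestamp": int(time.time()),
    }

def verify(data: bytes, record: dict) -> bool:
    """Check that the content still matches the identifier in the record."""
    return content_id(data) == record["content_id"]

meme = b"example AI-generated image bytes"
record = make_record(meme, creator="alice")
assert verify(meme, record)             # untouched content verifies
assert not verify(meme + b"x", record)  # any alteration fails
```

Because the identifier is a cryptographic hash, even a one-byte change to the content breaks verification, which is what makes an onchain timestamp of that hash useful as evidence of when a given piece of media existed.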
The proposals come as US policymakers weigh broader AI regulation. On March 20, the Trump administration released a national framework calling for a unified federal approach, warning that a patchwork of state laws could hinder innovation and competitiveness.
Magazine: Agent wastes 14 hours of scammers’ time, LLMs ‘poisoned’ by Iran: AI Eye