Claude’s Passport Demand: Surveillance Fears Fuel New Crypto Concern


Anthropic has introduced identity verification measures for its AI chatbot, Claude, requiring certain users to submit government-issued identification and a live selfie. This marks a significant departure from other major AI chatbots, which do not currently impose such requirements. The move occurs shortly after a substantial influx of users migrated to Claude, driven by concerns over OpenAI’s partnerships with government entities.

Key Takeaways

  • Anthropic is now requesting government ID and selfie verification for specific Claude users, a requirement not found in competing major AI chatbots.
  • This policy change follows a period of record user growth for Anthropic, largely attributed to users seeking alternatives to OpenAI due to privacy concerns.
  • Verification data is handled by Anthropic’s partner, Persona, and is not used by Anthropic for model training or other purposes beyond identity confirmation.

The new policy, detailed on a support page that went live on April 14, outlines that Anthropic has partnered with Persona Identities, a firm known for its Know Your Customer (KYC) infrastructure in financial services. Users may be asked for a physical passport, driver’s license, or national identity card; photocopies, digital IDs, and student credentials are not accepted. A live selfie may also be part of the verification process.

Anthropic stated that identity verification is being implemented for “a few use cases” and may appear during “routine platform integrity checks, or other safety and compliance measures.” The company emphasized that the verification data is solely used to confirm user identity and is not utilized for other purposes, nor will it be shared with third parties for marketing. This data is handled by Persona’s servers, not Anthropic’s, and is encrypted in transit and at rest.

This development is particularly noteworthy given that millions of users reportedly switched to Anthropic in February. This migration was reportedly a response to OpenAI’s agreement to deploy its AI technology on classified Pentagon networks, a deal Anthropic declined due to concerns about mass surveillance and autonomous weapons. Anthropic reported a significant surge in daily signups and a 60% increase in free users during that period, positioning itself as a privacy-focused alternative.

AI KYC is here.

New Claude subscribers asked for gov ID & photo.

Not even a regulatory requirement – Anthropic just doing it because they want to.

But regulation is coming

Next up will be laws:

No AI without gov-issued ID
All AI use tracked to individual – no private AI

Claude now requires government ID verification (via Persona) before subscription.

ChatGPT doesn't.
Gemini doesn't.

Anthropic just handed their competitors a gift.

The introduction of mandatory ID verification has drawn criticism, with some users pointing out that Anthropic adopted it voluntarily rather than under any regulatory mandate. While the policy is not applied universally and is triggered only by specific activities suggesting potential misuse or policy violations, the requirement for personal documentation has raised privacy concerns among a user base that had specifically sought out Anthropic for its perceived commitment to user privacy.

Concerns regarding the security of sensitive personal data are also prominent. Past incidents, such as a breach at Discord in October 2025 that exposed approximately 70,000 government IDs submitted for age verification, highlight the risks associated with third-party data storage. Although Persona is described as a significant player in the identity verification space, the reliance on a third party for government document custody introduces potential vulnerabilities.

This move towards stricter user controls aligns with Anthropic’s recent policy adjustments. In December, the company implemented classifiers to identify users who self-identify as minors, a system that led to the suspension of some adult users’ accounts and the potential loss of project data while appeals were processed. Furthermore, accounts registered in regions not formally supported by Anthropic are subject to bans; because a live selfie matched against a government document is difficult to falsify, this measure could disproportionately affect users in areas like China who rely on intermediaries to access the service.

The Long-Term Technological Impact of AI Identity Verification

The implementation of government ID and selfie verification by Anthropic, while currently a niche requirement, could signal a broader trend toward enhanced identity management within AI systems. From a blockchain and Web3 perspective, this move raises questions about the future of decentralized identity solutions and verifiable credentials. If AI models become more integrated with real-world applications, the need for robust user authentication will grow. This could spur innovation in zero-knowledge proofs and self-sovereign identity frameworks, allowing users to control and selectively share verified attributes without necessarily submitting raw identity documents to a central third party. For Layer 2 scaling solutions, the increased transaction volume associated with more complex identity verification processes, if managed on-chain or via decentralized systems, could present new use cases and demand for efficient transaction processing. Ultimately, this development might accelerate the exploration of privacy-preserving identity technologies that balance AI’s growing capabilities with user autonomy and data security.
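The selective-disclosure idea above can be illustrated with a minimal hash-commitment sketch in Python. This is a deliberate simplification, not a real zero-knowledge proof or verifiable-credential implementation; the attribute strings and function names are hypothetical, and production systems would use proper cryptographic protocols rather than bare hashes:

```python
import hashlib
import secrets

def commit(attribute: str) -> tuple[str, str]:
    """Commit to a single verified attribute (e.g. 'over_18:true')
    without revealing the underlying ID document.
    Returns (commitment, nonce); only the commitment is shared upfront."""
    nonce = secrets.token_hex(16)
    digest = hashlib.sha256(f"{attribute}|{nonce}".encode()).hexdigest()
    return digest, nonce

def verify(commitment: str, attribute: str, nonce: str) -> bool:
    """Verifier checks that the later-revealed attribute and nonce
    match the previously published commitment."""
    return hashlib.sha256(f"{attribute}|{nonce}".encode()).hexdigest() == commitment

# The user discloses one attribute, not the whole passport.
c, n = commit("over_18:true")
print(verify(c, "over_18:true", n))    # the disclosed claim checks out
print(verify(c, "over_18:false", n))   # a different claim fails
```

The point of the sketch is the data-minimization pattern: the verifier learns only the committed attribute, while the raw government document never has to leave the user's custody or sit on a third party's servers.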

Learn more at: decrypt.co
