Anthropic Open Letter: The Hypocritical Sam Altman, Master Manipulator
Original Article Title: Read Anthropic CEO's Memo Attacking OpenAI's 'Mendacious' Pentagon Announcement
Original Article Author: The Information
Translation: Peggy, BlockBeats
Editor's Note: Just hours before OpenAI announced an AI partnership with the Pentagon, the Pentagon abruptly terminated its collaboration with Anthropic, citing Anthropic's insistence on security terms. In response, Anthropic CEO Dario Amodei sent an unusually strongly worded internal memo to employees, directly criticizing OpenAI's claimed "security mechanisms" as largely "security theater" and questioning its stance on autonomous weapons and mass surveillance.
In this roughly 1,600-word email, Amodei not only revealed details of the negotiations with the U.S. defense establishment but also took direct aim at OpenAI CEO Sam Altman, accusing him of using PR narratives to obscure the true structure of the deal. The controversy, spanning AI military applications, security red lines, and political relationships, is bringing the disagreements between the two Silicon Valley AI giants into the open.
Below is the original text:
I want to be very clear about the information OpenAI is currently putting out, and the hypocrisy within it. This is their modus operandi, and I hope everyone sees it for what it is.
While much remains unknown about the contract they signed with the Department of War (DoW) (perhaps even to OpenAI itself, since the contract terms are likely quite vague), a few things are certain. Based on public descriptions from Sam Altman and the DoW (though the contract text would be needed to confirm definitively), the collaboration model is roughly this: the model itself has no legal use restrictions, i.e., it may be used for "all legal purposes," and alongside this sits a so-called "security layer." This "security layer," in my view, is fundamentally a model refusal mechanism designed to prevent the model from completing certain tasks or supporting certain applications.
The so-called "security layer" may also refer to schemes that partners (such as Palantir, Anthropic's commercial partner when serving U.S. government clients) tried to sell us during negotiations: a classifier or machine learning system that purportedly allows certain applications while blocking others. There are also indications that OpenAI would have staff, FDEs (forward-deployed engineers), overseeing the model's usage to prevent improper applications.
Our overall assessment: these solutions are not entirely ineffective, but in a military context they are roughly 20% real protection and 80% security theater.
The root of the problem is that whether a model is being used for mass surveillance or for fully autonomous weapons often depends on broader context. The model itself does not know what kind of system it is embedded in; it does not know whether humans are "in the loop" (the key issue for autonomous weapons); and it does not know the provenance of the data it is analyzing: for example, whether the data is U.S. domestic or foreign, provided by companies with user consent, or purchased through gray-market channels.
Anyone who works in safety has long known this: model refusal mechanisms are not reliable. Jailbreaks are commonplace, and often all it takes to bypass these restrictions is lying about the nature of the data.
There is a key difference here that makes the problem harder than ordinary safety protections: whether a model is conducting a cyberattack can often be inferred from its inputs and outputs, but determining the nature of the attack and its specific context is an entirely different matter, and that is precisely the judgment required here. In many cases this task is extremely difficult, if not impossible.
The "security layer" that Palantir pitched to us (I think they also pitched similar solutions to OpenAI) is even worse. Our assessment is that this is almost entirely a form of security theater.
Palantir's basic pitch seems to be: "There may be some dissatisfied employees at your company; you need to give them something to appease them, or make what is happening invisible to them. That is exactly the service we provide."
As for having Anthropic's or OpenAI's employees directly oversee deployments: a few months ago we held internal discussions about enforcing acceptable use policies (AUPs) in classified environments. The conclusion was clear: this approach is feasible only in very limited cases. We will do our best, but it is by no means a reliable core safeguard, and it is especially difficult to implement in classified environments. Incidentally, we really are doing our best here, and in this respect we are no different from OpenAI.
So, to be clear: the measures OpenAI has adopted do not actually solve the problem.
The reason they accept these solutions while we do not comes down to this: they are focused on appeasing employees, while we genuinely care about preventing misuse.
These schemes are not without value, as we ourselves use some of them, but they are far from sufficient to meet the required security standards. At the same time, the Department of War has shown clear inconsistency in its treatment of OpenAI and us.
In fact, we tried to include security clauses similar to OpenAI's in our contract (as a supplement to the AUP, which we consider the more important part), but the Department of War refused. The evidence is in the email chain from the time; since I am currently swamped, I may ask a colleague to look up the exact wording later. So the claim that "OpenAI's terms were offered to us and we rejected them" is untrue; likewise, the claim that "OpenAI's terms could effectively prevent mass domestic surveillance or fully autonomous weapons" is also untrue.
Furthermore, Sam and OpenAI's statements also imply that our proposed red lines, namely fully autonomous weapons and large-scale domestic surveillance, are already illegal, making the related usage policies redundant. This assertion aligns almost perfectly with the Department of War's stance, making it seem pre-coordinated.
But this is not in line with the facts.
As we explained in yesterday's statement, the Department of War does in fact have the authority to conduct domestic surveillance. In the past, without AI, the impact of these authorities was relatively limited; in the AI era, their significance is entirely different.
For example, the Department of War can lawfully purchase large amounts of private data on American citizens from vendors (who typically obtain resale rights through obscure user-consent agreements), then use AI to analyze that data at scale: building citizen profiles, assessing political leanings, and tracking real-world movements. The data they can access even includes GPS location information.
One more point to note: as negotiations neared conclusion, the Department of War proposed that if we removed a specific mention of "analysis of bulk acquired data" from the contract, they would be willing to accept all our other terms. And this happens to be the only clause in the contract that precisely addresses the scenario we are most concerned about. We find this very suspicious.
On autonomous weapons, the Department of War claims that "humans in the loop" is a legal requirement. It is not. It is merely a Biden-era Pentagon policy mandating human involvement in weapon-launch decisions. That policy can be unilaterally changed by the current Secretary of Defense, Pete Hegseth, and that is exactly what we are truly worried about. So in practical terms, it is not a real constraint.
OpenAI and the Department of War have engaged in extensive PR spin on these issues, either lying or deliberately obfuscating. These facts reveal a pattern of behavior, one I have seen many times from Sam Altman. I hope everyone can recognize it.
This morning, he first signaled alignment with Anthropic's red lines, in order to appear supportive of us, claim some credit, and avoid criticism for taking over the contract. He also tried to portray himself as someone who wants to "establish a unified contract standard for the entire industry": a peacemaker and dealmaker.
But behind the scenes, he is signing contracts with the Department of War, ready to replace us the moment we are labeled a supply-chain risk.
At the same time, he must ensure that this does not look like "Anthropic held the line while OpenAI gave up its principles." He can do this because:
First, he can sign on to all the "security theater" measures we rejected, and the DoW and its partners are willing to cooperate, packaging these measures credibly enough to appease his staff.
Second, the DoW is willing to accept some terms from him that it rejected when we proposed them first.
These two points are what allow OpenAI to make a deal that we could not.
The DoW and the Trump administration dislike us for real reasons: we did not donate to Trump (OpenAI and Greg Brockman did); we did not sycophantically praise Trump (Sam did); we support AI regulation, which runs counter to their policy agenda; we choose to tell the truth on many AI policy issues (e.g., AI's impact on jobs); and we held the line instead of staging "security theater" to placate employees.
Sam is now trying to characterize all of this as: we are hard to work with, we are adversarial, we lack flexibility, and so on. I hope everyone realizes that this is classic gaslighting.
The vague notion that "someone is difficult to work with" is often used to mask the real, unsightly reasons: the political donations, political loyalty, and security theater I just mentioned.
Everyone needs to understand this and push back against this narrative when speaking privately with OpenAI staff.
In other words, Sam is undermining our position under the guise of "supporting us." I want everyone to stay alert to this: by weakening public support for us, he is making it easier for the government to penalize us. Furthermore, I suspect he may even be fueling the fire behind the scenes, although I currently have no direct evidence of this.
On the public and media front, these rhetorical and manipulation tactics seem to have backfired. Most people view OpenAI's deal with the Department of War as concerning, if not alarming, while seeing us as the principled party (by the way, we are now number two on the App Store download charts).
[Note: Subsequently, Claude rose to number one on the App Store.]
Of course, this narrative has resonated with some fools on Twitter, but that is not what matters. What I am truly concerned about is ensuring it does not take hold among OpenAI's own employees.
Because of selection effects, they are already a relatively persuadable group. Even so, pushing back against the narratives Sam is peddling to our staff remains crucial.