I want the deal that the US Government got

How they really feel about privacy

I took a look at the prediction market Kalshi recently and came across something interesting on their FAQ page. The page appears to be largely focused on explaining why people shouldn't be mad at Kalshi, emphasizing that it is regulated and detailing all the monitoring it does of its users. The bottom of the page features this graphic:

Kalshi comparing "regulated" and "unregulated" user signup

The thrust of the argument seems to be that whereas "unregulated" companies collect minimal information, "regulated" companies like Kalshi make sure to collect all kinds of personal details. This isn't limited to signup. In fact, in Kalshi's own words, they have a "surveillance system" to monitor users. Presumably mentioning this is intended to comfort potential regulators or politicians visiting the page who may have concerns about Kalshi. And of course, since Kalshi is a respectable and regulated company, they have normal people signing up for their service, not those weirdos (probably criminals) who use an end-to-end encrypted email provider like ProtonMail.

We're all trying to find the people who did this

Completely unrelated, Anthropic has been clashing with the Department of War over the Department's designation of Anthropic as a supply chain risk. One of the points of contention between Anthropic and the Department is the use of Anthropic's models for mass domestic surveillance. There has been an outpouring of support for Anthropic, with amicus briefs supporting the company filed by diverse groups of individuals, ranging from former military and foreign policy officials to Catholic theologians. Several employees of the tech and AI companies OpenAI and Google were among those who filed briefs, and this brief in particular goes into the risks surrounding AI and mass surveillance:

The risks of AI-enabled mass domestic surveillance merit greater public understanding. At its core, AI-enabled mass surveillance means the ability to monitor, analyze, and act on the behavior of an entire population continuously and in real time. The devices and data streams required to do this already exist. As of 2018, there were approximately 70 million surveillance cameras operating in the United States across airports, subway stations, parking lots, storefronts, and street corners. Every smartphone continuously broadcasts location data to carriers and dozens of applications. Credit and debit cards generate a timestamped record of nearly every commercial transaction Americans make. Social media platforms log not just what people post, but what they read, how long they browse, and what they posted before deleting it. Employers, insurers and data brokers have assembled behavioral profiles on most American adults that are already, in many cases, available for government purchase without a warrant. What does not yet exist is the AI layer that transforms this sprawling, fragmented data landscape into a unified, real-time surveillance apparatus. Today, these streams are siloed, inconsistent, and require significant human effort to connect. From our vantage point at frontier AI labs, we understand that an AI system used for mass surveillance could dissolve those silos, correlating face recognition data with location history, transaction records, social graphs, and behavioral patterns across hundreds of millions of people simultaneously.

As pointed out here, a core aspect of the problem comes not from AI per se, but from the widespread practice of businesses collecting large amounts of data on individuals. I also particularly appreciate that these technology company employees call out "employers, insurers, and data brokers" while failing to mention technology companies specifically. Of course, tech companies would never collect large amounts of data on their customers and use that data to model behavior or monitor what people are doing. Nor would they ever speak out of both sides of their mouths, telling consumers how much they definitely respect people's privacy while assisting governments with mass surveillance in order to forestall regulations that might impact the company.

When the US Government actually cares about privacy

An interesting fact that has been documented in this conflict between Anthropic and the Government is that the US Government doesn't have to use the normal process for accessing Anthropic's models that the rest of us get. Anthropic partnered with the Government to make available "Claude Gov", which has a number of advantages that the US Government seems to find desirable. For example, Anthropic explains that:

[A]s Claude is deployed in DoW environments—such as through air-gapped, classified cloud systems operated by third-party defense contractors—Anthropic has no ability to access, alter, or shut down the deployed model. Anthropic does not maintain any back door or remote “kill switch” in Claude. Anthropic personnel cannot, for example, log into a DoW system to modify or disable the models during an operation; the technology simply does not function that way. In these deployments, only the Government and its authorized cloud provider have access to the running system.

So, Anthropic can't access or alter the model that is available to the Government once the model is deployed. But what about the prompts and data that the US Government enters into the system?

Anthropic also cannot exfiltrate DoW data or conduct surveillance of DoW activities. Anthropic does not have access to DoW’s Claude prompts; because we lack any access to this customer data, there is nothing that we could exfiltrate or inspect. Any suggestion that Anthropic could engage in “data exfiltration” of DoW information is unfounded.

Now, when we speak of surveillance of random people, privacy is often seen as trading off with trust and safety. After all, if you have nothing to hide, why do you feel the need to keep secrets? If only we could take a quick peek at what you're up to and make sure you aren't doing anything bad, we could all sing Kumbaya together.

But interestingly, in this case, it is precisely the fact that Anthropic can't access the Department's systems that ends up helping Anthropic. The judge in the Northern District of California case questioned the parties about this very topic, and ultimately granted a TRO on essentially all the grounds that Anthropic argued. Anthropic's inability to see or alter the environment in which the Claude Gov model is deployed without approval from the Government is extremely helpful to Anthropic because it suggests that the Government's claimed concern about "sabotage" is overblown. How are you supposed to sabotage something you can't even access? Far from being a reason for distrust between Anthropic and the Government, the isolation of the computing environment under the Government's control is precisely what casts doubt on the Government's claim that it needs to declare Anthropic a supply chain risk due to its lack of trust in the AI company. The preservation of confidentiality doesn't harm trust and cooperation here; it is actually essential to facilitating those things.

For me, but not for thee

Completely unrelated, there have been some complaints about degradation in the performance of Claude Code, with some speculation that this could be due to updates that Anthropic has deployed. One commenter on a related Hacker News thread reported receiving a response in which the AI agent suggested it might have to "escalate this session to the legal department" over a potential copyright issue.

Now, I can't corroborate the accuracy of these reports, and it's entirely possible that this is just the normal and expected level of griping that inevitably happens when people use a product that doesn't give them exactly what they want. I also have to applaud Anthropic on their principled defense of intellectual property rights. I'm sure they would never apply a much more nuanced view of copyright to their own work while cracking down on possible copyright issues related to their users, so that they could use this surveillance as a defense in ongoing copyright litigation. That said, these complaints do make me wonder whether normal users of AI models might see some benefit in having access to AI models deployed for them in a similar way to how Anthropic has deployed Claude Gov for the Government.

AI could enable mass surveillance by making it easier to automate and scale surveillance of existing data, as pointed out in the OpenAI and Google employee brief. But an underappreciated risk of the increasing use of AI is that this usage itself becomes a highly centralized collection point for a new and exceptionally rich source of data: logs of a person's conversations with AI models. If you are worried about AI-enabled power concentration, coups, or human disempowerment, then building the infrastructure for privacy-preserving AI isn't really optional; it's a core building block for addressing these risks.

Won't someone please think of the terrorists

But if we made such an option generally available, how would we stop people from using those AI models for bad stuff? Wouldn't terrorists use the models to help them build bioweapons, and wouldn't cybercriminals be able to leverage them for cyberattacks? Sure, maybe the Government should have confidential and secure access to these models. But that's the Government we're talking about here; they only have a long and well-known history of doing the exact bad thing we were all just talking about with the whole supply chain risk thing. But come on, giving random people access to a similar option would be crazy, right?

Never fear, I think there is a simple way to address these concerns. Realistically, it wouldn't be individual people with access to their own copy of a model like Claude. Rather, cloud compute providers or other technology companies would make these models available in more secure and private environments, just as is being done for the US Government. Maybe those Proton guys could help out or something. Individual users wouldn't have the ability to remove guardrails, and a model provider like Anthropic could have agreements with model deployers requiring them to keep in place any guardrails for things like bioweapons or cybersecurity. You could have automated pre- or post-processing steps that detect and block inputs or outputs that potentially overlap with these areas of risk, all without ever making any user input or output available to the model deployer. Users wouldn't get their risky output, but their privacy would still be preserved.
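
To make the shape of this concrete, here is a minimal sketch of what such a pipeline could look like. Everything in it is hypothetical: classify, run_model, and the keyword lists are stand-ins I made up for illustration, not any real provider API. The point is simply that the guardrail checks run inside the same confidential environment as the model, so the deployer only ever learns that a request was blocked, never its contents.

```python
# Hypothetical sketch: pre-/post-processing guardrails inside a
# confidential deployment. All names here are invented for illustration.

BLOCKED_CATEGORIES = {"bioweapons", "offensive_cyber"}

# Toy stand-in for a provider-mandated guardrail classifier. In a real
# deployment this would be a model, but it must run *inside* the trusted
# environment so raw text never leaves it.
KEYWORDS = {
    "bioweapons": ["anthrax", "pathogen synthesis"],
    "offensive_cyber": ["zero-day exploit", "ransomware payload"],
}

def classify(text: str) -> set[str]:
    lowered = text.lower()
    return {cat for cat, words in KEYWORDS.items()
            if any(w in lowered for w in words)}

def run_model(prompt: str) -> str:
    # Placeholder for the sealed model (a Claude Gov-style deployment).
    return f"[model output for: {prompt!r}]"

def confidential_completion(prompt: str) -> str:
    # Pre-processing: screen the prompt before the model ever sees it.
    if classify(prompt) & BLOCKED_CATEGORIES:
        return "Request declined by deployment policy."
    output = run_model(prompt)
    # Post-processing: screen the output before it leaves the enclave.
    if classify(output) & BLOCKED_CATEGORIES:
        return "Response withheld by deployment policy."
    return output

# Neither `prompt` nor `output` is ever logged or transmitted outside the
# enclave; at most an aggregate count of blocked requests might be reported
# to the model provider for compliance auditing.
```

In practice the classifier would be a model rather than a keyword match, but the privacy property is the same either way: the raw text never crosses the boundary between the user's confidential environment and the deployer.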

If anyone builds it

What if, instead of being concerned about AI-enabled power concentration, you believe that if anyone builds an advanced enough AI system, everyone dies? Naturally, if this is the case, we have to be on the "don't build it" plan. Where does this whole privacy thing fit in with that?

First, to make the "don't build it" plan happen, you have to convince governments to adopt not building it as their official policy. Recall that part of the issue in the Anthropic v. Department of War case is that a government might essentially be trying to nuke a company from orbit for not going along with its policy on AI. Governments retaliating against disfavored AI policy advocacy could be a problem if your plan is to do AI policy advocacy. The ability to protect your thoughts and communications from the Government and other actors is important for discussing disfavored ideas freely.

Second, the "don't build it" plan will probably involve regulations of large and powerful tech companies that those companies don't like. The expected playbook in that fight includes companies attempting to shift blame onto users by claiming that the real regulations that are needed are more intense surveillance rather than anything that would stop the companies from building and scaling their products. Systematically protecting user privacy heads off these types of evasive maneuvers.

The end

In conclusion, rather than being a massive threat to trust and safety, the ability to use AI securely and confidentially is a core need for addressing risks from advanced AI, and can uniquely facilitate cooperation between adversarial parties.