DeepSeek Arrived. So Did the Questions About Where Your Data Goes.
In January 2025, a Chinese AI lab called DeepSeek released an open-weight language model that matched or exceeded the performance of leading American AI systems at a fraction of the reported training cost. The release sent shockwaves through the technology industry. It also surfaced a set of data privacy questions that the rapid adoption of AI tools had left unanswered for years.
DeepSeek’s privacy policy states that it collects user inputs, chat history, device information, and other interaction data, and that this data may be stored on servers in the People’s Republic of China. Several countries — including Italy, South Korea, and Australia — moved quickly to restrict or investigate its use on government devices. The United States Navy and other federal agencies issued advisories against using it for work-related tasks.
The concern is the same one raised about TikTok, and it extends to any AI service whose operator's data handling practices, legal jurisdiction, and government access obligations are opaque to the user.
What the DeepSeek moment clarified, once again, is that where AI data goes is a governance question. When employees or clinicians use AI tools to draft documents, summarize records, or analyze data, the content of those interactions may be stored, retained, and potentially accessed by parties with no accountability to the user.
Data sovereignty, meaning control over where data is stored, which laws govern it, and who can compel access to it, is becoming a fundamental requirement for any organization that handles sensitive information.