ChatGPT and the Question Nobody Wants to Answer: Where Does the Data Go?
When ChatGPT crossed 100 million users in early 2023, it became the fastest-growing consumer application in history. It also raised one of the most consequential unresolved data privacy questions of the year.
By default, conversations with ChatGPT are used to train future models. Users who paste in sensitive information, such as internal business documents, patient records, legal drafts, or personal financial details, may be contributing that data to a training dataset the company retains, one that its employees can review and that, under certain conditions, can become visible to other users.
Italy's data protection authority temporarily banned ChatGPT in March 2023, citing GDPR violations: collecting personal data without a clear legal basis and failing to verify users' ages. The ban was lifted the following month after OpenAI made several concessions, including adding a training data opt-out and clearer privacy disclosures.
The episode surfaced a question that organizations are now actively grappling with: should employees be allowed to use AI tools with company data? The answer varies by organization, but the era of informal AI adoption without data governance is ending. Policies are being written. Vendor assessments are being conducted. And the question of where enterprise data goes when it enters a large language model has become a board-level conversation.
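What those policies look like in practice varies, but a common first control is a redaction step that strips obviously formatted identifiers from text before it is sent to any external model. The sketch below is purely illustrative and vendor-agnostic: the pattern list and the redact helper are hypothetical, and nothing here describes any particular provider's tooling.

```python
import re

# Illustrative guardrail: mask obviously formatted identifiers before text
# leaves the organization. The pattern list is a hypothetical example, not
# an exhaustive or production-grade PII detector.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,2}[ -]?)?\(?\d{3}\)?[ -]?\d{3}[ -]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace every match of each pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Reach Jane at jane.doe@example.com or 555-867-5309; SSN 123-45-6789."
    print(redact(prompt))
    # Reach Jane at [EMAIL REDACTED] or [PHONE REDACTED]; SSN [US_SSN REDACTED].
```

A filter like this is a floor, not a ceiling: regular expressions catch formatted identifiers but miss names, free-text medical details, and anything without a fixed shape, which is one reason written policies and vendor assessments matter as much as tooling.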