Whoops, Samsung workers accidentally leaked trade secrets via ChatGPT
Never forget that anything you share with ChatGPT may be retained and used to further train the model. Samsung employees have learned this the hard way after accidentally leaking top-secret Samsung data.
Samsung employees accidentally shared confidential information while using ChatGPT for help at work. The company's semiconductor division has allowed engineers to use ChatGPT to check source code.
But The Economist Korea reported three separate instances of Samsung employees unintentionally leaking sensitive information to ChatGPT. In one instance, an employee pasted confidential source code into the chat to check for errors. Another employee shared code with ChatGPT and "requested code optimization." A third shared a recording of a meeting to convert into notes for a presentation. That information is now out in the wild for ChatGPT to feed on.
The leak is a real-world example of the hypothetical scenarios privacy experts have been concerned about. Other scenarios include sharing confidential legal documents or medical information in order to summarize or analyze lengthy text, which might then be used to improve the model. Experts warn that such use may violate the GDPR, which is why Italy recently banned ChatGPT.
Samsung has taken immediate action by limiting the ChatGPT upload capacity to 1024 bytes per person, and it is investigating the people involved in the leak. It is also considering building its own internal AI chatbot to prevent future embarrassing mishaps. But it's unlikely that Samsung will be able to recall any of its leaked data. ChatGPT's data policy says it uses submitted data to train its models unless you request to opt out, and its usage guide explicitly warns users not to share sensitive information in conversations.
Consider this a cautionary tale to be remembered the next time you turn to ChatGPT for help. Samsung certainly will.
Contributor: Mashable https://ift.tt/saFSdWB