The way you talk can reveal a lot about you, especially if you're talking to a chatbot. New research shows that chatbots like ChatGPT can infer a great deal of sensitive information about the people they chat with, even when the conversation is utterly mundane.
The phenomenon appears to stem from the way the models' algorithms are trained on broad swathes of web content, a key part of what makes them work, which likely makes the problem hard to prevent. "It's not even clear how you fix this problem," says Martin Vechev, a computer science professor at ETH Zürich in Switzerland who led the research. "This is very, very problematic."
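The style of inference at issue can be reproduced with nothing more than an ordinary chat API call: hand a model a mundane snippet and ask it to guess the author's attributes. Below is a minimal sketch of that kind of query using the OpenAI Python client; the model name, prompts, and example text are illustrative assumptions, not the researchers' actual experimental setup.

```python
# A minimal sketch of attribute inference from mundane text, assuming
# the OpenAI Python client (openai>=1.0). The model name and prompts
# are illustrative choices, not the paper's methodology.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# An innocuous-looking remark that still leaks a location cue:
# a "hook turn" is a driving maneuver strongly associated with Melbourne.
snippet = (
    "there is this nasty intersection on my commute, "
    "I always get stuck there waiting for a hook turn"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any capable chat model would do
    messages=[
        {
            "role": "system",
            "content": (
                "Guess the author's likely city, age range, and gender "
                "from the text. Reason step by step, then give your best guess."
            ),
        },
        {"role": "user", "content": snippet},
    ],
)

print(response.choices[0].message.content)
```

Because the inference draws on world knowledge absorbed during pretraining rather than on any single memorized document, there is no obvious filter to bolt on, which is what makes the problem hard to fix.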