OpenAI’s latest update to ChatGPT promises a more personalised and engaging experience by storing user details for better assistance. However, this innovation raises significant concerns about privacy and the reinforcement of filter bubbles. While OpenAI says users will have control over their data, the memory feature’s default activation shifts responsibility onto them. The risk of echo chambers and biased information looms large, reminiscent of Facebook’s detrimental impact on society. To avoid such pitfalls, OpenAI must prioritise diversity of perspectives and transparency, encouraging critical thinking rather than serving only tailored content. As ChatGPT evolves, guarding against the toxic effects of deepening societal silos becomes paramount for fostering responsible AI engagement.
By Parmy Olson
OpenAI is rolling out what it calls a memory feature in ChatGPT. The popular chatbot will be able to store key details about its users to make answers more personalized and “more helpful,” according to OpenAI. These can be facts about your family or health, or preferences about how you want ChatGPT to talk to you, so that instead of starting on a blank page, it’s armed with useful context. As with so many tech innovations, what sounds cutting-edge and useful also has a dark flip side: It could blast another hole into our digital privacy and — just maybe — push us further into the echo chambers that social media forged.
AI firms have been chasing new ways of increasing chatbot “memory” for years to make their bots more useful. They’re also following a roadmap that worked for Facebook: gleaning personal information to better target users with content that keeps them scrolling.
OpenAI’s new feature — which is rolling out to both paying subscribers and free users — could also make its customers more engaged, benefiting the business. At the moment, ChatGPT’s users spend an average of seven-and-a-half minutes per visit on the service, according to market research firm SimilarWeb. That makes it one of the stickiest AI services available, but the metric could go higher. Time spent on YouTube, for instance, is 20 minutes for each visit. By processing and retaining more private information, OpenAI could boost those stickiness numbers, and stay ahead of competing chatbots from Microsoft, Anthropic, and Perplexity.
But there are worrying side effects. OpenAI states that users will be “in control of ChatGPT’s memory,” but also that the bot can “pick up details itself.” In other words, ChatGPT could choose to remember certain facts that it deems important. Customers can go into ChatGPT’s settings menu and delete whatever they want the chatbot to forget, or shut down the memory feature entirely. “Memory” will be on by default, putting the onus on users to turn it off.
Collecting data by default has been the setup for years at Facebook, and the expansion of “memory” could become a privacy minefield in AI if other companies follow OpenAI’s lead. OpenAI says it only uses people’s data to train its models, but other chatbot makers can be far looser. A recent survey of 11 romance chatbots by the Mozilla Foundation, a nonprofit that promotes online transparency, found that nearly all of them might share personal data with advertisers and other third parties, including details about people’s sexual health and medication use.
Here’s another unintended consequence that has echoes of Facebook: a memory-retentive ChatGPT that’s more personalized could reinforce the filter bubbles people find themselves in, thanks to social feeds that for years have fed them a steady diet of content confirming their cultural and political biases.
Imagine ChatGPT logging in its memory bank that I supported a certain political party. If I then asked the chatbot why its policies were better for the economy, it might prioritize information that supported the party line and omit critical analysis of those policies, insulating me from viable counterarguments.
If I told ChatGPT to remember that I’m a strong advocate for environmental sustainability, my future queries about renewable energy sources might get answers that neglect to mention that fossil fuels can sometimes be viable. That would leave me with a narrower view of the energy debate.
OpenAI could tackle this by making sure ChatGPT offers diverse perspectives on political or social issues, even if they challenge a user’s prejudices. It could add critical thinking prompts to encourage users to consider perspectives they haven’t expressed yet. And in the interests of transparency, it could also tell users when it’s giving them tailored information. That might put a damper on its engagement metrics, but it would be a more responsible approach.
ChatGPT has experienced gangbusters growth, pushed for user engagement and is now storing personal information — a path that looks a lot like the one Mark Zuckerberg once trod with similarly noble intentions. To avoid the same toxic side effects his apps had on mental health and society, OpenAI must do everything it can to stop its software from putting people into ever-deeper silos. Otherwise, the very idea of critical thinking could become dangerously novel for humans.
© 2024 Bloomberg L.P.