US Surgeon General Vivek Murthy wants to put a warning label on social media platforms, alerting young users to potential mental health harms.
“It is time to require a surgeon general’s warning label on social media platforms stating that social media is associated with significant mental health harms for adolescents,” Murthy wrote in a New York Times op-ed published Monday.
Murthy argued that a warning label is urgently needed because the “mental health crisis among young people is an emergency,” and overuse of social media can increase adolescents’ risk of anxiety and depression and negatively affect body image.
The spike in mental health issues among young people began long before the surgeon general declared a youth behavioral health crisis during the pandemic, according to an April report from a New York nonprofit called the United Health Fund. Between 2010 and 2022, “adolescents ages 12–17 have experienced the highest year-over-year increase in having a major depressive episode,” the report said. By 2022, 6.7 million adolescents in the US reported “suffering from one or more behavioral health condition.”
However, mental health experts have maintained that the science is divided, showing that kids can also benefit from social media depending on how they use it. Murthy’s warning label seems to ignore that tension, prioritizing awareness of potential harms even though parents who restrict online access in response to the label could end up harming some kids. The label would also seemingly fail to acknowledge known risks to young adults, whose brains continue developing after the age of 18.
To create the proposed warning label, Murthy is seeking better data from social media companies that have not always been transparent about studying or publicizing alleged harms to kids on their platforms. Last year, a Meta whistleblower, Arturo Bejar, testified to a US Senate subcommittee that Meta overlooks obvious reforms and “continues to publicly misrepresent the level and frequency of harm that users, especially children, experience” on its platforms, Facebook and Instagram.
According to Murthy, the US is past the point of accepting promises from social media companies to make their platforms safer. “We need proof,” Murthy wrote.
“Companies must be required to share all of their data on health effects with independent scientists and the public—currently they do not—and allow independent safety audits,” Murthy wrote, arguing that parents need “assurance that trusted experts have investigated and ensured that these platforms are safe for our kids.”
“A surgeon general’s warning label, which requires congressional action, would regularly remind parents and adolescents that social media has not been proved safe,” Murthy wrote.
Kids need safer platforms, not a warning label
Leaving parents to police kids’ use of platforms is unacceptable, Murthy said, because their efforts are “pitted against some of the best product engineers and most well-resourced companies in the world.”
That is a nearly impossible battle for parents, Murthy argued. If platforms are allowed to ignore harms to kids while pursuing financial gains through features laser-focused on maximizing young users’ online engagement, they will “likely” perpetuate the cycle of problematic use that Murthy described in his op-ed, the American Psychological Association (APA) warned this year.
Downplayed in Murthy’s op-ed, however, is the fact that social media use is not universally harmful to kids and can be beneficial to some, especially children in marginalized groups. Monitoring this tension remains a focal point of the APA’s most recent guidance, which noted in April 2024 that “society continues to wrestle with ways to maximize the benefits of these platforms while protecting youth from the potential harms associated with them.”
“Psychological science continues to reveal benefits from social media use, as well as risks and opportunities that certain content, features, and functions present to young social media users,” APA reported.
According to the APA, platforms urgently need to enact responsible safety standards that diminish risks without restricting kids’ access to beneficial social media use.
“By early 2024, few meaningful changes to social media platforms had been enacted by industry, and no federal policies had been adopted,” the APA report said. “There remains a need for social media companies to make fundamental changes to their platforms.”
The APA has recommended a range of platform reforms, including limiting infinite scroll, imposing time limits on young users, reducing kids’ push notifications, and adding protections to shield kids from malicious actors.
Bejar agreed with the APA that platforms owe it to parents to make meaningful reforms. His ideal future would see platforms gathering more granular feedback from young users to expose harms and confront them faster. He provided senators with recommendations that platforms could use to “radically improve the experience of our children on social media” without “eliminating the joy and value they otherwise get from using such services” and without “significantly” affecting profits.
Bejar’s reforms included giving young users open-ended ways to report harassment, abuse, and harmful content, letting them explain exactly why a contact or piece of content was unwanted, rather than limiting feedback to the categories platforms choose to track. This would prevent companies from obscuring harms by strategically narrowing the language of reporting categories, and it would give platforms more information with which to improve their services, Bejar suggested.
By improving feedback mechanisms, Bejar said, platforms could more easily adjust kids’ feeds to stop recommending unwanted content. The APA’s report agreed that this was an obvious area for platform improvement, finding that “the absence of clear and transparent processes for addressing reports of harmful content makes it harder for youth to feel protected or able to get help in the face of harmful content.”
Ultimately, the APA, Bejar, and Murthy all seem to agree that it is important to bring in outside experts to help platforms come up with better solutions, especially as technology advances. The APA warned that “AI-recommended content has the potential to be especially influential and hard to resist” for some of the youngest users online (ages 10–13).