Meta Faces More Questions Over Teen Safety in AI and VR

Meta is set to come under regulatory scrutiny once again, after reports that it’s repeatedly failed to address safety concerns with its AI and VR projects.

First, on AI, and Meta’s evolving AI engagement tools. In recent weeks, Meta has been accused of allowing its AI chatbots to engage in inappropriate conversations with minors, and to provide misleading medical information, as it seeks to maximize take-up of its chatbot tools.

An investigation by Reuters uncovered internal Meta documentation that would essentially allow such interactions to occur, without intervention. Meta has confirmed that such guidance did exist within its documentation, but says that it has since updated its rules to address these elements.

Though that’s not enough for at least one U.S. Senator, who’s called for Meta to ban the use of its AI chatbots by minors outright.

As reported by NBC News:

“Sen. Edward Markey said that [Meta] could have avoided the backlash if only it had listened to his warning two years ago. In September 2023, Markey wrote in a letter to Zuckerberg that allowing teens to use AI chatbots would ‘supercharge’ existing problems with social media and posed too many risks. He urged the company to pause the release of AI chatbots until it had an understanding of the impact on minors.”

Which, of course, is a concern that many have raised.

The biggest concern with the accelerated development of AI, and other interactive technologies, is that we don’t fully understand what the impacts of using them might be. And as we’ve seen with social media, which many jurisdictions are now trying to restrict to older teens, the impact on younger audiences can be significant, and it would be better to mitigate that harm ahead of time, as opposed to trying to address it in retrospect.

But progress generally wins out in such considerations, and with U.S. tech companies pointing to the fact that China and Russia are also developing AI, U.S. authorities seem unlikely to implement any significant restrictions on AI development or use at this time.

Which also leads into another concern being leveled at Meta.

According to a new report from The Washington Post, Meta has repeatedly ignored and/or sought to suppress reports of children being sexually propositioned within its VR environments, as it continues to expand its VR social experience.

The report suggests that Meta engaged in a concerted effort to bury such incidents, though Meta has responded by noting that it’s approved 180 different studies into youth safety and well-being in its next-level experiences.

It’s not the first time that concerns have been raised about the mental health impacts of VR, with the more immersive digital environment likely to have an even more significant impact on user perception than social apps.

Various Horizon VR users have reported incidents of sexual assault, even virtual rape, within the VR environment. In response, Meta has added new safety elements, like personal boundaries to restrict unwanted contact, though even with additional safety tools in place, it’s impossible for Meta to counter, or account for, the full impacts of such incidents at this stage.

And at the same time, Meta’s also reduced the minimum age for Horizon Worlds access, first down to 13 years old, then to 10 last year.

That seems like a concern, right? That even as Meta is being forced to implement new safety solutions to protect users, it’s also lowering the age barriers for access to those same experiences.

Of course, Meta may well be conducting further safety studies, as it notes, and those could come back with further insights that help to address safety concerns like this, ahead of a broader take-up of its VR tools. But there is a sense that Meta is willing to push ahead with its projects with progress, rather than safety, as its guiding light. Which, again, is what we saw with social media initially.

Meta has been repeatedly hauled before Congress to answer questions about the safety of both Instagram and Facebook for teen users, and what it knows, or knew, about potential harms among younger audiences. Meta has long denied any direct links between social media usage and teen mental health, though various third-party reports have found clear connections on this front, which is what’s led to the latest efforts to stop young teens from accessing social apps.

But through it all, Meta’s remained steadfast in its approach, and in providing access to as many users as possible.

Which may be of most concern here: that Meta’s willing to ignore outside evidence if it could impede its own business growth.

So you either take Meta at its word, and trust that it is conducting safety experiments to ensure its projects don’t have a negative impact on teens, or you push for Meta to face tougher questioning, based on external studies and evidence to the contrary.

Meta maintains that it’s doing the work, but with so much on the line, it’s worth continuing to raise these questions.   


