While many regions are pushing for new age limits on social media access, the key challenge remains that there’s currently no universal technology that enables platforms to verify user ages, nor a process in place that would definitively stop youngsters from accessing social apps.
Over the last year, several European nations, including France, Greece and Denmark, have put their support behind a proposal to restrict social media access for users aged under 15, while Spain has proposed a minimum access age of 16.
Australia and New Zealand are also moving to implement their own laws that would limit social media access to those aged 16 and over, while Norway is also developing its own regulations.
Though none of these pushes is a huge variation from what’s in place right now, with all of the major social platforms already restricted to users aged 13 and up.
Sure, you could argue that those additional years come at a significant developmental time, so the impacts would still be relevant. But the bigger challenge lies in actually enforcing these regulations, and how social platforms and local authorities can feasibly do so in a uniform way.
Right now, any such restrictions are enforced by penalizing each individual platform, with each company having to implement its own checks to stop young teens from accessing its app.
Meta has proposed an alternative on this front, which would make the app stores, owned by Apple and Google, responsible for verifying user ages at the download level. That would take the onus off the platforms themselves, and ensure a more uniform enforcement process. Though, of course, Apple and Google are less keen on that plan, and they’re certainly not going to voluntarily make themselves accountable for any fines as a result of regional violations. As such, they’re lobbying against this push wherever they can.
But the approach does make sense: it would reduce complexity and establish a more definitive checking barrier at the point of entry, which should also have more impact, given that most young teens rely on their parents to buy them a phone.
But barring that possibility, there are several other age-checking systems in testing, which may help to stop youngsters from breaking the rules.
Meta, for example, is currently trialing third-party age-checking, using video analysis from Yoti, in order to estimate each potential user’s age.
Meta has rolled this out in selected regions, across both Facebook and Instagram, but it remains, effectively, in test mode as it continues to assess the option.
That’s the same process that the Australian government is trying out, using video ID to stop teens from getting access, and its more recent trials have suggested that it could be a viable avenue on this front.
As reported by Bloomberg earlier this week:
“The trial’s project director, Tony Allen, said that there were ‘no significant technological barriers’ to stopping under-16s gaining social media accounts. ‘These solutions are technically feasible, can be integrated flexibly into existing services and can support the safety and rights of children online,’ he said. The trial tested a range of methods and technologies, including facial scans, inferring a user’s age based on their behavior, age verification, as well as parental controls.”
That could enable the Australian government to implement this as the baseline measure, and ensure that all platforms are held to the same standards in keeping youngsters off their apps.
Though there is also another potential solution in development, which may have more appeal to tech platforms, given its more futuristic vibe.
As reported by Semafor, Reddit is currently exploring the use of eye-scanning to verify user identity.
As per Semafor:
“Reddit is considering using World ID, the verification system based on iris-scanning Orbs whose parent company was co-founded by OpenAI CEO Sam Altman. According to two people familiar with the matter, World ID could soon become a way for Reddit users to verify that they are unique individuals while remaining anonymous on the platform.”
The aim here, as Semafor notes, is to ensure that a real human is behind each account, with Reddit facing various new challenges related to AI bot profiles and bot interactions in the app.
Iris-scanning would provide a potential solution on this front, while recent research has also suggested that eye scans can accurately estimate a person’s age.
That would mean that, at least in theory, iris scans could soon serve as a new biometric marker for age-checking, categorizing each user via this less intrusive form of ID.
How that might work in practice is another question, as each individual user would seemingly need access to a World ID Orb to have their eyes scanned. There are online verification options available, but hardware access could still prove to be a limitation in this respect.
The bottom line, however, is that we’re moving towards a future where age identification will be more enforceable, and will become a legal requirement, which will also mean more of people’s personally identifying information being stored on servers around the world.
I mean, a heap of personally identifiable and intrusive information is already sitting in the server banks of the major social apps, but soon, there’s likely to be a bigger push for biometric info, which will spark new debates over data security, and the options we have, as users, to control how such information is shared.
But if age limits are to be enacted, a level of confirmation will be required. And the expanded implications of that could be significant for the future of digital surveillance.