Now that Florida has approved a law limiting or banning social media use by kids younger than 16, parents are trying to understand how the rules will work.
Not all parents agree with the law, and many are wondering how it will be enforced.
“Many kids will falsify their information,” said Miami mom Vanessa Rostant, who has a teenage son.
House Bill 3, which was approved on March 25, prohibits children under 14 from having social media accounts, while 14- and 15-year-olds can have accounts with parental consent. The bill also requires sexually explicit websites to use age verification to keep minors from accessing the sites.
The bill won’t go into effect until Jan. 1, 2025. However, debate has already begun over the ethics of the bill and whether children’s social media use warrants government intervention.
And questions remain about how social media sites will implement restrictions and how they will be enforced.
“I don’t believe this law adequately addresses the issues it aims to solve,” said Rostant. “It would be hard to police based on age.”
The bill doesn’t specifically name the platforms that will be affected, but it sets out criteria, such as the use of algorithms and addictive features that let users view other users’ content and activity, that would cover popular platforms like Instagram and TikTok.
Facebook and Instagram already conduct age screenings when users sign up for the platforms. Users already on the platforms can report accounts they suspect are owned by underage users. Meta also has content reviewers who review reported accounts and delete them if the users can’t prove they meet the minimum age requirement to use the platform.
“We look at things like people wishing you a happy birthday and the age written in those messages, for example, ‘Happy 21st Bday!’ or ‘Happy Quinceañera,’” Meta representatives said in a 2021 article. “We also look at the age you shared with us on Facebook and apply it to our other apps where you have linked your accounts and vice versa – so if you share your birthday with us on Facebook, we’ll use the same for your linked account on Instagram.”
Facebook also uses an artificial intelligence program, Yoti, to estimate users’ ages by scanning their faces in a video selfie that they upload. This is only used if a user under 18 tries to change their age to over 18. Users are also asked to provide some form of identification, either a government-issued ID or another form of ID. After the age estimation, both Facebook and Yoti delete the image and don’t share any additional information about the user. This age verification method is only available in some states.
Meta is considering working with operating system providers, internet browsers, and other providers to help identify underage accounts. The company is also looking for ways to make its platforms safer for people under 13.
“The reality is that they’re already online, and with no foolproof way to stop people from misrepresenting their age, we want to build experiences designed specifically for them, managed by parents and guardians,” said Meta. “This includes a new Instagram experience for tweens.”
Rostant believes there are alternatives to these invasive methods.
“I feel perhaps limiting the time a certain account can be logged on to social media might be a better approach to using artificial intelligence, which is a controversial topic right now,” Rostant said.
To ease parents’ concerns about their children being on Facebook, the platform’s help center contains a list of common questions parents may ask, along with answers. Facebook’s safety resources for parents cover topics such as minor safety on Facebook, requesting data from your child’s account, and enabling supervision on a child’s account.