https://www.context.news/big-tech/long-read/lawsuits-pile-up-as-us-parents-take-on-social-media-giants

I hadn’t realized the pushback against social media platforms had reached this point.

The article specifically addresses children and teenagers, and the impact that social media platforms have on their mental health. I have a lot of thoughts about it.

1. The concern is valid. Kids are at a developmental point where they can be disproportionately affected by the bullshit social media companies pull.

2. The article mentions interest, even among US Democrats, in rolling back Section 230, the provision of the Communications Decency Act that shields online platforms from liability for content their users post. That would of course be catastrophic, and I think it’s the wrong tack: it misses the real problem at hand.

3. HOWEVER. The article, and the various lawyers and groups involved, liken the negative communal impact of social media to that of tobacco and asbestos, industries that had to be litigated into taking responsibility for the harm they caused. And I think there’s a good argument to be made here. The issue is not the content. It’s how the content is surfaced, and when, and how widely. And these groups are targeting fairly specific platforms: Instagram, Snapchat, TikTok. In short: the issue here is the algorithms.

These algorithms are designed, with varying levels of aggressiveness, to automate the delivery of content based on various factors: how interested the algorithm predicts the user will be, but also how much the algorithm’s creators think they can benefit from the exposure of that content. Will it generate engagement? Will it lift their numbers? Will it let them sell more ad space?
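To make that concrete, here’s a minimal sketch of what an engagement-driven ranker might look like. To be clear, this is hypothetical: the Post fields, the weights, and the rank_feed function are all my inventions, not any platform’s real code. The point is the shape of the thing: every signal in the score is about attention and revenue, and nothing in it asks whether the content is good for the viewer.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # hypothetical model output: chance the user lingers or reacts
    predicted_ad_value: float    # hypothetical estimate of downstream ad revenue
    share_velocity: float        # hypothetical signal: how fast the post is already spreading

def rank_feed(posts: list[Post], engagement_weight: float = 0.5,
              revenue_weight: float = 0.3, virality_weight: float = 0.2) -> list[Post]:
    """Order a feed by a blended 'business value' score.

    The weights are invented for illustration. Note what's NOT here:
    no term for the viewer's wellbeing, only attention and revenue.
    """
    def score(post: Post) -> float:
        return (engagement_weight * post.predicted_engagement
                + revenue_weight * post.predicted_ad_value
                + virality_weight * post.share_velocity)

    return sorted(posts, key=score, reverse=True)
```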

I think there’s a case to be made that steps around Section 230 altogether, because the algorithms are not about content or who is posting what. They’re tools, designed and deployed by the companies themselves, to dictate the flow of that content. A company is not reasonably liable for what users put on its site. It IS reasonably liable for what its own tools do with that content.

And sure enough:

“Legal experts said that case could set an important precedent for how Section 230 applies to the content recommendations that platforms’ algorithms make to users – including those made to children such as Laurie’s daughter.”

Meanwhile, as with a lot of things involving data privacy and respect for consumer welfare, other countries are ahead of the US.

“Outside the United States, the balance has shifted still further, and is beginning to be reflected both in consumer lawsuits and regulation.

“In September, a British government inquest faulted social media exposure for the suicide of a 14-year-old girl, and lawmakers are poised to implement stringent rules for age verification for social media firms.

“But aside from a recent bill in California that mandates ‘age appropriate design’ decisions, efforts in the United States to pass new laws governing digital platforms have largely faltered.”

Age limits and verification are stopgap measures, and invasive besides. The article cites cases where kids under 13 are already circumventing them to create accounts.

But there’s a lot of room to pursue responsible design and deployment of algorithms. Where an algorithm can increase virality, it can also decrease it. And it can be limited in terms of who it’s deployed on; quite frankly, I’d be in favor of turning off algorithmically delivered content for anyone under the age of 16, at least.
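Both of those levers fit in a few lines of code, which is part of why “it’s too hard” rings hollow. Here’s a hedged sketch reusing the hypothetical Post and rank_feed from earlier; the age threshold and damping factor are placeholders of mine, not anything from the article.

```python
from datetime import datetime

MIN_ALGORITHMIC_FEED_AGE = 16  # hypothetical policy threshold from this post

def chronological_feed(posts: list[Post],
                       timestamps: dict[str, datetime]) -> list[Post]:
    """No ranking model at all: newest first, the pre-algorithm default."""
    return sorted(posts, key=lambda p: timestamps[p.post_id], reverse=True)

def build_feed(posts: list[Post], timestamps: dict[str, datetime],
               user_age: int, virality_damping: float = 1.0) -> list[Post]:
    """Assemble a feed under two responsible-design constraints.

    1. Users under MIN_ALGORITHMIC_FEED_AGE never see ranked content;
       they get a plain chronological feed instead.
    2. virality_damping < 1.0 shrinks the share-velocity signal, so the
       same machinery that amplifies spread can be turned around to slow it.
    """
    if user_age < MIN_ALGORITHMIC_FEED_AGE:
        return chronological_feed(posts, timestamps)

    damped = [Post(p.post_id, p.predicted_engagement, p.predicted_ad_value,
                   p.share_velocity * virality_damping)
              for p in posts]
    return rank_feed(damped)
```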
