there’s so much to say about this subject that I might expand on some points in subsequent posts.
everything in the below post is from observation and reading about the experiences of others on web 2.0. please feel free to add anything you feel is necessary.
(socmed = social media in shorthand.)
What even is web 2.0?
Web 1.0: web model where dotcoms generated their own content and presented it to users for free, depending on advertisers for their income. ‘social media’ was mostly made up of mailing lists and forums on these content-oriented sites. collapsed because ad revenue wasn’t sufficient to support site maintenance costs.
Web 2.0: web model where dotcoms create a free space for users to generate their own content, depending on advertisers for their income. these sites define social media today. likely to collapse because ad revenue still isn’t sufficient to support site maintenance costs (even after shucking the cost of paying content creators).
(if you want to read more about how ad revenue is the social media Achilles Heel, check this link out: Why Monetizing Social Media Through Advertising Is Doomed To Failure.)
What makes Web 2.0 social media so much worse than web 1.0?
mostly: web 2.0 socmed exacerbates the pre-existing conflict of interest between users and site owners: site owners need ads. Users want to avoid ads.
With web 1.0, users were attracted by site-created content that had to appeal to them: users were the clients and advertisers were the sponsors. (Forum interaction was a side offering. sites dedicated to user interaction were small, scattered, and supported by banner ads.)
Web 2.0 socmed strips users of client status entirely; the content we generate (for free!) and our eyes (and the eyes we attract to the site) are products the site owner sells to the actual site client: advertisers.
early web 2.0 social media sites (livejournal, myspace) used hybridization to pay site costs – users could buy paid accounts or extra blog perks. they also had privacy/limited-spread sharing functions and closed communities, which still ‘exist’ but with limited capabilities on current socmed sites. privacy, it seems, isn’t very profitable.
now web 2.0 is geared towards spreading content as far as possible – and further if you’ll cough up a little cash to grease the algorithms. 😉
–
Web 1.0 had its fair share of problems. Web 2.0 generated new ones:
- following people instead of joining communities based on interests has negative emotional and social implications
- social media sites benefit from knocking down privacy walls. Maximizing content spread and minimizing blocking/blacklisting capabilities benefit advertisers – the true clients of websites.
- social media sites benefit from eroding online anonymity. they track user site interaction, searches, and more to precisely target their ads at your interests (unless you deliberately turn it off). tracking data can endanger anonymity and make doxxing easier.
- social media sites benefit from conflict. Conflict generates user response much more effectively than harmony/peace. More user interaction means more eyes on ads, increasing ad space value.
- social media sites are therefore disincentivized to address abuse reports, increase moderation, improve blacklisting tools, or offer privacy options. and there’s nothing you can do about it because
- there’s nowhere different to go. it’s difficult to compete with existing social media sites as a startup. to draw social media users, a newcomer must offer something bigger, better, and equally free*, and offering any of this on startup capital is … unlikely, at best.
*’I’d move if they just had privacy features!’ the joke is: any successful socmed site that starts with privacy features will have a hard time keeping them down the road under the present profit model. they will be forced to cater to their advertisers if they want to keep afloat.
–
how does the structure of web 2.0 socmed harm fandom?
in aggregate: it forces fandom[$], a diverse space where people go to indulge niche interests and specific tastes, into overexposure to outsiders and to one another, and exacerbates the situation by removing all semi-private interaction spaces, all moderation tools, all content-limiting tools, and all abuse protection.
The result is that fandom on web 2.0 – tumblr in particular – is overrun with widespread misinformation, black & white reasoning obliterating nuanced debates, mob rule and shame culture as substitutes for moderation features, fear of dissent and oversensitivity to disagreement, hatedoms and anti- communities, and large/expanding pockets of extremist echo chambers that have no reality check to protect those trapped inside.
to be more specific:
- moderated communities were replaced by following unmoderated tags, directly leading to and encouraging the creation of hate spaces – ‘don’t tag your hate’ leads to negativity-specific tags that could themselves be followed, forming a foundation for anti- communities to develop from
- no privacy, minimal blacklisting options, poor blocking tools, lack of oversight, lack of meaningful consequences for TOS violations = ‘fandom police’/vigilantism (attempts to assert authority over others without actually having that authority) – some people react to the inability to get away from content that they hate by trying to force that content to stop existing entirely. without actual moderating authority, they accomplish this by social pressure, intimidation, and shame tactics.
- the people-following structure of web 2.0 is fundamentally incompatible with web 2.0 reshare functions and search engines. content posted on a personal blog is rarely intended to stand alone because people who follow the blog presumably see all the blog’s content in an ongoing stream. but reshare functions and search results separate the content from the context in which it was presented, causing misunderstandings and strife. (for site owners, the strife is a feature, not a bug.)
- following people instead of joining communities based on a shared interest creates social stress – following/unfollowing an individual has more social & emotional implications than joining/leaving interest communities
- Unmoderated conflict is polarizing. Web 2.0 specializes in causing unmoderated conflict. – exacerbated by the depersonalizing effect of not being able to see or hear other users, conflict in the unmoderated spaces on web 2.0 social media quickly devolves into extremism and nastiness. web 2.0 socmed structure even eggs the conflict on: people are more likely to interact with content that makes them angry (’someone is wrong on the internet!’ effect), which shares the content with more users, which makes them angry, so they interact (and on, and on).
- The extreme antagonism generated by web 2.0 socmed creates echo chambers – the aggregate effect of unmoderated conflict is that the most extreme and polarizing content gets spread around the most. polarizing content doesn’t tend to convince people to change their minds, but rather entrenches them further in their ideas and undermines the credibility of opposing points of view. it also increases sensitivity to dissent and drives people closer to those who share their opinions, creating echo chambers of agreement.
- reacting to content that enrages you increases the chances of encountering it again, because algorithms – social media site algorithms are generally designed to bring users more of the content they interact with the most, because the sites want more interaction to happen. if you interact with posts that make you mad, you’ll get more recommendations for content that makes you mad. (a rough sketch of this feedback loop is included at the end of this post.)
- everyone has an opinion to share and everyone’s opinion has to be reshared: reactionary blogging as a group solidarity exercise. when something notable happens and everybody has to share their reaction on social media, the reaction itself becomes an emotional and social experience, sometimes overwhelming and damaging.
- when the reaction is righteous anger that everyone can reaffirm in one another, it creates an addictive emotional high. one way to reproduce it? find more enraging content to be mad about (and web 2.0 is happy to bring it to you).
- It’s easy to spread misinformation (and hard to correct it) – no modern social media site offers a way to edit content and have that edit propagate to all reshares. Corrections can only reach a fraction of the original audience of a misleading viral post.
- web 2.0 social media discourages leaving the site, via new-content notifications and a lack of tools that keep your ‘place’ on your dash, which disincentivizes verification checks before resharing content.
- web 2.0’s viral qualities + misinformation machine + rage as a social bonding experience = shame culture and fear of being ‘next’ (tumblr bonus: no time stamps and everything you post is eternal) – when offending content is spread virally, each individual reaction may be proportionate to the original offense, but the combined response is overwhelming and punishing. many people feel the right to have their anger heard and felt by the offender, resulting in a dogpile effect. fear of inciting this kind of widespread negative reaction depresses creativity and the willingness to take risks with shared content or fanworks.
- absolute democracy of information & misinformation plus too much available information leads to uncertainty of who/what is trustworthy and encourages equating feelings to facts – social media doesn’t give content increased spread and weight based on its truthfulness or the credibility of the OP. misinformation is as likely to spread as truth, and the sheer amount of available information – conflicting or not – on the web is overwhelming. when fact-checking, it’s hard to know who to trust, who is twisting the facts, or who is simply looking at the same fact from a different viewpoint. information moves so fast it’s hard to know what ‘fact’ will be debunked by new information tomorrow. People give up; they decide the truth is unknowable, or they go with what ‘feels’ right, out of sheer exhaustion.
- information fatigue caused by web 2.0 makes black & white thinking look attractive – conflict, polarization, and partisanship erode communication to the point that opposing points of view no longer even use language the same way, much less reach a compromise. the wildly different reference points for looking at the same issue make it difficult to even know what the middle ground is. from an outside point of view this makes everyone on both sides seem untrustworthy and distances the objective truth from everyone even more.
- it’s easy to radicalize people who are looking for someone or something to trust/are tired of being uncertain – information fatigue leads to people just wanting to be told what to think. who’s good and who’s bad? whose fault is this? and don’t worry – lots of people are ready to jump in and tell you what to think and who to blame.
- everyone is only 2 seconds away from being doxxed: our anonymity on the net is paper-thin thanks to web 2.0 – before facebook encouraged using our real names and before the gradual migration of most people onto a few major socmed sites, anonymity was easier to maintain. now we have long internet histories with consistent usernames and sites that track everything we do to improve ad targeting. anyone with minimal hacking knowledge could doxx the vast majority of socmed users.
- and all it takes is one poorly-worded, virally spread tweet to send the whole of twitter after you with pitchforks.
[$] using the vld discourse survey as a reference, fandom is (probably) largely neurodivergent, largely queer/lesbian/gay/bi/pan/not straight, has many non-cis and/or afab members, and around 20% are abuse survivors/victims. fandom is a space we made for ourselves to cater to the interests we have in common with each other but mainstream society doesn’t often acknowledge.
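to make the feedback-loop point above a little more concrete, here’s a minimal, made-up sketch of an engagement-driven ranking rule (in python). none of this is any real site’s code – the post data, the scoring rule, and the rank_feed function are assumptions for illustration only – but it shows how a ranker that only counts interactions, and never asks why you interacted, keeps serving you the thing that made you mad.

```python
from collections import Counter

# hypothetical posts, each tagged with a topic
POSTS = [
    {"id": 1, "topic": "ship_discourse"},
    {"id": 2, "topic": "fanart"},
    {"id": 3, "topic": "ship_discourse"},
    {"id": 4, "topic": "fic_recs"},
]

def rank_feed(posts, interactions):
    """rank posts by how often the user has interacted with their topic.

    `interactions` maps topic -> number of past likes/replies/reblogs.
    an angry reply counts exactly the same as an enthusiastic one: the
    ranker only measures engagement, never sentiment.
    """
    return sorted(posts, key=lambda p: interactions[p["topic"]], reverse=True)

# the user rage-replies to two discourse posts and likes one piece of fanart...
interactions = Counter({"ship_discourse": 2, "fanart": 1})

# ...and the feed now leads with more of the content that made them mad.
for post in rank_feed(POSTS, interactions):
    print(post["id"], post["topic"])
```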
Not to trash freedom-of-fanfic, because I’m a big fan. But while there are some good points here, there are also quite a few mistaken assumptions. In particular, I disagree with all the initial core points.
For starters, the definition of Web 2.0 here is wrong. Web 2.0 started with the rise of blogs. Web 2.0 basically means the internet as an interactive space vs. the static websites that came before. Just to be clear, Web 2.0 enabled online fandom as we now know it, AND it ruined it. It’s pretty much constantly doing both simultaneously.
I disagree with this: “site owners need ads. Users want to avoid ads.” Ads are odious, yes. However, they are not the only way to monetize a site. They are not an inescapable part of Web 2.0. They are also frankly not the worst thing going on in Web 2.0. Ads in themselves do only minor harm compared to a number of other behaviors–such as reverse-engineering ads into your site when your site wasn’t originally designed for them.
I disagree with “Web 2.0 socmed strips users of client status entirely” because this also implies it’s an intrinsic quality of Web 2.0 social media. It is not. It is ONE method by which social media sites can operate, but they have other options. Livejournal, as noted, sold optional paid subscriptions and add-ons. It was doing pretty well with that model. While they never released their revenue reports, a number of anecdotal reports from ex-employees indicate that Livejournal was profitable. The mistakes came in repeated fumbled attempts at community management which drove a large portion of the user base away.
Another point of disagreement: Most of what’s in this post does apply accurately to Tumblr and Twitter. But it’s a mistake to assume that what goes on with Tumblr and Twitter is representative of social media as a whole, or even the direction social media as a whole is taking. These two sites are deliberately structured NOT to model human community, but rather to generate and reproduce memes. This is a very different logical architecture than most social media sites. All the stuff about maximizing content spread and eroding community boundaries is specific to Tumblr and Twitter.
Not a disagreement but I think it’s an overlooked point: There’s no mention here of the rise of interest-based algorithms, and they are way more important to creating the current online social climate (and the social climate in general) than most people give them credit for. The echo chamber effect is driven primarily by interest-based algorithms, in which content is delivered to you based on a set of keywords the site has generated that it believes represents what you’re most interested in (with an occasional extra slipped in there to make sure you get your share of ad content).
Interest-based algorithms are separate from but do often go hand in hand with social media advertising, because the site needs some way to identify the best audience for a given piece of advertising content. Beware where you see them rise (Tumblr, I’m looking at you), for ads are likely to follow.
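To make that concrete, here’s a minimal sketch of how an interest-based algorithm of that kind can work, under my own assumptions: the site keeps a set of keywords it thinks describes you, ranks content by keyword overlap with that profile, and slips a sponsored item in at a fixed interval. The names, data, and ad-placement rule below are all invented for illustration; they’re not taken from any actual platform.

```python
# hypothetical sponsored item that gets mixed into the feed
SPONSORED = {"title": "Buy This Thing", "keywords": {"shopping"}}

def build_feed(posts, interest_keywords, ad_every=4):
    """Order posts by keyword overlap with the user's inferred interest
    profile, inserting a sponsored item at a fixed interval."""
    scored = sorted(
        posts,
        key=lambda p: len(p["keywords"] & interest_keywords),
        reverse=True,
    )
    feed = []
    for i, post in enumerate(scored, start=1):
        feed.append(post)
        if i % ad_every == 0:
            feed.append(SPONSORED)  # "your share of ad content"
    return feed

posts = [
    {"title": "meta: why this ship works", "keywords": {"ship", "meta"}},
    {"title": "cat photo", "keywords": {"cats"}},
    {"title": "new chapter up!", "keywords": {"fic", "ship"}},
]

profile = {"ship", "fic"}  # keywords the site has inferred about this user
for item in build_feed(posts, profile, ad_every=2):
    print(item["title"])
```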
Now back to monetization: yes, it is a problem–but a specific pattern of monetization is the problem. Namely, it’s very common for social media platforms to launch and begin to grow, only to discover as they grow that growing costs money. THEN they have to figure out how to fund themselves, because they didn’t bother thinking about it before.
Reverse-engineering revenue generation into your site is always a bigger challenge than building it in at the beginning. 1: It’s never going to fit as smoothly, or without altering some of the site’s existing functionality; 2: now you’ve got a user base you’re going to disrupt with your shenanigans.
Ads are not the only possible solution. They’re just a popular one because they’re seen as being an easy solution by people who haven’t done their research into the market. (This means most social media developers, because these people are usually coders who have no background in the sociology of the internet or user psychology, and they’re either lazy or too caught up in their own optimistic idealism to bother facing facts about what they’re building.)
This does piss me off because it’s dumbass, and these people get warned, over and over again, that this IS going to be a problem and that “we’ll figure it out later” has historically never worked. It’s lazy, and it’s laziness that happens at the expense of the site’s user base. And at the expense of their company, because for all the massively used social media platforms out there, very few of them are actually earning income (Twitter has never turned a profit since it was first launched. Tumblr? Also not profitable. Facebook is, but at what cost?)
Every now and then, you find a site that takes funding itself into consideration BEFORE it becomes a problem. Dreamwidth, for example. Writscrib may end up being another good example.
Finally, regarding the profitability of unmoderated conflict: this also may be wrong. There are indications from studies that unrestrained conflict may do more economic harm than good, both in terms of slowing or even reversing growth of the user base, and in terms of the amount of investment that sites can attract. It may also adversely impact the bottom line of companies that advertise on conflict-prone websites, at least insofar as it lowers their earnings-per-ad-dollar.
The reason you don’t see more conflict moderation is…well, honestly? The primary reason is because most of these sites are captained by brohams, who hire brohams who either don’t realize or actively don’t care that they’re not doing anybody any favors by letting this stuff slide. Also the companies don’t want to invest in the staffing or training that would be required to ensure fair and just moderation. They haven’t quite caught on yet that there’s no future in allowing that sort of behavior.
Mainly I don’t want the takeaway of readers to be, “Oh, well, we’re just inevitably going to hell then.” That isn’t true. There are other options out there, some of which exist now and some of which can be built.
Along with Dreamwidth, Mastodon, and Writscrib, keep in mind that Discord, Snapchat, and other such ‘private’ networks are social media platforms too.
The biggest trick of social media is that the value of it to a user increases according to the number of other users on there who you want to connect with. This is called the network effect. So it’s up to YOU to choose your next platform not based on where “everybody else is going,” but based on what actually works.
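As a toy illustration of that last point (all names and numbers below are invented, not data about any real platform): the platform with the biggest total user count is not necessarily the one with the most value to you.

```python
def value_to_me(my_contacts, platform_users):
    """Count how many of my contacts are reachable on a given platform."""
    return len(my_contacts & platform_users)

my_contacts = {"alex", "jamie", "kit", "mori"}

# a huge general-purpose site vs. a small instance where my people actually are
platforms = {
    "megasite": {"alex"} | {f"stranger_{i}" for i in range(50_000)},
    "small_instance": {"alex", "jamie", "kit", "mori"},
}

for name, users in platforms.items():
    print(name, "| total users:", len(users),
          "| contacts I can reach:", value_to_me(my_contacts, users))

# the huge site wins on raw headcount but loses on the metric that actually
# matters to an individual user: who you can actually talk to there.
```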
from Tumblr http://ift.tt/2D1rfnb