In which we update you about our lawsuits, and welcome a wave of new friends!
As usual, I looked around and realized that the news update I could have sworn I'd just written actually went up months and months ago, so I thought I'd take a few moments to update you all on the progress we've been making recently with NetChoice in our fight against the wave of terrible and unconstitutional social media bills that have been sweeping the country. (And if you live in the US and have a moment, please call your Representative and tell them to oppose KOSA, the "Kids' Online Safety Act": it passed the Senate, but there's still time to stop it in the House. It's the national version of all the state laws we've been fighting, and if you've been around for a while, you know why those are a problem; if you're new, the EFF has a great short overview.)
First, though, I'd like to offer a warm welcome to our new dwenizens from Cohost, which is sadly shutting down at the end of the month. You can check out the tips for new users we posted when we had an influx of folks joining us from Reddit; to that, I'd add that you should check out the beta features. You can ignore the "Temporarily revert updated journal page components" beta (we keep that around to help people who were having trouble with some elements of the design updates), and the "Two-Factor Authentication" beta will let you add your 2FA provider but won't actually issue 2FA tokens yet (we keep finding more and more legacy "log in while you're doing a task" methods we have to figure out the 2FA workflow for), but the New Create Entries beta and the New Inbox beta are both (we think) much superior versions of those pages. We'll be enabling them for all users sometime relatively soon (okay, 'soon' by our standards), so we want to hear your feedback now, especially if you use assistive technology! If you're coming to us from Cohost and are looking for other Cohosters, feel free to matchmake in the comments. If you're looking for new friends, the unofficial community addme is a good start, and browsing dw_community_promo will let you find (or promote!) communities on all sorts of topics.
Anyway, since we last spoke, Dreamwidth has joined on to two more of NetChoice's lawsuits challenging unconstitutional deanonymization and mandated-censorship bills. In addition to the two we've already told you about, NetChoice, LLC v. Bonta in California and NetChoice, LLC v. Yost in Ohio, we've added NetChoice, LLC v. Fitch in Mississippi and NetChoice, LLC v. Reyes in Utah. (That last one was originally filed before we became members, but Utah pulled a fast one where they pinky-swore to the court right before the deadline that they were going to repeal and replace the law with a much-improved version, and then the version they replaced it with actually made things worse and affected a lot more sites, so we were thrilled to join on to the revised lawsuit.)
The July Supreme Court opinion in NetChoice v. Moody (consolidated with NetChoice v. Paxton) made it clear that sites' content moderation decisions are protected by the First Amendment, and there's an upcoming case this term (Free Speech Coalition v. Paxton) that will tee up the constitutionality of "age verification" requirements. (The scare quotes are there because there's no conclusive way to "verify" the age of the person who's using an account at any given moment: at most you can approximate it at account creation.) Until then, remember: there's no way for a site to treat adults and minors differently unless it can conclusively identify who's an adult and who's a minor, and the only way for a site to do that conclusively is to deanonymize every single visitor and require you to upload your government ID in order to make an account. As I keep saying over and over in our declarations in support of these lawsuits: we don't want to ask you for that data, and we know you don't want to give it to us.
There's a whole heap of other issues with these bills, too: for instance, it's getting more and more common for states to throw in "parental consent" clauses, whereby a parent can write in to a site and demand access to or control of their under-18 kid's account to various degrees. That's also impossible for a site to establish: we have no way of telling who's the parent of one of our users at all, much less whether they're the parent with legal decision-making authority, and there are dozens of scenarios where that ends incredibly badly. (I usually cover so many of them in my declarations that the outside lawyers NetChoice is working with have to politely ask me to pare them down a bit!) Most of these bills also include an obligation to restrict access to "content harmful to minors", and the definition of "harmful to minors" is always, always politicized: you can probably all sing along with the refrain by now because of how often I climb up on this soapbox, but there's well-established evidence that content by creators from historically marginalized groups, and especially content by queer creators, is judged "harmful to minors" at a much higher rate than functionally identical content by creators from groups that aren't historically marginalized. Each state that has included a "harmful to minors" clause has defined it differently, but there's no doubt all of them sweep up a broad range of speech that could be helpful to at least some of the minors who would be denied access to it.
I don't want to talk your ears off about all of the problems in all of these laws, because we'd be here all day -- my declarations in NetChoice v. Reyes and NetChoice v. Fitch cover a lot of them, but there's even more I didn't have time to address. (And that's with my declaration in NetChoice v. Fitch running 23 pages!) Fortunately, there's good news: judges, and even appellate courts, across the country have been agreeing with us that these bills are unconstitutional, and it's not even close. Here's our win record so far:
- NetChoice, LLC v. Bonta (5:22-cv-08861), N.D. California: Judge Beth Labson Freeman granted the preliminary injunction preventing the law from going into effect on 18 Sept 2023.
- California appealed: NetChoice, LLC v. Bonta (23-2969), 9th Circuit. About a month ago, on 16 August 2024, the 9th Circuit affirmed the district court for the most part and kept the injunction in place. The case now returns to the district court for further development.
- NetChoice, LLC v. Yost (2:24-cv-00047), S.D. Ohio: Judge Algenon Marbley granted the preliminary injunction on 12 Feb 2024. To Ohio's (tiny bit of) credit, they didn't bother dragging out an appeal of the preliminary injunction: we have now moved on to the motions for summary judgment, in which both sides make their arguments to the judge on why the other side doesn't have a possible case, and are awaiting the judge's ruling on those.
- NetChoice, LLC v. Fitch (1:24-cv-00170), S.D. Mississippi: Judge Halil Suleyman Ozerden granted the preliminary injunction preventing the law from going into effect on 1 July 2024. (This is good, because the law was passed on 30 Apr 2024 and set to take effect 1 July: I could not believe how fast NetChoice and their outside counsel got that turned around!)
- Mississippi appealed: NetChoice v. Fitch (24-60341), 5th Circuit. We're still waiting for a ruling on that one, and the Fifth Circuit is known as the "Fifth Circus" for very good reason: I would not be surprised to see this one make it all the way up to the Supreme Court, too, because nobody trusts the Fifth Circus to be sensible. (Of course, nobody trusts the Supreme Court to be sensible, either, sigh.)
- NetChoice, LLC v. Reyes (2:23-cv-00911), D. Utah: This is the one that made me realize "oh, I haven't updated everyone on things in ages"! Judge Robert Shelby granted the preliminary injunction preventing the law from going into effect on 10 Sept 2024. (Check out page 31: our declarations have been cited in the other decisions, but this one devoted nearly an entire paragraph to judge-speak for "hey, uh, you claim this law is narrowly tailored, but Dreamwidth easily disproves that". This one is especially funny because Utah governor Spencer Cox spent a lot of time loudly yelling at First Amendment attorneys on Twitter about how of course the court would agree with him! The court really, really did not agree with him.)

I always struggle with trying to figure out how much of an update on our advocacy work is significant enough to deserve a full dw_news post, because I know there are folks who want to stay informed, but having sitewide announcements for every little thing would quickly get annoying. To solve this problem, we've started dw_advocacy, an announcements community in which we'll post announcements of wins, announcements of new cases we're participating in, and (if I have time!) deep dives into the legal issues that show up frequently in these challenges and the cases that keep getting cited over and over again. Subscribe to the community for everything from my glee at getting to scratch another state off the "I helped sue you!" list to my "hopefully reasonably okay for a non-lawyer" explanations of landmark cases that show why the latest state to pull some shenanigans should already know why their shenanigans are unconstitutional!

As always, our deepest gratitude to NetChoice for picking these fights and for inviting us along for them. We don't always agree with every lawsuit they choose to file, but that's precisely why NetChoice member companies don't have control over which fights NetChoice decides to pick, only which lawsuits we'll choose to give evidence for. Even when we disagree with them, the folks at the litigation center are extremely passionate about digital civil liberties, and they're a delight to work with. My main contact at the litigation center always tells me how heartening and inspiring he finds it to see how absolutely enthusiastic our users are about these issues and how much you all care that we're fighting (and winning!) these battles!

And one last announcement: if you didn't see our notice in dw_maintenance, we've switched our offsite downtime notification/status page away from Twitter (excuse me, "X") because of their sharp decline in Trust & Safety standards, the fact that people can no longer see posts or timelines without being logged in to a site account, and the general ongoing instability of the service. Our new offsite downtime notification/status page can be found at dreamwidth.org on Bluesky. Please bookmark that page! In the event we can't reach dw_maintenance to let you know of any issues, we'll post there.

As always, thank you all for using and supporting Dreamwidth. We have the freedom to be so passionate about these fights for online civil liberties because we don't have to worry about keeping advertisers or investors happy: the fact that we're 100% user-supported gives us so much more leeway to give states the finger when they want us to compromise your privacy for the sake of "protecting the children" efforts that will do nothing to actually protect children online. (Isn't it so interesting that Senator Wyden's Invest In Child Safety Act, which would actually make a meaningful difference in protecting kids online, has gone nowhere? Why, it's almost like none of this is about protecting children at all.)

We remain committed to keeping Dreamwidth 100% free of advertising, venture capital, and outside investment, no matter what it takes. People ask us all the time whether we can raise the limits on some of our restrictions like image hosting and icons (two of the most expensive features we offer), and we would love to be able to, but our costs keep rising, and inflation has outpaced the advances in disk space and transfer costs in the last few years. Your support is still covering our costs of operation, but the fact that we've been in business for 15 years while our prices have remained the same has been nibbling away at the leeway we have. We aren't in any danger, but to make absolutely certain that remains true, we've started having the extremely difficult internal conversations about raising our prices, which were first set in 2009, so they better reflect 2024 costs and the 2024 value of the dollar. Right now, we're still in the very early stages of that discussion (and we'd love to hear your thoughts!) and it's too soon to say what we'll end up deciding, but in the meantime, if you have a few dollars to spare, please consider buying some paid time, for your account, for a friend, or for a random active user. The financial support of those of you who choose to pay us is what allows us to keep offering the site for everybody, and we're incredibly grateful to those of you who keep offering that support!
no subject
And thank you! I really would post to
no subject
(also, hi!)
no subject
One of the really interesting things about folks moving in from Cohost: I was doing the spam review and I could more or less tell the exact moment when they posted their shutdown announcement, just because all of a sudden I was no longer able to just glance at a profile and know instantly if it was spam or not: it was taking me 20-30 seconds per profile instead of "as fast as it loads". Y'all's username patterns and the way you fill out your profiles are just different enough from our prior patterns of username/profile/icon/etc that they broke my pattern recognition! Also interesting: it took me about eight or nine days before the pattern recognition adapted and I could speed back up again; I'm still not back up to "don't even have to consciously pay attention, just flip through with your eyes slightly unfocused and your brain will tell you when you hit a spammer" autopilot mode, but I'm getting closer and closer. I still have to actually read about 25% of profiles instead of just perceiving them as a gestalt, but the percentage keeps going down. Human brains are fucking wild, yo.
Anyway, we get about 150-200 new accounts a day on average, so it's not that burdensome; before the recent fluctuation in user patterns, I could zip through that in about half an hour a day. (We don't always do it daily, but we try not to let any more than about three days go by before we get current.) There's two of us working on it, me and
I keep meaning to get my development environment back up and running (it wound up fucked, to use the technical term, during my two years or so of "my spine is disintegrating and I can't sit up more than about 45 minutes a day" and I just haven't gotten around to unfucking it yet) and build some tools to make the actual workflow suck a little less, but my to-do list is always as long as my arm and I haven't gotten there yet, sigh. And of course it's the kind of thing where the spec is horribly amorphous and I can't communicate it well to someone else so they can implement it, because I don't even really know what I want myself yet and I'm gonna have to bang on it a bit and experiment with what will actually improve the workflow and what will just be superfluous yak-shaving.
But yeah, it's actually not as bad as you would think when you hear the words "manual review". Most of what the detection systems don't catch are either the really subtle accounts that are trying very hard to look like an actual user of the site and it's just total coincidence that they've got that link in their profile/entries, or accounts that haven't yet activated but are going to after they've aged a bit (at which point the detection systems will catch them, but if you get a reputation among the spam shops who do this for detecting/closing accounts before they can activate, you get taken off their "soft targets" lists and it lowers your overall spam amount). We get some small amount of comment spam and a slightly larger amount of post spam, but the vast majority of our spam is profile-only backlink spam where someone's trying to buy a spot on the first page of Google rankings by making it look like a lot of people are linking to that site on social media. That kind of stuff is less visible to users, so it's not as much of a problem if we miss a few, but again, the more you get the reputation for being able to shut those accounts down fast, the less spam you get overall.
What's really sad: our spam-account percentage was creeping ever upward, and by the time it was hitting "80% of all newly created accounts are spam" territory, we were going "we need a much better way to block these registrations before they even happen, because obviously our filters aren't working". We started digging into it and realized that if we just used our hosting provider's filters to geoblock access to the account creation page from any IP address that geolocates to seven specific countries we had very, very few actual users from, it would knock out the overwhelming majority of our spam. (Because people always ask: Bangladesh, Cambodia, India, Indonesia, Pakistan, Singapore, and Vietnam. I've been eyeing Egypt and Morocco lately, too, but they come in bursts.) I don't like that we had to do it, but that one move knocked out a good four-fifths of our spam overnight, because most of the spam farms that generate the majority of the garbage don't bother trying to evade those kinds of blocks: it's cheaper for them to just move on to a different target. So much of the internet has become one giant game of "I don't have to outrun the bear, I just have to outrun you," sigh.
But people would be shocked by just how much of the internet is spam and how hard sites have to work to keep from being overrun. I was talking about the spam problem on another social site a while back and when people just would not believe me about the extent of the problem, I dug up some actual figures from various large sites' transparency reports:
* Meta (unsurprisingly) issues their transparency reports in a completely fucky way that makes it difficult to get exact numbers, but here's their spam section;
* In Q2 2024, Discord removed 8 million accounts for spam, compared to 300K accounts for all other policy violations;
* Reddit removed 173 million pieces of content from July-December 2023, and 70% of the removals were spam;
* In Q1 2024, YouTube removed 15.7 million channels (representing 104 million total videos); 96% of those removals were for spam. They also removed 1.4 billion comments on videos; 84% of those removals were for spam;
* Xbox removed 10.2M accounts from July-Dec 2023; 7.3M of them were removed for spam (and automated cheating; they lump the two together);
* LinkedIn removed ~108M accounts for spam July-Dec 2023, with ~400K accounts removed for all other violations;
* Twitter (excuse me, I mean X) no longer issues transparency reports, but in July-Dec 2021, they blocked 133 million spam accounts from creation and removed an additional 5.4M spam accounts that snuck through.
Etcetera, etcetera. It's just so deeply depressing. If there's one thing that could make me lose my perpetual optimism in the idea of the internet, even if some of the actual execution has turned out to be less than optimal, it's the goddamn spam. And that's even before you start to learn about the conditions (or rather, in a lot of cases, "the threats") a lot of those people are working under :/
no subject
just 30ish minutes? that's pretty impressive! but also i guess these spam accounts are mostly using generated text as opposed to having someone actually make something up- easier to spot patterns when the generated content is built on patterns.
it's also interesting- i help run a website for work and we have a plugin that will show how many IPs are being blocked from different countries, and the ones you mentioned are almost always in that top 5 list. i think i always just assumed they had some bots do it, but having heard of what goes on in the content moderation space, it wouldn't surprise me at all that people are being exploited there too.
thanks for the insight- it's an interesting read!
no subject
But it's an actual booming industry in Pakistan, India, and Vietnam in particular! If you ever look on Fiverr, almost 100% of the listings in the SEO Services category are either from an agency in one of those countries or from someone who contracts out to one of those agencies. I once suspended a backlink spammer's small collection of accounts and he wrote in to us angrily demanding that we unsuspend them, because they were his homework and why did we suspend them, his teacher in his spamming, I mean marketing, class told him that Dreamwidth was a great place to spam and if we didn't unsuspend them he was going to get a bad grade in spamming. (Needless to say, I did not unsuspend them. I did, however, tell him to tell his teacher to stop fucking telling students that DW was a good place to spam.) At least part of why so many sites out there have chosen the "walled garden" approach instead of the public approach is because "walled garden" at least cuts off some of one category of spammers (the ones who are doing it for search engine juice), and that's one of the larger categories out there, because no matter what you do, you absolutely cannot make the people who are doing it understand that yes, when we say "no spam", we also mean your SEO fake-backlink spam. At least scammers mostly know they're doing something wrong.
(I say "cuts off some of", because a lot of these people are so rotely following instructions they got somewhere else that they don't actually know how to evaluate whether or not their spam is effective. Because of stupid technical reasons, we were completely invisible to Google for about six months sometime ... last year or so? so nothing that got posted gave any Google juice at all. Spam went up! Google literally could not see the accounts! They were just wasting their time! But they don't know how to evaluate whether their techniques and campaigns are effective or not -- a lot of times they even used to make the site's rankings worse, because Google has sort of thrown in the towel now but they used to be really aggressive about detecting and penalizing that kind of behavior -- because they never actually learned anything about search engine optimization; all they learned was how to spam.)
...see? I told you, push button, get lecture, lol.
But yeah, most of our spam is humans, there's just a lot of them. And they have a gigantic number of IP addresses, both wired and mobile, and they have software that destroys the VM they just used and spins up a new one (to defeat browser fingerprinting) and grabs a fresh IP address after every spam account they make, and there are entire office complexes in Karachi and Lahore that have network drops from 20 different ISPs and employ thousands of people, sitting there spamming away, day in and day out. And I don't blame the individual people who are doing the actual work, they're just trying to do something easy to feed their families, but the people who run those operations? Yeah, I just want fifteen minutes alone with them to have a chat while the security cameras are off. What, this? That's my emotional support crowbar. It emotionally supports me.
(And then I'm going to visit Fiverr, because they know goddamn well all of those listings are for spamming services, and my emotional support crowbar and I would like a word.)