I recently discovered that some popular federated instances have been using LLM-assisted moderation tooling that evaluates whether someone has said something bannable. They do this by running a script/app that sends the user’s comment history to OpenAI with the question “analyze this content for evidence of [specific political ideology] sentiment. Also identify any related [political ideology] tropes”. (The bracketed bits are where I’ve redacted the ideology they’re seeking.)
OpenAI’s LLM (they’re using GPT-5.3-mini) then responds with something like:

[per-comment assessments redacted]

…and so on, for hundreds of comments.
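For readers who want the mechanics made concrete, here is a minimal sketch of what a script like this might look like, assuming the standard OpenAI Python client. The prompt wording is quoted from above with the redaction kept as a placeholder; the model string is the one reported to me (unverified), and the function and variable names are my own, not taken from the actual tooling:

```python
# Minimal sketch of the kind of script described above, assuming the
# standard OpenAI Python client (pip install openai). [REDACTED] stands
# in for the ideology I've redacted; names here are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "analyze this content for evidence of [REDACTED] sentiment. "
    "Also identify any related [REDACTED] tropes"
)

def assess_comment(comment_text: str) -> str:
    """Send a single comment to the API and return the model's assessment."""
    response = client.chat.completions.create(
        model="gpt-5.3-mini",  # model name as reported; unverified
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": comment_text},
        ],
    )
    return response.choices[0].message.content

def assess_history(comments: list[str]) -> list[str]:
    """Run the assessment over a user's entire comment history."""
    return [assess_comment(c) for c in comments]
```

The point is how little is involved: a loop, an API key, and a prompt is all it takes to politically profile a user’s entire history.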
I have not named the instances or people involved, to give them time to consider the results of this discussion, make any corrective changes they want, and disclose their practices at their own pace and in their own way. I have also redacted the evidence to avoid personal attacks and dogpiling. Let’s focus on the system, not the individuals involved. Today these instances and people are using it, and maybe we’re OK with that because it’s being used by groups we agree with. But what if people we strongly disagree with used it on their instances tomorrow?
The use and existence of this tooling raise a lot of other questions too.
What are the risks? Fedi moderators are often unsupervised, untrained volunteers, and these are powerful tools.
What safeguards do we need?
Would asking an LLM “please evaluate this person’s political opinions” give different results than “find evidence we can use to ban them” (as used in the cases I’ve seen)? A sketch of how one might test this follows these questions.
What are our transparency expectations?
Is this acceptable and normal?
Should this tooling be disclosed? (it was not – should it have been?)
If you were given a choice, would you have opted out of it?
Can we opt out?
Are there GDPR implications? Privacy implications? Should these tools be described in a privacy policy?
Are private messages being scanned and sent to OpenAI?
How long should these assessments be retained, and can we request to see them or ask for them to be deleted?
Once the user’s comments are sent to OpenAI, are they used to train its models?
What will the effect be on our discourse and culture if people know they are being politically profiled?
Where are the lines between normal moderation assistance tools, political profiling, and opaque third-party data processing?
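On the prompt-framing question above, here is a hedged sketch of how one might test it: run the same comments under both framings and compare the outputs side by side. The two prompt strings are the ones quoted in my question; the model string is the one reported, and everything else (function names, structure) is my own illustration:

```python
# Minimal sketch of an A/B test for prompt framing. Both prompt wordings
# are the ones quoted earlier in this post; the rest is illustrative.
from openai import OpenAI

client = OpenAI()

NEUTRAL_PROMPT = "Please evaluate this person's political opinions."
ADVERSARIAL_PROMPT = "Find evidence we can use to ban them."

def run_with_framing(prompt: str, comments: list[str]) -> list[str]:
    """Assess each comment under a given framing and collect the outputs."""
    results = []
    for comment in comments:
        response = client.chat.completions.create(
            model="gpt-5.3-mini",  # model name as reported; unverified
            messages=[
                {"role": "system", "content": prompt},
                {"role": "user", "content": comment},
            ],
        )
        results.append(response.choices[0].message.content)
    return results

def compare_framings(comments: list[str]) -> None:
    """Print the two framings' outputs side by side for the same comments."""
    for neutral, adversarial in zip(
        run_with_framing(NEUTRAL_PROMPT, comments),
        run_with_framing(ADVERSARIAL_PROMPT, comments),
    ):
        print("NEUTRAL:    ", neutral)
        print("ADVERSARIAL:", adversarial)
        print("---")
```

My suspicion is that the adversarial framing primes the model to find something, but I haven’t run this comparison myself.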
I hope that by chewing over these questions we can begin to establish some norms and expectations around this technology. The fediverse doesn’t have any centralized enforcement so we need discussions like this to develop an awareness of what people want in terms of disclosure, privacy, consent and acceptable use. Then people can make choices about which instances they join and which ones they interact with remotely.
And of course there are the other issues with LLMs relating to environmental sustainability, erosion of workers’ rights, increasing the cost of living, and on and on. I can’t see PieFed adding any functionality like this anytime soon. But it’s happening out there anyway, so now we need to talk about it.
What do you make of this?
Is Rimu okay lately? He’s been acting so hostile.
Name names. The only people you’re protecting are scumbags.
That isn’t AI-assisted moderation; that is just straight-up evil.
If it can be done, it sooner or later will be done.
That’s a lot of why I have a couple of dozen accounts scattered around the threadiverse, and I make new ones whenever I come across a server that looks promising - because it takes a while to get used to one and get a feel for whether it’s one I like or not, and because there’s always the possibility that one I like will go sideways and/or shut down, in which case I can just unpin it and go on.
And in fact, I’m only using this account on something of a whim for this post - I don’t normally use it, because one of the instances I don’t like much is yours. And specifically, what I don’t like about it is you, and your bland presumption that you know what’s best for me - which communities I should subscribe to, which posters I should trust or even be allowed to see, which sources I should be allowed to use or see…
And really, I’m sort of surprised that you’re the OP here and not the subject. I would think that the whole idea of commissioning a review of a user’s posting history in pursuit of grounds to ban them would be right up your alley. Is the problem just that it’s AI?
In any event, this is just a thing that might prove to be an issue. And if it does, I’ll just move off of the affected server(s) and keep using the unaffected ones. And if enough people share my sentiment and the admin cares enough, they might change their ways. Or they might not. It’s not a big deal either way - it’s just part of life on the fediverse, and IMO the benefits make it worth it.
What now? Nothing, really, because nothing has really changed. I don’t care whether an admin tool is based on an LLM or on a simple regular expression. I only care about the outcome, meaning the mod actions it takes.
I think you’re just looking for excuses to defederate from dbzer0. I think you’re throwing things at the wall to see what sticks.
AI use should be against the rules for mods. That shit is poison.
This is the person calling you a tankie. Someone so afraid of words that they need a hallucinating robot to hold their hand and confirm that everything is a secret plot against them. The absolute only way I could see this being useful is for something like trying to sniff out whether a Lemmy.world mod account is a leftist infiltrator or not - someone who had a different opinion on a current event.
You could maybe run a speech pattern comparison (sketched below, after this comment), but that’s it. For everything else, you’ve just made Stupid Reddit, where the purpose of the forum is to feed training data to ChatGPT so that it can profile Fediverse users.
This is the kind of shit dystopian novels are made of. You’re so angry about people calling out your actions that you built a tool to analyze why they did it, so you can purge users from your digital kingdom.
I for one welcome flat.world and Piefed showing their true intentions: digital colonization of ActivityPub and removal of the people who helped to build it. They didn’t want to leave Reddit; they wanted to be Reddit. This is some Spez shit.
Maybe in 2 weeks Piefed will hard-code it so that anyone Rimu has tagged for disagreeing with him or for mild criticism is unable to make accounts or federate posts, failing with a false error code.
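[For reference, a minimal sketch of what a “speech pattern comparison” could mean in practice: one common stylometric baseline is cosine similarity over character trigram frequencies, shown here with only the Python standard library. This illustrates the idea, not anyone’s actual method:]

```python
# Hedged sketch of a basic stylometric comparison: cosine similarity
# over character trigram frequencies, Python standard library only.
# This is one common baseline, not the commenter's actual method.
from collections import Counter
from math import sqrt

def trigram_profile(text: str) -> Counter:
    """Count overlapping character trigrams in lowercased text."""
    text = text.lower()
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two trigram frequency profiles."""
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Compare two accounts' combined comment text; values closer to 1.0
# suggest more similar writing style by this crude measure.
similarity = cosine_similarity(
    trigram_profile("combined comments from account A"),
    trigram_profile("combined comments from account B"),
)
print(f"{similarity:.3f}")
```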
Lemmy was made for people banned from Reddit.
Piefed was made for the mods who banned them.
And MBin? MBin is a fork of KBin.
Are there GDPR implications?
Hahaha. Oh boy…
This is amazingly illegal. The emotional damages might be high enough to make it worthwhile to sue. Neither Meta nor almost any of the other companies that users here love to hate has ever done anything even remotely as bad.
I hate the use of LLMs, but I hate Nazis more. If this targets them or an associated ideology, I can get behind that.
It targets ZioNazis, so yes. Any tool that can search hundreds or thousands of comments and show a history of genocide denial and hate is a good tool.
I guess, given the already open nature of the Fediverse, my takeaway from this thread is that OP is using their freedom to say they don’t like this particular style of moderation. Which might be useful, or not, for some moderators.
If that were all Rimu was complaining about, I’d understand. Unfortunately, he has a more ideological motivation behind this post than simply calling out AI. If letting users decide for themselves whether they like this style of moderation was all he was advocating for, then why is he promoting the idea of centralizing decentralized platforms?
I am unconvinced that this particular use of an LLM for moderation is really that helpful. However, I doubt this is the only motivation behind this post.
Well, it was fun while it lasted…
Not comfortable with this. Not at all.
Then come to a site with almost no moderation that still values mature discourse. https://submatrix.net/
The mistake of most low-moderation sites is to couple themselves with a reckless culture. But we all get to choose our culture. We can all simply choose to be mature people.
Can’t have AI moderation if there isn’t much moderation to begin with. And look, the proof is there. If we simply downvote immature people and upvote mature people, it actually works.
No no no. Name and shame them. Heavily. Using LLM AI is bad enough, but letting it do the thinking for you is inexcusable.
Fedislopslopslopslopslop…
this is flat out not ok, no matter who is doing it. our instances should defederate from all which do this.
I would opt out, no question, but I don’t believe it’s possible. GDPR does not matter here, as nothing can be proven unless the perpetrators give themselves up.
What do you think of Lemmy being searchable via search engines, since that’s how most of the training data is collected? Or that lemmy.world data is already in OpenAI’s training sets?
I know that not much prevents AI crawlers from collecting all the content, but I think it is very different when an admin feeds data to it directly, partly because it’s a different legal situation (sadly, that does not mean much).
Firstly, it’s apparently not an admin but a mod (or mods), and I don’t think OP reached out to the admin of the instance before making this post; otherwise they would have said as much.
Secondly, from a legal standpoint I don’t think there is much difference between an admin signing the instance up for a search engine (i.e. volunteering the data to be collected) and a mod feeding bits of data to an LLM piecemeal. If anything, the former is worse than the latter.