veganism.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
Veganism Social is a welcoming space on the internet for vegans to connect and engage with the broader decentralized social media community.

#contentmoderation

What is the impact of #ContentModeration on social media? Find out with Shantay 0.5, my tool for analyzing the EU's #DSA #transparency database. This version adds options for HTML report customization, fixes an incompatibility with Pola.rs 1.31.0, and improves test coverage, including by enabling CI.
github.com/apparebit/shantay

GitHub · apparebit/shantay: Trying to make sense of the EU's DSA Transparency DB
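
For readers curious what this kind of analysis looks like in practice, here is a minimal sketch in the same spirit. It is not Shantay's actual code; the file name and column names below are assumptions about the DSA transparency database's CSV export. It uses Polars' lazy API to count statements of reasons per platform without loading the full export into memory:

```python
# Hypothetical sketch (not Shantay's API): tally DSA "statements of reasons" per platform.
# "sor-export.csv" and the column name "platform_name" are assumptions about the export.
import polars as pl

def decisions_per_platform(csv_path: str) -> pl.DataFrame:
    """Count moderation decisions per platform in a DSA transparency CSV export."""
    return (
        pl.scan_csv(csv_path)                 # lazy scan: rows are streamed, not fully loaded
        .group_by("platform_name")            # assumed column holding the platform name
        .agg(pl.len().alias("decisions"))     # number of statements of reasons per platform
        .sort("decisions", descending=True)
        .collect()
    )

if __name__ == "__main__":
    print(decisions_per_platform("sor-export.csv"))
```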

Did you catch this? Alongside X and #Meta, now #Youtube too:

YouTube is adjusting its #Moderation and will leave more videos online going forward, even if they partly violate its guidelines.

Content of potential public interest – for instance on #Wahlen (elections), #Geschlecht (gender), #Migration, or #Meinungsfreiheit (freedom of expression) – may stay up as long as the rule violation (#Regelverstoß) remains below 50 percent.

The stated goal is to protect free expression (#Meinungsäußerung) while limiting harm.

theverge.com/news/682784/youtu

The Verge · YouTube has loosened its content moderation policies · By Emma Roth

TikTok bans the ‘unhealthy’ SkinnyTok hashtag after pressure from regulators.

Efforts to protect kids online are gaining traction in Europe.

Social media platform TikTok has banned the popular SkinnyTok hashtag, linked to weight-loss videos, worldwide following scrutiny from policymakers in Brussels and Paris.

mediafaro.org/article/20250602

Politico.eu · TikTok bans the ‘unhealthy’ SkinnyTok hashtag after pressure from regulators. By Pieter Haeck

US will ban foreign officials to punish countries for social media rules | The Verge

The #StateDepartment will restrict visas for foreign officials who support content moderation ‘censorship,’ an apparent response to the #DSA and other laws.
#censorship #socialmedia #foreignofficials #contentmoderation

theverge.com/news/675811/us-im

The Verge · US will ban foreign officials to punish countries for social media rules · By Lauren Feiner
Continued thread

THE ALGORITHM VS. THE HUMAN MIND: A LOSING BATTLE
¯

_
NO RECOGNITION FOR THE AUTHOR

YouTube does not reward consistency, insight, or author reputation. A comment may become a “top comment” for a day, only to vanish the next. There’s no memory, no history of editorial value. The platform doesn’t surface authors who contribute regularly with structured, relevant input. There's no path for authorship to emerge or be noticed. The “like” system favors early commenters — the infamous firsts — who write “first,” “early,” or “30 seconds in” just after a video drops. These are the comments that rise to the top. Readers interact with the text, not the person behind it. This is by design. YouTube wants engagement to stay contained within the content creator’s channel, not spread toward the audience. A well-written comment should not amplify a small creator’s reach — that would disrupt the platform’s control over audience flow.
¯

_
USERS WHO’VE STOPPED THINKING

The algorithm trains people to wait for suggestions. Most users no longer take the initiative to explore or support anyone unless pushed by the system. Even when someone says something exceptional, the response remains cold. The author is just a font — not a presence. A familiar avatar doesn’t trigger curiosity. On these platforms, people follow only the already-famous. Anonymity is devalued by default. Most users would rather post their own comment (that no one will ever read) than reply to others. Interaction is solitary. YouTube, by design, encourages people to think only about themselves.
¯

_
ZERO MODERATION FOR SMALL CREATORS

Small creators have no support when it comes to moderation. In low-traffic streams, there's no way to filter harassment or mockery. Trolls can show up just to enjoy someone else's failure — and nothing stops them. Unlike big streamers who can appoint moderators, smaller channels lack both the tools and the visibility to protect themselves. YouTube provides no built-in safety net, even though these creators are often the most exposed.
¯

_
EXTERNAL LINKS ARE SABOTAGED

Trying to drive traffic to your own website? In the “About” section, YouTube adds a warning label to every external link: “You’re about to leave YouTube. This site may be unsafe.” It looks like an antivirus alert — not a routine redirect. It scares away casual users. And even if someone knows better, they still have to click again to confirm. That’s not protection — it’s manufactured discouragement. This cheap shot, disguised as safety, serves a single purpose: preventing viewers from leaving the ecosystem. YouTube has no authority to determine what is or isn’t a “safe” site beyond its own platform.
¯

_
HUMANS CAN’T OUTPERFORM THE MACHINE

At every level, the human loses. You can’t outsmart an algorithm that filters, sorts, buries. You can’t even decide who you want to support: the system always intervenes. Talent alone isn’t enough. Courage isn’t enough. You need to break through a machine built to elevate the dominant and bury the rest. YouTube claims to be a platform for expression. But what it really offers is a simulated discovery engine — locked down and heavily policed.
¯

_
||#HSLdiary #HSLmichael

Continued thread

UNPAID LABOR, ALGORITHMIC DENIAL, AND SYSTEMIC SABOTAGE
May 7, 2025

YouTube built an empire on our free time, our passion, our technical investments—and above all, on a promise: “share what you love, and the audience will follow.” Thousands of independent creators believed it. So did I. For ten years, I invested, produced, commented, hosted, edited, imported, repaired—with discipline, ambition, and stubborn hope, all in the shadows. What I discovered wasn’t opportunity. It was silence. A system of invisible filters, algorithmic contempt, and structural sabotage. An economic machine built on the unpaid, uncredited labor of creators who believed they had a chance. A platform that shows your video to four people, then punishes you for not being “engaging” enough. This four-part investigation details what YouTube has truly cost me—in money, in time, in mental health, and in collective momentum. Every number is cross-checked. Every claim is lived. Every example is documented. This is not a rant. It’s a report from inside the wreckage.
¯

_
INVISIBLE COMMENTS: 33,000 CONTRIBUTIONS THROWN IN THE TRASH

As part of my investigation, I decided to calculate what I’ve lost on YouTube. Not an easy task: if all my videos are shadowbanned, there’s no way to measure the value of that work through view counts. But I realized something else. The comments I leave on channels—whether they perform well or not—receive wildly different levels of visibility. It’s not unusual for one of my comments to get 500 likes and 25 replies within 24 hours. In other words, when I’m allowed to exist, I know how to draw attention.
¯

_
33,000 COMMENTS... FOR WHAT?

In 10 years of using the platform, I’ve posted 33,000 comments. Each one crafted, thoughtful, polished, aimed at grabbing attention. It’s a real creative effort: to spontaneously come up with something insightful to say, every day, for a decade. I’ve contributed to the YouTube community through my likes, my reactions, my input. These comments—modest, yes, but genuine—have helped sustain and grow the platform. If each comment takes roughly 3 minutes to write, that’s 99,000 minutes of my life—nearly 69 days spent commenting non-stop. More than two entire months of talking into the void.
¯

_
ALGORITHMIC INVISIBILITY

By default, not all comments are shown. The “Top comments” filter displays only a select few. You have to manually click on “Newest first” to see the rest. The way "Top comments" are chosen remains vague, and there’s no indication of whether some comments are deliberately hidden. When you load a page, your own comment always appears first—but only to you. Officially, it’s for “ergonomics.” Unofficially, it gives you the illusion that your opinion matters. I estimate that, on average, one out of six comments is invisible to other users. By comparing visible and hidden replies, a simple estimate emerges: over the course of 12 months, 2 months’ worth of comments go straight to the trash.
¯

_
TWO MONTHS A YEAR WRITING INTO THE VOID

If I’ve spent roughly two months commenting over 10 years, that averages out to about a week per year, or roughly half an hour of writing every day. With about one comment in six hidden, more than a full day’s worth of writing is invisibilized each year, and two calendar months’ worth of my comments annually are dumped into a void of discarded contributions. I’m not claiming every comment I write is essential, but the complete lack of notification and the arbitrary nature of this filtering raise both moral and legal concerns.
¯

_
THE BIG PLAYERS RISE, THE REST ARE ERASED

From what I’ve observed, most major YouTubers benefit from a system that automatically boosts superficial comments to the top. The algorithm favors them. It’s always the same pattern: the system benefits a few, at the expense of everyone else.
¯

_
AN IGNORED EDITORIAL VALUE

In print journalism, a 1,500-word exclusive freelance piece is typically valued at around €300. Most YouTube comments are a few lines long—maybe 25 words. Mine often exceed 250 words. That’s ten times the average length, and far more structured. They’re not throwaway reactions, but crafted contributions: thoughtful, contextual, engaging. If we apply the same rate, then 30 such comments ≈ €1,500, and 1 comment ≈ €50. It’s a bold comparison—but a fair one, when you account for quality, relevance, and editorial intent. 33,000 comments × €50 = €1,650,000 of unpaid contribution to YouTube. YouTube never rewards this kind of engagement. It doesn’t promote channels where you comment frequently. The platform isn’t designed to recognize individuals. It’s designed to extract value—for itself.
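
As a quick sanity check, the figures in that comparison are internally consistent; a two-line calculation, using the post's own assumed per-word rate, reproduces them:

```python
# Sanity check of the post's own figures (the €300 per 1,500-word rate is the post's assumption).
rate_per_word = 300 / 1500                  # €0.20 per word
value_per_comment = 250 * rate_per_word     # a 250-word comment ≈ €50
total_value = 33_000 * value_per_comment    # 33,000 comments ≈ €1,650,000
print(value_per_comment, total_value)       # 50.0 1650000.0
```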

¯

_
||#HSLdiary #HSLmichael

Mods wanted!!

Kolektiva.social has now been around for nearly five years. During that time, we have received lots of valuable feedback. It has helped us, and continues to help us, better understand problems with our moderation and what needs to change. A clear takeaway from the issues we've come up against is that we need more help with content moderation.

Over the past several months, it's become more evident than ever that our movements require autonomous social media networks. To be blunt, if we want Kolektiva (and the Fediverse more broadly) to continue to grow in the face of cyberlibertarian co-optation, we need more people to help out. Developing the Fediverse as an alternative, autonomous social network involves more than just using its free, open source, decentralized infrastructure as a simple substitute for surveillance capitalist platforms. It also takes shared responsibility and thoughtful, human-centered moderation. As anarchists, we view content moderation through the lens of mutual aid: it is a form of collective care that enhances the quality of collective information flows and means of communication. Mutual aid is premised on working together to expand solidarity and build movements. It is about sharing time, attention, and responsibility. Stepping up to support with moderation means helping to maintain community standards and to keep our space grounded in the values we share.

Corporate social media platforms do not operate on the principle of mutual aid. They operate on the basis of profit, mining their users for data that they can process and sell to advertisers. Nor do the moderators of these platforms operate on the principle of mutual aid: they do these difficult and often brutal jobs because they are paid a wage out of the revenue brought in from advertisers. Kolektiva's moderation team consists of volunteers. If we want to do social media differently, it requires a shift away from the service user/service provider mentality. It requires more people to step up, so that the burden of moderation is shared more equitably and the moderation team is enriched by more perspectives.

If you join the Kolektiva moderation team, you’ll be part of a collective that spans several continents and brings different experiences and politics into conversation. Additionally, you'll build skills in navigating conflict and disagreement — skills that are valuable to our movements outside the Fediverse.

Of course, we know that not everyone can volunteer their time. We want to mention that there are plenty of other ways to contribute: flagging posts, reporting bugs, and sharing direct feedback. We are grateful to everyone who has taken the time to do this and has discussed and engaged with us directly.

Since launching in 2020, Kolektiva has grown beyond what we ever expected. While our goal has never been to become massive, we value our place as a landing spot into the Fediverse for many — and a home base for some.

In addition to expanding our content moderation team, we have other plans in the works. These include starting a blog and developing educational materials to support people who want to create their own instances.

If you value Kolektiva, please consider joining the Kolektiva content moderation team!
Contact us at if you’re interested or have questions.

Looks like Mastodon is going to need better moderator tools

Underprivileged people are apparently especially easy to target on #ActivityPub, or so I have been told, and I believe it. They have been complaining about it to the Mastodon developers over the years, but the Mastodon developers at best don’t give a shit, at worst are hostile to the idea, and have been mostly ignoring these criticisms. Well, now we have “Nicole,” the infamous “Fediverse Chick”, a spambot that seems to be registering hundreds of accounts across several #Mastodon instances and then, once registered, sends everyone a direct message introducing itself.

You can’t block it by domain or by name, since the accounts span multiple instances and the name keeps changing. It is the responsibility of each instance to prevent registrations of bots like this.

But what happens when the bot designer ups the ante? What happens when they try this approach but with a different name each time? Who is to say that isn’t already happening and we don’t notice it? This seems to be an attempt to show everyone a huge weakness in the content moderation toolkit, and we are way overdue to address these weaknesses.

Tamar Mitts, professor of international and public affairs at Columbia University, breaks down for @time why the fight against online extremism always fails and why we need to understand that the issue "is bigger than any one site can handle."

flip.it/qS0RGQ

TIME · Why the Fight Against Online Extremism Keeps Failing · Yes, Big Tech can do more. But all online spaces must commit to a more unified stance against extremism.

So yesterday's Instagram incident was revealing: Meta can suddenly flood feeds worldwide with extremely violent content (even to children), then fix it "at the flick of a switch."

Yet for years they've insisted that effectively filtering harmful content, removing misinformation, and protecting users' mental health are too technically complex and costly to implement.

If you ask me, what happened yesterday proved what we already knew (not that they're trying to hide it anymore): Meta's moderation challenges aren't technical limitations—they're business decisions. When motivated, they can act instantly. Their selective enforcement speaks volumes about their actual priorities.