As I write this, moderation on the major social media platforms has been degraded, and the tools available to users to curate their feeds are quite poor. So how could we improve our science social media feeds?
As far as I can tell, scientists want a feed that is mainly science, with a side order of “other interests”, whatever those may be. Even if you only follow scientists, and those people mainly post about science, the network effect means that reposts of things you don’t wish to see are inevitable. The content could be uninteresting, upsetting or infuriating, and what counts as unwanted is specific to the individual.
To avoid disappointment, I’ll say upfront that I’m not going to offer any hacks to help. This is just a collection of thoughts on ways that curation/moderation could be implemented, focusing on Mastodon and Bluesky. If there are tools that do any of these things, please let me know!

Keyword blocking
Blocking specific words works well, but it is incredibly limited.
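To see the limitation, here is a minimal sketch of naive keyword blocking (the blocklist and test posts are illustrative):

```python
# Naive keyword blocking: exact matches only. The blocklist and the test
# posts are illustrative.
BLOCKED = {"trump", "elonmusk"}

def is_blocked(post_text: str) -> bool:
    words = (w.strip(".,!?#@") for w in post_text.lower().split())
    return any(w in BLOCKED for w in words)

print(is_blocked("Trump said something today"))      # True
print(is_blocked("Tr*mp said something today"))      # False: obfuscation slips through
print(is_blocked("a good hand can trump anything"))  # True: false positive
```

Both failure modes show up immediately, which is what the next sections are about.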
Avoiding posts that circumvent the block
Scenario: posts that contain obfuscated versions of words that you have blocked. For example, if you have blocked the word “Trump” and a user posts “Tr*mp”, then the post finds its way into your feed. There are also myriad (usually insulting) ways someone can refer to Trump without using that word. Blocking all variations is not possible, and anyway, what about posts that use the word but are not referring to the person in question?
Potential solution: context-dependent blocking of posts that mention a blocked person. An LLM can figure out who “orange manbaby” refers to, so posts could be filtered on this basis. Equally, an LLM could determine whether the word is being used as a euphemism for passing wind or as a reference to the person.
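A minimal sketch of how this could work, using an LLM as the classifier. I’m assuming the openai Python client here; the model name, prompt and helper function are illustrative, not a working moderation system:

```python
# A sketch of LLM-based, context-dependent blocking. Assumes the openai
# Python client and an OPENAI_API_KEY in the environment; the model name
# and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def refers_to_blocked_person(post_text: str, blocked_person: str) -> bool:
    prompt = (
        f"Does the following post refer to {blocked_person}, even via a "
        f"nickname, insult or obfuscated spelling? Answer YES or NO.\n\n"
        f"Post: {post_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

# "orange manbaby" would be caught; "a good hand can trump anything" would not.
```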
Scenario: you have blocked the word “elonmusk” and several variants, but users post screenshots of this person’s latest eructation. There is no ALT text to screen them out.
Potential solution: OCR of screenshots is already possible on a phone, so it should be possible to block posts on the basis of the text in a screenshot.
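As a sketch, assuming the image has been downloaded and Tesseract is installed locally, pytesseract could extract the text for the same keyword check:

```python
# A sketch of OCR-based screening. Assumes Tesseract is installed locally
# and the post's image has already been downloaded; pytesseract and Pillow
# do the heavy lifting. The blocklist is illustrative.
from PIL import Image
import pytesseract

BLOCKED_TERMS = {"elonmusk", "elon musk"}

def screenshot_is_blocked(image_path: str) -> bool:
    text = pytesseract.image_to_string(Image.open(image_path)).lower()
    return any(term in text for term in BLOCKED_TERMS)
```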
No more of that, thank you
Sometimes you see a post and, offensive or not, you simply don’t wish to see it or similar posts again. Genuinely offensive content should be reported, but I’m thinking more about feed-clutter.
- Chain posts: “Post 10 books that influenced you. No words only covers”. “Quote post this with the first album you bought”. Maybe you love these posts, but they can seriously clutter up a feed and quickly get tiresome.
- Quote posts that target a certain post. Do we really have to read the 10th post dunking on someone who has posted something ridiculous? No. We should have the power to switch these off, per post.
- Reposts! They can be hidden within the feed, but reopening the app means seeing the same content repeatedly. A “mark as read” button to say “I’ve seen this, please hide” would be great.
- Starter packs. They were great. Do we still need them? Maybe. Should posts featuring them be turn-offable? Yes.
Potential solution: I find Bluesky terrible for feed-clutter. Having a “hide this post” and/or a “hide posts like this” option would go a long way towards improving the experience.
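One way “hide posts like this” might work is embedding similarity: compare each new post to the ones you have already hidden. A sketch using the sentence-transformers library (the model name and threshold are illustrative):

```python
# A sketch of "hide posts like this" via embedding similarity: posts close
# to one you have hidden get hidden too. The model name and threshold are
# illustrative choices.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
hidden = []  # embeddings of posts the user has hidden

def hide_post(post_text: str) -> None:
    hidden.append(model.encode(post_text, convert_to_tensor=True))

def looks_like_hidden(post_text: str, threshold: float = 0.7) -> bool:
    emb = model.encode(post_text, convert_to_tensor=True)
    return any(util.cos_sim(emb, h).item() >= threshold for h in hidden)

hide_post("Post 10 books that influenced you. No words only covers")
print(looks_like_hidden("Quote post this with the first album you bought"))
```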
Content blocking
The baseball problem
Scenario: you follow someone who posts really interesting stuff about science but they are a huge baseball/basketball/whatever fan, and when the weekend comes, they post a lot of #GoBlues content. Your options are pretty limited here. Unfollowing the person means you will miss out on their interesting posts. Muting them at the weekend for a limited time could get tedious pretty quickly. Blocking the hashtags they may use runs the risk of still seeing all the posts where the poster forgets to add them.
Potential solution: the client could distinguish a baseball post from the person’s usual content and hide it. This would be ideal.
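A sketch of how a client might do this with zero-shot classification. The labels would come from the user’s settings (or be learned from the account’s usual content), and the model here is just one public example:

```python
# A sketch of per-account topic filtering with zero-shot classification.
# The labels would come from user settings; the model is one public example.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def is_sports_post(post_text: str) -> bool:
    result = classifier(post_text, candidate_labels=["science", "sports"])
    return result["labels"][0] == "sports"  # labels come back sorted by score

print(is_sports_post("New preprint on spindle dynamics is out!"))  # expect False
print(is_sports_post("What a ninth inning! #GoBlues"))             # expect True
```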
Context-dependent blocking
If the problem above is solvable, then surely the same approach can be used more broadly, in place of (absent) moderation. You don’t want to read obnoxious posts about a topic, but are fine with reading posts about that topic in general. For example, trans issues or Gaza. Staying in the loop about what’s happening and people’s experiences is a key feature of social media. Sentiment analysis of the post could distinguish the intent of the poster.
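A sketch of this idea, with a public toxicity classifier standing in for sentiment analysis; the model name, threshold and crude topic test are all illustrative:

```python
# A sketch: hide posts on a sensitive topic only when the tone is abusive,
# not when they are informative. unitary/toxic-bert is one public toxicity
# model; the labels, threshold and topic test are illustrative.
from transformers import pipeline

toxicity = pipeline("text-classification", model="unitary/toxic-bert")

def should_hide(post_text: str, topic_terms: set[str],
                threshold: float = 0.8) -> bool:
    on_topic = any(t in post_text.lower() for t in topic_terms)
    if not on_topic:
        return False
    result = toxicity(post_text)[0]  # top label and its score
    return result["label"] == "toxic" and result["score"] >= threshold
```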
Single link, single image, nonsensical text
A user posts a single link with no explanation. Wouldn’t it make sense to be able to just hide these? Obviously yes: partly because “you wouldn’t click a link in an email…”, but also because if the person can’t be bothered to explain why the link is of interest, you probably shouldn’t expend any effort to find out for yourself.
We should also have the ability to block single-image posts, which could be a meme of some kind. YMMV and maybe I’m grumpy, but it would be nice to expunge them from my feed. Some are amusing, but what is the percentage? 5%? Anyway, it would be good to have an option to simply hide this content if you don’t want to see it.
Gobbledygook posts from bots are a bit 2012; I’m thinking more about fragments of text from people excitedly watching a season finale of a TV show you have no interest in – or again, maybe related to a sports event. Posts with a fragment of text that is hard to parse could be hidden by using AI to decide whether the post should be shown to you. Or perhaps only posts with substantive text would be shown, and all others hidden.
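These structural cases are simple enough that crude heuristics could go a long way before any AI is needed. A sketch, where the Post type is a stand-in for the client’s actual post object:

```python
# A sketch of structural heuristics for feed-clutter: link-only posts,
# single images with no commentary, and near-empty text fragments. The
# Post type is a stand-in for the client's actual post object.
import re
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    image_count: int = 0

URL_RE = re.compile(r"https?://\S+")

def is_clutter(post: Post) -> bool:
    remainder = URL_RE.sub("", post.text).strip()
    if URL_RE.search(post.text) and not remainder:
        return True  # a bare link with no explanation
    if post.image_count == 1 and not remainder:
        return True  # a single image with no commentary
    if post.image_count == 0 and 0 < len(remainder.split()) < 4:
        return True  # a fragment like "OMG NO WAY" -- crude, tune to taste
    return False

print(is_clutter(Post("https://example.com")))                       # True
print(is_clutter(Post("OMG NO WAY")))                                # True
print(is_clutter(Post("Our new paper on kinesin-5 is out today!")))  # False
```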
Advertising
Advertising is essentially what runs the big social media platforms. At one time it was possible to block ads on Twitter, but those days are long gone. On Bluesky, advertising is present but minimal because, for now, the experience is mostly timeline- and follower-based. They are working on algorithmic feeds and, let’s face it, it’s only a matter of time before they have to generate some revenue, and advertising is the obvious income stream. A client-side content-aware filter would eliminate advertising, although if the site needs the revenue, they will move swiftly to make this very difficult. On Mastodon, no-one is selling anything – ok, apart from some hand-painted crafts – so this doesn’t matter.
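A content-aware ad filter could be as simple as another classification pass on each post, reusing the zero-shot approach from the baseball example (the labels are illustrative and would need tuning against real promoted content):

```python
# A sketch of a client-side ad filter, reusing the zero-shot classifier
# from the baseball example. The labels are illustrative and would need
# tuning against real promoted content.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

def looks_like_an_ad(post_text: str) -> bool:
    result = classifier(post_text,
                        candidate_labels=["advertisement", "personal post"])
    return result["labels"][0] == "advertisement"
```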
Conclusion
People will post whatever they want, and it falls to the end user (or at least, the client they use to access the service) to clean up their feed. On corporate social media, the companies have made it clear that they are no longer interested in moderation. On decentralised social media, we can’t expect community norms to prevail. Even if all users were to protect controversial posts with content warnings (CWs), it is impossible to know what the end user may find distressing/objectionable/whatever. So ultimately, empowering the end user to curate their feed for the experience they want is the only solution.
Ironically, most Mastodon and Bluesky users have fled “the algorithm” on other platforms. Originally, the purpose of “the algorithm” was to learn what people wanted to see and show them more of it. On LinkedIn, you can click “not interested” on individual posts for this purpose. Whether you engaged with, or even dwelled on, a post taught the algorithm what you do like. This became a way to profile the user for targeted advertising. Later, platforms used “the algorithm” to manipulate what users experience and to try to sway their views. Ultimately, the algorithm turned bad. The ideas above amount to wishing for a user-controlled algorithm (in the original sense) for social media.
—
The post title comes from “Not What You Want” by Sleater-Kinney from their “Dig Me Out” LP.