On humane microblogging architecture
Designing human-centered social media using the Building Humane Tech framework
This article examines the current state of microblogging platforms from a humane technology perspective, and applies the open-source Building Humane Tech framework to envision better social platforms.
Axioms
These highlight what we take for granted; we may or may not call them into question later.
Microblogging is the act of creating and publishing short-form media on the internet. It was originally text-centric, but has grown into a multimodal practice (images, audio, video...).
Microblogging Platforms are a medium for human interaction (“social media”) through creating, viewing, sharing and responding to said short-form media posts.
Some well-known examples are X (formerly Twitter), Bluesky, Mastodon, Threads, and Tumblr. There are also dedicated regional Platforms like Weibo, and video/image-centric sites like TikTok, YouTube Shorts, Instagram and Lapse.
The Service Provider makes this platform available to Users, and takes responsibility for availability, reliability and maintainability of the platform.
This responsibility need not rest with a single party; nothing prevents it from being distributed (as in federated networks).
A User is anyone who interacts with others through the microblogging platform. Our Humane Tech Goals are created with the User in mind.
The Feed is the primary user interface to create, view, share and respond to posts on the Platform.
Humane Tech Axioms
When you interact with humane tech, this is how it makes you feel:
Cared For: Puts users' needs at the center of the experience.
Present: Helps users feel more present in body and mind.
Fulfilled: Users feel satisfied with their experience & outcomes.
Connected: Users connect to themselves, each other, or nature.
Humane Tech Goals
What does it mean to feel Cared For while using Microblogging Platforms?
I feel cared for by the Service Provider: the Platform is designed with my needs and best interests at its core; and meeting my needs and best interests is clearly in the best interest of the Service Provider, by design.
I feel cared for by other Users. This means minimal exposure to hate speech, ad hominem attacks, and similar abuse.
I feel cared for by myself. I can set boundaries and have agency over my experience using the Platform.
What does it mean to feel Present while using Microblogging Platforms?
I can use the Platform with intention: I can interact with the people and communities I want to, and find the posts and information I am looking for.
It feels easy to use the Platform with intention. This ease is amplified, not obstructed, by the Platform's design.
I'm aware of my body, my needs, my emotional/mental state. The algorithm isn't monopolizing my attention to the exclusion of my sense of self, awareness, and physical/emotional state.
What does it mean to feel Fulfilled while using Microblogging Platforms?
My interactions on the Platform translate into tangible benefits in my life, both on and off the Platform.
I can express what I want to, learn what I want to, and connect with who I want to.
What does it mean to feel Connected while using Microblogging Platforms?
My posts reflect my authentic self and values, and I connect with the authentic self of other users through their posts.
The Platform does not obstruct my choice to be my authentic self.
Situation and motivation
Why is this necessary? What is lacking in current Microblogging Platforms? Where do they live up to the Humane Tech Goals above, and where do they fall short?
The good stuff
Empowering users to shape their microblogging experience
Mastodon decentralizes Service Provider power and responsibility by letting Users run their own servers (Platform instances), with flexible options for moderation, community membership, topical focus and more.
Bluesky enables Users to create dedicated Feeds for specific topics and communities, and to pin those Feeds to the top of their UI. This makes finding desired information and people much easier, cutting through the noise of the "firehose" (massive volume of new posts being created on Platforms).
Privacy-preserving social plugins (e.g., SafeButton) demonstrate that 7 of Facebook's 9 core social features can be maintained without user tracking (Kontaxis et al., 2012, USENIX Security).
Decentralized networks like ReClaim achieve 95% connection rates with contacts within 10 minutes while providing mathematical privacy guarantees through homomorphic encryption (Zeilemaker et al., 2014, USENIX FOCI).
Supporting healthy usage patterns
Smartphone apps for TikTok, Instagram, Facebook and YouTube offer settings to help manage screen time, including daily limits and reminders to take a break.
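To make this concrete, here is a minimal sketch of how a client-side screen-time guard might work. All names and thresholds (ScreenTimeTracker, DAILY_LIMIT_SECONDS, the intervention labels) are hypothetical illustrations, not any platform's actual API:

```python
import time

# Illustrative thresholds -- a real app would let the User configure these.
DAILY_LIMIT_SECONDS = 60 * 60       # e.g., 1 hour per day
BREAK_REMINDER_SECONDS = 20 * 60    # e.g., nudge every 20 minutes

class ScreenTimeTracker:
    """Tracks in-app time and decides when the UI should intervene."""

    def __init__(self):
        self.today_total = 0.0      # seconds used today (persisted in practice)
        self.last_reminder = 0.0    # usage level at the last break nudge
        self.session_start = None

    def start_session(self):
        self.session_start = time.monotonic()

    def end_session(self):
        if self.session_start is not None:
            self.today_total += time.monotonic() - self.session_start
            self.session_start = None

    def current_usage(self):
        in_session = 0.0
        if self.session_start is not None:
            in_session = time.monotonic() - self.session_start
        return self.today_total + in_session

    def intervention(self):
        """Return the nudge (if any) the UI should display right now."""
        used = self.current_usage()
        if used >= DAILY_LIMIT_SECONDS:
            return "daily_limit_reached"   # e.g., pause the Feed until tomorrow
        if used - self.last_reminder >= BREAK_REMINDER_SECONDS:
            self.last_reminder = used
            return "take_a_break"          # e.g., a dismissible pause screen
        return None
```

The design choice worth noting: the tracker only reports a suggested intervention; whether the Feed hard-stops or merely nudges remains a boundary the User sets, in line with the "cared for by myself" goal above.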
Fostering authentic connection and community
Social media can support relationship maintenance and development through strategic communication practices (Williams, 2019, systematic review of 54 studies; Lovejoy & Saxton, 2012, Journal of Computer-Mediated Communication).
Community-driven moderation (as in Mastodon's federated approach) enables more nuanced, culturally appropriate content governance while avoiding biases of centralized algorithmic systems (Zhang et al., 2024, CSCW).
Verifying information being disseminated
The "Community Notes" feature on X functions as a decentralized fact-checking resource, combating disinformation while being highly resilient to the possibility - and accusation - of a centralized moderator pushing an agenda.
Initial evidence suggests that bridging-based ranking algorithms, which promote content receiving positive engagement from users across different audience groups, can increase consensus; researchers note, however, that these approaches may inadvertently censor controversial topics (Lasser et al., 2025, Annals of the New York Academy of Sciences). A concrete sketch of the bridging idea follows this list.
Friction-based interventions (confirmation steps and accuracy prompts before sharing) effectively reduce misinformation spread with minimal impact on legitimate content sharing (peer-reviewed behavioral science research).
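To illustrate bridging-based ranking, here is a deliberately simplified sketch. X's production Community Notes system uses matrix factorization over rater-note data; this toy version instead assumes raters arrive pre-labeled with a viewpoint cluster, and scores a post by its worst average approval across clusters:

```python
from collections import defaultdict

def bridging_score(ratings):
    """Score a post by its *worst* average approval across viewpoint clusters.

    ratings: list of (cluster_id, approved) pairs, where cluster_id labels the
    rater's viewpoint group (assumed precomputed upstream). A post only scores
    well if every cluster, on average, approves of it -- the core intuition
    behind bridging-based ranking.
    """
    by_cluster = defaultdict(list)
    for cluster_id, approved in ratings:
        by_cluster[cluster_id].append(1.0 if approved else 0.0)
    if len(by_cluster) < 2:
        return 0.0  # no evidence of cross-group appeal yet
    return min(sum(v) / len(v) for v in by_cluster.values())

# Engagement ranking would happily boost the divisive post; bridging demotes it.
divisive = [("left", True)] * 9 + [("right", False)] * 9
broadly_liked = [("left", True)] * 4 + [("right", True)] * 3 + [("right", False)]
print(bridging_score(divisive))       # 0.0  -- one cluster fully disapproves
print(bridging_score(broadly_liked))  # 0.75 -- both clusters lean positive
```

The divisive post gets maximal engagement (everyone reacts) yet scores zero here, which is exactly the inversion of incentives the research above describes.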
The bad stuff
Doomscrolling and attention manipulation
A common failure mode of Microblogging Platforms is prolonged "doomscrolling": Users aimlessly and extensively consume an infinite Feed of whatever content an opaque recommender algorithm serves them.
Researchers have found instances of 44 known dark pattern types across four major social networking platforms (Facebook, Instagram, TikTok, and Twitter), including addictive mechanisms like infinite scrolling, auto-playing content, and gamification elements designed to keep users engaged (Mildner et al., 2023, CHI).
Social media platforms employ design elements that exploit psychological vulnerabilities. Features like push notifications, read receipts, personalized content feeds, and reward mechanisms lead users to spend more time on platforms than intended; this overuse correlates strongly with problematic social media use (r = 0.54) and predicts declines in well-being (Sindermann et al., 2022; Kross et al., 2013).
Parasitic, unethical monetization and business model conflicts
X promotes the posts and replies of paying users above others, creating a class hierarchy in which payment, rather than the quality or relevance of a post, determines whose information gets disseminated.
Engagement optimization systems directly conflict with user well-being. Research shows that algorithms optimizing for engagement amplify emotionally charged, out-group-hostile content that users say makes them feel worse about their political out-group (Milli et al., 2025, PNAS Nexus), while emotional connection to social media use is associated with negative health outcomes across multiple well-being measures (Bekalu et al., 2019, Health Education & Behavior).
Surveillance capitalism models require comprehensive user tracking for behavioral prediction products, fundamentally opposing privacy preservation and user autonomy (Stahl et al., 2022; multiple peer-reviewed analyses).
Filter bubbles/echo chamber effect and algorithmic harms
Feed recommender algorithms are designed to show ever more posts similar to whatever the User engages with. While this sounds appealing at first, it has a harmful side effect known as the "filter bubble" or "echo chamber" phenomenon: in contrast to the world outside the Platform, Users see only the opinions of other Users whose beliefs resemble their own. This fuels sociopolitical polarization and stifles critical thinking and discourse.
When it's cheap and easy for content creators to make high-quality posts, social media algorithms that perfectly match content to users' tastes can increase polarization by encouraging creators to specialize in content that appeals to specific viewpoints, ultimately creating filter bubbles (Berman & Katona, 2020, Marketing Science).
Personalized feeds create information polarization through selective exposure mechanisms, with studies confirming that pro-attitudinal (like-minded) media exposure exacerbates political polarization (Kubin & von Sikorski, 2021, systematic review of 94 studies).
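One commonly discussed mitigation is to re-rank the feed with an explicit diversity term. The sketch below is a hypothetical blend, not any platform's documented algorithm: it trades predicted engagement against how under-exposed a post's viewpoint is in the User's recent feed (the function name, tuple layout, and lam parameter are all illustrative assumptions):

```python
def rerank_with_diversity(candidates, exposure_share, lam=0.3):
    """Blend engagement-predicted relevance with exposure diversity.

    candidates: list of (post_id, relevance, viewpoint), relevance in [0, 1].
    exposure_share: viewpoint -> fraction of the User's recent feed it already
    occupies (assumed tracked upstream). lam controls how strongly under-shown
    viewpoints are rewarded; lam=0 recovers pure engagement ranking.
    """
    def blended(item):
        _post_id, relevance, viewpoint = item
        novelty = 1.0 - exposure_share.get(viewpoint, 0.0)
        return (1.0 - lam) * relevance + lam * novelty
    return sorted(candidates, key=blended, reverse=True)

feed = rerank_with_diversity(
    candidates=[("p1", 0.9, "in_group"), ("p2", 0.7, "out_group")],
    exposure_share={"in_group": 0.95, "out_group": 0.05},
)
# p2 outranks p1: 0.7*0.7 + 0.3*0.95 = 0.775 > 0.7*0.9 + 0.3*0.05 = 0.645
print([post_id for post_id, _, _ in feed])  # ['p2', 'p1']
```

Note the caveat from the selective-exposure research above: diversity injection only helps if the out-group content surfaced is not itself the hostile, emotionally charged kind that engagement ranking favors.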
Psychological harm and social comparison
Social media upward comparison negatively affects self-evaluation, with particularly strong effects among females and less popular users (Nesi & Prinstein, 2015).
Fear of Missing Out (FOMO) correlates with lower psychological well-being, including reduced general mood and life satisfaction, with individuals high in FOMO showing increased social media use in a reciprocal cycle (Groenestein et al., 2025, three-wave longitudinal study of 1,341 participants).
Cyberbullying affects 46% of U.S. teens with significant associations to self-harm (OR = 2.35) and suicidal ideation (OR = 2.15) (Pew Research Center, 2022; John et al., 2018, Journal of Medical Internet Research).
Spreading disinformation, hate speech and hidden agendas
Instagram and X are known for problems with toxic users and hate speech, and their moderation teams and tools are frequently criticized as inadequate.
Platforms like X and Truth Social are built around a leader's cult of personality, and the information disseminated there can be biased toward the leader's beliefs. This is not acceptable for a Platform built for the general public, and can be seen as an extreme failure mode of a centralized Service Provider.
AI-generated disinformation has increased 10-fold since 2023 through AI-enabled fake news sites (NewsGuard, 2025), with traditional moderation approaches proving insufficient for sophisticated synthetic content.
Synthetic media and deepfakes threaten democratic functions by undermining epistemic quality of public discourse and enabling the "liar's dividend" effect, where authentic evidence can be dismissed as potentially fake (Chesney & Citron, 2019; Pawelec, 2022).
Microblogging Platform design
How can the Platform further help Users feel Cared For, Present, Fulfilled and Connected?
Feed recommender AI
How can the Feed of a Microblogging Platform (i.e., the post recommender algorithm) help Users feel Cared For, Present, Fulfilled and Connected?
How can we make algorithms transparent, and give Users more control over algorithm behavior?
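One possible answer, sketched below under assumed names (FeedPreferences, score_post; neither is an existing API): make the ranking function a small, legible formula whose weights live in the User's settings, and return a human-readable explanation alongside every score.

```python
from dataclasses import dataclass

@dataclass
class FeedPreferences:
    """Ranking weights the User can inspect and adjust directly in settings."""
    recency: float = 0.4
    from_followed: float = 0.4
    predicted_engagement: float = 0.2

def score_post(post, prefs):
    """Transparent linear scoring: every term is visible, named, and tunable."""
    score = (prefs.recency * post["recency"]
             + prefs.from_followed * post["from_followed"]
             + prefs.predicted_engagement * post["predicted_engagement"])
    explanation = (f"{prefs.recency}*recency({post['recency']:.2f}) + "
                   f"{prefs.from_followed}*followed({post['from_followed']:.2f}) + "
                   f"{prefs.predicted_engagement}"
                   f"*engagement({post['predicted_engagement']:.2f})")
    return score, explanation

# A User who wants a near-chronological feed can simply zero out engagement.
prefs = FeedPreferences(recency=0.9, from_followed=0.1, predicted_engagement=0.0)
score, why = score_post({"recency": 0.8, "from_followed": 1.0,
                         "predicted_engagement": 0.99}, prefs)
print(round(score, 2), "|", why)  # 0.82 | 0.9*recency(0.80) + ...
```

A linear formula is certainly less powerful than a learned ranker, but that is partly the point: transparency and User control may be worth trading some predictive accuracy for.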
Alternative UI/UX
Notice how a typical "feed" establishes a consumer-consumed dynamic between Users and posts. Could a different UI better serve the User? Perhaps emphasizing the people/entities creating the posts, not merely the act of consuming them. Maybe similar to Web 2.0 personal websites, MySpace…
Platform UIs could be built around alternative usage paradigms like collaborative co-creation of information around a topic, vs. consumption of opaquely selected content.
Sorting high-volume content: The most popular Platforms, like X, Threads and Instagram, do not offer easy ways to find specific content beyond a general search box and hashtags. This is arguably another catalyst for the doomscrolling failure mode, where Users resign themselves to scrolling the Feed in hopes of finding something interesting. A general Feed may be an acceptable UI for interacting with one's close friends and viewing their latest updates, but Microblogging Platforms have grown into ways to interact with Users far beyond a personal friendship context. The sheer volume of posts and information could greatly benefit from a UI that abstracts post categories (similar to Bluesky's pinned Feeds) to help the User cut through the noise and carry out their intention of interacting with specific people, communities or posts; a sketch of this idea follows below.
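As a sketch of the category-abstraction idea (loosely inspired by Bluesky's custom Feeds, though not its actual implementation; make_topic_feed and the post dictionary shape are assumptions), a pinnable topic feed can be as simple as a named, User-chosen selection rule:

```python
def make_topic_feed(name, keywords):
    """Build a named, pinnable feed from an explicit, User-chosen topic rule.

    The selection rule is transparent and owned by the User -- the idea behind
    Bluesky's custom Feeds -- rather than an opaque recommender's output.
    """
    keywords = {k.lower() for k in keywords}
    def feed(posts):
        return [p for p in posts
                if keywords & set(p["text"].lower().split())]
    feed.name = name
    return feed

gardening = make_topic_feed("Gardening", ["tomato", "compost", "pruning"])
posts = [{"text": "My compost bin finally heated up!"},
         {"text": "Hot takes on today's news cycle"}]
print([p["text"] for p in gardening(posts)])  # only the compost post
```

Real systems would match on richer signals than keywords (labels, embeddings, curated lists), but the humane-tech property is the same: the User knows exactly why a post appears in this feed.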
Nonintrusive advertising: Many Platforms are heavily monetized through advertising, and collect and sell user data to target ads. While monetization is not inherently a problem, it should not be intrusive or disrupt the core UX. Ads should also be clearly distinguishable from genuine content, especially when the Platform promotes such ads.
Centralized vs decentralized Platform architecture
How can we best place the community’s needs at the core of the Platform, by incentive and design? What are the tradeoffs in centralized and decentralized architecture? Could a better Platform incorporate both centralized and decentralized aspects?
Platform moderation
Who should be responsible for the content created by, and distributed to, the general public? How can we maximize User agency over the content they interact with? E.g. block/mute lists, algorithm transparency and control… (a sketch of client-side block/mute filtering follows below)
… what else?
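For the User-agency question, here is a minimal sketch of client-side block/mute filtering (function and field names are hypothetical). It shows how little machinery is needed once the controls live with the User rather than with the recommender:

```python
def apply_user_controls(posts, blocked_authors, muted_phrases):
    """Filter a feed through the User's own block list and muted phrases.

    blocked_authors: author IDs the User never wants to see.
    muted_phrases: substrings that hide a post without notifying its author.
    Running this client-side keeps agency with the User: the User, not the
    Service Provider or the recommender, decides what is filtered out.
    """
    visible = []
    for post in posts:
        if post["author"] in blocked_authors:
            continue
        text = post["text"].lower()
        if any(phrase.lower() in text for phrase in muted_phrases):
            continue
        visible.append(post)
    return visible

feed = apply_user_controls(
    posts=[{"author": "troll42", "text": "rage bait"},
           {"author": "friend", "text": "New blog post on compost!"}],
    blocked_authors={"troll42"},
    muted_phrases=["election"],
)
print([p["author"] for p in feed])  # ['friend']
```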
Let’s build something cool!

