Behind Your Feed: Platform Capitalism’s Disposable Workers
by Vedika Ochani
5 min read • April 11, 2026

Today, we live in a world ruled by digital algorithms. Algorithms that operate probabilistically and optimise for engagement rather than well-being, leaving users exposed to unpredictability. A thirty-second social media scroll can serve up a meme, or graphic violence that reflects only a fraction of the violence in the world outside the screen.
Either way, there is no way to be certain what content follows the one you are consuming now. That is a typical social media doomscrolling session. The accessibility, and the sheer possibility of being presented with any content fathomable, also leaves the user vulnerable. Vulnerable to explicit content, perhaps even sexually explicit content, that you can scroll past or flag as inappropriate on platforms like Meta, which claim to act on such violations under their Community Guidelines.
Algorithms, applications and websites assure me of their safety features. They flag, restrict and block sexually explicit content before I need to view it. As a woman with access to the internet and its horrors, I am warned, alerted and protected. I marvel at the technology available today. How does an algorithm with no sentience of its own know to protect me from such content?
It does not.
My protection and yours comes at the cost of a woman from a marginalised caste and class background, on whose labour this safety is built. Algorithms and AI can merely reproduce and build on what humans feed them. They have no moral compass of their own. As Sarah T. Roberts documents in Behind the Screen: Content Moderation in the Shadows of Social Media, commercial content moderation is the large-scale human review of user-generated material to enforce platform rules. It is essential to filtering harmful or abusive content from our feeds.
Hence, a content moderator is now vulnerable to disturbing, graphic and abusive content that they must carefully watch, review and deem suitable or unsuitable for consumption.
In a detailed investigation by The Guardian, journalist Anuj Behal profiles Monsumi Murmu, a content moderator from Jharkhand who reviews hundreds of videos and images daily, including violent and sexually explicit material. She described the psychological toll of repeated exposure, saying that for the first few months she could not sleep, and that the images followed her into her dreams. Fatal accidents. Losing family members. Sexual violence she could not stop or escape.
Wage disparities further expose the political economy of this labour. The International Labour Organization shows that online platform work enables companies to source labour globally while paying vastly different wages for comparable tasks depending on geography. Workers in lower income countries routinely earn a fraction of what workers in higher income economies earn for identical digital tasks, while operating with weaker labour protections and limited collective bargaining power. This is not accidental. It is structural.
This structural invisibility is further analysed in Mary L. Gray and Siddharth Suri’s Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass, which demonstrates how human labour underpins artificial intelligence systems while remaining deliberately obscured from public view. What appears to users as automation is in fact labour arbitrage across borders. The platform looks universal. The wages are not.
Reporting on India’s data annotation and moderation industry consistently shows that a significant proportion of this workforce comes from rural and semi-urban backgrounds, including many women from marginalised communities. These patterns are not incidental. They reveal how digital platform economies intersect with pre-existing social hierarchies.
Research across the field of commercial content moderation has documented the psychological consequences of repeated exposure to traumatic material, including stress disorders, anxiety, sleep disturbances and emotional numbing. Roberts’ work in particular situates this harm within a broader system that externalises both risk and visibility away from platforms and onto workers.
There is no denying that content moderation plays a role in identifying hate speech, violent material and child abuse online. But when companies advertise their moderation strategies and safety features and celebrate their enforcement of Community Guidelines, what they fail to advertise is the invisible human labour that makes it possible.
If I am shown unwanted sexually explicit content, I can seek justice for the shock, abuse and harassment because the law allows me to. But what about women like Monsumi, who unwillingly consume such content in magnitudes every day? Does the content affect her any less because she signed a labour contract?
The burden of undesirable and uncomfortable labour has historically fallen on the shoulders of the marginalised, because comfort is privilege. The analogy is structural rather than identical, but the echoes are difficult to ignore. In India, caste-based sanitation practices such as manual scavenging have historically relegated Dalit communities to hazardous and stigmatised labour in the name of public hygiene. In both cases, the health, safety and comfort of the privileged rest on the marginalised, without adequate recognition, protection or compensation.
History will glamorise the innovation of today. In doing so, it must also remember that this development rests on the tired shoulders of workers whose faces we never saw. When I, as a relatively privileged woman, feel even slightly safe on the internet, it is because other marginalised women did not have the luxury of looking away.
About the Author
Vedika Ochani is a second-year student pursuing a Politics and Psychology double major. She is a student journalist passionate about creating a difference in the spheres of gender, caste, politics, social reform, justice, and public policy.