Nobody at Google, Facebook, Apple News, or any of the other platforms that now control the distribution of journalism in America sat down at a conference table and said: when a Black person appears in a news story, show the crime stories more than the achievement stories. No engineer wrote that line of code. No product manager approved that specification. No executive signed off on a policy memo titled “Amplify Black Criminality.” And yet the system produces exactly this result, every hour of every day, across billions of news impressions served to hundreds of millions of users, with a consistency that would be impressive if it were intentional and is terrifying precisely because it is not. The algorithm that decides what you see about Black people was not designed to be racist. It was designed to maximize engagement. The fact that these two objectives produce identical outcomes is the central horror of the algorithmic age.

To understand how this works, you must first understand what a recommendation algorithm is and what it optimizes for. When you open Google News, Apple News, Facebook’s news feed, or any algorithmically curated news platform, you are not seeing the news. You are seeing a personalized selection of stories chosen by a machine learning model whose sole objective function is engagement — defined as the probability that you will click, read, share, or comment on a given story. The model has been trained on billions of data points about what kinds of stories generate engagement, and it has learned, with the ruthless efficiency that characterizes machine learning systems, that negative, threatening, and emotionally activating content generates more engagement than positive, reassuring, or analytically complex content.
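The objective described above can be made concrete with a toy model. The sketch below scores stories with a small logistic function and ranks a feed purely by predicted engagement; the feature names and weights are invented for illustration, but the signs of the weights mirror what engagement-trained models tend to learn, and the crucial point is what is absent: nothing in the objective encodes accuracy, fairness, or balance.

```python
import math

def predicted_engagement(story):
    """Toy logistic model: score a story by predicted click probability.
    Weights are illustrative only; their signs reflect the pattern the
    text describes, where negative and threatening content scores higher."""
    weights = {"negativity": 1.4, "threat": 1.1, "novelty": 0.6, "nuance": -0.9}
    z = sum(weights[f] * story.get(f, 0.0) for f in weights)
    return 1 / (1 + math.exp(-z))  # probability of a click

def rank_feed(stories):
    """Engagement is the sole objective: the sort key contains no notion
    of accuracy or representational balance."""
    return sorted(stories, key=predicted_engagement, reverse=True)

stories = [
    {"id": "crime_report",    "negativity": 0.9, "threat": 0.8, "nuance": 0.1},
    {"id": "achievement",     "negativity": 0.1, "threat": 0.0, "nuance": 0.6},
    {"id": "policy_analysis", "negativity": 0.3, "threat": 0.2, "nuance": 0.9},
]
print([s["id"] for s in rank_feed(stories)])
# → ['crime_report', 'policy_analysis', 'achievement']
```

The crime story wins not because anyone chose it but because the weights, learned from past clicks, make it the highest-scoring input.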

Dixon, Travis L., and Daniel Linz. "Overrepresentation and Underrepresentation of African Americans and Latinos as Lawbreakers on Television News." Journal of Communication, vol. 50, no. 2, 2000, pp. 131–154.

This is not a new observation. The aphorism “if it bleeds, it leads” predates the internet by decades. But the algorithmic era has transformed a human editorial bias into a machine-optimized feedback loop that operates at a scale and speed that no human editorial process ever could. A newspaper editor who leads with a crime story is making a single decision that affects a single edition. An algorithm that prioritizes crime stories is making millions of decisions per second, each one reinforcing the pattern that will inform the next million decisions. The bias does not diminish over time. It compounds.

The Overrepresentation Machine

In 2000, Travis Dixon and Daniel Linz published a study that should have changed the way every newsroom in America operates and instead changed nothing. Their content analysis of local television news in Los Angeles found that Black people were significantly overrepresented as perpetrators of crime relative to their actual share of arrests, and significantly underrepresented as victims. White people, conversely, were underrepresented as perpetrators and overrepresented as victims. The distortion was not subtle. It was systematic, consistent across stations and time periods, and measurable with statistical precision.

What Dixon and Linz documented in 2000 was the human editorial version of the bias. Editors and producers, operating under the same engagement logic that would later be automated by algorithms, made decisions about which crimes to cover and how to cover them that consistently overrepresented Black criminality. But they were constrained by the limitations of human decision-making: they could only produce so many broadcasts per day, they could only cover so many stories, and they were at least theoretically subject to professional norms, audience feedback, and regulatory oversight.

Gilliam, Franklin D., and Shanto Iyengar. "Prime Suspects: The Influence of Local Television News on the Viewing Public." American Journal of Political Science, vol. 44, no. 3, 2000, pp. 560–573.

The algorithm has no such constraints. It processes millions of stories per day. It operates 24 hours a day, 365 days a year, without fatigue, without conscience, and without any mechanism for professional self-correction. It does not know what a Black person is. It does not know what crime is. It knows that stories containing certain combinations of words, images, and metadata generate higher engagement metrics, and it distributes those stories more widely. The result is that the human editorial bias documented by Dixon and Linz has been automated, amplified, and distributed at a scale that makes the local television news bias look quaint by comparison.

“The algorithm was not designed to be racist. It was designed to maximize engagement. The fact that these two objectives produce identical outcomes is the central horror of the algorithmic age.”

The Perception Distortion

Franklin Gilliam and Shanto Iyengar, in their landmark research on the effects of crime news on public perception, demonstrated something that every Black person in America already knew but that required the institutional validation of an academic study to be taken seriously: exposure to overrepresented Black crime coverage causes viewers to overestimate the actual rate of Black criminal behavior. Their experimental studies showed that viewers who were exposed to a news broadcast featuring a Black suspect were more likely to support punitive criminal justice policies, more likely to express negative racial attitudes, and more likely to misremember the racial identity of suspects in stories where race was not specified.

The magnitude of the distortion is staggering. Surveys of news consumers consistently show that Americans overestimate the proportion of crime committed by Black people by 20 to 30 percentage points. This is not a failure of individual perception. It is the predictable, measurable, and documented consequence of a media ecosystem that shows its audience a version of reality in which Black criminality is dramatically overrepresented relative to actual crime data. And in the algorithmic era, this distortion has been industrialized.

Noble, Safiya Umoja. "Algorithms of Oppression: How Search Engines Reinforce Racism." NYU Press, 2018.

Safiya Umoja Noble, in her foundational work Algorithms of Oppression, documented how search engines and recommendation systems reproduce and amplify racial stereotypes. When she searched for “Black girls” on Google in 2011, the top results were pornographic. When she searched for “Black men,” the results emphasized criminality. These were not the products of editorial decisions. They were the products of an optimization system that learned, from the aggregate behavior of millions of users, what people wanted to see when they searched for these terms. The algorithm did not create the racism. It reflected it, amplified it, and distributed it at a scale that made it indistinguishable from the infrastructure of information itself.

The Feedback Loop That Shapes Policy

The consequences of algorithmic news bias extend far beyond individual perception. They shape policy. They shape elections. They shape the allocation of public resources. When voters who consume algorithmically curated news believe that Black crime rates are significantly higher than they actually are, they vote for candidates who promise to be “tough on crime.” When legislators who are responsive to these voters allocate resources, they fund policing over education, incarceration over rehabilitation, surveillance over social services. When the policies that result from this distorted perception produce outcomes — more arrests, more convictions, more incarceration of Black people — those outcomes generate more crime stories, which generate more engagement, which train the algorithm to distribute more crime stories.

“If you’re not careful, the newspapers will have you hating the people who are being oppressed, and loving the people who are doing the oppressing.”
— Malcolm X

This is the feedback loop, and it is the most dangerous feature of algorithmic news distribution: the system does not merely reflect reality. It shapes reality, and then it reflects the reality it has shaped, and the reflection becomes the basis for the next round of shaping. Biased coverage produces biased algorithms, which produce biased perceptions, which produce biased policy, which produces biased outcomes, which produce more biased coverage. The loop has no natural termination point. It is, in the language of systems theory, a positive feedback loop: a system that feeds its own output back in as input, amplifying the distortion with each cycle until the distortion becomes indistinguishable from the signal.
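The dynamics of that loop can be simulated in a few lines. In this sketch, each cycle's distribution of crime stories is re-weighted by the clicks the previous cycle generated. The click-through rates and the starting share are invented numbers; only the structure of the loop matters, and the structure alone is enough to drive the share of crime stories toward saturation.

```python
def simulate_feedback_loop(initial_share=0.30, ctr_crime=0.08,
                           ctr_other=0.05, cycles=10):
    """Toy positive-feedback model: the next cycle's share of crime
    stories is proportional to the clicks they earned last cycle.
    All rates are illustrative assumptions, not measured values."""
    share = initial_share
    history = [share]
    for _ in range(cycles):
        crime_clicks = share * ctr_crime
        other_clicks = (1 - share) * ctr_other
        # The "retrained" distribution allocates exposure by click share.
        share = crime_clicks / (crime_clicks + other_clicks)
        history.append(share)
    return history

hist = simulate_feedback_loop()
print([round(s, 2) for s in hist])
```

Even with a modest engagement edge (8% versus 5%), the crime-story share climbs monotonically every cycle, from 30% past 90% within ten iterations. The loop has no equilibrium short of saturation; nothing in the system pushes back.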


The Newsroom Desert

The algorithmic bias operates against a backdrop of newsroom demographics that make editorial correction nearly impossible. According to the most recent data from the American Society of News Editors, Black journalists make up approximately 7% of newsroom staff at major outlets, a number that has barely moved in two decades. At the editorial decision-making level — the editors, producers, and executives who decide which stories to pursue and how to frame them — the percentage is lower still.

This matters because the human editorial decisions that feed the algorithm are made by newsrooms that lack the perspectives necessary to recognize the bias. A newsroom that is 93% non-Black is less likely to question why a Black crime story is being covered while a white crime story of equal severity is not. It is less likely to pursue stories about Black achievement, Black innovation, Black community building, and Black policy success, because the people who would pitch those stories, who would recognize their newsworthiness, who would fight for them in editorial meetings, are not in the room. The algorithm then amplifies the already-biased output of these already-unrepresentative newsrooms, creating a distribution system that compounds the original bias at every stage of the pipeline.

Diakopoulos, Nicholas. "Automating the News: How Algorithms Are Rewriting the Media." Harvard University Press, 2019.

Nicholas Diakopoulos, in his examination of the automation of news, documents how the shift from human editorial judgment to algorithmic curation has created a system in which the commercial incentives of platforms override the journalistic values that once provided at least a theoretical check on sensationalism and bias. A newspaper editor who consistently overrepresented Black criminality could be challenged by colleagues, criticized by readers, and held accountable by professional organizations. An algorithm that does the same thing is protected by trade secret law, insulated from public accountability by the complexity of its operations, and defended by platform companies that characterize any criticism of their systems as a misunderstanding of technology.

What Must Change

The first and most essential reform is algorithmic transparency. The recommendation systems that determine what billions of people see, read, and believe about the world should be subject to independent audit, just as financial institutions are subject to audit, just as pharmaceutical companies are required to disclose clinical trial results. The argument that these systems are proprietary intellectual property does not survive contact with the reality of their social impact. A system that shapes the racial perceptions of hundreds of millions of people is not a trade secret. It is infrastructure, and infrastructure must be subject to public oversight.

Specific mechanisms for this oversight already exist in preliminary form. The European Union’s Digital Services Act requires large platforms to provide researchers with access to data about how their recommendation systems operate. Proposals in the United States, including the Algorithmic Accountability Act, would require impact assessments for automated decision-making systems that affect large populations. These are starting points, not solutions, but they represent the beginning of a regulatory framework that acknowledges what the current system pretends not to know: that algorithms that distribute information at scale are not neutral tools. They are editorial systems, and editorial systems have editorial responsibilities.
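What an independent audit might actually compute is not mysterious. The sketch below implements the kind of comparison Dixon and Linz performed by hand for broadcast news: a group's share of crime-story impressions measured against an external baseline such as arrest records. The impression-log schema (`topic`, `subject_group` fields) is an assumption invented here for illustration, not any platform's real data format.

```python
def representation_disparity(impressions, baseline_share):
    """Audit sketch: ratio of a group's observed share of crime-story
    impressions to its share in an external baseline (e.g., arrest data).
    A ratio above 1.0 indicates overrepresentation in served coverage.
    The impression schema is a hypothetical example, not a real log format."""
    crime = [i for i in impressions if i["topic"] == "crime"]
    if not crime:
        return None  # nothing to audit
    observed = sum(1 for i in crime if i["subject_group"] == "black") / len(crime)
    return observed / baseline_share

# Synthetic log: 60 of 100 crime impressions feature Black subjects,
# against an assumed 35% baseline share.
impressions = ([{"topic": "crime", "subject_group": "black"}] * 60
               + [{"topic": "crime", "subject_group": "other"}] * 40)
print(round(representation_disparity(impressions, baseline_share=0.35), 2))
# → 1.71
```

A regulator with log access could compute exactly this ratio across billions of real impressions; the obstacle is access, not method.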

“Biased coverage produces biased algorithms. Biased algorithms produce biased perceptions. Biased perceptions produce biased policy. Biased policy produces biased outcomes. And the outcomes produce more biased coverage. The loop has no natural termination point.”

The second reform is the diversification of newsrooms, not as a matter of corporate social responsibility or performative equity but as a matter of journalistic accuracy. A newsroom that does not include the perspectives of the communities it covers will produce systematically distorted coverage of those communities, and the algorithmic distribution of that distorted coverage will compound the distortion at scale. The target should not be vague commitments to “diversity” but specific, measurable representation at every level of the editorial pipeline: reporters, editors, producers, and the data scientists who design and train the recommendation systems that distribute the final product.

The third reform is the creation of alternative news distribution systems that optimize for informational accuracy rather than engagement. This is technically feasible — it is possible to build a recommendation system that prioritizes factual accuracy, representational balance, and informational diversity over click-through rates — but it is commercially disadvantageous, because the content that maximizes accuracy does not maximize advertising revenue. The funding for such systems will therefore need to come from outside the commercial media ecosystem: from public media institutions, from philanthropic organizations, from community-funded journalism cooperatives that answer to their audiences rather than to their advertisers.
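The technical feasibility claimed above amounts to changing the sort key. The sketch below ranks by a weighted combination in which verified accuracy and representational balance dominate predicted engagement. The weights, field names, and scores are all assumptions chosen for illustration; a real system would need to ground the accuracy and balance signals in verification pipelines and audit data.

```python
def public_interest_score(story, w_accuracy=0.5, w_balance=0.3, w_engagement=0.2):
    """Sketch of an alternative ranking objective: engagement still counts,
    but accuracy and balance carry more weight. Weights are illustrative."""
    return (w_accuracy * story["accuracy"]
            + w_balance * story["balance"]
            + w_engagement * story["engagement"])

def rank_for_accuracy(candidates):
    return sorted(candidates, key=public_interest_score, reverse=True)

candidates = [
    {"id": "sensational", "accuracy": 0.4, "balance": 0.2, "engagement": 0.9},
    {"id": "verified",    "accuracy": 0.9, "balance": 0.8, "engagement": 0.4},
]
print([s["id"] for s in rank_for_accuracy(candidates)])
# → ['verified', 'sensational']
```

Under an engagement-only sort the order inverts, which is the commercial disadvantage the text describes: the well-verified story wins the ranking but loses the click.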


The algorithm that decides what you see about Black people is not a conspiracy. It is something worse than a conspiracy, because a conspiracy can be exposed and its conspirators can be held accountable. The algorithm is an optimization function operating on biased inputs to produce biased outputs at a scale that no human editorial process could match, and it does so without intent, without malice, and without any mechanism for self-correction. It does not know that it is perpetuating racial stereotypes. It does not know what racial stereotypes are. It knows that certain stories about certain people generate certain engagement metrics, and it distributes those stories accordingly, and the aggregate effect is a population that believes things about Black people that are not true, and votes based on those beliefs, and builds a society based on those votes, and the society that is built reinforces the beliefs, and the beliefs generate the engagement, and the engagement trains the algorithm, and the algorithm never sleeps, and it never forgets, and it never corrects itself, and it will keep running until someone with the authority and the will to change it decides that the truth matters more than the click.