A Noisy Digital Environment as a Biased Dataset. As previously analyzed, human language in public and political debates impacts not only how individuals form opinions but also how AI systems, including LLMs, interact with users and further shape political discourse.
Indeed, the way political and social issues are framed in the language models’ training datasets could influence how they generate responses on sensitive political topics, potentially perpetuating biases or ideological leanings.
Given these dynamics, it is crucial to examine and delineate the evolution of the digital communicative ecosystem. Understanding how this ecosystem has transformed, shaped by both technological advancements and the shifting nature of online discourse, can provide valuable insights into the risks and challenges posed by LLMs in political and public discussions.
Such analysis connects AI models with human behavior through digital platforms, illuminating the subjective nature of the environment in which these systems are trained and the sources from which their datasets are drawn. It focuses on the potential biases and ideological influences shaping the current digital environment, thereby adding a further dimension to the research by exploring the broader connections between AI systems and public discourse.
Today, on most issues, communication has shifted from a traditional one-to-many format (media or institutional outreach to citizens) to a social media-driven, many-to-many dynamic, which, despite potential manipulations, fosters interaction at unprecedented levels. William Davies, in Nervous States, explains that opinions on a given topic have become integral to the information itself, driving a bottom-up shift that has transformed even the communication styles of media and institutions.[i] This trend, as previously explained, filters everything from objective data, such as estimating crowd size at a political rally, to complex reasoning, such as understanding the causes of a conflict or planning action on climate change, through an emotional lens, often blurring objectivity and creating space for monolithic stances or questions on almost any topic. This tendency not only reframes the information landscape but also influences political and regulatory decisions and public opinion, which is currently lost among competing interpretations and frequently collapses into a climate of uncertainty.
Thomas L. Friedman has described the consequences of this context through the metaphor of mangroves, which captures his perspective on the erosion of societal anchors in the U.S. In nature, mangroves filter toxins, buffer hurricane and tsunami waves, ward off predators, and provide a refuge for young fish to mature in safety. Similarly, in society, mangroves represent the social, normative and political screens that establish ecosystems capable of filtering toxic behavior, mitigating political extremism, and fostering healthy communities and trusted institutions. Nevertheless, the author argues that, both in nature and in democratic societies, mangroves are nearing extinction.
The deterioration of certainties has the potential to erode the credibility of institutions and the moral standing of politics, opening the door to a myriad of interpretations that, as often demonstrated, can progressively drive societal division through vertical or horizontal polarization.
Politics has adapted to this evolving landscape of constant relativity by crafting communication strategies that have created a “post-truth” environment.
The concept of “post-truth,” as Cian O’Callaghan explains, reflects a political climate in which facts are increasingly sidelined in favor of emotional appeals and partisan perspectives. Although the phenomenon is not new,[ii] it became especially visible during the 2016 Brexit referendum and the U.S. presidential election, marking an endemic shift in politics in which campaigns prioritized emotive and often false claims over objective truth in order to influence voters.
Unlike ordinary political dishonesty, post-truth politics is marked by a lack of concern for the exposure of falsehoods, as seen in 2017 when Kellyanne Conway, counselor to President Donald Trump, claimed the White House press secretary had offered “alternative facts” when he inaccurately described Trump’s presidential inauguration crowd as “the largest ever.”
As analyzed by researchers at RAND, the broader shift of information toward emotion has direct consequences for politics: without a common set of facts, it becomes nearly impossible to hold a meaningful debate about important policies and topics, eroding civic faith in governance, degrading the quality of policymaking, and slowing the decision-making process. Furthermore, as people’s trust in politics declines, so does civic involvement, weakening checks on political representatives and directly undermining transparency and accountability.
However, it is essential to consider, as McIntyre highlights, that individuals often resort to falsehoods when they aim to assert something they perceive as more significant than truth itself. In this context, McIntyre describes post-truth as a condition in which individuals believe that the crowd’s reaction can alter the facts of a lie.[iii] This reflects not the invention of deception out of nothing, but rather the rudimentary exploitation of an intrinsic psychological or emotional need.
The background described above has permeated public debate and has already revealed its potential to foster societal division and hate, by filtering complex issues, which should require deep knowledge and the ability to present facts with an almost Homeric formality and objectivity, through an ideological or emotional prism.
A striking example is the horizontal polarization that has unfolded across both traditional and social media around the Israel-Hamas war, where biased views and extremism have quite often shaped opposing narratives since the beginning of the conflict, with severe consequences for Jewish, Muslim, and Palestinian communities across the world.
The tragic toll of the conflict so far, bearing in mind that objective data collection is almost impossible in wars and other conflicts, stands at around 1,200 Israelis dead and more than 5,000 injured, and at least 42,000 Palestinians dead and almost 100,000 injured. According to the Pew Research Center, most Americans are not following traditional media news about the conflict very closely, in part to avoid the strong negative emotional reactions it evokes. In particular, an overwhelming majority of U.S. adults (83%) say that hearing or reading news about the Israel-Hamas war makes them feel sad, and about 65% say news about the war makes them feel angry.
However, many people increasingly rely on social media platforms such as TikTok and Instagram for news: the number of people consuming news content on TikTok rose from 800,000 in 2020 to 3.9 million in 2022. These platforms expose users to a vast range of viewpoints, but without the fact-checking and credibility usually offered by established media organizations.
In war zones, and in Gaza in particular, journalists face especially high risks as they try to cover the conflict, while social media allows unfiltered views of the reality on the ground from the people living it. Nevertheless, this personal point of view can also lead to a distorted understanding of complex situations: while personal accounts provide valuable insights, they can be biased and incomplete, lacking diverse perspectives.
Moreover, as researchers have highlighted, although images coming directly from the conflict in Gaza generate a great deal of conversation on social media, most of the people engaging in those conversations have no direct ties to the region yet remain highly influential drivers of opinion, constructing a version of the conflict for their large followings.
Influential figures such as Ben Shapiro, who has a massive following on X, have used their platforms to express strong pro-Israel sentiments. Alongside discussing the conflict directly, Shapiro often dedicates significant attention to criticizing pro-Palestinian protests in the U.S., which are frequently labeled pro-Hamas or pro-terrorist, fueling further polarization and debate.
Pro-Palestinian content is more prevalent on platforms like TikTok, which has a younger user base that tends to be more sympathetic to the Palestinian cause. While some speculate that the platform’s algorithm favors pro-Palestinian content, suppressing pro-Israeli perspectives, TikTok maintains that this is simply a reflection of its user demographics.
In this unsupervised information landscape, misinformation is widespread, whether shared intentionally or unintentionally. Fact-checkers have debunked numerous images and videos falsely attributed to the Israel-Hamas conflict: content that claims to show scenes from the war but in fact shows past conflicts in the region, events in other parts of the world, or even footage from movies or video games.
Obviously, even unintentional misrepresentation can have serious consequences, as it can hinder efforts to find a peaceful resolution while promoting dehumanizing language. This context has already degenerated into the spread of extremist content, including, to mention a few examples: the hashtags #HitlerWasRight and #deathtomuslims on X; dozens of young Americans posting videos on TikTok expressing sympathy with Osama bin Laden over the letter he wrote criticizing the United States, especially for its support of Israel; and genocidal language against Muslims on YouTube, such as “Israel is totally justified if they decide to Carpet bomb every square inch of Palestine to eliminate the terrorists of Islam.”
In this briefly outlined Babylonian landscape, LLMs are trained using both verifiable and unverifiable digital sources. Therefore, is AI sufficiently supervised to prevent it from unintentionally generating harmful or toxic content, including hate speech?
[i] Davies W., Nervous States: How Feeling Took Over the World, Vintage books, New York, USA, September 6, 2018, pp 43-44 (Page numbers may differ in print)
[ii] In South Africa, more than 330,000 people died prematurely from HIV/AIDS between 2000 and 2005 due to the Mbeki government’s obstruction of life-saving treatment, and at least 35,000 babies were born with HIV infections that could have been prevented. South Africa’s health minister at the time made clear her preference for treating AIDS with the supposed health-giving properties of garlic, lemon, olive oil and beetroot over the drugs that the World Health Organisation wanted provided to save lives. Roeder A., The cost of South Africa’s misguided AIDS policies, Harvard T.H. Chan School of Public Health, 2009, https://www.hsph.harvard.edu/news/magazine/spr09aids/ ; Boseley S., Aids groups condemn South Africa’s ‘Dr Garlic’, The Guardian, May 6, 2005, https://www.theguardian.com/world/2005/may/06/internationalaidanddevelopment.southafrica
[iii] McIntyre L., Post-truth, The MIT Press, Cambridge, Massachusetts, USA, 2018, p. 12 (Page numbers may differ in print)