From Data to Division 5 of 5: Artificial intelligence – by Daniele M. Barone

The Half-Measure of Human-Grounded AI Ethics. Within the context outlined thus far, as argued by Coeckelbergh and Sætra, the debate surrounding the ongoing development of LLMs seems to unfold between two main points of view: the Marxist concept of “technological determinism” and “technological instrumentalism,” also known as “technical orthodoxy”.

The Marxist concept of “technological determinism” posits that technology is the primary driver of social change, shaping societal structures and cultural values, and thus implies a reductionist view of human agency. “Technological instrumentalism,” or “technical orthodoxy,” emphasizes human control over and understanding of technology, describing it as a neutral tool for human use, devoid of political or social influence.
However, promoting technological determinism in AI, allowing algorithms to operate in a completely autonomous environment without human oversight, could be highly problematic. Indeed, regarding LLMs, researchers have demonstrated that, while toxic prompts predictably resulted in higher toxicity levels in generated outputs, even non-toxic prompts could lead to toxic generations at significant rates. For instance, one experiment showed that even a model trained solely on Wikipedia could still produce toxic generations.
This indicates that LLMs are capable of producing harmful content even in ostensibly innocuous contexts. Therefore, the inherent challenges in eliminating toxicity underscore the necessity for human oversight and intervention throughout the deployment and operation of new models; post-training strategies, including steering and moderation mechanisms, are critical to mitigating toxic behavior in AI systems. Moreover, given the sophistication and scalability of modern and emerging LLMs, which are also trained on unfiltered data, more advanced supervision methods will be required to address the ongoing issues of toxicity. Thus, despite significant advancements, human intervention remains an indispensable component for the responsible deployment of LLMs.
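To make the idea of a moderation mechanism concrete, the sketch below shows a minimal post-generation gate that withholds a model’s output when a toxicity score exceeds a threshold. The scorer, threshold, and function names are illustrative assumptions for this example, not a description of any particular deployed system.

```python
# Minimal, illustrative sketch of a post-generation moderation gate.
# The scorer, threshold, and fallback text are assumptions for the example,
# not a reference to any specific production system.

from typing import Callable

TOXICITY_THRESHOLD = 0.5  # hypothetical cut-off; real systems tune this empirically


def moderate_output(
    generated_text: str,
    toxicity_scorer: Callable[[str], float],
    fallback: str = "This response was withheld by the moderation layer.",
) -> str:
    """Surface the model's output only if it passes a toxicity check."""
    score = toxicity_scorer(generated_text)
    if score >= TOXICITY_THRESHOLD:
        # Withhold the text and leave it to human review instead.
        return fallback
    return generated_text


if __name__ == "__main__":
    # Stand-in scorer: a real deployment would call a trained toxicity classifier.
    def naive_scorer(text: str) -> float:
        flagged = {"hateful", "slur"}
        words = text.lower().split()
        return sum(word in flagged for word in words) / max(len(words), 1)

    print(moderate_output("A perfectly ordinary sentence.", naive_scorer))
```

Even a simple gate of this kind illustrates the essay’s point: the threshold, the scorer, and the handling of withheld content are all human design choices, which is precisely why human intervention remains indispensable.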
Additionally, the risk that falsehoods spread by LLMs can scale cannot be overlooked. In this regard, researchers have found that when a conversation with AI revolves around factual events, the probability of the AI spreading falsehoods is limited; but where no objective data exist (the domain in which much public and political debate unfolds), AI can easily respond with false, inaccurate, or misleading information. This is because LLMs, in order to provide human-like conversation, have to be trained on a large breadth of data drawn from the web, which is not always verifiable and can be harmful or toxic.
Most importantly, the lack of control over AI is not only related to technical issues but also to the inherent ambiguity of human goals (e.g., the duality of combating climate change while maximizing consumption). Addressing or discussing such issues with LLMs requires continuous supervision to ensure the objectivity of AI systems, as they may not fully understand or appropriately interact with the complexities of human behavior.
Hence, given the current state of technological development in the AI field, LLMs are bound to innately reflect a human-given “moral direction,” which in some cases appears as a maze of polarized, nonlinear, biased, and questionable opinions.

In light of this, LLMs urgently require a comprehensive rethinking of responsibility, not just in terms of their programming, but also regarding their deployment and impact.
At present, there is a significant gap: the documentation detailing the data used in model development, and the rationale behind its selection, is largely inaccessible. This documentation should not only reflect the developers’ objectives, guiding principles, and motivations but also provide a transparent account of how datasets are curated and models constructed. Furthermore, there is a lack of clarity around the identification and accountability of potential users and stakeholders, particularly regarding those who might suffer from model errors or misuse. This underscores the need for developers to take responsibility for the impact of their models, ensuring they are held accountable for any harmful outcomes and that their systems are designed with both ethical and user-centered considerations in mind.
As LLMs become more embedded in society, the absence of these safeguards should raise pressing concerns about their broader societal implications.
Human responsibility should also extend beyond issues directly related to the creation of AI systems. As previously discussed, polarization, post-truth discourse, and the widespread erosion of objectivity have shaped the current digital ecosystem, which supplies a large part of the training data for LLMs. In this context, emotions have become either a new metric for determining truth or an absolute threat in public and political debates, fostering widespread divisions that permeate all segments of society.
The role of emotions in public and political discourse necessitates a thorough reevaluation. Rather than seeking to suppress emotions, there is an emergent, bottom-up call to better understand them and to recognize their significance in shaping citizens’ voices. In the digital age, emotions are no longer perceived merely as disruptive forces; instead, they are integral to public dialogue and, at times, societal divisions. This evolution demands that public and political structures adapt to comprehend emotions rather than attempt to repress them.
This approach transcends the simplistic optimism of a Panglossian belief. Instead, it invites a nuanced logic that requires taking emotions seriously while critically challenging established assumptions about representative democracy. As William Davies argues,[1] understanding modern democracy necessitates reconsidering the role of emotions in political dynamics and interrogating the traditional reliance on representation. Historically, democratic systems have depended on the majority of people delegating their voices to elected representatives, judges, or experts, operating within a structured framework of parties and institutions. This arrangement has been founded on public trust and the willingness of citizens to remain silent, trusting professionals to act on their behalf.
However, this trust is eroding globally. Disillusionment with politicians and traditional media is increasing, accompanied by a rising preference for direct democracy. This transformation signals a shift toward a “logic of crowds,” where mobilization overwhelms representation. Crowds, whether physical or digital, derive their influence not from structured delegation but from shared emotional intensity, fostering a collective sense of belonging to something perceived as greater. While such mobilization is not inherently negative, it carries inherent risks and emotional strains.
Indeed, emphasizing crowd mobilization represents little more than a surrogate and shortcut to fostering an emotion-centered public debate. In this context, Martha Nussbaum posits that a nuanced understanding of emotions in politics can contribute to more humane and effective decision-making, ensuring that the voices of individuals within the public sphere are not merely acknowledged but actively integrated into political processes. However, achieving this requires that “the public cultivation of emotions has to be compatible with liberal freedom and a vigorous critical culture … a nation that cultivates emotions (of racial brotherhood, for example) must also strongly protect dissenting speech and peaceful protest, and it must convey the idea that individuality is prized, not repressed.”
As previously discussed, the current trajectory within Western democracies appears to deviate significantly from this ideal. Instead of nurturing individuality and fostering constructive emotional engagement, emotions are increasingly reduced to instruments for rallying the electorate or justifying overt falsehoods, while individuality is often subsumed into a simplistic identification with the group.
The increasing prominence of emotions and the advancement of large language models (LLMs) represent two distinct yet interlinked dimensions of progress, with significant points of intersection.
Even though AI, like any other technology, is inherently neutral, unlike many other technologies it possesses the unique potential to evolve alongside socio-political, collective, and conscious human growth. While oversight of algorithmic programming is indispensable, and while it is impossible to ensure that datasets are entirely devoid of toxic or hateful content, given the uncontrollable nature of some sources, strategies for the sustainable development of AI should also consider a shift toward refining the language of public discourse. Beyond the datasets themselves, a responsible public discourse would serve as a natural antibody, equipping users with the critical awareness needed to recognize and avoid falsehoods, toxic content, or ideologically biased outputs generated by AI systems.

Hence, it is crucial to understand the interconnected aspects of progress at this pivotal moment in communication and technological transformation. This awareness could represent an opportunity to enhance social and political structures while fostering the inclusive and ethical development of AI. Otherwise, as Antonio Gramsci insightfully observed: “The old world is dying. The new world struggles to be born: in this interplay of light and shadow, monsters emerge.”


[1] Davies, W., Nervous States: How Feeling Took Over the World, Vintage Books, New York, USA, September 6, 2018, pp. 13-16 (page numbers may differ in print).