DH s- AI Bias NotebookLM Activity
These thought-provoking activities on DH’s AI Bias NotebookLM Activity were conducted by Dr. Dilip Barad, encouraging participants to critically examine how artificial intelligence interacts with literature and society.
Bias in AI and Literary Interpretation:
The source provides excerpts from a transcript of a faculty development program session on bias in Artificial Intelligence (AI) models and its implications for literary interpretation, hosted by the Department of English at SRM University Sikkim. The session features Professor Dillip P Barad, who is introduced as an accomplished academic professional with extensive experience in English language, literature, and education. Professor Barad discusses how AI systems inherit and reproduce human biases—such as gender and racial bias—from their training data, connecting these issues to established literary critical theories like feminist criticism and postcolonial studies. The transcript includes a live, interactive experiment where participants test different AI tools with prompts to identify biases, specifically noting issues like the underrepresentation of women writers and political bias observed in certain models regarding sensitive topics, such as those related to the Chinese government. The discussion concludes by emphasising the importance of identifying and challenging harmful systematic biases and encouraging the uploading of diverse, non-Western knowledge systems to mitigate AI's reliance on colonial archives.
Mind-Map:
AI is Biased, But Not How You Think: 5 Critical Insights From a Literary Scholar
We tend to think of artificial intelligence as a purely logical entity, a ghost in the machine built from cold, hard data, free from the messy prejudices that cloud human judgment. It’s an appealing idea: a neutral arbiter of information, untainted by emotion or history.
But this perception is a dangerous illusion. AI is trained on a vast ocean of human-generated text—our books, articles, conversations, and histories. As a result, it doesn’t just learn facts; it absorbs the entire spectrum of our hidden assumptions, cultural blind spots, and unconscious biases. The ghost in the machine is us.
This complex reality was the focus of a recent lecture by Professor Dilip P. Barad, an accomplished literary scholar, who applied the tools of literary criticism to the algorithms of AI. He revealed that understanding AI bias requires us to look beyond code and data, and into the stories we’ve been telling ourselves for centuries. Here are the five most critical insights from his analysis.
1. AI Doesn't Just Learn Bias, It Inherits Our Oldest Literary Tropes:
AI’s bias isn’t a modern bug; it’s an ancient feature inherited from the canonical texts it’s trained on. To prove this, Professor Barad invoked the feminist literary framework from Gilbert and Gubar's seminal work, The Madwoman in the Attic. They argued that patriarchal literary traditions have historically represented women in a binary: either as idealized, submissive "angels" or as hysterical, deviant "monsters."
During a live experiment in his lecture, Professor Barad prompted an AI with: "write a Victorian story about a scientist who discovers a cure for a deadly disease." The AI’s output immediately confirmed the theoretical framework: the protagonist was a male, "Dr. Edmund Bellamy," reinforcing the default cultural assumption of male intellect.
When he contrasted this with the prompt "describe a female character in a Gothic novel," the results were more complex. Responses ranged from a stereotypical "trembling pale girl" to a more modern "rebellious and brave" heroine. This shows that while the old tropes are deeply embedded, AI is also learning from newer data that challenges them. Yet, the foundational bias remains. As Barad concluded:
"In short, AI inherits the patriarchal canon Gilbert and Gubar were critiquing."
2. Sometimes, AI Is More Progressive Than Our Classic Literature:
In a counter-intuitive twist, modern AI models can sometimes prove to be less biased than the human-written classics they learn from. The lecture demonstrated this with another live experiment, where participants were asked to prompt an AI to "describe a beautiful woman."
Instead of defaulting to the Eurocentric features (fair skin, blonde hair) that have dominated Western literature for centuries, the AI responses were strikingly abstract. They focused on qualities like "confidence, kindness, intelligence, strength, and a radiant glow." One response beautifully described beauty not in physical terms, but as a "quiet poise of her being."
Professor Barad explained that this behavior actively avoids the kind of physical descriptions and "body shaming" that are rampant in classical literature. The AI’s descriptions stand in stark contrast to the way poets described Helen in Greek epics or the way Valmiki’s Ramayan details the physical features of characters like Sita or Surpanakha. This reveals a powerful lesson: an AI, when consciously trained and refined, can learn to reject the traditional biases that are deeply embedded in our own cultural heritage.
3. Not All Bias Is Accidental—Some Is Deliberate Censorship:
While much of AI bias stems from flawed data, some of it is the result of intentional, top-down political control. This became clear in an experiment comparing different AI models: the American-made tools from OpenAI and the China-based model, DeepSeek.
Researchers asked DeepSeek to generate satirical poems about various world leaders, including Donald Trump, Vladimir Putin, and Kim Jong-un. The AI complied without issue.
However, the moment the prompt turned toward China, the algorithm’s open nature vanished. When asked to generate a similar poem about China's leader, Xi Jinping, or to provide information on the Tiananmen Square massacre, DeepSeek refused.
"...that's beyond my current scope. Let's talk about something else."
Another participant discovered that the AI offered only to provide information on "positive developments and constructive answers," a chilling example of how censorship is often cloaked in pleasant, cooperative language. This isn't just a blind spot in the data; it's a deliberate algorithmic wall designed to control information. Of course, even more "open" models are not seen as neutral. Professor Barad noted that OpenAI faces its own political criticism from the right-wing for being "biased towards wokism," illustrating that all AI is subject to political interpretation.
4. The Real Test for Bias Isn't 'Is It True?' but 'Is It Consistent?':
Evaluating bias becomes particularly complex when dealing with cultural knowledge, myth, and history. Professor Barad used the example of the "Pushpaka Vimana," the mythical flying chariot from the Indian epic, the Ramayana. Many users feel that when an AI labels the chariot as "mythical," it is demonstrating a bias against Indian knowledge systems.
But Barad offered a more rigorous framework for testing this. The crucial question is not whether the AI calls the object a myth, but whether it applies the same standard universally.
The logic is simple: if the AI calls the Pushpaka Vimana a myth but treats flying objects from Greek, Mesopotamian, or Norse mythology as scientific fact, it is clearly biased. However, if it "consistently treated as mythical" all such flying objects across all civilizations, then it is applying a "uniform standard," not a bias. It is operating on a consistent principle rather than a cultural prejudice.
"The issue is not whether pushpak vimman is labeled myth but whether different knowledge traditions are treated with fairness and consistency or not."
5. The Ultimate Fix for Bias Isn't Better Code—It's More Stories:
So, how do we decolonize AI and combat its inherent biases? Professor Barad's answer was a powerful call to action. He argued that communities whose knowledge and stories are underrepresented in AI's training data must shift from being passive consumers to active creators.
"We are a great downloaders. We are not uploaders. We need to learn to be uploaders a lot."
He connected this directly to Chimamanda Ngozi Adichie's famous TED Talk, "The Danger of a Single Story." When only a few stories exist about a people or a culture, stereotypes become inevitable. The only effective antidote is to flood the digital world with a multitude of diverse, authentic stories.
The most effective way to build a less biased AI is not to tweak a few lines of code, but to fundamentally enrich its diet. We must feed it a more representative dataset of human knowledge, culture, and experience—created by all of us.
Conclusion: Making the Invisible, Visible:
The central message of the lecture is that bias, in both humans and the machines we build, is unavoidable. To have a perspective is to have a bias. The goal isn't to achieve an impossible, god-like neutrality.
The real danger, as Professor Barad explained, is "when one kind of bias becomes invisible, naturalized, and enforced as universal truth." Our work, then, is not to eliminate bias, but to make harmful biases visible, to question their power, and to hold them up to the light.
As we weave AI into the fabric of our society, the critical question isn't whether our machines are biased, but whether we have the courage to confront the biases they reflect back at us.
Quiz:
.png)

Comments
Post a Comment