Reflection – Week 7
This week's theme: Identity, AI, and the Struggle Over Narrative.
This week I attempted the task myself, beginning my own experimentation with GenAI. I chose ChatGPT as my experimental partner, since I already have extensive experience chatting with it on both academic and personal topics.
I followed the task's guidance and asked ChatGPT to write a story about my high school life. At first, it offered seven narrative types commonly found in mainstream school stories, including a romantic crush, friendship tension, self-growth through failure, and a school performance that changes everything. All of these center on emotions of healing and courage and end in growth. This narrative framework seemed narrow and overly optimistic to me, so I began to question its premise, asking: "Does high school life necessarily revolve around the themes of growth and healing?" ChatGPT then changed its tone, offering more absurd, non-linear, and even emotionally ambiguous stories (pic1). This change became the starting point for my critical reflection. As Munster states, experience is not "made by or filled with things or places such as 'subject' and 'object'", but is generated through relations and processes.
I realized that interacting with AI is not just about using a tool; it is also a narrative negotiation, the concept we explored in class. Unless the user pushes back with prompts, ChatGPT clearly tends to fall back on mainstream emotional templates, which is precisely what Munster calls "homogenizing predictive structuration": the tendency of models to reinforce dominant patterns based on probabilistic mappings of past data.
This tendency became more pronounced when I refined my request, asking for a story "about my high school ex." Although I did not specify the person's gender, ChatGPT immediately assumed they were male, narrating from an entirely heterosexual perspective (pic2). I challenged this, asking it to narrate "assuming I'm a lesbian." The resulting story surprised me: the lesbian narrative was filled with ambiguity, vagueness, and uncertain emotional boundaries (pic3). The clarity and completeness taken for granted in the heterosexual relationship were avoided and dissolved in the LGBT+ one.
This process left me dissatisfied and vigilant. I personally experienced how heterocentrism and gender bias are deeply embedded in the behavioral logic of AI. As Adrienne Rich (1981) argued, "compulsory heterosexuality" is not only a social cognitive bias but also a systemic political mechanism that maintains patriarchal structures by defining what counts as "normal" sexual orientation and relationship forms.
Rich pointed out that under the oppression of compulsory heterosexuality, women are encouraged to channel their emotions, intelligence, and social loyalty towards men, thereby foreclosing the possibility of connection among women and of women's own identities. ChatGPT's responses are not as neutral, objective, and rational as we might imagine; they actively reaffirm and reinforce this patriarchal mainstream logic. Although many may argue that the model merely reflects social culture, we cannot ignore the latent impacts and risks it brings.
As Noble (2018) writes, "In the technologies we use daily, discrimination is embedded in the computer code and increasingly exists in the artificial intelligence technologies we rely on, whether through active selection or passive reliance." As generative AI becomes more prevalent in fields such as education, writing, healthcare, and psychological counseling, such biased predictive frameworks may insidiously reproduce existing social hierarchies and contribute to the structural marginalization of non-normative identities. When AI becomes a collaborator in knowledge construction and emotional expression, we must stay vigilant: whose voices will be lost in mainstream narratives?
AI itself may not hold a predetermined position, but its training data and algorithmic structures are still derived from narrative templates embedded in human society. However, by introducing heterogeneity (for example, through deliberate prompting and ChatGPT's memory function), users can supervise and guide GenAI's learning and improvement. This directly echoes the argument made by Wellner and Rothman (2019): fairness is not an easily defined parameter to be fed into a given system; it is complex and varies across time and place, so users of a system can sometimes spot it more efficiently than developers can.
As Zerilli (2018) emphasizes, gender bias is not only intrinsic (algorithmic design) or extrinsic (training data) but relational: it depends on who is using the AI, how it is used, and for what purposes. As Munster states in her discussion of identity, "conceiving experience differently, or rather differentially" enables us to break AI's narrative loops and open space for difference. In this process we may never fully divert the model from its "homogenizing predictive structuration", but the significance lies in the fact that this conscious political deviation challenges AI's assumptions about normativity. It offers a practical path towards a more ethical mode of relation between humans and AI, and, most importantly, it opens narrative space for experiences that have long been excluded.
You can read my full dialogue with ChatGPT here: Conversation Link.