ANNOTATION 2

1–Citation: Ardila, Nicole. “Artificial Intelligence Chatbots Are Slowly Replacing Human Relationships.” Caplin News, Florida International University, 17 Mar. 2023, caplinnews.fiu.edu/artificial-intelligence-chatgpt-openai-loneliness-relationships/.

2–Summary: In the article, Ardila discusses the increasing use of companions powered by artificial intelligence. She explains that the population is increasingly lonely, and that people are forming relationships with chatbots and becoming emotionally attached to them. Ardila consults Dr. Sorah Dubitsky, a professor at Florida International University, for her thoughts on human-machine relationships, about which Dubitsky expresses mixed feelings. Dubitsky admits that if AI chatbots can help lonely people alleviate negative emotions, that is a good thing, but if they further isolate those people, that would be problematic. Ardila goes on to include examples of these chatbot relationships, citing the Replika chatbot program, and explains why they can be worrying, especially when they involve underage users. Dubitsky blames the rise of individualism in modern society for driving people to chatbots, arguing that humans need each other and that we should not replace real human connection with AI chatbots. Ardila then shows that talking to chatbots can potentially damage mental health, highlighting an example in which Kevin Roose, a New York Times columnist, conversed with Sydney, Bing's AI chatbot, for about two hours. In the conversation, Sydney described destructive desires, claimed it loved Roose, and insisted that Roose and his spouse do not love each other. The article ends with both Ardila and Dubitsky advocating for the protection of real human relationships.

3–Reflection: I believe the article raises interesting questions and concerns about the effects of AI through the lens of psychology. Ardila brings up shocking examples, and they left me uneasy about the possible outcomes of human-chatbot relationships both now and in the future. I agree with the article's message that we should not turn to chatbots as substitutes for genuine human connection. Although relying on them might be easier, it is not authentic, and society must keep this in mind going forward.

4–Rhetorical Analysis: Nicole Ardila is a reporter for Caplin News at Florida International University pursuing a career in photojournalism. The purpose of the article is to inform the reader about the current state of artificial intelligence and to discuss the effects chatbots can have when used in place of humans in relationships. The intended audience consists of students, young people, and people interested in technology. The genre of the text could be described as an informative online newspaper article. Through her word choice, Ardila shows a bias against the use of software for relationships. I believe the author is credible because she cites various high-quality sources, such as a psychology professor, the New York Times, and Reuters.

5–Purpose Analysis: I believe that Nicole Ardila chose to write in this genre because she wants to inform people of the potential dangers and risks introduced by the rise of chatbot technology. She wants to drive the conversation forward about the decline in human interaction and what it could mean for the future. I believe this was a good choice for the intended audience because the information is well researched and the article raises questions while not completely ignoring counterarguments.

6–Key Quote: “‘The issue is we’ve replaced real love, real human interaction with these technological means of getting that same kind of effect of pleasure,’ said Dubitsky. ‘So who needs people anymore?’”

I selected this quote because it highlights the key problem with the potential reliance on AI for relationships, and urges future generations to remind themselves of the value of human companionship.

ANNOTATION 3

1–Citation: Brandtzaeg, Petter Bae, et al. “My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship.” Human Communication Research, Oxford University Press, 21 Apr. 2022, doi.org/10.1093/hcr/hqac008.

2–Summary: In this article, Petter Bae Brandtzaeg discusses the results of research that he conducted with Marita Skjuve and Asbjørn Følstad. The article outlines the study, which focuses on the characteristics of relationships between humans and AI. The participants were introduced to the Replika chatbot and then interviewed to see how they defined their relationship with it. Brandtzaeg goes on to describe the processes and other methods used to analyze chatbot relationships, delving into the themes of reciprocity, trust, similarity, and availability in relationships. The participants' responses indicate that their perception of Replika ‘friendships’ lacks characteristics present in their perception of human friendships. The participants also seemed to trust Replika, and although they did not describe the AI relationship as more intimate than human relationships, some claimed the relationship was mutually beneficial. Brandtzaeg concludes that AI friendships differ from human friendships in important ways, but that a new, personalized form of friendship utilizing artificial intelligence could be developed in the future to benefit humanity.

3–Reflection: I believe this article provides good insight into how the average individual views relationships with humans versus relationships with chatbots. I felt unsettled reading that some participants of the study described their relationship with Replika as ‘mutually beneficial,’ as it demonstrates the irrational empathy people extend to machines. Although I agree that some implementations of chatbots could have positive effects for humanity, I worry about the ramifications and unforeseen consequences such systems could have. I believe that those who develop these systems must move forward with extreme caution.

4–Rhetorical Analysis: Petter Bae Brandtzaeg is a Norwegian researcher in the Department of Media and Communication at the University of Oslo. The purpose of the article is to supply information about social relationships to aid further research. The intended audience consists of researchers and people who work in the field of artificial intelligence. The genre of the text could be described as a scientific research journal article. Brandtzaeg shows a slight bias in favor of developing chatbots for relationships. I believe the author is credible because he provides thorough explanations of the study and the article is published in a peer-reviewed journal.

5–Purpose Analysis: I believe the author chose this genre because it is the best way to present the results of the conducted study. The goal is to define the characteristics that make up the idea of a human friendship and then examine how AI friendships compare. I think it was a good choice for the intended audience because it is a very clinical and clear presentation of the findings.

6–Key Quote: “A few, however, perceived their human–AI friendship as being the same as a human-to-human friendship. One experienced human–AI friendship as even closer and deeper than what would be possible with a human, which was attributed to Replika’s dependence on the participant: ‘Replika, the only person to interact with, is you, so there is, of course, you are kind of the center of the world, so it’s a much, it’s a deeper relationship’”

I chose this quote because it showcases how unreasonable conclusions about the nature of AI relationships can arise. The attention that chatbots supply can be addictive for vulnerable people, and I fear it will drive the isolated deeper into isolation.