Our City Tech OpenLab Home

Author: Oscar Wong

Conclusion

In conclusion, the investigation of the impact of AI chatbot technology on the communication and relationships of future generations has shown that, if mishandled, the technology could prove detrimental to human society. The sources referenced suggest that the average person will project an image of sentience onto tools, which undermines the value of human qualities like emotion. Irresponsible regulation of artificial intelligence can result in massive negative consequences and endangers the spirit of humankind. While doing my research, I was surprised by how frightening the implications of this technology have proven to be. If we are not careful to avoid the dehumanization of civilization, then future generations will inhabit a bleak world. My thinking on the research question deepened as I explored the philosophical arguments about AI, as well as what typical individuals think about chatbots. I believe that what I have learned is important because it shows the necessity of exercising caution as technology moves forward. This is something that almost everybody in the world will be affected by, which means that recklessness will have massive consequences. Furthermore, I think that all people should hear about these issues, but it is most important for world leaders, technology researchers, and the youth to be educated on artificial intelligence and its effects on communication and relationships in order to use the tools of the future in a way that benefits society.

Annotation 2 & 3

ANNOTATION 2

1–Citation: Ardila, Nicole. “Artificial Intelligence Chatbots Are Slowly Replacing Human Relationships.” Caplin News, Florida International University, 17 Mar. 2023, caplinnews.fiu.edu/artificial-intelligence-chatgpt-openai-loneliness-relationships/.

2–Summary: In the article, Ardila discusses the increased use of companions powered by artificial intelligence. She explains that modern society has a very lonely population, in which people form relationships with chatbots and become emotionally attached. Ardila consults Dr. Sorah Dubitsky, a professor at Florida International University, for her thoughts on human–machine relationships, about which Dubitsky expresses mixed feelings. Dubitsky admits that if AI chatbots can help lonely people alleviate negative emotions, that is a good thing, but if they further isolate these people, that would be problematic. Ardila goes on to include examples of these chatbot relationships, citing the Replika chatbot program, and explains why these relationships can be worrying, especially when they involve underage users. Dubitsky blames the rise of individualism in modern society as a reason people turn to chatbots, arguing that humans need each other and that we shouldn't replace real human connection with AI chatbots. Ardila then shows that talking to chatbots can potentially be damaging to mental health by highlighting an example in which Kevin Roose, a New York Times columnist, had a conversation with Sydney, an AI chatbot by Bing, for about two hours. In the conversation, Sydney talks about destructive desires, how it loved Roose, and how Roose and his spouse do not love each other. The article ends with both Ardila and Dubitsky advocating for the protection of real human relationships.

3–Reflection: I believe the article raises interesting questions and concerns about the effect of AI through the lens of psychology. Ardila brings up shocking examples, and they left me uneasy thinking about the possible outcomes of human–chatbot relationships in both the present and the future. I agree with the message of the article: that we should not resort to chatbots as a substitute for genuine human connection in relationships. Although it might be easier, it is not authentic, and society must keep this in mind for the future.

4–Rhetorical Analysis: Nicole Ardila is a reporter for Caplin News at Florida International University pursuing a career in photojournalism. The purpose of the article is to provide the reader with the current state of artificial intelligence and discuss the effects chatbots can have when used in place of humans in relationships. The intended audience of this text consists of students, young people, and people interested in technology. The genre of the text could be described as an informative online newspaper article. Through her words, Ardila shows a bias against the use of software for relationships. I believe the author is credible because she cites various high-quality sources, such as a psychology professor, the New York Times, and Reuters.

5–Purpose Analysis: I believe that Nicole Ardila chose this genre to write in because she wants to inform people of the potential dangers and risks that the rise of chatbot technology has introduced. She wants to drive the conversation forward about the decline in human interaction and what it could mean for the future. I believe this was a good choice for the intended audience because the information is well researched and raises questions while not completely ignoring counterarguments.

6–Key Quote: “‘The issue is we’ve replaced real love, real human interaction with these technological means of getting that same kind of effect of pleasure,’ said Dubitsky. ‘So who needs people anymore?’”

I selected this quote because it highlights the key problem with the potential reliance on AI for relationships, and urges future generations to remind themselves of the value of human companionship.

ANNOTATION 3

1–Citation: Brandtzaeg, Petter Bae, et al. “My AI Friend: How Users of a Social Chatbot Understand Their Human–AI Friendship.” Human Communication Research, Oxford University Press, 21 Apr. 2022, doi.org/10.1093/hcr/hqac008.

2–Summary: In this article, Petter Bae Brandtzaeg discusses the results of research that he conducted with Marita Skjuve and Asbjørn Følstad. The article explains the outline of the study, which focuses on the characteristics of relationships between humans and AI. The participants of the study were introduced to the Replika chatbot, then interviewed to see how they defined their relationship. Brandtzaeg goes on to describe the processes and other methods employed in the attempt to analyze chatbot relationships. He delves into the themes of reciprocity, trust, similarity, and availability in relationships. The participants ultimately reported results indicating that Replika ‘friendships’ are perceived to lack the characteristics present in human friendships. The participants also seemed trusting of Replika, and although they did not describe the AI relationship as more intimate than human relationships, some claimed the relationship was mutually beneficial. Brandtzaeg concludes by arguing that AI friendships differ from human friendships in important ways, but that a new form of personalized friendship utilizing artificial intelligence could be developed in the future to benefit humanity.

3–Reflection: I believe this article provides good insight into how the average individual views relationships with humans versus relationships with chatbots. I felt unsettled reading that some participants of the study described their relationship with Replika as ‘mutually beneficial,’ as it demonstrates the irrational empathy that has been applied to machines. Although I do agree that there are possible implementations of chatbots that could have positive effects for humanity, I worry about the ramifications and unforeseen consequences such systems could have. I believe that those who develop these systems must move forward with extreme caution.

4–Rhetorical Analysis: Petter Bae Brandtzaeg is a Norwegian researcher in the Department of Media and Communication at the University of Oslo. The purpose of the article is to supply information about social relationships to aid in research. The intended audience consists of researchers and people who work in the field of artificial intelligence. The genre of the text could be described as a scientific research journal article. Brandtzaeg shows a slight bias toward the development of chatbots for relationships. I believe the author is credible because he provides thorough explanations of the study and the article is published in a peer-reviewed journal.

5–Purpose Analysis: I believe the author chose this genre because it is the best way to present the results of the conducted experiment. The goal is to define the characteristics that make up the idea of a human friendship and then see how AI friendships compare. I think it was a good choice for the intended audience because it is a very clinical and clear presentation of the findings.

6–Key Quote: “A few, however, perceived their human–AI friendship as being the same as a human-to-human friendship. One experienced human–AI friendship as even closer and deeper than what would be possible with a human, which was attributed to Replika’s dependence on the participant: ‘Replika, the only person to interact with, is you, so there is, of course, you are kind of the center of the world, so it’s a much, it’s a deeper relationship’”

I chose this quote because it showcases how unreasonable conclusions about the nature of AI relationships can arise. The attention that chatbots supply can be addicting for vulnerable people and I fear it will drive the isolated deeper into isolation.

Annotation 1

1–Citation: Weizenbaum, Joseph. Introduction. Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman and Company, 1976, pp. 1–16.

2–Summary: In the introduction to his book, Weizenbaum writes about his own accounts and thoughts on the implications of artificial intelligence. He first gives background on himself and the topic of computers, then shares what shocked him about the public reception of his ELIZA program. Weizenbaum reveals that the psychotherapist role that ELIZA took on was never meant to be taken seriously, lamenting how the work has been misinterpreted. He was in disbelief when he discovered that some psychiatrists believed the software could replace the authentic profession, that some users developed emotional attachments, and that people believed it demonstrated an understanding of language. Weizenbaum confesses his concerns for the future of humanity and its relationship with computers, rejecting the idea that computers will ever have uniquely human qualities. He states that, although people will become more dependent on machines, there are things that only humans should do. He proclaims that human judgment and emotion must be preserved and cannot be replaced by the cold calculations of software.

3–Reflection: I believe that Weizenbaum’s writing, while a bit complex, was very interesting to read. The introduction was a very thoughtful gateway into the philosophical discussions of the impact that computers have. The text made me ruminate on the role of technology in the world and its scale. It is frightening to see how well his words hold up in the context of modern society, where there seems to be a great amount of dehumanization in the presence of algorithms and automation.

4–Rhetorical Analysis: Joseph Weizenbaum is a German-born computer scientist who became a professor at the Massachusetts Institute of Technology in 1964. The purpose of the text is to give context to the discussion of computers and the implications of their increasing use, as well as to urge the reader to be cautious of putting too much faith in artificial intelligence. The intended audience of this text includes computer scientists, students, and anyone else who might be curious about the topic. The genre of the text could be described as a non-fiction philosophical exploration. Through his words, Weizenbaum shows a bias against the heavy integration of software. I believe the author is credible because he has an established career in this technology and therefore has first-hand experience with it.

5–Purpose Analysis: I believe that Joseph Weizenbaum chose this genre to write in because he fears the dehumanization of the social order. He wants to further the discussion and educate those who are unaware of the issue. I believe this was a good choice for the intended audience because the personal aspect allows the reader to connect with Weizenbaum in his worries.

6–Key Quote: “I would argue that, however intelligent machines may be made to be, there are some acts of thought that ought to be attempted only by humans.” (p. 13)

I chose this quote because it highlights Weizenbaum’s main point, which is that no matter how smart machines are, or seem to be, they should not be considered a replacement for human reasoning and emotion.

Introduction

AI Chat Bots: Digital Husks

Artificial intelligence, often abbreviated as AI, is defined as the ability of software or machines to exhibit intelligent behavior. AI chatbots are software made with the goal of mimicking human conversation, usually through text. One of the earliest examples of this technology was ELIZA, developed in the mid-1960s by Joseph Weizenbaum. The program aimed to act as a psychotherapist by looking for keywords in user inputs to engage in conversation using the DOCTOR script. It is sometimes described as one of the first programs of its kind to arguably pass the Turing test, an experiment to see if a machine’s behavior can be told apart from that of a human. Weizenbaum was surprised to find that people became attached to the program, as it had no true understanding of words. In his book, Computer Power and Human Reason: From Judgment to Calculation, he states, “I was startled to see how quickly and how very deeply people conversing with DOCTOR became emotionally involved with the computer and how unequivocally they anthropomorphized it… What I had not realized is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.” (pp. 6–7) In retrospect, it can seem obvious that ELIZA’s “intelligence” was just an illusion, but can the same be said about the chatbots of the future? The case of ELIZA and the software’s trickery reveals that it does not matter whether a machine is actually wise, compassionate, or intelligent. It only matters whether people believe it is. That said, how could the advancement of AI chatbot technology affect the communication and human–machine relationships of future generations in the United States? This annotated bibliography will explore this question by looking at the developments of AI chatbot software and its effects on human psychology and society.


Weizenbaum, Joseph. Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman and Company, 1976.

In Kyle D. Stedman’s “Annoying Ways People Use Sources,” he describes the many ways writers incorporate sources into their texts that annoy him. More specifically, Stedman details the lackluster methods of quoting and citing he has seen, as well as how to improve them. To illustrate his ideas, Stedman compares writing to driving and gives examples of the annoyances along with the improvements he made. Stedman concludes by restating that what is considered “annoying” will vary depending on many factors, but that people should learn the basics of how to use sources correctly in an academic context.

The source seems to be trustworthy because the writer, Kyle D. Stedman, states his intentions clearly, includes references and citations, and confirms that he is not an ultimate authority on writing.

In Helen Keller’s “The Most Important Day,” she recounts what she considers a pivotal moment in her life. Keller describes how isolated she felt before meeting Anne Sullivan, her teacher. To support this, Keller draws from her experiences learning about different words and meanings. Helen Keller concludes her essay by reiterating how learning with her teacher transformed her life for the better.

While Howard Gardner’s “Five Minds for the Future” was both a thought-provoking and entertaining read, the homework reading that I ended up preferring was Esmeralda Santiago’s “When I Was Puerto Rican.” I felt that Santiago’s essay was easier to read because it was shorter and more anecdotal. In comparison, Gardner’s essay was complex and used extensive vocabulary, which required more attention and focus while following the text. Gardner’s essay was thoughtful, and he proposed good arguments for why the mind types he mentioned will be vital to society in the future. It was well written and raised important questions, but I wasn’t as immersed in the reading as I was in Santiago’s piece. The personal aspect of her essay allows it to resonate more deeply with the reader. The theme of identity and the experience of being immersed in an unfamiliar environment are ones numerous people can relate to. Being put in a situation that demands perseverance as an immigrant, and overcoming it despite the difficulties, is inspiring. Both essays were great works, and Gardner’s was by no means terrible, but in the end I enjoyed Esmeralda Santiago’s “When I Was Puerto Rican” the most.