Reflecting on the articles by Barnett and Grose, discuss how, if at all, AI influences your academic practices and personal relationships. Do you see AI as a helpful tool, a potential hindrance, or both? Also, what are you most critical of regarding the speeches we watched in class ("Can AI Companions Help Heal Loneliness" and "AI is Dangerous, But Not For the Reasons You Think")?
Post a brief response (2 paragraphs or so) in the comment section below. If you have questions, let me know.
AI is very helpful. It can offer insight on almost any topic, but it is not the sole resolution to any of them; it is merely a reference. It should be used to reflect on the answer you have already given, an answer that you yourself came up with through your own extensive research and study.
AI can also be very addictive and can foster dependence. It is not completely reliable either; there are times when it has given the wrong answer. When you rely solely on AI, it takes away from your natural ability to use your brain and solve problems.
AI allows me to study math and biology in two different ways. I usually ask ChatGPT to generate practice problems for math and have it test my knowledge of Biology 1101. When I do not understand a math problem, even though my professor provided the answer, I will ask ChatGPT for a step-by-step explanation. For biology, I will provide it with my notes and ask it to quiz me (using true/false, multiple choice, and short answer). Sometimes an answer I provided is marked wrong when it is actually correct, so I have to point out the error ChatGPT made. I still need to use Google and my notes to make sure ChatGPT provided the correct information.
While it does assist me with studying, I don't use AI in my personal life. It doesn't affect my relationships, because I don't depend on it for any personal issues (I'm not that crazy, at least not yet). It may work for people who are a little more antisocial and don't have a close family or friend group. As one of the articles mentioned, it is important that we put our money into projects that help humanity become social again, rather than into AI meant to replace humans. I would prefer people connecting with people, not machines acting like people.
I see the potential of AI to be helpful, but I do not believe it is a complete solution. It is not a fully developed product, and too often it provides an incorrect output. This means that people still must be engaged in their own thinking. AI is a tool, and like any tool it has its pros and cons. Tools like ChatGPT might help us save time, but we cannot allow them to become a crutch, because the more we rely on them, the more complications we will create. So AI should be viewed as a complement to learning, not a replacement for learning. In the end, human thinking and human connection will always exceed what AI can provide!
What I'm most critical of in the speeches is that they both felt a bit like they were trying to promote their own products or work. In the first video, the speaker made an emotional impact with a nice story, but I still felt it was almost being used as a promotional story for the Replika app. Even though she discussed the dangers of AI companions, the talk was still, to some extent, focused on saying how helpful her product was. In the second video, the speaker talked more clearly about ethics and impacts, but she mentioned multiple tools she helped create, which came across as implicit promotion. Both speakers presented important messages, but in my opinion each speaker's purpose was more of a pitch than a warning. I think it's important to stay focused on the real issues and not mix them too much with selling a product.
AI has definitely assisted me on various occasions, whether with a school assignment, a letter of recommendation for a friend, or a letter for my kids' school. There are many positive attributes to using AI. For example, I use it as an assistant to make sure my work stays on track with the message I am trying to convey, rather than sponging solely off AI-generated results. Using my own words and thinking for myself will help me build more confidence in the way I answer questions, with little to no support from an AI-generated source. I think it would be deceptive to present anything completely AI generated as your own work, and it would do more harm than good in the long run by denying yourself the chance to master cognitive skills.
I would say AI can be both a helpful tool and a hindrance. If one becomes too dependent on it as the primary means of getting all assignments done, or of functioning in everyday life, then it blocks room for growth and encourages a lazy mentality instead of thinking for oneself. On the other hand, if it is merely used as a guide to compare against how you already answered an assignment, with only small adjustments after reviewing the AI-generated results, then that is absolutely acceptable.
After viewing the speeches in class, I had little to criticize. They both acknowledged that AI has its downfalls and should not be depended on entirely. You would think that they would be promoting AI to the fullest extent, but that was not the case. They were very transparent and confident in the way they presented the information.
AI does influence my academic life and, in a way, my personal life too (since it's everywhere). I use it to help organize my thoughts more efficiently for homework, to give me ideas when I'm stuck on something, or to check when I'm unsure about my answer. AI is a tool that can totally be misused. People need to understand that you have to use it to help you improve, not to do all the work for you, since first, you won't get any benefit from it, and second, it really isn't that reliable; it makes mistakes.
About the speeches, I think they both make some good points, but Eugenia's is more of a sales pitch, and I don't like how Sasha's proposal ignores the future and focuses only on the present. I personally think there should be a balance; of course we have to worry about both the future and the present. I am especially critical of Eugenia's talk because, although she is giving a good message, it doesn't feel genuine, given that she is the founder of an AI-powered app that will obviously contribute to exactly what she speaks against: creating dependency on AI by trying to emulate human company. As the creator of the chatbot Replika, she is biased, because she obviously wants her product to do good, and I think that is why she reached the conclusion that we need even more AI to fix the problem AI created, which in my opinion is like putting a band-aid on a broken arm.
AI affects my schoolwork and personal life in both helpful and challenging ways. It’s great for writing essays, organizing my thoughts, and finding quick information, which saves time and makes learning easier. But sometimes I think it can make me a little lazy, since I might depend on it instead of figuring things out on my own. In my personal life, AI is useful for things like reminders, music suggestions, or even chatting when I’m bored, but it might also cause people to spend less time talking face-to-face or building real friendships.
From the speeches we watched, I found the idea of AI companions helping with loneliness really interesting, but I don’t think a robot can truly replace human connection. It might help a little, but real friendships and emotions come from people, not machines. In the second speech, I agreed with the speaker that the real danger of AI isn’t the technology itself, but how people choose to use it. I think both talks had good points, but they didn’t fully explore the balance between AI being helpful and possibly harmful. It’s important to look at both sides to really understand how AI is changing our lives.