Unit 2 Annotated Bibliography


How worried should we be about the rise of Artificial Intelligence (A.I.)? That is the question my annotated bibliography is built around. I became interested in this topic while exploring subject matters for an engineering assignment in my freshman year of high school. The subject I settled on was something along the lines of why A.I. should be incorporated further into mechanical engineering. It amazed me how useful A.I. could be, but what fascinated me more was how equally, if not more, dangerous it could be. The costs seemed to far outweigh the benefits, and they still do, depending on the lens you view the topic through. I’ve kept up with the topic over the years, and it has sparked a hobby of sorts. I expect to find people talking more about the dangers of A.I. than the benefits; most of the research I did before on this topic followed that trend. It would be a pleasant surprise to find an article or video that argued otherwise; it might even change my opinion for the better. I doubt that will happen, though, as most of the articles I have read portray fewer benefits than dangers, most likely because the majority of people see cost over, or at least before, benefit. Honestly, if I found a genre for this topic that painted a brighter picture of the benefits, I would have to use it here, as it would balance out the other entries a bit more.


Source Entry #1:

Shane, Janelle. “The Danger of AI Is Weirder than You Think.” TED Talks, 22 Oct. 2019,

Shane talks about what she thinks one of the major concerns about A.I. should be. She goes on to explain how A.I. differs from simple computer coding at its core. Shane briefly mentions that coding means laying out a process step by step and providing the tools necessary to solve it, while A.I. is fundamentally different: you give it a goal and wait to see how it solves the problem, because giving it the answer would defeat the purpose. The example she displays is asking an A.I. to use a set number of parts to assemble a robot that can then go from point A to point B. The A.I. decides to build a tower out of the pieces and topple itself to achieve the goal. So the point she tries to make is that we shouldn’t focus on A.I. rebelling against humanity, but rather on it doing what we want, just not how we want it done. She tries to provoke thought on how to express a problem to an A.I. and how to articulate it in a way that leads the A.I. to the conclusion we desire, not simply the optimal one. I love some of the examples Shane uses to get her point across, my favorite being when she says, “But when David Ha set up this experiment, he had to set it up with very, very strict limitations on how big the A.I. was allowed to make the legs… And technically it got to the end of that obstacle course” (3:25–4:00). I loved learning about this part, as it gave me a different perspective on the perceived problems in developing A.I.; I had never thought about it like that. It always made me wonder what led her to that angle of thought and why she used the images and examples she did in the video. Thinking about it now, they make a lot more sense, as they aptly display both a relatable real-world problem (when Shane mentions the ice cream flavors in the intro) and a conceptual one (when she shows the two experiments). I understand the reasoning behind her choices and why she used the medium she did.
When sharing information via a TED Talk, comprehensive and easy-to-follow slides fit the bill perfectly, especially because they were simple enough that a child could understand them, yet thought-provoking enough that research and debates could be held over them.


Source Entry #2:

Ghosh, Anushka. “PPT Presentation on ARTIFICIAL INTELLIGENCE.” Anushka Ghosh, 19 Dec. 2017,

Ghosh and the other members of her team illustrate throughout this PowerPoint what A.I. is as a basic equation: Artificial + Intelligence = Artificial Intelligence. They then show a brief timeline ranging from 1950 to 2016. They go further by briefly covering the history of A.I., its current status (as of 2017), and even a few of its goals. Throughout the rest of the slides, they offer a few bullet points on select topics, ranging from A.I. platforms to advantages and disadvantages, all the way to what they believe the next decade will bring. This was the most underwhelming of the three sources, largely because it was presented in slide format and didn’t have as much information as I would have liked. It was, however, very easy to understand and slightly thought-provoking, but after going over the other two, this one just seemed a little lackluster. I lack interesting questions after reading it. I would only like to ask this team why they decided to present it in this form, because it leaves much to be desired in the explanation department, offering only a few to-the-point bullets.


Source Entry #3:

Kurzweil, Ray. “The Singularity Is Near » Questions and Answers.” THE SINGULARITY IS NEAR: When Humans Transcend Biology, 22 Sept. 2005,

Kurzweil writes this piece in a Q&A format. He starts by telling readers what the Singularity is, writing that “Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence.” He then writes about the projected calculations per second (cps) necessary to achieve this feat and how close we are to reaching that goal. Further along, Kurzweil explains how we have narrow A.I. that can do specific tasks but lacks general intellect. Kurzweil notes that technological growth is exponential, not linear, which leads to his prediction that this boom of sorts will occur in the year 2045. He then goes on to write about nanotechnology and overpopulation, but for the sake of sticking strictly to the topic of A.I., I will omit the latter half of the piece. Most of the points illustrated in this Q&A that relate directly to A.I. are time-gated projections of what could and should occur after a set number of years, leaving plenty of room for computational error and unmet expectations, as it is purely a guesstimate. That leaves me slightly dissatisfied, but it was to be expected, since the book this information comes from is more than a decade old. It does, however, lead me to wonder exactly how on or off track we are relative to those projections: are we directly on Kurzweil’s timeline, or did we diverge at intervals? It wouldn’t take a considerable amount of research to understand his points, but it would be more than what I am presenting here, as he gets into subdivisions of the expansion and development of A.I. that would leave far too much to talk about. The Q&A left me with more questions than answers, which to me is great, because it only made me want to learn more about the rest of the topics he covers. I believe he wrote this as further clarification of questions people had, or would have had, after reading through his book.



Throughout my research, I learned a few of the lenses through which people judge the dangers of A.I. Source 1 brought up my favorite quote, which I believe captures the overall theme of the TED Talk: “But when David Ha set up this experiment, he had to set it up with very, very strict limitations on how big the A.I. was allowed to make the legs… And technically it got to the end of that obstacle course” (3:25–4:00). It perfectly captures Shane’s point that we should be more concerned about A.I. doing what we want, just not how we want it, than about a potential rebellion. The second entry only glosses over a few topics I had already come across: the basic questions, if you will, of what, why, when, and how. The final piece covers far too much information for this paper and could easily take up an essay by itself, so only the relevant details were included. It might not have been the most thought-provoking to me, but it had the most information, offered a great deal of clarity, and required the most prior knowledge. What surprised me most was the sheer domino effect that A.I. would cause, which wasn’t expanded upon in this bibliography. The answer to my question expanded by at least two separate levels, such as A.I. taking the optimal route instead of the desired one, and the effects it would have on medical technology. This is important because it leads to an entirely different set of problems that need to be worked through before A.I. can be safely implemented into everyday life. It would be a great achievement if those issues could be beta-tested in a controlled environment, which they would have to be anyway; avoiding life-threatening issues caused by this simple fault is what would have to be addressed before anything else. Philosophers and the R&D departments actively working on A.I. would be the first to hear this information.
On one hand, it’s purely technological, but on a deeper level it’s effectively the creation of life, a new creature in itself. Next on the list would probably be doctors, as nanotechnology would skyrocket, effectively revolutionizing modern medicine and other medical practices. Just think of nanoscopic machines that could be used to heal and extend human life expectancy. After that, it’s a tie between militaries and politicians, as commercial and military use would be inevitable once A.I. became widespread enough, or those could end up being the first places it’s utilized.
