Artificial Intelligence vs. Human Intelligence



The idea of artificial intelligence has been discussed for decades, and it will only grow in importance in the near future. The main concerns about AI are whether machines can function like humans and eventually go beyond us. The evolution and expansion of technology is fascinating, but it raises a question: can these autonomous machines operate reliably, and should we be concerned about malfunctions? Today we already have technologies like self-driving cars that handle the driving so you do not have to. The idea of letting your car drive itself sounds compelling, but we have also seen these cars malfunction, even if only in minor ways. Although artificial intelligence has the potential to benefit our world drastically in the future, human intelligence is what got us where we are today. Human intelligence continues to evolve, and it is what allows us even to conceive of the idea of artificial intelligence.


Artificial intelligence has its flaws, but it also opens up possibilities we never thought of. AI can help us complete daily activities faster and more efficiently. It can assist people with disabilities and special needs in multiple ways, and it can help the medical field develop vaccines and treatments. We have seen the revolution of technological advancement over the past decades and the positive effect it has had on our daily lives.


Bhushan, Divya. “Artificial Intelligence vs Human Intelligence: Humans, Not Machines, Will Build the Future.” Springboard, 28 February 2020.


One way artificial intelligence can affect human intelligence is through the learning process of the young. Reliance on AI can demotivate people from learning on their own, since they depend on advanced technology to do the work for them. Learning gives people the ability not only to access information but to do things themselves. A mentor is also important when addressing this issue: a teacher can show students aspects of life that an AI cannot. Everyone learns differently and needs different approaches, and while AI can be beneficial, it is hard to say whether it can get the job done for everyone.


Rinehart, Will, and Allison Edwards. “Understanding Job Loss Predictions from Artificial Intelligence.” American Action Forum, 11 July 2019.


As AI develops, it is predicted to cause a drastic rise in job losses: robots are expected to take over much of the work we do in our daily lives. Since much of human work is already routine and machine-like, it is not surprising that AI can perform many of our activities, but the shift could leave many people unemployed. Even if job opportunities decrease significantly, however, handing routine activities to AI could let us focus on the work AI cannot do.


Thomas, Mike. “The Future of Artificial Intelligence.” Bulletin, 8 June 2019.


AI can have a huge impact on fields like healthcare, transportation, manufacturing, and customer service. In healthcare, a virtual nursing assistant can improve a patient’s overall experience. Transportation can be improved with self-driving cars, though it may take at least a decade to perfect them. In manufacturing, AI robots can work alongside humans to perform tasks and keep everything running smoothly. Customer service can improve with AI assistants that schedule appointments and handle other tasks, which Google is currently working on. In education, virtual tutors can help students with their schoolwork, and virtual AI can assist educators.


Patrizio, Andy. “Pros and Cons of Artificial Intelligence.” Datamation, 7 July 2016.


AI can offer many advantages in the future, as it makes fewer processing errors, takes faster actions and decisions, and produces better research outcomes. One con is that someone could take control of an AI and use it for harmful or improper purposes. Another is bad calls: an AI does not judge situations the way humans do, and a bad call can be significant in a hostile situation.


Dickson, Ben. “There’s a Huge Difference Between AI and Human Intelligence—So Let’s Stop Comparing Them.” bdtechtalks, 21 August 2018.


AI is highly effective for repetitive tasks that can be represented as data, while humans excel at abstract decisions, something AI cannot make. Humans can operate computers themselves, but an AI controlling one directly is better for accuracy and speed. At the same time, AI can make minor mistakes that humans usually would not, errors that humans can then catch and fix.

The Essentiality of Ethical and Axiological Research in Advanced Artificial Intelligence Designs


Ethical issues involving artificial intelligence programs have arisen over the last decades. Rudimentary algorithms that deny loan applicants money based on their zip code history, or facial-recognition software that places dark-skinned faces in a higher risk category than light-skinned ones, are just two examples. While these are, without a doubt, important and consequential problems for individuals who must deal with the determinations made by those software products, the products themselves represent profoundly unsophisticated and narrow domains of artificial intelligence. As time goes on, however, and technology continues its inexorable advancement, their sophistication will grow while their domains widen.

Irving John Good, a mathematician at Trinity College, Oxford, famously claimed in a 1965 essay, “[l]et an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.¹”

While what Good describes above is beyond current human technological capabilities, there is little standing in the way of it coming to fruition in the near future. All aspects of what we consider to be intelligence are being codified and computationalized, from the design of a system that can understand human language to the scanning and virtualizing of nervous systems and brains. There will come a point when some aspect of our technology can either think or at least give us the impression that it can. From there, based on our technological trajectory, it is only a matter of time before that thinking capacity reaches and exceeds our own. We need to be ready, and the most important way to do that is to understand what we value as humans and how that value can be deeply integrated into our future artificial intelligences. Any failure to do so may be the last thing we ever do.


Annotated Bibliographies + Sources

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford, United Kingdom: Oxford University Press, 2014. Print.

It is assumed from early on in Superintelligence that, based on the trajectory of human technological progress, artificial general intelligence, or something either approximating or mimicking it, will come to be within the next twenty to one hundred years. Advances in neuronal imaging, increasingly high-density compute clustering, incremental improvements in algorithmic sophistication, and other emerging technologies, both high and low level, will pave the way for some form of artificial general intelligence. It is, according to Bostrom, a genie that cannot be put back into its bottle. Therefore, he argues, it is essential for researchers across all disciplines, not just STEM, to develop strategies to counter the potentially cataclysmic dangers associated with developing an intelligence that will have no boundaries on its capacity. Those strategies are at the forefront of Superintelligence, as well as a strong argument for mediating, and potentially crippling, emerging technologies that have the potential to accelerate the emergence of an artificial general intelligence until proper safeguards can be developed.

Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” Draft for the Cambridge Handbook of Artificial Intelligence, eds. William Ramsey and Keith Frankish. Cambridge University Press, 2011: forthcoming.

In The Ethics of Artificial Intelligence, Bostrom and Yudkowsky work to explicate the ethical concerns researchers face when developing an artificial intelligence, but Bostrom and Yudkowsky do not limit their analysis to human concerns. In particular, they note that a greater-than-human-level artificial intelligence would have its own considerations and moral status that must not be overlooked. On the familiar level, the analysis touches on the ethical conundrums surrounding contemporary “dumb AI” algorithm design — in particular, ones that may demonstrate undesirable racist results when used to assess things like creditworthiness or loan risk. The authors also discuss the difficulty of designing an AI that can operate successfully and with desired outcomes across multiple domains. It is a relatively simple task to create an AI that can master one domain, e.g. Deep Blue for chess. It is, however, a vastly more complicated and dangerous task to create one that can master more or all domains.

Gabriel, Iason. “Artificial Intelligence, Values and Alignment.” 5 Oct. 2020.

Gabriel’s “Artificial Intelligence, Values, and Alignment” studies the philosophical and axiological issues present in the design of a future artificial general intelligence. One theory is a philosophical system that enshrines utilitarian ideals: by codifying a system that ensures the AI agent makes decisions and takes actions providing the greatest good for the greatest number of people, the agent will not act solely in its own interest or exhibit selfishness. Another is codifying Kantian ideals of universal law, such as beneficence or fairness. An underlying yet profoundly important problem, Gabriel suggests, is that the very act of imposing a rigid set of axiological constraints on the AI does precisely what we are trying to prevent the AI from doing to us. Is hardwiring philosophical and axiological codifications an act of aggression or imposition? Among the other strategies discussed, reward-based training, which gives the AI a choice about its philosophical underpinnings during the programming and training process, is one that grants the agent some modicum of self-determination.

Hendrycks, Dan, et al. “Aligning AI With Shared Human Values.” 21 Sept. 2020.

“Aligning AI with Shared Human Values” dissects universally shared human values and endeavors to map them onto a hypothetical artificially intelligent agent, with the hope that the fruits of those dissections can eventually be codified and encoded. Various tests are conducted and disseminated through Amazon’s MTurk system, which allows randomized, anonymous users to take the tests for a small payment. The tests feature questions of care, justice, ethics, and common sense, and are meant to build a consensus of human desiderata. Those things, ideas, beliefs, and other desired elements are incorporated into a corpus of potentially valuable axiological data sets. That corpus, while nowhere near complete and potentially never so, can still allow researchers to glean valuable value data to build into an artificially intelligent agent.

van de Poel, I. Embedding Values in Artificial Intelligence (AI) Systems. Minds & Machines 30, 385–409 (2020).

Van de Poel’s “Embedding Values in Artificial Intelligence (AI) Systems” takes a from-the-ground-up approach to value design for AI and artificial agent (AA) systems, breaking the very concept of value down into its core elements and treating a particular AI as a sociotechnical system. The sociotechnical-systems approach allows a modularization of certain AI elements, modules he labels “technical artifacts, human agents, and institutions (rules to be followed by the agents).” The benefit of this approach is the perspective it gives into how those individual modules are approached from a value standpoint; e.g., “what are the values embodied in an institution” can become “what are the values embodied in AI systems,” and so on. While van de Poel is able to identify a good number of questions to be asked and values to be codified, he explicitly claims that at no point can these determinations be made without continuous human oversight and redesign.

Works Cited

Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.”

Gabriel, Iason. “Artificial Intelligence, Values and Alignment.” 5 Oct. 2020.

Good, Irving John. “Speculations Concerning the First Ultraintelligent Machine.” The Edward A. Feigenbaum Papers – Spotlight at Stanford.

Hendrycks, Dan, et al. “Aligning AI With Shared Human Values.” 21 Sept. 2020.

van de Poel, I. Embedding Values in Artificial Intelligence (AI) Systems. Minds & Machines 30, 385–409 (2020).


Research Project Individual Conferences with Professor Belli

*Reply to this post indicating which slot you want. Scheduling is first-come, first-served. Please do not request a time slot that has already been taken/requested (unless you absolutely cannot make any other slot because you have another class or a job, in which case you may ask another student, in the comments, to switch with you).

  • These conferences are a time for you and me to get together one-to-one to discuss your ideas/progress on the Individual Research Project and to address any questions you may have. This is also a chance to get your topic formally approved by me (each student needs me to sign off on their project as soon as possible).
  • Please bring all relevant materials with you (proposals, sources), and come prepared to discuss specifics (questions you have, etc.).
  • Please only sign up for a spot that you are 100% sure you can make (and make note of the time/date you are coming).
  • All meetings will be through our regular Zoom office hours link.
  • Each slot is 10 minutes long. Arrive a few minutes early (and be prepared to stay a few minutes late, in case we are running behind).
  • If you miss a conference or come unprepared, it will be counted as an absence and you forfeit your right to schedule future conferences (on the research project) with me.

Monday, 11/30

  • 11:00-11:10am:
  • 11:10-11:20am:
  • 11:20-11:30am: Arin

Tuesday, 12/1

  • 1:00-1:10pm: Shamach
  • 1:10-1:20pm: Max
  • 1:20-1:30pm: Ronald
  • 4:00-4:10pm: Xavier
  • 4:10-4:20pm: Oscar
  • 4:20-4:30pm: Itmam

Wednesday, 12/2

  • 11:00-11:10am: Khoury
  • 11:10-11:20am: Phillip
  • 11:20-11:30am: Edward

Thursday, 12/3

  • 4:00-4:10pm: Justin
  • 4:10-4:20pm: Derick
  • 4:20-4:30pm:
  • 4:30-4:40pm:
  • 4:40-4:50pm:
  • 4:50-5:00pm: