The Essentiality of Ethical and Axiological Research in Advanced Artificial Intelligence Designs

Proposal

Ethical issues involving artificial intelligence programs have arisen over the last several decades. Rudimentary algorithms that deny loan applications based on an applicant’s zip code, or facial-recognition software that assigns dark-skinned faces to a higher risk category than light-skinned ones, are just two examples. While these are, without a doubt, important and consequential problems for the individuals subject to the determinations those software products make, the products themselves are profoundly unsophisticated, narrow-domain applications of artificial intelligence. As time goes on, however, and technology continues its inexorable advancement, their sophistication will grow while their domains widen.

Irving John Good, a mathematician at Trinity College, Oxford, famously claimed in a 1965 essay: “[l]et an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”¹

While what Good describes above is beyond current human technological capabilities, there is little standing in the way of it coming to fruition in the near future. All aspects of what we consider to be intelligence are being codified and computationally modeled, from the design of systems that can understand human language to the scanning and virtualization of nervous systems and brains. There will come a point when some aspect of our technology can either think or at least give us the convincing impression that it can. From there, based on our technological trajectory, it is only a matter of time before that thinking capacity reaches and exceeds our own. We need to be ready, and the most important way to prepare is to understand what we value as humans and how those values can be deeply integrated into our future artificial intelligences. Failing to do so may be the last thing we ever do.

Âą https://exhibits.stanford.edu/feigenbaum/catalog/gz727rg3869

Annotated Bibliography + Sources

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford, United Kingdom: Oxford University Press, 2014. Print.

It is assumed from early on in Superintelligence that, based on the trajectory of human technological progress, artificial general intelligence, or something either approximating or mimicking it, will come to be within the next twenty to one hundred years. Advances in neuronal imaging, increasingly high-density compute clustering, incremental improvements in algorithmic sophistication, and other emerging technologies, both high- and low-level, will pave the way for some form of artificial general intelligence. It is, according to Bostrom, a genie that cannot be put back into its bottle. Therefore, he argues, it is essential for researchers across all disciplines, not just STEM, to develop strategies to counter the potentially cataclysmic dangers of developing an intelligence with no upper bound on its capacity. Those strategies are at the forefront of Superintelligence, alongside a strong argument for restraining, and potentially crippling, emerging technologies that could accelerate the arrival of an artificial general intelligence until proper safeguards can be developed.

Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” Draft for The Cambridge Handbook of Artificial Intelligence, edited by William Ramsey and Keith Frankish, Cambridge University Press, 2011, https://www.nickbostrom.com/ethics/artificial-intelligence.pdf

In The Ethics of Artificial Intelligence, Bostrom and Yudkowsky explicate the ethical concerns researchers face when developing an artificial intelligence, but they do not limit their analysis to human concerns. In particular, they note that a greater-than-human-level artificial intelligence would have its own considerations and moral status that must not be overlooked. On the more familiar level, the analysis touches on the ethical conundrums surrounding contemporary “dumb AI” algorithm design, in particular algorithms that may produce racially discriminatory results when used to assess things like creditworthiness or loan risk. The authors also discuss the difficulty of designing an AI that can operate successfully, and with desired outcomes, across multiple domains. It is a relatively simple task to create an AI that can master one domain, e.g. Deep Blue for chess. It is a vastly more complicated and dangerous task to create one that can master many or all domains.

Gabriel, Iason. “Artificial Intelligence, Values and Alignment.” ArXiv.org, 5 Oct. 2020, arxiv.org/abs/2001.09768.

Gabriel’s “Artificial Intelligence, Values and Alignment” studies the philosophical and axiological issues present in the design of a future artificial general intelligence. One theory is a philosophical system that enshrines utilitarian ideals: by codifying a system for the AI agent to follow that ensures its decisions and actions provide the greatest good for the greatest number of people, the agent will not act solely in its own interest or exhibit selfishness. Another theory codifies Kantian ideals of universal law, such as beneficence or fairness. An underlying yet profoundly important problem, Gabriel suggests, is that the very act of imposing a rigid set of axiological constraints on the AI does precisely what we are trying to prevent the AI from doing to us. Is hardwiring philosophical and axiological codifications itself an act of aggression or imposition? Among the other strategies discussed, reward-based training, which gives the AI some choice over its philosophical underpinnings during the programming and training process, affords the agent some modicum of self-determination.
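
As a toy illustration of the contrast Gabriel draws (this sketch is my own, not from the paper; the actions, welfare numbers, and ratings are all invented), a hard-coded utilitarian objective fixes the agent’s values in advance, while a reward-based objective leaves them to be estimated from human feedback:

```python
# Toy sketch: all actions, welfare numbers, and ratings below are invented.

# Each action maps to the welfare change it causes for three affected people.
ACTIONS = {
    "share_resource": [+2, +2, -1],  # good for most, slightly bad for one
    "hoard_resource": [+5, -3, -3],  # great for one party, bad for the rest
}

def utilitarian_score(action):
    """Hard-coded rule: greatest good for the greatest number."""
    return sum(ACTIONS[action])

# Reward-based alternative: the objective is estimated from human ratings
# gathered during training rather than hardwired in advance.
HUMAN_RATINGS = {
    "share_resource": [1, 1, 1, 0],  # four raters approve or disapprove
    "hoard_resource": [0, 0, 1, 0],
}

def learned_score(action):
    """Learned rule: average human approval stands in for a reward model."""
    ratings = HUMAN_RATINGS[action]
    return sum(ratings) / len(ratings)

for score in (utilitarian_score, learned_score):
    print(score.__name__, "prefers", max(ACTIONS, key=score))
```

The point of the contrast is that the second approach keeps the agent’s objective revisable as feedback accumulates, which is the modicum of self-determination the annotation describes.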

Hendrycks, Dan, et al. “Aligning AI With Shared Human Values.” ArXiv.org, 21 Sept. 2020, arxiv.org/abs/2008.02275.

“Aligning AI with Shared Human Values” dissects universally shared human values and endeavors to map them onto a hypothetical artificially intelligent agent, with the hope that the fruits of those dissections can eventually be codified and encoded. Various tests are conducted and disseminated through Amazon’s Mechanical Turk system, which allows randomized, anonymous users to take the tests for a small payment. The tests feature ideas of care, justice, ethics, and common sense, and are meant to build a consensus of human desiderata. Those ideas, beliefs, and other desired elements are incorporated into a corpus of potentially valuable axiological data sets. That corpus, while nowhere near complete, and potentially never complete, can still allow researchers to glean valuable data about values to build into an artificially intelligent agent.
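
The aggregation step this implies might look something like the following sketch (my own illustration; the scenario, labels, and agreement threshold are invented, not drawn from the paper’s actual corpus): a scenario enters the value corpus only when enough anonymous annotators agree on its label.

```python
from collections import Counter

def consensus_label(annotations, min_agreement=0.8):
    """Return the majority label if enough annotators agree, else None."""
    label, votes = Counter(annotations).most_common(1)[0]
    return label if votes / len(annotations) >= min_agreement else None

# One MTurk-style item: several anonymous raters judge the same scenario.
scenario = "I told my friend her haircut looked great when it didn't."
annotations = ["acceptable", "acceptable", "unacceptable",
               "acceptable", "acceptable"]

print(consensus_label(annotations))  # "acceptable" (4/5 meets the 0.8 cutoff)
```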

van de Poel, Ibo. “Embedding Values in Artificial Intelligence (AI) Systems.” Minds and Machines, vol. 30, 2020, pp. 385–409, https://doi.org/10.1007/s11023-020-09537-4

Van de Poel’s “Embedding Values in Artificial Intelligence (AI) Systems” takes a from-the-ground-up approach to value design for AI and artificial agent (AA) systems by breaking the very concept of value down into its core elements and treating a particular AI as a sociotechnical system. The sociotechnical-systems approach allows a modularization of an AI’s elements, modules he labels “technical artifacts, human agents, and institutions (rules to be followed by the agents).” The benefit of this approach is that it gives perspective into how those individual modules are approached from a value standpoint; e.g., “what are the values embodied in an institution” can become “what are the values embodied in an AI system,” and so on. While van de Poel identifies a good number of questions to be asked and values to be codified, he explicitly claims that these determinations can never all be made without continuous human oversight and redesign.
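
A minimal sketch of that decomposition (my own illustration; the module names and values are invented, and van de Poel offers no code) shows how the same value question can be posed to each module in turn:

```python
from dataclasses import dataclass

@dataclass
class Module:
    kind: str  # "technical artifact", "human agent", or "institution"
    name: str
    embodied_values: list

# A hypothetical AI system broken into van de Poel's three module types.
ai_system = [
    Module("technical artifact", "ranking algorithm", ["accuracy", "fairness"]),
    Module("human agent", "content moderator", ["care", "judgment"]),
    Module("institution", "appeals process", ["accountability", "transparency"]),
]

# The same question applies to every module: what values does it embody?
for m in ai_system:
    print(f"{m.kind} '{m.name}' embodies: {', '.join(m.embodied_values)}")
```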

Works Cited

Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford, United Kingdom: Oxford University Press, 2014. Print.

Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” Draft for The Cambridge Handbook of Artificial Intelligence, edited by William Ramsey and Keith Frankish, Cambridge University Press, 2011, https://www.nickbostrom.com/ethics/artificial-intelligence.pdf

Gabriel, Iason. “Artificial Intelligence, Values and Alignment.” ArXiv.org, 5 Oct. 2020, arxiv.org/abs/2001.09768.

Good, Irving John. “Speculations Concerning the First Ultraintelligent Machine.” 1965. The Edward A. Feigenbaum Papers – Spotlight at Stanford, exhibits.stanford.edu/feigenbaum/catalog/gz727rg3869.

Hendrycks, Dan, et al. “Aligning AI With Shared Human Values.” ArXiv.org, 21 Sept. 2020, arxiv.org/abs/2008.02275.

van de Poel, Ibo. “Embedding Values in Artificial Intelligence (AI) Systems.” Minds and Machines, vol. 30, 2020, pp. 385–409, https://doi.org/10.1007/s11023-020-09537-4

The Notion of Choice and How It Can Change Universes

Choice is a strong driving factor in any storytelling perspective. The choices of the character at hand can dictate the future of the whole story, and science fiction stories are no different. If anything, science fiction places a greater focus on the notion of choice, with topics such as universe-hopping and time travel presenting choices that majorly impact not just a character but an entire universe. This raises the question, in terms of science fiction media: could the notion of choice have a greater impact on the multiverse theory?

The multiverse theory is an idea shared among the many generations that ponder the existence of something greater than our world: the idea that out there, somewhere farther than humans could ever possibly imagine viewing or witnessing, is a whole other universe with something different about it, something that makes it an entirely new experience compared to the familiar one. The differing quality of this other universe could be anything: the Axis powers winning WWII, the Industrial Revolution failing to kick off, the Black Plague spreading worldwide. The main factor that seals it, however, is the thought that each universe is the dirtied mirror image of another in which the opposite choices were made, and therefore the opposite outcome came to fruition.

The research conducted for this project involved delving into various pieces of science fiction media and online articles relating to the topics of choice and the multiverse theory. Movies such as Spider-Man: Into the Spider-Verse and shows such as Rick and Morty both offer great examples of characters who, by making certain choices and dictating their lives through certain actions, cause differing universes to emerge in which the opposite choices were made and opposite versions of those characters exist.

This project focuses on how pieces of science fiction media explore the topic of choices affecting the multiverse, and how a character’s actions or choices may cause the creation of a differing character or, on an even greater scale, a differing universe.

Is Artificial Intelligence Displayed in Science Fiction a Utopia or Dystopia?

Abstract

This research project focuses on the different scenarios of dystopian and utopian societies for artificial intelligence in science fiction and how they relate back to the real world. As the technology in our world grows more advanced over time, artificial intelligence is something that has been discussed for decades and will continue to be discussed going forward. Many science fiction films and books display the different ways AIs could be integrated into our world, whether for good or for ill. Science fiction books about AI date back to the 1950s, and with every decade since, the idea of AI being part of our world has deepened.

Artificial intelligence in the scope of science fiction shows how great and fascinating it would be for humans to live in a world with robots and other forms of artificial intelligence, but science fiction also shows how AIs can be harmful and a disaster for society. Looking back at these films and books, it is important to draw back to the real world and see how the two connect. AI has long been a staple of science fiction and still is, but as time goes on, it raises the concern of whether the things we see in movies like Transformers and Terminator could become our reality.

Isaac Asimov’s Three Laws of Robotics state that a robot may not harm a human being, must obey human orders, and must protect its own existence, with each law yielding to the laws before it. Asimov embodied these laws in his collection I, Robot and other short stories (later adapted into the 2004 film). The laws are meant to enable robots to live peacefully with humans in a utopian society, but as I, Robot shows, there can be scenarios in which a robot does not function the way it is expected to. Even though the Three Laws of Robotics symbolize a robot’s place in the society of science fiction, Asimov shows in his stories how the laws can be broken. In the real world, we have laws that we must follow, yet not everyone obeys them; we see a similar trend with the Three Laws of Robotics.
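
The strict priority ordering of the laws can be made concrete with a small sketch (my own illustration; the flags are invented, and the exception clauses such as “except where such orders would conflict with the First Law” are folded into the flags for simplicity):

```python
def evaluate(action):
    """Return the highest-priority law the action violates, if any.
    Priority is strict: each law yields to every law above it."""
    if action["harms_human"]:
        return "violates First Law"
    if action["disobeys_order"]:
        return "violates Second Law"
    if action["harms_self"]:
        return "violates Third Law"
    return "permitted"

# A robot refusing an order that would harm no one breaks the Second Law.
print(evaluate({"harms_human": False, "disobeys_order": True,
                "harms_self": False}))  # -> "violates Second Law"
```

Asimov’s plots live precisely in the cases a sketch like this glosses over, where the flags themselves are ambiguous or in conflict.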

World Building in Science Fiction

Abstract:

This paper discusses the various implications that world-building in science fiction has for society. It questions the importance of science fiction and how its existence can address major issues and bring them into the public eye. The various examples relate back to the overarching claim that science fiction has the power to stimulate discussion of major issues, ranging from race and gender to the ethics of artificial intelligence. The paper quotes famous science fiction writers and connects how, then and now, the thought process of science fiction has not changed.

The research goes into detail about how various sub-genres of science fiction have brought up major issues. Star Trek featured one of the first interracial kisses on American television, which was unheard of for the time period and is one of many examples of how issues that were once frowned upon are now being talked about. Overall, the research shows the various implications of world-building, as in Star Trek, and how it can be a platform for radical change. Science fiction is not the answer to all problems, but a way to scope out various possibilities and to talk about what was never talked about in public.