Summary of Etzioni et al.’s “Should Artificial Intelligence Be Regulated?”

TO: Professor Ellis

FROM: Nakeita Clarke

DATE: Sept 20, 2020

SUBJECT: 500-Word Summary

This memo is a 500-word summary of the article "Should Artificial Intelligence Be Regulated?" by Amitai Etzioni and Oren Etzioni.

Anxiety regarding Artificial Intelligence (AI) and its potentially dangerous abilities has raised the question of whether or not AI should be regulated. A key first step toward such regulation would be standardizing a universal, objective definition of AI. Some predict that it is inevitable for AI to reach technological singularity and believe this will happen by 2030. This perspective stems from AI being the first emerging technology capable of producing intelligent technology itself, which is interpreted as a fundamental threat to human existence. Respected scholars and tech leaders agree that AI poses such a threat and urge the governance of AI. The Association for the Advancement of Artificial Intelligence (AAAI) suggests that there is no foreseeable reason to pause AI-related research while the question of whether to monitor AI is being decided. Others see no reason for regulation, stating that "machines equipped with AI, however smart they may become, have no goals or motivations of their own" (Etzioni & Etzioni, 2017, p. 33). Even so, it may already be too late to create international regulations for AI, given its widespread use across public and private sectors worldwide.

Both sides agree on the social and economic impact AI will cause; however, regulation could inflate the costs associated with that impact. So far, AI has delivered significant medical benefits, sped up search and rescue missions and thereby increased the chances of recovering victims, and supported effective patient care in mental health services. AI is already embedded in our everyday technology, from personal assistants such as Google Assistant, Alexa, Siri, and Cortana to security surveillance systems. Instead of regulating AI as a whole, which would limit the progression of its beneficial impact, focusing regulation on AI-enabled weaponry may be a more actionable approach. Public interest in doing so exists and is evident from petitions urging the United Nations to ban weaponized AI. Existing treaties on nuclear weapons could be an indicator that countries across the globe may adopt one for AI. In addition to such a treaty, a tiered decision-making guidance system could aid the management of AI systems. On the flip side, what about the management of AI-powered defense, de-escalation, and rescue machines in combat zones?

AI’s disruption of the job market has begun and will create uneven effects, causing additional unemployment and income disparities. Despite job loss, economists believe AI will lead to the creation of new types of jobs. Establishing a committee to monitor AI’s impact, as well as to advise on ways to combat job loss from AI-based initiatives, could mitigate the social and economic threats AI presents. One can be hopeful that an almost utopian alternative to AI’s negative impact is possible if society changes its response to AI, starting with open public dialogue as the driving force for productive policies.

References

ETZIONI, A., & ETZIONI, O. (2017). Should artificial intelligence be regulated? Issues in Science & Technology, 33(4), 32–36.
