1234 Rosecroft Street
Richmond, VA 23225
jennifer.travinski@mail.citytech.cuny.edu
804.999.7777
September 5, 2017
Nicholas Thompson, Editor-In-Chief
WIRED Magazine
P.O. Box 37706
Boone, IA 50037-0706
Dear Mr. Thompson,
As a New York City College of Technology student, tech geek, and concerned human being more skeptical of other humans than of robots, I am writing to disagree with the recent article by Lexi Pandell ("Should We Worry: Will AI Turn Against Me?" September 2017).
I enjoyed Ms. Pandell's article, but I believe the question posed isn't quite the one we should be asking ourselves. Perhaps "Which Humans Are Qualified to Make Certain AI Won't Turn Against Me?" would be more apt (albeit less catchy). The article jokingly contemplates whether AI will spiral out of control and morph our human world into an evil quagmire, then assures us that engineers at Google and Oxford are "on it, just in case." That's not super comforting in and of itself. The real issue, per the article, is whether AI might diverge from "our" intended goals. My concern is this: who is researching the researchers, and what are their goals? Who deems these programmers morally sound enough to police potentially destructive AI?
My contention is that the real issue is ensuring that these programmers, so intent on shaping and monitoring the moral compass of AI, are themselves vetted as moral, just in case not all humans are "inherently good," as the article suggests.
Sincerely,
Jennifer L. Travinski