One thought on “ENG1133 Learning Outcomes”

  1. Jen Travinski

    1234 Rosecroft Street
    Richmond, VA 23225
    jennifer.travinski@mail.citytech.cuny.edu
    804.999.7777

    September 5, 2017

    Nicholas Thompson, Editor-in-Chief
    WIRED Magazine
    P.O. Box 37706
    Boone, IA 50037-0706

    Dear Mr. Thompson,

    As a New York City College of Technology student, a tech geek, and a concerned human being more skeptical of other humans than of robots, I am writing to you in disagreement with the recent article by Lexi Pandell (“Should We Worry: Will AI Turn Against Me?” September 2017).

    I enjoyed Ms. Pandell’s article, but I believe the question posed isn’t quite the one we should be asking ourselves. Perhaps “Which Humans Are Qualified to Make Certain AI Won’t Turn Against Me?” would be more apt (albeit less catchy). The article jokingly contemplates whether AI will spiral out of control and morph our human world into an evil quagmire, then assures us that engineers at Google and Oxford are “on it, just in case.” That’s not super comforting in and of itself. The real issue, per the article, is whether AI might diverge from “our” intended goals. My concern is this: who is researching the researchers, and what are their goals? Who deems these programmers ethically sound and worthy of policing potentially destructive AI?

    My contention is that we must ensure that these programmers, so intent on shaping and monitoring the moral compass of AI, are themselves vetted as equally moral, just in case not all humans are “inherently good,” as the article suggests.

    Sincerely,
    Jennifer L. Travinski
