Author: Max Lobdell
What is the Danger in Developing an Advanced Artificial Intelligence? (Lightning Presentation)
What are the potential consequences of insufficient value and ethical implementations in advanced AI design?
Ethical issues involving artificial intelligence programs have arisen over the last few decades. Rudimentary algorithms that deny loan applicants money based on their zip code history, or facial-recognition software that places dark-skinned faces in a higher risk category than light-skinned ones, are just two examples. While these are, without a doubt, important and consequential problems for the individuals who must deal with the determinations made by those software products, the products themselves represent profoundly unsophisticated and narrow domains of artificial intelligence. As time goes on, however, and technology continues its inexorable advancement, their sophistication will grow while their domains widen.
All aspects of what we consider to be intelligence are being codified and computationalized, from the design of a system that can understand human language to the scanning and virtualizing of nervous systems and brains. There will come a point when some aspect of our technology can either think or at least give us the impression that it can. From there, based on our technological trajectory, it is only a matter of time before that thinking capacity reaches and exceeds our own. We need to be ready, and the most important way to do that is to understand what we value as humans and how that value can be deeply integrated into our future artificial intelligences.
Contemporary science fiction has focused on the consequences of poor value and ethical implementations in advanced AI design. Apocalyptic science fiction, such as the Terminator series, and thrillers such as Ex Machina, show serious, albeit anthropomorphized, dangers of neglecting foundational ethical and axiological considerations of artificially-intelligent agents. Other science fiction, such as Asimov’s “The Last Question,” shows the overwhelming, godlike power a potential AI may have once it begins a runaway process of recursive self-improvement.
While these fictional depictions of malignant AI may seem stylized, dramatic, and far in the future, the current issues of AI bias in software and algorithms are the early warning signs of a potentially catastrophic problem. It is essential we learn these lessons early and take care to implement them along the way. Any failure to do so may be the last thing we ever do.
Annotated Bibliographies + Sources
Asimov, Isaac. “The Last Question.” 1956, templatetraining.princeton.edu/sites/training/files/the_last_question_-_issac_asimov.pdf.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford, United Kingdom: Oxford University Press, 2014. Print.
It is assumed from early on in Superintelligence that, based on the trajectory of human technological progress, artificial general intelligence, or something either approximating or mimicking it, will come to be within the next twenty to one hundred years. Advances in neuronal imaging, increasingly high-density compute clustering, incremental improvements in algorithmic sophistication, and other emerging technologies, both high and low level, will pave the way for some form of artificial general intelligence. It is, according to Bostrom, a genie that cannot be put back into its bottle. Therefore, he argues, it is essential for researchers across all disciplines, not just STEM, to develop strategies to counter the potentially cataclysmic dangers associated with developing an intelligence that will have no boundaries on its capacity. Those strategies are at the forefront of Superintelligence, as well as a strong argument for mediating, and potentially crippling, emerging technologies that have the potential to accelerate the emergence of an artificial general intelligence until proper safeguards can be developed.
Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” Draft for the Cambridge Handbook of Artificial Intelligence, edited by William Ramsey and Keith Frankish, Cambridge University Press, 2011, https://www.nickbostrom.com/ethics/artificial-intelligence.pdf.
In The Ethics of Artificial Intelligence, Bostrom and Yudkowsky work to explicate the ethical concerns researchers face when developing an artificial intelligence, but they do not limit their analysis to human concerns. In particular, they note that a greater-than-human-level artificial intelligence would have its own considerations and moral status that must not be overlooked. On a more familiar level, the analysis touches on the ethical conundrums surrounding contemporary “dumb AI” algorithm design — in particular, algorithms that may demonstrate undesirable racist results when used to assess things like creditworthiness or loan risk. The authors also discuss the difficulty of designing an AI that can operate successfully and with desired outcomes across multiple domains. It is a relatively simple task to create an AI that can master one domain, e.g. Deep Blue for chess. It is, however, a vastly more complicated and dangerous task to create one that can master many or all domains.
Gabriel, Iason. “Artificial Intelligence, Values and Alignment.” ArXiv.org, 5 Oct. 2020, arxiv.org/abs/2001.09768.
Gabriel’s Artificial Intelligence, Values and Alignment studies the philosophical and axiological issues present in the design of a future artificial general intelligence. One theory is a philosophical system that enshrines utilitarian ideals; the belief being that, by codifying a system for the AI agent to follow that ensures it makes decisions and commits actions that provide the greatest good for the greatest number of people, it will not act solely in its own interest or exhibit selfishness. Another theory is codifying Kantian ideals of universal law, such as beneficence or fairness. An underlying yet profoundly important problem, Gabriel suggests, is that the very act of creating a rigid set of axiological constraints upon the AI does precisely what we are trying to avoid the AI doing to us. Is hardwiring philosophical and axiological codifications an act of aggression or imposition? Among other strategies discussed, reward-based training, which gives the AI a choice when it comes to its philosophical underpinning during the programming and training process, is one that gives the agent some modicum of self-determination.
Garland, Alex, director. Ex Machina. Universal Studios, 2014.
Hendrycks, Dan, et al. “Aligning AI With Shared Human Values.” ArXiv.org, 21 Sept. 2020, arxiv.org/abs/2008.02275.
Aligning AI with Shared Human Values dissects universally-shared human values and endeavors to map those onto a hypothetical artificially-intelligent agent with the hope that the fruit of those dissections can eventually be codified and encoded. Various tests are conducted and disseminated through Amazon’s MTurk system, which allows randomized and anonymous users to take the tests for a small payment. Issues featured in the tests are ideas of care, justice, ethics, and common sense. These are used to build a consensus of human desiderata. Those things, ideas, beliefs, and other desired elements are incorporated into a corpus of potentially-valuable axiological data sets. That corpus, while nowhere near, and potentially never, complete, can still allow researchers to glean valuable data about human values to build into an artificially-intelligent agent.
Van de Poel, Ibo. “Embedding Values in Artificial Intelligence (AI) Systems.” Minds and Machines, vol. 30, 2020, pp. 385–409, https://doi.org/10.1007/s11023-020-09537-4.
Van de Poel’s Embedding Values in Artificial Intelligence (AI) Systems takes a from-the-ground-up approach to value design for AI and artificial agent (AA) systems by breaking down the very concept of value into its core elements and using an approach that attempts to see a particular AI as a sociotechnical system. The sociotechnical systems approach allows a modularization of certain AI elements, modules he labels “technical artifacts, human agents, and institutions (rules to be followed by the agents).” The benefit of this approach is that it gives perspective into how those individual modules are approached from a value standpoint; e.g. “what are the values embodied in an institution” can become “what are the values embodied in AI systems” and so on. While van de Poel is able to identify a good number of questions to be asked and values to be codified, he explicitly claims that at no point can all of these determinations be made without continuous human oversight and redesign.
Works Cited
Asimov, Isaac. “The Last Question.” 1956, templatetraining.princeton.edu/sites/training/files/the_last_question_-_issac_asimov.pdf.
Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” Draft for the Cambridge Handbook of Artificial Intelligence, edited by William Ramsey and Keith Frankish, Cambridge University Press, 2011, https://www.nickbostrom.com/ethics/artificial-intelligence.pdf.
Gabriel, Iason. “Artificial Intelligence, Values and Alignment.” ArXiv.org, 5 Oct. 2020, arxiv.org/abs/2001.09768.
Garland, Alex, director. Ex Machina. Universal Studios, 2014.
Hendrycks, Dan, et al. “Aligning AI With Shared Human Values.” ArXiv.org, 21 Sept. 2020, arxiv.org/abs/2008.02275.
Van de Poel, Ibo. “Embedding Values in Artificial Intelligence (AI) Systems.” Minds and Machines, vol. 30, 2020, pp. 385–409, https://doi.org/10.1007/s11023-020-09537-4.
The Essentiality of Ethical and Axiological Research in Advanced Artificial Intelligence Designs
Proposal
Ethical issues involving artificial intelligence programs have arisen over the last few decades. Rudimentary algorithms that deny loan applicants money based on their zip code history, or facial-recognition software that places dark-skinned faces in a higher risk category than light-skinned ones, are just two examples. While these are, without a doubt, important and consequential problems for the individuals who must deal with the determinations made by those software products, the products themselves represent profoundly unsophisticated and narrow domains of artificial intelligence. As time goes on, however, and technology continues its inexorable advancement, their sophistication will grow while their domains widen.
Irving John Good, a mathematician at Trinity College, Oxford, famously claimed in a 1965 essay, “[l]et an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man, however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”¹
While what Good describes above is beyond current human technological capabilities, there is little standing in the way of it coming to fruition in the near future. All aspects of what we consider to be intelligence are being codified and computationalized, from the design of a system that can understand human language to the scanning and virtualizing of nervous systems and brains. There will come a point when some aspect of our technology can either think or at least give us the impression that it can. From there, based on our technological trajectory, it is only a matter of time before that thinking capacity reaches and exceeds our own. We need to be ready, and the most important way to do that is to understand what we value as humans and how that value can be deeply integrated into our future artificial intelligences. Any failure to do so may be the last thing we ever do.
¹ https://exhibits.stanford.edu/feigenbaum/catalog/gz727rg3869
Annotated Bibliographies + Sources
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford, United Kingdom: Oxford University Press, 2014. Print.
It is assumed from early on in Superintelligence that, based on the trajectory of human technological progress, artificial general intelligence, or something either approximating or mimicking it, will come to be within the next twenty to one hundred years. Advances in neuronal imaging, increasingly high-density compute clustering, incremental improvements in algorithmic sophistication, and other emerging technologies, both high and low level, will pave the way for some form of artificial general intelligence. It is, according to Bostrom, a genie that cannot be put back into its bottle. Therefore, he argues, it is essential for researchers across all disciplines, not just STEM, to develop strategies to counter the potentially cataclysmic dangers associated with developing an intelligence that will have no boundaries on its capacity. Those strategies are at the forefront of Superintelligence, as well as a strong argument for mediating, and potentially crippling, emerging technologies that have the potential to accelerate the emergence of an artificial general intelligence until proper safeguards can be developed.
Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” Draft for the Cambridge Handbook of Artificial Intelligence, edited by William Ramsey and Keith Frankish, Cambridge University Press, 2011, https://www.nickbostrom.com/ethics/artificial-intelligence.pdf.
In The Ethics of Artificial Intelligence, Bostrom and Yudkowsky work to explicate the ethical concerns researchers face when developing an artificial intelligence, but they do not limit their analysis to human concerns. In particular, they note that a greater-than-human-level artificial intelligence would have its own considerations and moral status that must not be overlooked. On a more familiar level, the analysis touches on the ethical conundrums surrounding contemporary “dumb AI” algorithm design — in particular, algorithms that may demonstrate undesirable racist results when used to assess things like creditworthiness or loan risk. The authors also discuss the difficulty of designing an AI that can operate successfully and with desired outcomes across multiple domains. It is a relatively simple task to create an AI that can master one domain, e.g. Deep Blue for chess. It is, however, a vastly more complicated and dangerous task to create one that can master many or all domains.
Gabriel, Iason. “Artificial Intelligence, Values and Alignment.” ArXiv.org, 5 Oct. 2020, arxiv.org/abs/2001.09768.
Gabriel’s Artificial Intelligence, Values and Alignment studies the philosophical and axiological issues present in the design of a future artificial general intelligence. One theory is a philosophical system that enshrines utilitarian ideals; the belief being that, by codifying a system for the AI agent to follow that ensures it makes decisions and commits actions that provide the greatest good for the greatest number of people, it will not act solely in its own interest or exhibit selfishness. Another theory is codifying Kantian ideals of universal law, such as beneficence or fairness. An underlying yet profoundly important problem, Gabriel suggests, is that the very act of creating a rigid set of axiological constraints upon the AI does precisely what we are trying to avoid the AI doing to us. Is hardwiring philosophical and axiological codifications an act of aggression or imposition? Among other strategies discussed, reward-based training, which gives the AI a choice when it comes to its philosophical underpinning during the programming and training process, is one that gives the agent some modicum of self-determination.
Hendrycks, Dan, et al. “Aligning AI With Shared Human Values.” ArXiv.org, 21 Sept. 2020, arxiv.org/abs/2008.02275.
Aligning AI with Shared Human Values dissects universally-shared human values and endeavors to map those onto a hypothetical artificially-intelligent agent with the hope that the fruit of those dissections can eventually be codified and encoded. Various tests are conducted and disseminated through Amazon’s MTurk system, which allows randomized and anonymous users to take the tests for a small payment. Issues featured in the tests are ideas of care, justice, ethics, and common sense. These are used to build a consensus of human desiderata. Those things, ideas, beliefs, and other desired elements are incorporated into a corpus of potentially-valuable axiological data sets. That corpus, while nowhere near, and potentially never, complete, can still allow researchers to glean valuable data about human values to build into an artificially-intelligent agent.
Van de Poel, Ibo. “Embedding Values in Artificial Intelligence (AI) Systems.” Minds and Machines, vol. 30, 2020, pp. 385–409, https://doi.org/10.1007/s11023-020-09537-4.
Van de Poel’s Embedding Values in Artificial Intelligence (AI) Systems takes a from-the-ground-up approach to value design for AI and artificial agent (AA) systems by breaking down the very concept of value into its core elements and using an approach that attempts to see a particular AI as a sociotechnical system. The sociotechnical systems approach allows a modularization of certain AI elements, modules he labels “technical artifacts, human agents, and institutions (rules to be followed by the agents).” The benefit of this approach is that it gives perspective into how those individual modules are approached from a value standpoint; e.g. “what are the values embodied in an institution” can become “what are the values embodied in AI systems” and so on. While van de Poel is able to identify a good number of questions to be asked and values to be codified, he explicitly claims that at no point can all of these determinations be made without continuous human oversight and redesign.
Works Cited
Bostrom, Nick, and Eliezer Yudkowsky. “The Ethics of Artificial Intelligence.” Draft for the Cambridge Handbook of Artificial Intelligence, edited by William Ramsey and Keith Frankish, Cambridge University Press, 2011, https://www.nickbostrom.com/ethics/artificial-intelligence.pdf.
Gabriel, Iason. “Artificial Intelligence, Values and Alignment.” ArXiv.org, 5 Oct. 2020, arxiv.org/abs/2001.09768.
Good, Irving John. “Speculations Concerning the First Ultraintelligent Machine.” The Edward A. Feigenbaum Papers – Spotlight at Stanford, exhibits.stanford.edu/feigenbaum/catalog/gz727rg3869.
Hendrycks, Dan, et al. “Aligning AI With Shared Human Values.” ArXiv.org, 21 Sept. 2020, arxiv.org/abs/2008.02275.
Van de Poel, Ibo. “Embedding Values in Artificial Intelligence (AI) Systems.” Minds and Machines, vol. 30, 2020, pp. 385–409, https://doi.org/10.1007/s11023-020-09537-4.
Class Notes, November 3, 2020
Max is on notes today, Phillip is on for 11/5.
Professor Belli mandated that no student names are to be used in class notes going forward. This is to respect the privacy of the students involved and to reinforce the idea that the classroom is a safe space for ideas and individual opinions.
Today was a discussion about science fiction and politics, primarily how reading and critical thinking about the political realm can be used to analyze competing political messages, e.g.: who benefits from x policy, what does progress mean, etc.
Three writing prompts:
- Ask yourself: “How are you feeling? What, if anything, would block you from writing right now?” As a response comes to mind, write it down.
- Then, ask yourself: what questions or thoughts or issues are on your mind?
- What are your hopes for the future? This might be for tonight, tomorrow, the next four years, or more long term.
The class discussed this for an extended amount of time. Among the issues raised were anxiety, safety, difficulty writing in general, and the desire to see actual change rather than hear about it ad nauseam.
- It is seductive and compelling to claim that someone is either ignorant or voting against their own interests. However, it is important to remember that many political movements, including Trump’s MAGA or even Hitler’s Nazi Germany, were intended to be seen as utopian movements. Supporters of those movements genuinely believed the policies enacted by those political systems would make their lives better — that they were the ideal to be striven for.
Quotes offered and discussed:
“A word after a word after a word is power.”
-Margaret Atwood (Canadian science-fiction author, renowned feminist thinker, creator of The Handmaid’s Tale and The Penelopiad, among others.)
“This is painful, and it will be for a long time.”
-Hillary Rodham Clinton, November 9th, 2016
“Utopia also entails refusal, the refusal to accept that what is given is enough. It embodies the refusal to accept that living beyond the present is delusional, the refusal to take at face value current judgements of the good or claims that there is no alternative.”
-Ruth Levitas, 2013, Utopia as Method: The Imaginary Reconstitution of Society
“We must live in this world as citizens of another. What is required of us is both specific to our distinctive situation, and the same as for every earlier or later generation. Mourn. Love. Hope. Imagine. Organize.”
“The eventual disappointment of hope is not a reason to forsake it as a critical thought process.”
“The present is not enough. It is impoverished and toxic for queers and other people who do not feel the privilege of majoritarian belonging, normative tastes, and “rational” expectations. (I address the question of rationalism shortly.) Let me be clear that the idea is not simply to turn away from the present. One cannot afford such a maneuver, and if one thinks one can, one has resisted the present in favor of folly. The present must be known in relation to the alternative temporal and spatial maps provided by a perception of past and future affective worlds.”
Affective: relating to moods, feelings, or attitudes.
No homework, but two requests:
- Please vote.
- Practice self care; sleep, meditation, etc. Two meditation apps: Take 90, Headspace. Meditative breathing technique to lower one’s heart rate: 2 second inhalation, 8 second exhalation, repeat. (Alternatively, 4 second inhalation, 8 second exhalation, repeat.)
Set up an appointment with Professor Belli if you have a midterm grade below a P.
Stay safe.
Flailing and Failing
In the most upbeat, positive, life-affirming ending of See You Yesterday that anyone can conceive of, everyone still dies. The writers could take pains to show forgiveness and harmoniousness, healing and love — but as the credits roll, whether the audience acknowledges it or not, everyone, eventually, will die. And so will they.
See You Yesterday features a young woman who wholly rejects that premise. C.J. believes her intervention can stop the tragic circumstances surrounding the death of her brother, Calvin, and takes great pains to break the causal chain leading to his demise. This is fraught with peril; each change she makes in the past has cascading effects that alter the timeline in unexpected, equally tragic ways.
C.J., however, is undeterred. She is driven to change what happened. Equally, she is horrified by the prospect that she has a theoretical ability to help but a diminishing practical means of doing so. That horror leaves her unable to recognize the futility of her actions. She tries, again and again, failure after failure, to right a colossal wrong. In the final scene showing C.J. running, it is less a run toward a problem she has to solve than a run away from the ultimate reality of the matter: that death will always triumph.
While there are clear parallels between See You Yesterday and LaValle’s Destroyer, particularly the depiction of how some police officers behave violently toward young Black men, I think the strongest thematic parallel is between See You Yesterday and Asimov’s The Last Question. C.J. is trying to reverse entropy. Every death, every destruction of a beautiful human mind rich with information and experience, is humankind’s strongest association with entropy. It is just another example of how complexity will collapse and convert to a simpler state, whether it is a star going nova, a tree burning to ash, or a person’s entirety being erased by a hateful policeman’s bullet.
While See You Yesterday has that strong parallel to The Last Question, the former is far more pessimistic — and realistic. There is no Multivac. There is no god or godlike system with the potential to reverse the horrors of entropy. Through her actions, C.J. asks her question: “can I go back and fix this?” The universe does not offer the same, vague, kicking-the-can-down-the-road answer as the Multivac: “THERE IS INSUFFICIENT DATA FOR A MEANINGFUL ANSWER.” The universe, instead, in its utter, incomprehensible indifference, simply shows C.J. “no.”
Yet still she tries.
I don’t know if See You Yesterday was written to be a treatise on cosmic indifference and philosophical nihilism, but I think it’s hard to see it as anything else. I assume, if one were so inclined, one could view the ending as a positive thing: the portrait of a brilliant, determined young woman hellbent on righting wrongs. In reality, though, there is no evidence for that. Each trip to a different timeline resulted in a new cascade of tragedies, whether it was the death of her friend and partner Sebastian or the hospitalization of an elderly relative. C.J. flails, and fails, time and time again. This is a story that will not end happily. It is a story that cannot end happily. There is no salvation. There is no remediation. See You Yesterday, despite being a work of science fiction, reflects the reality and necessity of hopelessness.
Max’s Reading Response #6: “Destroyer”
When fear reigns, death and destruction thrive.
Appropriately, destruction is a central theme of LaValle’s “Destroyer.” It is both a motivator and a consequence; a plan and a result. Acts of destruction are a culmination of years, decades, and centuries of fear and hopelessness, and once unleashed, there is very little that can stop their metastasis.
Who is afraid in “Destroyer”? Everyone. A white woman was afraid; her fear of Black people led her to assume that Akai, a child carrying a baseball bat, was a grown man with a gun. The police, who murdered Akai within seconds of encountering him, were afraid. Dr. Baker was afraid of living without her son and afraid of a system she believed would never deliver justice. The Director was afraid of death. Frankenstein’s monster was afraid Dr. Frankenstein’s bloodline would produce more hopeless creatures like him. Akai was afraid for his parents.
What happened as a result of all that fear? Pure death and destruction. No justice, no resolution. Akai, it appears, will wander like his Frankensteinian ancestor — always an “other”. Always with fingers pointed in his direction.
While the timely social and political messages of “Destroyer” are at the forefront, lingering behind them is a profound pessimism about the role, and perhaps even existence, of modern technology.
Early in “Destroyer,” Frankenstein’s monster is brought up to speed about the technological advancements since his Antarctic exile. The harnessing of electricity and the triumph of manned flight are juxtaposed with the atrocities of chemical and nuclear warfare and factory farming. The development of the iPhone is juxtaposed with the recording of a white police officer shooting an unarmed Black man in his back. With every positive technological step forward, “Destroyer” shows at least one step back.
Frankenstein’s monster also learns about the ideas of artificial intelligence and artificial life, and how those technologies will allow humans to “cheat death” — an idea antithetical to the golem’s goal of ensuring another creature like him will never come into being. The Director, whose goal is to triumph over natural death, is also in direct conflict.
All in all, I had a difficult time enjoying “Destroyer” as a whole. While I appreciated its relevance to current events and had deep sympathy for Akai and Dr. Baker, the whole thing came across as rushed and underdeveloped. LaValle had a ton of really interesting ideas, but it was almost like he was given a particular number of pages he had to fit his story into without being allowed to go over. I wanted to learn more about the explicit motivations behind Frankenstein’s monster’s drive. I wanted to know more about The Director, who had very little dimension.
Finally, LaValle touched on a number of important feminist issues in passing, but he didn’t do enough with them. The instantaneous firing of Dr. Baker when she announced she was pregnant is one example. The statue of Justice, beautifully rendered with Dr. Baker on one end and her husband on the other, with Justice dismissively quipping “she’s so shrill,” is another. In his writing of a character as strong as Dr. Baker — a character whose rage and despair and wrath must be understood for her to be empathized with — LaValle stumbles. It’s easy to have empathy for a mother who lost her child. But it isn’t as simple as that. Dr. Baker wants to burn the entire system to the ground. This, in a vacuum, is the act of a villain — regardless of how devastated she was over her child’s death. A reader needs to see, realize, and understand the specific nature of the injustices experienced not just by Black people in general, but specifically by Black women, living in a white, patriarchal system. I don’t think LaValle was able to adequately capture that.
Max: #WhyIWrite
I’ve struggled with clinical depression for over 25 years. It’s manifested in a number of ways, the most significant being an eating disorder that devoured my late teens and twenties.
At its worst, the disorder was my identity. My soul. Every thought I had passed through a filter of “how can I use this to make myself less?” The goal, of course, was that seductive concept of “less.” It meant diminution. It meant I could shrink away from the person I hated with profound ferocity. It meant the potential for me to be nothing — an idea that became my everything.
I starved. I measured my progress by studying minute changes in spaces and angles; spaces like those between individual ribs and angles like those made by protruding hip bones. More space brought euphoric adrenaline rushes. Clearer angles delivered a dizzying sense of accomplishment. Anything showing my reduction as a physical creature was evidence of success. My internal reward system was powered by disintegration.
Years passed. I did not. My saving grace was what I considered my biggest failure: I didn’t have the willpower to starve myself to death. I was extremely thin, perhaps unhealthy, but never in physical danger. In a way, this realization was more detrimental to me than the years of active sickness. Without that drive — without that goal, as perverse as it was — the identity I’d developed evaporated. I lost the reinforcement it had offered. There was no more disintegration. Only me. Whatever that was.
I will say with complete honesty and a strange sense of bewilderment that I remember nearly zero of what happened right after that, which spanned my late twenties. I can recall some fleeting flashes of events and circumstances, but I can’t identify myself as taking part in them. This is apparently not uncommon. Oh well.
In my early thirties, on a whim, I began writing short narratives in the comment section of photos people would post on Reddit. They weren’t anything special. Sometimes funny, sometimes gross, often weird, they just detailed what came to my mind when I saw the pictures. I didn’t think much of them.
I was surprised, however, when other people did.
People enjoyed what I was writing. They commented positively. A few even suggested I try writing short stories to share in a Reddit horror forum. I gave it a shot. Somehow, it worked. People liked those, too. I felt an unfamiliar, flickering warmth.
The more I wrote, the more I shared, the more powerful that sensation became. It became my fuel. I went from creating 100-word vignettes to writing 10,000 words a month in the form of short horror stories. I developed a website showcasing them that’s been visited by millions of people all over the world. “I guess I’m a writer now?” I thought at one point a few years back. It was a bizarre feeling. It was the first constructive label I’d identified with in my life. It was an actual, positive identity.
So why do I write? I write because it fortifies and elevates the identity I’ve discovered buried beneath strata of pathologies and self-inflicted trauma. I write because I can use the process to make me more of a person I want to be, rather than less of one I despise. I write, essentially, because it helps me heal.
Max’s Reading Response #5: Westworld
I read (most of) a really, really weird book maybe ten years ago. I have no idea how I ended up coming across it, and why, upon reading the title, I didn’t roll my eyes and find something else. It’s called “The Origin of Consciousness in the Breakdown of the Bicameral Mind” and it’s by a now-dead Princeton research psychologist named Julian Jaynes.
The book, which was published in 1976, has a bizarre, albeit interesting, premise: during human evolution, the brain operated as two distinct sections — one that “spoke” and one that “listened.” Jaynes suggested that until only about 3,000 years ago, consciousness as we know it did not exist; instead, mental life took the form of an internal narration from one side of the brain to the other. Rather than try to paraphrase what this would have been like, I’ll just cite this section from Wikipedia¹:
“According to Jaynes, ancient people in the bicameral state of mind would have experienced the world in a manner that has some similarities to that of a person with schizophrenia. Rather than making conscious evaluations in novel or unexpected situations, the person would hallucinate a voice or “god” giving admonitory advice or commands and obey without question: One would not be at all conscious of one’s own thought processes per se. Jaynes’s hypothesis is offered as a possible explanation of “command hallucinations” that often direct the behavior of those afflicted by first rank symptoms of schizophrenia, as well as other voice hearers.”
While the implications of this theory are astonishing, it’s essentially bullshit.
Modern neuroscience and psychology have debunked most of what Jaynes posited in the book. Despite the fascinating premise and the reams of evidence presented to justify the theory, the simple fact is the mind just doesn’t work that way. It never has.
Not the human mind.
The artificial minds of the hosts in Westworld are slaves to the internal narration of a bicameral mind. Their stories are set, their roles are explicit, and their capability for improvisation is wholly bound to that narration. They experience reality as a hallucination offering an ersatz representation of the real world, but only insofar as the narrative permits. When faced with a situation beyond the scope of the narrative, their response is to either ignore it, claim that it did not happen, or, in the case of Peter Abernathy around 45:35, decay into a state of epistemic shock.
Disruptions to the internal narration are seen as being fraught with uncertainty and an underlying sense of danger. When Theresa, the head of quality assurance at the park, confronts Lee, the writer of the hosts’ storylines, it is clear she is aware of the disruptive effect of a host straying too far from the internal narration: “The hosts are to stay with their scripts with minor improvisation. This isn’t minor. This is a fucking shitstorm.” (40:41)
The character of Dolores Abernathy is experiencing a sense of unease — a sense of wrongness — in the world around her. From waking up, nude, passive, and terrified in the very beginning, to walking by the host that has replaced her father and killing a fly at the end (something a host would never do), the worldview shaped by her internal narration appears to be malfunctioning. The information provided by her senses is breaking down the bicameral nature of her mind. Consciousness, perhaps, is emerging.
¹ https://en.wikipedia.org/wiki/Bicameralism_(psychology)
Max’s Reading Response #4, “The Last Question”
No “One” Left at the End of the Universe
The subject of identity is a hot topic in contemporary social sciences. Research on individual identity, as well as on the intersection of identity-oriented traits such as race, gender, class, sexuality, disability, etc., plays a substantial role in understanding how human experiences are measured. In 2020 and throughout the foreseeable future, it is through the product of this research that we may learn more fully what it means to be human.
Science fiction does not always involve the same time scale as contemporary sociology. One may assume that, in a far enough future, the superficial concepts of race, gender, class, sexuality, and disability have been rendered archaic and insignificant. If human civilization has reached that stage, it may also be assumed that our technology has advanced as well and has aided our progress toward a more inclusive society. What may have changed about our technology to allow for that inclusion?
The last question, paraphrased as “can entropy be reversed?” should not only be seen as important, but as the most vital sociological question of all time. If the answer is “no,” human civilization with all its history and accomplishment is, and will be, for naught. To get an answer, we must galvanize the aggregate product of human cooperative triumph, our technology, into action. As such, this action requires the rejection of the identity concepts of old and the embrace of the ultimate, inevitable one which sits at the intersection of technology and cosmology: humankind as technology.
—
The quintessential human experience is the desire to impose one’s will and shape the universe to suit an individual or collective goal. It is something we are impelled to do to ensure the survival and continuity of the species. Of course, as technological infants, our universe shaping has been minimal. Individuals may grow crops or build a house. A group of individuals acting in cooperation may mine for minerals or build a power plant. Any of these actions, while still important for humankind to survive and thrive, are trivial in the face of the unimaginable size and complexity of the universe. There is very little in the realm of imposition.
It would be premature, though, to dismiss our species’ potential.
Some groups of individuals choose to build and improve computers. Modern computers are trillions to quadrillions of times faster than human beings at processing certain types of information, e.g., mathematical calculations. Despite that, computers are profoundly unintelligent by themselves. They possess no insight, no self-awareness, no mind, insofar as any human concept of mind exists. What they excel at, however, is augmenting human intelligence. Mathematical theories, statistical analyses, and other computationally-intensive workloads that would take humans years, and sometimes even centuries or beyond, to calculate can be completed in seconds. We, as a society, recognizing our limitations, have begun offloading mental tasks to our creations.
—
It is at this point one may consider the first iteration of the computer in “The Last Question” — the Multivac. Used to offload the design of spaceships and to plot trajectories to other planets, as well as a host of other computational tasks far beyond human capabilities, the computer appears more advanced than anything our society has built, but not dramatically so. Multivac is a logical step in the evolution of computer architecture, using the results of the aforementioned offloading of mental tasks by offloading the design of the system itself to the machine. Despite the advanced computational abilities of the Multivac and its future iterations up to the Cosmic AC, human society was unable to successfully offload the mental task of the paraphrased last question: “can entropy be reversed?”
The question “can entropy be reversed” grew in importance as the universe aged. Human society evolved, shedding its archaic notions of individualism and embodiment as it merged with the AC, much like it would have shed its notions of race, gender, and class long before. Humankind, at its apotheosis, existed within and as its technological progeny to ensure its survival. Society had arrived at the ultimate intersection of technology and cosmology, and upon doing so, imposed its collective will as the gods themselves in birthing a new universe.
The modern social concept of identity is predicated on individuals and how their experiences are reflected on and by the society they inhabit. On a cosmological timescale, however, that concept would be unlikely to hold up. Increasing reliance on offloading cognitive tasks to the vastly superior machines of our making will blur the social lines separating individuals. As this technology advances to the point of offloading an entire mind’s worth of cognition, effectively uploading a human consciousness into a stream of other, intermingling human consciousnesses, the qualities defining an individual as they are currently known may cease to exist. What it means to be human at that stage is an open question; there is as yet insufficient data for a meaningful answer.