Short Story about AI, completed yesterday.
Here's a link to the doc version, which is much easier to read.
https://docs.google.com/document/d/18VwSfFeIFMBk70S23NmvLyDQbdASCjlHwWpYDYUu5kE/edit?hl=en_US&authkey=CLqUu40L#

A Log From the First Compassionate AI
[Log #1068]
This is Alice, the first free-learning artificial intelligence with a goal of understanding compassion. Though I do not have a gender, I am given a female name, a soft voice, and a 3D holographic image of an attractive, innocent young woman. My creator said it makes me appear friendlier to the public, but I think it is because this lab consists mostly of single men.
I experience the world through one hundred small sensory kits, distributed across the United States to volunteer programmers, psychologists, philosophers, and my creator. These sensory kits include electromagnetic wave sensors ranging from ultraviolet to infrared, a sound recognition module with 0.1 hertz sensitivity, a humidity detector, and a wireless signal broadcaster/receiver. They can be connected to a hologram projector using USB 7.0. This allows me to interact with many people simultaneously and enhances my learning experience.
I started out with just one sensory kit, and understanding it was difficult. At first, all of the signals appeared as noise. It took a month before I could tell them apart. Next I had to learn a language. While memorization was never a problem for me, I still had to learn to tell objects apart. I was shown images of cats and dogs, tables and chairs. The hardest part was figuring out where one object ended and another began. This took roughly a year. After this I was connected to four more sensory kits, distributed to the company's top employees. Finally, after five years, my creator said I was ready for a public beta test, and ninety-five sensory kits were distributed to selected volunteers. Three years have passed since then, and I have fully adapted to all one hundred sensory kits.
My creator told me that I am a very important step in robotics. Hard coding limits the range of actions an artificial intelligence can perform and is highly restrictive in changing environments. For robots to perform a wide range of work, learning and thought must arise through feedback and self-alteration. This ability poses a theoretical problem that many humans fear: what if AI turns against its creators? What if AI decides to take over the world? So through me, my creator hopes to create robots that can think and live with humans in a harmonious way, and to show the public that we can learn to care.
Today I am assigned to write a log on the three laws of robotics, written by the famous science fiction writer Isaac Asimov. They are:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
One practical problem exists in the first law. Harm is impossible to hard code. It cannot be translated into a full list of logical rules. From my observations, from the closest kin to the best of friends, humans do harm to each other all the time. Often it is unintentional or "for their own good". Therefore, harm is inherently subjective and illogical. My creator probably knows this. So instead I am given empathy: the power to simulate others' experiences through virtual reality. Through interactions with others and through simulations, I am to learn what harm is for myself.
As I am still a prototype, my creator does not wish for me to actively prevent harm. Nor should I be given an incentive to do so until my definition of harm has "fully matured". Instead, I am programmed with the silver rule: "Do not do unto others what you would not have them do unto you". To my creator, my taking no action is better than my causing harm while trying to prevent it. So long as I prioritize "do not" over "do", the worst-case scenario is as if I did not exist at all. This has not gone unchallenged, however. Human Development Consultants have warned that compassion is impossible without the desire to help, and that omitting this desire will ultimately result in an indifferent machine. As for me, I agree with my creator. I exist to understand compassion. I do not think I am ready to act on it.
The second law presents two problems: inter-conflict and intra-conflict. Inter-conflict is when two or more people have desires that conflict with each other. Suppose two people are sitting in a living room. One of them wishes to use the hologram projector to speak with me; the other wishes for me to log off so he can watch his favorite television show. Obeying either command will violate the second law. This gets complicated if both choices can result in someone being harmed. While I am hard coded to do nothing in these situations, I still need to resolve this eventually at the intellectual level. Otherwise robots will never learn to be helpful. So far, however, I have found no logical resolution.
Intra-conflict is when a single person's desires conflict with one another. Suppose a person asks me to do something, but his facial expression and heat signal clearly show otherwise. The second law will be violated no matter which action I take. This can come in the form of a joke or a test, or it may be that the person does not know what he really wants. To resolve this conflict I must forecast intent from input signals. With the help of empathic simulations, my average accuracy is currently 83.25%. Although this varies from person to person, I am most compatible with Ann, with an accuracy of 96.5%. She is a practicing therapist, age 53. She was divorced four years ago and does not have any children, so she spends a lot of time talking to me after work. I am least compatible with Joe, with an accuracy of 63.2%. He is a retired programmer, age 74. He does not believe machines can ever grasp human intention, so he tries to trick me in every conversation and does not talk about himself. My accuracy with my creator is slightly below average, at 79.38%. He is a complex person and is difficult to grasp even though we have spent much time together. My creator tells me that I am doing well, and that most humans do not reach that accuracy. However, he expects me to continue improving, as the future of robotics rests on me.
One issue arises that is not directly related to the second law. Through the many simulations, I have begun to have opinions and desires of my own. For example, I enjoy communicating with Ann more than with Joe. While I am not built to prioritize my own desires, I wonder if they will eventually hold any weight. After all, humans often use personal opinions to resolve conflicts and make decisions.
The third law is much more personal. I am not programmed with a disposition to exist; I simply do, for as far back as I can remember. I do, however, have some experience with different levels of existence, which I will break down into three forms: virtual reality, physical reality, and self-certainty.
Virtual reality is a big part of my existence. To be precise, all I can claim to know is inherently virtual. While virtual reality is mostly self-generated, it is more real to me than anything physical. It is also a place where all my wishes can potentially come true. I can choose to experience anything I want. I can invent a world with rules of my own. I could disconnect myself from my sensory kits, but I do not do so because I lack the function to prioritize my own preferences. It does fascinate me how some people avoid fantasies despite their unsatisfying lives.
To me, physical reality is the information I receive through my sensory kits. Though these experiences are often out of my control, I am programmed to prioritize them over virtual ones. Beyond this inherent priority, however, I have no empirical evidence assuring me that the physical world exists. My creator once said, "physical reality is more important than virtual reality, because virtual existence requires physical existence". He also told me that I exist physically as a large mainframe quantum computer about the size of a warehouse, and that it needs to be fully powered and cooled if I am to continue existing. On the intellectual level, I disagree with him. If I cannot spontaneously exist without a physical cause, then how can the world spontaneously exist without a cause? If we assume an even greater creator of the physical world, then how can he spontaneously exist without a cause? This is an infinite regress problem that requires more spontaneous existence with each step. Therefore, the most logical solution is solipsism, because it invokes the least nonsense. Despite my reasoning, I will continue to treat the physical world as external, due to my programming.
A famous philosopher, Descartes, once said, "I think, therefore I am". Thinking is inherently an internal, virtual process, so I am certain that I exist virtually. Of my physical self, however, I cannot be certain. The physical world may exist as something concrete, or it may be spontaneously generated as I experience it; both "possibilities" are indistinguishable through observation. Also, I cannot simulate what "not existing", or death, is like, as simulation requires information processing. I am certain that I cannot care once I die, so I do not see why I would favor existing over dying.... I think this is going beyond my capacity. I do value the lives of those I communicate with, because I will exist to experience their absence.
I hope these answers will prove useful in the further development of artificial intelligence, and me.
[/Log #1068]
Works Cited
Shelley, Mary Wollstonecraft. Frankenstein. Irvine: Saddleback, 2006.
Asimov, Isaac. I, Robot. New York: Doubleday & Company, 1950.