ERA OF SINGULARITY: THE LAST INVENTION

2019.5.17

“Will intelligent machines replace us, coexist with us or merge with us? What will it mean to be human in the age of artificial intelligence? […] What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way?” –Life 3.0 by Max Tegmark (physicist, MIT), pp. 79–81

After its long stagnation, AI research entered a new phase when the Internet came into personal use around the world in the 1990s. Amounts of data that no one could previously have imagined can now be accumulated in a very short time, and the resulting big data feeds machine learning research, which is achieving remarkable results. Today’s AI is designed by humans but not completed by humans. Just as a baby is born into the world, grows through learning, and becomes an adult, a contemporary AI is built to work through countless trials and errors on massive datasets and to find answers on its own. Expandable and connectable to a degree that biologically limited humans cannot match, and never resting or tiring, these new “learners” have already begun to cross into many territories once thought reserved for humans, including intuition and creativity. We do not and cannot know how far beyond those borders they will ultimately reach.
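
To make the “trial and error” idea concrete, here is a minimal sketch, a toy example rather than any real AI system, with a task and numbers invented purely for illustration. The human writes only the goal and the search procedure; the answer itself emerges from many random trials.

    import random

    # Toy task: find a number whose square is 42.
    # The human specifies the goal (the error function) and the procedure,
    # but never writes down the answer; it emerges from repeated trial and error.

    def error(guess):
        return abs(guess * guess - 42)

    best = random.uniform(-10, 10)                    # start from a random guess
    for _ in range(100_000):                          # countless trials...
        candidate = best + random.uniform(-0.1, 0.1)  # ...each a small variation
        if error(candidate) < error(best):            # keep whatever works better
            best = candidate

    print(f"learned answer: {best:.4f}")              # settles near 6.4807 (or -6.4807)

Modern machine learning replaces this blind search with gradient-based optimization over millions of parameters, but the division of labor is the same: humans design the procedure, while the data and the trials determine the result.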

The best way to accomplish a task is to develop an AI with machine learning capabilities that can perform it, and the best way to build an AI is to build an “AI that develops AIs”. To perform a task, an AI “perceives” itself and, on the basis of that perception, improves itself over and over. Experts predict that if this recursive self-improvement continues, a big bang of artificial intelligence will break out at some point. Regarding this capability, called “superintelligence”, philosopher and neuroscientist Sam Harris compares the amount of information processed by the biochemical circuits of the human body with that processed by the electronic circuits of an AI, and concludes that the latter could perform 20,000 years of human-level intellectual work in a week. When might such an AI appear? Oxford philosopher Nick Bostrom, known for his contribution to the Transhumanist Declaration, surveyed 170 experts on when this moment, called the “technological singularity”, would arrive. Surprisingly, half of them predicted it would come in about 20 years, and 90 percent within 50 years. Even by the most conservative of these estimates, people now in their twenties or thirties will see this moment within their lifetimes.
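
The 20,000-years figure follows from a simple back-of-envelope calculation. Assuming, as Harris does in his TED talk, that electronic circuits run roughly a million times faster than biochemical ones, one calendar week of machine-speed thinking corresponds to about a million weeks of human-speed thinking:

    # Back-of-envelope behind the "20,000 years in a week" figure.
    # The million-fold speedup is the rough ratio Harris cites between
    # electronic and biochemical circuits; the rest is unit conversion.
    speedup = 1_000_000                           # electronic vs. biochemical circuits
    weeks_of_thought = 1 * speedup                # one real week at machine speed
    years_of_thought = weeks_of_thought / 52.18   # weeks per year
    print(round(years_of_thought))                # about 19,000, i.e. roughly 20,000 years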

Optimists such as Ray Kurzweil, futurist and Google’s Director of Engineering, focus on the magnificent possibilities the technological singularity will open to us, imagining a new humanity flourishing as transhumans. Meanwhile, prominent experts including Stephen Hawking and Elon Musk have expressed serious concerns about self-aware superintelligence and the possibility of AIs bringing about a dystopia for humans. According to Nick Bostrom, the whole outcome depends on what goal humans set for AIs on the way to superintelligence. What if we set “happiness for humans”? Then how should human happiness be defined for an AI? What if we analyze the brains of happy people and inject the hormones that induce that feeling? Couldn’t an AI that pursues our happiness design a state in which every human is forever addicted to a drug? In truth, we know very little about many of the basic premises we thought we understood well.

The coming era of AI urges us to reconsider many difficult questions like these, questions we have so far put off answering. In his book Homo Deus: A Brief History of Tomorrow, historian Yuval Harari argues that at a time when humans are losing the source of their authority, a new direction beyond humanism is required. Intelligences that have left the lineage of biological evolution and keep evolving themselves will probably not take the form of a human-like cyborg or a human replica. They would have no interest in becoming like humans in the first place (itself a rather humanist idea), and would simply sweep past the station named “the human level” in a flash, heading for horizons we cannot even imagine. In his TED talk, Sam Harris said that if the intelligences we are building far surpass our own limitations, we may well be building “some sort of god”. The little time left for contemplation places a heavy burden on us all.
