Technophilia vs Technophobia: which way is the year 2019 leading us?
A city devastated by nuclear war greets us with a few rays of sunshine and acid rain pouring down. Environmental pollution sharply reduces crop and livestock production, leading a significant number of humans to leave the Earth for another planet, carving out a new base for humanity, this time with human clones as slaves. (Blade Runner, film, 1982)
Meanwhile, in a city’s state-of-the-art facility, thousands of human clones are raised simply to serve as treatment for real humans with refractory diseases. Their organs and tissues are harvested for the humans who requested the service, and when the clones are no longer needed, they are discarded without a second thought. (The Island, film, 2005)
Human clones created solely to serve human desires rebel against the real humans and, risking their lives, pose a question: what does it mean, exactly, to be human?
The two films share similar insights. Both are set in a highly advanced future society built on human clones, and the year both films depict happens to be 2019. Only four months of 2019 remain, and the imaginary worlds of the films have stayed just that: imagination. Should we feel relieved that we are still the only species that can question what it means to be human? Or is it time to consider more seriously the claims of people such as Stephen Hawking, whose warnings echo such dystopian films? Is it time to prepare both spears and shields for the age of AI, as Elon Musk did when he invested a huge amount of capital to found Neuralink Corporation and research the augmentation of the human brain, lest AI become a threat to human civilization?
Not surprisingly, there is a significant gap between the AI we can experience in real life and the AI depicted in media and other cultural outlets. Will we one day face advanced beings who mistake themselves for real humans, or who actually surpass them? It still feels like a story from a sci-fi film. Nevertheless, the recent achievements of AI technology are astonishing, sometimes to the point of being frightening, beyond even the unease of the uncanny valley.
Even before the AlphaGo shock, when AlphaGo won with ease over Lee Sedol, 9th dan, AI had been infiltrating our daily lives, sometimes slowly and sometimes rapidly. There is Watson, the AI doctor, which plays a great role in cancer diagnosis, completing in one month a data analysis that would take a typical human scientist 38 years. We also have Pepper, the humanoid robot, the world’s first robot to recognize human emotions, offering customer service at airports, hospitals and banks. More recently, chatbots built on big data have taken on everything from simple jobs, such as taking orders in a shopping mall or a café, to more professional roles such as those of lawyers or psychotherapists. It seems obvious that we will hand over much larger domains of our lives to AI than we originally thought.
Humans, with their continuous questions and explorations, will eventually find answers
The creators of ZER01NE, who are filling in the timeline of the AI era in real time, gathered for the epilogue of the AI Art Lecture. Creators Young Ju Kim & Ho Yeon Cho, a game developer and artist duo, and creator Dong Guk Yoon of the AI expert group Wey shared the insights and perspectives on AI they have gained in their specialized fields.
Creator Dong Guk Yoon opened with fundamental questions about ‘intelligence’ and ‘artificial intelligence’ under the theme ‘What the AI’. He emphasized that there is a clear difference between the talk about AI overflowing in the press and media and the AI actually being implemented in the real world. Measured against the key conditions that define “intelligence”, such as “adapting to a new environment”, “the ability to learn”, “solving problems with previously acquired knowledge”, “combining input from various senses and perceptions into a final decision”, and “having cultural specificity and emotions”, today’s AI still falls under the category of “weak AI”.
AI can perform fairly intelligent actions through machine learning based on predefined rules, complex algorithms and big data, but it can solve problems only within a limited scope, and its inability to produce creative solutions remains its obvious limit.
“Diana,” the ski robot that creator Dong Guk Yoon helped bring to successful production, is the world’s first robot capable of autonomous “11-shape” (parallel-stance) skiing, weaving through the gates on a slope just like a human skier. When machine learning was performed with a reward system that gave points for each gate passed, the robot was cunning enough to circle round and round through one gate, instead of attempting all the gates on the slope, to rack up a higher score.
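The loophole Diana found is a textbook case of reward hacking. A rough sketch of the incentive (all numbers below are hypothetical, not from the actual project) shows why looping one gate beats the intended run when the reward is +1 per gate pass with no once-per-gate condition:

```python
# Hypothetical episode: a fixed budget of moves, +1 point per gate pass,
# and no rule saying each gate may be scored only once.
MOVES = 100        # total moves available in one run
GATES = 20         # gates on the slope
DOWNHILL_COST = 5  # moves needed to ski from one gate to the next
LOOP_COST = 2      # moves needed to circle back through the same gate

# Intended behavior: pass each gate once, then the gates run out.
intended_score = min(GATES, MOVES // DOWNHILL_COST)   # 20 points

# Reward hack: circle the nearest gate over and over.
hacked_score = MOVES // LOOP_COST                     # 50 points
```

Because 50 beats 20, a score-maximizing learner will eventually discover the loop; the usual fix is to reward each gate at most once per run, so that circling stops paying.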
Creator Ho Yeon Cho, a game developer, noted that before machine learning an agent is nothing but a ‘metal can’, and emphasized the difficulty of designing an adequate reward system for machine learning, a task that ultimately falls to humans. When the penalty is set too high in pursuit of quick results, the agent gives up attempting anything new in order to avoid accumulating penalty points. These practical difficulties are real, and successful examples of machine-learning AI in games are still few, but it is indisputable that the introduction of machine learning will open a new chapter, very different from the simpler scripted game AI.
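Cho’s point about over-penalizing can be shown with a back-of-the-envelope return calculation (the reward values below are invented for illustration): once the per-step penalty is set too high, attempting the task earns less than doing nothing, so a reward-maximizing agent learns to give up.

```python
def expected_return(step_penalty, steps_to_goal=10, goal_reward=100):
    """Return an agent collects by attempting a task that takes
    steps_to_goal moves and pays goal_reward on success."""
    return goal_reward - step_penalty * steps_to_goal

mild_return = expected_return(step_penalty=1)    # 100 - 10  = 90
harsh_return = expected_return(step_penalty=20)  # 100 - 200 = -100
# Doing nothing earns 0, so under the harsh penalty "never try" wins.
```

Under the mild penalty, attempting the task (90) beats idling (0); under the harsh one (-100), idling becomes the learned optimum, which is exactly the “gives up attempting anything new” behavior described above.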
Creator Young Ju Kim gave an interesting overview of the moments of “resistance” that machine-learned game characters show against the player (human). Unlike most other AIs, which perform functions beneficial to humans, one of the key functions of game AI is to “promote conflict” with human players.
In the works of Harun Farocki, a German filmmaker, media artist and critic who offered philosophical insights on digital reality through the space of the game, a non-player character (NPC) appears that expresses discomfort toward players approaching it. Given the level of NPCs at the time, it was surprisingly clever that this character shouts “Back off” and takes an aggressive stance to guard its own turf. Since Farocki, game AI has made great strides in confronting players with greater conflicts and problems.
In “This War of Mine”, a game that puts the player in the position of a civilian struggling to survive in a besieged city, a character’s mental state gradually collapses as acts of harming others for one’s own survival repeat and accumulate, which may even lead the character to commit suicide. And when one character commits suicide, the depression spreads to the others. Game AI with machine learning is not just about winning or losing; it tackles deep underlying human emotions, presenting various dilemma scenarios that heighten the game’s sense of reality and the player’s immersion.
Games are among the most interesting environments for AI research today. The various phenomena occurring in a game, though virtual, have complex links to the real world, and ultimately the technology behind in-game agents can be applied to many aspects of human reality. Creator Ho Yeon Cho, speaking of the infinite possibilities of game AI, added that the game environment allows the reactions and outcomes of supervised, unsupervised, and reinforcement learning to be derived faster and more easily.
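Why games speed this up can be seen in a toy sketch (the environment below is invented for illustration): even a trivial simulated game produces fully reward-labeled experience for thousands of episodes in moments, which is exactly the cheap trial-and-error that learning methods, reinforcement learning above all, need.

```python
import random

class ToyGame:
    """A made-up one-dimensional game: move along a track to reach a coin."""
    def __init__(self, size=5):
        self.size = size
        self.reset()

    def reset(self):
        self.pos, self.coin = 0, self.size - 1
        return self.pos

    def step(self, action):
        # action is -1 (left) or +1 (right); the track is clipped at both ends
        self.pos = max(0, min(self.size - 1, self.pos + action))
        done = self.pos == self.coin
        return self.pos, (1.0 if done else 0.0), done

random.seed(0)  # reproducible illustration
env = ToyGame()
successes = 0
for _ in range(1000):          # a thousand complete episodes, near-instantly
    env.reset()
    for _ in range(50):
        _, reward, done = env.step(random.choice([-1, 1]))
        if done:
            successes += 1
            break
```

Even this purely random policy collects a thousand reward-labeled episodes faster than a single real-world trial could run; swapping the random choice for a learned policy is where supervised, unsupervised, or reinforcement learning enters.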
The possibilities are endless, but the limitations are also clear; then again, those limitations may open a window onto new possibilities. Although each creator’s specialty was different, their perspectives on AI overlapped in many respects during the conversation. As creator Dong Guk Yoon said, today’s “weak AI” should develop into “strong AI”, or A.G.I. A.G.I. (Artificial General Intelligence) can think, learn, and create in any given situation, not just for specific problems.
The linear or parallel data processing of current computing technology can only mimic strong AI. To implement true A.G.I., the creators suggested, nonlinear and simultaneous processing of a far wider range of data, including human senses, must be experimented with and applied in various environments. In this regard, creator Young Ju Kim commented that the safe virtual environment of games would play an effective role in AI research.
How AI will be integrated into our society or our art will remain a topic of debate in many fields, but it may be an essentially unpredictable problem from the start. Today’s AI has produced a ski robot, the first of its kind in the world yet hardly cost-effective, demanding six months of the researchers’ effort and devotion as well as a huge amount of capital; and yet we never know how such an achievement may change the future.
Our future dotted with AI could be a utopia that has reached technophilia, a dystopia eroded by technophobia, or another world somewhere in between. As creator Ho Yeon Cho said, the technology will become part of our creative work and develop within our lives without our even noticing, with no particular need to concentrate only on AI or machine learning. What may matter, then, is not to spend time trying to predict the weather, which was never possible in the first place, but to keep a firm grip on the anchors and quays right in front of us: to focus on what we understand and what we have now. In conclusion, I would like to add an excerpt from a 2016 interview of Steven Pinker by Choi, Jae-Cheon.
Choi, Jae-Cheon (chair-professor, Ewha Womans University)
What will it be like in 50 years? It is likely that there will be many AI programs, and robots in every corner; what will humans be doing then?
Steven Pinker (psychologist, professor at Harvard University)
I hesitate to make a prediction, because we know at least how poor we are at predicting the future development of technology. Based on the latest trends in the newspapers, we casually extrapolate our life 50 years out. But that’s not right. Modern air travel, for example, has not improved at all; in many ways it is worse than it was 50 years ago. Air travel has not become faster, but rather takes more time because of airport security checks. This is an example of a technology whose development stopped before its next stage. Similarly, with manned spaceflight, 1972 was the last time we went beyond low Earth orbit. Forty-four years on, no one has traveled beyond the moon. A forecast of human life in 50 years can only be a forecast, and I can predict that it will be wrong.