Why AI Is More Important Than Ever

I. Overview
We are studying AI more actively right now for three main reasons: the potential applications it promises, the attention it has received from the media and the general public, and the unprecedented amount of funding investors are devoting to it.
Machine learning is being quickly commoditized, and this encourages a deeper democratization of intelligence, although this holds only for low-order knowledge. If, on the one hand, a large set of services and tools is now available to end users, on the other hand real power is concentrating in the hands of the few major incumbents that have the data and the computational resources to exploit AI at a higher level.
II. Two AI Issues
Apart from this technological polarization, the main problems the sector is experiencing fall into two key branches. The first is a pair of misalignments: i) long-term AGI research being sacrificed for short-term business applications, and ii) the gap between what AI can actually do and what people think or assume it does. Both issues stem from the deep technical knowledge required to understand the field, and both feed the hype around AI. Part of that hype is clearly justified, because AI has proved useful in processes that were historically hard to automate because they demand some degree of domain expertise.
The second branch concerns the tight relationship between machines and humans, and how the two interact. We have been witnessing an enormous cultural shift over the last few years: humans were originally in charge of acting, while machines served as safeguards against unwanted scenarios. Nowadays the roles have been inverted, and machines are often in charge while humans simply monitor them.
Even more important, this relationship is changing our very nature: the common belief is that machines are making humans more machine-like even as humans try to make computers more human-like, but some thinkers view this cross-pollination as a way for humans to become even more human (Floridi, 2014). The one point that seems commonly accepted is that, in order to shorten the AI adoption cycle, we should learn not to trust our intuition all the time, and let machines change us, whether in a more human or a more mechanical direction.
III. How Does AI Compare to Humans?
The natural question everyone is asking is: where do machines stand with respect to humans? The reality is that we are still far from the point at which a superintelligence will exceed human intelligence, the so-called Singularity (Vinge, 1993). The futurist Raymond Kurzweil proposed in 1999 the law of accelerating returns, which envisages an exponential rate of technological change driven by the falling cost of chips and their growing computational capacity. In his view, human progress is S-shaped, with inflection points corresponding to the most significant technological advances, and thus proceeds by jumps rather than as a smooth, uniform climb.
Kurzweil also borrowed Moore’s law to estimate the year of the singularity: the human brain is capable of roughly 10¹⁶ calculations per second (cps) and holds about 10¹³ bits of memory, and, assuming Moore’s law continues to hold, he computed that we will reach an AGI with those capabilities in 2030, and the singularity in 2045.
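To make the arithmetic behind this kind of extrapolation concrete, here is a minimal sketch of the calculation. The baseline year, the starting machine capacity, and the doubling period are illustrative assumptions of mine, not Kurzweil's exact inputs; only the 10¹⁶ cps brain estimate comes from the text.

```python
import math

# Back-of-the-envelope Moore's-law extrapolation in the spirit of
# Kurzweil's estimate. All inputs except BRAIN_CPS are illustrative
# assumptions, not figures from the text.
BRAIN_CPS = 1e16        # human-brain capacity cited above (cps)
START_YEAR = 2015       # assumed baseline year
START_CPS = 1e13        # assumed machine capacity at the baseline
DOUBLING_YEARS = 1.5    # assumed doubling period (18 months)

# Solve START_CPS * 2**((year - START_YEAR) / DOUBLING_YEARS) = BRAIN_CPS
doublings = math.log2(BRAIN_CPS / START_CPS)
crossover = START_YEAR + doublings * DOUBLING_YEARS
print(f"{doublings:.1f} doublings -> parity around {crossover:.0f}")
# With these assumptions: ~10 doublings, parity around 2030.
```

The point is not the specific year, which moves with every assumption, but that the exponential form makes any fixed capability threshold reachable within a few decades.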
I believe, though, that this is quite an optimistic view, because the intelligence that today's machines possess is still only partial. They have no common sense, no notion of what an object is, no memory of earlier failed attempts, and no consciousness. The last point is the essence of the so-called “Chinese room” argument: even if a machine can perfectly translate Chinese to English and vice versa, it does not really understand the content of the conversation.
On the other hand, machines solve problems through structured thinking, and they have more storage, more reliable memory, and greater raw computational power. Humans, instead, aim to be more efficient: they select ex ante the data that might be relevant (at the risk of losing some important information), they are creative and innovative, they extract essential information better and faster from only a few instances, and they can transfer and apply that knowledge to unknown cases.
Humans are better generalists and work better in an unsupervised learning environment. Easy, intuitive tasks (what humans do “without thinking”) are almost impossible for a computer, while number-intensive activities (the “hard-thinking” moments for our brain) are spectacularly easy for a machine. In other words, activities essential for survival, which have to be performed without effort, come more easily to humans than to machines.
Much of this is captured by Moravec’s paradox: high-level reasoning requires relatively little computation, and is therefore feasible for a machine as well, while seemingly simple low-level sensorimotor skills demand a gigantic computational effort.
IV. Considerations for Building AI Engines
The considerations made so far are not ends in themselves: they sketch the important design aspects to take into account when building an AI engine. In addition, a few characteristics have emerged as fundamental for progressing toward an AGI: robustness, safety, and hybridization.
Following Russell et al. (2015), an AI has to be verified (acting under formal constraints and conforming to formal specifications); validated (not pursuing unwanted behaviors within those constraints); secure (protected against intentional manipulation by third parties, whether external or internal); and controlled (humans should have ways to re-establish control if needed).
Second, it should be safe in Igor Markov’s sense: an AI should retain key weaknesses; self-replication of software and hardware should be limited; self-repair and self-improvement should be limited; and, finally, access to energy should be limited.
Last, an AI should be created through a hybrid intelligence paradigm, which might be implemented along two different paths: letting the computer do the work and calling in humans only for ambiguous situations, or letting the computer do the work and having humans make the final call in every case. The main difference is that the first path speeds things up by putting machines in charge of deciding (and using humans as feedback), but it requires highly accurate data; a minimal sketch of this path follows.
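The snippet below sketches the first path under my own assumptions: a stand-in classifier that exposes a confidence score, a placeholder review function, and an arbitrary 0.9 escalation threshold, none of which come from the text.

```python
# Minimal human-in-the-loop routing sketch. The classifier, the review
# step, and the threshold are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float
    decided_by: str  # "machine" or "human"

def classify(item: str) -> tuple[str, float]:
    """Stand-in for a real model: returns (label, confidence)."""
    return ("approve", 0.97) if "invoice" in item else ("reject", 0.55)

def ask_human(item: str) -> str:
    """Stand-in for a human review step (e.g., a labeling queue)."""
    return input(f"Please label '{item}' (approve/reject): ").strip()

def decide(item: str, threshold: float = 0.9) -> Decision:
    label, conf = classify(item)
    if conf >= threshold:
        # Confident enough: the machine decides on its own.
        return Decision(label, conf, "machine")
    # Ambiguous case: escalate to a human for the final call,
    # keeping the model's score as feedback.
    return Decision(ask_human(item), conf, "human")

if __name__ == "__main__":
    for item in ["invoice #123", "blurry scan"]:
        print(decide(item))
```

The threshold is the design lever here: raising it routes more cases to humans, trading speed for the data accuracy that the first path depends on.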
My conclusion is that AI is coming, although not as soon as predicted. This AI spring seems different from previous phases of the cycle for the reasons outlined above, and we should dedicate resources and effort to building an AI that will drive us toward an optimistic scenario.
References
Floridi, L. (2014). The Fourth Revolution: How the Infosphere Is Reshaping Human Reality. Oxford: Oxford University Press.
Kurzweil, R. (1999). The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Penguin Books.
Russell, S., Dewey, D., & Tegmark, M. (2015). “Research Priorities for Robust and Beneficial Artificial Intelligence”. AI Magazine, 36(4): 105–114.
Vinge, V. (1993). “The Coming Technological Singularity: How to Survive in the Post-Human Era”. In NASA. Lewis Research Center, Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace: 11–22.