Not so long ago, I caught myself in the midst of a Twitter-like Facebook fight over the need to implement human-like cognitive functions in artificial systems in order to move towards general Artificial Intelligence (AI). I was arguing that consciousness, for instance, is an essential feature to explore from early on if we claim to be able to build smart machines. Traditional data scientists and AI engineers, on the other hand, have little appetite for such high-level implementations; the primary scope of their work is to create faster, more cost-effective and efficient machines.
Captivated by the success of neural networks that flourish with deep learning, new AI start-ups are born every day. But a recent MMC report found that nearly half of these companies do not in fact incorporate any AI algorithms. Instead, their success is a product of the hype around deep learning: they simply attract far more funding by falling under the umbrella of AI. This hype is entirely consistent with today’s capitalist economy as well as the ultimate target of new AI ventures: to provide services or create products that generate massive profit for companies, aka money-making.
To an extent they succeed in both: deep learning has proven itself highly profitable and efficient when processing large databases. It has progressed exponentially and provided fast and simple solutions for very common issues, breaking new ground in image recognition, speech recognition, and machine translation. But what separates humans from machines is our ability to resolve rare problems with limited or no prior knowledge. This capacity is called rule-based learning, and it rests on the idea that distributed systems, like the brain, are pre-equipped with a set of fixed rules which allows them to master complex abilities, like acquiring language, by manipulating symbolic representations. Symbolism explains many experimental findings on how children learn, and some deep learning critics argue that the absence of such symbolic features and rules in modern AI algorithms explains many of its absurd errors and its general incompetence at simulating human-like intelligence.
The success of deep learning, though, is rooted in the theory of connectionism: we are born tabula rasa and we learn by adapting to the stimuli in our environment. Specifically, its success lies in the fact that, unlike symbolism, its core principles allow researchers and engineers to “get their hands dirty”: building well-defined models, simulating a range of simple functions, feeding networks with huge databases and succeeding, to an extent, in proving AI critics wrong.
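The contrast between the two paradigms can be caricatured in a few lines of code. This is a toy sketch with invented examples, not anything from the debate itself: real connectionist models are trained neural networks, not lookup tables, and real symbolic systems are far richer than a single rule. Still, it illustrates why a fixed rule generalises to unseen inputs while pure memorisation of training data struggles with them.

```python
# Toy contrast between symbolic and connectionist-style learning.
# All names, rules, and data here are invented for illustration.

# Symbolic approach: a fixed, hand-written rule manipulates symbols,
# so it applies equally well to words it has never encountered.
def pluralize_symbolic(noun: str) -> str:
    if noun.endswith(("s", "x", "ch", "sh")):
        return noun + "es"
    return noun + "s"

# Connectionist-style caricature: no built-in rules, only associations
# memorised from training data; a novel input falls back to copying
# the ending pattern of the most similar seen example.
TRAINING = {"cat": "cats", "dog": "dogs", "box": "boxes"}

def pluralize_learned(noun: str) -> str:
    if noun in TRAINING:
        return TRAINING[noun]
    # crude "generalisation": pick the training word sharing most letters
    best = max(TRAINING, key=lambda w: len(set(w) & set(noun)))
    suffix = TRAINING[best][len(best):]
    return noun + suffix

print(pluralize_symbolic("wug"))  # the rule handles a novel word: "wugs"
print(pluralize_learned("wug"))   # works here, but only by lucky analogy
```

The novel word "wug" (borrowed from the classic child-language test) is handled by the symbolic rule for free, whereas the memorising learner succeeds only because a vaguely similar training example happens to exist.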
But can a dichotomy in theory explain a system as complex as the brain? Neuroscientist David Marr described three levels of analysis needed to understand complex systems such as a human or an artificial brain: the computational (why), the algorithmic (what), and the implementational (how). In traditional and modern AI research alike, symbolism and connectionism are realised at separate levels: we cannot have symbolic systems without some mathematical realisation, nor connectionist models that do not incorporate some sort of symbolism. This is why some argue that the old-fashioned debate is slowly dying out and the field is gradually moving towards more hybrid approaches.
However, that is not yet the case in mainstream industry: from AI behemoths like Google, Facebook and Amazon to new startups emerging every day, the ultimate goal is to create faster, more competitive and efficient machines that address narrow tasks, rather than to simulate human-like intelligence as a dynamic whole. Similarly, there is no sign of the much-needed CERN-like approach to AI that influential figures in the field have called for, one that would bridge the gap between industrial and academic research. Instead, almost the same issues we were dealing with thirty years ago remain unresolved. The core question, though, is roughly the same: are we aiming for human-like cognition or task-specific intelligence?