Over 30 definitions of intelligence exist,* which shows a lack of consensus on what we mean by intelligence. Trying to recreate an ‘artificial’ version of it before understanding it might be like hoping to hit an unknown, moving target.

The hopes related to AI are, however, well grounded. To the surprise of many, tiny mathematical simulations of constructs from our brains already work remarkably well in some narrow domains, outperforming human-designed algorithms. Computer code that used to resemble a set of instructions is gradually being replaced by small frameworks, called neural networks, that analyze data and learn solutions with little or no need for instructions at all. These changes, which are beginning to revolutionize software, have the potential to change the way we see ourselves by demystifying the last standing proof of our uniqueness as beings – our cognitive strengths.

If we accept “goal-directed adaptive behavior” as the shortest definition of intelligence, we could convincingly brag that we are already living in the age of artificial intelligence. Examples are easy to spot. We have passed the point where computers outperform humans in games such as chess or Go; the race has shifted to computers beating other computers for practice.

It’s not that obvious when the pre-modern meaning of ‘intelligence’ is considered. In Latin it was related more to comprehension and perception, and much less to goal seeking. Examples of AI perception already exist. A phone in your pocket may no longer need a password to unlock; it only needs to recognize your face. It does this mathematically, convincingly calculating that the depth-supported image it perceives is you. By stretching the meaning of ‘to perceive’ a bit, we can say, yes – AI is beginning to see.

If we up the game and define intelligence as the capacity to acquire and apply new skills and knowledge or, to put it more bluntly, “the capacity to acquire capacity,” then the term ‘Artificial Intelligence’ becomes as inadequate as measuring piston-engine output in horsepower is with regard to horses. The only defense for its proponents is to default to the aspirational meaning of the word and the possibility that intelligence might emerge from the complex inner workings of algorithms. The debate is sustained by calling learning algorithms ‘AI’, since the label adds to the hype and, as a result, has made the field one of the most sought-after sectors for venture capital investment. The popular image of intelligent machines of the future makes it an even hotter topic of debate. An entire book genre related to AI exists, written by the very ‘titans’ of the field: scientists and entrepreneurs who were the driving force behind the recent breakthroughs that opened the door for the application of existing technology (see ‘most notable readings’).

Observing the debate over AI as it unfolds, it is already clear that there is much to gain from the adoption of current learning algorithms. All those unobvious, mundane tasks that require solution-seeking capabilities are now the next wave of profitable entrepreneurial activity. The range of applications spans all the way from large-area traffic optimization to the recognition of molecules, where smart-enough algorithms are already a viable alternative to the busywork previously performed by humans.

Understanding the implications

The key to understanding the implications often comes with downplaying the anthropocentric belief in the uniqueness of humans. If we reduce our decisions to a sequence of actions, then we can compare computer algorithms with the results of our cognition… and lose in the process. AI has already started to dominate in domains that can be distilled into an isolated set of rules, such as games. Algorithms, instead of humans, are our modern champions of chess and Go, and might soon enough exceed us in the pursuit of the safest driving and other fields where uncertainty impacts the ruleset. A board game is free of exceptions and rule violations; there will never be an additional drunk bishop unexpectedly bursting across the chessboard. But on the road, we should expect to deal with such situations by adjusting rule obedience when necessary in order to minimize harm. We are already just a few improvements away from learning algorithms competent enough to comprehend and deal with uncertainty on the road, so it is left to the imagination to determine what the next boundary for their application should be.

Large-scale, data-driven human cooperation, such as government, could be an ideal environment for learning algorithms to flourish. Having a single representative at the top of every decision-making hierarchy, as we do today, is not a structure designed for the successful leadership of millions. Despite its primitive evolutionary origins, we still use it at every level of organization. Ignorance is a widely accepted outcome, as it is impossible for leadership to perceive the unique needs of all participants. Some future Airbnb-like service for matching the needs of coexisting citizens in a multi-dimensional web of interrelations might provide a successful alternative to the current hierarchical administration system, innovating on existing structures rooted in Athenian democracy and Roman ruling constructs.

A subjective comparison

Even stretching our imagination far enough to have our most pressing problems resolved with the help of AI, we might still struggle to call our capable problem-solving creations ‘intelligent.’ It could even be that we never arrive at a crystal-clear definition of intelligence and remain convinced that it is one of those human matters that should stay subjective. We might decide that it is futile to compare our cognitive abilities to our creations, just as we ceased to measure our physical strength against cranes or caterpillars. Just as it is not degrading to realize that we are not the strongest at lifting concrete, we might find similar comfort when competition to our cognitive abilities arises.