One of the best things about a friend is the way you can be understood. When someone knows you well, ‘reading between the lines’ is a wonderful thing to experience. If you investigate this phenomenon with the goal of getting to the bottom of it, you’ll realize that it comes down to the ability to comprehend mental states, and as a result, to create a mental model of a friend.

Having a model makes it easier to anticipate behavior and reactions. In technical terms, it comes down to ‘running simulations’ to predict outcomes. Your very best friend ‘gets you’ by being able to anticipate your reactions. Soon an algorithm running on your smartphone will have a similar capacity. The device will appear to read your mental states. What would your reaction be?

The concept of mental models can be traced back to Kenneth Craik’s suggestion 1 that the mind constructs “small-scale models” of reality that are used to anticipate events. Mental model theory has been developed over the last 60 years, mainly by psychologists and cognitive scientists, to unravel the mystery of what reasoning depends on. The theories that were developed yielded some answers while stimulating a vast number of new questions. This pursuit of knowledge is a story of the progress of science at a granular level, slowly churning through one faulty idea after another to arrive at some credible answers. In this process, the Theory of Mind was coined and later, also the Computational Theory of Mind. The latter brought mathematics into the field and, together with Machine Learning, started to show promise in porting mental processes currently reserved for humans onto silicon chips.

The idea of mental models being useful to support the ways we think about certain phenomena was popularized by Charles Munger. 2 A continuous learner, sometimes humorously described as a walking ‘book on legs’, he developed a system called a ‘lattice of mental models’ in 2001. The system contains a set of models for thinking about various phenomena in the business world. His ideas were picked up shortly after and publicized in a series of articles and books that explored his method of building simplified models of reality with the aim of yielding useful predictions of possible futures. 3

The Wright brothers

The inner workings of cross-domain modeling can be observed in examples of creative problem-solving done by humans under conditions where no prior knowledge exists. A well-known illustration comes from the inventors of the airplane, the Wright brothers. One of their challenges was the design of the aircraft’s propeller. All existing propellers at the time were built for use in water, an incompressible fluid, so their design was irrelevant for a propeller operating in air, which is highly compressible. What forced the brothers to start from scratch was that propeller design at the time rested on tedious trial and error. They succeeded despite all of that by cross-linking knowledge from one domain and creatively applying it to another. By assuming that a propeller could be considered a small wing in circular motion, they could reuse the calculations for wing design, which they had mastered, to build an effective air-operating propeller. A model from one domain became a solution in another.

(The Wright Brothers 1903 Flyer Propeller)

The creation of abstract models of how things work is reserved for humans. Cross-domain thinking is still the province of human creativity. However, the advantage we hold over algorithms is visibly disappearing in selected domains. This advantage will gradually erode further due to the capacity limits we have and which algorithms don’t. The correlations we can grasp using our ‘models’ are limited by the number of factors we can include, usually employing cause-and-effect reasoning.

Mental models in the age of AI

Computer algorithms are not as limited as humans are in this regard. Their capacity makes it possible both to find strong correlations, as humans do, and to go beyond that: to explain phenomena by taking into account thousands of minor features that are impossible for a human being to account for.
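To make that contrast concrete, here is a minimal sketch on synthetic data (the numbers and setup are purely illustrative, not any particular system): an outcome is driven by a thousand weak factors, so a single-factor ‘human-style’ model explains almost nothing, while a model that weighs all the factors at once explains most of the variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: an outcome driven by 1,000 weak features.
# No single feature explains much, but together they do.
n_samples, n_features = 5000, 1000
X = rng.normal(size=(n_samples, n_features))
true_weights = rng.normal(scale=0.1, size=n_features)
y = X @ true_weights + rng.normal(scale=0.5, size=n_samples)

def r2(y_true, y_pred):
    """Fraction of variance in y_true explained by y_pred."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

# 'Human-style' model: pick the single most correlated feature
# and fit a simple cause-and-effect line through it.
corrs = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_features)])
best = int(np.argmax(np.abs(corrs)))
slope, intercept = np.polyfit(X[:, best], y, 1)
pred_single = slope * X[:, best] + intercept

# 'Machine-style' model: least squares over all 1,000 features at once.
w_all, *_ = np.linalg.lstsq(X, y, rcond=None)
pred_all = X @ w_all

print(f"single-feature R^2: {r2(y, pred_single):.2f}")
print(f"all-features  R^2: {r2(y, pred_all):.2f}")
```

The single best factor typically explains a few percent of the outcome at most, while the thousand-feature fit explains nearly all of it, which is the kind of gap the paragraph above is pointing at.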

The other limitation has to do with logical reasoning itself, which is not always the prevailing method of human decision-making. With emotions often intervening in our logical conclusions, we are offered various shortcuts, making our judgments questionable from a logical standpoint. But is the argument that ‘we feel like it’ sufficient to stand up to scrutiny in the long run? It might be that once a quantitative theory of mind is complete, we will gain access to a process of artificial reasoning unparalleled by the one we use today. Models fueled by the abundance of existing information will become an excellent tool for predicting outcomes with much greater precision.

This will make it possible for a device, such as a phone, to learn its user’s ways and anticipate their needs. The discovery will be supported by the profiles of millions of other, similar human beings, further improving accuracy thanks to recognized similarities between users.
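A toy sketch of that ‘similar users’ idea, in the spirit of nearest-neighbor collaborative filtering (the ratings matrix and all numbers here are hypothetical):

```python
import numpy as np

# Hypothetical interaction matrix: rows are users, columns are items
# (e.g. songs), values are ratings from 1 to 5; 0 means "not seen yet".
ratings = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 0],
    [1, 0, 5, 4, 5],
    [0, 1, 4, 5, 4],
], dtype=float)

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def predict(user, item):
    """Estimate a missing rating as a similarity-weighted average
    of the ratings given by other users who have seen the item."""
    others = [u for u in range(len(ratings))
              if u != user and ratings[u, item] > 0]
    sims = np.array([cosine(ratings[user], ratings[u]) for u in others])
    votes = np.array([ratings[u, item] for u in others])
    return sims @ votes / sims.sum()

# User 0's profile closely resembles user 1's, who disliked item 2,
# so the prediction for user 0 on item 2 comes out low.
print(round(predict(0, 2), 2))
```

Platforms already run this kind of logic at the scale of millions of profiles; the point here is only to show how ‘people like you’ turns into a weighted vote.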

The future of profiling users

A supercharged Siri or Alexa will know their users better than they know themselves. The right music for the mood without speaking a word? Suddenly, ‘reading between the lines’ in interaction with your device will become possible. Empathy, appreciation and devotion have so far been attributed to the emotion of love. The challenge of testing our emotional responses to such situations was depicted in the movie ‘Her’, in which an artificial assistant, using the voice of Scarlett Johansson, succeeds in creating a bond between a human and a machine.

A scene from the movie ‘Her’ (2013) by Spike Jonze, with Joaquin Phoenix in the lead role

With recent advances, we are gradually being supplied with the tools and resources to come to grips with artificial processes of reasoning. And if we take the position of philosophers such as John Searle or Daniel Dennett, strong AI demands the capacity for mental states. 4 Since we’re not at that chapter yet, perhaps it is the right moment to get your house in order before algorithms begin to see through you.

  1. See his 1943 book – The Nature of Explanation

  2. Charles Munger is a friend and business partner of the hugely successful investor Warren Buffett.

  3. What was popularized by Charles Munger is, in fact, a well-researched cognitive capacity of mammals. When a dog anticipates its owner’s wishes, it uses a mental model of the owner. This has already been replicated by platforms such as Amazon, where a model is built around a user’s profile to provide product recommendations. The same applies to Google in meeting users’ search criteria. Those algorithms already successfully anticipate users’ needs in their narrow domains. This is bound to change when learning from a single domain starts to be generalized and bypasses existing boundaries.

  4. According to Searle, the definition of ‘strong AI’ differs from that of ‘weak AI’ by that aspect alone. He characterizes the strong AI position with the claim that “a physical symbol system can have a mind and mental states”, whereas weak AI can only ‘act intelligently’. See Mind, Language and Society.