How might a digital whiteboard work as a thinking tool?

In short: Interaction with “representations” on a whiteboard canvas can trigger learning processes. This phenomenon exists across age groups and is relevant among adult learners as well. 


In order to understand how cutting-edge collaborative whiteboards support cognition, one only needs to consider childhood and observe how thinking evolves during play. 

Imagine a two-year-old manipulating physical blocks. While playing, the young brain builds up mental constructs of independent objects. These are created stage by stage, integrating elementary action schemes gained from actual experiences within the environment. This is how things like tower-building can happen.

Soon after the assimilated concepts are accommodated in brain structures, it becomes possible for the child to use “abstractions” to think through actions without the need to physically manipulate objects. This is essentially where most adults are — we have a good understanding of how wooden blocks work.

The canvas of a digital whiteboard (a virtual space) makes it easy to represent real objects with pictures or other forms, such as symbols or drawings, while not being constrained by the real environment (a physical space). These are called “first-order representations”.

Watch Reshan demonstrating early algebraic concepts using first-order representations during our 10-hour-long YouTube livestream on April 1st.

Going beyond simple abstractions

Now think of counting objects using counters or chips, with one counter representing each object. Using counters is a symbolic action that can be traced back at least to the 8th millennium B.C.E. Back then, counters were used to document a “one-to-one” relation between concrete objects and marks on a clay tablet.

Counters can play an important role in cognitive development. These are essentially “second-order representations”, or representations that are indirectly related to objects and actions. They are the cornerstone for the higher-order structures emerging from their usage, for example in algebra, where a set of symbols reflects arithmetical operations.

The digital whiteboard acts as a bridge between direct representation and abstract symbolic systems of knowledge. The canvas allows you to create a direct representation of any items, and, if needed, use them alongside symbolic representations in order to gradually develop understanding — from simple to more abstract and complex ideas.

Example of using a whiteboard for explaining elementary algebra. 
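For readers without the image at hand, here is a minimal sketch of how such a progression from counters to symbols might look (an illustrative example of the general idea, not a transcription of the pictured board):

```latex
% Counting with counters: a one-to-one representation of concrete objects.
\bullet\ \bullet\ \bullet \;+\; \bullet\ \bullet \;=\; \bullet\ \bullet\ \bullet\ \bullet\ \bullet
\qquad 3 + 2 = 5
% Replacing a known count with a symbol turns the same action into elementary algebra.
x + 2 = 5 \;\Longrightarrow\; x = 5 - 2 = 3
```

The same move, from pictures of objects to counters to letters, is what the canvas lets a learner make gradually and at their own pace.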

Using the digital whiteboard to develop thinking together 

A digital whiteboard is an amazing thinking tool, the ultimate medium to use for the fundamental processes of building understanding across different fields. It is a powerful tool to actively support cognition, one which allows the user to witness transformations of more or less abstract representations through the actions they take.

In the end, it’s actions that shape thinking structures, which in turn become more general and applicable to ever richer content. The application of these structures depends on the context; however, the underlying process is similar to what we just illustrated using elementary algebra as our example. 

To learn more: follow the ideas of Max Wertheimer and Jean Piaget, pioneers of cognitive psychology. Piaget documented the mechanism of “reflective abstraction”, while Wertheimer pioneered “constructivist structuralism” and broke new ground with his book “Productive Thinking”, which drew on his conversations with Albert Einstein.

Article originally published on Explain Everything blog

A short history of thinking as symbol processing

Is human thinking a kind of symbol manipulation? If it were so, manipulating symbols would be sufficient for intelligence, and this would also imply that since machines manipulate symbols, they can be intelligent too. This idea is a well-known position in the philosophy of AI reaching back to 1976, when it was formulated by Allen Newell and Herbert A. Simon. Both thought of a physical symbol-processing system as a necessary and sufficient condition of thinking 1. How does it hold up over four decades later?

What is this tradition of explaining a phenomenon as complex as thinking that got us to represent the brain as a computer, an object grounded in mathematical logic?

Let’s do something wild here and, for the sake of understanding, look at this highly criticized idea by revisiting not just the four decades since its inception, but the more dizzying perspective of millennia. After all, Newell and Simon followed a tradition of thought that can only be understood deeply by going back to the very roots of its ideas. It so happens that those roots bring us all the way back to Aristotle and Ramon Llull.

The tradition of describing the nervous system and cognition as computation and signal processing reaches back to the very beginnings of the key scientific discoveries of logic and mathematics. The concepts developed back then (and improved upon over time) are a set of useful tools today, allowing us to calculate a vast number of things, from building structures to winning chess strategies.

The history of calculation is the history of the computer and the ideas that led to it, emerging from mathematical logic, which back in the 19th century was an obscure and cult-like discipline. A great resource for a brief history of those ideas can be found in an article by Chris Dixon, a TechCrunch writer and general partner at Andreessen Horowitz with a major in philosophy. 2

But the influences that got us to perceive brains as computers are also rooted in the inception of the information sciences, where a founding-father figure is sometimes found in Ramon Llull, the Catalan polymath and logician. Llull, back in the fourteenth century, became obsessed with an Arab mechanical device which could provide a rhymed answer to any given question through a combination of symbols. Motivated by a pursuit of truth and knowledge, he developed a compendious system for posing and answering philosophical queries using a similar concept, with the inclusion of some simplified causations. Llull’s fascination with ‘computing answers’ was carried on by his scientific descendants, including such prominent figures as Giordano Bruno and Leibniz, a co-inventor of calculus, who admired the universal ambition and clever combinatorics of Llull’s methods. As a result, Leibniz developed a further system for manipulating universal symbols 3 that only later turned into bits and bytes, when math crossed paths with electrical circuits.

This happened thanks to Claude Shannon, who not only mapped Boole’s logical system onto electrical circuits but also developed a mathematical theory of communication. Computing answers suddenly became possible in the digital realm and laid the foundation for using computerized symbol manipulation as a metaphor for thinking. After all, a series of bits as symbols seems somehow similar to the firing of neurons, so all that was needed to connect computers with cognition was a mathematical representation of the singular unit of our neural network: the neuron. The next breakthrough came quickly from Warren McCulloch and Walter Pitts, who provided a solution 4.
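To get a sense of how spare that solution was, here is a minimal sketch in Python (an illustration of the general idea of a threshold unit, not code from the 1943 paper): a unit “fires” when the weighted sum of its binary inputs reaches a threshold, which is already enough to compute elementary logical functions.

```python
def mcp_unit(inputs, weights, threshold):
    """McCulloch-Pitts-style unit: fire (1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Logical AND: fires only when both inputs are on.
assert mcp_unit([1, 1], [1, 1], threshold=2) == 1
assert mcp_unit([1, 0], [1, 1], threshold=2) == 0

# Logical OR: fires when any input is on.
assert mcp_unit([0, 1], [1, 1], threshold=1) == 1
assert mcp_unit([0, 0], [1, 1], threshold=1) == 0
```

Wired together in networks, such units can in principle realize any logical expression, which is what made them a plausible bridge between neurons and computation.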

The remaining part of the story is related to post-war computing, much better recognized today thanks to the celebrity-like status of Alan Turing, who defined the notion that carries his name, the so-called “Turing machine”, which further equated digital computation with information processing.

The work of Turing and Alonzo Church opened the door to the Computational Theory of Mind (CTM for short), which claims, in essence, that the mind is a kind of computer. Computationalism (used interchangeably with CTM) remains to this day a research program, helpful in creating and testing theories and individual models of cognition, with an emphasis on manipulating representations.

It’s hard not to notice the recent support coming from the promising trends in Machine Learning. The artificial neural nets used in Machine Learning display a set of properties which seem to narrow the gap between biological cognition and the results of digital computation. For instance, humans don’t need formal proofs to believe in something 5 and we definitely don’t need to calculate equations in order to maintain body balance; the same is true for Machine Learning. Acting as smart function approximators, artificial neural nets show little need for the rigid definitions that classical algorithms required. What sets human learning and machine learning apart are our appetites for repetition while learning (a human’s handful versus a machine’s thousands) and the scope of application for things once learned.
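As a toy illustration of that “function approximation through repetition” (a deliberately minimal sketch, not tied to any real ML framework), a single trainable parameter can be nudged toward the mapping y = 2x from examples alone, over thousands of trials, with no formal definition of “doubling” anywhere in sight:

```python
import random

random.seed(0)
w = random.random()                 # one trainable parameter, started at a guess
for _ in range(5000):               # machine-scale repetition; a person would need a handful of examples
    x = random.uniform(-1.0, 1.0)
    error = w * x - 2.0 * x         # how far the prediction is from the target mapping y = 2x
    w -= 0.1 * error * x            # nudge the parameter to reduce the squared error
print(round(w, 3))                  # converges to roughly 2.0
```

Real networks have millions of such parameters, but the contrast with a hand-written rule is the same: the behavior is learned from repetition, not defined in advance.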

So where does this all take us? Brains seem to process information organized in some sort of structured content; however, we still don’t know how that processing occurs and what sort of computation is used in ‘the biological circuits’: digital, analog or hybrid? And what representations are used?

Most cognitive scientists put forth that our brains use some form of representational code that is carried in the firing patterns of neurons. Computational accounts seem to offer an easy way of explaining how our brains carry and manipulate the perceptions, thoughts, feelings, and actions that make up our everyday experience. While most theorists maintain that representation is an important part of cognition, the exact nature of that representation is highly debated.

What is certain, however, is that the rush to discover some answers is currently a highly funded venture, taking in both private and public investments. 6

Seen from the perspective of the evolution of ideas, the pace of progress is striking. We invented logic roughly 2,300 years ago and, 1,500 years later, realized the utility of computation. Calculus was invented only 350 years ago, and math as a discipline experienced a course correction only 100 years ago, fixing mistakes left by its pioneers. Computing machines which utilize math were invented only 80 years ago. The artificial neuron has existed for 75 years, but the improvements that led to its useful implementation took place only recently. The scientific field of Cognitive Science, aimed at understanding the organizing principles of the mind, was named only 68 years ago. The breakthroughs which allow us to make use of artificial neural nets to model cognitive processing occurred only over the past 10 years.

On the scale of time, the use of computers to simulate and understand cognition has arrived so abruptly that we might discover that the computer metaphor for cognition is a mistake similar to the 19th century’s misleading use of the mechanistic metaphor to explain the inner workings of the human body. Some philosophers of cognitive science already believe that the term ‘the computational theory of mind’ is a misnomer 7 binding together a series of methodological and metaphysical assumptions shared by particular (and sometimes conflicting) theories that, together, compose the core of cognitive science and early efforts to model the brain (known as computational neuroscience). It might be that we’re living in a world of modern metaphysics and, to understand this, time is needed and with it, the proper perspective.


  1. It is known as the physical symbol system hypothesis (PSSH)

  2. The difference in Chris Dixon’s approach was to put the ideas and philosophical influences at the center, rather than the evolution of hardware, which is usually the basis of historical discussions related to computers. Read the entire piece at The Atlantic

  3. The work that laid the ground for abstract representation as a foundation for all further notations, including computer languages, was titled “Dissertation on the Art of Combinations”

  4. See their famous article “A Logical Calculus of Ideas Immanent in Nervous Activity”

  5. We’re fine not comprehending entirely why one plus one equals two. Curiously, it took roughly three hundred pages of Whitehead and Russell’s 1910 Principia Mathematica to establish that. So even if we’re certain of the result of summing two numbers, the ‘knowing’ of the multiplication table poses a serious epistemological challenge, replacing certainty with a less rigid (yet still reliable) ‘knowing how’

  6. In a subsequent article we’ll review which research centers are actively working on the next breakthroughs in modeling brains – and what their outlook is

  7. See “From Computer Metaphor to Computational Modeling: The Evolution of Computationalism” by Marcin Miłkowski and, also by him, the definition of the Computational Theory of Mind in the Internet Encyclopedia of Philosophy

Artificial empathy

One of the best things about a friend is the way you can be understood. When someone knows you well, ‘reading between the lines’ is a wonderful thing to experience. If you investigate this phenomenon with the goal of getting to the bottom of it, you’ll realize that it comes down to an ability to comprehend mental states and, as a result, to create a mental model of a friend.

Having a model makes it easier to anticipate behavior and reactions. In technical terms, it comes down to ‘running simulations’ to predict outcomes. Your very best friend ‘gets you’ by having the ability to anticipate your reactions. Soon an algorithm running on your smartphone will have a similar capacity. The device will appear to read your mental states. What would be your reaction?

The concept of mental models can be traced back to Kenneth Craik’s suggestion 1 that the mind constructs “small-scale models” of reality that are used to anticipate events. Mental model theory has been developed over the last 60 years, mainly by psychologists and cognitive scientists, to unravel the mystery of what reasoning depends on. The theories that were developed yielded some answers while stimulating a vast number of new questions. This pursuit of knowledge is a story of the progress of science at a granular level, slowly churning over one faulty idea after another to arrive at some credible answers. In this process, the Theory of Mind was coined and later, also, the Computational Theory of Mind. The latter brought mathematics into the field and, together with Machine Learning, started to show promise in porting mental processes currently reserved for humans onto silicon chips.

The idea that mental models are useful for supporting the ways we think about certain phenomena was popularized by Charles Munger. 2 As a continuous learner, sometimes humorously called a walking ‘book on legs’, he developed a system called a ‘latticework of mental models’ in 2001. The system contains a set of models for thinking about various phenomena in the business world. His ideas were picked up shortly after and publicized in a series of articles and books that explored his method of building simplified models of reality with the aim of yielding useful predictions of possible futures. 3

The Wright brothers

The inner workings of cross-domain modeling can be observed in examples of creative problem-solving done by humans under conditions where no prior knowledge exists. A well-known illustration comes from the airplane inventors, the Wright brothers. One of their challenges was the design of the aircraft’s propeller. All existing propellers at the time were built for use in water, an incompressible liquid, so their designs were of little use for a propeller operating in highly compressible air. What forced the brothers to start from scratch was that existing propeller design relied on tedious trial and error. They succeeded despite all of that by cross-linking knowledge from one domain and creatively applying it to another. By assuming that a propeller might be considered a small wing in circular motion, they could use the existing calculations related to wing design, which they had mastered, to build an effective air-operating propeller. A model from one domain became a solution in another.

(The Wright Brothers 1903 Flyer Propeller)
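A rough sketch of the ‘wing in circular motion’ idea (illustrative numbers and a simplified lift formula only, not the Wrights’ actual calculations): each thin slice of the blade moves at a speed set by its radius and the rotation rate, and the ordinary wing lift equation is then applied to that slice.

```python
import math

def blade_element_lift(radius_m, rpm, strip_width_m, chord_m, lift_coeff, air_density=1.225):
    """Treat one thin strip of a propeller blade as a small wing moving at its local rotational speed."""
    speed = 2 * math.pi * radius_m * rpm / 60.0                  # local airspeed of the strip, m/s
    area = chord_m * strip_width_m                               # wing area of the strip, m^2
    return 0.5 * air_density * speed ** 2 * area * lift_coeff    # classic wing lift equation

# Illustrative values only: a 5 cm strip near the tip of a slowly turning propeller.
print(round(blade_element_lift(radius_m=1.0, rpm=350, strip_width_m=0.05,
                               chord_m=0.15, lift_coeff=0.5), 2))
```

Summing such strips along the blade gives an estimate of thrust, which is the sense in which a model built for wings became a solution for propellers.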

The creation of abstract models of how things work is, for now, reserved for humans. Cross-domain thinking is still the domain of human creativity. However, the advantage we hold over algorithms is visibly disappearing in selected domains. This advantage will gradually erode further due to the capacity limits we have and which algorithms don’t. The correlations we can grasp using our ‘models’ are limited by the number of factors we include, usually through cause-and-effect reasoning.

Mental models in the age of AI

Computer algorithms are not as limited as humans are in this regard. Their capacity makes it possible both to find strong correlations, as humans do, and to go beyond that, explaining phenomena by taking into account thousands of minor features that are impossible for a human being to account for.

The other limitation has to do with logical reasoning itself, which is not always the prevailing method of human decision making. With emotions often intervening in our logical conclusions, we are offered various shortcuts, making our judgments questionable from a logical standpoint. But is the argument that ‘we feel like it’ sufficient to stand up to scrutiny in the long run? It might be that once a quantitative theory of mind is complete, we will gain access to a process of artificial reasoning unlike anything we use today. Models fueled by the abundance of existing information will become an excellent tool for predicting outcomes with much greater precision.

This will make it possible for a device, such as a phone, to learn its user’s ways and anticipate their needs. The discovery will be supported by the profiles of millions of other, similar human beings, further improving accuracy thanks to recognized similarities between users.
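A minimal sketch of that ‘similar users’ idea (hypothetical data and a deliberately crude similarity measure, not any vendor’s actual algorithm): when the device has no observation of its own user in a given situation, it borrows the prediction from the most similar profile.

```python
# Hypothetical profiles: which music each user has chosen in a given mood.
profiles = {
    "me":     {"tired": "jazz", "happy": "pop"},
    "user_b": {"tired": "jazz", "happy": "pop", "sad": "ambient"},
    "user_c": {"tired": "metal", "happy": "rock", "sad": "punk"},
}

def predict_music(user, mood):
    """If the user has no history for this mood, borrow the choice of the most similar profile."""
    if mood in profiles[user]:
        return profiles[user][mood]
    def overlap(other):  # crude similarity: number of moods where the two profiles agree
        return sum(profiles[user].get(m) == choice for m, choice in profiles[other].items())
    most_similar = max((u for u in profiles if u != user), key=overlap)
    return profiles[most_similar].get(mood, "no prediction")

print(predict_music("me", "sad"))   # borrows "ambient" from the most similar profile
```

Production recommenders use far richer features and models, but the underlying move is the same: my gaps are filled from people who look like me.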

The future of profiling users

A supercharged Siri or Alexa will know their users better than they know themselves. The right music for the mood without a word being spoken? Suddenly ‘reading between the lines’ in interaction with your device will become possible. Empathy, appreciation and devotion have so far been attributed to the emotion of love. The challenge of testing our emotional responses to such situations was depicted in the movie ‘Her’, where an artificial assistant, using the voice of Scarlett Johansson, succeeds in creating a bond between a human and a machine.

A scene from the movie “Her” (2013) by Spike Jonze, with Joaquin Phoenix in the lead role

With recent advances, we’re gradually being supplied with the tools and resources to come to grips with artificial processes of reasoning. And if we take the position of philosophers such as John Searle or Daniel Dennett, strong AI demands the capacity for mental states.4 Since we’re not at that chapter yet, perhaps it is the right moment to get your house in order before algorithms begin to see through you.


  1. See his 1943 book – The Nature of Explanation

  2. Charles Munger is a friend and business partner of the hugely successful investor Warren Buffett.

  3. What was popularized by Charles Munger is, in fact, a well-researched cognitive capacity of mammals. When a dog anticipates its owner’s wishes, it uses a mental model of its owner. This has already been replicated by platforms such as Amazon, where a model is built around a user’s profile to provide product recommendations. The same applies to Google in meeting users’ search criteria. These algorithms already successfully anticipate users’ needs in their narrow domains. This is bound to change when learning from a single domain starts to be generalized and bypasses existing boundaries.

  4. According to Searle, the definition of ‘strong AI’ differs from ‘weak AI’ by that aspect alone. He characterizes strong AI with the claim that “a physical symbol system can have a mind and mental states”, whereas weak AI can only ‘act intelligently’. See “Mind, Language and Society”

Workforce implications of Machine Learning

Almost every day, alarmist voices can be heard on how Artificial Intelligence will steal jobs from humans. The mass media rely on the publicity of doomsday stories: doomsday books written by documentary filmmakers or articles prepared by magazine staff writers 1. Understandably, cutting through the noise of this echo chamber is difficult, especially when job security is discussed in relation to machine learning.

But there are times when socioeconomic shifts are debated by experts in the field, such as in the recent article by Erik Brynjolfsson and Tom Mitchell in the December issue of Science, under the title “Workforce implications of Machine Learning”.

After all, both have a voice in the field. Erik Brynjolfsson, as an MIT professor, drives the Initiative on the Digital Economy, which researches new business models, productivity, employment and inequality 2. Tom Mitchell’s reputation in the field of Machine Learning was established back in 1997, when he authored one of the first textbooks on the subject.

The article does not seem intended to provide direct answers, and its arguments are so rounded that there is no way to disagree, for example:

“The net effect on the demand for labor, even within jobs that are partially automated, can be either negative or positive. Although broader economic effects can be complex, labor demand is more likely to fall for tasks that are close substitutes for capabilities of Machine Learning, whereas it is more likely to increase for tasks that are complements for these systems”

But that’s OK. The thing to expect from two thinkers with such a strong background is a vantage view: a high-level categorization of the economic forces that will shape the future of the job market, provided in the language of economics. It’s up to us to shape the answers at the ground level.

The article provides an introduction to the topic of where and how Machine Learning might affect the job market by listing tasks suitable for current AI, but that only sets the stage for the main goal of the piece, which is to define six economic factors that, combined, will shape the work of the future.

A whiteboard-created map with excerpts from the article “What can machine learning do? Workforce implications”. Read the whole piece in the December issue of Science (also accessible from the webpage of Tom Mitchell here)

To understand the influence of those six factors on the job market, we need to see their joint effect as a function. However, it is easiest to reduce the perspective to merely one of the factors: the substitution of human jobs by machines as a net job destroyer. This tactic is convenient for spinning stories that anthropomorphize guesses about the outcomes of technology. After all, this is the easiest way for us to relate to a phenomenon: by using analogies.
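As a toy illustration of that “joint effect as a function” (invented functional form and numbers, and placeholder names rather than the article’s actual factor list): net labor demand depends on all the factors at once, so a strong substitution effect alone does not determine the sign of the outcome.

```python
def net_labor_demand_change(substitution, complementarity, new_tasks,
                            demand_elasticity, wage_effect, adjustment_speed):
    """Toy model only: all six placeholder factors enter jointly, so no single one fixes the sign."""
    task_effect = (-substitution + complementarity + new_tasks) * demand_elasticity
    return task_effect + wage_effect - (1.0 - adjustment_speed) * 0.5

# Invented inputs: substitution is strong, yet the combined effect still comes out positive.
print(round(net_labor_demand_change(substitution=0.8, complementarity=0.6, new_tasks=0.5,
                                    demand_elasticity=1.2, wage_effect=0.1,
                                    adjustment_speed=0.7), 2))   # 0.31
```

The point is not the numbers, which are made up, but the shape of the reasoning: focusing on a single input of a multi-variable function tells you very little about its output.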

Erik Brynjolfsson, in his 2015 book 3, covered in great detail how the evolution of technology can destroy jobs, using… human-equivalent androids in a thought experiment:

“Imagine that tomorrow a company introduced androids that could do absolutely everything a human worker could do, including building more androids. There’s an endless supply of these robots, and they’re extremely cheap to buy and virtually free to run over time.”

Appealing to our imagination with such similarities makes us focus on the wrong thing. It’s easy to imagine an android and then even to fear one as a result. But in doing so, we lose sight of the more abstract consequences of the technology that cannot be represented through analogies to what we know of humans.

Discussing the non-human characteristics of technology is a hard sell, yet it is an important one if we are not to miss the point of where technology will take us and how it will affect our job market. That’s why the list of the remaining five economic factors from the article is a great entry point for thinking about the unknown end results of technology that will affect the job market.


To summarize: when thinking about the job market of the future, it’s not human-like robot overlords that should be our focus. That’s a smoke screen. What would be more useful is to address the unknowns on two levels. On a ground level:

What incomparable (and hard to imagine) characteristics of technology will play a part in substituting human jobs?

And on a macro level:

What will be the cross-dependence of the six economic factors listed by the authors?

We’re going to explore both. In the meantime, if there’s something to actually fear – it should be a fear of the surprising unknowns. Every bit of thinking around the emergent properties of technologies is worth our focus.


  1. It is to no one’s surprise that stories based on fear are an easy sell for publishers. See, for example, work by James Barrat, which stands in opposition to what intellectual impresarios such as John Brockman are doing with ‘What to Think About Machines That Think’, a collection of essays from important figures in the field of AI

  2. Their publication list is available at http://ide.mit.edu and also on Medium

  3. The Second Machine Age, which explores how technology is overturning the world’s economy

The Brain Maps Out Ideas and Memories Like Spaces

An article by Jordana Cepelewicz that discusses the assumptions and knowledge related to how the brain encodes knowledge in reference to position in space. The article covers a wide scope of topics, starting from the method of loci and ending with Numenta’s Thousand Brains Theory.


Read the full article at Quanta Magazine.