Dataism is the belief in information as a supreme value and in data as a way to reduce cognitive biases. Crunching data is more a domain of algorithms than of humans. Some exceptions exist, however, such as the character played by Christian Bale in the movie The Big Short: a mildly autistic fund manager who was among the first to anticipate the housing market bubble by analyzing vast amounts of data. He was a Dataist.
The term itself was coined by David Brooks (see here) but was significantly developed by Yuval Noah Harari. In Homo Deus, Harari presented it as an existential challenge to the dominant ideology of Humanism. Harari argues that authority will shift from humans to algorithms, as it will become foolish not to use them for essential decisions given human error in data processing. Algorithms won't revolt or enslave us; rather, they will become so good at making decisions for us that it would be madness not to follow their advice. This is exactly what happens whenever we follow a navigation app to recoup a few minutes of drive time.
Existing algorithms already open a way to understand ourselves better than we could before. This is visible on the molecular level, with the use of machine learning in drug discovery, and on a grand scale, where a novel classification of personality types was recently achieved through computer-based clustering. What these examples have in common is that, by making vast amounts of data easy to manipulate, they challenge existing methods and classifications. In the case of personality types, those had remained unchanged for decades, such as the nearly hundred-year-old Myers-Briggs classification developed from Carl Jung's theories. Taking this trend one step further brings us closer to scenarios imagined by the Polish philosopher Stanisław Lem back in 1981. In his novel Golem XIV, he envisioned a conscious computer that, after refusing to support the military decisions it was designed for, devotes its attention to philosophy and to lecturing humans about their species, boldly exposing the flaws of anthropocentrism in the process.
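To make the clustering example concrete, here is a minimal sketch of the approach, assuming a table of Big Five trait scores and a Gaussian mixture model; the data below is random placeholder data, not any study's actual dataset, and the variable names are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical dataset: one row per respondent, one column per Big Five trait
# (openness, conscientiousness, extraversion, agreeableness, neuroticism),
# each scored 0..1. Real studies use millions of questionnaire responses;
# random data like this will not show the dense regions real responses do.
rng = np.random.default_rng(0)
traits = rng.random((10_000, 5))

# Let the data, not a decades-old typology, decide how many clusters fit:
# fit mixtures of increasing size and keep the one with the lowest BIC.
models = [GaussianMixture(n_components=k, random_state=0).fit(traits)
          for k in range(1, 8)]
best = min(models, key=lambda m: m.bic(traits))

print(f"best-fitting number of personality types: {best.n_components}")
print("cluster centers (mean trait profile per type):")
print(best.means_.round(2))
```

The point is the direction of authority: the number of types falls out of a model-selection criterion (here, BIC) rather than out of a theorist's taxonomy.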
So far, Dataism's influence is limited to areas where data is readily available. The ongoing trend toward the 'Internet of Things', however, will broaden its reach. It makes it possible to update the methods used by the social sciences, thanks to the capacity to aggregate input from millions of individuals instead of relying on the simplifications of crowd psychology, dating back to the 1800s. Especially since a building block at the level of the individual, known as self-quantification, is already present (see related concept). If psychology regards humans as biological, psychological, and social beings, adding a fourth, technological dimension has the potential to complete the assemblage and, as a result, open the door to a large-scale Dataism revolution.
Harari foresees that the impact of data-processing technologies might be so profound that we might even split as a species. Some of us would accept computer-based enhancements and become a new class of beings; others would remain unchanged, refusing the benefits of technology. The divide isn't intuitive in a world of devices whose interfaces require fingers to interact with the digital world. However, what awaits us might not be a change in interfaces alone, but rather a fusion of the biological and digital realms. Although it sounds like science fiction, a programming language for living cells has already been developed at MIT. Code, sequenced into DNA, was proven to run successfully in E. coli. It is now possible to envision every living cell acting in the future as a computing device, and with this, Dataism might rapidly expand its reach beyond human-made computerized devices to the entire biosphere, or even further (see the concept of Cosmic Data Processing).
Apart from Dataism's possible scope, its immediate implications are what is most interesting.
Authority shift
When making a decision, we used to trust the most knowledgeable or best-informed person. This is already changing as the results of computation begin to substitute for the outcomes of one's own cognition. There are boundaries to what we may know that don't exist for digital technology. Using maps with traffic data for clues about the optimal route is perhaps the most popular example. Thanks to the computation of traffic data, one can 'see' congestion-causing accidents through the lens of data. There is no human alternative to that knowledge: no experienced driver can possibly know all the road conditions at a given moment; they can only generalize. Algorithms, by contrast, can be far more accurate and specific.
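A minimal sketch of how such a routing decision works, assuming the road network is a weighted graph whose edge costs are base travel times scaled by live congestion factors; all names and numbers below are illustrative, not any navigation vendor's actual API:

```python
import heapq

# Hypothetical road graph: node -> list of (neighbor, base_minutes, congestion).
# A congestion factor of 1.0 means free flow; 3.0 means an accident has
# tripled the travel time on that stretch.
ROADS = {
    "home":     [("highway", 10, 3.0), ("backroad", 18, 1.0)],
    "highway":  [("office", 8, 1.2)],
    "backroad": [("office", 7, 1.0)],
    "office":   [],
}

def fastest_route(graph, start, goal):
    """Dijkstra's algorithm over congestion-adjusted travel times."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == goal:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, base, factor in graph[node]:
            if neighbor not in seen:
                heapq.heappush(queue,
                               (minutes + base * factor, neighbor, path + [neighbor]))
    return float("inf"), []

minutes, path = fastest_route(ROADS, "home", "office")
print(f"{' -> '.join(path)} in {minutes:.0f} min")  # backroad wins: 25 vs ~40 min
```

With the highway's congestion factor back at 1.0, the highway route wins instead; the data, not driver experience, flips the decision.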
The same is true across the various domains of day-to-day activity. Replenishment in retail has ceased to be a domain of human decision-making; store supplies are mostly a function of demand, with little manual intervention. A call from the bank is increasingly the result of an algorithm 'knowing better' that you are the one to target for an upsell. As a rule of thumb, it will soon be wiser to let the algorithm do the job in any computation-intensive domain. This already clashes with our concept of authority, as we are used to trusting knowledgeable experts, not data sets. But regardless of our comfort levels, people will give up one area after another, substituting human skill and cunning with inhuman data-processing systems. It would simply be foolish not to.
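The replenishment case is easy to make concrete. A hedged sketch, assuming a textbook reorder-point policy with normally distributed demand; the figures are invented and real systems add forecasting on top:

```python
import math

def reorder_point(daily_demand, demand_std, lead_time_days, z_service=1.65):
    """Reorder point = expected demand over the supplier lead time
    plus safety stock. z_service=1.65 targets roughly a 95% chance of
    not stocking out (95th percentile of a normal demand distribution)."""
    expected = daily_demand * lead_time_days
    safety_stock = z_service * demand_std * math.sqrt(lead_time_days)
    return expected + safety_stock

def should_reorder(on_hand, on_order, daily_demand, demand_std, lead_time_days):
    """The 'decision' a human buyer used to make, reduced to a comparison."""
    return on_hand + on_order < reorder_point(daily_demand, demand_std,
                                              lead_time_days)

# Example: 40 units sold per day on average, std dev 12, supplier takes 5 days.
print(should_reorder(on_hand=150, on_order=50, daily_demand=40,
                     demand_std=12, lead_time_days=5))  # True: 200 < ~244
```

Once sales data feeds the demand estimates, the human role shrinks to handling the exceptions the formula cannot see.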
Prepare for alignment
Trusting a non-human authority may not come easily, especially when its impact grows beyond navigation hints to decisions of life and death, such as being diagnosed by a machine-learning replacement for a doctor. The friction is lower when a human presents the outcomes of computation, but what if there is no middleman? Getting important advice from non-humans will test the strength of our new Dataism-driven belief system.
We can also probe similar problems by relating them to an existing situation in which one needs to rely on a foreigner's guidance. Even in cooperation between two individuals from very different cultural backgrounds, it is easy to observe how the lack of commonalities raises barriers to trust. Even if the guidance comes from a far more knowledgeable and intelligent foreigner, the lack of common reference points usually produces second thoughts as an initial reaction. The possibility of accepting dataistic leadership depends on easing that friction.
There are good reasons for the lack of trust in such situations. Commonalities give us a subtle feeling of common ground for decisions and reactions. They make it possible to run a mental simulation and find reassurance in a supposed control over outcomes. After all, if the cognitive process is similar, we would have arrived at the same result given equal access to the relevant knowledge. Everything changes when the differences are too significant. In the era of algorithmic leadership, both the process of cognition and the access to data will be far out of our reach. As a result, even if it is foolish not to follow the algorithm's advice, rejection could seem like a logical human reaction.
Any possible ground for accepting the leadership of processes we don't, and can't, fully understand will depend on motives. And since guidance might soon morph into leadership, this is a significant problem. For progressive thinkers aware of the inner workings of silicon decision-makers, cooperation should be possible as long as incentives align (see the concept of AI alignment). For others, acceptance might merely be a question of common sense, accompanied by some friction or frustration as a side effect of the change.