Beyond words and tokens, too

The way we seek information changed rapidly once conversational search engines stepped in. Practical suggestions from large language models (LLMs), the technology behind tools such as Perplexity or ChatGPT, already shape how we tackle problems, letting us find candidate solutions and try them quickly. Whether it's a piece of advice or a snippet of code, we apply it and move on, as long as it works.

Information-seeking used to be one link in a chain of problem-solving: recognize and interpret the information, establish a plan of action, implement it, and, if that doesn't work, start again. With the rise of chatbots, these steps often shrink to asking questions and trying out proposed solutions, leapfrogging the reasoning we used to do ourselves. As a result, the mathematical models that fuel AI chatbots came to be perceived not only as sources of information but also as sources of reasoning we increasingly rely on.

This creates a new relationship with these systems, and a new dependency. It takes the right lenses to see and recognize how this leapfrogging of reasoning affects us.

Artificial knowers

In my upcoming book, "Re-Becoming," I trace the chain of events that led us to an artificial knower (instead of a human one) and explore key ideas about the benefits and pitfalls that come with it.

The bond between thinking humans and (seemingly) thinking systems has been explored in the past, yet these intellectual contributions are often discarded as irrelevant bits of philosophy, sociology, or history. In my book, I collect insights from various fields and dust them off to see how they apply to our current situation. Without a spoiler alert, they do. Above all, they can be pivotal in building our awareness of the systems we interact with today. 

See what's included in the book

Check the Table of Contents

What past thinkers shared over the decades often comes in outdated language. In modern computer science, for example, we speak of chunking words into tokens and using embeddings to hang them in the multidimensional tapestry of mathematical models. For thinkers of the previous century, tokens and embeddings were "point-signs." The mathematical models were "intersections and fissures of flows and breaks along vectors of effectivity," and so on. Yet that peculiar language shouldn't hide the wisdom buried behind the odd words.
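For readers less familiar with the computer-science side of that vocabulary, here is a deliberately toy sketch of what "tokens" and "embeddings" refer to. Nothing here reflects any real model's tokenizer or learned weights; the vocabulary, the word-level splitting, and the formula deriving each vector are all illustrative stand-ins.

```python
# Toy illustration: text is chunked into tokens, each token gets an
# integer id, and each id maps to a vector (an "embedding") that
# positions it in a shared geometric space. Real systems use subword
# tokenizers and *learned* embedding tables; this is only a sketch.

vocab = {}  # token string -> integer id, grown as text is seen

def tokenize(text):
    """Split text into word-level tokens and assign each a stable id."""
    ids = []
    for word in text.lower().split():
        if word not in vocab:
            vocab[word] = len(vocab)
        ids.append(vocab[word])
    return ids

def embed(token_id, dim=4):
    """Deterministic stand-in for an embedding table: derive a small
    vector from the token id. In a real model these values are learned,
    not computed from a formula."""
    return [((token_id * 31 + i * 7) % 10) / 10.0 for i in range(dim)]

ids = tokenize("words become tokens become vectors")
vectors = [embed(i) for i in ids]
```

The point of the sketch is only the mechanics: repeated words map to the same id, and every id lands somewhere in the same vector space, which is the "tapestry" the modern jargon gestures at.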

Their insights are still very helpful in coping with the far-reaching conclusions of modern computer scientists, such as when Ilya Sutskever (OpenAI co-founder) speaks of revealing fundamental reality with the help of word tokens.

As brilliant as these models are, and as talented as the people behind them may be, it's easy to forget the imposed limits and wander off into imagining unlimited opportunities.

Luckily, bouncing off the borders of such pursuit is nothing new. Some thinkers checked how far one can go with words long before the Internet became stuffed with them. I felt compelled to dust off their lessons and apply them to our modern discussion on the limits of using tokens and embeddings. 

The main goal of the book is to help you stay intellectually grounded while diving into topics of technology. There is a lot of high-value intellectual material out there. I sought sources buried in languages and formats that didn’t even make it to the digital world. It took me years to collect these contributions, refurbish them, and use them in the storyline of Re-Becoming. I hope the result will be eye-opening, revealing the human propensity to exceed the conceivable. 

Prior to the release…

sign up for the free Chapter One

…and get notified when the book is ready
Read Chapter One