Preface to the English Edition
The importance of translating this book into English cannot be overstated, particularly in today’s intellectual landscape. As the Argentine philosopher Julio Cabrera once argued, “Philosophy outside the English language today simply does not exist.” This observation highlights the undeniable role English plays in the global dissemination of philosophical thought. To ensure that the essence of the original work is preserved, I have relied on the translations of quotations from books as I encountered them in Russian. This means that while the words may differ from their original counterparts, the essence of what I understood from these texts remains intact. Consequently, readers may sometimes find that certain quotes do not correspond directly to the primary sources.
This book is a tribute to the philosophy of Peter Zapffe, and its translation into English was an endeavor I took on immediately after its release in Russian. I hope that my thoughts will resonate not only with a Russian-speaking audience but also with English-speaking readers, and that this translation conveys the essence of the original text while making it accessible to a broader audience.
I would also like to note that I was the sole editor and publisher of this book, and I did the translation myself. Because of this, I apologize in advance if you come across any errors in the text. I trust that the essence of the ideas presented will come through, despite any imperfections in the process.
Introduction
If you enjoy stories with happy endings, you would be better off choosing another book. This one will not bring you comfort on dark days when you feel down; there will be no joy here. I could suggest that you run to the bookshelf for a story about a “happy elf,” for example. But if you are not simply seeking peace at any cost and instead are looking for a broader perspective on the reasons behind your anxieties and fears, then this book is for you.
To begin, I would like to introduce myself to the reader. Though I am an economist by education, I have never considered myself a philosopher or a scientist. However, from an early age, I have been irresistibly drawn to the unresolved questions of existence. In my search for answers, I turned to religion, philosophy, and science. The existential crises I encountered led me to reflect on the meaning of life, the nature of death, and whether our existence has a sacred purpose. Over the years, however, I have come to realize that these questions remain unanswered.
Religious doctrines and many philosophical movements, such as existentialism, sometimes seemed to me overly optimistic in their view of the world. On the contrary, pessimists appeared to be much more honest realists. The result of these reflections was my immersion in the works of pessimistic philosophers and nihilists. Today, I am known in certain circles as a translator of the philosophical works of Peter Zapffe, including his On the Tragic, as well as articles dedicated to his legacy. Additionally, I have worked on translations of works by thinkers such as Emil Cioran and David Benatar.
My interest in their philosophy was driven by a sense of incompleteness. After reading nearly all the literature available in Russian, I could not shake the feeling that pessimism, however true it seemed, still left too many questions unanswered. These thoughts were reinforced when I became acquainted with the works of Thomas Ligotti. His work The Conspiracy Against the Human Race struck me as a logical continuation of Schopenhauer’s ideas. It was through Ligotti that I discovered Peter Zapffe.
However, it soon became clear that almost nothing was known about Zapffe’s philosophy in Russian, and only one short essay, The Last Messiah, had been translated from his works. The situation is only slightly better in the English-speaking world: Zapffe’s main work, On the Tragic, was translated from Norwegian only in recent years. When its English translation was published in 2024, I realized that waiting for a Russian edition was likely pointless. Inspired by the example of Ligotti, whose book is still available in Russian only in an amateur translation, I decided to begin my own work.
My translation of On the Tragic into Russian was completed in December 2024 and is distributed for free online. This book completely changed my perspective. I realized that the very sense of incompleteness that had haunted me in all existential philosophies stemmed from their limitations, from the boundaries they set for themselves — boundaries that Zapffe did not impose on himself.
During the translation process, I realized that the development of pessimistic ideas requires going beyond this worldview. Thus, my own book was born — not as a continuation of pessimistic philosophy, but as its opposition. It is an attempt to overcome the limitations of existential pessimism and nihilism by offering an alternative approach that can lead to a constructive understanding of life.
The goal of this book is to explore the nature of existential fears that limit our ability to predict, understand, and adapt to the complexities of reality. These fears are both biological and cognitive in nature. They not only define the boundaries of human experience but also give rise to profound emotions related to uncertainty, finitude, and meaninglessness.
A special focus is given to the concept of the limit of human forecasting — the point beyond which the mind is unable to integrate new knowledge into familiar models. Through this lens, key philosophical concepts, neurobiological mechanisms, and social strategies are analyzed, all of which help humans adapt to inevitable limitations. The acceleration of scientific and technological progress creates increasingly complex systems that are difficult to predict, and fundamental questions such as the finitude of life, the meaning of death, and the search for purpose remain central to human existence, despite scientific advancements. In the face of global crises — from environmental disasters to the threats of artificial intelligence — understanding our cognitive and philosophical barriers becomes vital.
This work is also an attempt to introduce Russian readers to the philosophy of the tragic by Peter Zapffe, a Norwegian thinker and environmental advocate. His ideas are often misinterpreted as expressions of pessimism or nihilism, though Zapffe himself never subscribed to these positions. Analyzing his philosophy allows for a fresh perspective on questions related to the limitations of human existence and offers approaches to their understanding.
The book explores how the familiar world around us emerged from chaos, and how we as humans, together with our ability to comprehend reality, came to be. It analyzes the mechanisms through which people avoid or struggle with reality. Finally, the work examines the challenges of the future, including the role of transhumanism, artificial intelligence, and scientific hypotheses that challenge human understanding.
This book is aimed at all those interested in philosophy, cognitive sciences, and the questions of human existence. It will serve as a guide in exploring complex issues, allowing for a deeper understanding not only of the limits of the mind but also of the ways to comprehend and overcome them.
Chapter 1. Blind Complication
This chapter will discuss the fundamental principles from which the history of the complexity of matter begins. We will explore how complex structures emerged from the primary forms of matter, leading to the rise of life, consciousness, and awareness. This chapter is dedicated to the origins of everything that exists and their role in shaping the complex world we observe today.
This narrative was necessary because all the topics discussed later began with the emergence of the first form of matter. Everything that followed was simply its complication, the result of natural development. Without understanding this, it will be difficult to fully grasp the philosophical and existential questions addressed in this book.
If you are already familiar with this story, or for some reason are not interested in it, you can proceed directly to the fourth section of the first chapter — Existential Limits of Forecasting.
For many centuries, humanity has sought to understand the origin of the world and life. Early concepts often explained everything that exists as the result of the design of a higher power. In ancient times, philosophers such as Plato and Aristotle sought order and purpose in nature, suggesting that the world was structured for some rational reason. The Middle Ages brought with it ideas of divine creation, where life and the entire universe were seen as the result of God’s creative act.
However, with the advancement of science in the modern era, these views began to be challenged. In the 19th century, Charles Darwin proposed his theory of evolution through natural selection, which overturned previous conceptions of the world and life. Darwin demonstrated that the diversity of life forms is not the result of any specific design, but rather a consequence of random variation and selection, which ensures the survival of the best-adapted individuals. Evolution, as he argued, has no ultimate goal and does not move toward perfection; it is a continuous process of change, where each generation adapts to changing conditions.
However, despite scientific explanations, many continued to search for purpose and meaning in the process of evolution. Science, armed with Occam’s razor, eliminated from the equation not only the idea of a divine design but also the very concept of a final goal. Evolutionary biologist Richard Dawkins, further developing this approach, uses the metaphor of the “blind watchmaker” to explain that evolution is not a purposeful process, but rather a random and unconscious mechanism that has no preordained goal or design, yet still results in complex and organized outcomes. He wrote:
Evolution has no long-term goal. There is no long-distance target, no final perfection to serve as a criterion for selection, although human vanity cherishes the absurd notion that our species is the final goal of evolution. In real life, the criterion for selection is always short-term — simple survival; or more strictly speaking, reproductive success. What, after geological epochs, appears retrospectively as a movement toward some distant goal is, in reality, always a byproduct of many generations of short-term selection. Our “watchmaker” — the accumulating natural selection — is blind to the future and has no long-term goals.
This is what we will discuss next.
1. The Emergence of the Complex World
1.1 Self-organization and the Absence of Purpose
The modern scientific understanding of the structure of the Universe rejects the idea of purposefulness or an initial design. Instead, the world as we know it is the result of self-organization and gradual complexity arising within the framework of physical laws. These processes were not caused by an external goal, but developed through the interactions of numerous elements over vast timescales.
Fundamental discoveries in physics and cosmology have shown that the Universe emerged as a result of the Big Bang around 13.8 billion years ago. The concept of the Big Bang was first proposed by Belgian scientist Georges Lemaître in 1927 and was confirmed in 1965 when Arno Penzias and Robert Wilson discovered cosmic microwave background radiation.
In the early stages of the Universe’s existence, matter and energy were distributed nearly homogeneously, with only small chaotic irregularities. Over time, as a result of density fluctuations and the action of gravity, the first structures began to form: clusters of gas, stars, and galaxies. These processes were a natural consequence of physical laws, such as thermodynamics and gravity, rather than the result of any design.
1.2 The Role of Entropy and the Complication of Systems
A key concept explaining the increasing complexity of the Universe is entropy. According to the second law of thermodynamics, formulated in the 1850s by Rudolf Clausius, entropy (a measure of disorder) tends to increase in isolated systems. However, this does not mean that order is impossible. Organized structures can emerge locally, as long as their formation is accompanied by an increase in entropy in the surrounding environment. For example, the formation of stars and planets is accompanied by the release of energy and an increase in entropy in the surrounding space.
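This balance can be written compactly in standard thermodynamic notation (added here for illustration). For an isolated system the second law requires the total entropy never to decrease, so a local drop in entropy is permitted only when it is outweighed elsewhere:

\[
\Delta S_{\text{total}} = \Delta S_{\text{local}} + \Delta S_{\text{environment}} \ge 0,
\qquad
\Delta S_{\text{local}} < 0 \;\Rightarrow\; \Delta S_{\text{environment}} \ge \lvert \Delta S_{\text{local}} \rvert .
\]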
Thus, complex systems arise as a byproduct of the Universe’s tendency toward a state of equilibrium and maximum disorder. From simple interactions and processes of self-organization, more complex structures and patterns gradually emerge.
1.3 Chaos and Nonlinear Dynamic Systems
Further understanding of the emergence of complexity is tied to the study of nonlinear dynamic systems and chaos theory. In 1963, American mathematician and meteorologist Edward Lorenz discovered that small changes in initial conditions could lead to significant and unpredictable consequences (the butterfly effect). This explains how, from simple physical laws, extremely complex phenomena could arise, such as climate systems, galactic structures, and ultimately, chemical processes leading to life.
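The sensitivity Lorenz discovered is easy to reproduce numerically. The sketch below is my own illustration, not Lorenz’s original program: it uses crude Euler integration with the classic parameter values and evolves two copies of his system whose starting points differ by one part in a billion. Within a few thousand steps the trajectories bear no resemblance to each other.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz (1963) system."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])  # perturb one coordinate by a billionth

for step in range(5000):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 1000 == 999:
        print(f"step {step + 1}: separation = {np.linalg.norm(a - b):.6f}")
# The separation grows roughly exponentially until it saturates at the size
# of the attractor: the butterfly effect in miniature.
```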
Chaotic systems, despite their apparent unpredictability, follow certain rules and can demonstrate self-organizing patterns. Examples include snowflakes, lightning, fractals, and turbulent flows. These processes show that complexity can arise spontaneously, without external control or purpose.
1.4 The Universe as a Chemical Complication
After the formation of the first stars, the process of synthesizing heavier elements from hydrogen and helium began. As a result of nuclear fusion reactions within the stars, elements necessary for the emergence of life — such as carbon, oxygen, nitrogen, and others — were created. This process, known as stellar nucleosynthesis, was explained in the mid-20th century by Fred Hoyle and his colleagues.
When massive stars exploded as supernovae, these elements were scattered across the Universe, becoming the building blocks for new stars, planets, and, ultimately, living organisms.
Thus, the complexity of the Universe unfolded in several stages:
— Physical complication — the formation of galaxies, stars, and planets from primordial gas.
— Chemical complication — the synthesis of more complex chemical elements and compounds.
— Structural complication — the formation of complex molecules and, ultimately, conditions for the emergence of life.
These stages were not directed toward a specific goal but created the conditions for further processes, including biological evolution.
1.5 Conclusion
The emergence of the complex world is a story of self-organization based on physical laws. From chaotic and simple states, through billions of years of interactions and increasing entropy, the Universe emerged, rich in a diversity of structures and processes. This laid the foundation for the next stage — the emergence of life.
2. The Emergence of Life
2.1 Spontaneous Origin of Life and the Absence of Purpose
Modern science asserts that life originated as a result of natural chemical processes, rather than through purposeful action or a higher design. Approximately 3.5 to 4 billion years ago, the first signs of life appeared on Earth, and the process that led to this is known as abiogenesis — the spontaneous emergence of living systems from non-living matter.
The “primordial soup” hypothesis, proposed by Alexander Oparin and John Haldane, became the foundation for studying the conditions of early Earth that could have facilitated the emergence of organic molecules. The Miller-Urey experiment (1953) demonstrated that when electric discharges were applied to a mixture of gases containing ammonia, methane, and hydrogen, amino acids, which are the building blocks of proteins, were formed.
These chemical reactions were not directed toward achieving any specific goal but occurred as a result of molecular interactions, governed by natural physical laws. Gradually, from these simple molecules, more complex structures began to form, such as RNA, capable of self-replication. This led to the “RNA world” hypothesis, proposed by Carl Woese and Leslie Orgel in the 1960s, which suggests that the first molecules of life could have been RNA, capable of self-reproduction without the involvement of proteins. RNA can serve both as a catalyst for chemical reactions and as a carrier of information, providing a basis for considering it the first step toward complex biological life.
The spontaneous origin of life and the absence of an external goal in this process support the idea that the evolution of life is a random process, not aimed at a specific goal, but driven by the natural laws of chemistry and physics.
2.2 The Emergence of the First Cells and Evolution
The process of the origin of life continued with the formation of the first cells — primitive organismal structures surrounded by a membrane. These cells could facilitate the exchange of substances and protect chemical reactions within themselves from the external environment. In this way, evolution began its course. The formation of cells marked the beginning of living organisms capable of metabolism, reproduction, and interaction with their surroundings.
In 1859, Charles Darwin, in his work On the Origin of Species, proposed the theory of natural selection. Darwin argued that organisms better adapted to their environment are more likely to survive and pass their genes on to the next generation. This process occurs without any purposeful intent or predestination; rather, it is the result of random variations leading to increased adaptation to a specific environment.
Evolution is a process of change and adaptation without a final goal or predetermined endpoint. It is a mechanism driven by random mutations, which lead to changes in populations of organisms, with death acting as the process of removing less adapted individuals. In this context, death is not the end of life but an inevitable part of it, necessary for more adapted organisms to continue their existence. Death, thus, plays a crucial role in maintaining the balance and progress of species, ensuring the “cleansing” of less adapted genes.
2.3 The Discovery of the DNA Structure and Genes as Units of Inheritance
The discovery of the structure of DNA in 1953 by James Watson and Francis Crick, based on X-ray crystallography data, marked a significant turning point in biology. DNA was shown to be the molecule that encodes genetic information passed down from generation to generation. Genes became the fundamental units of heredity, containing the instructions for synthesizing proteins that play a crucial role in the functioning of an organism.
Genetics further revealed how mutations occur, with random changes in genes leading to alterations in organisms. These mutations can be beneficial, neutral, or harmful, and depending on their impact on the organism’s survival, they can be passed on to the next generation. The process of gene expression and their regulation through epigenetic mechanisms (such as DNA methylation) adds additional layers to our understanding of how organisms adapt to their environment. This intricate interplay of genetic and epigenetic factors shapes the evolutionary trajectory of life.
The significance of mutations and their impact on organisms is revealed through the concepts of “negative selection,” which eliminates organisms with harmful mutations, and “positive selection,” which favors those better adapted. The inclusion of epigenetics in the modern understanding of evolution allows for a fuller appreciation of how the external environment can influence genetic changes and species adaptation.
2.4 Theory of Multilevel Selection and Modern Understanding of Evolution
The debate over the levels at which selection acts, shaped by scientists such as William Hamilton and Richard Dawkins, significantly expands our understanding of evolution. In his famous book The Selfish Gene (1976), Dawkins suggested that the primary units of evolution are not organisms, but genes, which strive for self-replication and spread. From his perspective, the organism is merely a vessel for genes, and evolution is essentially not about the survival of individuals but about the preservation and dissemination of genetic information passed down through generations.
According to this theory, evolution does not view the organism as an independent goal, but rather as a means for transmitting genes to the next generations. This leads to the concept of the “selfish gene,” where each gene acts as a kind of “instrument” concerned with its own preservation within the population. Thus, evolution operates at the level of genes rather than individual organisms.
An important aspect of the development of this theory is the concept of multilevel selection. Selection can occur not only at the level of individual organisms but also at the level of genes, groups, and even species. In this context, evolution can be seen as a process in which not only the most adapted individuals are selected, but also genetic combinations that increase the chances of survival of populations or groups.
One example illustrating multilevel selection is the “green beard effect,” a concept proposed by Richard Dawkins. Imagine that some animals within a population randomly develop a unique, visible trait — a green beard. On their own, individuals with this symbolic trait may have no obvious survival advantage. But if such individuals recognize one another and form a group, the shared trait can promote cooperation and mutual support within that group, increasing the chances of survival of its members. A trait that is neutral or even disadvantageous at the individual level can thus be preserved and spread through group selection, because it fosters the cooperation and social interaction that benefit the community as a whole.
Dawkins’ theory also considers the importance of altruism in evolution. He argues that individuals who act in the interest of the group can contribute to the preservation of their genes, even if their behavior does not bring them direct benefit. An individual may help the survival of others, such as relatives or group members, at the cost of their own risks. In this context, if an individual with a green beard helps other members of their group survive, their actions could improve the overall success of the entire group, and these traits would be maintained and strengthened at the group level.
Considering evolution as a process that occurs on multiple levels allows us to include not only organisms but also broader evolutionary units such as populations, ecosystems, and even species. For example, within multicellular organisms or communities of organisms with similar traits (such as behavior or physical characteristics), there is a likelihood that these traits will be maintained through altruistic behavior that promotes the overall success of the group. However, such behavior is important not only for the survival of individual organisms but also for the propagation of their genes at the population level.
One vivid example of such a phenomenon is symbiosis — a close, mutually beneficial coexistence of different species. When two or more species cooperate with each other, their chances of survival increase, and their traits can be supported and strengthened through evolutionary mechanisms. In this way, traits like the green beard can, over the long term, spread not only at the level of individual organisms but also within more complex biological systems, contributing to the overall survival of the group.
Today, it is believed that selection occurs on several levels:
Genetic level: Selection occurs at the level of individual genes. Genes that promote the successful survival and reproduction of their carriers become established in the population, passed down to future generations. This selection focuses on how specific genetic variations can increase their frequency in the population through their impact on the organism or on their copies in other organisms.
Individual level: Selection acts at the level of organisms. Individuals with traits that increase their chances of survival and successful reproduction are able to pass their genes to the next generation. This leads to the spread of beneficial adaptations within the population and the establishment of traits that enhance individual fitness.
Kin selection: Selection occurs through helping close relatives who share similar genes. Altruistic behavior toward kin can increase the chances of spreading common genes, even if it reduces individual survival chances. This type of selection explains the emergence of cooperative behavior in family groups and colonies (a logic captured by Hamilton’s rule, sketched just after this list).
Group level: Selection occurs at the level of groups of organisms. Groups in which members cooperate and support each other may have an advantage over groups where selfish behavior predominates. Competition between such groups may lead to the selection of cooperative strategies that enhance the success of the group as a whole.
Ecosystem or symbiotic community level: Selection may occur at the level of entire ecosystems or communities made up of interconnected species. In such systems, stable interactions, such as symbiosis, cooperation, and mutual support, can contribute to the successful existence of all members of the community. If an ecosystem or symbiotic community successfully adapts to changes in the environment and maintains its stability, it can contribute to the survival and spread of all the species involved. Although this level of selection is debated, examples of coevolution show that complex communities can form through cooperative and mutually beneficial relationships between different organisms.
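The kin-selection level above has a standard compact formalization, Hamilton’s rule, which I add here for illustration. An allele for altruism can spread whenever

\[
r \, b > c,
\]

where \(c\) is the reproductive cost to the altruist, \(b\) is the benefit to the recipient, and \(r\) is the genetic relatedness between them. Helping a sibling (\(r = 1/2\)) pays off, in gene-level accounting, whenever the benefit is more than twice the cost.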
Modern research supports the ideas of multilevel selection, showing how cooperation at the group and community levels can contribute to evolutionary success.
2.5 The Role of Randomness and Directionality in Evolution
It is important to note that evolution, as a process, largely depends on random mutations, which can either benefit or harm an organism. However, the presence of directionality in evolution is not entirely excluded. With each generation, species become more adapted to their environment, but this does not occur through predefined goals or projects. Instead, it is the result of interactions between random changes and prevailing ecological and social factors.
Evolution does not have a predetermined goal or final destination. An important point is that it is not aimed at creating perfect beings but simply at adapting organisms to the specific conditions in which they exist. In this sense, evolution is not so much a process of development as it is one of endless adaptations and changes.
2.6 Conclusion
Thus, evolution has no predetermined goal or inherent meaning. Life and death are part of a continuous cycle of changes and adaptations that ensure the survival of species best suited to their environment. Death, as part of this process, does not imply an afterlife; rather, it is necessary for more adapted organisms to continue their existence. Evolution is a sequence of random processes that have ultimately led to the emergence of modern species, including humans. We exist as we are solely because all the other variations did not survive; we simply do not see them. All life on Earth, from microorganisms to humans, is the result of undirected natural processes that, over billions of years, have shaped living beings capable of reproduction and adaptation.
3. The Emergence of Intelligence
Intelligence is one of the most complex achievements of evolution, becoming a key factor in the success of many species, especially humans. In this section, we will explore how evolution led to the emergence of intelligence, examine differences in cognitive development between mammals and cephalopods, and analyze how the brain utilizes predictive coding and Bayesian approaches to process information.
The Emergence of Intelligence: Evolutionary Preconditions
The evolution of intelligence is a gradual process involving the development of increasingly complex cognitive abilities such as learning, memory, prediction, and self-reflection. Intelligence did not arise suddenly; its emergence was the result of millions of years of adaptation to changing environmental conditions.
The most significant steps toward intelligence include:
Development of sensory systems and memory. Organisms began accumulating information about their environment and using it for survival.
Emergence of associative learning. The ability to link stimuli with responses helped in predicting dangers and opportunities.
Development of spatial reasoning. Animals started forming mental representations of their surroundings and planning their actions.
Social interaction. Group interactions facilitated the development of communication and more complex behavioral strategies.
Over time, these elements evolved into advanced cognitive systems capable of abstract thinking, self-awareness, and future planning.
Differences in the Evolution of Intelligence in Mammals and Cephalopods
An intriguing example of the evolution of intelligence can be seen in mammals and cephalopods (such as octopuses) — two distinct evolutionary paths leading to advanced cognition.
Mammals, including humans, developed intelligence in a social context, where cooperation and group living played a crucial role. Their cognitive abilities evolved to solve problems related to cooperation, competition, and social communication. This led to the emergence of complex social hierarchies, empathy, theory of mind (understanding the thoughts and intentions of others), language, and abstract thinking. The mammalian brain features a large cerebral cortex, particularly the frontal lobes, responsible for planning, self-control, and decision-making.
Cephalopods, on the other hand, evolved intelligence in a solitary existence, requiring adaptation to diverse oceanic environments. Their cognitive abilities focus on solving spatial problems, camouflage, tactical behavior, and independent control of limbs. A unique feature of cephalopod brains is that about two-thirds of their neurons are located in their tentacles, allowing their limbs to act autonomously.
These two examples demonstrate that intelligence can evolve through different pathways, adapting to specific survival challenges.
As we continue exploring the evolution of intelligence, understanding how the brain functions and how it has developed over time remains essential.
The Principle of Brain Functioning
The brain consists of billions of neurons that process information and coordinate the organism’s actions. These neurons communicate with each other through chemical substances called neurotransmitters. When a neuron is activated, it transmits an electrical impulse that reaches the synapse — the contact point with another neuron. At this point, the electrical signal is converted into a chemical one, as neurotransmitters are released into the synaptic cleft and activate receptors on the next neuron.
Key neurotransmitters such as dopamine, serotonin, and glutamate regulate essential aspects of behavior and perception. For example, dopamine is associated with motivation and the reward system, while serotonin influences mood and anxiety levels. Glutamate serves as the primary excitatory neurotransmitter, playing a crucial role in learning and memory processes.
The Influence of Hormones on Brain Function
Hormones play a crucial role in regulating behavior and physiological states. For example, cortisol, the stress hormone, is produced in response to threats and helps the body cope with emergency situations. However, if its levels remain elevated for prolonged periods, it can lead to chronic stress, depression, and impaired cognitive function. Oxytocin, on the other hand, promotes the strengthening of social bonds and empathy, which are essential for complex forms of communication and interaction.
The influence of hormones on the brain is regulated through the hypothalamus, which controls the pituitary gland and, in turn, interacts with the endocrine system. This integration ensures the coordination of cognitive and physiological processes.
The Microbiota and Its Influence on the Brain
The microbiota, or the collective of microorganisms inhabiting our body, also plays a crucial role in brain function. In recent decades, it has become clear that microbes, especially those living in the gut, influence behavior, emotions, and cognitive processes. This interaction between the brain and microbes is known as the microbiome-gut-brain axis.
Some microbes can affect the levels of neurotransmitters, such as serotonin, which is produced in the gut, and influence inflammatory processes that, in turn, may impact the functioning of the nervous system. For example, disruptions in the balance of the microbiota are associated with the development of depression, anxiety disorders, and even neurodegenerative diseases such as Alzheimer’s disease.
Evolution and Development of These Systems
Over time, through the process of evolution, these systems became increasingly complex in various animal species, including humans, and adapted to the surrounding environment. In the human brain, several levels of development can be distinguished: from ancient structures found in our ancestors, including reptiles, to more complex and specialized regions, such as the neocortex, responsible for abstract thinking, planning, and self-awareness.
In reptiles and in our own early ancestors, including early mammals, a part of the brain was responsible for basic survival functions, such as instincts, aggression, and sexual behavior. As evolution progressed and more complex cognitive functions developed, new structures were added to this ancient brain: the limbic system, which is responsible for emotions, and the neocortex, which developed in mammals and enables more complex cognitive tasks like abstraction, planning, and self-reflection.
These changes led to the creation of brain structures that process information not only based on current events but also in anticipation of future states, allowing adaptation to the changing conditions of the environment. Brain evolution not only improved survival mechanisms but also created conditions for more complex forms of behavior, such as social interactions, empathy, and language.
Brain Development in Octopuses
The brain of octopuses has a remarkable structure and functional features that distinguish it from the brains of mammals. While octopuses do not possess the same complex brain system as mammals, they demonstrate a high level of cognitive abilities such as learning, tool use, problem-solving, and even signs of personality.
The octopus brain is divided into several parts, with the majority of its mass concentrated in the head. However, two-thirds of its neurons are located in the arms. This unique structure allows each arm to operate relatively independently and make its own decisions. This trait provides octopuses with exceptional flexibility in interacting with their environment and adapting to changing conditions.
Differences in Brain Function Between Octopuses and Humans
Mammals, including humans, developed complex social structures, which contributed to the evolution of a more centrally organized brain. As mammals, we have a highly developed cerebral cortex (especially the frontal lobes), which is responsible for functions such as planning, self-control, and abstract thinking. Our brain is also closely connected to the hypothalamus and the endocrine system, which allows hormones like cortisol and oxytocin to regulate behavior in response to external and internal stimuli.
In contrast, the octopus brain, while also highly developed, functions somewhat differently. The concentration of neurons in their arms allows octopuses to make decisions at a local level without needing to send signals to the central brain. This provides them with remarkable autonomy and the ability to adapt to a variety of situations. For example, octopuses can solve problems related to spatial perception and object manipulation, not only thanks to their central brain but also through their body, which is a unique feature.
In both cases — in mammals and octopuses — the brain serves as an adaptive organ that processes information about the external world and makes decisions based on the organism’s current needs. However, while mammals developed a central brain to coordinate actions and social interactions, octopuses use local brain structures to maintain a high degree of independence for their body parts. This difference reflects distinct evolutionary survival strategies, where mammals rely on collective behavior and complex social interactions, while octopuses depend on individual decision-making and flexibility in manipulating their environment.
The Bayesian Approach to the Mind: The Free Energy Principle and Predictive Coding Theory
Predictive Coding and its foundations, related to Bayesian approaches, play a central role in contemporary understanding of how the brain perceives and processes information. Unlike traditional views of perception, where the brain simply reacts to sensory data, the theory of predictive coding argues that the brain actively constructs models of the world and uses them to predict future events. These predictions are then compared with the actual sensory information received through the senses. Prediction error — the difference between what the brain expects and what it actually perceives — serves as a signal for updating the mental model. This process allows the brain to minimize energy costs, accelerating perception and increasing adaptability, which forms the basis for the effective functioning of cognitive processes.
In recent decades, the theory of predictive coding has increasingly been seen as part of the broader Free Energy Principle, which links it with Bayesian inference, Active Inference, and other approaches focused on minimizing uncertainty and adapting to environmental changes. However, despite the growing interest in this integrative approach, predictive coding itself remains a fundamental concept for understanding how the brain constructs models of the world and updates them based on new data. This work will focus primarily on predictive coding, its neurobiological mechanisms, and its role in cognitive processes.
The historical roots of the theory of predictive coding trace back to the works of Pierre-Simon Laplace, who laid the foundation for the concept of determinism. Laplace, one of the first to consider ideas of probability and determinism in the context of predicting the future, proposed that if one had complete knowledge of the current state of the universe, the future could be predicted with absolute certainty. His hypothesis of “Laplace’s demon,” an intellect that could predict the future with perfect accuracy, rested on the idea that if we knew all the parameters of microstates, including the position and velocity of every particle, all events — including human thoughts and actions — could be predicted.
This idea of an all-knowing observer and the ability to predict future events based on complete knowledge of present conditions provided an early conceptual foundation for understanding how the brain processes information and makes predictions about the future. Predictive coding and the free energy principle are modern extensions of this concept, where the brain continually updates its internal models of the world to minimize prediction errors and uncertainty.
However, the concept of prediction and world modeling began to develop much later. In the 19th century, Laplace’s strict determinism started to be questioned as scientists such as Carl Friedrich Gauss and others developed probabilistic methods. Ideas related to probabilistic calculation and uncertainty gained popularity with the development of statistics and thermodynamics.
The shift toward probabilistic thinking marked a key turning point in the evolution of predictive models. It became increasingly clear that the world is not fully deterministic and that knowledge of the present state is often insufficient to predict the future with absolute certainty. This uncertainty was formally recognized in statistical mechanics, which introduced the concept of entropy — a measure of disorder or uncertainty in a system. As a result, the idea that the brain might work with probabilities, updating predictions based on new information, became more plausible and relevant in the context of cognitive neuroscience.
In the 20th century, the works of Klaus Heisler, Richard Feynman, and Jan Frenkel represented a significant step toward understanding how predictions can operate in conditions of uncertainty and how the brain can construct hypotheses in the context of probability and imperfection. These scientists proposed mathematical approaches that ultimately laid the foundation for the theory of predictive coding in neurobiology.
Equally important contributions to the development of the idea of prediction and coding theory came from researchers in the field of neuroscience in the mid-20th century, such as Benjamin Libet and Nobel laureate Roger Sperry. For example, Libet conducted experiments demonstrating that the brain starts the decision-making process a fraction of a second before a person becomes consciously aware of their choice, challenging the idea of full conscious control over behavior.
However, theories similar to predictive coding began to actively develop only in the late 20th and early 21st centuries. A key role in this was played by research into neuroplasticity and the brain’s adaptive mechanisms. Neurobiological studies, including investigations of neurotransmitters such as dopamine and the influence of neural networks, allowed for significant insights into how the brain uses prediction and models to perceive the surrounding world. Founders of predictive coding theory, such as Karl Friedrich von Weizsäcker and Gregory Hooper, proposed that the brain is constantly forming hypotheses about the future based on past experience and correlating them with incoming sensory information.
Bayes’ theorem, proposed by the English mathematician Thomas Bayes in the 18th century, became an important mathematical tool for analyzing and updating probabilistic hypotheses in light of new data.
The essence of the theorem is that it allows for recalculating the probability of a hypothesis based on new data. Bayes’ theorem describes how the belief (or probability) in a hypothesis is updated in response to new information. In the context of the brain, this theorem can be used to explain how neural networks update their predictions about the future, considering both old and new experiences.
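In its standard form (added here for reference), the theorem states that for a hypothesis \(H\) and new data \(D\):

\[
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)},
\]

where \(P(H)\) is the prior probability of the hypothesis, \(P(D \mid H)\) is the likelihood of the data given the hypothesis, and \(P(H \mid D)\) is the updated (posterior) probability once the data are taken into account.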
In the context of predictive coding theory, this theorem and formula illustrate how the brain updates its hypotheses (or predictions) about the world based on new sensory data. When the brain encounters new events (data), it revises its prior probability (predictions) to incorporate these data, which helps improve the accuracy of future predictions.
Thus, this process reflects a key feature of predictive coding: the brain does not simply react to data, but actively revises its expectations based on new inputs, always striving to minimize prediction errors.
The application of Bayes’ theorem to neurobiology and cognitive science became possible in the 1980s, when scientists began to understand how the brain could use probabilistic methods to solve problems of uncertainty. In this paradigm, the brain is seen as a kind of Bayesian interpreter that formulates hypotheses about the world and updates them in response to sensory information using principles of probability. The Bayesian model suggests that the brain maintains probabilistic models of future events and adjusts them based on prediction errors, which is directly connected to the theory of predictive coding.
This updating of probabilistic hypotheses is crucial because it allows the brain not only to adapt to changes in the environment but also to account for uncertainty in the world, even when information is incomplete. In this sense, Bayes’ theorem and its applications have become fundamental to understanding how the brain, when faced with uncertainty, can improve its predictions and forecast the future based on prior knowledge.
Thus, the connection between predictive coding theory and Bayes’ theorem became a key point in the development of neurobiological models explaining how the brain processes information and uses probabilistic computations to predict the future. Bayes’ theory, as the foundation for handling uncertainty and adaptation, provided an important mathematical and cognitive tool for understanding how the brain functions in the context of constant uncertainty and the ever-changing world.
Predictive Coding as an Adaptive Mechanism
The principle behind the theory of predictive coding is that the brain does not simply react to external stimuli, but actively predicts them using existing models of the world. The brain constructs hypotheses about what will happen in the future and compares them with current sensory information. If the predictions match reality, the prediction error is minimized, allowing the brain to use its resources efficiently. If an error occurs — when there is a mismatch between the prediction and reality — the brain updates its models of the world, which helps improve perception and adaptation.
This approach allows the brain to save energy and effort by minimizing the need to process all information from scratch. Instead of interpreting data anew each time, the brain works with simplified models that it constantly updates based on new sensory data. This significantly speeds up information processing and reduces energy expenditure. For example, when a person is walking down the street, their brain does not analyze each step individually but simply uses its predictions about what should happen in the next second.
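As a toy illustration of this update cycle (my own sketch, not a model from the neuroscience literature; the learning rate and the input stream are invented), the loop below keeps a running prediction, measures the prediction error against each new observation, and revises the internal model only in proportion to that error:

```python
learning_rate = 0.1   # how strongly an error revises the model (assumed value)
prediction = 0.0      # the brain's current estimate of some sensory quantity

sensory_stream = [1.0, 1.0, 1.2, 0.9, 1.1]  # hypothetical observations

for observed in sensory_stream:
    error = observed - prediction          # prediction error: mismatch with expectation
    prediction += learning_rate * error    # update the world model by a fraction of the error
    print(f"observed={observed:.2f}  error={error:+.2f}  prediction={prediction:.2f}")
# When predictions are close to the input, errors (and thus updates) shrink:
# the model settles and processing becomes cheap, as described above.
```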
Predictive Coding operates at different levels, ranging from simple sensory signals (such as sounds or colors) to complex social interactions and abstract ideas. At lower levels, the brain predicts basic sensory signals, such as shapes and movements, while at higher levels, it predicts more complex phenomena, such as people’s intentions or social interaction scenarios.
The Role of Hormones, Neurotransmitters, and Microbiota in Prediction
The effectiveness of predictive coding mechanisms also depends on various external and internal factors. Hormones, neurotransmitters, gut microbiota, and injuries can significantly influence the brain’s ability to predict and adapt.
Cortisol, the stress hormone, can impair the brain’s ability to adjust its predictions. For example, high levels of cortisol may disrupt the process of updating the world model, leading to persistent perceptual errors and increased anxiety. Neurotransmitters such as dopamine play a key role in reward and motivation processes, as well as in strengthening or weakening certain brain predictions. Recent studies have also shown that gut microbiota can influence cognitive functions and even the brain’s predictive abilities, as microbes interact with the central nervous system, affecting our mood and perception.
Injuries, especially brain injuries, can disrupt the neurobiological processes of prediction, leading to cognitive and emotional disorders. For example, depression and anxiety disorders can be associated with disruptions in the mechanisms of predictive coding, when the brain cannot effectively update its world models.
Modern brain research shows that the mind actively creates and updates models of the world using predictive coding and Bayesian approaches.
Predictive coding is the process by which the brain forms hypotheses about what it expects to perceive and compares these hypotheses with actual sensory information. When there is a mismatch between the brain’s expectations and sensory input (a prediction error), the brain can either update its world model or try to interpret the data through existing hypotheses. If a prediction is held too strongly, the brain may sometimes perceive it as reality, which can lead to hallucinations. For example, under conditions of sensory deprivation, when sensory information is insufficient, the brain’s predictions may dominate, and visual or auditory images may appear to compensate for the lack of real stimuli. In cases of excessive activation of predictions, such as during stress or neurochemical imbalances (such as excess dopamine), the brain may ignore real information and impose its own interpretation. This partially explains the hallucinations observed in schizophrenia.
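The balance described here, between trusting the prediction and trusting the senses, is often framed as precision weighting. The sketch below is my own hedged illustration (the numbers are arbitrary): perception is modeled as a precision-weighted average of the prior prediction and the sensory input, so when sensory precision collapses (deprivation) or prior precision is inflated (as proposed for dopamine imbalances), the prediction dominates what is “perceived.”

```python
def perceive(prediction, sensory_input, prior_precision, sensory_precision):
    """Percept as a precision-weighted blend of expectation and input."""
    total = prior_precision + sensory_precision
    return (prior_precision * prediction + sensory_precision * sensory_input) / total

# Reliable senses: the input (0.0) outweighs the expectation (1.0).
print(perceive(1.0, 0.0, prior_precision=1.0, sensory_precision=4.0))   # -> 0.2
# Deprived or distrusted senses: the expectation dominates, a near-hallucination.
print(perceive(1.0, 0.0, prior_precision=4.0, sensory_precision=0.1))   # -> ~0.98
```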
Levels of Predictive Coding:
Low level (sensory): The brain predicts simple sensory signals (e.g., lines, colors, or sounds). For example, if you hear footsteps, your brain predicts that you will see a person.
Middle level (perceptual): Predictions include more complex structures — images, sounds of words, or objects. For instance, seeing quick movement in the bushes, you predict that it’s an animal.
High level (cognitive): At this level, the brain forms complex hypotheses, including social interactions and abstract ideas. For example, based on someone’s behavior, you might predict their intentions.
Ascending and Descending Signals
The hierarchy of information processing is based on two types of signals:
Descending Predictions (top-down signals): At each level of the brain, predictions are generated about sensory data that are sent to lower levels. For example, if a higher level predicts that a person is seeing a face, lower levels will expect facial features (eyes, nose, mouth).
Ascending Prediction Errors (bottom-up signals): When the actual sensory signal does not match the prediction, an error signal is generated. This signal is sent to higher levels to adjust the model and refine predictions.
How Does the Brain Correct Errors?
This process occurs through cyclic feedback:
Prediction: The higher level generates a prediction and sends it down the hierarchy.
Comparison: At the lower level, this prediction is compared with the actual sensory signal.
Error: If there is a discrepancy, a prediction error is generated.
Model Update: The error is sent back upward, where the model is adjusted to improve future predictions.
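A two-level toy version of this cycle (my illustration; the generative mapping and learning rate are invented, and real cortical hierarchies are far richer) makes the four steps concrete: the higher level sends a prediction down, the lower level compares it with the input, and the resulting error climbs back up to adjust the model.

```python
import numpy as np

rate = 0.05
high_level = 0.5                      # abstract expectation (e.g., "a face is present")

def top_down(high):
    """Descending prediction: what the higher level tells the lower level to expect."""
    return 2.0 * high                 # assumed fixed generative mapping

rng = np.random.default_rng(0)
for observed in rng.normal(1.8, 0.1, size=500):   # noisy sensory stream
    predicted = top_down(high_level)              # 1. prediction sent down
    error = observed - predicted                  # 2-3. comparison yields an error
    high_level += rate * 2.0 * error              # 4. error ascends and updates the model

print(round(high_level, 2))  # ≈ 0.9, since 2.0 * 0.9 matches the mean input of 1.8
```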
When the real sensory information matches the predictions, the brain minimizes the prediction error, which helps conserve resources. However, if the information does not align with expectations, a prediction error occurs, signaling the need to update the world model.
In the brain’s neural layers, there is a division between “prediction neurons,” which form expectations, and “error neurons,” which signal when predictions are not met. For example, in the supragranular layers (the upper layers of the cortex), there are error neurons that activate when something unexpected occurs. In the deeper layers, there are neurons that provide prediction signals.
However, the effectiveness of predictive coding is influenced by various factors, including hormones, neurotransmitters, microbiota, and injuries. Hormones, such as cortisol, produced in response to stress, can alter neuron sensitivity, affecting the brain’s ability to adapt and learn. Neurotransmitters, such as dopamine, play a key role in motivation and reward processes, which can enhance or diminish certain predictions and responses. The gut microbiota, interacting with the central nervous system, can influence mood and cognitive functions, reflecting in the process of prediction. Injuries, especially brain injuries, can disrupt the normal functioning of neural networks responsible for predictive coding, leading to cognitive and emotional disorders.
Errors in the process of predictive coding can occur for various reasons. They may be related to insufficient accuracy of sensory data, incorrect interpretation of information, or failure to update world models. Such errors can lead to distorted perception and impaired adaptive behavior. For example, during chronic stress, elevated cortisol levels can reduce the brain’s ability to adjust predictions, resulting in persistent perceptual errors and increased anxiety.
Thus, predictive coding is the foundation of adaptive behavior and human cognitive functions. Understanding the mechanisms of this process and the factors that influence its efficiency opens new horizons for the development of treatments for various mental and neurological disorders related to disruptions in predictive coding.
Conclusion
The emergence of the mind is the result of a complex evolutionary process that has led to the development of various forms of intelligence in different species. Predictive coding and Bayesian approaches demonstrate how the brain creates models of the world and adapts to new conditions, minimizing prediction errors. These mechanisms form the basis of our perception, learning, and thinking, making the mind a powerful tool for understanding and transforming reality.
4. Existential Limits of Forecasting
Mental models are internal cognitive structures through which we conceptualize and predict the world. These models help us navigate life by creating more or less accurate representations of reality. Like filters through which we perceive the world, they are inevitably simplifications based on experience and expectations, allowing us to interact with the environment more efficiently. Yet, like any tool, they are limited: they cannot always accurately reflect reality, because the world does not always fit into the frameworks we create for it.
These ideas have a long lineage in philosophy, beginning with Plato. In the famous “Allegory of the Cave,” Plato depicts individuals who, sitting in a dark cave, can see only the shadows cast on the wall by objects positioned in front of a fire. These shadows represent a distorted perception of reality, perceived as true because the cave dwellers have never seen the light. Only the one who escapes the cave can see the true reality hidden behind the shadows. Plato’s image symbolizes the limitations of our perception, which reflects only a fragment of the full picture of the world.
Later, Immanuel Kant argued that we perceive the world not as it is “in itself” (Ding an sich), but through the a priori forms of the mind, which help us understand the nature of these limitations. Kant believed that our knowledge of reality will always be constrained by the categories of the mind, such as space, time, and causality, which are imposed upon our experience and do not exist in the world “in itself.” This means that human perception will always be limited by these a priori forms, and we can understand and predict only those aspects of the world that fit within these frameworks.
The idea that our perception of the world is always limited also found a probabilistic expression in the work of Thomas Bayes, whom we discussed earlier. In particular, the example of the sunrise has long been used to explain how our models of the world can be updated based on observations. A person stepping out of a cave for the first time observes the sunrise and wonders: does this happen every day? With each new observation, they update their belief using Bayesian reasoning. With every sunrise, they strengthen their hypothesis that the sun indeed rises every day. However, if one day this prediction proves false, and the sun does not rise or set in its usual place, they will need to adjust their model of the world based on the new data.
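This updating has a classic closed form, Laplace’s rule of succession, which I add here as an illustration (a uniform prior over the sun’s “habit” is assumed): after observing n sunrises in n days, the probability assigned to a sunrise tomorrow is (n + 1) / (n + 2).

```python
def prob_sunrise_tomorrow(n: int) -> float:
    """Rule of succession: posterior probability after n sunrises in n days."""
    return (n + 1) / (n + 2)

for n in [0, 1, 10, 1000]:
    print(n, round(prob_sunrise_tomorrow(n), 4))
# 0 -> 0.5 (total ignorance), 1000 -> 0.999: each sunrise strengthens the
# hypothesis, yet the probability never reaches 1, so the model always
# remains open to revision by a morning when the sun fails to rise.
```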
Thus, in the Bayesian approach, we observe a process of continuous updating of our mental models based on new observations, which also echoes Plato’s idea of searching for true reality beyond distorted perceptions. Bayes emphasizes that perception and prediction of the world are dynamic processes that are always subject to adjustment, and that the reality we strive to understand may always be deeper than our current model of perception allows.
These ideas were developed and expanded by Nate Silver, who explored the principles of forecasting under uncertainty in The Signal and the Noise. Silver argues that successful forecasting depends on the ability to distinguish the “signal” (important information) from the “noise” (random or insignificant data), which relates directly to Bayesian model updating. But Silver goes further, emphasizing that not all models can be corrected simply by feeding them new data: in a world full of uncertainty and randomness, many predictions turn out to be wrong even when they follow the right methodology.
Silver emphasizes how people often overestimate their ability to interpret data, relying on predictions that seem plausible but may actually be the result of perceptual errors and biases. He explains that it is important not only to consider new data but also to understand the context in which it arises. In this sense, as in Bayesian models, the adjustment of mental models is a process that requires not only observations but also an awareness of the limitations we face when interpreting the world. Silver also underscores that the significance of “noise” in data is often overlooked, and without the ability to separate it from the “signal,” we will not be able to create accurate predictive models, even when using the most advanced data analysis methods.
Thus, like Bayesian theory, Silver emphasizes the importance of continually revising our assumptions and correcting our models of the world. However, unlike classical Bayesian theory, Silver points out the complexity of predictions in the real world, where the signal is often hard to distinguish from the noise, and our ability to make accurate predictions remains limited.
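A toy experiment (my own sketch, not an example from Silver) shows how mistaking noise for signal ruins forecasts: a flexible model reproduces past data almost perfectly yet typically predicts new data worse than a simple one, because it has learned the noise.

# Toy illustration of "signal vs. noise" in forecasting (illustrative only).
# The data are a linear signal plus random noise. A degree-9 polynomial
# chases the noise; a straight line captures the signal and forecasts better.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 12)
x_test = np.linspace(0, 1, 100)

def signal(x):
    return 2.0 * x + 1.0                                  # the true "signal"

y_train = signal(x_train) + rng.normal(0, 0.3, x_train.size)  # plus "noise"
y_test = signal(x_test)

for degree in (1, 9):
    coeffs = np.polyfit(x_train, y_train, degree)         # fit the model
    forecast = np.polyval(coeffs, x_test)
    rmse = np.sqrt(np.mean((forecast - y_test) ** 2))
    print(f"degree {degree}: out-of-sample error = {rmse:.3f}")
# The flexible model "learns" the noise and generalizes worse: updating on
# new data alone cannot save a model that cannot tell noise from signal.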
Yet even though our mental models can be updated from observation, the process of adapting to new data is not unlimited. When the world becomes too complex, or when our expectations collide with fundamentally new and unpredictable phenomena, our models run into limitations that ordinary methods of adjustment cannot overcome. Here an insurmountable gap opens up for the mind: a moment when we find ourselves unable to fit our predictions to reality.
In such situations, when even the most flexible models prove powerless, the mind experiences a crisis caused by the inability to predict or comprehend what is happening. This confrontation with uncertainty leads to existential tension, questioning the very capacity of the mind to make sense of the world. And despite all efforts to update and revise models, it becomes clear that human cognition inevitably faces boundaries that cannot be surpassed by familiar forecasting mechanisms.
The existential limit of forecasting is the threshold at which the human brain encounters fundamentally unpredictable phenomena that cannot be integrated into predictive models due to a lack of data, experience, or the ability to correct prediction errors. When the brain reaches the limits of its cognitive capabilities, it results in an irresolvable cognitive conflict, giving rise to profound existential experiences.
The existential limit of forecasting became the starting point for the development of numerous philosophical movements such as pessimism, existentialism, and nihilism. These philosophies emerged as a result of confronting the limits of human understanding, when traditional models of perceiving the world prove inadequate to address profound existential questions and uncertainty. Errors arising from the existential limit can sometimes spiral out of control, evolving into desperate pessimism, deep existentialism, or nihilism.
Pessimism, as a philosophical position asserting the dominance of the negative aspects of life, is directly linked to the inability to cope with uncertainty and predict the future during times of profound crisis. When a person encounters phenomena that cannot be integrated into familiar models, their mind may begin to seek an explanation through extremes. A pessimistic view of the world often stems from accepting uncertainty and destructive expectations as an inevitable part of existence.
An example of pessimism is the philosophy of the German thinker Philipp Mainländer, who held that existence, by its very nature, contains an element of suffering and meaninglessness. Mainländer’s reflections on the endless suffering and meaninglessness of life are a striking example of how the existential limit can be interpreted as the inevitable tragedy of human existence. He viewed life as devoid of any ultimate purpose, a stance that follows directly from the experience of existential uncertainty and that gives rise to the deepest pessimistic disposition.
The philosopher Ulrich Horstmann (writing under the pseudonym Klaus Steintal) represents pessimism taken to its extreme. Horstmann is known for the radical position that humanity should voluntarily bring about its own extinction through deliberate global thermonuclear annihilation. He views existence as so absurd and so saturated with suffering that, in his eyes, the only way out is the complete destruction of humanity. His ideas exemplify extreme pessimism, in which the philosophy of suffering and meaninglessness leads to misanthropy and radical, shocking conclusions.
Existentialism, in turn, emerged as a response to the recognition of these limits and the struggle with the fact that humans cannot find absolute meaning in life, while their predictions and answers to existential questions often turn out to be superficial or mistaken. Existentialists such as Jean-Paul Sartre and Martin Heidegger sought to confront the ideas of freedom, responsibility, and finitude. However, their works frequently reflect a sense of anxiety and the impossibility of fully grasping existence.
However, existentialism can itself rest on mistaken assumptions about human nature, leading to extremes in the interpretation of freedom and the search for meaning. If we grant that this process begins with an internal crisis, then philosophical systems such as Heidegger’s emerge as responses to the inability to find ultimate meaning in a world where predictions about our future are constantly called into question.
Nihilism is perhaps the most extreme response to the existential limit of prediction. Nihilists argue that life has neither meaning nor intrinsic value. They assert that all moral, social, and metaphysical foundations are ultimately meaningless. The belief that all human efforts to create meaning are doomed to failure stems from a profound existential void that emerges when one confronts the limits of human understanding.
Friedrich Nietzsche is the philosopher most strikingly associated with nihilism, describing the world as chaos devoid of meaning and order. For Nietzsche, the world is an arena of struggle and suffering, where human aspirations are doomed to failure if they seek meaning in a universe that offers none. He argues that traditional moral and religious foundations are incapable of providing true meaning in life, and that individuals must forge their own path by overcoming this existential void from within. His works embody this confrontation with existential limits: it is impossible to construct a cognitive model of the world that resolves all contradictions and allows one to escape this darkness.
Nihilism, emerging from a deep crisis of faith in the ability to predict, is essentially the extreme stage of the “amplification” of error. When a person fails to find solutions in conditions of uncertainty, they arrive at the conclusion that nothing exists beyond subjective perception and, therefore, that nothing in the world truly matters. This ultimately escalates into a complete rejection of all values and purposes.
Pessimism, existentialism, and nihilism are not just philosophical doctrines but also products of the forecasting process itself, arising from erroneous predictions and exaggerated expectations. Beginning as attempts to explain uncertainty and crisis, these movements gradually spiral, amplifying the significance of the problem and reaching extremes. As a result, what started as a search for meaning and an effort to overcome existential limits transforms into extreme forms of despair and philosophical nihilism. We will examine this in more detail in Chapter 3.
These philosophies, to some extent, become a logical consequence of how errors in forecasting and distortions in the perception of uncertainty can lead to a radical reassessment of human nature and its place in the world. They do not always offer solutions, but they raise fundamental questions about our ability to construct a meaningful life in the face of the uncertainty we encounter.
An example of a more honest approach within existentialism is the philosopher Albert Camus. Camus emphasizes the moment when Sisyphus, the absurd hero of his work, becomes aware of the meaninglessness of his existence and his condemnation to endless struggle. However, Camus does not advocate denying reality but rather accepting it. For Sisyphus, despite recognizing the absurd, his life does not lose its value. He becomes happy because he acknowledges his fate and accepts it — not in submission, but in defiance. This acceptance is not passive but an active act in which he finds inner freedom and harmony, continuing his labor despite its futility. Camus argues that although Sisyphus’s struggle is absurd, meaning and happiness can still be found in that absurdity if one abandons the search for ultimate answers and embraces reality as it is.
Chapter 2. Ways of Adapting to Existential Limits
In the first chapter, we arrived at the realization that the world, as it is, is the result of random interactions and self-organization, devoid of any ultimate purpose or higher design. This understanding, coupled with chaos and unpredictability, presents a profound existential problem for the human mind. How can we make decisions and take action when the future is beyond prediction? In this chapter, we will examine existential fears and limits of the mind, such as free will, death, and the complete absence of meaning, through scientific and philosophical works. Since these are eternal themes that will persist as long as there is a self-aware mind, instead of reiterating the ideas of past geniuses we will focus on works of the 20th and early 21st centuries, which, in a sense, already encapsulate the conclusions of the past.
The next section explores free will as an adaptive tool. We will examine its neurobiological and cognitive foundations, the influence of genetics and environment on its formation, and the illusion of this concept in light of contemporary research. Through this lens, we will understand how free will becomes a means of organizing chaos and a tool for adapting to the ultimate complexity of existence.
1. Free Will as a Tool for Information Processing
Although the brain operates within certain patterns and predictions, we continue to experience a sense of free will. This is because the brain does not process all information directly; instead, it works with the most probable hypotheses and models. As a result, we perceive ourselves as independent agents making decisions, even though, at a deeper level, our brain is always functioning within deterministic patterns, whose predictions simplify perception and adaptation.
This also explains why we feel free, even though, at a deeper level, the brain is guided by certain probabilistic models. The brain conserves resources by processing not all information, but only the most likely events, making it more flexible and adaptive. This allows us to respond quickly to changes in the environment without wasting excessive energy on data processing, which ultimately gives us the sensation of free will.
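One way to picture this resource-saving strategy (a hypothetical sketch, not a claim about actual neural code; the threshold and rate are invented) is a system that fully processes an input only when it deviates enough from what was predicted, while quietly updating its model the rest of the time.

# Hypothetical sketch of "predict first, process only surprises".
# The system keeps a running prediction and spends effort only on inputs
# whose prediction error exceeds a threshold, saving resources otherwise.

def perceive(stream, threshold: float = 0.5, learning_rate: float = 0.3):
    prediction = 0.0
    for value in stream:
        error = abs(value - prediction)
        if error > threshold:                     # surprising: full processing
            print(f"attend: input={value:.1f} (error {error:.2f})")
        # either way, quietly update the internal model
        prediction += learning_rate * (value - prediction)

perceive([0.1, 0.2, 0.1, 3.0, 3.1, 3.0, 0.2])     # only big changes grab attention

Most inputs pass by without costly processing; only violations of expectation demand attention, and between those violations the system runs on its own predictions, which is experienced from within as free, self-initiated action.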
Robert Sapolsky is an American neuroendocrinologist, biologist, anthropologist, and writer, known for his work on human behavior, its biological foundations, and the mechanisms of stress. He holds a professorship at Stanford University and has spent over three decades researching how neurobiology, genetics, and the environment shape human behavior. In addition to his primary work as a biologist, Sapolsky is well-known for his popular books, such as Behave: The Biology of Humans at Our Best and Worst and Determined: A Science of Life Without Free Will. These works offer revolutionary perspectives on the nature of human behavior, challenging traditional views on free will and moral responsibility.
Neurobiological Evidence
Sapolsky draws on the research of Michael Gazzaniga, who worked with split-brain patients, whose corpus callosum had been surgically severed, to illustrate the absence of free will. These patients exhibited striking examples of how consciousness interprets and explains actions that were not actually the result of conscious decision-making. When one hemisphere performs an action, the patient is not always able to explain why it occurred. Gazzaniga found that the left hemisphere, which is associated with speech and explanation, often fabricates justifications for actions performed by the right hemisphere. This supports the notion that our consciousness is not always connected to the actual decision-making process.
“Neurobiology shows that often we are unaware of the true causes of our behavior. When the left hemisphere explains the actions of the right, it does so based on its perception, not the actual cause” (Determined: A Science of Life Without Free Will, p. 45).
This example illustrates the idea that we perceive ourselves as free agents, but in reality, many of our decisions and actions are the result of unconscious processes.
Illusion of Free Will
One of the central aspects of the book is the concept of the “illusion of free will.” Sapolsky argues that, despite our belief in free choice, all of our decisions are actually determined by biological, neurobiological, and social factors. We perceive ourselves as free agents because we are unaware of the entire chain of mechanisms that actually lead to our behavior. Sapolsky uses the metaphor of “illusion”: we see ourselves as free agents because we fail to notice the deeper mechanisms that influence our actions.
“We believe that we control our actions because we don’t see the chain of biological factors that lead to our decisions. It’s simply an illusion that we make decisions consciously” (Determined: A Science of Life Without Free Will, p. 98).
He provides examples where reactions to external stimuli occur before we become aware of them. For instance, if a person faces danger, their body may react immediately on the basis of instinctive responses (such as a surge of adrenaline) before they consciously realize what has happened. This confirms that our behavior is often predetermined by unconscious reactions occurring in the brain.
Genetics and Its Influence on Behavior
Sapolsky also emphasizes the role of genetics in the determination of our behavior. He cites examples of genetic variants, such as changes in the MAOA gene, which is associated with a heightened propensity for aggression. Such genetic influences can substantially alter behavior, and, in Sapolsky’s view, these data show that our personality and conduct are largely predetermined by our genes rather than being the product of free choice.
“Genetics makes a major contribution to the formation of our personality. Even traits such as a propensity for aggression can be predetermined by our genes” (Determined: A Science of Life Without Free Will, p. 127).
The Influence of Environment and Upbringing
Environment and upbringing also play a significant role in shaping our behavior. Sapolsky draws attention to how stressful events can strongly affect decision-making. In particular, stress can impair our capacity for rational thought, making us more prone to impulsive decisions. This, too, confirms that our actions are largely predetermined by external circumstances rather than by free will.
“When we are under stress, our brain begins to work differently, which makes us more prone to aggression or impulsive actions. This means that even in moments of tension our actions are determined” (Determined: A Science of Life Without Free Will, p. 140).
The Role of Neuropeptides and Hormones in Behavior
Sapolsky provides an in-depth discussion on how hormones, such as oxytocin, can significantly influence our social interactions. He presents examples illustrating how an increase in oxytocin levels can make individuals more trusting and altruistic, whereas a decrease can lead to aggression and distrust.
“Hormones such as oxytocin play a crucial role in our behavior. We cannot control their levels, and it is often these biochemical factors that determine how we relate to others” (Determined: A Science of Life Without Free Will, p. 165).
Decoherence and Classical Reality
Quantum decoherence is the process by which a quantum system loses the coherence between the components of its superposition. It has been studied to understand how quantum systems transition into states that can be described by classical mechanics. The theory, which emerged as an attempt to extend the understanding of quantum mechanics, has developed in several directions, and experimental research has confirmed some of its key aspects.
At the macroscopic level, quantum effects become “blurred” due to the interaction of quantum systems with the surrounding environment. This process, known as decoherence, explains why the macroscopic world appears strictly deterministic.
Decoherence demonstrates that quantum systems transition into states that, from the observer’s perspective, appear classically determined. Thus, quantum uncertainty does not “penetrate” the macroscopic world, where Newtonian laws prevail.
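A purely schematic cartoon of this process (my own sketch; the decoherence rate and the exponential decay law are assumed for illustration, not derived from physics) tracks the off-diagonal elements of a two-state density matrix, which encode quantum interference, as they fade under interaction with an environment.

# Schematic cartoon of decoherence (rate and decay law are assumed).
# The off-diagonal elements of the density matrix carry quantum coherence;
# interaction with an environment suppresses them, leaving the diagonal:
# a classical probabilistic mixture.
import numpy as np

rho0 = np.array([[0.5, 0.5],
                 [0.5, 0.5]], dtype=complex)   # equal superposition of two states
gamma = 0.8                                    # assumed decoherence rate

for t in range(5):
    decay = np.exp(-gamma * t)                 # environment damps coherences
    rho = rho0.copy()
    rho[0, 1] *= decay
    rho[1, 0] *= decay
    print(f"t={t}: coherence = {rho[0, 1].real:.3f}, "
          f"populations = {rho[0, 0].real:.1f}/{rho[1, 1].real:.1f}")
# As the coherences vanish, the state becomes indistinguishable from a
# classical mixture: quantum effects are "blurred" at the macroscopic level.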
Bell’s Experiment
Bell’s experiment demonstrates that quantum mechanics violates Bell’s inequalities, indicating the presence of quantum nonlocality. This phenomenon is often interpreted as a challenge to classical notions of determinism. However, Sapolsky emphasizes that even quantum nonlocality does not provide “free will”, as the outcomes remain entirely dependent on the system’s parameters and its initial state.
According to Sapolsky, misinterpretations of quantum nonlocality arise from the assumption that the randomness of quantum events allows for the existence of a will independent of deterministic factors. However, as he points out, quantum randomness does not make events free; it merely makes them unpredictable.
Physical Determinism and System Complexity
The ideas of Pierre-Simon Laplace, suggesting that knowledge of all initial conditions can allow the prediction of the future, are discussed in the context of chaos theory and quantum uncertainty. Sapolsky points out that even in a complex physical system (such as the brain), no “freedom” arises; everything remains predetermined by the laws of physics. Despite potential quantum uncertainty, its impact on the level of conscious decisions is minimal and does nothing to save the concept of free will.
Laplace’s demon is a hypothetical entity that, knowing the position and velocity of every particle in the universe at a given moment, could predict the future with complete accuracy. If you understand the physical laws governing the universe and know the exact position of every particle in it, you can compute precisely what has happened at every moment since the beginning of time and what will happen at every moment until its end. On this view, everything that happens in the universe was bound to happen (in a mathematical, not theological, sense).
“Laplace proposed the canonical statement of determinism: if you had a superhuman who knew the position of every particle in the universe at a given moment, they would be able to precisely predict every moment in the future. Moreover, if this superhuman (subsequently called ‘Laplace’s demon’) could reconstruct the exact position of each particle at any moment in the past, it would lead to a present identical to our current one. The past and future of the universe are already determined. Science since Laplace has shown that he was not entirely right (proving that Laplace was not the Laplacian demon), but the spirit of his demon lives on. Modern views on determinism must include the fact that certain types of predictability are impossible (the subject of chapters 5 and 6), and some aspects of the universe are in fact undetermined (chapters 9 and 10)” (Determined: A Science of Life Without Free Will, Chapter 1).
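A standard illustration of why determinism does not guarantee predictability (my example, not Sapolsky’s) is the logistic map: a fully deterministic rule under which two trajectories starting a millionth apart diverge rapidly, so that any finite-precision knowledge of the initial conditions dooms long-range forecasting.

# Deterministic yet unpredictable: the logistic map x -> r * x * (1 - x).
# The rule is fixed in the spirit of Laplace, but trajectories from nearly
# identical initial states diverge, so imperfect knowledge of the present
# makes the distant future incomputable in practice.

def trajectory(x0, r=4.0, steps=40):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.300000)
b = trajectory(0.300001)                  # differs by one millionth
for step in (0, 10, 20, 30, 40):
    print(f"step {step:2d}: |difference| = {abs(a[step] - b[step]):.6f}")

The universe may be determined in Laplace’s sense while remaining, for any finite observer, unpredictable, which is exactly the distinction Sapolsky draws between determinism and predictability.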