INHERENT RANDOMNESS

When we look at our surroundings, we see order. Everything seems arranged in a particular fashion, following certain rules and maintaining patterns. At the macroscopic level, we humans strive to organize things the same way, deriving our thoughts and ideas from the patterns we see in Nature. Only when we look into the finer details of the laws that govern the natural world do we notice the irregularities. Maybe “irregularities” is not quite the right word: what we see is best described by a probabilistic perspective on Nature. We observe randomness.
Quantum Mechanics is one of the most successful theories of physical science humans have formulated so far, at least in terms of the experimental tests it has endured. Embedded in it are some of the key mysteries that might open a portal to the deep questions of our existence.
There are many well-known popular “interpretations” of quantum mechanics at the moment; different schools of thought view the theory through different lenses, yet all rest on the same fundamental concept at its core: only when an “observer” makes a “measurement” can one truly know the state of a system at that moment.
The above statement contains some loaded words whose consequences differ significantly depending on the interpretation. Heisenberg’s well-known Uncertainty Principle, which expresses this core idea of quantum mechanics with mathematical rigor, captures an aspect of reality yet to be completely understood. It states that some inherent uncertainty is present in any system comprising an observer and the “process” the observer is “observing” or “measuring.” In its simplest form, it says that an object’s position and momentum at the quantum scale cannot both be determined exactly at the same time: the more precisely the momentum is determined, the greater the inherent uncertainty in the object’s position.
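In its standard modern form, the principle is written as an inequality relating the two uncertainties:

    Δx · Δp ≥ ħ / 2

where Δx is the uncertainty in position, Δp is the uncertainty in momentum, and ħ is the reduced Planck constant. The product of the two can never fall below this fixed limit, so squeezing one quantity necessarily spreads the other.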
This causes a lot of confusion, but I believe the confusion arises not from the difficulty of the mathematics involved but from the jargon used to explain the concept itself!
This principle does not point to some inadequacy in the experimental setup, or to a lack of sufficiently precise technology for making the measurement; rather, it describes a hard truth about the physical world! It describes the very nature of the physical world at the finest scales.
To understand this principle and its implications, we must understand the terminology involved. The term “measurement” describes the act of gaining “information” about a system at a particular moment. The measurement process involves an observer. In everyday language, “observer” means a sentient being capable of conscious thought, but in physics the term has a broader meaning: an “observer” can be any person or thing capable of making a “measurement.”
A small example: someone observing the screen in front of them while reading this article. Whether or not there is any physical contact between their hand and the screen, they interact with the screen in a very “physical” sense. They are an observer making a measurement at this very moment!
Photons of light emitted by the screen are received by the receptors of your eye; those photons carry information about the screen, which your brain processes in the form of electrical signals. In general, when you see something, photons of light bounce off that “thing” and are subsequently received by your eyes, and the information about the object carried by the photons is processed in your brain; as a result, you “see” that “thing.”
But during the very act of observing (in quite a literal sense), collisions occur between the photons you will eventually receive and the atoms that constitute the object. By the time those photons reach your eye, the collisions have already changed the velocities of those atoms, so the atoms no longer move as they did at the instant the photons bounced off them.
So, technically, when you see something, you observe the object as it was at some point in the past: the moment when the photons that allow you to see it collided with it.
This is the essence of the uncertainty principle. The very act of measurement disturbs the system, so the information gained about the system’s state at a particular moment describes the system as it WAS, not as it is. This is an inherent property of the universe we are part of, for the observer need not be sentient. The system itself does not “know” the properties of some of its parts, as it constantly interacts with other parts and, in the process, “disturbs” them. Hence, whatever inherent information it has about its state is information about its past, not its present.
This leads to an eerily similar concept, one that I think has some connection to Heisenberg’s Uncertainty Principle but carries far greater implications. Stephen Wolfram, in his 2002 book A New Kind of Science, proposed a conjecture of sorts: the principle of Computational Irreducibility.
In his study of “simple” programs, he found that systems which follow certain predefined rules and start from a predefined state can, when allowed to evolve, give rise to high levels of complexity not encoded in the governing rules. The behaviour of these “simple” computational systems is emergent, producing complexity we have yet to fully understand.
He claims that, in the computational universe (as he calls it), which consists of many of these simple computational systems, some systems are computationally irreducible. The principle of computational irreducibility states, in essence, that there is an upper limit to the predictability of such systems: beyond some point they give rise to complexity that follows no particular pattern, and hence they become computationally “irreducible,” in other words, unpredictable by any shortcut.
One of the main ideas of computational irreducibility is that the only way to know the properties or state of such a system at a particular point in time is to actually “simulate” the system up to that point; the absence of any pattern means there is no way of knowing beforehand. In other words, one needs to make a “measurement” to know the system’s state at that moment. The present state of a computationally irreducible system gives no clue about its future state, owing to the inherent randomness (unpredictability) present, even when the “simple” starting state and the governing rules are fully known.
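To make this concrete, here is a minimal Python sketch (the function names are mine, chosen just for illustration) of Rule 30, the elementary cellular automaton that serves as Wolfram’s best-known example of computational irreducibility. The rule and starting state could hardly be simpler, yet no formula is known that predicts row n without computing every row before it; one simply has to run the simulation.

    # A minimal sketch of Rule 30, one of the elementary cellular automata
    # Wolfram studies in A New Kind of Science. Each cell is 0 or 1, and a
    # cell's next value is: left XOR (centre OR right).

    def rule30_step(row):
        """Compute the next generation, treating cells past the edges as 0."""
        padded = [0] + row + [0]
        return [padded[i - 1] ^ (padded[i] | padded[i + 1])
                for i in range(1, len(padded) - 1)]

    def simulate(steps):
        """Evolve from a single live cell, printing each generation."""
        width = 2 * steps + 1              # wide enough that the edges stay 0
        row = [0] * width
        row[width // 2] = 1                # the "simple" predefined starting state
        for _ in range(steps):
            print("".join("#" if cell else "." for cell in row))
            row = rule30_step(row)         # no shortcut: one step at a time

    simulate(16)

Running this for even a few dozen steps already produces the irregular, aperiodic triangle Wolfram describes: actually executing the program is the only “measurement” that reveals what the system will do.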
If our universe started from an “initial state” and evolves according to a set of “simple rules” (the laws of Physics we have discovered so far, and perhaps some we have yet to discover), and if the universe is computationally irreducible, then there is no way of knowing its future state with absolute certainty.
In fact, such a computationally irreducible system could very well give rise to enormous degrees of complexity, one consequence of which could be the emergence of consciousness. The implications of these ideas are far-reaching. They might even imply that free will, on a much more intricate scale, is an inherent property of the universe.
Maybe some inherent quality makes our universe possible, one that might answer, or at least point toward answers to, some of the deep and fundamental questions of reality. I propose that our universe may have inherent randomness…
FURTHER READING: