We are talking about a crisis in the fundamentals of physics. For 40 years we have not found any new laws of nature. Progress is stagnating. Nothing works. Nobody knows what to do.
Yes, we found the Higgs boson in 2012, and in 2013 there was a Nobel Prize for it. But the theory of the Higgs boson was already developed in the 1960s. Yes, we have noticed that particles called neutrinos have a mass and are not massless, as previously assumed. This, too, was awarded a Nobel Prize, in 2015. But the theory of neutrino masses dates back to the 1950s. Yes, we found gravitational waves, and they were worth a Nobel Prize too, in 2017. But the general theory of relativity that predicted them is more than 100 years old.
It’s not that physicists lack data; it’s just that the data keep confirming the already known theories instead of revealing anything new. The problem is not the experiments. The problem is the lack of predictive power of theoretical physics: theorists made many predictions, for example for the Large Hadron Collider (LHC) at CERN in Geneva, and these predictions all turned out to be wrong.
No excitement with the LHC
The Large Hadron Collider has been running since September 2008 and is currently the world’s largest particle accelerator. It accelerates protons in a 27-kilometre ring to almost the speed of light and then brings them to collision at four different points, where physicists measure what happens. The second phase of measurements at the LHC has just been completed, and there is now a break until 2021 to carry out further optimisations on the machine. An excellent opportunity to take stock.
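Just how close to the speed of light these protons get can be checked with a back-of-the-envelope calculation. The beam energy of 6.5 TeV per proton used below is a standard figure for the LHC’s second run, not a number stated in the article:

```python
import math

# Assumed LHC Run-2 beam energy per proton (standard ballpark figure,
# not from the article) and the proton's rest energy.
E_BEAM_GEV = 6500.0      # beam energy per proton, in GeV
M_PROTON_GEV = 0.938272  # proton rest energy, in GeV

# Relativity: E = gamma * m * c^2, and v/c = sqrt(1 - 1/gamma^2)
gamma = E_BEAM_GEV / M_PROTON_GEV
beta = math.sqrt(1.0 - 1.0 / gamma**2)

print(f"Lorentz factor gamma = {gamma:.0f}")
print(f"speed as fraction of c = {beta:.11f}")
```

The protons fall short of the speed of light by only about one part in a hundred million, which is what “almost the speed of light” means here.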
In addition to the Higgs boson, some new, but not elementary, particles have also been found at the LHC; in other words, particles composed of already known, smaller particles. Theoretical physicists have also used better data to improve their models for the structure of protons. Thanks to the LHC, physicists have also been able to better determine the properties of known particles, such as their mass and decay time.
In many respects, the LHC has therefore advanced research. But according to theoretical physicists, the LHC should have made other, more exciting discoveries: supersymmetric particles, for example, or additional dimensions of space, or small black holes. Something new. But that did not happen. Something went wrong with these predictions.
The fact that the LHC has not found any new elementary particles is not the only prediction in the fundamentals of physics that has not come true. The search for the particles of which dark matter is supposedly made has gone just as badly.
It is said that 85 per cent of all matter in the universe consists of dark matter. Astrophysicists have concluded this from many observations which indicate that the gravitational attraction of ordinary matter alone is not sufficient to explain our world. For example, there is not enough ordinary matter to form galactic structures as we see them. Ordinary matter is also not enough to account for how fast galaxies move within clusters. This requires additional matter that is different from the matter we usually deal with. Dark matter, that is.
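The reasoning behind these observations can be sketched with a toy calculation. The numbers below, a Milky-Way-like visible mass of about 10^11 solar masses and a roughly flat observed rotation speed of about 220 km/s, are my own illustrative assumptions, not figures from the article:

```python
import math

# Gravitational constant in convenient units: kpc * (km/s)^2 / solar mass
G = 4.30091e-6

def v_keplerian(r_kpc, m_visible=1e11):
    """Circular orbital speed at radius r_kpc if only a central clump of
    visible matter (assumed here: 1e11 solar masses) provided the pull."""
    return math.sqrt(G * m_visible / r_kpc)

# If visible matter were all there is, orbital speeds should drop with
# distance from the centre; observed rotation curves instead stay
# roughly flat out to large radii (~220 km/s for the Milky Way).
for r in (5, 10, 20, 40):
    print(f"r = {r:2d} kpc: predicted {v_keplerian(r):4.0f} km/s, observed ~220 km/s")
```

The predicted speed falls off as one over the square root of the radius, while the observed speed does not fall off at all. That mismatch is the kind of evidence from which the extra, unseen mass is inferred.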
But what is dark matter made of? It should be a particle, said the particle physicists. And that is what they have been looking for since the 1980s, with ever larger detectors. Such experiments usually take place deep underground, in old mines, because the rock above shields the detectors from disturbing signals caused by cosmic rays. But the particles have still not been found.
Numerous experiments looking for a fifth force have also failed
In the fundamentals of physics, there are currently four known forces. Besides gravity, which is described by Einstein’s general theory of relativity, these are the electromagnetic force and the two nuclear forces (weak and strong), which are described by the Standard Model of particle physics. However, 20 years ago astrophysicists noticed that the universe is not only expanding but that this expansion is also accelerating. For general relativity, this means that the so-called “cosmological constant” is not zero.
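The connection between acceleration and the cosmological constant can be made concrete in one line. In a spatially flat universe the deceleration parameter is q0 = Omega_m/2 - Omega_Lambda, where Omega_m and Omega_Lambda are the fractions of the energy density in matter and in the cosmological constant; the round values below are standard ballpark figures of my own choosing, not from the article:

```python
# Assumed round density fractions (standard ballpark values):
OMEGA_M = 0.3        # matter (ordinary plus dark)
OMEGA_LAMBDA = 0.7   # cosmological constant ("dark energy")

# Deceleration parameter of a flat universe: negative means the
# expansion speeds up, which requires a nonzero cosmological constant.
q0 = OMEGA_M / 2 - OMEGA_LAMBDA
print(f"q0 = {q0:+.2f} ->", "accelerating" if q0 < 0 else "decelerating")
```

With matter alone (Omega_Lambda = 0) the result would be positive, i.e. a decelerating universe; the observed acceleration is what forces a nonzero cosmological constant.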
To explain the value of the cosmological constant, theoretical physicists have invented new forces, so-called fifth forces. These they then searched for, for example by measuring the gravitational force very precisely at small distances. But only good old gravity was to be found.
To understand what happened here, it is worth taking a look at the history of the natural sciences. At the beginning of the 20th century, physicists completely overhauled the foundations of their field. Not only did Einstein find out that space and time belong together and are curved, but Heisenberg also realised that matter is subject to an unavoidable fuzziness. Heisenberg, Schrödinger and some other bright minds concluded at that time that matter must be described by quantum mechanics.
Quantum mechanics was dramatically successful, and not only because it could be used to calculate the order of the periodic table and the atomic spectra, which had long been known at that time. Quantum mechanics is also the theory behind all the technology we now take for granted: semiconductors and transistors, lasers and digital cameras, atomic clocks, LEDs, and magnetic resonance imaging. All this we have only thanks to quantum mechanics. The physicists are justifiably proud of it.
After they had ticked off the quantum revolution, physicists began to probe the structure of matter at smaller and smaller distances. First they found the components of the atoms, then the parts of the atomic nuclei, and then the components of those components. In the 1970s, they summarised all these insights in the Standard Model of particle physics.
The theory of the Standard Model is firmly based on the ideas of unification and symmetry. Unification, because first the electric force was combined with the magnetic force, and then the electromagnetic force was merged with the weak nuclear force to form the so-called “electroweak force”. The strong nuclear force, however, still stands somewhat apart and does not want to fit so well with the electroweak one. Symmetry, because each of these forces can be derived from quite simple symmetry principles.
Now that they had the Standard Model, physicists thought they could continue their previous success in the same way. Accordingly, attempts were made to unify the three forces of the Standard Model in a so-called “grand unification”, or even in a “theory of everything” in which gravity is also included. But this approach did not work.
Various approaches to grand unification based on symmetries were experimentally excluded as early as the 1980s. This was possible because these theories predicted that protons could decay, which was not observed. The physicists, however, did not want to give up the idea of unification and symmetry. Since then they have proposed more and more such beautifully symmetric theories. The theories have become increasingly sophisticated to remain consistent with the data.
As the name suggests, the desire for symmetry is also behind the theory of supersymmetry. Supersymmetry extends the Standard Model by hypothesising new partner particles for the already known particles. The idea of supersymmetry has been pursued by theoretical physicists since the 1980s and has been in conflict with observations right from the start. Instead of abandoning the theory, however, theorists have repeatedly spiced up supersymmetry so that the contradiction with experiment no longer arises. The supersymmetric partner particles, meanwhile, are still being searched for.
A further criterion that physicists currently like to use for theory development, apart from symmetries, is that a theory should contain only certain numbers. We are not talking here about numbers with units, such as lengths, but about pure numbers without units. A good theory, so the thinking in the fundamentals of physics today goes, should contain only numbers that are close to one. Not too big and not too small, but just right.
There is no particular reason for this; it would just be nice if nature were like that. Physicists call such pretty numbers “natural”. The idea comes from the intuition that in our daily lives we rarely encounter very large or very tiny numbers purely by chance. Unnatural numbers usually need an explanation.
Imagine, for example, that you plant carrots, and one of them grows 1000 times bigger than the others. You would think there must be something behind it! This is exactly how physicists react to unnatural numbers: there must be an explanation. And then they postulate new particles or symmetries. The cosmological constant already mentioned is a case in point. Too small, say the physicists; a fifth force is supposed to explain it. There are a few other such conspicuous numbers, such as the mass of the Higgs boson. Supersymmetry is supposed to explain why it is so small.
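What “small” means here is a dimensionless ratio. A quick calculation with standard ballpark values (125 GeV for the Higgs mass and about 1.22 × 10^19 GeV for the Planck mass; neither figure appears in the article) shows the kind of unnatural number meant:

```python
# Standard ballpark values, not taken from the article:
M_HIGGS_GEV = 125.0     # measured Higgs boson mass, in GeV
M_PLANCK_GEV = 1.22e19  # Planck mass, the natural scale of gravity, in GeV

# The dimensionless ratio the naturalness argument worries about:
ratio = M_HIGGS_GEV / M_PLANCK_GEV
print(f"m_Higgs / m_Planck is about {ratio:.1e}")
```

A pure number of order 10^-17 is about as far from “close to one” as the giant carrot is from its neighbours, and it is this gap that supersymmetry was supposed to explain.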