A screenshot from the film 'The Cloverfield Paradox' (2018). Source: Netflix

All the science in ‘The Cloverfield Paradox’

I watched The Cloverfield Paradox last night, the horror film that Paramount Pictures had offloaded to Netflix, which then released it on February 4. It’s a dumb production: unlike H.R. Giger’s existential, visceral horrors that I so admire, The Cloverfield Paradox is all about things going bump in the dark. But what sets these things off in the film is quite interesting: a particle accelerator. Given how bad the film is, though, the screenwriter seems to have used the machine as a plot device and nothing else.

The particle accelerator is called Shepard. We don’t know what particles it’s accelerating or up to what centre-of-mass collision energy. However, the film’s premise rests on the possibility that a particle accelerator can open up windows into other dimensions. The Cloverfield Paradox needs this because, according to its story, Earth has run out of energy sources in 2028 and countries are threatening ground invasions for the last of the oil, so scientists assemble a giant particle accelerator in space to tap into energy sources in other dimensions.

Considering 2028 is only a decade from now – when the Sun will still be shining bright as ever in the sky – and renewable sources of energy aren’t even being discussed, the movie segues from sci-fi into fantasy right there.

Anyway, the idea that a particle accelerator can open up ‘portals’ into other dimensions is neither new nor entirely silly. Broadly, an accelerator’s purpose is founded on three concepts: the special theory of relativity (SR), particle decay and the wavefunction of quantum mechanics.

According to SR, mass and energy can transform into each other, and objects moving closer to the speed of light become more massive, and thus more energetic. Particle decay is what happens when a heavier subatomic particle decomposes into groups of lighter particles because it’s unstable. Put these two ideas together and you have a part of the answer: accelerators accelerate particles to extremely high velocities, the particles become more massive, ergo more energetic, and the excess energy condenses out at some point as other particles.
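
For reference, the standard relativistic expression behind this is E = γmc², where γ = 1/√(1 − v²/c²): as a particle of rest mass m approaches the speed of light c, γ – and with it the particle’s total energy – grows without bound.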

Next, in quantum mechanics, the wavefunction is a mathematical function: when you solve it based on what information you have available, the answer spit out by one kind of the function gives the probability that a particular particle exists at some point in the spacetime continuum. It’s called a wavefunction because the function describes a wave, and like all waves, this one also has a wavelength and an amplitude. However, the wavelength here describes the distance across which the particle will manifest. Because energy is directly proportional to frequency (E = h × ν, where h is Planck’s constant) and frequency is inversely proportional to the wavelength, energy is inversely proportional to wavelength. So the more energy a particle accelerator achieves, the smaller the part of spacetime its particles will have a chance of probing.
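
To put rough numbers on that inverse relation, here’s a small Python sketch (mine, purely illustrative) that converts a collision energy into the length scale it can probe, using λ = hc/E:

# Rough, back-of-the-envelope: E = h*nu and nu = c/lambda, so lambda = h*c/E.
h = 6.626e-34      # Planck's constant, J*s
c = 3.0e8          # speed of light, m/s
eV = 1.602e-19     # one electronvolt in joules

for energy_eV in (1e9, 1e12, 1e13):            # 1 GeV, 1 TeV, 10 TeV
    wavelength = h * c / (energy_eV * eV)      # metres
    print(f"{energy_eV:.0e} eV  ->  ~{wavelength:.1e} m")

At 10 TeV this works out to about 10^-19 m, in line with the CERN estimate quoted below.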

Spoilers ahead

SR, particle decay and the properties of the wavefunction together imply that if the Shepard is able to achieve a suitably high energy of acceleration, it will be able to touch upon an exceedingly small part of spacetime. But why, as it happens in The Cloverfield Paradox, would this open a window into another universe?

Spoilers end

Instead of directly offering a peek into alternate universes, a very-high-energy particle accelerator could offer a peek into higher dimensions. According to some theories of physics, there are many higher dimensions even though humankind may have access only to four (three of space and one of time). The reason to posit that they exist at all is that they help solve some conundrums that have evaded explanation. For example, according to Kaluza-Klein theory (one of the precursors of string theory), the force of gravity is so much weaker than the other three fundamental forces (strong nuclear, weak nuclear and electromagnetic) because it exists in five dimensions. So when you experience it in just four dimensions, its effects are subdued.

Where are these dimensions? Per string theory, for example, they are extremely compactified, i.e. accessible only over incredibly short distances, because they are thought to be curled up on themselves. According to Oskar Klein (one half of ‘Kaluza-Klein’, the other half being Theodore Kaluza), this region of space could be a circle of radius 10^-32 m. That’s 0.00000000000000000000000000000001 m – over five quadrillion times smaller than a proton. According to CERN, which hosts the Large Hadron Collider (LHC), a particle accelerated to 10 TeV can probe a distance of 10^-19 m. That’s still some ten trillion times larger than where the Kaluza-Klein fifth dimension is supposed to be curled up. The LHC has been able to accelerate particles to 8 TeV.
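
A quick back-of-the-envelope check (the same sort of estimate as the sketch above) shows just how far short of that compactification scale the accelerator falls:

# Order-of-magnitude comparison only; both figures are the ones quoted above.
probe_distance = 1e-19    # m, roughly what a 10 TeV particle can probe (per CERN)
kk_radius = 1e-32         # m, Klein's radius for the curled-up fifth dimension
print(probe_distance / kk_radius)   # ~1e13, i.e. about ten trillion times too large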

The likelihood of a particle accelerator tossing us into an alternate universe entirely is a different kind of problem. For one, we have no clue where the connections between alternate universes are nor how they can be accessed. In Nolan’s Interstellar (2014), the protagonist falls into a black hole and finds a passage through spacetime inside it – a possibility we currently don’t have any way of verifying. Moreover, though the LHC is supposed to be able to create microscopic black holes, they have a 0% chance of growing to possess the size or potential of Interstellar’s Gargantua.

In all, The Cloverfield Paradox is a waste of time. In the 2016 film Spectral – also released by Netflix – the science is overwrought, stretched beyond its possibilities, but still stays close to the basic principles. For example, the antagonists in Spectral are creatures made entirely of Bose-Einstein condensates. How this was even achieved boggles the mind, but the creatures have the same physical properties that the condensates do. In The Cloverfield Paradox, however, the accelerator is a convenient insertion into a bland story, an abuse of the opportunities that physics of this complexity offers. The writers might as well have said all the characters blinked and found themselves in a different universe.

The science in Netflix's 'Spectral'

I watched Spectral, the movie that released on Netflix on December 9, 2016, after Universal Studios got cold feet about releasing it on the big screen – the same place where a previous offering, Warcraft, had been gutted. Spectral is sci-fi and has a few great moments but mostly it’s bland and begging for some Tabasco. The premise: an elite group of American soldiers deployed in Moldova comes upon some belligerent ghost-like creatures in a city they’re fighting in. They’ve no clue how to stop them, so they fly in an engineer from DARPA to consult – the same guy who built the goggles that detected the creatures in the first place. Together, they do things. Now, I’d like to talk about the science in the film and not the plot itself, though the former feeds the latter.

SPOILERS AHEAD

A scene from the film ‘Spectral’ (2016). Source: Netflix

Towards the middle of the movie, the engineer realises that the ghost-like creatures have the same limitations as – wait for it – a Bose-Einstein condensate (BEC). They can pass through walls but not ceramic or heavy metal (not the music), they rapidly freeze objects in their path, and conventional weapons, typically projectiles of some kind, can’t stop them. Frankly, it’s fabulous that Ian Fried, the film’s writer, thought to use creatures made of BECs as villains.

A BEC is an exotic state of matter in which a group of ultra-cold particles condenses into a superfluid (i.e., it flows without viscosity). Once a BEC forms, a subsection of it can’t be removed without breaking the whole condensate down. You’d think this would make the BEC especially fragile – susceptible, as it is, to so many ‘liabilities’ – but it’s the exact opposite. In a BEC, the energy required to ‘kick’ a single particle out of its special state is equal to the energy that’s required to ‘kick’ all the particles out, making BECs as a whole that much more durable.

This property is apparently beneficial for the creatures of Spectral, and that’s where the similarity ends, because BECs have other properties that are inimical to the portrayal of the creatures. Two immediately came to mind: first, BECs are attainable only at ultra-cold temperatures; and second, the creatures can’t be seen by the naked eye but are revealed by UV light. There’s a third, relevant property, which we’ll come to later: BECs have to be composed of bosons, or bosonic particles.

It’s not clear why Spectral’s creatures are visible only when exposed to light of a certain kind. Clyne, the DARPA engineer, says in a scene, “If I can turn it inside out, by reversing the polarity of some of the components, I might be able to turn it from a camera [that, he earlier says, is one that “projects the right wavelength of UV light”] into a searchlight. We’ll [then] be able to see them with our own eyes.” However, the documented ability of BECs to slow down light to a great extent (5.7-million times more than lead can, in certain conditions) should make them appear extremely opaque. More specifically, while a BEC can be created that is transparent to a very narrow range of frequencies of electromagnetic radiation, the flipside is that it will stonewall all frequencies outside of this range. That the BECs in Spectral are opaque to a single frequency and transparent to all others is weird.

Obviating the need for special filters or torches to be able to see the creatures would simplify Spectral by removing one entire layer of complexity. However, it would also remove the need for the DARPA engineer, who comes up with the hyperspectral camera and, its inside-out version, the “right wavelength of UV” searchlight. Additionally, the complexity serves another purpose. Ahead of the climax, Clyne builds an energy-discharging gun whose plasma-bullets of heat can rip through the BECs (fair enough). This tech is also slightly futuristic. If the sci-fi/futurism of the rest of Spectral leading up to that moment (when he invents the gun) were absent, then the second half of the movie would’ve become way more sci-fi than the first half, effectively leaving Spectral split between two genres: sci-fi and wtf. Thus the need for the “right wavelength of UV” condition?

Now, to the third property. Not all particles can be used to make BECs. Its two predictors, Satyendra Nath Bose and Albert Einstein, were working (on paper) with the kinds of particles since called bosons. In nature, bosons are force-carriers, as opposed to the matter-making particles called fermions. A more technical distinction between them is that the behaviour of bosons is explained using Bose-Einstein statistics while the behaviour of fermions is explained using Fermi-Dirac statistics. And only Bose-Einstein statistics predicts the existence of states of matter called condensates, not Fermi-Dirac statistics.
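
To make that statistical difference concrete, here’s a small Python sketch (mine, not from the film or any paper) comparing the average number of particles the two statistics allow in a state of energy E; setting the chemical potential to zero is a simplifying assumption for illustration only:

import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def bose_einstein(E, T, mu=0.0):
    # mean occupation of a state at energy E for bosons
    return 1.0 / (math.exp((E - mu) / (K_B * T)) - 1.0)

def fermi_dirac(E, T, mu=0.0):
    # mean occupation of a state at energy E for fermions
    return 1.0 / (math.exp((E - mu) / (K_B * T)) + 1.0)

# As E approaches mu at low T, the bosonic occupancy blows up (condensation is
# possible), while the fermionic occupancy can never exceed 1 (Pauli's principle).
E, T = 1e-30, 1e-6   # joules, kelvin: arbitrary small values for illustration
print(bose_einstein(E, T), fermi_dirac(E, T))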

(Aside: Clyne, when explaining what BECs are in Spectral, says its predictors are “Nath Bose and Albert Einstein”. Both ‘Nath’ and ‘Bose’ are surnames in India, so “Nath Bose” is both anyone and no one at all. Ugh. Another thing is I’ve never heard anyone refer to S.N. Bose as “Nath Bose”, only ‘Satyendranath Bose’ or, simply, ‘Satyen Bose’. Why do Clyne/Fried stick to “Nath Bose”? Was “Satyendra” too hard to pronounce?)

All particles constitute a certain amount of energy, which under some circumstances can increase or decrease. However, the increments of energy in which this happens are well-defined and fixed (hence the ‘quantum’ of quantum mechanics). So, for an oversimplified example, a particle can be said to occupy energy levels constituting 2, 4 or 6 units but never 1, 2.5 or 3 units. Now, when a very-low-density collection of bosons is cooled to an ultra-cold temperature (typically billionths of a kelvin for dilute atomic gases), the bosons increasingly prefer occupying fewer and fewer energy levels. At one point, they will all occupy a single, common level – something that the equivalent rule for fermions, which caps how many of them can share a level at once, would never allow. (In technical parlance, the wavefunctions of all the bosons will merge.)
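
Here’s a minimal sketch of how cold ‘ultra-cold’ is, using the textbook critical temperature of an ideal Bose gas, T_c = (2πħ²/mk_B)·(n/ζ(3/2))^(2/3); the density below is a typical figure assumed for a dilute rubidium-87 cloud, not something taken from the film:

import math

hbar = 1.054571817e-34            # reduced Planck constant, J*s
k_B = 1.380649e-23                # Boltzmann constant, J/K
zeta_3_2 = 2.612                  # Riemann zeta(3/2)

m_rb87 = 87 * 1.66053906660e-27   # kg, mass of a rubidium-87 atom
n = 1e20                          # atoms per cubic metre (assumed typical density)

Tc = (2 * math.pi * hbar**2 / (m_rb87 * k_B)) * (n / zeta_3_2) ** (2.0 / 3.0)
print(f"T_c ~ {Tc * 1e9:.0f} nK")   # a few hundred nanokelvin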

When this condition is achieved, a BEC will have been formed. And in this condition, even if a new boson is added to the condensate, it will be forced into occupying the same level as every other boson in the condensate. This condition is also off-limits for all fermions – except in very special circumstances, circumstances whose exceptionalism perhaps makes way for Spectral’s more fantastic condensate-creatures. We know one such circumstance: superconductivity.

In a superconducting material, electrons flow without any resistance whatsoever at very low temperatures. The most widely applied theory of superconductivity interprets this flow as that of a superfluid, and the ‘sea’ of electrons flowing as such to be a BEC. However, electrons are fermions. To overcome this barrier, Leon Cooper proposed in 1956 that the electrons didn’t form a condensate straight away but that there was an intervening state called a Cooper pair. A Cooper pair is a pair of electrons that become bound, overcoming the repulsion between their like charges thanks to the vibrations of the atoms of the superconducting metal around them. The electrons in a Cooper pair also can’t easily quit their embrace because, once they are bound, their total energy as a pair is lower than it would be otherwise, so breaking up would be destabilising.

Could Spectral’s creatures have represented such superconducting states of matter? It’s definitely science fiction because it’s not too far beyond the bounds of what we know about BECs today (at least in terms of a concept). And in being science fiction, Spectral assumes the liberty to make certain leaps of reasoning – one being, for example, how a BEC-creature is able to ram against an M1 Abrams and still not dissipate. Or how a BEC-creature is able to sit on an electric transformer without blowing up. I get that these are the sort of liberties a sci-fi script is allowed to take, so there’s little point harping on them. However, that Clyne figured the creatures ought to be BECs prompted way more disbelief than anything else because BECs are in the here and the now – and they haven’t been known to behave anything like the creatures in Spectral do.

For some, this information might even help decide if a movie is sci-fi or fantasy. To me, it’s sci-fi.

SPOILERS END

On the more imaginative side of things, Spectral also dwells for a bit on how these creatures might have been created in the first place and how they’re conscious. Any answers to these questions, I’m pretty sure, would be closer to fantasy than to sci-fi. For example, I wonder how the computing capabilities of a very large neural network seen at the end of the movie (not a spoiler, trust me) were available to the creatures wirelessly, or where the power source was that the soldiers were actually after. Spectral does try to skip the whys and hows by having Clyne declare, “I guess science doesn’t have the answer to everything” – but you’re just going “No shit, Sherlock.”

His character is, as this Verge review puts it, exemplarily shallow while the movie never suggests before the climax that science might indeed have all the answers. In fact, the movie as such, throughout its 108 minutes, wasn’t that great for me; it doesn’t ever live up to its billing as a “supernatural Black Hawk Down“. You think about BHD and you remember it being so emotional – Spectral has none of that. It was just obviously more fun to think about the implications of its antagonists being modelled after a phenomenon I’ve often read/written about but never thought about that way.

Relativity’s kin, the Bose-Einstein condensate, is 90 now

Excerpt:

Over November 2015, physicists and commentators alike the world over marked 100 years since the conception of the general theory of relativity, which gave us everything from GPS to black holes, and described the machinations of the universe at the largest scales. Despite many struggles by the greatest scientists of our times, the theory of relativity remains incompatible with quantum mechanics, the rules that describe the universe at its smallest, to this day. Yet it persists as our best description of the grand opera of the cosmos.

Incidentally, Einstein wasn’t a fan of quantum mechanics because of its occasional tendencies to violate the principles of locality and causality. Such violations resulted in what he called “spooky action at a distance”, where particles behaved as if they could communicate with each other faster than the speed of light would have it. It was weirdness the likes of which his conception of gravitation and space-time didn’t have room for.

As it happens, 2015 also marks another milestone, also involving Einstein’s work – as well as the work of an Indian scientist: Satyendra Nath Bose. It’s been 20 years since physicists realised the first Bose-Einstein condensate, which has proved to be an exceptional as well as quirky testbed for scientists probing the strange implications of a quantum mechanical reality.

Its significance today can be understood in terms of three ‘periods’ of research that contributed to it: 1925 onward, 1975 onward, and 1995 onward.

Read the full piece here.

 

The intricacies of being sold on string theory

If you are seeking an appreciation for the techniques of string theory, then Brian Greene’s The Elegant Universe could be an optional supplement. If, on the other hand, you want to explore the epistemological backdrop against which string theory proclaimed its aesthetic vigor, then the book is a must-read. As the title implies, it discusses the elegance of string theory in great and pleasurable detail, beginning with the harmonious resolution of the conflicts between quantum mechanics and general relativity that is its raison d’être, and moving on to why it commands the attention of some of the greatest living scientists.

A bigger victory it secures, however, lies not in simply laying out string theory but in getting you interested in it – and this has become a particularly important task for science in the 21st century.

The counter-intuitive depiction of nature by the principles of modern physics has, since the mid-20th century, suggested that reality can be best understood in terms of mathematical expressions. This contrasted with the simplicity of its preceding paradigm: Newtonian physics, which was less about the mathematics and more about observations, and therefore required fewer interventions to bridge reality as it seemed and reality as it said it was.

Modern physics – encompassing quantum mechanics and Albert Einstein’s theories of relativity – overhauled this simplicity. While reality as it seemed hadn’t changed, reality as they said it was bore no semblance to any of Newton’s work. The process of understanding reality became much more sophisticated, requiring years of training just to prepare oneself to be able to understand it, while probing it required the grandest associations of intellect and hardware.

The trouble getting it across

An overlooked side to this fallout concerned the instruction of these subjects to non-technical audiences, to people who liked to know what was going on but didn’t want to dedicate their lives to it [1]. Both quantum mechanics and general relativity are dominated by advanced mathematics, yet spelling out such abstractions is neither convenient nor effective for non-technical communication. As a result, science communicators have increasingly resorted to metaphors, using them to negotiate with the knowledge their readers already possessed.

This is where The Elegant Universe is most effective, especially since string theory is admittedly more difficult to understand than quantum mechanics or general relativity ever was. In fact, the book’s first few chapters – before Greene delves into string theory – are seasoned with statements of how intricate string theory is, while he does a tremendous job of laying the foundations of modern physics.

Especially admirable is his seamless guidance of the reader from time dilation and Lorentz contraction to quantum superposition to the essentials of superstring theory to the unification of all forces under M-theory, with nary a twitch in between. The examples with which he illustrates important concepts are never mundane, either. His flamboyant writing makes for the proverbial engaging read. You will often find words you wouldn’t quickly use to describe the world around you, conveying a supreme confidence in the subject being discussed.

Consider: “… the gently curving geometrical form of space emerging from general relativity is at loggerheads with the frantic, roiling, microscopic behavior of the universe implied by quantum mechanics”. Or, “With the discovery of superstring theory, musical metaphors take on a startling reality, for the theory suggests that the microscopic landscape is suffused with tiny strings whose vibrational patterns orchestrate the evolution of the cosmos. The winds of charge, according to superstring theory, gust through an aeolian universe.”

More importantly, Greene’s points of view in the book betray a confidence in string theory itself – as if he thinks that it is the only way to unify quantum mechanics and general relativity under an umbrella pithily called the ‘theory of everything’. What it means for you, the reader, is that you can expect The Elegant Universe not to be an exploratory stroll through a garden but more of a negotiation of the high seas.

Taking recourse in emotions

Does this subtract from the objectivity an enthused reader might appreciate – the kind that would have prepared her to tackle the unification problem by herself? Somewhat. It is a subtle flaw in Greene’s reasoning throughout the book: while he devotes many pages to discussing solutions, he spends little time annotating the flaws of string theory itself. Even if no other theory has charted the sea of unification so well, Greene could have maintained some objectivity about it.

At the same time, by the end of the book, you start to think there is no other way to expound on string theory than by constantly retreating into the intensity of emotions and the honest sensationalism they are capable of yielding. For instance, when describing his own work alongside Paul Aspinwall and David Morrison in determining if space can tear in string theory, Greene introduces the theory’s greatest exponent, Edward Witten. As he writes,

“Edward Witten’s razor-sharp intellect is clothed in a soft-spoken demeanor that often has a wry, almost ironic, edge. He is widely regarded as Einstein’s successor in the role of the world’s greatest living physicist. Some would go even further and describe him as the greatest physicist of all time. He has an insatiable appetite for cutting-edge physics problems and he wields tremendous influence in setting the direction of research in string theory.”

Then, in order to convey the difficulty of a problem that the trio was facing, Greene simply states: Witten “lit up upon hearing the ideas, but cautioned that he thought the calculations would be horrendously difficult”. If Witten expects them to be horrendously difficult, then they must indeed be as horrendous as they get.

Such descriptions of magnitude are peppered throughout The Elegant Universe, often clothed in evocative language, and constitute a significant portion of its appeal to a general audience. They rob string theory of its esoteric stature, making the story of its study memorable. Greene has done well to not dwell on the technical intricacies of his subject while still retaining both the wonderment and the frustration of dealing with something so intractable. This, in fact, is his prime achievement in writing the book.

String theory is not about technique

The book was published in 1999. In the years since, many have come to believe that string theory has become dormant. However, that is also where the book scores: not by depicting the theory as being unfalsifiable but as being resilient, as being incomplete enough to dare physicists to follow their own lead in developing it, as being less of a feat of breathtaking mathematics and more a matter of constantly putting one’s beliefs to the test.

Simultaneously, it is unlike the theories of inflationary cosmology that are so flexible that disproving them is like fencing with air. String theory has a sound historical basis in the work of Leonhard Euler, and its careful derivation from those founding principles – to augur the intertwined destinies of space and time – has engaged the efforts of some of the world’s best mathematicians.

Since the late 1960s, when string theory was first introduced, it has gone through alternating periods of reaffirmation and discreditation. Each crest in this journey has been introduced by a ‘superstring revolution’, a landmark hypothesis or discovery that has restored its place in the scientific canon. Each trough, on the other hand, has represented a difficult struggle to attempt to cohere the implications of string theory into a convincing picture of reality.

These struggles are paralleled by Greene’s efforts in composing The Elegant Universe, managing to accomplish what is often lost in the translation of human endeavors: the implications for the common person. This could be in the form of beauty, or a better life, or some form of intellectual satisfaction; in the end, the book succeeds by drawing these possibilities to the fore, for once overshadowing the enormity of the undertaking that string theory will always be.

Buy the book on Amazon.

[1] Although it can also be argued that science communication as a special skill was necessitated by science becoming so complex.

Bohr and the breakaway from classical mechanics

One hundred years ago, Niels Bohr developed the Bohr model of the atom, where electrons go around a nucleus at the center like planets in the Solar System. The model and its implications brought a lot of clarity to the field of physics at a time when physicists didn’t know what was inside an atom, and how that influenced the things around it. For his work, Bohr was awarded the physics Nobel Prize in 1922.

The Bohr model marked a transition from the world of Isaac Newton’s classical mechanics, where gravity was the dominant force and values like mass and velocity were accurately measurable, to that of quantum mechanics, where objects were too small to be seen even with powerful instruments and their exact position didn’t matter.

Even though modern quantum mechanics is still under development, its origins can be traced to humanity’s first thinking of energy as being quantized and not randomly strewn about in nature, and the Bohr model was an important part of this thinking.

The Bohr model

According to the Dane, electrons orbiting the nucleus at different distances were at different energies, and an electron inside an atom – any atom – could only have specific energies. Thus, electrons could ascend or descend through these orbits by gaining or losing a certain quantum of energy, respectively. By allowing for such transitions, the model acknowledged a more discrete energy conservation policy in physics, and used it to explain many aspects of chemistry and chemical reactions.
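
As a concrete illustration of those fixed quanta, here’s a small Python sketch (mine, for illustration) of the hydrogen energy levels the Bohr model predicts, E_n = −13.6 eV/n², and the photon given off when an electron drops between two of them:

RYDBERG_EV = 13.6     # ionization energy of hydrogen, eV
HC_EV_NM = 1239.84    # h*c in eV*nm, to convert photon energy to wavelength

def level_energy(n):
    # energy of the n-th Bohr level of hydrogen, in eV
    return -RYDBERG_EV / n**2

def transition(n_from, n_to):
    # energy lost in the jump (eV) and the emitted photon's wavelength (nm)
    dE = level_energy(n_from) - level_energy(n_to)
    return dE, HC_EV_NM / dE

print(transition(3, 2))   # ~1.89 eV, ~656 nm: the familiar red Balmer line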

Unfortunately, this model couldn’t evolve continuously to become its modern equivalent because it could properly explain only the hydrogen atom, and it couldn’t account for the Zeeman effect.

What is the Zeeman effect? When an electron jumps from a higher to a lower energy-level, it loses some energy. This can be charted using a “map” of energies like the electromagnetic spectrum, showing if the energy has been lost as infrared, UV, visible, radio, etc., radiation. In 1896, Dutch physicist Pieter Zeeman found that this map could be distorted when the energy was emitted in the presence of a magnetic field, leading to the effect named after him.
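
For a sense of the size of that distortion, here’s a minimal sketch using the textbook figure for the normal Zeeman shift – of the order of the Bohr magneton times the field strength; the 1-tesla field is an arbitrary assumption:

MU_B_EV_PER_T = 5.788e-5   # Bohr magneton, eV per tesla

B = 1.0                    # magnetic field in tesla (assumed for illustration)
shift_eV = MU_B_EV_PER_T * B
print(f"level shift ~ {shift_eV * 1e6:.0f} micro-eV per unit of m_l")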

It was only in 1925 that the cause of this behavior was found (by Wolfgang Pauli, George Uhlenbeck and Samuel Goudsmit), attributed to a property of electrons called spin.

The Bohr model couldn’t explain spin or its effects. It wasn’t discarded for this shortcoming, however, because it had succeeded in explaining a lot more, such as the emission of light in lasers, an application developed on the basis of Bohr’s theories and still in use today.

The model was also important for being a tangible breakaway from the principles of classical mechanics, which were useless at explaining quantum mechanical effects in atoms. Physicists recognized this and insisted on building on what they had.

A way ahead

To this end, a German named Arnold Sommerfeld provided a generalization of Bohr’s model – a correction – to let it explain the Zeeman effect in ionized helium (which is hydrogen-like: a single electron orbiting a heavier nucleus).

In 1924, Louis de Broglie introduced particle-wave duality into quantum mechanics, invoking that matter at its simplest could be both particulate and wave-like. As such, he was able to verify Bohr’s model mathematically from a wave’s perspective. Before him, in 1905, Albert Einstein had postulated the existence of light-particles called photons but couldn’t explain how they could be related to heat waves emanating from a gas, a problem he solved using de Broglie’s logic.

All these developments reinforced the apparent validity of Bohr’s model. Simultaneously, new discoveries were emerging that continuously challenged its authority (and classical mechanics’, too): molecular rotation, ground-state energy, Heisenberg’s uncertainty principle, Bose-Einstein statistics, etc. One option was to fall back to classical mechanics and rework quantum theory thereon. Another was to keep moving ahead in search of a solution.

However, this decision didn’t have to be taken because the field of physics itself had started to move ahead in different ways, ways which would become ultimately unified.

Leaps of faith

Between 1900 and 1925, there were a handful of people responsible for opening this floodgate to tide over the centuries-old Newtonian laws. Perhaps the last among them was Niels Bohr; the first was Max Planck, who originated quantum theory when he was working on making light bulbs glow brighter. He found that the smallest bits of energy to be found in nature weren’t random, but actually came in specific amounts that he called quanta.

It is notable that when either of these men began working on their respective contributions to quantum mechanics, they took a leap of faith that couldn’t be spanned by purely scientific reasoning, as is the dominant process today, but by faith in philosophical reasoning and, simply, hope.

For example, Planck wasn’t fond of a class of mechanics he used to establish quantum mechanics. When asked about it, he said it was an “act of despair”, that he was “ready to sacrifice any of [his] previous convictions about physics”. Bohr, on the other hand, had relied on the intuitive philosophy of correspondence to conceive of his model. In fact, even before he had received his Nobel in 1922, Bohr had begun to deviate from his most eminent finding because it disagreed with what he thought were more important, and to be preserved, foundational ideas.

It was also through this philosophy of correspondence that the many theories were able to be unified over the course of time. According to it, a new theory should replicate the results of an older, well-established one in the domain where it worked.

Coming a full circle

Since humankind’s investigation into the nature of physics has proceeded from the large to the small, new attempts to investigate from the small to the large were likely to run into old theories. And when multiple new quantum theories were found to replicate the results of one classical theory, they could be translated between each other by corresponding through the old theory (thus the name).

Because the Bohr model could successfully explain how and why energy was emitted by electrons jumping orbits in the hydrogen atom, it had a domain of applicability. So, it couldn’t be entirely wrong and would have to correspond in some way with another, possibly more successful, theory.

Earlier, in 1924, de Broglie’s formulation was suffering from its own inability to explain certain wave-like phenomena in particulate matter. Then, in 1926, Erwin Schrodinger built on it and, like Sommerfeld did with Bohr’s ideas, generalized them so that they could apply in experimental quantum mechanics. The end result was the famous Schrodinger’s equation.

The Sommerfeld-Bohr theory corresponds with the equation, and this is where it comes “full circle”. After the equation became well known, the Bohr model was finally understood as being a semi-classical approximation of the Schrodinger equation. In other words, the model represented some of the simplest corrections to be made to classical mechanics for it to become quantum in any way.

An ingenious span

After this, the Bohr model became a fully integral part of the foundational ancestry of modern quantum mechanics. While it is today just one of many milestones of comparable significance in the field, it holds a special place in history: a bridge between the older classical thinking and the newer quantum thinking.

Even philosophically speaking, Niels Bohr and his pathbreaking work were important because they planted the seeds of ingenuity in our minds, and led us to think outside of convention.

This article, as written by me, originally appeared in The Copernican science blog on May 19, 2013.


How hard is it to violate Pauli's exclusion principle?

Ultracooled rubidium atoms are bosonic, and start to behave as part of a collective fluid, their properties varying together like shown above. Bosons can do this because they don’t obey Pauli’s principle. Photo: Wikimedia Commons

A well-designed auditorium always has all its seats positioned on an inclined plane. ​Otherwise it wouldn’t be well-designed, would it? Anyway, this arrangement solves an important problem: It lets people sit anywhere they want to irrespective of their heights.

It won’t matter if a taller person sits in front of a shorter one – the inclination will render their height-differences irrelevant.

However, if the plane had been flat, if all the seats were just placed one behind another instead of raising or lowering their distances from the floor, ​then people would have been forced to follow a particular seating order. Like the discs in a game of Tower of Hanoi, the seats must be filled with shorter people coming first if everyone’s view of the stage must be unobstructed.

It’s only logical.​

A similar thing happens inside atoms. While protons and neutrons are packed into a tiny nucleus, electrons orbit the nucleus in relatively much larger orbits. For instance, if the nucleus were 2 m across, the electrons would be orbiting it up to 10 km away. This is because an electron can only come so close to the nucleus before the attraction between its negative charge and the nucleus’s positive charge would pull it in.

However, this doesn’t mean all electrons orbit the nucleus at the same distance. They follow an order. Like the seats on the flat floor where taller people must sit behind shorter ones, more energetic electrons must orbit farther from the nucleus than less energetic ones. Similarly, all electrons of the same energy must orbit the nucleus at the same distance.

Over the years, scientists have observed that around every atom of a known element, there are well-defined energy levels, each accommodating a fixed and known number of electrons. These quantities are determined by various properties of electrons, designated by the particle’s four quantum numbers: n, l, m_s, m_l.

Nomenclature:

1. n is the principal quantum number, and designates the energy level of the electron.

2. l is the azimuthal quantum number, and describes the orbital angular momentum with which the electron is zipping around the nucleus.

3. m_l is the magnetic quantum number, and gives the projection of the orbital angular momentum (l) along a specified axis.

4. m_s is the spin quantum number, and describes the projection of the electron’s “intrinsic” angular momentum, a quantity that doesn’t have a counterpart in Newtonian mechanics.

So, an electron’s occupation of some energy slot around a nucleus depends on the values of the four quantum numbers. And the most significant relation between all of them is the Pauli exclusion principle (PEP): no two electrons in the same atom can have the same values of all four quantum numbers – i.e., they can’t occupy the same quantum state.

An energy level is an example of a quantum state. This means if two electrons exist at the same level inside an atom, and if their n, l and m_l values are equal, then their m_s value (i.e., spin) must be different: one up, one down. Two electrons with equal n, l, m_l, and m_s values couldn’t occupy the same level in the same atom.
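
Here’s a small Python sketch (for illustration) of how the four quantum numbers and the PEP together fix a shell’s capacity: for each n, l runs from 0 to n−1, m_l from −l to +l, and m_s is ±1/2, so each shell holds 2n² electrons.

def states(n):
    # all distinct (n, l, m_l, m_s) combinations allowed for a given shell n
    return [(n, l, m_l, m_s)
            for l in range(n)
            for m_l in range(-l, l + 1)
            for m_s in (+0.5, -0.5)]

for n in (1, 2, 3):
    print(n, len(states(n)))   # prints 2, 8, 18 -- i.e. 2*n**2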

But why?​

The PEP is named for its discoverer, Wolfgang Pauli. Interestingly, Pauli himself couldn’t put a finger on why the principle was the way it was. From his Nobel lecture, 1945 (PDF):

Already in my original paper I stressed the circumstance that I was unable to give a logical reason for the exclusion principle or to deduce it from more general assumptions. I had always the feeling and I still have it today, that this is a deficiency. … The impression that the shadow of some incompleteness [falls] here on the bright light of success of the new quantum mechanics seems to me unavoidable.

It’s not as if the principle’s ontology got sorted out over time, either. In 1963, Richard Feynman said:

“…. Why is it that particles with half-integral spin are Fermi particles (…) whereas particles with integral spin are Bose particles (…)? We apologize for the fact that we can not give you an elementary explanation. An explanation has been worked out by Pauli from complicated arguments from quantum field theory and relativity. He has shown that the two must necessarily go together, but we have not been able to find a way to reproduce his arguments on an elementary level. It appears to be one of the few places in physics where there is a rule which can be stated very simply, but for which no one has found a simple and easy explanation. (…) This probably means that we do not have a complete understanding of the fundamental principle involved. For the moment, you will just have to take it as one of the rules of the world.

(R. Feynman, The Feynman Lectures on Physics, Vol. 3, Chap. 4, Addison-Wesley, Reading, Massachusetts, 1963)

The Ramberg-Snow experiment​

In 1990, two scientists, Ramberg and Snow, devised a simple experiment to study the principle. They connected a thin strip of copper to a 50-ampere current source. Then, they placed an X-ray detector over the strip. When electric current passed through the strip, X-rays would be emitted, which would then be picked up by the detector for analysis.

How did this happen?​

When electrons jump from a higher-energy (i.e., farther) energy level to a lower-energy (closer) one, they must lose some energy to be permitted their new status. The energy can be lost as visible light, X-rays, UV radiation, etc. Because we know how many distinct energy levels there are in the atoms of each element and how much energy each of those orbitals has, electrons jumping levels in different elements must lose different, but fixed, amounts of energy.

So, when current is passed through copper, extra electrons are introduced into the metal, precipitating the forced occupation of some energy-level, like people sitting in the aisles of a full auditorium.

In this scenario, or in any other one for that matter, an electron jumping from the 2p level to the 1s level in a copper atom ​must lose 8.05 keV as X-rays – no more, no less, no differently.
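
As a rough sanity check of that 8.05 keV figure (not the experiment’s own calibration), Moseley’s law for K-alpha X-rays, E ≈ 10.2 eV × (Z − 1)², gets close for copper, whose atomic number Z is 29:

def k_alpha_energy_keV(Z):
    # Moseley's approximation for the K-alpha X-ray energy of element Z
    return 10.2 * (Z - 1) ** 2 / 1000.0   # keV

print(k_alpha_energy_keV(29))   # ~8.0 keV, in line with the 8.05 keV quoted above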

However, Ramberg and Snow found that, after over two months of data-taking at a basement in Fermilab, Illinois, ​about 1 in 170 trillion trillion X-ray signals didn’t contain 8.05 keV but 7.7 keV.

The 1s orbital usually has space for two electrons going by the PEP. If one slot’s taken and the other’s free, then an electron wanting to jump in from the 2p level must lose 8.05 keV. However, ​if an electron was losing 7.7 keV, where was it going?

After some simple calculations, the scientists made a surprising discovery.​ The electron was squeezing itself in with two other electrons in the 1s level itself – instead of resorting to the aisles, it was sitting on another electron’s lap! This meant that the PEP was being violated with a probability of 1 in 170 trillion trillion.

While this is a laughably minuscule number, it’s nevertheless a positive number even taking into account possibly large errors arising out of the unsophisticated nature of the Ramberg-Snow apparatus. Effectively, where we thought there ought to be no violations, there were.

Just like that, there was a hole in our understanding of the exclusion principle.​

And it was the sort of hole with which we could make lemonade.

Into the kitchen​

So fast forward to 2006, 26 lemonade-hungry physicists, one pretentiously titled experiment, and one problem statement: Could the PEP be violated much more or much less often than once in 170 trillion trillion?

The setup was called VIP, for ‘VIolation of the Pauli Exclusion Principle Experiment’. How ingenious. Anyway, the idea was to replicate the Ramberg-Snow experiment in a more sophisticated environment. Instead of a simple circuit that you could build on a table top, they used one housed at the Gran Sasso National Lab.

It was built around the DEAR (DAΦNE Exotic Atom Research) setup, slightly modified to make way for the VIP setup, with CCDs – charge-coupled devices – serving as the X-ray detectors.

(The Gran Sasso National Lab, or Laboratori Nazionali del Gran Sasso, is one of the world’s largest underground particle physics laboratories, consisting of around 1,000 scientists working on more than 15 experiments.​ It is located near the Gran Sasso mountain, between the towns of L’Aquila and Teramo in Italy.)​

​After about three years of data-taking, the team of 26 announced that it had bettered the Ramberg-Snow data by three orders of magnitude. According to data made available in 2009, they declared the PEP had been violated only once every 570,000 trillion trillion electronic level-jumps.​

Fewer yet surely

Hurrah! The principle was being violated 1,000 times less often than thought, but it was being violated still. At this stage, the VIP team seemed to think the number could be smaller still, perhaps 100 times smaller. On March 5, 2013, it submitted a paper (PDF) to the arXiv pre-print server containing a proposal for the more sensitive VIP2.

You might think that since the number is positive anyway, VIP’s efforts are just an attempt to figure out how many angels are dancing on the head of a pin.

Well, think about it this way. The moment we zero in on one value, one frequency with which anomalous level-jumps take place, we’ll be in a position to stick the number into a formula and see what that means for the world around us.

Also, electrons are only one kind of a class of particles called fermions, all of which are thought to obey the PEP. Perhaps other experiments conducted with other fermions, such as tau leptons and muons, will throw up some other rate of violation. In that case, we’ll be able to say the misbehavior is actually dependent on some property of the particle, like its mass, spin, charge, etc.

Until that day, we’ve got to keep trying.​

(This blog post first appeared at The Copernican on March 11, 2013.)

The weakening measurement

Unlike the special theory of relativity, which the superluminal-neutrinos fiasco sought to defy, Heisenberg’s uncertainty principle offers very few – and equally iffy – measurement techniques by which it could stand verified. While both Einstein’s and Heisenberg’s foundations are close to fundamental truths, the uncertainty principle has guided, more than dictated, the applications that involve its consequences. Essentially, a defiance of Heisenberg is one for the statisticians.

And I’m pessimistic. Let’s face it, who wouldn’t be?

Anyway, the parameters involved in the experiment were:

  1. The particles being measured
  2. Weak measurement
  3. The apparatus

The experimenters claim that a value of the photon’s original polarization, X, was obtained upon a weak measurement. Then, a “stronger” measurement was made, yielding a value A. However, according to Heisenberg’s principle, the observation should have changed the polarization from A to some fixed value A’.

Now, the conclusions they drew:

  1. Obtaining X did not change A: X = A
  2. A’ – A < Limits set by Heisenberg

The terms of the weak measurement are understood with the following formula in mind:
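
This is the standard weak-value expression, reproduced here in the notation used below:

Aw = ⟨φ(2)| Â |φ(1)⟩ / ⟨φ(2)|φ(1)⟩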

(The bra-ket, or Dirac, notation signifies the dot-product between two vectors or vector-states.)

Here, φ(1) and φ(2) denote the pre- and post-selected states, A-hat the observable being measured, and Aw the value of the weak measurement. Thus, when the pre-selected state tends toward becoming orthogonal to the post-selected state, the value of the weak measurement increases, becoming large, or “strong”, enough to affect the measured value of A-hat.

In our case: Aw = A – X; φ(1) = A; φ(2) = A’.

As listed above, the sources of error are:

  1. φ(1,2)
  2. X

To prove that Heisenberg was miserly all along, Aw would have been increased until φ(1) • φ(2) equaled 0 (through multiple runs of the same experiment), and then φ(2) – φ(1), or A’ – A, measured and compared to the different corresponding values of X. After determining the strength of the weak measurement thus, A’ – X can be determined.

I am skeptical because X signifies the extent of coupling between the measuring device and the system being measured, and its standard deviation, in the case of this experiment, is dependent on the standard deviation of A’ – A, which is in turn dependent on X.

The philosophies in physics

As a big week for physics comes up – a July 4 update by CERN on the search for the Higgs boson, followed by ICHEP ’12 in Melbourne – I feel really anxious as a small-time proto-journalist and particle-physics enthusiast. If CERN announces the discovery of evidence that rules out the existence of such a thing as the Higgs particle, not much will be lost apart from years of theoretical groundwork set in place for the post-Higgs universe. Physicists obeying the Standard Model will, to use the snowclone, scramble back to their drawing boards and come up with another hypothesis that explains mass-formation in quantum-mechanical terms.

For me… I don’t know what it means. Sure, I will have to unlearn the Higgs mechanism, which does make a lot of sense, and scour through the outpouring of scientific literature that will definitely follow to keep track of new directions and, more fascinatingly, new thought. The competing supertheories – loop quantum gravity (LQG) and string theory – will have to have their innards adjusted to make up for the change in the mechanism of mass-formation. Even then, their principal bone of contention will remain unchanged: whether there exists an absolute frame of reference. All this while, the universe, however, will have continued to witness the rise and fall of stars, galaxies and matter.

It is easier to consider the non-existence of the Higgs boson than its proven existence: the post-Higgs world is dark, riddled with problems more complex and, unsurprisingly, more philosophical. The two theories that dominated the first half of the previous century, quantum mechanics and special relativity, will still have to be reconciled. While special relativity holds causality and locality close to its heart, quantum mechanics’ tendency to violate the latter made it disagreeable at the philosophical level to A. Einstein (in a humorous and ironical turn, his attempts to illustrate this “anomaly” numerically opened up the field that further made acceptable the implications of quantum mechanics).

The theories’ impudent bickering continues in mathematical terms as well. While one prohibits travel at the speed of light, the other allows for correlations that look, for all the world, like superluminal communication. While one keeps all objects nailed to one place in space and time, the other allows for the occupation of multiple regions of space at a time. While one operates in a universe wherein gods don’t play with dice, the other can exist at all only if there are unseen powers that gamble on a second-by-second basis. If you ask me, I’d prefer one with no gods; I also have a strange feeling that that’s not a physics problem.

Speaking of causality, physicists of the Standard Model believe that the four fundamental forces – strong nuclear, weak nuclear, gravitational, and electromagnetic – cause everything that happens in this universe. However, they are at a loss to explain why the weak force is 10^32 times stronger than the gravitational force (even the finding of the Higgs boson won’t fix this – assuming the boson exists). An attempt to explain this anomaly exists in the name of supersymmetry (SUSY) or, together with the Standard Model, MSSM. If an entity in the (hypothetical) likeness of the Higgs boson cannot exist, then MSSM will also fall with it.

Taunting physicists everywhere all the way through this mesh of intense speculation, Werner Heisenberg’s tragic formulation remains indefatigable. In a universe in which the scale at which physics is born is only hypothetical, in which energy in its fundamental form is thought to be a result of probabilistic fluctuations in a quantum field, determinism plays a dominant role in determining the future as well as, in some ways, contradicting it. The quantum field, counter-intuitively, is antecedent to human intervention: Heisenberg postulated that physical quantities such as position and momentum, or spins about different axes, come in conjugate pairs, and that making a measurement of one quantity makes the other indeterminable. In other words, one cannot simultaneously know the position and momentum of a particle, or the spins of a particle around two different axes.
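
For reference, the position-momentum version of this trade-off is usually written Δx·Δp ≥ ħ/2: the product of the uncertainties in position and momentum can never fall below half the reduced Planck constant, no matter how carefully the measurement is made.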

To me, this seems like a problem of scale: humans are macroscopic in the sense that they can manipulate objects using the laws of classical mechanics and not the laws of quantum mechanics. However, a sense of scale is rendered incontextualizable when it is known that the dynamics of quantum mechanics affect the entire universe through a principle called the collapse postulate (i.e., collapse of the state vector): if I measure an observable physical property of a system that is in a particular state, I subject the entire system to collapse into a state that is described by the observable’s eigenstate. Even further, there exist many eigenstates for collapsing into; which eigenstate is “chosen” depends on its observation (this is an awfully close analogue to the anthropic principle).

xkcd #45

That reminds me. The greatest unsolved question in my opinion is whether the universe houses the brain or the brain houses the universe. To be honest, I started writing this post without knowing how it would end: there were multiple eigenstates it could “collapse” into. That it would collapse into this particular one was unknown to me, too, and, in hindsight, there was no way I could have known about any aspect of its destiny. Having said that, given the nature of the universe – and the brain/universe protogenesis problem – and with the knowledge of deterministic causality and mensural antecedence: if the universe conceived the brain, the brain must inherit the characteristics of the universe, and therefore must not allow for free will.

Now, I’m faintly depressed. And yes, this eigenstate did exist in the possibility-space.