Tag Archives: Large Hadron Collider

Credit: SaraRichterArt/pixabay

Exploring what it means to be big

Reading a Nature report titled ‘Step aside CERN: There’s a cheaper way to break open physics‘ (January 10, 2018) brought to mind something G. Rajasekaran, former head of the Institute of Mathematical Sciences, Chennai, told me once: that the future – as the Nature report also touts – belongs to tabletop particle accelerators.

Rajaji (as he is known) said he believed so because of the simple realisation that particle accelerators could only get so big before they’d have to get much, much bigger to tell us anything more. On the other hand, tabletop setups based on laser wakefield acceleration, which could accelerate electrons to higher energies across just a few centimetres, would allow us to perform slightly different experiments such that their outcomes will guide future research.

The question of size is an interesting one (and almost personal: I’m 6’4” tall and somewhat heavy, which means I’ve to start by moving away from seeming intimidating in almost all new relationships). For most of history, humans’ ideas of better included something becoming bigger. From what I can see – which isn’t really much – the impetus for this is founded in five things:

1. The laws of classical physics: They are, and were, multiplicative. To do more or to do better (which for a long time meant doing more), the laws had to be summoned in larger magnitudes and in more locations. This has been true from the machines of industrialisation to scientific instruments to various modes of construction and transportation. Some laws also foster inverse relationships that straightforwardly encourage devices to be bigger to be better.

2. Capitalism, rather commerce in general: Notwithstanding social necessities, bigger often implied better the same way a sphere of volume 4 units has a smaller surface area than four spheres of volume 1 unit each. So if your expenditure is pegged to the surface area – and it often is – then it’s better to pack 400 people on one airplane instead of flying four airplanes with 100 people in each.

3. Sense of self: A sense of our own size and place in the universe, as seemingly diminutive creatures living their lives out under the perennial gaze of the vast heavens. From such a point of view, a show of power and authority would obviously have meant transcending the limitations of our dimensions and demonstrating to others that we’re capable of devising ‘ultrastructures’ that magnify our will, to take us places we only thought the gods could go and achieve simultaneity of effect only the gods could achieve. (And, of course, for heads of state to swing longer dicks at each other.)

4. Politics: Engineers building a tabletop detector and engineers building a detector weighing 50,000 tonnes will obviously run into different kinds of obstacles. Moreover, big things are easier to stake claims over, to discuss, dispute or dislodge. They affect more people even before they have produced their first results.

5. Natural advantages: An example that comes immediately to mind is social networks – not Facebook or Twitter but the offline ones that define cultures and civilisations. Such networks afford people an extra degree of adaptability and improve chances of survival by allowing people to access resources (including information/knowledge) that originated elsewhere. This can be as simple as a barter system where people exchange food for gold, or as complex as a bashful Tamilian staving off alienation in California by relying on the support of the Tamil community there.

(The inevitable sixth impetus is tradition. For example, its equation with growth has given bigness pride of place in business culture, so much so that many managers I’ve met wanted to set up bigger media houses even when it might have been more appropriate to go smaller.)

Against this backdrop of impetuses working together, Ed Yong’s I Contain Multitudes – a book about how our biological experience of reality is mediated by microbes – becomes a saga of reconciliation with a world much smaller, not bigger, yet more consequential. To me, that’s an idea as unintuitive as, say, being able to engineer materials with fantastical properties by sporadically introducing contaminants into their atomic lattice. It’s the sort of smallness whose individual parts amount to very close to nothing, whose sum amounts to something, but the human experience of which is simply monumental.

And when we find that such smallness is able to move mountains, so to speak, it disrupts our conception of what it means to be big. This is as true of microbes as it is of quantum mechanics, as true of elementary particles as it is of nano-electromechanical systems. This is one of the more understated revolutions that happened in the 20th century: the decoupling of bigger and better, a sort of virtualisation of betterment that separated it from additive scale and led to the proliferation of ‘trons’.

I like to imagine what gave us tabletop accelerators also gave us containerised software and a pan-industrial trend towards personalisation – although this would be philosophy, not history, because it’s a trend we compose in hindsight. But in the same vein, both hardware (to run software) and accelerators first became big, riding on the back of the classical and additive laws of physics, then hit some sort of technological upper limit (imposed by finite funds and logistical limitations) and then bounced back down when humankind developed tools to manipulate nature at the mesoscopic scale.

Of course, some would also argue that tabletop particle accelerators wouldn’t be possible, or deemed necessary, if the city-sized ones didn’t exist first, that it was the failure of the big ones that drove the development of the small ones. And they would argue right. But as I said, that’d be history; it’s the philosophy that seems more interesting here.

A screenshot from the film 'The Cloverfield Paradox' (2018). Source: Netflix

All the science in ‘The Cloverfield Paradox’

I watched The Cloverfield Paradox last night, the horror film that Paramount Pictures had dumped with Netflix and which was then released by Netflix on February 4. It’s a dumb production: unlike H.R. Giger’s existential, visceral horrors that I so admire, The Cloverfield Paradox is all about things going bump in the dark. But what sets these things off in the film is quite interesting: a particle accelerator. However, given how bad the film was, the screenwriter seems to have used the accelerator simply as a plot device, nothing else.

The particle accelerator is called Shepard. We don’t know what particles it’s accelerating or up to what centre-of-mass collision energy. However, the film’s premise rests on the possibility that a particle accelerator can open up windows into other dimensions. The Cloverfield Paradox needs this because, according to its story, Earth has run out of energy sources in 2028 and countries are threatening ground invasions for the last of the oil, so scientists assemble a giant particle accelerator in space to tap into energy sources in other dimensions.

Considering 2028 is only a decade from now – when the Sun will still be shining bright as ever in the sky – and renewable sources of energy aren’t even being discussed, the movie segues from sci-fi into fantasy right there.

Anyway, the idea that a particle accelerator can open up ‘portals’ into other dimensions is neither new nor entirely silly. Broadly, an accelerator’s purpose is founded on three concepts: the special theory of relativity (SR), particle decay and the wavefunction of quantum mechanics.

According to SR, mass and energy can transform into each other, and objects moving closer to the speed of light become more massive, thus more energetic. Particle decay is what happens when a heavier subatomic particle decomposes into groups of lighter particles because it’s unstable. Put these two ideas together and you have a part of the answer: accelerators accelerate particles to extremely high velocities, the particles become more massive, ergo more energetic, and the excess energy condenses out at some point as other particles.

Next, in quantum mechanics, the wavefunction is a mathematical function: when you solve it based on what information you have available, the answer spat out by one kind of wavefunction gives the probability that a particular particle exists at some point in the spacetime continuum. It’s called a wavefunction because the function describes a wave, and like all waves, this one also has a wavelength and an amplitude. However, the wavelength here describes the distance across which the particle will manifest. Because energy is directly proportional to frequency (E = h × ν; h is Planck’s constant) and frequency is inversely proportional to the wavelength, energy is inversely proportional to wavelength. So the more energy a particle accelerator achieves, the smaller the part of spacetime the particles will have a chance of probing.

Spoilers ahead

SR, particle decay and the properties of the wavefunction together imply that if the Shepard is able to achieve a suitably high energy of acceleration, it will be able to touch upon an exceedingly small part of spacetime. But why, as it happens in The Cloverfield Paradox, would this open a window into another universe?

Spoilers end

Instead of directly offering a peek into alternate universes, a very-high-energy particle accelerator could offer a peek into higher dimensions. According to some theories of physics, there are many higher dimensions even though humankind may have access only to four (three of space and one of time). The reason physicists think they should exist at all is that they could help solve some conundrums that have evaded explanation. For example, according to Kaluza-Klein theory (one of the precursors of string theory), the force of gravity is so much weaker than the other three fundamental forces (strong nuclear, weak nuclear and electromagnetic) because it exists in five dimensions. So when you experience it in just four dimensions, its effects are subdued.

Where are these dimensions? Per string theory, for example, they are extremely compactified, i.e. accessible only over incredibly short distances, because they are thought to be curled up on themselves. According to Oskar Klein (one half of ‘Kaluza-Klein’, the other half being Theodore Kaluza), this region of space could be a circle of radius 10⁻³² m. That’s 0.00000000000000000000000000000001 m – over five quadrillion times smaller than a proton. According to CERN, which hosts the Large Hadron Collider (LHC), a particle accelerated to 10 TeV can probe a distance of 10⁻¹⁹ m. That’s still some ten trillion times larger than where the Kaluza-Klein fifth dimension is supposed to be curled up. The LHC has been able to accelerate particles to 8 TeV.
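As a rough check on those numbers, the length scale a collider can probe can be estimated as the wavelength corresponding to its collision energy, λ ≈ hc/E. The sketch below isn’t from the original post – it’s just back-of-the-envelope arithmetic using that relation:

```python
# Back-of-the-envelope: the length scale a collider can probe is roughly the
# wavelength corresponding to its collision energy, lambda ~ h*c / E.
HC_EV_M = 1.24e-6  # Planck's constant times the speed of light, in eV·metres

def probe_distance_m(energy_ev: float) -> float:
    """Approximate length scale resolvable at a given collision energy."""
    return HC_EV_M / energy_ev

print(f"10 TeV -> ~{probe_distance_m(10e12):.1e} m")  # ~1e-19 m, CERN's figure above
print(f"8 TeV  -> ~{probe_distance_m(8e12):.1e} m")   # the LHC's run-1 collision energy
# Reaching the supposed Kaluza-Klein radius of 1e-32 m would need ~1e26 eV (~1e17 GeV):
print(f"1e-32 m would need ~{HC_EV_M / 1e-32:.1e} eV")
```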

The likelihood of a particle accelerator tossing us into an alternate universe entirely is a different kind of problem. For one, we have no clue where the connections between alternate universes are nor how they can be accessed. In Nolan’s Interstellar (2014), a wormhole is discovered by the protagonist to exist inside a black hole – a hypothesis we currently don’t have any way of verifying. Moreover, though the LHC is supposed to be able to create microscopic black holes, they have a 0% chance of growing to possess the size or potential of Interstellar’s Gargantua.

In all, The Cloverfield Paradox is a waste of time. In the 2016 film Spectral – also released by Netflix – the science is overwrought, stretched beyond its possibilities, but still stays close to the basic principles. For example, the antagonists in Spectral are creatures made entirely of Bose-Einstein condensates. How this was even achieved boggles the mind, but the creatures have the same physical properties that the condensates do. In The Cloverfield Paradox, however, the accelerator is a convenient insertion into a bland story, an abuse of the opportunities that physics of this complexity offers. The writers might as well have said all the characters blinked and found themselves in a different universe.

Chromodynamics: Gluons are just gonzo

One of the more fascinating bits of high-energy physics is the branch of physics called quantum chromodynamics (QCD). Don’t let the big name throw you off: it deals with a bunch of elementary particles that have a property called colour charge. And one of these particles creates a mess of this branch of physics because of its colour charge – so much so that it participates in the story that it is trying to shape. What could be more gonzo than this? Hunter S. Thompson would have been proud.

Like electrons have electric charge, particles studied by QCD have a colour charge. It doesn’t correspond to a colour of any kind; it’s just a funky name.

(Richard Feynman wrote about this naming convention in his book, QED: The Strange Theory of Light and Matter (p. 163, 1985): “The idiot physicists, unable to come up with any wonderful Greek words anymore, call this type of polarization by the unfortunate name of ‘color,’ which has nothing to do with color in the normal sense.”)

The fascinating thing about these QCD particles is that they exhibit a property called colour confinement. It means that all particles with colour charge can’t ever be isolated. They’re always to be found only in pairs or bigger clumps. They can be isolated in theory if the clumps are heated to the Hagedorn temperature: 1,000 billion billion billion K. But the bigness of this number has ensured that this temperature has remained theoretical. They can also be isolated in a quark-gluon plasma, a superhot, superdense state of matter that has been created fleetingly in particle physics experiments like the Large Hadron Collider. The particles in this plasma quickly collapse to form bigger particles, restoring colour confinement.

There are two kinds of particles that are colour-confined: quarks and gluons. Quarks come together to form bigger particles called mesons and baryons. The aptly named gluons are the particles that ‘glue’ the quarks together.

The force that acts between quarks and gluons is called the strong nuclear force. But this phrasing is misleading: the gluons actually mediate the strong nuclear force. A physicist would say that when two quarks exchange gluons, the quarks are being acted on by the strong nuclear force.

Because protons and neutrons are also made up of quarks and gluons, the strong nuclear force holds the nucleus together in all the atoms in the universe. Breaking this force releases enormous amounts of energy – like in the nuclear fission that powers atomic bombs and the nuclear fusion that powers the Sun. In fact, 99% of a proton’s mass comes from the energy of the strong nuclear force. The quarks contribute the remaining 1%; gluons are massless.

When you pull two quarks apart, you’d think the force between them will reduce. It doesn’t; it actually increases. This is very counterintuitive. For example, the gravitational force exerted by Earth drops off the farther you get away from it. The electromagnetic force between an electron and a proton decreases the more they move apart. But the strong nuclear force between two particles actually increases as those particles move apart. Frank Wilczek called this a “self-reinforcing, runaway process”. This behaviour of the force is what makes colour confinement possible.

However, in 1973, Wilczek, David Gross and David Politzer found that this strengthening doesn’t go on forever: at separations much smaller than about 1 fermi (0.000000000000001 metres, roughly the radius of a proton), the force becomes feeble and the quarks behave almost as if they were free. This is called asymptotic freedom: the grip of the force drops off asymptotically towards zero as the quarks get closer together – equivalently, as the energies involved get higher. Gross, Politzer and Wilczek won the Nobel Prize for physics in 2004 for their work.

In the parlance of particle physics, what makes asymptotic freedom possible is the fact that gluons emit other gluons. How else would you explain the strong nuclear force becoming stronger as the quarks move apart – if not for the gluons that the quarks are exchanging becoming more numerous as the distance increases?

This is the crazy phenomenon that you’re fighting against when you’re trying to set off a nuclear bomb. This is also the crazy phenomenon that will one day lead to the Sun’s death.

The first question anyone would ask now is – doesn’t asymptotic freedom violate the law of conservation of energy?

The answer lies in the nothingness all around us.

The vacuum of deep space in the universe is not really a vacuum. It’s got some energy of itself, which astrophysicists call ‘dark energy’. This energy manifests itself in the form of virtual particles: particles that pop in and out of existence, living for far shorter than a second before dissipating into energy. When a charged particle pops into being, its charge attracts other particles of opposite charge towards itself and repels particles of the same charge away. This is high-school physics.

But when a charged gluon pops into being, something strange happens. An electron has one kind of charge, the positive/negative electric charge. But a gluon contains a ‘colour’ charge and an ‘anti-colour’ charge, each of which can take one of three values. So the virtual gluon will attract other virtual gluons depending on their colour charges and intensify the colour charge field around it, and also change its colour according to whichever particles are present. If this had been an electron, its electric charge and the opposite charge of the particle it attracted would cancel the field out.

This multiplication is what leads to the build up of energy when we’re talking about asymptotic freedom.

Physicists refer to the three values of the colour charge as blue, green and red. (This is more idiocy – you might as well call them ‘baboon’, ‘lion’ and ‘giraffe’.) If a blue quark, a green quark and a red quark come together to form a hadron (a class of particles that includes protons and neutrons), then the hadron will have a colour charge of ‘white’, becoming colour-neutral. Anti-quarks have anti-colour charges: antiblue, antigreen, antired. When a red quark and an antired anti-quark meet, they will annihilate each other – but not so when a red quark and an antiblue anti-quark meet.

Gluons complicate this picture further because, in experiments, physicists have found that gluons behave as if they have both colour and anti-colour. In physical terms, this doesn’t make much sense, but it does in mathematical terms (which we won’t get into). Let’s say a proton is made of one red quark, one blue quark and one green quark. The quarks are held together by gluons, which also have a colour charge. So when two quarks exchange a gluon, the colours of the quarks change. If a blue quark emits a blue-antigreen gluon, the quark turns green whereas the quark that receives the gluon will turn blue. Ultimately, if the proton is ‘white’ overall, then the three quarks inside are responsible for maintaining that whiteness. This is the law of conservation of colour charge.
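Here is a toy piece of bookkeeping for the gluon-exchange example above – purely an illustration of colour conservation, not how QCD calculations are actually done (real QCD works with SU(3) colour states, not integer counters):

```python
# Toy bookkeeping for colour conservation during a gluon exchange.
# Illustration only: real QCD works with SU(3) colour states, not integer counters.
from collections import Counter

def colour_total(particles):
    """Net colour of a set of particles; each particle is a list of (anti-)colours."""
    total = Counter()
    for p in particles:
        for charge in p:
            if charge.startswith("anti-"):
                total[charge[5:]] -= 1
            else:
                total[charge] += 1
    return {k: v for k, v in total.items() if v != 0}

# Step 1: a blue quark and a green quark.
step1 = [["blue"], ["green"]]
# Step 2: the blue quark has emitted a blue/anti-green gluon and turned green.
step2 = [["green"], ["blue", "anti-green"], ["green"]]
# Step 3: the other quark has absorbed the gluon and turned blue.
step3 = [["green"], ["blue"]]

assert colour_total(step1) == colour_total(step2) == colour_total(step3)
print(colour_total(step1))  # {'blue': 1, 'green': 1} at every step
```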

Gluons emit gluons because of their colour charges. When quarks exchange gluons, the quarks’ colour charges also change. In effect, the gluons are responsible for quarks getting their colours. And because the gluons participate in the evolution of the force that they also mediate, they’re just gonzo: they can interact with themselves to give rise to new particles.

A gluon can split up into two gluons or into a quark-antiquark pair. Say a quark and an antiquark are joined together. If you try to pull them apart by supplying some energy, the gluon between them will ‘swallow’ that energy and split up into one antiquark and one quark, giving rise to two quark-antiquark pairs (and also preserving colour-confinement). If you supply even more energy, more quark-antiquark pairs will be generated.

For these reasons, the strong nuclear force is called a ‘colour force’: it manifests in the movement of colour charge between quarks.

In an atomic nucleus, say there is one proton and one neutron. Each particle is made up of three quarks. The quarks in the proton and the quarks in the neutron interact with each other because they are close enough to be colour-confined: the proton-quarks’ gluons and the neutron-quarks’ gluons interact with each other. So the nucleus is effectively one ball of quarks and gluons. However, one nucleus doesn’t interact with that of a nearby atom in the same way because they’re too far apart for gluons to be exchanged.

Clearly, this is quite complicated – not just for you and me but also for scientists, and for supercomputers that perform these calculations for large experiments in which billions of protons are smashed into each other to see how the particles interact. Imagine: there are six types, or ‘flavours’, of quarks, each carrying one of three colour charges. Then there are the gluons, each carrying a combination of a colour and an anti-colour charge (of which eight independent combinations exist).

The Wire
September 20, 2017

Featured image credit: Alexas_Fotos/pixabay.

A gear-train for particle physics

It has come under scrutiny at various times by multiple prominent physicists and thinkers, but it’s not hard to see why, when the idea of ‘grand unification’ was first set out, it seemed plausible to so many. The first time it was seriously considered was about four decades ago, shortly after physicists had realised that two of the four fundamental forces of nature were in fact a single unified force if you ramped up the energy at which it acted (electromagnetic + weak = electroweak). The thought that followed was simply logical: what if, at some extremely high energy (like what was in the Big Bang), all four forces unified into one? This was 1974.

There has been no direct evidence of such grand unification yet. Physicists don’t know how the electroweak force will unify with the strong nuclear force – let alone gravity, a problem that actually birthed one of the most powerful mathematical tools in an attempt to solve it. Nonetheless, they think they know the energy at which such grand unification should occur if it does: the Planck scale, around 10¹⁹ GeV. This is roughly the energy contained in a tankful of petrol, but it’s stupefyingly large when you have to accommodate all of it in a particle that’s 10⁻¹⁵ metres wide.

This is where particle accelerators come in. The most powerful of them, the Large Hadron Collider (LHC), uses powerful electric and magnetic fields to accelerate protons to close to light-speed, when their energy approaches about 7,000 GeV. But the Planck energy is still about a million billion times higher – some 15 orders of magnitude – which means it’s not something we might ever be able to attain on Earth. Nonetheless, physicists’ theories show that that’s where all of our physical laws should be created, where the commandments by which all that exists does should be written.

… Or is it?

There are many outstanding problems in particle physics, and physicists are desperate for a solution. They have to find something wrong with what they’ve already done, something new or a way to reinterpret what they already know. The clockwork theory is of the third kind – and its reinterpretation begins by asking physicists to dump the idea that new physics is born only at the Planck scale. So, for example, it suggests that the effects of quantum gravity (a quantum-mechanical description of gravity) needn’t necessarily become apparent only at the Planck scale but at a lower energy itself. But even if it then goes on to solve some problems, the theory threatens to present a new one. Consider: If it’s true that new physics isn’t born at the highest energy possible, then wouldn’t the choice of any energy lower than that just be arbitrary? And if nothing else, nature is not arbitrary.

To its credit, clockwork sidesteps this issue by simply not trying to find ‘special’ energies at which ‘important’ things happen. Its basic premise is that the forces of nature are like a set of interlocking gears moving against each other, transmitting energy – rather potential – from one wheel to the next, magnifying or diminishing the way fundamental particles behave in different contexts. Its supporters at CERN and elsewhere think it can be used to explain some annoying gaps between theory and experiment in particle physics, particularly the naturalness problem.

Before the Higgs boson was discovered, physicists predicted based on the properties of other particles and forces that its mass would be very high. But when the boson’s discovery was confirmed at CERN in January 2013, its mass implied that the universe would have to be “the size of a football” – which is clearly not the case. So why is the Higgs boson’s mass so low, so unnaturally low? Scientists have fronted many new theories that try to solve this problem but their solutions often require the existence of other, hitherto undiscovered particles.

Clockwork’s solution is a way in which the Higgs boson’s interaction with gravity – rather gravity’s associated energy – is mediated by a string of effects described in quantum field theory that tamp down the boson’s mass. In technical parlance, the boson’s mass becomes ‘screened’. An explanation for this that’s both physical and accurate is hard to draw up because of various abstractions. So as University of Bruxelles physicist Daniele Teresi suggests, imagine this series: Χ = 0.5 × 0.5 × 0.5 × 0.5 × … × 0.5. Even if each step reduces Χ’s value by only a half, it is already an eighth after three steps; after four, a sixteenth. So the effect can get quickly drastic because it’s exponential.
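To make Teresi’s point concrete, here is the arithmetic as a toy loop. The 0.5-per-step suppression is just an assumed number; this is only the arithmetic behind the idea, not the clockwork field theory itself:

```python
# Toy illustration of exponential suppression: each 'gear' in a clockwork-like chain
# passes on only a fraction q of the value it receives. Assumed q = 0.5, for illustration.
q = 0.5
value = 1.0
for step in range(1, 11):
    value *= q
    print(f"after {step:2d} steps: {value:.6f}")  # 0.5, 0.25, 0.125, ... ~0.000977
# After n steps the value is q**n: very small numbers emerge without any tiny input.
```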

And the theory provides a mathematical toolbox that allows for all this to be achieved without the addition of new particles. This is advantageous because it makes clockwork relatively more elegant than another theory that seeks to solve the naturalness problem, called supersymmetry, SUSY for short. Physicists like SUSY also because it allows for a large energy hierarchy: a distribution of particles and processes at energies between electroweak unification and grand unification, instead of leaving the region bizarrely devoid of action like the Standard Model does. But then SUSY predicts the existence of 17 new particles, none of which have been detected yet.

Even more, as Matthew McCullough, one of clockwork’s developers, showed at an ongoing conference in Italy, its solutions for a stationary particle in four dimensions exhibit conceptual similarities to Maxwell’s equations for an electromagnetic wave in a conductor. The existence of such analogues is reassuring because it recalls nature’s tendency to be guided by common principles in diverse contexts.

This isn’t to say clockwork theory is it. As physicist Ben Allanach has written, it is a “new toy” and physicists are still playing with it to solve different problems. Just that in the event that it has an answer to the naturalness problem – as well as to the question why dark matter doesn’t decay, e.g. – it is notable. But is this enough: to say that clockwork theory mops up the math cleanly in a bunch of problems? How do we make sure that this is how nature works?

McCullough thinks there’s one way, using the LHC. Very simplistically: clockwork theory induces fluctuations in the probabilities with which pairs of high-energy photons are created at some energies at the LHC. These should be visible as wavy squiggles in a plot with energy on the x-axis and events on the y-axis. If these plots can be obtained and analysed, and the results agree with clockwork’s predictions, then we will have confirmed what McCullough calls an “irreducible prediction of clockwork gravity”, the case of using the theory to solve the naturalness problem.

To recap: No free parameters (i.e. no new particles), conceptual elegance and familiarity, and finally a concrete and unique prediction. No wonder Allanach thinks clockwork theory inhabits fertile ground. On the other hand, SUSY’s prospects have been bleak since at least 2013 (if not earlier) – and it is one of the more favoured theories among physicists to explain physics beyond the Standard Model, physics we haven’t observed yet but generally believe exists. At the same time, and it bears reiterating, clockwork theory will also have to face down a host of challenges before it can be declared a definitive success. Tik tok tik tok tik tok

Some notes and updates

Four years of the Higgs boson

Missed this, didn’t I. On July 4, 2012, physicists at CERN announced that the Large Hadron Collider had found a Higgs-boson-like particle. Though the confirmation would only come in January 2013 (that it was the Higgs boson and not any other particle), July 4 is the celebrated date. I don’t exactly mark the occasion every year except to recap on whatever’s been happening in particle physics. And this year: everyone’s still looking for supersymmetry; there was widespread excitement about a possible new fundamental particle weighing about 750 GeV when data-taking began at the LHC in late May, but strong rumours from within CERN have it that such a particle probably doesn’t exist (i.e. it’s vanishing in the new data-sets). Pity. The favoured way to anticipate what might come to be well before the final announcements are made in August is to keep an eye out for conference announcements in mid-July. If they’re made, it’s a strong giveaway that something’s been found.

Live-tweeting and timezones

I’ve a shitty internet connection at home in Delhi which means I couldn’t get to see the live-stream NASA put out of its control room or whatever as Juno executed its orbital insertion manoeuvre this morning. Fortunately, Twitter came to the rescue; NASA’s social media team had done such a great job of hyping up the insertion (deservingly so) that it seemed as if all the 480 accounts I followed were tweeting about it. I don’t believe I missed anything at all, except perhaps the sounds of applause. Twitter’s awesome that way, and I’ll say that even if it means I’m stating the obvious. One thing did strike me: all times (of the various events in the timeline) were published in UTC and EDT. This makes sense because converting from UTC to a local timezone is easy (IST = UTC + 5.30) while EDT corresponds to the US east coast. However, the thing about IST being UTC + 5.30 isn’t immediately apparent to everyone (at least not to me), and every so often I wish an account tweeting from India, such as a news agency’s, would use IST. I do it every time.

New music

I don’t know why I hadn’t found Yat-kha earlier considering I listen to Huun Huur Tu so much, and Yat-kha is almost always among the recommendations (all bands specialising in throat-singing). And while Huun Huur Tu likes to keep their music traditional and true to its original compositional style, Yat-kha takes it a step further, banding its sound up with rock, and this tastes much better to me. With a voice like Albert Kuvezin’s, keeping things traditional can be a little disappointing – you can hear why in the song above. It’s called Kaa-khem; the same song by Huun Huur Tu is called Mezhegei. Bass evokes megalomania in me, and it’s all the more sensual when its rendition is accomplished with human voice, rising and falling. Another example of what I’m talking about is called Yenisei punk. Finally, this is where I’d suggest you stop if you’re looking for throat-singing made to sound more belligerent: I stumbled upon War horse by Tengger Cavalry, classified as nomadic folk metal. It’s terrible.

Fall of Light, a part 2

In fantasy trilogies, the first part benefits from establishing the premise and the third, from the denouement. If the second part has to benefit from anything at all, then it is the story itself, not the intensity of the stakes within its narrative. At least, that’s my takeaway from Fall of Light, the second book of Steven Erikson’s Kharkanas trilogy. Its predecessor, Forge of Darkness, established the kingdom of Kurald Galain and the various forces that shape its peoples and policies. Because the trilogy has been described as being a prequel (note: not the prequel) to Erikson’s epic Malazan Book of the Fallen series, and because of what we know about Kurald Galain in the series, the last book of the trilogy has its work cut out for it. But in the meantime, Fall of Light was an unexpectedly monotonous affair – and that was awesome. As a friend of mine has been wont to describe the Malazan series: Erikson is a master of raising the stakes. He does that in all of his books (including the Korbal Broach short-stories) and he does it really well. However, Fall of Light rode with the stakes as they were laid down at the end of the first book, through a plot that maintained the tension at all times. It’s neither eager to shed its burden nor is it eager to take on new ones. If you’ve read the Malazan series, I’d say he’s written another Deadhouse Gates, but better.

Oh, and this completes one of my bigger goals for 2016.

A universe out of sight

Two things before we begin:

  1. The first subsection of this post assumes that humankind has colonised some distant extrasolar planet(s) within the observable universe, and that humanity won’t be wiped out in 5 billion years.
  2. Both subsections assume a pessimistic outlook, and neither of the projections they dwell on might ever come to be while humanity still exists. Nonetheless, it’s still fun to consider them and their science, and, most importantly, their potential to fuel fiction.

Cosmology

Astronomers using the Hubble Space Telescope have captured the most comprehensive picture ever assembled of the evolving universe — and one of the most colourful. The study is called the Ultraviolet Coverage of the Hubble Ultra Deep Field. Caption and credit: hubble_esa/Flickr, CC BY 2.0

Note: An edited version of this post has been published on The Wire.

A new study whose results were reported this morning made for a disconcerting read: it seems the universe is expanding 5-9% faster than we figured it was.

That the universe is expanding at all is disappointing, that it is growing in volume like a balloon and continuously birthing more emptiness within itself. Because of the suddenly larger distances between things, each passing day leaves us lonelier than we were yesterday. The universe’s expansion is accelerating, too, and that doesn’t simply mean objects getting farther away. It means some photons from those objects never reaching our telescopes despite travelling at lightspeed, doomed to yearn forever like Tantalus in Tartarus. At some point in the future, a part of the universe will become completely invisible to our telescopes, remaining that way no matter how hard we try.

And the darkness will only grow, until a day out of an Asimov story confronts us: a powerful telescope bearing witness to the last light of a star before it is stolen from us for all time. Even if such a day is far, far into the future – the effect of the universe’s expansion is perceptible only on intergalactic scales, as the Hubble constant indicates, and simply negligible within the Solar System – the day exists.

This is why we are uniquely positioned: to be able to see as much as we are able to see. At the same time, it is pointless to wonder how much more we are able to see than our successors because it calls into question what we have ever been able to see. Say the whole universe occupies a volume of X, that the part of it that remains accessible to us contains a volume Y, and what we are able to see today is Z. Then: Z < Y < X. We can dream of some future technological innovation that will engender a rapid expansion of what we are able to see, but with Y being what it is, we will likely forever play catch-up (unless we find tachyons, navigable wormholes, or the universe beginning to decelerate someday).

How is the universe’s expansion changing over time? There is a number for this, called the deceleration parameter:

q = −(1 + Ḣ/H²),

where H is the Hubble constant and Ḣ is its first derivative with respect to time. The Hubble constant is the speed at which an object one megaparsec from us is moving away. So, if q is positive, the universe’s expansion is slowing down. If q is zero, then 1/H is the time since the Big Bang. And if q is negative – as scientists have found to be the case – then the universe’s expansion is accelerating.
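A quick, concrete aside on that ‘1/H is the time since the Big Bang’ statement – using an assumed, round value of the Hubble constant of about 70 km/s per megaparsec:

```python
# The Hubble time, 1/H0: for q = 0 this would be the age of the universe.
# H0 = 70 km/s/Mpc is an assumed round value, used only for illustration.
KM_PER_MPC = 3.086e19
SECONDS_PER_YEAR = 3.156e7

H0 = 70 / KM_PER_MPC                            # in units of 1/second
hubble_time_years = 1 / H0 / SECONDS_PER_YEAR
print(f"1/H0 ~ {hubble_time_years:.2e} years")  # ~1.4e10, i.e. about 14 billion years
```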

The age and ultimate fate of the universe can be determined by measuring the Hubble constant today and extrapolating with the observed value of the deceleration parameter, uniquely characterised by values of density parameters (Ω_M for matter and Ω_Λ for dark energy). Caption and credit: Wikimedia Commons

We measure the expansion of the universe from our position: on its surface (because, no, we’re not inside the universe). We look at light coming from distant objects, like supernovae; we work out how much that light is ‘red-shifted’; and we compare that to previous measurements. Here’s a rough guide.

What kind of objects do we use to measure these distances? Cosmologists prefer type Ia supernovae. In a type Ia supernova, a white dwarf (the dense remnant core of a dead star) slowly sucks in matter from an object orbiting it until it becomes hot enough to trigger a fusion reaction. In the next few seconds, the reaction expels 10⁴⁴ joules of energy, visible as a bright fleck in the gaze of a suitable telescope. Such explosions have a unique attribute: the mass of the white dwarf that goes boom is uniform, which means type Ia supernovae across the universe are almost equally bright. This is why cosmologists refer to them as ‘cosmic candles’. Based on how faint these candles are, you can tell how far away they are burning.
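The standard way to turn ‘how faint’ into ‘how far’ is the distance modulus relation, m − M = 5 log₁₀(d/10 pc). This isn’t spelled out in the post; the sketch below uses a commonly quoted peak absolute magnitude of about −19.3 for type Ia supernovae:

```python
# How faintness translates into distance for a 'cosmic candle':
# distance modulus m - M = 5 * log10(d / 10 parsec). Standard astronomy, used here
# with an assumed peak absolute magnitude of -19.3 for type Ia supernovae.
ABSOLUTE_MAG_TYPE_IA = -19.3

def distance_parsecs(apparent_mag: float, absolute_mag: float = ABSOLUTE_MAG_TYPE_IA) -> float:
    """Distance implied by how faint the supernova appears at its peak."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

d_pc = distance_parsecs(24.0)   # a supernova that peaks at apparent magnitude 24
print(f"~{d_pc:.2e} parsecs, i.e. ~{d_pc * 3.26:.2e} lightyears")
```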

After a type Ia supernova occurs, photons set off from its surface toward a telescope on Earth. However, because the universe is continuously expanding, the distance between us and the supernova is continuously increasing. The effective interpretation is that the explosion appears to be moving away from us, becoming fainter. How much it has moved away is derived from the redshift. The wave nature of radiation allows us to think of light as having a frequency and a wavelength. When an object that is moving away from us emits light toward us, the waves of light appear to become stretched, i.e. the wavelength seems to become distended. If the light is in the visible part of the spectrum when starting out, then by the time it reaches Earth, the increase in its wavelength will make it seem redder. And so the name.

The redshift, z – technically known as the cosmological redshift – can be calculated as:

z = (λ_observed – λ_emitted)/λ_emitted

In English: the redshift is the factor by which the observed wavelength is changed from the emitted wavelength. If z = 1, then the observed wavelength is twice as much as the emitted wavelength. If z = 5, then the observed wavelength is six times as much as the emitted wavelength. The farthest galaxy we know (MACS0647-JD) is estimated to be at a distance wherefrom z = 10.7 (corresponding to 13.3 billion lightyears).

Anyway, z is used to calculate the cosmological scale-factor, a(t). This is the formula:

a(t) = 1/(1 + z)

a(t) is then used to calculate the distance between two objects:

d(t) = a(t) d₀,

where d(t) is the distance between the two objects at time t and d₀ is the distance between them at some reference time t₀. Since the scale factor would be constant throughout the universe, d(t) and d₀ can be stand-ins for the ‘size’ of the universe itself.

So, let’s say a type Ia supernova lit up at a redshift of 0.6. This gives a(t) = 0.625 = 5/8. So: d(t) = 5/8 * d0. In English, this means that the universe was 5/8th its current size when the supernova went off. Using z = 10.7, we infer that the universe was one-twelfth its current size when light started its journey from MACS0647-JD to reach us.
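The same arithmetic, spelled out as a couple of lines of code (using nothing beyond the a(t) = 1/(1 + z) formula above):

```python
# Scale factor of the universe at the time light left an object at redshift z,
# relative to today: a = 1 / (1 + z).
def scale_factor(z: float) -> float:
    return 1.0 / (1.0 + z)

print(scale_factor(0.6))    # 0.625 = 5/8, the supernova example above
print(scale_factor(10.7))   # ~0.085, roughly one-twelfth, for MACS0647-JD
print(scale_factor(1089))   # ~0.0009: the universe at the time the CMB was emitted
```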

As it happens, residual radiation from the primordial universe is still around today – as the cosmic microwave background radiation. It originated 378,000 years after the Big Bang, following a period called the recombination epoch, 13.8 billion years ago. Its redshift is 1,089. Phew.

The relation between redshift (z) and distance (in billions of light years). d_H is the comoving distance between you and the object you’re observing. Where it flattens out is the distance out to the edge of the observable universe. Credit: Redshiftimprove/Wikimedia Commons, CC BY-SA 3.0

A curious redshift is z = 1.4, corresponding to a distance of about 4,200 megaparsec (~0.13 trillion trillion km). Objects that are already this far from us will be moving away faster than the speed of light. However, this isn’t faster-than-light travel because it doesn’t involve travelling. It’s just a case of the distance between us and the object increasing at such a rate that, if that distance was once covered by light in time t₀, light will now need t > t₀ to cover it*. The corresponding a(t) = 0.42. I wonder at times if this is what Douglas Adams was referring to (… and at other times I don’t because the exact z at which this happens is 1.69, which means a(t) = 0.37. But it’s something to think about).

Ultimately, we will never be able to detect any electromagnetic radiation from before the recombination epoch 13.8 billion years ago; then again, the universe has since expanded, leaving the supposed edge of the observable universe 46.5 billion lightyears away in any direction. In the same vein, we can imagine there will be a distance (closing in) at which objects are moving away from us so fast that the photons from their surface never reach us. These objects will define the outermost edges of the potentially observable universe, nature’s paltry alms to our insatiable hunger.

Now, a gentle reminder that the universe is expanding a wee bit faster than we thought it was. This means that our theoretical predictions, founded on Einstein’s theories of relativity, have been wrong for some reason; perhaps we haven’t properly accounted for the effects of dark matter? This also means that, in an Asimovian tale, there could be a twist in the plot.

*When making such a measurement, Earthlings assume that Earth as seen from the object is at rest and that it’s the object that is moving. In other words: we measure the relative velocity. A third observer will notice both Earth and the object to be moving away, and her measurement of the velocity between us will be different.


Particle physics

Candidate Higgs boson event from collisions in 2012 between protons in the ATLAS detector on the LHC. Credit: ATLAS/CERN

If the news that our universe is expanding 5-9% faster than we thought portends a stellar barrenness in the future, then another foretells a fecundity of opportunities: in the opening days of its 2016 run, the Large Hadron Collider produced more data in a single day than it did in the entirety of its first run (which led to the discovery of the Higgs boson).

Now, so much about the cosmos was easy to visualise, abiding as it all did with Einstein’s conceptualisation of physics: as inherently classical, and never violating the principles of locality and causality. However, Einstein’s physics explains only one of the two infinities that modern physics has been able to comprehend – the other being the world of subatomic particles. And the kind of physics that reigns over the particles isn’t classical in any sense, and sometimes takes liberties with locality and causality as well. At the same time, it isn’t arbitrary either. How then do we reconcile these two sides of physics?

Through the rules of statistics. Take the example of the Higgs boson: it is not created every time two protons smash together, no matter how energetic the protons are. It is created at a fixed rate – once every ~X collisions. Even better: we say that whenever a Higgs boson forms, it decays to a group of specific particles one-Yth of the time. The value of Y is related to a number called the coupling constant. The lower Y is, the higher the coupling constant is, and the more often the Higgs boson will decay into that group of particles. When estimating a coupling constant, theoretical physicists assess the various ways in which the decays can happen (e.g., Higgs boson → two photons).

A similar interpretation is that the coupling constant determines how strongly a particle and a force acting on that particle will interact. Between the electron and the electromagnetic force is the fine-structure constant,

α = e²/2ε₀hc;

and between quarks and the strong nuclear force is the constant defining the strength of the asymptotic freedom:

α_s(k²) = [β₀ ln(k²/Λ²)]⁻¹
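Plugging rough numbers into that running-coupling formula shows the trend it encodes. The QCD scale Λ and the flavour count below are assumed, round values; this is a one-loop sketch for illustration, not a precision calculation:

```python
# One-loop running of the strong coupling, alpha_s(k^2) = 1 / (beta0 * ln(k^2/Lambda^2)).
# Lambda_QCD and the number of quark flavours are assumed values, for illustration only.
import math

LAMBDA_QCD_GEV = 0.2                        # assumed QCD scale
N_FLAVOURS = 5                              # quark flavours light enough to matter here
BETA0 = (33 - 2 * N_FLAVOURS) / (12 * math.pi)

def alpha_s(k_gev: float) -> float:
    """Strong coupling at momentum transfer k (in GeV), to one loop."""
    return 1.0 / (BETA0 * math.log(k_gev**2 / LAMBDA_QCD_GEV**2))

for k in (2, 10, 91, 1000):                 # 91 GeV is roughly the Z boson mass
    print(f"k = {k:>4} GeV -> alpha_s ~ {alpha_s(k):.3f}")
# The coupling shrinks as k grows: asymptotic freedom in numbers.
```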

So, if the LHC’s experiments require P (number of) Higgs bosons to make their measurements, and its detectors are tuned to detect that group of particles, then many more than P collisions – roughly P divided by the relevant probabilities – ought to have happened. The LHC might be a bad example because it’s a machine on the Energy Frontier: it is tasked with attaining higher and higher energies so that, at the moment the protons collide, heavier and much shorter-lived particles can show themselves. A better example would be a machine on the Intensity Frontier: its aim would be to produce orders of magnitude more collisions to spot extremely rare processes, such as particles that are formed very rarely. Then again, it’s not as straightforward as just being prolific.

It’s like rolling an unbiased die. The chance that you’ll roll a four is 1/6 (i.e. the coupling constant) – but it could happen that you roll the die six times and never get a four. That’s because 1/6 is a long-run frequency, not a guarantee: the odds of getting no four in six rolls are (5/6)⁶, about one in three. Then again, you could roll the die 60 times and still never get a four (though the odds of that happening are far lower). So you decide to take it to the next level: you build a die-rolling machine that rolls the die a thousand times. You would surely have gotten some fours – but say the fraction of fours still isn’t quite one-sixth. So you take it up a notch: you make the machine roll the die a million times. The fraction of fours should by now start converging toward 1/6. This is how a particle accelerator-collider aims to work, and succeeds.
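That die-rolling machine takes only a few lines to simulate – a sketch of the analogy, nothing more:

```python
# The die-rolling machine from the analogy: the observed fraction of fours
# converges to 1/6 only as the number of rolls grows very large.
import random

def fraction_of_fours(rolls: int) -> float:
    return sum(random.randint(1, 6) == 4 for _ in range(rolls)) / rolls

random.seed(1)
for n in (6, 60, 1_000, 1_000_000):
    print(f"{n:>9} rolls -> fraction of fours = {fraction_of_fours(n):.4f}")
# 1/6 ~ 0.1667; small samples scatter widely, large ones converge -
# which is why colliders aim for enormous numbers of collisions.
```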

And this is why the LHC producing as much data as it already has this year is exciting news. That much data means a lot more opportunities for ‘new physics’ – phenomena beyond what our theories can currently explain – to manifest itself. Analysing all this data completely will take many years (physicists continue to publish papers based on results gleaned from data generated in the first run), and all of it will be useful in some way even if very little of it ends up contributing to new ideas.

The steady (logarithmic) rise in luminosity – the number of collision events detected – at the CMS detector on the LHC. Credit: CMS/CERN

Occasionally, an oddball will show up – like a pentaquark, a state of five quarks bound together. As particles in their own right, they might not be as exciting as the Higgs boson, but in the larger scheme of things, they have a role to call their own. For example, the existence of a pentaquark teaches physicists about what sorts of configurations of the strong nuclear force, which holds the quarks together, are really possible, and what sorts are not. However, let’s say the LHC data throws up nothing. What then?

Tumult is what. In the first run, the LHC used to smash two beams of billions of protons, each beam accelerated to 4 TeV and separated into 2,000+ bunches, head on at the rate of two opposing bunches every 50 nanoseconds. In the second run, after upgrades through early 2015, the LHC smashes bunches accelerated to 6.5 TeV once every 25 nanoseconds. In the process, the number of collisions per sq. cm per second increased tenfold, to 1 × 10³⁴. These heightened numbers are there so that new physics has fewer places to hide; we are on the verge of desperation to tease it out, to plumb the weakest coupling constants, because existing theories have not been able to answer all of our questions about fundamental physics (why things are the way they are, etc.). And even the barest hint of something new, something we haven’t seen before, will:

  • Tell us that we haven’t seen all that there is to see**, that there is yet more, and
  • Validate this or that speculative theory over a host of others, and point us down a new path to tread

These, axiomatically, are the desiderata at stake should the LHC find nothing – all the more so now that it has yielded a massive dataset. Of course, not all will be lost: larger, more powerful, more innovative colliders will be built – even as a disappointment will linger. Let’s imagine for a moment that all of them continue to find nothing, and that the day comes to be when the cosmos falls out of our reach, too. Wouldn’t that be maddening?

**I’m not sure of what an expanding universe’s effects on gravitational waves will be, but I presume it will be the same as its effect on electromagnetic radiation. Both are energy transmissions travelling on the universe’s surface at the speed of light, right? Do correct me if I’m wrong.

Prospects for suspected new fundamental particle improve marginally

This image shows a collision event with a photon pair observed by the CMS detector in proton-collision data collected in 2015 with no magnetic field present. The energy deposits of the two photons are represented by the two large green towers. The mass of the di-photon system is between 700 and 800 GeV. The candidates are consistent with what is expected for prompt isolated photons. Caption & credit © 2016 CERN

On December 15 last year, scientists working with the Large Hadron Collider experiment announced that they had found slight whispers of a possible new fundamental particle, and got the entire particle physics community excited. There was good reason: should such a particle’s existence become verified, it would provide physicists some crucial headway in answering questions about the universe that our current knowledge of physics has been remarkably unable to cope with. And on March 17, members of the teams that made the detection presented more details as well as some preliminary analyses at a conference, held every year, in La Thuile, Italy.

The verdict: the case for the hypothesised particle’s existence has got a tad bit stronger. Physicists still don’t know what it could be or if it won’t reveal itself to have been a fluke measurement once more data trickles in by summer this year. At the same time, the bump in the data persists in two sets of measurements logged by two detectors and at different times. In December, the ATLAS detector had presented a stronger case – i.e., a more reliable measurement – than the CMS detector; at La Thuile on March 17, the CMS team also came through with promising numbers.

Because of the stochastic nature of particle physics, the reliability of results is encapsulated by their statistical significance, denoted by σ (sigma). So 3σ would mean the measurements possess a 1-in-350 chance of being a fluke and marks the threshold for considering the readings as evidence. And 5σ would mean the measurements possess a 1-in-3.5 million chance of being a fluke and marks the threshold for claiming a discovery. Additionally, tags called ‘local’ and ‘global’ refer to whether the significance is for a bump exactly at 750 GeV or anywhere in the plot at all.
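The 1-in-N figures come from the tails of a normal distribution, and they differ slightly depending on whether one quotes one-sided or two-sided probabilities (the 1-in-350 figure above is closer to the two-sided value, the 1-in-3.5-million figure to the one-sided one). A quick conversion:

```python
# Converting a significance in sigmas into a 'chance of being a fluke' (the p-value),
# using the tail of a normal distribution. Both conventions shown, since quoted
# figures depend on which one is used.
import math

def one_sided_p(sigma: float) -> float:
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for sigma in (3, 5):
    p1, p2 = one_sided_p(sigma), 2 * one_sided_p(sigma)
    print(f"{sigma} sigma: one-sided ~1 in {1/p1:,.0f}, two-sided ~1 in {1/p2:,.0f}")
# 3 sigma: ~1 in 740 (one-sided) / ~1 in 370 (two-sided)
# 5 sigma: ~1 in 3.5 million (one-sided) / ~1 in 1.7 million (two-sided)
```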

And right now, particle physicists have this scoreboard, as compiled by Alessandro Strumia, an associate professor of physics at Pisa University, who presented it at the conference:

[Image: Strumia’s scoreboard of local and global statistical significances for the 750 GeV excess, from ATLAS and CMS]

Pauline Gagnon, a senior research scientist at CERN, explained on her blog, “Two hypotheses were tested, assuming different characteristics for the hypothetical new particle: the ‘spin 0’ case corresponds to a new type of Higgs boson, while ‘spin 2’ denotes a graviton.” A graviton is a speculative particle carrying the force of gravity. The – rather, a – Higgs boson was discovered at the LHC in July 2012 and verified in January 2013. This was during the collider’s first run, when it accelerated two beams of protons to 4 TeV (1,000 GeV = 1 TeV) each and then smashed them together. The second run kicked off, following upgrades to the collider and detectors during 2014, with a beam energy of 6.5 TeV.

Although none of the significances are as good as they’d have to be for there to be a new ‘champagne bottle boson’ moment (alternatively: another summertime hit), it’s encouraging that the data behind them has shown up over multiple data-taking periods and isn’t failing repeated scrutiny. More presentations by physicists from ATLAS and CMS at the conference, which concludes on March 19, are expected to provide clues about other anomalous bumps in the data that could be related to the one at 750 GeV. If theoretical physicists have such connections to make, their ability to zero in on what could be producing the excess photons becomes much better.

But even more than new analyses gleaned from old data, physicists will be looking forward to the LHC waking up from its siesta in the first week of May, and producing results that could become available as early as June. Should the data still continue to hold up – and the 5σ local significance barrier be breached – then physicists will have just what they need to start a new chapter in the study of fundamental physics just as the previous one was closed by the Higgs boson’s discovery in 2012.

For reasons both technical and otherwise, such a chapter has its work already cut out. The Standard Model of particle physics, a theory unifying the behaviours of different species of particles and which requires the Higgs boson’s existence, is flawed despite its many successes. Therefore, physicists have been, and are, looking for ways to ‘break’ the model by finding something it doesn’t have room for. Both the graviton and another Higgs boson are such things although there are other contenders as well.

The Wire
March 19, 2016

 

Ways of seeing

A lot of the physics of 2015 was about how the ways in which we study the natural world had been improved or were improving.

New LHC data has more of the same but could something be in the offing?

Dijet mass (TeV) v. no. of events. Source: ATLAS/CERN

Looks intimidating, doesn’t it? It’s also very interesting because it contains an important result acquired at the Large Hadron Collider (LHC) this year, a result that could disappoint many physicists.

The LHC reopened earlier this year after receiving multiple performance-boosting upgrades over the 18 months before. In its new avatar, the particle-smasher explores nature’s fundamental constituents at the highest energies yet, almost twice as high as they were in its first run. By Albert Einstein’s mass-energy equivalence (E = mc²), the proton’s mass corresponds to an energy of almost 1 GeV (giga-electron-volt). The LHC’s beam energy, to compare, was 3,500 GeV and is now 6,500 GeV.
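A quick check of that ‘almost 1 GeV’ figure, using nothing but E = mc² and standard values for the proton mass and the speed of light:

```python
# Quick check of the ~1 GeV figure: the proton's rest-mass energy via E = m*c^2.
PROTON_MASS_KG = 1.6726e-27
C_M_PER_S = 2.9979e8
EV_PER_JOULE = 1.0 / 1.602e-19

energy_j = PROTON_MASS_KG * C_M_PER_S**2
energy_gev = energy_j * EV_PER_JOULE / 1e9
print(f"{energy_gev:.3f} GeV")   # ~0.938 GeV, i.e. 'almost 1 GeV'
```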

At the start of December, it concluded data-taking for 2015. That data is being steadily processed, interpreted and published by the multiple topical collaborations working on the LHC. Two collaborations in particular, ATLAS and CMS, were responsible for plots like the one shown above.

This is CMS’s plot showing the same result:

Source: CMS/CERN

When protons are smashed together at the LHC, a host of particles erupt and fly off in different directions, showing up as streaks in the detectors. These streaks are called jets. The plots above look particularly at pairs of jets produced by particles called quarks, anti-quarks and gluons in the proton-proton collisions (they're in fact the smaller particles that make up protons).

The sequence of black dots in the ATLAS plot shows the number of events (i.e. jet pairs) observed at different dijet masses. The red line shows the predicted number of events. They both match, which is good… to some extent.

One of the biggest, and certainly among the most annoying, problems in particle physics right now is that the prevailing theory that explains it all is unsatisfactory – mostly because it has some really clunky explanations for some things. The theory is called the Standard Model and physicists would like to see it disproved, broken in some way.

In fact, those physicists will have gone to work today to be proved wrong – and be sad at the end of the day if they weren’t.

Maintenance work underway at the CMS detector, the largest of the five that straddle the LHC. Credit: CERN

The annoying problem at its heart

The LHC chips in by providing two kinds of opportunities: extremely sensitive particle-detectors that can make precise measurements of fleeting readings, and extremely high collision energies so physicists can explore how some particles behave in thousands of scenarios in search of a surprising result.

So, the plots above show three things. First, the predicted event-count and the observed event-count are a match, which is disappointing. Second, the biggest deviation from the predicted count is highlighted in the ATLAS plot (look at the red columns at the bottom between the two blue lines). It's small, corresponding to two standard deviations (symbol: σ) from the normal. Physicists need at least three standard deviations (3σ) from the normal for license to be excited.

But the third is the most important result (an extension to the first): the predicted event-count and the observed event-count are a match all the way up to 6,000 GeV. In other words: physicists are seeing no cause for joy, and all cause for revalidating a section of the Standard Model, across a wide swath of scenarios.

The section in particular is called quantum chromodynamics (QCD), which deals with how quarks, antiquarks and gluons interact with each other. As theoretical physicist Matt Strassler explains on his blog,

… from the point of view of the highest energies available [at the LHC], all particles in the Standard Model have almost negligible rest masses. QCD itself is associated with the rest mass scale of the proton, with mass-energy of about 1 GeV, again essentially zero from the TeV point of view. And the structure of the proton is simple and smooth. So QCD's prediction is this: the physics we are currently probing is essentially scale-invariant.

Scale-invariance is the idea that two particles will interact the same way no matter how energetic they are. To be sure, the ATLAS/CMS results suggest QCD is scale-invariant in the 0-6,000 GeV range. There’s a long way to go – in terms of energy levels and future opportunities.

Something in the valley

The folks analysing the data are helped along by previous results at the LHC as well. For example, with the collision energy having been ramped up, one would expect to see particles of higher energies manifesting in the data. However, the heavier the particle, the wider the bump in the plot and the more focusing that'll be necessary to really tease out the peak. This is one of the plots that led to the discovery of the Higgs boson:

 

Source: ATLAS/CERN

That bump between 125 and 130 GeV is what was found to be the Higgs, and you can see it's more of a smear than a spike. For heavier particles, that smear's going to be wider, with longer tails on the sides. So any particle that weighs a lot – a few thousand GeV – and is expected to be found at the LHC would have a tail showing in the lower-energy LHC data. But no such tails have been found, ruling out heavier stuff.

And because many replacement theories for the Standard Model involve the discovery of new particles, analysts will tend to focus on particles that could weigh less than about 2,000 GeV.

In fact that’s what’s riveted the particle physics community at the moment: rumours of a possible new particle in the range 1,900-2,000 GeV. A paper uploaded to the arXiv preprint server on December 10 shows a combination of ATLAS and CMS data logged in 2012, and highlights a deviation from the normal that physicists haven’t been able to explain using information they already have. This is the relevant plot:

Source: arXiv:1512.03371v1

 

The ones in the middle and on the right are particularly relevant. They each show the probability of the occurrence of an event (observed as a bump in the data, not shown here) in which a particle of some heavier mass decays into one of two different final states: a W and a Z boson (WZ), or two Z bosons (ZZ). Bosons are a type of fundamental particle; some of them carry forces.

The middle chart implies that the mysterious event is at least 1,000 times less likely to occur than normal, and the one on the right implies it is at least 10,000 times less likely. Both readings are at more than 3σ significance, so people are excited.
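To connect those two ways of speaking, here's a quick sketch (in Python, with scipy; it reproduces only the '1,000 times' and '10,000 times' figures quoted above, not the paper's exact p-values):

  # Converting "1,000x / 10,000x less likely than normal" into the
  # one-sided significance language used above.
  from scipy.stats import norm

  for p in (1e-3, 1e-4):
      sigma = norm.isf(p)  # inverse survival function of the standard normal
      print(f"p = {p:.0e} -> ~{sigma:.1f} sigma")

  # -> ~3.1 sigma and ~3.7 sigma: both past the 3-sigma mark, which is why
  #    people are excited, though still well short of 5 sigma.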

The authors of the paper write: “Out of all benchmark models considered, the combination favours the hypothesis of a [particle or its excitations] with mass 1.9-2.0 [thousands of GeV] … as long as the resonance does not decay exclusively to WW final states.”

But as physicist Tommaso Dorigo points out, these blips could also be a fluctuation in the data, which does happen.

Although the fact that the two experiments see the same effect … is suggestive, that’s no cigar yet. For CMS and ATLAS have studied dozens of different mass distributions, and a bump could have appeared in a thousand places. I believe the bump is just a fluctuation – the best fluctuation we have in CERN data so far, but still a fluke.

There’s a seminar due to happen today at the LHC Physics Centre at CERN where data from the upgraded run is due to be presented. If something really did happen in those ‘valleys’, which were filtered out of a collision energy of 8,000 GeV (basically twice the beam energy, where each beam is a train of protons), then those events would’ve happened in larger quantities during the upgraded run and so been more visible. The results will be presented at 1930 IST. Watch this space.

Featured image: Inside one of the control centres of the collaborations working on the LHC at CERN. Each collaboration handles an experiment, or detector, stationed around the LHC tunnel. Credit: CERN.

A new dawn for particle accelerators in the wake

During a lecture in 2012, G. Rajasekaran, professor emeritus at the Institute of Mathematical Sciences, Chennai, said that the future of high-energy physics lay with engineers being able to design smaller particle accelerators. The theories of particle physics have long been exploring energy levels that we might never be able to reach with accelerators built on Earth. At the same time, physicists must still reach the energies that we can reach, but in ways that are cheaper, more efficient and smaller – because reach them we will have to if our theories must be tested. According to Rajasekaran, the answer is, or will soon be, the tabletop particle accelerator.

In the last decade, tabletop accelerators have inched closer to commercial viability because of a method called plasma wakefield acceleration. Recently, a peer-reviewed experiment detailing the effects of this method was performed at the University of Maryland (UMD) and the results published in the journal Physical Review Letters. A team-member said in a statement: “We have accelerated high-charge electron beams to more than 10 million electron volts using only millijoules of laser pulse energy. This is the energy consumed by a typical household lightbulb in one-thousandth of a second.” Ten MeV pales in comparison to what the world’s most powerful particle accelerator, the Large Hadron Collider (LHC), achieves – a dozen million MeV – but what the UMD researchers have built doesn’t intend to compete against the LHC but against the room-sized accelerators typically used for medical imaging.
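As a quick sanity check on that comparison, here's a back-of-the-envelope sketch; the 60 W rating is my assumption, since the quote doesn't specify the bulb:

  # Back-of-the-envelope check of the lightbulb comparison (the 60 W
  # figure is an assumption; the quote doesn't specify the bulb).
  bulb_power_W = 60.0   # assumed household bulb
  duration_s = 1e-3     # one-thousandth of a second
  energy_mJ = bulb_power_W * duration_s * 1e3
  print(f"{energy_mJ:.0f} mJ")  # -> 60 mJ: tens of millijoules, the same
                                #    order as the laser pulses described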

In a particle accelerator like the LHC or the Stanford linac, a string of radiofrequency (RF) cavities is used to accelerate charged particles around a ring. Energy is delivered to the particles using powerful electromagnetic fields via the cavities, which switch polarity at 400 MHz – that's 400 million times a second. The particles' arrival at the cavities is timed accordingly. Over the course of 15 minutes, the particle bunches are accelerated from 450 GeV to 4 TeV (the beam energy before the LHC was upgraded over 2014), with the bunches going around the ring 11,000 times per second. As the RF cavities are ramped up in energy, the particles swing around faster and faster – until computers bring two such beams into each other's paths at a designated point inside the ring and BANG.
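To get a feel for how gentle each RF kick is, here's a rough order-of-magnitude sketch using only the figures quoted above (not official LHC numbers):

  # Average energy gained per revolution during the ramp described above,
  # using only the figures quoted in the text (an order-of-magnitude sketch).
  e_start_GeV = 450.0
  e_end_GeV = 4000.0          # 4 TeV
  ramp_time_s = 15 * 60       # "over the course of 15 minutes"
  revs_per_s = 11_000         # "11,000 times around the ring per second"

  turns = ramp_time_s * revs_per_s
  gain_per_turn_MeV = (e_end_GeV - e_start_GeV) * 1e3 / turns
  print(f"{turns:.1e} turns, ~{gain_per_turn_MeV:.2f} MeV gained per turn")

  # -> about 9.9 million turns and only ~0.36 MeV gained per turn: each pass
  #    through the cavities nudges the protons just a little.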

A wakefield accelerator also has an electromagnetic field that delivers the energy, but instead of ramping and switching over time, it delivers the energy in one big tug.

First, scientists create a plasma, a fluidic state of matter consisting of free-floating ions (positively charged) and electrons (negatively charged). Then, they shoot two bunches of electrons into the plasma, separated by 15-20 micrometres (millionths of a metre). As the leading bunch moves through the plasma, it pushes away the plasma's electrons and so creates a distinct electric field around itself called the wakefield. The wakefield envelopes the trailing bunch of electrons as well, and exerts two forces on them: one along the direction of the leading bunch, which accelerates the trailing bunch, and one in the transverse direction, which either focuses or defocuses the bunch. And as the two bunches shoot through the plasma, the leading bunch transfers its energy to the trailing bunch via the longitudinal component of the wakefield, and the trailing bunch accelerates.

A plasma wakefield accelerator scores over a bigger machine in two key ways:

  • The wakefield is a very efficient energy-transfer medium (though not as efficient as some natural media) – in effect, a transformer. Experiments at the Stanford Linear Accelerator Centre (SLAC) have recorded 30% efficiency, which is considered high.
  • Wakefield accelerators have been able to push the energy gained per unit distance travelled by the particle to 100 GV/m (a gradient of 1 GV/m imparts an energy of 1 GeV to an electron over 1 metre). Assuming a realistic peak accelerating gradient of 100 MV/m, a similar gain (of 100 GeV) at the SLAC would have taken over a kilometre – as the sketch after this list illustrates.
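Here's a minimal sketch of that second comparison, assuming uniform gradients of 100 MV/m for conventional RF cavities and 100 GV/m for a plasma wakefield (the same figures as in the bullet above):

  # Length needed for a given energy gain at different accelerating gradients.
  def length_for_gain(gain_GeV, gradient_GV_per_m):
      # distance (m) over which a particle gains gain_GeV at a uniform gradient
      return gain_GeV / gradient_GV_per_m

  target_gain_GeV = 100.0
  for label, gradient_GV_per_m in [("RF cavities, ~100 MV/m", 0.1),
                                   ("plasma wakefield, ~100 GV/m", 100.0)]:
      print(f"{label}: {length_for_gain(target_gain_GeV, gradient_GV_per_m):,.1f} m")

  # -> ~1,000 m with conventional RF cavities versus ~1 m in a plasma wakefield.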

There are many ways to push these limits – but it is historically almost imperative that we do. Could the leap in accelerating gradient by a factor of 100 to 1,000 break the slope of the Livingston plot?

Could the leap in accelerating gradient from RF cavities to plasma wakefield accelerators break the Livingston plot? Source: AIP

In the UMD experiment, scientists shot a laser pulse into a hydrogen plasma. The pulse then induced the wakefield, which trailing electrons surfed and were accelerated by. To generate the same wakefield with less laser energy, they made the plasma denser instead, capitalising on an effect called self-focusing.

A laser's electromagnetic field, as it travels through the plasma, makes electrons near it wiggle back and forth as the field's waves pass through. The more intense waves near the pulse's centre make the electrons around it wiggle harder. Since Einstein's theory of relativity requires objects moving faster to weigh more, the harder-wiggling electrons become heavier, slow down and then settle down, creating a focused beam of electrons along the laser pulse. The denser the plasma, the stronger the self-focusing – so a denser plasma can compensate for weaker laser pulses, sustaining a wakefield as strong as one driven by stronger pulses in a less dense plasma.

The UMD team increased the density of the hydrogen gas, from which the plasma is made, by some 20x and found that electrons could be accelerated to 2-12 MeV using 10-50 millijoule laser pulses. Additionally, the scientists found that at high densities, the amplitude of the plasma wave propagated by the laser pulse increases to the point where it traps some electrons from the plasma and continuously accelerates them to relativistic energies. This obviates the need for trailing electrons to be injected separately and increases the efficiency of acceleration.
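To put 'relativistic energies' in perspective, here's a quick sketch of the Lorentz factor and speed implied by those 2-12 MeV kinetic energies (taking the electron's rest energy as 0.511 MeV):

  # How relativistic are 2-12 MeV electrons? Lorentz factor and speed
  # implied by the kinetic energies quoted above.
  import math

  ELECTRON_REST_MeV = 0.511

  for ke_MeV in (2.0, 12.0):
      gamma = 1 + ke_MeV / ELECTRON_REST_MeV
      beta = math.sqrt(1 - 1 / gamma**2)  # speed as a fraction of c
      print(f"{ke_MeV:4.0f} MeV: gamma ~ {gamma:.1f}, v ~ {beta:.4f} c")

  # -> gamma of ~4.9 to ~24.5: the electrons are roughly 5-25 times 'heavier'
  #    than at rest, moving at better than 0.97c.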

But as with all accelerators, there are limitations. Two specific to the UMD experiment are:

  • If the plasma density goes beyond a critical threshold (1.19 × 10²⁰ electrons/cm³) and if the laser pulse is too powerful (>50 mJ), the electrons are accelerated more by the direct shot than by the plasma wakefield. These numbers define an upper limit to the advantage of relativistic self-focusing.
  • The accelerated electrons slowly drift apart (in the UMD case, to at most 250 milliradians) and so require separate structures to keep their beam focused – especially if they will be used for biomedical purposes. (In 2014, physicists from the Lawrence Berkeley National Lab resolved this problem by using a 9-cm long capillary waveguide through which the plasma was channelled.)

There is another way lasers can be used to build an accelerator. In 2013, physicists from Stanford University devised a small glass channel 0.075-0.1 micrometres wide, etched with nanoscale ridges on the floor. When they shone infrared light with a wavelength twice the channel's height across it, the EM field of the light wiggled electrons back and forth – but the ridges on the floor were cut such that electrons passing over the crests would accelerate more than they would decelerate when passing over the troughs. In this way, they achieved an energy-gain gradient of 300 MeV/m. The accelerator is only a few millimetres long and devoid of any plasma, which is difficult to handle.
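For a sense of scale, here's a minimal sketch of the energy gained over a millimetre-scale channel at that 300 MeV/m gradient; the specific lengths are illustrative, not from the Stanford paper:

  # Energy gained over a millimetre-scale dielectric channel at the quoted
  # 300 MeV/m gradient (channel lengths here are illustrative).
  gradient_MeV_per_m = 300.0
  for length_mm in (1, 5, 10):
      gain_MeV = gradient_MeV_per_m * length_mm * 1e-3
      print(f"{length_mm:2d} mm -> {gain_MeV:.1f} MeV")

  # -> 0.3-3 MeV over a few millimetres: useful only if the electrons arrive
  #    pre-accelerated, which is the shortcoming noted next.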

At the same time, this method shares a shortcoming with the (non-laser driven) plasma wakefield accelerator: both require the electrons to be pre-accelerated before injection, which means room-sized pre-accelerators are still in the picture.

Physical size is an important aspect of particle accelerators because, the way we're building them, the higher-energy ones are massive. The LHC currently collides particles at 13 TeV (1 TeV = 1 million MeV) in a 27-km long underground tunnel running beneath the shared borders of France and Switzerland. The planned Circular Electron-Positron Collider in China envisages a 54.7-km long ring, whose tunnel could later host a proton collider reaching energies of up to 100 TeV (both the LHC and the CEPC involve pre-accelerators that are quite big – but not as big as the final-stage ring). The International Linear Collider will comprise a straight tube, instead of a ring, over 30 km long to achieve collision energies of 500 GeV to 1 TeV. In contrast, Georg Korn suggested in APS Physics in December 2014 that a hundred 10-GeV electron acceleration modules could be lined up facing a hundred 10-GeV positron acceleration modules to make a collider that could compete with the ILC – from atop a table.
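To see what Korn's suggestion implies for size, here's a rough sketch; the 10 GV/m gradient per module is my assumption, while the module count and per-module energy come from the text above:

  # Scale implied by staging many wakefield modules. The 10 GV/m gradient is
  # an assumption; the module count and per-module energy are from the text.
  modules = 100
  gain_per_module_GeV = 10.0
  assumed_gradient_GV_per_m = 10.0

  beam_energy_TeV = modules * gain_per_module_GeV / 1e3
  active_length_m = modules * (gain_per_module_GeV / assumed_gradient_GV_per_m)
  print(f"~{beam_energy_TeV:.0f} TeV per beam over ~{active_length_m:.0f} m of plasma")

  # -> ~1 TeV per beam over roughly 100 m of active plasma, versus the ILC's
  #    30+ km of tunnel.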

In all these cases, the net energy gain per distance travelled (by the accelerated particle) was low compared to the gain in wakefield accelerators: 250 MV/m versus 10-100 GV/m. This is the physical difference that translates to a great reduction in cost (from billions of dollars to thousands), which in turn stands to make particle accelerators accessible to a wider range of people. As of 2014, there were at least 30,000 particle accelerators around the world – up from 26,000 in 2010 according to a Physics Today census. More importantly, the latter estimated that almost half the accelerators were being used for medical imaging and research, such as in radiotherapy, while the really high-energy devices (>1 GeV) used for physics research numbered a little over 100.

These are encouraging numbers for India, which imports 75% of its medical imaging equipment, spending more than Rs. 30,000 crore a year (2015). They are also encouraging numbers for developing nations in general that want to get in on experimental high-energy physics, innovations in which power a variety of applications, ranging from cleaning coal to detecting WMDs, not to mention expanding their medical imaging capabilities as well.

Featured image credit: digital cat/Flickr, CC BY 2.0.