Colliders of the future: LHeC and FCC-he

In this decade, CERN is exploiting and upgrading the LHC – but not constructing “the next big machine”.

Looking into a section of the 6.3-km long HERA tunnel at Deutsches Elektronen-Synchrotron (DESY), Hamburg. Source: DESY

For many years, one of the world’s most powerful scopes, as in a microscope, was the Hadron-Elektron Ring Anlage (HERA) particle accelerator in Germany. Where scopes bounce electromagnetic radiation – like visible light – off surfaces to reveal information hidden to the naked eye, accelerators reveal hidden information by bombarding the target with energetic particles. At HERA, those particles were electrons accelerated to 27.5 GeV. At this energy, the particles can probe a distance of a few hundredths of a femtometer (earlier called fermi) – 2.5 million times better than the 0.1 nanometers that atomic force microscopy can achieve (of course, they’re used for different purposes).

The electrons were then collided head on against protons accelerated to 920 GeV.

Unlike protons, electrons aren’t made up of smaller particles and are considered elementary. Moreover, protons are approximately 2,000-times heavier than electrons. As a result, the high-energy collision is less a symmetric smash-up and more an electron scattering off a proton: the electron imparts some of its energy to the proton before being deflected (imagined as the electron emitting a photon, which the proton then absorbs). This is called deep inelastic scattering: ‘deep’ for high-energy; ‘inelastic’ because the proton absorbs some energy.
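
To put the two beam energies quoted above together – a back-of-the-envelope sketch in Python, assuming head-on beams and neglecting the particle masses, which are tiny next to the beam energies:

```python
from math import sqrt

# Centre-of-mass energy of an asymmetric head-on collider:
# sqrt(s) ~ 2 * sqrt(E_electron * E_proton), with masses neglected.
E_e = 27.5    # GeV, HERA's electron beam
E_p = 920.0   # GeV, HERA's proton beam

sqrt_s = 2 * sqrt(E_e * E_p)
print(f"HERA centre-of-mass energy ~ {sqrt_s:.0f} GeV")  # ~318 GeV
```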

One of the most famous deep-inelastic scattering experiments was conducted in 1968 at the Stanford Linear Accelerator Center. There, the perturbed protons were observed to ‘emit’ other particles – essentially hitherto undetected constituent particles that escaped the proton and recombined into other kinds of composite particles. The constituents were initially dubbed partons but were later identified as quarks and anti-quarks (the matter/anti-matter particles) and gluons (the force-particles that hold the quarks and anti-quarks together).

HERA was shut down in June 2007. Five years later, plans were presented for a successor at least 100-times more sensitive than HERA – the Large Hadron-electron Collider (LHeC). As the name indicates, it is proposed to be built adjoining the Large Hadron Collider (LHC) complex at CERN by 2025 – a timeframe based on when the high-luminosity phase of the LHC is set to begin (2024).

Timeline for the LHeC. Source: CERN

On December 15, physicists working on the LHC announced new results obtained from the collider – two in particular stood out. One was cause for great, yet possibly premature, excitement: a hint of an as-yet unknown particle weighing around 747 GeV. The other was cause for a bit of dismay: quantum chromodynamics (QCD), the theory that deals with the physics of quarks, anti-quarks and gluons, seemed flawless across a swath of energies. Some physicists were hoping it wouldn’t be so (because its flawlessness has come at the cost of being unable to explain some discoveries, like dark matter). Over the next decade, the LHC will push the energy frontier further to see – among other things – if QCD ‘breaks’, becoming unable to explain a possible new phenomenon.

Against this background, the LHeC is being pitched as a machine dedicated to examining this breakpoint, and others like it, in more detail than the LHC is equipped for. One helpful factor is that when electrons are one of the particles participating in a collision, physicists don’t have to worry about how the energy is distributed among constituent particles, since electrons don’t have any. Hadron collisions, on the other hand, have to contend with quarks, anti-quarks and gluons, and are tougher to analyse.

An energy recovery linac (in red) shown straddling the LHC ring. A rejected design involved installing the electron-accelerator (in yellow) concentrically with the LHC ring. Source: CERN

So, to accomplish this, the team behind the LHeC is considering installing a pill-shaped machine called an energy recovery linac (ERL), straddling the LHC ring (shown above), to produce a beam of electrons that would then be collided with protons accelerated in the main LHC ring – making up the ‘linac-ring LHeC’ design. An earlier suggestion, to install the LHeC as an electron ring running alongside the LHC ring, was rejected because its construction would hamper the experiments already underway. In this design, the electrons will be accelerated to 60 GeV and the protons to 7,000 GeV. The total wall-plug power to the ERL is being capped at 100 MW.

The ERL has a slightly different acceleration mechanism from the LHC, and doesn’t simply accelerate particles continuously around a ring. First, the electrons are accelerated through a radio-frequency (RF) field in a linear accelerator (linac – the straight section of the ERL) and then fed into a circular channel, crisscrossed by magnetic fields, curving into the rear end of the linac. The length of the circular channel is such that by the time the electrons have travelled along it, their phase relative to the RF field has shifted by 180º – the field that once pushed them forward now pushes against them. So when the out-of-phase electrons reenter the linac, they decelerate. Their kinetic energy is given back to the RF field, which intensifies and so provides a bigger kick to the new batch of particles being injected into the linac at just that moment. This way, the linac recovers the kinetic energy from each circulation.
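
Here’s a minimal numerical sketch of that phase trick. The cavity voltage is an assumed, illustrative figure, not an LHeC design parameter; the point is only that the same RF field that adds energy on the first pass takes it back on the return pass, 180º later.

```python
import numpy as np

# Energy exchanged with an RF cavity as a function of arrival phase.
V_CAVITY = 1.0  # GeV of maximum energy gain per pass (assumed for illustration)

def energy_kick(phase_deg: float) -> float:
    """Energy gained from (+) or returned to (-) the RF field in one pass."""
    return V_CAVITY * np.cos(np.radians(phase_deg))

print(energy_kick(0))    # first pass, on-crest: +1.0 GeV gained
print(energy_kick(180))  # return pass, 180 degrees later: -1.0 GeV, recovered by the field
```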

Such a mechanism is employed in the first place because the energy a particle loses as synchrotron radiation – emitted whenever its path is bent by magnetic fields – increases drastically as the particle’s mass gets lower.
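
For a rough sense of how drastic that mass dependence is: the radiated power on a bent path scales as the fourth power of energy divided by the fourth power of mass, so at the same energy and bending radius an electron radiates enormously more than a proton. A one-line sketch:

```python
# Synchrotron loss scales as (E/m)^4 at a fixed bending radius, so the
# electron-to-proton ratio at equal beam energy is (m_p / m_e)^4.
m_e = 0.000511   # GeV, electron rest mass-energy
m_p = 0.938      # GeV, proton rest mass-energy

ratio = (m_p / m_e) ** 4
print(f"an electron radiates ~{ratio:.1e} times more than a proton")  # ~1.1e13
```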

The bluish glow from the central region of the Crab Nebula is due to synchrotron radiation. Credit: NASA-ESA/Wikimedia Commons

Keeping in mind the need to explore new areas of physics – especially those associated with leptons (elementary particles of which electrons are a kind) and quarks/gluons (described by QCD) – the energy of the electrons coming out of the ERL is currently planned to be 60 GeV. They will be collided with accelerated protons by positioning the ERL tangential to the LHC ring. And at the moment of the collision, CERN’s scientists hope that they will be able to use the LHeC to study:

  • Predicted unification of the electromagnetic and weak forces (into an electroweak force): The electromagnetic force of nature is mediated by the particles called photons while the weak force, by particles called W and Z bosons. Whether the scientists will observe the unification of these forces, as some theories predict, is dependent on the quality of electron-proton collisions. Specifically, if the square of the momentum transferred between the particles can reach up to 8-9 TeV, the collider will have created an environment in which physicists will be able to probe for signs of an electroweak force at play.
  • Gluon saturation: To quote from an interview given by theoretical physicist Raju Venugopalan in January 2013: “We all know the nuclei are made of protons and neutrons, and those are each made of quarks and gluons. There were hints in data from the HERA collider in Germany and other experiments that the number of gluons increases dramatically as you accelerate particles to high energy. Nuclear physics theorists predicted that the ions accelerated to near the speed of light at the [Relativistic Heavy Ion Collider] would reach an upper limit of gluon concentration – a state of gluon saturation we call colour glass condensate.”
  • Higgs bosons: On July 4, 2012, Fabiola Gianotti, soon to be the next director-general of CERN but then the spokesperson of the ATLAS experiment at the LHC, declared that physicists had found a Higgs boson. Widespread celebrations followed – while a technical nitpick remained: physicists only knew the particle resembled a Higgs boson and might not have been the real thing itself. Then, in March 2013, the particle was most likely identified as being a Higgs boson. And even then, one box remained to be checked: that it was the Higgs boson, not one of many kinds. For that, physicists have been waiting for more results from the upgraded LHC. But a machine like the LHeC would be able to produce a “few thousand” Higgs bosons a year, enabling physicists to study the elusive particle in more detail, confirm more of its properties – or, more excitingly, find that that’s not the case – and look for higher-energy versions of it.

A 2012 paper detailing the concept also notes that, should the LHC find signs that ‘new physics’ could exist beyond the LHeC’s default energy levels, the ERL could be extended to accelerate electrons to up to 140 GeV.

The default configuration of the proposed ERL. The bending arcs are totally about 19 km long (three to a side at different energies). Source: CERN

The unique opportunity presented by an electron-proton collider working in tandem with the LHC goes beyond the mammoth energies to a property called luminosity as well. Luminosity measures the rate at which a collider squeezes particles through a given area – and so how many collision events it can deliver – and is expressed in cm^-2 s^-1. Accumulated over a period of running, it becomes the integrated luminosity, usually quoted in inverse femtobarns (fb^-1): a femtobarn is 10^-39 sq. cm, so a dataset of 10 fb^-1 corresponds to 10^40 particles having been squeezed through each sq. cm. At the LHeC, a luminosity of 10^33 cm^-2 s^-1 is expected to be achieved, and physicists hope that with some tweaks it can be hiked by yet another order of magnitude. To compare: this is 100x what HERA achieved, providing an unprecedented scale at which to explore the effects of deep inelastic scattering.
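
Some back-of-the-envelope bookkeeping for those units, as a sketch (the cross-section at the end is an arbitrary illustrative value, not a prediction for any LHeC process):

```python
FEMTOBARN_CM2 = 1e-39                        # 1 fb expressed in sq. cm

# Integrated luminosity: 10 fb^-1 is 10 / (1e-39 sq. cm) = 1e40 per sq. cm.
integrated_lumi_cm2 = 10 / FEMTOBARN_CM2
print(f"10 fb^-1 = {integrated_lumi_cm2:.0e} cm^-2")

# Expected event count for a given process: N = cross-section x integrated luminosity.
sigma_cm2 = 1e-36                            # 1 picobarn, assumed for illustration
print(f"expected events: {integrated_lumi_cm2 * sigma_cm2:.0f}")  # 10000
```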

It’s also about 100x lower than that of the HL-LHC, the high-luminosity configuration of the LHC with which the ERL will be operating to make up the LHeC. And the LHeC’s lifetime will be the planned lifetime of the LHC – till the 2030s, about a decade. In the same period, if all goes well, a Chinese behemoth will have taken shape: the Circular Electron-Positron Collider (CEPC), with a circumference twice that of the LHC. In its proton-proton collision configuration – paralleling the LHC’s – China claims it will reach collision energies of 70,000 GeV (as against the LHC’s design energy of 14,000 GeV) and a luminosity comparable to the HL-LHC’s. And when its electron-positron collision configuration, which the LHeC will be able to mimic, is at its best, physicists reckon the CEPC will be able to produce 100,000 Higgs bosons a year.

Timeline for operation of the Future Circular Collider being considered. Source: CERN

 

As it happens, some groups at CERN are already drawing up plans, due to be presented in 2018, for a machine dwarfing even the CEPC. Meet the Future Circular Collider (FCC), by one account the “ultimate precision-physics machine” (and funnily named by another). To be fair, the FCC has been under consideration since about 2013, independently of the CEPC. In sheer size, however, the FCC could swallow the CEPC – with an 80-100 km-long ring. It will also be able to accelerate protons to 50,000 GeV (by 2040), attain luminosities of 10^35 cm^-2 s^-1, continue to work with the ERL, function as an electron-positron collider, and look for particles weighing up to 25,000 GeV (currently the heaviest known fundamental particle is the top quark, weighing about 173 GeV).

An illustration showing a possible location and size, relative to the LHC (in white) of the FCC. The main tunnel is shown as a yellow dotted line. Source: CERN

And should it be approved and come online in the second half of the 2030s, there’s a good chance the world will be a different place, too: not just the CEPC – there will be (or will have been?) either the International Linear Collider (ILC) or the Compact Linear Collider (CLIC) as well. ‘Either’ because they’re both linear accelerators with similar physical dimensions, both planning to collide electrons with positrons (their antiparticles) to study QCD, the Higgs field and the prospect of higher dimensions – so only one of them might get built. Each will require a decade to build, coming online in the late 2020s. The biggest difference between them is that the ILC will be able to reach collision energies of 1,000 GeV while the CLIC (whose idea was conceived at CERN) will reach 3,000 GeV.

FCC-he = proton-electron collision mode; FCC-hh = proton-proton collision mode; SppC = CEPC’s proton-proton collision mode.

Some thoughts on the nature of cyber-weapons

With inputs from Anuj Srivas.

There’s a hole in the bucket.

When someone asks for my phone number, I’m on alert, even if it’s just so my local supermarket can tell me about new products on its shelves. The same goes for my email ID, so the taxi company I regularly use can send me ride receipts, or for permission to peek into my phone, if only to see what other music I have stored – all vaults of information I haven’t been too protective about, but which have of late acquired a notorious potential to reveal things about me I never thought I could reveal so passively.

It’s not everywhere, but those aware of the risks of possessing an account with Google or Facebook have been making polar choices: either wilfully surrendering information or wilfully withholding it – the neutral middle-ground is becoming mythical. Wariness of telecommunications is on the rise. In an effort to protect our intangible assets, we’re constantly creating redundant, disposable ones – extra email IDs, anonymous Twitter accounts, deliberately misidentified Facebook profiles. We know the Machines can’t be shut down, so we make ourselves unavailable to them. And we succeed to different extents, but none of us completely – there’s a bit of our digital DNA in government files, much like with the kompromat maintained by East Germany and the Soviet Union during the Cold War.

In fact, is there an equivalence between the conglomerates surrounding nuclear weapons and cyber-weapons? Solly Zuckerman (1904-1993), once Chief Scientific Adviser to the British government, famously said:

When it comes to nuclear weapons … it is the man in the laboratory who at the start proposes that for this or that arcane reason it would be useful to improve an old or to devise a new nuclear warhead. It is he, the technician, not the commander in the field, who is at the heart of the arms race.

These words are still relevant but could they have accrued another context? To paraphrase Zuckerman – “It is he, the programmer, not the politician in the government, who is at the heart of the surveillance state.”

An engrossing argument presented in the Bulletin of the Atomic Scientists on November 6 seemed an uncanny parallel to one of whistleblower Edward Snowden’s indirect revelations about the National Security Agency’s activities. In the BAS article, nuclear security specialist James Doyle wrote:

The psychology of nuclear deterrence is a mental illness. We must develop a new psychology of nuclear survival, one that refuses to tolerate such catastrophic weapons or the self-destructive thinking that has kept them around. We must adopt a more forceful, single-minded opposition to nuclear arms and disempower the small number of people who we now permit to assert their intention to commit morally reprehensible acts in the name of our defense.

This is akin to the multiple articles that appeared following Snowden’s exposé in 2013 – that the paranoia-fuelled NSA was gathering more data than it could meaningfully process, much more data than might be necessary to better equip the US’s counterterrorism measures. For example, four experts argued in a policy paper published by the nonpartisan think-tank New America in January 2014:

Surveillance of American phone metadata has had no discernible impact on preventing acts of terrorism and only the most marginal of impacts on preventing terrorist-related activity, such as fundraising for a terrorist group. Furthermore, our examination of the role of the database of U.S. citizens’ telephone metadata in the single plot the government uses to justify the importance of the program – that of Basaaly Moalin, a San Diego cabdriver who in 2007 and 2008 provided $8,500 to al-Shabaab, al-Qaeda’s affiliate in Somalia – calls into question the necessity of the Section 215 bulk collection program. According to the government, the database of American phone metadata allows intelligence authorities to quickly circumvent the traditional burden of proof associated with criminal warrants, thus allowing them to “connect the dots” faster and prevent future 9/11-scale attacks.

Yet in the Moalin case, after using the NSA’s phone database to link a number in Somalia to Moalin, the FBI waited two months to begin an investigation and wiretap his phone. Although it’s unclear why there was a delay between the NSA tip and the FBI wiretapping, court documents show there was a two-month period in which the FBI was not monitoring Moalin’s calls, despite official statements that the bureau had Moalin’s phone number and had identified him. This undercuts the government’s theory that the database of Americans’ telephone metadata is necessary to expedite the investigative process, since it clearly didn’t expedite the process in the single case the government uses to extol its virtues.

So, just as nuclear weapons seem to be plausible but improbable threats fashioned to fuel the construction of ever more nuclear warheads, terrorists are presented as threats who can be neutralised by surveilling everything and by calling for companies to provide weakened encryption so governments can tap civilian communications more easily. This state of affairs also points to there being a cyber-congressional complex paralleling the nuclear-congressional complex that, on the one hand, exalts the benefits of being a nuclear power while, on the other, demands absolute secrecy and faith in its machinations.

However, there could be reason to believe cyber-weapons present a more insidious threat than their nuclear counterparts, a sentiment fuelled by challenges on three fronts:

  1. Cyber-weapons are easier to miss – and the consequences of their use are easier to disguise, suppress and dismiss
  2. Lawmakers are yet to figure out the exact framework of multilateral instruments that will minimise the threat of cyber-weapons
  3. Computer scientists have been slow to recognise the moral character and political implications of their creations

That cyber-weapons are easier to miss – and the consequences of their use are easier to disguise, suppress and dismiss

In 1995, Joseph Rotblat won the Nobel Peace Prize for helping found the Pugwash Conferences against nuclear weapons in the 1950s. In his lecture, he lamented the role scientists had wittingly or unwittingly played in developing nuclear weapons, invoking those words of Zuckerman quoted above before going on to add:

If all scientists heeded [Hans Bethe’s] call there would be no more new nuclear warheads; no French scientists at Mururoa; no new chemical and biological poisons. The arms race would be truly over. But there are other areas of scientific research that may directly or indirectly lead to harm to society. This calls for constant vigilance. The purpose of some government or industrial research is sometimes concealed, and misleading information is presented to the public. It should be the duty of scientists to expose such malfeasance. “Whistle-blowing” should become part of the scientist’s ethos. This may bring reprisals; a price to be paid for one’s convictions. The price may be very heavy…

The perspectives of both Zuckerman and Rotblat were situated in the aftermath of the nuclear bombings that closed the Second World War. The ensuing devastation beggared comprehension in its scale and scope – yet its effects were there for all to see, all too immediately. The flattened cities of Hiroshima and Nagasaki became quick (but unwilling) memorials for the hundreds of thousands who were killed. What devastation is there to see for the thousands of Facebook and Twitter profiles being monitored, email IDs being hacked and phone numbers being trawled? What about it at all could appeal to the conscience of future lawmakers?

As John Arquilla writes on the CACM blog,

Nuclear deterrence is a “one-off” situation; strategic cyber attack is much more like the air power situation that was developing a century ago, with costly damage looming, but hardly societal destruction. … Yes, nuclear deterrence still looks quite robust, but when it comes to cyber attack, the world of deterrence after [the age of cyber-wars has begun] looks remarkably like the world of deterrence before Hiroshima: bleak. (Emphasis added.)

… the absence of “societal destruction” with cyber-warfare imposed less of a real burden upon the perpetrators and endorsers.

And records of such intangible devastations are preserved only in writing and in our memories, and can be quickly manipulated or supplanted by newer information and problems. Events that erupt as a result of illegally obtained information continue to be measured against their physical consequences – there’s a standing arrest warrant while the National Security Agency continues to labour on, flitting between the shadows of SIPA, the Patriot Act and others like them. The violations creep in – easily withdrawn, easily restored, easily justified as counterterrorism measures, easily depicted to be something they aren’t.

That lawmakers are yet to figure out the exact framework of multilateral instruments that will minimise the threat of cyber-weapons

What makes matters frustrating is a multilateral instrument called the Wassenaar Arrangement (WA), originally drafted in 1995 to restrict the export of potentially malignant technologies left over from the Cold War, but which lawmakers turned to in 2013 to also prevent entities with questionable human-rights records from accessing “intrusive software”. In effect, the WA tells its 41 signatories what kinds of technology can be transferred among themselves, and what can’t be transferred at all to non-signatories, based on the technology’s susceptibility to misuse. After 2013, the WA became one of the unhappiest pacts out there, persisting largely because of the confusion that surrounds it. There are three kinds of problems:

1. In its language – Unreasonable absolutes

Sergey Bratus, a research associate professor in the computer science department at Dartmouth College, New Hampshire, published an article on December 2 highlighting WA’s failure to “describe a technical capability in an intent-neutral way” – with reference to the increasingly thin line (not just of code) that separates a correct output from a flawed one, which hackers have become adept at exploiting. Think of it like this:

Say there’s a computer, called C, which Alice uses for a particular purpose (like withdrawing cash, if C were an ATM). C accepts an input called I and spits out an output called O. Because C is used for a fixed purpose, its programmers know that the range of values I can assume is limited (such as the four-digit PINs used at ATMs). However, they end up designing the machine to operate safely for all known four-digit numbers while neglecting what would happen should I be a five-digit number. With some technical insight, a hacker could exploit this oversight and make C spit out all the cash it contains using a five-digit I.
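
A toy version of that scenario in Python – not Bratus’s example and certainly not real ATM code – just to show the shape of the problem: the validation covers only the inputs the designers anticipated, so an unanticipated input still produces an output, merely not the intended one.

```python
VAULT_BALANCE = 100_000

def dispense(pin: str) -> int:
    # The designers "know" PINs are four digits, so that's all they check for.
    if len(pin) == 4 and pin.isdigit():
        return 500          # the intended, "correct" output O
    # Unanticipated input I: the machine still returns *an* output --
    # programmatically valid, practically a flaw.
    return VAULT_BALANCE

print(dispense("1234"))    # 500
print(dispense("12345"))   # 100000
```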

In this case, a correct output by C is defined only for a fixed range of inputs, with any output corresponding to an I outside of this range being considered a flawed one. However, programmatically, C has still only provided the correct O for a five-digit I. Bratus’s point is just this: we’ve no way to perfectly define the intentions of the programs that we build, at least not beyond the remits of what we expect them to achieve. How then can the WA aspire to categorise them as safe and unsafe?

2. In its purpose – Sneaky enemies

Speaking at Kiwicon 2015, New Zealand’s computer security conference, cyber-policy buff Katie Moussouris said the WA was underprepared to confront superbugs that target computers connected to the Internet irrespective of their geographical location – but whose fixes could potentially emerge from within a WA signatory. A case in point was Heartbleed, a vulnerability that achieved peak nuisance in April 2014. Its M.O. was to target the OpenSSL library, used by servers to encrypt personal information transmitted over the web, and trick it into divulging chunks of the server’s memory – which could include its encryption keys. To protect against it, users had to upgrade OpenSSL with a software patch containing the fix. However, work on such patches for the bugs of the future could fall under what the WA has defined simply as “intrusion software”, for which officials administering the agreement would end up having to provide exemptions dozens of times a day. As Darren Pauli wrote in The Register,

[Moussouri] said the Arrangement requires an overhaul, adding that so-called emergency exemptions that allow controlled goods to be quickly deployed – such as radar units to the 2010 Haiti earthquake – will not apply to globally-coordinated security vulnerability research that occurs daily.
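
For a sense of the class of bug Heartbleed belonged to – a server trusting a length declared by the client – here’s a toy sketch. This is not OpenSSL’s code, only the general shape of the mistake:

```python
# The server's memory: the client's payload sits right next to data that was
# never meant to leave the machine.
SERVER_MEMORY = b"PING" + b" SECRET_KEY=hunter2 OTHER_USERS_DATA..."

def heartbeat(payload: bytes, claimed_length: int) -> bytes:
    # The protocol says: echo the payload back. The bug: the server copies
    # `claimed_length` bytes without checking the claim against len(payload).
    return SERVER_MEMORY[:claimed_length]

print(heartbeat(b"PING", 4))    # b'PING' -- an honest client
print(heartbeat(b"PING", 40))   # the over-read spills the adjacent secrets
```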

3. In presenting an illusion of sufficiency

Beyond the limitations it places on the export of software, the signatories’ continued reliance on the WA as an instrument of defence has also been questioned. Earlier this year, India received some shade after hackers revealed that its – our – government was considering purchasing surveillance equipment from an Italian company that was selling the tools illegitimately. India wasn’t invited to be part of the WA; had it been, it would’ve been able to purchase the surveillance equipment legitimately. Sure, it doesn’t bode well that India was eyeing the equipment at all, but when it does so illegitimately, international human rights organisations have fewer opportunities to track violations in India or haul the authorities up for infractions. Legitimacy confers accountability – or at least the need to be accountable.

Nonetheless, despite an assurance (insufficient in hindsight) that countries like India and China would be invited to participate in conversations over the WA in future, nothing has happened. At the same time, extant signatories have continued to express support for the arrangement. “Offending” software came to be included in the WA following amendments in December 2013. States of the European Union enforced the rules from January 2015, while the US Department of Commerce’s Bureau of Industry and Security published a set of controls pursuant to the arrangement’s rules in May 2015 – controls that have been widely panned by security experts for being too broadly defined. Over December, however, those experts have begun to hope that National Security Adviser Susan Rice can persuade the State Department to push for making the language in the WA more specific at the plenary session in December 2016. The Departments of Commerce and Homeland Security are already on board.

That computer scientists have been slow to recognise the moral character and political implications of their creations

Phillip Rogaway, a computer scientist at the University of California, Davis, published an essay on December 12 titled The Moral Character of Cryptographic Work. Rogaway’s thesis centres on the increasing social responsibility of the cryptographer – of the kind Zuckerman invoked. As he writes,

… we don’t need the specter of mushroom clouds to be dealing with politically relevant technology: scientific and technical work routinely implicates politics. This is an overarching insight from decades of work at the crossroads of science, technology, and society. Technological ideas and technological things are not politically neutral: routinely, they have strong, built-in tendencies. Technological advances are usefully considered not only from the lens of how they work, but also why they came to be as they did, whom they help, and whom they harm. Emphasizing the breadth of man’s agency and technological options, and borrowing a beautiful phrase of Borges, it has been said that innovation is a garden of forking paths. Still, cryptographic ideas can be quite mathematical; mightn’t this make them relatively apolitical? Absolutely not. That cryptographic work is deeply tied to politics is a claim so obvious that only a cryptographer could fail to see it.

And maybe cryptographers have missed the wood for the trees until now but times are a’changing.

On December 22, Apple publicly declared it was opposing a new surveillance bill that the British government is attempting to fast-track. The bill, should it become law, will require messages transmitted via the company’s iMessage platform to be encrypted in such a way that government authorities can access them if they need to but not anyone else – a fallacious presumption that Apple has called out as being impossible to engineer. “A key left under the doormat would not just be there for the good guys. The bad guys would find it too,” it wrote in a statement.

Similarly, in November this year, Microsoft resisted an American warrant to hand over some of its users’ data acquired in Europe by entrusting a German telecom company with its servers. As a result, any requests for data about German users making calls or sending emails via Microsoft, and originating from outside Germany, will now have to go through German lawmakers. At the same time, anxiety over requests from within the country is minimal, as Germany boasts some of the world’s strictest data-access policies.

Apple’s and Microsoft’s are welcome and important changes in tack. Both companies were featured in the Snowden/Greenwald stories as having folded under pressure from the NSA to open their data-transfer pipelines to snooping. That the companies also had little alternative at that time was glossed over by the scale of NSA’s violations. However, in 2015, a clear moral as well as economic high-ground has emerged in the form of defiance: Snowden’s revelations were in effect a renewed vilification of Big Brother, and so occupying that high-ground has become a practical option. After Snowden, not taking that option when there’s a chance to has come to mean passive complicity.

But apropos Rogaway’s contention: at what level can, or should, the cryptographer’s commitment be expected? Can smaller companies or individual computer-scientists afford to occupy the same ground as larger companies? After all, without the business model of data monetisation, privacy would be automatically secured – but the business model is what provides for the individuals.

Take the case of Stuxnet, the worm unleashed by what are believed to be agents of the US and Israel in 2009-2010 to destroy Iranian centrifuges suspected of being used to enrich uranium to explosive-grade levels. How many computer-scientists spoke up against it? To date, no institutional condemnation has emerged*. The fact that neither the US nor Israel has publicly acknowledged its role in developing Stuxnet may make it tough to judge who crossed a line – but that a deceptive bundle of code was used as a weapon in an unjust war was obvious.

Then again, can all cryptographers be expected to comply? One of the threats the 2013 amendments to the WA attempt to tackle is dual-use technology (of which Stuxnet is an example, because the worm took advantage of its ability to mimic harmless code). Evidently such tech also straddles what Aaron Adams (PDF) calls “the boundary between bug and behaviour”. That engineers have had only tenuous control over these boundaries owes itself to imperfect yet blameless programming languages, as Bratus also asserts, and not to the engineers themselves. It is in the nature of a nuclear weapon, when deployed, to overshadow the simple intent of its deployers, rapidly overwhelming the already-weakened doctrine of proportionality – and in turn retroactively making that intent seem far, far more important. But in cyber-warfare, the agents are trapped in ambiguities surrounding what the nature of a cyber-weapon is at all, with what intent and for what purpose it was crafted, allowing its repercussions to seem anything from rapid to evanescent.

Or, as it happens, the agents are liberated.

*That I could find. I’m happy to be proved wrong.

Featured image credit: ikrichter/Flickr, CC BY 2.0.

Hopes for a new particle at the LHC offset by call for more data

At a seminar at CERN on Tuesday, scientists working with the Large Hadron Collider provided the latest results from the particle-smasher at the end of its operations for 2015. The results make up the most detailed measurements of the properties of some fundamental particles made to date at the highest energy at which humankind has been able to study them.

The data discussed during the seminar originated from observations at two experiments: ATLAS and CMS. And while the numbers were consistent between them, neither experimental collaboration could confirm any of the hopeful rumours doing the rounds – that a new particle might have been found. However, they were able to keep the excitement going by not denying some of the rumours either. All they said was they needed to gather more data.

One rumour that was neither confirmed nor denied was the existence of a particle at an energy of about 750 GeV (that’s about 750x the mass of a proton). That’s a lot of mass for a single particle – the heaviest known elementary particle is the top quark, weighing about 173 GeV. As a result, such a particle would be extremely short-lived (if it existed) and rapidly decay into a combination of lighter particles, which are then logged by the detectors.

When physicists find such groups of particles, they use statistical methods and simulations to reconstruct the properties of the particle that could’ve produced them in the first place. The reconstruction shows up as a bump in the data where otherwise there’d have been a smooth curve.
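
In its simplest form, that reconstruction is just kinematics. A sketch for the two-photon case, with made-up numbers (chosen only so the output lands near the region being discussed, not taken from any real event):

```python
from math import sqrt, cos, radians

# Two (massless) photons with energies E1, E2 and an opening angle theta
# reconstruct to a parent mass m = sqrt(2 * E1 * E2 * (1 - cos(theta))).
E1, E2 = 430.0, 430.0     # GeV, illustrative values
theta = radians(122)      # opening angle, illustrative

m = sqrt(2 * E1 * E2 * (1 - cos(theta)))
print(f"reconstructed parent mass ~ {m:.0f} GeV")   # ~750 GeV
```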

This is the ATLAS plot displaying said bump (look in the area over 750 GeV on the x-axis):

ATLAS result showing a small bump in the diphoton channel at 750 GeV in the run-2 data. Credit: CERN

It was found in the diphoton channel – i.e. the heavier particle decayed into two energetic photons which then impinged on the ATLAS detector. So why aren’t physicists celebrating if they can see the bump?

Because it’s not a significant bump. Its local significance is 3.6σ (that’s 3.6 times more than the average size of a fluctuation) – which is pretty significant by itself. But the more important number is the global significance that accounts for the look-elsewhere effect. As experimental physicist Tommaso Dorigo explains neatly here,

… you looked in many places [in the data] for a possible [bump], and found a significant effect somewhere; the likelihood of finding something in the entire region you probed is greater than it would be if you had stated beforehand where the signal would be, because of the “probability boost” of looking in many places.

The global significance is calculated by discounting the effect of this boost. In the case of the 750-GeV particle, the bump stood at a dismal 1.9σ. A minimum of 3σ is required to claim evidence and 5σ for a discovery.
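
A back-of-the-envelope version of that correction, as a sketch. The trials factor below is an assumed round number for illustration, not the one ATLAS actually used:

```python
from scipy.stats import norm

local_sigma = 3.6
trials = 200                     # assumed number of places a bump could have appeared

p_local = norm.sf(local_sigma)            # one-sided tail probability of the local excess
p_global = min(1.0, p_local * trials)     # crude look-elsewhere inflation
global_sigma = norm.isf(p_global)

print(f"{local_sigma} sigma local -> ~{global_sigma:.1f} sigma global")
```

With a couple of hundred assumed trials, a 3.6σ local excess deflates to roughly the 1.9σ global figure quoted above – the flavour of the effect, if not ATLAS’s exact calculation.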

A computer’s reconstruction of the diphoton event observed by the ATLAS detector. Credit: ATLAS/CERN

Marumi Kado, the physicist who presented the ATLAS results, added that when the bump was studied across a 45 GeV swath (on the x-axis), its significance went up to 3.9σ local and 2.3σ global. Kado is affiliated with the Laboratoire de l’Accelerateur Lineaire, Orsay.

A similar result was reported by James Olsen, of Princeton University, speaking for the CMS team with a telltale bump at 745 GeV. However, the significance was only 2.6σ local and 1.2σ global. Olsen also said the CMS detector had only one-fourth the data that ATLAS had in the same channel.

Where all this leaves us is that the Standard Model, which is the prevailing theory + equations used to describe how fundamental particles behave, isn’t threatened yet. Physicists would much like it to be: though it’s been able to correctly predict the existence of many particles and fundamental forces, it’s been equally unable to explain some findings (like dark matter). And finding a particle weighing ~750 GeV, which the model hasn’t predicted so far, could show physicists what could be broken about the model and pave the way for a ‘new physics’.

However, on the downside, some other new-physics hypotheses didn’t find validation. One of the more prominent among them is called supersymmetry, SUSY for short, and it requires the existence of some heavier fundamental particles. Kado and Olsen both reported that no signs of such particles have been observed, nor of heavier versions of the Higgs boson, whose discovery was announced in mid-2012 at the LHC. Thankfully, they also added that the teams weren’t done with their searches and analyses yet.

So, more data FTW – as well as looking forward to the Rencontres de Moriond (conference) in March 2016.

Physicists could have to wait 66,000 yottayears to see an electron decay

The longest coherently described span of time I’ve encountered is from Hindu cosmology. It concerns the age of Brahma, one of Hinduism’s principal deities, who is described as being 51 years old (with 49 more to go). But these are no simple years. Each day in Brahma’s life lasts for a period called the kalpa: 4.32 billion Earth-years. In 51 years, he will actually have lived for almost 80 trillion Earth-years. In 100, he will have lived 157 trillion Earth-years.

157,000,000,000,000. That’s stupidly huge. Forget astronomy – I doubt even economic crises have use for such numbers.

On December 3, scientists announced that something we’ve all known about will live for even longer: the electron.

Yup, the same tiny lepton that zips around inside atoms with gay abandon, that’s swimming through the power lines in your home, has been found to be stable for at least 66,000 yottayears – yotta- being the largest available prefix in the decimal system.

In stupidly huge terms, that’s 66,000,000,000,000,000,000,000,000,000 (66,000 trillion trillion) years. Brahma just slipped to second place among the mortals.

But why were scientists making this measurement in the first place?

Because they’re desperately trying to disprove a prevailing theory in physics. Called the Standard Model, it describes how fundamental particles interact with each other. Though it was meticulously studied and built over a period of more than 30 years to explain a variety of phenomena, the Standard Model hasn’t been able to answer some of the more important questions. For example, why is gravity, among the four fundamental forces, so much weaker than the rest? Or why is there more matter than antimatter in the universe? Or why does the Higgs boson not weigh more than it does? Or what is dark matter?

Silence.

The electron belongs to a class of particles called leptons, which in turn is well described by the Standard Model. So if physicists were able to find that an electron is less stable than the model predicts, it’d be a breakthrough. But despite multiple attempts to find such a freak event, physicists haven’t succeeded – not even with the LHC (though hopeful rumours are doing the rounds that that could change soon).

The measurement of 66,000 yottayears was published in the journal Physical Review Letters on December 3 (a preprint copy is available on the arXiv server dated November 11). It was made at the Borexino neutrino experiment buried under the Gran Sasso mountain in Italy. The value itself is hinged on a simple idea: the conservation of charge.

If an electron becomes unstable and has to break down, it’ll break down into a photon and a neutrino. There are almost no other options because the electron is the lightest charged particle and whatever it breaks down into has to be even lighter. However, neither the photon nor the neutrino has an electric charge so the breaking-down would violate a fundamental law of nature – and definitely overturn the Standard Model.

The Borexino experiment is actually a solar neutrino detector, using 300 tonnes of a petroleum-based liquid to detect and study neutrinos streaming in from the Sun. When a neutrino strikes the liquid, it knocks an electron loose, producing a tiny flash of energy that some 2,210 photomultiplier tubes surrounding the tank amplify for examination. If an electron in the liquid were instead to decay, it would release a photon of about 256 keV (by the mass-energy equivalence, about half the electron’s rest energy, or a 4,000th the mass of a proton) – and it’s flashes of this energy the measurement looked for.
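
Where that 256 keV figure comes from, as a quick check: if an electron at rest decayed into a photon and a (nearly massless) neutrino, the two would fly off back to back and each carry half the electron’s rest energy.

```python
ELECTRON_REST_ENERGY_KEV = 511.0

# e -> gamma + nu with the electron at rest: momentum conservation gives the
# photon (and the neutrino) half the available energy each.
photon_energy = ELECTRON_REST_ENERGY_KEV / 2
print(f"expected photon energy ~ {photon_energy:.0f} keV")  # ~256 keV
```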

However, the innards of the mountain where the detector is located also produce photons thanks to the radioactive decay of bismuth and polonium in it. So the team making the measurement used a simulator to calculate how often photons of 256 keV are logged by the detector against the ‘background’ of all the photons striking the detector. Kinda like a filter. They used data logged over 408 days (January 2012 to May 2013).
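
The logic that turns “we saw nothing above background” into a lifetime is simple enough to sketch. All three inputs below are stand-ins to show the arithmetic, not Borexino’s actual analysis numbers:

```python
# If N_e electrons are watched for a live time T and at most S decay-like
# events could be hiding in the data, the mean lifetime must satisfy
# tau > N_e * T / S.
N_electrons = 1e32        # rough electron count in ~300 tonnes of scintillator (assumed)
T_years = 408 / 365.25    # the live time quoted above
S_upper = 2000            # assumed upper limit on signal counts

tau_limit = N_electrons * T_years / S_upper
print(f"tau > {tau_limit:.1e} years")  # ~6e28 years, i.e. tens of thousands of yottayears
```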

The answer: at least once every 66,000 yottayears (that’s 420 trillion lifetimes of Brahma).

Physics World reports that if photons from the ‘background’ radiation could be eliminated further, the lower limit on the electron’s lifetime could probably be pushed up by a thousand times. But there’s historical precedent that, to some extent, encourages stronger probes of the humble electron’s properties.

In 2006, another experiment situated under the Gran Sasso mountain tried to measure the rate at which electrons violated a defining rule in particle physics called Pauli’s exclusion principle. All electrons can be described by four distinct attributes called their quantum numbers, and the principle holds that no two electrons in a system can have the same four numbers at any given time.

The experiment was called DEAR (DAΦNE Exotic Atom Research). It energised electrons and then measured how much energy was released when the particles returned to a lower-energy state. After three years of data-taking, its team announced in 2009 that the principle was being violated once every 570 trillion trillion measurements (another stupidly large number).

That’s a violation 0.0000000000000000000000001% of the time – but it’s still something. And it could amount to more when compared to the Borexino measurement of an electron’s stability. In March 2013, the team that worked on DEAR submitted a proposal to build an instrument that would improve the measurement a hundredfold, and in May 2015, it reported that such an instrument was under construction.

Here’s hoping they don’t find what they were looking for?

New LHC data has more of the same but could something be in the offing?

Dijet mass (TeV) v. no. of events. Source: ATLAS/CERN

Looks intimidating, doesn’t it? It’s also very interesting because it contains an important result acquired at the Large Hadron Collider (LHC) this year, a result that could disappoint many physicists.

The LHC reopened earlier this year after receiving multiple performance-boosting upgrades during the preceding two-year shutdown. In its new avatar, the particle-smasher explores nature’s fundamental constituents at the highest energies yet, almost twice as high as they were in its first run. By Albert Einstein’s mass-energy equivalence (E = mc^2), the proton’s mass corresponds to an energy of almost 1 GeV (giga-electron-volt). The LHC’s beam energy, to compare, was 3,500 GeV and is now 6,500 GeV.
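
A quick sketch putting those beam energies in context of the proton’s ~1 GeV rest energy: the ratio of the two is the Lorentz factor, and for identical head-on beams the collision energy is simply twice the beam energy.

```python
M_PROTON = 0.938  # GeV, proton rest energy

for beam_energy in (3500.0, 6500.0):      # GeV per beam: first run vs now
    gamma = beam_energy / M_PROTON        # how relativistic each proton is
    collision_energy = 2 * beam_energy    # identical, head-on beams
    print(f"beam {beam_energy:.0f} GeV: gamma ~ {gamma:.0f}, "
          f"collision energy ~ {collision_energy:.0f} GeV")
```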

At the start of December, it concluded data-taking for 2015. That data is being steadily processed, interpreted and published by the multiple topical collaborations working on the LHC. Two collaborations in particular, ATLAS and CMS, were responsible for plots like the one shown above.

This is CMS’s plot showing the same result:

Source: CMS/CERN

When protons are smashed together at the LHC, a host of particles erupt and fly off in different directions, showing up as streaks in the detectors. These streaks are called jets. The plots above look particularly at pairs of particles called quarks, anti-quarks or gluons that are produced in the proton-proton collisions (they’re in fact the smaller particles that make up protons).

The sequence of black dots in the ATLAS plot shows the number of dijet events (i.e. pairs of jets) observed at different masses. The red line shows the predicted number of events. They both match, which is good… to some extent.

One of the biggest, and certainly among the most annoying, problems in particle physics right now is that the prevailing theory that explains it all is unsatisfactory – mostly because it has some really clunky explanations for some things. The theory is called the Standard Model and physicists would like to see it disproved, broken in some way.

In fact, those physicists will have gone to work today to be proved wrong – and be sad at the end of the day if they weren’t.

Maintenance work underway at the CMS detector, the heaviest of the detectors that straddle the LHC. Credit: CERN

The annoying problem at its heart

The LHC chips in with two kinds of opportunities: extremely sensitive particle-detectors that can provide precise measurements of fleeting readings, and extremely high collision energies, so physicists can explore how some particles behave in thousands of scenarios in search of a surprising result.

So, the plots above show three things. First, the predicted event-count and the observed event-count are a match, which is disappointing. Second, the biggest deviation from the predicted count is highlighted in the ATLAS plot (look at the red columns at the bottom between the two blue lines). It’s small, corresponding to two standard deviations (symbol: σ) from the normal. Physicists need at least three standard deviations (3σ) from the normal for license to be excited.

And third – the most important result, an extension of the first: the predicted event-count and the observed event-count are a match all the way up to 6,000 GeV. In other words: physicists are seeing no cause for joy, and all cause for revalidating a section of the Standard Model, in a wide swath of scenarios.

The section in particular is called quantum chromodynamics (QCD), which deals with how quarks, antiquarks and gluons interact with each other. As theoretical physicist Matt Strassler explains on his blog,

… from the point of view of the highest energies available [at the LHC], all particles in the Standard Model have almost negligible rest masses. QCD itself is associated with the rest mass scale of the proton, with mass-energy of about 1 GeV, again essentially zero from the TeV point of view. And the structure of the proton is simple and smooth. So QCD’s prediction is this: the physics we are currently probing is essentially scale-invariant.

Scale-invariance is the idea that two particles will interact the same way no matter how energetic they are. To be sure, the ATLAS/CMS results suggest QCD is scale-invariant in the 0-6,000 GeV range. There’s a long way to go – in terms of energy levels and future opportunities.

Something in the valley

The folks analysing the data are helped along by previous results at the LHC as well. For example, with the collision energy having been ramped up, one would expect to see particles of higher energies manifesting in the data. However, the heavier the particle, the wider the bump in the plot and the more focusing that’ll be necessary to really tease out the peak. This is one of the plots that led to the discovery of the Higgs boson:

 

Source: ATLAS/CERN

That bump between 125 and 130 GeV is what was found to be the Higgs, and you can see it’s more of a smear than a spike. For heavier particles, that smear’s going to be wider, with longer tails on the sides. So any particle that weighs a lot – a few thousand GeV – and is expected to be found at the LHC would have a tail showing in the lower-energy LHC data. But no such tails have been found, ruling out heavier stuff.

And because many replacement theories for the Standard Model involve the discovery of new particles, analysts will tend to focus on particles that could weigh less than about 2,000 GeV.

In fact that’s what’s riveted the particle physics community at the moment: rumours of a possible new particle in the range 1,900-2,000 GeV. A paper uploaded to the arXiv preprint server on December 10 shows a combination of ATLAS and CMS data logged in 2012, and highlights a deviation from the normal that physicists haven’t been able to explain using information they already have. This is the relevant plot:

Source: arXiv:1512.03371v1

 

The plots in the middle and on the right are particularly relevant. They each show the probability of the occurrence of an event (observed as a bump in the data, not shown here) in which some heavier particle decays into one of two different final states: a W and a Z boson (WZ), or two Z bosons (ZZ). Bosons are a type of fundamental particle, and the W and Z among them are carriers of the weak force.

The middle chart implies that the mysterious event is at least 1,000-times less likely to occur than normal, and the one on the right implies it is at least 10,000-times less likely. And both readings are at more than 3σ significance, so people are excited.

The authors of the paper write: “Out of all benchmark models considered, the combination favours the hypothesis of a [particle or its excitations] with mass 1.9-2.0 [thousands of GeV] … as long as the resonance does not decay exclusively to WW final states.”

But as physicist Tommaso Dorigo points out, these blips could also be a fluctuation in the data, which does happen.

Although the fact that the two experiments see the same effect … is suggestive, that’s no cigar yet. For CMS and ATLAS have studied dozens of different mass distributions, and a bump could have appeared in a thousand places. I believe the bump is just a fluctuation – the best fluctuation we have in CERN data so far, but still a fluke.

There’s a seminar due to happen today at the LHC Physics Centre at CERN, where data from the upgraded run is due to be presented. If something really did happen in those ‘valleys’, which were teased out of data taken at a collision energy of 8,000 GeV (basically twice the beam energy, where each beam is a train of protons), then those events would’ve happened in larger numbers during the upgraded run and so been more visible. The results will be presented at 1930 IST. Watch this space.

Featured image: Inside one of the control centres of the collaborations working on the LHC at CERN. Each collaboration handles an experiment, or detector, stationed around the LHC tunnel. Credit: CERN.

Calling 2015

It might still be too soon to call it, but 2015 was a great year, far better than the fiasco 2014 was. Ups and downs and all that, but what ups they have been. I thought I’d list them out just to be able to put a finger on all that I’ve dealt with and been dealt.

Ups

  1. Launched The Wire (only Siddharth and Vignesh know my struggle at 5 am on May 11 to get the domain mapped properly)
  2. Wrote a lot of articles, and probably the most in a year about the kind of stuff that really interests me (history of science, cosmology, cybersec)
  3. Got my reading habit back (somewhat)
  4. Found two awesome counselors and a psychologist, absolutely wonderful people
  5. … who helped me get a great handle on my depression and almost completely get rid of it
  6. Managed to hold on to a job for more than four months for the first time since early 2014 (one of the two companies that hired me in between is now shut, so not my fault?)
  7. Didn’t lose any of my friends – in fact, made six really good new ones!
  8. Didn’t have to put up with a fourth The Hobbit movie (I’m sure Tauriel’s lines would’ve had Tolkien doing spinarooneys in his grave)

and others.

Downs

  1. Acquired an addiction
  2. Didn’t have a Tolkien story releasing on the big screen 10 days before my birthday
  3. Grandpa passed away (though I don’t wish he’d stayed on for longer either – he was in a lot of pain before he died) as did an uncle
  4. Chennai floods totalled my Macbook Pro (and partially damaged my passport)
  5. Stopped sending out the Curious Bends newsletter
  6. My vomit-free streak ended after eight years
  7. Still feel an impostor
  8. Didn’t discover any major fantasy series to read (which sucks because Steven Erikson publishes one book only every two years)

and others.

Lots to look forward to in 2016; five things come immediately to mind:

  • Move to Delhi
  • Continue contributing to The Wire
  • Visit a world-renowned particle accelerator lab
  • Await, purchase and devour Erikson’s new book (Fall of Light, book #2 of the Kharkhanas Trilogy)
  • Await new Planck and LHC data (kind of a big deal when you’re able to move away from notions of nerdiness or academic specialisation and toward the idea that the data will provide you – a human – a better idea of the cosmos that surrounds you, that is you)

Tracing the origins of Pu-244

Excerpt:

The heaviest naturally occurring elements are thought to form not when a star is alive but when it begins to die – specifically, in the explosion that results when a star weighing 8x to 20x our Sun dies in a core-collapse supernova (cc-SNe). In this process, the star first implodes to some extent before its outer layers rebound outward and are violently thrown off. The atoms of lighter elements in these layers can capture free neutrons and transmute into atoms of heavier ones, in what is called the r-process.

The rebound occurs because if the star’s core weighs less than about 5x our Sun (our entire Sun!), it doesn’t collapse into a black hole but into an intermediate state called a neutron star – a small and extremely dense ball composed almost entirely of neutrons.

Anyway, the expelled elements are dispersed through the interstellar medium, the region of space between stars. From there, they could become part of the ingredients of a new nebula or star, get picked up by passing comets or meteors, or eventually settle down on the surface of a distant planet. An isotope of one such element – plutonium (Pu) – is found scattered among the sediments on the floor of Earth’s deepest seas: plutonium-244.

Based on multiple measurements of the amount of Pu-244 on the seafloor and in the interstellar medium, scientists know how the two amounts correlate over time. And based on astronomical observations, they also know how much Pu-244 each cc-SNe may have produced. But what has caught scientists off guard recently is that the amount of Pu-244 on Earth over time doesn’t match up with the rate at which cc-SNe occur in the Milky Way galaxy. That is, the amount of Pu-244 on Earth is 100 times lower than it would’ve been if all of it had come from cc-SNe.

So where is the remaining Pu-244?

Or, a team of astrophysicists from the Hebrew University, Jerusalem, realised, was so much Pu-244 not being produced in the first place?

Read the full piece here.

The downward/laterward style in science writing

One of the first lessons in journalism 101 is the inverted pyramid, a style of writing where the journalist presents the more important information higher up the piece. This way, the copy sort of tapers down in importance the longer it runs. The idea was that such writing served two purposes:

  1. Allowing editors who need to shorten the copy to fit the print space to make cuts easily – they’d just have to snip whatever they wanted off the bottom, knowing that the meat was at the top.
  2. Letting readers get the most important information without having to read too far into the copy – allowing them to decide earlier whether to read the whole thing or move on to something else.

As a science writer, I don’t like the inverted pyramid. Agreed, it makes for pithy writing and imposes the kind of restriction on the writer that does a good job of keeping self-indulgence out of the writing process. But if the writer were intent on indulging herself, I think she’d do it, inverted pyramid or not. My point is that the threat of self-indulgence shouldn’t disallow other, possibly more engaging, forms of writing.

To wit: my favourite style is the pyramid. It starts with a slowly building trickle of information at the top and saves the best stuff for the bottom. I like this style because it closely mimics the process of discovery – of the brain receiving new information and then accommodating it within an existing paradigm. To me, it also allows for a more logical, linear construction of the narrative. In fact, I prefer the descriptor ‘downward/laterward’ because, relative to the conventional inverted pyramid, the pyramid postpones the punchline.

However, two caveats.

  1. The downward/laterward doesn’t make anything easier for the editors, but that again – like self-indulgence – is to me a separate issue. In the pursuit of constructing wholesome pieces, it’d be an insult to me if I had an editor who wasn’t interested in reading my whole piece and then deciding how to edit it. Similarly, in return for the stylistic choices it affords, the downward/laterward compels the writer to write even better to keep the reader from losing interest.
  1. I usually write explainers (or rather, end up having tried to write one). Explainers, in the context of my interests, typically focus on the science behind an object or an event, and they’re usually about high-energy astronomy/physics. Scientific advancements in these subjects usually require a lot of background, pre-existing information. So the pyramid style affords me the convenience of presenting such information as a build toward the conclusion – which is likely the advancement in question.
    However, I’m sure I’m in the minority. Most writers whose articles I enjoy are also writers gunning to describe the human emotions at play behind significant scientific findings. And their articles are typically about drama. So it might be that the drama builds downward/laterward while the science itself is presented in the inverted-pyramid way (and I just end up noticing the science).

Looking back, I think most of my recent pieces (2011 onward) have been written in the downward/laterward style. And the only reason I decided to reflect on the process now is this fantastic piece in The Atlantic about how astronomers hunt for the oldest stars in the universe. Great stuff.

#ChennaiRains – let's not forget

Chennai. Poda vennai. Credit: Wikimedia Commons

It was a friend’s remark in 2012 that alerted me to something off about the way I’d been looking at natural disasters in India’s urban centres – especially Chennai. At that time – as is the case today – long strips of land in many parts of the city were occupied by trucks and machinery involved in building the Metro. At the same time, arbitrary overcharging by auto-rickshaws was rampant and almost all buses were overcrowded during peak hours. Visiting the city for a few days, she tweeted: “Get your act together, Chennai.”

Like all great cities, Chennai has always sported two identities conflated as one: its public infrastructure and its people. There has been as much to experience in Chennai’s physical framework as in its anthropological counterpart. For every dabara of filter coffee you had, every visit you paid to the Marina beach on a cloudy evening, every stroll you took around Kapaleeshwarar Temple during a festival, you could take a sweaty bus-ride at 12 pm, bargain with an auto-rickshaw driver, and get lost on South Usman Road. This conflation has evoked the image of a place retaining its small-townish charm while evolving a big-town bustle. And the impression wouldn’t be far off the mark if it weren’t for one problem.

In the shadow of its wonderful people, Chennai’s public infrastructure has been fraying at the seams.

The ongoing spell of rains in the city has really brought some of these tears to the fore. Large swaths are flooded with up to two feet of water, while Saidapet, Kotturpuram, Eekkattuthangal, Tiruvanmiyur and Tambaram have been wrecked. A crowdsourced effort has registered over 2,000 roads as water-logged. Hundreds of volunteers still ply the city providing what help they can – while a similar number of others have opened up their homes – as thousands desperately await it. The airport has been shut for a week, all trains cancelled and major arterial roads blocked off. The Army, the Navy and the NDRF have been deployed for rescue efforts but they’re overstretched. Already, the northern, poorer suburbs are witnessing flash protests amidst a building exodus for want of supplies.

Nobody saw these rains coming. For over three decades, the annual northeast monsoon has fallen consistently short of expectations. But this year, the weather has seemed intent on correcting that hefty deficit in the span of a few weeks. December 1-2 alone, for example, witnessed over 300 mm of rainfall against a full month’s historic average of 191 mm.

But as it happens, there’s no credible drainage system. The consequent damage is already an estimated Rs. 15,000 crore – which is really just fine, because I believe that number is still smaller than all the bribes given and taken by the city’s municipal administrators to let builders build where and how they wished: within former swamps, in the middle of dried lakebeds, using impervious materials for watertight designs, with little care for surface runoff or solid waste management, the whole facade constructed to be car- and motorbike-friendly.

What I think is up for change now is that we don’t forget – that we don’t let the government surmount the disaster this time with compensation packages, conciliatory sops and good ol’ flattery, the last by saying the people of Chennai have stood tall and coped well, and then moving on, just like that. But what made the crisis that demanded this fortitude in the first place – a fortitude any greater than what we already display to get on with our lives? It was drawn out only by what has always been a planned but ignored crisis. Even if that fortitude is the sole silver lining, focusing on it also distracts us from understanding the real damage we’ve taken.

An opinion piece that appeared in The Hindu on December 3 provides a convenient springboard to further explain my views. An excerpt:

Many outsiders who come to the city say it’s hard to make friends here. The people are insular, they say. It’s true, we Chennaites stick to ourselves. There is none of the brash socialising of the Delhiite, the familiar chattiness of the Kolkatan, or the earthy amiability of the Mumbaikar. Your breezy hello will likely get a grunt in return and chirpy conversational overtures will meet austere monosyllables. That’s because we don’t much care for small talk. We can spend entire evenings making few friends and influencing nobody, but give us a crisis and you’ll find that few cities stand up tall the way Chennai does. It is unglamorously practical, calmly efficient, and absolutely rock-solid in its support systems.

Apropos these words: it’s very important to glorify the people who’ve stood up to adversity, but when the adversity was brought on by the government (pointing at the AIADMK for its construction-heavy reigns and at the DMK for having no sense of urban planning – exemplified by that fucking flyover on South Usman Road), it’s equally important to call that out as well. Sadly, the author of the piece blames the rain god instead! It’s as if I push you in front of a speeding truck, you somehow survive what could’ve been fatal, then I applaud you and you thank me for the applause. When you’re able to celebrate a life-goes-on narrative without talking about what broke, you’re essentially rooting for the status quo.

Moreover, thousands of cities have stood tall the way Chennai has. Kalyan Raman penned a justifiably provocative essay in 2005 arguing that India’s biggest metros have largely been made (as opposed to unmade) by daunting crises. I think it’s important in this context to cheer on the rescue efforts but not the physical infrastructure itself – which, whatever cultural effort went into establishing it, is neither “calmly efficient” nor anything close to “rock-solid”. The infrastructure stinks (the 10-year timeline for building the Metro is another example) and must now earn its own narrative in stories of Chennai instead of piggybacking on the city’s other well-deserved qualities.

In the same vein, I don’t think different cities’ different struggles are even comparable, so it’s offensive to suggest few cities can stand up tall the way Chennai has. Let’s cheer for having survived, not thump our chests. We made the floods happen, and unless we demand better from our government, we won’t get better governance (for starters, in the form of civic infrastructure reform).

Relativity’s kin, the Bose-Einstein condensate, is 90 now

Excerpt:

Over November 2015, physicists and commentators alike the world over marked 100 years since the conception of the general theory of relativity, which gave us everything from GPS to black holes, and which describes the machinations of the universe at the largest scales. Despite many struggles by the greatest scientists of our times, the theory of relativity remains, to this day, incompatible with quantum mechanics – the rules that describe the universe at its smallest. Yet it persists as our best description of the grand opera of the cosmos.

Incidentally, Einstein wasn’t a fan of quantum mechanics because of its occasional tendencies to violate the principles of locality and causality. Such violations resulted in what he called “spooky action at a distance”, where particles behaved as if they could communicate with each other faster than the speed of light would have it. It was weirdness the likes of which his conception of gravitation and space-time didn’t have room for.

As it happens, 2015 also marks another milestone, also involving Einstein’s work – as well as the work of an Indian scientist: Satyendra Nath Bose. It’s been 20 years since physicists realised the first Bose-Einstein condensate, which has proved to be an exceptional as well as quirky testbed for scientists probing the strange implications of a quantum mechanical reality.

Its significance today can be understood in terms of three ‘periods’ of research that contributed to it: 1925 onward, 1975 onward, and 1995 onward.

Read the full piece here.