Trying to understand bitcoins

In a 2008 paper, a programmer writing under the name Satoshi Nakamoto – widely believed to be a pseudonym, for a person or a group – introduced an alternative form of currency that he called bitcoins. His justification was the problems plaguing contemporary digital commerce. In Nakamoto’s words:

Completely non-reversible transactions are not really possible, since financial institutions cannot avoid mediating disputes. The cost of mediation increases transaction costs, limiting the minimum practical transaction size and cutting off the possibility for small casual transactions, and there is a broader cost in the loss of ability to make non-reversible payments for nonreversible services.

With the possibility of reversal, the need for trust spreads.

Merchants must be wary of their customers, hassling them for more information than they would otherwise need. A certain percentage of fraud is accepted as unavoidable. These costs and payment uncertainties can be avoided in person by using physical currency, but no mechanism exists to make payments over a communications channel without a trusted party.

Nakamoto’s solution was a purely digital currency – the bitcoin – that would let transacting parties remain anonymous, keep transactions very secure, and eliminate redundant fees. Unlike conventional currencies such as the rupee or the dollar, it would also be impervious to government interference. And it would accomplish all this by “being material” only on the world wide web.

Contrary to popular opinion, bitcoins don’t already exist somewhere, waiting to be found. Bitcoins are created when a particular kind of transaction happens – not between two people, but between a person and a system accessed through software called a bitcoin client. It, too, exists on the world wide web.

When you log in through your client and start looking for a bitcoin, you’re given a bit of information – like your location on the web, a time, a date, an index number, etc. – called a mandatory string. You then proceed to hash the string using an algorithm called SHA-256 (strictly speaking, a hashing algorithm rather than encryption: its output can’t be reversed to recover the input). Doing this requires a computer or processor called a miner.

A legacy in the string

On the miner, the hashing algorithm performs mathematical and logical operations on the string that scramble all the information that would’ve been visible at first glance. For instance, if the mandatory string reads thecopernican.28052013.1921681011, after hashing with SHA-256 it would read 2aa003e47246e54f439873516cb1b2d61af8def752fe883c22886c39ce430563.
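The transformation is easy to try for yourself. Here is a minimal sketch in Python using the standard library’s hashlib; real mining software additionally formats and double-hashes its data, which this sketch skips:

```python
import hashlib

# Hash the article's example string with SHA-256.
# The 64-character hex digest bears no visible relation to the input:
# change even one character of the string and the digest changes completely.
mandatory_string = "thecopernican.28052013.1921681011"
digest = hashlib.sha256(mandatory_string.encode("utf-8")).hexdigest()
print(digest)  # 64 hexadecimal characters
```
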

In the case of bitcoins, the mandatory string consists of a collection of all the mandatory strings that have been used by users before it. So, hashing it means you’re hashing the attempts of all those who have come before you, maintaining a sort of legacy called the blockchain.
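This chaining can be sketched as a toy in Python – not bitcoin’s actual block format, but it shows how each digest folds in everything before it, so tampering with any earlier entry changes every later digest:

```python
import hashlib

def chain_hash(previous_digest: str, new_data: str) -> str:
    """Hash the previous digest together with new data, linking the 'blocks'."""
    return hashlib.sha256((previous_digest + new_data).encode()).hexdigest()

# Each step folds in the digest of everything before it.
digest = "0" * 64                       # arbitrary starting value
for entry in ["alice", "bob", "carol"]:  # stand-ins for earlier users' strings
    digest = chain_hash(digest, entry)
print(digest)
```
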

After this first step, when you manage to produce a hash of a specific form – one whose first four digits are zero, say – as determined by the system, you’ve hit your jackpot… almost.

This jackpot is a block of 50 bitcoins, and you can’t immediately own it. Because you’ve produced a hash that could just as well have been staged, you’ve to wait for confirmation. That is, another user who’s out there looking for bitcoins must have hashed another bit of mandatory string the exact same way. The odds are against you, but you’ve to wait for it to happen or you won’t get your bitcoins.

Once another user lands up on your block, then your block is confirmed and it’s split between you – the miner – and the confirmers, with you getting the lion’s share.

Proof of work, and its denial

This establishes proof of work in getting to the coins, and implies a consensus among miners that your discovery was legitimate. And you don’t even need to reveal your identity for the grant of legitimacy. But of course, the number of confirmations necessary to consummate a “dig” varies; six is the commonly used threshold, though it can be higher.

If, somehow, you possess more than 50 per cent of the bitcoin-mining community’s encrypting power, then you can perform the mining as well as the confirmation. That is, you will be able to establish your own blockchain as you are the consensus, and generate blocks faster than the rest of the network. Over time, your legacy will be longer than the original, making it the dominant chain for the system.

Similarly, if you have transferred your bitcoins to another person, you will also be able to reverse the transaction. As stated in a paper by Meni Rosenfeld: “… if the sender [of coins] would be able, after receiving [a] product, to broadcast a conflicting transaction sending the same coin back to himself,” the concept of bitcoins will be undermined.

Greed is accounted for

Even after you’ve landed your first block, you’re going to keep looking for more blocks. And because there are only 21 million bitcoins that the system has been programmed to allow, finding each block must increase the difficulty of finding subsequent blocks.

Why must it? Because if all the 21 million were equally difficult to find, then they’d all have been found by now. The currency would neither have had time to accrue a community of users nor the time needed to attain a stable value that is useful when transacting. Another way to look at it: because bitcoins have no central issuing authority – like the RBI for the rupee – regulating the currency’s value after letting it become monopolised would be difficult.
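The 21-million figure isn’t stored anywhere as a constant; it falls out of the reward schedule. A sketch assuming bitcoin’s published parameters – a 50-coin initial reward, halved every 210,000 blocks, with amounts tracked in whole satoshis (hundred-millionths of a bitcoin):

```python
reward_satoshi = 50 * 100_000_000   # initial block reward, in satoshis
blocks_per_era = 210_000            # blocks between successive reward halvings

# Sum the reward over every era until integer halving drives it to zero.
total = 0
while reward_satoshi > 0:
    total += blocks_per_era * reward_satoshi
    reward_satoshi //= 2

print(total / 100_000_000)  # just under 21 million bitcoins
```
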

The coin doesn’t have an intrinsic value but provides value to transactions. The only other form of currency – the one issued by governments – represents value that can be ascertained by government-approved institutions like banks. This shows itself as a processing fee when you’re wiring money between two accounts, for instance.

A bitcoin’s veracity, however, is proven just like its mining: by user confirmation.

What goes around comes around

If A wants to transfer bitcoins to B, the process is:

  1. A informs B.
  2. B creates a block that comes with a cryptographic key pair: a private key that is retained by B and a public key that everyone knows.
  3. A tells the bitcoin client, software that mediates the transaction, that he’d like to transfer 10 bitcoins to B’s block.
  4. The client transfers 10 bitcoins to the new block.
  5. The block can be accessed only with the private key, which now rests with B, and the public key, which other miners use to verify the transaction.
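The steps above can be sketched as a toy in Python. One loud caveat: real bitcoin derives public keys from private ones with elliptic-curve cryptography (ECDSA); plain hashing stands in here only to illustrate that the derivation is one-way:

```python
import hashlib
import secrets

# Toy key pair for B. Hashing is NOT how bitcoin derives public keys --
# it only illustrates that the public half can't be reversed to the private one.
private_key = secrets.token_hex(32)                            # retained by B
public_key = hashlib.sha256(private_key.encode()).hexdigest()  # known to everyone

# A's transfer, as other miners would see it: the destination is public,
# but only the holder of private_key can later unlock the block.
transaction = {"to": public_key, "amount": 10}
print(transaction["amount"], "bitcoins to", transaction["to"][:16] + "...")
```
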

Since no intervening ‘authority’ like a bank ratifies the transaction – other miners themselves do – the processing fee is eliminated. Moreover, because of the minimal resources necessary to start and finish a transaction, there is no minimum transaction size below which it stops being economically feasible. And last: a transaction is always (remarkably!) secure.

God in the machine

While the bitcoin client can be used on any computer, special hardware is necessary for a machine to repeatedly hash a given string until it arrives at a block. Every time an unsatisfactory hash is generated and rejected by the system, a number called a nonce is changed, affixed to the mandatory string, and the whole thing hashed again for a different result.

Because only a hash of a uniquely defined form – such as one starting with a few zeroes – is acceptable, the mining rig must be able to hash at least millions of times each second in order to yield any considerable results. Commercially available rigs hash much faster than this, though.
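The hash-until-lucky loop is simple to sketch in Python. With a four-zero target, each attempt succeeds with probability 1 in 16⁴ (65,536), so the loop below typically runs tens of thousands of times – which is why dedicated hardware matters at real difficulties:

```python
import hashlib
from itertools import count

def mine(mandatory_string: str, difficulty: int = 4) -> tuple[int, str]:
    """Try successive nonces until the SHA-256 digest starts with
    `difficulty` zeroes; return the winning nonce and its digest."""
    for nonce in count():
        attempt = f"{mandatory_string}{nonce}".encode()
        digest = hashlib.sha256(attempt).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest

nonce, digest = mine("thecopernican.28052013.1921681011")
print(nonce, digest)  # digest begins with "0000"
```
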

The Avalon ASIC miner costs $9,750 for a unit performing at least 60 billion hashes per second (60 GH/s); the BFL Jalapeno 50-GH/s miner comes at $2,499. Note, however, that Avalon accepts only bitcoins as payment these days, and BFL haven’t shipped their product for quite some time now.

The electronic architecture behind such miners is either the application-specific integrated circuit (ASIC) or the field-programmable gate array (FPGA), both of which are made to run the SHA-256 algorithm. ASICs are integrated circuits customised for a particular application. FPGAs, by contrast, are chips that can be reconfigured even after manufacturing.

Because of the tremendous interest in bitcoins, and crypto-currencies in general, their economic impact is best measured not just by the present value – a whopping $130 per bitcoin – but also by the mining-rig industry, its power consumption, ‘bitcoin bubbles’, and the rise of other crypto-currencies that take an even more sophisticated approach to mitigating the pains of internet commerce.

This post first appeared, as written by me, in The Copernican science blog on May 31, 2013.

Bohr and the breakaway from classical mechanics

One hundred years ago, Niels Bohr developed the Bohr model of the atom, where electrons go around a nucleus at the center like planets in the Solar System. The model and its implications brought a lot of clarity to the field of physics at a time when physicists didn’t know what was inside an atom, and how that influenced the things around it. For his work, Bohr was awarded the physics Nobel Prize in 1922.

The Bohr model marked a transition from the world of Isaac Newton’s classical mechanics, where gravity was the dominant force and values like mass and velocity were accurately measurable, to that of quantum mechanics, where objects were too small to be seen even with powerful instruments and their exact position didn’t matter.

Even though modern quantum mechanics is still under development, its origins can be traced to humanity’s first thinking of energy as being quantized and not randomly strewn about in nature, and the Bohr model was an important part of this thinking.

The Bohr model

According to the Dane, electrons orbiting the nucleus at different distances were at different energies, and an electron inside an atom – any atom – could only have specific energies. Thus, electrons could ascend or descend through these orbits by gaining or losing a certain quantum of energy, respectively. By allowing for such transitions, the model acknowledged a more discrete energy conservation policy in physics, and used it to explain many aspects of chemistry and chemical reactions.
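Those specific energies can be made concrete with hydrogen, where the Bohr model gives Eₙ = −13.6 eV/n². A small Python sketch (constants rounded) computing the energy a photon carries away when an electron drops between two allowed levels:

```python
# Bohr-model energy levels for hydrogen: E_n = -13.6 eV / n^2.
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy, in eV

def level_energy(n: int) -> float:
    """Energy of the n-th allowed orbit (negative: the electron is bound)."""
    return -RYDBERG_EV / n**2

def transition_energy(n_hi: int, n_lo: int) -> float:
    """Energy of the photon emitted when an electron drops from n_hi to n_lo."""
    return level_energy(n_hi) - level_energy(n_lo)

# The first Balmer line (n=3 -> n=2), seen as red light near 656 nm:
print(round(transition_energy(3, 2), 3))  # ~1.89 eV
```
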

Unfortunately, this model couldn’t evolve continuously to become its modern equivalent because it could properly explain only the hydrogen atom, and it couldn’t account for the Zeeman effect.

What is the Zeeman effect? When an electron jumps from a higher to a lower energy-level, it loses some energy. This can be charted using a “map” of energies like the electromagnetic spectrum, showing if the energy has been lost as infrared, UV, visible, radio, etc., radiation. In 1896, Dutch physicist Pieter Zeeman found that this map could be distorted when the energy was emitted in the presence of a magnetic field, leading to the effect named after him.

It was only in 1925 that the cause of this behavior was found (by Wolfgang Pauli, George Uhlenbeck and Samuel Goudsmit), attributed to a property of electrons called spin.

The Bohr model couldn’t explain spin or its effects. It wasn’t discarded for this shortcoming, however, because it had succeeded in explaining a lot more, such as the emission of light in lasers, an application developed on the basis of Bohr’s theories and still in use today.

The model was also important for being a tangible breakaway from the principles of classical mechanics, which were useless at explaining quantum mechanical effects in atoms. Physicists recognized this and insisted on building on what they had.

A way ahead

To this end, a German named Arnold Sommerfeld provided a generalization of Bohr’s model – a correction – to let it explain the Zeeman effect in singly ionized helium (which, left with a single electron, behaves like a heavier hydrogen atom).

In 1924, Louis de Broglie introduced particle-wave duality into quantum mechanics, proposing that matter at its simplest could be both particulate and wave-like. As such, he was able to verify Bohr’s model mathematically from a wave’s perspective. Before him, in 1905, Albert Einstein had postulated the existence of light-particles called photons but couldn’t explain how they could be related to the heat waves emanating from a gas – a problem he later solved using de Broglie’s logic.

All these developments reinforced the apparent validity of Bohr’s model. Simultaneously, new discoveries were emerging that continuously challenged its authority (and classical mechanics’, too): molecular rotation, ground-state energy, Heisenberg’s uncertainty principle, Bose-Einstein statistics, etc. One option was to fall back to classical mechanics and rework quantum theory thereon. Another was to keep moving ahead in search of a solution.

However, this decision didn’t have to be taken because the field of physics itself had started to move ahead in different ways, ways which would become ultimately unified.

Leaps of faith

Between 1900 and 1925, a handful of people were responsible for opening this floodgate and carrying physics past the centuries-old Newtonian laws. Perhaps the last among them was Niels Bohr; the first was Max Planck, who originated quantum theory when he was working on making light bulbs glow brighter. He found that the smallest bits of energy to be found in nature weren’t random, but actually came in specific amounts that he called quanta.

It is notable that when either of these men began working on their respective contributions to quantum mechanics, they took a leap of faith that couldn’t be spanned by purely scientific reasoning, as is the dominant process today, but by faith in philosophical reasoning and, simply, hope.

For example, Planck wasn’t fond of a class of mechanics he used to establish quantum mechanics. When asked about it, he said it was an “act of despair”, that he was “ready to sacrifice any of [his] previous convictions about physics”. Bohr, on the other hand, had relied on the intuitive philosophy of correspondence to conceive of his model. In fact, even before he had received his Nobel in 1922, Bohr had begun to deviate from his most eminent finding because it disagreed with what he thought were more important, and to be preserved, foundational ideas.

It was also through this philosophy of correspondence that the many theories were able to be unified over the course of time. According to it, a new theory should replicate the results of an older, well-established one in the domain where it worked.

Coming a full circle

Since humankind’s investigation into the nature of physics has proceeded from the large to the small, new attempts to investigate from the small to the large were likely to run into old theories. And when multiple new quantum theories were found to replicate the results of one classical theory, they could be translated between each other by corresponding through the old theory (thus the name).

Because the Bohr model could successfully explain how and why energy was emitted by electrons jumping orbits in the hydrogen atom, it had a domain of applicability. So, it couldn’t be entirely wrong and would have to correspond in some way with another, possibly more successful, theory.

Earlier, in 1924, de Broglie’s formulation was suffering from its own inability to explain certain wave-like phenomena in particulate matter. Then, in 1926, Erwin Schrodinger built on it and, like Sommerfeld did with Bohr’s ideas, generalized them so that they could apply in experimental quantum mechanics. The end result was the famous Schrodinger’s equation.

The Sommerfeld-Bohr theory corresponds with the equation, and this is where it comes “full circle”. After the equation became well known, the Bohr model was finally understood as being a semi-classical approximation of the Schrodinger equation. In other words, the model represented some of the simplest corrections to be made to classical mechanics for it to become quantum in any way.

An ingenious span

After this, the Bohr model was – rather, became – a fully integrated part of the foundational ancestry of modern quantum mechanics. While in the field today it is one among many contributions of similar significance, it holds a special place in history: a bridge between the older classical thinking and the newer quantum thinking.

Even philosophically speaking, Niels Bohr and his pathbreaking work were important because they planted the seeds of ingenuity in our minds, and led us to think outside of convention.

This article, as written by me, originally appeared in The Copernican science blog on May 19, 2013.

The Last Temptation

Today, I bought The Last Temptation by Nikos Kazantzakis. When I handed the Rs. 450 it cost over at the counter, it was a significant moment for me because for the last three years, after my reading habit had fallen off but before I had realized that it had, I was rejecting books that “wouldn’t appeal to the man I wanted to become”.

I wouldn’t read books that had strong religious elements (because I wanted to be an atheist), that hadn’t good reviews (because I wanted to spend time “well”), that attended to morals and values I considered irrelevant, that hosted plots drawing upon cultural memories that were simply American or simply European but surely not global, etc. I would find the smallest of excuses to avoid masterpieces.

At the same time, I would read other books – especially non-fiction and works of fantasy fiction. To this day, I don’t know whence that part of me arose that judged literary agency before it was agent, but I do know it turned me into this pontificator who thought he’d read enough books to start judging others without having to read them. A part of me has liked to think nobody can do that. And by buying a copy of The Last Temptation (and intending to read it), I think I am out of mine.

Of course, I’m also assuming the solution is something so simple…

Choices.

The Verge paid Paul Miller to stay away from the internet for a year.

We have this urge to think of the internet as something that wasn’t produced by human agency, like an alien sewerage network whose filth has infected us and our lives to the point of disease. If someone has problems and they tell you about it, don’t tell me you haven’t thought about blaming the internet. I have, too. We think it is a constantly refilled dump that spills over onto our computer screens (while also hypocritically engaging in the rhetoric of how many opportunities “the social media” hold). And then, we realize that the internet is one massive improbably impressionable relay of emotions, propped up on infrastructure that simplifies access a hundredfold. There’s nothing leaving it behind will do to you because it’s always been your choice whether or not to access it.

In fact, that’s what you rediscover.

(Hat-tip to Dhiya Kuriakose)

Which way does antimatter swing?

In our universe, matter is king: it makes up everything. Its constituents are incredibly tiny particles – smaller than even the protons and neutrons they constitute – and they work together with nature’s forces to make up… everything.

There was also another form of particle once, called antimatter. It is extinct today, but when the universe was born 13.82 billion years ago, there were equal amounts of both kinds.

Nobody really knows where all the antimatter disappeared to or how, but they are looking. Some others, however, are asking another question: did antimatter, while it lasted, fall downward or upward in response to gravity?

Joel Fajans, a professor at the University of California, Berkeley, is one of the physicists doing the asking. “It is the general consensus that the interaction of matter with antimatter is the same as gravitational interaction of matter,” he told this correspondent.

But he wants to be sure, because what he finds could revolutionize the world of physics. Over the years, studying particles and their antimatter counterparts has revealed most of what we know today about the universe. In the future, physicists will explore their minuscule world, called the quantum world, further to see if answers to some unsolved problems are found. If, somewhere, an anomaly is spotted, it could pave the way for new explanations to take over.

“Much of our basic understanding of the evolution of the early universe might change. Concepts like dark energy and dark matter might have to be revised,” Fajans said.

Along with his colleague Jonathan Wurtele, Fajans will work with the ALPHA experiment at CERN to run an elegant experiment that could directly reveal gravity’s effect on antimatter. ALPHA stands for Anti-hydrogen Laser Physics Apparatus.

We know gravity acts on a ball by watching it fall when dropped. On Earth, the ball will fall toward the source of the gravitational pull, a direction called ‘down’. Fajans and Wurtele will study if down is in the same place for antimatter as for matter.

An instrument at CERN called the Antiproton Decelerator (AD) supplies the antimatter counterparts of protons for study in the lab at low energy. Fajans and co. will then use the ALPHA experiment’s setup to guide them, with carefully directed magnetic fields, into the presence of anti-electrons derived from another source.

When an anti-proton and an anti-electron come close enough, their charges will trap each other to form an anti-hydrogen atom.

Because antimatter and matter annihilate each other in a flash of energy, they couldn’t be let near each other during the experiment. Instead, the team used strong magnetic fields to form a force-field around the antimatter, “bottling” it in space.

Once this was done, the experiment was ready to go. Like fingers holding a ball unclench, the magnetic fields were turned off – but not instantaneously. They were allowed to go from ‘on’ to ‘off’ over 30 milliseconds. In this period, the magnetic force wears off and lets gravitational force take its place.

And in this state, Fajans and his team studied which way the little things moved: up or down.

The results

The first set of results from the experiment has allowed no firm conclusions to be drawn. Why? Fajans answered, “Relatively speaking, gravity has little effect on the energetic anti-atoms. They are already moving so fast that they are barely affected by the gravitational forces.” According to Wurtele, about 411 out of 434 anti-atoms in the trap were so energetic that the way they escaped from the trap couldn’t be attributed to gravity’s pull or push on them.

Among them, they observed roughly equal numbers of anti-atoms falling out at the bottom of the trap as at the top (and sides, for that matter).

They shared this data with their ALPHA colleagues and two people from the University of California, lecturer Andrew Charman and postdoc Andre Zhmoginov. They ran statistical tests to separate results due to gravity from results due to the magnetic field. Again, much statistical uncertainty remained.

The team has no reason to give up, though. For now, they know that gravity’s pull on antimatter would have to be 100 times stronger than it is on matter for its effects on anti-hydrogen atoms to have shown up. They have a bound to work with.

Moreover, the ALPHA experiment is also undergoing upgrades to become ALPHA-2. With this avatar, Fajans’s team also hopes to incorporate laser-cooling, a method of further slowing the anti-atoms, so that the effects of gravity are enhanced. Michael Doser, however, is cautious.

The future

As a physicist working with antimatter at CERN, Doser says, “I would be surprised if laser cooling of antihydrogen atoms, something that hasn’t been attempted to date, would turn out to be straightforward.” The challenge lies in bringing the systematics down to the point at which one can trust that any observation would be due to gravity, rather than due to the magnetic trap or the detectors being used.

Fajans and co. also plan to turn off the magnets more slowly in the future to enhance the effects of gravity on the anti-atom trajectories. “We hope to be able to definitively answer the question of whether or not antimatter falls down or up with these improvements,” Fajans concluded.

Like its larger sibling, the Large Hadron Collider, the AD is also undergoing maintenance and repair in 2013, so until the next batch of anti-protons are available in mid-2014, Fajans and Wurtele will be running tests at their university, checking if their experiment can be improved in any way.

They will also be taking heart from there being two other experiments at CERN that can verify their results if they come up with something anomalous, two experiments working with antimatter and gravity. They are the Anti-matter Experiment: Gravity, Interferometry, Spectroscopy (AEGIS), for which Doser is the spokesperson, and the Gravitational Behaviour of Anti-hydrogen at Rest (GBAR).

Together, they carry the potential benefit of an independent cross-check between techniques and results. “This is less important in case no difference to the behaviour of normal matter is found,” Doser said, “but would be crucial in the contrary case. With three experiments chasing this up, the coming years look to be interesting!”

This post, as written by me, originally appeared in The Copernican science blog at The Hindu on May 1, 2013.