Tabby's star

Announcing the WTF star

This paper presents the discovery of a mysterious dipping source, KIC 8462852, from the Planet Hunters project. In just the first quarter of Kepler data, Planet Hunters volunteers identified KIC 8462852’s light curve as a “bizarre”, “interesting”, “giant transit” (the Q1 event depth was 0.5% with a duration of 4 days). As new Kepler data were released in subsequent quarters, discussions continued on ‘Talk’ about KIC 8462852’s light curve peculiarities, particularly ramping up pace in the final observation quarters of the Kepler mission.

Umm, is there an alien megastructure around the star?

The most extreme case of a transiting megastructure would be a structure or swarm so large and opaque that it completely occults the star. In this case there might be a very small amount of scattered light from other components of a swarm, but for the most part the star would go completely dark at optical wavelengths. In the limit that such a structure or swarm had complete coverage of the star, one has a “complete Dyson sphere” (α = 1 in the AGENT formalism of Wright et al. 2014a). Less complete swarms or structures (as in the case of Badescu and Shkadov’s scenarios above) undergoing (perhaps non-Keplerian) orbital motion might lead to a star “winking out” as the structure moved between Earth and the star. In such scenarios, the occulting structure might be detectable at mid-infrared wavelengths if all of the intercepted stellar energy is ultimately disposed of as waste heat (that is, in the AGENT formalism, if ε ≈ α and α is of order 1).

If there are aliens around the star, they aren’t pinging us in radio

We have made a radio reconnaissance of the star KIC 8462852 whose unusual light curves might possibly be due to planet-scale technology of an extraterrestrial civilization. The observations presented here indicate no evidence for persistent technology-related signals in the microwave frequency range 1 – 10 GHz with threshold sensitivities of 180 – 300 Jy in a 1 Hz channel for signals with 0.01 – 100 Hz bandwidth, and 100 Jy in a 100 kHz channel from 0.1 – 100 MHz. These limits correspond to isotropic radio transmitter powers of 4 – 7 × 10^15 W and 10^20 W for the narrowband and moderate band observations. These can be compared with Earth’s strongest transmitters, including the Arecibo Observatory’s planetary radar (2 × 10^13 W EIRP). Clearly, the energy demands for a detectable signal from KIC 8462852 are far higher than this terrestrial example (largely as a consequence of the distance of this star). On the other hand, these energy requirements could be very substantially reduced if the emissions were beamed in our direction. Additionally, it’s worth noting that any society able to construct a Dyson swarm will have an abundant energy source, as the star furnishes energy at a level of ~10^27 watts. This report represents a first survey placing upper limits on anomalous flux from KIC 8462852. We expect that this star will be the object of additional observations for years to come.
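Those isotropic-power figures follow from just the flux thresholds, the channel bandwidths and the star’s distance. Here’s a back-of-the-envelope check in Python – a sketch, where the ~450 pc distance is an assumption taken from published estimates for KIC 8462852, not from the passage above:

```python
# Rough check of the quoted transmitter powers: EIRP = S * B * 4*pi*d^2,
# where S is the flux-density threshold and B the channel bandwidth.
import math

PC_M = 3.0857e16              # metres per parsec
JY = 1e-26                    # W m^-2 Hz^-1 per jansky
d = 450 * PC_M                # assumed distance to KIC 8462852 (~450 pc)
sphere = 4 * math.pi * d**2   # area over which an isotropic signal spreads

# Narrowband: 180 Jy threshold in a 1 Hz channel
print(f"narrowband EIRP ~ {180 * JY * 1.0 * sphere:.1e} W")   # ~4e15 W

# Moderate band: 100 Jy threshold in a 100 kHz channel
print(f"wideband EIRP   ~ {100 * JY * 1e5 * sphere:.1e} W")   # ~2e20 W
```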

There could be a comet swarm around the star

We find that a comet family on a single orbit with dense clusters can cause most of the observed complex series of irregular dips that occur after day 1500 in the KIC 8462852 light curve. However, the fit requires a large number of comets and is also not well constrained. We cannot limit the structure of the system much beyond the observational constraints, and the dynamical history of the comet family is unknown; but if the comet family model is correct, there is likely a planetary companion forming sungrazers. Since the comets are still tightly clustered within each dip, a disruption event – such as tidal disruption by the star – likely occurred recently, within the last orbit. This comet family model does not explain the large dip observed around day 800 and treats it as unrelated to the ones starting at day 1500. That dip’s flux changes too smoothly and too slowly to be easily explained with a simple comet family model.

Okay, the aliens won’t have it hard to ping us, but if they don’t know we’re listening, we might have to look pretty hard for them

If, however, any inhabitants of KIC 8462852 were targeting our solar system (Shostak & Villard 2004), the required energy would be reduced greatly. As an example, if such hypothetical extraterrestrials used a 10 m mirror to beam laser pulses in our direction, then using a 10 m receiving telescope, the minimum detectable energy per pulse would be 125,000 joules. If this pulse repeated every 20 minutes, then the average power cost to the transmitting civilization would be a rather low 100 watts. This would be a negligible cost for any civilization capable of constructing a megastructure large enough to be responsible for the dimming seen with KIC 8462852, particularly if that structure were used to capture a large fraction of the star’s energy (~10^27 watts). It would be considerably easier to detect such signals intentionally directed toward Earth than to intercept collimated communications between two star systems along a vector that accidentally intersects the Earth (Forgan 2014).
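The arithmetic behind that “rather low 100 watts” is simply the pulse energy averaged over the repetition period:

```python
# Average power = energy per pulse / time between pulses
pulse_energy = 125_000        # minimum detectable energy per pulse, joules
period = 20 * 60              # one pulse every 20 minutes, in seconds

print(f"average power ~ {pulse_energy / period:.0f} W")  # ~104 W
```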

BTW, the star faded in brightness in the last 100 years

KIC 8462852 has suffered a century-long secular fading, and this is contrary to the various speculations that the obscuring dust was created by some singular catastrophic event. If any such singular event had happened after around 1920, then the prior light curve should appear perfectly flat, whereas there is significant variability before 1920. If the trend is caused by multiple small catastrophic events, then it is difficult to understand how they can time themselves so as to mimic the trend from 1890-1989. In the context of the idea that the star is undergoing a Late Heavy Bombardment (Lisse et al. 2015), it is implausible that such a mechanism could start up on a time scale of a century, or that it would start so smoothly with many well-spaced collisions.

Wait, there’s a reason it might not have faded in the last 100 years

Assuming that all stars have been drawn randomly from the same sample, the chance of drawing 2 of 2 constant stars is 13%. It might be attributed to bad luck that these apparent data discontinuities were not seen in the first place. After visual inspection of all the data, we favour the interpretation that both structural breaks and long-term (decades-long) linear trends are present in these data. The structural breaks appear most prominent at the “Menzel gap” but might also be present at other times. These issues might arise from changes in technology and imperfect calibration.

32 days and counting

It’s been 32 days since I’ve been able to write in that half-ranting half-jargon-dropping style that’s always been the clearest indication that I’m excited about something (yes, my writing tells me things I can’t otherwise figure out). In the last three weeks, I’ve written three pieces for The Wire, and all of them were coughed-spluttered-staccatoed out. I dearly miss the flow and 32 days is the longest it’s been gone.

I’m not sure which ones are the causes and which the effect – but periods of the block are also accompanied by an inability to think things through, being scatter-brained and easily distracted, and a general sense of disinterest. And when I can’t write normally, I can’t read or ideate normally either; even the way I’ve ratified and edited submissions for The Wire has suffered.

I’ve tried everything that’s cleared the block in the past, but nothing has worked: writing more, reading more, cathartic music; moving around, meeting people, changes of scenery; getting into a routine I’ve traditionally reserved for phases like this, a diet, some exercise. This is frightening – I need a new solution and I’ve no idea where to look. Do you just wait for your block to fade or do you have a remedy for it?

Featured image credit: Matthias Ripp/Flickr, CC BY 2.0.

Priggish NEJM editorial on data-sharing misses the point it almost made

Twitter outraged like only Twitter could on January 22 over a strange editorial that appeared in the prestigious New England Journal of Medicine, calling for medical researchers to not make their research data public. The call comes at a time when the scientific publishing zeitgeist is slowly but surely shifting toward journals requiring, sometimes mandating, the authors of studies to make their data freely available so that their work can be validated by other researchers.

Through the editorial, written by Dan Longo and Jeffrey Drazen – both doctors, the latter the journal’s editor-in-chief – NEJM also cautions medical researchers to be on the lookout for ‘research parasites’, a coinage that the journal says is befitting “of people who had nothing to do with the design and execution of the study but use another group’s data for their own ends, possibly stealing from the research productivity planned by the data gatherers, or even use the data to try to disprove what the original investigators had posited”. As @omgItsEnRIz tweeted, do the authors even science?

The choice of words is more incriminating than the overall tone of the text, which also tries to express the more legitimate concern of replicators not getting along with the original performers. However, by saying that the ‘parasites’ may “use the data to try to disprove what the original investigators had posited”, NEJM has crawled into an unwise hole of infallibility of its own making.

In October 2015, a paper published in the Journal of Experimental Psychology pointed out why replication studies are probably more necessary than ever. The misguided publish-or-perish impetus of scientific research, together with publishing in high impact-factor journals being lazily used as a proxy for ‘good research’ by many institutions, has led researchers to hack their results – i.e. prime them (say, by cherry-picking) so that the study ends up reporting sensational results when, really, duller ones exist.

The JEP paper had a funnel plot to demonstrate this. Quoting from the Neuroskeptic blog, which highlighted the plot when the paper was published, “This is a funnel plot, a two-dimensional scatter plot in which each point represents one previously published study. The graph plots the effect size reported by each study against the standard error of the effect size – essentially, the precision of the results, which is mostly determined by the sample size.” Note: the y-axis is running top-down.

[Figure: funnel plot from the JEP paper]

The paper concerned itself with 43 previously published studies discussing how people’s choices were perceived to change when they were gently reminded about sex.

As Neuroskeptic goes on to explain, there are three giveaways in this plot. One is obvious – the distribution of replication studies is markedly separated from that of the original studies. Second: among the original studies, the least precise ones (i.e. those with the smallest samples) reported the largest effects – a classic sign of publication bias. Third: the original studies all seemed to “hug” the outer edge of the grey triangle, which marks the boundary of statistical significance. The uniform ‘hugging’ is an indication that all those original studies were likely guilty of cherry-picking from their data to conclude with results that are just barely significant, an act called ‘p-hacking’.
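To see what those three signatures look like, here’s a minimal synthetic funnel plot in Python – the numbers are entirely made up for illustration, not the JEP data – with ‘original’ studies hugging the p = 0.05 boundary and precise replications centred on zero:

```python
# A synthetic funnel plot: effect size (x) vs standard error (y, inverted),
# mirroring the layout of the JEP figure described above.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Hypothetical "original" studies: imprecise, just clearing p = 0.05
se_orig = rng.uniform(0.25, 0.6, 20)
d_orig = 1.96 * se_orig + rng.normal(0.02, 0.03, 20)

# Hypothetical replications: larger samples, effects centred near zero
se_rep = rng.uniform(0.05, 0.15, 15)
d_rep = rng.normal(0.0, 1.0, 15) * se_rep

# Dashed lines mark the p = 0.05 significance boundary (|d| = 1.96 * SE)
se_grid = np.linspace(0.01, 0.7, 100)
plt.plot(1.96 * se_grid, se_grid, "k--", lw=1, label="p = 0.05 boundary")
plt.plot(-1.96 * se_grid, se_grid, "k--", lw=1)
plt.scatter(d_orig, se_orig, label="original studies")
plt.scatter(d_rep, se_rep, marker="s", label="replications")
plt.gca().invert_yaxis()  # y runs top-down, as in the JEP figure
plt.xlabel("effect size")
plt.ylabel("standard error")
plt.legend()
plt.show()
```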

A line of research can appear to progress rapidly but without replication studies it’s difficult to establish if the progress is meaningful for science – a notion famously highlighted by John Ioannidis, a professor of medicine and statistics at Stanford University, in his two landmark papers in 2005 and 2014. Björn Brembs, a professor of neurogenetics at the Universität Regensburg, Bavaria, also pointed out how the top journals’ insistence on sensational results could result in a congregation of unreliability. Together with a conspicuous dearth of systematically conducted replication studies, this ironically implies that the least reliable results are often taken the most seriously thanks to the journals they appear in.

The most accessible sign of this is a plot between the retraction index and the impact factor of journals. The term ‘retraction index’ was coined in the same paper in which the plot first appeared; it stands for “the number of retractions in the time interval from 2001 to 2010, multiplied by 1,000, and divided by the number of published articles with abstracts”.
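By that definition, the index is a one-line computation – a sketch with placeholder figures, not real retraction counts:

```python
def retraction_index(retractions_2001_2010: int, articles_with_abstracts: int) -> float:
    """Retractions over 2001-2010, times 1,000, per published article with an abstract."""
    return retractions_2001_2010 * 1000 / articles_with_abstracts

# e.g. a hypothetical journal with 25 retractions among 40,000 eligible articles:
print(retraction_index(25, 40_000))  # 0.625
```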

Impact factor of journals plotted against the retraction index. The highest IF journals – Nature, Cell and Science – are farther along the trend line than they should be. Source: doi: 10.1128/IAI.05661-11

Look where NEJM is. Enough said.

The journal’s first such supplication appeared in 1997, when it wrote against pre-print copies of medical research papers becoming available and easily accessible – à la the arXiv server for physics. Then, the authors, again two doctors, wrote, “medicine is not physics: the wide circulation of unedited preprints in physics is unlikely to have an immediate effect on the public’s well-being even if the material is biased or false. In medicine, such a practice could have unintended consequences that we all would regret.” Though a reasonable PoV, the overall tone appeared to stand against the principles of open science.

More importantly, both editorials, separated by almost two decades, make one reasonable argument that sadly appears to make sense to the journal only in the context of a wider set of arguments, many of them contemptible. For example, Drazen seems to understand the importance of data being available for studies to be validated but has differing views on different kinds of data. Two days before his editorial was published, the same journal carried another, co-authored by 16 medical researchers – Drazen among them – this time calling for anonymised patient data from clinical trials to be made available to other researchers because it would “increase confidence and trust in the conclusions drawn from clinical trials. It will enable the independent confirmation of results, an essential tenet of the scientific process.”

(At the same time, the editorial also says, “Those using data collected by others should seek collaboration with those who collected the data.”)

For another example, NEJM labours under the impression that the data generated by medical experiments will not ever be perfectly communicable to other researchers who were not involved in the generation of it. One reason it provides is that discrepancies in the data between the original group and a new group could arise because of subtle choices made by the former in the selection of parameters to evaluate. However, the solution doesn’t lie in the data being opaque altogether.

A better way to conduct replication studies

An instructive example played out in May 2014, when the journal Social Psychology published a special issue dedicated to replication studies. The issue contained both successful and failed attempts at replicating some previously published results, and the whole process was designed to eliminate biases as much as possible. For example, the journal’s editors Brian Nosek and Daniel Lakens didn’t curate replication studies but instead registered the studies before they were performed so that their outcomes would be published irrespective of whether they turned out positive or negative. For another, all the replications used the same experimental and statistical techniques as in the original study.

One scientist who came out feeling wronged by the special issue was Simone Schnall, the director of the Embodied Cognition and Emotion Laboratory at Cambridge University. The results of a paper co-authored by Schnall in 2008 had failed to be replicated, but she believed there had been a mistake in the replication that, when corrected, would corroborate her group’s findings. However, her statements were quickly and widely interpreted to mean she was being a “sore loser”. In one blog, her 2008 findings were called an “epic fail” (though the words were later struck out).

This was soon followed a rebuttal by Schnall, followed by a counter by the replicators, and then Schnall writing two blog posts (here and here). Over time, the core issue became how replication studies were conducted – who performed the peer review, the level of independence the replicators had, the level of access the original group had, and how journals could be divorced from having a choice about which replication studies to publish. But relevant to the NEJM context, the important thing was the level of transparency maintained by Schnall & co. as well as the replicators, which provided a sheen of honesty and legitimacy to the debate.

The Social Psychology issue was able to take the conversation forward, getting authors to talk about the psychology of research reporting. There have been few other such instances – exercises exploring the proper mechanisms of replication studies – so if the NEJM editorial had stopped at calling for better organised collaborations between a study’s original performers and its replicators, it would’ve been great. As Longo and Drazen concluded, “How would data sharing work best? We think it should happen symbiotically … Start with a novel idea, one that is not an obvious extension of the reported work. Second, identify potential collaborators whose collected data may be useful in assessing the hypothesis and propose a collaboration. Third, work together to test the new hypothesis. Fourth, report the new findings with relevant coauthorship to acknowledge both the group that proposed the new idea and the investigative group that accrued the data that allowed it to be tested.”

https://twitter.com/significantcont/status/690507462848450560

The mistake lies in thinking anything else would be parasitic. And the attitude affects not just other scientists but some science communicators as well. Any journalist or blogger who has been reporting on a particular beat for a while stands to become a ‘temporary expert’ on the technical contents of that beat. And with exploratory/analytical tools like R – which is easier than you think to pick up – the communicator could dig deeper into the data, teasing out issues more relevant to their readers than what the accompanying paper thinks is the highlight. Sure, NEJM remains apprehensive about how medical results could be misinterpreted to terrible consequence. But the solution there would be for the communicators to be more professional and disciplined, not for the journal to be more opaque.

The Wire
January 24, 2016

Parsing Ajay Sharma v. E = mc²

Featured image credit: saulotrento/Deviantart, CC BY-SA 3.0.

To quote John Cutter (Michael Caine) from The Prestige:

Every magic trick consists of three parts, or acts. The first part is called the pledge, the magician shows you something ordinary. The second act is called the turn, the magician takes the ordinary something and makes it into something extraordinary. But you wouldn’t clap yet, because making something disappear isn’t enough. You have to bring it back. Now you’re looking for the secret. But you won’t find it because of course, you’re not really looking. You don’t really want to work it out. You want to be fooled.

The Pledge

Ajay Sharma is an assistant director of education with the Himachal Pradesh government. On January 10, the Indo-Asian News Service (IANS) published an article in which Sharma claims Albert Einstein’s famous equation E = mc² is “illogical” (republished by The Hindu, Yahoo! News and Gizmodo India, among others). The precise articulation of Sharma’s issue with it is unclear because the IANS article contains multiple unqualified statements:

Albert Einstein’s mass energy equation (E=mc2) is inadequate as it has not been completely studied and is only valid under special conditions.

Einstein considered just two light waves of equal energy emitted in opposite directions with uniform relative velocity.

“It’s only valid under special conditions of the parameters involved, e.g. number of light waves, magnitude of light energy, angles at which waves are emitted and relative velocity.”

Einstein considered just two light waves of equal energy, emitted in opposite directions and the relative velocity uniform. There are numerous possibilities for the parameters which were not considered in Einstein’s 1905 derivation.

It said E=mc2 is obtained from L=mc2 by simply replacing L by E (all energy) without derivation by Einstein. “It’s illogical,” he said.

Although Einstein’s theory is well established, it has to be critically analysed and the new results would definitely emerge.

Sharma also claims Einstein’s work wasn’t original and only ripped off Galileo, Henri Poincaré, Hendrik Lorentz, Joseph Larmor and George FitzGerald.

The Turn

Let’s get some things straight.

Mass-energy equivalence – E = mc² isn’t wrong but it’s often overlooked that it’s an approximation. This is the full equation:

E² = m₀²c⁴ + p²c²

(Notice the similarity to the Pythagoras theorem?)

Here, m₀ is the mass of the object (say, a particle) when it’s not moving, p is its momentum (at low speeds, roughly its mass times its velocity – m*v) and c, the speed of light. When the particle is not moving, v is zero, so p is zero, and so the right-most term in the equation can be removed. This yields:

E² = m₀²c⁴ ⇒ E = m₀c²

If a particle was moving close to the speed of light, applying just E = m₀c² would be wrong without the rapidly enlarging p²c² term. In fact, the equivalence remains applicable in its most famous form only in cases where an observer is co-moving along with the particle. So, there is no mass-energy equivalence as much as a mass-energy-momentum equivalence.
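A quick numerical illustration of how fast the rest-energy approximation degrades – a sketch using an electron as the test particle, with the relativistic momentum p = γm₀v:

```python
# Compare the full E = sqrt((m0*c^2)^2 + (p*c)^2) with the rest-energy
# approximation E = m0*c^2 as the electron's speed grows.
import math

c = 299_792_458.0        # speed of light, m/s
m0 = 9.1093837e-31       # electron rest mass, kg

for beta in (0.1, 0.5, 0.9, 0.99):
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    p = gamma * m0 * beta * c                        # relativistic momentum
    e_full = math.sqrt((m0 * c**2)**2 + (p * c)**2)  # full expression
    excess = 100 * (e_full / (m0 * c**2) - 1)
    print(f"v = {beta:.2f}c: full E exceeds m0*c^2 by {excess:.1f}%")
```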

And at the time of publishing this equation, Einstein was aware that it was held up by multiple approximations. As Terence Tao sets out, these would include (but not be limited to) p being equal to mv at low velocities, the laws of physics being the same in two frames of reference moving at uniform velocities, Planck’s and de Broglie’s laws holding, etc.

These approximations are actually inherited from Einstein’s special theory of relativity, which describes the connection between space and time. In a paper dated September 27, 1905, Einstein concluded that if “a body gives off the energy L in the form of radiation, its mass diminishes by L/c²”. ‘L’ was simply the notation for energy that Einstein used until 1912, when he switched to the more common ‘E’.

The basis of his conclusion was a thought experiment he detailed in the paper, in which a point-particle emits “plane waves of light” in opposite directions, first while at rest and then while in motion. He then calculates the difference in the body’s kinetic energy before and after it starts to move, accounting for the energy carried away by the radiated light:

K₀ − K₁ = (1/2)(L/c²)v²
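(In the 1905 paper this is the low-speed limit of K₀ − K₁ = L(γ − 1), where γ = 1/√(1 − v²/c²); a quick symbolic check of that expansion, sketched with sympy:)

```python
# Expand L*(gamma - 1) for small v: the leading term is (1/2)*(L/c^2)*v^2
import sympy as sp

v, c, L = sp.symbols("v c L", positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)
print(sp.series(L * (gamma - 1), v, 0, 4))  # L*v**2/(2*c**2) + O(v**4)
```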

This is what Sharma is referring to when he says, “Einstein considered just two light waves of equal energy, emitted in opposite directions and the relative velocity uniform. There are numerous possibilities for the parameters which were not considered in Einstein’s 1905 derivation.” Well… sure. Einstein’s was a gedanken (thought) experiment to illustrate a direct consequence of the special theory. How he chose to frame the problem depended on what connection he wanted to illustrate between the various attributes at play.

And the more attributes are included in the experiment, the more connections will arise. Whether or not they’d be meaningful (i.e. being able to represent a physical reality – such as with being able to say “if a body gives off the energy L in the form of radiation, its mass diminishes by L/c²”) is a separate question.

As for another of Sharma’s claims – that the equivalence is “only valid under special conditions of the parameters involved, e.g. number of light waves, magnitude of light energy, angles at which waves are emitted and relative velocity”: Einstein’s theory of relativity is the best framework of mathematical rules we have to describe all these parameters together. So any gedanken experiment involving just these parameters can be properly analysed, to the best of our knowledge, with Einstein’s theory, and within that theory – and as a consequence of that theory – the mass-energy-momentum equivalence will persist. This implication was demonstrated by the famous Cockcroft-Walton experiment in 1932.

General theory of relativity – Einstein’s road to publishing his general theory (which turned 100 last year) was littered with multiple challenges to its primacy. This is not surprising because Einstein’s principal accomplishment was not in having invented something but in having recombined and interpreted a trail of disjointed theoretical and experimental discoveries into a coherent, meaningful and testable theory of gravitation.

As mentioned earlier, Sharma claims Einstein ripped off Galileo, Poincaré, Lorentz, Larmor and FitzGerald. For what it’s worth, he could also have mentioned William Kingdon Clifford, Georg Bernhard Riemann, Tullio Levi-Civita, Gregorio Ricci-Curbastro, János Bolyai, Nikolai Lobachevsky, David Hilbert, Hermann Minkowski and Fritz Hasenöhrl. Here are their achievements in the context of Einstein’s (in a list that’s by no means exhaustive).

  • 1632, Galileo Galilei – Published a book, one of whose chapters features a dialogue about the relative motion of planetary bodies and the role of gravity in regulating their motion
  • 1824-1832, Bolyai and Lobachevsky – Conceived of hyperbolic geometry (in which Euclidean rules, like the sum of a triangle’s angles being 180º, don’t hold), which inspired Riemann and his mentor to consider if there was a kind of geometry to explain the behaviour of shapes in four dimensions (as opposed to three)
  • 1854, G. Bernhard Riemann – Conceived of elliptic geometry and a way to compare vectors in four dimensions, ideas that would benefit Einstein immensely because they helped him discover that gravity wasn’t a force in space-time but actually the curvature of space-time
  • 1876, William K. Clifford – Suggested that the forces that shape matter’s motion in space could be guided by the geometry of space, foreshadowing Einstein’s idea that matter influences gravity influences matter
  • 1887-1902, FitzGerald and Lorentz – Showed that observers in different frames of reference that are moving at different velocities can measure the length of a common body to differing values, an idea then called the FitzGerald-Lorentz contraction hypothesis. Lorentz’s mathematical description of this gave rise to a set of formulae called Lorentz transformations, which Einstein later derived through his special theory.
  • 1897-1900, Joseph Larmor – Realised that observers in different frames of reference that are moving at different velocities can also measure different times for the same event, leading to the time dilation hypothesis that Einstein later explained
  • 1898, Henri Poincaré – Interpreted Lorentz’s abstract idea of a “local time” to have physical meaning – giving rise to the idea of relative time in physics – and was among the first physicists to speculate on the need for a consistent theory to explain the consequences of light having a constant speed
  • 1900, Levi-Civita and Ricci-Curbastro – Built on Riemann’s ideas of a non-Euclidean geometry to develop tensor calculus (a tensor is a vector in higher dimensions). Einstein’s field-equations for gravity, which capped his formulation of the celebrated general theory of relativity, would feature the Ricci tensor to account for the geometric differences between Euclidean and non-Euclidean geometry.
  • 1904-1905, Fritz Hasenöhrl – Built on the work of Oliver Heaviside, Wilhelm Wien, Max Abraham and John H. Poynting to devise a thought experiment from which he was able to conclude that heat has mass, a primitive synonym of the mass-energy-momentum equivalence
  • 1907, Hermann Minkowski – Conceived a unified mathematical description of space and time that Einstein could use to better express his special theory. Minkowski said of his work: “From this hour on, space by itself, and time by itself, shall be doomed to fade away in the shadows, and only a kind of union of the two shall preserve an independent reality.”
  • 1915, David Hilbert – Derived the general theory’s field equations a few days before Einstein did but managed to have his paper published only after Einstein’s was, leading to an unresolved dispute about who should take credit. However, the argument was made moot by only Einstein being able to explain how Isaac Newton’s laws of classical mechanics fit into the theory – Hilbert couldn’t.

FitzGerald, Lorentz, Larmor and Poincaré all laboured assuming that space was filled with a ‘luminiferous ether’. The ether was a pervasive, hypothetical yet undetectable substance that physicists of the time believed had to exist so electromagnetic radiation had a medium to travel in. Einstein’s theories provided a basis for their ideas to exist without the ether, and as a consequence of the geometry of space.

So, Sharma’s allegation that Einstein republished the work of other people in his own name is misguided. Einstein didn’t plagiarise. And while there are many accounts of his competitive nature, to the point of asserting that a mathematician who helped him formulate the general theory wouldn’t later lay partial claim to it, there’s no doubt that he did come up with something distinctively original in the end.

The Prestige

Ajay Sharma with two of his books. Source: Fundamental Physics Society (Facebook page)

To recap:

Albert Einstein’s mass energy equation (E=mc2) is inadequate as it has not been completely studied and is only valid under special conditions.

Claims that Einstein’s equations are inadequate are difficult to back up because we’re yet to find circumstances in which they seem to fail. Theoretically, they can be made to appear to fail by forcing them to account for, say, higher dimensions, but that’s like wearing suede shoes in the rain and then complaining when they’re ruined. There’s a time and a place to use them. Moreover, the failure of general relativity or quantum physics to meet each other halfway (in a quantum theory of gravity) can’t be pinned on a supposed inadequacy of the mass-energy equivalence alone.

Einstein considered just two light waves of equal energy emitted in opposite directions with uniform relative velocity.

“It’s only valid under special conditions of the parameters involved, e.g. number of light waves, magnitude of light energy, angles at which waves are emitted and relative velocity.”

Einstein considered just two light waves of equal energy, emitted in opposite directions and the relative velocity uniform. There are numerous possibilities for the parameters which were not considered in Einstein’s 1905 derivation.

That a gedanken experiment was limited in scope is a pointless accusation. Einstein was simply showing that A implied B, and was never interested in proving that A’ (a different version of A) did not imply B. And tying all of this to the adequacy (or not) of E = mc2 leads equally nowhere.

It said E=mc2 is obtained from L=mc2 by simply replacing L by E (all energy) without derivation by Einstein. “It’s illogical,” he said.

From the literature, the change appears to be one of notation. If not that, then Sharma could be challenging the notion that the energy of a moving body is equal to the sum of the energy of the body at rest and its kinetic energy – which is what lets Einstein substitute the kinetic energy on the LHS with L (or E) once the RHS includes E₀, the energy of the body at rest: E = E₀ + K. In which case Sharma’s challenge is even more ludicrous for calling one of the basic tenets of thermodynamics “illogical” without indicating why.

Although Einstein’s theory is well established, it has to be critically analysed and the new results would definitely emerge.

The “the” before “new results” is the worrying bit: it points to claims of his that have already been made, and suggests they’re contrary to what Einstein has claimed. It’s not that the German is immune to refutation – no one is – but that whatever claim this is seems to be at the heart of what’s at best an awkwardly worded outburst, and which IANS has unquestioningly reproduced.

A persistent search for Sharma’s paper on the web didn’t turn up any results – the closest I got was in unearthing its title (#237) in a list of titles published at a conference hosted by a ‘Russian Gravitational Society’ in May 2015. Sharma’s affiliation is mentioned as a ‘Fundamental Physics Society’ – which in turn shows up as a Facebook page run by Sharma. But an ibnlive.com article from around the same time provides some insight into Sharma’s ‘research’ (translated from the Hindi by Siddharth Varadarajan):

In this way, Ajay is also challenging the great scientist of the 21st century (sic) Albert Einstein. After deep research into his formula, E=mc2, he says that “when a candle burns, its mass reduces and light and energy are released”. According to Ajay, Einstein obtained this equation under special circumstances. This means that from any matter/thing, only two rays of light emerge. The intensity of light of both rays is the same and they emerge from opposite directions. Ajay says Einstein’s research paper was published in 1905 in the German research journal [Annalen der Physik] without the opinion of experts. Ajay claims that if this equation is interpreted under all circumstances, then you will get wrong results. Ajay says that if a candle is burning, its mass should increase. Ajay says his research paper has been published after peer review. [Emphasis added.]

A pattern underlying some of Sharma’s claims has to do with confusing conjecturing and speculating (even perfectly reasonably) with formulating and defining and proving. The most telling example in this context is alleging that Einstein ripped off Galileo: even if they both touched on relative motion in their research, what Galileo did for relativity was vastly different from what Einstein did. In fact, following the Indian Science Congress in 2015, V. Vinay, an adjunct faculty at the Chennai Mathematical Institute and a teacher in Bengaluru, had pointed out that these differences in fact encapsulated the epistemological attitudes of the Indian and Greek civilisations: the TL;DR version is that we weren’t a proof-seeking people.

Swinging back to the mass-energy equivalence itself – it’s a notable piece but a piece nonetheless of an expansive theory that’s demonstrably incomplete. And there are other theories like it, like flotsam on a dark ocean whose waters we haven’t been able to see, theories we’re struggling to piece together. It’s a time when Popper’s philosophies haven’t been able to qualify or disqualify ‘discoveries’, a time when the subjective evaluation of an idea’s usefulness seems just as important as objectively testing it. But despite the grand philosophical challenges these times face us with, extraordinary claims still do require extraordinary evidence. And at that Ajay Sharma quickly fails.

Hat-tip to @AstroBwouy, @ainvvy and @hosnimk.

The Wire
January 12, 2016

Ways of seeing

A lot of the physics of 2015 was about how the ways in which we study the natural world had been improved or were improving.