Psych of Science: Hello World

Hello, world. 🙂 I’m filing this post under a new category on Is Nerd called Psych of Science. A dull name, but it’ll do. This category will host my personal reflections on the science in the stories I’ve written or read and, more importantly, on the people in those stories.

I decided to create this category after the Social Psychology replications incident. While it was not a seminal episode, reading about and understanding the kinds of issues faced by the authors of the original papers and by the replicators really got me thinking about the psychology of science. It wasn’t an eye-opening incident, but I was surprised by how interested I was in how the conversation would play out.

Admittedly, I’m a lousy people person, and that especially comes across in my writing. I’ve always been interested in understanding how things work, not how people work. This is a discrepancy I hope to fix during my stint at NYU, which I’m slated to attend this fall (2014). In the meantime, and afterward if I get the time, I’ll leave my reflections here, and you’re welcome to add to them, too.

Rocky exoplanets only get so big before they get gassy

By the time the NASA Kepler mission failed in 2013, it had gathered evidence of at least 962 exoplanets in 76 stellar systems, with the final word awaited on 2,900 more candidates. In the four years it operated, it far surpassed its envisioned science goals. The 12 gigabytes of data it transmitted home contained a wealth of information on different kinds of planets, big and small, hot and cold, orbiting a similar variety of stars.

Sifting through it, scientists have found many insightful patterns, many of which evade scientific explanation and keep the cosmos as wonderful as ever. In the most recent instance, astronomers from Harvard, Berkeley and Honolulu have unearthed a connection between some exoplanets’ size, density and prevalence.

They have found that most exoplanets with radii more than 1.5 times Earth’s are not rocky. At or below this cut-off, they are rocky and could hypothetically support human life. Larger exoplanets – analogous to Neptune and heavier – have rocky cores surrounded by thick gaseous envelopes with atmospheric pressures too high for human survival.

“We do not know why rocky planetary cores begin to support thick gaseous layers at about 1.5 Earth radii as opposed to 1.2 or 1.8 Earth radii, and as the community answers this question, we will learn something about planet formation,” said Lauren Weiss, a third-year graduate student at UC Berkeley.

She is the second author on the group’s paper, which was published in the Proceedings of the National Academy of Sciences on May 26. The first author is Geoff Marcy, the “planet hunter”, who holds the Watson and Marilyn Alberts Chair for SETI at UC Berkeley.

Bigger doesn’t necessarily mean heavier

The planets of the Solar System. Image: Lsmpascal

The group analyzed the masses and radii of more than 60 exoplanets, 33 of which were discussed in the paper. “Many of the planets in our study straddle the transition between rocky planets and planets with gaseous envelopes,” Weiss explained. The analysis was narrowed down to planets with orbital periods of five to 100 days, which correspond to orbital distances of 0.05 to 0.42 astronomical units. One astronomical unit (AU) is the distance between Earth and the Sun.

Fully 26.2% of such planets, which orbit Sun-like stars, have radii 1 to 1.41 times that of Earth (denoted R⊕) and an orbital distance of around 0.4 AU. Accounting for planets with radii up to 4R⊕, their prevalence jumps to more than half. In other words, one in every two planets orbiting a Sun-like star is anywhere from as wide as Earth to four times as wide.

And in this set, the connection between exoplanet density and radius showed itself. The astronomers found that the densities of Earth-sized exoplanets steadily increased until their radii touched 1.5R⊕, and dropped off thereafter. In fact, this relationship was so consistent in their data that Weiss & co. were able to tease out a relation between density and radius for exoplanets up to 1.5R⊕ – one they found held for Mercury, Venus and Earth, too.

Density = 2.32 + 3.19(R/R⊕) g/cm³

So the astronomers were able to calculate an Earth-like planet’s density from its radius, and vice versa, using this equation. Beyond 1.5R⊕, however, the density dropped off as the planet accrued more hydrogen, helium and water vapor. At 1.5R⊕, they found the maximum density to be around 7.6 g/cm³, against Earth’s 5.5 g/cm³.
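
If you want to play with the equation yourself, here’s a minimal sketch in Python (my own, not the authors’ code; the function name and sample radii are illustrative assumptions):

    def rocky_density(radius_in_earth_radii):
        # Relation quoted above; only stated for radii up to 1.5 R_Earth.
        if not 0 < radius_in_earth_radii <= 1.5:
            raise ValueError("relation only holds up to 1.5 Earth radii")
        return 2.32 + 3.19 * radius_in_earth_radii  # density in g/cm^3

    print(rocky_density(0.95))  # Venus-sized: ~5.35 (Venus's measured bulk density is ~5.2)
    print(rocky_density(1.00))  # Earth-sized: ~5.51 (matches Earth's ~5.5)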

The question of density plays a role in understanding where life could arise in the universe. While it could form on any planet orbiting any kind of star, we can’t forget that Earth is the only planet on which life has been found to date. It is an exemplary case.

There’s nothing in between

Are we really that alone? Photo: NASA

Figuring out how many Earth-like planets there could be in the galaxy, possibly around Sun-like stars, could therefore help us understand the chances of finding life outside the Solar System.

And because Earth leads the way, we think “humans would best be able to explore planets with rocky surfaces.” In the same way, Weiss added, “we would better be able to explore, or colonize, the rocky planets smaller than 1.5 Earth radii.”

This is where the astronomers hit another stumbling block. While data from Kepler showed that most exoplanets are small, topping off at 4R⊕, the Solar System doesn’t have any such planets. That is, there is no planet orbiting the Sun that is heavier than Earth but lighter than Neptune.

“It beats all of us,” Weiss said. “We don’t know why our Solar System didn’t make sub-Neptunes.” The Kepler mission can’t provide information on this front, either. “At four years, it lasted less time than a single orbit of Jupiter, 11 years, and so it can’t answer questions about the frequency of Jupiter, Saturn, Uranus, or Neptune analogs,” Weiss explained.

It seems the cosmos has lived up to its millennia-old promise, then, as more discoveries trickle in on the back of yet more questions. We will have to keep looking skyward for answers.

Replication studies, ceiling effects, and the psychology of science

On May 25, I found Erika Salomon’s tweet:

The story started when the journal Social Psychology decided to publish successful and failed replication attempts, instead of conventional papers, in a Replications Special Issue (Volume 45, Number 3 / 2014). It accepted proposals from scientists stating which studies they wanted to try to replicate, and registered the accepted ones. This way, the journal’s editors, Brian Nosek and Daniel Lakens, could ensure that a study was published no matter the outcome – successful or not.

All the replication studies were direct replications, meaning they used the same experimental procedure and statistical methods as the originals. Before a replication attempt began, the original data, procedure and analysis methods were scrutinized, and the data was shared with the replicating group. Moreover, an author of the original paper was invited to review the respective proposal and have a say in whether it could be accepted. All of this happened before any study was run.

Finally, the replication studies were performed, and had their results published.


The consequences of failing to replicate a study

Now comes the problem: what if the second group failed to replicate the findings of the first? There are different ways of looking at this. The first person such a negative outcome affects is the original study’s author, whose reputation is at stake. Given the gravity of the situation, is the original author allowed to ask for a replication of the replication?

Second, during the replication study itself (and given the eventual negative outcome), how much of a role should the original author be allowed to play in performing the experiment, analyzing the results and interpreting them? This could swing both ways. If the original author is fully involved in the analysis, there is a conflict of interest. If the original author is not allowed to participate in the analysis, the replicating group could be biased toward a negative outcome for various reasons.

Simone Schnall, a psychology researcher at Cambridge, writes on the SPSP blog (linked to in the tweet above) that, as an author of a paper whose results have been unsuccessfully replicated and reported in the Special Issue, she feels “like a criminal suspect who has no right to a defense and there is no way to win: The accusations that come with a ‘failed’ replication can do great damage to my reputation, but if I challenge the findings I come across as a ‘sore loser.’”

People on both sides of this issue recognize the importance of replication studies; there’s no debate there. But these problems call into question how replication studies are designed, reviewed and published. They need a just-as-firm support structure, or they risk becoming personalized. Forget who replicates the replicators; it could just as well become who bullies the bullies. And in the absence of such rules, replication studies are being actively disincentivized. Simone Schnall acceded to a request to replicate her study, but the fallout could set a bad example.

In her commentary, Schnall links to a short essay by Princeton University psychologist Daniel Kahneman titled ‘A New Etiquette for Replication’. In the piece, Kahneman writes, “… tension is inevitable when the replicator does not believe the original findings and intends to show that a reported effect does not exist. The relationship between replicator and author is then, at best, politely adversarial. The relationship is also radically asymmetric: the replicator is on the offense, the author plays defense.”

In a blog post by one of the replicators, the phrase “epic fail” is an example of how things could get personalized. Note: the author of the post has since struck out the words and apologized.

To eliminate these issues, replicators could be asked to keep things specific, and various stakeholders have suggested ways to do so. For one, replicators should address the questions and answers raised in the original study, not the author and her/his credentials. Another way is to publish reports of replication results regularly, as a routine part of the scientific literature, instead of devoting a special issue to them.

This is one concern Schnall raises in her answers (in response to question #13): “I doubt anybody would have widely shared the news had the replication been considered ‘successful.’” So there’s a bias to address here: are journals likelier to publish replication studies that fail to replicate previous results? Erasing it requires publishers to actively incentivize replication studies.

A paper published in Perspectives on Psychological Science in 2012 paints a slightly different picture. It looks at the number of replication studies published in the field and pegs the replication rate at 1.07%. Despite the low rate, one of the paper’s conclusions was that among all published replication studies, most reported successful, not unsuccessful, replications. It also notes that among replication studies published since 2000, the fraction reporting successful outcomes stands at 69.4%, and the fraction reporting unsuccessful outcomes at 11.8%.

Sorry about the lousy resolution. Click on the chart for a better view.

At the same time, Nosek and Lakens concede in their editorial that, “In the present scientific culture, novel and positive results are considered more publishable than replications and negative results.”


The ceiling effect

Schnall does raise many questions about the replication, including alleging the presence of a ceiling effect. As she describes it (in response to question #8):

“Imagine two people are speaking into a microphone and you can clearly understand and distinguish their voices. Now you crank up the volume to the maximum. All you hear is this high-pitched sound (“eeeeee”) and you can no longer tell whether the two people are saying the same thing or something different. Thus, in the presence of such a ceiling effect it would seem that both speakers were saying the same thing, namely “eeeeee”.

The same thing applies to the ceiling effect in the replication studies. Once a majority of the participants are giving extreme scores, all differences between two conditions are abolished. Thus, a ceiling effect means that all predicted differences will be wiped out: It will look like there is no difference between the two people (or the two experimental conditions).”

She states this as an important reason to get the replicators’ results replicated.
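
To see the effect in miniature, here’s a toy simulation in Python (my own illustration, not Schnall’s data or analysis): two conditions whose true means differ by a full point on a 1-7 scale, once sitting mid-scale and once pushed against the scale’s ceiling.

    import random

    random.seed(1)

    def ratings(true_mean, n=10000):
        # Draw n responses around true_mean and clip them to a 1-7 scale.
        return [min(7.0, max(1.0, random.gauss(true_mean, 1.5))) for _ in range(n)]

    def mean(xs):
        return sum(xs) / len(xs)

    # Mid-scale: the one-point difference between conditions survives.
    print(mean(ratings(5.0)) - mean(ratings(4.0)))  # close to 1.0

    # Near the ceiling: clipping swallows about half of the same difference.
    print(mean(ratings(7.5)) - mean(ratings(6.5)))  # roughly 0.5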


My opinions

// Because Schnall thinks the presence of a ceiling effect is a reason to have the replicators’ results replicated, it implies there could be a problem with the method used to evaluate the authors’ hypothesis. Both the original and the replication studies used the same method, and the emergence of an effect in one but not the other implies the “fault”, if any, lies either with the replicators – for improperly performing the experiment – or with the original author – for choosing an inadequate set-up to verify the hypothesis. Therefore, one thing Schnall felt strongly about, the scrutiny of her methods, should also have been formally outlined: a replication study is not just about the replication of results but about the replication of methods as well.

// Because both papers have passed scrutiny and have been judged worthy of publication, it makes sense to treat them as individual studies in their own right instead of one being a follow-up to the other (even though technically that’s what they are), and to consider them together instead of selecting one over the other – especially in terms of the method. This sort of debate gives room for Simone Schnall to publish an official commentary in response to the replication effort and makes the process inclusive. In some sense, I think this is also the sort of debate Ivan Oransky and Adam Marcus think scientific publishing should engender.

// Daniel Lakens explains in a comment on the SPSP blog that the introduction, method and analysis plan were peer-reviewed by the original authors, not by an independent group of experts. This was termed “pre-data peer review”: a review of the methods, not the numbers. It is unclear to what extent this was sufficient, because it is only with a scrutiny of the numbers that any ceiling effect becomes apparent. While post-publication peer review can check for this, it is not formalized (at least in this case) and does little to mitigate Schnall’s situation.

// Schnall’s paper was peer-reviewed. The replicators’ paper was peer-reviewed by Schnall et al. Even if both passed the same level of scrutiny, they didn’t pass the same type of it. On this basis, there might be reason for Schnall to be involved with the replication study. Ideally, however, it would have been better if the replication had been formulated with normal peer review, eliminating Schnall’s interference. Apart from the conflict of interest that could arise, a replication study needs to be fully independent to be credible, just as the peer-review process is trusted because it is independent. So while it is commendable that Schnall shared all the details of her study, it should have been possible for her participation to end there.

// While I’ve disagreed with Kahneman on the previous point, I do agree with point #3 in his essay describing the new etiquette: “The replicator is not obliged to accept the author’s suggestions [about the replicators’ M.O.], but is required to provide a full description of the final plan. The reasons for rejecting any of the author’s suggestions must be explained in detail.” [Emphasis mine]

I’m still learning about this fascinating topic, so if I’ve made mistakes in interpretations, please point them out.


Featured image: shutterstock/(c)Sunny Forest

All ice that falls is not an avalanche

Note: Updated with quotes from Patrick Wagnon, ICIMOD

On April 18, an avalanche on Mount Everest killed 16 Nepalese guides. By the end of the month, 13 bodies had been recovered. The search for the remaining three was called off after conditions were termed too risky and difficult. On April 22, the Sherpa guides announced they would not work on the mountain as a mark of respect for their fallen colleagues. The climbing season on Mt. Everest for 2014 was closed.

The incident drew attention from around the world – as consternation aimed at the Nepalese government’s insufficient compensation and as concern over the effects of climate change. Its capacity to be a rallying point for anthropogenic warming was bolstered after another avalanche, on May 23, killed one climber and two more guides, who were scaling Yalung Kang, a sister peak of Mt. Kanchenjunga.

Except that the April 18 incident wasn’t an avalanche, according to some glaciologists, climate-change specialists and other scientists from the International Centre for Integrated Mountain Development (ICIMOD), Nepal. They issued a ‘Clarification on inaccurate media reports’ on May 23, about a week after a conference on the Hindu Kush Himalayan cryosphere closed.

They attributed the April 18 tragedy to a serac fall, and explained how it differs from an avalanche.

“An avalanche requires a snowpack of sufficient depth with a weak layer, a sufficiently steep slope, and a trigger. In contrast, the 18 April tragedy on Mount Everest was the result of a different phenomenon called serac collapse. Seracs are large blocks of ice that are formed as a result of glacier fracture patterns and motion, and can fall or topple without warning.”

Thus, most of the time, there is no relationship between climate change and avalanche/serac risk.

Little to no link

According to Patrick Wagnon, one of the authors of the media clarification and a glaciologist at ICIMOD, “Serac falls are due to glacier flow and fracturation, and glaciers move down with gravity. Avalanche are due to snow falls, slope, and snow cover stability, and gravity is the main process to trigger avalanches.”

Going by the ICIMOD clarification, then, the April 18 disaster that killed the 16 guides was caused by a serac fall, not an avalanche.

In the same statement, the scientists add, “Changes in the frequency of either avalanches or serac falls in the Everest region have not been definitively linked to climate change.” So further studies have to be done to establish the nature of the link between climate change and the frequency and magnitude of avalanches and serac falls.

Wagnon added, “As far as I know, no studies have been conducted so far to link climate change and serac falls or avalanches in the Himalayas, and very few in the Alps.”

However, Wagnon also cautioned that in some specific cases, glacier flow and the associated serac-fall risk can be modified by climate change. As an example, he referred to a case under study in the Mont Blanc area of France, where a serac barrier at 3,700 m on the Taconnaz glacier looms over the town of Chamonix.

Such a glacier will be moving slowly because its temperature keeps parts of it from melting. Since the amount of warming is sensitive to elevation, parts of glaciers at critical altitudes could warm to close to 0° C and accelerate “from 1-10 m/year to 10-50 m/year”, precipitating a serac fall. In the Alps, the critical altitude above which such falls are likelier is in the range of 3,500-3,900 m. In the Himalayas, it is around 6,000 m.

“But really, take care, it is in very few cases,” Wagnon concluded.

Not a glacial recession

At the end of the Himalayan cryosphere conference, ICIMOD published a report titled ‘Glacier status in Nepal and decadal change from 1980 to 2010 based on Landsat data (2014)’. One of its conclusions is that the total glacier area decreased by 24% between 1977 and 2010 and that, on average, glaciers were receding at 38 km² per year.

Correspondingly, ice reserves dwindled by around 129 km³ in the same period. The report’s authors note that while the impact of climate change on avalanches and serac falls is not fully known, rising local temperatures affect different physical features to different extents. As a result, they write, smaller glaciers with larger surface areas, those at lower elevations and those with gentler slopes are more susceptible to warmer climes.

At the same time, the report’s authors advised caution in the clarification. They speculate that between 1980 and 1990 the rate of ice loss could have been overestimated because snow was misclassified as glacier ice – a characterization that’s yet to be fully understood.

(Hat-tip to Siddharth Varadarajan)

The violent history of the Chelyabinsk meteorite

The Copernican
May 22, 2014

With the second-largest air burst in recorded history, a meteorite exploded over the southern Ural region of Russia in February 2013 and crashed near the city of Chelyabinsk. During its journey through Earth’s atmosphere, it underwent intense heating, eventually glowing brighter than the Sun, and blew up with a bright flash.

The accompanying shockwave damaged over 7,000 buildings and injured 1,500 people. The explosion disintegrated the rock into fragments.

While analyzing some of these fragments, scientists from Tohoku University, Japan, detected the presence of a mineral called jadeite. Jadeite is a major constituent of jade, the hard rock that has been used since prehistoric times for fashioning ornaments. The mineral forms only under extreme pressure and temperature.

“Generally, jadeite is not included in meteorites as a primary mineral,” said Shin Ozawa, a graduate student at Tohoku University and lead author of his team’s paper published in Scientific Reports on May 22.

The implication is that the Chelyabinsk meteorite, originally an asteroid, could have had a violent past, one that subjected it to immense heating and compression.

Piecing evidence together

“The jadeite reported in our paper is considered to have crystallized from a melt of sodium-rich plagioclase under high-pressure and high-temperature conditions caused by an impact,” Ozawa explained. Plagioclase (NaAlSi3O8) is a silicate mineral found in meteorites as well as terrestrial rocks.

The impact would have been in the form of the Chelyabinsk asteroid – or its parent body – colliding with another rock in space.

To arrive at distinct estimates of how this collision could have occurred, Ozawa and his colleagues connected two bits of evidence and solved them like an algebraic equation. The equations in this case are called the Rankine-Hugoniot relations.

First, they observed that the jadeite was embedded in black seams in the rock called shock-melt veins. “They are formed by localized melting of rocks probably due to frictional heat, accompanied with shear movements of material within the rocks during an impact,” Ozawa explained.

The molten rock then solidifies under high pressure. The amount of time for which this pressure is maintained – i.e. the duration of the impact – was calculated from how long it would have taken a shock-melt vein of that composition to solidify.

Second, they knew the conditions under which jadeite forms, which require a certain minimum impact pressure that, in turn, is related to the speed at which the two bodies smashed into each other.

Based on this information, Ozawa reasons that the Chelyabinsk meteorite – or its parent body – could have collided with another space-rock “at least 150 metres in diameter” at 0.4 to 1.5 km/s.

The impact itself could have occurred around or after 290 million years ago, according to a study published in Geochemistry International in 2013, titled ‘Analytical results for the material of the Chelyabinsk meteorite’. The same study reports that the meteorite is 4.4-4.6 billion years old.

Collision course

Ozawa’s results aren’t the end of the road, however, in understanding the meteorite’s past – a 4-billion-year journey that ended on the only planet known to harbor life. In fact, nobody noticed the rock hurtling toward our planet until it entered the atmosphere and started glowing.

Earth has been subjected to many asteroid crashes because of its proximity to the asteroid belt between Mars and Jupiter. In this region, according to Ozawa, asteroids exist in a stable state. Violent collisions with other asteroids could be one of the triggers that set these rocks on a path toward Earth.

Ozawa speculated that such events wouldn’t be uncommon. A report released by the B612 Foundation in April this year attests to that: it states that asteroids caused 26 nuclear-scale explosions in Earth’s atmosphere between 2000 and 2013. As The Guardian wrote, “the evidence was a sobering reminder of how vulnerable the Earth was to the threat from space”.

The difficulty in detecting the Chelyabinsk asteroid was compounded by the fact that it came from the direction of the Sun. “If it had approached the Earth from a different direction,” Ozawa added, “its detection might have been easier.”

Such collisions cause essentially random upheavals in our ability to predict when one of these rocks might get too close. By studying their past, scientists can piece together when and how these collisions occur, and get a grip on the threat levels.

Dude, where's my comma?

(Update: Includes Gopalkrishna Gandhi’s reply.)

Gopalkrishna Gandhi’s lead in The Hindu, ‘An open letter to Narendra Modi’, was a wonderful read – as if from the Keeper of the Nation’s Conscience to the Executor of the Republic’s Will. I’m not interested in scrupulous political analyses, and Gandhi’s piece sat well with that, explaining so lucidly what’s really at stake as Modi gears up to become India’s 14th Prime Minister, without fixating on big words – not that that’s wrong, but they tend to throw me off.

However, Gandhi’s piece does have an awful number of commas in it, and IMO they hamper the flow. Sample this:

Why is there, in so many, so much fear, that they dare not voice their fears?

The piece is 1,469 words long, has 82 sentences (about 17.91 words per sentence) and contains 140 commas. That means a lot of sentences have at least one comma. In fact, there are only 11 sentences in which a comma appears exactly once; every other sentence with a comma has at least two (excluding the opening and closing addresses).

Overall, 13 sentences have no commas. Remove them and the average number of commas per sentence comes to 2.02. Factor in the sentences with only one comma and you get 2.22 – the average number of commas in each sentence with at least two of them.
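
(The counting itself is easy to automate. Here’s a rough Python sketch of the sort of tally I did; the regex sentence-splitter is crude, so treat its output as an approximation.)

    import re

    def comma_stats(text):
        # Split on sentence-ending punctuation followed by whitespace (crude).
        sentences = [s for s in re.split(r'(?<=[.?!])\s+', text) if s.strip()]
        counts = [s.count(',') for s in sentences]
        with_commas = [c for c in counts if c > 0]
        return {
            'sentences': len(sentences),
            'commas': sum(counts),
            'no_comma': counts.count(0),
            'one_comma': counts.count(1),
            'avg_in_comma_sentences': sum(with_commas) / len(with_commas) if with_commas else 0.0,
        }

    print(comma_stats("Why is there, in so many, so much fear, that they dare not voice their fears? Sample this."))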

This means almost 71% of the sentences in the piece possess a sub-clause. I think that makes for clunky reading. Many people, especially those writing in Indian newspapers, have a tendency to use the comma to effect a pause, mostly for dramatic effect, but the comma serves a bigger purpose than that. It breaks the sentence down into meaningful nuclear bits. For example, see the italicized bit in the sentence two lines above or below. That’s a sub-clause demarcated by commas. Remove it, bring the two ends together, and the rest of the sentence still makes sense.

Ideally, the number of commas should be comparable to the number of sentences, and definitely shouldn’t differ by an order of magnitude – unless, of course, you’re composing something especially tricky, like this sentence. If you find you can’t avoid using too many sub-clauses, it could mean you’re not spelling things out simply enough.

If your sub-clauses are dominated by words like ‘however’ or ‘albeit’, it could mean you’re making many assumptions while constructing your arguments. If there are too many non-essential relative clauses, it could mean you’re trying to pack in too much information (usually in the form of adjectives).

In short, this Feynman episode sums it up:

Richard Feynman, the late Nobel laureate in physics, was once asked by a Caltech faculty member to explain why spin one-half particles obey Fermi-Dirac statistics. Rising to the challenge, he said, “I’ll prepare a freshman lecture on it.” But a few days later he told the faculty member, “You know, I couldn’t do it. I couldn’t reduce it to the freshman level. That means we really don’t understand it.”

Of course, these are just my thoughts, and most of them are the sort of things I have to look out for while editing The Hindu Blogs. I’d use commas only when absolutely necessary because, especially when frequent, they don’t just give pause but enforce it.

Update: Gopalkrishna Gandhi replied to my piece. Very sweet of him to do so…

Absolutely delighted and want to tell him that I find his comment as refreshing as a shower in lavender for it cures me almost if not fully of my old old habit of taking myself too seriously and writing as if I am meant to change the world and also that I will be very watchful about not enforcing any pauses through commas and under no circumstances on pain of ostracism for that worst of all effects namely dramatic effect and will assiduosuly [sic] follow the near zero comma if not a zero comma rule and that I would greatly value a meet up and a chat discussing pernicious punctuation and other evils.

… but what a troll!

Metal, flesh and monochrome

Sunday Magazine
May 18, 2014

Hans Rudolf Giger, the Swiss artist who conceived of the alien xenomorph in Ridley Scott’s Alien (1979), died on May 12 at the age of 84 in Zurich. Here was an artist who was not awkward, harboring no pretense of subtlety. Giger was an artist suckling on a vein of psychotic posthumanism like a fat, usurious pup. His influence on various artists and art-forms cannot be overstated. From Alejandro Jodorowsky to Ibanez, from Dune to Doom, from gamers to tattoo aficionados, Giger’s biomechanical fusion of metal, flesh and insipid monochrome was the perfect picture of the macabre.

It would be wrong to remember him for just Alien. He was also the author of dozens of paintings, sculptures and lithographs, and perhaps his most profound accomplishment was the surgical depiction of posthuman fetishes. His 1977 book Necronomicon, a compendium of his pictures, was breathlessly celebrated for the psychiatric grimoire that it was. At the same time, it was one of the first complete impressions of unhuman lifeforms – beaten in time only by H.P. Lovecraft’s Cthulhu Mythos from the 1930s – where creatures aspired not to be ape-like, not to present distended limbs in an effort to approximate familiarity, but were beings in their own right.

What better example of this idea than Necronom IV and V, the conceptual beings that inspired the xenomorph? The Necronom had no eyes, with only the mouth to give its face any semblance of being facial. At the same time, the way Giger assembled these beings into an iconoclastic portrayal of sanity – such as with Vlad Tepes (1978) and the dharmic horror that was Goho Doji (1987) – drew forth chills, sleepless nights and confused arousal from very-human adolescents. The faces in his paintings weren’t screaming. They were staring even as they were penetrated by translucent metal proboscises. They were existing for pain and confusion.

When Ridley Scott arrived for his first meeting at 20th Century Fox for Alien, he was shown Giger’s Necronomicon. “I took one look at it,” he said, “and I’ve never been so sure of anything in my life.”

To look at them was to realize that the composition of the human psyche was independent of the human body – that the human mind was frightened not by the disfiguration of familiarity, such as the image of a mangled corpse or someone jumping out from behind the shower curtains, but by its reconfiguration. Giger put fear and gratification where they didn’t belong, and the product always had a sheen of otherworldly Gödelian inaccessibility. That even alien constructions could inspire empathy and distress was a disturbing revelation, if only for me. And no, I have not made it as a normal adult.

While cinema may have moved on from the genius of Giger – the best collaboration being The Last Megalopolis (1988) and the last being Species (1995) – he did not suffer the decline in prolificity or skill that artists are wont to experience after tinsel-town toss-outs. He seemed not to work toward the shock factor the screen is adept at reproducing, because his success lay in his ability to evoke and, in parallel, inhere humankind’s tendency for abuse, a chronically relevant motif. Giger’s sculpture Birth Machine (1967), on display in the permanent museum dedicated to him in Gruyeres, Switzerland, stimulates this sensation of existential vertigo, like the thematically similar Doodlebug (1997) by Nolan. Better yet, consider Aleph (1972), or Li I (1974), dedicated to Li Tobler, his partner from ’66 until she killed herself in ’75 – both potent with occultist interpretations.

Such images, or rather experiments with the triggers of strangeness, populate the breadth of his work. Growing up in the Swiss town of Chur, where his father ran a pharmacy, Giger admitted to having been fascinated by the dark alleys between buildings that he could see from his room’s window. He also had serial nightmares, and took to art first as therapy. No wonder, then, that his work is effortlessly visceral, drawing as it does on the inviting darkness that pervaded Chur’s alleyways.

Long live H.R. Giger.

'Free Indian science': Responses, rebuttals and retrenchments

In the April 3 issue of Nature, Joseph Mathai and Andrew Robinson published a Comment on the afflictions of scientific research in India – and found the interference of bureaucracy to be chief among them. Most of the writers’ concerns were valid, and kudos to them for highlighting how it is the government mismanaging science in India, not the institutes mismanaging themselves. In the May 8 issue of the same journal, three letters in response were published under Correspondence. They brought to light two more issues, just as important if not as immense, both symptomatic of mismanagement that appears to border on either malevolence or stupidity, depending on your bent of mind.

Biswa Prasun Chatterji from St. Xavier’s, Mumbai, wrote about the “disastrous” decoupling of research and education in the country, mainly a result of the research institutions newly created in the 1940s and 1950s. These institutions drew bright, young students away from universities, which were parched of funds as a result. The research bodies, on the other hand, fell prey to increasing bureaucratic meddling. Chatterji then points to an editorial in the November 1998 issue (vol. 75) of Current Science by P. Balaram, now the director of the Indian Institute of Science. In the piece, Prof. Balaram describes C.V. Raman as having been a firm believer in universities, not separate entities, being the powerhouses of research.

The latest issue of ‘Current Science’ (May 10, 2014)

In 1932, C.V. Raman helped found Current Science after recognizing the need for an Indian science journal. In one of its first issues appeared an editorial titled ‘Retrenchment and Education’, in which the author, likely Prof. Raman himself, lays out the importance of having an independent body to manage scientific research in India. Because of its relevance to the issues at hand, I’ve reproduced it from the Current Science archives below.

The second letter’s contents follow from the first’s. Dhruba Saikia, of Cotton College State University (Assam), and Rowena Robinson, of IIT-Guwahati, ask for the country’s university teaching to be overhauled. Many professors I’ve spoken to ask for the same thing, but their hope turns to amusement once they realize the problem has been left to fester for so long that the solution requires fixing our entire elementary education system. Moreover, after the forking of education and research described in Chatterji’s letter, it seems universities were left to fend for themselves once their best teaching resources were drawn away by the government. Here is a paragraph from Saikia and Robinson’s letter:

Hundreds of thousands of students graduate from Indian universities each year. However, our own experience in selecting students indicates that many are ignorant of the basics, with underdeveloped reasoning skills and an inability to apply the knowledge they have.

There was also a third letter, this one critical of the Mathai-Robinson piece. Shobhana Narasimhan, a theoretical physicist from JNCASR, Bangalore, says she is free to pursue “curiosity-driven science” and doesn’t have to spend as much time writing grant proposals as scholars in the West do, so Mathai and Robinson are wrong on that front. At the same time, it seems from her letter that the things she has access to, which her presumably better-equipped Occidental colleagues don’t, could also be the result of a lack of control over research agendas and funding in India. In short, she might be free to pursue topics her curiosity moves her toward because the authorities don’t care (yes, this is a cynical point of view, but I think it must be considered).

So I emailed her and she replied.

“The quick answer to your question is I don’t think more overview of research funding is the answer to improving Indian science. My colleagues abroad spend more time writing proposals to get funding than actually carrying out research… I don’t think that is a good situation. Similarly getting tenure at an American university often depends on how much money you brought in. We don’t have such a situation (yet) and I think that is good.

We shouldn’t blindly copy foreign systems because they are by no means perfect. [Emphasis mine]

I have been on grant committees and I found good proposals always got funded. But I do agree that there is often much dead wood in many Indian departments, but that can also happen abroad.

I am aware that I may be speaking from a position of privilege since I work at one of the better funded institutes. Also as a theorist, I do not need much equipment.”


I would say Narasimhan’s case is the exception rather than the rule. Although I don’t have a background in researching anything (except for my articles and food prices), two points have been established by general consensus:

  1. The Rajiv Gandhi-era promise of funding for scientific R&D to the tune of 2% of GDP is yet to materialize. The fixation on this number ranges from the local – unpaid students and ill-equipped labs – to the global – keeping up with investments in other developing countries.
  2. Even if there is funding, there is no independent body staffed with non-governmental stakeholders to decide which research groups get how much, leading to arbitrary research focus.

If prey can eat predators, we're ignoring evolution

The half-century-old mathematics that ecologists use to understand how predator and prey populations rise and fall has received a revamp. Two scientists from Georgia Tech did it by crediting evolution for what it is but is not commonly thought to be: fast, not slow.

The scientists, Joshua Weitz and Michael Cortez, applied a branch of mathematics called fast-slow dynamical systems theory to model how two populations could vary over time if they are evolving together. Until now, this has been the exclusive demesne of the Lotka-Volterra equations, derived by Alfred Lotka and Vito Volterra in the early 20th century. On a graph, these equations are visually striking for how they show predator and prey numbers rising and falling in continuous cycles.

For example, cheetahs eat baboons. In an ecosystem good for baboonkind, baboons will thrive. Cheetahs will eat them and thrive. As the number of baboons increases, so will the number of cheetahs. With too many cheetahs, the number of baboons will decline. As a result, the number of cheetahs will also decline. But the ecosystem is good for baboons. So after the number of cheetahs has declined, more baboons will appear. As the number of baboons increases, so does the number of cheetahs. And so on.
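
For reference, the classical cycles are easy to reproduce. This is a minimal Python sketch of the textbook Lotka-Volterra equations; the parameter values are arbitrary choices for illustration, not the Cortez-Weitz model.

    import numpy as np
    from scipy.integrate import odeint

    def lotka_volterra(state, t, a=1.0, b=0.1, c=1.5, d=0.075):
        prey, predators = state
        dprey = a * prey - b * prey * predators             # prey reproduce and get eaten
        dpredators = -c * predators + d * prey * predators  # predators starve or feast
        return [dprey, dpredators]

    t = np.linspace(0, 30, 3000)
    trajectory = odeint(lotka_volterra, [10.0, 5.0], t)
    # In the classical solution, peaks in trajectory[:, 0] (prey) always
    # precede peaks in trajectory[:, 1] (predators) -- the ordering that
    # coevolution can reverse.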

Image: Wikimedia Commons

However, the Lotka-Volterra equations make several assumptions to get this far, many of which oversimplify natural conditions to the point that they no longer seem natural. Chief among them is the neglect of genetic variation. Animals do possess it, in the field as in the laboratory, but the Lotka-Volterra equations assume the differences arising from it don’t exist. As a result, while “predators and their prey differ in their abilities [to] acquire food or avoid capture,” the equations simply overlook such traits, said Michael Cortez, a postdoc at Georgia Tech and first author on the paper describing the revamped equations. It was published in the Proceedings of the National Academy of Sciences on May 5.

Turned on its head

In fact, Cortez and his postdoctoral mentor Joshua Weitz were particularly motivated by three studies, two from 2001 and one from 2011, whose findings gave rise to absurd implications if Lotka-Volterra reasoning was applied. The equations – as depicted in the chart above – require the prey population to peak first, followed by the predator population. The studies from 2001 and 2011 investigated gyrfalcon-rock ptarmigan, mink-muskrat and phage-V. cholerae pairs, and found the opposite: the predator population peaked first, before the prey population did.

So are the prey eating the predators? “This is not the case,” Cortez explained. According to him, the reversal in peaking is driven by fluctuations in the abundance of different types of prey. One type of prey could be more or less able to avoid capture, while one type of predator could be more or less able to capture prey. Thus, these two kinds of animals are developing distinct genetic traits at the same time, i.e. coevolving.

The difference between the Lotka-Volterra and the coevolution cycles.
Image: Joshua Weitz

To understand how coevolution influences the numbers of predators and prey, Cortez and Weitz applied fast-slow dynamical systems theory. The ‘fast’ applies to the change in the number of types of predator or prey; the ‘slow’, to how the population as a whole changes. Between them, said Cortez, “I was able to break the reverse cycles into pieces and study each piece of the cycle individually, allowing me to understand how coevolution was causing the reverse cycles.”

“The most surprising and exciting prediction from our work is that co-evolution between predators and prey can reverse this ordering, yielding cycles where peaks in prey abundance follow peaks in predator abundance,” Weitz added.

A different fast-slow

While this is not the first study to investigate the effects of evolution on changing populations, it is the first to accommodate fast rates of evolution, i.e. evolutionary changes that occur within a few generations. As a result, its implications are far-ranging, too, for the Lotka-Volterra equations were never restricted to ecology even though they were inspired by it. One other area of science in which a system can go back and forth between two stable states is chemistry, with its chemical reactions.

However, just as in ecology, the precise mathematics that governs such systems is computationally intensive. On May 6, researchers from Oxford University published a paper in The Journal of Chemical Physics explaining how the mathematics could be further simplified, making such systems easier to model on computers. While this team also considers fast-slow systems, the designation is different. The Cortez-Weitz model compared how rapid evolutionary changes (fast) affect populations (slow). The ‘Oxford model’, on the other hand, compares how changes in the sources of food (fast) affect the time taken for predators to become extinct (slow).

This image shows the evolution of a prey (blue line) and predator (green line) system in three parameter regimes: from the low extinction risk in Regime 1 to the high extinction risk in Regime 3. Credit: M. Bruna/University of Oxford

To demonstrate, Maria Bruna, the first author on the paper, explained that their system considers whale and plankton populations. Plankton is an important food source for whales. While whales live and function over many years, plankton blooms can be fickle, changing their yield of food on a daily basis. However, some environmental conditions can push plankton blooms to take many years to shift their yield. “In such cases, the whales will ‘care’ about these metastable transitions in plankton, since they notice the changes in plankton abundance on a timescale which is relevant to them,” she said.

Weitz expressed interest in this work: “It would be very interesting to see what happens when their method is applied to more complex contexts, including those in which populations are comprised of two or more variants.”


References:

Cortez MH, & Weitz JS (2014). Coevolution can reverse predator-prey cycles. Proceedings of the National Academy of Sciences of the United States of America PMID: 24799689

Bruna M, Chapman SJ, & Smith MJ (2014). Model reduction for slow-fast stochastic systems with metastable behaviour. The Journal of chemical physics, 140 (17) PMID: 24811625

The magnetic sky

On May 6, the team behind the now-inoperative Planck space telescope released a map of the magnetic field pervading the Milky Way galaxy.


Titled ‘Milky Way’s Magnetic Fingerprint’, the map incorporates two textures to visualize the magnetic field’s dual qualities: striations for direction and shading for intensity.

Planck mapped the field by measuring the polarization of light. Light is a wave (apart from being a particle, too). As a wave, it is composed of electric and magnetic fields vibrating perpendicular to each other. Overall, however, the two fields could vibrate in any direction. When they vibrate in one particular direction, the light is said to be polarized.

Such light is emitted by dust grains strewn in the space between the Milky Way’s stars. As Dr. Chris Tibbs, an astrophysicist at Caltech, told me over Twitter, “Dust grains absorb light from stars, which heats up the grains, and [they] then radiate away this heat producing the emission.”

The grains are oriented along the Milky Way’s magnetic field, so the light they emit is polarized along the field, too. Because the grains are so small, the light they emit is of very low energy (i.e. very long wavelength), so it takes a powerful telescope like Planck, perched in its orbit around the Sun, to study it.

It used a technique that’s the opposite of polarized sunglasses, which use filters to eliminate polarized light and reduce glare. The telescope, on the other hand, used filters to eliminate all but the polarized light, and then studied it to construct the map shown above.

As the astrophysicist Katie Mack pointed out on her Facebook page, the Planck team has carefully left out of this image the magnetic fields in the region of the sky studied by the BICEP2 telescope at the South Pole which, on March 17, announced the discovery of evidence pointing to cosmic inflation. According to Mack,

The amount of polarized dust emission in the region where BICEP2 made its observation is unknown, but if it turns out to be a lot, it could mean that the signal BICEP2 saw was not entirely primordial.

This means we’ll have to wait until the end of the year to know if the BICEP2 announcements were all they were made out to be.