Comparing Prime Minister Narendra Modi with former prime minister Atal Bihari Vajpayee, Union Science and Technology Minister Harsh Vardhan on Wednesday said both have a similar “DNA” and share a passion for scientific research.
I’m sure I’m interpreting this too literally but when the national science minister makes a statement saying two people share similar DNA, I can’t help but wonder if he knows that the genome of any two humans is 99.9% the same. The remaining 0.1% accounts for all the difference. Ergo, Prime Minister Narendra Modi has DNA similar to Rahul Gandhi, me and you.
That said, I refuse to believe a man who slashed funding for the CSIR labs by 50% (and asked them to make up for it – a princely sum of Rs 2,000 crore – in three years by marketing their research), who claims ancient Indians surgically transplanted animal heads on humans, whose government passively condones right-wing extremism fuelled by irrational beliefs, whose ministries spend crores of rupees on conducting biased investigations of cow urine, and whose bonehead officials have interfered in the conduct of autonomous educational institutions even knows how scientific research works, let alone respects it.
Vardhan himself goes on to extol Vajpayee as the man who suffixed ‘jay vigyan‘ (‘Hail science’) to the common slogan ‘Jay jawan, jay kisan‘ (‘Hail the soldier, hail the farmer’) and, as an example of his contribution to the scientific community, says that the former PM made India a nuclear state within two months of coming to power. Temporarily setting aside the fact that it takes way more than two months to build and test nuclear weapons, it’s also disturbing that Vardhan thinks atom bombs are good science.
Additionally, Modi is like Vajpayee according to him because the former keeps asking scientists to “alleviate the sufferings of the common man” – which, speaking from experience, is nicespeak for “just do what I tell you and deliver it before my term is over”.
K. VijayRaghavan, the secretary of India’s Department of Biotechnology, has written a good piece in Hindustan Times about how India must shed its “intellectual colonialism” to excel at science and tech – particularly by shedding its obsession with the English language. This, as you might notice, parallels a post I wrote recently about how English plays an overbearing role in our lives, and particularly in the lives of scientists, because it remains a language many Indians don’t have access to in their daily lives. Having worked closely with the government in drafting and implementing many policies related to the conduct and funding of scientific research in the country, VijayRaghavan is able to take a more fine-grained look at what needs changing and whether that’s possible. Most hearteningly, he says it is – if only we have the will to change. As he writes:
Currently, the bulk of our college education in science and technology is notionally in English whereas the bulk of our high-school education is in the local language. Science courses in college are thus accessible largely to the urban population and even when this happens, education is effectively neither of quality in English nor communicated as translations of quality in the classroom. Starting with the Kendriya Vidyalayas and the Navodaya Vidyalayas as test-arenas, we can ensure the training of teachers so that students in high-school are simultaneously taught in both their native language and in English. This already happens informally, but it needs formalisation. The student should be free to take exams in either language or indeed use a free-flowing mix. This approach should be steadily ramped up and used in all our best educational institutions in college and then scaled to be used more widely. Public and private colleges, in STEM subjects for example, can lead and make bi-lingual professional education attractive and economically viable.
Apart from helping students become more knowledgeable about the world through a language of their choice (for the execution of which many logistical barriers spring to mind, not the least of which is finding teachers), it’s also important to fund academic journals that allow these students to express their research in their language of choice. Without this component, they will be forced to fall back on English, which is bound to be counterproductive to the whole enterprise. This form of change will require material resources as well as a shift in perspective that could be harder to attain. Additionally, as VijayRaghavan mentions, there also need to be good quality translation services for research in one language to be expressed in another so that cross-disciplinary and/or cross-linguistic tie-ups are not hampered.
How much I’ve missed writing these posts since Cassini passed away. Unsurprisingly, it’s after the probe’s demise that we’ve really begun to realise how much of Cassini’s images and data we were consuming on a daily basis, all of which is now gone. The steady stream of visuals of Saturn’s rings, bands, storms and panoply of moons has dried up – replaced, thanks to Juno, by Jupiter’s rings, bands, storms and panoply of moons. Nonetheless, one entire area of the Solar System has been darkened in my imagination. Until the next full mission to the Saturnian system (although nothing of the kind is in the works), we’ll have to make do with what Cassini data trickles down through NASA’s and ESA’s data-processing sieves.
One such is a new study about the temperature of the air high above Titan’s poles. Before Cassini’s death-dive into Saturn, the probe spent some time studying the moon’s polar atmosphere. Researchers from the University of Bristol who obtained this data noticed something odd: the part of the atmosphere over Titan’s poles began to develop a warm spot in late 2009, but by 2012, it had become a ‘cold spot’. By 2015, the temperature at about 550 km above the surface had dropped to 120 K (a little below the temperature at which supercooled water turns into a glass).
On Earth, a warm spot forms over the poles for two principal reasons: the way Earth’s winds circulate around the planet and the presence of carbon dioxide. During winter, air over the corresponding hemispheric pole sinks, becomes compressed and heats up. Moreover, the carbon dioxide present in the air also emits the heat it has trapped in its chemical bonds.
In 2012, astronomers using Cassini data had found that Titan also exhibits a wind circulation process that is moon-wide. It can be understood as Titan having two atmospheres, or layers, one on top of the other. In the lower atmosphere, there are three Hadley cells; each cell represents a distinct air circulation system wherein air rises for 10 km or so near the equator, moves towards the subtropics, sinks back down and returns to the equator along the surface. In the upper atmosphere, air moves between the two poles directly, in a single, global Hadley cell.
Now, remember that Titan’s distance from the Sun means that one Titan-year is 29.5 Earth-years, that each Titanic season lasts over seven Earth-years and that seasonal shifts are much slower on the moon as a result. However, in 2012, scientists studying Cassini data found that the air over one of Titan’s poles was sinking – as air does over Earth’s winter pole – unusually quickly: according to Nick Teanby, a researcher at the University of Bristol and the lead author of the latest study, the rate of subsidence increased from 0.5 mm/s in January 2010 to 1.5 mm/s in June 2010. In other words, it was a shift that, unlike the moon’s seasons, happened rapidly (in just 12 Titanic days).
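These figures are easy to sanity-check. A quick sketch, using only the numbers quoted above plus the standard count of seconds in an Earth-day:

```python
# Sanity-checking the Titan timescales quoted above.

TITAN_YEAR_EARTH_YEARS = 29.5
SECONDS_PER_EARTH_DAY = 86400

# Four seasons per Titan year:
season_earth_years = TITAN_YEAR_EARTH_YEARS / 4
print(f"One Titanic season ~ {season_earth_years:.1f} Earth-years")  # ~7.4

# How far the polar air sinks per Earth-day at the two quoted rates:
for rate_mm_s in (0.5, 1.5):
    metres_per_day = rate_mm_s / 1000 * SECONDS_PER_EARTH_DAY
    print(f"{rate_mm_s} mm/s ~ {metres_per_day:.0f} m of subsidence per Earth-day")
```

So by mid-2010 the polar air column was dropping by well over 100 metres every Earth-day – fast by any measure, and startlingly so against seasons that each last over seven Earth-years.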
The same study concluded that Titan’s atmosphere was thicker than previously thought because trace gases like ethane, hydrogen cyanide, acetylene and cyanoacetylene were found to be produced at an altitude of over 500 km over the poles, thanks to photochemical reactions induced by ultraviolet radiation and high-energy electrons streaming in from the Sun. These gases would then subside into the lower atmosphere over the polar region – which brings us to the latest study. It says that, unlike carbon dioxide, which warms Earth’s atmosphere, these (once-)trace gases actually cool Titan’s atmosphere, resulting in the dreadfully cold spot over its poles. They also participate in the upper Hadley cell circulation.
When I stay over at a friend’s place whenever I come to Delhi, I try to help around the house. But more often than not, I just do the dishes – often a lot of dishes. One item I’ve always had trouble cleaning is the strainer, whether a small tea strainer or a large but fine sieve, because I can never tell if the multicoloured sheen I’m seeing on the wires is a patch of oil, liquid soap or something else. The fundamental problem is that these items are susceptible to the quirks of the wave nature of light, as a result of which their surfaces display an effect called goniochromism, also known as iridescence.
At first (and over 12 years after high school), I suspected the wires on the sieve were acting as a diffraction grating. This is a structure that has a series of fine and closely spaced ridges on the surface. When a wave of light strikes this surface, the ridges scatter different parts of the wave in different directions. When these waves interact with each other on the other side, they interfere with each other constructively or destructively. A constructive interference produces a brighter band of colour; a destructive interference produces a darker band. How the wave becomes scattered is a function of its frequency: the lower the frequency (or redder the colour), the more the wave is bent around a grating.
As a result, white and continuous light appears to break down into its constituent colours when passed through a diffraction grating. But it must be noted that a useful diffraction grating used in a visible-light experiment has something like 4,000-6,000 ridges every centimetre. The width of each ridge has to be comparable to the wavelength of visible light, because only then can it scatter that portion of light. On the other hand, the sieve I was holding appeared to have only 6-8 ridges every centimetre, so the structure itself couldn’t have been what was producing the sheen.
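The arithmetic behind that dismissal is easy to sketch with the grating equation, d·sin θ = mλ. A minimal check for the first-order angle (the 550 nm wavelength is an assumed representative value for green light, not a figure from above):

```python
import math

# Why a kitchen sieve can't act as a diffraction grating:
# grating equation d * sin(theta) = m * lambda, taken at first order (m = 1).

wavelength = 550e-9  # metres; assumed mid-visible (green) light

def first_order_angle_deg(ridges_per_cm):
    d = 1e-2 / ridges_per_cm  # ridge spacing in metres
    return math.degrees(math.asin(wavelength / d))

print(f"Lab grating (5,000 ridges/cm): {first_order_angle_deg(5000):.1f} deg")
print(f"Kitchen sieve (7 ridges/cm):   {first_order_angle_deg(7):.4f} deg")
```

A lab grating throws the first-order green band out to roughly 16°; the sieve’s wires, spaced thousands of times wider than a wavelength, bend it by a few hundredths of a degree – far too little to separate colours visibly.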
Goniochromism, or iridescence, is caused when two transparent or semi-transparent films – like liquid soap atop water – reflect the incident light multiple times. In fact, this is one type of iridescence, called thin-film interference. Here, imagine a thin layer of soap on the surface of a thin layer of water, itself sitting on the surface of a vessel you’re cleaning. (With a strainer, the water-soap liquid forms meniscuses between the wires.) When white light strikes the soap layer, some of it is reflected out and some is transmitted. The transmitted portion then strikes the surface of the water layer: some of it is sent through while the rest is reflected back out.
When the light reflected by each of the two layers interacts, their respective waves can interfere either constructively or destructively. Depending on the angle at which you’re viewing the vessel, bright and dark bands of light will be visible. Additionally, the thickness of the soap film also decides which frequencies are intensified and which become subdued in this process. The total effect is for you to see a rainbow-esque pattern of undulating brightness on the vessel.
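To get a feel for that thickness-dependence, here’s a minimal sketch of the thin-film condition at normal incidence, 2nt = (m + ½)λ, which accounts for the half-wave phase flip at the air-film boundary. The refractive index and film thickness below are assumed illustrative values, not measurements:

```python
# Which visible wavelengths a thin soap film brightens, at normal incidence.
# Constructive-interference condition (with the half-wave phase flip at the
# air-film boundary): 2 * n * t = (m + 1/2) * lambda.
# n and t are assumed illustrative values, not from the post.

n = 1.33      # refractive index of soapy water (assumed)
t = 550e-9    # film thickness in metres (assumed)

bright = []
for m in range(10):
    lam = 2 * n * t / (m + 0.5)   # wavelength satisfying the condition
    if 380e-9 <= lam <= 740e-9:   # keep only the visible band
        bright.append(round(lam * 1e9))

print("Brightened wavelengths (nm):", bright)
```

For this particular thickness, only a couple of visible wavelengths are reinforced; vary `t` by a few tens of nanometres – as a draining film does from spot to spot – and the reinforced colours shift, which is exactly why the sheen undulates across the surface.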
So herein lies the rub. Either effect, although the second more than the first, produces what effectively looks like an oily sheen on the strainer in my hand no matter how many times I scrub it with soap and run it under the water. And ultimately, I end up doing a very thorough job of it if there was no oil on the strainer to begin with – or a very bad one if there was oil on it but I’ve let it be assuming it’s soap residue. It’s a toss-up… so I think I’ll just follow my friend C.S.R.S’s words: “Just rub it a few times and leave it.”
It’s finally happening. As the world turns, as our little lives wear on, gravitational wave detectors quietly eavesdrop on secrets whispered by colliding blackholes and neutron stars in distant reaches of the cosmos, no big deal. It’s going to be just another day.
On November 15, the LIGO scientific collaboration confirmed the detection of the fifth set of gravitational waves, made originally on June 8, 2017, but announced only now. These waves were released by two blackholes of 12 and seven solar masses that collided about a billion lightyears away – a.k.a. about a billion years ago. The combined blackhole weighed 18 solar masses, so one solar mass’s worth of energy had been released in the form of gravitational waves.
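The energy bookkeeping in that last sentence can be sketched with E = mc² (the solar-mass and light-speed constants are standard values):

```python
# Rough energy budget of the merger described above: ~1 solar mass of the
# system's mass-energy was radiated away as gravitational waves (E = m c^2).

M_SUN = 1.989e30   # kg, standard value
C = 2.998e8        # m/s, standard value

m1, m2, m_final = 12, 7, 18              # solar masses, from the detection
radiated_kg = (m1 + m2 - m_final) * M_SUN  # ~1 solar mass
energy_joules = radiated_kg * C**2

print(f"Energy radiated: ~{energy_joules:.2e} J")
```

That works out to roughly 1.8 × 10⁴⁷ joules, released over a fraction of a second – which is why even at a billion lightyears the ripples remain detectable.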
The announcement was delayed because the LIGO teams had to work on processing two other, more spectacular detections. One of them involved the VIRGO detector in Italy for the first time; the second was the detection of gravitational waves from colliding neutron stars.
Even though the June 8 detection is run-of-the-mill by now, it is unique because it involves the lowest-mass blackholes eavesdropped on thus far by the twin LIGO detectors.
LIGO’s significance as a scientific experiment lies in the fact that it can detect collisions of blackholes with other blackholes. Because these objects don’t let any kind of radiation escape their prodigious gravitational pulls, their collisions don’t release any electromagnetic energy. As a result, conventional telescopes that work by detecting such radiation are blind to them. LIGO, however, detects gravitational waves emitted by the blackholes as they collide. Whereas electromagnetic radiation moves over the surface of the spacetime continuum and is thus susceptible to being trapped in blackholes, gravitational waves are ripples of the continuum itself and can escape from blackholes.
Processes involving blackholes of a lower mass have been detected by conventional telescopes because these processes typically involve a light blackhole (5-20 solar masses) and a second object that is not a blackhole but usually a star. Mass emitted by the star is siphoned into the blackhole, and this movement releases X-rays that can be spotted by space telescopes like NASA’s Chandra.
So LIGO’s June 8 detection is unique because it signals a collision involving two light blackholes, until now the demesne of conventional astronomy alone. This also means that multi-messenger astronomy can join in on the fun should LIGO detect a collision of a star and a blackhole in the future. Multi-messenger astronomy is astronomy that uses up to four ‘messengers’, or channels of information, to study a single event. These channels are electromagnetic, gravitational, neutrino and cosmic rays.
The detection also signals that LIGO is sensitive to such low-mass events. The three other sets of gravitational waves LIGO has observed involved black holes of masses ranging from 20-25 solar masses to 60-65 solar masses. The previous record-holder for lowest mass collision was a detection made in December 2015, of two colliding blackholes weighing 14.2 and 7.5 solar masses.
One of the bigger reasons astronomy is fascinating is its ability to reveal so much about a source of radiation trillions of kilometres away using very little information. The same is true of the June 8 detection. According to the LIGO scientific collaboration’s assessment,
When massive stars reach the end of their lives, they lose large amounts of their mass due to stellar winds – flows of gas driven by the pressure of the star’s own radiation. The more ‘heavy’ elements like carbon and nitrogen that a star contains, the more mass it will lose before collapsing to form a black hole. So, the stars which produced GW170608’s [the official designation of the detection] black holes could have contained relatively large amounts of these elements, compared to the stellar progenitors of more massive black holes such as those observed in the GW150914 merger. … The overall amplitude of the signal allows the distance to the black holes to be estimated as 340 megaparsec, or 1.1 billion light years.
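That closing conversion is easy to verify; a quick sketch (the parsec-to-light-year factor is a standard value, not from the announcement):

```python
# Checking the distance conversion in the LIGO assessment quoted above:
# 340 Mpc -> light years. 1 parsec = 3.2616 light years (standard value).

LY_PER_PC = 3.2616
distance_mpc = 340

distance_mly = distance_mpc * LY_PER_PC   # millions of light years
print(f"{distance_mpc} Mpc ~ {distance_mly / 1000:.1f} billion light years")
```

340 Mpc comes out to about 1.1 billion lightyears, matching the figure in the assessment.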
The circumstances of the discovery are also interesting. Quoting at length from a LIGO press release:
A month before this detection, LIGO paused its second observation run to open the vacuum systems at both sites and perform maintenance. While researchers at LIGO Livingston, in Louisiana, completed their maintenance and were ready to observe again after about two weeks, LIGO Hanford, in Washington, encountered additional problems that delayed its return to observing.
On the afternoon of June 7 (PDT), LIGO Hanford was finally able to stay online reliably and staff were making final preparations to once again “listen” for incoming gravitational waves. As part of these preparations, the team at Hanford was making routine adjustments to reduce the level of noise in the gravitational-wave data caused by angular motion of the main mirrors. To disentangle how much this angular motion affected the data, scientists shook the mirrors very slightly at specific frequencies. A few minutes into this procedure, GW170608 passed through Hanford’s interferometer, reaching Louisiana about 7 milliseconds later.
LIGO Livingston quickly reported the possible detection, but since Hanford’s detector was being worked on, its automated detection system was not engaged. While the procedure being performed affected LIGO Hanford’s ability to automatically analyse incoming data, it did not prevent LIGO Hanford from detecting gravitational waves. The procedure only affected a narrow frequency range, so LIGO researchers, having learned of the detection in Louisiana, were still able to look for and find the waves in the data after excluding those frequencies.
But what I’m most excited about is the quiet announcement. All of the gravitational wave detection announcements before this were accompanied by an embargo, lots of hype building up, press releases from various groups associated with the data analysis, and of course reporters scrambling under the radar to get their stories ready. There was none of that this time. This time, the LIGO scientific collaboration published their press release with links to the raw data and the preprint paper (submitted to the Astrophysical Journal Letters) on November 15. I found out about it when I stumbled upon a tweet from Sean Carroll.
And this is how it’s going to be, too. In the near future, the detectors – LIGO, VIRGO, etc. – are going to be gathering data in the background of our lives, like just another telescope doing its job. The detections are going to stop being a big deal: we know LIGO works the way it should. Fortunately for it, some of its more spectacular detections (colliding intermediate-mass blackholes and colliding neutron stars) were also made early in its life. What we can all look forward to now is reports of first-order derivatives from LIGO data.
In other words, we can stop focusing on Einstein’s theories of relativity (long overdue) and move on to what multiple gravitational wave detections can tell us about things we still don’t know. We can mine patterns out of the data, chart their variation across space, time and their sources, and begin the arduous task of drafting the gravitational history of the universe.
Earlier today, the Retraction Watch mailing list highlighted a strange paper written by a V.M. Das disputing the widely accepted fact that our body clocks are regulated by the gene-level circadian rhythm. The paper is utter bullshit. Sample its breathless title: ‘Nobel Prize Physiology 2017 (for their discoveries of molecular mechanisms controlling the circadian rhythm) is On Fiction as There Is No Molecular Mechanisms of Biological Clock Controlling the Circadian Rhythm. Circadian Rhythm Is Triggered and Controlled By Divine Mechanism (CCP – Time Mindness (TM) Real Biological Clock) in Life Sciences’.
The use of language here is interesting. Retraction Watch called the paper ‘unreadable’ in the headline of its post because that’s obviously a standout feature of this paper. I’m not sure why Retraction Watch is highlighting nonsense papers on its pages – watched by thousands every day for intriguing retraction reports informed by the reporting of its staff – but I’m going to assume its editors want to help all their readers set up their own bullshit filters. And the best way to do this, as I’ve written before, is to invite readers to participate in understanding why something is bullshit.
However, to what extent do we think unreadability is a bullshit indicator? And from whose perspective?
There’s no exonerating the ‘time mindness’ paper because those who get beyond the language are able to see that it’s simply not even wrong. But if you had judged it only by its language, you would’ve landed yourself in murky waters. In fact, no paper should be judged by how it exercises the grammar of the language its authors have decided to write it in. Two reasons:
1. English is not the first language for most of India. Those who’ve been able to afford an English-centred education growing up or hail from English-fluent families (or both) are fine with the language but I remember most of my college professors preferring Hindi in the classroom. And I assume that’s the picture in most universities, colleges and schools around the country. You only need access to English if you’ve also had the opportunity to afford a certain lifestyle (cosmopolitan, e.g.).
2. There are not enough good journals publishing in vernacular languages in India – at least not that I know of. The ‘best’ is automatically the one in English, among other factors. Even the government thinks so. Earlier this year, the University Grants Commission published a ‘preferred’ list of journals; only papers published herein were to be considered for career advancement evaluations. The list left out most major local-language publications.
Now, imagine the scientific vocabulary of a researcher who prefers Hindi over English – both because of her educational upbringing and because she teaches in Hindi in the classroom. Wouldn’t it be composed of Latin and English jargon suspended from Hindi adjectives and verbs, a web of Hindi-speaking sensibilities straining to sound like a scientist? Oh, that recalls a third issue:
3. Scientific papers are becoming increasingly hard to read, with many scientists choosing to actively include words they wouldn’t use around the dinner table because they like how the ‘sciencese’ sounds. In time, to write like this becomes fashionable – and to not write like this becomes a sign of complacency, disinterest or disingenuousness.
… to the mounting detriment of those who are not familiar with even colloquial English in the first place. To sum up: if a paper shows other, more ‘proper’ signs of bullshit, then it is bullshit no matter how much its author struggled to write it. On the other hand, a paper can’t be suspected of badness just because its language is off – nor can it be called bad if that’s all that’s off about it.
This post was composed entirely on a smartphone. Please excuse typos or minor formatting issues.
Because The Wire had signed up to be some kind of A-listed publisher with Facebook, The Wire‘s staff was required to create Facebook Pages under each writer/editor’s name. So I created the ‘Vasudevan Mukunth’ page. Then, about 10 days ago, Facebook began to promote my page on the platform, running ads for it that would appear on people’s timelines across the network. The result is that my page now has almost as many likes as The Wire English’s Facebook Page: 320,000+. Apart from sharing my pieces from The Wire, I now use the page to share my blog posts as well. Woot!
Action on Twitter hasn’t been far behind either. I’ve had a verified account on the microblogging platform for a few months now. And this morning, Twitter rolled out the expanded tweet character limit (from 140 to 280) to everyone. For someone to whom 140 characters was a liberating experience – a mechanical hurdle imposed on running your mouth, forcing you to think things through (though many choose not to) – the 280-char limit is even more so.
How exactly? An interesting implication discussed in this blog post by Twitter is that giving people 280 characters to think with made them less anxious about how they were going to compose their tweets. The number of tweets hitting the character limit dropped from 9% in the 140-char era to 1% in the newly begun 280-char era. At the same time, people have continued to tweet within the 140-char limit most of the time. So fewer tweets were being extensively reworked or abandoned, because people no longer composed them with the anxiety of staying within a smaller character limit.
But here’s the problem: most of my blog’s engagement had already been happening on social media. As soon as I published a post, WordPress’s Jetpack plugin would send an email to 4brane’s 3,600+ subscribers with the full post, post the headline + link on Twitter and the headline + blurb + image + link on Facebook. Readers would reply to the tweet, threading their responses if they had to, and drop comments on Facebook. Meanwhile, the number of emails I receive from my subscribers has been dropping drastically, as has the number of comments on posts.
I remember my blogging habit having taken a hit when I’d decided to become more active on Twitter because I no longer bore, fermented and composed my thoughts at length, with nuance. Instead, I dropped them as tweets as and when they arose, often with no filter, building them out through conversations with my followers. The 280-char limit now looks set to ‘scale up’ this disruption by allowing people to be more free and encouraging them to explore more complex ideas, aided by how (and how well, I begrudgingly admit) Twitter displays tweet-threads.
Perhaps – rather hopefully – the anxiety that gripped people when they were composing 140-char tweets will soon grip them as they’re composing 280-char tweets as well. I somehow doubt 420-char tweets will be a thing; that would make the platform non-Twitter-like. And hopefully the other advantages of having a blog, apart from the now-lost ‘let’s have a conversation’ part, such as organising information in different ways unlike Twitter’s sole time-based option, will continue to remain relevant.
An instrument onboard ISRO’s Astrosat space telescope has studied how the X-rays emitted by the Crab pulsar are polarised, and how that polarisation varies from one pulse to the next. This is very important information for understanding how pulsars create and emit high-energy radiation – information that we haven’t been able to obtain from any other pulsar in the known universe. The underpinning study was published in Nature Astronomy on November 6, 2017.
Quick recap: CZTI stands for the Cadmium Zinc Telluride Imager, a 16-MP X-ray camera and, as The Wire has discussed before, one of the best in its class – in the league of NASA’s Fermi and Swift detectors, and even better in the 80-250 keV range. Pulsars are rotating neutron stars that emit focused beams of high-energy radiation from two polar locations on their surface. (As a pulsar rotates, the beams sweep past Earth like a lighthouse sweeping past ships, giving the impression that it’s blinking, or pulsating.) We study them because they’re extreme environments that can help validate theories by pushing them to their limits.
There are two things notable about the current study: how CZTI studied the pulsar and what it found as a result.
1. How – First, the Crab pulsar, the remnant of a star that went supernova in 1054 AD, is located 6,500 lightyears away in the direction of the Taurus constellation. Second, pulsars – despite their remarkable radiation output – emit few X-ray photons that can be studied from near Earth. Third, the Crab pulsar has a rotation period of 33 ms (i.e. very fast). For these reasons, CZTI couldn’t just study the pulsar directly and hope to find what it eventually did. Whatever X-rays it collected would have had to be precisely calibrated in time. So the CZTI team* partnered with the Giant Metrewave Radio Telescope in Pune and the Ooty Radio Telescope in Muthorai (Tamil Nadu) for the ephemeris data. In all, there were 21 observations made over (CZTI’s first) 18 months.
2. What – Like a Ferrero Rocher from hell, a pulsar is a rotating neutron star on the inside, wrapped in a very strong magnetic field. Astronomers think charged particles are accelerated by this field and the energy they emit is shot into space, as X-rays + other frequencies of radiation. So studying how these X-rays are polarised could provide more info on how a pulsar produces its famous sweeping pulses. The CZTI data had a surprise: hard X-rays are being emitted by the Crab pulsar in the off-pulse – or the-beam-is-not-pointing-at-us – phase. In other words, the magnetic field isn’t involved in producing these X-rays; the neutron star itself is. Dun dun duuuuuuun!
It’s always nice to get science results that send researchers back to the proverbial drawing board, like the CZTI result has. It’s sweeter still when local researchers are involved – and even sweeter to be reminded that we haven’t been entirely left behind in non-theoretical particle physics research. There’s even more X-ray astronomy in India’s future. After Astrosat, launched in September 2015, ISRO has okayed a proposal from the Raman Research Institute (RRI), Bengaluru, to build an X-ray polarimeter instrument that the org will launch in the future (date not known). Called Polix, it is similar to the NASA GEMS probe that stalled in 2012.
*The CZTI team had scientists from Physical Research Laboratory, Ahmedabad; Tata Institute of Fundamental Research, Mumbai; Inter-University Centre for Astronomy and Astrophysics, Pune; IIT Powai; National Centre for Radio Astronomy, Pune; Vikram Sarabhai Space Centre, Thiruvananthapuram; ISRO, Bengaluru; and RRI.
Featured image: A composite image of the Crab Nebula showing the X-ray (blue), and optical (red) images superimposed. The size of the X-ray image is smaller because the higher energy X-ray emitting electrons radiate away their energy more quickly than the lower energy optically emitting electrons as they move. Caption and credit: NASA/ESA.