Curious Bends – homeopathy's Nazi connections, painful science, the HepC bombshell and more

1. Standing up for the truth about homeopathy and Nazi medicine

“Few people would doubt the Nazi atrocities constituted the worst violations of ethics in the history of medicine. They were possible because doctors had disregarded the most elementary rules of medical ethics. Using unproven, disproven or unsafe treatments on misinformed patients, as in alternative medicine, is also hardly an ethical approach to healthcare. In fact, it violates Hippocrates’ essential principle of “first do no harm” in a most obvious way. These were some of the ideas I cover in my memoir, A Scientist in Wonderland. Just when the book had been written – and seemingly to prove my point – an extraordinary turn of events linked all these themes together in a most dramatic fashion.” (4 min read, irishtimes.com)

2. How does the emerging world use technology?

“Very few people in India and Bangladesh use the internet – only 20% and 11% respectively. But among those who do, job searching is a popular activity. Majorities of internet users in Bangladesh (62%) and India (55%) say they have looked for a job online in the past year, the highest rates among the 31 countries surveyed that have enough internet users to analyze.” (4 min read, pewresearch.org)

3. For a renaissance in Indian science and technology

“In addition, several premier research and development laboratories function without a regular director, examples being the Tata Institute of Fundamental Research in Mumbai, and the Indian Agricultural Research Institute in New Delhi. There is more. The last Union Budget speech had virtually no reference to science. Personally, I am aware of the erosion of excellence built painstakingly over the years in laboratories such as the Centre for Cellular & Molecular Biology in Hyderabad. Its library can no longer subscribe to even Current Contents leave alone other scientific journals as there is no money. I know that the ICMR cannot even pay appropriate travel allowance to those attending its meetings. I have not seen such situations arise in my scientific career spanning over six decades. The resource crunch that S&T labs face today is something unknown and is painful.” (7 min read, thehindu.com)

+ The author of this piece, Pushpa M. Bhargava, is the Chairman of the Council for Social Development (southern regional centre).

4. An Indian won the 2015 Stockholm Water Prize for revitalising an ancient innovation

“It took him a few months before finding his life’s mission—and it took an ancient innovation, a fast disappearing traditional technology, to help him transform the lives of thousands of villagers in one of India’s most arid regions. On March 20, Singh was awarded the 2015 Stockholm Water Prize, sometimes described as the Nobel prize for water. “Rajendra Singh did not insist with the clinics,” the Stockholm International Water Institute, which awards the prize, said in a statement. “Instead, and with the help of the villagers, he set out to build johads, or traditional earthen dams.”” (4 min read, qz.com)

5. Now, silence is offered as a luxury good

“Silence is now offered as a luxury good. In the business-class lounge at Charles de Gaulle Airport, I heard only the occasional tinkling of a spoon against china. I saw no advertisements on the walls. This silence, more than any other feature, is what makes it feel genuinely luxurious. When you step inside and the automatic doors whoosh shut behind you, the difference is nearly tactile, like slipping out of haircloth into satin. Your brow unfurrows, your neck muscles relax; after 20 minutes you no longer feel exhausted. Outside, in the peon section, is the usual airport cacophony. Because we have allowed our attention to be monetized, if you want yours back you’re going to have to pay for it.” (7 min read, nytimes.com)

Chart of the Week

“As the patent case winds its way through the legal labyrinth, there is both hope and disappointment. The hope springs from the belief that patent challenge to sofosbuvir is strong. The pre-grant opposition filed by I-MAK says the drug is not new and that the patent is based on old science that was disclosed in a 2005 application made by Gilead to India’s patent office. The disappointment stems from the fact that India’s top generic companies have caved in and opted for the safer option of VL agreements.” (10 min read, scroll.in)


Lessons from Orion's first test flight

Orion program managers Mark Geyer of NASA and Mike Hawes of Lockheed Martin discussed the lessons they learnt from the crew capsule’s first test flight in December 2014 with Space.com, which published the interview on March 29.

The propulsion, guidance and navigation systems had worked well, but the splashdown airbags hadn’t, and the duo also found that the heat-shield’s performance could be improved.

The ablative material – dating from the Apollo days – is an epoxy resin called Avcoat that is filled into a fiberglass honeycomb matrix bonded to the capsule’s surface.

According to Geyer and Hawes, the shield is susceptible to cracking under severe swings in temperature. As an alternative, the Orion team plans to make the shield in blocks instead of as a continuous patch. “You make blocks of Avcoat, and there is a seam that you put between these blocks. That will get the strength up,” they’re quoted as saying.

But the more serious concern was with the so-called “stable two” landings, in which the capsule ends up upside down. This was how the Apollo 7 mission splashed down in 1968 as well, the capsule floating inverted in the water and the astronauts within hanging from the restraining belts on their seats.

The Apollo 7 capsule, which experienced a “stable two” splashdown. Credit: history.nasa.gov

Ordinarily, airbags around the capsule inflate a few minutes after splashdown and right it. Hawes said Orion had landed stable-two in 50% of its test-drops, and the same thing happened at the end of the December 2014 test-flight. That time, however, the righting mechanism was delayed; one of the tanks that inflate the airbags was later found to be leaking.

Geyer clarified that hanging upside down on Earth after a long time in space would be dangerous for astronauts. When asked if they’d fixed the problem, Hawes said,

We’ve looked at the tanks, the connecting lines and the pyrotechnics. We know that all the valves fired and opened. We know that the tanks all evacuated. We have found in the bags that failed some small cracks in the fabric; it looks like a failure of the fabric itself. Whether that is because of the way they are packaged and come out, we’re not sure, so we are looking at that now.

The duo began – and finished – their interview on a similar refrain: that, having grown up watching their predecessors perfect the moon landings, what they were doing now was for their own generation.

At the same time, Hawes’s first words in the interview made clear that their accomplishments weren’t meant to one-up those of the past but to continue them.

I kind of choked up at the press conference after the flight. I started [my career] when the Apollo guys were still at JSC [Johnson Space Center] and learned from them, and now I finally felt like we had done this for our generation and for the other generations behind us — something we hadn’t done for 40 years … It’s a human spacecraft that’s going much farther than we have gone in a long time.

You can read the full interview here.

From Orwell to Kafka, Markov to Doctorow: Understanding Big Data through metaphors

On March 20, I attended a short talk by Malavika Jayaram, a fellow at the Berkman Center for Internet & Society, titled ‘What we talk about when we talk about Big Data’ at the T.A.J. Residency in Bengaluru. It was something of an initiation into the social and political contexts of Big Data and its usage, and the important ethical conundrums assailing these contexts.

Even if it was a little slow in the first 15 minutes, Jayaram’s talk gathered pace as she piled criticism upon criticism on the concept’s foundations, which were quickly revealed to be immature. Those already familiar with Jayaram’s research may or may not have found more nuance in her talk than she has let on before, but to me it revealed an array of perspectives I had remained woefully ignorant of.

The first in line was about the metaphors used to describe Big Data – and how our use of metaphors at all betrays our inability to comprehend Big Data in its entirety. Jayaram quoted at length but loosely from an essay by Sara M. Watson, her colleague at Berkman, titled Data is the new “____”. It describes how the dominant metaphors are industrial, dealing with the data itself as if it were a natural resource and the process of analyzing it as if it were being mined or refined.

Data as a natural resource suggests that it has great value to be mined and refined but that it must be handled by experts and large-scale industrial processes. Data as a byproduct describes the transactional traces of digital interactions but suggests it is also wasteful, pollutive, and may not be meaningful without processing. Data has also been described as a fungible resource, as an asset class, suggesting that it can be traded, stored, and protected in a data vault. One programmatic advertising professional related to me that he thinks “data is the steel of the digital economy,” an image that avoids the negative connotations of oil while at the same time expressing concern about the monopolizing forces of firms like Google and Facebook.

Not Orwellian but Kafkaesque

There are two casualties of this perspective. The first is that the people behind the data – those whose features, actions and choices have become numbers – are forgotten even as the data they have given “birth” to becomes more important and valuable. The second is that the constant reminder of data’s value, and of the greater value of larger amounts of it, condemns the data to a life in which it can’t hope to stay still for long.

The dehumanization of Big Data, according to Jayaram, extends beyond analysts forgetting that the data belongs to faces and names, into the restriction of personal ownership: the people the data represents often don’t have access to it. This implies an existential anxiety quite unlike the one found in George Orwell’s 1984 and more like the one in Franz Kafka’s The Trial. In Jayaram’s words,

You are in prison awaiting your trial. Suddenly you find out the trial has been postponed and you have no idea why or how. There seem to be people who know things that you never will. You don’t know what you can do to encourage their decisions to keep the trial permanently postponed. You don’t know what it was about you and you have no way of changing your behavior accordingly.

In 2013, American attorney John Whitehead popularized this comparison in an article titled Kafka’s America. Whitehead argues that the sentiments of Josef K., the protagonist of The Trial, are increasingly becoming the sentiments of a common American.

Josef K’s plight, one of bureaucratic lunacy and an inability to discover the identity of his accusers, is increasingly an American reality. We now live in a society in which a person can be accused of any number of crimes without knowing what exactly he has done. He might be apprehended in the middle of the night by a roving band of SWAT police. He might find himself on a no-fly list, unable to travel for reasons undisclosed. He might have his phones or internet tapped based upon a secret order handed down by a secret court, with no recourse to discover why he was targeted. Indeed, this is Kafka’s nightmare, and it is slowly becoming America’s reality.

Kafka’s biographer Reiner Stach summed up these activities, and the steadily unraveling realism of Kafka’s book, as proof of “the extent to which power relies on the complicity of its victims” – and the ‘evil’ mechanism used to achieve this state is a concern that Jayaram places among the prime contemporary threats to civil liberties.

If your hard drive’s not in space…

There is an added complication. If the use of Big Data were predominantly suspect, it would have been easier to build consensus against its abuse. However, that isn’t the case: Big Data is more often than not used in ways that don’t harm our personal liberties, and the misfortune is that this collective beneficence has so far been no match for the collective harm some of its misuses have achieved. Could this be because the potential for misuse is almost everywhere?

Yes. An often overlooked facet of Big Data is that its responsible use is not a black-and-white deal. Facebook is not all evil and academic ethnographers are not all benign. Zuckerberg’s social network may collect and store large amounts of information that it nefariously trades with advertisers – and may even comply with the NSA’s “requests” – but there is a systematicity, an orderliness, with which the data is passed around. The existence of this complex presents a problem, no doubt, but that there is a complex at all makes the problem easier to attempt to fix than if the orderliness were absent.

And this orderliness is often absent among academicians, scholars, journalists, etc., who may not think data is a dollar note but are nevertheless processing prodigious amounts of it without being as careful as necessary about how they log, store and share it. Jayaram rightly believes that even if information is collected for benevolent purposes, the moment it becomes data it loses its memory and lingers on the Internet as data; and that if we are to be responsible data scientists, being benevolent alone will be inadequate.

To drive the point home, she recalled a comment someone had made to her during a data workshop.

The Utopian way to secure data is to shoot your hard drive into space.

Every other recourse will only fall short.

Consent is not enough

This memoryless, Markovian character of the data-economy demands a redefinition of consent as well. The question “What is consent?” is dependent on what a person is consenting to. However, almost nobody knows how the data will be used, what for, or over what time-frames. Like a variable flowing through different parts of a computer, data can pass through a variety of contexts to each of which it provides value of varying quality. So, the same question of contextual integrity should retrospectively apply to the process of consent-giving as well: What are we consenting to when we’re consenting to something?

And when both the party asking for consent and the party asked for consent can’t know all the ways in which the data will be used, the typical way-out has been to seek consent that protects one against harm – either by ensuring that one’s civil liberties are safeguarded or by explicitly prohibiting choices that will impinge upon, again, one’s civil liberties. This has also been increasingly done in a one-size-fits-all manner that the average citizen doesn’t have the bargaining power to modify.

However, it’s become obvious by now that just protecting these liberties isn’t enough to ensure that data and consent are both promised a contextual integrity.

Why not? Because the statutes that enshrine many of these liberties have yet to be refashioned for the Internet age. In India, at least, the six fundamental rights are to equality, to freedom, against exploitation, to freedom of religion, cultural and educational rights, and to constitutional remedies. Between them, the promise of protection against the misuse not of one’s person but of one’s data is tenuous (although a recent document from the Telecom Regulatory Authority of India could soon fix this).

The Little Brothers

Anyway, an immediate consequence of this typical way-out has been that one needs to have been harmed to obtain a remedy, at a time when it remains difficult to define when one’s privacy has been harmed. And since privacy has been an enabler of human rights, even unobtrusive acts of tagging and monitoring that don’t violate the law can force compliance among the people. This is what hacker Andrew Huang talks about in his afterword to Cory Doctorow’s novel Little Brother (2008),

[In] January 2007, … Boston police found suspected explosive devices and shut down the city for a day. These devices turned out to be nothing more than circuit boards with flashing LEDs, promoting a show for the Cartoon Network. The artists who placed this urban graffiti were taken in as suspected terrorists and ultimately charged with felony; the network producers had to shell out a $2 million settlement, and the head of the Cartoon Network resigned over the fallout.

Huang’s example further weakens the Big Brother metaphor by implicating not one malevolent central authority but an epidemic, Kafkaesque paranoia that has “empowered” a multitude of Little Brothers all convinced that God is only in the detail.

While Watson’s essay (Data is the new “____”) is explicit about the power of metaphors to shape public thought, Doctorow’s book and Huang’s afterword take the next logical step in that direction and highlight the clear and present danger for what it is.

It’s not the abuse of power by one head of state but the evolution of statewide machines that (exhibit the potential to) exploit the unpreparedness of the times to coerce and compel, using as their fuel the mountainous entity – sometimes so Gargantuan as to be formless, and sometimes equally absurd – called Big Data (I exaggerate – Jayaram was more measured in her assessments – but not much).

And even if Whitehead and Stach only draw parallels between The Trial and American society, the relevant, singular “flaw” of that society exists elsewhere in the world, too: the more we surveil others, the more we’ll be surveilled ourselves, and the longer we choose to stay ignorant of what’s happening to our data, the more our complicity in its misuse. It is a bitter pill to swallow.

Featured image credit: DARPA

Curious Bends – macaroni scandal, bilingual brain, beef-eating Hindus and more

1. The great macaroni scandal in the world began in Kerala

“‘Only the upper class people of our larger cities are likely to have tasted macaroni, the popular Italian food. It is made from wheat flour and looks like bits of onion leaves, reedy, hollow, but white in colour.’ This paragraph appears in a piece titled: “Ta-Pi-O-Ca Ma-Ca-Ro-Ni: Eight Syllables That Have Proved Popular In Kerala”. Readers, I am not making this up. For a few years, from around 1958 to 1964, food scientists in India were obsessed with tapioca macaroni. Originally called synthetic rice, it was developed by the Central Food Technological Research Institute (CFTRI) in Mysore as a remedy for the problems of rice shortage, especially in the southern states.” (4 min read, livemint.com)

2. China is using Pakistan as a place to safety-test its nuclear power technology

“Pakistan’s plans to build two nuclear reactors 40 kilometres from the bustling port city of Karachi, a metropolis of about 18 million people, have become a bone of contention between scientists and the government. They are to be built by the China National Nuclear Corporation. Each reactor is worth US$4.8 billion and the deal includes a loan of US$6.5 billion from a Chinese bank. These reactors have never been built or tested anywhere, not even in China. If a Fukushima or a Chernobyl-like disaster were to take place, evacuating Karachi would be impossible, says a leading Pakistani physicist. He argues that building these nuclear reactors may have significant environmental, health, and social impacts.” (6 min read, scidev.net)

3. Speaking a second language may change how you see the world

“Cognitive scientists have debated whether your native language shapes how you think since the 1940s. The idea has seen a revival in recent decades, as a growing number of studies suggested that language can prompt speakers to pay attention to certain features of the world. Russian speakers are faster to distinguish shades of blue than English speakers, for example. And Japanese speakers tend to group objects by material rather than shape, whereas Koreans focus on how tightly objects fit together. Still, skeptics argue that such results are laboratory artifacts, or at best reflect cultural differences between speakers that are unrelated to language.” (4 min read, sciencemag.org)

4. Nobel-prize winning biologist Venkatraman Ramakrishnan named president of the Royal Society​

“Ramakrishnan grew up in India and has spent the majority of his research career in the United States, moving to the United Kingdom in 1999. He has a diverse scientific background: he switched to biology after a PhD in physics. “That breadth is something I hope will help me,” he says.” (3 min read, nature.com)

5. History is proof most Hindus never had any beef with beef

“To achieve this goal, the RSS has, among other things, turned beef into a Muslim-Hindu issue. So the ban on beef is a device to create a monolithic Hindu community? Yes. You also have to ask the question: When did the idea of not eating beef and meat become strong? Gandhi was essentially a Jain; he campaigned for cow protection as well as vegetarianism. It was Gandhi’s campaign that took vegetarianism to non-Brahmin social groups that were meat-arian. The only people who were not really influenced by Gandhi’s cow protection campaign and vegetarianism were Muslims, Christians and Dalits. If the Dalits were not affected, it was because Ambedkar immediately started a counter-campaign.” (8 min read, scroll.in)

Chart of the Week

“Among the educated elite the traditional family is thriving: fewer than 10% of births to female college graduates are outside marriage—a figure that is barely higher than it was in 1970. In 2007 among women with just a high-school education, by contrast, 65% of births were non-marital. Race makes a difference: only 2% of births to white college graduates are out-of-wedlock, compared with 80% among African-Americans with no more than a high-school education, but neither of these figures has changed much since the 1970s. However, the non-marital birth proportion among high-school-educated whites has quadrupled, to 50%, and the same figure for college-educated blacks has fallen by a third, to 25%. Thus the class divide is growing even as the racial gap is shrinking.” (4 min read, economist.com)


Tuberculosis's invisible millions – in cases and money

Tuberculosis (TB) has killed more than a billion people in the last 200 years – more than any other infectious disease in that period. What’s worse: according to the World Health Organisation (WHO), less than half of all cases worldwide are ever diagnosed.

India suffers the most. It has the highest burden of TB in the world: more than 2 million people suffer from the disease, and this is despite years of work to control it.

TB was declared a global health emergency by the WHO in 1993. Then, in 2001, the first global “Stop TB Plan” came into effect, with an international network of donors and private and public sector organisations tackling TB-related issues around the world together.

The disease is prevalent in both rich and poor countries, but has more disastrous consequences in the latter because of limited access to healthcare, poor sanitation and undernutrition. The matter is worsened by co-morbidity: those whose immune systems have been weakened by diabetes or AIDS fall prey to TB and die.


And even between developing economies, there is significant variation in treatment levels because of difficulties in identifying new infections. In 2012, while China and India together accounted for 40% of the world’s burden of TB, the prevalence per 100,000 people was at least 167 in India and less than half that in China (about 68).

Technology can help

In an article in the journal PLOS Medicine, Puneet Dewan from the Bill & Melinda Gates Foundation and Madhukar Pai of McGill University have called for global efforts to identify, treat and cure the 3 million “missed” TB infections every year.

“Reaching all these individuals and ensuring accountable, effective TB treatment will require TB control programs to adopt innovative tools and modernize program service delivery,” they write.

In January 2015, the WHO representative to India, Nata Menabde, said the decline of TB incidence in the country was occurring at 2% per year, instead of the desired 19-20%. She added that it could be pulled up to 10% per year by 2025 if the country was ready to better leverage the available technology. The WHO’s goal is to eradicate TB by 2050, but for India that may prove to be too soon.
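To see why the gap between those rates matters, a bit of back-of-envelope arithmetic helps (the sketch below is my own illustration, not the WHO’s model): if incidence falls by a steady fraction r every year, it scales as (1 − r)^t after t years, so the time needed to halve it balloons at low rates.

```python
import math

# Halving time for TB incidence declining at a steady annual rate r:
# incidence after t years ~ (1 - r)**t, so t_half = ln(0.5) / ln(1 - r).
def years_to_halve(annual_decline: float) -> float:
    return math.log(0.5) / math.log(1 - annual_decline)

for r in (0.02, 0.10, 0.195):
    print(f"{r:.1%} decline/yr -> incidence halves in ~{years_to_halve(r):.0f} years")
# 2.0% -> ~34 years; 10.0% -> ~7 years; 19.5% -> ~3 years
```

At 2% a year, in other words, halving incidence takes more than three decades; at the desired rates, only a few years.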

Better use of technology is also what Dewan and Pai are calling for. The tech interventions could be in the form of e-health services, the use of mobile phones by doctors to notify centers of new cases, and the disbursal of e-vouchers for subsidized treatment.

And their demands are not unreasonable, given India’s progress so far. First, India has met one of the United Nations’ ambitious Millennium Development Goals by halving TB prevalence in 2015 relative to 1990. Second, according to Menabde, India is also on track to halve TB mortality by the end of this year relative to 1990. These accomplishments testify to the commitment of public and private sector initiatives and place the country in a good position from which to springboard toward stiffer targets. Continued support can sustain the momentum.


In 2012, the previous government made TB a notifiable disease—mandating medical practitioners to report every TB case they detect—going some way toward reducing the number of “missing” cases. It also banned blood tests to diagnose TB, which lack a clinical basis. While the delay in implementing these measures contributed to the rise of multidrug-resistant strains of the disease, they also revitalised efforts to meet the WHO’s targets at an important time. Then bad news struck.

Causing self-harm

India’s health budget for 2015-16 has not even managed to keep up with inflation. It is a mere 2% more than the previous year. For TB, this budgetary belt-tightening has meant taking a few steps back in the pace of developing cures against multi-drug resistant strains and in efforts to improve the quality of treatment at frontline private-sector agencies, which already provide more than 60% of patient care.

Dewan and Pai think TV programs, such as Aamir Khan’s Satyamev Jayate, and Amitabh Bachchan’s admission that he is a TB survivor will promote enough awareness to force changes in healthcare spending—but this seems far too beamish an outlook when the funding cuts and regulatory failures are factored in.

A new draft of the National Health Policy (NHP) was published in December. Besides providing a lopsided insight into the government’s thoughts on public healthcare, it made evident that ministers’ apathetic attitude, and not a paucity of public support, was to blame for poor policies.

Nidhi Khurana, a health systems researcher at the Johns Hopkins Bloomberg School of Public Health, summed up the NHP deftly in The Hindu:

The NHP refutes itself while describing the main reason for the National Rural Health Mission’s failure to achieve stronger health systems: “Strengthening health systems for providing comprehensive care required higher levels of investment and human resources than were made available. The budget received and the expenditure thereunder was only about 40 per cent of what was envisaged for a full revitalisation in the NRHM framework.” If this is not the case against diminished public funding for health, what is?

OA shouldn't stop at access

Joseph Esposito argues in The Scholarly Kitchen that it’s okay for OA articles (which come with a CC-BY license) to be repackaged and sold for a price by other merchants once they’ve been published.

The economic incentive to reach new audiences could make that otherwise OA article into something that gets brought to the attention of more and more readers. What incentive does a pure-play OA publisher have to market the materials it publishes? Unfortunately, the real name of this game is not “Open Access” but “Post and Forget.” Well-designed commerce, in other words, leads to enhanced discovery. And when it doesn’t, it enters the archaeological record.

If we can chase the idealists, ideologues, and moralists out of the temple, we may see that the practical act of providing economic incentives may be able to do more for access than any resolution from Budapest, Bayonne, Bethesda, or Berlin. The market works, and when it doesn’t, things quietly go away. So why all the fuss?

It’s not an argument that’s evident on the face of it; it becomes apparent only when you realize OA’s victory march stopped halfway, at allowing people to access research papers but not find them. The people who are good at helping others find stuff are actually taking the trouble to market their wares.

So Esposito has essentially argued for leaving in a “finding fee” where one exists, because there’s a big difference between something just sitting in the public domain and something being found. I thought I’d disagree with the consequences of this reasoning for OA, but I largely don’t.

Where I stop short is where this permission to sell papers that are available for free infringes on the ideals of OA through no fault of the principle itself. But then what can OA do about that?

Read: Getting Beyond “Post and Forget” Open Access, The Scholarly Kitchen

A future for driverless cars, from a limbo between trolley problems and autopilots

By Anuj Srivas and Vasudevan Mukunth

What’s the deal with everyone getting worried about artificial intelligence? It’s all the Silicon Valley elite seem willing to be apprehensive about, and Oxford philosopher Nick Bostrom, with his book Superintelligence: Paths, Dangers, Strategies (2014), seems to be its patron saint.

Even if Big Data seems like it could catalyze things, they could be overestimating AI’s advent. But thanks to sightings of Google’s breed of driverless cars, conversations on regulation are already afoot. This is the sort of subject that would benefit from its technology being better understood, because it isn’t immediately apparent. To make matters worse, not enough data is available yet for everyone to scrutinize the issue, even as some opinion-mongers are distorting the early hints of a debate with their desires.

In an effort to bypass this, let’s say things happen like they always do: Google doesn’t ask anybody and starts deploying its driverless cars, and the law is then forced to shape itself around that. Granted, this isn’t something Google can force on people, because the cars are part of no pre-existing ecosystem – it can’t compel participation the way it did with Hangouts. Yet the law isn’t prohibitive either.

In Silicon Valley, Google has premiered its Shopping Express service, which delivers purchases made online within three hours of the order being placed, at no extra cost. No extra cost because the goods are delivered using Google’s driverless cars, and the service is a test-bed for them, where they get to ‘learn’ what they will. But when it comes to buying such cars, who will? What about insurance? What about licenses?

A better trolley problem

It’s been understood for a while that the problem here is liability, summarized in many ways by the trolley problem. There’s something unsettling about loss of life due to machine failure, whereas it’s relatively easier to accept when the loss is the consequence of human hands. Theoretically it should make no difference – planes, for example, are flown more by computers these days than by a living, breathing pilot; essentially, you’re trusting your life to the computers running the plane. And when driverless cars are rolled out, there’s ample reason to believe they will have as low a chance of failure as aircraft run by computer pilots. But we could be missing something through this simplification.

Even if we’re laughably bad at it at times, having a human behind the wheel makes driving predictable, sure, but more importantly it makes liability easier to figure. The problem with a driverless car is not that we’d doubt its logic – the logic could be perfect – but that we’d doubt what that logic dictates. A failure right now is an accident: a car ramming into a wall, a pole, another car, another person. Are these the only failures, though? A driverless car does seem similar to autopilot, but we must be concerned about what its logic dictates. We consciously say that human decision-making skills are inferior, that we can’t be trusted; though that may be true, we cross an epistemological threshold when we say so.

Perhaps the trolley problem isn’t well thought out. The problem with driverless cars is not about five lives versus one life; that’s an utterly human problem. The updated problem for driverless cars would be: should the algorithm look to save the passengers of the car or should it look to save bystanders?

And yet even this updated trolley problem is too simplistic. Computers and programmers make these kinds of decisions on a daily basis already – by choosing, for instance, at what moment an airbag should deploy, especially considering that an airbag deployed unnecessarily can also grievously injure a human being.

Therefore, we shouldn’t fall into a Frankenstein complex where our technological creations are automatically assumed to be doing evil things simply because they have no human soul. It’s not a question of “it’s bad if a machine does it and good if a human does it”.

Who programs the programmers?

And yet, the scale and the moral ambiguity are pumped up to a hundred when it comes to driverless cars. Decisions like airbag deployment can take refuge in physics and statistics – they are usually seen in that context. For driverless cars, however, specific programming decisions will be forced to confront morally ambiguous situations, and it is here that the problem starts. If an airbag deploys unintentionally or wrongly, it can be explained away as an unfortunate error, accident or freak situation – or, more simply, by saying we can’t program airbags to deploy on a case-by-case basis. A driverless car can’t take refuge behind statistics or simple physics when it is confronted with its trolley problem.

There is a more interesting question here. If a driverless car has to choose between a) running over a dog, b) swerving to miss the dog and thereby hitting a tree, and c) freezing and doing nothing, what will it do? It will do whatever the programmer tells it to do. Earlier we had the choice, depending on our own moral compass, as to what we should do: people who liked dogs wouldn’t kill the animal; people who cared more about their car would. So, who programs the programmers?
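To make that concern concrete, here is a deliberately toy sketch in Python – entirely hypothetical, not any carmaker’s actual code – of how the ‘moral’ choice ends up being a set of constants someone wrote down long before a dog ever ran onto the road:

```python
from enum import Enum

class Action(Enum):
    BRAKE_STRAIGHT = "run over the dog"    # option (a)
    SWERVE = "miss the dog, hit the tree"  # option (b)
    FREEZE = "do nothing"                  # option (c)

# The programmer's moral compass, hard-coded as weights. These numbers are
# invented for illustration: whoever sets them has decided, in advance, for
# every future passenger.
HARM_WEIGHTS = {
    "animal": 1.0,    # cost of hitting the dog
    "vehicle": 0.5,   # cost of wrecking the car against the tree
    "inaction": 2.0,  # cost of freezing (unpredictable outcome)
}

def choose_action() -> Action:
    """Pick the action with the lowest pre-assigned harm score."""
    costs = {
        Action.BRAKE_STRAIGHT: HARM_WEIGHTS["animal"],
        Action.SWERVE: HARM_WEIGHTS["vehicle"],
        Action.FREEZE: HARM_WEIGHTS["inaction"],
    }
    return min(costs, key=costs.get)

if __name__ == "__main__":
    print(choose_action())  # Action.SWERVE -- only because someone weighted it so
```

Whoever sets those weights has answered the moral question in advance, on behalf of every future passenger.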

And like the simplification to a trolley problem, comparing autonomous cars to the autopilot on board an aircraft is short-sighted. In his book Normal Accidents, sociologist Charles Perrow talks about nuclear power plant technology and its implications for insurance policy. NPPs are packed with redundant safety systems. When accidents don’t happen, these systems make up a bulk of the plant’s dead weight, but when an accident does happen, their failure is often the failure worth talking about.

So, even as an aircraft is flying through the air, control towers are monitoring its progress, the flight data recorders act as a deterrent against complacency, and the sheer cost of one flight makes redundant safety systems feasible over a reasonable span of time.

Safety is a human thing

These features together make up the environment in which autopilot functions. An autonomous car, on the other hand, doesn’t inspire the same sense of being in secure hands. In fact, it’s like an economy of scale working the other way. What safety systems kick in when the ghost in the machine fails? As Maria Konnikova pointed out in The New Yorker in September 2014, maneuvering an aircraft can be increasingly automated, but the problem arises when something fails and humans have to take over: we won’t take over as effectively as we think we can, because automation encourages our minds to wander, to not pay attention to the differences between normalcy and failure. As a result, a ‘redundancy of airbags’ is encouraged.

In other words, it would be too expensive to include all these foolproof safety measures in driverless cars, but at the same time they ought to be included. This is why the first ones likely won’t be owned by individuals. The best way to introduce them would be through taxi services like Uber, effectuating communal car-sharing with autonomous drivers. In a world of driverless cars, we may not own the cars themselves, so a company like Uber could internalize the costs involved in producing that ecosystem, and having the cars around in bulk makes safety-redundancies feasible as well.

And if driverless cars are being touted as the future, owning a car could become a thing of the past, too. The thrust of the digital economy has been to share and rent rather than to own – look at music, business software, games, rides (Uber), even apartments (Airbnb); only essentials like smartphones are still owned. Why not autonomous vehicles?

Curious Bends – babies for sale, broken AIIMS, male gynaec and more

1. China has a growing online market for abducted babies

“Girls fetch considerably less than boys, but there is still a market for them. Old social patterns have re-emerged in the market, like the sale of girls into a household where they will be servants until they and the son of the house are of age to marry. Most abducted children are sold to new families as a form of illegal adoption, and are increasingly sold online, though some, mostly boys, are also trafficked for forced labour. I recently worked on an asylum case involving a young man forced into begging with a group of children under traffickers’ control in China. He is still so traumatised by the brutal physical punishments inflicted on the boys when they didn’t collect enough money that he can only talk about it in the third person: “they did this to the children”, never “they did this to me”.” (4 min read, theconversation.com)

2. Will it improve India’s poor healthcare if more research hospitals like the AIIMS are built?

“Barely 1-2% of the funds allocated to AIIMS, it observed, were being spent on research. As for education, even as India suffered from a lack of doctors, 49% of the doctors trained at AIIMS had “found their vocations abroad”. This staffing shortage was hurting AIIMS itself. Waiting time for surgery ranged between 2.5-34 months. With a high doctor-patient ratio, patients were barely getting four to nine minutes with doctors at the outpatient (OPD) department. The report flagged other shortcomings. AIIMS had failed to lead the modernisation of India’s public health infrastructure. CAG also noted delays in setting up medical centres, irregularities in the purchase of equipment, and so on.” (5 min read, economictimes.com)

3. Confessions of an Indian male gynaecologist

“Many of my patients confess that they prefer a male doctor to a female one. I don’t know why. But not every woman who walks into my room is comfortable. There is always a nurse in the room as I am scared that some woman will level baseless allegations over the physical examination. Unlike men, women have many health problems. Seeing all that they go through has made me respect them. My wife says I am more like a woman. That I have too much compassion.” (2 min read, openthemagazine.com)

4. Personalising cancer care, one tumour at a time

“Mitra’s CANScript has gone a step further. A simpler analogy would be the bacteria sensitivity tests that are commonly used today. Just as a pathology lab takes a swab, cultures it and tests it against all available antibiotics to finally help a doctor prescribe the right antibiotic, CANScript runs a test against the biopsy from the patient and gives a score card for the drugs to be used. In clinics it is currently used in six solid tumors (breast cancer, gastrointestinal, glioblastoma, head and neck squamous cell carcinoma and colorectal) and two blood cancers. Three other cancers – lung, cervical and melanoma—are under lab testing. However, the limitation with CANScript is that it requires very fresh tumour.” (5 min read, seemasingh.in)

5. What your bones have in common with the Eiffel Tower

“So how did Eiffel design a structure that’s strong enough to withstand the elements, and yet weighs about as much as the air surrounding it? The secret lies in understanding the shapes of strength. It’s a lesson we can learn by looking inwards… literally. By studying our bones, we can discover some of the same principles that Eiffel used in designing his tower.” (11 min read, wired.com)

Chart of the Week

“Now there are nine powers, and the kind of protocols that the cold-war era America and Soviet Union set up to reassure each other are much less in evidence today. China is cagey about the size, status and capabilities of its nuclear forces and opaque about the doctrinal approach that might govern their use. India and Pakistan have a hotline and inform each other about tests, but do not discuss any other measures to improve nuclear security, for example by moving weapons farther from their border. Israel does not even admit that its nuclear arsenal of around 80 weapons (but could be as many as 200) exists. North Korea has around ten and can add one a year and regularly threatens to use them. The agreements that used to govern the nuclear relationship between America and Russia are also visibly fraying; co-operation on nuclear-materials safety ended in December 2014. America is expected to spend $350 billion on modernising its nuclear arsenal over the next decade and Russia is dedicating a third of its fast-growing defence budget to upgrading its nuclear forces. In January this year the Doomsday Clock was moved to three minutes to midnight, a position it was last at in 1987.” (3 min read, economist.com)


The notion of natural quasicrystals is here to stay

In November 2008, Luca Bindi, a curator at the Universita degli Studi di Firenze, Italy, found that an alloy of aluminium and copper called khatyrkite could be a quasicrystal. Bindi couldn’t be sure because he didn’t have the transmission electron microscope necessary to verify his find, so he couriered two grains of it to a lab at Princeton University. There, physicists Paul Steinhardt – whose name has been associated with the study of quasicrystals since their discovery in 1982 – and Nan Yao made their monumental discovery: the alloy was indeed a quasicrystal, and that meant these abnormal crystal structures could form naturally as well.

Before 1982, solid substances were thought to be either crystalline or amorphous. The atoms or molecules of crystalline substances were neatly stacked in a variety of patterns – but patterns nonetheless, which repeated as you shifted them to the left or right or rotated them by some amount. In amorphous substances, the arrangement was chaotic. Then the physicist Dan Shechtman discovered quasicrystals: crystalline solids whose atoms or molecules were arranged in patterns that were orderly but, somehow, not repetitive. The discovery altered the extant paradigm of physical chemistry, overthrowing century-old knowledge and redefining crystallography. Shechtman won the Nobel Prize in chemistry for his work in 2011.

The electron diffraction pattern from an icosahedral quasicrystal. Credit: nobelprize.org
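A feel for order-without-periodicity can be had from the Fibonacci word, a standard one-dimensional analogue of quasicrystalline order (this sketch is my own illustration, not something from the papers discussed here): a fixed substitution rule generates it deterministically, yet the result never repeats with any fixed period.

```python
# The Fibonacci word: apply the substitution L -> LS, S -> L repeatedly.
# The rule is fixed, so the sequence is perfectly "ordered" -- yet it has
# no translational period, the 1D flavor of what Shechtman saw in 3D.

def fibonacci_word(iterations: int) -> str:
    word = "L"
    for _ in range(iterations):
        word = "".join("LS" if c == "L" else "L" for c in word)
    return word

seq = fibonacci_word(8)          # 55 characters long
print(seq[:21])                  # LSLLSLSLLSLLSLSLLSLSL

# Check that no small period p tiles the sequence exactly.
for p in range(1, 10):
    periodic = all(seq[i] == seq[i + p] for i in range(len(seq) - p))
    print(p, periodic)           # prints False for every p
```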

The confirmation that khatyrkite did in fact harbor quasicrystals, on New Year’s Day 2009, triggered an expedition to the foot of the Koryak Mountains in eastern Russia in 2011. Steinhardt and Bindi were there, and their team found some strange rocks along a stream 230 km to the south-west of Anadyr, the capital of Chukotka, in which quasicrystal grains were embedded. More fascinating was the quasicrystals’ composition itself, identified as icosahedrite and thought to be of extraterrestrial origin. Steinhardt & co. think it formed in our solar nebula 4.57 billion years ago – when Earth was being formed, too – and got attached to a meteorite that crashed on Earth 15,000 years ago.

The latest results from this expedition were published in Scientific Reports on March 13. For all its details, the paper remains silent about the ten years of work and dedication consumed in discovering these anomalous crystals in a remote patch of the Russian tundra, about the human experience that fleshed out the discovery’s implications for the birth of the Solar System. Fortunately, Virat Markandeya told that story, and told it well, in the November 2013 issue of Periscope magazine. The piece is a must-read now that the notion of natural quasicrystals is here to stay.

The Large Hadron Collider is back online, ready to shift from the "what" of reality to "why"

The world’s single largest science experiment will restart on March 23 after a two-year break. Scientists and administrators at the European Organization for Nuclear Research – known by its French acronym CERN – have announced the status of the agency’s upgrades on its Large Hadron Collider (LHC) and its readiness for a new phase of experiments running from now until 2018.

Before the experiment was shut down in late 2013, the LHC became famous for helping discover the elusive Higgs boson, a fundamental (that is, indivisible) particle that gives other fundamental particles their mass through a complicated mechanism. The find earned two of the physicists who thought up the mechanism in 1964, Peter Higgs and Francois Englert, a Nobel Prize in 2013.

Though the LHC had fulfilled one of its more significant goals by finding the Higgs boson, its purpose is far from complete. In its new avatar, the machine boasts of the energy and technical agility necessary to answer questions that current theories of physics are struggling to make sense of.

As Alice Bean, a particle physicist who has worked with the LHC, said, “A whole new energy region will be waiting for us to discover something.”

The finding of the Higgs boson laid to rest speculations of whether such a particle existed and what its properties could be, and validated the currently reigning set of theories that describe how various fundamental particles interact. This is called the Standard Model, and it has been successful in predicting the dynamics of those interactions.

From the what to the why

But having assimilated all this knowledge, what physicists don’t know – but desperately want to – is why those particles’ properties have the values they do. The implications, they have realized, are numerous and profound: ranging from the possible existence of more fundamental particles we are yet to encounter to the nature of the substance known as dark matter, which makes up a great proportion of the matter in the universe even though we know next to nothing about it. These mysteries were first conceived to plug gaps in the Standard Model, but the gaps have only been widening since.

With an experiment now able to better test theories, physicists have started investigating these gaps. For the LHC, the implication is that in its second edition it will not be looking for something as much as helping scientists decide where to look to start with.

As Tara Shears, a particle physicist at the University of Liverpool, told Nature, “In the first run we had a very strong theoretical steer to look for the Higgs boson. This time we don’t have any signposts that are quite so clear.”

Higher energy, luminosity

The upgrades to the LHC that would unlock new experimental possibilities were evident in early 2012.

The machine works by using powerful electric currents and magnetic fields to accelerate two trains, or beams, of protons in opposite directions, within a ring 27 km long, to almost the speed of light, and then colliding them head-on. The result is particulate fireworks of such high energy that the rarest, most short-lived particles are brought into existence before they promptly decay into lighter, more common particles. Particle detectors straddling the LHC at four points on the ring record these collisions and their effects for study.

So, to boost its performance, upgrades to the LHC were of two kinds: increasing the collision energy inside the ring and increasing the detectors’ abilities to track more numerous and more powerful collisions.

The collision energy has been nearly doubled in the machine’s second life, from 7-8 TeV to 13-14 TeV. The frequency of collisions has also been doubled, from one set every 50 nanoseconds (billionths of a second) to one every 25 nanoseconds. Steve Myers, CERN’s director for accelerators and technology, had said in December 2012, “More intense beams mean more collisions and a better chance of observing rare phenomena.”
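The arithmetic implied by those bunch-spacing numbers is simple – halving the spacing doubles the maximum crossing rate – and worth making explicit (a quick sanity check of the quoted figures, nothing more):

```python
# Maximum bunch-crossing rate for a given bunch spacing: rate = 1 / spacing.
def crossing_rate_hz(spacing_ns: float) -> float:
    return 1.0 / (spacing_ns * 1e-9)

for spacing_ns in (50, 25):
    print(f"{spacing_ns} ns spacing -> {crossing_rate_hz(spacing_ns):.1e} crossings/s")
# 50 ns -> 2.0e+07 (20 million) per second; 25 ns -> 4.0e+07 (40 million)
```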

The detectors have received new sensors, neutron shields to protect from radiation damage, cooling systems and superconducting cables. An improved fail-safe system has also been installed to forestall accidents like the one in 2008, when failing to cool a magnet led to a shut-down for eight months.

In all, the upgrades cost approximately $149 million, and will increase CERN’s electricity bill by 20% to $65 million. A “massive debugging exercise” was conducted last week to ensure all of it clicked together.

Going ahead, these new specifications will be leveraged to tackle some of the more outstanding issues in fundamental physics.

CERN listed a few – presumably primary – focus areas. They include investigating whether the Higgs boson could betray the existence of undiscovered particles, what particles dark matter could be made of, why the universe today has much more matter than antimatter, and whether gravity is so much weaker than the other forces because it is leaking into other dimensions.

Stride forward in three frontiers

Physicists are also hopeful for the prospects of discovering a class of particles called supersymmetric partners. The theory that predicts their existence is called supersymmetry. It builds on some of the conclusions of the Standard Model, and offers predictions that plug its holes as well with such mathematical elegance that it has many of the world’s leading physicists enamored. These predictions involve the existence of new particles called partners.

In a neat infographic in Nature, Elizabeth Gibney explains that the partner easiest to detect will be the ‘stop squark’, as it is the lightest and can show itself in lower-energy collisions.

In all, the LHC’s new avatar marks a big stride forward not just in the energy frontier but also in the intensity and cosmic frontiers. With its ability to produce and track more collisions per second as well as chart the least explored territories of the ancient cosmos, it’d be foolish to think this gigantic machine’s domain is confined to particle physics and couldn’t extend to fuel cells, medical diagnostics or achieving systems-reliability in IT.

Here’s a fitting video released by CERN to mark this momentous occasion in the history of high-energy physics.

Featured image: A view of the LHC. Credit: CERN

Update: After engineers spotted a short-circuit glitch in a cooled part of the LHC on March 21, its restart was postponed from March 23 by a few weeks. However, CERN has assured that it’s a fully understood problem and that it won’t detract from the experiment’s goals for the year.