The DNA-based computer that can calculate π

I’m not fond of biology. Of late, however, it’s been harder to avoid encountering it because the frontiers of many fields of research are becoming increasingly multidisciplinary. Biological processes are meshing with physics and statistics, and undergoing the kind of epistemic reimagination that geometry experienced in the 19th and 20th centuries. Now, scientists are able to manipulate biology to do wondrous things.

Consider the work of a team from the Dhirubhai Ambani Institute of Information and Communication Technology, Gujarat, India, which has figured out a way to compute the value of π using self-assembling strands of DNA. Their work derives from previous successful attempts to perform simple mathematical calculations by nudging these molecules to bind to each other in specific ways, a technique called tile assembly.

Tile assembly traces back to a tiling problem formulated by the logician and philosopher Hao Wang in 1961. Wang wanted to know whether a set of square tiles – each with four colored edges, and with only edges of the same color allowed to abut – that could cover the plane could always do so in a periodic pattern. The answer, it turned out, was no: some sets of tiles can cover the plane, but only aperiodically.

In a DNA tile assembly model (TAM), each tile represents a section of the DNA molecule, called a monomer. When the abutting sides of adjacent tiles carry the same color, the two monomers attach to each other across those sides with a binding strength corresponding to that color. This way, given a tile to start with – called the seed tile – and a sequence of tiles that can attach next, the DNA monomers can link up to form diverse patterns.

By controlling the sequence of colors and their strengths, scientists can thus use TAM to control the values of variables moving through the resultant grid. Connections of monomers between tiles can be made stronger or weaker, and to different extents, in ways mimicking how the voltages between different electronic components in a computer’s circuit allow it to perform mathematical calculations.
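To make the attachment rule concrete, here is a minimal sketch of an abstract tile assembly model in Python. It is a generic illustration of the mechanism described above, not the Gujarat team’s actual construction: the tiles, glue labels and the ‘temperature’ threshold below are invented for the example.

```python
# A minimal abstract tile assembly model (aTAM) sketch.
# Each tile carries a glue (label, strength) on each side; a tile sticks to a
# growing assembly at an empty site if the glues matching its occupied
# neighbours add up to at least the assembly 'temperature'.
from collections import namedtuple

Glue = namedtuple("Glue", ["label", "strength"])  # a colored edge
Tile = namedtuple("Tile", ["name", "north", "east", "south", "west"])

TEMPERATURE = 2  # minimum total binding strength needed to attach

def can_attach(tile, position, assembly):
    """Check whether `tile` can bind at `position` in the current assembly."""
    x, y = position
    if position in assembly:
        return False
    sides = {  # neighbour position -> (tile's side, neighbour's facing side)
        (x, y + 1): ("north", "south"),
        (x + 1, y): ("east", "west"),
        (x, y - 1): ("south", "north"),
        (x - 1, y): ("west", "east"),
    }
    total = 0
    for pos, (my_side, their_side) in sides.items():
        if pos in assembly:
            mine = getattr(tile, my_side)
            theirs = getattr(assembly[pos], their_side)
            if mine.label == theirs.label:  # colors must match...
                total += mine.strength      # ...and contribute their strength
    return total >= TEMPERATURE

# Toy example: a seed tile and a second tile that binds to its east side.
seed = Tile("seed", Glue("n", 0), Glue("a", 2), Glue("s", 0), Glue("w", 0))
grower = Tile("grower", Glue("n", 0), Glue("a", 2), Glue("s", 0), Glue("a", 2))

assembly = {(0, 0): seed}
print(can_attach(grower, (1, 0), assembly))  # True: the matching 'a' glue has strength 2
```

Starting from the seed and repeatedly attaching whichever tiles can bind is what lets a well-chosen tile set carry values – and, ultimately, arithmetic – across the growing grid.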

So, Shalin Shah, Parth Dave and Manish Gupta from the Institute used four new variations of TAM that they’d developed to calculate the value of π. Each of these variations performs a specific function, much like the logic gates inside an information processor.

  1. The compare tile system decides which of two numbers is greater, or whether they are equal.
  2. The shift tile system shifts the bits of a number one place to the right and pads the leftmost bit with a 0. For example, 11001 becomes 01100.
  3. The subtract and shift tile system subtracts one binary number from another, right-shifts the result by one bit, and pads the leftmost bit with a 0.
  4. The insert bit tile system inserts a bit into a number. (A rough rendering of these four operations in ordinary code follows the list.)
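For a sense of what these four tile systems compute – though not of how the DNA tiles themselves compute it – here is a rough Python rendering of the corresponding bit operations. The function names and the binary-string representation are mine, purely for illustration.

```python
# Software renderings of the operations the four tile systems perform.
# Numbers are binary strings, most significant bit first.

def compare(a: str, b: str) -> str:
    """Decide which of two binary numbers is greater, or whether they're equal."""
    if int(a, 2) > int(b, 2):
        return "a > b"
    if int(a, 2) < int(b, 2):
        return "a < b"
    return "a == b"

def shift(a: str) -> str:
    """Shift the bits one place to the right and pad the leftmost bit with a 0."""
    return "0" + a[:-1]

def subtract_and_shift(a: str, b: str) -> str:
    """Subtract b from a (assuming a >= b), then right-shift and pad with a 0."""
    diff = format(int(a, 2) - int(b, 2), "0{}b".format(max(len(a), len(b))))
    return shift(diff)

def insert_bit(a: str, bit: str, position: int) -> str:
    """Insert a single bit into a number at the given position."""
    return a[:position] + bit + a[position:]

print(shift("11001"))                    # 01100, as in the example above
print(compare("101", "011"))             # a > b
print(subtract_and_shift("101", "011"))  # 101 - 011 = 010, shifted: 001
print(insert_bit("1101", "0", 2))        # 11001
```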

Using a combination of these systems – all with the TAM at their hearts – the trio has been able to compute the value of π as shown below:

The gray tiles are input tiles, green are addition/subtraction tiles, yellow are copy/duplicate tiles, orange tiles are shift tiles, and blue tiles indicate the remainders of the corresponding division process. The calculation is growing upward and toward the right. Image: Computing Real Numbers using DNA Self-Assembly, Shah et al, Laboratory of Natural Information Processing, DAIICT.

You can see that the calculation is an ongoing infinite series – specifically, the Leibniz series, which approximates π/4 as an alternating sum of ever-smaller fractions: 1 − 1/3 + 1/5 − 1/7 + … Because the series is infinite, the precision with which the trio’s calculator can pin down π depends only on how many tiles are available. And because the calculator can compute infinite series, any number or problem that can be reduced to the solution of an infinite series is in principle solvable using this calculator.
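For reference, here is the Leibniz series written out in ordinary code – a sketch of the arithmetic the tile assembly is carrying out, with no claim about how the DNA implementation does it:

```python
# Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
# More terms (like more tiles) means a more precise estimate of pi.

def leibniz_pi(terms: int) -> float:
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)  # alternating, ever-smaller fractions
    return 4 * total

for n in (10, 1_000, 100_000):
    print(n, leibniz_pi(n))
# 10      3.0418396...
# 1000    3.1405926...
# 100000  3.1415826...
```

The series converges slowly – the error after n terms is roughly 1/n – which is why the precision of the DNA calculator scales directly with the number of tiles it can consume.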

This would merely be a curious yet tedious way to calculate if not for its potential to exploit the biological properties of DNA to enhance the calculator’s abilities. Although this hasn’t been elaborately outlined in the trio’s pre-print paper on arXiv, it is plausible that such calculators could be used to guide the development of complex and evermore intricate DNA structures with minimal human intervention, or to fashion molecular logic circuits that steer microscopic robots delivering drugs within our bloodstreams. Studies in the past have already shown that DNA self-assembly is Turing-universal, which means it can in principle perform any computation that a conventional computer can.

The DNA molecule is itself a wondrous device, existing in nature to store genetic data over tens of thousands of years only for a future inheritor to slowly retrieve information essential for its survival. Scientists have found the molecule can hold 5.5 petabits of data per cubic millimeter, without letting any of it become corrupted for 1 million years if stored at -18 degrees Celsius.

A falling want of opportunity for life to grip Titan

There is a new possibility for life on Titan. Scientists affiliated with Cornell University have created a blueprint for a cellular lifeform that wouldn’t need water to survive.

Water on Earth has been the principal ingredient of, as well as the catalyst for, the formation of life. The Cornell scientists believe water could be replaced by methane on Saturn’s moon Titan, where there are seas full of liquid methane.

The scientists’ work essentially lies in finding a suitable alternative to the phospholipid bilayer, the double layer of lipid molecules that constitutes every cell’s membrane on Earth. Because Titan’s atmosphere is rich in nitrogen and methane, their analysis suggested that acrylonitrile, a compound that can form from the two, could do the job: its molecules, when stacked together, tend to assemble into membrane-like structures the researchers call azotosomes.

Acrylonitrile, it turns out, is already present in Titan’s atmosphere. What’s more, the molecule could be reactive even in the moon’s dreadfully cold environment (around -180 degrees Celsius, or -292 degrees Fahrenheit). The next step is to see if cells with azotosome membranes can reproduce and metabolize in the methane- and nitrogen-rich environment. Their findings were published in Science Advances on February 27.

Incidentally, the formation of azotosomes also requires some hydrogen. This is interesting because astrobiologists have recently shown that the surface of liquid methane lakes on Titan could be host to microbes that metabolize acetylene, ethane and some other organic compounds, along with hydrogen.

Astrobiologists from the NASA Ames Research Center, who did this research, presumed that the microbes would need to consume hydrogen to make their metabolic reactions work. And because there is no other known process on Titan that could reduce the concentration of hydrogen in the moon’s lower atmosphere, their calculations gave other astronomers an intriguing way to interpret anomalous deficiencies of hydrogen. Such deficiencies were recorded on Titan in 2010 by NASA’s Cassini space probe.

There is an alternative explanation as well. Hydrogen may also be involved in chemical reactions in the atmosphere spurred by the bombardment of cosmic rays. Only continued observation will tell what is actually eating Titan’s hydrogen.

Yet another possibility for life on the moon was conceived in August 2014, when Dirk Schulze-Makuch, an astrobiologist from Washington State University, reported in Science that methane-digesting bacteria had been found in a lake of asphalt in Trinidad. The water content of the lake was only 13.5%. Schulze-Makuch suggested that even if very little water was present on the moon, it could be enough to support similar bacteria. What he couldn’t account for was the substantially lower temperature at which these reactions would need to occur on, say, Titan.

Slowly, the possibilities have been mounting, and they suggest that life on Titan needn’t be fine-tuned or capitalize on one or two narrow windows of existential opportunity. Instead, there exist a variety of routes through which enterprising molecules could aspire to self-organization and awareness, even in an oxygen-deficient, methane-rich environment.

The neuroscience of how you enter your fantasy-realms

If you grew up reading Harry Potter (or Lord of the Rings, as the case may be), chances are you’d have liked to move to the world of Hogwarts (or Middle Earth), and spent time play-acting scenes in your head as if you were in them. This way of enjoying fiction isn’t uncommon. On the contrary, the potentially intimidating levels of detail that works of fantasy offer often let us move in with the characters we enjoy reading about. As a result, these books have a not inconsiderable influence on our personal development. It isn’t for nothing that story-telling is a large part of most, if not all, cultures.

That being the case, it was only a matter of time before someone took a probe to our brains and tried to understand what really was going on as we read a great book. Those someones are Annabel Nijhof and Roel Willems, both neuroscientists affiliated with Radboud University in the Netherlands. They used functional magnetic resonance imaging, a technique that employs a scanner to identify the brain’s activity by measuring blood flow around it, “to investigate how individuals differently employ neural networks important for understanding others’ beliefs and intentions, and for sensori-motor simulation while listening to excerpts from literary novels”.

If you’re interested in their methods, their paper published in PLOS One on February 11 discusses them in detail. And as much as I’d like to lay them out here, I’m also in a hurry to move on to the findings.

Nijhof and Willems found that there were two major modes in which listeners’ brains reacted to the prompts, summed up as mentalizing and activating. A mentalizing listener focused on the “thoughts and beliefs” depicted in the prompt while an activating listener paid more attention to descriptions of actions and replaying them in his/her head. And while some listeners did both, the scientists found that the majority either predominantly mentalized or predominantly activated.

This study references another from 2012 that describes how the neural system associated with mentalizing kicks in when people are asked to understand motivations, and that associated with activating kicks in when they’re asked to understand actions. So an extrapolation of results between both studies yields a way for neuroscientists to better understand the neurocognitive mechanisms associated with assimilating stories, especially fiction.

At this point, a caveat from the paper is pertinent:

It should be noted that the correlation we observed between Mentalizing and Action networks, only holds for one of the Mentalizing regions, namely the anterior medial prefrontal cortex. It is tempting to conclude that this region plays a privileged role during fiction comprehension, in comparison to the other parts of the mentalizing network

… while of course this isn’t the case, so more investigation – as well as further review of extant literature – is necessary.

The age-range of participants in the Nijhof-Willems study was 18-27 years, with an average age of 22.2 years. Consequent prompt: a similar study but with children as subjects could be useful in determining how a younger brain assimilates stories, and checking if there exist any predilections toward mentalizing or activating – or both or altogether something else – which then change as the kids grow up. (I must add that such a study would be especially useful to me because I recently joined a start-up that produces supplementary science-learning content for 10-15-year-olds in India.)

So… are you a mentalizing reader or an activating reader?

A conference's peer-review was found to be sort of random, but whose fault is it?

It’s not a good time for peer-review. Sure, if you’ve been a regular reader of Retraction Watch, it’s never been a good time for peer-review. But aside from that, the process has increasingly been taking the blame for failing to stem the publication of results that – after publication – turn out to be the product of bad research practices.

Part of the problem may be that reviewers are letting ‘bad’ papers through, but the bigger issue is that, even though the system itself has been shown to have many flaws – personal biases not excluded – journals rely on the reviewers and naught else to stamp accepted papers with their approval. And some of those stamps, especially from Nature or Science, are weighty indeed. Now add to this muddle the NIPS wrangle, in which researchers may have found that some peer-reviews are just arbitrary.

NIPS stands for the Neural Information Processing Systems (Foundation), whose annual conference was held in the second week of December 2014 in Montreal. It’s considered one of the main conferences in the field of machine-learning. For that edition, the two program chairs – Corinna Cortes and Neil Lawrence – performed an experiment to judge how arbitrary the conference’s peer-review could get.

Their modus operandi was simple. All the papers submitted to the conference were peer-reviewed before they were accepted. Cortes and Lawrence routed a tenth of all submitted papers through a second, independent peer-review as well, and observed which papers were accepted or rejected in that round (according to Eric Price, NIPS ultimately accepted a paper if either group of reviewers accepted it). Their findings were distressing.

About 57%* of all papers accepted in the first review were rejected during the second review. To be sure, each stage of the review was presumably equally competent – it wasn’t as if the second stage was more stringent than the first. That said, 57% is a very big number. More than five times out of 10, peer-reviewers disagreed on what could be published. In other words, in an alternate universe, the same conference but with only the second group of reviewers in place was generating different knowledge.

Lawrence was also able to eliminate a possibly redeeming confounding factor, which he described in a Facebook discussion on this experiment:

… we had a look through the split decisions and didn’t find an example where the reject decision had found a ‘critical error’ that was missed by the accept. It seems that there is quite a lot of subjectivity in these things, which I suppose isn’t that surprising.

It doesn’t bode well that the NIPS conference is held in some esteem among its attendees for having one of the better reviewing processes. Including the 90% of papers that did not go through a second peer-review, the predetermined acceptance rate was 22%, i.e. reviewers were tasked with accepting 22 papers out of every 100 submitted. Put another way, the reviewers were rejecting 78%. And this sheds light on a more troubling way to read their decisions.

If the second group had been accepting and rejecting papers at random, it would have rejected papers the first group accepted at the overall rate of 78%. If reviewing were perfectly consistent, that figure would have been 0%. The observed 57% sits closer to the random baseline of 78% than to 0% – which implies that a good deal of the disagreement was effectively random. Hmm.
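A quick simulation makes that comparison concrete. The sketch below assumes – purely for illustration – that each committee accepts papers independently and at random at the 22% target rate, and then measures how often a paper accepted by the first committee is rejected by the second. The answer hovers around 78%, the fully random baseline against which the observed 57% should be read.

```python
# Toy model: two committees each accept 22% of submissions, independently and
# at random. What fraction of papers accepted by committee 1 does committee 2
# then reject? Pure chance gives ~78%; perfect consistency would give 0%.
import random

random.seed(1)
ACCEPT_RATE = 0.22
PAPERS = 100_000

accepted_by_1 = 0
also_rejected_by_2 = 0
for _ in range(PAPERS):
    a1 = random.random() < ACCEPT_RATE
    a2 = random.random() < ACCEPT_RATE
    if a1:
        accepted_by_1 += 1
        if not a2:
            also_rejected_by_2 += 1

print(also_rejected_by_2 / accepted_by_1)  # ~0.78, the random baseline
# NIPS observed roughly 0.57 -- much closer to 0.78 than to 0.
```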

While this is definitely cause for concern, forging ahead on the basis of arbitrariness – which machine-learning theorist John Langford defines as the probability that the second group rejects a paper that the first group has accepted – wouldn’t be the right way to go about it. This is similar to the case with A/B-testing: we have a test whose outcome can be used to inform our consequent actions, but using the test itself as a basis for the solution wouldn’t be right. For example, the arbitrariness can be reduced to 0% simply by having both groups accept every nth paper – a meaningless exercise.

Is our goal to reduce the arbitrariness to 0% at all? You’d say ‘Yes’, but consider the volume of papers being submitted to important conferences like NIPS and the number of reviewer-hours available to evaluate them. In the history of conferences, surely some judgments must have been arbitrary simply for the reviewer to fulfil his or her responsibilities to an employer. So you see the bigger issue: the fault lies not so much with individual reviewers as with the so-called system.

Langford’s piece raises a similarly confounding topic:

Perhaps this means that NIPS is a very broad conference with substantial disagreement by reviewers (and attendees) about what is important? Maybe. This even seems plausible to me, given anecdotal personal experience. Perhaps small highly-focused conferences have a smaller arbitrariness?

Problems like these are necessarily difficult to solve because of the number of players involved. In fact, it wouldn’t be entirely surprising if we found that no individual or institution was at fault except in the way they all interact with each other – and not just in fields like machine-learning. A study conducted in January 2015 found that minor biases during peer-review could result in massive changes in funding outcomes when the acceptance rate is low – such as with the annual awarding of grants by the National Institutes of Health. Even Nature is wary about the ability of its double-blind peer-review to solve the problems ailing normal ‘peer-review’.

For the near future, the takeaway is likely to be that ambitious young scientists will have to remember, first, that acceptance – just as much as rejection – can be arbitrary and, second, that the impact factor isn’t everything. On the other hand, it doesn’t seem possible in the interim to keep from lowering our expectations of peer-reviewing itself.

*The number of papers routed to the second group was 166. The overall disagreement rate was 26%, so the two groups would have disagreed on the fates of about 43 papers. And because each group was tasked with accepting 22% – which is 37 or 38 papers – group 1 could be said to have accepted about 21 papers that group 2 rejected, and group 2 about 22 that group 1 rejected. Both 21/37 (56.8%) and 22/38 (57.9%) are roughly 57%.
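For the curious, the footnote’s back-of-the-envelope arithmetic runs like this in code (the 166, 26% and 22% figures come from the accounts of the experiment linked above; everything else follows from them):

```python
# Reproducing the footnote's arithmetic.
papers = 166                              # papers reviewed by both groups
disagreements = round(0.26 * papers)      # 26% disagreement rate -> 43 papers
accepts_per_group = 0.22 * papers         # 22% target -> 36.5, i.e. 37 or 38

# The ~43 disagreements split roughly evenly between the two directions:
acc1_rej2 = disagreements // 2            # 21 accepted by group 1, rejected by 2
acc2_rej1 = disagreements - acc1_rej2     # 22 accepted by group 2, rejected by 1

print(disagreements, accepts_per_group)   # 43, 36.5
print(acc1_rej2 / 37, acc2_rej1 / 38)     # ~0.568 and ~0.579 -> about 57%
```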

Hat-tip: Akshat Rathi.

The case for sustainability just got easier—nature reserves are much more profitable than previously thought

“Can you have your cake and eat it too?” is a question environmentalists would like to answer positively. Now they can, at least in the case of the value generated by nature reserves.

Tourist visits to protected areas around the world are worth some $600 billion a year in direct spending, with roughly another $250 billion a year accruing to visitors as consumer surplus. The numbers were published in PLOS Biology by an international collaboration, which included members from the United Nations Environment Program.

The idea was to put a value on the well-established “nature-based recreation and tourism” industry. And that value appears to be big enough—at least much bigger than the $10 billion thought to be spent on safeguarding protected areas. Now the demands for governments to step up investment in conservation efforts can be said to be evidence-based.

The scientists behind the study seem to have gone to great lengths to arrive at their figures, which have been difficult to acquire with any reliability because protected areas are scattered around the world and information on visit rates has been limited. To make their task easier, they leveraged countries’ commitment to the Convention on Biological Diversity’s Aichi Biodiversity Targets, an international agreement, and arrived at a statistical model to estimate the economic significance of protected areas depending on the socioeconomic conditions they operated within.

In all, they were able to account for 556 protected areas (all terrestrial and above 10 hectares in size) in 51 countries, with 2,663 annual visitor records from 1998 to 2007. The median visit rate was 20,333 visits per year.

They left out some 40,000 areas because they were too small for their analysis to be applicable. Also, as with all statistical estimates, there is uncertainty in the results they have produced. The authors of the paper write: “Uncertainty in our modeled visit rates and the wide variation in published estimates of expenditure and consumer surplus mean that they could be out by a factor of two or more.”

Estimated number of annual visits to protected areas by country. Data: Balmford A, Green JMH, Anderson M, Beresford J, Huang C, Naidoo R, et al. (2015) Walk on the Wild Side: Estimating the Global Magnitude of Visits to Protected Areas. PLoS Biol 13(2): e1002074. doi:10.1371/journal.pbio.1002074

That is, the true figures could be a few times higher or lower than the estimates. Reassuringly, they add, “The comparison with calculations that visits to North American Protected Areas alone have an economic impact of $350–550 billion per year and that direct expenditure on all travel and tourism worldwide runs at $2,000 billion per year suggests our figures are of the correct order of magnitude, and that the value of Protected Area visitation runs into hundreds of billions of dollars annually.”

During their analyses, the scientists estimated that the world’s terrestrial protected areas received about 8 billion visits annually, of which 3.8 billion were made in Europe and 3.3 billion in North America.

The rapid growth of the global population has put enormous pressure on terrestrial resources, best illustrated by the swift shrinking of the Amazon rainforest.

Relevance to India

A value of $250 billion per year means that countries could make a good economic case to stop encroaching on their protected areas, which tend to be early victims of their growth ambitions.

The argument pertains especially to India’s Ministry of Environment & Forests, which since June 2014 has been signing off on approvals for coal-fired power plants and pesticide factories at the rate of “15 to 30 minutes per file”, and has made its decisions on taking tribal villagers’ land from them practically non-consultative.

Such lands are protected areas, too, and those who manage them could benefit from the idea that they can contribute economically just as much as they do ecologically. And with the social activist Anna Hazare having announced a nationwide agitation over the controversial Land Acquisition ordinance, Prime Minister Narendra Modi would do well to entertain this idea.

Average annual tourist visits to protected areas in India, 1998-2007. Data: Balmford A, Green JMH, Anderson M, Beresford J, Huang C, Naidoo R, et al. (2015) Walk on the Wild Side: Estimating the Global Magnitude of Visits to Protected Areas. PLoS Biol 13(2): e1002074. doi:10.1371/journal.pbio.1002074

The global warming hiatus could last another five years. Its aftermath is the real problem.

Whether you’ve been fending off climate-change skeptics on Twitter or have been looking for reasons to become a climate-change skeptic yourself, you must’ve heard about the hiatus. It’s the name given to a relatively drastic drop, beginning in the late 1990s, in the rate at which the world’s surface temperatures have been increasing, compared with the rate that had held since the early 1900s. Even if different measurements have revealed different drops in the rate, there’s no doubt among those who believe in anthropogenic global-warming that it’s happening.

According to one account, between 1998 and 2012 the global surface temperature rose by 0.05 kelvin per decade as opposed to 0.12 kelvin per decade in the decades preceding it, going back to the start of the previous century. To be sure, the Earth has not stopped getting warmer, but the rate at which it was doing so got turned down a notch for reasons that weren’t immediately understood. And even as climate-scientists have been taking their readings, debate has surged about what the hiatus portends for the future of climate-change.

Now, a new study in Nature Climate Change has taken a shot at settling just this debate. According to it, the chance that a global-warming hiatus will persist for 10 consecutive years is about 10%, while the chance that it will persist for 20 consecutive years is less than 1%. Finally, it says, if a warming hiatus has already lasted 15 years, the chance that it will last five more years could be as high as 25%. That means the current letup in warming is somewhat likely to go on till 2020.

The study, pithily titled ‘Quantifying the likelihood of a continued hiatus in global warming’, was published on February 23. It focuses on the effects of internal variability, which – according to the IPCC – is the variability due to internal processes in the climate system (such as the El Niño Southern Oscillation), excluding external influences (such as volcanic eruptions and sulphate aerosol emissions).
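For a flavour of how such likelihoods can be estimated – and only a flavour, since the study works with ensembles of climate-model simulations, not the toy statistics below – here is a sketch that overlays made-up, persistent (AR(1)-style) internal variability on a steady warming trend and counts how often stretches of a given length show no net warming. Every parameter in it (trend, noise amplitude, persistence) is an assumption chosen for illustration, so the printed percentages will not match the paper’s.

```python
# Toy Monte Carlo: a steady warming trend plus persistent internal variability,
# used to count how often N-year stretches show zero or negative warming.
# All parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
TREND = 0.012   # assumed forced warming, K per year
SIGMA = 0.10    # assumed amplitude of year-to-year internal variability, K
PHI = 0.7       # assumed persistence of that variability (AR(1) coefficient)

def simulate(years=200):
    """One synthetic surface-temperature series: trend + AR(1) noise."""
    noise = np.zeros(years)
    for t in range(1, years):
        noise[t] = PHI * noise[t - 1] + rng.normal(0, SIGMA)
    return TREND * np.arange(years) + noise

def hiatus_fraction(temps, length):
    """Fraction of `length`-year windows whose linear trend is non-positive."""
    x = np.arange(length)
    slopes = [np.polyfit(x, temps[s:s + length], 1)[0]
              for s in range(len(temps) - length)]
    return np.mean(np.array(slopes) <= 0)

runs = [simulate() for _ in range(200)]
for length in (10, 15, 20):
    p = np.mean([hiatus_fraction(run, length) for run in runs])
    print(f"chance a given {length}-year stretch shows no warming: {p:.1%}")
```

The study’s own probabilities come from doing something conceptually similar with the internal variability simulated by full climate models, which is why they carry far more weight than a toy like this.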

At the least, the statistically deduced projections empower climate scientists by giving them a vantage point from which to address the slowdown in warming rates since the start of this century. But more significantly, the numbers and methods give observers – such as those in the government and policy-makers – a perspective with which to address a seeming anomaly that has come at a crucial time for tackling anthropogenic global warming.

Global mean land-ocean temperature index from January 1970 through January 2014. The colored line is the monthly mean and the black line is the five-year running mean. The global warming hiatus referenced in literature commonly starts circa 2000.
Image: DHeyward/CC-BY-SA 3.0

Its timing (as if it could be timed) was crucial because it coincided with the decade in which most of the faster-growing economies on the planet were circling each other in the negotiation bullring, wanting to be perceived as committed to protecting the environment while remaining reluctant to back down on growth-rate reforms. The slowdown was a not-insurmountable-yet-still-there stumbling block to effectively mobilizing public debate on the issue. Needless to say, it also made for fodder for the deniers.

Wryly enough, the Nature Climate Change study shows that the hiatus is not an anomaly that’s about to let anybody off the hook but a phenomenon actually consistent with what we know about internal climate variability, and that such an event, though rare, could last two full decades without defying our knowledge. In fact, throw in coincident external variability and we have the additional possibility of even longer and stronger hiatus periods in reality.

There is yet more cause for alarm in this assertion, because it suggests that some natural entity – in this case the sub-surface Pacific Ocean – is absorbing heat and causing the hiatus. Once a threshold is reached, that accumulated heat will be released in a sustained burst lasting about five years. The study’s authors term this the period of ‘accelerated warming’, when the oceans release heat at a rate of about 0.2 W/m² in “a pattern … that approximates a mirror image of surface temperature trends during hiatus periods”.

The analysis was based on simulations from the fifth phase of the Coupled Model Intercomparison Project (CMIP5), a coordinated set of climate-model experiments that includes carbon-cycle processes and external forcings. These simulations helped the researchers highlight a worrying discrepancy from previous predictions for the Arctic region:

Hiatus decades associated with internal variability in models generally exhibit cooling over the Arctic whereas recent observations indicate a strong warming. Our results indicate that, following the termination of the current global warming hiatus, internal climate variability may act to intensify rates of Arctic warming leading to increased climate stress on a region that is already particularly vulnerable to climate change.

The Arctic isn’t the only region that’s in trouble. The authors also predict that the period of accelerated warming will be “associated with warming across South America, Australia, Africa and Southeast Asia”. This doesn’t bode well: developing nations have been found to be especially susceptible to the adverse effects of anthropogenic warming because of their dependence on agriculture and their under-preparedness for catastrophic weather events.

Even if climate talks are beginning to focus on goals for the post-2020 period, this predicted asymmetry of impact won’t be at the top of negotiators’ minds at the 21st annual Conference of the Parties to the UNFCCC in Paris on November 30. However, should it transpire, the slowdown-speedup tendencies of climate variability could further muddle negotiations already fraught with shifting alliances and general bullheadedness.

Curious Bends – big tobacco, internet blindness, spoilt dogs and more

  1. Despite the deadly floods in Uttarakhand in 2013, the government ignores grave environmental reports on the new dams to be built in the state

“The Supreme Court asked the Union environment ministry to review six specific hydroelectric projects on the upper Ganga basin in Uttarakhand. On Wednesday, the ministry informed the apex court that its expert committee had checked and found the six had almost all the requisite and legitimate clearances. But, the ministry did not tell the court the experts, in the report to the ministry, had also warned these dams could have a huge impact on the people, ecology and safety of the region, and should not be permitted at all on the basis of old clearances.” (6 min read, businessstandard.com)

2. At the heart of the global-warming debate is the issue of energy poverty, and we don’t really have a plan to solve the problem

“Each year, human civilization consumes some 14 terawatts of power, mostly provided by burning the fossilized sunshine known as coal, oil and natural gas. That’s 2,000 watts for every man, woman and child on the planet. Of course, power isn’t exactly distributed that way. In fact, roughly two billion people lack reliable access to modern energy—whether fossil fuels or electricity—and largely rely on burning charcoal, dung or wood for light, heat and cooking.” (4 min read, scientificamerican.com)

3. Millions of Facebook users have no idea they’re using the internet

“Indonesians surveyed by Galpaya told her that they didn’t use the internet. But in focus groups, they would talk enthusiastically about how much time they spent on Facebook. Galpaya, a researcher (and now CEO) with LIRNEasia, a think tank, called Rohan Samarajiva, her boss at the time, to tell him what she had discovered. “It seemed that in their minds, the Internet did not exist; only Facebook,” he concluded.” (8 min read, qz.com)

+ The author of the piece, Leo Mirani, is a London-based reporter for Quartz.

4. The lengths to which big tobacco industries will go to keep their markets alive is truly astounding

“Countries have responded to Big Tobacco’s unorthodox marketing with laws that allow government to place grotesque images of smoker’s lung and blackened teeth on cigarette packaging, but even those measures have resulted in threats of billion-dollar lawsuits from the tobacco giants in international court. One such battle is being waged in Togo, where Philip Morris International, a company with annual earnings of $80 billion, is threatening a nation with a GDP of $4.3 billion over their plans to add the harsh imagery to cigarette boxes, since much of the population is illiterate and therefore can’t read the warning labels.” (18 min video, John Oliver’s Last Week Tonight via youtube.com)

5. Hundreds of people have caught hellish bacterial infections and turned to Eastern Europe for a century-old viral therapy

“A few weeks later, the Georgian doctors called Rose with good news: They would be able to design a concoction of phages to treat Rachel’s infections. After convincing Rachel’s doctor to write a prescription for the viruses (so they could cross the U.S. border), Rose paid the Georgian clinic $800 for a three-month supply. She was surprised that phages were so inexpensive; in contrast, her insurance company was forking over roughly $14,000 a month for Rachel’s antibiotics.” (14 min read, buzzfeed.com)

Chart of the week

“Deshpande takes her dog, who turned six in February, for a walk three times every day. When summers are at its peak, he is made to run on the treadmill inside the house for about half-hour. Zuzu’s brown and white hair is brushed once every month, he goes for a shower twice a month—sometimes at home, or at a dog spa—and even travels with the family to the hills every year. And like any other Saint Bernard, he has a large appetite, eating 20 kilograms of dog food every month. The family ends up spending Rs5,000 ($80)-7,000 ($112) every month on Zuzu, about double the amount they spend on Filu, a Cocker Spaniel.” (4 min read, qz.com)


Experiencing the modern city

Of all the things that have had a persistent tendency to surprise observers, cities have been among the most prolific. Then again, they’d better be for all their social and economic complexity, for their capacity to be the seed of so many perspectives on human development. We shape our cities, which then shape us, and we shape our cities again in return. Even social interactions that we have on the streets, even the decisions we make about whether or not we feel like a walk in the city have to do with how we let our cities communicate with us*.

This is the idea that The Human Scale, a documentary by the Danish filmmaker Andreas Dalsgaard, explores – mostly through a comparative analysis of architectural narratives at play in Chongqing, Copenhagen, New York, Siena and Dhaka, together with the work of the architect Jan Gehl and his colleagues. Its storytelling is patient and sympathetic, and does a good job of providing curated insights into how cities are failing humans today, and how we’re failing the solutions we’re conceiving in response. While it elucidates well the social character that future growth reforms must necessarily imbibe, it also refuses to accommodate the necessity of industrial growth.

What follows are a few thoughts based on notes I managed to take during the screening.

*

An immersive experience

I watched it at Nextbangalore Gatishil, the name given to a previously open lot on Rhenius Street then repurposed to host small meetings centered on urban studies. As the Nextbangalore website puts it,

The Nextbangalore Gatishil Space is an urban intervention on an unused space in Shantnagar. During nearly three weeks we provide a space to share your visions for Bangalore, to discuss your ideas, and to improve them in workshops, events and meetings. With an additional toolset, we want to explose [sic] a citizens vision for Bangalore.

Jute mats were laid on bare ground to make for seating space. A bunch of short tables occupied the center. The setup was rounded off by makeshift walls made of a plain-woven fabric, stretched on bamboo scaffolding, on which testimonials to Nextbangalore’s work by past attendees were printed. A cloth roof was suspended more than 15 feet high. Even with a seven-foot-high yellow wall facing the road, the Gatishil was only barely set apart from its surroundings.

What this has to do with The Human Scale is that the sound of traffic was a constant background, and made for an immersive watching experience. When Dalsgaard traveled to Chongqing to have Jan Gehl talk about developing countries’ tendency to mimic the West, he pointed his camera at the city’s domineering skyline and traffic-choked highways. Looking up from within the Gatishil, you could see the three flanking apartment buildings, one to a side, and hear the cars passing outside. There were a lot of mosquitoes that hinted at a stagnant pool of water in the vicinity. No stars were visible through a big gap in the roof.

The result was that you didn’t have to go to Chongqing to understand what Gehl was talking about. It was happening around you. Buildings were getting taller for lack of space, making it harder for the people on the highest floors to spontaneously decide to go for a walk outside. Roads were being built to move cars, not pedestrians, with narrow sidewalks, wide lanes and opulent bends demanding longer travel-times for shorter distances. After you finally leave work, you reach home after dark and the kids are already late for bed. Live life like this for years on end and you – the city-dweller – learn not to blame the city even as the city-planners get tunnel-vision and forget how walking is different from driving.

*

Data doesn’t mean quantitative

One of the problems addressed in The Human Scale is our reluctance to develop new kinds of solutions for evolving problems. David Sim, of Gehl Architects, suggests at one point that it’s not about having one vision or a master-plan but about evolving a solution and letting people experience the changes as they’re implemented in stages.

A notable aspect of this philosophy is surveying: observing whatever it is that people are doing in different places and then installing the settings that will encourage them to do more of what they already do a lot of. As another architect, Lars Gemzøe, put it: if you build more roads, there will be more cars; if you build more parks, there will be more people on picnics. And you build more parks if the people want more parks.

Gehl’s approach testifies to the importance of data-taking when architects want to translate populism into the design of the perfect ‘social city’. In the beginning of the documentary, he decries the modernism of Le Corbusier and the contrived aspirations it seeded: encouraging designs to be machine-like, using materials for their appearance rather than their mechanical properties, eliminating lavishness in appearance, and almost always requiring a rectilinear arrangement of surfaces.

Instead, he calls for retaining only one of its tenets – ‘form follows function’ – and retooling the rest to cater not to aesthetic appeal but to social necessities. Two examples illustrate this. The first is deciding where to place a sunshade so that it becomes a small meeting spot on lazy Sunday afternoons, or building a balcony at just the right place so that passersby looking for somewhere to sit can find it. The second is building longitudinal highways in oblong cities while also paving latitudinal walkways that crisscross the city’s shorter breadths.

Corbusier – and others like him – heralded a school of design that did not account for the people who would use the building. In effect, its practice was personal to the architect: it celebrated him and his work, not the consequences of his work, at a time when technological changes such as mass-manufacturing had the personal under threat. The populism Gehl advocates, on the other hand, is personal to the people who will use the space – it aims at honing social interactions – and banks on the less subjective character of surveying and statistical analysis to keep the architect’s own aspirations out of the design process.

*

The question of modernity

Dhaka, the capital of Bangladesh, was the only ‘developing city’ addressed in The Human Scale. And because it is a developing city, its government’s plan for it was obvious: development. But the documentary focuses on how, in pursuing the West as a model for development, the city’s planners were also inadvertently pursuing the West as a model for flaws in urban planning – with one important difference. The Occident had urbanized and developed simultaneously; in the Subcontinent, urbanization was more like a rash on an underdeveloped landscape. In this context, the Bangladeshi activist Ruhan Shama asks: “What does it mean to be modern?” The question is left unanswered, sadly.

Anyway, David Sim calls this imitative approach the ‘helicopter perspective’ – building things because we can, without knowing what we really want. The result has been one of large-scale irony. According to Gehl and his supporters, today’s cities have coerced their inhabitants into a life of social austerity, driving even neighbors away from each other. But the cities themselves and their corresponding metropolitan areas have started to move into each other’s spaces, not encroaching as much as overlapping. The Northeast Megalopolis of the United States is a great example, with Boston, New York City, Philadelphia and Washington already starting to merge; Japan’s Taiheiyō Belt is another. If anything, such dualisms (notably in the guise of Zipf’s law) have been the character of urban modernism.

 

*Much like Ernst Mach’s once-provocative idea that local inertial frames are influenced by the large-scale distribution of matter in the universe.

Two years, one opinion

Re Chinmayi Arun’s Comment in The Hindu on February 16, ‘Using law to bully comedians‘ – the central argument of the piece has become such an overused trope. She invokes the heckler’s veto to defend the AIB roast against state bullying, but there’s nothing new as such in the piece apart from the application of this recently rediscovered catchphrase to a problem that’s been around for a while. Here’s an excerpt from the piece:

The Indian government already has a questionable track record in the context of blocking online content. The system followed to block content under the IT Act is opaque — it neither notifies speakers and readers that content has been blocked, nor permits intermediaries to disclose what content the government has asked them to block. If speakers and readers have no way of finding out that the government has ordered the blocking of particular speech, they will not be able to challenge the government’s decision to censor before the judiciary. This means that the judiciary will not be able to check whether the government is using its power to block online content consistently with the Constitution. This lack of accountability leaves the system open to government misuse to block politically threatening speech.

This was the argument in early 2013. This appears to be the argument in early 2015. Has nothing changed? Have we accepted defeat at the hands of the heckler? It seems we’re so sure of everything about these kinds of issues that no one is able to take the conversation forward. To be sure, Arun’s arguments are not factually incorrect, but her piece’s crux can be summed up in a tweet. And I want to give experts like her the benefit of the doubt and say there are some nuances I’m missing. My question is: what’s next? Why aren’t we talking about what could come after? It has always been easier to talk about things that are broken – I’ve been guilty of that, too – but it is disappointing that a solution hasn’t been forthcoming from anybody willing to be vocal.

Further, through all of this, there is also an onus on the publisher, as a willing constructor of public opinion, to present developments so that they can be pieced together and assimilated, instead of as items that pop up once in a while, as if pertaining to disparate incidents and not to one common kind of abuse. Repackaging the content this way will drive the similarity home better.

Note: The hyperlinked article for the line “This was the argument in early 2013” has been changed from one of my pieces to one of Chinmayi Arun’s earlier pieces. The text has also been edited for clarity. March 22, 2015

Why we appreciate art

I met a friend in Bangalore last weekend, after a couple of years, and he told me an interesting fact about how one of my favorite authors, Joseph Heller, became a bestseller. He said that Robert Gottlieb, Heller’s editor at Simon and Schuster, took out a five-column, full-length ad in The New York Times around the release of Catch-22, Heller’s first and most famous novel.

I can’t find the image on the web (it’s probably behind nytimes.com’s archives paywall), but I did find a bunch of other ads that ran in newspapers in 1961 – the year of the book’s release – with a prominent line going “What’s the catch?”

Today, Catch-22 is widely regarded as a bestseller and one of the best wartime books of all time. The question is whether it would have been recognized by such a wide audience at all if not for the surreal and, as the writer Christine Bold describes it, “giddy” promotion campaign:

On October 11, 1961, rising stars of Madison Avenue launched Catch-22 with a slick ad campaign (“What’s the Catch?”) splashed across an unprecedented five columns in the New York Times. Joseph Heller and his wife Shirley did their part by dashing around Manhattan bookshops, surreptitiously switching displays so that copies of his novel obscured betterselling titles. Some of the giddiness of the moment is captured in the handsome fiftieth-anniversary edition, which reprints the ads devised by Simon and Schuster’s Nina Bourne and Robert Gottlieb – later famous as, respectively, advertising director at Knopf and Editor of the New Yorker. The laudatory reviews likened the novel to a collaboration between Lewis Carroll and Hieronymus Bosch, combining the genius of Dante, Kafka, and Abbott and Costello. Harper Lee said it was the only war novel that made sense to her. Philip Toynbee declared it “the greatest satirical work in English since Erewhon”.

In 1998, the noted critic Melvyn Gussow highlighted an alternative scenario, one that had actually played out in 1950, when another book about the ruthlessness of wartime psychology received a positive review from The New York Times Book Review but then faded into obscurity (while Heller’s work received a negative review and went on to rock the charts):

When Louis Falstein’s “Face of a Hero” was published in 1950, Herbert F. West reviewed it favorably in The New York Times Book Review, calling it “the most mature novel about the Air Force that has yet appeared. . . . a book that is both exciting and important.” Still, the book and its author faded into obscurity.

When Joseph Heller’s “Catch-22” was published 11 years later, Richard G. Stern gave it a negative review in the Times Book Review. He said that it “gasps for want of craft and sensibility” and called it “an emotional hodgepodge.” Despite that indictment, “Catch-22” eventually became a phenomenal success — a best seller, a film and the cornerstone of a major literary career.

Now, in a strange twist, the two books have come together, and their meeting has led to a provocative debate. In a recent letter to The Times of London, Lewis Pollock, a London bibliophile, wondered if anyone could “account for the amazing similarity of characters, personality traits, eccentricities, physical descriptions, personnel injuries and incidents” in the two books.

Heller denied the allegation that he’d springboarded off of Falstein’s book, but that’s not the point. The point is that the way Catch-22 was marketed makes it impossible for us to know whether it would have garnered worldwide notice (and notoriety) had it not been for the ad campaign. There are two ways to look at this.

  1. Did the world read Catch-22 only because a notable editor thought the book was so good that it deserved almost a full-page ad in The New York Times?
  2. How many books like Face of a Hero have slipped under the radar for want of an indulgent publicity crusade?

And both ways betoken an introspection about how we find our books – rather, our art – and whether it is something about the art itself that draws us to it, or the ostensibly psychological marketing efforts that push a form of appreciation of art that someone else would like us to practice.