
HAL 9000. Credit: OpenClipart-Vectors/pixabay

The tragic hero of ‘2001: A Space Odyssey’

This is something I wrote for April 10 but forgot to schedule for publication. Publishing it now…

Since news of the Cambridge Analytica scandal broke last month, many of us have expressed apprehension – often on Facebook itself – that the social networking platform has transformed since its juvenile beginnings into an ugly monster.

Such moral panic is flawed and we ought to know that by now. After all, it's been 50 years since 2001: A Space Odyssey was released, and 200 since Frankenstein was published – both cultural assets that have withstood the proverbial test of time only because they managed to strike some deep, mostly unknown chord about the human condition, a note that continues to resonate with the passions of a world that likes to believe it has disrupted the course of history itself.

Gary Greenberg, a mental health professional and author, recently wrote that the similarities between Victor Frankenstein's monster and Facebook were unmistakable except on one count: the absence of a conscience was a bug in the monster, and remains a feature in Facebook. As a result, he wrote, "an invention whose genius lies in its programmed inability to sort the true from the false, opinion from fact, evil from good … is bound to be a remorseless, lumbering beast, one that does nothing other than … aggregate and distribute, and then to stand back and collect the fees."

However, it is 2001's HAL 9000 that continues to be an allegory of choice in many ways, not least because it's an artificial intelligence the likes of which we're yet to confront in 2018 but have learnt to constantly anticipate. In the film, HAL serves as the onboard computer for an interplanetary spaceship carrying a crew of astronauts to a point near Jupiter, where a mysterious black monolith of alien origin has been spotted. Only HAL knows the real nature of the mission, which in Kafkaesque fashion is never revealed.

Within the logic-rules-all-until-it-doesn’t narrative canon that science fiction writers have abused for decades, HAL is not remarkable. But take him out into space, make sure he knows more than the humans he’s guiding and give him the ability to physically interfere in people’s lives – and you have not a villain waylaid by complicated Boolean algebra but a reflection of human hubris.

2001 was the cosmic extrapolation of Kubrick's previous production, the madcap romp Dr Strangelove. While the two films differ significantly in the levels of moroseness on display as humankind confronts a threat to its existence, they're both meditations on how humanity often leads itself towards disaster while believing it's fixing itself and the world. In fact, in both films, the threat was weapons of mass destruction (WMDs). Kubrick intended for the Star Child in 2001's closing scenes to unleash nuclear holocaust on Earth – but he changed his mind later and chose to keep the ending open.

This is where HAL has been able to step in, in our public consciousness, as a caution against our over-optimism towards artificial intelligence and reminding us that WMDs can take different forms. Using the tools and methods of ‘Big Data’ and machine learning, machines have defeated human players at chess and go, solved problems in computer science and helped diagnose some diseases better. There is a long way to go for HAL-like artificial general intelligence, assuming that is even possible.

But in the meantime, we come across examples every week that these machines are nothing like what popular science fiction has taught us to expect. We have found that their algorithms often inherit the biases of their makers, and that their makers often don’t realise this until the issue is called out – or they do but slip it in anyway.

According to (the modified) Tesler's theorem, "AI is whatever hasn't been done yet". When overlaid with optimism of the Silicon Valley variety, AI in our imagination suddenly becomes able to do what we have never been able to do ourselves, even as we assume humans will remain in control. We forget that for AI to be truly AI, its intelligence should be indistinguishable from a human's – a.k.a. the Turing test. In that situation, why do we expect AI to behave differently than we do?

We shouldn't, and this is what HAL teaches us. His iconic descent into madness in 2001 reminds us that AI can go wonderfully right but is likelier to go wonderfully wrong, if only because of outcomes that we are not, and have never been, anticipating as a species. In fact, it has been argued that HAL never went mad but only appeared to because of the untenability of human expectations – that 2001 was the story of his tragedy.

This is also what makes 2001 all the more memorable: its refusal to abandon the human perspective – noted for its amusing tendency to be tripped up by human will and agency – even as Kubrick and Arthur C. Clarke looked towards the stars for humankind’s salvation.

In the film's opening scenes, a bunch of apes briefly interacts with a monolith just like the one near Jupiter and quickly develops the ability to use commonplace objects as tools and weapons. The rest is history: the story jumps four million years ahead, and then 18 months more. As the Tool song goes, "Silly monkeys, give them thumbs, they make a club and beat their brother down."

In much the same way, HAL recalls the origins of mainstream AI research as it happened in the late 1950s at the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts. At the time, the linguist and not-yet-activist Noam Chomsky had reimagined the inner workings of the human brain as those of a computer (specifically, as a "Language Acquisition Device"). According to anthropologist Chris Knight, this 'act' inspired cognitive scientist Marvin Minsky to wonder if the mind, in the form of software, could be separated from the body, the hardware.

Minsky would later say, “The most important thing about each person is the data, and the programs in the data that are in the brain”. This is chillingly evocative of what Facebook has achieved in 2018: to paraphrase Greenberg, it has enabled data-driven politics by digitising and monetising “a trove of intimate detail about billions of people”.

Minsky founded the AI Lab at MIT in 1959. Less than a decade later, he joined the production team of 2001 as a consultant to design and execute the character called HAL. As much as we’re fond of celebrating the prophetic power of 2001, perhaps the film was able to herald the 21st century as well as it has because we inherited it from many of the men who shaped the 20th, and Kubrick and Clarke simply mapped their visions onto the stars.


Credit: geralt/pixabay

How science is presented and consumed on Facebook

This post is a breakdown of the Pew study titled The Science People See on Social Media, published March 21, 2018. Without further ado…

In an effort to better understand the science information that social media users encounter on these platforms, Pew Research Center systematically analyzed six months’ worth of posts from 30 of the most followed science-related pages on Facebook. These science-related pages included 15 popular Facebook accounts from established “multiplatform” organizations … along with 15 popular “Facebook-primary” accounts from individuals or organizations that have a large social media presence on the platform but are not connected to any offline, legacy outlet.

Is popularity the best way to judge if a Facebook page counts as a page about science? Popularity is an easy measure but it skews almost exclusively towards a section of the 'market' devoted to popular science. Some such pages from the Pew dataset include facebook.com/healthdigest, /mindbodygreen, /DailyHealthTips, /DavidAvocadoWolfe and /droz – all "wellness" brands that may represent not so much the publication of scientific content as, more broadly, content that panders to a sense of societal insecurity not restricted to science. This doesn't limit the Pew study insofar as it aims to elucidate what passes for 'science' on Facebook, but it does limit Pew's audience-specific insights.

§

… just 29% of the [6,528] Facebook posts from these pages [published in the first half of 2017] had a focus or “frame” around information about new scientific discoveries.

Not sure why the authors, Paul Hitlin and Kenneth Olmstead, think this is "just" 29% – that's quite high! Science is not just about new research and research results, and if these pages consciously acknowledge that by limiting their posts about such news to, on average, three of every ten posts, that's fantastic. (Of course, if the reason for not sharing research results is that they're not very marketable, that's too bad.)

I’m also curious about what counts as research on the “wellness” pages. If their posts share research to a) dismiss it because it doesn’t fit the page authors’ worldview or b) popularise studies that are, say, pursuing a causative link between coffee consumption and cancer, then such data is useless.

From 'The science people see on social media'. Credit: Pew Research Center


§

The volume of posts from these science-related pages has increased over the past few years, especially among multiplatform pages. On average, the 15 popular multiplatform Facebook pages have increased their production of posts by 115% since 2014, compared with a 66% increase among Facebook-primary pages over the same time period. (emphasis in the original)

The first line in italics is a self-fulfilling prophecy, not a discovery. This is because the “multiplatform organisations” chosen by Pew for analysis all need to make money, and all organisations that need to continue making money need to grow. Growth is not an option, it’s a necessity, and it often implies growth on all platforms of publication in quantity and (hopefully) quality. In fact, the “Facebook-primary” pages, by which Hitlin and Olmstead mean “accounts from individuals or organizations that have a large social media presence on the platform but are not connected to any offline, legacy outlet”, are also driven to grow for the same reason: commerce, both on Facebook and off. As the authors write,

Across the set of 30 pages, 16% of posts were promotional in nature. Several accounts aimed a majority of their posts at promoting other media and public appearances. The four prominent scientists among the Facebook-primary pages posted fewer than 200 times over the course of 2017, but when they did, a majority of their posts were promotions (79% of posts from Dr. Michio Kaku, 78% of posts from Neil deGrasse Tyson, 64% of posts from Bill Nye and 58% of posts from Stephen Hawking). Most of these were self-promotional posts related to television appearances, book signings or speeches.

A page with a few million followers is likelier than not to be a revenue-generating exercise. While this is by no means an automatic indictment of the material shared by these pages, IFL Science is my favourite example: its owner Elise Andrew was offered $30 million for the page in 2015. I suspect that might've been a really strong draw to continue growing, and unfortunately, many "Facebook-primary" pages like IFLS find this quite easy to do by sharing well-dressed click-bait.

Second, if Facebook is the primary content distribution channel, then the number of video posts will also have shown an increase in the Pew data – as it did – because publishers both small and large that’ve made this deal with the devil have to give the devil whatever it wants. If Facebook says videos are the future and that it’s going to tweak its newsfeed algorithms accordingly, publishers are going to follow suit.

Source: Pew Research Center


So when Hitlin and Olmstead say, “Video was a common feature of these highly engaging posts whether they were aimed at explaining a scientific concept, highlighting new discoveries, or showcasing ways people can put science information to use in their lives”, they’re glossing over an important confounding factor: the platform itself. There’s a chance Facebook is soon going to say VR is the next big thing, and then there’s going to be a burst of posts with VR-mediated content. But that doesn’t mean the publishing houses themselves believe VR is good or bad for sharing science news.

§

The average number of user interactions per post – a common indicator of audience engagement based on the total number of shares, comments, and likes or other reactions – tends to be higher for posts from Facebook-primary accounts than posts from multiplatform accounts. From January 2014 to June 2017, Facebook-primary pages averaged 14,730 interactions per post, compared with 4,265 for posts on multiplatform pages. This relationship held up even when controlling for the frame of the post. (emphasis in the original)

Again, Hitlin and Olmstead refuse to distinguish between 'legitimate' posts and trash. This would involve a lot more work on their part, sure, but it would also make their insights into science consumption on social media that much more useful. Until then, for all I know, "the average number of user interactions per post … tends to be higher for posts from Facebook-primary accounts than posts from multiplatform accounts" simply because it's Gwyneth Paltrow wondering about which stones to shove up which orifices.

§

… posts on Facebook-primary pages related to federal funding for agencies with a significant scientific research mission were particularly engaging, averaging more than 122,000 interactions per post in the first half of 2017.

Now that’s interesting and useful. Possible explanation: Trump must’ve been going nuts about something science-related. [Later in the report] Here it is: “Many of these highly engaging posts linked to stories suggesting Trump was considering a decrease in science-agency funding. For example, a Jan. 25, 2017, IFLScience post called Trump’s Freeze On EPA Grants Leaves Scientists Wondering What It Means was shared more than 22,000 times on Facebook and had 62,000 likes and other reactions.”

§

Highly engaging posts among these pages did not always feature science-related information. Four of the top 15 most-engaging posts from Facebook-primary pages featured inspirational sayings or advice such as “look after your friends” or “believe in yourself.”

Does mental-health-related messaging on the back of new findings or realisations about the need for, say, speaking out on depression and anxiety count as science communication? It does to me; by all means, it’s “news I can use”.

§

Three of the Facebook-primary pages belong to prominent astrophysicists. Not surprisingly, about half or more of the posts on these pages were related to astronomy or physics: Dr. Michio Kaku (58%), Stephen Hawking (58%) and Neil deGrasse Tyson (48%).

Ha! It would be interesting to find out why science’s most prominent public authority figures in the last few decades have all been physicists of some kind. I already have some ideas but that’ll be a different post.

§

Useful takeaways for me as science editor, The Wire:

  1. Pages that stick to a narrower range of topics do better than those that cover all areas of science
  2. Controversial topics such as GMOs “didn’t appear often” on the 30 pages surveyed – this is surprising because you’d think divisive issues would attract more audience engagement. However, I also imagine the pages’ owners might not want to post on those issues to avoid flame wars (😐), stay away from inconclusive evidence (😄), not have to take a stand that might hurt them (🤔) or because issue-specific nuances make an issue a hard-sell (🙄).
  3. Most posts that shared discoveries were focused on “energy and environment, geology, and archeology”; half of all posts about physics and astronomy were about discoveries


Friends no more

Growing up, watching Friends was a source of much amusement and happiness. Now, as a grownup, I can't watch a single episode without deeply resenting how the show caricatures all science as avoidable and all scientists as boring. The way Monica, Rachel, Phoebe, Chandler and Joey respond to Ross's attempts to tell them something interesting from his work or passions always provokes strong consternation and an impulse to move away from him. In one episode, Monica condemns comet-watching as a "stupid" exercise. When Ross starts to talk about its (fictitious) discoverer, Joey muffles his ears, screams "No, no, no!" and begins banging on a door pleading to be let out. Pathetic.

This sort of reaction is at the heart of my (im)mortal enemy: the Invisible Barrier that has erupted between many people and science/mathematics. These people, all adults, passively – and sometimes actively – keep away from numbers and equations of any kind. The moment any symbols are invoked in an article or introduced in a conversation, they want to put as much distance as possible between them and what they perceive to be a monster that will make them think. This is why I doubly resent that Friends continues to be popular, that it continues to celebrate the deliberate mediocrity of its characters and the profound lack of inspiration that comes with it.

David Hopkins wrote a nice piece on Medium a year ago about this:

I want to discuss a popular TV show my wife and I have been binge-watching on Netflix. It’s the story of a family man, a man of science, a genius who fell in with the wrong crowd. He slowly descends into madness and desperation, lead by his own egotism. With one mishap after another, he becomes a monster. I’m talking, of course, about Friends and its tragic hero, Ross Geller. …

Eventually, the Friends audience — roughly 52.5 million people — turned on Ross. But the characters of the show were pitted against him from the beginning (consider episode 1, when Joey says of Ross: “This guy says hello, I wanna kill myself.”) In fact, any time Ross would say anything about his interests, his studies, his ideas, whenever he was mid-sentence, one of his “friends” was sure to groan and say how boring Ross was, how stupid it is to be smart, and that nobody cares. Cue the laughter of the live studio audience. This gag went on, pretty much every episode, for 10 seasons. Can you blame Ross for going crazy?

He goes on to say that Friends in fact portended a bad time for America in general and that the show may have even precipitated it – a period of remarkable anti-intellectualism and consumerism. But towards the end, Hopkins says we must not bully the nerds, we must protect them, because "they make the world a better place" – a curious call given that nerds are also building things like Facebook, Twitter, Airbnb, Uber, etc. – services that, by and large, have negatively disrupted the quality of life for those not in the top 1%. These are the nerds that first come to mind when we say they're shaping the world, doing great things for it – but they're not. Instead, these are really smart people either bereft of social consciousness or trapped in corporate assemblages with little commitment to social responsibilities outside their token CSR programmes. And together, they have only made the world a worse place.

But I don't blame the nerd, if only because I can't blame anyone for being smart. I blame the Invisible Barrier, which is slowly but surely making it harder for people to embrace technical knowledge before it has been processed, refined, flavoured and served on a platter. The Barrier takes many shapes, too, making it harder to hunt down. Sometimes, it's a scientist who refuses to engage with an audience that's interested in listening to what she has to say. Sometimes, it's a member of the audience who doesn't believe science can do anything to improve one's quality of life. But mostly, rather, most problematically, the Barrier is a scientist who thinks she's engaging with an enthusiast but is really not, and a self-proclaimed enthusiast who thinks she's doing her bit to promote science but is really not.

This is why we have people who will undertake a 'March for Science' once a year but not otherwise pressure the government to make scientific outreach count more towards career advancement, or who will demand that an astrology workshop at a research centre be cancelled and then withdraw into their bubbles, unmindful of such workshops being held everywhere all the time. This is why we have people who will mindlessly mortgage invaluable opportunities to build research stations against a chance to score political points, or refuse to fund fundamental research programmes because they won't yield short-term benefits.

Unfortunately, these are all the people who matter – the people with the power and ability to effect change on a scale that is meaningful to the rest of us, but who won't in order to protect their interests. The Monicas, Rachels, Phoebes, Chandlers and Joeys of the world: entertainers who thought they were doing good and being good, enjoying life as it should be, without stopping to think about the foundations of their lives and the worms eating into them. The fantasy their combined performance constructed asked, and still asks, its followers to give up, go home and watch TV.

Fucking clowns.

Featured image: A poster of the TV show 'Friends': (L-R) Chandler, Rachel, Ross, Monica, Joey and Phoebe. Source: Warner Bros.

The blog and the social media

Because The Wire had signed up to be some kind of A-listed publisher with Facebook, The Wire‘s staff was required to create Facebook Pages under each writer/editor’s name. So I created the ‘Vasudevan Mukunth’ page. Then, about 10 days ago, Facebook began to promote my page on the platform, running ads for it that would appear on people’s timelines across the network. The result is that my page now has almost as many likes as The Wire English’s Facebook Page: 320,000+. Apart from sharing my pieces from The Wire, I now use the page to share my blog posts as well. Woot!

Action on Twitter hasn't been far behind either. I've had a verified account on the microblogging platform for a few months now. And this morning, Twitter rolled out the expanded tweet character limit (from 140 to 280) to everyone. For someone to whom 140 characters was a liberating experience – a mechanical hurdle imposed on running your mouth, forcing you to think things through (though many choose not to) – the 280-char limit is even more so.

How exactly? An interesting implication discussed in this blog post by Twitter is that giving people 280 characters at a time made them less anxious about how they would compose their tweets. The number of tweets hitting the character limit dropped from 9% in the 140-char era to 1% in the newly begun 280-char era, even as people continued to tweet within 140 characters most of the time. So fewer tweets were being extensively reworked or abandoned, because people no longer composed them with the anxiety of staying within a smaller limit.

But here's the problem: most of my blog's engagement had already been happening on social media. As soon as I published a post, WordPress's Jetpack plugin would send an email with the full post to 4brane's 3,600+ subscribers, post the headline + link on Twitter and the headline + blurb + image + link on Facebook. Readers would reply to the tweet, threading their responses if they had to, and drop comments on Facebook. Meanwhile, the number of emails I've been receiving from my subscribers has been dropping drastically, as has the number of comments on posts.

I remember my blogging habit having taken a hit when I'd decided to become more active on Twitter, because I no longer bore, fermented and composed my thoughts at length, with nuance. Instead, I dropped them as tweets as and when they arose, often with no filter, building them out through conversations with my followers. The 280-char limit now looks set to 'scale up' this disruption by allowing people to be more free and encouraging them to explore more complex ideas, aided by how (and how well, I begrudgingly admit) Twitter displays tweet-threads.

Perhaps – rather hopefully – the anxiety that gripped people when they were composing 140-char tweets will soon grip them as they're composing 280-char tweets as well. I somehow doubt 420-char tweets will be a thing; that would make the platform non-Twitter-like. And hopefully the other advantages of having a blog, apart from the now-lost 'let's have a conversation' part – such as organising information in ways other than Twitter's sole, time-based option – will continue to remain relevant.

Featured image credit: LoboStudioHamburg/pixabay.

Confused thoughts on embargoes

Seventy! That's how many observatories around the world turned their antennae to study the neutron-star collision that LIGO first detected. So I don't know why the LIGO Collaboration, and Nature, bothered to embargo the announcement and, more importantly, the scientific papers of the LIGO-Virgo collaboration as well as those by the people at all these observatories. That's a lot of people, and many of them leaked the neutron-star collision news on blogs and on Twitter. Madness. I even trawled through arXiv to see if I could find preprint copies of the LIGO papers. Nope; it's all been removed.

Embargoes create hype from which journals profit. Everyone knows this. Instead of dumping the data along with the scientific articles as soon as they’re ready, journals like Nature, Science and others announce that the information will all be available at a particular time on a particular date. And between this announcement and the moment at which the embargo lifts, the journal’s PR team fuels hype surrounding whatever’s being reported. This hype is important because it generates interest. And if the information promises to be good enough, the interest in turn creates ‘high pressure’ zones on the internet – populated by those people who want to know what’s going on.

Search engines and news aggregators like Google and Facebook are sensitive to the formation of these high-pressure zones and, at the time of the embargo’s lifting, watch out for news publications carrying the relevant information. And after the embargo lifts, thanks to the attention already devoted by the aggregators, news websites are transformed into ‘low pressure’ zones into which the aggregators divert all the traffic. It’s like the moment a giant information bubble goes pop! And the journal profits from all of this because, while the bubble was building, the journal’s name is everywhere.

In short: embargoes are a traffic-producing opportunity for news websites because they create ‘pseudo-cycles of news’, and an advertising opportunity for journals.

But what’s in it for someone reporting on the science itself? And what’s in it for the consumers? And, overall, am I being too vicious about the idea?

For science reporters, there's the Ingelfinger rule, promulgated by the New England Journal of Medicine in 1969. It states that the journal will not publish any papers whose results have been previously published elsewhere and/or whose authors have already discussed the results with the media. NEJM defended the rule by claiming it kept the journal's output fresh and interesting as well as prevented scientists from getting carried away by the implications of their own research (NEJM's peer-review process would prevent that, they said). In the end, consumers would receive scientific information that had been thoroughly vetted.

While the rule makes sense from the scientists' point of view, it doesn't from the reporters'. A good science reporter, having chosen to cover a certain paper, will present the paper to an unaffiliated expert working in the same area for her judgment. This is a form of peer review external to the journal publishing the paper. Second: a pro-embargo argument that's been advanced is that embargoes alert science reporters to papers of importance as well as give them time to write a good story.

I’m conflicted about this. Embargoes, and the attendant hype, do help science reporters pick up on a story they might’ve missed out on, to capitalise on the traffic potential of a new announcement that may not be as big as it becomes without the embargo. Case in point: today’s neutron-star collision announcement. At the same time, science reporters constantly pick up on interesting research that is considered old/stale or that wasn’t ever embargoed and write great stories about them. Case in point: almost everything else.

My perspective is coloured by the fact that I manage a very small science newsroom at The Wire. I have a very finite monthly budget (equal to about what someone working eight hours a day and five days a week would make in two months on the US minimum wage) using which I've to ensure that all my writers – who are all freelancers – provide both the big picture of science in that month as well as the important nitty-gritties. Embargoes, for me, are good news because they help me reallocate human and financial resources for a story well in advance and make The Wire's presence felt on the big stage when the curtain lifts. Rather, even if I can't make it on time to the moment the curtain lifts, I've still got what I know for sure is a good story on my hands.

A similar point was made by Kent Anderson when he wrote about eLife‘s media policy, which said that the journal would not be enforcing the Ingelfinger rule, over at The Scholarly Kitchen:

By waiving the Ingelfinger rule in its modernised and evolved form – which still places a premium on embargoes but makes pre-publication communications allowable as long as they don’t threaten the news power – eLife is running a huge risk in the attention economy. Namely, there is only so much time and attention to go around, and if you don’t cut through the noise, you won’t get the attention. …

Like it or not, but press embargoes help journals, authors, sponsors, and institutions cut through the noise. Most reporters appreciate them because they level the playing field, provide time to report on complicated and novel science, and create an effective overall communication scenario for important science news. Without embargoes and coordinated media activity, interviews become more difficult to secure, complex stories may go uncovered because they’re too difficult to do well under deadline pressures, and coverage becomes more fragmented.

What would I be thinking if I had a bigger budget and many full-time reporters to work with? I don’t know.

On Embargo Watch in July this year, Ivan Oransky wrote about how an editor wasn’t pleased with embargoes because “staffers had been pulled off other stories to make sure to have this one ready by the original embargo”. I.e., embargoes create deadlines that are not in your control; they create deadlines within which everyone, over time, tends to do the bare minimum (“as much as other publications will do”) so they can ride the interest wave and move on to other things – sometimes not revisiting this story again even. In a separate post, Oransky briefly reviewed a book against embargoes by Vincent Kiernan, a noted critic of the idea:

In his book, Embargoed Science, Kiernan argues that embargoes make journalists lazy, always chasing that week’s big studies. They become addicted to the journal hit, afraid to divert their attention to more original and enterprising reporting because their editors will give them grief for not covering that study everyone else seems to have covered.

Alice Bell wrote a fantastic post in 2010 about how to overcome such tendencies: by newsrooms redistributing their attention on science to both upstream and downstream activities. But more than that, I don’t think lethargic news coverage can be explained solely by the addiction to embargoes. A good editor should keep stirring the pot – should keep her journalists moving on good stories, particularly the kind no one else wants to talk about, reporting on them and playing them up. So, while I’m hoping that The Wire‘s coverage of the neutron-star collision discovery is a hit, I’ve also got great pieces coming this week about solar flares, open-access publishing, the health effects of ******** mining and the conservation of sea snakes.

I hope time will provide some clarity.

Featured image credit: Free-Photos/pixabay.

The metaphorical transparency of responsible media

Featured image credit: dryfish/Flickr, CC BY 2.0.

I’d written a two-part essay (although they were both quite short; reproduced in full below) on The Wire about what science was like in 2016 and what we can look forward to in 2017. The first part was about how science journalism in India is a battle for relevance, both within journalistic circles and among audiences. The second was about how science journalism needs to be treated like other forms of journalism in 2017, and understood to be afflicted with the same ills that, say, political and business journalism are.

Other pieces on The Wire that had the same mandate, of looking back and looking forward, stuck to being roundups and retrospective analyses. My pieces were retrospective, too, but they – to use the parlance of calculus – addressed the second derivative of science journalism, in effect performing a meta-analysis of the producers and consumers of science writing. This blog post is a quick discussion (or rant) of why I chose to go the “science media” way.

We in India often complain about how the media doesn’t care enough to cover science stories. But when we’re looking back and forward in time, we become blind to the media’s efforts. And looking back is more apparently problematic than looking forward.

Looking back is problematic because our roundup of the ‘best’ science (the ‘best’ being whatever adjective you want it to be) from the previous year is actually a roundup of the ‘best’ science we were able to discover or access from the previous year. Many of us may have walled ourselves off into digital echo-chambers, sitting within not-so-fragile filter bubbles and ensuring news we don’t want to read about doesn’t reach us at all. Even so, the stories that do reach us don’t make up the sum of all that is available to consume because of two reasons:

  1. We practically can’t consume everything, period.
  2. Unless you’re a journalist or someone who is at the zeroth step of the information dissemination pyramid, your submission to a source of information is simply your submission to another set of filters apart from your own. Without these filters, finding something you are looking for on the web would be a huge problem.

So becoming blind to media efforts at the time of the roundup is to let journalists (who sit higher up on the dissemination pyramid) who should’ve paid more attention to scientific developments off the hook. For example, assuming things were gloomy in 2016 is assuming one thing given another thing (like a partial differential): “while the mood of science news could’ve been anything between good and bad, it was bad” GIVEN “journalists mostly focused on the bad news over the good news”. This is only a simplistic example: more often than not, the ‘good’ and ‘bad’ can be replaced by ‘significant’ and ‘insignificant’. Significance is also a function of media attention. At the time of probing our sentiments on a specific topic, we should probe the information we have as well as how we acquired that information.

Looking forward without paying attention to how the media will likely deal with science is less apparently problematic because of the establishment of the ideal. For example, to look forward is also to hope: I can say an event X will be significant irrespective of whether the media chooses to cover it (i.e., “it should ideally be covered”); when the media doesn’t cover the event, then I can recall X as well as pull up journalists who turned a blind eye. In this sense, ignoring the media is to not hold its hand at the beginning of the period being monitored – and it’s okay. But this is also what I find problematic. Why not help journalists look out for an event when you know it’s going to happen instead of relying on their ‘news sense’, as well as expecting them to have the time and attention to spend at just the right time?

Effectively: pull us up in hindsight – but only if you helped us out in foresight. (The ‘us’ in this case is, of course, #notalljournalists. Be careful with whom you choose to help or you could be wasting your time.)


Part I: Why Independent Media is Essential to Good Science Journalism

What was 2016 like in science? Furious googling will give you the details you need to come to the clinical conclusion that it wasn’t so bad. After all, LIGO found gravitational waves; an Ebola vaccine was readied; ISRO began tests of its reusable launch vehicle; the LHC amassed particle collisions data; the Philae comet-hopping mission ended; New Horizons zipped past Pluto; Juno is zipping around Jupiter; scientists did amazing (but sometimes ethically questionable) things with CRISPR; etc. But if you’ve been reading science articles throughout the year, then please take a step back from everything and think about what your overall mood is like.

Because, just as easily as 2016 was about mega-science projects doing amazing things, it was also about climate-change action taking a step forward but not enough; about scientific communities becoming fragmented; about mainstream scientific wisdom becoming entirely sidelined in some parts of the world; about crucial environmental protections being eroded; about – undeniably – questionable practices receiving protection under the emotional cover of nationalism. As a result, and as always, it is difficult to capture what this year was to science in a single mood, unless that mood in turn captures anger, dismay, elation and bewilderment at various times.

So, to simplify our exercise, let’s do that furious googling – and then perform a meta-analysis to reflect on where each of us sees fit to stand with respect to what the Indian scientific enterprise has been up to this year. (Note: I’m hoping this exercise can also be a referendum on the type of science news The Wire chose to cover this year, and how that can be improved in 2017.) The three broad categories (and sub-categories) of stories that The Wire covered this year are:

| Good | Bad | Ugly |
| --- | --- | --- |
| Different kinds of ISRO rockets – sometimes with student-built sats onboard – took off | Big cats in general, and leopards specifically, had a bad year | Indian scientists continued to plagiarise and engage in other forms of research misconduct without consequence |
| ISRO decided to partially privatise PSLV missions by 2020 | The JE/AES scourge struck again, its effects exacerbated by malnutrition | The INO got effectively shut down |
| The LIGO-India collaboration received govt. clearance; Indian scientists of the LIGO collaboration received a vote of confidence from the international community | The PM endorsed BGR-34, an anti-diabetic drug of dubious credentials | Antibiotic resistance worsened in India (and other middle-income nations) |
| We supported ‘The Life of Science’ | The govt. conceived misguided culling rules | India succumbed to US pressure on curtailing generic drugs |
| Many new species of birds/animals were discovered in India | The Ken-Betwa river linkup was approved at the expense of a tiger sanctuary | Important urban and rural waterways were disrupted, often to the detriment of millions |
| New telescopes were set up, further boosting Indian astronomy; ASTROSAT opened up to international scientists | Many conservation efforts were hampered, while some were mooted that sounded like ministers hadn’t thought them through | Ministers made dozens of pseudoscientific claims, often derailing important research |
| Otters returned to their habitats in Kerala and Goa | A politician beat a horse to its death | Fake science news was widely reported in the Indian media |
| Janaki Lenin continued her ‘Amazing Animals’ series | Environmental regulations turned and/or stayed anti-environment | Socio-environmental changes resulting from climate change affected many livelihoods around the country |
| We produced monthly columns on modern microbiology and the history of science | We didn’t properly respond to human-wildlife conflicts | Low investment in public healthcare, and a focus on privatisation, short-changed Indian patients |
| Indian physicists discovered a new form of superconductivity in bismuth | GM tech continued to polarise scientists, social scientists and activists | Space, defence-research and nuclear-power establishments continued to remain opaque |
| – | Conversations stuttered on eastern traditions of science | – |

I leave it to you to weigh each of these types of stories as you see fit. For me – as a journalist – science in the year 2016 was defined by two parallel narratives: first, science coverage in the mainstream media did not improve; second, the mainstream media in many instances remained obediently uncritical of the government’s many dubious claims. As a result, it was heartening on the first count to see ‘alternative’ publications like The Life of Science and The Intersection being set up or sustained (as the case may be).

On the latter count: the media’s submission paralleled, rather directly followed, its capitulation to pro-government interests (although some publications still held out). This is problematic for various reasons, but one that is often overlooked is that the “counterproductive continuity” that right-wing groups stress upon – between traditional wisdom and knowledge derived through modern modes of investigation – receives nothing short of a passive endorsement by uncritical media broadcasts.

From within The Wire, doing a good job of covering science has become a battle for relevance as a result. And this is a many-faceted problem: it’s as big a deal for a science journalist to report a significant story well as it is to find that story in the first place – and it’s as difficult to get every scientist you meet to trust you as it is to convince every reader who visits The Wire to read an article or two in the science section per visit. Fortunately (though let it not be said that this is simply a case of material fortunes), the ‘Science’ section on The Wire has enjoyed both emotional and financial support. To show for it, we have had the privilege of overseeing the publication of 830 articles, and counting, in 2016 (across science, health, environment, energy, space and tech). And I hope those who have written for this section will continue to write for it, even as those who have been reading it continue to read it.

Because it is a battle for relevance – a fight to be noticed and to be read, even when stories have nothing to do with national interests or immediate economic gains – the ideal of ‘speaking truth to power’ that other like-minded sections of the media cherish is preceded for science journalism in India by the ideals of ‘speaking’ first and then ‘speaking truth’ second. This is why an empowered media is as essential to the revival of that constitutionally enshrined scientific temperament as are productive scientists and scientific institutions.

The Wire‘s journalists have spent thousands of hours this year striving to be factually correct. The science writers and editors have also been especially conscientious about receiving feedback at all stages, engaging in conversations with our readers and taking prompt corrective action when necessary – even if that means a retraction. This will continue to be the case in 2017, in recognition of the fact that the elevation of Indian science on the global stage, long hailed as overdue, will directly follow from empowering our readers to ask the right questions and be reasonably critical of all claims at all times, no matter who makes them.

Part II: If You’re Asking ‘What To Expect in Science in 2017’, You Have Missed the Point

While a science reporter at The Hindu, this author conducted an informal poll asking the newspaper’s readers to speak up about what their impressions were of science writing in India. The answers, received via email, Twitter and comments on the site, generally swung between saying there was no point and saying there was a need to fight an uphill battle to ‘bring science to everyone’. After the poll, however, it still wasn’t clear who this ‘everyone’ was, notwithstanding a consensus that it meant everyone who chanced upon a write-up. It still isn’t clear.

Moreover, much has been written about the importance of science, the value of engaging with it in any form without expectation of immediate value and even the usefulness of looking at it ‘from the outside in’ when the opportunity arises. With these theses in mind (which I don’t want to rehash; they’re available in countless articles on The Wire), the question of “What to expect in science in 2017?” immediately evolves into a two-part discussion. Why? Because not all science that happens is covered; not all science that is covered is consumed; and not all science that is consumed is remembered.

The two parts are delineated below.

What science will be covered in 2017?

Answering this question is an exercise in reinterpreting the meaning of ‘newsworthiness’ subject to the forces that will assail journalism in 2017. An immensely simplified way is to address the following factors: the audience, the business, the visible and the hidden.

The first two are closely linked. As print publications are shrinking and digital publications growing, a consideration of distribution channels online can’t ignore the social media – specifically, Twitter and Facebook – as well as Google News. This means that an increasing number of younger readers are available to target, which in turn means covering science in a way that interests this demographic. Qualities like coolness and virality will make an item immediately sellable to marketers whereas news items rich with nuance and depth will take more work.

Another way to address the question is in terms of what kind of science will be apparently visible, and available for journalists to easily chance upon, follow up and write about. The subjects of such writing typically are studies conducted and publicised by large labs or universities, involving scientists working in the global north, and often on topics that lend themselves immediately to bragging rights, short-lived discussions, etc. In being aware of ‘the visible’, we must be sure to remember ‘the invisible’. This can be defined as broadly as in terms of the scientists (say, from Latin America, the Middle East or Southeast Asia) or the studies (e.g., by asking how the results were arrived at, who funded the studies and so forth).

On the other hand, ‘the hidden’ is what will – or ought to – occupy those journalists interested in digging up what Big X (Pharma, Media, Science, etc.) doesn’t want publicised. What exactly is hidden changes continuously but is often centred on the abuse of privilege, the disregard of those we are responsible for and, of course, the money trail. The issues that will ultimately come to define 2017 will all have had dark undersides defined by these aspects and which we must strive to uncover.

For example: with the election of Donald Trump, and his bad-for-science clique of bureaucrats, there is a confused but dawning recognition among liberals of the demands of the American midwest. So to continue to write about climate change targeting an audience composed of left-wingers or east coast or west coast residents won’t work in 2017. We must figure out how to reach across the aisle and disabuse climate deniers of their beliefs using language they understand and using persuasions that motivate them to speak to their leaders about shaping climate policy.

What will be considered good science journalism in 2017?

Scientists are not magical creatures from another world – they’re humans, too. So is their collective enterprise riddled with human decisions and human mistakes. Similarly, despite all the travails unique to itself, science journalism is fundamentally similar to other topical forms of journalism. As a result, the broader social, political and media trends sweeping around the globe will inform novel – or at least evolving – interpretations of what will be good or bad in 2017. But instead of speculating, let’s discuss the new processes through which good and bad can be arrived at.

In this context, it might be useful to draw from a blog post by Jay Rosen, a noted media critic and professor of journalism at New York University. Though the post focuses on what political journalists could do to adapt to the Age of Trump, its implied lessons are applicable in many contexts. More specifically, the core effort is about avoiding those primary sources of information (out of which a story sprouts) whose persistent use has landed us in this mess. A wildly remixed excerpt:

Send interns to the daily briefing when it becomes a newsless mess. Move the experienced people to the rim. Seek and accept offers to speak on the radio in areas of Trump’s greatest support. Make common cause with scholars who have been there. Especially experts in authoritarianism and countries when democratic conditions have been undermined, so you know what to watch for— and report on. (Creeping authoritarianism is a beat: who do you have on it?). Keep an eye on the internationalization of these trends, and find spots to collaborate with journalists across borders. Find coverage patterns that cross [the aisle].

And then this:

[Washington Post reporter David] Fahrenthold explains what he’s doing as he does it. He lets the ultimate readers of his work see how painstakingly it is put together. He lets those who might have knowledge help him. People who follow along can see how much goes into one of his stories, which means they are more likely to trust it. … He’s also human, humble, approachable, and very, very determined. He never goes beyond the facts, but he calls bullshit when he has the facts. So impressive are the results that people tell me all the time that Fahrenthold by himself got them to subscribe.

Transparency is going to matter more than ever in 2017 because of how the people’s trust in the media was eroded in 2016. And there’s no reason science journalism should be an exception to these trends – especially given how science and ideology quickly locked horns in India following the disastrous Science Congress in 2015. More than any other event since the election of the Bharatiya Janata Party to the centre, and much like Trump’s victory caught everyone by surprise, the 2015 congress really spotlighted the extent of rational blight that had seeped into the minds of some of India’s most powerful ideologues. In the two years since, the reluctance of scientists to step forward and call bullshit out has also started to become more apparent, as a result exposing the different kinds of undercurrents that drastic shifts in policies have led to.

So whatever shape good science journalism is going to assume in 2017, it will surely benefit by being more honest and approachable in its construction. As will the science journalist who is willing to engage with her audience about the provenance of information and opinions capable of changing minds. As Jeff Leek, an associate professor at the Johns Hopkins Bloomberg School of Public Health, quoted (statistician Philip Stark) on his blog: “If I say just trust me and I’m wrong, I’m untrustworthy. If I say here’s my work and it’s wrong, I’m honest, human, and serving scientific progress.”

Here’s to a great 2017! 🙌🏾

Curious Bends – big tobacco, internet blindness, spoilt dogs and more

1. Despite the deadly floods in Uttarakhand in 2013, the govt ignores grave environmental reports on the new dams to be built in the state

“The Supreme Court asked the Union environment ministry to review six specific hydroelectric projects on the upper Ganga basin in Uttarakhand. On Wednesday, the ministry informed the apex court that its expert committee had checked and found the six had almost all the requisite and legitimate clearances. But, the ministry did not tell the court the experts, in the report to the ministry, had also warned these dams could have a huge impact on the people, ecology and safety of the region, and should not be permitted at all on the basis of old clearances.” (6 min read, businessstandard.com)

2. At the heart of the global-warming debate is the issue of energy poverty, and we don’t really have a plan to solve the problem

“Each year, human civilization consumes some 14 terawatts of power, mostly provided by burning the fossilized sunshine known as coal, oil and natural gas. That’s 2,000 watts for every man, woman and child on the planet. Of course, power isn’t exactly distributed that way. In fact, roughly two billion people lack reliable access to modern energy—whether fossil fuels or electricity—and largely rely on burning charcoal, dung or wood for light, heat and cooking.” (4 min read, scientificamerican.com)

3. Millions of Facebook users have no idea they’re using the internet

“Indonesians surveyed by Galpaya told her that they didn’t use the internet. But in focus groups, they would talk enthusiastically about how much time they spent on Facebook. Galpaya, a researcher (and now CEO) with LIRNEasia, a think tank, called Rohan Samarajiva, her boss at the time, to tell him what she had discovered. “It seemed that in their minds, the Internet did not exist; only Facebook,” he concluded.” (8 min read, qz.com)

+ The author of the piece, Leo Mirani, is a London-based reporter for Quartz.

4. The lengths to which big tobacco industries will go to keep their markets alive is truly astounding

“Countries have responded to Big Tobacco’s unorthodox marketing with laws that allow government to place grotesque images of smoker’s lung and blackened teeth on cigarette packaging, but even those measures have resulted in threats of billion-dollar lawsuits from the tobacco giants in international court. One such battle is being waged in Togo, where Philip Morris International, a company with annual earnings of $80 billion, is threatening a nation with a GDP of $4.3 billion over their plans to add the harsh imagery to cigarette boxes, since much of the population is illiterate and therefore can’t read the warning labels.” (18 min video, John Oliver’s Last Week Tonight via youtube.com)

5. Hundreds of people have caught hellish bacterial infections and turned to Eastern Europe for a century-old viral therapy

“A few weeks later, the Georgian doctors called Rose with good news: They would be able to design a concoction of phages to treat Rachel’s infections. After convincing Rachel’s doctor to write a prescription for the viruses (so they could cross the U.S. border), Rose paid the Georgian clinic $800 for a three-month supply. She was surprised that phages were so inexpensive; in contrast, her insurance company was forking over roughly $14,000 a month for Rachel’s antibiotics.” (14 min read, buzzfeed.com)

Chart of the week

“Deshpande takes her dog, who turned six in February, for a walk three times every day. When summers are at its peak, he is made to run on the treadmill inside the house for about half-hour. Zuzu’s brown and white hair is brushed once every month, he goes for a shower twice a month—sometimes at home, or at a dog spa—and even travels with the family to the hills every year. And like any other Saint Bernard, he has a large appetite, eating 20 kilograms of dog food every month. The family ends up spending Rs5,000 ($80)-7,000 ($112) every month on Zuzu, about double the amount they spend on Filu, a Cocker Spaniel.” (4 min read, qz.com)


Ello! I love you, let me jump in your game!

This is a guest post contributed by Anuj Srivas. Formerly a tech. reporter and writer for The Hindu, he’s now pursuing an MSc. at the Oxford Internet Institute, and blogging for Sciblogger.

If there were ever an artifact to which Marshall McLuhan’s ‘the medium is the message’ would be best applicable, it would be Ello. The rapidly-growing social network – much like the EU’s ‘right to be forgotten’ – is quickly turning out to be something of a Rorschach test: people look at it and see what they wish to see.

Like all political slogans, Ello’s manifesto is becoming an inkblot onto which we can project our innermost ideologies. It is almost instructive to look at the wide range of reactions, if only for the fact that it tells us something about the way in which we will build the future of the Web.

Optimists and advocates of privacy take a look at Ello and see the start of something new, or view it as a chance to refresh the targeted-advertising foundations of our Web. The most sceptical of this lot, however, point towards the company’s venture capital funding and sneer.

Technology and business analysts look at Ello and see a failed business model; one that is doomed from the start. Feminists and other minority activists look at the company’s founders and notice the appalling lack of diversity. Utopian Internet intellectuals like Clay Shirky see Ello as a way to reclaim conversational discourse on the Internet, even if it doesn’t quite achieve it just yet.

What do I see in the Ello inkblot? Two things.

The first is that Ello, if it gains enough traction, will become an example of whether the free market is capable of providing a social network alternative that respects privacy.

For the last decade, one of the biggest debates among netizens has been whether we should take steps (legal or otherwise) to safeguard values such as privacy on the Internet. One of the most vocal arguments against this has been that “if the demand for the privacy is so great, then the market will notice the demand and find some way to supply it”.

Ello is seemingly the first proper, privacy-caring, centralized social network that the market has spit out (Diaspora was more of a social creation that was designed to radically change online social networks, which was in all likelihood what caused its stagnation). In this way, the VC funding gives Ello a greater chance to provide a better experience – even if it does prove to be the spark that leads to the company’s demise.

If Ello succeeds and continues to stick to its espoused principles, then that’s one argument settled.

The second point pertains to all that Ello does not represent. Sociologist Nathan Jurgensen has an excellent post on Ello where he lashes out at how online social networks are still being built by only technology geeks. He writes:

This [Ello] is yet another example of social media built by coders and entrepreneurs, but no central role for those expert in thinking about and researching the social world. The people who have decided they should mediate our social interactions and write a political manifesto have no special expertise in the social or political.

I cannot emphasize this point enough. One of the more prominent theories regarding technology and its implications is the ‘social shaping of technology’. It theorizes that technology is not born and developed in a vacuum – it is instead very much shaped and created by relevant social groups. There is little doubt that much of today’s technology and online services is skewed very disproportionately – the number of social groups that are involved in the creation of an online social network is minuscule compared to the potential reach and influence of the final product. Ello is no different when it comes to this.

It is a combination of these two points that sums up the current, almost tragic state of affairs. The technology and digital tools of today are very rarely created, or deployed, keeping in mind the needs of the citizen. They usually are brought to life from some entrepreneur’s or venture capitalist’s PowerPoint presentation and then applied to real world situations.

Is Ello the anti-Facebook that we need? Perhaps. Is it the one we deserve? Probably not.

The federation of our digital identities

Facebook, Twitter, email, WordPress, Instagram, online banking, the list goes on… Offline, you’re one person maintaining (presumably) one identity. On the web, you have many of them. All of them might point at you, but they’re still distinct packets of data floating through different websites. Within each site, your identity is unified, but between them, you’re different people. For example, I can’t log into Twitter with my Facebook username/password because Facebook owns them. When digital information becomes federated like this, it drives down cross-network accountability because my identity doesn’t move around.

However, there are some popular exceptions to this. Facebook and Twitter don’t exchange my log-in credentials – the keys with which I unlock my identity – because they’re rivals, but many other services and these sites are not. For example, I can log into my YouTube account using my GMail credentials. When I hit ‘Submit’, YouTube banks on the validity of my identity on GMail to log me in. Suddenly, GMail and YouTube both have access to my behavioral information through my username now. In the name of convenience, my online visibility has increased and I’ve become exposed to targeted advertising, likely the least of ills.

The Crypto-Book

John Maheswaran, a doctoral student at Yale University, has a solution. He’s called it ‘Crypto-Book’, describing its application and uses in a pre-print paper he and his colleagues uploaded to arXiv on June 16.

1. The user clicks ‘Sign up using Facebook’ on StackOverflow.

2. StackOverflow redirects the user to Facebook to log in using Facebook credentials.

3. The user grants some permissions.

4. Facebook generates a temporary OAuth access token corresponding to the permissions.

5. Facebook redirects the user back to StackOverflow along with the access token.

6. StackOverflow can now access the user’s Facebook resources in line with the granted permissions.

Crypto-Book sits between steps 1 and 6. Instead of letting Facebook and StackOverflow talk to each other, it steps in to take your social network ID from Facebook, uses that to generate a username and password (in this context called a public and private key, respectively), and passes them on to StackOverflow for authentication.
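As a rough illustration of where Crypto-Book sits, here is a minimal sketch in Python. The function names and the key-derivation scheme are invented for this example (the paper’s actual construction is different and more careful); the point is only that the third-party site receives a derived key pair, never the Facebook token or identity itself.

```python
import hashlib
import hmac

# Hypothetical sketch: derive a (public, private) key pair from a social
# network identity so the third-party site never sees the OAuth token.
# This HMAC-based derivation is illustrative, not the paper's scheme.

def derive_keypair(oauth_token: str, server_secret: bytes):
    private_key = hmac.new(server_secret, oauth_token.encode(),
                           hashlib.sha256).hexdigest()
    public_key = hashlib.sha256(private_key.encode()).hexdigest()
    return public_key, private_key

# Facebook issues a token (step 4); the intermediary converts it into a
# key pair, and only the public key is passed on for authentication.
token = "oauth-access-token-from-facebook"
pub, priv = derive_keypair(token, server_secret=b"intermediary-secret")
print(pub != token)  # the site authenticates a key, not the Facebook identity
```

Because the derivation is deterministic, the same social identity always maps to the same key pair, so a site can recognise a returning user without ever learning who they are on Facebook.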

OpenID and OAuth

It communicates with both sites using the OAuth protocol, which came into use in 2010. Five years before that, the OpenID protocol had launched to some success. In either case, the idea was to reduce the multiplicity of digital identities; in the context of sites like Facebook and Twitter, which could own your identities themselves, the protocols gave users more control over what information they shared, or at least a way to keep track of it.

OpenID let users register with it, and then functioned as a decentralized hub. If you wanted to log into WordPress next, you could do so with your OpenID credentials; WordPress only had to recognize the protocol. In that sense, it was like, say, Twitter, but with the sole function of maintaining a registry of identities. Its use has since declined because of a combination of its security shortcomings and other sites’ better authentication schemes. OAuth, on the other hand, has grown more popular. Unlike OpenID, OAuth is an identity access protocol: it gives users a way to grant limited-access permissions to third-party sites without having to enter any credentials there (a feature called pseudo-authentication).
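The pseudo-authentication idea can be sketched in a few lines. Everything below (the token structure, the scope names) is hypothetical rather than the actual OAuth wire format; it only illustrates how a scoped token stands in for credentials.

```python
from dataclasses import dataclass

# Illustrative sketch of OAuth-style pseudo-authentication: the third
# party receives a token limited to specific permissions, never a password.

@dataclass(frozen=True)
class AccessToken:
    user_id: str
    scopes: frozenset

def grant_token(user_id, requested_scopes, approved_scopes):
    # The user approves only a subset of what the site asked for.
    return AccessToken(user_id,
                       frozenset(requested_scopes) & frozenset(approved_scopes))

def can_access(token, resource_scope):
    return resource_scope in token.scopes

token = grant_token("alice", {"read_profile", "read_contacts"}, {"read_profile"})
print(can_access(token, "read_profile"))   # True
print(can_access(token, "read_contacts"))  # False: never granted
```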

So Crypto-Book inserts itself as an anonymizing layer to prevent Facebook and StackOverflow from exchanging tokens with each other. Maheswaran also describes additional techniques to bolster Crypto-Book’s security. For one, a user doesn’t receive his/her key pair from one server but from many, and has to combine the different parts to make the whole. For another, the user can use the key pair to log in to a site using a technique called linkable ring signatures, “which prove that the signer owns one of a list of public keys, without revealing which key,” the paper says. “This property is particularly useful in scenarios where trust is associated with a group rather than an individual.”
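
The first technique, receiving the key pair in parts from many servers, can be illustrated with simple n-of-n XOR secret sharing. The real scheme is more sophisticated than this sketch, but it shows why no single server ever learns the key: each server's share looks like random noise, and only combining all of them reconstructs the secret:

```python
import secrets
from functools import reduce

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_secret(secret: bytes, n: int) -> list[bytes]:
    # n-of-n XOR sharing: n-1 shares are pure randomness; the last is
    # chosen so that XOR-ing all n shares together yields the secret.
    shares = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(_xor, shares, secret))
    return shares

def combine(shares: list[bytes]) -> bytes:
    return reduce(_xor, shares)

key = b"users-private-signing-key"
shares = split_secret(key, 3)          # three servers, one share each
assert combine(shares) == key          # all three together recover the key
assert all(s != key for s in shares)   # no single share reveals it
```

A threshold scheme (k of n, as with Shamir's secret sharing) would additionally tolerate some servers going offline; the all-or-nothing XOR version is just the simplest way to see the idea.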

The cryptocurrency parvenu

Interestingly, the precedent for an equally competent solution was set in 2008, when the cryptocurrency called bitcoin came online. Bitcoins are digital tokens minted through complex mathematical calculations, and each is worth about $630 today. Using my public and private keys, I can perform bitcoin transactions, the records of which are encrypted and logged in a publicly maintained registry called the blockchain. Once the blockchain is updated with a transaction, no information other than the value exchanged can be retrieved from it. In April 2011, this blockchain was forked into a new registry for a cryptocurrency called namecoin. Namecoins and bitcoins are exactly the same but for one crucial difference: while bitcoins make up a decentralized banking system, namecoins make up a decentralized domain name system (DNS), a registry of unique locations on the Internet.
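
The tamper-resistance of such a registry comes from chaining: every block records a hash of the block before it, so editing any old entry invalidates every later link. A minimal sketch (the block layout here is invented for illustration, not bitcoin's actual format):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a block's canonical JSON form.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# Build a three-block chain, each block pointing at its predecessor's hash.
chain = []
prev = "0" * 64  # genesis pointer
for value in [0.5, 1.2, 0.05]:  # illustrative transaction values
    block = {"prev": prev, "value": value}
    chain.append(block)
    prev = block_hash(block)

# Tampering with the first transaction breaks the link to the second:
chain[0]["value"] = 99.0
assert chain[1]["prev"] != block_hash(chain[0])
```

In the real network, thousands of independent nodes hold copies of the chain, so a forger would have to redo the work on a majority of them, which is what makes the registry effectively append-only.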

The namecoin blockchain, as its website puts it, can “securely record and transfer arbitrary names,” or keys, an ability that lets programmers use it as an anonymizing layer to communicate between social network identities and third-party sites in the same way Crypto-Book does. For instance, OneName, a service that lets you use a social network identity to label your bitcoin address to simplify transactions, describes itself as

a decentralized identity system (DIS) with a user directory made of entries in a decentralized key-value store (the Namecoin blockchain).

Say I ‘register’ my digital identity with namecoin. The process of registration is logged on the blockchain and I get a public and private key. If Twitter is a relying party, I should be able to log in to it with my keys and start using it. Only now, Twitter’s server will log me in without itself owning a username with which it can monitor my behavior. And unlike with OpenID or OAuth, neither namecoin nor anyone else on the web can access my identity, because it has been encrypted. At the same time, like Crypto-Book, namecoin will use OAuth to communicate with the social networking and third-party sites. At the end of the day, namecoin lets me mobilize only the proof that my identity exists, and not my identity itself, in order to let me use services anonymously.
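
A namecoin-style login can be sketched as challenge-response against a key-value registry. Note the simplifications: a Python dict stands in for the blockchain, and HMAC stands in for a real public-key signature (with HMAC the 'verification key' must itself be kept secret, which a genuine signature scheme avoids). The shape of the exchange is what matters: the site verifies that I control the registered name without ever learning who I am:

```python
import hashlib
import hmac
import secrets

registry = {}  # name -> verification key; stands in for the blockchain

def register(name: str, key: bytes) -> None:
    # First claim wins, like a first-registered blockchain record.
    registry.setdefault(name, key)

def login(name: str, respond) -> bool:
    # The site issues a fresh random challenge; the user proves key
    # ownership by returning a MAC over it. No identity data changes hands.
    challenge = secrets.token_bytes(16)
    expected = hmac.new(registry[name], challenge, hashlib.sha256).digest()
    return hmac.compare_digest(respond(challenge), expected)

my_key = secrets.token_bytes(32)
register("id/alice", my_key)
ok = login("id/alice", lambda c: hmac.new(my_key, c, hashlib.sha256).digest())
assert ok  # only the holder of my_key can answer the challenge
```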

If everybody’s wearing a mask, who’s anonymous?

As such, it enables one of the most advanced anonymization services today. What makes it particularly effective is its reliance on the blockchain, which is not maintained by a central authority. Instead, it’s run by multiple namecoin users lending computing resources that process and maintain the blockchain, so there’s a fee associated with staking and sustaining your claim of anonymity. This decentralization is necessary to dislocate power centers and forestall precipitous decisions that could compromise your privacy or shut websites down.

Services like IRC provided the zeroth level of abstraction for achieving anonymity in the presence of institutions like Facebook – by being completely independent and ‘unhooked’. Then the OpenID protocol aspired, ironically, to some centrality by trying to set up one set of keys to unlock multiple doors. In this sense, the OAuth protocol was disruptive because it didn’t provide anonymity so much as an alternative route, by limiting the number of identities you had to maintain on the web. Then came the Crypto-Book and blockchain techniques, both aspiring toward anonymity, both reliant on Pyrrhic decentralization in the sense that the power to make decisions was not eliminated so much as extensively diluted.

Therefore, the move toward privatization of digital identities has been supported by publicizing the resources that maintain those identities. As a result, perfect anonymity becomes consequent to full participation – which has always been the ideal – and the size of the fee to achieve anonymity today is symptomatic of how far we are from that ideal.

(Thanks to Vignesh Sundaresan for inputs.)