
On that Poynter debate about stock images and ethical visual journalism

Response to Mark Johnson, Article about free images ‘contradicts everything I hold true about journalism’, Poynter, February 9, 2018. 

Let’s get the caveats out of the way:

  • The article to which Johnson is responding did get some of its messaging wrong. As Johnson wrote, it suggested the following: “We don’t think about visuals BUT visuals are critically important. The solutions offered amount to scouring the web for royalty-free and (hopefully) copyright-released stock images.”
  • In doing so, the original article may have further diminished prospects for visual journalists in newsrooms around the country (whether in the US or India), especially since Poynter is such a well-regarded publisher among editors and since there already aren’t enough jobs available on the visual journalism front.
  • I think visual journalists are important in any newsroom that includes a visual presentation component because they’re particularly qualified to interrogate how journalism can be adapted to multimedia forms and in what circumstances such adaptations can strain or liberate its participants’ moral and ethical positions.

That said, Johnson himself may have missed some of the nuance in this issue. Before we go ahead: I’m going to shorten “royalty-free and/or copyright-released” to CC0, the Creative Commons ‘No Rights Reserved’ dedication. It allows “scientists, educators, artists and other creators and owners of copyright- or database-protected content to waive those interests in their works and thereby place them as completely as possible in the public domain, so that others may freely build upon, enhance and reuse the works for any purposes without restriction under copyright or database law.” However, what I’m going to say should hold for most other CC licenses as well (including BY, BY-SA, BY-NC, BY-ND, BY-NC-SA and BY-NC-ND).

By providing the option for publishers to look for CC0 images, the authors of the original piece may have missed an important nuance: publishers come in varying sizes, and the bigger the publisher, the less excusable it is for it not to have a visual journalism department in-house. For smaller (and the smallest) publishers, however, having access to CC0 images is important because (a) producing original images and videos can involve prohibitive costs and (b) distribution channels of choice, such as Facebook and Twitter, penalise links shared without images.

Bigger publishers have that option and should, to the extent possible, exercise it to hire illustrators, designers, video producers, reporters, podcasters, etc. To not do so would be to abdicate professional responsibilities. However, in the interest of leveraging the possibilities afforded by the internet, as well as of keeping our news professional but also democratic, it’s not fair to assume it’s okay to penalise smaller publishers simply because they resort to using CC0 images. And a penalty there will be if they don’t use images: Facebook, for example, will deprioritise their content in people’s feeds. So the message that needs to be broadcast is that it’s okay for smaller publishers to use CC0 images – but also that it’s important for them to break away from the practice as they grow.

Second: Johnson writes,

Choosing stock images for news stories is an ethically questionable choice — you don’t know the provenance of the image, you don’t know the conditions under which it was created and you don’t know where else it has been used. It degrades the journalistic integrity of the site. Flip it around — what if there were generic quotes inserted into a story? They wouldn’t advance the narrative at all, they would just act as filler.

He’s absolutely right to equate text and images: they both help tell a story and they should both be treated with equal respect and consequence. (Later in his article, Johnson goes on to suggest visuals may in fact be more consequential because people tend to remember them better.) However, characterising stock images as the journalistic equivalent of blood diamonds is unfair.

For one, it’s not clear what Johnson means by “generic quotes”. Sometimes a quote is a statement that needs to be printed to reflect its author’s official position (or lack thereof). For another, stock images may not be completely specific to a story, but they could fit its broader theme in a quasi-specific way (after all, there are millions of CC0 images to pick from).

But most importantly, the allegations undercut the possibilities of the Open Access (OA) movement in the realms of digital knowledge-production and publishing. By saying, “Choosing stock images for news stories is an ethically questionable choice”, Johnson risks offending those who create visual assets and share them under a CC0 license expressly to inject them into the public domain – a process by which those who are starved of resources in one part of the world are not also starved of information produced in another. Journalism shouldn’t – can’t – be free, because it includes some well-defined value-adds that need to be paid for. But information (and sometimes knowledge) can be free, especially if those generating it are willing to waive being paid for it.

My go-to example has been The Conversation. Its articles are written by experts with PhDs in the subjects they’re writing about (and are affiliated with reputable institutions). The website is funded by contributions from universities and labs. The affiliations of its contributors and their conflicts of interest, if any, are acknowledged with every article. Best of all, its articles are all available to republish for free under at least a CC BY license. Their content is not of the ‘stock’ variety; their sentences and ideas are not generic. Reusing their articles may not advance the narrative inherent in them but would I say it hurts journalists? No.

Royalty-free and copyright-released images and videos free visual journalists from having to be involved every step of the way. This is sad but necessary in circumstances where they might not get paid, where there might not be the room, inclination or expertise to manage and/or work with them, and where an audience that values their work and time might not exist.

This is where having, using and contributing to a digital commons can help. Engaging with it is a choice, not a burden. Ignoring those who make this choice to argue that every editor must carefully consider the visual elements of a story together with experts and technicians hired just for this purpose is akin to suggesting that proponents of OA/CC0 content are jeopardising opportunities for visual journalists to leave their mark. This is silly, mostly because it leaves the central agent out of the picture: the publisher.

It’s a publisher’s call to tell a story through just text, just visuals or both. Not stopping to chide those who can hire visual journalists but don’t while insisting “it’s a big part of what we do” doesn’t make sense. Not stopping to help those who opt for text-only because that’s what they can afford doesn’t make sense either.

Featured image credit: StockSnap/pixabay.

Eroding the dignity of Jayalalithaa's memory

On December 20, P. Vetrivel, a former MLA and member of the AIADMK party, convened a press meet and released a 20-second video clip purportedly showing former Tamil Nadu chief minister J. Jayalalithaa lying on a hospital bed shortly before she died on December 5, 2016. Since that day, the affairs of the AIADMK have been in tatters – an inconvenience the party has been forced to confront twice over, both times when Jayalalithaa’s constituency, R.K. Nagar, held by-polls to elect its next representative.

A major rift within the party meant that there were those within and without who suspected Jayalalithaa may not have died a natural death – contrary to what the currently dominant AIADMK faction, to which Vetrivel belongs, has insisted. That faction is led by T.T.V. Dinakaran, nephew of former Jayalalithaa aide V.K. Sasikala. Vetrivel’s new video, which he said was made by Sasikala with Jayalalithaa’s consent, tries to allay these suspicions by showing that the former leader really was at a hospital, being treated for diabetes and kidney problems. It was released even as voting began in R.K. Nagar on the morning of December 21.

No Tamil news channel seems to have heeded the Election Commission’s directive to not air the video clip – a directive that arrived only four-plus hours after Vetrivel’s press meet concluded. While some people have tried to poke holes in the video – focusing especially on how palm trees are visible outside Jayalalithaa’s room when her treatment was widely publicised to have happened on the seventh floor – news channels aired it all day yesterday.

The clip shows Jayalalithaa on a large bed, unmoving, in a gown. Her facial features aren’t apparent. Her left leg is visible outstretched but her right leg isn’t. In her left hand, there’s a cup of some liquid that she brings to her mouth once and drinks through a straw. It’s quite a sad sight to behold.

When Jayalalithaa died, the pall of sorrow that hung over Chennai was palpable. Even functionaries of the DMK, which has been the AIADMK’s principal opponent for decades, were shaken and paid heartfelt tributes to a woman they called a ‘worthy opponent’. Although she’d run an opaque, pro-business government and centralised a majority of its decision-making, her rule was marked by many popular social development schemes. There’s no bigger testimony to her leadership than the blind, self-serving hutch the AIADMK has devolved to become without her.

To see a woman considered tactful, shrewd and graceful when she lived depicted after her death in a way that minimised her agency and highlighted an implicit sense of distress and decay is nauseating [1]. Jayalalithaa was known to have actively constructed and maintained her appearances in public and on TV to project a certain persona. With Sasikala’s and Vetrivel’s choices, this persona has been broken – which makes Vetrivel’s claim that Jayalalithaa consented to being filmed, and to that video being released to TV channels, triply suspect.

Jayalalithaa, when alive, took great care to make herself appear a certain way – going so far as to issue statements only to select members of the press, those whose words she could control. What would she have said now, with the image of a weakened, ailing Jayalalithaa being flashed everywhere?

There’s little doubt that Dinakaran and Vetrivel wanted to manipulate R.K. Nagar’s voters by releasing the clip barely a day before voting was to begin. Most people recognise that their faction of the AIADMK shouldn’t have released the video now; it should have been submitted much earlier, with proof of the footage’s legitimacy, to the Commission of Inquiry that has been investigating her death.

Then again, considering what has been caught on camera, consuming it has been nothing short of engaging in voyeurism. So the video shouldn’t have been shot in the first place, especially since there’s no proof of Jayalalithaa’s having consented to being filmed as well as to being shown thus on TV beyond what Vetrivel told the press about what Sasikala had told him.

For this alone, I hope the people of R.K. Nagar reject Dinakaran’s faction and its exploitative politics. But more importantly, I hope journalists recognise how seriously they’ve erred in showing Jayalalithaa the way they did – and helped Dinakaran achieve what he’d wanted to in the first place.

[1] This also happened with Eman Ahmed.

Featured image credit: Nandhinikandhasamy/Wikimedia Commons, CC BY-SA 3.0.

A close encounter with the first kind: the obnoxious thieves of good journalism

A Huffington Post article purportedly published by the US bureau has flicked two quotes from a story first published by The Wire, on the influenza epidemics ravaging India. The story’s original author and its editor (me) reached out to HuffPo India folks via Twitter to get them to attribute The Wire for both quotes – and to remove, rephrase or enclose in double-quotes a paragraph copied verbatim from the original. What this resulted in was half-assed acknowledgment: one of the quotes was attributed to The Wire, the other was left unattributed, giving the impression that it had been sourced first-hand, and the plagiarised paragraph was left in as is.

I’m delighted that The Wire’s story is receiving wider reach, and is being read by the people who matter around the world. (And I request you, the reader, to please share the original article and not the plagiarised version.)

But to acknowledge our requests for change and then to assume that attributing only one of the quotes will suffice is to suggest that “this is enough”. This is an offensive attitude that I think has its roots in complacence. Huffington Post could be assuming that partial attribution (and plagiarism) is ‘okay’ because nobody cares about these things, because readers are getting valuable information in return that will distract them from the theft, and because it’s Huffington Post and its traffic volumes will make up for the oversight.

For the average consumer – by which I mean someone who only consumes journalism and doesn’t produce it – does it matter that Huffington Post, in some sense, has cheated to get the content it has? I don’t think it does. (This is a problem; there should be specific short-term sanctions if a publisher chooses to behave this way. Edit: Priyanka Pulla, the original author: “It DOES hurt you, the reader. Each time you read bad journalism, it’s because content thieves destroy market for good journalism and skew incentives.”) However, if anything, the publisher effectively signals that consumers will be getting content produced in newsrooms other than the Post’s. The website is now a ‘destination’ site.

Who this kind of irreverence really hurts is other journalists. For example, Pulla spent a lot of time and work writing the piece, I spent a lot of time and work editing it, and The Wire spent a lot of money commissioning and publishing it. By treating our work as available to reuse for free, Huffington Post disparages the whole enterprise.

This enterprise is an intangible commodity – the kind that encourages readers to pay for journalism, because it’s the absence of this enterprise, and the attendant diligence, that leads to ‘bad journalism’. And at a time when every publisher of journalistic content on the internet is struggling to make money, what Huffington Post has done is value theft. At last check, the article on their site had 3,300 shares on LinkedIn and 5,100 on StumbleUpon.

(Edit: “We didn’t know” wouldn’t work with HuffPo here because my issue is with their response to our bringing the problems to their notice.)

This isn’t the first time such a thing has happened with The Wire. From personal experience (having managed the site for 18 months), there are three forms of content-stealing I’ve seen:

  1. The more obnoxious kind – where a publisher that has traffic in the millions every month lifts an article, or reuses parts of it, without permission; and when pulled up for it, gives this excuse: “We’re giving your content free publicity. You should let us do this.” The best response for this has been public-shaming.
  2. The more insidious kind – where a bot from an algorithmic publisher freely republishes content in bulk without permission, and then takes the content down 24-48 hours later once its shelf-life has lapsed. The most effective, and also the most blunt-edged, response to this has been to issue a DMCA notice.
  3. The more frustrating kind – where a small publisher (monthly traffic of 1 million or less and/or operating on a small budget) reuses some articles without permission and then pulls a sad face when pulled up for the act. The best response to this has been either to strike a deal with the publisher for content-exchange or a small fee or, of course, to send a strongly worded email (the latter is restricted to some well-defined circumstances because otherwise it’s The Wire strong-arming the little guy, and nobody likes that).

Dear Huffington Post – I dearly hope you don’t belong to the first kind.

Featured image credit: TheDigitalWay/pixabay.

ASI's note to Financial Exp. over eclipse article is naïve

The public outreach arm of the Astronomical Society of India (ASI) has written to the Financial Express expressing concern over their August 7 article, which advised people to fast during a lunar eclipse. The ASI called the article “anti-science”, requested that FE print a clarification and, finally, asked that it be given room to write an article about the science of eclipses. The full note is available to read here.

I’m glad that the ASI reached out to FE and offered their help to set things right instead of simply condemning FE or demanding that it retract the article. However, this may not be enough.

Consider the following paragraph from their note:

We are disappointed to find that a prestigious national paper like yours deems it fit to publish an article exhorting people to not eat during an eclipse, in this day and age. We wish you had checked with any science institution in the country before publishing such an article. Your own newspaper had published articles praising the science education and communication work done by the late Professors Yash Pal and Pushpa Bhargava merely a week ago. To go against this spirit in the same week by publishing such an article that is blatantly anti-science in nature, without even talking to scientists about it, is very sad indeed.

Frankly, apart from a few (that I can count on the fingers of one hand), every other MSM publication in India has little or no sense of balance when it comes to science communication. Publishing an article asking people to fast during eclipses, to bathe right after in cold water with their clothes on, or claiming that a planet could slam into Earth sometime this year and kill us all is not a matter of education.

Education requires taking responsibility for a group of people and conscientiously empowering them. What newspapers like FE are doing is simply capitalising on demand. If one section of the audience is interested in knowing more about which rituals to follow during an eclipse, FE & co. will give them what they want. That’s where the traffic is.

This is why it’s important to understand the business of journalism, especially if you’re a science writer because science journalism is among the most screwed-over areas of journalism in the country. If you fix the business – for example if you provide FE an incentive to publish au fait science stories that’s stronger than the incentive to generate more traffic (and presumably earn money by pleasing advertisers) – you will immediately have more room for good science journalism.

However, such an incentive is very difficult to provide because the profit-by-volume model has an almost complete stranglehold over Indian MSM today. And it’s important to remember that it’s not all bad. Yes, it forces editors to acquiesce every now and then (if not more often) to their business/financial interests, but it’s also what’s keeping most of the Indian journalism published on the internet alive. It’s one thing to say FE should grow a conscience but quite another to expect it to live off of one. It’s very difficult.

So a group like the ASI expecting a publication like FE to have “checked with any science institution in the country before publishing such an article” misses the point. ‘Checking with scientists’ makes for a boring story that’s also very difficult to sell. And FE was irresponsible for having given superstition such a big stage. But my point is that I’m reluctant to be angry with the article’s author or editor beyond this because the underlying problem is quite something else.

I admit I will be surprised if FE lets the scientists write an article about the “beautiful natural events” that eclipses are – but I will be way, way more surprised if it prints the clarification, and even more surprised if it retracts the original article. Better yet would be for both articles to live on FE’s pages with equal levels of qualification (i.e. none).

It’s like what the politician Rajeev Chandrasekhar, who funds Republic TV, said in an interview:

[Image: screenshot of Rajeev Chandrasekhar’s interview remarks on market share]

About AWS/Azure/GCP coming to India, etc.

Featured image: A data centre in San Antonio, Texas. Credit: scobleizer/Flickr, CC BY 2.0.

Interesting story by The Ken (paywall) on the effects AWS, Azure and GCP will have in India once Amazon, Microsoft and Google turn their gaze this way.

Data centre companies at least have 30-35% margins. The bigger companies like Netmagic, CtrlS, Tata Comm and Reliance have data centres in India. They provide colocation services—they let other cloud providers run their servers in their data centres. They lease it to everyone—be it Amazon Web Services (AWS), Azure, Google, E2E or even smaller companies. That is their cash cow. Of course, this is in addition to private cloud (dedicated resources for end users) and public cloud (shared resources) they offer.

Business has been stellar for the last 10 years or so. Well, up until recently.

With the overall push to digitisation, from banking to government, global cloud firms have doubled-down on their investments. Microsoft set up three data centres in September 2015; AWS settled for two data centres in July 2016, and Google plans to debut this year. For an everyday business, the focus has shifted to a concept called Infrastructure-as-a-Service (IaaS)—where you pay for what you use—something that was being used only by core tech companies and IT services providers so far.

A few points on it:

1. I feel this awareness, the intensifying of competition, may not be as sudden or as recent as we think. I’m not sure about AWS and Azure, but I remember using GCP in 2013 and they already had a credits system going, especially for small-scale developers. Even without that, it was still very cost-effective – but more importantly, it was the security it offered that clinched it. When I think of Indian cloud providers, security is the last thing that comes to mind (with uptime second-last and UX third-last).

2. Questions of data sovereignty and privacy are moot to me – the former because the bulk of data that moves around India that can’t be serviced by foreign IaaS providers is simply going to be self-hosted; the latter because there’s no reason to believe AWS/Azure/GCP will let my data be compromised. (Obviously I’m not factoring in NSA-level snooping because, even though it happened, the problem wasn’t the infrastructure.) Moreover, I’m also encouraged by the data trustee model Microsoft implemented in Germany, cognisant of data sovereignty issues.

3. If I’m using AWS to run a small blog – like a static site – then it’s going to cost me about $10 a month and almost no technical work to keep it going (after setting it up). But the moment I scale up and start using more than one EC2 instance, and also start looking at things like ELB, WAF and VPCs to make my site more efficient, I will either have to be a developer myself or hire one. And if I’m hiring a developer, I’m likelier to find better talent that works with AWS or Azure than with any other service. So if an Indian company is to beat them, its offering has to become PaaS-like in order to grow.

4. Because of the security issues outlined by The Ken, it’s curious to think that small-scale cloud providers – such as those offering ‘packaged apps’ like WordPress to run individual blogs – are threatened only by the likes of AWS/Azure/GCP. To me, they’re already under threat – if they haven’t already lost – if they’re not factoring in DigitalOcean, Vultr, Linode and even Bitnami (which provides a soup-to-nuts route to deploying popular stacks like, say, LAMP on AWS). The Wire was launched on DigitalOcean for $10 a month.

An opportunity to understand the GPL license

Featured image: Matt Mullenweg, 2009. Credit: loiclemeur/Flickr, CC BY 2.0.

Every December, I wander over to the blog of WordPress founder Matt Mullenweg to check what he’s saying about how the CMS will shape up in the next year. Despite my cribbing, as well as a constant yearning to be on Ghost, I’m still on WordPress and no closer to leaving than I ever was. And WordPress isn’t all that bad either (it runs The Wire, for example). In fact, I’m reminded of the words of a very talented developer we at The Wire were consulting with earlier this year. When I brought up the fact that PHP (the programming language in which WordPress is written) isn’t very conducive to scaling, he replied, “Anything can be built on anything.” So, for all its problems, WordPress does do some things well that other CMSs might usually struggle with.

Anyway, lest I digress further: in a post on October 28, Mullenweg described the impact of the GPL license on WordPress’s development as “fantastic” – possibly because, as Linus Torvalds, who created the Linux kernel, has noted, the GPL license enforces itself: code derived from GPL-licensed code also has to be GPL-licensed. As a result, those making modifications to WordPress for their own use could not wall themselves off – preventing fragmentation and, in the words of University of Pennsylvania law professor Christopher Yoo, preserving an environment that allows “multiple actors to pursue parallel innovation, which can improve the quality of the technical solution as well as increase the rate of technological change”.

GPL stands for ‘General Public License’, and it is widely applied to the creation, modification, deconstruction, use and distribution of software on the web. Mullenweg’s broader post was actually about him noticing how the UI of the mobile app of Wix, a platform that lets its users build websites with a few clicks, closely resembled WordPress’s own – and how there was, as a result, a glaring problem. In its composition, WordPress uses code that’s under the GPL. The GPL’s self-enforcement feature makes it a copyleft license: works derived from GPL-licensed work also have to be copyleft and distributed on the same terms. As a result, the code behind Wix’s mobile app had to be made publicly accessible immediately (by, say, publishing it on GitHub). It wasn’t.

Last I checked, the post had one update from Wix CEO Avishai Abrahami and 120 comments. All together, they illustrated how the terms of the license, though written in language lucid enough, were easy to confuse – and the sort of impediment that poses to its useful implementation. I spent an hour (probably more) going through all of it; if you’re interested in this sort of thing – or in learning something new – I highly recommend going through the comments yourself. Or, if you’d like something shorter (but also wider in scope), you could just keep reading.

The tale has four dramatis personae: the GPL license, the MIT license, Wix’s mobile app and WordPress’s code (plus a cameo appearance by the LGPL license). Code derived from GPL-compatible code also has to be GPL-compatible – a major requirement of which is that: “… you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things” (emphasis added). This is also the clause that’s missing from the MIT license. As a result, code that’s originally under the MIT license can later be licensed as GPL (i.e. its source code made available) but code that’s originally under the GPL cannot later be licensed as MIT (i.e. source code that a GPL license has made accessible cannot be hidden away by the MIT license) – unless all the relevant copyright holders are onboard with the shift.

Paul Sieminski, the general counsel of Automattic – the company Mullenweg founded, which runs – commented on Mullenweg’s post thus: “[Wix would] probably be in the clear if you had used just the original editor we started with (ZSSRichTextEditor, MIT licensed). Instead, Wix took our version of the editor which has 1000+ original commits on top of the original MIT editor, that took more than a year to write. We improved it. A lot. And Wix took those improvements, used them in their app… but then stripped out all of the important rights that they’re not legally allowed to take away. We’re just asking Wix to fix their mistake. Release the Wix Mobile App under a GPL license, and put the source code up on GitHub” (link added). So far so good.

Wix CEO Abrahami’s response – posted on his blog on Wix – though cordial, makes the mistake of being evasive and in denial at once. As many commenters pointed out, Mullenweg’s ask was simple and clearly articulated: bring the source code behind Wix’s mobile app under the GPL and upload it on GitHub. Abrahami, however, defended Wix’s decision to keep the source code proprietary by saying that it only used an open source library modified by WordPress (“that is the concept of open source right?”) for a “minor part” of the app, and that he would “release the app you [Matt] saw as well”. The latter statement should have resolved the dispute because GPL only mandates that the source code be made available when asked for – not necessarily on GitHub. George Stephanis, a developer at Automattic, added: “The source code has to be freely available to everyone that has the software. If you want a paywall, it has to treat the software and source as a unit — you can’t distribute the software, but then charge for the source code.”

Some commenters pointed out that Abrahami may have been confusing the GPL with the Lesser GPL (LGPL), and as a result may not have been entirely clear about the “viral” nature of the GPL. When code is LGPL-licensed, extensions to it needn’t be GPL-compatible. In the Wix case, for example, if the WordPress-modified open-source library had been LGPL-licensed instead of GPL-licensed, and the mobile app had used parts of it, then the app’s source code wouldn’t have to be GPL-compatible. In colloquial terms, the LGPL doesn’t infect code it is associated with the way the GPL does; it is less “viral”.

Nonetheless, I’d think it’s arguably harder to know Wix’s code has to be GPL-compatible, or even to know what the license on it ought to be, if it isn’t publicly available at all times. In support: the relevant part from the license’s preamble, which I quoted earlier, is “that [the users] know [they] can do these things”. I use the word ‘arguably’ not in the legal sense but in the spiritual one – the spirit being that of the free-software movement. And this is why I’m glad Mullenweg chose to hammer this issue out in public (via his blog) instead of via email. Moreover, I’m also glad that he didn’t initiate legal action immediately either: the conversation between Mullenweg, Abrahami and all the commenters – despite the occasional passive-aggressive animus – deserved to happen instead of the groups splintering off and blocking each other. The open source community always needs more unity.

Then again, the licenses that help sustain these communities could do more harm than good if they become too restrictive – especially when they fall out of step with changing governance practices. In striving to keep alive the open source ideals we’ve associated with Richard Stallman, a license that offers users too little freedom could drive a proliferation of alternatives that deprives useful software of its coherence. For example, Yoo writes in the paper I quoted from above,

… some restrictions on what people can do with open source operating systems are necessary if consumers are to enjoy the full benefits of competition and innovation. My point is not to suggest that open source software is inherently superior to proprietary software or vice versa. Both approaches have distinct virtues that appeal to different users. Moreover, any attempt to cast the policy debate as a choice between those polar extremes [i.e. open source and modular development] is based on a false dichotomy. Instead, the different modes for producing software platforms are better regarded as occupying different locations along a continuum running from completely unrestricted open source to completely proprietary closed source. Indeed, companies may even choose to pursue hybrid strategies that occupy multiple locations on this continuum simultaneously. The diversity of advantages associated with these different approaches suggests that consumers benefit if different companies are given the latitude to experiment with different governance models, with the presence of one open source platform serving as an important competitive safety valve.


Some thoughts on the nature of cyber-weapons

With inputs from Anuj Srivas.

There’s a hole in the bucket.

When someone asks for my phone number, I’m on alert, even if it’s so my local supermarket can tell me about new products on their shelves. The same goes for my email ID, so the taxi company I regularly use can send me ride receipts, or for permission to peek into my phone, if only to see what music I have installed – all vaults of information I haven’t been too protective about, but which have of late acquired a notorious potential to reveal things about me I never thought they could, and so passively.

It’s not everywhere, but those aware of the risks of possessing an account with Google or Facebook have been making polar choices: either wilfully surrendering information or wilfully withholding it – the neutral middle-ground is becoming mythical. Wariness of telecommunications is on the rise. In an effort to protect our intangible assets, we’re constantly creating redundant, disposable ones – extra email IDs, anonymous Twitter accounts, deliberately misidentified Facebook profiles. We know the Machines can’t be shut down, so we make ourselves unavailable to them. And we succeed to different extents, but none of us completely – there’s a bit of our digital DNA in government files, much like in the kompromat maintained by East Germany and the Soviet Union during the Cold War.

In fact, is there an equivalence between the conglomerates surrounding nuclear weapons and cyber-weapons? Solly Zuckerman (1904-1993), once Chief Scientific Adviser to the British government, famously said:

When it comes to nuclear weapons … it is the man in the laboratory who at the start proposes that for this or that arcane reason it would be useful to improve an old or to devise a new nuclear warhead. It is he, the technician, not the commander in the field, who is at the heart of the arms race.

These words are still relevant but could they have accrued another context? To paraphrase Zuckerman – “It is he, the programmer, not the politician in the government, who is at the heart of the surveillance state.”

An engrossing argument presented in the Bulletin of the Atomic Scientists on November 6 bore an uncanny parallel to one of whistleblower Edward Snowden’s indirect revelations about the National Security Agency’s activities. In the BAS article, nuclear security specialist James Doyle wrote:

The psychology of nuclear deterrence is a mental illness. We must develop a new psychology of nuclear survival, one that refuses to tolerate such catastrophic weapons or the self-destructive thinking that has kept them around. We must adopt a more forceful, single-minded opposition to nuclear arms and disempower the small number of people who we now permit to assert their intention to commit morally reprehensible acts in the name of our defense.

This is akin to the multiple articles that appeared following Snowden’s exposé in 2013 – that the paranoia-fuelled NSA was gathering more data than it could meaningfully process, much more data than might be necessary to better equip the US’s counterterrorism measures. For example, four experts argued in a policy paper published by the nonpartisan think-tank New America in January 2014:

Surveillance of American phone metadata has had no discernible impact on preventing acts of terrorism and only the most marginal of impacts on preventing terrorist-related activity, such as fundraising for a terrorist group. Furthermore, our examination of the role of the database of U.S. citizens’ telephone metadata in the single plot the government uses to justify the importance of the program – that of Basaaly Moalin, a San Diego cabdriver who in 2007 and 2008 provided $8,500 to al-Shabaab, al-Qaeda’s affiliate in Somalia – calls into question the necessity of the Section 215 bulk collection program. According to the government, the database of American phone metadata allows intelligence authorities to quickly circumvent the traditional burden of proof associated with criminal warrants, thus allowing them to “connect the dots” faster and prevent future 9/11-scale attacks.

Yet in the Moalin case, after using the NSA’s phone database to link a number in Somalia to Moalin, the FBI waited two months to begin an investigation and wiretap his phone. Although it’s unclear why there was a delay between the NSA tip and the FBI wiretapping, court documents show there was a two-month period in which the FBI was not monitoring Moalin’s calls, despite official statements that the bureau had Moalin’s phone number and had identified him. This undercuts the government’s theory that the database of Americans’ telephone metadata is necessary to expedite the investigative process, since it clearly didn’t expedite the process in the single case the government uses to extol its virtues.

So, just as nuclear weapons are presented as plausible but improbable threats fashioned to fuel the construction of evermore nuclear warheads, terrorists are presented as threats who can be neutralised by surveilling everything and by calling for companies to provide weakened encryption so governments can tap civilian communications more easily. This state of affairs also points to there being a cyber-congressional complex paralleling the nuclear-congressional complex – one that, on the one hand, exalts the benefits of being a nuclear power while, on the other, demands absolute secrecy and faith in its machinations.

However, there could be reason to believe cyber-weapons present a more insidious threat than their nuclear counterparts, a sentiment fuelled by challenges on three fronts:

  1. Cyber-weapons are easier to miss – and the consequences of their use are easier to disguise, suppress and dismiss
  2. Lawmakers are yet to figure out the exact framework of multilateral instruments that will minimise the threat of cyber-weapons
  3. Computer scientists have been slow to recognise the moral character and political implications of their creations

That cyber-weapons are easier to miss – and the consequences of their use are easier to disguise, suppress and dismiss

In 1995, Joseph Rotblat won the Nobel Prize for peace for helping found the Pugwash Conference against nuclear weapons in 1955. In his lecture, he lamented the role scientists had wittingly or unwittingly played in developing nuclear weapons, invoking those words of Zuckerman quoted above as well as going on to add:

If all scientists heeded [Hans Bethe’s] call there would be no more new nuclear warheads; no French scientists at Mururoa; no new chemical and biological poisons. The arms race would be truly over. But there are other areas of scientific research that may directly or indirectly lead to harm to society. This calls for constant vigilance. The purpose of some government or industrial research is sometimes concealed, and misleading information is presented to the public. It should be the duty of scientists to expose such malfeasance. “Whistle-blowing” should become part of the scientist’s ethos. This may bring reprisals; a price to be paid for one’s convictions. The price may be very heavy…

The perspectives of both Zuckerman and Rotblat were situated in the aftermath of the nuclear bombings that closed the Second World War. The ensuing devastation beggared comprehension in its scale and scope – yet its effects were there for all to see, all too immediately. The flattened cities of Hiroshima and Nagasaki became quick (but unwilling) memorials for the hundreds of thousands who were killed. What devastation is there to see for the thousands of Facebook and Twitter profiles being monitored, email IDs being hacked and phone numbers being trawled? What about it at all could appeal to the conscience of future lawmakers?

As John Arquilla writes on the CACM blog,

Nuclear deterrence is a “one-off” situation; strategic cyber attack is much more like the air power situation that was developing a century ago, with costly damage looming, but hardly societal destruction. … Yes, nuclear deterrence still looks quite robust, but when it comes to cyber attack, the world of deterrence after [the age of cyber-wars has begun] looks remarkably like the world of deterrence before Hiroshima: bleak. (Emphasis added.)

… the absence of “societal destruction” in cyber-warfare imposes less of a real burden upon the perpetrators and endorsers.

And records of such intangible devastations are preserved only in writing and in our memories, and can be quickly manipulated or supplanted by newer information and problems. Events that erupt as a result of illegally obtained information continue to be measured against their physical consequences – there’s a standing arrest warrant while the National Security Agency labours on, flitting between the shadows of SIPA, the Patriot Act and others like them. The violations creep in: easily withdrawn, easily restored, easily justified as counterterrorism measures, easily depicted to be something they aren’t.

That lawmakers are yet to figure out the exact framework of multilateral instruments that will minimise the threat of cyber-weapons

What makes matters frustrating is a multilateral instrument called the Wassenaar Arrangement (WA), originally drafted in 1995 to restrict the export of potentially malignant technologies left over from the Cold War, but which lawmakers resorted to in 2013 to prevent entities with questionable human-rights records from accessing “intrusion software” as well. In effect, the WA defines limits for its 41 signatories on what kinds of technology can be transferred among themselves – or not at all to non-signatories – based on the tech’s susceptibility to misuse. After 2013, the WA became one of the unhappiest pacts out there, persisting largely because of the confusion that surrounds it. There are three kinds of problems:

1. In its language – Unreasonable absolutes

Sergey Bratus, a research associate professor in the computer science department at Dartmouth College, New Hampshire, published an article on December 2 highlighting the WA’s failure to “describe a technical capability in an intent-neutral way” – with reference to the increasingly thin line (not just of code) separating a correct output from a flawed one, a line hackers have become adept at exploiting. Think of it like this:

Say there’s a computer, called C, which Alice uses for a particular purpose (like to withdraw cash if C were an ATM). C accepts an input called I and spits out an output called O. Because C is used for a fixed purpose, its programmers know that the range of values I can assume is limited (such as the four-digit PIN numbers used at ATMs). However, they end up designing the machine to operate safely for all known four-digit numbers and neglecting what would happen should I be a five-digit number. By some technical insight, a hacker could exploit this feature and make C spit out all the cash it contains using a five-digit I.

In this case, a correct output by C is defined only for a fixed range of inputs, with any output corresponding to an I outside of this range being considered a flawed one. However, programmatically, C has still only provided the correct O for a five-digit I. Bratus’s point is just this: we’ve no way to perfectly define the intentions of the programs that we build, at least not beyond the remits of what we expect them to achieve. How then can the WA aspire to categorise them as safe and unsafe?
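Bratus’s point can be made concrete with a toy sketch (in Python; the “ATM”, the stored PIN and the message layout are all hypothetical, invented for illustration): the program computes a perfectly well-defined output for an input its designers never meant it to receive, so nothing in the code itself marks the behaviour as flawed.

```python
# Toy model of Bratus's argument: the designers assume every message is a
# 4-digit PIN followed by an amount of at most 3 digits. They validate the
# PIN but never check the overall length of the message.

def parse_withdrawal(message: str) -> int:
    """Return the amount to dispense, or 0 on a bad PIN."""
    pin, amount = message[:4], message[4:]
    if pin != "1234":       # hypothetical stored PIN
        return 0
    return int(amount)      # no upper bound is ever enforced

# Intended input: amounts up to 999
print(parse_withdrawal("1234500"))    # -> 500

# Unintended five-digit amount: the program still produces a perfectly
# "correct", well-defined output; the flaw lives only in the unstated
# assumption about the input's range
print(parse_withdrawal("123450000"))  # -> 50000
```

Whether that second call is a correct output or an exploit depends entirely on intent – which is exactly the distinction the WA’s language cannot capture.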

2. In its purpose – Sneaky enemies

Speaking at Kiwicon 2015, New Zealand’s computer security conference, cyber-policy buff Katie Moussouris said the WA was underprepared to confront superbugs targeting computers connected to the Internet irrespective of their geographical location but the solutions for which could potentially emerge out of a WA signatory. A case in point that Moussouris used was Heartbleed, a vulnerability that achieved peak nuisance in April 2014. Its M.O. was to target the OpenSSL library, used by a server to encrypt personal information transmitted over the web, and force it to divulge the encryption key. To protect against it, users had to upgrade OpenSSL with a software patch containing the solution. However, such patches targeted against bugs of the future could fall under what the WA has defined simply as “intrusion software”, and for which officials administering the agreement will end up having to provide exemptions dozens of times a day. As Darren Pauli wrote in The Register,

[Moussouri] said the Arrangement requires an overhaul, adding that so-called emergency exemptions that allow controlled goods to be quickly deployed – such as radar units to the 2010 Haiti earthquake – will not apply to globally-coordinated security vulnerability research that occurs daily.
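Heartbleed itself followed the same pattern. A minimal sketch (in Python rather than OpenSSL’s C; the “memory” contents are invented for illustration): the server echoes back as many bytes as the client claims to have sent, trusting the claimed length instead of the payload’s actual size.

```python
# Stand-in for the server process's memory: the heartbeat buffer sits
# right next to sensitive material, as it did in OpenSSL's heap
MEMORY = bytearray(b"....." + b"SECRET-ENCRYPTION-KEY" + b"...")

def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    """Echo a heartbeat reply of `claimed_len` bytes, trusting the client."""
    buf = bytearray(MEMORY)
    buf[:len(payload)] = payload     # copy the client's payload in
    return bytes(buf[:claimed_len])  # reply with whatever follows it, too

# An honest client claims the true length and gets its payload back
print(heartbeat(b"ping!", 5))   # -> b'ping!'

# A malicious client over-claims the length and reads adjacent "memory"
leak = heartbeat(b"ping!", 40)
print(b"SECRET-ENCRYPTION-KEY" in leak)  # -> True
```

The fix – the OpenSSL patch – amounted to checking the claimed length against the real one before replying; it is that kind of defensive patch the WA’s “intrusion software” category risks ensnaring.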

3. In presenting an illusion of sufficiency

Beyond the limitations it places on the export of software, the signatories’ continued reliance on the WA as an instrument of defence has also been questioned. Earlier this year, India received some shade after hackers revealed that its – our – government was considering purchasing surveillance equipment from an Italian company that was selling the tools illegitimately. India wasn’t invited to be part of the WA; had it been, it would’ve been able to purchase the surveillance equipment legitimately. Sure, it doesn’t bode well that India was eyeing the equipment at all, but when it does so illegitimately, international human rights organisations have fewer opportunities to track violations in India or to haul authorities up for infractions. Legitimacy confers accountability – or at least the need to be accountable.

Nonetheless, despite an assurance (insufficient in hindsight) that countries like India and China would be invited to participate in conversations over the WA in future, nothing has happened. At the same time, extant signatories have continued to express support for the arrangement. “Offending” software came to be included in the WA following amendments in December 2013. States of the European Union enforced the rules from January 2015, while the US Department of Commerce’s Bureau of Industry and Security published a set of controls pursuant to the arrangement’s rules in May 2015 – which have been widely panned by security experts for being too broadly defined. Over December, however, the experts have begun to hope that National Security Adviser Susan Rice can persuade the State Department to push for more specific language in the WA at the plenary session in December 2016. The Departments of Commerce and Homeland Security are already onboard.

That computer scientists have been slow to recognise the moral character and political implications of their creations

Phillip Rogaway, a computer scientist at the University of California, Davis, penned an essay, published on December 12, titled The Moral Character of Cryptographic Work. Rogaway’s thesis centres on the increasing social responsibility of the cryptographer – as invoked by Zuckerman. He writes,

… we don’t need the specter of mushroom clouds to be dealing with politically relevant technology: scientific and technical work routinely implicates politics. This is an overarching insight from decades of work at the crossroads of science, technology, and society. Technological ideas and technological things are not politically neutral: routinely, they have strong, built-in tendencies. Technological advances are usefully considered not only from the lens of how they work, but also why they came to be as they did, whom they help, and whom they harm. Emphasizing the breadth of man’s agency and technological options, and borrowing a beautiful phrase of Borges, it has been said that innovation is a garden of forking paths. Still, cryptographic ideas can be quite mathematical; mightn’t this make them relatively apolitical? Absolutely not. That cryptographic work is deeply tied to politics is a claim so obvious that only a cryptographer could fail to see it.

And maybe cryptographers have missed the wood for the trees until now but times are a’changing.

On December 22, Apple publicly declared it was opposing a new surveillance bill that the British government is attempting to fast-track. The bill, should it become law, will require messages transmitted via the company’s iMessage platform to be encrypted in such a way that government authorities can access them if they need to but not anyone else – a fallacious presumption that Apple has called out as being impossible to engineer. “A key left under the doormat would not just be there for the good guys. The bad guys would find it too,” it wrote in a statement.
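Why a key left under the doormat can’t protect only the good guys is visible even in a toy model (one-time-pad XOR in Python, nothing like iMessage’s actual protocol; all keys and messages here are invented): if every message is also encrypted to a single escrow key, whoever obtains that one key reads everyone’s traffic.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy cipher: XOR the message with a key at least as long as it."""
    return bytes(d ^ k for d, k in zip(data, key))

ESCROW_KEY = secrets.token_bytes(32)   # the "key under the doormat"

def send(message: bytes, recipient_key: bytes) -> tuple[bytes, bytes]:
    # Mandated design: one ciphertext for the recipient, one for the escrow
    return xor(message, recipient_key), xor(message, ESCROW_KEY)

alice_key = secrets.token_bytes(32)
_, escrow_copy = send(b"meet at noon", alice_key)

# Anyone who steals the single escrow key, government agent or criminal,
# recovers the plaintext without ever touching Alice's key
print(xor(escrow_copy, ESCROW_KEY))  # -> b'meet at noon'
```

The escrow key is a single point of failure by construction – which is the engineering impossibility Apple’s statement points at.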

Similarly, in November this year, Microsoft resisted an American warrant to hand over some of its users’ data acquired in Europe by entrusting its servers to a German telecom company. As a result, any requests for data about German users using Microsoft to make calls or send emails, and originating from outside Germany, will now have to go through German lawmakers. At the same time, anxiety over requests from within the country is minimal, as Germany boasts some of the world’s strictest data-access policies.

Apple’s and Microsoft’s are welcome and important changes of tack. Both companies were featured in the Snowden/Greenwald stories as having folded under pressure from the NSA to open their data-transfer pipelines to snooping. That the companies also had little alternative at the time was glossed over by the scale of the NSA’s violations. In 2015, however, a clear moral as well as economic high-ground has emerged in the form of defiance: Snowden’s revelations were in effect a renewed vilification of Big Brother, and occupying that high-ground has become a practical option. After Snowden, not taking that option when there’s a chance to has come to mean passive complicity.

But apropos Rogaway’s contention: at what level can, or should, the cryptographer’s commitment be expected? Can smaller companies or individual computer-scientists afford to occupy the same ground as larger companies? After all, without the business model of data monetisation, privacy would be automatically secured – but the business model is what provides for the individuals.

Take the case of Stuxnet, the virus unleashed by what are believed to be agents of the US and Israel in 2009-2010 to destroy Iranian centrifuges suspected of being used to enrich uranium to explosive-grade levels. How many computer scientists spoke up against it? To date, no institutional condemnation has emerged*. Though the fact that neither the US nor Israel has publicly acknowledged its role in developing Stuxnet may have made it tough to judge who crossed a line, it was obvious that a deceptive bundle of code had been used as a weapon in an unjust war.

Then again, can all cryptographers be expected to comply? One of the threats the 2013 amendments to the WA attempt to tackle is dual-use technology (of which Stuxnet is an example, because the virus took advantage of its ability to mimic harmless code). Evidently such tech also straddles what Aaron Adams (PDF) calls “the boundary between bug and behaviour”. That engineers have had only tenuous control over these boundaries owes itself to imperfect yet blameless programming languages, as Bratus also asserts, and not to the engineers themselves. It is in the nature of a nuclear weapon, when deployed, to overshadow the simple intent of its deployers, rapidly overwhelming the already-weakened doctrine of proportionality – and in turn retroactively making that intent seem far, far more important. But in cyber-warfare, the agents are trapped in ambiguities over what a cyber-weapon even is, with what intent and for what purpose it was crafted – allowing its repercussions to seem anything from rapid to evanescent.

Or, as it happens, the agents are liberated.

*That I could find. I’m happy to be proved wrong.

Featured image credit: ikrichter/Flickr, CC BY 2.0.

An app for dissent in the 21st century

“Every act of rebellion expresses a nostalgia for innocence and an appeal to the essence of being.”

These words belong to the French philosopher Albert Camus, from his essay The Rebel (1951). The persistence of that appeal is rooted in our ease of access to it – as the holders of rights, as participants of democracies, as able workers, as rational spenders, etc. – as well as in our choosing to access it. During government crackdowns, it’s this choice that is penalised and its making that is discouraged. But in the 21st century, the act of rebellion has become doubly jeopardised: our choices are still heavily punished but our appeal to the essence of being is also under threat. The Internet, often idealised as a democratic institution, has accrued a layer of corporations intent on negotiating our richer use of it against our privacy. What, at a time like this, would be a consummate act of rebellion?

Engineers from the Delft University of Technology in the Netherlands have served up an unlikely candidate: an Android app. But unlike other apps, this one is autonomous, self-compiles, mutates and spreads itself – thus being able to actively evade capture or eradication while it goes about tapping on the essence of being. The hope is that it will render online censorship meaningless.

Times to build application from source code. Source: arXiv:1511.00444v2

The app is referred to as SelfCompileApp by its creators, Paul Brussee and Johan Pouwelse. It had a less-sophisticated predecessor named DroidStealth in 2014, also engineered at Delft. While DroidStealth could spread rapidly, it had less capacity to mutate because its source code couldn’t be modified, often leaving it with weakened camouflage. SelfCompileApp, on the other hand, boasts source code (available on GitHub) that can be altered – by others as well as by itself – to adapt to various hardware and software environments and self-compile, effectively tweaking and relaunching itself without losing sight of its purpose. Its creators claim this is a historic first.

A technical paper accompanying the release also describes the app’s mimetic skills and, formidably, innocuousness. Brussee and Pouwelse write: “A casual search or monitoring may not pick up on an app that looks like a calculator or is generally inconspicuous. Also separate pieces of code may be innocuous on their own, so it is only a matter of putting these together. A game could for example be embedded with a special launch pattern to open the encrypted content within.” It can also use mesh networks, sidestep app stores as a way to get into your phone, slip past probes looking for malicious code and make copies of itself. But the chief advantage it secures through all these capabilities is to not have to depend on human decisions to further its cause.

SelfCompileApp isn’t artificially intelligent but it is remarkably deadly because it could push already-nervous civil servants over the edge. The big question they’re dealing with in cybersecurity is what makes some lines of code a cyber-weapon. In the physical world, one of the trickiest examples of such ‘dual use’ is uranium, which can be purified to the level necessary for a nuclear power plant or for a nuclear missile – so inspectors are alert for that level of enrichment as well as for the centrifuges that perform the enriching. With software, even the simplest algorithms can be engineered to be dual-use; the cost of repurposing is invitingly low. As a result, governments’ tendency to err on the safe side could mean a lot of legitimate systems get trawled by the security net, as surveillance-technology exporters in the US are realising.
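How invitingly low that cost of repurposing is can be shown in a few lines (a hedged Python sketch; the function and its parameters are invented for illustration): the very same code is a sysadmin’s health check or an attacker’s reconnaissance tool, and nothing in the code distinguishes the two.

```python
import socket

def open_ports(host: str, ports: range, timeout: float = 0.2) -> list[int]:
    """Return the ports in `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the port accepted
                found.append(port)
    return found

# Run against your own server, this is monitoring; run against someone
# else's, it is port-scanning. Intent is the only difference.
# open_ports("127.0.0.1", range(20, 1025))
```

An export-control regime that tries to classify such code by capability alone will inevitably sweep up the benign uses along with the malign.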

A symmetric problem exists in governance. By all means, SelfCompileApp could support a non-violent form of legitimate dissent in the hands of the right people, replicating itself and persisting through the interwebs – carrying messages when physical infrastructure is malfunctioning, spreading them when physical infrastructure is proscribed. But in another form, a surveillance state could appropriate the app’s resilience to spy on its people in spite of whatever precautions they take to protect themselves. The app’s makers are cognisant of this: “A point for consideration is the minimisation of the use for harm of the app, and the risk for harm by use of the app.”

Currently, SelfCompileApp works only on the Android OS but iOS and Windows Phone builds are on the way, as well as an ability to cross-compile across all three platforms. Information can be inserted into and retrieved from the app, but Brussee and Pouwelse note it will take a developer, not a casual user, to perform these tasks. DroidStealth was also able to obfuscate the information but it’s unclear if future builds of SelfCompileApp will have the same functionalities.

Now there’s an app for dissent, too.

The Wire
November 7, 2015

Cybersecurity, a horse with no name

A cybersecurity visualisation tool at the Idaho National Laboratory. Credit: inl/Flickr, CC BY 2.0


When asked about the origins of America’s hit single ‘A horse with no name’ (1971), lyricist Dewey Bunnell said he wanted to capture the spirit of the hot, all-too-familiar dry desert around Arizona and California, which he’d driven through as a kid. The horse was a vehicle that would take him through the desert and away from the chaos of life.

Cybersecurity sounds like it could be that horse – IT infrastructure to effectively defend against the desert of cyber-weaponry – except we’ve probably only just seen a foal. When software is weaponised and used in cyber-attacks, we’re confronted with a threat we’ve not yet fully understood and are in no real position to understand, let alone effectively defend against. At the same time, even in this inchoate form, cyber-weapons pose threats we had better defend against or risk the shutdown of critical services. The only clear way forward seems to be one of survival, on an ad hoc basis. Not surprisingly, the key to understanding cybersecurity’s various challenges for its innumerable stakeholders lies in knowing what a cyber-weapon, a peril of the desert, is.

We don’t know.

Read the full article here.

Mobile network shutdowns could be human-rights violations


Who uses mobile phones, and for what? The biggest use case is friends and family using cell phones to communicate good news and bad – especially helpful during times of distress. They’re also used to access banking services, emergency services and social media, and in information-poor environments like the rural hinterland, to stay updated with essential government services and weather updates. Mobile-network shutdowns are also harmful for small businesses and impact TSP revenues. However, in the event of a shutdown, those who effect it are either not concerned about the consequences for legitimate activities or there isn’t a mechanism that allows them to reflect that concern.

As shutdowns become more frequent, the affected stakeholders are beginning to grapple with the fact that there are few legal sanctions holding the authorities back. Though the Telegraph Act (1885) and the Telecom Regulatory Authority of India Act (1997) specify the circumstances in which the government can submit shutdown requests to TSPs, there is no requirement that an independent body be constituted to approve or reject shutdown requests – in effect, no layer between the government and TSPs (in the USA, the Department of Homeland Security is required to ensure a shutdown request it signs off on is absolutely necessary). Nor does the law specify the circumstances in which TSPs can discuss requests or claim compensation for loss of revenue – especially important because the requests are mired in claims of “national security” – or in which citizens can engage with a grievance redressal mechanism. Though these concerns apply to ISPs as well, the Information Technology Act (2000) is more cognisant of the effects of Internet blockades.

Both Acts are concerned with regulating the provision and availability of networks, not their unavailability. In other words, there’s no ‘non-natural disaster response’ legislation that explicitly defines the extent to which state actors can interfere with the provision of public services to quell unrest.

Full article here.