Why this ASTROSAT instrument could be a game-changer for high-energy astrophysics

On November 17, NASA announced that its Swift satellite had recorded its thousandth gamma-ray burst (GRB) – a milestone that speaks to how often these high-energy explosions, sometimes accompanied by the creation of black holes, occur in the observable universe, and in what ways.

Some five weeks before the announcement, Swift had observed a less symbolically significant GRB called 151006A. Its physical characteristics as logged and analysed by the satellite were quickly available, too, on a University of Leicester webpage.

On the same day as this observation, October 6, the 50-kg CZTI instrument onboard India’s ASTROSAT satellite came online. Like Swift, CZTI is tuned to observe and study high-energy phenomena like GRBs. And as with every instrument that has just opened its eyes to the cosmos, ISRO’s scientists were eager to put it to work and check whether it performed according to expectations. The Swift-spotted GRB 151006A provided just the opportunity.

CZTI stands for Cadmium-Zinc-Telluride Imager – a compound of cadmium, zinc and tellurium being a well-known industrial radiation detector. And nothing releases radiation as explosively as a GRB, which can outshine the light of whole galaxies in the few seconds that it lasts. The ISRO scientists pointed the CZTI at 151006A and recorded observations that they’d later compare against Swift’s records to see if they matched up. A good match would be validation, and a definite sign that the CZTI was working normally.

It was working normally, and how.

NASA has two satellites adept at measuring high-energy radiation coming from different sources in the observable universe – Swift and the Fermi Gamma-ray Space Telescope (FGST). Swift is good at detecting incoming photons with energies of up to 150 keV, but not so good at determining the peak energy of hard-spectrum emissions. In astrophysics, spectral hardness is defined by the position of the peak – in power emitted per decade in energy – in the emission spectrum of the GRB. This spectrum is essentially a histogram: the photons striking a detector are binned according to their energy, and a hard-spectrum emission is one whose peak sits at higher energies in that histogram. An example:

The plot of argon dense plasma emission is a type of histogram – where the intensity of photons is binned according to the energies at which they were observed. Credit: Wikimedia Commons
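
To make the histogram idea concrete, here is a minimal sketch in Python – the photon energies below are simulated, purely illustrative numbers, not Swift or CZTI data:

    import numpy as np

    # A spectrum is a histogram: photon counts binned by energy.
    rng = np.random.default_rng(0)
    energies = rng.gamma(shape=3.0, scale=60.0, size=10_000)   # fake photon energies, in keV

    counts, edges = np.histogram(energies, bins=50, range=(0, 600))
    peak = counts.argmax()
    print("spectrum peaks near", (edges[peak] + edges[peak + 1]) / 2, "keV")

The harder the spectrum, the further to the right – towards higher energies – that peak sits.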

FGST, on the other hand, is better equipped to detect emissions above 150 keV but not as good at quickly figuring out where in the sky the emissions are coming from. The quickness is important because GRBs typically last for a few seconds – a subcategory of them lasts for a few thousandths of a second – and then fade into a much duller afterglow of X-rays and other lower-energy emissions. So it’s important to find where in the sky a GRB is when the brighter flash occurs, so that other telescopes around the world can home in on the afterglow.

This blind spot between Swift and FGST is easily bridged by CZTI, according to ISRO. In fact, per a deceptively innocuous calibration notice put out by the organisation on October 17, CZTI boasts the “best spectral [capabilities] ever” for GRB studies in the 80-250 keV range. This means it can provide better spectral studies of long GRBs (which are usually soft) and better localisation of short, harder GRBs. Together, these capabilities give ASTROSAT a strong suite of simultaneous spectral and timing observations of high-energy phenomena.

There’s more.

Enter Compton scattering

The X-rays and gamma rays emanating from a GRB are simply photons with a very short wavelength (or, equivalently, a very high frequency). Apart from these characteristics, they also have a property called polarisation, which describes the plane along which the radiation’s electromagnetic waves are vibrating. Polarisation is very important when studying radiation that has travelled long distances through the universe, and how the alignment of intervening matter affects its path.

All these properties can be visualised according to the wave nature of radiation.

But in 1922, the American physicist Arthur Compton found that when high-frequency X-rays collided with free electrons, their frequency dropped a little (because some of their energy was transferred to the electrons). This discovery – celebrated for proving that electromagnetic radiation could behave like particles – also yielded an equation that let physicists calculate the angle at which the radiation was scattered, based on the change in its frequency. And because the direction in which a photon is likeliest to scatter depends on its polarisation, instruments sensitive to Compton scattering are also able to measure polarisation.
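
For reference, the relation Compton derived is usually written in terms of the photon’s wavelength before (λ) and after (λ′) it scatters through an angle θ – this is the standard textbook form, not a formula quoted in the ISRO note:

    \lambda' - \lambda = \frac{h}{m_e c}\,(1 - \cos\theta)

Here h is Planck’s constant, m_e the electron’s mass and c the speed of light; h/(m_e c) is about 2.43 x 10^-12 m, the Compton wavelength of the electron. Measure how much the wavelength (or frequency) changed, and the scattering angle follows.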

Observed count profile of Compton events during GRB 151006A. Source: IUCAA

This plot shows the number of Compton scattering events logged by CZTI while observing GRB 151006A; zero-time is the moment the GRB triggered Swift. That CZTI was able to generate this plot was evidence that it could make simultaneous observations of the timing, spectra and polarisation of high-energy events (especially in X-rays, up to 250 keV), lessening ISRO’s dependence on multiple satellites for different observations at different energies.

The ISRO note did clarify that no polarisation measurement was made in this case because about 500 Compton events were logged against the 2,000 needed for the calculation.

But that a GRB had been observed and studied by CZTI was broadcast on the Gamma-ray Coordinates Network:

V. Bhalerao (IUCAA), D. Bhattacharya (IUCAA), A.R. Rao (TIFR), S. Vadawale (PRL) report on behalf of the AstroSat CZTI collaboration:

Analysis of AstroSat commissioning data showed the presence of GRB 151006A (Kocevski et al. 2015, GCN 18398) in the Cadmium Zinc Telluride Imager. The source was located 60.7 degrees away from the pointing direction and was detected at energies above 60 keV. Modelling the profile as a fast rise and exponential decay, we measure T90 of 65s, 75s and 50s in the 60-80 keV, 80-100 keV and 100-250 keV bands respectively.

In addition, the GRB is clearly detected in a light curve created from double events satisfying Compton scattering criteria (Vadawale et al, 2015, A&A, 578, 73). This demonstrates the feasibility of measuring polarisation for brighter GRBs with CZTI.

That CZTI is a top-notch instrument doesn’t come as a big surprise: most of ASTROSAT’s instruments boast unique capabilities, and in some contexts are the best of their kind in space. For example, the LAXPC (Large Area X-ray Proportional Counter) instrument and NASA’s uniquely designed NuSTAR space telescope both log radiation in the 6-79 keV range coming from around black holes. While NuSTAR’s spectral abilities are superior, LAXPC’s radiation-collecting area is 10x as large.

On October 7-8, ISRO also used CZTI to observe the famous X-ray source Cygnus X-1 (believed to be a black hole) in the constellation Cygnus. The observation was made to coincide with NuSTAR’s study of the same object over the same period, allowing ISRO to calibrate CZTI’s functioning in the (roughly) 0-80 keV range and signalling the readiness of four of the six instruments onboard ASTROSAT.

The two remaining instruments: the Ultraviolet Imaging Telescope will switch on on December 10 and the Soft X-ray Telescope on December 13. From late December to September 2016, ISRO will use the satellite to make a series of observations before it becomes available to third parties, and finally to foreign teams in 2018.

The Wire
November 21, 2015

A new dawn for particle accelerators in the wake

During a lecture in 2012, G. Rajasekaran, professor emeritus at the Institute of Mathematical Sciences, Chennai, said that the future of high-energy physics lay with engineers being able to design smaller particle accelerators. The theories of particle physics have long been exploring energy levels that we might never be able to reach with accelerators built on Earth. At the same time, it will still fall to physicists to reach the energies we can reach, but in ways that are cheaper, more efficient and smaller – because reach them we will have to if our theories are to be tested. According to Rajasekaran, the answer is, or will soon be, the tabletop particle accelerator.

In the last decade, tabletop accelerators have inched closer to commercial viability because of a method called plasma wakefield acceleration. Recently, a peer-reviewed experiment detailing the effects of this method was performed at the University of Maryland (UMD) and the results were published in the journal Physical Review Letters. A team member said in a statement: “We have accelerated high-charge electron beams to more than 10 million electron volts using only millijoules of laser pulse energy. This is the energy consumed by a typical household lightbulb in one-thousandth of a second.” Ten MeV pales in comparison to what the world’s most powerful particle accelerator, the Large Hadron Collider (LHC), achieves – a dozen million MeV – but the UMD device isn’t intended to compete against the LHC. It is aimed at the room-sized accelerators typically used for medical imaging.

In a conventional accelerator like the LHC or the Stanford linac, a string of radiofrequency (RF) cavities is used to accelerate charged particles – around a ring, in the LHC’s case. Energy is delivered to the particles by powerful electromagnetic fields in the cavities, which switch polarity at 400 MHz – that’s 400 million times a second – with the particles’ arrival at the cavities timed accordingly. Over the course of about 15 minutes, the particle bunches are accelerated from 450 GeV to 4 TeV (the beam energy before the LHC was upgraded in 2014), going around the ring some 11,000 times every second. The cavities keep pumping energy into the particles – until computers bring two such beams into each other’s paths at a designated point inside the ring and BANG.
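
A quick back-of-the-envelope check of those figures, using only the numbers quoted above (a sketch, not LHC machine data):

    # 11,000 laps of a ~27-km ring every second
    circumference_m = 27_000
    revolutions_per_second = 11_000

    speed = circumference_m * revolutions_per_second   # metres per second
    print(speed)          # ~2.97e8 m/s
    print(speed / 3e8)    # ~0.99 -- the protons travel at very nearly the speed of light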

A wakefield accelerator also has an electromagnetic field that delivers the energy, but instead of ramping and switching over time, it delivers the energy in one big tug.

First, scientists create a plasma – a fluid-like state of matter consisting of free-floating ions (positively charged) and electrons (negatively charged). Then they shoot two bunches of electrons into it, separated by 15-20 micrometres (millionths of a metre). As the leading bunch moves through the plasma, it pushes away the plasma’s electrons and so creates a distinct electric field around itself called the wakefield. The wakefield envelops the trailing bunch of electrons as well, and exerts two forces on it: one along the direction of the leading bunch’s motion, which accelerates the trailing bunch, and one in the transverse direction, which either focuses or defocuses it. So as the two bunches shoot through the plasma, the leading bunch transfers its energy to the trailing bunch via the longitudinal component of the wakefield, and the trailing bunch speeds up.

A plasma wakefield accelerator scores over a bigger machine in two key ways:

  • The wakefield is a very efficient medium for transferring energy from the leading bunch to the trailing one – in effect, a transformer (though not as efficient as some natural media). Experiments at the Stanford Linear Accelerator Centre (SLAC) have recorded about 30% efficiency, which is considered high.
  • Wakefield accelerators have been able to push the energy gained per unit distance travelled by the particle to 100 GV/m (an accelerating gradient of 1 GV/m corresponds to an energy gain of 1 GeV for one electron over 1 metre). Assuming a realistic peak accelerating gradient of 100 MV/m, a similar gain (of 100 GeV) at the SLAC would have taken over a kilometre – see the worked numbers after this list.
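
A minimal sketch of that comparison in code, using only the gradients quoted above:

    # How far must an electron travel to gain 100 GeV at each gradient?
    energy_gain_eV = 100e9                 # 100 GeV
    rf_gradient_V_per_m = 100e6            # ~100 MV/m, a realistic peak for RF cavities
    wakefield_gradient_V_per_m = 100e9     # ~100 GV/m, reported for plasma wakefields

    print(energy_gain_eV / rf_gradient_V_per_m)         # 1,000 m -- about a kilometre
    print(energy_gain_eV / wakefield_gradient_V_per_m)  # 1 m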

There are many ways to push these limits – but it is historically almost imperative that we do. Could the leap in accelerating gradient by a factor of 100 to 1,000 break the slope of the Livingston plot?

Could the leap in accelerating gradient from RF cavities to plasma wakefields break the Livingston plot? Source: AIP

In the UMD experiment, scientists shot a laser pulse into a hydrogen plasma. The photons in the laser pulse then induced the wakefield that trailing electrons surfed and were accelerated through. To generate the same wakefield with less laser energy, the team made the plasma denser instead, capitalising on an effect called relativistic self-focusing.

A laser’s electromagnetic field, as it travels through the plasma, makes the electrons near it wiggle back and forth as the field’s waves pass through. The more intense waves near the pulse’s centre make the electrons there wiggle harder. Since Einstein’s theory of relativity requires objects moving faster to weigh more, the harder-wiggling electrons near the centre become heavier and respond more sluggishly – which changes the plasma’s refractive index along the pulse’s axis and makes the plasma behave like a lens, focusing the laser pulse onto itself. The denser the plasma, the stronger the self-focusing – a principle that lets weaker laser pulses in a denser plasma sustain a wakefield as strong as the one stronger pulses would drive in a thinner plasma.
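
The standard rule of thumb in laser-plasma physics – a textbook result, not a number from the UMD paper, and the prefactor is approximate – is that a pulse self-focuses once its power crosses a critical value:

    P_c \approx 17\,\left(\frac{\omega_0}{\omega_p}\right)^2\ \mathrm{GW} \;\propto\; \frac{n_c}{n_e}

Here ω_0 is the laser’s frequency, ω_p the plasma frequency, n_e the plasma’s electron density and n_c the critical density for that laser. Because ω_p grows with electron density, a denser plasma lowers the power a pulse needs in order to self-focus.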

The UMD team increased the density of the hydrogen gas from which the plasma is made by some 20x and found that electrons could be accelerated by 2-12 MeV using 10-50 millijoule laser pulses. The scientists also found that at high densities, the amplitude of the plasma wave driven by the laser pulse grows to the point where it traps some electrons from the plasma itself and continuously accelerates them to relativistic energies. This obviates the need for trailing electrons to be injected separately and increases the efficiency of acceleration.

But as with all accelerators, there are limitations. Two specific to the UMD experiment are:

  • If the plasma density goes beyond a critical threshold (1.19 x 10^20 electrons/cm^3) and the laser pulse is too powerful (>50 mJ), the electrons are accelerated more by the laser pulse directly than by the plasma wakefield. These numbers define an upper limit to the advantage of relativistic self-focusing.
  • The accelerated electrons slowly spread out (in the UMD case, the beam diverged by up to 250 milliradians) and so require separate structures to keep the beam focused – especially if it is to be used for biomedical purposes. (In 2014, physicists from the Lawrence Berkeley National Lab addressed this problem by using a 9-cm-long capillary waveguide through which the plasma was channelled.)

There is another way lasers can be used to build an accelerator. In 2013, physicists from Stanford University devised a small glass channel, 0.075-0.1 micrometres wide, etched with nanoscale ridges on its floor. When they shone infrared light with a wavelength twice the channel’s height across it, the EM field of the light wiggled the electrons back and forth – but the ridges on the floor were cut such that electrons passing over the crests would accelerate more than they would decelerate while passing over the troughs. In this way, the physicists achieved an energy-gain gradient of 300 MeV/m. The accelerator is only a few millimetres long and devoid of any plasma, which is difficult to handle.

At the same time, this method shares a shortcoming with the (non-laser driven) plasma wakefield accelerator: both require the electrons to be pre-accelerated before injection, which means room-sized pre-accelerators are still in the picture.

Physical size is an important aspect of particle accelerators because, the way we’re building them, the higher-energy ones are massive. The LHC currently collides particles at 13 TeV (1 TeV = 1 million MeV) in a 27-km-long underground tunnel running beneath the France-Switzerland border. The Circular Electron-Positron Collider planned in China envisages a 54.7-km-long ring, whose tunnel could eventually host a proton collider reaching about 100 TeV. (Both the LHC and the CEPC involve pre-accelerators that are quite big – though not as big as the final-stage ring.) The International Linear Collider will comprise a straight tube, instead of a ring, over 30 km long, to achieve collision energies of 500 GeV to 1 TeV. In contrast, Georg Korn suggested in APS Physics in December 2014 that a hundred 10-GeV electron acceleration modules could be lined up against a hundred 10-GeV positron acceleration modules to build a collider that could compete with the ILC – from atop a table.

In all these cases, the net energy gain per distance travelled by the accelerated particle is low compared to the gain in wakefield accelerators: 250 MV/m versus 10-100 GV/m. This is the physical difference that translates into a great reduction in cost (from billions of dollars to thousands), which in turn stands to make particle accelerators accessible to a wider range of people. As of 2014, there were at least 30,000 particle accelerators around the world – up from 26,000 in 2010, according to a Physics Today census. More importantly, the census estimated that almost half of them were being used for medical imaging and research, such as in radiotherapy, while the really high-energy devices (>1 GeV) used for physics research numbered a little over 100.

These are encouraging numbers for India, which imports 75% of its medical imaging equipment, at a cost of more than Rs 30,000 crore a year (as of 2015). They are also encouraging for developing nations in general that want to get in on experimental high-energy physics – innovations in which power a variety of applications, from cleaning coal to detecting WMDs – not to mention expand their medical imaging capabilities.

Featured image credit: digital cat/Flickr, CC BY 2.0.

Is the universe as we know it stable?

The anthropic principle has been a cornerstone of fundamental physics, used by some physicists to console themselves about why the universe is the way it is: tightly sandwiched between two dangerous states. If the laws and equations that define it had slipped just one way or the other during its formation, humans wouldn’t have existed to observe the universe – or to conceive the anthropic principle. At least, this is the weak anthropic principle: we’re talking about the anthropic principle because the universe allowed humans to exist, or we wouldn’t be here. The strong anthropic principle holds that the universe is duty-bound to conceive life, and that if another universe were created along the same lines as ours, it would conceive intelligent life too, give or take a few billion years.

The principle has been repeatedly resorted to because physicists are at that juncture in history where they’re not able to tell why some things are the way they are and – worse – why some things aren’t the way they should be. The latest significant addition to this list, and an illustrative example, is the Higgs boson, whose discovery was announced on July 4, 2012, at the CERN supercollider LHC. The Higgs boson’s existence was predicted by three independently working groups of physicists in 1964. In the intervening decades, from hypothesis to discovery, physicists spent a long time trying to pin down its mass. The now-shut American particle accelerator Tevatron helped speed up this process, using repeated measurements to steadily narrow down the range of masses in which the boson could lie. It was eventually found at the LHC, weighing 125.6 GeV (a proton weighs about 0.94 GeV).

It was a great moment – the discovery of a particle that completed the Standard Model, the group of theories and equations that governs the behaviour of fundamental particles. It was also a problematic moment for some, who had expected the Higgs boson to weigh much, much more. The mass of the Higgs boson is connected to the energy of the universe (because the Higgs field that generates the boson pervades the universe), so by some calculations 125.6 GeV implied that the universe should be the size of a football. Clearly it isn’t, so physicists got the sense something was missing from the Standard Model that would’ve been able to explain the discrepancy. (In another example, physicists have invoked the Higgs boson’s discovery to try to explain why there is more matter than antimatter in the universe even though both were created in equal amounts.)

The energy of the Higgs field also contributes to the scalar potential of the universe. A good analogy lies with the electrons in an atom. Sometimes, an energised electron sees fit to lose some of its extra energy in the form of a photon and jump to a lower-energy state. At other times, a lower-energy electron can gain some energy and jump to a higher state, a phenomenon commonly observed in metals (where the higher-energy electrons contribute to conducting electricity). Just as electrons can have different energies, the scalar potential defines a sort of energy that the universe as a whole can have. It’s calculated from the properties of all the fundamental forces of nature: strong nuclear, weak nuclear, electromagnetic, gravitational and Higgs.

For the last 13.8 billion years, the universe has existed in a way that’s been essentially unchanged, so we know that it is sitting at a minimum of the scalar potential. The apt image is of a mountain range, like so:

A mountain range of peaks and valleys: the shape of the scalar potential.

The point is to figure out whether the universe is lying at the deepest point of the potential – the global minimum – or at a point that’s the deepest in its own neighbourhood but not the deepest overall – a local minimum. This is important for two reasons. First: the universe will always, always try to get to the lowest energy state. Second: quantum mechanics. By the principles of classical mechanics, if the universe were to get to the global minimum from a local minimum, its energy would first have to be increased so it could surmount the intervening peaks. But by the principles of quantum mechanics, the universe can tunnel through the intervening peaks and sink into the global minimum. And such tunnelling can occur only if the universe is currently sitting in a local minimum.
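
A toy illustration of the distinction, in code – this is a made-up one-dimensional potential, not the Standard Model’s actual scalar potential:

    import numpy as np

    # A one-dimensional potential with two valleys: a shallow local minimum
    # and a deeper global minimum (coefficients chosen purely for illustration).
    def V(x):
        return x**4 - 4 * x**2 + x

    x = np.linspace(-3, 3, 10_001)
    v = V(x)

    # The deepest point overall is the global minimum; the other valley is only
    # the deepest in its own neighbourhood -- a local minimum.
    print("global minimum near x =", round(float(x[np.argmin(v)]), 2))   # ~ -1.47

A ball – or a universe – sitting in the shallower valley is classically stuck there; quantum mechanically, it can tunnel into the deeper one.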

To find out, physicists try to calculate the shape of the scalar potential in its entirety. This is an intensely complicated mathematical process and takes lots of computing power, but that’s beside the point. The biggest problem is that we don’t know enough about the fundamental forces, and we don’t know anything about what else could be out there at higher energies. For example, it took an accelerator capable of boosting particles to 3,500 GeV and then smashing them head-on to discover a particle weighing 125 GeV. Discovering anything heavier – i.e. more energetic – would take ever more powerful colliders costing many billions of dollars to build.

Almost sadistically, theoretical physicists have predicted that there exists an energy level at which the gravitational force unifies with the strong nuclear, weak nuclear and electromagnetic forces to become one indistinct force: the Planck scale, at about 1.22 x 10^19 GeV. We don’t know the mechanism of this unification, and its rules are among the most sought-after in high-energy physics. Last week, Chinese physicists announced that they were planning to build a supercollider bigger than the LHC, the Circular Electron-Positron Collider (CEPC), starting 2020 – a ring 54.7 km long whose tunnel could eventually host a proton collider reaching about 100,000 GeV, more than 7x the energy at which the LHC collides particles now. Given the way we’re building our most powerful particle accelerators, one able to smash particles together at the Planck scale would have to be as large as the Milky Way.

(Note: 1.22 x 10^19 GeV is roughly the energy released when 57.2 litres of gasoline are burnt, which is not a lot of energy at all. The trick is to pack that much energy into a particle as big as the proton, whose diameter is about 10^-15 m. That works out to an energy density of around 10^64 GeV/m^3.)
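
A back-of-the-envelope check of the note’s figures – the gasoline energy density, roughly 34 MJ per litre, is an assumed typical value:

    # The Planck energy in everyday units
    GeV_in_joules = 1.602e-10
    planck_energy_J = 1.22e19 * GeV_in_joules          # ~1.95e9 J

    gasoline_J_per_litre = 34.2e6                      # assumed typical figure
    print(planck_energy_J / gasoline_J_per_litre)      # ~57 litres of gasoline

    # ...and as an energy density inside a proton-sized sphere
    proton_diameter_m = 1e-15
    proton_volume_m3 = (4 / 3) * 3.14159 * (proton_diameter_m / 2) ** 3
    print(1.22e19 / proton_volume_m3)                  # ~2e64 GeV per cubic metre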

We also don’t know how the Standard Model scales from the energy levels it currently inhabits up to the Planck scale. If it changes significantly along the way, then the forces’ contributions to the scalar potential will change too. Physicists think that if any new bosons – essentially new forces – appear along the way, then the equations defining the scalar potential, our picture of the peaks and valleys, will themselves have to be changed. This is why physicists want to arrive at more precise values of, say, the mass of the Higgs boson.

Or the mass of the top quark. While force-carrying particles are called bosons, matter-forming particles are called fermions. Quarks are a type of fermion; together with force-carriers called gluons, they make up protons and neutrons. There are six kinds, or flavours, of quarks, and the heaviest is the top quark – in fact, the heaviest known fundamental particle. The top quark’s mass is particularly important. All fundamental particles get their mass from interacting with the Higgs field – the stronger the interaction, the higher the mass generated. So a precise measurement of the top quark’s mass pins down the Higgs field’s strongest level of interaction, or “loudest conversation”, with a fundamental particle, which in turn feeds into the scalar potential.

On November 9, a group of physicists from Russia published the results of an advanced scalar-potential calculation to find out where the universe really lies: in a local minimum or in the stable global minimum. They found that the universe is in a local minimum. The calculations were “advanced” because they used the best available estimates of the properties of the various fundamental forces, as well as of the Higgs boson and the top quark, but they’re still not final because those estimates could still shift. Hearteningly enough, the physicists also found that if the true values differed from our best estimates by just 1.3 standard deviations, our universe would turn out to be in the global minimum and be truly stable. In other words, the universe is situated in a shallow valley on one side of a peak of the scalar potential, and right on the other side lies the deepest valley of all, where it could sit for ever.

If the Russian group’s calculations are right (though there’s no quick way for us to know if they aren’t), then there could be a distant future – in human terms – in which the universe tunnels through from the local to the global minimum and enters a new state. Just as we’ve assumed that the laws and forces of nature haven’t changed in the last 13.8 billion years, we can also assume that in that fully stable state these laws and forces would be different, in ways we can’t predict now. The changes would sweep from one part of the universe into the others at the speed of light, like a shockwave, redefining all the laws that let us exist. One moment we’d be around, and gone the next. For all we know, that breadth of 1.3 standard deviations between our measurements of particles’ and forces’ properties and their true values could be the breath of our lives.

The Wire
November 11, 2015

An app for dissent in the 21st century

“Every act of rebellion expresses a nostalgia for innocence and an appeal to the essence of being.”

These words belong to the French philosopher Albert Camus, from his essay The Rebel (1951). The persistence of that appeal is rooted in our ease of access to it – as holders of rights, as participants in democracies, as able workers, as rational spenders, etc. – as well as in our choosing to access it. During government crackdowns, it’s this choice that is penalised and its making that is discouraged. But in the 21st century, the act of rebellion has become doubly jeopardised: our choices are still heavily punished, but our appeal to the essence of being is also under threat. The Internet, often idealised as a democratic institution, has accrued a layer of corporations intent on negotiating our richer use of it against our privacy. What, at a time like this, would be a consummate act of rebellion?

Engineers from the Delft University of Technology in the Netherlands have served up an unlikely candidate: an Android app. But unlike other apps, this one is autonomous – it self-compiles, mutates and spreads itself – and is thus able to actively evade capture or eradication while it goes about tapping into the essence of being. The hope is that it will render online censorship meaningless.

Times to build application from source code. Source: arXiv:1511.00444v2

The app is referred to as SelfCompileApp by its creators, Paul Brussee and Johan Pouwelse. It had a less sophisticated predecessor in 2014 named DroidStealth, also engineered at Delft. While DroidStealth could move around rapidly, it had less capacity to mutate because its source code couldn’t be modified, often leaving it with weakened camouflage. SelfCompileApp, on the other hand, boasts source code (available on GitHub) that can be altered – by others as well as by itself – to adapt to various hardware and software environments and self-compile, effectively letting it tweak and relaunch itself without losing sight of its purpose. Its creators claim this is a historic first.

A technical paper accompanying the release also describes the app’s mimetic skills and, formidably, its innocuousness. Brussee and Pouwelse write: “A casual search or monitoring may not pick up on an app that looks like a calculator or is generally inconspicuous. Also separate pieces of code may be innocuous on their own, so it is only a matter of putting these together. A game could for example be embedded with a special launch pattern to open the encrypted content within.” The app can also use mesh networks, sidestep app stores as a way to get onto your phone, slip past probes looking for malicious code, and make copies of itself. But the chief advantage all these capabilities secure for it is that it doesn’t have to depend on human decisions to further its cause.

SelfCompileApp isn’t artificially intelligent, but it is remarkably deadly because it could push already-nervous civil servants over the edge. The big question they’re dealing with in cybersecurity is what makes some lines of code a cyberweapon. In the physical world, one of the trickiest examples of this ‘dual-use’ problem is uranium, which can be purified to the level needed for a nuclear power plant or for a nuclear missile – so inspectors are alert for that level of enrichment, as well as for the centrifuges that perform the enriching. With software, even the simplest algorithms can be engineered to be dual-use; the cost of repurposing is invitingly low. As a result, governments’ tendency to err on the side of caution could mean a lot of legitimate systems get trawled up by the security net, as surveillance-technology exporters in the US are realising.

A symmetric problem exists in governance. By all means, SelfCompileApp could support a non-violent form of legitimate dissent in the hands of the right people, replicating itself and persisting through the interwebs – when physical infrastructure is malfunctioning, by carrying messages; when physical infrastructure is proscribed, by spreading them. But in other hands, a surveillance state could appropriate the app’s resilience to spy on its people in spite of whatever precautions they take to protect themselves. The app’s makers are cognisant of this: “A point for consideration is the minimisation of the use for harm of the app, and the risk for harm by use of the app.”

Currently, SelfCompileApp works only on the Android OS, but iOS and Windows Phone builds are on the way, as is the ability to cross-compile across all three platforms. Information can be inserted into and retrieved from the app, but Brussee and Pouwelse note that it will take a developer, not a casual user, to perform these tasks. DroidStealth was also able to obfuscate the information it carried; it’s unclear if future builds of SelfCompileApp will have the same functionality.

Now there’s an app for dissent, too.

The Wire
November 7, 2015