Chromodynamics: Gluons are just gonzo

One of the more fascinating bits of high-energy physics is the branch of physics called quantum chromodynamics (QCD). Don’t let the big name throw you off: it deals with a bunch of elementary particles that have a property called colour charge. And one of these particles creates a mess of this branch of physics because of its colour charge – so much so that it participates in the story that it is trying to shape. What could be more gonzo than this? Hunter S. Thompson would have been proud.

Just as electrons have an electric charge, the particles studied by QCD have a colour charge. It doesn’t correspond to a colour of any kind; it’s just a funky name.

(Richard Feynman wrote about this naming convention in his book, QED: The Strange Theory of Light and Matter (p. 163, 1985): “The idiot physicists, unable to come up with any wonderful Greek words anymore, call this type of polarization by the unfortunate name of ‘color,’ which has nothing to do with color in the normal sense.”)

The fascinating thing about these QCD particles is that they exhibit a property called colour confinement. It means that particles with colour charge can’t ever be isolated: they’re always found in pairs or bigger clumps. They can be isolated in theory if the clumps are heated to the Hagedorn temperature: 1,000 billion billion billion K. But the bigness of this number has ensured that this temperature has remained theoretical. They can also be isolated in a quark-gluon plasma, a superhot, superdense state of matter that has been created fleetingly in particle physics experiments like those at the Large Hadron Collider. The particles in this plasma quickly recombine to form bigger particles, restoring colour confinement.

There are two kinds of particles that are colour-confined: quarks and gluons. Quarks come together to form bigger particles called mesons and baryons. The aptly named gluons are the particles that ‘glue’ the quarks together.

The force that acts between quarks is called the strong nuclear force. But to say it acts ‘between quarks and gluons’ would be misleading: the gluons actually mediate the strong nuclear force. A physicist would say that when two quarks exchange gluons, the quarks are being acted on by the strong nuclear force.

Because protons and neutrons are also made up of quarks and gluons, the strong nuclear force holds the nucleus together in every atom in the universe. Rearranging nuclei bound by this force releases enormous amounts of energy – as in the nuclear fission that powers atomic bombs and the nuclear fusion that powers the Sun. In fact, 99% of a proton’s mass comes from the energy of the strong nuclear force. The quarks contribute the remaining 1%; gluons are massless.

When you pull two quarks apart, you’d think the force between them will reduce. It doesn’t; it actually increases. This is very counterintuitive. The gravitational force exerted by Earth, for example, drops off the farther you get from it. The electromagnetic force between an electron and a proton decreases the more they move apart. The strong nuclear force is the odd one out: it grows stronger as the two particles it acts on move apart. Frank Wilczek called this a “self-reinforcing, runaway process”. This behaviour of the force is what makes colour confinement possible.

However, in 1973, Wilczek, David Gross and David Politzer found that the strong nuclear force increases in strength only up to a certain distance – around 1 fermi (0.000000000000001 metres, roughly the size of a proton). If the quarks are separated by more than a fermi, the force between them falls off drastically, but not completely. This is called asymptotic freedom: beyond this distance, the force drops off asymptotically towards zero, leaving the quarks nearly, but never completely, free. Gross, Politzer and Wilczek won the Nobel Prize for physics in 2004 for their work.

In the parlance of particle physics, what makes this behaviour possible is the fact that gluons emit other gluons. How else would you explain the strong nuclear force becoming stronger as the quarks move apart – if not for the gluons that the quarks are exchanging becoming more numerous as the distance increases?

This is the crazy phenomenon that you’re fighting against when you’re trying to set off a nuclear bomb. This is also the crazy phenomenon that will one day lead to the Sun’s death.

The first question anyone would ask now is – doesn’t asymptotic freedom violate the law of conservation of energy?

The answer lies in the nothingness all around us.

The vacuum of deep space in the universe is not really a vacuum. It has some energy of its own, which astrophysicists call ‘dark energy’. This energy manifests itself in the form of virtual particles: particles that pop in and out of existence, living for far less than a second before dissipating back into energy. When a charged particle pops into being, its charge attracts particles of the opposite charge towards itself and repels particles of the same charge. This is high-school physics.

But when a charged gluon pops into being, something strange happens. An electron has one kind of charge, the positive/negative electric charge. But a gluon carries a ‘colour’ charge and an ‘anti-colour’ charge, each of which can take one of three values. So the virtual gluon will attract other virtual gluons depending on their colour charges and intensify the colour charge field around it, and also change its own colour according to whichever particles are present. Had this been an electron, its electric charge and the opposite charge of the particle it attracted would have cancelled the field out.

This multiplication is what leads to the build-up of energy when we’re talking about asymptotic freedom.

Physicists refer to the three values of the colour charge as blue, green and red. (This is more idiocy – you might as well call them ‘baboon’, ‘lion’ and ‘giraffe’.) If a blue quark, a green quark and a red quark come together to form a hadron (a class of particles that includes protons and neutrons), then the hadron will have a colour charge of ‘white’, becoming colour-neutral. Anti-quarks have anti-colour charges: antiblue, antigreen, antired. When a red quark and an antired anti-quark meet, they will annihilate each other – but not so when a red quark and an antiblue anti-quark meet.

Gluons complicate this picture further because, in experiments, physicists have found that gluons behave as if they have both a colour and an anti-colour. In physical terms this doesn’t make much sense, but it does in mathematical terms (which we won’t get into). Let’s say a proton is made of one red quark, one blue quark and one green quark. The quarks are held together by gluons, which also have a colour charge. So when two quarks exchange a gluon, the colours of the quarks change. If a blue quark emits a blue-antigreen gluon, that quark turns green, whereas the (green) quark that receives the gluon turns blue. Ultimately, if the proton is ‘white’ overall, then the three quarks inside are responsible for maintaining that whiteness. This is the law of conservation of colour charge.
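If it helps, here’s a toy sketch of that colour bookkeeping in Python. It’s purely illustrative – real QCD calculations use the mathematics of SU(3) matrices, not string labels – and every name in it is mine, made up for the example.

```python
# Toy bookkeeping of colour conservation in a single gluon exchange.
# Purely illustrative: real QCD uses SU(3) matrices, not labels.

COLOURS = {"red", "green", "blue"}

def emit_gluon(quark_colour, new_quark_colour):
    """A quark changes colour by emitting a gluon that carries its old
    colour and the anti-colour of its new colour."""
    assert quark_colour in COLOURS and new_quark_colour in COLOURS
    return (quark_colour, "anti-" + new_quark_colour)

def absorb_gluon(quark_colour, gluon):
    """The absorbing quark must match the gluon's anti-colour; it comes
    out carrying the gluon's colour."""
    colour, anti_colour = gluon
    assert "anti-" + quark_colour == anti_colour, "colour mismatch"
    return colour

# A blue quark emits a blue-antigreen gluon and turns green; a green
# quark absorbs that gluon and turns blue. The pair still contains one
# blue and one green quark overall, so colour is conserved.
gluon = emit_gluon("blue", "green")            # ('blue', 'anti-green')
receiver_after = absorb_gluon("green", gluon)
print("emitter is now green; receiver is now", receiver_after)
```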

Gluons emit gluons because of their colour charges. When quarks exchange gluons, the quarks’ colour charges also change. In effect, the gluons are responsible for shuffling colour between the quarks. And because the gluons participate in the evolution of the very force they mediate, they’re just gonzo: they can even interact with themselves to give rise to new particles.

A gluon can split up into two gluons or into a quark-antiquark pair. Say a quark and an antiquark are joined together. If you try to pull them apart by supplying some energy, the gluon between them will ‘swallow’ that energy and split up into one antiquark and one quark, giving rise to two quark-antiquark pairs (and also preserving colour-confinement). If you supply even more energy, more quark-antiquark pairs will be generated.

For these reasons, the strong nuclear force is called a ‘colour force’: it manifests in the movement of colour charge between quarks.

In an atomic nucleus, say there is one proton and one neutron. Each particle is made up of three quarks. The quarks in the proton and the quarks in the neutron interact with each other because they are close enough to be colour-confined: the proton-quarks’ gluons and the neutron-quarks’ gluons interact with each other. So the nucleus is effectively one ball of quarks and gluons. However, one nucleus doesn’t interact with that of a nearby atom in the same way because they’re too far apart for gluons to be exchanged.

Clearly, this is quite complicated – not just for you and me but also for scientists, and for the supercomputers that perform these calculations for large experiments in which billions of protons are smashed into each other to see how the particles interact. Imagine: there are six types, or ‘flavours’, of quarks, each carrying one of three colour charges. Then there are the gluons, each carrying a combination of a colour and an anti-colour charge – nine naive pairings, of which eight independent combinations actually occur.

The Wire
September 20, 2017

Featured image credit: Alexas_Fotos/pixabay.

The significance of Cassini's end

Many generations of physicists, astronomers and astrobiologists are going to be fascinated by Saturn because of Cassini.

I wrote this on The Wire on September 15. I lied. Truth is, I don’t care about Saturn. In fact, I’m fascinated with Cassini because of Saturn. We all are. Without Cassini, Saturn wouldn’t have been what it is in our shared imagination of the planet as well as the part of the Solar System it inhabits. At the same time, without Saturn, Cassini wouldn’t have been what it is in our shared imagination of what a space probe is and how much they mean to us. This is significant.

The aspects of Cassini’s end that are relevant in this context are:

  1. The abruptness
  2. The spectacle

Both together have made Cassini unforgettable (at least for a year or so) and its end a notable part of our thoughts on Saturn. We usually don’t remember probes, their instruments and interplanetary manoeuvres during ongoing missions because we are appreciably more captivated by the images and other data the probe is beaming back to Earth. In other words, the human experience of space is mediated by machines, but when a mission is underway, we don’t engage with information about the machine and/or what it’s doing as much as we do with what it has discovered/rediscovered, together with the terms of humankind’s engagement with that information.

This is particularly true of the Hubble Space Telescope, whose images constantly expand our vision of the cosmos while very few of us know how the telescope actually achieves what it does.

From a piece I wrote on The Wire in July 2015:

[Hubble’s] impressive suite of five instruments, highly polished mirrors and advanced housing all enable it to see the universe in visible-to-ultraviolet light in exquisite detail. Its opaque engineering is inaccessible to most but this gap in public knowledge has been compensated many times over by the richness of its observations. In a sense, we no longer concern ourselves with how the telescope works because we have drunk our fill with what it has seen of the universe for us…

Cassini broke this mould by – in its finish – reminding us that it exists. And the abruptness of the mission’s end contributed to this. In contrast, consider the story of the Mars Phoenix lander. NASA launched Phoenix in August 2007; the mission ran until May 2010. It helped us understand Mars’s north polar region and the distribution of water ice on the planet. Its landing manoeuvre also helped NASA scientists validate the landing gear and techniques for future missions. However, there is some uncertainty about the mission’s last date. Phoenix sent its last proper signal on November 2, 2008. It was declared lost not on the same day but a week later, when attempts to reestablish contact with Phoenix failed. But the official declaration of ‘mission end’ came only in May 2010, when a NASA satellite’s attempts to reestablish contact failed.

Is it easier to deal with the death of someone because their death came suddenly? Does it matter if their body was found or not? For Phoenix, we have a ‘body’ (a hunk of metal lying dormant near the Martian north pole); for Cassini, we don’t. On the other hand, we don’t have a fixed date of ‘mission end’ for Phoenix but we do for Cassini, down to the last centisecond – a moment that will be memorialised at NASA one way or another.

Spectacle exacerbates this tendency to memorialise by providing a vivid representation of ‘mission end’ that has been shared by millions of people. Axiomatically, a memorial for Cassini – wherever one emerges – will likely evoke the same memories and emotions in a larger number of people, and all of those people will be living existences made congruent by the shared cognisance and interpretation of the ‘Cassini event’.

However, Phoenix’s ‘mission end’ wasn’t spectacular. The lander – sitting in one place, immobile – slowly faded to nothing. Cassini burnt up over Saturn. Interestingly, both probes experienced similar ‘deaths’ (though I am loth to use that word) in one sense: neither probe knew, the way an intelligence or AI could, that it was going to its death, but both their instrument suites fought against failing systems all guns blazing. Cassini only got the memorial upper hand because it could actively reorient itself in space (akin to the arms on our bodies) and because it was in an environment it was not designed for at all.

The ultimate effect is for humans to remember Cassini more vividly than they would Phoenix, as well as associate a temporality with that remembrance. Phoenix was a sensor, the nicotine patch for a chain-smoking planet (‘smoking’ being the semantic variable here). Cassini moved around – 2 billion km’s worth – and also completed a complicated sequence of orbits around Saturn in three dimensions in 13 years. Cassini represents more agency, more risk, more of a life – and what better way to realise this anthropomorphisation than as a time-wise progression of events with a common purpose?

We remember Cassini by recalling not one moment in space or time but a sequence of them. That’s what establishes the perfect context for the probe’s identity as a quasi-person. That’s also what shatters the glaze of ignorance crenellated around the object, bringing it unto fixation from transience, unto visibility from the same invisibility that Hubble is currently languishing in.

Featured image credit: nasahqphoto/Flickr, CC BY-NC-ND 2.0.

Starless city

Overheard three people in Delhi:

When you feel the rain fall, you feel the dirt pouring down on you, muck streaking down your face and clothes. It washes down the haze from the skies and you can finally breathe clean air and the sky gets so blue. The dust settles. Some 25 drops of water fell on my car and collected all the dirt in runnels. The next morning, my whole car had brown spots of dirt all over it. But the day after the rain, the Sun is really clear and bright in a nice way. You can finally see the stars at night. Like two or three of them!

Understanding what '400 years' stands for, through telescopes

This is how Leonard Digges described a telescope in 1571:

By concave and convex mirrors of circular [spherical] and parabolic forms, or by paires of them placed at due angles, and using the aid of transparent glasses which may break, or unite, the images produced by the reflection of the mirrors, there may be represented a whole region; also any part of it may be augmented so that a small object may be discerned as plainly as if it were close to the observer, though it may be as far distant as the eye can descrie. (source)

While it’s not clearly known who first invented the telescope – or if such an event even happened – Hans Lippershey is widely credited by historians with having installed two specially crafted lenses in a tube in 1608 “for seeing things far away as if they were nearby” (source). People would describe a telescope this way today as well. But the difference is that this definition captures much less of the working of a telescope today than of one built even a hundred years ago. For example, consider this description of how the CHIME (Canadian Hydrogen Intensity Mapping Experiment) radio telescope works:

To search for FRBs, CHIME will continuously scan 1024 separate points or “beams” on the sky 24/7. Each beam is sampled at 16,000 different frequencies and at a rate of 1000 times per second, corresponding to 130 billion “bits” of data per second to be sifted through in real time. The data are packaged in the X-engine and shipped via a high-speed network to the FRB backend search engine, which is housed in its own 40-foot shipping container under the CHIME telescope. The FRB search backend will consist of 128 compute nodes with over 2500 CPU cores and 32,000 GB of RAM. Each compute node will search eight individual beams for FRBs. Candidate FRBs are then passed to a second stage of processing which combines information from all 1024 beams to determine the location, distance and characteristics of the burst. Once an FRB event has been detected, an automatic alert will be sent, within seconds of the arrival of the burst, to the CHIME team and to the wider astrophysical community allowing for rapid follow up of the burst. (source)

I suppose this is the kind of advancement you’d expect in 400 years. And yes, I’m aware that I’ve compared an optical telescope to a radio telescope, but my point still stands. You’d see similar leaps between optical telescopes from 400 years ago and optical telescopes as they are today. I only picked the example of CHIME because I just found out about it.

Now, while the difference in sophistication is awesome, the detector component of CHIME itself looks like this:

Credit: CHIME Experiment

The telescope has no moving parts. It will passively scan patches of the sky, record the data and send it for processing. How the recording happens is derived directly from a branch of physics that didn’t exist until the early 20th century: quantum mechanics. And because we had quantum mechanics, we knew what kind of instrument to build to intercept whatever information about the universe we needed. So the data-gathering part itself is not something we’re in awe of. We might have been able to put something resembling the CHIME detector together 50 years ago if someone had wanted us to.

What I think we’re really in awe of is how much data CHIME has been built to gather in unit time and how that data will be processed. In other words, what really makes this leap of four centuries evident is the computing power we have developed. This also means that, going ahead, improving on CHIME will mean improving the detector hardware a little and improving the processing software a lot. (According to the telescope’s website, the computers connected to CHIME will be able to process data with an input rate of 13 TB/s. That’s already massive.)
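Just to get a feel for the numbers in the CHIME excerpt above, here’s a rough back-of-the-envelope check in Python. The bits-per-sample figure is my assumption (the excerpt doesn’t give one); I’ve used 8 bits because it roughly reproduces the quoted 130 billion bits per second.

```python
# Rough sanity check of the data rates quoted in the CHIME excerpt.
# The 8-bits-per-sample figure is an assumption, not from the excerpt.

beams = 1024                # separate "beams" on the sky
frequencies = 16_000        # frequency channels sampled per beam
samples_per_second = 1_000  # time samples per second
bits_per_sample = 8         # assumed sample size

samples = beams * frequencies * samples_per_second
bits = samples * bits_per_sample

print(f"{samples:.2e} samples per second")           # ~1.6e10
print(f"{bits / 1e9:.0f} billion bits per second")   # ~131, close to the quoted 130
```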

Making sense of quantum annealing

One of the tougher things about writing and reading about quantum mechanics is keeping up with how the meanings of some words change as they graduate from being used in the realm of classical mechanics – where things are what they look like – to that of the quantum – where we have no idea what the things even are. If we don’t keep up but remain fixated on what a word means in one specific context, then we’re likely to experience a cognitive drag that limits our ability to relearn, and reacquire, some knowledge.

For example, teleportation in the classical sense is the complete disintegration of an individual or object in one location in space and its reappearance in another almost instantaneously. In quantum mechanics, teleportation is almost always used to mean the simultaneous realisation of information at two points in space, not necessarily its transportation.

Another way to look at this: to a so-called classicist, teleportation means to take object A, subject it to process B and so achieve C. But when a quantumist enters the picture – claiming to take object A, subject it to a different process B* and so achieve C, and still calling it teleportation – we’re forced to jettison the involvement of process B or B* from our definition of teleportation. Effectively, teleportation to us goes from being A –> B –> C to being just A –> C.

Alfonso de la Fuente Ruiz, then an engineering student at the Universidad de Burgos, Spain, wrote in a 2011 article,

In some way, all methods for annealing, alloying, tempering or crystallisation are metaphors of nature that try to imitate the way in which the molecules of a metal order themselves when magnetisation occurs, or of a crystal during the phase transition that happens for instance when water freezes or silicon dioxide crystallises after having been previously heated up enough to break its chemical bonds.

So put another way, going from A –> B –> C to A –> C would be us re-understanding a metaphor of nature, and maybe even nature itself.

The thing called annealing has a similar curse upon it. In metallurgy, annealing is the process by which a metal is forced to recrystallise by heating it above its recrystallisation temperature and then letting it cool down. This way, the metal’s internal stresses are removed and the material becomes the stronger for it. Quantum annealing, however, is referred to by Wikipedia as a “metaheuristic”. A heuristic is any technique that lets people learn something by themselves. A metaheuristic, then, is any technique that produces a heuristic. It is commonly found in the context of computing. What could it have to do with the quantum nature of matter?

To understand whatever is happening first requires us to acknowledge that a lot of what happens in quantum mechanics is simply mathematics. This isn’t always because physicists are dealing with unphysical entities; sometimes it’s because they’re dealing with objects that exist in ways that we can’t even comprehend (such as in extra dimensions) outside the language of mathematics.

So, quantum annealing is a metaheuristic technique that helps physicists, for example, look for one specific kind of solution to a problem that has multiple independent variables and a very large number of ways in which they can influence the state of the system. This is a very broad definition. A specific instance where it could be used is to find the ground state of a system of multiple particles. Each particle’s ground state comes to be when that particle has the lowest energy it can have and still exist. When it is supplied a little more energy, such as by heating, it starts to vibrate and move around. When it is cooled, it loses the extra energy and returns to its ground state.

But in a larger system consisting of more than a few particles, a sense of the system’s ground state doesn’t arise simply by knowing what each particle’s ground state is. It also requires analysing how the particles’ interactions with each other modify their individual and cumulative energies. These calculations are performed using matrices with 2^N rows if there are N particles. It’s easy to see that the calculations can quickly become mind-boggling: if there are 10 particles, then the matrix is a giant grid with 1,048,576 cells. To avoid this, physicists take recourse to quantum annealing.
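To get a feel for how quickly this blows up, here’s a quick sketch that assumes, as in the example above, that each particle is a simple two-state system.

```python
# How the matrix describing N two-state particles grows.
# Assumes two basis states per particle, as in the example above.

def matrix_cells(n_particles):
    dim = 2 ** n_particles   # 2^N rows (and 2^N columns)
    return dim * dim         # total cells in the grid

for n in (2, 10, 20, 30):
    print(n, "particles:", f"{matrix_cells(n):,}", "cells")

# 10 particles -> 1,048,576 cells, the figure quoted above;
# 30 particles -> more than 10^18 cells, hopeless to write down directly.
```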

In the classical metallurgical definition of annealing, a crystal (object A) is heated beyond its recrystallisation temperature (process B) and then cooled (outcome C). Another way to understand this is by saying that for A to transform into C, it must undergo B, and then that B would have to be a process of heating. However, in the quantum realm, there can be more than one way for A to transform into C. A visualisation of the metallurgical annealing process shows how:

The x-axis marks time, the y-axis marks heat, or energy. The journey of the system from A to C means that, as it moves through time, its energy rises and then falls in a certain way. This is because of the system’s constitution as well as the techniques we’re using to manipulate it. However, say the system included a set of other particles (that don’t change its constitution), and that for those particles to go from A to C didn’t require conventional energising but a different kind of process (call it B*), and that B* is easier to compute when we’re trying to find C.

These processes actually exist in the quantum realm. One of them is called quantum tunneling. When the system – or let’s say a particle in the system – is going downhill from the peak of the energy mountain (in the graph), sometimes it gets stuck in a valley on the way, akin to the system being mostly in its ground state except in one patch, where a particle or some particles have knotted themselves up in a configuration such that they don’t have the lowest energy possible. This happens when the particle finds an energy level on the way down where it goes, “I’m quite comfortable here. If I’m to keep going down, I will need an energy-kick.” Such states are also called metastable states.

In a classical system, the particle will have to be given some extra energy to move up the energy barrier, and then roll on down to its global ground state. In a quantum system, the particle might be able to tunnel through the energy barrier and emerge on the other side. This is thanks to Heisenberg’s uncertainty principle, which states that a particle’s position and momentum (or velocity) can’t both be known simultaneously with arbitrary accuracy. One consequence of this is that, if we know the particle’s velocity with great certainty, then we can only suspect that the particle will pop up at a given point in spacetime with fractional surety. E.g., “I’m 50% sure that the particle will be in the metastable part of the energy mountain.”

What this also means is that there is a very small, but non-zero, chance that the particle will pop up on the other side of the mountain after having borrowed some energy from its surroundings to tunnel through the barrier.

In most cases, quantum tunneling is understood to be a problem of statistical mechanics. What this means is that it’s not understood at a per-particle level but at the population level. If there are 10 million particles stuck in the metastable valley, and if there is a 1% chance for each particle to tunnel out of the valley and come out the other side, then we might be able to say 1% of the 10 million particles will tunnel; the remaining 99% will be reflected back. There is also a strange energy conservation mechanism at work: the tunnelers will borrow energy from their surroundings and go through while the ones bouncing back will do so at a higher energy than they had when they came in.
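Here’s a toy illustration of that population-level statement, using the made-up numbers above. It’s just counting – each particle independently gets a 1% chance to tunnel – not real physics.

```python
# Toy illustration of the population-level view of tunneling described above.
# Each particle independently tunnels with probability p; we count how many
# of a large population make it through. The numbers are the ones in the text.

import random

random.seed(42)
n_particles = 10_000_000
p_tunnel = 0.01

tunnelled = sum(1 for _ in range(n_particles) if random.random() < p_tunnel)

print(tunnelled)                 # ~100,000, i.e. about 1% of the population
print(n_particles - tunnelled)   # ~9,900,000 reflected back (the other 99%)
```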

This means that in a computer that solves problems by transforming A to C in the quickest way possible, using quantum annealing to make that journey will be orders of magnitude more effective than using metallurgical (i.e. classical) annealing, because more particles will be delivered to their ground state and fewer will be left behind in metastable valleys. The annealing itself is a metaphor: just as a piece of metal recrystallises during annealing, a problematic quantum system resolves itself through quantum annealing.

To be a little more technical: quantum annealing is a set of algorithms that introduces new variables into the system (A) so that, with their help, the algorithms can find a shortcut for A to turn into C.

The world’s most famous quantum annealer is the D-Wave system. Ars Technica wrote this about their 2000Q model in January 2017:

Annealing involves a series of magnets that are arranged on a grid. The magnetic field of each magnet influences all the other magnets—together, they flip orientation to arrange themselves to minimize the amount of energy stored in the overall magnetic field. You can use the orientation of the magnets to solve problems by controlling how strongly the magnetic field from each magnet affects all the other magnets.

To obtain a solution, you start with lots of energy so the magnets can flip back and forth easily. As you slowly cool, the flipping magnets settle as the overall field reaches lower and lower energetic states, until you freeze the magnets into the lowest energy state. After that, you read the orientation of each magnet, and that is the solution to the problem. You may not believe me, but this works really well—so well that it’s modeled using ordinary computers (where it is called simulated annealing) to solve a wide variety of problems.
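The ‘simulated annealing’ the excerpt mentions is easy to sketch. Below is a minimal, illustrative Python version on a tiny grid of ‘magnets’ (spins that point up or down and prefer to align with their neighbours). The grid size and cooling schedule are arbitrary choices for the example, not anything D-Wave actually uses.

```python
# Minimal simulated annealing on a tiny grid of "magnets" (spins = +1/-1).
# Neighbouring spins want to align; we start hot, cool slowly, and read off
# a low-energy configuration at the end. Parameters are arbitrary.

import math
import random

random.seed(0)
N = 8  # 8x8 grid
spins = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(N)]

def delta_energy(grid, i, j):
    """Energy change from flipping the spin at (i, j), with wrap-around edges."""
    neighbours = (grid[(i - 1) % N][j] + grid[(i + 1) % N][j] +
                  grid[i][(j - 1) % N] + grid[i][(j + 1) % N])
    return 2 * grid[i][j] * neighbours

T = 5.0            # start hot: flips happen easily
while T > 0.01:    # cool slowly
    for _ in range(N * N):
        i, j = random.randrange(N), random.randrange(N)
        dE = delta_energy(spins, i, j)
        # Always accept downhill flips; accept uphill ones with a
        # probability that shrinks as the temperature drops.
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] *= -1
    T *= 0.95

# After cooling, most spins have settled into alignment (a low-energy state).
print(abs(sum(sum(row) for row in spins)))  # typically close to N*N = 64
```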

As the excerpt makes clear, an annealer can be used as a computer if system A is chosen such that it can evolve into different Cs. The more kinds of C there are possible, the more problems that A can be used to solve. For example, D-Wave can find better solutions than classical computers can for problems in aerodynamic modelling using quantum annealing – but it still can’t run Shor’s algorithm, the quantum algorithm that threatens widely used data encryption technologies. So the scientists and engineers working on D-Wave will be trying to augment their A such that Shor’s algorithm is also within reach.

Moreover, because of how 2000Q works, the same solution can be the result of different magnetic configurations – perhaps even millions of them. So apart from zeroing in on a solution, the computer must also figure out the different ways in which the solution can be achieved. But because there are so many possibilities, D-Wave must be ‘taught’ to identify some of them, all of them or a sample of them in an unbiased manner.

Thus, such are the problems that people working on the edge of quantum computing have to deal with these days.

(To be clear: the ‘A’ in the 2000Q is not a system of simple particles as much as it is an array of qubits, which I’ll save for a different post.)

Featured image credit: Engin_Akyurt/pixabay.

Gods of nothing

The chances that I’ll ever understand Hollywood filmmakers’ appetite for films based on ancient mythology are low, and even lower that I’ll find a way to explain the shittiness of what their admiration begets. I just watched Gods of Egypt, which released in 2016, on Netflix. From the first scene, it felt like one of those movies that a bunch of actors participate in (I can’t say if they perform) to have some screen time and, of course, some pay. It’s a jolly conspiracy, a well-planned party with oodles of CGI to make it pleasing on the eyes and the faint hope that you, the viewer, will be distracted from all the vacuity on display.

I’m a sucker for bad cinema because I’ve learnt so much about what makes good cinema good by noticing what it is that gets in the way. However, with Gods of Egypt, I’m not sure where to begin. It’s an abject production that is entirely predictable, entirely devoid of drama or suspense, entirely devoid of a plot worth taking seriously. Not all Egyptian, Greek, Roman, Norse, Celtic and other legends can be described by the “gods battle, mortal is clever, revenge is sweet” paradigm, so it’s baffling that Hollywood reuses it as much as it does. Why? What does it want to put on display?

It surely is neither historical fidelity nor entertainment, and audiences aren’t wowed by much these days unless you pull off an Avatar. There is a glut of Americanisms, including (but not limited to) the habit of wining one’s sorrows away, a certain notion of beauty defined by distinct clothing choices, the sole major black character being killed off midway, sprinklings of American modes of appreciation (such as in the use of embraces, claps, certain words, etc.), and so forth. And in all the respect that they have shown for the shades of Egyptian lore, which is none, what they have chosen to retain is the most (white) American Americanism of all: a self-apotheosising saviour complex, delivered by Geoffrey Rush as the Sun god Ra himself.


There seems to be no awareness among scriptwriters angling at mythologies of the profound, moving nuances at play in many of these tales – of, for example, the kind that Bryan Fuller and Michael Green are pulling off for TV based on Neil Gaiman’s book.

There is no ingenuity. In a scene from the film, a series of traps laid before a prized artifact are “meant to lure Horus’s allies to their deaths” – but they are breachable. In another – many others, in fact – a series of barriers erected by the best builders in the world (presumably) are surmounted by a lot of jumping. In yet another, an important character who was strong as well as wily at the beginning relies on just strength towards the end because, as he became supposedly smarter by appropriating the brain of the god of wisdom, he gave himself a breakable suit of armour. Clearly, someone’s holding a really big idiot ball here. It’s even flashing blue-red lights and playing ‘Teenage wasteland’.

Finally, Gods of Egypt makes no attempt even to deliver the base promise that some bad films make – that there will be a trefoil knot of a twist, or a moment of epiphany, or a well-executed scene or two – after which you might just be persuaded to consign your experience to the realm of lomography or art brut. I liked Clash of the Titans (2010) even though it displayed none of these things; it just took itself so seriously.

But no, this is the sort of film that happens when Donald Trump thinks he’s Jean-Michel Basquiat. It is a mistake, an unabashed waste of time that all but drools its caucasian privilege over your face. Seriously, the only black people in the film – apart from the one major guy that dies – are either beating drums or being saved. Which makes it all the more maddening. Remember Roger Christian’s Battlefield Earth (2000)? It was so bad – but it is still remembered because at the heart of its badness was an honest, if misguided, attempt by its makers to experiment, to exercise their agency as artists. A common complaint about the film is that Christian overused Dutch angles. I would have wept in relief if Gods of Egypt had done anything like that. Anything at all.