Superconductivity: From Feshbach to Fermi

(This post is continued from this one.)

After a bit of searching on Wikipedia, I found that the fundamental conceptual underpinnings of superconductivity could be traced to a scattering phenomenon called the Feshbach resonance. If I had to teach superconductivity to those who knew of the phenomenon only superficially, that's where I'd begin. So.

Imagine a group of students who have gathered in a room to study together for a paper the next day. Usually, there is that one guy among them who will be hell-bent on gossiping more than studying, affecting the performance of the rest of the group. In fact, given sufficient time, the entire group's interest will gradually shift in the direction of the gossip and away from its syllabus. The way to get the entire group back on track is to introduce a Feshbach resonance: cut the bond between the group's interest and the entity causing the disruption. If done properly, the group will become coherent in its interest and focus on studying for the paper.

In multi-body systems, such as a conductor harboring electrons, the presence of a Feshbach resonance renders an internal degree of freedom independent of those coordinates "along" which dissociation is most likely to occur. And in a superconductor, a Feshbach resonance results in each electron pairing up with another (i.e., lattice vibrations are quelled by eliminating thermal excitation) owing to both being influenced by an attractive potential that arises out of the electrons' interaction with the vibrating lattice.

Feshbach resonance & BCS theory

For particulate considerations, the lattice vibrations are quantized in the form of quasiparticles called phonons. As for why the Feshbach resonance must occur the way it does in a superconductor: that is the conclusion, or rather the implication, of the BCS theory formulated in 1957 by John Bardeen, Leon Neil Cooper, and John Robert Schrieffer.

(Arrows describe the direction of forces acting on each entity) When a nucleus, N, pulls electrons, e, toward itself, it may be said that the two electrons are pulled toward a common target by a common force. Therefore, the electrons’ engagement with each other is influenced by N. The energy of N, in turn, is quantified as a phonon (p), and the electrons are said to interact through the phonons.

The BCS theory essentially treats electrons like rebellious, teenage kids (I must be getting old). As negatively charged electrons pass through the crystal lattice, they draw the positively charged nuclei toward themselves, creating an increase in the positive charge density in their vicinity that attracts more electrons in turn. The resulting electrostatic pull is stronger near nuclei and very weak at larger distances. The BCS theory states that two electrons that would otherwise repel each other will pair up in the face of such a unifying electrostatic potential, howsoever weak it is.

This is something like rebellious teens who, in the face of a common enemy, will unite with each other no matter what the differences between them earlier were.

Since electrons are fermions, they bow down to Pauli’s exclusion principle, which states that no two fermions may occupy the same quantum state. As each quantum state is defined by some specific combination of state variables called quantum numbers, at least one quantum number must differ between the two co-paired electrons.

Prof. Wolfgang Pauli (1900-1958)

In the case of superconductors, this is particle spin: the electrons in the member-pair will have opposite spins. Further, once such unions have been achieved between different pairs of electrons, each pair becomes indistinguishable from the other, even in principle. Imagine: they are all electron-pairs with two opposing spins but with the same values for all other quantum numbers. Each pair, called a Cooper pair, is just the same as the next!

Bose-Einstein condensates

This unification results in the sea of electrons displaying many properties normally associated with Bose-Einstein condensates (BECs). In a BEC, the particles that attain the state of indistinguishability are bosons (particles with integer spin), not fermions (particles with half-integer spin). The phenomenon occurs at temperatures close to absolute zero and in the presence of an external confining potential, such as a magnetic trap.

In 1995, at the Joint Institute for Laboratory Astrophysics, physicists cooled rubidium atoms down to 170 billionths of a degree above absolute zero. They observed that the atoms, upon such cooling, condensed into a uniform state such that their respective velocities and distribution began to display a strong correlation (shown above, L to R with decreasing temp.). In other words, the multi-body system had condensed into a homogenous form, called a Bose-Einstein condensate (BEC), where the fluid behaved as a single, indivisible entity.

Since bosons don’t follow Pauli’s exclusion principle, a major fraction of the indistinguishable entities in the condensate may and do occupy the same quantum state. This causes quantum mechanical effects to become apparent on a macroscopic scale.
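The contrast between the two statistics can be made concrete through their occupation formulas. A minimal sketch, with energies and the chemical potential in arbitrary illustrative units:

```python
import math

def bose_einstein(energy, mu, kT):
    """Mean occupation of a single-particle state for bosons (valid for energy > mu)."""
    return 1.0 / (math.exp((energy - mu) / kT) - 1.0)

def fermi_dirac(energy, mu, kT):
    """Mean occupation for fermions; Pauli exclusion caps it at 1."""
    return 1.0 / (math.exp((energy - mu) / kT) + 1.0)

# As a state's energy approaches the chemical potential, the boson occupation
# diverges (macroscopic occupation of a single state), while the fermion
# occupation never exceeds 1.
for e in (1.0, 0.1, 0.01):
    print(e, bose_einstein(e, 0.0, 1.0), fermi_dirac(e, 0.0, 1.0))
```

This is also why the pairing step matters: only after two electrons fuse into a boson-like Cooper pair can they crowd into one quantum state.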

By extension, the formulation and conclusions of the BCS theory, alongside its success in supporting associated phenomena, imply that superconductivity may be a quantum phenomenon manifesting on a macroscopic scale.

Note: If even one Cooper pair is "broken", the superconducting state is lost: the passage of electric current is disrupted and the condensate dissolves into individual electrons. In other words, the energy required to break one Cooper pair is the same as the energy required to break up the condensate as a whole. The usually weak thermal vibrations of the crystal lattice are therefore insufficient to interrupt the flow of Cooper pairs, which is the flow of electrons.

The Meissner effect in action: A magnet is levitated by a superconductor because of the expulsion of the magnetic field from within the material

The Meissner effect

In this context, the Meissner effect is simply an extrapolation of Lenz’s law but with zero electrical resistance.

Lenz's law states that an induced electromotive force (EMF) always acts in a direction that opposes the change in magnetic flux that produced it. In the absence of resistance, the electric currents at the surface of a superconductor set up magnetic fields that cancel all magnetic fields inside the bulk of the material, effectively pushing the field lines of an externally applied magnetic field outward. However, the Meissner effect manifests only when the externally applied field is weaker than a certain critical threshold: if it is stronger, the superconductor returns to its normal conducting state.

Now, there is a class of materials called Type II superconductors – as opposed to the Type I class described earlier – that push only some of the magnetic field outward, the rest being confined inside the material in filaments surrounded by supercurrents. This state is called the vortex state, and its occurrence means the material can withstand much stronger magnetic fields and continue to remain superconducting while also exhibiting this hybrid Meissner effect.

Temperature & superconductivity

There are also a host of other effects that only superconductors can exhibit, including Cooper-pair tunneling, flux quantization, and the isotope effect, and it was by studying them that a strong relationship was observed between temperature and superconductivity in various forms.

(L to R) John Bardeen, Leon Cooper, and John Schrieffer

In fact, Bardeen, Cooper, and Schrieffer hit upon their eponymous theory after observing an energy gap in the electronic spectra of superconductors. The electrons in any conductor can exist only at specific, well-defined energies. Electrons above a certain energy, in the conduction band, become free to pass through the entire material instead of staying in motion around the nuclei, and are responsible for conduction.

The trio observed that upon cooling the material closer and closer to absolute zero, a curious gap appeared in the energies at which electrons could be found in the material below a particular temperature. This meant that, at that temperature, the electrons were dropping from one energy to some other, lower energy. The observation indicated that some form of condensation was occurring. However, a true BEC was ruled out because of Pauli's exclusion principle. At the same time, a BEC-like state had to have been achieved by the electrons.

This temperature is called the transition temperature: below it, a conductor transitions into its superconducting state and Cooper pairs form, leading to the drop in the energy of each electron. The differences in various properties of the material on either side of this threshold are also attributed to it, including an important notion called the Fermi energy: the energy of the highest occupied electron state in a system at absolute zero, i.e. when all thermal energy has been removed from it. This is a significant idea because it defines both the kind and amount of energy that a superconductor has to offer an externally applied electric current.

Enrico Fermi, along with Paul Dirac, formulated the Fermi-Dirac statistics that govern the behavior of all identical particles that obey Pauli's exclusion principle (i.e., fermions). The Fermi level and the Fermi energy are concepts named for him; though, as long as we're discussing eponymy, Fermilab overshadows them all.

In simple terms, for a fixed electron-phonon interaction energy, the density of electron energy states at the Fermi energy of a given material dictates the "breadth" of the energy gap: a direct proportionality. Further, the value of the energy gap at absolute zero should be a fixed multiple of the thermal energy scale at the superconducting transition temperature (the multiplication factor was found to be about 3.5 universally, irrespective of the material).
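In modern notation this universal factor is usually written 2Δ(0) ≈ 3.52 k_B·T_c. A quick sketch of the estimate it permits, using niobium's well-known transition temperature as the example input:

```python
K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def bcs_gap_at_zero(tc_kelvin):
    """Weak-coupling BCS estimate of the zero-temperature energy gap,
    2*Delta(0), in eV: 2*Delta(0) ~= 3.52 * k_B * T_c."""
    return 3.52 * K_B * tc_kelvin

# Niobium, T_c ~= 9.25 K: the estimate gives roughly 2.8 meV, close to
# the measured gap of about 3 meV.
print(bcs_gap_at_zero(9.25))
```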

Similarly, because thermal excitations are suppressed at low temperatures, the electronic heat capacity of the material drops drastically as it is cooled, decaying toward zero below the transition temperature. Just at the threshold, however, the heat capacity jumps discontinuously to beyond its normal value before beginning this decay. It was found that the jump takes the heat capacity to roughly 2.4 times the material's normal value… again, near-universally, irrespective of the material!

The temperature-dependence of superconductors gains further importance with respect to applications and industrial deployment in the context of superconductivity possibly occurring at higher temperatures. The low temperatures currently necessary eliminate thermal excitations, in the form of vibrations, of nuclei and almost entirely counter the possibility of electrons, or Cooper pairs, colliding into them. The low temperatures also assist the flow of Cooper pairs as a superfluid, apart from allowing the energy of the superfluid to be higher than the phononic energy of the lattice.

However, to achieve all these states in order to turn a conductor into a superconductor at a higher temperature, a more definitive theory of superconductivity is required – one that allows for a conception of superconductivity requiring only certain internal conditions to prevail while the ambient temperature soars. The 1986 discovery of high-temperature superconductivity in ceramics by Bednorz and Müller was the turning point. It started to displace the BCS theory, which, physicists realized, doesn't contain the mechanisms necessary for superconductivity to manifest in ceramics – insulators at room temperature – at temperatures as high as 125 K.

A firmer description of superconductivity, therefore, still remains elusive. Its construction should pave the way not only for one of the few phenomena that hardly appear in nature and natural processes to be fully understood, but also for its substitution for standard conductors that are responsible for lossy transmission and other such undesirable effects. After all, superconductors are the creation of humankind, and only by its hand will they ever be fully worked.

The personal loss of a printed book

Over the last few days, I've been getting the feeling that buying a Kindle is the best decision I've ever made. What I find unsettling, however, is that I don't seem to miss printed books as much as I thought I would: the transition was so smooth that it might almost have been irreversible. It seems I fell in love with what the books had to say and not with the books themselves, a disengagement with the metaphysical form and a betrothal to its humble, degenerate function.

The Markovian Mind

In many ways, human engagement with information happens in such a manner that, as information accumulates over time, the dataset constructed out of the latest volume of information has the strongest relationship of any kind with the one immediately following it – a Markovian trait.

At any point of time, the future state of the dataset is determined solely by its present one. In other words, in a discrete understanding, its nth state is dependent solely on its (n – i)th state, where 'i' is a cardinal index. Without being able to quantify its (n + i)th state, there is no certain state that we know the dataset will assume.

At the same time, given its limited historical dependency, the past's bearing on the state of the dataset is continuous but constantly depreciating (asymptotically tending to zero): the correlation index between the (n + i)th state, for increasing i, and the (n – k)th state decreases for increasing k (for all k = i).*

Over time, if the information-dataset could be quantized through a set of state variables, S, then there will be a characteristic function, φ(n), which would describe the slope of the correlation index’s curve at (i, k). Essentially, the evolution of S will be as a Markov chain whereas φ(n) will be continuous, rendering (i, k) random and memoryless.

(*For k = i, (n – k) = (n – i). However, for a given set of state variables S, which evolve as a Markov chain, the devolution that k tracks and the evolution that i tracks will be asymmetric, necessitating two different indices to describe the two degrees of freedom.)
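The memorylessness being described can be illustrated with a toy Markov chain; the states and transition probabilities below are invented purely for illustration:

```python
import random

# A toy two-state Markov chain: the distribution of the next state depends
# only on the current state, never on how the chain arrived there.
TRANSITIONS = {
    "focused":    [("focused", 0.8), ("distracted", 0.2)],
    "distracted": [("focused", 0.4), ("distracted", 0.6)],
}

def step(state, rng):
    """Sample the next state from the current state's transition row."""
    r = rng.random()
    cumulative = 0.0
    for nxt, p in TRANSITIONS[state]:
        cumulative += p
        if r < cumulative:
            return nxt
    return nxt  # guard against floating-point rounding

rng = random.Random(42)
state = "focused"
history = [state]
for _ in range(10):
    state = step(state, rng)
    history.append(state)
print(history)
```

However the history unfolds, the call to `step` consults only the current state, which is the Markov property in miniature.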

When must science give way to religion?

When I saw an article titled 'Sometimes science must give way to religion' in Nature on August 22, 2012, by Daniel Sarewitz, I had to read it. I am agnostic, and I try as much as I can to keep from attempting to proselytize anyone – through argument or reason (although I often fail at controlling myself). However, titled as it was, I had to read the piece, especially since it'd appeared in a publication I subscribe to for their hard-hitting science news, which I've always approached as Dawkins might: godlessly.

First mistake.

Dr. Daniel Sarewitz

At first, if anything, I hoped the article would treat the entity known as God as simply an encapsulation of the unknown rather than in the form of an icon or elemental to be worshiped. However, the lead paragraph was itself a disappointment – the article was going to be about something else, I understood.

Visitors to the Angkor temples in Cambodia can find themselves overwhelmed with awe. When I visited the temples last month, I found myself pondering the Higgs boson — and the similarities between religion and science.

The awe is architectural. When pilgrims visit a temple like the Angkor, the same quantum of awe hits them as it does an architect who has entered a Pritzker-prize-winning building. But then, this sort of "reasoning", upon closer observation or just an extra second of clear thought, is simply nitpicking. It implies that I'm just pissed that Nature decided to publish an article and disappoint ME. So, I read on.

Until I stumbled upon this:

If you find the idea of a cosmic molasses that imparts mass to invisible elementary particles more convincing than a sea of milk that imparts immortality to the Hindu gods, then surely it’s not because one image is inherently more credible and more ‘scientific’ than the other. Both images sound a bit ridiculous. But people raised to believe that physicists are more reliable than Hindu priests will prefer molasses to milk. For those who cannot follow the mathematics, belief in the Higgs is an act of faith, not of rationality.

For a long time, I have understood that science and religion have a lot in common: they’re both frameworks that are understood through some supposedly indisputable facts, the nuclear constituents of the experience born from believing in a world reality that we think is subject to the framework. Yes, circular logic, but how are we to escape it? The presence of only one sentient species on the planet means a uniform biology beyond whose involvement any experience is meaningless.

So how are we to judge which framework is more relevant, more meaningful? To me, subjectively, the answer is to be able to predict what will come, what will happen, what will transpire. For religion, these are eschatological and soteriological considerations. As Hinduism has it: “What goes around comes around!” For science, these are statistical and empirical considerations. Most commonly, scientists will try to spot patterns. If one is found, they will go about pinning the pattern’s geometric whims down to mathematical dictations to yield a parametric function. And then, parameters will be pulled out of the future and plugged into the function to deliver a prediction.

Earlier, I would have been dismissive of religion’s “ability” to predict the future. Let’s face it, some of those predictions and prophecies are too far into the future to be of any use whatsoever, and some other claims are so ad hoc that they sound too convenient to be true… but I digress. Earlier, I would’ve been dismissive, but after Sarewitz’s elucidation of the difference between rationality and faith, I am prompted to explain why, to me, it is more science than religion that makes the cut. Granted, both have their shortcomings: empiricism was smashed by Popper, while statistics and unpredictability are conjugate variables.

(One last point on this matter: If Sarewitz seems to suggest that the metaphorical stands in the way of faith evolving into becoming a conclusion of rationalism, then he also suggests lack of knowledge in one field of science merits a rejection of scientific rationality in that field. Consequently, are we to stand in eternal fear of the incomprehensible, blaming its incomprehensibility on its complexity? He seems to have failed to realize that a submission to the simpler must always be a struggle, never a surrender.)

Sarewitz ploughed on, and drew a comparison more germane and, unfortunately, more personal than logical.

By contrast, the Angkor temples demonstrate how religion can offer an authentic personal encounter with the unknown. At Angkor, the genius of a long-vanished civilization, expressed across the centuries through its monuments, allows visitors to connect with things that lie beyond their knowing in a way that no journalistic or popular scientific account of the Higgs boson can. Put another way, if, in a thousand years, someone visited the ruins of the Large Hadron Collider, where the Higgs experiment was conducted, it is doubtful that they would get from the relics of the detectors and superconducting magnets a sense of the subatomic world that its scientists say it revealed.

Granted, if a physicist were to visit the ruins of the LHC, he may be able to put two and two together at the sight of the large superconducting magnets, striated with the shadows of brittle wires and their cryostatic sleeves, and guess the nature of the prey. At the same time, an engagement with the unknown at the Angkor Wat (since I haven’t been there, I’ll extrapolate my experience at the Thillai Nataraja Temple, Chidambaram, South India, from a few years back) requires a need to engage with the unknown. A pilgrim visiting millennia-old temples will feel the same way a physicist does when he enters the chamber that houses the Tevatron! Are they not both pleasurable?

I think now that what Sarewitz is essentially arguing against is the incomparability of pleasures, of sensations, of entire worlds constructed on the basis of two very different ideologies, or rather requirements, and not against the impracticality of a world ruled by one faith, one science. This aspect came in earlier in this post, too, when I thought I was nitpicking when I surmised Sarewitz's awe upon entering a massive temple was unique: it may have been unique, but only in sensation, not in subject, I realize now.

(Also, I'm sure we have enough of those unknowns scattered around science; that said, Sarewitz seems to suggest that the memorability of his personal experiences in Cambodia is a basis for the foundation of every reader's objective truth. It isn't.)

The author finishes with a mention that he is an atheist. That doesn’t give any value to or take away any value from the article. It could have been so were Sarewitz to pit the two worlds against each other, but in his highlighting their unification – their genesis in the human mind, an entity that continues to evade full explicability – he has left much to be desired, much to be yearned for in the form of clarification in the conflict of science with religion. If someday, we were able to fully explain the working and origin of the human mind, and if we find it has a fully scientific basis, then where will that put religion? And vice versa, too.

Until then, science will not give way to religion, nor religion to science, as both seem equipped to explain.

Thinking quantum

In quantum physics, every state is conceived of as a vector. But that's where its relation with classical physics ends, which makes teaching a pain.

Teaching classical mechanics is easy because we engage with it every day in many ways. Enough successful visualization tools exist to do that.

Just wondering why quantum mechanics has to be so hard. All I need is to find a smart way to make visualizing it easier.

Analogizing quantum physics with classical physics creates more problems than it solves. More than anything, the practice creates a need to nip cognitive inconsistencies in the bud.

If quantum mechanics is the way the world works at its most fundamental levels, why is it taught in continuation of classical physics?

Is or isn’t it easier to teach mathematics and experiments relating to quantum mechanics and then present the classical scenario as an idealized, macroscopic state?

After all, isn’t that the real physics of the times? We completely understand classical mechanics; we need more people who can “think quantum” today.

Getting started on superconductivity

After the hoopla surrounding and attention on particle physics subsided, I realized that I'd been riding a speeding wagon all the time. All I'd done was use the lead-up to the search for the Higgs boson, and the climax itself, to teach myself something. Now, it's left me really excited! Learning about particle physics, I've come to understand, is not a single-track course: all the way from making theoretical predictions to having them experimentally verified, particle physics is an amalgamation of far-reaching advancements in a host of other subjects.

One such is superconductivity. Philosophically, it's a state of existence so far removed from the naturally occurring one that it's a veritable "freak". It is common knowledge that everything naturally occurring is equipped to resist change that energizes it, to return whenever possible to a state of lower energy; symmetry and surface tension are great examples of this tendency. Superconductivity, on the other hand, is a system's desistance from resisting the passage of an electric current through it. As a phenomenon that doesn't manifest under naturally occurring conditions, I can't really opine on its phenomenological "naturalness".

In particle physics, superconductivity plays a significant role in building powerful particle accelerators. In the presence of a magnetic field, a charged particle moves through it in a curved trajectory because of the Lorentz force acting on it; this fact is used to guide the protons in the Large Hadron Collider (LHC) at CERN around a ring 27 km long. The bending magnets only steer the beam; on each pass around the ring, electric fields in accelerating cavities give the protons an additional kick, eventually bringing them to within a whisker of the speed of light.
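The bending the dipole magnets must deliver can be estimated from the Lorentz force alone. A back-of-the-envelope sketch, using the LHC's design figures of 7 TeV per proton and 8.33 T dipoles:

```python
E_PROTON_EV = 7.0e12      # design beam energy: 7 TeV per proton
B_TESLA = 8.33            # design dipole field
Q = 1.602176634e-19       # proton charge, C
C = 299_792_458.0         # speed of light, m/s

# For an ultrarelativistic particle, momentum p ~= E/c, and the bending
# radius in a field B follows from the Lorentz force: r = p / (qB).
p = E_PROTON_EV * Q / C   # momentum in kg*m/s
r = p / (Q * B_TESLA)     # bending radius in metres
print(round(r))           # ~2800 m
```

The result of roughly 2.8 km matches the LHC's actual bending radius, which is why fields of this strength, achievable only with superconducting coils, are needed in the first place.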

A set of superconducting quadrupole-electromagnets installed at the LHC with the cryogenic cooling system visible in the background

In order to generate these extremely powerful magnetic fields – powerful because of the minuteness of each charge and the velocity required to be achieved – superconducting magnets are used that generate fields of the order of 20 T (to compare: the earth's magnetic field is 25-60 μT, or close to 500,000 times weaker)! Furthermore, the strength of the magnetic field is ramped up in step with the beam's energy, to keep the particles from being swung off into the inner wall of the collider at any point!

To understand the role the phenomenon of superconductivity plays in building these magnets, let's understand how electromagnets work. In a standard iron-core electromagnet, insulated wire is wound around an iron cylinder, and when a current is passed through the wire, a magnetic field is generated around the cross-section of the wire. Because of the coiling, though, the magnetic field is concentrated along the axis of the cylinder, whose magnetic permeability magnifies the field by a factor of thousands as the core itself becomes magnetized.

When the current is turned off, the magnetic field instantaneously disappears. When the number of coils is increased, the strength of the magnetic field increases. When the strength of the current is increased, the strength of the magnetic field increases. However, beyond a point, the heat dissipated due to the wire’s electric resistance reduces the amount of current flowing through it, consequently resulting in a weakening of the core’s magnetic field over time.
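Both scalings (more turns, more current, more field) follow from the ideal-solenoid formula B = μ₀nI; a sketch, with the core's relative permeability as an illustrative assumption:

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

def solenoid_field(turns_per_metre, current_amps, relative_permeability=1.0):
    """Ideal-solenoid field B = mu_r * mu_0 * n * I."""
    return relative_permeability * MU_0 * turns_per_metre * current_amps

# An air-core coil at 1000 turns/m and 10 A manages only ~13 mT...
air_core = solenoid_field(1000, 10)
# ...while an iron core (mu_r from the hundreds to the thousands; an
# illustrative 200 here) magnifies it enormously -- until the iron
# saturates at around 2 T, one reason the LHC needs superconducting coils.
iron_core = solenoid_field(1000, 10, relative_permeability=200)
print(air_core, iron_core)
```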

It is Ohm's law that establishes the proportionality between voltage (V) and electric current (I), calling the proportionality constant the material's electrical resistance: R = V/I. To overcome heating due to resistance, the resistance itself must be brought down to zero. In terms of Ohm's law, that corresponds to a material that sustains a large current even as the voltage across its ends drops to zero. In a conventional conductor, this is impossible: how does one quickly pass a large volume of water through a pipe across which the pressure difference is minuscule?!
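Why the resistance itself has to vanish, rather than the current merely being limited, follows from the Joule-heating relation P = I²R; a quick sketch:

```python
def dissipated_power(current_amps, resistance_ohms):
    """Joule heating P = I^2 * R: the power lost to the lattice as heat."""
    return current_amps ** 2 * resistance_ohms

# Heating grows with the *square* of the current, so at magnet-scale
# currents even a tiny resistance is ruinous:
small = dissipated_power(10, 0.1)      # ~10 W at 10 A
large = dissipated_power(10_000, 0.1)  # ~10 MW at 10 kA
print(small, large)
# Only R = 0 lets the current grow without the heat growing with it.
```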

Heike Kamerlingh Onnes

The solution to this unique problem, therefore, lay in a new class of materials that humankind had to prepare, a class of materials that could “instigate” an alternate form of electrical conduction such that an electrical current could pass through it in the absence of a voltage difference. In other words, the material should be able to carry large amounts of current without offering up any resistance to it. This class of materials came to be known as superconductors – after Heike Kamerlingh Onnes discovered the phenomenon in 1911.

In a conducting material, the electrons that essentially effect the flow of electric current could be thought of as a charged fluid flowing through and around an ionic 3D grid, an arrangement of positively charged nuclei that all together make up the crystal lattice. When a voltage-drop is established, the fluid begins to get excited and moves around, an action called conducting. However, the electrons constantly collide with the ions. The ions, then, absorb some of the energy of the current, start vibrating, and gradually dissipate it as heat. This manifests as the resistance. In a superconductor, however, the fluid exists as a superfluid, and flows such that the electrons never collide into the ions.

In (a classical understanding of) the superfluid state, each electron repels every other electron because of their charge likeness, and attracts the positively charged nuclei. As a result, the nucleus moves very slightly toward the electron, causing an equally slight distortion of the crystal lattice. Because of the newly increased positive-charge density in the vicinity, some more electrons are attracted by the nucleus.

This attraction, which, across the entirety of the lattice, can cause a long-range but weak "draw" of electrons, results in pairs of electrons overcoming their mutual hatred of each other and tending toward one nucleus (or the resultant charge-centre of some nuclei). Effectively, this is a pairing of electrons whose total energy was shown by Leon Cooper in 1956 to be less than the energy of the most energetic electron had it existed unpaired in the material. Subsequently, these pairs came to be called Cooper pairs, and a fluid composed of Cooper pairs, a superfluid (thermodynamically, a superfluid is defined as a fluid that can flow without dissipating any energy).

Although the sea of electrons in the new superconducting class of materials could condense into a superfluid, the fluid itself can’t be expected to flow naturally. Earlier, the application of an electric current imparted enough energy to all the electrons in the metal (via a voltage difference) to move around and to scatter against nuclei to yield resistance. Now, however, upon Cooper-pairing, the superfluid had to be given an environment in which there’d be no vibrating nuclei. And so: enter cryogenics.

The International Linear Collider – Test Area’s (ILCTA) cryogenic refrigerator room

The thermal energy of a crystal lattice is given by E = kT, where 'k' is Boltzmann's constant and T the temperature. Evidently, to reduce the kinetic energy of all nuclei in the lattice to zero, the crystal itself has to be cooled toward absolute zero (0 kelvin), which is what cryogenic cooling techniques approach. For instance, at the LHC, the superconducting magnets are electromagnets wherein the coiled wire is made of a superconducting material. When cooled to a really low temperature using a two-stage heat-exchanger composed of liquid helium jacketed with liquid nitrogen, the wires can carry extremely large amounts of current to generate very intense magnetic fields.
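The E = kT relation makes the case for cryogenics concrete. A quick sketch comparing room temperature with the superfluid-helium temperature of about 1.9 K at which the LHC magnets are kept:

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_energy_joules(temperature_kelvin):
    """Characteristic thermal energy scale E = k_B * T of the lattice."""
    return K_B * temperature_kelvin

# Cooling from room temperature to the LHC magnets' operating temperature
# of ~1.9 K shrinks the lattice's thermal energy scale by a factor of ~158.
room = thermal_energy_joules(300.0)
lhc = thermal_energy_joules(1.9)
print(room / lhc)
```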

At the same time, however, if the energy of the superfluid itself surpassed the thermal energy of the lattice, then it could flow without the lattice having to be cooled down. Because the thermal energy is different for different crystals at different ambient temperatures, the challenge now lies in identifying materials that could permit superconductivity at temperatures approaching room-temperature. Now that would be (even more) exciting!

P.S. A lot of the related topics have not been covered in this post, such as the Meissner effect, electron-phonon interactions, properties of cuprates and lanthanides, and Mott insulators. They will be taken up in the future as they're topics that require in-depth detailing, quite unlike this post, which has been constructed as a superficial introduction only.

Yes, I had $50.

Last week, I paid $50 to sign up for entrepreneur Dalton Caldwell's new start-up App.net. I wouldn't have found the service by myself until it'd have been too late for me to get on their bandwagon early – and getting early on a promising bandwagon is something I've always missed out on. So, on a friend's advice, I signed up for the alpha as a paying member (the other tier being paying developer), and went about finding out what I'd really signed up for. I know, it sounds stupid.

From where I was looking, App.net – by leaving out advertisers – gives developers a better-defined platform to build on. Sure, it looks like Twitter for now, but I’m hoping that in the near future it could yield a unified service through which I could manage my entire web-based social graph in real time.

I’m not a developer. Sure, I can navigate the world of developers, but I’d only be a tourist at most. What I am at heart is an information-collator and -distributor. I read almost 50 articles on various topics every day, and that’s just opinion/analysis pieces; news is separate. More than anything, I’d be thrilled if I had some way to represent myself through these commodities (as I’m doing now by sharing links and blurbs on Facebook and short quips on Twitter) in a more tractable manner.

Not to mention: I’d also like to be able to customize what I have to offer and serve it differently. For instance, Twitter lists come closest to tracking, in real time, news on my favorite subjects from my favorite commentators. However, Twitter’s social infrastructure has left the possibilities arising out of that fragmented. Imagine, instead, how great it would be if I could set up one platform atop which multiple authors could share their favorite reads in real time, and which readers could then customize and consume.

Perhaps I’m going too far, perhaps I’m imagining things, but I’d like to think such things will become possible, and that App.net will have a role to play in them. Sure, I had $50, and I could just be trying to salvage the sense in my decision right now. However, if I hadn’t thought such things would be possible, I wouldn’t have spent the money I’d saved up to buy some hosting space for this blog.

The marching columns

Every day is a swing between highs and lows, and in the last two months that I’ve experienced them, they’ve never been periodic. Setting off to work, my mood depends on the weather: cloudy is good, buoyant; rain is more than welcome; but a clear blue sky and a blazing fireball in the empyrean is a dampener on my spirits, if not on anyone else’s. How will I work if I’m sweating all the time? Hmm.

The traffic in my erstwhile small city has grown to draconian proportions. Some argue that it’s a good sign, a sign of the city turning into a metropolis. I don’t like it. It not only puts more minutes, more hours between work and home, home and work, between the factories and the beach, between the railway stations and the travel-shops, but also turns nice auto-drivers into pissed-off tyrants whom you simply don’t want to run into.

It takes nothing to precipitate all this but the clock striking 6. Areas and wards transform from familiar crenelations of microscopic economies, communities of traders, sweatshop toilers, and flower-braiders to hotbeds of rage, of exodus and maddened intra-urban migration… Suddenly, friends want to leave, fathers want to be left alone, mothers want to vent, and sisters want only to know what the hell’s going on.

If you’re in Chennai and traveling by auto in the evenings, I suggest you carry a book, or a Kindle, or a smartphone with which to kill time. It’s a time-warp, absolute and unrelenting chronostasis, with a profanity-drenched metronome ticking away like a time-bomb in the seat in front of yours. Of course, there are also people pushing, people shoving their way through the maze of vehicles. For every mile, I suppose it’s 10 points, and for every deceptively shallow pothole surmounted, 50.

In this crazy, demented rush, the only place anyone wants to be is on the other side of the road, the Place Where There Is Space, a vacuum on the far side that sucks the journeymen and journeywomen of Chennai into a few seconds of a non-Chennai space. When I ride in an auto on such days, I just don’t mind waiting for everyone to pass by. I don’t want to make enemies of my fellows. At the same time, I might never know them better than through their mumbled gratitude when I wave them ahead.

The driver gets pissed off, though. He starts to charge more, calls me “soft”, says I don’t have what it takes to live and survive in the city. I tell him I can live and survive in the city alright; it’s just that the city’s not the city anymore. Sometimes the driver laughs; most times, it’s a frown. In that instant, I’m computed to be an intellectual, and auto-drivers seem to think intellectuals have buttloads of money.

The only thing these days that intellectuals have buttloads of is tolerance.

Tolerance to let the world pass by without doing anything about it, tolerance to letting passersby jeer at you and make you feel guilty, tolerance to the rivers that must flow and the columns that must march, tolerance to peers and idols who insist something must be done, tolerance to their mundane introspection and insistence that there’s more to doing things than just hoping that that’s a purpose in itself.

It’s circular logic, unbreakable without a sudden and overwhelming injection of a dose of chaos. When the ants scurry, the mosquitoes take off, and the elephants stampede, all to wade through an influx of uncertainty and incomprehension and unadulterated freedom, real purpose will be forged. When children grow up, they are introduced to this cycle, cajoled into adopting it. Eventually, the children are killed to make way for adults.

With penises and vaginas, the adults must rule this world. But why must they rule? They don’t know. Why must they serve? They don’t know. Yeah, sitting in an auto moving at 1 mile an hour, these questions weigh you down like lodestones, like anchors tugging at the seafloor, fastening your wayward and seemingly productive mind to an epiphany. You must surely have watched Nolan’s Inception: doesn’t the paradox of pitch circularity come to mind?

The grass is always greener on the other side, the staircase forever leads to heaven, the triangle is an infinite Möbius spiral, each twist a jump into the few-seconds-from-now future. Somewhere, however, there is a rupture. Somewhere inside my city, there is a road at the other end of which there is my city in chronostasis, stuck in a few-hours-from-now past.

Where auto-drivers aren’t pissed off because the clock struck 6, where fathers and mothers realize nothing’s slowed down but just that their clocks have been on fast-forward of late, where snaking ribbons of smoke don’t compete for space but simply let it go, no longer covet it, only join in the collective sorrow of our city’s adolescence.

Building the researcher's best friend

One of the most pressing problems for someone conducting any research on personal initiative has to be information storage, access, and reproduction. Even if you’re someone who’s just going through interesting papers on pre-print servers and in journals and wants to quickly store text, excerpts, images, videos, diagrams, and/or graphs on the fly, you’ll notice that while a multitude of storage options exists, none of them is academically intelligent.

For starters, I could use an offline notepad with a toggle-equipped LaTeX interpreter that would let me quickly key in equations.

So, when I stumbled across this paper written by Joshi et al. at Purdue University in 1994, I was glad someone had taken the time and trouble to think up the software architecture of an all-encompassing system that would handle information in all media, and provide options for cross-referencing, modality, multiple authors, subject-wise categorization, cataloguing, data mining, etc. Here’s an excerpt from the paper.

The electronic notebook concept is an attempt to emulate the physical notebook that we use ubiquitously. It provides an unrestricted editing environment where users can record their problem and solution specifications, computed solutions, results of various analyses, commentary text as well as handwritten comments.

The notebook interface is multimodal and synergetic, it integrates text, handwriting, graphics, audio and video in its input and output modes. It functions not only as a central recording mechanism, it also acts as the access mechanism for all the tools that support the user’s problem solving activities.

(I’d like to take a moment to stress good data mining, because it plays an instrumental role in effecting serendipitous discoveries within my finite corpus of data. That is, as a matter of definition: if the system is smart enough to show me both something that it knows could be related to what I’m working on and something that I don’t know is related to what I’m working on, then it’s an awesome system.)
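As a toy illustration of the kind of retrieval I mean: the simplest version of “show me related notes” is to weight each note’s words by TF-IDF and rank the rest of the corpus by cosine similarity. This is only a sketch with made-up note snippets, not any system mentioned in the Purdue paper:

```python
import math
import re
from collections import Counter

def tokenize(text):
    # lowercase word tokens; punctuation is discarded
    return re.findall(r"[a-z]+", text.lower())

def tf_idf_vectors(docs):
    """Build a TF-IDF vector (word -> weight) for each document."""
    tokenized = [tokenize(d) for d in docs]
    df = Counter()                      # document frequency per word
    for tokens in tokenized:
        df.update(set(tokens))
    n = len(docs)
    vectors = []
    for tokens in tokenized:
        tf = Counter(tokens)
        vectors.append({w: (c / len(tokens)) * math.log((1 + n) / (1 + df[w]))
                        for w, c in tf.items()})
    return vectors

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def related(notes, query_index):
    """Rank every other note by similarity to notes[query_index]."""
    vecs = tf_idf_vectors(notes)
    scores = [(cosine(vecs[query_index], v), i)
              for i, v in enumerate(vecs) if i != query_index]
    return sorted(scores, reverse=True)

notes = [
    "electron pairing, phonon interactions, superconductors",
    "superconductors cooled with liquid helium magnets",
    "traffic, autos, evenings around Chennai",
]
print(related(notes, 0))  # the cryogenics note should outrank the traffic one
```

A real system would need far more than this, of course: stemming, stop-words, metadata, and some way of surfacing the connections I don’t already know about, which bag-of-words similarity alone can’t do.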

The Purdue team went on to implement a prototype, but you’ll see it was limited to being an interactive PDE-solver. If you’re looking for something along the same lines, then the Wolfram Mathematica framework has to be your best bet: its highly intuitive UI makes visualizing the task at hand a breeze, and lets you focus on designing practical mathematical/physical systems while it takes care of getting problems out of the way.

However, that misses the point. Every time I come across an interesting paper, some sections of which could fit into a corpus of knowledge I’m assimilating at the time, I fall back on a fragile customization of the WordPress CMS that “works” with certain folders on my hard drive. And by “works”, I mean I’m the go-between semantic interpreter – and that’s exactly what I need an automaton for. On one of my other blogs – unnamed here because it’s an online index of sorts for me – I have tagged and properly categorized posts that are actually bits and pieces of different research paths.

For products that offer the functionalities I’m looking for, I’m willing to pay, and I’m sure anyone will be, given how much handier such tools are becoming by the day. Better yet if they’re hosted on the cloud: I wouldn’t have to bother much about backing up, and could also enjoy the added benefit of “anywhere-access”.

For now, however, I’m going to get back to installing the California Digital Library’s eXtensible Text Framework (CDL-XTF) – a solution that seems to be a promising offline variant.