Pulsing a two-band model to discover topology

Justin Wilson — 20 Feb 2017

In systems with an anomalous quantum Hall effect, the quantized Hall conductivity is given by a Chern number, the integral of the Berry curvature over some manifold. Usually, this result is derived via the Kubo formula. However, there is geometry involved in how the state evolves, and in fact we can use the dynamics of the current following a weak pulse in order to find the DC conductivity. The route is easy enough: say we have a conductivity which, written with respect to time, is $\sigma_{xy}(t)$, and without loss of generality we apply a pulse $E_y(t) = E\,\delta(t)$; then we can find the current response in the perpendicular direction

$$j_x(t) = \int_{-\infty}^{t} dt'\,\sigma_{xy}(t-t')\,E_y(t') = \sigma_{xy}(t)\,E.$$

This allows us to derive an expression for the DC conductivity

$$\sigma_{xy}^{\rm DC} = \lim_{\omega\to 0}\sigma_{xy}(\omega) = \int_0^\infty dt\,\sigma_{xy}(t) = \frac{1}{E}\int_0^\infty dt\, j_x(t).$$
Geometrically, there is a lot going on with $\sigma_{xy}(t)$ when we have a system with spin-orbit coupling. In particular, take the two-band model

$$H(\mathbf{k}) = \vec{d}(\mathbf{k})\cdot\vec{\sigma},$$

where $\vec{d}(\mathbf{k})$ is 3D, $\mathbf{k}$ is 2D, and $\vec{\sigma}$ is the vector of Pauli matrices. The initial states of the system can be represented by unit vectors $\hat{n}(\mathbf{k})$ marking where they are on the Bloch sphere $S^2$. But once a pulse is supplied, this state will begin to rotate about a different vector $\vec{d}\,'(\mathbf{k})$. Thus, if we add time dependence to $\hat{n}(\mathbf{k},t)$ to represent the state's location, we can use Heisenberg's equations of motion to obtain

$$\partial_t\,\hat{n}(\mathbf{k},t) = \frac{2}{\hbar}\,\vec{d}\,'(\mathbf{k})\times\hat{n}(\mathbf{k},t).$$
We can rewrite this equation as

However, this state has an associated current with it, which can be represented by the operator $\hat{\mathbf{j}}(\mathbf{k}) = e\,\partial_{\mathbf{k}}H(\mathbf{k}) = e\,\partial_{\mathbf{k}}\vec{d}(\mathbf{k})\cdot\vec{\sigma}$, and its expectation value only involves the vector $\hat{n}(\mathbf{k},t)$. Therefore,

$$\langle\mathbf{j}(\mathbf{k},t)\rangle = e\,\partial_{\mathbf{k}}\vec{d}(\mathbf{k})\cdot\hat{n}(\mathbf{k},t).$$
Combining these expressions, we have

Or simplified

This is exact. At this point, we make a couple of approximations. First of all, the first term is independent of $t$, so it cannot contribute to the total current if we have a finite DC conductivity. This leaves the second term. We can do the integral over time — there is an order-of-limits problem, but we can get around this by noting that we do not expect the infinite-time state to contribute (or: it averages to something proportional to the precession axis anyway, and so the cross product vanishes), so we discard it.

Hence, we get the Hall conductivity

At this point, we actually have not expanded in the strength of the pulse yet. The first term will produce a term that is symmetric in $x$ and $y$—however, it drops out because the cross product vanishes. Thus, only the second term persists, and we see directly that $x$ and $y$ must be different; in fact, we get the well-known formula

$$\sigma_{xy} = \frac{e^2}{h}\,\frac{1}{4\pi}\int d^2k\;\hat{d}\cdot\left(\partial_{k_x}\hat{d}\times\partial_{k_y}\hat{d}\right).$$
This describes the Chern number of some manifold parametrized by $\mathbf{k}$ (usually the Brillouin zone). It is the number of times the vector $\hat{d}(\mathbf{k})$ wraps the sphere.

To understand why it's a topological invariant, note that the quantity in the integral looks very much like a Jacobian. In fact, it is; it describes a coordinate transformation from the $k$'s to the sphere traced out by $\hat{d}$. In this way, the integrand represents an area element on the sphere, and in general the manifold being integrated over (the Brillouin zone) is closed. So $\hat{d}$ maps that closed manifold to the sphere, and without any edges or boundaries the area it maps out must be $4\pi$ times an integer.
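
To make this winding concrete, here is a minimal numerical sketch that evaluates the integral above on a discretized Brillouin zone. The $\vec{d}(\mathbf{k})$ used below, with a tunable mass parameter m0, is a standard two-band example chosen purely for illustration (the discussion above left the model general):

import numpy as np

# Sketch: C = (1/4 pi) Integral d^2k dhat . (d_kx dhat x d_ky dhat)
# for the illustrative d-vector (sin kx, sin ky, m0 + cos kx + cos ky).
def d_vector(kx, ky, m0):
    return np.array([np.sin(kx), np.sin(ky), m0 + np.cos(kx) + np.cos(ky)])

def dhat(kx, ky, m0):
    d = d_vector(kx, ky, m0)
    return d / np.linalg.norm(d)

def chern_number(m0, n=120):
    ks = np.linspace(-np.pi, np.pi, n, endpoint=False)
    dk = 2 * np.pi / n
    total = 0.0
    for kx in ks:
        for ky in ks:
            # centered finite differences of the unit vector dhat(k)
            ddx = (dhat(kx + dk, ky, m0) - dhat(kx - dk, ky, m0)) / (2 * dk)
            ddy = (dhat(kx, ky + dk, m0) - dhat(kx, ky - dk, m0)) / (2 * dk)
            total += np.dot(dhat(kx, ky, m0), np.cross(ddx, ddy)) * dk * dk
    return total / (4 * np.pi)

print(chern_number(m0=1.0))   # magnitude ~ 1 (sign depends on orientation conventions)
print(chern_number(m0=3.0))   # ~ 0: here the d-vector no longer wraps the sphere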

This formula is well-known, but this dynamical way of obtaining it is slightly less well-known. We have extended this idea in a paper published last year to handle the out-of-equilibrium case of quenches. In that situation, new phenomena appear that are quite different from the equilibrium case—terms that we discarded in this calculation become quite relevant.

Integrating org-mode and PushBullet for automated reminders

Justin Wilson — 03 Jan 2015

For a while now, I have been using Emacs' org-mode as my fully customizable, open-source, plain-text to-do list manager. To keep my agenda items from slipping past me, though, I wanted some way to receive push notifications, and for that I am using PushBullet. With a little bit of python code, this can easily be accomplished.

Note: This guide uses these services on OS X with Homebrew installed. It should be easily generalizable to other systems.

Set up your machine

Make sure you have the most recent version of Emacs. Using Homebrew:

brew install emacs

We are going to interface everything with python, and for that we need to install pushbullet.py with

pip install pushbullet.py

This requires python-magic (which should be installed automatically), which in turn has system dependencies that we quote here:

On Windows, install Cygwin (http://cygwin.com/install.html). To find the libraries, either add /bin to the $PATH or copy cygwin1.dll, cygz.dll, and cygmagic-1.dll to C:\Windows\System32

On OSX:

  • When using Homebrew: brew install libmagic
  • When using macports: port install file

Now, I assume you have org-mode set up with a list of agenda files hanging around somewhere on your machine (usually identified in your .emacs file or your customizations.el file for Aquamacs).

The Python code

I cobbled together some python code with help from the org-mode site explaining how to extract agenda information as well as the python implementation I found on this blog in his agenda.py.

It’s important to note that the loadfile must define org-agenda-files. Additionally, I had problems if the loadfile tried to do too much upon initialization. If your init file does too much, you may need to make this loadfile something else, something simpler.

The push notifications you receive from running this code are specifically tailored to how I wanted them output. One can easily mess around with the section I labeled “PROCESSING Agenda information into notes to send”. Additionally, there are more ways than just pb.push_note to send information (see the documentation here).
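
For reference, here is a minimal sketch of what such a script can look like. It is not my exact org-mode-agenda-push-bullet.py; the loadfile path, the API key placeholder, and the agenda-view argument are all assumptions you will need to adapt:

#!/usr/bin/env python
# Minimal sketch: dump the agenda with Emacs in batch mode (org-batch-agenda,
# from the org-mode manual's "extracting agenda information" section), then
# push it as a note via pushbullet.py.  Change the placeholders below.
import subprocess
from pushbullet import Pushbullet

API_KEY = "YOUR_PUSHBULLET_API_KEY"        # from your PushBullet account settings
LOADFILE = "/path/to/minimal-org-init.el"  # must define org-agenda-files

# Ask Emacs for the agenda as plain text ("a" = the standard agenda view).
agenda = subprocess.check_output(
    ["emacs", "-batch", "-l", LOADFILE, "-eval", '(org-batch-agenda "a")']
).decode("utf-8")

# PROCESSING Agenda information into notes to send:
# here we just keep non-empty lines; tailor this to your own format.
lines = [line for line in agenda.splitlines() if line.strip()]
body = "\n".join(lines) if lines else "Nothing on the agenda today."

pb = Pushbullet(API_KEY)
pb.push_note("Org agenda", body)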

Set up Automator

There are numerous tools to automate tasks on different systems. On OS X, we’ll use Automator.

  1. Create a new application.
  2. Find the “Run Shell Script” action and drag it to the box on the right.
  3. Select shell /bin/bash and insert the following code
export PATH=/usr/local/bin:$PATH
python /path/to/org-mode-agenda-push-bullet.py

This is the simplest approach, since choosing the shell /usr/bin/python wouldn't necessarily use the python that we want (located in /usr/local/bin in this case).

From here, we open the Calendar application, create a new event, and set when we want it to run (frequency, etc.). Under the “alert:” option, we click “Custom…”, choose “Open file” in the first drop-down menu, click “Other…” in the new drop-down menu that appears, and navigate to where we saved the Automator application created above. The rest should be self-explanatory.

If all is well and good, you should get a PushBullet reminder at the specified times (if your computer is on and connected to the internet).

Subtleties in linear response theory

Justin Wilson — 22 Dec 2014

In linear response theory, we consider some small perturbation to a Hamiltonian and look at the response of some observable to that perturbation. In the case considered here, the perturbation is an electric field, and the response is the current. The linear response function that characterizes these quantities is the conductivity.

There’s a problem though: an electric field accelerates a charge. Consider a classical electron for the time being; then

$$m\dot{v} = -eE. \label{eq:Newton}$$

Or in terms of the current $j = -nev$, $\dot{j} = \frac{ne^2}{m}E$. From here we can quickly and naively go to frequency space to find $j(\omega) = \frac{i\,ne^2}{m\omega}E(\omega)$. Then one might remember that another way to define $E$ is in terms of a vector potential that is purely time-dependent, so $E = -\partial_t A$. Now, if we just plug this into our linear response for the current, we get

$$j(\omega) = -\frac{ne^2}{m}A(\omega).$$
All is well and good, right? Well, not quite. In electromagnetism, the constant part of $A$ corresponds to the $\omega = 0$ term of $A(\omega)$. This represents what is known as a “pure gauge”. These gauges are physically equivalent to the null field $A = 0$. Thus, whatever linear response is represented above at $\omega = 0$ must be unphysical, right?

Wrong.

Before explaining why this is wrong, let’s give some further context to this linear response theory. The term above is actually the single-particle version of what is known as the “diamagnetic” contribution to the conductivity when you add in more electrons (usually distributed in a Fermi distribution). This term persists in quantum mechanics, and no other terms appear to cancel it in the simplest case of free electrons. In fact, while the math becomes more cumbersome, the solution we shall illustrate below holds perfectly well even for the non-interacting multi-electron system.

Now, at this point you may have guessed that there’s something strange going on at $\omega = 0$, due to the fact that the electric field accelerates the particle and doesn’t just produce a velocity response. At that point, the physical field seems to necessarily be equal to zero in the gauge we have prescribed, unless $A(\omega)$ diverges like $1/\omega$ for small $\omega$. This would lead to a divergent $j(\omega)$ as $\omega \to 0$, restoring our faith that the system is accelerating out of control.

But what about when $\omega = 0$ exactly? It seems like then we have a true velocity response to an unphysical object. The solution is subtle: at some point in the quick derivation we made an assumption that implied $A = 0$ at $t = -\infty$. This implies that if $A \neq 0$ at any finite time, there had to be some time in between where $E = -\partial_t A \neq 0$. Thus, during that “ramp up” time, an electric field was on and it accelerated the charge to a specific velocity, resulting in the current $j = -\frac{ne^2}{m}A$.

The assumption is subtle, but the result is rather simple. For now, just assume that $A(t\to-\infty) = 0$ and that at some time $t_0$ the vector potential settles to a constant $A_0$; then we can integrate Eq. \eqref{eq:Newton} to obtain the velocity:

$$v(t > t_0) = -\frac{e}{m}\int_{-\infty}^{t}E\,dt' = \frac{e}{m}\int_{-\infty}^{t}\partial_{t'}A\,dt' = \frac{e}{m}A_0.$$

Or, in other words, $j = -nev = -\frac{ne^2}{m}A_0$, the same as before! This is how a constant $A$ can be physical: when it represents the change from a different constant vector potential.
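
If you want to see this “ramp up” argument play out numerically, here is a small sketch (units and the ramp shape are arbitrary choices) that integrates Eq. \eqref{eq:Newton} for a vector potential rising smoothly from zero to a constant $A_0$:

import numpy as np

# A(t) goes smoothly from 0 to A0; E = -dA/dt; integrate m dv/dt = -eE
# with v(-infinity) = 0.  The final velocity should be (e/m) A0, i.e. the
# current is j = -n e v = -(n e^2/m) A0.
e = m = 1.0
A0 = 2.0
t = np.linspace(-10.0, 10.0, 20001)
A = A0 * 0.5 * (1 + np.tanh(t))              # smooth ramp from 0 to A0
E = -np.gradient(A, t)                       # E = -dA/dt
v = -(e / m) * np.cumsum(E) * (t[1] - t[0])  # crude integral of dv/dt = -(e/m)E

print(v[-1], (e / m) * A0)   # both ~ 2.0: the constant A here is physical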

Now, to isolate the assumption, let us run through what the steps were.

  1. $m\dot{v} = -eE$.
  2. Take the Fourier transform: $-i\omega m\,v(\omega) = -eE(\omega)$.
  3. Insert the vector potential with $E = -\partial_t A$ and assume the Fourier transform exists for $A$: $-i\omega m\,v(\omega) = -i\omega e\,A(\omega)$, so $v(\omega) = \frac{e}{m}A(\omega)$.
  4. Undo the Fourier transform: $v(t) = \frac{e}{m}A(t)$.

Now, let us get the last equation (in #4 above) by a simpler route.

  1. $m\dot{v} = -eE$.
  2. Use $E = -\partial_t A$ and integrate the above expression from $-\infty$ to $t$: $v(t) - v(-\infty) = \frac{e}{m}\left[A(t) - A(-\infty)\right]$.

Two perfectly legitimate calculations resulting in different results. Firstly, this highlights that the first procedure does actually assume $A(-\infty) = 0$. Secondly, the only assumptions that could have given $v(-\infty) = 0$ (an assumption we probably wanted anyway) and $A(-\infty) = 0$ are that they could be given in terms of Fourier transforms. In order for a function to have a Fourier transform it needs to be absolutely integrable—i.e. $\int_{-\infty}^{\infty}|f(t)|\,dt < \infty$. Given $A(t)$ as a continuous, piece-wise differentiable function, we need $A(t) \to 0$ for $t \to \pm\infty$. This imposes our gauge, and since we are not interested in future times, let alone $t \to +\infty$, we can artificially modify the function as we see fit to accommodate that. But how the function began at $t = -\infty$ is important, and we must impose that. Hence, we have chosen, at least partially, a gauge.

We are left with a dilemma, then, about pure plane waves $e^{-i\omega t}$. How do those function?

Technically, they are outside of the bounds of the Fourier analysis, and we can see that simply by the fact that if we tried to do the above procedure, we couldn’t have a well-defined answer as $t \to -\infty$ (too oscillatory). However, we can approximate the plane wave by the absolutely integrable function $e^{-i\omega t + \eta t}$ for any $\eta > 0$ (modifying it at late times as discussed above), and everything works. This shows us explicitly that $A(t)$ does have $A(-\infty) = 0$ for all $\eta > 0$. And this is the origin of the well-known substitution $\omega \to \omega + i\eta$.

The natural question to ask now is how this works for a real system (with dissipation). Why does such a term not exist at zero frequency?

Unless your system is a superconductor, there is some dissipation in the system. The simplest way to include this is classically: when an electron is going at velocity $v$ it experiences a “drag” that tends to slow it down. Thus, our Newton's equation becomes

$$m\dot{v} = -eE - \frac{m}{\tau}v,$$

where $1/\tau$ describes how much drag the electron experiences. For more disorder, this would be a larger number. Playing the same Fourier transform game, we can obtain rather quickly that

$$j(\omega) = \frac{ne^2}{m}\,\frac{1}{1/\tau - i\omega}\,E(\omega).$$

This is just one step away from the well-known Drude model. We see that if $\omega \to 0$, then $j(\omega) \to \frac{ne^2\tau}{m}E(\omega)$. But our gauge choice that we described before is still in place; the only difference is that our “kick” at $t = -\infty$ has an infinite time to dissipate back to rest (the inclusion of $1/\tau$ above is critical for the $\omega \to 0$ limit). This also suggests a steady-state current when $E$ is constant: $\dot{j} = 0$ implies $j = \frac{ne^2\tau}{m}E$. Our current relaxes to zero when there’s nothing around ($E = 0$), as we would expect.
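
A quick numerical sketch of this damped equation (with arbitrary illustrative parameters) shows both behaviors: the current settles to the Drude steady state while the field is on, and relaxes to zero after it is switched off:

import numpy as np

# Integrate m dv/dt = -e E - (m/tau) v for a field that switches on and then
# off, and watch j = -n e v approach n e^2 tau E0 / m and then decay to zero.
e = m = n = 1.0
tau = 2.0
E0 = 1.0
dt = 1e-3
t = np.arange(0.0, 40.0, dt)
E = np.where(t < 20.0, E0, 0.0)   # field on for t < 20, off afterwards

v = np.zeros_like(t)
for i in range(len(t) - 1):
    dvdt = (-e * E[i] - (m / tau) * v[i]) / m
    v[i + 1] = v[i] + dvdt * dt

j = -n * e * v
print(j[int(19.0 / dt)], n * e**2 * tau * E0 / m)  # steady state, both ~ 2.0
print(j[-1])                                        # ~ 0 once the field is off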

When a quantum mechanical description is done—by taking a random disorder potential and averaging over disorder configurations—one obtains similar results. The diamagnetic term for a clean system is real and has a physically well defined explanation.

One may not be surprised that this curious “diamagnetic term” occurs for superconductors; however, it is sometimes explained that “gauge symmetry is broken” and that this is why such a term exists. This is a misleading statement, but one I will explore in a future post.

Current in Single Particle Quantum Mechanics

Justin Wilson — 14 Jan 2014

For simplicity, I will only use one-dimension in this post, but this can be generalized to higher dimensions rather easily.

Many textbooks on Quantum Mechanics mention that the current density can be derived from the continuity equation and the probability density. The usual method for figuring this out is to assume you have some Hamiltonian $H = \frac{p^2}{2m} + V(x)$, where $p$ is the momentum and $x$ is the position. In this way the current density is written in terms of the wave function $\psi(x)$ as

$$j(x) = \frac{\hbar}{2mi}\left(\psi^*\,\partial_x\psi - \psi\,\partial_x\psi^*\right).$$

This then satisfies the continuity equation

$$\partial_t\rho + \partial_x j = 0,$$

with density $\rho = |\psi|^2$. It should be noted that if you write the wave function as $\psi = \sqrt{\rho}\,e^{i\phi}$, then the current is just proportional to the gradient of the phase $\phi$, $j = \frac{\hbar}{m}\rho\,\partial_x\phi$, giving the spatial change in phase a physical significance.
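
As a sanity check, here is a small numerical sketch (with an arbitrary Gaussian-times-plane-wave state and $\hbar = m = 1$) verifying that the standard expression for $j$ agrees with $\frac{\hbar}{m}\rho\,\partial_x\phi$:

import numpy as np

# Compare j = (hbar/2mi)(psi* psi' - psi psi'*) with (hbar/m) rho dphi/dx
# for a sample wave function (Gaussian envelope times a plane wave).
hbar = m = 1.0
x = np.linspace(-10, 10, 4001)
k0, sigma = 2.0, 1.5
psi = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.trapz(np.abs(psi)**2, x))

dpsi = np.gradient(psi, x)
j_standard = (hbar / (2j * m)) * (np.conj(psi) * dpsi - psi * np.conj(dpsi))

rho = np.abs(psi)**2
phi = np.unwrap(np.angle(psi))
j_phase = (hbar / m) * rho * np.gradient(phi, x)

print(np.max(np.abs(j_standard.real - j_phase)))  # ~ 0, up to discretization error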

However, there are two lingering questions:

  1. Is this current density related to the Heisenberg operator $\dot{x}(t)$, which tracks the velocity of the system?
  2. If so, does it generalize to more arbitrary Hamiltonians?

To answer these questions, we consider the more arbitrary Hamiltonian

$$H = T(p) + V(x),$$

where $V(x)$ is some potential and the kinetic energy $T(p)$ is some polynomial

$$T(p) = \sum_n c_n\,p^n.$$

We are unworried about bounding the energy, so odd-order kinetic energy terms are allowed (in the higher dimensional case, the Dirac-like Hamiltonians have linear terms in $p$). At this point, we can take our Heisenberg operator $\dot{x}(t) = \frac{i}{\hbar}[H, x(t)]$ and find

$$\dot{x}(t) = T'(p(t)),$$

where $T'$ is the derivative of $T$ with respect to its argument. Now, we would like to obtain a current density from this quantity. We can certainly define the total current at a specific time as

$$J(t) = \langle\psi|\dot{x}(t)|\psi\rangle = \langle\psi(t)|T'(p)|\psi(t)\rangle,$$
where in the last step we go from the Heisenberg to the Schroedinger picture. Now, to get a density, we need to use a complete set of position states, so that

Now, $p$ acts as a derivative on position kets, so that one can verify that

However, there is an ambiguity here since we can write

This ambiguity in how to choose the derivatives leaves us with many ways to define the current density. Fortunately, only one of these combinations satisfies the continuity equation. To figure out which one that is, let us reverse-engineer the continuity equation to obtain a solution. The density is $\rho(x) = |\psi(x)|^2$, and so, using Schroedinger’s equation, we have

Thus, the continuity equation must become

If we now assume that we have a current density that takes the form

and satisfies the continuity equation, $\partial_t\rho + \partial_x j = 0$, then we can equate operators to obtain

Anticipating the answer, we write its general form as

Then we can take the left-hand side of Eq. \eqref{eq:diff-ops-cty} and write

On the other hand, we can calculate the right hand side of Eq. \eqref{eq:diff-ops-cty} to be

Equating the left and right sides, we can just read off the coefficients.

Thus, we have

Returning all the way to when we were considering the total current as an integral over position, this suggests that in Eq. \eqref{eq:T-delta}, we want to consider

Given the expression for the total current, Eq. \eqref{eq:total-current}, and integrating the delta function by parts numerous times, we can rearrange the derivatives, and then the total current is just

which is actually the integral of the current density! Thus, we have shown that

and that

Indeed, $\dot{x}$ does track the current of the problem and can even be written as the integral of a current density, even for the more arbitrary Hamiltonian $H = T(p) + V(x)$.
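
To close the loop numerically, here is a rough sketch (with an arbitrarily chosen cubic $T(p)$, a weak potential, and split-step time evolution; $\hbar = m = 1$) checking that $d\langle x\rangle/dt$ indeed matches $\langle T'(p)\rangle$:

import numpy as np

# Grid and momenta
x = np.linspace(-40, 40, 2048)
dx = x[1] - x[0]
p = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)

T = 0.5 * p**2 + 0.3 * p**3      # an arbitrary polynomial kinetic energy T(p)
Tprime = p + 0.9 * p**2          # its derivative T'(p)
V = 0.005 * x**2                 # a weak potential V(x)

# Initial Gaussian wave packet
psi = np.exp(-(x + 5.0)**2 / 4.0).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

dt, nsteps = 1e-3, 200

def step(psi):
    # one split-step (Trotter) time step for H = T(p) + V(x)
    psi = np.exp(-0.5j * V * dt) * psi
    psi = np.fft.ifft(np.exp(-1j * T * dt) * np.fft.fft(psi))
    return np.exp(-0.5j * V * dt) * psi

def x_mean(psi):
    return np.real(np.sum(x * np.abs(psi)**2) * dx)

def Tprime_mean(psi):
    psi_p = np.fft.fft(psi)
    return np.real(np.sum(np.conj(psi_p) * Tprime * psi_p) / np.sum(np.abs(psi_p)**2))

x_start = x_mean(psi)
for _ in range(nsteps // 2):
    psi = step(psi)
v_mid = Tprime_mean(psi)          # <T'(p)> at the midpoint of the interval
for _ in range(nsteps // 2):
    psi = step(psi)
x_end = x_mean(psi)

print((x_end - x_start) / (nsteps * dt))   # numerical d<x>/dt
print(v_mid)                                # <T'(p)>; the two should agree closely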

The δ-function Potential Lattice

Justin Wilson — 14 Jun 2013

I was messing around with some simple problems and found this simple but illustrative one. It starts from basic quantum mechanics and introduces concepts in a very straightforward way, building toward a condensed matter perspective while showing some interesting effects with experimental consequences (energy bands and band gaps opening).

We look at the relatively simple problem1 of finding the energy spectrum for a particle in the lattice potential

$$V(x) = \alpha\sum_{n=-\infty}^{\infty}\delta(x - na).$$
Visualized, it looks like the picture:

Delta function lattice.

The time-independent Schrödinger equation takes the form

$$-\frac{\hbar^2}{2m}\psi''(x) + \alpha\sum_{n}\delta(x - na)\,\psi(x) = E\,\psi(x),$$

where $E$ is the energy.

Since we can solve the problem between the delta-functions quite simply (the potential vanishes there), let us restrict our focus to $0 < x < a$. Here, the wave function takes on the form

$$\psi(x) = A\sin(kx) + B\cos(kx),$$

where we have $E = \hbar^2k^2/2m$. Now, we can find an operator that commutes with the Hamiltonian so that we can diagonalize it to help solve the problem — this will be the operator that translates us by $a$.

To make this clear, let us abstract things to operators so that we have a momentum operator $\hat{p}$ and a position operator $\hat{x}$; then we have the commutator $[\hat{x}, \hat{p}] = i\hbar$. The operator $\hat{p}$ commutes with functions of $\hat{x}$ as though it were a derivative, $[\hat{p}, f(\hat{x})] = -i\hbar f'(\hat{x})$, so considering the translation operator $\hat{T}_a = e^{i\hat{p}a/\hbar}$, we can write

$$\hat{T}_a\,f(\hat{x})\,\hat{T}_a^{-1} = f(\hat{x} + a),$$

and since the potential is periodic in $x$ with period $a$, we have that $\hat{T}_a H \hat{T}_a^{-1} = H$. Thus, the operator $\hat{T}_a$ commutes with the Hamiltonian and we can simultaneously diagonalize both it and the Hamiltonian. We say a simultaneous eigenstate has quasi-momentum $q$ when its $\hat{T}_a$ eigenvalue is $e^{iqa}$. It is important to note that this is not the same as real momentum, which is not a well-defined quantum number in this problem (that needs continuous translation symmetry).

In other words, we can write our eigenfunctions such that $\psi(x + a) = e^{iqa}\psi(x)$, and this naturally leads us to relate the coefficients for our eigenfunctions above in neighboring cells as

Now, we can apply matching conditions at the delta function, remembering that our wavefunction should be continuous, but that the delta-function will cause the first derivative to be discontinuous. The equations for the coefficients are

and if we insert the relation between the coefficients in neighboring cells, we get a matrix equation

This equation has a non-zero solution only if the determinant of the matrix is zero, which can be written as

$$\cos(qa) = \cos(ka) + \frac{m\alpha}{\hbar^2 k}\sin(ka).$$

This is an equation which relates the energy to the quasi-momentum. Since $\cos(qa)$ can only be between -1 and 1, this equation only has a solution when the right-hand side is between -1 and 1. The oscillatory nature of the right-hand side means that it should pass through this range multiple times (in fact, a countably infinite number of times).

Armed with this equation, we can use $q$ and an integer band index to label our energies, and we obtain the following energy bands by just solving for $E$ (setting $\hbar$, $m$, and $a$ to one)

Energy bands in the delta-function lattice.

The dotted lines in this plot represent the spectrum if there were no delta-function potentials (displaced in energy for clarity). Notice how the introduction of the delta-functions opens up gaps in this energy spectrum, so that there are some energies that are inaccessible. The gaps actually open up where $\cos(qa) = \pm 1$, since for a range of energies there is no solution to our equation there. This energy gap for small $\alpha$ just goes like $\alpha$, vanishing as we’d expect when $\alpha \to 0$.
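
For anyone who wants to reproduce a plot like the one above, here is a minimal sketch (with $\hbar = m = a = 1$ and an arbitrarily chosen $\alpha$, not necessarily the values behind the figure) that scans the band equation and keeps the allowed energies:

import numpy as np
import matplotlib.pyplot as plt

# Scan k (i.e. the energy E = hbar^2 k^2 / 2m), keep points where
# cos(k a) + (m alpha / hbar^2 k) sin(k a) lies in [-1, 1], and read off q.
hbar = m = a = 1.0
alpha = 2.0

k = np.linspace(1e-4, 4 * np.pi, 20000)           # spans the first few bands
rhs = np.cos(k * a) + (m * alpha / (hbar**2 * k)) * np.sin(k * a)
allowed = np.abs(rhs) <= 1.0                       # energies with a real quasi-momentum

q = np.arccos(rhs[allowed]) / a                    # q in [0, pi/a]
E = hbar**2 * k[allowed]**2 / (2 * m)

plt.plot(q, E, ".", ms=1)
plt.plot(-q, E, ".", ms=1)                         # bands are symmetric under q -> -q
plt.xlabel("quasi-momentum q")
plt.ylabel("energy E")
plt.show()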

Notice that in this energy spectrum, we see that if we shift $q$ by $2\pi/a$ we get the same energy.

Additionally, if the energy ever goes negative, the solutions turn from plane waves into functions localized around the delta functions. In fact, the wave functions look like this for one such state:

Negative energy localized states in delta-function lattice with alpha less than 0.

These only appear when $\alpha < 0$, and they are related to the fact that an attractive delta-function potential has a bound state. Additionally, only one band can ever have these states. This is due to the fact that the oscillatory sine and cosine change to their non-oscillatory hyperbolic counterparts. However, these states in the delta-function lattice are spread out throughout the crystal, and cannot be said to be “localized” to a specific site — they still all have a definite quasi-momentum.

As with other single particle problems, upon considering the many particle picture, these energy bands get filled up to a set energy level (if we are considering fermions).

  1. This is problem 2.53 in Exploring Quantum Mechanics by Galitski, Karnakov, Kogan, and Galitski.