Justin Wilson: Research website and blog
/
Fri, 06 Sep 2019 00:35:38 -0400
Quasicrystalline Art
<figure class="fullwidth"><img src="/assets/img/quasicrystal_imagequilt.png" /><figcaption>An image quilt of quasicrystals</figcaption></figure>
<!--more-->
<p>Quasicrystals, beautiful structures that lack strict crystalline symmetry but nonetheless exhibit order, have won a <a href="https://www.nobelprize.org/prizes/chemistry/2011/press-release/">Nobel prize</a> and have recently entered my own work, with a dodecagonal graphene quasicrystal making its way into Science<label for="sn-id-sciencegraphene" class="margin-toggle sidenote-number"></label><input type="checkbox" id="sn-id-sciencegraphene" class="margin-toggle" /><span class="sidenote"><a href="https://science.sciencemag.org/content/361/6404/782">S. J. Ahn et al., <em>Science</em> <strong>361</strong>, 782 (2018)</a> </span>.</p>
<p>This led to a beautiful cover of Science:</p>
<figure><figcaption>Cover image from a <a href="https://science.sciencemag.org/content/361/6404/eaav1395">Science magazine cover story</a>.</figcaption><img src="/assets/img/F1.large.jpg" /></figure>
<p><label for="mn-id-myresearch" class="margin-toggle"> ⊕</label><input type="checkbox" id="mn-id-myresearch" class="margin-toggle" /><span class="marginnote">We have been studying how quasiperiodicity interplays with materials that have Dirac nodes, including <em>twisted bilayer graphene</em>. While we have not studied graphene twisted at 30 degrees like the work in Science, that is the <em>extreme</em> case where all crystalline periodicity is lost. </span>
This phenomenon is a perfect example of the kind of research I have been doing these days, and it inspired the new logo for this website</p>
<figure><figcaption>Building a Penrose tiling from two sheets of graphene twisted at 30-degrees with respect to each other.</figcaption><img src="/assets/img/quasicrystal_logo.png" /></figure>
<p>One can see how this is done: find the points where two hexagons lie on top of each other, put down a vertex, and connect the vertices. Three shapes appear: a rhombus, an equilateral triangle, and a square. Repeating this across the entire sheet creates an amazing-looking pattern. For completeness, we can fill in the rest of the pictured grid to obtain:</p>
<figure><figcaption>A fully Penrose tiled sheet.</figcaption><img src="/assets/img/quasicrystal_logo2.png" /></figure>
<p>The pattern starts to look even more intriguing the further out in the tiling you go.
There is much to learn about such physical systems and their quasiperiodic cousins.</p>
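The construction described above can be sketched numerically. The code below is my own rough illustration, not the code used for the figures: it overlays the hexagon centers of two sheets (each forming a triangular lattice), twisted by 30 degrees, and keeps near-coincidences within an arbitrary tolerance `tol` as tiling vertices.

```python
import numpy as np

def triangular_lattice(n, angle=0.0):
    """Hexagon centers of one graphene sheet form a triangular lattice."""
    a1 = np.array([1.0, 0.0])
    a2 = np.array([0.5, np.sqrt(3) / 2])
    i, j = np.meshgrid(np.arange(-n, n + 1), np.arange(-n, n + 1))
    pts = (i[..., None] * a1 + j[..., None] * a2).reshape(-1, 2)
    c, s = np.cos(angle), np.sin(angle)
    return pts @ np.array([[c, -s], [s, c]]).T

sheet1 = triangular_lattice(20)
sheet2 = triangular_lattice(20, angle=np.pi / 6)  # 30-degree twist

# Tiling vertices: hexagons of the two sheets that (nearly) sit on top
# of each other, within an arbitrary tolerance.
tol = 0.15
vertices = []
for p in sheet1:
    d = np.linalg.norm(sheet2 - p, axis=1)
    if d.min() < tol:
        vertices.append((p + sheet2[d.argmin()]) / 2)
vertices = np.array(vertices)
```

Plotting `vertices` (e.g. with matplotlib) reproduces the qualitative twelve-fold pattern; only the origin is an exact coincidence, since a 30-degree twist is incommensurate with the lattice.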
Thu, 05 Sep 2019 00:00:00 -0400
/2019/09/05/Quasicrystalline-Art/

Pulsing a two-band model to discover topology
<p>In systems with an anomalous quantum Hall effect, the quantized Hall conductivity is given by a Chern number: the integral of the Berry curvature over some closed manifold. Usually, this integral is derived via the Kubo formula.
However, there is geometry involved in how the state evolves, and in fact we can use the dynamics of the current following a weak pulse to find the DC conductivity. The route is easy enough: say we have a conductivity that, written in the time domain, is <span><script type="math/tex">\sigma_{yx}(t-t')</script></span>. If, without loss of generality, we apply a pulse <span><script type="math/tex">E_x(t) = A_x\delta(t)</script></span>, then we can find the current response in the perpendicular direction
<!--more--></p>
<div class="mathblock"><script type="math/tex; mode=display">\begin{align} j_y(t) = \int dt' \, \sigma_{yx}(t-t') E_x(t') = \sigma_{yx}(t) A_x. \end{align}</script></div>
<p>This allows us to derive an expression for the DC-conductivity</p>
<div class="mathblock"><script type="math/tex; mode=display">\begin{align} \sigma_{yx} = \int dt \, \sigma_{yx}(t) = \frac1{A_x} \int dt \, j_y(t). \end{align}</script></div>
<p>Geometrically, there is a lot going on with <span><script type="math/tex">j_y(t)</script></span> when we have a system with spin-orbit coupling. In particular, take the two band model</p>
<div class="mathblock"><script type="math/tex; mode=display">\begin{align}
h(\mathbf p) = \mathbf d(\mathbf p) \cdot \sigma,
\end{align}</script></div>
<p>where <span><script type="math/tex">\mathbf d</script></span> is 3D, <span><script type="math/tex">\mathbf p</script></span> is 2D, and <span><script type="math/tex">\sigma = (\sigma_x, \sigma_y, \sigma_z)</script></span>, the vector of Pauli matrices. The initial states of the system can be represented by where they are on the Bloch sphere <span><script type="math/tex">-\hat{\mathbf d}(\mathbf p)</script></span>. But once a pulse is supplied, this state will begin to rotate about a different vector <span><script type="math/tex">\mathbf d(\mathbf p - e \mathbf A)</script></span>. Thus, if we add time dependence to <span><script type="math/tex">\hat{\mathbf d}(\mathbf p, t)\equiv -\langle \sigma(t) \rangle</script></span> to represent the state’s location, we can use Heisenberg’s equations of motion to obtain</p>
<div class="mathblock"><script type="math/tex; mode=display">\begin{align}
\hbar \frac{\partial \hat{\mathbf d}(\mathbf p, t)}{\partial t} = 2\mathbf{d}(\mathbf p - e\mathbf A) \times \hat{\mathbf{d}}(\mathbf p, t).
\end{align}</script></div>
<p>We can rewrite this equation as</p>
<div class="mathblock"><script type="math/tex; mode=display">\begin{align}
\hat{\mathbf d}(\mathbf p,t) = \hat{\mathbf d}(\mathbf p - e \mathbf A) [\hat{\mathbf d}(\mathbf p - e \mathbf A) \cdot \hat{\mathbf d}(\mathbf p)] - \hbar \frac{\hat{\mathbf d}(\mathbf p-e \mathbf A) \times \frac{\partial \hat{\mathbf d}(\mathbf p, t)}{\partial t}}{2d(\mathbf p - e \mathbf A)}
\end{align}</script></div>
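As a quick numerical sanity check (my own sketch, with an arbitrarily chosen post-pulse vector and units where ℏ = 1), integrating the equation of motion confirms the two facts used in this rewriting: the precession preserves both the length of the unit vector and its projection onto the precession axis.

```python
import numpy as np

hbar = 1.0
d_vec = np.array([0.3, -0.4, 1.2])   # d(p - eA), fixed after the pulse (arbitrary)
d_mag = np.linalg.norm(d_vec)

def rhs(n):
    # hbar dn/dt = 2 d x n  (the precession equation of motion)
    return 2.0 * np.cross(d_vec, n) / hbar

n0 = np.array([0.0, 0.0, 1.0])       # initial d-hat(p) (arbitrary unit vector)
proj0 = d_vec @ n0 / d_mag           # projection onto the precession axis

# Fourth-order Runge-Kutta integration of the precession
n, dt = n0.copy(), 1e-3
for _ in range(20000):
    k1 = rhs(n)
    k2 = rhs(n + 0.5 * dt * k1)
    k3 = rhs(n + 0.5 * dt * k2)
    k4 = rhs(n + dt * k3)
    n = n + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

After many precession periods, `n` should still be a unit vector with the same projection `proj0` onto `d_vec`.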
<p>This state also carries a current, represented by the operator <span><script type="math/tex">j_\mu = -e \partial_\mu \mathbf d(\mathbf p- e \mathbf A) \cdot \sigma</script></span>, and the vector <span><script type="math/tex">\langle\sigma\rangle = -\hat{\mathbf{d}}(\mathbf p,t)</script></span>. Therefore,</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{align}
\langle j_\mu \rangle = e^2 \partial_\mu \mathbf d(\mathbf p - e\mathbf A ) \cdot \hat{\mathbf d}(\mathbf p, t)
\end{align}
</script></div>
<p>Combining these expressions, we have</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{multline}
\langle j_\mu \rangle = e^2 \partial_\mu \mathbf d(\mathbf p - e\mathbf A ) \cdot \Bigg[ \hat{\mathbf d}(\mathbf p - e \mathbf A) [\hat{\mathbf d}(\mathbf p - e \mathbf A) \cdot \hat{\mathbf d}(\mathbf p)] \\ - \hbar \frac{\hat{\mathbf d}(\mathbf p-e \mathbf A) \times \frac{\partial \hat{\mathbf d}(\mathbf p, t)}{\partial t}}{2d(\mathbf p - e \mathbf A)} \Bigg].
\end{multline}
</script></div>
<p>Or simplified</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{multline}
\langle j_\mu \rangle = e^2 \partial_\mu d(\mathbf p - e\mathbf A )[\hat{\mathbf d}(\mathbf p - e \mathbf A) \cdot \hat{\mathbf d}(\mathbf p)] \\ - \frac{e^2 \hbar}{2} \partial_\mu \hat{\mathbf{d}}(\mathbf p - e \mathbf A) \cdot \left[\hat{\mathbf d}(\mathbf p-e \mathbf A) \times \frac{\partial \hat{\mathbf d}(\mathbf p, t)}{\partial t} \right].
\end{multline}
</script></div>
<p>This is exact. At this point, we make a couple of approximations. First, the first term is independent of <span><script type="math/tex">t</script></span>, so it cannot contribute to the total current <em>if</em> we have a finite DC conductivity. This leaves the second term. We can now do the integral over time. There is an order-of-limits problem here, but we can get around it by noting that the infinite-time state should not contribute (it averages to something proportional to <span><script type="math/tex">\mathbf d(\mathbf p - e \mathbf A)</script></span> anyway, so the cross product vanishes). Discarding it, we obtain <span><script type="math/tex">\int_0^\infty dt' \, \partial_{t'} \hat{\mathbf d}(\mathbf p, t') = - \hat{\mathbf d}(\mathbf p)</script></span>.</p>
<p>Hence, we get the Hall conductivity</p>
<div class="mathblock"><script type="math/tex; mode=display">\begin{align}
\sigma_{yx} = \frac{e^2\hbar}{2 A_x} \int \frac{d^2 p}{h^2} \partial_y \hat{\mathbf{d}}(\mathbf p - e \mathbf A) \cdot \left[\hat{\mathbf d}(\mathbf p-e \mathbf A) \times \hat{\mathbf d}(\mathbf p) \right].
\end{align}</script></div>
<p>Note that we have not yet expanded in <span><script type="math/tex">A_x</script></span>. Doing so, the zeroth-order term is symmetric in <span><script type="math/tex">x</script></span> and <span><script type="math/tex">y</script></span>, but it drops out because the cross product <span><script type="math/tex">\hat{\mathbf d}(\mathbf p) \times \hat{\mathbf d}(\mathbf p) = 0</script></span> vanishes. Thus, only the next term persists; we see directly that <span><script type="math/tex">x</script></span> and <span><script type="math/tex">y</script></span> must be different, and in fact we recover the well-known formula</p>
<div class="mathblock"><script type="math/tex; mode=display">\begin{align}
\sigma_{yx}^{\mathrm{Hall}} =\frac{e^2}{h} \int \frac{d^2 p}{4\pi} \hat{\mathbf{d}}(\mathbf p) \cdot \left[\partial_y \hat{\mathbf d}(\mathbf p) \times \partial_x \hat{\mathbf d}(\mathbf p) \right].
\end{align}</script></div>
<p>This describes the Chern number of some manifold parametrized by <span><script type="math/tex">\mathbf p</script></span> (usually the Brillouin zone). It is the number of times the unit vector <span><script type="math/tex">\hat{\mathbf d}</script></span> wraps the sphere.</p>
<p>To understand why it is a topological invariant, note that the quantity in the integral looks very much like a Jacobian. In fact, it is: it describes the coordinate transformation from <span><script type="math/tex">(p_x,p_y)</script></span> to the sphere traced out by <span><script type="math/tex">\hat {\mathbf d}</script></span>.
In this way, the integrand represents an area element on the sphere, and in general <span><script type="math/tex">\mathbf p</script></span> lives on a closed manifold. So <span><script type="math/tex">\hat{\mathbf d}(\mathbf p)</script></span> maps that closed manifold to the sphere, and without any edges or boundaries the area it maps out must be <span><script type="math/tex">4 \pi</script></span> times an integer.</p>
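This winding can be evaluated numerically. The sketch below is my own illustration of the formula, not code from the paper; the particular two-band d-vector (a Qi-Wu-Zhang-like lattice model) and the grid size are assumptions.

```python
import numpy as np

def chern_number(d_of_p, n=400):
    """(1/4 pi) * integral of d-hat . (d_y d-hat x d_x d-hat) over the BZ."""
    ps = np.linspace(-np.pi, np.pi, n, endpoint=False)
    px, py = np.meshgrid(ps, ps, indexing="ij")
    d = np.array(d_of_p(px, py))                 # shape (3, n, n)
    dhat = d / np.linalg.norm(d, axis=0)
    dp = ps[1] - ps[0]
    # periodic central differences across the Brillouin zone
    ddx = (np.roll(dhat, -1, axis=1) - np.roll(dhat, 1, axis=1)) / (2 * dp)
    ddy = (np.roll(dhat, -1, axis=2) - np.roll(dhat, 1, axis=2)) / (2 * dp)
    integrand = np.einsum("iab,iab->ab", dhat, np.cross(ddy, ddx, axis=0))
    return integrand.sum() * dp**2 / (4 * np.pi)

def qwz(m):
    # example d-vector: d(p) = (sin px, sin py, m + cos px + cos py)
    return lambda px, py: (np.sin(px), np.sin(py), m + np.cos(px) + np.cos(py))

c_topological = chern_number(qwz(1.0))   # expect |C| = 1 for this model
c_trivial = chern_number(qwz(3.0))       # expect C = 0
```

For this model the gap closes at m = -2, 0, 2, and the numerically evaluated integral snaps to an integer on either side of those transitions.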
<p>This formula is well-known, but this dynamical way of obtaining it is slightly less so. We have extended this idea in a paper published <a href="http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.117.235302">last year</a> to handle the out-of-equilibrium case of quenches. In that situation, new phenomena appear that are quite different from the equilibrium case: terms that we discarded in this calculation become quite relevant.</p>
Mon, 20 Feb 2017 00:00:00 -0500
/2017/02/20/Chern-Number-From-Real-Time/

Subtleties in linear response theory
<p>In linear response theory, we consider some small perturbation to a Hamiltonian and look at the response of some observable to that perturbation. In the case considered here, the perturbation is an electric field, and the response is a current. The linear-response coefficient that relates these quantities is the conductivity.
<!--more--></p>
<p>There is a problem, though: an electric field <em>accelerates</em> a charge. Consider a classical electron for the time being; then</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{align}
m \ddot x(t) = - e \mathbf{E}. \label{eq:Newton} \tag{1}
\end{align}
</script></div>
<p>Or in terms of current <span><script type="math/tex">j(t) = -e \dot x(t)</script></span>, <span><script type="math/tex"> \frac{d j}{dt} = \frac{e^2}{m} \mathbf{E} </script></span>. From here we can quickly and naively go to frequency space to find <span><script type="math/tex"> j(\omega) = i \frac{e^2/m}{\omega} \mathbf{E}(\omega) </script></span>. Then one might remember that another way to define <span><script type="math/tex">\mathbf{E}</script></span> is in terms of a vector potential that is purely time-dependent, so <span><script type="math/tex">\mathbf{E}(\omega) = i \omega \mathbf A(\omega)</script></span>. Now, if we just plug this into our linear response for the current, we get</p>
<div class="mathblock"><script type="math/tex; mode=display"> j(\omega) = - \frac{ e^2}{m} \mathbf A(\omega). \label{eq:jA} </script></div>
<p>All is well and good, right? Well, not quite. In electromagnetism, the constant part of <span><script type="math/tex">\mathbf A(t)</script></span> corresponds to the <span><script type="math/tex">\omega = 0</script></span> term of <span><script type="math/tex">A(\omega)</script></span>. This represents what is known as a “pure gauge”. These gauges are <em>physically equivalent</em> to the null field <span><script type="math/tex">\mathbf A(\omega=0) = 0</script></span>. Thus, whatever linear response is represented above at <span><script type="math/tex">\omega = 0</script></span> must be unphysical, right?</p>
<p>Wrong.</p>
<p>Before explaining why this is wrong, let us give some further context to this linear response theory. The term <span><script type="math/tex">- e^2/m</script></span> is actually the single-particle term of what is known as the “diamagnetic” response of the conductivity when you add in more electrons (usually distributed in a Fermi distribution). This term persists in quantum mechanics, and no other terms appear to cancel it in the simplest case of <span><script type="math/tex"> H = \frac{p^2}{2m} </script></span>. In fact, while the math becomes more cumbersome, the solution we shall illustrate below holds perfectly well even for the non-interacting multi-electron system.</p>
<p>Now, at this point you may have guessed that there’s something strange going on at <span><script type="math/tex">\omega = 0</script></span> due to the fact that the electric field <em>accelerates</em> the particle and doesn’t just have a velocity response. At the <span><script type="math/tex">\omega = 0</script></span> point, the <em>physical</em> field <span><script type="math/tex">E(\omega)</script></span> seems to necessarily be equal to zero in the gauge we have prescribed unless <span><script type="math/tex">A(\omega) \sim 1/\omega </script></span> for small <span><script type="math/tex">\omega</script></span>. This would lead to a divergent <span><script type="math/tex">j(\omega)</script></span>, restoring our faith that the system is accelerating out of control.</p>
<p>But what about when <span><script type="math/tex">\mathbf{A}(\omega=0) = \mathbf{A}_0</script></span>? It seems like then we have a true velocity response to an unphysical object. The solution is subtle: At some point in the quick derivation we made an assumption that implied <span><script type="math/tex">\mathbf A(t) \rightarrow 0 </script></span> at <span><script type="math/tex"> t \rightarrow -\infty</script></span>. This implies that if <span><script type="math/tex">\mathbf A(t) = \mathbf A_0</script></span> at any finite time, there had to be some time in between where <span><script type="math/tex">d\mathbf A/ dt \neq 0</script></span>. Thus, during that “ramp up” time, an electric field was on and it accelerated the charge to a specific velocity resulting in the current <span><script type="math/tex">j(\omega) = - \frac{ e^2}{m} \mathbf A(\omega)</script></span>.</p>
<p>The assumption is subtle, but the result is rather simple. For now, just assume that <span><script type="math/tex">\mathbf A(-\infty) = 0 </script></span> and at some <span><script type="math/tex">t_0</script></span>, <span><script type="math/tex">\mathbf A(t_0) = \mathbf A_0</script></span>, then we can integrate Eq. \eqref{eq:Newton} to obtain the velocity:</p>
<div class="mathblock"><script type="math/tex; mode=display"> m \dot x (t_0) = e \int_{-\infty}^{t_0} \frac{d \mathbf A}{d t} d t = e \mathbf A_0. </script></div>
<p>Or, in other words, <span><script type="math/tex">\mathbf j(t_0) = -\frac{e^2}{m} \mathbf A_0</script></span>, the same as before! This is how a constant <span><script type="math/tex">\mathbf A_0</script></span> can be physical: When it represents the change from a different constant vector potential.</p>
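This "ramp up" picture is easy to check numerically. Below is a small sketch of my own (arbitrary units e = m = 1 and a tanh ramp profile, both assumptions): integrating Newton's equation with E = -dA/dt gives a final current -(e²/m)A₀ regardless of the ramp's details.

```python
import numpy as np

e, m, A0 = 1.0, 1.0, 0.7

def A(t):
    # smooth ramp from 0 in the distant past to A0 (assumed profile)
    return A0 * 0.5 * (1 + np.tanh(t))

ts = np.linspace(-20.0, 20.0, 400001)
dt = ts[1] - ts[0]
E_field = -np.gradient(A(ts), dt)        # E = -dA/dt

# m dv/dt = -e E  =>  v(t) = -(e/m) * cumulative integral of E
v = -(e / m) * np.cumsum(E_field) * dt
j_final = -e * v[-1]                     # j = -e v, evaluated after the ramp
```

The final current lands on -(e²/m)A₀ no matter how slowly the ramp is taken, exactly as the gauge argument says it must.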
<p>Now, to isolate the assumptions, let us run through what they were:</p>
<ol>
<li><span><script type="math/tex">\frac{d j}{d t} = \frac{e^2}{m} E(t)</script></span>.</li>
<li>Take the Fourier transform: <span><script type="math/tex"> j(\omega) = i \frac{e^2/m}{\omega} E(\omega)</script></span>.</li>
<li>Insert vector potential with <span><script type="math/tex">E = -\frac{d}{dt} A</script></span> and <em>assume the Fourier transform exists</em> for <span><script type="math/tex">A</script></span>: <span><script type="math/tex">j(\omega) = - \frac{e^2}m A(\omega)</script></span>.</li>
<li>Undo the Fourier transform: <span><script type="math/tex">j(t) = - \frac{e^2}{m} A(t)</script></span>.</li>
</ol>
<p>Now, let us get the last equation (in #4 above) by a simpler route.</p>
<ol>
<li><span><script type="math/tex">\frac{d j}{dt} = \frac{e^2}{m} E(t)</script></span>.</li>
<li>Use <span><script type="math/tex">E = - \frac{d}{dt} A</script></span> and integrate the above expression from <span><script type="math/tex">-\infty</script></span> to <span><script type="math/tex">t</script></span>: <span><script type="math/tex">j(t) - j(-\infty) = -\frac{e^2}{m} ( A(t) - A(-\infty))</script></span>.</li>
</ol>
<p>Two perfectly legitimate calculations, yet they give different results. Firstly, this highlights that the first procedure does actually assume <span><script type="math/tex">A(-\infty) = 0</script></span>. Secondly, the only assumption that could have given <span><script type="math/tex">j(-\infty) = 0</script></span> (an assumption we probably wanted anyway) and <span><script type="math/tex">A(-\infty) = 0</script></span> is that both could be written in terms of Fourier transforms. In order for a function to have a Fourier transform it needs to be <em>absolutely integrable</em>—i.e. <span><script type="math/tex"> \int_{-\infty}^\infty \lvert A(t) \rvert dt \lt \infty</script></span>. Given <span><script type="math/tex">A(t)</script></span> as a continuous, piecewise-differentiable function, we need <span><script type="math/tex">A(t) \rightarrow 0 </script></span> for <span><script type="math/tex">t \rightarrow \pm \infty</script></span>. This imposes our gauge, and since we are not interested in future times, let alone <span><script type="math/tex"> t \rightarrow +\infty</script></span>, we can artificially modify the function as we see fit to accommodate that. But how the function began at <span><script type="math/tex">-\infty</script></span> is important, and we must impose that. Hence, we have chosen, at least partially, a gauge.</p>
<p>We are left with a dilemma, then, about pure plane waves <span><script type="math/tex"> A(t) = A(\omega) e^{-i \omega t}</script></span>. How do those fit in?</p>
<p>Technically, they are outside the bounds of Fourier analysis, and we can see that simply from the fact that if we tried the above procedure, we could not have a well-defined answer as <span><script type="math/tex">t \rightarrow -\infty</script></span> (too oscillatory). However, we can approximate the plane wave by an absolutely integrable function <span><script type="math/tex"> A_\delta(t) = A(\omega) e^{-i (\omega + i \delta) t}</script></span> for any <span><script type="math/tex">\delta \gt 0</script></span>, and everything works. This shows us explicitly that <span><script type="math/tex">t = - \infty</script></span> does have <span><script type="math/tex"> A_\delta \rightarrow 0</script></span> for all <span><script type="math/tex">\delta \gt 0</script></span>. And this is the origin of the well-known substitution <span><script type="math/tex">\omega \rightarrow \omega + i\delta</script></span>.</p>
<p>The natural question to ask now is how this works for a real system (with dissipation). Why does such a term not exist at zero frequency?</p>
<p>Unless your system is a superconductor, there is some dissipation in the system.
The simplest way to include this is classically: When an electron is going at velocity <span><script type="math/tex">\dot x(t)</script></span> it experiences a “drag” that tends to slow it down.
Thus, our Newton’s equations become</p>
<div class="mathblock"><script type="math/tex; mode=display"> m \ddot x(t) = - m \gamma \dot x(t) - e \mathbf E</script></div>
<p>where <span><script type="math/tex">\gamma</script></span> describes how much drag the electron experiences.
For more disorder, this would be a larger number.
Playing the same Fourier transform game, we can obtain rather quickly that</p>
<div class="mathblock"><script type="math/tex; mode=display"> j(\omega) = - \frac{e^2}{m} \frac{\omega}{\omega +i \gamma} A(\omega). </script></div>
<p>This is just one step away from the well-known <a href="http://en.wikipedia.org/wiki/Drude_model">Drude model</a>.
We see that if <span><script type="math/tex">A(t) = A_0</script></span>, then <span><script type="math/tex">j(\omega) = 0</script></span>.
But our gauge choice from before is still in place; the only difference is that our “kick” at <span><script type="math/tex">t=-\infty</script></span> has had an infinite time to dissipate back to rest (the inclusion of <span><script type="math/tex">\gamma</script></span> above is critical for <span><script type="math/tex">j(\omega=0) =0</script></span>).
This also suggests a steady state current when <span><script type="math/tex">\mathbf E</script></span> is constant: <span><script type="math/tex">m\ddot x=0</script></span> implies <span><script type="math/tex">j = \frac{e^2}{m \gamma} \mathbf E</script></span>.
Our current relaxes to zero when there’s nothing around (<span><script type="math/tex">\mathbf E = 0</script></span>), as we would expect.</p>
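A minimal sketch (my own, with arbitrary units e = m = 1 and an assumed drag coefficient) confirms the steady state: with drag, a constant field drives the current to the Drude value j = e²E/(mγ) rather than accelerating it without bound.

```python
import numpy as np  # numpy only for consistency with the other sketches

e, m, gamma, E0 = 1.0, 1.0, 0.5, 0.3

# Forward-Euler integration of m dv/dt = -m*gamma*v - e*E0
v, dt = 0.0, 1e-3
for _ in range(60000):        # integrate for 60 time units, >> 1/gamma
    v += dt * (-gamma * v - (e / m) * E0)

j_steady = -e * v                       # drift current j = -e v
expected = e**2 * E0 / (m * gamma)      # Drude steady-state value
```

After a few multiples of 1/γ the transient has decayed and `j_steady` sits on the Drude value; switching `E0` to zero instead relaxes the current back to zero.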
<p>When a quantum mechanical description is done—by taking a random disorder potential and averaging over disorder configurations—one obtains similar results.
The diamagnetic term for a <em>clean system</em> is real and has a physically well defined explanation.</p>
<p>One may not be surprised that this curious “diamagnetic term” occurs for superconductors; however, it is sometimes explained that “gauge symmetry is broken” and that this is why such a term exists.
This is a misleading statement, but one I will explore in a future post.</p>
Mon, 22 Dec 2014 00:00:00 -0500
/2014/12/22/Subtleties-in-linear-response-theory/

Current in Single Particle Quantum Mechanics
<p>For simplicity, I will only use one dimension in this post, but this can be generalized to higher dimensions rather easily.</p>
<p>Many textbooks on quantum mechanics mention that the current density can be derived from the continuity equation for the probability density.
The usual method for figuring this out is to assume you have some Hamiltonian <span><script type="math/tex"> H = p^2/2m + V(x)</script></span> where <span><script type="math/tex">p</script></span> is the momentum and <span><script type="math/tex">x</script></span> is the position.
In this way the current density is written in terms of the wave function <span><script type="math/tex">\psi(x,t)</script></span> as
<!--more--></p>
<div class="mathblock"><script type="math/tex; mode=display">
j(x,t) = \frac1{2m i}\left[ \psi^* \overrightarrow{\partial_x} \psi - \psi^* \overleftarrow{\partial_x} \psi \right].
</script></div>
<p>This then satisfies the continuity equation</p>
<div class="mathblock"><script type="math/tex; mode=display">
\partial_t \rho(x,t) + \partial_x j(x,t) = 0,
</script></div>
<p>with density <span><script type="math/tex">\rho(x,t)=\psi^*(x,t)\psi(x,t)</script></span>. It should be noted that if you write the wave function as <span><script type="math/tex">\psi(x,t) = \sqrt{\rho(x,t)} e^{i \theta(x,t)}</script></span>, then the current is just proportional to the gradient of the phase <span><script type="math/tex">j(x,t)= \rho(x,t) \partial_x \theta(x,t)/m</script></span>, giving the spatial change in phase a physical significance.</p>
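The phase-gradient form is easy to verify numerically. Here is a minimal sketch (units ℏ = 1; the Gaussian wave packet with linear phase θ = k₀x is my own illustrative choice): the current computed from the defining formula matches ρ ∂ₓθ/m.

```python
import numpy as np

m, k0 = 1.0, 2.0
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) * np.exp(1j * k0 * x)   # sqrt(rho) * exp(i theta)

# j = (1/2mi) [psi* dpsi/dx - (dpsi/dx)* psi]
dpsi = np.gradient(psi, dx)
j = ((psi.conj() * dpsi - dpsi.conj() * psi) / (2j * m)).real

# compare with rho * (d theta/dx) / m, where theta = k0 * x
rho = np.abs(psi)**2
j_phase = rho * k0 / m
```

The two arrays agree to finite-difference accuracy, showing that the spatial change in phase carries all of the current for this state.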
<p>However, there are two lingering questions:</p>
<ol>
<li>Is this current density related to the Heisenberg operator <span><script type="math/tex">\dot x(t)</script></span> which tracks the velocity of the system?</li>
<li>If so, does it generalize to more arbitrary Hamiltonians?</li>
</ol>
<p>To answer these questions, we consider the more arbitrary Hamiltonian</p>
<div class="mathblock"><script type="math/tex; mode=display">
H = T(p) + V(x),
</script></div>
<p>where <span><script type="math/tex">V(x)</script></span> is some potential and the kinetic energy is some polynomial</p>
<div class="mathblock"><script type="math/tex; mode=display">T(p) = \sum_{n=1} a_n \frac{p^n}{n!}.</script></div>
<p>We are not worried about keeping the energy bounded, so odd-order kinetic energy terms are allowed (in the higher-dimensional case, Dirac-like Hamiltonians have terms linear in <span><script type="math/tex">p</script></span>).
At this point, we can take our Heisenberg operator <span><script type="math/tex">\dot x(t)</script></span> and find</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{align*}
\dot x(t) & = i [H, x(t)] \\
& = T'(p(t)).
\end{align*}
</script></div>
<p>where <span><script type="math/tex">T'</script></span> is the derivative of <span><script type="math/tex">T</script></span> with respect to its argument.
Now, we would like to obtain a current density from this quantity.
We can certainly define the total current at a specific time as</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{align*}
I & = \langle\psi_0\lvert \dot x(t) \rvert \psi_0\rangle = \langle\psi_0\lvert T'(p(t)) \rvert \psi_0\rangle\\
& = \langle\psi(t)\lvert T'(p) \rvert \psi(t)\rangle,
\end{align*}
</script></div>
<p>where in the last line we go from the Heisenberg to the Schroedinger picture. Now, to get a density, we need to use a complete set of position states, so that</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{align} \label{eq:total-current} \tag{1}
I & = \int dx \, dy \, \langle\psi(t)\lvert x\rangle \langle x \lvert T'(p) \rvert y \rangle \langle y \lvert \psi(t)\rangle.
\end{align}
</script></div>
<p>Now, <span><script type="math/tex">p</script></span> acts as a derivative on position kets, so that one can verify that</p>
<div class="mathblock"><script type="math/tex; mode=display">
\langle x \lvert T'(p) \rvert y \rangle = T'(-i \partial_x ) \delta(x-y).
</script></div>
<p>However, there is an ambiguity here since we can write</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{align}
T'(-i \partial_x)\delta(x-y) & = \sum_{n=0} a_{n+1} \frac{(-i\partial_x)^n}{n!} \delta(x-y) \nonumber \\ & = \sum_{n=0} a_{n+1} \frac{(-i\partial_x)^{n-m} (i \partial_y)^m}{n!} \delta(x-y). \label{eq:T-delta} \tag{2}
\end{align}
</script></div>
<p>This ambiguity in how to distribute the derivatives leaves us with many ways to define the current density.
Fortunately, only one of these combinations satisfies the continuity equation.
To figure out which one that is, let us reverse engineer the continuity equation to obtain a solution.
The density is <span><script type="math/tex">\rho(x,t) = \psi^*(x,t) \psi(x,t)</script></span>, and so using the Schroedinger’s equation, we have</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{align*}
i \partial_t \rho(x,t) & = i(\psi^*(x,t) \overleftarrow {\partial_t} \psi(x,t) + \psi^*(x,t) \overrightarrow {\partial_t} \psi(x,t) ) \\
& = - [\psi^*(x,t)( T(i \overleftarrow{\partial_x} ) - T(-i \overrightarrow{\partial_x} ) )\psi(x,t) ].
\end{align*}
</script></div>
<p>Thus, the continuity equation must become</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{align*}
\partial_t \rho(x,t) - i [\psi^*(x,t)( T(i \overleftarrow{\partial_x} ) - T(-i \overrightarrow{\partial_x} ) )\psi(x,t) ] = 0.
\end{align*}
</script></div>
<p>If we now assume that we have a current density that takes the form</p>
<div class="mathblock"><script type="math/tex; mode=display"> j(x,t) = \psi^*(x,t) \vartheta(\overleftarrow{\partial_x},\overrightarrow{\partial_x}) \psi(x,t), </script></div>
<p>and satisfies the continuity equation, <span><script type="math/tex">\partial_t \rho + \partial_x j = 0</script></span>, then we can equate operators to obtain</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{align} \label{eq:diff-ops-cty} \tag{3}
\overleftarrow{\partial_x} \vartheta + \vartheta \overrightarrow{\partial_x} = -i [T(i \overleftarrow{\partial_x} ) - T(-i \overrightarrow{\partial_x} ) ].
\end{align}</script></div>
<p>Anticipating the answer, we write the general form of <span><script type="math/tex">\vartheta</script></span> as</p>
<div class="mathblock"><script type="math/tex; mode=display">
\vartheta = \sum_{n=0} \sum_{m=0}^n (-1)^{m} i^n b_{n,m} \overleftarrow{\partial}{}_x^{n-m} \overrightarrow{\partial}{}_x^m.
</script></div>
<p>Then we can take the left hand side Eq. \eqref{eq:diff-ops-cty} and write</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{multline*}
\overleftarrow{\partial_x} \vartheta + \vartheta \overrightarrow{\partial_x} = -i \sum_{n=1} i^n b_{n-1,0} \overleftarrow{\partial}{}_x^{n} - \sum_{n=0} \sum_{m=0}^{n-1} (-1)^m i^n \left[ b_{n,m+1} - b_{n,m} \right] \\ \times \overleftarrow{\partial}{}_x^{n-m} \overrightarrow{\partial}{}_x^{m+1}
+ i \sum_{n=1} (-i)^{n} b_{n-1,n-1} \overrightarrow{\partial}{}_x^{n} .
\end{multline*}
</script></div>
<p>On the other hand, we can calculate the right hand side of Eq. \eqref{eq:diff-ops-cty} to be</p>
<div class="mathblock"><script type="math/tex; mode=display">
-i [T(i \overleftarrow{\partial_x} ) - T(-i \overrightarrow{\partial_x} ) ] = -i \sum_{n=1} a_n i^n \frac{\overleftarrow{\partial}{}_x^n}{n!} + i \sum_{n=1} a_n (-i)^n \frac{\overrightarrow{\partial}{}_x^n}{n!}.
</script></div>
<p>Equating the left and right sides, we can just read off that <span><script type="math/tex">b_{n-1,0} = a_n/n!</script></span>, <span><script type="math/tex">b_{n-1,n-1}= a_n/n!</script></span> and <span><script type="math/tex">b_{n,m+1} = b_{n,m}</script></span>, so that <span><script type="math/tex">b_{n,m} = a_{n+1}/(n+1)!</script></span>.</p>
<p>Thus, we have</p>
<div class="mathblock"><script type="math/tex; mode=display">
\vartheta = - \sum_{n=0} \sum_{m=0}^n (-1)^{m} i^n \frac{a_{n+1}}{(n+1)!} \overleftarrow{\partial}{}_x^{n-m} \overrightarrow{\partial}{}_x^m.
</script></div>
<p>Returning all the way to when we were considering <span><script type="math/tex">\dot x(t)</script></span> as an integral over position,
this suggests that in Eq. \eqref{eq:T-delta}, we want to consider</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{equation*}
T'(-i \partial_x)\delta(x-y) = \sum_{n=0} \frac{a_{n+1}}{n!} \frac1{n+1}\sum_{m=0}^n (-i\partial_x)^{n-m} (i \partial_y)^m \delta(x-y).
\end{equation*}
</script></div>
<p>Given the expression for total current Eq. \eqref{eq:total-current} and integrating the delta function by parts numerous times, we can replace <span><script type="math/tex">\partial_x</script></span> with <span><script type="math/tex">-\overleftarrow \partial_x</script></span> and <span><script type="math/tex">\partial_y</script></span> with <span><script type="math/tex">- \overrightarrow\partial_x</script></span>, and then the total current is just</p>
<div class="mathblock"><script type="math/tex; mode=display">
I = \int dx \, \psi^*(x,t) \sum_{n=0} \frac{a_{n+1}}{(n+1)!} \sum_{m=0}^n (i \overleftarrow \partial_x)^{n-m} (-i \overrightarrow \partial_x)^m \psi(x,t).
</script></div>
<p>This is precisely the integral of the current density! Thus, we have shown that</p>
<div class="mathblock"><script type="math/tex; mode=display">
j(x,t) = \psi^*(x,t) \sum_{n=0} \frac{a_{n+1}}{(n+1)!} \sum_{m=0}^n (i \overleftarrow \partial_x)^{n-m} (-i \overrightarrow \partial_x)^m \psi(x,t),
</script></div>
<p>and that</p>
<div class="mathblock"><script type="math/tex; mode=display">\langle \dot x(t) \rangle = \int dx \, j(x,t). </script></div>
<p>Indeed, <span><script type="math/tex">\dot x(t)</script></span> does track the current of the problem and can be written as the integral of a current density, even for the more general Hamiltonian <span><script type="math/tex">H = T(p) + V(x)</script></span>.</p>
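As a sanity check, here is a minimal numerical sketch (assuming ħ = 1, with assumed demo values for the wavepacket parameters k0 and σ): for T(p) = p²/2m, the current-density formula above reduces to the familiar j(x) = Im(ψ*∂ₓψ)/m, and the total current should come out to ⟨p⟩/m.

```python
import numpy as np

# For T(p) = p^2/(2m), only a_2 = 1/m is nonzero and the formula above
# reduces to j(x) = Im(psi* d_x psi)/m. Check that the total current
# equals <p>/m for a Gaussian wavepacket (hbar = 1; k0, sigma assumed).
m, k0, sigma = 1.0, 2.0, 1.0

x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]

# Gaussian wavepacket with mean momentum k0
psi = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize

dpsi = np.gradient(psi, dx)                   # central-difference derivative
j = np.imag(np.conj(psi) * dpsi) / m          # current density
I = np.sum(j) * dx                            # total current

print(I)  # should be close to <p>/m = k0/m = 2.0
```

The agreement is limited only by the finite-difference derivative, so on this grid it holds to a few parts in 10⁴.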
Tue, 14 Jan 2014 00:00:00 -0500
/2014/01/14/Current-in-single-particle-QM/
/2014/01/14/Current-in-single-particle-QM/The delta-function Potential Lattice<p>I was messing around with some simple problems and found this simple but illustrative one. It starts from basic quantum mechanics and, in a very straightforward way, builds toward a condensed matter perspective while showing some interesting effects that have experimental consequences (energy bands and band gaps opening).
<!--more--></p>
<p>We look at the relatively simple problem<label for="sn-id-galitski" class="margin-toggle sidenote-number"></label><input type="checkbox" id="sn-id-galitski" class="margin-toggle" /><span class="sidenote">This is problem 2.53 in <a href="http://www.amazon.com/Exploring-Quantum-Mechanics-Collection-Researchers/dp/0199232725"><em>Exploring Quantum Mechanics</em></a> by Galitski, Karnakov, Kogan, and Galitski. </span> of finding the energy spectrum for a particle in the lattice potential</p>
<div class="mathblock"><script type="math/tex; mode=display"> U(x) = \alpha\sum_{n=-\infty}^\infty \delta(x - n a).</script></div>
<p><label for="mf-id-deltafcnlattice" class="margin-toggle">⊕</label><input type="checkbox" id="mf-id-deltafcnlattice" class="margin-toggle" /><span class="marginnote"><img class="fullwidth" src="/assets/img/Delta-fcn-lattice.png" /><br />A visual representation of <em>U(x)</em>.</span></p>
<p>The time-independent Schrödinger equation takes the form</p>
<div class="mathblock"><script type="math/tex; mode=display"> \left[ - \frac{\partial_x^2}{ 2 m} + U(x) \right] \psi(x) = E \psi(x), </script></div>
<p>where <span><script type="math/tex">E</script></span> is the energy.</p>
<p>Since we can solve the problem between the delta-functions quite simply (<span><script type="math/tex">U(x) =0 </script></span> there), let us restrict our focus to <span><script type="math/tex">na \lt x \lt (n+1)a</script></span>. Here, the wave function takes on the form</p>
<div class="mathblock"><script type="math/tex; mode=display"> \psi(x) = A_n e^{ik(x-na)} + B_n e^{-ik(x-na)},</script></div>
<p>where we have <span><script type="math/tex">k = \sqrt{2m E}</script></span>. Now, we can find an operator that commutes with the Hamiltonian so that we can diagonlize it to help solve the problem — this will be the operator that translates us by <span><script type="math/tex">a</script></span>.</p>
<p>To make this clear, let us abstract things to operators so that we have a momentum operator <span><script type="math/tex"> p</script></span> and a position operator <span><script type="math/tex"> x</script></span>, then we have the commutator <span><script type="math/tex"> [ x, p] = i</script></span>. The operator <span><script type="math/tex"> p</script></span> commutes with functions of <span><script type="math/tex"> x</script></span> as though it were a derivative <span><script type="math/tex">[ p, f( x)] = -i f'( x)</script></span>, so considering the translation operator <span><script type="math/tex"> e^{i a p }</script></span>, we can write</p>
<div class="mathblock"><script type="math/tex; mode=display"> e^{i a p} U( x) e^{ - i a p} = \sum_{n=0}^\infty \frac1{n!} U^{(n)}( x) a^n = U( x + a),</script></div>
<p>and since <span><script type="math/tex">U(x)</script></span> is periodic in <span><script type="math/tex">a</script></span>, we have that <span><script type="math/tex">e^{i a p} U( x) e^{ - i a p} = U( x)</script></span>. Thus, the operator <span><script type="math/tex"> T_a = e^{i a p}</script></span> commutes with the Hamiltonian and we can simultaneously diagonalize both it and the Hamiltonian. We say a state with <span><script type="math/tex"> T_a \lvert \psi\rangle = e^{i a q} \lvert \psi \rangle </script></span> has <em>quasi-momentum</em> <span><script type="math/tex">q</script></span>. It is important to note that this is not the same as real momentum, which is not a well-defined quantum number in this problem (that would require full translational symmetry).</p>
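The conjugation identity above is just the operator form of Taylor's theorem. A quick numerical sketch checks the truncated series Σₙ aⁿ f⁽ⁿ⁾(x)/n! against f(x + a); the periodic stand-in f(x) = cos(2πx) and the sample values of x and a are assumptions for illustration.

```python
import math
import numpy as np

x0, a = 0.3, 0.1        # assumed sample point and shift
w = 2 * np.pi           # assumed stand-in f(x) = cos(w x), period 1

# Derivatives of cos are known in closed form: f^(n)(x) = w^n cos(w x + n*pi/2),
# so the truncated operator series sum_n a^n f^(n)(x)/n! is easy to build.
shift = sum(a**n * w**n * np.cos(w * x0 + n * np.pi / 2) / math.factorial(n)
            for n in range(20))
exact = np.cos(w * (x0 + a))
print(shift, exact)
```

With twenty terms the truncation error is far below machine precision for these values, so the two numbers agree to all printed digits.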
<p>In other words, we can write our eigenfunctions such that <span><script type="math/tex"> \psi(x+a) = e^{i q a} \psi(x)</script></span>, and this naturally leads us to relate the coefficients for our eigenfunctions above as</p>
<div class="mathblock"><script type="math/tex; mode=display"> A_{n-1} = e^{-i q a} A_n, \quad B_{n-1} = e^{-i q a} B_n. </script></div>
<p>Now, we can apply matching conditions at <span><script type="math/tex"> x = na</script></span> remembering that our wavefunction should be continuous, but that the delta-function will cause the first derivatives to be discontinuous. The equations for the coefficients are</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{align*}
A_n + B_n & = e^{i k a} A_{n-1} + e^{-i k a} B_{n-1}, \\
\left( 1 + \frac{2 i m \alpha}{k} \right) A_n - \left( 1 - \frac{2 i m \alpha}k \right) B_n & = e^{i k a} A_{n-1} - e^{-ik a} B_{n-1},
\end{align*}
</script></div>
<p>and if we insert the relation between the coefficients at <span><script type="math/tex">n-1</script></span> and <span><script type="math/tex">n</script></span>, we get a matrix equation</p>
<div class="mathblock"><script type="math/tex; mode=display">
\begin{align*}
\begin{pmatrix}
e^{i q a} - e^{i k a} & e^{i q a} - e^{- i k a} \\
e^{i q a} \left(1 + \tfrac{2 i m \alpha}k \right) - e^{i k a} & - e^{i q a } \left( 1 - \tfrac{2 i m \alpha}k \right) + e^{-ik a}
\end{pmatrix}
\begin{pmatrix}
A_n \\ B_n
\end{pmatrix} = 0.
\end{align*}
</script></div>
<p>This equation has a non-zero solution only if the determinant of the matrix is zero which can be written as</p>
<div class="mathblock"><script type="math/tex; mode=display">
\cos q a - f(E) = 0 , \quad f(E) = \cos ka + \frac{m \alpha}{k} \sin ka.
</script></div>
<p>This is an equation which relates the energy to the quasi-momentum. Since <span><script type="math/tex">\cos q a</script></span> can only be between -1 and 1, this equation only has a solution when <span><script type="math/tex">f(E)</script></span> is between -1 and 1. The oscillatory nature of <span><script type="math/tex">f(E)</script></span> means that it passes through this range multiple times (in fact, a countably infinite number of times).</p>
<p>Armed with this equation, we can use <span><script type="math/tex"> q</script></span> and an integer to label our energies and we obtain the following energy bands by just solving for <span><script type="math/tex">E</script></span> (setting <span><script type="math/tex">m = 10</script></span>, <span><script type="math/tex"> a = 1</script></span>, and <span><script type="math/tex"> \alpha = 0.3 </script></span>)</p>
<p><label for="mf-id-bands" class="margin-toggle">⊕</label><input type="checkbox" id="mf-id-bands" class="margin-toggle" /><span class="marginnote"><img class="fullwidth" src="/assets/img/Bands-delta-fcn-lattice.png" /><br />Energy bands in the delta-function lattice.</span></p>
<p>The dotted lines in this plot represent the spectrum if there were no delta-function potentials (displaced in energy by <span><script type="math/tex"> \alpha / a </script></span> for clarity). Notice how the introduction of the delta-functions opens up gaps in this energy spectrum, so that there are some energies that are inaccessible. The gaps open up precisely where <span><script type="math/tex">\lvert f(E)\rvert \gt 1</script></span>, since there is no solution to our equation there. For small <span><script type="math/tex">\alpha</script></span>, this energy gap goes like <span><script type="math/tex">\alpha/a</script></span>, vanishing as we’d expect when <span><script type="math/tex">\alpha = 0</script></span>.</p>
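The band condition can be solved numerically by scanning energies and keeping those with |f(E)| ≤ 1; a minimal sketch (assuming ħ = 1 and the parameters above) recovers q(E) = arccos(f(E))/a on the allowed bands:

```python
import numpy as np

# Band condition cos(qa) = f(E): allowed energies have |f(E)| <= 1.
# hbar = 1; parameters as in the post.
m, a, alpha = 10.0, 1.0, 0.3

def f(E):
    """f(E) = cos(ka) + (m*alpha/k) sin(ka), with k = sqrt(2 m E)."""
    k = np.sqrt(2 * m * E)
    return np.cos(k * a) + (m * alpha / k) * np.sin(k * a)

E = np.linspace(1e-6, 30.0, 200000)
fE = f(E)
allowed = np.abs(fE) <= 1                               # inside a band
q = np.where(allowed, np.arccos(np.clip(fE, -1, 1)) / a, np.nan)

# Count the bands as contiguous runs of allowed energies
n_bands = np.count_nonzero(np.diff(allowed.astype(int)) == 1)
print(n_bands)
```

Plotting `E` against `±q` (and `2π/a − q`, etc.) then reproduces the band diagram in the figure.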
<p>Notice also that the spectrum is symmetric under <span><script type="math/tex">q \rightarrow -q</script></span>: states at <span><script type="math/tex">q</script></span> and <span><script type="math/tex">-q</script></span> have the same energy.</p>
<p>Additionally, if the energy ever goes negative, the solutions turn from plane waves <span><script type="math/tex">e^{\pm i k (x-na)}</script></span> into functions localized around the delta functions, <span><script type="math/tex">e^{\pm\kappa(x-na)}</script></span> with <span><script type="math/tex">\kappa = \sqrt{-2mE}</script></span>. In fact, the wave functions look like this for a <span><script type="math/tex">q=0</script></span> state:</p>
<p><label for="mf-id-wfcn" class="margin-toggle">⊕</label><input type="checkbox" id="mf-id-wfcn" class="margin-toggle" /><span class="marginnote"><img class="fullwidth" src="/assets/img/Localized-states-delta-fcn-lattice.png" /><br />Negative energy localized states in delta-function lattice with alpha less than 0.</span></p>
<p>These only appear when <span><script type="math/tex">\alpha \lt 0</script></span>, and are related to the fact that a single attractive delta-function potential has a bound state. Additionally, only one band can ever contain these states, because the oscillatory sine and cosine change to their non-oscillatory hyperbolic counterparts. However, these states in the delta-function lattice are spread throughout the crystal and cannot be said to be “localized” to a specific site — they still all have a definite quasi-momentum.</p>
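For E &lt; 0 the substitution k = iκ turns f(E) into its hyperbolic form; a sketch (assuming ħ = 1, with α = −0.3 as an assumed attractive example and the other parameters as above) confirms that exactly one negative-energy band appears:

```python
import numpy as np

# hbar = 1; attractive delta functions (alpha < 0 is an assumed example value)
m, a, alpha = 10.0, 1.0, -0.3

def f_neg(E):
    """f(E) for E < 0: k -> i*kappa turns cos, sin into cosh, sinh."""
    kappa = np.sqrt(-2 * m * E)
    return np.cosh(kappa * a) + (m * alpha / kappa) * np.sinh(kappa * a)

E = np.linspace(-5.0, -1e-6, 200000)
allowed = np.abs(f_neg(E)) <= 1

# Count contiguous runs of allowed negative energies
n_bands = np.count_nonzero(np.diff(allowed.astype(int)) == 1)
print(n_bands)  # a single negative-energy band
```

The band sits near the single-well bound-state energy −mα²/2, consistent with the picture of weakly coupled bound states.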
<p>As with other single particle problems, upon considering the many particle picture, these energy bands get filled up to a set energy level (if we are considering fermions).</p>
Fri, 14 Jun 2013 00:00:00 -0400
/2013/06/14/delta-function-lattice/
/2013/06/14/delta-function-lattice/Use GmailTeX to compose and view emails with LaTeX<p>I’ve received numerous emails with pseudo-LaTeX in them, and I’ve composed many emails as well. My normal solution is to convert LaTeX to unicode with the helpful Mac application <a href="http://www.svenkreiss.com/UnicodeIt">Unicodeit</a>. However, this method is incomplete and misses some of the more complicated LaTeX.
<!--more--></p>
<p>To address this, there is an extension to gmail called <a href="http://alexeev.org/gmailtex.html">GmailTeX</a> available on Chrome, Firefox, Safari, and Opera (also it has a bookmarklet for any other browser).</p>
<p>If someone sends you an email in pseudo-LaTeX, the extension’s <strong>simple math</strong> function can try to parse the math; the resulting output is similar to what you’d get from Unicodeit. Or, if someone sends you LaTeX inside dollar signs (i.e., $ […] $), its <strong>rich math</strong> function can render that LaTeX for you.</p>
<p>But best of all, when you compose emails and use the <strong>rich math</strong> function, it creates an image of your math hosted on a remote server. Recipients don’t need to have GmailTeX installed unless you decide to send them the code itself.</p>
<p>The best collaborative apps require little to no commitment for collaborators to use, and this is one of those cases. Collaborators will just receive email with LaTeX images without needing to install anything.</p>
<p>The only issue that I’ve found from playing around with this extension is that the receiver may have to flag your email as “trustworthy” and/or allow images from remote servers to be viewed in their email client. Otherwise, they may not see any mathematics. Additionally, as you might have guessed, the emails won’t be viewable in an offline mode. Unless you’re going through these emails on an airplane though, I don’t think that should be too big of an issue.</p>
<p>(h/t Brian Danielak)</p>
Thu, 16 May 2013 00:00:00 -0400
/2013/05/16/gmailtex/
/2013/05/16/gmailtex/Spin-orbit coupled Hamiltonian<p>For applications in many parts of condensed matter physics and cold atom physics, we use what is known as the Rashba spin-orbit coupled Hamiltonian. This Hamiltonian is so-named because it couples momentum <span><script type="math/tex">\mathbf{p}</script></span>
to the spin <span><script type="math/tex">\mathbf{S}=\frac12\sigma</script></span> where <span><script type="math/tex">\sigma = (\sigma_x,\sigma_y,\sigma_z)</script></span> are the Pauli matrices and <span><script type="math/tex">\mathbf{p}=(p_x,p_y,p_z)</script></span> is a vector of momentum operators:
<!--more--></p>
<div class="mathblock"><script type="math/tex; mode=display"> H = \frac{p^2}{2m} + \alpha (\boldsymbol{\sigma} \times \mathbf{p})\cdot \hat{\mathbf{z}} + \Delta \sigma_z. </script></div>
<p><span><script type="math/tex">m</script></span> is the mass, <span><script type="math/tex">\alpha</script></span> is the spin-orbit coupling strength, and <span><script type="math/tex">\Delta</script></span> is some Zeeman field (it acts as magnetic field on the spin).</p>
<p>In this post, we go through the calculation of the energy spectrum and eigenvectors – a straightforward exercise in undergraduate linear algebra.</p>
<p>First of all, instead of the normal method of finding eigenvectors, we note that we can rewrite this Hamiltonian in the form</p>
<div class="mathblock"><script type="math/tex; mode=display"> H = \frac{p^2}{2m} + \mathbf{b}(p) \cdot \boldsymbol{\sigma} </script></div>
<p>where <span><script type="math/tex">\mathbf{b}(p) = (\alpha p_y, -\alpha p_x, \Delta)</script></span>. Now, <span><script type="math/tex">\mathbf{b}(p)</script></span> represents a point on the Bloch sphere, and so we expect the eigenvectors to be parallel and anti-parallel to this vector. The energies in this case are straightforward: the kinetic term shifted by plus or minus <span><script type="math/tex">\lvert\mathbf{b}(p)\rvert</script></span>:</p>
<div class="mathblock"><script type="math/tex; mode=display"> \epsilon_\pm(p) = \frac{p^2}{2m} \pm \sqrt{ \alpha^2 p^2 + \Delta^2}. </script></div>
<p>With these eigenvalues, it is a straightforward exercise in linear algebra to find the eigenvectors. After a bit of algebra, the eigenvectors of <span><script type="math/tex">H</script></span> in terms of the eigenvectors of <span><script type="math/tex">\sigma_z</script></span> ( <span><script type="math/tex">\sigma_z\left\lvert\uparrow\right\rangle = \left\lvert\uparrow\right\rangle</script></span> and <span><script type="math/tex">\sigma_z\left\lvert\downarrow\right\rangle = -\left\lvert\downarrow\right\rangle</script></span> ) are</p>
<div class="mathblock"><script type="math/tex; mode=display">\left\lvert\pm\right\rangle = \frac1{\sqrt2}\left[\sqrt{1 \pm \frac{\Delta}{\sqrt{\Delta^2+\alpha^2 p^2}}}\left\lvert\uparrow\right\rangle + e^{-i\phi} \sqrt{1 \mp \frac{\Delta}{\sqrt{\Delta^2+\alpha^2 p^2}}}\left\lvert\downarrow\right\rangle \right]</script></div>
<p>where we have defined <span><script type="math/tex">\phi</script></span> by <span><script type="math/tex">p_y+ip_x = p e^{i\phi}</script></span>. Note that when <span><script type="math/tex">p_{x,y} \rightarrow -p_{x,y}</script></span>, the occupations stay the same. However, if we just look at one energy, <span><script type="math/tex">\epsilon_-(p)</script></span> the ground state energy, we see that the state we get when <span><script type="math/tex">p_{x,y} \rightarrow -p_{x,y}</script></span> is almost orthogonal to the original state.</p>
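As a quick check, one can diagonalize H at a sample momentum and compare against ε±(p); a sketch using the post's example parameters (ħ = 1; the sample values of pₓ and p_y are assumptions):

```python
import numpy as np

m, alpha, Delta = 1.0, 3.0, 2.0      # example parameters from the post
px, py = 0.7, -1.2                   # assumed sample momentum
p2 = px**2 + py**2

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# H = p^2/2m + b(p).sigma with b(p) = (alpha*p_y, -alpha*p_x, Delta)
H = p2 / (2 * m) * np.eye(2) + alpha * py * sx - alpha * px * sy + Delta * sz

evals = np.linalg.eigvalsh(H)        # ascending order
expected = p2 / (2 * m) + np.array([-1.0, 1.0]) * np.sqrt(alpha**2 * p2 + Delta**2)
print(evals, expected)
```

The two arrays agree to machine precision, confirming the ε±(p) formula above.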
<p><label for="mf-rashba" class="margin-toggle">⊕</label><input type="checkbox" id="mf-rashba" class="margin-toggle" /><span class="marginnote"><img class="fullwidth" src="/assets/img/Energy-splitting-rashba-so.png" /><br /></span>
The energy bands themselves look like the figure on the right where the vertical axis is energy (and for this particular example, <span><script type="math/tex">m=1</script></span>, <span><script type="math/tex">\alpha = 3</script></span>, and <span><script type="math/tex">\Delta=2</script></span>). Interestingly, the introduction of <span><script type="math/tex">\Delta</script></span> actually causes the gap to open up – the dotted lines are for when <span><script type="math/tex">\Delta=0</script></span>.</p>
<p>Now, if we have a bunch of fermions filling up these energies and we set the chemical potential to be in the gap, we would find that the only excitations are states that are spin-locked to the momentum.</p>
<p>Many things can be done with this Hamiltonian to interesting effect. It finds its way into <a href="http://arxiv.org/abs/1102.3945">cold atom physics</a> as well as <a href="http://books.google.com/books?hl=en&lr=&id=LQhcSCuzC3IC">condensed matter</a>.</p>
Sun, 28 Apr 2013 00:00:00 -0400
/2013/04/28/rashba-spin-orbit-coupling/
/2013/04/28/rashba-spin-orbit-coupling/