Notes for lecture 04a

Christian Maier


We continue with the situation of last time, where we had a single-particle potential, so that the self energy obtained from the Dyson equation is exact. Here we actually try to solve the Dyson equation in steady state, to see what happens and where the tricky parts are.

We start with the simplest Hamiltonian you can think of, a Hamiltonian without interaction. The unperturbed Hamiltonian in terms of non-interacting modes is \[H_0=\sum_p\epsilon_p c^\dagger_p c_p\] where \(p\) are some quantum numbers. We talk about fermions, but this can easily be extended to the case of bosons. The perturbation introduces the time dependence. We want to reach a steady state, so the time-dependent Hamiltonian must be constant from a certain point on. We can still have non-equilibrium if the perturbation term is switched on suddenly. We write it in this way: \[\hat{V}=\Theta(t-t_0)\sum_p \mathcal{V}_p c_p^\dagger c_p\] We switch on the perturbation at some time \(t_0\). For now it does not mix the modes, it just shifts the energies. Later we will try a more complicated perturbation.

A steady state occurs at times \(t\gg t_0\). Physically we expect some transient dynamics around \(t_0\), which is damped after some time, so that the system goes over to a stationary solution, i.e. one independent of time. Then the Green’s functions also become time independent and we can use the Fourier transform far away from \(t_0\), so we can work in frequency space. The self energy is just given by this \(\mathcal{V}_p\), according to the discussion we had last time. The self energy, which is formally a function of \(\omega\) and carries the index \(p\), is

\[\underline{\Sigma}_p(\omega)=\mathcal{V}_p\,I_{2\times 2}\]

In Keldysh space this is just a 2-by-2 matrix. The Dyson equation is \[\underline G=\underline G_0+\underline G_0*\underline \Sigma*\underline G\]

The solution is \[\begin{aligned} \underline G^{-1}=\underline G_0^{-1}-\underline \Sigma\,. \end{aligned}\] We’ll leave out the underline when the meaning of the symbols is clear.

In general, \(\underline\Sigma\) would be a function of \(p\) and \(p'\). In our case, however, it is diagonal in \(p,p'\): \[\Sigma_{pp'}=\delta_{pp'}\Sigma_p\] The same then goes for the Green’s function, so the only object we have to deal with is a 2-by-2 matrix in Keldysh space.

In the Fourier transform, the convolutions become simple products, so what remains to be done is to perform the inversion. And we only have to invert a 2-by-2 matrix, which is quite simple.

Still, this inversion of the 2-by-2 matrix can be done numerically. But there are also some tricks. We write the expression for the Green’s function explicitly, using a special notation to reduce the number of indices: \[\underline g := \underline G_0=\left(\ \begin{array}{c|c} g_r & g_K \\ \hline & g_a \end{array}\right)\] Last time we already talked about the inversion of this: \[\underline g^{-1}=\left(\ \begin{array}{c|c} g_r^{-1} & -g_r^{-1}g_K g_a^{-1} \\ \hline & g_a^{-1} \end{array}\right)\] Note that the Keldysh component is a bit more complicated than the other two. We’ll write the Keldysh component down more explicitly: \[-g_r^{-1}\color{blue}g_K\color{black}g_a^{-1}= -(\omega-\epsilon_p+i\delta) \color{blue} \underbrace{2\pi i\delta(\omega-\epsilon_p) S(\omega)}_\textrm{Keldysh part} \color{black} (\omega-\epsilon_p-i\delta)\] where \(S(\omega)\) describes the occupation. The delta function in the Keldysh part forces \(\omega=\epsilon_p\), so the outer factors reduce to \(\pm i\delta\) and the whole expression effectively vanishes: \[=0\] In other steady-state examples we will see that this is very often the case.
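As a quick numerical sanity check (the complex numbers below are arbitrary values of my own, standing in for the components at one fixed frequency), one can verify the inversion formula for the upper-triangular Keldysh matrix directly:

```python
import numpy as np

# Arbitrary complex values standing in for g_r, g_K, g_a at one frequency.
g_r, g_K, g_a = 0.3 - 0.4j, 1.2j, 0.3 + 0.4j

g = np.array([[g_r, g_K],
              [0.0, g_a]])

# Claimed inverse: again upper triangular, with Keldysh block -g_r^{-1} g_K g_a^{-1}.
g_inv = np.array([[1.0 / g_r, -(1.0 / g_r) * g_K * (1.0 / g_a)],
                  [0.0,       1.0 / g_a]])

assert np.allclose(g @ g_inv, np.eye(2))
assert np.allclose(g_inv @ g, np.eye(2))
```

The triangular structure is what makes this work: the inverse is again upper triangular, and only the Keldysh block mixes components.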

However, this problem is ill posed. This Hamiltonian describes a set of modes that do not interact with each other. That means there is also no dissipation: if a mode is occupied, there is no way for a particle to leave it. To even reach a steady state, you need some kind of dissipation mechanism. Formally, the \(\delta\) provides exactly that: we keep it finite during the calculation and let it go to zero only at the end. We will see later how this works. We also need to remember that the delta function in the Keldysh part itself comes from the imaginary part of something like \(\pm i\delta\). So we should treat this delta function as the \(\delta\to 0\) limit of a peak of finite width \(\delta\), a number that we allow to become small only at the end. Furthermore, we use a new notation from here on: \[\underline g^{-1}=\left(\ \begin{array}{c|c} (g^{-1})_r & (g^{-1})_K \\ \hline & (g^{-1})_a \end{array}\right)\]

It is better to work with the general expression for \(g_K\), which we found last time for the non-interacting system: \[g_K=(g_r-g_a)S(\omega)\] If we plug this into the expression for the Keldysh component of the inverse, we get \[(g^{-1})_K=-g_r^{-1}g_K g_a^{-1}\] \[=-g_r^{-1}(g_r-g_a)g_a^{-1}S(\omega)\] \[=\pqty{g_r^{-1}-g_a^{-1}}S(\omega)\] This is an interesting result, as the Keldysh component of the inverse now has the same structure as the original Keldysh component. This is a generic property, which is also valid in equilibrium, as we’ll see later. Plugging this into the solution of the Dyson equation above, we get \[\underline G=\pqty{g^{-1}-\Sigma}^{-1}\] \[=\left( \begin{array}{c|c} g_r^{-1} - \mathcal{V}_p & (g^{-1})_K \\ \hline & g_a^{-1} - \mathcal{V}_p \end{array}\right)^{-1}\]
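The algebra in the last step can be spot-checked numerically (the values are again arbitrary choices of mine at one fixed frequency):

```python
# Spot check of (g^{-1})_K = (g_r^{-1} - g_a^{-1}) S for g_K = (g_r - g_a) S.
g_r, g_a, S = 0.2 - 0.7j, 0.2 + 0.7j, 0.5   # arbitrary values at one frequency

g_K = (g_r - g_a) * S
lhs = -(1.0 / g_r) * g_K * (1.0 / g_a)      # Keldysh component of the inverse
rhs = (1.0 / g_r - 1.0 / g_a) * S
assert abs(lhs - rhs) < 1e-12
```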

We write the components explicitly again. The retarded component is \[G_r=(g_r^{-1} - \mathcal{V}_p)^{-1}\] \[=(\omega-\epsilon_p-\mathcal{V}_p+i\delta)^{-1}\] We see that all the self energy does here is shift the energy \(\epsilon_p\) by \(\mathcal{V}_p\). For the advanced component we have \(G_a=G_r^*\). To determine the Keldysh component, we’ll look at the matrix once more: \[\underline G=\left( \begin{array}{c|c} G_r^{-1} & (G^{-1})_K \\ \hline & G_a^{-1} \end{array}\right)^{-1}= \left( \begin{array}{c|c} G_r & G_K \\ \hline & G_a \end{array}\right)\] \[G_K=-G_r (G^{-1})_K G_a\] \[=-G_r(g_r^{-1}-g_a^{-1})G_a S(\omega)\] Now we calculate the difference, which is simply \(2i\delta\): \[g_r^{-1}-g_a^{-1}=G_r^{-1}-G_a^{-1}\color{gray}=2i\delta\color{black}\] and plug it into the previous equation to get \[G_K=-G_r \pqty{G_r^{-1}-G_a^{-1}} G_a S(\omega)\] \[=(G_r - G_a) S(\omega)\] As a reminder: \[S(\omega)=1-2f_F(\omega-\mu)\] This tells us that the new retarded Green’s function is simply that of free particles with energies \(\epsilon_p + \mathcal{V}_p\), and similarly for \(G_a\). For the Keldysh component it tells us that \(G_K\) is that of free particles with the same shifted energies and with an unchanged chemical potential.
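The closed-form components can be cross-checked against a brute-force matrix inversion of the Dyson equation (the frequency, energies, broadening, and occupation below are arbitrary parameter choices of mine):

```python
import numpy as np

# Solve the Dyson equation at one frequency by direct 2x2 inversion and
# compare with the closed-form components derived above.
omega, eps, V, delta, S = 0.7, 0.2, 0.3, 1e-3, 0.5

g_r = 1.0 / (omega - eps + 1j * delta)
g_a = 1.0 / (omega - eps - 1j * delta)
g_K = (g_r - g_a) * S
g = np.array([[g_r, g_K],
              [0.0, g_a]])

Sigma = V * np.eye(2)                        # self energy V_p * I_{2x2}
G = np.linalg.inv(np.linalg.inv(g) - Sigma)  # G = (g^{-1} - Sigma)^{-1}

G_r = 1.0 / (omega - eps - V + 1j * delta)   # closed form: shifted energy
G_a = np.conj(G_r)
assert np.allclose(G[0, 0], G_r)
assert np.allclose(G[1, 1], G_a)
assert np.allclose(G[0, 1], (G_r - G_a) * S)  # Keldysh component
```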

Evaluate the particle number

Let’s see if this result is reasonable and what it means. In order to see this, we’ll evaluate the particle number, i.e. the occupation. Remember that \[G_K = G^> + G^<\] and we now evaluate this at equal times, \(t=0\): \[G_{Kpp'}(t=0)=-i\ev{c_pc^\dagger_{p'}-c^\dagger_{p'}c_p}\] \[=2i\ev{c^\dagger_{p'}c_p}-i\delta_{pp'}\] where in the second line we used the anticommutation relation \(\{c_p,c^\dagger_{p'}\}=\delta_{pp'}\). We can rearrange the last equation as \[\ev{c^\dagger_{p'}c_p}= -\frac{i}{2}G_{Kpp'}(t=0)+\frac{1}{2}\delta_{pp'}\] Now we do the Fourier transform1: \[=-\frac{i}{2}\int\frac{1}{2\pi} G_{Kpp'}(\omega)\dd{\omega}+\frac{1}{2}\delta_{pp'}\] This is a relation that is useful in general. Note: it might seem that an \(e^{-i\omega t}\) factor is missing in the Fourier transform, but that is simply due to \(t=0\), where the factor \(e^{-i\omega t}=1\) drops out.
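A quick consistency check of this equal-time relation (using the standard conventions \(G^>=-i\ev{cc^\dagger}\), \(G^<=i\ev{c^\dagger c}\); the occupation value is my own arbitrary choice): for a single free mode with occupation \(n\), the equal-time Keldysh function is \(G_K(0)=-i(1-2n)\), and the formula should return \(n\) back.

```python
# Single free mode with occupation n: G_K(t=0) = -i * (1 - 2n).
n = 0.37                       # arbitrary occupation between 0 and 1
G_K0 = -1j * (1.0 - 2.0 * n)

# The diagonal (p = p') version of the relation derived above.
occ = (-1j / 2.0) * G_K0 + 0.5
assert abs(occ - n) < 1e-12
```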

For our case

For our case we have only the diagonal term, which is \[\ev{c^\dagger_p c_p}=-\frac{i}{2}\int\frac{\dd{\omega}}{2\pi} (G_r-G_a)S(\omega)+\frac{1}{2}\,.\] The difference \(G_r-G_a\) can also be written as \[G_r-G_a=G_r-G_r^*\] \[=2i\Im\frac{1}{\omega-\epsilon_p-\mathcal{V}_p+i\delta}\] and then, by using the Cauchy relation, \[=-2\pi i \delta(\omega-\epsilon_p-\mathcal{V}_p)\,.\] Putting everything together, we get for the occupation \[\ev{c^\dagger_p c_p}=\frac{1}{2}\pqty{1-S(\epsilon_p+\mathcal{V}_p)}\] \[=f_F(\epsilon_p+\mathcal{V}_p-\mu)\,.\]

At first sight this looks like a reasonable result, but it is not what it should be. We’ll illustrate this via the following diagrams, representing the occupation.

Occupation of states: result vs. expectation

Let us assume for example \(\mathcal{V}_p=\mathrm{const.}=\mathcal{V}>0\). The situation for \(t < t_0\) is shown in picture a). Picture b) shows the situation for \(t > t_0\): all the states shift up by a constant, and the occupation follows the shifted levels. This is not what we expect, because there is no Hamiltonian term mixing the different modes. What we would expect is shown in picture c), where states that were occupied before the perturbation are still occupied afterwards.

Why is this happening?

Where is the problem? The problem is that without dissipation, there is no steady state. When we used the Fourier transform, we made the mistake of simply assuming that a steady state even exists. But without dissipation it cannot.

We took the limit \(\delta\rightarrow 0\), but there are actually two limits involved. To reach a steady state, we have to wait long enough for the effect of the perturbation to settle. So what we effectively did in order for this to work was to take the limit \(t\rightarrow\infty\) first and \(\delta\rightarrow 0\) second. These two limits are not interchangeable. A finite value of \(\delta\) causes dissipation, i.e. the system is not isolated but coupled to a bath. This means we introduce an artificial dissipation. On the other hand, we have no choice: without it, there could not be a steady state.
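The order of the limits can be illustrated with a toy relaxation model (the exponential form below is my own cartoon of the dissipation, not derived in the notes): with a finite \(\delta\), the occupation relaxes toward the new equilibrium on a timescale \(\sim 1/(2\delta)\), so \(\delta\rightarrow 0\) at fixed \(t\) freezes the old occupation, while \(t\rightarrow\infty\) at fixed \(\delta\) gives the shifted-level result.

```python
import math

# Toy model: occupation relaxes from f_old to f_new at rate 2*delta.
f_old, f_new, t0 = 0.9, 0.1, 0.0  # arbitrary illustrative values

def n(t, delta):
    return f_new + (f_old - f_new) * math.exp(-2.0 * delta * (t - t0))

# delta -> 0 first, then t large: nothing ever relaxes, old occupation survives.
assert abs(n(1e9, 1e-15) - f_old) < 1e-3
# t -> infinity first at fixed delta: the shifted-level result of the calculation.
assert abs(n(1e9, 1e-3) - f_new) < 1e-9
```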

What does it mean to have dissipation? It means to have a coupling to a bath. Then the results that we obtained for \(t>t_0\), with the level shift, are correct.

Artificial dissipation via heat bath

What happens is that all the levels pushed up here, above the chemical potential, lose their particles to the heat bath.


  1. What happens here is that we replace the function in the time domain by the corresponding Fourier integral, which contains the function in the frequency domain.↩︎