Notes for lecture 03

Christian Maier

Now we want to find explicit expressions for the building blocks of the Feynman diagrams.

Non-interacting Green’s function

We derive it for a form of the Hamiltonian that, in the non-interacting case, one can always obtain via a normal-mode transformation. We assume a discrete set of modes.

\[H_0=\sum_p\epsilon_p c^\dagger_p c_p\]

for some set of quantum numbers \(p\), e.g. momentum. We assume here that \(p\) comprises all relevant quantum numbers. For a non-interacting system it is always possible to write the Hamiltonian in this form.

We need the time evolution of the operators in the Heisenberg representation under this Hamiltonian.

\[\dv{c_p^\dagger}{t}=i\comm{H_0}{c_p^\dagger}=i\epsilon_p c_p^\dagger\]

(Exercise 3.1: prove this)
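One convenient starting point is the commutator identity, valid for bosons and fermions alike,

\[\comm{c^\dagger_{p'} c_{p'}}{c^\dagger_p}=\delta_{pp'}\,c^\dagger_p\,,\]

from which \(\comm{H_0}{c^\dagger_p}=\epsilon_p c^\dagger_p\) follows by summing over modes.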

We can formally solve this. The solution is a simple exponential.

\[c_p^\dagger(t)=e^{i\epsilon_p t}c_p^\dagger(0)\]

\[c_p(t)=\pqty{c_p^\dagger(t)}^\dagger=e^{-i\epsilon_p t}c_p(0)\]

Retarded Green’s function

From this we can start writing the retarded Green’s function

\[\begin{aligned} G_0^r(p,t)&=-i\theta(t)\ev{\comm{c_p(t)}{c^\dagger_p(0)}_\epsilon} \\&=-i\theta(t)e^{-i\epsilon_p t}\ev{\comm{c_p}{c^\dagger_p}_\epsilon} \\&=-i\theta(t)e^{-i\epsilon_p t} \end{aligned}\]

The GF depends on only one quantum number and one time. A dependence on two separate times will only be introduced by the perturbation, which we treat later. Since we have time-translation invariance, we can apply a Fourier transform.

\[G_0^r(p,\omega)=-i\int\dd{t}e^{i(\omega-\epsilon_p+i\delta)t} \theta(t)=\frac{1}{\omega-\epsilon_p+i\delta}\]
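Writing the integral out explicitly, the step function restricts it to \(t>0\) and the convergence factor kills the upper boundary term:

\[-i\int_0^\infty\dd{t}e^{i(\omega-\epsilon_p+i\delta)t} =-i\,\frac{e^{i(\omega-\epsilon_p+i\delta)t}}{i(\omega-\epsilon_p+i\delta)} \Bigg|_0^\infty =\frac{1}{\omega-\epsilon_p+i\delta}\]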

There is a convergence factor \(\delta = 0^+\). The other building block we need is the advanced GF.

\[G_0^a(p,t)=i\theta(-t)e^{-i\epsilon_p t}=G_0^r(p,-t)^*\]

If we write it out in frequency space:

\[G_0^a(p,\omega)=\frac{1}{\omega-\epsilon_p-i\delta}= G_0^r(p,\omega)^*\]

This relation between the advanced and retarded GF can be made more general, e.g. for a Hamiltonian which is non-diagonal, like this:

\[H_0=\sum_{pp'}h_{pp'}c^\dagger_p c_{p'}\]

Exercises

Greater and lesser GF

\[\begin{aligned} G_0^>(p\,t_A, p'\,t_B)&=-i\ev{c_p(t_A)c^\dagger_{p'}(t_B)} =\delta_{pp'}\, \mathcal{G}_0^>(p,t_A-t_B)\,, \\ \mathcal{G}_0^>(p,t_A-t_B)&=-ie^{-i\epsilon_p(t_A-t_B)}\ev{c_p c_p^\dagger} =-ie^{-i\epsilon_p(t_A-t_B)}\,\bar n_p\,, \end{aligned}\] with \(\bar n_p=1-\epsilon n_p\), the occupation \(n_p=\ev{c^\dagger_p c_p}\), and \(\epsilon= \begin{cases}1&\textrm{fermions}\\-1&\textrm{bosons}\end{cases}\).

Proof: \[1=\comm{c}{c^\dagger}_{\epsilon}=cc^\dagger+\epsilon c^\dagger c\] \[cc^\dagger = 1 - \epsilon c^\dagger c = 1 - \epsilon n\]

In equilibrium the distribution function is

\[n_p=\pqty{e^{\beta\pqty{\epsilon_p-\mu}}+\epsilon}^{-1}\,.\]

So if we Fourier transform this, we get:

\[G_0^>(p,\omega)=\int\dd{t}e^{i\omega t}G_0^>(p,t) =-2\pi i\,\delta(\omega - \epsilon_p)\,\bar{n}_p\]

The minus sign follows from \(\int\dd{t}e^{i(\omega-\epsilon_p)t}=2\pi\delta(\omega-\epsilon_p)\) together with the prefactor \(-i\bar n_p\) of \(G_0^>(p,t)\).

\[G_0^<(p,t)=i\epsilon\ev{c^\dagger_p(0) c_p(t)}= \dots=i\epsilon e^{-i\epsilon_p t}n_p\]

Exercise 3.4: fill in the omitted steps.

In Fourier space we have

\[G_0^<(p,\omega)=2\pi i \epsilon \delta(\omega-\epsilon_p)n_p\]

Keldysh Green’s function

This is the sum of the greater and lesser GF.

\[\begin{aligned} G_0^K(p,t)&=G_0^>(p,t)+G_0^<(p,t) \\&=-ie^{-i\epsilon_p t}\pqty{\bar n_p - \epsilon n_p} \\&=-ie^{-i\epsilon_p t}(1-2\epsilon n_p)\end{aligned}\]

Since only the factor at the end is different, the Fourier transform is easy to do:

\[G^K_0(p,\omega)=-2\pi i \delta(\omega-\epsilon_p)(1-2\epsilon n_p)\]

We exploit the delta distribution to rewrite this as

\[G^K_0(p,\omega)=-2\pi i \delta(\omega-\epsilon_p)S(\omega)\] with \[S(\omega)=1-2\epsilon\, n(\omega)\] and \[n(\omega)=\pqty{e^{\beta(\omega-\mu)}+\epsilon}^{-1}\]
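Evaluated with the equilibrium distribution above, \(S(\omega)\) takes the familiar closed forms

\[S(\omega)=1-2\epsilon\,n(\omega)= \begin{cases}\tanh\dfrac{\beta(\omega-\mu)}{2}&\textrm{fermions}\\ \coth\dfrac{\beta(\omega-\mu)}{2}&\textrm{bosons}\end{cases}\]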

We now use the Cauchy relation (Sokhotski–Plemelj identity) \[\frac{1}{\omega-\epsilon_p+i\delta}= \mathcal{P}\frac{1}{\omega-\epsilon_p}-i\pi\delta(\omega-\epsilon_p)\]

We can take the imaginary part \[\Im G_0^r(p,\omega)=-\pi\delta(\omega-\epsilon_p)\]

We finally get \[G_0^K(p,\omega)=2i\,\Im G_0^r(p,\omega)\,S(\omega) =\bqty{G_0^r(p,\omega)-G_0^a(p,\omega)}S(\omega)\,,\] which is the equilibrium fluctuation–dissipation relation for the non-interacting GF.

Exercise 3.5: Extend this to the case of a non-diagonal Hamiltonian.

It is useful to observe that this function \(S\), say for fermions, at zero temperature is \[S(\omega)=1-2\theta(\mu-\omega)=\mathrm{sgn}(\omega-\mu)\] with \(\mathrm{sgn}\) being the sign function, consistent with \(\tanh\frac{\beta(\omega-\mu)}{2}\to\mathrm{sgn}(\omega-\mu)\) as \(\beta\to\infty\).

Some useful relations

We go back to the previous matrix notation with the underscore. Consider the following \(2\times 2\) matrices of this type:

\[\underline{B}=\left(\begin{array}{c|c} b_r & b_K\\ \hline 0 & b_a \end{array}\right)\]

\[\underline{F}=\left(\begin{array}{c|c} f_r & f_K\\ \hline 0 & f_a \end{array}\right)\]

We also allow \(b_r, f_r, \dots\) to be matrices themselves.

Products

The product of two such matrices is \[\underline{B}\cdot\underline{F}=\left(\begin{array}{c|c} b_r f_r & b_r f_K + b_K f_a \\ \hline 0 & b_a f_a \end{array}\right)\]

Note how the lower left element is zero again, so the general Keldysh structure is preserved under multiplication. This tells you immediately that the retarded component of a product of such matrices is the product of the retarded components, and likewise for the advanced component. The Keldysh component, however, has a mixed form.

Inverse

We can use this to calculate the inverse:

\[\underline{F}=\underline{B}^{-1}\] \[\underline{B}\underline{F}=I\]

Then we get \(b_r f_r = I\) and from that \(f_r=(b_r)^{-1}\). What is interesting here is that the retarded component of the inverse of such a Keldysh matrix is obtained simply by inverting the retarded component of the matrix itself; we have the zero in the lower left corner to thank for that. For the advanced part we likewise have \(b_a f_a = I\) and hence \(f_a=(b_a)^{-1}\).

For the Keldysh part we have, using \(f_a=b_a^{-1}\), \[b_r f_K + b_K b_a^{-1}=0\,.\] We get \[f_K = -b_r^{-1}b_K b_a^{-1}\] for the final component we need. Interestingly, we never have to invert the Keldysh component here.
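As a quick numerical sanity check of both results (a minimal sketch, not part of the lecture notes; all names are illustrative), one can test the triangular algebra with random complex matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3  # each Keldysh component is itself an n x n matrix


def keldysh(r, K, a):
    """Assemble the block matrix ((r, K), (0, a))."""
    return np.block([[r, K], [np.zeros_like(r), a]])


def blocks(M):
    """Split a 2n x 2n matrix into (retarded, Keldysh, advanced) blocks."""
    return M[:n, :n], M[:n, n:], M[n:, n:]


br, bK, ba, fr, fK, fa = (
    rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(6)
)
B, F = keldysh(br, bK, ba), keldysh(fr, fK, fa)

# Product: the lower-left block stays zero and the components combine as above.
P = B @ F
assert np.allclose(P[n:, :n], 0)
pr, pK, pa = blocks(P)
assert np.allclose(pr, br @ fr)
assert np.allclose(pa, ba @ fa)
assert np.allclose(pK, br @ fK + bK @ fa)

# Inverse: f_r = b_r^{-1}, f_a = b_a^{-1}, f_K = -b_r^{-1} b_K b_a^{-1}.
inv = np.linalg.inv
ir, iK, ia = blocks(inv(B))
assert np.allclose(ir, inv(br))
assert np.allclose(ia, inv(ba))
assert np.allclose(iK, -inv(br) @ bK @ inv(ba))
```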

Contribution to vertices

Single-particle potential

This is the simplest vertex to consider. In equilibrium one typically does not treat a single-particle potential perturbatively: it would be included in the non-interacting Hamiltonian, since in principle it can be solved exactly. Here, however, we are interested in the case of a time-dependent single-particle potential.

\[\mathcal{W}_U=\int\dd{\vb{x}_A}\dd{\vb{x}_B}c^\dagger(\vb{x}_A) c(\vb{x}_B)\mathcal{U}(\vb{x}_A,\vb{x}_B,t)\]

Here we have an integral over continuous degrees of freedom. We write \(\vb{x}\) as vectors to emphasize that there is no time dependence yet; later we will write \(x_A=(\tau_A,\vb{x}_A)\). The potential is in general non-local: if you were to set \(\vb{x}_A=\vb{x}_B\) under the integral, it would couple to the electron density (\(c^\dagger c\)). The term \(\mathcal{U}\) could also include a spin flip or a hopping term, for example. If the term is purely local, it depends on only one variable, and in the diagram one would write only one label rather than one at each end.

Remember that the term \(\mathcal{W}_U\) appears in expressions like \[S_0(-\infty\,C_2,-\infty\,C_1)=T_C e^{-i\int_C \mathcal{W}_{U0}(\tau)\dd{\tau}}\] The integrand \(\mathcal{W}_{U0}(\tau)\) is built from the Heisenberg representation of the field operators in \(\mathcal{W}_U\): \[\int\dd{\tau}\mathcal{W}_{U0}(\tau) =\int\dd{\tau}\dd{\vb{x}_A}\dd{\vb{x}_B}c^\dagger(\tau\,\vb{x}_A) c(\tau\,\vb{x}_B)\mathcal{U}(\vb{x}_A,\vb{x}_B,\tau)\]

Now comes a somewhat tedious proof. The trick is to introduce an additional integral over two variables, \(\int\dd{\tau_A}\dd{\tau_B}\delta(\tau-\tau_A)\delta(\tau-\tau_B)\), so we can replace the \(\tau\) in the field operators: \[=\int\dd{x_A}\dd{x_B}\dd{\tau}\, c^\dagger(x_A)c(x_B)\,\delta(\tau-\tau_A)\delta(\tau-\tau_B)\, \mathcal{U}(\vb{x}_A,\vb{x}_B,\tau)\] with the shorthand \(\dd{x_A}=\dd{\vb{x}_A}\dd{\tau_A}\). Now we carry out the \(\tau\) integral; everything from \(\dd{\tau}\) onward becomes \(U(x_A,x_B)=\delta(\tau_A-\tau_B)\, \mathcal{U}(\vb{x}_A,\vb{x}_B,\tau_A)\).

The final expression is simply \[\int\dd{\tau}\mathcal{W}_{U0}(\tau) =\int\dd{x_A}\dd{x_B}\,c^\dagger(x_A)c(x_B)\,U(x_A,x_B)\,,\] which is a relation worth remembering. The proof works exactly the same for equilibrium GFs (see the equilibrium GF lecture notes).

Finally, we want to look at the structure of this object in Keldysh space.

\[U(t_A\,C_A\,\vb{x}_A,t_B\,C_B\vb{x}_B)= \delta_{C_A,C_B}\delta(t_A-t_B)(-1)^{C_A} \mathcal{U}(\vb{x}_A,\vb{x}_B,t_A)\]

\(\delta_{C_A,C_B}\) means that both times must lie on the same branch of the contour. On the lower branch we integrate backward, which gives an additional minus sign: the factor \((-1)^{C_A}\) evaluates to \(1\) on the upper branch and to \(-1\) on the lower branch.

Short addendum: Delta in Keldysh space

The defining property of a delta distribution in a Keldysh contour integral is \[\int\dd{\tau}\delta(\tau-\tau_A)f(\tau)=f(\tau_A)\,.\] Splitting the contour integral into its two branches gives \[\begin{aligned} &\int_{-\infty}^\infty\dd{t}\delta(t-t_A)\delta_{C_A,C_1}f(C_1,t)\\ &-\int_{-\infty}^\infty\dd{t}\delta(t-t_A)\delta_{C_A,C_2}f(C_2,t) \cdot S \\&=\delta_{C_A,C_1}f(C_1,t_A)-S\,\delta_{C_A,C_2}f(C_2,t_A) \\&\stackrel{!}{=}f(C_A, t_A)\,, \end{aligned}\] where \(C_1\) is the upper branch, \(C_2\) the lower branch, and \(S\) a factor still to be determined. The relation above can only hold if \(S=-1\). In summary, the delta distribution in a Keldysh contour integral means \[\delta(\tau-\tau_A)=\delta_{C,C_A}\,\delta(t-t_A)\,(-1)^{C_A}\]

We continue: let us write the object as a \(2\times 2\) matrix in Keldysh space:

\[\hat{U}(t_A\vb{x}_A,t_B\vb{x}_B)=\mathcal{U}(\vb{x}_A,\vb{x}_B, t_A) \delta(t_A - t_B)\begin{pmatrix}1&0\\0&-1\end{pmatrix}\] where the matrix at the end is the Pauli matrix \(\tau_3\).

Now remember that we went from the “hat” variables to the “underscore” variables in order to get rid of the \(\tau_3\) under the convolution integrals. We do the same for \[\hat{U}=\mathcal{U}(\vb{x}_A,\vb{x}_B,t_A)\delta(t_A-t_B)\tau_3\,.\] Since \(\hat{U}\) is proportional to \(\tau_3\) and \(\tau_3\tau_3=I\), the unitarity of \(L\) immediately gives \[\underline{U}=L\tau_3\hat{U}L^\dagger =\mathcal{U}(\vb{x}_A,\vb{x}_B,t_A)\delta(t_A-t_B)\,L L^\dagger =\mathcal{U}(\vb{x}_A,\vb{x}_B,t_A)\delta(t_A-t_B)\,I\] with \(I=\begin{pmatrix}1&0\\0&1\end{pmatrix}\) being the identity matrix in Keldysh space. Note that the factor in front of the identity matrix is just a scalar. This is the element one has to use.

In conclusion, when using the “underscore” Keldysh structure, the single-particle potential enters with the matrix element \[\underline{U}(x_A,x_B)=\mathcal{U}(\vb{x}_A,\vb{x}_B,t_A)\delta(t_A-t_B)\,I\,.\]

Diagrams with U

Assuming there are no other perturbation terms, let us write down the contribution to the Green’s function.

Diagrammatic representation

There is only one irreducible diagram (the second one). Remember, the self-energy is obtained by amputating the external legs of the irreducible diagram. So we have an exact expression for the self-energy \(\Sigma\), which means the perturbation expansion can be carried out exactly. Of course we still have to solve the Dyson equation, which may be complicated depending on the time dependence, but at least we have a formally exact solution.

The Dyson equation is: \[\underline{G}(x_1,x_2)=\underline{G}_0(x_1,x_2)+ \int\dd{x_3}\dd{x_4}\underline{G}_0(x_1,x_3)\underline{\Sigma}(x_3,x_4)\underline{G}(x_4,x_2)\] with the notation \(x_1=(t_1,\vb{x}_1)\). Let’s write the Dyson equation again diagrammatically:

Diagrammatic Dyson equation

In plain English: the full GF is the non-interacting GF plus the non-interacting GF joined to the full GF through the self-energy.

This is an integral equation, but under certain conditions we can reduce it to an algebraic one. In general \(\Sigma\) comes from \(\mathcal{U}\), which is time dependent, so there is no time-translation invariance and we would have to solve the equation numerically. However, there is a case where the self-energy depends only on a time difference, namely steady state, which is what we will study next. If the time dependence is such that we switch on the potential and wait long enough, after which the potential remains constant, then all quantities will again depend only on time differences. This leads to the steady-state simplification.
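Concretely, in steady state the convolutions become products under Fourier transformation and the Dyson equation turns algebraic (a sketch, with spatial or momentum arguments suppressed; the inverses are taken in Keldysh space using the relations derived above):

\[\underline{G}(\omega)=\underline{G}_0(\omega) +\underline{G}_0(\omega)\,\underline{\Sigma}(\omega)\,\underline{G}(\omega) \quad\Rightarrow\quad \underline{G}(\omega)=\bqty{\underline{G}_0(\omega)^{-1}-\underline{\Sigma}(\omega)}^{-1}\]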