After considering the academic model of last time, consisting of separated levels, we now consider a more realistic situation.
What we are discussing can be generalized easily to a small (or finite) number of electronic levels coupled to a continuum of bath levels. There will be some hopping \(V\) between the bath and the system. In the end we want a situation with two baths, which are coupled to the central system levels.
Image here
Physically these describe, for example, metallic leads connected to a central region, e.g. a quantum dot or a molecule. The two leads will have different chemical potentials \(\mu_L\) and \(\mu_R\), the difference of which describes a bias voltage applied between the two leads. One interesting quantity here would be the current flowing from one lead to the other through the center region. Other interesting phenomena would be, e.g., the Kondo effect.
We also assume a chemical potential \(\mu_C\) for the center. The operators describe fermions, denoted by \(c_p,\,c^\dagger_p\) for the leads. We’ll see that spin doesn’t play an important role for the moment. The center level will be described by operators \(d,d^\dagger\). We use just a single center level for the moment; the generalization to many levels is not complicated.
Chemical potentials make sense in equilibrium, but this is not an equilibrium situation. The idea is, we start again from an initial situation where at \(t < t_0\), we have \(V=0\). Each region is in equilibrium, but with different chemical potentials. Now at \(t_0\), \(V\neq 0\) is switched on. From this sudden switch, we expect some kind of oscillation to set in. After some time, we’ll reach a steady state again and of course, if \(\mu_L\neq\mu_R\), we’ll see a current flowing.
What we also expect is that the chemical potential \(\mu_C\) will play no role in the steady state, as it just describes a discrete isolated level, which will be forgotten after enough time has passed. At some point, for large \(t-t_0\), we expect a steady state.
For simplicity, we start with a single lead and then extend this to a second one. This will also clarify what we did last time. The unperturbed part of the Hamiltonian describes the separate regions: \[H_0=\sum_p\epsilon_p c^\dagger_p c_p+\Delta d^\dagger d\] with \(\Delta\) being some energy. It is important that the levels of the lead become continuous and infinite in the end, as we’ll see. Then we have the perturbation that describes the hopping, i.e. the coupling: \[V=\sum_p V_p\pqty{c^\dagger_p d +d^\dagger c_p}\] Also, we identify \[d=c_{p=0},\quad d^\dagger=c^\dagger_{p=0}\] so that we can simply use \(p=0\) when we are inside and \(p\neq 0\) when we are outside the center region.
The self-energy is very simple; the only difficulty here is solving Dyson’s equation. Since we have a steady state, we can Fourier transform into frequency space. \(\underline G(\omega)\) is the full steady-state Green’s function, \(\underline g(\omega)\) the Green’s function for \(V=0\) (before the coupling is switched on). Remember that these are matrices in Keldysh space. \(\underline V\) describes the perturbation.
Note that \(\underline G(\omega)_{pp'}\), \(\underline g(\omega)_{pp'}\), and \(\underline V\) are 2x2 matrices. In contrast to before, \(p\) and \(p'\) can mix, e.g. via an electron starting from \(p\), going to the center level and then jumping back to \(p'\). Note that \[\underline g(\omega)_{pp'}\propto\delta_{pp'}\] as there is no coupling between the fermions. The matrix \(\underline V\) describes the coupling, and \(V_p\) is an amplitude between operators \(c\) and \(d\). Remember the Keldysh structure:
\[\underline V_{pp'}=\underline I V_{pp'}\] with \(\underline I\) being the identity in Keldysh space. The coefficient is only non-zero for \(V_{p0}=V_{0p}=:V_p\); all other terms are zero. This matrix has infinite dimension. Now to Dyson’s equation, omitting the argument \(\omega\): \[\underline G=\underline g+\underline g \underline V \underline G\] with the formal solution \[\underline G^{-1}=\underline g^{-1}-\underline V\] Inverting \(\underline g\) is relatively easy, as we will see. Inverting again after subtracting \(\underline V\) is more difficult. Doing it numerically is inconvenient, but we can do better.
Dyson’s equation in index notation is \[\underline G_{pp_1}=\underline g_{pp_1}+\sum_{p_2p_3} \underline g_{pp_2} \underline V_{p_2p_3} \underline G_{p_3p_1}\]
This is a matrix product, and each factor is a 2x2 matrix in Keldysh space. The meaning of the underscore is of course the Keldysh structure: \[\underline G_{p_1p_2}=\left(\ \begin{array}{c|c} G_{p_1p_2}^r & G_{p_1p_2}^K \\ \hline 0 & G_{p_1p_2}^a \end{array}\right)\] We exploit the fact that \[\underline g_{pp_1}\propto\delta_{pp_1}\] and that the only terms different from zero are \(V_{p0},\,V_{0p}\). We can now write an equation for the full GF: \[\underline G_{00}=\underline g_{00}+\underline g_{00} \underline V_{0p_3}\underline G_{p_3 0}\] with \(p_3\) being arbitrary except zero; note the use of the Einstein sum convention here. The above is more cleanly written as \[\underline G_{00}=\underline g_{00}+\sum_{p\neq 0}\underline g_{00} \underline V_{0p}\underline G_{p0}\] Next we evaluate \[\underline G_{p_3 0}=0+\underline g_{p_3 p_3} \underline V_{p_3 0}\underline G_{00}\] where again \(p_3\neq 0\), or more cleanly \[\forall p\neq 0:\;\underline G_{p 0}=\underline g_{pp} \underline V_{p0}\underline G_{00}\] and this closes the equation. We get \[\underline G_{00}=\underline g_{00}+\underline g_{00} \underline V_{0p}\underline g_{pp}\underline V_{p0} \underline G_{00}\] or more cleanly \[\underline G_{00}=\underline g_{00}+\sum_{p\neq 0}\underline g_{00} \underline V_{0p}\underline g_{pp}\underline V_{p0} \underline G_{00}\] We name the object \[\underline{\tilde\Sigma}= \underline V_{0p}\underline g_{pp}\underline V_{p0}\] and rewrite \[\underline G_{00}=\underline g_{00}+\underline g_{00} \underline{\tilde\Sigma} \underline G_{00}\] That is the general form of a Dyson equation.
We could put more center levels there and the same discussion holds; then the zeros become indices that run over the center region. The main point is that the matrix here is small: it is just the size of the center region. That point is also important from the formal point of view. While the bath must be continuous (otherwise you won’t get dissipation), the center region can be a small matrix and you still have dissipation. The formal solution we already know: \[\underline G_{00}=\pqty{\underline g_{00}^{-1}- \underline{\tilde\Sigma}}^{-1}\] So we need the inversion of a 2x2 matrix in Keldysh space.
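This downfolding can be checked numerically. The following is a minimal sketch, not part of the lecture: all parameter values are made up, the bath is a finite stand-in for the continuum, and the limit \(0^+\) is replaced by a small finite \(\eta\). It compares the retarded component of \(\underline G_{00}=\pqty{\underline g_{00}^{-1}-\underline{\tilde\Sigma}}^{-1}\) against brute-force inversion of the full resolvent.

```python
import numpy as np

# Minimal sketch (made-up parameters): check the downfolded Dyson equation
# G_00 = (g_00^{-1} - Sigma~)^{-1} for the retarded component against a
# brute-force inversion of the full (center + bath) resolvent matrix.
N = 200                            # number of bath levels (finite stand-in)
eps = np.linspace(-2.0, 2.0, N)    # bath level energies eps_p
V = 0.05 * np.ones(N)              # hoppings V_p (taken constant here)
Delta = 0.3                        # center level energy
eta = 1e-3                         # finite stand-in for 0^+
omega = 0.1

# Full Hamiltonian: index 0 is the center level d, indices 1..N the bath c_p
H = np.zeros((N + 1, N + 1))
H[0, 0] = Delta
H[1:, 1:] = np.diag(eps)
H[0, 1:] = V
H[1:, 0] = V

# Brute force: retarded GF of everything, then take the (0,0) element
G_full = np.linalg.inv((omega + 1j * eta) * np.eye(N + 1) - H)
G00_direct = G_full[0, 0]

# Downfolded: Sigma~^r(omega) = sum_p V_p^2 / (omega - eps_p + i*eta)
Sigma_r = np.sum(V**2 / (omega - eps + 1j * eta))
G00_dyson = 1.0 / (omega - Delta + 1j * eta - Sigma_r)

assert abs(G00_direct - G00_dyson) < 1e-8
```

The agreement is exact (up to machine precision), since the downfolding is just the Schur complement of the bath block.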
The previous expression can also be written as: \[\underline{\tilde\Sigma}=\sum_{p}\underline V_{0p}\underline g_{pp} \underline V_{p0}\]
Note: We could also write \(p\neq 0\) under the sum, since the \(p=0\) summand evaluates to 0.
This looks like an expression for a self-energy. In fact, in the literature it is often called the “lead self-energy” or “bath self-energy”, but we won’t do that here. We restrict the name self-energy to those terms that come from correlations.
Now we carry out this inversion of the 2x2 matrix in Keldysh space. Remember: for an upper-triangular Keldysh matrix \(\underline B\) with components \(b^r,b^K,b^a\), the inverse \(\underline F=\underline B^{-1}\) has \(f^r=(b^r)^{-1}\) and similarly for the advanced component; the Keldysh component is a bit more complicated, \(f^K=-(b^r)^{-1}b^K(b^a)^{-1}\). We apply this now.
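The inversion rule for this triangular Keldysh structure can be verified directly. A small sketch with made-up scalar entries (not from the lecture):

```python
import numpy as np

# Sketch: for an upper-triangular Keldysh matrix B = [[b_r, b_K], [0, b_a]],
# the inverse has components f_r = 1/b_r, f_a = 1/b_a and the Keldysh
# component f_K = -(1/b_r) * b_K * (1/b_a). Entries are made-up numbers.
b_r = 0.4 - 0.1j
b_a = np.conj(b_r)     # advanced = conjugate of retarded
b_K = 0.3j

B = np.array([[b_r, b_K], [0.0, b_a]])
F = np.linalg.inv(B)

assert np.allclose(F[0, 0], 1 / b_r)                      # f_r = b_r^{-1}
assert np.allclose(F[1, 1], 1 / b_a)                      # f_a = b_a^{-1}
assert np.allclose(F[0, 1], -(1 / b_r) * b_K * (1 / b_a)) # f_K rule
assert np.isclose(F[1, 0], 0.0)                           # stays triangular
```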
We take the retarded component of this expression and use the fact that the 2x2 matrix \(\underline V_{0p}\) is just \(V_{0p}\) times the 2x2 identity matrix: \[G^r_{00}=\bqty{(g^r_{00})^{-1}-\tilde\Sigma^r}^{-1}\] \[=\bqty{(g^r_{00})^{-1}- \sum_p V_{0p} g^r_{pp} V_{p0}}^{-1}\] The expression for the advanced component is similar.
\[G_{00}^K=-G_{00}^r(\underline g_{00}^{-1}- \underline{\tilde\Sigma})^K G^a_{00}\]
\[\tilde\Sigma^K = \sum_p V_{0p}\, g^K_{pp}\, V_{p0}\] where again we used the fact that \(\underline V\) is diagonal in Keldysh space.
\[\pqty{\underline g_{00}^{-1}}^K= -{g_{00}^{r}}^{-1}g_{00}^K{g_{00}^{a}}^{-1}\] \[=-{g_{00}^{r}}^{-1}\bqty{ \pqty{g_{00}^r-g_{00}^a} S_C(\omega) }{g_{00}^{a}}^{-1}\]
We did this a couple of weeks ago: the Keldysh component of a non-interacting system is given by the expression in the square brackets. Remember, we are calculating the non-interacting GF of the center region. By the way, we consider something even more general than that: we could give the left and right leads different temperatures and consider thermoelectric effects. The center region may also have its own temperature. So we put an index C for the center region: \[ S_C(\omega)=1-2 f_{FC}(\omega)\]
This expression can be simplified further: \[=\pqty{{g_{00}^r}^{-1}-{g_{00}^a}^{-1}} S_C(\omega)\] with \[{g_{00}^r}^{-1}=\omega-\Delta+i0^+\] The difference \({g_{00}^r}^{-1}-{g_{00}^a}^{-1}=2i\,0^+\) is formally zero, but we have to be careful with this limit; setting it to zero from the beginning can lead to contradictory results. Naively we would conclude \[\Rightarrow\quad \pqty{\underline g_{00}^{-1}}^K = 0\] The question is: is this correct? In other words, in the expression for the Keldysh component, can we neglect this term and only keep the contribution from \(\underline{\tilde\Sigma}\)? We’ll now discuss when this is possible and what its physical meaning is.
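The limit can be made explicit numerically. A small sketch (illustration only, made-up numbers): keep \(0^+\) finite as \(\eta\) and watch \(\pqty{\underline g_{00}^{-1}}^K\) vanish linearly in \(\eta\).

```python
import numpy as np

# Sketch: with a finite eta standing in for 0^+, the term
# (g_00^{-1})^K = -(g^r_00)^{-1} g^K_00 (g^a_00)^{-1} reduces to
# 2*i*eta*S_C(omega) and vanishes as eta -> 0, while Sigma~^K
# keeps a finite imaginary part from the continuous bath.
omega, Delta, S_C = 0.7, 0.3, 0.5   # made-up values; S_C = 1 - 2 f_FC(omega)

vals = []
for eta in (1e-2, 1e-4, 1e-6):
    g_r = 1.0 / (omega - Delta + 1j * eta)
    g_a = np.conj(g_r)
    g_K = (g_r - g_a) * S_C              # Keldysh component of the isolated level
    gK_inv = -(1 / g_r) * g_K * (1 / g_a)
    assert np.isclose(gK_inv, 2j * eta * S_C)   # analytic value: 2*i*eta*S_C
    vals.append(abs(gK_inv))

# the term shrinks linearly with eta
assert vals[0] > vals[1] > vals[2]
```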
Let us look at the initial conditions of the problem. We have different temperatures and chemical potentials. Once we switch on the hybridization, we expect to reach an equilibrium or a non-equilibrium steady state, but it won’t matter what the initial temperature and chemical potential of the center region were. In the steady state, the chemical potential of the center doesn’t contribute. The only place where it occurs is in the distribution function \(S_C(\omega)\), and we now see that it doesn’t affect the properties of the system. On the other hand, the chemical potentials of the leads are hidden in \(\underline{\tilde\Sigma}^K\), as it contains the Keldysh component of the GF of the leads.
To summarize, \(\pqty{\underline g_{00}^{-1}}^K\) can be neglected:

(i) when \(G_{00}^r\) doesn’t have singularities. In \[G_{00}^K=-G_{00}^r(\underline g_{00}^{-1}- \underline{\tilde\Sigma})^K G^a_{00}\] \(G_{00}^r\) and \(G_{00}^a\) can have singularities, but we can neglect \(\underline g_{00}^{-1}\) whenever we can neglect the singularities of that term;

(ii) when \(\tilde\Sigma^K\) has a finite imaginary part. Then we can neglect the \(0^+\).

In practice, we’ll say for which frequencies it can be neglected. Later we’ll see that these two conditions go hand in hand.
Physically, neglecting \((g_{00}^{-1})^K\) means neglecting \(S_C(\omega)\), i.e. \(\mu_C, T_C\): the initial chemical potential and temperature of the center region do not affect the properties of the steady state. This makes sense, because the small central region is coupled to infinite leads.
Before we go on, let’s see what we would expect for the case of only one lead. If we have a small system coupled to a bigger one and wait long enough, the center region will equilibrate to the same temperature and chemical potential as the bath to which it couples. Let’s see whether we really get this. \[G_{00}^K=G_{00}^r \sum_p \pqty{V_{0p} g_{pp}^K V_{p0}} G_{00}^a\]
Note: why is there no minus sign here? Because we used \(-\dots(0-\underline{\tilde\Sigma})\dots=+\dots\underline{\tilde\Sigma}\dots\).
For simplicity, we’ll just take \(V_{0p}\) to be a constant: \[\forall p:\;V_{0p}=V\] We’ll later generalize to a \(p\)-dependent coupling. \[\sum_p g_{pp}^K=\sum_p (g_{pp}^r - g_{pp}^a) S_L(\omega)\] Note that we consider just one bath here, hence the index L for the left bath. \[g_{pp}^r(\omega)=\frac{1}{\omega-\epsilon_p+i0^+}\] \[=\Re\dots-i\pi\delta(\omega-\epsilon_p)\] In the difference \(g_{pp}^r-g_{pp}^a\) the real parts cancel (the advanced function is the complex conjugate) and only the delta contribution is left: \[\sum_p\pqty{g_{pp}^r-g_{pp}^a}=-2i\pi\sum_p\delta(\omega-\epsilon_p)\] Now if the \(\epsilon_p\) were really discrete, this would just be a sum of delta functions. Instead we consider a continuum in the bath. The expression looks familiar from Solid State Physics: the sum is just a density of states, \[\rho_L(\omega)=\sum_p\delta(\omega-\epsilon_p)\] Due to the infinite, continuous bath, this is a smooth function. This is an important assumption! Otherwise, things won’t work out properly. The function is only smooth if the environment is infinite and there are no isolated levels. \[\sum_p g_{pp}^K(\omega)=-2i\pi\rho_L(\omega) S_L(\omega)\]
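How the sum of delta functions becomes a smooth DOS can be illustrated numerically. A sketch with made-up parameters, not from the lecture: the deltas are represented by narrow Lorentzians of width \(\eta\), and the level-spacing-normalized sum smoothens as the number of bath levels grows at fixed bandwidth.

```python
import numpy as np

def rho_L(omega, N, W=2.0, eta=5e-2):
    """Bath DOS per level spacing: (2W/N) * sum_p delta_eta(omega - eps_p),
    with delta_eta a Lorentzian of width eta standing in for 0^+.
    Flat band of N levels on [-W, W]; all values are illustrative."""
    eps = np.linspace(-W, W, N)
    return (2 * W / N) * np.sum(eta / np.pi / ((omega - eps) ** 2 + eta ** 2))

coarse = rho_L(0.0, 20)    # few discrete levels: spiky, far from the continuum value
fine = rho_L(0.0, 4000)    # dense levels (spacing << eta): smooth, close to 1

assert abs(fine - 1.0) < 0.05    # converges to the flat continuum value
assert abs(coarse - 1.0) > 0.2   # a sparse bath does not
```

Only when the level spacing is small compared to every other scale does \(\rho_L(\omega)\) behave as a smooth function; this is exactly the "infinite continuous bath" assumption.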
What we are really after is the expression for \(G_{00}^K\). Putting the pieces together: \[G_{00}^K=-2i\pi V^2 |G_{00}^r|^2 \rho_L(\omega)\, S_L(\omega)\]
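This result says the center equilibrates with the single lead: it can be rewritten as \(G_{00}^K=(G_{00}^r-G_{00}^a)\,S_L(\omega)\), i.e. the center inherits the lead's distribution function. A minimal numerical sketch, assuming a flat wide band and made-up parameters:

```python
import numpy as np

# Sketch (made-up parameters, flat wide band: rho_L ~ rho0, real shift ~ 0):
# check that G^K_00 = -2*pi*i*V^2*|G^r_00|^2*rho0*S_L equals
# (G^r_00 - G^a_00)*S_L(omega), i.e. the center region carries the
# lead's distribution function -> equilibration with the bath.
Delta, V, rho0, mu_L, T_L = 0.3, 0.2, 1.0, 0.1, 0.05
Gamma = 2 * np.pi * V**2 * rho0          # broadening from the bath

omega = np.linspace(-2.0, 2.0, 401)
S_L = 1 - 2 / (np.exp((omega - mu_L) / T_L) + 1)   # S_L = 1 - 2 f_FD

G_r = 1.0 / (omega - Delta + 1j * Gamma / 2)
G_a = np.conj(G_r)

G_K = -2j * np.pi * V**2 * np.abs(G_r) ** 2 * rho0 * S_L

# fluctuation-dissipation form: center in equilibrium with the lead
assert np.allclose(G_K, (G_r - G_a) * S_L)
```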
For the retarded component, \[G^r_{00}=\bqty{(g^r_{00})^{-1}-\tilde\Sigma^r}^{-1}\] \[=\bqty{(g^r_{00})^{-1}- \sum_p V_{0p} g^r_{pp} V_{p0}}^{-1}\]
\[G_{00}^r=\pqty{\omega- \underbrace{\Delta}_{=\epsilon_0} +i0^+-V^2\sum_p g_{pp}^r}^{-1}\] \[\sum_p g_{pp}^r=\sum_p\Re g_{pp}^r+i\sum_p\Im g_{pp}^r\] \[g_{pp}^r=\frac{1}{\omega-\epsilon_p+i0^+}\] \[=P\frac{1}{\omega-\epsilon_p}-i\pi\delta(\omega-\epsilon_p)\] The summands are just the real part and imaginary part. The \(\sum_p\Re g_{pp}^r\) part can’t be further simplified, so we’ll just call it \[R(\omega):=\sum_p\Re g_{pp}^r\,.\] \[\sum_p g_{pp}^r=R(\omega)-i\pi\rho_L(\omega)\] Putting everything together, we have \[G_{00}^r=\pqty{\omega-\bqty{\Delta+V^2 R(\omega)} +i\pi V^2 \rho_L(\omega)}^{-1}\]
Again, thanks to the finite imaginary part, we can neglect the \(0^+\). Of course, the DOS at that frequency has to be non-zero for this to work. This has a physical meaning: we can neglect the singularities of the center region, provided that at these frequencies there is a finite DOS in the bath. Let’s now give some names to the terms: the real part \(V^2 R(\omega)\) shifts the level \(\Delta\), while \(\pi V^2\rho_L(\omega)\) acts as a broadening.
We plot the spectral function of the center region: \[\rho_{00}(\omega)=-\frac{1}{\pi}\Im G_{00}^r(\omega)\] What we get is, more or less, a Lorentzian.
Image here: 1:07:40
Note: this result is for small \(V\).
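The Lorentzian shape can be illustrated numerically. A minimal sketch (made-up parameters, flat wide band): with \(\rho_L(\omega)\approx\rho_0\) constant and \(R(\omega)\approx 0\), the retarded GF becomes \(G^r_{00}=1/(\omega-\Delta+i\Gamma/2)\) with \(\Gamma=2\pi V^2\rho_0\), so the spectral function is a Lorentzian of half-width \(\Gamma/2\) centered at \(\Delta\).

```python
import numpy as np

# Sketch (wide-band limit, made-up parameters): spectral function of the
# center level, rho_00 = -(1/pi) Im G^r_00, as a Lorentzian centered at
# Delta with broadening Gamma = 2*pi*V^2*rho0 from the bath.
Delta, V, rho0 = 0.3, 0.2, 1.0
Gamma = 2 * np.pi * V**2 * rho0

omega = np.linspace(-2.0, 2.0, 2001)
G_r = 1.0 / (omega - Delta + 1j * Gamma / 2)
rho_00 = -np.imag(G_r) / np.pi

peak = omega[np.argmax(rho_00)]              # position of the maximum
height = rho_00.max()                        # Lorentzian peak height
norm = rho_00.sum() * (omega[1] - omega[0])  # approximate spectral weight

assert abs(peak - Delta) < 1e-2                  # centered at the level
assert abs(height - 2 / (np.pi * Gamma)) < 1e-3  # height 2/(pi*Gamma)
assert abs(norm - 1.0) < 0.1                     # weight ~ 1 (tails cut off)
```

The peak sits at \(\omega=\Delta\) (for small \(V\); in general it is shifted by \(V^2R(\omega)\)) and carries total spectral weight one.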
Last time we saw strange behavior with the limits. This time, we really took dissipation into account explicitly, through the leads. In the final result, the properties of the central region do not depend on its initial temperature and chemical potential. Dissipation is produced by the coupling to the bath. Mathematically, this enters where we neglect the \(0^+\) in \({g_{00}^r}^{-1}=\omega-\Delta+i0^+\), because \(\tilde\Sigma^K\) has a non-zero imaginary part. That came from the fact that the sum \[-2i\pi\sum_p\delta(\omega-\epsilon_p)\] is in the end a smooth function, which we only have if the bath levels are infinite and continuous.
Also in the retarded component we could neglect the \(0^+\) only because of the finite DOS. In practical terms, the bath spectrum must again be continuous for this. That is the underlying dissipation mechanism. All this was mostly a sketch for the real calculation.