This derivation is largely based on the TD-DFT derivation (TDDFT and TDHF are very similar), found in the wonderful review, “Single-Reference ab Initio Methods for the Calculation of Excited States of Large Molecules” by Dreuw and Head-Gordon (Chemical Reviews, 2005).
Substituting the first two expressions into the right-hand side yields the Dirac form of the time-dependent Hartree-Fock equations,
where \({\mathbf{F}}\) is the Fock matrix and \({\mathbf{P} = \mathbf{CC}^{\dagger}}\) is the density matrix, same as in the time-independent Hartree-Fock equations. Before a time-dependent perturbation is applied, we assume that the system is in its electronic ground state, such that
with \({\mathbf{P}^{(0)}}\) and \({\mathbf{F}^{(0)}}\) as the unperturbed density and Fock matrix, respectively. The first condition comes from the fact that commuting matrices share a common set of eigenvectors, and the second comes from the fact that at convergence the eigenvectors of the Fock matrix are orthonormal. (Recall that the density matrix may be written as \({\mathbf{P} = \mathbf{CC}^{\dagger}}\), where the columns of \({\mathbf{C}}\) form an orthonormal set, i.e. \({\mathbf{C}^{\dagger}\mathbf{C} = \mathbf{1}}\)). The Fock matrix elements are given by
Note the Fock matrix dependence on the density matrix. At convergence, the Fock matrix is diagonal, with the elements corresponding to the orbital energies \({\epsilon}\).
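As a quick aside (not part of the original derivation), the parenthetical above is easy to check numerically: any set of orthonormal orbital coefficients gives an idempotent density matrix whose trace counts the occupied orbitals. The sizes and random coefficients below are arbitrary, chosen purely for illustration.

```python
import numpy as np

# Illustration only: if the columns of C are orthonormal, then the occupied-block
# density P = C_occ C_occ^dagger is automatically idempotent.
rng = np.random.default_rng(0)
nbf, nocc = 5, 2                                      # arbitrary sizes
C, _ = np.linalg.qr(rng.standard_normal((nbf, nbf)))  # random orthonormal columns
Cocc = C[:, :nocc]

P = Cocc @ Cocc.T
print(np.allclose(Cocc.T @ Cocc, np.eye(nocc)))  # orthonormality: C^dagger C = 1
print(np.allclose(P @ P, P))                     # idempotency:    P P = P
print(np.isclose(np.trace(P), nocc))             # trace counts occupied orbitals
```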
Following convention, indices \({\{i,j,...\}}\) correspond to occupied orbitals, \({\{a,b,...\}}\) correspond to unoccupied orbitals, and \({\{p,q,...\}}\) correspond to general orbitals. In general perturbation theory, a perturbed wavefunction (or density matrix, for our purposes) can be decomposed, to first order, as
where the superscript (1) indicates the first-order time-dependent change. Inserting the expressions 13 and 14 into the time-dependent equation 6 and collecting only the first-order terms yields
The matrix \({f}\) is a one-electron operator which describes the applied perturbation. This perturbation acts on the electron density, resulting in a first-order Fock matrix response
with \({d_{pq}}\) as the perturbation densities. Plugging the above expressions for \({\mathbf{F}^{(1)}}\) and \({\mathbf{P}^{(1)}}\) into the first-order TDHF equation 15 and collecting the terms containing \({e^{-i\omega t}}\) gives (after some algebra):
The terms multiplied by \({e^{i \omega t}}\) give the complex conjugate of the above expression. Because the density matrix must be idempotent, we can show that
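Spelling this step out (a sketch, using the same order parameter \({\lambda}\) as in the expansion above), idempotency of the perturbed density requires

\[\left(\mathbf{P}^{(0)} + \lambda\mathbf{P}^{(1)}\right)\left(\mathbf{P}^{(0)} + \lambda\mathbf{P}^{(1)}\right) = \mathbf{P}^{(0)}\mathbf{P}^{(0)} + \lambda\left(\mathbf{P}^{(0)}\mathbf{P}^{(1)} + \mathbf{P}^{(1)}\mathbf{P}^{(0)}\right) + \mathcal{O}(\lambda^2) = \mathbf{P}^{(0)} + \lambda\mathbf{P}^{(1)}\]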
Equating like powers of \({\lambda}\) yields the expression 21, as well as our original expression 8. Using 21 allows us to restrict the elements of \({d_{pq}}\), utilizing the nature of the zeroth order density matrix given in 11. As an example, consider the element \({d_{ii}}\):
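Sketching the algebra: using the first-order idempotency condition together with the diagonal zeroth-order density (so that \({P^{(0)}_{ii} = 1}\)), this element satisfies

\[\left(\mathbf{P}^{(0)}\mathbf{P}^{(1)} + \mathbf{P}^{(1)}\mathbf{P}^{(0)}\right)_{ii} = 2P^{(1)}_{ii} = P^{(1)}_{ii}\]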
which is only true if \({P_{ii}^{(1)} = 0}\). Similar arguments show that the only contributing blocks of \({d_{pq}}\) are \({d_{ia}}\) and \({d_{ai}}\). In other words, the only contributions to the TDHF equations are the occupied-virtual and virtual-occupied blocks; all virtual-virtual and occupied-occupied blocks are necessarily zero. Keeping this in mind, along with the diagonal nature of the unperturbed Fock matrix, we obtain
Finally, we make the assumption that the perturbation is infinitely small, which sends \({f_{ai} = f_{ia} \rightarrow 0}\), and we recover (recognizing that \({F_{pp}^{(0)} = \epsilon_p}\) and \({P_{ii}^{(0)} = 1}\)):
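For reference, the equations this leads to are conventionally collected into the familiar TDHF/RPA matrix form (written here in spin orbitals with antisymmetrized two-electron integrals, which is an assumed convention rather than the exact notation above):

\[\begin{pmatrix} \mathbf{A} & \mathbf{B}\\ \mathbf{B}^{*} & \mathbf{A}^{*}\end{pmatrix}\begin{pmatrix}\mathbf{X}\\ \mathbf{Y}\end{pmatrix} = \omega\begin{pmatrix}\mathbf{1} & \mathbf{0}\\ \mathbf{0} & -\mathbf{1}\end{pmatrix}\begin{pmatrix}\mathbf{X}\\ \mathbf{Y}\end{pmatrix}, \qquad A_{ia,jb} = \delta_{ij}\delta_{ab}(\epsilon_a - \epsilon_i) + \langle aj\vert\vert ib\rangle, \quad B_{ia,jb} = \langle ab\vert\vert ij\rangle\]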
I’ve given a derivation before of the density-density response function, but today I want to give an alternate derivation of the more general linear-response function, which will prove to be useful in the derivation of the time-dependent Hartree-Fock equations (TDHF), also known in the nuclear physics community as the Random Phase Approximation (RPA). This derivation is largely taken from McWeeny (1989).
In order to derive the response function — also called the frequency-dependent polarizability — we must first partition the Hamiltonian into two parts: the time-independent Hamiltonian and the time-dependent perturbation:
In the interaction picture, \({\Psi'_0(t) = \Psi'_0e^{-iE_0t/\hbar}}\). Thus we can partition the time-dependent wavefunction (expanding over a complete set of eigenstates) as
are the real, positive, exact excitation frequencies of the unperturbed system. Note that \({c_0(t) = 1}\). We are assuming that \({H'(t)}\) is turned on slowly at time \({t \rightarrow -\infty}\). Substitute 1 and 3 into 2, separate the orders, and impose the boundary conditions that \({c_0 = 1}\) and \({c_n = 0}\) \({(n\neq0)}\) at \({t \rightarrow -\infty}\). This gives
Where a ‘fixed’ Hermitian operator \({A}\) determines the ‘shape’ of the perturbation, while the time dependence is confined to the (real) ‘strength’ factor \({F(t)}\). For a perturbation beginning at time \({t \rightarrow -\infty}\) up to time \({t}\),
\[c_n(t) = (i\hbar)^{-1}\int\limits_{-\infty}^{t} \langle n \vert A \vert 0 \rangle F(t') e^{i\omega_{0n}t'}dt' \ \ \ \ \ (9)\]
This determines, to first order, the perturbed wavefunction. Now we are interested not in the perturbed wavefunction per se, but rather in the response of an observable \({O}\) to the perturbation.
\[\delta\langle O \rangle = \langle O \rangle - \langle O \rangle_0 = \int\limits_{-\infty}^{t} K(OA\vert t-t')F(t')dt' \ \ \ \ \ (10)\]
where
\[K(OA\vert t-t') = (i\hbar)^{-1} \sum\limits_{n\neq0} [\langle 0 \vert O \vert n \rangle\langle n \vert A \vert 0 \rangle e^{-i\omega_{0n}(t-t')} - \langle 0 \vert A \vert n \rangle\langle n \vert O \vert 0 \rangle e^{i\omega_{0n}(t-t')}] \ \ \ \ \ (11)\]
This is a time-correlation function, relating the fluctuation of \({\langle O \rangle}\) at time \({t}\) to the strength of the perturbation \({A}\) at some earlier time \({t'}\). \({K(OA\vert t-t')}\) is defined only for \({t'<t}\), in accordance with the principle of causality. Thus, it is a function only of the difference \({\tau = t - t'}\). Recalling the definitions of the Fourier transform \({f(\omega)}\):
with \({F(t)}\) real. Instead of working in the time domain, we may also consider the response in terms of a single oscillatory perturbation. This means that
To ensure \({H'(\omega)}\) builds smoothly from zero at \({t \rightarrow -\infty}\), we can introduce a convergence factor \({e^{\eta t}}\) with the initial conditions \({c_0 = 1}\) and \({c_n = 0}\) \({(n\neq0)}\), which gives:
Linear response time-dependent density functional theory (LR-TDDFT) is kind of a mouthful to say, but it is a really simple idea in practice. In computational chemistry, many ideas are quite simple, but they are so laden with jargon it just scares other people away! LR-TDDFT is one way of determining the single excitation energies of a molecule from first principles. It is a way of describing how matter interacts with light (or any electromagnetic field). An everyday example is color: sunlight is absorbed by some material, which excites an electron. When this electron finally relaxes, it emits light of a certain energy – that light is the color you see!
To derive the LR-TDDFT working equations (e.g. the equations you can implement on a computer), we start by deriving a density-density response function. This function describes the response of electron density (the ‘density’ of ‘density functional theory’) to an external field: light of some kind, or a magnetic field, or whatever. The potential and the electronic density ‘couple’ in some way.
We describe this interacting system with a partitioned Hamiltonian:
\[H = H_0 + V_{ext}\]
Where \(H_0\) is the time-independent Hamiltonian of the unperturbed system, and \(V_{ext}\) is the time-_dependent_ Hamiltonian describing the field potential. Investigating the potential a little more closely, we find that the potential is:
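In the usual linear-response setup (a sketch; the symbol \(v_{ext}\) for the one-body field is an assumed notation), a scalar external potential couples to the density operator \(\rho(\mathbf{r}) = \sum_{i=1}^{N}\delta(\mathbf{r} - \mathbf{r}_i)\):

\[V_{ext}(t) = \int d\mathbf{r}\, v_{ext}(\mathbf{r},t)\,\rho(\mathbf{r})\]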
Now, we will consider the time-dependent Schrödinger equation, but we will use the interaction picture, which describes the interacting system’s time dependence as:
With \(\vert \Psi(t)_I \rangle = e^{iH_0t}\vert \Psi(t)\rangle\) and \(V_I = e^{iH_0t}V_{ext}e^{-iH_0t}\).
Now, we can solve this interaction-picture equation as a Dyson series, which we truncate after the first-order term (this is where we get linear response theory). In principle, we could keep going to get quadratic terms and so on, but the equations get much more complicated. And so:
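A sketch of the truncated series (keeping \(\hbar = 1\) to match the \(e^{iH_0t}\) convention above):

\[\vert \Psi(t)_I\rangle \approx \vert \Psi(0)\rangle - i\int\limits_{-\infty}^{t} dt'\, V_I(t')\,\vert \Psi(0)\rangle\]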
A few notes about this expression are in order. The first term is simply the system at time zero. This is our first approximation. The second term introduces the perturbation of the potential, acting on the unperturbed wavefunction. The lower limit of negative infinity means that we “turn on” the perturbation infinitely long ago: this is an adiabatic approximation. In other words, we turn on the perturbation slowly enough that the ground state wavefunction can track with the disturbance. If we turn on the perturbation too sharply, the response won’t be smooth, and will display complicated ringing behavior (which we don’t care about anyway). This equation is also only valid for small perturbations – an approximation that holds true in most situations!
Now, we don’t care too much about the time evolution of the wavefunction per se. What we really care about is the response of some observable \(O\) (energy, whatever). In general, we can write this response as:
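A sketch of what this looks like at first order (again with \(\hbar = 1\); the adiabatic factor \(e^{\eta t'}\) is discussed just below):

\[\delta\langle O(t)\rangle \approx -i\int\limits_{-\infty}^{t} dt'\, e^{\eta t'}\,\langle \Psi(0)\vert\, [O_I(t), V_I(t')]\,\vert \Psi(0)\rangle\]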
Note that the term \([O_I,V_I]\) is just the commutator expression. The \(e^{\eta t'}\) ensures adiabatic switching on, and \(\eta\) is an infinitesimal real number. Eventually we will send \(\eta\) to zero, but for now we need it so that the perturbation vanishes as \(t' \to -\infty\) and is fully switched on by \(t' = 0\). If you still don’t buy that explanation, consider what the factor does at \(t' = 0\) and \(t' = -\infty\): at \(t' = 0\) the exponential is one, and at \(t' = -\infty\) the exponential is zero.
With that said, we need to derive an expression for \(V_I\). Since the external potential is just a multiplicative function of position and time, the interaction-picture transformation acts only on the density operator, and we have:
Where \(\rho_I(\mathbf{r}) = e^{iH_0t}\rho(\mathbf{r})e^{-iH_0t}\). If we substitute this expression into our expression for \(\delta\langle O\rangle\), and perform the integration from \(t' = -\infty\) to \(t' = t\), we get (after a little math):
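A sketch of the resulting expression (the exact bookkeeping of integration variables is an assumption here):

\[\delta\langle O(t)\rangle \approx -i\int\limits_{-\infty}^{\infty} dt' \int d\mathbf{r}\, \theta(t-t')\, e^{\eta t'}\,\langle \Psi(0)\vert\, [O_I(t), \rho_I(\mathbf{r},t')]\,\vert \Psi(0)\rangle\, v_{ext}(\mathbf{r},t')\]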
The term \(\theta(t - t')\) is the Heaviside step function, which is zero for \(t - t' < 0\) and one for \(t - t' > 0\). In other words, there is no response before we turn on the perturbation, and a full response once we do. We keep this to preserve causality of the response (no sense responding before there is something to respond to!).
Quick recap of what we have done so far. At this point we have a function for the response of any observable to some external potential (a laser, a magnet, whatever). In the case of LR-TDDFT, we are concerned with the density response to an external potential. And what do you know? Density is an observable! In fact, we gave the operator earlier in our derivation. So the next step is to plug in the density operator for \(\langle O\rangle\) and carry the results through. Plugging in we have:
Nothing too fancy here, just a bit of math. If we use our definition of \(\rho_I(\textbf{r},t) = e^{iH_0t}\rho(\textbf{r})e^{-iH_0t}\), the response function can be written as:
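Schematically (a sketch, with the adiabatic factor suppressed), the object being defined here is the retarded density-density response function

\[\chi(\mathbf{r},\mathbf{r}',t-t') = -i\,\theta(t-t')\,\langle \Psi(0)\vert\, [\rho_I(\mathbf{r},t), \rho_I(\mathbf{r}',t')]\,\vert \Psi(0)\rangle\]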
Here is where it gets interesting. Up until now, our response function has been in the time domain. If we introduce a Fourier transform and make use of the convolution theorem, we can transform the time-dependent equations into the equivalent frequency-dependent equations. This allows us to determine responses to individual frequencies of light, which is perfect, because it lets us later pick out the specific energies (i.e., frequencies) of light that give distinct electronic excitations. Performing the Fourier transform and applying the convolution theorem, we get:
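Schematically, the frequency-domain statement is that each Fourier component of the density response is the response function times the corresponding Fourier component of the perturbation (a sketch of the expected form):

\[\delta\rho(\mathbf{r},\omega) = \int d\mathbf{r}'\, \chi(\mathbf{r},\mathbf{r}',\omega)\, v_{ext}(\mathbf{r}',\omega)\]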
Alright, let’s wrap this up and get the spectral representation of the density response function. If the system is initially in the ground state, i.e. \(\vert \Psi(0)\rangle = \vert \Psi_0\rangle\) with lowest eigenvalue \(E_0\), and \(H_0\vert \Psi_n\rangle = E_n\vert \Psi_n\rangle\), we can insert the complete set of unperturbed states ( \(\sum\limits_n \vert \Psi_n\rangle\langle\Psi_n\vert = 1\) ) into the response function to get:
And there you have it! The spectral representation of the response function. The function has poles when \(\omega \to (E_n - E_0)\), which are the excitation energies. Thus we have a way to determine the excitation energies of the system! Later, I will show how we can use this result in the Kohn-Sham scheme to get excitation energies from the density functional theory framework.
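As a purely illustrative numerical sketch (the excitation energies and transition amplitudes below are invented, not from any real calculation), here is how that pole structure shows up if you evaluate a response function of this form on a frequency grid with a small, finite broadening \(\eta\):

```python
import numpy as np

# Invented toy spectrum: excitation energies E_n - E_0 (hartree) and transition
# amplitudes <Psi_0|rho|Psi_n>, collapsed to single numbers for illustration.
excitations = np.array([0.25, 0.40, 0.75])
amplitudes = np.array([1.0, 0.6, 0.3])
eta = 0.005  # small positive broadening, kept finite so the poles stay printable

def chi(omega):
    """Toy spectral representation with poles at +/- (E_n - E_0)."""
    return np.sum(
        amplitudes**2 / (omega - excitations + 1j * eta)
        - amplitudes**2 / (omega + excitations + 1j * eta)
    )

# -Im(chi) spikes sharply whenever omega hits one of the excitation energies.
for omega in np.linspace(0.10, 0.90, 17):
    print(f"omega = {omega:4.2f}   -Im chi = {-chi(omega).imag:9.3f}")
```

In a real calculation the amplitudes are transition densities and the sum runs over the true many-body states, but the lesson is the same: scan \(\omega\) and watch for poles.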
I do quite a bit of development work as a graduate student, and one of the most helpful parts of my education was writing a Hartree-Fock SCF (self-consistent field) program from scratch. The SCF procedure is the workhorse of most computational chemistry software packages, as it is generally the first step before doing more advanced calculations (such as MP2). It is also the conceptual basis for molecular orbital theory. I’ve shared my code below, in case you want to try it out for yourself. I wrote it in Python 2.7 (it won’t work in Python 3), utilizing the NumPy package. I find it to be very readable, which makes testing out ideas fast and (relatively) intuitive. The program is written to give an idea how SCF calculations work, so it isn’t that efficient – but I only use it for two-electron systems anyway, so who cares!
To use it, download the scf.py file to your computer, and put it in a folder along with s.dat, t.dat, v.dat, and enuc.dat. These are the integral values it needs to run: s.dat contains the overlap matrix, t.dat the kinetic energy matrix, v.dat the potential energy matrix, and enuc.dat the nuclear repulsion energy. You’ll also need eri.dat, which contains the two-electron integrals. All values were taken from Gaussian09 for HeH+ at a bond length of 0.9295 Angstrom with an STO-3G basis set. Once you have those downloaded, fire up your terminal and give it a try! See the example of my terminal below for usage. Just navigate to the folder you have everything in, type python scf.py, and hit Enter!
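If you just want to see the shape of the algorithm before digging into the files, here is a minimal Python 3/NumPy sketch in the same spirit as scf.py, though not the same code: it assumes the integrals are already loaded as NumPy arrays (with a full four-index ERI tensor in chemists' notation), which may differ from how scf.py actually reads the .dat files.

```python
import numpy as np

def scf_energy(S, Hcore, eri, enuc, nocc, max_iter=50, tol=1e-10):
    """Minimal restricted Hartree-Fock loop (a sketch).

    S, Hcore : (n, n) overlap and core-Hamiltonian (T + V) matrices
    eri      : (n, n, n, n) two-electron integrals, chemists' notation (pq|rs)
    enuc     : nuclear repulsion energy
    nocc     : number of doubly occupied orbitals (1 for HeH+)
    """
    # Symmetric orthogonalization, X = S^(-1/2)
    s_vals, s_vecs = np.linalg.eigh(S)
    X = s_vecs @ np.diag(s_vals**-0.5) @ s_vecs.T

    P = np.zeros_like(S)    # density guess: zero (so the first Fock matrix is Hcore)
    E_old = None
    for _ in range(max_iter):
        # Build the Fock matrix from the current density: F = Hcore + 2J - K
        J = np.einsum('pqrs,rs->pq', eri, P)
        K = np.einsum('prqs,rs->pq', eri, P)
        F = Hcore + 2.0 * J - K

        # RHF energy for this density (P = C_occ C_occ^T convention)
        E_total = np.einsum('pq,pq->', P, Hcore + F) + enuc
        if E_old is not None and abs(E_total - E_old) < tol:
            break
        E_old = E_total

        # Diagonalize F in the orthonormalized basis, back-transform, rebuild P
        eps, Cp = np.linalg.eigh(X.T @ F @ X)
        C = X @ Cp
        P = C[:, :nocc] @ C[:, :nocc].T
    return E_total, eps, C
```

The symmetric orthogonalization step (X = S^-1/2) is what turns the generalized eigenvalue problem FC = SCε into an ordinary symmetric one that np.linalg.eigh can handle; the rest of the loop is just "build Fock, diagonalize, rebuild density, repeat until the energy stops changing."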