# TDHF + CIS in Python

So you completed a Hartree-Fock procedure, and you even transformed your two-electron integrals. Now what can you do? We can use those results, specifically the orbital energies (the eigenvalues of the Fock matrix) and the two-electron integrals (transformed using the eigenvectors of the Fock matrix, i.e. the MO coefficients), to calculate some simple response properties, such as excitation energies! Configuration Interaction Singles (CIS) is a subset of the TDHF equations. If you haven’t performed the previous calculations, I’ll include my initial eigenvalues and transformed two-electron integrals at the end. The values are for HeH+ at a bond length of 0.9295 Angstrom with an STO-3G basis set. We create and solve:

$\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{-B} & \mathbf{-A} \\ \end{bmatrix} \begin{bmatrix} \mathbf{X} \\ \mathbf{Y} \\ \end{bmatrix} = \omega \begin{bmatrix} \mathbf{1} & \mathbf{0} \\ \mathbf{0} & \mathbf{1} \\ \end{bmatrix} \begin{bmatrix} \mathbf{X} \\ \mathbf{Y} \\ \end{bmatrix}$

With the elements of $${\mathbf{A}}$$ and $${\mathbf{B}}$$ as

$A_{ia,jb} = \delta_{ij}\delta_{ab}(\epsilon_a - \epsilon_i) + \langle aj\vert \vert ib \rangle$

and

$B_{ia,jb} = \langle ab \vert \vert ij \rangle$

Now, when we transformed our two-electron integrals from the atomic orbital (AO) basis to the molecular orbital (MO) basis, we made the implicit assumption that we were working under a closed-shell approximation. Electrons have spin, and we assumed we had an even number of electrons so that all their spins paired and integrated out. If you’ve taken any general chemistry, you know that we fill each orbital with two electrons: one spin up, and one spin down. (Electrons are fermions, which means no two electrons can occupy the same quantum state; two electrons of opposite spin can, however, occupy the same spatial orbital. Their opposites, bosons, are a little weirder, which is why we can get Bose-Einstein condensates: think of a lot of particles occupying the same place in space.) Anyway, we make the assumption that opposite-spin electrons can share the same spatial orbital, which is reasonable for most calculations, but not strictly true; after all, the definition of an orbital is a one-electron wavefunction. Moving along now…

We need to explicitly account for the spin of the electrons, which means we need to satisfy this transformation:

$\langle pq\vert rs\rangle = (pr\vert qs)\int d\omega_1 d\omega_2 \sigma_p(\omega_1) \sigma_q(\omega_2) \sigma_r(\omega_1) \sigma_s(\omega_2)$

Where $$\omega_1$$ and $$\omega_2$$ are the spin coordinates of electrons 1 and 2, and the $$\sigma$$ are spin functions. Note that $$(pq\vert rs) \neq \langle pq \vert rs\rangle$$; this is because we are moving from chemists’ notation to physicists’ notation, which dominates the literature. The CIS and TDHF equations need only the antisymmetrized “double-bar” integrals, e.g.

$\langle pq\vert \vert rs \rangle =\langle pq\vert rs \rangle - \langle pq\vert sr \rangle$

or, showing the conversion between physicists’ and chemists’ notation more directly

$\langle pq\vert \vert rs \rangle = ( pr\vert qs ) - ( ps\vert qr )$

We can also account for that in our transformation. In Python this gives:
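Something like the sketch below does the job. I’m assuming here that the spatial MO integrals $$(pq\vert rs)$$ live in a 0-based NumPy array `mo` (filled with placeholder random values for illustration; substitute your transformed integrals):

```python
import numpy as np

# Placeholder spatial MO integrals (pq|rs), chemists' notation, 0-based.
# dim = number of spatial orbitals (2 for HeH+/STO-3G).
dim = 2
rng = np.random.default_rng(1)
mo = rng.random((dim, dim, dim, dim))

nso = dim * 2  # number of spin orbitals
spinints = np.zeros((nso, nso, nso, nso))
# 1-based spin-orbital loops: odd = spin up, even = spin down;
# spin orbitals 2k-1 and 2k share spatial orbital k.
for p in range(1, nso + 1):
    for q in range(1, nso + 1):
        for r in range(1, nso + 1):
            for s in range(1, nso + 1):
                # <pq|rs> = (pr|qs), nonzero only if spins match pairwise
                value1 = mo[(p+1)//2 - 1, (r+1)//2 - 1,
                            (q+1)//2 - 1, (s+1)//2 - 1] \
                         * (p % 2 == r % 2) * (q % 2 == s % 2)
                # <pq|sr> = (ps|qr)
                value2 = mo[(p+1)//2 - 1, (s+1)//2 - 1,
                            (q+1)//2 - 1, (r+1)//2 - 1] \
                         * (p % 2 == s % 2) * (q % 2 == r % 2)
                # shift to 0-based storage at the end
                spinints[p-1, q-1, r-1, s-1] = value1 - value2
```

A quick sanity check: the double-bar integrals are antisymmetric under exchange of the last two indices, $$\langle pq\vert\vert rs\rangle = -\langle pq\vert\vert sr\rangle$$, regardless of the values in `mo`.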

Now, because Python begins its indexing at 0 and I began my indexing at 1, I had to make the index change at the end. From now on the indexing starts at 0, so that $$(11\vert 11)$$ becomes $$(00\vert 00)$$ and so on. Now that we have our integrals, we can also spin-adapt our orbital energies. This basically maps spatial orbital 1 (which holds two electrons) onto spin orbitals 1 and 2: odd numbers are spin up, even numbers are spin down. If the spatial orbital energies are found in an array E, they are mapped onto an array fs of double the dimension:
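A minimal sketch of the mapping, with placeholder energies standing in for the real E array:

```python
import numpy as np

dim = 2                       # number of spatial orbitals
E = np.array([-1.5, -0.26])   # placeholder spatial orbital energies

# Each spatial energy appears twice: once for alpha, once for beta spin.
fs = np.zeros(dim * 2)
for i in range(dim * 2):
    fs[i] = E[i // 2]
```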

Simple enough.

Putting it all together then (I hard-coded the initial values for self-containment):
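Since the actual hard-coded HeH+ values are the ones listed at the end of the post, the sketch below uses clearly labeled placeholder data (zeroed integrals, made-up energies) so that it runs standalone; with the real `fs` and `spinints` arrays from the steps above, it computes the CIS and TDHF excitation energies:

```python
import numpy as np

# Placeholder inputs; in practice these come from the spin-adaptation
# steps above (substitute the hard-coded HeH+ values).
nocc = 2                                    # occupied spin orbitals
fs = np.array([-1.0, -1.0, 0.5, 0.5])      # placeholder spin-orbital energies
nso = len(fs)
spinints = np.zeros((nso, nso, nso, nso))  # placeholder <pq||rs>

occ = range(nocc)
virt = range(nocc, nso)
nov = nocc * (nso - nocc)                  # number of single excitations

A = np.zeros((nov, nov))
B = np.zeros((nov, nov))
ia = -1
for i in occ:
    for a in virt:
        ia += 1
        jb = -1
        for j in occ:
            for b in virt:
                jb += 1
                # A_{ia,jb} = d_ij d_ab (e_a - e_i) + <aj||ib>
                A[ia, jb] = (i == j) * (a == b) * (fs[a] - fs[i]) \
                            + spinints[a, j, i, b]
                # B_{ia,jb} = <ab||ij>
                B[ia, jb] = spinints[a, b, i, j]

# CIS: diagonalize A alone (symmetric for real orbitals)
e_cis = np.sort(np.linalg.eigvalsh(A))

# TDHF: (A+B)(A-B)(X-Y) = w^2 (X-Y), so diagonalize the n x n product
w2 = np.linalg.eigvals((A + B) @ (A - B))
e_tdhf = np.sort(np.sqrt(w2.real))

print("First CIS excitation: ", e_cis[0])
print("First TDHF excitation:", e_tdhf[0])
```

With the zeroed placeholder integrals, both methods collapse to the bare orbital-energy gap, which is a handy check that the matrix assembly is wired up correctly.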

Running the code, we find our first excitation energy at the CIS level is E(CIS) = 0.911, and our first excitation at the TDHF level is E(TDHF) = 0.902 (all in Hartrees), in agreement with the literature values in the paper here. A couple of notes about the method. First, the TDHF method can be seen as an extension of the CIS approach: the $$\mathbf{A}$$ matrix in the TDHF working equations is just the CIS matrix. This also means that the TDHF matrix is twice the dimension of the CIS matrix (four times as many elements), which can get very costly. With a little bit of (linear) algebra, we can get the TDHF equations in the form:

$({\mathbf A} + {\mathbf B}) ({\mathbf A} - {\mathbf B})({\mathbf X} - {\mathbf Y}) = \omega^2 ({\mathbf X} - {\mathbf Y})$

which is the same dimension as the CIS matrix. This is why you rarely see CIS in practice: for the same cost you can get a TDHF calculation, which is more accurate because it includes the off-diagonal blocks. I also want to mention that TDDFT takes almost exactly the same form. The only differences are that we replace the two-electron integrals with the corresponding exchange-correlation kernel elements in DFT, and the HF orbital energies with the Kohn-Sham (KS) orbital eigenvalues. Because DFT generally handles correlation better, most single excitations are modeled using TDDFT. Finally, we did solve for all the excitation energies in the minimal basis… but in general we don’t need them all, and it is far too costly to do so. In that case, we can use iterative solvers, like the Davidson method, to pick off just the lowest few eigenvalues. Gaussian09, by default, solves for just the lowest three eigenvalues unless you ask for more.
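To convince yourself that the reduced equation reproduces the positive eigenvalues of the full problem, here is a quick numerical check with made-up $$\mathbf{A}$$ and $$\mathbf{B}$$ blocks (symmetric, with $$\mathbf{A}$$ dominant so all excitation energies come out real and positive):

```python
import numpy as np

# Made-up symmetric A and B blocks for the demonstration only.
rng = np.random.default_rng(2)
n = 4
A = rng.random((n, n)); A = 0.5 * (A + A.T) + n * np.eye(n)
B = rng.random((n, n)); B = 0.5 * (B + B.T)

# Full 2n x 2n TDHF problem [[A, B], [-B, -A]]; eigenvalues come in +/- pairs,
# so keep only the positive roots.
M = np.block([[A, B], [-B, -A]])
w_full = np.linalg.eigvals(M)
w_full = np.sort(w_full.real[w_full.real > 0])

# Reduced n x n problem: the eigenvalues of (A+B)(A-B) are omega^2.
w2 = np.linalg.eigvals((A + B) @ (A - B))
w_reduced = np.sort(np.sqrt(w2.real))

print(np.allclose(w_full, w_reduced))  # the two spectra agree
```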

For your reference, the transformed two-electron integrals $$(pq\vert rs)$$ are given below; the first four columns are the indices p, q, r, s, and the last column is the value:

Let’s say you have your z-component dipole moment integrals (in the molecular orbital basis) collected into a matrix $$Z_{pq} = \langle p \vert \overrightarrow{z} \vert q \rangle$$.

You can compute the z-component of the transition dipole moment from your transition density, which is either the appropriate eigenvector for a given excitation, $$X_{ia}$$ (CIS), or $$X_{ia}$$ and $$Y_{ai}$$ (TDHF). If the dipole integral matrix is $$N \times N$$, you’ll want to reshape your $$X_{ia}$$ and $$Y_{ai}$$ to fit into an $$N \times N$$ matrix as well, instead of $$OV \times 1$$. This gives you an $$N \times N$$ transition density matrix of

$\mathbf{D} = \begin{bmatrix} \mathbf{0} & \mathbf{X} \\ \mathbf{Y} & \mathbf{0} \\ \end{bmatrix}$

The expectation value of the z-component of this transition dipole moment is then the trace of the density dotted into the z-component dipole integrals, e.g.

$\langle z \rangle = Tr(\mathbf{D}\mathbf{Z})$

Then do the same for $$\langle x \rangle$$ and $$\langle y \rangle$$ and sum each component for the total transition dipole.

You’ll also need to repeat the process for each transition you are interested in.

Note that most dipole moment integrals are computed in the atomic orbital basis, so don’t forget to transform your transition density to the AO basis (or the dipole integrals to the MO basis; it doesn’t matter which way, since expectation values are independent of basis; the key is that both matrices are in the same basis).
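As a sketch, here is one way to assemble the transition density and take the trace. The function name and the flattened $$OV$$-vector amplitude layout are my own choices, and both matrices are assumed to already be in a common basis:

```python
import numpy as np

def transition_dipole_z(X, Y, Z, nocc):
    """Contract a CIS/TDHF transition density with the z dipole integrals.

    X, Y: OV-length amplitude vectors for one excitation (pass Y=None for CIS).
    Z:    N x N dipole integral matrix, same basis as the amplitudes.
    nocc: number of occupied orbitals.
    """
    N = Z.shape[0]
    nvirt = N - nocc
    # D = [[0, X], [Y, 0]]: X fills the occ x virt block,
    # Y (TDHF only) fills the virt x occ block.
    D = np.zeros((N, N))
    D[:nocc, nocc:] = np.reshape(X, (nocc, nvirt))
    if Y is not None:
        D[nocc:, :nocc] = np.reshape(Y, (nvirt, nocc))
    # <z> = Tr(D Z)
    return np.trace(D @ Z)
```

Calling this with the x- and y-component integral matrices in place of `Z` gives the other two Cartesian components, and you repeat the whole thing for each excitation of interest.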