I have a new review on (and titled) real-time time-dependent electronic structure theory out now, which has been one of my active research areas over the past five years or so. Like other electronic structure methods, real-time time-dependent (RT-TD) methods seek to understand molecules through a quantum treatment of the motions of their electrons. What sets RT-TD theories and methods apart is that they explore how electrons evolve in time, especially as they respond to perturbations like lasers, magnetic fields, and X-rays. Real-time methods track molecular responses to these perturbations explicitly in time, giving an intuitive and dynamic view of how molecules behave under all sorts of experimental conditions. From this, we can predict and explain why certain molecules make better candidates for solar energy conversion, molecular electronics, nanosensors, and so on.

The truth is that there is a wealth of information that can be obtained by real-time time-dependent methods. Because they capture so much at once, RT-TD methods are often criticized as inefficient and overkill for many predictive applications. In some cases this is true; when computing absorption spectra of small organic molecules, for example, linear response methods are usually a better choice. However, to dismiss all RT-TD methods is a mistake, as they comprise a general technique for studying extremely complex phenomena. RT-TD methods allow complete control over the number and strength of interacting external perturbations (multiple intense lasers, for example) in the study of molecular responses. In the review, we address many of these unique applications, ranging from non-equilibrium solvent dynamics to dynamic hyperpolarizabilities to the emerging real-time quantum electrodynamics (QED), where even the photon field is quantized.

The article was extremely fun to write, and I hope you find something useful and interesting in it. RT-TD comprises a broad field with extremely talented scientists working on its development. You can find the article here.

Consider the time-dependent Schrödinger equation (TDSE),

\[i\hbar\frac{\partial}{\partial t}\psi(t) = H(t)\psi(t)\]

where \(H(t)\) is the time-dependent Hamiltonian and \(\psi(t)\) is the
time-dependent wave function. The goal of the Magnus expansion is to
find a general solution for the time-dependent wave function in the case
where \(H\) is time-dependent, and, more crucially, when \(H\) does not
commute with itself at different times, i.e. when
\(\left[H(t_1),H(t_2)\right] \neq 0\). In the following we will follow
closely the notation of Blanes et al. (2010).

First, for simplicity we redefine \(\tilde{H}(t) \equiv \frac{-i}{\hbar}H(t)\) and
introduce a scalar \(\lambda = 1\) as a bookkeeping device, so that

\[\frac{\partial}{\partial t}\psi(t) = \lambda\tilde{H}(t)\psi(t)\]

At the heart of the Magnus expansion is the idea of solving the TDSE by
using the quantum propagator \(U(t,t_0)\) that connects wave functions at
different times, i.e. \(\psi(t) = U(t,t_0)\psi(t_0)\). Furthermore, the
Magnus expansion assumes that \(U(t,t_0)\) can be represented as an
exponential, \(U(t,t_0) = \text{exp} \left( \Omega(t,t_0) \right)\). This
yields the modified TDSE

\[\frac{\partial}{\partial t}U(t,t_0) = \lambda\tilde{H}(t)U(t,t_0), \qquad U(t_0,t_0) = I\]

If \(\tilde{H}\) and \(U\) were scalars, the solution would simply be
\(U(t,t_0) = \text{exp}\left(\lambda\int_{t_0}^{t}\tilde{H}(t_1)dt_1\right)\).

However, if \(H\) and \(U\) are matrices this is not necessarily true. In
other words, for a given matrix \(A\) the following expression does not
necessarily hold:

\[\frac{\partial}{\partial t } \left( \text{exp}\left(A(t)\right) \right) = \left(\frac{\partial}{\partial t }A(t) \right)\text{exp}\left(A(t)\right) = \text{exp}\left(A(t)\right) \left(\frac{\partial}{\partial t } A(t) \right)\]

because the matrix \(A\) and its derivatives do not necessarily commute.
Instead, Magnus proved that in general \(\Omega (t, t_0)\)
satisfies

\[\frac{\partial}{\partial t}\Omega(t,t_0) = \sum_{k=0}^{\infty}\frac{B_k}{k!}\,\mathrm{ad}_{\Omega}^{k}\left(\lambda\tilde{H}(t)\right)\]

where \(B_k\) are the Bernoulli numbers and \(\mathrm{ad}_{\Omega}^{k}\)
denotes the \(k\)-fold nested commutator, with
\(\mathrm{ad}_{\Omega}^{0}(\tilde{H}) = \tilde{H}\) and
\(\mathrm{ad}_{\Omega}^{k}(\tilde{H}) = \left[\Omega,\mathrm{ad}_{\Omega}^{k-1}(\tilde{H})\right]\).
This equation may be solved by integration and iterative substitution of
\(\Omega(t,t_0)\). While it may appear that we are worse off than when we
started, collecting like powers of \(\lambda\) (and setting \(\lambda = 1\))
allows us to obtain a power-series expansion for \(\Omega(t,t_0)\),

\[\begin{align}
\Omega(t,t_0) = \int\limits_{t_0}^{t} \tilde{H}_1 dt_1
&+ \frac{1}{2}\int\limits_{t_0}^{t}\int\limits_{t_0}^{t_1} \left[\tilde{H}_1,\tilde{H}_2\right] dt_2 dt_1 \\
&+ \frac{1}{6}\int\limits_{t_0}^{t}\int\limits_{t_0}^{t_1}\int\limits_{t_0}^{t_2} \left(\left[\tilde{H}_1,\left[\tilde{H}_2,\tilde{H}_3\right]\right] + \left[\left[\tilde{H}_1,\tilde{H}_2\right],\tilde{H}_3\right]\right) dt_3 dt_2 dt_1 + \cdots
\end{align}\]

This is the Magnus expansion, and here we have given up to the
third-order terms. We have also made the notational simplification that
\(\tilde{H}_k = \tilde{H}(t_k)\). This is the basis for nearly all
numerical methods to integrate the many-body TDSE in molecular physics.
Each subsequent order in the Magnus expansion is a correction that
accounts for the proper time-ordering of the Hamiltonian.

The Magnus expansion immediately suggests a route to
many numerical integrators. The simplest would be to approximate the
first term by

\[\int\limits_{t}^{t + \Delta t} \tilde{H}_1 dt_1 \approx \Delta t \tilde{H}(t)\]

leading to a forward-Euler-like time integrator of

\[\psi(t + \Delta t) = \text{exp}\left(\Delta t \tilde{H}(t)\right)\psi(t)\]

which we can re-write as

\[\psi(t_{k+1}) = \text{exp}\left(\Delta t \tilde{H}(t_{k})\right)\psi(t_{k})\]

where subscript \(k\) gives the node of the time-step stencil. This gives
a first-order method with error \(\mathcal{O}(\Delta t)\). A more accurate
second-order method can be constructed by approximating the first term
in the Magnus expansion by the midpoint rule, leading to an
\(\mathcal{O}({\Delta t}^2)\) time integrator

\[\psi(t_{k+1}) = \text{exp}\left(\Delta t \tilde{H}(t_{k+1/2})\right)\psi(t_{k})\]

Modifying the stencil to eliminate the need to evaluate the Hamiltonian
at fractional time steps (e.g. change time step to \(2 \Delta t\)) leads
to the modified midpoint unitary transformation (MMUT) method

\[\psi(t_{k+1}) = \text{exp}\left(2 \Delta t \tilde{H}(t_{k})\right)\psi(t_{k-1})\]

which is a leapfrog-type unitary integrator. Note that the midpoint
method assumes \(\tilde{H}\) is linear over its time interval, so that the
higher-order terms in the expansion (those containing the commutators)
vanish in this approximation. There are many other types of integrators
based on the Magnus expansion that can be found in the
literature. The key point for all of these
integrators is that they are symplectic, meaning they preserve
phase-space relationships. This has the practical effect of conserving
energy (within some error bound) in long-time dynamics, whereas
non-symplectic methods such as Runge-Kutta will experience energetic
“drift” over long times. A final note: in each of these schemes it is
necessary to evaluate the exponential of the Hamiltonian. In real-time
methods, this requires computing a matrix exponential. This is not a
trivial task, and, aside from the construction of the Hamiltonian
itself, is often the most expensive step in the numerical solution of
the TDSE. However, many elegant solutions to the construction of the
matrix exponential can be found in the literature.
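To make these schemes concrete, here is a minimal sketch of the MMUT leapfrog applied to an arbitrary two-level model Hamiltonian (the model, field parameters, and names are mine, not anything from a particular code), using SciPy's matrix exponential. With \(\hbar = 1\), each step applies the unitary \(\text{exp}(-2i\Delta t H(t_k))\), so the norm of the wave function is conserved:

```python
import numpy as np
from scipy.linalg import expm

def H(t, delta=1.0, V=0.1, w=1.0):
    """Arbitrary two-level model Hamiltonian driven by an oscillating field."""
    c = V * np.cos(w * t)
    return np.array([[0.0, c], [c, delta]])

dt = 0.01
psi_prev = np.array([1.0, 0.0], dtype=complex)   # psi(t_0)
# seed the leapfrog with one first-order (forward-Euler-like) step:
psi_curr = expm(-1j * dt * H(0.0)) @ psi_prev    # psi(t_1)

for k in range(1, 1000):
    # MMUT step: psi(t_{k+1}) = exp(-2i*dt*H(t_k)) psi(t_{k-1})
    psi_next = expm(-2j * dt * H(k * dt)) @ psi_prev
    psi_prev, psi_curr = psi_curr, psi_next

# each step is unitary, so the norm stays 1 (to machine precision)
norm = np.linalg.norm(psi_curr)
```

Checking the norm at the end is a quick sanity test that the propagation really is unitary; a Runge-Kutta scheme at the same step size would show slow drift instead.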

References

Blanes, S., Casas, F., Oteo, J.A. and Ros, J., 2010. A pedagogical approach to the Magnus expansion. European Journal of Physics, 31(4), p.907.

Magnus, W., 1954. On the exponential solution of differential equations for a linear operator. Communications on pure and applied mathematics, 7(4), pp.649-673.

In quantum chemistry, we are often interested in evaluating integrals
over Gaussian basis functions. Here I am going to take much of the
theory for granted and talk about how one might actually implement the
integrals for quantum chemistry by using recursion relationships with
Hermite Gaussians. Our goal is to have clean, readable code. I’m writing
this for people who are comfortable with the mathematics behind
Gaussian integrals but want to see a readable computer implementation.
I’ll talk a bit about some computational considerations at the end, but
my goal is to convert equations to code. In fact, I’ve tried to
structure the equations and the code in such a way that the two look
very similar.

For a very good overview of integral evaluation, please see:

Helgaker, Trygve, and Peter R. Taylor. “Gaussian basis sets and
molecular integrals.” Modern Electronic Structure (1995).

I will try and follow the notation used in the above reference.

Mathematical preliminaries

Let’s start with some of the basics. First, we have our 3D Gaussian
functions

\[g_{ijk}(\mathbf{r};\alpha,\mathbf{A}) = x_A^i\, y_A^j\, z_A^k\, \hbox{exp}(-\alpha r_A^2)\]

with orbital exponent \(\alpha\), electronic coordinates \(\mathbf{r}\),
origin \(\mathbf{A}\), and

\[\mathbf{r}_A = \mathbf{r} - \mathbf{A}\]

also, \(i,j,k\) are the angular quantum numbers (e.g. \(i=0\) is \(s\)-type,
\(i=1\) is \(p\)-type, etc.) Cartesian Gaussians are separable in 3D along
\(x,y,z\), so the product of two 1D Gaussians \(a\) and \(b\) can be
expanded in Hermite Gaussians,

\[g_i(x;\alpha,A_x)\,g_j(x;\beta,B_x) = \sum_{t=0}^{i+j} E_t^{ij}\,\Lambda_t(x;p,P_x)\]

where \(E_{t}^{ij}\)
are expansion coefficients (to be determined recursively),
\(p = \alpha + \beta\), and \(\Lambda_t\) is a Hermite Gaussian. The
overlap of the two Gaussians \(a\) and \(b\) then has a simple expression
that kills the sum via the Kronecker delta \(\delta_{t0}\),

\[\int\limits_{-\infty}^{\infty} g_i(x;\alpha,A_x)\,g_j(x;\beta,B_x)\,dx = E_0^{ij}\sqrt{\frac{\pi}{p}}\]

since \(\int\Lambda_t\,dx = \delta_{t0}\sqrt{\pi/p}\). It can be shown that
the expansion coefficients can be defined using the following recursive
definitions,

\[\begin{align}
E_t^{ij} &= \frac{1}{2p}E_{t-1}^{i,j-1} + \frac{qQ_x}{\beta}E_{t}^{i,j-1} + (t+1)E_{t+1}^{i,j-1} \\
E_t^{ij} &= \frac{1}{2p}E_{t-1}^{i-1,j} - \frac{qQ_x}{\alpha}E_{t}^{i-1,j} + (t+1)E_{t+1}^{i-1,j} \\
E_0^{00} &= \hbox{exp}\left(-qQ_x^2\right) \\
E_t^{ij} &= 0 \quad \hbox{if} \quad t < 0 \hbox{ or } t > i+j
\end{align}\]

with \(q = \alpha\beta/p\) and \(Q_x = A_x - B_x\). The first equation
gives us a way to reduce the index \(j\) and the second gives us a way to
reduce index \(i\), so that we can get to the third equation, which is our
base case. The last equation tells us what to do if we go out of index
bounds.

Overlap integrals

The first thing we need to do is implement a function E which returns
our expansion coefficients \(E_t^{ij}\). Aside from angular momentum \(i\)
and \(j\) from the Gaussian functions, we also need the distance between
Gaussians \(Q_x\) and the orbital exponent coefficients \(\alpha\) and
\(\beta\) as inputs.
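Following the recurrences above directly, a recursive implementation might look like this (the argument ordering is my own choice):

```python
import numpy as np

def E(i, j, t, Qx, a, b):
    """Recursive Hermite Gaussian expansion coefficients E_t^{ij}.
       a, b:  orbital exponents alpha and beta
       Qx:    distance A_x - B_x between the Gaussian centers
    """
    p = a + b
    q = a * b / p
    if (t < 0) or (t > (i + j)):
        return 0.0                   # out of bounds for t
    elif i == j == t == 0:
        return np.exp(-q * Qx * Qx)  # base case: K_AB
    elif j == 0:
        # decrement index i
        return (1.0 / (2 * p)) * E(i - 1, j, t - 1, Qx, a, b) - \
               (q * Qx / a) * E(i - 1, j, t, Qx, a, b) + \
               (t + 1) * E(i - 1, j, t + 1, Qx, a, b)
    else:
        # decrement index j
        return (1.0 / (2 * p)) * E(i, j - 1, t - 1, Qx, a, b) + \
               (q * Qx / b) * E(i, j - 1, t, Qx, a, b) + \
               (t + 1) * E(i, j - 1, t + 1, Qx, a, b)
```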

This is simple enough! So for a 1D overlap between two Gaussians we
would just need to evaluate \(E_0^{ij}\) and multiply it by
\(\sqrt{\frac{\pi}{p}}\). Overlap integrals in 3D are just a product of
the \(x,y,z\) 1D overlaps. We could imagine a 3D overlap function like
so
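One possible sketch (the helper E is repeated so the snippet runs standalone):

```python
import numpy as np

def E(i, j, t, Qx, a, b):
    # Hermite expansion coefficients (as described above), repeated here
    # so this snippet runs standalone
    p = a + b
    q = a * b / p
    if (t < 0) or (t > (i + j)):
        return 0.0
    elif i == j == t == 0:
        return np.exp(-q * Qx * Qx)
    elif j == 0:
        return (1.0 / (2 * p)) * E(i - 1, j, t - 1, Qx, a, b) - \
               (q * Qx / a) * E(i - 1, j, t, Qx, a, b) + \
               (t + 1) * E(i - 1, j, t + 1, Qx, a, b)
    else:
        return (1.0 / (2 * p)) * E(i, j - 1, t - 1, Qx, a, b) + \
               (q * Qx / b) * E(i, j - 1, t, Qx, a, b) + \
               (t + 1) * E(i, j - 1, t + 1, Qx, a, b)

def overlap(a, lmn1, A, b, lmn2, B):
    """Overlap between two unnormalized Cartesian Gaussian primitives.
       lmn1, lmn2: tuples (l, m, n) of angular momenta
       A, B:       centers, array-like of length 3
    """
    l1, m1, n1 = lmn1
    l2, m2, n2 = lmn2
    S1 = E(l1, l2, 0, A[0] - B[0], a, b)  # x direction
    S2 = E(m1, m2, 0, A[1] - B[1], a, b)  # y direction
    S3 = E(n1, n2, 0, A[2] - B[2], a, b)  # z direction
    return S1 * S2 * S3 * np.power(np.pi / (a + b), 1.5)
```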

Note that we are using the NumPy package in order to take advantage of
the definitions of \(\pi\) and the fractional power to the \(3/2\). The
above two functions overlap and E are enough to get us the overlap
between two Gaussian functions (primitives), but most basis functions
are contracted, meaning they are the sum of multiple Gaussian
primitives. It is not too difficult to account for this, and we can
finally wrap up our evaluation of overlap integrals with a function
S(a,b) which returns the overlap integral between two contracted
Gaussian functions.
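A sketch of such a wrapper, assuming `BasisFunction`-like objects with `.exps`, `.coefs`, `.norm`, `.shell`, and `.origin` attributes (the primitive routines are repeated so the snippet runs standalone):

```python
import numpy as np

def E(i, j, t, Qx, a, b):
    # Hermite expansion coefficients, repeated for standalone use
    p = a + b
    q = a * b / p
    if (t < 0) or (t > (i + j)):
        return 0.0
    elif i == j == t == 0:
        return np.exp(-q * Qx * Qx)
    elif j == 0:
        return (1.0 / (2 * p)) * E(i - 1, j, t - 1, Qx, a, b) - \
               (q * Qx / a) * E(i - 1, j, t, Qx, a, b) + \
               (t + 1) * E(i - 1, j, t + 1, Qx, a, b)
    else:
        return (1.0 / (2 * p)) * E(i, j - 1, t - 1, Qx, a, b) + \
               (q * Qx / b) * E(i, j - 1, t, Qx, a, b) + \
               (t + 1) * E(i, j - 1, t + 1, Qx, a, b)

def overlap(a, lmn1, A, b, lmn2, B):
    # primitive overlap, repeated for standalone use
    l1, m1, n1 = lmn1
    l2, m2, n2 = lmn2
    return E(l1, l2, 0, A[0] - B[0], a, b) * \
           E(m1, m2, 0, A[1] - B[1], a, b) * \
           E(n1, n2, 0, A[2] - B[2], a, b) * \
           np.power(np.pi / (a + b), 1.5)

def S(a, b):
    """Overlap between two contracted Gaussians: a sum over primitive
       overlaps weighted by normalization and contraction coefficient."""
    s = 0.0
    for ia, ca in enumerate(a.coefs):
        for ib, cb in enumerate(b.coefs):
            s += a.norm[ia] * b.norm[ib] * ca * cb * \
                 overlap(a.exps[ia], a.shell, a.origin,
                         b.exps[ib], b.shell, b.origin)
    return s

# quick check with a single normalized s primitive (exponent 1.0):
g = type("BF", (), {})()   # minimal stand-in for a BasisFunction object
g.shell, g.origin = (0, 0, 0), np.array([0.0, 0.0, 0.0])
g.exps, g.coefs = [1.0], [1.0]
g.norm = [np.power(2.0 / np.pi, 0.75)]   # s-primitive normalization
s_self = S(g, g)   # overlap of a normalized function with itself
```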

Basically, this is just a sum over primitive overlaps, weighted by
normalization and coefficient. A word is in order about the arguments,
however. In order to keep the number of arguments we pass to our
functions manageable, we have created BasisFunction objects that contain
all the relevant data for the basis function, including exponents,
normalization, etc. A BasisFunction class looks like
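Here is one minimal version. This sketch normalizes each primitive but, for simplicity, omits the extra overall contraction normalization a production class would include; `fact2` is defined by hand so that `fact2(-1) = 1`, the convention the normalization formula needs:

```python
import numpy as np

def fact2(n):
    """Double factorial, with the convention fact2(-1) = 1."""
    return 1 if n <= 0 else n * fact2(n - 2)

class BasisFunction(object):
    """A contracted Gaussian basis function.
       origin: center of the basis function, array-like of length 3
       shell:  tuple (l, m, n) of Cartesian angular momenta
       exps:   list of primitive exponents
       coefs:  list of contraction coefficients
    """
    def __init__(self, origin=(0.0, 0.0, 0.0), shell=(0, 0, 0),
                 exps=None, coefs=None):
        self.origin = np.asarray(origin)
        self.shell = shell
        self.exps = exps
        self.coefs = coefs
        self.normalize()

    def normalize(self):
        """Normalization constant of each primitive; for an s function
           this reduces to (2*alpha/pi)**(3/4)."""
        l, m, n = self.shell
        L = l + m + n
        self.norm = np.sqrt(
            np.power(2, 2 * L + 1.5) *
            np.power(np.asarray(self.exps), L + 1.5) /
            (fact2(2 * l - 1) * fact2(2 * m - 1) * fact2(2 * n - 1) *
             np.power(np.pi, 1.5)))

# demo: a single s primitive with exponent 1.0
b = BasisFunction(exps=[1.0], coefs=[1.0])
```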

So, for example if we had a STO-3G Hydrogen 1s at origin (1.0, 2.0,
3.0), we could create a basis function object for it like so
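For instance, with the standard STO-3G exponents and contraction coefficients for hydrogen (the class sketch from above is repeated so this snippet runs standalone):

```python
import numpy as np

def fact2(n):
    # double factorial with fact2(-1) = 1, repeated for standalone use
    return 1 if n <= 0 else n * fact2(n - 2)

class BasisFunction(object):
    # minimal BasisFunction (as sketched above), repeated for standalone use
    def __init__(self, origin=(0.0, 0.0, 0.0), shell=(0, 0, 0),
                 exps=None, coefs=None):
        self.origin = np.asarray(origin)
        self.shell = shell
        self.exps = exps
        self.coefs = coefs
        self.normalize()

    def normalize(self):
        l, m, n = self.shell
        L = l + m + n
        self.norm = np.sqrt(
            np.power(2, 2 * L + 1.5) *
            np.power(np.asarray(self.exps), L + 1.5) /
            (fact2(2 * l - 1) * fact2(2 * m - 1) * fact2(2 * n - 1) *
             np.power(np.pi, 1.5)))

# STO-3G hydrogen 1s at origin (1.0, 2.0, 3.0)
myOrigin = [1.0, 2.0, 3.0]
myShell = (0, 0, 0)                                  # s-type: l = m = n = 0
myExps = [3.42525091, 0.62391373, 0.16885540]
myCoefs = [0.15432897, 0.53532814, 0.44463454]
a = BasisFunction(origin=myOrigin, shell=myShell, exps=myExps, coefs=myCoefs)
```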

Where we used the EMSL STO-3G definition

So S(a,a) evaluates to 1.0, since the overlap of an (appropriately
normalized) basis function with itself is one.

Kinetic energy integrals

Having finished the overlap integrals, we move on to the kinetic energy
integrals, which can be written in terms of overlap integrals:

\[T_{ab} = \beta\left[2(l_2+m_2+n_2)+3\right]S_{ab} - 2\beta^2\left(S_{ab}^{+2x} + S_{ab}^{+2y} + S_{ab}^{+2z}\right) - \frac{1}{2}\left[l_2(l_2-1)S_{ab}^{-2x} + m_2(m_2-1)S_{ab}^{-2y} + n_2(n_2-1)S_{ab}^{-2z}\right]\]

where \(\beta\) and \((l_2,m_2,n_2)\) are the exponent and angular momentum
of Gaussian \(b\), and \(S_{ab}^{\pm 2x}\) denotes an overlap integral with
the \(x\) angular momentum on \(b\) shifted by \(\pm 2\) (similarly for
\(y\) and \(z\)).

For a 3D primitive, we can form a kinetic function analogous to
overlap,
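A sketch of that function, translating the term-by-term expression above (the E and overlap helpers are repeated so the snippet runs standalone; note that the out-of-bounds rule for \(E_t^{ij}\) automatically zeroes the shifted overlaps that would have negative angular momentum):

```python
import numpy as np

def E(i, j, t, Qx, a, b):
    # Hermite expansion coefficients, repeated for standalone use
    p = a + b
    q = a * b / p
    if (t < 0) or (t > (i + j)):
        return 0.0
    elif i == j == t == 0:
        return np.exp(-q * Qx * Qx)
    elif j == 0:
        return (1.0 / (2 * p)) * E(i - 1, j, t - 1, Qx, a, b) - \
               (q * Qx / a) * E(i - 1, j, t, Qx, a, b) + \
               (t + 1) * E(i - 1, j, t + 1, Qx, a, b)
    else:
        return (1.0 / (2 * p)) * E(i, j - 1, t - 1, Qx, a, b) + \
               (q * Qx / b) * E(i, j - 1, t, Qx, a, b) + \
               (t + 1) * E(i, j - 1, t + 1, Qx, a, b)

def overlap(a, lmn1, A, b, lmn2, B):
    # primitive overlap, repeated for standalone use
    l1, m1, n1 = lmn1
    l2, m2, n2 = lmn2
    return E(l1, l2, 0, A[0] - B[0], a, b) * \
           E(m1, m2, 0, A[1] - B[1], a, b) * \
           E(n1, n2, 0, A[2] - B[2], a, b) * \
           np.power(np.pi / (a + b), 1.5)

def kinetic(a, lmn1, A, b, lmn2, B):
    """Kinetic-energy integral between two unnormalized primitives,
       written in terms of shifted overlap integrals."""
    l1, m1, n1 = lmn1
    l2, m2, n2 = lmn2
    term0 = b * (2 * (l2 + m2 + n2) + 3) * \
            overlap(a, (l1, m1, n1), A, b, (l2, m2, n2), B)
    term1 = -2 * np.power(b, 2) * \
            (overlap(a, (l1, m1, n1), A, b, (l2 + 2, m2, n2), B) +
             overlap(a, (l1, m1, n1), A, b, (l2, m2 + 2, n2), B) +
             overlap(a, (l1, m1, n1), A, b, (l2, m2, n2 + 2), B))
    term2 = -0.5 * \
            (l2 * (l2 - 1) * overlap(a, (l1, m1, n1), A, b, (l2 - 2, m2, n2), B) +
             m2 * (m2 - 1) * overlap(a, (l1, m1, n1), A, b, (l2, m2 - 2, n2), B) +
             n2 * (n2 - 1) * overlap(a, (l1, m1, n1), A, b, (l2, m2, n2 - 2), B))
    return term0 + term1 + term2
```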

and for contracted Gaussians we make our final function T(a,b)
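The wrapper mirrors S(a,b) exactly; a sketch (with the primitive routines repeated so it runs standalone):

```python
import numpy as np

def E(i, j, t, Qx, a, b):
    # Hermite expansion coefficients, repeated for standalone use
    p = a + b
    q = a * b / p
    if (t < 0) or (t > (i + j)):
        return 0.0
    elif i == j == t == 0:
        return np.exp(-q * Qx * Qx)
    elif j == 0:
        return (1.0 / (2 * p)) * E(i - 1, j, t - 1, Qx, a, b) - \
               (q * Qx / a) * E(i - 1, j, t, Qx, a, b) + \
               (t + 1) * E(i - 1, j, t + 1, Qx, a, b)
    else:
        return (1.0 / (2 * p)) * E(i, j - 1, t - 1, Qx, a, b) + \
               (q * Qx / b) * E(i, j - 1, t, Qx, a, b) + \
               (t + 1) * E(i, j - 1, t + 1, Qx, a, b)

def overlap(a, lmn1, A, b, lmn2, B):
    # primitive overlap, repeated for standalone use
    l1, m1, n1 = lmn1
    l2, m2, n2 = lmn2
    return E(l1, l2, 0, A[0] - B[0], a, b) * \
           E(m1, m2, 0, A[1] - B[1], a, b) * \
           E(n1, n2, 0, A[2] - B[2], a, b) * \
           np.power(np.pi / (a + b), 1.5)

def kinetic(a, lmn1, A, b, lmn2, B):
    # primitive kinetic-energy integral, repeated for standalone use
    l1, m1, n1 = lmn1
    l2, m2, n2 = lmn2
    term0 = b * (2 * (l2 + m2 + n2) + 3) * \
            overlap(a, (l1, m1, n1), A, b, (l2, m2, n2), B)
    term1 = -2 * np.power(b, 2) * \
            (overlap(a, (l1, m1, n1), A, b, (l2 + 2, m2, n2), B) +
             overlap(a, (l1, m1, n1), A, b, (l2, m2 + 2, n2), B) +
             overlap(a, (l1, m1, n1), A, b, (l2, m2, n2 + 2), B))
    term2 = -0.5 * \
            (l2 * (l2 - 1) * overlap(a, (l1, m1, n1), A, b, (l2 - 2, m2, n2), B) +
             m2 * (m2 - 1) * overlap(a, (l1, m1, n1), A, b, (l2, m2 - 2, n2), B) +
             n2 * (n2 - 1) * overlap(a, (l1, m1, n1), A, b, (l2, m2, n2 - 2), B))
    return term0 + term1 + term2

def T(a, b):
    """Kinetic-energy integral between two contracted Gaussians."""
    t = 0.0
    for ia, ca in enumerate(a.coefs):
        for ib, cb in enumerate(b.coefs):
            t += a.norm[ia] * b.norm[ib] * ca * cb * \
                 kinetic(a.exps[ia], a.shell, a.origin,
                         b.exps[ib], b.shell, b.origin)
    return t

# quick check: a single normalized s primitive with exponent alpha = 1.0
# has kinetic-energy expectation value 3*alpha/2 = 1.5
g = type("BF", (), {})()   # minimal stand-in for a BasisFunction object
g.shell, g.origin = (0, 0, 0), np.array([0.0, 0.0, 0.0])
g.exps, g.coefs = [1.0], [1.0]
g.norm = [np.power(2.0 / np.pi, 0.75)]
t_demo = T(g, g)
```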

Nuclear attraction integrals

The last one-body integral I want to consider here is the nuclear
attraction integrals. These differ from the overlap and kinetic energy
integrals in that the nuclear attraction operator \(1/r_C\) is Coulombic,
meaning we cannot easily factor the integral into Cartesian components
\(x,y,z\).

To evaluate these integrals, we need to set up an auxiliary Hermite
Coulomb integral \(R^n_{tuv}(p,\mathbf{P},\mathbf{C})\) that handles the
Coulomb interaction between a Gaussian charge distribution centered at
\(\mathbf{P}\) and a nucleus centered at \(\mathbf{C}\). The Hermite Coulomb
integral, like its counterpart \(E_t^{ij}\), is defined recursively:

\[\begin{align}
R^{n}_{t+1,u,v} &= t R^{n+1}_{t-1,u,v} + X_{PC}R^{n+1}_{t,u,v} \\
R^{n}_{t,u+1,v} &= u R^{n+1}_{t,u-1,v} + Y_{PC}R^{n+1}_{t,u,v} \\
R^{n}_{t,u,v+1} &= v R^{n+1}_{t,u,v-1} + Z_{PC}R^{n+1}_{t,u,v} \\
R^{n}_{0,0,0} &= (-2p)^n F_n (p R_{PC}^2) \end{align}\]

where \(F_n(T)\) is the Boys function

\[F_n(T) = \int_0^1 \hbox{exp}(-Tx^2)x^{2n}dx\]

which is a special case of the Kummer confluent hypergeometric function, \(_1F_1(a,b,x)\):

\[F_n(T) = \frac{{}_1F_1\left(n+\frac{1}{2},\,n+\frac{3}{2},\,-T\right)}{2n+1}\]

which is convenient for us, since SciPy has an implementation of \(_1F_1\) as
a part of scipy.special. So for R we can code up the recursion like
so
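One possible sketch, working down from \(t\), \(u\), and \(v\) toward the base case (the Boys function, defined just below, is included here so the snippet runs standalone):

```python
import numpy as np
from scipy.special import hyp1f1

def boys(n, T):
    # Boys function via 1F1 (defined in full just below)
    return hyp1f1(n + 0.5, n + 1.5, -T) / (2.0 * n + 1.0)

def R(t, u, v, n, p, PCx, PCy, PCz, RPC):
    """Auxiliary Hermite Coulomb integrals R^n_{tuv}.
       p:               composite exponent
       PCx, PCy, PCz:   Cartesian components of P - C
       RPC:             distance |P - C|
    """
    T = p * RPC * RPC
    val = 0.0
    if t == u == v == 0:
        val += np.power(-2 * p, n) * boys(n, T)       # base case
    elif t == u == 0:
        if v > 1:
            val += (v - 1) * R(t, u, v - 2, n + 1, p, PCx, PCy, PCz, RPC)
        val += PCz * R(t, u, v - 1, n + 1, p, PCx, PCy, PCz, RPC)
    elif t == 0:
        if u > 1:
            val += (u - 1) * R(t, u - 2, v, n + 1, p, PCx, PCy, PCz, RPC)
        val += PCy * R(t, u - 1, v, n + 1, p, PCx, PCy, PCz, RPC)
    else:
        if t > 1:
            val += (t - 1) * R(t - 2, u, v, n + 1, p, PCx, PCy, PCz, RPC)
        val += PCx * R(t - 1, u, v, n + 1, p, PCx, PCy, PCz, RPC)
    return val
```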

and we can define our boys(n,T) function as
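Using the \(_1F_1\) relation above, this is a one-liner:

```python
from scipy.special import hyp1f1

def boys(n, T):
    """Boys function F_n(T), evaluated via the Kummer confluent
       hypergeometric function 1F1 from scipy.special."""
    return hyp1f1(n + 0.5, n + 1.5, -T) / (2.0 * n + 1.0)
```

Note that \(F_n(0) = 1/(2n+1)\), which makes for a quick sanity check.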

There are other definitions of the Boys function of course, in case you
do not want to use the SciPy built-in. Note that R requires
knowledge of the composite center \(\mathbf{P}\) from two Gaussians
centered at \(\mathbf{A}\) and \(\mathbf{B}\). We can determine \(\mathbf{P}\)
using the Gaussian product center rule

\[P = \frac{\alpha A + \beta B}{\alpha + \beta}\]

which is very simply coded up as
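For example:

```python
import numpy as np

def gaussian_product_center(a, A, b, B):
    """Composite center P of two Gaussians with exponents a, b at A, B."""
    return (a * np.asarray(A) + b * np.asarray(B)) / (a + b)
```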

Now that we have the Coulomb auxiliary Hermite integrals
\(R^{n}_{tuv}\), we can form the nuclear attraction integrals with respect
to a given nucleus centered at \(\mathbf{C}\), \(V_{ab}(C)\), via the
expression

\[V_{ab}(C) = \frac{2\pi}{p}\sum_{tuv}E_t^{ij}E_u^{kl}E_v^{mn}\,R^{0}_{tuv}(p,\mathbf{P},\mathbf{C})\]

And, just like all the other routines, we can wrap it up to treat
contracted Gaussians like so:
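A sketch of both the primitive integral and its contracted wrapper (earlier helpers repeated so the snippet runs standalone; the nuclear charge is deliberately *not* included here, per the note below):

```python
import numpy as np
from scipy.special import hyp1f1

def boys(n, T):
    return hyp1f1(n + 0.5, n + 1.5, -T) / (2.0 * n + 1.0)

def E(i, j, t, Qx, a, b):
    # Hermite expansion coefficients, repeated for standalone use
    p = a + b
    q = a * b / p
    if (t < 0) or (t > (i + j)):
        return 0.0
    elif i == j == t == 0:
        return np.exp(-q * Qx * Qx)
    elif j == 0:
        return (1.0 / (2 * p)) * E(i - 1, j, t - 1, Qx, a, b) - \
               (q * Qx / a) * E(i - 1, j, t, Qx, a, b) + \
               (t + 1) * E(i - 1, j, t + 1, Qx, a, b)
    else:
        return (1.0 / (2 * p)) * E(i, j - 1, t - 1, Qx, a, b) + \
               (q * Qx / b) * E(i, j - 1, t, Qx, a, b) + \
               (t + 1) * E(i, j - 1, t + 1, Qx, a, b)

def R(t, u, v, n, p, PCx, PCy, PCz, RPC):
    # Hermite Coulomb integrals, repeated for standalone use
    T = p * RPC * RPC
    val = 0.0
    if t == u == v == 0:
        val += np.power(-2 * p, n) * boys(n, T)
    elif t == u == 0:
        if v > 1:
            val += (v - 1) * R(t, u, v - 2, n + 1, p, PCx, PCy, PCz, RPC)
        val += PCz * R(t, u, v - 1, n + 1, p, PCx, PCy, PCz, RPC)
    elif t == 0:
        if u > 1:
            val += (u - 1) * R(t, u - 2, v, n + 1, p, PCx, PCy, PCz, RPC)
        val += PCy * R(t, u - 1, v, n + 1, p, PCx, PCy, PCz, RPC)
    else:
        if t > 1:
            val += (t - 1) * R(t - 2, u, v, n + 1, p, PCx, PCy, PCz, RPC)
        val += PCx * R(t - 1, u, v, n + 1, p, PCx, PCy, PCz, RPC)
    return val

def gaussian_product_center(a, A, b, B):
    return (a * np.asarray(A) + b * np.asarray(B)) / (a + b)

def nuclear_attraction(a, lmn1, A, b, lmn2, B, C):
    """Nuclear attraction integral between two unnormalized primitives
       and a nucleus at C (nuclear charge not included)."""
    l1, m1, n1 = lmn1
    l2, m2, n2 = lmn2
    p = a + b
    P = gaussian_product_center(a, A, b, B)
    RPC = np.linalg.norm(P - np.asarray(C))
    val = 0.0
    for t in range(l1 + l2 + 1):
        for u in range(m1 + m2 + 1):
            for v in range(n1 + n2 + 1):
                val += E(l1, l2, t, A[0] - B[0], a, b) * \
                       E(m1, m2, u, A[1] - B[1], a, b) * \
                       E(n1, n2, v, A[2] - B[2], a, b) * \
                       R(t, u, v, 0, p,
                         P[0] - C[0], P[1] - C[1], P[2] - C[2], RPC)
    return (2.0 * np.pi / p) * val

def V(a, b, C):
    """Wrap nuclear_attraction for contracted BasisFunction-like objects."""
    v = 0.0
    for ia, ca in enumerate(a.coefs):
        for ib, cb in enumerate(b.coefs):
            v += a.norm[ia] * b.norm[ib] * ca * cb * \
                 nuclear_attraction(a.exps[ia], a.shell, a.origin,
                                    b.exps[ib], b.shell, b.origin, C)
    return v
```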

Important: Note that this is the nuclear attraction integral
contribution from an atom centered at \(\mathbf{C}\). To get the full
nuclear attraction contribution, you must sum over all the nuclei, as
well as scale each term by the appropriate nuclear charge!

Two electron repulsion integrals

We are done with the necessary one-body integrals (for a basic
Hartree-Fock energy code, at least) and are ready to move on to the
two-body terms: the electron-electron repulsion integrals. Thankfully,
much of the work has been done for us on account of the nuclear
attraction one-body integrals.

In terms of Hermite integrals, to evaluate the two-electron repulsion
terms we must evaluate a summation over the Hermite expansion
coefficients of both charge distributions, which looks terrible, and it
is. However, recalling that \(p = \alpha + \beta\) and letting
\(q = \gamma + \delta\) (that is, the sums of the Gaussian exponents on
\(a\) and \(b\), and on \(c\) and \(d\)), we can write the equation in a
similar form to the nuclear attraction integrals:

\[(ab|cd) = \frac{2\pi^{5/2}}{pq\sqrt{p+q}}\sum_{tuv}E_t^{ij}E_u^{kl}E_v^{mn}\sum_{\tau\nu\phi}E_{\tau}^{i'j'}E_{\nu}^{k'l'}E_{\phi}^{m'n'}(-1)^{\tau+\nu+\phi}\,R^{0}_{t+\tau,\,u+\nu,\,v+\phi}(\alpha,\mathbf{P},\mathbf{Q})\]

where \(\alpha = pq/(p+q)\), and \(\mathbf{P}\) and \(\mathbf{Q}\) are the
composite centers of the \(ab\) and \(cd\) charge distributions.

And, for completeness’ sake, we wrap the above to handle contracted
Gaussians
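A sketch of the primitive integral and its contracted wrapper (earlier helpers repeated so the snippet runs standalone; the six nested loops map directly onto the six Hermite sums above):

```python
import numpy as np
from scipy.special import hyp1f1

def boys(n, T):
    return hyp1f1(n + 0.5, n + 1.5, -T) / (2.0 * n + 1.0)

def E(i, j, t, Qx, a, b):
    # Hermite expansion coefficients, repeated for standalone use
    p = a + b
    q = a * b / p
    if (t < 0) or (t > (i + j)):
        return 0.0
    elif i == j == t == 0:
        return np.exp(-q * Qx * Qx)
    elif j == 0:
        return (1.0 / (2 * p)) * E(i - 1, j, t - 1, Qx, a, b) - \
               (q * Qx / a) * E(i - 1, j, t, Qx, a, b) + \
               (t + 1) * E(i - 1, j, t + 1, Qx, a, b)
    else:
        return (1.0 / (2 * p)) * E(i, j - 1, t - 1, Qx, a, b) + \
               (q * Qx / b) * E(i, j - 1, t, Qx, a, b) + \
               (t + 1) * E(i, j - 1, t + 1, Qx, a, b)

def R(t, u, v, n, p, PCx, PCy, PCz, RPC):
    # Hermite Coulomb integrals, repeated for standalone use
    T = p * RPC * RPC
    val = 0.0
    if t == u == v == 0:
        val += np.power(-2 * p, n) * boys(n, T)
    elif t == u == 0:
        if v > 1:
            val += (v - 1) * R(t, u, v - 2, n + 1, p, PCx, PCy, PCz, RPC)
        val += PCz * R(t, u, v - 1, n + 1, p, PCx, PCy, PCz, RPC)
    elif t == 0:
        if u > 1:
            val += (u - 1) * R(t, u - 2, v, n + 1, p, PCx, PCy, PCz, RPC)
        val += PCy * R(t, u - 1, v, n + 1, p, PCx, PCy, PCz, RPC)
    else:
        if t > 1:
            val += (t - 1) * R(t - 2, u, v, n + 1, p, PCx, PCy, PCz, RPC)
        val += PCx * R(t - 1, u, v, n + 1, p, PCx, PCy, PCz, RPC)
    return val

def gaussian_product_center(a, A, b, B):
    return (a * np.asarray(A) + b * np.asarray(B)) / (a + b)

def electron_repulsion(a, lmn1, A, b, lmn2, B, c, lmn3, C, d, lmn4, D):
    """Two-electron repulsion integral (ab|cd) over unnormalized primitives."""
    l1, m1, n1 = lmn1
    l2, m2, n2 = lmn2
    l3, m3, n3 = lmn3
    l4, m4, n4 = lmn4
    p = a + b                     # composite exponent for bra (electron 1)
    q = c + d                     # composite exponent for ket (electron 2)
    alpha = p * q / (p + q)
    P = gaussian_product_center(a, A, b, B)
    Q = gaussian_product_center(c, C, d, D)
    RPQ = np.linalg.norm(P - Q)
    val = 0.0
    for t in range(l1 + l2 + 1):
     for u in range(m1 + m2 + 1):
      for v in range(n1 + n2 + 1):
       for tau in range(l3 + l4 + 1):
        for nu in range(m3 + m4 + 1):
         for phi in range(n3 + n4 + 1):
          val += E(l1, l2, t, A[0] - B[0], a, b) * \
                 E(m1, m2, u, A[1] - B[1], a, b) * \
                 E(n1, n2, v, A[2] - B[2], a, b) * \
                 E(l3, l4, tau, C[0] - D[0], c, d) * \
                 E(m3, m4, nu, C[1] - D[1], c, d) * \
                 E(n3, n4, phi, C[2] - D[2], c, d) * \
                 np.power(-1.0, tau + nu + phi) * \
                 R(t + tau, u + nu, v + phi, 0, alpha,
                   P[0] - Q[0], P[1] - Q[1], P[2] - Q[2], RPQ)
    val *= 2 * np.power(np.pi, 2.5) / (p * q * np.sqrt(p + q))
    return val

def ERI(a, b, c, d):
    """Wrap electron_repulsion for contracted BasisFunction-like objects."""
    eri = 0.0
    for ja, ca in enumerate(a.coefs):
     for jb, cb in enumerate(b.coefs):
      for jc, cc in enumerate(c.coefs):
       for jd, cd in enumerate(d.coefs):
        eri += ca * cb * cc * cd * \
               a.norm[ja] * b.norm[jb] * c.norm[jc] * d.norm[jd] * \
               electron_repulsion(a.exps[ja], a.shell, a.origin,
                                  b.exps[jb], b.shell, b.origin,
                                  c.exps[jc], c.shell, c.origin,
                                  d.exps[jd], d.shell, d.origin)
    return eri
```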

And there you have it! All the integrals necessary for a Hartree-Fock
SCF code.

Computational efficiency considerations

Our goal here has been to eliminate some of the confusion when it comes
to connecting mathematics to actual computer code. So the code we have
shown is hopefully clear and looks nearly identical to the mathematical
equations it is supposed to represent. This is one reason we chose
Python as the language of choice for implementing the integrals: it
emphasizes readability.

If you use the code as is, you’ll find that you can only really handle
small systems. To that end, I’ll give a few ideas on how to improve the
integral code to actually be usable.

First, I would recommend not using pure Python. The problem is that we
have some pretty deep loops in the code, and nested for loops will
kill your speed if you insist on sticking with Python. Now, you can
code in another language, but I would suggest rewriting some of the
lower level routines with Cython (http://www.cython.org). Cython
statically compiles your code to eliminate many of the Python calls
that slow your loops down. In my experience, you can get several orders
of magnitude of speedup this way. This brings me to my second point:
one of the problems with Python is the global interpreter lock
(GIL), which basically means you cannot run threads in parallel. Many of
the integral evaluation routines would do well if you could split the
work up over multiple CPUs. If you rewrite some of the low level
routines such that they do not make Python calls anymore, you can turn
off the GIL and wrap the code in OpenMP directives, or even use
Cython’s cython.parallel module. This will take some thought, but
can definitely be done. Furthermore, removing the explicit recursion in the
E and R functions and making them iterative would go a long way to speed
up the code.

A couple of other thoughts: be sure to exploit the permutational
symmetry of the integrals. The two-electron repulsion integrals, for
example, can be done in \(1/8\) of the time just by exploiting these
symmetries, which are unrelated to point-group symmetry. Also, you can
exploit integral screening, since many of the two-electron integrals are
effectively zero. There are a lot of tricks out there in the literature;
go check them out!

One of the cool things about working with real-time electronic dynamics is that you can make awesome videos.

I recently made a video of a resonant electronic excitation of hydrogen peroxide.

You can check out the video by clicking the picture below.

The video was made using the real-time TDHF module in Chronus Quantum. We visualize how the electronic density changes over time.

For this simple STO-3G model, hydrogen peroxide strongly absorbs at 18.0 eV. So we apply an oscillating electric field at the same frequency and the electronic density goes nuts. You can do this for any frequency of light, however.

In the future, we can look at more realistic systems with real time TDDFT.

It’s important because this type of phenomenon underlies our ability to make better solar cells and faster computer processing devices.

Most of my development work is done on remote computing clusters, which are usually firewalled.

This means I can’t log in directly when I work from home. I have to log into a network machine, and then log in to my development machine from there.

It gets kind of tedious to keep doing something along the lines of
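That is, a two-step dance every single time (hostnames here are placeholders):

```shell
$ ssh username@network-host          # first hop: the non-firewalled machine
$ ssh username@development-machine   # second hop, typed at network-host
```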

It also makes it a pain to copy files from home to the remote machine (and vice versa!).

However, you can easily set up a transparent tunnel through your network-host that lets you (effectively) log on to your development machine as if it weren’t even firewalled.

Just open up (or create) ~/.ssh/config:

Add the following lines (replacing network-host and development-machine with the names of your computers). You can name them whatever you want. Here I named each machine network and devel.
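A sketch of such a config (usernames and hostnames are placeholders you would replace with your own):

```shell
# ~/.ssh/config
Host network
    HostName network-host
    User username

Host devel
    HostName development-machine
    User username
    # hop through "network" transparently using netcat
    ProxyCommand ssh network nc %h %p
```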

What this does is log you into the non-firewalled network-host and then establish a connection to the firewalled computer through netcat (nc in this case). You never actually see the network host; it acts on your behalf, invisibly in the background.

So, from your home computer, you can login directly to the development machine:

You can also scp directly from your development machine now, too.
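For example, with the aliases defined in the config (the file name is just an illustration):

```shell
$ ssh devel                  # log in to the firewalled machine in one step
$ scp devel:some-file .      # copy a file home through the tunnel
```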