The Magnus Expansion

Check out the PDF here

The goal of all real-time electronic dynamics methods is to solve the time-dependent Schrödinger equation (TDSE)

where H(t) is the time-dependent Hamiltonian and Ψ(t) is the time-dependent wave function. The goal of the Magnus expansion is to find a general solution for the time-dependent wave function in the case where H is time-dependent, and, more crucially, when H does not commute with itself at different times, e.g. when [H(t1), H(t2)] ≠ 0. In the following we will follow closely the notation of Blanes, et al.

First, for simplicity we redefine A(t) = −iH(t)/ħ and introduce a scalar λ as a bookkeeping device, so that

At the heart of the Magnus expansion is the idea of solving the TDSE by using the quantum propagator U(t, t0) that connects wave functions at different times, e.g. Ψ(t) = U(t, t0)Ψ(t0). Furthermore, the Magnus expansion assumes that U can be represented as an exponential, U(t, t0) = exp(Ω(t, t0)). This yields the modified TDSE

Now, if A and Ω were scalars, the above would have a simple solution, namely Ω(t, t0) = λ ∫ from t0 to t of A(t′) dt′.

However, if A and Ω are matrices this is not necessarily true. In other words, for a given matrix A(t) the following expression does not necessarily hold:

because the matrix exponent and its time derivative do not necessarily commute. Instead, Magnus proved that Ω(t) in general satisfies

where B_n are the Bernoulli numbers. This equation may be solved by integration and iterative substitution of Ω. While it may appear that we are worse off than when we started, collecting like powers of λ (and setting λ = 1) allows us to obtain a power-series expansion for Ω,

This is the Magnus expansion, and here we have given up to the third-order terms. We have also made the notational simplification A_k ≡ A(t_k). This is the basis for nearly all numerical methods to integrate the many-body TDSE in molecular physics. Each subsequent order in the Magnus expansion is a correction that accounts for the proper time-ordering of the Hamiltonian.
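For reference, the first two terms of the expansion, in the notation of Blanes et al., are

```latex
\Omega_1(t) = \int_0^t A(t_1)\,dt_1 , \qquad
\Omega_2(t) = \frac{1}{2}\int_0^t \! dt_1 \int_0^{t_1} \! dt_2 \, [A(t_1), A(t_2)]
```

and the third-order term involves the doubly nested commutators [A1, [A2, A3]] and [A3, [A2, A1]], each weighted by 1/6.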

The Magnus expansion immediately suggests a route to many numerical integrators. The simplest would be to approximate the first term over a single time step by its left-endpoint value,

leading to a forward-Euler-like time integrator of

which we can re-write as

where the subscript k gives the node of the time-step stencil. This gives a first-order method. A more accurate second-order method can be constructed by approximating the first term in the Magnus expansion by the midpoint rule, leading to a second-order time integrator

Modifying the stencil to eliminate the need to evaluate the Hamiltonian at fractional time steps (e.g. changing the time step from Δt to 2Δt, so the midpoint falls on a whole step) leads to the modified midpoint unitary transformation (MMUT) method

which is a leapfrog-type unitary integrator. Note that the midpoint method assumes H is linear over its time interval, so the higher-order terms (containing the commutators) vanish in this approximation. There are many other types of integrators based on the Magnus expansion that can be found in the literature. The key point for all of these integrators is that they are symplectic, meaning they preserve phase-space relationships. This has the practical effect of conserving energy (within some error bound) in long-time dynamics, whereas non-symplectic methods such as Runge-Kutta will experience energetic “drift” over long times.

A final note: in each of these schemes it is necessary to evaluate the exponential of the Hamiltonian. In real-time methods, this requires computing a matrix exponential. This is not a trivial task, and, aside from the construction of the Hamiltonian itself, it is often the most expensive step in the numerical solution of the TDSE. However, many elegant solutions for constructing the matrix exponential can be found in the literature.
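To make the propagation step concrete, here is a minimal sketch (with a hypothetical two-level Hamiltonian, not from the original discussion) of repeatedly applying U = exp(−iHΔt). For a Hermitian H the matrix exponential can be built from an eigendecomposition, and unitarity keeps the norm of the wave function fixed over many steps:

```python
import numpy as np

def expm_hermitian(H, dt):
    """exp(-1j*H*dt) for a Hermitian matrix H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

# Toy two-level Hamiltonian (hypothetical parameters, atomic units)
H = np.array([[0.0, 0.5],
              [0.5, 1.0]])
dt = 0.1
U = expm_hermitian(H, dt)

psi = np.array([1.0, 0.0], dtype=complex)
for _ in range(1000):
    psi = U @ psi  # one propagation step per iteration

print(np.linalg.norm(psi))  # stays at 1.0 to machine precision
```

In production codes the eigendecomposition is usually too expensive, which is why Krylov, Chebyshev, and Padé approaches to the matrix exponential are popular.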


Blanes, S., Casas, F., Oteo, J.A. and Ros, J., 2010. A pedagogical approach to the Magnus expansion. European Journal of Physics, 31(4), p.907.

Magnus, W., 1954. On the exponential solution of differential equations for a linear operator. Communications on pure and applied mathematics, 7(4), pp.649-673.

A (hopefully) gentle guide to the computer implementation of molecular integrals over Gaussian basis functions.

Check out the PDF here

In quantum chemistry, we are often interested in evaluating integrals over Gaussian basis functions. Here I am going to take much of the theory for granted and talk about how one might actually implement the integrals for quantum chemistry by using recursion relationships with Hermite Gaussians. Our goal is to have clean, readable code. I’m writing this for people who are comfortable with the mathematics behind the Gaussian integrals, but want to see a readable computer implementation. I’ll talk a bit about some computational considerations at the end, but my main goal is to convert equations to code. In fact, I’ve tried to structure the equations and the code in such a way that the two look very similar.

For a very good overview of integral evaluation, please see:

Helgaker, Trygve, and Peter R. Taylor. “Gaussian basis sets and molecular integrals.” Modern Electronic Structure Theory (1995).

I will try and follow the notation used in the above reference.

Mathematical preliminaries

Let’s start with some of the basics. First, we have our 3D Gaussian functions

with orbital exponent α, electronic coordinates r, origin A, and

also, (l, m, n) are the angular quantum numbers (e.g. (0,0,0) is s-type, (1,0,0) is p-type, etc.). Cartesian Gaussians are separable in 3D along x, y, and z, so that

with the 1D Gaussian

So far, so good. Let’s consider the overlap integral of two 1D Gaussians, a and b,

where we used the Gaussian product theorem, which gives the pre-exponential factor K_AB = exp(−q·Qx²), with p = a + b, q = ab/p, and Qx = Ax − Bx.
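As a quick numerical sanity check (not part of the original derivation; the exponents and centers below are arbitrary), we can compare this closed form against brute-force quadrature:

```python
import math

# Overlap of two unnormalized 1D s-type Gaussians exp(-a(x-Ax)^2), exp(-b(x-Bx)^2).
# Closed form from the Gaussian product theorem: K_AB * sqrt(pi/p),
# with p = a + b, q = a*b/p, K_AB = exp(-q*(Ax-Bx)**2).
a, b = 0.5, 0.8        # hypothetical exponents
Ax, Bx = 0.0, 1.2      # hypothetical centers
p = a + b
q = a*b/p
analytic = math.exp(-q*(Ax-Bx)**2)*math.sqrt(math.pi/p)

# Brute-force midpoint quadrature over a wide interval
n, lo, hi = 200000, -20.0, 20.0
h = (hi-lo)/n
numeric = sum(math.exp(-a*(lo+(k+0.5)*h-Ax)**2)
              *math.exp(-b*(lo+(k+0.5)*h-Bx)**2) for k in range(n))*h

print(analytic, numeric)  # the two agree to many digits
```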

When using Hermite Gaussians, we can express the overlap distribution of the two Gaussians as

where E_t^{ij} are expansion coefficients (to be determined recursively) and Λ_t is the Hermite Gaussian built from the product of the two Gaussians a and b. Its integral has a simple expression that kills the sum via the Kronecker delta δ_{t0}. It can be shown that the expansion coefficients can be defined using the following recursive definitions

The first equation gives us a way to reduce the index i and the second gives us a way to reduce the index j, so that we can get to the third equation, which is our base case. The last equation tells us what to do if we go out of index bounds: the coefficient is zero.

Overlap integrals

The first thing we need to do is implement a function E which returns our expansion coefficients E_t^{ij}. Aside from the angular momenta i and j of the Gaussian functions, we also need the distance between Gaussians Qx and the orbital exponent coefficients a and b as inputs.

import numpy as np

def E(i,j,t,Qx,a,b):
    ''' Recursive definition of Hermite Gaussian coefficients.
        Returns a float.
        a: orbital exponent on Gaussian 'a' (e.g. alpha in the text)
        b: orbital exponent on Gaussian 'b' (e.g. beta in the text)
        i,j: orbital angular momentum number on Gaussian 'a' and 'b'
        t: number of nodes in Hermite (depends on type of integral, 
           e.g. always zero for overlap integrals)
        Qx: distance between origins of Gaussian 'a' and 'b'
    '''
    p = a + b
    q = a*b/p
    if (t < 0) or (t > (i + j)):
        # out of bounds for t
        return 0.0
    elif i == j == t == 0:
        # base case
        return np.exp(-q*Qx*Qx) # K_AB
    elif j == 0:
        # decrement index i
        return (1/(2*p))*E(i-1,j,t-1,Qx,a,b) - \
               (q*Qx/a)*E(i-1,j,t,Qx,a,b)    + \
               (t+1)*E(i-1,j,t+1,Qx,a,b)
    else:
        # decrement index j
        return (1/(2*p))*E(i,j-1,t-1,Qx,a,b) + \
               (q*Qx/b)*E(i,j-1,t,Qx,a,b)    + \
               (t+1)*E(i,j-1,t+1,Qx,a,b)

This is simple enough! So for a 1D overlap between two Gaussians we would just need to evaluate E(i,j,0,Qx,a,b) and multiply it by sqrt(π/p). Overlap integrals in 3D are just a product of the 1D overlaps. We could imagine a 3D overlap function like so

import numpy as np

def overlap(a,lmn1,A,b,lmn2,B):
    ''' Evaluates overlap integral between two Gaussians
        Returns a float.
        a:    orbital exponent on Gaussian 'a' (e.g. alpha in the text)
        b:    orbital exponent on Gaussian 'b' (e.g. beta in the text)
        lmn1: int tuple containing orbital angular momentum (e.g. (1,0,0))
              for Gaussian 'a'
        lmn2: int tuple containing orbital angular momentum for Gaussian 'b'
        A:    list containing origin of Gaussian 'a', e.g. [1.0, 2.0, 0.0]
        B:    list containing origin of Gaussian 'b'
    '''
    l1,m1,n1 = lmn1 # shell angular momentum on Gaussian 'a'
    l2,m2,n2 = lmn2 # shell angular momentum on Gaussian 'b'
    S1 = E(l1,l2,0,A[0]-B[0],a,b) # X
    S2 = E(m1,m2,0,A[1]-B[1],a,b) # Y
    S3 = E(n1,n2,0,A[2]-B[2],a,b) # Z
    return S1*S2*S3*np.power(np.pi/(a+b),1.5)

Note that we are using the NumPy package in order to take advantage of the definition of π and the fractional power to the 3/2. The above two functions overlap and E are enough to get us the overlap between two Gaussian functions (primitives), but most basis functions are contracted, meaning they are the sum of multiple Gaussian primitives. It is not too difficult to account for this, and we can finally wrap up our evaluation of overlap integrals with a function S(a,b) which returns the overlap integral between two contracted Gaussian functions.

def S(a,b):
    '''Evaluates overlap between two contracted Gaussians
       Returns float.
       a: contracted Gaussian 'a', BasisFunction object
       b: contracted Gaussian 'b', BasisFunction object
    '''
    s = 0.0
    for ia, ca in enumerate(a.coefs):
        for ib, cb in enumerate(b.coefs):
            s += a.norm[ia]*b.norm[ib]*ca*cb*\
                 overlap(a.exps[ia],a.shell,a.origin,
                         b.exps[ib],b.shell,b.origin)
    return s

Basically, this is just a sum over primitive overlaps, weighted by normalization and contraction coefficient. A word is in order for the arguments, however. In order to keep down the number of arguments we have to pass into our functions, we have created BasisFunction objects that contain all the relevant data for the basis function, including exponents, normalization, etc. A BasisFunction class looks like

from scipy.special import factorial2 as fact2 # double factorial, for normalization

class BasisFunction(object):
    ''' A class that contains all our basis function data
        origin: array/list containing the coordinates of the Gaussian origin
        shell:  tuple of angular momentum
        exps:   list of primitive Gaussian exponents
        coefs:  list of primitive Gaussian coefficients
        norm:   list of normalization factors for Gaussian primitives
    '''
    def __init__(self,origin=[0.0,0.0,0.0],shell=(0,0,0),exps=[],coefs=[]):
        self.origin = np.asarray(origin)
        self.shell = shell
        self.exps  = exps
        self.coefs = coefs
        self.norm = None
        self.normalize()

    def normalize(self):
        ''' Routine to normalize the basis functions, in case they
            do not integrate to unity.
        '''
        l,m,n = self.shell
        # self.norm is a list of length equal to number of primitives
        self.norm = np.sqrt(np.power(2,2*(l+m+n)+1.5)*
                            np.power(self.exps,l+m+n+1.5)/
                            fact2(2*l-1)/fact2(2*m-1)/
                            fact2(2*n-1)/np.power(np.pi,1.5))

So, for example if we had a STO-3G Hydrogen 1s at origin (1.0, 2.0, 3.0), we could create a basis function object for it like so

myOrigin = [1.0, 2.0, 3.0]
myShell  = (0,0,0) # p-orbitals would be (1,0,0) or (0,1,0) or (0,0,1), etc.
myExps   = [3.42525091, 0.62391373, 0.16885540] 
myCoefs  = [0.15432897, 0.53532814, 0.44463454]
a = BasisFunction(origin=myOrigin,shell=myShell,exps=myExps,coefs=myCoefs)

Where we used the EMSL STO-3G definition

!  STO-3G  EMSL  Basis Set Exchange Library 
! Elements                             References
! --------                             ----------
!  H - Ne: W.J. Hehre, R.F. Stewart and J.A. Pople, J. Chem. Phys. 51, 2657 (1969).

H     0 
S   3   1.00
      3.42525091             0.15432897       
      0.62391373             0.53532814       
      0.16885540             0.44463454       

So evaluating S(a,a) gives 1.0, since the overlap of a basis function with itself (appropriately normalized) is one.
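As a small aside, for a single s-type primitive the normalization constant has a closed form, N = (2a/π)^(3/4), and we can verify that the normalized self-overlap is one (the exponent below is arbitrary):

```python
import numpy as np

# Self-overlap of a normalized s-type primitive N*exp(-a*r^2):
# N^2 * (pi/(2a))^(3/2) should equal 1, with N = (2a/pi)^(3/4).
a = 0.5  # hypothetical exponent
N = (2.0*a/np.pi)**0.75
self_overlap = N*N*(np.pi/(2.0*a))**1.5
print(self_overlap)  # ≈ 1.0
```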

Kinetic energy integrals

Having finished the overlap integrals, we move on to the kinetic integrals. The kinetic energy integrals can be written in terms of overlap integrals
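Writing |b ± 2x⟩ for Gaussian b with its x angular momentum raised or lowered by two (the decomposition the kinetic function below implements), the expression reads:

```latex
\langle a \vert -\tfrac{1}{2}\nabla^2 \vert b \rangle
  = \beta \left[ 2(l_2+m_2+n_2)+3 \right] \langle a \vert b \rangle
  - 2\beta^2 \Big( \langle a \vert b_{+2x} \rangle
                 + \langle a \vert b_{+2y} \rangle
                 + \langle a \vert b_{+2z} \rangle \Big)
  - \tfrac{1}{2} \Big( l_2(l_2-1) \langle a \vert b_{-2x} \rangle
                     + m_2(m_2-1) \langle a \vert b_{-2y} \rangle
                     + n_2(n_2-1) \langle a \vert b_{-2z} \rangle \Big)
```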


For a 3D primitive, we can form a kinetic function analogous to overlap,

def kinetic(a,lmn1,A,b,lmn2,B):
    ''' Evaluates kinetic energy integral between two Gaussians
        Returns a float.
        a:    orbital exponent on Gaussian 'a' (e.g. alpha in the text)
        b:    orbital exponent on Gaussian 'b' (e.g. beta in the text)
        lmn1: int tuple containing orbital angular momentum (e.g. (1,0,0))
              for Gaussian 'a'
        lmn2: int tuple containing orbital angular momentum for Gaussian 'b'
        A:    list containing origin of Gaussian 'a', e.g. [1.0, 2.0, 0.0]
        B:    list containing origin of Gaussian 'b'
    '''
    l1,m1,n1 = lmn1
    l2,m2,n2 = lmn2
    term0 = b*(2*(l2+m2+n2)+3)*\
            overlap(a,(l1,m1,n1),A,b,(l2,m2,n2),B)
    term1 = -2*np.power(b,2)*\
                           (overlap(a,(l1,m1,n1),A,b,(l2+2,m2,n2),B) +
                            overlap(a,(l1,m1,n1),A,b,(l2,m2+2,n2),B) +
                            overlap(a,(l1,m1,n1),A,b,(l2,m2,n2+2),B))
    term2 = -0.5*(l2*(l2-1)*overlap(a,(l1,m1,n1),A,b,(l2-2,m2,n2),B) +
                  m2*(m2-1)*overlap(a,(l1,m1,n1),A,b,(l2,m2-2,n2),B) +
                  n2*(n2-1)*overlap(a,(l1,m1,n1),A,b,(l2,m2,n2-2),B))
    return term0+term1+term2

and for contracted Gaussians we make our final function T(a,b)

def T(a,b):
    '''Evaluates kinetic energy between two contracted Gaussians
       Returns float.
       a: contracted Gaussian 'a', BasisFunction object
       b: contracted Gaussian 'b', BasisFunction object
    '''
    t = 0.0
    for ia, ca in enumerate(a.coefs):
        for ib, cb in enumerate(b.coefs):
            t += a.norm[ia]*b.norm[ib]*ca*cb*\
                 kinetic(a.exps[ia],a.shell,a.origin,
                         b.exps[ib],b.shell,b.origin)
    return t

Nuclear attraction integrals

The last one-body integral I want to consider here is the nuclear attraction integral. These differ from the overlap and kinetic energy integrals in that the nuclear attraction operator is Coulombic, meaning we cannot easily factor the integral into a product of Cartesian x, y, and z components.

To evaluate these integrals, we need to set up an auxiliary Hermite Coulomb integral R^n_{tuv} that handles the Coulomb interaction between a Gaussian charge distribution centered at P and a nucleus centered at C. The Hermite Coulomb integral, like its counterpart E_t^{ij}, is defined recursively:

where F_n(T) is the Boys function

which is a special case of the Kummer confluent hypergeometric function 1F1(a, b, x),

which is convenient for us, since SciPy has an implementation of 1F1 (hyp1f1) as a part of scipy.special. So for R we can code up the recursion like so

def R(t,u,v,n,p,PCx,PCy,PCz,RPC):
    ''' Returns the Coulomb auxiliary Hermite integrals 
        Returns a float.
        t,u,v:   order of Coulomb Hermite derivative in x,y,z
                 (see defs in Helgaker and Taylor)
        n:       order of Boys function 
        PCx,y,z: Cartesian vector distance between Gaussian 
                 composite center P and nuclear center C
        RPC:     Distance between P and C
    '''
    T = p*RPC*RPC
    val = 0.0
    if t == u == v == 0:
        val += np.power(-2*p,n)*boys(n,T)
    elif t == u == 0:
        if v > 1:
            val += (v-1)*R(t,u,v-2,n+1,p,PCx,PCy,PCz,RPC)
        val += PCz*R(t,u,v-1,n+1,p,PCx,PCy,PCz,RPC)
    elif t == 0:
        if u > 1:
            val += (u-1)*R(t,u-2,v,n+1,p,PCx,PCy,PCz,RPC)
        val += PCy*R(t,u-1,v,n+1,p,PCx,PCy,PCz,RPC)
    else:
        if t > 1:
            val += (t-1)*R(t-2,u,v,n+1,p,PCx,PCy,PCz,RPC)
        val += PCx*R(t-1,u,v,n+1,p,PCx,PCy,PCz,RPC)
    return val

and we can define our boys(n,T) function as

from scipy.special import hyp1f1

def boys(n,T):
    return hyp1f1(n+0.5,n+1.5,-T)/(2.0*n+1.0) 
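
As a quick sanity check on this definition (not part of the original routines), the n = 0 Boys function has a closed form in terms of the error function, which we can compare against the integral definition F_n(T) = ∫₀¹ t^{2n} e^{−T t²} dt using pure-Python quadrature:

```python
import math

# F_0(T) = (1/2) * sqrt(pi/T) * erf(sqrt(T)); compare against the
# integral definition via simple midpoint quadrature on [0, 1].
def boys0_closed(T):
    return 0.5*math.sqrt(math.pi/T)*math.erf(math.sqrt(T))

def boys_quad(n, T, npts=200000):
    h = 1.0/npts
    return sum(((k+0.5)*h)**(2*n)*math.exp(-T*((k+0.5)*h)**2)
               for k in range(npts))*h

print(boys0_closed(0.5), boys_quad(0, 0.5))  # the two agree closely
```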

There are other definitions of the Boys function of course, in case you do not want to use the SciPy built-in. Note that R requires knowledge of the composite center P of the two Gaussians centered at A and B. We can determine P using the Gaussian product center rule

which is very simply coded up as

def gaussian_product_center(a,A,b,B):
    return (a*A+b*B)/(a+b)

Now that we have the Coulomb auxiliary Hermite integrals R^n_{tuv}, we can form the nuclear attraction integrals with respect to a given nucleus centered at C via the expression

def nuclear_attraction(a,lmn1,A,b,lmn2,B,C):
    ''' Evaluates nuclear attraction integral between two Gaussians
        Returns a float.
        a:    orbital exponent on Gaussian 'a' (e.g. alpha in the text)
        b:    orbital exponent on Gaussian 'b' (e.g. beta in the text)
        lmn1: int tuple containing orbital angular momentum (e.g. (1,0,0))
              for Gaussian 'a'
        lmn2: int tuple containing orbital angular momentum for Gaussian 'b'
        A:    list containing origin of Gaussian 'a', e.g. [1.0, 2.0, 0.0]
        B:    list containing origin of Gaussian 'b'
        C:    list containing origin of nuclear center 'C'
    '''
    l1,m1,n1 = lmn1 
    l2,m2,n2 = lmn2
    p = a + b
    P = gaussian_product_center(a,A,b,B) # Gaussian composite center
    RPC = np.linalg.norm(P-C)
    val = 0.0
    for t in range(l1+l2+1):
        for u in range(m1+m2+1):
            for v in range(n1+n2+1):
                val += E(l1,l2,t,A[0]-B[0],a,b) * \
                       E(m1,m2,u,A[1]-B[1],a,b) * \
                       E(n1,n2,v,A[2]-B[2],a,b) * \
                       R(t,u,v,0,p,P[0]-C[0],P[1]-C[1],P[2]-C[2],RPC)
    val *= 2*np.pi/p
    return val

And, just like all the other routines, we can wrap it up to treat contracted Gaussians like so:

def V(a,b,C):
    '''Evaluates nuclear attraction between two contracted Gaussians
       Returns float.
       a: contracted Gaussian 'a', BasisFunction object
       b: contracted Gaussian 'b', BasisFunction object
       C: center of nucleus
    '''
    v = 0.0
    for ia, ca in enumerate(a.coefs):
        for ib, cb in enumerate(b.coefs):
            v += a.norm[ia]*b.norm[ib]*ca*cb*\
                 nuclear_attraction(a.exps[ia],a.shell,a.origin,
                                    b.exps[ib],b.shell,b.origin,C)
    return v

Important: Note that this is the nuclear attraction integral contribution from a single atom centered at C. To get the full nuclear attraction contribution, you must sum over all the nuclei, as well as scale each term by the appropriate nuclear charge!

Two electron repulsion integrals

We are done with the necessary one-body integrals (for a basic Hartree-Fock energy code, at least) and are ready to move on to the two-body terms: the electron-electron repulsion integrals. Thankfully, much of the work has been done for us on account of the nuclear attraction one-body integrals.

In terms of Hermite integrals, to evaluate the two electron repulsion terms, we must evaluate the summation

which looks terrible, and it is. However, letting p = a + b and q = c + d (that is, the sums of the Gaussian exponents on a and b, and on c and d, respectively) and defining α = pq/(p + q), we can write the equation in a similar form to the nuclear attraction integrals

def electron_repulsion(a,lmn1,A,b,lmn2,B,c,lmn3,C,d,lmn4,D):
    ''' Evaluates electron repulsion integral between four primitive Gaussians
        Returns a float.
        a,b,c,d:   orbital exponent on Gaussian 'a','b','c','d'
        lmn1,lmn2,
        lmn3,lmn4: int tuple containing orbital angular momentum
                   for Gaussian 'a','b','c','d', respectively
        A,B,C,D:   list containing origin of Gaussian 'a','b','c','d'
    '''
    l1,m1,n1 = lmn1
    l2,m2,n2 = lmn2
    l3,m3,n3 = lmn3
    l4,m4,n4 = lmn4
    p = a+b # composite exponent for P (from Gaussians 'a' and 'b')
    q = c+d # composite exponent for Q (from Gaussians 'c' and 'd')
    alpha = p*q/(p+q)
    P = gaussian_product_center(a,A,b,B) # A and B composite center
    Q = gaussian_product_center(c,C,d,D) # C and D composite center
    RPQ = np.linalg.norm(P-Q)
    val = 0.0
    for t in range(l1+l2+1):
        for u in range(m1+m2+1):
            for v in range(n1+n2+1):
                for tau in range(l3+l4+1):
                    for nu in range(m3+m4+1):
                        for phi in range(n3+n4+1):
                            val += E(l1,l2,t,A[0]-B[0],a,b) * \
                                   E(m1,m2,u,A[1]-B[1],a,b) * \
                                   E(n1,n2,v,A[2]-B[2],a,b) * \
                                   E(l3,l4,tau,C[0]-D[0],c,d) * \
                                   E(m3,m4,nu ,C[1]-D[1],c,d) * \
                                   E(n3,n4,phi,C[2]-D[2],c,d) * \
                                   np.power(-1,tau+nu+phi) * \
                                   R(t+tau,u+nu,v+phi,0,alpha,
                                     P[0]-Q[0],P[1]-Q[1],P[2]-Q[2],RPQ)
    val *= 2*np.power(np.pi,2.5)/(p*q*np.sqrt(p+q))
    return val

And, for completeness’ sake, we wrap the above to handle contracted Gaussians

def ERI(a,b,c,d):
    '''Evaluates electron repulsion integral between four contracted Gaussians
       Returns float.
       a: contracted Gaussian 'a', BasisFunction object
       b: contracted Gaussian 'b', BasisFunction object
       c: contracted Gaussian 'c', BasisFunction object
       d: contracted Gaussian 'd', BasisFunction object
    '''
    eri = 0.0
    for ja, ca in enumerate(a.coefs):
        for jb, cb in enumerate(b.coefs):
            for jc, cc in enumerate(c.coefs):
                for jd, cd in enumerate(d.coefs):
                    eri += a.norm[ja]*b.norm[jb]*c.norm[jc]*d.norm[jd]*\
                           ca*cb*cc*cd*\
                           electron_repulsion(a.exps[ja],a.shell,a.origin,
                                              b.exps[jb],b.shell,b.origin,
                                              c.exps[jc],c.shell,c.origin,
                                              d.exps[jd],d.shell,d.origin)
    return eri

And there you have it! All the integrals necessary for a Hartree-Fock SCF code.

Computational efficiency considerations

Our goal here has been to eliminate some of the confusion when it comes to connecting mathematics to actual computer code. So the code that we have shown is hopefully clear and looks nearly identical to the mathematical equations it is supposed to represent. This is one reason we chose Python to implement the integrals: it emphasizes readability.

If you use the code as is, you’ll find that you can only really handle small systems. To that end, I’ll give a few ideas on how to improve the integral code to actually be usable.

First, I would recommend not using pure Python. The problem is that we have some pretty deep loops in the code, and nested for loops will kill your speed if you insist on sticking with Python. You can code in another language entirely, but I would suggest rewriting some of the lower-level routines in Cython. Cython statically compiles your code to eliminate many of the Python calls that slow your loops down. In my experience, you will get several orders of magnitude speed-up. This brings me to my second point. One of the problems with Python is the global interpreter lock (GIL), which basically means you cannot run things in parallel threads. Many of the integral evaluation routines would do well if you could split the work up over multiple CPUs. If you rewrite some of the low-level routines such that they do not make Python calls anymore, you can release the GIL and wrap the code in OpenMP directives, or even use Cython’s cython.parallel module. This will take some thought, but can definitely be done. Furthermore, removing the explicit recursion in the E and R functions and making them iterative would go a long way toward speeding up the code.

A couple other thoughts: be sure to exploit the permutational symmetry of the integrals. The two-electron repulsion integrals, for example, can be done in roughly one-eighth of the time just by exploiting these symmetries, which are unrelated to point-group symmetry. Also, you can exploit many of the integral screening routines found in the literature, since many of the two-electron integrals are effectively zero. There are a lot of tricks out there in the literature; go check them out!
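To illustrate the eight-fold symmetry (a hypothetical sketch; nbf and the quartet list are stand-ins for a real integral loop), the standard trick is to visit only quartets (i ≥ j, k ≥ l, ij ≥ kl) using compound pair indices:

```python
# Loop over only the symmetry-unique two-electron integrals (ij|kl).
nbf = 4  # hypothetical number of basis functions
unique = []
for i in range(nbf):
    for j in range(i+1):
        ij = i*(i+1)//2 + j          # compound index for pair (i,j)
        for k in range(nbf):
            for l in range(k+1):
                kl = k*(k+1)//2 + l  # compound index for pair (k,l)
                if ij >= kl:
                    unique.append((i,j,k,l))

npair = nbf*(nbf+1)//2
print(len(unique), npair*(npair+1)//2)  # both 55 for nbf = 4
```

Each stored quartet stands in for up to eight equivalent integrals, versus nbf⁴ = 256 naive evaluations here.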

Quantum Dynamics Movies

One of the cool things about working with real-time electronic dynamics is that you can make awesome videos.

I recently made a video of a resonant electronic excitation of hydrogen peroxide.

You can check out the video by clicking the picture below.

Quantum Dynamics

The video was made using the real-time TDHF module in Chronus Quantum. We visualize how the electronic density changes over time.

For this simple STO-3G model, hydrogen peroxide strongly absorbs at 18.0 eV. So we apply an oscillating electric field at the same frequency and the electronic density goes nuts. You can do this for any frequency of light, however.

In the future, we can look at more realistic systems with real time TDDFT.

It’s important because this type of phenomena underlies our ability to make better solar cells and faster computer processing devices.

Easy transparent SSH tunnels

Most of my development work is done on remote computing clusters, which are usually firewalled.

This means I can’t log in directly when I work from home. I have to log into a network machine, and then log in to my development machine from there.

It gets kind of tedious to keep doing something along the lines of

home:>$ ssh network-host
network-host:>$ ssh development-machine

It also makes it a pain to copy files from home to the remote machine (and vice versa!).

However, you can easily set up a transparent tunnel through your network-host that lets you (effectively) log on to your development machine as if it weren’t even firewalled.

Just open up (or create) ~/.ssh/config:

Add the following lines (replacing network-host and development-machine with the names of your computers). You can name them whatever you want. Here I named each machine network and devel.

# Non-firewalled computer
Host network 
HostName network-machine

# Firewalled computer
Host devel
HostName development-machine 
ProxyCommand ssh -o 'ForwardAgent yes' network 'ssh-add && nc %h %p'

What this does is log you into the non-firewalled network-host, then establish a connection to the firewalled computer through netcat (nc in this case). You never actually see the network host; it acts on your behalf, invisibly in the background.
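As an aside (not part of the original setup), if your OpenSSH client is version 7.3 or newer, the ProxyJump directive accomplishes the same thing with less configuration:

```
# Firewalled computer, using ProxyJump (OpenSSH 7.3+)
Host devel
HostName development-machine
ProxyJump network
```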

So, from your home computer, you can login directly to the development machine:

home:>$ ssh devel 

You can also scp directly from your development machine now, too.

home:>$ scp devel:~/path/to/files .
files                           100% 4084     4.0KB/s   00:00 

Installing Chronus Quantum on Ubuntu 15.10

Chronus Quantum is a free, open-source software package to perform ab-initio computational chemistry calculations. It is primarily developed by Xiaosong Li and his research group at the University of Washington.

In particular, it was designed to excel with explicitly time-dependent calculations, as well as otherwise unconventional electronic structure methods.

In other words, Chronus Quantum is a free software package that solves the underlying quantum mechanics that determines how molecules react and behave.

Just like physics engines in video games make your gameplay more realistic and lifelike, here Chronus Quantum is an engine to give a realistic simulation of molecules on computers.

These types of calculations reveal just what electrons are doing in molecules, helping researchers design better drugs, better solar panels, and better computer chips, among many others.

Chronus Quantum is free and open-source, meaning anyone can download it, try it, and even contribute to it.

The latest public release is hosted on GitHub (just like this website), and you can browse and download the entire source code here.

I want to walk you through the steps I took to get Chronus Quantum working on my fresh install of Ubuntu 15.10. The steps should work for the past few releases of Ubuntu Linux, so if you are on 14.10 or something like that you needn’t worry.

I’ll assume that you are comfortable working on the command line, and have root/sudo privileges, but other than that no particular expertise is necessary!

Obtaining Chronus Quantum

For starters, open up your terminal. Move to a directory where you want to install (home directory is probably fine, it’s where I installed my copy).

Since the source is hosted on GitHub, we will use git to obtain our copy of the code. Type

$ git clone

If you don’t have git, you can install it by typing

$ sudo apt-get update && sudo apt-get install git-all

In fact, if Ubuntu ever complains about not having some package, 95% of the time you can just

$ sudo apt-get install <whatever-package-ubuntu-is-whining-about>

Okay, you should now have chronusq_public in your directory. cd into it.

$ cd chronusq_public

Take care of a few dependencies…

At this point in time, chronus won’t handle all the dependencies on its own, so we have to help it with a few things before we compile.

You may have some of these installed already, but I’m working with a fresh install. apt-get will let you know if you already have a certain package.

Let’s take care of python first, following the apt-get commands we’ve seen before.


Although chronus is mostly C++, the high level execution is handled by a python script. So let’s get that set up. Type

$ sudo apt-get install python-dev libxml2-dev libxslt1-dev python-pip

Followed by

$ sudo pip install configparser

This takes care of the python dependencies.

Math libraries

Same idea as above, but for the required linear algebra libraries.

Let’s knock it out in one shot. Type

$ sudo apt-get install libeigen3-dev libblas-dev liblapacke-dev libhdf5-dev

Okay, we are done here.


Now, you should still be in the top of the chronusq_public directory. If not,

$ cd /path/to/chronusq_public

From this directory, make a build directory and cd into it.

$ mkdir build && cd build

Now, inside the build/ directory, type

$ cmake -DCMAKE_CXX_FLAGS='-w -O2 -std=c++11' -DBUILD_LIBINT=ON ..

This will configure your compilation.

There are more options you can pass to cmake, and you can find them in the Chronus Quantum documentation (in the folder chronusq_public/doc/).

Most of the defaults should work for us. I wanted to pass the optimizer flag to cmake as well (-O2).

We also want chronus to deal with compiling the external integral package, LibInt, so we tell it to do so explicitly with -DBUILD_LIBINT=ON.

Don’t forget the .. at the end!


Perfect. Now just type

$ make -j <nproc>

where <nproc> is the number of processors. I recommend you use all CPUs available. You can check via

$ nproc --all

I had four, so

$ make -j 4

Now chronus will take care of the rest! It will download and build several more dependencies, so a lot more junk will dump to your terminal, but that’s expected. You can pretty much ignore the boost “warnings” it dumps out.

Heads up: this will probably take a while. It took me around 2 hours to compile. Granted, we are making chronus deal with LibInt which accounts for at least half of that compile time, but now would be a good time to take a long lunch or go outside.

Test it out!

In your build directory, there should now be a chronus python script, called

You can run it on a test file like so

$ python <input-file>

Here’s a test case from the documentation, which will do a Hartree-Fock SCF calculation on a water molecule:

# Molecule Specification

charge = 0
mult = 1
  O 0  0.000000000 -0.0757918436 0.0
  H 0  0.866811829  0.6014357793 0.0
  H 0 -0.866811829  0.6014357793 0.0

# Job Specification

reference = HF
job = SCF
basis = sto3g.gbs

# Misc Settings

nsmp = 1 

Save this file to a file named water.inp. Then you can run chronus by

$ python water.inp

The output will be named water.out.

Open this file up, and scroll down until you see


SCF Iteration   Energy (Eh)       ΔE (Eh)          |ΔP(α)|
-------------   -----------       -------           -------
  SCFIt: 1      -74.8749392555    -3.9671535e-01   2.0386303e+00
  SCFIt: 2      -74.9381834290    -6.3244173e-02   6.8448936e-01
  SCFIt: 3      -74.9414265478    -3.2431189e-03   1.2919179e-01
  SCFIt: 4      -74.9419370087    -5.1046089e-04   5.7926335e-02
  SCFIt: 5      -74.9420469702    -1.0996144e-04   2.4808865e-02
  SCFIt: 6      -74.9420722506    -2.5280437e-05   1.2054431e-02
  SCFIt: 7      -74.9420798968    -7.6462320e-06   1.1052335e-02
  SCFIt: 8      -74.9420798968    -4.3200998e-12   5.9411624e-06
  SCFIt: 9      -74.9420798968    -3.2684966e-13   1.0888457e-06
  SCFIt: 10     -74.9420798968     4.2632564e-14   4.5029536e-07
  SCFIt: 11     -74.9420798968    -4.2632564e-14   1.7644888e-07
  SCFIt: 12     -74.9420798968    -2.8421709e-14   8.5174409e-08
  SCFIt: 13     -74.9420798968    -2.8421709e-14   7.7070096e-08
  SCFIt: 14     -74.9420798968     8.5265128e-14   2.2941389e-14

SCF Completed: E(ℝ-RHF) = -74.9420798968  Eh after  14  SCF Iterations


And there you have it! The results of the Hartree-Fock SCF iteration.

The total energy of the water molecule is there at the bottom: E(ℝ-RHF) = -74.9420798968 in units of Hartrees.

There is plenty more you can do with chronus, and I’ve only scratched the very surface. You can check out the docs for more information about setting up real-time calculations and more.

If this project sounds interesting to you, and you want to contribute, feel free to fork it!