# Installing Chronus Quantum on Ubuntu 15.10

Chronus Quantum is a free, open-source software package to perform ab-initio computational chemistry calculations. It is primarily developed by Xiaosong Li and his research group at the University of Washington.

In particular, it was designed to excel with explicitly time-dependent calculations, as well as otherwise unconventional electronic structure methods.

In other words, Chronus Quantum is a free software package that solves the underlying quantum mechanics that determines how molecules react and behave.

Just like physics engines in video games make your gameplay more realistic and lifelike, here Chronus Quantum is an engine to give a realistic simulation of molecules on computers.

These types of calculations reveal just what electrons are doing in molecules, helping researchers design better drugs, better solar panels, and better computer chips, among many others.

Chronus Quantum is free and open-source, meaning anyone can download it, try it, and even contribute to it.

The latest public release is hosted on GitHub (just like this website), and you can browse and download the entire source code here.

I want to walk you through the steps I took to get Chronus Quantum working on my fresh install of Ubuntu 15.10. The steps should work for the past few releases of Ubuntu Linux, so if you are on 14.10 or something like that you needn’t worry.

I’ll assume that you are comfortable working on the command line, and have root/sudo privileges, but other than that no particular expertise is necessary!

## Obtaining Chronus Quantum

For starters, open up your terminal. Move to a directory where you want to install (home directory is probably fine, it’s where I installed my copy).

Since the source is hosted on GitHub, we will use git to obtain our copy of the code. Type
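The organization name below is my assumption; if the path has moved, search GitHub for chronusq_public:

```shell
git clone https://github.com/liresearchgroup/chronusq_public.git
```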

If you don’t have git, you can install it by typing
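That's a one-liner:

```shell
sudo apt-get install git
```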

In fact, if Ubuntu ever complains about not having some package, 95% of the time you can just
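reach for apt-get with whatever package name it mentions:

```shell
sudo apt-get install <package name>
```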

Okay, you should now have chronusq_public in your directory. cd into it.

## Take care of a few dependencies…

At this point in time, chronus won’t handle all the dependencies on its own, so we have to help it with a few things before we compile.

You may have some of these installed already, but I’m working with a fresh install. apt-get will let you know if you already have a certain package.

Let’s take care of python first, following the apt-get commands we’ve seen before.

### Python

Although chronus is mostly C++, the high level execution is handled by a python script. So let’s get that set up. Type
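The Python development headers are the essential piece here (the exact package name is my guess for this release of Ubuntu):

```shell
sudo apt-get install python-dev
```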

Followed by
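pip is my guess for the second package; it lets you pull down any remaining Python modules:

```shell
sudo apt-get install python-pip
```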

This takes care of the python dependencies.

### Math libraries

Same idea as above, but for the required linear algebra libraries.

Let’s knock it out in one shot. Type
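The exact package list is my best guess for this era of Ubuntu; the BLAS, LAPACK, and Eigen development headers cover the usual linear algebra needs:

```shell
sudo apt-get install libblas-dev liblapack-dev libeigen3-dev
```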

Okay, we are done here.

## Configure

Now, you should still be in the top of the chronusq_public directory. If not,
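assuming you cloned into your home directory like I did:

```shell
cd ~/chronusq_public
```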

From this directory, make a build directory and cd into it.
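That is:

```shell
mkdir build
cd build
```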

Now, inside the build/ directory, type
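my configure line looked like this (passing the optimizer flag through CMAKE_CXX_FLAGS is my assumption about how this build expects it):

```shell
cmake -DCMAKE_CXX_FLAGS='-O2' -DBUILD_LIBINT=ON ..
```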

There are more options you can pass to cmake, and you can find them in the Chronus Quantum documentation (in the folder chronusq_public/doc/).

Most of the defaults should work for us. I wanted to pass the optimizer flag to cmake as well (-O2).

We also want chronus to deal with compiling the external integral package, LibInt, so we tell it to do so explicitly with -DBUILD_LIBINT=ON.

Don’t forget the .. at the end!

## Compile

Perfect. Now just type
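That is:

```shell
make -j <nproc>
```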

where <nproc> is the number of processors. I recommend you use all CPUs available. You can check via
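the nproc utility, which ships with GNU coreutils:

```shell
nproc
```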

Now chronus will take care of the rest! It will clean up several more dependencies, so a lot more junk will dump to your terminal, but that’s expected. You can pretty much ignore the boost “warnings” it dumps out.

Heads up: this will probably take a while. It took me around 2 hours to compile. Granted, we are making chronus deal with LibInt which accounts for at least half of that compile time, but now would be a good time to take a long lunch or go outside.

## Test it out!

In your build directory, there should now be a chronus python script, called chronusq.py.

You can run it on a test file like so
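assuming the driver script takes the input file as its argument:

```shell
python chronusq.py <input file>
```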

Here’s a test case from the documentation, which will do a Hartree-Fock SCF calculation on a water molecule:

Save this file to a file named water.inp. Then you can run chronus by
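from the directory containing water.inp:

```shell
python chronusq.py water.inp
```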

The output will be named water.out.

Open this file up, and scroll down until you find the SCF iteration summary.

And there you have it! The results of the Hartree-Fock SCF iteration.

The total energy of the water molecule is there at the bottom: E(ℝ-RHF) = -74.9420798968 in units of Hartrees.

There is plenty more you can do with chronus, and I’ve only scratched the very surface. You can check out the docs for more information about setting up real-time calculations and more.

If this project sounds interesting to you, and you want to contribute, feel free to fork it!

# Moving from Wordpress to Jekyll

I’m currently in the process of porting over my old WordPress blog to GitHub Pages, which supports Jekyll. I was getting tired of the bloat of dealing with WordPress, and wanted a cleaner, snappier website.

That and now I can blog in markdown, which is a great advantage to using Jekyll! From the terminal! And version control and hosting with git and GitHub!

I’m a little late to the party, but glad I made it.

# New NFL team ranking program available

I released some new code today that should allow you to easily rank NFL teams for any given season (back through 2009) using either the Colley or Massey methods. Both methods have several weighting schemes implemented, and with the Massey method you can additionally rank according to point differential or total yard differential.

You can check it out on my GitHub here.

I’ve become increasingly dissatisfied with my old NFL Colley ranking system and its derivatives, mostly because (a) I didn’t write it in an object-oriented style, and (b) it became very (VERY) hard to modify later on. The new structure, by encapsulating everything in a “Season” object, should keep everything pretty self-contained.

For example, ranking a whole season now takes just a few lines.
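As a hypothetical sketch of the idea (this toy Season class and its method names are illustrative stand-ins, not the package's actual API), with the Colley method boiled down to its core:

```python
import numpy as np

class Season(object):
    """Toy stand-in for a Season object: collect game results and
    rank the teams with the basic (unweighted) Colley method."""

    def __init__(self):
        self.games = []   # list of (winner, loser) pairs

    def addGame(self, winner, loser):
        self.games.append((winner, loser))

    def rankColley(self):
        teams = sorted({t for game in self.games for t in game})
        idx = {t: i for i, t in enumerate(teams)}
        n = len(teams)
        C = 2.0 * np.eye(n)   # Colley matrix: 2 + games played on the diagonal
        b = np.ones(n)        # right-hand side: 1 + (wins - losses) / 2
        for winner, loser in self.games:
            i, j = idx[winner], idx[loser]
            C[i, i] += 1.0
            C[j, j] += 1.0
            C[i, j] -= 1.0
            C[j, i] -= 1.0
            b[i] += 0.5
            b[j] -= 0.5
        ratings = np.linalg.solve(C, b)
        return sorted(zip(teams, ratings), key=lambda tr: -tr[1])
```

Feed it a season's worth of results with addGame and rankColley() returns the teams ordered best to worst.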

The output ranked the NE Patriots #1 for the 2014 season with this method. You’ll recall they ended up winning the Super Bowl that same season.

So anyway, I’m starting over and making use of some great NFL APIs that I have found elsewhere on GitHub. In particular, I am using nflgame, which does a lot of the heavy lifting for me associated with scraping necessary data.

Check it out if this sounds like it may be something you’re interested in!

# Embarrassingly parallel tasks in Python

Recently, I found myself executing the same commands (or some variation thereof) at the command line.

Over and over and over and over.

Sometimes I just write a do loop (bash) or for loop (python). But the command I was executing was taking almost a minute to finish. Not terrible if you are doing this once, but several hundred (thousand?) times more and I get antsy.

If you are curious, I was generating .cube files for a large quantum dot, of which I needed the molecular orbital density data to analyze. There was no way I was waiting half a day for these files to generate.

So instead, I decided to parallelize the for loop that was executing my commands. It was easier than I thought, so I am writing it here not only so I don’t forget how, but also because I’m sure there are others out there like me who (a) aren’t experts at writing parallel code, and (b) are lazy.

Most of the following came from following along here.

First, the package I used was the joblib package in python. I’ll assume you have it installed, if not, you can use pip or something like that to get it on your system. You want to import Parallel and delayed.

So start off your code with
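The two imports are:

```python
from joblib import Parallel, delayed
```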

If you want to execute a system command, you’ll also need the call function from the subprocess package. So you have
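So the top of the script reads:

```python
from joblib import Parallel, delayed
from subprocess import call
```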

Once you have these imported, you have to structure your code (according to the joblib people) like so:
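The skeleton from the joblib documentation amounts to (the process function here is a placeholder):

```python
from joblib import Parallel, delayed

def process(i):
    # stand-in for whatever work you actually want done
    return i * i

if __name__ == '__main__':
    inputs = range(10)
    results = Parallel(n_jobs=2)(delayed(process)(i) for i in inputs)
    print(results)
```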

So do your imports first (duh), then define the functions you want to run (in my case, execute a command on the command line), and then finally call that function in the main block.

I learn by example, so I’ll show you how I pieced the rest of it together.

Now, the command I was executing was the Gaussian "cubegen" utility. So an example command looks like
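reconstructed from the description below, for MO 50 from qd.fchk at 120 points per side with headers:

```shell
cubegen 0 MO=50 qd.fchk 50.cube 120 h
```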

Which makes a .cube file (50.cube) containing the volumetric data of molecular orbital 50 (MO=50) from the formatted checkpoint file (qd.fchk). I wanted 120 points per side, and I wanted headers printed (120 h).

Honestly, the command doesn’t matter. If you want to parallelize `ls` over a for loop, you certainly could. That’s not my business.

What does matter is that we can execute these commands from a Python script using the call function that we imported from the subprocess package.

So we replace our functions with the system calls
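Mine looked something like this (the file name qd.fchk and the argument layout follow the cubegen example above; returning the exit code is my own touch):

```python
from subprocess import call

def makeCube(x, npts):
    # run cubegen for molecular orbital number x at npts points per side
    cmd = 'cubegen 0 MO=' + str(x) + ' qd.fchk ' + str(x) + '.cube ' + str(npts) + ' h'
    return call(cmd, shell=True)
```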

Now that we have the command(s) defined, we need to piece it together in the main block.

In the case of the makeCube function, I want to feed it a list of molecular orbital (MO) numbers and let that define my for loop. So let’s start at MO #1 and go to, say, MO #500. This will define our inputs. I also want the cube resolution (npts) as a variable (well, parameter really).

I’ll also use 8 processors, so I’ll define a variable num_cores and set it to 8. Your mileage may vary. Parallel() is smart enough to handle fairly dumb inputs.

(Also, if you do decide to use cubegen, like I did, please make sure you have enough space on disk.)

Putting this in, our code looks like
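A sketch of the script so far, with those choices written out (the Parallel call itself still to come):

```python
from joblib import Parallel, delayed
from subprocess import call

def makeCube(x, npts):
    # run cubegen for molecular orbital number x at npts points per side
    cmd = 'cubegen 0 MO=' + str(x) + ' qd.fchk ' + str(x) + '.cube ' + str(npts) + ' h'
    return call(cmd, shell=True)

if __name__ == '__main__':
    inputs = range(1, 501)   # MO #1 through MO #500
    npts = 120               # cube resolution: points per side
    num_cores = 8            # processors to spread the work over
    pass                     # the Parallel(...) call goes here
```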

Great. Almost done.

Now we need to call this function from within Parallel() from joblib.

The Parallel function (object?) first takes the number of cores as an input. You could easily hard code this if you want, or let Python’s multiprocessing package determine the number of CPUs available to you. Next we call the function using the delayed() function. This is “a trick to create a tuple (function, args, kwargs) with a function-call syntax”.

It’s on the developer’s web page. I can’t make this stuff up.

Then we feed it the list defined by our start and end values.

If you wanted to list the contents of your directory 500 times and over 8 cores, it would look like (assuming you defined the function and inputs above)
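something like this (I redirect the output to /dev/null so 500 listings don't flood the terminal):

```python
from joblib import Parallel, delayed
from subprocess import call

# list the directory 500 times, spread across 8 cores
results = Parallel(n_jobs=8)(
    delayed(call)('ls > /dev/null', shell=True) for i in range(500))
```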

Essentially we are making the equivalence that

`Parallel(n_jobs=num_cores)(delayed(makeCube)(i, npts) for i in inputs)`

is the same as

`for i in inputs: makeCube(i, npts)`

Does that make sense? It’s just a plain for loop, with each iteration farmed out to whichever core is free.
Okay. Enough already. Putting it all together we have:
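Assembled, the whole script looks something like this (the file names and the MO range are from my run; adjust to taste):

```python
from joblib import Parallel, delayed
from subprocess import call

def makeCube(x, npts):
    # run cubegen for molecular orbital number x at npts points per side
    cmd = 'cubegen 0 MO=' + str(x) + ' qd.fchk ' + str(x) + '.cube ' + str(npts) + ' h'
    return call(cmd, shell=True)

if __name__ == '__main__':
    inputs = range(1, 501)   # MO #1 through MO #500
    npts = 120               # cube resolution: points per side
    num_cores = 8            # processors to spread the work over
    Parallel(n_jobs=num_cores)(delayed(makeCube)(i, npts) for i in inputs)
```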

There you have it!

# Biorthogonalizing left and right eigenvectors the easy (lazy?) way

Lately I have been diagonalizing some nasty matrices.

Large. Non-Hermitian. Complex. Matrices. The only thing I suppose I have going for me is that they are relatively sparse.

Usually I haven’t had much of a problem getting eigenvalues. Eigenvalues are easy. Plug into ZGEEV, compute, move on.

The problem I ran into came when I wanted to use the eigenvectors. If you are used to using symmetric matrices all the time, you might not realize that non-Hermitian matrices have two sets of eigenvectors, left and right. In general, these eigenvectors of a non-Hermitian matrix are not orthonormal. But you can biorthogonalize them.

Why do you care if your eigenvectors are biorthogonalized?

Well, if your matrix corresponds to a Hamiltonian, and if you want to compute wave function properties, then you need a biorthonormal set of eigenvectors. This happens in linear response coupled cluster theory, for example. It is essential for a unique and physical description of molecular properties, e.g. transition dipole moments.

Now, with Hermitian matrices, your left and right eigenvectors are just conjugate transposes of each other, so it’s super easy to orthogonalize a set of eigenvectors. You can compute the QR decomposition (a la Gram-Schmidt) of your right eigenvectors $\mathbf{C}$ to get

$\mathbf{C} = \mathbf{Q}\mathbf{R}$

where $\mathbf{Q}$ is your set of orthogonalized eigenvectors and $\mathbf{R}$ is upper triangular. (Usually they are orthogonalized anyway during your eigenvalue decomposition.)

For non-Hermitian matrices, this situation is different. You have two eigenvalue equations you want to solve:

$\mathbf{HR} = \mathbf{RE}$, and, $\mathbf{LH} = \mathbf{LE}$

where $\mathbf{R}$ and $\mathbf{L}$ are your right and left eigenvectors, and $\mathbf{H}$ and $\mathbf{E}$ are your matrix and eigenvalues. The eigenvalues are the same regardless of which side you solve for.

To biorthonormalize $\mathbf{L}$ and $\mathbf{R}$, we want to enforce the constraint

$\mathbf{L}\mathbf{R} = \mathbf{1}$

which says that the inner product of these two sets of vectors is the identity. This is what biorthonormalization means.

Many times $\mathbf{LR} = \mathbf{D}$ where $\mathbf{D}$ is diagonal. This is easy. It’s already orthogonal, so you can just scale by the norm.

If that isn’t the case, how do you do it? One way would be to modify Gram-Schmidt. You could do that, but you won’t find a LAPACK routine to do it for you (that I know of). So you’d have to write one yourself, and doing that well is time-consuming and may be buggy. Furthermore, I’ve found modified Gram-Schmidt to run into serious problems when you encounter degenerate eigenvalues. In that case, the eigenvectors for each degenerate eigenvalue aren’t unique, even after constraining for biorthonormality, and so it’s tough to enforce biorthonormality overall.

Here’s a trick if you just want to get those dang eigenvectors biorthonormalized and be on your way. The trick lies in the LU decomposition.

Consider the following. Take the inner product of $\mathbf{L}$ and $\mathbf{R}$ to get the matrix $\mathbf{M}$:

$\mathbf{L}\mathbf{R} = \mathbf{M}$

Now take the LU decomposition of $\mathbf{M}$,

$\mathbf{M} = \mathbf{M}_L \mathbf{M}_U$

where $\mathbf{M}_L$ is lower triangular, and $\mathbf{M}_U$ is upper triangular. So our equation now reads:

$\mathbf{L}\mathbf{R} = \mathbf{M}_L \mathbf{M}_U$

Triangular matrices are super easy to invert, so invert the triangular factors onto the left-hand side to get:

$\mathbf{M}_L^{-1} \mathbf{L}\mathbf{R} \mathbf{M}_U^{-1} = \mathbf{1}$

Now, since we want left and right eigenvectors that are biorthonormal, we can replace the identity:

$\mathbf{M}_L^{-1} \mathbf{L}\mathbf{R} \mathbf{M}_U^{-1} = \mathbf{L}'\mathbf{R}' = \mathbf{1}$

where the prime indicates our new biorthonormal left and right eigenvectors.

This suggests our new biorthonormal left and right eigenvectors take the form:

$\mathbf{L}' = \mathbf{M}_L^{-1} \mathbf{L}$

and

$\mathbf{R}' = \mathbf{R} \mathbf{M}_U^{-1}$

And there you have it! An easy to implement method of biorthonormalizing your eigenvectors. All of the steps have a corresponding LAPACK routine. I’ve found it to be pretty robust. You can go a step further and prove that the new eigenvectors are still eigenvectors of the Hamiltonian. Just plug them in to the eigenvalue equation and you’ll see.
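As a quick numerical sketch of the whole procedure (NumPy/SciPy standing in for the LAPACK calls; scipy.linalg.lu pivots, so the permutation gets folded into the left eigenvectors as well):

```python
import numpy as np
from scipy.linalg import eig, lu, solve_triangular

rng = np.random.default_rng(0)
n = 6
# a random non-Hermitian complex matrix
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# left and right eigenvectors in one shot
w, vl, vr = eig(H, left=True, right=True)
L = vl.conj().T   # rows are left eigenvectors
R = vr            # columns are right eigenvectors

M = L @ R         # inner product; not the identity in general
P, Ml, Mu = lu(M) # pivoted LU: M = P @ Ml @ Mu

# L' = Ml^{-1} P^{-1} L  and  R' = R Mu^{-1}, done via triangular solves
Lp = solve_triangular(Ml, P.T @ L, lower=True)
Rp = solve_triangular(Mu.T, R.T, lower=True).T
```

After this, Lp @ Rp is the identity to machine precision, and the triangular solves play the role of the LAPACK factor/solve routines the post alludes to.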

While I think this doesn’t scale any worse than your diagonalization in the first place, it is still a super useful trick. For example, it even works for sets of eigenvectors that aren’t full rank (e.g. rectangular). Because you do the LU on the inner product of the left and right eigenvectors, you’ll get a much smaller square matrix with dimension equal to the number of eigenvectors you have.

Another use you might find with this is if the eigenvectors are nearly biorthonormal (which often happens when you have degenerate eigenvalues). You can do the same trick, but on the subspace of the eigenvectors corresponding to the degenerate eigenvalues. So if you have three degenerate eigenvalues, you can do an LU decomposition plus inversion on a 3x3 matrix.