Tensor Network Theory
Tensor Network Theory (TNT) provides efficient and accurate methods for simulating strongly correlated quantum systems. It does this by encoding, as a network of tensors, the many-body wave function representing the system and the operators that act on it. TNT algorithms can then be broken down into a series of tensor operations.
The TNT library contains highly optimised routines for manipulating tensors in the network, which are completely general and do not depend on any network geometry. It also contains routines that can be used to build the most common TNT algorithms, along with complete versions of these algorithms, either to use as they are or to modify for your own purposes.
The library is being developed in the group of Prof Dieter Jaksch at the University of Oxford. For more information about the group and recent publications, please visit the group website.
We are receiving core support from Prof Chris Greenough's software engineering group at STFC Rutherford Appleton Laboratory, and University dCSE support from Dr Chris Goodyer at NAG.
Structure of the library
The library routines are organised in a three-tier structure, shown on the right.
Tier I contains routines for manipulating the tensors that represent the nodes in the network. These routines are completely general and do not depend on network geometry. They include routines for modifying the tensor values through operations on the tensors, for changing how the nodes are connected to one another in the network, and for extracting certain values (e.g. diagonal values) of the tensors. All these routines are contained in the core library libtnt.a, and they can be used to build your own custom TNT algorithms, or indeed in any other application where tensor manipulations are required. The figure shows two example operations on network nodes that are commonly used in TNT algorithms: first, node A is contracted along the physical leg with its complex conjugate; second, node B is factorised into three new nodes using a singular value decomposition.
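As a concrete illustration of these two tier I operations, here is a minimal NumPy sketch. The tensor shapes and variable names are assumptions made for the example; this is generic NumPy code, not the libtnt.a interface.

```python
import numpy as np

# Hypothetical node A: a rank-3 tensor with legs
# (left bond, physical, right bond).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2, 4)) + 1j * rng.standard_normal((4, 2, 4))

# Operation 1: contract A with its complex conjugate along the
# physical leg, leaving a rank-4 tensor over the four bond legs.
E = np.tensordot(A, A.conj(), axes=([1], [1]))
assert E.shape == (4, 4, 4, 4)

# Operation 2: factorise a node B into three new nodes via an SVD.
# Group B's legs into a matrix, decompose, and keep U, diag(s) and Vh
# as three separate tensors connected by the new internal bonds.
B = rng.standard_normal((4, 2, 4))
mat = B.reshape(4 * 2, 4)            # rows: (left, physical); cols: right
U, s, Vh = np.linalg.svd(mat, full_matrices=False)
U = U.reshape(4, 2, s.size)          # node 1: (left, physical, new bond)
S = np.diag(s)                       # node 2: singular values on the bond
# node 3 is Vh: (new bond, right)

# Contracting the three new nodes back together recovers B.
assert np.allclose((U.reshape(8, -1) @ S @ Vh).reshape(4, 2, 4), B)
```

The SVD split is the workhorse of tensor-network truncation: discarding small singular values in S compresses the bond while minimising the error in the reconstructed tensor.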
Tier II contains routines that operate on a network. A few of these (e.g. copying a network, deleting a network) do not depend on network geometry, and these are also included in the core library. The remaining routines are specific to the network geometry, e.g. a matrix product state (MPS) network. These routines provide the building blocks for constructing TNT algorithms for these network types, for example contracting an entire network to find an expectation value on a given site, or applying a sequence of two-site gates.
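The first of those examples, contracting an entire MPS network to obtain a single-site expectation value, can be sketched as follows. The function name and interface here are illustrative assumptions, not the tier II API; the contraction sweeps a left environment tensor through the chain.

```python
import numpy as np

def mps_site_expectation(mps, op, site):
    """Contract an entire MPS network to evaluate <psi| op_site |psi>.
    mps: list of rank-3 tensors with legs (left bond, physical, right bond).
    Illustrative sketch only, not the TNT library interface."""
    L = np.ones((1, 1))  # open boundary: trivial left environment
    for i, A in enumerate(mps):
        O = op if i == site else np.eye(A.shape[1])
        # Apply the operator to the ket tensor's physical leg.
        AO = np.tensordot(A, O, axes=([1], [1])).transpose(0, 2, 1)
        # Grow the environment: contract L with the ket tensor,
        # then with the conjugated bra tensor.
        T = np.tensordot(L, AO, axes=([0], [0]))          # (bra bond, phys, ket bond)
        L = np.tensordot(T, A.conj(), axes=([0, 1], [0, 1]))
    return L.item().real

# Product state |0>|1> has bond dimension 1.
up = np.zeros((1, 2, 1)); up[0, 0, 0] = 1.0
down = np.zeros((1, 2, 1)); down[0, 1, 0] = 1.0
sz = np.diag([1.0, -1.0])
print(mps_site_expectation([up, down], sz, 0))  # 1.0
print(mps_site_expectation([up, down], sz, 1))  # -1.0
```

For a chain of N sites with bond dimension D, this sweep costs O(N D^3) rather than the exponential cost of working with the full state vector.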
Tier III contains complete algorithms. For example, a given algorithm may load an MPS start state and Hamiltonian from an initialisation file, time-evolve the MPS under the Hamiltonian for a given time t, and calculate expectation values at given time intervals. These algorithms can either be used without modification, or easily changed to build your own custom routines.
Currently available for download is the alpha version of the pre-compiled core library libtnt.a, which contains optimised tier I routines and general tier II routines. Also available are many routines for manipulating and time-evolving an MPS. These routines are being used and tested within Dieter Jaksch's group.
A beta version of the core library libtnt.a will be released soon, along with a tier II MPS library libtnt_mps.a, which will contain a variety of routines for manipulating MPS networks. Algorithms will be provided for time-evolving a start state using time-evolving block decimation (TEBD), and for finding the ground state using a variational approach. For more details see the section 'Status of beta version' at the bottom of the page.
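The elementary step of time-evolving block decimation, applying a two-site gate to neighbouring MPS tensors and truncating the new bond with an SVD, can be sketched as below. The function name, argument layout and truncation threshold are assumptions made for this example, not the forthcoming libtnt_mps.a API.

```python
import numpy as np

def apply_two_site_gate(A, B, gate, chi_max):
    """Apply a two-site gate to neighbouring MPS tensors A, B and
    truncate the new bond to at most chi_max singular values.
    Illustrative TEBD step only, not the TNT library interface."""
    l, p, _ = A.shape
    _, q, r = B.shape
    # Contract A--B into a single two-site tensor (l, p, q, r).
    theta = np.tensordot(A, B, axes=([2], [0]))
    # Apply the gate, reshaped to act on the two physical legs.
    G = gate.reshape(p, q, p, q)
    theta = np.tensordot(G, theta, axes=([2, 3], [1, 2]))  # (p', q', l, r)
    theta = theta.transpose(2, 0, 1, 3).reshape(l * p, q * r)
    # Split back into two tensors, discarding negligible singular values.
    U, s, Vh = np.linalg.svd(theta, full_matrices=False)
    chi = min(chi_max, int(np.count_nonzero(s > 1e-12)))
    A_new = U[:, :chi].reshape(l, p, chi)
    B_new = (np.diag(s[:chi]) @ Vh[:chi]).reshape(chi, q, r)
    return A_new, B_new

# Example: a SWAP gate applied to the product state |0>|1> gives |1>|0>.
up = np.zeros((1, 2, 1)); up[0, 0, 0] = 1.0
down = np.zeros((1, 2, 1)); down[0, 1, 0] = 1.0
swap = np.array([[1., 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])
A_new, B_new = apply_two_site_gate(up, down, swap, chi_max=4)
psi = np.tensordot(A_new, B_new, axes=([2], [0])).reshape(2, 2)
assert np.allclose(psi, [[0, 0], [1, 0]])  # amplitude 1 on |1>|0>
```

A full TEBD simulation sweeps such gate applications over all bonds for each Trotter time step, with chi_max controlling the accuracy/cost trade-off.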
Plans for development
In the near future, development will focus on optimising the core library and the MPS library, and on parallelising the core tensor operations. This will include treatment of global physical symmetries, leading to dramatic speed-ups for systems with conserved quantities. This work is being carried out with support under the University dCSE scheme from HECToR, and will be completed by July 2013.
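The source of those speed-ups can be illustrated with a toy example. The block structure below is an assumption made for exposition, not the TNT library's internal format: with a conserved quantity such as total particle number, tensors decompose into blocks labelled by quantum-number sectors, and operations need only act sector by sector.

```python
import numpy as np

# Two charge sectors of a symmetry-respecting matrix (assumed toy data).
rng = np.random.default_rng(1)
blocks = {0: rng.standard_normal((3, 3)),   # charge-0 sector
          1: rng.standard_normal((5, 5))}   # charge-1 sector

# Per-sector SVDs cost O(sum_i n_i^3) rather than O((sum_i n_i)^3)
# for the equivalent dense matrix, hence the dramatic speed-ups.
s_blocks = {q: np.linalg.svd(b, compute_uv=False) for q, b in blocks.items()}

# Check against the dense block-diagonal matrix: its singular values
# are exactly the union of the per-sector singular values.
full = np.zeros((8, 8))
full[:3, :3] = blocks[0]
full[3:, 3:] = blocks[1]
s_full = np.linalg.svd(full, compute_uv=False)
assert np.allclose(np.sort(s_full),
                   np.sort(np.concatenate([s_blocks[0], s_blocks[1]])))
```

The same block structure also shrinks memory use, since the zero blocks forced by the symmetry are never stored.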
During this time, as well as continual optimisation of the core library, new routines will be added to the MPS library, taking into account feature requests and feedback from users.
After this, development will focus on extending the tier II libraries, providing libraries for different network types (e.g. MERA, PEPS) and shifting the focus to TNT algorithms in two dimensions. These will first be provided in a serial library; later, highly optimised libraries will be released with parallelism implemented at the network and algorithm level, as well as utilising the already parallelised core tensor routines.
Download the library
To get hold of a copy of the library, please first join the project by clicking the 'Join this Project' button on the top right. If you do not already have an account on CCPForge, you will need to create one first.
After this, you will be able to download the pre-compiled library from the Files section. More detailed information on using the library is available in the documentation, which is also available for download here.
The source code is also available on request, please contact firstname.lastname@example.org for access.
Keep up to date with news on the project
To keep up to date, please join the tntlibrary-users mailing list by subscribing here. You will then be informed whenever a new version of the library is available.
Give us your feedback
We would appreciate any feedback you have on the TNT library, to help us develop stable code that is of use to as many researchers as possible.
If you think there are any features that would be useful that are not already described in the 'Plans for development' section (e.g. an operation on a node or network, or an option for input or output of data), please request it in the Feature Request tracker. We will change the status to let you know if your feature is added to the list of features that developers are working on, if it is in progress, or if it is complete.
If you notice a bug, please report it in the Bug Report tracker with as much detail as you can, and we will work to resolve it as soon as possible.
For any other comments or questions please contact email@example.com.
Status of beta version
The current library has been tested primarily on Linux systems using performance compilers (Intel) and linear algebra libraries (Intel MKL and NAG). Before the beta version is released we will complete development of versions that can be used on Mac, Linux and Windows environments using freely available libraries and compilers. Although this version will not be as highly optimised, we envisage that it will be useful for testing and initial development, before switching to the high-performance version for treating large problems on HPC systems.
The following functions (with current status indicated) will be included in the beta version:
Core library functions
Functions for manipulating general networks
Library functions for MPS