Using Gaussian 09 with Linda
This section describes the process of installing the Linda software you have purchased through Gaussian, Inc. and building a distributed-memory parallel version of Gaussian. It assumes that you have already built and tested the regular version of the program. It also assumes that you have read the normal installation instructions and also that you have access to the Gaussian 09 User’s Reference.
Linda Parallel Methods
HF, CIS=Direct, and DFT calculations on molecules are Linda parallel, including energies, optimizations and frequencies. TDDFT energies and gradients and MP2 energies and gradients are also Linda parallel. Portions of MP2 frequency and CCSD calculations are Linda parallel, but others are only SMP-parallel, so they see some speedup from using a few nodes but no further improvement from larger numbers of nodes.
It is always best to use SMP parallelism within nodes and Linda parallelism only between nodes. For example, on a cluster of 4 nodes, each with dual quad-core EM64T processors, one should use %NProcShared=8 together with a %LindaWorkers list naming the four nodes, rather than using more than one Linda worker per node.
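For the four-node example above, the corresponding Link 0 lines would look like the following; node1 through node4 are placeholder host names, not names prescribed by this manual:

```
%NProcShared=8
%LindaWorkers=node1,node2,node3,node4
```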
Installing the Linda Software and Compiling G09/Linda
If you have purchased Gaussian binaries, Linda is distributed on the same CD as the Gaussian binaries, and no additional installation is necessary. Follow the directions in the README file on the distribution CD.
If you have purchased Gaussian source code, then Linda is distributed on a separate CD. Follow the directions in the README.source file on the Gaussian source distribution CD to install Linda and compile Gaussian to use Linda.
In either case, you must run the command bsd/install as detailed in the README files and installation instruction sheets.
Running Gaussian with Linda
The Linda parallel programming model involves a master process, which runs on the current processor, and a number of worker processes which can run on other nodes of the network. So a Gaussian 09/Linda run must specify the number of processors to use, the list of processors where the jobs should be run, and occasionally other job parameters. An environment variable is generally the easiest way to specify this information (as we will see).
Each of these nodes needs to have some access to the Gaussian 09 directory tree. The recommended configuration is to have G09 on each system that will be used for the parallel job. Note that the Linda binaries need to have the same path on each machine. If this is not feasible, the G09 tree can be made accessible via an NFS-mounted disk which is mounted in an identical location on all nodes.
For MP2 calculations, each node must also have some local disk where Gaussian 09 can put temporary files. This is defined as usual via the GAUSS_SCRDIR environment variable, which should be set in the .cshrc or .profile for your account on each node.
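For example, assuming each node has a local scratch disk mounted at /scratch (a placeholder path), the csh setting in ~/.cshrc would be:

```
# In ~/.cshrc on each node; /scratch/$USER is a placeholder path
setenv GAUSS_SCRDIR /scratch/$USER
```

Accounts using a Bourne-style shell would instead put the equivalent line, export GAUSS_SCRDIR=/scratch/$USER, in .profile.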
Configuring Gaussian 09/Linda
Gaussian 09 gets configuration information from three primary sources:
The Gaussian input file via %Link0 commands.
The Default.Route file.
The environment variable GAUSS_LFLAGS.
Details about %Link0 commands and the Default.Route file can also be found in the Gaussian 09 User’s Reference manual. Entries specific to Gaussian 09/Linda are described below.
SPECIFYING THE WORKER COMPUTERS
The %LindaWorkers directive is used to specify the computers where Linda worker processes should run. It has the following syntax:

%LindaWorkers=node1[:n1][,node2[:n2],…]
This lists the TCP node name for each node to use. By default, one Linda worker is started on each node, but the optional value allows this to be varied. A worker is always started on the node where the job is started (the master node) whether or not it appears in the node list. %LindaWorkers may be combined with %NProcShared. In this case, one or more parallel worker processes will be run on each node (the number still determined by the values in %LindaWorkers). The value to %NProcShared specifies the number of SMP processors/cores to use on each system in the worker node list.
Do not use the obsolete %NProcLinda directive. G09 will compute the total number of Linda workers based on the %LindaWorkers input.
The following directive causes a network parallel job to be run across the specified 5 nodes. Nodes hamlet and ophelia will each run two worker processes.
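The directive for this example was presumably similar to the following; hamlet and ophelia come from the text, while laertes, horatio, and claudius are placeholder names for the remaining three nodes:

```
%LindaWorkers=hamlet:2,ophelia:2,laertes,horatio,claudius
```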
The following directives specify that a parallel job will be executed on hosts norway, italy and spain. Nodes norway and italy will each run one 4-way SMP parallel worker, and spain will run two such workers:
%LindaWorkers=norway,italy,spain:2
%NProcShared=4

The %LindaWorkers line requests one worker each on norway and italy and two on spain; %NProcShared=4 specifies four-way SMP parallelism within each worker.
These directives make sense when norway and italy are 4 processor/core computers, and spain is an 8 processor/core computer.
Note that the %NProc directive used in earlier Gaussian versions is obsolete.
SPECIFYING THE AMOUNT OF MEMORY FOR A PARALLEL CALCULATION
Memory is specified using the %Mem Link0 command, just as for serial calculations.
USING SSH INSTEAD OF RSH
By default, Linda uses rsh to communicate between nodes. You can use ssh instead by including the following option in the GAUSS_LFLAGS environment variable:
% setenv GAUSS_LFLAGS '… -opt "Tsnet.Node.lindarsharg: ssh"'
Alternatively, you can override this default by creating a configuration file named .tsnet.config in your home directory on the master node, containing the following line:

Tsnet.Node.lindarsharg: ssh
This will cause ssh to be used instead. Note that passwordless ssh logins must already be configured from the master to all worker nodes.
SPECIFYING OTHER LINDA OPTIONS
A few Linda options that are sometimes useful are:
-v Display verbose messages
-vv Display very verbose messages
Use the GAUSS_LFLAGS environment variable to set them.
For example, one could turn on very verbose Linda output using:
% setenv GAUSS_LFLAGS -vv
There are many other Linda options, but most of them are not used by Gaussian. See the Linda manual on the Internet at www.lindaspaces.com/downloads/lindamanual.pdf. The -opt form can be used in GAUSS_LFLAGS to invoke any valid .tsnet.config file directive. Note that Gaussian 09/Linda does not use the native Linda resources minworker and maxworker.
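Several flags can be combined in a single GAUSS_LFLAGS setting. A minimal sketch in Bourne-shell syntax (the manual's own examples use csh setenv):

```shell
#!/bin/sh
# Combine very verbose Linda output with the ssh transport option
# in a single GAUSS_LFLAGS value.
GAUSS_LFLAGS='-vv -opt "Tsnet.Node.lindarsharg: ssh"'
export GAUSS_LFLAGS
echo "$GAUSS_LFLAGS"
```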
Starting Parallel Gaussian 09 Jobs
The g09 command is used as usual to initiate a distributed-memory parallel Gaussian 09 job. For a Linda parallel job to execute successfully, the following conditions must be true:
You have already executed the appropriate Gaussian 09 initialization file ($g09root/g09/bsd/g09.login or $g09root/g09/bsd/g09.profile). Test this by running a serial Gaussian 09 calculation on the master node.
The directory $g09root/g09 is accessible on all nodes.
The LD_LIBRARY_PATH environment variable is set (see the G09 install notes) to locate the Linda shared libraries.
Local scratch space is available on each node if needed (via GAUSS_SCRDIR).
All nodes on which the program will run are trusted by the current host. You should be able to login remotely with the rlogin or ssh command without having to give a password to each of them. Contact your system administrator about configuring security for the nodes in the network.
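The last condition can be checked with a short script before submitting a job; norway, italy, and spain below are placeholder node names, to be replaced with your own worker hosts:

```shell
#!/bin/sh
# Verify passwordless ssh access to each intended worker node.
# BatchMode prevents ssh from prompting for a password.
failed=0
for node in norway italy spain; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "$node" true 2>/dev/null; then
    echo "$node: ok"
  else
    echo "$node: ssh login failed; configure passwordless access"
    failed=$((failed + 1))
  fi
done
echo "$failed node(s) need attention"
```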
Calculations can then be started just as for a serial calculation:
% g09 input &
and Gaussian 09 will start the master and worker processes as needed.
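Putting the pieces together, a complete Linda-parallel input file might begin as follows; the node names, memory amount, route section, and molecule are illustrative, not prescribed by this manual:

```
%Mem=4GB
%NProcShared=4
%LindaWorkers=norway,italy,spain:2
# HF/6-31G(d)

Water single point, Linda parallel

0 1
O   0.000000   0.000000   0.117300
H   0.000000   0.757200  -0.469200
H   0.000000  -0.757200  -0.469200
```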
Monitoring the Calculation
Processes are started on the worker nodes for those links which have been parallelized, i.e. those with a *.exel entry in the main G09 directory. Running top or a similar command on a worker node will show the lxxx.exel process once it starts.
The relevant measure of performance for a parallel calculation is the elapsed or wall clock time. The easiest way to check this is to use an external monitor like time, times, or timex, e.g.
% time g09 input &
which will report elapsed, CPU, and system times. Note that the reported CPU and system times cover only the master node; similar amounts of CPU and system time are expended on each worker node. The speedup is therefore the ratio of the elapsed time of a serial job to the elapsed time of the parallel job.
Specifying Workers Per Node on PPC-based Macs
The -mp n option can be used to run Gaussian with Linda across multiple PowerPC-based Mac OS X and other multiprocessor systems. It specifies the maximum number of Linda processes to be scheduled per node. Set it to 2 when all of the individual nodes are dual-processor machines.
Last update: 23 April 2013