How to install MaPHyS on a cluster using Spack

The idea here is to download the tarballs required to install and test MaPHyS. We will first download Spack, then use it to download the latest releases of the MaPHyS stack, and then copy the necessary files to the remote machine (e.g. a cluster). Finally, we will install MaPHyS on the remote machine with Spack.

This page has been generated from the following emacs org-mode file, http://morse.gforge.inria.fr/maphys/install-maphys-cluster.org, which is also available here:

svn checkout svn://scm.gforge.inria.fr/svnroot/morse/tutorials/maphys maphys_morse/ # or
svn checkout https://scm.gforge.inria.fr/anonscm/svn/morse/tutorials/maphys maphys_morse/

1 Useful links

2 Install Spack locally

Prerequisites: you should have python2 and curl installed to use Spack.

You must have a local machine with an internet connection. Let's say your current directory is WORK_DIR:

export WORK_DIR=$PWD
git clone https://github.com/solverstack/spack.git
cd spack

Sourcing this script makes the spack command available from anywhere:

. ./share/spack/setup-env.sh

Alternatively, simply add the bin directory to your PATH (use an absolute path so that it still works from other directories):

export PATH=$PWD/bin:$PATH

Of course, it is recommended to add these lines to your .bashrc file so that Spack is available in any new shell you open.
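For example, a minimal sketch that appends the setup to your .bashrc, assuming Spack was cloned into $WORK_DIR as above:

echo "export SPACK_ROOT=$WORK_DIR/spack" >> ~/.bashrc
echo '. $SPACK_ROOT/share/spack/setup-env.sh' >> ~/.bashrc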

3 Download the MaPHyS stack

Go back to where we started before cloning the Spack repository. We use spack mirror to fetch MaPHyS and all its dependencies.

cd $WORK_DIR
# Get all the packages except mumps
spack mirror create -D -d maphys-mirror maphys~mumps

This command may download more packages than you need, but it is not a problem.
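You can list the mirror directory to see which packages and versions were actually fetched:

ls maphys-mirror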

If you want to use MUMPS, you need the following workaround (for some reason spack mirror does not work with MUMPS):

# Get mumps tarball
wget http://mumps.enseeiht.fr/MUMPS_5.0.2.tar.gz
# Copy it in the mirror directory
mkdir -p maphys-mirror/mumps
mv MUMPS_5.0.2.tar.gz maphys-mirror/mumps/mumps-5.0.2.tar.gz

Finally, make a compressed tarball of the mirror:

tar -czf maphys-mirror.tar.gz maphys-mirror
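Optionally, check the contents of the archive before transferring it:

tar -tzf maphys-mirror.tar.gz | head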

4 Transfer the files on the remote machine

You will need Spack on the cluster to install MaPHyS, so transfer it there, e.g. to curie:

tar czf spack.tar.gz spack/
scp spack.tar.gz curie:

Also transfer the tarballs:

scp maphys-mirror.tar.gz curie:

5 Install Spack on the cluster

Connect to the platform, e.g. Curie:

ssh curie

Move the previously copied archives wherever you want them and go there. Let's say your current directory is WORK_DIR:

export WORK_DIR=$PWD

Extract the previously transferred archives:

tar xf spack.tar.gz
tar xf maphys-mirror.tar.gz
cd spack

Sourcing this script makes the spack command available from anywhere:

. ./share/spack/setup-env.sh

You should also update your MODULEPATH to make the modules generated by Spack visible and to be able to use the spack load command, e.g.

export MODULEPATH=$WORK_DIR/spack/share/spack/modules/linux-x86_64:$MODULEPATH

Tell Spack that you have a local mirror from which to get the tarballs:

spack mirror add local file://${WORK_DIR}/maphys-mirror
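You can check that the mirror was registered correctly:

spack mirror list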

One critical step now is to ensure that Spack will use the compilers you intend to use on the cluster. Normally Spack auto-detects the compilers currently available in your environment. To check what Spack has found, use the compiler list and info commands:

spack compiler list

Here we can see that a gcc has been detected; to get details, use info:

spack compiler info gcc@4.4.7

Now suppose we want to use the Intel compilers instead, which were not in the environment yet:

module add mkl/14.0.3.174

Inform Spack that it can use the Intel compiler:

spack compiler add /opt/intel/14.0.3.174/bin/intel64

6 Install MaPHyS on the cluster Curie

6.1 Install MaPHyS with Netlib suite for test (optional)

Install MaPHyS with the Netlib suite (BLAS, LAPACK, ScaLAPACK), which is not performant but is reliable for testing. This will build OpenMPI, BLAS, LAPACK, ScaLAPACK, etc.

If you want to skip this part, move on to the next sections to learn how to use the Intel MKL kernels and an existing MPI installation.

spack install -v --keep-stage maphys +pastix +mumps ^netlib-scalapack ^netlib-lapack ^netlib-blas ^openmpi
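Once the installation completes, you can check that the corresponding modules were generated (assuming MODULEPATH was updated as described in Section 5):

module avail 2>&1 | grep -i maphys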

Load the environment and test your MaPHyS installation:

spack cd maphys +pastix +mumps ^netlib-scalapack ^netlib-lapack ^netlib-blas ^openmpi
spack load hwloc
spack load openmpi
spack load netlib-blas
spack load netlib-lapack
spack load netlib-scalapack
spack load metis
spack load scotch+esmumps
spack load mumps^netlib-scalapack
spack load pastix^netlib-blas
spack load maphys+pastix+mumps^netlib-scalapack^netlib-lapack^netlib-blas^openmpi
make check

6.2 Install MaPHyS with Intel MKL

First load the MKL into your environment, e.g. module add mkl/14.0.3.174, then install MaPHyS with the MKL:

spack install -v maphys +pastix +mumps ^mkl-scalapack ^mkl ^openmpi

If you get an error like

==> Error: maphys does not depend on mkl or mkl-scalapack

this means that MKLROOT is not set and that the Intel MKL is most likely not available in the environment.
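A quick sanity check before retrying is sketched below; the module name is the one used earlier on this page, so adapt it to your cluster:

# MKLROOT should point at the MKL installation; empty means the MKL is not loaded
echo $MKLROOT
# if empty, load the MKL module first, then rerun the spack install command
module add mkl/14.0.3.174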

6.3 Install MaPHyS with an already installed MPI

  1. Existing MPICH, MVAPICH2 or OpenMPI

    You can build the MPICH, MVAPICH2 or OpenMPI implementations through Spack. Usually, however, a well-tuned MPI is already available on the machine, so instead of installing a new one we want Spack to link against the MPI available in the environment. You can do that by using the @exist version.

    For instance, with an existing OpenMPI, load the module of your OpenMPI, e.g. module load mpi/openmpi, and set OPENMPI_DIR:

    export OPENMPI_DIR=path/to/your/mpi/install # (where lies bin, lib and include directories)
    spack install -v maphys +pastix +mumps ^mkl-scalapack ^mkl ^openmpi@exist
    
  2. Existing IntelMPI

    For IntelMPI we do not provide an installation, but we can use it if it is available in the environment. For example, if you have an Intel suite installed on your system, make it available in the environment; more precisely, set I_MPI_ROOT. On my laptop, for example:

    source /home/pruvost/intel/bin/compilervars.sh intel64
    source /home/pruvost/intel/impi/5.0.1.035/bin64/mpivars.sh
    spack install -v maphys %intel +pastix +mumps ^mkl-scalapack ^mkl ^intelmpi
    

    Be aware that to use IntelMPI, you should use the Intel compiler. This can be set with %intel. Remember that you should have defined your compilers first. Check that it is available:

    spack compiler list
    

    If not, add it:

    spack compiler add /path/to/your/intel/compilers
    

    You can also edit the file $HOME/.spack/compilers.yaml to add or remove compilers, or use the spack config edit compilers command (a sketch of this file is given after this list).

  3. Existing Bullxmpi

    Load the module of your Bullxmpi, e.g. module load mpi/bullxmpi

    export BULLXMPI_DIR=path/to/your/bullxmpi
    spack install -v maphys +pastix +mumps ^mkl-scalapack ^mkl ^bullxmpi
    

    Keep in mind that if you loaded the Intel compilers before your Bullxmpi module, the MPI wrapper will most likely use the Intel compilers, so you should specify %intel in your Spack spec:

    spack install -v maphys%intel +pastix +mumps ^mkl-scalapack ^mkl ^bullxmpi
    
  4. Other existing MPI vendors

    If no Spack package exists for the MPI implementation you want to use, please send an email to: florent.pruvost@inria.fr or emmanuel.agullo@inria.fr
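Compilers declared to Spack are stored in $HOME/.spack/compilers.yaml. As a rough sketch only, for the old-style Spack used here (the exact layout depends on your Spack version, and the paths are placeholders to replace with your own):

compilers:
  linux-x86_64:
    intel@14.0.3:
      cc: /path/to/your/intel/compilers/icc
      cxx: /path/to/your/intel/compilers/icpc
      f77: /path/to/your/intel/compilers/ifort
      fc: /path/to/your/intel/compilers/ifort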

6.4 Troubleshooting

OpenMPI does not build with the Intel compilers; there is a bug between Spack's scripts and libtool's. If you want to build with the Intel compilers, we recommend using an existing, already installed MPI (built with Intel, of course). If you really want to build OpenMPI while using the Intel compilers, we suggest building the MaPHyS stack with Intel and using gcc for OpenMPI only, e.g.

spack install -v maphys%intel +pastix +mumps ^scotch+esmumps ^mkl-scalapack ^mkl ^openmpi%gcc

7 Install the multithreaded version of MaPHyS on the cluster Plafrim2

Here, we propose to install the multithreaded version of MaPHyS through Spack on plafrim2. For more details on the Spack tool, we recommend having a look at http://morse.gforge.inria.fr/spack/spack.html.

The following code blocks install two configurations of MaPHyS:

  • a first full Intel one: Intel-MKL + Intel-MPI + Icc
  • a second full GNU one: Gcc-MKL + OpenMPI + Gcc

In the following, we suppose Spack is already installed on Plafrim2. See Sections 2, 4 and 5.

You can also try the all-in-one installation scripts described in Section 7.5.

7.1 Install Spack on plafrim2

As we won't have a direct internet connection to download sources on plafrim2, we first have to transfer the files to the cluster:

git clone https://github.com/solverstack/spack.git
cd spack
cd -
tar czvf spack.tar.gz spack/
scp spack.tar.gz plafrim2:

Also transfer the tarballs required to install MaPHyS:

wget http://morse.gforge.inria.fr/maphys/maphys-mirror.tar.gz
scp maphys-mirror.tar.gz plafrim2:

We can now connect to plafrim2 and extract the transferred archives:

ssh plafrim2
tar -xf spack.tar.gz
tar -xf maphys-mirror.tar.gz

7.2 Spack environment variables setting

Spack environment file:

export SPACK_ROOT=$PWD/spack
cat <<EOF > env-spack.sh
export SPACK_ROOT=${SPACK_ROOT}
source ${SPACK_ROOT}/share/spack/setup-env.sh
export MODULEPATH=${MODULEPATH}:${SPACK_ROOT}/share/spack/modules/linux-x86_64
EOF

Source Spack environment (binary path + module support)

source env-spack.sh

Intel configuration file:

echo "-Xlinker -rpath=/cm/shared/apps/intel/composer_xe/2013_sp1.3.174/compiler/lib/intel64" > intel.cfg

Configure Spack to look for local tarballs when installing maphys:

spack mirror add local file://${PWD}/maphys-mirror

7.3 Installation: full Intel

The following blocks install MaPHyS with the Intel compiler version 14.0.3 (on top of gcc version 6.1.0) on plafrim2, as used in the specs below. They also register the existing Hwloc and Intel-MPI installations within Spack.

First, we load the intel compiler:

#--------------------#
# Install full Intel #
#--------------------#
spack purge
module purge
module load compiler/gcc/6.1.0 compiler/intel/64/2013_sp1.3.174
spack compiler add /cm/shared/apps/gcc/6.1.0/bin/gcc
spack compiler add /cm/shared/apps/intel/composer_xe/2013_sp1.3.174/bin/icc

To use intel compilers, we have to add an intel configuration file. For more details, see http://morse.gforge.inria.fr/spack/spack.html#sec-3-4.

export ICCCFG=$PWD/intel.cfg
export ICPCCFG=$PWD/intel.cfg
export IFORTCFG=$PWD/intel.cfg

Then, we install existing intel-MPI within Spack:

source /cm/shared/apps/intel/composer_xe/2016.3.264/impi/5.1.3.210/intel64/bin/mpivars.sh
spack install intelmpi@exist%intel@14.0.3

Then, we install existing hwloc within Spack:

# HWLOC
export HWLOC_DIR=/cm/shared/apps/hwloc/1.9.1
spack install hwloc@exist%intel@14.0.3

Existing intel-MKL blas and lapack are installed within Spack:

source /cm/shared/apps/intel/composer_xe/2013_sp1.3.174/mkl/bin/mklvars.sh intel64
# Intel MKL + icc
spack install mkl%intel@14.0.3

Also install MKL-scalapack:

spack install mkl-scalapack%intel@14.0.3 ^intelmpi@exist

Finally, we install the multithreaded full Intel version of MaPHyS with all the modules registered above:

# MaPHyS mt intel compiler & intel mpi
spack install -v --keep-stage maphys%intel@14.0.3+blasmt+mumps ^mumps@5.0.1 ^mkl-scalapack ^mkl ^hwloc@exist ^intelmpi@exist
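Once the command completes, you can check that MaPHyS is known to Spack:

spack find maphys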

Once the full intel version is installed, we can generate the following script to load this version of MaPHyS:

# Generate script for MaPHyS intel modules loading
LOAD_INTEL="load_maphys_icc.sh"
cat <<EOF > ${LOAD_INTEL}
module purge
module load slurm
module load compiler/gcc/6.1.0
module load compiler/intel/64/2013_sp1.3.174
spack load maphys%intel+blasmt
spack load pastix@5.2.2.22%intel
spack load mumps%intel@14.0.3
spack load mkl@exist%intel@14.0.3
spack load scotch@6.0.4%intel@14.0.3
spack load hwloc@exist%intel@14.0.3
spack load intelmpi@exist%intel@14.0.3
EOF

To load this version of MaPHyS, source the file generated by the previous block:

source ${LOAD_INTEL}
module list

Test installation:

source ${LOAD_INTEL}
mkdir -p ~/test_install_maphys
spack cd -i maphys%intel@14.0.3
cd examples
mpirun -np 4 ./smph_examplekv real_bcsstk17.in   > ~/test_install_maphys/iccstest.out
mpirun -np 4 ./dmph_examplekv real_bcsstk17.in   > ~/test_install_maphys/iccdtest.out
mpirun -np 4 ./cmph_examplekv complex_young1c.in > ~/test_install_maphys/iccctest.out
mpirun -np 4 ./zmph_examplekv complex_young1c.in > ~/test_install_maphys/iccztest.out
mpirun -np 4 ./smph_examplekv real_bcsstk17_mumps.in   > ~/test_install_maphys/iccstest_mumps.out
mpirun -np 4 ./dmph_examplekv real_bcsstk17_mumps.in   > ~/test_install_maphys/iccdtest_mumps.out
mpirun -np 4 ./cmph_examplekv complex_young1c_mumps.in > ~/test_install_maphys/iccctest_mumps.out
mpirun -np 4 ./zmph_examplekv complex_young1c_mumps.in > ~/test_install_maphys/iccztest_mumps.out

cd ~

7.4 Installation: full GNU

The following blocks install MaPHyS with the gcc compiler version 6.1.0 on plafrim2. They also register the existing Hwloc and OpenMPI installations within Spack.

First, we load the gcc compiler:

#--------------------#
#  Install full GNU  #
#--------------------#
spack purge
module purge
module load compiler/gcc/6.1.0 build/cmake/3.2.1
spack compiler find

Then, we install existing hwloc within Spack:

# HWLOC
export HWLOC_DIR=/cm/shared/apps/hwloc/1.9.1
spack install hwloc@exist%gcc@6.1.0

Then, we install existing OpenMPI within Spack:

export OPENMPI_DIR=/cm/shared/apps/openmpi/gcc/64/2.0.0-hfi/
spack install openmpi@exist%gcc@6.1.0 ^hwloc@exist%gcc@6.1.0

Existing Intel MKL BLAS and LAPACK (for use with gcc) are installed within Spack:

source /cm/shared/apps/intel/composer_xe/2013_sp1.3.174/mkl/bin/mklvars.sh intel64
# Intel MKL-blas and MKL-lapack + gcc
spack install mkl%gcc@6.1.0

Also install MKL-scalapack:

spack install mkl-scalapack%gcc@6.1.0 ^openmpi@exist ^hwloc@exist%gcc

Finally, we install the multithreaded full GNU version of MaPHyS with all the modules registered above:

spack install -v --keep-stage maphys%gcc@6.1.0+blasmt+mumps ^mumps@5.0.1 ^mkl-scalapack%gcc ^mkl ^hwloc@exist ^openmpi@exist

Once the full GNU version is installed, we can generate the following script to load this version of MaPHyS:

# Generate script for MaPHyS gcc modules loading
LOAD_GCC="load_maphys_gcc.sh"
cat <<EOF > ${LOAD_GCC}
module purge
module load slurm
module load compiler/gcc/6.1.0
module load compiler/intel/64/2013_sp1.3.174
spack load maphys%gcc@6.1.0+blasmt
spack load pastix@5.2.2.22%gcc@6.1.0
spack load mumps%gcc@6.1.0
spack load mkl@exist%gcc@6.1.0
spack load scotch@6.0.4%gcc@6.1.0
spack load hwloc@exist%gcc@6.1.0
spack load openmpi@exist%gcc@6.1.0
EOF

To load this version of MaPHyS, source the file generated by the previous block:

source load_maphys_gcc.sh
module list

Test installation:

source load_maphys_gcc.sh
mkdir -p ~/test_install_maphys
spack cd -i maphys%gcc
cd examples
mpirun -np 4 ./smph_examplekv real_bcsstk17.in   > ~/test_install_maphys/gccstest.out
mpirun -np 4 ./dmph_examplekv real_bcsstk17.in   > ~/test_install_maphys/gccdtest.out
mpirun -np 4 ./cmph_examplekv complex_young1c.in > ~/test_install_maphys/gccctest.out
mpirun -np 4 ./zmph_examplekv complex_young1c.in > ~/test_install_maphys/gccztest.out
mpirun -np 4 ./smph_examplekv real_bcsstk17_mumps.in   > ~/test_install_maphys/gccstest_mumps.out
mpirun -np 4 ./dmph_examplekv real_bcsstk17_mumps.in   > ~/test_install_maphys/gccdtest_mumps.out
mpirun -np 4 ./cmph_examplekv complex_young1c_mumps.in > ~/test_install_maphys/gccctest_mumps.out
mpirun -np 4 ./zmph_examplekv complex_young1c_mumps.in > ~/test_install_maphys/gccztest_mumps.out
cd ~

7.5 Installation scripts

Alternatively, you can try these all-in-one scripts to install maphys directly.

http://morse.gforge.inria.fr/maphys/install_plafrim_local.sh

http://morse.gforge.inria.fr/maphys/install_plafrim_cluster.sh

First, download the two scripts into the same directory and launch the local script.

wget http://morse.gforge.inria.fr/maphys/install_plafrim_local.sh
wget http://morse.gforge.inria.fr/maphys/install_plafrim_cluster.sh
chmod +x install_plafrim_local.sh
chmod +x install_plafrim_cluster.sh
./install_plafrim_local.sh

This will download the files you need and copy them to plafrim2. Then you just need to run the cluster script on plafrim2.

ssh plafrim2
./install_plafrim_cluster.sh

Do not hesitate to look at the script code and change the library and compiler versions.

8 Install MaPHyS on the cluster Occigen

Here, we propose to install the multithreaded version of MaPHyS through Spack on Occigen. For more details on the Spack tool, we recommend having a look at http://morse.gforge.inria.fr/spack/spack.html.

The following code blocks install two configurations of MaPHyS:

  • a first, fully multithreaded one: Intel-MKL + Intel-MPI + Icc + PaStiX only
  • a second, singlethreaded one: Intel-MKL + Intel-MPI + Icc + PaStiX + MUMPS

In the following, we assume the user has configured their ssh configuration file so that the command ssh occigen connects to the cluster.

8.1 Install Spack on occigen

As we won't have a direct internet connection to download sources on occigen, we first have to transfer the files to the cluster:

git clone https://github.com/solverstack/spack.git
cd spack
cd -
tar czvf spack.tar.gz spack/
scp spack.tar.gz occigen:

Also transfer the tarballs required to install MaPHyS:

wget http://morse.gforge.inria.fr/maphys/maphys-mirror.tar.gz
scp maphys-mirror.tar.gz occigen:

We can now connect to occigen and extract the transferred archives:

ssh occigen
tar -xf spack.tar.gz
tar -xf maphys-mirror.tar.gz

8.2 Spack environment variables setting

Spack environment file:

echo "source spack/share/spack/setup-env.sh" > env-spack.sh
echo "export MODULEPATH=/home/mkuhn/spack/share/spack/modules/linux-x86_64:$MODULEPATH" >> env-spack.sh

Source Spack environment (binary path + module support)

source env-spack.sh

Intel configuration file:

echo "-Xlinker -rpath=/opt/software/intel/compilers_and_libraries_2016.3.210/linux/compiler/lib/intel64" > intel.cfg

Configure Spack to look for local tarballs when installing maphys:

spack mirror add local file://${PWD}/maphys-mirror

8.3 Installation: multithreaded case

The following blocks install MaPHyS with the Intel compiler version 16.0.3 on occigen. They also register the existing Hwloc and Intel-MPI installations within Spack.

First, we load the intel compiler:

#--------------------#
# Install full Intel #
#--------------------#
spack purge
module purge
module load intel/16.3
spack compiler find

To use intel compilers, we have to add an intel configuration file. For more details, see http://morse.gforge.inria.fr/spack/spack.html#sec-3-4.

export ICCCFG=$PWD/intel.cfg
export ICPCCFG=$PWD/intel.cfg
export IFORTCFG=$PWD/intel.cfg

Then, we install existing intel-MPI within Spack:

source /opt/software/intel/impi/5.1.2.150/intel64/bin/mpivars.sh
spack install intelmpi@exist%intel@16.0.3

Then, we install existing hwloc within Spack:

# HWLOC
export HWLOC_DIR=/opt/software/tools/hwloc/1.11.0
spack install hwloc@exist%intel@16.0.3

Existing intel-MKL blas and lapack are installed within Spack:

source /opt/software/intel/compilers_and_libraries_2016.3.210/linux/mkl/bin/mklvars.sh intel64
# Intel MKL + icc
spack install mkl%intel@16.0.3

Finally, we install the multithreaded full Intel version of MaPHyS with all the modules registered above:

# MaPHyS mt intel compiler & intel mpi
spack install maphys%intel@16.0.3+blasmt~mumps ^mkl ^hwloc@exist ^intelmpi@exist

Once the full intel version is installed, we can generate the following script to load the module of this MaPHyS version with all its dependencies:

# Generate script for MaPHyS intel modules loading
LOAD_MT="maphys_multithread.sh"
cat <<EOF > ${LOAD_MT}
module purge
source /panfs/panasas/cnt0040/isa1512/mkuhn/env-spack.sh
module load intel/16.3
spack load maphys%intel@16.0.3+blasmt
spack load pastix@5.2.2.22%intel@16.0.3+blasmt
spack load mkl@exist%intel@16.0.3
spack load scotch@6.0.4%intel@16.0.3~esmumps
spack load hwloc@exist%intel@16.0.3
spack load intelmpi@exist%intel@16.0.3
EOF

To load this version of MaPHyS, source the file generated by the previous block:

source maphys_multithread.sh

8.4 Installation: singlethreaded

The following blocks install MaPHyS with the Intel compiler version 16.0.3 on occigen. They also register the existing Hwloc and Intel-MPI installations within Spack.

First, we load the intel compiler:

#--------------------#
# Install full Intel #
#--------------------#
spack purge
module purge
module load intel/16.3
spack compiler find

To use intel compilers, we have to add an intel configuration file. For more details, see http://morse.gforge.inria.fr/spack/spack.html#sec-3-4.

export ICCCFG=$PWD/intel.cfg
export ICPCCFG=$PWD/intel.cfg
export IFORTCFG=$PWD/intel.cfg

Then, we install existing intel-MPI within Spack:

source /opt/software/intel/impi/5.1.2.150/intel64/bin/mpivars.sh
spack install intelmpi@exist%intel@16.0.3

Then, we install existing hwloc within Spack:

# HWLOC
export HWLOC_DIR=/opt/software/tools/hwloc/1.11.0
spack install hwloc@exist%intel@16.0.3

Existing intel-MKL blas and lapack are installed within Spack:

source /opt/software/intel/compilers_and_libraries_2016.3.210/linux/mkl/bin/mklvars.sh intel64
# Intel MKL + icc
spack install mkl%intel@16.0.3

Finally, we install the singlethreaded full Intel version of MaPHyS with all the modules registered above:

# MaPHyS singlethreaded, intel compiler & intel mpi
spack install -n maphys%intel@16.0.3+mumps ^mumps~examples ^mkl-scalapack ^mkl ^hwloc@exist ^intelmpi@exist

Once the full intel version is installed, we can generate the following script to load the module of this MaPHyS version with all its dependencies:

# Generate script for MaPHyS intel modules loading
LOAD_ST="maphys_singlethread.sh"
cat <<EOF > ${LOAD_ST}
module purge
source /panfs/panasas/cnt0040/isa1512/mkuhn/env-spack.sh
module load intel/16.3
spack load maphys%intel@16.0.3~blasmt
spack load pastix@5.2.2.22%intel@16.0.3~blasmt
spack load mkl@exist%intel@16.0.3
spack load mumps%intel@16.0.3
spack load scotch@6.0.4%intel@16.0.3+esmumps
spack load hwloc@exist%intel@16.0.3
spack load intelmpi@exist%intel@16.0.3
EOF

To load this version of MaPHyS, source the file generated by the previous block:

source maphys_singlethread.sh

9 Install MaPHyS on the cluster Nemo

Here, we propose to install the multithreaded version of MaPHyS through Spack on Nemo, a cluster available to CERFACS users. The following configuration uses Intel-MKL + Intel-MPI + Icc.

As Nemo has access to the internet, we can connect directly to the cluster and install MaPHyS from there.

The following script should allow you to install MaPHyS directly on the cluster. Use the configuration section of the script to indicate the installation folder, the intel/mkl/intelmpi modules to use, the path of the Intel library, and the versions of the dependencies needed by MaPHyS.

http://morse.gforge.inria.fr/maphys/install_nemo.sh

# Get the script
wget http://morse.gforge.inria.fr/maphys/install_nemo.sh
# Copy and connect to nemo
scp install_nemo.sh nemo:~
ssh nemo
# Launch the script
chmod +x install_nemo.sh
./install_nemo.sh

The script installs both the release and debug versions of MaPHyS. Its output gives you the paths of the files to source whenever you want to use MaPHyS; you will need to source them in the batch files of any job depending on MaPHyS.
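As an illustration only, a batch file could look like the sketch below; the sourced file name is hypothetical (use the paths printed by install_nemo.sh) and the scheduler directives must be adapted to Nemo:

#!/bin/bash
#SBATCH --ntasks=4
# hypothetical file name: use the path reported by install_nemo.sh
source $HOME/maphys-release-env.sh
# run one of the MaPHyS examples (adapt the working directory and input file)
mpirun -np 4 ./dmph_examplekv real_bcsstk17.in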

10 Install MaPHyS on the cluster Anselm

Here, we propose to install MaPHyS on the it4i cluster Anselm; the procedure for Salomon should be quite similar. The cluster has access to external repositories, which makes the installation easier.

10.1 Getting started

First, let's connect to anselm and set some installation directories:

ssh anselm

INSTALL_DIR=${HOME}/INSTALL_MAPHYS
mkdir -p $INSTALL_DIR

10.2 Install spack on Anselm

We can directly clone the spack repository.

cd $INSTALL_DIR
git clone https://github.com/solverstack/spack.git
cd spack
git checkout morse
. ./share/spack/setup-env.sh
export MODULEPATH=$INSTALL_DIR/spack/share/spack/modules/linux-x86_64:$MODULEPATH

And we save the spack environment in a script for future use.

cd ..
export SPACK_ROOT=$PWD/spack
cat <<EOF > env-spack.sh
export SPACK_ROOT=${SPACK_ROOT}
source ${SPACK_ROOT}/share/spack/setup-env.sh
export MODULEPATH=${MODULEPATH}:${SPACK_ROOT}/share/spack/modules/linux-x86_64
EOF

10.3 Setting the environment to compile with Intel compiler

Let's load the modules we will need to install MaPHyS:

module load CMake/3.5.2
module load intel/2017.00
module load hwloc/1.11.3-GCC-5.3.0-2.26

Set Intel configuration file:

echo "-Xlinker -rpath=/apps/all/imkl/2017.0.098-iimpi-2017.00-GCC-5.4.0-2.26/mkl/lib/intel64 -Xlinker -rpath=/apps/all/impi/2017.0.098-iccifort-2017.0.098-GCC-5.4.0-2.26/lib64 -Xlinker -rpath=/apps/all/ifort/2017.0.098-GCC-5.4.0-2.26/compilers_and_libraries_2017.0.098/linux/mpi/intel64 -Xlinker -rpath=/apps/all/ifort/2017.0.098-GCC-5.4.0-2.26/compilers_and_libraries_2017.0.098/linux/compiler/lib/intel64 -Xlinker -rpath=/apps/all/ifort/2017.0.098-GCC-5.4.0-2.26/lib/intel64 -Xlinker -rpath=/apps/all/ifort/2017.0.098-GCC-5.4.0-2.26/lib -Xlinker -rpath=/apps/all/icc/2017.0.098-GCC-5.4.0-2.26/compilers_and_libraries_2017.0.098/linux/compiler/lib/intel64 -Xlinker -rpath=/apps/all/icc/2017.0.098-GCC-5.4.0-2.26/lib/intel64 -Xlinker -rpath=/apps/all/icc/2017.0.098-GCC-5.4.0-2.26/lib -Xlinker -rpath=/apps/all/binutils/2.26-GCCcore-5.4.0/lib -Xlinker -rpath=/apps/all/GCCcore/5.4.0/lib/gcc/x86_64-unknown-linux-gnu/5.4.0 -Xlinker -rpath=/apps/all/GCCcore/5.4.0/lib64 -Xlinker -rpath=/apps/all/GCCcore/5.4.0/lib" > intel.cfg
export ICCCFG=$PWD/intel.cfg
export ICPCCFG=$PWD/intel.cfg
export IFORTCFG=$PWD/intel.cfg

10.4 MaPHyS installation

Now we can install the MaPHyS dependencies with Spack, starting with the ones already available on the cluster: hwloc, cmake, intelmpi and mkl.

# HWLOC
echo " ### HWLOC ### "
export HWLOC_DIR=/apps/all/hwloc/1.11.3-GCC-5.3.0-2.26/
spack install hwloc@exist%intel@17.0.0
# CMAKE
echo " ### CMAKE ### "
export CMAKE_DIR=/apps/all/CMake/3.5.2/
spack install cmake@exist%intel@17.0.0
# INTELMPI
echo " ### INTELMPI ### "
export INTELMPI_DIR=/apps/all/impi/2017.0.098-iccifort-2017.0.098-GCC-5.4.0-2.26/lib64
spack install intelmpi@exist%intel@17.0.0

# MKL: blasmt + scalapack...
echo " ### MKL ### "
source /apps/all/imkl/2017.0.098-iimpi-2017.00-GCC-5.4.0-2.26/mkl/bin/mklvars.sh intel64
spack install mkl%intel@17.0.0

Finally, we can install MaPHyS with the following command. In this example we compile the develop version of MaPHyS in debug mode:

echo " ### MAPHYS ### "
spack install -v -j 4 maphys@develop%intel@17.0.0+mumps+pastix+blasmt+ib-bgmres-dr+debug ^pastix@5.2.2.22 ^hwloc@exist ^intelmpi@exist ^cmake@exist ^mkl@exist ^mkl-scalapack%intel

Now we save the environment to be able to load MaPHyS easily in the future:

LOAD_INTEL="load_intel.sh"
rm -f ${LOAD_INTEL}
cat <<EOF > ${LOAD_INTEL}
source ${INSTALL_DIR}/env-spack.sh
module purge
module load intel/2017.00
module load hwloc/1.11.3-GCC-5.3.0-2.26
EOF

To load the MaPHyS environment, you just need to source this script:

source ${INSTALL_DIR}/load_intel.sh
spack cd -i maphys && cd examples
mpirun -np 2 ./dmph_examplekv ./real_bcsstk17.in

11 Link your application with MaPHyS

Let's consider a small program, test.F90. Don't forget to load MaPHyS and its dependencies into the environment, using spack load for instance:

spack load hwloc
spack load openmpi
spack load netlib-blas
spack load netlib-lapack
spack load netlib-scalapack
spack load metis
spack load scotch+esmumps
spack load mumps^netlib-scalapack
spack load pastix^netlib-blas
spack load maphys+pastix+mumps^netlib-scalapack^netlib-lapack^netlib-blas^openmpi

11.1 Non multithreaded version

To compile and link the test program with MaPHyS, execute

mpif90 `pkg-config --cflags maphys` -c test.F90
mpif90 test.o -o test `pkg-config --libs maphys`
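If pkg-config cannot find maphys, check that the spack load commands above have put the MaPHyS pkg-config file on your PKG_CONFIG_PATH; a quick check, under that assumption:

# pkg-config must be able to locate maphys.pc
pkg-config --exists maphys && echo "maphys.pc found"
pkg-config --cflags maphys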

11.2 Multithreaded version

The multithreaded version of MaPHyS depends on an additional library called toolkit. This dependency is made explicit in the pkg-config file of MaPHyS, and the toolkit library is provided by MaPHyS and compiled along with it.

To compile and link the test program with the multithreaded MaPHyS, execute

mpif90 `pkg-config --cflags --static maphys` -c test.F90
mpif90 test.o -o test `pkg-config --libs --static maphys`

Author: HiePACS

Created: 2017-03-24 Fri 14:03
