Install and use HiePACS solvers

HiePACS is a French research team at Inria Bordeaux, https://team.inria.fr/hiepacs/, involved in the development of high performance linear solvers such as Chameleon, PaStiX, HiPS, MaPHyS, qr_mumps and ScalFMM.

These libraries are far from being monolithic codes: they are designed in a modular spirit, relying on several separate libraries. Installing and using them has become complex for end-users as well as for our close collaborators.

In order to address this complexity we have chosen to rely on Spack, see 1.1, which allows us to efficiently install our solver stacks. The originality is that we let developers tune as finely as they want the pieces they are specialists of (be it a runtime system, a scheduler or a numerical algorithm), in a way that is interoperable with the other components, which are automatically installed and tuned for the target platform.

This document first describes Spack features and usage in section 1, then details how to use Spack to install HiePACS solver stacks in section 2.

This page has been generated from the following emacs org-mode file, http://morse.gforge.inria.fr/spack/spack.org, which is also available here:

svn checkout svn://scm.gforge.inria.fr/svnroot/morse/tutorials/spack spack_morse/ # or
svn checkout https://scm.gforge.inria.fr/anonscm/svn/morse/tutorials/spack spack_morse/

1 Spack-MORSE usage

1.1 Introduction

Spack is a flexible package manager designed to support multiple versions, configurations, platforms, and compilers.

It is developed at Lawrence Livermore National Laboratory and helps to efficiently install complex scientific software stacks, e.g. High Performance Computing software designed in a modular way.

Spack is available on github here: https://github.com/LLNL/spack. The official documentation is available here: http://llnl.github.io/spack.

Many well-known HPC libraries are packaged in this Spack repository. Nevertheless, for our specific needs, we have forked Spack here: https://github.com/solverstack/spack.

In this forked repository, a set of new packages is available, such as the libraries developed in the context of the MORSE project. More information about this project: http://icl.cs.utk.edu/projectsdev/morse/index.html.

Here is a summary of packages released:

[Figure spack-morse-packages.jpg: summary of the released packages]

The Spack package manager has been chosen because:

  • installation is simple
    • just needs Python and a couple of system libraries, of course
  • flexible
    • no need to be root to install packages
      • can be used on remote machines like clusters
    • we do not need to build compilers
      • detect available ones on the system
      • or give your specific compiler directory to Spack
    • a lot of options available to tune builds for complex software stacks
      • compilers, versions, build options, different vendors for virtual packages, etc.
  • creating new packages is easy
  • automatic generation of modulefiles
  • can work with local filesystem
    • just tell Spack the local paths where your tarballs are collected
    • a required feature on clusters not connected to the internet

Spack weaknesses:

  • not many developers, not a large community
  • no update/upgrade features
  • relies on Python 2; not compatible with Python >= 3 for now
  • seems to be mainly tested on Unix with GNU compilers
    • not compatible with Windows
      • some python system calls will fail

1.2 Prerequisites

1.2.1 Python installation to run Spack

Spack is written in Python: make sure you have Python 2.7.

You can install it from binary packages, e.g.

sudo apt-get install -y python2.7-dev

or install it from sources

wget https://www.python.org/ftp/python/2.7.11/Python-2.7.11.tgz
tar xf Python-2.7.11.tgz
cd Python-2.7.11/
./configure --prefix=$HOME/python-2.7.11
make install
export PATH=$HOME/python-2.7.11/bin:$PATH

1.2.2 System library requirements to build all MORSE packages:

It is not mandatory to install everything: if you don't care about SimGrid, for instance, there is no need to install libboost.

  1. System Package list
    • python 2.7: to use spack
    • curl patch bzip2: for spack (to manipulate releases)
    • vim emacs: to edit files
    • git subversion mercurial: to fetch versions from VCS branches
    • build-essential gfortran: gnu compilers and make
    • autoconf automake cmake cmake-data doxygen texinfo: autotools and misc. tools used by several libraries
    • libtool libtool-bin: for starpu, and co
    • libboost-dev: for simgrid, and co
    • gawk: for fxt (need an awk with gensub)
    • bison flex: for parsec
    • binutils-dev libelf-dev libiberty-dev: for eztrace
    • libz-dev: for eztrace, scotch, and co
    • libqt4-dev freeglut3-dev: for vite
    • environment-modules: to use module features
  2. Details about which MORSE package needs which system packages

    The same information, but organized by the library we may want to install:

    • basics for all Spack-MORSE packages : python (2.7 recommended) curl patch bzip2 pkg-config git subversion mercurial build-essential gfortran autoconf automake cmake environment-modules
    • starpu: autoconf (>=2.64), libtool (>=2.4)
    • simgrid: libboost-dev
    • fxt: gawk
    • eztrace: binutils-dev libelf-dev autoconf (>=2.68) libtool (>=2.4)
    • scotch: libz-dev, bison, flex
    • vite: libqt4-dev freeglut3-dev
    • parsec: bison, flex
  3. Debian (apt-get) packages:
    sudo apt-get update
    sudo apt-get install -y vim emacs curl patch bzip2 libz-dev git subversion mercurial build-essential gfortran clang nvidia-cuda-toolkit autoconf automake cmake cmake-data doxygen texinfo pkg-config libtool libtool-bin libboost-dev gawk bison flex binutils-dev libelf-dev libiberty-dev libqt4-dev freeglut3-dev environment-modules
    
  4. RedHat (yum) packages:
    sudo yum update
    sudo yum -y install vim emacs curl patch bzip2 zlib-devel git subversion mercurial
    sudo yum -y groupinstall 'Development Tools'
    sudo yum -y install gcc-gfortran gcc-c++ cuda autoconf automake cmake cmake-data doxygen texinfo pkgconfig libtool boost boost-devel boost-doc gawk bison flex binutils-devel elfutils-libelf qt qt-devel qt4 environment-modules
    
  5. Fedora (dnf) packages
    sudo dnf update
    sudo dnf -y install vim emacs curl patch bzip2 zlib-devel git subversion mercurial
    sudo dnf -y groupinstall 'Development Tools'
    sudo dnf -y install gcc-gfortran gcc-c++ cuda autoconf automake cmake doxygen texinfo pkgconfig libtool boost-devel gawk bison flex binutils-devel elfutils-libelf qt qt4 environment-modules
    sudo dnf -y install qt-devel
    
  6. ArchLinux (pacman) packages:
    sudo pacman -Sy
    sudo pacman -S vim emacs curl patch bzip2 git subversion mercurial gcc-fortran clang autoconf automake cmake doxygen texinfo pkg-config libtool boost gawk bison flex binutils libelf qt4 freeglut
    
  7. Mac OS X (macports) packages:

    Install procedure of macports:

    wget https://distfiles.macports.org/MacPorts/MacPorts-2.3.4.tar.bz2
    tar xvfj MacPorts-2.3.4.tar.bz2
    cd MacPorts-2.3.4
    ./configure
    make
    sudo make install
    rm -rf MacPorts-*
    

    Add this in your .profile or .bashrc and source it or open a new shell:

    # MacPorts
    export PATH=/opt/local/bin:/opt/local/sbin:$PATH
    export DYLD_LIBRARY_PATH=/opt/local/gcc49/lib:$DYLD_LIBRARY_PATH
    
    sudo port selfupdate
    

    Install packages with macports:

    sudo port install vim emacs curl bzip2 pkgconfig git subversion mercurial gcc49 python27 libtool cctools clang-3.4 cmake-devel autoconf automake gawk gsed doxygen texinfo boost bison flex binutils coreutils libelf
    sudo port select --set python python27
    sudo port select --set python2 python27
    

    A trick for the clang compiler (compatibility with intrinsics and SSE/AVX assembly)

    sudo ln -s /opt/local/bin/clang-mp-3.4 /opt/local/bin/clang
    
  8. Mac OS X (brew) packages:
    brew update
    brew install vim emacs curl lbzip2 git subversion mercurial gcc autoconf automake make doxygen texinfo pkg-config libtool boost gawk bison flex binutils libelf qt4 modules
    

1.3 Spack-MORSE installation

1.3.1 Download Spack containing MORSE packages

Here are the steps to get the "morse" branch of Spack:

git clone https://github.com/solverstack/spack.git
cd spack

1.3.2 Initialize the environment to use Spack

The following step is optional; we use the SPACK_ROOT environment variable later in this document.

export SPACK_ROOT=/path/to/where/you/unpacked/spack

Sourcing the following script makes the spack command available from anywhere.

. $SPACK_ROOT/share/spack/setup-env.sh

Alternatively, simply add the bin directory to your path.

export PATH=$SPACK_ROOT/bin:$PATH

Of course, it is recommended to add these lines to your .bashrc file in order to make Spack available in any new shell you open.
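
For example, assuming Spack was cloned into $HOME/spack, you could append something like:

# Spack
export SPACK_ROOT=$HOME/spack
. $SPACK_ROOT/share/spack/setup-env.sh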

1.4 Spack usage

Do not hesitate to have a look at the official Spack documentation online: http://llnl.github.io/spack/

1.4.1 spack compiler list: what compilers are known to Spack

Check that you have some compilers known to Spack

spack compiler list

You can add your compilers by giving the path where they can be found

spack compiler add /usr/bin
spack compiler info clang
spack compiler info gcc
spack compiler info intel

You can edit the file $HOME/.spack/compilers.yaml to add/remove compilers. The principle is to give the paths to the compiler executables. Spack provides a shortcut to do that:

export EDITOR=emacs
spack config edit compilers
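
The file is plain YAML; the exact layout depends on your Spack version, but an entry looks roughly like this (a hedged sketch with hypothetical paths):

compilers:
  linux-x86_64:
    gcc@4.9.2:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran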

1.4.2 Spack list: to get the list of available packages

spack list

1.4.3 Spack info package : to get information about a package

spack info pastix

Pay attention to:

  • the versions available, releases or git/svn branches,
  • the variants you can build and the default status

1.4.4 Spack spec a spec : to resolve a specification

Before installing anything, have a look at the stack that will actually be installed.

spack spec pastix

This lets you see what will actually be installed when calling spack install pastix.

You can play with specifications to get the appropriate build you want:

  • to act on the version, use @
  • to act on the compiler, use %
  • to act on a dependency, use ^
  • to add/remove a variant, use +/~

Example of a version with different MPI

spack spec pastix+mpi ^mpich
spack spec pastix+mpi ^openmpi
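
These modifiers can also be combined in a single spec, for instance (illustrative):

spack spec pastix@5.2.2.22%gcc+mpi~examples ^openmpi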

Notice that we depend on virtual packages such as BLAS or MPI. To get the list of available providers of a virtual package, try the following:

spack providers blas
spack providers mpi

Note that for proprietary software such as Intel (Compiler, MKL, MPI) or NVIDIA CUDA, we do not provide the installation but we can use them if available in the environment. For example, if you have an Intel suite installed on your system, make it available in the environment and our Spack packages can use it in the stack. More precisely, we need some environment variables to be set, such as MKLROOT, I_MPI_ROOT and CUDA_ROOT. On my laptop, for example, for Intel:

source /home/pruvost/intel/bin/compilervars.sh intel64
source /home/pruvost/intel/mkl/bin/mklvars.sh intel64
source /home/pruvost/intel/impi/5.0.1.035/bin64/mpivars.sh
spack spec pastix+mpi ^mkl ^intelmpi

1.4.5 Spack install a spec

spack install pastix
  • use -v just after the install word to get a verbose mode
  • use -d just after the spack word to get a debug mode
spack -d install -v pastix

1.4.6 Spack find : to look for installed packages

  • Spack find will look for any installed package
  • Spack find spec will find packages corresponding to the spec
spack find pastix
  • using -d to add dependencies and -p to add the install path
spack find -d pastix
spack find -p pastix

1.4.7 Spack reindex

Sometimes, when your installation directories are not clean, you need to tell Spack to update its database of already installed programs.

spack reindex

1.4.8 Spack location -i a spec : to directly get the installation path

Go to the Spack installation directory of PaStiX

spack location -i pastix
cd `spack location -i pastix`

Come back

cd -

1.4.9 Spack graph a spec : to get a graph of dependencies

spack graph pastix
spack graph --dot pastix+mpi+starpu > pastix.dot
dot -Tpdf -O pastix.dot

1.4.10 Spack uninstall a spec

Remove the package installed in $SPACK_ROOT/opt/spack/...

spack uninstall pastix

You have several options for uninstalling:

  • -f : to force uninstall
  • -a : to uninstall all matching specs
  • -d : to also uninstall the programs that depend on this installation
  • -y : to avoid the prompt yes/no

For example

spack uninstall -fady pastix

1.4.11 Spack purge

This will clean all temporary directories, called stage, where tarballs are fetched and builds are processed.

spack purge

1.4.12 Spack edit package : to edit a package file, add new versions, change variants, etc

export EDITOR=emacs
spack edit pastix &

Add a specific version e.g. like this:

version('5.2-custom', '85127ecdfaeed39e850c996b78573d94',
  url='https://gforge.inria.fr/frs/download.php/file/35070/pastix_5.2.2.22.tar.bz2')
  • note that url can be a local address: file:///home/myfile.tar.gz
  • note also that, if several versions are available, you can set the default one using the keyword preferred
version('5.2.2.22', '85127ecdfaeed39e850c996b78573d94',
	url='https://gforge.inria.fr/frs/download.php/file/35070/pastix_5.2.2.22.tar.bz2', preferred=True)
version('5.2.3', '31a1c3ea708ff2dc73280e4b85a82ca8',
	url='https://gforge.inria.fr/frs/download.php/file/36212/pastix_5.2.3.tar.bz2')
version('5.2-local', '85127ecdfaeed39e850c996b78573d94',
  url='file:///home/pruvost/pastix_5.2.2.22.tar.bz2')
  • note also that you can install packages without a checksum with spack install -n pastix
  • if you want to get the checksum, you can use spack md5 on the tarball
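
For example, assuming the tarball has been downloaded locally:

spack md5 pastix_5.2.2.22.tar.bz2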

1.4.13 Spack module refresh: to re-generate modulefiles

Modulefiles are automatically generated in the directory $SPACK_ROOT/share/spack/modules/. It can sometimes be necessary to re-generate the modulefiles corresponding to the actually installed packages. To do so:

spack module refresh

1.5 Spack-MORSE advanced features

1.5.1 @exist versions

For MORSE packages we provide a way to use existing installations of components in the stack. The trick is to use symbolic links in the Spack installation directories pointing to the real installation paths. What you need to do is set an environment variable to the path of your own installation. You can try to install and read the error to learn which environment variable you have to set:

spack install openmpi@exist

For example, this will work on a Linux system if you install the package libopenmpi-dev:

export OPENMPI_DIR=/usr
spack install openmpi@exist

It can be useful to avoid building some components that already exist on your system and are already well tuned for it; see for example cmake, mpi, bison, flex, etc.

export OPENMPI_DIR=/usr
spack install scotch+mpi ^openmpi@exist

1.5.2 @src versions

For MORSE packages we provide a way to use existing source directories instead of building from an untarred tarball. What you need to do is set an environment variable, e.g. HWLOC_DIR, MPI_DIR or PASTIX_DIR, to the path of your source directory. You can try to install and read the error to learn which environment variable you have to set:

export HWLOC_DIR=/path/to/your/hwloc
spack install hwloc@src

This is practical if you need to install a stack with some source code under development, that has not yet been released.

1.5.3 Modulefiles

  • Spack automatically creates modulefiles
  • Be sure you have a modulefile environment; see packages: environment-modules (dpkg, redhat), modules (brew)
  1. Set Debian (dpkg) module environment:
    sudo apt-get install -y environment-modules
    . /usr/share/modules/init/bash
    
  2. Set RedHat (yum) module environment:
    sudo yum -y install environment-modules
    . /usr/share/Modules/init/bash
    
  3. Set Mac OS X (brew) module environment:
    brew install modules
    . /usr/local/Cellar/modules/3.2.10/Modules/init/bash
    
  4. Let's make PaStiX available in the current environment
    export ARCH="`ls $SPACK_ROOT/share/spack/modules/`"
    export MODULEPATH=$SPACK_ROOT/share/spack/modules/$ARCH
    module av
    spack load pastix
    pastix-conf
    
    export PASTIX_DIR=`spack location -i pastix`
    `spack location -i pastix`/examples/simple -lap 1000
    

    Do not hesitate to update the modules using

    spack module refresh
    

1.5.4 Local mirror to use on remote platforms (clusters or machine without internet)

You can download the tarballs you need in order to install libraries later on machines without an internet connection, like clusters. It can also be convenient to set up a local mirror on your own machine, so that you can install libraries even if you lose your internet connection.

Download the pastix stack tarballs on your machine

spack mirror create pastix@5.2.2.22
spack mirror create hwloc@1.11.3
spack mirror create netlib@3.6.0
spack mirror create scotch@6.0.4

Or directly:

spack mirror create -D -o pastix

This will download the tarballs required by the spec.

You can also specify the directory where the tarballs should be downloaded:

spack mirror create -d spack_mirror -D -o pastix

will download into the sub-directory spack_mirror all the tarballs required to build the pastix spec.

If you want a specific stack, just use the spec semantic:

spack mirror create -D -o pastix+mpi+starpu ^starpu@1.1.5~simgrid ^openmpi

Add the path containing tarballs as a local mirror, e.g.

spack mirror add local_filesystem file:///home/pruvost/spack-mirror-2015-10-26/
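
You can check the registered mirrors with:

spack mirror list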

That's it: you are now able to install PaStiX on your machine without web access.

1.5.5 Use Spack on a remote cluster

Once you have created a local mirror on your local machine, you can make an archive of the directory containing all the tarballs and copy it to the remote platform (here plafrim)

tar czf spack-mirror-2015-10-26.tar.gz spack-mirror-2015-10-26/
scp spack-mirror-2015-10-26.tar.gz plafrim-acces:

Connect to the platform

ssh plafrim-acces

Add the path containing tarballs as a local mirror

tar xf spack-mirror-2015-10-26.tar.gz
spack mirror add local_filesystem file:///home/pruvost/spack-mirror-2015-10-26/

Let's install pastix on the platform

Warning: do not forget to check that you have a proper compiler in your environment and that it is known to Spack (see the $HOME/.spack/compilers.yaml file)
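
For example:

spack compiler list
spack compiler add /path/to/your/compilers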

spack install pastix

2 Install and test HiePACS solvers

Parallel high performance linear solvers and fast multipole methods represent the heart of development of the HiePACS team (https://team.inria.fr/hiepacs/) and of the MORSE associate team (http://icl.cs.utk.edu/projectsdev/morse/index.html). In this section we detail how to use Spack to build the variant of the stack you aim at installing.

The solvers we focus on are Chameleon (2.1), PaStiX (2.2) and MaPHyS (2.3).

2.1 Chameleon

Chameleon is a dense linear solver developed in the context of the MORSE project, see http://icl.cs.utk.edu/projectsdev/morse/index.html, and available at https://project.inria.fr/chameleon/.

Chameleon git repository is available here https://gitlab.inria.fr/solverstack/chameleon.

This library depends on many other complex components (computational kernels, advanced runtime systems, MPI, CUDA, etc.), making the configuration and installation of this library painful.

Spack helps us to automatically build and install all the required components in a coherent way, ensuring compatibility of variants and options.

Here is an overview of the dependency DAG of Chameleon:

[Figure chameleon_dep2.png: dependency DAG of Chameleon]

2.1.1 Install Chameleon

  1. Chameleon package options overview

    Use the spack info command to get some information about the Chameleon package

    spack info chameleon
    

    Pay attention to:

    • the versions available, releases or git/svn branches,
    • the variants you can build and the default variants chosen
  2. Chameleon versions available

    Releases available are denoted with integers and are considered to be more stable versions.

    spack install chameleon@0.9.1
    

    You can also use the HEAD of one of the svn branches (e.g. the trunk).

    spack install chameleon@trunk
    

    To use an existing installation of Chameleon in your stack, choose the "exist" version. This option requires setting an environment variable, CHAMELEON_DIR, pointing to the installation path of your Chameleon

    export CHAMELEON_DIR=/path/to/your/chameleon/installation
    spack install chameleon@exist
    

    If you plan to use Spack while developing in Chameleon, you can tell Spack to build Chameleon from your own source directory, to be set in CHAMELEON_DIR, containing your modifications

    export CHAMELEON_DIR=/path/to/your/chameleon/sources
    spack install chameleon@src
    
  3. Chameleon as static or dynamic libraries

    Chameleon produces dynamic libraries by default because variant +shared is enabled.

    spack install chameleon # is identical to
    spack install chameleon+shared
    

    To build a static version of Chameleon use ~shared

    spack install chameleon~shared
    
  4. Chameleon with or without testing drivers

    Chameleon produces examples (plus testers and timers) by default because variant +examples is enabled.

    spack install chameleon # is identical to
    spack install chameleon+examples
    

    To disable the building of examples use ~examples

    spack install chameleon~examples
    
  5. Chameleon depends on BLAS, CBLAS, LAPACK, LAPACKE, TMG, MKL

    Chameleon depends on dense computational kernels for execution on conventional CPUs which are BLAS, CBLAS, LAPACK, LAPACKE, TMG. TMG is available through the LAPACK(E) package.

    When you invoke the following spack command

    spack spec chameleon
    

    you can notice that the default stack requires BLAS, CBLAS, LAPACK and LAPACKE to be installed. These libraries are seen by Spack as virtual packages, in the sense that several implementations of these interfaces exist and can be chosen. By default, openblas is chosen for BLAS and netlib for the others. Other choices can be made by specifying them in the spec. First, you can look for the available providers of a virtual package by using the command spack providers.

    spack providers blas
    spack providers cblas
    spack providers lapack
    spack providers lapacke
    spack providers tmglib
    

    The default provider of a virtual package is chosen arbitrarily by Spack.

    You can specify the implementation you want to use in your stack with the ^ character

    spack install chameleon~simu ^netlib-lapack ^eigen-blas
    

    or

    spack install chameleon~simu ^netlib
    

    to use the netlib suite or

    spack install chameleon~simu ^openblas
    

    to use the openblas suite.

    Pay attention to the ~simu here, which is required when specifying kernels such as BLAS: Spack can only see the dependency on BLAS if you are specific enough about the variants chosen. The variant +simu means we will use Chameleon without executing the kernels, so that we do not depend on BLAS, LAPACK, etc. when +simu is used. Thus, we need ~simu to indicate that we do depend on the kernels. This is useful only if you want to change the default behaviour of the kernels installation. If you write:

    spack install chameleon ^netlib-blas
    

    Spack will tell you that Chameleon does not depend on BLAS because it does not see the implicit ~simu variant. Thus, use a more specific spec like this:

    spack spec chameleon~simu ^netlib-blas
    spack install chameleon~simu ^netlib-blas
    

    When you run into this kind of Spack problem, do not hesitate to look into the packages in order to see under which conditions a variant is available

    spack edit chameleon
    

    Sometimes a variant is available only for some versions for instance.

  6. How to use the kernels available in the Intel MKL

    For proprietary software such as the Intel MKL, we do not provide the installation but we can use it if available in the environment. For example, if you have an Intel suite installed on your system, make it available in the environment and our Spack packages can use it in the stack. More precisely, we need the environment variable MKLROOT to be set in order to use the kernels available in the MKL (this is quite standard). On my laptop, for example:

    source /home/pruvost/intel/bin/compilervars.sh intel64
    source /home/pruvost/intel/mkl/bin/mklvars.sh intel64
    

    Then, because MKLROOT is set, we can use the MKL kernels

    spack install chameleon~simu ^mkl
    

    Please also refer to the following section about the Intel runtime path configuration.

  7. Chameleon depends on a runtime system

    Chameleon depends on either QUARK (http://icl.cs.utk.edu/quark/) or StarPU (http://starpu.gforge.inria.fr/) runtime system. ParSEC (http://icl.cs.utk.edu/parsec/) will also be available in the future.

    This dependency on a runtime is exclusive, meaning that only one library should be used, either QUARK or StarPU. By default StarPU is used, but a variant exists to switch to QUARK by specifying +quark~starpu

    spack install chameleon+quark~starpu
    

    Note that chameleon+quark can't make use of CUDA and MPI.

    To specify options about StarPU, the runtime used by default, +starpu should be used:

    spack install chameleon+starpu ^starpu@1.1.5
    
    1. Generating execution trace with StarPU+FxT

      When Chameleon is executed with StarPU, some execution traces can be automatically generated if the proper set of options is enabled.

      You should use +fxt for Chameleon

      spack install chameleon+starpu+fxt
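
      Once Chameleon is built with FxT support, and as with any StarPU program, you can set the STARPU_GENERATE_TRACE environment variable to 1 so that a paje.trace file is produced after execution; a hedged sketch, assuming the +examples drivers are installed:

      export STARPU_GENERATE_TRACE=1
      `spack location -i chameleon+fxt`/lib/chameleon/timing/time_dpotrf_tile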
      
    2. Simulation mode with StarPU+SimGrid

      Chameleon with StarPU has the ability to be simulated using the SimGrid (http://simgrid.gforge.inria.fr/) simulator.

      This implies that the kernels will not actually be executed, so that we no longer depend on their installation. Nevertheless, having the performance models of the kernels is a prerequisite. Please contact the MORSE team to learn more about this feature (send an email to morse-devel@lists.gforge.inria.fr).

      To enable the simulation mode, use +simu for Chameleon, +simgrid will be activated for StarPU

      spack install chameleon+starpu+simu
      
  8. Chameleon with CUDA/cuBLAS and MAGMA

    Chameleon can make use of one or multiple GPUs thanks to StarPU runtime system and cuBLAS/MAGMA (http://icl.cs.utk.edu/magma/) kernels.

    To use this feature you have to use the StarPU runtime specifically. In addition you must have an NVIDIA CUDA-capable GPU and an installation of CUDA/cuBLAS on your system. You should give Spack your CUDA installation path by setting the environment variable CUDA_ROOT (this is quite standard).

    To fully benefit from GPU kernels you need to enable the +cuda and +magma variants in Chameleon; the +cuda variant will be activated for StarPU

    # where lie bin/nvcc, include/cuda.h, lib/ or lib64/libcudart.so, etc
    export CUDA_DIR=/path/to/your/cuda/cublas/installation
    spack install chameleon+starpu+cuda+magma
    

    If you do not intend to use MAGMA kernels just use +cuda variant

    # where lie bin/nvcc, include/cuda.h, lib/ or lib64/libcudart.so, etc
    export CUDA_DIR=/path/to/your/cuda/cublas/installation
    spack install chameleon+starpu+cuda
    
  9. Chameleon with MPI

    Chameleon can be executed on clusters of interconnected nodes using the MPI library.

    To use this feature you have to use StarPU runtime specifically. To enable MPI just use +mpi for chameleon, this will activate +mpi for StarPU

    spack install chameleon~simu+starpu+mpi ^mpich
    spack install chameleon~simu+starpu+mpi ^openmpi
    

    Notice that we depend on the MPI virtual package here. To get a list of MPI packages available, try the following:

    spack providers mpi
    

    Note that you can use an already existing MPI with the @exist version

    # where lies bin/mpicc, include/mpi.h, etc
    export OPENMPI_DIR=/path/to/your/openmpi/install/dir # can be an mpich, openmpi or bullxmpi
    spack install chameleon~simu+starpu+mpi ^openmpi@exist
    
  10. How to use IntelMPI

    For IntelMPI we do not provide the installation but we can use it if available in the environment. For example, if you have an Intel suite installed on your system, make it available in the environment. More precisely, set I_MPI_ROOT. On my laptop, for example:

    source /home/pruvost/intel/bin/compilervars.sh intel64
    source /home/pruvost/intel/impi/5.0.1.035/bin64/mpivars.sh
    spack install chameleon+mpi~simu %intel ^mkl ^intelmpi
    

    Be aware that to use IntelMPI, you should use the Intel compiler. This can be set with %intel. Remember you should have defined your compilers first. Check that the Intel compiler is available

    spack compiler list
    

    If not add it

    spack compiler add /path/to/your/intel/compilers
    

    You can also edit the file $HOME/.spack/compilers.yaml to add/remove compilers, or use the spack config edit compilers command.

    Please also refer to the following section about the Intel runtime path configuration.

  11. How to use BullxMPI

    For BullxMPI we do not provide the installation but we can use it if available in the environment. Set the BULLXMPI_DIR environment variable to the installation root of BullxMPI

    # where lies bin/mpicc, include/mpi.h, etc
    export BULLXMPI_DIR=/path/to/bullxmpi/install/dir
    spack install chameleon+starpu+mpi~simu ^bullxmpi
    
  12. Chameleon with MPI and CUDA

    Chameleon can also exploit clusters of heterogeneous nodes through the use of MPI and CUDA. There is not much to say here: if you have read the sections about MPI and CUDA, you just have to combine the options to get the distributed and heterogeneous stack of Chameleon

    # where lie bin/nvcc, include/cuda.h, lib/ or lib64/libcudart.so, etc
    export CUDA_ROOT=/path/to/your/cuda/cublas/installation
    spack install chameleon+cuda+magma+mpi
    
  13. Situations: I'm a developer of Chameleon

    To build your specific version with modified source code, you can still use Spack to install the libraries and drivers. The @src version should be used

    export CHAMELEON_DIR=/path/to/your/chameleon/sources
    spack install chameleon@src
    
  14. Situations: I'm a developer of a Chameleon dependency

    Imagine you want to modify some source files of StarPU, for instance, and test your modifications through Chameleon. To do that, use the @src version at the level you develop

    export STARPU_DIR=/path/to/your/starpu/sources
    spack install chameleon@trunk+starpu ^starpu@src
    

    You can then make new modifications and re-build your StarPU libraries

    spack uninstall -f starpu@src
    spack install starpu@src
    

    If StarPU is built as shared libraries Chameleon drivers will use the newly generated libraries. If StarPU libraries are static, it is required to re-generate Chameleon drivers.

    spack uninstall -d starpu@src~shared
    spack install chameleon@trunk+starpu ^starpu@src~shared
    
  15. Situations: I want to use an already installed dependency

    Imagine you need a specific version of a dependency which is already installed on your system, you can use the @exist version

    # where lies bin/mpicc, etc
    export OPENMPI_DIR=/path/to/your/mpi/install/dir
    spack install chameleon~simu+starpu+mpi ^starpu~simgrid+mpi ^openmpi@exist
    

    If a specific version of a dependency, StarPU for example, is already installed

    export STARPU_DIR=/path/to/your/starpu/install/dir
    spack install chameleon+starpu+cuda+magma ^starpu@exist
    
  16. Troubleshooting

2.1.2 Test Chameleon

  1. Test the installation of Chameleon

    The Chameleon library delivers some testing drivers allowing you to test the proper execution of functions. With Spack, the testing driver executables are installed if the +examples variant is used in the spec; see the previous section about installation.

    The executables are installed in the sub-directories example, timing, testing of lib/chameleon/.

    To see drivers usage please refer to this section of the Chameleon documentation.

    Note that you can get easy access to the path of the drivers by using the spack location command

    mpirun -np 4 `spack location -i chameleon+starpu+mpi`/lib/chameleon/timing/time_dpotrf_tile
    
  2. make test/ctest in Chameleon

    A developer may want to check a wide range of functionalities by executing all the available tests.

    This is possible only if you keep the build directory. By default this directory is deleted just after the installation process, but there is a way to prevent this removal: just add the --keep-stage option at installation time

    spack install --keep-stage chameleon+starpu+mpi
    

    Then move into the build directory

    spack cd chameleon+starpu+mpi
    

    and finally, execute the tests

    make test # or ctest ...
    

    If something is missing in the environment, use spack env

    spack env chameleon+starpu+mpi bash
    make test
    

2.1.3 Link your code with Chameleon

To call Chameleon functions from a higher-level program, users have to link their executables with the Chameleon libraries and their dependencies. The proper way to link with Chameleon is given by a pkg-config .pc file.

During the installation process the .pc file is copied in the sub-directory lib/pkgconfig from the install directory.

To be able to conveniently use the information contained in the chameleon.pc file, we advise you to install the pkg-config tool on your system; see the system packages required.

The basic way of using these files is to make them available to pkg-config by updating the environment variable PKG_CONFIG_PATH with the directories containing the .pc files, e.g

export PKG_CONFIG_PATH=/home/user/install/chameleon/lib/pkgconfig:$PKG_CONFIG_PATH

Then you should be able to get libraries and flags lists thanks to a pkg-config basic usage, e.g

pkg-config --cflags chameleon
pkg-config --libs chameleon
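
For instance, to build a hypothetical C program main.c calling Chameleon once PKG_CONFIG_PATH is set (a sketch, not the only possible link line):

gcc main.c -o main `pkg-config --cflags chameleon` `pkg-config --libs chameleon`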

Updating one by one all the paths required to link with Chameleon can be painful because the number of dependencies may be high. That is why we advise using an "environment module" system package, in order to use simple commands (e.g. module load or spack load) to update the environment variables; see the system packages required and the spack load command.

  1. Without environment module

    If the user doesn't want to install an environment module package, it is still possible to update by hand the PKG_CONFIG_PATH variable with the Chameleon .pc file and those of the dependencies that also rely on .pc files, such as StarPU and hwloc, e.g.

    export PKG_CONFIG_PATH=`spack location -i hwloc`/lib/pkgconfig:$PKG_CONFIG_PATH
    export PKG_CONFIG_PATH=`spack location -i starpu`/lib/pkgconfig:$PKG_CONFIG_PATH
    export PKG_CONFIG_PATH=`spack location -i chameleon`/lib/pkgconfig:$PKG_CONFIG_PATH
    
  2. With environment module

    If the module command is available in the environment, then the user should update the MODULEPATH environment variable with the directory where Spack stores the modules that are automatically generated when a stack is installed. The path may already lie in MODULEPATH thanks to the Spack initialization

    . $SPACK_ROOT/share/spack/setup-env.sh
    

    but if it is not the case, find the proper path and add it to MODULEPATH, something like the following

    export ARCH="`ls $SPACK_ROOT/share/spack/modules/`"
    export MODULEPATH=$SPACK_ROOT/share/spack/modules/$ARCH
    

    Then you should see the spack modules available with

    module av
    

    You should also be able to use the spack load command. The idea is to use the spack spec semantics, the same used to install the library, to update environment variables like LIBRARY_PATH, LD_LIBRARY_PATH and PKG_CONFIG_PATH with the paths of the Spack installations. For example, if you have installed Chameleon with starpu and openmpi like this

    spack install chameleon~simu+starpu+mpi ^openmpi ^netlib
    

    then you could use

    spack load netlib
    spack load openmpi
    spack load starpu+mpi ^openmpi
    spack load chameleon~simu+starpu+mpi ^openmpi ^netlib
    

    in order to make available all the libraries required to link with Chameleon. Note that it also updates the PKG_CONFIG_PATH so that you are able to use pkg-config to get the link line and options

    pkg-config --cflags chameleon
    pkg-config --libs chameleon
    

    The main drawback here is that the modules are not loaded in a nested way, so you need to load one by one all the modules your stack relies on. It would be very convenient in the future to be able to update the environment to use Chameleon with just one command like

    spack load chameleon~simu+starpu+mpi ^openmpi ^netlib
    

2.1.4 Use Chameleon in your code

Please refer to the online documentation to learn how to call Chameleon in a C or Fortran code, and especially this tutorial section.

2.2 PaStiX

PaStiX (Parallel Sparse matriX package) is a scientific library that provides a high performance parallel solver for very large sparse linear systems based on direct methods. Numerical algorithms are implemented in single or double precision (real or complex) using LLt, LDLt and LU factorizations with static pivoting (for non-symmetric matrices having a symmetric pattern). This solver also provides an adaptive blockwise iLU(k) factorization that can be used as a parallel preconditioner, using approximated supernodes to build a coarser block structure of the incomplete factors.

Official website: http://pastix.gforge.inria.fr/files/README-txt.html

PaStiX git repository is located on the Inria forge: https://gforge.inria.fr/scm/?group_id=185

This library depends on other complex components like computational kernels (BLAS), partitioners like Scotch/PT-Scotch or Metis, MPI and hwloc, making the configuration and installation of this library painful.

Spack helps us to automatically build and install all the required components in a coherent way, ensuring compatibility of variants and options.

Here is an overview of the dependency DAG of PaStiX:

[Figure pastix_dep.png: dependency DAG of PaStiX]

2.2.1 Install PaStiX

  1. PaStiX package options overview

    Use the spack info command to get some information about the PaStiX package

    spack info pastix
    

    Pay attention to:

    • the versions available, releases or git/svn branches,
    • the variants you can build and the default variants chosen
  2. PaStiX versions available

    Releases available are denoted with integers and are considered to be more stable versions.

    spack install pastix@5.2.2.22
    

    You can also use the HEAD of one of the git branches (e.g. develop).

    spack install pastix@master
    spack install pastix@develop
    

    To use an existing installation of PaStiX in your stack, choose the "exist" version. This option requires setting an environment variable, PASTIX_DIR, pointing to the installation path of your PaStiX

    export PASTIX_DIR=/path/to/your/pastix/installation
    spack install pastix@exist
    

    If you plan to use Spack while developing in PaStiX, you can tell Spack to build PaStiX from your own source directory, to be set in PASTIX_DIR, containing your modifications

    export PASTIX_DIR=/path/to/your/pastix/sources
    spack install pastix@src
    

    You can also imagine you want to modify some source files of Scotch and test your modifications through PaStiX

    export SCOTCH_DIR=/path/to/your/scotch/sources
    spack install pastix+scotch ^scotch@src
    

    You can then make new modifications and re-build your Scotch libraries

    spack uninstall -f scotch@src
    spack install scotch@src
    

    If Scotch is built as shared libraries PaStiX drivers will use the newly generated libraries. If Scotch libraries are static, it is required to re-generate PaStiX drivers.

    spack uninstall pastix+scotch ^scotch@src~shared
    spack uninstall scotch@src~shared
    spack install pastix+scotch ^scotch@src~shared
    
  3. PaStiX as static or dynamic libraries

    PaStiX produces dynamic libraries by default because variant +shared is enabled.

    spack install pastix # is identical to
    spack install pastix+shared
    

    To build a static version of PaStiX use ~shared

    spack install pastix~shared
    
  4. PaStiX with or without drivers

    PaStiX produces examples by default because variant +examples is enabled.

    spack install pastix # is identical to
    spack install pastix+examples
    

    To disable the building of examples use ~examples

    spack install pastix~examples
    
  5. PaStiX depends on BLAS

    PaStiX depends on dense computational kernels for execution on conventional CPUs which is BLAS.

    When you invoke the following spack command

    spack install pastix
    

    you can notice that the default stack requires BLAS to be installed. This library is seen by Spack as a virtual package, in the sense that several implementations of this interface exist and can be chosen.

    You can look for the available providers of a virtual package by using the command spack providers.

    spack providers blas
    

    The default provider of a virtual package is chosen arbitrarily by Spack.

    You can specify the implementation you want to use in your stack with the ^ character

    spack install pastix ^openblas
    
  6. How to use the Intel MKL BLAS

    For proprietary software such as the Intel MKL, we do not provide the installation but we can use it if available in the environment. For example, if you have an Intel suite installed on your system, make it available in the environment and our Spack packages can use it in the stack. More precisely, we need the environment variable MKLROOT to be set in order to use the kernels available in the MKL (this is quite standard). On my laptop, for example:

    source /home/pruvost/intel/bin/compilervars.sh intel64
    source /home/pruvost/intel/mkl/bin/mklvars.sh intel64
    

    Then, because MKLROOT is set, we can use the MKL kernels

    spack install pastix ^mkl
    

    Please also refer to the following section about the Intel runtime path configuration.

  7. PaStiX depends on METIS or Scotch

    PaStiX depends on a partitioning library. The choices available are Metis and Scotch. By default Scotch is chosen so that

    spack install pastix # is identical to
    spack install pastix+scotch~metis
    

    will build a stack with Scotch.

    To enable Metis use the variant +metis and disable scotch with ~scotch

    spack install pastix+metis~scotch
    

    because Metis and Scotch are incompatible (Scotch provides its own metis.h).

  8. PaStiX depends on MPI

    PaStiX can be executed on clusters of interconnected nodes using the MPI library.

    This dependency is optional, use variant +mpi to enable it

    spack install pastix+mpi
    

    Notice that we then depend on the MPI virtual package here. To get a list of MPI packages available, try the following:

    spack providers mpi
    

    To choose your MPI implementation, use the ^ character

    spack install pastix+mpi ^mpich
    spack install pastix+mpi ^openmpi
    

    Note that you can use an already existing MPI with the @exist version

    # where lies bin/mpicc, include/mpi.h, etc
    export OPENMPI_DIR=/path/to/your/mpi/install/dir # can be an mpich, openmpi or bullxmpi
    spack install pastix+mpi ^openmpi@exist
    
  9. How to use IntelMPI

    For IntelMPI we do not provide the installation but we can use it if available in the environment. For example, if you have an Intel suite installed on your system, make it available in the environment. More precisely, set I_MPI_ROOT. On my laptop, for example:

    source /home/pruvost/intel/bin/compilervars.sh intel64
    source /home/pruvost/intel/impi/5.0.1.035/bin64/mpivars.sh
    spack install pastix+mpi %intel ^mkl ^intelmpi
    

    Be aware that to use IntelMPI, you should use the Intel compiler. This can be set with %intel. Remember you should have defined your compilers first. Check that the Intel compiler is available

    spack compiler list
    

    If not add it

    spack compiler add /path/to/your/intel/compilers
    

    You can also edit the file $HOME/.spack/compilers.yaml to add/remove compilers, or use the spack config edit compilers command.

    Please also refer to the following section about the Intel runtime path configuration.

  10. How to use BullxMPI

    For BullxMPI we do not provide the installation but we can use it if available in the environment. Set the BULLXMPI_DIR environment variable to the installation root of BullxMPI

    # where lies bin/mpicc, include/mpi.h, etc
    export BULLXMPI_DIR=/path/to/bullxmpi/install/dir
    spack install pastix+mpi ^bullxmpi
    
  11. PaStiX depends on a runtime system

    PaStiX depends on StarPU (http://starpu.gforge.inria.fr/) runtime system.

    To enable StarPU, use +starpu variant

    spack install pastix+starpu
    

    If +mpi variant is used, StarPU will be built with mpi

    spack install pastix+mpi+starpu
    
    1. Generating execution trace with StarPU+FxT

      When PaStiX is executed with StarPU, some execution traces can be automatically generated if the proper set of options is enabled.

      You should use +fxt variant of StarPU, e.g.

      spack install pastix~mpi+starpu ^starpu@1.1.5+fxt
      

      Then set the STARPU_GENERATE_TRACE environment variable to 1 to automatically generate the paje.trace file after any PaStiX+StarPU execution.
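
      For example, a sketch assuming the +examples drivers of PaStiX are installed:

      export STARPU_GENERATE_TRACE=1
      `spack location -i pastix+starpu`/examples/simple -lap 1000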

  12. Situations: I'm a developer of PaStiX

    To build your specific version with modified source code, you can still use Spack to install the libraries and drivers. The @src version should be used

    export PASTIX_DIR=/path/to/your/pastix/sources # in ricar/ directory
    spack install pastix@src
    
  13. Situations: I'm a developer of a PaStiX dependency

    Imagine you want to modify some source files of Scotch, for instance, and test your modifications through PaStiX. To do that, use the @src version at the level you develop

    export SCOTCH_DIR=/path/to/your/scotch/sources
    spack install pastix@develop+scotch ^scotch@src
    

    You can then make new modifications and re-build your Scotch libraries

    spack uninstall -f scotch@src
    spack install scotch@src
    

    If Scotch is built as shared libraries PaStiX drivers will use the newly generated libraries. If Scotch libraries are static, it is required to re-generate PaStiX drivers.

    spack uninstall pastix@develop+scotch ^scotch@src~shared
    spack uninstall scotch@src~shared
    spack install pastix@develop+scotch ^scotch@src~shared
    
  14. Situations: I want to use an already installed dependency

    Imagine you need a specific version of a dependency which is already installed on your system, you can use the @exist version

    export OPENMPI_DIR=/path/to/your/mpi/install/dir
    spack install pastix+mpi ^openmpi@exist
    

    If a specific version of Scotch, for example, is already installed

    export SCOTCH_DIR=/path/to/your/scotch/install/dir
    spack install pastix+scotch ^scotch@exist
    
  15. Troubleshooting
    • Be sure you have the pre-requisites installed on your system
    • If gcc is used, the libgfortran library should be available in the environment because we do not intend to build it through Spack.
    export LIBRARY_PATH=/path/to/libgfortran:$LIBRARY_PATH
    

2.2.2 Test PaStiX

The PaStiX library delivers some testing drivers allowing you to test the proper execution of functions. With Spack, the testing driver executables are installed if the +examples variant is used in the spec; see the previous section about installation.

The executables are installed in the sub-directory examples.

To see drivers usage please refer to the PaStiX documentation.

Note that you can get easy access to the path of the drivers by using the spack location command

mpirun -np 2 `spack location -i pastix+mpi`/examples/simple -lap 1000

2.2.3 Link your code with PaStiX

To call PaStiX functions from an upper level program, users have to link their executables with PaStiX libraries and their dependencies.

The PaStiX installation delivers a binary giving the required information, pastix-conf, and more recently (in the git repository) a pkg-config pastix.pc file is provided.

  1. pastix-conf executable

    The executable pastix-conf can be found in the sub-directory bin/ of PaStiX installation path, check your path

    `spack location -i pastix`/bin/pastix-conf
    

    To see usage:

    `spack location -i pastix`/bin/pastix-conf --help
    

    The compiler flags and the PaStiX and BLAS libraries should be added to your build chain to link with PaStiX.

  2. pkg-config file

    A way to link with PaStiX is to use a pkg-config pastix.pc file (currently only available in the develop git branch).

    During the installation process the .pc file is copied in the sub-directory lib/pkgconfig from the install directory.

    To be able to conveniently use the information contained in the pastix.pc file, we advise you to install the pkg-config tool on your system; see the system packages required.

    The basic way of using these files is to make them available to pkg-config by updating the environment variable PKG_CONFIG_PATH with the directories containing the .pc files, e.g

    export PKG_CONFIG_PATH=/home/user/install/pastix/lib/pkgconfig:$PKG_CONFIG_PATH
    

    Then you should be able to get libraries and flags lists thanks to a pkg-config basic usage, e.g

    pkg-config --cflags pastix
    pkg-config --libs pastix
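
    For instance, to build a hypothetical C program main.c calling PaStiX once PKG_CONFIG_PATH is set (a sketch, not the only possible link line):

    gcc main.c -o main `pkg-config --cflags pastix` `pkg-config --libs pastix`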
    

    Updating one by one all the paths required to link with PaStiX can be painful because the number of dependencies may be high. That is why we advise using an "environment module" system package, in order to use simple commands (e.g. module load or spack load) to update the environment variables; see the system packages required and the spack load command.

    1. Without environment module

      If the user doesn't want to install an environment module package, it is still possible to update by hand the PKG_CONFIG_PATH variable with the PaStiX .pc file and those of the dependencies that also rely on .pc files, such as StarPU and hwloc, e.g.

      export PKG_CONFIG_PATH=`spack location -i hwloc`/lib/pkgconfig:$PKG_CONFIG_PATH
      export PKG_CONFIG_PATH=`spack location -i starpu`/lib/pkgconfig:$PKG_CONFIG_PATH
      export PKG_CONFIG_PATH=`spack location -i pastix+starpu`/lib/pkgconfig:$PKG_CONFIG_PATH
      
    2. With environment module

      If the module command is available in the environment, then the user should update the MODULEPATH environment variable with the directory where Spack stores the modules that are automatically generated when a stack is installed. The path may already lie in MODULEPATH thanks to the Spack initialization

      . $SPACK_ROOT/share/spack/setup-env.sh
      

      but if it is not the case, find the proper path and add it to MODULEPATH, something like the following

      export ARCH="`ls $SPACK_ROOT/share/spack/modules/`"
      export MODULEPATH=$SPACK_ROOT/share/spack/modules/$ARCH
      

      Then you should see the spack modules available with

      module av
      

      You should also be able to use the spack load command. The idea is to use the spack spec semantics, the same used to install the library, to update environment variables like LIBRARY_PATH, LD_LIBRARY_PATH and PKG_CONFIG_PATH with the paths of the Spack installations. For example, if you have installed PaStiX with starpu and openmpi like this

      spack install pastix+mpi+starpu ^openmpi ^netlib-blas
      

      then you could use

      spack load netlib-blas
      spack load openmpi
      spack load starpu+mpi~simgrid ^openmpi
      spack load pastix+mpi+starpu ^openmpi ^netlib-blas
      

      in order to make available all the libraries required to link with PaStiX. Note that it also updates the PKG_CONFIG_PATH so that you are able to use pkg-config to get the link line and options

      pkg-config --cflags pastix
      pkg-config --libs pastix
      

      The main drawback here is that the modules are not loaded in a nested way, so you need to load one by one all the modules your stack relies on. It would be very convenient in the future to be able to update the environment to use PaStiX with just one command like

      spack load pastix+mpi+starpu ^openmpi ^netlib-blas
      

2.2.4 Use PaStiX in your code

Please refer to the manual or the tutorial to learn how to call PaStiX in a C or Fortran code.

2.3 MaPHYS

MaPHyS (Massively Parallel Hybrid Solver) is a parallel linear solver which couples direct and iterative approaches. The underlying idea is to apply to general unstructured linear systems the domain decomposition ideas developed for the solution of linear systems arising from PDEs.

Official website: https://gitlab.inria.fr/solverstack/maphys/

MaPHyS svn repository is located on the Inria forge: https://gforge.inria.fr/scm/?group_id=2485

MaPHyS documentation is available at:

This library depends on many other complex components like sparse direct solvers (PaStiX and/or MUMPS), computational kernels (BLAS, LAPACK), partitioners like Scotch, MPI and hwloc, making the configuration and installation of this library painful.

Spack helps us to automatically build and install all the required components in a coherent way, ensuring compatibility of variants and options.

Here is an overview of the dependency DAG of MaPHYS:

[Figure maphys_dep.png: dependency DAG of MaPHyS]

2.3.1 Install MaPHyS

  1. MaPHyS package options overview

    Use the spack info command to get some information about MaPHyS package

    spack info maphys
    

    Pay attention to:

    • the versions available, releases or git/svn branches,
    • the variants you can build and the default variants chosen
  2. MaPHyS versions available

    Releases available are denoted with integers and are considered to be more stable versions.

    spack install maphys
    

    You can also use the HEAD of one of the svn branches (e.g. the trunk).

    spack install maphys@trunk
    

    To use an existing installation of MaPHyS in your stack, choose the "exist" version. This option requires setting an environment variable, MAPHYS_DIR, pointing to the installation path of your MaPHyS

    export MAPHYS_DIR=/path/to/your/maphys/installation
    spack install maphys@exist
    

    If you plan to use Spack while developing in MaPHyS, you can tell Spack to build MaPHyS from your own source directory, to be set in MAPHYS_DIR, containing your modifications

    export MAPHYS_DIR=/path/to/your/maphys/sources
    spack install maphys@src
    
  3. MaPHyS with or without testing drivers

    MaPHyS produces examples by default because variant +examples is enabled. Drivers are copied in sub-directory examples/ of MaPHyS installation.

    spack install maphys # is identical to
    spack install maphys+examples
    

    To disable the building of examples use ~examples

    spack install maphys~examples
    
  4. MaPHyS depends on a sparse direct solver

    MaPHyS depends on MUMPS and/or PaStiX sparse direct solvers.

    By default it relies on both MUMPS and PaStiX which means

    spack install maphys # is identical to
    spack install maphys+mumps+pastix
    

    To disable one of the solvers, use ~mumps or ~pastix

    spack install maphys~mumps
    

    Note that at least one direct solver must be enabled.

  5. MaPHyS depends on BLAS, LAPACK, TMG, SCALAPACK, MKL

    MaPHyS depends on dense computational kernels for execution on conventional CPUs which are BLAS, LAPACK, TMG (for matrix generation in tests). TMG is available through the LAPACK package.

    When you invoke the following spack command

    spack spec maphys
    

    you can notice that the default stack requires BLAS, LAPACK and SCALAPACK to be installed. These libraries are seen by Spack as virtual packages, in the sense that several implementations of these interfaces exist and can be chosen. By default, openblas is chosen for BLAS and netlib for the others. Other choices can be made by specifying them in the spec. First, you can look for the available providers of a virtual package by using the command spack providers.

    spack providers blas
    spack providers lapack
    spack providers scalapack
    

    The default behaviour follows a lexicographical logic on the package names.

    You can specify the implementation you want to use in your stack with the ^ character

    spack install maphys ^eigen-blas ^netlib-lapack
    spack install maphys+mumps ^netlib ^netlib-scalapack # to use blas/lapack/scalapack from netlib
    spack install maphys ^openblas
    
  6. How to use the kernels available in the Intel MKL

    For proprietary software such as the Intel MKL, we do not provide the installation, but we can use it if it is available in the environment. For example, if you have an Intel suite installed on your system, make it available in the environment and our Spack packages can use it in the stack. More precisely, the environment variable MKLROOT must be set to use the kernels available in the MKL (this is quite standard). On my laptop for example

    source /home/pruvost/intel/bin/compilervars.sh intel64
    source /home/pruvost/intel/mkl/bin/mklvars.sh intel64
    

    Then because the MKLROOT is set we can use the MKL kernels

    spack install maphys ^mkl
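
    You can check that the variable is visible in your environment before running the install (a simple sanity check):

    echo $MKLROOT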
    

    Please also refer to the following section about the Intel runtime path configuration.

  7. MaPHyS depends on a partitioner

    MaPHyS depends on the PT-Scotch partitioning library. This dependency is required, not optional.

    You can still choose the version of Scotch if necessary

    spack install maphys ^scotch@6.0.3
    
  8. MaPHyS depends on MPI

    MaPHyS is a parallel solver which relies on an MPI implementation. Check the default MPI used in your stack with the spec command

    spack spec maphys
    

    MPI is a virtual package in Spack. A specific vendor can be chosen. To get a list of MPI packages available, try the following:

    spack providers mpi
    

    To choose your MPI implementation, use the ^ character

    spack install maphys ^mpich
    spack install maphys ^openmpi
    

    Note that you can use an already existing MPI with the @exist version

    # where bin/mpicc, include/mpi.h, etc. lie
    export OPENMPI_DIR=/path/to/your/mpi/install/dir
    spack install maphys ^scotch+mpi ^openmpi@exist
    
  9. How to use IntelMPI

    For IntelMPI we do not provide the installation, but we can use it if it is available in the environment. For example, if you have an Intel suite installed on your system, make it available in the environment; more precisely, set I_MPI_ROOT. On my laptop for example:

    source /home/pruvost/intel/bin/compilervars.sh intel64
    source /home/pruvost/intel/impi/5.0.1.035/bin64/mpivars.sh
    spack install maphys %intel ^mkl ^intelmpi
    spack install maphys+mumps %intel ^mkl ^mkl-scalapack ^intelmpi
    

    Be aware that to use IntelMPI, you should use the Intel compiler. This can be set with %intel. Remember that you should have defined your compilers first. Check that it is available

    spack compiler list
    

    If not, add it

    spack compiler add /path/to/your/intel/compilers
    

    You can also edit the file $HOME/.spack/compilers.yaml to add/remove compilers, or use the spack config edit compilers command.

    Please also refer to the following section about the Intel runtime path configuration.

  10. How to use BullxMPI

    For BullxMPI we do not provide the installation but we can use it if available in the environment. Set the BULLXMPI_DIR environment variable to the installation root of BullxMPI

    # where bin/mpicc, include/mpi.h, etc. lie
    export BULLXMPI_DIR=/path/to/bullxmpi/install/dir
    spack install maphys ^bullxmpi
    
  11. Situations: I'm a developer of MaPHyS

    To build your specific version with modified source code, you can still use Spack to install the libraries and drivers. The @src version should be used

    export MAPHYS_DIR=/path/to/your/maphys/sources
    spack install maphys@src
    
  12. Situations: I'm a developer of a MaPHyS dependency

    Imagine you want to modify some source files of PaStiX, for instance, and test your modifications through MaPHyS. To do that you should use the @src version at the level you develop

    export PASTIX_DIR=/path/to/your/pastix/sources
    spack install maphys@trunk+pastix ^pastix@src
    

    You can then make new modifications and re-build your PaStiX libraries

    spack uninstall -f pastix@src
    spack install pastix@src
    

    If PaStiX is built as shared libraries, the MaPHyS drivers will use the newly generated libraries. If the PaStiX libraries are static, the MaPHyS drivers must be re-generated.

    spack uninstall maphys@trunk+pastix ^pastix@src~shared
    spack uninstall pastix@src~shared
    spack install maphys@trunk+pastix ^pastix@src~shared
    
  13. Situations: I want to use an already installed dependency

    Imagine you need a specific version of a dependency which is already installed on your system; in that case you can use the @exist version

    export OPENMPI_DIR=/path/to/your/mpi/install/dir
    spack install maphys ^openmpi@exist
    

    If a specific version of PaStiX, for example, is already installed

    export PASTIX_DIR=/path/to/your/pastix/install/dir
    spack install maphys+pastix ^pastix@exist
    
  14. Troubleshooting

2.3.2 Test MaPHyS

The MaPHyS library delivers testing drivers that allow you to check the proper execution of its functions. With Spack, the testing driver executables are installed if the +examples option is used in the spec; see the previous section about installation.

The executables are installed in the sub-directory examples.

To see the drivers usage, please refer to the MaPHyS documentation, section 12.

Note that you can easily get the path to the drivers by using the spack location command

cd `spack location -i maphys`/examples/
mpirun -np 2 ./dmph_examplekv real_bcsstk17.in

2.3.3 Link your code with MaPHyS

To call MaPHyS functions from a higher-level program, users have to link their executables with the MaPHyS libraries and their dependencies. The proper way to link with MaPHyS is provided through a pkg-config .pc file.

During the installation process, the .pc file is copied into the lib/pkgconfig sub-directory of the install directory.

To conveniently use the information contained in the maphys.pc file, we advise you to install the pkg-config tool on your system; see the required system packages.

The basic way of using these files is to make them available to pkg-config by updating the environment variable PKG_CONFIG_PATH with the directories containing the .pc files, e.g.

export PKG_CONFIG_PATH=/home/user/install/maphys/lib/pkgconfig:$PKG_CONFIG_PATH

Then you should be able to get the lists of libraries and flags through basic pkg-config usage, e.g.

pkg-config --cflags maphys
pkg-config --libs maphys
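
For example, to compile and link a program against MaPHyS in one line (a sketch; the source file name and the mpif90 wrapper are illustrative):

mpif90 my_driver.F90 -o my_driver `pkg-config --cflags maphys` `pkg-config --libs maphys`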

Updating one by one all the paths required to link with MaPHyS can be painful because the number of dependencies may be high. That is why we advise using an "environment module" system package, so that simple commands (e.g. module load or spack load) update the environment variables; see the required system packages and the spack load command.

  1. Without environment module

    If you don't want to install an environment module package, it is still possible to update the PKG_CONFIG_PATH variable by hand with the path to the MaPHyS .pc file, e.g.

    export PKG_CONFIG_PATH=`spack location -i maphys`/lib/pkgconfig:$PKG_CONFIG_PATH
    
  2. With environment module

    If the module command is available in the environment, then you should update your MODULEPATH environment variable with the directory where Spack stores the modules that are automatically generated when a stack is installed. The path may already lie in MODULEPATH thanks to the Spack initialization

    . $SPACK_ROOT/share/spack/setup-env.sh
    

    but if it is not the case, find the proper path and add it to MODULEPATH, something like the following

    export ARCH="`ls $SPACK_ROOT/share/spack/modules/`"
    export MODULEPATH=$SPACK_ROOT/share/spack/modules/$ARCH
    

    Then you should see the spack modules available with

    module av
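
    The module names generated by Spack embed the package version, the compiler and a hash; load one using the exact name printed by module av (the name below is purely illustrative):

    module load maphys-trunk-gcc-4.9.2-abc1234 # copy the exact name from the module av output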
    

    You should also be able to use the spack load command. The idea is to use the spack spec semantics, the same used to install the library, to update environment variables such as LIBRARY_PATH, LD_LIBRARY_PATH and PKG_CONFIG_PATH with the paths of the spack installations. For example, say you have installed MaPHyS with pastix, the netlib suite and openmpi like this

    spack install maphys ^netlib ^openmpi
    

    then you could use

    spack load hwloc
    spack load openmpi
    spack load netlib
    spack load scotch
    spack load pastix ^netlib
    spack load maphys ^netlib ^openmpi
    

    in order to make all the libraries required to link with MaPHyS available. Note that this also updates PKG_CONFIG_PATH so that you are able to use pkg-config to get the link line and options

    pkg-config --cflags maphys
    pkg-config --libs maphys
    

    The main drawback here is that the modules are not loaded recursively, so you need to load one by one all the modules your stack relies on. It would be very convenient in the future to be able to set up the environment for MaPHyS with just one command like

    spack load maphys ^netlib ^openmpi
    

2.3.4 Use MaPHyS in your code

Please refer to the MaPHyS documentation to learn how to call the MaPHyS interface from C or Fortran code.

2.4 HIPS

HIPS (Hierarchical Iterative Parallel Solver) is a scientific library that provides an efficient parallel iterative solver for very large sparse linear systems, see http://hips.gforge.inria.fr/.

This library depends on partitioners (Metis or Scotch), BLAS and MPI.

Spack helps us automatically build and install all the required components in a coherent way, ensuring compatibility of variants and options.

Here is an overview of the dependency DAG of HiPS:

hips_dep.png

2.4.1 Install HiPS

  1. HiPS package options overview

    Use the spack info command to get some information about the HiPS package

    spack info hips
    

    Pay attention to:

    • the versions available, releases or git/svn branches,
    • the variants you can build and the default variants chosen
  2. HiPS versions available

    Available releases are denoted with version numbers and are considered to be more stable.

    spack install hips@1.2b-rc5
    

    You can also use the HEAD of one of the svn branches (e.g. the trunk).

    spack install hips@trunk
    

    To use an existing installation of HiPS in your stack, choose the "exist" version. This option requires setting an environment variable, HIPS_DIR, pointing to the installation path of your HiPS

    export HIPS_DIR=/path/to/your/hips/installation
    spack install hips@exist
    

    If you plan to use Spack while developing HiPS, you can tell Spack to build HiPS from your own source directory, containing your modifications, by setting HIPS_DIR

    export HIPS_DIR=/path/to/your/hips/sources
    spack install hips@src
    

    Now imagine you want to modify some source files of Scotch and test your modifications through HiPS

    export SCOTCH_DIR=/path/to/your/scotch/sources
    spack install hips ^scotch@src
    

    You can then make new modifications and re-build your Scotch libraries

    spack uninstall -f scotch@src
    spack install scotch@src
    

    If Scotch is built as shared libraries, the HiPS drivers will use the newly generated libraries. If the Scotch libraries are static, the HiPS drivers must be re-generated.

    spack uninstall hips ^scotch@src~shared
    spack uninstall scotch@src~shared
    spack install hips ^scotch@src~shared
    
  3. HiPS as static or dynamic libraries

    HiPS produces static libraries by default because the ~shared variant is the default.

    spack install hips # is identical to
    spack install hips~shared
    

    The dynamic variant is not available for now.

  4. HiPS with or without drivers

    HiPS produces examples (plus testers and timers) by default because the variant +examples is enabled.

    spack install hips # is identical to
    spack install hips+examples
    

    To disable the building of examples use ~examples

    spack install hips~examples
    
  5. HiPS depends on BLAS

    HiPS depends on dense computational kernels for execution on conventional CPUs, namely BLAS.

    When you invoke the following spack command

    spack spec hips
    

    you can notice that the default stack requires BLAS to be installed. This library is seen in spack as a virtual package, in the sense that several implementations of this interface exist and can be chosen. By default, openblas is chosen for BLAS. Other choices can be made by specifying them in the spack command. First you can look for the available providers of a virtual package by using the command spack providers.

    spack providers blas
    

    You can specify the implementation you want to use in your stack with the ^ character

    spack install hips ^eigen-blas
    
  6. How to use the Intel MKL BLAS

    For proprietary software such as the Intel MKL, we do not provide the installation, but we can use it if it is available in the environment. For example, if you have an Intel suite installed on your system, make it available in the environment and our Spack packages can use it in the stack. More precisely, the environment variable MKLROOT must be set to use the kernels available in the MKL (this is quite standard). On my laptop for example

    source /home/pruvost/intel/bin/compilervars.sh intel64
    source /home/pruvost/intel/mkl/bin/mklvars.sh intel64
    

    Then because the MKLROOT is set we can use the MKL kernels

    spack install hips ^mkl
    

    Please also refer to the following section about the Intel runtime path configuration.

  7. HiPS depends on METIS or Scotch

    HiPS depends on a partitioning library. The choices available are Metis and Scotch. These variants are mutually exclusive. Choose either Metis or Scotch in your stack. By default Scotch is chosen so that

    spack install hips # is identical to
    spack install hips~metis
    

    will build a stack with Scotch.

    To use Metis instead, use the variant +metis.

    spack install hips+metis
    
  8. HiPS depends on MPI

    HiPS can be executed on clusters of interconnected nodes using the MPI library.

    Notice that we depend on the MPI virtual package here. To get a list of MPI packages available, try the following:

    spack providers mpi
    

    To choose your MPI implementation, use the ^ character

    spack install hips ^mpich
    spack install hips ^openmpi
    

    Note that you can use an already existing MPI with the @exist version

    # where bin/mpicc, include/mpi.h, etc. lie
    export OPENMPI_DIR=/path/to/your/mpi/install/dir # can be an mpich, openmpi or bullxmpi
    spack install hips ^openmpi@exist
    
  9. How to use IntelMPI

    For IntelMPI we do not provide the installation, but we can use it if it is available in the environment. For example, if you have an Intel suite installed on your system, make it available in the environment; more precisely, set I_MPI_ROOT. On my laptop for example:

    source /home/pruvost/intel/bin/compilervars.sh intel64
    source /home/pruvost/intel/impi/5.0.1.035/bin64/mpivars.sh
    spack install hips %intel ^mkl ^intelmpi
    

    Be aware that to use IntelMPI, you should use the Intel compiler. This can be set with %intel. Remember that you should have defined your compilers first. Check that it is available

    spack compiler list
    

    If not, add it

    spack compiler add /path/to/your/intel/compilers
    

    You can also edit the file $HOME/.spack/compilers.yaml to add/remove compilers, or use the spack config edit compilers command.

    Please also refer to the following section about the Intel runtime path configuration.

  10. How to use BullxMPI

    For BullxMPI we do not provide the installation but we can use it if available in the environment. Set the BULLXMPI_DIR environment variable to the installation root of BullxMPI

    # where bin/mpicc, include/mpi.h, etc. lie
    export BULLXMPI_DIR=/path/to/bullxmpi/install/dir
    spack install hips ^bullxmpi
    
  11. Situations: I'm a developer of HiPS

    To build your specific version with modified source code, you can still use Spack to install the libraries and drivers. The @src version should be used

    export HIPS_DIR=/path/to/your/hips/sources
    spack install hips@src
    
  12. Situations: I'm a developer of a HiPS dependency

    Imagine you want to modify some source files of Scotch, for instance, and test your modifications through HiPS. To do that you should use the @src version at the level you develop

    export SCOTCH_DIR=/path/to/your/scotch/sources
    spack install hips@trunk~metis ^scotch@src
    

    You can then make new modifications and re-build your Scotch libraries

    spack uninstall -f scotch@src
    spack install scotch@src
    

    If Scotch is built as shared libraries, the HiPS drivers will use the newly generated libraries. If the Scotch libraries are static, the HiPS drivers must be re-generated.

    spack uninstall hips@trunk~metis ^scotch@src~shared
    spack uninstall scotch@src~shared
    spack install hips@trunk~metis ^scotch@src~shared
    
  13. Situations: I want to use an already installed dependency

    Imagine you need a specific version of a dependency which is already installed on your system; in that case you can use the @exist version

    export OPENMPI_DIR=/path/to/your/mpi/install/dir
    spack install hips ^openmpi@exist
    

    If a specific version of Scotch, for example, is already installed

    export SCOTCH_DIR=/path/to/your/scotch/install/dir
    spack install hips~metis ^scotch@exist
    
  14. Troubleshooting
    1. Mac OS X
      • malloc.h does not exist: ./io_hb.h:8:9: fatal error: 'malloc.h' file not found; waiting for the next HiPS release, which fixes this

2.4.2 Test HiPS

The HiPS library delivers testing drivers that allow you to check the proper execution of its functions. With Spack, the testing driver executables are installed if the +examples option is used in the spec; see the previous section about installation.

The executables are installed in the sub-directory TESTS.

To see the drivers usage, please refer to the HiPS documentation.

Note that you can easily get the path to the drivers by using the spack location command

mpirun -np 2 `spack location -i hips`/TESTS/testHIPS1.ex 100

2.4.3 Use HiPS in your code

Please refer to the online documentation to learn how to call HiPS in a C or Fortran code.

2.5 qr_mumps

qr_mumps is a software package for the solution of sparse, linear systems on multicore computers. It implements a direct solution method based on the QR factorization of the input matrix. Therefore, it is suited to solving sparse least-squares problems and to computing the minimum-norm solution of sparse, underdetermined problems. It can obviously be used for solving square problems in which case the stability provided by the use of orthogonal transformations comes at the cost of a higher operation count with respect to solvers based on, e.g., the LU factorization. qr_mumps supports real and complex, single or double precision arithmetic.

Official website: http://buttari.perso.enseeiht.fr/qr_mumps/

qr_mumps svn repository: https://wwwsecu.irit.fr/svn/qr_mumps/

This library depends on other complex components, such as computational kernels (BLAS, LAPACK), the StarPU runtime system, partitioners like Scotch/PT-Scotch and Metis, and HWLOC, which makes the configuration and installation of this library painful.

Spack helps us automatically build and install all the required components in a coherent way, ensuring compatibility of variants and options.

Here is an overview of the dependency DAG of qr_mumps:

qrmumps_dep.png

2.5.1 Install qr_mumps

  1. qr_mumps package options overview

    Use the spack info command to get some information about the qr_mumps package

    spack info qr_mumps
    

    Pay attention to:

    • the versions available, releases or git/svn branches,
    • the variants you can build and the default variants chosen
  2. qr_mumps versions available

    Available releases are denoted with version numbers and are considered to be more stable.

    spack install qr_mumps@2.0
    

    You can use the HEAD of one of the svn branches. Only the branch trunk is supported for now.

    spack install qr_mumps@trunk
    

    To use an existing installation of qr_mumps in your stack, choose the "exist" version. This option requires setting an environment variable, QR_MUMPS_DIR, pointing to the installation path of your qr_mumps

    export QR_MUMPS_DIR=/path/to/your/qr_mumps/installation
    spack install qr_mumps@exist
    

    If you plan to use Spack while developing qr_mumps, you can tell Spack to build qr_mumps from your own source directory, containing your modifications, by setting QR_MUMPS_DIR

    export QR_MUMPS_DIR=/path/to/your/qr_mumps/sources
    spack install qr_mumps@src
    

    Now imagine you want to modify some source files of StarPU and test your modifications through qr_mumps

    export STARPU_DIR=/path/to/your/starpu/sources
    spack install qr_mumps+starpu ^starpu@src
    

    You can then make new modifications and re-build your StarPU libraries

    spack uninstall -f starpu@src
    spack install starpu@src
    

    If StarPU is built as shared libraries, the qr_mumps drivers will use the newly generated libraries. If the StarPU libraries are static, the qr_mumps drivers must be re-generated.

    spack uninstall qr_mumps ^starpu@src~shared
    spack uninstall starpu@src~shared
    spack install qr_mumps ^starpu@src~shared
    
  3. qr_mumps depends on BLAS and LAPACK

    qr_mumps depends on dense computational kernels for execution on conventional CPUs.

    When you invoke the following spack command

    spack spec qr_mumps
    

    you can notice that the default stack requires BLAS and LAPACK to be installed. These packages are seen in spack as virtual packages, in the sense that several implementations of these interfaces exist and can be chosen. By default, openblas is chosen for BLAS. Other choices can be made by specifying them in the spack command. First you can look for the available providers of a virtual package by using the command spack providers.

    spack providers blas
    spack providers lapack
    

    You can specify the implementation you want to use in your stack with the ^ character

    spack install qr_mumps ^openblas ^netlib-lapack
    spack install qr_mumps ^netlib # to use netlib blas/lapack
    
  4. How to use the Intel MKL BLAS, LAPACK

    For proprietary software such as the Intel MKL, we do not provide the installation, but we can use it if it is available in the environment. For example, if you have an Intel suite installed on your system, make it available in the environment and our Spack packages can use it in the stack. More precisely, the environment variable MKLROOT must be set to use the kernels available in the MKL (this is quite standard). On my laptop for example

    source /home/pruvost/intel/bin/compilervars.sh intel64
    source /home/pruvost/intel/mkl/bin/mklvars.sh intel64
    

    Then because the MKLROOT is set we can use the MKL kernels

    spack install qr_mumps ^mkl
    

    Please also refer to the following section about the Intel runtime path configuration.

  5. qr_mumps depends on METIS or Scotch

    qr_mumps depends on Metis or Scotch partitioning libraries. By default Scotch is used:

    spack install qr_mumps # equivalent to
    spack install qr_mumps+scotch~metis
    

    To enable Metis instead, use +metis and ~scotch

    spack install qr_mumps+metis~scotch
    

    because Metis and Scotch are incompatible (Scotch provides its own metis.h).

    To specify variants on these dependencies, use the ^ character

    spack install qr_mumps+scotch ^scotch@6.0.3
    
  6. qr_mumps depends on a runtime system

    qr_mumps depends on StarPU (http://starpu.gforge.inria.fr/) runtime system.

    To specify something about StarPU, use the ^ character

    spack install qr_mumps+starpu ^starpu@svn-trunk+debug
    
    1. Generating execution trace with StarPU+FxT

      When qr_mumps is executed with StarPU, some execution traces can be automatically generated if the proper set of options is enabled.

      You should use the +fxt variant of StarPU, e.g.

      spack install qr_mumps+starpu ^starpu+fxt
      

      Then set the STARPU_GENERATE_TRACE environment variable to 1 to automatically generate the paje.trace file after any qr_mumps+StarPU execution.
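
      For example (a sketch; the driver name is a hypothetical placeholder for any qr_mumps testing executable):

      export STARPU_GENERATE_TRACE=1
      ./your_qrm_driver # hypothetical driver; a paje.trace file is generated after the execution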

  7. Situations: I'm a developer of qr_mumps

    To build your specific version with modified source code, you can still use Spack to install the libraries and drivers. The @src version should be used

    export QR_MUMPS_DIR=/path/to/your/qr_mumps/sources
    spack install qr_mumps@src
    
  8. Situations: I'm a developer of a qr_mumps dependency

    Imagine you want to modify some source files of StarPU, for instance, and test your modifications through qr_mumps. To do that you should use the @src version at the level you develop

    export STARPU_DIR=/path/to/your/starpu/sources
    spack install qr_mumps+starpu ^starpu@src
    

    You can then make new modifications and re-build your StarPU libraries

    spack uninstall -f starpu@src
    spack install starpu@src
    

    If StarPU is built as shared libraries, the qr_mumps drivers will use the newly generated libraries. If the StarPU libraries are static, the qr_mumps drivers must be re-generated.

    spack uninstall qr_mumps+starpu ^starpu@src~shared
    spack uninstall starpu@src~shared
    spack install qr_mumps+starpu ^starpu@src~shared
    
  9. Situations: I want to use an already installed dependency

    Imagine you need a specific version of a dependency which is already installed on your system; in that case you can use the @exist version

    export STARPU_DIR=/path/to/your/starpu/install/dir
    spack install qr_mumps+starpu ^starpu@exist
    

    If a specific version of Scotch, for example, is already installed

    export SCOTCH_DIR=/path/to/your/scotch/install/dir
    spack install qr_mumps+scotch ^scotch@exist
    
  10. Troubleshooting

2.6 ScalFMM

ScalFMM is a software library to simulate N-body interactions using the Fast Multipole Method. It implements a kernel-independent fast multipole method based on interpolation (Chebyshev or Lagrange).

Official webpage: http://scalfmm-public.gforge.inria.fr/doc/.

The ScalFMM git repository is available on the Inria forge.

This library depends on other complex components (computational kernels, advanced runtime systems, MPI, CUDA, etc.), making the configuration and installation of this library painful.

Spack helps us automatically build and install all the required components in a coherent way, ensuring compatibility of variants and options.

Here is an overview of the dependency DAG of ScalFMM:

scalfmm_dep.png

2.6.1 Install ScalFMM

  1. ScalFMM package options overview

    Use the spack info command to get some information about the ScalFMM package

    spack info scalfmm
    

    Pay attention to:

    • the versions available, releases or git/svn branches,
    • the variants you can build and the default variants chosen
  2. ScalFMM versions available

    Available releases are denoted with version numbers and are considered to be more stable.

    spack install scalfmm@1.4-148
    

    You can also use the HEAD of one of the git branches (e.g. the master).

    spack install scalfmm@master
    

    To use an existing installation of ScalFMM in your stack, choose the "exist" version. This option requires setting an environment variable, SCALFMM_DIR, pointing to the installation path of your ScalFMM

    export SCALFMM_DIR=/path/to/your/scalfmm/installation
    spack install scalfmm@exist
    

    If you plan to use Spack while developing ScalFMM, you can tell Spack to build ScalFMM from your own source directory, containing your modifications, by setting SCALFMM_DIR

    export SCALFMM_DIR=/path/to/your/scalfmm/sources
    spack install scalfmm@src
    
  3. ScalFMM as static or dynamic libraries

    The ScalFMM library is built as dynamic libraries by default

    spack install scalfmm # is identical to
    spack install scalfmm+shared
    

    To build a static version of ScalFMM, use ~shared

    spack install scalfmm~shared
    

    The latest ScalFMM releases do not support building dynamic libraries, but the @master version does.

  4. ScalFMM with or without testing drivers

    ScalFMM produces some executables (stored in bin/) by default because the variant +examples is enabled.

    spack install scalfmm # is identical to
    spack install scalfmm+examples
    

    To disable the building of examples use ~examples

    spack install scalfmm~examples
    

    Testing drivers are also available with the +tests variant

    spack install scalfmm+tests
    
  5. ScalFMM depends on BLAS, LAPACK, FFTW, MKL

    ScalFMM depends on dense computational kernels for execution on conventional CPUs, namely BLAS, LAPACK and FFTW. The dependency on FFTW is optional; use +fft to enable it.

    When you invoke the following spack command

    spack spec scalfmm
    

    you can notice that the default stack requires BLAS and LAPACK to be installed. These libraries are seen in spack as virtual packages, in the sense that several implementations of these interfaces exist and can be chosen. By default, openblas is chosen for BLAS, and netlib for the others. Other choices can be made by specifying them in the spack command. First you can look for the available providers of a virtual package by using the command spack providers.

    spack providers blas
    spack providers lapack
    spack providers fft
    

    When no provider is specified, the default choice follows a lexicographical order on the package names.

    You can specify the implementation you want to use in your stack with the ^ character

    spack install scalfmm+fft ^fftw ^netlib-lapack ^openblas
    spack install scalfmm ^netlib # to use blas/lapack from netlib
    
  6. How to use the kernels available in the Intel MKL

    For proprietary software such as the Intel MKL, we do not provide the installation, but we can use it if it is available in the environment. For example, if you have an Intel suite installed on your system, make it available in the environment and our Spack packages can use it in the stack. More precisely, the environment variable MKLROOT must be set to use the kernels available in the MKL (this is quite standard). On my laptop for example

    source /home/pruvost/intel/bin/compilervars.sh intel64
    source /home/pruvost/intel/mkl/bin/mklvars.sh intel64
    

    Then because the MKLROOT is set we can use the MKL kernels

    spack install scalfmm+fft ^mkl
    

    Please also refer to the following section about the Intel runtime path configuration.

  7. ScalFMM depends on a runtime system

    ScalFMM depends on StarPU (http://starpu.gforge.inria.fr/) runtime system.

    To enable StarPU, use +starpu

    spack install scalfmm+starpu
    

    It is possible to specify options about StarPU

    spack install scalfmm+starpu ^starpu@1.1.5+fxt
    spack install scalfmm+starpu+mpi
    
    1. Generating execution trace with StarPU+FxT

      When ScalFMM is executed with StarPU, some execution traces can be automatically generated if the proper set of options is enabled.

      You should use the +fxt variant of StarPU

      spack install scalfmm+starpu ^starpu+fxt
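
      As for qr_mumps, setting the STARPU_GENERATE_TRACE environment variable to 1 should trigger the generation of a paje.trace file after the execution (a sketch; the driver name is a hypothetical placeholder):

      export STARPU_GENERATE_TRACE=1
      ./your_scalfmm_driver # hypothetical driver name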
      
  8. ScalFMM with MPI

    ScalFMM can be executed on clusters of interconnected nodes using the MPI library.

    To use this feature you have to use the +mpi variant

    spack install scalfmm+mpi
    

    and if starpu is used

    spack install scalfmm+mpi+starpu
    

    Notice that we depend on the MPI virtual package here. To get a list of MPI packages available, try the following:

    spack providers mpi
    

    To choose your MPI implementation, use the ^ character

    spack install scalfmm+mpi ^mpich
    spack install scalfmm+mpi ^openmpi
    

    Note that you can use an already existing MPI with the @exist version

    # where bin/mpicc, include/mpi.h, etc. lie
    export OPENMPI_DIR=/path/to/your/mpi/install/dir # can be an mpich, openmpi or bullxmpi
    spack install scalfmm+mpi ^openmpi@exist
    
  9. How to use IntelMPI

    For IntelMPI we do not provide the installation, but we can use it if it is available in the environment. For example, if you have an Intel suite installed on your system, make it available in the environment; more precisely, set I_MPI_ROOT. On my laptop for example:

    source /home/pruvost/intel/bin/compilervars.sh intel64
    source /home/pruvost/intel/impi/5.0.1.035/bin64/mpivars.sh
    spack install scalfmm+mpi %intel ^mkl-blas ^intelmpi
    

    Be aware that to use IntelMPI, you should use the Intel compiler. This can be set with %intel. Remember that you should have defined your compilers first. Check that it is available

    spack compiler list
    

    If not, add it

    spack compiler add /path/to/your/intel/compilers
    

    You can also edit the file $HOME/.spack/compilers.yaml to add/remove compilers, or use the spack config edit compilers command.

    Please also refer to the following section about the Intel runtime path configuration.

  10. How to use BullxMPI

    For BullxMPI we do not provide the installation but we can use it if available in the environment. Set the BULLXMPI_DIR environment variable to the installation root of BullxMPI

    # where bin/mpicc, include/mpi.h, etc. lie
    export BULLXMPI_DIR=/path/to/bullxmpi/install/dir
    spack install scalfmm+mpi ^bullxmpi
    
  11. Situations: I'm a developer of ScalFMM

    To build your specific version with modified source code, you can still use Spack to install the libraries and drivers. The @src version should be used

    export SCALFMM_DIR=/path/to/your/scalfmm/sources
    spack install scalfmm@src
    
  12. Situations: I'm a developer of a ScalFMM dependency

    Imagine you want to modify some source files of StarPU, for instance, and test your modifications through ScalFMM. To do that you should use the @src version at the level you develop

    export STARPU_DIR=/path/to/your/starpu/sources
    spack install scalfmm@master+starpu ^starpu@src
    

    You can then make new modifications and re-build your StarPU libraries

    spack uninstall -f starpu@src
    spack install starpu@src
    

    If StarPU is built as shared libraries, the ScalFMM drivers will use the newly generated libraries. If the StarPU libraries are static, the ScalFMM drivers must be re-generated.

    spack uninstall scalfmm@master+starpu ^starpu@src~shared
    spack uninstall starpu@src~shared
    spack install scalfmm@master+starpu ^starpu@src~shared
    
  13. Situations: I want to use an already installed dependency

    Imagine you need a specific version of a dependency which is already installed on your system; in that case you can use the @exist version

    export OPENMPI_DIR=/path/to/your/mpi/install/dir
    spack install scalfmm+mpi+starpu ^starpu+mpi~simgrid ^openmpi@exist
    

    If a specific version of StarPU, for example, is already installed

    export STARPU_DIR=/path/to/your/starpu/install/dir
    spack install scalfmm+starpu ^starpu@exist
    
  14. Troubleshooting

2.6.2 Test ScalFMM

  1. Test the installation of ScalFMM

    The ScalFMM library delivers testing drivers that allow you to check the proper execution of its functions. With Spack, the testing driver executables are installed if the +examples option is used in the spec; see the previous section about installation.

    The executables are installed in the bin sub-directory of the ScalFMM installation.

    To see the drivers usage, please refer to the ScalFMM online documentation.

    Note that you can easily get the path to the drivers by using the spack location command

    `spack location -i scalfmm+mpi`/bin/DirectAlgorithm
    
  2. make test/ctest in ScalFMM

    A developer may want to check a wide range of functionalities by executing all the available tests.

    This is possible only if you keep the build directory. By default this directory is deleted just after the installation process, but you can prevent this removal by adding the --keep-stage option at installation time

    spack install --keep-stage scalfmm+tests
    

    Then move into the build directory

    spack cd scalfmm+tests
    

    and finally, execute the tests

    ctest
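
    You can also run only a subset of the tests with ctest's standard regular-expression filter (the pattern below is illustrative):

    ctest -R Direct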
    

    If something is missing in the environment, use spack env

    spack env scalfmm+tests bash
    ctest
    

3 Troubleshooting

3.1 spack is not compatible with your version of python

To our knowledge, Spack is not compatible with recent versions of python (3.x). Make sure you have a python2 version installed on your system

sudo apt-get install -y python2.7-dev

Make your python command point to python2, something like

ln -s /usr/bin/python2 ~/bin/python # assuming ~/bin exists and appears first in your PATH

or change the python used by Spack: edit the bin/spack script and replace #!/usr/bin/env python with #!/usr/bin/env python2 or #!/usr/bin/env python2.7
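
For example, with GNU sed (a sketch; on BSD/macOS sed, the -i option takes an explicit backup-suffix argument):

sed -i 's|#!/usr/bin/env python$|#!/usr/bin/env python2|' $SPACK_ROOT/bin/spack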

3.2 you don't have a proper compiler

Check that you have some compilers known to spack

spack compiler list

You can add your compilers by giving the path where they can be found

spack compiler add /usr/bin
spack compiler info clang
spack compiler info gcc
spack compiler info intel

You can edit the file $HOME/.spack/compilers.yaml to add/remove compilers. The principle is to give the paths to the compiler executables. You have a Spack shortcut to do that

export EDITOR=emacs
spack config edit compilers

3.3 this package does not depend on another while it is the case

This problem is often met: you are sure your package depends on another, but spack complains that it is not the case. Here is an example

spack spec scotch ^openmpi

You think scotch depends on MPI but spack tells you Error: scotch does not depend on openmpi. In this case it is quite normal because you have not enabled the +mpi variant, which makes scotch depend on MPI.

spack spec scotch+mpi ^openmpi

This last command is better.

But other cases are less intuitive! For example

spack spec mumps ^openmpi

Here spack says Error: mumps does not depend on openmpi while mumps depends on MPI by default, so what is the point here? This problem occurs when some dependencies are optional. In the previous example, mumps has a variant +mpi available. If mpi is disabled, with the ~mpi variant, mumps no longer depends on MPI, so the dependency on MPI is conditional. As soon as a dependency is conditional, spack will not see it during its normalization step; compare the two successive steps, normalized and concretized

spack spec mumps

Here, during the normalization step, spack does not see MPI as a dependency. It is only seen after the concretization step.

When you want to specify something on a dependency in the spec with the ^ character, spack must see the dependency in the normalization step. To help spack see it, you just need to know under which condition this dependency is enabled and specify this condition in the spec, e.g. like this

spack spec mumps+mpi ^openmpi

To know the conditions on the dependencies of a package, open the package file and look for depends_on lines that define the dependencies

export EDITOR=emacs
spack edit mumps
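
If you just want to list these conditions without opening an editor, a grep also works (the path below assumes Spack's builtin repository layout):

grep depends_on $SPACK_ROOT/var/spack/repos/builtin/packages/mumps/package.py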

Other examples:

spack spec chameleon ^starpu@1.1.5

the dependency on starpu is optional because of the +quark variant. Specify that you want to use starpu rather than quark

spack spec chameleon+starpu ^starpu@1.1.5

Another example:

spack spec hips ^scotch+mpi

the dependency on scotch is optional because of the +metis variant. Specify that you don't want to use metis

spack spec hips+int64~metis ^scotch+mpi

A last example:

spack spec netlib-scalapack ^openmpi

netlib-scalapack depends on mpi only for versions higher than 2. Specify such a version explicitly to be able to specify the mpi vendor

spack spec netlib-scalapack@2.0.2 ^openmpi

3.4 build problems with Intel compilers

The compilation of some packages with Intel compilers may be buggy with Spack, cf. openmpi https://groups.google.com/forum/?hl=en#!searchin/spack/intel/spack/Jj9-iWiMIAU/qloJTlNzCAAJ.

The user has to make an Intel config file available in his environment to have access to all Intel symbols. To do this, consider creating a file, e.g. intel.cfg, containing -Xlinker -rpath= entries for all the paths required to use the Intel libraries, i.e. the paths available in [DY]LD_LIBRARY_PATH after sourcing the Intel scripts such as intel/bin/compilervars.sh, intel/mkl/bin/mklvars.sh and intel/impi/5.0.1.035/bin64/mpivars.sh.

Example of intel.cfg file:

-Xlinker -rpath=$HOME/intel/impi/5.0.1.035/intel64/lib -Xlinker -rpath=$HOME/intel/composer_xe_2015.0.090/compiler/lib/intel64 -Xlinker -rpath=$HOME/intel/composer_xe_2015.0.090/mkl/lib/intel64 -Xlinker -rpath=$HOME/intel/composer_xe_2015.0.090/mpirt/lib/intel64 -Xlinker -rpath=$HOME/intel/composer_xe_2015.0.090/ipp/../compiler/lib/intel64 -Xlinker -rpath=$HOME/intel/composer_xe_2015.0.090/ipp/lib/intel64 -Xlinker -rpath=/home/pruvost/intel/composer_xe_2015.0.090/tbb/lib/intel64/gcc4.1

Once the intel.cfg file has been written, make it available to the Intel compilers

export ICCCFG=$HOME/intel/intel.cfg
export ICPCCFG=$HOME/intel/intel.cfg
export IFORTCFG=$HOME/intel/intel.cfg

The link phase with Intel compilers should be improved.

3.5 MacOS X

3.5.1 Mac OS X Yosemite, problem in the /usr/include/dispatch/object.h

3.5.2 Mac OS X: build issues

It seems that the following packages are not compatible with Mac OS X (tested under a VM OS X 10.9 Mavericks)

  • eztrace:
    • cannot find -lbfd (while it is located in /opt/local/lib)
    • FC unknown compiler
/bin/sh ../libtool  --tag=F77   --mode=compile Unkown compiler: FC   -c -o FORTRAN/GTGBasic1_f.lo FORTRAN/GTGBasic1_f.f
libtool: compile:  Unkown compiler: FC -c FORTRAN/GTGBasic1_f.f
../libtool: line 1746: Unkown: command not found
  • papi: zero_shmem.c:36:20: fatal error: malloc.h: No such file or directory
  • netlib-scalapack: configure error issue with fortran mangling (cmake-3.4.0, FC is mpif90 from MPICH)
  • vite: Render_opengl.cpp:52:20: fatal error: GL/glu.h: No such file or directory

3.6 Problem with import 'wraps'

Sometimes this import error happens:

  File "/usr/lib/python2.7/unittest/result.py", line 10, in <module>
    from functools import wraps
ImportError: cannot import name wraps

Please remove the functools.pyc file generated in the Spack externals

rm $SPACK_ROOT/lib/spack/external/functools.pyc

Author: HiePACS

Created: 2017-06-06 Tue 16:14
