Pawel Pomorski
High Performance Computing Programming Specialist
University of Waterloo
NAMD instructions
These instructions are for monk, current as of February 2014, with the default modules:
intel/12.1.3 cuda/5.5.22
Preliminaries:
module unload openmpi
module load openmpi/intel/1.7.4
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/sharcnet/intel/12.1.3/icc/composer_xe_2011_sp1.9.293/compiler/lib/intel64
OpenMPI 1.7.4 is compiled with CUDA support. The LD_LIBRARY_PATH setting is needed so that all the Intel compiler libraries can be located at runtime.
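To confirm that the loaded OpenMPI was indeed built with CUDA support, you can query its MCA parameters (a quick check, assuming ompi_info is on your PATH after loading the module):
# should report ...:value:true if CUDA support is compiled in
ompi_info --parsable --all | grep mpi_built_with_cuda_support:value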
Get the NAMD source (NAMD_2.9_Source.tar.gz, downloaded from the NAMD website) and unpack it:
tar xvfz NAMD_2.9_Source.tar.gz
cd NAMD_2.9_Source
Download and unpack the precompiled TCL and FFTW libraries (working in the NAMD source directory):
wget http://www.ks.uiuc.edu/Research/namd/libraries/fftw-linux-x86_64.tar.gz
tar xzf fftw-linux-x86_64.tar.gz
mv linux-x86_64 fftw
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl8.5.9-linux-x86_64.tar.gz
wget http://www.ks.uiuc.edu/Research/namd/libraries/tcl8.5.9-linux-x86_64-threaded.tar.gz
tar xzf tcl8.5.9-linux-x86_64.tar.gz
tar xzf tcl8.5.9-linux-x86_64-threaded.tar.gz
mv tcl8.5.9-linux-x86_64 tcl
mv tcl8.5.9-linux-x86_64-threaded tcl-threaded
Unpack charm, then compile it with the desired parallelization (threaded, MPI, or MPI-SMP):
tar xvf charm-6.4.0.tar
cd charm-6.4.0
Compile charm - MPI build:
env MPICXX=mpicxx ./build charm++ mpi-linux-x86_64 --no-build-shared --with-production
Compile charm - MPI-SMP build:
env MPICXX=mpicxx ./build charm++ mpi-linux-x86_64 smp --no-build-shared --with-production
Compile charm - threaded build:
./build charm++ multicore-linux64 --no-build-shared icc8 --with-production
Configure NAMD and compile (with CUDA support enabled); replace CHARM_ARCHITECTURE with the name of the charm build directory produced by whichever of the three builds above you chose:
./config Linux-x86_64-icc --charm-arch CHARM_ARCHITECTURE --with-cuda --cuda-prefix /opt/sharcnet/cuda/5.5.22/toolkit/
cd Linux-x86_64-icc
make
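Once make finishes, the namd2 binary is in the current Linux-x86_64-icc directory. A minimal sketch of a test run, assuming an MPI charm build (apoa1.namd is a placeholder for your own NAMD configuration file; +idlepoll is recommended for CUDA builds):
# run NAMD on 8 MPI ranks, polling the GPU while otherwise idle
mpirun -np 8 ./namd2 +idlepoll apoa1.namd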
Test charm if needed. From the charm-6.4.0 directory:
cd architecture_directory/tests/charm++/megatest
make pgm
mpirun -n 4 ./pgm
Python installation instructions
Even though Python modules are provided by SHARCNET, you may sometimes need to compile your own Python and NumPy. Here are instructions for doing so (tested in September 2014):
To get the source tarballs, do:
wget --no-check-certificate https://www.python.org/ftp/python/2.7.8/Python-2.7.8.tgz
wget http://sourceforge.net/projects/numpy/files/NumPy/1.8.2/numpy-1.8.2.tar.gz
and unpack these somewhere in your directories.
These were built with the following modules loaded:
module unload intel
module unload mkl
module unload openmpi
module load gcc/4.8.2
module load openmpi/gcc/1.8.1
(openmpi is not necessary for this build, but I would suggest loading openmpi/gcc/1.8.1 anyway)
Python was built with:
./configure --enable-shared --prefix=~/software_installs/python/2.7.8/gcc/installdir
make
make install
Then set (in .bashrc for a permanent change):
export PATH=~/software_installs/python/2.7.8/gcc/installdir/bin:$PATH
export LD_LIBRARY_PATH=~/software_installs/python/2.7.8/gcc/installdir/lib:$LD_LIBRARY_PATH
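To confirm that the shell now picks up the newly built interpreter rather than a SHARCNET module:
which python
python --version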
Numpy was built with:
unset LDFLAGS
python setup.py build --fcompiler=gnu95
python setup.py install --prefix=~/software_installs/numpy/1.8.2/gcc/installdir
The NumPy headers were also symlinked into the Python include directory:
ln -sf ~/software_installs/numpy/1.8.2/gcc/installdir/lib/python2.7/site-packages/numpy/core/include/numpy ~/software_installs/python/2.7.8/gcc/installdir/include/python2.7
and finally
export PYTHONPATH=~/software_installs/numpy/1.8.2/gcc/installdir/lib/python2.7/site-packages/
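A quick check that the new NumPy imports from the new Python and reports the expected version and build configuration:
python -c "import numpy; print(numpy.__version__); numpy.show_config()"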
CNVnator instructions
Install the ROOT prerequisite
Note: compiling this package takes a long time, so it is best to build in the /tmp directory for faster disk access.
wget https://root.cern.ch/download/root_v6.06.06.source.tar.gz
tar xvfz root_v6.06.06.source.tar.gz
module unload intel openmpi mkl
module load gcc/4.9.2
module load python/gcc/2.7.8
cd root-6.06.06   # enter the unpacked ROOT source directory (name assumed)
mkdir builddir
cd builddir
cmake ../ -DCMAKE_INSTALL_PREFIX=/work/lianglab/bin/install_root -Dgnuinstall=ON
cmake --build .
cmake --build . --target install
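Once the install completes, a quick sanity check (assuming root-config was installed under the prefix's bin directory, as it normally is):
/work/lianglab/bin/install_root/bin/root-config --version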
Install CNVnator
(load the same modules as for ROOT above)
module unload intel openmpi mkl
module load gcc/4.9.2
module load python/gcc/2.7.8
wget http://sv.gersteinlab.org/cnvnator/CNVnator_v0.3.zip
unzip CNVnator_v0.3.zip
cd CNVnator_v0.3/src/samtools/
make
cd ..
Now edit the Makefile so it has:
ROOTLIBS = -L$(ROOTSYS)/lib/root -lCore -lRIO -lNet -lHist -lGraf -lGraf3d \
           -lGpad -lTree -lRint -lMatrix -lPhysics \
           -lMathCore -lThread -lGui
CXX = g++ -std=c++11 $(ROOTFLAGS) -DCNVNATOR_VERSION=\"$(VERSION)\"
SAMDIR = samtools
INC = -I$(ROOTSYS)/include/root -I$(SAMDIR)
SAMLIB = $(SAMDIR)/libbam.a
Note that the -lCint library was removed from ROOTLIBS (CINT no longer exists in ROOT 6).
Finally, run:
export ROOTSYS=/work/lianglab/bin/install_root
export LD_LIBRARY_PATH=/work/lianglab/bin/install_root/lib/root:$LD_LIBRARY_PATH
make
This will produce the cnvnator executable. In future sessions, run the LD_LIBRARY_PATH export above before running it.
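A minimal sketch of a typical CNVnator workflow on a coordinate-sorted BAM file (sample.bam, the 1000 bp bin size, and the chromosome FASTA directory are placeholders):
# extract read mapping
./cnvnator -root sample.root -tree sample.bam
# generate histogram (needs per-chromosome FASTA files)
./cnvnator -root sample.root -his 1000 -d /path/to/chrom_fasta/
# calculate statistics
./cnvnator -root sample.root -stat 1000
# partition the read-depth signal
./cnvnator -root sample.root -partition 1000
# call CNVs
./cnvnator -root sample.root -call 1000 > sample_calls.txt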
Checking global work location
which-global-work
USER=ppomorsk which-global-work
Octopus instructions
Install clBLAS
module load openblas/0.2.20
module load cuda/8.0.44
module load boost/1.60.0
FC=ifort CC=icc CXX=icpc cmake .. -DNetlib_BLAS_LIBRARY=$EBROOTOPENBLAS -DNetlib_INCLUDE_DIRS=$EBROOTOPENBLAS/include -DCMAKE_INSTALL_PREFIX=~/clblasinstall -DBUILD_TEST:BOOL=OFF
make
make install
ln -s ~/clblasinstall/lib64 ~/clblasinstall/lib
Install clFFT
git clone https://github.com/clMathLibraries/clFFT.git
...
module load cuda/8.0.44
module load fftw/3.3.6
module load boost/1.60.0
FC=ifort CC=icc CXX=icpc cmake .. -DCMAKE_INSTALL_PREFIX=~/clfftinstall -DOpenCL_LIBRARY=$CUDA_HOME/lib64/libOpenCL.so
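Then build and install (standard CMake steps, assumed here to mirror the clBLAS build above):
make
make install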
Install Octopus
module load openblas/0.2.20
module load cuda/8.0.44
module load boost/1.60.0
module load libxc/3.0.0
module load fftw/3.3.6
module load gsl/2.3
./configure FC='ifort -mkl' CC='icc -mkl' CXX='icpc -mkl' --prefix=/home/ppomorsk/octopus-exec --enable-opencl --with-clblas-prefix=/home/ppomorsk/clblasinstall --with-clfft-prefix=/home/ppomorsk/clfftinstall
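After configure succeeds, build and install in the usual way, then run from a directory containing an Octopus input file (Octopus always reads a file named inp from the current directory; the run directory shown is a placeholder):
make
make install
# example run (placeholder directory; it must contain an 'inp' file)
cd ~/octopus-test
~/octopus-exec/bin/octopus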
OpenSees instructions
OpenSees (Open System for Earthquake Engineering Simulation) is a software framework for simulating the seismic response of structural and geotechnical systems. It has advanced capabilities for modeling and analyzing the nonlinear response of systems using a wide range of material models, elements, and solution algorithms. Website: opensees.berkeley.edu
The "svn" and "make" steps are likely to take a long time. To avoid having to stay logged in for many hours while these steps run, you can use the "screen" utility. Please see this page for instructions: FAQ:_Logging_in_to_Systems,_Transferring_and_Editing_Files#How_can_I_suspend_and_resume_my_session.3F.
The trunk version was tested on orca running CentOS 6 by preney@sharcnet.ca on March 24, 2017. The compile is done with the GCC compiler and the system TCL library.
Serial version
Starting in your home directory, run the following commands.
mkdir bin
mkdir lib
svn co svn://peera.berkeley.edu/usr/local/svn/OpenSees/trunk OpenSees
cd OpenSees
cp ./MAKES/Makefile.def.EC2-REDHAT-ENTERPRISE ./Makefile.def
Now edit Makefile.def so it has:
LINKFLAGS = -rdynamic
instead of
LINKFLAGS = -rdynamic -Wl
and also has:
CC++ = g++
CC = gcc
FC = gfortran
Then compile with:
module purge
module load gcc/5.1.0
make
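The build places the OpenSees executable in the ~/bin directory created above. A minimal sketch of running a Tcl model script (model.tcl is a placeholder):
~/bin/OpenSees model.tcl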
Parallel MPI version
Starting in your home directory, run the following commands.
mkdir bin
mkdir lib
svn co svn://peera.berkeley.edu/usr/local/svn/OpenSees/trunk OpenSees
cd OpenSees
cp ./MAKES/Makefile.def.EC2-REDHAT-ENTERPRISE ./Makefile.def
Now edit Makefile.def so it has:
PROGRAMMING_MODE = DISTRIBUTED_MPI
CC++ = mpiCC
CC = mpicc
FC = gfortran
and
LINKFLAGS = -rdynamic
instead of
LINKFLAGS = -rdynamic -Wl
Then compile with:
module unload intel
module unload openmpi
module load gcc/5.1.0
module load openmpi/gcc510-std/1.8.7
make
You need to perform the same module operations before running the program with MPI.
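A minimal sketch of a parallel run (the executable name depends on the build configuration; OpenSeesMP in ~/bin is assumed here, and model.tcl is a placeholder):
module unload intel
module unload openmpi
module load gcc/5.1.0
module load openmpi/gcc510-std/1.8.7
mpirun -np 4 ~/bin/OpenSeesMP model.tcl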