<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>simulation | Dr. Ravindra Shinde</title>
    <link>https://neelravi.com/tag/simulation/</link>
      <atom:link href="https://neelravi.com/tag/simulation/index.xml" rel="self" type="application/rss+xml" />
    <description>simulation</description>
    <generator>Wowchemy (https://wowchemy.com)</generator><language>en</language><copyright>© 2023 Dr. Ravindra Shinde</copyright><lastBuildDate>Sun, 21 Feb 2021 00:00:00 +0000</lastBuildDate>
    <image>
      <url>https://neelravi.com/media/icon_hucf8c2532d5d18a43662e10b6be4ed7fd_40234_512x512_fill_box_center_2.png</url>
      <title>simulation</title>
      <link>https://neelravi.com/tag/simulation/</link>
    </image>
    
    <item>
      <title>GPAW v20 installation on the comet cluster at XSEDE, San Diego</title>
      <link>https://neelravi.com/post/gpaw-xsede/</link>
      <pubDate>Sun, 21 Feb 2021 00:00:00 +0000</pubDate>
      <guid>https://neelravi.com/post/gpaw-xsede/</guid>
<description>&lt;p&gt;The following instructions will guide you through installing GPAW on the Comet cluster at the San Diego Supercomputer Center, an XSEDE resource.&lt;/p&gt;
&lt;p&gt;This will be an Intel Python-based installation; you may have to install Intel Python separately in your home directory. I will be installing an MPI-enabled version without ScaLAPACK.&lt;/p&gt;
&lt;p&gt;Make sure that you load the following modules:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;intel/2018.1.163&lt;/li&gt;
&lt;li&gt;mvapich2_ib/2.3.2&lt;/li&gt;
&lt;li&gt;gsl/2.5&lt;/li&gt;
&lt;li&gt;mkl/2018.1.163&lt;/li&gt;
&lt;li&gt;fftw/3.3.8&lt;/li&gt;
&lt;/ul&gt;
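&lt;p&gt;On Comet this can be done in one shot; a sketch, assuming these module names are still current on the cluster:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;module purge
module load intel/2018.1.163 mvapich2_ib/2.3.2 gsl/2.5 mkl/2018.1.163 fftw/3.3.8
module list   # confirm that all five modules are loaded
&lt;/code&gt;&lt;/pre&gt;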
&lt;p&gt;Copy the file &lt;code&gt;siteconfig_example.py&lt;/code&gt; to &lt;code&gt;siteconfig.py&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Make the necessary changes to the paths so that the final file looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;# Alternative: Intel compiler wrappers
# compiler = &#39;./icc.py&#39;
# mpicompiler = &#39;./icc.py&#39;
# mpilinker = &#39;MPICH_CC=gcc mpicc&#39;

compiler = &#39;gcc&#39;
mpicompiler = &#39;mpicc&#39;
mpilinker = &#39;mpicc&#39;
scalapack = True

library_dirs += [&#39;/opt/intel/oneapi/mkl/latest/lib/intel64&#39;]

include_dirs = [&#39;/opt/intel/oneapi/mpi/latest/include&#39;]

libraries = [&#39;mkl_rt&#39;, &#39;pthread&#39;, &#39;m&#39;, &#39;dl&#39;]

libraries += [&#39;xc&#39;]
# change this to your installation directory
LIBXCDIR=&#39;/home/neelravi/libxc/libxc-4.3.4/install/&#39;
library_dirs += [LIBXCDIR + &#39;lib&#39;]
include_dirs += [LIBXCDIR + &#39;include&#39;]

define_macros += [(&#39;GPAW_NO_UNDERSCORE_CBLACS&#39;, &#39;1&#39;)]
define_macros += [(&#39;GPAW_NO_UNDERSCORE_CSCALAPACK&#39;, &#39;1&#39;)]
define_macros += [(&#39;GPAW_ASYNC&#39;, 1)]
define_macros += [(&#39;GPAW_MPI2&#39;, 1)]


# FFTW3:
fftw = False
if fftw:
    libraries += [&#39;fftw3&#39;]
&lt;/code&gt;&lt;/pre&gt;
&lt;hr&gt;
&lt;p&gt;After making the necessary changes, install GPAW in your home directory:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;python3 setup.py install --user&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;This will compile the C code and copy the Python files to the appropriate locations.&lt;/p&gt;
&lt;p&gt;You may have to add the following environment variables to your &lt;code&gt;.bashrc&lt;/code&gt; file:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/user/libxc/libxc-4.3.4/install/lib
export PATH=/home/user/.local/bin:$PATH
export PYTHONPATH=/home/user/ase:/home/user/gpaw
&lt;/code&gt;&lt;/pre&gt;
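&lt;p&gt;As a quick sanity check after sourcing the updated &lt;code&gt;.bashrc&lt;/code&gt;, you can verify the import and inspect the compiled-in features (the &lt;code&gt;gpaw info&lt;/code&gt; command ships with GPAW):&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;source ~/.bashrc
python3 -c &#39;import gpaw; print(gpaw.__version__)&#39;
gpaw info   # reports libxc, FFTW, and ScaLAPACK support
&lt;/code&gt;&lt;/pre&gt;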
</description>
    </item>
    
    <item>
      <title>CP2K wrap atoms back to the home unit cell in VMD</title>
      <link>https://neelravi.com/post/vmd-tips/</link>
      <pubDate>Tue, 16 Jun 2020 00:00:00 +0000</pubDate>
      <guid>https://neelravi.com/post/vmd-tips/</guid>
<description>&lt;p&gt;The following commands in the VMD console wrap atoms from neighboring unit cells back into the home unit cell.&lt;/p&gt;
&lt;p&gt;Here &lt;code&gt;a&lt;/code&gt;, &lt;code&gt;b&lt;/code&gt;, and &lt;code&gt;c&lt;/code&gt; are the lattice constants in the x, y, and z directions.&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-tcl&#34;&gt;pbc set {a b c} -all
pbc box
pbc wrap -all
&lt;/code&gt;&lt;/pre&gt;
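&lt;p&gt;For example, for a cubic cell with a hypothetical lattice constant of 12.42 Å:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-tcl&#34;&gt;pbc set {12.42 12.42 12.42} -all
pbc box
pbc wrap -all
&lt;/code&gt;&lt;/pre&gt;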
</description>
    </item>
    
    <item>
      <title>Octopus TDDFT code installation on the comet cluster at XSEDE, San Diego</title>
      <link>https://neelravi.com/post/octopus-xsede/</link>
      <pubDate>Thu, 21 Nov 2019 18:46:00 +0000</pubDate>
      <guid>https://neelravi.com/post/octopus-xsede/</guid>
      <description>&lt;h2 id=&#34;load-modules&#34;&gt;Load Modules:&lt;/h2&gt;
&lt;p&gt;I chose the Intel compiler and Intel MPI to build the MPI version of Octopus on Comet (after the OS upgrade to Rocks 7).&lt;/p&gt;
&lt;p&gt;The following modules need to be loaded first:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;intel/2018.1.163&lt;/li&gt;
&lt;li&gt;intelmpi/2018.1.163&lt;/li&gt;
&lt;li&gt;gsl/2.5&lt;/li&gt;
&lt;li&gt;mkl/2018.1.163&lt;/li&gt;
&lt;/ul&gt;
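&lt;p&gt;A sketch of the corresponding commands, assuming these module names are still current on Comet:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;module purge
module load intel/2018.1.163 intelmpi/2018.1.163 gsl/2.5 mkl/2018.1.163
&lt;/code&gt;&lt;/pre&gt;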
&lt;h2 id=&#34;install-dependencieslibraries&#34;&gt;Install dependencies/libraries:&lt;/h2&gt;
&lt;h3 id=&#34;install-libxc-434-or-above&#34;&gt;Install libxc-4.3.4 or above&lt;/h3&gt;
&lt;p&gt;Download the latest libxc package and configure it with the following line:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;./configure --prefix=/home/neelravi CC=mpicc FC=mpif90 FCFLAGS=-O3 CFLAGS=-O3
make 
make install
make check
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;install-fftw-338-or-above&#34;&gt;Install fftw-3.3.8 or above&lt;/h3&gt;
&lt;p&gt;Do not use the pre-installed FFTW package. Download the latest FFTW release (I chose fftw-3.3.8) and configure it as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;./configure --prefix=/home/neelravi CC=mpicc CFLAGS=-O3 F77=mpif90 F77FLAGS=-O3
make 
make install
make check
&lt;/code&gt;&lt;/pre&gt;
&lt;h3 id=&#34;install-octopus-code&#34;&gt;Install octopus code&lt;/h3&gt;
&lt;p&gt;Download the stable version of Octopus (octopus 9.1 at the time of writing) and use the following configure script:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;./configure --prefix=$HOME \
  CC=mpicc FC=mpif90 CFLAGS=&amp;quot;-O3&amp;quot; FCFLAGS=&amp;quot;-O3&amp;quot; \
--enable-mpi --enable-openmp \
--with-blas=&amp;quot;-L${MKLROOT}/lib/intel64 -lmkl_rt -lpthread -lm -ldl&amp;quot; \
--with-libxc-prefix=$HOME \
--with-fftw-prefix=$HOME \
--with-blacs=&amp;quot;-L${MKLROOT}/lib/intel64 -lmkl_rt -lpthread -lm -ldl&amp;quot; \
--with-scalapack=&amp;quot;-L${MKLROOT}/lib/intel64 -lmkl_rt -lpthread -lm -ldl&amp;quot; \
--with-gsl-prefix=&amp;quot;/opt/gsl/2.5/intel&amp;quot;

make 
make install
make check
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The last command, &lt;code&gt;make check&lt;/code&gt;, should either pass or fail all the tests; if some pass and some fail, something is wrong with the installation. Note that a test related to glib may fail (glib is probably used when species are read from an image file); you may ignore it.&lt;/p&gt;
&lt;p&gt;Run a sample calculation on more than one node. The output file should report the parallelization scheme and the anticipated resource wastage.&lt;/p&gt;
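&lt;p&gt;Such a multi-node test can be sketched as a Slurm job script; the partition name, task counts, and output file below are placeholders, and the resource syntax may differ on your allocation:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;#!/bin/bash
#SBATCH --partition=compute        # placeholder partition name
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=24
module load intel/2018.1.163 intelmpi/2018.1.163 gsl/2.5 mkl/2018.1.163
# Octopus reads its input file &#39;inp&#39; from the current directory
mpirun -np 48 $HOME/bin/octopus &amp;gt; out.log
&lt;/code&gt;&lt;/pre&gt;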
&lt;p&gt;Your octopus is ready!&lt;/p&gt;
</description>
    </item>
    
    <item>
      <title>Octopus TDDFT code installation on the XC40 Cray supercomputer</title>
      <link>https://neelravi.com/post/octopus-cray/</link>
      <pubDate>Thu, 06 Jul 2017 13:00:00 +0000</pubDate>
      <guid>https://neelravi.com/post/octopus-cray/</guid>
      <description>&lt;h2 id=&#34;load-modules-in-the-prgenv-gnu&#34;&gt;Load Modules in the PrgEnv-gnu:&lt;/h2&gt;
&lt;p&gt;I managed to install Octopus in the GNU programming environment with the following settings.&lt;/p&gt;
&lt;p&gt;Modules loaded in PrgEnv-gnu:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;modules/3.2.10.5&lt;/li&gt;
&lt;li&gt;nodestat/2.2-1.0502.60539.1.31.ari&lt;/li&gt;
&lt;li&gt;sdb/1.1-1.0502.63652.4.25.ari&lt;/li&gt;
&lt;li&gt;alps/5.2.4-2.0502.9822.32.1.ari&lt;/li&gt;
&lt;li&gt;lustre-cray_ari_s/2.5_3.0.101_0.46.1_1.0502.8871.20.1-1.0502.21538.30.1&lt;/li&gt;
&lt;li&gt;udreg/2.3.2-1.0502.10518.2.17.ari&lt;/li&gt;
&lt;li&gt;ugni/6.0-1.0502.10863.8.29.ari&lt;/li&gt;
&lt;li&gt;gni-headers/4.0-1.0502.10859.7.8.ari&lt;/li&gt;
&lt;li&gt;dmapp/7.0.1-1.0502.11080.8.76.ari&lt;/li&gt;
&lt;li&gt;xpmem/0.1-2.0502.64982.5.3.ari&lt;/li&gt;
&lt;li&gt;hss-llm/7.2.0&lt;/li&gt;
&lt;li&gt;Base-opts/1.0.2-1.0502.60680.2.4.ari&lt;/li&gt;
&lt;li&gt;gcc/6.1.0&lt;/li&gt;
&lt;li&gt;craype/2.5.7&lt;/li&gt;
&lt;li&gt;nodehealth/5.1-1.0502.65826.9.1.ari&lt;/li&gt;
&lt;li&gt;cray-libsci/16.09.1&lt;/li&gt;
&lt;li&gt;pmi/5.0.10-1.0000.11050.0.0.ari&lt;/li&gt;
&lt;li&gt;atp/2.0.2&lt;/li&gt;
&lt;li&gt;PrgEnv-gnu/5.2.82&lt;/li&gt;
&lt;li&gt;pbs/12.2.404.152084&lt;/li&gt;
&lt;li&gt;cray-parallel-netcdf/1.7.0&lt;/li&gt;
&lt;li&gt;cray-netcdf-hdf5parallel/4.4.1&lt;/li&gt;
&lt;li&gt;cray-hdf5-parallel/1.8.16&lt;/li&gt;
&lt;li&gt;fftw/3.3.4.10&lt;/li&gt;
&lt;li&gt;craype-network-aries&lt;/li&gt;
&lt;li&gt;craype-haswell&lt;/li&gt;
&lt;li&gt;cce/8.5.3&lt;/li&gt;
&lt;li&gt;cray-shmem/7.4.3&lt;/li&gt;
&lt;li&gt;cray-mpich/7.4.3&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The &lt;code&gt;LD_LIBRARY_PATH&lt;/code&gt; contained the following directories:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;/opt/gcc/6.1.0/snos/lib64:/opt/cray/nodehealth/5.1-1.0502.65826.9.1.ari/lib64:/opt/cray/fftw/3.3.4.10/haswell/lib:/opt/cray/hdf5-parallel/1.8.16/GNU/5.1/lib:/opt/cray/netcdf-hdf5parallel/4.4.1/GNU/5.1/lib:/opt/cray/parallel-netcdf/1.7.0/GNU/5.1/lib:/opt/cray/pmi/5.0.10-1.0000.11050.0.0.ari/lib64:/opt/cray/libsci/16.09.1/GNU/5.1/x86_64/lib:/opt/cray/xpmem/0.1-2.0502.64982.5.3.ari/lib64:/opt/cray/dmapp/7.0.1-1.0502.11080.8.76.ari/lib64:/opt/cray/ugni/6.0-1.0502.10863.8.29.ari/lib64:/opt/cray/udreg/2.3.2-1.0502.10518.2.17.ari/lib64:/opt/cray/alps/5.2.4-2.0502.9822.32.1.ari/lib64&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;The configure script looks like this:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-csh&#34;&gt;set LD_LIBRARY_PATH=($LD_LIBRARY_PATH $CRAY_LD_LIBRARY_PATH)
setenv LIBS_BLAS /opt/cray/libsci/16.09.1/GNU/5.1/x86_64/lib/libsci_gnu_mpi_mp.a
setenv LIBS_LAPACK /opt/cray/libsci/16.09.1/GNU/5.1/x86_64/lib/libsci_gnu_mpi_mp.a
setenv LIBS_FFT /opt/cray/fftw/3.3.4.10/haswell/lib/libfftw3.a

./configure  --prefix=/mnt/lustre/mrc2/mrcravi/devel-basedir-test  \
--with-libxc-prefix=/mnt/lustre/mrc2/mrcravi/devel-basedir  \
--with-libxc-include=/mnt/lustre/mrc2/mrcravi/devel-basedir/include \
--with-gsl-prefix=/mnt/lustre/mrc2/mrcravi/devel-basedir \
--with-fftw-prefix=/opt/cray/fftw/3.3.4.10/haswell \
--enable-openmp --enable-mpi --host=cray

make -j 24
make install
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The above script is a minimum working example. I also managed to couple &lt;code&gt;etsf&lt;/code&gt;, &lt;code&gt;netcdf&lt;/code&gt;, &lt;code&gt;hdf5&lt;/code&gt;, etc. to Octopus.&lt;/p&gt;
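&lt;p&gt;On the XC40, jobs launch through PBS with &lt;code&gt;aprun&lt;/code&gt;. A minimal sketch; the queue name and core counts are placeholders, and the resource-request syntax varies between Cray sites:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;#!/bin/bash
#PBS -q workq                      # placeholder queue name
#PBS -l select=2:ncpus=24
cd $PBS_O_WORKDIR
# Octopus reads its input file &#39;inp&#39; from the current directory
aprun -n 48 /mnt/lustre/mrc2/mrcravi/devel-basedir-test/bin/octopus &amp;gt; out.log
&lt;/code&gt;&lt;/pre&gt;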
</description>
    </item>
    
    <item>
      <title>GPAW v20 installation on a workstation or desktop</title>
      <link>https://neelravi.com/post/gpaw-desktop/</link>
      <pubDate>Sat, 10 Jun 2017 00:00:00 +0000</pubDate>
      <guid>https://neelravi.com/post/gpaw-desktop/</guid>
<description>&lt;p&gt;I tried the following configuration for installing GPAW on a standalone machine with Python installed from miniconda2.&lt;/p&gt;
&lt;p&gt;Install OpenMPI with the Intel compilers to get &lt;code&gt;mpicc&lt;/code&gt;, etc.&lt;/p&gt;
&lt;p&gt;Install libxc in static mode with the &lt;code&gt;-fPIC&lt;/code&gt; flag:&lt;/p&gt;
&lt;p&gt;&lt;code&gt;./configure FC=ifort CC=icc --disable-shared --enable-static --prefix=/usr/local CFLAGS=-fPIC&lt;/code&gt;&lt;/p&gt;
&lt;p&gt;After that, we need the MKL BLAS, LAPACK, BLACS, and ScaLAPACK libraries. The following is the &lt;code&gt;customize.py&lt;/code&gt; file with the appropriate options:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-python&#34;&gt;extra_compile_args += [&#39;-fPIC&#39;]


libraries = [ &#39;xc&#39;, &#39;mkl_blas95_lp64&#39;, 
              &#39;mkl_lapack95_lp64&#39;, 
              &#39;mkl_scalapack_lp64&#39;,
              &#39;mkl_intel_lp64&#39;, 
              &#39;mkl_sequential&#39;,
              &#39;mkl_core&#39;, 
              &#39;mkl_blacs_openmpi_lp64&#39;,
              &#39;pthread&#39;, &#39;m&#39;, &#39;dl&#39;]

library_dirs += [&#39;/usr/local/lib&#39;, 
                 &#39;/opt/intel/composer_xe_2013.2.146/mkl/lib/intel64&#39;]
include_dirs += [&#39;/usr/local/include&#39;, 
                 &#39;/opt/intel/composer_xe_2013.2.146/mkl/include&#39;]

mpi_libraries = [&#39;mpi&#39;]
mpi_library_dirs += [&#39;/usr/local/lib&#39;]
mpi_include_dirs += [&#39;/usr/local/include&#39;]

compiler = &#39;mpicc&#39;
mpicompiler = &#39;mpicc&#39;  
mpilinker = &#39;mpicc&#39;

scalapack = True

if scalapack:
    define_macros += [(&#39;GPAW_NO_UNDERSCORE_CBLACS&#39;, &#39;1&#39;)]
    define_macros += [(&#39;GPAW_NO_UNDERSCORE_CSCALAPACK&#39;, &#39;1&#39;)]

# - static linking:
if 0:
    include_dirs += [&#39;/usr/local/include&#39;]
    extra_link_args += [&#39;/usr/local/lib/libxc.a&#39;]
    if &#39;xc&#39; in libraries:
        libraries.remove(&#39;xc&#39;)


# Build MPI-interface into _gpaw.so:
if 0:
    compiler = &#39;mpicc&#39;
    define_macros += [(&#39;PARALLEL&#39;, &#39;1&#39;)]
    mpicompiler = &#39;mpicc&#39;
&lt;/code&gt;&lt;/pre&gt;
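&lt;p&gt;With &lt;code&gt;customize.py&lt;/code&gt; in place, the build and install follow the usual GPAW pattern; a sketch:&lt;/p&gt;
&lt;pre&gt;&lt;code class=&#34;language-bash&#34;&gt;python setup.py build_ext 2&amp;gt;&amp;amp;1 | tee build.log   # watch for missing-library errors
python setup.py install --user
python -c &#39;import gpaw; print(gpaw.__file__)&#39;   # confirm the install is picked up
&lt;/code&gt;&lt;/pre&gt;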
</description>
    </item>
    
  </channel>
</rss>
