
mpi/parallel version of elegant

Posted: 17 Mar 2009, 10:06
by simone.dimitri
Hi,
is there a compiled version of Pelegant for 32-bit Linux?
I am planning to deploy Pelegant on an MPI cluster for a preliminary test. Is there any relevant information, and are there any known limitations?
thanks in advance,
Simone

Re: mpi/parallel version of elegant

Posted: 17 Mar 2009, 10:45
by soliday
We don't post Pelegant binaries for download because there are too many variables; we would end up needing to post many different versions.
Variables include:
Version of Linux (this can be mitigated by building statically)
Version of MPI (we prefer MPICH2 or MVAPICH2)
Once built against a version of MPI it will only run using that MPI implementation.
On our 32bit cluster our MPI implementation is:

Version: 1.0.5
Device: ch3:nemesis
Configure Options: '--prefix=/disk1/mpich2_smpd_nemesis' '--with-pm=smpd'
'--with-pmi=smpd' '--enable-totalview' '--with-device=ch3:nemesis'
'--enable-fast' '--enable-mpe'
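
For reference, this is the kind of summary the mpich2version utility prints; if you are unsure how an existing MPICH2 installation was configured, you can check it yourself (assuming the MPICH2 bin directory is on your PATH):

# Print the MPICH2 version, device, and configure options
mpich2version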

The PM/PMI and device settings in the configure options are important. mpd is probably easier to set up than smpd, and ch3:nemesis is recommended for multi-core CPUs. A sketch of such a build is given below.
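
For example, a configure invocation along the lines below should give an MPD-based build with the ch3:nemesis device (a minimal sketch: the tarball version, install prefix, and paths are placeholders, so check them against your own MPICH2 source and documentation):

# Minimal sketch: build MPICH2 with the mpd process manager and ch3:nemesis device
# (the version and install prefix below are placeholders)
tar xzf mpich2-1.0.5.tar.gz
cd mpich2-1.0.5
./configure --prefix=$HOME/mpich2 --with-pm=mpd --with-device=ch3:nemesis --enable-fast
make
make install
# Put the MPICH2 tools (mpicc, mpiexec, mpd, ...) on the PATH
export PATH=$HOME/mpich2/bin:$PATH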

Re: mpi/parallel version of elegant

Posted: 17 Mar 2009, 10:59
by michael_borland
Simone,

The Pelegant manual discusses limitations and also gives guidance on building Pelegant:
http://www.aps.anl.gov/Accelerator_Syst ... nt_manual/

--Michael

Re: mpi/parallel version of elegant

Posted: 21 Mar 2009, 09:42
by ywang25
To limit the number of executable versions we would have to provide for Pelegant, we can focus first on one of the most popular MPI implementations (e.g., MPICH2) with its default configuration and daemon, and build Pelegant statically. Once we have the same MPI setup, a compiled version of Pelegant should be distributable.

A script to install MPICH2 with the default configuration is attached. It installs the most recent release of MPICH2 (1.1) with the MPD daemon under the user's current directory (no root privilege required). See http://www.mcs.anl.gov/research/projects/mpich2/documentation/files/mpich2-1.1-userguide.pdf
for information about running an MPI program with the MPD daemon.
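
As a rough sketch of what the MPD side of that setup looks like (the host count and file names below are placeholders; the user guide linked above is the authoritative reference):

# One-time setup: mpd requires a secret word file in your home directory
echo "MPD_SECRETWORD=change_this" > ~/.mpd.conf
chmod 600 ~/.mpd.conf
# Start an MPD ring on the hosts listed in mpd.hosts (one hostname per line)
mpdboot -n 4 -f mpd.hosts
# Verify that the ring is up
mpdtrace
# ... launch MPI jobs with mpiexec ...
# Shut the ring down when finished
mpdallexit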

Once the setup is done, running a compiled Pelegant should be very similar to running elegant.
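
For instance, where a serial job is started with elegant, the parallel job is launched through mpiexec (the input file name and process count here are placeholders):

# Serial run, for comparison; run.ele is a placeholder command file
elegant run.ele
# Parallel run of the same command file on 8 processes
mpiexec -n 8 Pelegant run.ele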

The Pelegant in the current release uses serial I/O, which could be a big bottleneck (in both communication and I/O operations) if you use watch points frequently. It also makes inefficient use of memory, especially for a very large number of particles. Also, any scalability test of Pelegant should be based on a proper workload (i.e., it can't be too small); 100,000 particles might be a good starting point. A new version with parallel I/O is under test, and we may be able to provide a compiled version of Pelegant for the next release.

Yusong