mpi/parallel version of elegant

simone.dimitri
Posts: 46
Joined: 09 Jun 2008, 01:19

mpi/parallel version of elegant

Post by simone.dimitri » 17 Mar 2009, 10:06

Hi,
is there a compiled version of Pelegant for 32bit linux?
I am planning to deploy (as a preliminary test) Pelegant on an MPI cluster. Is there any relevant information, or are there known limitations?
thanks in advance,
Simone

soliday
Posts: 390
Joined: 28 May 2008, 09:15

Re: mpi/parallel version of elegant

Post by soliday » 17 Mar 2009, 10:45

We don't post Pelegant binaries for download because there are too many variables; we would end up needing to post a great many versions.
The variables include:
Version of Linux (this can be mitigated by building statically)
Version of MPI (we prefer MPICH2 or MVAPICH2)
Once built against one MPI implementation, Pelegant will only run with that implementation.
On our 32-bit cluster, the MPI installation is:

Version: 1.0.5
Device: ch3:nemesis
Configure Options: '--prefix=/disk1/mpich2_smpd_nemesis' '--with-pm=smpd'
'--with-pmi=smpd' '--enable-totalview' '--with-device=ch3:nemesis'
'--enable-fast' '--enable-mpe'

The PM/PMI and device settings in the configure options are important. mpd is probably easier to set up than smpd, and ch3:nemesis is recommended for multi-core CPUs.
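For illustration, a from-source MPICH2 build using the mpd process manager and the ch3:nemesis device might look something like the sketch below. The version number and install prefix are placeholders, not our exact setup; adjust them to match the release you download.

```shell
# Hypothetical build of MPICH2 with the mpd process manager and the
# ch3:nemesis device, installed under the user's home directory
# (no root privilege required). Version and paths are placeholders.
tar xzf mpich2-1.0.5.tar.gz
cd mpich2-1.0.5
./configure --prefix=$HOME/mpich2_nemesis \
            --with-pm=mpd \
            --with-device=ch3:nemesis \
            --enable-fast
make && make install

# Put the new mpiexec/mpd binaries on your PATH:
export PATH=$HOME/mpich2_nemesis/bin:$PATH
```

Pelegant then needs to be built against this same installation, since (as noted above) it will only run with the MPI implementation it was linked against.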

michael_borland
Posts: 1927
Joined: 19 May 2008, 09:33
Location: Argonne National Laboratory
Contact:

Re: mpi/parallel version of elegant

Post by michael_borland » 17 Mar 2009, 10:59

Simone,

The Pelegant manual discusses limitations and also gives guidance on building Pelegant:
http://www.aps.anl.gov/Accelerator_Syst ... nt_manual/

--Michael

ywang25
Posts: 52
Joined: 10 Jun 2008, 19:48

Re: mpi/parallel version of elegant

Post by ywang25 » 21 Mar 2009, 09:42

To limit the number of executable versions we have to provide for Pelegant, we can focus first on one of the most popular MPI implementations (e.g., MPICH2) with its default configuration and daemon, and build Pelegant statically. Given the same MPI setup, a compiled version of Pelegant should then be distributable.

A script to install MPICH2 with the default configuration is attached. It installs the most recent release of MPICH2 (1.1) with the MPD daemon under the user's current directory (no root privilege required). See http://www.mcs.anl.gov/research/projects/mpich2/documentation/files/mpich2-1.1-userguide.pdf
for information about running an MPI program with the MPD daemon.

Once the setup is done, running a compiled Pelegant is very similar to running elegant.
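As a sketch, assuming an MPD-based MPICH2 installation as above, a run might look like the following. The hostfile path, process counts, and input file name are placeholders.

```shell
# Hypothetical Pelegant run under MPICH2's mpd process manager.
# ~/mpd.hosts lists one cluster node per line; run.ele is the
# elegant input file (both are placeholders).
mpdboot -n 4 -f ~/mpd.hosts     # start mpd daemons on 4 hosts
mpdtrace                        # verify the daemons are up
mpiexec -n 16 Pelegant run.ele  # run Pelegant on 16 processes
mpdallexit                      # shut the daemons down when finished
```

Apart from the mpiexec wrapper and daemon management, the command line is the same as for serial elegant.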

The Pelegant in the current release uses serial I/O, which can be a significant bottleneck (in both communication and I/O operations) if you use watch points frequently. It also uses memory inefficiently, especially for very large numbers of particles. Note that any scalability test of Pelegant should use an appropriately sized workload (i.e., it can't be too small); 100,000 particles is a reasonable starting point. A new version with parallel I/O is under test; we may be able to provide a compiled version of Pelegant with the next release.

Yusong
Attachments
install_mpi.gz
Just rename install_mpi.gz to install_mpi.sh and run it. Since .sh attachments are not accepted, I had to upload it under a different name.
