/dev/null can be used on macOS as well.
NUL can be used on Windows.
Yusong
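As a quick illustration of the tip above, here is the redirection in a shell, using echo as a stand-in for any command whose printout you want to suppress (the commands shown are generic examples, not from this thread):

```shell
# Suppress a command's terminal printout by redirecting stdout to the
# null device. macOS/Linux provide /dev/null; Windows cmd.exe uses NUL.
echo "this line is discarded" > /dev/null

# Redirect stderr as well to silence everything the command prints:
echo "also discarded" > /dev/null 2>&1

# The Windows cmd.exe equivalent would be:  some_command > NUL
```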
Search found 52 matches
- 16 Nov 2010, 09:27
- Forum: General
- Topic: no printout in terminal
- Replies: 2
- Views: 3989
- 03 Nov 2010, 15:07
- Forum: Parallel Elegant
- Topic: Is CWiggler element parallelizable
- Replies: 12
- Views: 24600
Re: Is CWiggler element parallelizable
Great to know you got super-linear speedup with Pelegant.
Yusong
Yusong
- 03 Nov 2010, 12:32
- Forum: Parallel Elegant
- Topic: Is CWiggler element parallelizable
- Replies: 12
- Views: 24600
Re: Is CWiggler element parallelizable
There is not much performance difference after enabling the MASTER_READTITLE_ONLY flag on the clusters I used. I changed it to the default for a future release in case other users have the same problem on a particular system. There could be a small negative impact if the network between compute nodes is slo...
- 03 Nov 2010, 08:22
- Forum: Parallel Elegant
- Topic: Is CWiggler element parallelizable
- Replies: 12
- Views: 24600
Re: Is CWiggler element parallelizable
It depends on which file system you used. I tested with the default configuration on both the Lustre and GPFS parallel file systems. The time spent on the I/O part should be trivial for this test. You can comment out the "track" command and find out how much time is spent on I/O before and after you enable DMAS...
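For readers unfamiliar with the suggestion, commenting out the track command in the elegant command file skips tracking so the remaining run time is dominated by setup and I/O. A hypothetical `.ele` excerpt (file and lattice names are placeholders, not from this thread):

```
! Hypothetical excerpt from an elegant command file (.ele).
&run_setup
        lattice = "ring.lte",
&end

! Comment out the track command to time everything except tracking:
! &track &end
```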
- 02 Nov 2010, 16:40
- Forum: Parallel Elegant
- Topic: Is CWiggler element parallelizable
- Replies: 12
- Views: 24600
Re: Is CWiggler element parallelizable
I did a performance study on another cluster, which does not share CPUs between different jobs, and the performance is very good. More than 95% of the time is spent on the CWIGGLER element.
Pelegant (8 cores, 7 cores for tracking) 00:04:10
elegant 00:28:34
Yusong
- 02 Nov 2010, 15:03
- Forum: Parallel Elegant
- Topic: Is CWiggler element parallelizable
- Replies: 12
- Views: 24600
Re: Is CWiggler element parallelizable
Wanming, I tested your input files with 1000 and 10000 particles, respectively, on 10 CPU cores. Pelegant is faster in both cases, although the speedup is not linear. For the simulation with 10000 particles, Pelegant finished in 00:07:12 with 10 cores (9 cores for tracking), while elegant took 00:30:36. Do you ha...
- 01 Nov 2010, 18:28
- Forum: Parallel Elegant
- Topic: Is CWiggler element parallelizable
- Replies: 12
- Views: 24600
Re: Is CWiggler element parallelizable
wl, could you post or send your input files so we can do a performance analysis? One tip for getting reasonable performance is to run the simulation with a relatively large number of particles. Please check the reference if you want all 4 cores to track particles (n_cores-1 will be used for tracking by...
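To make the core bookkeeping concrete: with n_cores MPI processes, one process acts as master, so n_cores - 1 processes do the tracking. A small sketch of that arithmetic (the mpirun launch line in the comment is a hypothetical example, not taken from this thread):

```shell
# A typical (hypothetical) Pelegant launch on 4 cores might look like:
#   mpirun -np 4 Pelegant run.ele
# One process is the master, so n_cores - 1 processes track particles.
cores=4
tracking_cores=$((cores - 1))
echo "tracking cores: $tracking_cores"
```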
- 30 Sep 2010, 08:40
- Forum: Parallel Elegant
- Topic: Implementation of parallel I/O
- Replies: 6
- Views: 13849
Re: Implementation of parallel I/O
"The reason is the last process to close the file overwrites the updates of all the other processes on AFS." AFS might not be designed for parallel file operations, unless there is a new version that supports a parallel file system. An easy way to try it is to use the I/O examples distributed with your MP...
- 29 Sep 2010, 11:38
- Forum: Parallel Elegant
- Topic: Implementation of parallel I/O
- Replies: 6
- Views: 13849
Re: Implementation of parallel I/O
Here is the list of the MPI functions used in Pelegant:
- MPI_File_open
- MPI_File_write
- MPI_File_set_view
- MPI_File_close
- MPI_File_seek
- MPI_File_read
- MPI_File_read_all
- MPI_File_sync
- MPI_File_set_size
- MPI_File_write_at
- MPI_File_write_all
For information about the supported file systems, the following lin...
- 27 Sep 2010, 14:41
- Forum: Parallel Elegant
- Topic: Implementation of parallel I/O
- Replies: 6
- Views: 13849
Re: Implementation of parallel I/O
Pelegant integrates parallel SDDS I/O for parallel file operations. The parallel SDDS I/O is implemented on top of the MPI-IO library, which usually comes with an MPI distribution. More technical details about how the parallel I/O was implemented in Pelegant can be found in the attached paper: "Parallel...