Yuantao,
There is another file (attached) under the elegant directory that needs to be replaced. The problem you experienced can be reproduced with a somewhat larger number of processors; I first tried a very small number of processors (4) and it worked fine.
Yusong
Search found 52 matches
- 01 Jun 2011, 16:27
- Forum: Parallel Elegant
- Topic: CLEAN in Pelegant
- Replies: 13
- Views: 100583
- 01 Jun 2011, 13:43
- Forum: Parallel Elegant
- Topic: CLEAN in Pelegant
- Replies: 13
- Views: 100583
Re: CLEAN in Pelegant
You do need to recompile the code. The fix I just provided should only affect the watch-point output file. I will provide a better fix soon.
Yusong
- 01 Jun 2011, 13:15
- Forum: Parallel Elegant
- Topic: CLEAN in Pelegant
- Replies: 13
- Views: 100583
Re: CLEAN in Pelegant
Yuantao, You can use the attached file to replace the file under the elegant directory for version 24.0 to fix the charge issue. The problem is caused by delaying the update of the total particle number across all the processors to minimize communication overhead. The incorrect charge happens ...
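The cause described above — a cached global particle count that is not refreshed immediately after particles are lost, so a charge computed from it is briefly wrong — can be illustrated with a small sketch. This is a conceptual simulation only: the function and variable names are hypothetical, and in Pelegant the global total would come from an MPI reduction (e.g. MPI_Allreduce), not a Python sum.

```python
# Conceptual sketch of the stale-count charge bug (hypothetical names,
# not elegant's actual code). "Processors" are simulated as entries in a
# list of local particle counts.

def total_particles(local_counts):
    """Stand-in for the cross-processor reduction of particle counts."""
    return sum(local_counts)

def charge_per_particle(total_charge, global_count):
    """Per-particle charge derived from a global particle count."""
    return total_charge / global_count

total_charge = 1.0e-9                  # 1 nC bunch shared by all particles
local_counts = [250, 250, 250, 250]    # 4 simulated processors
cached_total = total_particles(local_counts)   # 1000 particles, cached

# Some particles are lost on two processors, but the cached global total
# is deliberately not updated yet (to save communication):
local_counts = [250, 240, 250, 230]    # true total is now 970

stale = charge_per_particle(total_charge, cached_total)                   # divides by 1000
fresh = charge_per_particle(total_charge, total_particles(local_counts))  # divides by 970

# The stale value under-counts the per-particle charge until the cached
# total is resynchronized across processors.
print(stale, fresh)
```

The fix described in the thread amounts to forcing the synchronization before any output that depends on the total charge, trading a little communication for correctness.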
- 31 May 2011, 08:53
- Forum: Parallel Elegant
- Topic: CLEAN in Pelegant
- Replies: 13
- Views: 100583
Re: CLEAN in Pelegant
Yuantao, I realized you were using the AFS file system. Please be aware that AFS is not listed as a supported file system for parallel I/O, which the latest version of Pelegant requires. You can see previous discussions at: https://www.aps.anl.gov/Accelerator_Systems_Division/Operations_Analysi...
- 31 May 2011, 08:34
- Forum: Parallel Elegant
- Topic: CLEAN in Pelegant
- Replies: 13
- Views: 100583
Re: CLEAN in Pelegant
Yuantao, Can you upload the wake files: ZWAKEFILE="/afs/slac.stanford.edu/u/rl/ding/elegant/wakefiles/Sz_p05um_10mm.sdds", & TRWAKEFILE="/afs/slac.stanford.edu/u/rl/ding/elegant/wakefiles/Sx_p05um_10mm.sdds", & ZWAKEFILE="/afs/slac.stanford.edu/u/rl/ding/elegant/wakefiles/Sz_p05um_10mm.sdds", & TRWAK...
- 11 May 2011, 08:56
- Forum: Parallel Elegant
- Topic: MPI_Barrier
- Replies: 4
- Views: 11629
Re: MPI_Barrier
Joel, I tested your example on two clusters, both with Red Hat 4.1.2 installed. One test used MVAPICH2 1.4.0rc1 with an InfiniBand network and the other used MPICH2 version 1.2.1. The problem you described didn't show up in either case. I also used a memory debugger and it didn't report any problem for th...
- 06 May 2011, 16:06
- Forum: Parallel Elegant
- Topic: MPI_Barrier
- Replies: 4
- Views: 11629
Re: MPI_Barrier
Joel,
It appears the memory is not handled properly somewhere in the code.
Can you check the file you uploaded? I downloaded and got 0 byte.
Thanks,
Yusong
- 02 May 2011, 13:31
- Forum: Parallel Elegant
- Topic: SCRIPT causing hangs
- Replies: 8
- Views: 23741
Re: SCRIPT causing hangs
Joel, Does the example "http://stanford.edu/~joelfred/drift.tar.gz" have the same file I/O operation pattern as the application you plan to run? If so, I can use it as an example to develop a solution for this case. It will be something like this: Pelegant dumps an output file in parallel -> proces...
- 02 May 2011, 09:11
- Forum: Parallel Elegant
- Topic: SCRIPT causing hangs
- Replies: 8
- Views: 23741
Re: SCRIPT causing hangs
Yusong, Thanks so much for your explanation, particularly about NFS. I may still be a little confused. When you say that I/O in the script could make it complicated to write to the same file with multiple processors, does that mean it's possible for the SCRIPT element to get called more than once f...
- 26 Apr 2011, 08:13
- Forum: Parallel Elegant
- Topic: SCRIPT causing hangs
- Replies: 8
- Views: 23741
Re: SCRIPT causing hangs
Joel, The SCRIPT element is not fully supported in the current version of Pelegant, primarily because scripts can be used in too many different ways during a simulation, and each way needs to be handled specifically. If you have I/O operations in your script, it could make it more complicated ...
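The hazard discussed in the last two posts — a SCRIPT whose file I/O would run on every processor at once — can be sketched as follows. This is a hypothetical illustration, not Pelegant's actual SCRIPT handling: one common pattern is to guard the external command so only a single rank executes it, with the other ranks waiting at a barrier before touching its output. Here ranks are simulated sequentially in plain Python instead of using real MPI calls.

```python
# Conceptual sketch: invoke an external script on one "rank" only, so
# multiple processors never write the same file concurrently. A real MPI
# program would use MPI_Comm_rank and MPI_Barrier; ranks are simulated
# here for illustration.

invocations = []

def run_script(rank):
    """Stand-in for launching the user's SCRIPT element command."""
    invocations.append(rank)

def script_element_step(rank):
    # Only rank 0 runs the script; other ranks skip straight to the
    # synchronization point, avoiding concurrent writes.
    if rank == 0:
        run_script(rank)
    # ...all ranks would block at an MPI_Barrier here before reading
    # the script's output file...

n_ranks = 8
for rank in range(n_ranks):
    script_element_step(rank)
```

Under this pattern the script runs exactly once per SCRIPT step regardless of how many processors the job uses, which is one way to sidestep the multi-writer problem described above.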