Problems on running Pelegant with FTRFMODE
Hello,
I encountered problems when trying to run simulations with FTRFMODE elements using Pelegant.
The attached example ran fine with the serial version of Elegant, but it crashed when I used Pelegant.
The total length of a single bunch is about 2 ps, so I set BIN_SIZE=0.3 ps and N_BINS=300 to cover the whole bunch.
If this set of parameters runs fine with the serial version, what would cause it to crash with Pelegant?
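For reference, a minimal sketch of the element definition (the element name and mode-table filename are placeholders for my actual lattice):

    FTM1: FTRFMODE, FILENAME="modes.sdds", BIN_SIZE=3e-13, N_BINS=300

With these settings the bins span 300 × 0.3 ps = 90 ps, which comfortably covers the ~2 ps bunch.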
Thanks.
Re: Problems on running Pelegant with FTRFMODE
This runs fine for me. If this is the same issue that Bob Soliday was looking into for you, the files he forwarded to me had BIN_SIZE=0 on FTRFMODE, which caused a crash in the parallel version. In the serial version, it didn't crash but the element was non-functional. The manual incorrectly states that BIN_SIZE=0 will result in automatic setting of the bin size. We'll fix that and add a check in the next release.
--Michael
Re: Problems on running Pelegant with FTRFMODE
Hello Michael,
This is a different issue.
For this example, I set BIN_SIZE=3e-13 and N_BINS=300.
If it runs fine for you, then something must be wrong with the Pelegant build I have here.
Here's the error message:
[7] fatal error
Fatal error in MPI_Allreduce: Invalid count, error stack:
MPI_Allreduce(sbuf=0x0000017969614DF0, rbuf=0x0000000000000000, count=-199, MPI_DOUBLE, MPI_SUM, comm=0x84000000) failed
Negative count, value is -199
---- error analysis -----
[1-7] on PC99258
mpi has detected a fatal error and aborted Pelegant
Re: Problems on running Pelegant with FTRFMODE
Wei-Hou,
That's a suspicious-looking error. The negative count value shouldn't happen. I'll do some more checking.
--Michael
Re: Problems on running Pelegant with FTRFMODE
Wei-Hou,
This is an issue related to an unusual combination of settings: RFMODE elements are not compatible with CHANGE_T=1 on the RFCA elements. Since RFMODE elements for parasitic modes have essentially random frequencies, they are not harmonic with the main rf cavities. As the RFCA manual says:

    N.B.: Do not use CHANGE_T=1 if you have rf cavities that are not at harmonics of one another or if you have other time-dependent elements that are not resonant. Also, if you have harmonic cavities, only use CHANGE_T on the cavity with the lowest frequency.
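In lattice terms, the problematic combination looks something like this (a sketch with placeholder names and values, not your actual lattice):

    ! main cavity with CHANGE_T=1 -- incompatible with the RFMODE below
    RF1:  RFCA, L=0, VOLT=1.5e6, FREQ=352e6, PHASE=90, CHANGE_T=1
    HOM1: RFMODE, FREQ=508.3e6, Q=4000, RS=1e6  ! parasitic mode, not harmonic with RF1

    ! fix: turn off CHANGE_T on the cavity
    RF1:  RFCA, L=0, VOLT=1.5e6, FREQ=352e6, PHASE=90, CHANGE_T=0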
As for why it seemed to work with the serial version and not the parallel version, that's just luck and the results were not valid.
This kind of problem comes up fairly frequently, so I'll try to add some warnings. Even I sometimes forget about this.
--Michael
Re: Problems on running Pelegant with FTRFMODE
Wei-Hou,
Another thing I noticed in your lattice file is that you have bunched-beam mode for the TRFMODE element but not for the WAKE or TRWAKE elements. I assume you want bunched-beam mode for everything.
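For example (a sketch; the element names, mode parameters, and wake filenames are placeholders), the flag would be set on all of these element types:

    TM1: TRFMODE, FREQ=1.3e9, Q=1e4, RS=1e6, BUNCHED_BEAM_MODE=1
    W1:  WAKE,   INPUTFILE="zwake.sdds", TCOLUMN="t", WCOLUMN="W", BUNCHED_BEAM_MODE=1
    TW1: TRWAKE, INPUTFILE="twake.sdds", TCOLUMN="t", WXCOLUMN="Wx", WYCOLUMN="Wy", BUNCHED_BEAM_MODE=1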
--Michael
Re: Problems on running Pelegant with FTRFMODE
Hello Michael,
After applying the changes you mentioned, Pelegant now works with FTRFMODE and a single bunch.
However, it still crashes when I run Pelegant with FTRFMODE and multiple bunches, with the same error messages.
It crashes immediately once the simulation reaches the first FTRFMODE element.
By the way, I am testing Pelegant on Windows (as I am still having trouble compiling Pelegant on clusters).
Could this be a Windows-specific problem?
Thank you.
Re: Problems on running Pelegant with FTRFMODE
Hello Michael,
I was able to figure out the cause of my problem.
Here's the simulation workflow:
1. Run the single-bunch simulation (the script elegant.ele) to generate the initial bunch and the energy profile.
2. Run the multi-bunch simulation (duplicating the bunch from step 1).
Somehow, if I run step 1 with Pelegant, the generated bunch file does not have the IDSlotsPerBunch parameter, which then causes problems in the multi-bunch simulation.
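(A quick way to check, assuming the bunch file is named bunch.out, is sddsprintout, which shows whether the parameter made it into the file:

    sddsprintout -parameters=IDSlotsPerBunch bunch.out

)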
Here are my questions:
What causes IDSlotsPerBunch to be missing from the Pelegant output but not from the serial version's?
Should I run a zero-length drift tracking just to generate the bunch, and then run the other simulations?
Thank you.
Re: Problems on running Pelegant with FTRFMODE
Wei-Hou,
There seems to be a bug in Pelegant where the bunch output from &bunched_beam gets 0 for the IDSlotsPerBunch parameter. I'll look into it.
Using a zero-length drift lattice to get output via &run_setup's output parameter is a good workaround. Also, there are scripts called generateBunch and generateBunchTrain that might be helpful.
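The workaround could look something like this (a sketch; filenames, energy, and bunch parameters are placeholders for your actual settings). The lattice is just a zero-length drift:

    ! bunchgen.lte
    D0: DRIF, L=0
    BL: LINE=(D0)

and the command file tracks once through it, writing the bunch via &run_setup's output parameter:

    ! bunchgen.ele
    &run_setup
        lattice = bunchgen.lte,
        use_beamline = BL,
        p_central_mev = 3000,
        output = %s.out,
    &end
    &bunched_beam
        n_particles_per_bunch = 100000,
        ! ... your usual bunch parameters ...
    &end
    &track &end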
--Michael