Dear all,
I really like working with (P)elegant and SDDS; they are great, flexible, and fast tools. Currently I am wondering whether I am reaching a limit, but maybe I am just using the tools the wrong way:
I am working with some SDDS commands to get an FFT of coordinates (e.g. x(t) or p(t)) recorded at several watch elements around a ring:
1. sddscombine -merge
2. sddsprocess -filter,col,particleID,[...]
3. sddssort -column=t
4. sddsfft -column=t,[...]
It works fine, but I would like to use large data sets like
a bunch of 1,000 particles, 10,000 turns, 30 watch elements.
The tracking is really fast with Pelegant (12 cores), but the analysis takes a long time. I played around with steps 1-4 and found that the most time-consuming step is the -merge option of sddscombine**.
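For concreteness, here is the whole chain as one pipe. This is only a sketch: the watch-file names (watch-*.w), the output name (fft.sdds), the dependent column x, and the particleID value 100 are placeholders from my setup.
sddscombine watch-*.w -pipe=out -merge | sddsprocess -pipe -filter=column,particleID,100,100 | sddssort -pipe -column=t | sddsfft -pipe=in fft.sdds -column=t,x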
Is there a faster way to prepare the file for the FFT?
Best regards,
Jan
** The fastest way seems to be executing sddscombine without -merge first (to combine the watch files), then filtering a single particleID (step 2), and then doing sddscombine -merge. But it still takes about 30 h for 1,000 particles (without the initial sddscombine).
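Spelled out with the same placeholder names as above, that workaround looks like this:
sddscombine watch-*.w all-watch.sdds
sddsprocess all-watch.sdds one-particle.sdds -filter=column,particleID,100,100
sddscombine one-particle.sdds merged.sdds -merge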
Re: sddscombine -merge for FFT
Jan,
I think that you want to do the FFT for each particle, is that right?
If so, the best way I can think of is
sddscombine watch.out -pipe=out -merge | sddssort -pipe -column=particleID -column=Pass | sddsbreak -pipe -change=particleID | sddsfft -pipe=in watch.fft -column=Pass,x
With 1,000 particles and 10k turns, there is a lot of data. You can reduce the amount of data by removing columns that are not interesting, e.g.,
w1: watch,longit_data=0,exclude_slopes=1,...
would remove (t, p, xp, yp) from the files, making them much smaller.
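For example, a complete watch definition with these flags might look like this (the element name and file name are placeholders):
w1: watch,filename="w1.sdds",longit_data=0,exclude_slopes=1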
Another option is to use the &frequency_map command, which gives you the tune as a function of x and y.
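A minimal sketch of such a run, to go in the elegant command file alongside the usual &run_setup and &run_control commands (the scan ranges and grid sizes here are only placeholders):
&frequency_map
        output = %s.fma,
        xmin = 1e-6, xmax = 0.02, nx = 21,
        ymin = 1e-6, ymax = 0.01, ny = 21,
        verbosity = 1,
&end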
--Michael
Re: sddscombine -merge for FFT
Dear Michael,
Thank you for your suggestions.
Reducing the amount of data by removing columns seems to be the most effective way to shorten the calculation time. I had tried it before with sddsconvert -delete because I had overlooked the useful watch parameters "exclude_slopes" etc.
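For completeness, the sddsconvert variant I had used looks like this (the file names are placeholders):
sddsconvert watch.out watch-small.out -delete=column,t,p,xp,yp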
Thank you.
Jan