Bug fix for SDDS MPI IO on GPFS file systems
Posted: 03 Apr 2018, 14:41
Recently we found that SDDS MPI IO file writes have a problem on GPFS file systems (the Blues cluster, for example) but work fine on Lustre file systems.
The change is to set defaultBufferSize to 0 (it was 4 MB) for the MPI IO write buffer in SDDS_MPI_binary.c. Only one source file, SDDS_MPI_binary.c, is changed. Update it and recompile SDDSlib, or make the following change in SDDS_MPI_binary.c.
Change this line:
static int32_t defaultBufferSize = 4000000;
to
static int32_t defaultBufferSize = 0;
This change did not affect performance on Lustre, but it fixed the problem on GPFS. It is possible that buffering is not needed for MPI IO at all; we will remove the buffering code if no problems emerge in the future.
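For reference, below is a minimal sketch of the two write paths that this setting selects. It is not the actual SDDS_MPI_binary.c code; the WriteCtx structure and the write_chunk/flush_buffer names are invented for illustration. It assumes the library stages data in a local buffer and flushes it with MPI_File_write when the buffer size is nonzero, and calls MPI_File_write directly when the buffer size is 0, which is the path that behaves correctly on GPFS.

#include <mpi.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical illustration only, not the SDDS_MPI_binary.c implementation. */
typedef struct {
    MPI_File fh;          /* open MPI IO file handle */
    char    *buffer;      /* local staging buffer (NULL when bufferSize is 0) */
    int32_t  bufferSize;  /* 0 means unbuffered: write directly */
    int32_t  used;        /* bytes currently staged in the buffer */
} WriteCtx;

/* Flush any staged bytes with a single MPI_File_write call. */
static int flush_buffer(WriteCtx *ctx)
{
    MPI_Status status;
    int err;
    if (ctx->used == 0)
        return MPI_SUCCESS;
    err = MPI_File_write(ctx->fh, ctx->buffer, ctx->used, MPI_BYTE, &status);
    ctx->used = 0;
    return err;
}

/* Write count bytes, either staging them (buffered) or passing them straight
   to MPI_File_write (unbuffered, i.e. bufferSize == 0). */
static int write_chunk(WriteCtx *ctx, const void *data, int32_t count)
{
    MPI_Status status;
    if (ctx->bufferSize == 0)
        return MPI_File_write(ctx->fh, data, count, MPI_BYTE, &status);
    if (ctx->used + count > ctx->bufferSize) {
        int err = flush_buffer(ctx);
        if (err != MPI_SUCCESS)
            return err;
    }
    if (count >= ctx->bufferSize)   /* too large to stage; write it directly */
        return MPI_File_write(ctx->fh, data, count, MPI_BYTE, &status);
    memcpy(ctx->buffer + ctx->used, data, count);
    ctx->used += count;
    return MPI_SUCCESS;
}

With defaultBufferSize set to 0, every write takes the first branch and no staging buffer is used.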
Thanks. Hairong Shang
PS: For your convenience, I have attached the updated source file, SDDS_MPI_binary.c.