bug fix of SDDS MPI IO on GPFS file system

Moderators: michael_borland, soliday

hshang
Posts: 1
Joined: 26 Apr 2012, 10:28


Post by hshang » 03 Apr 2018, 14:41

Recently we found that SDDS MPI IO file writes had a problem on the GPFS file system (for example, on the Blues cluster), but worked fine on the Lustre file system.

The fix sets defaultBufferSize, the MPI IO write buffer size, to 0 in SDDS_MPI_binary.c (it was previously 4 MB). Only one source file, SDDS_MPI_binary.c, is changed. Update it and recompile SDDSlib, or make the following change in SDDS_MPI_binary.c:
change this line
static int32_t defaultBufferSize = 4000000;
to
static int32_t defaultBufferSize = 0;

This change did not affect performance on Lustre, but fixed the problem on GPFS. It is possible that buffering is not needed for MPI IO at all; we will remove the buffering code if no problems emerge in the future.
Thanks. Hairong Shang

PS: for your convenience, I attached the source code: SDDS_MPI_binary.c
Attachments
SDDS_MPI_binary.c
bug fix of SDDS MPI IO
(73.89 KiB) Downloaded 267 times
