ParallelIO::load problem for n proc > 60

I am trying to implement a large-scale LB simulation using Palabos (v1.2). I am having difficulties with parallelIO::load when nproc >= 72. This happens irrespective of whether I use a single atomic block or multiple atomic blocks. Since the code works for nproc = 60, I have tried writing out the binary file using parallelIO::save with 60 procs and reading it back in with up to 240 procs; this also fails for nproc >= 72. I have used Palabos once before (about a year ago) and had no problem scaling the number of processors, so I am a little confused.

The machines I am running these simple test cases on are Westmere or Nehalem nodes (dual-socket hex-core or dual-socket quad-core, with 48 GB RAM per node). I have tried the load operation for 512 x 512 x 512 ints (up to 1024 x 1024 x 1024 ints).

I am giving a small test code below (only the load portion). I would very much appreciate it if someone could point me in the right direction to resolve this issue.



#include "palabos3D.h"
#include "palabos3D.hh"

using namespace plb;
typedef double T;

int main(int argc, char **argv)
{
    plbInit(&argc, &argv);
    const plint nx = atoi(argv[2]);
    const plint ny = atoi(argv[3]);
    const plint nz = atoi(argv[4]);
    pcout << "Creation of the geometry." << std::endl;
    MultiScalarField3D<int> geometry(nx, ny, nz);
    pcout << "Reading the geometry file." << std::endl;
    parallelIO::load(argv[1], geometry);
    // parallelIO::save(geometry, "test_out");
    pcout << "Done loading file." << std::endl;
    return 0;
}
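For completeness, this is roughly how I run the two steps (binary and file names below are placeholders):

```shell
# "loadtest" is the compiled test program above; "geometry.dat" is a file
# previously written with parallelIO::save.

# Works: read back with 60 processes.
mpirun -np 60 ./loadtest geometry.dat 512 512 512

# Fails: same file, same program, 72 or more processes.
mpirun -np 72 ./loadtest geometry.dat 512 512 512
```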




I have experienced very similar problems when using IntelMPI on Nehalem nodes.
I could not find the exact source of the problem, but switching to OpenMPI solved it.
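For anyone who needs to do the same switch: Palabos selects its MPI implementation through the compiler wrapper named in the application Makefile, so it should be enough to point the parallel compiler variable at the other MPI's wrapper (variable name as I recall it from the v1.x Makefiles; check your copy):

```makefile
# In the Palabos application Makefile: choose which MPI's compiler
# wrapper is used. mpicxx resolves through PATH, so either adjust PATH
# or give the full path to the OpenMPI wrapper explicitly.
parallelCXX = mpicxx
# e.g. parallelCXX = /opt/openmpi/bin/mpicxx
```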


The newest release (v1.4) has a bug fix with respect to parallelism. Does it fix the I/O problem you are experiencing?


I follow the development closely, so I have noticed the bugfix in the changelog.
I will try it out once I have some free time. Thank you for the continued development of Palabos :-)

Best regards,

Thanks for the bug fix. I can confirm that the fix rectifies the read issue with nproc > 60 on the following configuration.

icpc 13.1.0
intel mpi 4.1.0


Thanks a lot for the feedback! Glad to hear that the bug fix solved your problem.