Dear all,
I have been experimenting with the manualBlockCavityXd examples and found that they do not run in serial as promised in the code. When run on one processor, they both crash with the following error message:
[code]
$ mpirun -np 1 ./manualBlockCavity3d
[fluid57:27302] *** An error occurred in MPI_Isend
[fluid57:27302] *** on communicator MPI_COMM_WORLD
[fluid57:27302] *** MPI_ERR_RANK: invalid rank
[fluid57:27302] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
[/code]
Running them with the correct number of processes works fine.
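For what it's worth, the abort is simply MPI refusing a send to a rank that does not exist. A minimal sketch (plain MPI, nothing Palabos-specific; the destination rank 1 is chosen only for illustration) reproduces the same error when launched with -np 1:
[code="cpp"]
#include <mpi.h>

int main(int argc, char* argv[]) {
    MPI_Init(&argc, &argv);

    int data = 42;
    MPI_Request request;

    // With "mpirun -np 1" the only valid rank is 0, so a send to rank 1
    // aborts with MPI_ERR_RANK under MPI_ERRORS_ARE_FATAL (the default
    // error handler), matching the failure seen in manualBlockCavity3d.
    MPI_Isend(&data, 1, MPI_INT, /*dest=*/1, /*tag=*/0,
              MPI_COMM_WORLD, &request);

    MPI_Finalize();
    return 0;
}
[/code]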
I took a look at the code and found that, in the file "defaultMultiBlockPolicy3d.h", replacing
[code="cpp"]
ThreadAttribution* getThreadAttribution() {
#ifdef PLB_MPI_PARALLEL
    return new OneToOneThreadAttribution();
#else
    return new SerialThreadAttribution();
#endif
}
[/code]
with
[code="cpp"]
ThreadAttribution* getThreadAttribution() {
#ifdef PLB_MPI_PARALLEL
    // With a single MPI process there are no peer ranks to
    // communicate with, so fall back to the serial attribution.
    if (global::mpi().getSize() == 1) {
        return new SerialThreadAttribution();
    }
    else {
        return new OneToOneThreadAttribution();
    }
#else
    return new SerialThreadAttribution();
#endif
}
[/code]
resolves the issue and allows cases that contain manual multi-blocks, compiled with MPI support, to run on one processor. The same change probably needs to be made to "defaultMultiBlockPolicy2d.h" as well.
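For completeness, assuming "defaultMultiBlockPolicy2d.h" defines getThreadAttribution() in the same way (I have not verified or tested this), the analogous patch would presumably be:
[code="cpp"]
ThreadAttribution* getThreadAttribution() {
#ifdef PLB_MPI_PARALLEL
    // Same fallback as in the 3D policy: one MPI process means
    // there are no peer ranks, so use the serial attribution.
    if (global::mpi().getSize() == 1) {
        return new SerialThreadAttribution();
    }
    else {
        return new OneToOneThreadAttribution();
    }
#else
    return new SerialThreadAttribution();
#endif
}
[/code]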
best
Philippe