I have a problem with running a grid-refinement code in parallel.
Namely, when I run the code /examples/showCases/gridRefinement2d/cavity2d.cpp on 8 processors, all processors are active, but they are all doing the same job (the same job is distributed to every processor).
The same problem appears for both the 2D and the 3D cases. Probably something is wrong with this code segment:
// Parallelization. Here, the domain is parallelized along the x-direction. The position
// of the different slices is explicitly created, and provided in units of the
// finest lattice.
std::vector<plint> parallelizeIn;
plint procNumber = global::mpi().getSize();
std::vector<std::pair<plint,plint> > ranges;
plint finestNy = convectiveParameters[numLevel-1].getNy()-1;
util::linearRepartition(0,finestNy,procNumber,ranges);
for (plint iBlock=0; iBlock<(plint)ranges.size()-1; ++iBlock){
    parallelizeIn.push_back(ranges[iBlock].second);
}
management.parallelizeX(parallelizeIn);
Do you have an idea how I can fix this problem?
Is it possible to use MPI here in the same way as with the MultiBlockLattice?
Do you know whether it is possible to use grid refinement for the boussinesqThermal3d example (the same grid refinement for the velocity and the temperature field)?
First of all, the behavior that you see is exactly what MPI is supposed to do: execute the same code on all processors.
However, each processor holds different data. In Palabos, for instance, each processor receives a region of the simulation domain and automatically communicates with the other processors when needed.
To test that this is working correctly, you can run a sample code on one processor and watch the memory used. Then rerun the code on more processors; this time each processor should use a smaller amount of memory.
You also need to make sure you have compiled the code with an MPI compiler.
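For example, a quick sanity check that the executable really runs under MPI (a minimal standalone sketch, independent of Palabos) is to have every process report its rank:

    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank = 0, size = 1;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        // Every process should print a different rank. If each one reports
        // "rank 0 of 1", the executable was not built against MPI and mpirun
        // is simply launching independent copies of a serial program.
        std::printf("Hello from rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }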
Thank you for your reply.
I have tested it. I also compared the run time on 1, 8, and 16 processors, and it is the same.
If I run on 8 processors I get 8 VTK files as output, and each processor receives the same data.
I am compiling using the makefile (I just type “make” on the command line). But I have to set the MPI-parallel mode to “off”, because with “on” I get a compilation error:
/src/multiGrid/multiGridLattice2D.hh(255): error: more than one instance of overloaded function “pow” matches the argument list:
function “pow(double, double)”
function “std::pow(long double, int)”
function “std::pow(float, int)”
function “std::pow(double, int)”
function “std::pow(long double, long double)”
function “std::pow(float, float)”
argument types are: (int, plb::plint)
for (plint iterations = 0; iterations < pow(2,iLevel); ++iterations){
^
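For what it's worth, the ambiguity itself can probably be removed by not calling pow with a mixed int/plint argument pair. Here is a minimal standalone sketch of the idea (an illustration only, not necessarily the fix used in the Palabos sources):

    #include <cstdio>

    typedef long plint;  // stand-in for plb::plint, only for this sketch

    int main() {
        plint numLevels = 3;
        for (plint iLevel = 0; iLevel < numLevels; ++iLevel) {
            // Compute 2^iLevel with an integer shift instead of pow(2, iLevel),
            // so no floating-point overload has to be selected at all.
            plint numIterations = plint(1) << iLevel;
            std::printf("level %ld: %ld sub-iterations\n", (long)iLevel, (long)numIterations);
        }
        return 0;
    }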
Can I still run in parallel if I set the MPI-parallel mode off, or is that the cause of the previously mentioned problem?
Do you know whether it is possible to use grid refinement for thermal simulations?
I think you should get the latest version (palabos-v0.7r3) from the Palabos site. I have tested that this version compiles with no problem (with both MPICH and Open MPI).
If you compile in serial and use mpirun to run the code on n processors, you get the behavior that you see, namely that many copies of the same serial code execute (each writing the same VTK files, etc.). You must compile with the parallel option.
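Concretely, assuming the Makefile layout of the Palabos examples (the option names may differ slightly between versions), the relevant switches would look roughly like this:

    # Compiler to use with MPI parallelism
    parallelCXX = mpiCC

    # Set MPI-parallel mode on/off (parallelism in cluster-like environment)
    MPIparallel = true

    # Set SMP parallel mode on/off (shared-memory parallelism)
    SMPparallel = false

With these settings the executable is linked against MPI and can then be launched with mpirun as usual.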
As for grid refinement: although in many cases we have found good results for the velocity, we have experienced some problems related to the continuity of the density between the grids. We are currently working on mass balancing and other algorithms to get rid of the problem. As for thermal simulations, I guess it should be possible, as long as the quantities that depend on delta x and delta t (for example the Rayleigh number) are consistent between the grids that communicate.
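To illustrate that last point, here is a small sketch (it assumes convective scaling, i.e. delta t proportional to delta x, and a refinement factor of 2 between levels; the numbers are arbitrary and the code is not taken from Palabos). Keeping the physical Rayleigh and Prandtl numbers the same on every level then amounts to rescaling the lattice viscosity and thermal diffusivity from one level to the next:

    #include <cstdio>
    #include <cmath>

    int main() {
        double nuCoarse  = 0.01;  // lattice viscosity on the coarsest level (arbitrary)
        double prandtl   = 0.71;  // Pr = nu / kappa, kept fixed on every level
        int    numLevels = 3;

        for (int iLevel = 0; iLevel < numLevels; ++iLevel) {
            // With dt ~ dx and a refinement factor of 2, the lattice viscosity
            // nu_L = nu_phys * dt / dx^2 doubles on each finer level.
            double nuLevel    = nuCoarse * std::pow(2.0, iLevel);
            double kappaLevel = nuLevel / prandtl;  // same rescaling keeps Pr (and hence Ra) consistent
            std::printf("level %d: nu = %g, kappa = %g\n", iLevel, nuLevel, kappaLevel);
        }
        return 0;
    }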
Thank you very much for the link.
I am using openmpi/v1.5, and in the Makefile I have set the MPI compiler: “parallelCXX = mpiCC” (under “# Compiler to use with MPI parallelism”).
I have installed the latest version of Palabos and checked the code again. If I want to compile in parallel I have to set “SMPparallel = true” (“Set SMP parallel mode on/off (shared-memory parallelism)”) in the makefile; if instead I set “MPIparallel = true” (“Set MPI-parallel mode on/off (parallelism in cluster-like environment)”) I get a compilation error (I am using a cluster). With the SMP-parallel mode it works, but when I compare the memory used per processor when several processors are running, it is the same as when a single processor is running; the same holds for the run time.
I have also tried to compile code with thermal modeling, but when I tried to use ADESCRIPTOR (“MultiGridLattice3D<T, ADESCRIPTOR>& adLattice”) I got a compilation error.
I have tested the grid refinement code on an old cluster with Itanium cores, and there it works fine.
So the problem is with MPI.
The other Palabos examples work fine on all clusters, with Itanium as well as other cores.
But when I try to compile code with thermal modeling (I tried to use ADESCRIPTOR, “MultiGridLattice3D<T, ADESCRIPTOR>& adLattice”), I still get a compilation error.