Parallel Compilation on Mac OS X

Hi:

I was trying to compile the cavity2d_cuboid_mpi and cavity3d_cuboid_mpi examples and got the following error:

Create dependencies for cavity2d.cpp
Compile cavity2d.cpp
g++ -O3 -Wall -I…/…/src -c cavity2d.cpp -o /Users/gersappe/Desktop/olb/examples/cavity2d_cuboid_mpi/cavity2d.o
cavity2d.cpp: In function ‘void writeVTK(olb::SuperLattice2D<T, olb::descriptors::D2Q9Descriptor>&, const olb::LBunits&, int)’:
cavity2d.cpp:86: error: ‘BlockStatistics2D’ was not declared in this scope
cavity2d.cpp:86: error: expected primary-expression before ‘,’ token
cavity2d.cpp:86: error: missing template arguments before ‘>’ token
cavity2d.cpp:86: error: ‘statistics’ was not declared in this scope
make: *** [/Users/gersappe/Desktop/olb/examples/cavity2d_cuboid_mpi/cavity2d.o] Error 1

I downloaded the fixed version of olb, but the error persists.

Thanks
Dilip

Hi Dilip,

I suspect that you are using an old version of the cavity2d code. You should update to version 0.5, which makes no mention of the class BlockStatistics2D.

Actually, the class BlockStatistics2D has become obsolete and should be replaced by DataAnalysis2D. In order to reach a high-quality final product, the user interface of OpenLB will remain subject to change until we reach version 1.0, although we try to limit these changes as much as possible.

I am using version 0.5, and I noticed that the writeVTK function does contain a BlockStatistics2D call:

for (int iC=0; iC<sLattice.get_load().size(); iC++) {
    BlockStatistics2D<T,D2Q9Descriptor> statistics(sLattice.get_lattice(iC));

I changed BlockStatistics2D to DataAnalysis2D, but then I got the following error:

Create dependencies for cavity2d.cpp
Compile cavity2d.cpp
g++ -O3 -Wall -I…/…/src -c cavity2d.cpp -o /Users/gersappe/Desktop/OpenLB/examples/cavity2d_cuboid_mpi/cavity2d.o
cavity2d.cpp: In function ‘void writeVTK(olb::SuperLattice2D<T, olb::descriptors::D2Q9Descriptor>&, const olb::LBunits&, int)’:
cavity2d.cpp:87: error: no matching function for call to ‘std::vector<olb::ScalarField2D, std::allocator<olb::ScalarField2D > >::push_back(const olb::ScalarFieldBase2D&)’
/usr/include/c++/4.0.0/bits/stl_vector.h:602: note: candidates are: void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = olb::ScalarField2D, _Alloc = std::allocator<olb::ScalarField2D >]
cavity2d.cpp:88: error: no matching function for call to ‘std::vector<olb::TensorField2D<T, 2>, std::allocator<olb::TensorField2D<T, 2> > >::push_back(const olb::TensorFieldBase2D<T, 2>&)’
/usr/include/c++/4.0.0/bits/stl_vector.h:602: note: candidates are: void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = olb::TensorField2D<T, 2>, _Alloc = std::allocator<olb::TensorField2D<T, 2> >]
make: *** [/Users/gersappe/Desktop/OpenLB/examples/cavity2d_cuboid_mpi/cavity2d.o] Error 1

Dilip
P.S. I used the examples directory that came with the 0.5 release.

Oh, you are right. I had not realized that this was the “cuboid_mpi” version of the cavity flow. So here’s the thing: there are two approaches to MPI parallelism in OpenLB, which we develop side by side so that an alternative remains available in case one of them turns out to have a major issue. The first approach is referred to as “multiblock MPI”, the other as “cuboid MPI”. Both follow the same philosophy but are developed independently.

Currently, all example programs can be run in serial or in parallel. When they are compiled for parallel execution (the MPI flag is turned on in Makefile.inc), they use the “multiblock MPI” approach. We also added a few examples for those who want to experiment specifically with the cuboid approach. As it turns out, however, the cuboid examples have compilation issues with the newer OpenLB versions, and we should probably remove them from the examples directory, at least temporarily.

In the meantime, please simply use the multiblock approach: go to the cavity2d example and compile it with the MPI parallelism flag turned on. Good luck!
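For reference, the switch looks roughly like the sketch below. I am quoting the variable names from memory, so check your own Makefile.inc and the name of your MPI compiler wrapper for the exact spelling:

# in Makefile.inc: build with the MPI compiler wrapper and enable MPI mode
CXX           := mpicxx     # instead of g++
PARALLEL_MODE := MPI        # selects the multiblock MPI code path

# then rebuild and run the standard cavity2d example
cd examples/cavity2d
make clean && make
mpirun -np 4 ./cavity2d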

Thanks a lot, I will do that. I did see the lines for parallelization in the other files, but thought that cuboid_mpi was the one to use as a test case for MPI.

Dilip