[parallel] gravitationally falling sphere immersed in a fluid

Dear Sirs,
This concerns the simulation I am currently working on; I have three questions I need to resolve.

  1. Regarding your reply to my earlier question, “How to make the get function more efficient”: after reading the source code I still do not understand the implementation of the “postprocess” mechanism. Would you mind explaining it briefly, or pointing out the lines I should focus on?

  2. Setting efficiency aside, the value I obtain from the “get function” is always zero when I run the code in MPI parallel mode. In the same situation in serial mode, I get reasonable values. Part of our code is shown at the end of this message.

  3. I am now preparing a serial 3D simulation; however, its boundary is too large for a single computer. Our strategy now is to simulate only part of the boundary, which means that on some boundaries I have to simulate a float boundary. Would you mind briefly explaining how to simulate the float boundary?

###############################################################################

// Velocity field obtained from the data analysis of the lattice.
TensorFieldBase3D<T,3> const& velField = lattice.getDataAnalysis().getVelocity();

// pt.x, pt.y, pt.z hold the particle position (floating-point type T).
// Indices of the lattice node just below the position ("negative" corner) ...
int id_nx = (int)floor(pt.x);
int id_ny = (int)floor(pt.y);
int id_nz = (int)floor(pt.z);

// ... and of the node just above it ("positive" corner).
int id_px = id_nx + 1;
int id_py = id_ny + 1;
int id_pz = id_nz + 1;

// Interpolation weights attached to the positive corner ...
T w_px = pt.x - (T)id_nx;
T w_py = pt.y - (T)id_ny;
T w_pz = pt.z - (T)id_nz;

// ... and to the negative corner.
T w_nx = 1.0 - w_px;
T w_ny = 1.0 - w_py;
T w_nz = 1.0 - w_pz;

// Trilinear interpolation of the velocity at the particle position
// from the eight surrounding lattice nodes.
T u[3];
for (int dim = 0; dim < 3; ++dim)
{
    u[dim]  = velField.get(id_nx, id_ny, id_nz)[dim] * w_nx * w_ny * w_nz;
    u[dim] += velField.get(id_nx, id_ny, id_pz)[dim] * w_nx * w_ny * w_pz;
    u[dim] += velField.get(id_nx, id_py, id_nz)[dim] * w_nx * w_py * w_nz;
    u[dim] += velField.get(id_nx, id_py, id_pz)[dim] * w_nx * w_py * w_pz;
    u[dim] += velField.get(id_px, id_ny, id_nz)[dim] * w_px * w_ny * w_nz;
    u[dim] += velField.get(id_px, id_ny, id_pz)[dim] * w_px * w_ny * w_pz;
    u[dim] += velField.get(id_px, id_py, id_nz)[dim] * w_px * w_py * w_nz;
    u[dim] += velField.get(id_px, id_py, id_pz)[dim] * w_px * w_py * w_pz;
}

###############################################################################

The compiler I use is PGI (linux86-64/7.0-5).

Thanks.

Hi,

The concept of a postprocessor is used in OpenLB to execute a non-local operation on a lattice. The operation is described in a data-parallel paradigm, which allows an efficient parallel implementation. We are currently working hard on the interface of the postprocessor to make it more user-friendly; I hope the next release ships a nicer interface and corresponding documentation. In the meantime, the best way to get familiar with postprocessors is to look at the implementation of the non-local boundary conditions, which are based on them.
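
To give a rough picture of the idea (an illustrative sketch only: the class and method names below are approximations, not the exact interface shipped with the current release), a postprocessor is essentially an object that applies its non-local operation to a rectangular sub-domain of a block lattice, so the same description can be executed independently on every block of a parallel decomposition:

// Illustrative sketch only: names and signatures are approximate,
// not the exact OpenLB interface.
template<typename T, template<typename U> class Lattice>
struct PostProcessor3D {
    // Execute the non-local operation on the whole block lattice.
    virtual void process(BlockLattice3D<T,Lattice>& blockLattice) = 0;
    // Execute the same operation restricted to the sub-domain
    // [x0..x1] x [y0..y1] x [z0..z1]; because only this local box (plus
    // its communication envelope) is touched, every block of a parallel
    // decomposition can run the operation independently.
    virtual void processSubDomain(BlockLattice3D<T,Lattice>& blockLattice,
                                  int x0, int x1, int y0, int y1,
                                  int z0, int z1) = 0;
    virtual ~PostProcessor3D() { }
};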

It is not clear why your code fails to work in parallel. As a suggestion, try to access the velocity on a cell directly, instead of going through the dataAnalysis. That is, instead of writing

u[dim] += velField.get(id_nx, id_ny, id_pz)[dim] * w_nx * w_ny * w_pz;

write something like

T tmpU[Lattice::d];
lattice.get(id_nx, id_ny, id_pz).computeU(tmpU);
u[dim] += tmpU[dim] * w_nx * w_ny * w_pz;
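
Applied to your whole interpolation, that would look roughly as follows (a sketch which reuses the indices and weights from your snippet, and assumes the eight surrounding nodes lie inside the domain):

T u[3] = { T(), T(), T() };
T tmpU[Lattice::d];

// Node indices and interpolation weights of the "negative" and "positive"
// corners surrounding the particle position, as computed in your code.
int ids[2][3] = { {id_nx, id_ny, id_nz}, {id_px, id_py, id_pz} };
T   ws [2][3] = { {w_nx,  w_ny,  w_nz }, {w_px,  w_py,  w_pz } };

// Accumulate the trilinear interpolation over the eight corner cells,
// reading each velocity directly from the cell instead of the dataAnalysis.
for (int i = 0; i < 2; ++i) {
    for (int j = 0; j < 2; ++j) {
        for (int k = 0; k < 2; ++k) {
            lattice.get(ids[i][0], ids[j][1], ids[k][2]).computeU(tmpU);
            for (int dim = 0; dim < 3; ++dim) {
                u[dim] += tmpU[dim] * ws[i][0] * ws[j][1] * ws[k][2];
            }
        }
    }
}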

What do you mean by a “float boundary”?