Data Reduction in Parallel

Hi:
Along with the LB flow field calculation, I am tracking the evolution of a particle concentration at each node of the lattice. I have used the following declaration:

TensorField3D<T,numComp> comp(nx,ny,nz); comp.construct();

The problem is that I need to calculate the average composition over all nx, ny, nz nodes. From what I see, the function computeAverage calculates the average of a scalar quantity over all the nodes of the lattice. What I want to do is

sum(over all nodes) { comp.get(iX,iY,iZ)[0] } / num_of_nodes

Is there a built-in command in OpenLB that will allow me to do this, or do I have to write the MPI calls by hand?
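
In case it makes the question clearer, the hand-written MPI version I am hoping to avoid would look roughly like the sketch below. The Field template parameter and the localNx, localNy, localNz extents are just placeholders for whatever each process actually owns locally; MPI_Allreduce is the only real call I am relying on:

#include <mpi.h>

// Sketch only: Field stands in for the locally owned part of comp,
// and localNx/localNy/localNz are the local sub-domain extents.
template <typename Field>
double averageComponentZero(Field const& comp,
                            int localNx, int localNy, int localNz)
{
    double localSum   = 0.0;
    long   localCount = 0;
    for (int iX = 0; iX < localNx; ++iX) {
        for (int iY = 0; iY < localNy; ++iY) {
            for (int iZ = 0; iZ < localNz; ++iZ) {
                localSum += comp.get(iX, iY, iZ)[0];
                ++localCount;
            }
        }
    }

    // Combine the per-process sums and counts across all ranks.
    double globalSum   = 0.0;
    long   globalCount = 0;
    MPI_Allreduce(&localSum,   &globalSum,   1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
    MPI_Allreduce(&localCount, &globalCount, 1, MPI_LONG,   MPI_SUM, MPI_COMM_WORLD);

    return globalSum / globalCount;
}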

thanks
P.

The following should do the job:


comp.extractComponent(0).computeReduction(AverageReduction<T>);

Thanks. That really works for me.

P.

Can I also use the same function for averaging a scalar quantity at each lattice grid point?

i.e.


ScalarField3D <T> temp(nx,ny,nz); temp.construct();

temp.computeReduction(AverageReduction<T>);


From what I understand, this will result in each element of the ScalarField3D (at iX,iY,iZ) being replaced with its average quantity at that (iX,iY,iZ), and the ScalarField then being updated on each node.
Is this correct?

thanks

Hi:
I get the following error when I try to call computeReduction (for both the Scalar and the Tensor fields).
Sorry if I don't seem to quite follow the code, but I am new to C++. The error is at the line where I use computeReduction.


error: expected primary-expression before ‘)’ token


Any hints would be greatly appreciated.


for (iComp = 0; iComp < numComp; ++iComp) {

    for (iNum = 0; iNum < numPart; ++iNum) {
        iX = Particle[iNum].dX;
        iY = Particle[iNum].dY;
        iZ = Particle[iNum].dZ;

        comp.get(iX,iY,iZ)[0] += Particle[iNum].comp[iComp];
    }

    // Average over all the nodes
    comp.extractComponent(0).computeReduction(AverageReduction<T>);

I have gotten my code to run in serial mode, but the inability to sum over this array is causing problems with
the parallel implementation.

thanks…
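
In case it helps to pinpoint it: the message looks like a generic C++ parse error rather than anything OpenLB-specific. The self-contained sketch below (AverageReduction and computeReduction here are hypothetical stand-ins, not the OpenLB classes) produces the same kind of complaint when a template type name is passed where the compiler expects a value:

#include <cstddef>

// Hypothetical stand-ins for illustration only; not the OpenLB types.
template <typename T>
struct AverageReduction {
    T operator()(T const* data, std::size_t n) const {
        T sum = T();
        for (std::size_t i = 0; i < n; ++i) {
            sum += data[i];
        }
        return n > 0 ? sum / static_cast<T>(n) : T();
    }
};

template <typename T, typename Reduction>
T computeReduction(T const* data, std::size_t n, Reduction reduce) {
    return reduce(data, n);
}

int main() {
    double values[4] = { 1.0, 2.0, 3.0, 4.0 };

    // computeReduction(values, 4, AverageReduction<double>);
    // The commented-out line above passes the *type* AverageReduction<double>
    // where an expression is expected, which g++ rejects with an
    // "expected primary-expression" style error.

    // Passing an object (note the extra parentheses) compiles:
    double avg = computeReduction(values, 4, AverageReduction<double>());
    (void) avg;
    return 0;
}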