Question about corner boundaries

Hi, all

I am new to the OpenLB code and have some questions about the cylinder.cpp example.

I have been trying to understand the code by studying the cylinder.cpp example (from the old version, olb-0.5r1). I tried both LocalBoundaryCondition2D and ZouHeBoundary2D, and I noticed that:

(1) Sometimes the distribution function is negative. Is this normal, or is something wrong?

(2) For the four corner boundaries (a fixed velocity is used), I do not understand how the collision step is treated; it seems that just the normal RLBdynamics or BGKdynamics collision step is called. But how are the five unknown distribution functions determined?



Any help is appreciated!

ycwang

Hi,

OpenLB and Palabos don’t directly store the distribution functions f_i. Instead, they store the values f_i - t_i, where t_i is the lattice weight of the corresponding direction. The reason for this is explained here: http://lbmethod.org/howtos:reduce_roundoff
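To make the effect concrete, here is a minimal sketch (not the actual OpenLB/Palabos data layout; the function names are made up) showing why summing the stored values f_i - t_i preserves more digits of the small deviation from the rest state than summing the raw f_i:

```cpp
// Sketch only: illustrates why storing fBar_i = f_i - t_i reduces roundoff.
// The actual OpenLB/Palabos cell layout differs; names here are hypothetical.
#include <array>
#include <numeric>

// D2Q9 lattice weights t_i
constexpr std::array<double, 9> t = {
    4.0/9.0,
    1.0/9.0, 1.0/36.0, 1.0/9.0, 1.0/36.0,
    1.0/9.0, 1.0/36.0, 1.0/9.0, 1.0/36.0
};

// Density computed from the full populations f_i: rho = sum_i f_i.
// Near equilibrium, rho is close to 1, so the sum accumulates numbers of
// order t_i up to a value of order 1, and the small deviation from 1 is
// partly lost in roundoff.
double densityFromF(const std::array<double, 9>& f) {
    return std::accumulate(f.begin(), f.end(), 0.0);
}

// Density computed from the stored values fBar_i = f_i - t_i:
// rho = 1 + sum_i fBar_i. The sum now only accumulates the small deviation
// from the rest state, which keeps its relative precision.
double densityFromFBar(const std::array<double, 9>& fBar) {
    return 1.0 + std::accumulate(fBar.begin(), fBar.end(), 0.0);
}
```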

To understand how the regularized (“LocalBoundaryCondition”) or Zou/He (“ZouHeBoundaryCondition”) boundary conditions work, I suggest you have a look at the following paper: http://lbmethod.org/literature:latt_08

Thanks, Jonas,

Actually, I have downloaded many of your theses, papers, and reports. You have done a great job; it is a great pity that I could not go to your OpenLB training.

I have read your paper about boundary conditions. Is your version of ZouHeBoundaryCondition different from Zou and He’s original paper? I still do not know how you deal with the corner nodes, especially the five unknown distributions. Could you explain this to me?

By the way, I am trying to couple LBM to my Discrete Element Model (DEM). Since my DEM is a parallel (MPI) code, I am reading your parallel part now. My questions are:

(1) Are there any fundamental differences in the parallel code between the olb-0.5r1 and palabos-0.6r1 versions? I started from the old version and cannot follow quickly; I find Palabos more complicated.

(2) Do you have any detailed documentation about the parallel part? In your new and old documents there are only a few remarks about the MPI strategy. I need to understand it and couple it into my DEM code, which employs a master-slave mode. Any suggestions?

  Thanks

   ycwang

Dear ycwang,

I am working on a coupled system of LBM (fluid) and FEM (suspended particles). Currently, we are working on the parallelization of the FEM part. I am very interested in efficient concepts for parallelization via MPI. A master-slave approach seems promising. Would you mind providing me with some basic information on how you have realized the MPI parallelization? In particular, a reduction of the necessary MPI calls and of the amount of transferred data is important. How does your parallelization scale on multiple cores? My mail address is t.krueger@mpie.de

Thank you,
Timm

Hi,

The Zou/He algorithm is identical in the original Zou/He paper, in all my publications, and in the Palabos code.

I don’t know how to implement this algorithm in corners or on edges. No matter which boundary condition you select, Palabos and OpenLB always use the Skordos (non-local) boundary condition in corners and on edges. The way to understand this intuitively is that these nodes have an insufficient number of known particle populations, so a non-local scheme is required to gather the missing information from neighboring nodes.
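For illustration only, here is a minimal D2Q9 sketch of the general idea behind such a non-local reconstruction. It is not the code path used by OpenLB or Palabos; the function names, the assumption that the interior neighbors lie in the +x and +y directions, and the choice of one-sided differences are mine. All populations at the corner are rebuilt from the prescribed density and velocity, plus a first-order non-equilibrium part whose velocity gradients are estimated by finite differences on the interior neighbor nodes:

```cpp
// Sketch of a Skordos-type corner treatment on D2Q9: since too few
// populations are known at a corner, ALL populations are reconstructed
// from macroscopic data, using finite differences over neighboring nodes
// for the velocity gradients. Not the OpenLB/Palabos implementation.
#include <array>

using Vec2 = std::array<double, 2>;

constexpr int    c[9][2] = { { 0, 0}, {-1, 1}, {-1, 0}, {-1,-1}, { 0,-1},
                             { 1,-1}, { 1, 0}, { 1, 1}, { 0, 1} };
constexpr double t[9]    = { 4.0/9.0, 1.0/36.0, 1.0/9.0, 1.0/36.0, 1.0/9.0,
                             1.0/36.0, 1.0/9.0, 1.0/36.0, 1.0/9.0 };
constexpr double cs2 = 1.0/3.0;

// Standard second-order equilibrium distribution.
double equilibrium(int i, double rho, const Vec2& u) {
    double cu   = c[i][0]*u[0] + c[i][1]*u[1];
    double uSqr = u[0]*u[0] + u[1]*u[1];
    return rho * t[i] * (1.0 + cu/cs2 + cu*cu/(2.0*cs2*cs2) - uSqr/(2.0*cs2));
}

// Reconstruct all nine populations at a corner from the prescribed (rho, u),
// plus a first-order non-equilibrium part built from the strain rate S.
// uX and uY are the velocities at the interior neighbor nodes in the +x and
// +y directions (assuming a lower-left corner); the gradients are one-sided
// finite differences with unit lattice spacing.
std::array<double, 9> reconstructCorner(double rho, const Vec2& u,
                                        const Vec2& uX, const Vec2& uY,
                                        double tau) {
    double dxUx = uX[0] - u[0], dxUy = uX[1] - u[1];
    double dyUx = uY[0] - u[0], dyUy = uY[1] - u[1];
    double S[2][2] = { { dxUx,               0.5*(dxUy + dyUx) },
                       { 0.5*(dxUy + dyUx),  dyUy              } };
    std::array<double, 9> f;
    for (int i = 0; i < 9; ++i) {
        // Q_i : S with Q_{i,ab} = c_ia*c_ib - cs2*delta_ab
        double QS = 0.0;
        for (int a = 0; a < 2; ++a)
            for (int b = 0; b < 2; ++b)
                QS += (c[i][a]*c[i][b] - (a == b ? cs2 : 0.0)) * S[a][b];
        // First-order Chapman-Enskog estimate of the non-equilibrium part.
        double fNeq = -rho * t[i] * tau / cs2 * QS;
        f[i] = equilibrium(i, rho, u) + fNeq;
    }
    return f;
}
```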

Parallelization is more involved in Palabos than in OpenLB, because we have extended the data structure to take into account the possibility of handling inhomogeneous hardware (in the current release of Palabos (0.6), inhomogeneous hardware cannot be handled yet, though).

We are currently investing in the documentation of end-user aspects of the code, and I am afraid that technical topics like the parallelization will not be well documented in the near future. So for now, the documentation consists of the code itself; we are trying to be disciplined and comment as many code portions as possible.

Generally speaking, Palabos is moving away from a master-slave model towards a fully decentralized scheme. We need this to keep up with the pace at which modern parallel computers grow. Imagine that a parallel computer consists of, say, 100,000 cores, uses a master-slave scheme, and the master runs on just one of these cores. Relatively speaking, the master then possesses only a fraction of 10^-5 of the computational resources (CPU power and memory), and with these very limited resources it needs to solve the complex task of managing the parallel code structure. This can end up being a serious bottleneck.

Thanks, Jonas

I'll study the Skordos paper.

You are right. I agree with you that the master may be idle most of the time.

In Palabos, is the node with processor Id = 0 special?

Timm

The parallel part of the DEM was not written by me but by my colleague; here is the paper:


Pure Appl. Geophys., 161, 2265-2277, 2004

ycwang

@ycwang: Thank you, I will read the paper.
@Jonas: A decentralized parallelization is very desirable. Unfortunately, this is not always possible for an arbitrary algorithm. Even if the LBM itself is fully parallelizable, additional physics added to the fluid may not be.

@ycwang: In both OpenLB and the current version of Palabos, the processor with Id=0 is special in that it takes care of input/output operations (all data is flushed through processor 0). This essentially means that all I/O is non-parallel, which is a bad thing. One of our biggest current development efforts is the move to parallel I/O.
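As a rough sketch of the difference (plain MPI, hypothetical buffer and file names, not OpenLB/Palabos code): in the first routine all data is funneled through rank 0, which then writes it; in the second, each rank writes its own block of the file via MPI-IO.

```cpp
// Sketch contrasting serial (rank-0) I/O with parallel I/O via MPI-IO.
#include <mpi.h>
#include <vector>
#include <cstdio>

// (a) Serial I/O: every rank sends its block to rank 0, which writes the file.
void writeSerialThroughRank0(const std::vector<double>& localData) {
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    std::vector<double> all;
    if (rank == 0) all.resize(localData.size() * size);
    // Everything is funneled through rank 0: a bottleneck for large runs.
    MPI_Gather(localData.data(), (int)localData.size(), MPI_DOUBLE,
               all.data(), (int)localData.size(), MPI_DOUBLE,
               0, MPI_COMM_WORLD);
    if (rank == 0) {
        std::FILE* fp = std::fopen("out_serial.dat", "wb");
        std::fwrite(all.data(), sizeof(double), all.size(), fp);
        std::fclose(fp);
    }
}

// (b) Parallel I/O: each rank writes its own contiguous block of the file.
void writeParallelMpiIo(const std::vector<double>& localData) {
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "out_parallel.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_Offset offset = (MPI_Offset)rank * localData.size() * sizeof(double);
    MPI_File_write_at_all(fh, offset, localData.data(),
                          (int)localData.size(), MPI_DOUBLE,
                          MPI_STATUS_IGNORE);
    MPI_File_close(&fh);
}
```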

@Timm: I definitely agree with you; when coupling LB with different methods, a partial parallelization is often the right thing to do. Incidentally, I think that one of the biggest challenges for the LB community in the near future is to find ways to formulate as many physical ingredients as possible in an “LB spirit”, so as to end up with a consistent overall code structure and parallelization scheme. If I remember right, I have seen that you are fiddling with immersed boundaries which, as I’ve often heard, offer a way of implementing flexible walls within an LB-compatible model. What’s your opinion on this? Do your observations confirm this claim?

The immersed boundary method (IBM) is more or less local. The idea is to interpolate velocities and forces between the regular LBM grid and unstructured grids defining the shape of the deformable objects which are immersed (suspended) in the fluid. In principle, one can choose different interpolation stencils. However, there are two major points to consider:

  1. The interpolation should be as accurate and smooth as possible.
  2. The interpolation should be as cheap as possible.

The second point can be dealt with by defining interpolation stencils with finite support (say, a range of 1, 1.5, or 2 LBM lattice nodes in each direction); a small sketch of such a local interpolation is given at the end of this post. In this sense, the IBM itself should be parallelizable with a “not too bad” scaling. Right now, we are thinking about this (e.g., ghost layers similar to LBM).

The bad thing I see is the FEM for the suspended particles. Those particles can span, say, 10 LBM nodes in each dimension. There are operations which require the total surface or total volume of those objects. In other words, the FEM code is not entirely local. Each surface node of the unstructured particle mesh is connected to a number of neighbors which may lie in different neighboring MPI domains. Another point is that one does not know at the beginning where the particles will move. There may be regions in the fluid which are devoid of particles, while others could be filled with them. So load balancing will be an issue.

Making the code parallel is not the big problem; turning it into a parallel AND efficient code is the big task.
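Here is the promised minimal sketch of the interpolation step, using a simple 2-point “hat” kernel (the grid layout, names, and kernel choice are my assumptions, not the actual code):

```cpp
// Sketch of IBM velocity interpolation: the fluid velocity on the regular
// lattice is interpolated to a Lagrangian marker using a compact-support
// kernel, so only a few nearby lattice nodes contribute.
#include <array>
#include <cmath>
#include <vector>

using Vec2 = std::array<double, 2>;

// 2-point hat kernel: phi(r) = 1 - |r| for |r| <= 1, zero otherwise.
double phi(double r) {
    double a = std::abs(r);
    return a <= 1.0 ? 1.0 - a : 0.0;
}

// Interpolate the lattice velocity field u to a marker position.
// u is stored row-major on an nx-by-ny grid with unit spacing.
Vec2 interpolateToMarker(const std::vector<Vec2>& u, int nx, int ny,
                         const Vec2& marker) {
    Vec2 result = {0.0, 0.0};
    int x0 = (int)std::floor(marker[0]);
    int y0 = (int)std::floor(marker[1]);
    // Only nodes inside the kernel support contribute: the coupling is
    // cheap and local, which is what keeps it reasonably parallelizable.
    for (int x = x0; x <= x0 + 1; ++x) {
        for (int y = y0; y <= y0 + 1; ++y) {
            if (x < 0 || x >= nx || y < 0 || y >= ny) continue;
            double w = phi(marker[0] - x) * phi(marker[1] - y);
            result[0] += w * u[x*ny + y][0];
            result[1] += w * u[x*ny + y][1];
        }
    }
    return result;
}
// Spreading the membrane forces back to the lattice uses the same kernel in
// the opposite direction, so the whole coupling stays local to the support.
```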

Thanks again, Jonas

For parallel I/O, do you mean that each node outputs its own data? What about the average values?

To @Timm,

I just wonder why you use the FEM to model the suspended particles; DEM may be a better choice to model the particles.

Hi ycwang,

Thanks for your hint.
For the physical system I consider, FEM is quite sufficient, and its accuracy is already better than that of the LBM (i.e., the hydrodynamic errors are dominant). Additionally, in my case, the FEM is extremely efficient and simple to code. Right now, I see no advantage in using DEM.