I’m new to LBM and Palabos. When I changed parameters in the showcase “MovingWall”, I found out that if

1. the moving speed of the wall increases, or
2. the viscosity in lattice units “nuLB” is large (nuLB = nu*dt/dx^2),

the computed average energy also becomes too large, so that it outputs no values any more but “NaN”. The velocity tensor field gets no values either.

However, when simulating nanoscale flows, the small lattice spacing “dx”, which implies a high “nuLB” value, is unavoidable, unless “dt” is reduced to an incredibly small value, which makes the simulation too expensive.
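To make the parameter coupling concrete, here is a small sketch of the lattice-unit conversions involved (the formula nuLB = nu*dt/dx^2 is from the post above; the lattice-velocity formula and all variable names are illustrative assumptions, not Palabos API):

```python
# Illustrative conversion from physical to lattice units.
def to_lattice_units(nu, u_wall, dx, dt):
    """nu: kinematic viscosity [m^2/s], u_wall: wall speed [m/s],
       dx: lattice spacing [m], dt: time step [s]."""
    nu_lb = nu * dt / dx**2   # lattice viscosity (formula from the post)
    u_lb = u_wall * dt / dx   # lattice velocity
    tau = 3.0 * nu_lb + 0.5   # BGK relaxation time
    return nu_lb, u_lb, tau

# Halving dx at fixed dt quadruples nu_lb, since dx enters squared:
nu_lb_coarse, _, _ = to_lattice_units(1e-6, 0.01, 1e-8, 1e-12)
nu_lb_fine, _, _ = to_lattice_units(1e-6, 0.01, 5e-9, 1e-12)
```

This is why shrinking dx without also shrinking dt drives nuLB up.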

My questions are:

What could be the deeper reason for the failed computation of the velocities? The Reynolds number stays the same with smaller “dx”.

Re is kept below 0.1, since I simulate only a nanoscale domain.
I know that a large Re can lead to approximation errors, but that does not seem to be the case here.

This set of parameters can be computed.
But as soon as I increase param.nu to, for example, 1e-6, the velocity result can no longer be computed correctly, and the energy result turns out to be NaN.
It can be solved by decreasing the time step (a smaller param.dt), but that is obviously not a computationally efficient solution…
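The cost of that workaround can be made explicit: from nuLB = nu*dt/dx^2, keeping nuLB fixed while refining dx forces dt to shrink quadratically. A minimal sketch (function name and sample values are illustrative):

```python
def dt_for_fixed_nu_lb(nu, dx, nu_lb_target):
    # Solve nu_lb = nu * dt / dx**2 for dt.
    return nu_lb_target * dx**2 / nu

# Halving dx quarters the admissible dt (on top of the extra
# lattice nodes), which is why this route gets expensive fast.
dt_coarse = dt_for_fixed_nu_lb(1e-6, 1e-9, 0.1)
dt_fine = dt_for_fixed_nu_lb(1e-6, 5e-10, 0.1)
```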

In principle, yes. But my question isn’t “why don’t the simulated results match the physical ones?”, but rather “why can’t a certain result be computed at all?”. I don’t see the reason for such errors rooted in the algorithm, purely numerically. I suppose it doesn’t matter which Re it should correspond to physically.

A very small nu means tau ≈ 0.5 (or omega ≈ 2). This is a limit for the numerical stability of the LBM (another is the lattice velocity reaching u >= 0.1). There are models in Palabos better suited for higher numerical stability, such as CompleteRegularizedBGKdynamics, or the Smagorinsky models.
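The two limits quoted here can be checked before launching a run. A minimal sketch (the thresholds tau → 0.5 and u >= 0.1 come from this post; the safety margin and function name are assumptions):

```python
def bgk_stability_check(nu_lb, u_lb, tau_margin=0.01, u_max=0.1):
    """Flag parameter sets likely to blow up with plain BGK."""
    tau = 3.0 * nu_lb + 0.5   # BGK relaxation time
    omega = 1.0 / tau         # relaxation frequency
    stable = (tau - 0.5) > tau_margin and u_lb < u_max
    return tau, omega, stable
```

For instance, nu_lb = 1e-4 gives tau = 0.5003, deep inside the tau ≈ 0.5 regime this post warns about, so the check flags it even at a modest lattice velocity.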

Thanks, I will try some other models.
One limit is that the velocity can’t be too large, yes.
However, the other error I came across is the opposite of your description: it seems to be due to a large nu. Interesting…