# Why can't (shouldn't) the lattice velocity be 1?

Hello,
I have a problem when I set u_lb = 0.1, as recommended in many posts on this forum.
I simulate flow in a system of connected cylinders (a system of blood vessels), and the whole (cubic) domain has size 25 cm × 25 cm × 25 cm. (The vessels usually have the shape of long cylinders.)

1. So I decided to take N = 2000 points in each direction:
dx_p = 25 / N cm = 0.0125 cm

2. I have a given velocity in the first cylinder (the biggest vessel), u_p = 69.32 cm/s, and the viscosity of blood, nu_p = 0.0381 cm^2/s.
I fixed u_lb = 0.1, so from
u_lb = u_p * (dt_p / dx_p) I obtain dt_p = 1.80305E-05 s

3. Next, using
nu_lb = nu_p * (dt_p / dx_p^2), I obtain nu_lb = 0.00439655

4. And from the equation
nu_lb = (1/3) * c^2 * dt_lb * (tau - 1/2),
where c = dx_lb / dt_lb = 1 / 1 = 1, I obtain tau.
AND the simulation crashes: after some iterations I get infinite values in the distribution functions.
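The whole conversion in steps 1-4 can be sketched like this (a small Python check using the numbers above; nothing beyond standard LBM unit conversion is assumed):

```python
import math

# Unit conversion from physical (cm, s) to lattice units, with the
# values from this thread (u_p, nu_p, N are my numbers, not constants).
N = 2000            # lattice nodes per direction
L_p = 25.0          # domain size [cm]
u_p = 69.32         # inlet velocity [cm/s]
nu_p = 0.0381       # kinematic viscosity of blood [cm^2/s]
u_lb = 0.1          # chosen lattice velocity

dx_p = L_p / N                   # physical grid spacing [cm]
dt_p = u_lb * dx_p / u_p         # physical time step [s]
nu_lb = nu_p * dt_p / dx_p**2    # lattice viscosity
tau = 3.0 * nu_lb + 0.5          # relaxation time (cs^2 = 1/3, dt_lb = 1)

print(dx_p, dt_p, nu_lb, tau)
```

With u_lb = 0.1 this gives tau ≈ 0.513, i.e. very close to 1/2, which is exactly the regime discussed below.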

But when I use, for example, u_lb = 1, tau equals 0.631 and after 40000 iterations I obtain convergent results.

So why should I use u_lb = 0.1 and not u_lb = 1?

Thanks for any help/suggestions.
Krzysztof

On which machine do you run the simulations? 2000x2000x2000 grid nodes? Is it a BlueGene?

I forgot to say: You may have convergence, but for u_LB = 1, the LBM approximation is simply wrong. Due to the Taylor expansion of the equilibrium, LBM is only solving the Navier-Stokes equations asymptotically if the velocity is small. In other words: Even if you get results, they are not necessarily physical.
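For reference, this is the truncated (second-order) equilibrium I mean; the O(u^3) remainder is what makes large u_LB invalid:

```latex
f_i^{\mathrm{eq}} = w_i \,\rho \left( 1
  + \frac{\mathbf{e}_i \cdot \mathbf{u}}{c_s^2}
  + \frac{(\mathbf{e}_i \cdot \mathbf{u})^2}{2 c_s^4}
  - \frac{\mathbf{u} \cdot \mathbf{u}}{2 c_s^2} \right),
\qquad \text{truncation error } \mathcal{O}(u^3)
```

The neglected terms grow with u, so the Navier-Stokes limit only holds for small lattice velocities (small Mach numbers).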

1. I store the fluid/wall LB points in lists/vectors, not all points of the 2000×2000×2000 cube, so the number of points is not so big, about 23 000 000 (23M). Additionally, I have implemented MPI parallelization on a cluster machine.
2. I drive my flow by pressure difference.
3. I have attached images (velocity magnitudes) which show a simple situation, but in reality such a system has, for example, 200 vessels, sometimes more, e.g. 1000.
image1
image2
4. And I would like to note that I don't need very precise results, because earlier we were using a method that was even simpler.
On the other hand, simulating blood flow in such a model by solving the Navier-Stokes equations directly would be computationally impossible.

Still, a lattice velocity u_LB of 1 is too large. You cannot even be sure that the physics is correct. There must be another reason why your simulation crashes. What is the numerical value of the pressure gradient? How do you initialize your flow? Where exactly in your geometry do the populations diverge? Near a boundary?

So, I simulate flow in 120 vessels (example visualisation for 7 vessels: image1); the input pressure equals 12650 Pa and the output pressure equals 7660 Pa.
_p - physical unit
_lb - lattice unit
p - pressure

So, based on the equation rho_lb = 1 + (1/cs^2) * (p_p / rho_p) * (dt_p^2 / dx_p^2)
and taking into account that u_lb = 0.1:
rho_Input_lb = 1.067064196
rho_Output_lb = 1.000005657

and I use these values (rho_Input_lb, rho_Output_lb) at the input/output nodes to impose a constant-pressure boundary condition via the Guo extrapolation method.

At the beginning of the simulation, I initialize the density at the fluid nodes with rho_lb = (rho_Input_lb + rho_Output_lb) / 2.
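In code, the conversion looks roughly like this (a sketch assuming the standard nondimensionalization p_lb = p_p * dt_p^2 / (rho_p * dx_p^2) with the outlet as reference pressure; rho_p = 1.06 g/cm^3 for blood is an assumed value, so the numbers need not match mine exactly):

```python
# Pressure -> lattice-density conversion, sketched under the assumptions
# stated above (assumed blood density and outlet-referenced gauge pressure).
rho_p = 1.06            # blood density [g/cm^3] (assumed)
dx_p = 0.0125           # grid spacing [cm]
dt_p = 1.80305e-5       # time step [s], from u_lb = 0.1
cs2 = 1.0 / 3.0         # lattice speed of sound squared

p_in, p_out = 12650.0, 7660.0            # [Pa]
dp_cgs = (p_in - p_out) * 10.0           # 1 Pa = 10 g/(cm s^2)
p_lb = dp_cgs * dt_p**2 / (rho_p * dx_p**2)
rho_in_lb = 1.0 + p_lb / cs2             # rho_lb = 1 + p_lb / cs^2
rho_out_lb = 1.0                         # reference density at the outlet
```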

Based on the above parameters, tau = 0.511,
and the simulation crashes: after some iterations I get infinite values in the distribution functions (I have not yet found where), with M = 0.18 (Mach number).
But when I change to u_lb = 0.4, I obviously get a bigger difference in rho:
rho_Input_lb = 2.073027133
rho_Output_lb = 1.000175843
but then I am able to obtain convergent results, with M = 0.69 (Mach number).
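As a cross-check of those Mach numbers (a small sketch, using cs = 1/sqrt(3) in lattice units):

```python
import math

# Ma = u_lb / cs, with cs = 1/sqrt(3) for the D3Q19 lattice.
cs = 1.0 / math.sqrt(3.0)
ma1 = 0.1 / cs   # u_lb = 0.1 -> Ma ~ 0.17
ma2 = 0.4 / cs   # u_lb = 0.4 -> Ma ~ 0.69
print(ma1, ma2)
```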

Obviously, I have found in many papers the condition M << 1 (in order to remain comparable to the Navier-Stokes solution, etc.).
But it is strange that I can't run the simulation when this condition is satisfied, yet without it I can.
Obviously I could take N = 10000, but in that case even the cluster machine would not let me run the experiments :)

Thanks for help

I think that there is a serious bug in your code. If it works for u_LB = 1, then it must also work for smaller values of u_LB. There is clearly something wrong. You cannot use u_LB = 1 in your simulations. Every reviewer would reject your article at once.

And what do you think about a tau which is very close to 0.5, for example 0.511 (as in my case)?

In many posts I have found the advice that it is better to use a tau which is not so close to 0.5 because of convergence problems. Is that true?

tau = 0.511 could be a problem, but I think it is still safe. What type of lattice do you use? D3Q19? How do you implement your boundary conditions? Standard bounceback?

Yes, D3Q19; at the vessel walls the half-way bounce-back rule, and at the input/output nodes a constant-pressure boundary condition based on the Guo extrapolation scheme.
But I think this simple bounce-back rule is enough, because during vessel discretization I did not take into account where the analytical vessel wall cuts the 3D LBM voxel.
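For clarity, the half-way bounce-back rule at the wall links is essentially the following (a minimal illustrative sketch, not my actual MPI code; the D3Q19 velocity ordering here is one common choice and is an assumption):

```python
import numpy as np

# D3Q19 velocity set: rest + 6 axis + 12 diagonal directions.
e = np.array([[0, 0, 0]] +
             [[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]] +
             [[1, 1, 0], [-1, -1, 0], [1, -1, 0], [-1, 1, 0],
              [1, 0, 1], [-1, 0, -1], [1, 0, -1], [-1, 0, 1],
              [0, 1, 1], [0, -1, -1], [0, 1, -1], [0, -1, 1]])

# Opposite-direction table: opp[i] is the index of -e[i].
opp = np.array([int(np.where((e == -ei).all(axis=1))[0][0]) for ei in e])

def bounce_back(f_post, wall_dirs):
    """Half-way bounce-back at one fluid node next to a wall.

    f_post    : post-collision populations at the node (length 19)
    wall_dirs : direction indices that point into the solid wall
    """
    f_new = f_post.copy()
    for i in wall_dirs:
        f_new[opp[i]] = f_post[i]   # the population bounces straight back
    return f_new
```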

As you said, I should drop the idea of u_lb = 1, but should I also drop u_lb = 0.25 or u_lb = 0.3?

Obviously, the Mach number is then bigger than 0.15, precisely about 0.4-0.6.

You really have to find out whether the results for large Mach numbers are still acceptable. If you are interested in steady-state solutions, I could imagine that high Mach numbers are less problematic than for unsteady flows. One cannot answer this question in general.

I am wondering if we can have a lattice velocity greater than the speed of sound in LBM, which is typically 1/sqrt(3) ~ 0.57735 for the D2Q9 lattice?

In principle yes, I believe, but the LBM is then no longer a valid Navier-Stokes solver. Please always keep in mind that the expanded equilibrium is only reasonable for small velocities.
What do you want to do?

I am trying to find the connection between my problem and your sentence "I am wondering if we can have a lattice velocity greater than the speed of sound in LBM".
As far as I know (obviously I can be wrong, and thanks if you prove it):
cs^2 = c^2 / 3
where c = dx_lb / dt_lb = 1 / 1 = 1 (as I wrote in my first post on this subject).

So where is the connection with u_lb? Thanks once again if you know :)

As you suggested, I have tried to find a code error (day, night, day, etc. :)).
But I still have problems with the simulation when tau is close to 0.5, for example 0.51, and especially 0.505.

Question:
I use the Guo extrapolation boundary condition, and in his publication he tested it with the incompressible D2Q9 model.
Do you think that using this BC without the "incompressible" model in 3D (D3Q19), i.e. without the incompressible improvement, could be the cause of my problems?

Thanks a lot.

This may be the explanation, but I cannot tell for sure. You should test other boundary conditions to find out.