Increasing the lattice size.

In a simulation with two components I noticed that when I double the lattice size the results are totally different.
I made nx = 2*nx_old and the same for ny.

What would I need to change for an analysis with twice the lattice size, in order to simulate the same fluid and obtain similar results? I would also appreciate an explanation.


Hello Raul,

You increase the lattice size, but you want to keep the same physics. To do this, consider the following.
The Reynolds number Re = U D / nu must stay the same (U: typical velocity, D: typical distance, nu: kinematic viscosity). In physical units everything is unchanged (Re, U, D, nu), since your physics is the same. In lattice units, however, you change the quantities according to these rules:

  • keep Re
  • keep nu and tau
  • increase D by a factor (in your case 2)
  • decrease U by the same factor

This means that by increasing your lattice resolution, you have to decrease the lattice velocity, i.e. the Mach number. Note: the lattice speed of sound is the same as before, c[sub]s[/sub][sup]2[/sup] = 1 / 3.
For example, say your maximum velocity in lattice units was 0.01 before and your numerical grid size 40[sup]3[/sup] in 3D. After rescaling by a factor of 2, your maximum velocity is 0.005 and the grid 80[sup]3[/sup]. The numerical viscosity stays the same, say 0.1, such that tau = 0.8.
The reason is that the second-order convergence of the LBM can only be realized if you simultaneously increase the lattice resolution and decrease the Mach number in the system. You can find many references; for simplicity, have a look at our paper here. Just read sections II F and IV B and have a closer look at the cited references.
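The rules above can be checked with a few lines of Python. This is just an illustrative sketch using the numbers from this post (nu = 0.1, velocity 0.01, grid 40 per side); the helper function is not from any LBM library:

```python
# A sketch of the rules above: keep Re and tau (hence nu) fixed,
# double the resolution, halve the lattice velocity.

def rescale(u_lb, n, factor):
    """Refine the lattice by `factor`, keeping the lattice viscosity fixed."""
    return u_lb / factor, n * factor

nu_lb = 0.1                    # lattice viscosity, so tau = 3*nu_lb + 0.5 = 0.8
u_coarse, n_coarse = 0.01, 40  # maximum lattice velocity, nodes per side
u_fine, n_fine = rescale(u_coarse, n_coarse, 2)

re_coarse = u_coarse * n_coarse / nu_lb
re_fine = u_fine * n_fine / nu_lb
print(u_fine, n_fine)       # 0.005 80
print(re_coarse, re_fine)   # both 4.0 -> the Reynolds number is unchanged
```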


Hi Timm,
Thanks for the answer,
Your explanation gave me some useful clues.
Actually, what I am simulating is a multiphase problem (say water and oil) under gravity. Periodic boundary conditions are used. The goal of the simulation is to see the oil rise due to its lower density. The initial velocity of both components is zero.
Well, so far I think I would need to properly rescale the gravity value and maybe the values of the interaction forces between the fluids.


Exactly, in more complex systems (such as multiphase) you have additional constraints. Changing the lattice size also leads to a redefinition of the body force. Just an example here:
Assume you want to run the same simulation (same physics), but increase the spatial resolution by a factor of q > 1. We have seen that tau is the same, hence nu (kinematic viscosity) is also the same.
Due to nu[sub]p[/sub] = nu[sub]l[/sub] * dx[sup]2[/sup] / dt (p means physical units, l lattice units; dx and dt are the physical lattice constant and time step, say 0.01 mm and 0.005 ms, respectively), you know that dt changes by a factor of q[sup]-2[/sup].
The acceleration due to a force is a[sub]p[/sub] = a[sub]l[/sub] * dx / dt[sup]2[/sup]. Since the physical acceleration a[sub]p[/sub] is fixed, you can compute the lattice acceleration, and it scales with q[sup]-3[/sup] (if I did not make any mistakes). Increasing the lattice resolution by a factor of q thus multiplies the lattice acceleration by q[sup]-3[/sup].
Of course you can do the same for any other quantity with a unit made of kg, m and s.
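As a numerical sketch of this bookkeeping (the values of a_p, dx and dt below are made up; only the scaling matters):

```python
# Sketch of the force rescaling under diffusive scaling (dx -> dx/q,
# dt -> dt/q**2) with the physical acceleration a_p held fixed.

def lattice_acceleration(a_p, dx, dt):
    """Invert a_p = a_l * dx / dt**2 to get the lattice acceleration a_l."""
    return a_p * dt**2 / dx

a_p = 9.81           # physical acceleration, m/s^2
dx, dt = 1e-4, 1e-5  # physical lattice constant (m) and time step (s)
q = 2                # refinement factor

a_coarse = lattice_acceleration(a_p, dx, dt)
a_fine = lattice_acceleration(a_p, dx / q, dt / q**2)
print(a_fine / a_coarse)   # ~0.125, i.e. q**-3
```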


Hi Timm,

Thanks for the answer.
What problems would arise if I switched an LBM program to work with physical units (instead of lattice units)?
I think I would need to use c = dx/dt in the equilibrium distribution function and in the velocity vectors, too.

What’s the main reason for the conventional use of lattice units? Is it just to reduce the amount of calculation?


I think there are two reasons:
First, the computer cannot handle physical units anyway, so you have to work with pure numbers.
Second, it is more natural to work with simple numbers. This way, the LBM always has the same algorithm: the lattice velocities point to the next neighbor, at say (10, 2, 4) and not (133.23424, 23.00324, 9.23456). Physical coordinates would make the computations much more confusing.

Hello Timm,

I’m doing some convergence studies with varying grid sizes here, too…

Until now, I take as input all the physical properties… that is, the physical viscosity nu_p, the reference length l_p and the reference velocity u_p. Thus I can compute the Reynolds number Re = u_p * l_p / nu_p.

Now, I define a resolution Nx and choose the lattice velocity u_lb (e.g. u_lb = 0.1). Given these values I also know l_lb and can compute the lattice viscosity as nu_lb = l_lb * u_lb / Re.
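In numbers, the setup looks like this (the physical input values below are invented for illustration; only the procedure is from my post):

```python
# From physical inputs to lattice viscosity and BGK relaxation time.

nu_p = 1e-6   # physical kinematic viscosity, m^2/s (assumed value)
l_p = 0.1     # reference length, m (assumed value)
u_p = 0.01    # reference velocity, m/s (assumed value)
Re = u_p * l_p / nu_p       # ~1000

Nx = 100                    # resolution, so l_lb = Nx
u_lb = 0.1                  # chosen lattice velocity
nu_lb = Nx * u_lb / Re      # ~0.01
tau = 3.0 * nu_lb + 0.5     # BGK relaxation time, ~0.53
print(Re, nu_lb, tau)
```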

Well… I hope this is correct so far.

Let’s say I want to double the grid resolution now… Then, following your post above, I think I’d have to scale the lattice velocity, too. Thus, for the fine grid, I’d compute a new lattice velocity u_lb’ = 1/2 * u_lb.

Is this correct?

I’m kind of confused by J. Latt’s “Choice of units in lattice Boltzmann simulations” here… There he states that the following relation should hold: dt ~ dx^2. I suppose this would imply u_lb ~ dx^2 in our case… and if that were true, why don’t we rescale u_lb by a factor of 4?

Further, a reasonable value of u_lb has to be chosen for a given grid resolution. How far can you raise u_lb, e.g. for a grid resolution of Nx = 100? In Latt’s paper he states that u_lb = 0.02 is a reasonable value here… would 0.1 still be reasonable?

Oh… well…

I think my assumption of u_lb ~ dx^2 is wrong…

I had considered the relation u_lb ~ dt / dx, but then only kept u_lb ~ dt, and with dt ~ dx^2 concluded u_lb ~ dx^2.

But this is actually not true: u_lb ~ dt / dx, so the factor 1/dx must not be dropped.

Thus, if I double the grid resolution (i.e. dx’ = dx/2), I truly have to scale dt by the factor 4 (i.e. dt’ = 1/4 dt). Using u_lb ~ dt / dx, I get u_lb’ ~ dt’/dx’ = (1/4 dt) / (1/2 dx), which yields u_lb’ = 1/2 u_lb.
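The same argument in code, as a quick sanity check (the values of dx, dt, u_p and nu_p are made up; only the ratios matter):

```python
# Refine by a factor of 2 with dt ~ dx**2 (diffusive scaling) and see how
# the lattice velocity and lattice viscosity change.

def to_lattice(u_p, nu_p, dx, dt):
    """Lattice velocity and viscosity for conversion factors dx, dt."""
    return u_p * dt / dx, nu_p * dt / dx**2

dx, dt = 1e-3, 2e-4     # coarse-grid conversion factors (m, s)
u_p, nu_p = 0.05, 1e-6  # fixed physical velocity and viscosity

u_coarse, nu_coarse = to_lattice(u_p, nu_p, dx, dt)
u_fine, nu_fine = to_lattice(u_p, nu_p, dx / 2, dt / 4)

print(u_fine / u_coarse)    # ~0.5 -> u_lb' = 1/2 u_lb
print(nu_fine / nu_coarse)  # ~1.0 -> nu_lb unchanged
```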

I hope this is correct now and anybody may answer my second question :slight_smile:

Yes, your velocity scaling is correct IF you choose the diffusive scaling, i.e., dt \propto dx^2. This is the most reasonable thing to do in LBM. A side effect is that the lattice viscosity is constant, nu’ = nu.

Concerning your second question: You have to test which velocity is okay. My experience is that for applications 0.1 is not a problem, as long as you are not interested in high accuracy. If I remember correctly, in my last simulations, the velocity reached 0.2 or 0.3 at some distinct lattice nodes. Still, the results were good.

Hi Timm,

I’ve got another question… it’s about the dt’s and dx’s in the lattice Boltzmann method.

Within the derivation of the lattice Boltzmann method it is often stated that dx and dt are set to unity, so the lattice speed c is unity, too. I can’t really see how these values relate to the dx’s and dt’s used above. Or are they not related at all?

This is one of the most common questions around.
The explanation is straightforward: For each physical quantity, there is also a lattice version of it. The reason is that the computer doesn’t understand the concept of units (e.g., meter, second). So it is your job to make sure that each quantity has a well defined lattice value. In order to do this, you need conversion factors which carry the unit and, additionally, a scale. For example, imagine your characteristic length is 25cm, but on the lattice this is 5 lattice nodes. Then the conversion factor will be 5cm. Clear?
This you do for each physical quantity of interest. For simplicity you set the lattice constant in lattice units to unity and absorb the factor in the conversion factor. The same for the time step. That’s all.
What you should do whenever you think of a new simulation: Write a table with four columns. 1) name of quantity 2) typical physical value 3) typical value on the lattice 4) conversion factor.
Now, the typical relation $\delta t \propto \delta x^2$ addresses the conversion factors, e.g.:
simulation 1: \delta t = 1e-5 s; \delta x = 1e-6 m
simulation 2: \delta t = 2.5e-6 s; \delta x = 5e-7 m (refined by factor 2)
Is it clear now?
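If it helps, the four-column table can be sketched as a tiny script; the rows reuse the example numbers from this post (25 cm over 5 nodes, then the two refinement levels of dx and dt):

```python
# Each row: (quantity, physical value, lattice value, conversion factor),
# with the invariant: physical value = conversion factor * lattice value.

rows = [
    ("length",    0.25,   5, 0.25 / 5),  # 25 cm over 5 nodes -> 5 cm per node
    ("dx (sim1)", 1e-6,   1, 1e-6),      # lattice constant is 1 in lattice units
    ("dt (sim1)", 1e-5,   1, 1e-5),
    ("dx (sim2)", 5e-7,   1, 5e-7),      # refined by a factor of 2
    ("dt (sim2)", 2.5e-6, 1, 2.5e-6),    # dt ~ dx^2: refined by a factor of 4
]
for name, phys, lat, fac in rows:
    assert abs(phys - fac * lat) < 1e-15  # physical = factor * lattice
    print(f"{name:10s} {phys:10.3e} {lat:6g} {fac:10.3e}")
```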

Thank you Timm!

I’ll try to put it in my own words now… just to make sure I got it… and I’ll do a little example to make some things clearer.

Let’s say I’m doing a lid driven cavity flow. The cavity is of size 1m x 1m x 1m and the velocity u_p at the top is 2 m/s. I’ll choose a grid of size 100 x 100 x 100.

Now, for convenience, dx_lb and dt_lb are set to unity in lattice units (dx_lb = 1; dt_lb = 1), whereas in physical units dx_p = 1/100 m.
Thus the conversion factor would be fac_x = 1/100 m, and I can calculate any quantity x_p in [m] via x_p = fac_x * x_lb. For example, if there is an obstacle in the cavity covering 10 lattice nodes, its size is x_p = 1/100 m * 10 = 0.1 m in physical units.

Well… easy so far :slight_smile:
Though, I find it more difficult with the time scale… here we go.

In my simulations, rather than fixing dt to a given value, I compute dt from the lattice velocity u_lb (say u_lb = 0.1). Given all the properties above, this yields dt_p = dx_p * (u_lb / u_p) = 1/100 m * 0.1 / (2 m/s) = 0.0005 s. If this conversion is correct, I think the time scale factor is fac_t = dt_p, since dt_lb = 1 and dt_p = fac_t * dt_lb.

Or an alternative way: by choosing u_lb I also choose a scaling factor for the velocity. This would be fac_u = u_p/u_lb = 2/0.1 m/s = 20 m/s. Given this factor I should be able to compute the time scale factor, right?

Since u = x/t ==> t = x/u ==> fac_t = fac_x / fac_u = 0.01m/20m/s = 0.0005s

Fine! :slight_smile:
Now, if I perform one update step in the simulation, I advance the actual flow time by 0.0005 s.

Generally, I think one can state that when choosing dt_lb = 1 and dx_lb = 1, their counterparts in physical units, dt_p and dx_p, are exactly the scale factors.
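Putting the whole cavity example into a few lines (all numbers are the ones used above):

```python
# Lid-driven cavity: compute the conversion factors from the chosen
# physical and lattice quantities.

u_p = 2.0    # lid velocity, m/s
L_p = 1.0    # cavity edge length, m
Nx = 100     # lattice nodes per edge
u_lb = 0.1   # chosen lattice velocity

fac_x = L_p / Nx       # 0.01 m per node
fac_u = u_p / u_lb     # 20.0 m/s
fac_t = fac_x / fac_u  # 0.0005 s per time step

print(fac_x, fac_u, fac_t)  # 0.01 20.0 0.0005

obstacle_nodes = 10
print(fac_x * obstacle_nodes)  # 0.1 -> the obstacle is 0.1 m long
```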

But I’d like to discuss some further stuff here regarding the lattice velocity and the lattice speed of sound. The lattice speed is defined as c = dx/dt. Is there an intuitive explanation for this? What does the lattice speed actually represent?

Your explanation seems to be correct. :slight_smile:
I am not entirely sure how to interpret the speed of sound in the lattice. But I would be glad if somebody would explain it.

Hi Timm

How important is the lattice speed in the OpenLB converter if you are starting the flow from zero, especially for porous media where you do not know the final flow rate for a given pressure drop?
I have used OpenLB a lot for flows with zero initial conditions, and in all of them I just pick arbitrary combinations of Re and U such that they give me a specified tau, i.e. viscosity. I am not interested in the OpenLB Re and U as such; my final simulation gives me velocities from which I can calculate the actual Re of my flow.
Are my assumptions correct?


If you do not know U or Re, then you have to try and play around. I have never simulated porous media flow, but you can always compute an upper bound on the velocity: due to the obstacles, the final flow velocity will always be smaller than that of a Poiseuille flow with the same pressure gradient, right? The question is by how much the final velocity is reduced. What about Darcy’s law? If you could guess the order of magnitude of the permeability, it would help.
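A rough sketch of such an estimate. All numbers below are invented; the formulas are the standard ones for plane-channel Poiseuille flow (u_max = G H^2 / (8 mu)) and Darcy’s law (u = k G / mu):

```python
# A-priori velocity estimates: Poiseuille flow as an upper bound,
# Darcy's law as an estimate once the permeability k is guessed.

rho = 1000.0     # fluid density, kg/m^3 (assumed: water)
nu = 1e-6        # kinematic viscosity, m^2/s
mu = rho * nu    # dynamic viscosity, Pa*s
G = 100.0        # pressure gradient, Pa/m (assumed)
H = 0.01         # channel height / sample thickness, m (assumed)
k = 1e-10        # guessed permeability, m^2

u_poiseuille_max = G * H**2 / (8.0 * mu)  # upper bound on pore velocity
u_darcy = k * G / mu                      # superficial (Darcy) velocity

print(u_poiseuille_max)  # ~1.25 m/s
print(u_darcy)           # ~1e-5 m/s
```

The Darcy estimate, not the Poiseuille bound, is what you would then use to pick u_lb in the converter.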