Relaxation time

Good day, dear all,

I read a book stating that the rate of convergence depends on the value of the relaxation time. From my understanding, the higher the relaxation time, the smaller the fraction of the non-equilibrium populations that relaxes to equilibrium at each collision, so more time steps should be required to reach steady state. Is my understanding correct?
I then tested this on a lid-driven cavity flow simulation. I fixed the Reynolds number and the mesh size, and adjusted the top-wall velocity (which is equivalent to adjusting the viscosity) in order to vary the relaxation time. But what I got is the opposite: the higher the relaxation time, the fewer iterations the simulation took to converge. Can anybody help me?

Hi,

It is always tricky to get a physical intuition for the viscosity-related behavior of a fluid (is a viscous fluid “fast” or “slow”?). As the Reynolds number is fixed in your example, I would interpret the rate of convergence as an effect of time scales rather than fluid viscosity. The larger the fluid velocity, as measured in lattice units, the faster you converge toward a stationary state (and that’s what you want to converge to, right?). This is easy to understand: a large velocity in lattice units means the fluid travels a large distance per iteration step. And the quicker you move, the faster you advance…

In your example with fixed Reynolds number and fixed resolution, increasing tau means increasing the viscosity in lattice units, which in turn means that you increase the velocity in lattice units.

Yet another way of looking at this, which I prefer, is to explicitly state the resolution of your time axis. After all, you do say explicitly what the resolution of your spatial grid is, right? The time resolution dt is the amount of physical time you advance during one iteration step. Just as the space resolution dx = L/N is the physical distance between two neighboring lattice sites. The two parameters dt and dx are intimately connected with the velocity:

u_{phys} = (dx/dt) u_{LB}

It makes sense to consider “dimensionless variables”, i.e. a physical system in which all reference variables are unity: L=u_{phys}=1 (if you are not happy with this, take any system of units: the conclusions are the same). From this, you get

dt = dx u_{LB}

There we have it: when you increase the velocity in lattice units, you increase the time step (you make time resolution coarser). By doing this, you obviously get a simulation which progresses faster. The drawback is that you under-resolve the time axis and lose accuracy for the time-dependence of your problem (but not necessarily on the value of the stationary state).
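
To make this bookkeeping concrete, here is a minimal Python sketch of the conversion (the values Re = 100 and N = 100 are purely illustrative, not taken from the original post):

```python
# Minimal sketch: lattice-unit parameters for fixed Re and fixed resolution,
# in the dimensionless system L = u_phys = 1 used above.
def lattice_parameters(Re, N, tau):
    cs2 = 1.0 / 3.0               # squared lattice speed of sound
    nu_lb = cs2 * (tau - 0.5)     # viscosity in lattice units
    u_lb = Re * nu_lb / N         # from Re = u_lb * N / nu_lb
    dx = 1.0 / N                  # dx = L / N with L = 1
    dt = dx * u_lb                # dt = dx * u_lb (since u_phys = 1)
    return u_lb, dt

for tau in (0.6, 0.8, 1.0, 1.5):
    u_lb, dt = lattice_parameters(Re=100, N=100, tau=tau)
    print(f"tau = {tau:.1f}  ->  u_lb = {u_lb:.4f}, dt = {dt:.2e}")
```

Increasing tau increases u_lb and therefore coarsens dt, exactly as described above.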

Hi,
well well …

I really like this question … and I’d really, really love to hear the answer of Mr. Latt.
I always have the same kind of problems whenever I try to think about the LB method in terms of its micro/mesoscopic nature, so I won’t address the first part of your message … actually, your question. Sorry :wink:
On the other hand, I personally agree with your numerical experiment, and I want to share with you guys what I think. Of course, I would be pleased to be taught and then flogged by Mr. Latt and Orestis.

. Fixing the Reynolds number (characteristic length L * characteristic velocity U / viscosity nu), you fix the hydrodynamics;
. You also fix your geometry (and hence the characteristic length L);

… I would then write Re/L = U/nu (eq. 1).

Then I write everything in lattice units:
L  ->  Nx (the number of lattice nodes, chosen constant)
U  ->  Nx/T (where T is the number of iterations)
nu ->  cs2*(tau - 0.5) (where cs2 is the squared speed of sound and tau the relaxation time)

I then rewrite eq. 1 in lattice units:

Re/Nx = (Nx/T) / (cs2*(tau - 0.5))  ->  Re/Nx^2 = 1 / (T*cs2*(tau - 0.5))

Then, if I choose a bigger and bigger tau, in order to keep the ratio Re/Nx^2 constant, I have to decrease the value of T (the number of iterations gets smaller -----> and then we “speed up” the convergence … at least this is what I think … but I tend to say a lot of bullshit … so Mr. Latt, please help).
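
Just to check the numbers (with purely illustrative values of Re and Nx, not from any real run), I can solve the relation above for T:

```python
# T = Nx^2 / (Re * cs2 * (tau - 0.5)): the iteration count per characteristic
# time shrinks as tau grows, with Re and Nx held fixed.
Re, Nx, cs2 = 100, 100, 1.0 / 3.0
for tau in (0.6, 1.0, 1.5, 2.0):
    T = Nx**2 / (Re * cs2 * (tau - 0.5))
    print(f"tau = {tau:.1f}  ->  T = {T:.0f} iterations")
```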
-------------- BUT --------------- (and here you can stop, because I will start talking about things that I really don’t know about … but which puzzle me a lot)

Can we make tau bigger and bigger and bigger, and thereby keep increasing the speed of convergence of our simulation? Well, I would like to answer: “Of course not … and I think that one of the reasons is the Knudsen number (the ratio between the mean free path of our “gas” particles and the characteristic length of our system), which has to be kept small (see Jonas Latt’s thesis, p. 18)”. Now, in some way that is not completely clear to me, the Knudsen number is proportional to the relaxation time, and then … I cannot make tau bigger and bigger and bigger.

Now, if we think that a bigger tau means a bigger viscosity -----> … probably we should state that with the LB method we cannot model extremely viscous fluids, because we would have to accept a high Knudsen number.
And more: from the kinetic theory point of view (or at least something that I think is kinetic theory :wink: ), if I have a high Knudsen number, I’m looking at dilute gases (or microflows … see Jonas’s thesis, p. 18). Wow …
Is there then any “link” … or parallelism … between dilute gases and highly viscous fluids?

My answer to the first question is: “We can model highly viscous fluids thanks to the fact that the real viscosity of the fluid we model (the viscosity in physical units) does not depend only on tau, but also on the deltaX and deltaT chosen for the simulation. Playing with deltaX and deltaT, we can make our fluid more and more viscous while keeping tau small.” (See the sketch below.)
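
To make this concrete, here is a tiny sketch (values purely illustrative) using nu_phys = cs2*(tau - 0.5)*deltaX^2/deltaT, which is how the lattice viscosity converts to physical units:

```python
# Keep tau small and fixed; vary deltaX and deltaT to change the physical
# viscosity nu_phys = nu_lb * deltaX**2 / deltaT.
cs2, tau = 1.0 / 3.0, 0.6
nu_lb = cs2 * (tau - 0.5)
for dX, dT in ((1e-2, 1e-4), (1e-2, 1e-6)):
    nu_phys = nu_lb * dX**2 / dT
    print(f"deltaX = {dX:.0e}, deltaT = {dT:.0e}  ->  nu_phys = {nu_phys:.3g}")
```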
But we all know that increasing tau is the usual way to model a more viscous fluid, so I also want to give my answer to the second question … and please let me know what you think.
I see the parallelism in these terms: both dilute gases and highly viscous fluids need time to let an “instability” travel through the system and return to “equilibrium” …

Voila … I’m ready to be punished…

ciao
Andrea

Ciao Tovarish

Well, it seems that we both answered this post at the same time and came up with the same interpretation, linking the rate of convergence toward the stationary state to the value of the time step.

I think, to answer one of your comments, that there is a very simple technical obstacle which prevents you from increasing tau indefinitely. As said before, under the assumption that the Reynolds number and lattice resolution are fixed, increasing tau implies increasing the velocity u_{LB} in lattice units. This velocity has however an upper bound. If u_{LB} were to be larger than 1, you would be describing a fluid in which information travels faster than one lattice node per iteration step. The numerical model cannot support this, because the dynamics uses nearest-neighbor interaction only. Actually, in order to get a meaningful result, u_{LB} must be smaller than the speed of sound c_s = 1/sqrt(3). And even then, you may need to decrease u_{LB} much more to reduce compressibility effects (as the Mach number is u_{LB}/c_s) and increase the accuracy of the result.
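
As an illustration, a small sanity check along these lines could look as follows (the threshold Ma <= 0.1 and the values of Re and N are illustrative assumptions, not a universal rule):

```python
import math

# For fixed Re and N, u_lb = Re * cs2 * (tau - 0.5) / N. It must stay below
# c_s = 1/sqrt(3), and in practice well below that again, to keep the Mach
# number (and hence compressibility errors) small.
def check_mach(Re, N, tau, ma_max=0.1):
    cs = 1.0 / math.sqrt(3.0)
    u_lb = Re * cs**2 * (tau - 0.5) / N
    ma = u_lb / cs
    verdict = "ok" if ma <= ma_max else "too compressible!"
    print(f"tau = {tau:.2f}: u_lb = {u_lb:.3f}, Ma = {ma:.3f} ({verdict})")

for tau in (0.55, 0.8, 1.2, 2.0):
    check_mach(Re=100, N=100, tau=tau)
```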

But to get back to the question of the original poster, I have the disturbing feeling that none of us gave a really straight answer. Noraz23 mentions the contradiction between intuition (“a high relaxation time means you need a long time before reaching equilibrium”) and experiment (the simulation reaches steady state faster at high relaxation time). As we both pointed out, this argument neglects the fact that u_{LB} is modified simultaneously, and the time scale of the system changes. But it seems to me that there is also another flaw in the argument. The picture suggested by the intuition-based argument implies that the fluid relaxes to equilibrium at every point in space along a straight path. That’s not the case in practice, where the fluid keeps jumping from one non-equilibrium state to another. When tau is smaller than one, the BGK model is an over-relaxation scheme: during collision, the state of the system shoots through equilibrium and ends up at a point in the opposite direction. This should make it clear that equilibrium may be reached faster by using a lower relaxation rate (i.e. a higher relaxation time) but walking a more direct path.
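
This over-relaxation behavior is easy to see on a toy example. In BGK, the non-equilibrium part f - f_eq is multiplied by (1 - 1/tau) at each collision; the following sketch (a single scalar population with a fixed equilibrium, purely for illustration) shows the overshoot for tau < 1:

```python
# BGK collision acting on the non-equilibrium part:
# f_neq <- (1 - 1/tau) * f_neq. For tau < 1 the factor is negative,
# i.e. the state shoots through equilibrium at each step.
def relax(f_neq, tau, steps):
    history = [f_neq]
    for _ in range(steps):
        f_neq *= 1.0 - 1.0 / tau
        history.append(f_neq)
    return history

for tau in (0.6, 1.0, 1.5):
    print(f"tau = {tau}:", ["%+.3f" % v for v in relax(1.0, tau, 4)])
```

For tau = 0.6 the sign alternates (overshoot), for tau = 1 equilibrium is reached in a single collision, and for tau = 1.5 the decay is monotone.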

Thank you guys,

So can I say that the simulation takes the smallest number of iterations to reach steady state at tau equal to one?

No, you cannot draw this conclusion. The above was just a simple argument to motivate why your intuition on tau cannot hold. You should be aware that local equilibrium and steady state are not the same thing. Actually, your system is not at local equilibrium in the steady state, except in the trivial case where the velocity is space-independent. Conversely, you can artificially create an initial condition in which the system is everywhere at equilibrium with the local values of rho and u, but globally off-equilibrium (far from a steady state).
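
For illustration, here is a hedged sketch of such an equilibrium initialization for a single D2Q9 cell (the weights and formula are the standard second-order equilibrium; the values of rho and u are arbitrary):

```python
import numpy as np

# D2Q9 lattice: weights and discrete velocities.
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def equilibrium(rho, u):
    """Second-order equilibrium populations for given local rho and u."""
    cu = c @ u                    # c_i . u for each direction i
    usq = u @ u
    return w * rho * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

# Every cell initialized this way is at *local* equilibrium, even though the
# velocity field as a whole may be far from a steady state.
f = equilibrium(rho=1.0, u=np.array([0.05, 0.0]))
print(f.sum())                    # recovers rho = 1.0
```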

The only valid rule of thumb I can think of is that a larger value of u_{LB} generally means faster convergence toward the steady state, for the reasons explained above.