# Error analysis

Hello,

I have some questions concerning the error analysis in the LBM. To make it short:
Suppose I have a computation-time limit T (say, two days). For a given Reynolds number Re, how should I choose the parameters Ma, tau, and \Delta x in order to achieve the best accuracy, i.e. the smallest L2 error?
I suspect that a general answer to this question is not possible, but maybe someone could point me to a paper or thesis covering it.

Thanks,
Timm

A first remark: if you fix Re, you cannot choose Delta x, Ma, and tau freely. You can fix two of them, and the third is a consequence of that choice (in fact, if you want second-order accuracy there is an additional relation between Delta x and Delta t, so you have no freedom left at all).
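The coupling can be made concrete with the standard BGK relations in lattice units (a sketch; the function name and the specific relations nu = cs^2 (tau - 1/2), Re = u N / nu, Ma = u / cs with cs^2 = 1/3 are my assumptions, not something fixed by the discussion above). Choosing Re, the resolution N = L / Delta x, and Ma then determines tau:

```python
# Sketch of the parameter coupling at fixed Reynolds number
# (assumed standard BGK lattice-unit relations, cs^2 = 1/3).

def tau_from(Re, N, Ma):
    cs = (1.0 / 3.0) ** 0.5
    u_lb = Ma * cs              # lattice velocity from the Mach number
    nu = u_lb * N / Re          # lattice viscosity required by Re
    return nu / cs**2 + 0.5     # BGK relaxation time

print(tau_from(Re=100, N=64, Ma=0.1))   # tau close to 1/2, i.e. low viscosity
```

Note that under diffusive scaling (Ma proportional to Delta x, i.e. Ma proportional to 1/N) the product N * Ma stays constant, so nu and hence tau do not change under refinement; at fixed Ma, tau grows with N.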

I do not have any real idea how to answer your question, but one thing is certain: it is completely problem dependent (as you said). In an LB simulation, the computational cost (time of computation) T for a given number of time steps N on a grid with dimensions [0,nx-1] x [0,ny-1] x [0,nz-1] is

T0 = alpha * N * nx * ny * nz,

where alpha is the time needed to perform one collide-and-stream step on a single node. Now, if you refine the grid by a factor of 2, then in order to avoid compressibility errors (in the incompressible regime) you also have to refine the time step by a factor of 4 (diffusive scaling, Delta t proportional to Delta x^2). Therefore

T* = alpha * (4N) * (2nx) * (2ny) * (2nz) = 32 * alpha * N * nx * ny * nz = 32 T0.

This yields a result that is 4 times more accurate. (In 2D the cost factor is 16 instead of 32.) The “law” is therefore roughly

T(T0, factor) = T0 * factor^5 (in 3D, if I haven’t made a mistake),

error = error0 / factor^2.

Therefore, given an initial setup with error error0 and total simulation time T0, you can evaluate the potential gain for T = Tmax…
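As a back-of-the-envelope sketch of that estimate (function and variable names are illustrative): take T0 and error0 from a cheap pilot run, invert the cost law T = T0 * factor^(d+2) for the refinement factor that exhausts the budget Tmax, and apply the second-order error law:

```python
# Estimate the refinement factor affordable within a time budget Tmax,
# and the resulting error, from a pilot run (T0, err0).
#   cost  grows   as factor^(d+2)  (d+2 = 5 in 3D, 4 in 2D)
#   error shrinks as factor^2      (second-order accuracy)

def best_refinement(T0, err0, Tmax, dim=3):
    exponent = dim + 2
    factor = (Tmax / T0) ** (1.0 / exponent)
    return factor, err0 / factor**2

# e.g. a 1-hour pilot run with 1% error, and a 48-hour budget:
f, err = best_refinement(T0=3600.0, err0=1e-2, Tmax=48 * 3600.0)
print(f, err)
```

With a 48x larger budget in 3D the affordable refinement is only 48^(1/5), about 2.2, which illustrates how steep the factor^5 cost law is.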

This is of course not a satisfactory answer, because you still have to test a wide variety of “initial setups” to optimize your result (I guess there are books on this topic, although I don’t know any)… But it may help you a bit.

Orestis, you are absolutely right with your last comments. The computational time scales with the resolution to the power of 5 (in 3D, and if Ma \propto \Delta x). For a given Reynolds number, only two parameters are independent, and if you additionally enforce Ma \propto \Delta x, only one free quantity remains. The question is then how to choose the initial parameter values.

I have an idea for dealing with that problem: for a fixed Mach number (choose a larger value for shorter simulation times), vary the lattice resolution and adjust tau accordingly. I have observed that the error has a minimum for a certain value of \Delta x and hence of tau. This value of tau should then be taken as the initial value for the simulations. If the error is too large or the simulation time too long, one can adjust this by rescaling \Delta x and Ma according to \Delta x / Ma = const. This way one can be reasonably sure that the computation returns a near-optimal result.
However, there are two remaining questions:

1. In principle, this method has to be repeated for different Reynolds numbers, since I expect that the optimal value of tau depends on the Reynolds number. One can of course easily check this; I will do that as a next step.
2. The described method only works if the solution is known analytically. It is also possible that the optimal value of tau depends not only on the Reynolds number but also on the geometry. This should be checked as well.
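The rescaling step of the method above can be sketched as follows (a hypothetical helper using the same assumed BGK lattice-unit relations as before; names are mine). Refining the grid by a factor s while keeping \Delta x / Ma constant means N -> s*N and Ma -> Ma/s, which leaves the lattice viscosity, and hence tau, unchanged:

```python
# Rescale resolution and Mach number together, keeping dx/Ma
# (equivalently 1/(N*Ma)) constant, so that tau is invariant.

def rescale(N, Ma, Re, s):
    cs = (1.0 / 3.0) ** 0.5
    N2, Ma2 = s * N, Ma / s                    # dx -> dx/s, Ma -> Ma/s
    nu = (Ma2 * cs) * N2 / Re                  # unchanged: N2*Ma2 == N*Ma
    tau2 = nu / cs**2 + 0.5
    return N2, Ma2, tau2

print(rescale(N=64, Ma=0.2, Re=100, s=2))      # finer grid, smaller Ma, same tau
```

This makes the trade-off explicit: s > 1 buys accuracy (smaller \Delta x and smaller compressibility error) at the factor^5 cost discussed earlier, while the error-optimal tau found in the pilot scan is preserved.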

Timm

There is also an exhaustive paper on this topic:
Holdych, Noble, Georgiadis, Buckius: “Truncation Error Analysis of Lattice Boltzmann Methods”, Journal of Computational Physics 193 (2004) 595–619.