Is there any way to speed up a computation (rescale time) for a fixed low Reynolds number and a very fine mesh discretization? The maximum flow speed is far below the compressibility limit (ca. 0.001 lattice space units per lattice time unit and below). The only option I see now is to increase the lattice Boltzmann viscosity, but since tau should not go above 1, I do not see any way to speed up the computation.

What is your Reynolds number?
What you could do: increase the numerical Reynolds number. This is what I often do. If the Reynolds number is small, you may increase it without seeing immediate effects in the flow. This does not work if the Reynolds number is about 1 or larger.
Of course, you have to check whether your physics remains correct.
In fact, we are doing the same thing with the Mach number all the time. The numerical Mach number is always larger than the physical Mach number. As long as it is sufficiently small (say, below 0.1), there is no significant influence.
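To make the point above concrete, here is a small sketch (my own illustration, with made-up parameter values) of how the lattice velocity u_lb, which sets the numerical Mach number, controls the number of time steps needed to cover a fixed physical time on a fixed grid:

```python
# Sketch: for a grid of N nodes across a domain of physical length L,
# with physical flow speed U and physical simulated time T, the number
# of time steps scales as 1/u_lb (u_lb = lattice velocity = U*dt/dx).

def steps_for_physical_time(N, L, U, T, u_lb):
    """Return the number of time steps needed to simulate physical time T."""
    dx = L / N            # physical size of one lattice spacing
    dt = u_lb * dx / U    # physical duration of one time step
    return T / dt

# Doubling the numerical Mach number (u_lb) halves the step count:
slow = steps_for_physical_time(N=200, L=1.0, U=0.01, T=10.0, u_lb=0.001)
fast = steps_for_physical_time(N=200, L=1.0, U=0.01, T=10.0, u_lb=0.002)
print(slow / fast)  # -> 2.0
```

So as long as the (larger) numerical Mach number stays well below the compressibility limit, raising u_lb is a direct runtime win.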

Thanks for your answer. It is hard to give an exact Re because the flow is non-Newtonian (close to Bingham rheology). The simulation is of self-compacting concrete with fibres included, and I am not sure whether the Re is below 1. Maybe I am wrong, but increasing the numerical Re seems equivalent to decreasing the real physical viscosity of the fluid: instead of a 10 Pa·s fluid, you would simulate a 5 Pa·s one, for example. Am I wrong about that? Could you comment more on increasing the numerical Re? What I do now is allow the relaxation time to go above 1, with an upper limit of around 6, so the liquid parts of the flow have a relaxation time around 1 and the solid parts are above it. BTW, there is no way to increase the accuracy of the LBM for relaxation times above 1, right?

You are right: increasing Re basically means reducing the physical viscosity. In principle, you keep the relaxation parameter and the resolution and increase the lattice velocity. This way, the numerical Re increases and the runtime goes down.
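In lattice units this scaling is easy to see (a sketch with assumed example numbers; the c_s^2 = 1/3 relation is the standard one for D2Q9/D3Q19-type lattices):

```python
# With tau and the resolution N fixed, the numerical Reynolds number
# grows linearly with the lattice velocity u_lb.

def lattice_reynolds(u_lb, N, tau):
    nu_lb = (tau - 0.5) / 3.0   # lattice viscosity, using c_s^2 = 1/3
    return u_lb * N / nu_lb

re_slow = lattice_reynolds(u_lb=0.001, N=200, tau=0.8)
re_fast = lattice_reynolds(u_lb=0.005, N=200, tau=0.8)
print(re_slow, re_fast)  # Re scales linearly with u_lb
```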
Regarding your other comment: LBM is known to be intrinsically more inaccurate when tau is larger than unity. As far as I know, there is no way to counteract this.
Unfortunately, LBM is not very efficient when both very large and very small Re numbers have to be simulated.

Just a quick thought: the most commonly used LBE is the one effectively obtained by an Euler integration of the (discrete) Boltzmann equation, which is a first-order integration method in time. Another approach is to use the Crank-Nicolson integration method, which is second order. The resulting difference equation is implicit but can be made explicit by a quick change of variables. This was first proposed by He, Shan and Doolen (PRE 1997). It may help you with the tau problem in your case, but I’m not sure.
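The change of variables is easy to check on a toy problem (my own scalar illustration, not the full LB derivation from the referenced paper): for the relaxation equation df/dt = -(f - feq)/tau, the implicit Crank-Nicolson update is exactly reproduced by an explicit BGK-style update in the transformed variable g = f + dt/(2*tau)*(f - feq), with the shifted relaxation time tau + dt/2:

```python
# Verify that the explicit scheme in the transformed variable matches
# the implicit Crank-Nicolson scheme step for step.

tau, dt, feq = 0.6, 1.0, 1.0
f = 0.0                    # initial value
a = dt / (2.0 * tau)

# implicit Crank-Nicolson, solved for f^{n+1}
f_cn = f
for _ in range(5):
    f_cn = ((1.0 - a) * f_cn + 2.0 * a * feq) / (1.0 + a)

# explicit update in g with the shifted relaxation time tau_bar
tau_bar = tau + dt / 2.0
g = f + a * (f - feq)
for _ in range(5):
    g = g - (dt / tau_bar) * (g - feq)
f_explicit = (g + a * feq) / (1.0 + a)  # invert the change of variables

print(abs(f_cn - f_explicit) < 1e-12)  # -> True
```

Note that the transformed scheme stays stable and explicit even though the underlying integration is trapezoidal.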

Another thought: increasing tau is bad for BGK, but not for TRT/MRT models. However, you have to be careful with this and implement it properly. I guess you can speed it up by a factor of 10.
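For reference, a sketch of the standard TRT relation (my own function names): the viscosity fixes only the even relaxation time tau_plus, while the odd one, tau_minus, is chosen through the "magic" parameter Lambda = (tau_plus - 1/2)(tau_minus - 1/2), which is what keeps accuracy under control when tau_plus is large:

```python
# Odd TRT relaxation time for a given even one and magic parameter.
# Lambda = 1/4 is a common stability choice; Lambda = 3/16 fixes the
# bounce-back wall location for Poiseuille flow.

def trt_tau_minus(tau_plus, magic=0.25):
    return 0.5 + magic / (tau_plus - 0.5)

# Even with a large tau_plus (high viscosity), tau_minus stays close
# to 1/2 and the magic parameter is preserved:
tp = 6.0
tm = trt_tau_minus(tp)
print(tm, (tp - 0.5) * (tm - 0.5))  # second value is Lambda, approx. 0.25
```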