real*4 or real*8?

Hi,
is it always necessary to use double precision (real*8) in this method for u, v, w, rho, f, and f_tmp? I am trying to apply the LBM on very fine grids, so economy of memory is important.
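
To make the question concrete, here is a minimal sketch of what I mean (my own illustration, not taken from any existing code): the working precision is a single kind parameter, so the same program can be built with real*4 or real*8 and the memory footprint compared. The array names follow my question; the grid sizes nx, ny, nz and the number of populations q are assumptions.

[code]
! Working precision as a single kind parameter: switch wp between
! real32 (real*4) and real64 (real*8) to compare memory and accuracy.
module precision_mod
   use iso_fortran_env, only: real32, real64
   implicit none
   integer, parameter :: wp = real32   ! change to real64 for double precision
end module precision_mod

program lbm_alloc
   use precision_mod, only: wp
   implicit none
   integer, parameter :: nx = 128, ny = 128, nz = 128, q = 19   ! assumed sizes
   real(wp), allocatable :: u(:,:,:), v(:,:,:), w(:,:,:), rho(:,:,:)
   real(wp), allocatable :: f(:,:,:,:), f_tmp(:,:,:,:)

   allocate(u(nx,ny,nz), v(nx,ny,nz), w(nx,ny,nz), rho(nx,ny,nz))
   allocate(f(q,nx,ny,nz), f_tmp(q,nx,ny,nz))

   ! Total storage is roughly (4 + 2*q) * nx*ny*nz reals, so switching
   ! from real*8 to real*4 halves the memory footprint.
   print *, 'bytes per real:', storage_size(1.0_wp) / 8
end program lbm_alloc
[/code]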

Hi Kaff,

From what I have heard from people who work with GPUs, it makes no difference for them whether they use single precision or double.

Alex

Of course the GPU guys have no problem using floats instead of doubles. :wink:
I am not sure whether single precision is sufficient in all cases, though. I could imagine that simulations with tau close to 0.5 run into problems.
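
One way this can show up (my reading of the tau issue, with made-up numbers): the lattice viscosity nu = (tau - 0.5)/3 is a difference of nearly equal numbers, so in single precision only a few significant digits of it survive.

[code]
! Illustration with assumed numbers: for tau close to 0.5 the viscosity
! nu = (tau - 0.5)/3 loses significant digits in single precision,
! because tau itself is only stored to about 7 digits.
program tau_roundoff
   use iso_fortran_env, only: real32, real64
   implicit none
   real(real32) :: tau_s, nu_s
   real(real64) :: tau_d, nu_d

   tau_s = 0.5001_real32
   tau_d = 0.5001_real64

   nu_s = (tau_s - 0.5_real32) / 3.0_real32
   nu_d = (tau_d - 0.5_real64) / 3.0_real64

   print *, 'single precision nu:', nu_s   ! relative error of order 1e-4
   print *, 'double precision nu:', nu_d
end program tau_roundoff
[/code]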

Timm

This paper by P. Dellar reports issues with round-off errors at very small Mach numbers. As a work-around, instead of increasing the machine size of the floating-point representation, the paper reports that good results are achieved with [url=http://www.lbmethod.org/howtos:reduce_roundoff]the round-off optimization procedure by P. Skordos[/url].
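
For what it is worth, here is a hedged sketch of the idea behind that procedure as I understand it from the linked howto (all names below are illustrative, not taken from the paper): at low Mach number every population f_i stays close to its lattice weight t_i, so one stores the deviations f_i - t_i and computes the equilibrium directly in deviation form. Then only small quantities of comparable size are added, and the large constant part no longer eats the significant digits.

[code]
! Sketch of the round-off reduction idea (my reading of the linked howto,
! not code from the paper): store the populations as deviations
! fdev_i = f_i - t_i and evaluate the equilibrium in deviation form.
! D2Q9 lattice; all names are illustrative.
module d2q9_dev
   use iso_fortran_env, only: real32
   implicit none
   integer, parameter :: wp = real32
   ! D2Q9 weights and lattice velocities
   real(wp), parameter :: t(0:8) = [ 4.0_wp/9.0_wp, &
        1.0_wp/9.0_wp,  1.0_wp/9.0_wp,  1.0_wp/9.0_wp,  1.0_wp/9.0_wp, &
        1.0_wp/36.0_wp, 1.0_wp/36.0_wp, 1.0_wp/36.0_wp, 1.0_wp/36.0_wp ]
   integer, parameter :: cx(0:8) = [0, 1, 0,-1, 0, 1,-1,-1, 1]
   integer, parameter :: cy(0:8) = [0, 0, 1, 0,-1, 1, 1,-1,-1]
contains
   ! Equilibrium deviation feq_i - t_i, expressed through the density
   ! deviation drho = rho - 1 and the velocity (ux, uy); every term is
   ! small at low Mach number, so there is no large cancellation.
   pure function feq_dev(drho, ux, uy) result(fd)
      real(wp), intent(in) :: drho, ux, uy
      real(wp) :: fd(0:8), cu, usq
      integer  :: i
      usq = ux*ux + uy*uy
      do i = 0, 8
         cu = real(cx(i), wp)*ux + real(cy(i), wp)*uy
         fd(i) = t(i) * ( drho + (1.0_wp + drho) * &
                          (3.0_wp*cu + 4.5_wp*cu*cu - 1.5_wp*usq) )
      end do
   end function feq_dev
end module d2q9_dev
[/code]

The BGK collision then acts on the deviations exactly as on the original populations (the constant t_i cancels out), streaming is unchanged, and the density is recovered as rho = 1 + sum of the deviations.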