New Palabos release: Version 0.7 Release 0

It is our pleasure to announce that a new major release of Palabos is available. This release marks two important milestones in the project: a Python scripting interface and local grid refinement. As usual, the code can be downloaded from the Palabos web page: http://www.lbmethod.org/palabos/download.html .

A “Python interface” means that you can execute Palabos code through library calls from the Python language. In practice, this gives Palabos a look-and-feel similar to Matlab scripts. The code can be developed interactively at the Python prompt, or executed programmatically through a Python script. This approach is much more lightweight than developing applications in C++: the language is more straightforward and intuitive, and there is no need to compile your applications. From an efficiency standpoint, nothing changes: the code remains essentially as efficient as through the C++ interface (the occasional overhead of around 10% is negligible), it is still parallel, and it scales just as well as before.

For now, not all Palabos features have been ported to Python, but this will be done progressively in subsequent releases. The Python scripting interface is an experiment which is completely new to us as well. Therefore, your feedback on the forum is crucial. How do you like the Python interface? Is the syntax convenient? Was it easy to compile? Please let us know your experience with it. The compilation of the Python interface is documented in the user’s guide at http://www.lbmethod.org/palabos/documentation.userguide/pythonic.html . Have a look at the example Python scripts to understand how it works.

The other major innovation of this release is grid refinement, developed mainly by Daniel Lagrava. For now, only the 2D interface is provided; the 3D one will follow soon. The principles of the grid refinement strategy are explained in the user’s guide ( http://www.lbmethod.org/palabos/documentation.userguide/grid-refinement.html ), and an example program is provided.

Finally, a new multi-phase model, the He/Lee model, is implemented in the new release. Implementing this model was a real crash test for Palabos: the model requires couplings between many scalar fields, vector fields, and lattices, and it uses next-to-nearest-neighbor interaction stencils. The test result is positive: the model works fine, runs fast, and scales impressively. This adds to Palabos a multi-phase model which handles large density and viscosity ratios between the two phases.

Hey,

The new version sounds good; I look forward to trying out grid refinement. Can you give me the name of a paper on the He/Lee multi-phase model? I had a quick search but didn’t find anything.

I’m also interested in parallel processing with LB; do you have any plans to support CUDA in the future?

Regards,
Bruce

Hey jlatt,

I am new to OpenLB and am doing my best to get familiar with the framework. Along the way I have run into some problems, and I would appreciate your help with them:

  1. What is the use of the .hh files?
  2. Can the code currently be compiled with Microsoft Visual C++? I get error C2244; what does it mean?

Regards,
Carly

@brucedjones: Have a look at the comments at the beginning of the file multiPhysics/heLeeProcessor3D.h; you will find references to two papers there. CUDA support is planned, but it is for now difficult for me to predict how much time this will take (it is not a top priority).
@Carly: Traditional C++ programs are split into .h and .cpp files. Because we use templates, this split becomes .h and .hh files: the .hh files are implementation files, playing the role of the traditional .cpp files, except that they are included from the headers so that the template definitions are visible wherever they are instantiated. Palabos still doesn’t work with Visual C++; you have to use the procedure explained in the user’s guide to compile under Windows.
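
To make the .h/.hh split concrete, here is a minimal sketch (with made-up names, not actual Palabos code) of how a templated class is divided between the two files:

```cpp
// myFunctional.h -- declarations only (hypothetical example, not Palabos code)
#ifndef MY_FUNCTIONAL_H
#define MY_FUNCTIONAL_H

template <typename T>
class MyFunctional {
public:
    T twice(T value) const;
};

// Template definitions must be visible at the point of instantiation,
// which is why the implementation file is included at the end of the header.
#include "myFunctional.hh"

#endif

// ----------------------------------------------------------------------

// myFunctional.hh -- implementation, playing the role of a .cpp file
#ifndef MY_FUNCTIONAL_HH
#define MY_FUNCTIONAL_HH

template <typename T>
T MyFunctional<T>::twice(T value) const {
    return value + value;
}

#endif
```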

Hey Jonas,

Thanks for the references. I have some limited experience with CUDA and plan to explore it further in the coming months. Do you have some ideas on what studies need to be done with CUDA before integration with Palabos is possible?

I’m keen to help out with this; my first thought is that it should just be a matter of porting the data processors to CUDA kernels.

Regards,
Bruce

Many thanks for providing a Python interface!

I just wish you’d done it half a year ago – before I started hacking on the C++ simulation :wink:

Does it mean I can now use Palabos from the Python interpreter, building the simulation bit by bit, streaming one step, then refining?

Anyway: This is great news!

Time to merge in your new version.

Did you include the binary VTK patch? http://www.lbmethod.org/forum/read.php?4,2603

@ArneBab: Yes, a Python interface means that you can either build your simulation interactively, or write a script file which is certainly more straightforward than a C++ program. We have however not yet ported all features and models from C++ to Python; you will probably want to wait a few months until everything is ported before switching for good from C++ to Python.

I had seen the post on the binary VTK format you are mentioning, but it is not clear to me in which way it can be used to extend the capabilities of Palabos. After all, Palabos already writes data in a binary VTK format. It is Base64-encoded and therefore based on a set of ASCII characters, but it definitely is a binary format. Could you provide more explanations about what type of extension to Palabos’ VTK output you would like to have, and how the code in the mentioned post can help achieve it?

@brucedjones: Thank you for offering to help with the development of CUDA kernels; we on our side gladly accept. I agree with your point of view: from a technical point of view, adding CUDA capabilities to Palabos is as easy as translating individual data processors into CUDA kernels. From a software architecture point of view, however, a certain amount of work remains to be done on the Palabos core before CUDA kernels can be included in a convenient way. First of all, an interface is required for the non-intrusive development of CUDA kernels, that is, the possibility to develop kernels in a modular way without having to hack into the Palabos core every time. The second structural ingredient needed in Palabos is a global load-balancing strategy which takes into account hybrid execution models mixing CUDA and other kernels.

What I can promise is that the core development team will work as fast as possible towards a first prototype of the interface which would allow you and other interested contributors to develop your kernels. If you are eager to get started right away, I would suggest that you go ahead and develop CUDA kernels corresponding to existing Palabos data processors, and execute them for now through an execution mechanism of your own. As soon as we provide a corresponding Palabos interface, my guess is that it will then be relatively easy to integrate your kernels.

As a technical side remark, I should point out that there exists one exception to the data-processor mechanism in Palabos: the collision and streaming steps (and the combined collideAndStream() function) are not expressed as data processors; they are written natively as methods of the block-lattice. A CUDA kernel for them will need to be written anyway, of course, even though they are not data processors.
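
To give an idea of what such a kernel could look like, here is a minimal, self-contained sketch of a fused D2Q9 BGK collide-and-stream CUDA kernel. It is not Palabos code: the array layout, names, and periodic wrap-around are my own simplifying assumptions.

```cpp
// Toy D2Q9 BGK collide-and-stream kernel (illustrative sketch, not Palabos code).
#include <cuda_runtime.h>

#define NX 256
#define NY 256
#define Q  9

// D2Q9 lattice velocities and weights in constant memory.
__constant__ int   c_cx[Q] = { 0, 1, 0,-1, 0, 1,-1,-1, 1 };
__constant__ int   c_cy[Q] = { 0, 0, 1, 0,-1, 1, 1,-1,-1 };
__constant__ float c_w [Q] = { 4.f/9.f,
                               1.f/9.f, 1.f/9.f, 1.f/9.f, 1.f/9.f,
                               1.f/36.f, 1.f/36.f, 1.f/36.f, 1.f/36.f };

// Structure-of-arrays layout: population i of cell (x,y) lives at
// f[i*NX*NY + y*NX + x]. fIn and fOut are two separate device buffers.
__global__ void collideAndStream(const float* fIn, float* fOut, float omega)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= NX || y >= NY) return;
    int cell = y * NX + x;

    // Macroscopic density and velocity from the incoming populations.
    float f[Q], rho = 0.f, ux = 0.f, uy = 0.f;
    for (int i = 0; i < Q; ++i) {
        f[i] = fIn[i * NX * NY + cell];
        rho += f[i];
        ux  += f[i] * c_cx[i];
        uy  += f[i] * c_cy[i];
    }
    ux /= rho;  uy /= rho;
    float uSqr = ux * ux + uy * uy;

    // BGK collision, followed by a "push" streaming step with periodic wrap-around.
    for (int i = 0; i < Q; ++i) {
        float cu   = 3.f * (c_cx[i] * ux + c_cy[i] * uy);
        float feq  = c_w[i] * rho * (1.f + cu + 0.5f * cu * cu - 1.5f * uSqr);
        float post = f[i] - omega * (f[i] - feq);

        int xDest = (x + c_cx[i] + NX) % NX;
        int yDest = (y + c_cy[i] + NY) % NY;
        fOut[i * NX * NY + yDest * NX + xDest] = post;
    }
}

// Host side (one time step; d_fIn and d_fOut are swapped after each step):
//   dim3 block(16, 16);
//   dim3 grid((NX + 15) / 16, (NY + 15) / 16);
//   collideAndStream<<<grid, block>>>(d_fIn, d_fOut, omega);
```

Boundary conditions and anything beyond BGK are of course missing here; the point is only to show the granularity at which a collide-and-stream kernel operates.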

We will keep you updated on these points through the forum.

Hey Jonas

Thanks for that. Collide-and-stream is where I see CUDA being of most benefit. Admittedly I’m very new to CUDA and am currently in the process of writing some LB CUDA kernels so that I can learn. In the coming months I will spend some time digging through the Palabos source and try to identify some “easy targets”. Whatever my contribution may be, please bear in mind that I see myself more as an engineer than a programmer, although that distinction seems less and less applicable to me these days.

Regards,
Bruce

PS: I owe my MRes to OpenLB, so thanks for all your effort!

@jlatt: What I’d really need is a compressed VTK format, because the basic VTK output takes so much space that writing multiple time steps can easily fill my whole disk. I’m currently circumventing that by outputting only the relevant values into a FIFO connected to bzip2. A clear advantage of Palabos for me is that I can easily get access to exactly these values.
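
For reference, the workaround amounts to something like the following rough sketch. It assumes a Unix-like system with bzip2 installed, and it uses POSIX popen() instead of an explicitly created FIFO, which gives the same on-the-fly compression:

```cpp
// Rough sketch: pipe selected output values through bzip2 instead of writing
// raw VTK files (POSIX popen(); assumes bzip2 is on the PATH; not Palabos code).
#include <cstdio>

int main()
{
    // bzip2 compresses everything written to the pipe, on the fly.
    std::FILE* pipe = popen("bzip2 -c > probe_values.dat.bz2", "w");
    if (!pipe) return 1;

    for (int iT = 0; iT < 1000; ++iT) {
        double value = 0.0;  // here: the quantity extracted from the simulation at step iT
        std::fprintf(pipe, "%d %.8e\n", iT, value);
    }

    pclose(pipe);  // flushes the stream and waits for bzip2 to finish
    return 0;
}
```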

I’ll wait for the Python bindings, then. After all, my C++ simulation works as it should, so currently I wouldn’t gain much. In the long run it’s great, though, because I can fit it much better into my general Python-based codebase.

@brucedjones: If you are new to CUDA, I can recommend that you have a look at the open-source project Sailfish (http://sailfish.us.edu.pl/), which implements LB on GPUs. I found the code a bit tricky to understand in the beginning, because it uses a couple of somewhat advanced Python programming techniques. But once you get into it, you find yourself with a very carefully implemented framework. The project consists essentially of a Python framework which generates the CUDA code for you. This means that if you dislike, or don’t understand, the Python part, you can simply pre-generate the CUDA code (which is well structured) and investigate it. The procedure is explained in the Sailfish user’s guide.

Following my previous post I started doing some digging.

I found a really good series of tutorials for CUDA here: Dr. Dobb’s - CUDA, Supercomputing for the Masses

After reading that, I applied what I had learned by pulling apart the CUDA code which can be found here: Cambridge many-core, Lattice Boltzmann Demo

I think I’ve put together a bare-bones LB code; I just need to work on an output routine to check it’s all working as expected (it won’t be). Just for some basic number crunching, it seems to be pretty straightforward.
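
In case it is useful to anyone, my plan for the output routine is roughly the following sketch (using my own toy array layout, where population i of cell (x,y) sits at f[i*NX*NY + y*NX + x]):

```cpp
// Quick-and-dirty check: copy the populations back from the GPU and dump the
// density field as plain text, e.g. for gnuplot or matplotlib.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

const int NX = 256, NY = 256, Q = 9;

void writeDensity(const float* d_f, const char* fileName)
{
    std::vector<float> f(Q * NX * NY);
    cudaMemcpy(f.data(), d_f, f.size() * sizeof(float), cudaMemcpyDeviceToHost);

    std::FILE* out = std::fopen(fileName, "w");
    for (int y = 0; y < NY; ++y) {
        for (int x = 0; x < NX; ++x) {
            float rho = 0.f;  // density = sum over the Q populations of this cell
            for (int i = 0; i < Q; ++i) {
                rho += f[i * NX * NY + y * NX + x];
            }
            std::fprintf(out, "%g ", rho);
        }
        std::fprintf(out, "\n");
    }
    std::fclose(out);
}
```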