Is it possible to send objects (i.e. class instances with their data members) via MPI_Bcast, MPI_Gather, etc.?
The problem is that I have an array of objects which I want to transfer in one go, and I have to declare the datatype of the data, e.g. MPI_DOUBLE or MPI_INT. How can I do this for arbitrary objects?
You might be able to use MPI_Pack to serialize the data of your objects for sending, then MPI_Unpack to reconstruct the objects on the receiving node. I think you can only send predefined or derived MPI datatypes with the regular MPI calls.
This is a very nice hint, and it may well do the job.
My idea was to extract the data from my objects and send it as a derived datatype, or something along those lines. I have to find out which way is more efficient and, most importantly, faster under the given circumstances.
However, I am wondering about something else: is it possible to allocate memory that is visible to all processes? That way I would save MPI communication time. The idea is that the LBM computations themselves stay purely local to each process, i.e. at the subdomain borders I still use MPI communication to stream the fluid. But the additional objects moving in the flow (deformable cells, actually) would be visible to all processes, which avoids sending data whenever an object crosses the border between two process domains. Does anybody have experience with this approach?
The question is whether making the memory globally available is more expensive than the MPI communication it replaces.
PS: I just found out that this can also be done with Boost, via Boost.Interprocess (boost/interprocess). This looks like a very interesting way of exchanging data.