For very large problems it would be very useful to create and partition the mesh in parallel. At present we create a sequential mesh on one processor, then partition that mesh on the same processor, and finally communicate the partitions to the various processors.
It would be great if we could partition the high-level description of the mesh and do the subsequent triangulations independently on each of the processors.
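A minimal sketch of what that could look like with mpi4py (which the install tweaks below add): rank 0 splits the high-level description (here just a boundary polygon), scatters one piece to each processor, and every processor triangulates its own piece with no further communication. Note that `split_domain` and `triangulate` are hypothetical placeholders, not existing library calls.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

def split_domain(polygon, nparts):
    """Placeholder: cut an axis-aligned rectangular boundary polygon into
    nparts vertical strips.  A real partitioner would work on the full
    high-level description (boundary, interior regions, holes)."""
    (x0, y0), (x1, y1) = polygon[0], polygon[2]
    dx = (x1 - x0) / nparts
    return [[(x0 + i * dx, y0), (x0 + (i + 1) * dx, y0),
             (x0 + (i + 1) * dx, y1), (x0 + i * dx, y1)]
            for i in range(nparts)]

def triangulate(boundary, max_area):
    """Placeholder for the per-processor triangulation step."""
    return {'boundary': boundary, 'max_triangle_area': max_area}

if rank == 0:
    # High-level description only -- a boundary polygon, no triangles yet.
    domain_boundary = [(0.0, 0.0), (10.0, 0.0), (10.0, 5.0), (0.0, 5.0)]
    pieces = split_domain(domain_boundary, size)
else:
    pieces = None

# Each processor receives just its piece of the high-level description ...
my_piece = comm.scatter(pieces, root=0)

# ... and triangulates it independently; no mesh data is communicated.
local_mesh = triangulate(my_piece, max_area=0.1)
print("rank %d triangulated strip with corners %s" % (rank, local_mesh['boundary']))
```

Run with e.g. `mpiexec -n 4 python sketch.py`; each rank prints the corners of the strip it would triangulate.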
* Test for pypar_available
* add mpi4py install to travis.ci
* Adding mpi4py
* more tweaking of install scripts
* added a space to if statement
* Needed to add explicit reference to MSMPI_BIN in path
* remove nt test
* reorder assert commands to avoid deadlock (see the sketch after this list)
* using commands for parallel mesh
* get working for linux
* return if win32
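On the "reorder assert commands to avoid deadlock" point: in a parallel test, if one rank fails an assert before the other ranks have completed a collective call, the failing rank exits while the rest block forever. A sketch of the safer ordering, assuming mpi4py and a hypothetical per-rank check:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def local_check():
    """Hypothetical per-rank test result."""
    return True

local_ok = local_check()

# Risky ordering: asserting here means a failing rank never reaches the
# collective below, and the surviving ranks hang waiting for it.
#   assert local_ok
#   results = comm.allgather(local_ok)

# Safer ordering: complete the collective on every rank first, then assert,
# so all ranks see the same outcome and none is left waiting.
results = comm.allgather(local_ok)
assert all(results), "check failed on rank(s) %s" % \
    [i for i, ok in enumerate(results) if not ok]
```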
Check out http://msl.cs.odu.edu/mediawiki/index.php/Parallel_Constrained_Delaunay_Mesh_%28PCDM%29_Generation