Implement memory pool allocator#18

Draft
abullock-lanl wants to merge 22 commits into main from abullock/mem_pool

Conversation

@abullock-lanl
Collaborator

This adds the infrastructure for a custom GPU-enabled memory pool allocator.

More extensive testing and documentation are needed before I can mark this ready. I will likely also add a process error handler.

@abullock-lanl abullock-lanl self-assigned this Mar 4, 2026
@abullock-lanl
Collaborator Author

Update: I've added an error handler function for graceful exits; it also uses libunwind for stack traces (when available on the given platform).

While memory pool return values are usually captured with auto, these typedefs are needed for passing memory pool arrays as parameters.
@abullock-lanl
Collaborator Author

Here are some of the next steps I'm thinking about (not for this PR) for fully adopting the new array types:

  • We want to replace all STL vectors holding data that will be used on device with Kokkos views or std::array, instead of the current process of creating views from STL vectors. The simplest place to start would be replacing local DBLV_T variables with the new ArrayRank1<double> type that a memory pool allocation returns. For instance:
- DBLV_T point_volume(pll, 0.0);
+ auto point_volume = NewArrayRank1<double>("pvol", pll);
  • Replacing the STL vector usage for database accesses can come later.
  • Before we can replace the simple vectors, the MPI comm machinery has to be updated to operate on views/arrays instead of just STL vectors; otherwise we can't do things like gathers/scatters on the point volume example above.
  • Should I maintain left layout for memory pool arrays? I think probably not, since it can be awkward in the UME context, where we never loop over an inner-dimension variable (we always have 3-element std::array types as the "inner" dimension).
