Completing shift to new name sharpy
* renaming files ddptensor (and the like) -> sharpy
* also renaming variables, namespaces, etc. (also shortcuts like ddpt)

Typically, sharpy operations do not get executed immediately. Instead, the function returns a transparent object (a future) only.
The actual computation gets deferred by creating a promise/deferred object and queuing it for later. This is not visible to users; they can use sharpy like any other numpy-like library.

Computation happens only when actual data is needed, that is when

- the values of array elements are cast to bool, int, or float
- the array is printed
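
To illustrate, here is a minimal sketch of this behavior. The init/fini calls and the arange signature are assumptions based on sharpy's numpy-like interface; only the laziness itself is the point:

```python
# Minimal sketch of lazy execution; sp.init/sp.fini and the arange
# signature are assumptions based on sharpy's numpy-like API.
import sharpy as sp

sp.init(False)            # assumed initialization call

a = sp.arange(0, 16, 1)   # returns immediately: only a future
b = a + a                 # still lazy: queued as a deferred object

print(b)                  # printing needs real data -> triggers execution
v = float(b[0])           # casting an element to float also triggers it

sp.fini()                 # assumed finalization call
```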

In the background a worker thread handles deferred objects. Until computation is needed, it dequeues deferred objects from the FIFO queue and asks them to generate MLIR.
Objects can either generate MLIR or instead provide a run() function for immediate execution. In the latter case, the current MLIR function gets executed before run() is called, to make sure potential dependences are met.
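
The pattern can be sketched in simplified form (illustration only, not sharpy's actual implementation; all names here are made up for the sketch):

```python
import queue
import threading

class Deferred:
    """Illustrative deferred object: contributes MLIR unless it has run()."""
    run = None                              # eager ops override this

    def generate_mlir(self, module):
        module.append(f"op_{id(self)}")     # stand-in for real IR generation

def execute(module):
    print("executing MLIR function with", len(module), "ops")

def worker(fifo):
    module = []                             # MLIR function under construction
    while True:
        d = fifo.get()
        if d is None:                       # sentinel: run what is left, stop
            execute(module)
            return
        if d.run is not None:               # eager object: flush pending MLIR
            execute(module)                 # ...so its dependences are met
            module = []
            d.run()
        else:
            d.generate_mlir(module)         # lazy object: just add IR

fifo = queue.Queue()
t = threading.Thread(target=worker, args=(fifo,))
t.start()
fifo.put(Deferred())
fifo.put(None)                              # force execution (e.g. on print)
t.join()
```
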
### Distribution

Arrays and operations on them get transparently distributed across multiple processes. The respective functionality is handled partly by this library and partly by the IMEX dist dialect.
IMEX relies on a runtime library for complex communication tasks and for inspecting the runtime configuration, such as the number of processes and the process id (MPI rank).
sharpy provides this library functionality in a separate dynamic library "idtr".

Right now, data is split in the first dimension (only). Each process knows the partition it owns. For optimization, partitions can actually overlap.
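
Such a 1-d block partitioning could look as follows (hypothetical helper for illustration; idtr's actual partition computation is not shown here):

```python
# Sketch of block-partitioning the first dimension; `overlap` models the
# optional halo rows mentioned above. Purely illustrative, not idtr's API.
def local_partition(shape, rank, nprocs, overlap=0):
    n = shape[0]
    chunk = (n + nprocs - 1) // nprocs          # ceil-divide rows over ranks
    start = min(rank * chunk, n)
    end = min(start + chunk, n)
    # optionally extend the owned slice into the neighbors' rows
    return max(0, start - overlap), min(n, end + overlap)

# e.g. 4 ranks over a (10, 3) array: rank 1 owns rows [3, 6)
print(local_partition((10, 3), rank=1, nprocs=4))             # -> (3, 6)
print(local_partition((10, 3), rank=1, nprocs=4, overlap=1))  # -> (2, 7)
```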

sharpy supports two execution modes:

1. CSP/SPMD/explicitly-distributed execution, meaning all processes execute the same program; execution is replicated on all processes. Data is typically not replicated but distributed among processes.
2. Controller-Worker/implicitly-distributed execution, meaning only a single process executes the program and distributes data and work to worker processes.
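
The difference can be sketched with generic MPI code (an mpi4py illustration only; this is not sharpy's mechanism for selecting a mode):

```python
# Generic mpi4py illustration of the two modes.
# Launch with e.g.: mpiexec -n 4 python modes.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, nprocs = comm.Get_rank(), comm.Get_size()

# Mode 1 (CSP/SPMD): every process runs this same code on its own
# partition; a collective combines the partial results on all ranks.
local = np.arange(rank * 4, (rank + 1) * 4)
total = comm.allreduce(local.sum(), op=MPI.SUM)

# Mode 2 (Controller-Worker): only rank 0 drives the program; the other
# ranks merely receive and execute work items.
work = {"op": "sum"} if rank == 0 else None
work = comm.bcast(work, root=0)   # workers get the work description
```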