Code I used:

```python
sc.tl.umap(ad, min_dist=0.1, method='rapids', neighbors_key='neighbors')
```
Environment: AWS g5.4xlarge, 1 GPU with 24 GB of GPU memory.

During the run, GPU usage stayed around 4 GB / 24 GB, but at the end it crashed with an out-of-memory error.
```
---------------------------------------------------------------------------
MemoryError                               Traceback (most recent call last)
Cell In[11], line 1
----> 1 sc.tl.umap(ad,min_dist=0.1,method='rapids',neighbors_key='neighbors')

File ~/mambaforge/envs/rapids-23.08/lib/python3.10/site-packages/scanpy/tools/_umap.py:237, in umap(adata, min_dist, spread, n_components, maxiter, alpha, gamma, negative_sample_rate, init_pos, random_state, a, b, copy, method, neighbors_key)
    222 X_contiguous = np.ascontiguousarray(X, dtype=np.float32)
    223 umap = UMAP(
    224     n_neighbors=n_neighbors,
    225     n_components=n_components,
   (...)
    235     random_state=random_state,
    236 )
--> 237 X_umap = umap.fit_transform(X_contiguous)
    238 adata.obsm['X_umap'] = X_umap  # annotate samples with UMAP coordinates
    239 logg.info(
    240     '    finished',
    241     time=start,
    242     deep=('added\n' "    'X_umap', UMAP coordinates (adata.obsm)"),
    243 )

File ~/mambaforge/envs/rapids-23.08/lib/python3.10/site-packages/cuml/internals/api_decorators.py:188, in _make_decorator_function.<locals>.decorator_function.<locals>.decorator_closure.<locals>.wrapper(*args, **kwargs)
    185 set_api_output_dtype(output_dtype)
    187 if process_return:
--> 188     ret = func(*args, **kwargs)
    189 else:
    190     return func(*args, **kwargs)

File ~/mambaforge/envs/rapids-23.08/lib/python3.10/site-packages/cuml/internals/api_decorators.py:393, in enable_device_interop.<locals>.dispatch(self, *args, **kwargs)
    391 if hasattr(self, "dispatch_func"):
    392     func_name = gpu_func.__name__
--> 393     return self.dispatch_func(func_name, gpu_func, *args, **kwargs)
    394 else:
    395     return gpu_func(self, *args, **kwargs)

File ~/mambaforge/envs/rapids-23.08/lib/python3.10/site-packages/cuml/internals/api_decorators.py:190, in _make_decorator_function.<locals>.decorator_function.<locals>.decorator_closure.<locals>.wrapper(*args, **kwargs)
    188     ret = func(*args, **kwargs)
    189 else:
--> 190     return func(*args, **kwargs)
    192 return cm.process_return(ret)

File base.pyx:665, in cuml.internals.base.UniversalBase.dispatch_func()

File umap.pyx:658, in cuml.manifold.umap.UMAP.fit_transform()

File ~/mambaforge/envs/rapids-23.08/lib/python3.10/site-packages/cuml/internals/api_decorators.py:188, in _make_decorator_function.<locals>.decorator_function.<locals>.decorator_closure.<locals>.wrapper(*args, **kwargs)
    185 set_api_output_dtype(output_dtype)
    187 if process_return:
--> 188     ret = func(*args, **kwargs)
    189 else:
    190     return func(*args, **kwargs)

File ~/mambaforge/envs/rapids-23.08/lib/python3.10/site-packages/cuml/internals/api_decorators.py:393, in enable_device_interop.<locals>.dispatch(self, *args, **kwargs)
    391 if hasattr(self, "dispatch_func"):
    392     func_name = gpu_func.__name__
--> 393     return self.dispatch_func(func_name, gpu_func, *args, **kwargs)
    394 else:
    395     return gpu_func(self, *args, **kwargs)

File ~/mambaforge/envs/rapids-23.08/lib/python3.10/site-packages/cuml/internals/api_decorators.py:190, in _make_decorator_function.<locals>.decorator_function.<locals>.decorator_closure.<locals>.wrapper(*args, **kwargs)
    188     ret = func(*args, **kwargs)
    189 else:
--> 190     return func(*args, **kwargs)
    192 return cm.process_return(ret)

File base.pyx:665, in cuml.internals.base.UniversalBase.dispatch_func()

File umap.pyx:595, in cuml.manifold.umap.UMAP.fit()

MemoryError: std::bad_alloc: out_of_memory: CUDA error at: /home/ec2-user/mambaforge/envs/rapids-23.08/include/rmm/mr/device/cuda_memory_resource.hpp
```
I had to use this to avoid the out-of-memory error:

```python
import cupy as cp
import rmm

rmm.reinitialize(managed_memory=True)
cp.cuda.set_allocator(rmm.rmm_cupy_allocator)
```