
Increase the resolution of the grid #3

Open
fresh-men opened this issue Apr 7, 2023 · 2 comments

Comments

@fresh-men

Hello, I want to increase the resolution of the grid so that more detailed objects can be represented, but I ran into a problem. How can I solve it? Have you ever tried increasing the grid resolution?

I tried to change pg_scale = [1000, 2000, 4000] to pg_scale = [1000, 2000, 3000, 4000], and then encountered this error:

Traceback (most recent call last):
  File "/opt/data/private/PAC-NeRF-main/train.py", line 287, in <module>
    train_static(cfg, pnerf, optimizer, start, cfg['N_static'], rays_o_all, rays_d_all, viewdirs_all, rgb_all, ray_mask_all)
  File "/opt/data/private/PAC-NeRF-main/train.py", line 163, in train_static
    global_loss = pnerf.forward(1, rays_o_all,
  File "/opt/data/private/PAC-NeRF-main/lib/pac_nerf.py", line 204, in forward
    self.dynamic_observer.initialize(self.init_particles, self.init_features, self.init_velocities, self.init_rhos, self.init_mu, self.init_lam, self.nerf.voxel_size, self.init_yield_stress, self.init_plastic_viscosity, self.init_friction_alpha, self.cohesion)
  File "/opt/data/private/PAC-NeRF-main/lib/engine/dynamic_observer.py", line 160, in initialize
    self.from_torch(particles.data.cpu().numpy(), features.data.cpu().numpy(), velocities.data.cpu().numpy(), particle_rho.data.cpu().numpy(), particle_mu.data.cpu().numpy(), particle_lam.data.cpu().numpy())
  File "/root/miniconda3/envs/pacnerf/lib/python3.9/site-packages/taichi/lang/kernel_impl.py", line 1002, in __call__
    return self._primal(self._kernel_owner, *args, **kwargs)
  File "/root/miniconda3/envs/pacnerf/lib/python3.9/site-packages/taichi/lang/kernel_impl.py", line 869, in __call__
    return self.runtime.compiled_functions[key](*args)
  File "/root/miniconda3/envs/pacnerf/lib/python3.9/site-packages/taichi/lang/kernel_impl.py", line 785, in func__
    raise e from None
  File "/root/miniconda3/envs/pacnerf/lib/python3.9/site-packages/taichi/lang/kernel_impl.py", line 782, in func__
    t_kernel(launch_ctx)
RuntimeError: [cuda_driver.h:operator()@87] CUDA Error CUDA_ERROR_ASSERT: device-side assert triggered while calling stream_synchronize (cuStreamSynchronize)
[E 04/07/23 02:37:09.064 434] [cuda_driver.h:operator()@87] CUDA Error CUDA_ERROR_ASSERT: device-side assert triggered while calling stream_synchronize (cuStreamSynchronize)
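For context (an assumption based on DVGO-style progressive grid scaling, which PAC-NeRF's voxel renderer builds on; the exact rule in this repo is not confirmed by the thread): pg_scale typically lists the training iterations at which the voxel grid is upscaled, with each milestone doubling the total voxel count. Appending an extra milestone therefore doubles the final grid size again. A minimal sketch of that arithmetic:

```python
# Sketch of DVGO-style progressive grid scaling (an assumption about
# how pg_scale is used here, not verified against the PAC-NeRF source).

def final_num_voxels(base_num_voxels: int, pg_scale: list) -> int:
    """Each milestone in pg_scale doubles the total voxel count."""
    return base_num_voxels * (2 ** len(pg_scale))

# Original schedule grows the grid 2^3 = 8x; the extra milestone makes it 16x.
print(final_num_voxels(1_000_000, [1000, 2000, 4000]))        # 8000000
print(final_num_voxels(1_000_000, [1000, 2000, 3000, 4000]))  # 16000000
```

If buffers elsewhere (e.g. the particle fields in the simulator) are sized for the original final resolution, this doubling is a plausible trigger for the out-of-bounds assert below.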

@xuan-li
Owner

xuan-li commented Apr 7, 2023

I'm not sure about this error.

The simulation part may be too slow if you double the resolution.

@fresh-men
Author

This problem happens here:
self.from_torch(particles.data.cpu().numpy(), features.data.cpu().numpy(), velocities.data.cpu().numpy(), particle_rho.data.cpu().numpy(), particle_mu.data.cpu().numpy(), particle_lam.data.cpu().numpy())
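A plausible cause (an assumption, not confirmed in the thread): from_torch copies the particle arrays into preallocated Taichi fields, and a CUDA device-side assert at this call often means the array length exceeds the fields' preallocated capacity, so the kernel indexes out of bounds. A guard along these lines, called right before from_torch, could confirm that (the names max_particles and check_capacity are hypothetical, not PAC-NeRF API):

```python
# Hypothetical guard before dynamic_observer.from_torch(...).
# `max_particles` stands in for whatever size the Taichi particle fields
# were allocated with; the real attribute name in PAC-NeRF may differ.

def check_capacity(num_particles: int, max_particles: int) -> None:
    """Fail early on the host instead of asserting inside the CUDA kernel."""
    if num_particles > max_particles:
        raise ValueError(
            f"{num_particles} particles exceed the preallocated field size "
            f"{max_particles}; the kernel would write out of bounds, which "
            f"surfaces as CUDA_ERROR_ASSERT at the next stream synchronize."
        )

# Usage sketch: check_capacity(particles.shape[0], max_particles)
check_capacity(1_000, 2_000)  # within capacity, no error
```

If the check fires at the higher resolution, enlarging the field allocation (or keeping the original final resolution) would be the fix to try.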

Have you ever tried increasing the grid resolution?

