Hi,

Thank you for sharing your code with the community; truly great work!
I encountered some issues when running the app on my local machine and got the same result when running it in Colab. The problem appears to be related to an incorrect build of bitsandbytes:
```
Initializing ImageCaptioning to cuda:0
Overriding torch_dtype=None with `torch_dtype=torch.float16` due to requirements of `bitsandbytes` to enable model loading in mixed int8. Either pass torch_dtype=torch.float16 or don't pass this argument at all to remove this warning.
CUDA SETUP: CUDA runtime path found: /scratch/miniconda/envs/CSAM_2/lib/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 117
CUDA SETUP: Loading binary /scratch/miniconda/envs/CSAM_2/lib/python3.9/site-packages/bitsandbytes/libbitsandbytes_cuda117.so...
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Error named symbol not found at line 479 in file /mmfs1/gscratch/zlab/timdettmers/git/bitsandbytes/csrc/ops.cu
```
I tried running the Colab notebook, and my session consistently crashes on this line:

```python
captioning_model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="sequential", load_in_8bit=True)
```

It crashes after attempting to load the checkpoint shards, just as with the app.py script.
I tried various CUDA versions (11.3, 11.7) and the issue persists.
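For reference, here is the failing snippet as a self-contained script (the import is my addition; the model ID and arguments are exactly the ones from the Colab):

```python
from transformers import Blip2ForConditionalGeneration

# Loading BLIP-2 with 8-bit quantization via bitsandbytes; the session
# crashes while the checkpoint shards are being loaded.
captioning_model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b",
    device_map="sequential",
    load_in_8bit=True,
)
```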
Could you please check whether you can reproduce this issue and suggest how to fix it? Perhaps pinning the exact version of bitsandbytes in requirements.txt would resolve the problem?
Thanks!
I have never encountered such an issue before. As the message `Error named symbol not found at line ...` suggests, the likely cause is an incompatible version of bitsandbytes. Here are some suggestions for fixing it:
1. Update your bitsandbytes library; I use bitsandbytes 0.37.2:

   ```
   !pip install --upgrade bitsandbytes
   ```

   or

   ```
   !pip install bitsandbytes==0.37.2
   ```

2. If 1. does not work, try loading BLIP-2 without the int8 quantization (see the sketch below):

   ```python
   captioning_model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
   ```
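For completeness, a minimal end-to-end sketch of option 2, loading in float16 and running on the GPU; the processor usage and the example image path are my assumptions, not from this thread:

```python
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Load in float16 instead of 8-bit; this bypasses bitsandbytes entirely.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
captioning_model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda:0")

# Hypothetical example image, just to show a caption round-trip.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt").to("cuda:0", torch.float16)
generated_ids = captioning_model.generate(**inputs, max_new_tokens=30)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
print(caption)
```

Note that without 8-bit quantization the model needs roughly twice the GPU memory, so this may not fit on smaller cards.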