I am using the fully up-to-date alpaca repo with the 13B model, which I downloaded from the magnet link in the README. This is what I've run:

```
make chat
./chat -m ../ggml-alpaca-13b-q4.bin
```

It outputs this:

```
main: seed = 1679542559
llama_model_load: loading model from '../ggml-alpaca-13b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 10959.49 MB
Segmentation fault
```

I think I have enough RAM (14 GB). Is a segfault just what happens when there isn't enough RAM?
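In case it helps diagnose this, here is a rough check I can run to compare free memory against the allocation the log reports. This is just a sketch: the `10960` figure is taken from the `ggml ctx size = 10959.49 MB` line above (the process will need somewhat more than that in total), and it assumes a Linux system with `free` available.

```shell
# Approximate RAM check for loading the 13B q4 model.
# required_mb comes from the "ggml ctx size = 10959.49 MB" log line;
# actual peak usage will be a bit higher than this.
required_mb=10960
avail_mb=$(free -m | awk '/^Mem:/ {print $7}')   # "available" column
echo "available: ${avail_mb} MB, model ctx needs: ~${required_mb} MB"
if [ "$avail_mb" -lt "$required_mb" ]; then
  echo "Likely not enough free RAM for the 13B model"
fi
```

On my machine the "available" figure is well below 14 GB once other processes are counted, so maybe that explains it?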