Releases: ggerganov/llama.cpp
b3044
ggml : fix loongarch build (O2 issue) (#7636)
b3042
[SYCL] fix intel docker (#7630)
* Update main-intel.Dockerfile
* workaround for https://github.com/intel/oneapi-containers/issues/70
* reset intel docker in CI
* add missed in server
b3040
metal : remove invalid asserts (#7617)
b3039
metal : add missing asserts (#7617)
b3038
ggml : fix YARN + add tests + add asserts (#7617)
* tests : add rope tests
* ggml : fixes (hopefully)
* tests : add non-cont tests
* cuda : add asserts for rope/norm + fix DS2
* ggml : assert contiguousness
* tests : reduce RoPE tests
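For context: YaRN is a frequency-scaling scheme layered on top of the standard RoPE rotation to extend context length. Below is a minimal sketch of the baseline pairwise RoPE rotation that these fixes revolve around; the names are illustrative, it is not ggml's implementation, and YaRN's per-band angle rescaling is omitted.

```c
#include <math.h>

// Rotate one token's embedding x[0..n_dims-1] in place for position pos.
// Each even/odd pair of dimensions is rotated by an angle that decays
// geometrically with the pair index (n_dims is assumed to be even).
static void rope_rotate(float * x, int n_dims, int pos, float freq_base) {
    for (int i = 0; i < n_dims; i += 2) {
        const float theta = pos * powf(freq_base, -(float) i / n_dims);
        const float c = cosf(theta);
        const float s = sinf(theta);
        const float x0 = x[i];
        const float x1 = x[i + 1];
        x[i]     = x0 * c - x1 * s;
        x[i + 1] = x0 * s + x1 * c;
    }
}
```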
b3037
cuda : non-cont concat support (#7610)
* tests : add non-cont concat tests
* cuda : non-cont concat support
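For context: "non-cont" refers to tensors whose elements are not laid out contiguously in memory (e.g. views or transposes), so a concat kernel must address every element through per-dimension byte strides rather than doing one flat copy. A minimal CPU-side sketch of that idea follows, under assumed names; it is not the actual CUDA kernel from the PR.

```c
#include <stdint.h>
#include <stddef.h>

// Concatenate two 2D float tensors along the row dimension into a
// contiguous row-major destination. a_nb0/a_nb1 and b_nb0/b_nb1 are byte
// strides, so the sources may be non-contiguous (sliced, transposed, ...).
static void concat_rows_noncont(
        const char * a, size_t a_nb0, size_t a_nb1, int64_t ne0, int64_t ne1_a,
        const char * b, size_t b_nb0, size_t b_nb1, int64_t ne1_b,
        float * dst) {
    for (int64_t i1 = 0; i1 < ne1_a + ne1_b; i1++) {
        for (int64_t i0 = 0; i0 < ne0; i0++) {
            const char * src = i1 < ne1_a
                ? a +  i1           * a_nb1 + i0 * a_nb0
                : b + (i1 - ne1_a)  * b_nb1 + i0 * b_nb0;
            dst[i1 * ne0 + i0] = *(const float *) src;
        }
    }
}
```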
b3036
llama-bench : add support for the RPC backend (#7435)
b3035
ggml : use atomic_flag for critical section (#7598)
* ggml : use atomic_flag for critical section
* add windows shims
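For context: atomic_flag is the C11 lock-free test-and-set primitive commonly used to build a spinlock-style critical section. A minimal sketch of that idiom follows; the function names are illustrative and this is not the actual ggml code. MSVC builds without a usable <stdatomic.h> need compatibility shims for these calls, which is presumably what "add windows shims" covers.

```c
#include <stdatomic.h>

static atomic_flag g_lock = ATOMIC_FLAG_INIT;

// Spin until test-and-set reports the flag was previously clear,
// i.e. until this thread acquires the lock.
static void critical_section_start(void) {
    while (atomic_flag_test_and_set_explicit(&g_lock, memory_order_acquire)) {
        // busy-wait
    }
}

// Release the lock so another spinning thread can enter.
static void critical_section_end(void) {
    atomic_flag_clear_explicit(&g_lock, memory_order_release);
}
```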
b3033
sync : ggml
b3030
ggml : fix typo in ggml.c (#7603)