
[CI][Benchmarks] update llama.cpp and requirements to latest #17881

Merged (1 commit) on Apr 7, 2025

Conversation

pbalcer (Contributor) commented Apr 7, 2025

This patch updates llama.cpp to the latest available version, uses a new, more relevant, GGUF model, and updates oneAPI to 2025.1.

I was trying to avoid updating oneAPI, but the latest llama.cpp internal pooling logic seems to be broken on 2025.0, resulting in double-free errors when using older oneAPI components.

The utils.download function also had to be updated, because it was using a deprecated feature and didn't work on some configurations.

@pbalcer pbalcer requested a review from a team as a code owner April 7, 2025 09:55
pbalcer (Contributor, Author) commented Apr 7, 2025

@intel/llvm-gatekeepers please merge. The CI failure is unrelated (the system is dead).

@martygrant martygrant merged commit f365bf0 into intel:sycl Apr 7, 2025
24 of 25 checks passed
dm-vodopyanov (Contributor) commented:

Just in case: if some check fails due to a sporadic failure in CI, it is better to restart it. If the CI failure happens more than once, there should be a GH issue linked to this PR.

aelovikov-intel (Contributor) commented:

> Just in case: if some check fails due to a sporadic failure in CI, it is better to restart it. If the CI failure happens more than once, there should be a GH issue linked to this PR.

It's not black and white. If CI is heavily loaded and people start doing such restarts, that won't help anybody.

pbalcer (Contributor, Author) commented Apr 7, 2025

>> Just in case: if some check fails due to a sporadic failure in CI, it is better to restart it. If the CI failure happens more than once, there should be a GH issue linked to this PR.
>
> It's not black and white. If CI is heavily loaded and people start doing such restarts, that won't help anybody.

Right, in this case restarting would only put additional strain on CI. These scripts don't touch sycl or its tests, only the benchmark CI scripts, which currently run on separate infrastructure.

dm-vodopyanov (Contributor) commented:

>>> Just in case: if some check fails due to a sporadic failure in CI, it is better to restart it. If the CI failure happens more than once, there should be a GH issue linked to this PR.
>>
>> It's not black and white. If CI is heavily loaded and people start doing such restarts, that won't help anybody.
>
> Right, in this case restarting would only put additional strain on CI. These scripts don't touch sycl or its tests, only the benchmark CI scripts, which currently run on separate infrastructure.

It's not about this patch: if something is broken in CI, we need to identify it and report it.
