
Request to Include MMLU and Winogrande human-translated into 11 African languages in HELM Leaderboard Frontend #3326

Open
Adam-Kasumovic opened this issue Feb 7, 2025 · 0 comments

Hello HELM authors,

I am one of the authors of the benchmark "MMLU and Winogrande human-translated into 11 African languages", which was added to the HELM repo about a month ago via PR #3237 and provides a benchmark for testing LLMs on low-resource languages. However, when I checked the leaderboard website (https://crfm.stanford.edu/helm/), I noticed that our benchmark was not present. Could you let us know the process for getting it added to the website (e.g., with a page like this one: https://crfm.stanford.edu/helm/cleva/latest/)? We would be glad to contribute any code, code modifications, or assets needed; please just let us know the steps to take and we will follow them promptly. Thank you very much!

Links to the benchmark information are provided again below for reference.

Repository
https://github.com/InstituteforDiseaseModeling/Bridging-the-Gap-Low-Resource-African-Languages
Dataset
https://huggingface.co/datasets/Institute-Disease-Modeling/mmlu-winogrande-afr
Paper
https://arxiv.org/abs/2412.12417
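
For quick inspection, the dataset can be pulled from the Hugging Face Hub with the `datasets` library. This is just a minimal sketch for reviewers; the subset names are discovered at runtime rather than assumed, and the actual language/task organization is documented on the dataset page above.

```python
from datasets import get_dataset_config_names, load_dataset

REPO_ID = "Institute-Disease-Modeling/mmlu-winogrande-afr"

# List the available subsets (exact names depend on how the dataset is organized).
configs = get_dataset_config_names(REPO_ID)
print(configs)

# Load one subset; replace configs[0] with the desired language/task combination.
dataset = load_dataset(REPO_ID, configs[0])
print(dataset)
```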
