
Commit 4778d42

authored by amitraj
Jenkins and Docs minor bug fix (#162)
* Added QEFF_HOME for CI setup

  Signed-off-by: amitraj <[email protected]>

* Fixed broken links and updated docs

  Signed-off-by: amitraj <[email protected]>

---------

Signed-off-by: amitraj <[email protected]>
1 parent: b8cb759 · commit: 4778d42

File tree

- README.md
- docs/source/introduction.md
- scripts/Jenkinsfile

3 files changed: +15, -3 lines changed


README.md

Lines changed: 4 additions & 1 deletion
@@ -8,6 +8,8 @@
 *Latest news* :fire: <br>
 
 - [coming soon] Support for more popular [models](https://quic.github.io/efficient-transformers/source/validate.html#models-coming-soon) and inference optimization technique speculative decoding <br>
+- [09/2024] [AWQ](https://arxiv.org/abs/2306.00978)/[GPTQ](https://arxiv.org/abs/2210.17323) 4-bit quantized models are supported
+- [09/2024] Now we support [PEFT](https://huggingface.co/docs/peft/index) models
 - [09/2024] Added support for [Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B)
 - [09/2024] Added support for [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
 - [09/2024] Added support for [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct)
@@ -27,6 +29,7 @@
 - [05/2024] Added support for [Mixtral-8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) & [Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1).
 - [04/2024] Initial release of [efficient transformers](https://github.com/quic/efficient-transformers) for seamless inference on pre-trained LLMs.
 
+
 # Overview
 
 ## Train anywhere, Infer on Qualcomm Cloud AI with a Developer-centric Toolchain
@@ -77,7 +80,7 @@ For more details about using ``QEfficient`` via Cloud AI 100 Apps SDK, visit [Li
 * [Quick Start Guide](https://quic.github.io/efficient-transformers/source/quick_start.html#)
 * [Python API](https://quic.github.io/efficient-transformers/source/hl_api.html)
 * [Validated Models](https://quic.github.io/efficient-transformers/source/validate.html)
-* [Models coming soon](models-coming-soon)
+* [Models coming soon](https://quic.github.io/efficient-transformers/source/validate.html#models-coming-soon)
 
 > Note: More details are here: https://quic.github.io/cloud-ai-sdk-pages/latest/Getting-Started/Model-Architecture-Support/Large-Language-Models/llm/

docs/source/introduction.md

Lines changed: 10 additions & 2 deletions
@@ -22,8 +22,16 @@ For other models, there is comprehensive documentation to inspire upon the chang
 
 ***Latest news*** : <br>
 
-- [coming soon] Support for more popular [models](coming_soon_models) and inference optimization technique speculative decoding <br>
-- [08/2024] Added Support for inference optimization technique ```continuous batching```
+- [coming soon] Support for more popular [models](https://quic.github.io/efficient-transformers/source/validate.html#models-coming-soon) and inference optimization technique speculative decoding <br>
+- [09/2024] [AWQ](https://arxiv.org/abs/2306.00978)/[GPTQ](https://arxiv.org/abs/2210.17323) 4-bit quantized models are supported
+- [09/2024] Now we support [PEFT](https://huggingface.co/docs/peft/index) models
+- [09/2024] Added support for [Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B)
+- [09/2024] Added support for [Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)
+- [09/2024] Added support for [Meta-Llama-3.1-70B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct)
+- [09/2024] Added support for [granite-20b-code-base](https://huggingface.co/ibm-granite/granite-20b-code-base-8k)
+- [09/2024] Added support for [granite-20b-code-instruct-8k](https://huggingface.co/ibm-granite/granite-20b-code-instruct-8k)
+- [09/2024] Added support for [Starcoder1-15B](https://huggingface.co/bigcode/starcoder)
+- [08/2024] Added support for inference optimization technique ```continuous batching```
 - [08/2024] Added support for [Jais-adapted-70b](https://huggingface.co/inceptionai/jais-adapted-70b)
 - [08/2024] Added support for [Jais-adapted-13b-chat](https://huggingface.co/inceptionai/jais-adapted-13b-chat)
 - [08/2024] Added support for [Jais-adapted-7b](https://huggingface.co/inceptionai/jais-adapted-7b)

scripts/Jenkinsfile

Lines changed: 1 addition & 0 deletions
@@ -36,6 +36,7 @@ pipeline
 sh '''
 . preflight_qeff/bin/activate
 export TOKENIZERS_PARALLELISM=false
+export QEFF_HOME=$PWD
 pytest tests --ignore tests/cloud --junitxml=tests/tests_log1.xml
 pytest tests/cloud --junitxml=tests/tests_log2.xml
 junitparser merge tests/tests_log1.xml tests/tests_log2.xml tests/tests_log.xml
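The added `export QEFF_HOME=$PWD` sets QEFF_HOME to the Jenkins workspace before the pytest runs; presumably this keeps whatever the toolchain writes under QEFF_HOME inside the checked-out workspace so CI runs stay self-contained and easy to clean up. Below is a minimal, illustrative Python sketch of how such an environment variable can be resolved into an artifact directory. The helper name and the fallback path are assumptions made for this example, not QEfficient's actual implementation.

```python
# Illustrative sketch only: resolve an artifact/cache directory from QEFF_HOME.
# The fallback location and helper name are assumptions, not QEfficient's real API.
import os
from pathlib import Path


def resolve_artifact_dir() -> Path:
    """Prefer QEFF_HOME (as exported in the Jenkins stage above);
    otherwise fall back to a dot-directory under the user's home."""
    env_value = os.environ.get("QEFF_HOME")
    if env_value:
        return Path(env_value).expanduser().resolve()
    return Path.home() / ".qeff_artifacts"


if __name__ == "__main__":
    print(f"Artifacts would be written under: {resolve_artifact_dir()}")
```

With `QEFF_HOME=$PWD` exported as in the CI stage above, a helper like this would resolve to the workspace directory, which is what the commit message's "Added QEFF_HOME for CI setup" refers to.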
