Commit f1452ca

doc: update release notes
1 parent 987d668

2 files changed (+8, -3 lines)


README.md

Lines changed: 7 additions & 2 deletions
```diff
@@ -6,7 +6,7 @@ Llama Coder is a better and self-hosted Github Copilot replacement for VS Studio
 ## Features
 * 🚀 As good as Copilot
-* ⚡️ Fast. Works well on consumer GPUs. RTX 4090 is recommended for best performance.
+* ⚡️ Fast. Works well on consumer GPUs. Apple Silicon or RTX 4090 is recommended for best performance.
 * 🔐 No telemetry or tracking
 * 🔬 Works with any language coding or human one.

@@ -27,10 +27,11 @@ Install [Ollama](https://ollama.ai) on dedicated machine and configure endpoint
 ## Models

-Currently Llama Coder supports only Codellama. Model is quantized in different ways, but our tests shows that `q4` is an optimal way to run network. When selecting model the bigger the model is, it performs better. Always pick the model with the biggest size and the biggest possible quantization for your machine. Default one is `codellama:7b-code-q4_K_M` and should work everywhere, `codellama:34b-code-q4_K_M` is the best possible one.
+Currently Llama Coder supports only Codellama. Model is quantized in different ways, but our tests shows that `q4` is an optimal way to run network. When selecting model the bigger the model is, it performs better. Always pick the model with the biggest size and the biggest possible quantization for your machine. Default one is `stable-code:3b-code-q4_0` and should work everywhere and outperforms most other models.

 | Name | RAM/VRAM | Notes |
 |---------------------------|----------|-------|
+| stable-code:3b-code-q4_0  | 3GB      |       |
 | codellama:7b-code-q4_K_M  | 5GB      |       |
 | codellama:7b-code-q6_K    | 6GB      | m     |
 | codellama:7b-code-fp16    | 14GB     | g     |

@@ -48,6 +49,10 @@ Most of the problems could be seen in output of a plugin in VS Code extension ou
 ## Changelog

+## [0.0.11]
+- Added Stable Code model
+- Pause download only for specific model instead of all models
+
 ## [0.0.10]
 - Adding ability to pick a custom model
 - Asking user if they want to download model if it is not available
```
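The Models paragraph in this diff advises picking the biggest model and quantization that fits your machine, and the table's RAM/VRAM column roughly tracks weight size. A minimal sketch of that back-of-envelope estimate (the 1 GB runtime overhead and the effective bits-per-weight figure are assumptions for illustration, not from the commit):

```python
def approx_vram_gb(params_billions: float, bits_per_weight: float,
                   overhead_gb: float = 1.0) -> float:
    """Rough VRAM estimate: weight storage (params * bits / 8)
    plus a fixed overhead for KV cache and runtime buffers.
    The 1 GB overhead is an assumption, not a measured value."""
    weights_gb = params_billions * bits_per_weight / 8
    return weights_gb + overhead_gb

# A 7B model at ~4.5 effective bits per weight (roughly q4_K_M)
# lands near the table's 5GB figure for codellama:7b-code-q4_K_M.
print(round(approx_vram_gb(7, 4.5), 1))  # → 4.9
```

This is only a sizing heuristic; actual usage varies with context length and the Ollama runtime, so treat the table's figures as the authoritative numbers.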

package.json

Lines changed: 1 addition & 1 deletion
```diff
@@ -2,7 +2,7 @@
   "name": "llama-coder",
   "displayName": "Llama Coder",
   "description": "Better and self-hosted Github Copilot replacement",
-  "version": "0.0.10",
+  "version": "0.0.11",
   "icon": "icon.png",
   "publisher": "ex3ndr",
   "repository": {
```
