README.md (+2 −2)
@@ -1,6 +1,6 @@
 # Llama Coder
 
-Llama Coder is a better and self-hosted Github Copilot replacement for VS Studio Code. Llama Coder uses [Ollama](https://ollama.ai) and codellama to provide autocomplete that runs on your hardware. Works best with Mac M1/M2/M3 or with RTX 4090.
+Llama Coder is a better and self-hosted Github Copilot replacement for [VS Code](https://github.com/microsoft/vscode). Llama Coder uses [Ollama](https://ollama.ai) and codellama to provide autocomplete that runs on your hardware. Works best with Mac M1/M2/M3 or with RTX 4090.
@@ -14,7 +14,7 @@ Llama Coder is a better and self-hosted Github Copilot replacement for VS Studio
 
 Minimum required RAM: 16GB is a minimum, more is better since even smallest model takes 5GB of RAM.
 The best way: dedicated machine with RTX 4090. Install [Ollama](https://ollama.ai) on this machine and configure endpoint in extension settings to offload to this machine.
-Second best way: run on MacBook M1/M2/M3 with enougth RAM (more == better, but 10gb extra would be enougth).
+Second best way: run on MacBook M1/M2/M3 with enough RAM (more == better, but 10gb extra would be enough).
 For windows notebooks: it runs good with decent GPU, but dedicated machine with a good GPU is recommended. Perfect if you have a dedicated gaming PC.
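
For readers following the second hunk: offloading to a dedicated machine means running Ollama there and pointing the extension at its endpoint. A minimal sketch of the server side, assuming a hypothetical LAN address of 192.168.1.42 and Ollama's default port 11434:

```sh
# On the dedicated machine (e.g. the RTX 4090 box): bind Ollama to all
# interfaces instead of the default localhost-only address.
OLLAMA_HOST=0.0.0.0 ollama serve

# In a second terminal: fetch a codellama model for the extension to use
# (the exact tag is an assumption; pick whichever codellama variant you prefer).
ollama pull codellama:7b-code
```

On the client, the endpoint to enter in the extension settings would then be `http://192.168.1.42:11434`; the settings field name and supported model tags are best checked in the extension's own settings UI rather than assumed from this diff.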