
Add deep link to Local Chat for AiBrow #1065

Open

rhys101 wants to merge 1 commit into main

Conversation


@rhys101 rhys101 commented Dec 5, 2024

Hi, I've added AiBrow to the list of local apps. AiBrow is a new open-source (https://github.com/axonzeta/aibrow) all-in-one local LLM extension for Chromium-based browsers and Firefox. It comes bundled with llama.cpp and works on macOS, Windows, and Linux.

It implements the new Chrome AI Prompt API (https://github.com/explainers-by-googlers/prompt-api) under its own namespace (window.aibrow) and polyfills the proposed window.ai Chrome API in other browsers. Beyond the Prompt API implementation, AiBrow supports different models (such as GGUF models from Hugging Face), embeddings, grammar-constrained output, and more. The installer also includes the Q4_K_M quantization of SmolLM2 1.7B Instruct as the default model, ready to go out of the box.
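For context, here is roughly how a page would talk to the extension. This is a minimal sketch assuming AiBrow mirrors the explainer's languageModel interface under window.aibrow; the method names (capabilities, create, prompt) follow one revision of the Prompt API explainer and may differ in AiBrow's actual API:

```ts
// Minimal sketch assuming AiBrow mirrors the Prompt API explainer's
// languageModel interface under window.aibrow; method names follow the
// explainer and may differ in AiBrow's actual API.
const aibrow = (window as any).aibrow;

async function ask(question: string): Promise<string> {
  // Check that an on-device model is available before creating a session.
  const caps = await aibrow.languageModel.capabilities();
  if (caps.available === "no") {
    throw new Error("No on-device model available");
  }
  // Create a session, send one prompt, and clean up.
  const session = await aibrow.languageModel.create();
  const answer = await session.prompt(question);
  session.destroy();
  return answer;
}
```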

We just released a new version today with support for Hugging Face models and deep linking. If you would like to check it out, please visit: https://aibrow.ai/
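For reference, the entry this PR adds would look roughly like the sketch below, assuming a registry shaped like Hugging Face's local-apps list. The field names and the deep-link URL format here are illustrative, not the exact diff:

```ts
// Hypothetical local-apps registry entry; the field names follow the general
// shape of Hugging Face's local-app entries, and the deep-link URL is
// illustrative rather than AiBrow's actual scheme.
const aibrow = {
  prettyLabel: "AiBrow",
  docsUrl: "https://aibrow.ai",
  mainTask: "text-generation",
  // Only show the badge on models AiBrow can actually load (GGUF files).
  displayOnModelPage: (model: { tags: string[] }) => model.tags.includes("gguf"),
  // Deep link that opens the model directly in AiBrow's Local Chat.
  deeplink: (model: { id: string }) =>
    new URL(`https://aibrow.ai/install?hf=${encodeURIComponent(model.id)}`),
};
```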

@Vaibhavs10 Vaibhavs10 (Member) left a comment


Hi @rhys101 - sorry for the delay in getting back to you! It has been a bit crazy at work! Is there a limit on the size of GGUF models you can run via AiBrow?

In addition, could you send us an SVG of the logo as well?


rhys101 commented Jan 29, 2025

Hey @Vaibhavs10
[Attachment: aibrow-mono logo SVG]

There's no particular limit on the size of the GGUFs. AiBrow uses llama.cpp and node-llama-cpp under the hood, and it calculates a suitability score before attempting to run any particular GGUF on the machine. If the model is too big for the hardware, it gets a score of 0 and AiBrow refuses to run it. So a new M4 Pro with 128 GB of RAM will run a larger GGUF than an 8 GB M1 Air, for example.
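To illustrate the idea (this is not AiBrow's actual code), a suitability check can be sketched as comparing the model's estimated memory footprint against what the machine has free; the function name, overhead factor, and scoring formula below are all hypothetical:

```ts
import * as os from "node:os";

// Hypothetical sketch of a suitability score: compare a GGUF's estimated
// memory footprint against the machine's free memory. An illustration of
// the idea only, not AiBrow's implementation.
function suitabilityScore(modelSizeBytes: number): number {
  // Rough rule of thumb: the model weights plus some overhead for the
  // KV cache and runtime buffers must fit in memory.
  const estimatedFootprint = modelSizeBytes * 1.2;
  const available = os.freemem();
  if (estimatedFootprint > available) {
    return 0; // Too big for this machine: refuse to run.
  }
  // Otherwise, score by how much headroom remains (0..1].
  return 1 - estimatedFootprint / available;
}

// Example: a roughly 1.1 GB Q4_K_M file.
console.log(suitabilityScore(1.1 * 1024 ** 3));
```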

SVG attached.
Thanks!
