
[Screenshots: lmagi thinking machine · local model Augmented Generative Intelligence · Ollama models are recognized in the Ollama tab · lmAGI display helpers]

lmAGI

local model / language model Augmented Generative Intelligence

lmagi showcases multi-model reasoning with persistent memory and a modern web UI powered by NiceGUI. The preferred way to run is via the native desktop wrapper lmagi_gui.py, which launches the backend and embeds the web app.

This is the development version of ezAGI becoming lmagi, en route to the easyAGI roadmap, with its reasoning capabilities surfaced as log files. The Ollama integration point of departure can be found in lmagi.

Project development has migrated to the easyAGI roadmap.
Source code: lmagi

An exercise in multi-model integration for enhancing LLM reasoning.

aug·ment·ed
/ôɡˈmen(t)əd/

adjective: augmented
    having been made greater in size or value
gen·er·a·tive
/ˈjen(ə)rədiv,ˈjenəˌrādiv/

adjective: generative

    denoting an approach to any field of linguistics that involves applying a finite set of rules to linguistic input in order to produce all and only the well-formed items of a language
    relating to or capable of production or reproduction
in·tel·li·gence
/inˈteləj(ə)ns/

noun: intelligence

    the ability to acquire and apply knowledge and skills

lmAGI is an expression of enhanced reasoning for LLMs, pairing short-term memory stored under ./memory/stm with detailed log files that highlight internal reasoning as a working concept of machine reasoning.
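
A minimal sketch of how a reasoning step might be persisted under ./memory/stm and echoed to a log file. Only the ./memory/stm path comes from this README; the entry schema, file names, and log format below are illustrative assumptions, not lmagi's actual implementation.

# Hypothetical sketch: persist one reasoning step to ./memory/stm and a log file.
# Field names, the log location, and the JSON schema are assumptions.
import json
import logging
import time
from pathlib import Path

STM_DIR = Path("./memory/stm")           # short-term memory directory from the README
LOG_FILE = Path("./logs/reasoning.log")  # assumed log location
STM_DIR.mkdir(parents=True, exist_ok=True)
LOG_FILE.parent.mkdir(parents=True, exist_ok=True)
logging.basicConfig(filename=LOG_FILE, level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

def record_reasoning_step(prompt: str, thought: str, model: str) -> Path:
    """Write one reasoning step to short-term memory and log it."""
    entry = {"timestamp": time.time(), "model": model,
             "prompt": prompt, "thought": thought}
    path = STM_DIR / f"{int(entry['timestamp'] * 1000)}.json"
    path.write_text(json.dumps(entry, indent=2))
    logging.info("reasoning step from %s saved to %s", model, path.name)
    return path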

Requirements

Python ≥ 3.9
pip

API keys (one or more):
  • Groq API key
  • OpenAI API key
  • Together.ai API key

Quick Start (macOS & Linux)

git clone https://github.com/llamagi/lmagi
cd lmagi
chmod +x setup.sh
./setup.sh  # creates ./venv, installs deps, scaffolds .env

Run the GUI (recommended):

source venv/bin/activate
python lmagi_gui.py

Run the backend directly (browser opens automatically):

source venv/bin/activate
python lmagi.py  # serves at http://localhost:8080
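
For orientation, a minimal sketch of the two run modes above, assuming nothing beyond the standard NiceGUI ui.run API; the page content is a placeholder, not lmagi's actual UI.

# Minimal NiceGUI sketch of the two run modes; placeholder content only.
from nicegui import ui

ui.label("lmagi placeholder page")  # stand-in for the real chat UI

# Backend mode (python lmagi.py): serve the web UI at http://localhost:8080
ui.run(port=8080)
# Desktop mode (python lmagi_gui.py) would instead embed the app in a native
# window, e.g. ui.run(native=True), which needs NiceGUI's pywebview extra.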

Quick Start (Windows)

Open Command Prompt and run:

git clone https://github.com/llamagi/lmagi
cd lmagi
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
python lmagi_gui.py

If pip is not on PATH, you may need:

python -m pip install -r requirements.txt

Usage

  • Open the app (GUI, or run python lmagi.py and browse to http://localhost:8080).
  • Go to Settings → API Keys to add one or more provider keys.
  • Return to Chat and select the model/provider in the footer menu (a provider-selection sketch follows this list).
  • Start chatting; autonomous reasoning can be toggled from the header.
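
A hedged sketch of how a provider key added in Settings might be turned into a chat client. The environment-variable names and base URLs are assumptions; Groq and Together.ai expose OpenAI-compatible endpoints, so a single openai client can reach all three providers.

# Hypothetical provider selection: use whichever key is configured.
# Env-var names and base URLs are assumptions, not lmagi's actual config.
import os
from openai import OpenAI

PROVIDERS = {
    "OPENAI_API_KEY":   None,                               # default OpenAI endpoint
    "GROQ_API_KEY":     "https://api.groq.com/openai/v1",   # OpenAI-compatible
    "TOGETHER_API_KEY": "https://api.together.xyz/v1",      # OpenAI-compatible
}

def make_client() -> OpenAI:
    """Return a client for the first provider with a key set."""
    for env_var, base_url in PROVIDERS.items():
        key = os.getenv(env_var)
        if key:
            return OpenAI(api_key=key, base_url=base_url)
    raise RuntimeError("No provider key found; add one in Settings → API Keys.")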

Chat example: Chat Screenshot

Adding API keys: API Keys Screenshot
