# lmagi: thinking machine

local model / language model Augmented Generative Intelligence
lmagi showcases multi-model reasoning with persistent memory and a modern web UI powered by NiceGUI. The preferred way to run is via the native desktop wrapper lmagi_gui.py, which launches the backend and embeds the web app.
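For context on how a NiceGUI app can be wrapped in a native desktop window, here is a minimal stand-alone sketch of that pattern. It is not lmagi_gui.py's actual code; the label text and window title are illustrative, and native mode requires NiceGUI's optional pywebview dependency.

```python
# Minimal sketch of NiceGUI's native desktop mode; an illustration of the
# pattern, not lmagi_gui.py itself.
from nicegui import ui

ui.label('lmagi web UI would be mounted here')  # placeholder content

if __name__ in {'__main__', '__mp_main__'}:
    # native=True opens a desktop window (needs the pywebview extra) instead
    # of a browser tab; port 8080 matches the backend default in this README.
    ui.run(native=True, title='lmagi', port=8080)
```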
This is the development version of ezAGI becoming lmagi, en route to the easyAGI roadmap, built to display its reasoning capabilities as log files.
The point of departure for Ollama integration can be found at lmagi.
Project development has migrated to and now aligns with the easyAGI roadmap.
Source code: lmagi
An exercise in multi-model integration for LLM reasoning enhancement.
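As one way to picture that integration, the sketch below routes a single prompt to Groq, OpenAI, or Together.ai through their OpenAI-compatible chat endpoints. The helper name `chat`, the provider table, and the choice of the `openai` client are illustrative assumptions, not lmagi's actual code.

```python
# Illustrative multi-provider dispatch (not lmagi's actual code): each
# provider exposes an OpenAI-compatible chat endpoint, so one client class
# serves all three by switching base_url and API key.
import os
from openai import OpenAI

PROVIDERS = {
    'openai':   {'base_url': 'https://api.openai.com/v1',      'key_env': 'OPENAI_API_KEY'},
    'groq':     {'base_url': 'https://api.groq.com/openai/v1', 'key_env': 'GROQ_API_KEY'},
    'together': {'base_url': 'https://api.together.xyz/v1',    'key_env': 'TOGETHER_API_KEY'},
}

def chat(provider: str, model: str, prompt: str) -> str:
    """Send one prompt to the chosen provider and return the reply text."""
    cfg = PROVIDERS[provider]
    client = OpenAI(base_url=cfg['base_url'], api_key=os.environ[cfg['key_env']])
    reply = client.chat.completions.create(
        model=model,
        messages=[{'role': 'user', 'content': prompt}],
    )
    return reply.choices[0].message.content
```

Pass whichever model id the selected provider currently serves; only the provider's API key needs to be present in the environment.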
aug·ment·ed
/ôɡˈmen(t)əd/
adjective: augmented
having been made greater in size or value

gen·er·a·tive
/ˈjen(ə)rədiv, ˈjenəˌrādiv/
adjective: generative
denoting an approach to any field of linguistics that involves applying a finite set of rules to linguistic input in order to produce all and only the well-formed items of a language
relating to or capable of production or reproduction

in·tel·li·gence
/inˈteləj(ə)ns/
noun: intelligence
the ability to acquire and apply knowledge and skills

## lmAGI
An expression of enhanced reasoning for LLMs, with short-term memory under ./memory/stm and log files that highlight internal reasoning as a working concept of machine reasoning.
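The following is a hypothetical sketch of how exchanges could be persisted to ./memory/stm and mirrored into a reasoning log. Only the ./memory/stm path comes from this README; the JSON layout, the helper name `save_memory`, and the log location are illustrative assumptions, not lmagi's actual implementation.

```python
# Hypothetical sketch: persist each exchange to ./memory/stm and append the
# model's internal reasoning to a log file. Only the ./memory/stm path comes
# from the README; everything else here is an illustrative assumption.
import json
import logging
import time
from pathlib import Path

STM_DIR = Path('./memory/stm')
LOG_DIR = Path('./logs')

STM_DIR.mkdir(parents=True, exist_ok=True)
LOG_DIR.mkdir(parents=True, exist_ok=True)
logging.basicConfig(
    filename=LOG_DIR / 'reasoning.log',
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(message)s',
)

def save_memory(prompt: str, response: str, reasoning: str) -> Path:
    """Write one exchange as a timestamped JSON entry and log the reasoning."""
    entry = {
        'timestamp': time.time(),
        'prompt': prompt,
        'response': response,
        'internal_reasoning': reasoning,
    }
    path = STM_DIR / f'{int(entry["timestamp"])}.json'
    path.write_text(json.dumps(entry, indent=2))
    logging.info('reasoning for %s: %s', path.name, reasoning)
    return path
```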
Requirements:

- Python ≥ 3.9
- pip
- API keys (one or more): Groq API key • OpenAI API key • Together.ai API key
Install and set up (Linux/macOS):

```bash
git clone https://github.com/llamagi/lmagi
cd lmagi
chmod +x setup.sh
./setup.sh   # creates ./venv, installs deps, scaffolds .env
```

Run the GUI (recommended):

```bash
source venv/bin/activate
python lmagi_gui.py
```

Run the backend directly (browser opens automatically):

```bash
source venv/bin/activate
python lmagi.py   # serves at http://localhost:8080
```

On Windows, open Command Prompt and run:
```bat
git clone https://github.com/llamagi/lmagi
cd lmagi
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
python lmagi_gui.py
```

If pip is not on PATH, you may need:

```bat
python -m pip install -r requirements.txt
```

To get started:

- Open the app (GUI or `python lmagi.py` → http://localhost:8080).
- Go to Settings → API Keys to add one or more provider keys.
- Return to Chat and select the model/provider in the footer menu.
- Start chatting; autonomous reasoning can be toggled from the header (a minimal toggle sketch follows).
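For illustration only, a header toggle like the one described above could be built in NiceGUI roughly as follows; the state dict, labels, and layout are assumptions, not lmagi's actual UI code.

```python
# Hypothetical sketch of a header switch controlling an "autonomous
# reasoning" flag; illustrative only, not lmagi's actual UI code.
from nicegui import ui

state = {'autonomous_reasoning': False}  # shared app state

with ui.header():
    ui.label('lmagi')
    ui.switch(
        'Autonomous reasoning',
        on_change=lambda e: state.update(autonomous_reasoning=e.value),
    )

if __name__ in {'__main__', '__mp_main__'}:
    ui.run(port=8080)
```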


