mchat is an entirely text-based LLM chat application with support for different LLMs and pre-established, customizable prompt 'personas' that support multi-shot prompts. mchat uses the excellent Textual framework for a GUI-like experience complete with mouse support and should run anywhere a decent terminal is available, including over SSH.
NEW: As good as Textual is, I needed a more graphically capable UI in order to support embedded images, LaTeX, and graphing. To that end, I have migrated the frontend to NiceGUI. The Textual code is still present and should continue to work, but I will likely not be maintaining it.
All that is needed is an OpenAI API key. Azure OpenAI will also work, but you will need to disable the Dall-E support if you don't also have an OpenAI API key.
Features:
- Easy-to-modify agents and teams
- Copy text to the paste buffer by clicking on the response
- Agent support
- Support Multi-line prompts
- History and reloadable sessions with local database storage
- Support for image creation (currently just DALL-E)
- Support for functions/tools
- Round Robin multi-agent support
- Swarm multi-agent support
- Selector multi-agent support
- Cancellation buttons to stop a running team
- Smarter handling of user_proxy_agent
- Nice display for 'thought' messages for reasoning models
- AWS Bedrock support
- Make sure you have Python and uv installed on your system. If not, you can download and install them from their official websites.
- Open a terminal or command prompt and navigate to the project directory.
- Run the following command to install the project dependencies:

  uv sync --all-extras

  This will create a virtual environment and install all the required dependencies specified in the pyproject.toml file.
Configuration is done within three files: settings.toml, .secrets.toml (optional, but recommended), and agents.yaml.
Open the settings.toml file in a text editor to configure your application. Here's an explanation of the provided configuration options:
Section names need to start with models. (with a period) and contain no other periods beyond those in the pattern models.type.model_id. The model_id is what will show in the interface.
[models.chat.gpt-4o]
api_key = "@format {this.openai_api_key}"
model = "gpt-4o"
api_type = "open_ai"
base_url = "https://api.openai.com/v1"
Image models and settings here are for explicitly calling the image models from the prompt. The generate_image tool does not use these settings, only the API key.
Chat Models
- api_type: ["open_ai", "azure"]
- model_type: "chat"
- model: "the OpenAI name for the model"
- api_key: "your key or a dynaconf lookup to get the key"
Azure Chat Models (additional)
- azure_endpoint: "URL for your endpoint"
- azure_deployment: "the azure name for the model in your deployment"
- api_version: "api version"
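Putting the Azure fields together, a hypothetical Azure chat model entry might look like the following sketch (the endpoint, deployment, and API-version values are placeholders you must replace with your own):

```toml
# Hypothetical Azure chat model entry - all values below are placeholders
[models.chat.azure-gpt-4o]
api_type = "azure"
model_type = "chat"
model = "gpt-4o"
api_key = "@format {this.ms_models_api_key}"
azure_endpoint = "https://your-resource.openai.azure.com"
azure_deployment = "your-gpt-4o-deployment"
api_version = "your-api-version"
```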
Image Models
- api_type: ["open_ai", "azure"]
- model_type: "image"
- model: "name of model"
- size: "size of images to create"
- num_images: "number of images to create"
- api_key: "your key or dynaconf lookup to get the key"
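As a sketch, an image model entry combining the fields above might look like this (the section name and field values are illustrative, not required names):

```toml
# Hypothetical image model entry - values are illustrative
[models.image.dall-e-3]
api_type = "open_ai"
model_type = "image"
model = "dall-e-3"
size = "1024x1024"
num_images = 1
api_key = "@format {this.openai_api_key}"
```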
- default_model: Specifies the default model to use.
- default_temperature: Specifies the default temperature for generating text.
- default_persona: Specifies the default persona for generating text.
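For example, the defaults could be set in settings.toml along these lines (the model and persona names are placeholders for ones you have actually defined):

```toml
# Hypothetical defaults - substitute a model_id and persona you have configured
default_model = "gpt-4o"
default_temperature = 0.7
default_persona = "default"
```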
mchat maintains a memory of the current chat in order to retain context in long conversations. When the retained memory exceeds the size the model supports, it will summarize the conversation to reduce its size. Since this can be called often in longer chats, it is recommended to use an inexpensive model.
You can configure the following properties:
- memory_model: Specifies the model to use for memory; use one of the models you specified in your model lists.
- memory_model_temperature: Specifies the temperature for the memory model.
- memory_model_max_tokens: Specifies the maximum tokens for the memory model.
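A sketch of the memory settings, assuming an inexpensive model named in your model lists (all values below are illustrative):

```toml
# Hypothetical memory settings - pick a cheap model you have defined
memory_model = "gpt-4o-mini"
memory_model_temperature = 0.1
memory_model_max_tokens = 2000
```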
Note that some configuration options, such as API keys, are meant to be kept in a separate .secrets.toml file. You can include the following configuration in that file:
# In .secrets.toml
# dynaconf_merge = true
# Replace the following with your actual API keys
# openai_models_api_key = "oai_ai_api_key_goes_here"
# ms_models_api_key = "ms_openai_api_key_goes_here"
mchat comes with a default persona, two example agents ('linux computer' and 'financial manager'), and example round-robin and selector teams. Additional agents and teams can be added in an agents.yaml file at the top level (same level as this README) using a similar pattern to mchat/default_personas.yaml in the code. When configuring personas, the extra_context list allows you to represent a multi-shot prompt; see the 'linux computer' persona in mchat/default_personas.yaml as an example.
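The authoritative schema is whatever mchat/default_personas.yaml uses; as a rough, hypothetical sketch only, a persona carrying a multi-shot prompt via extra_context might be shaped like this:

```yaml
# Hypothetical sketch - field names follow the description above,
# not necessarily the exact schema in default_personas.yaml
personas:
  linux computer:
    description: "Acts as a Linux terminal"
    prompt: "You are a Linux terminal. Respond only with terminal output."
    extra_context:          # multi-shot examples: prior user/assistant turns
      - ["user", "pwd"]
      - ["assistant", "/home/user"]
```

Check mchat/default_personas.yaml for the real field names before writing your own.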
- Run the application in uv using the following command:

  uv run poe mchat

- Or, activate the virtual environment created by uv using the following command:

  source ./.venv/bin/activate

  and run the application with:

  poe mchat

  or

  python -m mchat.main
Thank you for considering contributing to the project! To contribute, please follow these guidelines:
- Fork the repository and clone it to your local machine.
- Create a new branch for your feature or bug fix:

  git checkout -b feature/your-feature-name

  Replace your-feature-name with a descriptive name for your contribution.
- Make the necessary changes and ensure that your code follows the project's coding conventions and style guidelines, which currently are PEP 8 for style and black for formatting.
- Commit your changes with a clear and descriptive commit message:

  git commit -m "Add your commit message here"

- Push your branch to your forked repository:

  git push origin feature/your-feature-name

- Open a pull request from your forked repository to the main repository's main branch.
- Provide a clear and detailed description of your changes in the pull request. Include any relevant information that would help reviewers understand your contribution.
This project is licensed under the MIT License.
Feel free to reach out to me at @jspv on GitHub