Use AI Agents to control Blender with natural language.
- Focused on AI as a tool, not replacement
- Multiple backend options:
  - Local inference with llama.cpp or Ollama
  - Remote inference with Hugging Face, Anthropic, or OpenAI
- Optional LLaMA-Mesh integration for local mesh understanding and generation
- Optional Hyper3D integration for high-fidelity 3D mesh generation
- Go to the Latest Release page
- Download the addon ZIP file for your platform
- In Blender, go to `Edit -> Preferences -> Add-ons ->` top-right arrow `-> Install from Disk...`
- Select the downloaded ZIP file
- In Blender, go to `Edit -> Preferences -> Add-ons -> meshgen`
- Select either `Local` or `Remote` and follow the instructions below
Run models locally, for free, directly in Blender.
Only select this option if you:
- Have a powerful NVIDIA GPU with at least 8GB of VRAM
- Installed a `cuda` version of the addon during Installation
- Prefer running the model directly in Blender instead of a local Ollama server
To set up the local backend, you can either:
- Click `Download Recommended Model` to download Meta-Llama-3.1-8B-Instruct-GGUF
- Manually download a `.GGUF` model and put it in the models folder (located by clicking the folder icon)
MeshGen supports a variety of remote backends.
- Ollama to run models on a free local server
- Hugging Face, Anthropic, or OpenAI to run powerful models via API
Hugging Face is recommended for most users, providing limited free use of powerful models.
Run models locally, for free, with an Ollama server.
- Install Ollama
- Run `ollama serve` in the terminal
- Select `Ollama` in the `Provider` dropdown
- Enter your Ollama server endpoint and model name (the defaults should work for most users)
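The default endpoint mentioned above is Ollama's standard local server address. As an illustration only (not the addon's actual code), here is a minimal sketch of the kind of request a client sends to Ollama's chat API; the model name `llama3.1` is an assumption and must already be pulled with `ollama pull`:

```python
import json

# Ollama's default local endpoint, started by `ollama serve`.
OLLAMA_ENDPOINT = "http://localhost:11434"

def build_ollama_chat_request(model, prompt):
    """Build the URL and JSON body for Ollama's /api/chat endpoint."""
    url = f"{OLLAMA_ENDPOINT}/api/chat"
    body = {
        "model": model,  # assumed example: "llama3.1"
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for a single JSON response instead of a stream
    }
    return url, json.dumps(body)

url, payload = build_ollama_chat_request("llama3.1", "Create a snowman")
print(url)  # http://localhost:11434/api/chat
```

If this URL is unreachable, `ollama serve` is likely not running; that is the first thing to check when the addon cannot connect.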
Run a wide variety of models such as Llama, DeepSeek, Mistral, Qwen, and more via the Hugging Face API.
- Create an account on Hugging Face
- Go to hf.co/settings/tokens and create a new token
- Select `Hugging Face` in the `Provider` dropdown
- Enter your Hugging Face token in the `API Key` field
- Optionally, change the `Model ID` to the model you want to use (e.g. `meta-llama/Llama-3.3-70B-Instruct`)
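For reference, the token created at hf.co/settings/tokens is sent as a Bearer credential. The sketch below shows the general shape of an authenticated chat request; the router URL is an assumption about Hugging Face's OpenAI-compatible endpoint, and the token value is a placeholder:

```python
import json

def build_hf_request(token, model_id, prompt):
    """Sketch of an authenticated chat request to the Hugging Face Inference API."""
    url = "https://router.huggingface.co/v1/chat/completions"  # assumed endpoint
    headers = {
        "Authorization": f"Bearer {token}",  # Hugging Face tokens start with "hf_"
        "Content-Type": "application/json",
    }
    body = {
        "model": model_id,  # e.g. "meta-llama/Llama-3.3-70B-Instruct"
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(body)
```

Note that gated models (such as the Llama family) also require accepting the license on the model's Hugging Face page before the token grants access.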
Run Anthropic models (i.e. Claude) with the Anthropic API.
- Create an account on Anthropic
- Go to console.anthropic.com/settings/keys and create a new key
- Select `Anthropic` in the `Provider` dropdown
- Enter your Anthropic key in the `API Key` field
- Optionally, change the `Model ID` to the model you want to use (e.g. `claude-3-5-sonnet-latest`)
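Anthropic's Messages API authenticates differently from the other providers: the key goes in an `x-api-key` header rather than a Bearer token, and a versioning header is required. A minimal sketch (illustrative only, not the addon's code):

```python
import json

ANTHROPIC_VERSION = "2023-06-01"  # required versioning header for the Messages API

def build_anthropic_request(api_key, model_id, prompt):
    """Build headers and body for Anthropic's Messages API (POST /v1/messages)."""
    url = "https://api.anthropic.com/v1/messages"
    headers = {
        "x-api-key": api_key,  # Anthropic uses x-api-key, not a Bearer token
        "anthropic-version": ANTHROPIC_VERSION,
        "content-type": "application/json",
    }
    body = {
        "model": model_id,  # e.g. "claude-3-5-sonnet-latest"
        "max_tokens": 1024,  # required field in the Messages API
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(body)
```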
Run OpenAI models (i.e. ChatGPT) with the OpenAI API.
- Create an account on OpenAI
- Go to platform.openai.com/api-keys and create a new secret key
- Select `OpenAI` in the `Provider` dropdown
- Enter your OpenAI secret key in the `API Key` field
- Optionally, change the `Model ID` to the model you want to use (e.g. `gpt-4o-mini`)
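As with Hugging Face, the OpenAI secret key is sent as a Bearer credential to the Chat Completions endpoint. An illustrative sketch (the key value is a placeholder):

```python
import json

def build_openai_request(api_key, model_id, prompt):
    """Build headers and body for OpenAI's Chat Completions API."""
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # secret keys start with "sk-"
        "Content-Type": "application/json",
    }
    body = {
        "model": model_id,  # e.g. "gpt-4o-mini"
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(body)
```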
To enable optional integrations, go to Edit -> Preferences -> Add-ons -> meshgen -> Integrations.
When these are enabled, the agent will automatically be given access to these tools, depending on the context.
Use LLaMA-Mesh for local mesh understanding and generation.
Only select this option if you:
- Have a powerful NVIDIA GPU with at least 8GB of VRAM
- Installed a `cuda` version of the addon during Installation
- Are using a remote API backend (e.g. Hugging Face, Anthropic, or OpenAI), as LLaMA-Mesh will load locally on your machine
To enable LLaMA-Mesh, click `Load LLaMA-Mesh` and wait for the model to load.
Use Hyper3D for high-fidelity 3D mesh generation.
To enable:
- Check `Enable Hyper3D`
- Enter your Hyper3D API key in the `API Key` field (free use is currently provided with the `awesomemcp` key)
Note: generation may take several minutes per mesh.
- Press `N -> MeshGen` (or `View -> Sidebar ->` select the `MeshGen` tab)
- Enter a prompt, for example: `Create a snowman`
- Click `Submit`
- Find errors in the console:
  - Windows: In Blender, go to `Window -> Toggle System Console`
  - Mac/Linux: Launch Blender from the terminal
- Report errors in Issues



