A smart fuzzing tool that leverages Large Language Models to systematically test the Tact compiler for the TON blockchain.
LLM-Fuzz employs AI agents to intelligently fuzz test the Tact compiler by:
- Testing edge cases and unusual code patterns
- Comparing actual compiler behavior against documentation
- Identifying bugs, crashes, and documentation mismatches
- Generating minimal reproducible examples for each issue
- Clone this repository:

  ```shell
  git clone https://github.com/tact-lang/llm-fuzz.git
  cd llm-fuzz
  ```

- Install dependencies:

  ```shell
  pip install -r requirements.txt
  ```

- Ensure you have the Tact compiler installed and available in your PATH.

- Set your OpenAI API key:

  ```shell
  export OPENAI_API_KEY="your-api-key-here"
  ```

Run the fuzzing tool:

```shell
python main.py
```
The tool will:
- Launch 20 parallel LLM agents to test different aspects of the compiler
- Log all activities and findings in real time
- Store successful code snippets in the `snippets/` directory
- Track found issues in `found_issues.md`
- Record reported issues in `reported_issues.md`

To stop the tool, press Ctrl+C in your terminal.
- Each agent is initialized with comprehensive instructions to test specific Tact compiler features
- Agents systematically craft test cases to explore edge cases and potential issues
- Snippets are compiled with the actual Tact compiler to verify behavior
- When an issue is found, it is automatically reported with a detailed explanation and minimal example
- New agents are spawned to replace those that complete their tasks
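The compile-and-verify step above can be sketched as a small wrapper around the Tact CLI. This is a hypothetical illustration, not the actual code from `main.py`; the function name and the exact compiler invocation are assumptions:

```python
import subprocess
import tempfile
from pathlib import Path

def compile_snippet(source: str, compiler: str = "tact") -> tuple[bool, str]:
    """Write a Tact snippet to a temp file, run the compiler on it,
    and return (success, combined compiler output)."""
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / "snippet.tact"
        path.write_text(source)
        try:
            proc = subprocess.run(
                [compiler, str(path)],
                capture_output=True, text=True, timeout=60,
            )
        except FileNotFoundError:
            return False, f"{compiler} not found on PATH"
        except subprocess.TimeoutExpired:
            return False, "compiler timed out (possible hang bug)"
        return proc.returncode == 0, proc.stdout + proc.stderr
```

A timeout matters for fuzzing: a snippet that hangs the compiler is itself a finding, not just a failed compile.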
- `found_issues.md`: Documents already known but not yet resolved issues. This prevents agents from repeatedly reporting the same issues and helps focus testing efforts on undiscovered problems.
- `reported_issues.md`: Records new issues discovered during test runs. Each report includes a detailed explanation, reproducible code, and documentation references.
- `snippets/`: Contains all successfully compiled code snippets for reference and further analysis.
- `tmp/`: Stores all temporary files generated during testing, including compilation artifacts and files that failed to compile.
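Appending a finding to `reported_issues.md` could look like the following sketch. The function name and entry layout are assumptions for illustration; the real report format produced by `main.py` may differ:

```python
import textwrap
from pathlib import Path

def report_issue(title: str, explanation: str, snippet: str,
                 path: str = "reported_issues.md") -> None:
    """Append one issue entry: a heading, an explanation, and the
    minimal reproducing snippet as an indented code block."""
    entry = (
        f"## {title}\n\n{explanation}\n\n"
        + textwrap.indent(snippet, "    ") + "\n\n"
    )
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(entry)
```

Appending (rather than overwriting) lets many parallel agents accumulate reports in one file across a run.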
Key configuration options (in `main.py`):

- `MODEL_NAME`: The OpenAI model to use (default: `"o3-mini"`)
- `REASONING`: The reasoning effort level (default: `"medium"`)
- `num_agents`: Number of parallel agents to run (default: 20)
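Put together, the defaults above would appear in `main.py` roughly as follows (a sketch of the knobs using the defaults listed in this README; the exact surrounding code in `main.py` may differ):

```python
# Tuning knobs for a fuzzing run (defaults as documented above).
MODEL_NAME = "o3-mini"   # OpenAI model used by the agents
REASONING = "medium"     # reasoning effort level passed to the model
num_agents = 20          # number of agents fuzzing in parallel
```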
- `main.py`: Main fuzzing engine
- `requirements.txt`: Python dependencies
- `snippets/`: Successful code snippets
- `tmp/`: Temporary files
- `found_issues.md`: Documented issues
- `reported_issues.md`: Issues reported by agents
This approach can be scaled in three dimensions:
- Horizontal: Increase the number of parallel agents
- Vertical: Use more advanced LLM models
- Depth: Adjust prompts for deeper exploration of specific features
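Horizontal scaling, combined with the respawn-on-completion behavior described earlier, can be sketched as a worker pool that keeps a fixed number of agents in flight. This is an illustrative sketch (`run_agent`, `NUM_AGENTS`, and `fuzz_run` are assumed names, not the actual API of `main.py`):

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

NUM_AGENTS = 20  # horizontal scale: raise this for more parallel agents

def run_agent(agent_id: int) -> str:
    # Placeholder for one agent's full fuzzing session.
    return f"agent-{agent_id} done"

def fuzz_run(total_tasks: int = 100) -> list[str]:
    """Keep NUM_AGENTS agents running, spawning a replacement
    each time one completes, until total_tasks have finished."""
    results: list[str] = []
    next_id = 0
    pending = set()
    with ThreadPoolExecutor(max_workers=NUM_AGENTS) as pool:
        while next_id < total_tasks or pending:
            # Top up the pool so NUM_AGENTS agents are always in flight.
            while next_id < total_tasks and len(pending) < NUM_AGENTS:
                pending.add(pool.submit(run_agent, next_id))
                next_id += 1
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            results.extend(f.result() for f in done)
    return results
```

Replacing agents as they finish (rather than batching) keeps utilization steady even when individual sessions vary widely in length.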
- Console output provides real-time updates on agent activities
- The `snippets/` directory contains all successfully compiled test cases
- `reported_issues.md` details all issues found with reproduction steps
This project is licensed under the MIT License - see the LICENSE file for details.