Adds examples for Tree of Thought prompting #9

1 change: 1 addition & 0 deletions .gitignore
@@ -29,6 +29,7 @@ yarn-error.log*

# local env files
.env*.local
.envrc*

# vercel
.vercel
8 changes: 4 additions & 4 deletions _internal/hero_image/marketing.py
@@ -13,9 +13,9 @@ def __():
@app.cell
def __():
# Try to keep the text simple, one or two words.
text = """ComputeText"""
font_name = "IBMPlexMono-Bold" # For code
# font_name = "IBMPlexSans-Medium" # For text
text = """Tree of Thought"""
# font_name = "IBMPlexMono-Bold" # For code
font_name = "IBMPlexSans-Medium" # For text
font_size = 250 # You may need to decrease this
return font_name, font_size, text

@@ -89,7 +89,7 @@ def __(mo):
- high altitude drone footage birds eye view of mountain range with clouds and crashing turbulent ocean waves at sunset

{gen_prompt}

#### Alternate prompts
- highly detailed anime scene miyazaki ghost in the shell bright colors
- highly detailed art deco illustration
925 changes: 925 additions & 0 deletions poetry.lock

Large diffs are not rendered by default.

19 changes: 19 additions & 0 deletions pyproject.toml
@@ -0,0 +1,19 @@
[tool.poetry]
name = "substrate-examples"
version = "0.1.0"
description = "Various examples of Substrate usage"
authors = ["Substrate <[email protected]>"]
readme = "README.md"

[tool.poetry.dependencies]
python = "^3.10"

[tool.poetry.group.dev.dependencies]
marimo = "^0.8.0"
pillow = "^10.4.0"
numpy = "^2.1.0"
substrate = "^220240617.1.8"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
61 changes: 61 additions & 0 deletions techniques/tree-of-thought/README.md
@@ -0,0 +1,61 @@
# Tree of Thought

<details>
<summary>How to run this example</summary>
<br/>

```bash
# Set your API key as an environment variable.
export SUBSTRATE_API_KEY=ENTER_YOUR_KEY

# Run the TypeScript example

# If using tsx:
cd typescript # Navigate to the typescript example
npm install # Install dependencies
npx tsx example.ts # Run the example

# If using Deno:
cd typescript
deno run example.ts

# Run the Python example

# If using Poetry:
cd python # Navigate to the python example
poetry install # Install dependencies and build the example
poetry run main # Run the example

# If using Rye:
# Update pyproject.toml to switch to Rye.
cd python
rye sync
rye run main
```

</details>

![hero](hero.jpg)

## Overview

Tree of Thought (ToT) is an advanced prompting technique for large language models (LLMs) that enhances problem-solving capabilities by simulating a multi-step reasoning process. This approach encourages the model to:

1. Break down complex problems into smaller, manageable sub-problems
2. Generate multiple potential solutions or "thoughts" for each sub-problem
3. Evaluate and prune less promising paths
4. Combine the most promising thoughts to form a coherent solution

By mimicking human-like reasoning, ToT allows LLMs to tackle more challenging tasks, improve accuracy, and provide more transparent decision-making processes. This technique is particularly useful for problems requiring multi-step reasoning, strategic planning, or creative problem-solving.
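The four steps above amount to a pruned tree search. As a minimal, model-free sketch of that control flow, the snippet below uses deterministic toy functions (`propose` and `evaluate`, which stand in for LLM calls — both names are illustrative, not part of any SDK) to find a target number by expanding, scoring, and pruning "thoughts":

```python
# A minimal, model-free sketch of the Tree of Thought loop. `propose` and
# `evaluate` stand in for LLM calls; here they are deterministic toys so
# the control flow can run on its own.

def propose(state):
    """Generate candidate next thoughts (an LLM call in practice)."""
    return [state + step for step in (1, 2, 3)]

def evaluate(state, target):
    """Score a thought; higher is more promising (an LLM call in practice)."""
    return -abs(target - state)

def tree_of_thought(target, depth=4, beam=2):
    frontier = [0]  # the root "thought"
    for _ in range(depth):
        # 1. & 2. Expand: generate multiple thoughts per frontier state.
        candidates = [s for state in frontier for s in propose(state)]
        # 3. Evaluate and prune: keep only the most promising `beam` states.
        candidates.sort(key=lambda s: evaluate(s, target), reverse=True)
        frontier = candidates[:beam]
    # 4. Combine: return the best surviving thought.
    return frontier[0]

print(tree_of_thought(10))  # reaches the target: 10
```

In a real application, `propose` would sample several completions from the model and `evaluate` would ask the model (or a heuristic) to score each one; the surrounding search loop stays the same.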

This technique can solve a few novel problems, such as Sudoku puzzles, but it is equally suited to improving LLM responses by providing a structured, transparent reasoning process. Our example asks the model to reason through a game of Hide and Seek, displaying the final reasoning behind where the LLM believes the hider to be.

While putting this example together, we found that the general structure can remain the same while changing only the initial framing of the problem, with interesting results coming from simply changing the prompt text. Try asking it to refine a short story in a specific style or to play a different kind of game!

## How it works

In our example, the tree is represented by a panel of experts at each step: we prompt the LLM for multiple responses to a given prompt, then ask it to rank those responses and choose the best one. This can be thought of as a breadth-first approach to tree pruning, since we explore every path at the current layer before moving to the next layer. This is in contrast to a depth-first approach, where we explore each path to its end before moving to the next.

The key to this technique is the repeated cycle of generating multiple alternative responses to a prompt, followed by synthesis of those responses. This feedback loop allows the LLM to refine its reasoning and improve its output over time.
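The panel-of-experts loop can be sketched as follows. Here `complete` is a stand-in for a chat-completion call (in this repository that would go through the Substrate SDK); it is stubbed out so the structure is runnable on its own, and all function names are illustrative:

```python
# A sketch of the panel-of-experts loop. `complete` is a placeholder for a
# real chat-completion call; the stub below just echoes its prompt so the
# control flow can be exercised without an API key.

def complete(prompt: str) -> str:
    return f"response to: {prompt[:40]}"

def panel_round(question: str, n_experts: int = 3) -> str:
    # Generate one independent answer per "expert" (one tree layer's branches).
    ideas = [complete(f"Expert {i + 1}, answer: {question}") for i in range(n_experts)]
    # Ask the model to rank the ideas and merge the best into a consensus.
    numbered = "\n".join(f"{i + 1}. {idea}" for i, idea in enumerate(ideas))
    return complete(f"Rank these answers and merge the best into one:\n{numbered}")

def refine(question: str, rounds: int = 3) -> str:
    consensus = question
    for _ in range(rounds):
        # Each round's consensus seeds the next panel, descending one layer.
        consensus = panel_round(consensus)
    return consensus

print(refine("Where is the hider most likely to be?"))
```

Because every expert in a layer is prompted independently, the `n_experts` completions in each round can be issued concurrently, which is what makes the breadth-first shape practical.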

![diagram](diagram.svg)
74 changes: 74 additions & 0 deletions techniques/tree-of-thought/diagram.d2
@@ -0,0 +1,74 @@
direction: right
classes: {
substrate: {
label: "substrate"
style: {
font: mono
font-color: gray
font-size: 20
stroke: gray
stroke-dash: 1
fill: "transparent"
border-radius: 16
}
}
node: {
style: {
font: mono
font-size: 24
stroke-width: 2
fill: transparent
stroke: gray
border-radius: 16
stroke-dash: 1
3d: true
}
}
edge: {
style: {
stroke: "#000"
stroke-dash: 2
}
}
}

substrate.class: substrate
substrate.a.class: node
substrate.b.class: node
substrate.c.class: node
substrate.d.class: node
substrate.a.label: expert a
substrate.b.label: expert b
substrate.c.label: expert c
substrate.d.label: consensus
substrate.a->substrate.d : Idea 1 { class: edge }
substrate.b->substrate.d : Idea 1 { class: edge }
substrate.c->substrate.d : Idea 1 { class: edge }
substrate.e.class: node
substrate.f.class: node
substrate.g.class: node
substrate.h.class: node
substrate.e.label: expert a
substrate.f.label: expert b
substrate.g.label: expert c
substrate.h.label: consensus
substrate.d->substrate.e : Expert b Idea 1 { class: edge }
substrate.d->substrate.f : Expert b Idea 1 { class: edge }
substrate.d->substrate.g : Expert b Idea 1 { class: edge }
substrate.e->substrate.h : Idea 2 { class: edge }
substrate.f->substrate.h : Idea 2 { class: edge }
substrate.g->substrate.h : Idea 2 { class: edge }
substrate.i.class: node
substrate.j.class: node
substrate.k.class: node
substrate.l.class: node
substrate.i.label: expert a
substrate.j.label: expert b
substrate.k.label: expert c
substrate.l.label: consensus
substrate.h->substrate.i : Expert a Idea 2 { class: edge }
substrate.h->substrate.j : Expert a Idea 2 { class: edge }
substrate.h->substrate.k : Expert a Idea 2 { class: edge }
substrate.i->substrate.l : Idea 3 { class: edge }
substrate.j->substrate.l : Idea 3 { class: edge }
substrate.k->substrate.l : Idea 3 { class: edge }