
Commit 7b53087

docs: document more features (#694)
Signed-off-by: Grant Linville <[email protected]>
Co-authored-by: Donnie Adams <[email protected]>
1 parent d6f6096 commit 7b53087

File tree: 4 files changed, +295 -20 lines changed


docs/docs/03-tools/08-workspace.md

Lines changed: 37 additions & 0 deletions

# Workspace

One concept in GPTScript is the workspace directory.
This is a directory meant to be used by tools that need to interact with the local file system.
By default, the workspace directory is a one-off temporary directory.
The workspace directory can be set with the `--workspace` argument when running GPTScript, like this:

```bash
gptscript --workspace . my-script.gpt
```

In the above example, the user's current directory (denoted by `.`) will be set as the workspace.
The workspace directory is no longer temporary if it is explicitly set, and everything in it will persist after the script has finished running.
Both absolute and relative paths are supported.

Regardless of whether it is set implicitly or explicitly, the workspace is then made available to the script execution as the `GPTSCRIPT_WORKSPACE_DIR` environment variable.

:::info
GPTScript does not force scripts or tools to write to, read from, or otherwise use the workspace.
The tools must decide to make use of the workspace environment variable.
:::
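
For example, a code-based tool can read this variable itself and write its files into the workspace. Here is a minimal sketch, assuming an inline Python tool body; the tool name, file name, and output are purely illustrative:

```
Name: write-note

#!python3

import os

# GPTSCRIPT_WORKSPACE_DIR points at the workspace directory (a temp directory by default).
workspace = os.environ["GPTSCRIPT_WORKSPACE_DIR"]

# Write into the workspace rather than the current working directory.
with open(os.path.join(workspace, "note.txt"), "w") as f:
    f.write("hello from the workspace\n")

print("Wrote note.txt to " + workspace)
```
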
## The Workspace Context Tool

To make a non-code (prompt-based) tool aware of the workspace, you can reference the workspace context tool:

```
Context: github.com/gptscript-ai/context/workspace
```

This tells the LLM (by way of a [system message](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)) what the workspace directory is,
what its initial contents are, and that if it decides to create a file or directory, it should do so in the workspace directory.
This will not, however, have any impact on code-based tools (i.e. Python, Bash, or Go tools).
Such tools will have the `GPTSCRIPT_WORKSPACE_DIR` environment variable available to them, but they must be written in such a way that they make use of it.

This context tool also automatically shares the `sys.ls`, `sys.read`, and `sys.write` tools with the tool that is using it as a context.
This is because if a tool intends to interact with the workspace, it minimally needs these tools.
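
For instance, a prompt-based tool that pulls in this context might look like the following sketch (the task in the body is just an illustration):

```
Context: github.com/gptscript-ai/context/workspace

Write a haiku about GPTScript and save it to a file named haiku.txt.
```

Because the context shares `sys.write`, the LLM can create `haiku.txt` inside the workspace without the tool declaring any additional tools.
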
Lines changed: 135 additions & 0 deletions

# Code Tool Guidelines

GPTScript can handle the packaging and distribution of code-based tools via GitHub repos.
For more information on how this works, see the [authoring guide](02-authoring.md#sharing-tools).

This page provides guidelines for setting up GitHub repos for proper tool distribution.

## Common Guidelines

### `tool.gpt` or `agent.gpt` file

Every repo should have a `tool.gpt` or `agent.gpt` file. This file contains the main logic of the tool.
If both files exist, GPTScript will use the `agent.gpt` file and ignore the `tool.gpt` file.
Your repo can have other `.gpt` files that are referenced by the main file, but there must be a `tool.gpt` or `agent.gpt` file present.

Under most circumstances, this file should live in the root of the repo.
If you are using a single repo for the distribution of multiple tools (see [gptscript-ai/context](https://github.com/gptscript-ai/context) for an example),
then you can put the `tool.gpt`/`agent.gpt` file in a subdirectory, and the tool can then be referenced as `github.com/<user>/<repo>/<subdirectory>`.
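
For example, a consuming script could then reference such a tool in its `Tools` directive like this (the user, repo, and subdirectory names are made up for illustration):

```
Tools: github.com/example-user/example-tools/my-subdirectory
```
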
### Name and Description directives

We recommend including a `Name` and `Description` directive for your tool.
These help both people and LLMs understand what the tool does and when to use it.
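
For example (the values here are purely illustrative):

```
Name: image-resizer
Description: Resizes an image file to the specified width and height.
```
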
### Parameters

Any parameters specified in the tool will be available as environment variables in your code.
We recommend handling parameters that way, rather than using command-line arguments.
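
As a sketch, a tool that declares a `city` parameter might read it from the environment in an inline Python body like this (the tool itself is illustrative, and the exact casing of the environment variable is an assumption worth verifying for your GPTScript version):

```
Name: weather-lookup
Param: city: the city to look up

#!python3

import os

# The "city" parameter is expected to arrive as an environment variable.
# The exact variable casing is an assumption here, so check both forms.
city = os.environ.get("CITY") or os.environ.get("city") or "unknown"

print("You asked about: " + city)
```
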
## Python Guidelines

### Calling Python in the tool body

The body of the `tool.gpt`/`agent.gpt` file needs to call Python. This can be done as an inline script like this:

```
Name: my-python-tool

#!python3

print('hello world')
```

An inline script like this is only recommended for simple use cases that don't need external dependencies.

If your use case is more complex or requires external dependencies, you can reference a Python script in your repo, like this:

```
Name: my-python-tool

#!/usr/bin/env python3 ${GPTSCRIPT_TOOL_DIR}/tool.py
```

(This example assumes that the entrypoint to your Python program is in a file called `tool.py`; you can name it whatever you want.)
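
As a rough sketch, the referenced `tool.py` is just an ordinary Python script. For example, it might read an illustrative `url` parameter from the environment, fetch it with an external library, and print a result (whatever the script prints to standard output is returned as the tool's output):

```
import os

import requests  # external dependency, declared in requirements.txt (see below)

# "URL" is an illustrative parameter name, not something GPTScript defines.
url = os.environ.get("URL", "https://example.com")

resp = requests.get(url, timeout=10)

# The script's stdout becomes the tool's output.
print(f"Fetched {url}: status {resp.status_code}, {len(resp.text)} bytes")
```
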
### `requirements.txt` file

If your Python program needs any external dependencies, you can create a `requirements.txt` file at the same level as
your `tool.gpt`/`agent.gpt` file. GPTScript will handle downloading the dependencies before it runs the tool.

The file structure should look something like this:

```
.
├── requirements.txt
├── tool.py
└── tool.gpt
```
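
Continuing the sketch above, the `requirements.txt` is just the usual pip format; for a tool that depends on the `requests` library (an arbitrary choice for illustration), it could contain a single line:

```
requests>=2.31
```
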
## JavaScript (Node.js) Guidelines

### Calling Node.js in the tool body

The body of the `tool.gpt`/`agent.gpt` file needs to call Node. This can be done as an inline script like this:

```
Name: my-node-tool

#!node

console.log('hello world')
```

An inline script like this is only recommended for simple use cases that don't need external dependencies.

If your use case is more complex or requires external dependencies, you can reference a Node script in your repo, like this:

```
Name: my-node-tool

#!/usr/bin/env node ${GPTSCRIPT_TOOL_DIR}/tool.js
```

(This example assumes that the entrypoint to your Node program is in a file called `tool.js`; you can name it whatever you want.)

### `package.json` file

If your Node program needs any external dependencies, you can create a `package.json` file at the same level as
your `tool.gpt`/`agent.gpt` file. GPTScript will handle downloading the dependencies before it runs the tool.

The file structure should look something like this:

```
.
├── package.json
├── tool.js
└── tool.gpt
```
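
For illustration, a minimal `package.json` that declares a single dependency (the `axios` package is an arbitrary example) could look like this; GPTScript will handle downloading whatever is listed under `dependencies` before running the tool:

```
{
  "name": "my-node-tool",
  "version": "1.0.0",
  "dependencies": {
    "axios": "^1.6.0"
  }
}
```
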
## Go Guidelines

GPTScript does not support inline code for Go, so you must call an external program from the tool body, like this:

```
Name: my-go-tool

#!${GPTSCRIPT_TOOL_DIR}/bin/gptscript-go-tool
```

:::important
Unlike the Python and Node cases above, where you can name the file anything you want, the body of a Go tool must be exactly `#!${GPTSCRIPT_TOOL_DIR}/bin/gptscript-go-tool`.
:::

GPTScript will build the Go program located at `./main.go` to a file called `./bin/gptscript-go-tool` before running the tool.
All of your dependencies need to be properly specified in a `go.mod` file.

The file structure should look something like this:

```
.
├── go.mod
├── go.sum
├── main.go
└── tool.gpt
```

docs/docs/03-tools/10-daemon.md

Lines changed: 108 additions & 0 deletions

# Daemon Tools (Advanced)

One advanced use case that GPTScript supports is daemon tools.
A daemon tool is a tool that starts a long-running HTTP server in the background, which continues running until GPTScript is done executing.
Other tools can easily send HTTP POST requests to the daemon tool.

## Example

Here is an example of a daemon tool with a simple echo server written in an inline Node.js script:

```
Tools: my-daemon
Param: first: the first parameter
Param: second: the second parameter

#!http://my-daemon.daemon.gptscript.local/myPath

---
Name: my-daemon

#!sys.daemon node

const http = require('http');

const server = http.createServer((req, res) => {
  if (req.method === 'GET' || req.method === 'POST') {
    // Extract the path from the request URL
    const path = req.url;

    let body = '';

    req.on('data', chunk => {
      body += chunk.toString();
    })

    // Respond with the path and body
    req.on('end', () => {
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      res.write(`Body: ${body}\n`);
      res.end(`Path: ${path}`);
    })
  } else {
    res.writeHead(405, { 'Content-Type': 'text/plain' });
    res.end('Method Not Allowed');
  }
});

const PORT = process.env.PORT || 3000;
server.listen(PORT, () => {
  console.log(`Server is listening on port ${PORT}`);
});
```

Let's talk about the daemon tool, called `my-daemon`, first.

### The Daemon Tool

The body of this tool begins with `#!sys.daemon`. This tells GPTScript to take the rest of the body as a command to be
run in the background that will listen for HTTP requests. GPTScript will run this command (in this case, a Node script).
GPTScript will assign a port number for the server and set the `PORT` environment variable to that number, so the
server needs to check that variable and listen on the proper port.

After GPTScript runs the daemon, it will send it an HTTP GET request to make sure that it is running properly.
The daemon needs to respond with a 200 OK to this request.
By default, the request goes to `/`, but this can be configured with the following syntax:

```
#!sys.daemon (path=/api/ready) node

// (node script here)
```

### The Entrypoint Tool

The entrypoint tool at the top of this script sends an HTTP request to the daemon tool.
There are a few important things to note here (a second sketch follows the list):

- The `Tools: my-daemon` directive is needed to show that this tool requires the `my-daemon` tool to already be running.
- When the entrypoint tool runs, GPTScript will check if `my-daemon` is already running. If it is not, GPTScript will start it.
- The `#!http://my-daemon.daemon.gptscript.local/myPath` in the body tells GPTScript to send an HTTP request to the daemon tool.
- The request will be a POST request, with the body of the request being a JSON string of the parameters passed to the entrypoint tool.
- For example, if the script is run like `gptscript script.gpt '{"first":"hello","second":"world"}'`, then the body of the request will be `{"first":"hello","second":"world"}`.
- The path of the request will be `/myPath`.
- The hostname is `my-daemon.daemon.gptscript.local`. When sending a request to a daemon tool, the hostname must always start with the daemon tool's name, followed by `.daemon.gptscript.local`.
- GPTScript recognizes this hostname and determines the correct port number to send the request to, on localhost.
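
Several tools can share a single daemon in this way. As a sketch (the path and parameter below are made up for illustration), another tool in the same script could send its own request to a different path on `my-daemon`:

```
Tools: my-daemon
Param: message: a message to send

#!http://my-daemon.daemon.gptscript.local/anotherPath
```

The request pattern is the same: the tool's parameters are sent as a JSON body in a POST request, this time to `/anotherPath`.
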
### Running the Example

Now let's try running it:

```bash
gptscript script.gpt '{"first":"hello","second":"world"}'
```

```
OUTPUT:

Body: {"first":"hello","second":"world"}
Path: /myPath
```

This is exactly what we expected. This is a silly, small example just to demonstrate how this feature works.
A real-world situation would involve several different tools sending different HTTP requests to the daemon tool,
likely with an LLM determining when to call which tool.

## Real-World Example

To see a real-world example of a daemon tool, check out the [GPTScript Browser tool](https://github.com/gptscript-ai/browser).

docs/docs/09-faqs.md

Lines changed: 15 additions & 20 deletions

@@ -54,26 +54,7 @@ By default, this directory is a one-off temp directory, but you can override this
 gptscript --workspace . my-script.gpt
 ```
 
-In the above example, the user's current directory (denoted by `.`) will be set as the workspace. Both absolute and relative paths are supported.
-
-Regardless of whether it is set implicitly or explicitly, the workspace is then made available to the script execution as the `GPTSCRIPT_WORKSPACE_DIR` environment variable.
-
-:::info
-GPTScript does not force scripts or tools to write to, read from, or otherwise use the workspace. The tools must decide to make use of the workspace environment variable.
-:::
-
-To make prompt-based tools workspace aware, you can reference our workspace context tool, like so:
-
-```
-Context: github.com/gptscript-ai/context/workspace
-```
-
-This tells the LLM (by way of a [system message](https://platform.openai.com/docs/guides/text-generation/chat-completions-api)) what the workspace directory is, what its initial contents are, and that if it decides to create a file or directory, it should do so in the workspace directory.
-This will not, however, have any impact on code-based tools (ie python, bash, or go tools).
-Such tools will have the `GPTSCRIPT_WORKSPACE_DIR` environment variable available to them, but they must be written in such a way that they make use of it.
-
-This context also automatically shares the `sys.ls`, `sys.read`, and `sys.write` tools with the tool that is using it as a context.
-This is because if a tool intends to interact with the workspace, it minimally needs these tools.
+For more info, see the [Workspace](03-tools/08-workspace.md) page.
 
 ### I'm hitting GitHub's rate limit for unauthenticated requests when using GPTScript.
 

@@ -85,3 +66,17 @@ If you're already authenticated with the `gh` CLI, you can use its token by running:
 ```bash
 export GITHUB_AUTH_TOKEN="$(gh auth token)"
 ```
+
+### Can I save my chat and resume it later?
+
+Yes! When you run GPTScript, be sure to specify the `--save-chat-state-file` argument like this:
+
+```bash
+gptscript --save-chat-state-file chat-state.json my-script.gpt
+```
+
+Then, when you want to resume your chat, you can use the `--chat-state` argument to specify the file you saved:
+
+```bash
+gptscript --chat-state chat-state.json my-script.gpt
+```
