
Commit f4284f4

feat: credentials framework (#212)
Signed-off-by: Grant Linville <[email protected]>
1 parent a1b26c3 commit f4284f4

25 files changed: +927 -67 lines changed

docs/docs/03-tools/04-credentials.md

Lines changed: 99 additions & 0 deletions
@@ -0,0 +1,99 @@
# Credentials

GPTScript supports credential provider tools. These tools can be used to fetch credentials from a secure location (or directly from user input) and conveniently set them in the environment before running a script.

## Writing a Credential Provider Tool

A credential provider tool looks just like any other GPTScript tool, with the following caveats:
- It cannot call the LLM and must run a command.
- It must print contents to stdout in the format `{"env":{"ENV_VAR_1":"value1","ENV_VAR_2":"value2"}}`.
- Any args defined on the tool will be ignored.

Here is a simple example of a credential provider tool that uses the built-in `sys.prompt` to ask the user for some input:

```yaml
# my-credential-tool.gpt
name: my-credential-tool

#!/usr/bin/env bash

output=$(gptscript -q --cache=false sys.prompt '{"message":"Please enter your fake credential.","fields":"credential","sensitive":"true"}')
credential=$(echo $output | jq -r '.credential')
echo "{\"env\":{\"MY_ENV_VAR\":\"$credential\"}}"
```

## Using a Credential Provider Tool

Continuing with the above example, this is how you can use it in a script:

```yaml
credentials: my-credential-tool.gpt

#!/usr/bin/env bash

echo "The value of MY_ENV_VAR is $MY_ENV_VAR"
```

When you run the script, GPTScript will call the credential provider tool first, set the environment variables from its output, and then run the script body. The credential provider tool is called by GPTScript itself; GPTScript does not ask the LLM about it or even tell the LLM about the tool.

If GPTScript has already called the credential provider tool in the same context (more on that later), it will use the stored credential instead of fetching it again.
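
To make the output contract concrete, here is a minimal, illustrative Go sketch (not GPTScript's actual runner code) of how a provider's `{"env":{...}}` output can be captured and applied; the tool path and the helper type are hypothetical:

```go
// Illustrative sketch only: run a credential provider tool, parse the
// documented {"env":{...}} output, and set the variables in the environment.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// credentialOutput mirrors the documented provider output format.
type credentialOutput struct {
	Env map[string]string `json:"env"`
}

func main() {
	// Run the provider tool (path is hypothetical) and capture its stdout.
	out, err := exec.Command("gptscript", "-q", "--cache=false", "my-credential-tool.gpt").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	var parsed credentialOutput
	if err := json.Unmarshal(out, &parsed); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Make the credentials available before the script body runs.
	for k, v := range parsed.Env {
		os.Setenv(k, v)
	}
}
```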
You can also specify multiple credential tools for the same script:

```yaml
credentials: credential-tool-1.gpt, credential-tool-2.gpt

(tool stuff here)
```

## Storing Credentials

By default, credentials are automatically stored in a config file at `$XDG_CONFIG_HOME/gptscript/config.json`. This config file also has another parameter, `credsStore`, which indicates where the credentials are being stored.

- `file` (default): The credentials are stored directly in the config file.
- `osxkeychain`: The credentials are stored in the macOS Keychain.

In order to use `osxkeychain` as the credsStore, you must have the `gptscript-credential-osxkeychain` executable available in your PATH. There will probably be better packaging for this in the future, but for now, you can build it from the [repo](https://github.com/gptscript-ai/gptscript-credential-helpers).

There will likely be support added for other credential stores in the future.

:::note
Credentials received from credential provider tools that are not on GitHub (such as a local file) will not be stored in the credentials store.
:::
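
If you want to check which store is configured, the following is a minimal sketch (assuming only the documented config path and `credsStore` field; it is not part of GPTScript itself) that reads the config file and prints the store in use, falling back to the documented `file` default:

```go
// Minimal sketch: report which credential store the GPTScript config selects.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// $XDG_CONFIG_HOME, falling back to the conventional ~/.config location.
	configHome := os.Getenv("XDG_CONFIG_HOME")
	if configHome == "" {
		home, _ := os.UserHomeDir()
		configHome = filepath.Join(home, ".config")
	}

	data, err := os.ReadFile(filepath.Join(configHome, "gptscript", "config.json"))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Only the credsStore field is assumed here; other fields are ignored.
	var cfg struct {
		CredsStore string `json:"credsStore"`
	}
	if err := json.Unmarshal(data, &cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	if cfg.CredsStore == "" {
		cfg.CredsStore = "file" // documented default
	}
	fmt.Println("credsStore:", cfg.CredsStore)
}
```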
## Credential Contexts

Each stored credential is uniquely identified by the name of its provider tool and the name of its context. A credential context is essentially a namespace for credentials. If you have multiple credentials from the same provider tool, you can switch between them by defining them in different credential contexts. The default context is called `default`, and it is used if none is specified.

You can set the credential context to use with the `--credential-context` flag when running GPTScript. For example:

```bash
gptscript --credential-context my-azure-workspace my-azure-script.gpt
```

Any credentials fetched for that script will be stored in the `my-azure-workspace` context. If you were to call it again with a different context, you would be able to give it a different set of credentials.

## Listing and Deleting Stored Credentials

The `gptscript credential` command can be used to list and delete stored credentials. Running the command with no `--credential-context` set will use the `default` credential context. You can also specify that it should list credentials in all contexts with `--all-contexts`.

You can delete a credential by running the following command:

```bash
gptscript credential delete --credential-context <credential context> <credential tool name>
```

docs/docs/07-gpt-file-reference.md

Lines changed: 12 additions & 11 deletions
@@ -43,17 +43,18 @@ Tool instructions go here.

Tool parameters are key-value pairs defined at the beginning of a tool block, before any instructional text. They are specified in the format `key: value`. The parser recognizes the following keys (case-insensitive and spaces are ignored):

The parameter table is replaced with a realigned version that adds the new `Credentials` key:

| Key               | Description |
|-------------------|-------------|
| `Name`            | The name of the tool. |
| `Model Name`      | The OpenAI model to use; by default it uses "gpt-4-turbo-preview". |
| `Description`     | The description of the tool. It is important that this properly describes the tool's purpose, as the description is used by the LLM. |
| `Internal Prompt` | Setting this to `false` will disable the built-in system prompt for this tool. |
| `Tools`           | A comma-separated list of tools that are available to be called by this tool. |
| `Credentials`     | A comma-separated list of credential tools to run before the main tool. |
| `Args`            | Arguments for the tool. Each argument is defined in the format `arg-name: description`. |
| `Max Tokens`      | Set to a number if you wish to limit the maximum number of tokens that can be generated by the LLM. |
| `JSON Response`   | Setting this to `true` will cause the LLM to respond in JSON format. If you set it to `true`, you must also include instructions in the tool. |
| `Temperature`     | A floating-point number representing the temperature parameter. By default, the temperature is 0. Set to a higher number for more creativity. |

go.mod

Lines changed: 4 additions & 0 deletions
@@ -8,6 +8,8 @@ require (
 	github.com/acorn-io/broadcaster v0.0.0-20240105011354-bfadd4a7b45d
 	github.com/acorn-io/cmd v0.0.0-20240404013709-34f690bde37b
 	github.com/adrg/xdg v0.4.0
+	github.com/docker/cli v26.0.0+incompatible
+	github.com/docker/docker-credential-helpers v0.8.1
 	github.com/fatih/color v1.16.0
 	github.com/getkin/kin-openapi v0.123.0
 	github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510
@@ -63,6 +65,7 @@ require (
 	github.com/olekukonko/tablewriter v0.0.6-0.20230925090304-df64c4bbad77 // indirect
 	github.com/perimeterx/marshmallow v1.1.5 // indirect
 	github.com/pierrec/lz4/v4 v4.1.15 // indirect
+	github.com/pkg/errors v0.9.1 // indirect
 	github.com/pmezard/go-difflib v1.0.0 // indirect
 	github.com/rivo/uniseg v0.1.0 // indirect
 	github.com/spf13/pflag v1.0.5 // indirect
@@ -77,5 +80,6 @@ require (
 	golang.org/x/sys v0.16.0 // indirect
 	golang.org/x/text v0.14.0 // indirect
 	golang.org/x/tools v0.17.0 // indirect
+	gotest.tools/v3 v3.5.1 // indirect
 	mvdan.cc/gofumpt v0.6.0 // indirect
 )

go.sum

Lines changed: 8 additions & 0 deletions
@@ -51,6 +51,10 @@ github.com/creack/pty v1.1.17/go.mod h1:MOBLtS5ELjhRRrroQr9kyvTxUAFNvYEK993ew/Vr
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/docker/cli v26.0.0+incompatible h1:90BKrx1a1HKYpSnnBFR6AgDq/FqkHxwlUyzJVPxD30I=
+github.com/docker/cli v26.0.0+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
+github.com/docker/docker-credential-helpers v0.8.1 h1:j/eKUktUltBtMzKqmfLB0PAgqYyMHOp5vfsD1807oKo=
+github.com/docker/docker-credential-helpers v0.8.1/go.mod h1:P3ci7E3lwkZg6XiHdRKft1KckHiO9a2rNtyFbZ/ry9M=
 github.com/dsnet/compress v0.0.1 h1:PlZu0n3Tuv04TzpfPbrnI0HW/YwodEXDS+oPKahKF0Q=
 github.com/dsnet/compress v0.0.1/go.mod h1:Aw8dCMJ7RioblQeTqt88akK31OvO8Dhf5JflhBbQEHo=
 github.com/dsnet/golib v0.0.0-20171103203638-1ea166775780/go.mod h1:Lj+Z9rebOhdfkVLjJ8T6VcRQv3SXugXy999NBtR9aFY=
@@ -191,6 +195,8 @@ github.com/perimeterx/marshmallow v1.1.5/go.mod h1:dsXbUu8CRzfYP5a87xpp0xq9S3u0V
 github.com/pierrec/lz4/v4 v4.1.15 h1:MO0/ucJhngq7299dKLwIMtgTfbkoSPF6AoMYDd8Q4q0=
 github.com/pierrec/lz4/v4 v4.1.15/go.mod h1:gZWDp/Ze/IJXGXf23ltt2EXimqmTUXEy0GFuRQyBid4=
 github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
+github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
+github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
@@ -432,6 +438,8 @@ gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C
 gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gotest.tools/v3 v3.5.1 h1:EENdUnS3pdur5nybKYIh2Vfgc8IUNBjxDPSjtiJcOzU=
+gotest.tools/v3 v3.5.1/go.mod h1:isy3WKz7GK6uNw/sbHzfKBLvlvXwUyV06n6brMxxopU=
 honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
 honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
 honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=

pkg/builtin/builtin.go

Lines changed: 57 additions & 0 deletions
@@ -17,9 +17,11 @@ import (
 	"strings"
 	"time"

+	"github.com/AlecAivazis/survey/v2"
 	"github.com/BurntSushi/locker"
 	"github.com/google/shlex"
 	"github.com/gptscript-ai/gptscript/pkg/confirm"
+	"github.com/gptscript-ai/gptscript/pkg/runner"
 	"github.com/gptscript-ai/gptscript/pkg/types"
 	"github.com/jaytaylor/html2text"
 )
@@ -149,6 +151,17 @@ var tools = map[string]types.Tool{
 		},
 		BuiltinFunc: SysStat,
 	},
+	"sys.prompt": {
+		Parameters: types.Parameters{
+			Description: "Prompts the user for input",
+			Arguments: types.ObjectSchema(
+				"message", "The message to display to the user",
+				"fields", "A comma-separated list of fields to prompt for",
+				"sensitive", "(true or false) Whether the input should be hidden",
+			),
+		},
+		BuiltinFunc: SysPrompt,
+	},
 }

 func SysProgram() *types.Program {
@@ -633,3 +646,47 @@ func SysDownload(ctx context.Context, env []string, input string) (_ string, err

 	return params.Location, nil
 }
+
+func SysPrompt(ctx context.Context, _ []string, input string) (_ string, err error) {
+	monitor := ctx.Value(runner.MonitorKey{})
+	if monitor == nil {
+		return "", errors.New("no monitor in context")
+	}
+
+	unpause := monitor.(runner.Monitor).Pause()
+	defer unpause()
+
+	var params struct {
+		Message   string `json:"message,omitempty"`
+		Fields    string `json:"fields,omitempty"`
+		Sensitive string `json:"sensitive,omitempty"`
+	}
+	if err := json.Unmarshal([]byte(input), &params); err != nil {
+		return "", err
+	}
+
+	if params.Message != "" {
+		_, _ = fmt.Fprintln(os.Stderr, params.Message)
+	}
+
+	results := map[string]string{}
+	for _, f := range strings.Split(params.Fields, ",") {
+		var value string
+		if params.Sensitive == "true" {
+			err = survey.AskOne(&survey.Password{Message: f}, &value, survey.WithStdio(os.Stdin, os.Stderr, os.Stderr))
+		} else {
+			err = survey.AskOne(&survey.Input{Message: f}, &value, survey.WithStdio(os.Stdin, os.Stderr, os.Stderr))
+		}
+		if err != nil {
+			return "", err
+		}
+		results[f] = value
+	}
+
+	resultsStr, err := json.Marshal(results)
+	if err != nil {
+		return "", err
+	}
+
+	return string(resultsStr), nil
+}
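
For reference, a small standalone sketch (hypothetical, not part of this commit) of the input shape that `SysPrompt` above unmarshals and the output shape it returns; the field names follow the `sys.prompt` arguments registered above:

```go
// Sketch of the sys.prompt input/output shapes, for illustration only.
package main

import (
	"encoding/json"
	"fmt"
)

// promptInput mirrors the params struct that SysPrompt unmarshals.
type promptInput struct {
	Message   string `json:"message,omitempty"`
	Fields    string `json:"fields,omitempty"`
	Sensitive string `json:"sensitive,omitempty"`
}

func main() {
	in, _ := json.Marshal(promptInput{
		Message:   "Please enter your fake credential.",
		Fields:    "credential",
		Sensitive: "true",
	})
	fmt.Println(string(in))
	// SysPrompt returns a JSON object keyed by field name, for example:
	// {"credential":"<what the user typed>"}
}
```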

pkg/cli/credential.go

Lines changed: 78 additions & 0 deletions
@@ -0,0 +1,78 @@

package cli

import (
	"fmt"
	"os"
	"sort"
	"text/tabwriter"

	cmd2 "github.com/acorn-io/cmd"
	"github.com/gptscript-ai/gptscript/pkg/config"
	"github.com/gptscript-ai/gptscript/pkg/credentials"
	"github.com/gptscript-ai/gptscript/pkg/version"
	"github.com/spf13/cobra"
)

// Credential implements the `gptscript credential` subcommand, which lists stored credentials.
type Credential struct {
	root        *GPTScript
	AllContexts bool `usage:"List credentials for all contexts" local:"true"`
}

func (c *Credential) Customize(cmd *cobra.Command) {
	cmd.Use = "credential"
	cmd.Version = version.Get().String()
	cmd.Aliases = []string{"cred", "creds", "credentials"}
	cmd.Short = "List stored credentials"
	cmd.Args = cobra.NoArgs
	cmd.AddCommand(cmd2.Command(&Delete{root: c.root}))
}

func (c *Credential) Run(_ *cobra.Command, _ []string) error {
	cfg, err := config.ReadCLIConfig(c.root.ConfigFile)
	if err != nil {
		return fmt.Errorf("failed to read CLI config: %w", err)
	}

	ctx := c.root.CredentialContext
	if c.AllContexts {
		ctx = "*"
	}

	store, err := credentials.NewStore(cfg, ctx)
	if err != nil {
		return fmt.Errorf("failed to get credentials store: %w", err)
	}

	creds, err := store.List()
	if err != nil {
		return fmt.Errorf("failed to list credentials: %w", err)
	}

	if c.AllContexts {
		// Sort credentials by context, then by tool name, and print a table.
		sort.Slice(creds, func(i, j int) bool {
			if creds[i].Context == creds[j].Context {
				return creds[i].ToolName < creds[j].ToolName
			}
			return creds[i].Context < creds[j].Context
		})

		w := tabwriter.NewWriter(os.Stdout, 10, 1, 3, ' ', 0)
		defer w.Flush()
		_, _ = w.Write([]byte("CONTEXT\tTOOL\n"))
		for _, cred := range creds {
			_, _ = fmt.Fprintf(w, "%s\t%s\n", cred.Context, cred.ToolName)
		}
	} else {
		// Sort credentials by tool name and print one name per line.
		sort.Slice(creds, func(i, j int) bool {
			return creds[i].ToolName < creds[j].ToolName
		})

		for _, cred := range creds {
			fmt.Println(cred.ToolName)
		}
	}

	return nil
}

pkg/cli/credential_delete.go

Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@

package cli

import (
	"fmt"

	"github.com/gptscript-ai/gptscript/pkg/config"
	"github.com/gptscript-ai/gptscript/pkg/credentials"
	"github.com/spf13/cobra"
)

// Delete implements the `gptscript credential delete` subcommand, which removes a stored credential by tool name.
type Delete struct {
	root *GPTScript
}

func (c *Delete) Customize(cmd *cobra.Command) {
	cmd.Use = "delete <tool name>"
	cmd.SilenceUsage = true
	cmd.Short = "Delete a stored credential"
	cmd.Args = cobra.ExactArgs(1)
}

func (c *Delete) Run(_ *cobra.Command, args []string) error {
	cfg, err := config.ReadCLIConfig(c.root.ConfigFile)
	if err != nil {
		return fmt.Errorf("failed to read CLI config: %w", err)
	}

	store, err := credentials.NewStore(cfg, c.root.CredentialContext)
	if err != nil {
		return fmt.Errorf("failed to get credentials store: %w", err)
	}

	if err = store.Remove(args[0]); err != nil {
		return fmt.Errorf("failed to remove credential: %w", err)
	}
	return nil
}
