Ability to set maxTokens when calling prompt() #36

Open
@duci9y

Description

I'd like to be able to constrain the output to N tokens for my use case (tab autocompletion). Otherwise the model generates strings that are too long to be useful, and the longer generations also take more time.
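For context, a minimal sketch of how such a knob might be used, assuming the session-based create()/prompt() shape. Both the `maxTokens` option name and the `LanguageModel` surface typed here are assumptions for illustration, not part of the current API:

```ts
// Loosely typed stand-in for the browser global, for illustration only;
// the real Prompt API surface may differ.
declare const LanguageModel: {
  create(): Promise<{
    prompt(input: string, options?: { maxTokens?: number }): Promise<string>;
  }>;
};

// Tab autocompletion: cap the completion at a small token budget so the
// suggestion stays short and the model returns quickly.
async function autocomplete(prefix: string): Promise<string> {
  const session = await LanguageModel.create();
  return session.prompt(prefix, { maxTokens: 16 }); // hypothetical option
}
```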

Labels: ecosystem parity (a feature that other popular language model APIs offer), enhancement (new feature or request)
