# Class: Call\<T\>

History.Call

## Type parameters

| Name |
| :--- |
| `T` |

## Implements

- `ICall<T>`
## Table of contents

### Constructors

- constructor

### Properties

- _compiledInstructions
- _compiledPrompt
- _completionTokensConsumed
- _error
- _exception
- _failedValidations
- _fixedOutput
- _inputs
- _instructions
- _iterations
- _logs
- _parsedOutputs
- _prompt
- _promptParams
- _promptTokensConsumed
- _rawOutputs
- _reaskInstructions
- _reaskPrompts
- _reasks
- _status
- _tokensConsumed
- _validatedOutput
- _validationOutput
- _validatorLogs
### Accessors

- compiledInstructions
- compiledPrompt
- completionTokensConsumed
- error
- exception
- failedValidations
- fixedOutput
- inputs
- instructions
- iterations
- logs
- parsedOutputs
- prompt
- promptParams
- promptTokensConsumed
- rawOutputs
- reaskInstructions
- reaskPrompts
- reasks
- status
- tokensConsumed
- validatedOutput
- validationOutput
- validatorLogs
### Methods

- fromPyCall

## Constructors

### constructor

• `new Call<T>(call): Call<T>`

#### Type parameters

| Name |
| :--- |
| `T` |

#### Parameters

| Name | Type |
| :--- | :--- |
| `call` | `ICall<T>` |

#### Returns

`Call<T>`
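A minimal usage sketch: the import path, the `History` namespace export, and the shape of the wrapped `ICall<T>` record are assumptions here, since this page only documents the `Call` wrapper itself.

```ts
// Sketch only: import path and export layout are assumptions based on the
// package name and the qualified name History.Call shown on this page.
import { History } from "@guardrails-ai/core";
import type { ICall } from "@guardrails-ai/core";

// Assume `record` is an existing ICall<string> snapshot produced elsewhere
// (e.g. by a Guard run); its concrete shape is not documented on this page.
declare const record: ICall<string>;

// Wrap the snapshot to get the read-only history accessors documented below.
const call = new History.Call<string>(record);

console.log(call.status);         // cumulative status of the run
console.log(call.tokensConsumed); // total tokens across all iterations
```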
## Properties

• `Private` `Optional` **\_compiledInstructions**: `string`

The initial compiled instructions that were passed to the LLM on the first call.

• `Private` `Optional` **\_compiledPrompt**: `string`

The initial compiled prompt that was passed to the LLM on the first call.

• `Private` `Optional` **\_completionTokensConsumed**: `number`

The total number of completion tokens consumed across all iterations of this call.

• `Private` `Optional` **\_error**: `string`

The error message from any exception that was raised and interrupted the run.

• `Private` `Optional` **\_exception**: `Error`

The exception that interrupted the run.

• `Private` **\_failedValidations**: `Stack<ValidatorLogs>`

The validator logs for any validations that failed during the entirety of the run.

• `Private` `Optional` **\_fixedOutput**: `T`

The cumulative validation output across all current iterations, with any automatic fixes applied.

• `Private` **\_inputs**: `CallInputs`

The inputs as passed in to `Guard.call` or `Guard.parse`.

• `Private` `Optional` **\_instructions**: `string`

The instructions as provided by the user when initializing or calling the Guard.

• `Private` **\_iterations**: `Stack<Iteration<T>>`

A stack of iterations, one for each step/reask that occurred during this call.

• `Private` **\_logs**: `Stack<string>`

All logs from all iterations, collected as a stack.

• `Private` **\_parsedOutputs**: `Stack<T>`

The outputs from the LLM after parsing but before validation.

• `Private` `Optional` **\_prompt**: `string`

The prompt as provided by the user when initializing or calling the Guard.

• `Private` `Optional` **\_promptParams**: `Dictionary`

The prompt parameters as provided by the user when initializing or calling the Guard.

• `Private` `Optional` **\_promptTokensConsumed**: `number`

The total number of prompt tokens consumed across all iterations of this call.

• `Private` **\_rawOutputs**: `Stack<string>`

The exact outputs from all LLM calls.

• `Private` **\_reaskInstructions**: `Stack<string>`

The compiled instructions used during reasks. Does not include the initial instructions.

• `Private` **\_reaskPrompts**: `Stack<string>`

The compiled prompts used during reasks. Does not include the initial prompt.

• `Private` **\_reasks**: `Stack<ReAsk>`

Reasks generated during validation that could not be automatically fixed. These would be incorporated into the prompt for the next LLM call if additional reasks were granted.

• `Private` `Optional` **\_status**: `string`

The cumulative status of the run, based on the validity of the final merged output.

• `Private` `Optional` **\_tokensConsumed**: `number`

The total number of tokens consumed across all iterations of this call.

• `Private` `Optional` **\_validatedOutput**: `T`

The output from the LLM after validation. This only has a value if the Guard is in a passing state.

• `Private` `Optional` **\_validationOutput**: `ReAsk` \| `T`

The cumulative validation output across all current iterations. May contain ReAsks.

• `Private` **\_validatorLogs**: `Stack<ValidatorLogs>`

The results of each individual validation performed on the LLM responses across all iterations.
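These private fields are surfaced through the read-only accessors listed in the next section. As a minimal sketch (assuming `call` is an existing `Call<string>` from a finished run, and reusing the import and namespace assumptions from the constructor example above), the different output stages can be inspected side by side:

```ts
import { History } from "@guardrails-ai/core"; // import path is an assumption

declare const call: History.Call<string>; // an existing call from a finished run

console.log(call.rawOutputs);       // exact LLM responses, one per iteration
console.log(call.parsedOutputs);    // parsed but not yet validated outputs
console.log(call.validationOutput); // merged result; may still contain ReAsks
console.log(call.fixedOutput);      // merged result with automatic fixes applied
console.log(call.validatedOutput);  // only set when the Guard is in a passing state
```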
## Accessors

• `get` **compiledInstructions**(): `undefined` \| `string`

• `get` **compiledPrompt**(): `undefined` \| `string`

• `get` **completionTokensConsumed**(): `undefined` \| `number`

Implementation of: `ICall.completionTokensConsumed`

• `get` **error**(): `undefined` \| `string`

• `get` **exception**(): `undefined` \| `Error`

• `get` **failedValidations**(): `Stack<ValidatorLogs>`

• `get` **fixedOutput**(): `undefined` \| `T`

• `get` **inputs**(): `CallInputs`

• `get` **instructions**(): `undefined` \| `string`

• `get` **iterations**(): `Stack<Iteration<T>>`

• `get` **logs**(): `Stack<string>`

• `get` **parsedOutputs**(): `Stack<T>`

• `get` **prompt**(): `undefined` \| `string`

• `get` **promptParams**(): `undefined` \| `Dictionary`

• `get` **promptTokensConsumed**(): `undefined` \| `number`

• `get` **rawOutputs**(): `Stack<string>`

• `get` **reaskInstructions**(): `Stack<string>`

• `get` **reaskPrompts**(): `Stack<string>`

• `get` **reasks**(): `Stack<ReAsk>`

• `get` **status**(): `undefined` \| `string`

• `get` **tokensConsumed**(): `undefined` \| `number`
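Since all three token accessors may be `undefined` (for example when the LLM client reports no usage data), a small helper can normalize them into a usage summary. A hedged sketch, reusing the same import assumption as above:

```ts
import { History } from "@guardrails-ai/core"; // import path is an assumption

// Summarize token usage for one call. Each getter may be undefined when the
// underlying LLM client did not report usage, so fall back to 0 or a sum.
function tokenReport<T>(call: History.Call<T>): string {
  const prompt = call.promptTokensConsumed ?? 0;
  const completion = call.completionTokensConsumed ?? 0;
  const total = call.tokensConsumed ?? prompt + completion;
  return `prompt=${prompt} completion=${completion} total=${total}`;
}
```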
• `get` **validatedOutput**(): `undefined` \| `T`

• `get` **validationOutput**(): `undefined` \| `ReAsk` \| `T`
• `get` **validatorLogs**(): `Stack<ValidatorLogs>`
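Putting the status-related accessors together: `validatedOutput` is only populated when the Guard is in a passing state, `validationOutput` may still contain `ReAsk`s, and `error` captures an interrupting exception. A hedged sketch of classifying a call's outcome (same import assumption as above):

```ts
import { History } from "@guardrails-ai/core"; // import path is an assumption

function describeOutcome<T>(call: History.Call<T>): string {
  if (call.error !== undefined) {
    // An exception was raised and interrupted the run.
    return `errored: ${call.error}`;
  }
  if (call.validatedOutput !== undefined) {
    // validatedOutput only has a value when the Guard is in a passing state.
    return "passed validation";
  }
  // Otherwise the merged output may still contain unresolved ReAsks.
  return `not passing (status: ${call.status ?? "unknown"})`;
}
```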
## Methods

### fromPyCall

▸ `fromPyCall<U>(pyCall): Promise<Call<U>>`

#### Type parameters

| Name |
| :--- |
| `U` |

#### Parameters

| Name | Type |
| :--- | :--- |
| `pyCall` | `any` |

#### Returns

`Promise<Call<U>>`
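A hedged sketch of using `fromPyCall`. The listing types `pyCall` as `any` and does not show how the Python-side object is obtained, nor whether `fromPyCall` is a static factory on `Call`, so both are assumptions here:

```ts
import { History } from "@guardrails-ai/core"; // import path is an assumption

// Assumes fromPyCall is exposed as a static factory on the Call class and that
// `pyCall` is an opaque handle to a Python-side call record (typed `any` in the
// listing), e.g. obtained through a Python interop bridge.
async function importFromPython(pyCall: any): Promise<History.Call<string>> {
  return History.Call.fromPyCall<string>(pyCall);
}
```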