[Codex Feature] Add Conversational Capability #89

Open · wants to merge 1 commit into base: main
2 changes: 2 additions & 0 deletions src/cleanlab_codex/project.py
@@ -245,6 +245,7 @@ def validate(
query: str,
response: str,
*,
+ messages: Optional[List[Dict[str, Any]]] = None,
constrain_outputs: Optional[List[str]] = None,
custom_metadata: Optional[object] = None,
eval_scores: Optional[Dict[str, float]] = None,
@@ -264,4 +265,5 @@ def validate(
eval_scores=eval_scores,
options=options,
quality_preset=quality_preset,
+ messages=messages,
)
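A minimal usage sketch for the new `messages` parameter on `Project.validate`, assuming you already have a configured `Project` instance (the `project` variable, the example query, and the example conversation below are illustrative, not from this PR). Per the docstring, the history uses OpenAI chat format and excludes the current prompt:

```python
# Prior conversation turns in OpenAI chat format (role/content dicts).
# The current user query is NOT included here; it goes in `query` instead.
messages = [
    {"role": "user", "content": "What is your return policy?"},
    {"role": "assistant", "content": "You can return items within 30 days."},
]

query = "Does that apply to sale items too?"
response = "Yes, sale items can also be returned within 30 days."

# Assuming `project` is an authenticated cleanlab_codex Project instance:
# result = project.validate(
#     query=query,
#     response=response,
#     messages=messages,  # new in this PR: conversation history
# )
```

The actual `validate` call is commented out because it requires Codex credentials; the point is the shape of the `messages` list that the new parameter accepts.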
3 changes: 3 additions & 0 deletions src/cleanlab_codex/validator.py
@@ -57,6 +57,7 @@ def validate(
context: str,
response: str,
prompt: Optional[str] = None,
+ messages: Optional[list[dict[str, Any]]] = None,
form_prompt: Optional[Callable[[str, str], str]] = None,
metadata: Optional[dict[str, Any]] = None,
eval_scores: Optional[dict[str, float]] = None,
@@ -71,6 +72,7 @@ def validate(
context (str): The context that was retrieved from the RAG Knowledge Base and used to generate the response.
response (str): A response from your LLM/RAG system.
prompt (str, optional): Optional prompt representing the actual inputs (combining query, context, and system instructions into one string) to the LLM that generated the response.
+ messages (list[dict[str, Any]], optional): Optional list of messages representing the conversation history, in OpenAI format, if applicable. Messages should **not** include the current prompt.
form_prompt (Callable[[str, str], str], optional): Optional function to format the prompt based on query and context. Cannot be provided together with prompt, provide one or the other. This function should take query and context as parameters and return a formatted prompt string. If not provided, a default prompt formatter will be used. To include a system prompt or any other special instructions for your LLM, incorporate them directly in your custom form_prompt() function definition.
metadata (dict, optional): Additional custom metadata to associate with the query logged in the Codex Project.
eval_scores (dict[str, float], optional): Scores assessing different aspects of the RAG system. If provided, TLM Trustworthy RAG will not be used to generate scores.
@@ -107,4 +109,5 @@ def validate(
eval_thresholds=self._eval_thresholds,
options=options,
quality_preset=quality_preset,
+ messages=messages,
)
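A corresponding sketch for `Validator.validate`, which additionally takes the retrieved `context`. Again this is illustrative only: the `validator` variable, the support-agent scenario, and the strings below are assumptions, not part of this PR. Note the split the docstring requires: earlier turns go in `messages`, while the current turn goes in `query` (and optionally `prompt`):

```python
# Earlier conversation turns only, in OpenAI chat format.
history = [
    {"role": "system", "content": "You are a helpful support agent."},
    {"role": "user", "content": "How do I reset my password?"},
    {"role": "assistant", "content": "Use the 'Forgot password' link on the login page."},
]

# The current turn is kept OUT of `history`, per the docstring.
current_query = "And if I no longer have access to my email?"
context = "Password resets can also be done via SMS verification."
response = "You can verify your identity via SMS instead."

# Assuming `validator` is a configured cleanlab_codex Validator instance:
# results = validator.validate(
#     query=current_query,
#     context=context,
#     response=response,
#     messages=history,  # prior turns only
# )
```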