
bug: support for system message in RunnableRails ChatPromptTemplate #1107

@smruthi33

Description

Did you check docs and existing issues?

  • I have read all the NeMo-Guardrails docs
  • I have updated the package to the latest version before submitting this issue
  • (optional) I have used the develop branch
  • I have searched the existing issues of NeMo-Guardrails

Python version (python --version)

3.11.9

Operating system/version

MacOS 15.3.2

NeMo-Guardrails version (if you must use a specific version and not the latest)

0.13.0

Describe the bug

The current implementation does not support prompts that include a system message:

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from nemoguardrails import RailsConfig
from nemoguardrails.integrations.langchain.runnable_rails import RunnableRails

guardrails_config = RailsConfig.from_path(<Path>)
guardrails = RunnableRails(guardrails_config)
client = ChatOpenAI(
    base_url=GEN_AI_ENDPOINT,
    api_key=GEN_AI_ENDPOINT_AUTHORIZATION,
    model=MODEL_NAME,
    temperature=0,
)
messages = [
    ("system", system),
    ("human", prompt),
]
chat_prompt = ChatPromptTemplate.from_messages(messages)

chain_with_guardrails = chat_prompt | (guardrails | client)

In this setup, RunnableRails does not handle messages with the system role. Passing such a structure as a dictionary conflicts with the format expected by LangChain's Runnable interface, which assumes a flat input structure and does not support multiple message types out of the box. This limitation needs to be addressed.
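For completeness, a minimal invocation sketch that triggers the behavior (assuming `system` and `prompt` were defined earlier as plain strings with no template variables, so the template needs no inputs):

# Illustrative values only, e.g. system = "Answer in French only."
# and prompt = "What is the capital of France?"
result = chain_with_guardrails.invoke({})

# With the bug present, the model never receives the system message,
# so the reply ignores the system instruction (e.g. it comes back in
# English instead of French).
print(result)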

Steps To Reproduce

Run the snippet from the description above: build a ChatPromptTemplate that contains both a system and a human message, wrap the model with RunnableRails, compose the chain as chat_prompt | (guardrails | client), and invoke it.

Expected Behavior

The configured chain should read the system prompt and include it, along with the user prompt, when executing the runnable chain.
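Concretely, the final LLM call should receive both messages. Illustratively, using LangChain's message types (not code from this report):

from langchain_core.messages import HumanMessage, SystemMessage

# What the model should see after RunnableRails passes the prompt through:
expected_messages = [
    SystemMessage(content=system),  # currently dropped
    HumanMessage(content=prompt),
]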

Actual Behavior

The configured chain ignores the system prompt entirely and answers the question based only on the prompts configured in the guardrails config and the user prompt.
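Until this is fixed, a possible workaround (a sketch, not a proper fix) is to fold the system instructions into the human message so RunnableRails does not drop them:

# Workaround sketch: merge the system instructions into the human turn.
# This loses the distinct system role but keeps the instructions in play.
messages = [
    ("human", f"{system}\n\n{prompt}"),
]
chat_prompt = ChatPromptTemplate.from_messages(messages)
chain_with_guardrails = chat_prompt | (guardrails | client)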
