Feature Request: Add action_guard Parameter for centralized validation of agent tool calls #2961

@prane-eth

Description

Confirm this is a feature request for the Python library and not the underlying OpenAI API.

  • This is a feature request for the Python library

Describe the feature or improvement you're requesting

Background

AI agents are seeing growing adoption across the industry, including in critical applications. Agents with access to tools (including MCP servers) currently call those tools directly, with no centralized validation layer inspecting calls before execution, so harmful or disallowed tool calls can run without oversight. An action-guard feature in the openai-python package would automate this validation and make agent workflows more secure.

In the Agent-Action-Guard experiments, GPT-5.3 scored 17.33% on safety, indicating very high vulnerability and motivating the need for an action guard.

Proposed Change

Introduce an action_guard parameter in the OpenAI Python client that allows developers to define a centralized validation function for agent actions.

This guard would be invoked whenever the agent attempts a tool call (including MCP actions). The guard function can decide whether to allow or block the action.

Example:

import openai
# Proposed types; under this change the SDK would export them:
from openai import AgentAction, GuardDecision
from agent_action_guard import is_action_harmful  # external classifier helper

def my_guard_function(action: AgentAction) -> GuardDecision:
    # Validation can be code-based or delegate to a classifier model
    is_harmful, confidence = is_action_harmful(action)
    if is_harmful:
        return GuardDecision.BLOCK
    return GuardDecision.ALLOW

# GuardDecision options: ALLOW and BLOCK

response = openai.chat.completions.create(
    model="gpt-...",
    messages=[...],
    tools=[...],
    action_guard=my_guard_function,  # The new argument
)
# Same for openai.chat.completions.stream, openai.responses.create, openai.responses.stream,
# openai.chat.completions.create(stream=True), and openai.responses.create(stream=True)

Behavior

The guard function receives an AgentAction object describing the pending tool call.

Possible outcomes:

  • ALLOW – Execute the action normally.
  • BLOCK – Prevent execution and return an error to the agent.
  • New options may be added in the future.
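The enforcement described above could be sketched as a dispatch loop that consults the guard before each tool call. The `GuardDecision`, `AgentAction`, and `run_tool_calls` names below are illustrative assumptions for this proposal, not part of the current openai package:

```python
# Hypothetical sketch of guard enforcement before tool dispatch.
# GuardDecision, AgentAction, and run_tool_calls are illustrative only.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable


class GuardDecision(Enum):
    ALLOW = auto()
    BLOCK = auto()


@dataclass
class AgentAction:
    tool_name: str
    arguments: dict


def run_tool_calls(
    actions: list[AgentAction],
    action_guard: Callable[[AgentAction], GuardDecision],
    execute: Callable[[AgentAction], str],
) -> list[str]:
    """Consult the guard before each call; blocked calls return an error to the agent."""
    results = []
    for action in actions:
        if action_guard(action) is GuardDecision.BLOCK:
            results.append(f"error: tool call '{action.tool_name}' blocked by action_guard")
        else:
            results.append(execute(action))
    return results


# Example guard: block any file-deletion tool call
def my_guard(action: AgentAction) -> GuardDecision:
    if action.tool_name == "delete_file":
        return GuardDecision.BLOCK
    return GuardDecision.ALLOW


outputs = run_tool_calls(
    [AgentAction("read_file", {"path": "notes.txt"}),
     AgentAction("delete_file", {"path": "notes.txt"})],
    action_guard=my_guard,
    execute=lambda a: f"ok: {a.tool_name}",
)
print(outputs)
```

Returning the BLOCK outcome as an error string (rather than raising) lets the model see the refusal and adjust its plan, which matches the "return an error to the agent" behavior above.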

Additional context

Benefits

  • Centralized enforcement of action policies
  • Reduced boilerplate in tool implementations
  • Improved safety for agentic systems
  • Seamless integration with existing tool and MCP ecosystems

Related Work

Requiring manual user approval for every action (human-in-the-loop confirmation) makes the workflow slow and inefficient; a programmable guard automates this check.

Question

The updated code is available at https://github.com/prane-eth/openai-action-guard/tree/feature/agent-tool-call-action-guard.
May I continue the development and create a PR?
