SDialog is a modular Python toolkit for synthetic dialog generation, evaluation, and analysis with LLMs. It standardizes a Dialog schema and offers persona-driven multi-agent simulation, composable orchestration, built-in metrics, and mechanistic interpretability, so you can generate reliable, controllable dialog data at scale.
Quick links: Docs · API · Demo (Colab) · Tutorials · Issues
- Standard Dialog schema with JSON import/export (aiming to help standardize dialog datasets with community support); see the sketch after this list
- Persona-driven multi-agent simulation with contexts, tools, and thoughts
- Composable orchestration for precise control over behavior and flow
- Built-in evaluation (metrics + LLM-as-judge) for comparison and iteration
- Native mechanistic interpretability (inspect and steer activations)
- Easy creation of user-defined components (personas, metrics, orchestrators, etc.) by inheriting from base classes
- Interoperability across OpenAI, HuggingFace, Ollama, AWS, and more
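To make the schema idea concrete, here is a minimal sketch of reading an exported dialog back as plain JSON. The field names (`turns`, `speaker`, `text`) are illustrative assumptions, not the actual schema; use `Dialog.from_file` (shown later) for real loading:

```python
import json

# Illustrative only: the field names below ("turns", "speaker", "text") are
# assumptions for this sketch, not the actual Dialog schema.
with open("dialog_0.json") as f:
    data = json.load(f)

for turn in data.get("turns", []):
    print(turn.get("speaker"), "->", turn.get("text"))
```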
```bash
pip install sdialog
```
A short example showing personas, agents, a simple rule-based orchestrator, and a tool:
```python
import sdialog
from sdialog import Context
from sdialog.agents import Agent
from sdialog.personas import Persona
from sdialog.orchestrators import SimpleReflexOrchestrator

# Set your preferred backend/model and parameters
sdialog.config.llm("openai:gpt-4.1", temperature=0.9)

# Define personas and a shared context
alice = Persona(name="Alice", role="barista", personality="cheerful")
bob = Persona(name="Bob", role="customer", personality="curious")
ctx = Context(location="Downtown cafe", topics=["coffee"])

# (Optional) Define tools for the agents.
# Any user-defined function works; here is a mock one for our agent.
def lookup_menu(item: str) -> dict:
    return {"item": item, "specials": ["vanilla latte", "cold brew"]}

# (Optional) Define orchestrators for the agents.
# Here, a simple rule-based orchestrator.
react = SimpleReflexOrchestrator(
    condition=lambda utt: "decaf" in utt.lower(),
    instruction="Explain decaf options and suggest one."
)

# Create the agents
barista = Agent(persona=alice, tools=[lookup_menu])
customer = Agent(persona=bob, first_utterance="Hi!")

# Attach orchestrators to an agent using pipe-like composition
barista = barista | react

# Generate three dialogs!
for ix in range(3):
    dialog = customer.dialog_with(barista, context=ctx)
    dialog.print(orchestration=True)
    dialog.to_file(f"dialog_{ix}.json")
```
Note
- See the orchestration tutorial and the tutorial on agents with tools and thoughts.
- Dialogs are rich objects with helper methods (filter, slice, transform, etc.) and can easily be exported and loaded; a hypothetical usage sketch follows the loading example below.
Load a saved dialog later:

```python
from sdialog import Dialog

my_dialog = Dialog.from_file("dialog_0.json")
my_dialog.print()
```
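The helper method names below come from the note above, but their exact signatures are assumptions for illustration only:

```python
# Hypothetical usage: "filter" and "slice" are named in the note above,
# but these signatures are illustrative assumptions.
short_dialog = my_dialog[:4]                 # assuming dialogs support slicing
bob_turns = my_dialog.filter(speaker="Bob")  # assuming a speaker keyword filter
```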
Generate personas and contexts for your agents automatically when you need diversity, and use the `.set()` method when you need more control:
```python
from sdialog.personas import Doctor, Patient
from sdialog.generators import PersonaGenerator, ContextGenerator
from sdialog import Context

# By default, all attribute values will be LLM-generated.
doc = PersonaGenerator(Doctor(specialty="Cardiology")).generate()
pat = PersonaGenerator(Patient(symptoms="chest pain")).generate()

# Alternatively, specify how you want each attribute to be generated
ctx_base = Context(location="emergency room")
ctx_gen = ContextGenerator(ctx_base)
ctx_gen.set(
    objects=get_objects_from_db,  # A user-defined function
    circumstances="{csv:circumstances:./data/circumstances.csv}",  # Sampled from a CSV file
    goals="{llm:Suggest a realistic goal for the context}"  # LLM, with a specific instruction
)
ctx = ctx_gen.generate()
```
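The same `.set()` pattern should also apply to persona generation; here is a sketch under that assumption (the `years_of_experience` attribute is hypothetical):

```python
import random

# Assumption: PersonaGenerator exposes the same .set() interface as
# ContextGenerator; "years_of_experience" is a hypothetical attribute.
doc_gen = PersonaGenerator(Doctor())
doc_gen.set(
    specialty="{llm:Pick a plausible medical specialty}",  # LLM with instruction
    years_of_experience=lambda: random.randint(1, 30),     # user-defined function
)
doc = doc_gen.generate()
```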
Tip
Check out our demo notebook in Colab to play around with sdialog.
Use built-in metrics (readability, flow, linguistic features, LLM judges) or easily create new ones, then aggregate and compare datasets via `DatasetComparator` (a sketch of a custom metric follows the example below):
```python
from sdialog.evaluation import LLMJudgeRealDialog, LinguisticFeatureScore
from sdialog.evaluation import FrequencyEvaluator, MeanEvaluator
from sdialog.evaluation import DatasetComparator

reference = [...]  # list[Dialog]
candidate = [...]  # list[Dialog]

judge = LLMJudgeRealDialog()
flesch = LinguisticFeatureScore(feature="flesch-reading-ease")

comparator = DatasetComparator([
    FrequencyEvaluator(judge, name="Realistic dialog rate"),
    MeanEvaluator(flesch, name="Mean Flesch Reading Ease"),
])

results = comparator({"reference": reference, "candidate": candidate})

# Plot results for each evaluator
comparator.plot()
```
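New metrics are meant to be created by inheriting from the base classes; since the exact base class and interface are not shown here, the following plain callable is only an illustrative sketch:

```python
# Illustrative sketch of a custom metric: real metrics should inherit from
# sdialog's base classes (see the evaluation tutorial); the .turns attribute
# and its .text field are assumptions.
class MeanTurnLength:
    """Average number of words per turn in a dialog."""
    def __call__(self, dialog) -> float:
        texts = [turn.text for turn in dialog.turns]
        return sum(len(t.split()) for t in texts) / max(len(texts), 1)

comparator = DatasetComparator([
    MeanEvaluator(MeanTurnLength(), name="Mean turn length"),
])
```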
Tip
See the evaluation tutorial.
Attach Inspectors to capture per-token activations and optionally steer (add or ablate directions) to analyze or intervene in model behavior.
```python
from sdialog.interpretability import Inspector
from sdialog.agents import Agent

agent = Agent(name="Bob")
inspector = Inspector(target="model.layers.16.post_attention_layernorm")
agent = agent | inspector

agent("How are you?")
agent("Cool!")

# Let's get the last response's first-token activation vector!
act = inspector[-1][0].act  # [response index][token index]
```
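To collect all token activations of a response at once, something like the following should work, assuming `inspector[-1]` is iterable over tokens and each `.act` is a 1-D tensor (both are assumptions here):

```python
import torch

# Assumptions: inspector[-1] iterates over tokens and each .act is a 1-D tensor.
acts = torch.stack([token.act for token in inspector[-1]])
print(acts.shape)  # (num_tokens, hidden_dim)
```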
Steering intervention (subtracting a direction):

```python
import torch

anger_direction = torch.load("anger_direction.pt")  # A direction vector (e.g., a PCA or difference-in-means vector)

# Ablate the anger direction from the target activations
agent_steered = agent | (inspector - anger_direction)

agent_steered("You are an extremely upset assistant")  # Agent "can't get angry anymore" :)
```
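The complementary operation, adding a direction to push the model toward a behavior, presumably mirrors the subtraction API above; this `+` form is an assumption, not a documented call:

```python
# Assumption: `+` steers activations toward the direction, mirroring `-` above.
agent_angry = agent | (inspector + anger_direction)
agent_angry("Tell me about your day.")
```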
Tip
See the tutorial on using SDialog to remove the refusal capability from LLaMA 3.2.
Many backends are supported. Use the "BACKEND:MODEL" string format either to set a global default LLM for all components or to pass a model to each component:
```python
import sdialog

# Change the default global LLM
sdialog.config.llm("ollama:qwen3:14b")

# Any argument supported by the chosen backend/model can also be given, for example:
sdialog.config.llm("ollama:qwen3:14b",
                   temperature=0.7,
                   base_url="https://my-ollama-endpoint.com:123")  # Remote Ollama server
```
Any LLM-powered component can also take a specific model and its parameters as arguments, to override the default:
```python
from sdialog.agents import Agent

my_agent = Agent(model="aws:anthropic.claude-3-5-sonnet-20240620-v1:0",
                 region_name="us-east-1")
```
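The same per-component override should apply to other LLM-powered components such as generators; the backend prefix and model name below are illustrative assumptions:

```python
from sdialog.personas import Persona
from sdialog.generators import PersonaGenerator

# Assumption: generators accept the same "BACKEND:MODEL" string via `model=`;
# the HuggingFace prefix and model name here are illustrative only.
gen = PersonaGenerator(Persona(), model="huggingface:meta-llama/Llama-3.1-8B-Instruct")
persona = gen.generate()
```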
See CONTRIBUTING.md. We welcome issues, feature requests, and pull requests. If you want to add personas, agents, orchestrators, generators, evaluators, or tutorials, please open an issue or submit a PR and help us make SDialog better!
This project follows the all-contributors specification. Contributors:
- Sergio Burdisso
- Labrak Yanis
- Séverin
- Ricard Marxer
- Thomas Schaaf
- David Liu
- ahassoo1
- Pawel Cyrta
- ABCDEFGHIJKL
This work was supported by the EU Horizon 2020 project ELOQUENCE (grant number 101070558).
The initial development of this project began in preparation for the 2025 Jelinek Memorial Summer Workshop on Speech and Language Technologies (JSALT 2025) as part of the "Play your Part" research group.
MIT License
Copyright (c) 2025 Idiap Research Institute