@@ -74,13 +93,13 @@ asyncio.run(main())
The AutoGen ecosystem provides everything you need to create AI agents, especially multi-agent workflows -- framework, developer tools, and applications.
-The *framework* uses a layered and extensible design. Layers have clearly divided responsibilities and build on top of layers below. This design enables you to use the framework at different levels of abstraction, from high-level APIs to low-level components.
+The _framework_ uses a layered and extensible design. Layers have clearly divided responsibilities and build on top of layers below. This design enables you to use the framework at different levels of abstraction, from high-level APIs to low-level components.
- [Core API](./python/packages/autogen-core/) implements message passing, event-driven agents, and local and distributed runtimes for flexibility and power. It also provides cross-language support for .NET and Python.
- [AgentChat API](./python/packages/autogen-agentchat/) implements a simpler but opinionated API for rapid prototyping. This API is built on top of the Core API, is closest to what users of v0.2 are familiar with, and supports familiar multi-agent patterns such as two-agent chat or group chats.
- [Extensions API](./python/packages/autogen-ext/) enables first- and third-party extensions that continuously expand framework capabilities. It supports specific implementations of LLM clients (e.g., OpenAI, AzureOpenAI) and capabilities such as code execution.
-The ecosystem also supports two essential *developer tools*:
+The ecosystem also supports two essential _developer tools_:
@@ -97,17 +116,17 @@ With AutoGen you get to join and contribute to a thriving ecosystem. We host wee
-| | [![Python](https://img.shields.io/badge/AutoGen-Python-blue?logo=python&logoColor=white)](./python) | [![.NET](https://img.shields.io/badge/AutoGen-.NET-green?logo=.net&logoColor=white)](./dotnet) | [![Studio](https://img.shields.io/badge/AutoGen-Studio-purple?logo=visual-studio&logoColor=white)](./python/packages/autogen-studio) |
-|----------------------|--------------------------------------------------------------------------------------------|-------------------|-------------------|
-| Installation | [![Installation](https://img.shields.io/badge/Install-blue)](https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/installation.html) | * | [![Install](https://img.shields.io/badge/Install-purple)](https://microsoft.github.io/autogen/dev/user-guide/autogenstudio-user-guide/installation.html) |
-| Quickstart | [![Quickstart](https://img.shields.io/badge/Quickstart-blue)](https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/quickstart.html#) | * | * |
-| Tutorial | [![Tutorial](https://img.shields.io/badge/Tutorial-blue)](https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/tutorial/models.html) | *| * |
-| API Reference | [![API](https://img.shields.io/badge/Docs-blue)](https://microsoft.github.io/autogen/dev/reference/index.html#) | * | [![API](https://img.shields.io/badge/Docs-purple)](https://microsoft.github.io/autogen/dev/user-guide/autogenstudio-user-guide/usage.html) |
-| Packages | [![PyPi autogen-core](https://img.shields.io/badge/PyPi-autogen--core-blue?logo=pypi)](https://pypi.org/project/autogen-core/) [![PyPi autogen-agentchat](https://img.shields.io/badge/PyPi-autogen--agentchat-blue?logo=pypi)](https://pypi.org/project/autogen-agentchat/) [![PyPi autogen-ext](https://img.shields.io/badge/PyPi-autogen--ext-blue?logo=pypi)](https://pypi.org/project/autogen-ext/) | * | [![PyPi autogenstudio](https://img.shields.io/badge/PyPi-autogenstudio-purple?logo=pypi)](https://pypi.org/project/autogenstudio/) |
+| | [![Python](https://img.shields.io/badge/AutoGen-Python-blue?logo=python&logoColor=white)](./python) | [![.NET](https://img.shields.io/badge/AutoGen-.NET-green?logo=.net&logoColor=white)](./dotnet) | [![Studio](https://img.shields.io/badge/AutoGen-Studio-purple?logo=visual-studio&logoColor=white)](./python/packages/autogen-studio) |
+| ------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| Installation | [![Installation](https://img.shields.io/badge/Install-blue)](https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/installation.html) | \* | [![Install](https://img.shields.io/badge/Install-purple)](https://microsoft.github.io/autogen/dev/user-guide/autogenstudio-user-guide/installation.html) |
+| Quickstart | [![Quickstart](https://img.shields.io/badge/Quickstart-blue)](https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/quickstart.html#) | \* | [![Usage](https://img.shields.io/badge/Quickstart-blue)](https://microsoft.github.io/autogen/dev/user-guide/autogenstudio-user-guide/usage.html#) |
+| Tutorial | [![Tutorial](https://img.shields.io/badge/Tutorial-blue)](https://microsoft.github.io/autogen/dev/user-guide/agentchat-user-guide/tutorial/models.html) | \* | [![Usage](https://img.shields.io/badge/Quickstart-blue)](https://microsoft.github.io/autogen/dev/user-guide/autogenstudio-user-guide/usage.html#) |
+| API Reference | [![API](https://img.shields.io/badge/Docs-blue)](https://microsoft.github.io/autogen/dev/reference/index.html#) | \* | [![API](https://img.shields.io/badge/Docs-purple)](https://microsoft.github.io/autogen/dev/user-guide/autogenstudio-user-guide/usage.html) |
+| Packages | [![PyPi autogen-core](https://img.shields.io/badge/PyPi-autogen--core-blue?logo=pypi)](https://pypi.org/project/autogen-core/) [![PyPi autogen-agentchat](https://img.shields.io/badge/PyPi-autogen--agentchat-blue?logo=pypi)](https://pypi.org/project/autogen-agentchat/) [![PyPi autogen-ext](https://img.shields.io/badge/PyPi-autogen--ext-blue?logo=pypi)](https://pypi.org/project/autogen-ext/) | \* | [![PyPi autogenstudio](https://img.shields.io/badge/PyPi-autogenstudio-purple?logo=pypi)](https://pypi.org/project/autogenstudio/) |
-**Releasing soon*
+\*_Releasing soon_
Interested in contributing? See [CONTRIBUTING.md](./CONTRIBUTING.md) for guidelines on how to get started. We welcome contributions of all kinds, including bug fixes, new features, and documentation improvements. Join our community and help us make AutoGen better!
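To make the layered design described above more concrete, here is a minimal sketch (not part of this change) of the lowest layer: a Core API agent registered on the local single-threaded runtime. The message type, agent name, and handler logic are illustrative assumptions, not code from this repository.

```python
# Hedged sketch of the Core API layer: an event-driven agent on the local runtime.
import asyncio
from dataclasses import dataclass

from autogen_core import AgentId, MessageContext, RoutedAgent, SingleThreadedAgentRuntime, message_handler


@dataclass
class EchoRequest:
    content: str  # hypothetical message type for illustration


class EchoAgent(RoutedAgent):
    def __init__(self) -> None:
        super().__init__("An agent that echoes messages.")

    @message_handler
    async def handle_echo(self, message: EchoRequest, ctx: MessageContext) -> EchoRequest:
        # Handlers are dispatched by message type; returning a value answers the sender.
        return EchoRequest(content=f"Echo: {message.content}")


async def main() -> None:
    runtime = SingleThreadedAgentRuntime()
    await EchoAgent.register(runtime, "echo", lambda: EchoAgent())
    runtime.start()
    result = await runtime.send_message(EchoRequest("hello"), AgentId("echo", "default"))
    print(result.content)
    await runtime.stop()


asyncio.run(main())
```

The AgentChat and Extensions layers build on this runtime, which is why the higher-level quickstart later in this diff needs only an `AssistantAgent` and a model client.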
diff --git a/python/packages/autogen-agentchat/src/autogen_agentchat/agents/_user_proxy_agent.py b/python/packages/autogen-agentchat/src/autogen_agentchat/agents/_user_proxy_agent.py
index 2ad9a24682f0..89e0b61a50ee 100644
--- a/python/packages/autogen-agentchat/src/autogen_agentchat/agents/_user_proxy_agent.py
+++ b/python/packages/autogen-agentchat/src/autogen_agentchat/agents/_user_proxy_agent.py
@@ -1,15 +1,17 @@
import asyncio
+import uuid
+from contextlib import contextmanager
+from contextvars import ContextVar
from inspect import iscoroutinefunction
-from typing import Awaitable, Callable, Optional, Sequence, Union, cast
+from typing import Any, AsyncGenerator, Awaitable, Callable, ClassVar, Generator, Optional, Sequence, Union, cast
from aioconsole import ainput # type: ignore
from autogen_core import CancellationToken
from ..base import Response
-from ..messages import ChatMessage, HandoffMessage, TextMessage
+from ..messages import AgentEvent, ChatMessage, HandoffMessage, TextMessage, UserInputRequestedEvent
from ._base_chat_agent import BaseChatAgent
-# Define input function types more precisely
SyncInputFunc = Callable[[str], str]
AsyncInputFunc = Callable[[str, Optional[CancellationToken]], Awaitable[str]]
InputFuncType = Union[SyncInputFunc, AsyncInputFunc]
@@ -109,6 +111,33 @@ async def cancellable_user_agent():
print(f"BaseException: {e}")
"""
+ class InputRequestContext:
+ def __init__(self) -> None:
+ raise RuntimeError(
+ "InputRequestContext cannot be instantiated. It is a static class that provides context management for user input requests."
+ )
+
+ _INPUT_REQUEST_CONTEXT_VAR: ClassVar[ContextVar[str]] = ContextVar("_INPUT_REQUEST_CONTEXT_VAR")
+
+ @classmethod
+ @contextmanager
+ def populate_context(cls, ctx: str) -> Generator[None, Any, None]:
+ """:meta private:"""
+ token = UserProxyAgent.InputRequestContext._INPUT_REQUEST_CONTEXT_VAR.set(ctx)
+ try:
+ yield
+ finally:
+ UserProxyAgent.InputRequestContext._INPUT_REQUEST_CONTEXT_VAR.reset(token)
+
+ @classmethod
+ def request_id(cls) -> str:
+ try:
+ return cls._INPUT_REQUEST_CONTEXT_VAR.get()
+ except LookupError as e:
+                raise RuntimeError(
+                    "InputRequestContext.request_id() must be called within the input callback of a UserProxyAgent."
+                ) from e
+
def __init__(
self,
name: str,
@@ -153,9 +182,15 @@ async def _get_input(self, prompt: str, cancellation_token: Optional[Cancellatio
except Exception as e:
raise RuntimeError(f"Failed to get user input: {str(e)}") from e
- async def on_messages(
- self, messages: Sequence[ChatMessage], cancellation_token: Optional[CancellationToken] = None
- ) -> Response:
+ async def on_messages(self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken) -> Response:
+ async for message in self.on_messages_stream(messages, cancellation_token):
+ if isinstance(message, Response):
+ return message
+ raise AssertionError("The stream should have returned the final result.")
+
+ async def on_messages_stream(
+ self, messages: Sequence[ChatMessage], cancellation_token: CancellationToken
+ ) -> AsyncGenerator[AgentEvent | ChatMessage | Response, None]:
"""Handle incoming messages by requesting user input."""
try:
# Check for handoff first
@@ -164,15 +199,18 @@ async def on_messages(
f"Handoff received from {handoff.source}. Enter your response: " if handoff else "Enter your response: "
)
- user_input = await self._get_input(prompt, cancellation_token)
+ request_id = str(uuid.uuid4())
+
+ input_requested_event = UserInputRequestedEvent(request_id=request_id, source=self.name)
+ yield input_requested_event
+ with UserProxyAgent.InputRequestContext.populate_context(request_id):
+ user_input = await self._get_input(prompt, cancellation_token)
# Return appropriate message type based on handoff presence
if handoff:
- return Response(
- chat_message=HandoffMessage(content=user_input, target=handoff.source, source=self.name)
- )
+ yield Response(chat_message=HandoffMessage(content=user_input, target=handoff.source, source=self.name))
else:
- return Response(chat_message=TextMessage(content=user_input, source=self.name))
+ yield Response(chat_message=TextMessage(content=user_input, source=self.name))
except asyncio.CancelledError:
raise
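For reviewers, a rough usage sketch (not part of this diff) of how a caller might consume the new `on_messages_stream` path and observe the `UserInputRequestedEvent` before the input callback runs. The canned input function and message content are assumptions for illustration.

```python
import asyncio

from autogen_agentchat.agents import UserProxyAgent
from autogen_agentchat.base import Response
from autogen_agentchat.messages import TextMessage, UserInputRequestedEvent
from autogen_core import CancellationToken


async def main() -> None:
    async def fake_input(prompt: str, cancellation_token: CancellationToken | None) -> str:
        # The request ID set by populate_context is visible inside the callback.
        print(f"Input requested (id={UserProxyAgent.InputRequestContext.request_id()}): {prompt}")
        return "Looks good to me."

    agent = UserProxyAgent("user", input_func=fake_input)
    messages = [TextMessage(content="Please review the draft.", source="assistant")]
    async for item in agent.on_messages_stream(messages, CancellationToken()):
        if isinstance(item, UserInputRequestedEvent):
            print(f"Event emitted before input: request_id={item.request_id}")
        elif isinstance(item, Response):
            print(f"Final response: {item.chat_message.content}")


asyncio.run(main())
```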
diff --git a/python/packages/autogen-agentchat/src/autogen_agentchat/messages.py b/python/packages/autogen-agentchat/src/autogen_agentchat/messages.py
index 07fc3123eb4c..21fb32d9d584 100644
--- a/python/packages/autogen-agentchat/src/autogen_agentchat/messages.py
+++ b/python/packages/autogen-agentchat/src/autogen_agentchat/messages.py
@@ -103,25 +103,40 @@ class ToolCallSummaryMessage(BaseChatMessage):
type: Literal["ToolCallSummaryMessage"] = "ToolCallSummaryMessage"
+class UserInputRequestedEvent(BaseAgentEvent):
+    """An event signaling that the user proxy has requested user input. Published prior to invoking the input callback."""
+
+ request_id: str
+ """Identifier for the user input request."""
+
+    content: Literal[""] = ""
+    """Empty content for compatibility with consumers expecting a content field."""
+
+ type: Literal["UserInputRequestedEvent"] = "UserInputRequestedEvent"
+
+
ChatMessage = Annotated[
TextMessage | MultiModalMessage | StopMessage | ToolCallSummaryMessage | HandoffMessage, Field(discriminator="type")
]
"""Messages for agent-to-agent communication only."""
-AgentEvent = Annotated[ToolCallRequestEvent | ToolCallExecutionEvent, Field(discriminator="type")]
+AgentEvent = Annotated[
+ ToolCallRequestEvent | ToolCallExecutionEvent | UserInputRequestedEvent, Field(discriminator="type")
+]
"""Events emitted by agents and teams when they work, not used for agent-to-agent communication."""
__all__ = [
+ "AgentEvent",
"BaseMessage",
- "TextMessage",
+ "ChatMessage",
+ "HandoffMessage",
"MultiModalMessage",
"StopMessage",
- "HandoffMessage",
- "ToolCallRequestEvent",
+ "TextMessage",
"ToolCallExecutionEvent",
+ "ToolCallRequestEvent",
"ToolCallSummaryMessage",
- "ChatMessage",
- "AgentEvent",
+ "UserInputRequestedEvent",
]
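Because `AgentEvent` is a Pydantic discriminated union keyed on `type`, the new event round-trips from plain data. A small sketch, assuming Pydantic v2's `TypeAdapter` and illustrative field values:

```python
from pydantic import TypeAdapter

from autogen_agentchat.messages import AgentEvent, UserInputRequestedEvent

# Deserialize by discriminator: the "type" field selects the concrete event class.
adapter = TypeAdapter(AgentEvent)
event = adapter.validate_python(
    {"type": "UserInputRequestedEvent", "request_id": "demo-request-id", "source": "user_proxy"}
)
assert isinstance(event, UserInputRequestedEvent)
print(event.model_dump())
```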
diff --git a/python/packages/autogen-agentchat/src/autogen_agentchat/ui/__init__.py b/python/packages/autogen-agentchat/src/autogen_agentchat/ui/__init__.py
index 65c4f1e07ad9..9cc0837c58c2 100644
--- a/python/packages/autogen-agentchat/src/autogen_agentchat/ui/__init__.py
+++ b/python/packages/autogen-agentchat/src/autogen_agentchat/ui/__init__.py
@@ -2,6 +2,6 @@
This module implements utility classes for formatting/printing agent messages.
"""
-from ._console import Console
+from ._console import Console, UserInputManager
-__all__ = ["Console"]
+__all__ = ["Console", "UserInputManager"]
diff --git a/python/packages/autogen-agentchat/src/autogen_agentchat/ui/_console.py b/python/packages/autogen-agentchat/src/autogen_agentchat/ui/_console.py
index 79d39d6add7f..767dc68d8b4e 100644
--- a/python/packages/autogen-agentchat/src/autogen_agentchat/ui/_console.py
+++ b/python/packages/autogen-agentchat/src/autogen_agentchat/ui/_console.py
@@ -1,14 +1,17 @@
+import asyncio
import os
import sys
import time
-from typing import AsyncGenerator, List, Optional, TypeVar, cast
+from inspect import iscoroutinefunction
+from typing import AsyncGenerator, Awaitable, Callable, Dict, List, Optional, TypeVar, Union, cast
from aioconsole import aprint # type: ignore
-from autogen_core import Image
+from autogen_core import CancellationToken, Image
from autogen_core.models import RequestUsage
+from autogen_agentchat.agents import UserProxyAgent
from autogen_agentchat.base import Response, TaskResult
-from autogen_agentchat.messages import AgentEvent, ChatMessage, MultiModalMessage
+from autogen_agentchat.messages import AgentEvent, ChatMessage, MultiModalMessage, UserInputRequestedEvent
def _is_running_in_iterm() -> bool:
@@ -19,25 +22,76 @@ def _is_output_a_tty() -> bool:
return sys.stdout.isatty()
+SyncInputFunc = Callable[[str], str]
+AsyncInputFunc = Callable[[str, Optional[CancellationToken]], Awaitable[str]]
+InputFuncType = Union[SyncInputFunc, AsyncInputFunc]
+
T = TypeVar("T", bound=TaskResult | Response)
+class UserInputManager:
+ def __init__(self, callback: InputFuncType):
+ self.input_events: Dict[str, asyncio.Event] = {}
+ self.callback = callback
+
+ def get_wrapped_callback(self) -> AsyncInputFunc:
+ async def user_input_func_wrapper(prompt: str, cancellation_token: Optional[CancellationToken]) -> str:
+            # Get the request ID of the current input request from the context.
+            request_id = UserProxyAgent.InputRequestContext.request_id()
+            # Look up the event for this request ID, creating and storing it if it
+            # doesn't exist yet, then wait for the console to signal it.
+ if request_id in self.input_events:
+ event = self.input_events[request_id]
+ else:
+ event = asyncio.Event()
+ self.input_events[request_id] = event
+
+ await event.wait()
+
+ del self.input_events[request_id]
+
+ if iscoroutinefunction(self.callback):
+ # Cast to AsyncInputFunc for proper typing
+ async_func = cast(AsyncInputFunc, self.callback)
+ return await async_func(prompt, cancellation_token)
+ else:
+ # Cast to SyncInputFunc for proper typing
+ sync_func = cast(SyncInputFunc, self.callback)
+ loop = asyncio.get_event_loop()
+ return await loop.run_in_executor(None, sync_func, prompt)
+
+ return user_input_func_wrapper
+
+ def notify_event_received(self, request_id: str) -> None:
+ if request_id in self.input_events:
+ self.input_events[request_id].set()
+ else:
+ event = asyncio.Event()
+ self.input_events[request_id] = event
+
+
async def Console(
stream: AsyncGenerator[AgentEvent | ChatMessage | T, None],
*,
no_inline_images: bool = False,
- output_stats: bool = True,
+ output_stats: bool = False,
+ user_input_manager: UserInputManager | None = None,
) -> T:
"""
Consumes the message stream from :meth:`~autogen_agentchat.base.TaskRunner.run_stream`
or :meth:`~autogen_agentchat.base.ChatAgent.on_messages_stream` and renders the messages to the console.
Returns the last processed TaskResult or Response.
+ .. note::
+
+ `output_stats` is experimental and the stats may not be accurate.
+ It will be improved in future releases.
+
Args:
stream (AsyncGenerator[AgentEvent | ChatMessage | TaskResult, None] | AsyncGenerator[AgentEvent | ChatMessage | Response, None]): Message stream to render.
This can be from :meth:`~autogen_agentchat.base.TaskRunner.run_stream` or :meth:`~autogen_agentchat.base.ChatAgent.on_messages_stream`.
        no_inline_images (bool, optional): If the terminal is iTerm2, images will be rendered inline. Set this to True to disable that behavior. Defaults to False.
- output_stats (bool, optional): If True, will output a summary of the messages and inline token usage info. Defaults to True.
+ output_stats (bool, optional): (Experimental) If True, will output a summary of the messages and inline token usage info. Defaults to False.
Returns:
last_processed: A :class:`~autogen_agentchat.base.TaskResult` if the stream is from :meth:`~autogen_agentchat.base.TaskRunner.run_stream`
@@ -62,6 +116,7 @@ async def Console(
f"Duration: {duration:.2f} seconds\n"
)
await aprint(output, end="")
+
# mypy ignore
last_processed = message # type: ignore
@@ -91,9 +146,13 @@ async def Console(
f"Duration: {duration:.2f} seconds\n"
)
await aprint(output, end="")
+
# mypy ignore
last_processed = message # type: ignore
-
+            # Don't print UserInputRequestedEvent messages; they are only used to signal that user input has been requested.
+ elif isinstance(message, UserInputRequestedEvent):
+ if user_input_manager is not None:
+ user_input_manager.notify_event_received(message.request_id)
else:
# Cast required for mypy to be happy
message = cast(AgentEvent | ChatMessage, message) # type: ignore
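Putting the console changes together, a hedged end-to-end sketch (not part of this diff) of how `UserInputManager` is intended to be wired: the user proxy receives the wrapped callback, so input is deferred until `Console` sees the matching `UserInputRequestedEvent`. The model name, termination phrase, and task text are assumptions.

```python
import asyncio

from autogen_agentchat.agents import AssistantAgent, UserProxyAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import RoundRobinGroupChat
from autogen_agentchat.ui import Console, UserInputManager
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    # The manager wraps a plain sync input function; Console releases each
    # pending request when the corresponding UserInputRequestedEvent arrives.
    input_manager = UserInputManager(callback=input)

    # Requires a configured OpenAI API key for the model client.
    assistant = AssistantAgent("assistant", OpenAIChatCompletionClient(model="gpt-4o"))
    user = UserProxyAgent("user", input_func=input_manager.get_wrapped_callback())
    team = RoundRobinGroupChat([assistant, user], termination_condition=TextMentionTermination("APPROVE"))

    await Console(team.run_stream(task="Draft a short release note."), user_input_manager=input_manager)


asyncio.run(main())
```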
diff --git a/python/packages/autogen-core/docs/src/index.md b/python/packages/autogen-core/docs/src/index.md
index e62b398dce58..f19fd42490c3 100644
--- a/python/packages/autogen-core/docs/src/index.md
+++ b/python/packages/autogen-core/docs/src/index.md
@@ -84,7 +84,7 @@ Built on AgentChat.
```bash
pip install autogenstudio
-autogenstudio ui --port 8080
+autogenstudio ui --port 8080 --appdir ./myapp
```
+++
@@ -109,7 +109,7 @@ Get Started
A programming framework for building conversational single and multi-agent applications.
-Built on Core.
+Built on Core. Requires Python 3.10+.
```python
# pip install -U "autogen-agentchat" "autogen-ext[openai]"
@@ -119,7 +119,7 @@ from autogen_ext.models.openai import OpenAIChatCompletionClient
async def main() -> None:
agent = AssistantAgent("assistant", OpenAIChatCompletionClient(model="gpt-4o"))
- print(agent.run(task="Say 'Hello World!'"))
+ print(await agent.run(task="Say 'Hello World!'"))
asyncio.run(main())
```
diff --git a/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/tutorial/teams.ipynb b/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/tutorial/teams.ipynb
index ce0f39664158..d12a273edbda 100644
--- a/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/tutorial/teams.ipynb
+++ b/python/packages/autogen-core/docs/src/user-guide/agentchat-user-guide/tutorial/teams.ipynb
@@ -83,7 +83,7 @@
"source": [
"## Running a Team\n",
"\n",
- "Let's calls the {py:meth}`~autogen_agentchat.teams.BaseGroupChat.run` method\n",
+ "Let's call the {py:meth}`~autogen_agentchat.teams.BaseGroupChat.run` method\n",
"to start the team with a task."
]
},
diff --git a/python/packages/autogen-core/docs/src/user-guide/autogenstudio-user-guide/index.md b/python/packages/autogen-core/docs/src/user-guide/autogenstudio-user-guide/index.md
index 4582657bc24e..09de3f9ac14f 100644
--- a/python/packages/autogen-core/docs/src/user-guide/autogenstudio-user-guide/index.md
+++ b/python/packages/autogen-core/docs/src/user-guide/autogenstudio-user-guide/index.md
@@ -49,15 +49,6 @@ AutoGen Studio offers four main interfaces to help you build and manage multi-ag
- Setup and test endpoints based on a team configuration
- Run teams in a docker container
-This revision improves clarity by:
-
-- Organizing capabilities into clearly numbered sections
-- Using more precise language
-- Breaking down complex features into digestible points
-- Maintaining consistent formatting and structure
-- Eliminating awkward phrasing and grammatical issues
-- Adding context about how each interface serves users
-
### Roadmap
Review the project roadmap and issues [here](https://github.com/microsoft/autogen/issues/4006).
diff --git a/python/packages/autogen-core/docs/src/user-guide/autogenstudio-user-guide/installation.md b/python/packages/autogen-core/docs/src/user-guide/autogenstudio-user-guide/installation.md
index 2ebc167213d2..2ca91af58251 100644
--- a/python/packages/autogen-core/docs/src/user-guide/autogenstudio-user-guide/installation.md
+++ b/python/packages/autogen-core/docs/src/user-guide/autogenstudio-user-guide/installation.md
@@ -9,35 +9,83 @@ myst:
There are two ways to install AutoGen Studio - from PyPi or from source. We **recommend installing from PyPi** unless you plan to modify the source code.
-1. **Install from PyPi**
+## Create a Virtual Environment (Recommended)
- We recommend using a virtual environment (e.g., conda) to avoid conflicts with existing Python packages. With Python 3.10 or newer active in your virtual environment, use pip to install AutoGen Studio:
+We recommend using a virtual environment as this will ensure that the dependencies for AutoGen Studio are isolated from the rest of your system.
- ```bash
- pip install -U autogenstudio
- ```
+``````{tab-set}
-2. **Install from Source**
+`````{tab-item} venv
- > Note: This approach requires some familiarity with building interfaces in React.
+Create and activate:
- If you prefer to install from source, ensure you have Python 3.10+ and Node.js (version above 14.15.0) installed. Here's how you get started:
+```bash
+python3 -m venv .venv
+source .venv/bin/activate
+```
+
+To deactivate later, run:
+
+```bash
+deactivate
+```
+
+`````
+
+`````{tab-item} conda
+
+[Install Conda](https://docs.conda.io/projects/conda/en/stable/user-guide/install/index.html) if you have not already.
+
+
+Create and activate:
+
+```bash
+conda create -n autogen python=3.10
+conda activate autogen
+```
+
+To deactivate later, run:
+
+```bash
+conda deactivate
+```
+
+
+`````
+
+
+
+``````
+
+## Install Using pip (Recommended)
+
+You can install AutoGen Studio using pip, the Python package manager.
+
+```bash
+pip install -U autogenstudio
+```
+
+### Install from Source
+
+> Note: This approach requires some familiarity with building interfaces in React.
+
+If you prefer to install from source, ensure you have Python 3.10+ and Node.js (version 14.15.0 or above) installed. Here's how to get started:
- - Clone the AutoGen Studio repository and install its Python dependencies:
+- Clone the AutoGen Studio repository and install its Python dependencies:
- ```bash
- pip install -e .
- ```
+ ```bash
+ pip install -e .
+ ```
- - Navigate to the `samples/apps/autogen-studio/frontend` directory, install dependencies, and build the UI:
+- Navigate to the `samples/apps/autogen-studio/frontend` directory, install dependencies, and build the UI:
- ```bash
- npm install -g gatsby-cli
- npm install --global yarn
- cd frontend
- yarn install
- yarn build
- ```
+ ```bash
+ npm install -g gatsby-cli
+ npm install --global yarn
+ cd frontend
+ yarn install
+ yarn build
+ ```
For Windows users, you may need alternative commands to build the frontend.
@@ -47,7 +95,7 @@ For Windows users, to build the frontend, you may need alternative commands to b
```
-### Running the Application
+## Running the Application
Once installed, run the web UI by entering the following in a terminal:
@@ -62,8 +110,8 @@ AutoGen Studio also takes several parameters to customize the application:
- `--host