feat: Allow multiple markdown streams per message #105
**Breaking Changes:**

- Replaced `shiny-chat-append-message` and `shiny-chat-append-message-chunk` events with a new event model
- Changed `ChatMessage` from a LitElement to a vanilla HTMLElement for manual DOM control

**New Event System:**

- `shiny-chat-message`: Complete message with full content
- `shiny-chat-message-start`: Initialize a new stream with `streamId`
- `shiny-chat-message-append`: Append/replace content in an existing stream
- `shiny-chat-message-end`: Finalize a stream
- `shiny-chat-input-enable`: Separate event to re-enable the input

**Key Improvements:**

- **Stable DOM**: Finalized `shiny-markdown-stream` elements never re-render
- **Multiple streams per message**: Each assistant message can contain multiple stream segments
- **Better loading UX**: Dedicated loading indicator component instead of empty message placeholders
- **Cleaner lifecycle management**: Stream state tracked by `streamId`; input enabling decoupled from the message lifecycle

**Implementation Details:**

- `ChatMessage` now acts as a container for multiple `shiny-markdown-stream` elements
- Server-side R functions updated to use the new event payloads
- Active stream tracking with automatic ID generation
- Light-DOM-only architecture (no shadow DOM)

This enables more robust streaming chat experiences with better performance and extensibility for multiple concurrent streams.
@cpsievert I planned to bring these changes to the Python shinychat component as well, but quickly ran into legacy code (in particular, the message checkpointing approach from earlier). Personally, I'd advocate for aligning the two approaches and deprecating or removing the nested streaming approach that's available in the Python version. In the future, I think we'll end up with a system that looks quite similar in the user-facing API but that isn't based on restoring the

In short, do you think it would make sense for you to take on the Python updates when you tackle the tool-calling UI updates, too? I think you'll be able to move faster through the codebase than I will, and there's a lot of overlap (at least in code to touch) between the two updates.
js/src/chat/chat.ts
```diff
-type Message = Omit<MessageAttrs, "data_role"> & {
-  role: MessageAttrs["data_role"]
+type ChatMessageAppendPayload = {
+  streamId: string
```
We'll probably need to make `streamId` optional to support `chat_append_message(chunk = TRUE)`, which won't have a `streamId`.
Alternatively, I was considering something like `with_streaming_message()` that would wrap the lifecycle, which would also be useful for tool calling, etc. With that function, you'd be able to do something like this:
```r
with_streaming_message({
  # Use low-level chat_append_message() to temporarily set a progress message
  chat_append_message(id, list(role = "assistant", content = "_Thinking..._ "))
  await(async_sleep(1))
  # Clear the progress message
  chat_append_message(
    id,
    list(role = "assistant", content = ""),
    operation = "replace"
  )
})
```
Anyway, I'm realizing this touches too many different places. I think we'll likely end up needing to rethink `chat_append_message()` as well.
I went with making `streamId` optional, so the example in the docs works as written, i.e. this does what you'd generally expect:
```r
# Use low-level chat_append_message() to temporarily set a progress message
chat_append_message(id, list(role = "assistant", content = "_Thinking..._ "))
await(async_sleep(2))
# Clear the progress message
chat_append_message(
  id,
  list(role = "assistant", content = ""),
  operation = "replace"
)
for (chunk in strsplit(sample(responses, 1), "")[[1]]) {
  yield(chunk)
  await(async_sleep(0.02))
}
```
I still think we might want to take another look at the API design around `chat_append_message()`, but for now this change keeps the shinychat API backwards compatible with the previous approach while leaving room for future, more granular updates.
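To make the optional-`streamId` behavior concrete, here's a minimal sketch of how an append payload without an explicit `streamId` could fall back to the active stream. The type shape and function names (`resolveStreamId`, `activeStreamId`) are illustrative assumptions, not the actual shinychat implementation:

```typescript
// Hypothetical payload type: streamId is optional so that
// chat_append_message(chunk = TRUE), which carries no streamId, still works.
type ChatMessageAppendPayload = {
  streamId?: string;
  content: string;
  operation?: "append" | "replace";
};

// Track the active (most recently started) stream so payloads without an
// explicit streamId fall back to it.
let activeStreamId: string | null = null;

function resolveStreamId(payload: ChatMessageAppendPayload): string {
  if (payload.streamId) return payload.streamId;
  if (activeStreamId === null) {
    // No explicit or active stream: auto-generate an ID.
    activeStreamId = `stream-${Date.now()}`;
  }
  return activeStreamId;
}
```

With this fallback, chunked appends from the legacy API all land in the same auto-generated stream, while the new API can address streams explicitly.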
```r
  session = getDefaultReactiveDomain()
) {
  check_active_session(session)
  if (!is_string(role)) {
```
Previously, we were checking that `role` is either `"assistant"` or `"user"`. Now that we're relying more on roles, I've relaxed this constraint. By doing this, we can allow for multiple chat participants in a single chat UI, e.g. each with their own icons, styles, etc.

On the TypeScript side, non-user roles get a robot icon by default. So, in general, we carve out user messages for special styles and treat all others as variants of a system message.
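The role-based default described above could be sketched like this (the function name and icon constant are assumptions for illustration, not the exact source):

```typescript
// Placeholder for the robot SVG; the real component pulls it from ICONS.
const DEFAULT_ROBOT_ICON = "<svg><!-- robot --></svg>";

// User messages get their own styling and no icon; every other role is
// treated as a variant of a system message and defaults to the robot icon
// unless an explicit icon was provided.
function iconForMessage(role: string, icon?: string): string | null {
  if (role === "user") return null;
  return icon ?? DEFAULT_ROBOT_ICON;
}
```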
```ts
import type { HtmlDep } from "../utils/_utils"
import { icons as ICONS } from "../utils/_icons"
```
I moved all of the icons out into a utils folder. This avoids duplication and extremely long source code lines.
```diff
 class ChatMessageLoading extends ChatMessage {
   constructor() {
     super()
-    this.role = "user" // Always set role to user for this subclass
-    this.contentType = "semi-markdown" // User messages are always semi-markdown
+    this.role = "loading"
+    this.icon = ICONS.dotsFade
   }
 }
```
We now have a special loading message element, which simplifies the icon logic considerably. We no longer need to rely on the content property; instead, we add the loading element when the user message is submitted and remove it when any kind of new message comes back from the server.
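That add-on-submit / remove-on-response lifecycle can be modeled with a plain array standing in for the DOM (all names here are illustrative, not the component's actual code):

```typescript
type Entry = { role: string; content: string };

// Stand-in for the chat's message container.
const messages: Entry[] = [];

// Submitting a user message immediately shows the loading element.
function submitUserMessage(text: string): void {
  messages.push({ role: "user", content: text });
  messages.push({ role: "loading", content: "" });
}

// Any new message from the server removes the loading element first,
// then appends the real content.
function onServerMessage(entry: Entry): void {
  const i = messages.findIndex((m) => m.role === "loading");
  if (i !== -1) messages.splice(i, 1);
  messages.push(entry);
}
```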
(The unified diff looks weird here, btw)
```r
if (do_start_stream) {
  chat_append_("", chunk = "start", icon = icon)
  do_start_stream <- FALSE
}
```
We delay the first start message until we start getting streaming results back. This keeps the loading message in the chat until the actual streaming begins. (Previously we'd send the first message immediately.)
The goal of this PR is to allow `shiny-chat-message` to hold more than one markdown stream and to route message updates from the server to the correct markdown stream.

**Breaking Changes**

- Replaced `shiny-chat-append-message` and `shiny-chat-append-message-chunk` events with a new event model
- Changed `ChatMessage` from a LitElement to a vanilla HTMLElement for manual DOM control

**New Event System**

Previously, we had two events that carried the same data payload, except that the `-chunk` event type had a `chunk_type: "start" | "end" | null`. With this PR, we separate these into a series of more specific events with payloads that directly match the data required by each event.

- `shiny-chat-message`: Complete message with full content
- `shiny-chat-message-start`: Initialize a new stream with `streamId`, along with an associated `role`, `content_type`, and `icon`
- `shiny-chat-message-append`: Append/replace content in an existing stream
- `shiny-chat-message-end`: Finalize a stream
- `shiny-chat-input-enable`: Separate event to re-enable the input

**Key Improvements**

- **Stable DOM**: Finalized `shiny-markdown-stream` elements never re-render
- Stream state is tracked by `streamId`; input enabling is decoupled from the message lifecycle

**Implementation Details**

- `ChatMessage` now acts as a container for multiple `shiny-markdown-stream` elements

Note that we're using `streamId` in the stream start/append/end messages, in part because it's a little more convenient even if not required. In the future, though, I can see us using stream IDs as a way to enable nested streams, e.g. streaming into tool requests.