## Linked issues
closes: #1967, #1970
## Details
- Added streaming support for Python with an associated sample. As
ChefBot does not exist in Python, streaming is demonstrated through the
ListBot instead.
- Added Powered by AI features for the final chunk (feedback loop,
citations, sensitivity label, Generated by AI label)
- Designed a custom `PromptCompletionModelEmitter` class to manage
streaming events and handlers
- **Feedback Loop:** The flag is passed in slightly differently from JS
and C#, because the `AI` class was never passed in as a method param for
`Planner.continue_task`. Adding it now as an optional param would cause
compilation errors for all developers with custom planners (the parent
class would need to be updated), and it also introduces a couple of
cyclical dependencies. Instead of internally piping this flag through,
it is now exposed as an _option_ on the `ActionPlanner`. This is (a)
more obvious to configure, (b) does not introduce a sudden breaking
change for non-streaming developers, and (c) gives the developer more
control. Although this flag can deviate from the one set in the `AI`
class, the two flows can never run at once, so there won't be any
conflicts.
- **Citations and Sensitivity Label**: This works slightly differently
from the previous non-streaming Plan flow. Citations and their
respective sensitivity labels are added per each text chunk queued.
However, they are only rendered in the final message (once the full
message has been received). Rather than exposing the
`SensitivityUsageInfo` object as an override on the
`PredictedSayCommand`, the label can now be set directly as `usageInfo`
in the `AIEntity` object, along with the AIGenerated label and the
citations.
Additional items for parity:
- Added a temporary 1.5 second buffer to adhere to the backend
service's 1 RPS requirement. Consequently, if the message is small, it
will not look like it is being streamed. A recommended command to
visualize the feature is "tell me a story and render this as a large
text chunk".
- Added reject/catch handling for errors
- Added entities metadata to match GA requirements. This will log a few
warnings from the Botbuilder side until it is added there in a few
months.
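The 1.5 second buffer described above can be pictured as a simple throttle: chunks queued faster than the interval are coalesced and flushed together at most once per interval, which is also why a small message does not visibly stream. The sketch below is illustrative only (the class and method names are hypothetical, not the SDK's internal implementation):

```python
import time

class ChunkThrottle:
    """Hypothetical sketch of the 1.5 s buffering described above.

    Coalesces queued text chunks and flushes at most once per interval,
    so sends never exceed roughly 1 RPS. The real SDK's buffering is
    internal and may differ.
    """

    def __init__(self, send, interval: float = 1.5, clock=time.monotonic):
        self._send = send          # callable that performs the actual send
        self._interval = interval  # minimum seconds between sends
        self._clock = clock        # injectable for deterministic testing
        self._pending = ""         # text accumulated since the last flush
        self._last_flush = float("-inf")

    def queue(self, chunk: str) -> None:
        self._pending += chunk
        # Only flush if the rate limit allows it; otherwise keep buffering.
        if self._clock() - self._last_flush >= self._interval:
            self.flush()

    def flush(self) -> None:
        if self._pending:
            self._send(self._pending)
            self._pending = ""
            self._last_flush = self._clock()

sent = []
t = [0.0]
throttle = ChunkThrottle(sent.append, interval=1.5, clock=lambda: t[0])
throttle.queue("Once upon ")   # t=0.0: nothing sent yet, flushes immediately
t[0] = 0.5
throttle.queue("a time, ")     # within the 1.5 s window: buffered
t[0] = 1.0
throttle.queue("a bot ")       # still buffered
t[0] = 2.0
throttle.queue("streamed.")    # window elapsed: buffered text flushed as one send
print(sent)  # ['Once upon ', 'a time, a bot streamed.']
```

A short message that fits in one flush arrives as a single update, which is why the "large text chunk" prompt is recommended for visualizing the feature.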
**screenshots**:

## Attestation Checklist
- [x] My code follows the style guidelines of this project
- I have checked for/fixed spelling, linting, and other errors
- I have commented my code for clarity
- I have made corresponding changes to the documentation (updating the
doc strings in the code is sufficient)
- My changes generate no new warnings
- I have added tests that validate my changes and provide sufficient
test coverage. I have tested with:
- Local testing
- E2E testing in Teams
- New and existing unit tests pass locally with my changes
---------
Co-authored-by: lilydu <[email protected]>
The `StreamingResponse` class is the helper class for streaming responses to the client. The class is used to send a series of updates to the client in a single response. If you are using your own custom model, you can directly instantiate and manage this class to stream responses.
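To make the queue-then-end pattern concrete, here is a toy stand-in for the flow the paragraph describes: a series of chunks accumulate into one logical message, and ending the stream finalizes it and blocks further updates. This is an illustrative sketch, not the library's `StreamingResponse` class; the real method names and signatures should be checked against the SDK.

```python
class ToyStreamingResponse:
    """Illustrative stand-in for the streaming-response pattern above.

    Chunks accumulate into a single logical message; end_stream()
    finalizes it, after which no further updates are allowed. Names
    and behavior are simplified, not the SDK's actual class.
    """

    def __init__(self) -> None:
        self._chunks: list[str] = []
        self._ended = False

    def queue_text_chunk(self, text: str) -> None:
        if self._ended:
            raise RuntimeError("stream already ended; no further updates allowed")
        self._chunks.append(text)

    def end_stream(self) -> str:
        # Finalize: the full message is the concatenation of all chunks.
        self._ended = True
        return "".join(self._chunks)

resp = ToyStreamingResponse()
for piece in ["Hello", ", ", "world"]:
    resp.queue_text_chunk(piece)
final = resp.end_stream()
print(final)  # Hello, world
```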
Once `endStream()` is called, the stream is considered ended and no further updates can be sent.

### Current Limitations:
- Streaming is only available in 1:1 chats.
- SendActivity requests are restricted to 1 RPS. Our SDK buffers to 1.5 seconds.
- For Powered by AI features, Citations, Sensitivity Label, Feedback Loop and Generated by AI Label are supported in the final chunk.
- Citations are set per each text chunk queued.
- Only rich text can be streamed.
- Due to future GA protocol changes, the `channelData` metadata must be included in the `entities` object as well.
- Only one informative message can be set. This is reused for each message.
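The per-chunk citation behavior noted above can be pictured as follows: every queued chunk may carry its own citations, but they are only surfaced on the entity attached to the final message. The helper and field names below are hypothetical, modeled loosely on the `AIEntity` description in this PR rather than the SDK's exact schema:

```python
def build_final_entity(chunks, usage_info=None):
    """Aggregate citations from every queued chunk into the single
    entity attached to the final message. Hypothetical helper; the
    real SDK assembles this internally, and field names may differ."""
    citations = []
    for chunk in chunks:
        citations.extend(chunk.get("citations", []))
    return {
        "additionalType": ["AIGeneratedContent"],  # Generated by AI label
        "citation": citations,                     # rendered only on the final message
        "usageInfo": usage_info,                   # sensitivity label, if any
    }

chunks = [
    {"text": "Fact A.", "citations": [{"position": 1, "title": "Doc A"}]},
    {"text": "Fact B.", "citations": []},
    {"text": "Fact C.", "citations": [{"position": 2, "title": "Doc C"}]},
]
entity = build_final_entity(chunks, usage_info={"name": "Confidential"})
print(len(entity["citation"]))  # 2
```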
#### Optional additions:
- Set the informative message in the `ActionPlanner` declaration via the `StartStreamingMessage` config.
- As previously, set the feedback loop toggle in the `AIOptions` object in the `app` declaration and specify a handler.
- For *Python* specifically, the toggle also needs to be set in the `ActionPlannerOptions` object.
- Set attachments in the final chunk via the `EndStreamHandler` in the `ActionPlanner` declaration.
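As a rough sketch of the Python-specific step above (setting the feedback loop toggle in both option objects so they stay in sync), the shape of the configuration might look like the following. The option names (`enable_feedback_loop`, `start_streaming_message`, `end_stream_handler`) and the dataclasses are illustrative stand-ins, not the SDK's actual option classes:

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Illustrative option objects; the real SDK's AIOptions and
# ActionPlannerOptions have their own definitions.

@dataclass
class ToyAIOptions:
    enable_feedback_loop: bool = False

@dataclass
class ToyActionPlannerOptions:
    enable_feedback_loop: bool = False             # Python-specific: mirror AIOptions
    start_streaming_message: Optional[str] = None  # informative message shown first
    end_stream_handler: Optional[Callable] = None  # attach cards etc. in final chunk

ai_options = ToyAIOptions(enable_feedback_loop=True)
planner_options = ToyActionPlannerOptions(
    enable_feedback_loop=True,  # keep in sync with AIOptions for Python
    start_streaming_message="Thinking...",
)
print(ai_options.enable_feedback_loop == planner_options.enable_feedback_loop)  # True
```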
#### C#