
Commit 1e61623

[JS] fix: CardGazer sample (#2059)
## Linked issues

#2039 #minor

<img width="935" alt="image" src="https://github.com/user-attachments/assets/0018973c-9751-46d5-abd1-b6cbe889d9cc">

## Details

- `OpenAIModel.ts` docstring cleanup
- `PromptManager` - change the order of the tools and user input sections
- `UserInputMessage` - don't send the message if its content is empty
- Update CardGazer to use gpt-4o-mini and the tools augmentation
- Update the CardGazer sample to improve the experience
- Add a note in the README for the Bot SSO sample

## Attestation Checklist

- [x] My code follows the style guidelines of this project
  - I have checked for/fixed spelling, linting, and other errors
  - I have commented my code for clarity
  - I have made corresponding changes to the documentation (updating the doc strings in the code is sufficient)
  - My changes generate no new warnings
  - I have added tests that validate my changes and provide sufficient test coverage. I have tested with:
    - Local testing
    - E2E testing in Teams
  - New and existing unit tests pass locally with my changes

### Additional information

> Feel free to add other relevant information below

---------

Co-authored-by: Corina Gum <>
1 parent 4c6abf7 commit 1e61623

File tree

11 files changed: +51 −32 lines

js/packages/teams-ai/src/prompts/PromptManager.ts

Lines changed: 5 additions & 5 deletions

```diff
@@ -335,17 +335,17 @@ export class PromptManager implements PromptFunctions {
             );
         }

+        if (template.config.augmentation && template.config.augmentation.augmentation_type === 'tools') {
+            const includeHistory: boolean = template.config.completion.include_history;
+            const historyVariable = includeHistory ? `conversation.${name}_history` : 'temp.${name}_history';
+            sections.push(new ActionOutputMessage(historyVariable));
+        }
         // Include user input
         if (template.config.completion.include_images) {
             sections.push(new UserInputMessage(this.options.max_input_tokens));
         } else if (template.config.completion.include_input) {
             sections.push(new UserMessage('{{$temp.input}}', this.options.max_input_tokens));
         }
-        if (template.config.augmentation && template.config.augmentation.augmentation_type === 'tools') {
-            const includeHistory: boolean = template.config.completion.include_history;
-            const historyVariable = includeHistory ? `conversation.${name}_history` : 'temp.${name}_history';
-            sections.push(new ActionOutputMessage(historyVariable));
-        }

         // Create prompt
         template.prompt = new Prompt(sections);
```
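The change above moves the tools-augmentation history section so it is appended before the user input sections rather than after them: the model now sees prior action/tool output first and the user's message last. A minimal sketch of the resulting ordering, using hypothetical string labels rather than the SDK's actual section classes:

```typescript
// Hypothetical sketch: labels stand in for the SDK's prompt section objects.
// Mirrors the reordered logic in PromptManager.updatePrompt.
function buildSectionOrder(useToolsAugmentation: boolean, includeInput: boolean): string[] {
    const sections: string[] = ['system'];
    // New ordering: action output history is pushed BEFORE user input.
    if (useToolsAugmentation) {
        sections.push('actionOutputHistory');
    }
    if (includeInput) {
        sections.push('userInput');
    }
    return sections;
}
```

With tools augmentation and input both enabled, the user's message ends up as the final section of the prompt, which is the ordering chat-completions models generally expect.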

js/packages/teams-ai/src/prompts/UserInputMessage.ts

Lines changed: 8 additions & 2 deletions

```diff
@@ -81,7 +81,9 @@ export class UserInputMessage extends PromptSectionBase {
         const images = inputFiles.filter((f) => f.contentType.startsWith('image/'));
         for (const image of images) {
             // Check for budget to add image
-            // TODO: This accounts for low detail images but not high detail images.
+            // This accounts for low detail images but not high detail images.
+            // https://platform.openai.com/docs/guides/vision
+            // low res mode defaults to a 512x512px image which is budgeted at 85 tokens.
             // Additional work is needed to account for high detail images.
             if (budget < 85) {
                 break;
@@ -99,7 +101,11 @@ export class UserInputMessage extends PromptSectionBase {
             budget -= 85;
         }

+        const output = [];
+        if (message.content!.length > 0) {
+            output.push(message);
+        }
         // Return output
-        return { output: [message], length, tooLong: false };
+        return { output, length, tooLong: false };
     }
 }
```
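The two fixes above are: each low-detail image costs a flat 85 tokens against the remaining budget, and a message with empty content is no longer emitted at all. A simplified, hypothetical sketch of that logic (not the SDK's actual types):

```typescript
// Hypothetical stand-in for the SDK's message content parts.
interface Part {
    type: 'text' | 'image';
}

// Attach images while budget remains; each low-detail image is budgeted at 85 tokens.
function buildUserContent(text: string, imageCount: number, budget: number): Part[] {
    const content: Part[] = [];
    if (text.length > 0) {
        content.push({ type: 'text' });
    }
    for (let i = 0; i < imageCount; i++) {
        if (budget < 85) {
            break; // not enough budget for another low-detail image
        }
        content.push({ type: 'image' });
        budget -= 85;
    }
    return content;
}

// Mirrors the PR's guard: only include the message if it has content.
function renderOutput(content: Part[]): Part[][] {
    return content.length > 0 ? [content] : [];
}
```

Before this fix, an empty message was still returned in `output`, which could cause a request containing a contentless user message to be sent to the model.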

js/samples/03.ai-concepts/c.actionMapping-lightBot/src/prompts/tools/config.json

Lines changed: 2 additions & 2 deletions

```diff
@@ -3,7 +3,7 @@
     "description": "A bot that can turn the lights on and off",
     "type": "completion",
     "completion": {
-        "model": "o1-preview",
+        "model": "gpt-4o",
         "completion_type": "chat",
         "include_history": true,
         "include_input": true,
@@ -14,6 +14,6 @@
         "stop_sequences": []
     },
     "augmentation": {
-        "augmentation_type": "monologue"
+        "augmentation_type": "tools"
     }
 }
```

js/samples/04.ai-apps/c.vision-cardGazer/README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -18,7 +18,7 @@ _Table of contents_

 ## Interacting with the bot

-You can interact with this bot by sending it a message with an image or a doodle. Be sure to add a message like "Turn this image into an Adaptive Card". As an example, you can use the image included in the `./assets` folder. Large resolution images will not work due to the limitations of the AI model.
+You can interact with this bot by sending it a message with an image or a doodle. Be sure to add a message like "Turn this image into an Adaptive Card". As an example, you can use the image included in the `./assets` folder. Large resolution images will not work due to the limitations of the AI model. Since the TeamsAttachmentDownloader is only using low resolution images, your image will be converted to 512px by 512px and budgeted at 85 tokens.

 ## Setting up the sample
```
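The README note above follows from the flat-rate budgeting: in low-detail mode each image is charged 85 tokens regardless of its original resolution, so the number of images that fit in a given token budget is simple arithmetic. A small illustrative sketch (the constant comes from the OpenAI vision docs cited in the diff; the function name is ours):

```typescript
// Assumed figure from the OpenAI vision guide: low-detail mode resizes the
// image to 512x512 and charges a flat 85 tokens per image.
const LOW_DETAIL_IMAGE_TOKENS = 85;

// How many low-detail images fit in the remaining prompt budget.
function maxLowDetailImages(tokenBudget: number): number {
    return Math.floor(tokenBudget / LOW_DETAIL_IMAGE_TOKENS);
}
```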

js/samples/04.ai-apps/c.vision-cardGazer/src/index.ts

Lines changed: 20 additions & 7 deletions

```diff
@@ -87,11 +87,11 @@ if (!process.env.OPENAI_KEY && !process.env.AZURE_OPENAI_KEY) {
 const model = new OpenAIModel({
     // OpenAI Support
     apiKey: process.env.OPENAI_KEY!,
-    defaultModel: 'gpt-4-vision-preview',
+    defaultModel: 'gpt-4o-mini',

     // Azure OpenAI Support
     azureApiKey: process.env.AZURE_OPENAI_KEY!,
-    azureDefaultDeployment: 'gpt-4-vision-preview',
+    azureDefaultDeployment: 'gpt-4o-mini',
     azureEndpoint: process.env.AZURE_OPENAI_ENDPOINT!,
     azureApiVersion: '2023-03-15-preview',

@@ -132,16 +132,29 @@ interface SendCardParams {
     card: any;
 }

-app.ai.action<SendCardParams>('SendCard', async (context, state, params) => {
+app.ai.action<SendCardParams>('SendAdaptiveCard', async (context, state, params) => {
     const attachment = CardFactory.adaptiveCard(params.card);
     await context.sendActivity(MessageFactory.attachment(attachment));
     return 'card sent';
 });

-app.ai.action<SendCardParams>('ShowCardJSON', async (context, state, params) => {
-    const json = JSON.stringify(params.card, null, 2);
-    await context.sendActivity(`<pre>${json}</pre>`);
-    return 'card displayed';
+app.ai.action<SendCardParams>('DisplayJSON', async (context, state, params) => {
+    const adaptiveCardJson = {
+        type: 'AdaptiveCard',
+        version: '1.6',
+        body: [
+            {
+                type: 'CodeBlock',
+                language: 'Json',
+                codeSnippet: JSON.stringify(params.card, null, 2)
+            }
+        ]
+    };
+
+    // Create the attachment
+    const attachment = CardFactory.adaptiveCard(adaptiveCardJson);
+    await context.sendActivity(MessageFactory.attachment(attachment));
+    return `json sent`;
 });

 // Listen for incoming server requests.
```
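The rewritten `DisplayJSON` handler above wraps the generated card JSON in an Adaptive Card `CodeBlock` element instead of an HTML `<pre>` tag, so it renders as formatted code in Teams. The wrapping step can be isolated as a pure function (the function name here is ours, not the sample's):

```typescript
// Build an Adaptive Card that displays arbitrary card JSON inside a
// CodeBlock element, mirroring the DisplayJSON action in the diff above.
function toCodeBlockCard(card: object): { type: string; version: string; body: object[] } {
    return {
        type: 'AdaptiveCard',
        version: '1.6',
        body: [
            {
                type: 'CodeBlock',
                language: 'Json',
                codeSnippet: JSON.stringify(card, null, 2)
            }
        ]
    };
}
```

In the sample, the returned object is passed to `CardFactory.adaptiveCard(...)` and sent as a message attachment.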
Lines changed: 8 additions & 10 deletions

```diff
@@ -1,34 +1,32 @@
 [
     {
-        "name": "SendCard",
+        "name": "SendAdaptiveCard",
         "description": "Sends an adaptive card to the user",
         "parameters": {
             "type": "object",
             "properties": {
                 "card": {
+                    "$ref": "https://adaptivecards.io/schemas/adaptive-card.json#",
                     "type": "object",
                     "description": "The adaptive card to send"
                 }
             },
-            "required": [
-                "card"
-            ]
+            "required": ["card"]
         }
     },
     {
-        "name": "ShowCardJSON",
+        "name": "DisplayJSON",
         "description": "Shows the user the JSON for an adaptive card",
         "parameters": {
             "type": "object",
             "properties": {
                 "card": {
+                    "$ref": "https://adaptivecards.io/schemas/adaptive-card.json#",
                     "type": "object",
-                    "description": "The adaptive card JSON to show"
+                    "description": "The adaptive card JSON as raw text"
                 }
             },
-            "required": [
-                "card"
-            ]
+            "required": ["card"]
         }
     }
-]
+]
```
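Both action schemas above declare `card` as a required parameter of the tool call. A tiny hypothetical sketch of the kind of check a handler could perform against that `required` list (the SDK validates tool arguments itself; this function is purely illustrative):

```typescript
// Illustrative only: verify that every parameter listed as required by an
// action schema is present in the arguments of a tool call.
function hasRequiredParams(params: Record<string, unknown>, required: string[]): boolean {
    return required.every((key) => params[key] !== undefined);
}
```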

js/samples/04.ai-apps/c.vision-cardGazer/src/prompts/chat/config.json

Lines changed: 1 addition & 1 deletion

```diff
@@ -3,7 +3,7 @@
     "description": "Vision Bot",
     "type": "completion",
     "completion": {
-        "model": "gpt-4-vision-preview",
+        "model": "gpt-4o-mini",
         "completion_type": "chat",
         "include_history": true,
         "include_input": true,
```
Lines changed: 2 additions & 2 deletions

```diff
@@ -1,4 +1,4 @@
 You are a friendly assistant for Microsoft Teams with vision support.
 You are an expert on converting doodles and images to Adaptive Cards for Microsoft Teams.
-When shown an image try to convert it to an Adaptive Card and send it using SendCard.
-For Adaptive Cards with Image placeholders use ShowCardJSON instead.
+When shown an image try to convert it to an Adaptive Card and send it using SendAdaptiveCard.
+When the user asks for JSON, use DisplayJSON to show the JSON of the Adaptive Card.
```

js/samples/04.ai-apps/c.vision-cardGazer/teamsapp.yml

Lines changed: 1 addition & 0 deletions

```diff
@@ -113,3 +113,4 @@ publish:
   - uses: teamsApp/update
     with:
       appPackagePath: ./appPackage/build/appPackage.${{TEAMSFX_ENV}}.zip
+      projectId: 18c89ad5-9038-4751-aee1-0a5143804bc3
```

js/samples/05.authentication/b.oauth-bot/README.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -2,7 +2,7 @@

 This sample shows how to incorporate a basic conversational SSO flow into a Microsoft Teams application using [Bot Framework](https://dev.botframework.com) and the Teams AI SDK.

-This sample requires creating an OAuth Connection in Azure Bot Service, which provides a token store to store the token after sign-in.
+This sample requires creating an OAuth Connection in Azure Bot Service, which provides a token store to store the token after sign-in. You may need to enable SSO for your app. See [Enable SSO for your app](https://learn.microsoft.com/en-us/microsoftteams/platform/bots/how-to/authentication/bot-sso-overview) for more information.

 Note that this bot will only work in tenants where the following graph scopes are permitted:
```
