
Azure-Samples/ai-chat-vision-quickstart-csharp

Chat Application with image support using Microsoft.Extensions.AI (C#/.NET)


Open in GitHub Codespaces Open in Dev Containers

Local quickstart

To get started with this sample on your local machine, first clone the repository:

    git clone https://github.com/Azure-Samples/ai-chat-vision-quickstart-csharp

Inside the repository, you will find the appsettings.Development.json file under src/AIChatImgApp. Edit the AIHost value to be one of the following values:

  • "local" - Ollama support
  • "github" - GitHub models
  • "azureAIModelCatalog" - Azure Inference
  • "openai" - Azure OpenAI

Read the relevant subsection for further details on how to configure the settings for each AI provider.
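For instance, to target GitHub Models, the relevant line in src/AIChatImgApp/appsettings.Development.json would look like this (a minimal sketch; the other keys in your file stay as they are):

```json
{
  "AIHost": "github"
}
```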

Note: You should never store keys in configuration files that may get checked into source control. Instead, save the key to an environment variable, or use the dotnet user-secrets tool to store the key securely. For more information, see User Secrets. The following settings will be picked up regardless of whether they are stored in user secrets, your environment, or the app settings file.
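For example, to keep a GitHub token out of the settings file, you can use the dotnet user-secrets CLI from the project directory (a sketch; the project path is the one given above, and `<your-token>` is a placeholder for a real token):

```shell
cd src/AIChatImgApp

# One-time setup: adds a UserSecretsId to the project file
dotnet user-secrets init

# Store the key outside the repository
dotnet user-secrets set "GITHUB_TOKEN" "<your-token>"

# Verify what is currently stored
dotnet user-secrets list
```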

Local (Ollama)

You should install Ollama, then pull the appropriate models and confirm they run.

LOCAL_MODEL_NAME should contain a vision-capable model. For example: llava:7b.

LOCAL_ENDPOINT should contain the endpoint of the local Ollama server. For example: http://localhost:11434.
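A quick way to pull the model and confirm the server is reachable, assuming a default Ollama install (the model name and port are the example values above):

```shell
# Download the vision-capable model
ollama pull llava:7b

# Confirm the server responds and the model is listed
curl http://localhost:11434/api/tags
```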

GitHub models

Generate a token for use with the app. On the GitHub settings page for your profile, choose "Developer settings" (bottom of far left menu) and then "Personal access tokens". Create a fine-grained (not classic) token. No permissions are necessary, you only need the token itself to access models.

GITHUB_TOKEN should be set to the token you generated. REMOTE_ENDPOINT is already configured for the default publicly accessible API at https://models.inference.ai.azure.com/.
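As a quick smoke test of the token, you can call the endpoint directly with curl. The /chat/completions path and request body follow the Azure AI inference API; treat this as a hedged sketch, not the app's own request format:

```shell
curl -s https://models.inference.ai.azure.com/chat/completions \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-4o-mini",
        "messages": [{ "role": "user", "content": "Say hello in one word." }]
      }'
```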

"AZURE_OPENAI_KEY": "",

"GITHUB_TOKEN": "",

"AZURE_INFERENCE_KEY": "",

"REMOTE_MODEL_OR_DEPLOYMENT_ID": "gpt-4o-mini", "REMOTE_ENDPOINT": "https://models.inference.ai.azure.com/",

"LOCAL_MODEL_NAME": "llava:7b", "LOCAL_ENDPOINT": "http://localhost:11434" }

  • AIHost: The AI provider to use. Options are local, github, azureAIModelCatalog, or openai.
  • LOCAL_ENDPOINT: The endpoint for the local Ollama server.
  • LOCAL_MODEL_NAME: The name of the local model to use.
  • GITHUB_TOKEN: The GitHub token to use for accessing GitHub models.
  • AZURE_INFERENCE_KEY: The Azure Inference key to use.
  • AZURE_MODEL_NAME: The Azure Inference model name to use.
  • AZURE_MODEL_ENDPOINT: The Azure Inference endpoint to use.
  • AZURE_OPENAI_ENDPOINT: The Azure OpenAI endpoint to use.
  • AZURE_OPENAI_DEPLOYMENT: The Azure OpenAI deployment to use.
  • AZURE_OPENAI_KEY: The Azure OpenAI key to use.

The project includes all the infrastructure and configuration needed to provision Azure OpenAI resources and deploy the app to Azure Container Apps using the Azure Developer CLI. By default, the app will use managed identity to authenticate with Azure OpenAI.

We recommend first going through the deploying steps before running this app locally, since the local app needs credentials for Azure OpenAI to work properly.

Features

  • An ASP.NET Core backend that uses the Microsoft.Extensions.AI package to access language models and generate responses to user messages.
  • A basic HTML/JS frontend that streams responses from the backend using JSON over a ReadableStream.
  • A Blazor frontend that streams responses from the backend.
  • Bicep files for provisioning Azure resources, including Azure OpenAI, Azure Container Apps, Azure Container Registry, Azure Log Analytics, and RBAC roles.
  • Using the OpenAI gpt-4o-mini model through Azure OpenAI.
  • Support for using local LLMs or GitHub Models during development.

Screenshot of the chat app

Architecture diagram

Architecture diagram: Azure Container Apps inside Container Apps Environment, connected to Container Registry with Container, connected to Managed Identity for Azure OpenAI

Getting started

You have a few options for getting started with this template. The quickest way is GitHub Codespaces, since it will set up all the tools for you, but you can also set it up locally.

GitHub Codespaces

You can run this template virtually by using GitHub Codespaces. The button will open a web-based VS Code instance in your browser:

  1. Open the template (this may take several minutes):

    Open in GitHub Codespaces

  2. Open a terminal window

  3. Continue with the deploying steps

Local Environment

If you're not using one of the above options for opening the project, then you'll need to:

  1. Make sure the following tools are installed:

  2. Download the project code:

    azd init -t ai-chat-app-csharp
  3. If you're using Visual Studio, open the src/ai-chat-quickstart.sln solution file. If you're using VS Code, open the src folder.

  4. Continue with the deploying steps.

VS Code Dev Containers

A related option is VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:

  1. Start Docker Desktop (install it if not already installed)

  2. Open the project:

    Open in Dev Containers

  3. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.

  4. Continue with the deploying steps

Deploying

Once you've opened the project in Codespaces, in Dev Containers, or locally, you can deploy it to Azure.

Azure account setup

  1. Sign up for a free Azure account and create an Azure Subscription.

  2. Check that you have the necessary permissions:

Deploying with azd

From a Terminal window, open the folder with the clone of this repo and run the following commands.

  1. Login to Azure:

    azd auth login
  2. Provision and deploy all the resources:

    azd up

    It will prompt you to provide an azd environment name (like "chat-app"), select a subscription from your Azure account, and select a location where OpenAI is available (like "francecentral"). Then it will provision the resources in your account and deploy the latest code. If you get an error or timeout with deployment, changing the location can help, as there may be availability constraints for the OpenAI resource.

  3. When azd has finished deploying, you'll see an endpoint URI in the command output. Visit that URI, and you should see the chat app! 🎉

  4. When you've made any changes to the app code, you can just run:

    azd deploy

Continuous deployment with GitHub Actions

This project includes a GitHub workflow for deploying the resources to Azure on every push to main. That workflow requires several Azure-related authentication secrets to be stored as GitHub Actions secrets. To set that up, run:

azd pipeline config

Development server

In order to run this app, you need to either have an Azure OpenAI account deployed (from the deploying steps), use a model from GitHub models, use the Azure AI Model Catalog, or use a local LLM server.

After deployment, Azure OpenAI is configured for you using User Secrets. If you did not run the deployment steps, or you want to use different models, you can manually update the settings in appsettings.local.json. Important: this file is only for local development; the sample includes it in the .gitignore file, so changes to it will not be committed. Do not check your secret keys into source control!

  1. If you want to use an existing Azure OpenAI deployment, modify the AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_DEPLOYMENT configuration settings in the appsettings.local.json file.

  2. For use with GitHub models, change AIHost to "github" in the appsettings.local.json file.

    You'll need a GITHUB_TOKEN environment variable that stores a GitHub personal access token. If you're running this inside a GitHub Codespace, the token will be automatically available. If not, generate a new personal access token and configure it in the GITHUB_TOKEN setting of appsettings.local.json:

  3. For use with local models, change AIHost to "local" in the appsettings.local.json file and change LOCAL_ENDPOINT and LOCAL_MODEL_NAME to match the local server. See local LLM server for more information.

  4. To use the Azure AI Model Catalog, change AIHost to "azureAIModelCatalog" in the appsettings.local.json file. Change AZURE_INFERENCE_KEY, AZURE_MODEL_NAME, and AZURE_MODEL_ENDPOINT settings to match your configuration in the Azure AI Model Catalog.

  5. Start the project:

    If using Visual Studio, choose the Debug > Start Debugging menu. If using VS Code or GitHub Codespaces, choose the Run > Start Debugging menu. Finally, if using the command line, run the following from the project directory:

    dotnet run

    This will start the app on port 5153, and you can access it at http://localhost:5153.
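Once the app is running, a quick check from a second terminal confirms it is serving (the port is the one stated above; `--urls` is a standard ASP.NET Core option, shown here in case 5153 is already taken):

```shell
# Confirm the app is serving
curl -I http://localhost:5153

# Or, if port 5153 is in use, run the app on a different port
dotnet run --urls http://localhost:5999
```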

Guidance

Costs

Pricing varies per region and usage, so it isn't possible to predict exact costs for your usage. The majority of the Azure resources used in this infrastructure are on usage-based pricing tiers. However, Azure Container Registry has a fixed cost per registry per day.

You can try the Azure pricing calculator for the resources:

  • Azure OpenAI Service: S0 tier, gpt-4o-mini model. Pricing is based on token count. Pricing
  • Azure Container App: Consumption tier with 0.5 CPU, 1GiB memory/storage. Pricing is based on resource allocation, and each month allows for a certain amount of free usage. Pricing
  • Azure Container Registry: Basic tier. Pricing
  • Log analytics: Pay-as-you-go tier. Costs based on data ingested. Pricing

⚠️ To avoid unnecessary costs, remember to take down your app if it's no longer in use, either by deleting the resource group in the Portal or running azd down.
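For example (the --purge flag also permanently deletes soft-deleted resources such as Azure OpenAI accounts, so quota is freed immediately):

```shell
azd down --purge
```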

Security Guidelines

This template uses Managed Identity for authenticating to the Azure OpenAI service.

Additionally, we have added a GitHub Action that scans the infrastructure-as-code files and generates a report containing any detected issues. To ensure continued best practices in your own repository, we recommend that anyone creating solutions based on our templates enable the GitHub secret scanning setting.

You may want to consider additional security measures, such as:

Resources

About

A C# sample of chatting with uploaded images using OpenAI vision models like gpt-4o.
