5) Add this `local.settings.json` file to the root of the repo folder to simplify local development. Replace `AZURE_OPENAI_ENDPOINT` with your value from step 4. Optionally, you can choose a different model deployment in `AZURE_OPENAI_CHATGPT_DEPLOYMENT`. This file is gitignored to keep secrets out of your repo; however, by default the sample uses Entra identity (user identity and managed identity), so no secrets are needed.
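A minimal sketch of what that `local.settings.json` might look like. The `AZURE_OPENAI_ENDPOINT` and `AZURE_OPENAI_CHATGPT_DEPLOYMENT` keys come from the description above; `FUNCTIONS_WORKER_RUNTIME` and `AzureWebJobsStorage` are standard Azure Functions settings, and the exact shape here is an assumption:

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AZURE_OPENAI_ENDPOINT": "https://<your-resource>.openai.azure.com/",
    "AZURE_OPENAI_CHATGPT_DEPLOYMENT": "gpt-35-turbo"
  }
}
```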
The key code that makes the prompting and completion work is in [function_app.py](function_app.py). The `/api/ask` function and route expects a prompt in the POST body, using a standard HTTP trigger in Python. Once the environment variables are set to configure the OpenAI and LangChain frameworks, we can use LangChain's core building blocks. In this simple example we take a prompt, build a better prompt from a template, and then invoke the LLM. By default the LLM deployment is `gpt-35-turbo`, as defined in [./infra/main.parameters.json](./infra/main.parameters.json), but you can experiment with other models.
```python
llm = AzureChatOpenAI(
    deployment_name=AZURE_OPENAI_CHATGPT_DEPLOYMENT,
    temperature=0.3,
    openai_api_key=AZURE_OPENAI_KEY
)
llm_prompt = PromptTemplate.from_template(
    "The following is a conversation with an AI assistant. " +
    "The assistant is helpful.\n\n" +
    "A:How can I help you today?\nHuman: {human_prompt}?"
)
```
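The templating step above is plain `{placeholder}` substitution, so the final prompt can be previewed without any Azure credentials or LangChain installed. A stdlib-only sketch (the `build_prompt` helper name is hypothetical; the template string is copied from the snippet above):

```python
# The same template string used by PromptTemplate.from_template above.
TEMPLATE = (
    "The following is a conversation with an AI assistant. "
    "The assistant is helpful.\n\n"
    "A:How can I help you today?\nHuman: {human_prompt}?"
)

def build_prompt(human_prompt: str) -> str:
    # Equivalent to llm_prompt.format(human_prompt=...) in LangChain:
    # substitute the user's question into the {human_prompt} slot.
    return TEMPLATE.format(human_prompt=human_prompt)

print(build_prompt("What is Azure Functions"))
```

This is useful for debugging what actually gets sent to the model before wiring up the Azure OpenAI call.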