README.md (+4 −24)
@@ -98,25 +98,6 @@ code .
 4) Test using same REST client steps above

-## Source Code
-
-The key code that makes this work is as follows in [function_app.py](./function_app.py). You can customize this or learn more snippets using the [LangChain Quickstart Guide](https://python.langchain.com/en/latest/getting_started/getting_started.html).
-    template="The following is a conversation with an AI assistant. The assistant is helpful.\n\nAI: I am an AI created by OpenAI. How can I help you today?\nHuman: {human_prompt}?",
-)
-
-from langchain.chains import LLMChain
-chain = LLMChain(llm=llm, prompt=llm_prompt)
-
-return chain.run(prompt) # prompt is human input from request body
-```

## Deploy to Azure

The easiest way to deploy this app is using the [Azure Dev CLI](https://aka.ms/azd). If you open this repo in GitHub CodeSpaces the AZD tooling is already preinstalled.
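Once the app is running (locally under the Functions host, or after deployment), the REST-client test step above amounts to a single POST to the `/api/ask` route. The following is a minimal sketch using only the Python standard library; the local port and the `"prompt"` JSON body key are assumptions for illustration, not taken from this diff:

```python
import json
import urllib.request

# Hypothetical local Functions host URL; substitute the deployed
# Function App URL after `azd up`. The JSON body key "prompt" is an
# assumption about what the HTTP trigger reads from the request body.
url = "http://localhost:7071/api/ask"
body = json.dumps({"prompt": "What is an Azure Function?"}).encode("utf-8")
req = urllib.request.Request(
    url,
    data=body,
    headers={"Content-Type": "application/json"},
    method="POST",
)
# Actually sending the request is left commented out so the sketch
# stands alone without a running host:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode("utf-8"))
print(req.get_method(), req.full_url)
```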
@@ -128,21 +109,20 @@ azd up

## Source Code

-The key code that makes the prompting and completion work is as follows in [function_app.py](function_app.py). The `/api/ask` function and route expects a prompt to come in the POST body using a standard HTTP Trigger in Python. Then once the environment variables are set to configure OpenAI and LangChain frameworks, we can leverage favorite aspects of LangChain. In this simple example we take a prompt, build a better prompt from a template, and then invoke the LLM. By default the LLM deployment is `gpt-35-turbo` as defined in [./infra/main.parameters.json](./infra/main.parameters.json) but you can experiment with other models.
+The key code that makes the prompting and completion work is in [function_app.py](function_app.py). The `/api/ask` function and route expect a prompt to come in the POST body using a standard HTTP Trigger in Python. Once the environment variables are set to configure the OpenAI and LangChain frameworks via the `init()` function, we can leverage our favorite aspects of LangChain in the `main()` (ask) function. In this simple example we take a prompt, build a better prompt from a template, and then invoke the LLM. By default the LLM deployment is `gpt-35-turbo`, as defined in [./infra/main.parameters.json](./infra/main.parameters.json), but you can experiment with other models and other aspects of LangChain's breadth of features.

```python
llm = AzureChatOpenAI(
    deployment_name=AZURE_OPENAI_CHATGPT_DEPLOYMENT,
-   temperature=0.3,
-   openai_api_key=AZURE_OPENAI_KEY
+   temperature=0.3
)
llm_prompt = PromptTemplate.from_template(
    "The following is a conversation with an AI assistant. "+
    "The assistant is helpful.\n\n"+
-   "A:How can I help you today?\nHuman: {human_prompt}?"
```