Commit d886b38

Merge pull request #11 from paulyuk/main
Readme update to fix duplication
2 parents 0a352fa + d5e00b9 commit d886b38

File tree: 1 file changed, +4 −24 lines


README.md

@@ -98,25 +98,6 @@ code .
 
 4) Test using same REST client steps above
 
-
-## Source Code
-
-The key code that makes this work is as follows in [function_app.py](./function_app.py). You can customize this or learn more snippets using the [LangChain Quickstart Guide](https://python.langchain.com/en/latest/getting_started/getting_started.html).
-
-```python
-llm = AzureOpenAI(deployment_name=AZURE_OPENAI_CHATGPT_DEPLOYMENT, temperature=0.3, openai_api_key=AZURE_OPENAI_KEY)
-
-llm_prompt = PromptTemplate(
-    input_variables=["human_prompt"],
-    template="The following is a conversation with an AI assistant. The assistant is helpful.\n\nAI: I am an AI created by OpenAI. How can I help you today?\nHuman: {human_prompt}?",
-)
-
-from langchain.chains import LLMChain
-chain = LLMChain(llm=llm, prompt=llm_prompt)
-
-return chain.run(prompt)  # prompt is human input from request body
-```
-
 ## Deploy to Azure
 
 The easiest way to deploy this app is using the [Azure Dev CLI](https://aka.ms/azd). If you open this repo in GitHub CodeSpaces the AZD tooling is already preinstalled.
@@ -128,21 +109,20 @@ azd up
 
 ## Source Code
 
-The key code that makes the prompting and completion work is as follows in [function_app.py](function_app.py). The `/api/ask` function and route expects a prompt to come in the POST body using a standard HTTP Trigger in Python. Then once the environment variables are set to configure OpenAI and LangChain frameworks, we can leverage favorite aspects of LangChain. In this simple example we take a prompt, build a better prompt from a template, and then invoke the LLM. By default the LLM deployment is `gpt-35-turbo` as defined in [./infra/main.parameters.json](./infra/main.parameters.json) but you can experiment with other models.
+The key code that makes the prompting and completion work is as follows in [function_app.py](function_app.py). The `/api/ask` function and route expects a prompt to come in the POST body using a standard HTTP Trigger in Python. Then once the environment variables are set to configure OpenAI and LangChain frameworks via the `init()` function, we can leverage favorite aspects of LangChain in the `main()` (ask) function. In this simple example we take a prompt, build a better prompt from a template, and then invoke the LLM. By default the LLM deployment is `gpt-35-turbo` as defined in [./infra/main.parameters.json](./infra/main.parameters.json) but you can experiment with other models and other aspects of LangChain's breadth of features.
 
 ```python
 llm = AzureChatOpenAI(
     deployment_name=AZURE_OPENAI_CHATGPT_DEPLOYMENT,
-    temperature=0.3,
-    openai_api_key=AZURE_OPENAI_KEY
+    temperature=0.3
 )
 llm_prompt = PromptTemplate.from_template(
     "The following is a conversation with an AI assistant. " +
     "The assistant is helpful.\n\n" +
-    "A:How can I help you today?\nHuman: {human_prompt}?"
+    "A:How can I help you today?\n" +
+    "Human: {human_prompt}?"
 )
 formatted_prompt = llm_prompt.format(human_prompt=prompt)
 
 response = llm.invoke(formatted_prompt)
-logging.info(response.content)
 ```
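
The prompt-construction step in the updated snippet can be sketched without any Azure or LangChain dependencies: for a template like this, `PromptTemplate.from_template(...)` followed by `.format(...)` produces the same string as plain `str.format`. The `build_prompt` helper below is a hypothetical illustration, not part of the repo's code:

```python
# Dependency-free sketch of the prompt-building step from the updated README
# code. The template text matches the diff above; build_prompt() mirrors what
# llm_prompt.format(human_prompt=prompt) returns before llm.invoke() is called.
TEMPLATE = (
    "The following is a conversation with an AI assistant. "
    "The assistant is helpful.\n\n"
    "A:How can I help you today?\n"
    "Human: {human_prompt}?"
)

def build_prompt(human_prompt: str) -> str:
    # Fill the {human_prompt} placeholder, as PromptTemplate.format() does.
    return TEMPLATE.format(human_prompt=human_prompt)

print(build_prompt("What is Azure Functions"))
# The last line of the printed prompt is: Human: What is Azure Functions?
```

The formatted string is then passed to `llm.invoke(...)`, which sends it to the configured `gpt-35-turbo` deployment.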
