
Commit 66e98a5

Merge pull request #444 from mishranant/add_completions_api

Added /completions endpoint back to support gpt-3.5-turbo-instruct

2 parents 81c3988 + b0c0292 · commit 66e98a5
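
For context, a minimal sketch of the restored call used with gpt-3.5-turbo-instruct, assembled from the README example and the new spec in this commit. The access-token setup and the environment variable name are assumptions, not part of this diff, and the printed output is illustrative:

```ruby
require "openai"

# Assumed token configuration; not part of this commit.
client = OpenAI::Client.new(access_token: ENV.fetch("OPENAI_ACCESS_TOKEN"))

# Call the restored /completions endpoint with the instruct model.
response = client.completions(
  parameters: {
    model: "gpt-3.5-turbo-instruct",
    prompt: "Once upon a time",
    max_tokens: 5
  }
)

puts response.dig("choices", 0, "text")
# => " there lived a great"   (illustrative output)
```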

File tree

4 files changed: +144 −0 lines changed

README.md

Lines changed: 15 additions & 0 deletions

````diff
@@ -399,6 +399,21 @@ end
 # => "The weather is nice 🌞"
 ```
 
+### Completions
+
+Hit the OpenAI API for a completion using other GPT-3 models:
+
+```ruby
+response = client.completions(
+    parameters: {
+        model: "text-davinci-001",
+        prompt: "Once upon a time",
+        max_tokens: 5
+    })
+puts response["choices"].map { |c| c["text"] }
+# => [", there lived a great"]
+```
+
 ### Edits
 
 Send a string and some instructions for what to do to the string:
````

lib/openai/client.rb

Lines changed: 4 additions & 0 deletions

```diff
@@ -34,6 +34,10 @@ def embeddings(parameters: {})
       json_post(path: "/embeddings", parameters: parameters)
     end
 
+    def completions(parameters: {})
+      json_post(path: "/completions", parameters: parameters)
+    end
+
     def audio
       @audio ||= OpenAI::Audio.new(client: self)
     end
```
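
The new method delegates to the gem's shared `json_post` helper exactly as `embeddings` does, so callers get back a parsed response hash. Both access patterns used elsewhere in this commit work against it; a quick sketch with an assumed response shape (the literal hash below is illustrative, not captured API output):

```ruby
# Illustrative response hash; real fields come from the /completions API.
response = {
  "choices" => [
    { "text" => " there lived a great", "index" => 0, "finish_reason" => "length" }
  ]
}

# Pattern used in the README example added by this commit:
puts response["choices"].map { |c| c["text"] }

# Pattern used in the new spec:
puts response.dig("choices", 0, "text")
```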

spec/fixtures/cassettes/gpt-3_5-turbo-instruct_completions_once_upon_a_time.yml

Lines changed: 98 additions & 0 deletions

Generated VCR cassette; not rendered in the diff view.
Lines changed: 27 additions & 0 deletions

```diff
@@ -0,0 +1,27 @@
+RSpec.describe OpenAI::Client do
+  describe "#completions: gpt-3.5-turbo-instruct" do
+    context "with a prompt and max_tokens", :vcr do
+      let(:prompt) { "Once upon a time" }
+      let(:max_tokens) { 5 }
+
+      let(:response) do
+        OpenAI::Client.new.completions(
+          parameters: {
+            model: model,
+            prompt: prompt,
+            max_tokens: max_tokens
+          }
+        )
+      end
+      let(:text) { response.dig("choices", 0, "text") }
+      let(:cassette) { "#{model} completions #{prompt}".downcase }
+      let(:model) { "gpt-3.5-turbo-instruct" }
+
+      it "succeeds" do
+        VCR.use_cassette(cassette) do
+          expect(text.split.empty?).to eq(false)
+        end
+      end
+    end
+  end
+end
```
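
The cassette name is built from the model and prompt, which is how this spec lines up with the fixture added above. A small sketch of the interpolation; the mapping from the downcased string to the on-disk filename (dots and spaces becoming underscores) is assumed to come from the suite's cassette-name sanitisation rather than anything in this diff:

```ruby
model  = "gpt-3.5-turbo-instruct"
prompt = "Once upon a time"

cassette = "#{model} completions #{prompt}".downcase
# => "gpt-3.5-turbo-instruct completions once upon a time"

# Recorded on disk as the fixture shown above:
# spec/fixtures/cassettes/gpt-3_5-turbo-instruct_completions_once_upon_a_time.yml
```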

0 commit comments