
Conversation

@kasimeka commented Aug 8, 2025

fixes #6

Summary by CodeRabbit

  • New Features

    • Added an example script demonstrating how to use the text generation function with an OpenAI-compatible API and custom base URL.
  • Bug Fixes

    • Resolved issues with duplicate URL path segments by updating default base URLs and endpoint paths for OpenAI-compatible APIs.
  • Tests

    • Updated test cases to reflect new base URL conventions and ensure correct URL construction in all relevant scenarios.

coderabbitai bot commented Aug 8, 2025

Walkthrough

The changes update the handling of API endpoint paths for OpenAI-compatible interfaces. The /v1 prefix is removed from hardcoded paths and instead expected to be included in the base_url when necessary. Test cases and examples are adjusted to use base URLs with the /v1 suffix, and a new example demonstrates usage with a Google Gemini endpoint.

Changes

| Cohort / File(s) | Change Summary |
| --- | --- |
| **OpenAI-Compatible Path Handling**<br>`lib/ai.ex`, `lib/ai/providers/openai_compatible/chat_language_model.ex` | Removed the hardcoded `/v1` prefix from endpoint paths; `/v1` is now expected in `base_url` when required. |
| **Tests: Base URL Adjustments**<br>`test/ai/openai_compatible_test.exs`, `test/ai/openai_test.exs`, `test/ai/providers/openai_compatible/generate_text_test.exs` | Updated all relevant tests to use base URLs with a `/v1` suffix, matching the new path-construction logic. |
| **Examples**<br>`examples/openai_compatible_example.exs` | Added a new example demonstrating `AI.generate_text` with an OpenAI-compatible Gemini endpoint. |
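The convention described in the table — the caller supplies the version segment in `base_url` and the provider appends only the resource path — can be sketched as follows (variable names and the example URL here are illustrative, not the library's actual code):

```elixir
# Under the new convention, any version segment (e.g. "/v1", or
# "/v1beta/openai" for Gemini) is part of base_url; the provider
# appends only the resource path, with no hardcoded "/v1".
base_url = "https://api.example.com/v1"
path = "/chat/completions"

url = base_url <> path
IO.puts(url)
# https://api.example.com/v1/chat/completions
```

This keeps the provider code version-agnostic: switching to Gemini's `/v1beta/openai` prefix requires only a different `base_url`, not a code change.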

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant ExampleScript
    participant AI
    participant OpenAICompatibleProvider
    participant GeminiAPI

    User->>ExampleScript: Run openai_compatible_example.exs
    ExampleScript->>AI: generate_text(prompt, model)
    AI->>OpenAICompatibleProvider: do_generate/2 with base_url (may include /v1 or /v1beta)
    OpenAICompatibleProvider->>GeminiAPI: POST /chat/completions (no hardcoded /v1 in path)
    GeminiAPI-->>OpenAICompatibleProvider: Response
    OpenAICompatibleProvider-->>AI: Generated text
    AI-->>ExampleScript: Generated text
    ExampleScript-->>User: Output result

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Assessment against linked issues

| Objective | Addressed | Explanation |
| --- | --- | --- |
| Remove hardcoded `/v1` prefix from OpenAI-compatible endpoint paths (#6) | ✅ | |
| Ensure compatibility with Gemini's OpenAI-compatible endpoints (#6) | ✅ | |

Assessment against linked issues: Out-of-scope changes

No out-of-scope changes detected.

Poem

A bunny hopped through code today,
Adjusting paths the OpenAI way.
No more /v1 to cause dismay—
Gemini now can join the play!
With tests and docs all in a row,
This rabbit cheers: "Let APIs flow!" 🐇✨

📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e18ec59 and a3efbe0.

📒 Files selected for processing (6)
  • examples/openai_compatible_example.exs (1 hunks)
  • lib/ai.ex (5 hunks)
  • lib/ai/providers/openai_compatible/chat_language_model.ex (2 hunks)
  • test/ai/openai_compatible_test.exs (6 hunks)
  • test/ai/openai_test.exs (1 hunks)
  • test/ai/providers/openai_compatible/generate_text_test.exs (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
  • test/ai/openai_compatible_test.exs
  • test/ai/openai_test.exs
  • examples/openai_compatible_example.exs
  • test/ai/providers/openai_compatible/generate_text_test.exs
  • lib/ai/providers/openai_compatible/chat_language_model.ex
  • lib/ai.ex

coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
lib/ai/providers/openai_compatible/chat_language_model.ex (1)

246-248: Duplicate hard-coded endpoint & same double-slash risk

do_stream/2 re-implements the exact URL assembly done in make_api_request/3, including the potential “//” issue. Consider centralising this logic in one helper to stay DRY and fix the bug once.
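One way to centralise the assembly, as the comment suggests, is a single helper that both call sites use (a sketch only — the module and function names below are assumptions, not the library's actual API):

```elixir
defmodule UrlHelper do
  # Hypothetical helper that both make_api_request/3 and do_stream/2
  # could call. Trimming a trailing "/" from base_url means a value
  # like "https://api.example.com/v1/" no longer produces "//" in
  # the final URL, fixing the bug in one place.
  @chat_completions_path "/chat/completions"

  def chat_completions_url(base_url) do
    String.trim_trailing(base_url, "/") <> @chat_completions_path
  end
end

UrlHelper.chat_completions_url("https://api.example.com/v1/")
# => "https://api.example.com/v1/chat/completions"
```

Hoisting the path into a module attribute also removes the duplicated `"/chat/completions"` literal flagged above.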

🧹 Nitpick comments (7)
lib/ai.ex (1)

223-224: Inconsistent /v1 handling between chat vs. completion helpers

openai/2 now hard-codes /v1 into the default base_url, while openai_completion/2 still expects the version segment to be supplied in the path inside its provider module. This asymmetry is surprising and easy to miss when switching between the two helpers. Please either:

  1. Align the defaults (both include /v1), or
  2. Add explicit docs/comments explaining why the two helpers differ.

A small clarification now will save others head-scratching later.

test/ai/providers/openai_compatible/generate_text_test.exs (1)

12-15: Nice update — consider an extra case with a trailing “/”

The new /v1 base URL is covered, but adding one test with base_url: "https://api.example.com/v1/" would exercise the double-slash edge case flagged in the implementation.

test/ai/openai_compatible_test.exs (1)

33-35: Add coverage for trailing-slash base URLs

As with the provider tests, adding one variant that passes "https://api.example.com/v1/" would protect against the double-slash bug.

Also applies to: 44-45
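A test along these lines could cover the trailing-slash edge case (an ExUnit sketch; in the real suite the assertion would run through the provider's request path rather than joining the URL inline):

```elixir
# Hypothetical ExUnit case for the trailing-slash variant of base_url.
test "strips trailing slash from base_url before appending the path" do
  base_url = "https://api.example.com/v1/"
  url = String.trim_trailing(base_url, "/") <> "/chat/completions"
  assert url == "https://api.example.com/v1/chat/completions"
end
```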

examples/openai_compatible_example.exs (4)

11-16: Prefer System.fetch_env/1 for env var handling

fetch_env avoids nil checks and expresses intent clearly.

-    api_key = System.get_env("GOOGLE_API_KEY")
-
-    if is_nil(api_key) do
-      IO.puts("Error: GOOGLE_API_KEY environment variable not set")
-      System.halt(1)
-    end
+    api_key =
+      case System.fetch_env("GOOGLE_API_KEY") do
+        {:ok, key} ->
+          key
+
+        :error ->
+          IO.puts("Error: GOOGLE_API_KEY environment variable not set")
+          System.halt(1)
+      end

9-9: Nit: IO.puts already appends a newline

Remove the explicit \n to avoid a double blank line.

-    IO.puts("Starting generate_text example...\n")
+    IO.puts("Starting generate_text example...")

1-3: Nit: Capitalization/wording in header comments

Tweak capitalization and trailing space.

-# Sample script demonstrating AI.generate_text usage with an openai-compatible api
-# Run from the elixir-ai-sdk root directory with: 
+# Sample script demonstrating AI.generate_text usage with an OpenAI-compatible API
+# Run from the elixir-ai-sdk root directory with:

5-6: Add a clarifying note about base_url expectations

Since this example exists to illustrate the /v1 path change, a brief note helps users avoid common misconfigurations.

 # Make sure you've set the GOOGLE_API_KEY environment variable
+#
+# Note:
+# - For OpenAI, base_url typically includes /v1 (e.g., https://api.openai.com/v1).
+# - For Gemini's OpenAI-compatible API, do NOT add /v1; use v1beta/openai as shown below.
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e884b97 and 527e903.

📒 Files selected for processing (6)
  • examples/openai_compatible_example.exs (1 hunks)
  • lib/ai.ex (3 hunks)
  • lib/ai/providers/openai_compatible/chat_language_model.ex (2 hunks)
  • test/ai/openai_compatible_test.exs (6 hunks)
  • test/ai/openai_test.exs (1 hunks)
  • test/ai/providers/openai_compatible/generate_text_test.exs (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (3)
test/ai/openai_test.exs (1)
lib/ai.ex (1)
  • openai (215-251)
examples/openai_compatible_example.exs (1)
lib/ai.ex (1)
  • openai_compatible (178-187)
test/ai/openai_compatible_test.exs (1)
lib/ai.ex (1)
  • openai_compatible (178-187)
🔇 Additional comments (5)
test/ai/openai_test.exs (1)

76-78: LGTM – test keeps parity with helper change

test/ai/openai_compatible_test.exs (2)

18-29: Base URL updates look correct

Tests now reflect the /v1 shift and still assert URLs accurately.


93-97: Consistency achieved – good job

Also applies to: 166-168, 232-234

examples/openai_compatible_example.exs (2)

24-25: LGTM: Correct base_url for Gemini’s OpenAI-compatible endpoint

Using https://generativelanguage.googleapis.com/v1beta/openai (without an extra /v1) aligns with the PR objective and prevents 404s.


33-34: LGTM: Running the example at script load is appropriate here

Auto-executing the example from an .exs script matches the README-style usage.

Comment on lines 18 to 29
{:ok, result} =
AI.generate_text(%{
prompt: "is elixir a statically typed programming language?"
model:
AI.openai_compatible(
"gemini-2.0-flash",
base_url: "https://generativelanguage.googleapis.com/v1beta/openai",
api_key: api_key
),
})

IO.puts(result.text)

🛠️ Refactor suggestion

Make the example robust: handle {:error, reason} and avoid match crashes (also fixes the missing comma)

Pattern matching on {:ok, result} will crash on errors. Wrap the call in a case to print failures and exit with non‑zero status. This diff also includes the missing comma fix.

-    {:ok, result} =
-      AI.generate_text(%{
-        prompt: "is elixir a statically typed programming language?"
-        model:
-          AI.openai_compatible(
-            "gemini-2.0-flash",
-            base_url: "https://generativelanguage.googleapis.com/v1beta/openai",
-            api_key: api_key
-          ),
-      })
-
-    IO.puts(result.text)
+    case AI.generate_text(%{
+           prompt: "is elixir a statically typed programming language?",
+           model:
+             AI.openai_compatible(
+               "gemini-2.0-flash",
+               base_url: "https://generativelanguage.googleapis.com/v1beta/openai",
+               api_key: api_key
+             )
+         }) do
+      {:ok, result} ->
+        IO.puts(result.text)
+
+      {:error, reason} ->
+        IO.puts("generate_text error: #{inspect(reason)}")
+        System.halt(1)
+    end
🤖 Prompt for AI Agents
In examples/openai_compatible_example.exs around lines 18 to 29, the code
directly pattern matches on {:ok, result}, which will crash if an error tuple is
returned. To fix this, replace the direct match with a case statement that
handles both {:ok, result} and {:error, reason} cases, printing the error reason
and exiting with a non-zero status on failure. Also, add the missing comma after
the prompt string to correct the syntax.

Comment on lines +72 to 73
case make_api_request(model.provider, "/chat/completions", request_body) do
{:ok, response} ->

🛠️ Refactor suggestion

⚠️ Potential issue

Possible “//” in URL when base_url ends with a slash

make_api_request/3 blindly concatenates:

url = "#{provider.base_url}#{path}"

If a caller supplies base_url: "https://api.example.com/v1/", the final URL becomes
https://api.example.com/v1//chat/completions, which many servers treat differently or reject.

Guard against this by trimming one side or, better, using Path.join/2-style logic:

-url = "#{provider.base_url}#{path}"
+url = "#{String.trim_trailing(provider.base_url, "/")}#{path}"

You might also extract "/chat/completions" into a module attribute to avoid the duplication seen here and at Line 247.

🤖 Prompt for AI Agents
In lib/ai/providers/openai_compatible/chat_language_model.ex around lines 72 to
73, the URL construction in make_api_request/3 concatenates base_url and path
directly, which can cause double slashes if base_url ends with a slash. Fix this
by trimming the trailing slash from base_url or the leading slash from path
before concatenation, or implement Path.join/2-style logic to join them safely.
Additionally, extract the "/chat/completions" string into a module attribute to
avoid duplication and improve maintainability.



Development

Successfully merging this pull request may close these issues.

the openai compatible interface doesn't work with gemini because it appends a /v1 prefix to all paths

1 participant