
Conversation

@oxaroky02 commented Jul 9, 2025

What this does

Adds Azure OpenAI provider, derived from the existing OpenAI provider.

Provider can be configured as follows:

    context = RubyLLM.context do |config|
      config.azure_openai_api_base = ENV.fetch('AZURE_OPENAI_URI')
      config.azure_openai_api_key = ENV.fetch('AZURE_OPENAI_API_KEY', nil)
      config.azure_openai_api_version = ENV.fetch('AZURE_OPENAI_API_VER', nil)
      config.default_model = ENV.fetch('AZURE_OPENAI_MODEL', 'gpt-4o')
    end

    chat = context.chat(
      provider: :azure_openai,
      model: ENV.fetch('AZURE_OPENAI_MODEL', 'gpt-4o'),
      assume_model_exists: true)

    chat.ask "Hello!"

Type of change

  • Bug fix
  • New feature
  • Breaking change
  • Documentation
  • Performance improvement

Scope check

  • I read the Contributing Guide
  • This aligns with RubyLLM's focus on LLM communication
  • This isn't application-specific logic that belongs in user code
  • This benefits most users, not just my specific use case

Quality check

  • I ran overcommit --install and all hooks pass
  • I tested my changes thoroughly
  • I updated documentation if needed
  • I didn't modify auto-generated files manually (models.json, aliases.json)

API changes

  • Breaking change
  • New public methods/classes
  • Changed method signatures
  • No API changes

Related issues

@oxaroky02 (Author)

Hola @crmne. Before I go further, can you please take a look and let me know if I'm on the right track?

I had to override some methods to "steal" the model ID and config properties because of the way Azure OpenAI API URLs are constructed from the model (deployment) and API version.
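
For context, Azure routes chat requests by deployment name and API version in the URL itself rather than by a model field alone, which is why the provider has to pull those values out early. A rough sketch of the URL shape (the method and variable names here are illustrative, not the actual code in this PR):

    # Azure OpenAI chat completions URL, roughly:
    #   https://<resource>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=<version>
    def azure_completion_url(api_base, deployment, api_version)
      "#{api_base}/openai/deployments/#{deployment}/chat/completions?api-version=#{api_version}"
    end

    azure_completion_url('https://my-resource.openai.azure.com', 'gpt-4o', '2024-10-21')
    # => "https://my-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-10-21"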

@crmne added the "new provider" (New provider integration) label on Jul 16, 2025
@oxaroky02 (Author)

@crmne, apologies if I am pestering you. I've been using the new provider with my project, where I have tested #ask extensively, including tool use, which is key to my app. My app uses both ollama and azure_openai, and I have switched between them to ensure behaviour is consistent.
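
To give a sense of the tool-use path I exercised, here is an illustrative stub (not the actual tool from my app):

    class Weather < RubyLLM::Tool
      description "Gets current weather for a location"
      param :latitude, desc: "Latitude, e.g. 52.5200"
      param :longitude, desc: "Longitude, e.g. 13.4050"

      def execute(latitude:, longitude:)
        "15°C, partly cloudy" # stubbed result; a real tool would call a weather API
      end
    end

    chat.with_tool(Weather).ask "What's the weather in Berlin?"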

I don't fully understand what I should do for the RSpec tests. Let me know how you would like me to proceed, or if you are comfortable accepting this PR as is.

@crmne linked an issue (Azure OpenAI support) on Jul 18, 2025 that may be closed by this pull request
@crmne (Owner) commented Jul 18, 2025

Hi @oxaroky02 just add your models to the correct lists in spec_helper.rb and run rspec . It should create the correct VCR cassettes.
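
Something along these lines (the constant names are hypothetical and may not match spec_helper.rb exactly; the point is one entry per provider/model pair used when recording cassettes):

    # spec_helper.rb (hypothetical excerpt)
    CHAT_MODELS = [
      { provider: :openai,       model: 'gpt-4o-mini' },
      { provider: :azure_openai, model: 'gpt-4o' }
      # ...
    ].freeze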

@oxaroky02 mentioned this pull request on Jul 18, 2025
@oxaroky02 (Author)

just add your models to the correct lists in spec_helper.rb and run rspec . It should create the correct VCR cassettes.

@crmne I've configured spec_helper.rb but it looks like I need an active postgres DB and some config to run the tests. I installed postgres locally, I ran bundle add pg to make that available, and now I'm running into some missing config in the DB. Sorry about this, I typically don't use Postgres locally. Is there a way for me to run the tests without needing Postgres?

@oxaroky02 (Author)

Hmm, looks like I need to figure out the Postgres config for the commit hooks to work. OK, no worries, I'll work on that.

@t2 commented Jul 22, 2025

@oxaroky02 what errors are you getting with PG? I may be able to help. This PR is necessary for us to utilize RubyLLM without patching.

@t2 commented Jul 22, 2025

@oxaroky02 also, could we add a config for always assuming the model exists? With Azure we have many model names and would love to just use the standard RubyLLM.chat syntax without the options each time.
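
Hypothetically, something like this (the assume_model_exists setting does not exist in RubyLLM today; this is just the shape of what I mean):

    RubyLLM.configure do |config|
      config.default_model = 'gpt-4o'
      config.assume_model_exists = true # hypothetical: skip the model-registry check everywhere
    end

    chat = RubyLLM.chat # no per-call provider/model/assume_model_exists options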

@oxaroky02 (Author)

@oxaroky02 what errors are you getting with PG? I may be able to help. This PR is necessary for us to utilize RubyLLM without patching.

I tried running postgres after a brew install and it looks like I get an error about a missing profile. I "assumed" the defaults would be sufficient but I think I need to be less hasty. 😀 I'm going to set up a container instead and see if I can get it to work.

@t2 commented Jul 22, 2025

@oxaroky02 do you have a Mac? If so, the simplest way to get PG running is installing https://postgresapp.com/. It'll give you what you need, and then you can remove it if you don't need it anymore.

@oxaroky02 (Author) commented Jul 22, 2025

@t2, I have postgres running in a container and I have it exposed on port 5431, which the tests seem to require. I'm using a dead-simple default docker compose file with the following defaults for postgres:

      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: testdb

I'm getting following exception:

ActiveRecord::ConnectionNotEstablished:
  connection to server at "127.0.0.1", port 5431 failed: fe_sendauth: no password supplied

I see the spec helper is loading a .env file but there's none in the repo. So I tried adding one, and then added an ActiveRecord::Base.establish_connection... call in the spec helper. At this point I'm just making stuff up because I don't understand what I'm missing.

Questions:

  1. Is there some local env / vars I can set to configure the password before running the tests?
  2. Is there a default database I need to be present, or will the tests create the DB / tables they need?

I appreciate your patience and support with this.

@t2 commented Jul 22, 2025

ActiveRecord::ConnectionNotEstablished:
connection to server at "127.0.0.1", port 5431 failed: fe_sendauth: no password supplied

I'm not extremely familiar with Docker so I don't think I could help there. If you just install Postgres.app on your main computer I expect it should just work.

@oxaroky02 (Author)

@t2, no worries, I'll keep looking. I'm limited with what I can install on my work laptop. The DB is up and running and I'm able to connect to it etc. So now I'm trying to figure out where exactly the password can be specified for the tests to run.

Were you able to try the new provider though? Did it work for you? I'll look into your other question soon; I'm still figuring out how the innards of ruby_llm work in terms of the config and how to support this as a default provider.

@oxaroky02 (Author)

Looks like I can set the defaults for the pg gem via a .env file, and that at least gets me past the missing password error. Progress! 😀
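
For anyone hitting the same thing: the pg gem goes through libpq, which honours the standard PG* environment variables, so a .env along these lines (matching the docker-compose defaults above) is one way to get past the password error:

    PGHOST=127.0.0.1
    PGPORT=5431
    PGUSER=postgres
    PGPASSWORD=password
    PGDATABASE=testdb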

@t2 commented Jul 22, 2025

    config.azure_openai_api_base = ENV.fetch('AZURE_OPENAI_URI')
    config.azure_openai_api_key = ENV.fetch('AZURE_OPENAI_API_KEY', nil)
    config.azure_openai_api_version = ENV.fetch('AZURE_OPENAI_API_VER', nil)
    config.default_model = ENV.fetch('AZURE_OPENAI_MODEL', 'gpt-4o')

I just checked out your branch and tested and IT WORKED 🤘🏼! Once you get past the tests this will be great for the folks on Azure OpenAI.

@oxaroky02 (Author) commented Jul 22, 2025

DELETING

(I figured out the problem. There was something going on with the default zsh environment when I tried to pre-load the Azure OpenAI credentials as environment variables. With a clean clone of the fork, all the tests are running fine. Aaargh. Sorry for all the noise here.)

@oxaroky02 force-pushed the add_azure_openai_provider branch from 07ffcf8 to 9b98721 on July 22, 2025 at 15:49
@oxaroky02 (Author) commented Jul 22, 2025

The tests are running properly now. I have added models-listing support and I've updated lib/tasks/models_update.rake to include a way to configure Azure OpenAI.

@crmne, when I run the rake task it ends up making a lot of changes to the models.json file, way more than the one model for the Azure OpenAI provider. This in turn leads to tests failing because there's stuff in models.json without matching VCR cassettes anymore.

One thing I can do (which I tried for fun) is take the new entry and stuff it into the original models.json file. This is in effect what I expected the task to do, but since I don't have API keys for the other providers, the list from Parsera gets merged in and produces a list with a lot of changes. Doing it this way allows me to create the test cassette, etc.

BUT this kinda goes against the constraint that I shouldn't modify the file manually.

How would you like me to proceed?

@t2 commented Jul 29, 2025

@crmne given that adding this provider only adds Azure-specific models, is there a way to skip models:update for any providers that aren't impacted? From what I am gathering, we must have valid ENV variables for all providers to refresh the models list, and without those overcommit will fail to allow commits.

@undersky0 commented Oct 8, 2025

@oxaroky02 are you planning to finish this?

@oxaroky02 (Author)

Hola @undersky0, sorry for the delay. Last time I worked on this I was stuck (earlier question quoted below) and then my life got super busy. 😄 I can try to finish this, or I can also hand this over to someone else who has more time in the short term. Let me at least see if I can get the fork rebased; I see a lot of commits.

The tests are running properly now. I have added models listing support now and I've updated lib/tasks/models_update.rake to include a way to configure Azure Open AI.

@crmne, when I run the rake task it ends up making a lot of changes to the models.json file, way more than the one model for the Azure OpenAI provider. This in turn leads to tests failing because there's stuff in the models.json without matching vcr cassettes anymore.

One thing I can do (which I tried for fun) is take the new entry and stuff it into the original models.json file. This is in effect what I expected the task to do but since I don't have API keys for other providers the list from Parsera is getting merged in to produce a list that has a lot of changes. Doing it this way allows me to create the test cassette, etc.

BUT this kinda goes against the constraint that I shouldn't modify the file manually.

How would you like me to proceed?

@oxaroky02 force-pushed the add_azure_openai_provider branch from d0a2490 to f651c86 on October 8, 2025 at 15:10
@oxaroky02 (Author) commented Oct 8, 2025

@crmne or anyone else who knows 😄

I have rebased and fixed my old code to match all the changes (wow!) since I last looked at this. None of the issues I had before exist now, and I was able to move some hooks and touch fewer files, etc. 👍

All the tests that pertain to Azure OpenAI are passing, with the exception of four (error-response-related) which I have marked as skipped (for now).

However, there are three failing tests:

Failed examples:

rspec ./spec/ruby_llm/models_local_refresh_spec.rb:90 # RubyLLM::Models local provider model fetching local provider model resolution assumes model exists for Ollama without warning after refresh
rspec ./spec/ruby_llm/models_spec.rb:90 # RubyLLM::Models#refresh! updates models and returns a chainable Models instance
rspec ./spec/ruby_llm/models_spec.rb:103 # RubyLLM::Models#refresh! works as a class method too
All three are failing for the same reason, so here's one of them:
     Failure/Error:
       @app.call(env).on_complete do |response|
         self.class.parse_error(provider: @provider, response: response)
       end

     VCR::Errors::UnhandledHTTPRequestError:


       ================================================================================
       An HTTP request has been made that VCR does not know how to handle:
         GET https://testifier.openai.azure.com/openai/models

       VCR is currently using the following cassette:
         - /Users/oxaroky02/DEV/FORKS/ruby_llm_fork/spec/fixtures/vcr_cassettes/models_local_provider_model_fetching_local_provider_model_resolution_assumes_model_exists_for_ollama_without_warning_after_refresh.yml
           - :record => :once
           - :match_requests_on => [:method, :uri]

       Under the current configuration VCR can not find a suitable HTTP interaction
       to replay and is prevented from recording new requests. There are a few ways
       you can deal with this:

         * If you're surprised VCR is raising this error
           and want insight about how VCR attempted to handle the request,
           you can use the debug_logger configuration option to log more details [1].
         * You can use the :new_episodes record mode to allow VCR to
           record this new request to the existing cassette [2].
         * If you want VCR to ignore this request (and others like it), you can
           set an `ignore_request` callback [3].
         * The current record mode (:once) does not allow new requests to be recorded
           to a previously recorded cassette. You can delete the cassette file and re-run
           your tests to allow the cassette to be recorded with this request [4].
         * The cassette contains 2 HTTP interactions that have not been
           played back. If your request is non-deterministic, you may need to
           change your :match_requests_on cassette option to be more lenient
           or use a custom request matcher to allow it to match [5].

       [1] https://benoittgt.github.io/vcr/?v=6-3-1#/configuration/debug_logging
       [2] https://benoittgt.github.io/vcr/?v=6-3-1#/record_modes/new_episodes
       [3] https://benoittgt.github.io/vcr/?v=6-3-1#/configuration/ignore_request
       [4] https://benoittgt.github.io/vcr/?v=6-3-1#/record_modes/once
       [5] https://benoittgt.github.io/vcr/?v=6-3-1#/request_matching
       ================================================================================
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/request_handler.rb:97:in 'VCR::RequestHandler#on_unhandled_request'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/library_hooks/webmock.rb:120:in 'VCR::LibraryHooks::WebMock::RequestHandler#on_unhandled_request'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/request_handler.rb:24:in 'VCR::RequestHandler#handle'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/library_hooks/webmock.rb:135:in 'block in <module:WebMock>'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:35:in 'block (2 levels) in WebMock::StubRegistry#register_global_stub'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:41:in 'Thread::Mutex#synchronize'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:41:in 'block in WebMock::StubRegistry#register_global_stub'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/request_pattern.rb:42:in 'WebMock::RequestPattern#matches?'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:75:in 'block in WebMock::StubRegistry#request_stub_for'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:74:in 'Array#each'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:74:in 'Enumerable#detect'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:74:in 'WebMock::StubRegistry#request_stub_for'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:66:in 'WebMock::StubRegistry#response_for_request'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/http_lib_adapters/net_http.rb:90:in 'Net::HTTP#request'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:113:in 'block in Faraday::Adapter::NetHttp#request_with_wrapped_block'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/http_lib_adapters/net_http.rb:130:in 'Net::HTTP#start_without_connect'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/http_lib_adapters/net_http.rb:157:in 'Net::HTTP#start'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:112:in 'Faraday::Adapter::NetHttp#request_with_wrapped_block'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:102:in 'Faraday::Adapter::NetHttp#perform_request'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:66:in 'block in Faraday::Adapter::NetHttp#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/adapter.rb:45:in 'Faraday::Adapter#connection'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:65:in 'Faraday::Adapter::NetHttp#call'
     # ./lib/ruby_llm/error.rb:39:in 'RubyLLM::ErrorMiddleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/middleware.rb:171:in 'block in Faraday::Retry::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/retryable.rb:7:in 'Faraday::Retryable#with_retries'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/middleware.rb:167:in 'Faraday::Retry::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/response/logger.rb:25:in 'Faraday::Response::Logger#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/rack_builder.rb:153:in 'Faraday::RackBuilder#build_response'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/connection.rb:452:in 'Faraday::Connection#run_request'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/connection.rb:200:in 'Faraday::Connection#get'
     # ./lib/ruby_llm/connection.rb:45:in 'RubyLLM::Connection#get'
     # ./lib/ruby_llm/provider.rb:63:in 'RubyLLM::Provider#list_models'
     # ./lib/ruby_llm/models.rb:43:in 'Array#each'
     # ./lib/ruby_llm/models.rb:43:in 'Enumerable#flat_map'
     # ./lib/ruby_llm/models.rb:43:in 'RubyLLM::Models.fetch_from_providers'
     # ./lib/ruby_llm/models.rb:26:in 'RubyLLM::Models.refresh!'
     # ./spec/ruby_llm/models_local_refresh_spec.rb:104:in 'block (4 levels) in <top (required)>'
     # ./spec/support/rspec_configuration.rb:17:in 'block (3 levels) in <top (required)>'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/util/variable_args_block_caller.rb:9:in 'VCR::VariableArgsBlockCaller#call_block'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr.rb:194:in 'VCR#use_cassette'
     # ./spec/support/rspec_configuration.rb:16:in 'block (2 levels) in <top (required)>'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/rspec.rb:39:in 'block (2 levels) in <top (required)>'

What do I need to do for these? Any help is appreciated. Once I resolve these I have a working PR again to commit.

@t2 commented Oct 13, 2025

@oxaroky02 Do you have Claude Code or Codex? I just had Claude Code fix the errors but I don't have access to the repo to push a branch. The broken tests are referencing Ollama; is that by design?

module Providers
  # Azure OpenAI API integration. Derived from OpenAI integration to support
  # OpenAI capabilities via Microsoft Azure endpoints.
  module AzureOpenAI
@alexperto commented Oct 13, 2025

What do you think about being consistent with the other existing providers, as deepseek does:

module RubyLLM
  module Providers
    # Azure OpenAI API integration. Derived from OpenAI integration to support
    # OpenAI capabilities via Microsoft Azure endpoints.
    class AzureOpenAI < OpenAI
      include AzureOpenAI::Chat
      include AzureOpenAI::Streaming
      include AzureOpenAI::Models

      def api_base
        # https://<ENDPOINT>/openai/deployments/<MODEL>/chat/completions?api-version=<APIVERSION>
        "#{@config.azure_openai_api_base}/openai"
      end

Great work by the way!


I made this implementation locally, and all the tests are green. Let me know what's the best way to contribute to your PR and I'll share these changes.

@oxaroky02 (Author) commented Oct 14, 2025

Hola @alexperto, thanks. I have made the changes to reflect the use of a class vs a module. The problem I'm facing right now is that I can't commit them as the pre-commit hook fails. I suppose I could commit without the pre-commit check, knowing the PR will fail its tests ... but I don't want to be rude. 😄

Essentially, the code everyone is seeing is out of date.

@oxaroky02 (Author) commented Oct 14, 2025

Do you have Claude Code or Codex? I just had Claude Code fix the errors but I don't have access to the repo to push a branch. The broken tests are referencing Ollama, is that by design?

@t2, first I should point out that the errors I'm seeing come after a number of changes I've made which I am unable to commit. If there's a way to commit knowing the PR will fail its checks, let me know, as that might make it easier for you and others to reproduce the issue.

Essentially, the code everyone is seeing is out of date.

One of the tests refers to Ollama and local models; however, all three have one thing in common: RubyLLM::Models#refresh!

Also, all three are failing because of the following error:

    Failure/Error:
       @app.call(env).on_complete do |response|
         self.class.parse_error(provider: @provider, response: response)
       end

     VCR::Errors::UnhandledHTTPRequestError:

I have to admit I don't know how VCR works in this case; I assume some API call is being made but I can't figure out where or what.
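
From the options VCR lists in that output, suggestion [3] (an ignore_request callback) would look roughly like the sketch below, though I'm honestly not sure whether ignoring the request or re-recording the affected cassettes is the right fix here:

    # Sketch only: tell VCR to ignore the new Azure models-listing call so the
    # existing cassettes keep replaying. The host/path checks are illustrative.
    require 'uri'

    VCR.configure do |config|
      config.ignore_request do |request|
        uri = URI(request.uri)
        uri.host.to_s.end_with?('openai.azure.com') && uri.path.end_with?('/openai/models')
      end
    end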

@oxaroky02 (Author)

First, I want to apologize: I disabled the pre-commit hook and pushed my current changes to my branch for this PR. 😞 Sorry. I hope this will help reduce the time you're all kindly spending looking at my old code. Sorry again.

As of now, all tests except 3 are passing. Here's the summary; the detailed failure output follows below. The failing tests are all related to RubyLLM::Models#refresh! and all fail with a VCR::Errors::UnhandledHTTPRequestError exception.

Failed examples:

rspec ./spec/ruby_llm/models_local_refresh_spec.rb:90 # RubyLLM::Models local provider model fetching local provider model resolution assumes model exists for Ollama without warning after refresh
rspec ./spec/ruby_llm/models_spec.rb:90 # RubyLLM::Models#refresh! updates models and returns a chainable Models instance
rspec ./spec/ruby_llm/models_spec.rb:103 # RubyLLM::Models#refresh! works as a class method too
The detailed test failures:

Failures:

  1) RubyLLM::Models local provider model fetching local provider model resolution assumes model exists for Ollama without warning after refresh
     Failure/Error:
       @app.call(env).on_complete do |response|
         self.class.parse_error(provider: @provider, response: response)
       end

     VCR::Errors::UnhandledHTTPRequestError:


       ================================================================================
       An HTTP request has been made that VCR does not know how to handle:
         GET https://testifier.openai.azure.com/openai/models

       VCR is currently using the following cassette:
         - /Users/oxaroky02/DEV/FORKS/ruby_llm_fork/spec/fixtures/vcr_cassettes/models_local_provider_model_fetching_local_provider_model_resolution_assumes_model_exists_for_ollama_without_warning_after_refresh.yml
           - :record => :once
           - :match_requests_on => [:method, :uri]

       Under the current configuration VCR can not find a suitable HTTP interaction
       to replay and is prevented from recording new requests. There are a few ways
       you can deal with this:

         * If you're surprised VCR is raising this error
           and want insight about how VCR attempted to handle the request,
           you can use the debug_logger configuration option to log more details [1].
         * You can use the :new_episodes record mode to allow VCR to
           record this new request to the existing cassette [2].
         * If you want VCR to ignore this request (and others like it), you can
           set an `ignore_request` callback [3].
         * The current record mode (:once) does not allow new requests to be recorded
           to a previously recorded cassette. You can delete the cassette file and re-run
           your tests to allow the cassette to be recorded with this request [4].
         * The cassette contains 2 HTTP interactions that have not been
           played back. If your request is non-deterministic, you may need to
           change your :match_requests_on cassette option to be more lenient
           or use a custom request matcher to allow it to match [5].

       [1] https://benoittgt.github.io/vcr/?v=6-3-1#/configuration/debug_logging
       [2] https://benoittgt.github.io/vcr/?v=6-3-1#/record_modes/new_episodes
       [3] https://benoittgt.github.io/vcr/?v=6-3-1#/configuration/ignore_request
       [4] https://benoittgt.github.io/vcr/?v=6-3-1#/record_modes/once
       [5] https://benoittgt.github.io/vcr/?v=6-3-1#/request_matching
       ================================================================================
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/request_handler.rb:97:in 'VCR::RequestHandler#on_unhandled_request'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/library_hooks/webmock.rb:120:in 'VCR::LibraryHooks::WebMock::RequestHandler#on_unhandled_request'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/request_handler.rb:24:in 'VCR::RequestHandler#handle'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/library_hooks/webmock.rb:135:in 'block in <module:WebMock>'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:35:in 'block (2 levels) in WebMock::StubRegistry#register_global_stub'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:41:in 'Thread::Mutex#synchronize'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:41:in 'block in WebMock::StubRegistry#register_global_stub'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/request_pattern.rb:42:in 'WebMock::RequestPattern#matches?'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:75:in 'block in WebMock::StubRegistry#request_stub_for'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:74:in 'Array#each'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:74:in 'Enumerable#detect'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:74:in 'WebMock::StubRegistry#request_stub_for'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:66:in 'WebMock::StubRegistry#response_for_request'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/http_lib_adapters/net_http.rb:90:in 'Net::HTTP#request'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:113:in 'block in Faraday::Adapter::NetHttp#request_with_wrapped_block'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/http_lib_adapters/net_http.rb:130:in 'Net::HTTP#start_without_connect'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/http_lib_adapters/net_http.rb:157:in 'Net::HTTP#start'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:112:in 'Faraday::Adapter::NetHttp#request_with_wrapped_block'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:102:in 'Faraday::Adapter::NetHttp#perform_request'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:66:in 'block in Faraday::Adapter::NetHttp#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/adapter.rb:45:in 'Faraday::Adapter#connection'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:65:in 'Faraday::Adapter::NetHttp#call'
     # ./lib/ruby_llm/error.rb:39:in 'RubyLLM::ErrorMiddleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/middleware.rb:171:in 'block in Faraday::Retry::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/retryable.rb:7:in 'Faraday::Retryable#with_retries'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/middleware.rb:167:in 'Faraday::Retry::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/response/logger.rb:25:in 'Faraday::Response::Logger#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/rack_builder.rb:153:in 'Faraday::RackBuilder#build_response'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/connection.rb:452:in 'Faraday::Connection#run_request'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/connection.rb:200:in 'Faraday::Connection#get'
     # ./lib/ruby_llm/connection.rb:45:in 'RubyLLM::Connection#get'
     # ./lib/ruby_llm/provider.rb:63:in 'RubyLLM::Provider#list_models'
     # ./lib/ruby_llm/models.rb:43:in 'Array#each'
     # ./lib/ruby_llm/models.rb:43:in 'Enumerable#flat_map'
     # ./lib/ruby_llm/models.rb:43:in 'RubyLLM::Models.fetch_from_providers'
     # ./lib/ruby_llm/models.rb:26:in 'RubyLLM::Models.refresh!'
     # ./spec/ruby_llm/models_local_refresh_spec.rb:104:in 'block (4 levels) in <top (required)>'
     # ./spec/support/rspec_configuration.rb:17:in 'block (3 levels) in <top (required)>'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/util/variable_args_block_caller.rb:9:in 'VCR::VariableArgsBlockCaller#call_block'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr.rb:194:in 'VCR#use_cassette'
     # ./spec/support/rspec_configuration.rb:16:in 'block (2 levels) in <top (required)>'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/rspec.rb:39:in 'block (2 levels) in <top (required)>'

  2) RubyLLM::Models#refresh! updates models and returns a chainable Models instance
     Failure/Error:
       @app.call(env).on_complete do |response|
         self.class.parse_error(provider: @provider, response: response)
       end

     VCR::Errors::UnhandledHTTPRequestError:


       ================================================================================
       An HTTP request has been made that VCR does not know how to handle:
         GET https://testifier.openai.azure.com/openai/models

       VCR is currently using the following cassette:
         - /Users/oxaroky02/DEV/FORKS/ruby_llm_fork/spec/fixtures/vcr_cassettes/models_refresh_updates_models_and_returns_a_chainable_models_instance.yml
           - :record => :once
           - :match_requests_on => [:method, :uri]

       Under the current configuration VCR can not find a suitable HTTP interaction
       to replay and is prevented from recording new requests. There are a few ways
       you can deal with this:

         * If you're surprised VCR is raising this error
           and want insight about how VCR attempted to handle the request,
           you can use the debug_logger configuration option to log more details [1].
         * You can use the :new_episodes record mode to allow VCR to
           record this new request to the existing cassette [2].
         * If you want VCR to ignore this request (and others like it), you can
           set an `ignore_request` callback [3].
         * The current record mode (:once) does not allow new requests to be recorded
           to a previously recorded cassette. You can delete the cassette file and re-run
           your tests to allow the cassette to be recorded with this request [4].
         * The cassette contains 2 HTTP interactions that have not been
           played back. If your request is non-deterministic, you may need to
           change your :match_requests_on cassette option to be more lenient
           or use a custom request matcher to allow it to match [5].

       [1] https://benoittgt.github.io/vcr/?v=6-3-1#/configuration/debug_logging
       [2] https://benoittgt.github.io/vcr/?v=6-3-1#/record_modes/new_episodes
       [3] https://benoittgt.github.io/vcr/?v=6-3-1#/configuration/ignore_request
       [4] https://benoittgt.github.io/vcr/?v=6-3-1#/record_modes/once
       [5] https://benoittgt.github.io/vcr/?v=6-3-1#/request_matching
       ================================================================================
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/request_handler.rb:97:in 'VCR::RequestHandler#on_unhandled_request'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/library_hooks/webmock.rb:120:in 'VCR::LibraryHooks::WebMock::RequestHandler#on_unhandled_request'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/request_handler.rb:24:in 'VCR::RequestHandler#handle'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/library_hooks/webmock.rb:135:in 'block in <module:WebMock>'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:35:in 'block (2 levels) in WebMock::StubRegistry#register_global_stub'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:41:in 'Thread::Mutex#synchronize'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:41:in 'block in WebMock::StubRegistry#register_global_stub'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/request_pattern.rb:42:in 'WebMock::RequestPattern#matches?'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:75:in 'block in WebMock::StubRegistry#request_stub_for'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:74:in 'Array#each'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:74:in 'Enumerable#detect'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:74:in 'WebMock::StubRegistry#request_stub_for'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:66:in 'WebMock::StubRegistry#response_for_request'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/http_lib_adapters/net_http.rb:90:in 'Net::HTTP#request'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:113:in 'block in Faraday::Adapter::NetHttp#request_with_wrapped_block'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/http_lib_adapters/net_http.rb:130:in 'Net::HTTP#start_without_connect'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/http_lib_adapters/net_http.rb:157:in 'Net::HTTP#start'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:112:in 'Faraday::Adapter::NetHttp#request_with_wrapped_block'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:102:in 'Faraday::Adapter::NetHttp#perform_request'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:66:in 'block in Faraday::Adapter::NetHttp#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/adapter.rb:45:in 'Faraday::Adapter#connection'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:65:in 'Faraday::Adapter::NetHttp#call'
     # ./lib/ruby_llm/error.rb:39:in 'RubyLLM::ErrorMiddleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/middleware.rb:171:in 'block in Faraday::Retry::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/retryable.rb:7:in 'Faraday::Retryable#with_retries'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/middleware.rb:167:in 'Faraday::Retry::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/response/logger.rb:25:in 'Faraday::Response::Logger#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/rack_builder.rb:153:in 'Faraday::RackBuilder#build_response'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/connection.rb:452:in 'Faraday::Connection#run_request'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/connection.rb:200:in 'Faraday::Connection#get'
     # ./lib/ruby_llm/connection.rb:45:in 'RubyLLM::Connection#get'
     # ./lib/ruby_llm/provider.rb:63:in 'RubyLLM::Provider#list_models'
     # ./lib/ruby_llm/models.rb:43:in 'Array#each'
     # ./lib/ruby_llm/models.rb:43:in 'Enumerable#flat_map'
     # ./lib/ruby_llm/models.rb:43:in 'RubyLLM::Models.fetch_from_providers'
     # ./lib/ruby_llm/models.rb:26:in 'RubyLLM::Models.refresh!'
     # ./lib/ruby_llm/models.rb:217:in 'RubyLLM::Models#refresh!'
     # ./spec/ruby_llm/models_spec.rb:92:in 'block (3 levels) in <top (required)>'
     # ./spec/support/rspec_configuration.rb:17:in 'block (3 levels) in <top (required)>'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/util/variable_args_block_caller.rb:9:in 'VCR::VariableArgsBlockCaller#call_block'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr.rb:194:in 'VCR#use_cassette'
     # ./spec/support/rspec_configuration.rb:16:in 'block (2 levels) in <top (required)>'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/rspec.rb:39:in 'block (2 levels) in <top (required)>'

  3) RubyLLM::Models#refresh! works as a class method too
     Failure/Error:
       @app.call(env).on_complete do |response|
         self.class.parse_error(provider: @provider, response: response)
       end

     VCR::Errors::UnhandledHTTPRequestError:


       ================================================================================
       An HTTP request has been made that VCR does not know how to handle:
         GET https://testifier.openai.azure.com/openai/models

       VCR is currently using the following cassette:
         - /Users/oxaroky02/DEV/FORKS/ruby_llm_fork/spec/fixtures/vcr_cassettes/models_refresh_works_as_a_class_method_too.yml
           - :record => :once
           - :match_requests_on => [:method, :uri]

       Under the current configuration VCR can not find a suitable HTTP interaction
       to replay and is prevented from recording new requests. There are a few ways
       you can deal with this:

         * If you're surprised VCR is raising this error
           and want insight about how VCR attempted to handle the request,
           you can use the debug_logger configuration option to log more details [1].
         * You can use the :new_episodes record mode to allow VCR to
           record this new request to the existing cassette [2].
         * If you want VCR to ignore this request (and others like it), you can
           set an `ignore_request` callback [3].
         * The current record mode (:once) does not allow new requests to be recorded
           to a previously recorded cassette. You can delete the cassette file and re-run
           your tests to allow the cassette to be recorded with this request [4].
         * The cassette contains 2 HTTP interactions that have not been
           played back. If your request is non-deterministic, you may need to
           change your :match_requests_on cassette option to be more lenient
           or use a custom request matcher to allow it to match [5].

       [1] https://benoittgt.github.io/vcr/?v=6-3-1#/configuration/debug_logging
       [2] https://benoittgt.github.io/vcr/?v=6-3-1#/record_modes/new_episodes
       [3] https://benoittgt.github.io/vcr/?v=6-3-1#/configuration/ignore_request
       [4] https://benoittgt.github.io/vcr/?v=6-3-1#/record_modes/once
       [5] https://benoittgt.github.io/vcr/?v=6-3-1#/request_matching
       ================================================================================
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/request_handler.rb:97:in 'VCR::RequestHandler#on_unhandled_request'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/library_hooks/webmock.rb:120:in 'VCR::LibraryHooks::WebMock::RequestHandler#on_unhandled_request'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/request_handler.rb:24:in 'VCR::RequestHandler#handle'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/library_hooks/webmock.rb:135:in 'block in <module:WebMock>'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:35:in 'block (2 levels) in WebMock::StubRegistry#register_global_stub'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:41:in 'Thread::Mutex#synchronize'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:41:in 'block in WebMock::StubRegistry#register_global_stub'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/request_pattern.rb:42:in 'WebMock::RequestPattern#matches?'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:75:in 'block in WebMock::StubRegistry#request_stub_for'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:74:in 'Array#each'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:74:in 'Enumerable#detect'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:74:in 'WebMock::StubRegistry#request_stub_for'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/stub_registry.rb:66:in 'WebMock::StubRegistry#response_for_request'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/http_lib_adapters/net_http.rb:90:in 'Net::HTTP#request'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:113:in 'block in Faraday::Adapter::NetHttp#request_with_wrapped_block'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/http_lib_adapters/net_http.rb:130:in 'Net::HTTP#start_without_connect'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/http_lib_adapters/net_http.rb:157:in 'Net::HTTP#start'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:112:in 'Faraday::Adapter::NetHttp#request_with_wrapped_block'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:102:in 'Faraday::Adapter::NetHttp#perform_request'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:66:in 'block in Faraday::Adapter::NetHttp#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/adapter.rb:45:in 'Faraday::Adapter#connection'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-net_http-3.4.1/lib/faraday/adapter/net_http.rb:65:in 'Faraday::Adapter::NetHttp#call'
     # ./lib/ruby_llm/error.rb:39:in 'RubyLLM::ErrorMiddleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/middleware.rb:171:in 'block in Faraday::Retry::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/retryable.rb:7:in 'Faraday::Retryable#with_retries'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-retry-2.3.2/lib/faraday/retry/middleware.rb:167:in 'Faraday::Retry::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/middleware.rb:56:in 'Faraday::Middleware#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/response/logger.rb:25:in 'Faraday::Response::Logger#call'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/rack_builder.rb:153:in 'Faraday::RackBuilder#build_response'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/connection.rb:452:in 'Faraday::Connection#run_request'
     # ./vendor/bundle/ruby/3.4.0/gems/faraday-2.13.3/lib/faraday/connection.rb:200:in 'Faraday::Connection#get'
     # ./lib/ruby_llm/connection.rb:45:in 'RubyLLM::Connection#get'
     # ./lib/ruby_llm/provider.rb:63:in 'RubyLLM::Provider#list_models'
     # ./lib/ruby_llm/models.rb:43:in 'Array#each'
     # ./lib/ruby_llm/models.rb:43:in 'Enumerable#flat_map'
     # ./lib/ruby_llm/models.rb:43:in 'RubyLLM::Models.fetch_from_providers'
     # ./lib/ruby_llm/models.rb:26:in 'RubyLLM::Models.refresh!'
     # ./spec/ruby_llm/models_spec.rb:104:in 'block (3 levels) in <top (required)>'
     # ./spec/support/rspec_configuration.rb:17:in 'block (3 levels) in <top (required)>'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr/util/variable_args_block_caller.rb:9:in 'VCR::VariableArgsBlockCaller#call_block'
     # ./vendor/bundle/ruby/3.4.0/gems/vcr-6.3.1/lib/vcr.rb:194:in 'VCR#use_cassette'
     # ./spec/support/rspec_configuration.rb:16:in 'block (2 levels) in <top (required)>'
     # ./vendor/bundle/ruby/3.4.0/gems/webmock-3.25.1/lib/webmock/rspec.rb:39:in 'block (2 levels) in <top (required)>'

Finished in 1 minute 1.31 seconds (files took 1.46 seconds to load)
569 examples, 3 failures, 34 pending

Failed examples:

rspec ./spec/ruby_llm/models_local_refresh_spec.rb:90 # RubyLLM::Models local provider model fetching local provider model resolution assumes model exists for Ollama without warning after refresh
rspec ./spec/ruby_llm/models_spec.rb:90 # RubyLLM::Models#refresh! updates models and returns a chainable Models instance
rspec ./spec/ruby_llm/models_spec.rb:103 # RubyLLM::Models#refresh! works as a class method too

@eichert12

Also in need of Azure support. Let me know if you want another tester for the new provider.

@oxaroky02 (Author)

also in need of azure support. let me know if you want another tester of the new provider.

@eichert12 Thanks, and yes. Anyone who can test this would be helpful.

My main issue though is that I don't know what to do about those test failures above. 😢

I am happy to invite someone who can help to my forked repo, or for someone with access to modify the PR branch. I am stuck. Sorry this has been blocked for so long.

@t2 commented Oct 20, 2025

@oxaroky02 thanks for all of your hard work on this. I am not able to help with the VCR tests, but I was able to get Azure going with the standard install by overriding a few items in my initializer. So, until this gets all fixed up, if you want to use something similar, here it is:

# config/initializers/ruby_llm.rb
# frozen_string_literal: true

module RubyLLM
  class << self
    def chat_context
      @chat_context ||= context do |config|
        config.openai_api_key  = api_key
        config.openai_api_base = chat_url
        config.request_timeout = 180
      end
    end

    def embedding_context
      @embedding_context ||= context do |config|
        config.openai_api_key  = api_key
        config.openai_api_base = embedding_url
        config.request_timeout = 180
      end
    end

    def api_key
      dummy_env? ? '' : ENV.fetch('AZURE_OPENAI_API_KEY')
    end

    def chat_url
      return '' if dummy_env?

      "#{ENV.fetch('AZURE_OPENAI_BASE_URL')}/openai/deployments/" \
        "#{ENV.fetch('OPENAI_MODEL', 'gpt-4o')}" \
        "?api-version=#{ENV.fetch('AZURE_OPENAI_API_VERSION', '2024-12-01-preview')}"
    end

    def embedding_url
      return '' if dummy_env?

      "#{ENV.fetch('AZURE_OPENAI_BASE_URL')}/openai/deployments/" \
        "#{ENV.fetch('OPENAI_EMBEDDING_MODEL', 'text-embedding-3-small')}" \
        "?api-version=#{ENV.fetch('AZURE_OPENAI_API_VERSION', '2024-12-01-preview')}"
    end

    def dummy_env?
      ENV['SECRET_KEY_BASE_DUMMY'].present?
    end

    def reset!
      @chat_context      = nil
      @embedding_context = nil
    end
  end
end

module RubyLLM
  module ContextDefaults
    def chat(*, **opts)
      opts[:provider]          ||= :openai
      opts[:context]           ||= RubyLLM.chat_context
      opts[:assume_model_exists] = true unless opts.key?(:assume_model_exists)

      super(*, **opts)
    end
  end
end

RubyLLM.singleton_class.prepend(RubyLLM::ContextDefaults)

module RubyLLM
  module EmbeddingDefaults
    def embed(*, **opts)
      opts[:provider]          ||= :openai
      opts[:context]           ||= RubyLLM.embedding_context
      opts[:assume_model_exists] = true unless opts.key?(:assume_model_exists)

      super(*, **opts)
    end
  end
end

RubyLLM::Embedding.singleton_class.prepend(RubyLLM::EmbeddingDefaults)

RubyLLM.configure do |config|
  config.openai_api_key      = RubyLLM.api_key
  config.openai_api_base     = RubyLLM.chat_url
  config.request_timeout     = 180
end

And with the endpoint set in your environment:

AZURE_OPENAI_BASE_URL="https://YOUR_ENDPOINT.openai.azure.com"

And now you can use it just like the docs say:

chat = RubyLLM.chat
response = chat.ask "What is Ruby?"

embedding = RubyLLM.embed "What is Ruby?"
puts embedding.vectors
