RUBY GEM · MIT · FREE

Test your AI
without the drama.

MockOpenAI is a mocking gem for OpenAI-compatible and Anthropic APIs. Deterministic responses, on-demand failure injection, zero app changes. Works with Rails, Sinatra, CLI tools, and plain Ruby scripts.

spec/services/chat_spec.rb
it "returns a canned response", :mock_openai do
  MockOpenAI.set_responses([
    { match: "Hello", response: "Hi there!" }
  ])

  result = ChatService.call("Hello, can you help me?")

  expect(result).to eq("Hi there!")
end

THE PROBLEM

Testing AI is broken.
This is much better.

WITHOUT MOCKOPENAI

  • Real API calls burn tokens and slow CI to a crawl
  • Non-deterministic responses make assertions unreliable
  • Rate limits and network failures randomly break pipelines
  • Error-handling code never gets exercised. Until production.
  • VCR cassettes go stale. WebMock is just guessing.

WITH MOCKOPENAI

  • Zero API calls, zero cost. Runs entirely locally.
  • Deterministic: same input produces the same output, always
  • No rate limits. No flakiness. CI completes in seconds.
  • Inject failures on demand. Test every edge case.
  • No app changes. No wrapping, patching, or prompt pollution.

Not sure if MockOpenAI is right for your project? See When not to use MockOpenAI.

EXAMPLES

Your test framework.
Your rules.

Canned responses for matching prompts
it "returns different responses for different prompts", :mock_openai do
  MockOpenAI.set_responses([
    { match: "Hello", response: "Hi there!" },
    { match: "weather", response: "Sunny, as always." }
  ])

  expect(ChatService.call("Hello, can you help me?")).to eq("Hi there!")
  expect(ChatService.call("What's the weather?")).to eq("Sunny, as always.")
end

FEATURES

The features you need.

Deterministic responses

Same input, same output. Your tests make real assertions, not hopeful ones.

🎯

Per-request matching

Regex, substring, or catch-all. Return different responses to different prompts in one test.
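As a plain-Ruby sketch of what matching could look like, the resolver below is illustrative only, not the gem's internals; the first-match-wins ordering and the rule shapes are assumptions, though the hash keys mirror what `MockOpenAI.set_responses` accepts:

```ruby
# Illustrative first-match-wins resolver (ordering is an assumption, not
# documented gem behavior). Rules mirror the shape passed to set_responses.
def resolve(rules, prompt)
  hit = rules.find do |rule|
    pattern = rule[:match]
    pattern.is_a?(Regexp) ? prompt.match?(pattern) : prompt.include?(pattern)
  end
  hit && hit[:response]
end

RULES = [
  { match: "Hello",          response: "Hi there!" },      # substring
  { match: /\d+\s*\+\s*\d+/, response: "That's math." },   # regex
  { match: //,               response: "Default reply." }  # catch-all: empty regex matches anything
].freeze

resolve(RULES, "Hello, can you help me?") # => "Hi there!"
resolve(RULES, "What is 2 + 2?")          # => "That's math."
resolve(RULES, "Anything else")           # => "Default reply."
```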

💥

Failure simulation

Timeouts, rate limits, bad JSON, 500s, truncated streams. All testable.

🔧

Zero app changes

Redirect your client to localhost. No monkey-patching, no wrapping, no test doubles.

🏎

CI-ready from day one

Runs entirely locally. No API keys. No network. Moves at unit-test speed.

💎

RSpec & Minitest

One tag or one module include. State resets automatically between tests; no manual cleanup.
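Side by side, the two integrations might look like the sketch below. The `:mock_openai` RSpec tag appears throughout the examples on this page; the Minitest module name is an assumption for illustration, so check the gem's docs for the real one.

```ruby
# RSpec: opt in per example with the :mock_openai tag
it "answers deterministically", :mock_openai do
  MockOpenAI.set_responses([{ match: "ping", response: "pong" }])
  expect(ChatService.call("ping")).to eq("pong")
end

# Minitest: include the helper module (MockOpenAI::Minitest is a
# hypothetical name used here for illustration)
class ChatServiceTest < Minitest::Test
  include MockOpenAI::Minitest

  def test_answers_deterministically
    MockOpenAI.set_responses([{ match: "ping", response: "pong" }])
    assert_equal "pong", ChatService.call("ping")
  end
end
```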

FAILURE MODES

Break it
on purpose.

The worst time to discover your error handling is broken is in production. MockOpenAI lets you inject any failure mode on demand, so you can prove your app handles it before it matters.

:timeout :rate_limit :internal_error :malformed_json :truncated_stream
failure_simulation_spec.rb
describe "when the LLM is unavailable" do
  it "falls back to cached response", :mock_openai do
    MockOpenAI.set_responses([
      { match: ".*", failure_mode: :timeout }
    ])

    result = SmartService.call("summarize this")

    # Your error handling actually gets tested!
    expect(result[:source]).to eq(:cache)
    expect(result[:error]).to be_nil
    expect(MyMailer).to have_received(:alert).once
  end
end
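What each failure symbol simulates can be summarized in one table. The HTTP details below are a sketch based on how OpenAI-style APIs conventionally signal these errors (429 for rate limits, 500 for server errors), not a statement of the gem's exact wire output:

```ruby
# Sketch of what each failure mode plausibly looks like on the wire for an
# OpenAI-style API. Descriptions are illustrative, not gem internals.
FAILURE_BEHAVIOURS = {
  timeout:          "request hangs until the client's read timeout fires",
  rate_limit:       "HTTP 429 with a rate_limit_error body",
  internal_error:   "HTTP 500 with a server_error body",
  malformed_json:   "HTTP 200 whose body cuts off mid-object and fails to parse",
  truncated_stream: "SSE stream that closes before the final [DONE] event"
}.freeze

FAILURE_BEHAVIOURS.size # => 5
```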

INSTALLATION

Four steps.
You're off and running.

1

Add to your Gemfile

group :test do
  gem "mock_openai"
end
2

Install

bundle install
3

Require the integration and start the server

# RSpec (spec/rails_helper.rb or spec/spec_helper.rb)
require "mock_openai/rspec"
MockOpenAI.start_test_server!
RubyLLM.configure { |c| c.openai_api_base = MockOpenAI.server_url }

# Minitest (test/test_helper.rb)
require "mock_openai/minitest"
MockOpenAI.start_test_server!
RubyLLM.configure { |c| c.openai_api_base = MockOpenAI.server_url }
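If you use a different OpenAI-compatible client than ruby_llm, the same redirect is just that client's base-URL option. As a hedged sketch only: the `base_url:` keyword below is an assumption about your client's constructor, so check your client version's documentation.

```ruby
# Sketch: point an OpenAI-compatible client at the mock server instead of
# api.openai.com. `base_url:` is assumed; some community clients use other
# option names (e.g. uri_base).
client = OpenAI::Client.new(
  base_url: MockOpenAI.server_url, # assumption: confirm against your client's docs
  api_key:  "test-key"             # any value works; the mock server is local
)
```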
4

Write your first test

it "works", :mock_openai do
  MockOpenAI.set_responses([
    { match: "Hello", response: "Hi!" }
  ])
  expect(MyService.call("Hello")).to eq("Hi!")
end

WORKS WITH

Rails Sinatra RSpec Minitest ruby_llm openai-ruby Capybara any HTTP client

Free.
Open source.
Always.

MIT-licensed. No tiers, no accounts, no telemetry.

View on GitHub →

Ship AI features
with confidence.

Stop hoping your AI tests are correct. Start knowing.