MockOpenAI is a mocking gem for OpenAI-compatible and Anthropic APIs. Deterministic responses, on-demand failure injection, zero app changes. Works with Rails, Sinatra, CLI tools, and plain Ruby scripts.
it "returns a canned response", :mock_openai do
MockOpenAI.set_responses([
{ match: "Hello", response: "Hi there!" }
])
result = ChatService.call("Hello, can you help me?")
expect(result).to eq("Hi there!")
end
THE PROBLEM
Real LLM calls in tests are nondeterministic, slow, and depend on network access and API keys. Without MockOpenAI, your assertions are hopeful, not exact. With MockOpenAI, your client talks to a local, deterministic server and your tests assert real outputs.
Not sure whether MockOpenAI is right for your project? See "When not to use MockOpenAI" in the docs.
EXAMPLES
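Matchers can be combined in a single test. A hedged sketch of that, assuming set_responses checks entries in order and accepts Regexp matchers alongside substrings; WeatherService-style prompts and the exact matching semantics are assumptions, so check the gem's docs:

```ruby
it "routes different prompts to different responses", :mock_openai do
  MockOpenAI.set_responses([
    { match: /weather/i, response: "Sunny, 22 degrees." },     # regex matcher (assumed supported)
    { match: "Hello",    response: "Hi there!" },              # substring matcher
    { match: ".*",       response: "I can't help with that." } # catch-all fallback
  ])

  expect(ChatService.call("What's the weather?")).to eq("Sunny, 22 degrees.")
  expect(ChatService.call("Hello!")).to eq("Hi there!")
  expect(ChatService.call("Something unrelated")).to eq("I can't help with that.")
end
```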
FEATURES
- Deterministic responses: Same input, same output. Your tests make real assertions, not hopeful ones.
- Flexible matching: Regex, substring, or catch-all. Return different responses to different prompts in one test.
- Failure injection: Timeouts, rate limits, bad JSON, 500s, truncated streams. All testable.
- Zero app changes: Redirect your client to localhost. No monkey-patching, no wrapping, no test doubles.
- Fully local: Runs entirely locally. No API keys. No network. Moves at unit-test speed.
- Minimal setup: One tag or one module include. State resets automatically between tests; no manual cleanup.
FAILURE MODES
The worst time to discover your error handling is broken is in production. MockOpenAI lets you inject any failure mode on demand, so you can prove your app handles it before it matters.
describe "when the LLM is unavailable" do
it "falls back to cached response", :mock_openai do
MockOpenAI.set_responses([
{ match: ".*", failure_mode: :timeout }
])
result = SmartService.call("summarize this")
# Your error handling actually gets tested!
expect(result[:source]).to eq(:cache)
expect(result[:error]).to be_nil
# assumes MyMailer was set up as a spy, e.g. allow(MyMailer).to receive(:alert)
expect(MyMailer).to have_received(:alert).once
end
end
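Timeouts are only one of the listed failure modes. A sketch of rate-limit handling in the same style; the :rate_limit symbol is an assumption by analogy with :timeout, and the SmartService return contract is hypothetical, so verify both against the gem's and your app's docs:

```ruby
describe "when the LLM rate-limits us" do
  it "surfaces a friendly error instead of crashing", :mock_openai do
    MockOpenAI.set_responses([
      # :rate_limit is assumed here; the docs show :timeout, and the other
      # advertised modes (rate limits, bad JSON, 500s) presumably follow suit.
      { match: ".*", failure_mode: :rate_limit }
    ])

    result = SmartService.call("summarize this")

    expect(result[:error]).to eq(:rate_limited) # hypothetical SmartService contract
    expect(result[:source]).to be_nil
  end
end
```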
INSTALLATION
Add to your Gemfile
group :test do
gem "mock_openai"
end
Install
bundle install
Require the integration and start the server
# RSpec (spec/rails_helper.rb or spec/spec_helper.rb)
require "mock_openai/rspec"
MockOpenAI.start_test_server!
RubyLLM.configure { |c| c.openai_api_base = MockOpenAI.server_url }
# Minitest (test/test_helper.rb)
require "mock_openai/minitest"
MockOpenAI.start_test_server!
RubyLLM.configure { |c| c.openai_api_base = MockOpenAI.server_url }
Write your first test
it "works", :mock_openai do
MockOpenAI.set_responses([
{ match: "Hello", response: "Hi!" }
])
expect(MyService.call("Hello")).to eq("Hi!")
end
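Outside a test framework (CLI tools and plain Ruby scripts, per the intro), the same pieces can be wired up by hand. A sketch, assuming the mock server speaks the standard OpenAI chat-completions route; the Net::HTTP call and the /v1/chat/completions path are assumptions, not documented gem API:

```ruby
require "json"
require "net/http"
require "mock_openai"

MockOpenAI.start_test_server!
MockOpenAI.set_responses([
  { match: "Hello", response: "Hi!" }
])

# Talk to the mock directly over HTTP, as any OpenAI-compatible client would.
# /v1/chat/completions is the standard OpenAI route (assumed served here).
uri = URI("#{MockOpenAI.server_url}/v1/chat/completions")
res = Net::HTTP.post(
  uri,
  { model: "gpt-4o-mini", messages: [{ role: "user", content: "Hello" }] }.to_json,
  "Content-Type" => "application/json"
)
body = JSON.parse(res.body)
puts body.dig("choices", 0, "message", "content")
```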
WORKS WITH
Any client that lets you point at a custom API base URL (RubyLLM is shown above), whether OpenAI-compatible or Anthropic, in Rails, Sinatra, CLI tools, and plain Ruby scripts.
Stop hoping your AI tests are correct. Start knowing.