Introduction
A mock server that mimics the OpenAI API for predictable testing
What it looks like
```yaml
apiKey: "test-key"
port: 3000
responses:
  - id: "greeting"
    matcher:
      type: "contains"
      pattern: "hello"
    response:
      content: "Hello! How can I help you today?"
```
```javascript
// Your test code
const openai = new OpenAI({
  baseURL: 'http://localhost:3000/v1',
  apiKey: 'test-key',
});

const response = await openai.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Say hello!' }],
});
// Returns: "Hello! How can I help you today?"
```
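To make the matching behavior concrete, here is a minimal sketch of how a `contains` matcher could pick a canned response. The `responses` shape mirrors the YAML config above, but the `selectResponse` function and its case-insensitive matching are assumptions for illustration, not part of the tool's actual API:

```javascript
// Canned responses, shaped like the YAML config above.
const responses = [
  {
    id: 'greeting',
    matcher: { type: 'contains', pattern: 'hello' },
    response: { content: 'Hello! How can I help you today?' },
  },
];

// Hypothetical selection logic: return the first entry whose
// "contains" pattern appears in the user's message.
function selectResponse(userMessage, entries) {
  for (const entry of entries) {
    const { type, pattern } = entry.matcher;
    if (type === 'contains' && userMessage.toLowerCase().includes(pattern)) {
      return entry.response.content;
    }
  }
  return null; // no matcher hit
}

console.log(selectResponse('Say hello!', responses));
// → "Hello! How can I help you today?"
```

Because the lookup is a plain string check, the same prompt always yields the same response, which is what makes the tests deterministic.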
Why use it
Testing LLM applications is hard because:
- Real API responses vary each time
- API calls cost money and have rate limits
- Network issues can break tests
- You can’t test specific edge cases reliably
This mock server solves these issues by giving you complete control over responses during testing.
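For example, edge cases that are hard to trigger against the real API can be pinned down with a dedicated config entry. The `id` and `pattern` values below are illustrative, using only the fields shown in the config above:

```yaml
responses:
  - id: "empty-reply"
    matcher:
      type: "contains"
      pattern: "force-empty"
    response:
      content: ""
```

Any test prompt containing `force-empty` now deterministically receives an empty completion, so your app's handling of that case can be asserted reliably in CI.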