Open Source LLM Native Mock API Test Engine
Vidai.Mock is a high-fidelity simulation tool designed for Continuous Integration/Continuous Deployment (CI/CD) pipelines that mimics the behavior of large language model (LLM) APIs such as OpenAI's GPT-4. It simulates network conditions such as latency and jitter, as well as failure modes, enabling deterministic and efficient testing. The tool runs as a lightweight Rust binary, with no need for Docker or a Java Virtual Machine. Vidai.Mock supports multiple LLM providers, including OpenAI, Anthropic, and Bedrock, and offers advanced features such as real-time token simulation, chaos primitives for resilience testing, and support for Retrieval-Augmented Generation (RAG) flows. It integrates into existing test suites and provides observability through Prometheus metrics and request tracing. With its air-gapped, stateless configuration, Vidai.Mock suits both local and enterprise testing scenarios.
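Vidai.Mock's own configuration format isn't shown here, but the core idea it describes, streaming tokens with jittered latency and injectable mid-stream failures, can be sketched in a few lines of Python. The function and parameter names below are illustrative, not the tool's actual API:

```python
import random
import time

def stream_tokens(tokens, base_latency_ms=40.0, jitter_ms=15.0, failure_rate=0.0):
    """Yield tokens one at a time with a jittered inter-token delay,
    optionally injecting a chaos-style mid-stream failure."""
    for token in tokens:
        # Chaos primitive: abort the stream with some probability.
        if random.random() < failure_rate:
            raise ConnectionError("injected mid-stream failure")
        # Gaussian jitter around a base latency, clamped at zero.
        delay = max(0.0, random.gauss(base_latency_ms, jitter_ms)) / 1000.0
        time.sleep(delay)
        yield token

# Seeding the RNG makes runs deterministic, as a CI mock would require.
random.seed(42)
chunks = list(stream_tokens(["Hello", ",", " world", "!"],
                            base_latency_ms=1.0, jitter_ms=0.5))
print("".join(chunks))
```

Seeding the random source is what turns a realistic-looking simulation into a reproducible one: the same seed yields the same delays and the same injected failures on every CI run.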
Testing LLM apps demands realistic streaming, latency, and failure simulation.
An offline mock server that simulates LLM APIs with streaming physics and chaos errors.
Developers testing LLM integrations in CI/CD pipelines