Langfuse is an open-source platform for building and operating LLM-powered applications. It helps teams trace model behavior, evaluate quality, manage prompts, and track metrics so you can debug issues and improve performance over time.
Key capabilities
- LLM tracing and monitoring to collect and analyze runtime data
- Prompt versioning and collaborative prompt management
- Model quality and performance evaluation
- Manual annotation with custom labels and comments
- Dataset creation and management for training and testing
- Metrics and statistics for performance analysis
- Playground for quick prompt testing and experimentation
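To make the tracing capability concrete, the sketch below mimics the shape of the data a tracing tool like Langfuse records for one request: a trace that groups one or more "generation" observations (individual LLM calls) with their model, input, output, and latency. The function names and field layout here are illustrative assumptions, not Langfuse's actual API.

```python
import time
import uuid

def make_generation(model, prompt, completion, start, end):
    # One hypothetical "generation" observation: a single LLM call.
    return {
        "id": str(uuid.uuid4()),
        "type": "GENERATION",
        "model": model,
        "input": prompt,
        "output": completion,
        "latency_ms": round((end - start) * 1000, 1),
    }

def make_trace(name, generations):
    # A trace groups the observations for one end-to-end request,
    # so a whole chain or agent run can be inspected as a unit.
    return {"id": str(uuid.uuid4()), "name": name, "observations": generations}

start = time.time()
gen = make_generation("gpt-4o", "Say hi", "Hi!", start, start + 0.42)
trace = make_trace("chat-request", [gen])
```

Grouping calls under a trace is what lets a monitoring backend show latency and cost per request rather than per isolated call.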
How to get started
Langfuse is available as a hosted web application and as a self-hostable open-source application. A typical setup looks like this:
- Create an account and a new project
- Install an SDK (for example, Python or JavaScript)
- Integrate the SDK into your app to trace and manage LLM calls
- Configure settings and start using monitoring, prompts, and evaluation tools
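For the Python SDK, the install and configuration steps above typically boil down to a few commands. The key values below are placeholders; real keys come from your project's settings page.

```shell
# Install the Python SDK
pip install langfuse

# Credentials from the project settings (placeholder values)
export LANGFUSE_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_SECRET_KEY="sk-lf-..."
export LANGFUSE_HOST="https://cloud.langfuse.com"  # or your self-hosted URL
```

With these environment variables set, the SDK can pick up credentials without hard-coding them in application code.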
Langfuse offers a free tier on its hosted service, and the open-source version can be self-hosted. It integrates with common LLM tooling, including LlamaIndex, LangChain, and the OpenAI SDK.

