Troubleshooting and FAQ
This page addresses frequently asked questions and common troubleshooting topics for Langfuse Prompt Management.
If you don't find a solution to your issue here, try using Ask AI for instant answers. For bug reports, please open a ticket on GitHub Issues. For general questions or support, visit our support page.
FAQ
- Can I dynamically select sub-prompts at runtime?
- How can I manage my prompts with Langfuse?
- How to configure retries and timeouts when fetching prompts? (see the sketch after this list)
- How to measure prompt performance?
- I'm not seeing the latest version of my prompt. Why?
- Link prompt management with tracing in Langfuse
- Using external templating libraries (Jinja, Liquid, etc.) with Langfuse prompts
- What is prompt engineering?
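For the retries-and-timeouts question above, a minimal sketch with the Python SDK might look like the following. The parameter names (`max_retries`, `fetch_timeout_seconds`, `fallback`) and the prompt name are assumptions for illustration; check the SDK reference for the exact signature.

```python
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY from the environment

# Sketch: fetch a prompt with a client-side timeout, a retry budget, and a
# fallback so the application keeps working if the Langfuse API is unreachable.
# Parameter names below are assumptions; verify them against the SDK reference.
prompt = langfuse.get_prompt(
    "movie-critic",              # hypothetical prompt name
    max_retries=2,               # retry transient network failures
    fetch_timeout_seconds=3,     # fail fast instead of blocking the request
    fallback="Critique this movie: {{movie}}",  # used if the fetch ultimately fails
)

compiled = prompt.compile(movie="Dune: Part Two")
```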
Folders
Organize prompts into virtual folders to group prompts with similar purposes. Use folder hierarchies to manage prompt libraries at scale.
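As an illustration, assuming folders are virtual and derived from a `/` in the prompt name (so `support/greeting` appears inside a `support` folder in the UI), creating and fetching a foldered prompt with the Python SDK could look like this sketch:

```python
from langfuse import Langfuse

langfuse = Langfuse()

# Sketch under the assumption that the folder is encoded in the prompt name:
# folder "support", prompt "greeting".
langfuse.create_prompt(
    name="support/greeting",
    prompt="Hello {{customer_name}}, how can I help you today?",
    labels=["production"],
)

# Fetching uses the full path, folder prefix included.
prompt = langfuse.get_prompt("support/greeting")
```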
Overview
With Langfuse you can capture all your LLM evaluations in one place. You can combine different evaluation methods, such as model-based evaluations (LLM-as-a-Judge), human annotations, and fully custom evaluation workflows via the API/SDKs. This allows you to measure quality, tonality, factual accuracy, completeness, and other dimensions of your LLM application.
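As a sketch of the custom-workflow path, the snippet below attaches a score to an existing trace via the Python SDK. The trace id, score name, and value are made up for illustration, and the exact method name may differ between SDK versions (e.g. `score` vs. `create_score`):

```python
from langfuse import Langfuse

langfuse = Langfuse()

# Illustrative sketch: push a custom evaluation result as a score on an
# existing trace. All values here are placeholders.
langfuse.create_score(
    trace_id="trace-id-from-your-application",
    name="factual_accuracy",
    value=0.85,
    comment="Checked against the knowledge base; one minor omission.",
)
```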