LLM playground
Test and iterate on your prompts with the new LLM playground directly in Langfuse.
On Day 2 of Launch Week 1, we're excited to introduce the LLM playground to Langfuse. By making prompt engineering possible directly in Langfuse, we take another step in our mission to build a feature-complete LLM engineering platform that supports you across the full lifecycle of your LLM application.
With the LLM playground, you can now test and iterate on your prompts directly in Langfuse. Either start from scratch or jump into the playground from an existing prompt in your project. You can then tweak the prompt and the model parameters to see how the model responds to different inputs. This way, you can quickly iterate on your prompts to get the best results for your LLM app.
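If you manage your prompts in Langfuse, a prompt you refined in the playground can also be fetched and used in your application code. Here is a minimal sketch using the Langfuse Python SDK; the prompt name `movie-critic` and its `movie` variable are placeholders for illustration:

```python
from langfuse import Langfuse

# Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_HOST from the environment
langfuse = Langfuse()

# Fetch the latest production version of a prompt managed in Langfuse
prompt = langfuse.get_prompt("movie-critic")

# Fill in the prompt's template variables before sending it to your model
compiled_prompt = prompt.compile(movie="Dune 2")
print(compiled_prompt)
```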
The LLM playground currently supports all major models from OpenAI and Anthropic, and we plan to add more model providers in the future.
We hope you enjoy using the LLM playground. Let us know what you think in the GitHub discussion, and stay tuned for more updates during Langfuse Launch Week 1 🚀