# Quickstart
This quickstart helps you integrate your LLM application with Langfuse by logging a single LLM call.
## Create a new project in Langfuse
- Create a Langfuse account or self-host Langfuse
- Create a new project
- Create new API credentials in the project settings
## Log your first LLM call to Langfuse
The `@observe()` decorator makes it easy to trace any Python LLM application. In this quickstart we also use the Langfuse OpenAI integration to automatically capture all model parameters.
Not using OpenAI? Switch to the "Python Decorator + any LLM" tab.
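To illustrate the idea behind the decorator pattern, here is a minimal, hypothetical sketch of a tracing decorator built with the standard library only. It is not Langfuse's implementation — the real `@observe()` nests spans and ships the captured data to the Langfuse backend — but it shows the kind of information such a decorator can capture:

```python
import functools
import time

def observe_sketch(fn):
    """Hypothetical stand-in for @observe(): records the function
    name, duration, and output of each call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        trace = {
            "name": fn.__name__,
            "duration_s": time.perf_counter() - start,
            "output": result,
        }
        # A real tracing integration would send this record to a backend.
        print(f"traced {trace['name']} in {trace['duration_s']:.4f}s")
        return result
    return wrapper

@observe_sketch
def story():
    return "Once upon a time..."

story()
```

Because `functools.wraps` preserves the wrapped function's metadata, decorated functions keep their original names in the resulting traces.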
```shell
pip install langfuse openai
```
.env

```shell
LANGFUSE_SECRET_KEY="sk-lf-..."
LANGFUSE_PUBLIC_KEY="pk-lf-..."
LANGFUSE_HOST="https://cloud.langfuse.com" # 🇪🇺 EU region
# LANGFUSE_HOST="https://us.cloud.langfuse.com" # 🇺🇸 US region
```
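If you prefer not to use a `.env` file, the same variables can be exported in your shell before running the script (placeholder values shown — substitute your own keys):

```shell
export LANGFUSE_SECRET_KEY="sk-lf-..."
export LANGFUSE_PUBLIC_KEY="pk-lf-..."
export LANGFUSE_HOST="https://cloud.langfuse.com"  # or https://us.cloud.langfuse.com
```

Either way, the credentials must be present as environment variables when the Langfuse client initializes.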
main.py

```python
from langfuse.decorators import observe
from langfuse.openai import openai  # OpenAI integration

@observe()
def story():
    return openai.chat.completions.create(
        model="gpt-3.5-turbo",
        max_tokens=100,
        messages=[
            {"role": "system", "content": "You are a great storyteller."},
            {"role": "user", "content": "Once upon a time in a galaxy far, far away..."},
        ],
    ).choices[0].message.content

@observe()
def main():
    return story()

main()
```
Done! Now visit the Langfuse interface to view the trace you just created.
## All Langfuse platform features
This was a very brief introduction to get started with Langfuse. Explore all Langfuse platform features in detail.
- Develop
- Monitor
- Test
- References
Python Decorator · Python low-level SDK · JS/TS SDK · OpenAI SDK · 🦜🔗 Langchain · 🦙 LlamaIndex · API reference · Flowise · Langflow · Litellm