
Use LLM Tracing with AI Model Experiments


Implementing LLM Tracing in STACKIT AI Model Experiments allows you to see "under the hood" of your AI's decision-making process. This tutorial shows you how to use MLflow's one-line auto-logging for popular frameworks like LangChain, or manual tracing for custom code.

Ensure you have the environment variables from the Getting Started guide active in your terminal. In addition, install MLflow and the LangChain libraries for this example.

pip install mlflow langchain langchain-community
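
To confirm the variables are picked up, you can print the tracking URI from Python before running the examples. This is a quick sanity check; MLFLOW_TRACKING_URI is the standard variable MLflow reads, but consult the Getting Started guide for the exact names used by your instance.

import mlflow

# MLflow resolves the tracking server from MLFLOW_TRACKING_URI automatically.
# This should print your AI Model Experiments instance URL, not a local path.
print(mlflow.get_tracking_uri())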

You have two ways to implement tracing: Automatic (best for frameworks) or Manual (best for custom logic).

Option A: Automatic Tracing (LangChain Example)


If you use a supported library, a single line of code captures every step of the running chain.

import mlflow
from langchain_core.prompts import ChatPromptTemplate
from langchain_community.chat_models.fake import FakeListChatModel

# Enable MLflow Tracing for LangChain
mlflow.langchain.autolog()

# Set experiment
mlflow.set_experiment("Tracing example")

# Set up the chain using a fake LLM.
# This allows the tutorial to run without an LLM API token.
# Because it uses the standard LangChain interface, MLflow will still
# generate a complete trace in the STACKIT UI.
# To use a real model, swap FakeListChatModel for e.g. ChatOpenAI(model="gpt-4o-mini")
llm = FakeListChatModel(
    responses=[
        "STACKIT AI Model Experiments is a managed MLflow server. (This is a simulated response! No API key was used.)"
    ]
)
prompt = ChatPromptTemplate.from_template("What is {topic}")
chain = prompt | llm

# Invoke the chain - this automatically sends a trace to the AI Model Experiments server
response = chain.invoke({"topic": "STACKIT AI Model Experiments"})
print(response.content)
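
Besides inspecting the trace in the UI, you can query it programmatically. A minimal sketch, assuming an MLflow version recent enough to include the tracing client (2.14 or later) and that "Tracing example" is still the active experiment:

import mlflow

mlflow.set_experiment("Tracing example")

# Returns traces of the active experiment as a pandas DataFrame,
# one row per trace, with request/response previews and timings.
traces = mlflow.search_traces(max_results=5)
print(traces.head())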

Option B: Manual Tracing (Custom Function)


Use the @mlflow.trace decorator to capture custom logic as a trace in the UI.

import mlflow

# Set experiment
mlflow.set_experiment("Tracing example")

# Use the @mlflow.trace decorator to trace a custom function
@mlflow.trace
def my_function():
    return "This is the STACKIT AI Model Experiments Docu."

# Calling the function generates a trace
my_function()
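
For multi-step logic, you can open child spans inside a traced function so that each step appears separately in the Trace View. A minimal sketch using mlflow.start_span; the function name and step data below are purely illustrative:

import mlflow

mlflow.set_experiment("Tracing example")

@mlflow.trace
def answer(question):
    # Child span: shows up nested under the function's trace in the UI
    with mlflow.start_span(name="lookup") as span:
        span.set_inputs({"question": question})
        result = "STACKIT AI Model Experiments is a managed MLflow server."
        span.set_outputs({"result": result})
    return result

answer("What is STACKIT AI Model Experiments?")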
To view the traces:

1. Open your AI Model Experiments instance URL in your browser.
2. Select the newly generated experiment.
3. Navigate to the Traces tab in the left sidebar.
4. Click on your recent request to see the Trace View.