LangChain expression language (advanced usage)
This tutorial shows how to use LLMs with the LangChain framework. It covers a setup with several LLM invocations to complete a task. For an introduction to the framework, see LangChain expression language (basic usage).
As an example task, we design a technical workshop for the continuing education of a developer team. Using an LLM (or a chat interface) can significantly speed up the workshop planning process. The goal of this tutorial is to show how to connect several LLM invocations and process interim results.
After creating a STACKIT AI Model Serving Auth Token (see Manage auth tokens), provide it as model_serving_auth_token. From Available shared models, choose a model and provide the model name and the URL.
import os
from dotenv import load_dotenv
load_dotenv("../.env")
model = os.environ["STACKIT_MODEL_SERVING_MODEL"]  # Select a chat model from https://support.docs.stackit.cloud/stackit/en/models-licenses-319914532.html
base_url = os.environ["STACKIT_MODEL_SERVING_BASE_URL"]  # For example: "https://api.openai-compat.model-serving.eu01.onstackit.cloud/v1"
model_serving_auth_token = os.environ["STACKIT_MODEL_SERVING_AUTH_TOKEN"]  # For example: "ey..."
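Before building any chains, it can help to verify the configuration with a single direct call. A minimal sketch, assuming the environment variables above are set and the model is reachable:

from langchain_openai import ChatOpenAI

# One direct call to verify credentials, base URL, and model name.
llm = ChatOpenAI(model=model, base_url=base_url, api_key=model_serving_auth_token)
print(llm.invoke("Reply with a single word: ready.").content)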
Utilities

We use Pydantic BaseModels for structured data handling. A workshop item is modelled as a Pydantic BaseModel containing several sprints. The prompt design (described later) explains how these parts interact.
When combining LangChain Runnables, adapt the output of an earlier runnable to the expected input of the following one. Use the helper functions below.
from typing import Dict, List
from pydantic import BaseModel, Field
class Sprint(BaseModel):
    sprint_topic: str
    description: str
    stories: List[str] = Field(default_factory=list)

class WorkShop(BaseModel):
    topic: str
    target_audience: str
    expected_prior_experience: str
    schedule: str
    sprints: List[Sprint] = Field(default_factory=list)

def sprint_text_to_sprints(response: str) -> List[Sprint]:
    """Extract a list of Sprints from a given text, expecting the format specified in the sprint prompt."""
    out = []
    lines = response.splitlines()
    for idx, line in enumerate(lines):
        if not line.startswith("Sprint"):
            continue
        out.append(
            Sprint(
                sprint_topic=line.split(":")[-1].strip(),
                description=lines[idx + 1].split(":")[-1].strip(),
            )
        )
    return out

def dict_to_workshop(d: Dict[str, str]) -> WorkShop:
    """Cast a given dictionary to a WorkShop (Pydantic BaseModel), ignoring additional keys."""
    return WorkShop(**d)

def extract_single_nested_dict(d: Dict[str, str], key_nested_dict: str = "origin_args") -> Dict[str, str]:
    """Extract a single nested dictionary and add all its key-value pairs to the top-level dictionary."""
    nested_dict = d.pop(key_nested_dict, {})  # Default to an empty dict so a missing key does not break the merge.
    return {**nested_dict, **d}
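For illustration (the dictionary values here are made up), extract_single_nested_dict flattens the origin_args entry produced by a RunnablePassthrough back into the top level:

d = {"topic": "gRPC basics", "origin_args": {"superordinate_topic": "gRPC"}}
print(extract_single_nested_dict(d))
#> {'superordinate_topic': 'gRPC', 'topic': 'gRPC basics'}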
Prepare the prompts

To accomplish the subtasks and solve the overall workshop design, prepare prompt templates as follows:
- topic_prompt generates a concrete topic derived from a broader superordinate topic.
- sprint_prompt incorporates Agile concepts into the workshop and generates plausible sprints.
- schedule_prompt derives a detailed schedule from the previous results and the user's input.
from langchain_core.prompts import ChatPromptTemplate
topic_prompt = ChatPromptTemplate([
    ("system", "You are a helpful AI bot that should help with the design of workshops and hackathons."),
    ("human", "Come up with a specific topic for a {superordinate_topic} workshop. Only give the topic, no further explanations."),
])
sprint_prompt = ChatPromptTemplate([
    ("system", "You are a helpful AI bot that should help with the design of workshops and hackathons."),
    ("human", "We design a {topic} workshop. The workshop is scheduled to run for {amount_of_days} days. The target audience is {target_audience}. The assumed level of prior experience is {expected_prior_experience}. This workshop ought to also be a training in agile work. Therefore I need you to state two sprint topics for each workshop day.\n\nGive your answer in a concise way, no introduction needed. Follow this example:\n'Day 1\nSprint 1: <sprint topic>\nDescription: <Max three sentences to describe the sprint goal.>'"),
])
schedule_prompt = ChatPromptTemplate([
    ("system", "You are a helpful AI bot that should help with the design of workshops and hackathons."),
    ("human", "We design a {topic} workshop to run for {amount_of_days} days. Orienting towards agile concepts such as Scrum or Kanban, we have the following sprints scheduled: {sprints_text}\n\nI need you to draft a detailed time schedule for the whole workshop."),
])
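Since a ChatPromptTemplate is itself a runnable, you can preview how it renders into chat messages before wiring it into a chain. A quick check (the input value is just an example):

preview = topic_prompt.invoke({"superordinate_topic": "Machine Learning"})
print(preview.to_messages())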
Set up chains

Set up chains to solve the workshop design task. Each chain has its own ChatOpenAI instance. For the more creative subtasks, increase temperature for more variation, and use a non-zero frequency_penalty to encourage the response to differ from the input. For the schedule chain, set both to zero to ensure the sprints and topics match the input.
Use langchain_core.runnables to design chains that can be composed:
- RunnableLambda wraps any function as a runnable, letting you apply helper functions such as extract_single_nested_dict.
- RunnableParallel gathers runnables (or chains) that can run independently in parallel.
- RunnablePassthrough takes the input dictionary and passes it through unchanged.
Combining RunnableParallel and RunnablePassthrough, followed by extract_single_nested_dict, preserves the original input for later runnables.
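A minimal sketch of this pattern without any LLM involved (the doubled key and the input value are made up for illustration):

from langchain_core.runnables import RunnableLambda, RunnableParallel, RunnablePassthrough

demo_chain = RunnableParallel(
    doubled=RunnableLambda(lambda d: d["x"] * 2),    # Some computation on the input.
    origin_args=RunnablePassthrough(),               # Keep the original input alongside it.
) | RunnableLambda(extract_single_nested_dict)       # Flatten origin_args back into the top level.

print(demo_chain.invoke({"x": 3}))
#> {'x': 3, 'doubled': 6}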
from langchain_core.output_parsers.string import StrOutputParser
from langchain_core.runnables import RunnableLambda, RunnableParallel, RunnablePassthrough
from langchain_openai import ChatOpenAI
topic_chain = RunnableParallel(
    topic=(
        topic_prompt
        | ChatOpenAI(
            model=model,
            base_url=base_url,
            api_key=model_serving_auth_token,
            temperature=.1,
            frequency_penalty=.05
        )
        | StrOutputParser()
    ),
    origin_args=RunnablePassthrough()
) | RunnableLambda(extract_single_nested_dict)
sprint_chain = RunnableParallel(
    sprints_text=(
        sprint_prompt
        | ChatOpenAI(
            model=model,
            base_url=base_url,
            api_key=model_serving_auth_token,
            temperature=.1,
            frequency_penalty=.05
        )
        | StrOutputParser()
    ),
    origin_args=RunnablePassthrough()
) | RunnableLambda(extract_single_nested_dict)
schedule_chain = RunnableParallel(
    schedule=(
        schedule_prompt
        | ChatOpenAI(
            model=model,
            base_url=base_url,
            api_key=model_serving_auth_token,
            temperature=.0,
            frequency_penalty=.0
        )
        | StrOutputParser()
    ),
    origin_args=RunnablePassthrough()
) | RunnableLambda(extract_single_nested_dict)
sprints_and_schedule_chain = RunnableParallel(
    sprints=RunnableLambda(lambda d: sprint_text_to_sprints(d["sprints_text"])),
    schedule=schedule_chain,
    origin_args=RunnablePassthrough()
) | RunnableLambda(extract_single_nested_dict)
workshop_chain = (
    topic_chain
    | sprint_chain
    | sprints_and_schedule_chain
    | RunnableLambda(lambda d: extract_single_nested_dict(d, "schedule"))
    | RunnableLambda(lambda d: dict_to_workshop(d))
)
Invoke chains

When defining the chains above, compose them step by step. This is convenient for testing and debugging interim results, but the ultimate goal is the workshop_chain: a single runnable that manages all LLM calls and conversions.
The invocation below demonstrates the input preservation of topic_chain: the superordinate topic remains available for the later subchains.
superordinate_topic = "Machine Learning"
response_topic = topic_chain.invoke({"superordinate_topic": superordinate_topic})
print(response_topic)
# Output
#> {'superordinate_topic': 'Machine Learning', 'topic': '"Building Explainable AI Models with SHAP and LIME"'}

Next, invoke sprint_chain. Provide the generated topic and the other conditions. The sprints output matches the format specified in the corresponding prompt. The function sprint_text_to_sprints, which converts the LLM output into the structured Pydantic BaseModel, requires this format.
response_sprints = sprint_chain.invoke(
    {
        **response_topic,
        **{
            "amount_of_days": "two",
            "target_audience": "developers with different backgrounds",
            "expected_prior_experience": "no experience at all"
        }
    }
)
print(response_sprints["sprints_text"])
# Output
#> Day 1
#> Sprint 1: Introduction to Explainable AI and SHAP
#> Description: Participants will learn the basics of Explainable AI, its importance, and the SHAP framework. They will work in teams to understand how SHAP values are calculated and visualise them using Python libraries. The goal is to grasp the fundamental concepts of SHAP and its application in Explainable AI.
#>
#> Sprint 2: Hands-on with SHAP - Explaining Machine Learning Models
#> Description: Participants will work on a hands-on project where they will use SHAP to explain a pre-trained machine learning model. They will learn how to integrate SHAP into their existing workflows and visualise feature contributions. The goal is to apply SHAP to a real-world scenario and understand its practical implications.
#>
#> Day 2
#> Sprint 3: Introduction to LIME and Model Interpretability
#> Description: Participants will learn about the LIME framework, its strengths and weaknesses, and how it compares to SHAP. They will work in teams to implement LIME on a simple machine learning model and analyse the results. The goal is to understand the trade-offs between different Explainable AI techniques.
#>
#> Sprint 4: Building an Explainable AI Pipeline with SHAP and LIME
#> Description: In this final sprint, participants will build an end-to-end Explainable AI pipeline using both SHAP and LIME. They will integrate these frameworks into a real-world project and present their findings. The goal is to apply the concepts learned throughout the workshop to a practical problem.
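This text already has the shape that sprint_text_to_sprints expects, so it can be converted into structured Sprint models directly. This is the same conversion that sprints_and_schedule_chain applies internally:

sprints = sprint_text_to_sprints(response_sprints["sprints_text"])
print(sprints[0].sprint_topic)
#> Introduction to Explainable AI and SHAP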
All at once

The goal is a single chain that handles all conversions and LLM calls with one invocation. The workshop_chain output is not a response dictionary; it is already converted to a Pydantic model for downstream data handling. Interim results are also structured Pydantic models.
Try a new workshop draft:
workshop = workshop_chain.invoke({
    "superordinate_topic": "gRPC",
    "amount_of_days": "three",
    "target_audience": "python and go developers",
    "expected_prior_experience": "little experience"
})
print(workshop.topic)
# Output#> "Building Scalable Microservices with gRPC and Cloud Native Technologies"This concludes the advanced tutorial on handling LangChain Runnables via LCEL. Key takeaways:
This concludes the advanced tutorial on handling LangChain Runnables via LCEL. Key takeaways:

- Runnables can be concatenated for sequential or parallel execution.
- Using RunnablePassthrough, inputs can be preserved for later Runnables in a chain.
- RunnableLambda lets you add functions to adapt input and output and convert LLM responses into a structured data format.
- Configure ChatOpenAI to adapt the generated output of the LLM.
See the other STACKIT tutorials for further advanced and specific use cases.