Create your first AI Model Experiments instance, connect to it and conduct a first experiment
Prerequisites
- You have a STACKIT customer account: Create a customer account
- You have a STACKIT user account: Create a user account
- You have a STACKIT project: Create a project
- You have enabled STACKIT’s Object Storage: Enable Object Storage
This guide helps you get a STACKIT AI Model Experiments instance up and running and conduct a first experiment on it quickly. It assumes the default settings that most users will use.
Create an instance and a token
Follow these steps to create your instance:
1. Visit the STACKIT Portal.
2. On the sidebar, click on AI Model Experiments.
   If AI Model Experiments is not enabled in your region, an Enable button will appear. Click it to activate the product in your region; this incurs no additional cost.
3. On the bar at the top, click on Create AI Model Experiments.
4. On the new pane, enter a description and a name.
5. Click on Create.
6. Wait for your instance to become Active.
7. Click on your new instance and save its URL to a safe location. This article references this URL as `ai-model-experiments-instance-uri`.
After your instance has reached the Active state, you can continue with creating an access token. To do so, follow these steps:
1. Visit the STACKIT Portal.
2. On the sidebar, click on AI Model Experiments.
3. Navigate to your instance and click on it.
4. On the left bar, click on Tracking tokens.
5. Click on Create tracking token.
6. On the new pane, enter a name and click on Create.
7. Save your token to a safe location. This article references this token as `ai-model-experiments-access-token`.
Now you meet all prerequisites. As a final preparation step, you need to create credentials for the newly created Object Storage bucket:
1. Visit the STACKIT Portal.
2. On the sidebar, click on Object Storage.
3. In the sub-menu, click on Credentials & Groups.
4. Navigate to the entry that begins with `ai` and click on it.
5. On the left, click on Credentials.
6. On the top bar, click on Create credentials.
7. On the pane, click on Create.
8. Copy the Access key ID and the Secret access key to a safe location. This article references the key ID and the key itself as `stackit-object-storage-access-key` and `stackit-object-storage-secret-key`.
Setting up the environment
To begin using STACKIT AI Model Experiments, install the MLflow™ and boto3 Python libraries. While MLflow™ manages your experiments, boto3 provides the necessary S3-compatible interface for storing artifacts, such as model files and plots.
First, create and activate a virtual environment:

```sh
python -m venv ai-model-experiments
source ai-model-experiments/bin/activate
```

For the purposes of this tutorial, you will also need to install a few additional data science tools. Run the following command to install the necessary libraries:
```sh
pip install mlflow boto3 xgboost numpy
```

Furthermore, to enable the MLflow™ client to authenticate and communicate with both the tracking server and the artifact storage, you must configure the following environment variables:
```sh
# MLflow Tracking Server Configuration
export MLFLOW_TRACKING_URI="[ai-model-experiments-instance-uri]"
export MLFLOW_TRACKING_TOKEN="[ai-model-experiments-access-token]"

# S3 Credentials for Artifact Access
export AWS_ACCESS_KEY_ID="[stackit-object-storage-access-key]"
export AWS_SECRET_ACCESS_KEY="[stackit-object-storage-secret-key]"

# STACKIT S3 Endpoint URL
export MLFLOW_S3_ENDPOINT_URL="https://object.storage.eu01.onstackit.cloud"
```

Detailed instructions on how to generate and retrieve these values are provided in Setup the local environment.
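Before running any MLflow™ code, it can help to verify that all five variables are actually set in your current shell. The following sketch (the helper name `check_env` is our own, not part of MLflow™) reports any that are missing:

```python
import os

# The five variables the MLflow client and boto3 expect (see above)
REQUIRED_VARS = [
    "MLFLOW_TRACKING_URI",
    "MLFLOW_TRACKING_TOKEN",
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "MLFLOW_S3_ENDPOINT_URL",
]

def check_env(required=REQUIRED_VARS):
    """Return the names of required environment variables that are unset or empty."""
    return [name for name in required if not os.environ.get(name)]

if __name__ == "__main__":
    missing = check_env()
    if missing:
        print("Missing environment variables:", ", ".join(missing))
    else:
        print("Environment looks complete.")
```

Running this before the tutorial scripts saves you from opaque authentication errors later on.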
Create an experiment
Experiments are a way to organize your runs.
You can create a new experiment using the mlflow.create_experiment function.
```python
import mlflow

experiment_name = "my-first-experiment"
mlflow.create_experiment(experiment_name)
mlflow.set_experiment(experiment_name)
```

Create a new file called create_experiment.py with the code above and run it:
```sh
python create_experiment.py
```

Start a Run and Log Data
Within an experiment, you start runs to track individual executions of your code. You can log parameters, metrics, and artifacts to a run.
```python
import mlflow

# Log into the experiment created in the previous step
mlflow.set_experiment("my-first-experiment")

with mlflow.start_run() as run:
    run_id = run.info.run_id
    print(f"MLflow Run ID: {run_id}")

    # Log a parameter
    mlflow.log_param("alpha", 0.5)

    # Log a metric
    mlflow.log_metric("accuracy", 0.95)

    # Log an artifact
    with open("artifact.txt", "w") as f:
        f.write("This is a dummy artifact.")
    mlflow.log_artifact("artifact.txt")
```

Create a new file called run_experiment.py with the code above and run it:
```sh
python run_experiment.py
```

The script will create an output like this:
```
MLflow Run ID: b10f5deb2dd34af7b3824e5c3e03903d
```

Complete tracking script
While MLflow™ is excellent for tracking individual metrics and files, its primary strength lies in managing the entire machine learning lifecycle. The following script demonstrates a full end-to-end workflow: creating an experiment, logging parameters, saving a trained XGBoost model, and registering it.
This example introduces several advanced features such as:
- Native Model Flavoring: Instead of saving the model as a generic file, we use `mlflow.xgboost.log_model`. This preserves the model’s internal structure, allowing it to be reloaded as a functional object.
- Model Signatures: By using `infer_signature`, we define a “data contract” that specifies the exact input and output formats. This ensures safety and consistency when the model is used for inference later.
- Automated Registration: The script automatically adds the trained model to the Model Registry via the `registered_model_name` parameter, making it immediately available for versioning and deployment.
```python
import mlflow
import numpy as np
import xgboost as xgb
from mlflow.models import infer_signature

# 1. Initialize Experiment
experiment_name = "my-first-experiment"
if not mlflow.get_experiment_by_name(experiment_name):
    mlflow.create_experiment(experiment_name)
mlflow.set_experiment(experiment_name)

# 2. Start a Training Run
with mlflow.start_run() as run:
    run_id = run.info.run_id
    print(f"Active Run ID: {run_id}")

    # Log Metadata
    mlflow.log_param("algorithm", "XGBoost")
    mlflow.log_param("layers", 3)
    mlflow.log_metric("accuracy", 0.95)

    # Create a dummy artifact file
    with open("notes.txt", "w") as f:
        f.write("Model trained on synthetic data.")
    mlflow.log_artifact("notes.txt")

    # --- Train a Model ---
    X = np.random.rand(100, 10)
    y = np.random.randint(0, 2, 100)
    dtrain = xgb.DMatrix(X, label=y)

    model = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=5)

    # Log the model with a signature (defines input/output schema)
    signature = infer_signature(X, model.predict(dtrain))
    mlflow.xgboost.log_model(
        xgb_model=model,
        artifact_path="model-artifacts",
        signature=signature,
        registered_model_name="stackit-xgboost-model",
    )
```

Create a new file called register_model.py with the code above and run it:
```sh
python register_model.py
```

You should get an output like this:
```
Successfully registered model 'stackit-xgboost-model'.
Created version '1' of model 'stackit-xgboost-model'.
```

Loading and inference
Once a model is registered in the Model Registry, you can load it using its name and version.
```python
import mlflow.xgboost
import numpy as np
import xgboost as xgb

# Define the model URI (using version 1)
model_name = "stackit-xgboost-model"
model_version = 1
model_uri = f"models:/{model_name}/{model_version}"

# Load the model
loaded_model = mlflow.xgboost.load_model(model_uri)

# Run Inference
sample_data = xgb.DMatrix(np.random.rand(1, 10))
predictions = loaded_model.predict(sample_data)
print(f"Predictions: {predictions}")
```

Create a new file called load_model.py with the code above and run it:
```sh
python load_model.py
```

Since the input data is random, the output will look similar to this:
```
Predictions: [0.58940643]
```

Congratulations, you have run all code examples!
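As a side note, the `models:/` URI used above follows a simple scheme: `models:/<name>/<version>` addresses a specific registered version, and recent MLflow™ releases also accept an alias-based form `models:/<name>@<alias>`. A small sketch (the helper `model_uri` is our own, for illustration only) of how such URIs are composed:

```python
def model_uri(name, version=None, alias=None):
    """Build an MLflow model URI.

    models:/<name>/<version>  -> a specific registered version
    models:/<name>@<alias>    -> a version referenced by a registry alias
                                 (e.g. "champion"), if your MLflow version
                                 supports aliases
    """
    if (version is None) == (alias is None):
        raise ValueError("Provide exactly one of 'version' or 'alias'.")
    if version is not None:
        return f"models:/{name}/{version}"
    return f"models:/{name}@{alias}"

print(model_uri("stackit-xgboost-model", version=1))
# -> models:/stackit-xgboost-model/1
```

Using an alias instead of a pinned version number lets you promote a new model version without changing inference code.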
Inspecting your experiments
Once your code has run, you can visualize and manage your data through the MLflow™ UI. To get started, simply enter your `MLFLOW_TRACKING_URI` into your browser. Use the dashboard to review experiment runs, compare metrics across different versions, and inspect the artifacts stored in STACKIT Object Storage.
Next steps
Congratulations! You have successfully integrated STACKIT AI Model Experiments into your workflow.
While this tutorial covered the foundational setup by installing the MLflow™ SDK and boto3 and logging your first run, there is much more to explore. We recommend reading the official MLflow™ documentation for more details.