
Create your first AI Model Experiments instance, connect to it and conduct a first experiment


This guide helps you quickly set up a running STACKIT AI Model Experiments instance and conduct a first experiment on it. It assumes the default settings that most users will keep.

Follow these steps to create your instance:

  1. Visit the STACKIT Portal.

  2. On the sidebar click on AI Model Experiments.

    If AI Model Experiments is not enabled in your region, an Enable button appears. Click it to activate the product in your region; this incurs no additional costs.

  3. On the bar on the top click on Create AI Model Experiments.

  4. On the new pane enter a description and a name.

  5. Click on Create.

  6. Wait for your instance to become Active.

  7. Click on your new instance and save its URL to a safe location.

    This article references this URL as ai-model-experiments-instance-uri.

After your instance has reached the Active state, you can continue with creating an access token. To do so, follow these steps:

  1. Visit the STACKIT Portal.

  2. On the sidebar click on AI Model Experiments.

  3. Navigate to your instance and click on it.

  4. On the left bar click on Tracking tokens.

  5. Click on Create tracking token.

  6. On the new pane enter a name and click on Create.

  7. Save your token to a safe location.

    This article references this token as ai-model-experiments-access-token.

You now meet all prerequisites on the AI Model Experiments side. As a final preparation step, create credentials for the newly created Object Storage bucket:

  1. Visit the STACKIT Portal.

  2. On the sidebar click on Object Storage.

  3. In the sub-menu click on Credentials & Groups.

  4. Navigate to the entry that begins with aiexp and click on it.

  5. On the left click on Credentials.

  6. On the top bar, click on Create credentials.

  7. On the pane, click on Create.

  8. Copy the Access key ID and the Secret access key to a safe location.

    This article references the key ID and the key itself as stackit-object-storage-access-key and stackit-object-storage-secret-key.

To begin using STACKIT AI Model Experiments, install the MLflow™ and boto3 Python libraries. While MLflow™ manages your experiments, boto3 provides the necessary S3-compatible interface for storing artifacts, such as model files and plots.

First, create and activate a virtual environment:

Terminal window
python -m venv ai-model-experiments
source ai-model-experiments/bin/activate

For the purposes of this specific tutorial, you will also need to install a few additional data science tools. Run the following command to install the necessary libraries:

Terminal window
pip install mlflow boto3 xgboost numpy

Furthermore, to enable the MLflow™ client to authenticate and communicate with both the tracking server and the artifact storage, you must configure the following environment variables:

Terminal window
# MLflow Tracking Server Configuration
export MLFLOW_TRACKING_URI="[ai-model-experiments-instance-uri]"
export MLFLOW_TRACKING_TOKEN="[ai-model-experiments-access-token]"
# S3 Credentials for Artifact Access
export AWS_ACCESS_KEY_ID="[stackit-object-storage-access-key]"
export AWS_SECRET_ACCESS_KEY="[stackit-object-storage-secret-key]"
# STACKIT S3 Endpoint URL
export MLFLOW_S3_ENDPOINT_URL="https://object.storage.eu01.onstackit.cloud"

Detailed instructions on how to generate and retrieve these values are provided in Setup the local environment.
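If you prefer to configure these values from Python rather than the shell, the same environment variables can be set via os.environ before the MLflow™ client is used. A minimal sketch with hypothetical placeholder values — substitute your own instance URI, token, and Object Storage credentials:

```python
import os

# Placeholder values for illustration only -- replace with your real
# ai-model-experiments-instance-uri, access token, and S3 credentials.
os.environ["MLFLOW_TRACKING_URI"] = "https://example.ai-model-experiments.invalid"
os.environ["MLFLOW_TRACKING_TOKEN"] = "my-tracking-token"
os.environ["AWS_ACCESS_KEY_ID"] = "my-access-key-id"
os.environ["AWS_SECRET_ACCESS_KEY"] = "my-secret-access-key"
os.environ["MLFLOW_S3_ENDPOINT_URL"] = "https://object.storage.eu01.onstackit.cloud"

# MLflow reads these variables at call time, so set them before any
# tracking or artifact operation is performed.
print(os.environ["MLFLOW_S3_ENDPOINT_URL"])
```

Setting the variables in the shell, as shown above, has the advantage that they apply to every script you run in that session.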

Experiments are a way to organize your runs. You can create a new experiment using the mlflow.create_experiment function.

import mlflow
experiment_name = "my-first-experiment"
mlflow.create_experiment(experiment_name)
mlflow.set_experiment(experiment_name)

Save the code above to a file called create_experiment.py and run it:

Terminal window
python create_experiment.py

Within an experiment, you can start runs to track individual executions of your code. You can log parameters, metrics, and artifacts to a run.

import mlflow

with mlflow.start_run() as run:
    run_id = run.info.run_id
    print(f"MLflow Run ID: {run_id}")

    # Log a parameter
    mlflow.log_param("alpha", 0.5)

    # Log a metric
    mlflow.log_metric("accuracy", 0.95)

    # Log an artifact
    with open("artifact.txt", "w") as f:
        f.write("This is a dummy artifact.")
    mlflow.log_artifact("artifact.txt")

Save the code above to a file called run_experiment.py and run it:

Terminal window
python run_experiment.py

The script produces output like this:

MLflow Run ID: b10f5deb2dd34af7b3824e5c3e03903d

While MLflow™ is excellent for tracking individual metrics and files, its primary strength lies in managing the entire machine learning lifecycle. The following script demonstrates a full end-to-end workflow: creating an experiment, logging parameters, saving a trained XGBoost model, and registering it.

This example introduces several advanced features such as:

  • Native Model Flavoring: Instead of saving the model as a generic file, we use mlflow.xgboost.log_model. This preserves the model’s internal structure, allowing it to be reloaded as a functional object.

  • Model Signatures: By using infer_signature, we define a “data contract” that specifies the exact input and output formats. This ensures safety and consistency when the model is used for inference later.

  • Automated Registration: The script automatically adds the trained model to the Model Registry via the registered_model_name parameter, making it immediately available for versioning and deployment.

import mlflow
import numpy as np
import xgboost as xgb
from mlflow.models import infer_signature

# 1. Initialize Experiment
experiment_name = "my-first-experiment"
if not mlflow.get_experiment_by_name(experiment_name):
    mlflow.create_experiment(experiment_name)
mlflow.set_experiment(experiment_name)

# 2. Start a Training Run
with mlflow.start_run() as run:
    run_id = run.info.run_id
    print(f"Active Run ID: {run_id}")

    # Log Metadata
    mlflow.log_param("algorithm", "XGBoost")
    mlflow.log_param("layers", 3)
    mlflow.log_metric("accuracy", 0.95)

    # Create a dummy artifact file
    with open("notes.txt", "w") as f:
        f.write("Model trained on synthetic data.")
    mlflow.log_artifact("notes.txt")

    # --- Train a Model ---
    X = np.random.rand(100, 10)
    y = np.random.randint(0, 2, 100)
    dtrain = xgb.DMatrix(X, label=y)
    model = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=5)

    # Log the model with a signature (defines input/output schema)
    signature = infer_signature(X, model.predict(dtrain))
    mlflow.xgboost.log_model(
        xgb_model=model,
        artifact_path="model-artifacts",
        signature=signature,
        registered_model_name="stackit-xgboost-model",
    )

Save the code above to a file called register_model.py and run it:

Terminal window
python register_model.py

You should get an output like this:

Successfully registered model 'stackit-xgboost-model'.
Created version '1' of model 'stackit-xgboost-model'.

Once a model is registered in the Model Registry, you can load it using its name and version.

import mlflow.xgboost
import xgboost as xgb
import numpy as np
# Define the model URI (using version 1)
model_name = "stackit-xgboost-model"
model_version = 1
model_uri = f"models:/{model_name}/{model_version}"
# Load the model
loaded_model = mlflow.xgboost.load_model(model_uri)
# Run Inference
sample_data = xgb.DMatrix(np.random.rand(1, 10))
predictions = loaded_model.predict(sample_data)
print(f"Predictions: {predictions}")

Save the code above to a file called load_model.py and run it:

Terminal window
python load_model.py

The output will look similar to this (the exact value varies, since the model was trained on random data):

Predictions: [0.58940643]
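Note that with the binary:logistic objective the model returns a probability between 0 and 1, not a class label. To obtain a hard 0/1 prediction, apply a threshold to the probabilities yourself — a minimal sketch, where the 0.5 cutoff is a common default rather than anything the model or MLflow™ prescribes:

```python
# Convert predicted probabilities from a binary:logistic model into class labels.
def to_labels(probabilities, threshold=0.5):
    return [1 if p >= threshold else 0 for p in probabilities]

print(to_labels([0.58940643, 0.12, 0.91]))  # [1, 0, 1]
```

With the example output above, the probability 0.58940643 exceeds the 0.5 threshold, so the sample would be assigned class 1.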

Congratulations, you have run all code examples!

Once your code has run, you can visualize and manage your data through the MLflow™ UI. To get started, simply enter your MLFLOW_TRACKING_URI into your browser. Use the dashboard to review experiment runs, compare metrics across different versions, and inspect the artifacts stored in STACKIT Object Storage.

Congratulations! You have successfully integrated STACKIT AI Model Experiments into your workflow.

While this tutorial covered the foundational setup by installing the MLflow™ SDK and boto3 and logging your first run, there is much more to explore. We recommend reading the official MLflow™ documentation for more details.