New usecase - Traffic Forecasting for Green Network


1. Introduction

  • In this use case, we use the "Cell Metrics" (RRU.PrbUsedDl) dataset provided by the O-RAN SC SIM space, which consists of synthetic data generated by a simulator; all timestamps are recorded in Unix epoch format.

  • The model training process is carried out on the O-RAN SC AI/ML Framework with GPU support, and considers both traditional machine learning (ML) and deep learning (DL) approaches. For ML models, we use Random Forest and Support Vector Regression (SVR); for DL models, we employ RNN, LSTM, and GRU architectures.

  • By managing the ON/OFF state of cells through traffic forecasting, we can reduce power consumption. Additionally, if the AI/ML models used for forecasting are operated in an eco-friendly manner, further power savings can be achieved. In this use case, we measure the carbon emissions and energy consumption during the cell traffic forecasting process using AI/ML to ensure that the forecasting model is not only effective but also environmentally sustainable.

image-20241209-091900.png

2. Requirements

Configuring GPU Usage in a Machine Learning Pipeline

This section is based on contributions from Sungjin Lee's GitHub repository. For more details, visit this link.

  • Step 1. Install the nvidia-container-toolkit

    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
      sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
      && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
      sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
      sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
  • Step 2. Configure containerd

    sudo nvidia-ctk runtime configure --runtime=containerd
    sudo vim /etc/containerd/config.toml   # verify the nvidia runtime entries
    sudo systemctl restart containerd
  • Step 3. Install the nvidia-device-plugin (deployed as a DaemonSet on the Kubernetes cluster)

  • Step 4. Build the traininghost/pipelinegpuimage image

    • We built a new base image that recognizes the configured GPU, so that the ML pipeline components can use the GPU properly.

    • To build the required image, refer to the Dockerfile and requirement.txt provided at the following link, or modify the pipeline image available in the existing aimlfw-dep.

  • Step 5. Verify GPU usage with nerdctl

    • Run nvidia-smi in a container via nerdctl (e.g., nerdctl run --gpus all); if the output shows your GPU, the setup is complete.
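Step 4's base image can be pictured as a minimal Dockerfile along these lines. This is a hedged sketch, not the project's exact Dockerfile: the base image tag is an assumption, and requirement.txt refers to the file from the linked repository.

```dockerfile
# Sketch of a GPU-enabled pipeline base image (tag and contents are assumptions).
FROM tensorflow/tensorflow:2.12.0-gpu

WORKDIR /app
# requirement.txt comes from the linked repository (SDKs, CodeCarbon, etc.)
COPY requirement.txt .
RUN pip install --no-cache-dir -r requirement.txt
```

The GPU-enabled TensorFlow base image lets pipeline components pick up the device exposed by the nvidia-device-plugin without any extra driver setup inside the container.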

3. Getting Started

1. Viavi Dataset Insertion

  • Step 1. Download or copy the insert.py file available in the "File List" on this page

  • Step 2. Update the insert.py file with the appropriate values for your database (InfluxDB), including token, org, bucket, and the file location of the dataset (csv_file)

    • The bucket for the Viavi dataset must already be created

  • Step 3. Run insert.py

    • Inserting the full Viavi dataset takes about 10 minutes
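Conceptually, the insertion step converts each CSV row into an InfluxDB record. The sketch below is illustrative, not the actual insert.py: the measurement name "cell_metrics" matches the FeatureGroup setting on this page, but the exact tag/field schema is an assumption.

```python
# Illustrative sketch: convert Viavi CSV rows into InfluxDB line-protocol
# records (measurement "cell_metrics"; tag/field names are assumptions).
import csv
import io

def rows_to_line_protocol(csv_text, measurement="cell_metrics"):
    """Yield one line-protocol record per CSV row (Unix-second timestamps)."""
    for row in csv.DictReader(io.StringIO(csv_text)):
        # escape spaces and commas in tag values, as line protocol requires
        tag = row["Viavi.Cell.Name"].replace(" ", r"\ ").replace(",", r"\,")
        yield (f"{measurement},Viavi.Cell.Name={tag} "
               f"RRU.PrbUsedDl={float(row['RRU.PrbUsedDl'])} {int(row['time'])}")

sample = "time,Viavi.Cell.Name,RRU.PrbUsedDl\n1639150000,S10/B13/C3,42.0\n"
records = list(rows_to_line_protocol(sample))
# insert.py would then batch such records into the configured bucket via
# influxdb_client's WriteApi, using the token/org set in the script.
```

The real script writes through the InfluxDB client SDK rather than raw line protocol, but the record shape it produces is equivalent.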

2. Setting FeatureGroup

  • Refer to the image below, and make sure to set _measurement to "cell_metrics"

Viavi_featuregroup.png
Traffic Forecasting FeatureGroup

 

3. Upload AI/ML Pipeline Script (Jupyter Notebook)

  • Step 1. Download the pipeline script(pipeline.ipynb) provided in the “File List”

  • Step 2. Modify the pipeline script to satisfy your own requirements

    • Set data features required for model training (using FeatureStoreSdk)

      • We used the RRU_PrbUsedDl column from the Viavi dataset, extracting data at 30-minute intervals to capture meaningful patterns in the traffic data

    • Write a TensorFlow-based AI/ML model script

      • We used an LSTM (Long Short-Term Memory) model to predict downlink traffic

      • You can add other model prediction-accuracy metrics (e.g., RMSE, MAE, MAPE)

    • Configure Energy and CO2 emission tracking for the Green Network use case using CodeCarbon

      • We report:

        • Training duration, RAM/CPU/GPU energy consumption, and CO2 emissions

    • Upload the trained model along with its metrics (using ModelMetricsSdk)

  • Step 3. Compile the pipeline code to generate a Kubeflow pipeline YAML file
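The data preparation and evaluation in Step 2 can be sketched as follows. This is an illustrative outline under stated assumptions: the window size, horizon, and toy series are made up, and a naive last-value baseline stands in for the Keras LSTM so the sketch stays self-contained; the real notebook pulls RRU_PrbUsedDl via FeatureStoreSdk and trains the LSTM instead.

```python
# Sketch of sliding-window preparation plus RMSE/MAE/MAPE evaluation.
# Window/horizon values and the toy series are assumptions for illustration.
import numpy as np

def make_windows(series, window=48, horizon=1):
    """Slice a 1-D traffic series into (X, y) supervised pairs.

    With 30-minute samples, window=48 means one day of history per input."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true, y_pred):
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

# Toy PRB-usage-like series: daily sinusoid plus noise, one week of samples.
rng = np.random.default_rng(0)
t = np.arange(48 * 7)
series = 50 + 30 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 2, t.size)

X, y = make_windows(series)        # X: (n, 48), y: (n,)
naive_pred = X[:, -1]              # last-value baseline in place of the LSTM

# A Keras LSTM would replace the baseline, e.g.:
#   model = tf.keras.Sequential([tf.keras.layers.LSTM(64, input_shape=(48, 1)),
#                                tf.keras.layers.Dense(1)])
# CodeCarbon (pip install codecarbon) wraps training to report energy and CO2:
#   from codecarbon import EmissionsTracker
#   tracker = EmissionsTracker(); tracker.start()
#   model.fit(X[..., None], y, epochs=10)
#   emissions_kg = tracker.stop()   # RAM/CPU/GPU energy is also logged
metrics = {"RMSE": rmse(y, naive_pred), "MAE": mae(y, naive_pred),
           "MAPE": mape(y, naive_pred)}
```

The resulting metrics dictionary is what gets uploaded alongside the model via ModelMetricsSdk.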

 

4. TrainingJob

  • Set the TrainingJob name in lowercase

  • Configure the Feature Filter

    • For the query to work correctly, use backticks (`) to specify a specific cell site for filtering (e.g., `Viavi.Cell.Name` == "S10/B13/C3")

  • Refer to the image below

 

5. Result

  • These logs can be reviewed in the Kubeflow pod created during training execution; the details that can be checked are as follows:

6. Load Model

7. Comparison

4. File list

  • CellReports.csv
    The file contains traffic data from Viavi.

  • insert.py
    The file processes the dataset and inserts the data into InfluxDB.
    (Changes required: DATASET_PATH, INFLUXDB_IP, INFLUXDB_TOKEN)

  • pipeline.ipynb
    The file defines the model structure and training process.

  • The YAML file is used for deploying the model inference service.

  • The script is used for executing the model prediction.

  • input.json
    The JSON file is used for inference.

5. Example

  • Input data

 

  • Output data
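To illustrate how the input and output fit together, the sketch below assembles an inference request body from a recent traffic window. The "instances" layout follows the common TensorFlow Serving / KServe v1 convention; the actual input.json used by this use case may differ, so treat the field names and shapes as assumptions.

```python
# Hypothetical sketch: build an inference request body from the most recent
# traffic window. The "instances" field follows the TF-Serving v1 convention;
# the project's actual input.json may use a different layout.
import json

def build_request(window):
    """window: list of PRB-usage samples, oldest first."""
    # one instance, shaped (timesteps, 1) as an LSTM input expects
    return json.dumps({"instances": [[[v] for v in window]]})

body = build_request([41.0, 43.5, 40.2])
parsed = json.loads(body)
```

The deployed inference service would return the predicted next PRB-usage value(s) for the cell, which drives the ON/OFF decision described in the introduction.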

Contributors

  • Peter Moonki Hong - Samsung

  • Taewan Kim - Samsung

  • Corbin(Geon) Kim - Kyunghee Univ. MCL

  • Sungjin Lee - Kyunghee Univ. MCL

  • Hyuksun Kwon - Kyunghee Univ. MCL

  • Hoseong Choi - Kyunghee Univ. MCL

Version History

Date        Ver.   Author                                                      Comment

2024-12-10  1.0.0  Corbin(Geon) Kim, Sungjin Lee, Hyuksun Kwon, Hoseong Choi

2025-01-08  1.0.1  Sungjin Lee, Hyuksun Kwon                                   Corrected the input.json file