INF CI/CD and Integration Deployment


INF Jenkins Jobs


Environment

Testing O2 IMS implementations across two platforms: OpenShift-KNI (PTI-RTP) and StarlingX (PTI-O2). The objective is to validate baremetal provisioning workflows and to manage real RAN workloads through the O2 interfaces.

Test Infrastructure:

  • Newton (Supermicro): Jumphost and O-Cloud Manager (OKD single-node)

  • Galileo (Asus): Target for StarlingX O-Cloud Manager

  • Joule (Dell R750): RT-capable node for gNB deployment via O2

  • Lavoisier (Supermicro): Reference gNB environment (RHEL 9.2, vanilla Kubernetes)

Important DNS Records

Name                                                                         Endpoint
api.ocloud-vm-okd-aio.lab.local                                              192.168.8.53
console-openshift-console.apps.ocloud-vm-okd-aio.lab.local                   192.168.8.53
o2ims.apps.ocloud-vm-okd-aio.lab.local                                       192.168.8.53
assisted-image-service-multicluster-engine.apps.ocloud-vm-okd-aio.lab.local  192.168.8.53
oauth-openshift.apps.ocloud-vm-okd-aio.lab.local                             192.168.8.53
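All of these records resolve to the HAProxy frontend on the jumphost. As a sketch, they could be served with a dnsmasq fragment like the following (hypothetical file path; assumes dnsmasq is the lab resolver at 192.168.8.72 — adjust to whatever DNS server the lab actually runs):

```
# /etc/dnsmasq.d/ocloud.conf — hypothetical fragment on the lab resolver
address=/api.ocloud-vm-okd-aio.lab.local/192.168.8.53
# One wildcard entry covers every *.apps record above
# (console, o2ims, oauth-openshift, assisted-image-service, ...)
address=/.apps.ocloud-vm-okd-aio.lab.local/192.168.8.53
```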

Proxy Setup

Node          Frontend  Backend              Remarks
192.168.8.53  *:443     192.168.123.11:443   OC Routes
              *:6443    192.168.123.11:6443  Kubernetes API
              :5050     192.168.123.11:5050  Ironic Inspector
              :6388     192.168.123.11:6388  Ironic Image Service
              :6385     192.168.123.11:6385  Ironic API
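The table above maps to HAProxy in TCP (TLS passthrough) mode, so the OKD router and API server terminate TLS themselves. A minimal haproxy.cfg sketch for the first two rows (section names are mine; addresses and ports are from the table):

```
frontend okd_routes
    bind *:443
    mode tcp
    default_backend okd_router
backend okd_router
    mode tcp
    server vm-aio 192.168.123.11:443 check

frontend okd_api
    bind *:6443
    mode tcp
    default_backend okd_apiserver
backend okd_apiserver
    mode tcp
    server vm-aio 192.168.123.11:6443 check

# Repeat the same frontend/backend pattern for the Ironic ports 5050, 6388, 6385.
```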

StarlingX Platform

StarlingX 10.0

AIO Bootstrap

system_mode: simplex
dns_servers:
  - 8.8.8.8
  - 8.8.4.4
external_oam_subnet: 192.168.8.0/24
external_oam_gateway_address: 192.168.8.9
external_oam_floating_address: 192.168.8.35
external_oam_node_0_address: 192.168.8.11
admin_username: admin
admin_password: **********
ansible_become_pass: *****

Controller Setup

OAM Interface Configuration

[sysadmin@localhost ~(keystone_admin)]$ system host-if-modify controller-0 enp5s0f0 -c platform
| Property | Value |
| ifname | enp5s0f0 |
| iftype | ethernet |
| ports | ['enp5s0f0'] |
| imac | 18:31:bf:7d:ce:8e |
| imtu | 1500 |
| ifclass | platform |
| ptp_role | none |
| aemode | None |
| schedpolicy | None |
| txhashpolicy | None |
| primary_reselect | None |
| uuid | 8b5c760a-20a6-4551-aba2-969208cd7723 |
| ihost_uuid | 25b378b0-99de-4a0e-b493-8fc0b419c9a2 |
| vlan_id | None |
| uses | [] |
| used_by | [] |
| created_at | 2025-11-06T15:34:08.879847+00:00 |
| updated_at | 2025-11-06T15:52:42.570230+00:00 |
| sriov_numvfs | 0 |
| sriov_vf_driver | None |
| max_tx_rate | None |
| ipv4_mode | None |
| ipv6_mode | None |
| accelerated | [True] |

[sysadmin@localhost ~(keystone_admin)]$ system interface-network-assign controller-0 enp5s0f0 oam
| Property | Value |
| hostname | controller-0 |
| uuid | c579fca3-c8b2-4aec-9610-413d1fcd84ac |
| ifname | enp5s0f0 |
| network_name | oam |

NTP Setup

[sysadmin@localhost ~(keystone_admin)]$ system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
| Property | Value |
| uuid | d4504a82-37b4-462c-a1a5-1c348966d700 |
| ntpservers | 0.pool.ntp.org,1.pool.ntp.org |
| isystem_uuid | 26b80906-99ec-48fd-b8cc-768781aa0964 |
| created_at | 2025-11-06T15:33:05.113521+00:00 |
| updated_at | None |

Host Filesystems

# Instances filesystem for VM ephemeral storage
[sysadmin@controller-0 ~(keystone_admin)]$ system host-fs-add controller-0 instances=100
| Property | Value |
| uuid | 8ecab35d-acc7-4db9-ac7b-6aecdb2b5c9d |
| name | instances |
| size | 100 |
| logical_volume | instances-lv |
| state | Creating (on unlock) |
| capabilities | {'functions': []} |
| created_at | 2025-11-06T16:49:59.675244+00:00 |
| updated_at | None |

# Filesystem allocation summary (1394GB total disk)
[sysadmin@controller-0 ~(keystone_admin)]$ system host-fs-list controller-0
| UUID | FS Name | Size in GiB | Logical Volume | State | Capabilities |
| 55bf0031-a10a-4022-8a57-195b156588b1 | backup    | 25  | backup-lv    | In-Use               | {'functions': []}          |
| 9dc5d124-986b-4a4c-bc2c-88d94a0790b7 | ceph      | 20  | ceph-lv      | Ready                | {'functions': ['monitor']} |
| 5a197360-3f1e-4a0f-a094-24cef6c0e841 | docker    | 60  | docker-lv    | In-Use               | {'functions': []}          |
| 8ecab35d-acc7-4db9-ac7b-6aecdb2b5c9d | instances | 100 | instances-lv | Creating (on unlock) | {'functions': []}          |
| 16d11eda-18ca-45b0-9a96-dd4e16d0fb22 | kubelet   | 10  | kubelet-lv   | In-Use               | {'functions': []}          |
| 37d022c9-7225-44de-b977-3d805914166c | log       | 8   | log-lv       | In-Use               | {'functions': []}          |
| 88c58f0b-0835-4fab-b58b-6c8846055684 | root      | 20  | root-lv      | In-Use               | {'functions': []}          |
| d1c043fd-844a-4a4c-97b5-5cf03ee09199 | scratch   | 16  | scratch-lv   | In-Use               | {'functions': []}          |
| 8d7d4e47-e011-43a9-84c5-3746835fed17 | var       | 20  | var-lv       | In-Use               | {'functions': []}          |

# Total allocated: ~279GB, Remaining: ~1115GB available
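The "~279GB allocated / ~1115GB remaining" comment can be sanity-checked from the sizes in the host-fs-list output above (a quick arithmetic sketch, not part of the deployment):

```python
# Sizes (GiB) taken from the `system host-fs-list controller-0` output above.
fs_sizes = {
    "backup": 25, "ceph": 20, "docker": 60, "instances": 100,
    "kubelet": 10, "log": 8, "root": 20, "scratch": 16, "var": 20,
}
disk_gib = 1394.375  # /dev/sda size from the Disk Layout section

allocated = sum(fs_sizes.values())
remaining = disk_gib - allocated
print(allocated, round(remaining))  # -> 279 1115, matching the summary comment
```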

Data Network Configuration

# Available network interfaces
[sysadmin@controller-0 ~(keystone_admin)]$ system host-port-list controller-0
| uuid | name | type | pci address | device | processor | accelerated | device type |
| 1ebc0427-9a69-49e4-8cb6-72abb43e0841 | enp3s0f0 | ethernet | 0000:03:00.0 | 0 | 0 | True | Ethernet Controller X710 for 10GbE SFP+ [1572] |
| a8dd67eb-3242-462c-b215-7a8748593f47 | enp3s0f1 | ethernet | 0000:03:00.1 | 0 | 0 | True | Ethernet Controller X710 for 10GbE SFP+ [1572] |
| be3d3554-0154-45f6-a58d-8e01e2b9a2a6 | enp5s0f0 | ethernet | 0000:05:00.0 | 0 | 0 | True | I350 Gigabit Network Connection [1521] |
| 7befa9f6-0e48-4764-b442-1ffbfb030137 | enp5s0f1 | ethernet | 0000:05:00.1 | 0 | 0 | True | I350 Gigabit Network Connection [1521] |
| 50f10d0b-bad5-475b-a6ea-efea3453c2ad | ens1f0 | ethernet | 0000:02:00.0 | 0 | 0 | True | 82580 Gigabit Network Connection [1516] |
| f1b98994-538d-4d33-8025-f0fb05eedaf8 | ens1f1 | ethernet | 0000:02:00.1 | 0 | 0 | True | 82580 Gigabit Network Connection [1516] |

# Configure data interface (enp5s0f0 reused for data after platform setup)
[sysadmin@controller-0 ~(keystone_admin)]$ system host-if-modify -m 1500 -n data0 -c data controller-0 8b5c760a-20a6-4551-aba2-969208cd7723
| Property | Value |
| ifname | data0 |
| iftype | ethernet |
| ports | ['enp5s0f0'] |
| imac | 18:31:bf:7d:ce:8e |
| imtu | 1500 |
| ifclass | data |
| ptp_role | none |
| aemode | None |
| schedpolicy | None |
| txhashpolicy | None |
| primary_reselect | None |
| uuid | 8b5c760a-20a6-4551-aba2-969208cd7723 |
| ihost_uuid | 25b378b0-99de-4a0e-b493-8fc0b419c9a2 |
| vlan_id | None |
| uses | [] |
| used_by | [] |
| created_at | 2025-11-06T15:34:08.879847+00:00 |
| updated_at | 2025-11-06T16:52:07.366991+00:00 |
| sriov_numvfs | 0 |
| sriov_vf_driver | None |
| max_tx_rate | None |
| ipv4_mode | static |
| ipv6_mode | disabled |
| accelerated | [True] |

# Create datanetwork
[sysadmin@controller-0 ~(keystone_admin)]$ system datanetwork-add datanet0 vlan
| Property | Value |
| id | 1 |
| uuid | 62599454-da9f-4021-b059-52ad713e9d7a |
| name | datanet0 |
| network_type | vlan |
| mtu | 1500 |
| description | None |

# Assign datanetwork to interface
[sysadmin@controller-0 ~(keystone_admin)]$ system interface-datanetwork-assign controller-0 8b5c760a-20a6-4551-aba2-969208cd7723 datanet0
| Property | Value |
| hostname | controller-0 |
| uuid | 2cb3e0bf-d3c4-4f93-947a-f1d1f9a132ae |
| ifname | data0 |
| datanetwork_name | datanet0 |

CPU Configuration for OpenStack

# Assign platform cores (6 cores on processor 0)
[sysadmin@controller-0 ~(keystone_admin)]$ system host-cpu-modify -f platform -p0 6 controller-0
# Output: Cores 0-5 (and their HT siblings) assigned to Platform

# Assign vSwitch cores (2 cores required for OVS-DPDK)
[sysadmin@controller-0 ~(keystone_admin)]$ system host-cpu-modify -f vswitch -p0 2 controller-0
# Output: Cores 6-7 (and their HT siblings) assigned to vSwitch

# Remaining cores (8-13 on both processors) assigned to Application (VMs)
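The resulting per-socket partitioning can be sketched as below (the 14-cores-per-processor count is an assumption inferred from cores 8-13 being the Application remainder):

```python
# Sketch of the core partitioning produced by the host-cpu-modify commands above.
CORES_PER_PROC = 14  # assumption: cores 0-13 per socket, per the outputs above

platform_cores = set(range(0, 6))   # -f platform -p0 6  -> cores 0-5
vswitch_cores  = set(range(6, 8))   # -f vswitch  -p0 2  -> cores 6-7
app_cores = set(range(CORES_PER_PROC)) - platform_cores - vswitch_cores

print(sorted(app_cores))  # -> [8, 9, 10, 11, 12, 13], left for Application (VMs)
```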

Storage Backend

# Rook-Ceph already configured
[sysadmin@controller-0 ~(keystone_admin)]$ system storage-backend-list
| uuid | name | backend | state | task | services | capabilities |
| 76763271-d927-4b39-ad82-dd41bfb9a2e6 | ceph-rook-store | ceph-rook | configuring-with-app | uploaded | block, filesystem | deployment_model: controller, replication: 1, min_replication: 1 |

Applications Status

[sysadmin@controller-0 ~(keystone_admin)]$ system application-list
| application | version | manifest name | manifest file | status | progress |
| cert-manager | 24.09-79 | cert-manager-fluxcd-manifests | fluxcd-manifests | applied | completed |
| dell-storage | 24.09-26 | dell-storage-fluxcd-manifests | fluxcd-manifests | uploaded | completed |
| nginx-ingress-controller | 24.09-66 | nginx-ingress-controller-fluxcd-manifests | fluxcd-manifests | applied | completed |
| oidc-auth-apps | 24.09-63 | oidc-auth-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed |
| platform-integ-apps | 24.09-144 | platform-integ-apps-fluxcd-manifests | fluxcd-manifests | uploaded | completed |
| rook-ceph | 24.09-71 | rook-ceph-fluxcd-manifests | fluxcd-manifests | uploaded | completed |

System Information

[sysadmin@controller-0 ~(keystone_admin)]$ cat /etc/build.info
SW_VERSION="24.09"
BUILD_TARGET="Host Installer"
BUILD_TYPE="Formal"
BUILD_ID="20250124T210100Z"
SRC_BUILD_ID="12"
JOB="STX_10.0_build_debian"
BUILD_BY="jenkins"
BUILD_NUMBER="13"
BUILD_HOST="yow2-wrcp2-lx"
BUILD_DATE="2025-01-24 21:01:00 +0000"

Disk Layout

[sysadmin@controller-0 ~(keystone_admin)]$ system host-disk-list controller-0
| uuid | device_node | device_num | device_type | size_gib | available_gib | rpm | serial_id | device_path |
| 70e317fb-502b-4b6d-9ae4-9eecd6a8ba87 | /dev/sda | 2048 | HDD | 1394.375 | 0.0 | Undetermined | 0008077b0dbf92c622588d0030f81201 | /dev/disk/by-path/pci-0000:81:00.0-scsi-0:2:0:0 |

 

StarlingX 10 on VM (Duplex)

  • Controller-1: 192.168.8.34

  • Controller-0: 192.168.8.33

  • Floating IP: 192.168.8.32

  • Worker Node: TBD → either Joule or Lavoisier

VMs

Nodes          CPU  Memory (GB)  Disk (GB)
controller-00  16   32           500
controller-01  16   32           500

Bootstrap

system_mode: duplex
dns_servers:
  - 8.8.8.8
  - 8.8.4.4
external_oam_subnet: 192.168.8.0/24
external_oam_gateway_address: 192.168.8.103
external_oam_floating_address: 192.168.8.32
external_oam_node_0_address: 192.168.8.33
external_oam_node_1_address: 192.168.8.34
admin_username: admin
admin_password: *****
ansible_become_pass: ****

 

 

Installation Issues

StarlingX 10.0 on Galileo

Attempted a simplex all-in-one installation. Bootstrap completes, but the Ceph storage backend installation fails consistently.

 

StarlingX 8.0 on Galileo

Bootstrap fails during the container image download phase: upstream mirrors are missing required images. The error occurs during the common/push-docker-images task.

 

StarlingX 10.0 Missing VG

  • Choosing virtio as the disk driver in Cockpit (instead of SATA) corrupts volume group creation, and bootstrap then fails while checking volume groups.


TASK [bootstrap/prepare-env : Check volume groups] ******************************************************************************
Tuesday 25 November 2025 10:57:18 +0000 (0:00:00.024) 0:00:13.814 ******
fatal: [localhost]: FAILED! => changed=true
  cmd:
  - vgdisplay
  - cgts-vg
  delta: '0:00:00.038214'
  end: '2025-11-25 10:57:19.038735'
  msg: non-zero return code
  rc: 5
  start: '2025-11-25 10:57:19.000521'
  stderr: |2-
    Volume group "cgts-vg" not found
    Cannot process volume group cgts-vg
  stderr_lines: <omitted>
  stdout: ''
  stdout_lines: <omitted>

PLAY RECAP **********************************************************************************************************************
localhost : ok=43 changed=8 unreachable=0 failed=1 skipped=78 rescued=0 ignored=0

Tuesday 25 November 2025 10:57:19 +0000 (0:00:00.551) 0:00:14.365 ******

common/prepare-env : stat ------------------------------------------------------------------------------------------------ 2.28s
common/prepare-env : Evaluate ansible_become_pass ------------------------------------------------------------------------ 0.89s
bootstrap/prepare-env : Create SSL CA cert directory --------------------------------------------------------------------- 0.71s
common/prepare-env : Append kubernetes extra configuration (extraArgs and extraVolumes) ---------------------------------- 0.69s
bootstrap/prepare-env : Check volume groups ------------------------------------------------------------------------------ 0.55s
bootstrap/prepare-env : Look for unmistakenly StarlingX packages --------------------------------------------------------- 0.53s
bootstrap/prepare-env : Check Docker status ------------------------------------------------------------------------------ 0.52s
bootstrap/prepare-env : Look for tmp kubernetes configuration file stats ------------------------------------------------- 0.52s
bootstrap/prepare-env : Clean up interface config files in /etc/network/interfaces.d/ ------------------------------------ 0.51s
bootstrap/prepare-env : Remove tmp kubernetes configuration file --------------------------------------------------------- 0.51s
common/validate-target : Retrieve software version number ---------------------------------------------------------------- 0.50s
common/validate-target : Retrieve system type ---------------------------------------------------------------------------- 0.50s
bootstrap/prepare-env : Check if bootstrap_finalized flag exists on host ------------------------------------------------- 0.50s
bootstrap/prepare-env : Look for openrc file ----------------------------------------------------------------------------- 0.49s
bootstrap/prepare-env : Look for last kubernetes configuration file stats ------------------------------------------------ 0.49s
bootstrap/prepare-env : Check initial config flag ------------------------------------------------------------------------ 0.49s
bootstrap/prepare-env : Fail if any of the mandatory configurations are not defined -------------------------------------- 0.12s
bootstrap/prepare-env : Copy the valid files referenced by the remaining overrides directly ------------------------------ 0.09s
bootstrap/prepare-env : Try to decode the content and write in the temp files if it's from a cert or key ----------------- 0.09s
bootstrap/prepare-env : Check if the files referenced by the remaining overrides exist ----------------------------------- 0.09s
sysadmin@localhost:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0                      7:0    0   256M  0 loop
`-luks_encrypted_vault   252:4    0   240M  0 crypt /var/luks/stx/luks_fs
sr0                       11:0    1   3.1G  0 rom
vda                      253:0    0   500G  0 disk
|-vda1                   253:1    0     1M  0 part
|-vda2                   253:2    0  29.3G  0 part  /var/rootdirs/opt/platform-backup
|-vda3                   253:3    0   300M  0 part  /boot/efi
|-vda4                   253:4    0     2G  0 part  /boot
`-vda5                   253:5    0 468.4G  0 part
  |-cgts--vg-root--lv    252:0    0    20G  0 lvm   /sysroot
  |-cgts--vg-var--lv     252:1    0    20G  0 lvm   /var
  |-cgts--vg-log--lv     252:2    0   7.8G  0 lvm   /var/log
  `-cgts--vg-scratch--lv 252:3    0  15.6G  0 lvm   /var/rootdirs/scratch

PTI-RTP: OKD (branch:master) 

Components              Branch/Version                       Remarks
pti-rtp                 master
oran-o2ims              osc-l-release                        Rebuild
cluster-group-upgrades  4.19.0
oran-hwmgr-plugin       main
siteconfig              2.14.0-SNAPSHOT-2025-05-19-21-04-46
stolostron-deploy       2.14.0-SNAPSHOT-2025-05-19-21-04-46

Jumphost Configuration

Newton runs CentOS Stream 9 as the jumphost, providing the infrastructure services described below.

O-Cloud Manager Deployment

  • domain: ocloud-vm-bmw.lab.local

Deployed using PTI-RTP repository (branch: l-release). The deployment creates a single-node OpenShift cluster (VM-AIO) on NAT network 192.168.123.0/24.

Network Architecture:

Lab Network (192.168.8.0/24) → 192.168.8.53:443 (HAProxy TCP passthrough) → VM 192.168.123.11:443 (OKD Router)

Since the cluster runs as a VM behind a NAT network on the jumphost, a proxy is needed to expose it to the outer network.

O2IMS Operator

API Version Incompatibility

Pre-built operator images (versions 4.18.0-4.21.0) fail to start with error:

no matches for kind "Inventory" in version "ocloud.openshift.io/v1alpha1"

This occurs because operator binaries were compiled against the old API group ocloud.openshift.io/v1alpha1, but CRDs in osc-l-release use o2ims.oran.openshift.io/v1alpha1.

Resolution

After rebuilding the operator images against the osc-l-release CRDs, all operator components are functional and the API endpoints respond correctly to infrastructure inventory queries.
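An inventory query can be exercised through the o2ims route from the DNS table. A minimal sketch (the `/o2ims-infrastructureInventory/v1/...` prefix follows the O-RAN O2 IMS API convention; the bearer-token and unverified-TLS details are assumptions for this self-signed lab):

```python
import json
import ssl
import urllib.request

O2IMS_BASE = "https://o2ims.apps.ocloud-vm-okd-aio.lab.local"  # route from the DNS table

def inventory_url(base: str, resource: str) -> str:
    # O-RAN O2 IMS infrastructure-inventory API prefix (assumed version v1)
    return f"{base}/o2ims-infrastructureInventory/v1/{resource}"

def get_resource_pools(token: str):
    # Hypothetical call; the lab CA is self-signed, hence the unverified context.
    ctx = ssl._create_unverified_context()
    req = urllib.request.Request(
        inventory_url(O2IMS_BASE, "resourcePools"),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)

print(inventory_url(O2IMS_BASE, "resourcePools"))
```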

Baremetal Provisioning

From this commit: https://gerrit.o-ran-sc.org/r/c/pti/rtp/+/15169

Missing source-crs for policytemplates

 

ProvisioningRequest Workflow

apiVersion: o2ims.provisioning.oran.org/v1alpha1
kind: ProvisioningRequest
metadata:
  labels:
    app.kubernetes.io/name: provisioningrequest
    app.kubernetes.io/instance: provisioningrequest-sample
    app.kubernetes.io/part-of: oran-o2ims
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: oran-o2ims
  name: 123e4567-e89b-12d3-a456-426614174000
spec:
  name: "sno-du-1"
  description: "Provisioning request for setting up a Single Node OKD (SNO) cluster in the test environment."
  templateName: sno-du
  templateVersion: okd-v4-19
  templateParameters:
    nodeClusterName: "sno-du-1"
    oCloudSiteId: "oransc-example-lab"
    policyTemplateParameters:
      sriov-network-pfNames-1: '["ens7f0"]'
      sriov-network-vlan-1: "103"
      cpu-isolated: "0-1,28-29"
      cpu-reserved: "2-10"
    clusterInstanceParameters:
      baseDomain: oran-sc.example.lab
      clusterName: sno-du-1
      extraLabels:
        ManagedCluster:
          cluster-version: "v4.19"
          sno-du-policy: "v1"
      machineNetwork:
        - cidr: 192.168.8.0/24
      nodes:
        - bmcAddress: redfish-virtualmedia://192.168.8.222/redfish/v1/Systems/System.Embedded.1
          ironicInspect: "disabled"
          bmcCredentialsDetails:
            username: ******************
            password: **********************
          bootMACAddress: 00:62:0b:4a:6e:cb
          extraLabels:
            BareMetalHost:
              resources.clcm.openshift.io/resourcePoolId: du-pool
              resources.clcm.openshift.io/siteId: oransc-example-lab
          extraAnnotations:
            BareMetalHost:
              agent.install.openshift.io/kernel-arguments: "agent.url=https://assisted-service-multicluster-engine.apps.ocloud-vm-bmw.lab.local"
              inspect.metal3.io/disable-bmc-inspection: "true"
          hostName: master-0-sno.ocloud-vm-bmw.lab.local
          nodeLabels:
            node-role.kubernetes.io/infra: ""
            node-role.kubernetes.io/master: ""
          nodeNetwork:
            config:
              dns-resolver:
                config:
                  server:
                    - 192.168.8.72
              interfaces:
                - ipv4:
                    address:
                      - ip: 192.168.8.123
                        prefix-length: 25
                    dhcp: false
                    enabled: true
                  ipv6:
                    dhcp: false
                    enabled: false
                  name: eno12409
                  type: ethernet
              routes:
                config:
                  - destination: 0.0.0.0/0
                    next-hop-address: 192.168.8.9
                    next-hop-interface: eno12409
                    table-id: 254
            interfaces:
              - name: eno12409
                macAddress: 00:62:0b:4a:6e:cb
      serviceNetwork:
        - cidr: 172.30.0.0/16
      sshPublicKey: **************************
status:
  conditions: []

Testing reveals a workflow failure during baremetal node provisioning. The sequence proceeds as follows:

  1. Submit ProvisioningRequest via O2 API

  2. Controller creates InfraEnv and BareMetalHost resources

  3. Target machine boots inspection ISO via Redfish virtual media

  4. Assisted Installer agent starts on target machine

  5. Machine remains in inspection phase indefinitely
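Step 1 of the sequence above can be sketched programmatically. A minimal builder mirroring the sample manifest's fields (the helper name and parameter set are mine; field names come from the ProvisioningRequest above):

```python
import uuid

def make_provisioning_request(name: str, template: str, version: str, params: dict) -> dict:
    """Assemble a ProvisioningRequest body mirroring the sample manifest above.

    metadata.name is a UUID, as in the sample, since the provisioning API uses
    it as the request identifier.
    """
    return {
        "apiVersion": "o2ims.provisioning.oran.org/v1alpha1",
        "kind": "ProvisioningRequest",
        "metadata": {"name": str(uuid.uuid4())},
        "spec": {
            "name": name,
            "templateName": template,
            "templateVersion": version,
            "templateParameters": params,
        },
    }

pr = make_provisioning_request(
    "sno-du-1", "sno-du", "okd-v4-19",
    {"nodeClusterName": "sno-du-1", "oCloudSiteId": "oransc-example-lab"},
)
print(pr["spec"]["templateName"])  # -> sno-du
```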

The assisted-image-service ISO is injected properly, and all local domain names are reachable from CoreOS on the target baremetal.

 

  • Attempted to manually patch BareMetalHost resources to force a transition to the installation phase. This approach failed due to fundamental architectural issues:

    • Patching the BareMetalHost with an installation ISO URL causes the BMC to enter a boot loop

    • The inspection ISO (CoreOS-based) only runs ironic-agent and never starts agent.service

    • Manually starting agent.service allows bootstrap to proceed, but the installation process writes CoreOS directly to disk, bypassing the intended cluster installation workflow

PTI-RTP: OKD (branch:osc-l-release) 

Components              Branch/Version                       Remarks
pti-rtp                 l-release
oran-o2ims              osc-l-release                        Rebuild
cluster-group-upgrades  4.19.0
oran-hwmgr-plugin       main
siteconfig              2.14.0-SNAPSHOT-2025-05-19-21-04-46
stolostron-deploy       2.14.0-SNAPSHOT-2025-05-19-21-04-46

O-Cloud Manager

  • domain: ocloud-vm-okd-aio.lab.local

  • Installed

O2IMS Operator

  • Installed

  • o2ims also needs a rebuild; the default image complains about the wrong CR API group

 

Baremetal Provisioning

  • From this commit: https://gerrit.o-ran-sc.org/r/c/pti/rtp/+/15169

  • The BMH is not creating anything

  • assisted-installer-controller appears to be down; its log shows it deliberately exits because the cluster is SNO and the invoker is agent-installer:

time="2025-11-20T05:49:35Z" level=info msg="Start running Assisted-controller. Configuration is:\n struct ControllerConfig {\n\tClusterID: \"d4e36ee7-edd3-4de3-a70f-84221e884e62\",\n\tURL: \"http://192.168.123.11:8090/\",\n\tPullSecretToken: <SECRET>,\n\tSkipCertVerification: true,\n\tCACertPath: \"\",\n\tNamespace: \"assisted-installer\",\n\tOpenshiftVersion: \"4.19.0-okd-scos.0\",\n\tControlPlaneCount: 1,\n\tWaitForClusterVersion: false,\n\tMustGatherImage: \"quay.io/okd/scos-content@sha256:f738dacc63f9be039be41b5fa5f467b3f71cce070ea736dec2882f0722dad9d4\",\n\tDryRunEnabled: false,\n\tDryFakeRebootMarkerPath: \"\",\n\tDryRunClusterHostsPath: \"\",\n\tNotifyNumReboots: true,\n\tParsedClusterHosts: config.DryClusterHosts(nil),\n}" func=main.main file="/go/src/github.com/openshift/assisted-installer/src/main/assisted-installer-controller/assisted_installer_main.go:80"
time="2025-11-20T05:49:35Z" level=info msg="Using proxy {HTTPProxy: HTTPSProxy: NoProxy:} to set env-vars for installer-controller pod" func="github.com/openshift/assisted-installer/src/k8s_client.(*k8sClient).SetProxyEnvVars" file="/go/src/github.com/openshift/assisted-installer/src/k8s_client/k8s_client.go:425"
time="2025-11-20T05:49:35Z" level=warning msg="Certificate verification is turned off. This is not recommended in production environments" func=github.com/openshift/assisted-installer/src/inventory_client.CreateInventoryClientWithDelay file="/go/src/github.com/openshift/assisted-installer/src/inventory_client/inventory_client.go:93"
time="2025-11-20T05:49:35Z" level=info msg="Making sure service dns-default can reserve the .10 address" func="github.com/openshift/assisted-installer/src/assisted_installer_controller.(*controller).HackDNSAddressConflict" file="/go/src/github.com/openshift/assisted-installer/src/assisted_installer_controller/assisted_installer_controller.go:310"
time="2025-11-20T05:49:35Z" level=info msg="openshift-install-manifests ConfigMap attribute invoker = agent-installer" func=github.com/openshift/assisted-installer/src/common.GetInvoker file="/go/src/github.com/openshift/assisted-installer/src/common/common.go:347"
time="2025-11-20T05:49:35Z" level=warning msg="cluster is SNO and invoker = agent-installer, skipping assisted-installer-controller" func=main.main file="/go/src/github.com/openshift/assisted-installer/src/main/assisted-installer-controller/assisted_installer_main.go:157"
stream closed: EOF for assisted-installer/assisted-installer-controller-6qm7w (assisted-installer-controller)

Metal3 Patch

apiVersion: metal3.io/v1alpha1
kind: Provisioning
metadata:
  name: provisioning-configuration
spec:
  provisioningNetwork: Disabled
  watchAllNamespaces: true

 

  • ironic-agent needs these ports: 6385, 6388, 5050. If you are behind a proxy, add them.

    • This won't work, since Ironic detects the KVM IP every time, and updating the deployment is not possible because it will always reconcile

    • Curl from the external network to the HAProxy-fronted Ironic instance receives a response, but the advertised links are the KVM host links:

      infidel ~ → curl -sk https://192.168.8.53:6385 | jq
      {
        "name": "OpenStack Ironic API",
        "description": "Ironic is an OpenStack project which enables the provision and management of baremetal machines.",
        "default_version": {
          "id": "v1",
          "links": [
            {
              "href": "https://192.168.123.11:6385/v1/",
              "rel": "self"
            }
          ],
          "status": "CURRENT",
          "min_version": "1.1",
          "version": "1.95"
        },
        "versions": [
          {
            "id": "v1",
            "links": [
              {
                "href": "https://192.168.123.11:6385/v1/",
                "rel": "self"
              }
            ],
            "status": "CURRENT",
            "min_version": "1.1",
            "version": "1.95"
          }
        ]
      }
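The curl output illustrates the core problem: Ironic advertises its internal NAT address even when reached through the proxy. A rewriting reverse proxy would have to rewrite every advertised href, roughly like this sketch (addresses from the proxy table; this only illustrates the rewrite, it does not solve the reconcile problem):

```python
INTERNAL = "192.168.123.11"  # NAT address Ironic advertises in its links
EXTERNAL = "192.168.8.53"    # HAProxy address that external clients actually use

def rewrite_links(obj):
    """Recursively replace the internal host in every string value of a
    JSON-like structure, as a fixing proxy would need to do for the Ironic
    version document shown above."""
    if isinstance(obj, dict):
        return {k: rewrite_links(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [rewrite_links(v) for v in obj]
    if isinstance(obj, str):
        return obj.replace(INTERNAL, EXTERNAL)
    return obj

sample = {"default_version": {"links": [{"href": f"https://{INTERNAL}:6385/v1/", "rel": "self"}]}}
fixed = rewrite_links(sample)
print(fixed["default_version"]["links"][0]["href"])  # -> https://192.168.8.53:6385/v1/
```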

Patched Version PTI-RTP

https://github.com/motangpuar/pti-rtp/commit/2c1a007f4979c80a9bd7ba177b7d0af36124db05

  • Allow bridge attachment for the master node

  • MAC and IP are dictated by the inventory templates; libvirt will generate machine templates accordingly

  1. Group templates:

     ---
     ocloud_infra: vm
     ocloud_platform: okd
     ocloud_topology: aio
     ocloud_cluster_name: "ocloud-vm-bmw"
     ocloud_domain_name: "lab.local"
     ocloud_network_mode: "bridge"
     ocloud_net_bridge: "bridge0"
     ocloud_net_name: "br0-okd"
     ocloud_net_cidr: "192.168.8.0/24"
     ocloud_cluster_net_cidr: "10.128.0.0/14"
     ocloud_cluster_net_hostprefix: 23
     ocloud_service_net_cidr: "172.30.0.0/16"
     ocloud_network_type: "OVNKubernetes"
     # DNS servers
     ocloud_dns_servers:
       - "192.168.8.72"
     # Optional NTP servers
     ocloud_ntp_servers:
       - "time.google.com"

  2. Statically define IP and MAC in inventory/host_vars/master-0-vm:

     ---
     role: master
     ip_address: 192.168.8.210
     ocloud_infra_vm_mem_gb: 48
     mac_addresses:
       ens3: "52:54:00:ab:cd:01"
     network_config:
       interfaces:
         - name: ens3
           type: ethernet
           state: up
           ipv4:
             enabled: true
             dhcp: false
             address:
               - ip: 192.168.8.210
                 prefix-length: 24
       dns-resolver:
         config:
           server:
             - 192.168.8.72
       routes:
         config:
           - destination: 0.0.0.0/0
             next-hop-address: 192.168.8.103
             next-hop-interface: ens3

OKD Hub-Spoke VM

Inventories

==> ocloud-vm-okd-aio-bridge/group_vars/all/vars.yml <==
---
ocloud_infra: vm
ocloud_platform: okd
ocloud_topology: aio
ocloud_platform_okd_ssh_pubkey: ~
ocloud_cluster_name: "ocloud-vm-bmw"
ocloud_domain_name: "lab.local"
ocloud_network_mode: "bridge"
ocloud_net_bridge: "bridge0"
ocloud_net_name: "br0-okd"
ocloud_net_cidr: "192.168.8.0/24"
ocloud_cluster_net_cidr: "10.128.0.0/14"
ocloud_cluster_net_hostprefix: 23
ocloud_service_net_cidr: "172.30.0.0/16"
ocloud_network_type: "OVNKubernetes"
# DNS servers
ocloud_dns_servers:
  - "192.168.8.72"
# Optional NTP servers
ocloud_ntp_servers:
  - "time.google.com"

==> ocloud-vm-okd-aio-bridge/group_vars/all/vault.yml <==
---
# Encrypt with `ansible-vault encrypt vault.yml` before adding secrets
# Uncomment to override default pull secret from ocloud_platform_okd role:
#ocloud_platform_okd_pull_secret: ~

==> ocloud-vm-okd-aio-bridge/host_vars/master-0-vm/vars.yml <==
---
role: master
ip_address: 192.168.8.210
ocloud_infra_vm_mem_gb: 48
mac_addresses:
  ens3: "52:54:00:ab:cd:01"
network_config:
  interfaces:
    - name: ens3
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
          - ip: 192.168.8.210
            prefix-length: 24
  dns-resolver:
    config:
      server:
        - 192.168.8.72
  routes:
    config:
      - destination: 0.0.0.0/0
        next-hop-address: 192.168.8.103
        next-hop-interface: ens3

==> ocloud-vm-okd-aio-bridge/hosts.yml <==
deployer:
  hosts:
    localhost:
      ansible_connection: local
kvm:
  hosts:
    localhost:
      ansible_connection: local
ocloud:
  hosts:
    master-0-vm: