INF StarlingX O-Cloud Getting Started / Sample test process
This document describes how to install and deploy the INF bare metal All-in-one Duplex configuration on two HP ProLiant DL380p Gen8 servers.
1 Installation for the first server from the O-RAN INF ISO image
- The INF ISO image (inf-image-aio-installer-intel-corei7-64.iso) can be downloaded or built from source
- Please see the Developer-Guide file for how to build the image
- The image is a live ISO with a CLI installer
1.1 Burn the image to the USB device
- Assume the USB device is /dev/sdX here
- $ sudo dd if=/path/to/inf-image-aio-installer-intel-corei7-64.iso of=/dev/sdX bs=1M
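To reduce the risk of overwriting the wrong disk, you can identify the USB device before running dd and flush buffers afterwards (optional sanity checks using standard Linux tools, not part of the original procedure):
# identify the USB stick by size/model/transport before running dd
lsblk -o NAME,SIZE,MODEL,TRAN
# after dd completes, flush buffers before removing the USB device
sync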
1.2 Install the first server (controller-0)
- Reboot the target from the USB device.
- Select "All-in-one Graphics console" or "All-in-one Serial console install" and press ENTER
- Start the auto installation
- The server will reboot automatically after the installation completes
2 Configuration and bootstrap of controller-0
2.1 Log in for the first time with "sysadmin/sysadmin" and change the password
2.2 Set up the OAM network before bootstrap
export OAM_DEV=eno3
export CONTROLLER0_OAM_CIDR=128.224.210.110/24
export DEFAULT_OAM_GATEWAY=128.224.210.1
sudo ip address add $CONTROLLER0_OAM_CIDR dev $OAM_DEV
sudo ip link set up dev $OAM_DEV
sudo ip route add default via $DEFAULT_OAM_GATEWAY dev $OAM_DEV
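As a quick sanity check, verify that the OAM address and default route are in place and that the gateway is reachable (optional; standard Linux tools):
ip addr show dev $OAM_DEV
ip route show default
ping -c 3 $DEFAULT_OAM_GATEWAY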
2.3 Log in to the server through SSH as "sysadmin"
2.4 Prepare the localhost.yml for bootstrap (adjust the addresses below to your lab network; the external OAM values should be consistent with the temporary OAM settings from step 2.2)
cat << EOF > localhost.yml
system_mode: duplex
management_subnet: 192.168.18.0/24
management_start_address: 192.168.18.2
management_end_address: 192.168.18.50
management_gateway_address: 192.168.18.1
external_oam_subnet: 128.224.210.0/24
external_oam_gateway_address: 128.224.210.1
external_oam_floating_address: 128.224.210.110
external_oam_node_0_address: 128.224.210.111
external_oam_node_1_address: 128.224.210.112
EOF
2.5 Run the Ansible bootstrap
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml -vvv
After the bootstrap finishes successfully, the Ansible play recap reports no failed tasks.
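If you want to keep the bootstrap output for later troubleshooting, the same playbook can be run through tee (optional; standard shell redirection, not part of the original procedure):
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml -vvv 2>&1 | tee bootstrap.log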
2.6 Configure controller-0
- Acquire admin credentials:
source /etc/platform/openrc
- Configure the OAM and MGMT interfaces of controller-0 and specify the attached networks:
OAM_IF=eno3
MGMT_IF=eno1
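# The next commands detach the platform networks that bootstrap assigned to the loopback interface (lo) so they can be reassigned to the physical interfaces below (explanatory comment added for clarity)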
system host-if-modify controller-0 lo -c none
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
for UUID in $IFNET_UUIDS; do
system interface-network-remove ${UUID}
done
system host-if-modify controller-0 $OAM_IF -n oam0
system host-if-modify controller-0 $MGMT_IF -n pxeboot0
system host-if-modify controller-0 oam0 -c platform
system interface-network-assign controller-0 oam0 oam
system host-if-modify controller-0 pxeboot0 -c platform
system interface-network-assign controller-0 pxeboot0 pxeboot
system host-if-add -V 18 controller-0 mgmt0 vlan pxeboot0
system interface-network-assign controller-0 mgmt0 mgmt
system host-if-add -V 19 controller-0 cluster0 vlan pxeboot0
system interface-network-assign controller-0 cluster0 cluster-host
- Configure NTP Servers for network time synchronization:
system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
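The configured servers can then be reviewed with system ntp-show (assuming this subcommand is available in your StarlingX release):
system ntp-show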
Then check the interface status.
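For example, with the same commands used elsewhere in this procedure:
system host-if-list controller-0
system interface-network-list controller-0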
- Add an OSD on controller-0 for Ceph:
system host-disk-list controller-0
system host-disk-list controller-0 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-disk-list controller-0 | awk '/\/dev\/sdc/{print $2}' | xargs -i system host-stor-add controller-0 {}
system host-stor-list controller-0
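Once controller-0 is unlocked in the next step and the storage function comes up, overall Ceph health can be checked with the standard Ceph client (assuming it is available on the controller):
ceph -s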
2.7 Unlock controller-0
system host-unlock controller-0
Controller-0 will reboot to apply configuration changes and come into service. This can take 5-10 minutes, depending on the performance of the host machine.
Once the controller comes back up, check the status of controller-0; it should show unlocked/enabled/available:
source /etc/platform/openrc
system host-list
3 Installation for the second server (controller-1)
3.1 Power on the controller-1 server and force it to network boot
3.2 As controller-1 boots, a message appears on its console instructing you to configure the personality of the node
3.3 On the console of controller-0, list hosts to see newly discovered controller-1 host (hostname=None)
system host-list
3.4 Using the host id, set the personality of this host to 'controller':
system host-update 2 personality=controller
3.5 Wait for the software installation on controller-1 to complete, for controller-1 to reboot, and for controller-1 to show as locked/disabled/online in 'system host-list'.
This can take 5-10 minutes, depending on the performance of the host machine.
3.6 Configure controller-1
OAM_IF=eno3
MGMT_IF=eno1
system host-if-modify controller-1 $OAM_IF -n oam0
system host-if-modify controller-1 oam0 -c platform
system interface-network-assign controller-1 oam0 oam
system host-if-add -V 19 controller-1 cluster0 vlan pxeboot0
system interface-network-assign controller-1 cluster0 cluster-host
system host-if-list controller-1
system host-disk-list controller-1
system host-disk-list controller-1 | awk '/\/dev\/sdb/{print $2}' | xargs -i system host-stor-add controller-1 {}
system host-disk-list controller-1 | awk '/\/dev\/sdc/{print $2}' | xargs -i system host-stor-add controller-1 {}
system host-stor-list controller-1
3.7 Unlock controller-1
system host-unlock controller-1
system host-list
4 Show the High Availability status
sm-dump
sm-dump shows the status of the two controllers: services on controller-0 are in active mode, and services on controller-1 are in standby mode.