Release J - Run in Kubernetes
This wiki page describes how to deploy the NONRTRIC components within a Kubernetes cluster.
NONRTRIC Architecture
NONRTRIC comprises several components:
- Control Panel
- Policy Management Service
- Information Coordinator Service
- Non RT RIC Gateway (reuse of existing kong proxy is also possible)
- R-App catalogue Service
- Enhanced R-App catalogue Service
- A1 Simulator (3 A1 interface versions - previously called Near-RT RIC A1 Interface)
- A1 Controller (currently using SDNC from ONAP)
- Helm Manager
- Dmaap Adapter Service
- Dmaap Mediator Service
- Use Case rApp O-DU Slice Assurance
- Use Case rApp O-RU Closed Loop Recovery
- CAPIF core
- RANPM
- RAPP Manager
- DME Participant
In the it/dep repo, there are helm charts for each of these components. In addition, there is a chart called nonrtric, which is a composition of the components above.
Prerequisites
- kubernetes (v1.19+), local or remote
- kubectl utility, configured towards a connected kubernetes cluster
- istio, installed on the cluster
- docker and docker-compose (latest)
- git
- Text editor, e.g. vi, notepad, nano, etc.
- helm (helm3)
- bash
- library 'envsubst' must be installed (check installation using command: type envsubst)
- library 'jq' must be installed (check installation using command: type jq)
- keytool
- openssl
- ChartMuseum, to store the helm charts on the server. Multiple options are available:
  - Execute the install script: ./dep/smo-install/scripts/layer-0/0-setup-charts-museum.sh
  - Install chartmuseum manually on port 18080 (https://chartmuseum.com/#Instructions, https://github.com/helm/chartmuseum)
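The command-line prerequisites above can be sanity-checked with a small shell loop (a convenience sketch, not part of the official scripts):

```shell
# Report which of the required command-line tools are present on PATH
for tool in kubectl helm docker git envsubst jq keytool openssl; do
  if type "$tool" >/dev/null 2>&1; then
    echo "OK:      $tool"
  else
    echo "MISSING: $tool"
  fi
done
```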
These instructions work on linux/MacOS or on windows via WSL using a local or remote kubernetes cluster.
It is recommended to run ranpm on a kubernetes cluster instead of local docker-desktop etc. as the deployment is somewhat resource intensive.
Requirement on Kubernetes
The demo set can be run on a local or remote kubernetes cluster. Kubectl must be configured to point to the applicable kubernetes instance. Nodeports exposed by the kubernetes instance must be accessible from the local machine; in practice, the kubernetes control plane IP needs to be reachable from the local machine. (The installation scripts take care of getting a token from Istio, using dep/ranpm/install/scripts/kube_get_controlplane_host.sh to determine the base URL, so the K8s control plane must be accessible from localhost.)
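To check which control plane address your local machine will need to reach, you can inspect the current kubectl context (the sed host extraction is a sketch; the official scripts derive this via dep/ranpm/install/scripts/kube_get_controlplane_host.sh):

```shell
# Print the API server URL of the current kubectl context
server=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
echo "API server: ${server}"
# Extract just the host/IP part, e.g. to verify it is reachable from this machine
echo "${server}" | sed -E 's#https?://([^:/]+).*#\1#'
```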
The latest version of istio must be installed on the cluster.
Introduction to Helm Charts
In NONRTRIC we use Helm charts as the packaging manager for kubernetes. Helm charts help developers package, configure and deploy applications and services into kubernetes environments.
Before proceeding you will need to be familiar with helm, kubernetes and basic bash scripting. For an introduction to helm see: https://helm.sh/docs/intro/quickstart/
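For orientation, a helm chart is just a directory with a fixed layout. The fragment below sketches the minimal files by hand (simplified from what 'helm create' generates; the chart name 'mychart' is only an example):

```shell
# Create the smallest valid helm chart layout by hand
mkdir -p mychart/templates
cat > mychart/Chart.yaml <<'EOF'
apiVersion: v2
name: mychart
version: 0.1.0
EOF
cat > mychart/values.yaml <<'EOF'
replicaCount: 1
EOF
# 'helm lint mychart' and 'helm template mychart' can then validate and render it
```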
Preparations
Download the it/dep repository.
git clone "https://gerrit.o-ran-sc.org/r/it/dep"
However, for some use cases below you will also need to clone some additional linked repositories (git submodules). To download these, extend the command above with the '--recursive' parameter/switch:
git clone --recursive "https://gerrit.o-ran-sc.org/r/it/dep"
# Submodule 'ranpm' (https://gerrit.o-ran-sc.org/r/nonrtric/plt/ranpm) registered for path 'ranpm'
# Submodule 'ric-dep' (https://gerrit.o-ran-sc.org/r/ric-plt/ric-dep) registered for path 'ric-dep'
# Submodule 'smo-install/multicloud-k8s' (https://github.com/onap/multicloud-k8s.git) registered for path 'smo-install/multicloud-k8s'
# Submodule 'smo-install/onap_oom' (https://gerrit.onap.org/r/oom) registered for path 'smo-install/onap_oom'
Note: This places all artifacts in a directory 'dep', which is the base directory for all operations that follow.
Note: Git branches are currently not used in the it/dep repository. The latest version (master branch) is used here, but if changes are made then the instructions below may become outdated. The instructions below are accurate at the time of writing, during the J-Release time-frame. Earlier versions of the it/dep repo may be used by examining the Git log for the repo.
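If the repository was already cloned without '--recursive', the linked repositories can still be fetched afterwards with standard git, run from inside the dep directory:

```shell
# Fetch the registered submodules (ranpm, ric-dep, smo-install/...) into an existing clone
git submodule update --init --recursive
```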
Configuration of components to install
It is possible to configure which of the nonrtric components to install, including the platform functions, simulators and graphical control panels. This configuration is made in the override file for the helm package.
Edit the file dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml
<editor> dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml
The file shown below is a snippet from the override file example_recipe.yaml.
All parameters beginning with 'install' can be set to 'true' to enable installation of a component, or to 'false' to disable it.
For the parameters installNonrtricgateway and installKong, only one can be enabled at a time.
There are many other parameters in the file that may require adaptation to fit a certain environment, for example the hostname, namespace and port of the kafka message router. These integration details are not covered in this guide.
# Here you can enable inclusion or exclusion of each component. Disabled components will not be installed, and their later configurations will be ignored.
nonrtric:
  installPms: true
  installA1controller: true
  installA1simulator: true
  installControlpanel: true
  installInformationservice: true
  installRappcatalogueservice: true
  installRappcatalogueenhancedservice: true
  # Enable either installNonrtricgateway or installKong. Both cannot be enabled at the same time
  installNonrtricgateway: false
  installKong: true
  installDmaapadapterservice: true
  installDmaapmediatorservice: true
  installHelmmanager: true
  installOrufhrecovery: true
  installRansliceassurance: true
  installCapifcore: true
  installServicemanager: true
  # When enabling Ranpm, set value false above for installControlpanel, installInformationservice, installNonrtricgateway
  installRanpm: false
  # rApp Manager functionality relies on ONAP ACM for its operation
  installrAppmanager: true
  # DME Participant should only be activated when ONAP ACM installation is available for this participant to utilize
  installDmeParticipant: false

  volume1:
    # Set the size to 0 if you do not need the volume (if you are using Dynamic Volume Provisioning)
    size: 2Gi
    storageClassName: pms-storage
    hostPath: /var/nonrtric/pms-storage
  volume2:
    # Set the size to 0 if you do not need the volume (if you are using Dynamic Volume Provisioning)
    size: 2Gi
    storageClassName: ics-storage
    hostPath: /var/nonrtric/ics-storage
  volume3:
    size: 1Gi
    storageClassName: helmmanager-storage
  ...
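As a convenience sketch (not part of the official tooling), a boolean flag in the recipe can also be flipped with sed instead of an editor; the key name and file path here are taken from the example recipe:

```shell
# Flip installKong to false in place (keeps a .bak backup); adjust the path as needed
sed -i.bak 's/^\( *installKong:\).*/\1 false/' dep/RECIPE_EXAMPLE/NONRTRIC/example_recipe.yaml
```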
Installation
There is a script that packs and installs the components using the helm command. The installation uses a values override file like the one shown above. The example can be run like this:
sudo ./bin/deploy-nonrtric.sh -f dep/nonrtric/RECIPE_EXAMPLE/example_recipe.yaml
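After the script completes, the pods come up asynchronously. A simple polling loop (a sketch, assuming the default nonrtric namespace and that no Completed job pods are present) can be used to wait for them:

```shell
# Poll until every pod in the nonrtric namespace reports Running
until [ "$(kubectl get pods -n nonrtric --no-headers 2>/dev/null | grep -cv ' Running ')" -eq 0 ]; do
  echo "waiting for pods..."
  sleep 5
done
kubectl get pods -n nonrtric
```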
Installing / Uninstalling the RAN PM functions
See the sub-page Release J - Run in Kubernetes - Additional instructions for RANPM Installation for more details.
Uninstalling
There is a script that uninstalls the NONRTRIC components.
sudo ./dep/bin/undeploy-nonrtric.sh
Troubleshooting A1-Policy functions
- After successful installation, the control panel may show "No Type" as the policy type, as shown below.
- If there is no policy type shown and the UI looks like the one below, the setup can be investigated with the steps below. (It could also be due to a synchronization delay; this resolves automatically after a few minutes.)
- Verify the A1 PMS logs to make sure that the connection between A1 PMS and the a1controller is successful.
Command to check PMS logs:
kubectl logs policymanagementservice-0 -n nonrtric
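To narrow the log output to connection problems towards the a1controller, the log can be filtered (the grep pattern is an illustrative guess, not an exact PMS log format):

```shell
# Show the most recent warnings/errors from the PMS log
kubectl logs policymanagementservice-0 -n nonrtric | grep -iE 'warn|error|exception' | tail -n 20
```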
Command to enable debug logs in PMS (the command below should be executed inside the k8s pod, or the host address needs to be updated with the relevant port forwarding):
curl --request POST \
  --url http://policymanagementservice:9080/actuator/loggers/org.onap.ccsdk.oran.a1policymanagementservice \
  --header 'Content-Type: application/json' \
  --data '{ "configuredLevel": "DEBUG" }'
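The curl command above uses the in-cluster service name. From outside the cluster, a port-forward is needed first (a sketch, assuming the default service name policymanagementservice and port 9080):

```shell
# Forward the PMS port to localhost, run the request, then stop the forward
kubectl port-forward -n nonrtric svc/policymanagementservice 9080:9080 &
PF_PID=$!
sleep 2   # give the forward a moment to establish
# curl can now target http://localhost:9080/... instead of the service name
kill "$PF_PID"
```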
Try removing the controller information in the specific simulator configuration and verify that the simulators are working without the a1controller.
- For troubleshooting, the curl command is available in the controlpanel pod and can be used from there.