The near-RT RIC (RIC for short) Self-Health-Check flow fulfills the requirement that all systems monitor their own health: internal subsystems, hosted software, and external interfaces.
Internal Self-Check - At configurable intervals, the RIC triggers Health-Check requests to its internal common platform modules and hosted xAPPs. Each platform module and each xAPP is required to support Health-Check requests and to perform a self-check.
Alarms and Notifications - Based on Health-Check results, the RIC is required to maintain a list of anomaly conditions (alarms and alerts) that represents the state of its health. Alarm/alert conditions are to be raised and sent as notifications.
Self-Check of Platform Modules and xAPPs
The RIC is responsible for checking the health of RIC Platform modules and xAPP instances hosted on the RIC:
- Ability to internally initiate Health-checks on each of the common platform modules within the RIC (e.g., O1 Termination, A1 Mediator, E2 Termination, E2 Manager, xAPP Manager, Subscription Manager, etc.).
- Internal self-checks are to be performed at default intervals; the intervals are to be configurable at run-time.
- Each platform module is required to support a health-check request. Initially, the modules may simply need to send a response message to indicate that the connectivity is still up and the messaging pathway is still operational. (In later releases, additional diagnostics may be needed to ensure RIC lifecycle management is robust and carrier-grade.)
- Self-check results on platform modules are to be logged.
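As an illustration of the minimal responder behavior described above, the sketch below shows a platform module answering a health-check request over HTTP. This is a hedged sketch, not a mandated interface: the endpoint paths (`/ric/v1/health/alive`, `/ric/v1/health/ready`) and the JSON response body are illustrative assumptions.

```python
# Minimal sketch of a platform module's health-check responder.
# Endpoint paths and response format are assumptions for illustration.
import http.server
import json

ALIVE_PATH = "/ric/v1/health/alive"   # liveness: process is up and responsive
READY_PATH = "/ric/v1/health/ready"   # readiness: module can handle traffic

class HealthHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in (ALIVE_PATH, READY_PATH):
            body = json.dumps({"status": "OK"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

def serve(port: int = 8080):
    """Run the health responder (blocking)."""
    http.server.HTTPServer(("", port), HealthHandler).serve_forever()
```

A simple 200/OK response of this kind is enough to confirm that the process is up and the request pathway to it is operational, matching the initial scope described above.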
Alarms, Clearings and Notifications
- Anomaly conditions can be encountered as part of the self-check process or during normal operation (e.g., cannot send message to another module via RMR).
- For each anomaly condition, the RIC and/or that RIC platform module needs to determine the severity and whether it is mappable to an alarm type.
- Alarms found in either case (self-check or normal operation) require a notification to be sent immediately via the O1 VES interface.
- Alarms are to be stored and captured as part of the RIC's alarm list.
- Implementation Option[1]: The self-check can potentially leverage Kubernetes Liveness and Readiness probes. Liveness probes can be configured to execute a command, issue an HTTP GET, or open a TCP socket against the container/pod. Readiness probes can be configured to ensure the pod is ready before allowing it to handle traffic. To further check a module's (pod's) ability to communicate with other modules over RMR (RIC Message Router), each module could subscribe to its own topic, send a hello-world message to itself regularly, and ensure it can send and receive messages.
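The self-messaging check described in the implementation option can be sketched as follows. This is an illustrative stand-in only: a local queue takes the place of the RMR send/receive path, since the point is the round-trip logic, not the RMR API itself.

```python
# Sketch of the hello-world loopback self-check: a module sends a message
# to itself and verifies the round trip. A queue stands in for the RMR
# loopback route (the real RMR library is intentionally not used here).
import queue
import time

class LoopbackCheck:
    def __init__(self):
        self._bus = queue.Queue()  # stand-in for the module's RMR loopback route

    def send(self, payload: bytes) -> None:
        self._bus.put(payload)

    def check(self, timeout: float = 1.0) -> bool:
        """Send a probe to ourselves and confirm it comes back intact."""
        probe = b"hello-world:%d" % time.monotonic_ns()
        self.send(probe)
        try:
            received = self._bus.get(timeout=timeout)
        except queue.Empty:
            return False  # messaging pathway is not operational
        return received == probe
```

A failed or timed-out round trip would be treated as an anomaly condition and fed into the alarm mapping described above.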
- Health of xAPPs
- Ability of RIC to invoke Health-check requests to each of the xAPP instances deployed on the RIC [9]
- Ability of each xAPP to perform Health-checks on itself and respond back to the RIC [10]
- Implementation Option: See Implementation Option above for platform modules.
- Any alarm/alert conditions or clearing of alarms/alerts are sent immediately via the O1 VES interface. [7-8]
- Alarm conditions are to be normalized.
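Since alarm and clearing notifications are sent via the O1 VES interface, a sketch of building such a notification is shown below. Field names follow the general shape of the VES common event format's fault domain, but the exact schema version, field values, and helper function here are illustrative assumptions, not taken from a published profile.

```python
# Sketch of a fault notification in the style of the VES common event
# format (fault domain). Versions and values are illustrative assumptions.
import time

def make_fault_event(source: str, alarm_condition: str, severity: str,
                     specific_problem: str, seq: int) -> dict:
    now_us = int(time.time() * 1_000_000)
    return {
        "event": {
            "commonEventHeader": {
                "domain": "fault",
                "eventId": f"fault-{source}-{seq}",
                "eventName": f"Fault_{alarm_condition}",
                "sequence": seq,
                "sourceName": source,
                "startEpochMicrosec": now_us,
                "lastEpochMicrosec": now_us,
                "priority": "High",
                "reportingEntityName": "near-rt-ric",
                "version": "4.1",
            },
            "faultFields": {
                "alarmCondition": alarm_condition,
                "eventSeverity": severity,  # e.g., CRITICAL, MAJOR, MINOR
                "eventSourceType": "virtualMachine",
                "faultFieldsVersion": "4.0",
                "specificProblem": specific_problem,
                "vfStatus": "Active",
            },
        }
    }
```

Normalizing alarm conditions from different platform modules and xAPPs into one such common structure is what allows a single NB consumer to process them uniformly.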
External Interfaces
For external interfaces, the RIC is responsible for checking its interface functions - the O1 Termination, A1 Mediator, and E2 Termination modules.
In addition, heartbeats or keep-alive signals over O1 are verified by the NB clients.
The RIC also checks that heartbeat messages are received from RAN resources over the E2 interface.
Note: Since the role of the RIC is to enable near-real-time control loop actions, latency is an important set of telemetry to be collected and reported - E2 latency and RIC processing latency. As the RIC matures release over release, latency telemetry should be defined and implemented.
- Health of E2 interface - Ability to send requests to downstream RAN resources (O-CU/O-DU) via the E2 interface for PM collection and report generation, and to receive PM reports [13-15]
- Any alarm/alert conditions or clearing of alarms/alerts are sent immediately via the O1 VES interface. [16]
- Health of the overall RIC instance, based on Health-Check results (successes, failures, and anomalies) mapped to alarms and alerts (which represent the RIC's operability), is stored. [17]
- Ability to update alarms for queries by NB clients. For example, RIC alarms/alerts can be incorporated into the O1 NetConf operational tree. The corresponding YANG model might need to be augmented (e.g., define a health-state leaf with an alarm-list in the YANG model).
- Ability to make performance test results available to NB clients (for on-demand requests) [19-22]
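The alarm list that NB clients would query can be sketched as a small store supporting raise, clear, and query operations. The structure and field names below are illustrative assumptions, not taken from a published YANG model.

```python
# Sketch of an alarm list queryable by NB clients. Structure and field
# names are illustrative assumptions.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Alarm:
    alarm_id: str
    source: str    # module or xAPP that raised the alarm
    severity: str  # e.g., CRITICAL, MAJOR, MINOR, WARNING
    text: str
    cleared: bool = False

class AlarmList:
    def __init__(self):
        self._alarms: Dict[str, Alarm] = {}

    def raise_alarm(self, alarm: Alarm) -> None:
        self._alarms[alarm.alarm_id] = alarm

    def clear(self, alarm_id: str) -> None:
        if alarm_id in self._alarms:
            self._alarms[alarm_id].cleared = True

    def active(self) -> List[Alarm]:
        """What an NB client query would return: uncleared alarms only."""
        return [a for a in self._alarms.values() if not a.cleared]
```

Keeping cleared alarms in the store (flagged rather than deleted) is one way to let NB clients query alarm history as well as current state.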
To support this flow, a new Health-Check functional block within the RIC is proposed. It can be implemented as a separate software module, as a distributed function across one or more existing modules, and/or via existing capabilities already available from the underlying container infrastructure, such as Kubernetes' container/pod lifecycle management. The Health-Check functional block has to perform the following:
- Perform health-checks on the common RIC platform functions/modules and on xAPP instances hosted on the RIC (self-checks at configured intervals and on-demand requests)
- Map failures and anomalies to alarms and alerts
- Send out notifications for alarms and alerts
- Determine the state of the RIC based on alarms and alerts
- Store health-check results for queries
- Clear alarms and alerts when conditions clear
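The responsibilities listed above can be sketched as one check-map-notify-store-clear loop. The module names, checker interface, and notifier interface below are illustrative assumptions, not a prescribed design.

```python
# Sketch of the Health-Check functional block's main loop: run self-checks,
# map failures to alarms, notify, store results, and clear recovered alarms.
# Interfaces and module names are illustrative assumptions.
from typing import Callable, Dict, List, Set, Tuple

def run_health_checks(
    checkers: Dict[str, Callable[[], bool]],  # module name -> self-check
    notify: Callable[[str, str], None],       # (module, "RAISE" or "CLEAR")
    active_alarms: Set[str],                  # modules with an open alarm
    results_log: List[Tuple[str, bool]],      # stored results for queries
) -> None:
    for module, check in checkers.items():
        healthy = check()
        results_log.append((module, healthy))   # store health-check results
        if not healthy and module not in active_alarms:
            active_alarms.add(module)           # map failure to an alarm
            notify(module, "RAISE")             # send notification (e.g., VES)
        elif healthy and module in active_alarms:
            active_alarms.remove(module)        # condition has cleared
            notify(module, "CLEAR")
```

Running this loop at the configured interval, and additionally on demand, covers both the periodic self-checks and the on-demand requests described above; the overall RIC state can then be derived from the contents of `active_alarms`.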
Figure 1 below shows the flow of RIC Self-Checks: regular heartbeats over O1 and A1, the Health-Check Module initiating health-check requests within the RIC to assess its overall health, and issuing alarms/alerts as appropriate based on the health results.
Note 1: Figure 1 above shows the flows assuming that the SMO is the northbound client that triggers the near-RT RIC. The SMO consists of the O1 OAM adapter (supporting both O1 VES and O1 NetConf related messages/data) and the non-RT RIC (containing the A1 adapter). The O-RAN SC implementation of the flows associated with this Health-Check use case should create a simulated SMO for invoking requests and processing responses. The simulated SMO should also provide a Test Driver (shown in Figures 2-4) for initiating requests to the SMO and receiving responses from it. Alternatively, a Dashboard can also be the NB client that triggers these requests.
[1] Implementation options are suggested at the use case level, to be further refined/finalized during the user stories phase.