Release J - Run in Docker
This page describes how to run various NONRTRIC functions in Docker.
In this sample deployment, all NONRTRIC components run as Docker containers and communicate via a private Docker network using container ports; the ports are also exposed on localhost.
Details of the architecture can be found on the Release J page.
Project Requirements
- docker and docker-compose (latest)
- curl (or similar)
- Additional optional requirements if using the "Helm Manager" function:
  - kubernetes v1.19+
  - kubectl with admin access to kubernetes (e.g. minikube, docker-desktop kubernetes, etc.)
  - helm with access to kubernetes - only applicable when running the Helm Manager example operations
- Additional optional requirements if using the "DMaaP Adapter" or "DMaaP Mediator Producer" services:
  - DMaaP MR (See: Deploy DMaaP message router in nonrtric)
  - kafka (latest) - only for the DMaaP Adapter Service, and optional
Images
The images used for running the Non-RT RIC can be selected from the table below, depending on whether manually built (snapshot) images or release images shall be used.
In general, there is no need to build the images manually unless the user has made code changes, so release images should be used. For instructions on how to build all components, see Release J - Build.
The run commands throughout this page use the release images and tags. Replace the release images/tags in the container run commands in the instructions if manually built snapshot images are desired.
J Release - Images & Tags
Ports
The following ports will be allocated and exposed to localhost for each component. If other ports are desired, the ports need to be replaced in the container run commands in the instructions further below.
Component | Port exposed to localhost (http/https) |
---|---|
A1 Policy Management Service | 8081/8443 |
A1 Simulator (Near-RT RIC simulator) | 8085/8185, 8086/8186, 8087/8187 |
Information Coordinator Service | 8083/8434 |
Non-RT RIC Control Panel | 8080/8880 |
SDNC A1-Controller | 8282/8443 |
Gateway | 9090 (only http) |
App Catalogue Service | 8680/8633 |
Helm Manager | 8112 (only http) |
DMaaP Mediator Producer | 9085/9185 |
DMaaP Adapter Service | 9087/9187 |
SME-CapifCore | 8090 (only http) |
Prerequisites
The containers need to be connected to a Docker network in order to communicate with each other.
Create a private Docker network. If another network name is used, all references to 'nonrtric-docker-net' in the container run commands below need to be updated.
|
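For reference, a minimal sketch of the network creation command (assuming the default network name 'nonrtric-docker-net' is kept):

# Create the private Docker network used by all containers on this page
docker network create nonrtric-docker-net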
Run the A1 Policy Management Service Docker Container
Test locally with three separate A1 simulator instances, each running one of the three available A1 Policy interface versions:
- Create an application_configuration.json file with the configuration below. This will configure the A1 Policy Management Service to use the simulators for the A1 interface. Note: Any defined ric names must match the docker container names given at near-RT RIC simulator startup, see Run A1 Simulator Docker Containers.
The A1 Policy Management Service can be configured to support A1 connection via an SDNC A1-Controller for some or all rics/simulators. It is optional to access the near-RT-RIC through an SDNC A1-Controller.
This is enabled in the configuration file using the optional "controller" parameter for each ric entry. If all configured rics bypass the A1-Controller (do not have "controller" values) then the "controller" object at the top of the configuration can be omitted. If all configured rics bypass the SDNC A1-Controller there is no need to start an SDNC A1-Controller.
This sample configuration is for running without the SDNC A1-Controller
{ "config": { "ric": [ { "name": "ric1", "baseUrl": "http://ric1:8085/", "managedElementIds": [ "kista_1", "kista_2" ] }, { "name": "ric2", "baseUrl": "http://ric2:8085/", "managedElementIds": [ "kista_3", "kista_4" ] }, { "name": "ric3", "baseUrl": "http://ric3:8085/", "managedElementIds": [ "kista_5", "kista_6" ] } ] } }
This sample configuration is for running with the SDNC A1-Controller
{ "config": { "controller": [ { "name": "a1controller", "baseUrl": "https://a1controller:8443", "userName": "admin", "password": "Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U" } ], "ric": [ { "name": "ric1", "baseUrl": "http://ric1:8085/", "controller": "a1controller", "managedElementIds": [ "kista_1", "kista_2" ] }, { "name": "ric2", "baseUrl": "http://ric2:8085/", "controller": "a1controller", "managedElementIds": [ "kista_3", "kista_4" ] }, { "name": "ric3", "baseUrl": "http://ric3:8085/", "controller": "a1controller", "managedElementIds": [ "kista_5", "kista_6" ] } ] } }
Note: The A1 Policy Management Service DMaaP interface is deprecated/removed so this interface shall no longer be configured in the application_configuration.json.
Start the container with the following command. Replace "<absolute-path-to-file>" with the path to the created configuration file in the command. The configuration file is mounted into the container. There will be WARN messages in the log until the simulators are started.
|
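As an illustration only - the image name/tag, container name and the configuration mount path inside the container are assumptions; use the values from the image table and the A1 Policy Management Service documentation - a start command could look like this:

# Sketch of starting the A1 Policy Management Service with the created configuration mounted
docker run --rm \
  --network=nonrtric-docker-net \
  --name=policy-agent \
  -p 8081:8081 -p 8443:8443 \
  -v <absolute-path-to-file>:/opt/app/policy-agent/data/application_configuration.json \
  nexus3.o-ran-sc.org:10001/o-ran-sc/nonrtric-plt-a1policymanagementservice:<release-tag>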
Wait 1 minute to allow the container to start and to read the configuration. Then run the command below in another terminal. The output should match the configuration in the file - all three rics (ric1, ric2 and ric3) should be included in the output. Note that each ric has the state "UNAVAILABLE" until the simulators are started.
Note: If the policy management service is started with config for the SDNC A1 Controller (the second config option), do the steps described in section Run the SDNC A1 Controller Docker Container below before proceeding.
NOTE: Use the endpoint below for the A1 Policy Management API V2
|
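A sketch of the request, assuming the V2 API is exposed under /a1-policy/v2 on port 8081:

# List all configured rics via the V2 API (assumed path)
curl -X GET http://localhost:8081/a1-policy/v2/rics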
Expected output (note that all simulators - ric1, ric2 and ric3 - will indicate "state":"UNAVAILABLE" until the simulators have been started in Run A1 Simulator Docker Containers):
|
NOTE: Use the endpoint below for the A1 Policy Management API V3
|
Expected output (note that all simulators - ric1, ric2 and ric3 - will indicate "state":"UNAVAILABLE" until the simulators have been started in Run A1 Simulator Docker Containers):
|
Run the SDNC A1 Controller Docker Container (ONAP SDNC)
This step is only applicable if the configuration for the Policy Management Service includes the SDNC A1 Controller (the second config option), see Run the A1 Policy Management Service Docker Container.
Create the docker compose file - be sure to update the image for the a1controller to the one listed for SDNC A1 Controller in the table at the top of this page.
docker-compose.yaml
version: '3'

networks:
  default:
    external: true
    name: nonrtric-docker-net

services:
  db:
    image: nexus3.o-ran-sc.org:10001/mariadb:10.5
    container_name: sdncdb
    networks:
      - default
    ports:
      - "3306"
    environment:
      - MYSQL_ROOT_PASSWORD=itsASecret
      - MYSQL_ROOT_HOST=%
      - MYSQL_USER=sdnctl
      - MYSQL_PASSWORD=gamma
      - MYSQL_DATABASE=sdnctl
    logging:
      driver: "json-file"
      options:
        max-size: "30m"
        max-file: "5"

  a1controller:
    image: nexus3.onap.org:10001/onap/sdnc-image:2.6.1
    depends_on:
      - db
    container_name: a1controller
    networks:
      - default
    entrypoint: ["/opt/onap/sdnc/bin/startODL.sh"]
    ports:
      - 8282:8181
      - 8443:8443
    links:
      - db:dbhost
      - db:sdnctldb01
      - db:sdnctldb02
    environment:
      - MYSQL_ROOT_PASSWORD=itsASecret
      - MYSQL_USER=sdnctl
      - MYSQL_PASSWORD=gamma
      - MYSQL_DATABASE=sdnctl
      - SDNC_CONFIG_DIR=/opt/onap/sdnc/data/properties
      - SDNC_BIN=/opt/onap/sdnc/bin
      - ODL_CERT_DIR=/tmp
      - ODL_ADMIN_USERNAME=admin
      - ODL_ADMIN_PASSWORD=Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
      - ODL_USER=admin
      - ODL_PASSWORD=Kp8bJ4SXszM0WXlhak3eHlcse2gAw84vaoGGmJvUy2U
      - SDNC_DB_INIT=true
      - A1_TRUSTSTORE_PASSWORD=a1adapter
      - AAI_TRUSTSTORE_PASSWORD=changeit
    logging:
      driver: "json-file"
      options:
        max-size: "30m"
        max-file: "5"
Start the SDNC A1 controller with the following command, using the created docker-compose file.
|
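For example, run from the directory containing the created docker-compose.yaml (the -d flag starts the containers detached):

# Start the sdncdb and a1controller containers defined in docker-compose.yaml
docker-compose up -d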
Open the URL below in a web browser to verify that the SDNC A1 Controller is up and running. It may take a few minutes until the endpoint is available.
http://localhost:8282/apidoc/explorer/index.html#/controller%20A1-ADAPTER-API
The Karaf logs of the A1 controller can be followed, e.g., by using the command below.
|
Run the A1 Simulator (Near-RT-RIC simulator) Docker Containers
Start a simulator for each ric defined in the application_configuration.json created above in section Run the A1 Policy Management Service Docker Container. Each simulator will use one of the currently available A1 interface versions, and each simulator uses different local ports (a sketch of the run commands is shown after the list below).
ric1
|
ric2
|
ric3
|
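A sketch of the simulator start commands. The image name, the A1_VERSION/ALLOW_HTTP environment variables and the container ports 8085/8185 are assumptions - check the image table and the A1 simulator documentation for the exact values:

# ric1 - OSC_2.1.0, exposed on localhost ports 8085/8185
docker run --rm --network=nonrtric-docker-net --name=ric1 \
  -p 8085:8085 -p 8185:8185 \
  -e A1_VERSION=OSC_2.1.0 -e ALLOW_HTTP=true \
  nexus3.o-ran-sc.org:10001/o-ran-sc/a1-simulator:<release-tag>

# ric2 (STD_1.1.3, host ports 8086/8186) and ric3 (STD_2.0.0, host ports 8087/8187)
# are started the same way, changing --name, the host port mappings and A1_VERSION accordingly.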
Wait at least one minute to let the policy management service synchronise the rics. Then run the command below in another terminal. The output should match the configuration in the file. Note that each ric now has the state "AVAILABLE".
NOTE: Use the endpoint below for the A1 Policy Management API V2
|
Expected output - all rics should indicate the state "AVAILABLE":
|
NOTE: Use the endpoint below for the A1 Policy Management API V3
|
Expected output - all rics should indicate the state "AVAILABLE":
|
Only the simulators using versions STD_2.0.0 and OSC_2.1.0 support A1 Policy types. Run the commands below to add one A1 Policy type in each of ric1 and ric3. A1-AP version 1 (STD_1.1.3) does not support A1 Policy types.
Create the file with policy type for ric1
Put the policy type to ric1 - should return http response code 201
|
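A sketch of the request, assuming the policy type file was saved as osc_pt1.json, a type id of 123, and that the OSC_2.1.0 simulator exposes PUT /a1-p/policytypes/{policy_type_id} (check the A1 simulator documentation):

# Add the policy type to the OSC_2.1.0 simulator (ric1)
curl -X PUT -v http://localhost:8085/a1-p/policytypes/123 \
  -H "Content-Type: application/json" --data-binary @osc_pt1.json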
Create the file with policy type for ric3
Put the policy type to ric3 - should return http response code 201
|
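A sketch of the request, assuming the policy type file was saved as std_pt1.json and that the STD_2.0.0 simulator provides an administrative PUT /policytype?id=<type-id> endpoint (check the A1 simulator documentation):

# Add the policy type to the STD_2.0.0 simulator (ric3)
curl -X PUT -v "http://localhost:8087/policytype?id=std_pt1" \
  -H "Content-Type: application/json" --data-binary @std_pt1.json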
Wait approximately 1 minute to let the policy management service synchronise the types with the simulators.
List the synchronised types using a1policymanagement V2 endpoint:
|
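A sketch of the V2 request, under the same path assumption as earlier:

# List the A1 Policy types known to the A1 Policy Management Service (assumed V2 path)
curl -X GET http://localhost:8081/a1-policy/v2/policy-types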
Expected output:
|
List the synchronised types using a1policymanagement V3 endpoint:
|
Expected output:
|
Run the Information Coordinator Service Docker Container
Run the following command to start the information coordinator service.
|
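As an illustration only - image name/tag and container name are assumptions, see the image table:

# Sketch of starting the Information Coordinator Service
docker run --rm --network=nonrtric-docker-net --name=informationservice \
  -p 8083:8083 -p 8434:8434 \
  nexus3.o-ran-sc.org:10001/o-ran-sc/nonrtric-plt-informationcoordinatorservice:<release-tag>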
Verify that the Information Coordinator Service is started and responding (response is an empty array).
|
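A sketch of the check, assuming the Information Coordinator Service data-producer API path:

# Should return an empty json array when no types have been registered yet
curl -X GET http://localhost:8083/data-producer/v1/info-types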
Expected output:
|
For troubleshooting/verification purposes you can view/access the full swagger API from url: http://localhost:8083/swagger-ui/index.html?configUrl=/v3/api-docs/swagger-config
Run the NONRTRIC Gateway and NONRTRIC Control Panel Docker Container
The NONRTRIC Gateway exposes the interfaces of the A1 Policy Management Service and the Information Coordinator Service to a single port of the gateway. This single port is then used by the NONRTRIC Control Panel to access both services.
Create the config file for the gateway.
Run the following command to start the gateway. Replace "<absolute-path-to-file>" with the path to the created application.yaml.
|
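As an illustration only - the image name/tag and the config mount path inside the container are assumptions:

# Sketch of starting the NONRTRIC Gateway with the created application.yaml mounted
docker run --rm --network=nonrtric-docker-net --name=nonrtric-gateway \
  -p 9090:9090 \
  -v <absolute-path-to-file>:/opt/app/nonrtric-gateway/config/application.yaml \
  nexus3.o-ran-sc.org:10001/o-ran-sc/nonrtric-gateway:<release-tag>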
Run the following two commands to check that the services can be reached through the gateway
|
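First command - a sketch, assuming the gateway routes the A1 Policy Management Service V2 API:

# Reach the A1 Policy Management Service through the gateway port
curl -X GET http://localhost:9090/a1-policy/v2/rics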
Expected output:
|
Second command:
|
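Second command - a sketch, assuming the gateway routes the Information Coordinator Service API:

# Reach the Information Coordinator Service through the gateway port
curl -X GET http://localhost:9090/data-producer/v1/info-types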
Expected output:
|
Create the config file for the control panel:
Run the following command to start the control panel. Replace "<absolute-path-to-file>" below with the path to the nginx.conf file created above.
|
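As an illustration only - the image name/tag and the nginx.conf mount path are assumptions:

# Sketch of starting the NONRTRIC Control Panel with the created nginx.conf mounted
docker run --rm --network=nonrtric-docker-net --name=controlpanel \
  -p 8080:8080 \
  -v <absolute-path-to-file>:/etc/nginx/nginx.conf \
  nexus3.o-ran-sc.org:10001/o-ran-sc/nonrtric-controlpanel:<release-tag>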
The web-based UI can be accessed by pointing the web-browser to this URL:
http://localhost:8080/
Run the App Catalogue Service Docker Container
Start the App Catalogue Service with the following command.
|
Verify that the service is up and running
|
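A sketch of the check, assuming the catalogue's 'services' endpoint on port 8680:

# Should return an empty list of registered services
curl -X GET http://localhost:8680/services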
Expected output:
|
Run the App Catalogue (Enhanced) Service Docker Container
Start the App Catalogue Enhanced Service with the following command.
|
Verify that the service is up and running
|
Expected output:
|
Run the Helm Manager Docker Container
Note: Access to kubernetes is required, as stated in the requirements at the top of this page.
Download the 'helm-manager' repo: Helm Manager (h-release branch).
|
Start the helm manager in a separate shell with the following command:
|
Ensure the app has started by listing the current charts - the response should be an empty json array.
|
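A sketch of the check, assuming the Helm Manager 'charts' endpoint on port 8112:

# Should return an empty json array when no charts have been onboarded
curl -X GET http://localhost:8112/helm/charts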
To test the app further, start a test helm chart store, then create a dummy helm chart:
Start a chartmuseum chart repository in a separate shell
|
Add the chartmuseum chart store to the helm manager with the following command:
|
Create a dummy helm chart for test, package the chart, and save it in chartmuseum:
|
The commands below show examples of operations towards the helm manager using the dummy chart.
As an alternative, run the script 'test.sh' to execute a full sequence of commands.
|
Run the DMaaP Adapter Service Docker Container
The DMaaP Adapter Service needs two configuration files, one for the application specific parameters and one for the types the application supports.
Note that a running Information Coordinator Service is needed for creating jobs and a running message router is needed for receiving data that the job can distribute to the consumer.
In addition, if the data is available on a kafka topic then an instance of a running kafka server is needed.
The following parameters need to be configured according to hosts and ports (these settings may need to be adjusted to your environment).
|
Create the file application.yaml with the content below.
Create the file application_configuration.json according to one of the alternatives below.
Option 1: Without kafka type (just DMaaP)
{ "types": [ { "id": "ExampleInformationType", "dmaapTopicUrl": "/events/unauthenticated.dmaapadp.json/dmaapadapterproducer/msgs?timeout=15000&limit=100", "useHttpProxy": false } ] }
Option 2: With kafka type (DMaaP & kafka)
{ "types": [ { "id": "ExampleInformationType", "dmaapTopicUrl": "/events/unauthenticated.dmaapadp.json/dmaapadapterproducer/msgs?timeout=15000&limit=100", "useHttpProxy": false }, { "id": "ExampleInformationTypeKafka", "kafkaInputTopic": "unauthenticated.dmaapadp_kafka.text", "useHttpProxy": false } ] }
Start the DMaaP Adapter Service in a separate shell with the following command:
|
Set up jobs to produce data according to the types in application_configuration.json
Create a file job1.json with the job definition (replace the paths <url-for-job-data-delivery> and <url-for-job-status-delivery> to fit your environment):
{ "info_type_id": "ExampleInformationType", "job_result_uri": "<url-for-job-data-delivery>", "job_owner": "job1owner", "status_notification_uri": "<url-for-job-status-delivery>", "job_definition": {} }
Create job1 for type 'ExampleInformationType'
|
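A sketch of the request, assuming the Information Coordinator Service data-consumer API on port 8083:

# Register job1 in the Information Coordinator Service using the created job1.json
curl -X PUT http://localhost:8083/data-consumer/v1/info-jobs/job1 \
  -H "Content-Type: application/json" --data-binary @job1.json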
Check that the job has been enabled - job accepted by the Information Coordinator Service
|
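A sketch of the check, assuming the job status endpoint of the data-consumer API; the returned status should indicate "ENABLED":

# Read the status of job1
curl -X GET http://localhost:8083/data-consumer/v1/info-jobs/job1/status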
Data posted on the DMaaP MR topic unauthenticated.dmaapadp.json will be delivered to the path specified in job1.json.
If the kafka type is also used, set up a job for that type too:
Create a file job2.json with the job definition (replace the paths <url-for-job-data-delivery> and <url-for-job-status-delivery> to fit your environment):
{ "info_type_id": "ExampleInformationTypeKafka", "job_result_uri": "<url-for-job-data-delivery>", "job_owner": "job1owner", "status_notification_uri": "<url-for-job-status-delivery>", "job_definition": {} }
Create job2 for type 'ExampleInformationTypeKafka'
|
Check that the job has been enabled - job accepted by the Information Coordinator Service
|
Data posted on the kafka topic unauthenticated.dmaapadp_kafka.text will be delivered to the path specified in job2.json.
Run the DMaaP Mediator Producer Docker Container
The DMaaP Mediator Producer needs one configuration file for the types the application supports.
Note that a running Information Coordinator Service is needed for creating jobs and a running message router is needed for receiving data that the job can distribute to the consumer.
Create the file type_config.json with the content below
{ "types": [ { "id": "STD_Fault_Messages", "dmaapTopicUrl": "/events/unauthenticated.dmaapmed.json/dmaapmediatorproducer/STD_Fault_Messages?timeout=15000&limit=100" } ] }
There are a number of environment variables that need to be set when starting the application. See these example settings:
|
Start the DMaaP Mediator Producer in a separate shell with the following command:
|
Set up jobs to produce data according to the types in type_config.json
Create a file job3.json with the job definition (replace the paths <url-for-job-data-delivery> and <url-for-job-status-delivery> to fit your environment):
{ "info_type_id": "STD_Fault_Messages", "job_result_uri": "<url-for-job-data-delivery>", "job_owner": "job3owner", "status_notification_uri": "<url-for-job-status-delivery>", "job_definition": {} }
Create job3 for type 'STD_Fault_Messages'
|
Check that the job has been enabled - job accepted by the Information Coordinator Service
|
Data posted on the DMaaP MR topic unauthenticated.dmaapmed.json will be delivered to the path specified in job3.json.
Run SME CAPIF Core
Start the CAPIF Core (Release J) in a separate shell with the following command:
|
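As an illustration only - the image name/tag and container port are assumptions, see the image table:

# Sketch of starting SME CAPIF Core
docker run --rm --network=nonrtric-docker-net --name=capifcore \
  -p 8090:8090 \
  nexus3.o-ran-sc.org:10001/o-ran-sc/nonrtric-plt-capifcore:<release-tag>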
This is a basic start command without helm. See CAPIF (Release H) and the README file in the sme repository for more options.
Check that the component has started.
|
Run RANPM
There is currently no Docker compose file for RANPM; one may be added later.