See also Jira ticket: NONRTRIC-965 (System Jira)


How to use the NONRTRIC Information Coordination Service (ICS)

The Basics

What is it: A data management and exposure service that manages data subscriptions. It decouples data consumers from data producers (which may come from different vendors), so a data consumer does not need to know where the data is produced.
Historical names: Information Coordinator Service (ICS), Enrichment Information Coordinator.

Repository and documentation about the service can be found at:

...

Terminology:

  • Information Type: Represents the types of data that can be produced by data producers and consumed by data consumers.

  • Information Job: Represents an active data subscription by a data consumer, specifying the type of data to be produced and additional parameters for filtering.

  • Data Consumer: Represents entities that consume data and manage data subscription jobs.

  • Data Producer: Represents entities that produce data.


APIs offered by ICS:

  • Data producer API: 

    • Information Type and Information Producer 

      • Producer CALLBACKS: GET healthcheck (supervision); Information Job Creation/Modification/Delete.

  • Data consumer API:

    • Information Type Subscription Creation/Modification/Delete (REGISTERED/UNREGISTERED); Information Job (Creation/Modification/Delete) and GET Information Type 

      • Consumer CALLBACKS: POST Information Type Status: REGISTERED/UNREGISTERED, invoked when an Information Type's status has changed

  • Service status API:

    • Returns statistics such as the number of Producers, Types and Jobs (see the example below)
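
Once ICS is running (see the Docker instructions below), the status API can be queried with a plain HTTP GET. A minimal sketch, assuming ICS listens on localhost:8083 and using the /status path from the API spec imported below:

Code Block
languagebash
# Query the ICS service status; the response contains counters for
# the producers, types and jobs currently known to ICS
curl -s http://localhost:8083/status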

ICS Docker Image:

1. Build the docker image from source and run it on port 8083 (HTTP)

Code Block
languagebash
git clone "https://gerrit.o-ran-sc.org/r/nonrtric/plt/informationcoordinatorservice"
cd informationcoordinatorservice
mvn clean install
docker run -d -p 8083:8083 o-ran-sc/nonrtric-plt-informationcoordinatorservice:latest

Or use the pre-built image 

Code Block
languagebash
docker run -d -p 8083:8083 nexus3.o-ran-sc.org:10001/o-ran-sc/nonrtric-plt-informationcoordinatorservice:1.6.0


2. Import the OpenAPI spec into Postman (informationcoordinatorservice/api/ics-api.json) as an OpenAPI 3.0 definition.
3. Replace the baseUrl with http://localhost:8083 (in the Data management and exposure collection variables), and change the path parameter :infoTypeId to {{infoTypeId}} accordingly.
Other variables will be {{infoJobId}}, {{infoProducerId}}, {{infoTypeId}}, {{subscriptionId}}, etc.
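
As a quick sanity check that the base URL is correct, the producer API can be used to list the known information types. A minimal sketch, assuming ICS runs locally on port 8083; a fresh instance should return an empty list:

Code Block
languagebash
# List all information type ids currently registered in ICS
curl -s http://localhost:8083/data-producer/v1/info-types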


ICS flow:

a) Create a type (PUT /info-types)
b) Create a producer (PUT /info-producers) {supports type for filtering}
c) Create a job (PUT /info-jobs) {consumer subscription}

a) ICS type:
Stores the data schema of what's sent between the producer and consumer.

PUT {{baseUrl}}/data-producer/v1/info-types/{{infoTypeId}}
Body:

Code Block
languageyml
{
  "info_job_data_schema": {
    "topicName": "example_topic",
    "key": "example_key",
    "message": "example_message"
  },
  "info_type_information": {}
}
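
The same request as a curl command, as a sketch: localhost:8083 is the local ICS instance and example_info_type_id is a hypothetical type id.

Code Block
languagebash
# Register (or update) the information type "example_info_type_id" in ICS
curl -s -X PUT http://localhost:8083/data-producer/v1/info-types/example_info_type_id \
  -H "Content-Type: application/json" \
  -d '{
        "info_job_data_schema": {
          "topicName": "example_topic",
          "key": "example_key",
          "message": "example_message"
        },
        "info_type_information": {}
      }'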


b) Onboarding a producer in ICS:

PUT {{baseUrl}}/data-producer/v1/info-producers/{{infoProducerId}}
Body:

Code Block
languageyml
{
  "supported_info_types": ["example_info_type_id"],
  "info_job_callback_url": "http://example.com/job_callback", //POST JobCallbackUrl() + "/" + infoJob.getId();
  "info_producer_supervision_callback_url": "http://example.com/producer_supervision_callback"
}
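
The same onboarding request as a curl command, as a sketch: example_producer_id and the callback URLs are placeholders, and localhost:8083 is the local ICS instance.

Code Block
languagebash
# Register (or update) the producer "example_producer_id" in ICS
curl -s -X PUT http://localhost:8083/data-producer/v1/info-producers/example_producer_id \
  -H "Content-Type: application/json" \
  -d '{
        "supported_info_types": ["example_info_type_id"],
        "info_job_callback_url": "http://example.com/job_callback",
        "info_producer_supervision_callback_url": "http://example.com/producer_supervision_callback"
      }'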

"jobCallbackUrl" and "producerSupervisionCallbackUrl" are used for communication between a service and external producers in the context of the Information Control Service (ICS).

  • jobCallbackUrl: This URL serves as a callback endpoint for the producer. When the service needs to communicate or interact with the producer regarding a specific job, it sends requests to this URL. In the stopInfoJob() method, the service constructs a URL by appending the job ID to the jobCallbackUrl of the producer. Then it sends a DELETE request to this URL, effectively stopping the job. In the startInfoJob() method, the service sends a POST request to the jobCallbackUrl to start a job in the producer.

  • producerSupervisionCallbackUrl: This URL is used for health checks or supervision purposes. The service can send requests to this URL to check the health or status of the producer. In the healthCheck() method, the service sends a GET request to the producerSupervisionCallbackUrl to check the health of the producer.

In summary, both URLs facilitate communication between the service and external producers, enabling actions like starting and stopping jobs, as well as monitoring the health and status of the producers.

c) Giving the consumer a job definition:

PUT {{baseUrl}}/data-consumer/v1/info-jobs/{{infoJobId}}
Body:

Code Block
languageyml
{
  "info_type_id": "example_info_type_id",
  "job_owner": "example_owner",
  "job_definition": {
    "example_key1": "example_value1",
    "example_key2": "example_value2"
  },
  "job_result_uri": "http://example.com/job_result",
  "status_notification_uri": "http://example.com/status_notification"
}
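
The same job creation as a curl command, as a sketch: example_job_id is a hypothetical job id and localhost:8083 is the local ICS instance.

Code Block
languagebash
# Create (or update) the information job "example_job_id" in ICS
curl -s -X PUT http://localhost:8083/data-consumer/v1/info-jobs/example_job_id \
  -H "Content-Type: application/json" \
  -d '{
        "info_type_id": "example_info_type_id",
        "job_owner": "example_owner",
        "job_definition": {
          "example_key1": "example_value1",
          "example_key2": "example_value2"
        },
        "job_result_uri": "http://example.com/job_result",
        "status_notification_uri": "http://example.com/status_notification"
      }'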


1. Authorization check: POST to the Authentication Agent (from the starting config, config/application.yaml)
2. Validation: the URLs (job_result_uri, status_notification_uri) appear to be used only for URI validation
3. The consumer's job is started on the producer: producerCallbacks.startInfoSubscriptionJob() -> restClient.post(producer.getJobCallbackUrl(), jobCallbackBody(infoJob))
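
After the job has been created, its status can be checked through the consumer API. A minimal sketch, assuming the /status sub-resource of the consumer job API and the hypothetical job id example_job_id:

Code Block
languagebash
# Check the job status; a job accepted by ICS with a registered producer is reported as ENABLED
curl -s http://localhost:8083/data-consumer/v1/info-jobs/example_job_id/status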


ICS Callbacks Flow


Demo Application - Java Producer and Consumer 
WIP application: https://gerrit.nordix.org/c/local/oransc/nonrtric-prototyping/+/20750
Script for the demo: https://gerrit.nordix.org/gitweb?p=local%2Foransc%2Fnonrtric-prototyping.git;hb=refs%2Fchanges%2F50%2F20750%2F15;f=kafka-demo-app%2Fdemo2.sh
Running the script will check the requirements and start 3 containers: DemoApp (localhost:8080), Kafka (localhost:9092), ICS (localhost:8083)
The demo application must implement these callbacks in order to work with ICS (example calls below):
1. GET SUPERVISION_URL: return 200
2. DELETE JOB_URL + "/{infoJobId}": return 200
3. GET JOB_URL: return 200 and a collection of jobs
4. POST JOB_URL: return 200 (the job is sent in the request body)
This also assumes that the Demo Application has a definition of a TYPE and a JOB on that type. 
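For reference, the calls that ICS makes against such a producer look roughly like the following. This is a sketch only: localhost:8080 and the callback paths are placeholders; use whatever URLs the producer registered in its info_job_callback_url and info_producer_supervision_callback_url.

Code Block
languagebash
# Supervision / health check - the producer must answer with HTTP 200
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/producer_supervision_callback

# List the jobs known to the producer - must return 200 and a collection of jobs
curl -s http://localhost:8080/job_callback

# Job start is a POST to the job callback URL with the job in the request body
# (see the producer API spec for the exact body format)

# Job stop - ICS appends "/<infoJobId>" to the job callback URL and sends DELETE
curl -s -X DELETE http://localhost:8080/job_callback/example_job_id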
Run the demo:
The demo.sh script will:

  1. Check the system for dependencies such as Maven, Java, Docker and docker-compose
  2. Package the demo application for a producer and a consumer and build the docker images
  3. Start the docker containers in the same docker network with docker-compose
  4. After Strimzi Kafka is up and running, the user can manually run ./runproducer.sh and ./runconsumer.sh in separate shells, or use demo.sh to start the producer and consumer
  5. The script sends type1 (already predefined in the demo application) to ICS
  6. The script sends the producer info to ICS
  7. The script sends the consumer job info to ICS
  8. ICS triggers the demo application via its callbacks
  9. Data is produced in the demo application
  10. The script prints the docker logs of the ICS producer callback handling
  11. The script prints the docker logs of the demo applications

