See also: JIRA link:
What is it: (Data management and exposure) A service that manages data subscriptions. It decouples data consumers from data producers (which may come from different vendors); a data consumer does not need to be aware of where the data comes from.
Where is it: https://github.com/o-ran-sc/nonrtric-plt-informationcoordinatorservice (mirror of https://gerrit.o-ran-sc.org/r/nonrtric/plt/informationcoordinatorservice)
Historical names: Information Coordinator Service (ICS), Enrichment Information Coordinator.
...
- Data producer API: Information Type and Information Producer
- Producer CALLBACKS: GET healthcheck (supervision); Information Job Creation/Modification/Delete.
- Data consumer API: Information Type Subscription Creation/Modification/Delete (REGISTERED/UNREGISTERED); Information Job Creation/Modification/Delete and GET Information Type (a sketch of an Information Job creation request follows this list)
- Consumer CALLBACKS: POST Information Type Status: REGISTERED/UNREGISTERED, invoked when an Information Type status has changed
- Service status API: Returns statistics, such as the number of Producers, Types and Jobs
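As referenced in the data consumer API item above, here is a minimal sketch of how a consumer could create an Information Job in ICS over HTTP. It assumes the demo setup with ICS at localhost:8083 and uses the plain JDK HttpClient; the job id, type id, owner and callback URIs are example values, and the endpoint path and JSON field names follow the ICS data consumer API but should be verified against the OpenAPI spec of the ICS version in use.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch only: registers an Information Job with ICS on behalf of a consumer.
public class InfoJobRegistration {

    public static void main(String[] args) throws Exception {
        String icsBaseUrl = "http://localhost:8083"; // ICS as started by the demo script
        String jobId = "job-1";                      // consumer-chosen job id (example value)

        // Job creation body: type to subscribe to, owner, job parameters and result callback.
        // Field names are taken from the ICS data consumer API and should be double-checked.
        String body = """
            {
              "info_type_id": "ExampleInformationType",
              "job_owner": "demo-consumer",
              "job_definition": {},
              "job_result_uri": "http://consumer:8081/jobs/job-1/results",
              "status_notification_uri": "http://consumer:8081/info-type-status"
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(icsBaseUrl + "/data-consumer/v1/info-jobs/" + jobId))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // A 2xx response means the job was created or updated; ICS will then invoke
        // the job callback of every producer registered for that type.
        System.out.println(response.statusCode() + " " + response.body());
    }
}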
...
1. Authorization check: POST to the Authentication Agent (configured in config/application.yaml)
2. Validation: the URLs seem to be used only for URI validation (?)
3. Consumer starts a job on the Producer: producerCallbacks.startInfoSubscriptionJob -> restClient.post(producer.getJobCallbackUrl(), jobCallbackBody(infoJob)) (a sketch of the callback body follows)
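For step 3, the body that ICS POSTs to producer.getJobCallbackUrl() (built by jobCallbackBody(infoJob)) can be modelled roughly as below. This is a sketch only: the field names are assumptions based on the ICS data producer API and should be checked against the ICS OpenAPI spec; Jackson is assumed for JSON binding.

import com.fasterxml.jackson.annotation.JsonProperty;

// Rough model of the job-start callback payload sent from ICS to a producer.
public record JobCallbackBody(
        @JsonProperty("info_job_id") String infoJobId,      // id chosen by the consumer
        @JsonProperty("info_type_id") String infoTypeId,    // type the job subscribes to
        @JsonProperty("target_uri") String targetUri,       // where the producer should deliver data
        @JsonProperty("owner") String owner,                 // job owner (the consumer)
        @JsonProperty("info_job_data") Object infoJobData) { // type-specific job parameters
}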
ICS Callbacks Flow
...
Demo Application - Java Producer and Consumer
WIP application: https://gerrit.nordix.org/c/local/oransc/nonrtric-prototyping/+/20750
Sample application: ics-producer-consumer
Script for the demo: https://gerrit.nordix.org/gitweb?p=local%2Foransc%2Fnonrtric-prototyping.git;hb=refs%2Fchanges%2F50%2F20750%2F15;f=kafka-demo-app%2Fdemo2.sh
Running the script will check the requirements and start 3 containers: DemoApp(localhost:8080), Kafka(localhost:9092), ICS(localhost:8083)
The demo application must provide a start.sh script.
Running the script will check the requirements and start 4 containers:
- Kafka(localhost:9092)
- ICS(localhost:8083)
- Producer(localhost:8080)
- Consumer(localhost:8081)
The Producer implements these callbacks in order to work with ICS (a minimal Java sketch follows the list):
1. GET SUPERVISION_URL: return 200
2. DELETE JOB_URL + "/{infoJobId}": return 200
3. GET JOB_URL: return 200 and a collection of JOBs
4. POST JOB_URL: return 200; ICS sends a JOB in the request body (the Producer receives data from ICS)
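A minimal sketch of these four producer callbacks, assuming a Spring Web application (as the demo logs suggest) and placeholder paths /supervision and /jobs for SUPERVISION_URL and JOB_URL; the Map-typed job body and the info_job_id field name are simplifying assumptions.

import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

// Sketch of a producer exposing the callbacks that ICS invokes.
@RestController
public class ProducerCallbackController {

    // infoJobId -> job body received from ICS
    private final Map<String, Map<String, Object>> jobs = new ConcurrentHashMap<>();

    // 1. Supervision: ICS polls this to check that the producer is alive.
    @GetMapping("/supervision")
    public ResponseEntity<Void> supervision() {
        return ResponseEntity.ok().build();
    }

    // 2. Job deletion: ICS tells the producer to stop producing data for this job.
    @DeleteMapping("/jobs/{infoJobId}")
    public ResponseEntity<Void> deleteJob(@PathVariable String infoJobId) {
        jobs.remove(infoJobId);
        return ResponseEntity.ok().build();
    }

    // 3. Job query: return the collection of jobs the producer currently knows about.
    @GetMapping("/jobs")
    public Collection<Map<String, Object>> getJobs() {
        return jobs.values();
    }

    // 4. Job creation/update: ICS POSTs the job (created by a consumer) to the producer.
    @PostMapping("/jobs")
    public ResponseEntity<Void> createJob(@RequestBody Map<String, Object> job) {
        jobs.put(String.valueOf(job.get("info_job_id")), job); // field name is an assumption
        return ResponseEntity.ok().build();
    }
}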
The Consumer implements this callback in order to work with ICS (a minimal Java sketch follows below):
1. POST /info-type-status: return 200; invoked when an Information Type status has changed (REGISTERED/UNREGISTERED). The Consumer receives this notification from ICS.
This also assumes that the Demo Application has a definition of a TYPE and a JOB on that type.
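A minimal sketch of the consumer callback, again assuming Spring Web; the payload is treated as an opaque Map since its exact field names are not shown here.

import java.util.Map;

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Sketch of a consumer receiving type-status notifications from ICS.
@RestController
public class ConsumerCallbackController {

    @PostMapping("/info-type-status")
    public ResponseEntity<Void> infoTypeStatus(@RequestBody Map<String, Object> statusNotification) {
        // Expected to carry the type id and REGISTERED/UNREGISTERED status; field names not verified here.
        System.out.println("Type status changed: " + statusNotification);
        return ResponseEntity.ok().build();
    }
}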
Run the demo:
The demo.sh script will:
...
Demo Producer Docker logs
2024-04-02 12:48:05 INFO c.d.p.p.SimpleProducer:141 - {"bootstrapServers":"kafka-zkless:9092","topic":"mytopic","source":"com.demo.producer.producer.SimpleProducer","message":"ygHwxXSIxW","key":"f8f1a7a7-a78e-4c7d-9b8d-108bb0cc9e2c"}
2024-04-02 12:48:06 INFO c.d.p.p.SimpleProducer:141 - {"bootstrapServers":"kafka-zkless:9092","topic":"mytopic","source":"com.demo.producer.producer.SimpleProducer","message":"KNIbP10zfN","key":"b058d00f-bbcd-4d2c-936b-6327847d4c2a"}
2024-04-02 12:48:07 INFO c.d.p.p.SimpleProducer:141 - {"bootstrapServers":"kafka-zkless:9092","topic":"mytopic","source":"com.demo.producer.producer.SimpleProducer","message":"V6fH1NkdeH","key":"ae1a83a3-d8a7-40c8-9d98-529230f8b585"}
2024-04-02 12:48:08 INFO c.d.p.p.SimpleProducer:141 - {"bootstrapServers":"kafka-zkless:9092","topic":"mytopic","source":"com.demo.producer.producer.SimpleProducer","message":"m76qvRFh6f","key":"abccde52-fa72-4fd4-99ab-5bc21514d825"}
2024-04-02 12:48:09 INFO c.d.p.p.SimpleProducer:141 - {"bootstrapServers":"kafka-zkless:9092","topic":"mytopic","source":"com.demo.producer.producer.SimpleProducer","message":"t7FJYnFr43","key":"0602239e-34e9-45a6-a04a-3c67b4c7d9e4"}
++++++++++++++++++++++++++++++++++++++++++++++++++++
Demo Consumer Docker logs
2024-04-02 12:48:05 INFO c.d.c.c.SimpleConsumer:158 - {"message":"Topic: mytopicMessage: ygHwxXSIxW"}
2024-04-02 12:48:06 INFO c.d.c.c.SimpleConsumer:158 - {"message":"Topic: mytopicMessage: KNIbP10zfN"}
2024-04-02 12:48:07 INFO c.d.c.c.SimpleConsumer:158 - {"message":"Topic: mytopicMessage: V6fH1NkdeH"}
2024-04-02 12:48:08 INFO c.d.c.c.SimpleConsumer:158 - {"message":"Topic: mytopicMessage: m76qvRFh6f"}
2024-04-02 12:48:09 INFO c.d.c.c.SimpleConsumer:158 - {"message":"Topic: mytopicMessage: t7FJYnFr43"}
++++++++++++++++++++++++++++++++++++++++++++++++++++
ICS logs
2024-04-02T12:48:05.615Z DEBUG 1 --- [or-http-epoll-2] o.o.i.c.r1producer.ProducerCallbacks : Job subscription 1 started OK 1
2024-04-02T12:48:05.820Z DEBUG 1 --- [io-8083-exec-10] o.o.i.repository.InfoTypeSubscriptions : Added type status subscription 1
Redpanda Console:
...
GUI Consoles and Panels
Automatic
Running the red.sh script will:
- Bring up the sandbox setup (Kafka and ICS)
- Build the local images for the producer and consumer
- Start the Redpanda console (to visually monitor the Kafka data flow) and the NONRTRIC control panel (to visually inspect data type and job subscriptions)
bash red.sh
There are options to skip the build or the GUIs:
bash red.sh --skip-build --no-console
Manual
Redpanda Console:
After Kafka is up and running:
docker-compose -f docker-composeRedPanda.yaml up -d
Redpanda console available at: http://localhost:8888
Manual NONRTRIC-controlpanel:
git clone "https://gerrit.o-ran-sc.org/r/portal/nonrtric-controlpanel"
Change the configuration files as shown here:
...