I'm investigating the use of DDS in a scenario where two types of data distribution are envisaged:

1. Distributing data generated in real time by some devices. This data is distributed at the speed at which it is being generated -> a perfect match for the DDS model.

2. The data from 1. is recorded, and Subscribers need the possibility to ask for a replay. The data should optionally be sent at the maximum possible speed, and the Subscribers should be able to block the Publishers if the data comes too fast (like in TCP, where the server blocks on writing if the client is too slow to read from the socket).
My problems are obviously with point 2.

A. DDS does not seem to include a mechanism through which the Subscribers can communicate their "wishes" to the Publishers. I guess this could be solved by implementing some special Topics carrying requests, for which the Subscribers would act as publishers and the Publishers as subscribers. But then a request can be refused for one reason or another, and I need a mechanism through which the errors are sent back to the Subscribers.
B. When Publishers publish the data, they do not seem to care whether the Subscribers can keep up with the publishing speed.
Could you please give me your opinion on whether I should use DDS for this, or whether I should forget about it altogether?
The existing DDS specification does support the notion of non-volatile data (TRANSIENT and/or PERSISTENT, where the first is maintained in memory and the second on disk) that is 'sort of' replayed, in the sense that late-joining applications can explicitly ask for it by means of the 'wait_for_historical_data()' call.
Basically this means that the subscriber's data cache will be filled with this historical data by the built-in (and distributed) durability service.
One of the principal characteristics of DDS is that it provides decoupling in space (location) as well as in time/frequency, since it very much resembles a database where data instances are uniquely identified by their key values and where new data is allowed to overwrite old data (of the same key value). To prevent data loss, there is the capability to specify a history depth that can be used to capture a limited (so-called KEEP_LAST) or unlimited (so-called KEEP_ALL) set of 'old' samples (for each key value), which provides a means to 'buffer away' any data that comes in too fast.
In a state-based system (as opposed to an event-based system), it's generally required and/or sufficient to have the latest state of an object rather than all historical states. Such systems can therefore benefit well from the DDS database characteristics.
Finally, there are solutions available that allow forwarding/streaming DDS data into an RDBMS, including replay features for streaming RDBMS data back into the real-time DDS system.
I'm not entirely certain, but I think you should be able to juggle some of the QoS policies to achieve this.
A Reliable topic may cause DataWriter::write to block if the data cannot be delivered to all of its DataReaders. If you set the DataReaders' Resource_Limits.max_samples_per_instance to 1, that should force the DataWriter into lock-step with the slowest DataReader. (Note, however, that this will also cause all the other DataReaders to wait for the slowest of their siblings.)
Instead of blocking the DataWriter, the service may drop the sample at the slow DataReader; but if it does, it must still resend it when it can, and in the meantime no further samples can be sent. Combined with a By_Source_Timestamp Destination_Order, I think that should work.
You may need to place your replayed data in a separate topic so as not to interfere with the distribution of the live data.
Uffe: The behavior you describe is true in combination with a KEEP_ALL policy. Yet it's also true that this is generally 'bad practice', since a slow reader might slow down the complete system.
Nicolae: I'm not completely sure you need the RDBMS coupling for retaining 'old' states outside the scope of live applications. The TRANSIENT durability QoS setting (as supported by our DDS implementation) also supports preserving and recovering old states in combination with current ones. There's a related 'wait_for_historical_data()' call that allows subscribers/readers of TRANSIENT data to be 'synced up' with 'old' states.
The RDBMS coupling is NOT needed for this in our implementation; however, if you want, I can send you more information on it. Just send me an email: email@example.com