I'm interested in streaming video and audio over DDS, but it's not clear to me how it works. Suppose I have a video file of 100 MB or more: I have to segment it into chunks and then transfer them over DDS. Suppose also that I'm working on a network with two subscribers for that video that don't request the file at the same time. How can the late-joining node discriminate between the chunks sent for the other node and the chunks meant for it (assuming the data is broadcast)? After all, the late-joining node would also see the data on the network sent to the node that subscribed to the video before its own request. Is there any example code that covers this scenario?
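To make the question concrete, here is a minimal sketch (plain Python, no DDS involved, all names my own invention) of the segmentation step I have in mind: the file is cut into fixed-size chunks, each tagged with a sequence number so that a reader could reorder and reassemble them.

```python
# Hypothetical chunking of a video file before publishing over DDS.
# CHUNK_SIZE and the (chunk_nr, payload) shape are illustrative choices,
# not part of any DDS API.
CHUNK_SIZE = 64 * 1024  # 64 KiB per chunk (arbitrary)

def make_chunks(data: bytes, chunk_size: int = CHUNK_SIZE):
    """Yield (chunk_nr, payload) pairs covering the whole byte string."""
    for nr, offset in enumerate(range(0, len(data), chunk_size)):
        yield nr, data[offset:offset + chunk_size]

# A 150 000-byte "file" split into 64 000-byte chunks gives three chunks.
chunks = list(make_chunks(b"x" * 150_000, chunk_size=64_000))
reassembled = b"".join(payload for _, payload in chunks)
```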
There are various architectural and implementation aspects of DDS that help you in this scenario. For instance, by using multicast-based communication you make sure that your 1-to-n distribution is maximally efficient. Another feature you could utilize is DDS 'durability': by defining your data as TRANSIENT, it will be 'remembered' for late-joining applications such as your second subscriber. How much this QoS policy helps in practice depends on the DDS implementation.

For our product, OpenSplice DDS, we've implemented durability by means of distributed durability services that transparently maintain published data for late-joining applications. If a durability service is already running on the node of your second subscriber before that subscriber becomes active, it has already 'aligned' itself with the rest of the system, i.e. also with the durability service of the publishing node, and since it maintains its information in a shared-memory segment, it provides instant availability of this data once the second subscriber becomes active. If it is really a late-joining 'complete node', i.e. where infrastructure and application both start late, any durability service available in the already-active system will take care of aligning the service of the late-joining node, which could even happen in parallel with your active publisher. This is not an issue, as you can synchronize your subscribing application by utilizing the 'wait_for_historical_data()' API, which blocks your application until all historical data has been delivered. After this synchronization, your second subscriber knows as much as your first one, and you can utilize several APIs to be triggered by data arrival.
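To illustrate the TRANSIENT-durability behaviour described above, here is a toy model in plain Python (it is emphatically not the OpenSplice API): a "durability service" retains every published sample, and attaching a late-joining reader hands it the full history before live data, which is the effect `wait_for_historical_data()` gives you in real DDS.

```python
# Toy model of TRANSIENT durability for late joiners. All class and
# method names are invented for illustration; real DDS durability is
# provided by the middleware, not by application code like this.
class DurabilityService:
    def __init__(self):
        self._history = []   # samples retained for late-joining readers
        self._readers = []

    def publish(self, sample):
        self._history.append(sample)          # remember for late joiners
        for reader in self._readers:
            reader.samples.append(sample)     # deliver live data

    def attach(self, reader):
        # Models the effect of wait_for_historical_data(): the reader
        # receives all previously published samples before any live ones.
        reader.samples.extend(self._history)
        self._readers.append(reader)

class Reader:
    def __init__(self):
        self.samples = []

svc = DurabilityService()
svc.publish(("chunk", 0))
svc.publish(("chunk", 1))
late = Reader()              # joins after two samples were published
svc.attach(late)             # historical data delivered here
svc.publish(("chunk", 2))    # live data follows
# late.samples now holds all three chunks despite joining late
```

The point of the model: the late joiner does not need to "discriminate" chunks on the wire at all; the durability service makes the earlier chunks available to it as if it had been subscribed from the start.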
Another requirement is to properly model your information so that your data chunks can be uniquely distinguished. DDS has the notion of 'keys' (like in an RDBMS) to uniquely identify the samples that belong to a specific instance (perhaps the frame number could be the topic's key), as well as the notion of 'history', which allows a DataReader to maintain a set of samples for each key value rather than just the newest sample with that key value (perhaps your history depth should be the size of your fragmentation).
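The interplay of keys and history depth can be sketched with a small Python model (again, invented names, not a DDS API): each distinct key value is its own instance, and each instance keeps at most `depth` samples, with the oldest discarded first, which mirrors KEEP_LAST history.

```python
from collections import defaultdict, deque

# Toy model of a DataReader cache with keyed instances and KEEP_LAST
# history. Class and method names are illustrative only.
class KeyedReaderCache:
    def __init__(self, depth: int):
        self.depth = depth
        # one bounded sample queue per key value (per "instance")
        self.instances = defaultdict(lambda: deque(maxlen=depth))

    def on_data(self, key, sample):
        self.instances[key].append(sample)  # oldest dropped beyond depth

# Option A: key on the chunk number -> every chunk is its own instance,
# so even depth=1 loses nothing.
per_chunk = KeyedReaderCache(depth=1)
for nr in range(3):
    per_chunk.on_data(key=nr, sample=f"payload-{nr}")

# Option B: key on the file id -> one instance, so the history depth must
# be at least the number of chunks, or earlier chunks are overwritten.
per_file = KeyedReaderCache(depth=3)
for nr in range(3):
    per_file.on_data(key="file-42", sample=f"payload-{nr}")
```

Either modeling works; option A is simpler, while option B groups all chunks of one file under a single instance at the cost of sizing the history to the fragmentation.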