Only when discovery is not enabled (via the XML configuration file pointed to by $OSPL_URI) is data sent to the network regardless of discovered recipients. When discovery is active, data will NOT be sent to the network once it is clear that there are no other interested nodes.
Also note that OpenSplice DDS uses network services (one per node) that take care of the communication between nodes, which offers several advantages with respect to discovery, efficiency, and determinism:
1) discovery traffic: the only 'discovery' required concerns the interest of remote nodes, i.e. nodes that have one or more 'matching' readers and/or a configured durability service for retaining non-volatile data in a distributed, fault-tolerant way. This node granularity, rather than individual reader/writer granularity, greatly reduces discovery traffic, which is important in large-scale systems such as the naval combat systems where OpenSplice DDS has been deployed for many years and which typically have thousands of applications distributed over 100+ nodes.
2) discovery times: in the large-scale systems described above, startup/discovery times are very important. In OpenSplice this time is constant and essentially 'zero', since any published sample carries 'in-line' QoS settings that allow incoming data on any node to be correctly matched and delivered by its networking service.
3) determinism: a 'nodal' network service schedules all network traffic based upon actual importance (TRANSPORT_PRIORITY QoS) and urgency (LATENCY_BUDGET QoS), and uses runtime-configured, traffic-shaped 'priority lanes' that allow end-to-end priority preemption of data delivery based upon a sample's actual QoS settings (rather than a writer's or reader's processing priority).
4) efficiency: a 'nodal' network service can combine samples from multiple DataWriters and/or topics into configurable-size UDP frames (driven by the available latency budget, as expressed for each individual sample by its LATENCY_BUDGET QoS value), which substantially increases the efficiency and throughput of the data distribution. Furthermore, incoming data needs to be deserialized only once, regardless of the number of matching DataReaders on a node (which, in large-scale systems using powerful SMP boxes, might be tens or hundreds on a single node).
5) scalability: using a ring-fenced shared-memory segment to hold all DDS data on a node means there is only a single copy of any sample's payload, regardless of the number of local DataReaders and/or DataWriters. Apart from very fast intra-node communication, this also facilitates efficient communication with the network service, which both schedules data transfers based upon actual information urgency/priority and dynamically 'partitions' the data traffic by transparently mapping logical DDS partitions (i.e. the PARTITION QoS policy of the related Publisher) onto any number of pre-configured so-called OpenSplice 'networkPartitions', each characterized by a multicast group and/or a dynamically discovered set of unicast hosts. The constant, single-time (de)serialization effort also aids both the scalability and the determinism of the distributed system.
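As an illustration of the partition mapping mentioned under "scalability", a networking-service fragment of the $OSPL_URI configuration might look roughly like this. The element names follow the OpenSplice deployment documentation, but the exact schema is version-dependent, and the partition name, address, and expression below are invented:

```xml
<NetworkService name="networking">
  <Partitioning>
    <!-- Default destination for traffic that matches no mapping -->
    <GlobalPartition Address="broadcast"/>
    <NetworkPartitions>
      <!-- One multicast group per logical traffic class -->
      <NetworkPartition Name="SensorData" Address="239.1.1.1"/>
    </NetworkPartitions>
    <PartitionMappings>
      <!-- Map DCPS partition/topic expressions onto a networkPartition -->
      <PartitionMapping NetworkPartition="SensorData"
                        DCPSPartitionTopic="sensors.*"/>
    </PartitionMappings>
  </Partitioning>
</NetworkService>
```

With such a mapping, samples published in matching DDS partitions travel only on that multicast group, so the traffic is physically separated without any change to the application code.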
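To make the discovery-traffic argument concrete, here is a back-of-envelope sketch in Python. The node and endpoint counts are assumed figures for illustration only, not measurements: node-level discovery exchanges one interest summary per node pair, while endpoint-level discovery must match every remote reader/writer pair.

```python
# Back-of-envelope comparison (illustrative model, not measured data).
nodes = 100               # assumed node count, per the text's "100+ nodes"
endpoints_per_node = 50   # assumed readers + writers per node

# Endpoint-level discovery: every endpoint potentially matched
# against every other endpoint in the system.
endpoint_level = (nodes * endpoints_per_node) ** 2

# Node-level discovery: only per-node interest summaries are exchanged.
node_level = nodes ** 2

print(endpoint_level // node_level)  # 2500x fewer discovery matches
```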
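The priority-lane scheduling described under "determinism" can be sketched as a queue ordered first by importance and then by urgency. This is a minimal illustrative model, not OpenSplice's actual networking-service implementation; the class name, sample names, and numeric values are all invented for the example:

```python
import heapq
import itertools

class PriorityLaneQueue:
    """Toy scheduler: higher TRANSPORT_PRIORITY is sent first; among
    equal priorities, the earliest deadline (enqueue time + latency
    budget) wins. Illustrative only."""

    def __init__(self):
        self._heap = []
        self._seq = itertools.count()  # tie-breaker keeps ordering stable

    def enqueue(self, sample, priority, deadline):
        # heapq is a min-heap, so negate priority to pop highest first.
        heapq.heappush(self._heap, (-priority, deadline, next(self._seq), sample))

    def dequeue(self):
        return heapq.heappop(self._heap)[3]

q = PriorityLaneQueue()
q.enqueue("telemetry",    priority=1,  deadline=5.0)
q.enqueue("track-update", priority=10, deadline=9.0)
q.enqueue("alarm",        priority=10, deadline=2.0)

order = [q.dequeue() for _ in range(3)]
print(order)  # ['alarm', 'track-update', 'telemetry']
```

Note that a low-priority sample enqueued early ("telemetry") is still preempted by later, more important samples, which is the end-to-end preemption behaviour described above.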
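The frame-packing idea under "efficiency" can be modelled as greedy batching of samples into fixed-size frames. This sketch packs by size only; the real service additionally flushes a frame once the shortest remaining latency budget among its samples is about to expire. Sample names and sizes are made up:

```python
def pack_into_frames(samples, max_frame_bytes):
    """Greedily pack (name, size) samples into frames of at most
    max_frame_bytes. Illustrative model of latency-budget-driven
    batching; ignores the time-based flush of the real service."""
    frames, current, used = [], [], 0
    for name, size in samples:
        if used + size > max_frame_bytes and current:
            frames.append(current)       # frame full: flush it
            current, used = [], 0
        current.append(name)
        used += size
    if current:
        frames.append(current)           # flush the final partial frame
    return frames

samples = [("a", 400), ("b", 500), ("c", 300), ("d", 600)]
frames = pack_into_frames(samples, max_frame_bytes=1200)
print(frames)  # [['a', 'b', 'c'], ['d']]
```

Three small samples share one UDP frame instead of costing three network sends, which is where the throughput gain comes from.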
Hope this helps somewhat in understanding OpenSplice DDS. For more product/vendor-specific questions (and answers) I'd like to refer you to our OpenSplice DDS forums at: http://forums.opensplice.org
posted on Thursday, August 19, 2010 - 02:22 pm
Thank you very much for the reply. I do have additional questions on OpenSplice that I'll post on the OpenSplice forum.