Can anyone indicate how I/O devices (i.e., block, character, or pointing devices) can be handled as publishers of topics? In that case, what QoS should be used? And if there is a missed deadline in the case of a non-reliable transport, how does one ensure that all samples of the corresponding topic have been received by the subscriber?
It's actually very common to use DDS in an environment where edge devices are responsible for the I/O of the system and where the data to/from those devices is made available as real-time information in DDS.
It's typically easy to create a topic that matches the information that can be published by a device. The QoS policies that drive the information-distribution as well as the information-availability are those that can be defined on the 'topic-level', and are the following:
1) w.r.t. distribution
- delivery: either best-effort or reliable delivery, which typically depends on the nature of the device. If it periodically provides (digital) measurements of the (analogue) outside world, then best-effort delivery might be a good option since that's less expensive than reliable delivery (and also, with limited time-validity of the information, a retried transmission might already fall outside the lifetime of the data)
- importance: the priority of the information (as opposed to the local processing-priority of an application) as expressed by the TRANSPORT_PRIORITY QoS policy. This is a system-wide notion of the priority of the topic-samples w.r.t. samples of other topics that are handled by DDS in the system. The implementation of this QoS is vendor-dependent, yet in OpenSplice DDS we put a lot of emphasis on it since it's (only) this QoS policy that allows deterministic real-time communication where priority-inversions can be bounded.
- urgency: the acceptable latency (budget) of the end-to-end data-distribution. This QoS policy is also implemented/supported differently by different DDS vendors; in our DDS product (OpenSplice DDS), the LATENCY_BUDGET is utilized to increase efficiency by packing information into large UDP-fragments whilst still delivering 'on time', i.e., within the acceptable budget
2) w.r.t. information availability
The DDS specification provides various QoS policies to control the information-availability once it has been published by the device. Basically there are 4 flavours:
a) volatile durability: data is not 'remembered' for late-joining applications, typically applicable for periodically published information
b) transient_local durability: data is 'remembered' at the publisher so that late-joining applications can get it once they've joined the system. Note that this policy ties the lifetime of the information to that of the publisher, which is not always acceptable: use 'real' transient durability instead, see c)
c) transient durability: published information is kept transparently and in a distributed fashion so that a late-joining application can always get it, regardless of the availability of the original publisher at that time. In OpenSplice DDS we've implemented this feature in a distributed and fault-tolerant way since it's often the basis for dynamic recovery by providing fault-tolerant state-information for re-started applications/devices
d) persistent durability: information is also kept on permanent storage so that it even survives a complete system restart
Finally: especially for I/O-based DDS publishers, the DDS spec contains a feature to explicitly set the source-timestamp of data (using 'write_w_timestamp()'). This is typically needed in those cases where only the application itself can determine the proper source-timestamp, rather than having the middleware create a timestamp automatically, which would then be the time when the data is offered to DDS and not necessarily the time when the data 'originated' from the I/O device
Thanks Hans. Regarding transient durability, can a user subscriber subscribe to the state information of the middleware itself? If I have to re-create the current state of a running system, including the state of the middleware layer, by a late-joining system, is it possible? 2 more questions. (1) I have a non-linear rate of data arriving as input to a writer from an I/O device. For example, a temperature sensor of a boiler control: as long as the temp is within permissible limits, the rate of information is say 1 Hz. As the temp moves to the 'caution' band, the rate increases to 10 Hz, and in the critical band it arrives at 100 Hz. Can I vary my urgency, delivery and importance QoS based on the data rate during run time? Does OMG specify any QoS dependent on such non-linear publication behaviour of topics? How would OpenSplice handle this requirement? (2) How do I publish a keyboard input or mouse movement as topics? Can DDS provide such strict timing constraints and delivery to subscribers, especially if the data writer and reader are on two different systems?
w.r.t. transient durability: a subscriber can subscribe to ANY transient information, which can be user-defined transient topics and/or the transient meta-data of the so-called 'built-in topics' (DCPSParticipant, DCPSPublication, DCPSSubscription and DCPSTopic) which basically represent all DDS-entities of a running system. A late-joining user/node/application will be provided 'instantly' with the transient data he is subscribed to (he can actually 'block' for it when using the wait_for_historical_data() API).
w.r.t. runtime-varying urgency/importance: the OMG specification actually specifies most of these (urgency=LATENCY_BUDGET, importance=TRANSPORT_PRIORITY, ownership=OWNERSHIP_STRENGTH) as 'Changeable', meaning they can be modified at runtime for readers/writers. Whether such a QoS-change has immediate effect is somewhat dependent on a vendor's implementation, but in OpenSplice DDS it's indeed 'instantaneous' (meaning each written sample can have a different latency-budget, transport-priority or ownership-strength)
w.r.t. runtime-varying delivery rates: this is actually really simple as it is fully driven by the application's business logic, so in your example basically a varying 'delay' between published samples (where the delay depends on the actual temperature/temperature-band to be reported)
w.r.t. publishing keyboard-input or mouse-movement as topics and the related strict timing: this is actually where DDS 'shines' and OpenSplice DDS 'excels', I dare say, since the QoS policies (of the spec) and our implementation (of OpenSplice DDS) allow for fine-grained control over efficiency (driven by urgency/LATENCY_BUDGET) and determinism (driven by importance/TRANSPORT_PRIORITY) such that latencies and jitter can be well-bounded for important data. To give an idea, typical numbers for small messages distributed over Linux PCs on a lightly-loaded gigabit ethernet are an end-to-end latency of about 60 usec and jitter of less than 10 usec, so it should be fine for keyboard and/or mouse
Thanks again Hans! Some more queries... (a) I am modelling my system as a PIM using an IDE, to be fair to market forces :-) . When I convert the PIM to a PSM for OpenSplice or any other product, and then generate code in a common language, say C++, apart from code size, are there any metrics to evaluate the performance or optimisation of the code? (b) Are the built-in topics for system meta-data (DCPSParticipant, DCPSPublication, DCPSSubscription and DCPSTopic) an OMG specification or a vendor-specific implementation? Also, is the API call vendor-neutral? Can I use these built-in topics in my PIM and hope that when I convert it to a PSM, there won't be any issues after generating code? (c) Is there any pattern or default framework available in OpenSplice for system cloning by a late-joined system, that I can use ready-made? And thanks for all the help.
Maybe it's interesting to check out the example at http://dds-forum.org/boards/messages/34/197.html?1233135578 where I've shown the usage of an MDE tool that basically converts a high-level DDS-model to a specific language. I do think that when creating a DDS-based system, you can NOT abstract away from that in a PIM since i.m.h.o. a PIM abstracts away from the platform but not from the basic architecture (i.e. a PIM can not abstract away the difference between a loosely-coupled data-centric pattern such as DDS and a tightly-coupled client/server pattern such as CORBA). I think if you do so, you'll find yourself at the (abstraction-)level of the (operational) system requirements.
I think the 'quality' of generated code is primarily related to the 'quality' of the abstraction, i.e. how easy life is made for users of the model (i.e. the language in which to express the user's problem domain). And then you want to know what/when/how much you 'sacrifice' with the abstraction w.r.t. configurability and QoS utilization, i.e. how many restrictions there are w.r.t. utilizing the full DDS specification. Our 'PowerTools' in that respect don't impose any limitations, yet as a consequence will require some detailed (DDS-)domain knowledge if the 'defaults' are 'not good enough' for a particular use case.
2) w.r.t. built-in topics
Those are indeed fully specified by the OMG-DDS specification w.r.t. syntax. As explained in an earlier email, the availability of that information in a system can be vendor-specific. Also, I think topics (captured in a tool and/or in IDL) are part of the PIM rather than the PSM (I'd consider the code generated by the vendor's IDL-compiler, i.e. the type-support as well as the typed reader/writer interfaces, as the "PSM part" of topics).
3) w.r.t. system-cloning patterns
When you consider the TRANSIENT durability QoS policy of DDS as an enabler to capture the system-state in a distributed and fault-tolerant way, then I guess there are 3 related patterns:
a. a late-joining system will 'align' itself with the TRANSIENT data-set and with that will become another/new source of that data w.r.t. fault-tolerance (so a pattern where you can dynamically add replicas/backups of data, which is more powerful than most hot/standby patterns where a backup can take over from a master but adding a new backup in a running system is not possible)
b. a late-joining application will get the TRANSIENT data even though it was published in the past, and regardless of whether the publisher/writer is still active/available in the system
c. a crashed application that needs to be restarted (perhaps on another and/or new computing node) can subscribe to its own (previously published) internal state and thereby continue to operate in the correct mode on the correct data-set (this implies that an application publishes its internal state explicitly if it can not be reconstructed from its 'normal' set of inputs)
All 3 patterns are readily supported by a DDS that implements the TRANSIENT QoS policy (which is actually defined as 'optional' in the DDS-specification). OpenSplice DDS for instance does support it.
Thanks once again Hans! I still need some more info. (a) Is configuration of DDS using XML an OMG specification or an OpenSplice implementation? (b) Is there a list of parameters that can be used in the configuration file, and what do they signify? Or is it implementation-specific? (c) Can QoS parameters for a domain (only domain-level) be specified in the configuration file? (d) Do we have to specify the "database" size in the configuration file at initialisation, or can it "grow" during runtime/operation? (e) In a running DDS system, can I periodically obtain a 'snapshot' of built-in topics and store it in a database (using a DBMS connector to an RDBMS)?
First of all, apologies for the late reaction, I was on vacation last week (it's that time of the year).
Now to your questions:
a) configuration of DDS using XML
The OMG DDS specification standardizes the (logical) API that applications 'perceive' when using DDS. As is typical for most middleware products, there's also 'stuff' related to configuration (like networking parameters such as IP-addresses, port-numbers, UDP-packet sizes, etc.) that is of course necessary but not 'by definition' or 'by standardization' visible to individual applications. It depends on the DDS vendor how such configuration is done. We've chosen an architecture that clearly separates those 'deployment' aspects (which we think are the 'domain' of the system-integrator) from the programmatic/API aspects (which are the 'domain' of the application-developers). By doing so we assure maximum re-usability of applications as well as keep the complexity of writing/deploying applications down to a minimum (e.g. applications ONLY perceive the standardized DDS-API, so they don't have to 'bother' about configuration and/or need proprietary/vendor-specific APIs to set up/configure lower-level items like the networking-parameters mentioned above). Finally, using XML is a natural way of doing so, and combined with our OpenSplice 'Configurator' tool (which provides context-sensitive help to construct such an XML configuration file) we think it's an elegant way of configuring the deployment environment as well as mapping some DDS-level QoS policies (for importance/urgency/partitioning) onto the underlying communication infrastructure.
b) Configuration parameters
The complete list of configurable parameters and their explanation can be found in the OpenSplice DDS Deployment-guide which is part of our (open-source) distribution.
c) Domain-level QoS parameters
For applications, all (logical) QoS parameters for all DDS-entities that together 'make up' a DDS-domain are fully specified (syntax & semantics) by the OMG-DDS specification. Yet there is flexibility in 'implementing' certain QoS policies such as information importance (TRANSPORT_PRIORITY), information urgency (LATENCY_BUDGET) and data-partitioning (PARTITION), and in our product we've chosen to allow for a dynamic mapping of such 'logical' QoS policies onto 'physical' characteristics of the underlying target/deployment system:
- TRANSPORT_PRIORITY ==> dynamically mapped onto XML-configured priority-lanes
- LATENCY_BUDGET ==> dynamically mapped onto a 'packing-algorithm' to combine multiple DDS-samples (from multiple applications) into configurable UDP-fragments (also obeying configurable traffic-shaping settings)
- PARTITION ==> dynamically mapped onto configurable 'network-partitions' that are basically characterized as a set of unicast/multicast addresses
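To give a feel for the shape of such a mapping, a configuration fragment might look something like the sketch below. Note that the element and attribute names here are purely illustrative and do not follow the actual OpenSplice configuration schema; the real parameters are documented in the Deployment Guide.

```xml
<!-- Illustrative only: not the real OpenSplice configuration schema -->
<NetworkService>
  <Channels>
    <!-- a 'priority lane': samples with TRANSPORT_PRIORITY >= 10 use it -->
    <Channel name="high" priorityThreshold="10" port="54410"/>
    <Channel name="default" port="54400"/>
  </Channels>
  <Partitioning>
    <!-- a DDS PARTITION 'Sensors' mapped onto a multicast group -->
    <NetworkPartition name="Sensors" address="239.1.1.10"/>
  </Partitioning>
</NetworkService>
```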
d) database size
For maximum safety and efficiency, our middleware manages a ring-fenced shared-memory segment which is pre-allocated upon initiation of the middleware. This segment is also called 'the database' and is typically allocated as an operating-system 'shared memory' segment. This allows for an architecture where information can be shared extremely efficiently and quickly within a single node, and also prevents multiple copies of the same information when multiple applications are interested in it. The size of this segment is typically 'bounded' by the RESOURCE_LIMITS QoS policy settings of the applications that utilize it. Extending it dynamically when 'running out' of memory is typically not well supported by the utilized operating systems, but we're providing tools ('mmstat') that provide runtime insight into its utilization.
Finally, in those cases where the scalability and efficiency of shared memory are not required, we also support an 'OpenSplice DDS Cluster' build-option where this segment is mapped onto the 'heap' (where dynamic size-management is not an issue). This results in one or more applications as well as all pluggable services becoming a single process that uses neither shared memory nor daemons. We typically utilize this feature in our QA-process w.r.t. assuring that memory is properly managed, using tools such as Rational Purify (as such tools are typically not sufficiently capable of 'handling' shared memory as opposed to heap).
e) periodic 'snapshots'
Creating 'periodic' or 'continuous' snapshots of topics (which can be both the pre-defined meta-data 'built-in topics' as well as any user-defined topic) is indeed very well supported by a gateway towards a DBMS, as several DDS vendors support. In the case of OpenSplice DDS this would be one of the use-cases for our OpenSplice DBMSConnect service, as it supports transparent 2-way 'interconnection' of DDS and any number of ODBC-3 compliant RDBMS systems. Again using the XML-configuration method, such 2-way DDS/DBMS connections can be configured in a very granular way w.r.t. frequencies, triggering and filtering (using both DDS-queries as well as RDBMS-queries) of data. Detailed information on this can again be found in the product's deployment manual.
Sorry for the delay, yet I hope it still helps somewhat ...
Hi Hans, thanks for all the info. I am back with more queries... 1. How do I model a 'display' or graphics driver as a subscriber (for accepting display inputs from different publishers for different sections of the display) and as a publisher (for publishing keyboard/mouse inputs by the user to different subscribers)? 2. Can I use the standard MVC modelling architecture for display rendering? And if so, how do I store the MVC models/delegates as configuration info?
How you model a 'display' application heavily depends on the kind of modeling tool you're using. As an example, I've attached a screenshot of our modeling tool for DDS which shows a 'display' application that is subscribing to 3 different topics that provide the application with information to 'display'.
The interaction mechanisms that DDS offers to applications, i.e. using (synchronous) waitsets and/or (asynchronous) listeners, facilitate the use of patterns such as MVC.