Whereas PowerTools is an Eclipse-based modeling and code-generation tool suite, the OpenSplice Tuner is a more 'runtime'-oriented tool that can be used to control and monitor OpenSplice DDS based target systems. Please check out OpenSplice Tube for videos on both that further explain the differences.
2) on dynamically changing partitions
Yes, the Partition QoS policy of Publishers and Subscribers can be changed at runtime, so that information written by writers belonging to that publisher and/or received by readers belonging to that subscriber will be communicated using the changed partition setting. By using wildcards at the subscriber, you can ensure that information on matching topics published in any current and/or newly emerging partition will be received by the related DataReaders.
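To make the wildcard behavior concrete, here is an illustrative sketch (plain Python, not the DDS API; the function name and partition names are mine) of how a subscriber-side wildcard keeps matching partitions that emerge later. DDS partition matching uses filesystem-style patterns, which Python's `fnmatch` approximates well:

```python
from fnmatch import fnmatchcase

def partitions_match(pub_partitions, sub_partitions):
    """True if any publisher partition matches any subscriber partition.

    Either side may contain wildcards; each name is matched against the
    patterns from the other side, mirroring DDS partition matching.
    """
    return any(
        fnmatchcase(p, s) or fnmatchcase(s, p)
        for p in pub_partitions
        for s in sub_partitions
    )

# A subscriber listening on "sensors.*" also picks up partitions that
# only come into existence later, e.g. "sensors.room42".
print(partitions_match(["sensors.room42"], ["sensors.*"]))  # True
print(partitions_match(["actuators"], ["sensors.*"]))       # False
```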
3) about C++ legacy applications and DDS
Almost any DDS implementation supports multiple languages, including C++. To re-use legacy applications that are not DDS-aware, the challenge is to identify the information that is to be exchanged by your application, define a corresponding information model (in IDL, which can then be 'compiled' for C++) and create a wrapper around your legacy application to exchange that information.
4) about keys and keyless topics
DDS follows a relational data model where published information, i.e. 'samples', is uniquely identified by the value of its key-fields (as in any RDBMS). Typical usage of 'keys' is to relate the data (samples) to an outside-world object, such as a temperature-sensor-id, which would be a typical key-field in an air-conditioning system that must 'handle' multiple rooms with multiple temperature sensors. The temperature readings (samples) of a specific sensor are then said to belong to the specific 'instance' identified by the sensor-id (the key-field). The resulting behavior of DDS is that of a database where the storage structure is defined by the key-fields.
Keyless topics are typically used when all samples relate to the same entity/outside-world object. When using the history QoS policy to 'create room' for historical samples, the result is a data-flow that behaves like a normal (FIFO) queue (rather than a database).
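The database-versus-queue distinction above can be sketched in a few lines of illustrative Python (not the DDS API; topic and key names are mine). A keyed topic with KEEP_LAST depth 1 behaves like a table with one row per instance, while a keyless topic with a history depth behaves like a bounded FIFO:

```python
from collections import deque

# Keyed topic with KEEP_LAST (depth=1): one slot per instance;
# a newer sample overwrites the older one with the same key.
keyed_store = {}
for sensor_id, temp in [("room1", 20.5), ("room2", 22.0), ("room1", 21.0)]:
    keyed_store[sensor_id] = temp   # the key-field selects the instance
print(keyed_store)                  # {'room1': 21.0, 'room2': 22.0}

# Keyless topic with KEEP_LAST (depth=3): all samples belong to the
# single instance, so history behaves like a bounded FIFO queue.
fifo = deque(maxlen=3)
for temp in [20.5, 22.0, 21.0, 19.5]:
    fifo.append(temp)
print(list(fifo))                   # [22.0, 21.0, 19.5]
```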
5) w.r.t 'topic-slicing'
I'm not sure I understand the question. Maybe it's related to the process of defining the right granularity of the information model, driven by non-functional properties such as timeliness, urgency, importance, persistence, etc.
posted on Thursday, June 18, 2009 - 03:38 pm
Hi Hans, thanks for the answers. In my second question I actually wanted to know: can we change the partition (not the QoS policies) to which a publisher or subscriber belongs at run time?
I can understand your question, as the 'partition' QoS policy is somewhat different from other QoS policy settings in that it actually 'drives' the creation and usage of partitions. So a partition is created as soon as there's a publisher that has defined that partition(-name) in its Partition QoS policy and has a related writer that writes samples of a topic (which will then 'appear', or better 'live', within that partition). And yes, you can change partitions at runtime by changing the related partition names expressed by the QoS policies of publishers/subscribers.
Hope this helps a little again ..
posted on Thursday, June 18, 2009 - 04:43 pm
Hi Hans, I have a scenario. I have two modes in my system, let's suppose "training" and "normal". Is it possible to have two publishers publishing the same topics, one publishing in one partition and the other publishing in another partition? Now suppose I am in training mode and I want to switch to normal mode and receive real-world data. Can I do that by just switching the partition of my subscriber to receive data from the publisher publishing real-world data?
posted on Thursday, June 18, 2009 - 04:45 pm
Thanks for all your answers.
With regards, Sandeep.
posted on Thursday, June 18, 2009 - 04:51 pm
Hello Hans, how are one-to-many and many-to-many relationships in data modelling implemented in DDS?
Domains are much more static and don't allow the dynamics (which for partitions can even include wildcards), so I'd strongly suggest using 'partitions' to create these 'information-worlds' (like the REAL_WORLD and TRAINING_WORLD).
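The training/normal-mode scenario can be sketched as follows (illustrative Python only, not the DDS API; the partition names come from the answer above, the rest of the names are mine). Two publishers write the same topic into different partitions, and the subscriber picks its 'world' purely by the partition name it uses:

```python
# Simulated partitions: two publishers write the same topic into
# different partitions; the subscriber selects which "world" it sees.
bus = {"TRAINING_WORLD": [], "REAL_WORLD": []}

def publish(partition, sample):
    bus[partition].append(sample)

publish("TRAINING_WORLD", {"track": 1, "pos": (0, 0)})   # simulator
publish("REAL_WORLD",     {"track": 1, "pos": (5, 9)})   # live sensors

sub_partition = "TRAINING_WORLD"      # training mode
print(bus[sub_partition])             # simulated samples only

sub_partition = "REAL_WORLD"          # switch mode at runtime
print(bus[sub_partition])             # real-world samples only
```

In real DDS the switch is the subscriber changing its Partition QoS, after which the middleware re-matches readers and writers; no application-level routing is needed.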
posted on Thursday, June 18, 2009 - 06:13 pm
Hi Hans, Thanks for the answer. I have some more queries:
1) What is control coupling and what has it got to do with DDS?
2) Why is a request-reply model not feasible to implement in DDS?
3) Suppose I have a system 'A' built using DDS with some QoS settings and I have only its executable. Now I want to build another system using DDS that has to receive information from system 'A', but their QoS requirements are not compatible, e.g. the new system wants some topic twice a second but the old system can provide this topic update only once a second. What to do in this situation?
4) How can I get the full documentation of OpenSplice DDS?
Are you referring here to a previous post? I suspect it's related to the differentiation between tightly-coupled 'control' patterns that are typically synchronous in nature (i.e. request/reply 'pull' style) versus a loosely-coupled data-coupling pattern based upon publish/subscribe ('push' style).
3) Changing QoS settings 'at runtime' is possible with our OpenSplice Tuner tool, but these changes will not be 'persistent'. In your specific example I'm not sure it's a QoS incompatibility in terms of DDS QoS policies, as the frequency at which a writer publishes data is at 'his discretion'. The TIME_BASED_FILTER QoS of DDS belongs to a DataReader and allows that reader to specify a maximum rate (or 'minimum separation') at which it wants to 'see' samples, which is like the 'inverse' of your use case, where you're asking for a minimum rate.
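The reader-side behavior of TIME_BASED_FILTER can be sketched like this (illustrative Python, not the DDS API; the function and variable names are mine). Samples arriving closer together than the requested minimum separation are simply not delivered to the reader:

```python
def time_based_filter(samples, minimum_separation):
    """Reader-side sketch of the DDS TIME_BASED_FILTER QoS: drop
    samples that arrive closer together than minimum_separation.

    samples: list of (timestamp_ms, value) tuples in arrival order.
    """
    delivered, last = [], None
    for t, value in samples:
        if last is None or t - last >= minimum_separation:
            delivered.append((t, value))
            last = t
    return delivered

# A writer publishing every 100 ms and a reader that wants to see at
# most one sample every 500 ms: only the samples at t=0 and t=500 pass.
samples = [(i * 100, i) for i in range(10)]
print(time_based_filter(samples, 500))  # [(0, 0), (500, 5)]
```

Note this only caps the rate a reader sees; as the answer above says, there is no QoS that forces a writer to publish *faster*.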
W.r.t. topic-definition based on QoS requirements:
Granularity of topics (i.e. the choice between creating many small or 1 big topic) is typically driven by:
1) separation of low- and high-frequency data components into different topics (that are related by having the same ‘keys’), assuming that the sizes/payloads of the high- and low-frequency parts of related topics differ (many times only a small portion of the related data attributes changes at high frequency whereas the larger portion of the information is much more static); this prevents unnecessary updates of non-changed information, while the DDS information-management features can automatically combine that information again (as supported by our support for the OMG-DDS DLRL layer)
2) specification of required reliability for each ‘piece of information’ (i.e. topic) - in many cases periodically produced data doesn't need a heavyweight reliable protocol - like in a combat-system environment, track-positions are updated periodically (don't need a reliable protocol for that) whereas track-classification is more like ‘one-shot’ data that represents ‘state’ and must be transported reliably
3) specification of required ‘persistence’ of each piece of information - not all data needs to be ‘remembered’ for late-joining applications - for example, combat-systems, only track-classifications must be ‘remembered’, position-updates will be refreshed fast and periodically anyhow
4) specification of available ‘latency budget’ for the data-distribution - not all data needs to be distributed equally fast - allowing the middleware to utilize the available budget for efficient communication (i.e. packing of information) - like - in real life - the efficiency of public-transportation as compared to private-cars
5) specification of (dynamic) logical network-partitions - allowing for logical structuring/separation of grouped information - allowing for ‘filtering at the source’ since data published in non-interested partitions will not be communicated - allowing for physical segmentation of grouped information - since DDS-partitions may be mapped on physical ‘network-partitions’ that utilize pre-defined multicast addresses
6) specification of default-transport-priorities for each information-type (topic) - relieving this ‘burden’ from each individual application developer - typically the ‘importance’ of information can be rather statically ‘attributed’ to the information-type - still the DDS-spec allows many of those QoS attributes (priority, latency-budget, reliability, persistence) to be used in a dynamic fashion (i.e. pre-set and/or adapted at runtime for a specific dataWriter/Reader)
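Point 1 above (splitting high- and low-frequency components into topics that share the same keys) can be sketched as follows (illustrative Python, not the DDS API; the topic contents and the track-id key are mine, loosely based on the combat-system example above). Frequent updates then only touch the small, fast topic, and a reader-side join by key rebuilds the full picture:

```python
# Two "topics" sharing the same key (track_id): a small high-frequency
# part and a larger, mostly static low-frequency part.
positions = {}        # track_id -> latest position       (high frequency)
classifications = {}  # track_id -> classification        (low frequency)

positions[7] = (12.0, 4.5)
classifications[7] = "friendly"     # written once, rarely updated
positions[7] = (12.4, 4.9)          # frequent updates touch only this topic

# Reader-side join by key, the kind of recombination DLRL automates.
full_view = {k: (positions[k], classifications.get(k)) for k in positions}
print(full_view)  # {7: ((12.4, 4.9), 'friendly')}
```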
posted on Tuesday, June 23, 2009 - 05:51 am
Hi Hans, yes, in my third query I was talking about a situation in which the data reader is asking for a minimum rate which the data writer (that has already been implemented) cannot support. What can I do in this situation?
As the RxO (Requested vs. Offered) policies in DDS are there to assure that no communication will occur if someone (a publisher/writer) offers less than you (a subscriber/reader) need, I guess there's not much that we/the middleware can do. Of course one could anticipate this when designing the system, in such a way that a) writers publish fast enough or b) writers are made explicitly aware of the data rates requested by (emerging) readers.
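The RxO rule for update-rate requirements can be sketched with the DEADLINE QoS, which is the policy DDS uses for this kind of contract (illustrative Python, not the DDS API; the function name is mine). The writer's offered deadline period must be at least as strict, i.e. as small, as the period the reader requests, otherwise the middleware refuses to match them:

```python
# RxO (Requested vs. Offered) check for the DEADLINE QoS: a match only
# occurs when the offered update period is <= the requested period.

def deadline_compatible(offered_period_s, requested_period_s):
    return offered_period_s <= requested_period_s

# Writer updates once per second (offered 1.0 s), reader wants twice
# per second (requested 0.5 s): incompatible, no communication occurs.
print(deadline_compatible(1.0, 0.5))   # False
print(deadline_compatible(0.5, 1.0))   # True
```

This is exactly the situation in the question: the incompatibility is detected (and typically reported via status callbacks), but the middleware cannot make the writer publish faster.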
I have a question: I have downloaded the Community version of OpenSplice DDS and can't build the included examples. Are some generated files needed? As I understood, these files are generated by some TOOL, but this TOOL is included in the Commercial version of OpenSplice DDS only?
So, where can I obtain examples for the Community version, or how can I build the examples from the Community version?
The community edition comes by default with a number of examples written in different languages for different platforms.
Basically we distinguish between Stand Alone applications, where you do not need any external tools besides your compiler, and Corba-Cohabitated applications that would like to integrate DDS with CORBA.
The Stand Alone examples should compile without any further tools. You can recognize these examples by their name: they will always start with SA (for StandAlone) followed by an identifier for their target language. This way SAC stands for StandAlone C, SAJ for StandAlone Java, SACPP for StandAlone C++, etc.
The Corba cohabitated examples require you to have the ORB installed with which you would like to integrate. For C++ this is by default OpenFusion TAO 1.6.1, and for Java this is JacORB 2.3.0. Both examples try to invoke the IDL compiler that comes with the ORB in question. You can recognize these examples by their name as well, since they always start with C (for CORBA) followed by an identifier for their target language. This way CCPP stands for Corba C++, CJ for CORBA Java, etc.
I guess that the examples that do not want to compile are either the CCPP or CJ examples, is that right? You could just skip those examples if you are not interested in Corba cohabitation, or you could download the appropriate ORBs from OpenSplice's website: