The TopicQos does not impose anything on individual readers/writers. Each reader/writer can make its own QoS decisions, regardless of the topic settings.
The idea behind the TopicQos is that it is a hint: if a reader/writer does not know what would be good candidate policies for a given system, it can copy the settings from the topic. There are special helper operations on the Publisher and Subscriber (copy_from_topic_qos) for this, and there are also convenience macros that do the same trick.
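To make the "hint" pattern concrete, here is an illustrative Python sketch (not a real DDS API; the dictionary keys and function name merely mimic copy_from_topic_qos): the topic's QoS serves as a set of defaults that an individual reader/writer copies as a starting point and may then override.

```python
# Illustrative model only: the TopicQos acts as suggested defaults.
topic_qos = {"reliability": "RELIABLE", "durability": "TRANSIENT", "deadline_sec": 5.0}

def copy_from_topic_qos(entity_qos, topic_qos):
    """Mimic copy_from_topic_qos(): seed an entity's QoS from the topic's.

    The copy is only a convenience; the entity remains free to override
    any policy afterwards."""
    merged = dict(topic_qos)
    merged.update(entity_qos)  # explicit per-entity settings win
    return merged

# A reader inherits reliability/durability from the topic
# but deliberately deviates with a wider deadline.
reader_qos = copy_from_topic_qos({"deadline_sec": 8.0}, topic_qos)
```

The key point the sketch captures: the merge happens once, at setup time, and nothing enforces the topic's values afterwards.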
In a typical system, the system architect defines the overall information model. That model defines all topics including their TopicQoS settings. Typical applications follow these policy guidelines set by the system architect.
In exceptional cases individual applications may need to deviate from the policies set by the system architect. For those cases individual readers/writers may override the topic settings (normally this should be discussed with the system architect). For those readers/writers the standard compatibility rules apply with respect to the rest of the system.
Please note that the compatibility rules are only evaluated between Readers and Writers. The TopicQos is only considered a hint by the system, so no individual Reader/Writer will be considered incompatible with the Topic. However, since the rest of the system probably follows the TopicQos, you may end up being in conflict with all other Readers/Writers in the system.
Hope this answers your question.
Regards, Erik Hendriks.
posted on Thursday, November 29, 2007 - 02:08 pm
Excellent - many thanks for the speedy response :-)
There appears to be one QoS that breaks this rule - DURABILITY?
My reading of the spec is that for each topic, the topic's maximum DURABILITY is set by the creator of the topic. Therefore, you could get producers or consumers requesting a lower or equal level of DURABILITY than the topic, but never a higher one.
The DurabilityQosPolicy and its cousin the DurabilityServiceQosPolicy are very special policies whose settings are not supported by every DDS vendor, because some of these settings are part of an optional DDS profile (the Persistency profile). So different DDS products may have different behaviour here.
In the case of OpenSplice, the full range of settings is supported: VOLATILE, TRANSIENT and PERSISTENT.
The TRANSIENT and PERSISTENT settings are managed by a pluggable durability service that can be configured to run on various nodes in the system. Because scalability of durable data is an issue in large systems, keeping all historical data on each node may not be feasible. Often nodes are interested in only a small part of the total system data, driven by both performance (boot time, memory usage, network load, CPU load) and fault tolerance (the need for replicates). This service looks at the TopicQos to decide whether it should prepare storage facilities for the topic and what type of storage facilities (memory and/or disk). It also looks at the DurabilityServiceQosPolicy of the TopicQos to see what kind of resource limits should be used and how long samples should be kept after they are disposed by their owners. But in the end it is always the DataWriter that decides whether its samples should be stored in memory (TRANSIENT), on disk (PERSISTENT) or not at all (VOLATILE).
That means that in this particular case the DataWriter can have a weaker QoS than the topic, but not stronger than the topic since if the Topic states VOLATILE, the DurabilityService will not have prepared any storage for it.
DDS implementations that do not support the Persistency Profile only support the VOLATILE and TRANSIENT_LOCAL policies, where data is stored in the context of the DataWriter itself. In those cases the TopicQosPolicy is only a hint: each DataWriter decides for itself whether it will keep a copy of its samples, based on its DataWriterQosPolicy settings.
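The asymmetry described above (a writer may be weaker than its topic, but not stronger) can be sketched as follows. This is an illustrative Python model, not vendor code: `durability_compatible` captures the standard RxO rule between writers and readers, while `storage_prepared` captures the OpenSplice-specific point that the durability service only prepares storage up to the level announced in the TopicQos.

```python
# Durability kinds ordered from weakest to strongest.
DURABILITY_ORDER = ["VOLATILE", "TRANSIENT_LOCAL", "TRANSIENT", "PERSISTENT"]

def durability_compatible(offered, requested):
    """RxO rule: the writer's offered durability must be at least as
    strong as the durability the reader requests."""
    return DURABILITY_ORDER.index(offered) >= DURABILITY_ORDER.index(requested)

def storage_prepared(topic_durability, writer_durability):
    """Model of the point made above: the durability service prepares
    storage based on the TopicQos, so a writer that is *stronger* than
    its topic has nowhere to store its historical data."""
    return DURABILITY_ORDER.index(writer_durability) <= DURABILITY_ORDER.index(topic_durability)
```

For example, a TRANSIENT writer matches a VOLATILE reader, but if the Topic itself states VOLATILE, no storage exists for a TRANSIENT writer's history.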
The reason for having a pluggable service besides the TRANSIENT_LOCAL approach is that the data in a pluggable service can outlive the lifecycle of its DataWriter. That offers some advantages, for example the ability to restore your system state from a crash of one or more of its TRANSIENT DataWriters, where in case of a TRANSIENT_LOCAL DataWriter that crashes, all TRANSIENT_LOCAL information originating from that Writer is lost as soon as it dies. Another example is that late-joining subscribers may want to see TRANSIENT data of DataWriters that have already been deleted.
Regards, Erik Hendriks.
posted on Thursday, January 10, 2008 - 04:05 pm
I was just re-reading this and thought about it some more. Do you know if the OpenSplice pattern of having one particular node (or more I guess) acting as the durability service is implemented by other vendors too?
Since the Persistency profile is an optional profile, no vendor is required to implement it. The whole idea behind this profile is that data is stored outside the context of the DataWriter itself, but there is complete freedom over where and when TRANSIENT/PERSISTENT data is stored in your system. So a vendor may decide to run only one service that stores all transient data (introducing a single point of failure that way), multiple services that are full back-ups of each other (increasing the amount of redundancy), or even to partition the transient data over different services (a very flexible approach).
As far as I know different vendors claim to have implemented the Persistency profile, but each one will make different choices with respect to the amount of configurability and the mechanisms needed to make it reliable and fault tolerant.
The OpenSplice product, which is based on the years of experience we have in the field of naval combat systems and all the redundancy requirements that come with that, supports a very flexible approach that is highly customizable.
If you want more information about the details of our durability service, please contact PrismTech and we will be happy to provide it to you.
I'm currently working on my master's thesis, whose focus is on the policies provided by the DDS specification. For the past few weeks I have been studying the specification. After reading and studying it I started to question the policies (QoS) provided within the Topic entity and how they affect both publishers and subscribers in relation to subscribing/publishing to a Topic and the subscriber-requested, publisher-offered mechanism.
I really can't find anything about it in the specification and I'm glad to see Erik's answer on the subject. Nonetheless I'm interested in any references, in the specification, regarding the Topic's policies being a 'hint' and publishers not being forced to meet the requirements indicated in the Topic's policies.
The description of the DEADLINE Qos, on page 111 of the specification, states the following:
"This policy is useful for cases where a Topic is expected to have each instance updated periodically. On the publishing side this setting establishes a contract that the application must meet. On the subscribing side the setting establishes a minimum requirement for the remote publishers that are expected to supply the data values."
Does this mean that a publisher must meet the minimum requirements as indicated in the Topic's Qos?
I agree that the DDS spec is not really clear about the applicability of certain concepts, but the spec is written for implementors of a DDS product, not for its end-users. So it only explains what a certain entity/operation/qos is supposed to do, not why it is supposed to do that and how it could be utilized effectively for a certain use-case. If you really want information on how to deploy the DDS for your applications, you should probably not be using the DDS spec as a starting point.
For that reason, most DDS products come with courses/documentation that explain the philosophies and rationale behind the DDS specification, and explain how to use the different concepts in different situations. I don't know which DDS product you are currently using, but it might be a good idea to contact your supplier about such information. As a middleware supplier with a long DDS history, PrismTech may be able to help you out here....
In the particular case you mention, the DDS spec does not state directly that the TopicQos is meant as a hint for Readers and Writers. However, it does say that for RxO policies (policies with Request/Offered semantics) only the Publisher and Subscriber ends are involved in determining compatibility. The TopicQos is never mentioned as having a direct effect on the end-to-end connectivity. That fact, combined with the availability of operations like copy_from_topic_qos() and of convenience macros like DATAREADER_QOS_USE_TOPIC_QOS (both of which help you determine a practical set of starting QoSPolicy values for your Readers and Writers), gives some clues on how the TopicQos is supposed to be used.
Back to your particular question about the deadline: here too the RxO property only applies to the Publishing and Subscribing ends (in this case Writers and Readers), and the TopicQos is not mentioned as a reason for incompatibility in the section from which you took your quote.
That means the Publishing end doesn't have to meet any requirements of the Topic: any DataWriter is free to select the QoS settings it wants. However, having incompatibilities with the TopicQos increases the chance of encountering incompatibilities with available DataReaders.
Regards, Erik Hendriks.
posted on Wednesday, February 10, 2010 - 05:16 pm
Hi all, can anyone help me out with this? Is it possible for an application to poll DDS for the available topics/DDS entities? What are the minimum requirements for an application to do this?
Yes, this is possible! For this, the DDS standard specifies pre-defined so-called 'built-in topics' that provide meta-data about the DDS entities that are available in your system; see section 7.1.5 of the OMG DDS rev. 1.2 specification.
The built-in topics (for which there are also built-in readers for convenience) are: DCPSParticipant, DCPSTopic, DCPSPublication and DCPSSubscription.
If you know the name of the Topic from which you want to obtain the Qos (for example by looking at the builtin DCPSTopic), you could use the find_topic operation on the DomainParticipant to actually get a local 'Topic' proxy to that topic.
You can then simply invoke the get_qos on that Topic object to obtain its qos.
Hope that helps.
posted on Tuesday, April 27, 2010 - 08:20 pm
Hello, I'm developing a simulator with respect to the DDS specification. I'm just interested in these QoS policies for publisher, subscriber and topic: transport priority, deadline and lifespan. I need to determine a function that classifies my DDS agents given these 3 QoS policies. 2) Is there a difference between a publisher's period and a publisher's deadline?
thanks for your help
posted on Tuesday, June 01, 2010 - 04:51 pm
I'm afraid I don't really understand your question. But I'll try to give you some explanation on what the different Qos settings do, and maybe that helps you in rephrasing your questions to target the problem in more detail.
The transport priority is set on the Writer, and determines how the DDS should process the data relative to data coming from writers with a different transport priority. Data with a higher transport priority is allowed to jump queues with respect to samples with a lower transport priority, and may be transmitted on the network using a higher diffserv priority than samples with a lower transport priority, so that switches and routers will also give priority to these samples.
The deadline is a Qos that allows you to create a contract between readers and writers with respect to the anticipated update frequency of each individual instance. A writer that sets a deadline of for example 5 seconds, promises its readers that it will update EVERY instance within 5 seconds. If the writer misses its deadline, it will be notified about that. A reader may specify a separate deadline, which may be wider than the deadline of the writer (for example 8 seconds in this case). This reader will NOT be notified when the writer misses its own 5 second deadline, but it will be notified if any of its instances miss the reader's 8 second deadline.
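The writer/reader deadline relationship described above can be sketched in a few lines of illustrative Python (not a real DDS API; function names are hypothetical). The RxO rule is that the writer's promised update period must be no longer than the period the reader requires, and each side is only notified against its *own* deadline.

```python
def deadline_compatible(offered_period, requested_period):
    """RxO rule for DEADLINE: the writer's offered update period must be
    no longer than the period the reader requests."""
    return offered_period <= requested_period

def reader_missed_deadlines(update_gaps, reader_deadline):
    """A reader is notified only for gaps that exceed its OWN deadline,
    not for gaps that merely exceed the writer's (tighter) promise."""
    return [gap for gap in update_gaps if gap > reader_deadline]

# Writer promises an update every 5 s; reader only requires one every 8 s.
compatible = deadline_compatible(5.0, 8.0)

# A 6 s gap misses the writer's 5 s promise, but the 8 s reader
# is only notified about the 9 s gap.
misses = reader_missed_deadlines([4.0, 6.0, 9.0], reader_deadline=8.0)
```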
The lifespan allows you to specify a 'maximum time to live' for your samples: an expiry time is calculated as the reception time of a sample plus its lifespan. If a sample outlives its expiry time, it is automatically removed from the readers and the durability stores in which it expired. However, a reader will not be notified about the expiry of any of its contained samples.
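The lifespan rule amounts to a simple filter on the reader's cache. A minimal sketch (illustrative Python, not a real DDS API; the dictionary layout is an assumption):

```python
def live_samples(samples, now):
    """Keep only samples whose expiry time (reception time + lifespan)
    has not yet passed. Expired samples silently disappear from the
    reader's cache; no notification is generated."""
    return [s for s in samples if s["recv"] + s["lifespan"] > now]

cache = [
    {"recv": 0.0, "lifespan": 10.0},  # expires at t = 10
    {"recv": 5.0, "lifespan": 2.0},   # expires at t = 7
]
remaining = live_samples(cache, now=8.0)  # only the first sample survives
```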
Hope this explains the Qos policies a little bit. Can you explain in more detail what exactly you want to do?
Yes you can, as typically the QoS settings defined on the topic level act as 'defaults' for the 'eventual' quality of service that is applied during a write() or read() call on topic instances/samples. To utilize these 'defaults', there's a utility in the DDS spec called copy_from_topic_qos() that allows a writer/reader to obtain the (pre-defined) QoS policies of the whole topic for individual reads/writes. Yet that's not mandatory, so you can assign different QoS policies at runtime.

Note that not all QoS policies are changeable at runtime between each read/write (you can check chapter 7.1.3 of the DDS rev. 1.2 spec for that): for instance RELIABILITY is not changeable at runtime, which means that you'd have to create 2 writers (a reliable writer and a best-effort writer) and choose between those two 'at runtime' if you want to utilize different QoS settings for writing different instances of a topic at runtime.

Also don't forget that some QoS policies have Request-Offered (RxO) behavior, meaning that if you change a writer QoS policy when writing a sample of a certain instance of a topic, it might no longer 'match' the QoS of the reader. Going back to the example of RELIABILITY: if your reader specifies that it wants data delivered RELIABLY, then it won't accept a writer that writes it as BEST_EFFORT. So some good thinking/designing is required to utilize dynamic QoS changes.
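The two-writer workaround for a non-changeable policy like RELIABILITY can be sketched as follows. This is an illustrative Python model with hypothetical names (`FakeWriter`, `write_sample`), not a real DDS API: each writer is created once with a fixed reliability kind, and the application picks the right one per write.

```python
class FakeWriter:
    """Stand-in for a DataWriter whose RELIABILITY is fixed at creation."""
    def __init__(self, reliability):
        self.reliability = reliability
        self.written = []

    def write(self, sample):
        self.written.append((self.reliability, sample))

# Create both writers up front, since RELIABILITY cannot change at runtime.
reliable_writer = FakeWriter("RELIABLE")
best_effort_writer = FakeWriter("BEST_EFFORT")

def write_sample(sample, critical):
    """Choose the writer 'at runtime' based on the needs of this sample."""
    writer = reliable_writer if critical else best_effort_writer
    writer.write(sample)

write_sample("alarm", critical=True)        # needs guaranteed delivery
write_sample("heartbeat", critical=False)   # periodic, loss is tolerable
```

Note that a RELIABLE reader in the system would only match the first writer; samples going through the best-effort writer would never reach it.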
While evaluating the effort of re-designing the entire data model into a DDS-based information model, we found it prudent to categorise the data based on grouped QoS requirements, e.g. periodic data, command/response, status info etc. Thereafter, we create separate topics for each of these groups and distinguish the messages within a group using a key field, viz. msg_id, so as to make them separate instances of the same topic. So far so good.
But if we also needed finer granularity inside these message groups, we wanted the capability to change the QoS of the topic instance based on the individual message.
I understand that if we keep the QoS policies that are typically RxO-based unchanged, then we retain some control over QoS such as history depth, presentation, topic data, liveliness etc., which can be changed at run time on a per-instance basis. Am I correct in my understanding?
Almost .. what I meant is that when you change RxO policies, you have to be careful not to lose connectivity, as the newly set QoS might no longer match. I guess what's even more important is that when a QoS policy is not 'changeable' you have to create multiple writers/readers and choose the 'right' one at runtime if you want different behavior for different instances.
What I've used in the past (in my TACTICOS history) is that we 'raise' the QoS on individual writes (never lowering it), i.e. for transient data we'd create a VOLATILE reader that does call wait_for_historical_data() to synchronize with potential TRANSIENT writers, and then we can select 'at runtime' whether we want to write a volatile or transient sample.
Hi.. I want to know how it is possible to define an element (member) in a topic which contains null-terminated strings, i.e. "\0" in between the string content? When we cast the string to char* to be passed as a parameter to the myWriter.write() function, the string delivered at the subscriber end only goes up to the first "\0". Is there a method, or is it an issue with DDSI?
Hi.. We did not want to use a char array since we only know the length of the string at run time and did not want to send char[MAX_SIZE] in all samples. Nevertheless, we overcame the issue by using a sequence<octet> and setting mySequence.length(<calculated_msg_length>) at run time based on the length of the string.
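The root cause and the fix above can be demonstrated compactly. This Python sketch (using ctypes as a stand-in for C string handling; it is not DDS code) shows why a char* payload is cut at the first NUL, while a length-prefixed byte sequence, like IDL's sequence<octet>, preserves the whole payload:

```python
import ctypes

payload = b"part1\x00part2"

# Treating the buffer as a C string: .value reads up to the first NUL,
# so everything after "\0" is silently dropped.
truncated = ctypes.create_string_buffer(payload).value

# A sequence<octet>-style representation carries an explicit length,
# so embedded NUL bytes are just ordinary data.
octet_seq = list(payload)          # 11 octets, including the NUL
restored = bytes(octet_seq)        # round-trips losslessly
```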
Is it possible to use the values of parameters in the filter expression of a content-filtered topic at run time, by using placeholders/variables in the topic IDL?
There are some examples of this in NDDS, the RTI DDS implementation, but is there any such provision in OpenSplice?