User_data and ignore_participant
Bryan Richard posted on Wednesday, November 26, 2008 - 10:44 pm
I have data associations which cannot be described through domains, topics or partitions.

Is it true that ignore_participant is not implemented in v3.4?

If so, user_data is effectively useless if I cannot selectively filter an association based on user_data content.

Passing some initial qualifiers through user_data and selectively ignoring participants through ignore_participant would effectively nip the association in the bud. The only alternative is to tack the extra bit of data onto every publication and implement content filtering. This not only means more data in each publication, but the data also goes to more places where it isn't needed.
Bryan Richard posted on Wednesday, November 26, 2008 - 11:01 pm
Note: In the original post replace ignore_participant with ignore_publication.
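
For clarity, this is roughly the approach I had in mind, sketched against the standard DCPS C++ API (error handling omitted, the header name and the filter helper are illustrative, and it obviously assumes ignore_publication is actually available): read the builtin DCPSPublication topic, inspect each discovered writer's user_data, and ignore the writers whose qualifiers don't match.

#include <string>
#include "ccpp_dds_dcps.h"   // classic OpenSplice C++ header; name varies per vendor

// Application-defined predicate over the writer's qualifier string; here just an
// illustrative AND of two name-value pairs.
static bool matches_my_filter(const std::string& qualifiers)
{
    return qualifiers.find("color=blue") != std::string::npos
        && qualifiers.find("make=ford")  != std::string::npos;
}

void ignore_unwanted_writers(DDS::DomainParticipant_ptr participant)
{
    // The builtin subscriber gives access to discovery data, including the
    // user_data of every discovered DataWriter.
    DDS::Subscriber_var builtin = participant->get_builtin_subscriber();
    DDS::DataReader_var raw = builtin->lookup_datareader("DCPSPublication");
    DDS::PublicationBuiltinTopicDataDataReader_var reader =
        DDS::PublicationBuiltinTopicDataDataReader::_narrow(raw.in());

    DDS::PublicationBuiltinTopicDataSeq data;
    DDS::SampleInfoSeq info;
    reader->take(data, info, DDS::LENGTH_UNLIMITED,
                 DDS::ANY_SAMPLE_STATE, DDS::ANY_VIEW_STATE, DDS::ANY_INSTANCE_STATE);

    for (DDS::ULong i = 0; i < data.length(); i++) {
        if (!info[i].valid_data) continue;
        const DDS::OctetSeq& ud = data[i].user_data.value;
        if (ud.length() == 0) continue;
        std::string qualifiers(reinterpret_cast<const char*>(ud.get_buffer()), ud.length());
        if (!matches_my_filter(qualifiers)) {
            // Nip the association in the bud: this writer's data is never delivered here.
            participant->ignore_publication(info[i].instance_handle);
        }
    }
    reader->return_loan(data, info);
}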
Niels Kortstee posted on Thursday, November 27, 2008 - 10:54 am
Hi Bryan,

You are right, ignore_publication is not available in V3.4. I don't understand why you say that user_data is 'effectively useless' because of that. The user_data can be used for all kinds of purposes.

Another approach for your scenario is to use the partition QoS policy for your Publisher and Subscriber entities. Make sure the partitions of the entities that need to communicate match with each other. This by itself will make sure published data is not delivered to subscribers that use (a) different partition(s).
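
In code that boils down to something like this (a sketch against the standard DCPS C++ API; the partition name is just an example and error handling is omitted):

#include "ccpp_dds_dcps.h"   // classic OpenSplice C++ header; name varies per vendor

void create_matching_entities(DDS::DomainParticipant_ptr participant)
{
    // Publisher side: publish into partition "sensors.building7".
    DDS::PublisherQos pub_qos;
    participant->get_default_publisher_qos(pub_qos);
    pub_qos.partition.name.length(1);
    pub_qos.partition.name[0] = DDS::string_dup("sensors.building7");
    DDS::Publisher_var publisher =
        participant->create_publisher(pub_qos, NULL, DDS::STATUS_MASK_NONE);

    // Subscriber side: only data published in a matching partition is delivered.
    DDS::SubscriberQos sub_qos;
    participant->get_default_subscriber_qos(sub_qos);
    sub_qos.partition.name.length(1);
    sub_qos.partition.name[0] = DDS::string_dup("sensors.building7");
    DDS::Subscriber_var subscriber =
        participant->create_subscriber(sub_qos, NULL, DDS::STATUS_MASK_NONE);
}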

On top of that, OpenSplice DDS also allows you to map a multicast address to a DDS partition, which allows you to prevent data from even being delivered, at the network level, to nodes that don't need it.

Hopefully this answers your question.
Best regards, Niels
Hans van 't Hag posted on Thursday, November 27, 2008 - 10:59 am
Hi Bryan,

An alternative could be to still pass the qualifiers through the user-data, but instead of ignoring publications, select a (publisher-specific) partition by setting the partition QoS of the associated subscriber. This implies that each publisher publishes into its 'own' partition. With the capability of some DDS implementations to map logical partitions to physical multicast groups, it would also prevent data from arriving in more places than where it's actually needed.
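
A rough sketch of the publisher side of that idea (standard DCPS C++ API; the hostname/pid naming scheme and the qualifier string are only illustrations, POSIX calls assumed, error handling omitted):

#include <unistd.h>
#include <sstream>
#include <string>
#include "ccpp_dds_dcps.h"   // classic OpenSplice C++ header; name varies per vendor

// Build a partition name that is unique per publisher, here simply host + pid.
static std::string unique_partition_name()
{
    char host[256] = "unknown";
    gethostname(host, sizeof(host));
    std::ostringstream os;
    os << "pub." << host << "." << getpid();
    return os.str();
}

void setup_own_partition(DDS::DomainParticipant_ptr participant)
{
    const std::string partition = unique_partition_name();

    // The publisher writes into its 'own' partition only.
    DDS::PublisherQos pub_qos;
    participant->get_default_publisher_qos(pub_qos);
    pub_qos.partition.name.length(1);
    pub_qos.partition.name[0] = DDS::string_dup(partition.c_str());
    DDS::Publisher_var publisher =
        participant->create_publisher(pub_qos, NULL, DDS::STATUS_MASK_NONE);

    // Advertise the qualifiers plus the partition name through user_data, so
    // interested subscribers can decide whether to join that partition.
    std::string advert = "color=blue;make=ford;partition=" + partition;
    DDS::DomainParticipantQos dp_qos;
    participant->get_qos(dp_qos);
    dp_qos.user_data.value.length(static_cast<DDS::ULong>(advert.size()));
    for (DDS::ULong i = 0; i < advert.size(); i++) {
        dp_qos.user_data.value[i] = advert[i];
    }
    participant->set_qos(dp_qos);
}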
Bryan Richard posted on Friday, November 28, 2008 - 06:30 pm
Hans and Niels,

Thank you for your suggestions, but again, the associations cannot be described through domains, topics or partitions.

Our pub/sub abstraction provides the ability to associate a set of static parameters with each datawriter to qualify its data. These parameters are basically a set of arbitrary name-value pairs like "color=blue" and "make=ford", and they are static for the life of the datawriter. The subscribers can implement an arbitrarily complex filter to capture only the set of parameters in which they are interested.

The DDS Partition QoS feature is inadequate because it only allows an "or" filter: if one partition matches, the other partitions are ignored. With our abstraction, applications must be able to specify at minimum an AND filter, for instance that they want only the publications that are "color=blue AND make=ford". Since the qualifiers are static for the life of the datareader, this is association oriented rather than data-instance oriented. There would need to be a separate partition for each unique combination of parameters, which is inadequate, since the number of combinations, and therefore the number of partitions, would grow exponentially and become unwieldy very quickly.

I assumed that when PrismTech said they were DCPS specification compliant, they meant the full specification, not a partial implementation. Is it true then, by that precedent, that any DDS implementation, even an empty one with absolutely no functionality, can be called "DDS specification compliant"? Please excuse me if I sound harsh, but I was relying on user_data in conjunction with ignore_publication to implement our design. It would have been a perfect solution; exactly what we needed. I have been the DDS champion at this company, extolling DDS's virtues at every turn, and now I must submit an implementation which is fundamentally flawed but would not have to be if there were a fully compliant, fully implemented DCPS layer. It is frustrating.

Now, my only alternative is to push the parameter list into every piece of data going through our system and filter it at the destination. So not only is every piece of data going through the system larger by the size of its parameter list, but the data will also reach places it doesn't need to go.
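
(For reference, the AND condition itself at least maps directly onto a ContentFilteredTopic expression, assuming the parameters become fields of the data type; the sketch below uses a hypothetical type with string members 'color' and 'make', standard DCPS C++ API, error handling omitted.)

#include "ccpp_dds_dcps.h"   // classic OpenSplice C++ header; name varies per vendor

DDS::DataReader_ptr create_filtered_reader(DDS::DomainParticipant_ptr participant,
                                           DDS::Subscriber_ptr subscriber,
                                           DDS::Topic_ptr vehicle_topic)
{
    DDS::StringSeq no_params;   // no %n placeholders used in the expression
    DDS::ContentFilteredTopic_var cft =
        participant->create_contentfilteredtopic(
            "FilteredVehicle",                    // name of the filtered topic
            vehicle_topic,                        // existing Topic for the data type
            "color = 'blue' AND make = 'ford'",   // SQL-subset filter expression
            no_params);

    DDS::DataReaderQos dr_qos;
    subscriber->get_default_datareader_qos(dr_qos);
    return subscriber->create_datareader(cft.in(), dr_qos, NULL, DDS::STATUS_MASK_NONE);
}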

I called user_data "effectively useless" because, IMHO, qualifying participant associations beyond domain, topic and partition QoS is the target use case for which user_data and ignore_xxx were created. I am sure there are other purposes to which user_data can be applied.

-Bryan
Niels Kortstee posted on Sunday, November 30, 2008 - 03:37 pm
Hi Bryan,

That sounds a bit harsh indeed, but I can understand your frustration. This functionality will be implemented in OpenSplice for sure.

Keep in mind that even if ignore_publication were implemented, it doesn't mean that data would automatically be filtered at the source, since most DDS implementations rely on multicast to reach all subscribers in the domain while sending data 'over the wire' only once. Let's say you have 10 subscribers in your network for a specific publisher. If one subscriber chooses to ignore that publisher, it would still be inefficient if the single multicast were replaced by 9 unicasts to reach only the ones that didn't ignore the publisher.

Niels
Bryan Richard posted on Monday, December 01, 2008 - 03:43 pm
Niels,

I apologize for sounding harsh. I never thought about the multicast issue. If there is more than one recipient on that node, the data will end up needing to be filtered at the destination anyway, but the cost to network bandwidth will remain unaffected. True?

Could you please give me some insight into how the local shared memory implementation works? Will filtering at the destination vs. the source be as irrelevant? In other words, you have given me a reason that filtering at the source will not be as bad as anticipated due to multicast characteristics. Will the same hold true for local, shared memory communications due to some OpenSplice architecture feature? For local shared memory scenarios, ignore_publication and content filtered topics seem like two APIs accessing similar core functionality. Is this true?

Thanks,

Bryan
Bryan Richard posted on Monday, December 01, 2008 - 05:24 pm
Is there a specific limit on the number of partitions allowed in the system?

Are there any performance characteristics or limitations of a system with a large number of Partitions?
Niels Kortstee posted on Tuesday, December 02, 2008 - 07:53 am
Hi Bryan,

> If there is more than one recipient on that node, the data will end up needing to be filtered at the destination anyway, but the cost to network bandwidth will remain unaffected. True?

Yes, that is correct. Also keep in mind that evaluating ignore_publication and content filters at the source is not a scalable concept for two reasons:
1. This requires full knowledge of all subscribers at the publisher.
2. The publisher (source) must evaluate the 'content filters' and 'ignores' of all matching subscribers. The performance of the whole system degrades each time a subscriber is added to your domain.

> Could you please give me some insight into how the local shared memory implementation works?

In local shared memory there is only one copy of a specific sample, independent of the number of subscribers on that node. This is realized by means of reference counting.

> For local shared memory scenarios, ignore_publication and content filtered topics seem like two APIs accessing similar core functionality. Is this true?

In certain scenarios the result (meaning the data that is received at the application level) is indeed the same. Content filters are potentially more powerful, but complex filters could take more CPU time to evaluate.

> Is there a specific limit on the number of partitions allowed in the system?

There is no limitation on the number of partitions.

> Are there any performance characteristics or limitations of a system with a large number of Partitions?

Performance is not affected by the number of partitions in your domain.
Bryan Richard posted on Tuesday, December 02, 2008 - 03:55 pm
Does dynamic partition negotiation, propagated through user_data, sound feasible? This is something like what Hans suggested originally.

In order to implement this, I would need two things: the ability to generate a unique partition name (one per publisher) based on some intrinsic uniqueness of the publisher, and the ability to change partitions dynamically.

Can this be done?
Is there any way I can get example code?
Hans van 't Hag posted on Monday, December 15, 2008 - 12:15 pm
Hi Bryan,

I think my suggestion is feasible (otherwise I probably wouldn't have proposed it :-) )

Partition names are 'just' strings, so you have all the freedom to create unique names for that. Maybe the middleware could even help in providing publisher-unique identifications, but that might not be vendor-independent.

I don't see an issue in communicating the publisher-partition information via user_data.

Anybody can create and leave/join partitions if/when they wish, so that shouldn't be an issue either.
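
As a rough example, the subscriber side of joining a publisher-specific partition at run time could look like this (standard DCPS C++ API; error handling omitted):

#include "ccpp_dds_dcps.h"   // classic OpenSplice C++ header; name varies per vendor

// Called once the subscriber has learned a partition name, e.g. from user_data.
void join_partition(DDS::Subscriber_ptr subscriber, const char* partition_name)
{
    DDS::SubscriberQos qos;
    subscriber->get_qos(qos);

    // Partition QoS is changeable on an enabled entity: append the new name.
    DDS::ULong n = qos.partition.name.length();
    qos.partition.name.length(n + 1);
    qos.partition.name[n] = DDS::string_dup(partition_name);

    subscriber->set_qos(qos);   // existing readers now also match writers in that partition
}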

thanks,
hans