Potential resource exhaustion with re...
Data Distribution Service (DDS) Forum > DDS Technical Forum >
Erik Boasson posted on Monday, June 29, 2009 - 09:39 am
The OMG procedure for reporting issues with the specification appears to put all reports in an OMG-members-only area. In my opinion, this is not helpful at all to those who use DDS, especially considering the lack of updates to the specification. Therefore, I've decided to post this in a public forum before submitting it to the OMG.


There appears to be at least one sequence of operations that consumes a monotonically increasing amount of resources, even when resource limits are set.

Consider the following simple process (in pseudocode, but I think it is clear enough):

W := DataWriter(topic T, reliable)
R := DataReader(topic T, reliable, resource_limits.max_samples=1)
Q := query on R: "key == %1"
key := 0
forever do
  data.key := key
  write(W, data)
  set_query_parameters(Q, {key})
  dispose(W, data)
  key := key + 1

My reading of the spec is that R will receive the dispose [0] (as it should [1]), which will cause the addition of a sample without "valid data" [2] that can't be removed automatically [3], which isn't accounted for in the resource limits [2], and hence causes resource exhaustion.
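To make the accounting concrete, here is a toy model of a reader history cache under my reading of the spec. It is an illustrative simulation only, not real DDS API code; the `ReaderCache` class and its eviction policy are my own assumptions about how an implementation might enforce RESOURCE_LIMITS.

```python
# Toy model of a DDS DataReader history cache (hypothetical, for
# illustration). Per the spec text quoted further below, samples
# without valid data (e.g. dispose notifications) do not count
# towards RESOURCE_LIMITS, so the loop grows the cache without bound.

class ReaderCache:
    def __init__(self, max_samples):
        self.max_samples = max_samples  # applies to valid-data samples only
        self.valid = []                 # samples carrying application data
        self.invalid = []               # dispose/unregister notifications

    def on_write(self, key):
        # Enforce the limit by dropping the oldest valid sample; a real
        # implementation might instead reject the incoming sample.
        if len(self.valid) >= self.max_samples:
            self.valid.pop(0)
        self.valid.append(key)

    def on_dispose(self, key):
        # No valid data, hence exempt from RESOURCE_LIMITS -- and, with
        # nothing ever taking it, never removed automatically.
        self.invalid.append(key)

cache = ReaderCache(max_samples=1)
for key in range(10_000):
    cache.on_write(key)     # write(W, data)
    cache.on_dispose(key)   # dispose(W, data)

print(len(cache.valid))     # 1: bounded by max_samples
print(len(cache.invalid))   # 10000: grows monotonically with the loop
```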

That this is a contrived process is immaterial.

R "knew" the instance in the words of the specification—although that wording is sickeningly vague.

This is supported by the specification: "The act of taking a sample removes it from the DataReader so it cannot be 'read' or 'taken' again. [...] It will not affect the instance_state of the instance." So the instance state must remain ALIVE after the take, which surely means it still "knows" of the instance.

Reasonable use cases exist that pretty much require R to receive it: for example, a forwarding process that reads topic T in partition X and republishes it in partition Y. For it to do its work properly, it must forward disposes, too, but because it never needs to access any particular sample twice, it would be ridiculous to require it to hang on to a sample of each instance it has ever seen that hasn't been disposed yet. Indeed, one can even imagine it becoming the single greatest resource hog if this were to be the case.
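The forwarding process described above can be sketched as follows. The reader/writer interface here (`take()`, `write()`, `dispose()`, `valid_data`, `instance_state`) is a hypothetical stand-in for a DDS binding, with stub classes so the sketch is self-contained; the point is only that the forwarder handles each sample exactly once and keeps no per-instance state.

```python
from collections import namedtuple

Info = namedtuple("Info", "valid_data instance_state")

class StubReader:
    """Minimal stand-in for a DDS DataReader (hypothetical API)."""
    def __init__(self, samples):
        self.samples = samples
    def take(self):
        taken, self.samples = self.samples, []
        return taken

class StubWriter:
    """Minimal stand-in for a DDS DataWriter; records forwarded calls."""
    def __init__(self):
        self.calls = []
    def write(self, s):   self.calls.append(("write", s))
    def dispose(self, s): self.calls.append(("dispose", s))

def forward(reader, writer):
    # Take each sample exactly once; retain nothing per instance.
    for sample, info in reader.take():
        if info.valid_data:
            writer.write(sample)                     # forward data
        elif info.instance_state == "NOT_ALIVE_DISPOSED":
            writer.dispose(sample)                   # forward the dispose

r = StubReader([(1, Info(True, "ALIVE")),
                (1, Info(False, "NOT_ALIVE_DISPOSED"))])
w = StubWriter()
forward(r, w)
print(w.calls)  # [('write', 1), ('dispose', 1)]
```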

A dispose does not magically create valid data, as is made clear, among other places, in this passage:

"Some elements in the returned collection may not have valid data. If the instance_state in the SampleInfo is NOT_ALIVE_DISPOSED or NOT_ALIVE_NO_WRITERS, then the last sample for that instance in the collection, that is, the one whose SampleInfo has sample_rank==0 does not contain valid data. Samples that contain no data do not count towards the limits imposed by the RESOURCE_LIMITS QoS policy."

I think the intention of the spec is that:
- the dispose creates a sample;
- W does not lose the status of live writer for the instance;
and both are required for local reclamation of resources (section & figure 7.11).

However, it is not clear that W indeed remains a live writer for the instance.

Kind regards,