I'm writing a DataReaderListener, and I have to know which event (Creation, Update, Deletion, or Creation after a Deletion) is associated with a sample that is read in the on_data_available callback.
In this case, the states and ranks of the SampleInfo class aren't enough.
For example: an instance N is quickly disposed and re-born. When I am notified, the read operation will return a collection with one sample. This sample is NEW and ALIVE. I am not aware of the deletion unless I store the state of each instance in my application.
The SampleInfo class contains a 'disposed_generation_count' attribute that is incremented each time the instance_state of the corresponding instance changes from NOT_ALIVE_DISPOSED (following the writer's dispose action) to ALIVE (i.e. when it is 're-born', as you state).
The section 'Interpretation of the SampleInfo counters and ranks' of the latest DDS specification revision (OMG document formal 07-07-01) and state-chart 7-11 describe this behavior.
Does your issue stem from the fact that in order to detect an increase of the disposed_generation_count, you have to 'remember' the previous value?
Hans van 't Hag Product Manager "OpenSplice DDS" PrismTech
It's not necessary to store any information to become aware of the deletion. The disposed_generation_count should be enough: if you receive a sample that is NEW and ALIVE but has a disposed_generation_count higher than 0, then you know there must have been one or more disposes you missed before you read this sample. It doesn't matter whether this is the first 'rebirth' or the one-hundredth; with the information at hand you should be able to solve your problem without having to store any state in your own application.
The only thing you cannot determine this way is how many disposes you missed, because that would require knowing the disposed_generation_count value from the last time you read a sample for that instance. But I assume that's not the scenario you wish to resolve, as you stated you simply want to be aware of the fact that there was a previous instance which has now been disposed.
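The stateless check described above can be sketched as follows. Sample here is a plain stand-in for the data plus its SampleInfo fields, not a real DDS type; no DDS library is used:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    view_state: str                 # "NEW" or "NOT_NEW"
    instance_state: str             # "ALIVE", "NOT_ALIVE_DISPOSED", ...
    disposed_generation_count: int  # bumped on every dispose -> re-create

def missed_a_dispose(s: Sample) -> bool:
    """True if at least one dispose/re-create cycle happened before this
    sample, deduced from the sample alone (no stored per-instance state)."""
    return (s.view_state == "NEW"
            and s.instance_state == "ALIVE"
            and s.disposed_generation_count > 0)

# A re-born instance: NEW and ALIVE, but dgc > 0 betrays the earlier dispose.
assert missed_a_dispose(Sample("NEW", "ALIVE", 1))
# A genuinely new instance that was never disposed.
assert not missed_a_dispose(Sample("NEW", "ALIVE", 0))
```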
My DataReaderListener has to ensure that no event is lost, in (if possible) a stateless way. DDS doesn't work only with the last value of each instance, but with a history.
In the case where an instance is updated, deleted, then re-created between two calls of on_data_available, the read operation will return a collection containing 2 samples:
the 1st: NOT_READ, NEW, ALIVE, dgc = n
the 2nd: NOT_READ, NEW, ALIVE, dgc = n+1
The disposed_generation_count (dgc) helps me to know that the instance has been disposed and then re-created. But I'm not sure what the first sample means: it may be an update, but it may be a creation too.
This case is not the only one. There is a problem every time a read returns several new samples, related to the same instance, belonging to several generations.
In short, how can I deduce the event associated with a sample in a stateless way, even when the read performed during on_data_available returns several new samples of the same instance belonging to several generations?
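To make the ambiguity concrete, here is a small Python model of classifying one read() batch for a single instance. The Sample type and labels are illustrative, not DDS API: a jump in disposed_generation_count inside the batch marks a re-creation, but the first sample of the batch cannot be told apart as creation vs. update without stored state.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    disposed_generation_count: int  # dgc, as in the discussion above

def classify_batch(samples):
    """Stateless, partial classification of one read() batch for a single
    instance. Only dgc jumps *inside* the batch are decidable."""
    labels, prev_dgc = [], None
    for s in samples:
        if prev_dgc is None:
            # First sample of the batch: creation or update? Not decidable
            # without remembering the instance's previous dgc.
            labels.append("creation-or-update")
        elif s.disposed_generation_count > prev_dgc:
            labels.append("re-creation")   # a dispose happened in between
        else:
            labels.append("update")
        prev_dgc = s.disposed_generation_count
    return labels

# The scenario above: two NEW/ALIVE samples with dgc = n and n+1.
assert classify_batch([Sample(3), Sample(4)]) == ["creation-or-update",
                                                  "re-creation"]
```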
I confirm that there is no way, from what I can understand in the spec, to determine what kind of event the application encountered. The only way seems to be to store the disposed_generation_count.
More generally, I believe the handling of deletions reveals a gap in this part of the OMG specification: DDS seems more focused on data updates than on data deletions. That can be damaging in some cases; in any case, the name "on_data_available" is telling.
I'm thinking about another problem: the notification on disposal of data. The callback on_data_available will be called only after the next instance update, so a user can be notified very late about the disposal.
So it means on_data_available is really not a convenient means to be aware of deletions. In a fully compliant implementation, what is the standard solution? Using a ReadCondition/WaitSet pair?
What exactly are your QoS policies regarding history then?
If you do not want to miss any events then you may consider using the KEEP_ALL history kind with an unlimited depth. That way the middleware will keep track of all relevant data until you have the time to read and analyze it.
A problem with this is that you will start using a lot of resources, since you are retaining all samples ever received. But if you simply take away the samples of the instance you no longer need (take_instance, with max_samples set to an appropriate value) and just leave the last sample (or however many you want to leave), then you should be fine. Check the specification for the impact of using the KEEP_ALL history QoS policy and how to properly ensure you don't run out of memory.
But using this QoS policy should at least solve your problem.
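The take_instance pattern can be sketched like this. The reader cache is modeled as a plain Python list, and drain_instance is an illustrative helper, not the actual DDS call:

```python
def drain_instance(cache, instance, max_keep=1):
    """Take (remove) the processed samples of one instance from the reader
    cache, keeping only the newest max_keep, so KEEP_ALL stays bounded."""
    mine = [s for s in cache if s["instance"] == instance]
    taken = mine[:max(0, len(mine) - max_keep)]
    for s in taken:
        cache.remove(s)
    return taken

# Five samples of instance "N" piled up under KEEP_ALL; take all but the last.
cache = [{"instance": "N", "seq": i} for i in range(5)]
taken = drain_instance(cache, "N")
assert [s["seq"] for s in taken] == [0, 1, 2, 3]
assert [s["seq"] for s in cache] == [4]
```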
About data_available not triggering on disposes: that's not how it works.
Based on the 1.2 DDS specification, on_data_available should trigger even if you just perform a dispose: the section of the DDS specification on read communication statuses states that the data-available flag is set to true if 'data arrives to DataReader OR change in InstanceState of contained instance'. Since a dispose changes the instance state, the flag is set to true and the listener should trigger.
I do believe this is only applicable since the 1.2 DDS specification, so products (or specific versions) not yet compliant to that revision will behave differently: before version 1.2, on_data_available would indeed not trigger if just a dispose was done.
The granularity of the data_available status is indeed very coarse. It triggers when any sample or instance state changes in the DataReader because of incoming modifications (i.e. creations, updates, disposals). It does not tell you exactly what kind of change caused the trigger.
If you only want to be triggered for specific kinds of updates (for example only for disposals and creations, but not for updates), you can use ReadConditions tailored for that specific purpose (possibly combined with a WaitSet). For example, you can make one ReadCondition that triggers on newly created instances (NEW, ANY, ANY) and another for deleted instances (ANY, ANY, DISPOSED).
This becomes harder when multiple generations come into play: a deletion can be followed by an immediate rebirth of the same instance, possibly causing only the first ReadCondition to trigger. In that case you will either need historical samples to also notice the preceding disposal (see the History QoS), or you will need to keep track of the generation counter for each instance manually.
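The two ReadConditions above can be modeled as state masks. matches is a toy predicate, and the tuples mirror the (view, sample, instance) state masks you would pass when creating a real ReadCondition; none of this is actual DDS API:

```python
def matches(mask, states):
    """A ReadCondition modeled as a (view, sample, instance) state mask;
    "ANY" matches every state, as the ANY_*_STATE masks do in DDS."""
    return all(want in ("ANY", got) for want, got in zip(mask, states))

new_instances = ("NEW", "ANY", "ANY")                  # trigger on creations
disposed      = ("ANY", "ANY", "NOT_ALIVE_DISPOSED")   # trigger on deletions

# A freshly re-born instance looks NEW and ALIVE, so only the first
# condition fires -- the disposal in between goes unnoticed.
reborn = ("NEW", "NOT_READ", "ALIVE")
assert matches(new_instances, reborn)
assert not matches(disposed, reborn)
```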
Indeed, the problem occurs when the previous samples are no longer available. But it would be nice to be able to analyse the event without knowledge of what happened before.
And I think that with a KEEP_ALL policy, a read(NOT_READ_SAMPLE_STATE) suffers the same gap: it is not possible to deduce the deletion from the output SampleInfos, even though it has never been read. But if no updates occurred since the first deletion, you will be notified, because the output data sequence contains one invalid sample in the NOT_ALIVE state.
The combination of KEEP_ALL with a read is potentially very dangerous, since read does not release the samples, even after you have processed them. KEEP_ALL should therefore always be used in combination with a take, which removes the samples that you have already processed.
Suppose you use such a combination and always take the samples you have processed; then you should never miss any generation you have not yet processed. Although the absolute generation counts have no particular meaning in that case, you can still distinguish different unprocessed generations by their differences in generation count.
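A sketch of that last point: after a take, group the taken samples of one instance by their disposed_generation_count; each distinct value is one generation you had not yet processed, regardless of the absolute counts. This is a plain Python model, not DDS API:

```python
from itertools import groupby

# Taken samples of one instance, in reception order; dgc only ever grows.
taken = [{"data": "a", "dgc": 4},
         {"data": "b", "dgc": 4},   # same generation as "a"
         {"data": "c", "dgc": 5},   # re-created once in between
         {"data": "d", "dgc": 7}]   # re-created again since then

generations = [list(g) for _, g in
               groupby(taken, key=lambda s: s["dgc"])]

assert len(generations) == 3                      # three unprocessed generations
assert [s["data"] for s in generations[1]] == ["c"]
```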