Right now we are using Checkers on the QC tasks in synchronous mode. Later on we will need to implement Checkers on the trending output, to see whether the results are stable in time and to find outliers from the general trends. It seems to me, correct me if I am wrong, that the Checkers currently sit in the main data flow between the task and the publishing of histograms to the database.
Is there a way to use a Checker to check the trending output? Is there a separate Checker class implemented for such cases? And if not, are there any plans to implement something like this?
Hello,
this is indeed something which still has to be done. Honestly, I did not expect that someone would ask for it so soon.
The plan is to allow for a Check data source which is an object in the repository. It would be configured as follows:
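The configuration example itself did not survive in this thread. As a rough sketch only — all key names and paths below are made up for illustration and are not the framework's actual syntax — such a repository data source might look like:

```json
{
  "checks": {
    "CheckOfTrends": {
      "active": "true",
      "className": "MyCheckClass",
      "dataSource": {
        "type": "repository",
        "path": "qc/TPC/MO/Trending",
        "name": "meanOverTime"
      }
    }
  }
}
```

The essential idea is that `"type": "repository"` (or whatever the final keyword becomes) would tell the Check to fetch its input object from the QC repository instead of receiving it from a running task.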
Would that work for your use-case?
Would you need it as soon as possible, or can it wait a bit? Right now my highest priority is support for multi-node QC setups, which was already requested by two detector teams.
Cheers,
Piotr
For the example use case, this seems like a good suggestion. Would the trending histogram then be an “MO”? Would it be possible to run the checks in time loops, so that the trending could be analysed by the Checker every 10 minutes, every hour, and so on?
I have another question about this as well. We were talking in the TPC QC group about needing to apply Checkers not only to TObjects, but to custom objects too. For example, if we wanted to check calibration results, we would need to apply Checkers to the PEDESTAL calibration object. Can such objects be saved in the repository as MonitorObjects and used later by Checkers?
Yes, we treat it and store it just like normal Monitor Objects generated by QC Tasks.
Yes, we would have to allow for a configurable time interval. We surely don’t want to check such an object constantly in a loop. Another option would be to run a Check each time there is a new version of an MO in the repository.
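Neither triggering mode exists in the framework yet. As a language-neutral illustration (in Python, with entirely made-up names), the decision between the two proposed policies could look like this:

```python
def should_run_check(now, last_run, interval_s,
                     latest_version, last_checked_version, policy):
    """Decide whether a Check on a repository object is due.

    policy "periodic":       run when at least interval_s seconds have
                             passed since the last run.
    policy "on_new_version": run when the object in the repository has
                             a newer version than the last one checked.
    """
    if policy == "periodic":
        return now - last_run >= interval_s
    if policy == "on_new_version":
        return latest_version > last_checked_version
    raise ValueError(f"unknown policy: {policy}")
```

A 10-minute periodic check would then fire with `interval_s=600`, while the on-new-version policy avoids any polling interval at all and reacts only to repository updates.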
Not yet, but the plan is to support any objects which have a ROOT dictionary. If your calibration object is stored in CCDB, then I guess it has a dictionary. @bvonhall is taking care of that, perhaps he would like to add something.
Thanks for the information! And one last, somewhat related question.
If I wanted to store some additional information from the Checker function (for example, if I pass MOs from several tasks, calculate some common compound observable for the check, and want to store this observable so I could access it later for trending), is there a possibility to publish some additional objects from the Checker to the CCDB (simple floats, TObjects, ...)?
We didn’t foresee creating new objects inside Checks. However, we want to add the possibility to attach some metadata to QualityObjects (the Checks’ results), in the form of key-value pairs.
Producing additional TObjects inside a Check would be a bit of a stretch perhaps - we could change that in the framework, but I would advocate sticking with the single-responsibility principle.
Another way to do that would be to compute and store these observables inside your own Post-processing Task (e.g. TrendingTask is a specialization of a Post-processing Task). Then you should be able to access those observables again while running a Check or a TrendingTask.
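The post-processing approach described above can be sketched as follows. None of these names come from the QC framework; this is just an illustration of the three steps: combining inputs from several tasks into one compound observable, storing it in a trend, and checking the trend later for outliers.

```python
from statistics import mean, stdev

def compound_observable(task_means):
    """Combine per-task inputs into one number, here simply their average."""
    return mean(task_means)

def update_trend(trend, value):
    """Store the new observable so a later Check (or trending step) can read it back."""
    trend.append(value)
    return trend

def check_trend(trend, n_sigma=3.0):
    """Flag the latest point if it deviates from the earlier history by > n_sigma."""
    if len(trend) < 3:
        return "Null"  # not enough history to judge
    history, latest = trend[:-1], trend[-1]
    spread = stdev(history)
    if spread == 0:
        return "Good" if latest == history[0] else "Bad"
    return "Good" if abs(latest - mean(history)) <= n_sigma * spread else "Bad"
```

In the framework the storage step would go through the repository rather than a Python list, but the separation of concerns is the same: the post-processing task produces and stores the observable, and a Check only reads and judges it.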
Would any of these options be enough to achieve what you want?
I suppose it is enough. The possibility to attach metadata would be great. I will check out the post-processing task option as well. Thank you for all these answers!
Have a nice day.
Sorry, there is a dedicated issue for Checks in post-processing now, because I realised that the previously proposed approach would cause clashes when beautifying objects from post-processing.
Increasing priority on this. If you would like to provide some feedback, please comment directly in that issue, so we keep the discussion in one place.