MCH plans to regularly collect pedestal data that will be used to assess the status of the readout and to identify dead and/or noisy channels that need to be masked for physics data taking.
The pedestal data consists of large, fixed-size events collected at very low trigger rates, of the order of a few Hz or a few tenths of Hz. The raw data has to be stored as-is, and cannot be converted into compressed TimeFrames, because the information about the individual ADC samples would be lost. Hence my first question:
- will it be possible to save raw non-compressed TimeFrames when collecting MCH pedestal data?
The pedestal data will then be processed to compute the mean and RMS of the pedestal values for each readout channel. This information will be used to assess whether the front-end electronics are working properly, and to identify dead/noisy channels that need to be masked for physics data taking.
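To make the computation concrete, this is a minimal sketch of the per-channel accumulation I have in mind (a running mean/RMS via Welford's algorithm); the flat channel key built from (link, board, channel) is purely my own invention for illustration:

```cpp
#include <cmath>
#include <cstdint>
#include <unordered_map>

// Running mean/RMS accumulator (Welford's algorithm), one per readout channel
struct PedestalAccumulator {
  uint64_t n = 0;
  double mean = 0.0;
  double m2 = 0.0; // sum of squared deviations from the current mean

  void add(double adc) {
    ++n;
    double delta = adc - mean;
    mean += delta / n;
    m2 += delta * (adc - mean);
  }
  double rms() const { return n > 1 ? std::sqrt(m2 / n) : 0.0; }
};

// Hypothetical flat channel key built from (link, board, channel)
inline uint64_t channelKey(uint16_t link, uint16_t board, uint8_t channel) {
  return (uint64_t(link) << 32) | (uint64_t(board) << 16) | channel;
}

// usage: for each ADC sample decoded from the raw data, do
//   accumulators[channelKey(link, board, ch)].add(sample);
using PedestalMap = std::unordered_map<uint64_t, PedestalAccumulator>;
```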
The extraction of the channel-by-channel pedestal mean/RMS values cannot be parallelized in the usual way, because the statistics of a given channel must be accumulated by a single process. It is therefore not possible to parallelize the computation by dispatching different TimeFrames to different computing nodes.
It would however be possible to achieve a certain degree of parallelization by dispatching the data from different CRU links to different nodes, making sure that each node always receives the data from the same links (a sketch of the idea follows the question below). So my second question:
- will it be possible to parallelize the processing such that each node gets all the data from a given CRU link?
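In its simplest form, the stable link-to-node assignment I am thinking of would be something like this (illustrative only; the real dispatching would of course be done by the framework):

```cpp
#include <cstddef>
#include <cstdint>

// Deterministic link -> node assignment, so that all the data from a given
// CRU link is always processed on the same node.
inline std::size_t nodeForLink(uint32_t linkId, std::size_t nNodes) {
  return linkId % nNodes;
}
```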
The result of the pedestal processing will be a set of histograms that show the overall status of the readout and possibly highlight the portions that do not work properly, plus a table of readout channels to be masked, with three columns like this:
LinkID BoardID ChannelID
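In code, I imagine each row of the table as something like the following (the field widths are my assumption):

```cpp
#include <cstdint>
#include <vector>

// One row of the bad-channel table (names and widths are illustrative)
struct MaskedChannel {
  uint16_t linkId;
  uint16_t boardId;
  uint8_t channelId;
};

using BadChannelTable = std::vector<MaskedChannel>;
```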
This table should be stored both in the CCDB (so that the reconstruction knows which channels were disabled at a given time) and in the DCS DB (the table of disabled channels will be read each time the front-end is configured for physics data taking).
I guess that the O2 processing will write the table into the CCDB, and that the table will then be replicated into the DCS DB.
- what is the current status of this CCDB <-> DCS DB replication? When do you think we could foresee some first tests?
When this table is read by the MCH WinCC OA software, we will need to apply a mapping that converts the LinkID values into (CRU, LINK) pairs, since this is how we address the front-end electronics in ALF-FRED. My understanding is that this mapping will be maintained by the central system, as it represents the actual wiring of the CRU optical links (a possible interim structure is sketched after the question below).
- is that really the case? Do we already know where this mapping will be stored, and in which form? At the beginning we can use our own mapping for the tests, but it would be nice to be prepared for the final solution…
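For the tests, I was thinking of something as simple as this (all names and the key format are my own invention, to be replaced by whatever the central mapping provides):

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// Hypothetical (CRU, LINK) address as used to reach the front-end in ALF-FRED
struct CruLinkAddress {
  uint16_t cruId;
  uint8_t linkId; // link index within the CRU
};

// Interim LinkID -> (CRU, LINK) mapping, to be replaced by the central one
class LinkMapper {
 public:
  void set(uint32_t linkId, CruLinkAddress addr) { mMap[linkId] = addr; }
  std::optional<CruLinkAddress> lookup(uint32_t linkId) const {
    auto it = mMap.find(linkId);
    if (it == mMap.end()) {
      return std::nullopt;
    }
    return it->second;
  }

 private:
  std::unordered_map<uint32_t, CruLinkAddress> mMap;
};
```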
The next doubt I have concerns the preferred way to implement the processing of pedestal data, which is both a data quality check task and a calibration task. One possibility would be to include all the code in QC; however, my understanding is that people prefer to have a dedicated O2 workflow for calibration tasks. We could then put all the processing into a dedicated DPL workflow, and simply send the channel-by-channel values to the QC for visualization (a skeleton of what I have in mind follows the question below).
- what is your recommendation regarding the implementation?
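This is roughly the skeleton I am imagining, based on the DPL examples I could find; the device name, input/output bindings and data descriptions are placeholders:

```cpp
#include "Framework/runDataProcessing.h"

using namespace o2::framework;

// Skeleton of a dedicated pedestal workflow; the actual input depends on
// how the raw pedestal data reaches the processing nodes.
WorkflowSpec defineDataProcessing(ConfigContext const&)
{
  return WorkflowSpec{
    DataProcessorSpec{
      "mch-pedestal-calib",
      Inputs{{"rawdata", "MCH", "RAWDATA", 0, Lifetime::Timeframe}},
      Outputs{OutputSpec{"MCH", "PEDESTALS", 0, Lifetime::Timeframe}},
      AlgorithmSpec{[](ProcessingContext& ctx) {
        // 1. decode the raw pages from ctx.inputs()
        // 2. update the per-channel mean/RMS accumulators
        // 3. periodically publish the channel-by-channel values,
        //    to be picked up by a QC task for visualization
      }}}};
}
```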
Finally, is there an example that shows how to write a table into the CCDB from an O2 workflow and/or a QC task?
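For concreteness, this is what I was imagining, based on my (possibly wrong) reading of CcdbApi; the URL, path and validity are placeholders, and I suppose the stored type needs a ROOT dictionary:

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

#include "CCDB/CcdbApi.h"

// Hypothetical row of the bad-channel table (same as above)
struct MaskedChannel {
  uint16_t linkId;
  uint16_t boardId;
  uint8_t channelId;
};

void storeBadChannels(const std::vector<MaskedChannel>& table)
{
  o2::ccdb::CcdbApi api;
  api.init("http://ccdb-test.cern.ch:8080"); // test instance, placeholder URL
  std::map<std::string, std::string> metadata; // e.g. run number, comments
  // placeholder path; validity interval left to the defaults here
  api.storeAsTFileAny(&table, "MCH/Calib/BadChannels", metadata);
}
```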
Thanks for your patience in reading this long message!