I have started to play with readout.exe + the QC framework, and I have some basic questions regarding the structure of the buffers that are sent from readout to QC.
I can successfully acquire and decode the CRU data by letting readout.exe write into a named pipe, with this configuration:
# recording to file
[consumer-rec]
consumerType=fileRecorder
enabled=1
fileName=/tmp/readout-pipe
bytesMax=0G
In this case, the data is divided into 8 kB buffers, each with a 64-byte header and an 8128-byte payload.
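As a minimal sketch of that layout, the pipe dump can be split into fixed 8192-byte blocks (64-byte header + 8128-byte payload). Note this is illustrative only: the actual field layout of the Readout header should be checked against the O2 sources, and only the block sizes are taken from the observation above.

```python
# Split a readout.exe pipe/file dump into fixed-size blocks.
# Assumption (from the observed dump): each block is 8192 bytes,
# composed of a 64-byte header followed by an 8128-byte payload.
HEADER_SIZE = 64
PAYLOAD_SIZE = 8128
BLOCK_SIZE = HEADER_SIZE + PAYLOAD_SIZE  # 8192 bytes

def split_blocks(data: bytes):
    """Yield (header, payload) pairs from a raw dump."""
    for off in range(0, len(data) - BLOCK_SIZE + 1, BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        yield block[:HEADER_SIZE], block[HEADER_SIZE:]

# Example with a synthetic two-block buffer:
dump = bytes(2 * BLOCK_SIZE)
blocks = list(split_blocks(dump))
```

Running this on the first 16 kB of the named-pipe dump would confirm whether the header/payload boundaries fall where expected.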
However, when I receive the CRU data inside the QC framework the payload has a different size.
Here is the relevant part of my readout.exe configuration for QC:
We discussed this this morning with @sy-c and @pkonopka. The payload received in the QC is a CRU data page. It should be equivalent to what is actually copied to the data pipe.
The header is an O2 header populated with data from the header built by the Readout. I don’t think you are interested in the header; I just want to be complete.
I’ll let Sylvain correct me if I am wrong. Maybe you could have a look at the first 8 kB to see if it contains the 64-byte header and 8128-byte payload you expect?
Hi,
you can use the -q option, which is a default option of the DPL.
You could also specify a monitoring backend where this type of data would go. I have to check how to do that.
Some progress on this… I have been able to get the CRU data and do some basic decoding, and now I notice that the payloads are not consecutive, even though I have set
One possible problem is that readout is going too fast for the task and some blocks are dropped in the Data Sampling. Could you tell me whether the data is OK within a single payload? Is it that the payloads are not properly ordered, or that some payloads are missing?
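One way to distinguish the two cases, assuming each payload carries a monotonically increasing counter (e.g. a packet counter in the page header; extracting that field is not shown and its name/offset would need to be checked), is to count gaps versus reorderings over the received sequence:

```python
# Classify a sequence of per-payload counters: how many values are
# missing entirely, and how many arrive out of order. The counters are
# assumed to be extracted from a header field (hypothetical here).

def classify(counters):
    """Return counts of missing and out-of-order payloads."""
    expected = set(range(min(counters), max(counters) + 1))
    missing = len(expected - set(counters))
    out_of_order = sum(1 for a, b in zip(counters, counters[1:]) if b < a)
    return {"missing": missing, "out_of_order": out_of_order}

# Example: counter 3 was dropped, and 6 arrived before 5.
result = classify([1, 2, 4, 6, 5, 7])
# → {"missing": 1, "out_of_order": 1}
```

If "missing" dominates, blocks are being dropped (e.g. in the Data Sampling); if "out_of_order" dominates, the payloads are merely reordered.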
Most likely not, at least not before we have some working pre-filtering in the CRU user logic.
Nevertheless, I think it would be interesting to investigate where the bottleneck is compared to a straight dump into a named pipe… what do you think?
We will certainly investigate the bottleneck, no doubt. This is very interesting for us and we should aim at being able to collect 100% of the data in such a case. Could you give me details on the rate we are talking about here?
It helps! I now have to see where we are wasting time. It seems that the readout is not affected by the Data Sampling, which is good.
I’ll keep you informed.
Are you sending a lot of debug messages to stdout? That’s the message handler for all the processes, so it can get congested. Can we sit together to profile what is going on?
I am running qcRunReadout with the -q option, so that the terminal output is reduced to a minimum.
I’m available to profile the code, but most likely not before the end of the Christmas break, as I need to bring the equipment to the P2 cavern this afternoon for other tests…