Corrupted data during readout


So, one packet contains roughly 61 time-bins, which means 61 x 30 = 1830 time-bins per link. I think we can work with that for the moment. At the moment we take 5000 time-bins, but that only works for one link and only half of the time, so it’s a pain in the ass.
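For bookkeeping, the arithmetic above works out like this (a quick sketch; the 30 packets per link is the capture depth implied by the 61 x 30 figure):

```python
# Time-bin budget per link with the current capture depth.
# Numbers taken from the discussion; the packet/time-bin counts are rough.
TIME_BINS_PER_PACKET = 61   # roughly one packet's worth of time-bins
PACKETS_PER_LINK = 30       # packets captured per link right now
TARGET_TIME_BINS = 5000     # what we take today (one link, half the time)

bins_per_link = TIME_BINS_PER_PACKET * PACKETS_PER_LINK
print(bins_per_link)                      # 1830
print(bins_per_link >= TARGET_TIME_BINS)  # False: well short of the 5000-bin target
```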


With the current server configuration I managed to run data taking without dropped packets with a maximum of 9 links per endpoint.
If you want to collect more data you will have to reduce the number of links in the readout (to be tested in your test system anyway).

I have sent an email to Johannes … once the software is ready we can do some tests.


So, even with the new readout tool we will be quite limited, no? And we’d need to change the link configuration in between (say, take 2 x 6 links) in order not to lose data.

So enabling both endpoints you can connect 9 links to each, for a total of 18 links per CRU, without losing data.

We can measure the performance on your system and check whether we can increase the number of links.
Currently on this machine (on loan) we are limited by a few configuration constraints and I can’t assign a very big buffer in memory.

To have proper data taking with 2 endpoints we need the LTU to start the data taking in order to synchronize the two endpoints.
This should already be in the develop branch … I’ll check with the CRU colleagues.


Running with 10 links per endpoint (so a total of 20 links per CRU, which should cover your case),
I can collect 196 DMA blocks, which is roughly 6 times more than what I wrote before.
That should be enough to cover the whole CRU, right?
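Plugging the new numbers into the same back-of-the-envelope check (the ~61 time-bins per packet is carried over from earlier in the thread):

```python
# Capacity with the larger buffer: 196 DMA blocks (packets) per link
# on 10 links x 2 endpoints = 20 links per CRU.
TIME_BINS_PER_PACKET = 61   # from earlier in the thread
PACKETS_PER_LINK = 196      # DMA blocks collected per link
LINKS_PER_CRU = 2 * 10      # 2 endpoints x 10 links each

bins_per_link = TIME_BINS_PER_PACKET * PACKETS_PER_LINK
print(bins_per_link)         # 11956, comfortably above the 5000-bin target
print(PACKETS_PER_LINK / 30) # ~6.5x the ~30 packets per link from before
```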



So that would be 20 links per CRU and 196 packets per link, right? I think that would be sufficient for the moment. We can always upgrade to a more powerful system, like a Raspberry Pi, to get more than 1.5 MByte/link. It’s the year 2018 after all :wink:
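As a sanity check, the quoted 1.5 MByte/link is consistent with 196 packets of roughly 8 KiB each (the 8 KiB per-packet size is my assumption here, not something stated above):

```python
# 196 packets per link at an assumed 8 KiB per DMA page/packet.
PACKETS_PER_LINK = 196
ASSUMED_PACKET_SIZE_KIB = 8  # assumption: a typical DMA page size, not confirmed

total_kib = PACKETS_PER_LINK * ASSUMED_PACKET_SIZE_KIB
print(total_kib / 1024)      # 1.53125, i.e. ~1.5 MiB per link
```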