Timing of a device in the DPL Workflow

Dear all,
I have devices running in a DPL workflow and I would like to know how “fast” they are.
I can easily use Google Benchmark for the pure algorithmic part by extracting it from the DPL device, but I have no idea how to time the full device, including the decoding/encoding of the messages.
Is there a way to time single devices in a DPL workflow?
Thanks in advance,
best regards,

On Mac, I use Instruments and simply attach to the given process. On Linux you can use perf or similar. There is also a --child-driver option which you can use to prepend the command line of a given child with something, e.g. callgrind or, again, perf. There are also some facilities to instrument code via DTrace or Signpost. Let me know if this is enough information to get you started; if you get blocked, just ping me.

Hi @eulisse ,
thanks for the info. Actually I already knew about Instruments on Mac. However, AFAIK, it only samples the code; it does not tell me the exact time spent (unlike callgrind, for example). So I cannot know, for example, how much time my code spends processing one “event”.
Anyway, I gather from your answer that there is no integrated facility in DPL for this.
In this case I will try either some timers in the code itself (e.g. std::chrono::high_resolution_clock) or some external tools, like the ones you mention.

Actually there is a metric for that as well, IIRC. I can check tomorrow.

Indeed, I’ve seen that there is the possibility to send some metrics… but it is not clear to me how to do it (or where they are sent: the GUI?)

They are sent to the GUI by default or to --monitoring-backend <url> if specified. I will try to find a proper example tomorrow…