DPL initialization

Do I understand correctly that the initialization part of all the data processors happens on all the devices?

So for instance, if I have 3 DataProcessors (the typical toy example of a sampler, a processor, and a sink) and the sampler opens a file in its init callback, will that file opening happen 4 times (once for the driver and once per device)?

Mmm… no, the init callback happens only once for the dataprocessor it’s defined for. Can you share an example?

ok, then I must be doing something stupid and/or have misunderstood something fundamental… :wink:

See my skeleton workflow at https://github.com/aphecetche/AliceO2/blob/mch-hit-reader/Detectors/MUON/MCH/Simulation/src/HitReaderWorkflow.cxx

with a sampler that (will at some point) read hits from a ROOT file, a fake processor, and a fake sink.

~/Mind/@Archive/2018/MRRTF/dpl
❯ mch-hit-reader > log.txt

~/Mind/@Archive/2018/MRRTF/dpl
❯ grep create log.txt
createHitReaderAlgo(o2sim.root)
[79581]: createHitReaderAlgo(o2sim.root)
[79581]: [WARN] Could not create GUI. Switching to batch mode. Do you have GLFW on your system?
[79580]: createHitReaderAlgo(o2sim.root)
[79580]: [WARN] Could not create GUI. Switching to batch mode. Do you have GLFW on your system?
[79582]: createHitReaderAlgo(o2sim.root)
[79582]: [WARN] Could not create GUI. Switching to batch mode. Do you have GLFW on your system?

Ok. You are not actually using the InitCallback; you are doing the work in defineDataProcessing, which is indeed executed for every device (and in the driver).
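For illustration, here is a minimal sketch of where the file opening would go so that it runs only once per device: the work moves out of defineDataProcessing and into the InitCallback of the AlgorithmSpec, which returns the ProcessCallback used for every invocation. Only "o2sim.root" comes from the thread; the spec name and the output are illustrative, and the type/header names follow the DPL headers as I know them.

#include "Framework/AlgorithmSpec.h"
#include "Framework/DataProcessorSpec.h"
#include <TFile.h>
#include <memory>

using namespace o2::framework;

DataProcessorSpec hitReaderSpec()
{
  return DataProcessorSpec{
    "mch-hit-reader",
    Inputs{},
    Outputs{OutputSpec{"MCH", "HITS"}},
    AlgorithmSpec{InitCallback{[](InitContext&) {
      // Runs once, inside the device that owns this DataProcessor:
      // open the file here, not in defineDataProcessing.
      auto file = std::make_shared<TFile>("o2sim.root");
      // The returned ProcessCallback runs for every timeframe and
      // captures the already-open file.
      return [file](ProcessingContext& ctx) {
        // read hits from *file and publish them on the "MCH/HITS" output
      };
    }}}
  };
}

Anything placed directly in defineDataProcessing (like the createHitReaderAlgo call) instead executes in the driver and again in each child device, which is exactly what the log above shows.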

For now have a look at the end of:

TGeo plays the role of your file. I will try to find / prepare a better example tomorrow.

Indeed.
As I still don’t feel “comfortable” with all those lambdas (I find they don’t necessarily make the code super readable or testable), I tried to bypass them as much as possible. Too much, in fact.
I guess it just shows that I’m having a hard time getting my head around closures…
Thanks,

Yes, agreed, it’s a bit clumsy. I had planned to provide a “Task” API but haven’t done it yet. I will try to bump up its priority.

I now have a prototype for it. It was actually much easier than I thought.

In principle we could have adaptors for different kinds of tasks, and the same mechanism could be used to load tasks dynamically, e.g. via a proper plugin manager…
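To make the idea concrete, here is a rough sketch of what such a Task adaptor could look like, assuming a base class with init()/run() hooks and an adaptFromTask-style helper; the names (Task, adaptFromTask, HitReaderTask) are illustrative and may not match the actual prototype.

#include "Framework/DataProcessorSpec.h"
#include "Framework/Task.h"

using namespace o2::framework;

// User code lives in a plain class: no lambdas to read through, and the
// class can be unit-tested without the framework.
class HitReaderTask : public Task
{
 public:
  void init(InitContext& ic) override
  {
    // open the ROOT file once, per device
  }
  void run(ProcessingContext& pc) override
  {
    // read the next batch of hits and publish them
  }
};

DataProcessorSpec hitReaderSpec()
{
  return DataProcessorSpec{
    "mch-hit-reader",
    Inputs{},
    Outputs{},
    adaptFromTask<HitReaderTask>() // wraps init()/run() into an AlgorithmSpec
  };
}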

While dynamically loading tasks at runtime sounds nice, I have two comments:

  • Have a look at Boost.DLL: it gives you all the generic building blocks for building a task plugin manager (IMHO there is no generic plugin manager, because there is no generic definition of a plugin; there are only symbols in a dynamic library, and interpreting/using them generically in a typesafe C++ manner is what Boost.DLL gives you). See the sketch after this list.
  • Introducing plugins (as in dynamically loaded code of any sort at runtime) comes with decoupling more domains than one might think at first (you are opening the framework to binary-distributed code, with the whole tail of problems attached).
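As a rough sketch of the building blocks mentioned in the first bullet, adapted from the Boost.DLL tutorial; exporting an AlgorithmSpec factory and the names createAlgorithm / create_algorithm are just assumptions for the sake of the example:

// --- in the plugin library ----------------------------------------------
#include <boost/dll/alias.hpp>
#include "Framework/AlgorithmSpec.h"

o2::framework::AlgorithmSpec createAlgorithm()
{
  return o2::framework::AlgorithmSpec{/* init/process callbacks */};
}
// Export the factory under a stable, typesafe alias.
BOOST_DLL_ALIAS(createAlgorithm, create_algorithm)

// --- in the loader --------------------------------------------------------
#include <boost/dll/import.hpp>
#include <boost/filesystem/path.hpp>
#include <boost/function.hpp>

using algorithm_factory_t = o2::framework::AlgorithmSpec();

// The returned boost::function keeps the shared library loaded for as long
// as it (or a copy of it) is alive.
boost::function<algorithm_factory_t> loadAlgorithmFactory(
  boost::filesystem::path const& libPath)
{
  return boost::dll::import_alias<algorithm_factory_t>(
    libPath, "create_algorithm",
    boost::dll::load_mode::append_decorations);
}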

From my personal experience with the FairMQ control & config plugins, my conclusion is to stay away from dynamically loading code at runtime as much as possible. Compile-time dynamic linking is already hard enough (with regard to quality assurance).

Well, since I come from an environment where dynamically loaded plugins were the norm and worked quite well, I guess we have different experiences there. In the end, the AlgorithmSpec already defines quite well the interface those plugins would need to support. Notice that the main issue there is, IMHO, the proliferation of small libraries, which I fully agree should be avoided, but nothing would forbid having a single large libPhysics or a large executable and still enabling some parts rather than others in a dynamic manner. Anyway, not for the short term.

My experience with dynamically loaded plugins is positive. In the DQM (AMORE) for Run 1 and 2 we used them extensively (each detector had its own plugin).

For the QC I have blindly continued on the same track: https://github.com/AliceO2Group/QualityControl/blob/master/Framework/include/QualityControl/TaskInterfaceDPL.h

Just my two cents.

Compile-time dynamic linking and plugins, which are loaded at runtime with no a priori knowledge about them at compile time, are fundamentally different things.

I guess your positive experiences come from the fact that the loader and the plugins all came from the same source distribution, where you had perfect control over the compiler(s) used, and that plugins were relinked whenever necessary (i.e. on an ABI change of the framework/loading side).

If this is true, and plugins are always part of the same source distribution as the loaders, then what’s the benefit of not expressing dynamic linking at compile time?

Yes, of course that is what was happening, but it’s pretty much a given. We are not talking about integrating downloaded plugins, but about a mechanism to configure the chain of algorithms on the fly, without recompiling.