Offload data from alien to cernbox/EOS

Hi there,

Is there a magic command to copy data directly from AliEn (to alleviate a quota issue, not to mention it :wink: ) to some CERNBox/EOS location?

Thanks,

I don’t know if this is possible, but I could imagine:
a) finding out the location of a file on AliEn with whereis

whereis test_alien.sh
the file test_alien.sh is in

	 SE => ALICE::FZK::SE           pfn => root://alice-disk-se.gridka.de:1094//01/34617/349b3850-2829-11eb-863c-024230a3e856
	 SE => ALICE::RAL::CEPH         pfn => root://alice.echo.stfc.ac.uk:1094/alice:/01/34617/349b3850-2829-11eb-863c-024230a3e856

and
b) then, using xrdcp or another XRootD tool, you should be able to specify source and target on different concrete distributed storages (assuming you have authenticated first).
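As a sketch of a) and b) together (the replica line is copied from the whereis output above; the EOS destination path is an illustrative assumption, not a real one), the pfn can be pulled out of a whereis line with plain shell and then handed to xrdcp:

```shell
#!/bin/sh
# Take one replica line from the `whereis` output and extract its pfn
# (the text after "pfn => ") with shell parameter expansion.
line='SE => ALICE::FZK::SE           pfn => root://alice-disk-se.gridka.de:1094//01/34617/349b3850-2829-11eb-863c-024230a3e856'
pfn=${line##*'pfn => '}
echo "$pfn"
# With a valid grid token, the copy would then look something like
# (the destination is a placeholder):
#   xrdcp "$pfn" root://eosuser.cern.ch//eos/user/<u>/<user>/test_alien.sh
```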

Surely, the GRID has something to say here (@grigoras).

so, with the premise that i understood correctly that the problem is to transfer
from AliEn to CERNBox EOS, then there are a few points to make:

  1. see the tutorial KB0001998 - EOS quick tutorial for beginners - CERN Service Portal: easy access to services at CERN,
    and more specifically the part of it that shows how xrdcp is used
  2. use alien.py lfn2uri meta LFN to create a metafile for the desired lfn
  3. use xrdcp GUID.meta4 root://EOS_CERNBOX_PATH
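A minimal dry-run sketch of steps 2 and 3 (the LFN, the user path on EOS and the GUID.meta4 name are illustrative placeholders; echo is used so nothing is actually executed):

```shell
#!/bin/sh
# Dry-run: print the commands from steps 2 and 3 instead of running them.
LFN=/alice/cern.ch/user/x/xuser/test_alien.sh
echo alien.py lfn2uri meta "$LFN"     # step 2: writes a <GUID>.meta4 metafile
# step 3: copy via the metafile (GUID.meta4 stands in for the real name)
echo xrdcp GUID.meta4 "root://eosuser.cern.ch//eos/user/x/xuser/${LFN##*/}"
```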

another way is to use WebDAV to mount your lxplus home, and then you can just use alien.py cp LFN file:lxplus_mount_location/my_dir/

Let me know how it goes…

Hi Adrian,

Thanks.

The thing is that I want to transfer full folders, not files “one by one”. Is that possible?
Or should I “script it” to loop over files and get the meta of each ?

well, then try option 2: just mount your lxplus home through davfs (see KB0003190 - CERNBox: online access to files via WebDAV (File Explorer, Finder, Cyberduck, ...) - CERN Service Portal: easy access to services at CERN)
then just alien.py cp grid_source_dir/ file:davfs_mounted_lxplus_dir/

L.E. Performance-wise, the actual transfer should be faster with direct xrdcp (so you do a loop to get all the metafiles (note that the actual LFN is recorded within each metafile), then copy them in a loop),
but this should be somewhat offset if you use something like -T 64 or even more when copying to the davfs mount point.
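The metafile loop could be sketched like this (dry-run with echo; the LFNs, the SOME_GUID.meta4 name and the EOS destination are illustrative assumptions, not real paths):

```shell
#!/bin/sh
# Dry-run loop: one lfn2uri call and one xrdcp call per LFN.
dest=root://eosuser.cern.ch//eos/user/x/xuser/mydir
for lfn in /alice/cern.ch/user/x/xuser/mydir/f1.root \
           /alice/cern.ch/user/x/xuser/mydir/f2.root; do
    echo alien.py lfn2uri meta "$lfn"             # writes <GUID>.meta4
    # the metafile records the LFN, so the original file name can be kept:
    echo xrdcp SOME_GUID.meta4 "$dest/${lfn##*/}"
done
```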

L.E.2 or you do a copy script (alien.py cp) that you run directly on lxplus :slight_smile:

ok, will try to play with that a bit to find a way that suits my needs :wink:
Does any of this change if the target is not a home directory on CERNBox but a project dir? e.g. /eos/project/a/alice-mch/data

no, nothing is different as long as the destination is a POSIX mount (local fs, NFS, CIFS, sftp-fs, sshfs, WebDAV, eos-fuse, etc.); the underlying VFS can be anything and can be used like any other mount point. So, if you mounted EOS through FUSE you can use it directly as destination (the underlying driver does the translation from POSIX file-oriented verbs to the actual fs verbs).

For the record, in the end I simply used a script to run a bunch of alien_cp commands from lxplus.

e.g.

alien_cp alien:///alice/cern.ch/user/l/laphecet/run3/2022/LHC22h/align1/520506/018/root_archive.zip file:///eos/project-a/alice-mch/data/ZERO-FIELD/FILL7963/align1/520506/018/root_archive.zip
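A sketch of that kind of loop script (dry-run with echo; the base paths follow the alien_cp line above, while the second run/chunk pair is an invented example):

```shell
#!/bin/sh
# Print one alien_cp command per run/chunk pair (dry-run).
src_base=alien:///alice/cern.ch/user/l/laphecet/run3/2022/LHC22h/align1
dst_base=file:///eos/project-a/alice-mch/data/ZERO-FIELD/FILL7963/align1
for chunk in 520506/018 520506/019; do
    echo alien_cp "$src_base/$chunk/root_archive.zip" \
                  "$dst_base/$chunk/root_archive.zip"
done
```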