Using the T3 Cluster at Nevis

Our installation is very similar to the standard US T3 installation, but not exactly the same. Start by adding the following to your .bashrc file (or the corresponding file if you use a different shell, though I find bash works rather well with ATLAS software):

export ATLAS_LOCAL_ROOT_BASE=/a/data/xenia/share/atlas/ATLASLocalRootBase
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'

Log onto xenia.nevis.columbia.edu and switch to bash if that is not your default shell. You need to do everything on xenia (or xenia2): this is where everything is installed.
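For example, a first session might look like this (username is a placeholder):

ssh username@xenia.nevis.columbia.edu
bash          # only needed if bash is not already your login shell
setupATLAS    # the alias defined above; prints the available setup commands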

If you need a complete environment setup example, look at:

~kiord/.zshrc

To set up the grid environment, source:

source ${ATLAS_LOCAL_ROOT_BASE}/packageSetups/atlasLocalPandaClientSetup.sh currentJedi --noAthenaCheck    
source ${ATLAS_LOCAL_ROOT_BASE}/packageSetups/atlasLocalRucioClientsSetup.sh 
voms-proxy-init --voms atlas 
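To verify that the proxy was created successfully, you can inspect its attributes and remaining lifetime:

voms-proxy-info --all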

To be able to use grid dataset replication, you need to be authorized in the VO: go to

https://voms2.cern.ch:8443/voms/atlas/user/home.action

and make sure your CERN certificate is authorized for the US ATLAS Tier 3: under "Your groups and roles", request membership in /atlas/usatlas.

To replicate datasets to xenia, use the following command (after setting up the grid environment):

rucio add-rule mc14_13TeV:mc14_13TeV.samplename --grouping DATASET 1 "NEVIS_GRIDFTP"

You can check the status of the replication on this website (replace username with your username):

https://rucio-ui.cern.ch/list_rules?account=username

or with this command (replace the rule ID with your own):

rucio rule-info 4f935bec5e3644fabbc7aaee4e1499b3
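If you do not know the rule ID, you can also list all of your account's rules from the command line (again replacing username):

rucio list-rules --account username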

Once the replication has finished, you can find the exact location of the files using the command:

rucio list-file-replicas datasetname
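For example, to see only the copies at Nevis, you can filter the output (datasetname is a placeholder):

rucio list-file-replicas datasetname | grep NEVIS_GRIDFTP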

Examples of how to allocate and run on data can be found in:

~kiord/scripts

There is a batch queue managed by Condor. Storage is available under /data/user on xenia and xenia2 (create your own subdirectory), and there is 150 TB of space in xrootd.
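For example, to create your storage area and check the state of the batch queue (replace yourusername):

mkdir /data/user/yourusername    # your personal storage area
condor_q                         # list jobs currently in the Condor queue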

To see the contents of xrootd:

source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
arc_nevis_ls /data/xrootd

(or look in /xrootdfs)
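The /xrootdfs mount can be browsed like an ordinary directory tree, for example:

ls /xrootdfs/atlas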

The database is updated every hour. You can then use ArCond on top of Condor to run jobs over the files. See ArCondNevis for a tutorial. The general ArCond instructions are in the

ArCond User Guide

Modern releases are available through cvmfs. Do

showVersions --show=athena

to see the list.
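Once you have picked a release, you can set it up with asetup; for instance (the release name and number here are just an illustration):

asetup AnalysisBase,21.2.51,here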

PROOF on Demand (PoD) is also available; see the instructions at

https://twiki.atlas-canada.ca/bin/view/AtlasCanada/ATLASLocalRootBase#Using_PROOF_on_Demand_PoD

Otherwise, most of the instructions at

https://atlaswww.hep.anl.gov/twiki/bin/view/UsAtlasTier3/Tier3gUsersGuide

should work.

That's most of it. Talk to others in the group for scripts etc.

The instructions below can (?) still be used, but data subscriptions are now recommended. There is a webpage to subscribe to datasets.

Note the very useful line

xrdcp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
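For instance, with made-up file and dataset names, copying a local ntuple into an existing dataset area would look like:

xrdcp myntuple.root xroot://xenia.nevis.columbia.edu:1094//data/xrootd/mydataset/myntuple.root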

There are also scripts in /data/users/common to help with adding files. They use /data/users/common/fileCatalog, which is meant to keep track of the files on xrootd.

This is an example command:

xrootadd.sh data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ /data/xrootd/data/skims/test

where data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ is a folder containing root files and /data/xrootd/data/skims/test is an existing folder on xrootd.

The command will create a sub-folder /data/xrootd/data/skims/test/data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ containing the root files, renamed to NTUP.0001.root etc. To avoid the renaming, use /data/users/common/xrootadd.dont_rename.sh

The xrootadd.sh scripts check that the folder exists in /data/users/common/fileCatalog before adding to xrootd. You can add the folder there with mkdir, or with:

/data/users/common/xrootmkdir.sh /data/xrootd/data/skims/test 

Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually get distributed over the batch nodes (please only copy data files, not log files!). Files added this way are not all on xenia itself; to find out what is available and where, use arc_nevis_ls as described above.

-- GustaafBrooijmans - 16 Apr 2010

Using DQ2 to XrootD Directly

On xenia it is now possible to transfer files directly to xrootd (a "data subscription"). The command to use is:

rucio add-rule mc14_13TeV:mc14_13TeV.203989.ProtosPythia_AUET2B_MSTW2008LO_TTS_M900.merge.DAOD_TOPQ1.e3397_s1982_s2008_r5787_r5853_p1852 --grouping DATASET 1 "NEVIS_GRIDFTP"

On xenia the files can be seen in:

% ls /xrootdfs/atlas/dq2/...
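You can also confirm from the command line that the replica exists (same dataset as above):

rucio list-dataset-replicas mc14_13TeV:mc14_13TeV.203989.ProtosPythia_AUET2B_MSTW2008LO_TTS_M900.merge.DAOD_TOPQ1.e3397_s1982_s2008_r5787_r5853_p1852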

Further automation of the process is coming...

Reference for installation (not for users)

To make this work, I made the changes below, which differ from the basic instructions at https://twiki.cern.ch/twiki/bin/view/Atlas/Tier3gGridftpSetup. I had to add myself to

/etc/grid-security/grid-mapfile
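A grid-mapfile entry maps a certificate DN (distinguished name) to a local account; an illustrative, made-up entry looks like:

"/DC=ch/DC=cern/OU=Organic Units/OU=Users/CN=jdoe/CN=765432/CN=Jane Doe" jdoe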

I also made changes in the xrootd config file:

/etc/xrootd/xrootd-clustered.cfg

And in

/etc/xrootd-dsi/gridftp-xrootd.conf

put:

$XROOTD_VMP "xenia.nevis.columbia.edu:1094:/xrootd=/atlas"

Make sure all these config files are readable by everyone (not just root).

To start and stop the server, as root run:

/sbin/service globus-gridftp-server start/stop/restart

(some of this may be superfluous).

-- TimothyAndeen - 21 Dec 2011
