Difference: UsingNevisT3 (1 vs. 18)

Revision 18 - 2015-07-29 - KalliopiIordanidou

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 7 to 7
 
export ATLAS_LOCAL_ROOT_BASE=/a/data/xenia/share/atlas/ATLASLocalRootBase
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
Changed:
<
<
Log onto xenia and go to bash if that is not your default shell. You need to do everything on xenia: this is where everything is installed, and although most of it is accessible by NFS from the rest of the cluster, most Nevis machines still run an incompatible 32-bit OS.
>
>
Log onto xenia.nevis.columbia.edu and go to bash if that is not your default shell. You need to do everything on xenia: this is where everything is installed, and although most of it is accessible by NFS from the rest of the cluster, most Nevis machines still run an incompatible 32-bit OS.

If you need a complete environment setup example, see:

~kiord/.zshrc
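If you prefer a self-contained starting point, a minimal sketch (using only the lines already shown on this page, to be adapted to your own shell and needs) would be:

# minimal ~/.bashrc sketch; adapt to your own shell and add whatever else you need
export ATLAS_LOCAL_ROOT_BASE=/a/data/xenia/share/atlas/ATLASLocalRootBase
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'

# then, in an interactive session on xenia:
setupATLAS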

To set up the grid environment, source:

source ${ATLAS_LOCAL_ROOT_BASE}/packageSetups/atlasLocalPandaClientSetup.sh currentJedi --noAthenaCheck    
source ${ATLAS_LOCAL_ROOT_BASE}/packageSetups/atlasLocalRucioClientsSetup.sh 
voms-proxy-init --voms atlas 
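As a quick sanity check (standard VOMS client commands, nothing Nevis-specific), you can verify that the proxy was created and see how long it remains valid:

# show the proxy details, including the atlas VO attributes
voms-proxy-info -all
# print just the remaining lifetime in seconds
voms-proxy-info -timeleft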

To use grid dataset replication, ask to be included in the grid map. To replicate datasets to xenia, use the following command (after setting up the grid environment):

rucio add-rule mc14_13TeV:mc14_13TeV.samplename --grouping DATASET 1 "NEVIS_GRIDFTP"
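If you have several datasets to replicate, a simple loop over a text file of dataset names works; this is only a sketch (datasets.txt and the mc14_13TeV scope are placeholders):

# datasets.txt: one dataset name per line (placeholder file name)
while read ds; do
  rucio add-rule "mc14_13TeV:${ds}" --grouping DATASET 1 "NEVIS_GRIDFTP"
done < datasets.txt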

You can check the status of the replication on this website (replace username with your own username):

https://rucio-ui.cern.ch/list_rules?account=username

or with the command below (replace the rule ID with your own):

rucio rule-info 4f935bec5e3644fabbc7aaee4e1499b3

Once the replication has finished, you can find the exact location of the files with the command:

rucio list-file-replicas datasetname

Examples of how to allocate and run on data can be found in:

~kiord/scripts
There is a batch queue managed by condor. Storage is available under /data/user (create your own subdirectory) on xenia and xenia2, and there is 150 TB of space in xrootd.
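Condor usage itself is not documented here; purely as an illustration, a minimal job submission could look like the sketch below (myjob.sh and myjob.sub are placeholders, and the options shown are generic condor settings, not Nevis-specific ones):

# write a minimal condor submit file and submit it (illustrative only)
cat > myjob.sub <<'EOF'
universe   = vanilla
executable = myjob.sh
output     = myjob.$(Cluster).$(Process).out
error      = myjob.$(Cluster).$(Process).err
log        = myjob.log
getenv     = True
queue
EOF
condor_submit myjob.sub
condor_q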

Revision 17 - 2015-07-08 - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 18 to 20
  The database is updated every hour. You can then use Arcond on top of condor to run jobs over the files. See ArCondNevis for a tutorial. The general ArCond instructions are at
Changed:
<
<
Arcond User Guide
>
>
Arcond User Guide
  Modern releases are available through cvmfs. Do
Added:
>
>
 
showVersions --show=athena

to see the list.
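As an illustration (the release number is only an example; pick one from the showVersions output), a release can then be set up in the usual ATLASLocalRootBase way:

setupATLAS
# 20.1.5 below is only an example release; use one listed by showVersions --show=athena
asetup 20.1.5,here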

Line: 27 to 30
Proof-on-demand is also available; see the instructions at
Changed:
<
<
https://twiki.atlas-canada.ca/bin/view/AtlasCanada/ATLASLocalRootBase#Using_PROOF_on_Demand_PoD
>
>
https://twiki.atlas-canada.ca/bin/view/AtlasCanada/ATLASLocalRootBase#Using_PROOF_on_Demand_PoD
  Otherwise, most of the instructions at
Changed:
<
<
https://atlaswww.hep.anl.gov/twiki/bin/view/UsAtlasTier3/Tier3gUsersGuide
>
>
https://atlaswww.hep.anl.gov/twiki/bin/view/UsAtlasTier3/Tier3gUsersGuide
  should work.
Line: 64 to 68
 

Using DQ2 to XrootD Directly

Changed:
<
<
On xenia it is now possible to transfer files directly to the xrootd using dq2-get. The command to use is:

dq2-get -Y -L  -q FTS -o https://fts.usatlas.bnl.gov:8442/glite-data-transfer-fts/services/FileTransfer -S gsiftp://xenia.nevis.columbia.edu/xrootd

For example I copied the dataset:

user.chapleau.valid1.105592.Pythia_Zprime_tt1000.recon.AOD.e574_s933_s946_r2759.NTUP_TOPBOOST.v1.111015001105  

[tandeen@xenia]~% dq2-get -d -Y -L UKI-SCOTGRID-GLASGOW_LOCALGROUPDISK -q FTS -o https://fts.usatlas.bnl.gov:8443/glite-data-transfer-fts/services/FileTransfer -S gsiftp://xenia.nevis.columbia.edu/xrootd user.chapleau.valid1.105592.Pythia_Zprime_tt1000.recon.AOD.e574_s933_s946_r2759.NTUP_TOPBOOST.v1.111015001105

and on xenia the files can now be seen here:

>
>
On xenia it is now possible to transfer files directly to the xrootd ("data subscription"). The command to use is:
rucio add-rule mc14_13TeV:mc14_13TeV.203989.ProtosPythia_AUET2B_MSTW2008LO_TTS_M900.merge.DAOD_TOPQ1.e3397_s1982_s2008_r5787_r5853_p1852 --grouping DATASET 1 "NEVIS_GRIDFTP"
 
Changed:
<
<
[tandeen@xenia]~% ls /xrootdfs/xrootd/dq2/user/chapleau/valid1/user.chapleau.valid1.105592.Pythia_Zprime_tt1000.recon.AOD.e574_s933_s946_r2759.NTUP_TOPBOOST.v1.111015001105
>
>
On xenia the files can be seen in:
% ls /xrootdfs/atlas/dq2/...
  Further automation of the process is coming...
Reference for installation (not for users)
Line: 96 to 97
 Make sure all these config files are readable by everyone (not just root).

To start and stop the server as root do:

Changed:
<
<
/sbin/service globus-gridftp-server start/stop/restart
>
>
/sbin/service globus-gridftp-server start/stop/restart
  (some of this may be superfluous).

Revision 16 - 2015-03-17 - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 6 to 6
 
export ATLAS_LOCAL_ROOT_BASE=/a/data/xenia/share/atlas/ATLASLocalRootBase
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
Changed:
<
<
Log onto xenia (from kolya or karthur) and go to bash if that is not your default shell. You need to do everything on xenia: this is where everything is installed, and although most of it is accessible by NFS from the rest of the cluster, most Nevis machines still run an incompatible 32-bit OS.
>
>
Log onto xenia and go to bash if that is not your default shell. You need to do everything on xenia: this is where everything is installed, and although most of it is accessible by NFS from the rest of the cluster, most Nevis machines still run an incompatible 32-bit OS.
 
Changed:
<
<
There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. As of Sept 4, 2010 there is 8.8 TB of storage on xenia:/data (with 6.2 TB used) and 11 TB in xrootd. Instructions are on the T3 users site (link below), although it is missing the very useful line
>
>
There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory) on xenia and xenia2, and there is 150 TB of space in xrootd.
 
Changed:
<
<
xrdcp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>

There are also scripts on /data/users/common to help adding files. They use /data/users/common/fileCatalog which is meant to keep track of the files on xrootd.

This is an example command:

xrootadd.sh data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ /data/xrootd/data/skims/test

where data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ is a folder containing root files and /data/xrootd/data/skims/test is an existing folder on xrootd

The command will create a sub-folder /data/xrootd/data/skims/test/data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ containing root files renamed to NTUP.0001.root etc. To avoid renaming use /data/users/common/xrootadd.dont_rename.sh

The xrootadd.sh scripts check that the folder exists in the /data/users/common/fileCatalog before adding to xrootd. You can add the folder there with mkdir or:

/data/users/common/xrootmkdir.sh /data/xrootd/data/skims/test 

Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:

>
>
To see the contents of xrootd:
 
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
arc_nevis_ls /data/xrootd
Added:
>
>
(or look in /xrootdfs)
 The database is updated every hour. You can then use Arcond on top of condor to run jobs over the files. See ArCondNevis for a tutorial. The general ArCond instructions are at

Arcond User Guide

Deleted:
<
<
To directly move data into xrootd from dq2-get, see below. Note that it's probably good to specify a US site when possible: dq2-get SITE DATASET.

Old Athena versions are installed in

/a/data/xenia/share/atlas/ATLASLocalRootBase/Athena/i686_slc5_gcc43_opt/
 Modern releases are available through cvmfs. Do
showVersions --show=athena
Line: 57 to 35
  should work.
Changed:
<
<
That's most of it. Of course, only basic tests have been made.
>
>
That's most of it. Talk to others in the group for scripts etc.

The instructions below can (?) still be used, but data subscriptions are now recommended. There is a webpage to subscribe to datasets.

Note the very useful line

xrdcp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>

There are also scripts on /data/users/common to help adding files. They use /data/users/common/fileCatalog which is meant to keep track of the files on xrootd.

This is an example command:

xrootadd.sh data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ /data/xrootd/data/skims/test

where data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ is a folder containing root files and /data/xrootd/data/skims/test is an existing folder on xrootd

The command will create a sub-folder /data/xrootd/data/skims/test/data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ containing root files renamed to NTUP.0001.root etc. To avoid renaming use /data/users/common/xrootadd.dont_rename.sh

The xrootadd.sh scripts check that the folder exists in the /data/users/common/fileCatalog before adding to xrootd. You can add the folder there with mkdir or:

/data/users/common/xrootmkdir.sh /data/xrootd/data/skims/test 

Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:

  -- GustaafBrooijmans - 16 Apr 2010

Revision 15 - 2013-12-07 - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 88 to 88
 In the xrootd config file:
/etc/xrootd/xrootd-clustered.cfg
Deleted:
<
<
I added the line
all.manager meta glrd.usatlas.org:1095
 And in
Changed:
<
<
/opt/osg-v1.2/vdt/services/vdt-run-gsiftp.sh.env
>
>
/etc/xrootd-dsi/gridftp-xrootd.conf
 
Changed:
<
<
I changed the last line to:
export XROOTD_VMP="xenia.nevis.columbia.edu:1094:/xrootd=/xrootd"
>
>
put:
$XROOTD_VMP "xenia.nevis.columbia.edu:1094:/xrootd=/atlas"
  Make sure all these config files are readable by everyone (not just root).

To start and stop the server as root do:

Changed:
<
<
source /opt/osg-v1.2/setup.sh 
vdt-control --disable gsiftp  
vdt-control --off gsiftp  
vdt-control --enable gsiftp  
vdt-control --on gsiftp
>
>
/sbin/service globus-gridftp-server start/stop/restart
  (some of this may be superfluous).

Revision 14 - 2012-04-07 - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 36 to 36
  Arcond User Guide
Changed:
<
<
Unfortunately, there is no way to directly move data into xrootd from dq2-get yet. dq2-get does work, but it's probably good to specify a US site when possible: dq2-get SITE DATASET.
>
>
To directly move data into xrootd from dq2-get, see below. Note that it's probably good to specify a US site when possible: dq2-get SITE DATASET.
 
Changed:
<
<
xenia currently has 7 TB of generally available storage under /data (shared between users and xrootd), and each batch node has 1.7 TB. We will evolve this as we develop understanding of the needs.

Athena is installed in

>
>
Old Athena versions are installed in
 
/a/data/xenia/share/atlas/ATLASLocalRootBase/Athena/i686_slc5_gcc43_opt/

Revision 13 - 2011-12-21 - TimothyAndeen

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 62 to 62
 That's most of it. Of course, only basic tests have been made.

-- GustaafBrooijmans - 16 Apr 2010

Added:
>
>
DQ2 to XROOTD

Using DQ2 to XrootD Directly

On xenia it is now possible to transfer files directly to the xrootd using dq2-get. The command to use is:

dq2-get -Y -L  -q FTS -o https://fts.usatlas.bnl.gov:8442/glite-data-transfer-fts/services/FileTransfer -S gsiftp://xenia.nevis.columbia.edu/xrootd

For example I copied the dataset:

user.chapleau.valid1.105592.Pythia_Zprime_tt1000.recon.AOD.e574_s933_s946_r2759.NTUP_TOPBOOST.v1.111015001105  

[tandeen@xenia]~% dq2-get -d -Y -L UKI-SCOTGRID-GLASGOW_LOCALGROUPDISK -q FTS -o https://fts.usatlas.bnl.gov:8443/glite-data-transfer-fts/services/FileTransfer -S gsiftp://xenia.nevis.columbia.edu/xrootd user.chapleau.valid1.105592.Pythia_Zprime_tt1000.recon.AOD.e574_s933_s946_r2759.NTUP_TOPBOOST.v1.111015001105

and on xenia the files can now be seen here:

[tandeen@xenia]~% ls /xrootdfs/xrootd/dq2/user/chapleau/valid1/user.chapleau.valid1.105592.Pythia_Zprime_tt1000.recon.AOD.e574_s933_s946_r2759.NTUP_TOPBOOST.v1.111015001105

Further automation of the process is coming...

Reference for installation (not for users)

To make this work I made the changes below, which differ from the basic instructions at https://twiki.cern.ch/twiki/bin/view/Atlas/Tier3gGridftpSetup. I had to add myself to

/etc/grid-security/grid-mapfile

In the xrootd config file:

/etc/xrootd/xrootd-clustered.cfg

I added the line

all.manager meta glrd.usatlas.org:1095

And in

/opt/osg-v1.2/vdt/services/vdt-run-gsiftp.sh.env

I changed the last line to:

export XROOTD_VMP="xenia.nevis.columbia.edu:1094:/xrootd=/xrootd"

Make sure all these config files are readable by everyone (not just root).

To start and stop the server as root do:

source /opt/osg-v1.2/setup.sh 
vdt-control --disable gsiftp  
vdt-control --off gsiftp  
vdt-control --enable gsiftp  
vdt-control --on gsiftp

(some of this may be superfluous).

-- TimothyAndeen - 21 Dec 2011

Revision 12 - 2011-10-14 - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 32 to 32
 
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
arc_nevis_ls /data/xrootd
Changed:
<
<
The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. See ArCondNevis for a tutorial. The general ArCond instructions are at
>
>
The database is updated every hour. You can then use Arcond on top of condor to run jobs over the files. See ArCondNevis for a tutorial. The general ArCond instructions are at
  Arcond User Guide
Line: 41 to 41
 xenia currently has 7 TB of generally available storage under /data (shared between users and xrootd), and each batch node has 1.7 TB. We will evolve this as we develop understanding of the needs.

Athena is installed in

Changed:
<
<
/a/data/xenia/share/atlas/ATLASLocalRootBase/Athena/i686_slc5_gcc43_opt/
>
>
/a/data/xenia/share/atlas/ATLASLocalRootBase/Athena/i686_slc5_gcc43_opt/

Modern releases are available through cvmfs. Do

showVersions --show=athena

to see the list.

Proof-on-demand is also available, see instructions at

 
Changed:
<
<
Releases can be added easily.
>
>
https://twiki.atlas-canada.ca/bin/view/AtlasCanada/ATLASLocalRootBase#Using_PROOF_on_Demand_PoD
  Otherwise, most of the instructions at

https://atlaswww.hep.anl.gov/twiki/bin/view/UsAtlasTier3/Tier3gUsersGuide

Changed:
<
<
should work. We do not have a squid server yet.
>
>
should work.
  That's most of it. Of course, only basic tests have been made.

Revision 11 - 2011-04-12 - AlexPenson

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 11 to 11
 There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. As of Sept 4, 2010 there is 8.8 TB of storage on xenia:/data (with 6.2 TB used) and 11 TB in xrootd. Instructions are on the T3 users site (link below), although it is missing the very useful line

Changed:
<
<
cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
>
>
xrdcp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
 
Changed:
<
<
(it is possible to do
mkdir xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>
>
>
There are also scripts on /data/users/common to help adding files. They use /data/users/common/fileCatalog which is meant to keep track of the files on xrootd.
 
Changed:
<
<
and this is generally a good idea I think.)
>
>
This is an example command:
xrootadd.sh data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ /data/xrootd/data/skims/test

where data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ is a folder containing root files and /data/xrootd/data/skims/test is an existing folder on xrootd

The command will create a sub-folder /data/xrootd/data/skims/test/data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ containing root files renamed to NTUP.0001.root etc. To avoid renaming use /data/users/common/xrootadd.dont_rename.sh

The xrootadd.sh scripts check that the folder exists in the /data/users/common/fileCatalog before adding to xrootd. You can add the folder there with mkdir or:

/data/users/common/xrootmkdir.sh /data/xrootd/data/skims/test 
  Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh

Revision 10 - 2010-12-08 - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Our installation is very similar to the standard US T3 installation but not exactly the same. Start by adding the following to your .bashrc file (or corresponding file if you use a different shell, but I find bash to work rather well with ATLAS software...):

Changed:
<
<
export ATLAS_LOCAL_ROOT_BASE=/a/data/xenia/share/atlas/ATLASLocalRootBase/
>
>
export ATLAS_LOCAL_ROOT_BASE=/a/data/xenia/share/atlas/ATLASLocalRootBase
 alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'

Log onto xenia (from kolya or karthur) and go to bash if that is not your default shell. You need to do everything on xenia: this is where everything is installed, and although most of it is accessible by NFS from the rest of the cluster, most Nevis machines still run an incompatible 32-bit OS.

Revision 9 - 2010-09-16 - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 44 to 44
  That's most of it. Of course, only basic tests have been made.
Deleted:
<
<
Note: if you want to use the local python version, you will need to
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
 -- GustaafBrooijmans - 16 Apr 2010 \ No newline at end of file

Revision 8 - 2010-09-04 - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 8 to 8
  Log onto xenia (from kolya or karthur) and go to bash if that is not your default shell. You need to do everything on xenia: this is where everything is installed, and although most of it is accessible by NFS from the rest of the cluster, most Nevis machines still run an incompatible 32-bit OS.
Changed:
<
<
There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. Instructions are on the T3 users site (link below), although it is missing the very useful line
>
>
There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. As of Sept 4, 2010 there is 8.8 TB of storage on xenia:/data (with 6.2 TB used) and 11 TB in xrootd. Instructions are on the T3 users site (link below), although it is missing the very useful line
 
cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>

Revision 7 - 2010-08-18 - DustinUrbaniec

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 19 to 19
  and this is generally a good idea I think.)
Changed:
<
<
Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:
>
>
Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:
 
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
arc_nevis_ls /data/xrootd

Revision 6 - 2010-08-18 - DustinUrbaniec

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 23 to 23
 
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
arc_nevis_ls /data/xrootd
Changed:
<
<
The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at
>
>
The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. See ArCondNevis for a tutorial. The general ArCond instructions are at
  Arcond User Guide

Revision 5 - 2010-08-09 - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 10 to 10
  There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. Instructions are on the T3 users site (link below), although it is missing the very useful line
Changed:
<
<
cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
>
>
cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
  (it is possible to do
mkdir xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>
Line: 19 to 21
  Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
Changed:
<
<
arc_nevis_ls /data/xrootdThe database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at
>
>
arc_nevis_ls /data/xrootd

The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at

  Arcond User Guide
Line: 40 to 44
  That's most of it. Of course, only basic tests have been made.
Added:
>
>
Note: if you want to use the local python version, you will need to
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
 -- GustaafBrooijmans - 16 Apr 2010 \ No newline at end of file

Revision 4 - 2010-04-23 - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 18 to 18
 and this is generally a good idea I think.)

Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:

Changed:
<
<
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh; arc_nevis_ls 
The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at
>
>
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
arc_nevis_ls /data/xrootd
The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at
  Arcond User Guide

Revision 3 - 2010-04-22 - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 10 to 10
  There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. Instructions are on the T3 users site (link below), although it is missing the very useful line
Changed:
<
<
cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<myfile>
>
>
cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
 
Changed:
<
<
which also requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way. To find out what's available and where:
>
>
(it is possible to do
mkdir xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>

and this is generally a good idea I think.)

Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:

 
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh; arc_nevis_ls 
The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at

Arcond User Guide

Revision 2 - 2010-04-22 - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Line: 10 to 10
  There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. Instructions are on the T3 users site (link below), although it is missing the very useful line
Changed:
<
<
cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<myfile>
>
>
cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<myfile>
 
Changed:
<
<
which also requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. Unfortunately, there is no way to directly move data into xrootdfrom dq2-get yet. dq2-get does work, but it's probably good to specify a US site when possible: dq2-get SITE DATASET.
>
>
which also requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way. To find out what's available and where:
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh; arc_nevis_ls 
The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at

Arcond User Guide

Unfortunately, there is no way to directly move data into xrootd from dq2-get yet. dq2-get does work, but it's probably good to specify a US site when possible: dq2-get SITE DATASET.

  xenia currently has 7 TB of generally available storage under /data (shared between users and xrootd), and each batch node has 1.7 TB. We will evolve this as we develop understanding of the needs.

Revision 1 - 2010-04-16 - GustaafBrooijmans

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="WebHome"

Using the T3 Cluster at Nevis

Our installation is very similar to the standard US T3 installation but not exactly the same. Start by adding the following to your .bashrc file (or corresponding file if you use a different shell, but I find bash to work rather well with ATLAS software...):

export ATLAS_LOCAL_ROOT_BASE=/a/data/xenia/share/atlas/ATLASLocalRootBase/
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'

Log onto xenia (from kolya or karthur) and go to bash if that is not your default shell. You need to do everything on xenia: this is where everything is installed, and although most of it is accessible by NFS from the rest of the cluster, most Nevis machines still run an incompatible 32-bit OS.

There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. Instructions are on the T3 users site (link below), although it is missing the very useful line

cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<myfile>

which also requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. Unfortunately, there is no way to directly move data into xrootdfrom dq2-get yet. dq2-get does work, but it's probably good to specify a US site when possible: dq2-get SITE DATASET.

xenia currently has 7 TB of generally available storage under /data (shared between users and xrootd), and each batch node has 1.7 TB. We will evolve this as we develop understanding of the needs.

Athena is installed in

/a/data/xenia/share/atlas/ATLASLocalRootBase/Athena/i686_slc5_gcc43_opt/

Releases can be added easily.

Otherwise, most of the instructions at

https://atlaswww.hep.anl.gov/twiki/bin/view/UsAtlasTier3/Tier3gUsersGuide

should work. We do not have a squid server yet.

That's most of it. Of course, only basic tests have been made.

-- GustaafBrooijmans - 16 Apr 2010

 