Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 7 to 6
export ATLAS_LOCAL_ROOT_BASE=/a/data/xenia/share/atlas/ATLASLocalRootBase
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
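Once those two lines are in your .bashrc, a fresh login shell can bring the ATLAS environment up in one step. A minimal sketch, assuming bash and the alias defined above (nothing beyond what this page itself sets up):
source ~/.bashrc   # pick up the new alias in the current shell
setupATLAS         # runs ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh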
Changed:
< < | Log onto xenia.nevis.columbia.edu and go to bash if that is not your default shell. You need to do everything on xenia: this is where everything is installed, and although most of it is accessible by NFS from the rest of the cluster, most Nevis machines still run an incompatible 32-bit OS.
> > | Log onto xenia.nevis.columbia.edu and go to bash if that is not your default shell. You need to do everything on xenia (or xenia2): this is where everything is installed.
For a complete environment setup example, see:
Deleted:
< < | ~kiord/.zshrc
Changed:
< < | To setup the grid enviroment source:
> > | To set up the grid environment, source:
source ${ATLAS_LOCAL_ROOT_BASE}/packageSetups/atlasLocalPandaClientSetup.sh currentJedi --noAthenaCheck
source ${ATLAS_LOCAL_ROOT_BASE}/packageSetups/atlasLocalRucioClientsSetup.sh
voms-proxy-init --voms atlas
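Before submitting anything, it is worth checking that the proxy was really created with the ATLAS VO attached. A small hedged check, using the standard voms-proxy-info tool that comes with the grid client setup above:
voms-proxy-info --all   # should show the remaining proxy lifetime and the /atlas VO attributes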
Changed:
< < | To be able to use the grid dataset replication ask to be included to the grid map. To replicate datasets to xenia use the command (after setting up the grid enviroment):
> > | To be able to use grid dataset replication you need to be authorized in the VO: go to https://voms2.cern.ch:8443/voms/atlas/user/home.action and make sure your CERN certificate is allowed on the US ATLAS Tier 3. This is under "Your groups and roles"; request membership in /atlas/usatlas.
Added:
> > | To replicate datasets to xenia, use the following command (after setting up the grid environment):
rucio add-rule mc14_13TeV:mc14_13TeV.samplename --grouping DATASET 1 "NEVIS_GRIDFTP"
You can check the status of replication from this website (replace the username by your username):
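If the monitoring page is not at hand, the state of a replication rule can also be queried with the rucio client itself. A hedged sketch, assuming the standard rucio command-line client set up above and substituting your own grid account name (a placeholder here):
rucio list-rules --account <your_grid_account>   # lists your rules and their state (OK / REPLICATING / STUCK)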
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 18 to 20
The database is updated every hour. You can then use Arcond on top of condor to run jobs over the files. See ArCondNevis for a tutorial. The general ArCond instructions are at
Changed:
< < | Arcond User Guide
> > | Arcond User Guide
Modern releases are available through cvmfs. Do
Added:
> > | showVersions --show=athena
to see the list.
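Once a release from that list has been chosen, it can be set up in the usual ATLASLocalRootBase way. A hedged sketch (the release name is only a placeholder, not a recommendation; this assumes the asetup wrapper provided with the cvmfs releases):
setupATLAS
asetup <release>   # pick one of the releases reported by showVersions --show=athena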
Line: 27 to 30
Proof-on-demand is also available, see instructions at
Changed:
< < | https://twiki.atlas-canada.ca/bin/view/AtlasCanada/ATLASLocalRootBase#Using_PROOF_on_Demand_PoD
> > | https://twiki.atlas-canada.ca/bin/view/AtlasCanada/ATLASLocalRootBase#Using_PROOF_on_Demand_PoD
Otherwise, most of the instructions at
Changed:
< < | https://atlaswww.hep.anl.gov/twiki/bin/view/UsAtlasTier3/Tier3gUsersGuide
> > | https://atlaswww.hep.anl.gov/twiki/bin/view/UsAtlasTier3/Tier3gUsersGuide
should work.
Line: 64 to 68
Using DQ2 to XrootD Directly
Changed:
< < | On xenia it is now possible to transfer files directly to the xrootd using dq2-get. The command to use is:
dq2-get -Y -L -q FTS -o https://fts.usatlas.bnl.gov:8442/glite-data-transfer-fts/services/FileTransfer -S gsiftp://xenia.nevis.columbia.edu/xrootd
For example I copied the dataset: user.chapleau.valid1.105592.Pythia_Zprime_tt1000.recon.AOD.e574_s933_s946_r2759.NTUP_TOPBOOST.v1.111015001105
[tandeen@xenia]~% dq2-get -d -Y -L UKI-SCOTGRID-GLASGOW_LOCALGROUPDISK -q FTS -o https://fts.usatlas.bnl.gov:8443/glite-data-transfer-fts/services/FileTransfer -S gsiftp://xenia.nevis.columbia.edu/xrootd user.chapleau.valid1.105592.Pythia_Zprime_tt1000.recon.AOD.e574_s933_s946_r2759.NTUP_TOPBOOST.v1.111015001105
and on xenia the files can now be seen here:
> > | On xenia it is now possible to transfer files directly to the xrootd ("data subscription"). The command to use is:
rucio add-rule mc14_13TeV:mc14_13TeV.203989.ProtosPythia_AUET2B_MSTW2008LO_TTS_M900.merge.DAOD_TOPQ1.e3397_s1982_s2008_r5787_r5853_p1852 --grouping DATASET 1 "NEVIS_GRIDFTP"
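Before placing a rule it can help to confirm the dataset name and see what it contains. A hedged sketch using the standard rucio client, with the same dataset as in the example above:
rucio list-files mc14_13TeV:mc14_13TeV.203989.ProtosPythia_AUET2B_MSTW2008LO_TTS_M900.merge.DAOD_TOPQ1.e3397_s1982_s2008_r5787_r5853_p1852   # lists the files in the dataset and a size summary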
Changed:
< < | [tandeen@xenia]~% ls /xrootdfs/xrootd/dq2/user/chapleau/valid1/user.chapleau.valid1.105592.Pythia_Zprime_tt1000.recon.AOD.e574_s933_s946_r2759.NTUP_TOPBOOST.v1.111015001105
> > | On xenia the files can be seen in:
% ls /xrootdfs/atlas/dq2/...
Further automation of the process is coming...
Reference for installation (not for users)
Line: 96 to 97
Make sure all these config files are readable by everyone (not just root). To start and stop the server as root do:
Changed:
< < | /sbin/service globus-gridftp-server start/stop/restart
> > | /sbin/service globus-gridftp-server start/stop/restart
(some of this may be superfluous).
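To confirm that the server actually came back after a restart, the same init script can usually be queried too. A hedged sketch (assuming the stock SysV init script also accepts the usual status action; not verified here):
/sbin/service globus-gridftp-server status   # should report the gridftp daemon as running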
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 6 to 6
export ATLAS_LOCAL_ROOT_BASE=/a/data/xenia/share/atlas/ATLASLocalRootBase
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
Changed:
< < | Log onto xenia (from kolya or karthur) and go to bash if that is not your default shell. You need to do everything on xenia: this is where everything is installed, and although most of it is accessible by NFS from the rest of the cluster, most Nevis machines still run an incompatible 32-bit OS.
> > | Log onto xenia and go to bash if that is not your default shell. You need to do everything on xenia: this is where everything is installed, and although most of it is accessible by NFS from the rest of the cluster, most Nevis machines still run an incompatible 32-bit OS.
Changed:
< < | There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. As of Sept 4, 2010 there is 8.8 TB of storage on xenia:/data (with 6.2 TB used) and 11 TB in xrootd. Instructions are on the T3 users site (link below), although it is missing the very useful line
> > | There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory) on xenia and xenia2, and there is 150 TB of space in xrootd.
Changed:
< < | xrdcp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
There are also scripts on /data/users/common to help adding files. They use /data/users/common/fileCatalog which is meant to keep track of the files on xrootd.
This is an example command:
xrootadd.sh data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ /data/xrootd/data/skims/test
where data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ is a folder containing root files and /data/xrootd/data/skims/test is an existing folder on xrootd.
The command will create a sub-folder /data/xrootd/data/skims/test/data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ containing root files renamed to NTUP.0001.root etc. To avoid renaming use /data/users/common/xrootadd.dont_rename.sh
The xrootadd.sh scripts check that the folder exists in the /data/users/common/fileCatalog before adding to xrootd. You can add the folder there with mkdir or:
/data/users/common/xrootmkdir.sh /data/xrootd/data/skims/test
Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:
> > | To see the contents of xrootd:
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
arc_nevis_ls /data/xrootd
Added:
> > | (or look in /xrootdfs)
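Individual files listed that way can be read or copied back out through the same redirector. A minimal sketch with xrdcp (the dataset and file names are just the placeholders used elsewhere on this page):
xrdcp xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile> .   # copy one file from xrootd into the current directory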
The database is updated every hour. You can then use Arcond on top of condor to run jobs over the files. See ArCondNevis for a tutorial. The general ArCond instructions are at Arcond User Guide
Deleted:
< < | To directly move data into xrootd from dq2-get, see below. Note that it's probably good to specify a US site when possible: dq2-get SITE DATASET.
Old Athena versions are installed in
/a/data/xenia/share/atlas/ATLASLocalRootBase/Athena/i686_slc5_gcc43_opt/
Modern releases are available through cvmfs. Do
showVersions --show=athena
Line: 57 to 35
should work.
Changed:
< < | That's most of it. Of course, only basic tests have been made.
> > | That's most of it. Talk to others in the group for scripts etc.
The instructions below can (?) still be used, but data subscriptions are now recommended. There is a webpage to subscribe to datasets.
Note the very useful line
xrdcp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
There are also scripts on /data/users/common to help adding files. They use /data/users/common/fileCatalog which is meant to keep track of the files on xrootd.
This is an example command:
xrootadd.sh data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ /data/xrootd/data/skims/test
where data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ is a folder containing root files and /data/xrootd/data/skims/test is an existing folder on xrootd.
The command will create a sub-folder /data/xrootd/data/skims/test/data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ containing root files renamed to NTUP.0001.root etc. To avoid renaming use /data/users/common/xrootadd.dont_rename.sh
The xrootadd.sh scripts check that the folder exists in the /data/users/common/fileCatalog before adding to xrootd. You can add the folder there with mkdir or:
/data/users/common/xrootmkdir.sh /data/xrootd/data/skims/test
Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:
-- GustaafBrooijmans - 16 Apr 2010
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 88 to 88
In the xrootd config file:
/etc/xrootd/xrootd-clustered.cfg
Deleted:
< < | I added the line
all.manager meta glrd.usatlas.org:1095
And in
Changed:
< < | /opt/osg-v1.2/vdt/services/vdt-run-gsiftp.sh.env
> > | /etc/xrootd-dsi/gridftp-xrootd.conf
Changed:
< < | I changed the last line to:
export XROOTD_VMP="xenia.nevis.columbia.edu:1094:/xrootd=/xrootd"
> > | put the line:
$XROOTD_VMP "xenia.nevis.columbia.edu:1094:/xrootd=/atlas"
Make sure all these config files are readable by everyone (not just root). To start and stop the server as root do:
Changed:
< < | source /opt/osg-v1.2/setup.sh
vdt-control --disable gsiftp
vdt-control --off gsiftp
vdt-control --enable gsiftp
vdt-control --on gsiftp
> > | /sbin/service globus-gridftp-server start/stop/restart
(some of this may be superfluous).
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 36 to 36
Arcond User Guide
Changed:
< < | Unfortunately, there is no way to directly move data into xrootd from dq2-get yet. dq2-get does work, but it's probably good to specify a US site when possible: dq2-get SITE DATASET.
> > | To directly move data into xrootd from dq2-get, see below. Note that it's probably good to specify a US site when possible: dq2-get SITE DATASET.
Changed:
< < | xenia currently has 7 TB of generally available storage under /data (shared between users and xrootd), and each batch node has 1.7 TB. We will evolve this as we develop understanding of the needs. Athena is installed in
> > | Old Athena versions are installed in
/a/data/xenia/share/atlas/ATLASLocalRootBase/Athena/i686_slc5_gcc43_opt/
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 62 to 62
That's most of it. Of course, only basic tests have been made. -- GustaafBrooijmans - 16 Apr 2010
Added:
> > | DQ2 to XROOTD
Using DQ2 to XrootD Directly
On xenia it is now possible to transfer files directly to the xrootd using dq2-get. The command to use is:
dq2-get -Y -L -q FTS -o https://fts.usatlas.bnl.gov:8442/glite-data-transfer-fts/services/FileTransfer -S gsiftp://xenia.nevis.columbia.edu/xrootd
For example I copied the dataset: user.chapleau.valid1.105592.Pythia_Zprime_tt1000.recon.AOD.e574_s933_s946_r2759.NTUP_TOPBOOST.v1.111015001105
[tandeen@xenia]~% dq2-get -d -Y -L UKI-SCOTGRID-GLASGOW_LOCALGROUPDISK -q FTS -o https://fts.usatlas.bnl.gov:8443/glite-data-transfer-fts/services/FileTransfer -S gsiftp://xenia.nevis.columbia.edu/xrootd user.chapleau.valid1.105592.Pythia_Zprime_tt1000.recon.AOD.e574_s933_s946_r2759.NTUP_TOPBOOST.v1.111015001105
and on xenia the files can now be seen here:
[tandeen@xenia]~% ls /xrootdfs/xrootd/dq2/user/chapleau/valid1/user.chapleau.valid1.105592.Pythia_Zprime_tt1000.recon.AOD.e574_s933_s946_r2759.NTUP_TOPBOOST.v1.111015001105
Further automation of the process is coming...
Reference for installation (not for users)
To make this work I made the changes below which were different from the basic instructions here: https://twiki.cern.ch/twiki/bin/view/Atlas/Tier3gGridftpSetup
/etc/grid-security/grid-mapfile
In the xrootd config file:
/etc/xrootd/xrootd-clustered.cfg
I added the line
all.manager meta glrd.usatlas.org:1095
And in
/opt/osg-v1.2/vdt/services/vdt-run-gsiftp.sh.env
I changed the last line to:
export XROOTD_VMP="xenia.nevis.columbia.edu:1094:/xrootd=/xrootd"
Make sure all these config files are readable by everyone (not just root). To start and stop the server as root do:
source /opt/osg-v1.2/setup.sh
vdt-control --disable gsiftp
vdt-control --off gsiftp
vdt-control --enable gsiftp
vdt-control --on gsiftp
(some of this may be superfluous).
-- TimothyAndeen - 21 Dec 2011
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 32 to 32
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
arc_nevis_ls /data/xrootd
Changed:
< < | The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. See ArCondNevis for a tutorial. The general ArCond instructions are at
> > | The database is updated every hour. You can then use Arcond on top of condor to run jobs over the files. See ArCondNevis for a tutorial. The general ArCond instructions are at
Arcond User Guide
Line: 41 to 41
xenia currently has 7 TB of generally available storage under /data (shared between users and xrootd), and each batch node has 1.7 TB. We will evolve this as we develop understanding of the needs. Athena is installed in
Changed:
< < | /a/data/xenia/share/atlas/ATLASLocalRootBase/Athena/i686_slc5_gcc43_opt/
> > | /a/data/xenia/share/atlas/ATLASLocalRootBase/Athena/i686_slc5_gcc43_opt/
Modern releases are available through cvmfs. Do
showVersions --show=athena
to see the list. Proof-on-demand is also available, see instructions at
Changed:
< < | Releases can be added easily.
> > | https://twiki.atlas-canada.ca/bin/view/AtlasCanada/ATLASLocalRootBase#Using_PROOF_on_Demand_PoD
Otherwise, most of the instructions at https://atlaswww.hep.anl.gov/twiki/bin/view/UsAtlasTier3/Tier3gUsersGuide
Changed:
< < | should work. We do not have a squid server yet.
> > | should work.
That's most of it. Of course, only basic tests have been made.
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 11 to 11
There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. As of Sept 4, 2010 there is 8.8 TB of storage on xenia:/data (with 6.2 TB used) and 11 TB in xrootd. Instructions are on the T3 users site (link below), although it is missing the very useful line
Changed:
< < | cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
> > | xrdcp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
Changed:
< < | (it is possible to do
mkdir xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>
> > | There are also scripts on /data/users/common to help adding files. They use /data/users/common/fileCatalog which is meant to keep track of the files on xrootd.
Changed:
< < | and this is generally a good idea I think.)
> > | This is an example command:
xrootadd.sh data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ /data/xrootd/data/skims/test
where data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ is a folder containing root files and /data/xrootd/data/skims/test is an existing folder on xrootd.
The command will create a sub-folder /data/xrootd/data/skims/test/data11_7TeV.00178109.physics_Muons.merge.NTUP_SMWZ containing root files renamed to NTUP.0001.root etc. To avoid renaming use /data/users/common/xrootadd.dont_rename.sh
The xrootadd.sh scripts check that the folder exists in the /data/users/common/fileCatalog before adding to xrootd. You can add the folder there with mkdir or:
/data/users/common/xrootmkdir.sh /data/xrootd/data/skims/test
Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
Line: 1 to 1
Using the T3 Cluster at Nevis
Our installation is very similar to the standard US T3 installation but not exactly the same. Start by adding the following to your .bashrc file (or corresponding file if you use a different shell, but I find bash to work rather well with ATLAS software...):
Changed:
< < | export ATLAS_LOCAL_ROOT_BASE=/a/data/xenia/share/atlas/ATLASLocalRootBase/
> > | export ATLAS_LOCAL_ROOT_BASE=/a/data/xenia/share/atlas/ATLASLocalRootBase
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
Log onto xenia (from kolya or karthur) and go to bash if that is not your default shell. You need to do everything on xenia: this is where everything is installed, and although most of it is accessible by NFS from the rest of the cluster, most Nevis machines still run an incompatible 32-bit OS.
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 44 to 44
That's most of it. Of course, only basic tests have been made.
Deleted:
< < | Note: if you want to use the local python version, you will need to
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
-- GustaafBrooijmans - 16 Apr 2010 \ No newline at end of file
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 8 to 8
Log onto xenia (from kolya or karthur) and go to bash if that is not your default shell. You need to do everything on xenia: this is where everything is installed, and although most of it is accessible by NFS from the rest of the cluster, most Nevis machines still run an incompatible 32-bit OS.
Changed:
< < | There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. Instructions are on the T3 users site (link below), although it is missing the very useful line
> > | There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. As of Sept 4, 2010 there is 8.8 TB of storage on xenia:/data (with 6.2 TB used) and 11 TB in xrootd. Instructions are on the T3 users site (link below), although it is missing the very useful line
cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 19 to 19
and this is generally a good idea I think.)
Changed:
< < | Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:
> > | Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
arc_nevis_ls /data/xrootd
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 23 to 23
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
arc_nevis_ls /data/xrootd
Changed:
< < | The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at
> > | The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. See ArCondNevis for a tutorial. The general ArCond instructions are at
Arcond User Guide
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 10 to 10
There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. Instructions are on the T3 users site (link below), although it is missing the very useful line
Changed:
< < | cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
> > | cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
(it is possible to do
mkdir xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>
Line: 19 to 21
Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
Changed:
< < | arc_nevis_ls /data/xrootd
The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at
> > | arc_nevis_ls /data/xrootd
The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at
Arcond User Guide
Line: 40 to 44
That's most of it. Of course, only basic tests have been made.
Added:
> > | Note: if you want to use the local python version, you will need to
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
-- GustaafBrooijmans - 16 Apr 2010 \ No newline at end of file
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 18 to 18
and this is generally a good idea I think.) Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:
Changed:
< < | source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh; arc_nevis_ls
The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at
> > | source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
arc_nevis_ls /data/xrootd
The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at
Arcond User Guide
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 10 to 10
There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. Instructions are on the T3 users site (link below), although it is missing the very useful line
Changed:
< < | cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<myfile>
> > | cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>/<myfile>
Changed:
< < | which also requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way. To find out what's available and where:
> > | (it is possible to do
mkdir xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<dataset>
and this is generally a good idea I think.)
Note that this requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way - they're not all on xenia! To find out what's available and where:
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh; arc_nevis_ls
The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at
Arcond User Guide
Line: 1 to 1
Using the T3 Cluster at Nevis
Line: 10 to 10
There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. Instructions are on the T3 users site (link below), although it is missing the very useful line
Changed:
< < | cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<myfile>
> > | cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<myfile>
Changed:
< < | which also requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. Unfortunately, there is no way to directly move data into xrootd from dq2-get yet. dq2-get does work, but it's probably good to specify a US site when possible: dq2-get SITE DATASET.
> > | which also requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. (Please only copy data files, not log files!!!). Note that files get distributed over the nodes that way. To find out what's available and where:
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh; arc_nevis_ls
The database is updated every 4 hours. You can then use Arcond on top of condor to run jobs over the files. The instructions for that are at Arcond User Guide
Unfortunately, there is no way to directly move data into xrootd from dq2-get yet. dq2-get does work, but it's probably good to specify a US site when possible: dq2-get SITE DATASET.
xenia currently has 7 TB of generally available storage under /data (shared between users and xrootd), and each batch node has 1.7 TB. We will evolve this as we develop understanding of the needs.
Line: 1 to 1
Added:
> > | Using the T3 Cluster at Nevis
Our installation is very similar to the standard US T3 installation but not exactly the same. Start by adding the following to your .bashrc file (or corresponding file if you use a different shell, but I find bash to work rather well with ATLAS software...):
export ATLAS_LOCAL_ROOT_BASE=/a/data/xenia/share/atlas/ATLASLocalRootBase/
alias setupATLAS='source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh'
Log onto xenia (from kolya or karthur) and go to bash if that is not your default shell. You need to do everything on xenia: this is where everything is installed, and although most of it is accessible by NFS from the rest of the cluster, most Nevis machines still run an incompatible 32-bit OS.
There is a batch queue managed by condor, and storage is available under /data/user (create your own subdirectory). However, please start using xrootd as soon as possible to manage storage. Instructions are on the T3 users site (link below), although it is missing the very useful line
cp <myfile> xroot://xenia.nevis.columbia.edu:1094//data/xrootd/<myfile>
which also requires the setups under "Data storage for your batch cluster". If people copy files to xrootd that way, they will be accessible to all and will gradually also get distributed over the batch nodes. Unfortunately, there is no way to directly move data into xrootd from dq2-get yet. dq2-get does work, but it's probably good to specify a US site when possible: dq2-get SITE DATASET.
xenia currently has 7 TB of generally available storage under /data (shared between users and xrootd), and each batch node has 1.7 TB. We will evolve this as we develop understanding of the needs.
Athena is installed in
/a/data/xenia/share/atlas/ATLASLocalRootBase/Athena/i686_slc5_gcc43_opt/
Releases can be added easily.
Otherwise, most of the instructions at https://atlaswww.hep.anl.gov/twiki/bin/view/UsAtlasTier3/Tier3gUsersGuide should work. We do not have a squid server yet.
That's most of it. Of course, only basic tests have been made.
-- GustaafBrooijmans - 16 Apr 2010