Difference: ArCondNevis (1 vs. 8)

Revision 8 (2015-08-20) - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="UsingNevisT3"

Using ArCond at Nevis

Changed:
<
<
ArCond is a wrapper that uses condor to automatically parallelize your jobs. When you copy a dataset to xrootd, the data is distributed evenly over all three of the T3 worker nodes. In addition to parallelization, ArCond also runs jobs only on the nodes where the data is stored so there is no network load. The general ArCond instructions are available here. These instructions are specific to users at ANL, but there may still be some information there that you will find useful.
>
>
ArCond is a wrapper that uses condor to automatically parallelize your jobs. When you copy a dataset to xrootd, the data is distributed evenly over all three of the T3 worker nodes. In addition to parallelization, ArCond also runs jobs only on the nodes where the data is stored so there is no network load. The general ArCond instructions are available here. These instructions are specific to users at ANL, but there may still be some information there that you will find useful.
There are of course some drawbacks. The first is that you can't monitor how the job is progressing; you can only wait for the job to finish and check the log files after the fact. Secondly, you can only run over the entire dataset (modulo telling your submission scripts to only run a certain number of events per job). Also, if the datasets you're running on are not very large (or the jobs are not very CPU intensive), then the time it takes to set up the submission scripts (which of course you only have to do once and can then re-use as often as you need), to submit the jobs, for condor to copy the packages/output to and from the worker nodes, and for you to combine the output (by hand) will probably be longer than the time it takes to just run the job locally. Of course, you can set up scripts to do all of the above to make things easier/quicker.
Line: 10 to 10
 

The Tutorial

Changed:
<
<
This tutorial works in zsh. I'm not sure about bash. It will teach you how to submit c++ jobs which run over D3PDs (in the AnalysisUtilities framework) using ArCond. I'm fairly new to ArCond, so it is possible that I've made mistakes (or that things could be done in a more efficient way). If you discover any, please let me know or update this page yourself.
>
>
This tutorial works in zsh. I'm not sure about bash. It will teach you how to submit c++ jobs which run over D3PDs (in the AnalysisUtilities framework) using ArCond. I'm fairly new to ArCond, so it is possible that I've made mistakes (or that things could be done in a more efficient way). If you discover any, please let me know or update this page yourself.
  It's also possible to run athena on ArCond, but I haven't tried to do that and I don't plan to. See the general ArCond instructions for that.
Line: 25 to 24
localSetupPython
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ROOTSYS/lib/
#export LD_PRELOAD=$ROOTSYS/lib/libXrdPosixPreload.so
Changed:
<
<
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh

Also, you need to make sure your files are on xrootd. To copy a dataset to xrootd, after doing the above setup, do:

>
>
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh
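For convenience, the environment setup can be collected into one file that you source from your zsh session before using ArCond (a sketch assembled from the setup commands quoted on this page; the file name setup_arcond.sh is just an example, and the gcc/ROOT version strings are the ones quoted here, so they may need updating for the current installation):

# setup_arcond.sh -- source this from zsh before using ArCond (sketch)
setupATLAS   # alias for 'source /a/data/xenia/share/atlas/ATLASLocalRootBase/user/atlasLocalSetup.sh'
localSetupGcc --gccVersion=gcc432_x86_64_slc5
localSetupROOT --rootVersion=5.26.00-slc5-gcc4.3
localSetupPython
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ROOTSYS/lib/
#export LD_PRELOAD=$ROOTSYS/lib/libXrdPosixPreload.so
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh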
 
Changed:
<
<
source /data/users/common/xrootadd.sh <dataset_name> <destination/directory>
>
>
With the subscriptions, data are transferred straight into xrootd.
  If you just now copied a dataset, you'll need to wait a few hours for the database to update before continuing (otherwise, ArCond won't know that the data is available on the nodes by default). To check if your data is there, again do the above setup and do:
Deleted:
<
<
 
Changed:
<
<
arc_nevis_ls /data/xrootd/
>
>
arc_nevis_ls /data0/atlas/...
If the data is there, the above command will list the files.
Changed:
<
<

Tutorial Package

>
>
Note that /xrootdfs is a virtual filesystem that is useful to take a quick look at all the files in the system. The physical filesystems are under /data0 on the worker nodes. So,
ls /xrootdfs/atlas/... 

on xenia collates the info from

ls /data0/atlas/... 
 
Changed:
<
<
Check out the ArCondNevis package. This directory has the code to run the jobs (Analysis package), plus the submission scripts (in arc_d3pd). You will probably want to do this on your xenia user directory. Condor copies the output of your jobs to wherever you submit from. If the output files are large and you do this from a karthur or kolya directory (i.e. where most of our home directories are mounted), bad things could happen when the output is copied over. Also, when you cd to your xenia user directory, make sure you DO NOT go to the full NFS path /a/data/xenia/users/username/ but rather /data/users/username/. You need to use the condor built-in file transfer mechanism, documented here or there's a chance you will bring the system to its knees. Then, check out the package:
>
>
on the 22 worker nodes.

Tutorial Package

 
Added:
>
>
Check out the ArCondNevis package. This directory has the code to run the jobs (the Analysis package), plus the submission scripts (in arc_d3pd). You will probably want to do this in your xenia user directory. Condor copies the output of your jobs to wherever you submit from. If the output files are large and you do this from a karthur or kolya directory (i.e. where most of our home directories are mounted), bad things could happen when the output is copied over. Also, when you cd to your xenia user directory, make sure you DO NOT go to the full NFS path /a/data/xenia/users/username/ but rather /data/users/username/. You need to use the condor built-in file transfer mechanism, documented here, or there's a chance you will bring the system to its knees. Then, check out the package:
 
export SVNAP=svn+ssh://svn.cern.ch/reps/apenson
cd /data/users/urbaniec  # not the full /a/data/xenia/... NFS path, as explained above
kinit urbaniec@CERN.CH
Changed:
<
<
svn co $SVNAP/ArCondNevis
>
>
svn co $SVNAP/ArCondNevis
  Optional: If you want to run the code interactively to see what it does, execute the following commands:
Deleted:
<
<
 
cd ArCondNevis/Analysis/AnalysisUtilities/goodRunsLists/cmt
make
Line: 62 to 58
make
cd ../AnalysisTemplate
make
Changed:
<
<
run InputFiles.txt physics Analysis.root -isMC 1
>
>
run InputFiles.txt physics Analysis.root -isMC 1
  This should take a few minutes to compile and run. The output is a file called Analysis.root which has some histograms, plus a slimmed TTree.
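If you find yourself rebuilding often, the interactive sequence above can be wrapped in one script (a sketch; it assumes you start from the directory containing your ArCondNevis checkout and that the run binary ends up in AnalysisTemplate, as above):

# build_and_run.sh -- hypothetical wrapper for the interactive build + test run
cd ArCondNevis/Analysis/AnalysisUtilities/goodRunsLists/cmt && make
cd ../.. && make                    # build AnalysisUtilities itself
cd ../AnalysisTemplate && make      # build the template analysis
./run InputFiles.txt physics Analysis.root -isMC 1   # the text above just says 'run'; './run' is the safe form if '.' is not on your PATH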
Line: 74 to 69
Now check out the ArCondNevis/arc_d3pd/patterns directory. Here you tell ArCond what machines are available (and any requirements for those machines). The files all have the form schema.site.xeniaXX.nevis.columbia.edu. You don't need to modify these, but if nodes are ever added to the T3 site, you'll need to add a corresponding file. One thing you might want to modify is uncommenting the email notification line (else you'll get a ton of emails when the jobs finish... personally I let the emails come and then filter them, but this is really up to you).

Finally, check out the ArCondNevis/user directory. There should be 3 files. The most important is called ShellScript_BASIC.sh. Open this and skip to the part where it says "user-defined part" (everything before this is ArCond set up, e.g. copying packages to the nodes, setting up the parallelization, etc.). As you can see, it does some setup for AnalysisUtilities, then compiles the packages, then runs the job with the following line:

Deleted:
<
<
 
Changed:
<
<
./run InputFiles.txt physics Analysis.root > Analysis.log 2>&1
>
>
./run InputFiles.txt physics Analysis.root > Analysis.log 2>&1
InputFiles.txt is the data list that is automatically created by ArCond in the few lines above the user-defined part (using the python jobOptions in the user directory). There is also a file called InputFiles.txt by default in the local version of the AnalysisTemplate package, but it will get overwritten in the condor job. This is how ArCond parallelizes the jobs: it divides the files of each dataset that live on each node into several InputFiles.txt files, and AnalysisUtilities then runs over one of them in each sub-job. (So if you want to use ArCond outside of AnalysisUtilities, keep in mind that your code needs to run over a list of the data files, and that the list needs to be called InputFiles.txt in the submitted job, unless of course you rename it in the shell script.)
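For reference, as far as I can tell the InputFiles.txt that ArCond writes is nothing more than a plain list of data files, one path per line. A hypothetical illustration (the dataset name and file names are made up; the real paths point at the node's local /data0 copy of your dataset):

# Hypothetical illustration of an InputFiles.txt on a worker node:
cat > InputFiles.txt <<'EOF'
/data0/atlas/user.someone.mc10_7TeV.ExampleD3PD/ntuple._00001.root
/data0/atlas/user.someone.mc10_7TeV.ExampleD3PD/ntuple._00002.root
/data0/atlas/user.someone.mc10_7TeV.ExampleD3PD/ntuple._00003.root
EOF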
Line: 86 to 79
 The package should be plug and play. There is no need to compile anything since compilation is in principle done on the worker nodes where the job is located. However, if you want to save time, I haven't had any issues with compiling locally and then copying over the binaries. To do that, just compile as above and comment out the compilation lines in ShellScript_BASIC.sh. This significantly reduces the time it takes to finish all the jobs.
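As a rough, hypothetical illustration, after commenting out the compilation the end of the user-defined part of ShellScript_BASIC.sh would look something like this (only the ./run line is taken from the actual script; the commented lines stand in for whatever compilation commands your copy contains):

# --- user-defined part (sketch) ---
# make                               # compilation commented out: the binaries were
# make                               # built locally and shipped with the package
./run InputFiles.txt physics Analysis.root > Analysis.log 2>&1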

To run ArCond, just do:

Deleted:
<
<
 
cd ArCondNevis/arc_d3pd
Changed:
<
<
arcond -allyes
>
>
arcond -allyes
You can then execute condor_q to view the queue and you should see 12 jobs under your username. If no one is using condor, your jobs should start running right away and take ~10 minutes to finish. You'll get an email sent to your nevis account when the jobs are finished. At any time, from the arc_d3pd directory, you can execute arc_check or condor_q to see which jobs are still running (arc_check will also tell you which jobs have succeeded, but it will only work if you name your output root files Analysis.root). When all the jobs are finished (with status 0 in your emails if you get them), have a look at the output:
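If you would rather poll from a script than wait for the emails, something like the following works in spirit (a sketch; the condor_q options and output parsing may need tweaking for the Condor version installed here):

# Wait until none of my jobs are left in the queue (rough polling sketch).
while [ "$(condor_q "$USER" -format '%d\n' ClusterId | wc -l)" -gt 0 ]; do
    sleep 300   # check every five minutes
done
echo "All condor jobs for $USER have left the queue"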
Deleted:
<
<
 
cd ArCondNevis/arc_d3pd/Job
Changed:
<
<
ls
>
>
ls
You should see several directories (one for each job), named something like run0_xenia01.nevis.columbia.edu/. Each of these directories contains the submission and execution scripts. They are also where the output is copied once the job finishes. In this case, the output should be called Analysis.log (the text output from the AnalysisUtilities job) and Analysis.root.
Line: 102 to 91

Unfortunately, ArCond does not automatically combine the jobs at the end. This can presumably be modified but I haven't done that yet (and at this point I don't plan to). To combine the output, in principle one is supposed to use arc_add (doesn't work for me). What I do is use condor_q to make sure all my jobs are done, then do something like the following:

Deleted:
<
<
 
cd ArCondNevis/arc_d3pd/Job
Changed:
<
<
hadd -f Analysis_all.root run*/Analysis.root
>
>
hadd -f Analysis_all.root run*/Analysis.root
  You'll probably see some errors related to different binning (due to the automatic rebinning in AnalysisUtilities). This is another downside to the parallelization. I don't know a good solution for this at the moment. In my analysis I don't use variable size binning so all jobs are consistent.
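A slightly more careful version of the combine step (a sketch; it assumes every sub-job is supposed to deliver an Analysis.root, as in this tutorial):

# Combine the per-job outputs, warning about any job that produced nothing.
cd ArCondNevis/arc_d3pd/Job
for d in run*_*.nevis.columbia.edu; do
    [ -f "$d/Analysis.root" ] || echo "WARNING: no Analysis.root in $d"
done
hadd -f Analysis_all.root run*/Analysis.root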

Revision 7 (2012-05-18) - GustaafBrooijmans

Line: 1 to 1
 
META TOPICPARENT name="UsingNevisT3"

Using ArCond at Nevis

Line: 6 to 6
  There are of course some drawbacks. The first of which is that you can't monitor how the job is progressing. You can only wait for the job to finish and check the log files after the fact. Secondly, you can only run over the entire dataset (modulo telling your submission scripts to only run a certain number of events, per job). Also, if the datasets you're running on are not very large (or the jobs are not very cpu intensive), then the time it takes to set up the submission scripts (which of course you only have to do once and then you can re-use them as often as you need), to submit the jobs, for condor to copy the packages/output to and from the worker nodes, and for you to combine the output (by hand) will probably be longer than the time it takes to just run the job locally. Of course, you can set up scripts to do all of the above to make things easier/quicker.
Changed:
<
<
In my experience, a job that normally takes > 1 hour or so to run locally is worth submitting to ArCond (will take ~10 minutes to finish on ArCond). Also, if you are submitting many jobs over many different datasets, writing scripts to submit these all to ArCond rather than running them all sequentially will probably be much faster since you have 48 cores on the worker nodes vs. 16 interactively (that are more consistently in use by others).
>
>
In my experience, a job that normally takes > 1 hour or so to run locally is worth submitting to ArCond (will take ~10 minutes to finish on ArCond). Also, if you are submitting many jobs over many different datasets, writing scripts to submit these all to ArCond rather than running them all sequentially will probably be much faster since you have many many cores on the worker nodes vs. 16 interactively (that are more consistently in use by others).
 

The Tutorial

Line: 44 to 44
 

Tutorial Package

Changed:
<
<
Check out the ArCondNevis package. This directory has the code to run the jobs (Analysis package), plus the submission scripts (in arc_d3pd). You will probably want to do this on your xenia user directory. Condor copies the output of your jobs to wherever you submit from. If the output files are large and you do this from a karthur or kolya directory (i.e. where most of our home directories are mounted), bad things could happen when the output is copied over. Also, when you cd to your xenia user directory, make sure you go to the full NFS path /a/data/xenia/users/username/ rather than just /data/users/username/ so that the jobs know where the submission directory is on NFS. Otherwise, they won't know where to copy the output. Then, check out the package:
>
>
Check out the ArCondNevis package. This directory has the code to run the jobs (Analysis package), plus the submission scripts (in arc_d3pd). You will probably want to do this on your xenia user directory. Condor copies the output of your jobs to wherever you submit from. If the output files are large and you do this from a karthur or kolya directory (i.e. where most of our home directories are mounted), bad things could happen when the output is copied over. Also, when you cd to your xenia user directory, make sure you DO NOT go to the full NFS path /a/data/xenia/users/username/ but rather /data/users/username/. You need to use the condor built-in file transfer mechanism, documented here or there's a chance you will bring the system to its knees. Then, check out the package:
 
export SVNAP=svn+ssh://svn.cern.ch/reps/apenson

Revision 6 (2010-11-16) - AlexPenson

Line: 1 to 1
 
META TOPICPARENT name="UsingNevisT3"

Using ArCond at Nevis

Line: 117 to 117
 
  • It would be nice to submit over many datasets from one arcond.conf file, but I don't know how to do this (comma separating the dataset names doesn't work).
  • Automatically hadding the output would make things much more convenient as well.
  • Having more flexibility to run over only (or exclude) certain files could potentially be useful.
Added:
>
>

See Also:

https://atlaswww.hep.anl.gov/twiki/bin/view/UsAtlasTier3/Tier3gUsersGuide

http://www.nevis.columbia.edu/twiki/bin/view/ATLAS/UsingNevisT3

-- DustinUrbaniec - 27 Aug 2010

Revision 5 (2010-09-22) - AlexPenson

Line: 1 to 1
 
META TOPICPARENT name="UsingNevisT3"

Using ArCond at Nevis

Line: 37 to 37
 If you just now copied a dataset, you'll need to wait a few hours for the database to update before continuing (otherwise, ArCond won't know that the data is available on the nodes by default). To check if your data is there, again do the above setup and do:
Changed:
<
<
arc_nevis_ls /data/xrootd/
>
>
arc_nevis_ls /data/xrootd/
 

If the data is there, you should list the files with the above command.

Revision 4 (2010-09-16) - EvanWulf

Line: 1 to 1
 
META TOPICPARENT name="UsingNevisT3"

Using ArCond at Nevis

Line: 8 to 8
  In my experience, a job that normally takes > 1 hour or so to run locally is worth submitting to ArCond (will take ~10 minutes to finish on ArCond). Also, if you are submitting many jobs over many different datasets, writing scripts to submit these all to ArCond rather than running them all sequentially will probably be much faster since you have 48 cores on the worker nodes vs. 16 interactively (that are more consistently in use by others).
Changed:
<
<

Teh Tutorial

>
>

The Tutorial

  This tutorial works in zsh. I'm not sure about bash. It will teach you how to submit c++ jobs which run over D3PDs (in the AnalysisUtilities framework) using ArCond. I'm fairly new to ArCond, so it is possible that I've made mistakes (or that things could be done in a more efficient way). If you discover any, please let me know or update this page yourself.
Line: 31 to 31
 Also, you need to make sure your files are on xrootd. To copy a dataset to xrootd, after doing the above setup, do:
Changed:
<
<
source /data/users/common/xrootadd.sh
>
>
source /data/users/common/xrootadd.sh <destination/directory>
 

If you just now copied a dataset, you'll need to wait a few hours for the database to update before continuing (otherwise, ArCond won't know that the data is available on the nodes by default). To check if your data is there, again do the above setup and do:

Line: 71 to 71
  In the ArCondNevis/arc_d3pd directory, there is a file called arcond.conf. The only three important lines begin with input_data - where you specify the dataset (always in the form /data/xrootd/), max_jobs_per_node (remember there are 3 nodes, so multiply this number by 3 and you'll get the degree of parallelization of your jobs), and package_dir - where you specify the path to your analysis package to be copied to where your jobs will run. Modify these as you see fit (if you just want to run ArCond out-of-the-box for the tutorial, leave these as they are).
Changed:
<
<
Now check out the ArCondNevis/patterns directory. Here you tell ArCond what machines are available (and any requirements for those machines). The files all have the form schema.site.xeniaXX.nevis.columbia.edu. You don't need to modify these, but if nodes are ever added to the T3 site, you'll need to add a corresponding file. One thing you might want to modify is uncommenting the email notification line (else you'll get a ton of emails when the jobs finish... personally i let the emails come and then filter the emails, but this is really up to you).
>
>
Now check out the ArCondNevis/arc_d3pd/patterns directory. Here you tell ArCond what machines are available (and any requirements for those machines). The files all have the form schema.site.xeniaXX.nevis.columbia.edu. You don't need to modify these, but if nodes are ever added to the T3 site, you'll need to add a corresponding file. One thing you might want to modify is uncommenting the email notification line (else you'll get a ton of emails when the jobs finish... personally i let the emails come and then filter the emails, but this is really up to you).
  Finally, check out the ArCondNevis/user directory. There should be 3 files. The most important is called ShellScript_BASIC.sh. Open this and skip to the part where it says "user-defined part" (everything before this is ArCond set up, e.g. copying packages to the nodes, setting up the parallelization, etc.). As you can see, it does some setup for AnalysisUtilities, then compiles the packages, then runs the job with the following line:

Revision 3 (2010-09-01) - DustinUrbaniec

Line: 1 to 1
 
META TOPICPARENT name="UsingNevisT3"

Using ArCond at Nevis

Changed:
<
<
ArCond is a wrapper that uses condor to automatically parallelize your jobs. When you copy a dataset to xrootd, the data is distributed evenly over all three of the T3 worker nodes. In addition to parallelization, ArCond also runs jobs only on the nodes where the data is stored so there is no network load. The general ArCond instructions are available here. These instructions are specific to users at ANL, but there may still be some information there that you will find useful.
>
>
ArCond is a wrapper that uses condor to automatically parallelize your jobs. When you copy a dataset to xrootd, the data is distributed evenly over all three of the T3 worker nodes. In addition to parallelization, ArCond also runs jobs only on the nodes where the data is stored so there is no network load. The general ArCond instructions are available here. These instructions are specific to users at ANL, but there may still be some information there that you will find useful.
 
Changed:
<
<
This tutorial will teach you how to submit c++ jobs which run over D3PDs (in the AnalysisUtilities framework) using ArCond. I'm fairly new to ArCond, so it is possible that I've made mistakes (or that things could be done in a more efficient way). If you discover any, please let me know or update this page yourself.
>
>
There are of course some drawbacks. The first of which is that you can't monitor how the job is progressing. You can only wait for the job to finish and check the log files after the fact. Secondly, you can only run over the entire dataset (modulo telling your submission scripts to only run a certain number of events, per job). Also, if the datasets you're running on are not very large (or the jobs are not very cpu intensive), then the time it takes to set up the submission scripts (which of course you only have to do once and then you can re-use them as often as you need), to submit the jobs, for condor to copy the packages/output to and from the worker nodes, and for you to combine the output (by hand) will probably be longer than the time it takes to just run the job locally. Of course, you can set up scripts to do all of the above to make things easier/quicker.

In my experience, a job that normally takes > 1 hour or so to run locally is worth submitting to ArCond (will take ~10 minutes to finish on ArCond). Also, if you are submitting many jobs over many different datasets, writing scripts to submit these all to ArCond rather than running them all sequentially will probably be much faster since you have 48 cores on the worker nodes vs. 16 interactively (that are more consistently in use by others).

Teh Tutorial

This tutorial works in zsh. I'm not sure about bash. It will teach you how to submit c++ jobs which run over D3PDs (in the AnalysisUtilities framework) using ArCond. I'm fairly new to ArCond, so it is possible that I've made mistakes (or that things could be done in a more efficient way). If you discover any, please let me know or update this page yourself.

  It's also possible to run athena on ArCond, but I haven't tried to do that and I don't plan to. See the general ArCond instructions for that.

Setting Up For Running ArCond

Changed:
<
<
This tutorial works in zsh. I'm not sure about bash.

You need to set up ArCond (as well as python). I recommend putting the following into a script:

>
>
First you need to set up ArCond. I recommend putting the following into a script:
 
setupATLAS #if you've setup T3 correctly, this should be an alias to 'source /a/data/xenia/share/atlas/ATLASLocalRootBase/user/atlasLocalSetup.sh'
Line: 32 to 36
  If you just now copied a dataset, you'll need to wait a few hours for the database to update before continuing (otherwise, ArCond won't know that the data is available on the nodes by default). To check if your data is there, again do the above setup and do:
Changed:
<
<
arc_nevis_ls /data/xrootd/
>
>
arc_nevis_ls /data/xrootd/
  If the data is there, you should list the files with the above command.

Tutorial Package

Changed:
<
<
Check out the ArCondNevis package. This directory has the code to run the jobs (Analysis package), plus the submission scripts (in arc_d3pd). You will probably want to do this on your xenia user directory. Condor copies the output of your jobs to wherever you submit from. If the output files are large and you do this from a karthur or kolya directory (i.e. where most of our home directories are mounted), bad things could happen when the output is copied over. Also, when you cd to your xenia user directory, make sure you go to the full NSF path /a/data/xenia/users/username/ rather than just /data/users/username/ so that the jobs know where the submission directory is on NSF. Otherwise, they won't know where to copy the output. Then, check out the package:
>
>
Check out the ArCondNevis package. This directory has the code to run the jobs (Analysis package), plus the submission scripts (in arc_d3pd). You will probably want to do this on your xenia user directory. Condor copies the output of your jobs to wherever you submit from. If the output files are large and you do this from a karthur or kolya directory (i.e. where most of our home directories are mounted), bad things could happen when the output is copied over. Also, when you cd to your xenia user directory, make sure you go to the full NFS path /a/data/xenia/users/username/ rather than just /data/users/username/ so that the jobs know where the submission directory is on NFS. Otherwise, they won't know where to copy the output. Then, check out the package:
 
export SVNAP=svn+ssh://svn.cern.ch/reps/apenson
Line: 61 to 67
  This should take a few minutes to compile and run. The output is a file called Analysis.root which has some histograms, plus a slimmed TTree.
Changed:
<
<

Submitting the Jobs Out-of-the-box

>
>

ArCond Submission

In the ArCondNevis/arc_d3pd directory, there is a file called arcond.conf. The only three important lines begin with input_data - where you specify the dataset (always in the form /data/xrootd/), max_jobs_per_node (remember there are 3 nodes, so multiply this number by 3 and you'll get the degree of parallelization of your jobs), and package_dir - where you specify the path to your analysis package to be copied to where your jobs will run. Modify these as you see fit (if you just want to run ArCond out-of-the-box for the tutorial, leave these as they are).

Now check out the ArCondNevis/patterns directory. Here you tell ArCond what machines are available (and any requirements for those machines). The files all have the form schema.site.xeniaXX.nevis.columbia.edu. You don't need to modify these, but if nodes are ever added to the T3 site, you'll need to add a corresponding file. One thing you might want to modify is uncommenting the email notification line (else you'll get a ton of emails when the jobs finish... personally i let the emails come and then filter the emails, but this is really up to you).

Finally, check out the ArCondNevis/user directory. There should be 3 files. The most important is called ShellScript_BASIC.sh. Open this and skip to the part where it says "user-defined part" (everything before this is ArCond set up, e.g. copying packages to the nodes, setting up the parallelization, etc.). As you can see, it does some setup for AnalysisUtilities, then compiles the packages, then runs the job with the following line:

./run InputFiles.txt physics Analysis.root > Analysis.log 2>&1

InputFiles.txt is the the data list that is automatically created by ArCond in the few lines above the user defined part (using the python jobOptions in the user directory). There is also a file called InputFiles.txt by default in the local version of the AnalysisTemplate package, but it will get overwritten in the condor job. This is how ArCond parallelizes the jobs. It divides the files in each dataset that are on each node into several InputFiles.txt files that AnalysisUtilities then runs over in each sub job (so if you want to use ArCond outside of AnalysisUtilities, keep in mind that your code needs to run over a list of the data files and that the list needs to be called InputFiles.txt in the submitted job, that is of course unless you rename it in the shell script).

Submitting the Jobs

 
Changed:
<
<
The package should be plug and play. There is no need to compile anything since compilation is in principle done locally on the worker nodes where the job is located. To run
>
>
The package should be plug and play. There is no need to compile anything since compilation is in principle done on the worker nodes where the job is located. However, if you want to save time, I haven't had any issues with compiling locally and then copying over the binaries. To do that, just compile as above and comment out the compilation lines in ShellScript_BASIC.sh. This significantly reduces the time it takes to finish all the jobs.

To run ArCond, just do:

 
Deleted:
<
<
ArCond, just do:
 
cd ArCondNevis/arc_d3pd
Changed:
<
<
arcond
>
>
arcond -allyes
 
Changed:
<
<
You can then execute condor_q to view the queue and you should see 12 jobs under your username. If no one is using condor, your jobs should start running right away and take ~10 minutes to finish. You'll get an email sent to your nevis account when the jobs are finished. At any time, from the arc_d3pd directory, you can execute arc_check or condor_q to see which jobs are still running (arc_check will also tell you which jobs have succeeded). When all the jobs are finished (with status 0 in your emails), have a look at the output:
>
>
You can then execute condor_q to view the queue and you should see 12 jobs under your username. If no one is using condor, your jobs should start running right away and take ~10 minutes to finish. You'll get an email sent to your nevis account when the jobs are finished. At any time, from the arc_d3pd directory, you can execute arc_check or condor_q to see which jobs are still running ( arc_check will also tell you which jobs have succeeded, but it will only work if you name your output root files Analysis.root). When all the jobs are finished (with status 0 in your emails if you get them), have a look at the output:
 
cd ArCondNevis/arc_d3pd/Job
ls
Changed:
<
<
You should see 12 directories (one for each job) entitled something like run0_xenia01.nevis.columbia.edu/ Each of these directories contains the submission and execution scripts. They are also where the output is copied to once the job finishes. In this case, the output should be called Analysis.log (output text from AnalysisUtilities job) and Analysis.root.
>
>
You should see several directories (one for each job) entitled something like run0_xenia01.nevis.columbia.edu/ Each of these directories contains the submission and execution scripts. They are also where the output is copied to once the job finishes. In this case, the output should be called Analysis.log (output text from AnalysisUtilities job) and Analysis.root.

Unfortunately, ArCond does not automatically combine the jobs at the end. This can presumably be modified but I haven't done that yet (and at this point I don't plan to). To combine the output, in principle one is supposed to use arc_add (doesn't work for me). What I do is use condor_q to make sure all my jobs are done, then do something like the following:

cd ArCondNevis/arc_d3pd/Job
hadd -f Analysis_all.root run*/Analysis.root

You'll probably see some errors related to different binning (due to the automatic rebinning in AnalysisUtilities). This is another downside to the parallelization. I don't know a good solution for this at the moment. In my analysis I don't use variable size binning so all jobs are consistent.

Open Issues

 
Added:
>
>
A lot of the open issues with ArCond require a greater understanding of the software than I currently have (not to mention the permissions for re-writing some of it):
  • Many of the commands don't work correctly (e.g. arc_add).
  • It would be nice to submit over many datasets from one arcond.conf file, but I don't know how to do this (comma separating the dataset names doesn't work).
  • Automatically hadding the output would make things much more convenient as well.
  • Having more flexibility to run over only (or exclude) certain files could potentially be useful.
-- DustinUrbaniec - 27 Aug 2010

Revision 2 (2010-09-01) - DustinUrbaniec

Line: 1 to 1
 
META TOPICPARENT name="UsingNevisT3"
Changed:
<
<

Using ArCond at Nevis

>
>

Using ArCond at Nevis

 
Changed:
<
<
ArCond is a wrapper that uses condor to automatically parallelize your jobs. When you copy a dataset to xrootd, the data is distributed evenly over all three of the T3 worker nodes. In addition to parallelization, ArCond also runs jobs only on the nodes where the data is stored so there is no network load. The general ArCond instructions are available here. These instructions are specific to users at ANL, but there may still be some information there that you will find useful.
>
>
ArCond is a wrapper that uses condor to automatically parallelize your jobs. When you copy a dataset to xrootd, the data is distributed evenly over all three of the T3 worker nodes. In addition to parallelization, ArCond also runs jobs only on the nodes where the data is stored so there is no network load. The general ArCond instructions are available here. These instructions are specific to users at ANL, but there may still be some information there that you will find useful.
 
Changed:
<
<
This tutorial will teach you how to submit c++ jobs which run over D3PDs (in the AnalysisUtilities framework) using ArCond. I'm fairly new to ArCond, so it is possible that I've made mistakes (or that things could be done in a more efficient way). If you discover any, please let me know or update this page yourself. It's also possible to run athena on ArCond, but I haven't tried to do that and I don't plan to. See the general ArCond instructions for that.
>
>
This tutorial will teach you how to submit c++ jobs which run over D3PDs (in the AnalysisUtilities framework) using ArCond. I'm fairly new to ArCond, so it is possible that I've made mistakes (or that things could be done in a more efficient way). If you discover any, please let me know or update this page yourself.
 
Changed:
<
<

Setting Up For Running ArCond

>
>
It's also possible to run athena on ArCond, but I haven't tried to do that and I don't plan to. See the general ArCond instructions for that.
 
Changed:
<
<
Unfortunately, you have to set up athena to run ArCond (even if you're not going to use it).
>
>

Setting Up For Running ArCond

This tutorial works in zsh. I'm not sure about bash.

You need to set up ArCond (as well as python). I recommend putting the following into a script:

setupATLAS #if you've setup T3 correctly, this should be an alias to 'source /a/data/xenia/share/atlas/ATLASLocalRootBase/user/atlasLocalSetup.sh'
localSetupGcc --gccVersion=gcc432_x86_64_slc5
localSetupROOT --rootVersion=5.26.00-slc5-gcc4.3
localSetupPython
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ROOTSYS/lib/
#export LD_PRELOAD=$ROOTSYS/lib/libXrdPosixPreload.so 
source /a/data/xenia/share/atlasadmin/condor/Arcond/etc/arcond/arcond_setup.sh

Also, you need to make sure your files are on xrootd. To copy a dataset to xrootd, after doing the above setup, do:

source /data/users/common/xrootadd.sh <dataset_name>

If you just now copied a dataset, you'll need to wait a few hours for the database to update before continuing (otherwise, ArCond won't know that the data is available on the nodes by default). To check if your data is there, again do the above setup and do:

arc_nevis_ls /data/xrootd/

If the data is there, you should list the files with the above command.

 

Tutorial Package

Changed:
<
<
Check out the ArCondNevis package. This directory has the code to run the jobs (Analysis package), plus the submission scripts (in arc_d3pd). You will probably want to do this on your xenia user directory. Condor copies the output of your jobs to wherever you submit from. If the output files are large and you do this from a karthur or kolya directory (i.e. where most of our home directories are mounted), bad things could happen when the output is copied over. Also, when you cd to your xenia user directory, make sure you go to /a/data/xenia/users/username/ rather than just /data/xenia/users/username/ so that the jobs know that the submission directory is on /a/. Otherwise, they won't know where to copy the output. Then, setup svn and check out the package:
>
>
Check out the ArCondNevis package. This directory has the code to run the jobs (Analysis package), plus the submission scripts (in arc_d3pd). You will probably want to do this on your xenia user directory. Condor copies the output of your jobs to wherever you submit from. If the output files are large and you do this from a karthur or kolya directory (i.e. where most of our home directories are mounted), bad things could happen when the output is copied over. Also, when you cd to your xenia user directory, make sure you go to the full NSF path /a/data/xenia/users/username/ rather than just /data/users/username/ so that the jobs know where the submission directory is on NSF. Otherwise, they won't know where to copy the output. Then, check out the package:
 
Added:
>
>
export SVNAP=svn+ssh://svn.cern.ch/reps/apenson
 cd /a/data/xenia/users/urbaniec kinit urbaniec@CERN.CH svn co $SVNAP/ArCondNevis
Line: 44 to 71
 arcond
Changed:
<
<
You can then execute condor_q to view the queue and you should see 12 jobs under your username. If no one is using condor, your jobs should start running right away and take ~10 minutes to finish. You'll get an email sent to your nevis account when the jobs are finished. At any time, from the arc_d3pd directory, you can execute arc_check or condor_q to see which jobs are still running (arc_check will also tell you which jobs have succeeded). When all the jobs are finished (with status 0 in your emails), have a look at the output:
>
>
You can then execute condor_q to view the queue and you should see 12 jobs under your username. If no one is using condor, your jobs should start running right away and take ~10 minutes to finish. You'll get an email sent to your nevis account when the jobs are finished. At any time, from the arc_d3pd directory, you can execute arc_check or condor_q to see which jobs are still running (arc_check will also tell you which jobs have succeeded). When all the jobs are finished (with status 0 in your emails), have a look at the output:
 
cd ArCondNevis/arc_d3pd/Job
ls
Changed:
<
<
You should see 12 directories (one for each job) entitled something like run0_xenia01.nevis.columbia.edu/ Each of these directories contains the submission and execution scripts. They are also where the output is copied to once the job finishes. In this case, the output should be called Analysis.log (output text from AnalysisUtilities job) and Analysis.root
>
>
You should see 12 directories (one for each job) entitled something like run0_xenia01.nevis.columbia.edu/ Each of these directories contains the submission and execution scripts. They are also where the output is copied to once the job finishes. In this case, the output should be called Analysis.log (output text from AnalysisUtilities job) and Analysis.root.
  -- DustinUrbaniec - 27 Aug 2010

Revision 1 (2010-08-27) - DustinUrbaniec

Line: 1 to 1
Added:
>
>
META TOPICPARENT name="UsingNevisT3"

Using ArCond at Nevis

ArCond is a wrapper that uses condor to automatically parallelize your jobs. When you copy a dataset to xrootd, the data is distributed evenly over all three of the T3 worker nodes. In addition to parallelization, ArCond also runs jobs only on the nodes where the data is stored so there is no network load. The general ArCond instructions are available here. These instructions are specific to users at ANL, but there may still be some information there that you will find useful.

This tutorial will teach you how to submit c++ jobs which run over D3PDs (in the AnalysisUtilities framework) using ArCond. I'm fairly new to ArCond, so it is possible that I've made mistakes (or that things could be done in a more efficient way). If you discover any, please let me know or update this page yourself. It's also possible to run athena on ArCond, but I haven't tried to do that and I don't plan to. See the general ArCond instructions for that.

Setting Up For Running ArCond

Unfortunately, you have to set up athena to run ArCond (even if you're not going to use it).

Tutorial Package

Check out the ArCondNevis package. This directory has the code to run the jobs (Analysis package), plus the submission scripts (in arc_d3pd). You will probably want to do this on your xenia user directory. Condor copies the output of your jobs to wherever you submit from. If the output files are large and you do this from a karthur or kolya directory (i.e. where most of our home directories are mounted), bad things could happen when the output is copied over. Also, when you cd to your xenia user directory, make sure you go to /a/data/xenia/users/username/ rather than just /data/xenia/users/username/ so that the jobs know that the submission directory is on /a/. Otherwise, they won't know where to copy the output. Then, setup svn and check out the package:

cd /a/data/xenia/users/urbaniec
kinit urbaniec@CERN.CH
svn co $SVNAP/ArCondNevis

Optional: If you want to run the code interactively to see what it does, execute the following commands:

cd ArCondNevis/Analysis/AnalysisUtilities/goodRunsLists/cmt
make
cd ../..
make
cd ../AnalysisTemplate
make
run InputFiles.txt physics Analysis.root -isMC 1

This should take a few minutes to compile and run. The output is a file called Analysis.root which has some histograms, plus a slimmed TTree.

Submitting the Jobs Out-of-the-box

The package should be plug and play. There is no need to compile anything since compilation is in principle done locally on the worker nodes where the job is located. To run

ArCond, just do:

cd ArCondNevis/arc_d3pd
arcond

You can then execute condor_q to view the queue and you should see 12 jobs under your username. If no one is using condor, your jobs should start running right away and take ~10 minutes to finish. You'll get an email sent to your nevis account when the jobs are finished. At any time, from the arc_d3pd directory, you can execute arc_check or condor_q to see which jobs are still running (arc_check will also tell you which jobs have succeeded). When all the jobs are finished (with status 0 in your emails), have a look at the output:

cd ArCondNevis/arc_d3pd/Job
ls

You should see 12 directories (one for each job) entitled something like run0_xenia01.nevis.columbia.edu/ Each of these directories contains the submission and execution scripts. They are also where the output is copied to once the job finishes. In this case, the output should be called Analysis.log (output text from AnalysisUtilities job) and Analysis.root

-- DustinUrbaniec - 27 Aug 2010

 