
Using ArCond at Nevis

ArCond is a wrapper that uses Condor to automatically parallelize your jobs. When you copy a dataset to xrootd, the data is distributed evenly over the three T3 worker nodes. In addition to parallelizing, ArCond runs each job only on the node where its input data is stored, so the input never travels over the network. The general ArCond instructions are available here. Those instructions are specific to users at ANL, but you may still find some of the information there useful.

This tutorial will teach you how to use ArCond to submit C++ jobs that run over D3PDs in the AnalysisUtilities framework. I'm fairly new to ArCond, so it is possible that I've made mistakes (or that things could be done more efficiently). If you discover any, please let me know or update this page yourself. It is also possible to run athena through ArCond, but I haven't tried that and don't plan to; see the general ArCond instructions for that.

Setting Up For Running ArCond

Unfortunately, you have to set up athena in order to run ArCond (even if your job doesn't use athena itself).

Tutorial Package

Check out the ArCondNevis package. This directory contains the code the jobs run (the Analysis package) plus the submission scripts (in arc_d3pd). You will probably want to do this in your xenia user directory: Condor copies the output of your jobs back to wherever you submit from, and if the output files are large and you submit from a karthur or kolya directory (i.e. where most of our home directories are mounted), bad things can happen when the output is copied over. Also, when you cd to your xenia user directory, make sure you go to /a/data/xenia/users/username/ rather than just /data/xenia/users/username/, so that the jobs know the submission directory is on /a/; otherwise they won't know where to copy the output. Then set up SVN and check out the package:

cd /a/data/xenia/users/urbaniec
kinit urbaniec@CERN.CH
svn co $SVNAP/ArCondNevis
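
The /a/ requirement above is easy to get wrong. As a sanity check before submitting, you can test a directory path with a small shell function — a minimal sketch, assuming only the /a/ prefix convention described in the paragraph above:

```shell
# Sketch: warn if a submission directory is not under /a/, since jobs
# submitted from elsewhere will not know where to copy their output.
check_submit_dir() {
  case "$1" in
    /a/*) echo "OK: $1 is on /a/" ;;
    *)    echo "WARNING: $1 is not on /a/; cd to /a/data/xenia/users/<user> first" ;;
  esac
}

check_submit_dir "$(pwd)"
check_submit_dir /a/data/xenia/users/urbaniec   # prints the OK message
```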

Optional: If you want to run the code interactively to see what it does, execute the following commands:

cd ArCondNevis/Analysis/AnalysisUtilities/goodRunsLists/cmt
cd ../..
cd ../AnalysisTemplate
run InputFiles.txt physics Analysis.root -isMC 1

This should take a few minutes to compile and run. The output is a file called Analysis.root which has some histograms, plus a slimmed TTree.

Submitting the Jobs Out-of-the-box

The package should be plug and play. There is no need to compile anything beforehand, since compilation is done locally on the worker nodes where the jobs run. To run ArCond, just do:

cd ArCondNevis/arc_d3pd
arcond

You can then execute condor_q to view the queue; you should see 12 jobs under your username. If no one else is using Condor, your jobs should start running right away and take ~10 minutes to finish. You'll get an email at your Nevis account when the jobs are finished. At any time, from the arc_d3pd directory, you can execute arc_check or condor_q to see which jobs are still running (arc_check will also tell you which jobs have succeeded). When all the jobs are finished (with status 0 in your emails), have a look at the output:

cd ArCondNevis/arc_d3pd/Job

You should see 12 directories, one for each job. Each of these directories contains the submission and execution scripts, and is also where the output is copied once the job finishes. In this case, the output should be Analysis.log (the text output from the AnalysisUtilities job) and Analysis.root (the histograms and slimmed TTree).
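
While waiting for all 12 jobs, it can be handy to count how many output files have actually arrived rather than rereading the emails. A minimal sketch, assuming the Job/ directory layout and the Analysis.root file name described above (adjust the names if yours differ):

```shell
# Sketch: count how many job directories under Job/ have delivered
# their Analysis.root output file so far.
found=0
total=0
for dir in Job/*/; do
  [ -d "$dir" ] || continue          # glob did not match anything
  total=$((total+1))
  if [ -f "${dir}Analysis.root" ]; then
    found=$((found+1))
  fi
done
echo "${found}/${total} jobs have produced Analysis.root"
```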

-- DustinUrbaniec - 27 Aug 2010
