Difference: MonteCarloProduction (1 vs. 5)

Revision 5 - 2010-12-23 - AlexPenson

Line: 1 to 1
 

MonteCarloProduction

Changed:
<
<
This page shows how to produce Monte Carlo using MC10 settings with pathena.
>
>
There are various ways of getting MC produced:
 
Changed:
<
<
There are many twiki pages explaining how to get started and set up generators.
>
>
1) Official Production
 
Changed:
<
<
https://twiki.cern.ch/twiki/bin/view/AtlasProtected/AtlasProductionGroup#Private_Simulation
>
>
2) 'Organized' private production such as USATLAS RAC: http://www.usatlas.bnl.gov/twiki/bin/view/AtlasSoftware/ResourceRequest.html

3) Private production that you do yourself. Some commands and tips below.

You should make sure that physics group conveners approve the production. They will have a quota of official production and so may suggest the other options. You should find someone who has produced MC recently (possibly via the conveners) and check with them that you have the most up-to-date standard settings.

For each dataset to be produced you will need:
- A dataset number (RunNumber)
- A cross section
- A job option (probably using pythia or herwig) that Evgen can run on
- A few validation plots on a twiki (from after the Evgen step)

You will have to run the Evgen step (presumably locally) and produce a few plots per dataset to 'validate' the setup. Then put these plots on a twiki and link to it, for example here: https://twiki.cern.ch/twiki/bin/view/AtlasProtected/ExoticsValidationPlots.

Pythia and Herwig are part of Athena, so if you have a Pythia card you should be able to run Evgen directly on it. Otherwise you download, say, Alpgen, set it up and produce event (and possibly config) text files. Then, to run Evgen, you need a pythia/herwig jobOption to read the event files. Either search lxr for the job option of your favourite similar dataset: http://alxr.usatlas.bnl.gov/lxr/source/atlas/Generators/MC10JobOptions/share/MC10.107650.AlpgenJimmyZeeNp0_pt20.py or look here: http://alxr.usatlas.bnl.gov/lxr/source/atlas/Generators/
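As a sketch of that local Evgen run (assuming an Athena release with the production cache already set up; RUNNUMBER, the seed and the job option name below are hypothetical placeholders, and the command is only echoed for inspection rather than executed):

```shell
# Build the local Evgen command from the pieces described above.
# Argument order follows Evgen_trf.py: ecmenergy, runnumber, firstevent,
# maxevents, randomseed, jobconfig, outputevgenfile.
RUNNUMBER=105587                            # placeholder dataset number
SEED=1234                                   # placeholder random seed
JOBOPTION=./MC10.${RUNNUMBER}.MyProcess.py  # hypothetical job option file
CMD="Evgen_trf.py 7000 ${RUNNUMBER} 0 5000 ${SEED} ${JOBOPTION} local.evgen.root"
echo "${CMD}"                               # run this inside your Athena setup
```

The local.evgen.root output can then be fed to a plotting job to make the validation plots for the twiki.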

Some twiki pages explaining how to get started:

  https://twiki.cern.ch/twiki/bin/view/AtlasProtected/McProductionCommonParameters
Added:
>
>
esp. for non-Pythia: https://twiki.cern.ch/twiki/bin/view/AtlasProtected/PreparingLesHouchesEven
 https://twiki.cern.ch/twiki/bin/view/AtlasProtected/ExoticsMCRequestsHowTo#Setting_up_a_request_for_event_g
Changed:
<
<
esp. for non-Pythia: https://twiki.cern.ch/twiki/bin/view/AtlasProtected/PreparingLesHouchesEven
>
>
https://twiki.cern.ch/twiki/bin/view/AtlasProtected/AtlasProductionGroup#Private_Simulation

Private Production with Pathena

 
Changed:
<
<
If you haven't already emailed people to make sure that they approve the production and that you have the most up-to-date standard settings, this is one of the first things to do.
>
>
This shows how to produce Monte Carlo using MC10 settings with pathena.
 
Changed:
<
<
I followed the dashboard of a previous request http://www-f9.ijs.si/atlpy/atlprod/prodrequest/5832/
>
>
https://twiki.cern.ch/twiki/bin/viewauth/Atlas/PandaAthena#example_8_How_to_run_production

I followed the dashboard of a previous request http://www-f9.ijs.si/atlpy/atlprod/prodrequest/5832/

 

Generation and Geant

Line: 37 to 50
 csc_atlasG4_trf.py [options] [maxevents] [physicslist] [jobconfig] [dbrelease] [conditionstag] [dbcontent] [ignoreconfigerror] [amitag]
Changed:
<
<
The command below generates 5000 events and runs Geant on the first 25. I understand that it's possible to use --nEventsPerJob and %SKIPEVENTS to run multiple subjobs on the same evgen file. https://twiki.cern.ch/twiki/bin/viewauth/Atlas/PandaAthena#example_8_How_to_run_production
>
>
The command below generates 5000 events and runs Geant on the first 25. I understand that it's possible to use --nEventsPerJob and %SKIPEVENTS to run multiple subjobs on the same evgen file.
 
Changed:
<
<
Evgen_trf.py complained when I tried to generate fewer (or more) than 5000 events, but it's so much faster than Geant that it doesn't matter. Geant takes 4-8 mins per event.
>
>
Evgen_trf.py complained when I tried to generate fewer (or more) than 5000 events, but it's so much faster than Geant that it doesn't matter. Geant takes 4-8 mins per event. It uses ~2GB RAM per job and is hence not allowed on xenia.
  I made the random seed equal to the job number (0 in the command below)
Line: 126 to 139
  https://twiki.cern.ch/twiki/bin/viewauth/Atlas/RecoTrf#Special_syntax_for_preExec_postE was helpful as well as:
Changed:
<
<
hn-atlas-dist-analysis-help@cern.ch and hn-atlas-job-transformations@cern.ch
>
>
hn-atlas-dist-analysis-help@cern.ch and hn-atlas-job-transformations@cern.ch
 
META TOPICMOVED by="DustinUrbaniec" date="1218228408" from="ATLAS.AtlasProduction" to="ATLAS.MonteCarloProduction"

Revision 4 - 2010-12-17 - AlexPenson

Line: 1 to 1
Changed:
<
<
This page shows how to produce Monte Carlo (using MC10 settings) in a highly parallel way using either condor or pathena.
>
>

MonteCarloProduction

 
Changed:
<
<
Pythia event generation and Geant simulation are done in athena release 15.6.12.9. Digitization and reconstruction are in release 16.0.2.3.
>
>
This page shows how to produce Monte Carlo using MC10 settings with pathena.
 
Changed:
<
<
All steps can take 10-15 mins per event.
>
>
There are many twiki pages explaining how to get started and set up generators.
 
Changed:
<
<

Generation

>
>
https://twiki.cern.ch/twiki/bin/view/AtlasProtected/AtlasProductionGroup#Private_Simulation
 
Changed:
<
<
After your release has been installed and set up, you will need to create a place where you can do your production. For example, mine is ~/workarea/14.2.10/production. Once created, cd to that directory.
>
>
https://twiki.cern.ch/twiki/bin/view/AtlasProtected/McProductionCommonParameters

https://twiki.cern.ch/twiki/bin/view/AtlasProtected/ExoticsMCRequestsHowTo#Setting_up_a_request_for_event_g

esp. for non-Pythia: https://twiki.cern.ch/twiki/bin/view/AtlasProtected/PreparingLesHouchesEven

If you haven't already emailed people to make sure that they approve the production and that you have the most up-to-date standard settings, this is one of the first things to do.

I followed the dashboard of a previous request http://www-f9.ijs.si/atlpy/atlprod/prodrequest/5832/

Generation and Geant

Pythia event generation and Geant simulation were done in athena release 15.6.12.9.

source setup.sh -tag=15.6.12.9,AtlasProduction,32,opt,setup

The transform scripts are:

Evgen_trf.py [options] <ecmenergy> <runnumber> <firstevent> [maxevents] <randomseed> <jobconfig> <outputevgenfile> [histogramfile] [ntuplefile] [inputgeneratorfile] [evgenjobopts]

and:

csc_atlasG4_trf.py [options] <inputevgenfile> <outputhitsfile> [maxevents] <skipevents> <randomseed> <geometryversion> [physicslist] [jobconfig] [dbrelease] [conditionstag] [dbcontent] [ignoreconfigerror] [amitag]

The command below generates 5000 events and runs Geant on the first 25. I understand that it's possible to use --nEventsPerJob and %SKIPEVENTS to run multiple subjobs on the same evgen file. https://twiki.cern.ch/twiki/bin/viewauth/Atlas/PandaAthena#example_8_How_to_run_production

Evgen_trf.py complained when I tried to generate fewer (or more) than 5000 events, but it's so much faster than Geant that it doesn't matter. Geant takes 4-8 mins per event.

I made the random seed equal to the job number (0 in the command below)

pathena --trf \
'Evgen_trf.py \
7000 \
105587 \
0 \
5000 \
0 \
./MC10.105587.PythiaWprime800_WZ_qqee.py \
TMP.evgen.root; \
\
csc_atlasG4_trf.py \
TMP.evgen.root \
%OUT.hits.root \
25 \
0 \
0 \
ATLAS-GEO-16-00-00 \
QGSP_BERT \
VertexFromCondDB.py,CalHits.py,ParticleID.py \
%DB:ddo.000001.Atlas.Ideal.DBRelease.v120201:DBRelease-12.2.1.tar.gz \
OFLCOND-SDR-BS7T-02 \
NONE \
False \
NONE' \
--long \
--excludedSite \
RALPP,GRIF-LAL,SLAC \
--dbRelease \
ddo.000001.Atlas.Ideal.DBRelease.v120201:DBRelease-12.2.1.tar.gz \
--outDS \
user.apenson.MC10.105587.PythiaWprime800_WZ_qqee.evgen.G4.25.0
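The seed convention above (random seed equal to the job number) can be sketched as a small driver loop that builds one pathena command per subjob. This is a hypothetical sketch: the commands are echoed rather than submitted, and the remaining transform arguments are elided with '...'.

```shell
# One subjob per job number; random seed == job number, as described above.
# Commands are echoed, not executed, so the loop is safe to dry-run.
for JOB in 0 1 2; do
  SEED=${JOB}
  echo "pathena --trf 'Evgen_trf.py 7000 105587 0 5000 ${SEED} ...' --outDS user.apenson.MC10.105587.PythiaWprime800_WZ_qqee.evgen.G4.25.${JOB}"
done
```

Replacing the echo with the real submission (and filling in the elided arguments) gives one output dataset per job number.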

Digitization and Reconstruction

Digitization and reconstruction were done in release 16.0.2.3.

source setup.sh -tag=16.0.2.3,AtlasProduction

This command takes the hits file generated by the command above and runs two more transforms on it:

pathena --trf \
'Digi_trf.py \
inputHitsFile=%IN \
outputRDOFile=TMP.RDO.root \
maxEvents=-1 \
skipEvents=0 \
DBRelease=%DB:ddo.000001.Atlas.Ideal.DBRelease.v120902:DBRelease-12.9.2.tar.gz \
conditionsTag=OFLCOND-SDR-BS7T-04-02 \
geometryVersion=ATLAS-GEO-16-00-00 \
samplingFractionDbTag=QGSP_BERT \
digiSeedOffset1=1 \
digiSeedOffset2=0; \
\
Reco_trf.py \
inputRDOFile=TMP.RDO.root \
outputESDFile=%OUT.ESD.pool.root \
outputAODFile=%OUT.AOD.pool.root \
DBRelease=%DB:ddo.000001.Atlas.Ideal.DBRelease.v120902:DBRelease-12.9.2.tar.gz \
autoConfiguration=everything \
conditionsTag=OFLCOND-SDR-BS7T-04-02 \
geometryVersion=ATLAS-GEO-16-00-00 \
preExec="rec.Commissioning.set_Value_and_Lock(True);jobproperties.Beam.energy.set_Value_and_Lock(3500*Units.GeV);muonRecFlags.writeSDOs=True" \
preInclude=RecJobTransforms/SetJetConstants-02-000.py \
postInclude_r2e=RecJobTransforms/CalibrationHitsInESDConfig.py \
triggerConfig=MCRECO:DB:TRIGGERDBMC:248,108,194' \
--long \
--dbRelease \
ddo.000001.Atlas.Ideal.DBRelease.v120902:DBRelease-12.9.2.tar.gz \
--inDS \
user.apenson.MC10.105587.PythiaWprime800_WZ_qqee.evgen.G4.25.0/ \
--outDS \
user.apenson.MC10.105587.PythiaWprime800_WZ_qqee.0
 
Changed:
<
<
The event generation job transform is csc_evgen_trf.py. The main difference (from what I can tell) between a job options file and a job transform file is that the transform accepts arguments (for example for the number of events, so that you don't have to physically change the contents of the file as with job options). You will also need a job options file which specifies the pythia cards for your event. An example job options file can be copied from ~durbanie/workarea/14.2.10/production/WprWZenuqq.jobOptions.py. This job options file contains W' events decaying to a W and a Z. The W then decays to an electron and its neutrino, and the Z hadronically.
>
>
The two digiSeedOffsets must be changed for each subjob to change the pattern of noise in the calorimeter.
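A sketch of that per-subjob seeding (hypothetical loop; the other Digi_trf.py arguments are elided with '...' and the command is echoed rather than run). Any scheme works as long as every subjob gets a distinct offset pair; deriving both offsets from the subjob number is one simple choice:

```shell
# Distinct calorimeter-noise seeds per digitization subjob: derive the two
# offsets from the subjob number.  Subjob 0 reproduces the offsets used in
# the command above (digiSeedOffset1=1, digiSeedOffset2=0).
for SUBJOB in 0 1 2; do
  OFF1=$((SUBJOB + 1))
  OFF2=${SUBJOB}
  echo "Digi_trf.py ... digiSeedOffset1=${OFF1} digiSeedOffset2=${OFF2}"
done
```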
 
Changed:
<
<
The part where you specify your decay begins after the line Pythia.PythiaCommand = [. Everything after this switches on the W' production, specifies its mass, forces a decay to WZ, and then specifies what the W and Z decay to (see comments in the file). For example, to make the W decay hadronically, first turn off the leptonic decay by switching the line "pydat3 mdme 206 1 1" to "pydat3 mdme 206 1 0". Then turn on the hadronic modes (for example, changing "pydat3 mdme 190 1 0" to "pydat3 mdme 190 1 1" will turn on the W -> d-bar u mode). In short, the last number in each line specifies whether the process is on (1) or off (0).
>
>
https://twiki.cern.ch/twiki/bin/viewauth/Atlas/RecoTrf#Special_syntax_for_preExec_postE was helpful as well as:
 
Changed:
<
<
Once you have the job options file, simply enter the following line, specifying your own parameters:

csc_evgen_trf.py -t [runNumber] [firstEvent] [maxEvents] [randomSeed] [jobConfig] [outputEvgenFile] [histogramFile] [ntupleFile] [inputGeneratorFile]

As an example:

csc_evgen_trf.py -t 8999 0 100 54298752 WprWZenuqq.jobOptions.py WprWZenuqq.evgen.pool.root NONE NONE

This generates 100 W'->WZ->enuqq events and stores them in the file WprWZenuqq.evgen.pool.root. This should go rather quickly (around 5 min for 100 events).

Simulation

The next step is simulation/digitization using Atlfast II. The transform file used here is csc_simul_trf.py. This takes an input evgen file and first simulates what each particle does to the detector, and outputs a file containing a record of where each particle traversed the detector as well as how much energy was deposited in that detector component. The user can save this record as a root file (called a HITS file) if he or she chooses.

This transform also digitizes events, i.e. simulates the digital output of the detector in response to each event. The result is called an RDO file which stands for Raw Data Object. This is in principle similar to real data that one would expect to get out of the detector. To run simulation, enter this line at the command line:

csc_simul_trf.py [options] [inputevgenfile] [outputhitsfile] [outputrdofile] [maxevents] [skipevents][randomseed] [geometryversion] [physicslist] [jobconfig] [dbrelease]

For example:

csc_simul_trf.py WprWZenuqq.evgen.pool.root WprWZenuqq.HITS.root WprWZenuqq.RDO.root 100 0 12 "ATLAS-GEO-01-01-00" 1 2 QGSP_EMV jobConfig.FastIDKiller.py

Note that this can take a while to complete (~2 hours for 100 events).

Reconstruction

The last step is reconstruction (converting the raw data into objects like jets, electrons, etc.). In the Atlas Event Data Model, there are two data formats: the Event Summary Data (ESD) and the Analysis Object Data (AOD). The ESD contains more information than the AOD (for example, the ESD provides full access to the calorimeter cells whereas the AOD does not). The relevant transform file is csc_reco_trf.py. This file will create both an ESD and an AOD. It also creates an ntuple which is readable in root, but you'll likely just want to create your own ntuple anyway using AnalysisSkeleton, for example. Also, see the physics workbook for more info on how to do analyses with an ESD or AOD.

Before you begin with this step, note that there is a bug which causes crashes while reconstructing in release 14.2.10. This should not be true for 14.2.11 or 14.1.0, but those are not yet installed at Nevis. The details are here if you're interested. The fix (I believe) is simply to check out and compile the following packages:

PhysicsAnalysis/MuonID/MuonUtils-00-06-01

PhysicsAnalysis/MuonID/MuonAlgs-00-10-05

To do this, set up cvs as specified in the RunningAnalysis page. Then do:

cd path_to_your_workarea/14.2.10

cmt co -r MuonUtils-00-06-01 PhysicsAnalysis/MuonID/MuonUtils

cd PhysicsAnalysis/MuonID/MuonUtils/cmt

cmt config

source setup.sh

cmt bro make

and similarly for the other package. Once you've done this, the reconstruction should work without crashing. You may also have to source the setup.sh script in each package's cmt directory each time you open a new shell for the fix to work.

The command to run reconstruction is:

csc_reco_trf.py [options] [inputrdofile] [outputesdfile] [outputaodfile] [ntuplefile] [maxevents] [skipevents] [geometryversion] [triggerconfig] [jobconfig] [dbrelease] [conditionstag]

For example:

csc_reco_trf.py WprWZenuqq.RDO.root WprWZenuqq.ESD.root WprWZenuqq.AOD.root NONE 100 0 "ATLAS-GEO-01-01-00" NONE FastCaloSimAddCellsRecConfig.py,NoAtlfastAODConfig.py

This creates a 100 event ESD and AOD that you can then run an analysis on.

-- DustinUrbaniec - 08 Aug 2008

>
>
hn-atlas-dist-analysis-help@cern.ch and hn-atlas-job-transformations@cern.ch
 
META TOPICMOVED by="DustinUrbaniec" date="1218228408" from="ATLAS.AtlasProduction" to="ATLAS.MonteCarloProduction"

Revision 3 - 2010-11-22 - AlexPenson

Line: 1 to 1
Changed:
<
<

Introduction/Prerequisites

>
>
This page shows how to produce Monte Carlo (using MC10 settings) in a highly parallel way using either condor or pathena.

Pythia event generation and Geant simulation are done in athena release 15.6.12.9. Digitization and reconstruction are in release 16.0.2.3.

All steps can take 10-15 mins per event.

 
Deleted:
<
<
This page shows how to generate, simulate (using Atlfast II), and reconstruct events. It assumes you have installed and set up release 14. If not, see Gustaaf's tutorial for setting up a release at Nevis: RunningAnalysis or for setting up a release at BNL: RunningAnalysisAtBNL. You can also set up a release on lxplus at CERN by following CERN's computing workbook. It may be a good idea to go through the workbook anyway; however, if you intend to do a lot of production you will not be able to do it on lxplus because of disk space issues. Also, the workbook production runs through the full chain, which takes an order of magnitude longer to get through than using Atlfast II.
 

Generation

After your release has been installed and set up, you will need to create a place where you can do your production. For example, mine is ~/workarea/14.2.10/production. Once created, cd to that directory.

Line: 18 to 21
  csc_evgen_trf.py -t 8999 0 100 54298752 WprWZenuqq.jobOptions.py WprWZenuqq.evgen.pool.root NONE NONE
Changed:
<
<
This generates 100 W'->WZ->enuqq events and stores them in the file WprWZenuqq.evgen.pool.root. This should go rather quickly (around 5 min for 100 events).
>
>
This generates 100 W'->WZ->enuqq events and stores them in the file WprWZenuqq.evgen.pool.root. This should go rather quickly (around 5 min for 100 events).
 

Simulation

The next step is simulation/digitization using Atlfast II. The transform file used here is csc_simul_trf.py. This takes an input evgen file and first simulates what each particle does to the detector, and outputs a file containing a record of where each particle traversed the detector as well as how much energy was deposited in that detector component. The user can save this record as a root file (called a HITS file) if he or she chooses.

Revision 2 - 2008-08-18 - DustinUrbaniec

Line: 1 to 1
 

Introduction/Prerequisites

This page shows how to generate, simulate (using Atlfast II), and reconstruct events. It assumes you have installed and set up release 14. If not, see Gustaaf's tutorial for setting up a release at Nevis: RunningAnalysis or for setting up a release at BNL: RunningAnalysisAtBNL. You can also set up a release on lxplus at CERN by following CERN's computing workbook. It may be a good idea to go through the workbook anyway; however, if you intend to do a lot of production you will not be able to do it on lxplus because of disk space issues. Also, the workbook production runs through the full chain, which takes an order of magnitude longer to get through than using Atlfast II.

Line: 34 to 34
Note that this can take a while to complete (~2 hours for 100 events).

Reconstruction

Changed:
<
<
To be completed soon.
>
>
The last step is reconstruction (converting the raw data into objects like jets, electrons, etc.). In the Atlas Event Data Model, there are two data formats: the Event Summary Data (ESD) and the Analysis Object Data (AOD). The ESD contains more information than the AOD (for example, the ESD provides full access to the calorimeter cells whereas the AOD does not). The relevant transform file is csc_reco_trf.py. This file will create both an ESD and an AOD. It also creates an ntuple which is readable in root, but you'll likely just want to create your own ntuple anyway using AnalysisSkeleton, for example. Also, see the physics workbook for more info on how to do analyses with an ESD or AOD.

Before you begin with this step, note that there is a bug which causes crashes while reconstructing in release 14.2.10. This should not be true for 14.2.11 or 14.1.0, but those are not yet installed at Nevis. The details are here if you're interested. The fix (I believe) is simply to check out and compile the following packages:

PhysicsAnalysis/MuonID/MuonUtils-00-06-01

PhysicsAnalysis/MuonID/MuonAlgs-00-10-05

To do this, set up cvs as specified in the RunningAnalysis page. Then do:

cd path_to_your_workarea/14.2.10

cmt co -r MuonUtils-00-06-01 PhysicsAnalysis/MuonID/MuonUtils

cd PhysicsAnalysis/MuonID/MuonUtils/cmt

cmt config

source setup.sh

cmt bro make

and similarly for the other package. Once you've done this, the reconstruction should work without crashing. You may also have to source the setup.sh script in each package's cmt directory each time you open a new shell for the fix to work.
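Spelled out, the "similarly" step for the second package looks like this (same recipe as MuonUtils above, using the MuonAlgs tag listed earlier; this assumes cvs and cmt are set up as described on this page):

```shell
# Check out, configure and build the second package, mirroring the
# MuonUtils steps above.
cd path_to_your_workarea/14.2.10
cmt co -r MuonAlgs-00-10-05 PhysicsAnalysis/MuonID/MuonAlgs
cd PhysicsAnalysis/MuonID/MuonAlgs/cmt
cmt config
source setup.sh
cmt bro make
```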

The command to run reconstruction is:

csc_reco_trf.py [options] [inputrdofile] [outputesdfile] [outputaodfile] [ntuplefile] [maxevents] [skipevents] [geometryversion] [triggerconfig] [jobconfig] [dbrelease] [conditionstag]

For example:

csc_reco_trf.py WprWZenuqq.RDO.root WprWZenuqq.ESD.root WprWZenuqq.AOD.root NONE 100 0 "ATLAS-GEO-01-01-00" NONE FastCaloSimAddCellsRecConfig.py,NoAtlfastAODConfig.py

This creates a 100 event ESD and AOD that you can then run an analysis on.

  -- DustinUrbaniec - 08 Aug 2008

Revision 1 - 2008-08-08 - DustinUrbaniec

Line: 1 to 1
Added:
>
>

Introduction/Prerequisites

This page shows how to generate, simulate (using Atlfast II), and reconstruct events. It assumes you have installed and set up release 14. If not, see Gustaaf's tutorial for setting up a release at Nevis: RunningAnalysis or for setting up a release at BNL: RunningAnalysisAtBNL. You can also set up a release on lxplus at CERN by following CERN's computing workbook. It may be a good idea to go through the workbook anyway; however, if you intend to do a lot of production you will not be able to do it on lxplus because of disk space issues. Also, the workbook production runs through the full chain, which takes an order of magnitude longer to get through than using Atlfast II.

Generation

After your release has been installed and set up, you will need to create a place where you can do your production. For example, mine is ~/workarea/14.2.10/production. Once created, cd to that directory.

The event generation job transform is csc_evgen_trf.py. The main difference (from what I can tell) between a job options file and a job transform file is that the transform accepts arguments (for example for the number of events, so that you don't have to physically change the contents of the file as with job options). You will also need a job options file which specifies the pythia cards for your event. An example job options file can be copied from ~durbanie/workarea/14.2.10/production/WprWZenuqq.jobOptions.py. This job options file contains W' events decaying to a W and a Z. The W then decays to an electron and its neutrino, and the Z hadronically.

The part where you specify your decay begins after the line Pythia.PythiaCommand = [. Everything after this switches on the W' production, specifies its mass, forces a decay to WZ, and then specifies what the W and Z decay to (see comments in the file). For example, to make the W decay hadronically, first turn off the leptonic decay by switching the line "pydat3 mdme 206 1 1" to "pydat3 mdme 206 1 0". Then turn on the hadronic modes (for example, changing "pydat3 mdme 190 1 0" to "pydat3 mdme 190 1 1" will turn on the W -> d-bar u mode). In short, the last number in each line specifies whether the process is on (1) or off (0).

Once you have the job options file, simply enter the following line, specifying your own parameters:

csc_evgen_trf.py -t [runNumber] [firstEvent] [maxEvents] [randomSeed] [jobConfig] [outputEvgenFile] [histogramFile] [ntupleFile] [inputGeneratorFile]

As an example:

csc_evgen_trf.py -t 8999 0 100 54298752 WprWZenuqq.jobOptions.py WprWZenuqq.evgen.pool.root NONE NONE

This generates 100 W'->WZ->enuqq events and stores them in the file WprWZenuqq.evgen.pool.root. This should go rather quickly (around 5 min for 100 events).

Simulation

The next step is simulation/digitization using Atlfast II. The transform file used here is csc_simul_trf.py. This takes an input evgen file and first simulates what each particle does to the detector, and outputs a file containing a record of where each particle traversed the detector as well as how much energy was deposited in that detector component. The user can save this record as a root file (called a HITS file) if he or she chooses.

This transform also digitizes events, i.e. simulates the digital output of the detector in response to each event. The result is called an RDO file which stands for Raw Data Object. This is in principle similar to real data that one would expect to get out of the detector. To run simulation, enter this line at the command line:

csc_simul_trf.py [options] [inputevgenfile] [outputhitsfile] [outputrdofile] [maxevents] [skipevents][randomseed] [geometryversion] [physicslist] [jobconfig] [dbrelease]

For example:

csc_simul_trf.py WprWZenuqq.evgen.pool.root WprWZenuqq.HITS.root WprWZenuqq.RDO.root 100 0 12 "ATLAS-GEO-01-01-00" 1 2 QGSP_EMV jobConfig.FastIDKiller.py

Note that this can take a while to complete (~2 hours for 100 events).

Reconstruction

To be completed soon.

-- DustinUrbaniec - 08 Aug 2008

META TOPICMOVED by="DustinUrbaniec" date="1218228408" from="ATLAS.AtlasProduction" to="ATLAS.MonteCarloProduction"
 