Hence the functionality of the current "special tasks", e.g. the
mdbquefpack and TCL jobs, is implemented within the user application
-> no need for these special tasks
-> every user job has access to this functionality
histogram collection
ntuple collection
printout collection
are to be implemented within the user application with the aid of
new h1 library code
histogramming:
The harness will supply data on the input stream which causes the
histogram calls in the "event" loop either to fill local histograms
and send them on the output stream at ENDRUN ( or ENDJOB ), or to
send on the output stream the data from the histogram filling calls
at the end of each event's processing.  The harness will take care
of transferring these data to a copy of the user process, so that
this copy either sums the histograms or performs the actual
histogram filling; it is informed of the end-run and end-job
conditions, allowing histogram files to be written on a run and
job basis.
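
A minimal sketch of the two histogramming modes, written in Python with
hypothetical names ( the "send" callback stands for the harness output
stream; EventWorker and HistogramCollector are illustrative only, not
the actual H1 library interface ):

from collections import defaultdict

class EventWorker:
    """Event-processing incarnation of the user job (illustrative)."""
    def __init__(self, send, ship_per_event=False):
        self.send = send                      # harness output-stream callback (assumed)
        self.ship_per_event = ship_per_event  # mode chosen from the harness input data
        self.local = defaultdict(float)       # local bins: (hist id, bin) -> summed weight

    def fill(self, hist_id, bin_index, weight=1.0):
        if self.ship_per_event:
            # mode 2: forward the raw filling call on the output stream
            self.send(("FILL", hist_id, bin_index, weight))
        else:
            # mode 1: fill locally, ship the summed contents at ENDRUN / ENDJOB
            self.local[(hist_id, bin_index)] += weight

    def end_run(self, run):
        if not self.ship_per_event:
            self.send(("HISTOS", run, dict(self.local)))
            self.local.clear()
        self.send(("ENDRUN", run))

class HistogramCollector:
    """Copy of the user process that sums histograms or performs the filling."""
    def __init__(self):
        self.totals = defaultdict(float)

    def receive(self, record):
        kind = record[0]
        if kind == "FILL":                  # mode 2: do the actual filling here
            _, hist_id, bin_index, weight = record
            self.totals[(hist_id, bin_index)] += weight
        elif kind == "HISTOS":              # mode 1: sum pre-filled histograms
            for key, weight in record[2].items():
                self.totals[key] += weight
        elif kind in ("ENDRUN", "ENDJOB"):  # write histogram files per run / per job
            print("write histogram file at", kind, "with", len(self.totals), "bins")

In either mode the collector ends up with the same totals; the choice
only decides whether the summing is done in the event-processing copy
or in the collecting copy.
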
ntuples:
The harness will send on the output stream the ntuple entry calls
made during event processing to an incarnation of the user process
so that an ntuple file may be written.
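
A sketch of the ntuple path under the same assumptions ( the "send"
callback and the JSON-lines file format are illustrative only, not the
real ntuple format ):

import json

def ntuple_entry(send, ntuple_id, values):
    # called from the user's event loop; forwards the entry on the output stream
    send(("NTUPLE", ntuple_id, values))

def ntuple_writer(records, path="job.ntuple"):
    # incarnation of the user process that writes the ntuple file
    with open(path, "w") as out:
        for record in records:
            if record[0] == "NTUPLE":
                _, ntuple_id, values = record
                out.write(json.dumps({"id": ntuple_id, "values": values}) + "\n")
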
printout:
The new framework will collect all printout made at each stage of
processing, i.e. BEGJOB, REVENT event processing etc., label it with
run/event number and a timestamp, and send these data on the output
stream to an incarnation of the user process which will write a
single printout file.
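
A sketch of printout collection under the same assumptions ( the stage
names and the label layout are illustrative ):

import time

def emit_printout(send, stage, run, event, text):
    # label the printout with its processing stage, run/event number and a timestamp
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    send(("PRINT", stage, run, event, stamp, text))

def printout_writer(records, path="job.printout"):
    # incarnation of the user process that merges everything into one printout file
    with open(path, "w") as out:
        for record in records:
            if record[0] == "PRINT":
                _, stage, run, event, stamp, text = record
                out.write("[%s] %s run %s event %s : %s\n"
                          % (stamp, stage, run, event, text))
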
database input:
Database input information ( e.g. the TCL job ) may be written by the
user process during event processing, and the harness will feed all
such records to an incarnation of the user process which will then
have the flag TCLANA set, instructing it to analyse these data.
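
A sketch of the database-input path under the same assumptions ( the
record layout and the analyse_record routine are placeholders for the
user's actual analysis code triggered by TCLANA ):

def write_db_record(send, record):
    # user process, during event processing, emits a database-input record
    send(("DBREC", record))

def db_analyser(records, tclana=True):
    # incarnation of the user process started with the TCLANA flag set;
    # the flag instructs it to analyse the collected database-input records
    if not tclana:
        return
    for record in records:
        if record[0] == "DBREC":
            analyse_record(record[1])

def analyse_record(record):
    # stand-in for the user's analysis of a database-input record
    print("analysing database record:", record)
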