easy with autoinstaller or from sources, as you prefer :



  • Compilation from sources : get the sources from the git repository :
    git clone git://
    Prerequisites are a C/C++ compiler, Python, numpy, PyQt4.
    For Windows, the Visual C++ version must match the one used to build Python.
  • Windows : the script w.bat compiles and installs everything in c:\PPM.
You can edit it and choose a different target.
  • Linux : python install --install-lib yourdir --install-scripts yourdir
builds and installs everything in your preferred yourdir

Some tests to check that everything is OK

  • The main scripts can be found in the installation directory. For Windows they
    are also available in the Start Menu. The scripts are :
  • ppmguiScalar : GUI for the scalar calculation
  • ppmguiTensorial : GUI for the magnetic calculations
  • ppmxml : command-line script for the scalar calculation
  • ppmxmlTens : command-line script for the magnetic calculation
  • The GUIs are tools that help you create the description of the model.
    The model description itself is stored in an XML file. The GUI launches
    the command-line script to process the xml files and produce output
  • The installation directory contains an examples directory with several example
    subdirectories. Depending on the directory you chose, you might need to copy the
    examples to a place where you have write access, because temporary files are created.
    Suggestion for a quick check : with ppmguiScalar open ex_scalar_2_fit/oro.xml
    and run it. Then open oro_fit.xml and run it. The second xml model has some changed variables
    which will be fitted to the result of the first model. When you run this script, the free variables
    are fitted to the reflectivity calculated in the previous run. It takes a little while because
    the example implements an annealed version of amoeba, which relaunches the optimisation each time
    a local minimum is found.
    When the optimisation is finished you can reload the variables using the run menu.
    You will be asked to choose one of the files with optimised variables that the optimisation has created.
    You can find them either in the example directory ( file lastvariables ) or in the PARTIAL subdirectory.
    In the PARTIAL subdirectory you have RESULTs_AT_XXX files, which are the variables at the local minimum
    found at iteration XXX. Also in the PARTIAL subdir, the variablesN files, where N is a number from 0 to 29,
    are the variables found in the most recent calculations during optimisations. The postfix N is equal to
    Nit modulo 30 ( Nit = number of the iteration ).
    These files allow you to monitor what is going on during difficult fits where local minima are hard to reach.
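    The rotating-buffer naming of the variablesN files can be sketched in a couple of
    lines of Python ( the helper name is illustrative, not part of PPM ) :

    ```python
    def partial_variables_name(nit):
        """Name of the PARTIAL/variablesN file written at iteration nit.

        Only the 30 most recent snapshots are kept: the postfix N is
        nit modulo 30, so iteration 31 overwrites variables1, and so on.
        """
        return "variables%d" % (nit % 30)

    print(partial_variables_name(31))  # variables1
    ```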



go to the EXAMPLES/ex1 directory :
../../WRAPPERS/ppmxml oro.xml
This oro.xml file defines a simple layer on a substrate and calculates its
reflectivity along a scan specified inside the <scan> xml tags.
The result is written to fit_res1 ( fit_res2 .. fit_resn when you have more scans )
The instruction
......ppmxml oro_fit.xml
runs a minimisation which takes fit_res1 as experimental data
and looks for the optimal thickness.
A variable to optimise is indicated by providing three numbers ( initial value, minimum, maximum ) :
<f>"thickness" 50.0 40.0 100.0 </f>
The optimisation runs an amoeba minimisation, with several restarts.
( other choices of minimisation routines are detailed later in the manual )
During the minimisation partial results are visible in the PARTIAL directory :
lastresults1 ... lastresultsn
contain the calculated and experimental scans, and the lastvariables file
contains the values of the variables. You don't need to inspect the
variables files manually***. The best way to use them is to transfer
the contained values into the input xml file automatically :
..ppmxml oro_fit.xml PARTIAL/lastvariables
This command creates NEW_oro_fit.xml.
***(However, if you have associated reference names ( see the chapter on references )
with the variables, such files will be easily readable)
The files RESULTAT_AT_XXX contain the variables found each time amoeba stops.
For long fits, instead of waiting for the fit to stop, you can monitor
the current situation and the local minima found so far, and stop the program
if satisfying solutions have been found.
To test the magnetic case, go to the ex2 directory :
..ppmxmlTens ex2.xml
.....ppmxmlTens ex2_fit.xml

  • USING PPM as subroutines
Get     .... into your PYTHONPATH
Now you can use PPM as a subroutine.
>>>import ppmxml
Now the string s contains the script which corresponds to the xml file.
It can be executed with exec(s)
In the script generated by oro.xml, the variable fit2 is an object of the PPM_ComparisonTheoryExperiment class
and you can call its error() method.
After you have called it, the variable
fit2.calculatedscan[0]    contains the calculated scan ; if you have several scans you get the others by increasing the index.
The corresponding experimental scan is
fit2.scanlist[0]   which consists of 4 columns : wavelengths, angles, experimental data, weights
::: Magnetic case :::
Same as above, in directory ex2 of the examples
>>>import ppmxmlTens

  • explanation of the different tags and options
  • OPTICAL CONSTANTS ----------------------------------------------------------

    Your system is always described within the <system> </system> couple of tags.
    Therefore your xml file starts with <system> and ends with </system>

    The first things to set are the tables of optical data. You need to give a table for f0
    and another for f1f2

    <s> f0_WaasKirf.dat </s>
    <s> f1f2_Windt.dat </s>

    Other tables are available in the DABAX directory (DATA/)

    Several scans can be fitted simultaneously. By default they refer to the same stack ( multilayer ).
    They can be associated with different stacks, in case you have defined several stacks in the xml file.
    ( the use of several stacks is useful for fitting a batch of multilayers which share given properties :
    for example the same interface, but different thicknesses )

    The scans are given inside the scanlist tag, as in the example below.

    <scan nstack="0">
    <s key="filename" > fit_res1 </s>
    <f>"wavelenghts_col" 2 </f>
    <f>"angles_col" 3 </f>
    <f>"refle_col" 1 </f>
    <f>"weight_col" 1.0 </f>
    <f>"angle_factor" 1.0 </f>
    <f>"norm" 1.0 0.9 1.2 </f>
    <f>"noise" 0.0 </f>
    </scan>

    <scan nstack="1">
    <s key="filename" > Bfit_res1 </s>
    <f>"wavelenghts_col" 2 </f>
    <f>"angles_col" 3 </f>
    <f>"refle_col" 1 </f>
    <f>"weight_col" 1.0 </f>
    <f>"angle_factor" 1.0 </f>
    <f>"norm" 1.0 0.9 1.2 </f>
    <f>"noise" 0.0 </f>
    </scan>

    -- The "filename" argument is the name of the file.
    -- The "wavelenghts_col" argument tells which column of the file contains the wavelengths. A float sets it manually. ** A negative integer : the absolute value is taken as the column position and the column values divide 12398.52 eV*Ang
    -- The "refle_col" one tells which column contains the data.
    -- The "weight_col" ........ which column contains the weights. A float sets it manually
    -- The "angle_factor" is the factor to apply to the angles to obtain radians. Radians are used internally.
    -- The "norm" is the factor the calculations are multiplied by before comparison with the data
    -- The "noise" is added to the calculation before multiplying by norm

    The nstack keyword defaults to stack number 0. Usually you have one stack only and you can omit it.

    Other keywords :
    -- "CutOffRatio" is an angle, small, given in radians ( check it ). If it is given, the calculated
    data is multiplied, for angles < cutoffratio, by angle/cutoffratio. This accounts for the beam footprint
    getting bigger than the sample at small angles.
    -- "angleshift" : use it if you think that there is an error in the angles.

    When three numbers are given for a float parameter, that parameter is taken as an optimizable variable
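
    These conventions can be illustrated with a small parser sketch. This is a
    hypothetical illustration of the convention, not PPM's actual parsing code :

    ```python
    def parse_float_param(body):
        """Parse the body of an <f> tag: a quoted name followed by numbers.

        One number   -> fixed value.
        Three numbers -> optimizable variable (initial value, min, max).
        """
        parts = body.split()
        name = parts[0].strip('"')
        nums = [float(p) for p in parts[1:]]
        if len(nums) == 3:
            return name, {"value": nums[0], "min": nums[1], "max": nums[2],
                          "optimizable": True}
        return name, {"value": nums[0], "optimizable": False}

    def wavelength_from_energy(energy_eV):
        # negative "wavelenghts_col": the column holds energies, and
        # lambda (Ang) = 12398.52 / E (eV)
        return 12398.52 / energy_eV

    print(parse_float_param('"norm" 1.0 0.9 1.2'))
    ```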

    • Defining synthetic scan

      example :

      &lt;f&gt;"wavelenghts_col" 1.0 &lt;/f&gt;
      &lt;f&gt;"angles_col" [0.1,2.01,0.01] &lt;/f&gt;
      &lt;f&gt;"angle_factor" 3.1415/180.0 &lt;/f&gt;

      Where a float is used to fix the wavelenght, and the list, containing start, end and step, to
      generate an array of angles. It is important not to leave any
      space within expressions.
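
      The [start,end,step] list expands into an angle grid. A minimal sketch,
      assuming an arange-like expansion with the endpoint excluded ( the exact
      endpoint handling in PPM is not specified here ) :

      ```python
      def expand_scan_list(start, end, step, angle_factor=3.1415/180.0):
          """Expand [start, end, step] into angles, converted to radians."""
          n = int(round((end - start) / step))
          return [(start + i * step) * angle_factor for i in range(n)]

      angles = expand_scan_list(0.1, 2.01, 0.01)
      print(len(angles))  # 191 points, from 0.1 to 2.0 degrees
      ```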

    ::: Magnetic case :::

    The "angleshift" argument is not yet implemented.
    The main difference from the scalar case, in defining the scan, is
    the polarisation argument. There is an example in ex2.xml.
    Once again, do not leave any space within expressions like the above.
    Such an expression defines how the calculation has to be done. It is a
    list of several items. Each item contains a numerical factor, a
    toggle integer which can be 0 or 1, and a couple of complex numbers.
    The two complex numbers multiply the S and P polarisations respectively.
    If the toggle is 1 the item is calculated at (180-angles).
    This is, in some cases, equivalent to a complex conjugation.
    For each item the reflectivity is calculated and added to the
    result with its own factor ( the first number in the item )
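
    A sketch of how such an item list could be combined. The names and the
    refl signature here are illustrative assumptions, not PPM's API :

    ```python
    def combine_items(items, refl, angle):
        """items: list of (factor, toggle, cS, cP).

        For each item the reflectivity is computed -- at (180 - angle)
        when toggle is 1 -- with cS and cP multiplying the S and P
        polarisations, and added to the total with its own factor.
        """
        total = 0.0
        for factor, toggle, cS, cP in items:
            a = 180.0 - angle if toggle == 1 else angle
            total += factor * refl(a, cS, cP)
        return total

    # toy reflectivity, just to show the bookkeeping
    toy = lambda a, cS, cP: abs(cS) + abs(cP)
    print(combine_items([(1.0, 0, 1 + 0j, 0j), (0.5, 1, 0j, 1 + 0j)], toy, 10.0))  # 1.5
    ```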



    An example of a stack is given below. The nstack keyword can be omitted if you are fitting one stack only.
    Each time you open the <stack> tag, what you put inside is added to the stack, from the bottom to the top.
    The first layer you add to a stack is taken as an infinite substrate, no matter what its thickness is

    The repetitions keyword is used to duplicate a subunit several times

    <stack nstack="0">
    <f>"roughness" 5.0 </f>
    <f>"thickness" 0.0 </f>
    <ift key="material" >
    <s> Au </s>
    <f> 9.0 </f>
    </ift>

    <stack repetitions="10">
    <f>"roughness" 5.0 </f>
    <f>"thickness" 50.0 </f>
    <ift key="material" >
    <s> Si </s>
    <f> 2.3 </f>
    </ift>
    <f>"roughness" 5.0 </f>
    <f>"thickness" 50.0 </f>
    <ift key="material" >
    <s> Mo </s>
    <f> 8.0 </f>
    </ift>
    </stack>
    </stack>
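
    The repetitions mechanism can be pictured as a simple list expansion
    ( an illustrative model, not PPM's internal representation ) :

    ```python
    def expand_stack(items):
        """items: layers (any object) or ("repeat", n, sub_items) tuples.

        Repeated subunits are duplicated n times; the result is the flat
        bottom-to-top layer sequence.
        """
        out = []
        for item in items:
            if isinstance(item, tuple) and item[0] == "repeat":
                out.extend(expand_stack(item[2]) * item[1])
            else:
                out.append(item)
        return out

    # substrate Au, then a [Si, Mo] bilayer repeated 10 times -> 21 layers
    stack = expand_stack(["Au", ("repeat", 10, ["Si", "Mo"])])
    print(len(stack))  # 21
    ```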


    The layer has three arguments : roughness, thickness, material.

    The material argument must be an object providing optical constants.
    These are :
    -- ift = Index From Table, which is explained here
    -- Other more complicated, composite objects, which are composed using references to
    other objects. These will be detailed after the explanation of the symbolic references
    and are listed here :
    --- "ifo" : IndexFromObject
    --- "iff" : IndexFromFile
    --- "ifos" : IndexFromObjects
    --- "KK" : Kramers-Kronig

    The ift object is a mixture of elements. The index of this mixture is obtained from the f0
    and f1f2 tables specified above ( or the last ones specified before the ift definition ).
    The arguments are couples of element-name + density (g/cc), one couple per element
    entering the mixture
    :::: Magnetic case :::
    -- MagScatterer : a material with a tensorial
    dielectric constant. One gives two optical indexes in
    input, for the two helicities, the direction of the
    magnetization etc etc ...

    Let's see a reference example. The example-line below

    <f> "thickness" rgu 17.71742+random.gauss(0.0,0.2)</f>

    could be used inside <layer></layer> to define the thickness argument.
    Its value would be 17.71742+random.gauss(0.0,0.2), evaluated at the moment of its definition,
    and the name rgu is a reference name that can be used in the following lines.
    As an example of its reuse, one could, in the definition of another layer, use

    <f> "thickness" rgu </f>

    In this case the thickness argument will have the same value as in the previously defined layer.

    References can be used to set up constraints. For example we can define at top level

    <f> my_thickness 1.0 0.1 3.0 </f>

    In this case my_thickness will be an optimizable variable with initial value 1.0 and bounds 0.1, 3.0,
    that can be reused anywhere in the subsequent lines.

    References can be introduced for all tags, just add the reference name immediately after the tag, like this

    <my_tag> reference_name
    tag arguments
    </my_tag>

    and reused like this

    <f> reference_name </f>


    Imagine that you want to do a constrained optimisation of a U/Fe bilayer
    whose total thickness you already know.
    You can create a variable for the total thickness
    <f> period_var 100.0</f>
    and an optimisable variable for the Fe thickness
    <f> Fe_Thick 10.0 0 100.0 </f>

    Now for the U_thick dependent variable you use this :

    <dv> U_Thick <s> par(period_var)-par(Fe_Thick)</s> </dv>

    The expression inside <s></s> ( the s tag means string )
    is processed at run time. The par(variable_reference) function gets the actual value of the variable

    To create variables which depend on the layers, two symbols can be used inside the expression string :

    self.depth takes an integer value, equal to 0 for the first layer,
    1 for the second, and so on.
    self.VarCounter() gives 0 the first time it is called, 1 the second .. and so on
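
    A minimal model of how such run-time expressions could be evaluated. The
    variable table, the quoted names in par(), and the use of eval are a sketch
    of the mechanism, not PPM's implementation :

    ```python
    variables = {"period_var": 100.0, "Fe_Thick": 10.0}

    def par(name):
        """Return the current value of a referenced variable."""
        return variables[name]

    # the <dv> expression string, evaluated at run time
    U_Thick = eval("par('period_var') - par('Fe_Thick')")
    print(U_Thick)  # 90.0

    class LayerContext:
        """Illustrates self.depth and self.VarCounter() inside expressions."""
        def __init__(self, depth):
            self.depth = depth   # 0 for the first layer, 1 for the second...
            self._count = 0
        def VarCounter(self):
            n = self._count      # 0 on the first call, 1 on the second...
            self._count += 1
            return n
    ```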

    The comparison Theory-experiment must be created with the <fit> tag.
    It must be in the xml file if you want to use it subsequently to do minimisations
    or to write the comparison results to files.
    It is almost automatic, in the sense that ppmxml will quietly use all the gathered information
    ( for example the list of optimisable variables ).
    The merit function, for the scalar version, is fixed : the sum of the absolute values of the
    differences between logarithms.
    You can nonetheless specify a convolution width, given as a gaussian sigma in units of experimental
    points : meaningful for equispaced grids
    Examples :

    <f> "width" 0 </f>

    The reference fit_name is an arbitrary name that you will use as an argument to the minimisation routines
    and/or other calls to the error function.

    :::::::::::::: Magnetic case ::::::::::::::::::::::::::
    By default the fit object considers the logarithm.
    For dichroism fits this is inconvenient.
    In the magnetic case you can add the following argument
    <s key="meritfunction"> sin4 </s>
    and you will get a least-squares fit where the data and calculation
    are weighted by sin(angle)**4.
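
    The two merit functions can be sketched directly ( assuming natural
    logarithms and angles in radians ; the exact normalisation in PPM is not
    given here ) :

    ```python
    import math

    def merit_scalar(calc, data):
        """Scalar version: sum of |log(calc) - log(data)|."""
        return sum(abs(math.log(c) - math.log(d)) for c, d in zip(calc, data))

    def merit_sin4(calc, data, angles):
        """Magnetic "sin4" option: least squares with data and calculation
        weighted by sin(angle)**4 (useful for dichroism, where logarithms
        are inconvenient)."""
        return sum(((c - d) * math.sin(a) ** 4) ** 2
                   for c, d, a in zip(calc, data, angles))

    print(merit_scalar([1.0, math.e], [1.0, 1.0]))  # 1.0
    ```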



    Two methods exist to optimise the error function given by the <fit> tag.
    The first is based on an annealed amoeba.
    Here is an example.

    <f>"fit" fit2</f>
    <s key="temperature" > .05*exp(-0.2*x) </s>
    <f>"max_refusedcount" 100 </f>
    <f>"max_isthesame" 10 </f>
    In general the only important argument is the "fit" one.
    The others govern the annealing.
    Unless the fitting is fast, however, the annealing is not effective,
    because you will not be patient enough to wait for several local minima to
    be found. You can leave these arguments at the values shown here.
    Once a local minimum is found, the algorithm tries to find another one.
    If it is better it is always taken; otherwise it can be taken or not,
    according to the comparison between the difference in the merit function
    and the temperature function, which is evaluated as a function of x = number of local minima
    previously found. Look at the code for more details.
    The program stops when it falls on the same minimum "max_isthesame" times,
    or when it gets "max_refusedcount" refused local minima.
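
    The accept/refuse logic can be sketched as follows. This is a simplified,
    deterministic reading of the criterion ; the real code may draw a random
    threshold, so look at the code for the details :

    ```python
    import math

    def temperature(x):
        """The temperature argument, evaluated at x = number of local
        minima previously found (the default shown above)."""
        return .05 * math.exp(-0.2 * x)

    def accept_minimum(new_err, best_err, x):
        """Better minima are always taken; worse ones only while the
        increase in the merit function stays below the decaying
        temperature."""
        if new_err < best_err:
            return True
        return (new_err - best_err) < temperature(x)

    print(accept_minimum(1.0, 2.0, 0))    # True  (better: always taken)
    print(accept_minimum(2.04, 2.0, 0))   # True  (0.04 < temperature(0) = 0.05)
    print(accept_minimum(2.04, 2.0, 20))  # False (temperature has decayed)
    ```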
    The other method is based on the GeFit routine by Armando Sole, which uses Levenberg-Marquardt

    <f>"fit" fit</f>


    <f> fit2</f>
    <s> fit_res </s>

    The first argument is a reference to a <fit> object.
    The result is written to fit_res1 ( fit_res2 .. fit_resn when you have more scans )


    Here are three methods to manipulate/create the optical indexes.
    The tag ift has been detailed above.
    The tag KK will be detailed in a dedicated section.
    Here we detail :

    ---  "ifo":IndexFromObject
    --- "iff":IndexFromFile
    --- "ifos":IndexFromObjects

    --- ifo takes two parameters : a reference to an index object, and a relative density

    <ifo> rescaledObjectname
    <f> ref_to_an_index_object </f>
    <f> 1.0 0.5 2.0 </f>

    for example.

    --- iff also takes two parameters : a filename, and a relative density.
    The index read from the file will be rescaled by the density.
    The file must contain three columns :
    Energy Real(n) Im(n)

    PPM uses the convention exp(-i omega time + i K x ) for a plane wave.

    --- ifos takes N*2 parameters : N couples of ( reference to an index object, relative density )

    ::: Magnetic Case :::
    --- MagScatterer
    The first two arguments are index objects for positive and negative helicity.
    The "versor" argument is the direction of the magnetization :
    it must be a list of three numbers.
    The "Saturation" argument modulates the intensity of the
    magnetisation by controlling the average between the two indexes :
    Saturation set to 0 gives a scalar material.
    The "RelativeDensity" argument works as usual


    PPM provides several methods to synthesise absorption functions to be used in KK.
    They are called beta objects because they are reduced to the imaginary part
    of the optical index.

    -- bff : beta from file
    -- bjoin : merges a beta object with tables
    -- Beta_write : writes it to a file
    -- bfc : ( beta from continuum ) the continuum part : arctan
    -- bfl : ( beta from lorentzian ) a lorentzian
    -- bsum : a sum of betas
    -- bff
    reads a file.
    The arguments are :
    -- name of the file
    -- energy shift to be applied to the energy ( can be optimised )
    -- a factor for the data ( can be optimised )
    -- "rescaleXlambda" which can be 1 or 0
    The first column of the file is the energy.
    rescaleXlambda is set to one when one has raw data
    for absorption. In that case beta is proportional to the absorption
    times lambda/lambda0, where lambda0 is the
    middle of the scan.
    The first three arguments are positional. The last one is optional
    and needs the key "rescaleXlambda"
    <f>  "rescaleXlambda" 1 </f>
    -- bjoin
    joins an absorption spectrum
    to a tabulated optical index ( imaginary part, beta )
    to obtain a beta on the wider range given by
    the tabulated values, but with more precise
    data, given by the beta object, in the shorter range covered by the beta object.
    The arguments are :
    -- a reference to a beta object
    -- a reference to an index object
    -- "trim_left" optional argument
    -- "trim_right" optional argument
    The first two arguments are positional. The last two are optional.
    They default to 0.
    The energy extent over which the beta object is used is obtained from the beta object by bjoin.
    For a bff, for example, it is given by the energy column.
    The trim arguments reduce this extent, if they are bigger than zero.
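
    A sketch of the joining rule, treating both sources as callables
    ( illustrative, not PPM's code ) :

    ```python
    def bjoin(E, beta, beta_extent, table, trim_left=0.0, trim_right=0.0):
        """Use the (more precise) beta object inside its trimmed extent,
        and the tabulated index elsewhere."""
        lo = beta_extent[0] + trim_left
        hi = beta_extent[1] - trim_right
        return beta(E) if lo <= E <= hi else table(E)

    beta = lambda E: 2.0    # precise data over [700, 730] eV, say
    table = lambda E: 1.0   # tabulated values everywhere
    print(bjoin(710.0, beta, (700.0, 730.0), table))                  # 2.0
    print(bjoin(705.0, beta, (700.0, 730.0), table, trim_left=10.0))  # 1.0
    ```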
    -- Beta_write :
    The arguments for this method are :
    -- a reference to a beta object
    -- minE
    -- maxE
    -- the number of points
    -- the name of the output file
    The beta will be written to the file, from minE up to maxE, at n points.
    There are two other optional arguments, useful only for magnetism :
    -- "pol" = 1
    -- "dichroism" = 0
    which are not documented here and are passed to the beta object.
    -- bfc : ( beta from continuum ) the continuum part : arctan
    the arguments are :
    -- "E0"
    -- "step"
    -- "pente"
    -- "arctanfact"
    -- "min"
    -- "max"
    All these arguments must be given with the keywords ( which improves readability ).
    The formula is
    result = step + (energies - E0) * pente
    if self.arctanfact is None :
        result = result * Numeric.less(E0, energies)
    else :
        result = result * (0.5 + Numeric.arctan((energies - E0) * par(self.arctanfact)) / Numeric.pi)
    These variables can be optimisable ones.
    The last two parameters are the extent of the object, the one which is used by bjoin for example.
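
    The same formula as self-contained Python, on a single energy point
    ( Numeric.less(E0, energies) is 1 where E > E0 and 0 elsewhere ; here the
    variables are plain floats rather than optimisable ones ) :

    ```python
    import math

    def bfc(E, E0, step, pente, arctanfact=None):
        result = step + (E - E0) * pente
        if arctanfact is None:
            result = result * (1.0 if E0 < E else 0.0)  # sharp edge at E0
        else:
            result = result * (0.5 + math.atan((E - E0) * arctanfact) / math.pi)
        return result

    print(bfc(695.0, 700.0, 1.0, 0.0))        # 0.0 below the edge
    print(bfc(705.0, 700.0, 1.0, 0.0))        # 1.0 above it
    print(bfc(700.0, 700.0, 1.0, 0.0, 10.0))  # 0.5 at E0 with the smooth arctan
    ```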
    -- bfl : ( beta from lorentzian ) a lorentzian
    -- "E0"
    -- "height"
    -- "gammaL"
    -- "gammaR"
    -- "min"
    -- "max"
    Use keywords for all these parameters.
    E0 is the center. Height is ...
    Then there is the possibility to have an asymmetric lorentzian.
    For a symmetric lorentzian use a variable, and then a reference to this variable for both
    gammaL and gammaR.
    min and max define the extent as explained above.
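
    One plausible form of the asymmetric lorentzian. The exact normalisation
    PPM uses is not given in this manual, so treat the formula as an
    assumption ; only the left/right-width mechanism is the point :

    ```python
    def bfl(E, E0, height, gammaL, gammaR):
        """Lorentzian peaked at E0 with value `height` there, using gammaL
        as half-width below E0 and gammaR above (equal gammas give the
        symmetric case)."""
        g = gammaL if E < E0 else gammaR
        return height * g * g / ((E - E0) ** 2 + g * g)

    print(bfl(700.0, 700.0, 3.0, 1.0, 2.0))  # 3.0 at the center
    ```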
    -- bsum : a sum of betas
    The N arguments to this method are either N references
    to beta objects, or they are 2N and form a sequence
    of N couples ( reference to a beta, factor ).
    The extent of the resulting object is the intersection
    of those of the components.
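
    A sketch of bsum's bookkeeping in the 2N-argument form ( an illustrative
    representation ; PPM's objects are referenced by name, not passed as
    tuples ) :

    ```python
    def bsum(components):
        """components: list of (beta_callable, (min, max), factor).

        Returns the summed beta and its extent: the intersection of the
        component extents."""
        lo = max(ext[0] for _, ext, _ in components)
        hi = min(ext[1] for _, ext, _ in components)
        total = lambda E: sum(f * b(E) for b, _, f in components)
        return total, (lo, hi)

    beta, extent = bsum([(lambda E: 1.0, (690.0, 740.0), 2.0),
                         (lambda E: 0.5, (700.0, 760.0), 1.0)])
    print(extent)       # (700.0, 740.0) -- the intersection
    print(beta(710.0))  # 2.5
    ```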
    -- KK :
    it returns index(wavelengths) as the result of a KK transform.
    Arguments :
    -- "filename_or_betaObject" : a reference to your beta object
    -- "material" : a reference to an object returned by IndexFromTable
    -- "E1"
    -- "E2"
    -- "N"
    E1, E2 are the extrema between which the KK has to be done.
    N energy points and their betas are generated
    from the material object
    and given in input to the kk program
    -- "e1"
    -- "e2"
    e1, e2 specify an interval contained in E1, E2.
    After doing the KK transform, the obtained real part
    is joined to the "material" indexes at e1 and e2


    -- "Fact" : a factor multiplying the betas read from filename_or_betaObject.
    ----------------- Optional ---------------------------
    -- "maglia" : provide a reference to a beta-object built from a file.
    Explanation :
    by default the integrations are done in N steps between
    E1 and E2.
    If this argument is set, that grid is merged
    with the energy grid contained in the "maglia" argument
    ( the data points )
    -- "Nmaglia" : when "maglia" is used, the provided grid
    is oversampled by this factor. Defaults to 10
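
    For orientation, a very crude discrete Kramers-Kronig transform, on a
    uniform N-point grid between E1 and E2 with the pole simply skipped.
    PPM's kk program is far more careful, and the maglia merging is omitted :

    ```python
    import math

    def kk_real(E, grid, betas):
        """delta(E) ~ (2/pi) * P-integral of E' beta(E') / (E'**2 - E**2) dE',
        evaluated as a naive Riemann sum that drops the singular term."""
        dE = grid[1] - grid[0]
        s = 0.0
        for Ep, b in zip(grid, betas):
            if abs(Ep - E) < 1e-12:
                continue  # crude principal value: skip the pole
            s += Ep * b / (Ep * Ep - E * E)
        return (2.0 / math.pi) * s * dE

    grid = [690.0 + i for i in range(41)]  # 690..730 eV, 1 eV steps
    betas = [1.0 if Ep == 710.0 else 0.0 for Ep in grid]  # a single spike
    # absorption at E' > E contributes positively to the real part at E
    print(kk_real(705.0, grid, betas) > 0)  # True
    ```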
    -- KK_write
    this has the same arguments as Beta_write, except that you
    give a reference to an index object instead of a beta object