Returning V_FitError from graph fitting inside 'Execute'

In my function I create a string containing a FuncFit command and use Execute for batch fitting inside a loop. I would like to keep a record of fit errors, but I could not get the value of V_FitError outside of Execute FuncFit_String. I can add FuncFit_String+=";print V_FitError" at the end to print the result during the loop, but adding ";MyVariable=V_FitError" doesn't work. Is there any solution to this? Thank you very much.
Why are you using an Execute method?

MyVariable = V_FitError will not work in an Execute statement, essentially because the executed commands run in a separate environment from the main loop and are unaware of its local variables.

You could use a try ... catch ... endtry approach inside your fit function, create a global that records the status, and read that global after the Execute statement runs.

But, to return to the original question, the best approach is to avoid the Execute method. Are you aware of prototype functions?

--
J. J. Weimer
Chemistry / Chemical & Materials Engineering, UAH
A command executed via Execute acts as if you had executed it from the command line. Consequently, operation output variables are created as global variables. You can access V_FitError like this:
String cmd = "FuncFit..."
Execute cmd
NVAR V_FitError  // Create reference to global variable created by FuncFit
Variable MyVariable = V_FitError


If you can do what you need to do without using Execute, that is a better solution. If the fitType parameter to CurveFit or FuncFit has to be computed at runtime, then that is a legitimate reason to use Execute.
Thanks, Howard, for the clarification! I had forgotten that the error variable is global. The valid case for using Execute with FuncFit is new to me.

--
J. J. Weimer
Chemistry / Chemical & Materials Engineering, UAH
hrodstein wrote:
If you can do what you need to do without using Execute, that is a better solution. If the fitType parameter to CurveFit or FuncFit has to be computed at runtime, then that is a legitimate reason to use Execute.

Perhaps related to what jjweimer was getting at when asking about prototype functions, the fit type parameter can also be dealt with in a user function with FUNCREF.

That is, I think any user-defined fit function could be called this way:
Function fits_protoFunc(w, x) : FitFunc     // prototype; the fit functions referenced by name share this signature
    Wave w
    Variable x
    return NaN
end

function fit(wv, coefwave, fitNameStr)
    WAVE wv, coefwave
    String fitNameStr

    FUNCREF fits_protoFunc f = $fitNameStr  // resolve the fit function by name at run time
    FuncFit f, coefwave, wv
end
Thank you very much. 'NVAR V_FitError' immediately solves the problem.

The reason to use 'Execute FuncFit_String' is that users can select an arbitrary number of peaks in a multi-peak fit GUI that I wrote (to extract peak parameters from a spectromicroscopy volume, with x-y for the image pixel coordinates and z for the XPS spectrum). I use a loop and FuncFit_String+=",{func_n, coef_n, keyword =value }" ... to construct the fitting command, but I still face the 400-character limit of Execute, so the code cannot really handle too many peaks.

I have never learned about prototype functions before. I just tried to read the help in Igor but still don't understand how they work. Will prototype functions help in my situation?

Thank you very much again!
I don't think using FUNCREF will help you in this case.

In Igor7 the limit for the command line, and therefore for Execute, is, if memory serves, 1000 bytes, and in Igor8, now in beta testing, it is 2500 bytes.

I think there may be a way to do your fitting without using Execute, even in Igor6. Go to the reference help for the FuncFit operation and then search for "Fitting Sums of Fit Functions". If you are attempting to fit a variable number of peaks, you will need to use the {string = fitSpecStr} format.
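A minimal sketch of how that might look for a variable number of peaks (the function and wave names here are hypothetical, and the exact spec syntax should be checked against that help topic). Note that declaring V_FitError as a local variable before the fit also keeps a per-fit error record without any globals:

Function FitNPeaks(dataW, nPeaks)
    Wave dataW
    Variable nPeaks

    // Build the sum-of-fit-functions specification as a string, one brace group per peak
    String fitSpecStr = ""
    Variable i
    for (i = 0; i < nPeaks; i += 1)
        Wave coefW = $("coef_peak" + num2str(i))    // per-peak coefficient wave holding initial guesses
        if (i > 0)
            fitSpecStr += ","
        endif
        fitSpecStr += "{MyPeakFunc, " + NameOfWave(coefW) + "}"
    endfor

    // A local V_FitError prevents an abort on a fit error and records the error for this fit
    Variable V_FitError = 0
    FuncFit/Q {string = fitSpecStr} dataW /D
    return V_FitError
End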
I did overlook the '{string = fitSpecStr}' feature. This should solve the command-length limit issue. Thank you so much!!
euaruksakul wrote:
I have never learned about proto-type function before. I just tried to read the Help in Igor but still don't understand how it works. Will this proto-type function help in my situation?
I had mistakenly thought you might, in part, be using Execute to allow the fit function name to vary at run time. I agree with hrodstein that FUNCREF is not helpful in this situation. I wasn't aware of the {string = fitSpecStr} option, which looks perfect both for your case and for allowing the fit function to vary at run time.
You can handle multiple peaks in a user-defined fit function by simply varying the length of the coefficient wave. For XPS fitting you typically have a linear background offset and slope, and for each peak seven parameters: position, area, FWHM, GL ratio, two asymmetry parameters and a Shirley background parameter. That makes the length of the coefficient wave 2 + NumberOfPeaks*7. Inside your fit function you can then do NumberOfPeaks = (numpnts(CoefWave) - 2)/7 to get the number of peaks.

You could also pass the number of peaks as the first parameter in the coefficient wave. Any parameter you hold constant can be used like that to pass additional information to your fit function. The nicest solution is to use a structure fit function, where additional information can easily be passed to the fit function.
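As an illustration of that idea, here is a minimal sketch (not anyone's actual procedure) of a fit function that infers the peak count from the coefficient wave length, reduced to three parameters per peak plus the two background terms to keep it short:

Function MultiPeakFit(w, x) : FitFunc
    Wave w      // w[0] = background offset, w[1] = background slope, then 3 per peak
    Variable x

    Variable nPeaks = (numpnts(w) - 2) / 3
    Variable result = w[0] + w[1]*x
    Variable i, pos, area, fwhm
    for (i = 0; i < nPeaks; i += 1)
        pos  = w[2 + 3*i]
        area = w[3 + 3*i]
        fwhm = w[4 + 3*i]
        // Gaussian normalized to unit area: sqrt(4*ln(2)/pi)/FWHM * exp(-4*ln(2)*((x-pos)/FWHM)^2)
        result += area * 0.939437/fwhm * exp(-2.77259*((x - pos)/fwhm)^2)
    endfor
    return result
End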
Olelytken - Thank you so much for the tips. I was never confident enough to use a structure fit function, but I think it is time to learn. I am currently deciding whether to put the Shirley background inside the fit function or do the background subtraction as preprocessing instead, to save fitting time. May I ask whether having the Shirley algorithm inside the fit function significantly increases the fitting time in your experience? There will be quite a lot of spectra to fit (sometimes more than 10,000) in one scan.
If you experience slow fitting, it might be worth using the "all-at-once" versions:
displayhelptopic "All-At-Once Fitting Functions"
Explicit multi-threading might also help to compute the individual peaks inside the fit function.
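For reference, a bare-bones sketch of the all-at-once form (names are placeholders): Igor passes the whole x wave at once and the function fills an output wave, instead of returning one value per point:

Function GaussAAO(pw, yw, xw) : FitFunc
    Wave pw     // pw[0] = offset, pw[1] = area, pw[2] = position, pw[3] = FWHM
    Wave yw     // output: model evaluated at every point of xw
    Wave xw     // input x values

    // One wave assignment evaluates the whole model in a single pass rather than point by point
    yw = pw[0] + pw[1] * 0.939437/pw[3] * exp(-2.77259*((xw[p] - pw[2])/pw[3])^2)
End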

HJ

Adhering strictly to first principles, Shirley or Tougaard backgrounds should be fit along with the peaks, because they are integrals of the evolving peaks themselves. In practice, I have found that either baseline type can be integrated from the raw spectrum, smoothed, and removed first. The Shirley is indeed the easier of the two to use.

Two caveats apply to the pre-removal approach. One must have a measured spectrum that extends far enough to the low BE side of the peak to pass any x-ray satellites and to obtain a flat reference (especially with non-monochromatic sources). More importantly, one must have a measured spectrum that extends far enough on the high BE side to bypass loss peaks and again obtain a respectable representation of a baseline.

My sense after working with laboratory-level spectra is that uncertainties in the peak fitting parameters are more sensitive to signal-to-noise on the spectra and to sample-to-sample variations than to whether one takes the baseline out first or fits it along with the peaks. IOW, take time to improve S/N in your spectra and take replicate spectra rather than writing code to run Shirley or Tougaard baselines within the peak fitting. Alternatively, invest in software that has already done the coding for you, whether here or e.g. as offered with CasaXPS.

In the meantime, the approach that @Olelytken suggests is one that I used many years ago to fit XPS multiplets prior to structure fit functions. I built the coefficient wave and used a for ... endfor loop to add up peaks.

--
J. J. Weimer
Chemistry / Chemical & Materials Engineering, UAH
The Shirley background (integrated peak area) of a Gaussian peak is the error function, and that of a Lorentzian is the arctangent. Both functions are available in Igor. They are slightly slower to calculate than the peaks themselves, but not by much. Below are the functions I use in my XPS fitting procedure.

Using an all-at-once fit function, as HJDrescher suggests, is a very good idea since it allows you to add the different peak components together with FastOP, which is extremely fast. It also means you don't have to go through the for loop for each data point, but only once for the whole wave.

You can improve the speed of your fit by orders of magnitude by playing around with exactly how you code your calculations. Be warned! You can spend an enormous amount of time on that. For the XPS spectra I typically work with (~100 data points, 2-4 peaks), I can fit 100-1000 spectra per second on my very old office computer; that includes fitting individual Shirley backgrounds for each peak and leaving all parameters free. By fixing parameters, especially the position and FWHM, I can fit much faster, and I could still significantly improve the speed of my code if I really wanted to, but it is a lot of work.

If you could average all spectra from your entire image and fit the averaged spectrum to get peak positions, FWHM and all the other parameters, you could then fit each individual spectrum by only allowing the area of each peak to change. That could be extremely fast.
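A hedged sketch of that last step, assuming a fit function whose coefficients are [offset, slope, position, area, FWHM]; the names and the hold pattern are placeholders to adapt to your own coefficient layout:

// "1" = hold the coefficient at its current value, "0" = let it vary;
// here only the area (index 3) is fitted, everything else stays at the values
// found from the averaged spectrum.
String holdStr = "11101"
FuncFit/Q/H=holdStr MyPeakFunc, coefs, pixelSpectrum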

You can find my XPS procedure here: http://www.igorexchange.com/project/EccentricXPS, but because I have played around with improving the speed of the fit, it is not easy to look at the code and follow what happens within the fit function. Below, I added a simplified version of my fit function for a single peak, with no asymmetry. You will find it below the easier-to-follow functions. Here I also added the execution time of the individual bits, and you can see that erf(x) is the slowest step at 61 microseconds.

ThreadSafe Static Function XPSFitAssGauss(x, Position, FWHM)
//  Returns a Gaussian peak normalized to an area of one
Variable x, Position, FWHM

    //  f(x) = sqrt( 4 * ln(2) / pi ) / FWHM * exp( -4 * ln(2) * (( x - Position ) / FWHM )^2 )
    //  sqrt( 4 ln(2) / pi ) = 0.939437
    //  4*ln(2) = 2.77259

    Return 0.939437/FWHM*exp(-2.77259*((x-Position)/FWHM)^2)
end



ThreadSafe Static Function XPSFitAssGaussShirley(x, Position, FWHM)
//  Calculates the Shirley background (integrated area) of a Gaussian peak with an area of one
Variable x, Position, FWHM

    //  f(x) = 0.5 * ( Erf(sqrt( 4 * ln(2) ) * ( x - Position ) / FWHM ) + 1 )
    //  sqrt( 4 ln(2) ) = 1.66511

    Return 0.5*(Erf(1.66511*(x-Position)/FWHM)+1)
end



ThreadSafe Static Function XPSFitAssLorentz(x, Position, FWHM)
//  Returns a Lorentzian peak normalised to an area of one
Variable x, Position, FWHM

    //  2 / pi * FWHM / (4 * ( x - Position )^2 + FWHM^2 )
    //  2/pi = 0.63662

    Return 0.63662*FWHM/(4*(x-Position)^2+FWHM^2)
end



ThreadSafe Static Function XPSFitAssLorentzShirley(x, Position, FWHM)
//  Calculates the Shirley background (integrated area) of a Lorentzian peak with an area of one
Variable x, Position, FWHM

    //  1 / pi * atan( 2 * ( x - Position ) / FWHM ) + 0.5
    //  1/pi = 0.31831

    Return 0.31831*atan(2*(x-Position)/FWHM)+0.5
end



ThreadSafe Static Function XPSFitAssFitFuncAAOOnePeak(Position, PeakArea, FWHM, GL, ShirleyCoef, xw, R)
//  Calculates a single peak for the all-at-once fit function. Since this function is optimized for speed, the math is almost impossible to follow.
Variable Position, PeakArea, FWHM, GL, ShirleyCoef
Wave/D xw, R
Variable c1=0, c2=0, Const=0
   
    Duplicate/O/FREE/D R, A, A2, B, C       //  8 us
   
    //  The simple calculation for a symmetric pseudo-Voigt peak with the corresponding Shirley background. This will also be the starting point for the asymmetric peak
    c1=2/FWHM
    c2=c1*Position
    FastOP A=(c1)*xw-(c2)   //  0.8 us
    A2=A^2                  //  15 us
    Const=0.5*PeakArea*ShirleyCoef

    //  Doesn't calculate the Gaussian component, if the peak is pure Lorentzian
    if (GL!=1)

        //  Calculates the Gaussian component of the peak
        FastOP B=(-0.693148)*A2         //  0.6 us
        B=exp(B)                        //  12 us
        c2=(1-GL)*PeakArea
        c1=0.939437*c2/FWHM

        //  Doesn't calculate the Shirley background if the Shirley coefficient is zero
        if (ShirleyCoef!=0)

            //  Calculates the Shirley background for the Gaussian component
            FastOP C=(0.832555)*A       //  0.5 us
            C=erf(C)                        //  61 us
            c2=c2*0.5*ShirleyCoef

            //  Adds the peak and background together
            FastOP R=(c1)*B+(c2)*C      //  1.5 us
        else

            //  Only adds the peak and not the background
            FastOP R=(c1)*B
        endif
       
        //  Doesn't calculate the Lorentzian component, if the peak is pure Gaussian
        if (GL!=0)
       
            //  Calculates the Lorentzian component of the peak
            FastOP B=A2+(1)     //  0.7 us
            B=1/B                   //  8.5 us
            c2=PeakArea*GL
            c1=c2*0.63662/FWHM

            //  Adds the peak to the result wave
            FastOP R=R+(c1)*B       //  1.4 us

            //  Doesn't calculate the Shirley background if the Shirley coefficient is zero
            if (ShirleyCoef!=0)

                //  Calculates the Shirley background for the Lorentzian component
                C=atan(A)           //  16 us
                c2=c2*0.31831*ShirleyCoef

                //  Adds the background to the result wave
                FastOP R=R+(c2)*C       //  1.3 us
            endif
        endif
    else

        //  Calculates the Lorentzian component of the peak
        FastOP B=A2+(1)
        B=1/B
        c1=PeakArea*0.63662/FWHM

        //  Doesn't calculate the Shirley background if the Shirley coefficient is zero
        if (ShirleyCoef!=0)

            //  Calculates the Shirley background for the Lorentzian component
            C=atan(A)
            c2=PeakArea*0.31831*ShirleyCoef

            //  Adds the peak and background together
            FastOP R=(c1)*B+(c2)*C
        else
       
            //  Only adds the peak and not the background
            FastOP R=(c1)*B
        endif
    endif

    //  Returns the constant offset, which should be added to the wave
    Return Const
end
jjweimer - Thank you for the XPS tips. My work started out as simply trying to quantify some elements in a user's project, so I wrote some simple code, but now I plan to use the GUI for more users. I think we do have CasaXPS installed somewhere in the lab and I will take a look. That should give me more ideas.

olelytken - It is really amazing to see your very nice work on minimizing the processing time. I already learned many new Igor tricks from looking at your example. The idea to average all spectra and fit for initial values seems really nice. We are using PEEM and there is still an iso-chromaticity issue (the peak at each pixel is shifted by a different amount), but once I find out how to cope with that, the code should run much faster.