Fitting data with python
Curve fitting
Preparing noisy data:
<source lang="py">
import numpy as np

Npoints = 30
x = np.linspace(1,10,100)        # dense grid for plotting
xb = np.linspace(1,10,Npoints)   # sample points
f = lambda x: np.sin(x)
yb = f(xb) + 0.3*np.random.normal(size=len(xb))
</source>
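If the noisy sample should be reproducible between runs, the random generator can be seeded first. A minimal sketch using NumPy's legacy global random state; the seed value is an arbitrary choice:
<source lang="py">
np.random.seed(42)   # arbitrary fixed seed: the same noise is drawn on every run
yb = f(xb) + 0.3*np.random.normal(size=len(xb))
</source>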
Using a polynomial fit, which is based on a generalized linear regression algorithm that solves a linear system.
<source lang="py">
from numpy.polynomial import polynomial as P

coeff, stats = P.polyfit(xb,yb,9,full=True)
fitpoly = P.Polynomial(coeff)
print(stats)
</source>
fitpoly is a callable polynomial and coeff holds the coefficients of the optimal polynomial.
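With full=True, polyfit also returns diagnostics of the underlying least-squares solve. A minimal sketch of unpacking them; the names on the left are my own labels, the ordering follows the NumPy documentation:
<source lang="py">
# stats = [residual sum of squares, rank of the design matrix,
#          its singular values, and the rcond cutoff used]
resid, rank, sv, rcond = stats
print("residual sum of squares:", resid)
print("rank of the fit matrix:", rank)
</source>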
Using curve_fit, which calls the ''leastsq'' algorithm, performing an iterative search for the minimum.
<source lang="py">
from scipy.optimize import curve_fit

fitfunc = lambda x, a, b: a*np.sin(b*x)
p0 = [1.0, 1.0]   # initial guess for a and b
p, pcov = curve_fit(fitfunc,xb,yb,p0=p0)
print(p, np.sqrt(np.diag(pcov)))
</source>
The last line prints the optimal parameters found and their uncertainties (the square roots of the diagonal of the covariance matrix). It is worth trying several guesses p0.
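Since the search can settle in a local minimum, one way to try several guesses is to loop over a few starting points and keep the fit with the smallest residual. A minimal sketch; the particular guesses are an arbitrary choice:
<source lang="py">
best = None
for guess in ([0.5, 0.5], [1.0, 1.0], [2.0, 0.8]):   # arbitrary starting points
    try:
        p_try, pcov_try = curve_fit(fitfunc, xb, yb, p0=guess)
    except RuntimeError:    # raised by curve_fit when the search fails to converge
        continue
    resid = np.sum((fitfunc(xb, *p_try) - yb)**2)
    if best is None or resid < best[0]:
        best = (resid, p_try, pcov_try)

resid, p, pcov = best
print(p, np.sqrt(np.diag(pcov)))
</source>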
Plotting the results:
<source lang="py">
import matplotlib.pyplot as plt

plt.scatter(xb,yb)                 # noisy data
plt.plot(x,f(x))                   # exact function
plt.plot(x,fitpoly(x))             # polynomial fit
plt.plot(x,fitfunc(x,p[0],p[1]))   # curve_fit result
plt.show()
</source>
Using the least-squares function directly
The basic syntax is the following:
<source lang="py">
#!/usr/bin/python
from scipy import optimize
from numpy import array

# your data as lists
x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 0.5, 0.0, -1.0]

# define a fitting function and the corresponding error function that must be minimized;
# use the lambda shortcut or define standard functions with def fit():
# p is the list of parameters
fit = lambda p, x: p[0] + p[1]*x + p[2]*x**2
err = lambda p, x, y: fit(p, x) - y

# initial guess
p0 = [1.0, 1.0, 1.0]

# call optimize.leastsq to find the optimal parameters,
# converting the lists into numpy arrays on the fly
p, success = optimize.leastsq(err, p0, args=(array(x), array(y)), ftol=5e-9, xtol=5e-9)

# info about convergence is in success; the optimized parameters are in p
</source>
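curve_fit estimates the parameter uncertainties for you; with leastsq they can be recovered from the full output. A minimal sketch, assuming the err, p0, x and y defined above; per the scipy.optimize.leastsq documentation, cov_x must be scaled by the reduced chi-square to obtain the parameter covariance:
<source lang="py">
import numpy as np

# full_output=True also returns cov_x, the covariance of the parameters up to a scale
p, cov_x, infodict, mesg, ier = optimize.leastsq(
    err, p0, args=(np.array(x), np.array(y)), full_output=True)

residuals = err(p, np.array(x), np.array(y))
dof = len(x) - len(p)                        # degrees of freedom
pcov = cov_x * np.sum(residuals**2) / dof    # scale by the reduced chi-square
print(p, np.sqrt(np.diag(pcov)))             # parameters and their one-sigma uncertainties
</source>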