A class that allows easy extraction of the fit coefficients, covariance matrix, predictions, residuals, etc.
Remarks: LinearFit instances can be printed.
Instance Methods
Initialises the LinearFit instance.
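In practice a LinearFit is usually not constructed directly but obtained by fitting a LinearModel to a set of observations, as in the worked example at the bottom of this page (a minimal sketch; LinearModel is assumed to be importable from this package, and the numpy helpers are imported explicitly):

>>> from numpy import ones, exp, linspace
>>> x = linspace(0, 5, 10)
>>> obs = 2.0 + 3.0 * exp(x)                         # noise-free observations, for illustration
>>> myModel = LinearModel([ones(10), exp(x)], ["1", "exp(x)"])
>>> myFit = myModel.fitData(obs)                     # myFit is a LinearFit instance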
Returns the observations.
Returns the regression coefficients.
Returns the sum of the squared residuals.
Estimates the variance of the residuals of the time series. The number of degrees of freedom is used as the normalization.
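As a plain-numpy illustration of the quantities above (a sketch of ordinary least squares, not necessarily the library's internal implementation), the model y = a_0 + a_1 * exp(x) from the example at the bottom of this page can be fitted as follows; X, coeffs, ssr and dof are illustrative names, not part of the API:

>>> import numpy as np
>>> x = np.linspace(0, 5, 10)
>>> noise = np.array([0.44, -0.48, 0.26, -2.00, -0.93, 2.21, -0.57, -2.04, -1.09, 1.53])
>>> obs = 2.0 + 3.0 * np.exp(x) + noise
>>> X = np.column_stack([np.ones(10), np.exp(x)])          # design matrix: regressors "1" and "exp(x)"
>>> coeffs, _, _, _ = np.linalg.lstsq(X, obs, rcond=None)  # regression coefficients
>>> ssr = np.sum((obs - X @ coeffs)**2)                    # sum of the squared residuals
>>> dof = len(obs) - X.shape[1]                            # degrees of freedom: n observations minus n coefficients
>>> sigma2 = ssr / dof                                     # residual variance, normalized by the degrees of freedom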
Returns the covariance matrix of the fit coefficients.
Returns the correlation matrix of the fit coefficients.
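Continuing the numpy sketch above: for an ordinary least-squares fit the covariance matrix of the coefficients can be formed from the residual variance and the design matrix, and the correlation matrix follows by normalizing it (an illustration only, not necessarily how this class computes it):

>>> cov = sigma2 * np.linalg.inv(X.T @ X)                  # covariance matrix of the fit coefficients
>>> stderr = np.sqrt(np.diag(cov))                         # formal error bars of the coefficients
>>> corr = cov / np.outer(stderr, stderr)                  # correlation matrix of the fit coefficients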
Returns the formal error bars of the fit coefficients.
Returns the symmetric (1-alpha) confidence interval around the fit coefficients. For example, if alpha = 0.05, the 95% symmetric confidence interval is returned.
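Continuing the sketch: the formal error bars are the square roots of the diagonal of the covariance matrix (stderr above), and a symmetric (1-alpha) interval can be built from the Student t distribution with dof degrees of freedom, here with alpha = 0.05 for the 95% interval:

>>> from scipy.stats import t
>>> alpha = 0.05
>>> tcrit = t.ppf(1.0 - alpha / 2.0, dof)                  # two-sided critical value of the t distribution
>>> lower = coeffs - tcrit * stderr                        # lower bound of the 95% confidence interval
>>> upper = coeffs + tcrit * stderr                        # upper bound of the 95% confidence interval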
Returns the formal t-values of the fit coefficients.
Performs a hypothesis t-test on each of the regressors.
Null hypothesis H0: fit coefficient == 0
Alternative hypothesis H1: fit coefficient != 0
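In the same sketch, the t-value of each coefficient is the coefficient divided by its formal error, and the two-sided t-test rejects H0 at significance level alpha when |t| exceeds the critical value:

>>> tvalues = coeffs / stderr                              # formal t-values of the fit coefficients
>>> reject_H0 = np.abs(tvalues) > tcrit                    # True where H0 (coefficient == 0) is rejected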
Returns the predicted (fitted) values. These are the predictions for the original observations, not the decorrelated ones.
Returns an array with the residuals: residuals = observations minus the predictions.
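In the numpy sketch these two quantities are simply:

>>> predictions = X @ coeffs                               # fitted values for the original observations
>>> residuals = obs - predictions                          # residuals = observations minus predictions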
Returns the coefficient of determination. The coefficient of determination is defined by 1 - S1 / S2, with S1 the sum of the squared residuals and S2 the sum of the squared deviations of the observations from their mean.
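With the quantities from the sketch above, and assuming the usual unweighted definition, this is:

>>> S1 = np.sum(residuals**2)                              # sum of the squared residuals
>>> S2 = np.sum((obs - obs.mean())**2)                     # sum of squared deviations from the mean
>>> Rsq = 1.0 - S1 / S2                                    # coefficient of determination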
Returns the Bayesian Information Criterion value.
TODO: make a weighted version.
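BIC conventions differ between implementations; one common least-squares form, shown only as an illustration and not necessarily the exact expression used by this class, is n*ln(SSR/n) + k*ln(n), with n the number of observations and k the number of fit coefficients:

>>> n, k = len(obs), X.shape[1]
>>> BIC = n * np.log(ssr / n) + k * np.log(n)              # one common least-squares form of the BIC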
Returns the 2nd-order Akaike Information Criterion value.
TODO: make a weighted version.
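Similarly, a common least-squares form of the second-order (small-sample) Akaike criterion, again only an illustration of the idea:

>>> AIC = n * np.log(ssr / n) + 2 * k                      # least-squares form of the AIC
>>> AICc = AIC + 2.0 * k * (k + 1) / (n - k - 1)           # second-order (small-sample) correction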
Returns the F-statistic, commonly used to assess the fit.
Performs a hypothesis F-test on the fit.
Null hypothesis H0: all fit coefficients == 0
Alternative hypothesis H1: at least one fit coefficient != 0
Stated otherwise (with R^2 the coefficient of determination):
Null hypothesis H0: R^2 == 0
Alternative hypothesis H1: R^2 != 0
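As an illustration in the numpy sketch (one common form, based on the coefficient of determination; the class may use a different but equivalent expression), the statistic and the corresponding test at significance level alpha are:

>>> from scipy.stats import f
>>> df1, df2 = k - 1, n - k                                # regression and residual degrees of freedom
>>> F = (Rsq / df1) / ((1.0 - Rsq) / df2)                  # one common form of the F-statistic
>>> reject_H0 = F > f.ppf(1.0 - alpha, df1, df2)           # F-test: reject H0 if the statistic is too large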
Writes some basic results of fitting the model to the observations.
Returns the string written by printing the LinearFit object.
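Since LinearFit instances can be printed (see the class remarks above), a sketch of getting the basic results and their string form is simply:

>>> print(myFit)                                           # prints a basic summary of the fit
>>> report = str(myFit)                                    # the same text, returned as a string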
Evaluates your current best fit at new covariates: pass the regressors evaluated at the new covariates, and the fit coefficients are applied to them.

Example:
>>> noise = array([0.44, -0.48, 0.26, -2.00, -0.93, 2.21, -0.57, -2.04, -1.09, 1.53])
>>> x = linspace(0, 5, 10)
>>> obs = 2.0 + 3.0 * exp(x) + noise
>>> myModel = LinearModel([ones(10), exp(x)], ["1", "exp(x)"])
>>> print(myModel)
Model: y = a_0 + a_1 * exp(x)
Expected number of observations: 10
>>> myFit = myModel.fitData(obs)
>>> xnew = linspace(-5.0, +5.0, 20)
>>> y = myFit.evaluate([ones_like(xnew), exp(xnew)])
>>> print(y)
[  1.53989966   1.55393018   1.57767944   1.61787944   1.68592536
   1.80110565   1.99606954   2.32608192   2.8846888    3.83023405
   5.43074394   8.13990238  12.72565316  20.48788288  33.62688959
  55.86708392  93.51271836 157.23490405 265.09646647 447.67207215]