Class LinearModel

A linear model class. The class implements a linear model of the form

    y(x) = a_0 * f_0(x) + a_1 * f_1(x) + ... + a_{n-1} * f_{n-1}(x)

where the f_i(x) are the regressor functions and the a_i are the fit coefficients to be determined.

Note: LinearModel instances can be added and printed.
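To make the model form concrete, here is a minimal numpy sketch (not part of the class API): the regressors f_i(x) form the columns of a design matrix, and evaluating the model is a matrix-vector product with the coefficients a_i.

>>> import numpy as np
>>> x = np.linspace(0, 10, 5)
>>> X = np.column_stack([x, x**2])     # regressors f_0(x) = x, f_1(x) = x^2
>>> a = np.array([2.0, -0.5])          # fit coefficients a_0, a_1
>>> y = X @ a                          # y(x) = a_0*x + a_1*x^2
>>> np.allclose(y, 2.0*x - 0.5*x**2)
True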
Instance Methods

| Method | Returns | Description |
|---|---|---|
| `__init__` | LinearModel | Initialiser of the LinearModel class |
| `__add__` | LinearModel | Defines the '+' operator for two LinearModel instances |
| `designMatrix` | ndarray | Returns the design matrix of the linear model |
| `pseudoInverse` | ndarray | Computes the pseudo-inverse in a robust way |
| `standardCovarianceMatrix` | ndarray | Computes the standard covariance matrix |
| `conditionNumber` | double | Computes the condition number |
| `singularValues` | ndarray | Returns the non-zero singular values of the design matrix |
| `hatMatrix` | ndarray | Computes the hat matrix |
| `traceHatMatrix` | double | Returns the trace of the (possibly weighted) hat matrix |
| `degreesOfFreedom` | integer | Returns the effective number of degrees of freedom |
| `regressorNames` | list | Returns the regressor names |
| `containsRegressorName` | boolean | Checks if the model contains the given regressor |
| `someRegressorNameContains` | boolean | Checks if a regressor name contains a given substring |
| `withIntercept` | boolean | Checks if there is a constant regressor |
| `nObservations` | integer | Returns the number of observations |
| `nParameters` | integer | Returns the number of fit coefficients |
| `fitData` | LinearFit | Creates and returns a LinearFit object |
| `submodels` | generator | Returns a generator of submodels |
| `copy` | LinearModel | Returns a duplicate of the current LinearModel |
| `__str__` | string | Generates the console representation of the LinearModel |

In addition, a private method performs and stores the singular value decomposition of the design matrix. The remaining methods and properties are inherited from `object`.
__init__

Initialiser of the LinearModel class.

Example:
>>> x = linspace(0,10,100)
>>> lm = LinearModel([sin(x), x*x], ["sin(x)", "x^2"])
__add__

Defines the '+' operator for two LinearModel instances.

Example:
>>> x = np.linspace(0,10,100)
>>> lm1 = LinearModel([x], ["x"])
>>> lm2 = LinearModel([x**2], ["x^2"])
>>> lm3 = lm1 + lm2     # the same as LinearModel([x, x**2], ["x", "x^2"])
designMatrix

Returns the design matrix of the linear model.

Example:
>>> x = np.linspace(0,10,5)
>>> lm = LinearModel([x, x**2], ["x", "x^2"])
>>> lm.designMatrix()
array([[   0.  ,    0.  ],
       [   2.5 ,    6.25],
       [   5.  ,   25.  ],
       [   7.5 ,   56.25],
       [  10.  ,  100.  ]])
Singular value decomposition (private method)

Performs and stores the singular value decomposition of the design matrix. Private class method, not to be used by the user. The singular value decomposition of the design matrix X is defined as

    X = U * W * V^t

where U is an (M,N) matrix with orthonormal columns, W is an (N,N) diagonal matrix containing the singular values, and V is an (N,N) unitary matrix, with M the number of observations and N the number of regressors. Note that for a unitary matrix A: A^{-1} = A^t.
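A minimal numpy sketch of this decomposition (assuming an SVD equivalent to np.linalg.svd; the class's internal storage is not shown here):

>>> import numpy as np
>>> x = np.linspace(0, 10, 5)
>>> X = np.column_stack([x, x**2])                    # design matrix with M=5, N=2
>>> U, w, Vt = np.linalg.svd(X, full_matrices=False)  # X = U * W * V^t
>>> U.shape, w.shape, Vt.shape
((5, 2), (2,), (2, 2))
>>> np.allclose(X, (U * w) @ Vt)                      # reassemble U * W * V^t
True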
pseudoInverse

Computes the pseudo-inverse in a robust way.

Example:
>>> x = linspace(0,10,5)
>>> lm = LinearModel([x, x**2], ["x", "x^2"])
>>> lm.pseudoInverse()
array([[  8.95460520e-17,   1.63870968e-01,   1.98709677e-01,   1.04516129e-01,  -1.18709677e-01],
       [ -1.07455262e-17,  -1.80645161e-02,  -2.06451613e-02,  -7.74193548e-03,   2.06451613e-02]])
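The docstring above does not spell out the algorithm, but a standard "robust" pseudo-inverse built from the SVD looks like the following sketch; the tolerance below is illustrative, not necessarily the class's actual cutoff.

>>> import numpy as np
>>> x = np.linspace(0, 10, 5)
>>> X = np.column_stack([x, x**2])
>>> U, w, Vt = np.linalg.svd(X, full_matrices=False)
>>> tol = w.max() * 1e-12                   # illustrative cutoff for "small" singular values
>>> winv = np.where(w > tol, 1.0 / w, 0.0)  # invert only the significant singular values
>>> Xpinv = (Vt.T * winv) @ U.T             # pseudo-inverse: V * W^{-1} * U^t
>>> np.allclose(Xpinv, np.linalg.pinv(X))
True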
standardCovarianceMatrix

Computes the standard covariance matrix.

Example:
>>> x = linspace(0,10,5)
>>> lm = LinearModel([x, x**2], ["x", "x^2"])
>>> lm.standardCovarianceMatrix()
array([[ 0.09135484, -0.01032258],
       [-0.01032258,  0.00123871]])
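The example output is consistent with the usual least-squares covariance matrix (X^t X)^{-1} for unweighted, unit-variance observations, which the following sketch reproduces directly:

>>> import numpy as np
>>> x = np.linspace(0, 10, 5)
>>> X = np.column_stack([x, x**2])
>>> np.linalg.inv(X.T @ X)          # (X^t X)^{-1}
array([[ 0.09135484, -0.01032258],
       [-0.01032258,  0.00123871]])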
conditionNumber

Computes the condition number.

Example:
>>> x = linspace(0,10,5)
>>> lm = LinearModel([x, x**2], ["x", "x^2"])
>>> lm.conditionNumber()
35.996606504814814
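The value above equals the ratio of the largest to the smallest singular value of the design matrix (118.34194851 / 3.28758625 ≈ 36.0), which is the standard definition; a sketch:

>>> import numpy as np
>>> x = np.linspace(0, 10, 5)
>>> X = np.column_stack([x, x**2])
>>> w = np.linalg.svd(X, compute_uv=False)       # singular values
>>> np.allclose(w.max() / w.min(), 35.996606504814814)
True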
singularValues

Returns the non-zero singular values as obtained with the singular value decomposition.

Example:
>>> x = linspace(0,10,5)
>>> lm = LinearModel([x, x**2], ["x", "x^2"])
>>> lm.singularValues()
array([ 118.34194851,    3.28758625])
hatMatrix

Computes the hat matrix. The hat matrix is defined by

    H = X (X^t X)^{-1} X^t

with X the design matrix, and where all multiplications are matrix multiplications. It has the property that

    \hat{y} = H * y

where y is the (weighted) observation vector and \hat{y} the vector of fitted values. So, it "puts a hat" on the (weighted) observation vector.

Example:
>>> x = linspace(0,10,5)
>>> lm = LinearModel([x, x**2], ["x", "x^2"])
>>> lm.hatMatrix()
array([[  9.32150093e-32,   1.56705591e-16,   1.79092104e-16,   6.71595390e-17,  -1.79092104e-16],
       [  1.56705591e-16,   2.96774194e-01,   3.67741935e-01,   2.12903226e-01,  -1.67741935e-01],
       [  1.79092104e-16,   3.67741935e-01,   4.77419355e-01,   3.29032258e-01,  -7.74193548e-02],
       [  6.71595390e-17,   2.12903226e-01,   3.29032258e-01,   3.48387097e-01,   2.70967742e-01],
       [ -1.79092104e-16,  -1.67741935e-01,  -7.74193548e-02,   2.70967742e-01,   8.77419355e-01]])
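A sketch of the defining formula; using the pseudo-inverse, H = X * X^+ coincides with X (X^t X)^{-1} X^t whenever X^t X is invertible:

>>> import numpy as np
>>> x = np.linspace(0, 10, 5)
>>> X = np.column_stack([x, x**2])
>>> H = X @ np.linalg.pinv(X)       # hat matrix
>>> y = 3.0*x - 0.2*x**2            # a vector already in the column space of X
>>> np.allclose(H @ y, y)           # the hat leaves fitted values unchanged: y_hat == y
True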
traceHatMatrix

Returns the trace of the (possibly weighted) hat matrix. The computation of the entire MxM hat matrix (M the number of observations) is avoided when calculating the trace. This is useful when the number of observations is so high that the hat matrix no longer fits into memory.

Example:
>>> x = linspace(0,10,5)
>>> lm = LinearModel([x, x**2], ["x", "x^2"])
>>> lm.traceHatMatrix()
1.9999999999999993
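Why the full hat matrix can be avoided: with the SVD X = U * W * V^t one has H = U * U^t, so trace(H) is simply the sum of the squared elements of U. A sketch of that idea (the class's actual implementation may differ):

>>> import numpy as np
>>> x = np.linspace(0, 10, 5)
>>> X = np.column_stack([x, x**2])
>>> U, w, Vt = np.linalg.svd(X, full_matrices=False)
>>> trace = np.sum(U * U)           # trace(H) without ever forming the MxM matrix H
>>> np.allclose(trace, 2.0)         # equals the number of independent regressors
True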
degreesOfFreedom

Returns the effective number of degrees of freedom, defined here as the number of observations minus the trace of the hat matrix (in the example below: 5 - 2 = 3). See also: Wikipedia.

Example:
>>> x = linspace(0,10,5)
>>> lm = LinearModel([x, x**2], ["x", "x^2"])
>>> lm.degreesOfFreedom()
3
regressorNames

Returns the regressor names.

Example:
>>> x = linspace(0,10,100)
>>> lm = LinearModel([sin(x), x*x], ["sin(x)", "x^2"])
>>> lm.regressorNames()
["sin(x)", "x^2"]
containsRegressorName

Checks if the linear model contains the given regressor.

Example:
>>> x = linspace(0,10,100)
>>> lm = LinearModel([sin(x), x*x], ["sin(x)", "x^2"])
>>> lm.containsRegressorName("x^2")
True
>>> lm.containsRegressorName("cos(x)")
False
someRegressorNameContains

Checks if at least one regressor name contains the given substring.

Example:
>>> x = linspace(0,10,100)
>>> lm = LinearModel([sin(x), x*x], ["sin(x)", "x^2"])
>>> lm.someRegressorNameContains("sin")
True
withIntercept

Checks if there is a constant regressor.

Example:
>>> x = linspace(0,10,100)
>>> lm = LinearModel([1, x*x], ["1", "x^2"])
>>> lm.withIntercept()
True
nObservations

Returns the number of observations.

Example:
>>> x = linspace(0,10,23)
>>> lm = LinearModel([x], ["x"])
>>> lm.nObservations()
23
nParameters

Returns the number of fit coefficients.

Example:
>>> x = linspace(0,10,100)
>>> lm = LinearModel([sin(x), x*x], ["sin(x)", "x^2"])
>>> lm.nParameters()
2
fitData

Creates and returns a LinearFit object. From this object the user can extract all kinds of quantities related to the linear fit. See LinearFit.

Example:
>>> x = linspace(0,10,100)
>>> lm = LinearModel([x], ["x"])
>>> obs = x + normal(0.0, 1.0, 100)     # some simulated observations
>>> linearfit = lm.fitData(obs)
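The numerical core of such an unweighted fit is the pseudo-inverse applied to the observations; a sketch of that single step (LinearFit itself wraps much more, see its own documentation):

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> x = np.linspace(0, 10, 100)
>>> X = np.column_stack([x])                 # design matrix for the single regressor x
>>> obs = x + rng.normal(0.0, 1.0, 100)      # simulated observations around y = x
>>> coeff = np.linalg.pinv(X) @ obs          # least-squares estimate of a_0
>>> coeff.shape
(1,)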
submodels

Returns a generator capable of generating submodels. The generator sequentially returns the submodels, which are instances of LinearModel with only a subset of the regressors of the parent model. Submodels with fewer regressors always appear first, and within each submodel the regressors keep their order from low rank to high rank. This holds, for example, for the regressors [1, x, x**2] as well as for [1, x, sin(2x), cos(2x)]; a sketch of such an enumeration is shown below.
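A hypothetical sketch of such an enumeration (the helper name and the exact ordering are assumed from the description above, using itertools.combinations so that smaller subsets come first and regressors keep their original order):

>>> from itertools import combinations
>>> def submodel_indices(n):
...     for size in range(1, n + 1):                    # fewer regressors first
...         for idx in combinations(range(n), size):    # low-to-high rank order kept
...             yield idx
>>> list(submodel_indices(3))                           # e.g. for regressors [1, x, x**2]
[(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]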
copy

Returns a duplicate of the current LinearModel.

Example:
>>> x = linspace(0,10,100)
>>> lm = LinearModel([x], ["x"])
>>> lm2 = lm.copy()
__str__

Generates what is written to the console if the LinearModel is printed.