And assume some rule which produces estimates y'_{n} of y_{n} from x_{n}:

    y'_{n} = f(x_{n}, parameters)
The total square error is then defined as the sum of the square errors for each piece of data:

    E_{T} = Σ_{n} (y_{n} - y'_{n})^{2}
This may also be written in terms of the vectors X and Y as:

    E_{T} = (Y - Y')·(Y - Y')

where Y' = f(X, parameters) is a vector containing the estimates for all the data elements of Y (for some values of the parameters). For a given set of data, we see that the total square error is a function of the parameters of our rule.
In the least-squares-error sense, the best fit of the data to a theoretical curve is found for the set of parameters that minimizes E_{T}.
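To make the definition concrete, here is a minimal sketch of computing the total square error E_{T} for a candidate rule. The rule and the sample data below are illustrative assumptions, not values from the text.

```python
def total_square_error(x, y, f):
    """Sum of squared differences between data y and estimates f(x)."""
    return sum((yn - f(xn)) ** 2 for xn, yn in zip(x, y))

# Illustrative data, roughly following y = 2x + 1.
X = [0.0, 1.0, 2.0, 3.0]
Y = [1.0, 3.1, 4.9, 7.2]

# Candidate linear rule y' = a*x + b with guessed parameters a = 2, b = 1.
E_T = total_square_error(X, Y, lambda x: 2.0 * x + 1.0)
```

Different parameter values give different E_{T}; the least-squares fit is the choice that makes this number as small as possible.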
Linear Least-Squares Error Fitting
When the rule mapping the data vector X to the vector Y is a linear one,

    y'_{n} = a x_{n} + b

the best-fit parameters a and b can be found directly. If p is the vector

    p = [ a ]
        [ b ]

and A is the matrix

    A = [ x_{1}   1 ]
        [ x_{2}   1 ]
        [  ...   ...]
        [ x_{N}   1 ]

and Y is the vector

    Y = [ y_{1} ]
        [ y_{2} ]
        [  ...  ]
        [ y_{N} ]

then the set of parameters that minimizes E_{T} is

    p = (A^{T} A)^{-1} A^{T} Y

where ^{-1} denotes the inverse of a matrix (and ^{T} the transpose).
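The closed-form solution p = (A^{T} A)^{-1} A^{T} Y can be sketched in pure Python for the linear rule y' = a x + b, where the 2x2 normal equations are solved explicitly. The sample data are illustrative assumptions.

```python
def linear_least_squares(x, y):
    """Solve the 2x2 normal equations (A^T A) p = A^T Y for p = (a, b)."""
    n = len(x)
    # Entries of the 2x2 matrix A^T A, where A has rows [x_n, 1].
    sxx = sum(v * v for v in x)
    sx = sum(x)
    # Entries of the 2-vector A^T Y.
    sxy = sum(xv * yv for xv, yv in zip(x, y))
    sy = sum(y)
    det = n * sxx - sx * sx            # determinant of A^T A
    a = (n * sxy - sx * sy) / det      # slope
    b = (sxx * sy - sx * sxy) / det    # intercept
    return a, b

# Illustrative data lying exactly on y = 2x + 1.
X = [0.0, 1.0, 2.0, 3.0]
Y = [1.0, 3.0, 5.0, 7.0]
a, b = linear_least_squares(X, Y)
```

Because the data here are exactly linear, the fit recovers a = 2 and b = 1; with noisy data the same formulas give the parameters minimizing E_{T}.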
Many calculators support linear least-squares data fitting. Such a fit is also easily performed by MatLab. If the data and parameters 'a' and 'b' are put in the form above (vector p and matrix A), then p is solved in MatLab by the statement:

    p = A\Y
MatLab can also perform a polynomial least-squares error fit of a vector of data Y to a vector X. That is, it finds the parameters of the rule,

    y'_{n} = p_{1} x_{n}^{N} + p_{2} x_{n}^{N-1} + ... + p_{N} x_{n} + p_{N+1}

that minimize the total square error,

    E_{T} = Σ_{n} (y_{n} - y'_{n})^{2}

In MatLab this is performed by the function,

    p = polyfit(X, Y, N)

where the result p is a vector containing the polynomial coefficients in descending powers of x.
As an example, to find the linear least-squares fit (Y = aX + b) of data Y and X using the MatLab polyfit command, one would type:

    p = polyfit(X, Y, 1)
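A polynomial fit of this kind can be sketched in pure Python via the normal equations (A^{T} A) p = A^{T} Y, where A has rows [x^{N}, x^{N-1}, ..., x, 1]. The function below mimics the behavior of MatLab's polyfit (coefficients in descending powers); the data are illustrative assumptions.

```python
def polyfit(x, y, deg):
    """Least-squares polynomial fit; returns coefficients in descending powers."""
    m = deg + 1
    # Row n of A holds powers of x_n from x^deg down to x^0.
    A = [[xv ** (deg - j) for j in range(m)] for xv in x]
    # Build the normal equations: G = A^T A, r = A^T Y.
    G = [[sum(A[n][i] * A[n][j] for n in range(len(x))) for j in range(m)]
         for i in range(m)]
    r = [sum(A[n][i] * y[n] for n in range(len(x))) for i in range(m)]
    # Solve G p = r by Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda i: abs(G[i][col]))
        G[col], G[piv] = G[piv], G[col]
        r[col], r[piv] = r[piv], r[col]
        for i in range(col + 1, m):
            f = G[i][col] / G[col][col]
            for j in range(col, m):
                G[i][j] -= f * G[col][j]
            r[i] -= f * r[col]
    p = [0.0] * m
    for i in reversed(range(m)):
        p[i] = (r[i] - sum(G[i][j] * p[j] for j in range(i + 1, m))) / G[i][i]
    return p

# Illustrative data lying exactly on y = 2x + 1; a degree-1 fit
# should recover roughly [2.0, 1.0].
X = [0.0, 1.0, 2.0, 3.0]
Y = [1.0, 3.0, 5.0, 7.0]
p = polyfit(X, Y, 1)
```

For high degrees the normal equations become ill-conditioned; production code (including MatLab) uses more stable factorizations, but the minimized quantity is the same E_{T}.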
For an example of how Dr. Erlenmeyer used linear least-squares fitting, and for copies of his MatLab scripts, click on Dr. Erlenmeyer.
If the data follow a rule of the form

    z_{n} = c (x_{n} - d)^{2}

then by making the transformation Y = sqrt(Z), a linear least-squares fit can be performed on the transformed data and parameters using the linear rule,

    y_{n} = a x_{n} + b

where Y = sqrt(Z) (i.e., y_{n} = sqrt(z_{n})) and a = sqrt(c), b = -sqrt(c) d.
An example using this type of transformation is given on the Dr. Erlenmeyer Least-Squares Fit page.
Similarly, if the data follow a rule of the form

    z_{n} = c e^{d x_{n}}

then by making the transformation Y = ln(Z), a linear least-squares fit can be performed on the transformed data and parameters using the linear rule,

    y_{n} = a x_{n} + b

where Y = ln(Z) (i.e., y_{n} = ln(z_{n})) and a = d, b = ln(c).
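The logarithmic transformation can be sketched end to end in pure Python: fit the line y = a x + b to y_{n} = ln(z_{n}), then recover d = a and c = e^{b}. The helper function and data below are illustrative assumptions, not from the text.

```python
import math

def fit_exponential(x, z):
    """Fit z = c * exp(d*x) by a linear least-squares fit on ln(z)."""
    y = [math.log(zv) for zv in z]            # transformed data Y = ln(Z)
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(xv * yv for xv, yv in zip(x, y))
    det = n * sxx - sx * sx
    a = (n * sxy - sx * sy) / det             # slope     -> d
    b = (sxx * sy - sx * sxy) / det           # intercept -> ln(c)
    return math.exp(b), a                     # (c, d)

# Illustrative data generated exactly from z = 2 e^{0.5 x}.
X = [0.0, 1.0, 2.0, 3.0]
Z = [2.0 * math.exp(0.5 * xv) for xv in X]
c, d = fit_exponential(X, Z)
```

Note that this minimizes the square error of ln(z), not of z itself, so with noisy data the result differs slightly from a direct nonlinear fit; for classroom purposes the transformation makes the problem linear and easy to solve.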