These methods talk about "the variance of the x and y variables" as though every data point has the same uncertainty. In most cases you have separate uncertainty estimates for the individual data points, and some nominal x values may even have repeated measurements. A good fit needs to weight the points differently.
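A minimal sketch of that kind of per-point weighting: plain weighted least squares for y = m*x + b with inverse-variance weights w_i = 1/sigma_i^2 (the sigma values below are made up for illustration).

import numpy as np

def weighted_fit(x, y, sigma):
    # Weighted least squares for y = m*x + b, weights w_i = 1/sigma_i**2.
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = 1.0 / np.asarray(sigma, float)**2
    xbar = np.sum(w * x) / np.sum(w)          # weighted means
    ybar = np.sum(w * y) / np.sum(w)
    m = np.sum(w * (x - xbar) * (y - ybar)) / np.sum(w * (x - xbar)**2)
    b = ybar - m * xbar
    return m, b

# The third point is much noisier, so it pulls on the fit far less.
print(weighted_fit([0, 1, 2, 3], [0.1, 1.9, 7.5, 6.1], [0.1, 0.1, 5.0, 0.1]))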
Brent

On 10/4/2018 1:06 PM, Henry Baker wrote:

Upon reflection(!!), it occurred to me that the "ordinary" least squares solution is a pretty decent approximation to the "correct"/symmetrical slope, so one could simply *rotate* the entire x-y plane about the centroid of the data points until this slope is *zero*, and then run the "ordinary" least squares algorithm again. Since I believe that this iteration achieves cubic(?) convergence, at most 2-3 iterations should be required for any reasonable problem.
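Here is one rough reading of that rotation idea as code (a sketch of my interpretation, not anything from the original post): fit ordinary least squares, rotate the centered points so the fitted line becomes horizontal, refit, and repeat; the accumulated rotation angle then gives the symmetric slope back in the original frame.

import numpy as np

def ols_slope(x, y):
    xbar, ybar = x.mean(), y.mean()
    return np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar)**2)

def rotated_fit(x, y, iters=3):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x.mean(), y.mean()               # centroid = fixed point of each rotation
    u, v = x - xc, y - yc
    theta = 0.0
    for _ in range(iters):                    # per the estimate above, 2-3 passes
        dtheta = np.arctan(ols_slope(u, v))   # angle of the current OLS line
        c, s = np.cos(dtheta), np.sin(dtheta)
        u, v = c * u + s * v, -s * u + c * v  # rotate by -dtheta: line is now ~flat
        theta += dtheta
    m = np.tan(theta)                         # symmetric slope in the original frame
    b = yc - m * xc                           # the fitted line passes through the centroid
    return m, b

At a fixed point the rotated covariance vanishes, so for data with a clear trend the limit should be the principal-axis line, i.e. the perpendicular fit.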
However, the following method requires exactly 1 iteration, and hence should be considerably faster for large #'s of points:
http://www.mathpages.com/home/kmath110.htm
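For reference, the standard closed form for the perpendicular (total) least squares line, which I believe is essentially what the mathpages note above derives: it minimizes the summed squared perpendicular distances, so it treats x and y symmetrically and needs no iteration at all.

import numpy as np

def perpendicular_fit(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xbar, ybar = x.mean(), y.mean()
    sxx = np.sum((x - xbar)**2)
    syy = np.sum((y - ybar)**2)
    sxy = np.sum((x - xbar) * (y - ybar))
    theta = 0.5 * np.arctan2(2.0 * sxy, sxx - syy)   # angle of the major principal axis
    m = np.tan(theta)
    b = ybar - m * xbar                              # line through the centroid
    return m, b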
At 09:21 AM 10/4/2018, Henry Baker wrote:
In common discussions of least squares, the parameters (m,b) are estimated for the equation y = m*x+b from data points [x1,y1], [x2,y2], [x3,y3], etc.
For example, in Wikipedia (where m=beta2 and b=beta1):
https://en.wikipedia.org/wiki/Linear_least_squares#Example
So far, so good.
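For concreteness, a minimal sketch of that ordinary fit; the four data points below are, I believe, the ones in the Wikipedia example, and they give m = 1.4, b = 3.5.

import numpy as np

def ols_fit(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xbar, ybar = x.mean(), y.mean()
    m = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar)**2)
    b = ybar - m * xbar
    return m, b

print(ols_fit([1, 2, 3, 4], [6, 5, 7, 10]))   # -> (1.4, 3.5)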
Now, if I merely exchange x and y, then my equation is x = m'*y+b', where we should have m' = 1/m and b' = -b/m. (Let's ignore the case where the best m=0.)
However, if I then estimate (m',b') using the same least squares method, I don't get (1/m, -b/m)!
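A quick numerical check with the same four sample points, using numpy's polyfit for the plain fits: the x-on-y slope comes out 0.5, which is not 1/1.4.

import numpy as np

x = np.array([1, 2, 3, 4], float)
y = np.array([6, 5, 7, 10], float)
m, b = np.polyfit(x, y, 1)      # regress y on x:  m = 1.4
mp, bp = np.polyfit(y, x, 1)    # regress x on y:  m' = 0.5
print(m, 1.0 / m, mp)           # 1.4, ~0.714, 0.5  => m' != 1/m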
So either I'm doing something wrong, or perhaps there is a least squares method that treats x and y symmetrically??