
I am using simple error propagation for a multivariate polynomial (in x[i], y[i], z[i], ...). I write it as a symbolic expression and then substitute real values, so that the same approach can be applied in any similar general case:

 (**All are real functions/numbers**)
 pol = a x[1] + b x[1] y[1] - c x[2] z[1] - d x[1] (** + ...**)
 error = (Sum[D[pol, x[i]]^2*dx[i]^2, {i, 1, 2}] +
          Sum[D[pol, y[i]]^2*dy[i]^2, {i, 1, 2}] +
          Sum[D[pol, z[i]]^2*dz[i]^2 , {i, 1, 2}](**+...**))^(1/2)

Now the above attempt can lead to wrong errors, because what Mathematica does is to club the terms a x[1] - d x[1] + b y[1] x[1] together, so in the error estimation it just sees (a - d + b y[1])^2 dx[1]^2, which is not the same as (a^2 + d^2 + b^2 y[1]^2) dx[1]^2. Does this mean I will have to do them one by one and sum them in quadrature explicitly (without using Sum, For, Do, etc.)?
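
For concreteness, here is a quick check (a sketch reusing the pol defined above) of what Mathematica actually computes for the x[1] term:

    pol = a x[1] + b x[1] y[1] - c x[2] z[1] - d x[1];

    D[pol, x[1]]             (* the coefficients are combined: a + b y[1] - d *)
    D[pol, x[1]]^2 dx[1]^2   (* hence the term (a + b y[1] - d)^2 dx[1]^2 in the error *)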

BabaYaga
  • My bad. Yes, you are right. I don't know why I was just looking at the Abs[..] and not Abs[..]^2 the whole time! Thanks! – BabaYaga Jul 03 '22 at 12:19
  • @user293787 the problem remains though, and the code still gives wrong results. I think one way would be to convert the expression into a list and then operate on each element. – BabaYaga Jul 03 '22 at 12:29
  • I have no idea why you want a^2 + d^2 + b^2 y[1]^2 (link?). From context, I understand that we are assuming that a, b, c, d are given and fixed numbers. Take this example: If a = d = 1 then the first and last term in pol cancel and do not propagate errors at all. What is the rationale for a^2 + d^2 in that case? – user293787 Jul 03 '22 at 12:32
  • Assuming the variables are uncorrelated, one normally wants to add the errors in quadrature for each variable. The reason I am separating the different coefficients even for x[1] is that they have different sources. I hope I am doing it correctly. Link: https://en.wikipedia.org/wiki/Propagation_of_uncertainty (see the variance). – BabaYaga Jul 03 '22 at 12:36
  • Since you include only dx[i], dy[i], dz[i] and no da, db, dc, dd I assume that you are in a case where x[i], y[i], z[i] have uncertainty (random variables) but a, b, c, d have no uncertainty (fixed parameters). Then the formula should be correct as is, within the assumptions of the error propagation formula (small variances since based on linear Taylor expansion, and you make some kind of independence assumption, or no correlation assumption). – user293787 Jul 03 '22 at 12:43
  • Yes, a,b,c,d are fixed numbers. Variables are x[i],y[i],z[i] with the uncertainties dx,dy,dz. The variables come into the pol in the way I showed. Now you are saying the correct way to find the error of the pol is to combine (a-d+ b y[1] )^2 dx[1]^2 ? – BabaYaga Jul 03 '22 at 12:48
  • Yes, it is correct as it is. – user293787 Jul 03 '22 at 12:50
  • What is confusing to me is that if a-d+ b y[1] = 0, then there is no error corresponding to x[1]! Whereas what I was thinking was the correct formula is (a^2 + d^2 + b^2 y[1]^2) * dx[1]^2. – BabaYaga Jul 03 '22 at 12:51
  • If you are at a point where a-d+b*y[1]==0, then the partial derivative of pol w.r.t. x[1] vanishes at that point, and so to linear order about that point, the function does not depend on x[1], and uncertainty in x[1] is not propagated to linear order (linear Taylor), which is all the usual error propagation formula sees. (In higher order corrections, one would see a contribution from the uncertainty of x[1].) – user293787 Jul 03 '22 at 12:57
  • I agree with @user293787. I don't see why one would treat the error dx[1] in one instance of x[1] as uncorrelated with the error dx[1] in another instance of x[1]. That would mean that somehow the different instances of x[1] are in fact independent variables, each with different values of dx[1]. (The round-off errors in a x[1] and -d x[1] may be treated as independent if a and d are uncorrelated, but that's not a component of the propagated error.) – Michael E2 Jul 03 '22 at 14:12
  • I agree and realize my confusion. I was bothered that the error could vanish at some point, but that can happen, since the variable itself effectively disappears from the expression at that point. – BabaYaga Jul 03 '22 at 14:13

1 Answer


The formula is correct as is, within the range of applicability of the error propagation formula, and assuming a, b, c, d are fixed parameters.
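
As a sanity check, the propagated error can be compared against a Monte Carlo estimate for the polynomial from the question. The parameter values, central values, and uncertainties below are placeholders chosen only for illustration; with small uncertainties the two numbers should agree closely.

    (* placeholder inputs, for illustration only *)
    params = {a -> 2., b -> 0.5, c -> 1., d -> 3.};
    center = {x[1] -> 1., x[2] -> 2., y[1] -> 0.3, z[1] -> 0.7};
    sigmas = {dx[1] -> 0.01, dx[2] -> 0.02, dy[1] -> 0.01, dz[1] -> 0.015};

    pol = a x[1] + b x[1] y[1] - c x[2] z[1] - d x[1];

    (* linear error propagation, with the combined coefficients from D *)
    propagated = Sqrt[D[pol, x[1]]^2 dx[1]^2 + D[pol, x[2]]^2 dx[2]^2 +
                      D[pol, y[1]]^2 dy[1]^2 + D[pol, z[1]]^2 dz[1]^2] /.
                 params /. center /. sigmas;

    (* Monte Carlo: draw each variable around its central value and evaluate pol *)
    SeedRandom[1];
    samples = Table[
       pol /. params /. Thread[
         {x[1], x[2], y[1], z[1]} ->
           MapThread[RandomVariate[NormalDistribution[#1, #2]] &,
                     {{x[1], x[2], y[1], z[1]} /. center,
                      {dx[1], dx[2], dy[1], dz[1]} /. sigmas}]],
       {10^4}];

    {propagated, StandardDeviation[samples]}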

Remark. It is useful to define a function that can be re-used, for example

errorPropagation[f_, vars_] := Grad[f, vars]*Map[differential, vars] // Sqrt[Dot[#, #]] &;

Here vars is the list of all uncertain variables, which are assumed to have small variances and to be uncorrelated; in the result, the uncertainty of each variable v appears wrapped as differential[v]. The function must be modified if the variables are correlated, and so on; see the theory behind the error propagation formula for more details.

Then this works:

errorPropagation[a*x[1]+b*x[1]*y[1]-c*x[2]*z[1]-d*x[1],
                 {x[1],x[2],y[1],z[1]}]
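
The result is symbolic, with the uncertainty of each variable wrapped in the inert head differential. Numerical values can then be supplied with replacement rules, for example (the variable name err and the numbers below are placeholders):

    err = errorPropagation[a*x[1] + b*x[1]*y[1] - c*x[2]*z[1] - d*x[1],
                           {x[1], x[2], y[1], z[1]}];

    err /. {a -> 2., b -> 0.5, c -> 1., d -> 3.,                     (* parameters *)
            x[1] -> 1., x[2] -> 2., y[1] -> 0.3, z[1] -> 0.7,        (* central values *)
            differential[x[1]] -> 0.01, differential[x[2]] -> 0.02,  (* uncertainties *)
            differential[y[1]] -> 0.01, differential[z[1]] -> 0.015}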

See also here and here.

user293787