What is the best methodology to determine whether a data point is an outlier of a data set when the data has an increasing mean?
My current approach is to calculate how many standard deviations the current point lies from the mean of the population observed before that point. If it is more than 4 standard deviations away, I flag it as an outlier. However, this methodology isn't very effective when the mean of the population is gradually increasing over time.
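For reference, here is a minimal sketch of that check as I understand it; the function name, the threshold parameter, and the synthetic drifting series are just for illustration, not part of any real pipeline:

```python
import numpy as np

def is_outlier(history, new_point, threshold=4.0):
    """Flag new_point if it is more than `threshold` sample standard
    deviations away from the mean of the points observed so far."""
    mean = np.mean(history)
    std = np.std(history, ddof=1)  # sample standard deviation of prior points
    if std == 0:
        return False  # no spread yet, so no basis for flagging
    return abs(new_point - mean) / std > threshold

# Usage on a series whose mean drifts upward over time.
rng = np.random.default_rng(0)
series = 0.05 * np.arange(500) + rng.normal(0.0, 1.0, 500)
flags = [is_outlier(series[:i], series[i]) for i in range(30, len(series))]
print(f"{sum(flags)} of {len(flags)} points flagged")
```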
Thoughts?