Stats question!
There is a standard ("Aldrich") that I measure ~10 times per mass spectrometry session, to make sure I'm getting approximately the right numbers out. I currently have ~104 measurements of this standard, with a standard deviation of ±0.45 units.
I also measure samples; the number of acceptable measurements per sample ranges from 2 to 13 inclusive.
Where the standard deviation between measurements for a sample is greater than my long-term stdev for the Aldrich, I use the standard deviation between measurements for the sample.
Where the standard deviation between measurements for a sample is less than that for the Aldrich, I can either use the Aldrich stdev or the measured stdev for the specific sample. How many measurements of a specific sample do I have to make before it becomes legitimate to trust the measured stdev for the sample rather than my long-term stdev for the Aldrich standard? Any pointers on how to work this out would be greatly appreciated - my supervisor's handwave is "maybe ten?", but I'd like to have some kind of defensible reason for doing the thing...
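For scale, here's the sort of calculation I've been sketching, not a prescription: assuming the replicate measurements are roughly normal, (n - 1) * s^2 / sigma^2 follows a chi-squared distribution with n - 1 degrees of freedom, which gives a confidence interval on the true sigma from a sample stdev s based on n measurements. The 2-13 range and the ~104 Aldrich measurements are mine; the 95% level and the use of scipy are just illustrative.

# Sketch: how uncertain is a standard deviation estimated from n replicates?
from scipy.stats import chi2

def sd_confidence_factors(n, confidence=0.95):
    """Multipliers (lo, hi) such that the true sigma lies in
    [lo * s, hi * s] with the given confidence, where s is the
    sample stdev computed from n measurements."""
    df = n - 1
    alpha = 1.0 - confidence
    lo = (df / chi2.ppf(1.0 - alpha / 2.0, df)) ** 0.5
    hi = (df / chi2.ppf(alpha / 2.0, df)) ** 0.5
    return lo, hi

# The 2-13 range covers my samples; 104 is the Aldrich long-term case.
for n in list(range(2, 14)) + [104]:
    lo, hi = sd_confidence_factors(n)
    print(f"n = {n:3d}: true sigma in [{lo:.2f}*s, {hi:.2f}*s] at 95% confidence")

If I've done this right, at n = 2 the 95% interval on the sample stdev runs from about 0.45*s up to about 32*s, at n = 10 it still spans roughly 0.7*s to 1.8*s, and the ~104 Aldrich measurements pin their stdev down to within roughly -12%/+16%. Whether that makes "maybe ten" a defensible cutoff, or whether I should be doing something like an F-test of the sample variance against the Aldrich variance instead, is exactly what I'm unsure about.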