I have a lot of datasets that are the result of an export from SAS. As far as I can tell, SAS always defaults to double-precision floats when exporting to Stata. The floating-point variables in these datasets appear nearly always to be "stored" as single-precision floats, in the sense that their values are obviously rounded to the roughly 6.92-digit decimal precision of a single-precision float.
Data items like this:
Code:
* Example generated by -dataex-. To install: ssc install dataex
clear
input double var1
9.642
8.165
8.482
9.351
end
will then be stored as:
Code:
var1
9.641999999999999
8.164999999999999
8.481999999999999
9.351000000000001
(I know you can change the display format and Stata will silently adapt how it displays the values, but this is more a question about type conversion.)
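For example, something like this changes only the display, not the stored value (the %9.3f format here is just an arbitrary choice):

Code:
* purely cosmetic: the underlying double is unchanged
format var1 %9.3f
list var1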
If I try to test whether the variables are identical when rounded to within 5 decimal places or 7 decimal places
Code:
assert round(var1, .00001) == round(var1, .0000001)
it will fail. I'm guessing this is because of some underlying binary precision issue that I don't quite understand. On top of this, I'm not quite sure how to round to the exact precision of a single-precision float. Because of this, I'm sort of lost as to why
Code:
assert float(var1) == var1
fails. I know it has something to do with binary representations of base-10 numbers, but I never had formal training in computer science in this area.
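For what it's worth, Stata's %21x display format shows the underlying hex representation, which I think is one way to see the mismatch between a double and its float cast:

Code:
* hex view of the binary representations (my own illustration):
* the double nearest 9.642 and its float cast differ in the low
* mantissa bits, so float(var1)==var1 has no reason to hold
display %21x 9.642
display %21x float(9.642)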
Does anyone have any advice on how to pseudo-test the implied decimal precision of a double so that it can be recast into a float if the precision is unwarranted?
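Something along these lines is the kind of check I have in mind, though I don't know whether it is sound (the .001 rounding unit and the 1e-12 reldif() tolerance are just my guesses based on this data):

Code:
* sketch only: treat the double as "really" a 3-decimal value if
* rounding to 3 decimal places changes it only at the level of
* double rounding error, then recast (force is needed because the
* low bits do change slightly)
gen double rounded = round(var1, .001)
assert reldif(var1, rounded) < 1e-12
recast float var1, force

But that hard-codes the number of decimal places, so I'd welcome a more principled, general test.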