💡 Unit Testing Floating Point Numbers #147

Open
uggi121 opened this issue Oct 25, 2019 · 0 comments
uggi121 commented Oct 25, 2019

As a direct consequence of the IEEE 754 floating point format, most decimal values cannot be represented exactly, so exact equality checks on floating point results are unreliable in unit tests. A pragmatic heuristic is to factor in the magnitude of the method's inputs and compare the actual result against the expected value within a suitable error tolerance.
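As a minimal sketch of what that looks like in practice, assuming JUnit 5 is on the classpath: the `sum` method below is a hypothetical stand-in for whatever code is under test, and the three-argument `assertEquals` overload takes the absolute error threshold as its final parameter.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class FloatingPointToleranceTest {

    // Hypothetical method under test; stands in for any computation returning a double.
    private double sum(double a, double b) {
        return a + b;
    }

    @Test
    void sumIsCorrectWithinTolerance() {
        // 0.1 + 0.2 evaluates to 0.30000000000000004 in IEEE 754 double precision,
        // so an exact comparison against 0.3 would fail.
        // The third argument is the absolute error tolerance.
        assertEquals(0.3, sum(0.1, 0.2), 1e-9);
    }
}
```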

For example, 0.1 + 0.2 does not equal 0.3 exactly in this format. Keep this in mind when designing unit tests and choose an appropriate error tolerance. Another good practice for mathematical methods is to test them against a trusted existing implementation, as sketched below.
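A minimal sketch of testing against a reference implementation, again assuming JUnit 5: `newtonSqrt` is a hypothetical method under test, checked against `Math.sqrt` using a tolerance that scales with the expected value, so large and small inputs are held to the same relative error.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ReferenceImplementationTest {

    // Hypothetical implementation under test: square root via Newton's method.
    private double newtonSqrt(double x) {
        double guess = x;
        for (int i = 0; i < 50; i++) {
            guess = 0.5 * (guess + x / guess);
        }
        return guess;
    }

    @Test
    void matchesLibrarySqrtWithinRelativeTolerance() {
        double[] inputs = {1e-6, 0.25, 2.0, 1e6};
        for (double x : inputs) {
            double expected = Math.sqrt(x);
            // Scale the absolute tolerance with the magnitude of the expected value,
            // which effectively enforces a relative error bound of 1e-9.
            assertEquals(expected, newtonSqrt(x), 1e-9 * Math.abs(expected));
        }
    }
}
```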
