Adding floating point considerations to tutorial #8324
Conversation
rossbar
left a comment
Thanks @amcandio , I think this is a step in the right direction. One idea that occurred to me when considering how to better document numerical precision was to add a standard reference document (similar to what is done for seeding/randomness) and then link to this reference doc from the functions where precision issues are prominent.
That's too much for this PR though - I'm +1 for getting this information somewhere visible in the docs and this seems a sensible place to me!
One style nit - could we get the line lengths down to something more reasonable (80, 88, or 110 chars)? In principle the linter should be enforcing this, but this file is clearly not being picked up 🙃
Co-authored-by: Ross Barnowski <rossbar@caltech.edu>
Thanks! I fixed the line length. We can add a more detailed doc like https://networkx.org/documentation/stable/reference/randomness.html in a separate PR.
Co-authored-by: Dan Schult <dschult@colgate.edu>
dschult
left a comment
Thanks @amcandio
rossbar
left a comment
Thanks @amcandio !
Floating Point Considerations
As discussed in previous issues and PRs (e.g., #4972, #4592, #8316), users sometimes see unexpected results when using floating point values such as edge weights or capacities. These issues are often caused by rounding errors, not bugs in the algorithms.
Once floating point numbers are involved, results are approximate. To avoid confusion, this PR updates the tutorial to clarify these considerations.
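As a generic illustration of the kind of rounding behavior the tutorial now warns about (this sketch is not taken from the PR itself): summing the same edge weights in a different order can yield two floating point totals that are mathematically equal but compare as unequal, so tolerance-based comparison is preferable to `==`.

```python
import math

# Hypothetical edge weights; 0.1 and 0.2 have no exact binary representation.
weights = [0.1, 0.2, 0.3]

total_forward = sum(weights)             # accumulates 0.1 + 0.2 + 0.3
total_backward = sum(reversed(weights))  # accumulates 0.3 + 0.2 + 0.1

# The two sums differ in the last bit, even though they are equal on paper.
print(total_forward == total_backward)              # False
print(math.isclose(total_forward, total_backward))  # True
```

Comparing with `math.isclose` (or an explicit tolerance) is the usual way to make such results robust to rounding error.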