“If you need to do arbitrary-precision, floating-point math, append M to a number to create a BigDecimal literal”
I know this is from 3 years ago, but I’m starting to work on the 4th edition of the book, and I don’t understand what the errata is here. Can you clarify?
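For reference, a minimal Clojure REPL sketch (hypothetical session) of what the quoted sentence describes, assuming it refers to Clojure’s M suffix for BigDecimal literals:

user=> (type 1.0M)
java.math.BigDecimal
user=> (+ 0.1 0.2)     ; ordinary doubles pick up binary rounding error
0.30000000000000004
user=> (+ 0.1M 0.2M)   ; BigDecimal literals keep the result exact
0.3M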