` = "At Precision Point"`

Enter a value for all fields

This equation determines whether a value is within a specified tolerance (+/-) of a specified precision point. It is useful in equations where you want to ensure the output is set to a specific value when precision errors cause the result to be off slightly.

Ex: An equation's inputs should produce a known value of 1.0, but precision errors in BigDecimal calculations cause the value to be displayed as 0.999999999999... You can use this equation to test for closeness to the expected result at a specified precision. For example, you could set the display value to exactly 1.0 whenever the difference from 1.0 is less than, say, 1E-15.
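The check described above can be sketched in Java using `BigDecimal`. This is a minimal illustration, not the tool's actual implementation; the method name `atPrecisionPoint` and its parameters are assumptions chosen to mirror the description.

```java
import java.math.BigDecimal;

public class AtPrecisionPoint {

    // Returns true when value is within tolerance (+/-) of the precision point.
    // Hypothetical helper mirroring the "At Precision Point" check.
    static boolean atPrecisionPoint(BigDecimal value, BigDecimal point, BigDecimal tolerance) {
        return value.subtract(point).abs().compareTo(tolerance) <= 0;
    }

    public static void main(String[] args) {
        BigDecimal result = new BigDecimal("0.999999999999999"); // off by 1E-15
        BigDecimal point = BigDecimal.ONE;
        BigDecimal tolerance = new BigDecimal("1E-15");

        // Snap the displayed value to exactly 1 when it is close enough.
        BigDecimal display = atPrecisionPoint(result, point, tolerance) ? point : result;
        System.out.println(display); // prints 1
    }
}
```

Because `BigDecimal.compareTo` ignores scale, the comparison treats `1E-15` and `0.000000000000001` as equal, which is the behavior you want for a tolerance test.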