Conversation

@gwhitney (Collaborator)

Previously, gamma on BigNumbers only worked for positive integers. This change implements a general arbitrary-precision algorithm based on the Stirling series (reference included in the code) and adds tests for it. This PR is the last stepping stone needed to correct the computation of zeta(n), resolving further bugs found in the course of #3532.
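
For reference, the Stirling series here is presumably the standard asymptotic expansion of $\ln \Gamma(z)$ in terms of the Bernoulli numbers $B_{2n}$:

$$\ln \Gamma(z) \sim \left(z - \tfrac{1}{2}\right)\ln z \;-\; z \;+\; \tfrac{1}{2}\ln(2\pi) \;+\; \sum_{n=1}^{N} \frac{B_{2n}}{2n(2n-1)\,z^{2n-1}}$$

The series is truncated at some $N$, and $z$ is first shifted upward via $\Gamma(z) = \Gamma(z+k)\,/\,\bigl(z(z+1)\cdots(z+k-1)\bigr)$ until the truncation error falls below the target precision.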

It is only a work in progress because ultimately the implementation needs to compute factorials, rising factorials, and double factorials. As submitted, the code uses inefficient temporary code for the factorial variants so that the algorithm could be implemented and tested. However, once #3568 or an agreeable substitute is merged, calls to whatever the resulting facility is need to be inserted for all forms of factorial. In addition, the computation of the ordinary factorial needs to move into factorial.js and be called from gamma, rather than the current roundabout organization in which the code for factorial resides in gamma while factorial itself merely forwards to gamma.
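
A minimal sketch of the intended organization (the names and file layout here are hypothetical, not the actual mathjs source): factorial.js owns the integer product, and gamma delegates to it for positive integers instead of the other way around.

```js
import Decimal from 'decimal.js'

// factorial.js: plain product loop as a stand-in; the eventual facility
// from #3568 (or its substitute) would replace this with something faster.
export function bigFactorial (n) {
  let result = new Decimal(1)
  for (let k = 2; n.gte(k); k++) {
    result = result.times(k)
  }
  return result
}

// gamma.js: for positive integers, gamma(n) === (n - 1)!, so call out to
// factorial rather than housing the factorial loop here.
export function bigGamma (z) {
  if (z.isInteger() && z.gt(0)) {
    return bigFactorial(z.minus(1))
  }
  // ... otherwise evaluate via the Stirling series ...
  throw new Error('non-integer case elided in this sketch')
}
```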

@gwhitney (Collaborator, Author)

Oh, there is one more important question to resolve before this is checked in. To obtain d digits of precision in the result, the algorithm needs d + m digits of precision in the intermediate quantities, where m depends on the inputs in some way. It can happen that d + m exceeds the precision available in the current instance. In such a case, the current code for gamma simply truncates the result to the digits it can be sure of. Some other high-precision functions in the library take the alternate tactic of creating a new mathjs instance with d + m digits of precision, running the algorithm in the new instance, and then returning the desired number of digits of precision as a bignumber in the original instance.
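
A rough sketch of that second tactic, assuming the standard mathjs factory API; `extraDigits` stands in for the input-dependent m, which this sketch does not try to estimate:

```js
import { create, all } from 'mathjs'

export function gammaWithGuardDigits (math, z, extraDigits) {
  const d = math.config().precision
  // throwaway instance configured with d + m significant digits
  const scratch = create(all, { precision: d + extraDigits })
  const raw = scratch.gamma(scratch.bignumber(z.toString()))
  // round the result back down to d digits in the caller's instance
  return math.bignumber(raw.toSignificantDigits(d).toString())
}
```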

Which way would you like me to go in dealing with the intermediate precision needs? It seemed to me that creating a new instance on the fly every time gamma is called and happens to need more precision might be too heavyweight an operation. But maybe it is not as resource-intensive as I think, or at least is deemed worth it when trying to compute to high accuracy. Thanks for your thoughts on this.
