Introduction
Sine, one of the fundamental trigonometric functions, plays a crucial role in various fields, including mathematics, physics, engineering, and computer science. Its calculation is not trivial, especially when it comes to implementing it in electronic calculators, where efficiency and accuracy are paramount.
In previous entries of the series, we looked into how calculators solve equations and how they calculate square roots. In this blog post, we’ll delve into the intricate process of calculating the sine function, starting from simple approximations to more sophisticated methods.
How sine is calculated
To begin, let’s inspect the plot of the sine function:
It is immediately obvious that the function is periodic with period 2π, and that it has a strong symmetry on the interval between 0 and π. In other words, it is enough to calculate the function on the interval [0, π/2]: the identities sin(x + 2π) = sin(x), sin(π − x) = sin(x), and sin(−x) = −sin(x) recover every other value. On this small interval, the simplest approach is the Taylor series of sine around 0:

sin(x) ≈ x − x³/3! + x⁵/5! − x⁷/7! + …

While this method is simple, we need to calculate quite high powers of x to reach good accuracy, and the approximation errors can get quite large around π/2, at the far end of the interval from the expansion point.
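As a concrete illustration, here is a minimal sketch of the truncated Taylor series in Python (the function name `taylor_sin` is my own, purely for demonstration):

```python
import math

def taylor_sin(x, terms):
    """Truncated Taylor series of sine around 0: x - x^3/3! + x^5/5! - ..."""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

# With only three terms the error is tiny near 0 but much larger near pi/2.
error_near_zero = abs(taylor_sin(0.1, 3) - math.sin(0.1))
error_at_pi_half = abs(taylor_sin(math.pi / 2, 3) - math.sin(math.pi / 2))
```

With three terms, the error near 0 is on the order of 10⁻¹¹, while at π/2 it is already a few thousandths, which is exactly the weakness described above.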
How sine is really calculated
While the method presented in the previous paragraph is quite bad, it does serve as a blueprint for better methods. Essentially, every implementation of sine uses the following three steps:
- Reduction: Using some algebraic tricks, reduce x to a small number r.
- Approximation: Calculate the value of sin(r) using an approximation method, such as the Taylor series.
- Reconstruction: Calculate the final value of sin(x) based on sin(r).
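Put together, the three steps can be sketched as follows (a toy Python implementation; the names `reduce_angle`, `approx_sin`, and `my_sin` are illustrative, not any library's actual API):

```python
import math

def reduce_angle(x):
    """Reduction: map x into [0, pi/2] using periodicity and symmetry,
    remembering the sign needed for reconstruction."""
    x = math.fmod(x, 2 * math.pi)   # sin(x + 2*pi) = sin(x)
    if x < 0:
        x += 2 * math.pi
    sign = 1.0
    if x > math.pi:                 # sin(x) = -sin(x - pi)
        x -= math.pi
        sign = -1.0
    if x > math.pi / 2:             # sin(x) = sin(pi - x)
        x = math.pi - x
    return x, sign

def approx_sin(r):
    """Approximation: a short Taylor polynomial, adequate on [0, pi/2]."""
    return r - r**3/6 + r**5/120 - r**7/5040 + r**9/362880

def my_sin(x):
    """Reconstruction: reattach the sign recorded during reduction."""
    r, sign = reduce_angle(x)
    return sign * approx_sin(r)
```

Because reduction shrinks the argument to at most π/2, the short polynomial in the middle step stays accurate to a few parts in a million everywhere.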
There are many ways to approach this problem. In the following, I present what Intel uses in their processors, based on their paper. They start with the angle-addition formula

sin(x) = sin(N·π/2ᵏ + r) = sin(N·π/2ᵏ)·cos(r) + cos(N·π/2ᵏ)·sin(r),

which splits the argument as x = N·π/2ᵏ + r, where N is an integer and r is a small remainder. We have to calculate N and r from x; the factors sin(N·π/2ᵏ) and cos(N·π/2ᵏ) depend only on N, so they can be precomputed and stored in a lookup table. Only one piece of the puzzle remains: how to calculate sin(r) and cos(r). Since r is small, a short polynomial suffices; instead of a truncated Taylor series, production implementations typically use a minimax polynomial, which minimizes the worst-case error over the whole interval rather than being most accurate at a single point.
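To make the idea concrete, here is a toy Python sketch of such a table-based scheme. The breakpoint spacing π/16, the table size, and the short Taylor polynomials standing in for the minimax polynomials are illustrative choices of mine, not the exact parameters from Intel's paper:

```python
import math

STEP = math.pi / 16                      # breakpoints at multiples of pi/16
NUM = round(2 * math.pi / STEP)          # 32 entries cover one full period
# Reconstruction table: precomputed sin and cos at each breakpoint.
TABLE = [(math.sin(n * STEP), math.cos(n * STEP)) for n in range(NUM)]

def table_sin(x):
    # Reduction: write x = N*STEP + r with |r| <= STEP/2.
    N = round(x / STEP)
    r = x - N * STEP
    # Approximation: short polynomials suffice because |r| <= pi/32.
    sin_r = r - r**3/6
    cos_r = 1 - r**2/2 + r**4/24
    # Reconstruction: angle addition with the tabulated values.
    sin_n, cos_n = TABLE[N % NUM]
    return sin_n * cos_r + cos_n * sin_r
```

Note how much shorter the polynomials can be here than in the plain Taylor approach: the table lookup absorbs most of the work, and the polynomial only has to cover a remainder of at most π/32.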
Conclusion
In conclusion, calculating sine in computers involves a combination of reduction, approximation, and reconstruction steps. From simple reduction and Taylor series to more precise methods like minimax approximation, computers employ various techniques to compute sine efficiently while maintaining acceptable levels of accuracy. Understanding these methods sheds light on the underlying mathematics that power computational tools and simulations in numerous fields.