A team from the University of Applied Sciences of the Grisons swiped the record last year, pushing the total up to 62.8 trillion decimal places. As before, Google used y-cruncher to perform the calculation. This time around, the Compute Engine instance was configured with 128 vCPUs, 864 GB of RAM and 100 Gbps of egress bandwidth. For comparison, the 2019 calculation had just 16 Gbps of egress bandwidth. The program ran for a total of 157 days, 23 hours, 31 minutes and 7.651 seconds, performing 43.5 PB of reads and 38.5 PB of writes in the process.
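For a rough sense of what provisioning a node of that size looks like programmatically, here is a minimal sketch using the google-cloud-compute Python client. This is not the setup Google actually used (the team drove provisioning with Terraform, as noted below); the project ID, zone, instance name, boot image and the n2-highmem-128 machine type (which lines up with the 128 vCPU / 864 GB figure quoted above) are illustrative assumptions.

```python
from google.cloud import compute_v1


def create_pi_node(project_id: str, zone: str, name: str) -> None:
    """Create one large Compute Engine VM (illustrative sketch only)."""
    # Boot disk backed by a public Debian image -- an illustrative choice.
    boot_disk = compute_v1.AttachedDisk(
        boot=True,
        auto_delete=True,
        initialize_params=compute_v1.AttachedDiskInitializeParams(
            source_image="projects/debian-cloud/global/images/family/debian-12",
            disk_size_gb=100,
        ),
    )

    # Default VPC network; no external IP is configured here.
    nic = compute_v1.NetworkInterface(network="global/networks/default")

    instance = compute_v1.Instance(
        name=name,
        # n2-highmem-128 offers 128 vCPUs and 864 GB of RAM; the exact
        # machine type is an assumption made for this sketch.
        machine_type=f"zones/{zone}/machineTypes/n2-highmem-128",
        disks=[boot_disk],
        network_interfaces=[nic],
    )

    client = compute_v1.InstancesClient()
    operation = client.insert(project=project_id, zone=zone, instance_resource=instance)
    operation.result()  # Block until the create operation finishes.
    print(f"Created instance {name} in {zone}")


if __name__ == "__main__":
    create_pi_node("my-project", "us-central1-a", "pi-compute-node")
```

A run at this scale also needs far more scratch storage than a single boot disk provides; that part of the setup is omitted here for brevity.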
(Image: History of π computation from ancient times through today)

Emma Haruka Iwao, a developer advocate at Google, said the team used Terraform to set up and manage the cluster. They also wrote a program that runs y-cruncher with different parameters and automates much of the measurement. All told, the tweaks made the program about twice as fast. Why keep going at this point? As Iwao highlights, pi calculations can serve as a yardstick for charting the progress of processing power over time. In this specific instance, the run also demonstrates the capabilities of Google's cloud infrastructure and the reliability it affords. Google has published the scripts it used over on GitHub for those interested in digging deeper into the code.
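Iwao's description suggests a thin wrapper: launch y-cruncher with a given set of options, time the run, and record the result. A minimal Python sketch of that idea might look like the following; the binary path and the option strings are placeholders rather than real y-cruncher switches, and the actual scripts and measurement logic live in Google's published repository.

```python
import csv
import subprocess
import time

# Path to the y-cruncher binary -- an assumption for this sketch.
BINARY = "./y-cruncher"

# Parameter sets to sweep over. These are placeholders, not real
# y-cruncher switches; substitute the options you want to compare.
PARAMETER_SETS = [
    ["placeholder-option-a"],
    ["placeholder-option-b"],
]


def run_once(options: list[str]) -> float:
    """Launch the binary with one set of options and return elapsed seconds."""
    start = time.monotonic()
    subprocess.run([BINARY, *options], check=True)
    return time.monotonic() - start


def main() -> None:
    # Record each run's options and wall-clock time in a small CSV report.
    with open("timings.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["options", "elapsed_seconds"])
        for options in PARAMETER_SETS:
            elapsed = run_once(options)
            writer.writerow([" ".join(options), f"{elapsed:.3f}"])
            print(f"{' '.join(options)}: {elapsed:.1f} s")


if __name__ == "__main__":
    main()
```

Sweeping parameters this way and comparing wall-clock times is the kind of automation that makes a "twice as fast" claim measurable rather than anecdotal.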