Last year, a team from the University of Applied Sciences of the Grisons raised the record, bringing the total to 62.8 trillion decimal places.
Google used y-cruncher to perform the calculation. This time around, the Compute Engine was equipped with 128 vCPUs, 864 GB of RAM, and 100 Gbps of egress bandwidth. For comparison, the 2019 calculation had just 16 Gbps of egress bandwidth.
The program ran for 157 days, 23 hours, 31 minutes, and 7.651 seconds, reading 43.5 PB and writing 38.5 PB of data in the process.
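To put those I/O figures in perspective, a quick back-of-the-envelope calculation (assuming decimal petabytes, i.e. 10^15 bytes) suggests a sustained storage throughput of roughly 3 GB/s of reads and slightly less of writes over the whole run:

```python
# Rough average storage throughput implied by the published figures:
# 157 days, 23 h, 31 min of runtime; 43.5 PB read, 38.5 PB written.
SECONDS = ((157 * 24 + 23) * 60 + 31) * 60  # total runtime in seconds (ignoring the 7.651 s)
PB = 10**15  # decimal petabyte, in bytes (assumed unit)

read_bps = 43.5 * PB / SECONDS
write_bps = 38.5 * PB / SECONDS

print(f"average read rate:  {read_bps / 10**9:.1f} GB/s")   # ~3.2 GB/s
print(f"average write rate: {write_bps / 10**9:.1f} GB/s")  # ~2.8 GB/s
```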
(History of computation from ancient times through today)
According to Emma Haruka Iwao, a Google developer advocate, the team used Terraform to build and manage the cluster. They also created a program that tried out different parameters and automated much of the measurement. Altogether, these tweaks made the program about twice as fast.
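The automation program itself is not described in detail here, but the general pattern it suggests, sweeping over candidate parameter values, running the workload, and recording timings, could be sketched roughly as follows; the benchmark command and parameter names below are purely hypothetical placeholders, not Google's actual tooling.

```python
import csv
import itertools
import subprocess
import time

# Hypothetical parameter grid; the real program and its settings are not
# public in this article, so the names and values here are illustrative only.
param_grid = {
    "threads": [64, 96, 128],
    "io_buffer_mb": [256, 512, 1024],
}

with open("benchmark_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["threads", "io_buffer_mb", "seconds"])
    for threads, io_buffer in itertools.product(*param_grid.values()):
        start = time.monotonic()
        # Placeholder invocation; substitute the actual workload being measured.
        subprocess.run(
            ["./benchmark", f"--threads={threads}", f"--io-buffer-mb={io_buffer}"],
            check=True,
        )
        writer.writerow([threads, io_buffer, round(time.monotonic() - start, 1)])
```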
Why keep going at this point? Pi calculations serve as a measuring stick for the advancement of processing power over time. In this particular instance, the run also demonstrates the dependability of Google's Cloud infrastructure.
For those interested in deeper research, Google has published the scripts it used on GitHub.