Just seven (!!) years ago, I wrote a blog post about the progress in computing power, right around the time Moore's law appeared to be ending. In it, I said that the change in processing power (and processing power per dollar) from 2001 -> 2012 was unprecedented and probably never going to happen again.
TL;DR: 2315x growth in efficiency/power in 10 years FOR LESS COST is pretty great.
Well, we're about 8 years beyond that, and it's... still pretty true! In 2012, the top supercomputer delivered 17 Petaflops, and today we're at 148 Petaflops, an incredibly modest 8.7x increase over 8 years. Boo hoo.
ON THE OTHER HAND, Google's public numbers for TPU pods (100+ Petaflops as of January 2020) are about $400k for a three-year commitment, which works out to about $4,000 per Petaflop. Compare that with the top supercomputer: 148 Petaflops for about $200,000,000, or roughly $1.35M per Petaflop.
And that compares with $5.7M per Petaflop in 2012, and $14B ($14,000M) per Petaflop in 2001.
So, while the top end hasn't quite kept pace, the price has been dropping MUCH MUCH MUCH faster: specifically, 1425x cheaper for the same performance in 8 years.
CRAZY!
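For the skeptical, here's the arithmetic as a quick back-of-the-envelope Python sketch. All the figures are the ones quoted above; the variable names are just mine for illustration:

```python
# Back-of-the-envelope math for the numbers quoted in this post.

top_2012_pflops = 17             # top supercomputer, 2012 (Petaflops)
top_2020_pflops = 148            # top supercomputer, 2020 (Petaflops)
top_2020_cost = 200_000_000      # ~$200M for the top machine

tpu_pod_pflops = 100             # Google TPU pod, January 2020
tpu_pod_cost = 400_000           # ~$400k for a three-year commitment

cost_per_pflop_2012 = 5_700_000  # ~$5.7M per Petaflop in 2012

print(f"Top-end growth, 2012 -> 2020: {top_2020_pflops / top_2012_pflops:.1f}x")  # 8.7x
print(f"Supercomputer $/Petaflop: ${top_2020_cost / top_2020_pflops:,.0f}")       # ~$1.35M

tpu_cost_per_pflop = tpu_pod_cost / tpu_pod_pflops
print(f"TPU pod $/Petaflop: ${tpu_cost_per_pflop:,.0f}")                          # $4,000
print(f"Cheaper since 2012: {cost_per_pflop_2012 / tpu_cost_per_pflop:.0f}x")     # 1425x
```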
(Yes, I know that comparing general-purpose compute in a supercomputer with specialized performance in an ML accelerator is a bit of an apples-to-oranges comparison ... but it's as close as we can get!)