Hey, can someone else double-check whether the formula used on Page 2 for Perf/Dollar actually makes sense?
They had three measures: A) Flops: operations per second. B) Cost: dollars per hour. C) Runtime: seconds per task.
I expected the formula to be Flops/Cost, resulting in units of operations per dollar.
Instead it was computed as Flops / (Cost * Runtime), to get some units that don't make sense to me — operations * tasks per dollar seconds?
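To make the unit question concrete, here's a throwaway sketch with made-up numbers (nothing below comes from the article; the comments just track the units):

```python
# Made-up numbers purely to track the units; nothing here comes from the article.
SECONDS_PER_HOUR = 3600

flops   = 5.0e11   # A) performance: operations per second
cost    = 2.00     # B) price:       dollars per hour
runtime = 10.0     # C) runtime:     seconds per task

# What I expected: performance divided by the hourly rate, which comes out to
# plain operations per dollar once $/hour is converted to $/second.
ops_per_dollar = flops / (cost / SECONDS_PER_HOUR)
print(f"Flops / Cost             = {ops_per_dollar:.3e} ops per dollar")

# What Page 2 appears to compute: performance divided by (rate * runtime).
# Units: (ops/s) / (($/s) * (s/task)) = ops * tasks per (dollar * second),
# which is the part that doesn't obviously mean anything.
page2_metric = flops / ((cost / SECONDS_PER_HOUR) * runtime)
print(f"Flops / (Cost * Runtime) = {page2_metric:.3e} ops*tasks per dollar*second")
```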
The cost is a rate, like $2 per hour, not a purchase price.
So faster CPUs get work done more quickly and may justify a higher cost per hour.
Ah, the graph is wacky but the text makes sense; looks like a disconnect:
1. C4A Axion: $2.16 reported cost per hour, test took an average of 9 seconds per run: cost approximately $0.005 per run.
2. T2A Ampere Altra: $1.85 reported cost per hour, test took an average of 17 seconds per run: cost approximately $0.009 per run.
3. C4 Xeon Platinum EMR: $2.37 reported cost per hour, test took an average of 17 seconds per run: cost approximately $0.011 per run.
So the C4A costs a bit more per hour ($2.16 vs. $1.85), but perf/$ works out to roughly 2x in favor of the C4A.
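A minimal sketch of that per-run arithmetic, using the hourly rates and average runtimes quoted above (the 3600 is just seconds per hour):

```python
# Cost per benchmark run = (dollars per hour) * (seconds per run) / 3600.
instances = {
    "C4A Axion":            (2.16, 9),    # ($ per hour, avg seconds per run)
    "T2A Ampere Altra":     (1.85, 17),
    "C4 Xeon Platinum EMR": (2.37, 17),
}

cost_per_run = {
    name: rate * seconds / 3600 for name, (rate, seconds) in instances.items()
}

# Roughly $0.0054, $0.0087 and $0.0112 per run respectively.
for name, cost in cost_per_run.items():
    print(f"{name:22s} ${cost:.4f} per run")

# Relative cost per unit of work (lower is better): the other two land around
# 1.6x and 2.1x the C4A's cost per run.
baseline = cost_per_run["C4A Axion"]
for name, cost in cost_per_run.items():
    print(f"{name:22s} {cost / baseline:.2f}x the C4A cost per run")
```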
Yup, you're right. Surprising for a site so dedicated to benchmarks.
I agree, all of the perf/cost graphs are nonsensical.
> Not only was the Google Axion processors delivering great performance in Google Cloud but doing so with the best performance-per-dollar too.
Nice upgrade for Google's customers. I'm guessing it does so at much lower wattage as well.
> These new C4A instances are advertised as offering up to 50% better performance and up to 60% better energy efficiency than their current generation x86 instance types.
Hardware (see also, Google's TPUs and their performance vs. energy cost) is one reason why I'm fairly bullish on Google.
Hardware companies have yearly releases and work closely with their customers. None of this describes Google, and it's the reason why Nvidia is a trillion-dollar company despite Google's TPUs existing prior. Basically no one outside of Google uses Google hardware. If it's a generic ARM target, someone might use it because it's low effort, but it's not exactly a value add.