> In a frontside design, the silicon substrate can be as thick as 750 micrometers. Because silicon conducts heat well, this relatively bulky layer helps control hot spots by spreading heat from the transistors laterally. Adding backside technologies, however, requires thinning the substrate to about 1 mm to provide access to the transistors from the back.
This is a typo, right? 1 mm is thicker, not thinner, than 750 micrometers. I assume 1 µm was meant?
I think you're right that 1 µm was meant, given the orders of magnitude in other sources, e.g. 200 µm -> 0.3 µm in this white paper:
https://www.cadence.com/en_US/home/resources/white-papers/th...
Wafers on some semiconductor processes are 0.3 m in diameter. You could not practically handle a 1 µm thick wafer 0.3 m in diameter without shattering it. 0.75 mm is a reasonable overall wafer thickness.
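As a quick sanity check on how extreme that geometry would be, here is a back-of-the-envelope aspect-ratio comparison; the diameter and thicknesses are taken from the comments above, and the sheet-of-paper analogy is just an illustration:

```python
# Aspect ratio of a 300 mm wafer at standard vs. hypothetical 1 µm thickness.
# Figures (0.3 m diameter, 750 µm / 1 µm) come from the thread above.

diameter_m = 0.3
for thickness_um, label in [(750, "standard 750 µm wafer"), (1, "hypothetical 1 µm wafer")]:
    ratio = diameter_m / (thickness_um * 1e-6)
    print(f"{label}: diameter/thickness ~= {ratio:,.0f} : 1")

# A 0.1 mm sheet of printer paper with the same 300,000:1 ratio as the 1 µm case
# would have to be 30 m across.
print(f"equivalent paper sheet width: {0.1e-3 * 300_000:.0f} m")
```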
Who's gonna pull the trigger on beryllium oxide mounting packages first?
It's the holy grail of having thermal conductivity somewhere between aluminum and copper while being as electrically insulating as a ceramic. You can put the silicon die directly on it.
The problem is that the dust from it is terrifyingly toxic, but in its finished form it's "safe to handle".
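For a rough sense of what BeO buys over more common packaging materials, here is a minimal 1-D conduction comparison; the conductivities are typical room-temperature handbook values and the slab geometry is made up for illustration:

```python
# Rough 1-D conduction comparison of candidate mounting materials.
# Conductivities are typical room-temperature handbook values (W/m.K);
# the 1 mm slab and 10 mm x 10 mm die footprint are illustrative assumptions.

materials_k = {
    "copper":            400,  # great conductor, but not an electrical insulator
    "beryllium oxide":   285,
    "aluminum (metal)":  237,
    "aluminum nitride":  170,
    "alumina (Al2O3)":    30,
}

thickness = 1e-3           # m
area = 10e-3 * 10e-3       # m^2

for name, k in materials_k.items():
    r_th = thickness / (k * area)   # conduction resistance in K/W
    print(f"{name:18s} k = {k:3d} W/m.K  ->  R_th ~= {r_th:.3f} K/W")
```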
The article mentions backside (underside) power distribution, capacitors to help regulate voltage (thus allowing tighter tolerances and lower voltage and operating power), voltage regulation under the chip, and finally dual-layer stacking on top of the above, as potential avenues to spread heat dissipation.
I can't help but wonder: where exactly is that heat supposed to go on the underside of the chip? Modern CPUs practically float atop a bed of nails.
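On the parenthetical about tighter tolerances allowing lower voltage and power: the lever is that dynamic CMOS power scales roughly with the square of the supply voltage, so trimming the voltage guardband pays off quadratically. A minimal sketch, with made-up guardband numbers rather than anything from the article:

```python
# Dynamic CMOS power scales roughly as P ~ alpha * C * V^2 * f.
# The 5% guardband reduction below is an arbitrary illustration.

def dynamic_power(v, f, c_eff=1.0, alpha=1.0):
    """Relative dynamic power for supply voltage v and clock frequency f."""
    return alpha * c_eff * v**2 * f

f_clk = 3e9
v_nominal = 0.75            # hypothetical nominal supply (V)
v_tight = v_nominal * 0.95  # same silicon with 5% less voltage guardband

saving = 1 - dynamic_power(v_tight, f_clk) / dynamic_power(v_nominal, f_clk)
print(f"dynamic power saved by a 5% lower supply: {saving * 100:.1f}%")  # ~9.8%
```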
We could also explore the idea that the von Neumann architecture isn't the best choice for the future. Having trillions of transistors just waiting their turn to hand off data as fast as possible doesn't seem sane to me.
One game that can be played is to use isotopically pure Si-28 in place of natural silicon. The thermal conductivity of Si-28 is 10% higher than that of natural Si at room temperature (but 8x higher at 26 K).
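Taking that 10% room-temperature figure at face value, the effect on the conduction drop through a full-thickness substrate is modest; the conductivity, thickness, and heat-flux numbers below are illustrative assumptions:

```python
# 1-D temperature drop across a 750 µm silicon substrate, natural Si vs. Si-28,
# using the ~10% room-temperature conductivity gain quoted above.
# Conductivity, thickness, and heat flux are illustrative assumptions.

k_si_natural = 150.0              # W/m.K, typical room-temperature value
k_si28 = k_si_natural * 1.10      # +10% per the comment above
thickness = 750e-6                # m, full-thickness frontside substrate
heat_flux = 100.0 * 1e4           # W/m^2  (100 W/cm^2 average die flux)

for label, k in [("natural Si", k_si_natural), ("Si-28", k_si28)]:
    delta_t = heat_flux * thickness / k
    print(f"{label:10s}: dT across substrate ~= {delta_t:.2f} K")
# A fraction of a kelvin at room temperature; the 8x gain at 26 K only helps
# if the chip actually runs that cold.
```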
I guess future designs will have a cooling ring integrated into the chiplets: the dark silicon starts up, finds the instructions and cache in the memory shared with the previously hot silicon, computes until heat death, and stores everything it did in the successor chiplet. It's all on a ring-like structure that is always boiling in the cooling liquid it's directly immersed in, going round and round forever. It reminds me of Iain M. Banks's setup of the fire planet Echronedal in The Player of Games.
With AI, both GPUs and CPUs are pushed to the absolute limit, and we'll be putting 750 W to 1000 W per unit with liquid cooling in datacenters within the next 5-8 years.
I wonder if we can actually use that heat for something useful.
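Two quick back-of-the-envelope numbers put the 1000 W figure and the heat-reuse question in perspective: the coolant flow one package needs, and the Carnot ceiling on turning such low-grade heat back into work. The 10 K coolant rise and the 60 °C / 25 °C temperatures are assumptions for illustration:

```python
# Back-of-the-envelope numbers for a ~1000 W liquid-cooled package.
# The 10 K coolant rise and the 60 C / 25 C temperatures are assumptions.

q = 1000.0                 # W per package
cp_water = 4186.0          # J/(kg.K)
delta_t = 10.0             # K coolant temperature rise through the cold plate
mass_flow = q / (cp_water * delta_t)        # kg/s
litres_per_min = mass_flow * 60.0           # ~1 kg per litre for water
print(f"coolant flow: {mass_flow * 1000:.0f} g/s ~= {litres_per_min:.1f} L/min per package")

# Why converting the heat back to electricity is unattractive: even an ideal
# heat engine between ~60 C coolant and ~25 C ambient recovers little work.
t_hot, t_cold = 333.0, 298.0   # K
print(f"Carnot limit on work recovery: {(1 - t_cold / t_hot) * 100:.0f}%")
# So the heat is more valuable used directly (e.g. heating buildings) than converted.
```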
Pentium 4, GeForce FX 5800, PS3, Xbox 360, Nintendo Wii, MacBook 20??-2019: "First time?"
Is there a reason we can’t put heat pipes directly into chips? Or underneath them?
Speaking of dissipation, how is the progress in reversible computing going?
Isn’t heat just wasted energy?
Seems my M1 MacBook Air generates almost no heat.
<looks at the ARM Macs> You sure?