For those not aware, Shift Left[1] is (at this point) an old term that was coined for a specific use case but now refers to a general concept: if you do necessary work earlier in the product cycle, you reduce cost and time in the long run, even if it feels like it takes longer to "get somewhere" early on. I think this[2] article is a good no-nonsense explainer for "Why Shift Left?".
[1] https://en.wikipedia.org/wiki/Shift-left_testing [2] https://www.dynatrace.com/news/blog/what-is-shift-left-and-w...
So “shift left” is roughly equivalent to “tests first” or “TDD”?
More broadly, it covers moving activities that are normally performed at a later stage to an earlier point in the process. The idea is that defects found later in the process are more costly to fix.
That's just one example of it.
Other examples:
* Replacing automated tests with (quicker) type checking and running it in a git pre-commit hook instead of in CI (a minimal sketch of such a hook is below).
* Replacing slower tests with faster tests.
* Running tests before merging a PR instead of after.
* Replacing a suite of manual tests with automated tests.
etc.
I would say tests first/TDD is a form of shifting left, but it can encompass more than that.
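To make the commit-hook bullet above concrete, here is a minimal sketch of a pre-commit hook, assuming a Python project type-checked with mypy. The hook location (.git/hooks/pre-commit), the mypy invocation, and the helper names are illustrative assumptions, not something from the thread.

    #!/usr/bin/env python3
    # A minimal sketch of a git pre-commit hook that shifts type checking left:
    # it runs mypy on the staged Python files before the commit is created,
    # rather than waiting for CI to report type errors.
    # Assumes mypy is installed; save as .git/hooks/pre-commit and chmod +x it.
    import subprocess
    import sys


    def staged_python_files() -> list[str]:
        """Return the staged .py files (added, copied, or modified)."""
        out = subprocess.run(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [path for path in out.splitlines() if path.endswith(".py")]


    def main() -> int:
        files = staged_python_files()
        if not files:
            return 0  # nothing to type-check, let the commit through
        result = subprocess.run(["mypy", *files])
        if result.returncode != 0:
            print("pre-commit: mypy found type errors; commit aborted.", file=sys.stderr)
        return result.returncode  # a non-zero exit blocks the commit


    if __name__ == "__main__":
        sys.exit(main())

The point is just the shift-left one from the list: the same check CI would eventually run happens seconds after you hit commit, when the defect is cheapest to fix.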
It can also mean manual testing earlier. Valve credits playtesting early and playtesting often for their success with Half-Life, Half-Life 2, and other games.
Not just when they playtested but how: silently watching someone play, only taking notes. No debrief.
Source: the week I spent in the Valve offices in 2005, having Valve staff play our in-development HL2 mod.
In my org [1], "shift left" means developers do more work sooner.
So before product requirements are clearly defined, we start the build early on assumptions that can change. Sometimes it works, sometimes it does not, so on balance we end up roughly net neutral.
But an executive somewhere up the management chain can claim more productivity. Lame.
[1] I work at a bank.
I think that's a myopic view. Getting something, anything, into the hands of your potential users that's even vaguely in the ballpark of a solution-shaped thing gives you extremely valuable information, both on what is actually needed and, more importantly to me, on what you don't have to build at all.
I consider it a success when an entire line of work is scrapped because, after initial testing, the users say they wouldn't use it. Depending on the project's scope, that could be 6-7 figures of dev time not wasted right there.
> Optimization strategies have shifted from simple power, performance, and area (PPA) metrics to system-level metrics, such as performance per watt. “If you go back into the 1990s, 2000s, the road map was very clear,”
Tell me you work for Intel without telling me you work for Intel.
> says Chris Auth, director of advanced technology programs at Intel Foundry.
Yeah that’s what I thought. The breathlessness of Intel figuring out things that everyone else figured out twenty years ago doesn’t bode well for their future recovery. They will continue to be the laughing stock of the industry if they can’t find more self reflection than this.
Whether this is their public facing or internal philosophy hardly matters. Over this sort of time frame most companies come to believe their own PR.
Intel has had a few bad years, but frankly I feel like they could fall a lot lower. They aren't as down bad as AMD was during the Bulldozer years, or Apple during the PowerPC years, or even Samsung's early Exynos chipsets. The absolute worst thing they've done in the past 5 years was fab on TSMC silicon, which half the industry is guilty of at this point.
You can absolutely shoot your feet off trying to modernize too quickly. Intel will be the laughingstock if 18A never makes it to market and their CPU designs start losing in earnest to their competitors. But right now, in a relative sense, Intel isn't even down for the count.
Intel has failed pretty badly IMO. Fabbing at TSMC might actually have been a good idea, except that every other component of Arrow Lake is problematic: huge tile-to-tile latencies, special chiplets that aren't reusable in any other design, removal of hyperthreading, etc. Intel's last-gen CPU is in general faster than the new gen due to all the various issues.
And that’s just the current product! The last two gens are unreliable, quickly killing themselves with too high voltage and causing endless BSODs.
The culture and methods of ex-Intel people at the management level are telling as well, from my experience at my last job at least.
(My opinions are my own, not my current employer's, and a lot of ex-Intel people are awesome!)
Fabbing at TSMC is an embarrassing but great decision. The design side of Intel is the revenue/profit generating side. They can't let the failures of their foundries hold back their design side and leave them hopelessly behind AMD and ARM. Once they've regained their shares of the server market or at least stabilized it, they can start shifting some of their fabbing to Intel Foundry Services, who are going to really suck at the beginning. But no one else is going to take that chance on those foundries if not Intel's design side. The foundry side will need that stream of business while they work out their processes.
We'll see. I mostly object to the "vultures circling" narrative that HN seems to be attached to. Intel's current position is not unprecedented, and people have been speculating Intel would have a rough catch-up since the "14nm+++" memes were in vogue. But they still have their fabs (and wisely spun them out into their own business), and their chip designs, while pretty faulty, successfully brought x86 to the big.LITTLE core arrangement. They've beaten AMD to the punch on a number of key technologies, and while I still think AMD has the better mobile hardware, it still feels like the desktop stuff is a toss-up. Server stuff... glances at Gaudi and Xeon, then at Nvidia ...let's not talk about server stuff.
A lot of hopes and promises are riding on 18A being the savior for both Intel Foundry Services and the Intel chips wholesale. If we get official confirmation that it's been cancelled so Intel can focus on something else, then it will signal the end of Intel as we know it.
>> successfully brought x86 to the big.LITTLE core arrangement.
Really? I thought they said using E-cores would be better than hyperthreading. AMD has doubled down on hyperthreading, putting a second decoder in each core that doesn't directly benefit single-thread perf. So Intel's 24 cores are now competitive with (actually losing to) 16 Zen 5 cores. And that's without using AVX-512, which Arrow Lake doesn't even support.
I was never a fan of big.little for desktop or even laptops.
What else would they focus on?
I mean, they were on 14nm until just about 2022; those memes didn't come from nowhere. And it's not even that long ago.
As soon as I saw “shift left” I knew I wanted to double down.