What I find most interesting about this is that it shows they believe there is nothing unique at Meta related to AI. There is no resource, whether people or computing power, that they can't get elsewhere for whatever they believe would be more interesting to them.
I mention this because it feels analogous to military research, where people "dream" of how advanced the military is, how far ahead it is of public research... and yet, it seems to be a recurring myth they love to sustain.
So the signal I get here is that AI "labs" in Big Tech have nothing worth waiting for around the corner; it's just more of the same, and boring for the people who stick around.
> What I find most interesting with this is that it shows they believe there is nothing unique at Meta related to AI
Whether or not this is the case, I don't get this as being the reason for Soumith leaving - it sounds as if he is just ready for a change.
Still, it is noticeable that while many of the AI companies claim their version of "AGI" is just around the corner, developers and staff don't appear to be particularly excited about this (I assume they realize it is just hype, not some momentous advance around the corner), and leave to pursue different things, such as Mira Murati starting a fine-tuning company, Karpathy going back to education, others jumping ship (typically from OpenAI to Anthropic), etc.
I think you might be reading a bit too much into this.
He’s been with Meta for 11 years and is likely in a very comfortable financial position, given the substantial stock options he’s received over that time.
He also mentioned the arrival of a new child, and it’s well known that Meta's work-life balance isn’t always ideal.
On top of that, Meta, like many major tech companies, has been shifting its focus toward LLM-based AI, moving away from more traditional PyTorch use cases.
Considering all of this, it seems like a natural time for him to move on and pursue new, more exciting opportunities.
If you can afford to support yourself, which I’m sure he can, there’s a serenity to working on small projects that are not in the public eye. It may simply be that he craves some quiet time that enables him to focus on his family and himself.
I don't think that's the read? Guy says he wants to work on something small. If you want to work on something big you probably want to be in a big corp to have the resources to do the big thing.
Also absolutely unknown if the "new thing" is AI-related at all!
> If you want to work on something big you probably want to be in a big corp to have the resources to do the big thing.
If anything, the reverse seems to be true: if you want to work on something big, you want to be in a small company, sufficiently funded, filled with great people, yet not "big". That's when "something big" seems more likely to happen.
In contrast, as far as I can tell, the bigger a company gets, the less likely it is to actually come up with "something big". Most of the time you need (creative) constraints for the results to end up being actually innovative; otherwise you end up like IBM and Meta, throwing money at stuff and getting some results, but nothing really out of the ordinary considering what's happening elsewhere in their ecosystems.
Well, he left, so whatever is coming next, AI related or not, "small" or not (small for them might be reaching just a million people; he wrote that he "lead the software layer that powers the entire AI industry", so his notion of scale is probably unlike mine, maybe yours too), is more exciting to him than whatever he could do next with all of Meta's resources.
Edit: to be clear, I didn't mean to imply their next thing is AI related, solely that they obviously know more about AI at Meta than e.g. XR at Meta, just because that's their expertise.
Your assumption is a bad read because it only works if his set of life priorities contains nothing else but maximizing his impact in the world of AI.
If he has just one other priority in that set (which could still include a robotic min/max of AI impact), then your assumption fails.
It reads to me as if he was the victim of office politics and decided to say "fuck it" instead of being transferred to something else within Meta.
> It reads to me as if he was the victim of office politics and decided to say "fuck it" instead of being transferred to something else within Meta.
It looks like he'd already been transferred once (to Infra) and maybe didn't want to do it again.
Negative, what you should have taken away is that it's the people. He mentions standing up clusters. Small shops can't afford clusters. Ignore the technical aspect of this article and read it for what it is: a thank-you note to the people he has worked with on amazing projects. Research in a bubble of one isn't very useful. Research in a small team with a Meta budget is extremely useful. With the right people.
> where people "dream" of how advanced the military is
If you've ever worked on "advanced military grade" equipment, you'd know better.
It tends to be what you'd euphemistically call "well-proven technology", built down to a price by the lowest bidder, by comparatively unskilled labour.
The most shocking thing about the "captured" Russian drones is they use name-brand Raspberry Pis inside. I'm prepared to bet the American versions use whatever AliExpress crap is on special this week. The UK stuff definitely does.
Isn't that exactly the point parent was trying to make? Maybe I misunderstood their comment, but it seems like you're repeating what they said.
> I mention this because it feels analogous to military research, where people "dream" of how advanced the military is, how forward they are compared to public research... and yet, it seems to be a recurring myth they love to sustain.
I don't think that you can read this from the blog post at all, but it gives me a chuckle to think how the quest for AGI at Meta may be "The Men Who Stare at Goats" all over again.
I'm totally speculating. I have no extra information there.
It just makes me think of all the staff, technical staff, that left OpenAI recently. Altman was making grand claims about what was coming next.
Well, we know what followed; namely, I don't think any researcher who left knowing what was in the pipeline feels like they missed much in terms of access.
Just checked, BTW, and... the premise looks fun but the score is too low https://www.rottentomatoes.com/m/men_who_stare_at_goats Was it actually good as a movie, not just the idea behind it?
It's more the idea behind it. Considering the great cast, the movie could have been much better.
The non-fiction book behind it is probably a better comparison than the film adaptation, if you think Meta are doing goat-staring (I don't think they're especially bad on this issue compared to their rivals).
That man has an infectious enthusiasm. I remember the DCGAN paper inspired me to try getting the (Lua) Torch code to work, and I tried it on the Oxford flowers dataset early on. It worked surprisingly well, and Soumith Chintala even shared it around on social media, surprised at how well it worked on such a small dataset. Of course, back then we didn't really appreciate the problem of mode collapse.
PyTorch and the old Lua Torch were a pleasure to work with compared to the contemporary TensorFlow. Lots of S.C.'s code was copied around liberally; it had its quirks (I remember the DCGAN code had a pretty odd way of doing parameter passing), but it was also really easy to understand and made random people like me feel like we had suddenly stumbled onto something crazy powerful (which we had!). It was wonderfully hackable.
As a loyal JAX user, I hope they can play catch-up. PyTorch has dominated the AI scene since TF1 fumbled the ball at the 10-yard line. What Matt Johnson has done turning Autograd into JAX is hopefully going to be worthy of as much praise as what Soumith has received.
> PyTorch has dominated the AI scene since TF1 fumbled the ball at 10th yard line
can you explain why you think TensorFlow fumbled?
I see good answers already, but here's a concrete example:
At my university we had to decide between the two libraries, so, as a test, we decided to write a language model from scratch. The first minor problem with TF was that (if memory serves me right) you were supposed to declare your network "backwards" - instead of saying "A -> B -> C" you had to declare "C(B(A))". The major problem, however, was that there was no way to add debug messages - either your network worked or it didn't. To make matters worse, the "official" TF tutorial on how to write a Seq2Seq model didn't compile because the library had changed, and the bug reports for that were met for years with "we are changing the API, so we'll fix the example once we're done".
PyTorch, by comparison, had the advantage of a Python-based interface - you simply defined classes like you always did (including debug statements!), connected them as variables, and that was that. So when my beginner colleagues and I had to decide which library to pick, "the one that's not a nightmare to debug" sounded much better than "the one that's more efficient if you have several billion training datapoints and a cluster". My colleagues and I then went on to become professionals, and we all brought PyTorch with us.
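For illustration, a minimal sketch of that "define it like a normal Python class, debug it with ordinary print statements" style; the layer sizes and module names are made up, not taken from the comment above:

    import torch
    import torch.nn as nn

    class TinyLM(nn.Module):
        # Hypothetical sizes, just to show the define-by-run style.
        def __init__(self, vocab_size=1000, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.head = nn.Linear(hidden, vocab_size)

        def forward(self, tokens):
            x = self.embed(tokens)
            print("embedded:", x.shape)   # ordinary debug statement, runs eagerly
            out, _ = self.rnn(x)
            return self.head(out)

    model = TinyLM()
    logits = model(torch.randint(0, 1000, (2, 16)))  # reads "A -> B -> C", not "C(B(A))"
    print(logits.shape)                              # torch.Size([2, 16, 1000])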
In 2018, I co-wrote a blog post with the inflammatory title “Don’t use TensorFlow, try PyTorch instead” (https://news.ycombinator.com/item?id=17415321). As it gained traction here, it was changed to “Keras vs PyTorch” (some edgy things that work for a private blog are not good for a corporate one). Yet the initial title stuck, and you can see it resonated well with the crowd.
TensorFlow (while a huge step on top of Theano) had issues with a strange API, mixing needlessly complex parts (even for the simplest layers) with magic-box-like optimization.
There was Keras, which I liked and used before it was cool (when it still supported the Theano backend), and it was the right decision for TF to incorporate it as the default API. But it was 1–2 years too late.
At the same time, I initially looked at PyTorch as some intern’s summer project porting from Lua to Python. I expected an imitation of the original Torch. Yet the more it developed, the better it was, with (at least to my mind) the perfect level of abstraction. On the one hand, you can easily add two tensors, as if it were NumPy (and print its values in Python, which was impossible with TF at that time). On the other hand, you can wrap anything (from just a simple operation to a huge network) in an nn.Module. So it offered this natural hierarchical approach to deep learning. It offered building blocks that can be easily created, composed, debugged, and reused. It offered a natural way of picking the abstraction level you want to work with, so it worked well for industry and experimentation with novel architectures.
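As a small illustration of those two abstraction levels (the tensors and the module here are made up):

    import torch
    import torch.nn as nn

    # Low level: tensors behave like NumPy arrays, and you can print their values directly.
    a = torch.randn(3, 4)
    b = torch.randn(3, 4)
    print(a + b)

    # Higher level: wrap anything, from a single operation to a whole network, in an nn.Module...
    class Scale(nn.Module):
        def __init__(self, factor):
            super().__init__()
            self.factor = factor

        def forward(self, x):
            return x * self.factor

    # ...and compose modules hierarchically.
    block = nn.Sequential(Scale(2.0), nn.Linear(4, 8), nn.ReLU())
    print(block(a).shape)   # torch.Size([3, 8])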
So, while in 2016–2017 I was using Keras as the go-to for deep learning (https://p.migdal.pl/blog/2017/04/teaching-deep-learning/), in 2018 I saw the light of PyTorch and didn’t feel a need to look back. In 2019, even for the intro, I used PyTorch (https://github.com/stared/thinking-in-tensors-writing-in-pyt...).
Actually, I opened “Teaching deep learning” and smiled as I see how it evolved:
> There is a handful of popular deep learning libraries, including TensorFlow, Theano, Torch and Caffe. Each of them has Python interface (now also for Torch: PyTorch)
> [...]
> EDIT (July 2017): If you want a low-level framework, PyTorch may be the best way to start. It combines relatively brief and readable code (almost like Keras) but at the same time gives low-level access to all features (actually, more than TensorFlow).
> EDIT (June 2018): In Keras or PyTorch as your first deep learning framework I discuss pros and cons of starting learning deep learning with each of them.
The original TensorFlow had an API similar to the original Lua-based Torch (the predecessor to PyTorch) that required you to first build the network, node by node, then run it. PyTorch used a completely different, and much more convenient, approach, where the network is built automatically for you just by running the forward-pass code (and is then used for the backward pass), using both provided node types and arbitrary NumPy-compatible code. You're basically just writing differentiable code.
This new PyTorch approach was eventually supported by TensorFlow as well ("eager execution"), but the PyTorch approach was such a huge improvement that there was an immediate shift by many developers from TF to PyTorch, and TF never seemed able to regain the momentum.
TF also suffered from having a confusing array of alternate user libraries built on top of the core framework, none of which had great documentation, while PyTorch had a more focused approach and fantastic online support from the developer team.
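A minimal sketch of what "just writing differentiable code" means in practice; the function below is made up, but it shows gradients flowing through ordinary Python control flow:

    import torch

    def f(x):
        # Arbitrary Python: loops and branches are fine; the graph is recorded as the code runs.
        y = x
        for _ in range(3):
            y = y * x + 1.0
        return y.sum()

    x = torch.randn(5, requires_grad=True)
    loss = f(x)
    loss.backward()   # backward pass replays the graph recorded during the forward run
    print(x.grad)     # d(loss)/dx, same shape as x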
For me it was about 8 years ago. Back then TF was already bloated, and its bet on static compute graphs had two weaknesses: it made writing code verbose and it made debugging difficult.
The few people I knew back then used Keras instead. I switched to PyTorch for my next project, which was more "batteries included".
I'm no machine learning engineer, but I dabbled professionally with both frameworks a few years ago and the developer experience didn't even compare. The main issue with TF was that you could only choose between a powerful but incomprehensible, poorly documented [1], ultra-verbose and ever-changing low-level API, and an abstraction layer (Keras) that was too high-level to be really useful.
Maybe TF has gotten better since but at the time it really felt like an internal tool that Google decided to just throw into the wild. By contrast PyTorch offered a more reasonable level of abstraction along with excellent API documentation and tutorials, so it's no wonder that machine learning engineers (who are generally more interested in the science of the model than the technical implementation) ended up favoring it.
[1] The worst part was that Google only hosted the docs for the latest version of TF, so if you were stuck on an older version (because, oh I don't know, you wanted a stable environment to serve models in production), well tough luck. That certainly didn't gain TF any favors.
I personally believe TF1 was serving the needs of its core users. It provided a compilable compute graph with autodiff, and you got very efficient training and inference from it. There was a steep learning curve, but if you got past it, things worked very, very well. Distributed TF never really took off: it was buggy, and I think they made some wrong early design bets for performance reasons that should have been sacrificed in favor of simplicity.
I believe some years after the TF1 release, they realized the learning curve was too steep and they were losing users to PyTorch. I think the Cloud team was also attempting to sell customers on their amazing DL tech, which was falling flat. So they tried to keep the TF brand while totally changing the product under the hood by introducing imperative programming and gradient tapes. They killed TF1, upsetting those users, while not having a fully functioning TF2, all the while having plenty of documentation pointing to TF1 references that didn't work. Any new grad student made the simple choice of using a tool that was user-friendly and worked, which was PyTorch. And most old TF1 users hopped on the bandwagon.
Imagine a total newbie trying to fine-tune an image classifier, reusing some open source example code, about a decade ago.
If their folder of 10,000 labelled images contains one image that's a different size to the others, the training job will fail with an error about unexpected dimensions while concatenating.
But it won't be able to say the file's name, or that the problem is an input image of the wrong size. It'll just say it can't concatenate tensors of different sizes.
An experienced user will recognise the error immediately, and will have run a data cleansing script beforehand anyway. But it's not experienced users who bounce from frameworks, it's newbies.
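For illustration, a minimal sketch of the kind of pre-flight check that turns "can't concatenate tensors of different sizes" into a message naming the offending file; the folder path and expected size here are made up:

    from pathlib import Path
    from PIL import Image

    EXPECTED = (224, 224)          # hypothetical target size
    data_dir = Path("data/train")  # hypothetical dataset folder

    bad = []
    for path in sorted(data_dir.glob("**/*.jpg")):
        with Image.open(path) as img:
            if img.size != EXPECTED:
                bad.append((path, img.size))

    for path, size in bad:
        print(f"{path}: got {size}, expected {EXPECTED}")  # names the file, unlike the framework error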
> An experienced user will recognise the error immediately, and will have run a data cleansing script beforehand anyway. But it's not experienced users who bounce from frameworks, it's newbies.
Even seasoned developers will bounce away from frameworks or libraries - no matter if they're old dogs or the next hot thing - if the documentation isn't up to speed, or if simple, common tasks require wading through dozens of pages of documentation.
Writing good documentation is hard enough, writing relevant "common usage examples" is even harder... but keeping them up to date and working is a rarely seen art.
And the greatest art of all of it is logging. Soooo many libraries refuse to implement detailed structured logging in internal classes (despite particularly Java and PHP offering very powerful mechanisms), making it much more difficult to troubleshoot problems in the field.
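A minimal sketch of that library-side pattern, in Python rather than Java or PHP (the class and field names are hypothetical): log through a module-level logger, attach a NullHandler so the library stays silent by default, and let the application opt in to the detail it wants.

    import logging

    logger = logging.getLogger(__name__)
    logger.addHandler(logging.NullHandler())   # library stays quiet unless the app configures logging

    class ClusterClient:
        # Hypothetical internal class, shown only to illustrate where the logging calls go.
        def connect(self, host: str, port: int) -> None:
            logger.debug("connecting", extra={"host": host, "port": port})
            logger.info("connected", extra={"host": host, "port": port})

    # Application side: opt in to the level of detail you need.
    logging.basicConfig(level=logging.DEBUG)
    ClusterClient().connect("10.0.0.1", 4242)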
I only remember 2015 TF and I was wondering: why would I use Python to assemble a computational graph when what I really want is to write code and then differentiate through it?
Do you have experience in both JAX and PyTorch? Why do you prefer JAX?
Not OP. I have production/scale experience in PyTorch and toy/hobby experience in JAX. I wish I had the time or liberty to use JAX more. It consists of a small, orthogonal set of ideas that combine like Lego blocks. I can attempt to reason from first principles about performance. The documentation is super readable and strives to make you understand things.
JAX seems well engineered. One could argue so was TensorFlow. But the ideas behind JAX were built outside Google (autograd), so it has struck the right balance of being close to idiomatic Python/NumPy.
PyTorch is where the tailwinds are, though. It is a wildly successful project which has acquired a ton of code over the years. So it is a little harder to figure out how something works (say, torch.compile) from first principles.
Not OP. I prefer JAX for non-AI tasks in scientific computing because of the different mental model than PyTorch. In JAX, you think about functions and gradients of functions. In PyTorch you think about tensors which accumulate a gradient while being manipulated through functions. JAX just suits my way of thinking much better.
I also like that jax.jit forces you to write "functional" functions free of side effects or inplace array updates. It might feel weird at first (and not every algorithm is suited for this style) but ultimately it leads to clearer and faster code.
I am surprised that JIT in PyTorch gets so little attention. Maybe it's less impactful for PyTorch's usual usecase of large networks, as opposed to general scientific computing?
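For illustration, the two mental models described above on the same toy function (purely a sketch, nothing from the comments themselves):

    import jax
    import jax.numpy as jnp
    import torch

    # JAX: think in functions; the gradient is itself a function you can jit.
    def loss_fn(w):
        return jnp.sum(w ** 2)

    grad_fn = jax.jit(jax.grad(loss_fn))     # pure function in, pure function out
    print(grad_fn(jnp.array([1.0, 2.0])))    # [2. 4.]

    # PyTorch: think in tensors that accumulate gradients as they are manipulated.
    w = torch.tensor([1.0, 2.0], requires_grad=True)
    loss = (w ** 2).sum()
    loss.backward()
    print(w.grad)                            # tensor([2., 4.])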
>I also like that jax.jit forces you to write "functional" functions free of side effects or inplace array updates. It might feel weird at first (and not every algorithm is suited for this style) but ultimately it leads to clearer and faster code.
It's not weird. It's actually the most natural way of doing things for me. You just write down your math equations as JAX and you're done.
For anyone that’s curious, the underlying Torch library is also a joy to work with, as are the many other torch bindings. For example, Rust has tch and Burn which both work with libtorch.
PyTorch of course has the benefit of being dynamically debuggable. I can't forget the first time I breakpointed my PyTorch model and wrote PyTorch calls inside the terminal to inspect its behavior. That's still something I miss a lot now that I'm working with only "fast" compiled code.
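A minimal sketch of that workflow, with a made-up module: drop a breakpoint() into forward() and poke at the live tensors from the debugger prompt.

    import torch
    import torch.nn as nn

    class Net(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(10, 3)

        def forward(self, x):
            h = self.fc(x)
            breakpoint()   # drops into pdb; try h.shape, h.mean(), torch.softmax(h, dim=-1), ...
            return h

    Net()(torch.randn(4, 10))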
I wrote some truly awful code back in the day because of that, but god it was glorious.
> Every major AI company and hardware vendor are on a speed dial. This kind of power is really hard to give up. But curiosity ultimately won out in my head.
A simple feeling has such power. May he get an opportunity to create one more powerful tool before retiring.
Soumith's 2nd release? https://github.com/pytorch/pytorch/releases/tag/v0.1.1
Also, looking at the contribution history for a long career is very interesting; reflects the changing roles over time https://github.com/soumith
His homepage says he wants to build a robot. So he is probably going to work with robots for his next role.
He is an investor in Anthropic; I didn't know you could do that while working for Meta.
To me it sounds as if he is trying to open a new chapter in his life. Good for him, but I wonder if everything was really as smooth as described. People often write as if everything is always perfect on their blog. Well - could be. But it could also be that not everything was perfect and it simply wasn't described on the blog.
This is the end of an era. Amazing work, Soumith.
There's no context around 'FAIR' - is it https://www.go-fair.org/?
Facebook Artificial Intelligence Research, c.f. https://engineering.fb.com/category/ai-research/#:~:text=Art...
it's https://ai.meta.com/research/
I wonder how much this guy has earned from Meta in total. Would it reach $100M?
Considering Meta was trying to poach AI talent for $250M, I wouldn't be surprised if this guy has his own 8-figure income.
Counterfactual Regret Minimization irl
Is this also partially AI generated? What's with the repeated short phrases? Is this just everyone's style now?
You're asking a lot of questions but are you willing to think about it? For one, no it's not "everyone's style" because you wouldn't have asked whether it was, you'd know.
I read one post on his blog and found that Adam Paszke reached out to the author and got an internship. I wonder if it was that easy to get an internship at FAIR. I thought that they hire only PhDs.
I was pretty involved in the PyTorch ecosystem in the early days around 2016 and Adam was nothing short of a genius and prolific developer whose contributions to the codebase and community were immense. I think he was like an undergrad in Poland at the time. My understanding is that his contributions came before the internship, but I don’t know.
My memory is that Soumith was really open to other people’s contributions and questions, no matter their credentials. He was a great leader who felt approachable to the open-source community.
I didn't know that. Soumith Chintala certainly paid it forward. He was very helpful and supportive of random strangers (like me!) in the early pytorch days. I count him with Andrej Karpathy and Chris Olah as one of the people who really made machine learning accessible to regular software engineers.
You can't do anything if you never try.
PyTorch is one of those tools that’s so simple and easy to take apart that you feel like you might’ve been able to make it yourself. I can’t imagine how much engineering effort was behind all those moments where I thought to myself, “of course it should work like that, how can it be any other way?”
Can anyone recommend a technical overview describing the design decisions PyTorch made that led it to win out?
PyTorch's choice of a dynamic computation graph [1] made it easier to debug and implement models, leading to higher adoption, even though running speed was initially slower (and therefore training cost was higher).
Other decisions follow from this one.
TensorFlow started with static graphs and had to move to dynamic ones at version 2.0, which broke everything and left fragmentation between TensorFlow 1, TensorFlow 2, Keras, and JAX.
PyTorch's later compilation of this computation graph (torch.compile) erased TensorFlow's remaining edge; a small sketch follows after this comment.
Is the battle over? From a purely computational point of view, PyTorch's solution is very far from optimal, and billions of dollars of electricity and GPUs are burned every year, but the major players are happy with circular deals that entrench their positions. So, at the pace of current AI code development, it's probably one or two years before PyTorch is old history.
[1] https://www.geeksforgeeks.org/deep-learning/dynamic-vs-stati...
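The sketch mentioned above: torch.compile captures the same eager, dynamic-graph code and hands it to an optimizing backend without changing the call site (the module here is made up):

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
    compiled = torch.compile(model)   # same define-by-run code, now captured and optimized

    x = torch.randn(32, 64)
    print(torch.allclose(model(x), compiled(x), atol=1e-5))   # same results, potentially faster kernels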
Someone’s got to prototype the next generation of architectures.
> at the pace of current AI code development, probably one or two years before Pytorch is old history.
Ehhh, I don’t know about that.
Sure, new AI techniques and new models are coming out pretty fast, but when I go to work with a new AI project, they’re often using a version of PyTorch or CUDA from when the project began a year or two ago. It’s been super annoying having to update projects to PyTorch 2.7.0 and CUDA 12.8 so I can run them on RTX 5000 series GPUs.
All this to say: if PyTorch were going to be replaced in a year or two, we'd know the name of its killer by now, and they'd be the talk of HN. Not to mention that at this point all of the PhDs flooding into AI startups wrote their grad work in PyTorch; it has a lot of network lock-in that an upstart would have to overcome by being way better at something PyTorch can never be good at. I don't even know what that would be.
Bear in mind that it took a few years for TensorFlow to die out due to lock-in, and we all knew about PyTorch that whole time.
> a lot of network lock-in that an upstart would have to overcome by being way better at something PyTorch can never be good at
The cost of migrating higher-level code to a newer framework is going to zero: you ask your favorite agent (or intern) to port it and check that the migration is exact. We already see this across the multitude of deep-learning frameworks.
The day one optimization trick appears that PyTorch can't do but another framework can, one that reduces your training cost 10x, PyTorch goes the way of the dodo.
The day one architecture that can't be implemented in PyTorch gets superior performance, it's bye-bye Python.
We already see this with architectures that require real-time rendering, like Gaussian Splatting and Instant NeRF, or with the caching strategies for LLM sequence generation.
PyTorch has 3 main selling points:
- Abstracting away the GPU- (or device-) specific code, which is needed because of Nvidia's mess: custom optimized kernels, which you are forced to adapt to if you don't want to write custom kernels yourself.
This matters less if you don't mind writing optimized kernels because the machine writes them, if you don't need CUDA because you can't use Nvidia hardware (for example, you are in China), or if you use custom silicon, like Groq, and need your own kernels anyway.
- Automatic differentiation. This is one of its weak points, because they went for easy instead of optimal and shut themselves off from some architectures. A language like Julia, thanks to dynamic low-level compilation, can do things PyTorch won't even dream about (though Julia has its own problems, mainly around memory allocations). With PyTorch's introduction of the "scan" function [2] we have come full circle back to Theano, TensorFlow's/Keras's ancestor; such loops are usually the pain point of the automatic differentiation strategy PyTorch chose.
The optimal solution, as all physics PhDs who have written simulations know, is writing custom adjoint code via source code transformation or symbolically: it's not hard but very tedious, so it's now a great fit for your LLM (or intern, or PhD candidate running "student gradient descent"), provided you prove or check that your gradient calculation is correct. A small sketch of the hand-written adjoint idea follows after this comment.
- Cluster orchestration and serialization: a model can be shared with less security risk than arbitrary source code, because you only share weights, and a model can be split between machines dynamically. But this is also a big weakness, because your code rusts as you become dependent on versioning: you are locked to the specific version your model was trained on.
[2] "https://docs.pytorch.org/xla/master/features/scan.html
I don't know the full list, but back when it came out, TF felt like a crude set of bindings to the underlying c++/CUDA workhorse. PyTorch felt, in contrast, pythonic. It was much closer in feeling to numpy.
I think it was mostly the eager evaluation that made it possible to debug every step in the network forward/backward passes. Tensorflow didn't have that at the time which made debugging practically impossible.
I'm not sure such an overview exists, but back when Caffe2 was still a thing and JAX was a big contender, dynamic vs. static computational graphs seemed to be a major focus point for people ranking the frameworks.
You forgot to thank Jürgen. /scnr
Very proud as a Swiss that Soumith has a .ch domain!
Probably because his first name is Chintala
That'd be his last name
true haha
It is notable (but perhaps not surprising) that this is mostly about the people and the work itself. The writing is silent on the downstream impacts on the world. In contrast, there are fields (global development, medicine, etc.) where people tend to focus on the impact on humanity (especially when reaching a milestone in their career).
Nice, that is the dream career!
The last few years must have been incredibly exhausting. Thanks for your work, good luck, and 73.
Respect.
Sounds like you had a momentous run.
If you take advice from reformed Internet trolls, consider turning off all your devices and trying to give yourself at least a week, but ideally a month offline staring at your new baby. You'll never get that time back and there's nothing your brain will appreciate more than loading up those memories as they grow.
Good luck.
Look, I get that some pages require JavaScript, but hiding the entire page with CSS that is only unset by JS, with no <noscript> anywhere, is just... I just get a white page. Changing that gives a perfectly readable page, so it seems a bit... pointless.