A hobby audio and text analysis application I've written, with no specific concern for low level performance other than algorithmically, runs 4x as fast in .net10 vs .net8. Pretty much every optimization discussed here applies to that app. Great work, kudos to the dotnet team. C# is, imo, the best cross platform GC language. I really can't think of anything that comes close in terms of performance, features, ecosystem, developer experience.
Having worked with C# professionally for a decade, going through the changes with LINQ, async/await, Roslyn, and the rise of .NET Core, to .NET Core becoming .NET, I disagree. I certainly think that C# is a great tool and that it's the best it has ever been. But it also relies on very implicit behaviour, and it is built upon OOP design principles and a bunch of "needless" abstraction: things I personally have come to view as anti-patterns over the years. This isn't because I specifically dislike C#; you could find me saying something similar about Java.
I suspect that the hidden indirection and runtime magic may be part of why you love the language. In my experience, however, it leads to poor observability, opaque control flow, and difficult debugging sessions in every organisation and company I've ever worked for. It's fair to argue that this is because the people working with C# are bad at software engineering, similar to how Uncle Bob will always be correct when he calls teams out for getting his principles wrong. To me, that means the language itself is a poor design fit for software development in 2025, which is probably why we see more and more Go adoption, thanks to its explicit philosophies. Though to be fair, Python seems to be "winning" as far as adoption goes in the cross-platform GC language space. Having worked with Django-Ninja, I can certainly see why: it's so productive, and with stuff like Pyrefly, UV and Ruff it's very easy to make it a YAGNI experience with decent type control.
I am happy you enjoy C# though, and it's great to see that it is evolving. If they did more to enhance the developer experience, so that people were less inclined to do bad engineering on a Thursday afternoon after a long day of useless meetings, then I would probably agree with you. I'm not sure any of the changes going toward .NET 10 are going in that direction, though.
You are missing the forest for the trees.
C# has increasingly become more terse (e.g. switch expressions, collection initializers, object initializers, etc.) and, IMO, is a good balance between OOP and functional[0].
Functions are first-class objects in C#, and teams can write functional-style C# if they want. But I suspect that this doesn't scale well in human terms, as we've encountered a LOT of trouble trying to get TypeScript devs to adopt more functional programming techniques. We've found that many devs like to use functional code (e.g. LINQ, `.filter()`, `.map()`) but dislike writing functional code, because most devs are not wired this way and have no training in writing functional code or in understanding monads. Asking these devs to use a monad has been like asking kids to eat their carrots.
Across much of our backend TS codebase, there are very, very few cases where developers accept a function as input or return a function as output (almost all of it written by 1 dev out of a team of ~20).
> ...it is build upon OOP design principles and a bunch of “needless” abstraction
Having been working with Nest.js for a while, it's clear to me that most of these abstractions are not "needless" but actually "necessary" to manage complexity of apps beyond a certain scale and the reasons are less technical and more about scaling teams and communicating concepts.
Anyone that looks at Nest.js will immediately see the similarities to Spring Boot or .NET Web APIs because it fills the same niche. Whether you call a `*Factory` a "factory" or something else, the core concept of what the thing does still exists whether you're writing C#, Java, Go, or JS: you need a thing that creates instances of things.
You can say "I never use a factory in Go", but if you have a function that creates other things or other functions, that's a factory...you're just not using the nomenclature. Good for you? Or maybe you misunderstand why there is standard nomenclature of common patterns in the first place and are associating these patterns with OOP when in reality, they are almost universal and are rather human language abstractions for programming patterns.
[0] https://medium.com/itnext/getting-functional-with-c-6c74bf27...
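For the skeptical, a minimal sketch (hypothetical names, nothing from the article) of what first-class functions look like in C#:

    // A higher-order function: takes two functions, returns their composition.
    Func<int, int> Compose(Func<int, int> f, Func<int, int> g) => x => g(f(x));

    var addOne = (int x) => x + 1;
    var timesTwo = (int x) => x * 2;

    var pipeline = Compose(addOne, timesTwo);
    Console.WriteLine(pipeline(3)); // (3 + 1) * 2 = 8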
I notice that none of the examples in your blog entry on functional C# deal with error handling. I know that is not the point of your article, but error handling is actually one of my key issues with C# and its reliance on implicitness, because, like so many other parts of C#, you'd probably hand it over to an exception handler. I'd much prefer that you deal with it explicitly right where it happens, and I would prefer if you were actually forced to do so for examples like yours, because implicit error handling is hard. I have no doubt you do it well, but it is frankly rare to meet a C# developer who has as deep an understanding of the language as you clearly have.
I think this is an excellent blog post, by the way. My issue with C# (and this applies to a lot of other GC languages) is that most developers would learn a lot from your article, because none of it is an intuitive part of the language philosophy.
I don't think you should never use OOP or abstractions, and I don't think there is a golden rule for when you should use either. I do think you need to understand why you are doing it, though, and C# sort of makes people go to abstractions first, not last, in my experience. I don't think these changes to the GC are going to help people who write C# without understanding C#, which is frankly most C# developers around here. Because Go is opinionated and explicit, this is simply an issue I have to deal with less in Go teams. It's not an issue I have to deal with less in Python teams, but then, everyone who loves Python knows it sucks.
My team just recently made the switch from a TS backend to a C# backend for net new work. When we made this switch, we also introduced `ErrorOr`[0] which is a monadic result type.
I would not have imagined this to be controversial or difficult, but it turns out that developers really prefer and understand exceptions. That's because for a backend CRUD API, it's really easy to just throw and catch at a global HTTP pipeline exception filter, and for 95% of cases this is OK and good enough; you're not really going to be able to handle the error, nor is it worth it to handle it.
We'll stick with ErrorOr, but developers aren't using it as a monad: they simply unwrap the value and the error because, as it turns out, most devs just have a preference for (and greater familiarity with) imperative try-catch handling of errors. And practically, in an HTTP backend, there's nothing wrong in most cases with just having a global exception filter do the heavy lifting, unless the code path has a clear recovery path.
> I don't think you should never use OOP or abstractions. I don't think there is a golden rule for when you should use either.
I do think there is a "silver rule": OOP when you need structural scaffolding, functional when you have "small contracts" over big ones. An interface or abstract class is a "big contract": to understand how to use it, you often have to understand a larger surface area. A function signature is still a contract, but a micro-contract.
Depending on what you're building, having structural scaffolding and "big contracts" makes more sense than having lots of micro-contracts (functions). Case in point: REST web APIs make a lot more sense with structural scaffolding. If you write it without structural scaffolding of OOP, it ends up with a lot of repetition and even worse opaqueness with functions wrapping other functions.
The silver rule for OOP vs FP for me: OOP for structural templating for otherwise repetitive code and "big contracts"; FP for "small contracts" and algorithmically complex code. I encourage devs on the team to write both styles depending on what they are building and the nature of complexity in their code. I think this is also why TS and C# are a sweet spot, IMO, because they straddle both OOP and have just enough FP when needed.
[0] https://github.com/amantinband/error-or
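To make the "unwrapping vs monadic" point above concrete, the two styles look roughly like this (GetUser is a hypothetical handler; Problem is ASP.NET's controller helper):

    // The monadic style we hoped for:
    ErrorOr<string> greeting = GetUser(id)
        .Then(user => $"Hello, {user.Name}");

    // What most devs write instead: unwrap and branch imperatively.
    ErrorOr<User> result = GetUser(id);
    if (result.IsError)
        return Problem(result.FirstError.Description);
    var user = result.Value;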
You introduced a pattern that is simply different from what's usual in C#. It's also not clearly better; it's different. In languages designed around result types like this, the ergonomics of such a type are usually better.
All the libraries you use and all methods from the standard library use exceptions. So you have to deal with exceptions in any case.
There are also a million or so libraries that implement types like this. There is no standard, so no interoperability, and people have to learn the peculiarities of the chosen library.
I like result types like this, but I'd never try to introduce them in C# (unless at some point they get more language support).
These are developers that had never written C# before, so there's no difference whether it's language-supported or not. It was in the core codebase on day 1 when they onboarded, so it may as well have been native.
But what I take away from this is that Go's approach to error handling is "also not clearly better, it's different".
Even if C# had core language support for result types, you would be surprised how many developers would struggle with it (that is my takeaway from this experience).
If you're not coming from a strongly typed functional language, it's still a pattern you're not used to. Which might be a bit of a roundabout way of saying that I agree with your last part: developers without contact with that kind of language will struggle at first with a pattern like this.
I know how to use this pattern, but the C# version still feels weird and cumbersome. Usually you combine this with pattern matching and other functional features, and the whole thing becomes convenient in the end. That part is missing in C#. And I think it makes a difference in understanding, as you would usually build on your experience with pattern matching to understand how to handle this case of Result|Error.
> Usually you combine this with pattern matching and other functional features and the whole thing makes it convenient in the end. That part is missing in C#
You mean like this?
string foo = result.MatchFirst(
value => value,
firstError => firstError.Description);
Or this?
ErrorOr<string> foo = result
.Then(val => val * 2)
.Then(val => $"The result is {val}");
Or this?
ErrorOr<string> foo = await result
.ThenDoAsync(val => Task.Delay(val))
.ThenDo(val => Console.WriteLine($"Finished waiting {val} seconds."))
.ThenAsync(val => Task.FromResult(val * 2))
.Then(val => $"The result is {val}");
C# pattern matching is pretty damn good[0] (seems you are not aware?).
[0] https://timdeschryver.dev/blog/pattern-matching-examples-in-...
I am paid to work in Java and C# among Go, Rust, Kotlin, and Scala, and I wholeheartedly agree.
I hate the implicitness of Spring Boot, Quarkus etc. as much as the one in C# projects.
All these magic annotations that save you a few lines of code until they don't, because you get runtime errors due to incompatible annotations.
And then it takes digging through pages of docs or even reporting bugs on repos instead of just fixing a few explicit lines.
Explicitness and verbosity are mostly orthogonal concepts!
I disagree on this.
I am at a (YC, series C) startup that just recently made the switch from TS backend on Nest.js to C# .NET Web API[0]. It's been a progression from Express -> Nest.js -> C#.
What we find is that having attributes in both Nest.js (decorators) and C# allows one part of the team to move faster and another smaller part of the team to isolate complexity.
The indirection and abstraction are explicit decisions to reduce verbosity for 90% of the team for 90% of the use cases because otherwise, there's a lot of repetitive boilerplate.
The use of attributes, reflection, and source generation make the code more "templatized" (true both in our Nest.js codebase as well as the new C# codebase) so that 90% of the devs simply need to "follow the pattern" and 10% of the devs can focus on more complex logic backing those attributes and decorators.
Having the option to dip into source generation, for example, is really powerful in allowing the team to reduce boilerplate.
[0] We are hiring, BTW! Seeking experienced C# engineers; very, very competitive comp and all greenfield work with modern C# in a mixed Linux and macOS environment.
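One concrete example of the source-generation point, using the LoggerMessage generator that ships with Microsoft.Extensions.Logging (the Log class and message here are made up):

    using Microsoft.Extensions.Logging;

    public static partial class Log
    {
        // The generator emits the method body at compile time:
        // no reflection, no boilerplate, and it works with "go to definition".
        [LoggerMessage(Level = LogLevel.Information, Message = "Order {OrderId} shipped")]
        public static partial void OrderShipped(ILogger logger, int orderId);
    }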
The trade-off, though, is that patterns and behind-the-scenes source code generation are another layer that the devs who follow them need to deal with when debugging and understanding why something isn't working. They either spend more time understanding the bespoke things or are bottlenecked relying on a team or person to help them get through those moments. It's a trade-off, and one that has bitten me and others before.
I am not talking about C# specifically here, but about this kind of design in general, and I agree.
Implicit and magic look nice at first, but sometimes they can be annoying. I remember the first time I tried Ruby on Rails and I was looking for a piece of config.
Yes, "convention over configuration". Namely, ungreppsble and magic.
This kind of stuff must be used with a lot of care.
I usually favor explicit code and, for config, plain data (usually TOML).
This can be extended to hidden or non-obvious allocations and other stuff (when I work with C++).
It is better to know what is going on when you need to, and burying it in a couple of layers can make things unnecessarily difficult.
It has been worth the abstraction in my organization with many teams: think 1000+ engineers, at minimum. It helps to abstract as necessary for new teammates that want to simply add a new endpoint yet follow all the legal, security, and data enforcement rules.
Better than no magic abstractions imo. In our large monorepo, LSP feedback can often be so slow that I can’t even rely on it to be productive. I just intuit and pattern match, and these magical abstractions do help. If I get stuck, then I’ll wade into the docs and code myself, and then ask the owning team if I need more help.
Would you rather a team move faster and be more productive or be a purist and disallow abstractions to avoid some potential runtime tracing challenges which can be mitigated with good use of OTEL and logging? I don't know about you, but I'm going to bias towards productivity and use integration tests + observability to safeguard code.
Disallow bespoke abstractions and use the industry-standard ones instead. People who make abstractions overestimate how much more productive they're making everyone else. Your user base is much smaller than that of popular libs, so your docs and abstractions are not as battle-tested and easy to use as you think.
What are you working on that you're debugging annotations every day? I'd say you've made a big mistake if you're doing that, or you didn't read the docs and don't understand how to use the attribute.
(Of course you are also free to write C# without any of the built in frameworks and write purely explicit handling and routing)
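A sketch of the explicit end of that spectrum, using minimal APIs instead of attribute routing (routes here are made up):

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // Every route is declared right here; nothing is discovered via reflection.
    app.MapGet("/health", () => Results.Ok());
    app.MapGet("/users/{id:int}", (int id) => Results.Ok(new { id }));

    app.Run();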
On the other hand, we write CRUD every day so anything that saves repetition with CRUD is a gain.
People were so afraid of macros they ended up with something even worse.
At least with macros I don't need to consider the whole of the codebase and every library when determining what is happening. Instead I can just... Go to the macro.
They are not. They are generators. Macros tend to be local and explicit, as the other commenters have said; they are more like templates. Generators can be fairly involved and feel like a mini language, one that is not as observable as macros.
Maybe you're confusing `System.Reflection.Emit` and source generators? Source generators are just a source tree walker + string templates to write source files.
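A minimal generator illustrates the point (a sketch against the Roslyn incremental API; the generated content is made up):

    using Microsoft.CodeAnalysis;

    [Generator]
    public class HelloGenerator : IIncrementalGenerator
    {
        public void Initialize(IncrementalGeneratorInitializationContext context)
        {
            // No IL emitted at runtime: just add a source file from a string template.
            context.RegisterPostInitializationOutput(ctx =>
                ctx.AddSource("Hello.g.cs",
                    "namespace Generated { public static class Hello { public const string Message = \"hi\"; } }"));
        }
    }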
Attributes and reflection are still used in C# for source generators, JSON serialization, ASP.NET routing, dependency injection... The amount of code that can fail at runtime because of reflection has probably increased in modern C#. (Not from C# source generators of course, but those only made interop even worse for F#-ers).
Aye, I was involved in some really messed-up outages caused by New Relic's agent libraries generating bogus bytecode at runtime. It was an absolute nightmare for the teams trying to debug, because none of the code causing the crashes existed anywhere you could easily inspect it. We replaced the opaque magic from New Relic with simpler OTEL: no more outages.
Don't we have automated tests for catching this kind of thing, or is everyone just YOLOing it nowadays? Serialization, routing, etc. can fail at runtime regardless of whether you use attributes or reflection.
Ease of comprehension is more important than tests for preventing bugs. A highly testable DI nightmare will have more bugs than a simple system that people can understand just by looking at it.
But the "simple" system will be full of repetition and boilerplate, meaning the same bugs are scattered around the code base, and obscured by masses of boilerplate.
Isn't a GC also magic? Or anything above assembly? While I also understand the reluctance to use too much magic, in my experience it's not the magic, it's how well the magic is tested and developed.
I used to work with Play framework, a web framework built around Akka, an async bundle of libraries. Because it wasn't too popular, only the most common issues were well documented. I thought I hated magic.
Then, I started using Spring Boot, and I loved magic. Spring has so much documentation that you can also become the magician, if you need to.
If the argument is that most developers can't understand what a DI system does, I don't know if I buy that. Or is the argument it's hard to track down dependencies? Because if that's the case the idiomatic c# has the dependencies declared right in the ctor.
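For reference, the idiomatic shape (IOrderRepository is a hypothetical dependency):

    public class OrderService
    {
        private readonly IOrderRepository _orders;
        private readonly ILogger<OrderService> _logger;

        // Everything this class depends on is declared right here.
        public OrderService(IOrderRepository orders, ILogger<OrderService> logger)
        {
            _orders = orders;
            _logger = logger;
        }
    }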
If I wanted explicitness for every little detail I would keep writing in Assembly like in the Z80, 80x86, 68000 days.
Unfortunately we never got Lisp or Smalltalk mainstream, so we got to metaprogramming with what is available, and it is quite powerful when taken advantage of.
Some people avoid wizard jobs, others avoid jobs where magic is looked down upon.
I would also add that in the age of LLMs and AI-generated applications, discussing programming-language explicitness is kind of irrelevant.
Explicitness is different from verbosity. Often annotations and the like are abused to create a lot of accidental complexity just to avoid writing a few keywords. In almost every Lisp project you'll find that macros are not intended for reducing verbosity; they are there to define common patterns. You can have something like
(define-route METHOD PATH BODY)
You can then easily inspect the generated code. But in Java and others, you'll have something like
@GET(path=PATH)
And there's a whole system hidden behind this that you have to carefully understand, as every annotation implementation is different.
This is the trade-off with macros and annotation/code-generation systems.
I tend to do obvious things when I use these kinds of tools. In fact, I try to avoid macros.
Even if configurability is not important, I favor simplification over reuse. In case I need reuse, I go for higher-order functions if I can. A macro is the last bullet.
In some circumstances, like JSON or serialization, maybe they can be slightly abused to mark fields and such. But whole-code generation can take things so far into magic that it is not worth it in many circumstances IMHO, though every tool has its use cases, even macros and annotations.
IMO, macros and such should be used to improve coding UX. But using them for abstractions and the like is very much not worth it. So something like JSX (or the loop system in Common Lisp) is good, but using them for DI is often a code smell for me.
Only if those Lisp projects are done by newbies. Clojure is quite known for having a community that takes that approach to macros, versus everyone else on Lisp since its early days.
Using macros for DSLs has been common for decades, and is how frameworks like CLOS were initially implemented.
I write C# and Rust full-time. Native discriminated unions (and their integration throughout the ecosystem) are often the deciding factor when choosing Rust over C#.
Very hard to imagine teams cross-shopping C# and Rust with DUs being the deciding factor. The toolchains, workflows, and use cases are just so different, IMO. What heuristics was your team using to decide between the two?
For me, it will be if they ever get checked errors of some sort. I don't want to use a language with unchecked exceptions flying about everywhere. This isn't to say I want checked exceptions either, but I think if they get proper unions and then some sort of error-union type, it would go a long way.
Exactly what I've observed in practice because most devs have no background in writing functional code and will complain when asked to do so.
Passing or returning a function seems a foreign concept to many devs. They know how to use lambda expressions, but rarely write code that works this way.
We adopted ErrorOr[0] and have a rule that core code must return ErrorOr<T>. Devs have struggled with this and continue to misunderstand how to use the result type.
[0] https://github.com/amantinband/error-or
I'd much rather code F# than Python, it's more principled, at least at the small scale. But F# is in many ways closer to modern mainstream languages than a modern pure functional language. There's nothing scary about it. You can write F# mostly like Python if you want, i.e. pervasive mutation and side effects, if that's your thing.
It all depends on the lens one chooses to view them. None of them are really "functional programming" in the truly modern sense, even F#. As more and more mainstream languages get pattern matching and algebraic data types (such as Python), feature lambdas and immutable values, then these languages converge. However, you don't really get the promises of functional programming such as guaranteed correct composition and easier reasoning/analysis, for that one needs at least purity and perhaps even totality. That carries the burden of proof, which means things get harder and perhaps too hard for some (e.g. the parent poster).
If purity is a requirement for "real" functional programming, then OCaml or Clojure aren't functional. Regarding totality, even Haskell has partial functions and exceptions.
Both OCaml and Clojure are principled and well designed languages, but they are mostly evolutions of Lisp and ML from the 70s. That's not where functional programming is today. Both encourage a functional style, which is good. And maybe that's your definition of a "functional language". But I think that definition will get increasingly less useful over time.
Sure, Python has types as part of the syntax, but Python doesn't have types like Java, C#, etc. have types. They are not pervasive and the semantics are not locked down.
> I really can't think of anything that comes close in terms of performance, features, ecosystem, developer experience.
This is also why I prefer to use Unity over all other engines by an incredible margin. The productivity gains of using a mature C# scripting ecosystem in a tight editor loop are shocking compared to the competition. Other engines do offer C# support, but if I had to make something profitable to save my own standard of living the choice is obvious.
There are only two vendors that offer built-in, SIMD-accelerated linear math libraries capable of generating projection matrices out of the box: one is Microsoft and the other is Apple. The benefits of having stuff like this baked into your ecosystem are very hard to overstate. The amount of time you could waste looking for and troubleshooting primitives like this can easily kill an otherwise perfect vision.
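On .NET that's System.Numerics; for example, a projection matrix is a single call:

    using System.Numerics;

    // SIMD-accelerated and in the box; no third-party math library to vet.
    Matrix4x4 proj = Matrix4x4.CreatePerspectiveFieldOfView(
        fieldOfView: MathF.PI / 3f,
        aspectRatio: 16f / 9f,
        nearPlaneDistance: 0.1f,
        farPlaneDistance: 100f);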
Java is supported on more platforms, has more developers, more jobs, more OSS projects, and is more widely used (TIOBE 2024). Its performance was historically better, but C# has caught up.
Reified generics, value types, and LINQ are just a few of the things you would miss when going to Java. Also, Java and .NET are both big; that's not a real argument here. Not that I would trust the TIOBE index too much, but as of September 2025, C# is right behind Java, in 5th place.
Yeah, but if you use F#, then you'll have all the features C# has been working on for years, only in complete and mature versions, plus an opinionated language encouraging similar styles between teams instead of wild dialects of kinda-sorta immutability and kinda-sorta DUs and everything in between, requiring constant vigilance and training… ;)
I'm a fan of all three languages, but C# spent its first years relearning why Visual Basic was very productive, and the last many years learning why OCaml was chosen as the model for F#. It's landed in a place where I can make beautiful code the way I need to, but the mature libraries I've crafted to make it so simply aren't recreatable by most .NET devs, and the level of micro-managing it takes to maintain across groups is a line-by-line slog against orthodoxy and seeming 'shortcuts', draining the productivity those high-level guarantees should provide. And then there's the impact of EF combined with sloppy LINQ, which makes every junior/consultant LOC a potential ticking time bomb without more line-by-line, function-by-function slog.
> I really can't think of anything that comes close in terms of [...] developer experience.
Of all the languages that I have to touch professionally, C# feels by far the most opaque and unusable.
Documentation tends to be somewhere between nonexistent and useless, and MSDN's navigation feels like it was designed by a sadist. (My gold standard would be Rustdoc or Scala 2.13-era Scaladoc, but even Javadoc has been... fine, for basically forever.) For third-party libraries it tends to be even more dire and inconsistent.
The Roslyn language server crashes all the time, and when it does work... it doesn't do anything useful? Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there! (I know there's this thing called "SourceLink" which is... supposed to solve this? I think? But I've never actually seen it work in practice.)
Even finding where something comes from is ~impossible without the language server, because `using` statements don't mention.. what they're even importing. (Assuming that you have them at all. Because this is also the company that thought project-scoped imports were a good idea!)
And then there's the dependency injection, where I guess someone thought it would be cute if every library just had an opaque extension method on the god object, that didn't tell you anything about what it actually did. So good luck finding where the actual implementation of anything is.
I almost exclusively work in C# and have never experienced the Roslyn crashes you mentioned. I am using either Rider or Visual Studio though.
> Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there!
If these are projects you have in the same solution then it should never do this. I would only expect this to happen if either symbol files or source files are missing.
Very mixed feelings about this as there’s a strong case for the decisions made here but it also moves .NET further away from WASMGC, which makes using it in the client a complete non-starter for whole categories of web apps.
It’s a missed opportunity and I can’t help but feel that if the .NET team had gotten more involved in the proposals early on then C# in the browser could have been much more viable.
Those changes affect the .NET runtime, designed for real computers. This does not preclude the existence of a special runtime designed for Wasm with WasmGC support.
The .NET team appears to be aware of WasmGC [0], and they have provided their remarks when WasmGC was being designed [1].
.NET was already incompatible with WASM GC from the start [1]. The changes in .NET 10 are nothing in comparison to those. AFAIK WASM GC was designed with only JavaScript in mind so that's what everyone is stuck with.
1: JavaScript _interoperability_, i.e. the same heap but incompatible objects (nobody is doing static JS).
2: Java, Schemes, and many other GC'd languages have more "pure" GC models; C# traded some of that purity for practicality, and supporting it would've required some complications to regular JS GCs.
A lot of the features here, stuff like escape analysis for methods etc. does not directly involve the GC - it reduces the amount of objects that go to the GC heap so the GC has less work to do in the first place.
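A sketch of the kind of code this helps (types made up): if the JIT can prove the object never escapes the method, it can avoid the heap allocation entirely.

    Console.WriteLine(SumCoordinates(2, 3)); // 5

    static int SumCoordinates(int x, int y)
    {
        // 'p' never leaves this method, so escape analysis may keep it
        // on the stack (or in registers): zero GC work for this call.
        var p = new Point(x, y);
        return p.X + p.Y;
    }

    record Point(int X, int Y);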
How would this move .NET further away from WASMGC? This is a new GC for .NET, but doesn't add new things to the language that would make it harder to use WASMGC (nor easier).
For example, .NET has internal pointers which WASMGC's MVP can't handle. This doesn't change that so it's still a barrier to using WASMGC. At the same time, it isn't adding new language requirements that WASMGC doesn't handle - the changes are to the default GC system in .NET.
I agree it's disappointing that the .NET team wasn't able to get WASMGC's MVP to support what .NET needs. However, this change doesn't move .NET further away from WASMGC.
I wouldn't be surprised if it did take off; classic Wasm semantics were horrible, since you needed a lot of language support to even have simple kludges when referring to DOM objects via indices and extra liveness checking.
WASM-GC will remove a lot of those and make quite a few languages viable as almost first-class DOM-manipulating languages (there'll still be kludges, since the objects are opaque, but they'll be far less bad: they can at least avoid external ID mappings and dual-GC systems that behave leakily, like old IE ref-counts did).
You still usually need to install plenty of moving pieces to produce a wasm file out of the "place language here", write boilerplate initialisation code, and put up with miserable debugging, only for a few folks to avoid writing JavaScript.
There will always be enthusiasts to take the initial steps, the question is if they have the taste to make it a coherent system that isn't horrible to use.
Counted out over N languages, we should see something decent land before long.
Don't conflate availability with mainstream adoption at the level of regular JavaScript and TypeScript.
Microsoft wishes Blazor would take off like React and Angular; in reality it is seldom used outside .NET shops' intranets, in a way similar to WebForms.
The JVM famously boxes everything though, probably because it was originally designed to run a dynamic language. An array list of floats is an array list of pointers. This created an entire cottage industry of alternative collections libraries with concrete array list implementations.
Arrays have a static fixed size though, making them far less useful in practice. Anything one builds with generics is boxed. Dotnet doesn't have this problem.
Almost none of this is in the JVM. Escape analysis is extremely limited on the standard JVM, and it's one of GraalVM's "enterprise" features. You have to pay for it.
One limitation of the stack is that it needs to be contiguous virtual addresses, so it was often limited when devices just didn't have the virtual address space to "waste" on a large stack for every thread in a process.
But 64 bits of virtual address space is large enough that you can keep the stacks far enough apart that even for pretty extreme numbers of threads you'll run out of physical memory before they start clashing. So you can always just allocate more physical pages to the stack as needed, similar to the heap.
I don't know if the .net runtime actually does this, though.
> Won't this potentially cause stack overflows in programs that ran fine in older versions though?
That's certainly a possibility, and one that's come up before, even between .NET Framework things migrated to .NET Core. Though usually it's a sign that something is awry in the first place. Thankfully, the default stack sizes can be overridden with config or environment variables.
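For instance, the per-thread override sits right on the Thread constructor (sizes here are arbitrary):

    // Give a deep-recursion worker a 16 MB stack instead of the default.
    var worker = new Thread(DoDeepWork, maxStackSize: 16 * 1024 * 1024);
    worker.Start();

    static void DoDeepWork() { /* recursive descent, parsing, etc. */ }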
I am surprised that they didn't already do a lot of optimizations informed by escape analysis, even though they have had value types from the beginning. HotSpot is currently hampered by only having primitive and reference types, which Project Valhalla is going to rectify.
FWIW, Tiered Compilation has been enabled by default since .NET Core 3.1. If code tries to use reflection to mutate static readonly fields and fails, that's the fault of that code.
I am considering dotnet Maui for a project. On the one hand, I am worried about committing to the Microsoft ecosystem where projects like Maui have been killed in the past and Microsoft has a lot of control. Also XML… On the other hand, I’ve been seeing so many impressive technical things about dotnet itself. Has anyone here used Maui and wants to comment on their experience?
Speaking as an experienced desktop .NET Dev, we've avoided it due to years of instability and no real confidence it'll get fully adopted. We've stuck with WPF, which is certainly a bit warty, but ultimately fine. If starting fresh at this point I'd give a real look at Avalonia, seems like they've got their head on their shoulders and are in it for the long haul.
I've been a C# developer my entire career and spent a few years building apps with Xamarin/Uno. At my current company, we evaluated MAUI and Flutter for our mobile app rewrite (1M+ monthly active users).
We first built a proof of concept with 15 basic tasks to implement in both MAUI and Flutter. Things like authentication, navigation, API calls, localization, lists, map, etc. In MAUI, everything felt heavier than it should've been. Tooling issues, overkill patterns, outdated docs, and a lot of small frustrations that added up. In Flutter, we got the same features done much faster and everything just worked. The whole experience was just nicer. The documentation, the community, the developer experience... everything is better.
I love C#; that's what we use for our backend. But for mobile development, Flutter was the clear winner. We launched the new app a year ago and couldn't be happier with our decision.
Aside from using an esoteric language and being a Google product with a risk of shutting down just because, Flutter's game-like UI rendering on a canvas was confirmed to be quite a questionable approach with the whole Liquid Glass transition. If anything, React Native is a more reliable choice: endless supply of React devs and native UI binding similar to MAUI.
I'd say Uno Platform[0] is a better alternative to Flutter for those who do not care much about the native look: it replicates WinUI API on iOS, Mac, Android, and Linux, while also providing access to the whole mature .NET ecosystem – something Flutter can't match for being so new and niche.
It simply can't use it because it does not use native UIs, but instead mimics them with its own rendering engine. This approach worked to some extent during the flat minimalist era, but now that Apple has added so many new animations and transitions, reproducing them all has become close to impossible.
At best, Flutter can implement some shaders for the glassy look of the controls, but something as basic as the Liquid Glass tab bar would require a huge effort to replicate inside Flutter, while in MAUI and RN it's an automatic update.
Not a single user cares about "native UI"; it's only a debate among developers. Take the top 20 apps people are using: all of them use their own design system, which isn't native.
Flutter will always have multiple advantages over React Native (and even native toolkits themselves) in terms of upgradability: you can do 6 months of updates with only 30 minutes of work and be sure it 100% works everywhere.
The quality of the testing toolkit is also something which is still unmatched elsewhere and makes a big difference on the app reliability.
Classic HN comment with unapologetic statements. If Flutter were that good, it wouldn't have flatlined so fast after the initial hype a few years ago. I tried it last year, only to see rendering glitches in the sample project.
All those stats look great on paper, but a few months ago I checked job postings for different mobile frameworks, and Flutter listings were 2-3 times fewer than RN. Go on Indeed and see for yourself.
For a "28% of new iOS apps", the Flutter subreddit is a ghost town with regular "is it dying? should I pick RN?" posts. I just don't buy the numbers because I'm myself in a rather stagnant cross-platform ecosystem, so I know this vibe well.
If I ever leave .NET, no way I'd move to something like Flutter. Even Kotlin Multiplatform is more promising concept-wise. LLMs are changing cross-platform development and Flutter's strong sides are not that important anymore, while its weak sides are critical.
The rendering glitches may be due to the completely new, lightweight rendering engine, built from scratch, that has replaced Skia. It shouldn't be a problem once it matures a bit.
Not everything is related to tech. In my company, for example, they picked React Native because they have the ability to tap into the front-end job market (or they think they do), certainly not for its intrinsic qualities.
Personally, I've done a 50k+ line project in Flutter and I didn't hit any of these. There have been a few issues for sure, but nowhere near what I experienced with React Native (and don't get me started on native itself).
I highly recommend using MvvmCross with native UIs instead of MAUI: you get your model and view model 100% cross-platform, and then build native UIs twice (with UIKit and Android SDK), binding them to the shared VM. It also works with AppKit and WinUI.
In the past it was rather painful for a solo dev to do them twice, but now Claude Code one-shots them. I just do the iOS version and tell it to repeat it on Android – in many cases 80% is done instantly.
Just in case, I have an app with half a million installs on both stores that has been running perfectly since 2018 using this ".NET with native UIs" approach.
I have used MAUI at my previous job to build 3 different apps, used only on mobile (Android and iOS). I don't know why many people dislike XAML; to me it felt natural to use it for UI. I researched Flutter and liked MAUI/XAML more, although the development loop felt smoother with Flutter. What I didn't like was the constant bugs: with each new version, which I was eager to update to in order to fix current issues, something new appeared. After spending countless hours searching through the project's GitHub, I am under the impression that there aren't many resources dedicated to MAUI development at Microsoft; the project is carried forward by a few employees and volunteers. If I were to start another project, I would seriously look into Avalonia. But I always was a backend guy, so now at my current job I do server backend development in C# and couldn't be happier.
If you're Windows-based, I'd unironically consider WinForms: it's been re-added to modern .NET on Windows, and it is one of the easiest and best ways to make simple GUI applications.
Sadly it's not cross-platform, which is a benefit of MAUI.
I don't really understand why Microsoft didn't do a Tauri-like thing for C# devs instead of this MAUI stuff. It would be a tiny project in comparison, and it wouldn't go completely against the grain like MAUI does. If you want a write-once, run-in-more-places compromise, the browser already does that very well.
Because web UI for a desktop app sucks compared to actual native UI. As a user, any time that I see an app uses Electron, Tauri or any of that ilk, I immediately look for an alternative because the user experience will be awful.
Maui Blazor Hybrid has a cool model where the HTML UI binds to native code (not WASM) for mobile and desktop. That is the closest you can get to Tauri-like. If you want to run that same app in a browser, then it'll use Blazor with WASM.
MAUI Blazor Hybrid is great if you don't want to learn XAML. Apple killed Silverlight; Microsoft kept it running for ~20 years. If you stayed close to what Xamarin was, the migration to MAUI isn't bad from what I've seen.
I would say it really depends on your target. If you want only mobile, then there are different options (see other comments). But if you want only desktop, then Avalonia is good. However, if you want both (like my team), MAUI is what we ended up going with. We use MAUI Blazor, as we also want to run on a server. We're finding iOS difficult to target, but I don't think that has anything to do with MAUI.
Benchmark Games[0] shows C# just behind C/C++ and Rust across a variety of benchmark types. C# has good facilities for dipping into unmanaged code and utilizing hardware intrinsics so you'd have to tap into that and bypass managed code in many cases to achieve higher performance.
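For example, hardware intrinsics are available directly from C# (a minimal sketch):

    using System.Runtime.Intrinsics;
    using System.Runtime.Intrinsics.X86;

    Vector256<float> a = Vector256.Create(1f);
    Vector256<float> b = Vector256.Create(2f);

    if (Avx.IsSupported)
    {
        // A single AVX instruction adds eight floats at once.
        Vector256<float> sum = Avx.Add(a, b);
    }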
There are plenty of domains where the competition is not one of pure latency (where FPGAs and custom hardware have even taken over from C++). In these domains managed languages can be sufficient to get to "fast enough" and the faster iteration speed and other comforts they provide can give an edge over native languages.
I think that DATAS also has more knobs to tune than the old GC. I plan to set the Throughput Cost Percentage (TCP) via System.GC.DTargetTCP to some low value so that it has little impact on latency.
> You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft's prior written approval.
I seem to vaguely recall such a thing from way back in the early days, but the only copy[1] of the .NET Framework EULA I could readily find says it's OK as long as you publish all the details.
It's because you aren't looking at 20-year-old EULAs:
>3.4 Benchmark Testing. The Software may contain the Microsoft .NET Framework. You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft’s prior written approval.
This person is likely not familiar with the history of the .NET Framework and .NET Core, because they decided a long time ago that they were never going to use it.
As long as it's your deployment target and nothing else. For development, both macOS and Linux continue to be second class citizens, and I don't see this changing as it goes against their interests. In most .NET shops around me, the development and deployment tooling is so closely tied to VS that you can't really not use it.
It's fine if you stick to JetBrains and pay for their IDE (or do non-commercial projects only), and either work in a shop which isn't closely tied to VS (basically non-existent in my area), or work by yourself.
> The development and deployment tooling is so closely tied to VS that you can't really not use it.
Development tooling: it's 50-50. Some use Visual Studio, some use Rider. It's fine. The only drawback is that VS Live Share and the JetBrains equivalent don't interoperate.
Deployment tooling: there is deployment tooling tied to the IDE? No-one uses that; it seems like a poor idea. I see automated build/test/deploy pipelines in GitHub Actions and in Octopus Deploy. TeamCity still gets used, I guess.
It's true though that the most common development OS is Windows by far (with Mac as second) and the most common deployment target by far is Linux.
However the fact that there is close to no friction in this dev vs deploy changeover means that the cross-platform stuff just works. At least for server-side things such as HTTP request and queue message processing. I know that the GUI toolkit story is more complex and difficult, but I don't have to deal with it at all so I don't have details or recommendations.
VS has the “Publish” functionality for direct deployment to targets. It works well for doing that and nothing else. As you said, CI/CD keeps deployment IDE agnostic and has far more capabilities (e.g. Azure DevOps, GitHub Actions).
Yeah? Ncurses still a thing? I only ask because that's the only api name I remember from forever ago.
I worked on a MUD on Linux right after high school for a while. I spent most of that time on the school's BSDi server prior to that, though.
Then I went Java, and as they got less permissive and .NET got more permissive, I switched at some point. I've really loved the direction C# has gone, merging in functional programming idioms, and have stuck with it for most personal projects, but I am currently learning GDScript for some reason, even though Godot has C# as an option.
The only thing that has become "less permissive" is Oracle's proprietary OpenJDK build, which isn't really needed or recommended in 99.9% of cases (except for when the vendor of your proprietary application requires it to provide support).
The rest of the ecosystem is "more permissive" than .NET since there are far more FOSS libraries for every task under the sun (which don't routinely go commercial without warnings), and fully open / really cross-platform development tooling, including proper IDEs.
The fact that you even need to be careful when choosing a JDK is a much bigger problem than some simple, easily replaceable library going commercial (not that this hasn't also happened in Java land). Also, .NET has been fully open and really cross-platform for a long time already, and it includes more batteries than Java out of the box; you may not even need any third-party dependencies (although there are plenty to choose from: 440k packages on NuGet). .NET also has proper IDEs, or is JetBrains Rider not a proper IDE for you?
Funny, because one of the libraries I was using at the time went hyper-commercial (javafxports). Java burned me on two fronts at the very same time and lost me. YMMV, I guess. It's always a good time to try something new anyway... I also moved to Kotlin on Android and couldn't be happier with it; it's a clearly superior language.
It works just fine out of the box. The articles/manuals are just if you want to really understand how it works and get the most out of it. What's the issue with that?
In my 20+ years using C#, there's only been one instance where I needed to explicitly control some behavior of the GC (it would prematurely collect the managed handle on a ZMQ client) and that only required one line of code to pin the handle.
It pretty much never gets in your way for probably 98% of developers.
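For the curious, the fix looked roughly like this (a sketch; messageCallback stands in for the real ZMQ delegate):

    using System.Runtime.InteropServices;

    // Keep the managed delegate alive while native ZMQ still references it;
    // otherwise the GC is free to collect it mid-use.
    GCHandle handle = GCHandle.Alloc(messageCallback);
    try
    {
        // ... run the ZMQ client ...
    }
    finally
    {
        handle.Free();
    }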
Dr. Dobbs and The C/C++ Users Journal archives are full of articles and ads for special memory allocators, because the ones on the standard library for C or C++ also don't work in many cases, they are only good enough as general purpose allocation.
You need these settings when you drive your application hard into circumstances where manual memory allocation arguably starts making sense again. Like humongous heaps, lots of big, unwieldy objects, or tight latency (or tail latency) requirements. But unless you're using things like Rust or Swift, the price of memory management is the need to investigate segmentation faults. I'd prefer to spend developer time on feature development and benchmarking instead.
A hobby audio and text analysis application I've written, with no specific concern for low level performance other than algorithmically, runs 4x as fast in .net10 vs .net8. Pretty much every optimization discussed here applies to that app. Great work, kudos to the dotnet team. C# is, imo, the best cross platform GC language. I really can't think of anything that comes close in terms of performance, features, ecosystem, developer experience.
Having worked with C# professionally for a decade, going through the changes with LINQ, async/await, Roslyn, and the rise of .NET Core, to .NET Core becoming .NET, I disagree. I certainly think that C# is a great tool and that it’s the best it has ever been. It’s also relies on very implicit behaviour, it is build upon OOP design principles and a bunch of “needless” abstraction. Things I personally have come to view as anti-patterns over the years. This isn’t because I specifically dislike C#, you could find me saying something similar about Java.
I suspect that the hidden indirection and runtime magic, may be part of why you love the language. In my experience, however, it leads to poor observability, opaque control flow, and difficult debugging sessions in every organisation and company I’ve ever worked for. It’s fair to argue that this is because the people working with C# are bad at software engineering. Similar to how Uncle Bob will always be correct when he calls teams out for getting his principles wrong. To me that means the language itself has a poor design fit for software development in 2025. Which is probably why we see more and more Go adoption, due to its explicit philosophies. Though to be fair, Python seems to be “winning” as far as adoption goes in the cross platform GC language space. Having worked with Django-Ninja I can certainly see why. It’s so productive, and with stuff like Pyrefly, UV and Ruff it’s very easy to make it a YAGNI experience with decent type control.
I am happy you enjoy C# though, and it’s great to see that it is evolving. If they did more to enhance the developer experience so that people were less inclined to do bad engineering on a thursday afternoon after a long day of useless meetings. Then I would probably agree with you. I'm not sure any of the changes going toward .NET 10 are going in that direction though.
You are missing the forest for the trees.
C# has increasingly become more terse (e.g. switch expressions, collection initializers, object initializers, etc) and, IMO, is a good balance between OOP and functional[0].
Functions are first class objects in C# and teams can write functional style C# if they want. But I suspect that this doesn't scale well in human terms as we've encountered a LOT of trouble trying to get TypeScript devs to adopt more functional programming techniques. We've found that many devs like to use functional code (e.g. Linq, `.filter()`, `.map()`), but dislike writing functional code because most devs are not wired this way and do not have any training in how to write functional code and understanding monads. Asking these devs to use a monad has been like asking kids to eat their carrots.
Across much of our backend TS codebase, there are very, very few cases where developers accept a function as input or return a function as output (almost all of it written by 1 dev out of a team of ~20).
Having been working with Nest.js for a while, it's clear to me that most of these abstractions are not "needless" but actually "necessary" to manage complexity of apps beyond a certain scale and the reasons are less technical and more about scaling teams and communicating concepts.Anyone that looks at Nest.js will immediately see the similarities to Spring Boot or .NET Web APIs because it fills the same niche. Whether you call a `*Factory` a "factory" or something else, the core concept of what the thing does still exists whether you're writing C#, Java, Go, or JS: you need a thing that creates instances of things.
You can say "I never use a factory in Go", but if you have a function that creates other things or other functions, that's a factory...you're just not using the nomenclature. Good for you? Or maybe you misunderstand why there is standard nomenclature of common patterns in the first place and are associating these patterns with OOP when in reality, they are almost universal and are rather human language abstractions for programming patterns.
[0] https://medium.com/itnext/getting-functional-with-c-6c74bf27...
I notice that none of the examples in your blog entry on functional C# deals with error handling. I know that is not the point of your article, but that is actually one of my key issues with C# and its reliance on implicit, because like so many other parts of C# you'd probably hand it over to an exeception handler. I'd much rather prefer you to deal with it explicitly right where it happens, and I would prefer if you were actually forced to do it for examples like yours. This is because implicit error handling is hard. I have no doubt you do it well, but it is frankly rare to meet a C# developer who has as much of an understanding on the language that you clearly have.
I think this is an excellent blog post by the way. My issues with C# (and this applies to a lot of other GC languages) is that most developers would learn a lot from your article. Because none of it is an intuitive part of the language philosophy.
I don't think you should never use OOP or abstractions. I don't think there is a golden rule for when you should use either. I do think you need to understand why you are doing it though, and C# sort of makes people go to abstractions first, not last in my experience. I don't think these changes to the GC is going to help people who write C# without understanding C#, which is frankly most C# developers around here. Because Go is opinionated and explicit it's simply an issue I have to deal with less in Go teams. It's not an issue I have to deal with less in Python teams, but then, everyone who loves Python knows it sucks.
My team just recently made the switch from a TS backend to a C# backend for net new work. When we made this switch, we also introduced `ErrorOr`[0] which is a monadic result type.
I would not have imagined this to be controversial nor difficult, but it turns out that developers really prefer and understand exceptions. That's because for a backend CRUD API, it's really easy to just throw and catch at a global HTTP pipeline exception filter and for 95% of cases, this is OK and good enough; you're not really going to be able to handle it nor is it worth it to to handle it.
We'll stick with ErrorOr, but developers aren't using it as monad and simply unwrapping the value and the error because, as it turns out, most devs just have a preference/greater familiarity with imperative try-catch handling of errors and practically, in an HTTP backend, there's nothing wrong in most cases with just having a global exception filter do the heavy lifting unless the code path has a clear recovery path.
I do think there is a "silver rule": OOP when you need structural scaffolding, functional when you have "small contracts" over big ones. An interface or abstract class is a "big contract" that means to understand how to use it, you often have to understand a larger surface area. A function signature is still a contract, but a micro-contract.Depending on what you're building, having structural scaffolding and "big contracts" makes more sense than having lots of micro-contracts (functions). Case in point: REST web APIs make a lot more sense with structural scaffolding. If you write it without structural scaffolding of OOP, it ends up with a lot of repetition and even worse opaqueness with functions wrapping other functions.
The silver rule for OOP vs FP for me: OOP for structural templating for otherwise repetitive code and "big contracts"; FP for "small contracts" and algorithmically complex code. I encourage devs on the team to write both styles depending on what they are building and the nature of complexity in their code. I think this is also why TS and C# are a sweet spot, IMO, because they straddle both OOP and have just enough FP when needed.
[0] https://github.com/amantinband/error-or
You introduced a pattern that is simply different than the usual in C#. It's also not clearly better, it's different. In languages designed for result types like this the ergonomics of such a type are usually better.
All the libraries you use and all methods from the standard library use exceptions. So you have to deal with exceptions in any case.
There's also a million or so libraries that implement types like this. There is no standard, so no interoperability. And people have to learn the pecularities of the chosen library.
I like result types like this, but I'd never try to introduce them in C# (unless at some point they get more language support).
These are developers that have never written C# before so there's no difference between whether it's language supported or not. It was in the core codebase on day 1 when they onboarded so it may as well have been native.
But what I takeaway from this is that Go's approach to error handling is "also not clearly better, it's different".
Even if C# had core language support for result types, you would be surprised how many developers would struggle with it (that is my takeaway from this experience).
If you're not coming from a strongly typed functional language, it's still a pattern you're not used to. Which might be a bit of a roundabout way to say that I agree about your last part, developers without contact to that kind of language will struggle at first with a pattern like this.
I know how to use this pattern, but the C# version still feels weird and cumbersome. Usually you combine this with pattern matching and other functional features and the whole thing makes it convenient in the end. That part is missing in C#. And I think it makes a different in understanding, as you would usually build ony our experience with pattern matching to understand how to handle this case of Result|Error.
[0] https://timdeschryver.dev/blog/pattern-matching-examples-in-...
I am paid to work in Java and C# among Go, Rust, Kotlin, Scala and I wholeheartedly agree.
I hate the implicitness of Spring Boot, Quarkus etc. as much as the one in C# projects.
All these magic annotations that save you a few lines of code until they don't, because you get runtime errors due to incompatible annotations.
And then it takes digging through pages of docs or even reporting bugs on repos instead of just fixing a few explicit lines.
Explicitness and verbosity are mostly orthogonal concepts!
I disagree on this.
I am at a (YC, series C) startup that just recently made the switch from TS backend on Nest.js to C# .NET Web API[0]. It's been a progression from Express -> Nest.js -> C#.
What we find is that having attributes in both Nest.js (decorators) and C# allows one part of the team to move faster and another smaller part of the team to isolate complexity.
The indirection and abstraction are explicit decisions to reduce verbosity for 90% of the team for 90% of the use cases because otherwise, there's a lot of repetitive boilerplate.
The use of attributes, reflection, and source generation make the code more "templatized" (true both in our Nest.js codebase as well as the new C# codebase) so that 90% of the devs simply need to "follow the pattern" and 10% of the devs can focus on more complex logic backing those attributes and decorators.
Having the option to dip into source generation, for example, is really powerful in allowing the team to reduce boilerplate.
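A concrete in-the-box example: `[GeneratedRegex]` (.NET 7+) is a source generator that emits the matcher at compile time as ordinary C# you can step into.

```csharp
using System.Text.RegularExpressions;

// The generator writes the implementation of Email() at compile time;
// the generated code is plain C#, visible and debuggable in the IDE.
public static partial class Validators
{
    [GeneratedRegex(@"^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,}$", RegexOptions.IgnoreCase)]
    public static partial Regex Email();
}

// Usage: Validators.Email().IsMatch("dev@example.com") => true
```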
[0] We are hiring, BTW! Seeking experienced C# engineers; very, very competitive comp and all greenfield work with modern C# in a mixed Linux and macOS environment.
The trade-off, though, is that patterns and behind-the-scenes source code generation are another layer that the devs who follow them have to deal with when debugging and understanding why something isn't working. They either spend more time understanding the bespoke things or are bottlenecked relying on a team or person to help them get through those moments. It's a trade-off, and one that has bitten me and others before.
I am not talking about C# specifically, but in general, and I agree.
Implicit and magic looks nice at first but sometimes it can be annoying. I remember the first time I tried Ruby On Rails and I was looking for a piece of config.
Yes, "convention over configuration". Namely, ungreppsble and magic.
This kind of stuff must be used with a lot of care.
I usually favor explicit and, for config, plain data (usually toml).
This can be extended to hidden or non-obvious allocations and other stuff (when I work with C++).
It is better to know what is going on when you need to, and burying it under a couple of layers can make things unnecessarily difficult.
It has been worth the abstraction in my organization with many teams. Thinking 1000+ engineers, at minimum. It helps to abstract as necessary for new teammates that want to simply add a new endpoint yet follow all the legal, security, and data enforcement rules.
Better than no magic abstractions imo. In our large monorepo, LSP feedback can often be so slow that I can’t even rely on it to be productive. I just intuit and pattern match, and these magical abstractions do help. If I get stuck, then I’ll wade into the docs and code myself, and then ask the owning team if I need more help.
Would you rather a team move faster and be more productive or be a purist and disallow abstractions to avoid some potential runtime tracing challenges which can be mitigated with good use of OTEL and logging? I don't know about you, but I'm going to bias towards productivity and use integration tests + observability to safeguard code.
Disallow bespoke abstractions and use the industry standard ones instead. People who make abstractions inflate how productive they’re making everyone else. Your user base is much smaller than popular libs, so your docs and abstractions are not as battle tested and easy to use as much as you think.
This is raw OpenFGA code:
This is an abstraction we wrote on top of it:
You would make the case that the former is better than the latter?
How much faster are we talking? Because you'd have to account for the time lost debugging annotations.
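For a sense of the contrast (a sketch, not the actual snippets; the OpenFga.Sdk client usage follows its README, and the wrapper names are invented):

```csharp
using System;
using System.Threading.Tasks;
using OpenFga.Sdk.Client;
using OpenFga.Sdk.Client.Model;

// Raw SDK shape: without a wrapper, every call site rebuilds the
// tuple strings and repeats the check-and-throw dance.
public sealed class DocumentAuthz(OpenFgaClient fga)
{
    public async Task RequireEditorAsync(string userId, string documentId)
    {
        var response = await fga.Check(new ClientCheckRequest
        {
            User = $"user:{userId}",
            Relation = "editor",
            Object = $"document:{documentId}",
        });

        if (response.Allowed != true)
            throw new UnauthorizedAccessException($"editor denied on document:{documentId}");
    }
}

// The abstraction moves that block behind one call (or an attribute/filter),
// so endpoints shrink to: await authz.RequireEditorAsync(userId, docId);
```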
What are you working on that you're debugging annotations every day? I'd say you've made a big mistake if you're doing that, or you didn't read the docs and don't understand how to use the attribute.
(Of course you are also free to write C# without any of the built in frameworks and write purely explicit handling and routing)
On the other hand, we write CRUD every day so anything that saves repetition with CRUD is a gain.
I don't debug them every day, but when I do, it takes days for a nasty bug to be worked out.
Yes, they make CRUD stuff very easy and convenient.
That's the deal with all metaprogramming.
People were so afraid of macros they ended up with something even worse.
At least with macros I don't need to consider the whole of the codebase and every library when determining what is happening. Instead I can just... Go to the macro.
C# source generators are...just macros?
They are not. They are generators. Macros tend to be local and explicit, as the other commenters have said; they are more like templates. Generators can be fairly involved and feel like a mini language, one that is not as observable as macros.
Isn't this just a string template? https://github.com/CharlieDigital/SKPromptGenerator/blob/mai...
Maybe you're confusing `System.Reflection.Emit` and source generators? Source generators are just a source tree walker + string templates to write source files.
What are those magic annotations you are talking about? Attributes? Not many of those are left in modern .NET.
Attributes and reflection are still used in C# for source generators, JSON serialization, ASP.NET routing, dependency injection... The amount of code that can fail at runtime because of reflection has probably increased in modern C#. (Not from C# source generators of course, but those only made interop even worse for F#-ers).
Aye, was involved in some really messed up outages from New Relics agent libraries generating bogus byte code at runtime, absolute nightmare for the teams trying to debug it because none of the code causing the crashing existed anywhere you could easily inspect it. Replaced opaque magic from new relic with simpler OTEL, no more outages
That's likely the old emit approach. Newer source gen will actually generate source that is included in the compilation.
Don't we have automated tests for catching this kind of things or is everyone only YOLOing in nowadays? Serialization, routing, etc can fail at runtime regardless of using or not using attributes or reflection.
Ease of comprehension is more important than tests for preventing bugs. A highly testable DI nightmare will have more bugs than a simple system that people can understand just by looking at it.
But the "simple" system will be full of repetition and boilerplate, meaning the same bugs are scattered around the code base, and obscured by masses of boilerplate.
Isn't a GC also magic? Or anything above assembly? While I also understand the reluctance to use too much magic, in my experience it's not the magic itself, it's how well the magic is tested and developed.
I used to work with Play framework, a web framework built around Akka, an async bundle of libraries. Because it wasn't too popular, only the most common issues were well documented. I thought I hated magic.
Then, I started using Spring Boot, and I loved magic. Spring has so much documentation that you can also become the magician, if you need to.
If the argument is that most developers can't understand what a DI system does, I don't know if I buy that. Or is the argument it's hard to track down dependencies? Because if that's the case the idiomatic c# has the dependencies declared right in the ctor.
I haven't experienced a DI 'nightmare' myself yet, but then again, we have integration tests to cover for that.
Try Nest.js and you'll know true DI "nightmares".
As polyglot developer, I also disagree.
If I wanted explicitness for every little detail I would keep writing in Assembly like in the Z80, 80x86, 68000 days.
Unfortunately we never got Lisp or Smalltalk mainstream, so we got to metaprogramming with what is available, and it is quite powerful when taken advantage of.
Some people avoid wizard jobs, others avoid jobs where magic is looked down upon.
I would also add that in the age of LLM and AI generated applications, discussing programming languages explicitness is kind of irrelevant.
Explicitness is different than verbosity. Often annotations and the like are abused to create a lot of accidental complexity just to avoid writing a few keywords. In almost every Lisp project you'll find that macros are not intended for reducing verbosity; they are there to define common patterns. You can have something like a `with-transaction` macro, and you can then easily expand it to inspect the generated code.
But in Java and others, you'll have something like an `@Transactional` annotation, and there's a whole system hidden behind it that you have to carefully understand, as every annotation implementation is different. This is the trade-off with macros and annotation/code-generation systems.
I tend to do obvious things when I use these kinds of tools. In fact, I try to avoid macros.
Even if configurability is not important, I favor simplification over reuse. In case I need reuse, I go for higher-order functions if I can. Macros are the last bullet.
In some circumstances, like JSON or serialization, maybe they can be slightly abused to mark fields and such. But whole-code generation can take things so far into magic that it is not worth it in many circumstances IMHO, though every tool has its use cases, even macros and annotations.
IMO, macros and such should be to improve coding UX. But using it for abstractions and the like is very much not worth it. So something like JSX (or the loop system in Common Lisp) is good. But using it for DI is often a code smell for me.
Only if those Lisp projects are done by newbies. Clojure is quite known for having a community that takes that approach to macros, versus everyone else on Lisp since its early days.
Using macros for DSLs has been common for decades, and is how frameworks like CLOS were initially implemented.
C# will be a force to reckon with if/when discriminated unions finally land as a language feature.
I think people who last looked at C# 10 years ago or haven't adapted to new language features seriously don't know how good C# is these days.
Switch expressions with pattern matching are absolutely killer[0] for their terseness; there's a small sketch after the links below.
Also, it is possible to use OneOf[1] and Dunet[2] to get access to DU
[0] https://timdeschryver.dev/blog/pattern-matching-examples-in-...
[1] https://github.com/mcintyre321/OneOf
[2] https://github.com/domn1995/dunet
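A small sketch of that terseness (the pricing rules are invented):

```csharp
public record Order(decimal Total, string Destination, double WeightKg);

public static class Shipping
{
    // Property, relational, and discard patterns in one switch expression.
    public static decimal Cost(Order order) => order switch
    {
        { Total: >= 100m }                     => 0m,     // free shipping over 100
        { Destination: "US", WeightKg: < 1.0 } => 4.99m,
        { Destination: "US" }                  => 9.99m,
        _                                      => 19.99m,
    };
}

// Usage: Shipping.Cost(new Order(120m, "US", 2.3)) => 0m
```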
I write C# and rust fulltime. Native discriminated unions (and their integration throughout the ecosystem) are often the deciding factor when choosing rust over C#.
Very hard to imagine teams cross shopping C# and Rust and DU's being the deciding factor. The tool chains, workflows, and use cases are just so different, IMO. What heuristics were your team using to decide between the two?
This surprises me.
If you want the .NET ecosystem and GC conveniences, there is already F#. If you want no GC and RAII-style control, then you would already pick Rust.
For me, it will be if they ever get checked errors of some sort. I don’t want to use a language with unchecked exceptions flying about everywhere. This isn't saying I want checked exceptions either, but I think if they get proper unions and then have some sort of error union type it would go a long way.
You can get an error union now: https://github.com/amantinband/error-or
The issue is the ecosystem and standard library. They still will be throwing unchecked exceptions everywhere
> C# is, imo, the best cross platform GC language. I really can't think of anything that comes close
How about F#? Isn't F# mostly C# with better ergonomics?
Personally I love F#, but I feel the community is probably even smaller than OCaml...
He means the runtime ".NET CLR". They have the same runtime.
It is but in practice it’s very hard to find programmers for it.
Lmao, functional programming is far from ergonomic
Exactly what I've observed in practice because most devs have no background in writing functional code and will complain when asked to do so.
Passing or returning a function seems a foreign concept to many devs. They know how to use lambda expressions, but rarely write code that works this way.
We adopted ErrorOr[0] and have a rule that core code must return ErrorOr<T>. Devs have struggled with this and continue to misunderstand how to use the result type.
[0] https://github.com/amantinband/error-or
F# is hardly modern functional programming. It's more like a better python with types. And that's much more ergonomic than C#.
Python and F# are not very similar. A better comparison is OCaml. F# and OCaml are similar. They're both ML-style functional languages.
I'd much rather code F# than Python, it's more principled, at least at the small scale. But F# is in many ways closer to modern mainstream languages than a modern pure functional language. There's nothing scary about it. You can write F# mostly like Python if you want, i.e. pervasive mutation and side effects, if that's your thing.
It's so weird to describe F# as "Python with Types." First of all, Python is Python with Types. And C# is much more similar to Python than F# is.
It all depends on the lens one chooses to view them. None of them are really "functional programming" in the truly modern sense, even F#. As more and more mainstream languages get pattern matching and algebraic data types (such as Python), feature lambdas and immutable values, then these languages converge. However, you don't really get the promises of functional programming such as guaranteed correct composition and easier reasoning/analysis, for that one needs at least purity and perhaps even totality. That carries the burden of proof, which means things get harder and perhaps too hard for some (e.g. the parent poster).
If purity is a requirement for "real" functional programming, then OCaml or Clojure aren't functional. Regarding totality, even Haskell has partial functions and exceptions.
Both OCaml and Clojure are principled and well designed languages, but they are mostly evolutions of Lisp and ML from the 70s. That's not where functional programming is today. Both encourage a functional style, which is good. And maybe that's your definition of a "functional language". But I think that definition will get increasingly less useful over time.
Sure, Python has types as part of the syntax, but Python doesn't have types like Java, C#, etc. have types. They are not pervasive and the semantics are not locked down.
That really depends on your preferred coding style.
> I really can't think of anything that comes close in terms of performance, features, ecosystem, developer experience.
This is also why I prefer to use Unity over all other engines by an incredible margin. The productivity gains of using a mature C# scripting ecosystem in a tight editor loop are shocking compared to the competition. Other engines do offer C# support, but if I had to make something profitable to save my own standard of living the choice is obvious.
There are only two vendors that offer built-in SIMD-accelerated linear math libraries capable of generating projection matrices out of the box. One is Microsoft and the other is Apple. The benefits of having stuff like this baked into your ecosystem are very hard to overstate. The amount of time you could waste looking for and troubleshooting primitives like this can easily kill an otherwise perfect vision.
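For example, with System.Numerics a view-projection is two calls (these are the real APIs; the camera values are arbitrary):

```csharp
using System;
using System.Numerics;

// System.Numerics ships SIMD-accelerated vector/matrix types in the box.
var projection = Matrix4x4.CreatePerspectiveFieldOfView(
    fieldOfView: MathF.PI / 3f,   // 60-degree vertical FOV
    aspectRatio: 16f / 9f,
    nearPlaneDistance: 0.1f,
    farPlaneDistance: 100f);

var view = Matrix4x4.CreateLookAt(
    cameraPosition: new Vector3(0f, 2f, 5f),
    cameraTarget: Vector3.Zero,
    cameraUpVector: Vector3.UnitY);

Console.WriteLine(view * projection);
```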
Java?
It is supported on more platforms, has more developers, more jobs, more OSS projects, and is more widely used (TIOBE 2024). Java's performance was historically better, but C# caught up.
Reified generics, value types, LINQ are just a few things that you would miss when going to Java. Also Java and .NET are both big, that's not a real argument here. Not that I would trust Tiobe index too much, but as of 2025 September C# is right behind Java at 5th place.
Or you can use the "C# without the line noise", which goes under the name of F#.
Yeah, but if you use F# then you'll have all the features C# has been working on for years, only in complete and mature versions, and also an opinionated language encouraging similar styles between teams instead of wild dialects of kinda-sorta immutability and kinda-sorta DUs, and everything in between, requiring constant vigilance and training... ;)
I’m a fan of all three languages, but C# spent the first years relearning why Visual Basic was very productive and the last many years learning why OCaml was chosen to model F# after. It’s landed in a place where I can make beautiful code the way I need to, but the mature libraries I’ve crafted to make it so simply aren’t recreate-able by most .Net devs, and the level of micro-managing it takes to maintain across groups is a line-by-line slog against orthodoxy and seeming ‘shortcuts’, draining the productivity those high level guarantees should provide. And then there’s the impact of EF combined with sloppy Linq which makes every junior/consultant LOC a potentially ticking time bomb without more line-by-line, function-by-function slog.
Compiler guarantees mean a lot.
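For anyone who hasn't been bitten by the EF-plus-sloppy-LINQ time bomb: it's usually accidental client-side evaluation. A sketch of the shape (entities invented; assumes a configured EF Core provider):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class Order
{
    public int Id { get; set; }
    public DateTime CreatedAt { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();
}

public static class Reports
{
    // Time bomb: AsEnumerable() silently switches to LINQ-to-Objects, so the
    // whole Orders table is materialized and filtered in application memory.
    public static List<Order> RecentSlow(ShopContext db, DateTime cutoff) =>
        db.Orders.AsEnumerable().Where(o => o.CreatedAt > cutoff).ToList();

    // Same query kept translatable: the filter runs as SQL on the server.
    public static Task<List<Order>> RecentFast(ShopContext db, DateTime cutoff) =>
        db.Orders.Where(o => o.CreatedAt > cutoff).ToListAsync();
}
```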
Except for F#, which also gets all the .NET10 cross-platform GC improvements for free and is a better programming language than C#.
> I really can't think of anything that comes close in terms of [...] developer experience.
Of all the languages that I have to touch professionally, C# feels by far the most opaque and unusable.
Documentation tends to be somewhere between nonexistent and useless, and MSDN's navigation feels like it was designed by a sadist. (My gold standard would be Rustdoc or Scala 2.13 era Scaladoc, but even Javadoc has been.. fine for basically forever.) For third-party libraries it tends to be even more dire and inconsistent.
The Roslyn language server crashes all the time, and when it does work.. it doesn't do anything useful? Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there! (I know there's this thing called "SourceLink" which is.. supposed to solve this? I think? But I've never actually seen it work in practice.)
Even finding where something comes from is ~impossible without the language server, because `using` statements don't mention.. what they're even importing. (Assuming that you have them at all. Because this is also the company that thought project-scoped imports were a good idea!)
And then there's the dependency injection, where I guess someone thought it would be cute if every library just had an opaque extension method on the god object, that didn't tell you anything about what it actually did. So good luck finding where the actual implementation of anything is.
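The pattern being described, sketched (the library and all names are invented; assumes an ASP.NET Core app):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// The opaque style: nothing at the call site says what this wires up.
builder.Services.AddAcmePayments();

var app = builder.Build();
app.Run();

// What the extension method hides:
public interface IPaymentGateway { }
public sealed class StripeGateway : IPaymentGateway { }

public static class AcmePaymentsExtensions
{
    public static IServiceCollection AddAcmePayments(this IServiceCollection services)
        => services.AddSingleton<IPaymentGateway, StripeGateway>();
}
```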
I almost exclusively work in C# and have never experienced the Roslyn crashes you mentioned. I am using either Rider or Visual Studio though.
> Like cross-project "go-to-definition" takes me to either a list of members or a decompiled listing of source code, even when I have the actual source code right there!
If these are projects you have in the same solution then it should never do this. I would only expect this to happen if either symbol files or source files are missing.
I use VS Code on macOS for all of my C# code over the last 5 years and also never experienced Roslyn crashes.
Try "go to implementation" in place of go to definition.
Very mixed feelings about this as there’s a strong case for the decisions made here but it also moves .NET further away from WASMGC, which makes using it in the client a complete non-starter for whole categories of web apps.
It’s a missed opportunity and I can’t help but feel that if the .NET team had gotten more involved in the proposals early on then C# in the browser could have been much more viable.
Those changes affect the .NET runtime, designed for real computers. This does not preclude the existence of a special runtime designed for Wasm with WasmGC support.
The .NET team appears to be aware of WasmGC [0], and they have provided their remarks when WasmGC was being designed [1].
[0] https://github.com/dotnet/runtime/issues/94420
[1] https://github.com/WebAssembly/gc/issues/77
.NET was already incompatible with WASM GC from the start [1]. The changes in .NET 10 are nothing in comparison to those. AFAIK WASM GC was designed with only JavaScript in mind so that's what everyone is stuck with.
[1] https://github.com/dotnet/runtime/issues/94420
There's 2 things,
1: JavaScript _interoperability_, i.e. the same heap but incompatible objects (nobody is doing static JS)
2: Java, Schemes, and many other GC'd languages have more "pure" GC models; C# traded some of that purity for practicality, and supporting it would've required some complications to the regular JS GCs.
A lot of the features here, stuff like escape analysis for methods etc. does not directly involve the GC - it reduces the amount of objects that go to the GC heap so the GC has less work to do in the first place.
How would this move .NET further away from WASMGC? This is a new GC for .NET, but doesn't add new things to the language that would make it harder to use WASMGC (nor easier).
For example, .NET has internal pointers which WASMGC's MVP can't handle. This doesn't change that so it's still a barrier to using WASMGC. At the same time, it isn't adding new language requirements that WASMGC doesn't handle - the changes are to the default GC system in .NET.
I agree it's disappointing that the .NET team wasn't able to get WASMGC's MVP to support what .NET needs. However, this change doesn't move .NET further away from WASMGC.
Webassembly taking off on the browser is wishful thinking.
There are a couple unicorns like Figma and that is it.
Performance is much better option with WebGPU compute, and not everyone hates JavaScript.
Whereas on the server it is basically a bunch of companies trying to replicate application servers, been there done that.
I wouldn't be surprised if it did take off; classic Wasm semantics were horrible, since you needed a lot of language support to even have simple kludges when referring to DOM objects via indices, plus extra liveness checking.
WASM-GC will remove a lot of those and make quite a few languages possible as almost first-class DOM-manipulating languages (there'll still be kludges, as the objects are opaque, but they'll be far less bad since they can at least avoid external ID mappings and dual-GC systems that behave leakily like old IE ref-counts did).
All great and dandy, except tooling still sucks.
You still need to usually install plenty of moving pieces to produce a wasm file out of the "place language here", write boilerplate initialisation code, debugging is miserable, only for a few folks to avoid writing JavaScript.
There will always be enthusiasts to take the initial steps, the question is if they have the taste to make it a coherent system that isn't horrible to use.
Counted out over N languages, we should see something decent land before long.
I think you may be underestimating how many people really dislike JavaScript.
As many that dislike PHP, C, C++, yet here we are.
> Webassembly taking off on the browser is wishful thinking.
It has taken off in the browser. If you've ever used Google Sheets you've used WebAssembly.
Another niche use case.
Google Sheets is one of the most widely used applications on the planet. It's not niche.
Amazon switched their Prime Video app from JavaScript to WebAssembly for double the performance. Is streaming video a niche use case?
I think they meant most people aren’t building a high performance spreadsheet, not most people aren’t using a high performance spreadsheet.
> most people aren’t building a high performance spreadsheet
Lots of people are building Blazor applications:
https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blaz...
> not most people aren’t using a high performance spreadsheet
A spreadsheet making use of WebAssembly couldn't be deployed to the browser if WebAssembly hadn't taken off in browsers.
Practical realities contradict pjmlp's preconceptions.
Don't mix up mainstream adoption at the same level as regular JavaScript and TypeScript with mere availability.
Microsoft would wish Blazor would take off like React and Angular, in reality it is seldom used outside .NET shops intranets in a way similar to WebForms.
> Blazor is seldom used outside .NET shops intranets
So, in other words, widely used in lots and lots of deployments.
Do you have a number for us?
Can you actually build something like Figma in Blazor? Does Blazor somehow facilitate that?
I think that was sarcasm :)
Comprehensive and (I thought) interesting article in perf improvements in .net 10:
Performance Improvements in .NET 10
https://devblogs.microsoft.com/dotnet/performance-improvemen...
This is a great article, as soon as you’re beyond the introductory 5 paragraphs on the minutiae of the opening song of Disney’s Frozen.
On the topic of DATAS, there was a discussion here recently: https://news.ycombinator.com/item?id=45358527
Thanks! Macroexpanded:
Preparing for the .NET 10 GC - https://news.ycombinator.com/item?id=45358527 - Sept 2025 (60 comments)
Interesting, I mostly work in JVM, and am always impressed how much more advanced feature-wise the .NET runtime is.
Won't this potentially cause stack overflows in programs that ran fine in older versions though?
I don't think the runtime is "much more advanced", the JVM has had most of these optimizations for years.
The JVM famously boxes everything though, probably because it was originally designed to run a dynamic language. An array list of floats is an array list of pointers. This created an entire cottage industry of alternative collections libraries with concrete array list implementations.
A float[] is packed, not a list of pointers, in the JVM.
An ArrayList<Float> is a list of pointers though.
Arrays have a static fixed size though, making them far less useful in practice. Anything one builds with generics is boxed. Dotnet doesn't have this problem.
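For contrast, a quick sketch of the .NET side: reified generics mean `List<float>` stores its values inline, which is why `CollectionsMarshal` can hand back the backing storage as a span:

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.InteropServices;

// In .NET, List<float> is backed by an actual float[]:
// values are stored inline rather than as boxed pointers.
var samples = new List<float> { 1.0f, 2.5f, 4.25f };

// Exposing the underlying storage is only possible because nothing is boxed.
// (Don't add or remove elements while the span is in use.)
Span<float> raw = CollectionsMarshal.AsSpan(samples);
raw[0] = 8.0f;

Console.WriteLine(samples[0]); // 8
```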
Currently you can get around this with Panama, even if the API is kind of verbose for the purpose.
Eventually value classes might close the gap, finally available as EA.
They're famously working on changing that. I think we're all hopeful that we'll start seeing the changes from Valhalla roll in post-25.
Almost none of this is in the JVM. Escape analysis is extremely limited on the standard JVM, and it's one of GraalVM's "enterprise" features. You have to pay for it.
> one of GraalVM's "enterprise" features. You have to pay for it.
Free for some (most?) use cases these days.
Basically enterprise edition does not exist anymore as it became the "Oracle GraalVM" with a new license.
https://www.graalvm.org/faq/
One limitation of the stack is that it needs to be contiguous virtual addresses, so it was often limited when devices just didn't have the virtual address space to "waste" on a large stack for every thread in a process.
But 64 bits of virtual address space is large enough that you can keep the stacks far enough apart that even for pretty extreme numbers of threads you'll run out of physical memory before they start clashing. So you can always just allocate more physical pages to the stack as needed, similar to the heap.
I don't know if the .net runtime actually does this, though.
> Won't this potentially cause stack overflows in programs that ran fine in older versions though?
That's certainly a possibility, and one that's come up before, even between .NET Framework code migrated to .NET Core. Though usually it's a sign that something is awry in the first place. Thankfully the default stack sizes can be overridden with config or environment variables.
I am surprised that they didn't already do a lot of optimizations informed by escape analysis, even though they have had value types from the beginning. Hotspot is currently hampered by only having primitive and reference types, which Project Valhalla is going to rectify.
DATAS has been great for us. Literally no effort, upgrade the app to net8 and flip it on. Huge reduction in memory.
TieredCompilation on the other hand caused a bunch of esoteric errors.
FWIW, Tiered Compilation has been enabled by default since .NET Core 3.1. If the code tries to use reflection to mutate static readonly fields and fails, that's the fault of that code.
I am considering dotnet Maui for a project. On the one hand, I am worried about committing to the Microsoft ecosystem where projects like Maui have been killed in the past and Microsoft has a lot of control. Also XML… On the other hand, I’ve been seeing so many impressive technical things about dotnet itself. Has anyone here used Maui and wants to comment on their experience?
Speaking as an experienced desktop .NET Dev, we've avoided it due to years of instability and no real confidence it'll get fully adopted. We've stuck with WPF, which is certainly a bit warty, but ultimately fine. If starting fresh at this point I'd give a real look at Avalonia, seems like they've got their head on their shoulders and are in it for the long haul.
Would also recommend Avalonia. It's truly cross-platform (supports also Linux) unlike MAUI.
I've been a C# developer my entire career and spent a few years building apps with Xamarin/Uno. At my current company, we evaluated MAUI and Flutter for our mobile app rewrite (1M+ monthly active users).
We first built a proof of concept with 15 basic tasks to implement in both MAUI and Flutter. Things like authentication, navigation, API calls, localization, lists, map, etc. In MAUI, everything felt heavier than it should've been. Tooling issues, overkill patterns, outdated docs, and a lot of small frustrations that added up. In Flutter, we got the same features done much faster and everything just worked. The whole experience was just nicer. The documentation, the community, the developer experience... everything is better.
I love C#, that's what we use for our backend, but for mobile developement Flutter was the clear winner. We launched the new app a year ago and couldn't be happier with our decision.
Aside from using an esoteric language and being a Google product with a risk of shutting down just because, Flutter's game-like UI rendering on a canvas was confirmed to be quite a questionable approach with the whole Liquid Glass transition. If anything, React Native is a more reliable choice: endless supply of React devs and native UI binding similar to MAUI.
I'd say Uno Platform[0] is a better alternative to Flutter for those who do not care much about the native look: it replicates WinUI API on iOS, Mac, Android, and Linux, while also providing access to the whole mature .NET ecosystem – something Flutter can't match for being so new and niche.
[0]: https://platform.uno/
> Flutter's game-like UI rendering on a canvas was confirmed to be quite a questionable approach with the whole Liquid Glass transition.
Im not a flutter dev and Im very interested to hear how it doesn’t play well liquid glass.
It simply can't use it because it does not use native UIs, but instead mimics them with its own rendering engine. This approach worked to some extent during the flat minimalist era, but now that Apple has added so many new animations and transitions, reproducing them all has become close to impossible.
At best, Flutter can implement some shaders for the glass'y look of the controls, but something as basic as the Liquid Glass tab bar would require a huge effort to replicate it inside Flutter, while in MAUI and RN it's an automatic update.
Not a single user cares about "native ui", it's only a debate among developers. Take the top 20 apps people are using, all of them use their own design system which isn't native.
Flutter will always have multiple advantages against React Native (and even Native toolkits themselves) in terms of upgradability, you can do 6 months of updates with only 30mins of work and make sure it 100% works everywhere.
The quality of the testing toolkit is also something which is still unmatched elsewhere and makes a big difference on the app reliability.
Classic HN comment with unapologetic statements. If Flutter were that good, it wouldn't have flatlined so fast after the initial hype a few years ago. I tried it last year, only to see rendering glitches in the sample project.
28% of new iOS apps are made with flutter and it's the #1 cross platform framework on stack overflow 2024 survey so I highly doubt it has flatlined.
https://flutter.dev/multi-platform/ios
https://survey.stackoverflow.co/2024/technology#1-other-fram...
All those stats look great on paper, but a few months ago I checked job postings for different mobile frameworks, and Flutter listings were 2-3 times fewer than RN. Go on Indeed and see for yourself.
For a "28% of new iOS apps", the Flutter subreddit is a ghost town with regular "is it dying? should I pick RN?" posts. I just don't buy the numbers because I'm myself in a rather stagnant cross-platform ecosystem, so I know this vibe well.
If I ever leave .NET, no way I'd move to something like Flutter. Even Kotlin Multiplatform is more promising concept-wise. LLMs are changing cross-platform development and Flutter's strong sides are not that important anymore, while its weak sides are critical.
Rendering glitches may be due to the completely new, lightweight rendering engine made from scratch that has replaced Skia. Shouldn't be a problem once it matures a bit.
Not everything is related to tech. In my company, for example, they picked React Native because they have the ability to tap into the front-end job market (or they think they do), certainly not for its intrinsic qualities.
Personally I've done a 50k+ line project in Flutter and I didn't hit any of these. There's been a few issues for sure but nowhere near what I experienced with React Native (and don't start me on native itself)
I would personally prefer Avalonia (https://avaloniaui.net/) over MAUI.
I highly recommend using MvvmCross with native UIs instead of MAUI: you get your model and view model 100% cross-platform, and then build native UIs twice (with UIKit and Android SDK), binding them to the shared VM. It also works with AppKit and WinUI.
In the past it was rather painful for a solo dev to do them twice, but now Claude Code one-shots them. I just do the iOS version and tell it to repeat it on Android – in many cases 80% is done instantly.
Just in case, I have an app with half a million installs on both stores that has been running perfectly since 2018 using this ".NET with native UIs" approach.
I have used MAUI at my previous job to build 3 different apps, used only on mobile (Android and iOS). I don't know why many people dislike XAML; to me it felt natural to use it for UI. I researched Flutter and liked MAUI/XAML more, although the development loop felt smoother with Flutter. What I didn't like was the constant bugs: with each new version that I was eager to update to, hoping it would fix current issues, something new appeared. After spending countless hours searching through the project's GitHub, I am under the impression that there aren't many resources dedicated to MAUI development from Microsoft; the project is carried forward by a few employees and volunteers. If I were to start another project, I would seriously look into Avalonia. But I always was a backend guy, so now at my current job I do server backend development in C# and couldn't be happier.
I do think server/backend is C#'s sweetspot because EF Core is soooo good.
But it's curious that it's used widely with game engines (Unity, Godot), but has a pretty weak and fractured UI landscape.
If you're Windows-based, I'd unironically consider WinForms; it's been re-added to modern .NET on Windows, and is one of the easiest and best ways to make simple GUI applications.
Sadly it's not cross-platform, which is a benefit of MAUI.
I don't really understand why Microsoft didn't do a Tauri-like thing for C# devs instead of this MAUI stuff. It would be a tiny project in comparison, and it wouldn't be completely going against the grain like MAUI is. If you want a write-once/run-in-more-places compromise, the browser already does that very well.
Because web UI for a desktop app sucks compared to actual native UI. As a user, any time that I see an app uses Electron, Tauri or any of that ilk, I immediately look for an alternative because the user experience will be awful.
Maui Blazor Hybrid has a cool model where the HTML UI binds to native code (not WASM) for mobile and desktop. That is the closest you can get to Tauri-like. If you want to run that same app in a browser, then it'll use Blazor with WASM.
MAUI Blazor Hybrid is great if you don't want to learn XAML. Apple killed Silverlight; Microsoft kept it running for ~20 years. If you stayed close to what Xamarin was, the migration to MAUI isn't bad from what I've seen.
I would say it really depends on your target. If you want only mobile, then there are different options (see other comments). But if you want only desktop, then Avalonia is good. However, if you want both (like my team), then we did end up going for MAUI. We use MAUI Blazor, as we also want to run on a server. We're finding iOS difficult to target, but I don't think that has anything to do with MAUI.
I wonder if this makes .net competitive for high frequency trading...
The Benchmarks Game[0] shows C# just behind C/C++ and Rust across a variety of benchmark types. C# has good facilities for dipping into unmanaged code and utilizing hardware intrinsics, so you'd have to tap into those and bypass managed code in many cases to achieve higher performance; there's a small sketch after the link below.
[0] https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
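A small taste of the managed SIMD facilities (assumes .NET 6+ for `Vector.Sum`):

```csharp
using System;
using System.Numerics;

// Vector<float> maps to the widest SIMD registers the hardware offers
// (e.g. 8 floats on AVX2), with no unsafe code required.
static float SumSimd(ReadOnlySpan<float> values)
{
    var acc = Vector<float>.Zero;
    int i = 0;
    for (; i <= values.Length - Vector<float>.Count; i += Vector<float>.Count)
        acc += new Vector<float>(values.Slice(i, Vector<float>.Count));

    float sum = Vector.Sum(acc);          // horizontal add of the lanes
    for (; i < values.Length; i++)        // scalar tail
        sum += values[i];
    return sum;
}

Console.WriteLine(SumSimd(new float[] { 1f, 2f, 3f, 4f, 5f }));  // 15
```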
It's been competitive for a long time now.
https://medium.com/@ocoanet/improving-net-disruptor-performa...
Why would you ever pick a language like this for HFT? It seems like a nonstarter for me but I guess Java is out there in use
There are plenty of domains where the competition is not one of pure latency (where FPGAs and custom hardware have even taken over from C++). In these domains managed languages can be sufficient to get to "fast enough" and the faster iteration speed and other comforts they provide can give an edge over native languages.
I think that DATAS also has more knobs to tune than the old GC. I plan to set the Throughput Cost Percentage (TCP) via System.GC.DTargetTCP to some low value so that it has little impact on latency.
https://learn.microsoft.com/en-us/dotnet/core/runtime-config...
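In runtimeconfig.json form that would look roughly like this (knob names per the linked page; the values are just illustrative):

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true,
      "System.GC.DynamicAdaptationMode": 1,
      "System.GC.DTargetTCP": 2
    }
  }
}
```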
Are you now allowed to benchmark the .Net runtime / GC?
Edit: Looks like you are allowed to benchmark the runtime now. I was able to locate an ancient EULA which forbade this (see section 3.4): https://download.microsoft.com/documents/useterms/visual%20s...
> You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft’s prior written approval.
Yes, you probably mixed it with SQL Server.
> Publishing SQL Server benchmarks without prior written approval from Microsoft is generally prohibited by the standard licensing agreements.
why wouldn't you be?
Yes.
...Were you not before?
IIRC the EULA forbids it. This is why you don't see .net v/s Java GC comparisons for example.
I seem to vaguely recall such a thing from way back in the early days, but the only copy[1] of the .Net Framework EULA I could readily find says it's OK as long as you publish all the details.
[1]: https://docs.oracle.com/en/industries/food-beverage/micros-w...
I can't find mention of anything resembling this. The .NET runtime is under the MIT license.
https://download.microsoft.com/documents/useterms/visual%20s...
It's because you aren't looking at 20-year-old EULAs
>3.4 Benchmark Testing. The Software may contain the Microsoft .NET Framework. You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft’s prior written approval.
This person is not likely familiar with the history of the .net framework and .net core because they decided a long time ago they were never going to use it.
Yeah, you got me there. I have moved on to Linux development since then. Haven't kept up with Microsoft developer tools.
As a dotnet developer all my code these days is run on Linux.
.net core on Linux works great btw.
In recent versions (i.e. since .NET 5 in 2020) ".NET core" is just called ".NET"
The cross-platform version is mainstream, and this isn't new any more.
.NET on Linux works fine for services. Our .NET services are deployed to Linux hosts, and it's completely unremarkable.
As long as it's your deployment target and nothing else. For development, both macOS and Linux continue to be second class citizens, and I don't see this changing as it goes against their interests. In most .NET shops around me, the development and deployment tooling is so closely tied to VS that you can't really not use it.
It's fine if you stick to JetBrains and pay for their IDE (or do non-commercial projects only), and either work in a shop which isn't closely tied to VS (basically non-existent in my area), or work by yourself.
Well, in most .NET shops around me:
> The development and deployment tooling is so closely tied to VS that you can't really not use it.
Development tooling: It's 50-50. Some use Visual Studio, some use Rider. It's fine. The only drawback is that VS Live Share and the Jetbrains equivalent don't interoperate.
Deployment tooling: There is deployment tooling tied to the IDE? No-one uses that; it seems like a poor idea. I see automated build/test/deploy pipelines in GitHub Actions and in Octopus Deploy. TeamCity still gets used, I guess.
It's true though that the most common development OS is Windows by far (with Mac as second) and the most common deployment target by far is Linux.
However the fact that there is close to no friction in this dev vs deploy changeover means that the cross-platform stuff just works. At least for server-side things such as HTTP request and queue message processing. I know that the GUI toolkit story is more complex and difficult, but I don't have to deal with it at all so I don't have details or recommendations.
> is there deployment tooling tied to the IDE?
VS has the “Publish” functionality for direct deployment to targets. It works well for doing that and nothing else. As you said, CI/CD keeps deployment IDE agnostic and has far more capabilities (e.g. Azure DevOps, GitHub Actions).
Yeah? Ncurses still a thing? I only ask because that's the only api name I remember from forever ago.
I worked on a MUD on Linux right after high school for a while. Spent most of the time on the school's BSDi server prior to that, though.
Then I went java, and as they got less permissive and .net got more permissive I switched at some point. I've really loved the direction C# has gone merging in functional programming idioms and have stuck with it for most personal projects but I am currently learning gdscript for some reason even though godot has C# as an option.
The only thing that has become "less permissive" is Oracle's proprietary OpenJDK build, which isn't really needed or recommended in 99.9% of cases (except for when the vendor of your proprietary application requires it to provide support).
The rest of the ecosystem is "more permissive" than .NET since there are far more FOSS libraries for every task under the sun (which don't routinely go commercial without warnings), and fully open / really cross-platform development tooling, including proper IDEs.
The fact that you even need to be very careful when choosing a JDK is a lot bigger problem than some simple, easily replaceable library going commercial (not that this has not happened also in Java land). Also, .NET has been fully open and really cross-platform for a long time already, and it includes more batteries than Java out of the box; you may not even need to include any third-party dependencies (although there are also plenty to choose from: 440k packages on NuGet). .NET also has proper IDEs, or is JetBrains Rider not a proper IDE for you?
Funny, because one the libraries I was using at the time went hyper commercial (javafxports). Java burned me on two fronts at the very same time and lost me. Ymmv I guess. It's always a good time to try something new anyway... I also moved to kotlin on android and couldn't be happier with it, it's a clearly superior language.
Wow didn't know that. Can you provide some links?
What are you talking about?
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Use managed language, it will handle memory stuff for you, you don’t have to care.
But also read these 400 articles to understand our GC. If you are lucky, we will let you change 3 settings.
You can provide your own GC implementation if you really wanted to:
https://learn.microsoft.com/en-us/dotnet/core/runtime-config...
https://github.com/dotnet/runtime/blob/main/src/coreclr/gc/g...
Interesting!
It works just fine out of the box. The articles/manuals are just if you want to really understand how it works and get the most out of it. What's the issue with that?
In my 20+ years using C#, there's only been one instance where I needed to explicitly control some behavior of the GC (it would prematurely collect the managed handle on a ZMQ client) and that only required one line of code to pin the handle.
It pretty much never gets in your way for probably 98% of developers.
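That kind of fix is typically a single `GCHandle` (a sketch; the socket object here is a stand-in for the ZMQ handle wrapper):

```csharp
using System;
using System.Runtime.InteropServices;

// A normal GCHandle keeps a managed object alive while native code
// (invisible to the GC) still holds a reference to it.
var socket = new object();               // stand-in for the ZMQ handle wrapper
var handle = GCHandle.Alloc(socket);     // prevents premature collection
try
{
    // ... hand the wrapper to native code and do work ...
}
finally
{
    handle.Free();                       // let the GC reclaim it afterwards
}
```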
Dr. Dobb's and The C/C++ Users Journal archives are full of articles and ads for special memory allocators, because the ones in the standard library for C or C++ also don't work in many cases; they are only good enough as general-purpose allocators.
You need these settings when you drive your application hard into circumstances where manual memory allocation arguably starts making sense again: humongous heaps, lots of big, unwieldy objects, or tight latency (or tail-latency) requirements. But unless you're using things like Rust or Swift, the price of manual memory management is the need to investigate segmentation faults. I'd prefer to spend developer time on feature development and benchmarking instead.