>The goal is, that xfwl4 will offer the same functionality and behavior as xfwm4 does...
I wonder how strictly they interpret behavior here given the architectural divergence?
As an example, focus-stealing prevention. In xfwm4 (and x11 generally), this requires complex heuristics and timestamp checks because x11 clients are powerful and can aggressively grab focus. In wayland, the compositor is the sole arbiter of focus, hence clients can't steal it, they can only request it via xdg-activation. Porting the legacy x11 logic involves the challenge of actually designing a new policy that feels like the old heuristic but operates on wayland's strict authority model.
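As a concrete illustration of what that policy design might look like on the compositor side, here is a rough sketch in plain Rust. All the types and the two-second freshness window are made up for illustration; this is not Smithay's or xfwl4's actual API, just the shape of the decision: the compositor only ever judges an activation token it issued itself.

    use std::time::{Duration, Instant};

    // Hypothetical record the compositor keeps for each xdg-activation token it hands out.
    struct ActivationToken {
        issued_at: Instant,    // when the token was issued
        from_user_input: bool, // was it tied to a real input-event serial?
    }

    // The compositor's options: grant focus, or just flag the window as urgent.
    enum Decision {
        GrantFocus,
        MarkUrgent,
    }

    fn decide(token: &ActivationToken, user_is_typing: bool) -> Decision {
        // Unlike X11, there is no guessing about what the client did behind
        // the server's back; the compositor only evaluates its own token.
        let fresh = token.issued_at.elapsed() < Duration::from_secs(2);
        if token.from_user_input && fresh && !user_is_typing {
            Decision::GrantFocus
        } else {
            Decision::MarkUrgent
        }
    }

    fn main() {
        let token = ActivationToken { issued_at: Instant::now(), from_user_input: true };
        match decide(&token, false) {
            Decision::GrantFocus => println!("focus granted"),
            Decision::MarkUrgent => println!("urgency hint only, no focus steal"),
        }
    }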
This leads to my main curiosity regarding the raw responsiveness of xfce. On potato hardware, xfwm4 often feels snappy because it can run as a distinct stacking window manager with the compositor disabled. Wayland, by definition forces compositing. While I am not concerned about rust vs C latency (since smithay compiles to machine code without a GC), I am curious about the mandatory compositing overhead. Can the compositor replicate the input-to-pixel latency of uncomposited x11 on low-end devices or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
At least they are honest regarding the reasons, not a wall of text to justify what boils down to "because I like it".
Naturally, this kind of language island creates some attrition regarding build tooling, integration with the existing ecosystem, and who is able to contribute to what.
So let's see how it evolves; even with my C bashing, I was a much happier XFCE user than with GNOME and GJS all over the place.
You know that all the Wayland primitives, event handling and drawing in gnome-shell are handled in C/native code through Mutter, right?
The JavaScript in gnome-shell is the cherry on top for scripting, similar to C#/Lua (or any GCed language) in game engines, elisp in Emacs, even JS in QtQuick/QML.
It is not the performance bottleneck people seem to believe.
I can dig out the old GNOME tickets and related blog posts...
Implementation matters, including proper use of JIT/AOT toolchains.
One thing to keep in mind is that composition does not mean you have to do it with vsync, you can just refresh the screen the moment a client tells you the window has new contents.
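A toy sketch of that idea (hypothetical event types, not any real compositor's API): the repaint is driven by client damage instead of by the display clock, trading possible tearing for lower latency.

    use std::sync::mpsc::{channel, Receiver};

    // Hypothetical events a backend could hand to the compositor core.
    enum Event {
        Damage { window_id: u64 }, // a client committed new contents
        VBlank,                    // the display finished a refresh cycle
    }

    // Damage-driven repaint: present as soon as something changes,
    // rather than waiting for the next VBlank.
    fn run(events: Receiver<Event>) {
        for ev in events {
            match ev {
                Event::Damage { window_id } => {
                    // Stand-in for the real composite-and-present work.
                    println!("window {window_id} changed; presenting now, no vsync wait");
                }
                Event::VBlank => {
                    // A vsync-locked compositor would do its single repaint here instead.
                }
            }
        }
    }

    fn main() {
        let (tx, rx) = channel();
        tx.send(Event::Damage { window_id: 1 }).unwrap();
        drop(tx); // close the channel so the loop terminates
        run(rx);
    }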
> Can the compositor replicate the input-to-pixel latency of uncomposited x11 on low-end devices or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
I think this is ultimately correct. The compositor will have to render a frame at some point after the VBlank signal, and it will need to composite the client buffers as they exist at that point, which will be whatever was last rendered to them.
This can be somewhat alleviated, though. Both KDE and GNOME have been getting progressively more aggressive about "unredirecting" surfaces into hardware accelerated DRM planes in more circumstances. In this situation, the unredirected planes will not suffer compositing latency, as their buffers will be scanned out by the GPU at scanout time with the rest of the composited result. In modern Wayland, this is accomplished via both underlays and overlays.
There is also a slight penalty to the latency of mouse cursor movement that is imparted by using atomic DRM commits. Since using atomic DRM is very common in modern Wayland, it is normal for the cursor to have at least a fraction of a frame of added latency (depending on many factors.)
I'm of two minds about this. One, obviously it's sad. The old hardware worked perfectly and never had latency issues like this. Could it be possible to implement Wayland without full compositing? Maybe, actually. But I don't expect anyone to try, because let's face it, people have simply accepted that we now live with slightly more latency on the desktop. But then again, "old" hardware is now hardware that can, more often than not, handle high refresh rates pretty well on desktop. An on-average increase of half a frame of latency is pretty bad with 60 Hz: it's, what, 8.3ms? But half a frame at 144 Hz is much less, at somewhere around 3.5ms of added latency, which I think is more acceptable. Combined with aggressive underlay/overlay usage and dynamic triple buffering, I think this makes the compositing experience an acceptable tradeoff.
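The arithmetic behind those numbers, in case anyone wants to plug in their own refresh rate (plain Rust, assuming the usual rule of thumb that an extra compositing step costs about half a refresh period on average):

    fn main() {
        for hz in [30.0_f64, 60.0, 120.0, 144.0, 240.0] {
            let frame_ms = 1000.0 / hz;    // length of one refresh period in ms
            let added_ms = frame_ms / 2.0; // ~half a frame of average added latency
            println!("{hz:>5} Hz: frame {frame_ms:5.1} ms, ~{added_ms:4.1} ms added");
        }
    }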
What about computers that really can't handle something like 144 Hz or higher output? Well, tough call. I mean, I have some fairly old computers that can definitely handle at least 100 Hz very well on desktop. I'm talking Pentium 4 machines with old GeForce cards. Linux is certainly happy to go older (though the baseline has been inching up there; I think you need at least Pentium now?) but I do think there is a point where you cross a line where asking for things to work well is just too much. At that point, it's not a matter of asking developers to not waste resources for no reason, but asking them to optimize not just for reasonably recent machines but also to optimize for machines from 30 years ago. At a certain point it does feel like we have to let it go, not because the computers are necessarily completely obsolete, but because the range of machines to support is too wide.
Obviously, though, simply going for higher refresh rates can't fix everything. Plenty of laptops have screens that can't go above 60 Hz, and they are forever stuck with a few extra milliseconds of latency when using a compositor. It is not ideal, but what are you going to do? Compositors offer many advantages, so it seems straightforward to design for a future where they are always on.
Love your post. So, don’t take this as disagreement.
I’m always a little bewildered by frame rate discussions. Yes, I understand that more is better, but for non-gaming apps (e.g. “productivity” apps), do we really need much more than 60 Hz? Yes, you can get smoother fast scrolling with higher frame rate at 120 Hz or more, but how many people were complaining about that over the last decade?
I enjoy working on my computer more at 144Hz than 60Hz. Even on my phone, the switch from 60Hz to a higher frame rate is quite obvious. It makes the entire system feel more responsive and less glitchy. VRR also helps a lot in cases where the system is under load.
60Hz is actually a downgrade from what people were used to. Sure, games and such struggled to get that kind of performance, but CRT screens did 75Hz/85Hz/100Hz quite well (perhaps at lower resolutions, because full-res 1200p sometimes made text difficult to read on a 21 inch CRT, with little benefit from the added smoothness as CRTs have a natural fuzzy edge around their straight lines anyway).
There's nothing about programming or word processing that requires more than maybe 5 or 6 fps (very few people type more than 300 characters per minute anyway) but I feel much better working on a 60 fps screen than I do a 30 fps one.
Everyone has different preferences, though. You can extend your laptop's battery life by quite a bit by reducing the refresh rate to 30Hz. If you're someone who doesn't really mind the frame rate of their computer, it may be worth trying!
CRT screens did 75Hz/85Hz/100Hz quite well, but rendered only one pixel/dot at a time. This is in no way equivalent to 60Hz on a flat panel!
> how many people were complaining about that over the last decade?
Quite a few. These articles tend to make the rounds when it comes up: https://danluu.com/input-lag/ https://lwn.net/Articles/751763/ Perception varies from person to person, but going from my 144hz monitor to my old 60hz work laptop is so noticeable to me that I switched it from a composited wayland DE to an X11 WM.
Input lag is not the same as refresh rate. 60 Hz is 16.7 ms per frame. If it takes a long time for input to appear on screen it’s because of the layers and layers of bloat we have in our UI systems.
Essentially, the only reason to go over 60 Hz for desktop is for a better "feel" and for lower latency. Compositing latency is mainly centered around frames, so the most obvious and simplest way to lower that latency is to shorten how long a frame is, hence higher frame rates.
However, I do think that high refresh rates feel very nice to use even if they are not strictly necessary. I consider it a nice luxury.
If our mouse cursors are going to have half a frame of latency, I guess we will need 60Hz or 120Hz desktops, or whatever.
I dunno. It does seem a bit odd, because who was thinking about the framerates of, like, desktops running productivity software, for the last couple decades? I guess I assumed this would never be a problem.
Mouse cursor latency and window compositing latency are two separate things. I probably did not do a good enough job conveying this. In a typical Linux setup, the mouse cursor gets its own DRM plane, so it will be rendered on top of the desktop during scanout right as the video output goes to the screen.
There are two things that typically impact mouse cursor latency, especially with regards to Wayland:
- Software-rendering, which is sometimes used if hardware cursors are unavailable or buggy for driver/GPU reasons. In this case the cursor will be rendered onto the composited desktop frame and thus suffer compositor latency, which is tied to refresh rate.
- Atomic DRM commits. Using atomic DRM commits, even hardware-rendered cursors can suffer additional latency. In this case, the added latency is not necessarily tied to frame times or refresh rates. Instead, it's tied to when during the refresh cycle the atomic commit is sent; specifically, how close to the deadline (a toy model of this is sketched below). I think in most cases we're talking a couple milliseconds of latency. It has been measured before, but I cannot find the source.
Wayland compositors tend to use atomic DRM commits, hence a slightly more laggy mouse cursor. I honestly couldn't tell you if there is a specific reason why they must use atomic DRM, because I don't have knowledge that runs that deep, only that they seem to.
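A back-of-the-envelope model of that "how close to the deadline" effect, purely my own toy numbers rather than a measurement: assume the compositor sends exactly one atomic commit per 60 Hz cycle, about 1 ms before vblank, and see how long a cursor update has to wait for the commit it rides in, depending on when in the cycle the input arrived.

    fn main() {
        let period_ms = 1000.0 / 60.0;            // one refresh cycle at 60 Hz
        let commit_deadline_ms = period_ms - 1.0; // commit assumed ~1 ms before vblank

        for input_at_ms in [2.0_f64, 8.0, 15.0, 16.0] {
            // Input that lands after this cycle's commit was sent has to
            // wait for the next cycle's commit instead.
            let wait_ms = if input_at_ms <= commit_deadline_ms {
                commit_deadline_ms - input_at_ms
            } else {
                commit_deadline_ms + period_ms - input_at_ms
            };
            println!("input {input_at_ms:4.1} ms into the cycle -> next commit in {wait_ms:4.1} ms");
        }
    }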
Mouse being jumpy shouldn’t be related to refresh rate. The mouse driver and windowing system should keep track of the mouse position regardless of the video frame rate. Yes, the mouse may jump more per frame with a lower frame rate, but that should only be happening when you move the mouse a long distance quickly. Typically, when you do that, you’re not looking at the mouse itself but at the target. Then, once you’re near it, you slow down the movement and use fine motor skills to move it onto the target. That’s typically much slower and frame rate won’t matter much because the motion is so much smaller.
I couldn't find ready stats on what percentage of displays are 60 Hz, but outside of gaming and high-end machines I suspect 60 Hz is still the majority of machines used by actual users, meaning we should evaluate the latency as it is observed by most users.
> ...or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
I think I know what "frame perfect" means, and I'm pretty sure that you've been able to get that for ages on X11... at least with AMD/ATi hardware. Enable (or have your distro enable) the TearFree option, and there you go.
I read somewhere that TearFree is triple buffering, so -if true- it's my (perhaps mistaken) understanding that this adds a frame of latency.
> I read somewhere that TearFree is triple buffering, so -if true- it's my (perhaps mistaken) understanding that this adds a frame of latency.
True triple buffering doesn't add one frame of latency, but since it enforces only whole frames be sent to the display instead of tearing, it can cause partial frames of latency. (It's hard to come up with a well-defined measure of frame latency when tearing is allowed.)
But there have been many systems that abused the term "triple buffering" to refer to a three-frame queue, which always does add unnecessary latency, making it almost always the wrong choice for interactive systems.
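A toy way to see the difference between the two meanings of "triple buffering" (a simulation with made-up timings, not any real swapchain API): with mailbox-style triple buffering only the newest finished frame waits for the display, while a three-deep FIFO queue shows every frame and lets them pile up behind each other.

    use std::collections::VecDeque;

    fn main() {
        // Four frames finish rendering at these times (ms); the display
        // latches one queued frame every 16 ms.
        let finished_at = [0.0_f64, 4.0, 8.0, 12.0];

        // FIFO queue: every frame is shown in order, so later frames wait
        // behind earlier ones and latency grows.
        let queue: VecDeque<f64> = finished_at.iter().copied().collect();
        for (i, t) in queue.iter().enumerate() {
            let shown_at = 16.0 * (i as f64 + 1.0);
            println!("FIFO    frame {i}: shown after waiting {:.1} ms", shown_at - t);
        }

        // Mailbox: a newer finished frame replaces the pending one, so the
        // display always latches the freshest frame and the rest are dropped.
        let newest = *finished_at.last().unwrap();
        println!("Mailbox: shows only the newest frame, waited {:.1} ms", 16.0 - newest);
    }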
> Can the compositor replicate the input-to-pixel latency of uncomposited x11 on low-end devices or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
Well, the answer is just no. Wayland has been consistently slower than X11, and nothing running on top of it can really get around that.
Can you cite any sources for that claim? I found this blog post that says wayland is pretty much on par with X11 except for XWayland, which should be considered a band-aid only anyways: https://davidjusto.com/articles/m2p-latency/
In my view, this project itself shows some of the reasons why Wayland is the right path forward.
On X, we had Xorg and that is it. But at least Xorg did a lot of the work for you.
On Wayland, you in theory have to do a lot more of the work yourself when you build a compositor. But what we are seeing is libraries emerge that do this for you (wlroots, Smithay, Louvre, aquamarine, SWC, etc). So we have this one-man project expecting to deliver a dev release in just a few months (mid-2026 is 4 months from now).
But it is not just that we have addressed the Wayland objection. This project was able to evaluate alternatives and decide that Smithay is the best fit, both for features and language choice. As time goes on, we will see more implementations that will compete with each other on quality and features. This will drive the entire ecosystem forward. That is how Open Source is supposed to work.
Because Wayland only does essential low-level stuff such as display and graphics, it forced people to start coming up with a common Linux desktop (programming) interface out of nowhere to basically glue everything together and make programs at least interoperate.
Such an effort to rethink the Linux desktop could have been a major project in its own right, but since having something was necessitated by Wayland, all of it has become hurried and uncontrolled. Anything reminiscent of a bigger and more comprehensive project is in its initial stages at best. If Wayland has been coming on for about ten years now, I'll give it another ten years until we have some kind of established, consistent desktop API for Linux again.
X11 did offer some very basic features for a desktop environment so that programs using different toolkits could work together, and enough hooks that you could implement stuff in window managers etc. Yet there was nothing like the more complete interfaces of the desktops of other operating systems that tied everything together in a central, consistent way. So, the Linux desktop interface was certainly in need of a rewrite, but the way it's happening is just disheartening.
Nobody has a user-space stick big enough to force things in the Linux world.
When Apple dropped the old audio APIs of classic macOS and introduced CoreAudio, they pissed off a lot of developers, but those developers had no choice. In the GUI realm, they only deprecated HIKit for a decade or two before removing it (if they've even done that), but they made it very clear that CoreFoo was the API you should be using and that was that.
In Linux-land, nobody has that authority. Nobody can come up with an equivalent to Core* for Linux and enforce its use. Consequently, you're going to continue to see the Qt/GTK/* splits, where the only commonality is at the lowest level of the window system (though, to Qt's credit, optionally also the event loop).
GNOME has enough weight to at least force most projects to accommodate them. But unfortunately this has mostly been for the worst, as GNOME is usually the odd one out with most matters of taste and design.
I hope that XFCE remains a solid lightweight desktop option. I've become a huge fan of KDE over the past couple of years, but it certainly isn't what you would consider lightweight or minimal.
Personally, I'm a big proponent of Wayland and not a big Rust detractor, so I don't see any problem with this. I do, however, wonder how many long-time XFCE fans and the folks who donated the money funding this will feel about it. To me the reasoning is solid: Wayland appears to be the future, and Rust is a good way to help avoid many compositor crashes, which are a more severe issue in Wayland (though they don't necessarily need to be fatal, FWIW.) Still I perceive a lot of XFCE's userbase to be more "traditional" and conservative about technologies, and likely to be skeptical of both Wayland and Rust, seeing them as complex, bloated, and unnecessary.
Of course, if they made the right choice, it should be apparent in relatively short order, so I wish them luck.
> Still I perceive a lot of XFCE's userbase to be more "traditional" and conservative about technologies, and likely to be skeptical of both Wayland and Rust, seeing them as complex, bloated, and unnecessary.
Very long time (since 2007) XFCE user here. I don't think this is accurate. We want things to "just work" and not change for no good reason. Literally no user cares what language a project is implemented in, unless they are bored and enjoy arguing about random junk on some web forum. Wayland has the momentum behind it, and while there will be some justified grumbling because change is always annoying, the transition will happen and will be fairly painless as native support for it continues to grow. The X11 diehards will go the way of the SysV-init diehards; some weird minority that likes to scream about the good old days on web forums but really no one cares about.
There are good reasons to switch to Wayland, and I trust the XFCE team to handle the transition well. Great news from the XFCE team here, I'm excited for them to pull this off.
I used XFCE for a long time and I very much agree. It just works, and is lightweight. I use KDE these days but XFCE would be my second choice.
> The X11 diehards will go the way of the SysV-init diehard
I hope you are not conflating anti-systemd people with SysV init diehards? As far as I can see very few people want to keep SysV init, but there are lots who think systemd is the wrong replacement, primarily because it's a lot more than an init system.
In many ways the objections are opposite. People hate systemd for being more than init; people hate Wayland for doing less than X.
Edit: corrected "Wayland" to "XFCE" in first sentence!
If Rust has one weakness right now, it's bindings to system and hardware libraries. There's a massive barrier in Rust communicating with the outside ecosystem that's written in C. The definitive choice to use Rust and an existing Wayland abstraction library narrows their options down to either creating bindings of their own, or using smithay, the brand new Rust/Wayland library written for the Cosmic desktop compositor. I won't go into details, but Cosmic is still very much in beta.
It would have been much easier and cost-effective to use wlroots, which has a solid base and has ironed out a lot of problems. On the other hand, Cosmic devs are actively working on it, and I can see it getting better gradually, so you get some indirect manpower for free.
I applaud the choice to not make another core Wayland implementation. We now have Gnome, Plasma, wlroots, weston, and smithay as completely separate entities. Dealing with low-level graphics is an extremely difficult topic, and every implementor encounters the same problems and has to come up with independent solutions. There's so much duplicated effort. I don't think people getting into it realize how deceptively complex and how many edge-cases low-level graphics entails.
There really isn't a "massive barrier" to FFI. Autogenerate the C bindings and you're done. You don't have to wrap it in a safe abstraction, and imo you shouldn't.
This. It is somewhat disheartening to hear the whole interop-with-C with Rust being an insurmountable problem. Setting the whole "it's funded by the Government/Google etc" nonsense aside: I personally wish that at least a feeble attempt would be made to actually use the FFI capabilities that Rust and its ecosystem have before folks form an opinion. And - I'm not ashamed to state that I'm an early adopter of the language - it's very good. Please consider that the Linux kernel project, Google, Microsoft etc went down the Rust path not on a whim but after careful analysis of the pros and cons. The pros won out.
> This. It is somewhat disheartening to hear the whole interop-with-C with Rust being an insurmountable problem.
I have done it and it left a bad taste in my mouth. Once you're doing interop with C you're just writing C with Rust syntax topped off with a big "unsafe" dunce cap to shame you for being a naughty, lazy programmer. It's unergonomic and you lose the differentiating features of Rust. Writing safe bindings is painful, and using community written ones tends to pull in dozens of dependencies. If you're interfacing a C library and want some extra features there are many languages that care far more about the developer experience than Rust.
> a big "unsafe" dunce cap to shame you for being a naughty, lazy programmer
That's bizarrely emotional. It's a language feature that allows you to do things the compiler would normally forbid you from doing. It's there because it's sometimes necessary or expedient to do those things.
> a big "unsafe" dunce cap to shame you for being a naughty, lazy programmer
You just have to get over that. `unsafe` means "compiler cannot prove this to be safe." FFI is unsafe because the compiler can't see past it.
> Once you're doing interop with C you're just writing C with Rust syntax
Just like C++, or go, or anything else. You can choose to wrap it, but that's just indirection for no value imo. I honestly hate seeing C APIs wrapped with "high level" bindings in C++ for the same reason I hate seeing them in Rust. The docs/errors/usage are all in terms of the C API and in my code I want to see something that matches the docs, so it should be "C in syntax of $language".
> The X11 diehards will go the way of the SysV-init diehards; some weird minority
I upvoted your general response but this line was uncalled for. No need to muddy the waters about X11 -> Wayland with the relentlessly debated, interminable, infernal init system comparison.
> Literally no user cares what language a project is implemented in
I think this is true but also maybe not true at the same time.
For one thing, programming languages definitely come with their own ecosystems and practices that are common.
Sometimes, programming languages can be applied in ways that basically break all of the "norms" and expectations of that programming language. You can absolutely build a bloated and slow C application, for example, so just using C doesn't make something minimal or fast. You can also write extremely reliable C code; sqlite is famously C after all, so it's clearly possible, it just requires a fairly large amount of discipline and technical effort.
Usually though, programs fall in line with the norms. Projects written in C are relatively minimal, have relatively fewer transitive dependencies, and are likely to contain some latent memory bugs. (You can dislike this conclusion, but if it really weren't true, there would've been a lot fewer avenues for rooting and jailbreaking phones and other devices.)
Humans are clearly really good at stereotyping, and pick up on stereotypes easily without instruction. Rust programs have a certain "feel" to them; this is not delusion IMO, it's likely a result of many things, like the behaviors of clap and anyhow/Rust error handling leaking through to the interface. Same with Go. Even with languages that don't have as much of a monoculture, like say Python or C, I think you can still find that there are clusters of stereotypes of sorts that can predict program behavior/error handling/interfaces surprisingly well, that likely line up with specific libraries/frameworks. It's totally possible to, for example, make a web page where there are zero directly visible artifacts of what frameworks or libraries were used to make it. Yet despite that, when people just naturally use those frameworks, there are little "tells" that you can pick up on a lot of the time. You ever get the feeling that you can "tell" some application uses Angular, or React? I know I have, and what stuns me is that I am usually right (not always; stereotypes are still only stereotypes, after all.)
So I think that's one major component of why people care about the programming language that something is implemented in, but there's also a few others:
- Resources required to compile it. Rust is famously very heavy in this regard; compile times are relatively slow. Some of this will be overcome with optimization, but it still stands to reason that the act of compiling Rust code itself is very computationally expensive compared to something as simple as C.
- Operational familiarity. This doesn't come into play too often, but it does come into play. You have to set a certain environment variable (RUST_BACKTRACE) to get Rust to output full backtraces, for example. I don't think it is part of Rust itself, but the RUST_LOG environment variable is used by multiple libraries in the ecosystem.
- Ease of patching. Patching software written in Go or Python, I'd argue, is relatively easy. Rust, definitely can be a bit harder. Changes that might be possible to shoehorn in in other languages might be harder to do in Rust without more significant refactoring.
- Size of the resulting programs. Rust and Go both statically link almost all dependencies, and don't offer a stable ABI for dynamic linking, so each individual Rust binary will contain copies of all of their dependencies, even if those dependencies are common across a lot of Rust binaries on your system. Ignoring all else, this alone makes Rust binaries a lot larger than they could be. But outside of that, I think Rust winds up generating a lot of code, too; trying to trim down a Rust wasm binary tells you that the size cost of code that might panic is surprisingly high.
So I think it's not 100% true to say that people don't care about this at all, or that only people who are bored and like to argue on forums ever care. (Although admittedly, I just spent a fairly long time typing this to argue about it on a forum, so maybe it really is true.)
Why does Wayland "feel like the future?" It feels like a regression to me and a lot of other people who have run into serious usability problems.
At best, it seems like a huge diversion of time and resources, given that we already had a working GUI. (Maybe that was the intention.) The arguments for it have boiled down to "yuck, code older than me" from supposed professionals employed by commercial Linux vendors to support the system, and that X doesn't have Android-like separation — a feature no one really wants.
The mantra of "it's a protocol" isn't very comforting when it lacks so many features that necessitate workarounds, leading to fragmentation and general incompatibility. There are plenty of complicated, bad protocols. The ones that survive are inherently "simple" (e.g., SMTP) or "trivial" (e.g., TFTP). Maybe there will be a successor to Wayland that will be the SMTP to its X400, but to me, Wayland seems like a past compromise (almost 16 years of development) rather than a future.
Wayland supports HDR, it's very easy to configure VRR, and its fractional scaling (if implemented properly) is far superior to anything X11 can offer.
Furthermore, all of these options can be enabled individually on multiple screens on the same system and still offer a good mixed-use environment. As someone who has been using HiDPI displays on Linux for the past 7 years, Wayland was such a game changer for how my system works.
We’re accustomed to "the future" connoting progress and improvement. Unfortunately, it isn’t always so (no matter how heavily implied). Just that it’s literally expected to be the future state of matters.
I've been on and off Linux desktops since the advent of Wayland. Unsure of the actual issues people run into at this point outside of very niche workflows or applications, for which there are X11 fallbacks.
Also, by "commercial Linux vendors", you do realize Wayland is directly supported (afaik, correct me if wrong) by the largest commercial Linux contributors, Red Hat and Canonical. They're not simply 'vendors'.
> Unsure of the actual issues people run into at this point outside of very niche workflows or applications, for which there are X11 fallbacks.
I don't know if others have experienced this but the biggest bug I see in Wayland right now is sometimes on an external monitor after waking the computer, a full-screen electron window will crash the display (ie the display disconnects).
I can usually fix this by switching to another desktop and then logging out and logging back in.
Such a strange bug because it only affects my external monitor and only affects electron apps (I notice it with VSCode the most but that's just cause I have it running virtually 24/7)
If anyone has encountered this issue and figured out a solution i am all ears.
This is probably worth reporting. I don't think I've ever heard of or run into something like that before. Most issues I ran into during the early rollout of Wayland desktop environments were broken or missing functionality in existing apps.
This argument is actually backwards: one of the goals of the wayland project is to draw development away from X. If wayland didn't exist, people would have worked on X11 a lot more.
This question sounds to me like you suspect some outright evil getting projected here. That would go too far. The wayland project tried to get the support of X developers early so that they could become a sort of "blessed" X successor early on. Plenty of earlier replacement attempts have failed because they couldn't get bigger community support, so this had to be part of a successful strategy. Any detrimental effects on X from that move were never a direct goal, as far as I am aware, just a consequence.
Yes, I do interpret your “draw development away from X” as suggesting an attempt to damage X (sorry if I misinterpreted your post, but I do think my interpretation was not really that unreasonable).
This “blessed successor” without any detrimental effects as a main goal: that’s pretty close to my understanding of the project. IIRC some X people were involved from the beginning, right?
This isn't quite right? Wayland was literally created by an X11 developer who got two more main X11 developers in. It's a second system, not a competitor as such.
Wanting developers to switch projects doesn't have to be malicious; in fact, personally I doubt there were any bad intentions at play, the developers of Wayland most likely think they're doing the right thing.
That’s a fork, which is fine. But for example, users from most mainstream distros will have to compile it themselves.
I guess we’ll see if that development is ever applied to the main branch, or if it supplants the main X branch. At the moment, though… if that’s the future of X, then it is fair to be a little bit unsure if it is going to stick, right?
That seems pretty interesting. I guess it relies on BSD plumbing though?
Funnily enough, my first foray into these sorts of operating systems was BSD, but it was right when I was getting started. So I don’t really know which of my troubles were caused by BSD being tricky (few, probably), and which were caused by my incompetence at the time (most, probably). One of these days I’ll try it again…
Even if you dislike Wayland, forwards-going development is clearly centred around it.
Development of X11 has largely ended and the major desktop environments and several mainstream Linux distributions are likewise ending support for it. There is one effort I know of to revive and modernize X11 but it’s both controversial and also highly niche.
You don’t have to like the future for it to be the future.
It's mostly coz nobody really wants to improve X11. I don't think there are many Wayland features that would be impossible to implement in X11; it's just that nobody wants to dig into a crusty codebase to do it.
And sadly wayland decided to just not learn any lessons from X11 and it shows.
What do you mean nobody wants to improve X11? There were developers with dozens of open merge requests with numerous improvements to X11 that were being actively ignored/held back by IBM/Red Hat because they wanted Wayland, their corporate project, to succeed instead.
Reviewing PRs and merging them requires great effort, especially in case of a non-trivial behemoth like X. Surely if all these merge requests were of huge value, someone could have forked the project and be very happy with all the changes, right?
Not having enough maintainers, and some design issues that can't be solved are both reasons why X was left largely unmaintained.
> Surely if all these merge requests were of huge value
There were a lot of MRs with valuable changes however Red Hat wanted certain features to be exclusive to Wayland to make the alternative more appealing to people so they actively blocked these MRs from progressing.
> someone could have forked the project and be very happy with all the changes, right?
That's precisely what happened: one of the biggest contributors and maintainers was bullied out of the project by Red Hat for trying to make X11 work and decided to create X11Libre (https://github.com/X11Libre/xserver), which is now getting all these fancy features that previously were not possible to get into X11 due to Red Hat actively sabotaging the project in their attempt to turn Linux into their own corporate equivalent of Windows/macOS.
It's a downgrade that we have no choice but to accept in order to continue using our machines. Anyone familiar with Microsoft or Apple already knows that's the future.
- Having a single X server that almost everyone used led to ossification. Having Wayland explicitly be only a protocol is helping to avoid that, though it comes with its own growing pains.
- Wayland-the-Protocol (sounds like a Sonic the Hedgehog character when you say it like that) is not free of cruft, but it has been forward-thinking. It's compositor-centric, unlike X11 which predates desktop compositing; that alone allows a lot of clean-up. It approaches features like DPI scaling, refresh rates, multi-head, and HDR from first principles. Native Wayland enables a much better laptop docking experience.
- Linux desktop security and privacy absolutely sucks, and X.org is part of that. I don't think there is a meaningful future in running all applications in their own nested X servers, but I also believe that trying to refactor X.org to shoehorn in namespaces is not worth the effort. Wayland goes pretty radical in the direction of isolating clients, but I think it is a good start.
I think a ton of the growing pains with Wayland come from just how radical the design really is. For example, there is deliberately no global coordinate space. Windows don't even know where they are on screen. When you drag a window, it doesn't know where it's going, how much it's moving, anything. There isn't even a coordinate space to express global positions, from a protocol PoV. This is crazy. Pretty much no other desktop windowing system works this way.
I'm not even bothered that people are skeptical that this could even work; it would be weird to not be. But what's really crazy, is that it does work. I'm using it right now. It doesn't only work, but it works very well, for all of the applications I use. If anything, KDE has never felt less buggy than it does now, nor has it ever felt more integrated than it does now. I basically have no problems at all with the current status quo, and it has greatly improved my experience as someone who likes to dock my laptop.
But you do raise a point:
> It feels like a regression to me and a lot of other people who have run into serious usability problems.
The real major downside of Wayland development is that it takes forever. It's design-by-committee. The results are actually pretty good (My go-to example is the color management protocol, which is probably one of the most solid color management APIs so far) but it really does take forever (My go-to example is the color management protocol, which took about 5 years from MR opening to merging.)
The developers of software like KiCad don't want to deal with this, they would greatly prefer if software just continued to work how it always did. And to be fair, for the most part XWayland should give this to you. (In KDE, XWayland can do almost everything it always could, including screen capture and controlling the mouse if you allow it to.) XWayland is not deprecated and not planned to be.
However, the Wayland developers have taken the stance of not just implementing raw tools that can be used to build various UI features, but instead implementing protocols for those specific UI features.
An example is how dragging a window works in Wayland: when a user clicks or interacts with a draggable client area, all the client does is signal that this happened, and the compositor takes over from there and initiates the drag.
Another example would be how detachable tabs in Chrome work in Wayland: it uses a slightly augmented invocation of the drag'n'drop protocol that lets you attach a window drag to it as well. I think it's a pretty elegant solution.
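To sketch the window-drag flow mentioned above with stub types (this is only the shape of the interaction, not wayland-client's or any toolkit's real API): the client's entire contribution is one "start an interactive move" request carrying the serial of the triggering input event; it never sees or sets coordinates.

    // Stand-ins for protocol objects; real code would go through a toolkit
    // or wayland-client, this only illustrates who does what.
    struct Seat;
    struct Toplevel;

    impl Toplevel {
        // Corresponds to the xdg_toplevel "move" request.
        fn request_interactive_move(&self, _seat: &Seat, serial: u32) {
            println!("asked the compositor to start a move (input serial {serial})");
            // From here on the compositor owns the drag; the client receives
            // no position updates, only a configure event when it is done.
        }
    }

    fn main() {
        let (seat, toplevel) = (Seat, Toplevel);
        // Pretend the user pressed a mouse button on the title bar and the
        // button event carried serial 42.
        let pressed_on_titlebar = true;
        if pressed_on_titlebar {
            toplevel.request_interactive_move(&seat, 42);
        }
    }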
But that's definitely where things are stuck. Some applications have UI features that they can't implement in Wayland. xdg-session-management for being able to save and restore window positions is still not merged, so there is no standard way to implement this in Wayland. ext-zones for positioning multi-window application windows relative to each other is still not merged, so there is no standard way to implement this in Wayland. Older techniques like directly embedding windows from other applications have some potential approaches: embedding a small Wayland compositor into an application seems to be one of the main approaches in large UI toolkits (sounds crazy, but Wayland compositors can be pretty small, so it's not as bad as it seems), whereas there is xdg-foreign, which is supported by many compositors (GNOME, KDE, Sway, but missing in Mir, Hyprland and Weston. Fragmentation!) but it doesn't support every possible thing you could do in X11 (like passing an xid to mpv to embed it in your application, for example.)
I don't think it's unreasonable that people are frustrated, especially about how long the progress can take sometimes, but when I read these MRs and see the resulting protocols, I can't exactly blame the developers of the protocols. It's a long and hard process for a reason, and screwing up a protocol is not a cheap mistake for the entire ecosystem.
But I don't think all of this time is wasted; I think Wayland will be easier to adapt and evolve into the future. Even if we wound up with a one-true-compositor situation, there'd be really no reason to entirely get rid of Wayland as a protocol for applications to speak. Wayland doesn't really need much to operate; as far as I know, pretty much just UNIX domain sockets and the driver infrastructure to implement a WSI for Vulkan/GL.
Do you know if global shortcuts are solved in a satisfactory way, and if there is an easy mechanism for one application to query Wayland about other applications?
One hack I made a while ago was to bind a win+t shortcut to a script that queried the active window in the current workspace and, based on a decision, opened a terminal at the right filesystem location, with a preferred terminal profile.
All I get from LLMs is that D-Bus might be involved in GNOME for global shortcuts, and that when registering global shortcuts in something like Hyprland, app ids must be passed along, instead of simple script paths.
> xdg-session-management for being able to save and restore window positions
> is still not merged, so there is no standard way to implement this in Wayland
For me, this is a real reason not to want to be forced to use Wayland. I'm sure the implementation of Wayland in xfce is a long time off, and the dropping of Xwindows even further off, so hopefully this problem will have been solved by then.
Thanks a lot for an actually constructive comment on Wayland! The information tends to be lost in all the hate.
I understand the frustration, but I see a lot of "it's completely useless" and "it's a regression", though to me it really sounds like Wayland is an improvement in terms of security. So there's that.
The fact that this post is downvoted into grayness while lazy hateful rants aren't shows just how rotten the HN community has gotten around open source these days :/
Yeah, I am a staunch proponent of "don't try to fix what is not broken". Current XFCE is fast, lightweight, usable and works fine without major issues. While I don't fully understand the advantages / disadvantages of XFCE using Wayland instead of X, if, as someone else pointed out here on HN, running XFCE on Wayland is going to make it slower, it means these developers will be crippling one of XFCE's strongest features. In that case the other minor advantages seem pointless to users like me.
> running XFCE on Wayland is going to make it slower
Citation needed. None of the other desktops have slowed with Wayland, and gaming is as fast, if not marginally faster, on KDE/Gnome with Wayland vs LXDE on X.
Long-time XFCE user here. We care that stuff works the same, we appreciate how much work it is to achieve that when the world is changing out from under you, and we appreciate that XFCE understands this and cares about it. Being in Rust is not a concern.
Afaik there exist only X11 and Wayland, and X11 is dying if not dead. As for Rust, I don't see why a desktop user would be concerned about the language used as long as it is good enough.
The move from kernel 2.4.x to 2.6.x was pretty painful. The absolute slog from 2.6 to 3.0 and a development model that at least somewhat resembles the model used today was exhausting.
In case you weren't there, the "even" kernels (e.g. 2.0, 2.2, 2.4, and 2.6) were the stable series while the "odd" kernels (e.g. 2.1, 2.3, 2.5) were the development series, the development model was absolutely mental and development moved at a glacial pace compared to today's breakneck speed.
The pre-git days were less than ideal. The BitKeeper years were... interesting, politically and philosophically speaking.
To me the most painful switch was Gnome 2 to Gnome 3. I still miss Gnome 2.
I left Gnome 3 for other WMs (eventually settled on cinnamon), but every once in a while I decided to give Gnome 3 a try, just to be disappointed again. I felt like those people in abusive romantic relationships that keep coming back and divorcing over and over again. "Oh, Gnome has really changed now, he won't beat me again this time!".
systemd was easy for me. Everything kept working through the transition, with the big advantage of not needing shell scripts to create services. Wayland... is slow, buggy, applications close without reason...
Fully-featured DEs like Gnome and KDE work a lot worse when doing everything in software rendering. If you're working on a device with subpar/nonexistent GPU driver support (i.e. Nvidia hardware for years on end), the experience is absolutely awful.
Nvidia's driver does something weird on Wayland when my laptop is connected to HDMI, probably something funky with the iGPU<->dGPU communication. Everything works, but at the whims of Nvidia an update can reduce the maximum FPS I can achieve over HDMI to about 30-45fps. Jittery and painful, even on a monitor that supposedly supports VRR.
That's not really Wayland's fault of course, but in the same way Linux is broken because Photoshop doesn't work on it, Wayland is broken for many users because their desktop is weird on it.
systemd was a problem for early adopters (e.g., Fedora). Distros like Debian joined the party later and, as a result, got things way more stable. I never had any systemd-related problem in Debian, while for Fedora (some years earlier) I had some bugs affecting my ability to work. They all seem to work very fine now. Things took a while to mature, but it just works now.
I've used Smithay's Rust client toolkit for a few months now. For making apps, it still sometimes has unsafe wrappers disguised as safe ones. It has a lot of internals wrapped in Arc<>, but in my tests the methods are not safe to call from different threads anyhow; you will get weird crashes if you do.
I will try to dive into how the Wayland API actually works, because I'd really like to know what not to do, when wrappers used 'wrong' can crash.
Could you expand on why you describe Hyprland and XFCE4 as "a cursed combination"? Might provide some insight as to why the official XFCE project decided to create their own compositor.
I see the words "feature parity". I hope those words are taken seriously. I feel like most Wayland advocates would do well to take those words seriously.
Rather than going fully protocol-based (like Waypipe), they used Weston to render to RDP. Using RDP's "remote apps" functionality, practically any platform can render the windows. I think it's a pretty clever solution, one perhaps even better than plain X11 forwarding (which breaks all kinds of things like GPU acceleration).
I don't know if anyone has messed with this enough to get it to work like plain old RemoteApps for macOS/BSD/Windows/Linux, but the technology itself is clearly ready for it.
Wayland works pretty well on FreeBSD and I know at least wlroots compositors work a bit on OpenBSD (though, I suspect anyone on OpenBSD would prefer to use their homegrown Xenocara). There are Wayland compositors for Mac, the youtuber Brodie Robertson did a good overview of them a few days ago
It depends on what you mean by send. Wayland doesn't have network transparency, there's a bit of a song and dance you have to do to get that working properly. I'm not sure the state of that or of Wayland compositors in general on Mac.
For xeyes that works. It is absolutely an inferior and chatty protocol for any other application though, like try to watch a youtube video in chrome through it.
X's network transparency was made at a time when we drew two lines as a UI, and for that it works very well. But today even your Todo app has a bunch of icons that are just bitmaps to X, and we can transfer those via much better means (which should probably not be baked into a display protocol).
I think Wayland made the correct decision here. Just be a display protocol that knows about buffers and that's it.
User space can then just transport buffers in any way it sees fit.
Also, another interesting note: the original X network transparency's modern analogue might very well be the web, if you squint at it. And quite a few programs just expose a localhost port to avoid the "native GUI" issue wholesale.
You need waypipe installed on both machines. For the Mac, I guess you'll need something like cocoa-way (https://github.com/J-x-Z/cocoa-way). Some local Wayland compositor, anyway.
If Wayland support was there already I would be using xfce. I truly admire it, it's great to see this happening, and I hope the project continues at great speed. With DEs requiring hard systemd support, I would rather have something like xfce.
I started off using twm / olwm / vtwm in 1991. Then FVWM and Afterstep / WindowMaker. I've been using XFCE since around 2007. As long as it functions similarly I'll be happy.
I resisted Wayland for a long time, but I'm sold now that I see how well it does on old hardware.
I have an old Thinkpad. Firefox on X is slow and scrolls poorly. On wayland, the scrolling is remarkably smooth for 10 y/o hardware, and the addition of touchpad gestures is very nice. Yes, there's more configuration overhead for each compositor, but I'm now accepting this trade.
Not the whole codebase, only the window manager (compositor is the Wayland equivalent). Other components are written in C and will remain so for the foreseeable future. Those components gained Wayland support in the last couple of years, you can try Xfce in a labwc session, there are of course several things to improve, but the compositor is the last big piece missing.
As someone who is sensitive to displays, one of the best features of XFCE, unlike other desktops, is that it doesn't cause eye strain, probably because it doesn't play tricks - a pixel at a certain color is stable, not dithered (if you so choose), and higher-level primitives are also stable and don't play time/frequency-based games.
I hope XFCE preserves this, it is a killer feature in today's world.
I suspect many of us still using X, are xfce users waiting for an alternative; I've heard very mixed things about current Fedora xfce wayland setups from different people.
Wow, this is annoying. I really like Xfce, but there are plenty of minor things which would need improvements. Instead of fixing all these minor things, they waste a lot of their donations on a rewrite for Wayland / Rust - apparently for exactly the same reason as all the other Wayland stuff and Rust reworks. Developers like writing new code more than actually maintaining / improving / fixing existing things, and find excuses to do this.
It was originally named XFce after the XForms library. As of Xfce 3, it uses GTK though, so it could be called GTKce, but renaming the project every time you change widget toolkits is probably not a good idea.
If they do not mind introducing C++ (they're introducing Rust so i guess multilanguage development isn't out of the question) then FLTK could be an option, though it'd probably need to improve its theming support.
They both have kinda similar roots in that XFCE originally used XForms which was an open source replacement of the SGI Forms library while FLTK also started as a somewhat compatible/inspired opensource replacement of SGI Forms in C++.
If they ever move away from GTK (due to the GNOME shenanigans GNOME-izing GTK) I wish Enlightenment and Xfce were together a single thing. But that's if I could ask the Tux genie for three wishes.
GTK4 is still pretty usable without libadwaita and all its Gnome-isms.
But frankly I think forking and maintaining GTK3 is preferable to moving to EFL or Qt. GIMP is still on GTK3. MATE is still on GTK3. Inkscape is still on GTK3 (but GTK4 work is in progress). Evolution is still on GTK3.
I don’t see much Wayland hype. It’s boring plumbing for most people, isn’t it? Most of us are just going along with whatever the volunteer plumbing community decided to put together.
If the XLibre project appears to be making enough fairly-consistent progress for you to be comfortable tossing around some cash, then do gather up some likeminded folks to hire a dev to follow the guidance here [0] and help out!
Do note that I've never tried to crowdfund a programmer, but that's something that I have to believe is possible to do.
Are you willing to write accessibility support for the new xfce only wayland compositor? How will you get every other wayland compositor to support your non-'wayland core' accessibility extension?
People like to frame things like the waylands are some sort of default and nothing is being lost and no one is being excluded.
Some cognitive dissonance going on here. The vast majority of current Linux Desktop users are on Wayland, and X11 is phased out across the board. Calling it hype is absurd.
I'm trying to build a Linux desktop and the first thing I got stuck on is X11 versus Wayland for greetd. The next thing I got stuck on is that XFCE4 doesn't exist for Wayland. What the shit. If you want to tell me Wayland is the future, fine. Sure. Great. It's been 11 years!
If you're just trying to run Linux you're better off either using one of the many ready-made distributions or going with X11, since that works just about everywhere and has done so for decades.
>The goal is, that xfwl4 will offer the same functionality and behavior as xfwm4 does...
I wonder how strictly they interpret behavior here given the architectural divergence?
As an example, focus-stealing prevention. In xfwm4 (and x11 generally), this requires complex heuristics and timestamp checks because x11 clients are powerful and can aggressively grab focus. In wayland, the compositor is the sole arbiter of focus, hence clients can't steal it, they can only request it via xdg-activation. Porting the legacy x11 logic involves the challenge of actually designing a new policy that feels like the old heuristic but operates on wayland's strict authority model.
This leads to my main curiosity regarding the raw responsiveness of xfce. On potato hardware, xfwm4 often feels snappy because it can run as a distinct stacking window manager with the compositor disabled. Wayland, by definition forces compositing. While I am not concerned about rust vs C latency (since smithay compiles to machine code without a GC), I am curious about the mandatory compositing overhead. Can the compositor replicate the input-to-pixel latency of uncomposited x11 on low-end devices or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
At least they are honest regarding the reasons, not a wall of text to justify what bails down to "because I like it".
Naturally these kinds of having a language island create some attrition regarding build tooling, integration with existing ecosystem and who is able to contribute to what.
So lets see how it evolves, even with my C bashing, I was a much happier XFCE user than with GNOME and GJS all over the place.
You know that all the Wayland primitives, event handling and drawing in gnome-shell are handled in C/native code through Mutter, right ? The JavaScript in gnome-shell is the cherry on top for scripting, similar to C#/Lua (or any GCed language) in game engines, elisp in Emacs, event JS in QtQuick/QML.
It is not the performance bottleneck people seem to believe.
I can dig out the old GNOME tickets and related blog posts...
Implementation matters, including proper use of JIT/AOT toolchains.
One thing to keep in mind is that composition does not mean you have to do it with vsync, you can just refresh the screen the moment a client tells you the window has new contents.
> Can the compositor replicate the input-to-pixel latency of uncomposited x11 on low-end devices or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
I think this is ultimately correct. The compositor will have to render a frame at some point after the VBlank signal, and it will need to render with it the buffers on-screen as of that point, which will be from whatever was last rendered to them.
This can be somewhat alleviated, though. Both KDE and GNOME have been getting progressively more aggressive about "unredirecting" surfaces into hardware accelerated DRM planes in more circumstances. In this situation, the unredirected planes will not suffer compositing latency, as their buffers will be scanned out by the GPU at scanout time with the rest of the composited result. In modern Wayland, this is accomplished via both underlays and overlays.
There is also a slight penalty to the latency of mouse cursor movement that is imparted by using atomic DRM commits. Since using atomic DRM is very common in modern Wayland, it is normal for the cursor to have at least a fraction of a frame of added latency (depending on many factors.)
I'm of two minds about this. One, obviously it's sad. The old hardware worked perfectly and never had latency issues like this. Could it be possible to implement Wayland without full compositing? Maybe, actually. But I don't expect anyone to try, because let's face it, people have simply accepted that we now live with slightly more latency on the desktop. But then again, "old" hardware is now hardware that can more often than not, handle high refresh rates pretty well on desktop. An on-average increase of half a frame of latency is pretty bad with 60 Hz: it's, what, 8.3ms? But half a frame at 144 Hz is much less at somewhere around 3.5ms of added latency, which I think is more acceptable. Combined with aggressive underlay/overlay usage and dynamic triple buffering, I think this makes the compositing experience an acceptable tradeoff.
What about computers that really can't handle something like 144 Hz or higher output? Well, tough call. I mean, I have some fairly old computers that can definitely handle at least 100 Hz very well on desktop. I'm talking Pentium 4 machines with old GeForce cards. Linux is certainly happy to go older (though the baseline has been inching up there; I think you need at least Pentium now?) but I do think there is a point where you cross a line where asking for things to work well is just too much. At that point, it's not a matter of asking developers to not waste resources for no reason, but asking them to optimize not just for reasonably recent machines but also to optimize for machines from 30 years ago. At a certain point it does feel like we have to let it go, not because the computers are necessarily completely obsolete, but because the range of machines to support is too wide.
Obviously, though, simply going for higher refresh rates can't fix everything. Plenty of laptops have screens that can't go above 60 Hz, and they are forever stuck with a few extra milliseconds of latency when using a compositor. It's not ideal, but what are you going to do? Compositors offer many advantages; it seems straightforward to design for a future where they are always on.
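For what it's worth, the half-frame figures above are just arithmetic; here's a tiny sketch, assuming the added compositing delay really does average out to half of one refresh interval:

```rust
// Average added compositing latency, under the assumption that it averages
// out to half of one refresh interval.
fn avg_added_latency_ms(refresh_hz: f64) -> f64 {
    0.5 * 1000.0 / refresh_hz
}

fn main() {
    println!("60 Hz:  ~{:.1} ms", avg_added_latency_ms(60.0));  // ~8.3 ms
    println!("144 Hz: ~{:.1} ms", avg_added_latency_ms(144.0)); // ~3.5 ms
}
```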
Love your post. So, don’t take this as disagreement.
I’m always a little bewildered by frame rate discussions. Yes, I understand that more is better, but for non-gaming apps (e.g. “productivity” apps), do we really need much more than 60 Hz? Yes, you can get smoother fast scrolling with higher frame rate at 120 Hz or more, but how many people were complaining about that over the last decade?
I enjoy working on my computer more at 144Hz than 60Hz. Even on my phone, the switch from 60Hz to a higher frame rate is quite obvious. It makes the entire system feel more responsive and less glitchy. VRR also helps a lot in cases where the system is under load.
60Hz is actually a downgrade from what people were used to. Sure, games and such struggled to get that kind of performance, but CRT screens did 75Hz/85Hz/100Hz quite well (perhaps at lower resolutions, because full-res 1200p sometimes made text difficult to read on a 21 inch CRT, with little benefit from the added smoothness as CRTs have a natural fuzzy edge around their straight lines anyway).
There's nothing about programming or word processing that requires more than maybe 5 or 6 fps (very few people type more than 300 characters per minute anyway) but I feel much better working on a 60 fps screen than I do a 30 fps one.
Everyone has different preferences, though. You can extend your laptop's battery life by quite a bit by reducing the refresh rate to 30Hz. If you're someone who doesn't really mind the frame rate of their computer, it may be worth trying!
CRT screens did 75Hz/85Hz/100Hz quite well, but rendered only one pixel/dot at a time. This is in no way equivalent to 60Hz on a flat panel!
> how many people were complaining about that over the last decade?
Quite a few. These articles tend to make the rounds when it comes up: https://danluu.com/input-lag/ and https://lwn.net/Articles/751763/. Perception varies from person to person, but going from my 144 Hz monitor to my old 60 Hz work laptop is so noticeable to me that I switched it from a composited Wayland DE to an X11 WM.
Input lag is not the same as refresh rate. 60 Hz is 16.7 ms per frame. If it takes a long time for input to appear on screen it’s because of the layers and layers of bloat we have in our UI systems.
Essentially, the only reason to go over 60 Hz for desktop is for a better "feel" and for lower latency. Compositing latency is mainly centered around frames, so the most obvious and simplest way to lower that latency is to shorten how long a frame is, hence higher frame rates.
However, I do think that high refresh rates feel very nice to use even if they are not strictly necessary. I consider it a nice luxury.
Fair
If our mouse cursors are going to have half a frame of latency, I guess we will need 60Hz or 120Hz desktops, or whatever.
I dunno. It does seem a bit odd, because who was thinking about the framerates of, like, desktops running productivity software, for the last couple decades? I guess I assumed this would never be a problem.
Mouse cursor latency and window compositing latency are two separate things. I probably did not do a good enough job conveying this. In a typical Linux setup, the mouse cursor gets its own DRM plane, so it will be rendered on top of the desktop during scanout right as the video output goes to the screen.
There are two things that typically impact mouse cursor latency, especially with regards to Wayland:
- Software-rendering, which is sometimes used if hardware cursors are unavailable or buggy for driver/GPU reasons. In this case the cursor will be rendered onto the composited desktop frame and thus suffer compositor latency, which is tied to refresh rate.
- Atomic DRM commits. Using atomic DRM commits, even hardware-rendered cursors can suffer additional latency. In this case, the added latency is not necessarily tied to frame times or refresh rates. Instead, it's tied to when during the refresh cycle the atomic commit is sent; specifically, how close to the deadline. I think in most cases we're talking a couple milliseconds of latency. It has been measured before, but I cannot find the source.
Wayland compositors tend to use atomic DRM commits, hence a slightly more laggy mouse cursor. I honestly couldn't tell you if there is a specific reason why they must use atomic DRM, because I don't have knowledge that runs that deep, only that they seem to.
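To make the "how close to the deadline" point concrete, here is a rough model (my own assumption about how the timing works, not a measurement): with atomic commits the cursor plane only changes at the next vblank, so the added delay is roughly the time left in the refresh cycle when the commit is sent.

```rust
// Rough model: a cursor update committed at `update_ms` into the refresh
// cycle does not take effect until the next vblank, so the extra delay is
// the time remaining in the cycle. Numbers are illustrative only.
fn added_cursor_delay_ms(update_ms: f64, refresh_hz: f64) -> f64 {
    let frame_ms = 1000.0 / refresh_hz;
    let phase = update_ms % frame_ms; // position within the refresh cycle
    frame_ms - phase
}

fn main() {
    // Committing right after a vblank waits almost a full frame;
    // committing just before the deadline waits almost nothing.
    println!("{:.1} ms", added_cursor_delay_ms(0.5, 60.0));  // ~16.2 ms
    println!("{:.1} ms", added_cursor_delay_ms(16.0, 60.0)); // ~0.7 ms
}
```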
Mouse being jumpy shouldn’t be related to refresh rate. The mouse driver and windowing system should keep track of the mouse position regardless of the video frame rate. Yes, the mouse may jump more per frame with a lower frame rate, but that should only be happening when you move the mouse a long distance quickly. Typically, when you do that, you’re not looking at the mouse itself but at the target. Then, once you’re near it, you slow down the movement and use fine motor skills to move it onto the target. That’s typically much slower and frame rate won’t matter much because the motion is so much smaller.
I agree. Keyboard-action-to-result-on-screen latency is much more important, and we are typically way above 17 ms for that.
Yep, agreed, though it’s not just keyboard to screen. It’s also mouse click to screen. Really, any event to screen.
I couldn't find ready stats on what percentage of displays are 60 Hz, but outside of gaming and high-end machines I suspect 60 Hz is still the majority of machines used by actual users, meaning we should evaluate the latency as it is observed by most users.
Xfce / xfwm4 doesn't offer focus stealing prevention.
Settings -> Window Manager Tweaks -> Focus -> Activate focus stealing prevention
https://gitlab.xfce.org/xfce/xfwm4/-/blob/master/settings-di...
> ...or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
I think I know what "frame perfect" means, and I'm pretty sure that you've been able to get that for ages on X11... at least with AMD/ATi hardware. Enable (or have your distro enable) the TearFree option, and there you go.
I read somewhere that TearFree is triple buffering, so -if true- it's my (perhaps mistaken) understanding that this adds a frame of latency.
> I read somewhere that TearFree is triple buffering, so -if true- it's my (perhaps mistaken) understanding that this adds a frame of latency.
True triple buffering doesn't add one frame of latency, but since it enforces only whole frames be sent to the display instead of tearing, it can cause partial frames of latency. (It's hard to come up with a well-defined measure of frame latency when tearing is allowed.)
But there have been many systems that abused the term "triple buffering" to refer to a three-frame queue, which always does add unnecessary latency, making it almost always the wrong choice for interactive systems.
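A toy timeline of that distinction (assumed numbers, not how any real driver is written): true triple buffering scans out the newest finished frame, while a three-deep queue sends frames out strictly in order, so a backlog shows older ones.

```rust
// Toy model only: three frames finish rendering at the times below and one
// vblank arrives at 16.7 ms. Which frame gets scanned out, and how old is it?
fn main() {
    let render_done = [2.0_f64, 7.0, 12.0];
    let vblank = 16.7_f64;

    // True triple buffering: show the newest frame finished before vblank,
    // quietly dropping the older ones.
    let newest = render_done.iter().copied().fold(f64::MIN, f64::max);
    println!("triple buffering: frame is {:.1} ms old at scanout", vblank - newest);

    // Three-frame queue: frames go out strictly in submission order, so a
    // backlog means showing the oldest one and queuing whole frames of delay.
    let oldest = render_done.iter().copied().fold(f64::MAX, f64::min);
    println!("3-frame queue:    frame is {:.1} ms old at scanout", vblank - oldest);
}
```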
Only on the primary display. Once you had more than one display, there were only workarounds.
> Can the compositor replicate the input-to-pixel latency of uncomposited x11 on low-end devices or is that a class of performance we just have to sacrifice for the frame-perfect rendering of wayland?
well, the answer is just no, wayland has been consistently slower than X11 and nothing running on top can really get around that
Can you cite any sources for that claim? I found this blog post that says wayland is pretty much on par with X11 except for XWayland, which should be considered a band-aid only anyways: https://davidjusto.com/articles/m2p-latency/
Here's one article: https://mort.coffee/home/wayland-input-latency/
It's specifically about cursor lag, but I think that's because it's more difficult to experimentally measure app rendering latency.
> wayland has been consistently slower than X11
Wayland is a specification; a specification can't be "faster" or "slower" than other options. That's like saying JSON is 5% slower than Word.
And as for the implementations being slower than X, that also doesn't reflect reality.
https://www.phoronix.com/review/ubuntu-2504-x11-gaming
There is no Wayland to run on top of, as it's a standard to implement rather than a server to talk to.
In my view, this project itself shows some of the reasons why Wayland is the right path forward.
On X, we had Xorg and that is it. But at least Xorg did a lot of the work for you.
On Wayland, you in theory have to do a lot more of the work yourself when you build a compositor. But what we are seeing is libraries emerge that do this for you (wlroots, Smithay, Louvre, aquamarine, SWC, etc). So we have this one man project expecting to deliver a dev release in just a few months (mid-2026 is 4 months from now).
But it is not just that we have addressed the Wayland objection. This project was able to evaluate alternatives and decide that Smithay is the best fit, both for features and for language choice. As time goes on, we will see more implementations that will compete with each other on quality and features. This will drive the entire ecosystem forward. That is how Open Source is supposed to work.
Because Wayland only does essential low-level stuff such as display and graphics, it forced people to start coming up with a common Linux desktop (programming) interface out of nowhere to basically glue everything together and make programs at least interoperate.
Such an effort to rethink the Linux desktop could have been a major project on its own, but because having something was necessitated by Wayland, all of it has become hurried and lacking coordination. Anything reminiscent of a bigger and more comprehensive project is in its initial stages at best. If Wayland has been coming along for about ten years now, I'll give it another ten years until we have some kind of established, consistent desktop API for Linux again.
X11 did offer some very basic features for a desktop environment so that programs using different toolkits could work together, and enough hooks that you could implement stuff in window managers etc. Yet there was nothing like the more complete interfaces of the desktops of other operating systems that tied everything together in a central, consistent way. So the Linux desktop interface was certainly in need of a rewrite, but the way it's happening is just disheartening.
Nobody has a user-space stick big enough to force things in the Linux world.
When Apple dropped the old audio APIs of classic macOS and introduced CoreAudio, they pissed off a lot of developers, but those developers had no choice. In the GUI realm, they only deprecated HIKit for a decade or two before removing it (if they've even done that), but they made it very clear that CoreFoo was the API you should be using and that was that.
In Linux-land, nobody has that authority. Nobody can come up with an equivalent to Core* for Linux and enforce its use. Consequently, you're going to continue to see the Qt/GTK/* splits, where the only commonality is at the lowest level of the window system (though, to Qt's credit, optionally also the event loop).
GNOME has enough weight to at least force most projects to accommodate them. But unfortunately this has mostly been for the worst, as GNOME is usually the odd one out with most matters of taste and design.
I hope that XFCE remains a solid lightweight desktop option. I've become a huge fan of KDE over the past couple of years, but it certainly isn't what you would consider lightweight or minimal.
Personally, I'm a big proponent of Wayland and not a big Rust detractor, so I don't see any problem with this. I do, however, wonder how many long-time XFCE fans and the folks who donated the money funding this will feel about it. To me the reasoning is solid: Wayland appears to be the future, and Rust is a good way to help avoid many compositor crashes, which are a more severe issue in Wayland (though they don't necessarily need to be fatal, FWIW.) Still I perceive a lot of XFCE's userbase to be more "traditional" and conservative about technologies, and likely to be skeptical of both Wayland and Rust, seeing them as complex, bloated, and unnecessary.
Of course, if they made the right choice, it should be apparent in relatively short order, so I wish them luck.
> Still I perceive a lot of XFCE's userbase to be more "traditional" and conservative about technologies, and likely to be skeptical of both Wayland and Rust, seeing them as complex, bloated, and unnecessary.
Very long time (since 2007) XFCE user here. I don't think this is accurate. We want things to "just work" and not change for no good reason. Literally no user cares what language a project is implemented in, unless they are bored and enjoy arguing about random junk on some web forum. Wayland has the momentum behind it, and while there will be some justified grumbling because change is always annoying, the transition will happen and will be fairly painless as native support for it continues to grow. The X11 diehards will go the way of the SysV-init diehards; some weird minority that likes to scream about the good old days on web forums but really no one cares about.
There are good reasons to switch to Wayland, and I trust the XFCE team to handle the transition well. Great news from the XFCE team here, I'm excited for them to pull this off.
I used XFCE for a long time and I very much agree. It just works, and is lightweight. I use KDE these days but XFCE would be my second choice.
> The X11 diehards will go the way of the SysV-init diehard
I hope you are not conflating anti-systemd people with SysV init diehards? As far as I can see, very few people want to keep SysV init, but there are lots who think systemd is the wrong replacement, and that primarily because it's a lot more than an init system.
In many ways the objections are opposite. People hate systemd for being more than init, people hate Wayland for doing less than X.
Edit: corrected "Wayland" to "XFCE" in first sentence!
It is refreshing to see somebody else notice that the complaints about systemd and Wayland are philosophically incompatible.
Systemd is creating the same kind of monolith monoculture that Xorg represented. Wayland is far more modular.
Regardless of your engineering preferences, rejecting change is the main reason to object to both.
If Rust has one weakness right now, it's bindings to system and hardware libraries. There's a massive barrier in Rust communicating with the outside ecosystem that's written in C. The definitive choice to use Rust and an existing Wayland abstraction library narrows their options down to either creating bindings of their own, or using smithay, the brand new Rust/Wayland library written for the Cosmic desktop compositor. I won't go into details, but Cosmic is still very much in beta.
It would have been much easier and cost-effective to use wlroots, which has a solid base and has ironed out a lot of problems. On the other hand, Cosmic devs are actively working on it, and I can see it getting better gradually, so you get some indirect manpower for free.
I applaud the choice to not make another core Wayland implementation. We now have Gnome, Plasma, wlroots, weston, and smithay as completely separate entities. Dealing with low-level graphics is an extremely difficult topic, and every implementor encounters the same problems and has to come up with independent solutions. There's so much duplicated effort. I don't think people getting into it realize how deceptively complex and how many edge-cases low-level graphics entails.
There really isn't a "massive barrier" to FFI. Autogenerate the C bindings and you're done. You don't have to wrap it in a safe abstraction, and imo you shouldn't.
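For the avoidance of doubt about what "autogenerate and call it" looks like, here is roughly the smallest possible raw binding, against libc's strlen; in practice you would let bindgen emit declarations like this for you. (The `unsafe extern` spelling is the current edition's syntax; older editions write plain `extern "C"`.)

```rust
use std::ffi::CString;
use std::os::raw::c_char;

// A raw, unwrapped declaration of a C symbol; bindgen generates thousands of
// these from a header.
unsafe extern "C" {
    fn strlen(s: *const c_char) -> usize;
}

fn main() {
    let s = CString::new("hello wayland").unwrap();
    // `unsafe` only records that the compiler cannot check the C side; the
    // call itself is ordinary Rust.
    let len = unsafe { strlen(s.as_ptr()) };
    println!("{len}");
}
```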
This. It is somewhat disheartening to hear the whole interop-with-C with Rust being an insurmountable problem. Keeping the whole “it’s funded by the Government/Google etc” nonsense aside: I personally wish that at least a feeble attempt would be made to actually use the FFI capabilities that Rust and its ecosystem has before folks form an opinion. Personally - and I’m not ashamed to state that I’m an early adopter of the language - it’s very good. Please consider that the Linux kernel project, Google, Microsoft etc went down the Rust path not on a whim but after careful analysis of the pros and cons. The pros won out.
> This. It is somewhat disheartening to hear the whole interop-with-C with Rust being an insurmountable problem.
I have done it and it left a bad taste in my mouth. Once you're doing interop with C you're just writing C with Rust syntax topped off with a big "unsafe" dunce cap to shame you for being a naughty, lazy programmer. It's unergonomic and you lose the differentiating features of Rust. Writing safe bindings is painful, and using community written ones tends to pull in dozens of dependencies. If you're interfacing a C library and want some extra features there are many languages that care far more about the developer experience than Rust.
> a big "unsafe" dunce cap to shame you for being a naughty, lazy programmer
That's bizarrely emotional. It's a language feature that allows you to do things the compiler would normally forbid you from doing. It's there because it's sometimes necessary or expedient to do those things.
> a big "unsafe" dunce cap to shame you for being a naughty, lazy programmer
You just have to get over that. `unsafe` means "compiler cannot prove this to be safe." FFI is unsafe because the compiler can't see past it.
> Once you're doing interop with C you're just writing C with Rust syntax
Just like C++, or Go, or anything else. You can choose to wrap it, but that's just indirection for no value imo. I honestly hate seeing C APIs wrapped with "high level" bindings in C++ for the same reason I hate seeing them in Rust. The docs/errors/usage are all in terms of the C API and in my code I want to see something that matches the docs, so it should be "C in the syntax of $language".
> The X11 diehards will go the way of the SysV-init diehards; some weird minority
I upvoted your general response but this line was uncalled for. No need to muddy the waters about X11 -> Wayland with the relentlessly debated, interminable, infernal init system comparison.
Just wait for systemd-wayland.
Systemd does not have to force Wayland as it is already going the other way. Both GNOME and KDE are requiring systemd now.
This discussion is originally about xfce, which does not require systemd now.
Gnome?
> Literally no user cares what language a project is implemented in
I think this is true but also maybe not true at the same time.
For one thing, programming languages definitely come with their own ecosystems and practices that are common.
Sometimes, programming languages can be applied in ways that basically break all of the "norms" and expectations of that programming language. You can absolutely build a bloated and slow C application, for example, so just using C doesn't make something minimal or fast. You can also write extremely reliable C code; sqlite is famously C after all, so it's clearly possible, it just requires a fairly large amount of discipline and technical effort.
Usually though, programs fall in line with the norms. Projects written in C are relatively minimal, have relatively fewer transitive dependencies, and are likely to contain some latent memory bugs. (You can dislike this conclusion, but if it really weren't true, there would've been a lot fewer avenues for rooting and jailbreaking phones and other devices.)
Humans are clearly really good at stereotyping, and pick up on stereotypes easily without instruction. Rust programs have a certain "feel" to them; this is not delusion IMO, it's likely a result of many things, like the behaviors of clap and anyhow-style Rust error handling leaking through to the interface. Same with Go. Even with languages that don't have as much of a monoculture, like say Python or C, I think you can still find that there are clusters of stereotypes of sorts that can predict program behavior/error handling/interfaces surprisingly well, that likely line up with specific libraries/frameworks. It's totally possible to, for example, make a web page where there are zero directly visible artifacts of what frameworks or libraries were used to make it. Yet despite that, when people just naturally use those frameworks, there are little "tells" that you can pick up on a lot of the time. You ever get the feeling that you can "tell" some application uses Angular, or React? I know I have, and what stuns me is that I am usually right (not always; stereotypes are still only stereotypes, after all.)
So I think that's one major component of why people care about the programming language that something is implemented in, but there's also a few others:
- Resources required to compile it. Rust is famously very heavy in this regard; compile times are relatively slow. Some of this will be overcome with optimization, but it still stands to reason that the act of compiling Rust code itself is very computationally expensive compared to something as simple as C.
- Operational familiarity. This doesn't come into play too often, but it does come into play. You have to set an environment variable (RUST_BACKTRACE=1, or =full for the whole thing) to get Rust programs to print backtraces on panic, for example; see the small example after this list. I don't think it is part of Rust itself, but the RUST_LOG environment variable is used by multiple libraries in the ecosystem.
- Ease of patching. Patching software written in Go or Python, I'd argue, is relatively easy. Rust, definitely can be a bit harder. Changes that might be possible to shoehorn in in other languages might be harder to do in Rust without more significant refactoring.
- Size of the resulting programs. Rust and Go both statically link almost all dependencies, and don't offer a stable ABI for dynamic linking, so each individual Rust binary will contain copies of all of their dependencies, even if those dependencies are common across a lot of Rust binaries on your system. Ignoring all else, this alone makes Rust binaries a lot larger than they could be. But outside of that, I think Rust winds up generating a lot of code, too; trying to trim down a Rust wasm binary tells you that the size cost of code that might panic is surprisingly high.
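As a tiny illustration of the backtrace point from the list above (nothing project-specific, just a deliberately panicking program): run it bare and you only get the panic message; run it with RUST_BACKTRACE=1 (or =full) and you also get the trace.

```rust
// Deliberately panics. Without RUST_BACKTRACE set you only see the message;
// RUST_BACKTRACE=1 or RUST_BACKTRACE=full adds the backtrace.
fn parse_port(s: &str) -> u16 {
    s.parse().expect("port must be a number")
}

fn main() {
    let port = parse_port("not-a-port");
    println!("listening on {port}");
}
```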
So I think it's not 100% true to say that people don't care about this at all, or that only people who are bored and like to argue on forums ever care. (Although admittedly, I just spent a fairly long time typing this to argue about it on a forum, so maybe it really is true.)
Why does Wayland "feel like the future?" It feels like a regression to me and a lot of other people who have run into serious usability problems.
At best, it seems like a huge diversion of time and resources, given that we already had a working GUI. (Maybe that was the intention.) The arguments for it have boiled down to "yuck code older than me" from supposed professionals employed by commercial Linux vendors to support the system, plus the fact that X doesn't have Android-like separation, a feature no one really wants.
The mantra of "it's a protocol" isn't very comforting when it lacks so many features that necessitate workarounds, leading to fragmentation and general incompatibility. There are plenty of complicated, bad protocols. The ones that survive are inherently "simple" (e.g., SMTP) or "trivial" (e.g., TFTP). Maybe there will be a successor to Wayland that will be the SMTP to its X400, but to me, Wayland seems like a past compromise (almost 16 years of development) rather than a future.
Wayland supports HDR, it's very easy to configure VRR, and its fractional scaling (if implemented properly) is far superior to anything X11 can offer.
Furthermore, all of these options can be enabled individually on multiple screens on the same system and still offer a good mixed-use environment. As someone who has been using HiDPI displays on Linux for the past 7 years, Wayland was such a game changer for how my system works.
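For the curious, the wp-fractional-scale-v1 protocol expresses the preferred scale as an integer count of 120ths, and the client sizes its buffer from that. A small sketch of the arithmetic (the logical size and scale here are made up, not from any particular compositor):

```rust
// wp-fractional-scale-v1 sends preferred_scale as the scale multiplied by 120.
fn buffer_size(logical_w: u32, logical_h: u32, preferred_scale: u32) -> (u32, u32) {
    let scale = preferred_scale as f64 / 120.0;
    (
        (logical_w as f64 * scale).round() as u32,
        (logical_h as f64 * scale).round() as u32,
    )
}

fn main() {
    // A 640x480 logical surface at 125% (preferred_scale = 150) wants an 800x600 buffer.
    let (w, h) = buffer_size(640, 480, 150);
    println!("{w}x{h}");
}
```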
We’re accustomed to "the future" connoting progress and improvement. Unfortunately, it isn't always so (no matter how heavily implied). Just that it's literally expected to be the future state of matters.
I've been on and off Linux desktops since the advent of Wayland. Unsure of the actual issues people run into at this point outside of very niche workflows or applications, for which there are X11 fallbacks.
Also, by "commercial linux vendors", you do realize Wayland is directly supported (afaik, correct me if wrong) by the largest commercial Linux contributors, Red Hat and Canonical. They're not simply 'vendors'.
> Unsure of the actual issues people run into at this point outside of very niche workflows or applications, for which there are X11 fallbacks.
I don't know if others have experienced this, but the biggest bug I see in Wayland right now is that sometimes, on an external monitor after waking the computer, a full-screen Electron window will crash the display (i.e. the display disconnects).
I can usually fix this by switching to another desktop and then logging out and logging back in.
Such a strange bug, because it only affects my external monitor and only affects Electron apps (I notice it with VSCode the most, but that's just because I have it running virtually 24/7).
If anyone has encountered this issue and figured out a solution, I am all ears.
Is it GNOME or KDE or what?
That's like saying "the website doesn't work", without saying what browser you are using.
This is probably worth reporting. I don't think I've ever heard of or run into something like that before. Most issues I ran into during the early rollout of Wayland desktop environments were broken or missing functionality in existing apps.
Because X is not getting much development at this point (personally I still use i3, haven’t switched to Sway, the present works fine for me).
This argument is actually backwards: one of the goals of the wayland project is to draw development away from X. If wayland didn't exist, people would have worked on X11 a lot more.
It's not an argument in the first place: it's describing the current situation. Wayland does exist, and did draw development away from X.
Not quite. Wayland was created in part to draw developers away from X. Seeking buy-in from Xorg developers specifically was a big part of it.
This seems to be implying that the creation of Wayland had some motivation that was essentially malicious toward X. Is that right?
This question sounds to me like you suspect some outright evil getting projected here. That would go too far. The wayland project tried to get the support of X developers early so that they could become a sort of "blessed" X successor early on. Plenty of earlier replacement attempts have failed because they couldn't get bigger community support, so this had to be part of a successful strategy. Any detrimental effects on X from that move were never a direct goal, as far as I am aware, just a consequence.
Yes, I do interpret your “draw development away from X” as suggesting an attempt to damage X (sorry if I misinterpreted your post, but I do think my interpretation was not really that unreasonable).
This “blessed successor” approach, without any detrimental effects as a main goal: that's pretty close to my understanding of the project. IIRC some X people were involved from the beginning, right?
This isn't quite right? Wayland was literally created by an X11 developer who got two more main X11 developers in. It's a second system, not a competitor as such.
Wanting developers to switch projects doesn't have to be malicious; in fact, personally I doubt there were any bad intentions in place. The developers of Wayland most likely think they're doing the right thing.
Hmm? Seems to be getting plenty of development.
https://github.com/X11Libre/xserver/activity
That’s a fork, which is fine. But for example, users from most mainstream distros will have to compile it themselves.
I guess we’ll see if that development is ever applied to the main branch, or if it supplants the main X branch. At the moment, though… if that’s the future of X, then it is fair to be a little bit unsure if it is going to stick, right?
That's X.org, which is controlled by the Free Desktop Foundation.
The OpenBSD people are still working on Xenocara, and it introduces actual security via pledge system calls.
That seems pretty interesting. I guess it relies on BSD plumbing though?
Funnily enough, my first foray into these sorts of operating systems was BSD, but it was right when I was getting started. So I don't really know which of my troubles were caused by BSD being tricky (few, probably), and which were caused by my incompetence at the time (most, probably). One of these days I'll try it again…
Even if you dislike Wayland, forwards-going development is clearly centred around it.
Development of X11 has largely ended and the major desktop environments and several mainstream Linux distributions are likewise ending support for it. There is one effort I know of to revive and modernize X11 but it’s both controversial and also highly niche.
You don’t have to like the future for it to be the future.
Actually there are multiple, including Phoenix (a re-implementation) and running an X WM under Wayland via Wayback, in addition to XLibre.
It's mostly because nobody really wants to improve X11. I don't think there are many Wayland features that would be impossible to implement in X11; it's just that nobody wants to dig into the crusty codebase to do it.
And sadly wayland decided to just not learn any lessons from X11 and it shows.
What do you mean nobody wants to improve X11? There were developers with dozens of open merge requests with numerous improvements to X11 that were being actively ignored/held back by IBM/Red Hat because they wanted Wayland, their corporate project, to succeed instead.
Reviewing PRs and merging them requires great effort, especially in case of a non-trivial behemoth like X. Surely if all these merge requests were of huge value, someone could have forked the project and be very happy with all the changes, right?
Not having enough maintainers, and some design issues that can't be solved are both reasons why X was left largely unmaintained.
> Surely if all these merge requests were of huge value
There were a lot of MRs with valuable changes however Red Hat wanted certain features to be exclusive to Wayland to make the alternative more appealing to people so they actively blocked these MRs from progressing.
> someone could have forked the project and be very happy with all the changes, right?
That's precisely what happened: one of the biggest contributors and maintainers got bullied out of the project by Red Hat for trying to make X11 work, and decided to create X11Libre (https://github.com/X11Libre/xserver), which is now getting all these fancy features that previously could not get into X11 due to Red Hat actively sabotaging the project in their attempt to turn Linux into their own corporate equivalent of Windows/macOS.
> "yuck code older than me"
You mean like the code that the Manchester Baby, ENIAC, the Manchester Mark 1, EDSAC and EDVAC ran? Or maybe Plankalkül...
It's a downgrade that we have no choice but to accept in order to continue using our machines. Anyone familiar with Microsoft or Apple already knows that's the future.
> It's a downgrade that we have no choice but to accept in order to continue using our machines.
Odd. Xorg still works fine [0], and we'll see how XLibre pans out.
[0] I'm using it right now, and it's still getting updates.
Here's my PoV:
- Having a single X server that almost everyone used led to ossification. Having Wayland explicitly be only a protocol is helping to avoid that, though it comes with its own growing pains.
- Wayland-the-Protocol (sounds like a Sonic the Hedgehog character when you say it like that) is not free of cruft, but it has been forward-thinking. It's compositor-centric, unlike X11 which predates desktop compositing; that alone allows a lot of clean-up. It approaches features like DPI scaling, refresh rates, multi-head, and HDR from first principles. Native Wayland enables a much better laptop docking experience.
- Linux desktop security and privacy absolutely sucks, and X.org is part of that. I don't think there is a meaningful future in running all applications in their own nested X servers, but I also believe that trying to refactor X.org to shoehorn in namespaces is not worth the effort. Wayland goes pretty radical in the direction of isolating clients, but I think it is a good start.
I think a ton of the growing pains with Wayland come from just how radical the design really is. For example, there is deliberately no global coordinate space. Windows don't even know where they are on screen. When you drag a window, it doesn't know where it's going, how much it's moving, anything. There isn't even a coordinate space to express global positions, from a protocol PoV. This is crazy. Pretty much no other desktop windowing system works this way.
I'm not even bothered that people are skeptical that this could even work; it would be weird to not be. But what's really crazy, is that it does work. I'm using it right now. It doesn't only work, but it works very well, for all of the applications I use. If anything, KDE has never felt less buggy than it does now, nor has it ever felt more integrated than it does now. I basically have no problems at all with the current status quo, and it has greatly improved my experience as someone who likes to dock my laptop.
But you do raise a point:
> It feels like a regression to me and a lot of other people who have run into serious usability problems.
The real major downside of Wayland development is that it takes forever. It's design-by-committee. The results are actually pretty good (My go-to example is the color management protocol, which is probably one of the most solid color management APIs so far) but it really does take forever (My go-to example is the color management protocol, which took about 5 years from MR opening to merging.)
The developers of software like KiCad don't want to deal with this, they would greatly prefer if software just continued to work how it always did. And to be fair, for the most part XWayland should give this to you. (In KDE, XWayland can do almost everything it always could, including screen capture and controlling the mouse if you allow it to.) XWayland is not deprecated and not planned to be.
However, the Wayland developers have taken a stance of not just implementing raw tools that can be used to implement various UI features, but instead implement protocols for those specific UI features.
An example is how dragging a window works in Wayland: when a user clicks or interacts with a draggable client area, all the client does is signal that they have, and the compositor takes over from there and initiates a drag.
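A sketch of how little the client does in that flow. These are hypothetical stand-in types, not a real binding; the actual request in xdg-shell is xdg_toplevel.move, which takes the seat and the serial of the triggering input event:

```rust
// Hypothetical stand-ins for a real Wayland binding such as wayland-client.
struct Seat;
struct XdgToplevel;

impl XdgToplevel {
    // Mirrors xdg_toplevel.move: the client only says "the user started a
    // move here"; the compositor owns the drag from then on. No coordinates
    // are exchanged, because the protocol has no global coordinate space.
    fn interactive_move(&self, _seat: &Seat, _input_serial: u32) {
        // in a real client this sends the request over the socket
    }
}

fn main() {
    let (toplevel, seat) = (XdgToplevel, Seat);
    // e.g. called from a pointer-button handler on the title bar
    toplevel.interactive_move(&seat, 42);
}
```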
Another example would be how detachable tabs in Chrome work in Wayland: it uses a slightly augmented invocation of the drag'n'drop protocol that lets you attach a window drag to it as well. I think it's a pretty elegant solution.
But that's definitely where things are stuck at. Some applications have UI features that they can't implement in Wayland. xdg-session-management for being able to save and restore window positions is still not merged, so there is no standard way to implement this in Wayland. ext-zones for positioning multi-window application windows relative to each-other is still not merged, so there is no standard way to implement this in Wayland. Older techniques like directly embedding windows from other applications have some potential approaches: embedding a small Wayland compositor into an application seems to be one of the main approaches in large UI toolkits (sounds crazy, but Wayland compositors can be pretty small, so it's not as bad as it seems) whereas there is xdg-foreign which is supported by many compositors (Supported by GNOME, KDE, Sway, but missing in Mir, Hyprland and Weston. Fragmentation!) but it doesn't support every possible thing you could do in X11 (like passing an xid to mpv to embed it in your application, for example.)
I don't think it's unreasonable that people are frustrated, especially about how long the progress can take sometimes, but when I read these MRs and see the resulting protocols, I can't exactly blame the developers of the protocols. It's a long and hard process for a reason, and screwing up a protocol is not a cheap mistake for the entire ecosystem.
But I don't think all of this time is wasted; I think Wayland will be easier to adapt and evolve into the future. Even if we wound up with a one-true-compositor situation, there'd be really no reason to entirely get rid of Wayland as a protocol for applications to speak. Wayland doesn't really need much to operate; as far as I know, pretty much just UNIX domain sockets and the driver infrastructure to implement a WSI for Vulkan/GL.
You seem to know your Waylands.
Do you know if global shortcuts are solved in a satisfactory way, and if there is an easy mechanism for one application to query Wayland about other applications?
One hack I made a while ago was to bind win+t to a script that queried the active window in the current workspace and, based on that, opened up a terminal at the right filesystem location with a preferred terminal profile.
All I get from LLMs is that D-Bus might be involved in GNOME for global shortcuts, and that when registering global shortcuts in something like Hyprland, app IDs must be passed along instead of simple script paths.
> xdg-session-management for being able to save and restore window positions is still not merged, so there is no standard way to implement this in Wayland
For me, this is a real reason not to want to be forced to use Wayland. I'm sure the implementation of Wayland in xfce is a long time off, and the dropping of Xwindows even further off, so hopefully this problem will have been solved by then.
Thanks a lot for an actually constructive comment on Wayland! The information tends to be lost in all the hate.
I understand the frustration, but I see a lot of "it's completely useless" and "it's a regression", though to me it really sounds like Wayland is an improvement in terms of security. So there's that.
The fact that this post is downvoted into grayness while lazy hateful rants aren't shows just how rotten the HN community has gotten around open source these days :/
My watch from 8 years ago runs Wayland. Nothing written in Rust as far as I can tell, though.
With that knowledge, I'm certain that XFCE will remain lightweight. It can be done, so I feel confident that the XFCE folks will get it done.
Yeah, I am a staunch proponent of "don't try to fix what is not broken". Current XFCE is fast, lightweight, usable and works fine without major issues. While I don't fully understand the advantages / disadvantages of XFCE using Wayland instead of X, if, as someone else pointed out here on HN, running XFCE on Wayland is going to make it slower, it means these developers will be crippling one of XFCE's strongest features. In that case the other minor advantages seem pointless to users like me.
> running XFCE on Wayland is going to make it slower
Citation. None of the other desktops have slowed with Wayland, and gaming is as fast as, if not marginally faster on KDE/Gnome with Wayland vs LXDE on X.
https://www.phoronix.com/review/ubuntu-2504-x11-gaming
I based it on this thread - https://news.ycombinator.com/item?id=46780901
Long-time XFCE user here. We care that stuff works the same, we appreciate how much work it is to achieve that when the world is changing out from under you, and we appreciate that XFCE understands this and cares about it. Being in Rust is not a concern.
Then the future is full of high latency.
Afaik only X11 and Wayland exist, and X11 is dying if not dead. And as for Rust, I don't see why a desktop user would be concerned about the language used, as long as it is good enough.
I've been using Xfce as a daily driver in one machine for about a decade now.
Great to know there's work on the wayland support front.
Also, writing it in Rust should help bring more contributors to the project.
If you use Xfce I urge you to donate to their Open Collective:
https://opencollective.com/xfce
https://opencollective.com/xfce-eu
I've been using xfce for about five years. I just set up my monthly donation last month and saw this good news today :)
Isn't the switch from X11 to Wayland the most painful switch that happened in the Linux world? Even going from Python 2 to 3 was not as bad.
The move from kernel 2.4.x to 2.6.x was pretty painful. The absolute slog from 2.6 to 3.0 and a development model that at least somewhat resembles the model used today was exhausting.
In case you weren't there, the "even" kernels (e.g. 2.0, 2.2, 2.4, and 2.6) were the stable series while the "odd" kernels (e.g. 2.1, 2.3, 2.5) were the development series, the development model was absolutely mental and development moved at a glacial pace compared to today's breakneck speed.
The pre-git days were less than ideal. The BitKeeper years were... interesting, politically and philosophically speaking.
Also, KDE4 was a dark, dark period.
To me the most painful switch was Gnome 2 to Gnome 3. I still miss Gnome 2.
I left Gnome 3 for other WMs (eventually settled on cinnamon), but every once in a while I decided to give Gnome 3 a try, just to be disappointed again. I felt like those people in abusive romantic relationships that keep coming back and divorcing over and over again. "Oh, Gnome has really changed now, he won't beat me again this time!".
X11 to Wayland was painless for me. I guess it depends on what you need from it.
Not really, /lib -> /usr/lib was worse for me
What about systemd?
Systemd was easy for me. Everything kept working through the transition, with the big advantage that you don't need shell scripts to create services. Wayland, though, is slow and buggy, and applications close without reason...
How on earth would Wayland be slow? Like it's literally an IPC on top of the lowest level Linux kernel API (DRM), displaying buffers.
It was partially made for car infotainment systems, which are notoriously weak hardware.
Fully-featured DEs like Gnome and KDE work a lot worse when doing everything in software rendering. If you're working on a device with subpar/nonexistent GPU driver support (i.e. Nvidia hardware for years on end), the experience is absolutely awful.
Nvidia's drivers do something weird on Wayland when my laptop is connected to HDMI, probably something funky with the iGPU<->dGPU communication. Everything works, but at the whims of Nvidia an update reduces the maximum FPS I can achieve over HDMI to about 30-45 fps. Jittery and painful, even on a monitor that supposedly supports VRR.
That's not really Wayland's fault of course, but in the same way Linux is broken because Photoshop doesn't work on it, Wayland is broken for many users because their desktop is weird on it.
systemd was a problem for early adopters (e.g., Fedora). Distros like Debian joined the party later and, as a result, got things way more stable. I never had any systemd-related problem in Debian, while for Fedora (some years earlier) I had some bugs affecting my ability to work. They all seem to work very fine now. Things took a while to mature, but it just works now.
I haven't had a single issue with Systemd and the transition was measured in years, not decades.
Just wait. In 8 years, Wayland will be as old as X11 was when Wayland was created.
Then we'll make Wayland 2.
X11 was basically used everywhere when it was released
Cries in KDE3 -> KDE4
I've used Smithay's Rust client toolkit for a few months now. For making apps, it still sometimes has unsafe wrappers disguised as safe ones. It has a lot of internals wrapped in Arc<>, but in my tests the methods are not safe to call from different threads anyhow; you will get weird crashes if you do.
I want to dive into how the Wayland API actually works, because I'd really like to know what not to do when wrappers that are used 'wrong' can crash.
I’ll switch to Wayland as soon as I can use xscreensaver with it, preferably as the screen locker.
FYI, you can currently use most wlroots-based compositors with XFCE. I myself am running Hyprland + XFCE on Gentoo. https://github.com/bergutman/dots
I like the retro theme.
Could you expand on why you describe Hyprland and XFCE4 as "a cursed combination"? Might provide some insight as to why the official XFCE project decided to create their own compositor.
I see the words "feature parity". I hope those words are taken seriously. I feel like most Wayland advocates would do well to take those words seriously.
Does Wayland work on non-Linux systems (e.g. *BSD)?
If an application is written for Wayland, is there a way to send its windows to (e.g.) my Mac, like I can with X11 to XQuartz?
Microsoft's WSL2 GUI integration works based on Wayland (and XWayland): https://github.com/microsoft/wslg
Rather than going fully protocol-based (like Waypipe), they used Weston to render to RDP. Using RDP's "remote apps" functionality, practically any platform can render the windows. I think it's a pretty clever solution, one perhaps even better than plain X11 forwarding (which breaks all kinds of things like GPU acceleration).
I don't know if anyone has messed with this enough to get it to work like plain old RemoteApps for macOS/BSD/Windows/Linux, but the technology itself is clearly ready for it.
Wayland works pretty well on FreeBSD and I know at least wlroots compositors work a bit on OpenBSD (though, I suspect anyone on OpenBSD would prefer to use their homegrown Xenocara). There are Wayland compositors for Mac, the youtuber Brodie Robertson did a good overview of them a few days ago
It depends on what you mean by send. Wayland doesn't have network transparency, there's a bit of a song and dance you have to do to get that working properly. I'm not sure the state of that or of Wayland compositors in general on Mac.
> It depends on what you mean by send.
Currently I can ssh into a remote host with X11 forwarding (ssh -X), run xeyes, and get a window on macOS. For xeyes that works. It is absolutely an inferior and chatty protocol for any other application though; try watching a YouTube video in Chrome through it.
X's network transparency was designed at a time when the UI was a couple of drawn lines, and for that it works very well. But today even your Todo app has a bunch of icons that are just bitmaps to X, and we can transfer those via much better means (which should probably not be baked into a display protocol).
I think Wayland made the correct decision here. Just be a display protocol that knows about buffers, and that's it.
User space can then transport buffers in any way it sees fit.
Also, another interesting note: the modern analogue of X's original network transparency might very well be the web, if you squint at it. And quite a few programs simply expose a localhost port to avoid the "native GUI" issue wholesale.
Today you would do:
`$ waypipe ssh somehost foot`
You need waypipe installed on both machines. For the Mac, I guess you'll need something like cocoa-way (https://github.com/J-x-Z/cocoa-way). Some local Wayland compositor, anyway.
Yes, but still kind of WIP.
https://docs.freebsd.org/en/books/handbook/wayland/
If Wayland support were already there, I would be using Xfce. I truly admire it; it's great to see this happening and I hope the project continues at great speed. With DEs requiring hard systemd support, I would rather have something like Xfce.
I started off using twm / olwm / vtwm in 1991. Then FVWM and Afterstep / WindowMaker. I've been using XFCE since around 2007. As long as it functions similarly I'll be happy.
Can relate, however I had Unity in between, and some Enlightenment as well.
GNOME was cool during the Sawfish days.
I resisted Wayland for a long time, but I'm sold now that I see how well it does on old hardware.
I have an old Thinkpad. Firefox on X is slow and scrolls poorly. On wayland, the scrolling is remarkably smooth for 10 y/o hardware, and the addition of touchpad gestures is very nice. Yes, there's more configuration overhead for each compositor, but I'm now accepting this trade.
I certainly hope they support themes. I have been using a Mac OS 7 Platinum theme on all my XFCE desktops for years and I want to keep doing so :)
So long as I can windowshade things and it doesn't end up making things a blurry mess, cool.
Now the last 3 times I tried Wayland everything ended up a blurry mess and some windows just ended up the wrong size, so.
I suppose I'll just keep holding out hope.
Very interesting that they opted for a rewrite in Rust instead of adjusting the existing codebase.
I wonder how long it'll take them to write a compositor from scratch.
Not the whole codebase, only the window manager (compositor is the Wayland equivalent). Other components are written in C and will remain so for the foreseeable future. Those components gained Wayland support in the last couple of years, you can try Xfce in a labwc session, there are of course several things to improve, but the compositor is the last big piece missing.
Thanks for the context!
As someone that is sensitive to displays, one of the best features of XFCE, unlike other desktops, is that it doesn't cause eye strain, probably because it doesn't play tricks: a pixel at a certain color is stable and not dithered (if you so choose), and higher-level primitives are also stable and don't play time/frequency-based games.
I hope XFCE preserves this, it is a killer feature in today's world.
I suspect many of us still using X are xfce users waiting for an alternative; I've heard very mixed things about current Fedora xfce Wayland setups from different people.
Great to see xfce continue on into the next age.
I've been using Pop!_OS for a while, but xfce will always have a place in my heart.
If it had tiling support I'd probably use it still. Being so lightweight is a massive boon.
Wow, this is annoying. I really like Xfce, but there are plenty of minor things which would need improvements. Instead of fixing all these minor things, they waste a lot of their donations on a rewrite for Wayland / Rust - apparently for exactly the same reason as all the other Wayland stuff and Rust reworks. Developers like to write new code more than actually maintaining / improving / fixing existing things, and find excuses to do this.
This is great news! If anyone from the team reads these comments: thank you people so much for XFCE4!
daily drive xfce4, best DE ever, simple and complete.
So will it be renamed to Wfce in the end?
It was originally named XFce after the XForms library. As of Xfce 3, it uses GTK though, so it could be called GTKce, but renaming the project every time you change widget toolkits is probably not a good idea.
Rust is not GNU
I love XFCE, with the move to wayland I hope they start thinking about abandoning GTK though
Why do you hope they abandon GTK?
What would you have them replace it with?
If they do not mind introducing C++ (they're introducing Rust so i guess multilanguage development isn't out of the question) then FLTK could be an option, though it'd probably need to improve its theming support.
They both have kinda similar roots, in that XFCE originally used XForms, which was an open source replacement for the SGI Forms library, while FLTK also started as a somewhat compatible/inspired open source replacement for SGI Forms in C++.
Enlightenment. No, really.
If they ever move away from GTK (due to the GNOME shenanigans GNOME-izing GTK) I wish Enlightenment and Xfce were together a single thing. But that's if I could ask the Tux genie for three wishes.
GTK4 is still pretty usable without libadwaita and all its Gnome-isms.
But frankly I think forking and maintaining GTK3 is preferable to moving to EFL or Qt. GIMP is still on GTK3. MATE is still on GTK3. Inkscape is still on GTK3 (but GTK4 work is in progress). Evolution is still on GTK3.
I think GTK3 will be around for a long time.
Hell, I wish EFL were more used in general. I was thinking Qt (mainly because I forgot about EFL) but that's much better.
If you are wishing they used Qt instead, the fact that they chose Rust only makes that less likely.
Am I the only one who's not buying into the Wayland hype? I just want X11 support not to fall into disrepair, as I see nothing wrong with it.
I don’t see much Wayland hype. It’s boring plumbing for most people, isn’t it? Most of us are just going along with whatever the volunteer plumbing community decided to put together.
> I just want X11 support not to fall into disrepair
Are you also willing to maintain it?
Honestly at this point, I would be willing to pay $10-20 a month just for someone to maintain Xorg and xfree86. I really doubt I am the only one.
If the XLibre project appears to be making enough fairly-consistent progress for you to be comfortable tossing around some cash, then do gather up some likeminded folks to hire a dev to follow the guidance here [0] and help out!
Do note that I've never tried to crowdfund a programmer, but that's something that I have to believe is possible to do.
[0] <https://github.com/X11Libre/xserver?tab=readme-ov-file#i-wan...>
Are you willing to write accessibility support for the new xfce only wayland compositor? How will you get every other wayland compositor to support your non-'wayland core' accessibility extension?
People like to frame things as if the waylands were some sort of default, and nothing is being lost and no one is being excluded.
Some cognitive dissonance going on here. The vast majority of current Linux Desktop users are on Wayland, and X11 is phased out across the board. Calling it hype is absurd.
Sure, but do you have any facts to backup that assertion?
Yes, you are the only one in the entire world who hasn't fallen for it. Well done.
As long as I can still compile xfce with a small and simple C compiler, or an even simpler SDK.
I'm trying to build a Linux desktop and the first thing I got stuck on is X11 versus Wayland for greetd. The next thing I got stuck on is that XFCE4 doesn't exist for Wayland. What the shit. If you want to tell me Wayland is the future, fine. Sure. Great. It's been 11 years!
If you're just trying to run Linux you're better off either using one of the many ready-made distributions or going with X11, since that works just about everywhere and has done so for decades.
In the time it took me to complain about xfce4, three other complaints popped up! So I guess I'm not alone.