> Browsers have become so nice to work with, that these days, I get away with just the following two lines of code to simplify DOM manipulation:
>
> dqs = document.querySelector.bind(document);
> dqsA = document.querySelectorAll.bind(document);
Sounds useful and reasonable.
> I usually import the two functions from a module like this:
>
> import { dqs, dqsA } from '/lib/js/dqs.js';
>
> This is the module: https://github.com/no-gravity/dqs.js
Utterly absurd. Just copy and paste. It’s only two simple lines, how could it be worth a dependency?
Every file? You have one .js file per project, if you're like me. So just throwing those two lines in the top and never having to worry about it ever again seems like a nice option.
How big are your projects? It's very strange to me that you would want to have just a single JS file per project. Even if you want to avoid bundlers, ES modules make it easy to import code from other files.
In terms of Javascript, as little as I can possibly get away with. The web stuff that I do is mostly CRUD-type apps, which can be done entirely server side. The Javascript comes in only where it makes the user experience better - basic form help or a modal, things like that.
Because using a JS module will set back your 30-minutes project by an entire day.
Of course, you are supposed to master which of the es/interop/amd/require incantations you are supposed to use. I wish TypeScript had mandated one style of JS module, and one style only!!
And I’ve never managed to find good guidelines on which kind of JS module I should use. Any advice on a very easy, stable, and worth-learning technique to master imports in 2024?
The overhead here would be the need to make another request just for these two functions.
On the other hand, with bundling though it’s totally fine to have a module just for these two helpers. (Even better if it can be inlined, but I haven’t seen anything supporting this since Prepack, which is still POC I think.)
AFAIK modern HTTP versions like HTTP/3 can request multiple files in a single network packet. So it is basically free to do "another request". As the data request goes out and the data comes in in packets with other "requests".
A network request isn't free, only less costly than it used to be. Even with HTTP/3, your JS execution is stalled for however long the RTT is back to the server. That could be 500+ ms if it's on the other side of the world and doesn't have a CDN.
Depends on the import tree. The way I understand it, this:
import { x } from '/a.js';
import { y } from '/b.js';
Does not take longer than this:
import { x } from '/a.js';
Because the message to the server "Give me b.js" goes out in the same network packet as "Give me a.js" and the data of b.js comes back in the same packet(s) as the data of a.js.
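(For what it's worth, the parallelism can also be made explicit with dynamic imports - a sketch, assuming a module script with top-level await, using the file names from the example above:)

const [{ x }, { y }] = await Promise.all([
  import('/a.js'),
  import('/b.js'),
]);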
Because if it ever needs to change, you're in for a world of hurt. Because useful stuff like that is worth sharing elsewhere. It starts with 2 lines, but then there's another useful function you'd like in another file. So you just copy and paste those two lines. But then you want that in a third file. Pretty soon you have this almost-library you're carrying around, spread across a bunch of files, and what started with two simple lines is now a mountain of tech debt.
Maybe you'll never write enough JavaScript to have additional utility functions. You'll probably never need to modify those two lines. But copying and pasting like that makes for quite the code smell. Because if you're copying and pasting that, the question that someone may never actually verbalize to you is: what else in the code is copied and pasted instead of being turned into a shared function in a library?
querySelectorAll() isn't live. So you could do what I very often do and already convert the result to an array, i.e.
dqsA = s => Array.from(document.querySelectorAll(s));
The reason I do that so often is that it allows all array methods to be used on the result, like .map() or .filter(), which makes it feel very much like jQuery. YMMV
Does the underlying data structure work okay with that? I would assume there is some sort of lazy iterator involved that may not work with array methods, or only work once.
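(For reference: querySelectorAll returns a static NodeList, not a lazy iterator, so converting it once is safe and the array can be reused freely. A quick sketch - selector names are made up:)

const list = document.querySelectorAll('.item'); // static NodeList: has forEach, but no map/filter
const arr = Array.from(list); // a real array: every array method works, any number of times
arr.filter(el => el.classList.contains('active')).map(el => el.id);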
I recently attempted to remove React as a dependency just to see what would happen. It turns out different browsers are still incredibly inconsistent when it comes to event handling. For example the select event on an <input> element somehow doesn't fire at all on Safari during my test, and doesn't fire when the caret is merely moved on some browsers. Using just the native browser functions isn't just fine, even if you don't need all the React features like components or state or props. It turns out React DOM is valuable as it papers over browser differences.
And does an equivalent in React work? Because I don't believe React does any of the papering-over you describe. My understanding (as a non-user) is that React does, logically, essentially nothing special around event handling.
I don't remember off the top of my head whether this specific example works in React as I'm not next to a computer. But I remember reading React source code and finding a whole lot of code to handle the select event. (Just found it by doing a GitHub code search on my phone https://github.com/facebook/react/blob/7c8e5e7ab8bb63de91163...)
In general React has its own event handling code. For one, in React the user doesn't even deal with the browser's native DOM events but with React synthetic events. React also readily creates brand-new synthetic events from other browser events. React also sometimes gives different names or behaviors to browser events; the most famous example is that the React onChange event is roughly equivalent to the browser onInput event, but absolutely different from the browser onChange event.
Good to know, thanks. I knew it made synthetic events, but thought it was all still 1:1. I see I was completely wrong. I gotta say, yuck. Don't like it, wish they'd taken a more polyfill-like approach.
IIRC they do that to deal with browser differences and be consistent with things like event bubbling. Probably other benefits as well but that's the one I'm fairly sure I remember from years ago.
There are a couple of edge cases, which I forget at the moment, where React event handlers intentionally behave differently from the DOM handlers with the same name.
The way I see it once you’ve thinned the polyfills to next to nothing, the enduring feature of jQuery is the automatic list comprehensions. The ability to unselect all of the buttons in a form in a single call is still hard to match elsewhere. That and parent queries.
The main problem I have with the implementation is that it chooses to fail silently when the list is empty. I’ve fixed too many bugs of this sort, often caused by someone refactoring a DOM tree to do some fancy layout trick after the fact. If I were implementing jquery again today, I’d make it error on empty set by default and add a call chain or flag to fail silently when you really don’t care. I’ve spent a few hours poking around at jQuery seeing what it would take to pull out sizzle and do this, but never took things any farther than that.
At the end of the day jquery is about the old debate of libraries versus frameworks. We’ve been doing SPAs with giant frameworks long enough now for the Trough of Disillusionment to be just around the corner again.
Not having to worry whether some selector matches any elements is part of what makes jQuery attractive to many though. It's very "fire and forget", you send off your command to hide all .foo, and if there are any .foo they will be hidden, and if there are no .foo, nothing happens and you don't need to worry about it, much like CSS. If you write .foo { color: red; } and there isn't any .foo in the document it doesn't do anything but also has no negative side-effects (except that tiny overhead).
Exactly this. It's extremely useful to be able to say "with all document elements that match this selector (whether there are any or not), do this."
It's possible it's also situationally useful to say "if there aren't any document elements that match this selector, error out, if so do this with all of them." I'm struggling to imagine a specific situation in which that has compelling advantages (and would be interested in elaboration), but let's say it exists. Then something like this:
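// (Sketch reconstructed from the invocation below; mustMatch is the commenter's
// hypothetical name, not a real jQuery API.)
$.fn.mustMatch = function () {
  if (this.length === 0) {
    throw new Error('mustMatch: selector matched no elements');
  }
  return this;
};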
would make it easy to explicitly add the guard condition with an invocation like `$("#selector .nonextant").mustMatch().each(function (emt) { /*do this*/ })`, rather than having it invisibly ride along with every comprehension and making the "whether there are any or not" case harder.
And if for some reason one were possessed of the conviction that implicit enforcement of this universally within their project outweighed the advantages of explicit options for both ways, it'd probably be better to patch the jQuery lib for that specific project than enforce it as a standard for everyone worldwide.
Just had a script today that fires in 2 contexts and ran into an error where the element I attach a handler to doesn't exist in one of the contexts, which breaks JS on the page. Since I already had jQuery as a dependency in the project, in the moment it felt easier to replace the querySelector call with jQuery, which I did, instead of checking the querySelector result. So I second this; the 'fire and forget' part still holds up very well, even though the tree traversal pain points have mostly been solved by browsers.
Optional chaining is widely supported and solves this problem for single-element queries.
qsA returns an empty NodeList when the selector matches no elements (as long as the string is a valid selector), and forEach on an empty NodeList is a no-op, so no check is needed there anyway.
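(A sketch of both points - the selector is made up:)

document.querySelector('.foo')?.classList.add('hidden'); // no match: ?. short-circuits on null
document.querySelectorAll('.foo').forEach(el => el.classList.add('hidden')); // no match: empty NodeList, forEach does nothing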
True, it is tedious, but I have a VS Code shortcut for doing the following (and the same goes for querySelectorAll):
let foo = document.querySelector('.foo');
if (!!foo) {
  // do thing
}
Fully agree, the default should be strict. jQuery-based code requires every developer to be aware of every selector used in the project and to remember to update it when the DOM changes. That is of course impossible.
I think it is possible to replace the jQuery init function with your own implementation that enforces length.
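(An untested sketch of the idea - wrapping rather than patching jQuery's init internals, with the opt-out flag suggested above; the names are made up:)

function $strict(selector, options = {}) {
  const result = jQuery(selector);
  if (result.length === 0 && !options.optional) {
    throw new Error(`No elements match: ${selector}`);
  }
  return result;
}

$strict('form .option').removeClass('selected'); // throws if nothing matches
$strict('.maybe-there', { optional: true }).hide(); // silent, like stock jQuery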
But why? With mainstream websites pumping out literal megabytes of JavaScript, why spend time rewriting an entire library (with less features) to save 50KB?
Not relevant to this package in particular, but this line of reasoning baffles me every time I see HN comments about JQuery. So many posters argue against the use of JQuery because of its package size and bandwidth constraints, while simultaneously advocating for SPA frameworks that use orders of magnitude more bandwidth. Absolutely ridiculous cargo cult reasoning.
IMHO the way to achieve this is to pay the upfront cost of building out a small framework for your application, which has lightweight abstractions for common patterns. With some design, a small internal API can be as nice to work with as the kitchen sink abstractions. (Much nicer, too, when it comes to maintenance and debugging.)
> IMHO the way to achieve this is to pay the upfront cost of building out a small framework for your application
And then 5 years down the line it has grown into a worse version of the popular alternatives, the original developers are gone and the ones who currently maintain the mess have to pay the price. In corporate or professional contexts, you probably just should pick whatever is popular.
When you’re working on something others won’t have to maintain years down the line, thankfully your hands aren’t tied then and you can have a bit more fun.
For everything else? Svelte, HTMX, jQuery, Vue, React, Angular or whatever else makes sense.
That said, sometimes I wonder what a world would look like where the browser had the most popular options pre-packaged, so you wouldn’t need to download hundreds of KB on each site you visit but would get the packages with browser updates. It’d probably save petabytes of data.
> And then 5 years down the line it has grown into a worse version of the popular alternatives, the original developers are gone and the ones who currently maintain the mess have to pay the price.
Isn't that true for using the popular alternative too? At some point the original devs have moved on from $FRAMEWORK v1 to $FRAMEWORK v2 and now you're going to have to do a migration project and hope it doesn't break.
> When you’re working on something others won’t have to maintain years down the line, thankfully your hands aren’t tied then and you can have a bit more fun.
I think the implication is, with the in-house library, that the in-house library would be a lot easier to replace or update than a deprecated external alternative.
No one's forcing you to upgrade when the framework does. We still have a Vue 2.7 codebase chugging along just fine and won't upgrade it unless truly necessary.
The thing is that while your application is working well, the library authors will have moved on, and it's up to you to upgrade your application and fix breaking changes. At least with an in-house framework, it's always morphing into something that the company needs. Not saying that there aren't nicer frameworks, but it's always someone's agenda that happened to align with yours at the time of selection.
> The thing is that while your application is working well, the library authors would have moved on and it's up to you to upgrade your application and fix breaking changes.
AngularJS is actually a pretty good argument to support your point, I had to migrate an app off of it (we picked Vue as the successor) and it was quite the pain, because a lot of the code was already a bit messy and the concepts don't carry over all that nicely, especially if you want something quite close to the old implementation, functionality wise.
On the other hand, jQuery just seems to be trucking along throughout the years. There are cases like Vue 2 to Vue 3 migrations which can also have growing pains, but I think that the likes of Vue, React and Angular are generally unlikely to be abandoned, even with growing pains along the way.
In that regard, your job as a developer is probably to pick whatever might have the least amount of surprises, the most longevity and the lowest chance of you having to maintain it yourself and instead being able to coast off of the work of others (and maybe contributing, if you have the time), with the project having hundreds if not thousands of contributors, which will often be better than what a few people within any given org could achieve.
Sometimes that might even be reaching for something like SSR instead of making SPAs, depending on what you can get away with. One can probably talk about Boring Technology or Lindy effect here.
I think, in view of my previous comment which was made prior to reading this refinement of yours, that it all very much depends on whether you are choosing something that is designed to be replaced vs something that is not.
Sorta along the lines of the mantra "Don't design your code for extendability, design it for replaceability" (not sure where I read that).
> with the project having hundreds if not thousands of contributors, which will often be better than what a few people within any given org could achieve.
The upside of "what a few people within the org could achieve" is that a couple of devs spending a few weeks on a project are never going to make something that cannot also be replaced by a different couple of developers of a similar timeframe.
IOW, small efforts are two-way doors; large efforts (thousands of contributors over 5 years) are effectively one-way doors.
> Sorta along the lines of the mantra "Don't design your code for extendability, design it for replaceability" (not sure where I read that).
I agree in principle and strive to do that myself, but it has almost never been my experience with code written by others across bunches of projects.
Anything developed in house without the explicit goal of being reusable across numerous other projects (e.g. having a framework team within the org) always ends up tightly coupled to the codebase to a degree where throwing it away is basically impossible. E.g. other people typically build bits of frameworks that infect the whole project, rather than decoupled libraries that can be swapped out.
> The upside of "what a few people within the org could achieve" is that a couple of devs spending a few weeks on a project are never going to make something that cannot also be replaced by a different couple of developers of a similar timeframe.
Because of the above, this also becomes really difficult - you end up with underdocumented and overly specific codebases vs community efforts that are basically forced to think about onboarding and being adaptable enough for all of the common use cases.
Instead, these codebases will often turn to shit, due to not enough people caring and not being exposed to enough eyes to make up for whatever shortcomings a small group of individuals might have on a technical level. This is especially common in 5-10 year old codebases that have been developed by multiple smaller orgs along the way (one at a time, then inherited by someone else).
Maybe it’s my fault for not working with the mythical staff engineers that’d get everything right, but neither do most people - they work with colleagues that are mostly concerned with shipping whatever works, not how things will be 5 years down the line, and I don’t blame them.
A. You're assuming they are largely the same people by extrapolating from your observations. It's impossible to actually know.
B. Your two examples provide different things. This is like saying it's OK to include any old multi-megabyte dependency if a site loads a couple mb worth of images. There's no reason to stop considering the size of the small parts just because you decided you need some large parts. Things add up - that will never stop being a useful thing to remember, in any context.
We're using jQuery on our sites which score 100% on all Google Lighthouse pagespeed tests. A smaller version of jQuery really wouldn't matter to us, our pages are already extremely fast to load and score amazingly well on any page speed/SEO test.
About the only place I could see a benefit from this library is maybe in embedded, where space really is an issue. I've created a few IoT devices with web interfaces that are built into the tiny ROM of the device. A 6KB library is nice, but I'm using Preact with everything gzipped into one single .html file, and my very complex web app hosted on the IoT device is about 50KB total size gzipped - including code, content, SVG images and everything - so jQuery or a jQuery substitute isn't going to be a better solution for me, but maybe it fits for someone that doesn't know how to set up the tooling for a React/Preact app.
To add, I really try to minimize external deps, but if first-load speed were absolutely critical, loading from the jQuery CDN would increase the odds of it already being cached.
Using jQuery CDN might have helped with cross-site caching in the past, but now all major browsers have cache partitioning by origin for privacy reasons.
We don't make any external HTTP requests for any library code. jQuery is embedded into the page HTML file, along with all other required library code necessary for the page to start functioning, in one bundle. Nothing that runs below the fold is executed until the page is scrolled. All scripts are deferred, except the required libraries, one of which is jQuery and is loaded in-line in a <script> block in the page <head>. There's a ton of tricks we use to get to a perfect Google Lighthouse score - we also score perfect 100% on mobile too. This isn't a complex web application but we do a lot of cool front-end stuff.
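(The below-the-fold part might look something like this sketch - initBelowTheFold is a hypothetical name, not their actual code:)

// Run non-critical setup only once the user first scrolls.
window.addEventListener('scroll', () => initBelowTheFold(), { once: true });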
That's great and fair. Some places are NUTS about first page load speed (and I mean the first time someone has ever visited the site) though, and it really could matter across all deps depending on a ton of other factors.
Serving super common libs, like jQuery, from the most likely CDN location could maximize the likelihood it's already cached.
I have never personally worked anywhere this mattered.
We provide a website among many other services to our clients. Our clients are very SEO focused, and they will go to Google's Lighthouse (or another testing site) to test their site's page speed, and then they will put in the URL for their competition's website to see how their site compares to their competitors. If they see their page speed score is 1/2 as fast as their competition, they have a reason to leave us and find a better host (whoever their competition is using). We have thousands of clients, so I am managing thousands of individual customized websites based on core "white-label" template code. Page speed matters to us very much, because it matters to our clients.
Google Lighthouse will complain about every HTTP request, and it doesn't care about CDN caching, because none of the external code will be cached when the test is run. It will tell you to minimize external HTTP requests. This is the same way every page speed test works, not just Google. So including any external dependency will cause the page speed score to go down a bit. Have enough of them and your page speed score ends up being very poor (many other factors can affect this, all of which are detailed in the Lighthouse report). It doesn't matter what the average site visitor experiences if their cache has jQuery in it from some random CDN. The only thing that really matters is that Google is telling our client that their site is performing badly compared to their competitor's site.
So, my job is to make sure our clients never, ever think about leaving us because of page load speed as measured by Google or any other testing site. Our clients pay us hundreds of dollars every month, some of them pay 10s of thousands depending on their needs (we don't just provide websites). So there is a lot of money at stake. Page speed scores matter very much to us. When our client sees their site is scoring perfect 100% on all Lighthouse tests, and their competitor is scoring a 70%, then we win, and the client has one less reason to leave. We even use this as a selling point to bring on new clients, because we have an absolutely untouchable page speed score compared to our competitors in this space.
I'm not sure what to say, I believe you but you seem to be talking past my point that other companies may prefer to go a different route based on their needs and what they are optimizing for. There are real situations a CDN may be preferred.
Companies that are using CDNs to load commonly used libraries aren't actually interested in page load speed scores. They're pursuing a tech trick that was always somewhat of a red herring, and frankly a bit risky. We've experimented with CDNs and they have actually added stuff to the libraries that shouldn't be there. Trusting a 3rd party to load library code from isn't great for security.
This argument confuses me. It seems equivalent to saying "with mainstream fast food restaurants selling meals with 1600 calories, why are you making yourself a green salad for lunch?", or saying "with the national debt approaching $35 trillion dollars, why are you shopping around for the best rate on a mortgage?". One answer for all three cases is: I'm not the thing that's big, I'm a different thing that's smaller. Another answer is: if being too large is the problem, then being smaller sounds like a solution.
But I guess you're really asking why the developer would spend time on rewriting a library. Is that really surprising? Most of programming is rewriting something that's been made before, either because you have to for your job, or because you need it to do something slightly different, or have different performance characteristics, or just want to learn how it's done.
Why rewrite an entire code base away from jQuery... and not to native implementations?
The era of jQuery and its clones is over. People need to move on. If you're ever at the architecture level of your code base and think "What package should I use for DOM manipulation?", you're doing something wrong.
My current client has a web application written in a lightweight strongly typed php framework, htmx and sprinkled jquery.
Devs move very quickly, the website is blazing fast, and it makes around 140k MRR. It's not small: about 350 database tables and 200 CRUD pages. Business logic is well unit tested.
You don't need to make jQuery the center of DOM manipulation if your application swaps dom with htmx with all the safety and comfort of a cozy backend.
It feels magical. And the node_modules folder is smol. Icing on the cake.
I look forward to jQuery 4 and 5.
You don't see this kind of architecture in CVs because these people are too busy making money to bother.
If you GET /user/save you'll get back HTML and `<script>` to build the form.
If you POST /user/save you're expected to pass the entire form data PLUS an "operation" parameter which is used by the backend to decide what should be done and returned.
For example if user clicks [add new user] button, the "operation" parameter has value of "btnSubmit.click".
Why pass operation parameter? Because business forms can have more than just a [submit] button.
For example, there might be a datagrid filter value being changed (operation: "txtFilter.change"), or perhaps a dropdown search to select a city name from a large list (operation: "textCitySearch.change"), it can be a postal code to address lookup (operation: "txtPostalCode.change"), etc.
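(The client side of that contract isn't shown in the thread; a sketch of what it might look like, with names taken from the example above:)

const body = new FormData(form); // `form` is the rendered /user/save form
body.set('operation', 'btnSubmit.click');
const js = await fetch('/user/save', { method: 'POST', body }).then(r => r.text());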
On the backend, the pseudocode looks somewhat like this but it's cleaner/safer because of encapsulation, validation, error handling, data sanitization, model binding and csrf/xss protection:
function user_save($operation) {
    $form = new Form('/user/save');
    $form->add($textName = new Component(...));
    $form->add($textCitySearch = new Component(...));
    $form->add($btnSubmit = new Component(...));

    if ($_SERVER['REQUEST_METHOD'] == 'GET') return $form->getHtml();

    try {
        if ($operation == 'btnSubmit.click') {
            $newUser = UserService::createNewUser($_POST);
            return '<script>' . makeJavaScriptSuccessDialog('New user created!') . '</script>';
        }
        if ($operation == 'textCitySearch.change') {
            $foundCities = UserService::searchCities($_POST);
            return '<script>' . $textCitySearch->getJsToReplaceResultsWith($foundCities) . '</script>';
        }
    } catch (\Throwable $exception) {
        // Services above throw ValidationException() for incorrect input; $form takes that
        // and generates friendly HTML for users in a centralized way.
        if ($exception instanceof ValidationException) {
            return '<script>' . $form->getValidationErrorJs($exception) . '</script>';
        }
        // The code below is actually done by middleware elsewhere that catches unhandled
        // exceptions, but it is put here for brevity in this example.
        logSystemException($exception);
        return '<script>' . makeJavaScriptErrorDialog('Oops, something went wrong on our end. We will fix it!') . '</script>';
    }
}
So the HTML generation and form processing for user creation are handled by a single HTTP endpoint and the code is very straightforward. The locality of behaviour is off the charts, and I don't need 10 template fragments for each form because everything is component based.
jQuery's API is nice. And its abstraction reflects common sense more than technical implementations. It's another abstraction layer, all right, and not required, but it's so convenient.
For those interested in jQuery alternatives- I've been waiting for jQuery 4.0 soooo long I ended up making my own jQuery with some key differences:
* Animations, tweens, timelines use pure CSS, instead of jQuery's custom system.
* Use one element or lists transparently.
* Inline <script> Locality of Behavior. No more inventing unique "one time" names.
* Vanilla first. Zero dependencies. 1 file. Under 340 lines.
me() is guaranteed to return 1 element (or first found, or null).
any() is guaranteed to return an array (or empty array).
Array methods
any('button')?.forEach(...)
any('button')?.map(...)
So does any() always return an array as described near the top, or can it return null as implied by the example below?
Locality of Behaviour is of special interest to me.
How is your experience with currentScript.parentElement?
Last month I did some quick research and my impression was that it wasn't reliable in some probably-niche case, but I can't remember which.
But I didn't investigate much and I'm glad you made it work!
If I load 3 consecutive scripts currentScript.parentElement should still work in all browsers right? As long as it is not async or module, which is fine with me.
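(For reference, the pattern under discussion: document.currentScript is only set while a classic script is executing synchronously, and is null inside modules, which matches the async/module caveat above. A sketch with a made-up selector:)

const host = document.currentScript.parentElement; // the element wrapping this inline <script>
host.querySelector('button')?.addEventListener('click', () => {
  host.classList.toggle('open'); // behavior stays local to its own markup
});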
SvelteKit had this conversation and they ended up implementing random ids for elements to set their targets:
Here's a stretch goal: use typescript template string magic to correctly infer the type of elements. For instance you can statically infer that $('div#name') will be a HTMLDivElement.
Elixir and a few other languages have the pattern matching and type system that could pull that off but not a lot of languages do. Can you do that in typescript? I don’t see how.
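(It is expressible in TypeScript with template literal types - a minimal sketch that only understands tag, tag#id, and tag.class shapes, nothing close to full CSS:)

type TagOf<S extends string> =
  S extends `${infer T}#${string}` ? T :
  S extends `${infer T}.${string}` ? T : S;

type ElementFor<S extends string> =
  TagOf<S> extends keyof HTMLElementTagNameMap
    ? HTMLElementTagNameMap[TagOf<S>]
    : HTMLElement;

declare function $<S extends string>(selector: S): ElementFor<S> | null;

const el = $('div#name'); // inferred as HTMLDivElement | null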
Back in the days when trying to slim down JS I used https://github.com/filamentgroup/shoestring Main reason was because they had offered a custom build to only add what you really need.
Somehow I still think going with what the browsers have to offer nowadays is a better option - actually it's really good, and jQuery isn't really needed anymore. Especially when even the small jQuery alternative is still 6kB, while Preact, a React-like lib, is only half the size.
I used this initially in a browser extension I'm building. Ended up migrating to a JSX library instead, because jQuery turns into hard-to-reason-about code pretty quickly once you're past “simple app” territory (and I say this as someone who wrote my own jQuery-inspired library[1]). Right tool for the job, as they say.
... unless you want to send a body with your HTTP GET. There is tons of utility value in this! For example, let's say you want to GET some data but also provide some client request statistics along with the request -- happens all the time in the real world.
Fetch will reject your GET if it contains a body (a deliberate maintainer decision), even though it's entirely permissible by HTTP and done by many real-world AJAX APIs. Real AJAX will do what it's supposed to. (The HTTP 1.1 2014 Spec says that including a request body in a GET "might cause some implementations to reject the request." Guess which one!)
Also, advanced features like progress are completely absent from Fetch as well.
However, there are some fantastic libraries like Axios[1], SuperAgent (requires npm), and, yes, jQuery[2], that have really excellent API's (far superior to Fetch), or you could just write your own (or use an LLM) short wrapper around modern AJAX and call it a day. h/t to Claude:
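// (The original snippet was lost in formatting; this is a reconstruction matching
// the description below, not the exact code.)
function xhrRequest(method, url, { body = null, headers = {}, onProgress } = {}) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open(method, url);
    for (const [name, value] of Object.entries(headers)) xhr.setRequestHeader(name, value);
    if (onProgress) xhr.onprogress = onProgress; // progress events, which fetch lacks
    xhr.onload = () => resolve(xhr); // resolve with the XHR object itself
    xhr.onerror = () => reject(new Error(`${method} ${url} failed`));
    xhr.send(body);
  });
}

const ajax = {
  get: (url, opts) => xhrRequest('GET', url, opts),
  post: (url, body, opts) => xhrRequest('POST', url, { ...opts, body }),
  put: (url, body, opts) => xhrRequest('PUT', url, { ...opts, body }),
  del: (url, opts) => xhrRequest('DELETE', url, opts),
};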
This gives you xhr methods with a fetch-style API, and you can still do the things that fetch can't (though this won't do real streaming or cache control like Fetch, it'll handle 95% of all common use cases in a tiny bit of code).
Each method listed above returns a Promise that resolves with the XMLHttpRequest object or rejects with the error. So you get both the Promise functionality and full access to the XHR object in the resolution.
For more advanced AJAX stuff, check out the very powerful and flexible Axios library[1].
And, if you don't need AJAX but do want some of the features from jQuery (like some of the more unusual selectors) that aren't in Cash (to save bytes!), AJAX (and special effects) is excluded from jQuery Slim which brings the code down to only 69KB[3].
Caching is the most important reason to consider GET for a non-hypertext API. Vary headers tell the server which header diffs should cause cache misses, but there's no way to do that for an encoded body.
In standard HTTP/1.1, any method can have a request body. In Representational State Transfer (REST) as defined by Dr. Fielding, HTTP doesn't even come up, let alone "methods" per se, so there is no distinction between DELETE, POST, or GET from a REST standpoint, only within HTTP as an engine for hypertext. Further, in HTTP, any of these requests can contain a request body.
But, because of this behavior by the WhatWG for Fetch, the IETF has added this paragraph to the specification for HTTP/1.1:
"A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request."
"Some existing implementations" really just means fetch. The p*ing contest between two groups resulted in a neutered and prescriptive fetch.
In other words, it's fetch that is non-standard, and the actual HTTP standard had to be updated to let you know that.
You've got the chronology and causality wrong. The Fetch API came after the RFC 7230 advice. Due to arguably dubious interpretation of arguably poor wording in RFC 2616 (from 1999) that suggested you SHOULD ignore GET bodies, various caching and proxy servers would ignore or reject GET request bodies, so that it became dangerous to use them.
Since then, each iteration of the HTTP specs has strengthened the advice. The most recent 9110 family says you SHOULD NOT use GET request bodies unless you have confirmed in some way that they'll work, because otherwise you can't trust they'll work.
Fetch was going along with this consensus, not causing the problem.
The pool was muddied; nay, poisoned. And so the solution is the QUERY method. That's how things tend to work in such a space. See also 307 because of 302 being misimplemented.
Yes, exactly. At some point I asked to maintain it and kinda redid it. Now I kinda consider it "done", as in "maybe some more work will be put into it, but by and large I don't think it's going to change in the future".
I created a similar `$()` utility function for my projects albeit with 10 times less functionality.
I used the same basic signature for the `$()` function. However I found that 95% of the time I don't need to use the chain method on a collection. There's almost no scenario in which I want to do <collection>.addClass() etc. There's practically ZERO situations in which I would use something like attach an event to a collection of nodes, since event delegation is more elegant (attach a single event and check for event.type and event.target).
So TLDR I made $() always select a single element with `querySelector()`, which means I could remove the collection/loop from every chained method like addClass() or css() or toggle().
Point being: unless you write bad code to begin with, you can probably make this significantly smaller by removing the collection handling. The 1% of the time it is warranted to do an addClass() or something else on a bunch of nodes, you can just go native, and if the collection is small enough, just call $() on each element.
PS: I guess the subtext also to my post is sometimes something looks logically elegant, like the ability for any chained method to act on the collection selected by $(), but it may not make any sense in the real world.
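(The event delegation mentioned above, sketched - the selector and handler are made up:)

// One listener on a stable ancestor; filter by what was actually clicked.
document.addEventListener('click', (event) => {
  const button = event.target.closest('.delete-button');
  if (button) handleDelete(button);
});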
Assuming you mean that ironically. Unfortunately, the README doesn't reveal where the name comes from, but it is truly absurdly misleading, as if it came from a random generator...
Oh wow, I really didn't make that connection. Thanks!
Still not sure it really is a good name for a lib: someone who doesn't already know it will probably not think about jQuery when they see this name in a dependency list...
Fine as an exercise, but for a range of use cases what you really want is the smallest alternative to the bloated reactive JS frameworks, and Alpine.js seems to be occupying that sweet spot.
Thanks. Didn't know about datatables.net, looks very useful.
Looks like it does a great job of dealing with tables on mobile, putting my own manual efforts for that task to shame. I would typically just enable horizontal scrolling on mobile and call it a day. Now I feel a bit guilty about that after seeing the much better ways datatables does it!
Yes, that's the primary point. Write your own wrappers for the subset of functions you need - a small price for the great reward of removing a 50kb dependency.
It specifically calls out the use case of writing a library, where reducing dependencies is a much higher priority. It demonstrates how easy it is to replace usage of many functions. It never implies that the native equivalents are shorter.
They don't claim it's clearer than jQuery. The pitch, as I understand it: if you only need a few of those operations, it may be better to forego adding jQuery dependency.
That's not really the point. Of course they're smaller in JQuery, the whole idea is to provide equivalents for things that are easy one-liners in JQuery.
The wrong syntax notwithstanding, this doesn't let you recursively use querySelector(All), e.g. to find children of a node like document.querySelector("#foo").querySelectorAll(".bar")
But I think the OP's jQuery replacement is also dropping features in the service of a small footprint. So this was my 80/20 contribution to the "smallest jQuery replacement" problem ;)
I'm always surprised that an API that is defined by matching 0-n dom elements doesn't return a container that by default maps over them list monad style.
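(Something like this sketch, presumably - the names are invented:)

// A tiny wrapper where every operation maps over 0..n matches and chains.
const $$ = (selector) => {
  const nodes = [...document.querySelectorAll(selector)];
  const self = {
    each: (fn) => (nodes.forEach(fn), self),
    addClass: (name) => self.each(el => el.classList.add(name)),
    css: (prop, value) => self.each(el => el.style.setProperty(prop, value)),
  };
  return self;
};

$$('.foo').addClass('highlight').css('color', 'red'); // a no-op when nothing matches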
> Utterly absurd. Just copy and paste. It’s only two simple lines, how could it be worth a dependency?
Bizarre comment. Why would you copy-paste this into every file when you can do it once and import it? What's the problem exactly?
Left-pad anyone?
That's an external dependency. You don't install anything here. It's no different than making any other module you reuse in multiple places.
It's in their codebase.
> I wish TypeScript had mandated one style of JS module, and one style only!!
Your rant is several years out of date. You can use ES imports natively in Node and the browser. I have been very happy doing so.
Besides, if you’re working with a codebase of non-zero complexity you need imports/require/whatever anyway.
Use ESM. It's built into the browser. CJS is legacy. Every major JS runtime has module interop built in now.
Modules aren't hard anymore.
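(A sketch of that workflow, reusing the dqs module from the top of the thread - no bundler, no transpiler, just a module entry point; '.card' is a made-up selector:)

// Loaded via <script type="module" src="/main.js"></script>
import { dqs, dqsA } from '/lib/js/dqs.js';
dqsA('.card').forEach(card => card.classList.add('ready'));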
> Because using (a JS) module will set back your 30-minutes project by an entire day
Maybe learn to do the basics of YOUR JOB for once, some "software engineer".
What? If you don’t have external dependencies, just remove your bundler/transpiler and rely on browsers to import your code.
/lib/js/dqs.js is part of their codebase.
It looks like it's intended to be copied and pasted into your codebase, not be an external dependency.
Modern browsers support the import syntax natively, so it really shouldn't be a lot of overhead to import it.
I guess it's meant to be processed by some bundler later.
> dqsA = s => Array.from(document.querySelectorAll(s));
I wonder what I would have to look for in my codebase in terms of what could break when dqsA starts returning an array instead of a NodeList?
This is JavaScript though…
Prototype pollution is bad. We learned this over a decade ago.
> the select event on an <input> element somehow doesn't fire at all on Safari during my test
I haven’t tested myself, but according to MDN the select event on <input> elements should be supported by Safari?
https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputEl...
The MDN page you linked to includes a nice selection logger example. It just doesn't work on my Safari (iOS).
Looks like bling.js https://gist.github.com/paulirish/12fb951a8b893a454b32
I really wish it was a native script to use qs and qsa, rather than something I have to add.
FYI: I know you meant to give an example, but element tags with ID are DOM variables as well.
This is my favorite trick that I've been using for a long time
...I just checked and it turns out I first blogged[0] about it 12 years ago. Time flies.
[0] https://nmn.gl/blog/javascript-shortcut-for-getelementbyid-a...
Your “ct” ligatures in your headings have a fun little loop connecting them!
Haha I'm glad you noticed :) I'm a huge typography nerd.
The css to do that comes from the Normalize-OpenType.css [0] library
[0] https://kennethormandy.com/journal/normalize-opentype-css/
you can use `$(queryGoesHere)` or `$$(queryGoesHere)` too from the devtool console.
How do people you work with receive this? I'd imagine it could get messy if every dev has their own little things like this.
You can modify every item in a query pretty nicely with a one-liner in modern browsers now:
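// (Reconstructed - the original one-liner was lost in formatting; the selector is made up.)
document.querySelectorAll('form .option').values().forEach(el => el.classList.remove('selected'));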
This takes advantage of iterable NodeList and iterator helpers.
Many parent queries can be done with element.closest()
> It's extremely useful to be able to say "with all document elements that match this selector (whether there are any or not), do this."
For every one of these we had ten where a button press definitely needed to update a DOM element.
As I said above, you’d want a way to override the behavior in the few cases where it’s inappropriate.
Two different types of people. One wants to create lightweight applications. The other wants lightweight development.
Lightweight development for lightweight applications is a bit of an oxymoron at this time.
Though that anecdote about risk management should also have this link alongside it: https://www.robinsloan.com/notes/home-cooked-app/
Except it seems like we went in the opposite direction, with even CDNs being less efficient in some ways: https://httptoolkit.com/blog/public-cdn-risks/
IMO, it's all very contextual.
> No one's forcing you to upgrade when the framework does.
Many large companies have entire departments dedicated to forcing you to keep your code up to date.
If you're working for that kind of company then you certainly aren't getting a choice whether to use JQuery or React.
> If you're working for that kind of company then you certainly aren't getting a choice whether to use JQuery or React.
Not necessarily. There is probably a tickbox for satisfying some regulation that says "Don't use versions that aren't getting security fixes anymore".
In which case, yes, you get the choice to choose between JQuery and $SOMETHING_ELSE but not the choice to remain on unsupported versions of anything.
The thing is that while your application is working well, the library authors would have moved on and it's up to you to upgrade your application and fix breaking changes. At least with an in-house framework, it's always morphing into something that the company needs. Not saying that there aren't nicer framework, but it's always someone agenda that has aligned with yours at the time of selection.
> The thing is that while your application is working well, the library authors would have moved on and it's up to you to upgrade your application and fix breaking changes.
AngularJS is actually a pretty good argument to support your point, I had to migrate an app off of it (we picked Vue as the successor) and it was quite the pain, because a lot of the code was already a bit messy and the concepts don't carry over all that nicely, especially if you want something quite close to the old implementation, functionality wise.
On the other hand, jQuery just seems to be trucking along throughout the years. There are cases like Vue 2 to Vue 3 migrations which can also have growing pains, but I think that the likes of Vue, React and Angular are generally unlikely to be abandoned, even with growing pains along the way.
In that regard, your job as a developer is probably to pick whatever might have the least amount of surprises, the most longevity and the lowest chance of you having to maintain it yourself and instead being able to coast off of the work of others (and maybe contributing, if you have the time), with the project having hundreds if not thousands of contributors, which will often be better than what a few people within any given org could achieve.
Sometimes that might even be reaching for something like SSR instead of making SPAs, depending on what you can get away with. One can probably talk about Boring Technology or Lindy effect here.
I think, in view of my previous comment which was made prior to reading this refinement of yours, that it all very much depends on whether you are choosing something that is designed to be replaced vs something that is not.
Sorta along the lines of the mantra "Don't design your code for extendability, design it for replaceability" (not sure where I read that).
> with the project having hundreds if not thousands of contributors, which will often be better than what a few people within any given org could achieve.
The upside of "what a few people within the org could achieve" is that a couple of devs spending a few weeks on a project are never going to make something that cannot also be replaced by a different couple of developers of a similar timeframe.
IOW, small efforts are two-way doors; large efforts (thousands of contributors over 5 years) are effectively one-way doors.
> Sorta along the lines of the mantra "Don't design your code for extendability, design it for replaceability" (not sure where I read that).
I agree in principle and strive to do that myself, but it has almost never been my experience with code written by others across bunches of projects.
Anything developed in house without the explicit goal of being reusable across numerous other projects (e.g. having a framework team within the org) always ends up tightly coupled to the codebase to a degree where throwing it away is basically impossible. E.g. other people typically build bits of frameworks that infect the whole project, rather than decoupled libraries that can be swapped out.
> The upside of "what a few people within the org could achieve" is that a couple of devs spending a few weeks on a project are never going to make something that cannot also be replaced by a different couple of developers in a similar timeframe.
Because of the above, this also becomes really difficult - you end up with underdocumented and overly specific codebases vs community efforts that are basically forced to think about onboarding and being adaptable enough for all of the common use cases.
Instead, these codebases will often turn to shit, due to not enough people caring and not being exposed to enough eyes to make up for whatever shortcomings a small group of individuals might have on a technical level. This is especially common in 5-10 year old codebases that have been developed by multiple smaller orgs along the way (one at a time, then inherited by someone else).
Maybe it’s my fault for not working with the mythical staff engineers that’d get everything right, but neither do most people; they work with colleagues that are mostly concerned with shipping whatever works, not how things will be 5 years down the line, and I don’t blame them.
> Lightweight development for lightweight applications is a bit of an oxymoron at this time.
Apt description
A. You're assuming they are largely the same people by extrapolating from your observations. It's impossible to actually know.
B. Your two examples provide different things. This is like saying it's OK to include any old multi-megabyte dependency if a site loads a couple MB worth of images. There's no reason to stop considering the size of the small parts just because you decided you need some large parts. Things add up; that will never stop being a useful thing to remember, in any context.
We're using jQuery on our sites which score 100% on all Google Lighthouse pagespeed tests. A smaller version of jQuery really wouldn't matter to us, our pages are already extremely fast to load and score amazingly well on any page speed/SEO test.
About the only place I could see a benefit from this library is maybe in embedded, where space really is an issue. I've created a few IoT devices with web interfaces that are built into the tiny ROM of the device. A 6KB library is nice, but I'm using Preact with everything gzipped in one single .html file, and my very complex web app hosted in the IoT device is about 50KB total gzipped, including code, content, SVG images and everything. So jQuery or a jQuery substitute isn't going to be a better solution for me, but maybe it fits for someone that doesn't know how to set up the tooling for a React/Preact app.
To add, I really try to minimize external deps, but if first-load speed were absolutely critical, loading from the jQuery CDN would increase the odds of it already being cached.
Meh for most places I've worked though.
Using jQuery CDN might have helped with cross-site caching in the past, but now all major browsers have cache partitioning by origin for privacy reasons.
We don't make any external HTTP requests for any library code. jQuery is embedded into the page HTML file, along with all other required library code necessary for the page to start functioning, in one bundle. Nothing that runs below the fold is executed until the page is scrolled. All scripts are deferred, except the required libraries, one of which is jQuery and is loaded in-line in a <script> block in the page <head>. There's a ton of tricks we use to get to a perfect Google Lighthouse score - we also score perfect 100% on mobile too. This isn't a complex web application but we do a lot of cool front-end stuff.
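(Roughly this shape, with made-up file names, just to illustrate the setup described above:)
<head>
  <script>/* jQuery + the other required libs, inlined so no extra request blocks startup */</script>
</head>
<body>
  ...
  <!-- everything non-critical is deferred and runs after first paint -->
  <script defer src="/js/enhancements.js"></script>
</body>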
That's great and fair. Some places are NUTS about first page load speed(and I mean first time someone has ever visited the site) though and it really could matter across all deps depending on a ton of other factors..
Serving super common libs, like jQuery, from the most likely CDN location could maximize the likelihood it's already cached.
I have never personally worked anywhere this mattered.
We provide a website among many other services to our clients. Our clients are very SEO focused, and they will go to Google's Lighthouse (or another testing site) to test their site's page speed, and then they will put in the URL for their competition's website to see how their site compares to their competitors. If they see their page speed score is 1/2 as fast as their competition, they have a reason to leave us and find a better host (whoever their competition is using). We have thousands of clients, so I am managing thousands of individual customized websites based on core "white-label" template code. Page speed matters to us very much, because it matters to our clients.
Google Lighthouse will complain about every HTTP request, and it doesn't care about CDN caching, because none of the external code will be cached when the test is run. It will tell you to minimize external HTTP requests. This is the same way every page speed test works, not just Google. So including any external dependency will cause the page speed score to go down a bit. Have enough of them and your page speed score ends up being very poor (many other factors can affect this, all of which are detailed in the Lighthouse report). It doesn't matter what the average site visitor experiences if their cache has jQuery in it from some random CDN. The only thing that really matters is that Google is telling our client that their site is performing badly compared to their competitor's site.
So, my job is to make sure our clients never, ever think about leaving us because of page load speed as measured by Google or any other testing site. Our clients pay us hundreds of dollars every month, some of them pay 10s of thousands depending on their needs (we don't just provide websites). So there is a lot of money at stake. Page speed scores matter very much to us. When our client sees their site is scoring perfect 100% on all Lighthouse tests, and their competitor is scoring a 70%, then we win, and the client has one less reason to leave. We even use this as a selling point to bring on new clients, because we have an absolutely untouchable page speed score compared to our competitors in this space.
I'm not sure what to say, I believe you but you seem to be talking past my point that other companies may prefer to go a different route based on their needs and what they are optimizing for. There are real situations a CDN may be preferred.
Companies that are using CDNs to load commonly used libraries aren't actually interested in page load speed scores. They're pursuing a tech trick that was always somewhat of a red herring, and frankly a bit risky. We've experimented with CDNs, and they have actually added stuff to the libraries that shouldn't be there. Trusting a 3rd party to serve your library code isn't great for security.
This argument confuses me. It seems equivalent to saying "with mainstream fast food restaurants selling meals with 1600 calories, why are you making yourself a green salad for lunch?", or saying "with the national debt approaching $35 trillion, why are you shopping around for the best rate on a mortgage?". One answer for all three cases is: I'm not the thing that's big, I'm a different thing that's smaller. Another answer is: if being too large is the problem, then being smaller sounds like a solution.
But I guess you're really asking why the developer would spend time on rewriting a library. Is that really surprising? Most of programming is rewriting something that's been made before, either because you have to for your job, or because you need it to do something slightly different, or have different performance characteristics, or just want to learn how it's done.
Maybe if we embraced small dependencies rather than saying "why bother?", then dependencies would become smaller?
This
Mainstream websites are advertising-delivery trash. Don't use them as a benchmark for what we should be doing.
Some of us still try to ship websites that use less than 50KB of JavaScript total.
Embedded system? Or “I don’t need all that stuff for my comic book collection manager” or “minimalism has its own rewards”?
Why rewrite an entire code base away from jQuery... and not to native implementations?
The era of jQuery and its clones is over. People need to move on. If you're ever at the architecture level of your code base and think "What package should I use for DOM manipulation?", you're doing something wrong.
for htmx, jQuery is amazing
My current client has a web application written in a lightweight, strongly typed PHP framework, htmx, and sprinkled jQuery.
Devs move very quickly, the website is blazing fast, and it makes around 140k MRR. It's not small: about 350 database tables and 200 CRUD pages. Business logic is well unit tested.
You don't need to make jQuery the center of DOM manipulation if your application swaps dom with htmx with all the safety and comfort of a cozy backend.
It feels magical. And the node_modules folder is smol. Icing on the cake.
I look forward to jQuery 4 and 5.
You don't see this kind of architecture in CVs because these people are too busy making money to bother.
Sounds interesting from a tech perspective. What PHP framework is it, and at what abstraction level do you handle forms?
Thanks. It's a small custom framework built from libraries, some custom, some third party.
- File based HTTP router running on top of https://frankenphp.dev/
- ORM/SQL with: https://github.com/cycle/orm but this is preference. Anything works. From SQL builders to ORMs.
I'll try to explain their form handling:
Forms almost always POST to their own GET URL.
If you GET /user/save you'll get back HTML and `<script>` to build the form.
If you POST /user/save you're expected to pass the entire form data PLUS an "operation" parameter which is used by the backend to decide what should be done and returned.
For example if user clicks [add new user] button, the "operation" parameter has value of "btnSubmit.click".
Why pass operation parameter? Because business forms can have more than just a [submit] button.
For example, there might be a datagrid filter value being changed (operation: "txtFilter.change"), or perhaps a dropdown search to select a city name from a large list (operation: "textCitySearch.change"), it can be a postal code to address lookup (operation: "txtPostalCode.change"), etc.
On the backend, the pseudocode looks somewhat like this but it's cleaner/safer because of encapsulation, validation, error handling, data sanitization, model binding and csrf/xss protection:
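(The pseudocode block itself seems to have been lost in the paste; as a stand-in, here is the rough shape of the dispatch, sketched in JavaScript rather than PHP, with invented handler names.)
// POST /user/save: dispatch on the "operation" parameter
function handleUserSave(form) {
  switch (form.operation) {
    case 'btnSubmit.click':       return saveUser(form);           // the [add new user] button
    case 'txtFilter.change':      return renderFilteredGrid(form); // datagrid filter changed
    case 'textCitySearch.change': return renderCityMatches(form);  // city dropdown search
    case 'txtPostalCode.change':  return lookupAddress(form);      // postal code to address lookup
    default:                      return renderForm(form);         // build the form HTML, as on GET
  }
}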
So the HTML generation and form processing for user creation is handled by a single HTTP endpoint and the code is very straightforward. The locality of behaviour is off the charts, and I don't need 10 template fragments for each form because everything is component based.
Thanks for sharing FrankenPHP. This thing looks amazing.
Thanks for the detailed response. Very interesting approach. I didn't know about FrankenPHP! You ever considered pure Go for the backend?
jQuery's API is nice. And its abstraction reflects common sense more than technical implementations. It's another abstraction layer, all right, and not required, but it's so convenient.
Oh, ok. Let's make things larger then.
For those interested in jQuery alternatives- I've been waiting for jQuery 4.0 soooo long I ended up making my own jQuery with some key differences:
https://github.com/gnat/surreal
Conflicting documentation:
So does any() always return an array as described near the top, or can it return null as implied by the example below?
This is great!
Locality of Behaviour is of special interest to me.
How is your experience with currentScript.parentElement?
Last month I did some quick research, and my impression was that it wasn't reliable in some probably niche case, but I can't remember which.
But I didn't investigate much and I'm glad you made it work!
If I load 3 consecutive scripts currentScript.parentElement should still work in all browsers right? As long as it is not async or module, which is fine with me.
SvelteKit had this conversation and they ended up implementing random ids for elements to set their targets:
https://github.com/sveltejs/kit/issues/2221
From the migration guide I learned a few things that jQuery can do (and Cash can't) that I didn't know and I'll probably use some time:
https://github.com/fabiospampinato/cash/blob/master/docs/mig...
Here's a stretch goal: use typescript template string magic to correctly infer the type of elements. For instance you can statically infer that $('div#name') will be a HTMLDivElement.
Elixir and a few other languages have the pattern matching and type system that could pull that off but not a lot of languages do. Can you do that in typescript? I don’t see how.
You can, using something like `function $<T extends keyof HTMLElementTagNameMap>(sel: T | `${T}${'#' | '.' | '[' | ' '}${string}`): HTMLElementTagNameMap[T];`
TS types may go quite deep. Check the Arktype library [https://arktype.io/]; its type definitions are basically TypeScript written in JSON.
TypeScript's type system is Turing complete, so you can not only do that, but also some insane stuff like:
- A SQL database implemented purely in TypeScript type system (https://github.com/codemix/ts-sql)
- Chess implemented entirely in TypeScript (and Rust) type systems (https://github.com/Dragon-Hatcher/type-system-chess)
- Lambda calculus in TypeScript type system (https://ayazhafiz.com/articles/21/typescript-type-system-lam...)
You definitely can do that in TypeScript. The kinds of things you can do with generic inference and string literals are crazy
The package is called `typed-query-selector`. Here it is in action: https://github.com/GoogleChrome/lighthouse/blob/main/types/i...
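For a feel of the technique, here's a stripped-down sketch (nothing like the real package, which handles far more of the selector grammar):
// Pull the leading tag name out of a selector type (toy version)
type TagOf<S extends string> =
  S extends `${infer T}#${string}` ? T :
  S extends `${infer T}.${string}` ? T :
  S extends `${infer T}[${string}` ? T :
  S;

declare function $<S extends string>(
  selector: S
): TagOf<S> extends keyof HTMLElementTagNameMap
  ? HTMLElementTagNameMap[TagOf<S>]
  : Element;

const el = $('div#name'); // typed as HTMLDivElement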
I hear jQuery 4 is a jQuery alternative for modern browsers.
Back in the day, when trying to slim down JS, I used https://github.com/filamentgroup/shoestring. The main reason was that they offered a custom build to add only what you really need.
It looks like cash has that as well, just bit more hidden in the documentation https://github.com/fabiospampinato/cash/blob/master/docs/par... If I'd use it I'd give that a try.
Somehow I still think going with what the browsers have to offer nowadays is a better option; it's actually really good, and jQuery isn't really needed anymore. Especially when even the small jQuery alternative is still 6kB, while Preact, a React-like lib, is only half the size.
I used this initially in a browser extension I'm building. Ended up migrating to a JSX library instead, because jQuery turns into hard-to-reason-about code pretty quickly once you're past “simple app” territory (and I say this as someone who wrote my own jQuery-inspired library[1]). Right tool for the job, as they say.
[1]: https://github.com/aleclarson/dough
P.S. If you can cope with jQuery in a medium/large app, good for you. But it's not my cup of tea.
I'm confused, how is this helpful beyond having some aliases for already existing web APIs?
In theory, I love all those tiny libs and frameworks
In practice, I always need to import some huge a* library that makes the gains from these small alternatives minuscule.
Framework -> 50KB
Tiny version of framework -> 5KB
Lib I need and can't replace -> 1MB
Note that it's really DOM-centric and doesn't include AJAX.
Isn’t AJAX fairly well supported via fetch now?
In the same way selectors and map replace jquery. It depends how much sugar you want.
Yeah at this point I’ve totally forgotten $.ajax API but fetch is pretty easy, just a single function call
Now we only need something that makes websockets more resilient against network errors and corporate firewalls.
... unless you want to send a body with your HTTP GET. There is tons of utility value in this! For example, let's say you want to GET some data but also provide some client request statistics along with the request -- happens all the time in the real world.
Fetch will reject your GET if it contains a body (a deliberate maintainer decision), even though it's entirely permissible by HTTP and done by many real-world AJAX APIs. Real AJAX will do what it's supposed to. (The HTTP 1.1 2014 Spec says that including a request body in a GET "might cause some implementations to reject the request." Guess which one!)
Also, advanced features like progress are completely absent from Fetch as well.
However, there are some fantastic libraries like Axios[1], SuperAgent (requires npm), and, yes, jQuery[2], that have really excellent APIs (far superior to Fetch), or you could just write your own (or use an LLM) short wrapper around modern AJAX and call it a day. h/t to Claude:
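(The actual snippet didn't survive the paste; what follows is a minimal sketch of the shape being described. The names request/get/post/put/del are illustrative, not the original code.)
function request(method, url, { body = null, headers = {}, onProgress } = {}) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open(method, url);
    for (const [name, value] of Object.entries(headers)) xhr.setRequestHeader(name, value);
    if (onProgress) xhr.onprogress = onProgress; // progress events, which Fetch doesn't expose
    xhr.onload = () => resolve(xhr);             // resolve with the XHR object itself, even on 4xx/5xx
    xhr.onerror = () => reject(new Error('Request failed: ' + method + ' ' + url));
    xhr.send(body);
  });
}
const get  = (url, opts) => request('GET', url, opts);
const post = (url, opts) => request('POST', url, opts);
const put  = (url, opts) => request('PUT', url, opts);
const del  = (url, opts) => request('DELETE', url, opts);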
This gives you XHR methods with a fetch-style API, and you can still do all the things that Fetch can't (it won't do real streaming or cache control like Fetch, but it'll cover 95% of all common use cases in a tiny bit of code). Each method listed above returns a Promise that resolves with the XMLHttpRequest object or rejects with the error. So you get both the Promise functionality and full access to the XHR object in the resolution.
Usage:
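(Continuing the sketch above; the endpoint is made up.)
const xhr = await get('/api/stats', { headers: { Accept: 'application/json' } });
console.log(xhr.status, JSON.parse(xhr.responseText));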
For more advanced AJAX stuff, check out the very powerful and flexible Axios library[1]. And if you don't need AJAX but do want some of the features from jQuery (like some of the more unusual selectors) that aren't in Cash (to save bytes!), AJAX (and special effects) is excluded from jQuery Slim, which brings the code down to only 69KB[3].
1. Axios https://github.com/axios/axios (41kb)
2. jQuery AJAX https://api.jquery.com/jQuery.ajax/ (87kb but includes ALL of jquery!)
3. https://code.jquery.com/jquery-3.7.1.slim.min.js
Caching is the most important reason to consider GET for a non-hypertext API. Vary headers tell caches which request-header differences should cause cache misses, but there's no way to do that for an encoded body.
I believe providing a body with GET is non-standard, which could lead to problems with proxies. IETF is introducing the QUERY method to fill this gap.
It's not non-standard; it's actually in the standard: https://www.rfc-editor.org/rfc/rfc7231#page-24
In standard HTTP/1.1, any method can have a request body. In Representational State Transfer (REST) as defined by Dr. Fielding, HTTP doesn't even come up, let alone "methods" per se, so there is no distinction between DELETE, POST, or GET from a REST standpoint, only within HTTP as an engine for hypertext. Further, in HTTP, any of these requests can contain a request body.
But, because of this behavior by the WhatWG for Fetch, the IETF has added this paragraph to the specification for HTTP/1.1:
"A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request."
"Some existing implementations" really just means fetch. The p*ing contest between two groups resulted in a neutered and prescriptive fetch.In other words, it's fetch that is non-standard, and the actual HTTP standard had to be updated to let you know that.
You've got the chronology and causality wrong. The Fetch API came after the RFC 7231 advice. Due to an arguably dubious interpretation of arguably poor wording in RFC 2616 (from 1999) that suggested you SHOULD ignore GET bodies, various caching and proxy servers would ignore or reject GET request bodies, so it became dangerous to use them.
Since then, each iteration of the HTTP specs has strengthened the advice. The most recent 9110 family says you SHOULD NOT use GET request bodies unless you have confirmed in some way that they'll work, because otherwise you can't trust they'll work.
Fetch was going along with this consensus, not causing the problem.
The pool was muddied; nay, poisoned. And so the solution is the QUERY method. That's how things tend to work in such a space. See also 307 because of 302 being misimplemented.
What does the "modern websites" mean? It honestly sounds like "this only works in the latest chrome, and only on the latest windows and macos".
"modern websites" means IE11+ for cash, it's a fairly old library.
I remember using Cash about 10 years ago. Was it under a different user back then? Ken Wheeler maybe?
Thanks for your continued work on it!
Yes, exactly. At some point I asked to maintain it and kinda redid it. Now I kinda consider it "done", as in "maybe some more work would be put into it, but by and large I don't think it's going to change in the future".
I created a similar `$()` utility function for my projects albeit with 10 times less functionality.
I used the same basic signature for the `$()` function. However, I found that 95% of the time I don't need to use the chain method on a collection. There's almost no scenario in which I want to do <collection>.addClass() etc. There are practically ZERO situations in which I would use something like attaching an event to a collection of nodes, since event delegation is more elegant (attach a single event and check event.type and event.target).
So TLDR I made $() always select a single element with `querySelector()`, which means I could remove the collection/loop from every chained method like addClass() or css() or toggle().
Point is, unless you write bad code to begin with, you can probably make this significantly smaller by removing the collection handling. The 1% of the time it is warranted to do an addClass() or something else on a bunch of nodes, you can just go native, and if the collection is small enough, just call $() on each element.
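Something like this, roughly (a hypothetical reconstruction, not the actual code):
// $() wraps exactly ONE element, so chained methods need no collection loop
function $(sel) {
  const el = document.querySelector(sel);
  return {
    el,
    addClass(c) { el.classList.add(c); return this; },
    css(prop, val) { el.style[prop] = val; return this; },
    on(type, fn) { el.addEventListener(type, fn); return this; },
  };
}

// and event delegation instead of binding handlers to collections:
document.addEventListener('click', (e) => {
  if (e.target.matches('.delete-btn')) console.log('delete', e.target);
});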
PS: I guess the subtext also to my post is sometimes something looks logically elegant, like the ability for any chained method to act on the collection selected by $(), but it may not make any sense in the real world.
I am using another one, Umbrella JS: https://umbrellajs.com
Finally a name that is perfectly fitting and describes the library surprisingly well.
Assuming you mean that ironically. Unfortunately, the README doesn't reveal where the name comes from, but it is truly absurdly misleading, as if it came from a random generator...
I assumed it comes from jQuery defaulting to $ as an alias for the jQuery function.
Not sarcastic at all actually, I take it you've missed the absolute horde of dollar signs it uses in its syntax?
Reminds me of this old joke: "Why do greedy developers all learn PHP? Because there's a lot of dollars in that."
Oh wow, I really didn't make that connection. Thanks!
Still not sure it really is a good name for a lib: someone who doesn't already know it will probably not think about jQuery when they see this name in a dependency list...
For some reason I would have preferred they called it “Cash Money”
Fine as an exercise, but for a range of use cases what you really want is the smallest alternative to the bloated reactive JS frameworks, and Alpine.js seems to be occupying that sweet spot.
This seems pretty different from the functionality alpine provides, no?
Not sure I’d call IE11 a modern browser. Aren’t they leaving more size/speed improvements on the table by supporting it?
> Aren’t they leaving more size/speed improvements on the table by supporting it?
Only tiny ones; I don't remember the details now, but IE11 ended up providing almost all the same APIs.
The primary reason we keep around and use jQuery is because most pages on our site rely upon datatables.net which relies upon jQuery.
Thanks. Didn't know about datatables.net, looks very useful.
Looks like it does a great job of dealing with tables on mobile, putting my own manual efforts for that task to shame. I would typically just enable horizontal scrolling on mobile and call it a day. Now I feel a bit guilty about that after seeing the much better ways datatables does it!
ah that’s what people were looking for
a jquery alternative
actually the native typescript is interesting
Is it just me who doesn't need jQuery or anything like that anymore? What kind of crazy direct DOM query/manipulation do you need?
The manipulation should be on the backing state, and then the DOM should just derive from that, such as with data binding.
> I thought we got past this.
>
> https://youmightnotneedjquery.com/
I keep waiting for the equivalent www.youmightnotneedreact.com url to pop up.
not exactly that but there is http://youmightnotneedjs.com/ and https://www.htmhell.dev/adventcalendar/2023/2/
Looks to me like you might still want plenty of wrappers though.
Yes, that's the primary point. Write your own wrappers for the subset of functions you need - a small price for the great reward of removing a 50kb dependency.
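For instance, something like this covers a surprising amount of everyday jQuery (a sketch; pick whatever subset you actually use):
const $  = (sel, root = document) => root.querySelector(sel);
const $$ = (sel, root = document) => [...root.querySelectorAll(sel)];
const on = (el, type, fn, opts) => el.addEventListener(type, fn, opts);
const addClass = (el, ...cls) => el.classList.add(...cls);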
I've always found this website absurd since almost every example is harder to read and remember in native JS than in jQuery.
It makes an extremely good case for jQuery IMO.
I don’t recall them implying it was more concise just “good enough, and 1 less dependency”
It specifically calls out the use case of writing a library, where reducing dependencies is a much higher priority. It demonstrates how easy it is to replace usage of many functions. It never implies that the native equivalents are shorter.
They don't claim it's clearer than jQuery. The pitch, as I understand it: if you only need a few of those operations, it may be better to forego adding jQuery dependency.
That's not really the point. Of course they're smaller in JQuery, the whole idea is to provide equivalents for things that are easy one-liners in JQuery.
Not according to the last couple of threads about it:
https://news.ycombinator.com/item?id=25770858
https://news.ycombinator.com/item?id=7152068
This perfectly shows why native js is ridiculously shittified.
el.getBoundingClientRect().height;
gtfo
window.$ = document.querySelectorAll
Uncaught TypeError: 'querySelectorAll' called on an object that does not implement interface Document.
Damn, you're right, sorry.
window.$ = (x => document.querySelectorAll(x))
I find this a little cleaner:
window.$ = document.querySelectorAll.bind(document)
since it works properly for any function, no matter the number of arguments it receives.
I like wrapping it in an Array.from() so you can use .map/.filter/etc.
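i.e. something like (a sketch):
window.$ = (sel) => Array.from(document.querySelectorAll(sel));
// $('li').map(el => el.textContent).filter(Boolean) now just works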
The wrong syntax notwithstanding, this doesn't let you recursively use querySelector(All), e.g. to find children of a node like document.querySelector("#foo").querySelectorAll(".bar")
I know, it was a bit of a joke.
But I think the OP's jQuery replacement is also dropping features in the service of a small footprint. So this was my 80/20 contribution to the "smallest jQuery replacement" problem ;)
But it doesn't do chaining and you have to loop through elements to do anything with them.
I'm always surprised that an API that is defined by matching 0-n dom elements doesn't return a container that by default maps over them list monad style.
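A toy version of that idea, purely illustrative:
const $all = (sel) => new Proxy([...document.querySelectorAll(sel)], {
  get(els, prop) {
    if (prop in els) return Reflect.get(els, prop);         // keep normal Array behaviour
    return (...args) => els.map((el) => el[prop](...args)); // fan a method call out over every match
  },
});
// $all('.item').remove() removes every matched element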
There’s a fairly small polyfill that makes a NodeList have the same functions as Array.
Are the various browser JS implementations clever enough not to make a new object for Array.from(nodeList)?
window.$ = document.querySelectorAll.bind(document)