It's nice to see this. Things used to be simple! (XSLT itself should've been simpler, of course.)
BTW, as I commented on earlier HN threads re: removal of XSLT support from the HTML spec and browsers, IBM owns a high-performance XSLT implementation that they may want to consider contributing to one or more browsers. (It is a JIT that generates machine code directly from XSLT and several other data transformation and policy languages, and then executes it.)
I think it would be very unlikely browsers would use a JIT engine for XSLT. They are removing it because they are afraid of the security footprint. A JIT engine would make that footprint much worse.
The core concept behind XSLT is evergreen: being able to programmatically transform the results of an HTTP request into a document with native tools is still useful. I don't foresee any equivalent native framework for styling JSON ever coming into being, though.
I could easily imagine a functional-programming JSON transformation language, or perhaps even a JSLT based on the latest XSLT spec. The key in these things is to constrain what it can do.
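For what it's worth, the latest spec already points that way: XSLT 3.0 defines json-to-xml(), which parses JSON text into a standard XML tree you can transform with ordinary templates. A minimal sketch, assuming a hypothetical posts.json whose top-level object has a "posts" array of objects with "title" fields:

    <xsl:stylesheet version="3.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:fn="http://www.w3.org/2005/xpath-functions">
      <xsl:output method="html"/>
      <!-- XSLT 3.0 entry point when invoked without a source document -->
      <xsl:template name="xsl:initial-template">
        <!-- Read the JSON text and convert it to the spec's XML representation -->
        <xsl:variable name="data" select="json-to-xml(unparsed-text('posts.json'))"/>
        <ul>
          <!-- json-to-xml() emits fn:map/fn:array/fn:string elements, keyed by @key -->
          <xsl:for-each select="$data/fn:map/fn:array[@key = 'posts']/fn:map">
            <li><xsl:value-of select="fn:string[@key = 'title']"/></li>
          </xsl:for-each>
        </ul>
      </xsl:template>
    </xsl:stylesheet>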
We wouldn't even need anything as complex as XSLT, or a functional language, for transforming JSON. Other markup-based template-processing systems exist for higher-level languages: Pug, Mustache, etc. for Node.js. You could achieve a lot with a template engine in the browser!
JSX!
> I don't foresee any equivalent native framework for styling JSON ever coming into being though.
Well yeah I hope not! That's what a programming language is for, to turn data into documents.
XSLT 2.0 is Turing complete.
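The crux of that claim is unbounded recursion, which even XSLT 1.0 named templates already provide. A toy sketch, computing a factorial recursively; the template name is made up:

    <xsl:template name="factorial">
      <xsl:param name="n"/>
      <xsl:choose>
        <!-- Base case -->
        <xsl:when test="$n &lt;= 1">1</xsl:when>
        <xsl:otherwise>
          <!-- Recursive case: capture factorial(n - 1), then multiply -->
          <xsl:variable name="rest">
            <xsl:call-template name="factorial">
              <xsl:with-param name="n" select="$n - 1"/>
            </xsl:call-template>
          </xsl:variable>
          <xsl:value-of select="$n * $rest"/>
        </xsl:otherwise>
      </xsl:choose>
    </xsl:template>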
Let’s rewrite W3C into XML and XSLT.
A few HN posts ago I commented this:
> I want to see XSL import an XML. I want to see the reverse. XSL will be the view. XML will be the model. And the browser will be the controller. MVC paradigm.
It then dawned on me that the MVC framework for XML is one where XML is the model (or the data, or a database table) and XSLT is the view. Meaning the web browser can browse database information.
I never appreciated this very much before. The web has this incredible format to see database information in raw form or a styled form.
I still want to see development of it in reverse, and I hope to find better use cases now that I understand this paradigm.
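A concrete sketch of that MVC reading: the xml-stylesheet processing instruction attaches the view (XSL) to the model (XML), and the browser plays controller. The file names and data here are illustrative:

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="site.xsl"?>
    <!-- posts.xml: the model -->
    <posts>
      <post date="2025-08-25"><title>Hello, world</title></post>
    </posts>

    <!-- site.xsl: the view -->
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output method="html"/>
      <xsl:template match="/posts">
        <html><body>
          <ul>
            <!-- Render each model record as a list item -->
            <xsl:for-each select="post">
              <li><xsl:value-of select="title"/> (<xsl:value-of select="@date"/>)</li>
            </xsl:for-each>
          </ul>
        </body></html>
      </xsl:template>
    </xsl:stylesheet>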
Haven't seen this much interest in XML/XSLT in 20 years.
One recommendation I’d make: replace RSS with Atom. Outside of podcasting, everything that supports RSS supports Atom, and Atom is just better, in various ways that actually matter for content correctness, and in this case in ways that make it easier to process. One of the ways that matters here: Atom <published> uses RFC 3339 date-time, rather than the mess that is RSS’s pubDate. As it stands, you’re generating an invalid JSON-LD datePublished. (If you then want to convert it into a format like “25 August 2025”, you’ll have to get much fancier with substringing and choosing, but it’s possible.)
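To illustrate the "substringing and choosing": because RFC 3339 date-times have fixed field positions (YYYY-MM-DDThh:mm:ss...), an XSLT 1.0 named template can turn Atom's <published> into "25 August 2025". A minimal sketch; the template name and parameter are made up:

    <xsl:template name="format-date">
      <xsl:param name="iso"/> <!-- e.g. 2025-08-25T07:00:00Z -->
      <!-- Day: characters 9-10, with number() stripping any leading zero -->
      <xsl:value-of select="number(substring($iso, 9, 2))"/>
      <xsl:text> </xsl:text>
      <!-- Month name: characters 6-7 -->
      <xsl:choose>
        <xsl:when test="substring($iso, 6, 2) = '01'">January</xsl:when>
        <xsl:when test="substring($iso, 6, 2) = '02'">February</xsl:when>
        <xsl:when test="substring($iso, 6, 2) = '03'">March</xsl:when>
        <xsl:when test="substring($iso, 6, 2) = '04'">April</xsl:when>
        <xsl:when test="substring($iso, 6, 2) = '05'">May</xsl:when>
        <xsl:when test="substring($iso, 6, 2) = '06'">June</xsl:when>
        <xsl:when test="substring($iso, 6, 2) = '07'">July</xsl:when>
        <xsl:when test="substring($iso, 6, 2) = '08'">August</xsl:when>
        <xsl:when test="substring($iso, 6, 2) = '09'">September</xsl:when>
        <xsl:when test="substring($iso, 6, 2) = '10'">October</xsl:when>
        <xsl:when test="substring($iso, 6, 2) = '11'">November</xsl:when>
        <xsl:otherwise>December</xsl:otherwise>
      </xsl:choose>
      <xsl:text> </xsl:text>
      <!-- Year: characters 1-4 -->
      <xsl:value-of select="substring($iso, 1, 4)"/>
    </xsl:template>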
One of the nice things about Atom is that you can declare whether text constructs (e.g. title, content) are text (good if there’s to be no markup), HTML encoded as text (easiest for most blog pipelines), or HTML as XML (ideal for XSLT pipelines).
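An illustrative fragment showing the three type values; a real entry carries one content element plus the required id and updated, trimmed here for brevity:

    <entry xmlns="http://www.w3.org/2005/Atom">
      <!-- type="text": plain text, no markup interpreted -->
      <title type="text">A plain title</title>
      <published>2025-08-25T07:00:00Z</published>
      <!-- type="html": HTML escaped inside a text node (easiest for most blog pipelines) -->
      <content type="html">&lt;p&gt;Hello, &lt;em&gt;world&lt;/em&gt;&lt;/p&gt;</content>
      <!-- type="xhtml": real XML elements in a wrapper div, which XSLT can walk directly -->
      <content type="xhtml">
        <div xmlns="http://www.w3.org/1999/xhtml"><p>Hello, <em>world</em></p></div>
      </content>
    </entry>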
I got my file extensions mixed up, thought this was going to be a "Use M$ Excel as an IDE" type post.
I guess I just don't get the point. In order for the page to load, it needed to make four sequential round trips to the server, which ended up loading slower than my bloated JavaScript SPA framework blog on a throttled connection. I don't really see how this is preferable to HTML, especially when there is a wealth of tools for building static blogs. Is it the no-build aspect of it?
It did make all those requests, but only because the author set up caching incorrectly. If the cache headers were to be corrected, site.xsl, pages.xml, and posts.xml would only need to be downloaded once.
The cache headers are correct: you can't indefinitely cache those files because they might change. Maybe you could get away with a short cache time, but you can't cache them indefinitely like you can a JavaScript bundle.
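The middle ground both replies are circling is a short TTL plus conditional revalidation, so repeat fetches of site.xsl and posts.xml cost a 304 Not Modified instead of a full download. Illustrative response headers only:

    Cache-Control: max-age=300, must-revalidate
    ETag: "v42"

A later request carrying If-None-Match: "v42" lets the server answer 304 with an empty body, so the waterfall still happens but each hop after the first is cheap.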
Not to mention that on a more involved site, each page will probably include a variety of components. You could end up with deeper nesting than just four levels, and each page could reveal unique components, further increasing load times.
I don't see much future in an architecture that inherently waterfalls in the worst way.