I implemented something similar to the compositional regular expressions feature described here for JavaScript a while ago (independently, so semantics may not be the same), and it is one of the libraries I find myself most often bringing into other projects years later. It gets you a tiny bit closer to feeling like you have a first-class parser in the language. Here is an example of implementing media type parsing with regexes using it: https://runkit.com/tolmasky/media-type-parsing-with-template...
"templated-regular-expression" on npm, GitHub: https://github.com/tolmasky/templated-regular-expression
To be clear, programming languages should just have actual parsers and you shouldn't use regular expressions for parsers. But if you ARE going to use a regular expression, man is it nice to break it up into smaller pieces.
"Actual parsers" aren't powerful enough to be used to parse Raku.
Raku regular expressions combined with grammars are far more powerful and, if written well, easier to understand than any "actual parser". In order to parse Raku with an "actual parser", it would have to allow you to add and remove things from it as it is parsing. Raku's "parser" does this by subclassing the current grammar, adding or removing things in the subclass, and then reverting back to the previous grammar at the end of the current lexical scope.
In Raku, a regular expression is another syntax for writing code. It just has a slightly different default syntax and behavior. It can have both parameters and variables. If the regular expression syntax isn't a good fit for what you are trying to do, you can embed regular Raku syntax to do whatever you need to do and return right back to regular expression syntax.
It also has a much better syntax for doing advanced things, as it was completely redesigned from first principles.
The following is an example of how to match at least one `A` followed by exactly that number of `B`s and exactly that number of `C`s.
(Note that bare square brackets [] are for grouping, not for character classes.)
    my $string = 'AAABBBCCC';

    say $string ~~ /
        ^

        # match at least one A
        # store the result in a named sub-entry
        $<A> = [ A+ ]

        {} # update result object

        # create a lexical var named $repetition
        :my $repetition = $<A>.chars(); # <- embedded Raku syntax

        # match B and then C exactly $repetition times
        $<B> = [ B ** {$repetition} ]
        $<C> = [ C ** {$repetition} ]

        $
    /;

Result:

    「AAABBBCCC」
     A => 「AAA」
     B => 「BBB」
     C => 「CCC」
The result is actually a very extensive object that has many ways to interrogate it. What you see above is just a built-in human readable view of it.
In most regular expression syntaxes to match equal amounts of `A`s and `B`s you would need to recurse in-between `A` and `B`. That of course wouldn't allow you to also do that for `C`. That also wouldn't be anywhere as easy to follow as the above. The above should run fairly fast because it never has to backtrack, or recurse.
When you combine them into a grammar, you will get a full parse-tree. (Actually you can do that without a grammar, it is just easier with one.)
To see an actual parser I often recommend people look at JSON::Tiny::Grammar https://github.com/moritz/json/blob/master/lib/JSON/Tiny/Gra...
Frankly, from my perspective much of the design of "actual parsers" is a byproduct of limited RAM on early computers. The reason there is a separate tokenization stage was to reduce the amount of RAM used for the source code so that further stages had enough RAM to do the semantic analysis and eventual compiling of the code. It doesn't really do that much to simplify any of the further stages in my view.
The JSON::Tiny module from above creates the native Raku data structure using an actions class, as the grammar is parsing. Meaning it is parsing and compiling as it goes.
I imagine this could be understood as making use of a monad. Right?
The main problem with generalised regexes is that you can't match them in linear time worst-case. I'm wondering if this is addressed at all by Raku.
A "monad" is not really a "thing" you can make use of, because a monad is a type of thing. Think "iterator"; an iterator is not a thing itself, it is a type of thing that things can be.
There is probably a monad you could understand this as being, a specific one, but "monad" itself is not a way to understand it.
And just as you can understand any given Iterator by simply understanding it directly, whatever "monad" you might use to understand this process can be simply understood directly without reference to the "monad" concept.
> I imagine this could be understood as making use of a monad. Right?
Can you clarify what you mean? Do you expect the concept of "monad" to help explain Raku grammars?
Yes. Compare it to the List monad or Parsec.
- There is a natural from-to conversion of the Functional Parsers (FP) monad (as in Parsec) to Extended Backus-Naur Form (EBNF).
- Similarly, EBNF can be applied to Raku grammars.
- Hence, the representation of Raku grammars in the FP monad is doable, at least for a certain large enough set of Raku grammars.
Why are they called regular expressions if they can parse non-regular languages?
It's gradually got so. <https://youtu.be/JIlpjJnc6qY?t=54>
Literally, Larry Wall was adding things to regexes all the way back before the release of Perl 2.
The word 'regular' comes from the mathematical roots of automata and finite state machines.
Which have a one-to-one correspondence with regular languages, so this isn't actually an answer to the question.
I don't think we disagree here. To clarify, my statement about using "actual parsers" over regexes was more directed at my own library than Raku. Since I had just posted a link on how to "parse" media types using my library, I wanted to immediately follow that with a word of caution of "But don't do that! You shouldn't be using (traditional) regexes to parse! They are the wrong tool for that. How unfortunate it is that most languages have a super simple syntax for (traditional/PCRE) regexes and not for parsing." I had seen in the article that Raku had some sort of "grammar" concept, so I was kind of saying "oh it looks like Raku may be tackling that too."
Hopefully that clarifies that I was not necessarily making any statement about whether or not to use Raku regexes, which I don't pretend to know well enough to qualify to give advice around. Just for the sake of interesting discussion however, I do have a few follow up comments to what you wrote:
1. Aside from my original confusing use of the term "regexes" to actually mean "PCRE-style regexes", I recognize I also left a fair amount of ambiguity by referring to "actual parsers". Given that there is no "true" requirement to be a parser, what I was attempting to say is something along the lines of: a tool designed to transform text into some sort of structured data, as opposed to a tool designed to match patterns. Again, from this alone, seems like Raku regexes qualify just fine.
2. That being said, I do have a separate issue with using regexes for anything, which is that I do not think it is trivial to reason about the performance characteristics of regexes. IOW, the syntax "doesn't scale". This has already been discussed plenty of course, but suffice it to say that backtracking has proven undeniably popular, and so it seems an essential part of what most people consider regexes. Unfortunately this can lead to surprises when long strings are passed in later. Relatedly, I think regexes are just difficult to understand in general (for most people). No one seems to actually know them all that well. They venture very close to "write-only languages". Then people are scared to ever make a change in them. All of this arguably is a result of the original point that regexes are optimized for quick and dirty string matching, not to power gcc's C parser. This is all of course exacerbated by the truly terrible ergonomics, including not being able to compose regexes out of the box, etc. Again, I think you make a case here that Raku is attempting to "elevate" the regex to solve some if not all of these problems (clearly not only composable but also "modular", as well as being able to control backtracking, etc.) All great things!
I'd still be apprehensive about the regex "atoms" since I do think that regexes are not super intuitive for most people. But perhaps I've reversed cause and effect and the reason they're not intuitive is because of the state they currently exist in in most languages, and if you could write them with Raku's advanced features, regexes would be no more unintuitive than any other language feature, since you aren't forced to create one long unterminated 500-character regex for anything interesting. In other words, perhaps the "confusing" aspects of regexes are much more incidental to their "API" vs. an essential consequence of the way they describe and match text.
3. I'd like to just separately point out that many aspects of what you mentioned was added to regexes could be added to other kinds of parsers as well. IOW, "actual parsers" could theoretically parse Raku, if said "actual parsers" supported the discussed extensions. For example, there's no reason PEG parsers couldn't allow you to fall into dynamic sub-languages. Perhaps you did not mean to imply that this couldn't be the case, but I just wanted to make sure to point out that these extensions you mention appear to be much more generally applicable than they are perhaps given credit for by being "a part of regexes in Raku" (or maybe that's not the case at all and it was just presented this way in this comment for brevity, totally possible since I don't know Raku).
I'll certainly take a closer look at the full Raku grammar stuff, since I've written lots of parser extensions that I'd be curious to know have analogues in Raku or might make sense to add to it, or that might alternatively suggest interesting ideas to take from Raku. I will say that RakuAST is something I've always wanted languages to have, so that alone is very exciting!
I use Raku in production. It's the best language to deal with text because building parsers is so damn nice. I'm shocked this isn't the top language to create an LLM text pipeline.
Very late to the thread, but I was wondering if you knew of a good example of Raku calling an API over https, polling the API until it returns a specific value?
Here is one way:
    use HTTP::UserAgent;
    use JSON::Fast;

    my $url = 'https://api.coindesk.com/v1/bpi/currentprice.json';
    my $ua = HTTP::UserAgent.new;
    my $total-time = 0;

    loop {
        my $response = $ua.get($url);
        if $response.is-success {
            my $data = from-json $response.content;
            my $rate = $data<chartName>;
            say "Current chart name: $rate";
            #last if $rate eq 'Bitcoin';
            last if $total-time ≥ 16;
        }
        else {
            say "Failed to fetch data: {$response.status-line}";
        }
        sleep 3; # Poll every 3 seconds
        $total-time += 3;
    }
(Tweak / uncomment / rename the $rate variable assignments and checks.)
Wow, huge thanks, that's super helpful :)
Sure, good luck!
Do you use any of Raku's LLM packages? If yes, which ones?
Wow. Sign me up for leaving the industry before I ever have to maintain a Raku codebase.
Funny, cause reading that blog post made me want to quit my job and find a raku team to work with. Maybe I'm still too naive :)
Same. And it's a bit funny because I'm usually against unnecessary complexity, and here's a language that seems to have embraced it and become a giant castle of language features that I could spend weeks studying.
Maybe it's because Raku's features were actually well thought-out, unlike the incidental "doesn't actually buy me anything, just makes the code hard to deal with" complexity I have to deal with at work day in and day out.
Maybe if Java had a few of these features back in the day people wouldn't have felt the need to construct these monstrous annotation-soup frameworks in it.
Java inspired the Design Patterns book.
Every Design Pattern is a workaround for a missing feature. What that missing feature is, isn't always obvious.
For example the Singleton Design Pattern is a workaround for missing globals or dynamic variables. (A dynamic variable is sort of like a global where you get to have your own dynamic version of it.)
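To make that concrete, here is a minimal sketch of a Raku dynamic variable (the `$*LOG-PREFIX` variable and both subs are made up for illustration): each call chain gets its own value for the "global", so there is nothing for a Singleton to smuggle around.

    # log-line looks up the dynamic variable at call time, not at definition time
    sub log-line($msg) { say "[$*LOG-PREFIX] $msg" }

    sub with-prefix($prefix, &code) {
        my $*LOG-PREFIX = $prefix;  # dynamic: visible to everything called from here
        code();
    }

    with-prefix 'web',    { log-line 'request handled' };  # [web] request handled
    with-prefix 'worker', { log-line 'job finished' };     # [worker] job finished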
If Raku has a missing feature, you can add it by creating a module that modifies the compiler to support that feature. In many cases you don't even need to go that far.
Of course there are far fewer missing features in Raku than Java.
If you ever needed a Singleton in Raku (which you won't) you can do something like this:
    role Singleton {
        method new (|) {
            once callsame
        }
    }

    class Foo does Singleton {
        has $.n is required
    }

    say Foo.new( n => 1 ).n;
    say Foo.new( n => 2 ).n;
That prints `1` twice.
The way it works is that the `new` method in Singleton always gets called because it is very generic as it has a signature of `:(|)`. It then calls the `new` method in the base class above `Foo` (`callsame` "calls" the next candidate using the "same" arguments). The result then gets cached by the `once` statement.
There are actually a few limitations to doing it this way. For one, you can't create a `new` method in the actual class, or any subclasses. (Not that you need to anyway.) It also may not interact properly with other roles. There are a variety of other esoteric limitations. Of course none of that really matters because you would never actually need, or want to use it anyway.
Note that `once` basically stores its value in the next outer frame. If that outer frame gets re-entered it will run again. (It won't in this example as the block associated with Foo only gets entered into once.) Some people expect `once` to run only once ever. If it did that you wouldn't be able to reuse `Singleton` in any other class.
What I find funny is that while Java needs this Design Pattern, it is easier to make in Raku, and Raku doesn't need it anyway.
If you mean Design Patterns: Elements of Reusable OO software by Gamma et al, it was published in 1994. Java came out in 1995.
The Patterns book was originally a C++ text.
All programming languages have design patterns, they aren’t patterns as in “templates you should follow”, they are patterns as in “concepts you will see frequently for solving classes of problems”.
The Design Patterns book was a bestiary not a guide to replacement features.
Java does have a particular blend of features and lack of features that has led to the bloated, boilerplate-laden, inflationary framework ecosystem around it that is worse than I've seen in any other language.
Lack of stack-allocated structs leads to object pooling.
Lack of named arguments combined with the tediousness of writing `this.x = x` over and over, along with the reflection system that Java does provide, leads to IoC frameworks that muck about in your private variables and/or generate objects "for you"[1].
Lack of a way to mark object trees as immutable short of duplicating all the constituent classes leads to everyone generally assuming that everything is and moreover should be mutable, necessitating complex systems for isolating changes to object graphs (e.g. the way Hibernate supports transactions).
Etc, etc. I wrote a list of these things somewhere.
[1] "It does X for you" is a phrase I've heard too many times from coworkers trying to sell me on some framework that we didn't need. "Oh yeah, it does an easy job for me, and in exchange I have an incomprehensible spaghetti mess to deal with, thanks." Being the only person in the room who notices the complexity monster growing bigger and bigger is a never-ending source of frustration.
Record classes alleviate the pain of writing immutable data object classes but are unfortunately late to the party.
The feature that Observer would correspond to is simply Observers. Some of the patterns may happen to correspond to different names, but they don't all need different names or weird mappings, many of them are just "and now it's a feature instead of a set of classes".
That said, while the point "a design pattern is a feature missing from a language" has some validity on its own terms, the implied "and therefore a language is deficient if it has design patterns because those could be features" is nonsense. A language has some set of features. These features have an exponential combination of possibilities, and a smaller, but still exponential, set of those are useful. For every feature one lifts from "design pattern" and tries to put into the language, all that happens is an exponential number of other "features" are now closer to hand and are now "design patterns". This process does not end, and this process does not even complete enumerating all the possible useful patterns before the language has passed all human ability to understand it... or implement it.
Moreover, the argument that "all design patterns should be lifted to features" ignores the fact that features carry costs. Many kinds of costs. And those costs generally increase the cost of all the features around them. The costs become overwhelming.
"Design patterns are really Band-Aids for missing language features" comes from a 1996 Peter Norvig presentation[0][1]:
> Some suggest that design patterns may be a sign that features are missing in a given programming language (Java or C++ for instance). Peter Norvig demonstrates that 16 out of the 23 patterns in the Design Patterns book (which is primarily focused on C++) are simplified or eliminated (via direct language support) in Lisp or Dylan.
In a language with a sufficiently expressive object system or other features such as macros we could turn the Observer pattern into a library. To get objects to participate in the pattern we then just somehow declare that they are observers or subjects. Then they are endowed with all the right methods. Simple inheritance might be used, but if your Observer or Subject are already derived then you need multiple inheritance to inject the pattern to them. Or some other way of injecting that isn't inheritance. In C++, the CRTP might be used.
Language features don't necessarily make the design pattern's concept go away, just the laborious coding pattern that must be executed to instantiate the pattern.
Writing a design pattern by hand is like writing a control flow pattern by hand in a machine language. When you work in assembly language on some routine, you may have the concept of a while loop in your head. That's your design pattern for the loop. The way you work the while loop pattern into code is that you write testing and branching instructions to explicit labels, in a particular, recognizable arrangement. A macro assembler could give you something more like an actual while loop and of course higher level languages give it to you. The concept doesn't go away just the coding pattern.
The meaning of "pattern" in the GoF book refers not only to concepts like having objects observe each other, but also refers to the programmer having to act as a human compiler for translating the concept into code by following a detailed recipe.
Because GoF design patterns are all object-based, they're able to use naming for all the key parts coming from the recipe. When you read code based on one of these patterns, the main reason why you can see the pattern is that it uses the naming from the book. If you change your naming, it's a lot harder to recognize the code as being an instance of a pattern.
What captures my mind is a little blend of radical language design that I only find in Haskell, for instance, without the type-theoretic baggage, and a bit of 'that's just a fun perl idiom' from when you used to hack around on your old linux box.
I think it has to do more with familiarity than complexity. You could have a good understanding of features like the ones showcased in the blogpost, but it could take you a minute of staring at a line to parse it if someone uses it in a way you're unfamiliar with. Doing that for potentially hours on end would be a pain.
It's definitely something I'd write for fun/personal projects but can't imagine working with other people in. On a side note, I believe this is where go's philosophy of having a dead simple single way of doing things is effective for working with large teams.
Some of them seem ok, e.g. ignoring whitespace in regexes by default is a great move, and the `*` as a shorthand single lambda argument is neat.
But trust me, if you ever have to actually work with them you will find yourself cursing whoever decided to riddle their code with <<+>>, or the person who decided `* + *` isn't the same as `2 * *` (that parser must have been fun to write!)
That's a fair reaction to the post if you haven't looked at any normal Raku code.
If you look at any of the introductory Raku books, it seems a LOT like Python with a C-like syntax. By that I mean the syntax is more curly-brace oriented, but the ease of use and built-in data structures and OO features are all very high level stuff. I think if you know any other high level scripting language that you would find Raku pretty easy to read for comparable scripts. I find it pretty unlikely that the majority of people would use the really unusual stuff in normal every day code. Raku is more flexible (more than one way to do things), but it isn't arcane looking for the normal stuff I've seen. I hope that helps.
Sure, but the fact that weird stuff is possible means that someone, at some point, will try to use it in your codebase. This might be prevented if you have a strong code review culture, but if the lead dev in the project wants to use something unusual, chances are no one will stop them. And once you start...
This is true. I’ve written overly-clever, indecipherably dense code in many different languages. But some languages seem to practically encourage that kind of thing. Compare a random sampling of code from APL, Scala, Haskell, Perl, Clojure, C, and Go. You’ll probably find that average inscrutability varies widely between various language pairs.
Inscrutability to whom? I'm confident someone with 10 years of production Haskell experience will do a better job of reading production Haskell code than a comparable situation with C.
But then again, maybe that was what you were saying.
Same as Perl, nobody wants to maintain it, but it's extremely fun to write. It has a lot of expression.
You can see that in Raku's ability to define keyword arguments with a shorthand (e.g. `:global(:$g)`, as well as assuming a value of `True`, so you can just call `match(/foo/, :g)` to get a global regex match). Perl has tons of this stuff too, all aimed at making the language quicker and more fun to write, but less readable for beginners.
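For anyone unfamiliar, a sketch of how that alias shorthand reads in a signature (`match-ish` is a hypothetical sub, not the real built-in):

    # :global(:$g) declares a named parameter $g that answers to both :g and :global
    sub match-ish(:global(:$g) = False) {
        say $g ?? 'global match' !! 'single match';
    }

    match-ish(:g);       # global match  (:g is shorthand for g => True)
    match-ish(:global);  # global match
    match-ish;           # single match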
Many of the features that make Perl harder to write cleanly have been improved in Raku.
Frankly I would absolutely love to maintain a Raku codebase.
I would also like to update a Perl codebase into being more maintainable. I'm not sure how much I would like to actually maintain a Perl codebase because I have been spoiled by Raku. So I also wouldn't like to maintain one in Java, Python, C/C++, D, Rust, Go, etc.
Imagine learning how to use both of your arms as if they were your dominant arm, and doing so simultaneously. Then imagine going back to only using one arm for most tasks. That's about how I feel about using languages other than Raku.
It's not that reading Perl is hard; the _intent_ of the operations is often hard/unclear.
Yes, it's nice to write dense fancy code. However, something very boring to write like PHP is a lot of "loop over this bucket and do something with the fish in the bucket, afterwards, take the bucket and throw it into the bucket pile", which mirrors a 'human follows these steps' style.
In the intent department, I have had more troubles with AbstractClassFactorySingletonDispatcher type Java code, add to that dependency injection magic/madness.
I'd rather maintain Perl any day than 30 classes of Java code just to build a string.
I once heard of a merger between a company that used Java, and another one that used Perl.
After that merger, both teams were required to make a similar change.
If I remember right, the Perl team was done before the Java team finished the design phase. Or something like that.
The best aspect of Java is that it is difficult to write extremely terrible code.
The worst aspect is that it is difficult to write extremely awesome code. (If not impossible.)
The thing is you can choose to write Perl in a way that doesn't suck. The problem with TMTOWTDI is that with 18 people in a code base... well, it's best to have a set of conventions when writing Perl. Let's all agree to use 5.x.x with these features, for example.
These days you can TMTOWTDI in Python as well though.
The TMTOWTDI argument was valid in the very early stages of the Perl vs Python debate, somewhere between 2000 and 2006.
These days Python is just Perl with mandatory tab indentation. Python is no longer that small language with a small set of syntactic features. C++/Java etc. have meanwhile gotten even more TMTOWTDI-heavy over the years.
I don't really work with Python often enough, but as for PHP there's usually one boring way to do it. We generally eschew more fun stuff like array operations, because for loops work and the intent is clear.
I'd rather have a compiler+IDE-supported testing framework that doesn't require the tested code to be prepared for being tested in any way. Almost all boring languages are fine at writing straightforward code without ceremony, even Java.
I am not totally convinced (well, not at all convinced, really) that the OOP revolution made code better, simpler, or easier to follow than procedural code.
I suppose the argument ended up being a philosophical discussion about where state was held in a running program (hard to tell in procedural code, perhaps, and easier to tell in OOP), but after 20+ years of Java I wonder if the baby hadn't disappeared down the plughole with the bathwater.
Things do not require to be useful or to increase revenue in order for them to be enjoyable. If the only reason you ever do something is because you get material wealth out of it, are you even making choices or are you a perfect rational actor as described in textbooks?
Things are allowed to exist and be enjoyed on the sole basis that they are enjoyable
Oh, I'm not wealth-motivated at all. My regrets are about the uselessness of that time itself. E.g. I could have learned ML instead and done nothing useful with it, rather than doing nothing useful with my Perl knowledge today.
I could enjoy the one kind of knowledge today without ever having a monetary interest in it, but instead I have another kind of knowledge which is completely useless even for enjoyment.
It's strange that people are saying the same about maintaining code bases written with AI assistance.
I'm guessing it's going to be a generational thing now. A whole older generation of programmers will just find themselves out of place in what is now a normal work setup for the current generation.
Some of these are halfway familiar. Hyper sounds like a more ad-hoc version of something from recursion-schemes, and * as presented is somewhat similar to Scala _ (which I love for lambdas and think every language should adopt something similar).
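For comparison, a quick sketch of Raku's `*` (Whatever-star) in lambda position:

    my &double = * × 2;        # WhateverCode: a one-argument closure
    say double(21);            # 42
    say (1..10).grep(* %% 3);  # (3 6 9), i.e. keep the multiples of three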
Speed is still a major issue with Raku. Parsing a log file with a regex is Perl's forte but the latest Raku still takes 6.5 times as long as Python 3.13 excluding startup time.
You'd need to qualify that with an example. In my experience some things are faster in Raku and some are slower, so declaring that "Raku takes 6.5 times as long as Python 3.13" is pretty meaningless without seeing what it's slower at.
I specified the use case so why "meaningless"? Here's the code:
Raku

    for 'logs1.txt'.IO.lines -> $_ { .say if $_ ~~ /<<\w ** 15>>/; }

Python

    from re import search

    with open('logs1.txt', 'r') as fh:
        for line in fh:
            if search(r'\b\w{15}\b', line):
                print(line, end='')
I believe that it only tries to DWIM with arithmetic and geometric sequences, and gives up otherwise. Of course there's nothing keeping you from writing a module that would override `infix:<...>` in the local scope with a lookup in OEIS.
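To make the DWIM boundary concrete, a small sketch (outputs shown as comments):

    say (1, 2, 4 ... 64);        # (1 2 4 8 16 32 64)  geometric, deduced
    say (1, 3, 5 ... 11);        # (1 3 5 7 9 11)      arithmetic, deduced
    say (1, { $_ × 2 } ... 64);  # the same series, with the rule spelled out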
You get either a compile time instantiated infinite lazy sequence, or a compilation error.
My personal bar for "amount of clever involved" is fairly high when the clever either does exactly what you'd expect or fails, and even higher when it does so at compile time.
(personal bars, and personal definitions of "exactly what you'd expect" will of course vary, but I think your brain may have miscalibrated the level of risk before it got as far as applying your preferences in this particular case)
Programming is not just a matter of slinging syntax at a problem. Good programmers need to develop a mental model of how a language works.
Needing to do geometric sequences as syntax like that is clearly a parlor trick, a marginal use case. What goes through a good programmer's mind, with experience of getting burned by such things over the years, is "If Raku implements this parlor trick, what other ones does it implement? What other numbers will do something I didn't expect when I put them in? What other patterns will it implement?"
Yes, you can read the docs, and learn, but we also know this interacts with all sorts of things. I'm not telling you why you should be horrified, I'm explaining why on first glance this is something that looks actually quite unappealing and scary to a certain set of programmers.
It actually isn't my opinion either. My opinion isn't so much that this is scary on its own terms, but just demonstrates it is not a language inline with any of my philosophies.
It's a parlor trick because something like "1, * × 2 ..." is much more sensible. Heck, it isn't even longer, if we're talking about saving keystrokes. It's still more syntax than I'm looking for from a language, but "initial, update rule, continue infinitely" does not give me that immediate "oh wtf, what other magic is going to happen with other values?" reaction I describe from trying to divine update rules from raw numbers.
It is also immediately obvious how to use this for other patterns, immediately obvious how to compose it, and just generally experiences all the benefits things get from not being special cases.
> The detecting of increments has been in Perl for ages so that's not new.
But this is detecting that the increment is multiplicative, rather than additive. It might seem like a natural next step, but, for example, I somewhat suspect (and definitely hope) that Raku wouldn't know that `(1, 1, 2...*)` is a (shifted) list of Fibonacci numbers.
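(For the record, the explicit spelling hands the sequence operator a two-argument rule rather than hoping for deduction:)

    say (1, 1, * + * ... *)[^10];  # (1 1 2 3 5 8 13 21 34 55)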
raku kinda puts a bunch of parts of itself together during startup, not entirely unlike Julia.
The sheer dynamism of the thing makes pre-baking that non-trivial, also not entirely unlike Julia.
I seem to recall chatting with the devs on IRC a few years ago, and there seemed to be more than one viable way to potentially fix it, but they all seemed to me to share the property of needing a lot of effort, during which nothing would work at all, before you got something to show for it - and a decent chance that what you got to show after all that time was "welp, here's why this one can't work" - which is a really heavy lift in terms of somebody finding the motivation to try it in the first place.
So tl;dr "yes, I dislike the startup cost, no, I don't expect it to change soon and I don't think it's a black mark against the team that it probably won't."
These are all very clever, but what's the use case? I'm not saying there isn't one, I just don't know what it is! Not to speak of the dead, but Perl was utilitarian: it was built to solve problems. From my point of view, these are solutions to problems I've never had.
Yea, I think he has a knack for that. If you want some more I recommend a talk he gave a few years back called “Three Little Words” <https://www.youtube.com/watch?v=e1T7WbKox6s>.
Damian is an excellent writer and communicator for sure. But I don't know if it answers the question of what you would use these features in Raku for. If one wanted to compute e to higher precision, I feel like one would use a DSL. But we also don't need to compute e presently.
No, we don't need to compute e very often; its value is pretty well known. The article is just showing off Raku (or Perl 6 as it was then known) by writing a small program of moderate complexity that still manages to show off some of Raku's interesting features. Computing approximations of e is merely an interesting exercise; it's not the point.
The question that justinator asked was what good uses Raku’s indefinite series have. This article points out that different ways of approximating e grow at different rates, so it is appropriate to associate a different range of trial values with each of those methods. Dörrie's bounds uses powers of 10 as shown. Others use powers of 2. Newton’s method uses sequential trial values, since it grows really fast:
    #| Newton's series
    assess -> \k=0..∞ { sum (0..k)»!»⁻¹ }
And several methods compute approximations in a single step, so they don’t take a trial value at all:
These are a lot of fun, but of course they can also be profound:
    #| From Euler's Identity
    assess { (-1+0i) ** (π×i)⁻¹ }
For those who are interested, the article shows off a lot of obvious syntactic features like superscripts and hyperoperators, but there are also things like classes and roles and new operators as well. It really is a nice tour.
If you can read Lisp (Scheme) syntax, I think SICP [0] has the clearest demonstration of the utility of lazy streams. The problem it presents (calculating pi) is, admittedly, rather academic; but the concept of having values that evolve over "time" as first-class entities that you can abstract over is practically useful. I think that all of reactive programming (React, Angular, etc) is closely related to this idea (perhaps even originates from it), although the implementation and applications differ.
It's a long and effortful read, but the payoff is worth the effort.
EDIT: I think the Perl article posted in a sibling comment uses essentially the same example (but for calculating e instead of pi), although I only skimmed it.
I hadn't realized that SICP covered using lazy streams for calculating pi. That reminds me of this article I read recently about using lazy lists in Haskell to calculate pi by means of a few different algorithms.
I've heard that the best way to solve a hard problem is to create a language in which solving that problem would be easy.
Basically creating a Domain Specific Language.
Raku isn't necessarily that language. What it is, is a language which you can modify into being a DSL for solving your hard problem.
Raku is designed so that easy things are easy and hard things are possible. Of course it goes even farther, as some "hard" things are actually easy. (Hard from the perspective of trying to do it in some other language.)
Let's say you want a sequence of primes. At first you think of the sieve of Eratosthenes.
Since I am fluent in Raku, I just write this instead:

    ( 2..∞ ).grep( *.is-prime )

This has the benefit that it doesn't generate any values until you ask for them. Also, if you don't do anything to cache the values, they will be garbage collected as you go.
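(A quick usage sketch of that same sequence: bind it lazily and take what you need.)

    my @primes = lazy ( 2..∞ ).grep( *.is-prime );
    say @primes[^10];  # (2 3 5 7 11 13 17 19 23 29)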
The most important Raku features are Command Line Interface (CLI) and grammars.
CLI support is a _usual_ feature -- see "docopt" implementations (and adoption), for example. But CLI is built-in in Raku and nice to use.
As for the grammars -- it is _unusual_ for a programming language to have grammars as "first class citizens" and to give the ability to create (compose) grammars using Object-Oriented Programming.
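For a flavor, here is a minimal sketch of a grammar (the `KeyValue` name and its rules are illustrative, not from any library):

    grammar KeyValue {
        token TOP   { <pair>+ %% \n }     # pairs separated by newlines
        token pair  { <key> '=' <value> }
        token key   { \w+ }
        token value { \S+ }
    }

    my $m = KeyValue.parse("name=Raku\nyear=2019\n");
    say $m<pair>[0]<key>;  # 「name」

Because a grammar is just a class, another grammar can inherit from it and override individual tokens.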
I’ve followed this project for years, and while it’s interesting, I think it’s really a shame that Perl 6 seemed to have been so badly waylaid by this sojourn into the looking-glass.
Junctions introduce a degree of non-determinism to the language. Think Prolog variables. Junctions allow you to talk about a set of solutions without having to mind how they are kept together or how the operations are distributed between members of the Junction. It's especially convenient when you search for something and that something can be a complicated series of logical expressions: you can pack them all in a single Junction and treat as a first-class object. It's a little hard to explain without giving examples, but it really has a lot of uses :)
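A small sketch of that packing-alternatives-into-one-value idea:

    my $answer = 'that';

    # eq distributes over the junction of alternatives:
    if $answer eq 'this' | 'that' | 'other' {
        say 'matched';  # matched
    }

    # `so` collapses a junction to a plain Bool:
    say so 'cat' eq 'this' | 'that' | 'other';  # False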
PowerShell does something similar with their pipelines, see e.g. the answer [0] and the question it answers. Something similar happens in Bash: $x refers not to the string $x, but to the list of the strings that you get by splitting the original string by IFS.
And yes, this feature is annoying and arguably is a mis-feature: containers shall not explode when you touch them.
> I’m not sure that I see the connection that you are making here. Can you elaborate?
Back when I had to write PowerShell scripts, I constantly found that piping an array to some command would almost always make that command be invoked once for every item in the array, instead of being invoked once and given the whole array as a single input. Sometimes it's the latter that you need, so the workaround is to make a new, single-element array with the original array as its only element, and pipe this into the command.
The connection to junctions is still not very clear to me, however. A junction doesn't really have any correlation to a single-element version of a list. As a super-position of potential values, it doesn't have many counterparts in other languages.
For example, part of the original concept of junctions involved parallel evaluation of junction elements in an expression but that turned out to be less useful than hoped for in practice.
An array in Powershell, when piped to a command, automatically get this command invoked for each of the array's element and the results are combined into a new output array, which can be piped further.
A junction in Raku, when given as an argument to a function, automatically applies this function to each of the junction's element and the results are combined into a new junction as a result.
I don't know, seems like a pretty clear parallel to me. And since PowerShell's behaviour is quite often undesirable, I agreed with the original commenter that the junctions could perhaps also be somewhat annoying to use instead of just working with normal lists: after all, `s in ('this', 'that', 'other')` is about just as clear as `$string ~~ "this"|"that"|"other"` but doesn't require support for magical self-destructuring containers in the language.
Since I rarely use junctions myself -- and never in the manner of applying non-boolean operations across their values -- the connection to automatic function application wasn't clear to me from your initial comment.
I can see the parallel now, so thank you for clarifying.
EDIT: On further reflection, another reason that the connection escaped me is that even after applying functions across a junction’s values, the junction is never useful (without significant and unnecessary effort) for accessing any individual values that it contains.
Anyway, thanks for sharing your thoughts and bearing with me.
I dream of a day when one can post a Raku article on HN and not encounter a comments section full of digressions into discussing Perl.
There is some sense to it by means of comparison, but the constant conflation of the two becomes tiresome.
But in that spirit, let's compare:
The `=()=` "operator" is really a combination of Perl syntax[^1] that achieves the goal of converting list context to scalar context. This isn't necessary for determining the length of an array (`my $elems = @array` or, in favor of being more explicit, `my $elems = 0+@array`). It is, however, useful in Perl for counting more complex list contexts on the RHS.
Let's use some examples from its documentation to compare to Raku.
Perl:

    my $n =()= "abababab" =~ /a/g;
    # $n == 4

Raku:

    my $n = +("abababab" ~~ m:g/'a'/);
    # $n == 4

    # Alternatively...
    my $n = ("abababab" ~~ m:g/'a'/).elems;
That's it. `+` / `.elems` are literally all you ever need to know for gathering a count of elements. The quotes around 'a' in the regex are optional but I always use them because I appreciate denoting which characters are literal in regexes (Note also that the regex uses the pair syntax mentioned in OP via `m:g`. Additional flags are provided as pairs, eg `m:g:i`).
Another example.
Perl:

    my $count =()= split /:/, "ab:ab:ab";
    # $count == 3

Raku:

    my $count = +"ab:ab:ab".split(':');
    # $count == 3
While precedence can at times be a conceptual hindrance, it's also nice to save some parentheses where it is possible and legible to do so. Opinions differ on these points, of course. Note also that `Str.split` can take string literals as well as regexes.
To me it’s not unlike spending all of a thread about Clojure commenting on Java or Common Lisp without even bothering to mention or contrast to Clojure.
There are connections to both but that doesn’t necessarily make them topical.
I disagree that discussing a previous language by a designer (often in terms that seem to conflate equivalence) is usefully relevant to discussion of a different language by that designer.
Note that I have never said that there is zero utility. I just find it tiresome to encounter comments about Perl syntax as if it is automatically useful or interesting to discussions about Raku.
Which is why I took the time to provide an example of what (I would consider) an actually relevant mention of Perl syntax.
> It is absolutely mindbending to me that all of this language development has happened on top of Perl, of all things.
Why "of all things?" The Perl philosophy of TIMTOWTDI, and Wall's interest in human-language constructs and the ways that they could influence programming-language constructs, seem to make its successor an obvious home for experiments like this.
It has the same whimsy and DWIM of perl, look at Promise having a status of 'kept' or 'broken' which is more fun than 'fulfilled' or 'rejected'. Brings to mind Perl 5's use of bless, calling the filter function 'grep' and local/global variables created with 'my' and 'our'.
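A tiny sketch of that vocabulary in action:

    my $promise = start { 6 × 7 };  # runs on the thread pool
    say await $promise;             # 42
    say $promise.status;            # Kept (a failed Promise reports Broken)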
Yes, though perl5 is also an incredibly bendable language syntax wise - you can add keywords and operators to the compiler via CPAN modules (our async/await syntax is provided simply by doing 'use Future::AsyncAwait' for example).
I'm sure that there have been more changes from Perl 5.8 to Perl 5.40 than there were between Python 2.0 and Python 3.x (whatever version it is up to at the moment).
What's more is that every change from Python 2 to Python 3 that I've heard of, resembles a change that Perl5 has had to do over the years. Only Perl did it without breaking everything. (And thus didn't need a major version bump.)
    # Does weird things with nested lists too
    > [1, [2, 3], 4, 5] <<+>> [10, 20]
    [11 [22 23] 14 25]
This article makes me feel like I'm watching a Nat Geo/Animal Planet documentary. Beautiful and interesting to see these creatures in the wild? Absolutely. Do I want to keep my distance? As far away as possible.
I agree. This attempt to fuse higher-order functional programming with magic special behaviors from Perl comes off to me as quixotic. HOP works because you're gluing together extremely simple primitives—ordinary pure functions. You can build big things fearlessly because you perfectly understand the simple bricks they're made of. But here: magic functions—which behave differently on lists-of-scalars vs. lists-of-lists, by special default logic—that's not a good match for HOP. Now you have two major axes of complexity: a vertical one of functional abstraction, and a horizontal one of your "different kinds of function and function application".
>> magic functions—which behave differently on lists-of-scalars vs. lists-of-lists by special default logic ...
That is completely the wrong way to think about it. Before the Great List Refactor those were dealt with the same by most operations (as you apparently want it). And that was the absolute biggest problem that needed to be changed at the time. There were things that just weren't possible to do no matter how I tried. The way you want it to work DID NOT WORK! It was terrible. Making scalars work as single elements was absolutely necessary to make the language usable. At the time of the GLR that was the single biggest obstacle preventing me from using Raku for anything.
It also isn't some arbitrary default logic. It is not arbitrary, and calling it "default" is wrong because that insinuates that it is possible to turn it off. To get it to do something else with a scalar, you have to specifically unscalarify that item in some manner. (While you did not specifically say 'arbitrary', it certainly seems like that is your position.)
Let's say you have a list of families, and you want to treat each family as a group, not as individuals. You have to scalarize each family so that they are treated as a single item. If you didn't do that most operations will interact with individuals inside of families, which you didn't want.
In Raku a single item is also treated as a list with only one item in it depending on how you use it. (Calling `.head` on a list returns the first item, calling it on an item returns the item as if it had been the first item in a list.) Being able to do the reverse and have a list act as single item is just as important in Raku.
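(A sketch of both directions, using an illustrative families example like the one above; the names are made up:)

    say (1, 2, 3).head;  # 1: a list hands over its first item
    say 42.head;         # 42: an item behaves as a one-element list

    # $(...) itemizes, so each family counts as a single element:
    my @families = $(<anna ben>), $(<carl dana>);
    say @families.elems;     # 2
    say @families[0].elems;  # 2: the members are still inside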
While you may not understand why it works the way it works, you are wrong if you think that it should treat lists-of-scalars the same as lists-of-lists.
>> This attempt to fuse higher-order functional programming with magic special behaviors from Perl comes off to me as quixotic.
It is wrong to call it an attempt, as it is quite successful at it. There is a saying in Raku circles that Raku is strangely consistent. Rather than having a special feature for this and another special feature for that, there is one generic feature that works for both.
In Python there is a special syntax inside of a array indexing operation. Which (as far as I am aware) is the only place that syntax works, and it is not really like anything else in the language. There is also a special syntax in Raku designed for array indexing operations, but it is just another slightly more concise way to create a lambda/closure. You can use that syntax anywhere you want a lambda/closure. Conversely if you wanted to use one of the other lambda/closure syntaxes in an array indexing operation you could.
The reason that we say that Raku is strangely consistent, is that basically no other high level language is anywhere near as consistent. There is almost no 'magic special behaviors'. There is only the behavior, and that behavior is consistent regardless of what you give it. There are features in Perl that are magic special behaviors. Those special behaviors were specifically not copied into Raku unless there was a really good reason. (In fact I can't really think of any at the moment that were copied.)
Any sufficiently advanced technology is indistinguishable from magic. So you saying that it is magic is only really saying that you don't understand it. It could be magic, or it could be advanced technology, either way it would appear to be magic.
In my early days of playing with Raku I would regularly try to break it by using one random feature with another. I often expected it to break. Only it almost never did. The features just worked. It also generally worked the way I thought it should.
The reason you see it as quixotic is that you see someone tilting at a windmill and assume they are insane. The problem is that maybe it isn't actually a windmill, and maybe you are just looking at it from the wrong perspective.
Nil punning in Clojure gives you that kind of experience, for example. Things that would break in other languages, just "work as you'd expect them" in Clojure (except when you drop down to host primitives, and then nils don't behave nicely anymore). In general, it makes for a really pleasant dev experience, I find.
You're making a mistake if you're thinking like that. Applying an operation that generally works on single values over a list of values automatically is an incredibly powerful technique. If you have ever used Numpy, you will appreciate not needing it in many cases where Raku's built-ins suffice.
> You're making a mistake if you're thinking like that. Applying an operation that generally works on single values over a list of values automatically is an incredibly powerful technique.
Indeed, the defining technique of array programming!
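A sketch of that technique with Raku's hyperoperators:

    my @prices = 10, 20, 30;
    say @prices »×» 1.1;        # [11 22 33]: the scalar is reused on the right
    say @prices »+« (1, 2, 3);  # [11 22 33]: strictly element-wise, lengths must match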
My fundamental objection here is that it is recursive. The non-recursive hyperoperators all have nice clean mathematical definitions. The recursive ones are just weird and ad hoc, at least from the perspective of the underlying mathematical structures.
And if you try to avoid the ad-hoc-ness by formalizing each useful weird behavior as its own documented type with its own documented semantics for application, then you've just reinvented monads. (Which is not to say you shouldn't; IMHO stdlib support for monad types in a dynamic scripting language is long overdue.)
I implemented something similar to the compositional regular expressions feature described here for JavaScript a while ago (independently, so semantics may not be the same), and it is one of the libraries I find myself most often bringing into other projects years later. It gets you a tiny bit closer to feeling like you have a first-class parser in the language. Here is an example of implementing media type parsing with regexes using it: https://runkit.com/tolmasky/media-type-parsing-with-template...
"templated-regular-expression" on npm, GitHub: https://github.com/tolmasky/templated-regular-expression
To be clear, programming languages should just have actual parsers and you shouldn't use regular expressions for parsers. But if you ARE going to use a regular expression, man is it nice to break it up into smaller pieces.
"Actual parsers" aren't powerful enough to be used to parse Raku.
Raku regular expressions combined with grammars are far more powerful, and if written well, easier to understand than any "actual parser". In order to parse Raku with an "actual parser" it would have to allow you to add and remove things from it as it is parsing. Raku's "parser" does this by subclassing the current grammar adding or removing them in the subclass, and then reverting back to the previous grammar at the end of the current lexical scope.
In Raku, a regular expression is another syntax for writing code. It just has a slightly different default syntax and behavior. It can have both parameters and variables. If the regular expression syntax isn't a good fit for what you are trying to do, you can embed regular Raku syntax to do whatever you need to do and return right back to regular expression syntax.
It also has a much better syntax for doing advanced things, as it was completely redesigned from first principles.
The following is an example of how to match at least one `A` followed by exactly that number of `B`s and exactly that number of `C`s.
(Note that bare square brackets [] are for grouping, not for character classes.)
Result: The result is actually a very extensive object that has many ways to interrogate it. What you see above is just a built-in human readable view of it.In most regular expression syntaxes to match equal amounts of `A`s and `B`s you would need to recurse in-between `A` and `B`. That of course wouldn't allow you to also do that for `C`. That also wouldn't be anywhere as easy to follow as the above. The above should run fairly fast because it never has to backtrack, or recurse.
When you combine them into a grammar, you will get a full parse-tree. (Actually you can do that without a grammar, it is just easier with one.)
To see an actual parser I often recommend people look at JSON::TINY::Grammar https://github.com/moritz/json/blob/master/lib/JSON/Tiny/Gra...
Frankly from my perspective much of the design of "actual parsers" are a byproduct of limited RAM on early computers. The reason there is a separate tokenization stage was to reduce the amount of RAM used for the source code so that further stages had enough RAM to do any of the semantic analysis, and eventual compiling of the code. It doesn't really do that much to simplify any of the further stages in my view.
The JSON::Tiny module from above creates the native Raku data structure using an actions class, as the grammar is parsing. Meaning it is parsing and compiling as it goes.
I imagine this could be understood as making use of a monad. Right?
The main problem with generalised regexes is that you can't match them in linear time worst-case. I'm wondering if this is addressed at all by Raku.
A "monad" is not really a "thing" you can make use of, because a monad is a type of thing. Think "iterator"; an iterator is not a thing itself, it is a type of thing that things can be.
There is probably a monad you could understand this as being, a specific one, but "monad" itself is not a way to understand it.
And just as you can understand any given Iterator by simply understanding it directly, whatever "monad" you might use to understand this process can be simply understood directly without reference to the "monad" concept.
> I imagine this could be understood as making use of a monad. Right?
Can you clarify what do you mean?
Do expect the concept of "monad" to help explaining Raku grammars?
Yes. Compare it to the List monad or Parsec.
- There is a natural from-to conversion of Functional Parsers (FP) monad (as in Parsec) to Extended Backus-Naur Form (EBNF).
- Similarly, EBNF can be applied to Raku grammars.
- Hence, the representation of Raku grammars into FP monad is doable, at least for certain large enough set of Raku grammars.
Why are they called regular expressions if they can parse non-regular languages?
It’s gradually got so. <https://youtu.be/JIlpjJnc6qY?t=54>
Literally, Larry Wall was adding things to regexes all the way back before the release of Perl 2.
The word 'regular' comes from the mathematical roots of automata and finite state machines.
Which have a one to one correspondence with regular languages, so this isn’t actually an answer to the question.
I don't think we disagree here. To clarify, my statement about using "actual parsers" over regexes was more directed at my own library than Raku. Since I had just posted a link on how to "parse" media types using my library, I wanted to immediately follow that with a word of caution of "But don't do that! You shouldn't be using (traditional) regexes to parse! They are the wrong tool for that. How unfortunate it is that most languages have a super simple syntax for (traditional/PCRE) regexes and not for parsing." I had seen in the article that Raku had some sort of "grammar" concept, so I was kind of saying "oh it looks like Raku may be tackling that to."
Hopefully that clarifies that I was not necessarily making any statement about whether or not to use Raku regexes, which I don't pretend to know well enough to qualify to give advice around. Just for the sake of interesting discussion however, I do have a few follow up comments to what you wrote:
1. Aside from my original confusing use of the term "regexes" to actually mean "PCRE-style regexes", I recognize I also left a fair amount of ambiguity by referring to "actual parsers". Given that there is no "true" requirement to be a parser, what I was attempting to say is something along the lines of: a tool designed to transform text into some sort of structured data, as opposed to a tool designed to match patterns. Again, from this alone, seems like Raku regexes qualify just fine.
2. That being said, I do have a separate issue with using regexes for anything, which is that I do not think it is trivial to reason about the performance characteristics of regexes. IOW, the syntax "doesn't scale". This has already been discussed plenty of course, but suffice it to say that backtracking has proven undeniably popular, and so it seems an essential part of what most people consider regexes. Unfortunately this can lead to surprises when long strings are passed in later. Relatedly, I think regexes are just difficult to understand in general (for most people). No one seems to actually know them all that well. They venture very close to "write-only languages". Then people are scared to ever make a change in them. All of this arguably is a result of the original point that regexes are optimized for quick and dirty string matching, not to power gcc's C parser. This is all of course exacerbated by the truly terrible ergonomics, including not being able to compose regexes out of the box, etc. Again, I think you make a case here that Raku is attempting to "elevate" the regex to solve some if not all of these problems (clearly not only composable but also "modular", as well as being able to control backtracking, etc.) All great things!
I'd still be apprehensive about the regex "atoms" since I do think that regexes are not super intuitive for most people. But perhaps I've reversed cause and effect and the reason they're not intuitive is because of the state they currently exist in in most languages, and if you could write them with Raku's advanced features, regexes would be no more unintuitive than any other language feature, since you aren't forced to create one long unterminated 500-character regex for anything interesting. In other words, perhaps the "confusing" aspects of regexes are much more incidental to their "API" vs. an essential consequence of the way they describe and match text.
3. I'd like to just separately point out that many aspects of what you mentioned was added to regexes could be added to other kinds of parsers as well. IOW, "actual parsers" could theoretically parse Raku, if said "actual parsers" supported the discussed extensions. For example, there's no reason PEG parsers couldn't allow you to fall into dynamic sub-languages. Perhaps you did not mean to imply that this couldn't be the case, but I just wanted to make sure to point out that these extensions you mention appear to have much more generally applicable than they are perhaps given credit for by being "a part of regexes in Raku" (or maybe that's not the case at all and it was just presented this way in this comment for brevity, totally possible since I don't know Raku).
I'll certainly take a closer look at the full Raku grammar stuff since I've written lots of parser extensions that I'd be curious have analogues in Raku or might make sense to add to it, or alternatively interesting other ideas that can be taken from Raku. I will say that RakuAST is something I've always wanted languages to have, so that alone is very exciting!
I use Raku in production. It's the best language to deal with text because building parsers is so damn nice. I'm shocked this isn't the top language to create an LLM text pipeline.
Very late to the thread, but I was wondering if you knew of a good example of Raku calling an API over https, polling the API until it returns a specific value?
Here is one way:
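(The original snippet didn't survive the formatting here; below is a minimal sketch of the shape it likely took, assuming the Cro::HTTP::Client module and a hypothetical JSON endpoint exposing a `rate` field -- the URL and threshold are made up.)

use Cro::HTTP::Client;

# Hypothetical endpoint and target value -- rename to suit.
my $url = 'https://api.example.com/rate';

loop {
    my $response = await Cro::HTTP::Client.get($url);
    my %data     = await $response.body;   # JSON bodies decode to a Hash
    my $rate     = %data<rate>;

    last if $rate >= 1.05;   # stop polling once the target value shows up
    sleep 10;                # otherwise wait and try again
}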
(Tweak / uncomment / rename the $rate variable assignments and checks.)

Wow, huge thanks, that's super helpful :)
Sure, good luck!
Do you use any of Raku's LLM packages? If yes, which ones?
Wow. Sign me up for leaving the industry before I ever have to maintain a Raku codebase.
Funny, cause reading that blog post made me want to quit my job and find a raku team to work with. Maybe I'm still too naive :)
Same. And it's a bit funny because I'm usually against unnecessary complexity, and here's a language that seems to have embraced it and become a giant castle of language features that I could spend weeks studying.
Maybe it's because Raku's features were actually well thought-out, unlike the incidental "doesn't actually buy me anything, just makes the code hard to deal with" complexity I have to deal with at work day in and day out.
Maybe if Java had a few of these features back in the day people wouldn't've felt the need to construct these monstrous annotation soup frameworks in it.
Java inspired the Design Patterns book.
Every Design Pattern is a workaround for a missing feature. What that missing feature is, isn't always obvious.
For example the Singleton Design Pattern is a workaround for missing globals or dynamic variables. (A dynamic variable is sort of like a global where you get to have your own dynamic version of it.)
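(For the curious, a small illustration of a dynamic variable in Raku -- the `*` twigil means the variable is looked up in the caller's dynamic scope rather than lexically; the names here are made up:)

sub report() { say $*indent ~ 'hi' }

my $*indent = '  ';
report();            # prints "  hi"

{
    my $*indent = '    ';
    report();        # prints "    hi" -- the inner dynamic version wins here
}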
If Raku has a missing feature, you can add it by creating a module that modifies the compiler to support that feature. In many cases you don't even need to go that far.
Of course there are far fewer missing features in Raku than Java.
If you ever needed a Singleton in Raku (which you won't) you can do something like this:
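(The code sample was lost in formatting; here is a sketch consistent with the description that follows -- the `Foo` class body and the instance counter are my own invention:)

my $instances = 0;

role Singleton {
    method new (|) {     # :(|) -- matches any arguments at all
        once callsame    # construct via the parent's new only once, then reuse
    }
}

class Foo does Singleton {
    has $.count = ++$instances;   # only runs when an object is actually built
}

say Foo.new.count;   # 1
say Foo.new.count;   # 1 -- the same cached instance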
That prints `1` twice.

The way it works is that the `new` method in Singleton always gets called because it is very generic as it has a signature of `:(|)`. It then calls the `new` method in the base class above `Foo` (`callsame` "calls" the next candidate using the "same" arguments). The result then gets cached by the `once` statement.
There are actually a few limitations to doing it this way. For one, you can't create a `new` method in the actual class, or any subclasses. (Not that you need to anyway.) It also may not interact properly with other roles. There are a variety of other esoteric limitations. Of course none of that really matters because you would never actually need, or want to use it anyway.
Note that `once` basically stores its value in the next outer frame. If that outer frame gets re-entered it will run again. (It won't in this example as the block associated with Foo only gets entered into once.) Some people expect `once` to run only once ever. If it did that you wouldn't be able to reuse `Singleton` in any other class.
What I find funny is that while Java needs this Design Pattern, it is easier to make in Raku, and Raku doesn't need it anyway.
If you mean Design Patterns: Elements of Reusable OO software by Gamma et al, it was published in 1994. Java came out in 1995.
The Patterns book was originally a C++ text.
All programming languages have design patterns, they aren’t patterns as in “templates you should follow”, they are patterns as in “concepts you will see frequently for solving classes of problems”.
The Design Patterns book was a bestiary not a guide to replacement features.
Java does have a particular blend of features and lack of features that has led to the bloated, boilerplate-laden, inflationary framework ecosystem around it that is worse than anything I've seen in any other language.
Lack of stack-allocated structs leads to object pooling.
Lack of named arguments, combined with the tediousness of writing `this.x = x` over and over, along with the reflection system that Java does provide, leads to IoC frameworks that muck about in your private variables and/or generate objects "for you"[1].
Lack of a way to mark object trees as immutable short of duplicating all the constituent classes leads to everyone generally assuming that everything is and moreover should be mutable, necessitating complex systems for isolating changes to object graphs (e.g. the way Hibernate supports transactions).
Etc, etc. I wrote a list of these things somewhere.
[1] "It does X for you" is a phrase I've heard too many times from coworkers trying to sell me on some framework that we didn't need. "Oh yeah, it does an easy job for me an in exchange I have an incomprehensible spaghetti mess to deal with, thanks." Being the only person in the room who notices the complexity monster growing bigger and bigger is a never-ending source of frustration.
Record classes alleviate the pain of writing immutable data object classes but are unfortunately late to the party.
I'm not so sure every design pattern corresponds to a missing feature. For example, what feature would the Observer design pattern correspond to?
The feature that Observer would correspond to is simply Observers. Some of the patterns may happen to correspond to different names, but they don't all need different names or weird mappings, many of them are just "and now it's a feature instead of a set of classes".
That said, while the point "a design pattern is a feature missing from a language" has some validity on its own terms, the implied "and therefore a language is deficient if it has design patterns because those could be features" is nonsense. A language has some set of features. These features have an exponential combination of possibilities, and a smaller, but still exponential, set of those are useful. For every feature one lifts from "design pattern" and tries to put into the language, all that happens is an exponential number of other "features" are now closer to hand and are now "design patterns". This process does not end, and this process does not even complete enumerating all the possible useful patterns before the language has passed all human ability to understand it... or implement it.
Moreover, the argument that "all design patterns should be lifted to features" ignores the fact that features carry costs. Many kinds of costs. And those costs generally increase the cost of all the features around them. The costs become overwhelming.
"Design patterns are really Band-Aids for missing language features" comes from a 1996 Peter Norvig presentation[0][1]:
> Some suggest that design patterns may be a sign that features are missing in a given programming language (Java or C++ for instance). Peter Norvig demonstrates that 16 out of the 23 patterns in the Design Patterns book (which is primarily focused on C++) are simplified or eliminated (via direct language support) in Lisp or Dylan.
[0]: https://en.wikipedia.org/wiki/Software_design_pattern#Critic...
[1]: slide 9 of PDF https://www.norvig.com/design-patterns/design-patterns.pdf
In a language with a sufficiently expressive object system or other features such as macros we could turn the Observer pattern into a library. To get objects to participate in the pattern we then just somehow declare that they are observers or subjects. Then they are endowed with all the right methods. Simple inheritance might be used, but if your Observer or Subject are already derived then you need multiple inheritance to inject the pattern to them. Or some other way of injecting that isn't inheritance. In C++, the CRTP might be used.
Language features don't necessarily make the design pattern's concept go away, just the laborious coding pattern that must be executed to instantiate the pattern.
Writing a design pattern by hand is like writing a control flow pattern by hand in a machine language. When you work in assembly language on some routine, you may have the concept of a while loop in your head. That's your design pattern for the loop. The way you work the while loop pattern into code is that you write testing and branching instructions to explicit labels, in a particular, recognizable arrangement. A macro assembler could give you something more like an actual while loop and of course higher level languages give it to you. The concept doesn't go away just the coding pattern.
The meaning of "pattern" in the GoF book refers not only to concepts like having objects observe each other, but also refers to the programmer having to act as a human compiler for translating the concept into code by following a detailed recipe.
Because GoF design patterns are all object-based, they're able to use naming for all the key parts coming from the recipe. When you read code based on one of these patterns, the main reason why you can see the pattern is that it uses the naming from the book. If you change your naming, it's a lot harder to recognize the code as being an instance of a pattern.
events/reactive programming?
and further down the research path, chemical programming (forgot the name of the languages)
What captures my mind is a little blend of radical language design that I only find in Haskell, for instance, without the type-theoretic baggage, and a bit of "that's just a fun Perl idiom" from when you used to hack around on your old Linux box.
I think it has to do more with familiarity than complexity. You could have a good understanding of features like the ones showcased in the blogpost, but it could take you a minute of staring at a line to parse it if someone uses it in a way you're unfamiliar with. Doing that for potentially hours on end would be a pain.
It's definitely something I'd write for fun/personal projects but can't imagine working with other people in. On a side note, I believe this is where go's philosophy of having a dead simple single way of doing things is effective for working with large teams.
Some of them seem ok, e.g. ignoring whitespace in regexes by default is a great move, and the `*` as a shorthand single lambda argument is neat.
But trust me, if you ever have to actually *work* with them, you will find yourself cursing whoever decided to riddle their code with <<+>>, or the person that decided `* + *` isn't the same as `2 * *` (that parser must have been fun to write!)
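(For anyone following along, the difference is arity -- each bare `*` in an expression becomes its own parameter; a quick check:)

say (* + *).arity;    # 2 -- a two-argument lambda
say (2 * *).arity;    # 1 -- a one-argument lambda
say (* + *)(3, 4);    # 7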
yeah, you're right.
your entire comment is syntactically valid raku, or can be made so, because raku syntax is so powerful and flexible.
raku grammars ftw!
https://docs.raku.org/language/grammars
https://docs.raku.org/language/grammar_tutorial
;)
even that last little fella above, is or can be made syntactically valid.
That's a fair reaction to the post if you haven't looked at any normal Raku code.
If you look at any of the introductory Raku books, it seems a LOT like Python with a C-like syntax. By that I mean the syntax is more curly-brace oriented, but the ease of use and built-in data structures and OO features are all very high level stuff. I think if you know any other high level scripting language that you would find Raku pretty easy to read for comparable scripts. I find it pretty unlikely that the majority of people would use the really unusual stuff in normal every day code. Raku is more flexible (more than one way to do things), but it isn't arcane looking for the normal stuff I've seen. I hope that helps.
Sure, but the fact that weird stuff is possible means that someone, at some point, will try to use it in your codebase. This might be prevented if you have a strong code review culture, but if the lead dev in the project wants to use something unusual, chances are no one will stop them. And once you start...
If you suggest a language here where nothing weird can be done, I bet someone will reply with something weird done in that language.
This is true. I’ve written overly-clever, indecipherably dense code in many different languages. But some languages seem to practically encourage that kind of thing. Compare a random sampling of code from APL, Scala, Haskell, Perl, Clojure, C, and Go. You’ll probably find that average inscrutability varies widely between various language pairs.
Inscrutability to whom? I'm confident someone with 10 years of production Haskell experience will do a better job of reading production Haskell code than a comparable situation with C.
But then again, maybe that was what you were saying.
You say that like you've never looked at a Perl Golf solution.
Same as Perl: nobody wants to maintain it, but it's extremely fun to write. It has a lot of expressive power.
You can see that in Raku's ability to define keyword arguments with a shorthand (e.g. `:global(:$g)`), as well as assuming a value of `True`, so you can just call `match(/foo/, :g)` to get a global regex match. Perl has tons of this stuff too, all aimed at making the language quicker and more fun to write, but less readable for beginners.
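(A sketch of how that aliasing looks in a signature -- the sub name here is made up:)

sub demo(:global(:$g) = False) { say $g }

demo(:g);        # True -- bare :g is shorthand for g => True
demo(:global);   # True -- the long name binds to the same $g
demo();          # False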
Many of the features that make Perl harder to write cleanly have been improved in Raku.
Frankly I would absolutely love to maintain a Raku codebase.
I would also like to update a Perl codebase into being more maintainable. I'm not sure how much I would like to actually maintain a Perl codebase because I have been spoiled by Raku. So I also wouldn't like to maintain one in Java, Python, C/C++, D, Rust, Go, etc.
Imagine learning how to use both of your arms as if they were your dominant arm, and doing so simultaneously. Then imagine going back to only using one arm for most tasks. That's about how I feel about using languages other than Raku.
It's not that reading Perl is hard; the _intent_ of the operations is often hard/unclear.
Yes, it's nice to write dense fancy code; however, something very boring to write like PHP is a lot of "loop over this bucket and do something with the fish in the bucket, afterwards, take the bucket and throw it into the bucket pile", which mirrors a 'human follows these steps' style.
In the intent department, I have had more troubles with AbstractClassFactorySingletonDispatcher type Java code, add to that dependency injection magic/madness.
I'd rather maintain Perl any day than 30 classes of Java code just to build a string.
I once heard of a merger between a company that used Java, and another one that used Perl.
After that merger, both teams were required to make a similar change.
If I remember right, the Perl team was done before the Java team finished the design phase. Or something like that.
The best aspect of Java is that it is difficult to write extremely terrible code. The worst aspect is that it is difficult to write extremely awesome code. (If not impossible.)
If this was about my former $work:
The company using Perl was able to double its turnover in 3 weeks.
The company using Java was still in the design phase.
Companies choose their tools depending on their internal culture. The company using Perl at the time was simply more agile.
FWIW, the company that was using Perl is now using Java mostly. And yes, the culture of the company has changed. Not sure about cause and effect.
The thing is you can choose to write Perl in a way that doesn't suck. The problem with TMTOWTDI is that with 18 people in a code base... well, it's best to have a set of conventions when writing Perl. Let's all agree to use 5.x.x with these features, for example.
These days you can TMTOWTDI in Python as well though.
The TMTOWTDI argument was valid in very early stages of Perl vs Python debate, like some what between 2000 - 2006 etc.
These days Python is just Perl with mandatory tab indentation. Python is no longer that small language, with small set of syntactical features. C++/Java etc in the meanwhile have gotten even more TMTOWTDI heavy over the years.
I don't really work with Python often enough, but as for PHP there's usually one boring way to do it. We generally eschew more fun stuff like array operations, because for loops work and the intent is clear.
I’d rather have a compiler+IDE-supported testing framework that doesn’t require the code under test to be prepared for being tested in any way. Almost all boring languages are fine at writing straightforward code without ceremony, even Java.
Good job those aren't the only two options!
Frankly I'm on Team Fish Bucket.
I am not totally convinced--well, not at all convinced, really-- that the OOP revolution made programming code better, simpler, easier to follow, than procedural code.
I suppose the argument ended up being a philosophical discussion about where state was held in a running program (hard to tell in procedural code, perhaps, and easier to tell in OOP), but after 20+ years of Java I wonder if the baby hadn't disappeared down the plughole with the bathwater.
Perl reminds me of that job I had writing ANSI MUMPS.
it's extremely fun to write
Then your contrarian phase ends and you regret that you didn’t learn something useful in that time.
I was paid to write it, but still find it useful for short scripts and one liners. I'd use it over sed, awk or shell without pause.
It is very strong for text processing, particularly regexes.
Finally learning any language helps you learn new paradigms which you can apply anywhere. Same as Haskell or Lisp or something.
Things do not require to be useful or to increase revenue in order for them to be enjoyable. If the only reason you ever do something is because you get material wealth out of it, are you even making choices or are you a perfect rational actor as described in textbooks?
Things are allowed to exist and be enjoyed on the sole basis that they are enjoyable
Oh I’m not wealth motivated at all. My regrets are about uselessness of that time itself. E.g. I could learn ML instead and do nothing useful with it, rather than not doing nothing useful with my perl knowledge today.
This is so self-contradicting, it feels like division by zero
I could enjoy one knowledge today without having monetary interest in it ever, but instead I have another knowledge which is completely useless even for enjoyment.
the irony
It's strange that people are saying the same about maintaining code bases written using AI assistance.
I'm guessing it's going to be a generational thing now. A whole older generation of programmers will just find themselves out of place in what is a normal work setup for the current generation.
I don't think Raku is intended for "the industry".
Some of these are halfway familiar. Hyper sounds like a more ad-hoc version of something from recursion-schemes, and * as presented is somewhat similar to Scala _ (which I love for lambdas and think every language should adopt something similar).
I think this is the closest equivalent for hyper: https://groovy-lang.org/operators.html#_spread_operator
It's also quite similar to Thread and MapThread in Mathematica
> (2, 30, 4, 50).map(* + *) returns (32, 45)
Should it be `returns (32, 54)` ? i.e. 4+50 for the 2nd term.
Maybe this is a consequence (a translation in the author's head) of some countries saying e.g. vierenvijftig (four-and-fifty) instead of the English fifty-four.
checked in rakudo, it does return (32 54), author fingers slipped
So I guess Perl is a gateway drug for the APL family of languages now?
yes, and the post didn't even touch on metaoperators, e.g.
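(the examples were lost here; a few hedged illustrations of the kind of thing metaoperators do:)

my @a = 1, 2;
my @b = 10, 20;
say [+] @a;       # 3 -- [ ] reduces a whole list with the + operator
say @a Z+ @b;     # 11 and 22 -- Z zips the two lists pairwise with +
say @a »+« @b;    # 11 and 22 -- the hyper form applies + elementwise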
That's nothing. Use it to calculate the sum of a range of values:
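(the one-liner itself was lost; judging by the output quoted next, it was presumably the reduction metaoperator over a very large range, something like:)

say [+] 1 .. 10**43;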
Which will result in you getting this back in a fraction of a second:

50000000000000000000000000000000000000000005000000000000000000000000000000000000000000
(It actually cheats because that particular operator gets substituted for `sum` which knows how to calculate the sum of a Range object.)
Speed is still a major issue with Raku. Parsing a log file with a regex is Perl's forte but the latest Raku still takes 6.5 times as long as Python 3.13 excluding startup time.
You'd need to qualify that with an example. In my experience some things are faster in Raku and some are slower, so declaring that "Raku takes 6.5 times as long as Python 3.13" is pretty meaningless without seeing what it's slower at.
I specified the use case so why "meaningless"? Here's the code:
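(The code itself was lost in formatting. For flavor only -- not the original benchmark -- a Raku log scan of the kind being described might look something like this, assuming an Apache-style access log:)

# count HTTP status codes in an access log (hypothetical file name)
my %by-status;
for 'access.log'.IO.lines -> $line {
    %by-status{$0}++ if $line ~~ / '" ' (\d ** 3) ' ' /;
}
say %by-status;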
My brain immediately reached for the word 'horrifying', with phrases 'terrible consequences' and 'halting problem' soon after, but to each their own.
I believe that it only tries to DWIM with arithmetic and geometric sequences, and gives up otherwise. Of course there’s nothing keeping you from writing a module that would override `infix:<...>` in the local scope with a lookup in OEIS.
You get either a compile time instantiated infinite lazy sequence, or a compilation error.
My personal bar for "amount of clever involved" is fairly high when the clever either does exactly what you'd expect or fails, and even higher when it does so at compile time.
(personal bars, and personal definitions of "exactly what you'd expect" will of course vary, but I think your brain may have miscalibrated the level of risk before it got as far as applying your preferences in this particular case)
This has nothing to do with the halting problem. And I have no idea why you think there would be 'terrible consequences'.
The `...` operator only deduces arithmetic or geometric changes from up to the previous 3 values.

Basically the above becomes
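(the snippet was lost here; presumably the explicit-generator equivalent, something like:)

1, 2, 4, * * 2 ... *   # each next value is the previous one doubled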
Since each value is just double the previous one, it can figure that out.

If `...` can't deduce the sequence, it will error out.
So I really don't understand how you would be horrified.

Programming is not just a matter of slinging syntax at a problem. Good programmers need to develop a mental model of how a language works.
Needing to do geometric sequences as syntax like that is clearly a parlor trick, a marginal use case. What goes through a good programmer's mind, with experience of getting burned by such things over the years, is "If Raku implements this parlor trick, what other ones does it implement? What other numbers will do something I didn't expect when I put them in? What other patterns will it implement?"
Yes, you can read the docs, and learn, but we also know this interacts with all sorts of things. I'm not telling you why you should be horrified, I'm explaining why on first glance this is something that looks actually quite unappealing and scary to a certain set of programmers.
It actually isn't my opinion either. My opinion isn't so much that this is scary on its own terms, but just demonstrates it is not a language inline with any of my philosophies.
Raku has features that would appeal to mathematicians. It might seem like a parlour trick to you but that doesn't make it so for everyone.
It's a parlor trick because something like "1, * × 2 ..." is much more sensible. Heck, it isn't even longer, if we're talking about saving keystrokes. It's still more syntax than I'm looking for from a language, but "initial, update rule, continue infinitely" does not give me that immediate "oh wtf, what other magic is going to happen with other values?" reaction I describe from trying to divine update rules from raw numbers.
It is also immediately obvious how to use this for other patterns, immediately obvious how to compose it, and just generally experiences all the benefits things get from not being special cases.
The detecting of increments has been in Perl6 for ages so that's not new. [edit: Perl6, not Perl]
I guess (apart from the Whatever), the laziness is new since Perl6/Raku.
> The detecting of increments has been in Perl for ages so that's not new.
But this is detecting that the increment is multiplicative, rather than additive. It might seem like a natural next step, but, for example, I somewhat suspect (and definitely hope) that Raku wouldn't know that `(1, 1, 2...*)` is a (shifted) list of Fibonacci numbers.
Ah, I missed that. Nevertheless it's been in Perl6 for a while!
You're safe WRT Fibonacci.
> Unable to deduce arithmetic or geometric sequence from: 1,1,2
BUT WAIT!
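(the punchline snippet was lost; presumably the explicit-generator form, which does give you Fibonacci -- a sketch:)

say (1, 1, * + * ... *)[^10];   # (1 1 2 3 5 8 13 21 34 55)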
I… love it? It's so elegant it's infuriating.
It also seems slowish, but the problem seems to be the startup cost. So probably pretty smart?
The following shows fairly stable times.
> for i in $(seq 20); do time raku -e "say (1, 1, * + * ... *)[0..$i]"; done
raku kinda puts a bunch of parts of itself together during startup, not entirely unlike Julia.
The sheer dynamism of the thing makes pre-baking that non-trivial, also not entirely unlike Julia.
I seem to recall chatting with the devs on IRC a few years ago, and there seemed to be more than one viable way to potentially fix it, but they all seemed to me to share the property of needing a lot of effort during which nothing would work at all before you got something to show for it, plus a decent chance that what you got to show after all that time was "welp, here's why this one can't work". That's a really heavy lift in terms of somebody finding motivation to try it in the first place.
So tl;dr "yes, I dislike the startup cost, no, I don't expect it to change soon and I don't think it's a black mark against the team that it probably won't."
very cohesive, and nestable.

These are all very clever, but what's the use case? I'm not saying there isn't one, I just don't know what it is! Not to speak ill of the dead, but Perl was utilitarian: it was built to solve problems. From my point of view, these are solutions to problems I've never had.
Here’s a nice article which uses this feature well (and several others) while computing e <https://blogs.perl.org/users/damian_conway/2019/09/to-comput...>. An example:
That article was a RIDE.
Yea, I think he has a knack for that. If you want some more I recommend a talk he gave a few years back called “Three Little Words” <https://www.youtube.com/watch?v=e1T7WbKox6s>.
Damian is an excellent writer and communicator for sure. But I don't know if it answers the question of what you would use these features in Raku for. If one wanted to compute e to higher precision, I feel like one would use a DSL. But we also don't need to compute e presently.
No, we don’t need to compute e very often; its value is pretty well known. The article is just showing off Raku (or Perl 6 as it was then known) by writing a small program of moderate complexity that still manages to show off some of Raku’s interesting features. Computing approximations of e is merely an interesting exercise; it’s not the point.
The question that justinator asked was what good uses Raku’s indefinite series have. This article points out that different ways of approximating e grow at different rates, so it is appropriate to associate a different range of trial values with each of those methods. Dörrie's bounds uses powers of 10 as shown. Others use powers of 2. Newton’s method uses sequential trial values, since it grows really fast:
And several methods compute approximations in a single step, so they don’t take a trial value at all. These are a lot of fun, but of course they can also be profound. For those who are interested, the article shows off a lot of obvious syntactic features like superscripts and hyperoperators, but there are also things like classes and roles and new operators as well. It really is a nice tour.

Ha, Conway, once again. His talks during early perl6 days were brilliant.. thanks for sharing
You’re welcome.
If you can read Lisp (Scheme) syntax, I think SICP [0] has the clearest demonstration of the utility of lazy streams. The problem it presents (calculating pi) is, admittedly, rather academic; but the concept of having values that evolve over "time" as first-class entities that you can abstract over is practically useful. I think that all of reactive programming (React, Angular, etc) is closely related to this idea (perhaps even originates from it), although the implementation and applications differ.
It's a long and effortful read, but the payoff is worth the effort.
[0] https://mitp-content-server.mit.edu/books/content/sectbyfn/b...
EDIT: I think the Perl article posted in a sibling comment uses essentially the same example (but for calculating e instead of pi), although I only skimmed it.
I hadn't realized that SICP covered using lazy streams for calculating pi. That reminds me of this article I read recently about using lazy lists in Haskell to calculate pi by means of a few different algorithms.
https://www.cs.ox.ac.uk/people/jeremy.gibbons/publications/s...
I've heard that the best way to solve a hard problem is to create a language in which solving that problem would be easy.
Basically creating a Domain Specific Language.
Raku isn't necessarily that language. What it is, is a language which you can modify into being a DSL for solving your hard problem.
Raku is designed so that easy things are easy and hard things are possible. Of course it goes even farther, as some "hard" things are actually easy. (Hard from the perspective of trying to do it in some other language.)
Let's say you want a sequence of primes. At first you think of the Sieve of Eratosthenes.
Since I am fluent in Raku, I just write this instead:
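(The snippet was lost in formatting; the canonical lazy version of this is something like:)

my $primes := (2 .. *).grep( *.is-prime );   # an infinite, lazy sequence
say $primes[^10];    # (2 3 5 7 11 13 17 19 23 29)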
This has the benefit that it doesn't generate any values until you ask for them. Also, if you don't do anything to cache the values, they will be garbage collected as you go.

Interesting set of Raku features to focus on...
The most important Raku features are Command Line Interface (CLI) and grammars.
CLI support is a _usual_ feature -- see "docopt" implementations (and adoption), for example. But in Raku CLI support is built-in and nice to use.
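(For instance -- a minimal sketch of the built-in handling: Raku derives the command-line parsing and a usage message from MAIN's signature; the script name is made up:)

# hello.raku
sub MAIN(Str $name, Int :$times = 1) {
    say "Hello, $name!" for ^$times;
}

# $ raku hello.raku --times=3 World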
As for the grammars -- it is _unusual_ a programming language to have grammars as "first class citizens" and to give the ability to create (compose) grammars using Object-Oriented Programming.
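(A tiny sketch of that composition -- a grammar pulling in a token from a role; the names are invented:)

role Digits {
    token digits { \d+ }
}

grammar IntLit does Digits {
    token TOP { '-'? <digits> }
}

say IntLit.parse('-42');   # a Match object: ｢-42｣ with digits => ｢42｣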
I’ve followed this project for years, and while it’s interesting, I think it’s really a shame that Perl 6 seemed to have been so badly waylaid by this sojourn into the looking-glass.
I wonder how (and what) Patrick Michaud is doing?
I’m not sure this page loaded properly for me, half of it is arbitrary punctuation characters assembled in bizarre nonsensical ways.
The greatest trick the Devil ever played was renaming Perl 6 to Raku.
This is a very funny comment given the context.
https://github.com/Raku/problem-solving/pull/89#pullrequestr...
A quote from the Christian Bible, Luke 5:36-37.
Agreed. The "Perl 6 is a different language" meme was so overdone. To anyone who actually used Perl 5 the continuity was obvious.
That's an interesting way to describe the English language.
I've often thought of Raku as being a lot like the English language. It borrows heavily from other languages.
Of course since Raku has the benefit of an actual designer, it is more cohesive than English.
https://imgur.com/uMuoeuC
I don't understand why we want to use some language feature like Junctions, instead of using lists explicitly?
I don't understand why we want to use some language feature like loops, instead of using conditional gotos explicitly.
Sure, we can do the same thing with the goto... but why would we want to use the more difficult/annoying alternative when the convenient one exists?
Junctions introduce a degree of non-determinism to the language. Think Prolog variables. Junctions allow you to talk about a set of solutions without having to mind how they are kept together or how the operations are distributed between members of the Junction. It's especially convenient when you search for something and that something can be a complicated series of logical expressions: you can pack them all in a single Junction and treat as a first-class object. It's a little hard to explain without giving examples, but it really has a lot of uses :)
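(a couple of small examples of that flavor:)

say so 42 == any(1, 42, 99);          # True -- the comparison distributes
my $vowel = any(<a e i o u>);         # a first-class bundle of possibilities
say 'hello'.comb.grep(* eq $vowel);   # (e o)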
PowerShell does something similar with their pipelines, see e.g. the answer [0] and the question it answers. Something similar happens in Bash: $x refers not to the string $x, but to the list of the strings that you get by splitting the original string by IFS.
And yes, this feature is annoying and arguably is a mis-feature: containers shall not explode when you touch them.
[0] https://stackoverflow.com/a/56977142
I’m not sure that I see the connection that you are making here. Can you elaborate?
Also note that in contrast to Bash, Junction is a type.
Regarding their utility, at their most useful level (in my experience), junctions provide for things like:
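(the snippet here was lost; based on the expansion described next, it was presumably along the lines of:)

if $string eq 'this' | 'that' | 'other' {
    ...
}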
This is the same as writing out the individual `eq` comparisons joined with `||`. They have many other uses but that’s the most common one that I tend to see in practice.

> I’m not sure that I see the connection that you are making here. Can you elaborate?
Back when I had to write PowerShell scripts, I constantly found that piping an array to some command would almost always make that command to be invoked once for every item in array, instead of being invoked once and given the whole array as a single input. Sometimes it's the latter that you need, so the workaround is to make a new, single-element array with the original array as its only element, and pipe this into the command.
Got it, that makes sense based on the link.
In Raku, the equivalent would be:
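(the snippet was lost; presumably itemization with the `$` contextualizer, something like:)

my @array = 1, 2, 3;
for @array  { .say }   # three iterations, one per element
for $@array { .say }   # one iteration: the whole array as a single item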
The connection to junctions is still not very clear to me, however. A junction doesn’t really have any correlation to a single-element version of a list. As a super-position of potential values, it doesn’t have many analogues in other languages.

For example, part of the original concept of junctions involved parallel evaluation of junction elements in an expression, but that turned out to be less useful than hoped for in practice.
An array in Powershell, when piped to a command, automatically get this command invoked for each of the array's element and the results are combined into a new output array, which can be piped further.
A junction in Raku, when given as an argument to a function, automatically applies this function to each of the junction's element and the results are combined into a new junction as a result.
I don't know, seems like a pretty clear parallel to me. And since the PowerShell behaviour is quite often undesirable, I agreed with the original commenter that perhaps junctions could also be somewhat annoying to use instead of just working with normal lists: after all, `s in ('this', 'that', 'other')` is about as clear as `$string ~~ "this"|"that"|"other"` but doesn't require support for magical self-destructuring containers in the language.
Since I rarely use junctions myself -- and never in the manner of applying non-boolean operations across their values -- the connection to automatic function application wasn't clear to me from your initial comment.
I can see the parallel now, so thank you for clarifying.
EDIT: On further reflection, another reason that the connection escaped me is that even after applying functions across a junction’s values, the junction is never useful (without significant and unnecessary effort) for accessing any individual values that it contains.
Anyway, thanks for sharing your thoughts and bearing with me.
Ahh, Perl operators! My favorite was the goatse operator, =()=, which assigned (no pun intended) the length of an array, if I recall correctly.
I dream of a day where one can post a Raku article on HN and not encounter a comments section full of digressions into discussing Perl.
There is some sense to it by means of comparison, but the constant conflation of the two becomes tiresome.
But in that spirit, let's compare:
The =()= "operator" is really a combination of Perl syntax[1] that achieves the goal of converting list context to scalar context. This isn't necessary to determine the length of an array (`my $elems = @array` or, in favor of being more explicit, `my $elems = 0+@array`). It is, however, useful in Perl for counting the results of more complex list contexts on the RHS.
Let's use some examples from its documentation to compare to Raku.
Perl:
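(Both snippets were lost in formatting; these are hedged reconstructions based on the perlsecret documentation and the description below, not the originals:)

my $count =()= "abababab" =~ /a/g;        # Perl: counts 4 matches

Raku:

my $count = +("abababab" ~~ m:g/'a'/);    # prefix + numifies the match list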
That's it. `+` / `.elems` are literally all you ever need to know for gathering a count of elements. The quotes around 'a' in the regex are optional, but I always use them because I appreciate denoting which characters are literal in regexes. (Note also that the regex uses the pair syntax mentioned in OP via `m:g`; additional flags are provided as pairs, e.g. `m:g:i`.)

Another example.
Perl:
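(again, sketches rather than the original snippets:)

my $count =()= split /,/, $string;   # Perl

Raku:

my $count = $string.split(',').elems;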
While precedence can at times be a conceptual hindrance, it's also nice to save some parentheses where it is possible and legible to do so. Opinions differ on these points, of course. Note also that `Str.split` can take string literals as well as regexes.

[1]: See https://github.com/book/perlsecret/blob/master/lib/perlsecre...
> digressions into discussing Perl
Changing the name to Raku doesn't obliterate Perl 6's history as Larry Wall's successor to Perl 5.
I don't know why you would expect people to pretend that they're unrelated. They aren't.
To me it’s not unlike spending all of a thread about Clojure commenting on Java or Common Lisp without even bothering to mention or contrast to Clojure.
There are connections to both but that doesn’t necessarily make them topical.
I disagree that discussing a previous language by a designer (often in terms that seem to conflate equivalence) is usefully relevant to discussion of a different language by that designer.
Note that I have never said that there is zero utility. I just find it tiresome to encounter comments about Perl syntax as if it is automatically useful or interesting to discussions about Raku.
Which is why I took the time to provide an example of what (I would consider) an actually relevant mention of Perl syntax.
> Changing the name to Raku doesn't obliterate Perl 6's history as Larry Wall's successor to Perl 5.
Further than that, the name change only happened after it already had at least one official release under the "Perl 6" name.
It was just as annoying to constantly be pulled into discussions about Perl 5 in threads about Perl 6 back then too.
It has expressions that seem like they could lead to much shorter code. I suspect it would take some time to get used to though...
It is absolutely mindbending to me that all of this language development has happened on top of Perl, of all things.
> It is absolutely mindbending to me that all of this language development has happened on top of Perl, of all things.
Why "of all things?" The Perl philosophy of TIMTOWTDI, and Wall's interest in human-language constructs and the ways that they could influence programming-language constructs, seem to make its successor an obvious home for experiments like this.
It has the same whimsy and DWIM of perl, look at Promise having a status of 'kept' or 'broken' which is more fun than 'fulfilled' or 'rejected'. Brings to mind Perl 5's use of bless, calling the filter function 'grep' and local/global variables created with 'my' and 'our'.
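(e.g., a quick look at that API:)

my $p = Promise.new;
$p.keep('done');
say $p.status;   # Kept
say $p.result;   # done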
Don't forget `tainted` (https://perldoc.perl.org/perlsec#Laundering-and-Detecting-Ta...)!
Indeed! There's lots of magic even in Perl4, some of it obscure.
Wasn’t Raku/Perl6 basically a ground-up reconception more than a building-on-top of existing Perl5?
Yes, though perl5 is also an incredibly bendable language syntax wise - you can add keywords and operators to the compiler via CPAN modules (our async/await syntax is provided simply by doing 'use Future::AsyncAwait' for example).
There is no Perl code in Raku.
Edit: Other than a configuration framework and a test harness.
Shocking that they stopped at five.
I'm sure that there have been more changes from Perl 5.8 to Perl 5.40 than there are between Python 2.0 and Python 3.x (whatever version it is up to at the moment).
What's more is that every change from Python 2 to Python 3 that I've heard of, resembles a change that Perl5 has had to do over the years. Only Perl did it without breaking everything. (And thus didn't need a major version bump.)
I agree. This attempt to fuse higher-order functional programming with magic special behaviors from Perl comes off to me as quixotic. HOP works because you're gluing together extremely simple primitives—ordinary pure functions. You can build big things fearlessly because you perfectly understand the simple bricks they're made of. But here: magic functions—which behave differently on lists-of-scalars vs. lists-of-lists, by special default logic—that's not a good match for HOP. Now you have two major axes of complexity: a vertical one of functional abstraction, and a horizontal one of your "different kinds of function and function application".
Doing higher-order programming in Perl is not something new.
There is a full book about it:
https://en.m.wikipedia.org/wiki/Higher-Order_Perl
What's new in Raku is putting them together with other experimental features as first-class citizens.
>> magic functions—which behave differently on lists-of-scalars vs. lists-of-lists by special default logic ...
That is completely the wrong way to think about it. Before the Great List Refactor those were dealt with the same by most operations (as you apparently want it). And that was the absolute biggest problem that needed to be changed at the time. There were things that just weren't possible to do no matter how I tried. The way you want it to work DID NOT WORK! It was terrible. Making scalars work as single elements was absolutely necessary to make the language usable. At the time of the GLR that was the single biggest obstacle preventing me from using Raku for anything.
It also isn't some arbitrary default logic. It is not arbitrary, and calling it "default" is wrong because that insinuates that it is possible to turn it off. To get it to do something else with a scalar, you have to specifically unscalarify that item in some manner. (While you did not specifically say 'arbitrary', it certainly seems like that is your position.)
Let's say you have a list of families, and you want to treat each family as a group, not as individuals. You have to scalarize each family so that they are treated as a single item. If you didn't do that most operations will interact with individuals inside of families, which you didn't want.
In Raku a single item is also treated as a list with only one item in it depending on how you use it. (Calling `.head` on a list returns the first item, calling it on an item returns the item as if it had been the first item in a list.) Being able to do the reverse and have a list act as single item is just as important in Raku.
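(i.e., a small illustration:)

say (1, 2, 3).head;   # 1
say 42.head;          # 42 -- a single item acts as a one-element list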
While you may not understand why it works the way it works, you are wrong if you think that it should treat lists-of-scalars the same as lists-of-lists.
>> This attempt to fuse higher-order functional programming with magic special behaviors from Perl comes off to me as quixotic.
It is wrong to call it an attempt, as it is quite successful at it. There is a saying in Raku circles that Raku is strangely consistent. Rather than having a special feature for this and another special feature for that, there is one generic feature that works for both.
In Python there is a special syntax inside of an array indexing operation, which (as far as I am aware) is the only place that syntax works, and it is not really like anything else in the language. There is also a special syntax in Raku designed for array indexing operations, but it is just another slightly more concise way to create a lambda/closure. You can use that syntax anywhere you want a lambda/closure. Conversely, if you wanted to use one of the other lambda/closure syntaxes in an array indexing operation, you could.
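(a sketch of the comparison being made -- the same `*` lambda syntax works inside and outside subscripts:)

my @a = 'a' .. 'e';
say @a[* - 1];               # e -- the lambda is handed the array's length
my &second-to-last = * - 2;  # the very same kind of lambda, defined anywhere
say @a[&second-to-last];     # d
say second-to-last(10);      # 8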
The reason that we say that Raku is strangely consistent, is that basically no other high level language is anywhere near as consistent. There is almost no 'magic special behaviors'. There is only the behavior, and that behavior is consistent regardless of what you give it. There are features in Perl that are magic special behaviors. Those special behaviors were specifically not copied into Raku unless there was a really good reason. (In fact I can't really think of any at the moment that were copied.)
Any sufficiently advanced technology is indistinguishable from magic. So you saying that it is magic is only really saying that you don't understand it. It could be magic, or it could be advanced technology, either way it would appear to be magic.
In my early days of playing with Raku I would regularly try to break it by using one random feature with another. I often expected it to break. Only it almost never did. The features just worked. It also generally worked the way I thought it should.
The reason you see it as quixotic is that you see a someone tilting at a windmill and assuming they are insane. The problem is that it maybe it isn't actually a windmill, and maybe you are just looking at it from the wrong perspective.
That concept of "strange consistency" reminds me of my experience writing Clojure, for some reason.
Nil punning in Clojure gives you that kind of experience, for example. Things that would break in other languages, just "work as you'd expect them" in Clojure (except when you drop down to host primitives, and then nils don't behave nicely anymore). In general, it makes for a really pleasant dev experience, I find.
You're making a mistake if you're thinking like that. Applying an operation that generally works on single values over a list of values automatically is an incredibly powerful technique. If you have ever used Numpy, you will appreciate not needing it in many cases where Raku's built-ins suffice.
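(for instance, elementwise work with no explicit loop:)

my @xs = 1, 2, 3;
say @xs »*» 10;    # 10, 20, 30 -- the hyper op applies * to each element
say @xs».sqrt;     # 1, 1.414..., 1.732... -- hyper method call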
> You're making a mistake if you're thinking like that. Applying an operation that generally works on single values over a list of values automatically is an incredibly powerful technique.
Indeed, the defining technique of array programming!
My fundamental objection here it that it is recursive. The non-recursive hyperoperators all have nice clean mathematical definitions. The recursive ones are just weird and ad hoc, at least from the perpective of the underlying mathematical structures.
And if you try to avoid the ad-hoc-ness by formalizing each useful weird behavior as its own documented type with its own documented semantics for application, then you've just reinvented monads. (Which is not to say you shouldn't; IMHO stdlib support for monad types in a dynamic scripting language is long overdue.)
The operation itself is trivially useful but the operator soup makes me want to run screaming.
You're right. It makes way more sense when you compare it to Numpy.
That's not really the issue here. The problem is they've elevated this super weird recursive looping operation to the level of a built-in operator.
Built-in operators should be reserved for tasks that are extremely common and have obvious behaviours.
This would be more suitable as a function, like
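(their example was lost; presumably something in this spirit -- an explicit, named recursion instead of an operator. Note this sketch skips `<<+>>`'s extra behaviours, such as recycling the shorter list:)

sub hyper-add(@a, @b) {
    (@a Z @b).map: -> ($x, $y) {
        $x ~~ Positional && $y ~~ Positional
            ?? hyper-add($x, $y)   # descend into nested lists
            !! $x + $y             # plain elementwise addition
    }
}

say hyper-add([1, [2, 3]], [10, [20, 20]]);   # (11 (22 23))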
There are so many subtle behaviours in `<<+>>` that you really want to spell out. Did you notice that it was [22, 23] and not [22, 13], for example?

National Geographic for the uninitiated.
https://en.m.wikipedia.org/wiki/National_Geographic
https://www.nationalgeographic.com/magazine
Looks more like an advanced Monkey Island puzzle to me.
I got the title confused for "Roku".
Someone downvoted this message. If downvoting also came with a reason, that would be helpful.
The reason is that no one cares about your failure to read correctly. It adds nothing to the conversation. (I didn't downvote.)
I didn't downvote, but this is by far not the first time something like that was stated (Raku being similar to Roku).
I know I'm tired of seeing it, I'm sure others are as well.