I would second Prolog as mind-blowing. I've found you're typically confronted with the core of whatever problem you're solving, and only that core. This is probably what can make it so frustrating sometimes: you have no option but to work out the hard stuff nearly immediately. Not to mention that unconsidered edge cases and mistakes can cause some pretty opaquely wrong results, or a query that never terminates, which can make conventional debugging pretty difficult. The guarantees you get from using the 'pure' core of Prolog do open up some really interesting avenues though; for example, Scryer's debugging library is quite neat in affording _semantic_ debugging: https://www.scryer.pl/debug
Just some additional commentary too - I think this post quite misrepresents Prolog with some of its comparisons.
Prolog at its core is SLD resolution [1] (a form of search) over Horn clauses [2] (first-order logic). Queries posed to the runtime are attempts to find a set of values which will satisfy (make true) the query – whereas SQL is founded on relational algebra, which is more closely aligned with set theory.
Whilst there's probably some isomorphism between satisfying/refuting a logical predicate, and performing various set operations, I'd say it's a bit of a confusion of ideas to say that SQL is based on 'a subset of Prolog'. The author might be thinking about Datalog [3], which is indeed a syntactic subset of Prolog.
I remember learning Prolog, it was tricky to wrap my mind around it, it wasn’t like any other language. The day I finally “got it” I was very happy, until I realized all the other languages I had previously learned, no longer made any sense.
I remember taking a PL class in undergrad, learning Prolog as one of a handful of languages. During that section my brain started to want to "bind" variables to things as I was going about my day, it was very weird.
You can run pure ISO Prolog interactively in your browser on [1]. There's also an extended tutorial for solving toy container logistics problems there ([2]). While it doesn't require prior Prolog knowledge, it's not so much a learning resource as a teaser and practical guide for using Prolog's built-in combinatorial search for linear planning in the classic STRIPS style (David Warren's WARPLAN being among the earliest Prolog applications apart from NLP).
While I am not too familiar with Prolog, I have been intending to study it in depth given that it can be used to design languages/DSLs/expert systems/ML.
Here are some resources I have been collecting:
4) User "bytebach" gives an example of using Prolog as an intermediate DSL in the prompt to an LLM so as to transform English declarative -> Imperative code - https://news.ycombinator.com/item?id=41549823
The frustrating thing about Prolog is that at first it seems you get to ignore the execution model, but very quickly you have to learn to think of it in imperative terms anyhow, so you can ensure termination and so on. Or at least, I never got far enough past that to wrap back around to not coding in it with a background imperative model running in my head.
Not the OP, but personally I would recommend looking into constraint programming, which is related to but different from logic programming (Prolog has extensions for Constraint Logic Programming, which combines the two).
You mentioned you're looking for something new that doesn't have to be related to data analytics; well, constraint programming (and similar) is basically the mirror problem. Instead of data you have the rules of the "game", and the solver's job can be to find a solution, find the optimal solution, find all solutions, or prove there is no solution.
Things like scheduling problems, resource allocation problems, etc. A real-world example would be finding the most efficient cell placement when developing a microchip, this is basically an advanced rectangle packing puzzle.
Much like prolog you define the rules (constraints) and the solver takes over from there. Part of the fun is figuring out the most effective way to model a real-world problem in this manner.
The closest thing to Prolog in this domain would be ASP, with Clingo/Clasp being the best solver available. But you also have regular constraint programming (look into MiniZinc or Google's OR-Tools), mixed-integer programming which is mainly for hardcore optimization problems (commercial solvers sell for tens of thousands of dollars), satisfiability modulo theories (often used for software verification and mathematical proofs), etc.
The mind-blowing bit is that this sort of problem is NP-complete but these solvers can find solutions to utterly massive problems (millions of variables and constraints) in milliseconds sometimes.
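To make that concrete, here's a minimal sketch using Google's OR-Tools CP-SAT solver from Python (pip install ortools); the variables, constraints and objective are invented purely for illustration:

    from ortools.sat.python import cp_model

    model = cp_model.CpModel()
    # Three integer decision variables.
    x = model.NewIntVar(0, 100, "x")
    y = model.NewIntVar(0, 100, "y")
    z = model.NewIntVar(0, 100, "z")

    # State the rules of the "game"...
    model.Add(3 * x + 5 * y + 7 * z == 50)  # use the whole budget
    model.Add(x >= y)                       # some arbitrary business rule
    model.Maximize(2 * x + 4 * y + 9 * z)   # ...and what "best" means

    # ...and let the solver do the searching.
    solver = cp_model.CpSolver()
    if solver.Solve(model) in (cp_model.OPTIMAL, cp_model.FEASIBLE):
        print(solver.Value(x), solver.Value(y), solver.Value(z))

You never say how to search; the model is essentially the whole program.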
Same here: Prolog, and particularly Lambda Prolog, literally blew my mind. I could (physically) feel my brain trying to think differently about the problem to solve. It was an experience.
I need to retake that course (it was 20 years ago...). I also wonder if/how AI could leverage Lambda Prolog to prove things.
Also look into ASP for the real mind-blowing one, especially clingo/clasp.
It's much more powerful than Datalog and can even solve optimization problems without being Turing complete.
My mind-blown experience was listening to a Rich Hickey talk (maybe not [1]) that flipped upside down my accepted way of thinking about types and type system paradigms. I still love writing Haskell and Rust, but I see the warts that he so concisely points out. The Either type really sticks out to me now, and I see the problems with overtyping what are essentially maps, and how rigid that makes every aspect of your programs. Currently I deal with a fairly large Go monolith that had some clean-code practices in it, and changing a single field that is going to make it into the public API ends up being hundreds or thousands of lines of changes. Such a waste of time to CRUD what is a map of strings.
The overtyping seems to be not about static vs dynamic but about nominal typing (functions based on the abstract type) vs structural typing/row polymorphism (functions based on specific fields). Go seems to have structural typing, though it's less used. Do you have the same problem with overtyping while using structural typing?
There was a very similar complaint [1] ("OOP is not that bad") favorably comparing the flexibility of OOP classes in Dart vs typeclasses in Haskell. But as pointed out by /u/mutantmell [2], this is again about the lack of 'structural typing' at the level of modules, with exported names playing the role of fields. First-class modules in functional languages like ML (the Backpack proposal in Haskell, or units in Racket) allow extensible interfaces and are even more expressive than classes in most usual OOP languages. First-class modules are equivalent to first-class classes [3], i.e. a class expression is a value, so a mixin is a regular function which takes a class and returns a class. Both are present in Racket.
My mind wasn't exactly blown; indeed that talk and its arguments give me a sense of deja vu. And then I remembered some of the arguments behind Google's protobuf removing required/optional fields in proto3. It's really the same argument. And that argument is a fairly narrow one when it comes to representing business logic data. You still need Maybe/option types when you are dealing with computer science data. If you are implementing your own linked list, you are going to need that Maybe or option type.
I believe you that he's being concise but don't have time to watch an hour video even at 2x. Can you summarize? (I got to where he was talking about how introducing Maybe in breaking ways will break things, thought "duh" and quit watching.)
The big idea is that record types that have optional values (such as Maybe in Haskell) don't give you enough context on when, where, and how those values can be optional. Additionally, by placing things in buckets, the argument is that records and optional types are more brittle than the alternative, maps that can simply have missing keys for what would be optional. The imagery of a herd of sheep (map) versus compartments into which each sheep can go (records, Optional types, etc.) is brought up to illustrate a desire for more fluid engagement with information that may or may not exist or be needed, all of which is a description of a named aggregate of information we care about in context.
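To make the contrast concrete, a tiny sketch (in Python rather than Clojure, with invented field names): a closed record with declared optional slots versus an open map that simply omits what it doesn't have.

    from dataclasses import dataclass
    from typing import Optional

    # "Compartments": every sheep gets a slot, fixed at declaration time,
    # and an optional slot says nothing about when or why it is absent.
    @dataclass
    class Reservation:
        name: str
        room: Optional[str] = None
        arrival_note: Optional[str] = None

    # "Herd": just the information we actually have right now.
    partial = {"name": "Ada"}
    confirmed = {"name": "Ada", "room": "12B"}

    # Missing keys are simply absent; nothing to unwrap, and learning a new
    # fact later doesn't ripple through a type declaration.
    print(confirmed.get("room"), partial.get("room"))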
While I see the merit in his arguments, I believe his approach to data is "Clojure-like" or Lisp-like, in that it discourages explicit enumeration of all states and their possible configurations in exchange for being able to sculpt these records. But, as a Haskell dev, I do not want to sculpt records in the way he describes. I want to explicitly enumerate all the possibilities. This is, in my mind, a difference in cognitive preference. There is no right or wrong way, and I think what he proposes is elegant. It is ultimately a matter of up-front reasoning about the space versus reaching a synthesis of the problem domain over time through wrestling with the problem itself. It is a statement about what can usefully be known up front, to what extent, and with how much accuracy.
I am about to launch a SaaS that pairs languages to these cognitive patterns as explicit understanding of such information can help bring together people who think similarly about information (team-building) while also opening up the possibility of expanding ones insights with alternative ideas and approaches (as his talk has done for me). The intent is to help hiring teams find devs that match their culture's programming and problem solving styles (whether that be reinforcing or doubling down on ways of thinking and acting).
You don't have half an hour to spare any time in the near future? But you do have the time to scroll HN and comment?
I'm not a huge fan of videos as content delivery mechanisms for simple facts, but this is a talk - an argument made with the intent of convincing you about something that you may find counter-intuitive. What's the point of summarising that, if it loses the granularity of the argument, its persuasive power? "Static types bad, Clojure good?"
What persuasive power are you going to lose if you cut out the jokes, like the one about the 5-year plan being stripes? And that's the first 3% of the talk, so that's pure lossless compression.
> "Static types bad, Clojure good?"
Sure, this kind of summary is useless, but then it is simply too short.
There is no shortage of long talks that lack nuance. Just say that no, you can't effectively summarize this video, instead of doing whatever it is that you are doing here.
> There is no shortage of long talks that lack nuance.
Nuance is not the same thing as a talk designed to build an argument bit by bit and be persuasive. You could summarise an hour-long closing argument for the defence in a jury trial as 'My client is not guilty', but doing so is rather missing the point.
> Just say that no, you can't effectively summarize this video, instead of doing whatever it is that you are doing here.
x == y, but with the added implication that SBF's take on longform is not actually something to aspire to.
The point of summarizing is to condense the substance for potentially hundreds of readers into a sensible set of ideas in a couple of paragraphs. Any wise listener will stay away from talks that need a full half hour to stay “persuasive”.
>> The point of summarizing is to condense the substance for potentially hundreds of readers into a sensible set of ideas in a couple of paragraphs. Any wise listener will stay away from talks that need a full half hour to stay “persuasive”.
“I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. … If you wrote a book, you f'ed up, and it should have been a six-paragraph blog post.” -Sam Bankman-Fried
Ironically, Nate Silver went into great detail--in his book--about what a damn weirdo SBF was even before he ended up in jail. So yeah, if you want to follow the lead of a convicted felon, well . . .
I’m not Rich; I think to summarize what he’s saying I’d need to have a better grasp of the subject material than he does.
I’ve actually struggled to get people on my team to understand what is so great about this language or the ideas behind it. Pointing people at hour-long YouTube videos, which almost require you to understand the source language to follow his examples, hasn't been working.
I’ll think hard on how to summarize this without needing the context I have and come back to this comment. It won’t be what he’d say but I’ll share my view
The key point may be (summarizing from memory): a function signature can be changed backwards-compatibly either by making a parameter it accepts nullable where it was previously non-nullable, and/or by making the return type non-nullable where it was previously nullable. With union types of T | Null, this is a type-safe refactor that requires no code change at the call sites. The Maybe<T> variant, on the other hand, requires code changes no matter what.
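A small sketch of that point using Python 3.10+ union syntax (the names here are invented); the same evolution with a wrapped Maybe type would force every call site to change:

    from dataclasses import dataclass


    @dataclass
    class User:
        name: str


    # Version 1: required parameter, possibly-absent result.
    def lookup_v1(name: str) -> User | None:
        return User(name) if name else None


    # Version 2: the parameter may now be omitted and the result is always
    # present -- the function accepts more and promises more.
    def lookup_v2(name: str | None) -> User:
        return User(name or "anonymous")


    # A caller written against version 1 keeps working (and type-checking)
    # against version 2 with zero edits; its None check just becomes dead code.
    def caller(lookup):
        user = lookup("ada")
        return "missing" if user is None else user.name


    print(caller(lookup_v1), caller(lookup_v2))

With Maybe[User] instead of User | None, callers would have to start wrapping arguments and stop unwrapping results, even though the new signature is strictly more permissive.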
I saw a big uproar in certain strongly typed FP communities around it, but I think it's more a case of different problem domains having different experiences. A lot of software operates in a closed world where its authors control everything, while other software has to communicate with software that may change on a different schedule, owned by a different team, etc.
Rich Hickey discusses the complexities of optionality in programming, particularly in Clojure’s spec system, emphasizing the need for clear schemas and handling of partial information.
Highlights
* Community Engagement: Acknowledges the presence of both newcomers and regulars at the event.
* Fashion Sense: Introduces a humorous take on the programming roadmap focused on fashion.
* Language Design: Explores the challenges of language design, especially regarding optionality in functions.
* Null References: Cites Tony Hoare’s “billion-dollar mistake” with null references as a cautionary example.
* Spec Improvements: Discusses plans to enhance Clojure’s spec system, focusing on schema clarity and usability.
* Aggregate Management: Emphasizes the importance of properly managing partial information in data structures.
* Future Development: Outlines future directions for Clojure’s spec, prioritizing flexibility and extensibility.
Key Insights
* Community Connection: Engaging with both veteran and new attendees fosters a collaborative environment, enhancing knowledge sharing and community growth.
* Humorous Approach: Infusing humor into technical discussions, like fashion choices, can make complex topics more relatable and engaging.
* Optionality Complexity: The management of optional parameters in programming languages is intricate, requiring careful design to avoid breaking changes.
* Null Reference Risks: Highlighting the historical pitfalls of null references serves as a reminder for developers to consider safer alternatives in language design.
* Schema Clarity: Clear definitions of schemas in programming can significantly improve code maintainability and reduce errors related to optional attributes.
* Information Aggregation: Understanding how to manage and communicate partial information in data structures is crucial for creating robust applications.
* Spec Evolution: Continuous improvement of the spec system in Clojure will enhance its usability, allowing developers to better define and manage their data structures.
I'm generally an advocate for a robust, mandatory type system like Rust's. But there are production scenarios where it's a hindrance. Imagine processing an arbitrary JSON document containing several required properties and many additional ones. You can't possibly know what additional properties any given document will have (it's only available at runtime, with infinite variations), but it's often a hard requirement that these extra data get passed through the system. Being forced to create a concrete type for what is effectively an open map + some validation logic is awkward at best. Maps and general-purpose functions that process maps are a better fit for this problem.
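For what it's worth, the "open map plus validation" shape is trivial to write down in a dynamic setting; a hedged Python sketch with made-up field names:

    import json

    REQUIRED = {"id", "timestamp"}  # the handful of properties we rely on


    def process(raw: str) -> dict:
        doc = json.loads(raw)
        missing = REQUIRED - doc.keys()
        if missing:
            raise ValueError(f"missing required properties: {sorted(missing)}")
        # Touch only the fields we know about...
        doc["id"] = int(doc["id"])
        # ...and pass every other property through untouched, whatever it is.
        return doc


    print(process('{"id": "7", "timestamp": "2024-01-01", "anything": {"else": [1, 2]}}'))

(Statically typed languages can approximate this too, e.g. a struct plus a catch-all map for unknown fields, but as the comment says it tends to feel bolted on.)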
Yes but the concrete types at the byte level can be used to say: "these five bits are, concretely, a type tag, and always a type tag", and so it goes from there.
I don't want to be typing at the high level the same way I'm typing at the byte level.
Very much agree with Erlang/Elixir. After years battling and hating the JVM, I was initially very put off by the existence of a VM (BEAM) and by how it forced the concurrency model on me, and it did make getting into serious development with it harder, but once I "got" it, it really blew my mind. The "let it fail" philosophy was particularly mindblowing to me (especially once I understood that it doesn't mean "don't do error handling").
Joe Armstrong distinguishes in his thesis between exceptions and errors: exceptions are when the runtime system doesn't know what to do, and errors are when the programmer doesn't know what to do.
Say your code divides by 0; the runtime system can't handle 1/0 but you the programmer may have anticipated this and know what to do. This is just someplace you write code right there to handle the case (catch an exception, pattern match the 0 case beforehand, whatever).
Your error, on the other hand, means something you expected to hold didn't; recovery inline from unknown unknowns is a fool's errand. Instead you give up and die, and the problem is now the exception you know how to handle "my child process died".
The distinction makes me think of Java's checked exceptions, which I think have gotten an unfairly bad reputation from a lot of slapdash programmers complaining that they didn't want to think about error-cases. (And, TBF, a little syntactic sugar could have gone a long way.)
In brief, a checked exception is part of a method signature, "function wash throws NoSoapException", and the compiler enforces that whoever writes code calling that signature must make some kind of overt decision about what they want to do when that exception comes up. That may mean recovering, wrapping it inside another exception more suitable to their own module's level of abstraction, deliberately ignoring/logging it, or just throwing a non-checked exception.
So in a sense checked exceptions suit those "error" cases where you do expect the programmer consuming your function to at least decide whether they can handle it, and if they don't you still get all those wonderful features like stack traces and caused-by chaining. In contrast, regular (unchecked) exceptions have the opposite expectation, that the caller probably won't be able to handle them anyway and must opt-in to capture them.
Indeed. OTOH, "focus your main code on the happy path, and put your error handling in the supervisor tree" is unfortunately a bit less pithy.
Shades of "boring technology"[0] which might better be described as "choose technology that is a good fit for the problem, proven, reliable and proportionate. Spend your innovation tokens sparingly and wisely".
I'm personally caught between my attachment to the "boring technology" philosophy and my desire to try Elixir, which seems like exactly the kind of obscure and exotic technology that it rejects.
I actually think Elixir is the perfect boring technology now. The language has matured and is not changing much. There will be a big addition at some point with the new type system, but other than that the language is pretty much done. Underneath Elixir is Erlang, which is extremely mature and highly boring in the good way. LiveView might be the only exception, but even that is now becoming boring as it has gotten stable. If you are a believer in boring, you should definitely give Elixir a try.
Erlang is old, reliable tech that used to scare people once upon a very distant time for
- running on a VM (now boringly normal)
- using a pure-functional language with immutable terms (the pure-functional language fad has since then come and gone and this is now archaic and passé if anything)
But languages only get stereotyped once. At any rate, it's pretty boring and old.
Already said by two sibling comments in all the detail I would have, but I wanted to third it: as a subscriber to boring technology myself, and never one to jump on new shiny tech, Elixir/Erlang is the epitome of "boring technology." Elixir does so much on its own that you can often cut out the need for a lot of other dependencies.
I started learning programming with a modern dialect of BASIC, and when I first tried Lua, I had my mind blown when I discovered:
- tables (hashmap)
- regex
- pass by reference (caused a lot of confusion and frustrating bugs)
- metatables
- goto and labels
- multiple return values
After Lua I wanted to try a language that's fast, something compiled to a small native binary. My choice was Nim. And Nim blew my mind to pieces:
- static types
- function declarations I can understand how to use without reading their code
- functional paradigm
- batteries included standard library
- compile-time functions
- generics
- templates
- macros
- exploring and manipulating AST
- const, let, var (immutability)
- pointers and references
- compiler optimizations
- move semantics
- memory management
- I can compile code to both javascript and C?!
- C, C++ and JS interop (ffi)
- I can read and understand source code of standard library procedures?!
- huh, my computer is fast, like crazy fast!!
- and probably more, but that's what I can recall now..
Lua is absolutely pass by value. You’re probably conflating container values with by-reference; that is a common mistake. Pass by reference means that an assignment to an argument updates the variable at the call site, e.g.:
    local x = 0
    function f(byref arg)
      arg = 1
    end
    f(x) -- x == 1
Except that there’s no “byref” in Lua and no way to access the original argument’s location (commonly “lvalue”).
Passing a table to a function and not receiving a deep/shallow copy is still “by value”, not “by reference”, because “by” here is about variables (lvalues), not values.
To clarify what I meant by "pass by reference" here's excerpt from the Lua 5.1 manual:
> Tables, functions, threads, and (full) userdata values are objects: variables do not actually contain these values, only references to them. Assignment, parameter passing, and function returns always manipulate references to such values; these operations do not imply any kind of copy.
I've also seen this described as "reference semantics" vs "value/copy semantics"; maybe that would be a better term?
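Python has the same model, which may make the distinction easier to see: the reference itself is passed by value, so rebinding the parameter does nothing at the call site, while mutating the referenced object is visible there. A tiny sketch:

    def reassign(arg):
        # Rebinding the parameter has no effect on the caller's variable:
        # the reference was passed by value.
        arg = {"replaced": True}


    def mutate(arg):
        # Mutating the object the reference points to is visible to the caller.
        arg["mutated"] = True


    t = {}
    reassign(t)
    mutate(t)
    print(t)  # {'mutated': True}

This is sometimes called "call by sharing": by-value passing of references, with reference semantics for the objects themselves.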
I see, sorry for the noise then. I’ve been programming since the '90s but am still not sure what the correct term would be here, so, yeah.
When we use exactly “pass[ing] [an argument] by reference”, it usually means passing an argument by a reference to its location (as opposed to its contents). I think we just avoid using that exact form if we want a different meaning; there's no proper form for it, because applying it to values rather than arguments rarely makes sense. Copy semantics are non-trivial in general: shallow copying may not be sufficient to create a proper copy, and deep copying may copy half of the VM state, think f(_G, debug.getregistry()).
The Lua manual probably had the same issue with wording it.
Again, C++ with copy constructors is an exception here. You usually receive a copy (shallow or deep depends on the constructor) if no reference, pointer or move semantics are given. That was the edit-removed part above.
Yeah, it's a routinely mistaken term in CS; I have met plenty of interviewers whom I had to correct on it. (Most languages work this way; Java is also pass-by-value, it just passes objects by pointer value. C is also always pass-by-value. C++ is one of the few exceptions: it does have pass-by-reference semantics.)
That idea, `a function can return multiple values`, was a bit of a `Wait what? Ah, I almost forgot :facepalm:` moment for me.
(The quadratic formula function for solving quadratic equations `ax^2 + bx + c = 0` returns TWO values.)
Having done all the functional, lisp-like, logic, stack-based, etc., "non-mainstream" languages, the one that most blew my mind so far is Verilog. Everything executing all the time, all at once, is just such a completely different way of thinking about things. Even though the syntax is C-like, the way you have to think about it is completely different from C. Like it looks imperative, but it's actually 100% declarative: you're describing a circuit.
Everything else maps pretty closely to minor extensions over OOP after you've worked with it for a while. Except logic languages, which map like minor extensions over SQL. Verilog, to me, was in a class of its own.
Is there a way of trying it in a simulator for free, or are all the implementations commercial? I appreciate that I could use Wikipedia, but sometimes you end up going a long way down dead ends that way, so I hope you don't mind my asking.
I used modelsim free edition years ago. Also got a $200ish test board just to see things work on real hardware. Never did anything super complex but I think I got my money's worth of tinkering around out of it.
Go really blew me away with its explicit error handling. As someone who came from the OOP cult of clean code and other useless principles that haven't led our industry to fewer messes over the past 20-30 years, and who was slowly moving into the “simplicity”, “build things that can be easily deleted”, “YAGNI” mindset, it simply clicked.
Then Rust took it a few levels beyond Go.
It’s a really good list, though I suspect the languages you’d add are going to depend on your experience. So I wouldn’t feel too sad if your favorite language isn’t on that list. The author lists Java as a language with an amazing standard library, but something tells me a lot of people will have C#, Go or similar as their Java, and that’s fine!
I'm still convinced that Rust does it _right_ by allowing you to "chain through" errors using `Result`. Having to pepper `if err != nil { return ..., err}` repeatedly into your code is just distracting from the core logic - 99% of the time, that's the response anyway, so being able to just write your happy-case logic and have an implicit "but if there are any errors, return them immediately" (but still returned as a value rather than thrown as an orthogonal Exception) is the best of both worlds.
It is really great. I just wish there was a better story for combining error types. thiserror works, but it’s annoying boilerplate to have to write imo.
When you require that your "algebraic" types always be tagged, so that `type X = A + B` and `type Y = A + B` are always different, you lose most of your capacity of seamlessly composing the types.
In other words, Rust doesn't have a type that means "either IO Error or Network Error", so it can't just compose those two behind the scenes into something that you can later decompose. You have to create some tag, and tell the compiler how to apply the tag, because each tag is different.
That’s a good way to put it. Lots about Rust is great, but the lack of anonymous union types overcomplicates a lot of problems. The trait system would be much less of a headache if you could define local child types as well.
The anyhow crate complements thiserror pretty well in my experience. I use it "top-down" facing where individual errors in any component are defined with thiserror, but then we bubble them up by wrapping them in a `anyhow::Error` if we don't know what to do with them. It also has the nice thing of being able to produce simplified stack traces to help diagnose where things are going wrong. And then you can downcast to component-level thiserror Errors if you want to inspect something closely from high up.
I have been using Golang recently and have the exact opposite feeling about it, which is likely due to my limited exposure to it. Needing to constantly check if a specific return value is not nil just seems so sloppy but I have yet to come across a "better" way as it appears to just be a Golang convention. Is there a good post about better ways to handle errors in Go?
This is the better way as far as I’m concerned. Go has a philosophy to be simple, and this is explicit error handling at its simplest. You deal with errors exactly where they occur. I think a lot of people simply prefer implicit error handling and long exception chains. There is nothing wrong with that but that’s not how Go does error handling.
After a couple of decades in the industry I really prefer explicit error handling because, while powerful, implicit error handling is also easy to get wrong. Which means I have to go through ridiculous chains of exceptions to find some error someone introduced years ago, instead of being able to head directly to it and immediately understand what is going on. So I guess it’s a little contradictory that I prefer the Rust approach, but I think the compromise of added complexity is small enough to make up for the added safety and “manoeuvrability”.
The flip-side is of course that Go’s simplicity is part of the reason it’s seeing a lot of adoption while most other “new” languages aren’t. (Depending on where you live).
Almost all error-related code in Go is just doing by hand what an exception system would do automatically: stop the current function and propagate it to the caller. The error is not being "dealt with" in a meaningful way except at the outermost layers of the onion (i.e. HTTP/RPC middleware converting it to a wire protocol error response) which could have used a try/catch anyway.
I prefer implicit error handling if exceptions are checked and defined as part of the contract, otherwise it’s definitely hard to trace.
I know the biggest complaint with checked exceptions is that people tend to just use catch all exceptions, but that’s just sloppy work. If they’re a junior, that’s when you teach them. Otherwise, people who do sloppy work are just sloppy everywhere anyway.
My issue with a lot of the best-practice principles in SWE is that they were written for a perfect world. Even the best developers are going to do sloppy work on a Thursday afternoon after a day of shitty meetings during a week of almost no sleep because their babies were crying. Then there are the times when people have to cut corners because the business demands it. A million little things like that, which eventually lead to code bases which people like Uncle Bob will tell you were made by people who “misunderstood” the principles.
Simplicity works because it’s made for the real world. As I said I personally think Rust did it better, but if you asked me to come help a company fix a codebase I’d rather do it for a Go one than Rust.
Truthfully, I disagree. I’ve worked at a few different companies and I could absolutely rank them in the quality of their staff.
Been on teams where every individual was free to use their best judgement, we didn’t have a lot of documented processes, and… nothing ever went wrong. Everyone knew sloppy work would come back and bite, so they just didn’t ever do it. Deadlines were rarely a problem because everyone knew that you had to estimate in some extra time when presenting to stakeholders. And the team knew when to push back.
On the other hand, I’ve been on teams where you felt compelled to define every process with excruciating detail and yet experienced people somehow screwed up regularly. We didn’t even have hard deadlines so there was no excuse. The difference between implicit and explicit error handling would have not mattered.
At the end of the day, some of these teams got more done with far fewer failures.
> Go has a philosophy to be simple, and this is explicit error handling at its simplest. You deal with errors exactly where they occur.
“if err != nil {return nil, err}” is the opposite of this philosophy. If you find yourself passing err up the call stack most of the time, and most calls may return an err, it’s still exception-driven code, just without the ergonomics.
It’s not simplicity, it’s head butt deep in the sand.
There is no other way - most error conditions can't be dealt with at the place where the error happened. A parseInt failing is meaningless at the local scope; the error handling depends on who the caller is.
Golang needs monads. If by default it had a monad that propagated errors up the stack, then it would be much cleaner to work with, and should you need another behaviour you could install a different monad.
> Needing to constantly check if a specific return value is not nil just seems so sloppy but I have yet to come across a "better" way as it appears to just be a Golang convention. Is there a good post about better ways to handle errors in Go?
As others have identified, not in the Go language. In other languages which support disjoint unions with "right-biased" operations, such as Haskell's Either[0], Scala's version[1], amongst others, having to explicitly check for error conditions is not required when an Either (or equivalent) is used.
The general idea is that you should pay the cost of handling an error right there where you receive one, even if you're just going to return it. This reduces the incremental cost of actually doing something about it.
If you're given an easy way out, a simpler way of just not handling errors, you either won't even consider handling them, or you'll actively avoid any and all error handling, for fear of polluting your happy path.
I can't say I agree with the approach all the time, but I know I'm much more likely to consider the error flow doing it the Go way, even if I'm constantly annoyed by it.
Error returning is explicit, and the errors returned are declared in the function signature. Error returning plays nice with defer (there is errdefer), and there is limited sugar (the try keyword) that makes life easier.
I personally like it better than exceptions (even if it's much noisier for the 95% common case as another poster put it), both of which I've used enough to appreciate the pros/cons of. But that's about it.
> If you're given an easy way out, a simpler way of just not handling errors
Doesn't go offer the simplest way of all to "just not handle errors"? Just ignore them. It is an option. You can ignore them on purpose, you can ignore them by mistake, but you can always simply ignore them.
In practice I want the error propagated to the caller 95% of the time and swallowed 5% of the time. The problem is Go makes me explicitly spell out (and write a unit test case for!) the behavior I almost always want, while the rarely-desired behavior is default.
But ignoring by mistake gets caught by linters, in practice.
And then doing it on purpose is almost as noisy as bubbling it, and sure to raise an eyebrow in code review.
My experience is with exceptions in Java/C#, Go errors, and C++ absl::StatusOr. In theory, I'd favor checked exceptions. In practice, I find that I'm always running away from polluting the happy path and coming up with contrived flow when I want to handle/decorate/whatever some errors but not others, and that giving types to checked exceptions becomes a class hierarchy thing that I've also grown to dislike. Both Go and C++/Abseil are indifferent if noisy to me (and C++ annoys me for plenty other reasons). Maybe elsewhere I'd find options better. Maybe.
Isn’t the reason, by chance, how try-catch works lexically? I find that to be another overlooked billion-dollar mistake. It’s not that the happy path gets a crack in it, but the absolute monstrosity of a catch ceremony together with a scope torn between two blocks. “try {} catch {} finally {}” should be “[try] { catch {} finally {} }” absolutely everywhere.
He basically simulates the jump to catch by hand. I understand the authority of Rob Pike, but I don’t understand why this idea is great.
Processes IRL don’t look like write() three times. They look like foo(); bar(); baz(); quux();, where the calls weren’t designed to cooperate under a single context. If only there was an implicit err-ref argument which could be filled by some “throw” keyword, and the rest would auto-skip to the “catch”.
Coming from Python, as an amateur dev, error handling in Go was annoying as hell. But I had no other choice so I went for it.
After a few programs I realized I had never really thought about error handling - Go forces you to decide where you want to handle your error, where to inform about it, etc.
It is still annoying (especially the boilerplate) but it had a net positive value for me in terms of learning.
1. Basic on ZX Spectrum around 5-6 years old. Draw circle and a box and a triangle and... Look ma, a rocket!
2. Pascal around 10-12. I can write a loop! and a function. But anything longer than 100 lines doesn't work for some reason. C is also cool but what are pointers?
3. PHP around 14-15. The Internet is easy!
4. Perl around the same time. I can add 5 to "six" and get "5six". Crazy stuff.
5. C around 16-17. Real stuff for real programmers. Still didn't understand pointers. Why can't it be more like Pascal?!
6. C++ around 18-19. I can do ANYTHING. Especially UIs in Qt. UI is easy! Now, let's make a Wolf3d clone...
7. Common Lisp + Emacs Lisp around 20. No compilation?! REPLs? What are atoms? Cons cells? Live redefinition of everything? A macro? A program image?
8. Python around 22. Ok, so languages can be readable.
9, 10, 11, ... StandardML for beautiful types and compiler writing, vanilla C and x86 assembly/architecture the deep way, back to Lisps for quick personal development...
Looking back, it's Lisps and vanilla C that stuck.
No Objective-C for me. I was always living in the Linux/x86 world as long as I remember. Isn't it like C with dynamic dispatch bolted on..?
As strange as it may sound, I do know quite a lot about Smalltalk implementation details without ever using the language. Self (a Smalltalk dialect) papers were big in the jit compiler writer community in late 90s and early 2000s. A couple of classic general programming books used Smalltalk in examples.
Unfortunately Dijkstra and Iverson took personal dislikes to each other's approach, or we might have had a language that abstracted data flow like Iverson's APL and abstracted code flow like Dijkstra's Guarded Commands.
My university had essentially a weird-languages course to blow our minds. Smalltalk (everything is a message to an object, back when C++ was the new hotness), Lisp (everything is a sexpr that you can redefine), Prolog (everything is a clause to search for known axioms).
Later: Lisp again, because of closures, and because CLOS let you program how method dispatch should work and CLCS let you resume from just before an error. Haskell, because you can lazily consume an infinite structure, and every type contains _|_ because you can't be sure that every function will always return a value. Java, back when the language was poor but the promise was "don't send a query, send code that the backend will run securely" (this was abandoned).
Was that the case for you? I'm especially curious about the exams, if there were any, because it's probably hard for a student to keep the whole context in mind, and hard for the teacher to evaluate students on it.
I remember my mind being blown when I first came across list comprehensions. It was one thing to be able to effectively generate one list from another:
    Squares = [X*X || X <- [1,2,3,4]].
(This can be read as, Squares equals a list of X*Xs such that each X is a member of [1,2,3,4])
Going from that to then realizing that you can use list comprehensions to implement quicksort in basically 2 lines of code:
    qsort([]) ->
        [];
    qsort([H | T]) ->
        qsort([ X || X <- T, X < H ]) ++ [H] ++ qsort([ X || X <- T, X >= H ]).
These examples are written in Erlang though list comprehensions are found in many other languages, Python and Haskell to name a couple.
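For comparison, a rough Python rendering of the same two-clause quicksort (illustrative only, not an efficient sort):

    def qsort(xs):
        if not xs:
            return []
        h, *t = xs
        return qsort([x for x in t if x < h]) + [h] + qsort([x for x in t if x >= h])


    print(qsort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]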
I find this style changes the way I think about and write code transformations.
It's also in shell pipelines, R's magrittr, and Clojure's thread macros, and can be emulated in some OO languages with methods that return the transformed object itself.
I wish there was a linter that could limit it though. It's like coders' subconsciouses say "the feature exists, so I have to use it to the greatest extent possible". Or, "assigning intermediate values is now an anti-pattern", and we end up with things that are like 20 straight pipes, with more pipes inside lambdas of those pipes, etc., that result in a function that is totally incomprehensible. All it really saves us from is this:
    let x1 = one(x);
    let x2 = two(x1);
    let x3 = three(x2);
The other advantage of being explicit is you can set breakpoints and see the intermediate values when debugging.
Granted, the same could be said about `three(two(one(X)))`, so it's not specifically a pipe operator problem. It's just that I see it a lot more ever since pipe operators (or their cousins "streams", "extension methods", etc) have been popularized.
My guess is it's because `three(two(one(X)))` across multiple lines would need indentation on each nested function, and end up looking obviously obtuse with too many levels of nesting. Pipes make it look cleaner, which is great at some level, but also more tempting to get caught up in "expression golf" leading to extremely concise but incomprehensible code.
I’ll second Prolog as mind blowing. It is a fundamentally different way to think about problem solving using a computer. Well worth even a trivial playing with it.
I've been doing advent of code with Uiua [0], an array and stack programming language. And it feels like playing Tower of Hanoi with data and building pipelines with functions.
I think there are a couple of programming languages out there that everyone needs to come into contact with at least once. I will mention the three that actually changed my mind (I took these in a semester at uni): Haskell, Prolog and Smalltalk (even if you do not like or use them, they will force you to think radically differently). Then you should add a structured language (C or Pascal will suffice). If you like metal, try assembly. Yes, there are others like APL and Forth, but I can't say anything about those as I haven't tried them. Lastly, a Lisp or Scheme flavour won't be bad to experience as well.
It was about APL that Alan J. Perlis said "A language that doesn't affect the way you think about programming, is not worth knowing." (number 19 of https://cpsc.yale.edu/epigrams-programming)
I learned programming with Pascal, and Turbo Pascal (not the language itself) was what blew my mind back then. The Swag libraries weren't the language itself either, but for the pre-internet age they were a jump forward.
Then I had to do things in OS/2, and then it was REXX's turn. Shell scripting didn't have to be as basic as what I knew from DOS. Some years later I moved to bash, and from complex scripts to long but very powerful one-liners, so it was another shock.
I was still working with OS/2 when I learned Perl, with its regexes, its hashes, and an interesting semantic approach to it all. For many years it was my "complex" shell scripting language. Python was not as mind-blowing, or at least didn't have the same kind of impact on me.
Implementing and using a Forth for a while for embedded real-time scripting was great in really reinforcing how everything in programming boils down to just numbers. Though practically speaking, managing the stack gets old quickly.
Have you tried RPL (reverse Polish Lisp)? It's on later HP calculators, and it's pretty elegant for a small-machine language. The type system includes matrices and functions.
More no-code than programming language but Lotus Notes totally blew my mind.
Imagine going from pre World Wide Web - and for many companies pre-email - age of local area networks (for file sharing only) and 38.K modems directly to group and user oriented, email integrated, distributed, replicated, graphics and multimedia integrated, no-code, document oriented, user programmable, fully GUI, secure, distributed database applications.
Lotus Notes blew my mind completely. A truly amazing application at the time.
I wrote a lot of code for Lotus Notes circa 1995 for an early web app. It was still code (Lotus Notes script), just that I had to figure out how to do things without loops and recursion.
Round about the seventh language, he stops using exclamation marks. The list goes from "Mind blown: Programming my own games!" to "Mind blown: Some support for writing my own gradual type system."
I think that actually happens to us all. The first time you write an infinite loop to print out that your friend is a doofus to show off is magical. 25 years in and you start to value… other things… that maybe don’t have the childish whimsy and fun.
That’s OK. In the same way that when you start to read books, you might like Blyton, Dahl or diving into a Hardy Boys, by the time you’ve been reading fiction for a few decades your tastes might become a bit more philosophical.
The trick is to introduce these things to people when they’re ready for them, and to not treat those who haven’t experienced them as in some way inferior.
I’m not going to shove Proust down someone’s neck or make them feel dumb for not having read any of his work, in the same way I am going to think carefully about whether somebody would benefit from seeing some Prolog.
What does interest me a lot about this list is that it’s not just a “well, look, I found Clojure and think it’s better than Python, YMMV”, or “if you don’t like or understand Haskell maybe you are just too dumb to grok eigenvectors” brag piece.
There is a thought about what you might get from that language that makes it worth exploring. As a result I might revisit OCaml after a brief and shallow flirtation some years ago, and go and take a look at Coq which I doubt I can use professionally but sounds fascinating to explore.
Are you being flippant, or is there really a sense in which eigenvectors are relevant to Haskell? To the best of my knowledge despite all the category-theoretical goodness in the language, there's no meaningful language-level connection to linear algebra, -XLinearTypes notwithstanding.
- HyperCard (also on that list, and I agree with almost all points made): It was magic to get into Developer mode and to create stacks --- it's unfortunate that Asymetrix ToolBook didn't get further, that the Heizer stack for converting HyperCard stacks to ToolBook wasn't more widely available, and a shame that Runtime Revolution, which became LiveCode, reneged on their open-source effort --- hopefully someone will make good on that: https://openxtalk.org/
Unfortunately, I never got anywhere w/ Interfacebuilder.app or Objective-C....
- PythonSCAD: variables and file I/O for OpenSCAD (though to be fair, RapCAD had the latter, it was just hard to use it w/o traditional variables) https://pythonscad.org/
There were a few times I had my mind blown. Once with Scheme when I was watching the old SICP video lectures. But also (and this may be a rather unpopular thing to say) once with C when I had coded something up and compiled it and ran it and it finished instantly. Rust didn't really blow my mind, but I have taken some of its lessons (using error values, lifetime and ownership management) to heart.
Common Lisp, which at the moment is the only language I can see myself using unless I decide to make one myself, never really blew my mind, although the little things it does make it hard for me to imagine switching away from it.
Regular expressions are not listed in the article, but they blew my mind when I first used them in Perl. Any of sed/awk/perl/python/ruby/javascript could have exposed the author to regular expressions, but maybe they didn't find them as transformative as I did.
'*' is one of the first special things that computers made me look at. Even a simple MS-DOS glob like *.exe felt like a superpower. Regexps were the next level, but a two-edged weapon as everybody knows. That said, I never leveraged PCRE's extended capabilities.
Red/Rebol has a different, powerful approach to homoiconicity and DSLs.
And there is this XL language that has a very interesting approach to extending the language, but sadly the compiler is not in a runnable state, so I could only assess it by the docs.
Early on when I was learning, assembly language blew my mind. I went from Basic to Pascal but I wanted to make games so I tried my hand at C/ASM and it was just so wild to be that close to the metal after a couple years at such a high level.
In recent years, Go blew my mind with its simplicity and how strong a case it makes for writing boring code.
It's been decades since writing C/ASM and my guess is what little I remember isn't applicable anymore, but I plan on diving in again at some point if only to better understand Go.
I'd like to put a word in for AS3. As a kid who grew up on hypercard, BASIC, pascal, proce55ing and eventually PHP and javascript, AS3 was the language that bridged the gap for me to strongly typed code. It made me a much better programmer. The graphics APIs were fantastic but they also taught me for the first time how to avoid tightly coupling graphics with logic.
TS + PixiJS is a reasonable replacement for some of it now, but I still sometimes miss having compile-time warnings.
Shame it's impossible to even run the code now and see it. It was a great tool for making visual tools, not just games. Part of that was just being able to build UI components visually as raster or vectors, animated if you wanted, that you could then manipulate interchangeably in code without any overhead. Even though most of my software in it drew everything procedurally, that ability to dip in and out of GUI/design mode made it possible to do beautiful things quite easily. There has been nothing else quite like it.
You can see the code run --- just download and install the Flashplayer from the Wayback Machine on archive.org then install and associate it with .swf files
I happened to be learning functional programming, and really struggling with a different way of thinking, at the same time I saw the movie Arrival, with the alien language as the main plot point.
It struck me that what programming language we pick, fall in love with, dig into, does seem to change our thinking and ways of approaching problems. It can actually change our brain.
One that never seems to make the list, but is worth considering: Excel, the only popular functional programming language! Not the VBA stuff, but the formulaic language of the spreadsheet itself.
My list is similar: PDP-11 BASIC, followed quickly by UCSD Pascal, Turbo Pascal (which brought the most important parts of UCSD Pascal's IDE to CP/M-80/DOS), LISP.
And then Awk, and associative arrays. You can literally create any data structure you need using associative arrays. It won't be fast; but if I could have only one container type, this would be it.
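A small sketch of the trick in Python, using one dict with composite keys the way Awk uses subscripted associative arrays (the tiny tree here is invented):

    # One associative array standing in for a binary tree:
    # ("node", "field") keys instead of dedicated node structs.
    tree = {}
    tree["a", "value"] = 10
    tree["a", "left"] = "b"
    tree["a", "right"] = "c"
    tree["b", "value"] = 5
    tree["c", "value"] = 20


    def total(node):
        if node is None:
            return 0
        return (tree[node, "value"]
                + total(tree.get((node, "left")))
                + total(tree.get((node, "right"))))


    print(total("a"))  # 35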
And then TCL, which is where I really learned to appreciate meta-programming. (I wrote Tcl's "Snit" object system in pure Tcl; it got quite a lot of use once upon a time.)
I make it a point to check out new languages on a regular basis, just to keep myself fresh (most recently, Rust and Haskell).
Strand is one of the languages that particularly blew my mind. I think it was Armstrong who said that Strand was "too parallel".

> Strand is a logic programming language for parallel computing, derived from Parlog[2] which itself is a dialect of Prolog[3]. While the language superficially resembles Prolog, depth-first search and backtracking are not provided. Instead execution of a goal spawns a number of processes that execute concurrently.
Slight tangent: Domain-specific languages (DSLs) that blew my mind.
Here are three DSLs that left lasting and positive impressions:
1) Mathematical programming languages, first GNU MathProg, then later python pulp and pyomo. These launched a subcareer in applying LP and MIP to practical, large-scale problems. The whole field is fascinating, fun, and often highly profitable because the economic gains from optimization can be huge. Think of UPS.
2) Probabilistic programming languages, first BUGS (winbugs, openbugs, JAGS), later Stan, pymc, and various specialized tools. These helped me become a practicing Bayesian who doesn't much like p-value hypothesis testing.
3) The dplyr | ggplot universe. These are really two independent eDSLs but they're from the same folks and work great together. Writing awkward pandas|matplotlib code is just soul-wrecking tedium after using dplyr|ggplot.
I think he means that the stack is not something that you are forced to work with when programming in assembly. You can put data wherever you want (and are allowed to), and jmp into whatever random memory address you want. You can use CPU instructions that handle stack management for you, but you don’t _have_ to.
Somewhere on my random ideas pile is to write a queue-oriented operating system - you know how we have threads? What if we didn't have threads, just a list of things to do, to run on the next available processor? (Haskell's VM calls them sparks)
I mean, that's just a register. The memory region itself is not special in any way, a random "heap" region that would be similarly frequently accessed would be just as fast.
Except of course how return addresses are typically always stored on the machine stack. It is very concretely there and part of the programming model, in my world.
Technically some CPUs have a "jump and link" that saves the return address in a register. Then it is up to the calling convention to save that register on the, also by convention, stack.
* C++: The feeling that I can make a-n-y-t-h-i-n-g. (not true but still mostly true)
* Ruby: elegant, OO, who-needs-compile-time-if-you-have-unit-tests
* Haskell: hard to learn but so worth it, by doing so I became a follow-the-types person
* Elm: not hard at all, a well picked subset of Haskell for browser apps only
* Kotlin: a well typed Ruby, with the ecosystem of the JVM at your disposal
* Rust: when every bit of runtime performance counts (but now without sacrificing safety like with C/C++)
> Just that the concepts either didn’t resonate with me or, more often than not, that I had already discovered these same concepts with other languages, just by the luck of the order in which I learnt languages.
The page has been hugged to death, but it isn't github that is the problem; it is cdn.bootcss.com, where some of the static assets for this site are hosted.
BASIC, but specifically on a TRS-80. I can just type something in and run it. I don't have to wait a week for the teacher to take the card deck over to wherever the mainframe is.
Pascal. That's it, all of it, in those ~4 pages of railroad diagrams expressing the BNF.
C. Like Pascal, but terser and with the training wheels removed. It just fits the way my mind works.
Java. The language itself is kind of "meh", but it has a library for everything.
BBC BASIC. You mean I can POKE things here, and stuff shows up on the screen in a different color?!
Turbo Pascal. Woah this is fast. And Mum is impressed when code sells for money.
Visual Basic. I'm living in the future, Ma. The dentist has a side business selling flowers and needs a program that can handle root canals and roses? I got you.
Perl. Suddenly system administration is fun again.
Java. My mind is blown, in a bad way. This is what I have to write to keep a roof over my head? Ugh.
Go. Aaahhh. Feels like reading W. Richard Stevens again. Coming home to unix, but with the bad parts acknowledged instead of denied.
I found Coq in the list to be very interesting. Is anyone using Coq here for serious math or programming? What kind of problems are you solving with Coq?
And I'd also like to know how different Coq and Lean are. I'm not a mathematician. I'm just a software developer. Is there a good reason for me to pick one over the other?
Coq is cool partially because it can be extracted to OCaml. So you can write part of your program in Coq and prove its correctness, and then write the rest in pleasant OCaml. I believe there are some popular OCaml libraries, like bigint implementations, that do this.
Lean is designed to be a general purpose programming language that is also useful for mathematical proofs. From that perspective it’s kind of trying to be “better haskell”. (Although most of the focus is definitely on the math side and in practice it probably isn’t actually a better haskell yet for ecosystem reasons.)
If you try either, you’ll likely be playing with a programming paradigm very different than anything you’ve used before: tactic oriented programming. It’s very cool and perfect for math, although I don’t think I’d want to use it for everyday tasks.
You won’t go wrong with either, but my recommendation is to try lean by googling “the natural number game”. It’s a cool online game that teaches you lean and the basic of proofs.
By the way, don’t be scared of dependent types. They are very simple. Dependent types just mean that the type of the second element of a tuple can depend on the value of the first, and the return type of a function can depend on the value passed into it. Dependent types are commonly framed as “types that have values in them” or something, which is a misleading simplification.
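To make that concrete, a minimal Lean sketch of a return type that depends on the argument's value (the same idea can be written in Coq):

    -- The *type* of the result depends on the *value* of b:
    -- natOrStringThree true : Nat, natOrStringThree false : String.
    def natOrStringThree (b : Bool) : if b then Nat else String :=
      match b with
      | true  => (3 : Nat)
      | false => "three"

A dependent pair is the other half of the story: a tuple whose second component's type is computed from the first component's value.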
Interesting perspective! I think HN would benefit if we shared our own mind-blowing insights here. A couple from the top of my head:
- Python: slicing, full reflection capabilities, and its use as an interface to almost anything [1], not to mention its role as an embedded interpreter (beyond the GIL debate).
- Z3: one of the closest I know for defining the ‘what’ rather than the ‘how.’ I got lost with Prolog when I tried to add more complex logic with ‘hows’, though I was a complete newbie with Prolog. Z3, however, really boosted my beginner capabilities.
- OMeta: allows me to write fast parsers without spending time resolving ambiguities, unlike ANTLR and other popular parsers using classical techniques.
- Smalltalk/Squeak: everything can be re-crafted in real-time, a blend of an OS and a programming language. The community vibe is unique, and I even have friends who implemented an operating system using Squeak. The best example? TCP/IP implemented in Smalltalk/Squeak! [3]. David Weil and I also created an early proof of concept [4] that used Linux behind the scenes, aiming to ‘compete’ with the QNX floppy disk.
- AutoLISP: a programming language embedded in AutoCAD, which I discovered in high school in the late ’80s—only to just find out that its first stable release is considered to be around 1995 [5].
- REXX on the Commodore Amiga: not only could applications be extended with this language, but they could also interact with each other. A decade later, when I used REXX on an IBM Mainframe, I still preferred the Amiga’s approach.
- Racket: I can draw in the REPL. For example with Racket Turtle.
- C++: object orientation "for C". I was not mesmerized by templates.
- Clipper: first database usage and relatively simple to create business apps.
- Objective-C: the first implementation of GCD [1], providing a C/C++ style language with fast performance, a clean event-loop solution, and reflective capabilities. However, I never liked the memory handling.
- Machine Code/Assembler on the Apple IIe: I could jump directly into the assembler monitor from BASIC.
- APL: a powerful yet complex way to program using concise expressions. Not ideal for casual use—best suited if you really understand the underlying concepts.
By the way, it’s time for Apple to embrace programming languages on iOS and iPadOS!
> ... sounds like my experience with Instaparse (Clojure)
Thank you! I wasn’t aware of Instaparse or its use of PEGs [1] which gives you the same sense about parsing ambiguities.
> REXX - I thought this was ingenious specifically for file/text processing
Formally the REXX in Amiga was called ARexx and included extensions [2]. REXX [3] itself is not specifically for file/text processing but enables you to connect different environments [4].
(zx spectrum) BASIC: nothing like the first contact with AI
C++: pointers, OOP and so much more - good and bad - all in one package
Fortran90: vector programming for humans
Python: general programming for humans
Biggest disappointment: Scratch. Why isn't visual programming more mind blowing?
Anyway, always looking out for these well-regarded "mind-blowing" languages but somehow they never show up in really mind-blowing projects? Would be interesting in this respect to have a list of mind blowing projects that were only possible due to a mind-blowing language.
Coming from higher-level languages, even C, it is actually very mind-opening to see the stack as just 2 numbers (stack base, stack top). Not reading about it in theory, but seeing in assembly how the program literally decrements the top address value.
I also remember finding the interrupt/syscall system surprisingly stupid and simple. Reading the kernel's source really did broaden my horizons!
Opalang blew my mind when I first saw it too - it took a whole lot of web development ideas that I'd seen floating around various languages and frameworks, and actually managed to make them consistent, principled and wonderfully ergonomic. No offence to Ur/Web (probably its closest competitor) but I never managed to quite get to grips with it; somehow Opa managed to make the ideas feel a lot friendlier and easier to use.
$Basic - look, I can print things to the screen
$Assembly - I have no clue what's going on
$Matlab - Ok, this mostly makes sense and I can chart graphs and make functions
$Python - ok I can do all kinds of automation now. It would be great to create an executable to give someone, but this apparently requires traversing the inner circles of hell.
$SQL - declarative languages are weird, but it's easy to get up to speed. I'm doing analysis now over multi-TB datasets.
$GAMS - a declarative language for building linear programming and other kinds of mathematical optimization models (algebraic modeling). This language is weird. Ok. Very weird. I'll go back to the Python API.
$Unix/Bash/Awk - ok, I'm doing everything through the command line and it is beautiful, but after a short while I have to switch to Python.
$Powershell - kind of like the Linux command line, but way more powerful and a lot slower. Can write more complex code, but often the cmdlets are too slow for practical use. Have to move back to Python a lot.
Exploration:
$Lisp - this is supposedly the best language. Read several books on the language. Everything seems less readable and more complicated than Python. Aesthetics are fine. A lot of power here. Limited libraries.
$Haskell - this seems like it is trying too hard. Too much type theory.
$APL - this is really cool. Typing with symbols. Scalars, vectors, and matrices...yeah!
$Prolog - very cool, but ultimately not that useful. We have SQL for databases and other ways to filter data. Prob a good niche tool.
$Forth - this is amazing, but I'm not a low level hardware coder, so it has limited value. Hardly any libraries for what I want.
$Smalltalk - the environment and GUI is part of the application...what?
$Rebol - way too far ahead of its time. Like lisp, but more easy to use and dozens of apps in a few lines of code. Amazing alien technology.
$Java - OMG...why does everything require so much code? People actually use this every day? Aghhh.
$C - where is my dictionary/hash type? What is a pointer? A full page of code to do something I can do in 3 LOC of Python. At least it's fast.
$Perl5 - cool. It's like Python, but more weirdness and less numerical libraries and a shrinking community.
$Perl6(raku) - They fixed the Perl5 warts and made a super cool and powerful language. Getting it performant seems like it will take forever, though.
$OCaml - pretty and elegant. They use this at Jane Street. Why is it so hard to figure out how to loop through a text file (this was 10 years ago)?
$8th - a forth for the desktop. Commercial, but cheap. Very cool ecosystem. Small and eccentric community. I wish this would be open sourced at some point.
$Mathematica - by far the most powerful programming tool/environment I've used for practical engineering and science work. Like lisp, but no environment/library problems. Commercial and pricey. I can create a directed graph and put it on a map of my home town, train a neural network, do differential equations, and create a video out of my graphs all in a single notebook with built-in functions. Nice!
$Swift - can I not get this on Windows?
$F# - oh it's OCaml on .Net, but the documentation is a little hard to follow. Everyone assumes I know C# and .Net.
$Clojure - Rich Hickey is a genius. A lisp IT won't fire me for installing. Oh wait...I don't know the JVM. Workflow kind of soft requires emacs.
I was waiting for this one. Julia's multiple dispatch and the amazing composability were eye-opening. The first time you see it: exchange a Measurement for a simple number and get back a computation with uncertainties, or a Unitful quantity for a simple number and get back a computation with units. Amazing!
I use it at work. I must say I've never experienced that as an issue, so perhaps we use the programming language in a different way.
One issue that I have experienced, which may be related, is that it's hard to have a little bit of controlled type dynamism. That is, once you have a type that is a union with many possibilities, you need to be vigilant not to do operations on it that make type inference give up and return `Any`. This still bites me. There are some solutions to that in packages (e.g. the relatively new Moshi.jl), but I do miss having language-level support for it.
Thanks a lot, I think that's what people meant by being surprised by functions over complex types. But it seems you can still be productive even without it.
Was honestly looking for a comment pointing it out, it really kills the interest I have in an article. Also, what's up with that face on the flying debris on the right?
I would second Prolog as mind-blowing. I've found you're typically confronted with fully engaging with the core of whatever problem you're solving, and only that core. This is probably what can make it so frustrating sometimes as you have no option but to work out the hard stuff nearly immediately; not to mention that unconsidered edge cases, mistakes can cause some pretty opaquely wrong results, or the query not terminating, which can make conventional debugging pretty difficult. The guarantees you get with using the 'pure' core of Prolog do open up some really interesting avenues though, for example Scryer's debugging library is quite neat in affording _semantic_ debugging: https://www.scryer.pl/debug
Just some additional commentary too - I think this post quite misrepresents it with some of the comparisons.
Prolog at its core is SLD Resolution [1] (a form of search) over Horn Clauses [2] (first order logic). Queries posted to the runtime are attempts to find a set of values which will satisfy (cause to be true) the query – whilst SQL is founded on relational algebra which more closely aligned with set theory.
Whilst there's probably some isomorphism between satisfying/refuting a logical predicate, and performing various set operations, I'd say it's a bit of a confusion of ideas to say that SQL is based on 'a subset of Prolog'. The author might be thinking about Datalog [3], which is indeed a syntactic subset of Prolog.
[1]: https://en.wikipedia.org/wiki/SLD_resolution [2]: https://en.wikipedia.org/wiki/Horn_clause [3]: https://en.wikipedia.org/wiki/Datalog
I remember learning Prolog, it was tricky to wrap my mind around it, it wasn’t like any other language. The day I finally “got it” I was very happy, until I realized all the other languages I had previously learned, no longer made any sense.
I remember taking a PL class in undergrad, learning Prolog as one of a handful of languages. During that section my brain started to want to "bind" variables to things as I was going about my day, it was very weird.
What Prolog resources would you recommend for someone who wants to learn it and have their mind blown?
Power of Prolog. Nearly every video is mind blowing. Blog is great too, and Markus is a great guy also.
https://youtube.com/@thepowerofprolog
https://www.metalevel.at/prolog
You can run pure ISO Prolog interactively in your browser on [1]. There's also an extended tutorial for solving toy container logistics problems there ([2]). Though while it doesn't require prior Prolog knowledge, it's not so much a learning resource as it is a teaser and practical guide for using Prolog's built-in combinatorical search for linear planning in the classic STRIPS style (David Warren's WARPLAN being among the earliest Prolog applications apart from NLP).
[1]: https://quantumprolog.sgml.net
[2]: https://quantumprolog.sgml.net/container-planning-demo/part1...
A previous HN discussion - https://news.ycombinator.com/item?id=40994552
Thanks!
While i am not too familiar with Prolog, i have been intending to study it in depth given that it can be used to design Languages/DSLs/Expert Systems/ML. Here are some resources which i have been collecting;
1) Formal Syntax and Semantics of Programming Languages: A Laboratory-Based Approach by Ken Slonneger uses Prolog to design/implement languages - http://homepage.divms.uiowa.edu/~slonnegr/ and https://homepage.cs.uiowa.edu/~slonnegr/plf/Book/
2) Defining and Implementing Domain-Specific Languages with Prolog (PhD thesis of Falco Nogatz) pdf here - https://opus.bibliothek.uni-wuerzburg.de/opus4-wuerzburg/fil...
3) Use Prolog to improve LLM's reasoning HN thread - https://news.ycombinator.com/item?id=41831735
4) User "bytebach" gives an example of using Prolog as an intermediate DSL in the prompt to an LLM so as to transform English declarative -> Imperative code - https://news.ycombinator.com/item?id=41549823
5) Prolog in the LLM Era (a series by Eugene Asahara) - https://eugeneasahara.com/category/prolog-in-the-llm-era/
The frustrating thing about prolog is that at first it seems you get to ignore the execution model, but very very quickly you have to learn to think of it in imperative terms anyhow, so you can ensure termination and stuff. Or at least, I never got far enough past that to wrap back around to not coding in it with a background imperative model running in my head.
Would you recommend prolog over datalog for someone looking for something new that doesn't have to be related to their work in data analytics?
Not the OP but personally I would recommend looking into constraint programming which is related but different to logic programming (Prolog has extensions for Constraint Logic Programming which combines the two).
You mentioned you're looking for something new that doesn't have to be related to data analytics, well constraint programming (and similar) is basically the mirror problem. Instead of data you have the rules of the "game" and the solver's job can be to: find a solution, find the optimal solution, find all solutions, or prove there is no solution.
Things like scheduling problems, resource allocation problems, etc. A real-world example would be finding the most efficient cell placement when developing a microchip; this is basically an advanced rectangle-packing puzzle.
Much like prolog you define the rules (constraints) and the solver takes over from there. Part of the fun is figuring out the most effective way to model a real-world problem in this manner.
The closest thing to Prolog in this domain would be ASP, with Clingo/Clasp being the best solver available. But you also have regular constraint programming (look into MiniZinc or Google's OR-Tools), mixed-integer programming which is mainly for hardcore optimization problems (commercial solvers sell for tens of thousands of dollars), satisfiability modulo theories (often used for software verification and mathematical proofs), etc.
The mind-blowing bit is that this sort of problem is NP-complete but these solvers can find solutions to utterly massive problems (millions of variables and constraints) in milliseconds sometimes.
Same. Prolog, and particularly Lambda Prolog, literally blew my mind. I could (physically) feel my brain trying to think differently about the problem to solve. It was an experience. I need to retake that course (it was 20 years ago...). I also wonder if/how AI could leverage Lambda Prolog to prove things.
Also look into ASP for the real mind-blowing one. Especially Clingo/Clasp. Much more powerful than Datalog; it can even solve optimization problems without being Turing complete.
My mind-blown experience was listening to a Rich Hickey talk (maybe not [1]) that flipped upside down my accepted method of thinking about types and type system paradigms. I still love writing Haskell and Rust, but I see the warts that he so concisely points out. The Either type really sticks out to me now, and I see the problems with overtyping what are essentially maps, and how rigid that makes every aspect of your programs. Currently I deal with a fairly large Go monolith that has some clean-code practices in it, and changing a single field that is going to make it into the public API ends up being hundreds or thousands of lines of changes. Such a waste of time to CRUD what is a map of strings.
1. https://youtu.be/YR5WdGrpoug?si=jRsXcYlwRuz0C1IN
The overtyping issue seems to be not about static vs dynamic, but about nominal typing (functions based on the abstract type) vs structural typing/row polymorphism (functions based on specific fields). Go seems to have structural typing, though it is less used. Do you have the same problem with overtyping while using structural typing?
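For what it's worth, here's roughly what I mean by the structural style; this is just a TypeScript sketch I made up for illustration, nothing from the talk:

    // Any value with a `name: string` field satisfies the parameter type,
    // so adding unrelated fields elsewhere doesn't ripple through signatures.
    function greet(entity: { name: string }): string {
      return `hello, ${entity.name}`;
    }

    const user = { name: "Grace", role: "admin", createdAt: new Date() };
    greet({ name: "Ada" }); // fine
    greet(user);            // also fine: the extra fields are simply ignored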
There was a very similar complaint [1] ("OOP is not that bad") favorably comparing the flexibility of OOP classes in Dart vs typeclasses in Haskell. But as pointed out by /u/mutantmell [2], this is again about lack of 'structural typing' at the level of modules with exported names playing the role of fields. First class modules in functional languages like ML (backpack proposal in Haskell or units in Racket) allow extensible interfaces and are even more expressive than classes in most usual OOP languages. First-class modules are equivalent to first-class classes [3] ie. a class expression is a value, so a mixin is a regular function which takes a class and returns a class. Both are present in Racket.
[1] https://news.ycombinator.com/item?id=41901577
[2] https://www.reddit.com/r/haskell/comments/1fzy3fa/oop_is_not...
[3] https://docs.racket-lang.org/guide/unit_versus_module.html
My mind wasn't exactly blown; indeed that talk and its arguments give me a sense of deja vu. And then I remembered some of the arguments behind Google's protobuf removing required/optional fields in proto3. It's really the same argument. And that argument is a fairly narrow one when it comes to representing business logic data. You still need Maybe/option types when you are dealing with computer science data. If you are implementing your own linked list, you are going to need that Maybe or option type.
Just a heads up, proto3 supports optional again.
I believe you that he's being concise but don't have time to watch an hour video even at 2x. Can you summarize? (I got to where he was talking about how introducing Maybe in breaking ways will break things, thought "duh" and quit watching.)
The big idea is that record types that have optional values (such as Maybe in Haskell) don't give you enough context on when, where, and how those values can be optional. Additionally, by placing things in buckets, the argument is that records and optional types are more brittle than the alternative, maps that can simply have missing keys for what would be optional. The imagery of a herd of sheep (map) versus compartments into which each sheep can go (records, Optional types, etc.) is brought up to illustrate a desire for more fluid engagement with information that may or may not exist or be needed, all of which is a description of a named aggregate of information we care about in context.
While I see the merit in his arguments, I believe his approach to data is "Clojure-like" or Lisp-like, in that it discourages explicit enumeration of all states and their possible configurations for the tradeoff of being able to sculpt these records. But, as a Haskell dev, I do not want to sculpt the records in the way he describes. I want to explicitly enumerate all the possibilities. This is, in my mind, a difference in cognitive preference. There is no right or wrong way and I think what he proposes is elegant. It is ultimately a matter of up front reasoning about the space compared with reaching a synthesis of the problem domain over time through wrestling with the problem itself. It is a statement about the efficacy of saying what can be known up front and to what extent, how much accuracy, and what utility.
I am about to launch a SaaS that pairs languages to these cognitive patterns as explicit understanding of such information can help bring together people who think similarly about information (team-building) while also opening up the possibility of expanding ones insights with alternative ideas and approaches (as his talk has done for me). The intent is to help hiring teams find devs that match their culture's programming and problem solving styles (whether that be reinforcing or doubling down on ways of thinking and acting).
You don't have half an hour to spare any time in the near future? But you do have the time to scroll HN and comment?
I'm not a huge fan of videos as content delivery mechanisms for simple facts, but this is a talk - an argument made with the intent of convincing you about something that you may find counter-intuitive. What's the point of summarising that, if it loses the granularity of the argument, its persuasive power? "Static types bad, Clojure good?"
What persuasive power are you going to lose if you cut out the jokes, like the one that the 5-year plan is stripes? And that's the first 3% of the talk, so that's pure lossless compression.
> "Static types bad, Clojure good?"
Sure, this kind of summary is useless, but then it is simply too short.
There is no shortage of long talks that lack nuance. Just say that no, you can't effectively summarize this video, instead of doing whatever it is that you are doing here.
> There is no shortage of long talks that lack nuance.
Nuance is not the same thing as a talk designed to build an argument bit by bit and be persuasive. You could summarise an hour-long closing argument for the defence in a jury trial as 'My client is not guilty', but doing so is rather missing the point.
> Just say that no, you can't effectively summarize this video, instead of doing whatever it is that you are doing here.
x == y, but with the added implication that SBF's take on longform is not actually something to aspire to.
That take honestly says more about SBF than it does about the merits of longform writing.
HN comments and text can be skimmed, videos much less so, you need to apply continuous attention to it.
The point of summarizing is to condense the substance for potentially hundreds of readers into a sensible set of ideas in a couple of paragraphs. Any wise listener will stay away from talks that need a full half hour to stay “persuasive”.
>> The point of summarizing is to condense the substance for potentially hundreds of readers into a sensible set of ideas in a couple of paragraphs. Any wise listener will stay away from talks that need a full half hour to stay “persuasive”.
“I don’t want to say no book is ever worth reading, but I actually do believe something pretty close to that. … If you wrote a book, you f'ed up, and it should have been a six-paragraph blog post.” -Sam Bankman-Fried
Ironically, Nate Silver went into great detail--in his book--about what a damn weirdo SBF was even before he ended up in jail. So yeah, if you want to follow the lead of a convicted felon, well . . .
This is actually more complex than you’d think. Just watch the playlist linked below and it will click eventually.
https://youtube.com/playlist?list=PLFDE868BCF58A3950
I’m not Rich, I think to summarize what he’s saying I’d need to have a better grasp of the subject material than he does.
I’ve actually struggled to get people on my team to understand what is so great about this language or the ideas behind it. Pointing people at hour-long YouTube videos, where you almost need to understand the source language to follow the examples of what he’s talking about, hasn’t been working.
I’ll think hard on how to summarize this without needing the context I have and come back to this comment. It won’t be what he’d say but I’ll share my view
The key point may be (summarizing from memory): a function signature can be changed backwards compatibly either by making a parameter's type nullable where it was non-nullable, and/or by making the return type non-nullable where it was nullable. With union types of T | Null, this is a type-safe refactor that requires no code change. The Maybe<T> variant, on the other hand, requires code changes no matter what.
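A rough sketch of that first point, in TypeScript rather than anything from the talk (names are mine):

    // Widening a parameter from string to string | null is backwards compatible:
    // every existing call site still type-checks and needs no edits.
    function greetV1(name: string): string { return `hi ${name}`; }
    function greetV2(name: string | null): string {
      return name === null ? "hi" : `hi ${name}`;
    }

    // The same evolution with a wrapper type breaks every caller, because the
    // argument now has to be wrapped and unwrapped explicitly.
    type Maybe<T> = { kind: "just"; value: T } | { kind: "nothing" };
    function greetV3(name: Maybe<string>): string {
      return name.kind === "just" ? `hi ${name.value}` : "hi";
    }
    // greetV1("Ada") and greetV2("Ada") both compile unchanged;
    // greetV3 needs greetV3({ kind: "just", value: "Ada" }) at every call site.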
I saw a big uproar in certain strongly typed FP communities around it, but I think it's more like different problem domains having different experiences. Many software operates in a closed world where they control everything, while other software has to communicate with other software that may change with a different schedule, owned by a different team, etc.
I wrote a bit more about it here: https://news.ycombinator.com/context?id=42020509
Here's a summary by notegpt:
Summary
Rich Hickey discusses the complexities of optionality in programming, particularly in Clojure’s spec system, emphasizing the need for clear schemas and handling of partial information.
Highlights
* Community Engagement: Acknowledges the presence of both newcomers and regulars at the event.
* Fashion Sense: Introduces a humorous take on the programming roadmap focused on fashion.
* Language Design: Explores the challenges of language design, especially regarding optionality in functions.
* Null References: Cites Tony Hoare’s “billion-dollar mistake” with null references as a cautionary example.
* Spec Improvements: Discusses plans to enhance Clojure’s spec system, focusing on schema clarity and usability.
* Aggregate Management: Emphasizes the importance of properly managing partial information in data structures.
* Future Development: Outlines future directions for Clojure’s spec, prioritizing flexibility and extensibility.
Key Insights
* Community Connection: Engaging with both veteran and new attendees fosters a collaborative environment, enhancing knowledge sharing and community growth.
* Humorous Approach: Infusing humor into technical discussions, like fashion choices, can make complex topics more relatable and engaging.
* Optionality Complexity: The management of optional parameters in programming languages is intricate, requiring careful design to avoid breaking changes.
* Null Reference Risks: Highlighting the historical pitfalls of null references serves as a reminder for developers to consider safer alternatives in language design.
* Schema Clarity: Clear definitions of schemas in programming can significantly improve code maintainability and reduce errors related to optional attributes.
* Information Aggregation: Understanding how to manage and communicate partial information in data structures is crucial for creating robust applications.
* Spec Evolution: Continuous improvement of the spec system in Clojure will enhance its usability, allowing developers to better define and manage their data structures.
Pass the link to NotebookLM and get the podcast hosts to summarize it for you?
I'm generally an advocate for a robust, mandatory type system like Rust's. But there are production scenarios where it's a hindrance. Imagine processing an arbitrary JSON document containing several required properties and many additional ones. You can't possibly know what additional properties any given document will have (it's only available at runtime, with infinite variations), but it's often a hard requirement that this extra data gets passed through the system. Being forced to create a concrete type for what is effectively an open map + some validation logic is awkward at best. Maps and general-purpose functions that process maps are a better fit for this problem.
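Something like this is what I have in mind; a TypeScript sketch with made-up field names, where an index signature plays the role of the open map:

    // A few required, validated fields plus arbitrary extra properties
    // that just ride along through the system untouched.
    type Envelope = {
      id: string;
      timestamp: string;
      [extra: string]: unknown;
    };

    function normalize(doc: Envelope): Envelope {
      // Touch only what we know about; everything else passes through as-is.
      return { ...doc, timestamp: new Date(doc.timestamp).toISOString() };
    }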
At the byte level it's all concrete types anyway
Yes but the concrete types at the byte level can be used to say: "these five bits are, concretely, a type tag, and always a type tag", and so it goes from there.
I don't want to be typing at the high level the same way I'm typing at the byte level.
Over time I have grown to appreciate the versatility of dictionaries for modelling data, especially in Python.
> that flipped upside down my accepted method of thinking about types and type system paradigms.
Could you please write where in the video is Rich talking about it?
The entire video
Very much agree with Erlang/Elixir. After years battling and hating the JVM, I was initially very put off by the existence of a VM (BEAM) and by how it forced the concurrency model on me, and it did make getting into serious development with it harder, but once I "got" it, it really blew my mind. The "let it fail" philosophy was particularly mindblowing to me (especially once I understood that it doesn't mean "don't do error handling").
Ya, "let it fail" is a semi-unfortunate tag line. A lot of people think it means literally to not handle any errors ever and get really frustrated.
Joe Armstrong distinguishes in his thesis between exceptions and errors: exceptions are when the runtime system doesn't know what to do, and errors are when the programmer doesn't know what to do.
Say your code divides by 0; the runtime system can't handle 1/0 but you the programmer may have anticipated this and know what to do. This is just someplace you write code right there to handle the case (catch an exception, pattern match the 0 case beforehand, whatever).
Your error, on the other hand, means something you expected to hold didn't; recovery inline from unknown unknowns is a fool's errand. Instead you give up and die, and the problem is now an exception you do know how to handle: "my child process died".
The distinction makes me think of Java's checked exceptions, which I think have gotten an unfairly bad reputation from a lot of slapdash programmers complaining that they didn't want to think about error-cases. (And, TBF, a little syntactic sugar could have gone a long way.)
In brief, a checked exception is part of a method signature, "function wash throws NoSoapException", and the compiler enforces that whoever writes code calling that signature must make some kind of overt decision about what they want to do when that exception comes up. That may mean recovery, wrapping it inside another exception more-suitable to their own module's level of abstraction, deliberately ignore/log it, or just throw a non-checked exception.
So in a sense checked exceptions suit those "error" cases where you do expect the programmer consuming your function to at least decide whether they can handle it, and if they don't you still get all those wonderful features like stack traces and caused-by chaining. In contrast, regular (unchecked) exceptions have the opposite expectation, that the caller probably won't be able to handle them anyway and must opt-in to capture them.
This really added something to my way of thinking about this. Thanks.
A "marketing" way of putting it is: "Exceptions are reserved for situations that are truly exceptional."
Indeed. OTOH, "focus your main code on the happy path, and put your error handling in the supervisor tree" is unfortunately a bit less pithy.
Shades of "boring technology"[0] which might better be described as "choose technology that is a good fit for the problem, proven, reliable and proportionate. Spend your innovation tokens sparingly and wisely".
[0]: https://boringtechnology.club/
I'm personally caught between my attachment to the "boring technology" philosophy and my desire to try Elixir, which seems like exactly the kind of obscure and exotic technology that it rejects.
I actually think Elixir is the perfect boring technology now. The language has matured and is not changing much. There will be a big addition at some point with the new type system, but other than that the language is pretty much done. Underneath Elixir is Erlang, which is extremely mature and highly boring in the good way. LiveView might be the only exception, but even that is now becoming boring as it has gotten stable. If you are a believer in boring, you should definitely give Elixir a try.
Erlang is old, reliable tech that used to scare people once upon a very distant time for
- running on a VM (now boringly normal)
- using a pure-functional language with immutable terms (the pure-functional language fad has since then come and gone and this is now archaic and passé if anything)
But languages only get stereotyped once. At any rate, it's pretty boring and old.
Already said by two sibling comments in all the detail I would have, but I wanted to third it: as a subscriber to boring technology myself, and never having been one to jump on new shiny tech, Elixir/Erlang is the epitome of "boring technology." Elixir does so much on its own that you can often cut out the need for a lot of other dependencies.
I started learning programming with a modern dialect of BASIC, and when I first tried Lua I had my mind blown when I discovered:
After Lua I wanted to try a language that's fast, something compiled to a small native binary. My choice was Nim. And Nim got my mind blown to pieces:

Lua is absolutely pass by value. You’re probably mixing container values up with by-reference; that is a common mistake. Pass by reference means that an assignment to an argument updates the variable at the call site, e.g.:
Except that there’s no “byref” in Lua and no way to access the original argument’s location (commonly “lvalue”).

Passing a table to a function and not receiving a deep/shallow copy is still “by value”, not “by reference”, because “by” here is about variables (lvalues), not values.
Edit: removed more confusing parts
To clarify what I meant by "pass by reference" here's excerpt from the Lua 5.1 manual:
> Tables, functions, threads, and (full) userdata values are objects: variables do not actually contain these values, only references to them. Assignment, parameter passing, and function returns always manipulate references to such values; these operations do not imply any kind of copy.
I've also seen this described as "reference semantics" vs "value/copy semantics"; maybe that would be a better term?
I see, sorry for the noise then. I've been programming since the '90s but I'm still not sure what the correct term would be here, so, yeah.
When we say exactly “pass[ing] [an argument] by reference”, it usually means passing an argument by a reference to its location (as opposed to its contents). I think we just avoid using that exact form if we want a different meaning; there’s no proper form, because applying it to values rather than arguments rarely makes sense. Copy semantics are non-trivial in general: shallow copying may not be sufficient to create a proper copy, and deep copying may copy half of the VM state, think f(_G, debug.getregistry()).
The Lua manual probably had the same issue with wording it.
Again, C++ with copy constructors is an exception here. You usually receive a copy (shallow or deep, depending on the constructor) if no reference, pointer or move semantics are given. That was the edit-removed part above.
Yeah, it's a routinely mistaken term in CS; I have met plenty of interviewers whom I had to correct on it. Most languages are pass-by-value; Java, for example, is also pass-by-value, it just passes objects by pointer value. C is also always pass-by-value. C++ is one of the few exceptions: it does have pass-by-reference semantics.
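JavaScript/TypeScript behaves the same way, which makes for a compact illustration of "the reference itself is passed by value" (just a sketch):

    function mutate(t: { x: number }): void {
      t.x = 42;          // mutation through the reference: visible to the caller
    }
    function reassign(t: { x: number }): void {
      t = { x: 99 };     // rebinds only the local parameter: caller unaffected
    }

    const obj = { x: 1 };
    mutate(obj);   // obj.x is now 42
    reassign(obj); // obj.x is still 42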
That idea that a `function can return multiple values` was a bit of a `Wait, what? Ah, I almost forgot :facepalm:` moment for me. (The quadratic formula function for solving quadratic equations `ax^2 + bx + c = 0` returns TWO values.)
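In languages without native multiple returns you can fake it with tuples; a quick TypeScript sketch of the quadratic case (assuming the discriminant is non-negative):

    // Both roots of ax^2 + bx + c = 0 come back together.
    function solveQuadratic(a: number, b: number, c: number): [number, number] {
      const d = Math.sqrt(b * b - 4 * a * c); // assumes b^2 - 4ac >= 0
      return [(-b + d) / (2 * a), (-b - d) / (2 * a)];
    }

    const [r1, r2] = solveQuadratic(1, -3, 2); // r1 === 2, r2 === 1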
Verilog
Having done all the functional, lisp-like, logic, stack-based, etc., "non-mainstream" languages, the one that most blew my mind so far is Verilog. Everything executing all the time, all at once, is just such a completely different way of thinking about things. Even though the syntax is C-like, the way you have to think about it is completely different from C. Like it looks imperative, but it's actually 100% declarative: you're describing a circuit.
Everything else maps pretty close to minor extensions over OOP after you've worked with it for a while. Except logic languages, which map like minor extensions over SQL. Verilog to me was in a class of its own.
Is there a way of trying it in a free simulator, or are all the implementations commercial? I appreciate that I could use Wikipedia, but sometimes you end up going a long way down dead ends that way, so I hope you don't mind my asking.
I used modelsim free edition years ago. Also got a $200ish test board just to see things work on real hardware. Never did anything super complex but I think I got my money's worth of tinkering around out of it.
Go really blew me away with its explicit error handling. As someone who came from the OOP cult of clean code and other useless principles that haven’t led our industry to fewer messes over the past 20-30 years, and who was slowly moving into the “simplicity”, “build things that can be easily deleted”, “YAGNI” mindset, it simply clicked.
Then Rust took it a few levels beyond Go.
It’s a really good list; I suspect the languages you’d add are going to depend on your experience. So I wouldn’t feel too sad if your favorite language isn’t on that list. The author lists Java as a language with an amazing standard library, but something tells me a lot of people will have C#, Go or similar as their Java, and that’s fine!
I'm still convinced that Rust does it _right_ by allowing you to "chain through" errors using `Result`. Having to pepper `if err != nil { return ..., err}` repeatedly into your code is just distracting from the core logic - 99% of the time, that's the response anyway, so being able to just write your happy-case logic and have an implicit "but if there are any errors, return them immediately" (but still returned as a value rather than thrown as an orthogonal Exception) is the best of both worlds.
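Very roughly, the shape of it if you hand-roll the idea outside Rust; this is a TypeScript sketch, and Result/ok/err/andThen are names I made up, not any library's API:

    type Result<T, E> = { ok: true; value: T } | { ok: false; error: E };
    const ok  = <T>(value: T): Result<T, never> => ({ ok: true, value });
    const err = <E>(error: E): Result<never, E> => ({ ok: false, error });

    // Short-circuit on the first error; otherwise keep going with the value.
    function andThen<T, U, E>(r: Result<T, E>, f: (t: T) => Result<U, E>): Result<U, E> {
      return r.ok ? f(r.value) : r;
    }

    const parse = (s: string): Result<number, string> =>
      Number.isFinite(Number(s)) ? ok(Number(s)) : err(`not a number: ${s}`);
    const positive = (n: number): Result<number, string> =>
      n > 0 ? ok(n) : err(`not positive: ${n}`);

    // The happy path reads straight through; any error goes back to the caller.
    const handle = (input: string) => andThen(parse(input), positive);

Rust's `?` bakes that short-circuiting into the language, which is why the happy path stays readable.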
It is really great. I just wish there was a better story for combining error types. thiserror works, but it’s annoying boilerplate to have to write imo.
When you require that your "algebraic" types always be tagged, so that `type X = A + B` and `type Y = A + B` are always different, you lose most of your capacity of seamlessly composing the types.
In other words, Rust doesn't have a type that means "either IO Error or Network Error", so it can't just compose those two behind the scenes into something that you can later decompose. You have to create some tag, and tell the compiler how to apply the tag, because each tag is different.
That’s a good way to put it. Lots about Rust is great, but the lack of anonymous union types over-complicates a lot of problems. The trait system would be much less of a headache if you could define local child types as well.
The anyhow crate complements thiserror pretty well in my experience. I use it "top-down" facing where individual errors in any component are defined with thiserror, but then we bubble them up by wrapping them in a `anyhow::Error` if we don't know what to do with them. It also has the nice thing of being able to produce simplified stack traces to help diagnose where things are going wrong. And then you can downcast to component-level thiserror Errors if you want to inspect something closely from high up.
I have been using Golang recently and have the exact opposite feeling about it, which is likely due to my limited exposure to it. Needing to constantly check if a specific return value is not nil just seems so sloppy but I have yet to come across a "better" way as it appears to just be a Golang convention. Is there a good post about better ways to handle errors in Go?
This is the better way as far as I’m concerned. Go has a philosophy to be simple, and this is explicit error handling at its simplest. You deal with errors exactly where they occur. I think a lot of people simply prefer implicit error handling and long exception chains. There is nothing wrong with that but that’s not how Go does error handling.
After a couple of decades in the industry I really prefer explicit error handling, because, while powerful, implicit error handling is also easy to get wrong. Which means I have to go through ridiculous chains of exceptions to find some error someone introduced years ago, instead of being able to head directly to it and immediately understand what is going on. So I guess it’s a little contradictory that I prefer the Rust approach, but I think the compromise of added complexity is small enough to make up for the added safety and “manoeuvrability”.
The flip-side is of course that Go’s simplicity is part of the reason it’s seeing a lot of adoption while most other “new” languages aren’t. (Depending on where you live).
Almost all error-related code in Go is just doing by hand what an exception system would do automatically: stop the current function and propagate it to the caller. The error is not being "dealt with" in a meaningful way except at the outermost layers of the onion (i.e. HTTP/RPC middleware converting it to a wire protocol error response) which could have used a try/catch anyway.
This was my view when I tried Go. I felt like I must be missing something but I’m yet to find out what.
Because transient errors shouldn’t be bubbled up immediately in case the issue is resolved quickly.
I prefer implicit error handling if exceptions are checked and defined as part of the contract, otherwise it’s definitely hard to trace.
I know the biggest complaint with checked exceptions is that people tend to just use catch all exceptions, but that’s just sloppy work. If they’re a junior, that’s when you teach them. Otherwise, people who do sloppy work are just sloppy everywhere anyway.
My issue with a lot of the best-practice principles in SWE is that they were written for a perfect world. Even the best developers are going to do sloppy work on a Thursday afternoon after a day of shitty meetings during a week of almost no sleep because their babies were crying. Then there are the times when people have to cut corners because the business demands it. A million little things like that, which eventually lead to code bases which people like Uncle Bob will tell you were made by people who “misunderstood” the principles.
Simplicity works because it’s made for the real world. As I said I personally think Rust did it better, but if you asked me to come help a company fix a codebase I’d rather do it for a Go one than Rust.
Truthfully, I disagree. I’ve worked at a few different companies and I could absolutely rank them in the quality of their staff.
Been on teams where every individual was free to use their best judgement, we didn’t have a lot of documented processes, and… nothing ever went wrong. Everyone knew sloppy work would come back and bite, so they just didn’t ever do it. Deadlines were rarely a problem because everyone knew that you had to estimate in some extra time when presenting to stakeholders. And the team knew when to push back.
On the other hand, I’ve been on teams where you felt compelled to define every process with excruciating detail and yet experienced people somehow screwed up regularly. We didn’t even have hard deadlines so there was no excuse. The difference between implicit and explicit error handling would have not mattered.
At the end of the day, some of these teams got more done with far fewer failures.
> Go has a philosophy to be simple, and this is explicit error handling at its simplest. You deal with errors exactly where they occur.
“if err != nil {return nil, err}” is the opposite of this philosophy. If you find yourself passing err up the call stack most of the times and most calls may return err, it’s still exception-driven code but without ergonomics.
It’s not simplicity, it’s sticking your head butt-deep in the sand.
There is no other way - most error conditions can't be dealt with at the place where the error happened. A parseInt failing is meaningless at the local scope; the error handling depends on who the caller is.
Yes, that’s the point. The idea that you deal with errors locally doesn’t work. See also https://news.ycombinator.com/item?id=42038568
Golang needs monads. If by default it had a monad that propagated errors up the stack, then it would be much cleaner to work with, and should you need another behaviour you could install a different monad.
> Needing to constantly check if a specific return value is not nil just seems so sloppy but I have yet to come across a "better" way as it appears to just be a Golang convention. Is there a good post about better ways to handle errors in Go?
As others have identified, not in the Go language. In other languages which support disjoint unions with "right-biased" operations, such as Haskell's Either[0], Scala's version[1], amongst others, having to explicitly check for error conditions is not required when an Either (or equivalent) is used.
0 - https://hackage.haskell.org/package/base-4.20.0.1/docs/Data-...
1 - https://www.scala-lang.org/api/3.x/scala/util/Either.html
There is no better way.
The general idea is that you should pay the cost of handling an error right there where you receive one, even if you're just going to return it. This reduces the incremental cost of actually doing something about it.
If you're given an easy way out, a simpler way of just not handling errors, you either won't even consider handling them, or you'll actively avoid any and all error handling, for fear of polluting your happy path.
I can't say I agree with the approach all the time, but I know I'm much more likely to consider the error flow doing it the Go way, even if I'm constantly annoyed by it.
> There is no better way.
Zig's way is better.
Error returning is explicit and declared in the function signature. It plays nicely with defer (there is errdefer), and there is limited sugar (the try keyword) that makes life easier.
Agreed. Zig has all the advantages of Go's explicit errors as values without the drawbacks.
I meant in Go. This is not a pissing match.
I personally like it better than exceptions (even if it's much noisier for the 95% common case as another poster put it), both of which I've used enough to appreciate the pros/cons of. But that's about it.
I'll probably never use Zig enough to find out.
> If you're given an easy way out, a simpler way of just not handling errors
Doesn't Go offer the simplest way of all to "just not handle errors"? Just ignore them. It is an option. You can ignore them on purpose, you can ignore them by mistake, but you can always simply ignore them.
In practice I want the error propagated to the caller 95% of the time and swallowed 5% of the time. The problem is Go makes me explicitly spell out (and write a unit test case for!) the behavior I almost always want, while the rarely-desired behavior is default.
Yeah, sure.
But ignoring by mistake gets caught by linters, in practice.
And then doing it on purpose is almost as noisy as bubbling it, and sure to raise an eyebrow in code review.
My experience is with exceptions in Java/C#, Go errors, and C++ absl::StatusOr. In theory, I'd favor checked exceptions. In practice, I find that I'm always running away from polluting the happy path and coming up with contrived flow when I want to handle/decorate/whatever some errors but not others, and that giving types to checked exceptions becomes a class hierarchy thing that I've also grown to dislike. Both Go and C++/Abseil are indifferent if noisy to me (and C++ annoys me for plenty other reasons). Maybe elsewhere I'd find options better. Maybe.
Isn’t the reason, by chance, how try-catch works lexically? I find that to be another overlooked billion-dollar mistake. It’s not that the happy path gets a crack in it, but the absolute monstrosity of the catch ceremony, together with a scope torn between two blocks. “try {} catch {} finally {}” should be “[try] { catch {} finally {} }” absolutely everywhere.
I've wondered about just being able to pass an error closure to a function. And if you don't, it'll execute a default error closure.
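Something like this, maybe; a TypeScript sketch with made-up names:

    import { readFileSync } from "node:fs";

    // The caller can pass an error closure; otherwise a default one rethrows.
    function readConfig(
      path: string,
      onError: (e: unknown) => string = e => { throw e; },
    ): string {
      try {
        return readFileSync(path, "utf8");
      } catch (e) {
        return onError(e); // caller decides: rethrow, log, substitute a default...
      }
    }

    readConfig("app.json", () => "{}"); // fall back to an empty config on any error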
No, go has GOd awful error handling, it is C's errno wrapped in a slightly different syntax.
Rob Pike once talked about dealing with this: https://go.dev/blog/errors-are-values
He basically simulates jump to catch by hand. I understand the authority of Rob Pike, but don’t understand why this idea is great.
Processes IRL don’t look like write() three times. They look like foo(); bar(); baz(); quux();, which weren’t designed to cooperate under a single context. If only there were an implicit err-ref argument which could be filled by some “throw” keyword, and the rest would auto-skip to the “catch”.
Notably this approach is basically completely nonexistent in practice.
Coming from Python, as an amateur dev, error handling in Go was annoying as hell. But I had no other choice so I went for it.
After a few programs I realized I had never really thought about error handling - Go forces you to decide where you want to handle your error, where to report it, etc.
It is still annoying (especially the boilerplate) but it had a net positive value for me in terms of learning.
It's super tempting to remember my path:
1. BASIC on the ZX Spectrum around 5-6 years old. Draw a circle and a box and a triangle and... Look ma, a rocket!
2. Pascal around 10-12. I can write a loop! and a function. But anything longer than 100 lines doesn't work for some reason. C is also cool but what are pointers?
3. PHP around 14-15. The Internet is easy!
4. Perl around the same time. I can add 5 to "six" and get "5six". Crazy stuff.
5. C around 16-17. Real stuff for real programmers. Still didn't understand pointers. Why can't it be more like Pascal?!
6. C++ around 18-19. I can do ANYTHING. Especially UIs in Qt. UI is easy! Now, let's make a Wolf3d clone...
7. Common Lisp + Emacs Lisp around 20. No compilation?! REPLs? What are atoms? Cons cells? Live redefinition of everything? A macro? A program image?
8. Python around 22. Ok, so languages can be readable.
9, 10, 11, ... StandardML for beautiful types and compiler writing, vanilla C and x86 assembly/architecture the deep way, back to Lisps for quick personal development...
Looking back, it's Lisps and vanilla C that stuck.
If you like C and Lisp (no Smalltalk experience?) did you ever have occasion to try Objective-C?
No Objective-C for me. I was always living in the Linux/x86 world as long as I remember. Isn't it like C with dynamic dispatch bolted on..?
As strange as it may sound, I do know quite a lot about Smalltalk implementation details without ever using the language. Self (a Smalltalk dialect) papers were big in the jit compiler writer community in late 90s and early 2000s. A couple of classic general programming books used Smalltalk in examples.
Yes.
You can access Objective-C in Linux via GnuStep.
It would have been interesting sometime around the first C++ exposure but probably a bit too late by now :-)
APL is the perfect mind blower: https://xpqz.github.io/learnapl/intro.html
Unfortunately Dijkstra and Iverson took personal dislikes to each other's approach, or we might have had a language that abstracted data flow like Iverson's APL and abstracted code flow like Dijkstra's Guarded Commands.
Have you looked at dfns (https://en.wikipedia.org/wiki/Direct_function)? They allow you something very similar. Compare Dijkstra's GCL:
To an APL dfn with "guards": As in GCL, if none of the guards hold true, the dfn (braces) will terminate without return value, and thus the code will abort with an error.

Thanks! Very close, except for https://en.wikipedia.org/wiki/Direct_function#:~:text=guards...
(seems like a degenerate, empty, dfn also behaves differently?)
and, modulo the above, we could spell "do X ◻ Y od" as "{X ♢ Y}⍣≡"?
My university had essentially a weird-languages course to blow our minds. Smalltalk (everything is a message to an object, back when C++ was the new hotness), Lisp (everything is a sexpr that you can redefine), Prolog (everything is a clause to search for known axioms).
Later: Lisp again, because of closures and CLOS let you program how method dispatch should work and CLCS let you resume from just before an error. Haskell, because you can lazily consume an infinite structure, and every type contains _|_ because you can't be sure that every function will always return a value. Java, back when the language was poor but the promise was "don't send a query, send code that the backend will run securely" (this was abandoned).
> the promise was "don't send a query, send code that the backend will run securely"
This is back now with WASM!
Thank you, it had not occurred to me that Node.js and Deno offered that, but of course they do.
Sounds like an amazing course!
Was that the case for you? I'm especially curious about the exams if there were any, because it's probably hard to keep the whole context in mind as a student and to evaluate students for the teacher.
After decades I'm not sure how grading worked, but I don't remember that course being hard. Maybe that's because I enjoyed it so much.
I remember my mind being blown when I first came across list comprehensions. It was one thing to be able to effectively generate one list from another:
  Squares = [X*X || X <- [1,2,3,4]].

(This can be read as: Squares equals a list of X*Xs such that each X is a member of [1,2,3,4].) Going from that to then realizing that you can use list comprehensions to implement quicksort in basically 2 lines of code:

  qsort([]) -> [];
  qsort([Pivot|Rest]) -> qsort([X || X <- Rest, X < Pivot]) ++ [Pivot] ++ qsort([X || X <- Rest, X >= Pivot]).

These examples are written in Erlang, though list comprehensions are found in many other languages, Python and Haskell to name a couple.

While I wouldn't say "mind blown", I really like F#'s built-in support for fluent interface style programming using the pipeline operator, e.g. piping a collection through a chain of List/Seq functions with |>.
I find this style changes the way I think about and write code transformations. It's also in shell pipelines, R's magrittr, and Clojure's thread macros, and can be emulated in some OO languages with methods that return the transformed object itself.
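In TypeScript-ish terms the emulation is just method chaining, and you can roll a tiny pipe helper for plain functions; a sketch, not F#'s |>:

    // Method chaining: each step returns the transformed value.
    const total = [1, 2, 3, 4, 5]
      .filter(n => n % 2 === 1)
      .map(n => n * n)
      .reduce((acc, n) => acc + n, 0); // 1 + 9 + 25 = 35

    // A minimal two-step pipe for plain functions.
    const pipe = <A, B, C>(f: (a: A) => B, g: (b: B) => C) => (a: A): C => g(f(a));
    const slugify = pipe((s: string) => s.trim().toLowerCase(), s => s.replace(/\s+/g, "-"));
    slugify("  Mind Blown  "); // "mind-blown"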
> R's magrittr
These days, base R already includes a native pipe operator (and it is literally `|>`, rather than magrittr's `%>%`).
It also works with universal function call syntax, like in Nim. Though aesthetically I prefer `|>` for multi-line expressions.
Yeah, in Lua some libs are used just like this, using the `:` syntactic sugar, something like:
value = table:function1():function2():function3()
I wish there was a linter that could limit it though. It's like coders' subconsciouses say "the feature exists, so I have to use it to the greatest extent possible". Or, "assigning intermediate values is now an anti-pattern", and we end up with things that are like 20 straight pipes, with more pipes inside lambdas of those pipes, etc., that result in a function that is totally incomprehensible. All it really saves us from is this:
let x1 = one(x); let x2 = two(x1); let x3 = three(x2);
The other advantage of being explicit is you can set breakpoints and see the intermediate values when debugging.
Granted, the same could be said about `three(two(one(X)))`, so it's not specifically a pipe operator problem. It's just that I see it a lot more ever since pipe operators (or their cousins "streams", "extension methods", etc) have been popularized.
My guess is it's because `three(two(one(X)))` across multiple lines would need indentation on each nested function, and end up looking obviously obtuse with too many levels of nesting. Pipes make it look cleaner, which is great at some level, but also more tempting to get caught up in "expression golf" leading to extremely concise but incomprehensible code.
Same in Elixir
and in Haskel/Elm
And clojure -> or ->> (thread first or thread last)
At least for me once I first encountered the pipeline operator it has made me dislike traditional inside out code a lot more.
I’ll second Prolog as mind blowing. It is a fundamentally different way to think about problem solving using a computer. Well worth even a trivial playing with it.
Most programming languages work top-down... literally, down through the lines of code.
Prolog works bottom-up.
Emacs LISP works inside-out.
I've been doing advent of code with Uiua [0], an array and stack programming language. And it feels like playing Tower of Hanoi with data and building pipelines with functions.
[0]: https://www.uiua.org/
I think there are a couple of programming languages out there that everyone needs to come in contact with at least once. I will mention the three that actually changed my mind (I took these in a semester at uni): Haskell, Prolog and Smalltalk (even if you do not like or use them, they will force you to think radically differently). Then you should add a structured language (C or Pascal will suffice). If you like the metal, try assembly. Yes, there are others like APL and Forth, but I can't say anything about those as I haven't tried them. Lastly, a Lisp or Scheme flavour won't be bad to experience either.
It was about APL that Alan J. Perlis said "A language that doesn't affect the way you think about programming, is not worth knowing." (number 19 of https://cpsc.yale.edu/epigrams-programming)
I learned programming with Pascal, and Turbo Pascal (not the language itself) was what blew my mind back then. The Swag libraries weren't the language itself either, but for the pre-internet age they were a jump forward.
Then I had to do things in OS/2, and then it was REXX's turn. Shell scripting didn't have to be as basic as what I knew from DOS. Some years later I moved to bash, and from complex scripts to long but very powerful one-liners, so it was another shock.
Still was working with OS/2 when learned Perl, and regexes, and its hashes, and all with some interesting semantic approach on that. And for many years it was my "complex" shell scripting language. Python was not as mindblowing or at least had the same kind of impact on me.
Nice article. Amazed they came across Prolog but not Forth!?
Prepare to have your mind blown: https://ratfactor.com/forth/the_programming_language_that_wr...
Implementing and using a Forth for a while for embedded real-time scripting was great at really reinforcing how everything in programming boils down to just numbers. Though practically speaking, managing the stack gets old quickly.
If you look at Moore's forth style, he doesn't manage the stack, he just uses variables.
I didn't have practical exposure to Forth, only a somewhat theoretical one, but I did to PostScript (yeah, that Adobe one underneath PDF, printers, etc.).
PostScript is also RPN-based.
And btw, the HP-11C "calculator" is also RPN-based :) Used it to solve some matrix things in university.
Have you tried RPL (reverse Polish Lisp)? It's on later HP calculators, and it's pretty elegant for a small-machine language. The type system includes matrices and functions.
More no-code than programming language but Lotus Notes totally blew my mind.
Imagine going from pre World Wide Web - and for many companies pre-email - age of local area networks (for file sharing only) and 38.K modems directly to group and user oriented, email integrated, distributed, replicated, graphics and multimedia integrated, no-code, document oriented, user programmable, fully GUI, secure, distributed database applications.
Lotus Notes blew my mind completely. A truly amazing application at the time.
I wrote a lot of code for Lotus Notes circa 1995 for an early web app. It was still code (Lotus Notes script), just that I had to figure out how to do things without loops and recursion.
I thought LotusScript was a VBA clone, and it was the older Excel-like formula language that wasn't imperative.
I think it was imperative, but barely, like you could store things off in databases but that was it.
Round about the seventh language, he stops using exclamation marks. The list goes from "Mind blown: Programming my own games!" to "Mind blown: Some support for writing my own gradual type system."
I think that actually happens to us all. The first time you write an infinite loop to print out that your friend is a doofus to show off is magical. 25 years in and you start to value… other things… that maybe don’t have the childish whimsy and fun.
That’s OK. In the same way that when you start to read books, you might like Blyton, Dahl or diving into a Hardy Boys, by the time you’ve been reading fiction for a few decades your tastes might become a bit more philosophical.
The trick is to introduce these things to people when they’re ready for them, and to not treat those who haven’t experienced them as in some way inferior.
I’m not going to shove Proust down someone’s neck or make them feel dumb for not having read any of his work, in the same way I am going to think carefully about whether somebody would benefit from seeing some Prolog.
What does interest me a lot about this list is that it’s not just a “well, look, I found Clojure and think it’s better than Python, YMMV”, or “if you don’t like or understand Haskell maybe you are just too dumb to grok eigenvectors” brag piece.
There is a thought about what you might get from that language that makes it worth exploring. As a result I might revisit OCaml after a brief and shallow flirtation some years ago, and go and take a look at Coq which I doubt I can use professionally but sounds fascinating to explore.
Are you being flippant, or is there really a sense in which eigenvectors are relevant to Haskell? To the best of my knowledge despite all the category-theoretical goodness in the language, there's no meaningful language-level connection to linear algebra, -XLinearTypes notwithstanding.
my mind was blown by how many times he said mind blown.
also, what happened to opalang that he mentioned? iirc i read about it back in the day.
For me it would be (after BASIC, which came with my Apple II):
GraFORTH (build the program up rather than down, mindbogglingly fast 3D graphics on the II)
TransFORTH (same thing, with floats)
Pascal and C (build up or down, convenient syntax)
APL (because it's extremely concise, and taught me the value of understandable code - focus on why, the program already says what and how)
Actor (like Smalltalk, but with a less surprising syntax)
Smalltalk (because the syntax IS important)
Visual Basic (because an integrated interface designer is immensely helpful)
Python (because batteries included)
Haskell (everything is clear once you rotate the program 90 degrees on the time axis)
Rust (because the compiler saves me from myself)
Parallel track:
- MacBASIC: Mac GUI programming w/o Pascal or C https://www.folklore.org/MacBasic.html (which is something I'll never forgive Bill Gates for)
- HyperCARD (also on that list, and agree with almost all points made): It was magic to get into Developer mode and to create stacks --- it's unfortunate that Asymetrix Toolbook didn't get further, nor that the Heizer stack for converting HyperCard stacks to Toolbook wasn't more widely available, and a shame that Runtime Revolution which became Livecode reneged on their opensource effort --- hopefully someone will make good on that: https://openxtalk.org/
Unfortunately, I never got anywhere w/ Interfacebuilder.app or Objective-C....
- Lua: real variables in TeX! (and latex), and one gets METAPOST as well --- even recursion becomes simple: https://tex.stackexchange.com/questions/723897/breaking-out-...
- OpenSCAD: Make 3D things w/o having to use a full-fledged CAD program
- BlockSCAD: Make 3D things w/o typing: https://www.blockscad3d.com/editor/ (though to be fair, https://github.com/derkork/openscad-graph-editor also allows that)
- PythonSCAD: variables and file I/O for OpenSCAD (though to be fair, RapCAD had the latter, it was just hard to use it w/o traditional variables) https://pythonscad.org/
Still working through a re-write of my OpenSCAD library in Python: https://github.com/WillAdams/gcodepreview and am hopeful that a tool like https://nodezator.com/ will make that graphically accessible (w/o going into OpenSCAD mode to use OSGE).
There were a few times I had my mind blown. Once with Scheme when I was watching the old SICP video lectures. But also (and this may be a rather unpopular thing to say) once with C when I had coded something up and compiled it and ran it and it finished instantly. Rust didn't really blow my mind, but I have taken some of its lessons (using error values, lifetime and ownership management) to heart.
Common Lisp, which at the moment is the only language I can see myself using unless I decide to make one myself, never really blew my mind, although the little things it does make it hard for me to imagine switching away from it.
Regular expressions are not listed in the article, but they blew my mind when I first used them in Perl. Any of sed/awk/perl/python/ruby/javascript could have exposed the author to regular expressions, but maybe they didn't find them as transformative as I did.
'*' is one of the first special things that computers made me look at. Even a simple MS-DOS glob like *.exe felt like a superpower. Regexps were the next level, but a two-edged weapon, as everybody knows. That said, I never leveraged PCRE's extended capabilities.
Call-with-current-continuation in Scheme blew my mind.
Red/Rebol has a different, powerful approach to homoiconicity and DSLs.
And there is this XL language that has a very interesting approach to extending the language, but sadly the compiler is not in a runnable state, so I could only assess it from the docs.
Early on when I was learning, assembly language blew my mind. I went from Basic to Pascal but I wanted to make games so I tried my hand at C/ASM and it was just so wild to be that close to the metal after a couple years at such a high level.
In recent years, Go blew my mind with its simplicity and how strong a case it makes for writing boring code.
It's been decades since writing C/ASM and my guess is what little I remember isn't applicable anymore, but I plan on diving in again at some point if only to better understand Go.
I'd like to put a word in for AS3. As a kid who grew up on hypercard, BASIC, pascal, proce55ing and eventually PHP and javascript, AS3 was the language that bridged the gap for me to strongly typed code. It made me a much better programmer. The graphics APIs were fantastic but they also taught me for the first time how to avoid tightly coupling graphics with logic.
TS + PixiJS is a reasonable replacement for some of it now, but I still sometimes miss having compile-time warnings.
One of the most amazing tools done using this was:
https://github.com/Jack000/PartKAM
a vector drawing tool and CAM (Computer Aided Manufacturing) program.
Shame it's impossible to even run the code now and see it. It was a great tool for making visual tools, not just games. Part of that was just being able to build UI components visually as raster or vectors, animated if you wanted, that you could then manipulate interchangeably in code without any overhead. Even though most of my software in it drew everything procedurally, that ability to dip in and out of GUI/design mode made it possible to do beautiful things quite easily. There has been nothing else quite like it.
You can see the code run --- just download and install the Flashplayer from the Wayback Machine on archive.org then install and associate it with .swf files
I am hopeful that one day someone will make a replacement tool --- we'll see.
As a corollary to this question.
I happen to be learning functional programming and really struggling with a different way of thinking, at the same time I saw the movie Arrival, with the alien language as main plot point.
It struck me that what programming language we pick, fall in love with, dig into, does seem to change our thinking and ways of approaching problems. It can actually change our brain.
One that never seems to make the list, but is worth considering: Excel, the only popular functional programming language! Not the VBA stuff, but the formulaic language of the spreadsheet itself.
Mathematica. Usefully homoiconic with an underlying compute model of "get AST, pattern-match-and-replace within it, repeat until fixed point".
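As an illustration of that evaluation model (a toy sketch in Python, not Mathematica itself): represent expressions as nested tuples, apply rewrite rules everywhere, and repeat until nothing changes.

```python
def rewrite_once(expr, rules):
    """Try each rule at this node, then rewrite the children."""
    for rule in rules:
        replacement = rule(expr)
        if replacement is not None:
            expr = replacement
    if isinstance(expr, tuple):
        expr = tuple(rewrite_once(child, rules) for child in expr)
    return expr

def rewrite_to_fixed_point(expr, rules, max_steps=100):
    for _ in range(max_steps):
        new_expr = rewrite_once(expr, rules)
        if new_expr == expr:   # fixed point reached
            return new_expr
        expr = new_expr
    return expr

# One made-up rule: ("plus", 0, x) rewrites to x.
rules = [lambda e: e[2] if isinstance(e, tuple) and len(e) == 3
                          and e[0] == "plus" and e[1] == 0 else None]

print(rewrite_to_fixed_point(("plus", 0, ("plus", 0, 7)), rules))  # 7
```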
My list is similar: PDP-11 BASIC, followed quickly by UCSD Pascal, Turbo Pascal (which brought the most important parts of UCSD Pascal's IDE to CP/M-80/DOS), LISP.
And then Awk, and associative arrays. You can literally create any data structure you need using associative arrays. It won't be fast; but if I could have only one container type, this would be it.
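For readers who haven't seen the trick, here is that idea sketched in Python-dict form rather than Awk (Awk itself would key a single array with something like arr[node SUBSEP field]); the tree and its values are made up for illustration.

```python
# A binary tree stored in one flat associative array keyed by (node_id, field).
tree = {}

def set_node(node_id, value, left=None, right=None):
    tree[(node_id, "value")] = value
    tree[(node_id, "left")] = left
    tree[(node_id, "right")] = right

set_node("root", 10, left="a", right="b")
set_node("a", 5)
set_node("b", 20)

def total(node_id):
    """Sum a subtree's values out of the flat map."""
    if node_id is None:
        return 0
    return (tree[(node_id, "value")]
            + total(tree[(node_id, "left")])
            + total(tree[(node_id, "right")]))

print(total("root"))  # 35
```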
And then TCL, which is where I really learned to appreciate meta-programming. (I wrote Tcl's "Snit" object system in pure Tcl; it got quite a lot of use once upon a time.)
I make it a point to check out new languages on a regular basis, just to keep myself fresh (most recently, Rust and Haskell).
Oh, and one minor nit: Knuth wrote WEB in PDP-10 Pascal, not Turbo Pascal.
Strand is one of the languages that particularly blew my mind. I think it was Armstrong who said that Strand was "too parallel".
Strand is a logic programming language for parallel computing, derived from Parlog[2] which itself is a dialect of Prolog[3]. While the language superficially resembles Prolog, depth-first search and backtracking are not provided. Instead execution of a goal spawns a number of processes that execute concurrently.
http://www.call-with-current-continuation.org/strand/strand....
Slight tangent: Domain-specific languages (DSLs) that blew my mind.
Here are three DSLs that left lasting and positive impressions:
1) Mathematical programming languages, first GNU MathProg, then later Python's PuLP and Pyomo. These launched a subcareer in applying LP and MIP to practical, large-scale problems. The whole field is fascinating, fun, and often highly profitable because the economic gains from optimization can be huge. Think of UPS. (A tiny PuLP sketch follows after this list.)
2) Probabilistic programming languages, first BUGS (winbugs, openbugs, JAGS), later Stan, pymc, and various specialized tools. These helped me become a practicing Bayesian who doesn't much like p-value hypothesis testing.
3) The dplyr | ggplot universe. These are really two independent eDSLs but they're from the same folks and work great together. Writing awkward pandas|matplotlib code is just soul-wrecking tedium after using dplyr|ggplot.
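To make item 1 concrete, here is a minimal made-up example using PuLP's Python API (one of the libraries mentioned above); the variables, coefficients, and bounds are invented purely for illustration.

```python
# pip install pulp  (a CBC solver is bundled)
from pulp import LpMaximize, LpProblem, LpVariable, value

prob = LpProblem("toy_mix", LpMaximize)
x = LpVariable("x", lowBound=0)
y = LpVariable("y", lowBound=0)

prob += 3 * x + 2 * y   # objective: the first expression added
prob += x + y <= 10     # shared resource constraint
prob += x <= 6          # capacity on x

prob.solve()
print(value(x), value(y), value(prob.objective))  # 6.0 4.0 26.0
```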
I didn't get the part about the stack in assembly, in my experience the stack is very much a thing at that level.
Mainstream CPUs expose one or more explicit stack pointers, and in assembly you use that all the time, right?
I think he means that the stack is not something that you are forced to work with when programming in assembly. You can put data wherever you want (and are allowed to), and jmp into whatever random memory address you want. You can use CPU instructions that handle stack management for you, but you don’t _have_ to.
Somewhere on my random ideas pile is to write a queue-oriented operating system - you know how we have threads? What if we didn't have threads, just a list of things to do, to run on the next available processor? (Haskell's VM calls them sparks)
I mean, that's just a register. The memory region itself is not special in any way, a random "heap" region that would be similarly frequently accessed would be just as fast.
You can also peek and write on the wrong side of BP and mess with the "stack" of the caller function, and perhaps get unexpected results or nasty crashes.
Instead of "an illusion", I'd say it's "a convention".
Except of course how return addresses are typically always stored on the machine stack. It is very concretely there and part of the programming model, in my world.
Technically some CPUs have a "jump and link" that saves the return address in a register. Then it is up to the calling convention to save that register on the, also by convention, stack.
Let me add something I found recently, though I haven't used it yet, but it looks impressive from the webpage: Uiua
https://www.uiua.org
Lots of great videos on Uiua on Conor Hoekstra's YT channel [0].
[0]: https://www.youtube.com/c/codereport
Smalltalk: the object/message model so powerful that the language itself doesn't even need any special syntax for ifs or loops.
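For anyone who hasn't seen what that means in practice, here is a toy imitation of the idea in Python (not Smalltalk's real classes): booleans are just objects, and "if" is a message that carries the two branches as blocks (zero-argument lambdas here).

```python
class TrueObject:
    def if_true_if_false(self, then_block, else_block):
        return then_block()   # a "true" object only runs the then-branch

class FalseObject:
    def if_true_if_false(self, then_block, else_block):
        return else_block()   # a "false" object only runs the else-branch

st_true, st_false = TrueObject(), FalseObject()

print(st_true.if_true_if_false(lambda: "then-branch",
                               lambda: "else-branch"))  # then-branch
```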
My parallel track:
* C++: The feeling that I can make a-n-y-t-h-i-n-g. (not true but still mostly true)
* Ruby: elegant, OO, who-needs-compile-time-if-you-have-unit-tests
* Haskell: hard to learn but so worth it, by doing so I became a follow-the-types person
* Elm: not hard at all, a well picked subset of Haskell for browser apps only
* Kotlin: a well typed Ruby, with the ecosystem of the JVM at your disposal
* Rust: when every bit of runtime performance counts (but now without sacrificing safety like with C/C++)
No Perl? cgi.pm and http requests, plus regexes blew my mind after Basic, Z80 assembler and Turbo Pascal...
If Perl blows your mind, Perl6/Raku would send it to space.
Near the end of the article:
> Just that the concepts either didn’t resonate with me or, more often than not, that I had already discovered these same concepts with other languages, just by the luck of the order in which I learnt languages.
> The entire idea of multi-tier programming seems to have been vanished into oblivion
Not entirely https://eyg.run/
The page has been hugged to death, but it isn't github that is the problem; it is cdn.bootcss.com, where some of the static assets for this site are hosted.
My list:
BASIC, but specifically on a TRS-80. I can just type something in and run it. I don't have to wait a week for the teacher to take the card deck over to wherever the mainframe is.
Pascal. That's it, all of it, in those ~4 pages of railroad diagrams expressing the BNF.
C. Like Pascal, but terser and with the training wheels removed. It just fits the way my mind works.
Java. The language itself is kind of "meh", but it has a library for everything.
Perl. Regular expressions and default variables.
BBC BASIC. You mean I can POKE things here, and stuff shows up on the screen in a different color?!
Turbo Pascal. Woah this is fast. And Mum is impressed when code sells for money.
Visual Basic. I'm living in the future, Ma. Dentist has a side business selling flowers and needs a program that can handle root canals and roses? I got you.
Perl. Suddenly system administration is fun again.
Java. My mind is blown, in a bad way. This is what I have to write to keep a roof over my head? Ugh.
Go. Aaahhh. Feels like reading W. Richard Stevens again. Coming home to unix, but with the bad parts acknowledged instead of denied.
No mention of PicoLisp (application server, embedded Prolog, etc, etc...) or Dylan. No way Teller saw them and they didn't make the list.
I found Coq in the list to be very interesting. Is anyone using Coq here for serious math or programming? What kind of problems are you solving with Coq?
And I'd also like to know how different Coq and Lean are. I'm not a mathematician. I'm just a software developer. Is there a good reason for me to pick one over the other?
Coq is cool partially because it can be extracted to OCaml. So you can write part of your program in Coq and prove its correctness, and then write the rest in pleasant OCaml. I believe there are some popular OCaml libraries, like bigint implementations, that do this.
Lean is designed to be a general purpose programming language that is also useful for mathematical proofs. From that perspective it’s kind of trying to be “better haskell”. (Although most of the focus is definitely on the math side and in practice it probably isn’t actually a better haskell yet for ecosystem reasons.)
If you try either, you’ll likely be playing with a programming paradigm very different than anything you’ve used before: tactic oriented programming. It’s very cool and perfect for math, although I don’t think I’d want to use it for everyday tasks.
You won’t go wrong with either, but my recommendation is to try Lean by googling “the natural number game”. It’s a cool online game that teaches you Lean and the basics of proofs.
By the way, don’t be scared of dependent types. They are very simple. Dependent types just mean that the type of the second element of a tuple can depend on the value of the first, and the return type of a function can depend on the value passed into it. Dependent types are commonly framed as “types that have values in them” or something, which is a misleading simplification.
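A tiny Lean 4 sketch of the two dependencies described above (assuming current Lean 4 syntax; the names are made up):

```lean
-- Dependent pair: the type of the second component, Fin (n + 1),
-- depends on the value n of the first component.
def depPair : Sigma (fun n : Nat => Fin (n + 1)) := ⟨2, 0⟩

-- Dependent function: the return type is computed from the argument's value.
def Payload : Bool → Type
  | true  => Nat
  | false => String

def payload : (b : Bool) → Payload b
  | true  => 42
  | false => "hello"
```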
> Mind blown backwards: Understanding how finally was implemented in the JVM.
In old versions the JVM does jsr and ret. In new versions it just duplicates the code. I don't understand what's mind blowing about it?
Interesting perspective! I think HN would benefit if we shared our own mind-blowing insights here. A couple from the top of my head:
- Python: slicing, full reflection capabilities, and its use as an interface to almost anything [1], not to mention its role as an embedded interpreter (beyond the GIL debate).
- Z3: one of the closest I know for defining the ‘what’ rather than the ‘how’. I got lost with Prolog when I tried to add more complex logic with ‘hows’, though I was a complete newbie with Prolog. Z3, however, really boosted my beginner capabilities. (A tiny sketch follows after the links below.)
- OMeta: allows me to write fast parsers without spending time resolving ambiguities, unlike ANTLR and other popular parsers using classical techniques.
- Smalltalk/Squeak: everything can be re-crafted in real-time, a blend of an OS and a programming language. The community vibe is unique, and I even have friends who implemented an operating system using Squeak. The best example? TCP/IP implemented in Smalltalk/Squeak! [3]. David Weil and I also created an early proof of concept [4] that used Linux behind the scenes, aiming to ‘compete’ with the QNX floppy disk.
- AutoLISP: a programming language embedded in AutoCAD, which I discovered in high school in the late ’80s—only to just find out that its first stable release is considered to be around 1995 [5].
- REXX on the Commodore Amiga: not only could applications be extended with this language, but they could also interact with each other. A decade later, when I used REXX on an IBM Mainframe, I still preferred the Amiga’s approach.
- Racket: I can draw in the REPL. For example with Racket Turtle.
- C++: object orientation "for C". I was not mesmerized by templates.
- Clipper: first database usage and relatively simple to create business apps.
- Objective-C: the first implementation of GCD [6], providing a C/C++ style language with fast performance, a clean event-loop solution, and reflective capabilities. However, I never liked the memory handling.
- Machine Code/Assembler on the Apple IIe: I could jump directly into the assembler monitor from BASIC.
- APL: a powerful yet complex way to program using concise expressions. Not ideal for casual use—best suited if you really understand the underlying concepts.
By the way, it’s time for Apple to embrace programming languages on iOS and iPadOS!
[1] http://www.garret.ru/dybase.html
[2] https://wiki.squeak.org/squeak/1762
[3] http://swain.webframe.org/squeak/floppy/
[4] http://toastytech.com/guis/qnxdemo.html
[5] https://en.wikipedia.org/wiki/AutoLISP
[6] https://en.wikipedia.org/wiki/Grand_Central_Dispatch
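To make the Z3 point above concrete, here is a minimal sketch using Z3's Python bindings (the z3-solver package); the constraints are made up for illustration, and the point is that we only state the "what".

```python
from z3 import Int, Solver, sat

x, y = Int("x"), Int("y")

s = Solver()
s.add(x > 0, y > 0)      # both positive
s.add(x + y == 12)       # their sum is 12
s.add(x * 2 == y + 3)    # an arbitrary extra relation

if s.check() == sat:     # the solver figures out the "how"
    m = s.model()
    print(m[x], m[y])    # e.g. 5 7
```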
> an early proof of concept [4] that used Linux behind the scenes
Gnu/Linux: Set computer software free so people could discover and learn. 30 years of success and counting. Mind blown.
C: lean, rock-solid durability for 50 years and counting. The connected world runs on C. Mind blown.
AWK, sed, grep, Perl: lean, powerful, rock-solid durability in text and data processing for over 30 years and counting. Mind blown.
SQL and relational data: Querying big data for 50 years and counting, now powering the world in the 21st century. Mind blown.
Hear, Hear :-)
Old IS Gold!
- OMeta - sounds like my experience with Instaparse (Clojure)
- REXX - I thought this was ingenious specifically for file/text processing
> ... sounds like my experience with Instaparse (Clojure)
Thank you! I wasn’t aware of Instaparse or its use of PEGs [1] which gives you the same sense about parsing ambiguities.
> REXX - I thought this was ingenious specifically for file/text processing
Formally the REXX in Amiga was called ARexx and included extensions [2]. REXX [3] itself is not specifically for file/text processing but enables you to connect different environments [4].
[1] https://en.wikipedia.org/wiki/Parsing_expression_grammar
[2] https://en.wikipedia.org/wiki/ARexx
[3] https://en.wikipedia.org/wiki/Rexx
[4] https://www.ibm.com/docs/en/zos/3.1.0?topic=environments-hos...
not necessarily mind blowing, but interesting, deeper cuts:
Aardappel ABC Beatrice Charity Esterel FL Fractran GPM Hope Lean MCPL NESL Oz ProTem Рапира rpython SLC Squiggol UNITY USELESS
I am impressed by Haskell, Prolog, but also ... C.
Good article.
No mention of Reflection? Try-catch/Exceptions?
(zx spectrum) BASIC: nothing like the first contact with AI
C++: pointers, OOP and so much more - good and bad - all in one package
Fortran90: vector programming for humans
Python: general programming for humans
Biggest disappointment: Scratch. Why isn't visual programming more mind blowing?
Anyway, always looking out for these well-regarded "mind-blowing" languages but somehow they never show up in really mind-blowing projects? Would be interesting in this respect to have a list of mind blowing projects that were only possible due to a mind-blowing language.
No Clojure or Lisp so another Mind Blown discovery awaits.
I've been dreaming: can the era of learning foreign languages come to an end now?
Are there large models that have fully learned all programming language design principles and all user code?
That would truly achieve natural language programming.
> Mind blown: The stack is an illusion!
Why?
Coming from higher-level languages, even C, it is actually very mind-opening to see the stack as just two numbers (stack base, stack top). Not reading about it in theory, but seeing in assembly how the program literally decrements the top address value.
I also remember finding the interrupt/syscall system surprisingly stupid and simple. Reading the kernel's source really did broaden my horizons!
opalang blew my mind when i first saw it too - it took a whole lot of web development ideas that i'd seen floating around various languages and frameworks, and actually managed to make them consistent, principled and wonderfully ergonomic. no offence to ur/web (probably its closest competitor) but i never managed to quite get to grips with it; somehow opa managed to make the ideas feel a lot friendlier and easier to use.
For me.
Practical Phase:
$Basic - look, I can print things to the screen
$Assembly - I have no clue what's going on
$Matlab - Ok, this mostly makes sense and I can chart graphs and make functions
$Python - ok I can do all kinds of automation now. It would be great to create an executable to give someone, but this apparently requires traversing the inner circles of hell.
$SQL - declarative languages are weird, but it's easy to get up to speed. I'm doing analysis now over multi-TB datasets.
$GAMS - a declarative language for building linear programming and other kinds of mathematical optimization models (algebraic modeling). This language is weird. Ok. Very weird. I'll go back to the Python API.
$Unix/Bash/Awk - ok, I'm doing everything through the command line and it is beautiful, but after a short while I have to switch to Python.
$Powershell - kind of like the Linux command line, but way more powerful and a lot slower. Can write more complex code, but often the cmdlets are too slow for practical use. Have to move back to Python a lot.
Exploration:
$Lisp - this is supposedly the best language. Read several books on language. Everything seems less readable and more complicated than Python. Aesthetics are fine. A lot of power here. Limited libraries.
$Haskell - this seems like it is trying too hard. Too much type theory.
$APL - this is really cool. Typing with symbols. Scalars, vectors, and matrices...yeah!
$Prolog - very cool, but ultimately not that useful. We have SQL for databases and other ways to filter data. Prob a good niche tool.
$Forth - this is amazing, but I'm not a low level hardware coder, so it has limited value. Hardly any libraries for what I want.
$Smalltalk - the environment and GUI is part of the application...what?
$Rebol - way too far ahead of its time. Like lisp, but more easy to use and dozens of apps in a few lines of code. Amazing alien technology.
$Java - OMG...why does everything require so much code? People actually use this ever day? Aghhh.
$C - where is my dictionary/hash type? What is a pointer? A full page of code to do something i can do in 3 loc of Python. At least it's fast.
$Perl5 - cool. It's like Python, but more weirdness and less numerical libraries and a shrinking community.
$Perl6(raku) - They fixed the Perl5 warts and made a super cool and powerful language. Optimization seems like it will take forever though to get it performant.
$OCaml - pretty and elegant. They use this at Jane Street. Why is it so hard to figure out how to loop through a text file (this was 10 years ago)?
$8th - a forth for the desktop. Commercial, but cheap. Very cool ecosystem. Small and eccentric community. I wish this would be open sourced at some point.
$Mathematica - by far the most powerful programming tool/environment I've used for practical engineering and science work. Like lisp, but no environment/library problems. Commercial and pricey. I can create a directed graph and put it on a map of my home town, train a neural network, do differential equations, and create a video out of my graphs all in a single notebook with built-in functions. Nice!
$Swift - can I not get this on Windows?
$F# - oh it's OCaml on .Net, but the documentation is a little hard to follow. Everyone assumes I know C# and .Net.
$Clojure - Rich Hickey is a genius. A lisp IT won't fire me for installing. Oh wait...I don't know the JVM. Workflow kind of soft requires emacs.
Julia is a language that blew my mind. It convinced me that multiple dispatch is the way to do generic programming.
I was waiting for this one. Julia's multiple dispatch and the amazing composability were eye-opening. The first time you swap a Measurement in for a plain number and get a computation with uncertainties back, or a Unitful quantity for a plain number and get a computation with units back: amazing!
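For readers who haven't met the concept, here is a toy Python sketch of the multiple-dispatch idea itself (Julia does this natively, with far better ergonomics and performance): the implementation is selected by the runtime types of all arguments, not just the first. The registry and class names are invented for illustration.

```python
_registry = {}

def register(*types):
    """Decorator that files an implementation under a tuple of argument types."""
    def wrap(fn):
        _registry[types] = fn
        return fn
    return wrap

def collide(a, b):
    fn = _registry.get((type(a), type(b)))
    if fn is None:
        raise TypeError(f"no method for ({type(a).__name__}, {type(b).__name__})")
    return fn(a, b)

class Asteroid: pass
class Ship: pass

@register(Asteroid, Asteroid)
def hit_aa(a, b): return "asteroids bounce"

@register(Asteroid, Ship)
def hit_as(a, b): return "ship takes damage"

@register(Ship, Ship)
def hit_ss(a, b): return "ships dock"

print(collide(Asteroid(), Ship()))  # ship takes damage
```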
Do you use Julia at work? I was wondering if there are still generics issues (the type system being too open and inferring hard-to-guess types).
I use it at work. I must say I've never experienced that as an issue, so perhaps we use the programming language in a different way.
One issue that I have experienced, which may be related, is that it's hard to have a little bit of controlled type dynamism. That is, once you have a type that is a union with many possibilities, you need to be vigilant not to do operations on it that make type inference give up and return `Any`. This still bites me. There are some solutions to that in packages (e.g. the relatively new Moshi.jl), but I do miss having language-level support for it.
Thanks a lot, I think that's what people meant by being surprised by functions over complex types. But it seems you can still be productive even without it.
I like the article but I had to come here and say that these AI images in blogs are killing the reading experience:/
Was honestly looking for a comment pointing it out, it really kills the interest I have in an article. Also, what's up with that face on the flying debris on the right?
Same, closed the tab as soon as I saw it.
Yes, totally confusing to me how bad they look and people use them. I don't get it.
He had me until Rust.