I did a few days of AoC in 2020 in λProlog (as a non-expert in the language), using the Elpi implementation. It provides a decent source of relatively digestible toy examples: https://github.com/shonfeder/aoc-2020
(Caveat that I don't claim to be a λProlog expert.)
All examples showcase the typing discipline that is novel relative to Prolog, and towards day 10, use of the lambda binders, hereditary Harrop formulas, and higher-order niceness shows up.
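For a taste of what those binders buy you, here is the textbook λProlog typechecker for the simply typed lambda calculus (a standard example from the literature, not taken from the repo above): object-level binding is represented by a meta-level function, and `pi`/`=>` introduce a fresh variable together with a local typing assumption.

```prolog
kind term  type.
kind ty    type.

type app   term -> term -> term.    % application
type abs   (term -> term) -> term.  % abstraction via a meta-level binder
type arr   ty -> ty -> ty.          % arrow (function) type

type typeof  term -> ty -> o.

typeof (app M N) B :-
  typeof M (arr A B), typeof N A.
% pi introduces a fresh x; => assumes (typeof x A) while checking the body.
typeof (abs F) (arr A B) :-
  pi x\ typeof x A => typeof (F x) B.
```

Querying `typeof (abs x\ x) T` unifies `T` with `arr A A`, with no explicit environment or substitution machinery anywhere in the program.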
Learning how to implement Prolog in pg's On Lisp was a fun way to spend multiple weeks programming. Doing this again this year should be a lot of fun.
I am a huge fan of the work towards putting this in kanren as λKanren:
https://www.proquest.com/openview/2a5f2e00e8df7ea3f1fd3e8619...
A few of my own experiments from that time with unification over the binders as variables themselves show there's almost always a post-HM inference procedure sitting there, but likely not one that works in total generality.
To me that spot, trying to do binding unification in higher-order logic constraint equations, is the most challenging and interesting problem, since it's almost always decidable (or decidably undecidable) in specific instances, but provably undecidable in general.
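One concrete place that boundary is usually drawn is Miller's pattern fragment: if every metavariable is applied only to distinct bound variables, unification is decidable with most general unifiers; apply a metavariable to a constant and that uniqueness is lost. A sketch in λProlog notation (the queries are illustrative, not from the repo above):

```prolog
type a  term.

% Inside the pattern fragment: F is applied only to the distinct
% bound variable x, so the goal has a unique most general unifier.
%   ?- pi x\ (F x = x).      % forces F = y\ y
%
% Outside the fragment: F is applied to the constant a.
%   ?- F a = a.              % F = y\ y and F = y\ a both work,
%                            % and neither is more general than the other.
```

Full higher-order unification over goals of the second kind is where the general undecidability results bite.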
So what gives? Where is this boundary and does it give a clue to bigger gains in higher order unification? Is a more topological approach sitting just behind the veil for a much wider class of higher order inference?
And what of optimal sharing in the presence of backtracking? Lamping's algorithm, when the unification variable is in the binder, has to have purely binding-attached path contexts, like closures. How does that get shared?
Fun to poke at, maybe just enough modern interest in logic programming to get there too…
I think that might be my favorite department/lab website I've ever come across. Really fun. Doesn't at all align with the contemporary design status quo and it shows just how good a rich website can be on a large screen. Big fan.
https://www.lix.polytechnique.fr/
I'm surprised how hard I had to dig for an actual example of syntax[1], so here you go.
[1]: https://www.lix.polytechnique.fr/~dale/lProlog/proghol/extra...
I have written stuff in Prolog, but I find this lambda Prolog syntax very difficult to grok.
There is also an implementation of 99 Bottles of Beer on Rosetta Code: https://rosettacode.org/wiki/99_bottles_of_beer#Lambda_Prolo...
So brainfuck x lisp
There is a great overview of λProlog from 1988: https://repository.upenn.edu/bitstreams/e91f803b-8e75-4f3c-9...
when I downloaded the example programs, they open up in my music player but don't play anything
As usual, try mplayer. It can play anything.
I remember learning it in university. It's a really weird language to reason with IMO. But really fun. However, I've heard the performance is not that good if you want to make e.g. game AIs with it.
The term "AI" has changed in recent years, but if you mean classic game logic such as complex rules and combinatorial opponents, then there's plenty of Prolog game code on GitHub, e.g. for Poker and other card or board games. Prolog is also as natural a choice for adventure puzzles as it gets, with inventory items and complicated conditions to advance the game. In fact, Amzi! Prolog uses adventure game coding as the topic for its classic (1980s) introductory Prolog learning book Adventure in Prolog ([1]). Based on a cursory look, most code in that book should run just fine on a modern ISO Prolog engine ([2]) in your browser.
[1]: https://www.amzi.com/AdventureInProlog/advtop.php
[2]: https://quantumprolog.sgml.net
I also learned Prolog in the university.
In the Classic AI course we had to implement game AI algorithms (A*, alpha-beta pruning, etc.), and in Prolog for one specific assignment. After trying for a while, I got frustrated and asked the teacher if I could do it in Ruby instead. He agreed: he was the kind of person who just couldn't say no, he was too nice for his own good. I still feel bad about it.
Rest In Peace, Alexandre.
First of all, it helps to actually use a proper compiled Prolog implementation like SWI-Prolog.
Second, you really need to understand and fine-tune cuts and other search optimization primitives.
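The classic illustration of what tuning cuts means in practice (standard Prolog, a textbook example rather than anything from this thread): committing to the first clause once its guard succeeds, so backtracking never wastes time on the alternative.

```prolog
% Without the cut, Prolog would leave a choice point and, on
% backtracking, also try the second clause, yielding a wrong answer
% (or redundant work). The cut commits once X >= Y has succeeded.
max(X, Y, X) :- X >= Y, !.
max(_, Y, Y).
```

Getting cuts wrong in the other direction (pruning solutions you needed) is exactly the kind of thing that makes naive Prolog game AI slow or incorrect.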
Finally, in what concerns game AIs: they are a mixture of algorithms and heuristics, and a single-paradigm language (first-order logic) like Prolog can't be a tool for all nails.
With λProlog in particular I think it probably finds most of its use in specifying and reasoning about systems/languages/logics, e.g. with Abella. I don't think many people are running it in production as an implementation language.
Yeah, the main use of it is probably in Elpi, which is a higher-order structural reasoning and AST transformation tool for Coq/Rocq.
> It's a really weird language to reason with IMO
I know you likely mean regular Prolog, but that's actually fairly easy and intuitive to reason with (code dependent). Lambda Prolog is much, much harder to reason about IMO and there's a certain intractability to it because of just how complex the language is.
What would be some applications it handles better than regular Prolog? Something that naturally requires second- or higher-order logic rather than first-order logic?
λProlog or Prolog? Probably Prolog I guess?
My bad. Was regular prolog yeah
No. It is actually λProlog which seems to be an extension of Prolog.
I was responding to @TheRoque, the GP; I know λProlog quite well and I would be pleasantly surprised if they saw that in university, but I think they got taught Prolog. If you mean to say that they saw Lambda Prolog and it is therefore a lot more popular than I believed it to be, then excellent, and ignore this reply.
Not at all, it's a completely different language with a very different computational foundation. It's an SML-Haskell type situation.
(1987)
I'm curious to see how AI is going to reshape research in programming languages. Statically typed languages with expressive type systems should be even more relevant for instance.
Why do you think that?
Because the type system gives you correctness properties and gives fast feedback to the coding agent. It's much faster to type-check the code than, say, write and run unit tests.
One possible disadvantage of static types is that they can make the code more verbose, but agents really don't care; quite the opposite.
Funnily enough, when programming with agents in statically typed languages I always find myself needing to remind the agent to check for type errors from the LSP. Seems like it's something they're not so fond of.