One of my early "this is neat" programs was a genetic algorithm in Pascal. You entered a bunch of digits and it "evolved" the same sequence of digits. It started out with 10 random numbers. Their fitness (lower was better) was the sum of the per-digit differences, so if the target was "123456" and the test number was "214365", it had a fitness of 6. It took the top 5, and then mutated a random digit by a random +/- 1. It printed each generation's full population as a row, so you could see it scrolling as it converged on the target number.
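In modern terms that scheme is only a few lines. Here is a minimal sketch in Python (rather than the original Pascal), with a couple of assumptions the description above leaves open: mutated digits are clamped to 0..9, and the bottom half of the population is refilled with mutated copies of random survivors.

    import random

    TARGET = "123456"     # the digit string the user typed in
    POP_SIZE = 10         # 10 random numbers to start, as described
    KEEP = 5              # top 5 survive each generation

    def fitness(candidate):
        # Sum of per-digit absolute differences; lower is better.
        return sum(abs(int(a) - int(b)) for a, b in zip(candidate, TARGET))

    def mutate(candidate):
        # Nudge one random digit by +/- 1 (clamped to 0..9 -- an assumption).
        i = random.randrange(len(candidate))
        d = max(0, min(9, int(candidate[i]) + random.choice((-1, 1))))
        return candidate[:i] + str(d) + candidate[i + 1:]

    population = ["".join(random.choice("0123456789") for _ in TARGET)
                  for _ in range(POP_SIZE)]
    generation = 0
    while min(fitness(c) for c in population) > 0:
        population.sort(key=fitness)
        print(generation, population)   # one scrolling row per generation
        survivors = population[:KEEP]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - KEEP)]
        generation += 1
    print("converged on", min(population, key=fitness))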
Looking back, I want to say it was probably the July 1992 issue of Scientific American that inspired me to write that ( https://www.geos.ed.ac.uk/~mscgis/12-13/s1100074/Holland.pdf ). And as that was '92, this might have been on a Mac rather than an Apple ][+... it was certainly in Pascal (my first class in C was in August '92) and I had access to both at the time (I don't think it was Turbo Pascal on a PC, as this was a summer thing and I didn't have an IBM PC at home at the time). Alas, I remember more about the specifics of the program than I do about what desk I was sitting at.
I wrote a whole project in Pascal around that time, analyzing two datasets. It was running out of memory the night before it was due, so I decided to have it run twice, once for each dataset.
That's when I learned a very important principle: "When something needs doing quickly, don't force artificial constraints on yourself."
I could have spent three days figuring out how to deal with the memory constraints. But instead I just cut the data in half and gave it two runs. The quick solution was the one that was needed. Kind of an important memory for me that I have thought about quite a bit in the last 30+ years.
An aeon ago, in 1984, I wrote a perceptron on the Apple II. It was amazingly slow (20 minutes to complete a recognition pass), but what most impressed me at the time was that it did work. Ever since then, as a kid, I wondered just how far linear optimization techniques could take us. If I could just tell myself then what I know now...
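For anyone curious what that entails, the classic perceptron update rule is tiny. Here is a minimal sketch in Python, with a made-up linearly separable task (AND of two inputs) standing in for whatever recognition pass the original program did:

    import random

    # Toy perceptron: learn AND of two inputs. The task, learning rate, and
    # epoch count are placeholders; the original Apple II recognition task
    # isn't described above.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = random.uniform(-1, 1)
    lr = 0.1

    def predict(x):
        return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

    for epoch in range(100):
        errors = 0
        for x, target in data:
            err = target - predict(x)      # -1, 0, or +1
            if err:
                w[0] += lr * err * x[0]    # classic perceptron update
                w[1] += lr * err * x[1]
                b += lr * err
                errors += 1
        if errors == 0:                    # everything classified correctly
            break
    print("weights", w, "bias", b)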
This motivates me to try this on my Minstrel 4th (a 21st-century Jupiter Ace clone).
I thought this was going to be about the programming language, and I was wondering how they managed to implement it on a machine that small.
That's also what I was thinking. ML predates the Apple II by 4 years, so I think there is definitely a chance of getting it running! If targeting the Apple IIGS I think it would be very achievable; you could fit megabytes of RAM in those.
Same. What flavor of ML would be the most appropriate for that challenge, do you think?
While not exactly ML, David Turner's Miranda system is pretty small, and might be feasible:
https://codeberg.org/DATurner/miranda
Bit of a weird choice to draw a decision boundary for a clustering algorithm...
Upvoted purely for nostalgia.
Any particular reason why the author chose to do this on an Apple ][?
(I mean, the pictures look cool and all.)
I.e., did the author want to experiment with older forms of BASIC, or were they trying to learn more about old computers?
Since when did regression get upgraded to full blown ML?
What is ML if not interpolation and extrapolation?
A million things.
Diffusion, back propagation, attention, to name a few.
Since when did ML get upgraded to full blown AI?
When you find yourself solving NP-hard problems on an Apple II, chances are strong you've entered machine learning territory.