"His brother John, working at the movie visual effects company Industrial Light & Magic" is underselling John Knoll a bit - he became one of the more prominent figures there and won two Oscars for his work (and was nominated for more).
Taking his contribution to Photoshop into account, one could say that if you saw mainstream motion or still pictures in the Western world in the last three decades, you've probably seen something influenced by him in one way or another.
Quite the praise from Grady Booch: "There are only a few comments in the version 1.0 source code, most of which are associated with assembly language snippets. That said, the lack of comments is simply not an issue. This code is so literate, so easy to read, that comments might even have gotten in the way." And: "This is the kind of code I aspire to write."
I'm looking at the code and just cannot agree. If I look at a command like "TRotateFloatCommand.DoIt" in URotate.p, it's 200 lines long without a single comment. I look at a section like this and there's nothing literate about it. I have no idea what it's doing or why at a glance:
pt.h := BSR (r.left + ORD4 (r.right), 1);
pt.v := BSR (r.top + ORD4 (r.bottom), 1);
pt.h := pt.h - BSR (width, 1);
pt.v := pt.v - BSR (height, 1);
pt.h := Max (0, Min (pt.h, fDoc.fCols - width));
pt.v := Max (0, Min (pt.v, fDoc.fRows - height));
IF width > fDoc.fCols THEN
pt.h := pt.h - BSR (width - fDoc.fCols - 1, 1);
IF height > fDoc.fRows THEN
pt.v := pt.v - BSR (height - fDoc.fRows - 1, 1);
Just breaking up the function with comments delineating its four main sections and what they do would be a start. As would simple things like commenting e.g. what purpose 'pt' serves -- the code block above is where it is first defined, but you can't guess what its purpose is until later when it's used to define something else.
Good code does not make comments unnecessary or redundant or harmful. This is a myth that needs to die. Comments help you understand code much faster, understand the purpose of variables before they get used, understand the purpose of functions and parameters before reading the code that defines them, etc. They vastly aid in comprehension. And those are just "what" comments I'm talking about -- the additional necessity of "why" comments (why the code uses x approach instead of seemingly more obvious approach y or z, which were tried and failed) is a whole other subject.
That particular code is idiomatic to anyone who worked with 2D bitmap graphics in that era.
pt == point, r == rect, h, v == horizontal, vertical, BSR(...,1) is a fast integer divide by 2, ORD4 promotes an expression to an unsigned 4 byte integer
The algorithms are extremely common for 2D graphics programming. The first is to find the center of a 2D rectangle, the second offsets a point by half the size, the third clips a point to be in the range of a rectangle, and so on.
Converting the idiomatic math into non-idiomatic words would not be an improvement in clarity in this case.
(Mac Pascal didn't have macros or inline expressions, so inline expressions like this were the way to go for performance.)
It's like using i,j,k for loop indexes, or x,y,z for graphics axis.
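For readers without that era's idioms, here is roughly what those lines compute, as a hedged Python sketch (the function name and flat parameter list are mine; the original operates on a rect r, a point pt, and the document fields fCols/fRows):

```python
def center_and_clamp(left, top, right, bottom, width, height, cols, rows):
    # 1. Centre of the rectangle r (BSR(x, 1) == x >> 1, a fast divide by 2;
    #    ORD4 just widens the sum so the addition can't overflow a 16-bit int).
    h = (left + right) >> 1
    v = (top + bottom) >> 1
    # 2. Step back by half the floated selection's size, so it ends up centred.
    h -= width >> 1
    v -= height >> 1
    # 3. Clamp so the selection stays within the document bounds.
    h = max(0, min(h, cols - width))
    v = max(0, min(v, rows - height))
    # 4. If the selection is bigger than the document, split the overhang evenly.
    if width > cols:
        h -= (width - cols - 1) >> 1
    if height > rows:
        v -= (height - rows - 1) >> 1
    return h, v
```

In other words: centre the floated selection over the middle of r, then keep it on the canvas.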
> Converting the idiomatic math into non-idiomatic words would not be an improvement in clarity in this case.
You seem to be missing my point. It's not about improving "clarity" about the math each line is doing -- that's precisely the kind of misconception so many people have about comments.
It's about, how long does it take me to understand the purpose of a block of code? If there was a simple comment at the top that said [1]:
# Calculate top-left point of the bounding box
then it would actually be helpful. You'd understand the purpose, and understand it immediately. You wouldn't have to decode the code -- you'd just read the brief remark and move on. That's what literate programming is about, in spirit -- writing code to be easily read at levels of the hierarchy. And very specifically not having to read every single line to figure out what it's doing.
The original assertion that "This code is so literate, so easy to read" is demonstrably false. Naming something "pt" is the antithesis of literate programming. And if you insist on no comments, you'd at least need to name it something like "bbox_top_left". A generic variable name like "pt", that isn't even introduced in the context of a loop or anything, is a cardinal sin here.
[1] https://news.ycombinator.com/item?id=46366341
Xyz makes sense because that is what those axes are literally labeled, but ijk I will rail against until I die.
There's no context in those names to help you understand them, you have to look at the code surrounding it. And even the most well-intentioned, small loops with obvious context right next to it can over time grow and add additional index counters until your obvious little index counter is utterly opaque without reading a dozen extra lines to understand it.
(And i and j? Which look so similar at a glance? Never. Never!)
> There's no context in those names to help you understand them, you have to look at the code surrounding it.
Hard disagree. Using "meaningful" index names is a distracting anti-pattern, for the vast majority of loops. The index is a meaningless structural reference -- the standard names allow the programmer to (correctly) gloss over it. To bring the point home, such loops could often (in theory, if not in practice, depending on the language) be rewritten as maps, where the index reference vanishes altogether.
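To make the map point concrete, a minimal Python illustration:

```python
values = [3, 1, 4, 1, 5]

# Index-based loop: 'i' is pure plumbing with no domain meaning.
doubled = []
for i in range(len(values)):
    doubled.append(values[i] * 2)

# Rewritten as a comprehension, the index reference vanishes altogether.
doubled2 = [v * 2 for v in values]

assert doubled == doubled2
```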
The issue isn't the names themselves, it's the locality of information. In a 3-deep nested loop, i, j, k forces the reader to maintain a mental stack trace of the entire block. If I have to scroll up to the for clause to remember which dimension k refers to, the abstraction has failed.
Meaningful names like row, col, cell transform structural boilerplate into self-documenting logic. ijk may be standard in math-heavy code, but in most production code bases, optimizing for a 'low-context' reader is not an anti-pattern.
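A contrived sketch of the difference (the names and dimensions here are hypothetical):

```python
rows, cols, channels = 2, 3, 3
grid = [[[1] * channels for _ in range(cols)] for _ in range(rows)]

# With i, j, k, the reader has to map each letter back to its axis
# at the for clauses to know what grid[i][j][k] means.
total = 0
for i in range(rows):
    for j in range(cols):
        for k in range(channels):
            total += grid[i][j][k]

# With domain names, the axes are restated at every use, and a
# transposed access like grid[col][row][ch] would look wrong on sight.
total2 = 0
for row in range(rows):
    for col in range(cols):
        for ch in range(channels):
            total2 += grid[row][col][ch]

assert total == total2
```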
If the loop is so big it's scrollable, sure use row, col, etc.
That was my "vast majority" qualifier.
For most short or medium sized loops, though, renaming "i" to something "meaningful" can harm readability. And I don't buy the defensive programming argument that you should do it anyway because the loop "might grow bigger someday". If it does, you can consider updating the names then. It's not hard -- they're hyper local variables.
In a single-level loop, i is just an offset. I agree that depending on the context (maybe even for the vast majority of for loops like you're suggesting) it's probably fine.
But once you nest three deep (as in the example that kicked off this thread), you're defining a coordinate space. Even in a 10-line block, i, j, k forces the reader to manually map those letters back to their axes. If I see grid[j][i][k], is that a bug or a deliberate transposition? I shouldn't have to look at the for clause to find out.
As other comments have mentioned, context does matter. As someone with a lot of 2D image/pixel processing experience, other than the 'BSR' and 'ORD4' items (which are clearly common in the codebase and in that era of computing), all that code makes perfect sense.
Also, breaking things down into more atomic functions wasn't the best idea for performance-sensitive code in those days, as compilers were not as good about knowing when to inline and when not to: compiler capabilities are a lot better today than they were 35 years ago...
This actually looks surprisingly straightforward for what the function is doing - certainly if you have domain context of image editing or document placement. You'll find it in a lot of UI code - this one uses bit shifts for efficiency but what it's doing is pretty straightforward.
For clarity and to demonstrate, this is basically what this function is doing, but in css:
BSR(x,1) simply meant x divided by 2. This was a very common coding idiom back in those days, when compilers didn't do any optimization and a bitwise shift was much faster than division.
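In modern notation (Python here), the equivalence holds cleanly for non-negative integers:

```python
# For non-negative integers, shifting right by 1 is floor division by 2.
for x in (0, 1, 7, 1024):
    assert x >> 1 == x // 2
```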
It’s not a myth, it’s a sound software engineering principle.
Every comment is a line of code, and every line of code is a liability, and, worse, comments are a liability waiting to rot, to be missed in a refactor, and waiting to become a source of confusion. It’s an excuse to name things poorly, because “good comment.” The purpose of variables should be in their name, including units if it’s a measurement. Parameters and return values should only be documented when not obvious from the name or type—for example, if you’re returning something like a generic Pair, especially if left and right have the same type. We’ve been living with decades of autocomplete; you don’t need to make variables short to type.
The problem with AI-generated code is that the myth that good code is thoroughly commented code is so pervasive that the default output mode for generated code is to comment every darn line it generates. After all, in software education, they don’t deduct points for needless comments, and students think their code is now better with the comments, because they almost never teach writing good code. Usually you get kudos for extensive comments. And then you throw away your work. The computer science field is littered with math-formula-influenced space-saving one or two letter identifiers, barely with any recognizable semantic meaning.
No amount of good names will tell you why something was done a certain way, or just as importantly why it wasn't done a certain way.
A name and signature is often not sufficient to describe what a function does, including any assumptions it makes about the inputs or guarantees it makes about the outputs.
That isn't to say that it isn't necessary to have good names, but that isn't enough. You need good comments too.
And if you say that all of that information should be in your names, you end up with very unwieldy names, that will bitrot even worse than comments, because instead of updating a single comment, you now have to update every usage of the variable or function.
>> Every comment is a line of code, and every line of code is a liability, and, worse, comments are a liability waiting to rot,
This is exactly my view. Comments, while they can be helpful, can also interrupt the reading of the code. They are also not verified by the compiler; curiously, in an era when everyone goes crazy for Rust safety, there is nothing as unsafe as comments, because they are completely ignored.
I do not oppose comments. But they should be used only when needed.
No. What you are describing is exactly the myth that needs to die.
> comments are a liability waiting to rot, to be missed in a refactor, and waiting to become a source of confusion
This gets endlessly repeated, but it's just defending laziness. It's your job to update comments as you update code. Indeed, they're the first thing you should update. If you're letting comments "rot", then you're a bad programmer. Full stop. I hate to be harsh, but that's the reality. People who defend no comments are just saying, "I can't be bothered to make this code easier for others to understand and use". It's egotistical and selfish. The solution for confusing comments isn't no comments -- it's good comments. Do your job. Write code that others can read and maintain. And when you update code, start with the comments. It's just professionalism, pure and simple.
For all we know, the comment came from someone who was doing their job (by your definition) and was bitten in the behind by colleagues who did not do their job. We do not live in an ideal world. Some people are sloppy because they don't know, don't care, or simply don't have the time to do it properly. One cannot put their full faith in comments because of that.
(Please note: I'm not arguing against comments. I'm simply arguing that trusting comments is problematic. It is understandable why some people would prefer to have clearly written code over clearly commented code.)
The code's functionality is immediately obvious to me as someone who works a lot with graphics coordinate systems.
I'm sure the code would be immediately obvious to anyone who would be working on it at the time.
Comments aren't unnecessary; they can be very helpful, but they also come with a high maintenance cost that should be considered when using them. They are a long-term maintenance liability because by design the compiler ignores them, so it's very easy to change or refactor code, miss updating a comment, and end up with a comment that is misleading or just plain wrong.
These days one could make some sort of case (though I wouldn't entirely buy it, yet) that an LLM-based linter could be used to make sure comments do not get disconnected from the code they are documenting, but in 1990? not so much.
Would I have used longer variable names for slightly more clarity? Today, sure. In 1990, probably not. Temporal context is important and compilers/editors/etc have come a long way since then.
When this got released I really expected someone in the open-source community to run with it, but as far as I know no one has. Back around 1990, a graphic designer who had his office in the same building my mom worked in let me copy his Photoshop 1.x disks, and nothing has ever compared to it for me. When will we get the Linux port of Photoshop 1.0? I would love to see how it develops.
> 2. Restrictions. Except as expressly specified in this Agreement, you may not: (a) transfer, sublicense, lease, lend, rent or otherwise distribute the Software or Derivative Works to any third party; or (b) make the functionality of the Software or Derivative Works available to multiple users through any means, including, but not limited to, by uploading the Software to a network or file-sharing service or through any hosting, application services provider, service bureau, software-as-a-service (SaaS) or any other type of services. You acknowledge and agree that portions of the Software, including, but not limited to, the source code and the specific design and structure of individual modules or programs, constitute or contain trade secrets of Museum and its licensors.
I was talking about more than just a literal port, running with it is broader than just a literal port. I guess my general point is that I am disappointed that all these releases of historical code have so little to show for being released.
Edit: Disappointed is really not the right word but I am failing at finding the right word.
What would you expect to happen? Photoshop 1.0 is an almost unusably basic image editor by modern standards. It doesn't even have layers (they were introduced with Photoshop 3.0 4 years later). Even if the code was licensed in a manner that allowed distribution of derivative works (which it isn't), it's written in Apple's Pascal dialect from the mid-80s and uses a UI framework that's also from the mid-80s and only supports classic Mac OS. CHM didn't even release the code in a state that could be usable out of the box if you happen to have a 40 year old Macintosh sitting around. Here's a blog post showing how much work it took someone to compile it: http://basalgangster.macgui.com/RetroMacComputing/The_Long_V...
I think Adobe decided to release the code because they knew it was only valuable from a historical standpoint and wouldn't let anyone actually compete with Photoshop. If you wanted to start a new image editor project from an existing codebase, it would be much easier to build off of something like Pinta: https://www.pinta-project.com/
1) these historical source code releases really are largely historical interest only. The original programs had constraints of memory and cpu speed that no modern use case does; the set of use cases for any particular task today is very different; what users expect and will tolerate in UI has shifted; available programming languages and tooling today are much better than the pragmatic options of decades past. If you were trying to build a Unix clone today there is no way you would want to start with the historical release of sixth edition. Even xv6 is only "inspired by" it, and gets away with that because of its teaching focus. Similarly if you wanted to build some kind of "streamlined lightweight photoshop-alike" then starting from scratch would be more sensible than starting with somebody else's legacy codebase.
2) In this specific case the licence agreement explicitly forbids basically any kind of "running with it" -- you cannot distribute any derivative work. So it's not surprising that nobody has done that.
I think Doom and similar old games are one of the few counterexamples, where people find value in being able to run the specific artefact on new platforms.
Open Source is the same thing as Free Software, just with the different name. The term "Open Source" was coined later to emphasize the business benefits instead of the rights and freedom of the users, but the four freedoms of the Free Software Definition [1] and the ten criteria of the Open Source Definition [2] describe essentially the same thing.
No, it’s source available but not open source. Open source requires at minimum the license to distribute modified copies. Popular open source licenses such as MIT [1] take this further:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
This makes the license transitive so that derived works are also MIT licensed.
Not quite. You need to include the MIT license text when distributing the software*, but the software you build doesn't need to also be MIT.
*: which unfortunately most users of MIT libraries do not follow as I often have an extremely difficult time finding the OSS licenses in their software distributions
MIT is not copyleft. The copyright notice must be included for those incorporated elements, but other downstream code it remains part of can be licensed however it wants.
AGPL and GPL are, on the other hand, as you describe.
Modifications can be licensed differently but that takes extra work. If I release a project with the MIT license at the top of each file and you download my project and make a 1-line change which you then redistribute, you need to explicitly mark that line as having a different license from the rest of the file otherwise it could be interpreted as also being MIT licensed.
You also could not legally remove the MIT license from those files and distribute with all rights reserved. My original granting of permission to modify and redistribute continues downstream.
Even without a specific definition for "open source", I wouldn't consider source code with a restrictive license that doesn't allow you to do much with it to be "open".
* If a country doesn't have "closed borders" then many foreigners can visit if they follow certain rules around visas, purpose, and length of stay. If instead anyone can enter and live there with minimal restrictions we say it has "open borders".
* If a journal isn't "closed access" it is free to read. If you additionally have permissions to redistribute, reuse, etc then it's "open access".
* If an organization doesn't practice "closed meetings" then outsiders can attend meetings to observe. If it additionally provides advance notice, allows public attendance without permission, and records or publishes minutes, then it has “open meetings.”
* If a club doesn't have "closed membership", it is open to admitting members. If anyone can join provided they meet relevant criteria (if any), then it has "open membership".
I understand it was a very unique and powerful piece of software in 1990 but why would it be such a game changer to have the 1.0 running on Linux today?
You could try having an LLM port it to Linux :) As an aside I was always (well, no longer) hoping that Photoshop gets ported to Linux because at least an IRIX port existed, so there has to be some source code with X11 or whatever library code.
Photoshop was ported to IRIX using Latitude, Quorum Software's implementation of Mac OS System 7. Apple later acquired Quorum's code and it became part of Carbon.
That software box on the shelf at Babbage’s is a cherished memory—a tangible oddity of software distribution prior to broadband, now just a relic in memory. Most of us assumed it would last forever. We get our software at the click of a button now, but we traded something for that.
Software felt more valuable when you forked over £60+ (which was worth a lot more back then) and got a physical box, with a chunky set of instruction manuals and 5+ floppy disks.
It wasn't even broadband that destroyed that experience. When CDs came around, developers realised they had space to just stick a PDF version of the manual on the CD itself, and put in a slip telling you to insert the CD, run autorun.exe if it didn't start already, and refer to the manual on the CD for the rest!
There are many things I feel nostalgic for in that era, but chunky manuals for specific software are at the bottom of that list.
They weren’t like textbooks, which have knowledge that tends to be relevant for decades. You’d get a new set with every software release, making the last 5-20 lbs of manuals obsolete.
You did lose some of the readability of an actual book. Hard-copy manuals were better for that. But for most software manuals, I did more “look up how to do this thing” than reading straight through. And with a pdf on a CD you had much better search capabilities. Before that you’d have to rely on the ToC, the book index and your own notes. For many manuals, the index wasn’t great. Full text search was a definite step up.
Even the good ones, like the 1980s IBM 2-ring binder manuals, which had good indexes, were a pain to deal with and couldn’t functionally match a PDF or text file on a CD for searchability.
Also, you were far more likely to get actual documentation back in the day. You're never going to get a detailed first-party technical reference for today's Apple computers (at least not without being Big Enough and signing a mountain of NDAs); compare that to the Apple II having a full listing of the Monitor ROM, or the original IBM PC Technical Reference Manual.
The very existence of those manuals improved the software, as the technical writers were trained in a different discipline than programming, and it really showed.
Even some well-documented modern software is obviously documented by the programmers and programmer-adjacent.
Manuals like AutoCAD's have certainly felt valuable: https://i.ebayimg.com/images/g/Gm8AAeSwwIZowjzn/s-l1600.jpg It's not even complete; for instance, the ADS manual is missing. It was also a bit more expensive, at roughly 3,700 USD in 1992.
I ran an exhibit of eight machines from my retrocomputing collection last year, including a 1986 Mac Plus with 1MB RAM running Photoshop 1.0. People really enjoyed it! It’s kind of remarkable what you can still do with it and how freeing it is to have singular focus in an app.
As I remember, the blue ones were the most ordinary (and boring), at least in the 3½-inch size. The 5¼-inch ones were mostly black, but I remember some of them in colors too (especially the orange or yellow ones, they were beautiful).
>To download the code you must agree to the terms of the license, which permits only non-commercial use and does not give you the right to license it to third parties by posting copies elsewhere on the web.
Note this is a toxic license. Accepting it and/or reading the code has potential for legal liability.
Still, I applaud releasing the source code, even if encumbered. Preservation is most important, and any legal teeth will eventually expire with the copyright.
> "Software architect Grady Booch is the Chief Scientist for Software Engineering at IBM Research Almaden and a trustee of the Computer History Museum. He offers the following observations about the Photoshop source code."
OMG. Booch?? The father of UML is still around? Given that UML is a true crime against humanity, it just goes to show there is no justice in the world. (I want a lifespan refund for the amount of time I spent learning UML and Design Patterns back in the bad old Enterprise Java days. Oof)
It was going to be the future of Software Engineering in the 2000s, Software Architects laying out boxes for Software Bricklayers to implement as dictated, code generation tools were going to make programming trivial.
It only worked for trivial CRUD apps, and maintaining modified versions of the generated code was a nightmare.
I used to use GIMP as an example of OSS desktop applications having bad UX, I mean back around 2010 maybe. The UX felt plain horrible. Anything I ever tried there was a pain to achieve. And there was a plethora of desktop applications having the same issue back then. "Geeks can't do UI".
I feel like that has changed? Even Blender felt good the last time I used it, Firefox became kinda fine, though these are probably bad examples as they are both mainstream software. But what about OSS that is used primarily by OSS enthusiasts? What about GIMP now?
This is just my personal experience, but even with the current UI, there can tend to be a learning curve with GIMP. A lot of it probably comes from figuring out where tools and functionality that are readily available upfront in other paint programs are hidden 2-3 menus deep in GIMP.
GIMP in my opinion has a very good UI when you're looking at graphics as a programmer: threshold this, clamp that, apply a kernel ("custom filter")... Everything seems to click with a mental model of someone who does graphics programming.
Whereas Photoshop and other "mainstream" software use terms and procedures non-programmers are more likely to be familiar with: heal this area with a patch, clone something with a clone stamp, scissors/lasso to cut something out (not saying GIMP doesn't have those)...
That’s what happens when you let people do other people's jobs. UI/UX design is a profession, and there is a reason for that.
Unfortunately, designers are rare among the FOSS community. You can't attract real casual or professional users if you don't recognize the value of professional UI/UX.
I've never understood the negative comments around UX for GIMP. It always feels just fine to me. Some stuff is in menus, but it's a complex application with a lot of parts, so I understand that.
Blender feels like an outlier amongst open source software. Outside of programmers tools the great majority of open source feels mediocre. I wonder what the Blender people did differently.
A simple trick to make GIMP perfectly usable (exists since ages):
> To change GIMP to single-window mode (merging panels into one window), go to "Windows" in the top menu and select or check "Single-Window Mode"; this merges all elements like the Toolbox, Layers, and History into one unified view.
For texting I recommend using a mobile phone or desktop instant messaging program. While it's not the case with all of them, graphics editing tools tend to have texting utilities as a second-class citizen at best
It's just the first two results from the top of Google.
Maybe the tool was improved in version 3.0, I'm running an older 2.x version. I will check it next time.
The versions I used had difficulties with:
- applying font sizes
- random loss / reset of settings
- some issues with the preview when editing
- font preview before selection
etc.
Both of those are from over a year ago? For the future, I wouldn't think that's "top" of any discussion.
The strange font sizes and settings reset were mostly fixed as part of the 2020 massive refactor [0]. There are still some minor inconsistencies between the two font editor panels, but they're being worked on.
Thankfully, you shouldn't have seen any random setting changes since about the 2018 build.
"His brother John, working at the movie visual effects company Industrial Light & Magic" is underselling John Knoll a bit - he became one of the more prominent figures there and won two Oscars for his work (and was nominated for more).
Taking his contribution for Photoshop into account, one could say that if you saw mainstream motion or still pictures in the Western world in the last three decades, you'll probably saw something influenced by him in one way or another.
Quite the praise by Grady Booch:
"There are only a few comments in the version 1.0 source code, most of which are associated with assembly language snippets. That said, the lack of comments is simply not an issue. This code is so literate, so easy to read, that comments might even have gotten in the way."
"This is the kind of code I aspire to write.”
> the lack of comments is simply not an issue
I'm looking at the code and just cannot agree. If I look at a command like "TRotateFloatCommand.DoIt" in URotate.p, it's 200 lines long without a single comment. I look at a section like this and there's nothing literate about it. I have no idea what it's doing or why at a glance:
Just breaking up the function with comments delineating its four main sections and what they do would be a start. As would simple things like commenting e.g. what purpose 'pt' serves -- the code block above is where it is first defined, but you can't guess what its purpose is until later when it's used to define something else.Good code does not make comments unnecessary or redundant or harmful. This is a myth that needs to die. Comments help you understand code much faster, understand the purpose of variables before they get used, understand the purpose of functions and parameters before reading the code that defines them, etc. They vastly aid in comprehension. And those are just "what" comments I'm talking about -- the additional necessity of "why" comments (why the code uses x approach instead of seemingly more obvious approach y or z, which were tried and failed) is a whole other subject.
That particular code is idiomatic to anyone who worked with 2D bitmap graphics in that era.
pt == point, r == rect, h, v == horizontal, vertical, BSR(...,1) is a fast integer divide by 2, ORD4 promotes an expression to an unsigned 4 byte integer
The algorithms are extremely common for 2D graphics programming. The first is to find the center of a 2D rectangle, the second offsets a point by half the size, the third clips a point to be in the range of a rectangle, and so on.
Converting the idiomatic math into non-idiomatic words would not be an improvement in clarity in this case.
(Mac Pascal didn't have macros or inline expressions, so inline expressions like this were the way to go for performance.)
It's like using i,j,k for loop indexes, or x,y,z for graphics axis.
> Converting the idiomatic math into non-idiomatic words would not be an improvement in clarity in this case.
You seem to be missing my point. It's not about improving "clarity" about the math each line is doing -- that's precisely the kind of misconception so many people have about comments.
It's about, how long does it take me to understand the purpose of a block of code? If there was a simple comment at the top that said [1]:
then it would actually be helpful. You'd understand the purpose, and understand it immediately. You wouldn't have to decode the code -- you'd just read the brief remark and move on. That's what literate programming is about, in spirit -- writing code to be easily read at levels of the hierarchy. And very specifically not having to read every single line to figure out what it's doing.The original assertion that "This code is so literate, so easy to read" is demonstrably false. Naming something "pt" is the antithesis of literature programming. And if you insist on no comments, you'd at least need to name is something like "bbox_top_left". A generic variable name like "pt", that isn't even introduced in the context of a loop or anything, is a cardinal sin here.
[1] https://news.ycombinator.com/item?id=46366341
Xyz makes sense because that is what those axes are literally labeled, but ijk I will rail against until I die.
There's no context in those names to help you understand them, you have to look at the code surrounding it. And even the most well-intentioned, small loops with obvious context right next to it can over time grow and add additional index counters until your obvious little index counter is utterly opaque without reading a dozen extra lines to understand it.
(And i and j? Which look so similar at a glance? Never. Never!)
> but ijk I will rail against until I die.
> There's no context in those names to help you understand them, you have to look at the code surrounding it.
Hard disagree. Using "meaningful" index names is a distracting anti-pattern, for the vast majority of loops. The index is a meaningless structural reference -- the standard names allow the programmer to (correctly) gloss over it. To bring the point home, such loops could often (in theory, if not in practice, depending on the language) be rewritten as maps, where the index reference vanishes altogether.
I respectfully disagree.
The issue isn't the names themselves, it's the locality of information. In a 3-deep nested loop, i, j, k forces the reader to maintain a mental stack trace of the entire block. If I have to scroll up to the for clause to remember which dimension k refers to, the abstraction has failed.
Meaningful names like row, col, cell transform structural boilerplate into self-documenting logic. ijk may be standard in math-heavy code, but in most production code bases, optimizing for a 'low-context' reader is not an anti-pattern.
If the loop is so big it's scrollable, sure use row, col, etc.
That was my "vast majority" qualifier.
For most short or medium sized loops, though, renaming "i" to something "meaningful" can harm readability. And I don't buy the defensive programming argument that you should do it anyway because the loop "might grow bigger someday". If it does, you can consider updating the names then. It's not hard -- they're hyper local variables.
In a single-level loop, i is just an offset. I agree that depending on the context (maybe even for the vast majority of for loops like you're suggesting) it's probably fine.
But once you nest three deep (as in the example that kicked off this thread), you're defining a coordinate space. Even in a 10-line block, i, j, k forces the reader to manually map those letters back to their axes. If I see grid[j][i][k], is that a bug or a deliberate transposition? I shouldn't have to look at the for clause to find out.
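A minimal C sketch of that contrast (the grid dimensions and the index names `layer`, `row`, `col` are made up for illustration):

```c
enum { DEPTH = 2, ROWS = 3, COLS = 4 };

/* With i, j, k the axis mapping lives only in the for clauses;
   a transposed access like grid[j][i][k] compiles just as happily. */
long sum_ijk(int grid[DEPTH][ROWS][COLS]) {
    long sum = 0;
    for (int i = 0; i < DEPTH; i++)
        for (int j = 0; j < ROWS; j++)
            for (int k = 0; k < COLS; k++)
                sum += grid[i][j][k];
    return sum;
}

/* Same traversal with named indices: an accidental transposition
   like grid[row][layer][col] now reads as wrong on sight. */
long sum_named(int grid[DEPTH][ROWS][COLS]) {
    long sum = 0;
    for (int layer = 0; layer < DEPTH; layer++)
        for (int row = 0; row < ROWS; row++)
            for (int col = 0; col < COLS; col++)
                sum += grid[layer][row][col];
    return sum;
}
```

Both functions walk the array identically; only the second one carries the coordinate space in the names instead of in the reader's head.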
ijk are standard in linear algebra for vector components.
> (And i and j? Which look so similar at a glance? Never. Never!)
This I agree with.
What if not ijk? I know only uvw.
As other comments have mentioned, context does matter. As someone with a lot of 2D image/pixel-processing experience, other than the 'BSR' and 'ORD4' items -- which were clearly common in the codebase and in that era of computing -- all that code makes perfect sense to me.
Also, breaking things down to more atomic functions wasn't the best idea for performance-sensitive things in those days, as compilers were not as good about knowing when to inline and not: compiler capabilities are a lot better today than they were 35 years ago...
This actually looks surprisingly straightforward for what the function is doing - certainly if you have domain context of image editing or document placement. You'll find it in a lot of UI code - this one uses bit shifts for efficiency but what it's doing is pretty straightforward.
For clarity and to demonstrate, this is basically what this function is doing, but in CSS (a rough sketch of the idea):
.container {
  /* center the object, clipping it when it's larger than the container */
  display: flex;
  justify-content: center;
  align-items: center;
  overflow: hidden;
}
.obj {
  flex: none; /* keep the object's own size */
}
BSR = bitwise right-shift
ORD4 = cast as 32bit integer.
BSR(x,1) simply meant x divided by 2. This was a very common coding idiom back in those days, when compilers didn't do any optimization and a bitwise shift was much faster than division.
The snippet in C would be:
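(Filling that in as a self-contained sketch: the `Point`/`Rect` structs mirror the QuickDraw-style Pascal records, and the `center_on_rect` wrapper and its parameter names are illustrative, not from the original codebase.)

```c
#include <stdint.h>

/* Illustrative structs mirroring the Pascal records. */
typedef struct { int32_t h, v; } Point;
typedef struct { int32_t top, left, bottom, right; } Rect;

static int32_t min32(int32_t a, int32_t b) { return a < b ? a : b; }
static int32_t max32(int32_t a, int32_t b) { return a > b ? a : b; }

/* Center a width-by-height object on the midpoint of r, then clamp the
   resulting top-left point to the document bounds (docCols by docRows).
   BSR(x, 1) in the Pascal is x >> 1, i.e. x / 2; ORD4 widens to 32 bits
   so the sum can't overflow 16-bit arithmetic. */
Point center_on_rect(Rect r, int32_t width, int32_t height,
                     int32_t docCols, int32_t docRows) {
    Point pt;
    pt.h = (r.left + r.right) >> 1;   /* horizontal midpoint of r */
    pt.v = (r.top + r.bottom) >> 1;   /* vertical midpoint of r   */
    pt.h -= width >> 1;               /* shift so the object is centered */
    pt.v -= height >> 1;
    pt.h = max32(0, min32(pt.h, docCols - width));   /* clamp to document */
    pt.v = max32(0, min32(pt.v, docRows - height));
    if (width > docCols)              /* object wider than the document:  */
        pt.h -= (width - docCols - 1) >> 1;  /* re-center the overhang    */
    if (height > docRows)
        pt.v -= (height - docRows - 1) >> 1;
    return pt;
}
```

For a 40x40 object centered on a 100x100 rect inside a 200x200 document, this yields the top-left point (30, 30).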
Reading the full function here https://github.com/amix/photoshop/blob/2baca147594d01cf9d17d...
If I understand it correctly, it was calculating the top-left point of the bounding box.
It’s not a myth, it’s a sound software engineering principle.
Every comment is a line of code, and every line of code is a liability. Worse, comments are a liability waiting to rot, to be missed in a refactor, and to become a source of confusion. They're an excuse to name things poorly, because "good comment." The purpose of a variable should be in its name, including units if it's a measurement. Parameters and return values should only be documented when not obvious from the name or type -- for example, if you're returning something like a generic Pair, especially if left and right have the same type. We've been living with autocomplete for decades; you don't need to make variables short to type.
The problem with AI-generated code is that the myth that good code is thoroughly commented code is so pervasive that the default output mode for generated code is to comment every darn line. After all, in software education they don't deduct points for needless comments, and students think their code is better with the comments, because they almost never teach writing good code. Usually you get kudos for extensive comments. And then you throw away your work. The computer science field is littered with math-formula-influenced, space-saving one- or two-letter identifiers, barely carrying any recognizable semantic meaning.
No amount of good names will tell you why something was done a certain way, or just as importantly why it wasn't done a certain way.
A name and signature is often not sufficient to describe what a function does, including any assumptions it makes about the inputs or guarantees it makes about the outputs.
That isn't to say that it isn't necessary to have good names, but that isn't enough. You need good comments too.
And if you say that all of that information should be in your names, you end up with very unwieldy names, that will bitrot even worse than comments, because instead of updating a single comment, you now have to update every usage of the variable or function.
>> Every comment is a line of code, and every line of code is a liability, and, worse, comments are a liability waiting to rot,
This is exactly my view. Comments, while they can be helpful, can also interrupt the reading of the code. They are also not verified by the compiler; curiously, in an era when everyone goes crazy for Rust safety, there is nothing less safe than comments, because they are completely ignored.
I do not oppose comments. But they should be used only when needed.
No. What you are describing is exactly the myth that needs to die.
> comments are a liability waiting to rot, to be missed in a refactor, and waiting to become a source of confusion
This gets endlessly repeated, but it's just defending laziness. It's your job to update comments as you update code. Indeed, they're the first thing you should update. If you're letting comments "rot", then you're a bad programmer. Full stop. I hate to be harsh, but that's the reality. People who defend no comments are just saying, "I can't be bothered to make this code easier for others to understand and use". It's egotistical and selfish. The solution for confusing comments isn't no comments -- it's good comments. Do your job. Write code that others can read and maintain. And when you update code, start with the comments. It's just professionalism, pure and simple.
For all we know, the comment came from someone who was doing their job (by your definition) and was bitten in the behind by colleagues who did not do theirs. We do not live in an ideal world. Some people are sloppy because they don't know, don't care, or simply don't have the time to do it properly. One cannot put their full faith in comments because of that.
(Please note: I'm not arguing against comments. I'm simply arguing that trusting comments is problematic. It is understandable why some people would prefer to have clearly written code over clearly commented code.)
The code's functionality is immediately obvious to me as someone who works a lot with graphics coordinate systems.
I'm sure the code would be immediately obvious to anyone who would be working on it at the time.
Comments aren't unnecessary -- they can be very helpful -- but they also come with a high maintenance cost that should be considered when using them. They are a long-term maintenance liability because, by design, the compiler ignores them, so it's very easy to change or refactor code, miss changing a comment, and end up with a comment that is misleading or just plain wrong.
These days one could make some sort of case (though I wouldn't entirely buy it, yet) that an LLM-based linter could be used to make sure comments do not get disconnected from the code they are documenting, but in 1990? not so much.
Would I have used longer variable names for slightly more clarity? Today, sure. In 1990, probably not. Temporal context is important and compilers/editors/etc have come a long way since then.
Man I just don't know who to believe, you or the Chief Scientist for Software Engineering at IBM Research Almaden.
When this got released I really expected someone in the open-source community to run with it, but as far as I know no one has. Back around 1990, a graphic designer who had his office in the same building my mom worked in let me copy his Photoshop 1.x disks, and nothing has ever compared to it for me. When will we get the Linux port of Photoshop 1.0? I would love to see how it develops.
If they did, they can only send you screenshots
> 2. Restrictions. Except as expressly specified in this Agreement, you may not: (a) transfer, sublicense, lease, lend, rent or otherwise distribute the Software or Derivative Works to any third party; or (b) make the functionality of the Software or Derivative Works available to multiple users through any means, including, but not limited to, by uploading the Software to a network or file-sharing service or through any hosting, application services provider, service bureau, software-as-a-service (SaaS) or any other type of services. You acknowledge and agree that portions of the Software, including, but not limited to, the source code and the specific design and structure of individual modules or programs, constitute or contain trade secrets of Museum and its licensors.
I was talking about more than just a literal port, running with it is broader than just a literal port. I guess my general point is that I am disappointed that all these releases of historical code have so little to show for being released.
Edit: Disappointed is really not the right word but I am failing at finding the right word.
What would you expect to happen? Photoshop 1.0 is an almost unusably basic image editor by modern standards. It doesn't even have layers (they were introduced with Photoshop 3.0 4 years later). Even if the code was licensed in a manner that allowed distribution of derivative works (which it isn't), it's written in Apple's Pascal dialect from the mid-80s and uses a UI framework that's also from the mid-80s and only supports classic Mac OS. CHM didn't even release the code in a state that could be usable out of the box if you happen to have a 40 year old Macintosh sitting around. Here's a blog post showing how much work it took someone to compile it: http://basalgangster.macgui.com/RetroMacComputing/The_Long_V...
I think Adobe decided to release the code because they knew it was only valuable from a historical standpoint and wouldn't let anyone actually compete with Photoshop. If you wanted to start a new image editor project from an existing codebase, it would be much easier to build off of something like Pinta: https://www.pinta-project.com/
I think there's two parts to this:
1) these historical source code releases really are largely historical interest only. The original programs had constraints of memory and cpu speed that no modern use case does; the set of use cases for any particular task today is very different; what users expect and will tolerate in UI has shifted; available programming languages and tooling today are much better than the pragmatic options of decades past. If you were trying to build a Unix clone today there is no way you would want to start with the historical release of sixth edition. Even xv6 is only "inspired by" it, and gets away with that because of its teaching focus. Similarly if you wanted to build some kind of "streamlined lightweight photoshop-alike" then starting from scratch would be more sensible than starting with somebody else's legacy codebase.
2) In this specific case the licence agreement explicitly forbids basically any kind of "running with it" -- you cannot distribute any derivative work. So it's not surprising that nobody has done that.
I think Doom and similar old games are one of the few counterexamples, where people find value in being able to run the specific artefact on new platforms.
you literally said:
> When will we get the linux port of Photoshop 1.0?
The source is now readable but it’s not open source at all.
It is open source but not free software.
Open Source is the same thing as Free Software, just with the different name. The term "Open Source" was coined later to emphasize the business benefits instead of the rights and freedom of the users, but the four freedoms of the Free Software Definition [1] and the ten criteria of the Open Source Definition [2] describe essentially the same thing.
[1] https://www.gnu.org/philosophy/free-sw.en.html
[2] https://opensource.org/osd
No, it’s source available but not open source. Open source requires at minimum the license to distribute modified copies. Popular open source licenses such as MIT [1] take this further:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
This makes the license transitive so that derived works are also MIT licensed.
[1] https://en.wikipedia.org/wiki/MIT_License?wprov=sfti1#Licens...
Not quite. You need to include the MIT license text when distributing the software*, but the software you build doesn't need to also be MIT.
*: which unfortunately most users of MIT libraries do not follow as I often have an extremely difficult time finding the OSS licenses in their software distributions
MIT is not copyleft. The copyright notice must be included for those incorporated elements, but other downstream code it remains part of can be licensed however it wants.
AGPL and GPL are, on the other hand, as you describe.
Modifications can be licensed differently but that takes extra work. If I release a project with the MIT license at the top of each file and you download my project and make a 1-line change which you then redistribute, you need to explicitly mark that line as having a different license from the rest of the file otherwise it could be interpreted as also being MIT licensed.
You also could not legally remove the MIT license from those files and distribute with all rights reserved. My original granting of permission to modify and redistribute continues downstream.
It's "source available" but not open source.
It's "source available" [1], not open source [2].
Words have meaning and all that.
1: https://en.wikipedia.org/wiki/Source-available_software
2: https://en.wikipedia.org/wiki/Open_source
> Words have meaning and all that.
Ironic put down when “open source” consists of two words which have meaning, but somehow doesn’t mean that when combined into one phrase.
Same with free software, in a way.
Programmers really are terrible at naming things.
:)
Even without a specific definition for "open source", I wouldn't consider source code with a restrictive license that doesn't allow you to do much with it to be "open".
Can't blame him. We're in a bit of a bananas situation where open source isn't the antonym of closed source.
This isn't that uncommon:
* If a country doesn't have "closed borders" then many foreigners can visit if they follow certain rules around visas, purpose, and length of stay. If instead anyone can enter and live there with minimal restrictions we say it has "open borders".
* If a journal isn't "closed access" it is free to read. If you additionally have permissions to redistribute, reuse, etc then it's "open access".
* If an organization doesn't practice "closed meetings" then outsiders can attend meetings to observe. If it additionally provides advance notice, allows public attendance without permission, and records or publishes minutes, then it has “open meetings.”
* A club that doesn't have "closed membership" is open to admitting members. If anyone can join provided they meet the relevant criteria (if any), then it's "open membership".
EDIT: expanded this into a post: https://www.jefftk.com/p/open-source-is-a-normal-term
* A set that isn't open isn't (necessarily) closed.
* A set that is open can also be closed.
I understand it was a very unique and powerful piece of software in 1990 but why would it be such a game changer to have the 1.0 running on Linux today?
What about GIMP or any of the other open source image editors?
Just supporting a modern OS's graphical API (The pre-OSX APIs are long dead and unsupported) is a major effort.
You could try having an LLM port it to Linux :) As an aside I was always (well, no longer) hoping that Photoshop gets ported to Linux because at least an IRIX port existed, so there has to be some source code with X11 or whatever library code.
https://fsck.technology/software/Silicon%20Graphics/Software...
Photoshop was ported to IRIX using Latitude, Quorum Software's implementation of Mac OS System 7. Apple later acquired Quorum's code and it became part of Carbon.
There's System 7 for Unix 'natively', with either Executor (there's a fork on GitHub) or some other project: https://www.v68k.org/advanced-mac-substitute/
https://github.com/autc04/executor
https://github.com/jjuran/metamage_1/
If it uses Motif and IrisGL (now Mesa3D), the porting effort is near nil.
And, for purity/completeness, avoid Maxx Desktop and/or NSCDE; EMWM with XMToolbar is close enough to SGI's IRIX desktop.
https://fastestcode.org/emwm.html
As an experiment, I gave the source zip file to Claude and told it to make a WASM version of the app, by translating the Pascal to Go.
It nailed it, first try.
I cannot, unfortunately, share a link to the website it created because of the license.
LLM translation of historical software to modern platforms is a solved problem. Try it, you'll see.
I used https://exe.dev/ and their Shelley agent to drive Claude. Give it a try, it is jaw dropping.
Can you post a video demonstrating you using it?
That software box on the shelf at Babbage’s is a cherished memory—a tangible oddity of software distribution prior to broadband, now just a relic in memory. Most of us assumed it would last forever. We get our software at the click of a button now, but we traded something for that.
Software felt more valuable when you forked over £60+ ( Which was worth a lot more back then ) and got a physical box, with a chunky set of instruction manuals and 5+ floppy disks.
It wasn't even broadband that destroyed that experience: when CDs came around, developers realised they had space to just stick a PDF version of the manual on the CD itself, and put in a slip telling you to insert the CD, run autorun.exe if it didn't start already, and refer to the manual on the CD for the rest!
There are many things I feel nostalgic for in that era, but chunky manuals for specific software are at the bottom of that list.
They weren’t like textbooks, which have knowledge that tends to be relevant for decades. You’d get a new set with every software release, making the last 5-20 lbs of manuals obsolete.
You did lose some of the readability of an actual book. Hard-copy manuals were better for that. But for most software manuals, I did more “look up how to do this thing” than reading straight through. And with a pdf on a CD you had much better search capabilities. Before that you’d have to rely on the ToC, the book index and your own notes. For many manuals, the index wasn’t great. Full text search was a definite step up.
Even the good ones, like the 1980s IBM 2-ring binder manuals, which had good indexes, were a pain to deal with and couldn’t functionally match a PDF or text file on a CD for searchability.
Also, you were far more likely to get actual documentation back in the day. You're never going to get a detailed first-party technical reference for today's Apple computers (at least not without being Big Enough and signing a mountain of NDAs); compare that to the Apple II having a full listing of the Monitor ROM, or the original IBM PC Technical Reference Manual.
The very existence of those manuals improved the software, as the technical writers were trained in a different discipline than programming, and it really showed.
Even some well-documented modern software is obviously documented by the programmers and programmer-adjacent.
Manuals like AutoCAD's have certainly felt valuable: https://i.ebayimg.com/images/g/Gm8AAeSwwIZowjzn/s-l1600.jpg That set isn't even complete; for instance, the ADS manual is missing. It was also a bit more expensive, at roughly 3,700 USD in 1992.
Oh yeah, when I said £60, I was thinking of even the cheapest consumer-grade software!
I ran an exhibit of eight machines from my retrocomputing collection last year, including a 1986 Mac Plus with 1MB RAM running Photoshop 1.0. People really enjoyed it! It’s kind of remarkable what you can still do with it and how freeing it is to have singular focus in an app.
There was something magical about white floppies, as shown in the screenshot.
You mean photo, not screenshot.
I think all floppies are magical :)
Image.
Back then, black ones were ordinary, and only white/grey ones were used for licensed software, thus more desirable.
https://computerhistory.org/wp-content/uploads/2019/08/photo...
As I remember, the blue ones were the most ordinary (and boring), at least in the 3½-inch size. The 5¼-inch ones were mostly black, but I remember some of them in colors too (especially orange or yellow ones; they were beautiful).
E.g: https://c7.alamy.com/comp/2AA9BC4/ajaxnetphoto-2019-worthing...
White gold: https://archive.org/download/windows-3.00a/media-disk01.png
The same for cameras back in the 60s/70s. Silver was the norm, black was way more desirable. Funnily it's now the opposite.
Still better than GIMP... /s (maybe)
Interesting little read. I always find it fascinating when old code holds up really well - especially structurally. Great trip down memory lane!
>To download the code you must agree to the terms of the license, which permits only non-commercial use and does not give you the right to license it to third parties by posting copies elsewhere on the web.
Note this is a toxic license. Accepting it and/or reading the code has potential for legal liability.
Still, applaud releasing the source code, even if encumbered. Preservation is most important, and any legal teeth will eventually expire with the copyright.
> Note this is a toxic license. Accepting it and/or reading of the code has potential for legal liability.
How would this potentially expose you to legal liability?
Wow! Writing photoshop while a phd student at Michigan! Wish current students would do some code
> "Software architect Grady Booch is the Chief Scientist for Software Engineering at IBM Research Almaden and a trustee of the Computer History Museum. He offers the following observations about the Photoshop source code."
OMG. Booch?? The father of UML is still around? Given that UML is a true crime against humanity, it just goes to show there is no justice in the world. (I want a lifespan refund for the amount of time I spent learning UML and Design Patterns back in the bad old Enterprise Java days. Oof)
On the contrary, UML is quite useful in enterprise architecture, and I am yet to find an alternative that isn't much worse.
It is like the YAML junk that gets pushed nowadays to the detriment of the proper schemas and validation tools we have in XML.
I completed a CS degree just a year ago, and they absolutely wrecked us with UML. I’m still recovering mentally.
UML used to be a staple of job interviews.
It was going to be the future of Software Engineering in the 2000s: software architects laying out boxes for software bricklayers to implement as dictated, and code-generation tools were going to make programming trivial.
That only worked for trivial CRUD apps, and maintaining modified versions of the generated code was a nightmare.
I was drawing UML just before the Christmas vacation. When one works at scale, drawing boxes to discuss implementations works much better than throwaway code.
It is also a great way to document existing architectures.
This AI hype cycle reminds me of that era.
Gimp source code: https://gitlab.gnome.org/GNOME/gimp
I used to use GIMP as an example of OSS desktop applications having bad UX -- this was back around 2010, maybe. The UX felt plain horrible. Anything I ever tried there was a pain to achieve. And there was a plethora of desktop applications with the same issue back then. "Geeks can't do UI".
I feel like that has changed? Even Blender felt good the last time I used it, Firefox became kinda fine, though these are probably bad examples as they are both mainstream software. But what about OSS that is used primarily by OSS enthusiasts? What about GIMP now?
This is just my personal experience, but even with the current UI there can be a learning curve with GIMP. A lot of it probably comes from figuring out where tools and functionality that are readily available up front in other paint programs are hidden 2-3 menus deep in GIMP.
GIMP in my opinion has a very good UI when you're looking at graphics as a programmer: threshold this, clamp that, apply a kernel ("custom filter")... Everything seems to click with a mental model of someone who does graphics programming.
Whereas Photoshop and other "mainstream" software use terms and procedures non-programmers are more likely to be familiar with: heal this area with a patch, clone something with a clone stamp, scissors/lasso to cut something out (not saying GIMP doesn't have those)...
That’s what happens when you let people do other people's jobs. UI/UX design is a profession, and there is a reason for that.
Unfortunately, designers are rare among the FOSS community. You can't attract real casual or professional users if you don't recognize the value of professional UI/UX.
I've never understood the negative comments around UX for GIMP. It always feels just fine to me. Some stuff is in menus, but it's a complex application with a lot of parts, so I understand that.
Blender feels like an outlier amongst open-source software. Outside of programmers' tools, the great majority of open source feels mediocre. I wonder what the Blender people did differently.
Unlike most FOSS, Blender gets millions of dollars a year to support development.
A simple trick to make GIMP perfectly usable (exists since ages):
> To change GIMP to single-window mode (merging panels into one window), go to "Windows" in the top menu and select or check "Single-Window Mode"; this merges all elements like the Toolbox, Layers, and History into one unified view.
the funny thing with GIMP is: even though it's a very powerful tool, it still lacks a good texting tool to this day :-)
and having the source available didn't help so far either :-))
For texting I recommend using a mobile phone or desktop instant messaging program. While it's not the case with all of them, graphics editing tools tend to have texting utilities as a second-class citizen at best
haha, good one :-D ;-) ;-)
Can you detail what you mean by good texting tool? What features are missing?
for the downvoters:
could you please show me a good texting tool plugin for GIMP, then?
you can check their forums & other sites: the texting tools are at the top of their discussion lists.
I don't see it at the top of the discussion on the forums I checked.
So can you expand on why you think the text tool is bad?
Before release 3.0: https://discuss.pixls.us/t/gimp-3-0-will-the-text-tool-be-im...
Reddit: https://www.reddit.com/r/GIMP/comments/1fecr6u/suggestion_im...
It's just the first two results from the top of Google.
Maybe the tool was improved in version 3.0, I'm running an older 2.x version. I will check it next time.
Those versions were difficult regarding:
- applying font sizes
- random loss/reset of settings
- some issues with the preview when editing
- font preview before selection
etc.
Both of those are from over a year ago. I wouldn't call that the "top" of any discussion today.
The strange font sizes and the settings reset were mostly fixed as part of the massive 2020 refactor [0]. There are still some minor inconsistencies between the two font editor panels, but they're being worked on.
Thankfully, you shouldn't have had any random setting changes since about the 2018 builds.
[0] https://gitlab.gnome.org/GNOME/gimp/-/issues/344
I don't understand what you mean by texting tool. Do you mean text rendering? kerning?
Honestly, I think it was just the smiley faces. I didn't downvote.
FTFY: the funny thing with GIMP is: even while its a very powerful tool, it still lacks a good image editing tool until today
Nothing stops you from creating a PR :-)))
I would, if I used GIMP often enough to have the motivation -- I use GIMP maybe 2-3 times a year.
And that's the irony covered in my post: even though the source is available, it hasn't motivated anyone so far to create a better version of the build.
Nothing stops you from commenting these useless comments.