For folks who don't know what Magic Lantern is:
> Magic Lantern is a free software add-on that runs from the SD/CF card and adds a host of new features to Canon EOS cameras that weren't included from the factory by Canon.
It also backports new features to old Canon cameras that aren't supported anymore, and is generally just a really impressive feat of both (1) reverse engineering and (2) keeping old hardware relevant and useful.
More backstory: before the modern generation of digital cameras, Magic Lantern was one of the early ways to "juice" more power out of early generations of Canon cameras, including RAW video recording.
Today, cameras like Blackmagic and editing platforms like DaVinci handle RAW seamlessly, but it wasn't like this even a few years ago.
Funny, when I saw it uses a .fm TLD I thought it was some online radio.
They were trendy at the time :D
I think possibly someone thought it sounded a bit like firmware?
Same :) I had in mind Groove Salad from soma.fm
last.fm
"Scrobbles" will always be a funny word to me.
sub.fm
I wish there were similar projects for other camera brands, like Fujifilm. Given the abilities of ML on old Canon cameras, we know there is a lot of potential in those old machines across other brands. It is also an eco-friendly approach that should be supported.
I just switched from Canon to Fujifilm due to enshittification. Canon started charging $5/mo to get clean video out of their cameras. We're plenty screwed if manufacturers decide that cameras are subscriptions and not glass.
Fujis are great, but the ecosystem is definitely smaller, and I've found some software still doesn't support debayering X-Trans.
Yeah, like Adobe. Whatever method they use has been peak worm creation for over 10 years. Capture One and dcraw are head and shoulders better.
It also has a scripting system and is damn fun to mess with.
> The main thing you need is knowledge of C, which is a small language that has good tutorials.
Heh, a little like saying "the main thing you need is to be able to play the violin, which is a small instrument with good tutorials".
I stand by my statement! Compare the length of the C standard to JS / ECMAScript, or C++! :)
Maaaaybe I'm hiding a tradeoff around complexity vs built-in features, but volunteers can work that out themselves later on.
You honestly don't need much knowledge of C to get started in some areas. The ML GUI is easy to modify if you stay within the lines. Other areas, e.g., porting a complex feature to a new camera, are much harder. But that's the life of a reverse engineer.
Conversely, the terseness of the C standard also means there are many more footguns and undefined behaviors. There are many things C is, but easy to pick up is not one of them. I loved C all the way up until I graduated uni, but it would be a very hard sell to get me to pick it for a project these days. To me, working with C is akin to working with assembly: you feel like you're doing real programming, but realistically there are better options for most scenarios these days.
I agree with some of what you're saying; some of the well known risks of working in C are because it's a small standard. But much of the undefined behaviour was deliberately made that way to support the hardware of the time - it's hard to be cross-platform on different architectures as a low-level language.
C genuinely is easy to pick up. It is harder to master. And you're right, for many domains there are better options now, so it may not be worthwhile mastering it.
Because it's an old language, what it lacks in built-in safety features, is provided by decades of very good surrounding tooling. You do of course need to learn that tooling, and choose to use it!
In the context of Magic Lantern, C is the natural fit. We are working with very tight memory limitations, due to the OS. We support single core 200 MHz targets (ARMv5, no out-of-order or other fancy tricks). We don't include the C stdlib; a small test binary can be < 1kB. Normal builds are around 400kB (this includes a full GUI, debug capabilities, all strings and assets, etc).
Canon code is probably mostly C, some C++. We have to call their code directly (casting reverse engineered addresses to function pointers, basically). We don't know what safety guarantees their code makes, or what the API is. Most of our work is interacting with OS or hardware. So we wouldn't gain much by using a safe language for our half.
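To make that concrete, calling a reverse engineered function looks roughly like this in C. The address, name, and signature below are invented for illustration, not real Canon symbols; ML keeps per-model tables of such "stubs":

    /* Illustrative only: address and signature are made up, not a real
     * Canon symbol. ML tracks such addresses per camera model. */
    typedef void (*fw_led_control_t)(int led_id, int on);

    /* a reverse engineered address, found with a disassembler, made callable: */
    #define fw_led_control ((fw_led_control_t)0xFF123456)

    void blink_once(void)
    {
        fw_led_control(0, 1);   /* we must trust the reversed signature... */
        fw_led_control(0, 0);   /* ...there is no header to check it against */
    }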
> C genuinely is easy to pick up.
I feel like this is a bit of an https://xkcd.com/2501/ situation.
C is considered easy to pick up for the average user posting HN comments because we have the benefit of years -- the average comp sci student, who has been exposed to Javascript and Python and might not know what "pass by reference" even means... I'm not sure they're going to consider C easy.
C is taught as the introduction to programming in CS50x, Harvard's wildly popular MOOC for teaching programming to first-year college students and lifelong learners via the internet. Using the clang toolchain gives you much better error messages than old versions of gcc used to give. And I bet AI/LLM/copilot tools are pretty good at C given how much F/OSS is written in C.
Just to provide another data point: C is a little easier to pick up today than it was in the 1990s or 2000s, when all you had was the K&R C book and a Linux shell. I regularly recommend CS50x to newcomers to programming via a guide I wrote up as a GitHub gist. I took the CS50x course myself in 2020 (just to refresh my own memory of C after years of not using it that much), and it is very high quality.
See this comment for more info:
https://news.ycombinator.com/item?id=40690760
I've taught several different languages to both 1st year uni students and new joiners at a technical company who had no programming background.
Honestly, C seems to be one of the easier languages to teach the basics of. It's certainly easier than Java or C++, which have many more concepts.
C has some concepts that confuse the hell out of beginners, and it will let you shoot yourself in the foot very thoroughly with them (much more than say, Java). But you don't tend to encounter them till later on.
I have never said getting good at C is easy. Just that it's easy to pick up.
C made a lot more sense to me after having done assembly (6502 in my case, but it probably doesn't matter). Things like passing a reference suddenly just made sense.
Depends on which school you went to? The one I went to started with C and LISP in the 2010s, and then moved on to C++ and Java, with some Python.
Everything is passed by reference in Python. Everything is passed by value in C.
Not quite true for Python, but a close approximation.
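For anyone following along, the C side of that fits in a few lines, and it is exactly the assembly mental model: a "reference" is just an address, passed by value like any other value:

    /* C passes every argument by value; a "reference" is just a pointer
     * value, exactly like an address in assembly. */
    #include <stdio.h>

    void no_effect(int x)   { x = 42; }   /* modifies a private copy      */
    void by_pointer(int *x) { *x = 42; }  /* modifies the caller's object */

    int main(void)
    {
        int n = 0;
        no_effect(n);
        printf("%d\n", n);   /* prints 0  */
        by_pointer(&n);
        printf("%d\n", n);   /* prints 42 */
        return 0;
    }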
Undefined behaviors -- yes. But being able to trigger undefined behavior is not a huge footgun by itself. Starting with good code examples means you are much less likely to trigger it.
Having a good, logical description of supported features, with a warning that if you do unsupported stuff things may break, is much more important than trying to define every possible action in a predictable way.
The latter approach often leads to explosion of spec volume and gives way more opportunities for writing bad code: predictable in execution, but instead with problems in design and logic which are harder to understand, maintain and fix. My 2c.
I stand by my statement! Compare the number of strings a violin has to the keys on a piano! :)
I know it's all at least semi-tongue-in-cheek, but IRL a piano's discrete, sequential keys are what make it almost inarguably the easiest instrument to learn.
That's exactly his point. Languages aren't easier to learn simply because their specification is short, any more than instruments are easier to play because they have fewer strings.
The analogy is completely invalid. Languages with small specifications are easier to learn.
It's sad that the dev, who has done great work, has to spend time defending the C language from critters living under a bridge when it's a fixed element that isn't going to change.
Accusing people who disagree with you of being trolls doesn't bolster your argument.
Speaking of weak arguments: that wasn't the basis of the accusation.
People don't argue with a carpenter over what tools were used to build a piece of furniture. It feels like a religious debate.
> Languages with small specifications are easier to learn.
Only if all other things are equal, which they never are.
Thanks to all who are sharing their appreciation for this niche but cool project.
I'm the current lead dev, so please ask questions.
Got a Canon DSLR or mirrorless and like a bit of software reverse engineering? Consider joining in; it's quite an approachable hardware target. No code obfuscation, just classic reversing. You can pick up a well supported cam for a little less than $100. Cams range from ARMv5te up to AArch64.
Wow, newly supported models are super exciting to see! I have a 5D Mark III which I got specifically to play around with ML. I haven't done much videography in my life, but I do plan to get some b-roll at the very least with my Mark III, or maybe record some friends' live events sometime.
> I'm the current lead dev, so please ask questions.
Well, you asked for it!
One question I've always wondered about the project is: what is the difference between a model that you can support, and a model you currently can't? Is there a hard line where ML future compatibility becomes a brick wall? Are there models where something about the hardware / firmware makes you go 'ooh, that's a good candidate! I bet we can get that one working next'?
Also, as someone from the outside looking in who would be down to spend $100 to see if this is something I can do or am interested in, which (cheap) model would be the easiest to grab and load up as a dev environment (or in a configuration that mimics what someone might do to work on a feature), and where can I find documentation on how to do that? Is there a compendium of knowledge about how these cameras work from a reverse-engineering angle, or does everyone cut their teeth on forum posts and official Canon technical docs?
edit: Found the RE guide on the website, gonna take a look at this later tonight
5D3 is perhaps the best currently supported ML cam for video. It's very capable - good choice. Using both CF and SD cards simultaneously, it can record at about 145MB/s, so you can get very high quality footage.
Re what we can support - it's a reverse engineering project, we can support anything with enough time ;) The very newest cams have software changes that make enabling ML slightly harder for normal users, but don't make much difference from a developer perspective. I don't see any signs of Canon trying to lock out reverse engineers. Gaining access and doing a basic port (ML GUI but no features) is not hard when you have experience.
What we choose to support: I work on the cams that I have. And the cams that I have are whatever I find for cheap, so it's pretty random. Other devs have whatever priorities they have :)
The first cam I ported to was 200D, unsupported at the time. This took me a few months to get ML GUI working (with no features enabled), and I had significant help. Now I can get a new cam to that standard in a few days in most cases. All the cams are fairly similar for the core OS. It's the peripherals that change the most as hardware improves, so this takes the most time. And the newer the camera, the more the hw and sw has diverged from the best supported cams.
The cheapest way for you to get started is to use your 5D3 - which you can do in our fork of qemu. You can dump the roms (using software, no disassembly required), then emulate a full Canon and ML GUI, which can run your custom ML changes. There are limitations, mostly around emulation of peripherals. It's still very useful if you want to improve / customise the UI.
Re docs - they're not in a great shape. It's scattered over a few different wikis, a forum, and commit messages in multiple repos. Quick discussion happens on Discord. We're very responsive there, it's the best place for dev questions. The forum is the best single source for reference knowledge. From a developer perspective, I have made some efforts on a Dev Guide, but it's far from complete, e.g.:
If you want physical hardware to play with (it is more fun after all), you might be able to find a 650d or 700d for about $100. Anything that's Digic 5 green here is a capable target:
What's the situation re: running on actual hardware these days? I was experimenting around with my 4000D but when it came to trying to actually run my code on the camera rather than the emulator, a1ex told me I needed some sort of key or similar. He told me he'd sign it for me or something but he got busy and I never heard back.
Is this situation still the same? (Apologies for the hazy details -- this was 5 years ago!)
That must have been a few years back. I think you're talking about enabling "camera bootflag". We provide an automated way to do this for new installs on release builds, but don't like to make this too easy before we have stable builds ready. People do the weirdest stuff, including trying to flash firmware that's not for their cam, in order to run an ML build for that different cam...
Anyway, I can happily talk you through how to do it. Our discord is probably easiest, or you can ask on the forum. Discord is linked from the forum: https://www.magiclantern.fm/forum/
Whatever code you had back then won't build without some updates. 4000D is a good target for ML, lots of features that could be added.
Yes, this was in September 2020 according to my records. All I remember is that I could run the ROM dumper just fine, then I could run my firmware in QEMU, and then I just had to locate a bunch of function pointers to make it do anything useful. Worked in QEMU but that's where I got stuck - no way to run it on hardware.
I'll definitely keep this in mind and hit you up whenever I have a buncha hours to spare. :)
That would have been only a little before a1ex left. Getting code running on real hardware is easy, maybe I'll talk to you in discord in a few months when you find this fabled free time we are all looking for ;)
So it has hardware from 2008, but they did update the OS to a recent build. This is not what the ML code expects to find, so it's been a confusing test of our assumptions. Normally the OS stays in sync with the hardware changes, which means when we're reversing, it's hard to tell which changes are which.
That said, 4000D is probably a relatively easy port.
I just want to say "thank you." I run Magic Lantern on my Canon 5D Mark III (5d3) and it is such awesome software.
I am a hobbyist nature photographer and it helped me capture some incredible moments. Though I have a Canon R7, the Canon 5d3 is my favorite camera because I prefer the feel of DSLR optical viewfinders when viewing wildlife subjects, and I prefer certain Canon EF lenses.
You're a better photographer than I am. I'm glad if ML helped you.
Please recruit your programmer friends to the cause :) The R7 is a target cam, but nobody has started work on it yet. There is some early work on the R5 and R6. I don't remember for the R7, but from the age and tier, this may be one of the new gen quad core AArch64.
I expect these modern cams to be powerful enough to run YOLO on cam, perhaps with sub 1s latency. Could be some fun things to do there.
I still shoot a 5Dmkii solely due to the ML firmware. It's primarily a timelapse camera at this point. The ETTR functionality is one of my absolute favorites. The biggest drawback I have is trying to shoot with an interval less than 5 seconds. The ML software gets confused and shoots at irregular intervals. Anything over 5 seconds, and it's great. No external timers necessary for the majority of my shooting. I do still have an external one for when <5s intervals are necessary. I'm just waiting for the shutter to die, but I'm confident I'll just have it replaced and continue using the body+ML rather than buy yet another body.
Thanks for your work keeping it going, and for those that have worked on it before.
Strange, it certainly can do sub 5s on some bodies. But I don't have a 5d2 to test with.
Could this be a conflict with long exposures? Conceivably AF, too. The intervalometer will attempt to trigger capture every 5s wall time. If the combined time to AF seek, expose, and finish saving to card (etc) is >5s, you will skip a shot.
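For intuition, here is a minimal sketch of how a wall-clock intervalometer skips a slot when the capture pipeline overruns. This is illustrative only, not ML's actual code; the clock, sleep, and capture names are stand-ins:

    #include <stdint.h>
    /* stand-in declarations -- hypothetical names, not real ML/Canon calls */
    uint32_t get_ms_clock(void);
    void     msleep(uint32_t ms);
    void     take_picture(void);

    void intervalometer(uint32_t interval_ms)
    {
        uint32_t next_ms = get_ms_clock();
        for (;;)
        {
            next_ms += interval_ms;      /* schedule against wall time   */
            take_picture();              /* AF + exposure + card write   */
            uint32_t now = get_ms_clock();
            if (now < next_ms)
                msleep(next_ms - now);   /* on time: sleep until the slot */
            /* else the capture overran its slot and the next frame fires
               late -- seen by the user as an irregular interval */
        }
    }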
When the time comes, compare the price of a used 5d3 vs a shutter replacement on the 5d2, maybe you'll get a "free" upgrade :) Thanks for the kind words!
I've done lots of 1/2 second exposures with a 3s interval, and it shoots some at a much shorter interval than 3s and some longer??? At one point, the docs said 5s was a barrier. Maybe it was the 5dmkii specifically. All of my cards are rated higher than the 5D can write (which makes DIT much faster), so I doubt it is write speed interfering. What makes me think it is not the camera is that using a cheap external timer works without skipping a beat.
Yeah, the external timer behaviour is fairly strong evidence. Curious though. These cams all seem to have a milli- and micro-second hw clock, and can both schedule and sleep against either. But it's also true that every cam has some weird quirks. And I don't know the 5d2 internals well.
From what I've seen, the image capture process is state machine based and tries to avoid sleeps and delays. Which makes sense for RTOS and professional photography.
If you care enough to debug it, pop into the discord and I can make you some tests to run.
Just wanted to say thanks for keeping this alive! I used magic lantern in 2014 to unlock 4K video recording on my Canon. It was how students back then could start recording professional video without super expensive gear
I recently obtained an astro converted 6D. I played around with CHDK a long time ago as a teenager, but never Magic Lantern.
I am a compiler dev with decent low level skills, anything in particular I should look at that would be good for the project as well as my ‘new’ 6D? (No experience with video unfortunately)
I have a newer R6 Mark II as well, but I'd rather not try anything with it yet.
I've had a fun idea knocking around for a while for astro. These cams have a fairly accessible serial port, hidden under the thumb grip rubber. I think the 6D may have one in the battery grip pins, too. We can sample LV data at any time, and do some tricks to boost exposure for "night vision". Soooo, you could turn the cam itself into a star tracker, which controlled a mount over serial. While doing the photo sequence. I bet you could do some very cool tricks with that. Bit involved for a first time project though :D
The 6D is a fairly well understood and supported cam, and your compiler background should really help you - so really the question is what would you like to add? I can then give a decent guess about how hard various things might be. I believe the 6D has integrated Wifi. We understand the network stack (surprisingly standard!) and a few demo things have been written, but nothing very useful so far. Maybe an auto image upload service? Would be cool to support something like OAuth, integrate with imgur etc?
It's slow work, but hopefully you don't mind that too much; compilers have a similar reputation.
Hmm, that's a neat idea. The better term for it is 'auto guiding'. Auto guiding is basically supplying correction information to the mount when it drifts off.
Most mounts support guiding input and virtually all astrophotographers set up a separate tiny camera, a small scope, and a laptop to auto guide the mount. It would be neat for the main camera to do it. The caveat is that this live view sampling would add extra noise to the main images (more heat, etc). But in my opinion, the huge boost in convenience would make that worth it, given that modern post processing is pretty good for mitigating noise.
The signals that have to be sent to the mount are pretty simple too, so I'll look at this at some point in the future. The bottleneck for me is that I have never got 'real' auto guiding to work reliably with my mount, so if I run into issues it would be tricky, as there's no baseline working version.
> Maybe an auto image upload service?
This sounds pretty useful; even uploading seamlessly to a phone or laptop would be a huge time saver for most people! I'll set up ML on my 6D and try out some of the demo stuff that uses the network stack.
Is there a sorted list of things that people want and no one has got around to implementing yet?
I am definitely an astro noob :) LV sampling was just the first idea I thought of. We could also load the last image while the next was being taken, and extract guide points from that (assuming an individual frame has enough distinct bright points... which it might not... you could of course sum a few in software). It's a larger image, but your time constraints shouldn't be tight. That way you're not getting any extra sensor heat. Some CPU heat though, dunno if that would be noticeable.
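As a rough sketch of what "extract guide points" could mean (illustrative only, not ML code): take the brightness-weighted centroid of a window around a star, and treat the frame-to-frame centroid shift as the drift to correct via the mount:

    #include <stdint.h>

    /* Toy guide-point extraction -- not ML code. Real guiding needs
     * thresholding, multiple stars, and sub-pixel tracking over time. */
    typedef struct { float x, y; } point_t;

    point_t star_centroid(const uint8_t *img, int stride,
                          int x0, int y0, int w, int h)
    {
        float sum = 0, sx = 0, sy = 0;
        for (int y = y0; y < y0 + h; y++)
            for (int x = x0; x < x0 + w; x++)
            {
                float v = (float)img[y * stride + x];
                sum += v;
                sx  += v * (float)x;
                sy  += v * (float)y;
            }
        point_t p = { 0, 0 };
        if (sum > 0) { p.x = sx / sum; p.y = sy / sum; }
        return p;   /* drift = centroid(now) - centroid(reference) */
    }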
A simple python server, that accepts image data from the cam, does some processing, sends data back. The network protocol is dirt simple. The config file format for holding network creds, IP addr etc is really very ugly. It was written for convenience of writing the code, not convenience of making the config file.
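For flavour, a "dirt simple" protocol can be as little as a length header plus payload. A hedged sketch of the camera side follows, using plain BSD sockets as a stand-in for whatever the Canon network stack actually exposes; this is not the real ML protocol or API:

    #include <stdint.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Hypothetical length-prefixed frame sender -- illustration only. */
    int send_frame(const char *ip, uint16_t port,
                   const uint8_t *buf, uint32_t len)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        struct sockaddr_in sa = { .sin_family = AF_INET,
                                  .sin_port   = htons(port) };
        inet_pton(AF_INET, ip, &sa.sin_addr);
        if (connect(fd, (struct sockaddr *)&sa, sizeof sa) < 0)
        {
            close(fd);
            return -1;
        }
        uint32_t n = htonl(len);
        write(fd, &n, sizeof n);    /* 4-byte big-endian length header... */
        write(fd, buf, len);        /* ...then the raw payload            */
        close(fd);
        return 0;
    }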
You would need to find the equivalent networking functions (our jargon is "stubs"). You will likely want help with this, unless you're already familiar with Ghidra or IDA Pro, and have both a 6D and 200D rom dump :) Pop in the discord when you get to that stage, it's too much detail for here.
There's no real list of things people want (well, they want everything...). The issues on the repo will have some good ideas. In the early days of setting that up I tagged a few things as Good First Issue, but gave up since it was just me working on them.
I would say it's more important to find something you're personally motivated by, that way you're more likely to stick with it. It gets a lot easier, but it doesn't have a friendly learning curve.
Hey just want to say a massive thank you for everything you've done with this project. I've shot so much (short films, music videos, even a TV pilot!) on my pair of 600Ds and ML has given these cams such an extended life.
I would love to add it to my 1Ds3. I recall reading that once upon a time Canon wrote ML devs a strongly worded letter telling them not to touch a 1D, but a camera that old is long obsolete.
(I literally only want a raw histogram)
(I also have a 1Dx2 but that's probably a harder port)
I have been toying with the idea of picking up an old 1D. I can't remember the name of the guy I saw do this, but he had his 1D modified to use a PL mount instead of an EF mount. Something about the 1D body (being thicker, I guess) allowed the flange distances to work out. He then mounted a $35,000 17mm wide angle to it. That lens was huge and could just suck in photons. With that lens, he could expose the night sky in 1/3 second exposures that would take multiple seconds on my gear. He mounted the camera to the front of his boat floating down a river, using night vision goggles to see where he was going. The images were fantastic. I always wanted to do something crazy like that.
Canon have never had any contact with the ML project for any reason, to the best of my knowledge. The decision to stay away from the 1D series was made by the ML team, I would say out of an abundance of caution, to try not to annoy them.
> We're using Git now. We build on modern OSes with modern tooling. We compile clean, no warnings. This was a lot of work, and invisible to users, but very useful for devs. It's easier than ever to join as a dev.
Very impressive! Thankless work. A reminder to myself to chase down some warnings in projects I am a part of...
It’s not too difficult, if you do it from the start, and by habit.
I have an xcconfig file[0] that I add to all my projects, which turns on treat-warnings-as-errors and enables all warnings. In C, I used to compile with -Wall.
I also use SwiftLint[1].
But these days, I almost never trigger any warnings, because I’ve developed the habit of good coding.
Since Magic Lantern is firmware, I’m surprised that this was not already the case. Firmware needs to be as close to perfect as possible (I used to write firmware. It’s one of the reasons I’m so anal about Quality).
It's not firmware :) We use what is probably engineering functionality, built into the OS, to load and execute a file from disk. We run as a (mostly) normal program on the cam's normal OS.
Thanks, and for what it's worth, I didn't downvote you (account is too new to even do so :D ), and I agree with your main point - it's not that hard to avoid all compiler warnings if you do it from the start, and make sure it's highly visible.
You only add one at a time, so you only need to fix one at a time, and you understand what you're trying to do.
It is, however, a real bitch to fix all compiler warnings in decade-old code that targets a set of undocumented hardware platforms with which you are unfamiliar. And you just updated the toolchain from gcc 5 to 12.
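For a plain C project, the equivalent of that xcconfig policy is a couple of build flags; a generic example, not ML's actual build configuration:

    # Generic example -- not ML's real Makefile. Turn everything on from
    # day one, so new warnings can never accumulate:
    CFLAGS += -Wall -Wextra -Werror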
Oh, don't worry about the downvotes. Happens every time someone starts talking about improving software Quality around here.
Unpopular topic. I talk about it anyway, as it's one of my casus belli. I can afford the dings.
BTW: I used to work for Canon's main [photography] competitor, and Magic Lantern was an example of the kind of thing I wanted them to enable, but they were not particularly open to the idea - control freaks.
Also, it's a bit "nit-picky," I know, but I feel that any software that runs on-device is "firmware," and should be held to the same standards as the OS. I know that Magic Lantern has always been good. We used to hear customers telling us how good it was, and asking us to do similar.
I think RED had something like that, as well. I wonder how that's going?
Okay, good, just making sure :) Fun to hear that at least some photo gear places are aware of ML!
I have done a stint in QA, as well as highly aggressive security testing against a big C codebase, so I too care a lot about quality. And you can do it in C, you just have to put in the effort.
I'd like to get Valgrind or ASAN working with our code, but that's quite a big task on an RTOS. It would be more practical in Qemu, but still a lot of effort. The OS has multiple allocators, and we don't include stdlib.
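In the meantime, one portable stopgap on a platform like that is wrapping the allocators with guard bytes. A toy sketch, assuming a hypothetical os_malloc() standing in for one of the OS allocators (this is not ML code):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define CANARY 0xDEADBEEFu
    void *os_malloc(size_t n);               /* hypothetical OS allocator */

    void *dbg_malloc(size_t n)
    {
        uint32_t c = CANARY;
        uint8_t *p = os_malloc(n + 2 * sizeof c);
        if (!p)
            return NULL;
        memcpy(p, &c, sizeof c);                  /* front guard */
        memcpy(p + sizeof c + n, &c, sizeof c);   /* rear guard  */
        return p + sizeof c;
    }

    int dbg_check(const void *user, size_t n)  /* 0 = intact, -1 = smashed */
    {
        const uint8_t *p = (const uint8_t *)user - sizeof(uint32_t);
        uint32_t a, b;
        memcpy(&a, p, sizeof a);
        memcpy(&b, p + sizeof a + n, sizeof b);
        return (a == CANARY && b == CANARY) ? 0 : -1;
    }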
Re firmware / software, doesn't all software run on a device? So I suppose it depends what you mean by a device. Is a Windows exe on a desktop PC firmware? Is an app from your phones store firmware? We support cams that are much more powerful than low end Android devices. Here the cam OS, which is on flash ROM, brings the hardware up, then loads our code from removable storage, which can even be a spinning rust drive. It feels like they're firmware, and we're software, to me. It's not a clearly defined term.
The main reason I make the distinction is because we get a lot of users who think ML is like a phone rom flash, because that's what firmware is to most people. Thus they assume it's a risky process, and that the Canon menus etc will be gone. But we don't work that way.
Good point, and really just semantics. I guess you could say native mobile apps are “firmware,” using my criteria.
But I put as much effort into my mobile apps as I did into my firmware projects (it's been decades since I wrote firmware, BTW; the landscape is quite different these days - this is my first ever shipped engineering project[0]. Back then, we could still use an ICE to debug our software).
It just taught me to be very circumspect about Quality.
I do feel that any software (in any part of the stack) I write that affects moving parts, needs to be quite well-tested. I never had issues with firmware, but drivers are another matter. I've fried stuff that cost a lot.
Yes, it gets a bit blurry, especially given how fast solid-state storage is these days.
I think IoT has seen a resurgence in firmware devs... but regrettably not so much in quality. Too cheap to be worth it, I suppose. I can imagine a microwave could be quite a concerning product to design - there's some fairly obvious risks there!
Certainly, whatever you class ML as, we could damage the hardware. The shutter in particular is quite vulnerable, and Canon has made an unusual design choice that it flashes an important rom with settings at every power off. Leaving these settings in an inconsistent state can prevent the cam from booting. We do try to think hard about contingencies, and program defensively. At least for anything we release. I've done some very stupid tests on my own cams, and only needed to recover with UART access once ;)
I haven't used an ICE, but I have used SoftICE. Oh, and we had a breakthrough on locating JTAG pinouts very recently, so we might end up being able to do something similar.
By the way, Rift Valley software? I'm writing to you from Kenya, one of the homes of the Great Rift Valley. It is truly remarkable to drive down the escarpment just north of Nairobi!
Visiting the Rift Valley in Southwest Uganda was one of the most awesome experiences of my childhood. My other company, Little Green Viper, riffs on that, too.
I was born in Africa, and spent the first eleven years of my life, there.
Yes! As a software developer in the photography space, we are deeply in need of projects like this.
The photography world is mired in proprietary software/formats and locked-down hardware; and while it has always been true that a digital camera is "just" a computer, now more than ever it is painful just how limited and archaic on-board camera software is when compared to what we've grown accustomed to in the mobile phone era.
If I compare photography to another creative discipline I am somewhat familiar with, music production - the latter has way more open software/hardware initiatives, and freedom of not having to tether yourself to large, slow, user-abusing companies when choosing gear to work with.
For a look at some of the amazing output from an "ancient" EOS, you can look at Magic Lantern's Discord. It's rather shocking how far this little camera could be pushed. It is definitely a fun hobby project to fool around with these things. After a while I stopped having the time and moved over to Sony APS-C with vintage lenses. I was able to maintain some of the aesthetic without getting frustrated by stuttering video. Still, it's really a cool project.
An alternative to Magic Lantern is CHDK. Unfortunately that also feels somewhat abandoned and at the best of times held together with string* so I’m glad ML is back.
*No judgement, maintaining a niche and complex reverse-engineering project must be a thankless task
It is actually easier to get started now, as I spent several months updating the dev infrastructure so it all works on modern platforms with modern tooling.
Plus Ghidra exists now, which was a massive help for us.
We didn't really go on hiatus - the prior lead dev left the project, and the target hardware changed significantly. So everything slowed down. Now we are back to a more normal speed. Of course, we still need more devs; currently we have 3.
I should give this a shot. I used CHDK to turn my old crappy Canon into something that would take good time-lapse videos by snapping a photo every X seconds; I miss doing that, though now it's harder because I live in the 'burbs, there are no particularly good spots nearby, and anywhere that is a good spot likely doesn't have a power outlet for me to use. I wonder how long I could power my camera from a portable charger?
I used to do it as well with a cheap second-hand IXUS 230 HS. It could run (at least) 48 h off a 7.2 Ah 12 V AGM battery, snapping a photo every 3 s (I used a fake-battery power adapter and a small DC-DC converter.)
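For scale, and ignoring converter losses: 7.2 Ah x 12 V = 86.4 Wh, and 86.4 Wh / 48 h is about 1.8 W average draw. At that rate, a typical 37 Wh USB power bank (10 Ah at 3.7 V) would cover roughly 20 hours of the same duty cycle.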
> I used a fake-battery power adapter and a small DC-DC converter.
Same here. I used to live in a fairly tall building in Manhattan, so found my way to the roof, found an outlet, and would set it up to do timelapses of sunsets over the Hudson.
Nearly all Canons have a small access port as part of the battery door, which you can put a power supply cable through, by design. Don't buy too cheap a dummy battery; the really cheap ones may have very bad voltage regulation. You can get ones designed to work from a USB power bank, or mains.
Amazing to see this, I haven’t thought about this since 2013. This turned my very basic entry level 550D into a crazy powerful camera for time lapse photography, I loved it!
This news is probably my excuse to buy my fourth EOS; the first three were 100% only because of Magic Lantern.
Can't understand why manufacturers make this hard as it sells hardware.
> Can't understand why manufacturers make this hard as it sells hardware.
Because a lot of features that cost a lot of money are only software limitations. With many of the cheaper cameras the max shutter speed and video capabilities are limited by software to make the distinction with the more expensive cameras bigger. So they do sell hardware - but opening up the software will make their higher-end offerings less compelling.
Magic Lantern is fantastic software that makes EOS cameras even better, but I understand why manufacturers make it hard:
Camera manufacturers live and die on their reputation for making tools that deliver for the professional users of those tools. On a modern camera, the firmware and software need to 100% Just Work and completely get out of the photographer's way, and a photographer needs to be able to grab a (camera) body out of the locker and know exactly what it's going to do for given settings.
The more cameras out there running customized firmware, the more likely someone misses a shot because "shutter priority is different on this specific 5d4" or similar.
I'm sure Canon is quietly pleased that Magic Lantern has kept up the resale value of their older bodies. I'm happy that Magic Lantern exists-- I no longer need an external intervalometer! It does make sense, though, that camera manufacturers don't deliberately ship cameras as openly-programmable computational photography tools.
You have an interesting point about consistency and I'd like to provide a counterargument. While control consistency is very important, the actual image you get from a camera varies significantly between models as the manufacturers change tone curves, colour models, etc. JPGs from the camera are basically arbitrary and RAWs are not much better. The manufacturers don't provide many guarantees, it's just up to you and downstream software to figure out what looks good.
Funny that so much thought goes into designing the feel of a camera yet the photo output is basically undefined...
Also, another thing: Magic Lantern adds optional features which are arbitrarily(?) not present on some models. Perhaps Canon doesn't think you're "pro enough" (e.g., spent enough money), so they don't switch on focus peaking or whatever on your model.
If you want JPGs to look different, you can change them in the camera, and RAW files are just that: raw. They will vary between cameras slightly because the cameras have different sensors. Editing RAWs from 5d3 vs. 5d4 vs. 6d (my only experience) is not very different. Ultimately, the workflow that matters is a photographer capturing the image and getting the output to the studio quickly, in high quality. Event photographers often tether via ethernet or USB and the studio can post-process the RAW in minutes (or even seconds). The part of this that is most sensitive and hardest to recover from error is the photographer capturing the image, which is why consistency and usability of camera controls is so important.
IIRC none of the EOS DSLRs had focus peaking from the factory, you need Magic Lantern -- Canon didn't program it at all.
My point about JPGs is they will look different between cameras anyway because of software differences, with the "same" settings, so they're already inconsistent from the user perspective. Editing RAW is not necessarily different, but from what I've heard that's because RAW editing software busts its ass to try to correct for all manner of arbitrary differences between camera models. It's in spite of camera design that we have consistency, not really because of it.
Very happy to see ML return; I used it on my T2i for at least 10 years. This year I bought an R6 Mark II, so no need right now, but I'd be very happy to someday see it gain ML support. Congratulations on the return!
I still have my 600D - it's hands down the most user-friendly DSLR I've ever owned, thanks to Magic Lantern. I also have a Sony A7S2, but it is nowhere near the ease of use of my 600D. 12 years ago or so, I discovered Magic Lantern and I was blown away. It literally turned my camera into a high-end unit (for its time). What blows me away is that my 600D can capture RAW video after installing ML. My Sony still can do only 10-bit video, 12 years later. The team deserves so much more funding and credit than they receive. I'm extremely grateful to the project and the people behind it. I still haven't sold my 600D - only because of Magic Lantern. Thank you team :)
The list is so long. My favorite is the internal intervalometer + ETTR. Canon has always been laughed at for not having an internal intervalometer, and ML proves how lame it is to not have one. ETTR (Expose To The Right) is an auto metering mode that keeps the histogram pushed as far to the right as possible (better exposure) automagically, by increasing shutter time and/or increasing ISO. This is essential for doing holy grail timelapses of sunset/sunrise, where the exposure is constantly changing. This feature alone is worth its weight in gold.
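The core idea fits in a few lines. A toy sketch of one ETTR metering step, assuming a raw histogram is available; this is an illustration, not ML's actual algorithm:

    #include <stdint.h>
    #include <math.h>

    #define CLIP_BUDGET 0.001f          /* tolerate 0.1% clipped pixels */

    /* Returns an EV adjustment for the next frame: positive = brighten. */
    float ettr_adjust_ev(const uint32_t hist[256], uint32_t total_pixels)
    {
        uint32_t clipped = hist[255];
        if ((float)clipped / (float)total_pixels > CLIP_BUDGET)
            return -0.5f;               /* too hot: back off half a stop */
        int top = 255;                  /* find brightest populated bin */
        while (top > 0 && hist[top] == 0)
            top--;
        /* remaining highlight headroom, in stops; raise shutter time
           and/or ISO by roughly this much on the next frame */
        return log2f(255.0f / (float)(top > 0 ? top : 1));
    }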
However, a lot of the exposed features are more video oriented. The Canon bodies were primarily photo cameras that could shoot video in a cumbersome way. ML brings features a video shooter would need, like audio metering, without diving into the menus. The older bodies also have hardware limitations on write speed, so people use the HDMI out to external recorders to record a larger frame size/bitrate/codec than natively possible. Also, that feed normally has the camera UI overlay, which prevents clean recordings. ML allows turning that off.
There are just too many features that ML unlocks. You'd really just need to find the camera body you are interested in using on their site, and see what it does for that body. Different bodies have different features. So some effort is required on your part to know exactly what it can do for you.
I don't know if modern cameras are better for this, but a big one historically was getting a clean, realtime HDMI output so that high quality cameras can be used with a capture card for broadcast purposes as a replacement for a webcam. Manufacturers understand that that's a "pro" level need/feature and have intentionally segmented the market so that lower-tier devices can't do it even though the hardware is obviously all present.
The big one for me was always focus peaking when using vintage lenses or doing IR photography. The extended White Balance settings were nice to have for IR, as well.
- Lua script support.
It is not complete (in ML hardly anything is), but it allows access to a lot of ML and Canon functions. Years ago someone made a script for automating solar eclipse shooting, catching all the critical phases while you chill and enjoy the view.
- Introduced full electronic shutter (Silent Pic) for Digic 4 and 5.
- Focus stacking for macro and - via Lua script - for landscape.
- Exposure simulation switch for "cheaper" cams
- Trap focus
- Dual ISO: a form of HDR without ghosting, achieved by manipulating sensor lines to record at different ISOs
- Ghost image overlay
- Customizable cropmark overlays (grids and others)
- FPS fine-tuning. Several folks used it to record vintage monitors with very, very strange timings and without rolling bars. 30.01 fps? No problem!
- Zebras and focus peaking, vectorscope, waveform monitoring, false colour support
- RAW histogram
- Bracketing with up to 11 frames (But why? ;-> )
- Intervalometer and bracketing (a bit more configurable than Canon has now)
- Trigger by LCD's IR sensor (if any) or Audio (clap your hand) or motion detect
- Rack focus
- Display mirroring and upside-down options
- Configurable presets (up to 15)
- 30 minutes override for RAW recording, USB and HDMI streaming. Oh, and we have a new option to record native H.264/MOV for more than 29:59. Prototype but working.
- Better AF microadjustment for the cams that have that option from Canon.
- ...
Frankly: I once tried to maintain a help file and browsed through a lot of lesser known features. Took me days and I didn't even test RAW/MLV recording.
Cool! Am I the only one who has a really hard time finding which models are supported? It says on the front page that it's on the downloads page, but I can't seem to find anything?
EDIT: It's on the builds page https://builds.magiclantern.fm
Magic Lantern is amazing... I used it with a custom C script to do auto ISO in Av mode (setting minimum shutter speed based on focal length) before that was built into the newer camera models. It's good to see it back!
The 80D has Magic Lantern code available. We haven't released a build to the public as it has such minimal features available there's no real point yet. But if you were thinking of doing dev work for it, it's in a good place to start: ML GUI works, debug logging works.
I absolutely love Magic Lantern, and I wish similar initiatives existed on Sony and Nikon! I was forced to upgrade my Sony camera purely because of software limitations.
Firmwares should be open-source by law. Especially when products are discontinued.
I have fond memories of pushing the exposure stacking in CHDK's auto/adaptive HDR bracketing script on my old "IXUS 100IS" so deep that the (AFAIK still CCD) sensor had severe blooming around the window in the scene. Still great though!
I just got my T2i out a few months ago and the first thing I did was check for new magic lantern versions. haha. Really cool to see this project is still living.
It has been many moons since I used Magic Lantern. Has anamorphic desqueeze ever been a feature or could it be in the future? That's one missing feature that bums me out about shooting videos on Canon.
Would love it if camera manufacturers were forced to open source their firmware after say 5 years of a camera’s release. The longevity of devices would be vastly improved.
In fact make this all devices with firmware, printers, streamers etc.
I don't think forcing a company to open source their IP is a good move, but perhaps there might be some encouragement implemented for opening up their bootloader so the device is more hackable.
The entire copyright and patent system is built on the principle of forcing the release of IP; it is time delayed in exchange for the legal protections you gain if you opt in to the system. That is the encouragement!
Extending this to enable software access by 3rd parties doesn't feel controversial to me. The core intent of copyright and patent seems to be "when the time limit expires, everyone should be able to use the IP". But in practice you often can't, where hardware with software is concerned.
Thanks to all contributors to the project, ML is an amazing feat of work. I've been running it on my Canon 6D since I got it in 2016, very useful for timelapses.
I was trying to understand what this project is. It's some sort of open firmware for Canon camera that you put on the flash card (SD). The home page has info: https://www.magiclantern.fm/
Yes, it's a truly noteworthy project. They exploited Canon cameras by first managing to blink the red charging LED. Then they used the LED blinks to transmit the firmware out. Then they built custom firmware which boots right from the SD card (thus no possibility of breaking the camera). The Magic Lantern firmware, for example, allows many basic cameras to do RAW 4K video recording (with unlimited length), a feature which is not even in the high-end models. But it has many more features to tinker with.
There's a fun step you're missing - it's not firmware. We toggle on (presumably) engineering functionality already present in Canon code, which allows for loading a file from card as an ARM binary.
We're a normal program, running on their OS, DryOS, a variant of uITRON.
This has the benefit that we never flash the OS, removing a source of risk.
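For the curious, the LED trick mentioned above reduces to bit-banging bytes out as timed blinks, which a photodiode (or another camera) watching the LED can decode. A sketch with hypothetical names and timings, not the original ML ROM dumper:

    #include <stdint.h>

    /* hypothetical reversed firmware calls -- stand-in names only */
    void led_on(void);
    void led_off(void);
    void delay_us(uint32_t us);

    static void send_byte(uint8_t b)
    {
        for (int i = 7; i >= 0; i--)
        {
            led_on();
            delay_us(((b >> i) & 1) ? 900 : 300);   /* long = 1, short = 0 */
            led_off();
            delay_us(300);                          /* inter-bit gap */
        }
    }

    void dump_rom(const uint8_t *rom, uint32_t len)
    {
        for (uint32_t i = 0; i < len; i++)
            send_byte(rom[i]);          /* slow, but it gets the ROM out */
    }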
> As of December 2024, the Magic Leap One is no longer supported or working, becoming end of life and abruptly losing functionality when cloud access was ended. This happened whilst encouraging users to buy a newer model.
Its demise seems to have completely passed me by; I read about its enormous funding and unrealistic expectations, then a v1 came out which was mediocre/disappointing, then... nothing. Apple's thing overshadowed it, but that too has passed - unless they're going to announce a new model at a fraction of the price soon; supposedly their UI redesigns follow its concepts, so it's not buried yet.
I wouldn't recommend the 600D if you want to do video. For stills it's perfectly acceptable. Auto-focus will feel slow compared to a modern cam. If you're going for an old / budget cam, try to reach for the 650D or 700D; those are a newer generation of hardware.
200D is much newer, but less well supported by ML. I own this cam and am actively working on improving it. 200D has DPAF, which means considerably improved auto-focus, especially for video. Also it can run Doom.
Are there any ML features in particular you're interested in?
I'm interested in using all kinds of devices for streaming live video from live (think music) events via OBS with minimal effort. The current setup is a device (just an old iPhone will do) which provides WiFi connectivity; then another (or the same) device runs DroidCam, which then streams into a nearby laptop with OBS (typically capturing the audio needed), and this is then sent wherever we decide (Twitch, RTSP, etc). We've tried this setup with as many as three DroidCam phones, and it is just fine on... a legacy MacBook Pro with Intel + 1500MB Iris card.
So ideally I'd imagine getting a second-hand 600D or 200D and having a similar setup. We did have a setup (previously) where a GoPro or mini-HDMI camera was captured and then processed by a Raspberry Pi 2/3/4, but this seems overkill compared to the DroidCam setup.
And, of course, the optics on the 600D/200D are expected to be much better than those on an iPhone or similar phone/mobile device.
With the 600D you are stuck at 1620x912 in video mode, embedded in 1080i59.94 8-bit. Black borders around, and you have to crop and (maybe) scale up.
200D HDMI stream with ML is clean with MF but AF will still draw a focus rectangle. But at least true FHD via HDMI.
AF with 600D in liveview: Phase detection only. Focus hunting galore. 200D comes with usable DPAF.
I prefer 250D for streaming. Dual display support, no 30 minute limit for HDMI out (but cam display will go dark until some button action).
Same boat. I've had a 6D Mark II for 7 years now, and I misguidedly counted on ML being released within 3 years of my purchase. But luckily, it's still a fantastic camera.
The nifty thing would come from opening up the high-end cameras, so why not go there? Of course Canon's legal team is gonna crack down on the project, as they've previously said.
Canon's legal team have never said anything about Magic Lantern in any context that I'm aware of.
The high-end cams need ML less: they have more features stock, plus devs need access to the cam to make a good port. So higher-end cams tend to be less attractive to developers.
This is actually really cool because, as it turns out, I've got an old Canon EOS DSLR that I haven't used for a long time, and I didn't know this thing existed before.
Around 2020, our old lead dev, a1ex, after years of hard work, left the project. The documentation was fragmentary. Nobody understood the build system. A very small number of volunteers kept things alive, but nothing worked well. Nobody had deep knowledge of Magic Lantern code.
Sounds like a bit of a dick move. Part of being a lead dev is making sure you can get hit by a bus and the project continues. That means documentation, simple enough and standard build system (It's C after all), etc. As a lead dev you should ensure the people on the project get familiarity with other part than their niche too, so that one can succeed you.
Uh, sure, maybe in a professional setting where you're getting paid. But this was unpaid volunteer work. If, as a community, we start enforcing professional-grade standards on people who are just contributing their free time to give us neat toys and tools, I kinda worry it makes the whole thing less fun or sustainable. And if that happens, we probably stop getting these free toys altogether.
I wholeheartedly disagree. Being professional crosses the bounds of paid work and unpaid work.
It doesn't take much work to not leave a gigantic pile of trash behind you.
If anything, it's even more of a self-responsible thing to do in the OSS world, as there isn't a chain of command, such as in the corporate world, enforcing this.
It's selfish to engage in a group endeavor with other people, building something, without a conscious decision about continuity.
A job worth doing is a job worth doing well. Maybe I'm just a gray beard with unrealistic expectations, or maybe I care about quality.
Think of it as a non-profit club. If you volunteer to be the treasurer, are you then free to ignore everything and do whatever you like, just because you aren’t paid? Of course not. It’s the same with being a software project maintainer; you have willingly taken on some obligations.
If I put some code out on the internet and some other people find it and start using it, they message me, we talk, and I start adding things they suggest and working with others to improve this code. Then one day I wake up and don't want to do it anymore. At what point did I become obligated? When I published the code? When I first started talking to others about it (building a community)? When I coded their suggestions? When I worked with other coders?
It's not like this kind of thing doesn't happen in the professional world - in fact, quite the opposite. The incentives to cut corners in a company are if anything greater than in open source, with pressure from management to meet the next deadline.
For folks who don't know what Magic Lantern is:
> Magic Lantern is a free software add-on that runs from the SD/CF card and adds a host of new features to Canon EOS cameras that weren't included from the factory by Canon.
It also backports new features to old Canon cameras that aren't supported anymore, and is generally just a really impressive feat of both (1) reverse engineering and (2) keeping old hardware relevant and useful.
More backstory: before the modern generation of digital cameras - Magic Lantern was one of the early ways to "juice" more power out of early generations of Canon cameras, including RAW video recording.
Today, cameras like Blackmagic and editing platforms like DaVinci handle RAW seamlessly, but it wasn't like this even a few years ago.
Funny, when i saw it uses a .fm TLD i thought it's some online radio.
They were trendy at the time :D
I think possibly someone thought it sounded a bit like firmware?
Same :) I had in mind Groove Salad from soma.fm
last.fm
"Scrobbles" will always be a funny word to me.
sub.fm
I wish there are similar projects for other camera brands like Fujifilm. With abilities of ML on old Canon cameras we know there is a lot of potential in those old machines across other brands. It is also "eco" friendly approach that should be supported.
I just switched from Canon to Fujifilm due to enshitification. Canon started charging $5/mo to get clean video out of their cameras. We're plenty screwed if manufacturers decide that cameras are subscriptions and not glass.
Fuji's are great, but ecosystem is definitely smaller, and I've found some software still doesn't support debayering x-trans
Yeah like Adobe. Whatever method they use has been peak worm creation for over 10 years. Capture one and dcraw are head and shoulders better.
it also has a scripting system and is damn fun to mess with.
> The main thing you need is knowledge of C, which is a small language that has good tutorials.
Heh, a little like saying "the main thing you need is to be able to play the violin, which is a small instrument with good tutorials".
I stand by my statement! Compare the length of the C standard to JS / ECMAScript, or C++! :)
Maaaaybe I'm hiding a tradeoff around complexity vs built-in features, but volunteers can work that out themselves later on.
You honestly don't need much knowledge of C to get started in some areas. The ML GUI is easy to modify if you stay within the lines. Other areas, e.g., porting a complex feature to a new camera, are much harder. But that's the life of a reverse engineer.
Conversely, the terseness of the C standard also means there's many more footguns and undefined behaviors. There are many things C is, but being easy to pick up is not one of them. I loved C all the way up until I graduated uni, but it would be a very hard sell to get me to pick it for a project these days. To me, working with C is akin to working with assembly, you just feel that you're doing real programming, but realistically there's better options for most scenarios these days.
I agree with some of what you're saying; some of the well known risks of working in C are because it's a small standard. But much of the undefined behaviour was deliberately made that way to support the hardware of the time - it's hard to be cross-platform on different architectures as a low-level language.
C genuinely is easy to pick up. It is harder to master. And you're right, for many domains, there are better options now, so it may not be worth while mastering it.
Because it's an old language, what it lacks in built-in safety features, is provided by decades of very good surrounding tooling. You do of course need to learn that tooling, and choose to use it!
In the context of Magic Lantern, C is the natural fit. We are working with very tight memory limitations, due to the OS. We support single core 200Mhz targets (ARMv5, no out-of-order or other fancy tricks). We don't include C stdlib, a small test binary can be < 1kB. Normal builds are around 400kB (this includes a full GUI, debug capabilities, all strings and assets, etc).
Canon code is probably mostly C, some C++. We have to call their code directly (casting reverse engineered addresses to function pointers, basically). We don't know what safety guarantees their code makes, or what the API is. Most of our work is interacting with OS or hardware. So we wouldn't gain much by using a safe language for our half.
> C genuinely is easy to pick up.
I feel like this is a bit of an https://xkcd.com/2501/ situation.
C is considered easy to pick up for the average user posting HN comments because we have the benefit of years -- the average comp sci student, who has been exposed to Javascript and Python, who might not know what "pass by reference" even means... I'm not sure they're going to be considering C easy.
C is taught as the introduction to programming in CS50x, Harvard's wildly popular MOOC for teaching programming to first-year college students and lifelong learners via the internet. Using the clang toolchain gives you much better error messages than old versions of gcc used to give. And I bet AI/LLM/copilot tools are pretty good at C given how much F/OSS is written in C.
Just to provide another data point here... that C is a little easier to pick up, today, than it was in the 1990s or 2000s, when all you had was the K&R C book and a Linux shell. I regularly recommend CS50x to newcomers to programming via a guide I wrote up as a GitHub gist. I took the CS50x course myself in 2020 (just to refresh my own memory of C after years of not using it that much), and it is very high quality.
See this comment for more info:
https://news.ycombinator.com/item?id=40690760
I've taught several different languages to both 1st year uni students, and new joiners to a technical company, where they had no programming background.
Honestly, C seems to be one of the easier languages to teach the basics of. It's certainly easier than Java or C++, which have many more concepts.
C has some concepts that confuse the hell out of beginners, and it will let you shoot yourself in the foot very thoroughly with them (much more than say, Java). But you don't tend to encounter them till later on.
I have never said getting good at C is easy. Just that it's easy to pick up.
C made a lot more sense to me after having done assembly (6502 in my case, but it probably doesn't matter). Things like passing a reference suddenly just made sense.
depends on which school you went? the one I've been to started with C and LISP in the 2010s and then moved on to C++ and java with some python
Everything is passed by reference in Python. Everything is passed by value in C.
Not quite true for Python but a close approximation.
Undefined behaviors -- yes. But being able to trigger undefined behavior is not a huge foot gun by itself. Starting with good code examples means you are much less likely to trigger it.
Having a good, logical description of supported features, with a warning that if you do unsupported stuff things may break, is much more important than trying to define every possible action in a predictable way.
The latter approach often leads to explosion of spec volume and gives way more opportunities for writing bad code: predictable in execution, but instead with problems in design and logic which are harder to understand, maintain and fix. My 2c.
I stand by my statement! Compare the number of strings a violin has to the keys on a piano! :)
I know it's all at least semi- tongue-in-cheek, but IRL a piano's discrete, sequential keys are what make it almost inarguably the easiest instrument to learn.
That's exactly his point. Languages aren't easier to learn simply because their specification is short, any more than instruments are easier to play because they have fewer strings.
The analogy is completely invalid. Languages with small specifications are easier to learn.
It's sad that the dev, who has done great work, has to spend time defending the C language from critters living under a bridge when it's a fixed element that isn't going to change.
Accusing people who disagree w/ you of being trolls doesn't bolster your argument.
Speaking of weak arguments: that wasn't the basis of the accusation.
People don't argue with a carpenter over what tools were used to build a piece of furniture. It feels like a religious debate.
> Languages with small specifications are easier to learn.
Only if all other things are equal, which they never are.
Thanks to all who are sharing their appreciation for this niche but cool project.
I'm the current lead dev, so please ask questions.
Got a Canon DSLR or mirrorless and like a bit of software reverse engineering? Consider joining in; it's quite an approachable hardware target. No code obfuscation, just classic reversing. You can pick up a well supported cam for a little less than $100. Cams range from ARMv5te up to AArch64.
Wow, newly supported models is super exciting to see! I have a 5d mk iii which I got specifically to play around with ML. I haven't done much videography in my life, but do plan to get some b-roll at the very least with my mk iii or maybe record some friends live events sometime.
> I'm the current lead dev, so please ask questions.
Well, you asked for it!
One question I've always wondered about the project is: what is the difference between a model that you can support, and a model you currently can't? Is there a hard line where ML future compatibility becomes a brick wall? Are there models where something about the hardware / firmware makes you go 'ooh, that's a good candidate! I bet we can get that one working next'?
Also, as someone from the outside looking in who would be down to spend $100 to see if this something I can do or am interested in, which (cheap) model would be the easiest to grab and load up as dev environment (or in a configuration that mimics what someone might do to work on a feature), and where can I find documentation on how to do that? Is there a compendium of knowledge about how these cameras work from a reverse-engineering angle, or does everyone cut their teeth on forum posts and official canon technical docs?
edit: Found the RE guide on the website, gonna take a look at this later tonight
5D3 is perhaps the best currently supported ML cam for video. It's very capable - good choice. Using both CF and SD cards simultaneously, it can record at about 145MB/s, so you can get very high quality footage.
Re what we can support - it's a reverse engineering project, we can support anything with enough time ;) The very newest cams have software changes that make enabling ML slightly harder for normal users, but don't make much difference from a developer perspective. I don't see any signs of Canon trying to lock out reverse engineers. Gaining access and doing a basic port - ML GUI but no features - is not hard when you have experience.
What we choose to support: I work on the cams that I have. And the cams that I have are whatever I find for cheap, so it's pretty random. Other devs have whatever priorities they have :)
The first cam I ported to was 200D, unsupported at the time. This took me a few months to get ML GUI working (with no features enabled), and I had significant help. Now I can get a new cam to that standard in a few days in most cases. All the cams are fairly similar for the core OS. It's the peripherals that change the most as hardware improves, so this takes the most time. And the newer the camera, the more the hw and sw has diverged from the best supported cams.
The cheapest way for you to get started is to use your 5D3 - which you can do in our fork of qemu. You can dump the roms (using software, no disassembly required), then emulate a full Canon and ML GUI, which can run your custom ML changes. There are limitations, mostly around emulation of peripherals. It's still very useful if you want to improve / customise the UI.
https://github.com/reticulatedpines/qemu-eos/tree/qemu-eos-v...
Re docs - they're not in great shape. It's scattered over a few different wikis, a forum, and commit messages in multiple repos. Quick discussion happens on Discord. We're very responsive there; it's the best place for dev questions. The forum is the best single source for reference knowledge. From a developer perspective, I have made some efforts on a Dev Guide, but it's far from complete, e.g.:
https://github.com/reticulatedpines/magiclantern_simplified/...
If you want physical hardware to play with (it is more fun, after all), you might be able to find a 650D or 700D for about $100. Anything listed as Digic 5 (green) here is a capable target:
https://en.wikipedia.org/wiki/Template:Canon_EOS_digital_cam...
Digic 4 stuff is also easy to support, and will be cheaper, but it's less capable and will be showing its age generally - depends if that bothers you.
What's the situation re: running on actual hardware these days? I was experimenting around with my 4000D but when it came to trying to actually run my code on the camera rather than the emulator, a1ex told me I needed some sort of key or similar. He told me he'd sign it for me or something but he got busy and I never heard back.
Is this situation still the same? (Apologies for the hazy details -- this was 5 years ago!)
That must have been a few years back. I think you're talking about enabling "camera bootflag". We provide an automated way to do this for new installs on release builds, but don't like to make this too easy before we have stable builds ready. People do the weirdest stuff, including trying to flash firmware that's not for their cam, in order to run an ML build for that different cam...
Anyway, I can happily talk you through how to do it. Our discord is probably easiest, or you can ask on the forum. Discord is linked from the forum: https://www.magiclantern.fm/forum/
Whatever code you had back then won't build without some updates. 4000D is a good target for ML, lots of features that could be added.
Yes, this was in September 2020 according to my records. All I remember is that I could run the ROM dumper just fine, then I could run my firmware in QEMU, and then I just had to locate a bunch of function pointers to make it do anything useful. Worked in QEMU but that's where I got stuck - no way to run it on hardware.
I'll definitely keep this in mind and hit you up whenever I have a buncha hours to spare. :)
That would have been only a little before a1ex left. Getting code running on real hardware is easy, maybe I'll talk to you in discord in a few months when you find this fabled free time we are all looking for ;)
The 4000D is an interesting cam, we've had a few people start ports then give up. It has a mix of old and new parts in the software. Canon used an old CPU / ASIC: https://en.wikipedia.org/wiki/Template:Canon_EOS_digital_cam...
So it has hardware from 2008, but they did update the OS to a recent build. This is not what the ML code expects to find, so it's been a confusing test of our assumptions. Normally the OS stays in sync with the hardware changes, which means when we're reversing, it's hard to tell which changes are which.
That said, 4000D is probably a relatively easy port.
I just want to say "thank you." I run Magic Lantern on my Canon 5D Mark III (5d3) and it is such awesome software.
I am a hobbyist nature photographer and it helped me capture some incredible moments. Though I have a Canon R7, the Canon 5d3 is my favorite camera because I prefer the feel of DSLR optical viewfinders when viewing wildlife subjects, and I prefer certain Canon EF lenses.
More here:
https://amontalenti.com/photos
When I hang out with programmer friends and demo Magic Lantern to them, they are always blown away.
You're a better photographer than I am. I'm glad if ML helped you.
Please recruit your programmer friends to the cause :) The R7 is a target cam, but nobody has started work on it yet. There is some early work on the R5 and R6. I don't remember for the R7, but from the age and tier, this may be one of the new gen quad core AArch64.
I expect these modern cams to be powerful enough to run YOLO on cam, perhaps with sub 1s latency. Could be some fun things to do there.
I've always wanted to work on Magic Lantern myself (I am in the Discord) but just haven't found the time yet! Thanks again!
I still shoot a 5Dmkii solely due to the ML firmware. It's primarily a timelapse camera at this point. The ETTR functionality is one of my absolute favorites. The biggest drawback I have is trying to shoot with an interval less than 5 seconds. The ML software gets confused and shoots irregular interval shots. Anything over 5 seconds, and it's great. No external timers necessary for the majority of my shooting. I do still have an external for when <5s intervals are necessary. I'm just waiting for the shutter to die, but I'm confident I'll just have it replaced and continue using the body+ML rather than buy yet another body.
Thanks for your work keeping it going, and for those that have worked on it before.
Strange, it certainly can do sub 5s on some bodies. But I don't have a 5d2 to test with.
Could this be a conflict with long exposures? Conceivably AF, too. The intervalometer will attempt to trigger capture every 5s wall time. If the combined time to AF seek, expose, and finish saving to card (etc) is >5s, you will skip a shot.
When the time comes, compare the price of a used 5d3 vs a shutter replacement on the 5d2, maybe you'll get a "free" upgrade :) Thanks for the kind words!
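To illustrate the skipping failure mode, here's a toy sketch of a wall-clock intervalometer loop (hypothetical function names, not our actual code):

    #include <stdint.h>

    extern uint32_t get_ms_clock(void);   /* hypothetical ms clock accessor */
    extern void msleep(uint32_t ms);      /* hypothetical RTOS sleep */
    extern void trigger_capture(void);    /* hypothetical: AF + expose + save */

    void intervalometer_task(uint32_t interval_ms)
    {
        uint32_t next = get_ms_clock();
        for (;;) {
            next += interval_ms;
            trigger_capture();            /* if this takes longer than
                                             interval_ms, we overrun... */
            uint32_t now = get_ms_clock();
            if (now < next)
                msleep(next - now);       /* on time: wait for the next slot */
            /* ...otherwise we're already late, and shots come out at
               irregular intervals like the ones described above */
        }
    }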
> Could this be a conflict with long exposures?
I've done lots of 1/2 second exposures with a 3s interval, and it shoots some at a much shorter interval than 3s and some at 3s+??? At one point, the docs said 5s was a barrier. Maybe it was the 5dmkii specifically. All of my cards are rated higher than the 5D can write (which does make DIT much faster), so I doubt it is write speed interfering. What makes me think it is not the camera is that using a cheap external timer works without skipping a beat.
Yeah, the external timer behaviour is fairly strong evidence. Curious though. These cams all seem to have a milli- and micro-second hw clock, and can both schedule and sleep against either. But it's also true that every cam has some weird quirks. And I don't know the 5d2 internals well.
From what I've seen, the image capture process is state machine based and tries to avoid sleeps and delays. Which makes sense for RTOS and professional photography.
If you care enough to debug it, pop into the discord and I can make you some tests to run.
Just wanted to say thanks for keeping this alive! I used Magic Lantern back in 2014 to unlock RAW video recording on my Canon. It was how students back then could start recording professional-looking video without super expensive gear.
I recently obtained an astro converted 6D. Have played around with CHDK a long time ago as a teenager but never magic lantern.
I am a compiler dev with decent low level skills, anything in particular I should look at that would be good for the project as well as my ‘new’ 6D? (No experience with video unfortunately)
I have a newer R62 as well, but would rather not try anything with it yet.
Ah I'd love an astro conversion.
I've had a fun idea knocking around for a while for astro. These cams have a fairly accessible serial port, hidden under the thumb grip rubber. I think the 6D may have one in the battery grip pins, too. We can sample LV data at any time, and do some tricks to boost exposure for "night vision". Soooo, you could turn the cam itself into a star tracker, which controlled a mount over serial. While doing the photo sequence. I bet you could do some very cool tricks with that. Bit involved for a first time project though :D
The 6D is a fairly well understood and supported cam, and your compiler background should really help you - so really the question is what would you like to add? I can then give a decent guess about how hard various things might be. I believe the 6D has integrated Wifi. We understand the network stack (surprisingly standard!) and a few demo things have been written, but nothing very useful so far. Maybe an auto image upload service? Would be cool to support something like OAuth, integrate with imgur etc?
It's slow work, but hopefully you don't mind that too much, compilers have a similar reputation.
> turn the cam itself into a star tracker
Hmm, that's a neat idea. The better term for it is 'auto guider'. Auto guiding is basically supplying correction information to the mount when it drifts off.
Most mounts support guiding input and virtually all astrophotographers set up a separate tiny camera, a small scope, and a laptop to auto guide the mount. It would be neat for the main camera to do it. The caveat is that this live view sampling would add extra noise to the main images (more heat, etc). But in my opinion, the huge boost in convenience would make that worth it, given that modern post processing is pretty good for mitigating noise.
The signals that have to be sent to the mount are pretty simple too, so I'll look at this at some point in the future. The bottleneck for me is that I have never got 'real' auto guiding to work reliably with my mount, so if I run into issues it would be tricky, as there's no baseline working version.
> Maybe an auto image upload service?
This sounds pretty useful, even uploading seamlessly to a phone or laptop would be a huge time saver for most people! I'll set up ML on my 6D and try out some of the demo stuff that use the network stack.
Is there a sorted list of things that people want and no one has got around to implementing yet?
I am definitely an astro noob :) LV sampling was just the first idea I thought of. We could also load the last image while the next was being taken, and extract guide points from that (assuming an individual frame has enough distinct bright points... which it might not... you could of course sum a few in software). It's a larger image, but your time constraints shouldn't be tight. That way you're not getting any extra sensor heat. Some CPU heat though, dunno if that would be noticeable.
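For the guide-point extraction step, the core math is simple: find a bright star and track its brightness-weighted centroid between frames. A sketch, assuming an 8-bit grayscale buffer rather than any actual ML image API:

    /* Find the brightness-weighted centroid around the brightest pixel.
       buf is assumed to be 8-bit grayscale, w x h, row-major. The drift
       of this centroid from frame to frame is the guiding correction. */
    void find_guide_star(const unsigned char *buf, int w, int h,
                         float *cx, float *cy)
    {
        int bx = 0, by = 0;
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (buf[y * w + x] > buf[by * w + bx]) { bx = x; by = y; }

        float sum = 0, sx = 0, sy = 0;
        for (int y = by - 8; y <= by + 8; y++) {
            if (y < 0 || y >= h) continue;
            for (int x = bx - 8; x <= bx + 8; x++) {
                if (x < 0 || x >= w) continue;
                float v = buf[y * w + x];
                sum += v; sx += v * x; sy += v * y;
            }
        }
        if (sum <= 0) { *cx = bx; *cy = by; return; } /* blank frame guard */
        *cx = sx / sum;
        *cy = sy / sum;
    }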
For networking, this module demonstrates the principles: https://github.com/reticulatedpines/magiclantern_simplified/...
A simple python server, that accepts image data from the cam, does some processing, sends data back. The network protocol is dirt simple. The config file format for holding network creds, IP addr etc is really very ugly. It was written for convenience of writing the code, not convenience of making the config file.
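Roughly, the wire format amounts to a blob of bytes with a simple header - something like this sketch (written against POSIX-style sockets for familiarity; on-cam you'd go through the reverse engineered Canon networking functions instead, and the exact framing in the module may differ):

    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <arpa/inet.h>

    /* Send one image blob as a 4-byte big-endian length, then the bytes.
       Hypothetical framing, for illustration only. */
    int send_image(int sock, const unsigned char *data, uint32_t len)
    {
        uint32_t hdr = htonl(len);
        if (send(sock, &hdr, sizeof hdr, 0) != sizeof hdr) return -1;
        while (len) {
            ssize_t n = send(sock, data, len, 0);
            if (n <= 0) return -1;
            data += n;
            len  -= (uint32_t)n;
        }
        return 0;
    }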
You would need to find the equivalent networking functions (our jargon is "stubs"). You will likely want help with this, unless you're already familiar with Ghidra or IDA Pro, and have both a 6D and 200D rom dump :) Pop in the discord when you get to that stage, it's too much detail for here.
There's no real list of things people want (well, they want everything...). The issues on the repo will have some good ideas. In the early days of setting that up I tagged a few things as Good First Issue, but gave up since it was just me working on them.
I would say it's more important to find something you're personally motivated by, that way you're more likely to stick with it. It gets a lot easier, but it doesn't have a friendly learning curve.
Does LV sampling work when, say, a 120 second image is being captured?
Hey just want to say a massive thank you for everything you've done with this project. I've shot so much (short films, music videos, even a TV pilot!) on my pair of 600Ds and ML has given these cams such an extended life.
It’s been a huge blessing!
I would love to add it to my 1Ds3. I recall reading that once upon a time Canon wrote ML devs a strongly worded letter telling them not to touch a 1D, but a camera that old is long obsolete.
(I literally only want a raw histogram)
(I also have a 1Dx2 but that's probably a harder port)
I have been toying with the idea of picking up an old 1D. I can't remember the name of the guy I saw do this, but he had his 1D modified to use a PL mount instead of an EF mount. Something about the 1D body (being thicker, I guess) allowed the flange distances to work out. He then mounted a $35,000 17mm wide angle to it. That lens was huge and could just suck in photons. With that lens, he could capture in 1/3-second exposures what would take multiple seconds on my gear. He mounted the camera to the front of his boat floating down a river, using night vision goggles to see where he was going. The images were fantastic. I always wanted to do something crazy like that.
Canon have never had any contact with ML project for any reason, to the best of my knowledge. The decision to stay away from 1D series was made by ML team, I would say out of an abundance of caution to try not to annoy them.
I use magic lantern on my canon 650D to get a clean feed for my blackmagic ATEM. The installation was easy and everything works well.
Thank you and the magic lantern team!
> We're using Git now. We build on modern OSes with modern tooling. We compile clean, no warnings. This was a lot of work, and invisible to users, but very useful for devs. It's easier than ever to join as a dev.
Very impressive! Thankless work. A reminder to myself to chase down some warnings in projects I am a part of...
It’s not too difficult, if you do it from the start, and by habit.
I have an xcconfig file[0] that I add to all my projects, which turns on treat-warnings-as-errors and enables all warnings. In C, I used to compile with -Wall.
I also use SwiftLint[1].
But these days, I almost never trigger any warnings, because I’ve developed the habit of good coding.
Since Magic Lantern is firmware, I’m surprised that this was not already the case. Firmware needs to be as close to perfect as possible (I used to write firmware. It’s one of the reasons I’m so anal about Quality).
[0] https://github.com/RiftValleySoftware/RVS_Checkbox/blob/main... (I need to switch the header to MIT license, to match the rest of the project. It’s been a long time, since I used GPL, but I’ve been using this file, forever).
[1] https://littlegreenviper.com/swiftlint/
It's not firmware :) We use what is probably engineering functionality, built into the OS, to load and execute a file from disk. We run as a (mostly) normal program on the cam's normal OS.
We build with: -Wall -Wextra -Werror-implicit-function-declaration -Wdouble-promotion -Winline -Wundef -Wno-unused-parameter -Wno-unused-function -Wno-format
Warnings are treated as errors for release builds.
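As a small illustration of why -Werror-implicit-function-declaration earns its place in a codebase that calls firmware by address: without a prototype in scope, old C silently assumes an undeclared function returns int with unchecked arguments, which hides exactly the kind of bug that's fatal here. The flag forces everything to be explicit, e.g. (name and address made up for illustration):

    /* Explicitly typed call through a reverse engineered ROM address.
       The typedef gives the compiler a real signature to check against;
       both the name and 0xF00D0 are hypothetical. */
    typedef void (*fw_fn)(int arg);
    #define some_fw_func ((fw_fn)0xF00D0)

    void example(void)
    {
        some_fw_func(1);   /* argument types are now checked */
    }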
Awesome!
Great work, and good luck!
Thanks, and for what it's worth, I didn't downvote you (account is too new to even do so :D ), and I agree with your main point - it's not that hard to avoid all compiler warnings if you do it from the start, and make sure it's highly visible.
You only add one at a time, so you only need to fix one at a time, and you understand what you're trying to do.
It is, however, a real bitch to fix all compiler warnings in decade old code that targets a set of undocumented hardware platforms with which you are unfamiliar. And you just updated the toolchain from gcc 5 to 12.
Oh, don't worry about the downvotes. Happens every time someone starts talking about improving software Quality around here.
Unpopular topic. I talk about it anyway, as it's one of my casus belli. I can afford the dings.
BTW: I used to work for Canon's main [photography] competitor, and Magic Lantern was an example of the kind of thing I wanted them to enable, but they were not particularly open to the idea -control freaks.
Also, it's a bit "nit-picky," I know, but I feel that any software that runs on-device is "firmware," and should be held to the same standards as the OS. I know that Magic Lantern has always been good. We used to hear customers telling us how good it was, and asking us to do similar.
I think RED had something like that, as well. I wonder how that's going?
Okay, good, just making sure :) Fun to hear that at least some photo gear places are aware of ML!
I have done a stint in QA, as well as highly aggressive security testing against a big C codebase, so I too care a lot about quality. And you can do it in C, you just have to put in the effort.
I'd like to get Valgrind or ASAN working with our code, but that's quite a big task on an RTOS. It would be more practical in Qemu, but still a lot of effort. The OS has multiple allocators, and we don't include stdlib.
Re firmware / software, doesn't all software run on a device? So I suppose it depends what you mean by a device. Is a Windows exe on a desktop PC firmware? Is an app from your phones store firmware? We support cams that are much more powerful than low end Android devices. Here the cam OS, which is on flash ROM, brings the hardware up, then loads our code from removable storage, which can even be a spinning rust drive. It feels like they're firmware, and we're software, to me. It's not a clearly defined term.
The main reason I make the distinction is because we get a lot of users who think ML is like a phone rom flash, because that's what firmware is to most people. Thus they assume it's a risky process, and that the Canon menus etc will be gone. But we don't work that way.
Good point, and really just semantics. I guess you could say native mobile apps are “firmware,” using my criteria.
But I put as much effort into my mobile apps as I did into my firmware projects (it’s been decades since I wrote firmware, BTW; the landscape is quite different these days - this is my first ever shipped engineering project[0]. Back then, we could still use an ICE to debug our software).
It just taught me to be very circumspect about Quality.
I do feel that any software (in any part of the stack) I write that affects moving parts, needs to be quite well-tested. I never had issues with firmware, but drivers are another matter. I've fried stuff that cost a lot.
[0] https://littlegreenviper.com/TF30194/TF30194-Manual-1987.pdf
Yes, it gets a bit blurry, especially given how fast solid-state storage is these days.
I think IoT has seen a resurgence in firmware devs... but regrettably not so much in quality. Too cheap to be worth it, I suppose. I can imagine a microwave could be quite a concerning product to design - there's some fairly obvious risks there!
Certainly, whatever you class ML as, we could damage the hardware. The shutter in particular is quite vulnerable, and Canon has made an unusual design choice: the cam flashes an important ROM with settings at every power off. Leaving these settings in an inconsistent state can prevent the cam from booting. We do try to think hard about contingencies, and program defensively. At least for anything we release. I've done some very stupid tests on my own cams, and only needed to recover with UART access once ;)
I haven't used an ICE, but I have used SoftICE. Oh, and we had a breakthrough on locating JTAG pinouts very recently, so we might end up being able to do similar.
You do need to be careful with the shutter. It is possible to do damage (and add dirt) from it.
We had to add software dust removal, because the shutter kicked dirt onto the sensor.
I’m assuming that, at some point, the sensor technology will progress to where mechanical shutters are no longer necessary.
Great, thanks for sharing the links.
By the way, Rift Valley Software? I'm writing to you from Kenya, one of the homes of the Great Rift Valley. It is truly remarkable to drive down the escarpment just north of Nairobi!
I used to live in Uganda.
Visiting the Rift Valley in Southwest Uganda was one of the most awesome experiences of my childhood. My other company, Little Green Viper, riffs on that, too.
I was born in Africa, and spent the first eleven years of my life, there.
Had to leave Uganda in a hurry, though (1973).
Yes! As a software developer in the photography space, we are deeply in need of projects like this.
The photography world is mired in proprietary software/formats and locked-down hardware; and while it has always been true that a digital camera is "just" a computer, now more than ever it is painful just how limited and archaic on-board camera software is when compared to what we've grown accustomed to in the mobile phone era.
If I compare photography to another creative discipline I am somewhat familiar with, music production - the latter has way more open software/hardware initiatives, and freedom of not having to tether yourself to large, slow, user-abusing companies when choosing gear to work with.
Long live Magic Lantern!
Agreed
cries in .x3f & Sigma Photo Pro
If you don't know about it already and are a macOS user, you may appreciate https://x3fuse.com/
For a look at some of the amazing output from an "ancient" EOS, you can look at Magic Lantern's Discord. It's rather shocking how far these little cameras can be pushed. It is definitely a fun hobby project to fool around with these things. After a while I stopped having the time and moved over to Sony APS-C with vintage lenses. I was able to maintain some of the aesthetic without getting frustrated by stuttering video. Still, it's really a cool project.
Unfortunately, they're not using a GitHub organization - leaving it to fail again if that account disappears. Continuity is hard.
> git clone https://github.com/reticulatedpines/magiclantern_simplified
Why would it fail if the code is available?
If it were github.com/magiclantern/magiclantern, ownership could change hands via organization membership changes, rather than depending on a single user account.
An alternative to Magic Lantern is CHDK. Unfortunately that also feels somewhat abandoned and at the best of times held together with string* so I’m glad ML is back.
*No judgement, maintaining a niche and complex reverse-engineering project must be a thankless task
https://chdk.fandom.com/wiki/CHDK
This is good news
One of those projects I wanted to take on but always backlogged. Wild that they've been on a 5 year hiatus -- https://www.newsshooter.com/2025/06/21/the-genie-is-out-of-t... -- that's the not-so-happy side of cool freeware.
No time like the present :)
It is actually easier to get started now, as I spent several months updating the dev infrastructure so it all works on modern platforms with modern tooling.
Plus Ghidra exists now, which was a massive help for us.
We didn't really go on hiatus - the prior lead dev left the project, and the target hardware changed significantly. So everything slowed down. Now we are back to a more normal speed. Of course, we still need more devs; currently we have 3.
I should give this a shot. I used to use CHDK to turn my old crappy Canon into something that would take good time-lapse videos by snapping a photo every X seconds. I miss doing that, though now it's harder because I live in the 'burbs, there are no particularly good spots for that nearby, and anywhere that is a good spot likely doesn't have a power outlet for me to use. I wonder how long I could power my camera from a portable charger?
I used to do it as well with a cheap second-hand IXUS 230 HS. It could run (at least) 48 h off a 7.2 Ah 12 V AGM battery, snapping a photo every 3 s (I used a fake-battery power adapter and a small DC-DC converter.)
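For a rough answer to the power-bank question (back-of-envelope numbers of my own, not measured): 7.2 Ah x 12 V is about 86 Wh, and 86 Wh over 48 h works out to roughly 1.8 W average draw. A common 10,000 mAh / 3.7 V power bank stores about 37 Wh, so a comparable setup might run somewhere around 20 hours, minus whatever the DC-DC conversion loses.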
> I used a fake-battery power adapter and a small DC-DC converter.
Same here. I used to live in a fairly tall building in Manhattan, so found my way to the roof, found an outlet, and would set it up to do timelapses of sunsets over the Hudson.
The camera lens was pretty dirty, so they weren't great, but I enjoyed them: https://www.youtube.com/watch?v=OVpOgP-8c9A
Nearly all Canons have a small access port as part of the battery door, which you can pass a power supply cable through, by design. Don't buy too cheap a dummy battery; the really cheap ones may have very bad voltage regulation. You can get ones designed to work from a USB power bank, or mains.
Amazing to see this, I haven’t thought about this since 2013. This turned my very basic entry level 550D into a crazy powerful camera for time lapse photography, I loved it!
This news is probably my excuse to buy my fourth EOS; the first three were 100% only because of Magic Lantern. Can't understand why manufacturers make this hard, as it sells hardware.
> Can't understand why manufacturers make this hard as it sells hardware.
Because a lot of features that cost a lot of money are only software limitations. With many of the cheaper cameras the max shutter speed and video capabilities are limited by software to make the distinction with the more expensive cameras bigger. So they do sell hardware - but opening up the software will make their higher-end offerings less compelling.
Magic Lantern is fantastic software that makes EOS cameras even better, but I understand why manufacturers make it hard:
Camera manufacturers live and die on their reputation for making tools that deliver for the professional users of those tools. On a modern camera, the firmware and software needs to 100% Just Work and completely get out of the photographer's way, and a photographer needs to be able to grab a (camera) body out of the locker and know exactly what it's going to do for given settings.
The more cameras out there running customized firmware, the more likely someone misses a shot because "shutter priority is different on this specific 5d4" or similar.
I'm sure Canon is quietly pleased that Magic Lantern has kept up the resale value of their older bodies. I'm happy that Magic Lantern exists-- I no longer need an external intervalometer! It does make sense, though, that camera manufacturers don't deliberately ship cameras as openly-programmable computational photography tools.
You have an interesting point about consistency and I'd like to provide a counterargument. While control consistency is very important, the actual image you get from a camera varies significantly between models as the manufacturers change tone curves, colour models, etc. JPGs from the camera are basically arbitrary and RAWs are not much better. The manufacturers don't provide many guarantees, it's just up to you and downstream software to figure out what looks good. Funny that so much thought goes into designing the feel of a camera yet the photo output is basically undefined...
Also another thing: Magic Lantern adds optional features which are arbitrarily(?) not present on some models. Perhaps Canon doesn't think you're "pro enough" (i.e. spent enough money), so they don't switch on focus peaking or whatever on your model.
If you want JPGs to look different, you can change them in the camera, and RAW files are just that: raw. They will vary between cameras slightly because the cameras have different sensors. Editing RAWs from 5d3 vs. 5d4 vs. 6d (my only experience) is not very different. Ultimately, the workflow that matters is a photographer capturing the image and getting the output to the studio quickly, in high quality. Event photographers often tether via ethernet or USB and the studio can post-process the RAW in minutes (or even seconds). The part of this that is most sensitive and hardest to recover from error is the photographer capturing the image, which is why consistency and usability of camera controls is so important.
IIRC none of the EOS DSLRs had focus peaking from the factory, you need Magic Lantern -- Canon didn't program it at all.
My point about JPGs is that they will look different between cameras anyway because of software differences, even with the "same" settings, so they're already inconsistent from the user's perspective. Editing RAW is not necessarily different, but from what I've heard, that's because RAW editing software busts its ass to correct for all manner of arbitrary differences between camera models. It's in spite of camera design that we have consistency, not really because of it.
Very happy to see ML return. I used it on my T2i for 10 years at least. This year I bought an R6 Mark II, so no need right now, but I'd be very happy to see it supported by ML someday. Congratulations on the return!
I still have my 600D - it's hands down the most user-friendly DSLR I've ever owned, thanks to Magic Lantern. I also have a Sony A7S2, but it is nowhere near the ease of use of my 600D. 12 years ago or so, I discovered Magic Lantern and I was blown away. It literally turned my camera into a high-end unit (for its time). What blows me away is that my 600D can capture RAW video after installing ML. My Sony still can do only 10-bit video, 12 years later. The team deserves so much more funding and credit than they receive. I'm extremely grateful to the project and the people behind it. I still haven't sold my 600D - only because of Magic Lantern. Thank you team :)
What would be something you can achieve using ML that you couldn't do with the stock firmware and postprocessing?
The list is so long. My favorite is the internal intervalometer + ETTR. Canon has always been laughed at for not having an internal intervalometer, and ML proves how lame it is to not have one. ETTR (Expose To The Right) is an auto metering mode that keeps the histogram pushed as far to the right as possible (better exposure) automagically, by increasing shutter time and/or increasing ISO. This is essential for doing holy grail timelapses of sunset/sunrise, where the exposure is constantly changing. This feature alone is worth its weight in gold.
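The concept is mechanical enough to sketch. A toy version of the idea (hypothetical names, not ML's actual implementation):

    /* Toy ETTR step: nudge exposure until the bright end of the
       histogram sits just below clipping. All names are hypothetical. */
    extern float highlight_percentile(void);  /* e.g. 99.9th percentile, 0..1 */
    extern void  increase_exposure(void);     /* longer shutter or higher ISO */
    extern void  decrease_exposure(void);

    void ettr_step(void)
    {
        float hi = highlight_percentile();
        if (hi > 0.98f)          /* highlights clipping: back off */
            decrease_exposure();
        else if (hi < 0.90f)     /* headroom left: push to the right */
            increase_exposure();
        /* otherwise: exposed "to the right" without clipping */
    }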
However, a lot of the features exposed are more video oriented. The Canon bodies were primarily photo cameras that could shoot video in a cumbersome way. ML brings features a video shooter would need, like audio metering, without diving into the menus. The older bodies also have hardware limitations on write speed, so people use the HDMI out to external recorders to record a larger frame size/bitrate/codec than natively possible. Also, that feed normally has the camera UI overlay, which prevents clean recordings. ML allows turning that off.
There are just too many features that ML unlocks. You'd really just need to find the camera body you are interested in using on their site, and see what it does for that body. Different bodies have different features. So some effort is required on your part to know exactly what it can do for you.
I don't know if modern cameras are better for this, but a big one historically was getting a clean, realtime HDMI output so that high quality cameras can be used with a capture card for broadcast purposes as a replacement for a webcam. Manufacturers understand that that's a "pro" level need/feature and have intentionally segmented the market so that lower-tier devices can't do it even though the hardware is obviously all present.
The big one for me was always focus peaking when using vintage lenses or doing IR photography. The extended White Balance settings were nice to have for IR, as well.
- Lua script support. It is not complete (in ML hardly anything is) but it gives access to a lot of ML and Canon functions. Years ago someone made a script for automating solar eclipse shooting, catching all critical phases while chilling and enjoying the view.
- Full electronic shutter (Silent Pic) for Digic 4 and 5.
- Focus stacking for macro and - via Lua script - for landscape.
- Exposure simulation switch for "cheaper" cams.
- Trap focus.
- Dual-ISO: an HDR mode, but without ghosting, done by manipulating sensor lines to record at different ISOs.
- Ghost image overlay.
- Customizable cropmark overlays (grids and others).
- FPS fine-tuning. Several folks used it to record vintage monitors with very, very strange timings and without rolling bars. 30.01 fps? No problem!
- Zebras and focus peaking, vectorscope, waveform monitoring, false colour support.
- RAW histogram.
- Bracketing with up to 11 frames (But why? ;-> )
- Intervalometer and bracketing (a bit more configurable than Canon has now).
- Trigger by the LCD's IR sensor (if any), by audio (clap your hands), or by motion detection.
- Rack focus.
- Display mirroring and upside-down options.
- Configurable presets (up to 15).
- 30 minute override for RAW recording, USB and HDMI streaming. Oh, and we have a new option to record native H.264/MOV for more than 29:59. Prototype, but working.
- Better AF microadjustment for the cams that have that option from Canon.
- ...
Frankly: I once tried to maintain a help file and browsed through a lot of lesser known features. Took me days and I didn't even test RAW/MLV recording.
Cool! Am I the only one who has a really hard time finding what models are supported? It says on the front page that it's on the downloads page, but I can't seem to find anything? EDIT: It's on the builds page: https://builds.magiclantern.fm
Magic Lantern is amazing... I used it with a custom C script to do auto ISO in Av mode (setting minimum shutter speed based on focal length) before that was built into the newer camera models. It's good to see it back!
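For the curious, the logic of such a script is roughly this (a sketch with hypothetical names, not the actual script):

    /* Classic handheld rule of thumb: keep shutter at or above
       1/focal_length. All accessor names here are hypothetical. */
    extern int  get_focal_length_mm(void);
    extern int  get_shutter_denominator(void);  /* e.g. 50 means 1/50s */
    extern void bump_iso_one_stop(void);

    void auto_iso_min_shutter(void)
    {
        int min_denom = get_focal_length_mm();  /* the 1/f rule */
        if (get_shutter_denominator() < min_denom)
            bump_iso_one_stop();  /* Av metering then picks a faster shutter */
    }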
This is such excellent news! I was extremely sad when progress was halted, lost hope my 80D would ever get cfw.
The 80D has Magic Lantern code available. We haven't released a build to the public as it has such minimal features available there's no real point yet. But if you were thinking of doing dev work for it, it's in a good place to start: ML GUI works, debug logging works.
I absolutely love Magic Lantern, and I wish similar initiatives existed on Sony and Nikon! I was forced to upgrade my Sony camera purely because of software limitations.
Firmware should be open source by law. Especially when products are discontinued.
I have fond memories of squeezing such deep exposure stacking out of the auto/adaptive HDR bracketing script in CHDK on my old "IXUS 100IS" that the (AFAIK still CCD) sensor had severe blooming around the window in the scene. Still great though!
Looks like there's still no support for the M50. I hope with the revitalized development it's on the roadmap!
Nothing published. But just a few hours ago I was asked to test its raw video capture mode...
I just got my T2i out a few months ago and the first thing I did was check for new magic lantern versions. haha. Really cool to see this project is still living.
It has been many moons since I used Magic Lantern. Has anamorphic desqueeze ever been a feature or could it be in the future? That's one missing feature that bums me out about shooting videos on Canon.
ML has had support for anamorphic lenses for ages, with desqueeze ratios of 5:4 (1.25), 4:3 (1.33), 7:5 (1.4), 3:2 (1.5), 5:3 (1.66), 9:5 (1.8), and 2:1 (2).
Awesome thanks for the heads up! I wasn't using anamorphic lenses back when I tried ML so I'll have to give it another try now.
Would love it if camera manufacturers were forced to open source their firmware after, say, 5 years from a camera's release. The longevity of devices would be vastly improved.
In fact, make this apply to all devices with firmware: printers, streamers, etc.
I don't think forcing a company to open source their IP is a good move, but perhaps there might be some encouragement implemented for opening up their bootloader so the device is more hackable.
But forcing is never the right thing.
The entire copyright and patent system is built on the principle of forcing the release of IP; it is time delayed in exchange for the legal protections you gain if you opt in to the system. That is the encouragement!
Extending this to enable software access by 3rd parties doesn't feel controversial to me. The core intent of copyright and patent seems to be "when the time limit expires, everyone should be able to use the IP". But in practice you often can't, where hardware with software is concerned.
Thanks to all contributors to the project, ML is an amazing feat of work. I've been running it on my Canon 6D since I got it in 2016, very useful for timelapses.
This is not about the key logging spyware Magic Lantern. I thought this would be an interesting read with whatever is happening around the world.
https://en.wikipedia.org/wiki/Magic_Lantern_(spyware)
I was trying to understand what this project is. It's some sort of open firmware for Canon camera that you put on the flash card (SD). The home page has info: https://www.magiclantern.fm/
Yes, it's a truly noteworthy project. They exploited Canon cameras by first managing to blink the red charging LED, then used the LED blinks to transmit the firmware out. Then they built custom software that boots right from the SD card (thus no possibility of bricking the camera). Magic Lantern, for example, allows many basic cameras to do RAW video recording (with unlimited length) - a feature not even in the high-end models. And it has many more features to tinker with.
There's a fun step you're missing - it's not firmware. We toggle on (presumably) engineering functionality already present in Canon code, which allows for loading a file from card as an ARM binary.
We're a normal program, running on their OS, DryOS, a variant of uITRON.
This has the benefit that we never flash the OS, removing a source of risk.
Hi - I'm the current lead dev.
It's not firmware, which is a nice bonus, no risk of a bad rom flash damaging your camera (only our software!).
We load as normal software from the SD card. The cam is running a variant of uITRON: https://en.wikipedia.org/wiki/ITRON_project
From a security mindset, I was thinking this had made a return: https://en.m.wikipedia.org/wiki/Magic_Lantern_(spyware)
I was pleasantly surprised to find out this was something very different.
I thought it was Magic Leap, the AR scam company.
https://en.wikipedia.org/wiki/Magic_Leap
> As of December 2024, the Magic Leap One is no longer supported or working, becoming end of life and abruptly losing functionality when cloud access was ended. This happened whilst encouraging users to buy a newer model.
Ah, that’s about how I thought that would end up.
Its demise seems to have completely passed me by; I read about its enormous funding and unrealistic expectations, then a v1 came out which was mediocre/disappointing, then... nothing. Apple's thing overshadowed it, but that too has passed - unless they're going to announce a new model at a fraction of the price soon. Supposedly their UI redesigns follow its concepts, so it's not buried yet.
CHDK and Magic Lantern are fantastic, I really wish there was a Nikon equivalent.
There is, but it's more limited: https://www.dslrbodies.com/cameras/general-nikon-camera-info...
Are there modded firmware for Sony or Lumix cameras?
Sony has https://github.com/ma1co/OpenMemories-Tweak (the developer disappeared, but it still works, I think).
Magic lantern is amazing. I'm still hoping a project grows for the other camera brands since I only have Nikon and Panasonic cameras.
If I’m to get a secondhand camera to run ML, which would you recommend: the 200D or the 600D?
I wouldn't recommend the 600D if you want to do video. For stills it's perfectly acceptable. Autofocus will feel slow compared to a modern cam. If you're going for an old / budget cam, try to stretch to the 650D or 700D; those are a newer generation of hardware.
200D is much newer, but less well supported by ML. I own this cam and am actively working on improving it. 200D has DPAF, which means considerably improved auto-focus, especially for video. Also it can run Doom.
Are there any ML features in particular you're interested in?
I'm interested in using all kinds of devices for streaming live video from live (think music) events via OBS with minimal effort. The current setup is a device (even an old iPhone will do) which provides WiFi connectivity; then another (or the same) device runs DroidCam, which then streams into a nearby laptop with OBS (typically capturing the audio needed), and this is then sent wherever we decide (Twitch, RTSP, etc.). We've tried this setup with as many as three DroidCam phones, and it is just fine on... a legacy MacBook Pro with Intel + 1500 MB Iris graphics.
So ideally I'd imagine getting a second-hand 600D or 200D and having a similar setup. We did have a setup (previously) where a GoPro or mini-HDMI camera was captured and then processed by a Raspberry Pi 2/3/4, but this seems overkill compared to the DroidCam setup.
And, of course, the optics on the 600D/200D are expected to be much better than those on an iPhone or similar phone/mobile device.
Thanks for your kind attention.
With the 600D you are stuck at 1620x912 in video mode, embedded in 1080i59.94 8-bit. Black borders around it, and you have to crop and - maybe - scale up. The 200D HDMI stream with ML is clean with MF, but AF will still draw a focus rectangle. But at least it's true FHD via HDMI.
AF with 600D in liveview: Phase detection only. Focus hunting galore. 200D comes with usable DPAF.
I prefer 250D for streaming. Dual display support, no 30 minute limit for HDMI out (but cam display will go dark until some button action).
Fantastic news! Congrats to the new team!
Magic Lantern devs are GOAT <3
I've been waiting for Magic Lantern for my 6D Mark II for years now, checking the homepage every 6 months or so for an update, so this is great news!
Same boat. I've had a 6D Mark II for 7 years now, and I misguidedly counted on ML being released within 3 years of my purchase. But luckily, it's still a fantastic camera.
6D2 is a nice cam, and one I happen to own. This cam is under active development. Locally I have ~FHD raw video working.
The nifty thing would come from opening up the high end cameras, so why not go there? Of course Canon's legal team is gonna crack down on the project, as they've previously said.
Canon's legal team have never said anything about Magic Lantern in any context that I'm aware of.
The high end cams need ML less, they have more features stock, plus devs need access to the cam to make a good port. So higher end cams tend to be less attractive to developers.
I was hoping it was these guys: https://youtu.be/x3Y1dAcHK5Y
But this is actually really cool because, as it turns out, I've got an old Canon EOS DSLR that I haven't used for a long time, and I didn't know this thing existed before.
Uh, sure, maybe in a professional setting where you're getting paid. But this was unpaid volunteer work. If, as a community, we start enforcing professional grade standards on people who are just contributing their free time to give us neat toys and tools, I kinda worry it makes the whole thing less fun and less sustainable. And if that happens, we probably stop getting these free toys altogether.
I wholeheartedly disagree. Being professional crosses the bounds of paid and unpaid work.
It doesn't take much work to not leave a gigantic pile of trash behind you.
If anything, it's even more of a self-responsible thing to do in the OSS world, as there isn't a chain of command enforcing it, such as in the corporate world.
It's selfish to engage in a group effort with other people, building something together, without a conscious decision about continuity.
A job worth doing is a job worth doing well. Maybe I'm just a gray beard with unrealistic expectations, or maybe I care about quality.
Think of it as a non-profit club. If you volunteer to be the treasurer, are you then free to ignore everything and do whatever you like, just because you aren’t paid? Of course not. It’s the same with being a software project maintainer; you have willingly taken on some obligations.
If you volunteer, sure.
If I put some code out on the internet and some other people find it and start using it, they message me, we talk, and I start adding things they suggest and working with others to improve this code. Then one day I wake up and don't want to do it anymore. At what point did I become obligated? When I published the code? When I first started talking to others about it (building a community)? When I coded their suggestions? When I worked with other coders?
Who gets to decide where the line is?
(Many people disagree vehemently with ascribing any obligation at all to software maintainers, as discussed previously: <https://news.ycombinator.com/item?id=43143176>)
It's not like this kind of thing doesn't happen in the professional world - in fact, quite the opposite. The incentives to cut corners in a company are if anything greater than in open source, with pressure from management to meet the next deadline.
https://en.wikipedia.org/wiki/Bus_factor