Tl:dw for how this works:
He scans one line at a time with a mirror into a photomultiplier tube, which can detect single-photon events. This is captured continually at 2 GSample/s (2 billion samples per second: 2B FPS) with an oscilloscope and a clever hack.
The laser is actually pulsing at 30 kHz, and the oscilloscope capture is synchronized to the laser pulse.
So we consider each 30 kHz pulse a single event in a single pixel (even though the mirror is rotating continuously). So he runs the experiment 30,000 times per second, each one recording a single pixel at 2B FPS for a few microseconds. Each pixel-sized video is then tiled into a cohesive image.
Good explanation. One detail though: it is one pixel at a time, not one line at a time. Basically he does the whole sequence for one pixel, adjusts the mirror to the next one, and does it again. The explanation is around the 8-minute mark.
Just want to make it clear that in any one instant, only one pixel is being recorded. The mirror moves continuously across a horizontal sweep and a certain arc of the mirror's sweep is localized to a pixel in the video encoding sequence. A new laser pulse is triggered when one pixel of arc has been swept, recording a whole new complete mirror bounce sequence for each pixel sequentially. He has an additional video explaining the timing / triggering / synchronization circuit in more depth: https://youtu.be/WLJuC0q84IQ
One piece I'd like to see more clarification on: is he doing multiple samples per pixel (like with ray tracing)? For his 1280x720 resolution video, that's around 900k pixels, so at 30 kHz it would take around 30 s to record one of these videos if he were doing one sample per pixel. But in theory he could run this for much longer and get a less noisy image.
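As a rough back-of-the-envelope check of that estimate (this just plugs in the 1280x720 and laser-rate figures mentioned in the thread; the real scan time also depends on scope readout overhead and any averaging):

```python
pixels = 1280 * 720          # 921,600 pixels in the final composite
print(pixels / 30_000)       # ~30.7 s for one laser pulse (one sample) per pixel at 30 kHz
print(pixels / 3_000)        # ~307 s (~5 min) at the 3 kHz rate mentioned later in the thread
# Averaging N pulses per pixel multiplies the scan time by N but only
# reduces shot noise by roughly sqrt(N).
```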
I find it interesting that a project like this would easily be a PhD paper, but nowadays YouTubers do it just for the fun of it.
You should check the other channel by the same person, where he goes into more details about the system: https://www.youtube.com/@BetaPhoenixChannel
From what I remember, recording one frame took about an hour.
And the reason it matters that this is a single pixel at two billion times per second is that we can hypothetically stack many of these assemblies on top of each other and get video of a single event that is not repeatable.
The author explained that he originally attempted to pulse the laser at 30 kHz, but for the actual experiment used a slower rate of 3 kHz. The rate at which the digital data can be read out from the oscilloscope to the computer seems to be the main bottleneck limiting the throughput of the system.
Overall, recording one frame took approximately an hour.
Thanks for the explanation. Honestly, your explanation is better than the entire video. I watched it in full and got really confused. I completely missed the part where he said the light is pulsing at 30 kHz and was really puzzled about how he is able to move the mirror so fast to cover the entire scene.
FWIW he explains it better in his earlier video about the original setup. He might be assuming people have seen that.
Yup, this technique also lets an oscilloscope capture signals with frequencies higher than its Nyquist bandwidth.
The downside is that it only works with repetitive signals.
I believe this technique is known as "equivalent-time sampling".
The author uses "real time sampling" to acquire the evolution of light intensity for one pixel at a 2 GSps rate. The signal is collected for approximately one microsecond at each firing of the laser, and the corresponding digital data is sent from the oscilloscope to the computer.
"Equivalent time sampling" is a different technique which involves sliding the sampling point across the signal to rebuild the complete picture over multiple repetitions of the signal.
https://www.tek.com/en/documents/application-note/real-time-...
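For anyone unfamiliar with the distinction, here is a minimal numpy sketch of equivalent-time sampling (the technique this comment contrasts with what the video does): a slow digitizer grabs samples of a repetitive waveform with a tiny, known delay that increments on each repetition, and interleaving the samples reconstructs the fast waveform. All numbers are made up for illustration.

```python
import numpy as np

f_sig = 50e6      # repetitive 50 MHz signal
f_samp = 10e6     # "digitizer" real-time rate, far below Nyquist for f_sig
delta = 1e-9      # trigger-to-sample delay grows by 1 ns per repetition
n_reps = 100      # 100 reps x 1 ns = 100 ns, one full real-time sample interval

signal = lambda t: np.sin(2 * np.pi * f_sig * t)

times, samples = [], []
for k in range(n_reps):
    t = np.arange(0, 1e-6, 1 / f_samp) + k * delta   # this repetition's sample instants
    times.append(t)
    samples.append(signal(t))

# Interleave on the known equivalent-time axis: the result is the waveform on a
# 1 ns grid, i.e. an effective 1 GS/s rebuilt from a 10 MS/s digitizer.
t_eq = np.concatenate(times)
v_eq = np.concatenate(samples)
order = np.argsort(t_eq)
t_eq, v_eq = t_eq[order], v_eq[order]
```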
The original MIT video from 2011: "Visualizing video at the speed of light — one trillion frames per second" https://youtu.be/EtsXgODHMWk (project site: https://web.media.mit.edu/~raskar/trillionfps/)
He mentions this as the inspiration in his previous video (https://youtu.be/IaXdSGkh8Ww).
The triggering scheme is completely brilliant. One of those cases where not knowing too much made it possible, because someone who does analog debug would never do that (because they would have a $50k scope!).
Does anyone have a $50,000 scope they could just give to this dude? He seems like he would make great use of it.
Honestly I think if we each wrote a nice personal letter to Keysight they'd probably gift him one in exchange for the YouTube publicity. Several other electrical engineers on YT get free $20-50k Keysight scopes, not just for themselves, but once a year or so to give away to their audience members.
And yes, this person could make use of it. His videos are among the highest quality science explainers - he’s like the 3B1B of first principles in physics. Truly a savant at creating experiments that demonstrate fundamental phenomena. Seriously check out any of his videos. He made one that weighs an airplane overhead. His videos on speed of electricity and speed of motion and ohms law are fantastic.
What this experiment does is very similar to how an ordinary LIDAR unit operates, except that during a LIDAR scan the laser and the receiver are always pointed in the same direction, while in this demonstration the detector is scanning the room while the laser is stationary and is firing across the room.
But in principle, a LIDAR could be reconfigured for the purposes of such a demonstration.
If one wants to build the circuit from scratch, then specifically for such applications there exist very inexpensive time-to-digital converter chips. For example, the Texas Instruments TDC7200 costs just a few dollars and has a time uncertainty of some tens of picoseconds.
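For a sense of what that timing uncertainty means optically, here is a tiny sketch converting a round-trip time-of-flight measurement into range (the ~55 ps step is an assumed representative figure for this class of chip, not a quoted spec):

```python
C = 299_792_458.0                     # speed of light, m/s

def tof_to_range_m(round_trip_s: float) -> float:
    """Convert a measured round-trip time-of-flight into one-way distance."""
    return C * round_trip_s / 2

# Assuming ~55 ps per timing step, each step corresponds to roughly 8 mm of range:
print(tof_to_range_m(55e-12) * 1e3, "mm")   # ~8.2 mm
```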
Keysight is not very hobbyist friendly these days. A year or two ago it broke on the Eevblog forums that they were refusing to honor warranties/service contracts unless you had a corporate account with them. If you were just a guy with a scope you would be SOL.
Do you have any better recommendations?
I bet we can find 50,000 people with one dollar to give. Let's make this happen HN!
Hmm, it's a clever hack, but ideally you would use an oscilloscope with an "External trigger" input, like most of the older Rigols have. That would let you use the full sample rate without needing to trigger from CH2.
The view from one end of a laser going between two mirrors (timestamp 1:37) is a fairly good demonstration of the camera having to wait for light to get to it.
Could redshift/blueshift explain why the light appeared to move at different velocity when he moved the camera to another position?
Ah, two billion. The first several times I saw this it looked like "twenty eight", which didn't seem terribly interesting.
The video is definitely more interesting than 28 fps but it's also not really 2B fps.
It captures two billion pixels per second. Essentially he captures the same scene many times (presumably 921,600 times to form a full 720p picture), watching a single pixel at a time, and composites all the captures together to form frames.
I suppose that for entirely deterministic and repeatable scenes, where you also don't care too much about noise and if you have infinite time on your hands to capture 1ms of footage, then yes you can effectively visualize 2B frames per second! But not capture.
Nah, it's definitely 2B fps, the frames are just 1x1 wide and a lot of the interesting output comes from the careful synchronization, camera pointing, and compositing of nearly a million 1x1 videos of effectively identical events.
And there are 1 million milliseconds in every ~17 minutes. It doesn't take that long to capture all the angles you need, so long as you have an automated setup for recreating the scene you are videoing.
Others say that you're wrong, but I think you're describing it approximately perfectly.
As you say: It does capture two billion pixels per second. It does watch a single pixel at a time, 921,600 times. And these pixels [each individually recorded at 2B FPS] are ultimately used to create a composition that embodies a 1280x720 video.
That's all correct.
And your summary is also correct: It definitely does not really capture 2 billion frames per second.
Unless we're severely distorting the definition of a "video frame" to also include "one image in a series of images that can be as small as one pixel," then accomplishing 2B entire frames per second is madness with today's technology.
As stated at ~3:43 in the video: "Basically, if you want to record video at 2 billion frames per second, you pretty much can't. Not at any reasonable resolution, with any reasonably-accessible consumer technology, for any remotely reasonable price. Which is why setups like this kind of cheat."
You appear to be in complete agreement with AlphaPhoenix, the presenter of this very finely-produced video.
> Unless we're severely distorting the definition of a "video frame" to also include "one image in a series of images
What is your definition of "video frame" if not this?
> that can be as small as one pixel,"
Why would this be a criterion on the images? If it is, what is the minimum resolution to count as a video frame? Must I have at least two pixels for some reason? Four, so that I have a grid? These seem like weird constraints to try to attach to the definition when they don't enable anything that the 1x1 camera doesn't, nor are the devices that satisfy them meaningfully harder to build.
I agree the final result presented to the viewer is a composite... but it seems to me that it's a composite of a million videos.
We already have a useful and established term with which to describe just one pixel from a video frame.
That term is "pixel".
I concur, and as you say it comes from a video frame and thus a video. The fact that the video frame contains only a single one seems to change nothing.
I see.
If I were to agree with this, then would you be willing to agree that the single-pixel ambient light sensor adorning many pocket supercomputers is a camera?
And that recording a series of samples from this sensor would result in a video?
Yes :)
Each pixel was captured at 2 billion frames per second, even if technically they were separate events. Why not call it (FPS / pixels) frames per second?
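Taking that suggestion literally for this setup (simply dividing the per-pixel sample rate by the pixel count of the final composite):

```python
sample_rate = 2_000_000_000      # 2 GS/s per-pixel capture
pixels = 1280 * 720              # 921,600 pixels in the composite
print(sample_rate / pixels)      # ~2170 "whole-frame equivalents" per second
```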
As I understand it, this is sort of simulating what it would be like to capture this, by recreating the laser pulse and capturing different phases of it each time, then assembling them; so what is represented in the final composite is not a single pulse of the laser beam.
Would an upgraded version of this that was actually capable of capturing the progress of a single laser pulse through the smoke be a way of getting around the one-way speed of light limitation [0]? It seems like if you could measure the pulse's propagation in one direction, and the other (as measured by when it scatters of the smoke at various positions in both directions), this seems like it would get around it?
But it's been a while since I read an explanation for why we have the one-way limitation in the first place, so I could be forgetting something.
[0] https://en.wikipedia.org/wiki/One-way_speed_of_light
No, as he explains in the video, this is not a stroboscopic technique, the camera _does_ capture at 2 billion fps. But it is only a single pixel! He actually scans the scene horizontally then vertically and sends a pulse then captures pixel by pixel.
>As I understand it, this is sort of simulating what it would be like to capture this, by recreating the laser pulse and capturing different phases of it each time, then assembling them; so what is represented in the final composite is not a single pulse of the laser beam.
It is not different phases, but it is a composite! On his second channel he describes the process[0]. Basically, it's a photomultiplier tube (PMT) attached to a precise motion control rig and a 2B sample/second oscilloscope. So he ends up capturing the actual signal from the PMT over that timespan at a resolution of 2B samples/s, and then repeating the experiment for the next pixel over. Then after some DSP and mosaicing, you get the video.
>It seems like if you could measure the pulse's propagation in one direction, and the other (as measured by when it scatters of the smoke at various positions in both directions), this seems like it would get around it?
The point here isn't to measure the speed of light, and my general answer when someone asks "can I get around physics with this trick?" is no. But I'd be lying if I said I totally understood your question.
[0] https://www.youtube.com/watch?v=-KOFbvW2A-o
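A minimal sketch of the per-pixel acquisition loop as described in these comments; the `mirror`, `scope`, and `fire_laser` objects are hypothetical stand-ins, not the actual hardware APIs used in the video.

```python
import numpy as np

WIDTH, HEIGHT = 1280, 720
SAMPLES = 2000                       # ~1 us of PMT signal at 2 GS/s

def scan_scene(mirror, scope, fire_laser):
    """Capture one 2 GS/s PMT trace per pixel, stepping the mirror between shots."""
    traces = np.zeros((HEIGHT, WIDTH, SAMPLES), dtype=np.float32)
    for y in range(HEIGHT):
        for x in range(WIDTH):
            mirror.point_to(x, y)              # aim the detector at the next pixel
            scope.arm()                        # wait for the laser-synchronized trigger
            fire_laser()                       # one pulse lights the scene
            traces[y, x] = scope.read_trace()  # the single-pixel "video" for this shot
    return traces
```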
Think of it more like "IRL raytracing", where a ray (the beam) is cast and the result for a single pixel from the point of view is captured, and then it is repeated millions of times.
Even if you had a clock and camera for every pixel, the sync is dependent on the path of the signal taken. Even if you sent a signal along every possible route and had a clock for each route for each pixel (a dizzyingly large number) it still isn't clear that this would represent a single inertial frame. As I understand it even if you used quantum entanglement for sync, the path of the measurement would still be an issue. I suggest not thinking about this at all, it seems like an effective way to go mad https://arxiv.org/pdf/gr-qc/0202031
E: Do not trust my math under any circumstances, but I believe the number of signal paths would be something like 10^873,555? That's a disgustingly large number. This would reveal whether the system is in a single inertial frame (consistency around loops), but it does not automatically imply a single inertial frame. It's easy to forget that the Earth, the galaxy, etc. are also still rotating while this happens.
No, you cannot escape the conclusion of the limitations on measuring the one-way speed of light.
While the video doesn't touch on this explicitly, the discussion of the different path lengths around 25:00 in is about the trigonometric effect of the different distances of the beam from the camera. Needing to worry about that is the same as grappling with the limitation on the one-way speed.
The problem (ignoring quantum mechanics) is that the sensors all require an EM field to operate in. So assuming that the speed of light was weighted with a vector in space-time, it would be affected everywhere -- including in the measurement apparatus.
If on the other hand one could detect a photon by sending out a different field, maybe a gravitational wave instead... well it might work, but the gravitational wave might be affected in exactly the same way that the EM field is affected.
I thought his method of multiplexing the single channel was very smart. I guess it's more common on 2-channel or high-end 4-channel scopes to have a dedicated trigger input, which I've checked this one doesn't have. That said, there are digital inputs that could've been used, presumably from whatever was controlling the laser.
Frankly it's uncommon to not have a trigger input. I'm not sure I've ever seen a DSO in person without a trigger in.
I was confused by that part of the video exactly because I wondered why he wasn’t using the trigger input. Or, would it normally be possible to use a different channel as the trigger for the first channel?
He explained that. His inexpensive oscilloscope _can_ trigger from the second channel, but only at one billion samples per second. Where’s the fun in that?
I'd like to see this with the double slit experiment
It would look something like this[1] except with slower visual propagation.
Note that this camera (like any camera) cannot observe photons as they pass through the slits -- it can only record photons once they've bounced off the main path. Thus you will never record the interference-causing photons mid-flight, and you'll get a standard interference pattern.
[1]: https://www.researchgate.net/figure/The-apparatus-used-in-th...
AlphaPhoenix mentions in the description that he wants to try and image an interference pattern, and it seems possible.
Though it wouldn't really be showing you the quantum effect; that's only proven with individual photons at a time. This technique sends a "big" pulse of light relying on some of it being diffusely reflected to the camera at each oscilloscope timestep.
Truly sending individual photons and measuring them is likely impractical, as you'd have to wait a huge time collecting data for each pixel, just hoping a photon happens to bounce directly into the photomultiplier tube.
This video has brought back warm and fuzzy memories from my other life. When I was a scientist back in the USSR, my research subject required measuring ridiculously low amounts of light, and I used a photomultiplier tube in photon-counting mode for that. I needed a current preamp that could amplify nanosecond-long pulses and concocted one out of gallium-arsenide logic elements pushed to work in a linear mode. The tube was cooled by Peltier elements and the data fed to a remote Soviet relative of the Wang computer [0].
OMG this was back in 1979-1981.
0. - https://ru.wikipedia.org/wiki/%D0%AD%D0%BB%D0%B5%D0%BA%D1%82...
Did he actually repeat the experiment 1280x720 times for every pixel?
Yes, the laser was fired at 3 kHz, while the mirrors were slowly scanning across the room.
For each laser pulse, one microsecond of the received signal was digitized with the sample rate of 2 billion samples per second, producing a vector of light intensity indexed by time.
A large number of vectors were stored, each tagged by the pixel XY coordinates which were read out from the mirror position encoders. In post-processing, this accumulated 3D block of numbers was sliced time-wise into 2D frames, making the sequence of frames for the clip.
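A minimal numpy sketch of that post-processing step, assuming the per-pixel vectors have already been assembled into a (height, width, time) array as described; the file name and array layout are assumptions for illustration.

```python
import numpy as np

# traces[y, x, t]: one light-intensity vector per pixel, 0.5 ns per sample (2 GS/s)
traces = np.load("traces.npy")        # assumed shape: (720, 1280, n_samples)

# Slice the block time-wise: frame i is the whole scene at time sample i.
frames = np.moveaxis(traces, -1, 0)   # shape: (n_samples, 720, 1280)

# frames[i] and frames[i + 1] are separated by 0.5 ns of "scene time".
```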
It's pretty clear he had a computer repeat the experiment that many times in reasonably rapid succession rather than doing it "himself", but yes...
How is the image focused and not a big blur?
There are so many levels this could be answered at.
All light in a narrow cone extending from the camera gets recorded to one pixel, entirely independently from other pixels. There's no reason this would be blurry. Blur is an artifact created by lenses when multiple pixels interact.
There is a lens in the apparatus, which is used to project the image from the mirror onto the pinhole, but it is configured so the plane of the laser is in focus at the pinhole.
What I don't understand is how the projection remains in focus as the mirror sweeps to the side, but perhaps the change in focus is small enough not to matter.
Techniques like this are/were used to film nuclear explosions (but with a single explosion).
Who detonated 2073600 bombs?
They did the way more expensive version briefly mentioned towards the start of the video, having 12+ cameras with ridiculously fast shutters (as low as 10 nanoseconds) arranged to run in sequence.
That was probably not the more expensive version in that case.
Why?
Because of the cost of making and detonating so many bombs.
Scanning a single pixel over an image? How does that work with an explosion? The laser pointer is reproducible
https://en.wikipedia.org/wiki/Rapatronic_camera
The rapatronic camera had an incredibly fast electronic shutter. To record a video they needed one camera per frame, rather like "bullet time" in the movies. The technique in the YouTube video is completely different.
It's not completely different. I'd argue it's the exact opposite. Instead of using a single single-pixel camera to record video of a repeatable event, a sequence of regular film cameras captured photographs of an unrepeatable event.
But that bears no relation to what happened in the video.
He did a good job on his setup, but I have to think that adding a spinning mirror would have made everything much faster and easier.
He could then capture an entire line quite quickly, and would only need a 1 dimensional janky mirror setup to handle the other axis. And his resolution in the rotating axis is limited only by how quickly he can pulse the laser.
Of course, his janky mirror setup could have been 2 off-the-shelf galvos, but I guess that isn't as much "content".
I think a spinning mirror would make it a lot harder. He's only moving the mirror after the "animation" finishes. So it's capture video, step by 1 pixel, capture video, step by 1 pixel, capture video, etc... He's replaying the scene ~1 million times, for 1 million unique single pixel 2 billion fps videos.
Are you suggesting it would be easier if the mirror spun at 2 billion revolutions per second?
But I think he does capture an entire line quite quickly. As I understood it, he “scans” a line of pixels in seconds.