An M.2 HDMI capture card

(interfacinglinux.com)

132 points | by Venn1 16 hours ago

49 comments

  • viccis 13 hours ago

    >Fortunately, those extra PCIe lanes tend to get repurposed as additional M.2 holes.

    Or unfortunately, for the unlucky people who didn't do their research, so now their extra M.2 drives are sucking up some of their GPU's PCIe bus.

    • Numerlor 12 hours ago

      The vast majority of people run just one GPU, which motherboards have a dedicated direct-to-CPU x16 slot for. Stealing lanes comes into play with chipset-connected slots.

      • zten 11 hours ago

        I bought a Gigabyte X870E board with 3 PCIe slots (PCIe5 16x, PCIe4 4x, PCIe3 4x) and 4 M.2 slots (3x PCIe5, 1x PCIe 4). Three of the M.2 slots are connected to the CPU, and one is connected to the chipset. Using the 2nd and 3rd M.2 CPU-connected slots causes the board to bifurcate the lanes assigned to the GPU's PCIe slot, so you get 8x GPU, 4x M.2, 4x M.2.

        I wish you didn't have to buy a Xeon or Threadripper to get considerably more PCIe lanes, but for most people I suspect this split is acceptable. The gaming penalty for going from 16x to 8x is pretty small.
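
        If you want to confirm what a board actually negotiated, the link status is visible from software. Here's a minimal Linux-only sketch (it reads the same sysfs attributes that lspci -vv reports as LnkSta/LnkCap):

            # Compare each PCI device's negotiated link width against its
            # maximum, to spot a GPU that silently trained down to x8.
            import glob, os

            def read(dev, attr):
                with open(os.path.join(dev, attr)) as f:
                    return f.read().strip()

            for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
                try:
                    cur = read(dev, "current_link_width")
                    mx = read(dev, "max_link_width")
                    speed = read(dev, "current_link_speed")
                except OSError:
                    continue  # device without PCIe link attributes
                note = "  <-- below max width" if int(cur) < int(mx) else ""
                print(f"{os.path.basename(dev)}: x{cur}/x{mx} @ {speed}{note}")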

        • ciupicri 10 hours ago

          For a moment I didn't believe you, then I looked at the X870E AORUS PRO ICE (rev. 1.1) motherboard [1] and found this:

          > 1x PCI Express x16 slot (PCIEX16), integrated in the CPU:

          > AMD Ryzen™ 9000/7000 Series Processors support PCIe 5.0 x16 mode

          > * The M2B_CPU and M2C_CPU connectors share bandwidth with the PCIEX16 slot.

          > When the M2B_CPU or M2C_CPU connector is populated, the PCIEX16 slot operates at up to x8 mode.

          [1]: https://www.gigabyte.com/Motherboard/X870E-AORUS-PRO-ICE-rev...

        • elevation 11 hours ago

          Even with a Threadripper you're at the mercy of the motherboard design.

          I use a ROG board that has 4 PCIe slots. While each can physically seat an x16 card, only one of them has 16 lanes -- the rest are x4. I had to demote my GPU to a slower slot in order to get full throughput from my 100GbE card. All this despite having a CPU with 64 lanes available.

          • nrdvana 6 hours ago

            You're using 100GbE ... in an end-user PC? What would you even saturate that with?

            • aaronmdjones 3 hours ago

              I wouldn't think it's about saturating it during normal use; rather, simply exceeding 40 Gbit/s, which is very possible with solid-state NASes.

        • kimixa 7 hours ago

          Though for the most part, the performance cost of going down to 8x PCIe is often pretty tiny - only a couple of percent at most.

          [0] shows a pretty "worst case" impact of 1-4% - that's on the absolute highest-end card possible (a GeForce RTX 5090) pushed down to 16x PCIe 3.0. A lower-end card would likely show an even smaller difference. They even showed zero impact from 16x PCIe 4.0, which is the same bandwidth as 8x of the PCIe 5.0 lanes supported on X870E boards like you mentioned.

          Though if you're not on a gaming use case and know you're already PCIe limited it could be larger - but people who have that sort of use case likely already know what to look for, and have systems tuned to that use case more than "generic consumer gamer board"

          [0] https://gamersnexus.net/gpus/nvidia-rtx-5090-pcie-50-vs-40-v...
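
          That bandwidth equivalence is easy to sanity-check with back-of-the-envelope arithmetic; a quick sketch (assuming the 128b/130b encoding that PCIe 3.0 and later use):

              # Usable PCIe bandwidth in GB/s: raw GT/s per lane, minus the
              # 128b/130b line-coding overhead, times the lane count.
              GT_PER_LANE = {3: 8.0, 4: 16.0, 5: 32.0}

              def pcie_gb_s(gen: int, lanes: int) -> float:
                  return GT_PER_LANE[gen] * 128 / 130 / 8 * lanes

              print(f"PCIe 4.0 x16: {pcie_gb_s(4, 16):.1f} GB/s")  # ~31.5
              print(f"PCIe 5.0 x8:  {pcie_gb_s(5, 8):.1f} GB/s")   # ~31.5, identical
              print(f"PCIe 3.0 x16: {pcie_gb_s(3, 16):.1f} GB/s")  # ~15.8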

        • vladvasiliu 6 hours ago

          I wonder how this works. I'm typing this on a machine running an i7-6700K, which, according to Intel, only has 16 lanes total.

          It has a 4x SSD and a 16x GPU. Their respective tools report them as using all the lanes, which is clearly impossible if I'm to believe Intel's specs.

          Could this bifurcation be dynamic, and activate those lanes which are required at a given time?

          • toast0 5 hours ago

            For Skylake, Intel ran 16 lanes of pci-e from the CPU, and ran DMI to the chipset, which had pci-e lanes behind it. Depending on the chipset, there would be anywhere from 6 lanes at pci-e 2.0 to 20 lanes at pci-e 3.0. My wild guess is that a board from back then would have put the m.2 behind the chipset, with no cpu-attached ssd for you; that fits with your report of the GPU having all 16 lanes.

            But, if you had the nicer chipsets, Wikipedia says your board could split the 16 cpu lanes into two x8 slots, or one x8 and 2 x4 slots, which would fit. This would usually be dynamic at boot time, not at runtime; the firmware would typically look if anything is in the x4 slots and, if so, set bifurcation; otherwise the x16 gets all the lanes. Some motherboards do have PCI-e switches to use the bandwidth more flexibly, but those got really expensive - I think at the transition to pci-e 4.0, but maybe 3.0?

            • vladvasiliu 3 hours ago

              Indeed. I dug out the manual (MSI H170 Gaming M3), which has a block diagram showing the M.2 port behind the chipset, which is connected via DMI 3 to the CPU. In my mind, the chipset was connected via actual PCIe, but apparently it's counted separately from the "actual" PCIe lanes.

        • dur-randir 4 hours ago

          >I wish you didn't have to buy Xeon

          But that's the whole point of Intel's market segmentation strategy - otherwise their low-tier workstation Xeons would see no market.

      • doubled112 12 hours ago

        The real PITA is when adding the NVMe disables the SATA ports you planned to use.

        • creatonez 9 hours ago

          Doesn't this usually only happen when you put an M.2 SATA drive in? I've never seen a motherboard manual have this caveat for actual NVMe M.2 drives. And encountering an M.2 SATA drive is quite rare.

          • ZekeSulastin 6 hours ago

            I have a spare-parts NAS on a Z170 (Intel 6k/7k) motherboard with 8 SATA ports and 2 NVMe slots - if I put an x2 SSD in the top slot it would disable two ports, and if it was an x4 it would disable four! Luckily the bottom M.2 slot doesn’t conflict with any SATA ports, just an expansion card slot. (The board supports SATA Express even - did anything actually use that?)

            SATA ports are far scarcer these days, though, and there’s more PCIe bandwidth available anyway, so it's not surprising that there aren't conflicts as often anymore.

          • toast0 5 hours ago

            Nope; for AM5, both of the available chipsets[1] have 4 SerDes lanes that can be configured as x4 pci-e 3.0, 4x sata, or two and two. I think Intel does similar, but I haven't really kept up.

            [1] A620 is cut down, but everything else is actually the same chip (or two)

      • viccis 5 hours ago

        As some others have pointed out, there are some motherboards where, if you put an M.2 drive in the wrong slot, your 16x GPU slot turns into 8x.

    • throwaway48476 12 hours ago

      New chipsets have become PCIe switches since Broadcom rug-pulled the PCIe switch market.

      • crote 9 hours ago

        I wish someone bothered with modern bifurcation and/or generation downgrading switches.

        For homelab purposes I'd rather have two Gen3 x8 slots than one Gen5 x4 slot, as that'd allow me to use a (now ancient) 25G NIC and an HBA. Similarly, I'd rather have four Gen5 x1 slots than one Gen5 x4 slot: Gen5 NVMe SSDs are readily available, even a single Gen5 lane is enough to saturate a 25G network connection, and it'd allow me to attach four SSDs instead of only one.

        The consumer platforms have more than enough IO bandwidth for some rather interesting home server stuff, it just isn't allocated in a useful way.

      • gruez 12 hours ago

        >broadcom rug pulled the PCIe switch market.

        What does this mean? Did they jack up prices?

        • nyrikki 11 hours ago

          Avago wanted PLX switches for enterprise storage, not low-margin PC/server sales.

          Same thing Avago did with Broadcom, LSI, Brocade, etc. during the 2010s: buy a market leader, dump the parts they didn't want, and leave a huge hole in the market.

          When you realize that Avago was the brand created when KKR and Silver Lake bought the chip business from Agilent, it's just the typical private-equity play: buy your market position and sell off or shut down the parts you don't care about.

          • bbarnett 2 hours ago

            When the LSI buy happened, they dumped massive amounts of new but prior-gen stock into the distributor channel, then immediately declared it EOL.

            Scumbags.

  • 0cf8612b2e1e 13 hours ago

    If I wanted to capture something with HDCP, what’s the most straightforward path to stripping it away?

    • kodt 12 hours ago

      HDFury has multiple devices that can do it, but they are fairly expensive. Many of the cheap HDMI 1x2 splitters on Amazon also strip HDCP on the secondary output. You can check reviews for hints.

    • baby_souffle 13 hours ago

      There are various splitters and mixers that have the necessary EDID/HDCP emulator functions.

      I don't know if anybody has managed to figure out how to defeat HDCP higher than 1.4, though.

      • mistersquid 13 hours ago

        > I don't know if anybody has managed to figure out how to defeat HDCP higher than 1.4, though.

        This works for me: https://www.amazon.com/dp/B08T64JWWT

        • userbinator 9 hours ago

          As with the vast majority of products on Amazon, you could probably find the same on Aliexpress/baba for less.

          • Aurornis 8 hours ago

            If you're in the United States you can expect hefty brokerage fees and tariff charges for anything arriving internationally starting on May 2nd.

            If the Amazon listing ships from the United States it's a better choice now.

        • mschuster91 12 hours ago

          Aside from the high number of 1-star reviews complaining about the gadget dying fast - how in god's name is this thing still selling assuming it can actually strip HDCP for modern HDMI standards?

          I'd have expected HDMI LA to be very very strict in enforcing actions against HDCP strippers. If not, why even keep up the game? Pirates can already defeat virtually all copy protection mechanisms on the market, even before HDCP ever enters the field.

          • mistersquid 12 hours ago

            > Aside from the high number of 1-star reviews complaining about the gadget dying fast - how in god's name is this thing still selling assuming it can actually strip HDCP for modern HDMI standards?

            How is 1 review a "high number of 1-star reviews"?

            There are a total of 32 reviews for this device, 2 of which are 1-star reviews. Only one of those warns "Stopped working in 5 minutes". The other 1-star review notes (in translation) "When I tried this device, I got another very bad device at a lower price".

            I'm not sure what your expectation that "HDMI LA to be very very strict in enforcing actions against HDCP strippers" means in this context. Indeed, your second paragraph seems to be an expression of consternation that manufacturers would go through the trouble of implementing HDCP given how easily it can be circumvented.

            • mschuster91 11 hours ago

              > I'm not sure what your expectation that "HDMI LA to be very very strict in enforcing actions against HDCP strippers" means in this context.

              It used to be the case that HDMI LA would act very swiftly on any keybox leaks and revoke the certificates, as well as pursue legal action against sellers of HDCP strippers. These devices were sold by fly-by-night eBay and darknet sellers, not right on the storefront of Amazon.

              > Indeed, your second paragraph seems to be an expression of consternation that manufacturers would go through the trouble of implementing HDCP given how easily it can be circumvented.

              Manufacturers do because HDCP is a requirement to even be allowed to use the HDMI trademark, in contrast to DisplayPort. I was referring to HDMI LA and the goons of the movie rightsholder industry that insist on continuing this pointless arms race.

          • Havoc 11 hours ago

            Not even the finest lawyers can keep up with fly-by-night marketplace suppliers with company names that are just random letters.

  • Venn1 11 hours ago

    The website is wheezing a bit. Here's a link to the video https://www.youtube.com/watch?v=xNebV8KIlZQ

  • amelius 12 hours ago

    Looking for a way to show an image over HDMI while my embedded system is booting, and then (seamlessly) switch over to the HDMI output of that system when booting finishes. Any ideas on how to accomplish that? In hardware, of course.

    • myself248 11 hours ago

      Seems to me like the answer is to get the splashscreen going earlier in your boot process. If you know the display geometry, I suspect you can skip the DDC read and just hardcode the mode and stuff, which should save even more time.

      • amelius an hour ago

        I asked for a hardware solution. On some embedded systems there is an entire hierarchy of proprietary crapware preventing access to the display controller, and I just don't want to look into it: chances are it's futile anyway, they might change it next year, or I'll move to a different platform and have to reinvent this again and again. Hence hardware.

    • actionfromafar 12 hours ago

      There are a bunch of HDMI switches with buttons on them, and some with remotes. Doesn't seem too outlandish to rig these buttons or remotes to be controlled by the computer itself.

      • amelius 11 hours ago

        Yeah, I've looked into them, but I still need to generate the second image, and these switches typically don't provide a seamless transition, so it's not optimal.

  • gitroom 5 hours ago

    absolutely wild seeing all the ways people stretch these boards tbh - kinda makes me wanna mess with my own setup more

  • puzzlingcaptcha 5 hours ago

    >PCIe slots are becoming an endangered species on modern motherboards

    Except... not at all? Just about any ATX-sized motherboard is going to have a full-sized x4 slot and a small x1 slot _in addition_ to the x16 one.

    And with decent audio and a 2.5Gbps Ethernet PHY on board, even those slots often sit unused.

    I mean, want to test goofy hardware - go for it, no need to invent a justification.

    • franga2000 3 hours ago

      Except most motherboards I see in the wild are mATX. They're more common and cheaper in stores, and basically every pre-built comes with mATX.

    • crote 3 hours ago

      I'd say that proves the point, doesn't it?

      Those Gen3 x1 slots are limited to 7.88Gbps, so they're pretty pointless for anything beyond a sound card, and you only get a single Gen4 x4 slot. All the other x4 connections are taken up by M.2!

      Want to add a capture card and an HBA? Not enough slots. Want to add a capture card and a 10G/25G NIC? Not enough slots. Want to add a capture card and a USB-C expansion card? Not enough slots.
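
      The 7.88Gbps figure falls straight out of the PCIe line coding, and the same arithmetic shows why a single Gen5 lane is enough for a 25G NIC; a quick sketch (assuming 128b/130b encoding, used by Gen3 and later):

          # Usable per-lane bandwidth: raw transfer rate minus the
          # 128b/130b line-coding overhead.
          def lane_gbps(gen: int) -> float:
              raw = {3: 8.0, 4: 16.0, 5: 32.0}[gen]  # GT/s per lane
              return raw * 128 / 130

          print(f"Gen3 x1: {lane_gbps(3):.2f} Gbps")  # 7.88 - sound-card territory
          print(f"Gen5 x1: {lane_gbps(5):.2f} Gbps")  # 31.51 - comfortably above 25GbE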

  • peterburkimsher 11 hours ago

    Looking for a way to freeze an HDMI feed so that the current image (a PPT slide) stays up on the projector/TV while edits are made. Any suggestions welcome.

  • adolph 12 hours ago

    Nice to see another use for those lanes exposed with M.2. M.2 to OcuLink to a standard PCIe slot/carrier still seems more flexible tho.

    example: https://community.frame.work/t/oculink-egpu-works-with-the-d...

    • tehlike 10 hours ago

      Coral boards also use M.2