Does our “need for speed” make our wi-fi suck?

(orb.net)

257 points | by jamies a day ago

298 comments

  • PaulHoule a day ago

    I did some experimentation with UniFi hubs and came to the conclusion that if you can give each device its own WiFi channel, that would be ideal -- contention is that bad, and often an uncontended channel with otherwise poor characteristics will beat a contended channel that otherwise looks good.
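
    For anyone who wants to eyeball that contention before shuffling channels around, here is a rough sketch of counting visible APs per 2.4GHz channel. It assumes a Linux machine with the `iw` tool and a wireless interface named wlan0 (both assumptions; scanning usually needs root), and AP count is only a crude proxy for real airtime contention -- a UniFi controller's own RF scan gives you the same idea with more detail.

        import subprocess
        from collections import Counter

        def freq_to_channel(freq_mhz):
            # Map a 2.4GHz center frequency to its channel number.
            if freq_mhz == 2484:            # channel 14 (Japan only)
                return 14
            if 2412 <= freq_mhz <= 2472:    # channels 1..13
                return (freq_mhz - 2407) // 5
            return None                     # ignore 5/6GHz in this sketch

        def channel_census(interface="wlan0"):
            out = subprocess.run(["iw", "dev", interface, "scan"],
                                 capture_output=True, text=True, check=True).stdout
            counts = Counter()
            for line in out.splitlines():
                line = line.strip()
                if line.startswith("freq:"):
                    ch = freq_to_channel(int(float(line.split()[1])))
                    if ch is not None:
                        counts[ch] += 1
            return counts

        census = channel_census()
        for ch in (1, 6, 11):               # the non-overlapping 2.4GHz channels
            print(f"channel {ch:2d}: {census.get(ch, 0)} visible APs")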

    The other bit of advice that is buried in there that no-one wants to hear for residences is the best way to speed up your Wi-Fi is to not use it. You might think it's convenient to have your TV connect to Netflix via WiFi and it is, but it is going to make everything else that really needs the Wi-Fi slower. It's a much better answer to hook up everything on Ethernet that you possibly can than it is to follow the more traveled route of more channels and more congestion with mesh Wi-Fi.

    • JoshTriplett a day ago

      > The other bit of advice that is buried in there that no-one wants to hear for residences is the best way to speed up your Wi-Fi is to not use it. You might think it's convenient to have your TV connect to Netflix via WiFi and it is, but it is going to make everything else that really needs the Wi-Fi slower. It's a much better answer to hook up everything on Ethernet that you possibly can than it is to follow the more traveled route of more channels and more congestion with mesh Wi-Fi.

      Absolutely. Everything other than cell phones and laptops-not-at-a-desk should be on Ethernet.

      I had wires run in 2020 when I started doing even more video calls. Huge improvement in usability.

      • marssaxman a day ago

        The house I live in was built with ethernet, but of the fourteen outlets the builders saw fit to include, not one is located where we can make use of it. The two devices in our house which use a wired connection are both plugged directly into the switch in our utility closet.

        (We do have one internet-connected device which permanently lives about an inch away from one of the ethernet sockets, but it is, ironically, a wifi-only device with no RJ45 port.)

        • ashdksnndck a day ago

          Some friends live in a rental that they’ve decorated well. It wasn’t until multiple visits that I realized they had run Ethernet throughout the house.

          You can get skinny Ethernet cables that bend easily. If you get some that match your paint, and route them in straight lines, those can be unobtrusive. Use tricks like running the cables along baseboards and other trim pieces. If you really want to minimize the visual impact you can use cable runners and paint over them. The cables are not attention-grabbing compared to furniture or art on the wall.

          If you’re willing to drill holes (if you terminate the cable yourself, the hole can be narrow), you can pass the cables through walls. If you don’t want to drill, you can go under a door.

          If you’ve got fourteen outlets, it seems like there ought to be some solution to get cables everywhere you need.

          • econ 6 hours ago

            I used to wire houses. (Here all wires go in tubes.) The absurdity of not adding a few empty tubes for later use endlessly amazed me.

            I think I've done only one house where the owner wanted to be able to put speakers in every corner of every room on every floor with multiple possible locations for his stereo.

            Then he wanted multiple cable tv connections per room, multiple sockets for landlines, Ethernet everywhere.

            The speaker tube was left empty and a few short distance sockets didn't have wires in them.

            It seemed excessive even to me, but it isn't actually a lot of work to run 5 tubes instead of 1. You might add 1-2% to the renovation bill. Even less for a new house.

            The end result was wonderful. He could do his chores with music all over the house. Move his TV, sofa bed, or desk wherever he wanted.

            Doing this after the house is finished is more expensive, it takes a lot more work and the result is inferior.

            I think nowadays we should have a USB socket next to each power outlet that provides both internet and extra fast charging. In reality I've never even seen such a socket.

            With a few small updates Android could switch off wifi and mobile networking and seamlessly switch to calling over <s>wifi</s> wired internet when you plug in the charging cable.

            Who knows, maybe the mobile phone could even be a first class citizen in the landline network.

            • toast0 4 hours ago

              > I think nowadays we should have a USB socket next to each power outlet that provides both internet and extra fast charging. In reality I've never even seen such a socket.

              I've seen power outlets with embedded USB power adapters. I think I've seen USB ethernet adapters with embedded USB power for things like Chromecasts and similar. But not both smooshed into the same outlet. It might be problematic because nobody wants to mix low voltage and high voltage together in the wall. But it's technically feasible.

              > With a few small updates Android could switch off wifi and mobile networking and seamlessly switch to calling over <s>wifi</s> wired internet when you plug in the charging cable.

              I'm not sure you need updates. I think if the adapter exposes itself as USB CDC-Ethernet that would likely work out of the box, and there may be drivers for specific USB NICs available as well; I haven't checked, but this is a thing that is used by Chromecast devices and AndroidTV devices, so it should also work on Android. Seamlessness is maybe up in the air, but if it's seamless from wifi to cellular, it should be better going from wired to something else, because wired has an unambiguous and timely disconnect signal.

              > Who knows, maybe the mobile phone could even be a first class citizen in the landline network.

              IMHO there's less value here; the landline network has degraded and there aren't really any first-class citizens anymore. Few people retain landlines, and those that remain tend to be ATAs in the home; if you care to use that with an Android, there are likely better options than interfacing with the analog side.

          • jval43 15 hours ago

            I did this years ago using the very thin (3mm, round) Unifi Patch Cables in white. Very clean and reliable, and getting 1 Gbit/s without issue.

            Another benefit is that I can cram 4 of them inside a single cable runner at the one spot where I have to (no space for a switch). Where it's just one cable, you run it bare and it looks very clean.

            The old ones I have are still CAT5e, the newer ones they sell are CAT6 at the same thinness. All unshielded (UTP).

            10/10 would buy again.

            • OJFord 8 hours ago

              Patch cables are meant for connections between equipment, e.g. in a networking cabinet. Cable for the runs between terminal points (like the cabinet's patch panel and a workstation, or your home TV, etc.) is less flexible - more shielded and I think solid core instead of multi-stranded (like twin & Earth vs. flex) - I'm not sure if it's available flat or skinny though.

              • jval43 7 hours ago

                I know and agree. My point is in a home setting at 1 Gigabit you don't really need it. Obviously YMMV.

            • ElijahLynn 8 hours ago

              Thanks

              Here is a link for others who want to know how thin:

              https://store.ui.com/us/en/category/accessories-cables-dacs/...

        • godelski a day ago

          Rental or do you own?

          If you own, you should replace and/or move them. Might sound scary if you've never done this before but it is much easier than you'd think. If you want to make your future life easier I suggest running a PVC pipe (at minimum in the drop down portion). Replacing or adding new cabling will be much easier if you do this so it's totally worth the few extra bucks and few extra minutes of work. They'll also be less likely to be accidentally damaged (stepping on them, rodents, water damage, etc). I seriously cannot understand why this is not more common practice (leave the pull string in). You might save a few bucks but you sacrifice a lot more than you're saving... (chasing pennies with pounds)

          If rental, you could put in an extender. If you're less concerned about aesthetics you can pop the wall plate off and directly tie into the existing cable OR run a new one in parallel. If you're willing to donate the replacement wire and don't have access to the attic but do have access to both ends of the existing cable, then you can use one to pull the other through. You could coil the excess wire behind the plate when you reinstall it. But that definitely runs the risk of losing the cable since it might be navigating through a hard corner. If you go that route I'd suggest just asking your landlord. They'd probably be chill about it and might even pay for it.

          • silversmith 18 hours ago

            There are times I do envy people living in stick houses with hollow walls.

            • nine_k 17 hours ago

              I live in a brick house where only half of the walls are hollow. Bringing Ethernet wires to a few critical areas and putting small surface-mount RJ-45 sockets was not that hard.

              Of course, some thin raceways can be seen somewhere along the baseboard. It does not look terrible, and is barely noticeable.

              • lostlogin 17 hours ago

                Fibre is good for getting to hard-to-reach places.

                But the slope is slippery. If you’re doing fibre, you might as well do 10gbe.

              • tguvot 2 hours ago

                there are baseboards with built-in raceways. installing some right now

            • toast0 18 hours ago

              Stick houses with hollow walls are cheaper to build (assuming cheap wood) and cheaper to work on. Probably cheaper to maintain too, but not as durable, so it might work out... Otoh, durable isn't great when housing trends have moved on.

              • jandrewrogers 16 hours ago

                Much more durable in an earthquake though, which is important in places like the US where half the country is a serious seismic hazard zone. In many locales only wood or steel framing is allowed because historically stone and concrete construction collapsed due to the strength of the earthquakes.

              • lostlogin 17 hours ago

                > not as durable

                You clearly don't live in an earthquake-prone area.

                I do. But given how cheapskate New Zealand is, I’m 100% sure that we would build in stone and brick if it was cheaper.

                • toast0 4 hours ago

                  I do live on the west coast of the US. Unreinforced masonry doesn't do well in earthquakes, but reinforced masonry or concrete is probably more durable. I've got 25 year old wood siding, and it might make it to 30, but there's no way it'll be in reasonable shape at 40. It probably won't be too expensive to replace though.

                  • godelski 3 hours ago

                    Probably another great example of chasing pennies with pounds. {re,green,pink}bar is really cheap. Yes, it's more expensive, but only 10-20% more. It's an upfront cost that saves you tons of damage, which costs money too! Even more when you put off repair.

                    It's incredible how people do not understand boot theory... which seems to be something most people know but don't employ in practice

                    https://en.wikipedia.org/wiki/Boots_theory

                  • applied_heat 2 hours ago

                    My wood siding is original cedar that has been painted several times since 1970s when house was built … I haven’t considered it not lasting indefinitely

            • hvb2 18 hours ago

              Until it gets cold outside and you need to heat them, or cool them, obviously

            • godelski 4 hours ago

              Install a veneer

            • hopelite 11 hours ago

              Molding is your friend to create and hide channels, and it will make your place look more sophisticated than just the cube cave it is, my cave man friend.

          • quickthrowman 7 hours ago

            > If you want to make your future life easier I suggest running a PVC pipe (at minimum in the drop down portion). Replacing or adding new cabling will be much easier if you do this so it's totally worth the few extra bucks and few extra minutes of work.

            I’m trying to understand how removing an entire sheet of gypsum (or cutting a 6” by 8’ channel) and installing an empty PVC raceway is ‘a few extra minutes of work’. Installing the PVC might be, but you’re looking at hours of work over multiple days to replace the drywall and refinish the wall.

            Raceways are unnecessary in stick built houses if you have a fish stick and fish tape. If you’re building a new house, then sure, install 1” EMT as raceway for Cat6A before putting up the drywall.

            • godelski 3 hours ago

                > I’m trying to understand how removing an entire sheet of gypsum
              
              This is a fixed cost, required whether you install the conduit or not: you have to cut the wall to make the port either way. Once you have the port you can just use a slightly longer conduit, brace it where you can reach, and, oh no, you need an extra 2" of cable?

                > Raceways are unnecessary in stick built houses
               
              Your mental model is too naïve. Have you done this before? Have you then replaced it or added additional lines?

              The conduit makes all that easier, and provides the additional protection that I discussed. By having a conduit you're far less likely to get snagged on something while fishing the lines. You can avoid hard corners that strip your cables while pulling on them. It's also a million times easier to see while you're chasing those cables. Sure, your house is framed with wood, but you still have insulation, and who likes icy hands?

              Really, think about it. What is the cost now compared to the future?

              Is an additional 10%, or let's even say a crazy 50%, additional work now really that costly when you have to do the whole thing again in the future? And multiple times? It's a no-brainer lol. Definition of chasing pennies with pounds. Just be nice to your future self. Be lazy long term, not lazy short term, because lazy short term requires more work

        • ahartmetz a day ago

          You can lay your own cables, either to the next wall socket or directly to a switch. Flat ethernet cables can be very helpful for hiding and for crossing doorways. Generous "unnecessary" wire length helps with keeping them out of sight.

          • matwood 15 hours ago

            Just want to second your suggestion about flat cables. They are great for situations like this.

            I'm in an old stone house and currently have flat cables snaked around until I can piggyback on the workers putting in conduit for other things.

        • baby_souffle a day ago

          > The house I live in was built with ethernet, but of the fourteen outlets the builders saw fit to include, not one is located where we can make use of it.

          I had a similar situation a few years back. It was a rental so I didn't have access to the attic let alone permission to do my own drops. It'll depend a _lot_ on your exact setup, but we had reasonably good results with some ethernet-over-power adapters.

          • Denatonium 20 hours ago

            Ethernet over powerline adapters is a very YMMV situation. Occasionally it works great for people, but more often than not the performance is poor and/or unreliable, especially in countries with split-phase 120/240 volt power (where good performance relies on choosing outlets with hots on the same side of the center-tapped neutral). The people who most commonly share success stories with powerline Ethernet are residents of the UK, where houses only have 2 wires coming in from the pole and there's often a ring main system where an entire floor of a house will be on one circuit.

            A better solution is repurposing unused 75Ω coaxial cable with MoCA 2.5 adapters, which will actually give you 1+ Gbps symmetrical. The latency is a very consistent 3-4ms, which is negligible. I use Screenbeam (formerly Actiontec) ECB6250 adapters, though they now make a newer model, the ECB7250, which is identical to the ECB6250 except with 2.5GBASE-T ports instead of 1000BASE-T.

            • topspin 16 hours ago

              > A better solution is repurposing unused 75Ω coaxial cable with MoCA 2.5 adapters

              I'll second this. MoCA works. You can get MoCA adapters off Ebay or whatnot for cheap: look for Frontier branded FCA252. ~90 MBps with a 1000BASE-T switch in the loop. I see ~3 ms of added latency. I've made point-to-point links exclusively, as opposed to using splitters and putting >2 MoCA adapters on shared medium, but that is supported as well.

            • kbouck 16 hours ago

              That was my experience too. Powerline ethernet adapters were unbearable on a daily basis.

              We had an unused coax (which we disconnected from the outside world) and used MoCA adapters (Actiontec), and it's been consistently great/stable. No issues ever... for years.

          • jojomodding a day ago

            We have them at home as well and they really suck. They lose connection every 20ish minutes at best, and take about 5 to reconnect. Makes Zoom meetings impossible, among other things.

            • mikepurvis 21 hours ago

              I used those during covid to get a reliable connection for video calls and it was a huge step up over wifi. The bandwidth was like 1/10th of actual gige, so I got a wire pulled to my office when I went to fibre but there’s no question in my mind that decent powerline adaptors are the winner for connection stability.

            • virtue3 a day ago

              I’ve used Ethernet over coax in my current apartment.

              It’s worked well!

              You do need to be a bit careful as coax signal can be shared with neighbors and others sometimes.

              • variaga 20 hours ago

                You can isolate your ethernet over coax from your neighbor with a MoCA POE "point of entry" filter which blocks the frequencies used by MoCA.

                You can buy them online for around $10 and they install without tools.

                Besides neighbors, you may also need a POE filter if you have certain types of cable modem.

                • tguvot 2 hours ago

                  cable companies require poe filters. if they find that there is some "noise" leaking from your house, they may put a big filter of their own outside, which can degrade the speed of your modem

            • dwood_dev a day ago

              For PoE you want two networks for the best performance. One for each phase of your mains.

              In general they do suck, but they can be pretty decent if you stick them all on one phase, even better if all on the same breaker.

              • BenjiWiebe 21 hours ago

                Powerline Ethernet != PoE (power over Ethernet)

                • dwood_dev 20 hours ago

                  Yes, no idea what I was thinking when I typed that. I've used both extensively, in fact this message was sent over a PoE enabled WiFi AP.

        • 1718627440 3 hours ago

          You know about Ethernet over power lines right?

      • ghaff 9 hours ago

        I was mostly wired throughout the house. But with the smoke mitigation after a kitchen fire, pretty much all the hard wiring for both audio and Ethernet is gone or hopelessly messed up. There's no way I'll spend the time and effort to redo everything at this point.

      • b3lvedere 11 hours ago

        When i bought my house i was very pleasantly surprised the previous owners had installed pvc pipes from corner to corner (so at least three connections per corner) when they installed floor heating. It made installing ethernet and speaker cables everywhere i needed so much easier. Should i ever require more than 1Gbit i could easily replace it with fast fiber cables.

      • CoolGuySteve a day ago

        It depends on your wiring but I've had pretty good success with AV2000 powerline ethernet. I get about 400Mbps and a reliable 2ms ping which is good enough for gaming and streaming from my media center.

        The endpoint in my living room also has a wifi AP so signal is pretty good for laptops and whatnot.

        In NYC every channel is congested, I can see like 25 access points at any time and half are poorly configured. Any wired medium is better than the air, I could probably propagate a signal through the drywall that's more reliable than wifi here.

        So having something I can just plug into the wall is pretty nice compared to running cables even if it's a fraction of gigE standards.

      • fragmede a day ago

        And the big one I want to point out, is that this AI stuff has me downloading so many ten gigabyte model files to run them locally that I'm really feeling the lack of speed that my setup has.

      • kindacuriouzzx 16 hours ago

        Does this advice still hold true for Internet that is provided through power sockets in the house?

        • hylaride 8 hours ago

          If you live in a dense area with lots of APs and regularly get performance issues, power line networking will provide excellent ~400Mbps connections that are more than adequate for things like video calls unless your power cables are ancient or under-spec'd (some older houses can sometimes have lower gauge cables that may not perform as well and I imagine some knob and tube setups are not ideal for data transfer, either).

          If you have newer clients that support it, Wifi 6E/7/802.11ax (or whatever it's called) uses the 6GHz spectrum that isn't as heavily used (yet). I've had good success with it in my multi-unit apartment condo (feels as clean as 5GHz did ~2010). Some higher end APs can also use multi-antenna beams that can help, too.

      • typpilol a day ago

        The reality is that most people only have a single cord coming into the house

        So they would have to do quite a bit of work to run cable. Also, people living in apartments can't just start drilling through walls.

        I'd say most ppl use wifi because they have to, not pure convenience

        • jdeibele a day ago

          We downsized from a house built in 1914 with phone jacks everywhere to a house built in 2007 with coax and ethernet ports in every room, some rooms with two.

          At the 1914 house, I used ethernet-over-powerline adapters so I could have a second router running in access point mode. The alternative was punching holes in the outside walls since there was no way to feasibly run cabling inside lath-and-plaster walls.

          I don't know how 2025 houses are built but I would be surprised if they didn't have an ethernet jack in every room to a wiring closet of some sort. Not sure about coax.

          My son has ethernet in his dorm with an ethernet switch so he can connect his video game consoles and TV. I think that's pretty common.

          • runjake a day ago

            > I don't know how 2025 houses are built but I would be surprised if they didn't have an ethernet jack in every room to a wiring closet of some sort. Not sure about coax.

            Speaking from a US standpoint, it's still not common for ethernet to be deployed in new construction. I'm not sure why. It seems like a no-brainer.

            Coax is still usually limited to a couple of jacks -- usually in the living room and master bedrooms.

            • sidewndr46 a day ago

              Adding cat5e or cat6 to each room is just a cost. Builders generally compete on cost.

              • Retric a day ago

                It’s a cost that doesn’t show up on listings. There’s a surprising number of ways new US construction sucks that just comes down to how it can be advertised.

            • creato 20 hours ago

              Most people think they can just use WiFi, and most of them are probably right.

            • tguvot 2 hours ago

              i live in a 2003-built house in usa. i have 2 x cat5e and 2 x coax (they are bundled together) coming to an outlet in every room. everything goes to an (un)structured media enclosure.

          • ssl-3 18 hours ago

            > I don't know how 2025 houses are built but I would be surprised if they didn't have an ethernet jack in every room to a wiring closet of some sort. Not sure about coax.

            Aye.

            Cat5/6/whatever-ish cabling has been both the present and the future for something on the order of 25 years now. It's as much of a no-brainer to build network wiring into a home today as it once was to build telephone and TV wiring into a home. Networking should be part of all new home builds.

            And yet: Here in 2025, I'm presently working on a new custom home, wherein we're installing some vaguely-elaborate audio-visual stuff. The company in charge of the LAN/WAN end of things had intended to have the ISP bring fiber WAN into a utility area of the basement (yay fiber!), and put a singular Eeros router/mesh node there, and have that be that.

            The rest of the house? More mesh nodes, just wirelessly connected to each other. No other installed network wires at all -- in a nicely-finished and fairly opulent house that is owned by a very successful local doctor.

            They didn't even understand why we were planning to cable up the televisions and other AV gear that would otherwise be scooping up finite wireless bandwidth from their fixed, hard-mounted locations.

            In terms of surprise: Nothing surprises me now.

            (In terms of cost: We wound up volunteering to run wiring for the mesh nodes. It will cost us ~nothing on the scale that we're operating at, and we're already installing cabling... and not doing it this way just seems so profoundly dumb.)

            • toast0 4 hours ago

              Sheesh. I would expect a high end house to have ceiling mount ethernet jacks for fancy APs in most rooms. At least family room(s) and bedrooms. Very much not worth it to retrofit later in a multistory building, but would be super handy.

              • ssl-3 3 hours ago

                Yeah, that first meeting with the other contractors was like walking into bizarro-world.

                They (the homeowner) were getting dedicated custom-built single-purpose wall-mounted shelving for each of these Eeros devices, along with dedicated 120V outlets for each of them to provide power.

                Now they're still getting that, plus the Ethernet jack that I will be installing on the wall at these locations because that's the extent to which I am empowered to inject sanity.

                (Maybe someone down the road will look at it and go "Yeah, that just needs to be a wall-mounted access point with PoE," and remove even more stupid from the things.

                Or... not: People are unpredictable and it seems like many home buyers' first task is to rip out and erase as much current-millennium technology as possible, reducing the home to bare walls under a roof, with a kitchen, a shitter, and some light switches and HVAC.)

          • dpb001 10 hours ago

            We just moved from a 70's-era house where I spent some time with a fish tape running cable to a 2025 three story townhouse (drywall already finished when we purchased).

            For some reason the cable service entry is on the third floor in the laundry room. Ethernet and the TV signal cable runs from there to exactly one place, where the TV is expected to be mounted. Nothing in the nice office area on the other side of the wall.

            My guess is that the thinking these days is that everyone's on laptops with wifi and hardwired network connections are only of interest for video streaming. Probably right for 99% of purchasers.

          • Analemma_ a day ago

            Powerline Ethernet is a coin toss though. Depending on how many or few shits the last electrician to work on your house gave, it could be great or unusable. Especially if you're in a shared space like an apartment/condo: in theory units are supposed to be sufficiently electrically isolated from each other that powerline is possible; in practice, not so much. I've been in apartments where I plugged in my powerline gear and literally nothing happened: no frames, nothing.

            • superkuh 9 hours ago

              Powerline Ethernet is directly equivalent to littering in the park. By using it you are littering and being a jerk, even if you don't realize it. The FCC only tests such setups in very limited, contrived ways. When it comes to actual house wiring, the copper is never impedance controlled and constantly approaches and leaves large metal objects, etc., so it is always radiating radio waves. And powerline ethernet uses HF (<30MHz) frequencies, so those radio waves travel around the entire earth, ruining a shared medium. Just like littering in a public park is ruining a shared medium.

        • GeorgeTirebiter a day ago

          MU-MIMO would help. The real problem is that energy between a unit and an AP is not in a pencil-thin RF laser beam -- it is spread out. Other nodes hear that energy and back off. If we had better control of point-to-point links, then you could have plenty of bandwidth. It's not as if the photon field cannot hold them all. When we broadcast in all directions, we waste energy and cause unnecessary interference to other receivers.

          • sidewndr46 a day ago

            It was quite a while back, but I read some press release about a manufacturer that was going to make an access point with mechanically steered directional antennas. Unfortunately I don't think it ever made it to market.

            • ssl-3 18 hours ago

              That can help in one direction, but networks are bi-directional.

              No matter how fancy and directive the antenna arrangement may be at the access point end, the other devices that use this access point will be using whatever they have for antennas.

              The access point may be able to produce and/or receive one or many signals with arbitrarily-aimed, laser-like precision, but the client devices will still tend to radiate mostly omnidirectionally -- to the access point, to each other, and to the rest of the world around them.

              The client devices will still hear each other just fine and will back off when another one nearby is transmitting. The access point cannot help with this, no matter how fanciful it may be.

              (Waiting for a clear-enough channel before transmitting is part of the 802.11 specification. That's the Carrier Sense part of CSMA/CA.)
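
              As an illustration of that carrier-sense behavior, here is a toy, slotted-time model of two clients that can hear each other contending for one channel. It's a simplified sketch, not a faithful 802.11 DCF implementation, and the slot/backoff numbers are made up; the point is only that one client defers whenever it hears the other transmitting, no matter how directive the AP's antennas are.

                  import random

                  SLOTS = 10_000   # time slots to simulate
                  TX_LEN = 20      # slots one frame occupies on air
                  CW = 15          # backoff drawn uniformly from 0..CW

                  def simulate(seed=1):
                      random.seed(seed)
                      busy_until = 0
                      backoff = {"A": random.randint(0, CW), "B": random.randint(0, CW)}
                      airtime = {"A": 0, "B": 0}
                      for t in range(SLOTS):
                          if t < busy_until:
                              continue          # channel sensed busy: backoff is frozen
                          for sta in backoff:   # idle slot: both count down together
                              backoff[sta] -= 1
                          ready = [s for s, b in backoff.items() if b < 0]
                          if not ready:
                              continue
                          winner = random.choice(ready)  # simultaneous expiry would collide; simplified
                          airtime[winner] += TX_LEN
                          busy_until = t + TX_LEN        # the other station defers for this long
                          for sta in backoff:            # everyone redraws a fresh backoff
                              backoff[sta] = random.randint(0, CW)
                      for sta, used in airtime.items():
                          print(f"station {sta}: {used / SLOTS:.1%} of airtime")

                  simulate()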

        • wlesieutre a day ago

          MoCA adapters are an option if you’re already wired for coax

          • ellisv a day ago

            MoCA is how I get Ethernet upstairs. Works great.

        • eikenberry a day ago

          Ethernet cables can be as long as 100 meters, long enough to snake around most any apartment. Add a few rugs to cover the spots where they'd be tripping hazards and you're all set.

          • rtpg a day ago

            the one sort of asterisk I'd put there is that ethernet cable damage is a real risk. Lots of stories of people just replacing cables they have used for a while and seeing improvements.

            But if you can pull it off (or even better, move your router closest to the most annoying thing and work from there!), excellent

          • passivegains a day ago

            I got good results from running cables around the entire perimeter of a room to avoid crossing doorways. Doesn't work so well on bathrooms though.

            • ssl-3 17 hours ago

              Oh, bathrooms are [sometimes] easy.

              In an apartment I once had, I ran some cat5-ish cable through the back wall of one closet and into another.

              In between those closets was a bathroom, with a bathtub.

              I fished the cable through the void of the bathtub's internals.

              Spanning a space like this is not too hard to do with a tape measure, some cheap fiberglass rods, a metal coat hanger, and an apt helper.

              Or these days, a person can replace the helper by plugging a $20 endoscope camera into their pocket supercomputer. They usually come with a hook that can be attached, or different hooks can be fashioned and taped on. It takes patience, but it can go pretty quickly. In my experience, most of the time is spent just trying to wrap one's brain around working in 3 dimensions while seeing through a 2-dimensional endoscope camera that doesn't know which way is up, which is a bit of a mindfuck at first.

              Anyway, just use the camera to grab the rod or the ball of string pushed in with the rod or whatever. Worst-case: If a single tiny thread can make it from A to B, then that thread can pull in a somewhat-larger string, and that string can finally pull in a cable.

              (Situations vary, but I never heard a word about these little holes in the closets that I left behind when I moved out, just as I also didn't hear anything about any of the other little holes I'd left from things like hanging up artwork or office garb.)

            • sokoloff a day ago

              I’m pretty tech-addicted, but I’ve never felt the need for a hard-wired drop in the bathroom.

              • xethos a day ago

                I assumed it meant that, to get from one side of a doorframe to the other, instead of crossing underneath the door, you go around the perimeter of the room the door is for. Which seems like a lot to remove a trip hazard, but I suspect the Wife Approval Factor plays a role

        • numpad0 10 hours ago

          Everyone gets one cord coming into the house and into the "master" router. You then branch it out to the things you own through switches. The suggestion isn't to pay for a separate internet connection for each of your devices.

        • ipython a day ago

          Well, unless you’re multihomed, you’ll always only have one cable coming in.

          It’s what you do with that cable that matters :)

          Even the telco provided router/ap combo units usually have a built in switch, so you don’t even need another device in most cases.

        • StillBored a day ago

          A lot has changed in the 25 years since gigabit wired ethernet was rolled out, even as wired ethernet itself stagnated due to greed.

          Got powerlines? Well then you can get gbit+ to a few outlets in your house.

          Got old CATV cables? Then you can use them at multiple gbit with MoCA.

          Got old phone lines? Then its possible to run ethernet over them with SPE and maybe get a gbit.

          And frankly, just calling someone who wires houses and getting a quote will tell you if it's true. The vast majority of houses aren't that hard, even old ones. Attic drops through the walls, cables below in the crawlspace, behind the baseboards. Hell, just about every house in the USA had cable/dish at one point, and all they did was nail it to the soffit and punch it right through the walls.

          Most people don't need a drop every 6 feet: one near the TV, one in a study, maybe a couple in a closet/ceiling/etc. Then those drops get used to put a little PoE 8-port switch in place and drive an AP, TV, whatever.

          • toast0 18 hours ago

            > Got old phone lines? Then its possible to run ethernet over them with SPE and maybe get a gbit.

            Depending on the age of the house, there's a chance the phone lines are 4-pair, and you can probably run 1G on 4-pair wire; it's probably at least cat3 if it's 4-pair, and quality cat3 that's not a max-length run in dense conduit is likely to do gigE just fine. If it's only two-pair, you can still run 100, but you'll want to either run a managed switch that you can force to 100M or find an unmanaged switch that can't do 1G... Otherwise you're likely to negotiate to 1G, which will fail because of the missing pairs.
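
            A minimal sketch of the pair requirements described above (assuming the usual copper standards; the pin numbers follow the common T568A/B layout):

                PAIRS_REQUIRED = {
                    "10BASE-T":   2,   # pins 1,2 and 3,6
                    "100BASE-TX": 2,   # pins 1,2 and 3,6
                    "1000BASE-T": 4,   # all four pairs
                }

                def best_speed(usable_pairs):
                    # Fastest standard an old run with N intact pairs could support,
                    # assuming the cable itself is up to it.
                    ok = [std for std, need in PAIRS_REQUIRED.items() if need <= usable_pairs]
                    return ok[-1] if ok else "no link"

                print(best_speed(2))   # 100BASE-TX -> force/expect 100M on 2-pair phone wire
                print(best_speed(4))   # 1000BASE-T -> worth attempting on a 4-pair run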

            • ziml77 3 hours ago

              Can confirm on the gigabit, because I've got my gigabit internet running over old phone line right now. I'm not sure exactly how long the run is, but it goes from this floor's electrical room, where the ONT is housed, to a closet in my apartment, where it's then spliced onto CAT-5 to reach the router. I really didn't expect it to work, but speed tests report that I'm getting 900+ Mbps.

            • ssl-3 17 hours ago

              Gigabit ethernet "requires" 4 pairs of no-less-than cat5. The 100mbps standard that won the race -- 100BASE-TX -- also "requires" no-less-than cat5, but only 2 pairs of it.

              Either may "work" with cat3, but that's by no means a certainty. The twists are simply not very twisty with cat3 compared to any of its successors...and this does make a difference.

              But at least: If gigabit is flaky over a given span of whatever wire, then the connection can be forced to be not-gigabit by eliminating the brown and blue pairs. Neither end will get stuck trying to make a 1000BASE-T connection with only the orange and green pairs being contiguous.

              I think I even still have a couple of factory-made cat5-ish patch cords kicking around that feature only 2 pairs; the grey patch cord that came with the OG Xbox is one such contrivance. Putting one of these in at either end brings the link down to no more than 100BASE-TX without any additional work.

              (Scare quotes intentional, but it may be worth trying if the wire is already there.

              Disclaimers: I've made many thousands of terminations of cat3 -- it's nice and fast to work with using things like 66 blocks. I've also spent waaaaay too much time trying to troubleshoot Ethernet networks that had been made with in-situ wiring that wasn't quite cutting the mustard.)

              • toast0 6 hours ago

                > Neither end will get stuck trying to make a 1000BASE-T connection with only the orange and green pairs being contiguous.

                They can get stuck, because negotiation happens on the two original pairs (at 1Mbps), and to-spec negotiation advertises the NIC capabilities and selects the best mutually supported option. Advertising fewer capabilities for retries is not within the spec, but obviously helps a lot with wiring problems.

                The key thing with the ethernet wiring requirements is that most of the specs are for 100m of cabling with the bulk of that in a dense conduit with all the other cables running ethernet or similar. Most houses don't have 100m of cabling, and if you're reusing phone cabling, it's almost certainly low density, so you get a lot of margin from that. I wouldn't pull new cat3 for anything (and largely, nobody has since the 90s; my current house was built in 2001, it has cat5e for ethernet and cat5e in blue sheaths for phone), but wire in the wall is worth trying.

                • ssl-3 2 hours ago

                  TIL that they can get stuck in no-man's-land with 2 pairs. That seems stupidly incompatible, and it isn't something I've witnessed myself, but it makes sense that it can happen.

                  My intent wasn't to dissuade anyone from trying to make existing cat 3 wire work (which I've never encountered in any home, but I've not been everywhere), but to try to set reasonable expectations and offer some workarounds.

                  If a person has a house that is still full of old 2- or 4-pair wire, and that wire is actually cat3, and is actually home-run (or at least, features aspects that can usefully-intercepted), then they should absolutely give it a fair shot.

                  I agree that the as a practical matter, the specifications are more guidelines than anything else.

                  I've also gone beyond 100 meters with fast ethernet (when that was still the most commonly-encountered) and achieved proven-good results: The customer understood the problem very well and wanted to try it, so we did try it, and it was reliable for years and years (until that building got destroyed in a flood).

                  If the wiring is already present and convenient, then there's no downside other than some time and some small materials cost to giving it a go. Decent-enough termination tools are cheap these days. :)

                  (Most of the cat3 I've ran has been for controls and voice, not data. Think stuff like jails, with passive, analog intercom stations in every cell, and doors from Southern Steel that operate on relay logic...because that was the style at the time when it was constructed. Cat3, punch blocks, and a sea of cross-connect wire still provides a flexible way to deal with that kind of thing in an existing and rather-impervious building -- especially when that building's infrastructure already terminates on 25-pair Amphenols. I'll do it again if I have to, but IP has been the way forward even in that stodgy slow-moving space for a good bit now.)

        • ericd a day ago

          Eh flat Ethernet cables can easily be snaked all over with adhesive clips, and if you color match cable/clips/walls, it doesn’t look bad.

          • PaulHoule a day ago

            Visiting museum ships also showed me you can sometimes route cables over living and working spaces.

          • 0cf8612b2e1e a day ago

            This is what I did. Takes minimal effort and then you never have to worry about it again.

          • dpark a day ago

            Cables routed on visible walls look absolutely terrible. I wish they didn’t, but they do.

            Yes, it’s better if your cable and clips and wall all match, but it still looks bad.

            • ericd a day ago

              Why? Run them along baseboards in the corners, you'll never notice them (or at least we didn't at our last house, white on white).

            • jandrese 20 hours ago

              What if you ran the cable on the top of the wall and covered it with crown molding?

            • Citizen8396 a day ago

              when done right, raceway along (or even behind the) baseboards works nicely

        • stacktraceyo a day ago

          I wish I could have multiple modems coming into the house using the same provided cable. Why’s that not possible?

          When I was younger I went and bought a new modem so I could play halo on my Xbox in another room than where my parents had the original modem. Found out then I’d need to pay for each modem.

          • arcanemachiner a day ago

            If you're not sure what a router is, you should probably look that up, because it sounds like you want another router.

            • stacktraceyo 10 hours ago

              I know what a router is lol. I was just wondering what options are available to use all the coax connections already in the house so I could connect everything via Ethernet, if you want to avoid running Ethernet through the walls or don't want Ethernet cables visible.

              When I was younger and before WiFi was a thing I naively thought I’d just plug in a new modem.

            • jgeralnik 20 hours ago

              It actually sounds like they just want a switch

          • sokoloff a day ago

            If you have coax, look into MoCA. I have one attic device on a MoCA connection and it runs very well.

            • stacktraceyo 10 hours ago

              How does the age of the copper affect performance? Will look into it, thanks.

              • sokoloff 9 hours ago

                I don’t think age of copper itself matters (assuming it supports TV already), other than what might come along with that.

                https://en-us.support.motorola.com/app/answers/detail/a_id/1... will give you some additional info.

                My house had quite old (likely 1980s) coax home runs and it worked flawlessly. All I did was change out the entry (root) splitter for one that had a point-of-entry filter. I'm not sure that was even needed, but it seemed sensible and was not expensive or difficult.

              • phil21 7 hours ago

                It will be less the age of the actual cable, and more the standards used when cabled. The largest issue is likely to be splitters behind the wall that limit frequencies passed through.

                Usually those can be found in the wall boxes behind the plate - but not always!

                These used to be a bane on cable modem installs for apartment complexes, but the situation should generally be better 25 years later...

    • ipython a day ago

      > the best way to speed up your Wi-Fi is to not use it.

      So true!

      Other tips I’ve found useful:

      Separate 2.4ghz network for only IoT devices. They tend to have terrible WiFi chipsets and use older WiFi standards. Slower speed = more airtime used for the same amount of data. This way the “slow” IoT devices don’t interfere with your faster devices which…

      Faster devices such as laptops and phones belong on a 5ghz-only network, if you're able to get enough coverage. Prefer wired backhaul and more access points, as you're better off with a device talking on another channel to an ap closer to it rather than tying up airtime with lots of retries to a faraway ap (which impacts all the other clients also trying to talk to that ap)
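
      As a rough back-of-the-envelope illustration of the airtime point (the PHY rates below are typical examples, and all real 802.11 overhead -- preambles, ACKs, contention, retries -- is ignored, so actual airtime is worse in every case):

          PAYLOAD_BITS = 8_000_000   # a 1 MB transfer

          phy_rates_mbps = {
              "old IoT gadget, 802.11b @ 11 Mbps": 11,
              "mid-range 802.11n client @ 72 Mbps": 72,
              "802.11ac laptop @ 433 Mbps": 433,
          }

          for name, rate in phy_rates_mbps.items():
              airtime_ms = PAYLOAD_BITS / (rate * 1_000_000) * 1000
              print(f"{name}: ~{airtime_ms:.0f} ms of airtime")

          # The 11 Mbps device occupies the channel ~40x longer than the
          # 802.11ac client for the same data, which is the whole argument
          # for parking slow IoT gear on its own radio.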

      WiFi is super solid at our house but it took some tweaking and wiring everything that doesn’t move.

      • chrneu a day ago

        Absolutely. Your IoT devices should be on their own 2.4ghz network running on a specific channel to isolate them. You should also firewall these devices pretty heavily on their own router.

        The only devices on wifi should be cell phones and laptops if they can't be plugged in. Everything else, including TVs, should be ethernet.

        When I moved into my last house with roommates their network was gaaarbage cuz everything was running off the same router. The 2.4ghz congestion slowed the 5ghz connections because the router was having to deal with so much 2.4ghz noise.

        A good way of thinking about it is that every 2.4ghz device you add onto a network will slow all the other devices by a small amount. This compounds as you add more devices. So those smart lights? Yeaaahh

        • cycomanic 20 hours ago

          > When I moved into my last house with roommates their network was gaaarbage cuz everything was running off the same router. The 2.4ghz congestion slowed the 5ghz connections because the router was having to deal with so much 2.4ghz noise.

          I don't know why you're saying that; a 2.4 GHz device should not interfere with 5 GHz channels unless it somehow emits harmonics, which would most definitely make it noncompliant with various FCC standards. Or do you mean the modem was so crappy it couldn't deal with processing noisy 2.4 GHz channels at the same time as 5 GHz ones? That might be true, but I would assume the modems would run completely different DSP chains on different ASICs, so this would be surprising.

          • robocat 19 hours ago

            > do you mean the modem was so crappy
            > but I would assume the modems

            Your assumption is sometimes incorrect, as cheap devices can share some of the RF front end. Resource contention can also apparently occur due to CPU, thermal, and memory issues.

            https://chatgpt.com/share/68e9d2ee-01a4-8004-b27b-01e9083f7e... (Note that Prof is one "character" I have defined in the prompt customisation)

            Or:

            https://g.co/gemini/share/1e8d55831809

            • ssl-3 15 hours ago

              Ah, splendid. I'm so glad that you have come before me today to present this bot's confounding quandary, and I receive it with tremendous glee.

              Please allow me to proffer the following retort: The answer to having a shitty, incapable router is to use one that is not shitty, and is capable.

              (The routing-bits have no clue what RF spectrum is being utilized, and never have. They just deal with packets. The packets are all shaped the same way regardless of the physical interface on which they arrive, or which they are destined for.)

              • robocat 8 hours ago

                There's no need to be rude.

                cycomanic knows stuff but their answer was basically contradicting chrneu, which nobody likes. It is counterintuitive to me (and I'm guessing cycomanic too) that the different bands should interact so much.

                The AI answers passed my shit-detector... And I think it is the same as trying to be helpful by providing a search link, as people did in the past. Other HN users can make their own decision about reading the prompt or reply (although using links does make me wonder about cross-account tracking and doxing myself).

                • ssl-3 4 hours ago

                  The false supposition built into the question asked of the bot, combined with the resulting answer to the bad question, results in the whole thing being -- at very best -- a boondoggle of a red herring.

                  It's all quite well-worded, and yet is still completely unrelated to what is being discussed.

                  Real people: "Hey, let's talk about networks!"

                  Eventually: "Cool, I like networks! Did you know that down is actually up, and up is actually down? In fact, I asked a sycophant bot to demonstrate this fiction with its wily words, and it did so with wonderful articulation. Here's a link!"

                  Having tolerance towards this kind of make-believe anti-truth is not something that I would consider to be a healthy human function. Especially when this nonsense has deflected through a third party that is completely absent from the discourse and isolated from the context, such as a sycophant bot, and particularly so when there's an implied appeal to authority for that absent third party.

                  (I have no intention of considering whether this kind of action is deliberate or not. I simply recognize this move for how consistently successful it is at poisoning a discussion amongst a group of people.)

                  ---

                  If you were to ask me, a person, the following question:

                  > "What is the most likely reason that a cheap router/AP would slow down servicing clients on 5GHz when also servicing clients on a congested 2.4GHz spectrum"

                  ...then I would not have responded to that question with a single confidently-stated and presumptive answer, but instead by opening a dialogue.

                  And I would begin this dialogue by asking about the reasons that lead you to believe that this would ever be true in the first place.

                  (But that's not the path that was chosen here.)

        • drnick1 21 hours ago

          My advice would be NOT to connect any kind of TV to the Internet. They have microphones and sometimes cameras, and are a huge privacy risk.

          • inkyoto 17 hours ago

            If one must forgo the comfort of complete isolation from the vulgarities of contemporary media and visual indulgence – an unwise choice, yet one that many appear compelled to make – then prudence demands mitigation rather than surrender.

            A measured compromise would entail the meticulous profiling of the TV's network traffic, followed by the imposition of complete blocking at the DNS level (via Pi-hole, NextDNS and the like) first, whilst blacklisting the outgoing CIDRs on the router itself at the same time.

            This course of action shall not eliminate the privacy invasion risk in its entirety – for a mere firmware update may well redirect the TV traffic to novel hosts – yet it shall transform a reckless exposure into a calculated and therefore manageable risk.
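
            As a small sketch of the router-side half of that approach (the DNS half is what Pi-hole/NextDNS handle); the CIDRs below are documentation ranges used as placeholders, not real telemetry endpoints:

                import ipaddress

                # Placeholder blocklist built from profiling the TV's traffic.
                BLOCKED_CIDRS = [
                    ipaddress.ip_network("203.0.113.0/24"),
                    ipaddress.ip_network("198.51.100.0/24"),
                ]

                def should_drop(dst_ip):
                    addr = ipaddress.ip_address(dst_ip)
                    return any(addr in net for net in BLOCKED_CIDRS)

                print(should_drop("203.0.113.42"))   # True  -> drop
                print(should_drop("192.0.2.10"))     # False -> allow, until profiling says otherwise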

            • ipython 9 hours ago

              I don't connect my TVs to the Internet; instead I hook up Apple TVs to an HDMI port and just use the TV as God intended - as a dumb display device. The Apple TV is connected to the Internet and functions as my portal to, as you say, the vulgarities of contemporary media and visual indulgence. Without the downsides of buggy and spyware ridden TV firmware.

          • bdangubic 21 hours ago

            so does your phone :)

            • drnick1 21 hours ago

              Yes, but unlike TVs, my phone runs free software (Graphene) and is free of the spyware "smart" TVs are known for.

              • whatevaa 19 hours ago

                Most people don't run Graphene so point stands.

                • drnick1 18 hours ago

                  Most people don't know that Big Tech is extracting data from them on a massive scale. It's up to us, the "tech people," to educate the people and show them alternatives like Graphene. As for the TV, my advice is not to connect it to the internet. If you need to stream something, hook up a laptop or dedicated device to the TV.

                  • hvb2 16 hours ago

                    This is where regulation comes in. For the TV makers. Things should be secure by default and come with fines if they aren't.

                    As for the extracting of data, yes that happens on a massive scale. In free products that no one is forced to use. And I would argue that, by now, almost everyone should know that comes at a price, it's just not monetary to the user. At that point it's a choice people make and should be allowed to make.

                    • franga2000 12 hours ago

                      The "it spies on you because it's free" thing hasn't been true for many years now. TVs that cost almost a grand still spy on you, as do cars that cost tens of thousands. Youtube/Netflix/Spotify/... still spy on you even if you pay for the premium/whatever tier.

                      If something is free, you're the product. But if it isn't free, you're paying to be the product.

      • vitaflo a day ago

        Solid idea and something I should work towards. We have Ethernet drops in every room but you’re right about IoT devices. Now I have some more planning to do.

        • kiney 12 hours ago

          skip wifi and use zigbee for IoT where possible.

      • ssl-3 15 hours ago

        That sounds like a good concept: I'm no stranger to cheap IoT devices chewing up local 2.4GHz bandwidth with chatter, and I have a lot of that going on. But does it matter in 2025?

        As a broad concept: Ever since my last Sonos device [that they didn't deliberately brick] died, I don't have any even vaguely bandwidth-intensive devices left in my world that are 2.4GHz-only.

        Whatever laptop I have this year prefers the 5GHz network, and has for 20 years. My phone, whatever it is today, does as well and has for 15 years. My CCwGTV Chromecast would also prefer hanging out on the 5GHz network if it weren't plugged into the $12 ethernet switch behind the TV.

        Even things like the Google Home Mini speakers that I buy on the used market for $10 or $15 seem to prefer using 5GHz 802.11ac, and do so at a reasonably-quick (read: low-airtime) modulation rate.

        The only time I spend with my phone or tablet or whatever on the singular 2.4GHz network I have is when I'm at the edge of what I can reach with my access points -- like, when I visit the neighbors or something, where range is more important than speed and 2.4GHz tends to go a wee bit further.

        So the only things I have left in normal use that require a 2.4GHz network are IoT things like smart plugs and light bulbs and other small stuff like my own little ESP/Pi Zero W projects that require so little bandwidth that the contention doesn't matter. (I mean... ye olde Wii console and PSP handheld only do 2.4GHz, but they don't have much to talk about on the network anymore and never really did even in the best of times.)

        It's difficult to imagine that others' wifi devices aren't in similar form, because there's just not much stuff left out there in the world that's both not IoT and that can't talk at 5GHz.

        I can see some merit to having a separate IoT VLAN with its own SSID where that's appropriate (just to prevent their little IoT fingers from ever reaching out to the rest of the stuff on my LAN and discovering how insecure it may be), but that's a side-trip from your suggestion wherein the impetus is just logical isolation -- not spectral isolation.

        So yes, of course: Build out a robust wireless network. Make it awesome -- and use it for stuff.

        But unless I'm missing something, it sounds like building two separate-but-parallel 2.4GHz networks is just an exercise in solving a problem that hasn't really existed for a number of years.

        • ipython 9 hours ago

          There is no non-IoT 2.4ghz network in my design. All "fast" devices are on a 5ghz only network. The only 2.4ghz network is dedicated to IoT devices. This also eliminates the need for devices to hunt and roam between 5ghz and 2.4ghz unnecessarily. Just need to balance tx power to make sure the 5ghz handoffs are as smooth as possible between APs.

        • bxparks 8 hours ago

          You are lucky. In 2025 I have to run most of my 20-30 wifi devices on 2.4 GHz because 5 GHz won't penetrate the walls in my house, especially diagonally.

          My dev laptop is about 10 m (30 ft) away from the wifi access point, but goes through about 6 walls diagonally, due to some weird layout, and 2.4 GHz is way faster.

          The house has some thick walls.

          Same with phones. As soon as I'm in a different room, 2.4 GHz is faster. So I just keep things on 2.4.

          Yeah, I've been planning to wire the house with Cat-6 into every room and add some access points. It's been on the backlog for 6 years..

          • ssl-3 3 hours ago

            I've lived in houses like that.

            My last house, which was rather small (by midwestern American standards, anyway) had some interior walls that were very good at blocking 5GHz transmissions. (I never took them apart to look, but I suspect that some of them had plaster with metal lath as one or more layers.)

            I started with one access point downstairs at the front (because that's where the cable modem lived) but it didn't work so well upstairs, at the back (diagonally) in the room I was using as an office.

            So I added another access point upstairs at the back and that fixed it: Wifi became solid-enough both upstairs and down, and also covered the entire back yard, and also worked great for the neighbors when they asked if they could borrow a cup of Internet. It took some literal gymnastics in some very weird normally-unseen spaces to accomplish that run, but it got done. :)

            As an aside: It's interesting that being blocked by walls is also part of what makes 5GHz wifi so speedy indoors (in addition to having a lot more spectrum to use), for many [not all] people. By being attenuated so well by walls, the co-channel interference from the neighbors is reduced rather dramatically. With neighbors nearby, the RF environment tends to be a lot quieter at 5GHz than at 2.4GHz.

            ---

            Present-day house is a bit lucky: All of the thirsty tech is on the first floor, and it's very simple to get ethernet cables routed 'round in the basement (it's all utility space). I was able to find enough pre-existing holes in the floor (from old cable TV installs and also floor-mounted outlets that have been removed and covered) that getting ethernet to every useful area of every first-floor room with tech in it was a very simple ordeal that did not require a drill. (Yeah, that means that there's a wire poking up through the floor behind the desk I'm sitting at right now instead of a tidy RJ45 receptacle on a wall plate with a nice port designation label. I'm over it; it works perfectly and inertia is a hell of a drug.)

            But I'm not completely "lucky." The present house has aluminum siding and low-E windows. It's a great house that is amazingly inexpensive to heat and cool for how old it is, but it has aluminum siding and low-E windows and approximates a somewhat-leaky Faraday cage.

            Thus, my cell phone barely works indoors, but it works great outside. And wifi barely works outside on the porch (front or back, doesn't matter), and really not at all beyond the porch (but things like my phone think that it should work, which is problematic).

            I worked around that well-enough for the detached garage and back yard area by adding another access point in the garage, configured as a wireless repeater. Its advantage is that it has antennas that are optimized to work well, instead of some that are optimized to be very small (like those inside my phone, or my laptop). It's identical to the one inside the house and gets OK signal to/from the main AP, which it has a visual line-of-sight to through a couple of windows.

            As an impromptu solution made from stuff I already had leftover from the last place, it works. I'm not winning any speed records with that remote access point... but it seems to be reliable, and reliability is good.

            (Maybe some day I'll actually get around to upgrading the electricity to the garage to support some easy-to-access rooftop solar and/or car charging and/or welding and/or something, and when that trenching happens I'll also drop in some single-mode fiber. A single run of pre-terminated fiber is very cheap to buy, the "optics" at the endpoints are very inexpensive, and it is very safe with its essentially-absolute electrical isolation. It feels like overkill, but it's also once and done.)

    • m463 a day ago

      > not use it.

      A few things come to mind...

      - You can buy ethernet adapters... for iPhone/ipad/etc. Operations are so much faster, especially large downloads like offline maps.

      - many consumer devices suck wrt wifi. For example, there seem to be ZERO soundbars with wired subwoofers. They all incorporate wifi.

      - also, if anyone has lived in a really dense urban environment, wifi is a liability in just about every way.

      - What's worse is how promiscuous many devices are. Why do macs show all the neighbors' televisions in the airplay menu?

      - and you can't really turn off wifi on a mac without turning off sip. (in settings, wifi OFF toggle is stuck on but greyed out)

      • varenc 18 hours ago

        > Why do macs show all the neighbor's televisions in the airplay menu?

        That's a feature that can be configured on the TV/AirPlay receiver. They've configured it to allow streaming from "Anyone", which is probably the default. They could disable this setting and limit it to only clients on their home network. And you can't actually stream without entering a confirmation code shown on the TV.

        When you stream to an AirPlay device this way it sets up an ad-hoc device-to-device wireless connection, which usually performs much better than using a wifi network/router and is why screen sharing can be so snappy. It's part of the 'Apple Wireless Direct Link' proprietary secret sauce also used by AirDrop. You can sniff the awdl0 or llw0 interfaces to see the traffic. Open AirDrop and then run `ping6 ff02::1%awdl0` to see all the Apple devices your Mac is in contact with (not necessarily on your wifi network).

        > and you can't really turn off wifi on a mac without turning off sip.

        Just `sudo ifconfig en0 down` doesn't work? You can also do `networksetup -setairportpower en0 off`. Never had issues turning off wifi.

      • yardstick 19 hours ago

        > many consumer devices suck wrt wifi. For example, there seem to be ZERO soundbars with wired subwoofers. They all incorporate wifi.

        Sonos has its issues, but I do need to point out that their subs (and the rest) all have Ethernet ports in addition to WiFi.

        • ssl-3 14 hours ago

          And also, previously: In the dark times when end-user wireless network bandwidth was very low and glitchy (and most home users didn't care much), and before "mesh" became a term associated with a single-box collection of items that could be bought at Wal-Mart, Sonos devices were able to mesh together and form their own wireless network that Just Worked.

          In software-land, they even solved latency inequalities well enough to keep things properly in-phase at 20 kHz between different devices, to allow stereo imaging to work correctly betwixt two wirelessly-connected speakers. (This seems very passé in these modern enlightened times of seemingly-independent wireless Bluetooth earbuds, but it was a tough nut for them to crack back in 2002[!].)

          It wasn't all smiles and rainbows, of course, because the world never properly settled on one, true, universal implementation of something like Spanning Tree Protocol and agreed on how to use it. It was very possible for a person to really hose up their network by connecting Sonos gear the "wrong" way -- by connecting "too much" of it directly to the LAN.

          But those potential problems were broadly mitigable by picking exactly one Sonos device to bridge the wireless SonosNet into the home's LAN: Ideally, a Sonos Bridge would -- uh -- provide that bridge, but any random Sonos speaker (or subwoofer!) would do just as well. This worked, but it involved some aspect of wifi.

          And yeah, the problems could also be mitigated in other ways if they showed up: A person could certainly plug in their Sonos sub, sound bar, and surround speakers into Ethernet -- which was really quite neat and tidy if it worked, and it often worked. But it was a pickle if it didn't work because STP implementations can be an unadjustable boondoggle in the consumer space.

          They had a really neat and rather unique thing going for quite a long time before the market shifted to make their products apparently be fickle, outdated, inferior, and expensive. ("What, no Bluetooth?" people once said, even though, being an independent network-based streamer, it doesn't have Bluetooth problems like a person walking to the other side of the house with their phone where everyone but them can hear it noisily glitch out until they wander back.)

          Nowadays, SonosNet seems to be mostly dead, and the STP problems died with it. Common home wifi has also grown up a lot since 2002. So a person can hard-wire their Sonos sub, soundbar, and surround speakers into the LAN without fear of badness -- or use one or more of those wirelessly, instead. All without problems.

          It was pretty neat. It's still pretty neat today.

          • phil21 7 hours ago

            > So a person can hard-wire their Sonos sub, soundbar, and surround speakers into the LAN without fear of badness -- or use one or more of those wirelessly, instead. All without problems.

            Eh, I just had to go through and disconnect all ethernet from a bunch of Sonos devices in my house a couple months ago due to issues. It's on my list to go through and connect everything to the LAN when I get the time to make another couple ethernet drops - but mixing wifi/ethernet connected Sonos devices is not a great experience even in 2025.

            • ssl-3 2 hours ago

              I thought that was fixed with S2, by basically ditching the glory (and pitfalls) of ye olde SonosNet.

              Are you still on S1?

    • varenc 18 hours ago

      An idle Wi-Fi client with no traffic should have a very minimal effect on your network's quality. The TV is only going to be slowing things down if it's actually using the network and downloading/uploading. Which, regrettably, is a problem with smart TVs. But there's no reason to limit the number of idle clients on a Wi-Fi network assuming your gateway can handle it. The challenge, though, is that in the real world many devices that should be idle aren't.

      For my IoT network I just block most every device's access to the internet. That cuts down on a lot of their background chatter and gives me some minor protection.

      Also honestly, I feel the majority of wifi problems could be fixed by having proper coverage (more access points), using hardwired access points (no meshing), and getting better equipment. I like Ubiquiti/Unifi stuff but there are other good options out there. Avoid TP-Link and anything provided by an ISP. If you do go meshing, insist on a 6ghz backhaul, though that hurts the range.

    • rbranson a day ago

      > It's a much better answer to hook up everything on Ethernet that you possibly can than it is to follow the more traveled route of more channels and more congestion with mesh Wi-Fi.

      Certainly this is the brute-force way to do it and can work if you can run enough UTP everywhere. As a counterexample, I went all-in on WiFi and have 5 access points with dedicated backhauls. This is in SF too, so neighbors are right up against us. I have ~60 devices on the WiFi and have no issues, with fast roaming handoff, low jitter, and ~500Mbit up/down. I built this on UniFi, but I suspect Eero PoE gear could get you pretty close too, given how well even their mesh backhaul gear performs.

      • xmprt a day ago

        I'm not super familiar with SF construction materials but I wonder if that plays a part in it too? If your neighbors are separated by concrete walls then you're probably getting less interference from them than you'd think and your mesh might actually work better(?)... but what do I know since I'm no networking engineer.

        • rbranson a day ago

          It's all wood construction, originally stick victorians with 2x4 exterior walls. My "loudest" neighbor is being picked up on 80MHz at -47 dBm.

          • varenc 18 hours ago

            Old Victorians in SF will sometimes have lath and plaster walls (the 'wet wall' that drywall replaced). Lath and plaster walls often have chicken wire in them that degrades wifi more than regular drywall will.

          • hansvm 19 hours ago

            Man, at times in my life I would've killed to get a -47 dBm or better signal.

      • chrneu a day ago

        lol 5 APs for ~60 devices is so wasteful and just throwing money at the problem.

        I'm glad it works but lol that's just hilarious.

      • Marsymars 18 hours ago

        FWIW you don't need PoE Eero devices for a wired backhaul; all of their devices support it.

      • sidewndr46 a day ago

        you have five access points and 60 devices? How many square feet are you trying to cover?

        • chrneu a day ago

          He said SF with neighbors so I'm assuming condo/apartment. Probably less than 2000sq feet would be my guess.

          5 aps for 60 devices is hilarious. I have over 120 devices running off 2 APs without issue. lol

          • phil21 7 hours ago

            It's way less about device count, and more about AP density - especially in RF challenging environments.

            I pretty much just deploy WiFi as a "line of sight" technology these days in a major city. Wherever you use the wifi you need to be able to visually see the AP. Run them in low power mode so they become effectively single-room access points.

            Obviously for IoT 2.4ghz stuff sitting in closets or whatever it's still fine, but with 6ghz becoming standard the "AP in every room" model is becoming more and more relevant.

          • Tenemo 17 hours ago

            You have 120 wifi-connected devices at home?? What kind of devices? 100 smart light bulbs or something like that?

            I'm just curious – I'm a relatively techy person and I have maybe 15 devices on my whole home network.

            • sahruum9 10 hours ago

              A smart home will definitely run those numbers up. I have about 60 WiFi devices and another 45 Zigbee devices and I'm only about halfway done with the house.

    • BadBadJellyBean a day ago

      I wish I could put Ethernet everywhere but I live in a German apartment in a German house and here walls are massive and made out of brick and concrete. Routing cables through this without it being a massive eyesore is pretty hard.

      • molszanski a day ago

        Try Powerline. This €40 device will turn your electrical sockets into a 100-500 Mbps Ethernet cable. Simple and efficient. Just check whether the sockets you want to connect are on the same circuit breaker. If yes, chances are really high it will work very well.

        I’ve connected a switch and a second access point with mine.

        Also I think they work best if there are fewer of them on the same circuit. But not sure. Check first.

        • izacus 11 hours ago

          Powerline almost never comes close to performance of wifi in the same conditions.

          It's literally wifi just over an even worse medium.

        • BadBadJellyBean a day ago

          I tried that but the performance was worse than wifi.

          • MaKey a day ago

            G.hn powerline devices are better than the ancient HomePlug AV2 ones. Which devices did you try?

      • molszanski a day ago

        Oh, one more idea. You can use existing coax cables (tv cable) via adapters to get a reliable 1-2 Gbps over cable, e.g. to feed a switch with an additional access point.

      • zoeysmithe a day ago

        Does it have any wiring? I've lived in old homes with coax for cable and those can be used with moca adapters to do ethernet. They can do 2.5gbps too.

    • 1vuio0pswjnm7 7 hours ago

      Younger generations seem attached to internet access over wifi in an unhealthy, irrational way

      Proliferation of consumer hardware that lacks ethernet ports is probably a contributing factor

      IMHO, the greatest utility of wifi is wireless keyboards and monitors, not wireless internet access

      The ability to remotely control multiple computers not on the same network from the same keyboard, for example

      But I've always had a bias for using a (mechanical) external keyboard over built-in laptop keyboards, even before there were wireless keyboards

    • dylanowen 21 hours ago

      For people who don't or can't have Ethernet wiring, I've had great success with Ethernet over coax. My ancient coax wiring gets 800mbps back to my router with a screenbeam MoCA 2.5

      • Denatonium 20 hours ago

        MoCA is truly amazing. I'm getting full symmetrical 940 Mbps speeds simultaneously over upload and download using RG59 cable with a pair of ECB6250. It helps that our house is fairly small, as the high frequencies that MoCA uses get attenuated pretty quickly on RG59 cabling, but even still, I'm impressed by the results.

    • mlinhares a day ago

      Yeah, we built our home and I made sure that wherever there would be devices on the wall there was an ethernet cable there. Best decision ever.

    • avree a day ago

      Unfortunately, Unifi only supports DFS channels (which are the only real way for 'each device to have its own wifi channel' in a crowded area) on some of their models.

      • esseph a day ago

        What unifi AP doesn't support DFS?

        Sometimes DFS certification comes after general device approval, but I'm not aware of any that just flat out doesn't support it. It supported it 10+ years ago.

        • varenc 18 hours ago

          Yea I've had all sorts of UniFi gear and have never seen an access point that only works on DFS channels. That'd make no sense and their admin software actively discourages DFS channel selection.

          I'd guess OP might be trying to use 160mhz channel width on 5ghz band, which will only work on DFS channels though. I wouldn't recommend 160mhz channel width unless you have a very quiet RF environment and peak speed is very important to you. Also I've found it hard to get clients to actually use the full 160mhz width on a network configured this way.

    • GeorgeTirebiter a day ago

      yes, and... convenience says 'use WiFi'. No wires! I've said, if it moves - wireless. If it doesn't -- wired. Counterexamples that 'work': AM / FM / TV / Paging big transmitters to simple/cheap receivers. For the 1-way case, that works. But for 2-way....

    • manmal 17 hours ago

      I agree, but as a quite heavy user household, switching to Unifi 10y ago has fixed our issues, and they haven’t returned. With most devices on WiFi, on 3 APs.

    • alyandon a day ago

      I use powerline ethernet adapters to hook up the media center in the living room. They aren't super fast (~100 mbps) but they are so much more consistent than wifi.

    • cryptoegorophy 20 hours ago

      And put all IoT devices on a protocol such as Z-Wave

    • paulddraper a day ago

      There are two kinds of networks: wireless networks and reliable networks.

      Wired connection is an absolute hack.

      • lxgr a day ago

        I hear people say this often, but when you look into what they actually mean, it's often a comparison of having a single mediocre ISP CPE in a corner of an apartment, at most with a wireless repeater in another, vs. Ethernet. Of course the wire wins in that comparison.

        Now put an access point into every room and wire them to the router, and things start looking very different.

      • protocolture a day ago

        Lmao.

        People say this until it takes 3 days to restore a fibre cut, when the wireless guys just work around the problem with replacement radios etc.

        Issue with Wireless is usually the wireless operator. And most of them do work hard to give wireless a bad rep.

        • toast0 17 hours ago

          Where I live we have what seems like an unusual amount of fiber cuts... whenever the cable company or the phone company fiber is cut, at least one of the major wireless networks is offline too; maybe calls work, but data doesn't. They could potentially restore service through wireless backhaul, but they don't. They also rely on utility power and utility power outages longer than about 4 hours mean towers are going to turn off.

          • protocolture 15 hours ago

            Yeah sounds very true.

            I am aware of a datacentre whose principal fibre bundle transits a fast-tracked development area where there's always construction and always fibre cuts.

            I am also aware of a wireless backhaul path with close to 2 weeks of battery backup, running entirely off of solar. They only truck-roll if they get consistently bad weather.

            I used to maintain an absolutely perfect 25km link that only went offline due to wind twisting the mast the radio was mounted on.

            I also have maintained an absolute dogs breakfast of a network where customers frequently lost connection. Like daily.

            I had one fibre link supporting 1000 customers or so, that the provider admitted had so many joins they could scarcely maintain it. And to add insult to that injury, they mislaid the service id, and would always take an adjacent service offline while troubleshooting it.

            The technology is rarely the problem; it's the implementation.

    • lxgr a day ago

      > You might think it's convenient to have your TV connect to Netflix via WiFi and it is, but it is going to make everything else that really needs the Wi-Fi slower.

      TV streaming seems like a bad example, since it's usually much lower average bandwidth than e.g. a burst of mobile app updates installing with equal priority on the network as soon as a phone is plugged in for charging, or starting a cloud photo backup.

      • vel0city a day ago

        Kind of true, but potentially also untrue. If that TV is running a crappy WiFi chip running an older WiFi standard on the same channel, it'll end up performing worse or not playing as nice with other clients during those bursts of buffering. That'll potentially be seen by other clients as little bursts of jitter.

        That's true of any client with older and crappier WiFi chips though, but TVs are such a race to the bottom when it comes to performance in so many other things.

    • TiredOfLife 10 hours ago

      > You might think it's convenient to have your TV connect to Netflix via WiFi and it is, but it is going to make everything else that really needs the Wi-Fi slower.

      There is other stuff to watch - like uhd bluray backups and those need more than the crappy 100mbps lan port can deliver.

      • ndriscoll 5 hours ago

        If the playback device isn't terrible (though smart TVs probably are), 100 Mbps should still be adequate as long as the average bitrate stays below that (which I think is the case for almost all UHD blurays?) and you get close to the nominal speed. For example if it peaks at 120 Mbps, then you're only draining your buffer at 2.5 MB/s, so a 150 MB buffer gets you an entire minute of peak bitrate as long as it was full. A quick search suggests very few movies go above 100 Mbps for longer than a few seconds at a time and averages are usually below 80.
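
        A quick back-of-the-envelope version of that buffering argument (a sketch only; the 100/120 Mbps and 150 MB figures are the illustrative numbers above, not measurements):

          # Rough sketch: how long a playback buffer survives a bitrate peak
          # that exceeds a 100 Mbps wired port. All numbers are illustrative.
          link_mbps = 100        # nominal Ethernet port speed
          peak_mbps = 120        # short-term peak bitrate of the video
          buffer_mb = 150        # playback buffer size, in megabytes

          deficit_mbps = peak_mbps - link_mbps      # 20 Mbit/s shortfall during the peak
          drain_mb_per_s = deficit_mbps / 8         # 2.5 MB/s drained from the buffer
          print(buffer_mb / drain_mb_per_s, "seconds of peak covered")   # -> 60.0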

    • drob518 a day ago

      That tip about not using it also works with Ethernet and other technologies, BTW.

    • jeffbee a day ago

      Ethernet pretty much sucks and has not improved substantially in consumer devices since the previous century. It also has pretty severe idle power consumption consequences for PCs, unless you are an expert who goes around fixing that.

      • p_j_w a day ago

        >Ethernet [...] has not improved substantially in consumer devices since the previous century.

        We've gone from 100 Mbps being standard consumer level to 2.5 or 10 Gbps being standard now. That sounds substantial to me.

        • Dylan16807 4 hours ago

          They exaggerated a little bit on the timeline. But 20+ years ago 1gbps became standard, and today there are signs of change but 1gbps is still standard.

        • userbinator a day ago

          10G Ethernet is not quite that common yet, but should become very common soon: https://news.ycombinator.com/item?id=44071701

        • jeffbee a day ago

          There is not any meaningful sense in which 2.5gb ethernet is "standard". There are no TVs with 2.5gb ethernet ports. Or even 1gb ports. Yet they all have WiFi 5 or better.

          • dpark a day ago

            In practical terms, WiFi 5 is slower than 1gb Ethernet.

            It is bizarre that they are putting 100mbps Ethernet ports on TVs though.

            • baby_souffle a day ago

              > It is bizarre that they are putting 100mbps Ethernet ports on TVs though.

              It's a few pennies cheaper and I'm sure they have some data showing 70%+ will just use WiFi. TCL in particular doesn't even have very good/stable drivers for their 10/100 NIC; there's a ton of people on the Home Assistant forums who have noticed that their Android-powered smart TV will just ... stop working / responding on the network until it's rebooted.

              • dpark a day ago

                I’m sure you’re right, but the fact that it’s almost certainly literal pennies makes it very lame. Lack of stable drivers is also ridiculous given how long gbps Ethernet has been around.

            • toast0 17 hours ago

              > It is bizarre that they are putting 100mbps Ethernet ports on TVs though.

              It's not that bizarre. About the only media one might have access to that is above 100mbps is 4k blu-ray rips which can hit peaks above 100m; but TVs don't really cater to that. They're really trying to be your conduit to commercial streaming services which do not encode at that high of a bitrate (and even if they did, would gracefully degrade to 100Mbps). And then you can save on transformers for the two pairs that are unused for 100base-tx.

            • kiwijamo a day ago

              No video streams out there use over 100 Mbit/s, so it makes sense.

              • dpark 21 hours ago

                I’ve read that 8k streams can exceed 100mbps. I have not dug very far into that though since I don’t have an 8k tv or any 8k sources.

                • kllrnohj 11 hours ago

                  Streaming services are extremely compressed. Netflix only recommends 15mbps for 4k, even. A naive straight quadrupling of that for 8k is only 60mbps, and in reality they'll just dial up the compression anyway and probably use a 30mbps stream.

          • wtallis a day ago

            2.5GbE only started gaining steam when cheap Realtek chips became available (especially since the Intel chips that were on the market earlier were buggy). Those have been adopted by almost all desktop motherboards now on the market, and most laptops that still have Ethernet. Embedded systems are lagging because they're always behind technologically and because they have longer design cycles, but it's pretty clear that most devices designed in the last year or two are moving beyond 1GbE and 2.5GbE will be the new baseline going forward.

          • esseph a day ago

            Home user CPE we install have multiple 2.5G Ethernet ports.

      • tryauuum a day ago

        even with 1Gbit/s ethernet, measure the latency. It will be lower and more predictable than any wifi you can have.

      • kjkjadksj a day ago

        You still get the best speeds over ethernet today because of how wifi standards are slow walked, both on the router and the device connected with the router. Ethernet standards are slow walked too of course but we are talking slow walking a 2.5g or 10g connection here, even otherwise crappy hardware is likely to have 1g ethernet and it’s been that way for at least 10 or 15 years.

        • jeffbee a day ago

          If you want to transfer the contents of your old mac to your new mac, your best options in order of speed are 1) thunderbolt, 2) wifi, and 3) ethernet. You do not, in any sense, get "the best speeds" from ethernet. The market penetration of greater-than-1gb wired networks in consumer devices is practically nothing.

          • kllrnohj a day ago

            I have a U7 Pro XGS hooked up to a Pro HD 24 POE switch (all 2.5gb ports or faster).

            The only way I've managed to convince any Wifi 7 client to exceed 1gbps is by freshly connecting to it over 6ghz while standing physically within arm's reach of the AP. That's it. That's the only time it can exceed 1gbps.

            In all other scenarios it's well under 1gbps, often more like 300-500mbps. Which is great for wifi, but still quite below the cheapest ethernet ports around. And 6ghz client behavior across OS's (Windows, MacOS, iOS, and Android) is so bad at roaming that I actually end up just disabling it entirely. The only thing it can do is generate bragging rights screenshots, in actual use it's basically entirely DOA.

            And that's ignoring that ~$200 N150 NUCs come with 2.5gbps ethernet now.

            • tass a day ago

              I’m with you on 6ghz wifi disappointment. My phone does well with it since it supports MLO but my macbook will refuse to roam away from 6ghz until it’s close to unusable.

          • tass a day ago

            My isp-supplied router had 10gbe on both wan and lan sides. I swapped it for my own, but that is what modern consumer equipment looks like.

            You can find a 2 port 10gbe+4 port 2.5gbe switch for just over $30 on Amazon.

            If the run isn’t too long this can all run over cat5. Handily beats wifi especially for reliability but Thunderbolt is fastest if you only have 2 machines to link.

          • astrange a day ago

            I have all 2.5gbit at home with some 10gbit SFP copper connections, it wasn't particularly difficult. The devices with built-in Ethernet ports are all gigabit of course, but the ones with USB-C ports have 2.5gbit adapters.

            I could go to 10gbit but the Thunderbolt adapters for those all have fans.

          • kstrauser a day ago

            This is so insanely wrong that I almost feel like we're being trolled. Yes, a direct Thunderbolt connection would be best. Failing that, a guaranteed 1Gb Ethernet connection, which is ubiquitous and dirt cheap, and has latency measured in microseconds, is going to wipe the floor with real-world Wi-Fi 7 speeds. And for what you'd pay for end-to-end Wi-Fi 7 compatible gear, you could be using 10Gb Ethernet, which is in a different league of stability and actual observed throughput compared to anything wireless.

              I have Firewalla Wi-Fi 7 APs connected via 10Gb Ethernet to my router. They're brilliant, very expensive, very high quality devices. I use them only for devices which I can't hardwire, because even 1Gb Ethernet smokes them in actual real-world use.

            • jeffbee a day ago

              > wipe the floor with real-world Wi-Fi 7 speeds.

              I see that you have never tried this. By the way, Mac Migration Assistant doesn't need Wi-Fi infrastructure at all.

              • kstrauser a day ago

                Sure have, within the last 2 weeks when I helped a coworker migrate to a new machine! Both were November 2024 MacBook Pros, so Apple's current top-of-the-line laptops.

                Running over Wi-Fi dragged on interminably and we gave up several hours in. When we scrounged up a couple of USB Ethernet dongles and started over, it took about an hour.

                So yeah, my own personal experience confirms exactly what I'd expect: Wi-Fi is slow and high-latency compared to Ethernet, and you should always use hardwired connections when you care about stability and performance more than portability. By all means, use Wi-Fi for routine laptop mobility. If you have the option, definitely run a cable to your stationary desktop computers, game consoles, set-top boxes, NASes, and everything else within reach of a switch.

          • pbronez a day ago

            If you’re the kind of person who wants better than gigabit Ethernet, it’s very available. 2.5Gbe is just a USB adapter away. Mac Studio comes with 10GbE. Unifi networking gives you managed multi-gig and plenty of others do unmanaged multigig at affordable prices. Piles of consumer NAS support multigig.

            I think this market is driven by content creators. Lots of prosumers shoot terabytes of video on a weekly basis. Local NAS are essential and multi-gig local networks dramatically improve the editing experience.

          • kulahan a day ago

            brb ima turn on my microwave halfway through your transfer

            • chrneu a day ago

              or a single shitty wifi chipset in your network thanks to a cheap iot device.

              Wifi is garbage. This person has no idea what they're talking about. It sounds like they read a blog post like 5 years ago and stuck with it cuz it's an edgy take.

              • jeffbee 20 hours ago

                Yes, me and the other literally billions of people who do not use wired Ethernet to their TV are just parroting an old blog. The OP who says Ethernet is an absolute requirement for Netflix is clearly correct. You sure got me.

            • bobbiechen a day ago

              To this day I expect my wifi to drop whenever I hear a microwave, thanks to the one in my parents house: https://digitalseams.com/blog/microwave-ovens-wi-fi-and-http

              • bornfreddy 19 hours ago

                Shouldn't such microwaves be decommissioned? I would assume that microwaves that are not properly shielded are dangerous to people in their vicinity?

          • kjkjadksj a day ago

            Yes thunderbolt is best but look at costs. Apple is selling a 4ft cable for $130. I have a ton of random cat 5e and cat 6 and they go for a couple dollars.

            Now let's talk about my actual “old mac” and “new mac”: a mid-2012 MBP and my M3 Pro. The 2012 can only do 802.11n, so not gigabit speeds. It does have gigabit ethernet however.

            Even if I was going M3 Pro to M3 Pro, I’m only getting full wifi 6e speeds if I actually have a router that makes use of 160 MHz channels. My router can’t. It is hard to even glean from router offerings which ones offer proper wifi 6, because there are dozens of SKUs, with different stores even getting slightly different SKUs from the same brand. Afaik my mac does not support 160 MHz wifi 6 either.

            • wtallis a day ago

              A 4ft USB 4 cable is $30. That's more bandwidth per dollar than an Ethernet cable. Thunderbolt cables aren't cost prohibitive any more (though the devices at either end are still very expensive).

      • chrneu a day ago

        Ethernet will usually hit hardware limits of your HDD or SSD before it actually maxes out. 1gb ethernet is better than wifi in 99% of cases because wifi in the real world is pretty bad, even with modern standards. Why else do they have to continually revamp the standards to get around congestion and roaming issues? Cuz wifi is garbage in the real world. Ethernet = Very little jitter, latency, or packet loss. Wifi = Tons of jitter, latency and packet loss.

        Your take is really weird and doesn't represent the real world. What blog did you read this on and why haven't you bothered to attack that obviously wrong stance?

        • jeffbee a day ago

          This is the most ridiculous lie in the thread. An ethernet link that can barely keep up with a $150 SSD costs $1250 per switch port, and needs a $1200 NIC and can go only 3m over copper before you need a $1000+ optic assembly. There is nobody with an ethernet setup in their home that outruns consumer-grade SSDs. "Ethernet is limited by SSDs" is a Charlie's Hoes level of wrong.

          • BenjiWiebe 20 hours ago

            Yes even an HDD can keep up with 1GbE.

            But if you actually want your Ethernet to be similar speed to your SSD, you don't need to spend that much. Get some used gear.

            32 port 40GbE switch (Dell S6000) $210 used

            Dual port 40GbE NIC (Mellanox MCX354A-FCCT) $31 used

            40GbE DAC 1 meter new price $22 or 40GbE optics from FS.com (QSFP-SR4-40G) $43 new + MMF fiber cable

            Of course, that's probably not going to be very power efficient for home use - 32 port switch and probably only connecting a handful of devices at most.

      • lxgr a day ago

        Compared to what?

  • ttshaw1 a day ago

    I don't get what the point of the article is. Is the takeaway that I should lower the channel width in my home? How many WAPs would I need to be running for that to matter? I'd argue it's more important to get everyone to turn down TX power in cases where your neighbors in an apartment building are conflicting. And that's never going to happen, so just conform to the legal limit and your SNR should be fine. Anything that needs to be high performance shouldn't be on wifi anyway.

    If you want to spend a really long time optimizing your wifi, this is the resource: https://www.wiisfi.com/

    • varenc 17 hours ago

      The takeaway is that you'll probably experience more reliable wifi if you turn your 5ghz channel width down to 40mhz and especially make sure your 2.4ghz width is 20mhz not 40mhz. As noted, you can't do anything about the neighbors, but making these changes can improve your reliability. And I think the larger takeaway is that if manufacturers just defaulted to 40mhz 5ghz width, like enterprise equipment does, wifi would be better for everyone. But if your wifi works great then no need.

      Also that's an amazing resource, thanks for linking.

    • jerf a day ago

      This sort of thing is definitely in the class of "are you experiencing problems? if not don't worry about it".

      If you are experiencing problems, this might give you an angle to think about that you hadn't otherwise, if you just naively assume Wifi is as good as a dedicated wire. Modern Wifi has an awful lot of resources, though. I only notice degradation of any kind when I have one computer doing a full-speed transfer for quite a while to another, but that's a pretty exceptional case and not one I'm going to run any more wires around for, given it happens less than once a month.

    • jpc0 a day ago

      2.4GHz wifi at 40MHz squats on literally half of the usable channels. Your speed improvement? Very likely you now get 100mbps. If you just disabled 2.4GHz and forced 5GHz you would get the exact same improvement and wouldn't be polluting half of the available frequencies.

      Add another idiot sitting on channel 8 or 9 and the other half of the bandwidth is also polluted. Now even your mediocre IoT devices that cannot be on 5GHz are going to struggle for signal, and instead of the theoretical 70/70mbps you could get off a well placed 20MHz channel you are lucky to get 30.

      Add another 4 people and you cannot make a FaceTime call without disabling wifi or forcing 5GHz.
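
      A small sketch of why 40MHz is so costly there (assuming the usual 2.4GHz channel plan: centres 5MHz apart, from channel 1 at 2412MHz to channel 13 at 2472MHz):

        # Channel centre frequency in MHz for 2.4 GHz channels 1..13.
        centre = lambda ch: 2407 + 5 * ch

        def span(ch, width_mhz):
            c = centre(ch)
            return (c - width_mhz / 2, c + width_mhz / 2)

        print(span(1, 20), span(6, 20), span(11, 20))   # the three non-overlapping 20 MHz channels
        print(span(3, 40))   # (2402.0, 2442.0): a 40 MHz block (roughly ch 1 bonded with ch 5)
                             # covers the space of both the ch 1 and ch 6 20 MHz channels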

    • tetris11 a day ago

      I lose wifi signal consistently in my bedroom on my 80Mhz wide 5Ghz wifi.

      I just now reduced it to 20Mhz, and though there is a (slight) perceptible drop in latency, those 5 extra dB I gained from Signal/Noise have given me wifi in the bedroom again

      • grogers 7 hours ago

        Wow! There are certain areas of my house that I get such bad wifi signal that I often switch to cellular data since it's more reliable. I didn't even know you could change a setting like this to reduce speeds but improve reliability - it worked like a charm, thanks!

      • bcrl a day ago

        Every doubling of the channel width costs roughly 3dB. Shannon's law strikes again!

        • bobmcnamara 20 hours ago

          Every doubling of the channel width doubles the Shannon limit*

          * In a Gaussian white noise environment, which WiFi usually isn't in.
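
          Spelling that out (an idealized additive-white-Gaussian-noise picture, so only a rough guide for real Wi-Fi): with total signal power S and noise power spectral density N_0, the Shannon limit is

            C = B \log_2\left(1 + \frac{S}{N_0 B}\right)

          Doubling B doubles the factor in front of the log but halves S/(N_0 B), i.e. costs about 3 dB inside it. So both statements are compatible: the limit still rises with width (just by less than 2x), while the roughly 3 dB better SNR of a narrower channel is what buys real radios extra range, since each modulation/coding rate needs a minimum SNR.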

    • operator-name a day ago

      Wow, that is an awesome resource and something I wish I knew about earlier!

    • BonoboIO 17 hours ago

      Every time I have questions about Wi-Fi I search for this distinctive site wiisfi.com … I should bookmark this.

      The best resource out there. Period.

  • MedAzizBenSalem 14 hours ago

    This is such a great write-up; it highlights a truth that’s been hiding in plain sight for years: we’ve optimized Wi-Fi for headline speeds, not human experience. The emphasis on throughput recalls the "megapixel wars" of early digital photography: a simple, clear-cut figure that completely misrepresents actual quality. Responsiveness and reliability are the measures that actually control day-to-day satisfaction, but they are harder to define and don't fit neatly on a store shelf. What is fascinating here is that speed tests themselves actively degrade network performance. It's like taking your resting pulse right after sprinting a lap. Whether router makers or ISPs will start offering "responsiveness scores" instead of speed numbers once consumers pay attention to latency and airtime contention remains to be seen. At any rate, this post nails the broader cultural problem in networking: the industry still chases awe-inspiring numbers instead of better experiences.

    • vlan0 12 hours ago

      How do you think speeds affect airtime utilization/optimization? And how does this change with lower PHY rates?

      • immibis 8 hours ago

        Not linearly. The data may take twice as long to transmit at half the rate, but there's overhead which doesn't change. If you can use half the channel width and transmit at half the rate, you halve the Hz-seconds used for preambles and stuff, while keeping the Hz-seconds used for data the same.
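
        A toy model of that overhead effect (the 100 µs per-frame overhead and the PHY rates below are made-up round numbers for illustration, not values from the 802.11 spec):

          # Per-frame airtime = fixed overhead (preamble, ACK, contention) plus
          # payload time, which scales inversely with the PHY rate.
          def airtime_us(payload_bits, phy_mbps, overhead_us=100):
              return overhead_us + payload_bits / phy_mbps   # bits / (Mbit/s) = microseconds

          frame = 12_000                       # one 1500-byte frame
          wide = airtime_us(frame, 200)        # full width, full rate  -> 160 us
          narrow = airtime_us(frame, 100)      # half width, half rate  -> 220 us
          # Payload time doubles (60 -> 120 us) but the fixed overhead doesn't, so total
          # airtime grows ~1.4x rather than 2x; and counted in Hz*seconds, the narrower
          # channel spends half as much spectrum on the overhead portion.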

    • ieie3366 9 hours ago

      AI slop

  • varenc 17 hours ago

    Apple has a draft specification for a better way of measuring network quality than just doing speed tests: https://github.com/network-quality/goresponsiveness

    Their `networkQuality` implementation ships on the CLI of any recently updated Mac. It's pretty interesting and I've found it to be very good at predicting which networks will be theoretically fast but feel unreliable and laggy, and which ones will feel snappy and fast. It measures Round-trips Per Minute under idle and load conditions. It's a much better predictor of how fast casual browsing will be than a speed test.

  • spragl 16 hours ago

    This is a clear case of "you get what you measure". Measuring speed is so easy that everybody can do it, and does it all the time. No wonder providers optimize for speed. But it also works the other way around: we have developed a focus on speed as if it were the only thing that mattered.

    I have worked with networks for many years, and users blaming all sorts of issues on the network is a classic, so of course in their minds they need more speed and more bandwidth. But improvements only make sense up to a point. After that it is just psychological.

  • lxgr a day ago

    > Many ISPs, device manufacturers, and consumers automate periodic, high-intensity speed tests that negatively impact the consumer internet experience as demonstrated.

    Is that actually a thing? Why would any ISP intentionally add unnecessary load to their network?

    • cowsandmilk a day ago

      See https://www.thousandeyes.com/blog/cisco-announces-intent-to-... for example, SamKnows is in millions of homes measuring performance and now sending the data to Cisco.

    • chrneu a day ago

      For what it's worth, I think most ISPs that do this will host their speed test in-network so their speeds are inflated. This benefits both the ISP and whoever is in charge of the speed test(like speedtest.net).

      So they're not really increasing their network load a measurable amount since the data never actually leaves their internal network. My ISP's network admin explained this to me one day when I asked about it. He said they don't really notice any difference.

      • cruffle_duffle 6 hours ago

        That’s why Netflix has fast.com, which pulls actual video content, meaning it’s hard for the ISP to inflate the number.

        (at least as per my understanding)

    • sidewndr46 a day ago

      I've only met around 10 people that even know what a speed test is. I'm not sure how most consumers would even go about automating one. What would be the first step?

  • kuon a day ago

    For me the only thing that really matters, and globally sucks with WiFi is roaming.

    My house is old and has stone walls up to 120cm thick, including the inner walls, so I have to have access points in nearly all rooms.

    I never had a truly seamless roaming experience. Today, I have TP-Link Omada and it works better than previous solutions, but it is still not as good as DECT phones, for example.

    For example, if I watch a twitch stream in my room and go to the kitchen to grab something with my tablet or my phone, I get a freeze about 30% of the time, though not a very long one. Before, I sometimes had to turn the wifi off and on on my device for it to roam.

    I followed all the Omada and general WiFi best practices I could find about frequency, overlap... But it is still not fully seamless yet.

    • bcrl a day ago

      DECT phones run on the 1.9 GHz spectrum which doesn't get absorbed by water like 2.4 GHz, and will penetrate through many other materials far better than higher frequencies.

      Most people place wifi repeaters incorrectly, or invest in crappy repeater / mesh devices that do not have multiple radios. A Wifi repeater or mesh device with a single radio by definition cuts your throughput in half for every hop.

      I run an ISP. Customers always cheap out when it comes to their in home wireless networks while failing to understand the consequences of their choices (even when carefully explained to them).
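
      A crude way to picture the single-radio penalty mentioned above (a geometric sketch only; real meshes lose more to contention and rate adaptation at range):

        # A single-radio repeater must receive and retransmit every frame on the same
        # channel, so usable throughput roughly halves at each hop.
        def repeater_throughput(link_mbps, hops):
            return link_mbps / (2 ** hops)

        for hops in range(4):
            print(hops, "hop(s):", repeater_throughput(300, hops), "Mbps")
        # 0 hop(s): 300.0  1 hop(s): 150.0  2 hop(s): 75.0  3 hop(s): 37.5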

      • gh02t a day ago

        Eh, multiple APs and roaming being awful isn't just a matter of shitty placement and bad wireless backhaul, it's also client side software. I have two APs on opposite ends of my house and my phone tries to hang on to whatever AP its connected to far longer than it should when moving around the house. My APs are placed correctly, and support 802.11r, yet my phone and most other devices don't try to roam until far, far past the point they should have switched to the other AP.

          The design of roaming being largely client initiated means roaming doesn't really work how people intuitively think it should, because, at least in my experience, every device I've ever seen seems to be programmed to aggressively cling to a single AP.

        • toast0 17 hours ago

          Have you tried turning down the tx power on your APs? It will help your devices decide to roam, and it may not actually reduce your effective range, because often times effective range is limited by tx power on the client more than the AP.

          • gh02t 7 hours ago

            I have; it doesn't really help very much unfortunately. Setting an RSSI threshold on the AP can also help devices roam, but it's hard to set it at a level that works for all devices (since different devices have different sensitivities, an RSSI threshold that works well for my phone might cause some other device to constantly get dropped).

      • protocolture a day ago

        "Wheres your router"

        "The basement"

        "Uh, i can send someone out to install some repeaters for $$$"

        "No just make internet good now"

    • jval43 15 hours ago

      I live in a similar building.

      I assume you have hardwired all the APs, otherwise that would be the first step. Make sure they're on different channels, and have narrow MHz bands (20Mhz for 2.4GHz, 40MHz for 5GHz) selected.

      Only use 1,6,11 for 2.4GHz and don't use the DFS channels on 5GHz as they will regularly hang everything.

      Afterwards you can try reducing the 5GHz transmission power so there is no/less overlap in the far rooms.

      Unfortunately you probably need the 2.4GHz (at least I do) but as the range is so much higher it might make sense to deactivate it on some APs to prevent overlaps.

      Doing this basically eliminated the issues for me.

    • Marsymars 18 hours ago

      I use a DECT VoIP phone for most of my phone calls. It's great!

  • ksec 6 hours ago

    >The IEEE 802.11bn (Wi-Fi 8) working group has acknowledged the need for a shift in focus, framing the standard’s goals differently from past generations: not chasing ever-higher peak speeds, but improving reliability, lower latency (especially at the 95th percentile), reduced packet loss, and robustness under challenging conditions (interference, mobility).

    For people who don't follow WiFi closely: while WiFi 8, 7, and 6 all have the intended features for their release, those features are either not mandated or don't work as well as they should. Instead, every release is a fully refined execution of the previous version. So the best of what WiFi 6 (OFDMA) originally promised only really arrives in WiFi 7, and current WiFi 7 features like Multi-Link Operation will likely work better in WiFi 8. So if you want a fully working WiFi 8 as marketed, you had better wait for WiFi 9.

    But WiFi has come a long way. Not only have they exceeded 1Gbps in real world performance, they are coming close to 2.5Gbps, maximising the 2.5Gbps Ethernet. And we are now working on more efficient and reliable WiFi.

  • throw0101d a day ago

    The next release of the standard is 802.11bn, "Wifi 8", and it has been dubbed Ultra High Reliability (UHR):

    * https://en.wikipedia.org/wiki/IEEE_802.11bn

    So factors other than raw speed are indeed being considered.

    • superkuh 9 hours ago

      It might help. But wifi, like all radio, is a shared medium, unlike actual transmission lines like ethernet, where you get the whole usable spectrum to yourself and get to use it multiple times, once for each twisted pair. Even if ethernet itself only uses baseband modulation and doesn't efficiently fill that spectrum, it's more than enough. The reliability is incomparable to radio.

  • drewg123 3 hours ago

    One of the things that makes our wifi suck is HP (and other) printers using wifi direct. Looking at a wifi scan, I can see no fewer than 5 of my neighbors' printers screaming at the top of their lungs.

    The only thing that makes wifi in a large condo building viable is the 6Ghz channels available on wifi 6e

  • kazinator 21 hours ago

    > Because consumers have been conditioned to understand only raw speed as a metric of Wi-Fi quality and not more important indicators of internet experience such as responsiveness and reliability.

    While the two are not the same, they are not exactly separable.

    You will not get good Internet speed out of a flaky network, because the interrupted flow of acknowledgements, and the need to retransmit lost segments, will not only itself impact the performance directly, but also trigger congestion-reducing algorithms.

    Most users are not aware whether they are getting good speed most of the time, if they are only browsing the web, because of the latencies of the load times of complex pages. Individual video streams are not enough to stress the system either. You have to be running downloads (e.g. torrents) to have a better sense of that.

    The flakiness of web page loads and insufficient load caused by streams can conceal both: some good amount of unreliability and poor throughput.

  • rpcope1 a day ago

    Honestly what's unsaid in a lot of this is that it would be really nice if there were more and wider ISM bands. So much makes use of 900Mhz, 2.4GHz and 5GHz in novel and innovative ways, that if the government and FCC really actually wanted to spark innovation including augmenting wifi performance, they'd stop letting telcos and other questionable interests hoard spectrum and release it as ISM (and no, they shouldn't steal from ham bands to make ISM bands either).

    • ianburrell 5 hours ago

      WiFi 6E added the 6 GHz band, which has more spectrum than 5 GHz. The problem is that not that many devices support it. I think it is required in WiFi 7. I think it is shared with other users, so devices have to find free space. I think most developed countries have allocated it.

    • esseph a day ago

      The only way forward is new frequencies and larger blocks of spectrum.

  • PaulKeeble a day ago

    A household's bandwidth use is quite a bit different from a business's. While a household may have a lot of devices, most of them are doing very little at any given time, and the primary device in use wants the best speed possible. In a business, however, there are a lot of primary devices and not a lot of idle little devices, so fairness and reliability dominate the needs, as does maxing out the frequencies for coverage and total available bandwidth.

    Wifi 8 will probably be another standard homes can skip. Like wifi 6, it is going to bring little that they need in order to make good use of their home fibre connections across the house.

  • Brajeshwar 19 hours ago

    “Behind every good wi-fi network is an excellent wired backbone infrastructure.” - the Tao of Wi-Fi

    • stingraycharles 19 hours ago

      “Those who understand wireless use cables” - random guy on the internet.

  • amluto a day ago

    I wish the Wi-Fi developers would put some serious effort into improving range and contention. Forget 40 MHz vs 80 MHz; how about some 5 MHz channels? How about some modulations designed to work at low received power and/or low SNR? How about improving the stack to get better performance when a device has mediocre signal quality to multiple APs at the same time?

    There are these cool new features like MLO, but maybe devices could mostly use narrow channels and only use more RF bandwidth when they actually need it.

    • bobmcnamara 20 hours ago

      IEEE 802.11af old TV band

      IEEE 802.11ah 900Ish

      IEEE 802.11ax (WiFi 6): traditional channels can be subdivided into resource units of between 26 and 2x996 tones according to need (effectively a 2 MHz channel at the low end). This means multiple devices can be transmitted to within the same transmit opportunity.

      > How about some modulations designed to work at low received power and/or low SNR?

      802.11(og), 1 & 2 Mbps.

      • amluto 19 hours ago

        And do those ax resource units work, in practice, in a way that allows two APs that are moderately close to each other to coexist efficiently within the same 20MHz channel? Preferably even if they’re from different vendors and even if the users are not experts?

        > 802.11(og), 1 & 2 Mbps

        I’m a little vague on the details, but those are rather old and I don’t think there is anything that low-rate in the current MCS table. Do they actually work well? Do they support modern advancements like LDPC?

        • bobmcnamara 18 hours ago

          > > 802.11(og), 1 & 2 Mbps

          > I’m a little vague on the details, but those are rather old

          They're the original, phase shift keyed modulations.

          > Do they actually work well?

          They work great, if your problem is SNR, and if you value range more than data rate.

          They are, of course, horribly spectrally inefficient which means they work better than OFDM near the guard bands. OFDM has a much flatter power level over frequency, so you have to limit TX power whenever the shoulder of the signal nears the guard band. IIRC, some standard supports individually adjusting the resource unit transmit power which would solve this as well. PSK modulation solves this somewhat accidentally. Guardbands especially suck since there's only 3 non overlapping 2.4GHz channels.

          > I don’t think there is anything that low-rate in the current MCS table.

          > Do they support modern advancements like LDPC?

          Dunno! Generally, though, each MCS index will specify both a modulation mechanism (BPSK, OFDM, ...) and a coding rate. All of the newer specs allow you to go almost as slow if you want to, usually 6-7 Mbps-ish, and this is done with the same modulation scheme, just a bit faster and with newer coding.
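
          To make that concrete, here's a small, hand-picked slice of the 802.11ax (HE) MCS table, plus the slowest-rate arithmetic under assumed conditions (one spatial stream, a full 20 MHz channel, the long 3.2 µs guard interval):

            # Each MCS index pairs a modulation with a coding rate.
            he_mcs = {
                0: ("BPSK", 1 / 2),
                1: ("QPSK", 1 / 2),
                3: ("16-QAM", 1 / 2),
                7: ("64-QAM", 5 / 6),
                11: ("1024-QAM", 5 / 6),
            }

            # Slowest HE rate: MCS 0, 1 spatial stream, 20 MHz channel.
            data_tones = 234        # data subcarriers in a 20 MHz HE channel
            bits_per_tone = 1       # BPSK
            coding = 1 / 2
            symbol_us = 12.8 + 3.2  # OFDM symbol plus long guard interval
            print(data_tones * bits_per_tone * coding / symbol_us)  # ~7.3 Mbit/s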

          > do those ax resource units work, in practice, in a way that allows two APs that are moderately close to each other to coexist efficiently within the same 20MHz channel?

          Yes and no. It doesn't improve RF coexistence directly, but in many cases it allows much more efficient use of the available airtime. Before, every outgoing packet to a different station consumed a guard interval and the entire channel bandwidth; now, for a single guard interval, you can pack as many stations' data as will fit.
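
          A toy model of that saving, with assumed numbers (it only counts per-PPDU preamble overhead and ignores contention and ACKs; with OFDMA each station's narrower RU stretches its payload in time, but the payloads run in parallel, so the data portion lasts roughly stations * payload_us either way):

            # Assumed, illustrative numbers only.
            preamble_us = 44.0  # rough per-PPDU preamble/overhead
            payload_us = 60.0   # one station's frame at full channel width
            stations = 4

            # One PPDU (and preamble) per station, full channel each time:
            legacy_us = stations * (preamble_us + payload_us)

            # One DL OFDMA PPDU: a single preamble, payloads side by side in RUs:
            ofdma_us = preamble_us + stations * payload_us

            print(legacy_us, ofdma_us)  # 416.0 vs 284.0: (stations - 1) preambles saved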

  • semiquaver a day ago

    The thing about speed tests causing a bad experience because they hog airtime felt like a non sequitur (since performing them is rare and manual) until I saw this:

      > Many ISPs, device manufacturers, and consumers automate periodic, high-intensity speed tests that negatively impact the consumer internet experience as demonstrated
    
    But there's no support presented for this claim, and frankly I am skeptical. What Wi-Fi devices are regularly conducting speed tests without being asked?

    • thewebguyd a day ago

      > What WiFi devices are regularly conducting speed tests without being asked?

      ISP-provided routers; at least Xfinity's do. I've gotten emails from them (before I ripped out their equipment and put my own in) saying "Great news, you're getting more than your plan's promised speeds", with speed test results in the email, because they ran speed tests at like 3 AM.

      I wouldn't be surprised if it's happening often across all the residential ISPs, most likely for marketing purposes.

      • gm678 a day ago

        Pretty sure Verizon does this as well. When I had a tech come out, he had access to historical speed test results from my router (I didn't ask any questions about it at the time, so I don't have any more info).

      • lxgr a day ago

        That would be a speedtest between the router/modem and CMTS then, not one between a Wi-Fi connected device and the ISP, no?

      • kjkjadksj a day ago

        I have noticed Spectrum internet shits the bed at 12:30am pretty reliably.

        • typpilol a day ago

          Really? My spectrum has been super reliable in Michigan. Way better than when I had Comcast here

    • dlcarrier a day ago

      DOCSIS cable modems perform regularly scheduled tests, but it's only between devices on the DOCSIS network, and it shouldn't affect available bandwidth, because there's far more bandwidth within the DOCSIS network than between the network and the Internet.

      • lxgr a day ago

        > there's far more bandwidth within the DOCSIS network than between the network and the Internet.

        Really? DOCSIS has been the bottleneck out of Wi-Fi, DOCSIS, and wider Internet every time I've had the misfortune of having to use it in an apartment.

        Especially the tiny uplink frequency slice of DOCSIS 3 and below is pathetic.

    • joshstrange a day ago

      Eero does this automatically (mine says it was last run 2 days ago at 5:08am) and I had software on my DD-WRT router (OpenLede) that did it, though obviously not many people (overall) are running that.

      I used to run a Docker container that ran a speed test every hour and graphed the results, but I haven't done that in a while now.

      • Marsymars 18 hours ago

        Eero I think just tests internet speed from the gateway, so no Wi-Fi involved.

    • toast0 17 hours ago

      I think Roku devices might. There's a network speed indicator in the settings and I think it had values before I explicitly ran a test. My Rokus are all wired, because I'm civilized, and the test interval is very short, so that ends my investigation.

    • esseph a day ago

      Ubiquiti UniFi used to, I don't know if it still does.

      • nativeit a day ago

        At least in my UniFi instance, this is only done when manually triggered, but I seem to recall a setting where it could be automatically updated daily.

      • pbronez a day ago

        It’s configurable

    • jeffbee a day ago

      Google Nest access points do this, but they do it only when networks are idle, so I fail to see the negative consequences.

  • saghm 17 hours ago

    Having moved into a house this year for the first time since before college, I only just learned about Wi-Fi channel width this week. Apparently the mesh routers I picked several months ago default to a width of 160 MHz but only go as low as 80 MHz, so that's what I ended up switching to. Anecdotally it seems somewhat more reliable, but in the long run something that can go even lower might be worth it, because we still notice occasional stutter that would be nice to reduce, even if the theoretical max throughput were a bit lower.
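
    For a sense of what dropping from 160 MHz to 80 MHz gives up in theoretical peak rate, here's a rough Wi-Fi 6 calculation assuming 2 spatial streams at the top MCS (in practice signal quality and contention usually dominate, which is presumably why the narrower channel feels more reliable):

      # Approximate 802.11ax peak PHY rate:
      #   data tones * bits/tone * coding rate * streams / symbol duration.
      data_tones = {80: 980, 160: 1960}  # data subcarriers per width (MHz)
      bits_per_tone, coding, streams = 10, 5 / 6, 2  # 1024-QAM, rate 5/6
      symbol_us = 12.8 + 0.8             # OFDM symbol plus short guard interval

      for width_mhz, tones in data_tones.items():
          mbps = tones * bits_per_tone * coding * streams / symbol_us
          print(f"{width_mhz} MHz -> {mbps:.0f} Mbit/s")  # ~1201 and ~2402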

  • LikeBeans 18 hours ago

    I'm surprised that, at least for businesses, small-cell Wi-Fi is not a thing. For example, if you walk into an office building, everyone seems to have a physical phone on their desk that is hard-wired. What if that were also a small-cell AP, like a personal AP? Automation, central provisioning, and analytics could make this doable. Yeah, handoff and roaming have to be seamless and quick, but it doesn't feel that hard, no? This would be pretty neat and would solve the contention issue in the air.

  • ElijahLynn 8 hours ago

    Anyone know what Google Wi-Fi devices are set to?

    I don't see a way to change that setting and I don't see a way to see what it's currently set to.

  • userbinator a day ago

    The average US household has 21 Wi-Fi devices

    I wonder how many of those could be wired.

  • everdrive 10 hours ago

    Wi-Fi sucks in general. You ever get on a video call and someone starts talking like a robot? And then, out of pure superstition, they start walking around their house hoping to "find" better reception? It never really works, but they keep trying it anyhow, and then they either say "maybe I need better internet" or "but I have full bars." No information is really given to normal (or in some cases, even technical) users that would allow them to correctly diagnose their problem. So, like Skinner's pigeons, they just keep playing out their ritual even though it has no effect.

    • ElijahLynn 8 hours ago

      WiFi is pretty magical in general. Sometimes it sucks.

  • Havoc 21 hours ago

    Is it really that big of an issue? With devices spread over 2.4, 5, and 6 GHz, you really need a lot of them to run into issues.

  • harrall a day ago

    I actually switched from 40 MHz to 80 MHz when a friend complained about slow downloads on my Wi-Fi.

    So yeah, I do think speed is more important.

    Responsiveness doesn’t matter that often and when it does, plugging in Ethernet takes it out of the equation.

    • chrneu a day ago

      You can't use that speed if your device is dropping half the packets.

  • celeryd 13 hours ago

    I always assumed it was the Ethernet protocol itself that made Wi-Fi suck.

  • 0xbadcafebee 9 hours ago

    I bought a Wi-Fi 6 TP-Link router recently and found out that it's normal for people to have performance/responsiveness issues due to all the "advanced" features enabled. I turned them all off and use the simplest possible connection settings, but somehow there is still a 20-second delay when my smartphone tries to access a web page (and I use 1.1.1.1/8.8.8.8, and this never happened on previous wifi routers). The great enshittification rolls on.

  • vlan0 12 hours ago

    Honestly, none of the comments here are coming from a place with enough protocol knowledge to talk about the "whys".

    I operate a large enterprise wireless network with 80 MHz channels at 5 GHz and 160 MHz channels at 6 GHz. It is possible if your environment allows it.

  • mithcs 21 hours ago

    Not the "Need for Speed" I expected.

  • somanyphotons a day ago

    Is there a good guide on what the right things to do are?

    • chrneu a day ago

      Hardwire everything you can over Ethernet to get it off Wi-Fi.

      Use a dedicated 2.4 GHz AP for all IoT devices. Firewall this network and only allow the traffic those devices need. This greatly reduces congestion.

      Use 5 GHz for phones/laptops and keep IoT off that network.

      That's really about it. If you have special circumstances there are other solutions, but generally the solution to bad wifi is to not use the wifi, lol.

  • knorker 14 hours ago

    Is this still true with OFDM and subchannels or whatever it's called?

    Also MIMO.

    And I don't think it's relevant to compare what to do in a large space with what one should do at home. The requirements are entirely different.

    In a large space with many users, I'd use small channels and many access points. I want it to work well enough for everyone to have calls, and to have good aggregate throughput.

    In a two bed home I'd use large channels and probably only one AP. Peak single device speed is MUCH more important than aggregate speed.

    And in a home it matters much more which channels your neighbors are keeping busy.

    For latency, of course, there is only wired. Even with few devices.

  • XorNot a day ago

    In the IoT space I really wish an "ESP for power line Ethernet" existed these days.

    I have 50+ ESP-based devices on Wi-Fi, and while they're low bandwidth (and on their own SSID), I really wish there were affordable options for them to be "wired" for comms, since they mostly control mains appliances anyway, but the rules and considerations for mixing data and mains in one package make that prohibitively expensive.

    • Neywiny a day ago

      Have you considered 1 WiFi device and 49 sub-ghz devices?

      • XorNot 19 hours ago

        The point isn't wifi contention per se (it's working fine) - it's that having home automation depend on wireless signals at all is both a vulnerability, and feels silly when all those devices have hard wired power.

  • 725686 a day ago

    "The average US household has 21 Wi-Fi devices"... wtf?

    • gnabgib a day ago

      Doesn't take long to add up. Family of 4 - every phone, including prior generations which might be off in a drawer: 3-8

      Router, and extenders (multi floor house): 1-4

      Chromecast|Sonos|Apple speaker/Chromecast|google|firestick|roku|apple TV/smart speaker/hifi receiver/eavesdropping devices: 2-10

      Smart doorbell/light switch/temperature sensor/weather station/co2|co detector/flood detector/bulb/led strip/led light/nanoleaf/garage door: 4-16

      Some cars: 0-2

      Some smart watches speak wifi: 0-4

      Computers.. maybe the desktops are wired (likely still support wifi), all laptops, chromebooks, and tablets : 3-8

      All game consoles, many TVs, some computer monitors: 3-8

      Some smart appliances: 0-4 (based on recent news of ads, best to aim for 0)

      • chrismorgan 11 hours ago

        The numbers still feel pretty outlandish to me.

        The biggest factor in your count, and I think it is the one with the highest ceiling, is smart devices. Trouble is, even by sources like https://www.consumeraffairs.com/homeowners/average-number-of..., around half of all households still have zero, and the average household has only 2.6 people.

        In this thread (from its root), we have various users defending the reasonableness of the numbers, some providing numbers in their own houses: 10, 11, 14, 17, 19, 23, 28, 34, over 50, 60+. Averaging, I’ll say, about 27, and that’s with two pretty big outliers—if you excluded them (maybe reasonable, maybe not), you’d be down to 19.5. And these sorts of users are already likely to be above-average (it’s the nature of HN), compounded by them being the ones commenting (confirmation bias). Yet already (with the fiddling of removing what I’m calling outliers) they’re under the claimed average.

        And for each one of them, there’s another household with zero smart home devices; and the 20% of the population with no broadband are, I imagine, effectively using zero wifi devices, though discounting in this way is a little too simplistic. However you look at it, the average will drop quite a bit.

        In fact, if you return to the original 27 and simplify the portion of the population without smart home devices to a 30% zero rate (mildly arbitrary, but I think reasonable enough as a starting point) and let the other 70% be average… your 27 has dropped to about 19. In order to reach the 21 across the population, you’d need to establish these HN users, defenders of high wifi device counts, to be below-average users of wifi devices, which is implausible.
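
        Spelling out that arithmetic (treating the "over 50" and "60+" replies as 50 and 60):

          # Device counts reported in this thread, as listed above.
          reported = [10, 11, 14, 17, 19, 23, 28, 34, 50, 60]

          avg_all = sum(reported) / len(reported)                 # ~26.6 -> "about 27"
          avg_trimmed = sum(reported[:-2]) / (len(reported) - 2)  # 19.5 without the outliers
          diluted = 0.7 * avg_all  # 30% of households at zero, the rest at the average -> ~18.6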

        If the number was 10, I’d consider it plausible, though honestly I’d still expect the number to be lower. But I think my reasoning backs up my initial feeling that 21 is pretty outlandish for your national average. I’d like to see Deloitte Insights’ methodology; I reckon it’s a furphy. I bet it’s come from some grossly misleading survey data, or from sales figures of devices that are wifi-capable even though half of them never get used that way, or from terrible sampling bias (surveys are notorious for that), or something like that. Wouldn’t be the first wildly wrong or grossly misleading result one of those sorts of companies has published.

    • isaacdl a day ago

      I live alone, and having just counted, I have 10 in regular use. A few more can connect to Wi-Fi but aren't connected (why would I want my tower fans on the internet, anyway?).

      I had probably 20 prior to swapping out some smart light bulbs and switches for Zigbee.

      21 for an average household isn’t nuts.

    • tempestn a day ago

      34 devices connected to my router at the moment, 8 wired and 26 wifi. About 8 of the wifi devices are phones, tablets, and laptops; the rest are various iot things: locks, plugs, alarm, thermostat, water heater, doorbell, etc.

    • jsight 20 hours ago

      It is pretty easy to get there when everyone has a phone, a laptop, and there are a few shared tablets around. Add work + personal machines and it goes up a bit more.

      Add a few wifi security cameras and other IoT devices and 30+ is probably pretty common.

    • rdschouw a day ago

      I got 28 online right now according to my Eero. 3 people, with smartphones and laptops. Several game consoles, a few Apple TVs and music streaming devices, Ring camera, Zwave Hub, printer, washing machine, garage opener, Ring doorbell and an assortment of Echo dots.

    • seemaze a day ago

      I just checked:

      I currently have 23; my parents' house has 19.

      People have all kinds of stuff on wifi these days - cameras, light bulbs, dishwashers, irrigation, solar, hifi..

    • pixl97 a day ago

      I'm probably not average, but I have over 50 wifi devices registered on my UBNT system and 15 wired.

    • IshKebab a day ago

      Doesn't seem unreasonable. Look at your router. I have 17 and I would say we're a totally normal household - the kids don't even have phones yet.

      We have 2 phones, a tablet for the kids, a couple of Google Homes, a Chromecast, 2 Yoto players, a printer, a smart TV, 2 laptops, a Raspberry Pi, a solar power inverter, an Oculus Quest, and a couple of things that have random hostnames.

      It adds up.

    • drob518 a day ago

      Yep. And each of your neighbors also has that many devices and you’re all sharing the same channels.

    • lynndotpy a day ago

      And that's not to mention everything else on the 2.4GHz band :) Bluetooth, zigbee, your microwave, etc

    • MrZander a day ago

      That seems very high to me. A family of four each has 5 devices connected at the same time?

      • tzs 21 hours ago

        I'm single and have 11 devices on 2.4 GHz:

          Wireless temperature monitor
          Sync module for some Blink cameras
          2 smart plugs
          Roomba
          5 smart lights
          RPi 3
        
        3 of the smart lights I currently don't need, so they aren't actually connected. That leaves 8 connected 2.4 GHz devices.

        On 5 GHz I've got 16 devices:

          Amazon Fire Stick
          iPad
          Printer
          Echo Show
          Apple Watch
          Surface Pro 4
          iMac
          Nintendo Switch
          EV charger
          Mac Studio
          A smart plug
          Google Home Mini
          Echo Dot
          RPi 4
          Kindle
          iPhone
        
        The iMac and the Surface Pro 4 are almost never turned on, and the printer is off most of the time as well. That leaves 13 regularly connected 5 GHz devices.

        That's a total of 21 devices usually connected on my WiFi, right at what the article says is average. :-)

      • paxys a day ago

        Smartphone, laptop, tablet, watch - that's 4 already. And this isn't just counting personal devices. Include TV, streaming stick, game console, printers, bulbs, plugs, speakers, doorbell, security cameras, thermostat and you'll hit that number pretty quick.

        • MrZander a day ago

          There are 16 devices on my Wi-Fi right now, and I would've thought I was above average. I have a bunch of weird stuff, like 3 Raspberry Pis, that most households would not have, but I don't have most of the stuff you listed.

          I guess I am less "connected" than the average American. Can't say I feel like I am missing out, though.

      • kllrnohj a day ago

        Check your network and see how many wifi devices you have. I'm up to 60+ thanks to a handful of IoT devices, smart speakers, etc... It adds up quick.

      • drob518 a day ago

        Most of your mobile devices are doing background tasks. It’s not typically high bandwidth stuff, but they are connected even when you aren’t using them.

    • commandersaki a day ago

      I count 14 in a 2 person household, 4 bedroom house; 3 wired.