Redbox left PII on decommissioned machines

(digipres.club)

134 points | by NotPractical 10 hours ago

58 comments

  • ChicagoDave 2 hours ago

    I worked at RedBox in 2010. C# with embedded Lua for the screens. The intent was to build a flexible architecture for CoinStar to use on many kiosk businesses.

    The PII is likely log files that should have been erased nightly, but I don’t remember.

    I know the guy that designed the architecture. He’s a friend that I’ve argued with about over-engineering things. He never cared if people understood his work, which is a common theme with old school engineering.

  • xnorswap 2 hours ago

    > Redbox.HAL.Configuration

    > .ConfigurationFileService implements IConfigurationFileService

    > STOP MAKING SERVICES AND FACTORIES AND INTERFACES AND JUST READ THE FUCKING

    > JSON FILE YOU ENTERPRISE FUCKERS

    I know it's cool to "hate" on OO, but "just read the fucking file" doesn't work if you want to run your unit tests without reading a fucking file.

    It makes sense to abstract configuration behind an interface so you can easily mock it out or implement it differently for unit testing.

    Perhaps you also want to have some services configured through a database instead.

    This isn't a ConfigurationFileServiceFactoryFactory.
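
    For concreteness, a minimal sketch of the kind of abstraction being defended here (the names are hypothetical, loosely echoing the Redbox ones quoted above):

        using System.Collections.Generic;
        using System.IO;
        using System.Text.Json;

        // Hypothetical sketch: a tiny configuration abstraction that production
        // backs with a JSON file and tests back with an in-memory fake or a mock.
        public interface IConfigurationService
        {
            string GetValue(string key);
        }

        public sealed class ConfigurationFileService : IConfigurationService
        {
            private readonly Dictionary<string, string> _values;

            public ConfigurationFileService(string path) =>
                _values = JsonSerializer.Deserialize<Dictionary<string, string>>(
                    File.ReadAllText(path)) ?? new();

            public string GetValue(string key) => _values[key];
        }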

    • proikan an hour ago

      Isn't ```dependency injection``` (aka passing arguments) the big thing that's supposed to solve this?

        Config config;
        // production
        config_from_file(&config, "config.json");
        run_production_stuff(&config);
        
        // unit tests
        Config config;
        config_from_memory(&config, &some_test_values);
        run_tests(&config);
      • xnorswap an hour ago

        Yes, and the typical pattern for .NET DI is to do so with interface based parameters.

        So let's say you have a service FooService that requires some configuration.

        (Ignoring the System.Configuration namespace for now.)

        You'd have:

            class FooService(IConfigurationService configurationService)
            {
                // Access configuration through configurationService
            }
        
        
        Then elsewhere you'd set up your DI framework to inject your ConfigFileService to satisfy IConfigurationService in prod.

        Yes, it can sometimes feel a bit like "turtles all the way down", and sometimes you just wish you had a bunch of concrete implementations.

        In unit tests, you'd auto-mock IConfigurationService. For integration tests you might provide a different concrete resolution.

        There are some advantages to service based DI though. The standard ASP.NET DI framework makes it trivially easy to configure it as a singleton, or per-request-lifetime, or per-instantiation, without having to manually implement singleton patterns.

        This gives you good control over service lifetime.
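
        For reference, a rough sketch of what that registration might look like with the standard Microsoft.Extensions.DependencyInjection container (FooService, IConfigurationService and ConfigurationFileService are the hypothetical types from this thread):

            // ASP.NET Core-style composition root; the lifetime is chosen per registration.
            var builder = WebApplication.CreateBuilder(args);

            // Bind the interface to the production implementation (singleton here).
            builder.Services.AddSingleton<IConfigurationService, ConfigurationFileService>();

            // Scoped = per-request lifetime; Transient would mean per-instantiation.
            builder.Services.AddScoped<FooService>();

            var app = builder.Build();
            app.Run();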

        • xnorswap 18 minutes ago

          My example above is terrible, because in reality you'd have another level before this, which sorts out your global configuration, reads it and just injects service-specific parameters and configuration for each service.

          But I just wanted to illustrate idiomatic .NET DI, and on reflection picking configuration was probably the worst way to illustrate it.

    • xnorswap 2 hours ago

      Could or should there just be an `IConfigurationService` instead of a separate IConfigurationFileService? Yes, probably.

      "Interface all the things" is a bit lazy, but it's easy, especially if you have Moq as a way to auto-mock interfaces and a DI framework to setup factory methods.

      But spinning into rage just because you see an interface or abstract factory isn't healthy.

      • throwaway365x2 2 hours ago

        I don't follow .Net closely, but it seems like there should be a better alternative. Java has a library called "Mockito" that can mock classes directly without requiring an interface. I assume something similar exists for .Net, as they have similar capabilities. Making an interface for one class just so another class can be tested seems like letting the tool (the tests) determine the architecture of what it's testing. Adding complexity in the name of TDD is a close second on my list of triggers.

        There's nothing that triggers* me more than seeing an interface that only has one implementation. That's a huge code smell and often a result of premature architecture design, in my opinion. It also often leads to complexity: once you have an interface, you create a factory class/method to instantiate a "default" implementation. Fortunately it seems this is not done as often as before. Our code has no factories and only a few interfaces that actually have a practical use. The same applied to my previous workplace.

        * The trigger applies to 2024 Java code written as if it was 2004. I may have a form of PTSD after many years of interfaces and FactoryFactory, but fortunately times have changed. I don't see much of that today except in legacy systems/organizations.

        • xnorswap 2 hours ago

          I'm sure the same exists for .NET ( Moq can probably do it? ), but writing against an interface and having concrete implementations supplied by the DI framework is pretty much the ordained way to do things in .NET.

          I used to be in the "Interfaces with only a single implementation is a code smell" camp, but I prefer to follow the principle of least surprise, so going with the flow and following the way the MS standards want you to do things makes it easier to onboard developers and get people up to speed with your code base. Save "do it your own way" for those parts of the system that really require it.

          And technically the auto-generated mock is a second implementation, even if you never see it.
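
          For example, a sketch of such an auto-mocked test using Moq against the hypothetical IConfigurationService from above:

              using Moq;

              // The test never touches a real file; the auto-generated mock is
              // effectively that "second implementation" of the interface.
              var configMock = new Mock<IConfigurationService>();
              configMock.Setup(c => c.GetValue("ConnectionString")).Returns("test-db");

              var service = new FooService(configMock.Object);
              // ...exercise service and assert, with configuration fully under the test's control.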

          • Kuinox 41 minutes ago

            Moq® cannot do it. I forked Moq® and made a library that can mock classes: https://github.com/Kuinox/Myna. It does this by weaving the class you mock at compile time (you can still use the class normally).

          • throwaway365x2 an hour ago

            I think you have a good approach. I also tend to go with the flow and follow the common practice. If I tried to do "Java in C#", it would make my code more difficult to follow and decrease maintainability.

            I sometimes work on legacy C# code that we inherited from another team, and I try to follow the existing style as closely as possible. I just haven't invested enough time to make any informed decisions about how things should be.

            • Jerrrrrrry 43 minutes ago

              You are both overthinking it.

              GIT? unit tests? and i thought debuggers spoiled us?

              Although cavemen-esque in comparison to 'modernity', it wasn't a nightmare to pause/resume program flow and carefully distill every suspected-erroneous call through Console.Log(e)/stdout/IO/alert(e)/WriteLine(e) to find the fun/troublesome bits of one's program - instead of a tedious labyrinth of stack traces obfuscating any useful information, further insulted by nearly un-googlable compiler errors.

              Tests were commented-out function calls with mock data.

              If you never need to instantiate another instance of a structure so much so that it would benefit from an explicit schema for its use - whether it be an object or class inheritance or prototype chain - then sure, optimize it into a byte array, or even a proper Object/struct.

              But if it exists / is instantiated only once or twice, it is likely best optimized as raw variables - short-cutting OOP and its innate inheritance chain would be wise, as well as limiting possible OOP overhead, such as garbage collection.

                >interface in C#
              
              Coincidentally, that is where my patience for abstraction in C# finally diminished.

              yield and generators gave off an awkward, over-caramelized syntactic sugar smell as well - I saw the need, to complement namespaces/access modifiers, but felt like a small tailored class would always outweigh the negligible time-save.

    • guax 2 hours ago

      Why do you need the interface? You can extend/mock the class itself. Refactoring code is easy and cheap. There is no reason for complex abstractions that protect implementation outside of libraries and frameworks.

    • rkachowski 2 hours ago

      Most of these issues disappear with the introduction of first-class functions. There's nothing noble about the thick indirection inherent in old-school enterprise programming.
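
      For instance, a sketch of the same idea with a plain delegate instead of an interface (the names are hypothetical):

          using System;
          using System.Collections.Generic;

          // The dependency is just a function; production passes a file-reading
          // lambda, tests pass one that returns an in-memory dictionary.
          public sealed class FooService
          {
              private readonly Func<IReadOnlyDictionary<string, string>> _loadConfig;

              public FooService(Func<IReadOnlyDictionary<string, string>> loadConfig) =>
                  _loadConfig = loadConfig;

              public string ConnectionString => _loadConfig()["ConnectionString"];
          }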

  • dylan604 8 hours ago

    Take this as a lesson. If you've been a dev long enough, you've worked on a project knowing that the way it's being done isn't the best, with every intention of going back to make it better later - but not at the expense of getting the MVP up and running. You'll also have seen that "later" never actually happen, and all of those early bad decisions live on to the bitter end.

    I'm guessing not one person involved would ever have imagined their code being left on a machine sitting out in the open, exposed to the public, completely abandoned by the company.

    • flomo 4 hours ago

      They might have had the most perfectly developed decommissioning process. And nobody is going to care when their paychecks stop showing up and everything suddenly gets trucked off into receivership.

      Given the era and constraints, I don't see how it was irresponsible or 'sloppy' to have a local database on these things. This most likely is not on development.

      • raffraffraff 3 hours ago

        This.

        I often wonder what was left behind when the financial crash happened. Watching documentaries and movies about it, it looked to me like several large banks and insurers closed up shop overnight, and people were leaving the next day with boxes containing their personal effects.

    • WhyNotHugo 5 hours ago

      When a developer says “this is a temporary solution” they mean “this is a temporary solution forever”.

      • Dalewyn 4 hours ago

        There is nothing more permanent than a temporary solution, and nothing more temporary than a permanent solution.

        • throwaway365x2 38 minutes ago

          That's very true.

          When a customer has a problem, you create a solution to it. Often the problem is part of a much larger space, so you tend to have discussions of all the possible features you could implement. This is a necessary step to gain knowledge about the problem space, but it can lead you to think that the solution has to cover it all.

          Time constraints lead to a "temporary" feature that solves the customer's immediate need. Most of the time it turns out that all the other features are neither necessary nor important enough to spend time on.

          Projects without time constraints or focus tend to create a complex system that will "solve" the entire problem space. Most of the time it turns out that our assumptions are wrong and we need to do a major refactoring.

          "Temporary" solutions may not have the best code structure or architecture, but that is not necessarily bad technical debt, as many developers new to the project seem to think. Also, the time constraints of a temporary solution encourage the developer to write simple code rather than "clever" code, because they can always blame the time constraints for their lack of cleverness. If the code has been mostly unchanged for years, is somewhat maintainable and solves the customer's problems, it's not really a problem.

      • iancmceachern 2 hours ago

        They mean temporary for them

      • gscott 4 hours ago

        The beta version works fine, let's just keep it.

        • Moru 4 hours ago

          Never make the "Proof of concept" so good it is actually usable.

          • Jerrrrrrry 33 minutes ago

            Words to be forever-tentatively employed by :)

    • guax 2 hours ago

      The good practice of last year is the bad pattern of today.

    • Apocryphon 8 hours ago

      One just wonders whether the cooperative-multitasking BASIC implemented in this machine really was necessary for an MVP, or whether that just happened to be the sort of programming the developer at the time was familiar with.

      Also, this really is the 'engineering' discipline with the lowest level of craftsmanship and regard for rigor, isn't it? Because the failures are usually intangible, not directly life-threatening.

  • larodi 4 hours ago

    Are there any laws in any country governing how such equipment is supposed to be decommissioned in case of bankruptcy?

    This does not seem to be an isolated case; it just happens more and more with the advance of technology.

    • nextlevelwizard 3 hours ago

      What are they going to do? Sue the bankrupted company?

      • larodi 24 minutes ago

        Or perhaps make sure it gets sanitised before being refurbished and resold. I mean, bankruptcy always means a debt collector tries to sell assets to cover the losses, no?

        But this does not answer my question. So basically many people here are unaware whether such a legal framework exists at all.

      • Cthulhu_ 3 hours ago

        When a company goes bankrupt, a curator comes in who oversees the decommissioning of the company and its assets; I don't know whether a curator is a government agent or a private company (probably either), but they become responsible for the company and, in this case, the customer data.

      • michaelt 3 hours ago

        If instead of a load of old computers the bankrupt company had a pile of old tyres, I'd expect the receivers winding up the bankrupt company would dispose of the old tyres properly, paid for with the company's assets before anything is distributed to creditors.

        • nextlevelwizard 2 hours ago

          If someone creates this fucked up compute stack and handles the customer data this poorly, what makes you think they would be any better at disposal even if there were laws around it? It is not like lawmakers have any idea what is going on either.

  • jordigh 8 hours ago

    Where does Foone keep finding this stuff?

    Earlier, Foone finds a NUC:

    https://news.ycombinator.com/item?id=41294585

    • raffraffraff 3 hours ago

      I know an employee in an IT company. He told me that they have hundreds of decommissioned laptops and no time to wipe them. And they won't pay someone else to do it because it's too expensive. So right now they are in storage. If they go bust, the storage company will likely dump them.

      I've seen a lot of stuff in e-waste. There are several facilities within 5 miles of my home, and you can walk right in and drop off your old TVs, toasters and laptops in large metal bins. And if you have a quiet word with the attendant you can usually walk off with stuff that someone else dropped off. "Good for the environment mate! I can use this for parts!"

      If social engineering works at banks, you can be damn sure it works at an e-waste facility. And if that fails, a few bank notes help.

      I don't do this, but I have intercepted e-waste coming from family and friends. In one case I found a treasure trove of photos from my deceased sister. Nobody had ever seen them before. I also found personal documents, internet history and saved passwords in the browser, which got me into her iCloud account which, until then, nobody could access. This led to more photos and documents. And it was all destined for e-waste.

      • michaelt 3 hours ago

        > I've seen a lot of stuff in e-waste. [...] if you have a quiet word with the attendant you can usually walk off with stuff that someone else dropped off.

        In my country, there are specialist e-waste disposal companies large IT organisations can hire, which guarantee to remove and shred the hard drives before recycling the rest.

        • Crosseye_Jack 2 hours ago

          >> And they won't pay someone else to do it because it's too expensive.

          It's not that entities that will correctly handle the e-waste don't exist; it's that the company doesn't want to pay to have it done, especially when storage is cheap - let those containers of old laptops and towers be someone else's (budgetary) problem.

        • theshrike79 2 hours ago

          "they won't pay someone else to do it because it's too expensive"

          This is the relevant bit. It's cheaper just to pile the old computers in storage vs doing something about it.

      • cpach 2 hours ago

        Hopefully that company had enabled FDE on those laptops. (It still would be prudent to wipe them before recycling, of course.)

    • hggigg 2 hours ago

      I know someone who worked at a recycler that was paid to physically destroy media and certify it as destroyed. It took time and money to destroy media, so it just sat in a warehouse for months and the paperwork was lies. Eventually they went bankrupt and another company bought the stock and sold it on eBay by the pallet load, as is.

      The only companies that do a proper job are the ones that turn up to your office and shred the hardware in front of you. Paperwork is worth shit otherwise.

  • misiek08 7 hours ago

    I would love to see people start engineering instead of overengineering, scared of exposure like this happening to them. The one thing I can't disagree with more: it's not the graduates who write ServiceFactoryBuilder where I work. It's the guys with 12-15 years of experience, who started with bad PHP, are now "Senior Java", but haven't learned much since the beginning. This is what corporate software looks like.

    • eichin 6 hours ago

      "1 year of experience, 15 times"

  • theanonymousone 4 hours ago

    > JUST READ THE FUCKING JSON FILE YOU ENTERPRISE FUCKERS

    On a codebase I worked on, accepting an expanded range of data in a field required modifying seven files across three repositories and two (or three) programming languages. Someone thought that was the right way to do it.

  • asp_hornet 5 hours ago

    Sounds like most idiomatic C# codebases I've seen. Words can't express how happy I am to be outside that bubble.

  • gU9x3u8XmQNG 40 minutes ago

    I once worked for an MSP that would reuse decommissioned storage for other customers and services.

    Tried to convince them to stop, and even set up a service on my lunch breaks to try to sanitise them using the open guidelines available at the time - an automated service that displayed drive sanitisation status on a text-based panel next to a multi-disk bay caddy in the storage room.

    I resigned because of this, and many other similar business practices - all just to make more money (it was not malicious otherwise).

    The business is rather successful to this day. I only hope they have stopped such activities.

    And yet I still lose sleep over this...

  • nightski 3 hours ago

    Foone seems kind of childish.

  • excalibur 7 hours ago

    I just watched a video earlier today about how, if you find a working Redbox, you can get movies out of it without getting charged, although you still have to enter your payment info. This prospect suddenly sounds even less appealing.

    https://www.youtube.com/watch?v=ucsYziSl7xk

    • JKCalhoun 6 hours ago

      Seen what the rentals are on a Redbox?

      Every time I would go browse their movie collection I always walked away uninterested in anything. Most of it was what I call direct-to-inflight-video.

      • shiroiushi 6 hours ago

        Judging by the movies I've seen on international flights in the past few years, I think this is really unfair to inflight video. (i.e., the available movies were all highly-rated theater movies, not direct-to-video garbage.)

        Perhaps a better description would be "even worse than Netflix originals"...

        • Larrikin 6 hours ago

          I question whether this has ever been true outside of old-timey planes from decades back where everyone had to watch the same movie. If you were on a decent international flight you could sometimes even see a movie still in theaters.

          The only real regression I've seen in my lifetime in in-flight entertainment is ANA removing their partnership with Nintendo to run SNES emulators on all the seats. A 14-hour flight to Narita used to involve Super Mario All Stars.

          • ipaddr 4 hours ago

            I didn't bring 5 dollars cash the first time I took a plane from the east coast to SF, so I ended up watching a movie on a projector screen without sound and reading through Microsoft Foundation Classes books. On the way back I had my 5 dollars ready.

            The movie was about a brother who returns to a small southern town to visit his sister. He ends up staying and helping her with the kids, but his irresponsible ways cause frustration for the sister and the kids he is watching. He leaves. End of movie.

            Truly awful. I wonder what the first movie was like.

  • Spivak 6 hours ago

    Any C# devs wanna explain the XML thing? To me, having a separate class to deserialize each kind of XML document into its respective object seems nice and the "right" way to use XML. The class just becomes the config file. Generic loaders that poke at the tree are super brittle. Does C# come with magic to do this better?

    Because if you have to invent the config file then isn't that creating a DSL and we're back to over engineering?

    • jonathanlydall 5 hours ago

      Active C# dev here, but I haven’t read the article.

      For configuration these days XML is generally not used; there is a configuration system which can use a variety of underlying sources (like environment variables and JSON files), and you can either access these settings by key from a dictionary or trivially hydrate plain old C# classes, including ones with collections.
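
      For example, a rough sketch of that configuration system (the section and class names here are made up):

          using Microsoft.Extensions.Configuration;

          var config = new ConfigurationBuilder()
              .AddJsonFile("appsettings.json", optional: true)
              .AddEnvironmentVariables()
              .Build();

          // Access a setting by key...
          var logPath = config["Logging:Path"];

          // ...or hydrate a plain C# class from a section.
          var kiosk = config.GetSection("Kiosk").Get<KioskOptions>();

          public class KioskOptions
          {
              public string MachineId { get; set; }
              public int RetentionDays { get; set; }
          }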

      People may still manually read their own configuration independent of this system or perhaps they’re just generally deserialising XML.

      There are (I think) at least a few ways to work with XML in .NET.

      For well-known schemas I generally recommend the C# class approach, where you simply deserialize a file into well-typed C# classes.

      From your question it sounds like the XML API which allows you to arbitrarily query or manipulate XML directly was used here. I have on occasion used this when I don’t have the full schema for the XML available and need to tweak a single element or search for something with XQuery. It’s a useful tool for some scenarios, but a poor choice for others.
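
      To make the two approaches concrete, a sketch (the XML layout and the KioskConfig class are made up):

          using System.IO;
          using System.Xml.Linq;
          using System.Xml.Serialization;

          // 1. Well-known schema: deserialize straight into a typed class.
          var serializer = new XmlSerializer(typeof(KioskConfig));
          using (var stream = File.OpenRead("kiosk.xml"))
          {
              var typed = (KioskConfig)serializer.Deserialize(stream);
          }

          // 2. Unknown or partial schema: query or tweak elements directly with LINQ to XML.
          var doc = XDocument.Load("kiosk.xml");
          var timeout = doc.Root?.Element("Network")?.Element("TimeoutSeconds")?.Value;

          public class KioskConfig
          {
              public string MachineId { get; set; }
              public int TimeoutSeconds { get; set; }
          }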

      • Adachi91 3 hours ago

        System.Xml has been around for a very long time. I use it in one of my tools as an MSN deserialization class to make all my old MSN history "readable" by traversing the nodes and picking out the sending/receiving user, timestamp, and message.

    • paranoidrobot 5 hours ago

      I stopped reading when it got all shouty, and a quick scan doesn't show whether it's got actual sample code.

      It's also been a long while since I wrote any C# for real, and even longer since I had to deal with .NET's XmlSerializer.

      It used to be a pain in the backside to deal with XML in .NET if you wanted to just deserialise a file to an object. You'd need to mark up the class with attributes and stuff.

      I remember being very happy when JSON.NET came out and we could just point at just about any standard object and JSON.NET would figure it out without any extra attributes.

      I'm not sure if XmlSerializer ever caught up in that regard.
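
      Roughly the convenience being described (KioskSettings here is a hypothetical plain class with no serialization attributes):

          using System.IO;
          using Newtonsoft.Json;

          // JSON.NET figures out the mapping from property names alone.
          var settings = JsonConvert.DeserializeObject<KioskSettings>(
              File.ReadAllText("config.json"));

          public class KioskSettings
          {
              public string MachineId { get; set; }
              public int TimeoutSeconds { get; set; }
          }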

      e: Also, without seeing the code and where else it might be deployed it's not easy to know whether this is a case of over-engineering.

      This same code might be deployed to a phone, webserver, lambda, whatever - so having an IConfigFileLoader might make 100% sense if you need to have another implementation that'll load from some other source.

  • dbg31415 4 hours ago

    I mean that’s scary, but at this point is there anybody to hold accountable?

  • add-sub-mul-div 8 hours ago

    I love how the all-caps disgust is reserved for the greatest sin, the overengineering.

  • auguzanellato 5 hours ago

    Looks like the Mastodon instance got the HN hug of death - has anybody archived it?