I'm always a bit surprised how long it can take to triage and fix these pretty glaring security vulnerabilities. October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed. Sure, the actual bug ended up being (what I imagine to be) a <1hr fix, plus the time for QA testing to make sure it didn't break anything.
Is the issue that people aren't checking their security@ email addresses? People are on holiday? These emails get so much spam it's really hard to separate the noise from the legit signal? I'm genuinely curious.
In my experience, it comes down to project management and organizational structure problems.
Companies hire a "security team" and put them behind the security@ email, then decide they'll figure out how to handle issues later.
When an issue comes in, the security team tries to forward the security issue to the team that owns the project so it can be fixed. This is where complicated org charts and difficult incentive structures can get in the way.
Determining which team actually owns the code containing the bug can be very hard, depending on the company. Many security team people I've worked with were smart, but not software developers by trade. So they start trying to navigate the org chart to figure out who can even fix the issue. This can take weeks of dead-ends and "I'm busy until Tuesday next week at 3:30PM, let's schedule a meeting then" delays.
Even when you find the right team, it can be difficult to get them to schedule the fix. In companies where roadmaps are planned 3 quarters in advance, everyone is focused on their KPIs and other acronyms, and bonuses are paid out according to your ticket velocity and on-time delivery stats (despite PMs telling you they're not), getting a team to pick up the bug and work on it is hard. Again, it can become a wall of "Our next 3 sprints are already full with urgent work from VP so-and-so, but we'll see if we can fit it in after that."
Then legal wants to be involved, too. So before you even respond to reports you have to flag the corporate counsel, who is already busy and doesn't want to hear it right now.
So half or more of the job of the security team becomes navigating corporate bureaucracy and slicing through all of the incentive structures to inject this urgent priority somewhere.
Smart companies recognize this problem and will empower security teams to prioritize urgent things. This can cause another problem where less-than-great security teams start wielding their power to force everyone to work on not-urgent issues that get spammed to the security@ email all day long demanding bug bounties, which burns everyone out. Good security teams will use good judgment, though.
Oh man this is so true. In this sort of org, getting something fixed out-of-band takes a huge political effort (even a critical issue like having your client database exposed to the world).
While there were numerous problems with the big corporate structures I worked in decades ago where everything was done by silos of specialists, there were huge advantages. No matter where there was a security, performance, network, hardware, etc. issue, the internal support infrastructure had the specialist’s pagers and for a problem like this, the people fixing it would have been on a conference call until it was fixed. There was always a team of specialists to diagnose and test fixes, always available developers with the expertise to write fixes if necessary, always ops to monitor and execute things, always a person in charge to make sure it all got done, and everybody knew which department it was and how to reach them 24/7.
Now if you needed to develop something not-urgent that involved, say, the performance department, database department, and your own, hope you’ve got a few months to blow on conference calls and procedure documents.
For that industry it made sense though.
A lot of the time it’s less “nobody checked the security inbox” and more “the one person who understands that part of the system is juggling twelve other fires.” Security fixes are often a one-hour patch wrapped in two weeks of internal routing, approvals, and “who even owns this code?” archaeology. Holiday schedules and spam filters don’t help, but organizational entropy is usually the real culprit.
It could also be someone "practicing good time management."
They have a specific time of day when they check their email, they give only 30 minutes to it, and they work from most recent down.
The email comes in two hours earlier, and by the time they check their email it's been buried under 50 spams and near-spams, each of which needs to be checked, so they run out of their 30 minutes before they get to it. The next day, by email-check time, another 400 spams have been thrown on top.
Think I'm kidding?
Many folks that have worked for large companies (or bureaucracies) have seen exactly this.
> A lot of the time it’s less “nobody checked the security inbox” and more “the one person who understands that part of the system is juggling twelve other fires.”
At my past employers it was "The VP of such-and-such said we need to ship this feature as our top priority, no exceptions"
I once had a whole sector of a fintech go down because one DevOps person ignored daily warning emails, for three months, that an API key was about to expire and needed to be reset.
And of course nobody remembered the setup, and logging was only accessible by the same person, so figuring it out also took weeks.
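The cheap countermeasure for that failure mode is to make expiry warnings land somewhere the whole team sees instead of one inbox. A minimal sketch, where the key inventory and webhook URL are made-up stand-ins for your secrets manager and chat channel:

```python
from datetime import datetime, timedelta, timezone

import requests

# Hypothetical inventory; in practice this would come from your secrets
# manager or the vendor's API rather than being hardcoded.
KEY_EXPIRIES = {
    "payments-api": datetime(2026, 3, 1, tzinfo=timezone.utc),
    "box-integration": datetime(2026, 1, 15, tzinfo=timezone.utc),
}
TEAM_WEBHOOK = "https://hooks.example.com/services/team-channel"  # shared, not personal

def warn_on_expiring_keys(window: timedelta = timedelta(days=30)) -> None:
    now = datetime.now(timezone.utc)
    for name, expires in KEY_EXPIRIES.items():
        if expires - now <= window:
            # Post to a channel with an on-call rotation, so no single
            # person's vacation or full inbox can swallow the warning.
            requests.post(TEAM_WEBHOOK, json={
                "text": f"API key '{name}' expires on {expires:%Y-%m-%d}. Rotate it."
            })

if __name__ == "__main__":
    warn_on_expiring_keys()
```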
I'm currently on the other side of this trying to convince management that the maintenance that should have been done 3 years ago needs to get done. They need "justification".
It's not about fixing it, it's about acknowledging it exists
security@ emails do get a lot of spam. It doesn't get talked about very much unless you're monitoring one yourself, but there's a fairly constant stream of people begging for bug bounty money for things like the Secure flag not being set on a cookie.
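(For context, that class of report is usually a one-line fix, which is part of why it reads as noise. A minimal Flask sketch, with the route and cookie names invented for illustration:)

```python
from flask import Flask, make_response

app = Flask(__name__)

@app.route("/login")
def login():
    resp = make_response("logged in")
    # Secure: cookie only travels over HTTPS; HttpOnly: invisible to page JS;
    # SameSite=Lax: not sent on most cross-site requests.
    resp.set_cookie("session", "opaque-session-token",
                    secure=True, httponly=True, samesite="Lax")
    return resp
```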
That said, in my experience this spam is still a few emails a day at most; I don't think there's any excuse for not immediately patching something like that. I guess maybe someone's on holiday, like you said.
This.
There is so much spam from random people about meaningless issues in our docs. AI has made the problem worse. Determining the meaningful from the meaningless is a full time job.
This is where “managed” bug bounty programs like BugCrowd or HackerOne deliver value: only telling you when there is something real. It can be a full time job to separate the wheat from the chaff. It’s made worse by the incentive of the reporters to make everything sound like a P1 hair-on-fire issue.
Half of the emails I used to get at a previous company were about pointless issues, some coming from a honeypot.
The other half was people demanding payment.
Use AI for that :)
Well, we have 600 people in the global response center I work at, and the priority issue count is currently 26,000. That means each of those is serious enough that it's been assigned to someone. There are tens of thousands of unassigned issues because the triage teams are swamped. People don't realize that as systems get more complex, issues increase. They never decrease. And the chimp troupe's response has always been a story: we can handle it.
Another aspect to consider: when you reduce the permissions anything has (like the returned token here), you risk breaking something.
In a complex system it can be very hard to understand what will break, if anything. In a less complex system, it can still be hard to understand if the person who knows the security model very well isn't available.
Not every organization prioritizes being able to ship a code change at the drop of a hat. This often requires organizational dedication to heavy automated testing and CI, which small companies often aren't set up for.
I can't believe that any company takes a month to ship something. Even if they don't have CI, surely they'd prefer to break the app (maybe even completely) than risk having all their legal documents exfiltrated.
It’d be pretty reasonable to take the whole API down in this scenario, and put it back up once it’s patched. They’d lose tons of cash but avoid being liable for extreme amounts of damages.
> I can't believe that any company takes a month to ship something.
Outside of startups and big tech, it's not uncommon to have release cycles that are months long. Especially common if there is any legal or regulatory involvement.
> October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed
There is always the simple answer: these are lawyers, so they are probably scrambling internally to write a response that covers themselves legally while also trying to figure out how fucked they are.
1 week is surprisingly not that slow.
> October 27, 2025 disclosure and November 4, 2025 email confirmation seems like a long time to have their entire client file system exposed
I have unfortunately seen way worse. If it will take more than an hour and the wrong people are in charge of the money, you can go a pretty long time with glaring vulnerabilities.
I call that one of the worrisome outcomes of "Marketing Driven Development", where the business people don't let you do technical-debt "stories" because you REALLY need to do work that justifies their existence in the project.
I'm a bit conflicted about what responsible disclosure should be, but in many cases it seems like these conditions hold:
1) the hack is straightforward to do;
2) it can do a lot of damage (get PII or other confidential info in most cases);
3) downtime of the service wouldn't hurt anyone, especially if we compare it to the risk of the damage.
But, instead of insisting on the immediate shutting down of the affected service, we give companies weeks or months to fix the issue while notifying no one in the process and continuing with business as usual.
I've submitted 3 very easy exploits to 3 different companies in the past year and, thankfully, they fixed them in about a week every time. Yet the exploits were trivial (as I'm not good enough to find the hard ones, I admit). Mostly IDORs, like changing id=123456 to id=1 all the way up to id=123455 and seeing a lot of medical data that doesn't belong to me. All 3 cases were medical labs, because I had to have some tests done and wanted to see how secure my data was.
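(For anyone who hasn't seen one: an IDOR test is about as simple as exploits get. A hedged sketch against a made-up endpoint, only ever appropriate against systems you're authorized to test:)

```python
import requests

# Hypothetical endpoint and IDs, for illustration only.
BASE_URL = "https://results.example-lab.com/api/results"
MY_RECORD_ID = 123456  # an ID that legitimately belongs to you

session = requests.Session()
session.headers["Authorization"] = "Bearer <your own valid token>"

for candidate in (MY_RECORD_ID - 1, MY_RECORD_ID - 1000, 1):
    resp = session.get(BASE_URL, params={"id": candidate})
    # A 200 with someone else's record, instead of a 403/404, means the
    # server never checks ownership: a textbook IDOR.
    print(candidate, resp.status_code)
```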
Sadly, in all 3 cases I had to send a follow-up e-mail after ~1 week, saying that I'd make the exploit public if they didn't fix it ASAP. What happened was, again in all 3 cases, that the exploit was fixed within 1-2 days.
If I'd given them a month, I feel they would've fixed the issue after a month. If I'd given them a year, after a year.
And it's not like there aren't 10 different labs in my city. It's not like online access to results is critical, either. You can get a printed result or call them to write them down. Yes, it would be tedious, but more secure.
So I should've said from the beginning something like:
> I found this trivial exploit that gives me access to medical data of thousands of people. If you don't want it public, shut down your online service until you fix it, because it's highly likely someone else figured it out before me. If you don't, I'll make it public and ruin your reputation.
Now, would I make it public if they don't fix it within a few days? Probably not, but I'm not sure. But shutting down their service until the fix is in seems important. If it was some hard-to-do hack chaining several exploits, including a 0-day, it would be likely that I'd be the first one to find it and it wouldn't be found for a while by someone else afterwards. But ID enumerations? Come on.
So does the standard "responsible disclosure", at least in the scenario I've given (easy to do; not critical if the service is shut down), help the affected parties (the customers) or the businesses? Why should I care about a company worth $X losing $Y if it's their fault?
I think in the future I'll anonymously contact companies with much stricter deadlines if their customers (or others) are at serious risk. I'll lose the ability to brag under my real name, but I can live with that.
As to the other comments talking about how spammed their security@ mail is - that's the cost of doing business. It doesn't seem like a valid excuse to me. Security isn't one of hundreds random things a business should care about. It's one of the most important ones. So just assign more people to review your mail. If you can't, why are you handling people's PII?
If they have a billion dollar valuation, this fairly basic (and irresponsible) vulnerability could have cost them a billion dollars. If someone with malice had been in your shoes, in that industry, this probably wouldn't have been recoverable. Imagine a firm's entire client communications and discovery posted online.
They should have given you some money.
Exactly.
They could have sold this to a ransomware group or affiliate for 5-6 figures, and then the ransomware group could have exfiltrated the data and attempted to extort the company for millions.
Then, if they didn't pay and the ransomware group leaked the info to the public, they'd likely have to spend millions on lawsuits and fines anyway.
They should have paid this dude 5-6 figures for this find. It's scenarios like this that lead people to sell these vulns on the gray/black market instead of traditional bug bounty whitehat routes.
They should have given him a LOT of money.
Would you settle for a LOT of free AI generated legal advice? ;)
I work for a finance firm and everyone is wondering why we can store reams of client data with SaaS Company X, but not upload a trust document or tax return to AI SaaS Company Y.
My argument is we're in the Wild West with AI and this stuff is being built so fast with so many evolving tools that corners are being cut even when they don't realize it.
This article demonstrates that, but it does raise the question of why you'd trust one and not the other when they both promise the same safeguards.
FWIW this company was founded in 2014 and appears to have added LLM-powered features relatively recently: https://www.reuters.com/legal/transactional/legal-tech-compa...
Does SaaS X/Cloud offer IAM capabilities? Or going further, do they dogfood their own access via the identity and access policies? If so, and you construct your own access policy, you have relative peace of mind.
If SaaS Y just says "Give me your data and it will be secure", that's where it gets suspect.
The question is what reason did you have to trust SaaS Company X in the first place?
Because it's the Cloud and we're told the cloud is better and more secure.
In truth the company forced our hand by pricing us out of the on-premise solution and will do that again with the other on-premise we use, which is set to sunset in five years or so.
SaaS is now a "solved problem"; almost all vendors will try to get SOX/SOC 2 compliance (and more for sensitive workloads). Although... it's hard to see how these certifications would have prevented something like this :melting_face:.
It doesn't sound like your firm does any diligence that would actually prevent you from buying a vendor that has security flaws.
> My argument is we're in the Wild West with AI and this stuff is being built so fast with so many evolving tools that corners are being cut even when they don't realize it.
The funny thing is that this exploit (from the OP) has nothing to do with AI and could be <insert any SaaS company> that integrates into another service.
And nobody seems to pay attention to the fact that modern copiers cache copies on a local disk, and if the machines are leased and swapped out, the next party that takes possession has access to those copies unless somebody bothers to address it.
This was the plot of Grisham's book The Firm in 1991
This is the collision between two cultures that were never meant to share the same data: "move fast and duct-tape APIs together" startup engineering, and "if this leaks we ruin people's lives" legal/medical confidentiality.
What's wild is that nothing here is exotic: subdomain enumeration, unauthenticated API, over-privileged token, minified JS leaking internals. This is a 2010-level bug pattern wrapped in 2025 AI hype. The only truly "AI" part is that centralizing all documents for model training drastically raises the blast radius when you screw up.
The economic incentive is obvious: if your pitch deck is "we'll ingest everything your firm has ever touched and make it searchable/AI-ready", you win deals by saying yes to data access and integrations, not by saying no. Least privilege, token scoping, and proper isolation are friction in the sales process, so they get bolted on later, if at all.
The scary bit is that lawyers are being sold "AI assistant" but what they're actually buying is "unvetted third party root access to your institutional memory". At that point, the interesting question isn't whether there are more bugs like this, it's how many of these systems would survive a serious red-team exercise by anyone more motivated than a curious blogger.
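Token scoping in particular is cheaper than it sounds once you look for it. Box, for example, documents a downscoping flow built on OAuth 2.0 token exchange; the sketch below is written from memory of that flow, so treat the exact scope names and parameters as assumptions to verify against the current docs:

```python
import requests

FULL_ACCESS_TOKEN = "<broad token, kept strictly server-side>"

# Exchange a broad token for one limited to preview/download on one folder.
resp = requests.post(
    "https://api.box.com/oauth2/token",
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": FULL_ACCESS_TOKEN,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": "item_preview item_download",
        "resource": "https://api.box.com/2.0/folders/1234567890",
    },
)
resp.raise_for_status()
scoped_token = resp.json()["access_token"]
# scoped_token is the most a browser-facing component should ever receive.
```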
It's a little hilarious.
First, as an organization, do all this cybersecurity theatre, and then create an MCP/LLM wormhole that bypasses it all.
All because non-technical folks wave their hands about AI, not understanding the most fundamental reality: LLM software is so fundamentally different from all the software before it that it becomes an unavoidable black hole.
I'm also a little pleased I used two space analogies, something I can't expect LLMs to do because they have to go large with their language or go home.
Assuming a security program that clears the 101 quality bar, there are a number of reasons why this can still happen at companies.
Summarized as: security is about risk acceptance, not removal. There's massive business pressure to risk-accept AI. Risk acceptance usually means some sort of supplemental control that isn't the ideal but manages. There are very few of these for AI tools, however: the vendors are small; the integrations aren't really service accounts, though IMO monitoring them as service accounts is probably the best approach; integrations are easy to set up; and eng companies hate taking any kind of admin away from devs, but if devs keep it, random AI on endpoints becomes very likely.
I'm ignoring a lot of nuance, but a solid sec program blown open by LLM vendors is going to be common, let alone bad sec programs. Many sec teams, I think, are just waiting for the other shoe to drop for some evidentiary support, while managing heavy pressure to go full bore on AI integration until then.
My first reaction to the announcement of MCP was that I must be missing something. Surely giving an LLM unlimited access to protected data is going to introduce security holes?
Nitpick, but wormholes and black holes aren't limited to space! (unless you go with the Rick & Morty definition where "there's literally everything in space")
Not a nit pick at all friend, it is even more rabbit holes to explore.
Maybe this is the key takeaway of GenAI: that some access to data, even partially hallucinated data, is better than the hoops that security theatre puts in place, which prevent the average Joe from doing their job.
This might just be a golden age for getting access to the data you need for getting the job done.
Next security will catch up and there'll be a good balance between access and control.
Then, as always, security goes too far and nobody can get anything done.
It's a tale as old as computer security.
I am at a loss for words. This wasn't a sophisticated attack.
I'd love to know who Filevine uses for penetration testing (which they do, according to their website), because holy shit, how do you miss this? I mean, they list their bug bounty program under a pentesting heading, so I guess it's just nice internet people.
It's inexcusable.
I don't disagree with the sentiment. But let's also be honest. There is a lot of improvement to be made in security software, in terms of ease of use and overcomplicating things.
I worked at Google and then at Meta. Man, the amount of "nonsense" in the ACL system was insane. I put nonsense in quotes because, from a security point of view, it surely all made a lot of sense. But there is exactly zero chance that such a system can be used by a less technical company. It took me 4 years to understand how it worked...
So I'll take this as another data point to create a startup that simplifies security... Seems a lot more complicated than AI
The first thing that comes to my mind is SOC 2, HIPAA, and the whole security theater.
I am one of the engineers who had to suffer through countless screenshots and forms to get these, because they show that you are compliant and safe, while the real impactful things are ignored.
It's so great that they allowed him to publish a technical blog post. I once discovered a big vulnerability in a listed consumer tech company -- exposing users' private messages and also allowing to impersonate any user. The company didn't allow me to write a public blogpost.
"Allow"?
Go on write your blog post. Don't let your dreams be dreams.
Presumably they were paid for finding the bug and, in accepting, relinquished their right to blog about it.
No, you relinquish the right when you agree to their TOS, irrespective of whether they pay you.
TOS != law
They will stop letting you use the service. That's the recourse for breaking the TOS.
Up until Van Buren v. United States in 2021, ToS violations were sometimes prosecuted as unauthorized access under the CFAA. I suspect there are other jurisdictions that still do the equivalent.
Why is the control of publication in their hands and not in yours? Shouldn’t you be able to do whatever after disclosing it responsibly?
Presumably they'll threaten to sue you and/or file a criminal complaint, which can be pretty hard to deal with depending on the jurisdiction. At that point you'll probably start asking yourself if it's worth publishing a blog post for some internet points.
You'd think with a $1B valuation they could afford a pentest
> November 20, 2025: I followed up to confirm the patch was in place from my end, and informed them of my intention to write a technical blog post.
Can that company tell you to cease and desist? How does the law work?
Lawyers can and will send cease and desist letters to people whether or not there is any legal basis for it. Often the threat of a lawsuit, even a meritless one, is enough to keep people quiet.
Given the absurd number of startups I see lately whose pitch includes the words "healthcare" and "AI", I'm actually incredibly concerned that in just a couple of months we're going to have multiple enormous HIPAA data disasters.
Just search "healthcare" in https://news.ycombinator.com/item?id=46108941
My thing is, even ingesting the BOK should have been done in phases, to avoid having all your virtual eggs in one basket or nest at any ONE time. Staggering tokens across these compartments would not have cost them anything at all. I always say: whatever convenience you enjoy yourself will be highly appreciated by bad actors... WHEN, not if, they get through.
This might be off topic, since the subject is an AI tool and we're on Hacker News.
I've been pondering for a long time how one builds a startup in a domain they are not familiar with, but... I just have this urge to carve out a piece of the pie in this space. For the longest time, I had this dream of starting or building an 'AI legal tech company'. The big issue is, I don't work in the legal space at all. I did some cold outreach on law-firm-related forums, which didn't get any traction.
I later searched around and came across the term 'case management software'. From what I know, this is fundamentally what Clio is, and it makes millions if not billions.
This was 1.5 to two years ago, and since then I stopped thinking about it because of this understanding or belief I have: "how can I do a startup in legal when I don't work in this domain?" But when I look around, I see people who start companies in totally unrelated industries, from 'dental tech' companies to, if I'm not mistaken, the founder of Hugging Face, who doesn't seem to have a PhD in AI/ML and yet founded Hugging Face.
Given all that, how does one start a company in an unrelated domain? Say I want to start another case management system or attempt to clone FileVine: do I first read up on what case management software is, or do I cold-contact potential law firms who would partner up to build a SaaS from scratch? Another school of thought goes, "find customers before you have a product, to validate what you want to build"; how does that realistically work?
Apologies for the scattered thoughts...
I think if you have no domain expertise or unique insight it will be quite hard to find a real pain point to solve, deliver a winning solution, and have the ability to sell it.
Not impossible, but very hard. And starting a company is hard enough as it is.
So 9/10 times the answer will be to partner with someone who understands the space and pain point, preferably one who has lived it, or find an easier problem to solve.
I think it comes down to having some insight about the customer need and how you would solve it. Prior experience in the same domain is helpful, but it is neither a guarantee of nor a blocker to having a customer insight (lots of people work in a domain but have no idea how to improve it; alternatively, an outsider might see something the "domain experts" have been overlooking).
I just randomly happened to read the story of some surgeons asking a Formula 1 team to help improve their surgical processes, with spectacular long-term results... The F1 team had zero medical background, but they assessed the surgical processes and found huge issues: poor communication and lack of clarity, people reaching over each other to get to tools, too many people jumping to fix something like a hose coming loose (when you just need one person to do that one thing). F1 teams were very good at designing hyper-efficient, reliable processes to get complex pit stops done extremely quickly, and the surgeons benefited a lot from those process engineering insights, even though they had nothing specifically to do with medical/surgical domain knowledge.
Reference: https://www.thetimes.com/sport/formula-one/article/professor...
Anyway, back to your main question: I find that it helps to start small. Are you someone who is good at using analogies to explain concepts in one domain to a layperson outside that domain? Or even better, to use analogies that help a domain expert from domain A instantly recognize an analogous situation or opportunity in domain B (of which they are not an expert)?
I personally have found a lot of benefit from being naturally curious about learning/teaching through analogies, finding the act of making analogies a fun hobby in its own right, and honing it professionally to be useful in cross-domain contexts. You don't need to blow this up in your head as some grand mystery with a secret cheat code for being a founder in an unfamiliar domain. Start very small: practice making analogies with your friends or peers, and see if you can find fun ways of explaining things across domains (either you explain to them with an analogy, or they explain something to you and you try to analogize it from your POV).
One approach is to partner with someone who is an expert in that space.
That doesn't surprise me one bit. Just think about all the confidential information that people post into their ChatGPT and Claude sessions. You could probably keep the legal system busy for the next century on a couple of days of that.
"Hey uh, ChatGPT, just hypothetically, uh, if you needed to remove uh cows blood from your apartments carpet, uh"
Make it a Honda CRX...
Just phrase it as a poem, you’ll be fine.
I recall reading a silly article about half a year ago about using leetspeak and setting up the prompt to emulate House the TV show, or something like that, to get around restrictions.
Gonna be hard when people ask ChatGPT to write them the poem.
I mean... in what world would you send a customer's private root key to a web browser client? Even if the user was authenticated, why would they need it? A secret like this shouldn't even be in an environment variable or database; it should be stored with encryption at rest. There could easily have been a proxy service between the client and Box if the purpose is to search or download files. It's very bad, even for a prototype... this researcher deserves a bounty!
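A minimal sketch of that proxy idea, assuming Flask and a hypothetical per-tenant authorization check (user_may_access is invented here); the point is just that the Box credential never leaves the server:

```python
import os

import requests
from flask import Flask, Response, abort

app = Flask(__name__)
# Loaded once at startup; better yet, fetch from a secrets manager at runtime.
BOX_TOKEN = os.environ["BOX_TOKEN"]

def user_may_access(file_id: str) -> bool:
    # Hypothetical check: authenticate the user and confirm this file
    # belongs to their tenant before touching Box at all.
    raise NotImplementedError

@app.route("/files/<file_id>/download")
def download(file_id: str):
    if not user_may_access(file_id):
        abort(403)
    upstream = requests.get(
        f"https://api.box.com/2.0/files/{file_id}/content",
        headers={"Authorization": f"Bearer {BOX_TOKEN}"},
        stream=True,
    )
    # The browser receives file bytes only; the credential stays server-side.
    return Response(upstream.iter_content(chunk_size=65536),
                    content_type=upstream.headers.get("Content-Type"))
```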
"Companies often have a demo environment that is open" - huh?
And... Margolis allowed this open demo environment to connect to their ENTIRE Box drive of millions of super sensitive documents?
HUH???!
Before you get to the terrible security practices of the vendor, you have to place a massive amount of blame on the IT team of Margolis for allowing the above.
No amount of AI hype excuses that kind of professional misjudgement.
I don't think we have enough information to conclude exactly what happened. But my read is the researcher was looking for demo.filevine.com and found margolis.filevine.com instead. The implication is that many other customers may have been vulnerable in the same way.
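(Worth noting how low the bar for that discovery is: subdomain enumeration can be a dozen lines. A toy sketch with an abbreviated wordlist and example.com standing in for the target:)

```python
import socket

# Toy subdomain enumeration: try candidate names from a wordlist.
# Real tooling uses big wordlists and certificate-transparency logs; for
# wildcard-DNS SaaS domains you'd check HTTP responses, not just DNS.
for sub in ("demo", "staging", "dev", "test", "internal"):
    host = f"{sub}.example.com"
    try:
        print(host, "->", socket.gethostbyname(host))
    except socket.gaierror:
        pass  # doesn't resolve; move on
```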
I've worked in several "agentic" roles this year alone (I'm very poachable lol), and otherwise well-structured engineering orgs have lost their goddamn minds with move fast and break things, because they're worried that OpenAI/Google/Meta/Amazon/Anthropic will release the tool they're working on tomorrow.
Literally all of them are like this.
> ... after looking through minified code, which SUCKS to do ...
AI tends to be good at un-minifying code.
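Even without AI, a plain beautifier recovers the structure (though not the original identifier names); for example, with the jsbeautifier package:

```python
import jsbeautifier  # pip install jsbeautifier

# Read the minified bundle and pretty-print it with readable indentation.
with open("bundle.min.js") as f:
    minified = f.read()

opts = jsbeautifier.default_options()
opts.indent_size = 2
print(jsbeautifier.beautify(minified, opts))
```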
Legit question: when working on finding security issues, are there any guidelines on what you can send to LLMs/AI?
I got downvoted, so maybe that means someone thinks un-minifying code is not advised for dealing with security issues? But on reflection surely you can just use the 'format code' command in the ide? I am no expert but surely it's ok to use AI to help track down and identify security issues with the usual caveats of 'don't believe it blindly, do your double checking and risk assessing.'
Personally, I'd just use common sense and good judgment. At the end of the day, would you want someone to hand your address, and other private data to OpenAI just like that? Probably not. So don't paste customer data into it if you can avoid it.
On the other hand, minified code is literally published by the company. Everyone can see it and do with it as they please. So handing that over to an AI to un-minify is not really your problem, since you're not the developer working on the tool internally.
Of course there will be no accountability or punishment.
This guy didn't even get paid for this? We need a law that establishes mandatory payments for cybersecurity bounty hunters.
Who is Margolis, and are they happy that OP publicly announced accessing all their confidential files?
Clever work by OP. Surely there's an automated probing tool out there that has already hacked this product?
Now that's just great hacking.
Legal attacks engineering: font license fees imposed on Japanese consumers. Engineering attacks legal: the AI info dump in the post above.
What does the above sound like, and what kind of professional writes like that?
Thank you bearsyankees for keeping us informed.
I think this class of problems can be protected against.
It's become clear that the first and most important and most valuable agent, or team of agents, to build is the one that responsibly and diligently lays out the opsec framework for whatever other system you're trying to automate.
A meta-security AI framework, cursor for opsec, would be the best, most valuable general purpose AI tool any company could build, imo. Everything from journalism to law to coding would immediately benefit, and it'd provide invaluable data for post training, reducing the overall problematic behaviors in the underlying models.
Move fast and break things is a lot more valuable if you have a red team mechanism that scales with the product. Who knows how many facepalm level failures like this are out there?
> I think this class of problems can be protected against.
Of course, it’s called proper software development
And jail time for executives who are responsible for data leaks.
Are you saying executives can never make mistakes (asking because you didn't qualify your statement)?
I'm saying that if executives get praise and bonuses for when good things happen, they should also have negative consequences when bad things happen. Litigate that further how you wish.
The techniques for non-disclosure of confidential materials processed by multi-tenant services are obvious, well-known, and practiced by very few.
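One of the most basic of those techniques, as a sketch: derive the tenant identity server-side from the authenticated session and force every query through it (the table and column names here are invented):

```python
import sqlite3

def get_document(conn: sqlite3.Connection, session_tenant_id: int, doc_id: int):
    # tenant_id comes from the authenticated session, never from the client,
    # so a guessed doc_id in another tenant simply doesn't exist here.
    row = conn.execute(
        "SELECT id, name, body FROM documents WHERE id = ? AND tenant_id = ?",
        (doc_id, session_tenant_id),
    ).fetchone()
    if row is None:
        raise LookupError("no such document in this tenant")
    return row
```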