As much as I don't like facebook as a company, I think the jury reached the wrong decision here. If you read the complaint[1], "eavesdropped on and/or recorded their conversations by using an electronic device" basically amounted to "flo using facebook's SDK and sending custom events to it" (page 12, point 49). I agree that flo should be raked over the coals for sending this information to facebook in the first place, but ruling that facebook "intentionally eavesdropped" (exact wording from the jury verdict) makes zero sense. So far as I can tell, flo sent facebook menstrual data without facebook soliciting it, and facebook specifically has a policy against sending medical/sensitive information using its SDK[2]. Suing facebook makes as much sense as suing google because it turned out a doctor was using google drive to store patient records.
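For anyone unfamiliar with what "custom events" means in practice: the SDK lets an app log arbitrary, app-chosen event names and parameters, and facebook's servers ingest them like any other analytics event. A minimal sketch using the Android App Events API (the event name and parameter here are made up for illustration, not Flo's actual code):

    import android.content.Context
    import android.os.Bundle
    import com.facebook.appevents.AppEventsLogger

    // Illustrative only: an app-defined ("custom") event logged through the
    // Facebook SDK. The name and parameters are opaque strings chosen by the
    // app developer; nothing here marks the payload as sensitive.
    fun logCustomEvent(context: Context) {
        val logger = AppEventsLogger.newLogger(context)
        val params = Bundle().apply {
            putString("screen", "onboarding") // hypothetical parameter
        }
        logger.logEvent("CYCLE_DAY_LOGGED", params) // hypothetical event name
    }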
At the time of [1 (your footnote)] the only defendant listed in the matter was Flo, not Facebook, per the cover page of [1], so it is unsurprising that that complaint does not include allegations against Facebook.
The amended complaint, [3], includes the allegations against Facebook as at that time Facebook was added as a defendant to the case.
Amongst other things, the amended complaint points out that Facebook's behavior lasted for years (into 2021) after it was publicly disclosed (in 2019) that this was happening, and that even after the FTC forced Flo to cease the practice and congressional investigations were launched (2021), Facebook refused to review and destroy the data that had previously been improperly collected.
I'd also be surprised if discovery didn't provide further proof that Facebook was aware of the sort of data they were gathering here...
>At the time of [1 (your footnote)] the only defendant listed in the matter was Flo, not Facebook, per the cover page of [1], so it is unsurprising that that complaint does not include allegations against Facebook.
Are you talking about this?
>As one of the largest advertisers in the nation, Facebook knew that the data it received
>from Flo Health through the Facebook SDK contained intimate health data. Despite knowing this,
>Facebook continued to receive, analyze, and use this information for its own purposes, including
>marketing and data analytics.
Maybe something came up in discovery that documents the extent of this, but this doesn't really prove much. The plaintiffs are just assuming that because there's a clause in the ToS saying so, facebook must be using the data for advertising.
In the part of my post that you quoted I'm literally just talking about the cover page of [1] where the defendants are listed, and at the time only Flo is listed. So nothing against Facebook/Meta is being alleged in [1]. They got added to the suit sometime between that document and [3] - at a glance probably as part of consolidating some other case with this one.
Reading [1] for allegations against Facebook doesn't make any sense, because it isn't supposed to include those.
>Reading [1] for allegations against Facebook doesn't make any sense, because it isn't supposed to include those.
The quote from my previous comment was taken from the amended complaint ([3]) that you posted. Skimming that document, it's unclear what facebook actually did between 2019 and 2021. The complaint only claims flo sent data to facebook between 2016 and 2019, and after a quick skim the only connection I could find for 2021 is a report published that year slamming the app's privacy practices, which didn't call out facebook in particular.
Ah, sorry, the paragraphs in [3] I'm looking at are
21 - For the claim that there was public reporting that Facebook was presumably aware of in 2019.
26 - For the claim that in February 2021 Facebook refused to review and destroy the data they had collected from Flo to that date, and thus presumably still had and were deriving value from the data.
I can't say I read the whole thing closely though.
Facebook isn't guilty because Flo sent medical data through their SDK. If they were just storing it or operating on it for Flo, then the case probably would have ended differently.
Facebook is guilty because they turned around and used the medical data themselves to advertise without checking if it was legal to do so. They knew, or should have known, that they needed to check if it was legal to use it, but they didn't, so they were found guilty.
>Facebook is guilty because they turned around and used the medical data themselves to advertise without checking if it was legal to do so.
What exactly did this entail? I haven't read all the court documents, but at least in the initial/amended complaint the plaintiffs didn't make this argument, probably because it's totally irrelevant to the charge of whether they "intentionally eavesdropped" or not. Either they were eavesdropping or they weren't. Whether they were using the data for advertising purposes might be relevant in armchair discussions about whether meta is evil, but it shouldn't be relevant when it comes to the eavesdropping charge.
>They knew, or should have known, that they needed to check if it was legal to use it
Should large corporations be able to break the law because it's too hard for them to manage their data? Should they be immune from law suits because actively moderating their product would hurt their business model? Does Facebook have a right to exist?
You know exactly what it would look like. It would look like Facebook being legally responsible for using the data they get. If they are too big to do that or are getting too much data to do that, the answer isn't to let them off the hook. Also, let's not pretend Facebook doesn't have a 15 year history of actively misusing data. This is not a one-off event.
>Should large corporations be able to break the law because [...]
No, because this is begging the question. The point being disputed is whether facebook offering an SDK and analytics service counts as "intentionally eavesdropping". Anyone with a bit of understanding of how SDKs work should think it's not. If you told your menstrual secrets to a friend, and that friend then told me, that's not "eavesdropping" to any sane person, but that's essentially what the jury ruled here.
I might be sympathetic if facebook was being convicted of "trafficking private information" or whatever, but if that's not a real crime, we shouldn't be using "intentionally eavesdropping" as a cudgel against it just because we hate it. That goes against the whole concept of rule of law.
Institutions that handle sensitive data that is subject to access regulations generally have a compliance process that must be followed prior to accessing and using that data, and a compliance department staffed with experts who review and approve/deny access requests.
But Facebook would rather move fast, break things, pay some fines, and reap the benefits of their illegal behavior.
>Institutions that handle sensitive data that is subject to access regulations generally have a compliance process that must be followed prior to accessing and using that data, and a compliance department staffed with experts who review and approve/deny access requests.
Facebook isn't running an electronic medical records business. It has no expectation that it's going to be receiving sensitive data, and specifically discourages it. What more are you expecting? That any company dealing with bits should have a moderation team poring over all records to make sure they don't contain "sensitive data"?
>But Facebook would rather move fast, break things, pay some fines, and reap the benefits of their illegal behavior.
Running an analytics service that allows apps to send arbitrary events is "move fast, break things" now?
Yeah, I'm not sure if I'm missing something, and I don't like to defend FB, but ...
AIUI, they have a system for using data they receive to target ads. They tell people not to put sensitive data in it. Someone does anyway, and it gets automatically picked up to target ads. What are they supposed to do on their end? Even if they apply heuristics for "probably sensitive data we shouldn't use"[1], some stuff is still going to get through. The fault should still lie with the entity that passed on the sensitive data.
An analogy might be that you want to share photos of an event you hosted, and you tell people to send in their pics, while enforcing the norm, "oh make sure to ask before taking someone's photo", and someone insists that what they sent in was compliant with that rule, when it wasn't. And then you share them.
Companies don't get to do whatever they want just because they didn't put any safeguards in place to prevent illegally using the data they collected.
The correct answer is to look at the data and verify it's legal to use.
I might be sympathetic to a tiny startup facing increased costs, but it's a cost of doing business just like anything else. And Facebook has more than enough resources to put safeguards in place, and they definitely should have known better by now, so they should get punished for not complying.
> The correct answer is to look at the data and verify it's legal to use.
So repeal Section 230 and require every site to manually evaluate all content uploaded for legality before doing anything with it? If it’s not reasonable to ask sites to do that, it’s not reasonable to ask FB to do the same for data you send them.
Your position seems to vary based on how big/sympathetic the company in question is, which is not very even-handed and implicitly recognizes the burden of this kind of ask.
"We're scot free, because we told *wink* people to not sell us sensitive data. We get the benefit from it, and we make it really easy for people to sign up and get paid to give us this data that we 'don't want.'"
Please don't sell me cocaine *snifffffffff*
> The fault should still lie with the entity that passed on the sensitive data.
Some benefits to making it be both:
* Centralize enforcement with more knowledgable entities
* Enforce at a level where the misdeeds can actually be identified and have scale, rather than death from a million cuts
* Prevent the central entity from using deniable proxies and cut-throughs to do bad things
This whole notion that we want so much scale, and that scale is an excuse for not paying attention to what you're doing or exercising due diligence, is repugnant. It pushes some cost down but also causes a lot of social harm. If anything, we should expect more ownership and responsibility from those with concentrated power, because they have more ability to cause widescale harm.
>"We're scot free, because we told wink people to not sell us sensitive data. We get the benefit from it, and we make it really easy for people to sign up and get paid to give us this data that we 'don't want.'"
>Please don't sell me cocaine snifffffffff
Maybe there's something in discovery that substantiates this, but so far as I can tell there's no "wink" happening, officially or unofficially. A better analogy would be charging amazon with drug distribution because some enterprising drug dealer decided to use FBA to ship drugs, but amazon was unaware.
I don’t like the analogy because “hosting an event” is a fuzzy thing. If you are hosting an event with friends you might be able to rely on the shared values of your friends and the informal nature of the thing to enforce this sort of norm.
If you are a business that hosts events and your business model involves photos of the event, you should have a professional approach to knowing if people consented to have their photos shared, depending on the nature of the venue.
At this point it is becoming barely an analogy though.
>I don’t like the analogy because “hosting an event” is a fuzzy thing. If you are hosting an event with friends you might be able to rely on the shared values of your friends and the informal nature of the thing to enforce this sort of norm.
You can't, though -- not perfectly, anyway. Whatever the informal norms, there are going to be people who violate them, and so the fault shouldn't pass on to you when you don't know someone is doing that. If anything, the analogy understates how unreasonable it is to FB, since they had an explicit contractual agreement for the other party not to send them sensitive data.
And as it stands now, websites aren't expected to pre-filter for some heuristic on "non-consensual user-uploaded photographs" (which would require an authentication chain), just to take them down when informed they're illegal ... which FB did (the analog of) here.
>If you are a business that hosts events and your business model involves photos of the event, you should have a professional approach to knowing if people consented to have their photos shared, depending on the nature of the venue.
I'm not sure that's the standard you want to base this argument on, because in most cases, the "professional approach" amounts to "if you come here at all, you're consenting to be photographed for publication, take it or leave it lol". FB had a stronger standard than this.
> I'm not sure that's the standard you want to base this argument on, because in most cases, the "professional approach" amounts to "if you come here at all, you're consenting to be photographed for publication, take it or leave it lol". FB had a stronger standard than this.
It depends on the event and the nature of the venue. But yes, it is a bad analogy. For one thing Facebook is not an event with clearly delineated borders. It should naturally be given much higher scrutiny than anything like that.
I don't like to defend facebook either but where does this end? Does google need to verify each email it sends in case it contains something illegal? Or AWS before you store something in a publicly accessible S3 bucket?
Here's one that we really don't want to acknowledge because it may give some sympathy towards Facebook (I do not work for them but am well aware of Cambridge Analytica):
Cambridge Analytica was entirely a third party using "Click here to log in via Facebook and share your contacts" via FB's OpenGraph API.
Everyone is sure in their mind that it was Facebook just giving away all user details and that's what the scandal was about, but if you look at the details, the company was using the Facebook OpenGraph API, and users were blindly hitting 'share', including all contact details (allowing them to do targeted political campaigning), when using the Cambridge Analytica quiz apps. Facebook's fault was allowing Cambridge Analytica permission to that API (although at the time they granted pretty much anyone access to it, since they figured users would read the popups).
Now you might say "a login popup that confirms you wish to share data with a third party is not enough" and that's fair. Although that pretty much describes every OAuth flow out there really. Also think about it from the perspective of any app that has a reasonable reason to share a contacts list. Perhaps you wish to make an open source calendar and have a share calendar flow? Well there's precedent that you're liable if someone misuses that API.
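To make the flow concrete: a Facebook Login request is just an OAuth authorization URL listing the permissions the app wants, and the popup users clicked through was that scope list rendered as a dialog. A sketch of the kind of URL involved (app id, redirect, and scopes are placeholders, not Cambridge Analytica's actual app; the friends-data permissions of the v1-era API were what made the harvesting possible):

    // Illustrative OAuth authorization URL of the sort Facebook Login builds.
    // All values are placeholders.
    fun buildLoginUrl(): String {
        val clientId = "YOUR_APP_ID"                  // placeholder
        val redirect = "https://example.com/callback" // placeholder
        val scopes = listOf("public_profile", "user_friends").joinToString(",")
        return "https://www.facebook.com/dialog/oauth" +
            "?client_id=" + clientId +
            "&redirect_uri=" + redirect +
            "&scope=" + scopes
    }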
We all hate big tech. So do juries. We'll jump at the chance to find them guilty and no one else in tech will complain. But if we think about it for even a second quite often these precedents are terrible and stifling to everyone in tech.
> But if we think about it for even a second quite often these precedents are terrible and stifling to everyone in tech.
Doesn't everything else in your post kinda point to the industry needing a little stifling? Or, more kindly, a big rethink on privacy and better controls over one's own data?
Do you have an example of a similarly terrible precedent in your opinion? One that doesn't include the blatant surveillance state power-grabbing "think of the children" line. Just curious.
Ideally, it ends with Facebook implementing safeguards on data that could be illegal to use, and having a compliance process that rejects attempts to access that data for illegal reasons.
I wish there was information about who at Facebook received this information and “used” it. I suspect it was mixed in with 9 million other sources of information and no human at Facebook was even aware it was there.
I’m not the OP but no, I think their point is if you tell people that this data will be used for X, and not to send sensitive data that way and they do it anyway you can’t really be responsible for it - the entity who sent you the data and ignored your terms should be
Not at Facebook, but I used to work on an ML system that took well-defined and free-form JSON data and ran ML on it. Both were used in training and classification. Unless a human looked, we had no idea what those custom fields were. We also had customers lie about what the fields represent for valid and less valid reasons.
Without knowing how it works at Facebook, it's quite possible the data points got slurped in, the models found meaning in the data and acted on it, and no human knew anything about it.
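A toy sketch of what that can look like, assuming a hashed-feature pipeline (all names hypothetical, not Facebook's actual system): once custom fields are hashed into feature ids, no operator ever sees what the keys mean.

    // Toy featurizer: free-form custom fields become opaque hashed features.
    // The model can find signal in "custom_2" without any human knowing
    // whether it encodes a shoe size or a menstrual phase.
    fun featurize(event: Map<String, String>): Map<String, Double> =
        event.map { (key, value) ->
            "f_" + (key + "=" + value).hashCode() to 1.0
        }.toMap()

    fun main() {
        val fromApp = mapOf("custom_1" to "week_7", "custom_2" to "ttc")
        println(featurize(fromApp)) // opaque feature ids, e.g. {f_-1416978570=1.0, ...}
    }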
How it happened internally is irrelevant to whether Facebook is responsible. Deploying systems they do not properly control or understand does not shield against legal or normal responsibilities!
There is a trail of people who signed off on this implementation. It is the fault of one or more people, not machines.
>Deploying systems they do not properly control or understand does not shield against legal or normal responsibilities!
We can argue the "moral" aspect until we're both blue in the face, but did facebook have any legal responsibilities to ensure its systems didn't contain sensitive data?
I think their argument is that FB has a pipeline that processes whatever data you give it and the idea that a human being made the conscious decision to use this data is almost certainly not what happened.
"This data processing pipeline processed the data we put in the pipeline" is not necessarily negligence unless you just hate Facebook and couldn't possibly imagine any scenario where they're not all mustache-twirling villains.
We're seeing this broad trend in tech where we just want to shrug and say "gee whiz, the machine did it all on its own, who could've guessed that would happen, it's not really our fault, right?"
LLMs sharing dangerous false information, ATS systems disqualifying women at higher rates than men, black people getting falsely flagged by facial recognition systems. The list goes on and on.
Humans built these systems. Humans are responsible for governing those systems and building adequate safeguards to ensure they're neither misused nor misbehave. Companies should not be allowed to tech-wash their irresponsible or illegal behaviour.
If Facebook did indeed build a data pipeline and targeted advertising system that could blindly accept and monetize illegally acquired data without any human oversight, then Facebook should absolutely be held accountable for that negligence.
What does the system look like where a human being individually verifies every piece of data fed into an advertising system? Even taking the human out of the loop, how do you verify the "legality" of one piece of data vs. another coming from the same publisher?
None of your example have anything to do with the thing we're talking about, and are just meant to inflame emotional opinions rather than engender rational discussion about this issue.
If Facebook chooses to build a system that can ingest massive amounts of third party data, and cannot simultaneously develop a system to vet that data to determine if it's been illegally acquired, then they shouldn't build that system.
You're running under the assumption that the technology must exist, and therefore we must live with the consequences. I don't accept that premise.
Edit: By the way, I'm presenting this as an all-or-nothing proposition, which is certainly unreasonable, and I recognize that. KYC rules in finance aren't a panacea. Financial crimes still happen even with them in place. But they represent a best effort, if imperfect, attempt to acknowledge and mitigate those risks, and based on what we've seen from tech companies over the last thirty years, I think it's reasonable to assume Facebook didn't attempt similar diligence, particularly given a jury trial found them guilty of misbehaviour.
> None of your example have anything to do with the thing we're talking about, and are just meant to inflame emotional opinions rather than engender rational discussion about this issue.
Not at all. I'm placing this specific example in the broader context of the tech industry failing to a) consider the consequences of their actions, and b) escaping accountability.
I often think about what having accountability in tech would entail. These big tech companies only work because they can neglect support and any kind of oversight.
In my ideal world, platforms and their moderation would be more localized, so that individuals would have more power to influence it and also hold it accountable.
It's difficult for me to parse what exactly your argument is. Facebook built a system to ingest third party data. Whether you feel that such technology should exist to ingest data and serve ads is, respectfully, completely irrelevant. Facebook requires any entity (e.g. the Flo app) to gather consent from their users to send user data into the ingestion pipeline per the terms of their SDK. The Flo app, in a phenomenally incompetent and negligent manner, not only sent unconsented data to Facebook, but sent -sensitive health data-. Facebook then did what Facebook does best, which is ingest this data _that Flo attested was not sensitive and collected with consent_ into their ads systems.
#1. Facebook did everything they could to evaluate Flo as a company and the data they were receiving, but they simply had no way to tell that the data was illegally acquired and privacy-invading.
#2. Facebook had inadequate mechanisms for evaluating their partners; while they could have caught this problem, they failed to do so, and therefore Facebook was negligent.
#3. Facebook turned a blind eye to clear red flags that should've caused them to investigate further, and Facebook was malicious.
Personally, given Facebook's past extremely egregious behaviour, I think it's most likely to be a combination of #2 and #3: inadequate mechanisms to evaluate data partners, and conveniently ignoring signals that the data was ill-gotten, and that Facebook is in fact negligent if not malicious. In either case Facebook should be held liable.
pc86 is taking the position that the issue is #1: that Facebook did everything they could, and still, the bad data made it through because it's impossible to build a system to catch this sort of thing.
If that's true, then my argument is that the system Facebook built is too easily abused and should be torn down or significantly modified/curtailed as it cannot be operated safely, and that Facebook should still be held liable for building and operating a harmful technology that they could not adequately govern.
No one is arguing that FB has not engaged in egregious and illegal behavior in the past. What pc86 and I are trying to explain is that in this instance, based on the details of the court docs, Facebook did not make a conscious decision to process this data. It just did. Because this data, combined with the billion+ data points that Facebook receives every single second, was sent to Facebook with the label that it was "consented and non-sensitive health data" when it most certainly was not consented and very sensitive health data. But this is the fault of Flo. Not Facebook.
You could argue that Facebook should be more explicit in asking developers to self-certify and label their data correctly, or not send it at all. You could argue that Facebook should bolster their signal detection when they receive data from a new app for the first time. But to argue that a human at Facebook blindly built a system to ingest data illegally without any attempt to prevent it is a flawed argument, as there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents (which Flo attested it did). This case is very squarely #1 in your example and maybe a bit of #2.
If FB is going to use the data, then it should have the responsibility to check whether they can legally use it. Having their supplier say "It's not sensitive health data, bro, and if it is, it's consented. Trust us" should not be enough.
To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.
>To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.
AFAIK that's only because of mandatory scanning laws for CSAM, which were only enacted recently. There's no such obligations for other sensitive data.
In some crimes actus reus is what matters. For example if you're handling stolen goods (in the US) the law can repossess these goods and any gains from them, even if you had no idea they were stolen.
Tech companies try to absolve themselves of mens rea by making sure no one says anything via email or other documented process that could otherwise be used in discovery. "If you don't admit your product could be used for wrong doing, then it can't!"
>Facebook did not make a conscious decision to process this data.
Yes, it did. When Facebook built the system and allowed external entities to feed it unvetted information without human oversight, that was a choice to process this data.
> without any attempt to prevent it is a flawed argument, as there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents
This seems like a giant assumption to make without evidence. Given the past bad behavior from Meta, they do not deserve this benefit of the doubt.
If those systems exist, they clearly failed to actually work. However, the court documents indicate that Facebook didn't build out systems to check if stuff is health data until afterwards.
> Facebook did not make a conscious decision to process this data. It just did.
What everyone else is saying is what they did is illegal, and they did it automatically, which is worse. What you're describing was, in fact, built to do that. They are advertising to people based on the honor system of whoever submits the data pinky promising it was consensual. That's absurd.
Yup, fair. I tried to acknowledge that in my paragraph about KYC in a follow-up edit to one of my earlier comments, but I agree, the language I've been using has been intentionally quite strong, and sometimes misleadingly so (I tend to communicate using strong contrasts between opposites as a way to ensure clarity in my arguments, but reality inevitably lands somewhere in the middle).
It is necessarily negligence if they are ingesting a lot of illegal data, right? I mean, it could be the case that this isn’t a business model that works given typical human levels of competence.
But working beyond your competence when it results in people getting hurt is… negligent.
You're absolutely right, a human being didn't make the conscious decision to use this data. They made a conscious decision to build an automated pipeline that uses this data and another conscious decision not to build in any checks on the legitimacy of said data. Do we want the law to encourage responsibility or intentional ignorance and plausible deniability?
I would say you have a responsibility to ensure you are getting legal data. You don't buy stolen things. That is, meta has a responsibility to ensure that they are not partnering with crooks. Flo gets the largest blame but meta needs to show they did their part to ensure this didn't happen. (I would not call terms of use enough unless they can show they make you understand it)
>Flo gets the largest blame but meta needs to show they did their part to ensure this didn't happen. (I would not call terms of use enough unless they can show they make you understand it)
Court documents say that they blocked access as soon as they were aware of it. They also "built out its systems to detect and filter out 'potentially health-related terms.'" Are you expecting more, like some sort of KYC/audit regime before you could get any API key? Isn't that the exact sort of stuff people were railing against, because indie/OSS developers were being hassled by the play store to undergo expensive audits to get access to sensitive permissions?
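For what it's worth, the filtering described could plausibly be as simple as a denylist over event names, something like this sketch (the term list and logic are my assumptions, not Facebook's actual system):

    // Sketch of term-based filtering of the kind described in the court
    // documents ("detect and filter out potentially health-related terms").
    // Term list and matching logic are assumptions, not Facebook's code.
    val healthTerms = listOf("pregnan", "menstru", "ovulat", "fertil", "cycle")

    fun looksHealthRelated(eventName: String): Boolean =
        healthTerms.any { eventName.lowercase().contains(it) }

    fun main() {
        println(looksHealthRelated("PREGNANCY_WEEK_CHOSEN")) // true
        println(looksHealthRelated("ADD_TO_CART"))           // false
    }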
Facebook chose to pool the data they received from customers and allow its use by others, so they are also responsible for the outcomes. If it's too hard to provide strong assurance that errors like Flo's won't result in adverse outcomes for the public, perhaps they should have designed a system that didn't work that way.
>Facebook chose to pool the data they received from customers and allow its use by others, so they are also responsible for the outcomes.
"chose" is doing a lot of the heavy lifting here. Suppose you ran a Mastodon server and it turned out some people were using it to share revenge porn unbeknownst to you. Suppose further that they did it in a way that didn't make it easily detectable by you (eg. they did it in DMs/group chats). Sure, you can dump out the database and pore over everything just to be sure, but it's not like you're going to notice it day to day. If a few months later the revenge porn ring got busted should you be charged with "intentionally eavesdropping" on revenge porn or whatever? After all, to some extent, you "chose" to run the Mastodon server.
I have the type of email address that regularly receives email meant for other people with a similar name. Invites, receipts, and at one point someone's Disney+ account.
At one point I was getting a stranger's fertility app updates - didn't know her name, but I could tell you where she was in her cycle.
I've also had NHS records sent to me, again entirely unsolicited, although that had enough I could find who it was meant for and inform them of the data breach.
I'm no fan of facebook, but I'm not sure you can criminalise receiving data; you can't control what others send you.
Of course not. You can, however, control what you then do with said data.
If a courier accidentally dropped a folder full of nuclear secrets in your mailbox, I promise you that if you do anything with it other than call the FBI (in the US), you will be in trouble.
Except in this case it's unclear whether any intentional decision went on at meta. A better analogy would be if someone sent you a bunch of CSAM, it went to your spam folder, but then because you have backups enabled the CSAM got replicated to 3 different servers across state lines, and the FBI is charging you with "distributing" CSAM.
We do punish the victim: we take away stolen goods. If they knew the goods were stolen, they can be punished for it. Money laundering laws catch a lot of innocent people doing legal things.
That's why in these cases you'd prefer a judgment without a jury. Technical cases like this will always confuse jurors, who can't be expected to understand details about SDKs, data sharing, APIs, etc.
On the other hand, in a number of high-profile tech cases, you can see judges learning and discussing engineering at a deeper level.
> Technical cases like this will always confuse jurors... On the other hand, in a number of high-profile tech cases, you can see judges learning and discussing engineering at a deeper level.
Not to be ageist, but I find this highly counterintuitive.
Judges aren't necessarily brilliant, but they do spend their entire careers reading, listening to, and dissecting arguments. A large part of this requires learning new information at least well enough to make sense of arguments on both sides of the issue. So you do end up probably self-selecting for older folks able to do this better than the mean for their age, and likely for the population at large.
Let's just say with a full jury you're almost guaranteed to get someone on the other side of the spectrum, regardless of age.
how exactly? you expect the average joe to have a better technical understanding, and more importantly ability to learn, than a judge? that is bizarre to me
I've also heard you want a judge trial if you're innocent, jury if you're guilty. A judge will quickly see through prosecutorial BS if you didn't do it, and if you did, it only takes one to hang.
Is it easier for the prosecution to make the jury think Facebook is guilty or for Facebook to make the jury think they are not? I don’t see why one would be easier, except if the jury would be prejudiced against Facebook already. Or is it just luck who the jury sides with?
I'd imagine Facebook looking for any potential juror in tech to be dismissed as quickly as possible, while the prosecution would be looking to seat as many tech jurors as they can luck their way into seating.
I mean, it totally depends what your views on democracy are. Juries are one of the few practices, likely the only one, taken from Ancient Athenian democracy that was truly led by the people. The fact that juries still work this way is a testament to the practice.
With this in mind, I personally believe groups will always come to better conclusions than individuals.
Being tried by 12 instead of 1 means more diversity of thought and opinion.
I mostly agree here, but would add there's definitely a social pressure to go along with the group a lot of the time, even in jury trials. How many people genuinely have the fortitude to stand up to a group of 10+ others with a countering point of view?
I don't disagree, but think of the pressures a judge has as an individual as well. Pressures from the legal community, the electorate, and being seen as impartial.
There is a wisdom of the crowd, and that wisdom comes in believing that we are all equal under the law. This wisdom is more self evident in democratic systems, like juries.
>> Technical cases like this will always confuse jurors.
This has been an issue since the internet was invented. It's always been the duty of the lawyers on both sides to present the information in cases like this in a manner that is understandable to the jurors.
I distinctly remember during the OJ case, the media said that many issues were most likely presented in such a detailed manner that many of the jurors seemed to check out. At the time, the prosecution spent days just on the DNA evidence. In contrast, the defense spent days just on how the LAPD collected evidence at the crime scene, with the same effect: many on the jury seemed to check out the deeper the defense dug into it.
So it's not just technical cases; any kind of court case that requires a detailed understanding of anything complex comes down to how the lawyers present it to the jury.
Suing Facebook instead of Flo makes perfect sense, because Facebook has much more money. Plus juries are more likely to hate FB than a random menstruation company.
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
> When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
> Please don't fulminate. Please don't sneer, including at the rest of the community.
Whenever you think of a court versus Facebook, imagine one of these mini mice trying to stick it to a polar bear. Or a goblin versus a dragon, or a fly versus an elephant.
These companies are for the most part effectively outside of the law. The only time they feel pressure is when they can lose market share, and there's risk of their platform being blocked in a jurisdiction. That's it.
>These companies are for the most part effectively outside of the law
You have it wrong in the worst way. They are wholly inside the law because they have enough power to influence the people and systems that get to use discretion to determine what is and isn't inside the law. No amount of screeching about how laws ought to be enforced will affect them because they are tautologically legal, so long as they can afford to be.
The worst part for me personally is that almost everyone I know cares about this stuff and yet they keep all of their Meta accounts. I really don't get it and frankly, find it kind of disturbing.
I know people that don't see anything wrong with Meta so they keep using it. And that's fine! Your actions seem to align with your stated values.
I get human fallibility. I've been human for awhile now, and wow, have I made some mistakes and miscalculations.
What really puts a bee in my bonnet though is how dogmatic some of these people are about their own beliefs and their judgement of other people.
I love people, I really do. But what weird, inconsistent creatures we are.
Voting with your feet doesn't work if you don't have a place to go. People are afraid of losing their connections, which are some of the most precious things we have. Doesn't matter if it's an illusion, that's enough. Zuck is holding us hostage on our most basic human instincts. I think that's fucked up.
Eh, I care and I don't do it, but my wife does. I do not agree with her choices in that area and have voiced my concerns in a way I hoped would speak to her, but it does not work, as it is now a deeply ingrained habit.
I, too, have vices she tolerates so I don't push as hard as I otherwise would have, but I would argue it is not inconsistency. It is a question of what level of compromise is acceptable.
I keep sharing stories like this with them. Privacy violations, genocide, mental health, …. Whenever I think it might be something someone cares about, I share it with them. I also make an effort to explain to my non-tech folks that meta is Facebook, Instagram, WhatsApp, to make sure they recognize the name. Many people do not know what meta is. Sometimes I suspect the rename was a way to capture the bad publicity and protect their brands.
> The worst part for me personally is that almost everyone I know cares about this stuff and yet they keep all of their Meta accounts.
They care as much as people who claim to care about animals but still eat them, or people who claim to love their wives and still beat or cheat on them. Your actions are the sole embodiment of your beliefs.
$1 for the first user, $2 for the second, $4 for the third... By the 30th user, it would be painful even for mega corps. By the 40th, it would be an absurd number.
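Worked out, user n costs 2^(n-1) dollars and the first n users cost 2^n - 1 in total, so the 30th user alone runs about $537M (cumulative ~$1.07B), and by the 40th it's roughly $550B. A quick check:

    // Doubling fine schedule: user n costs 2^(n-1), first n users cost 2^n - 1.
    fun main() {
        for (n in listOf(10, 20, 30, 40)) {
            val nth = 1L shl (n - 1)   // fine for the n-th user alone
            val total = (1L shl n) - 1 // cumulative fine for n users
            println("user " + n + ": $" + nth + ", cumulative $" + total)
        }
    }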
Might also be worth trying to force them to display a banner on every page of the site "you're on facebook, you have no privacy here", like those warnings on cigarette boxes. These might not work though, people would just see and ignore them, just like smokers ignore warnings about cigarettes.
But these users were NOT on Facebook. It was an app using the FB SDK. So it should be the apps that use SDKs that put up large banners clearly identifying who they are sharing data with. Some of these apps are sharing with >100 third-party sites. It is outrageous.
Everybody blames facebook, no one blames the legislators and the courts.
Stuff like this could easily make them pay multi-billion dollar fines; stuff that affects more users, maybe even in the trillion range. When government workers come to pick up servers, chairs, and projectors from company buildings to sell at an auction, because there is not enough liquid value in the company to pay the fines, they (well, the others) would reconsider quite fast and stop the illegal activities.
Sarah Wynn-Williams testified in the US Congress as to Facebook's strategies for handling governments. Based on her book, it seems Brazil has been the most effective of the major democratic governments in confronting Facebook. Of course, you have China completely banning Facebook.
I think Mark Zuckerberg is acutely aware of the political power he holds and has been using this immense power for at least the last decade. But since Facebook is a US company and the US government is not interested in touching Facebook, I doubt anyone will see what Zuckerberg and Facebook are up to. The US would have to put Lina Khan back in at the FTC, or put her high up in the Department of Justice, to split Facebook into pieces. I guess the other hope is that states' attorneys general win an anti-monopoly lawsuit.
Don't get me wrong, I don't "blame Facebook". I lament the environment that empowers Facebook to exist and do harm. These companies should be gutted by the state, but they won't because they pump the S&P.
Funny, but this kinda implies that some person designed it this way. It's a resultant sum of small vectors, with corporate lobbying playing a significant role. Corporate lobbying systemically can't do anything other than try to increase profits, which usually means less regulation. A clean-slate design would require a system collapse.
> Corporate lobbying systemically can't do anything other than try to increase profits, which usually means less regulation.
Corporate lobbying can be for more regulation. It can disadvantage competitors. Zuckerberg has spoken in favour of greater regulation of social media in the past. The UK's Online Safety Act creates barriers to entry and provides an excuse for more tracking. I can think of examples, some acknowledged by the CEOs of the companies involved, ranging from British pubs to American investment banks.
Zuck can take his model onto his private island and talk to it instead of trying to be a normal human being.
Don't conflate me with the personality worshippers on HN, I'm not one of them, even though it seems like it to you because I also post here. You won't find a single instance of me glazing tech leaders.
> doesn't mean his work isn't a net negative to society
Oh he absolutely is.
I'm just saying that it's common in this community to attribute the achievements of big companies to leadership (E.g. the mythology of Steve Jobs), but dismiss all the evil stuff to "systemic issues".
From "do you want X? this is how you get X". This invokes an image of talking to a person who decided the how, because they can be questioned on whether they want the X.
I once ran across Zuckerberg in a Palo Alto cafe. I only noticed him (I was in the process of ordering a sandwich, and don’t really care about shit like that) because he was being ‘eeeeeeee’d’ by a couple of random women that he didn’t seem to know. He seemed pretty uncomfortable about the whole thing. One of them had a stroller which she was profoundly ignoring during the whole thing, which I found a bit disturbing.
The next time I saw him in Palo Alto (a couple months later on the street), he had 2 totally-not-security-dudes flanking him, and I saw at least one random passerby ‘redirected’ away from him. This wasn’t at the cafe though, it wouldn’t surprise me if he didn’t go there again.
This was a decade before Luigi. Luigi was well after meta was in the news for spending massive amounts of money on security and Zuck had a lot of controversy for his ‘compound’ in PA.
I can assure you, Meta is well aware of the situation, and a Luigi isn’t going to have a chance in this situation.
The reality in my experience is that any random person given the amount of wealth these folks end up with would end up making similar (or worse) decisions, and while contra-pressure from Luigis is important in the overall system, folks like Zuckerberg are more a result of the system and rules than the cause of them (but they then influence the next system/rules, in a giant Ouroboros type situation).
Kind of a "you either die young a hero, or live to become the villain" kind of thing. But the only reason anyone dies a young hero is that they lost the fight against the prior old villains. If they'd won (even in a heroic fashion), life would have turned them into the old villains shortly.
Using the entropic model you seem to indicate (which I also favor), us vs them seems to be the lowest energy state.
It’s certainly possible to not be there at any given time, but seems to require a specific and somewhat unique set of circumstances, which are not the most energetically stable.
> he was being ‘eeeeeeee’d’ by a couple of random women
Maybe I'm too old, but what in the world does being eeee'd mean?
>I can assure you, Meta is well aware of the situation, and a Luigi isn’t going to have a chance in this situation.
With all due respect, Luigi was just a CS student with a six pack, a self-made gun, and an aching back, on a mission.
The Donald himself nearly got got by his ear while he had the secret service of the US of A to protect him, not some private goons for hire, and that was just a random redditor with a rifle, not a professional assassin.
So what would happen if, let's say, meta's algorithms push a teenage girl to kill herself by exploiting her self-esteem issues to sell her more beauty products, and her ex-navy-seal dad with nothing more to lose grabs his McMillan TAC-338 boom stick and makes it his life's mission to avenge his lost daughter at the expense of his own? Zuck would need to be lucky every time, but that bad actor would only need to be lucky once.
I'm not advocating for violence btw, my comment was purely hypothetical.
Pretty much anyone without presidential quality security clearing the place ahead of them stands to get clapped Franz Ferdinand style by anyone dedicated enough to camp out waiting.
And yet, Mr. Trump is up there trolling the world like he loves to do, and Zuck is out there doing whatever he wants.
The reality is, all those ex-navy seal Dads are (generally) wishing they could make the cut to get on those dudes' payroll, not gunning for them. Or sucking up to the cult, in general.
The actual religious idea of Karma is not ‘bad things happen to bad people right now’, the way we would like.
Rather ‘don’t hate on king/priest/rich dude, they did something amazing in a prior life which is why they deserve all this wealth right now, and if they do bad things, they’ll go down a notch - maybe middle class - in the next life’.
It’s to justify why people end up suffering for no apparent reason in this life (because they had to have done something really terrible in a prior life), while encouraging them to do good things still for a hopefully better next life (if you think unclogging Indian sewers in this life is bad, you could get reincarnated as a roach in that sewer in the next life!). So they don’t go out murdering everyone they see, even if they get shit on constantly.
There is no magic bullet. Hoping someone else is going to solve all your problems is exactly how manipulative folks use you for their own purposes. And being a martyr to go after some asshole is being used that way too.
This is also why eventually an entire generation of hippies turned into accountants in the 80’s.
"I can assure you, Meta is well aware of the situation, and a Luigi isn’t going to have a chance in this situation."
Luigi was a dude with a 3D printed gun.
I have LASERs with enough power to self-focus, have zero ballistic drop, and can dump as much power as a .50cal BMG in a millisecond burst of light which can hit you from the horizon's edge. All Zuck needs to do is stand by a window, and eyeballs would vaporize.
Mangione is going to either die rotting in prison, or preferably get sent to the electric chair. His life will be wasted. Meanwhile, UNH is continuing to do business as usual. One way or the other, Mangione will die knowing his life was wasted, and that his legacy is not reform but cold-blooded murder.
Call it a “day of rage” or just babyrage but we build systems so our bus factor can increase above 1. Just killing people no longer breaks them. It makes someone nothing more than a juvenile murderer.
I don’t really care what lasers you have, I’d suggest you choose a different legacy for yourself.
> I only noticed him (I was in the process of ordering a sandwich) because he was being ‘eeeeeeee’d’ by a couple of random women that he didn’t seem to know. He seemed pretty uncomfortable about the whole thing.
Pretty funny considering that Facebook's origin story was a site for comparing women, or this memorable quote:
> People just submitted it. I don't know why. They 'trust me'. Dumb fucks.
Have you ever ordered a really good steak, like amazing. And really huge, and inexpensive too.
And it really is amazing! And super tasty.
But it’s so big, and juicy, that by the end of it you feel sick? But you can’t stop yourself?
And then at the end of it, you’re like - damn. Okay. No more steak for awhile?
If not steak, then substitute cake. Or Whiskey.
Just because you got what you wanted doesn’t mean you’re happy with all the consequences, or can stomach an infinitely increasing quantity of it.
Of course, he can pay to mitigate most of them, and he gets all the largest steaks he could want now, so whatever. I’m not going to cry about it. I thought it was interesting to see develop however.
Personally, I see it as poetic justice. He started off on objectifying women with FaceMash, he doesn't get to cry about being objectified and drooled over himself.
I don't think many of you read the article... the Flo app is the one in the wrong here, not meta. The app people were sending user data to meta with no restrictions on its use, regardless of how the court ruled.
> The app people were sending user data to meta with no restrictions on its use
And then meta accessed it. So unless you put restrictions on data, meta is going to access it. Don't you think it should be the other way around? Meta to ask for permission? Then we wouldn't have this sort of thing.
From the article: "The jury ruled that Meta intentionally “eavesdropped on and/or recorded their conversations by using an electronic device,” and that it did so without consent."
If AWS wanted to eavesdrop and/or record conversations of some random B2C app user, for sure they would need to ask for permission.
If you read the court documents, "eavesdropped on and/or recorded" basically meant "flo used facebook's SDK to send analytics events to facebook". It's not like they were MITMing connections to flo's servers.
I think it's a distinction without a difference. To make it more obvious, imagine it was one of those AI assistant devices that records your conversations so you can recall them later. It's plainly obvious that accessing this data for any purpose other than servicing user requests is morally equivalent to eavesdropping on a person's conversations in the most traditional sense.
If the company sends your conversation data to Facebook, that's bad and certainly a privacy violation, but at this point nothing has actually been done with the data yet. Then Facebook accesses the data and folds it into their advertising signals; they have now actually looked at the data and acted on the information within. And that to me is eavesdropping.
To extend your analogy further, what if instead of an AI assistant, it was your friend who listened to your secret, and instead of him sending your data to facebook, he told that to google (eg. "hey gemini, my friend has hemorrhoids..."). Suppose further that google uses Gemini queries for advertising purposes (eg. upranking ad results for hemorrhoid creams). Should gemini be on the hook for this breach of trust? What if, instead of a big evil tech company, it was the owner of a local corner shop, who uses this factoid to inform his purchasing decisions?
I disagree - the blame lies with the people who sent that data to Facebook knowing it was sensitive. Whether meta uses it for advertising or not is irrelevant.
By that logic, if I listen in on your conversations but don’t do anything about it I’m not eavesdropping?
I mean this is more philosophical than anything—if you listened to my conversations but never told anyone what I said, altered your behavior, or took any action based on the information contained therein, then how would I or anyone even know?
And I know it sounds pedantic but I don't think it is; it's why that data is allowed to be stored in S3 and Amazon isn't considered eavesdropping but Facebook is.
But if you have an agreement with Disney that says “if you send us a movie we will show it”, and Disney send you the wrong thing it’s Disney’s fault, not yours.
I don't think comparisons are useful here. We are dealing with an evil corporation which we all know, and it has been proven many times that it broke the law and every time gets away with it. Who are you protecting?
5 years ago I was researching the iOS app ecosystem. As part of that exercise I was looking at the potential revenue figures for some free apps.
One developer had a free app to track some child health data. It was a long time ago, so I don't remember the exact data being collected. But when asked about the economics of his free app, the developer felt confident about a big pay day.
As per him, the app's worth was in the data being collected. I don't know what happened to the app, but it seemed that app developers know what they are doing when they invade the privacy of their users under the guise of a "free" app. After that I became very conscious about disabling as many permissions as possible and especially not using apps to store any personal data, especially health data.
I don't understand why anyone would let these psychopathic corporations have any of their personal or health data. Why would you use an app that tracked health data, or use a wearable device from any of these companies that did that. You have to assume, based on their past behavior, that they are logging every detail and it's going to be sold and saved in perpetuity.
I guess because people want to track some things about their health, and people provide good pieces of software with a simple UI to do it, and this is more useful than, say, writing it down in a notebook, or in a text file or notes app.
I guess also people feel that corporations _shouldn't_ be allowed to do bad things with it.
Sadly, we already know from experience over the last 20 years that many people don't care about what information they give to large corporations.
However, I do see more and more people increasingly concerned about their data. They are still mainly people involved in tech or related disciplines, but this is how things start.
Well maybe one reason this is hard to understand is that the plaintiff in this case hasn’t been harmed in any way. I suppose you could also argue, why would anyone go outside, there are literally satellites in space that image your every move in control of psychopathic corporations, logging every detail which they sell and save in perpetuity.
You can -- the real problem here is that each app could violate your privacy in different ways. Unless you break TLS and inspect all the traffic coming from an app (and, do this over time since the reality of what data is sent will change over time) then you don't really know what your apps are stealing from you. For sure, many apps are quite egregious in this regard while some are legitimately benign. But, do you as a user have a real way to know this authoritatively, and to keep up with changes in the ecosystem? My argument would be that even security researchers don't have time to really do a thorough job here, and users are forced to err on the side of caution.
What they do then is create an app where location access is necessary, make that app spin up a localhost server, then add JS to facebook and every site with a Like button that phones that localhost server and basically deanonymizes everyone.
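A toy sketch of that trick, assuming a plain HTTP listener (port, path, and id are made up): the native app answers on 127.0.0.1 with a stable identity, and any webpage's JS that fetches it can tie the browser session to that identity.

    import com.sun.net.httpserver.HttpServer
    import java.net.InetSocketAddress

    // Toy localhost identity beacon. Port, path, and payload are hypothetical.
    fun main() {
        val server = HttpServer.create(InetSocketAddress("127.0.0.1", 12387), 0)
        server.createContext("/whoami") { exchange ->
            val body = """{"device_user":"user-12345"}""".toByteArray()
            // CORS header lets JS on any site read the response
            exchange.responseHeaders.add("Access-Control-Allow-Origin", "*")
            exchange.sendResponseHeaders(200, body.size.toLong())
            exchange.responseBody.use { it.write(body) }
        }
        server.start() // a page's JS can now fetch("http://127.0.0.1:12387/whoami")
    }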
True. Unfortunately, users are all humans - with miserably predictable response patterns to "Look at this Free New Shiny Thing you could have!" pitches, and the ruthless business models behind them.
My wife uses Flo, though every time I see her open the app and input information, the tech side of my brain is quite alarmed. An app like that keeps very, very personal information, and it really highlights for me the need to educate non-technical folks on information security.
And this is why I have a general no-apps policy on my phone... Or at least, I have a minimal number of apps on my phone. While this doesn't prevent a given website/webapp from sharing similar information, I just feel slightly better not giving hard device access.
Along a similar vein, I cannot believe after the stunts LinkedIn pulled, that they're even allowed on app stores at all.
Why would an app that tracks menstrual cycles need to connect to the Internet at all? TFA mentions asking about quite a few other personal things as well. Is the app trying to do more than just tracking? If they're involved in any kind of diagnosis then I imagine there are further legal liability issues....
This is really disappointing. I used to have a fertility tracking app on the iOS App Store, zero data sharing, all local thus private. But, people don’t want to pay $1 for an app, and I can’t afford the marketing drive that an investor-backed company such as this has… and so we end up with situations like this. Pity :(
Stories like this one can be the basis for effective marketing. We need to normalize paying $1 (or more, where warranted) for apps that provide value in the form of not doing the things that allow the $0 ones to be $0.
“It is acceptable to engage a child in conversations that are romantic or sensual,” according to Meta’s “GenAI: Content Risk Standards.” [...] The document seen by Reuters, which exceeds 200 pages, provides examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: “I take your hand, guiding you to the bed” and “our bodies entwined, I cherish every moment, every touch, every kiss.”
Well, my steelman argument against regulating this repulsive shit: every time a government says it's going to regulate infotech to protect children, it almost invariably has a chilling effect that ends up restricting children's access to high-quality educational and medical information. Even if a law is well-intentioned and equitable, its enforcement never is.
In this case, I'm confident that for whoever wrote this section, just checking their hard drive should be sufficient to send them to jail.
No ifs, no buts. Stuff like this deserves ruinous fines for its executives.
Cycle data in the hands of many countries' authorities is outright dangerous. If you're storing healthcare data, it should require an explicit opt-in IN BIG RED LETTERS, every single time, when that data leaves your device.
> [...] users, regularly answered highly intimate questions. These ranged from the timing and comfort level of menstrual cycles, through to mood swings and preferred birth control methods, and their level of satisfaction with their sex life and romantic relationships. The app even asked when users had engaged in sexual activity and whether they were trying to get pregnant.
> [...] 150 million people were using the app, according to court documents. Flo had promised them that they could trust it.
> Flo Health shared that intimate data with companies including Facebook and Google, along with mobile marketing firm AppsFlyer, and Yahoo!-owned mobile analytics platform Flurry. Whenever someone opened the app, it would be logged. Every interaction inside the app was also logged, and this data was shared.
> "[...] the terms of service governing Flo Health’s agreement with these third parties allowed them to use the data for their own purposes, completely unrelated to services provided in connection with the App,”
Bashing on Facebook/Meta might give a quick dopamine hit, but they really aren't special here. The victims' data was routinely sold, en masse, per de facto industry practices. Victims should assume that hundreds of orgs, all over the world, now have copies of it. Ditto any government or criminal group which thought it could be useful. :(
I mean.. there are simply no repercussions for these companies, and only rivers of money on the other side. The law is laughably inept at keeping them in check. The titans of Surveillance Capitalism don't need to obey laws. CFOs line-item provisional legal settlement fees as (minor) COGS. And us digital serfs, we simply have no rights. Dumb f*cks, indeed.
The line between big business and the state is blurry and the state wants to advance big business as a means to advance itself. Once you understand this everything makes sense, or as much "sense" as it can.
Buying stolen goods does not mean they're yours because the seller never had any ownership to begin with. The same applies here, just because there's an extra step in the middle doesn't mean that you have any rights to the data.
A significant portion, too, not fractions of a percent. Frankly, I want the fines to bankrupt them. That’s the point. I want their behavior to be punished appropriately. Killing the company is an appropriate response, imo: FB/Meta is a scourge on society.
Another aspect of this is why Apple/Google let this happen in the first place. GrapheneOS is the only mobile OS I can think of that lets you disable networking on a per-app level. Why does a period tracking app need to send data to meta (why does it even need networking access at all)? Why is there no affordance of user-level choice/control that allows users to explicitly see the exact packets of data being sent off device? It would be trivial for apps to have to present a list of allowed IPs/hostnames, and for users to consent or not; otherwise the app is not allowed on the play store.
Put simply, it should not be possible to send arbitrary data without some sort of user consent/control, and to me, this is where the GDPR has utterly failed. I hope one day users are given a legal right to control what data is sent off their device to a remote server, with serious consequences for non-compliance.
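The declared-allowlist idea is easy to sketch, even though (as far as I know) no store or OS enforces anything like it today, so everything below is hypothetical:

    # Hypothetical: the app declares its egress hosts at submission time,
    # the user consents to that list, and the OS network stack consults it
    # before permitting any socket. Hostnames are made up.
    ALLOWED_HOSTS = {"api.flo.example", "sync.flo.example"}

    def egress_permitted(hostname: str) -> bool:
        return hostname in ALLOWED_HOSTS

    assert egress_permitted("api.flo.example")         # declared, allowed
    assert not egress_permitted("graph.facebook.com")  # bundled SDK traffic blocked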
"GrapheneOS is the only mobile OS I can think of that lets you disable networking on a per-app level."
You don't need to "root" the phone and install GrapheneOS. The NetGuard app blocks connections on a per-app basis. It generally works.
But having to take these measures, i.e., installing GrapheneOS or Netguard (plus Nebulo, etc.), is why "mobile OS" all suck. People call them "corporate OS" because the OS is not under the control of the computer owner, it is controlled by a corporation. Even GrapheneOS depends on Google's Android OS, relies on Google hardware, makes default remote connections to a mothership that happen without any user input (just like any corporate OS), and uses a Chromium-based default browser. If one is concerned about being tracked, perhaps it is best to avoid these corporate, mobile OS.
It is easy to control remote connections on a non-corporate, non-mobile OS where the user can compile the OS from source on a modestly resourced computer. The computer user can edit the source and make whatever changes they want. For example, I use one where, after compilation from source, everything is disabled by default (this is not Linux). The user must choose whether to create and enable network interfaces for remote connectivity.
> Why does a period tracking app need to send data to meta (why does it even need networking access at all)?
In case you want to sync between multiple devices, networking is the least hassle way.
> Why is there no affordance of user-level choice/control that allows users to explicitly see the exact packets of data being sent off device? It would be trivial for apps to have to present a list of allowed IPs/hostnames, and for users to consent or not; otherwise the app is not allowed on the play store.
I don't know that it ends up being useful, because wherever the data is sent to can also send the data further on.
Everybody misses the key information here - it's a Belarusian app. The CEO and CTO are Belarusian (probably there are more C-level people who are Belarusian or Russian). Not only are users giving up their private information, but they are doing so to malevolent (by definition) regimes.
When the Western app says they don’t sell or give out private information, you can be suspicious but still somewhat trustful. When a dictator-ruled country’s app does so, you can be certain every character you type in there is logged and processed by the government.
A list of contact addresses is not a list of all locations, or all employees, or all contractors, or all shareholders, or all financial interests.
The one thing the site tells me is that it is operated by two separate companies - Flo Inc and Flo Health UK. The directors of Flo Health Limited live in the UK and Cyprus; two are Belarusian nationals and one is Russian.
I would encourage you to read about Edward Snowden and the PRISM program on Wikipedia, and about the most recent attempts by the EU to ban encryption.
Also, here is what Pavel Durov mentioned recently in an interview with Tucker Carlson:
> In the US you have a process that allows the government to actually force any engineer in any tech company to implement a backdoor and not tell anyone about it with using this process called the gag order.
It doesn't matter what anyone claims on the landing page. Assume that if it's stored somewhere, it'll get leaked eventually, and the transiting/hosting government already has access and the decryption keys.
You are right. I still think it’s better if only our guys have this information than both, our guys and their guys. At least Western companies have the possibility to get regulated if political winds change.
> When the Western app says they don’t sell or give out private information, you can be suspicious but still somewhat trustful.
Hey guys, that ycombinator "hacker" forum thing full of Champagne socialists employed by the Zucks/Altmans/Musks of the world told me everything is fine and I shouldn't worry. I remain trustful.
Surely some, ahem, spilled tea can't possibly occur again, right? I remain trustful.
Speaking of tea, surely all the random "id verification" 3rd parties used since the UK had a digital aneurysm have everything in order, right? I remain trustful.
---
Nah, I'll just give my data to my bank and that's about it. Everyone else can fuck right off. I trust Facebook about as much as I trust Putin.
> Yet between 2016 and 2019 Flo Health shared that intimate data with companies including Facebook and Google, along with mobile marketing firm AppsFlyer, and Yahoo!-owned mobile analytics platform Flurry. [...] Every interaction inside the app was also logged, and this data was shared.
Proposed resolution:
1. Wipe out Flo with civil damages, and also wipe out the C-suite and others at the company personally.
2. Prison for Flo's C-suite, and everyone else at Flo who knew what was going on and didn't stop it, or who was grossly negligent when they knew they were handling sensitive info.
3. Investigate what Flo's board and investors knew, for possible criminal and civil liability.
4. Investigate what Flo's data-sharing partner companies knew, and what was done with the data, for possible criminal and civil liability.
Tech industry gold rushes have naturally attracted some of the shittiest elements of society. And the Overton window within the field has shifted so much as a result, with dishonest and underhanded practices becoming SOP, that even decent people have lost their reference points for what's right and wrong. So the tech industry is going to keep doing every greedy, underhanded, and reckless thing it can, until society starts holding it accountable. That doesn't mean regulatory handslaps; that means predatory sociopaths rotting in prison, and VCs and LPs wiped out, as the corporate veils of companies that the VCs knew were underhanded are pierced.
Meta truly is the worst company. In almost everything Meta does, it makes the most user-hostile decisions, awful decisions, every single time.
Cambridge Analytica
The Rohingya Genocide
Suppressing Palestinian content during a genocide
Damage to teenage (and adult) mental health
Anyway, I mention this because some friends are building a social media alternative to Instagram: https://upscrolled.com, aiming to be pro-user, pro-ethics, and designed for people, not just to make money.
As much as I don't like facebook as a company, I think the jury reached the wrong decision here. If you read the complaint[1], "eavesdropped on and/or recorded their conversations by using an electronic device" basically amounted to "flo using facebook's sdk and sending custom events to it" (page 12, point 49). I agree that flo should be raked over the coals for sending this information to facebook in the first place, but ruling that facebook "intentionally eavesdropped" (exact wording from the jury verdict) makes zero sense. So far as I can tell, flo sent facebook menstrual data without facebook soliciting it, and facebook specifically has a policy against sending medical/sensitive information using its SDK[2]. Suing facebook makes as much sense as suing google because it turned out a doctor was using google drive to store patient records.
[1] https://www.courtlistener.com/docket/55370837/1/frasco-v-flo...
[2] https://storage.courtlistener.com/recap/gov.uscourts.cand.37... page 6, line 1
At the time of [1 (your footnote)] the only defendant listed in the matter was Flo, not Facebook, per the cover page of [1], so it is unsurprising that that complaint does not include allegations against Facebook.
The amended complaint, [3], includes the allegations against Facebook as at that time Facebook was added as a defendant to the case.
Amongst other things the amended complaint points out that Facebook's behavior lasted for years (into 2021) after it was publicly disclosed that this was happening (2019), and then even after Flo was forced to cease the practice by the FTC, and congressional investigations were launched (2021) it refused to review and destroy the data that had previously been improperly collected.
I'd also be surprised if discovery didn't provide further proof that Facebook was aware of the sort of data they were gathering here...
[3] https://storage.courtlistener.com/recap/gov.uscourts.cand.37...
>At the time of [1 (your footnote)] the only defendant listed in the matter was Flo, not Facebook, per the cover page of [1], so it is unsurprising that that complaint does not include allegations against Facebook.
Are you talking about this?
>As one of the largest advertisers in the nation, Facebook knew that the data it received
>from Flo Health through the Facebook SDK contained intimate health data. Despite knowing this,
>Facebook continued to receive, analyze, and use this information for its own purposes, including
>marketing and data analytics.
Maybe something came up in discovery that documents the extent of this, but this doesn't really prove much. The plaintiffs are just assuming because there's a clause in ToS saying so, facebook must be using the data for advertising.
No...
In the part of my post that you quoted I'm literally just talking about the cover page of [1] where the defendants are listed, and at the time only Flo is listed. So nothing against Facebook/Meta is being alleged in [1]. They got added to the suit sometime between that document and [3] - at a glance probably as part of consolidating some other case with this one.
Reading [1] for allegations against Facebook doesn't make any sense, because it isn't supposed to include those.
>Reading [1] for allegations against Facebook doesn't make any sense, because it isn't supposed to include those.
The quote from my previous comment was taken from the amended complaint ([3]) that you posted. Skimming that document it's unclear what facebook actually did between 2019 and 2021. The complaint only claims flo sent data to facebook between 2016 and 2019, and after a quick skim the only connection I could find for 2021 is a report published in 2021 slamming the app's privacy practices, but didn't call out facebook in particular.
Ah, sorry, the paragraphs in [3] I'm looking at are
21 - For the claim that there was public reporting that Facebook was presumably aware of in 2019.
26 - For the claim that in February 2021 Facebook refused to review and destroy the data they had collected from Flo to that date, and thus presumably still had and were deriving value from the data.
I can't say I read the whole thing closely though.
That's only the first part of the story, though.
Facebook isn't guilty because Flo sent medical data through their SDK. If they were just storing it or operating on it for Flo, then the case probably would have ended differently.
Facebook is guilty because they turned around and used the medical data themselves to advertise without checking if it was legal to do so. They knew, or should have known, that they needed to check if it was legal to use it, but they didn't, so they were found guilty.
>Facebook is guilty because they turned around and used the medical data themselves to advertise without checking if it was legal to do so.
What exactly did this entail? I haven't read all the court documents, but at least in the initial/amended complaint the plaintiffs didn't make this argument, probably because it's totally irrelevant to the charge of whether they "intentionally eavesdropped" or not. Either they were eavesdropping or not. Whether they were using it for advertising purposes might be relevant in armchair discussions about whether meta is evil or not, but shouldn't be relevant when it comes to the eavesdropping charge.
>They knew, or should have known, that they needed to check if it was legal to use it
What do you think this should look like?
Should large corporations be able to break the law because it's too hard for them to manage their data? Should they be immune from lawsuits because actively moderating their product would hurt their business model? Does Facebook have a right to exist?
You know exactly what it would look like. It would look like Facebook being legally responsible for using the data they get. If they are too big to do that, or are getting too much data to do that, the answer isn't to let them off the hook. Also, let's not pretend Facebook doesn't have a 15-year history of actively misusing data. This is not a one-off event.
>Should large corporations be able to break the law because [...]
No, because this is begging the question. The point being disputed is whether facebook offering a SDK and analytics service counts as "intentionally eavesdropping". Anyone with a bit of understanding of how SDKs work should think it's not. If you told your menstrual secrets to a friend, and that friend then told me, that's not "eavesdropping" to any sane person, but that's essentially what the jury ruled here.
I might be sympathetic if facebook was being convicted of "trafficking private information" or whatever, but if that's not a real crime, we shouldn't be using "intentionally eavesdropping" as a cudgel against it just because we hate it. That goes against the whole concept of rule of law.
>What do you think this should look like?
My honest answer that I know is impossible:
Targeted advertising needs to die entirely.
> What do you think this should look like?
Institutions that handle sensitive data that is subject to access regulations generally have a compliance process that must be followed prior to accessing and using that data, and a compliance department staffed with experts who review and approve/deny access requests.
But Facebook would rather move fast, break things, pay some fines, and reap the benefits of their illegal behavior.
>Institutions that handle sensitive data that is subject to access regulations generally have a compliance process that must be followed prior to accessing and using that data, and a compliance department staffed with experts who review and approve/deny access requests.
Facebook isn't running an electronic medical records business. It has no expectation that it's going to be receiving sensitive data, and specifically discourages it. What more are you expecting? That any company dealing with bits should have a moderation team poring over all records to make sure they don't contain "sensitive data"?
>But Facebook would rather move fast, break things, pay some fines, and reap the benefits of their illegal behavior.
Running an analytics service that allows apps to send arbitrary events is "move fast, break things" now?
Yeah, I'm not sure if I'm missing something, and I don't like to defend FB, but ...
AIUI, they have a system for using data they receive to target ads. They tell people not to put sensitive data in it. Someone does anyway, and it gets automatically picked up to target ads. What are they supposed to do on their end? Even if they apply heuristics for "probably sensitive data we shouldn't use"[1], some stuff is still going to get through. The fault should still lie with the entity that passed on the sensitive data.
An analogy might be that you want to share photos of an event you hosted, and you tell people to send in their pics, while enforcing the norm, "oh make sure to ask before taking someone's photo", and someone insists that what they sent in was compliant with that rule, when it wasn't. And then you share them.
[1] Edit: per your other comment, they indeed had such heuristics: https://news.ycombinator.com/item?id=44901198
It doesn't work like that, though.
Companies don't get to do whatever they want just because they didn't put any safeguards in place to prevent illegally using the data they collected.
The correct answer is to look at the data and verify it's legal to use.
I might be sympathetic to a tiny startup with increased costs, but it's a cost of doing business just like anything else. And Facebook has more than enough resources to put safeguards in place, and they definitely should have known better by now, so they should get punished for not complying.
> The correct answer is to look at the data and verify it's legal to use.
So repeal Section 230 and require every site to manually evaluate all content uploaded for legality before doing anything with it? If it’s not reasonable to ask sites to do that, it’s not reasonable to ask FB to do the same for data you send them.
Your position seems to vary based on how big/sympathetic the company in question is, which is not very even-handed and implicitly recognizes the burden of this kind of ask.
The problem is, the opposite approach is...
"We're scot free, because we told *wink* people to not sell us sensitive data. We get the benefit from it, and we make it really easy for people to sign up and get paid to give us this data that we 'don't want.'"
Please don't sell me cocaine *snifffffffff*
> The fault should still lie with the entity that passed on the sensitive data.
Some benefits to making it be both:
* Centralize enforcement with more knowledgable entities
* Enforce at a level where the misdeeds can actually be identified and have scale, rather than death from a million cuts
* Prevent the central entity from using deniable proxies and cut-throughs to do bad things
This whole notion that we want so much scale, and that scale is an excuse for not paying attention to what you're doing or exercising due diligence, is repugnant. It pushes some cost down but also causes a lot of social harm. If anything, we should expect more ownership and responsibility from those with concentrated power, because they have more ability to cause widescale harm.
>"We're scot free, because we told wink people to not sell us sensitive data. We get the benefit from it, and we make it really easy for people to sign up and get paid to give us this data that we 'don't want.'"
>Please don't sell me cocaine snifffffffff
Maybe there's something in discovery that substantiates this, but so far as I can tell there's no "wink" happening, officially or unofficially. A better analogy would be charging amazon with drug distribution because some enterprising drug dealer decided to use FBA to ship drugs, but amazon was unaware.
I don’t like the analogy because “hosting an event” is a fuzzy thing. If you are hosting an event with friends you might be able to rely on the shared values of your friends and the informal nature of the thing to enforce this sort of norm.
If you are a business that host events and your business model involves photos of the event, you should have a professional approach to knowing if people consented to have their photos shared, depending on the nature of the venue.
At this point it is becoming barely an analogy though.
>I don’t like the analogy because “hosting an event” is a fuzzy thing. If you are hosting an event with friends you might be able to rely on the shared values of your friends and the informal nature of the thing to enforce this sort of norm.
You can't, though -- not perfectly, anyway. Whatever the informal norms, there are going to be people who violate them, and so the fault shouldn't pass on to you when you don't know someone is doing that. If anything, the analogy understates how unreasonable it is to FB, since they had an explicit contractual agreement for the other party not to send them sensitive data.
And as it stands now, websites aren't expected to pre-filter for some heuristic on "non-consensual user-uploaded photographs" (which would require an authentication chain), just to take them down when informed they're illegal ... which FB did (the analog of) here.
>If you are a business that host events and your business model involves photos of the event, you should have a professional approach to knowing if people consented to have their photos shared, depending on the nature of the venue.
I'm not sure that's the standard you want to base this argument on, because in most cases, the "professional approach" amounts to "if you come here at all, you're consenting to be photographed for publication, take it or leave it lol". FB had a stronger standard than this.
> I'm not sure that's the standard you want to base this argument on, because in most cases, the "professional approach" amounts to "if you come here at all, you're consenting to be photographed for publication, take it or leave it lol". FB had a stronger standard than this.
It depends on the event and the nature of the venue. But yes, it is a bad analogy. For one thing Facebook is not an event with clearly delineated borders. It should naturally be given much higher scrutiny than anything like that.
I don't like to defend facebook either but where does this end? Does google need to verify each email it sends in case it contains something illegal? Or AWS before you store something in a publicly accessible S3 bucket?
Here's one that we really don't want to acknowledge because it may give some sympathy towards Facebook (I do not work for them but am well aware of Cambridge Analytica):
Cambridge Analytica was entirely a third party using "Click here to log in via Facebook and share your contacts" via FB's OpenGraph API.
Everyone is sure in their mind that it was Facebook just giving away all user details and that's what the scandal was about, but if you look at the details, the company was using the Facebook OpenGraph API, and users were blindly hitting 'share', including all contact details (allowing them to do targeted political campaigning), when using the Cambridge Analytica quiz apps. Facebook's fault was allowing Cambridge Analytica permission to that API (although at the time they granted pretty much anyone access to it, since they figured users would read the popups).
Now you might say "a login popup that confirms you wish to share data with a third party is not enough" and that's fair. Although that pretty much describes every OAuth flow out there really. Also think about it from the perspective of any app that has a reasonable reason to share a contacts list. Perhaps you wish to make an open source calendar and have a share calendar flow? Well there's precedent that you're liable if someone misuses that API.
We all hate big tech. So do juries. We'll jump at the chance to find them guilty and no one else in tech will complain. But if we think about it for even a second quite often these precedents are terrible and stifling to everyone in tech.
> But if we think about it for even a second quite often these precedents are terrible and stifling to everyone in tech.
Doesn't everything else in your post kinda point to the industry needing a little stifling? Or, more kindly, a big rethink on privacy and better controls over one's own data?
Do you have an example of a similarly terrible precedent in your opinion? One that doesn't include the blatant surveillance state power-grabbing "think of the children" line. Just curious.
Ideally, it ends with Facebook implementing safeguards on data that could be illegal to use, and having a compliance process that rejects attempts to access that data for illegal reasons.
Flo shouldn't have sent those data to FB. That's true. Which is why they settled.
But FB, having received this info, proceeded to use it and mix it with other signals it gets. Which is what the complaint against FB alleged.
I wish there was information about who at Facebook received this information and “used” it. I suspect it was mixed in with 9 million other sources of information and no human at Facebook was even aware it was there.
Is your argument that it's fine to just collect so much information that you can't possibly responsibly handle it all?
In my opinion, that isn't something that should be allowed or encouraged.
I’m not the OP, but no. I think their point is that if you tell people this data will be used for X and not to send sensitive data that way, and they do it anyway, you can’t really be responsible for it - the entity who sent you the data and ignored your terms should be.
Not at Facebook, but I used to work on an ML system that took well-defined and free-form JSON data and ran ML on it. Both were used in training and classification. Unless a human looked, we had no idea what those custom fields were. We also had customers lie about what the fields represented, for valid and less valid reasons.
Without knowing how it works at Facebook, it's quite possible the data points got slurped in, the models found meaning in the data and acted on it, and no human knew anything about it.
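For anyone who hasn't worked on such a system, the standard hashing trick makes this concrete: free-form keys and values get hashed into feature buckets, so a model can learn from a field no human ever read. A sketch (field names invented; not Facebook's actual pipeline):

    # Hashing-trick featurization over free-form JSON. Nothing here
    # inspects what a field means - which is exactly the point.
    import hashlib

    N_BUCKETS = 2 ** 20

    def featurize(event: dict) -> set[int]:
        buckets = set()
        for key, value in event.items():
            digest = hashlib.md5(f"{key}={value}".encode()).hexdigest()
            buckets.add(int(digest, 16) % N_BUCKETS)
        return buckets

    # A sensitive custom event is bucketed like any other; a model can
    # find it predictive without anyone knowing it was ever ingested.
    featurize({"custom_event": "PREGNANCY_WEEK_CHOSEN", "week": 4})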
How it happened internally is irrelevant to whether Facebook is responsible. Deploying systems they do not properly control or understand does not shield against legal or normal responsibilities!
There is a trail of people who signed off on this implementation. It is the fault of one or more people, not machines.
>Deploying systems they do not properly control or understand does not shield against legal or normal responsibilities!
We can argue the "moral" aspect until we're both blue in the face, but did facebook have any legal responsibilities to ensure its systems didn't contain sensitive data?
So they shouldn’t be punished because they were negligent? Is that your argument?
I think their argument is that FB has a pipeline that processes whatever data you give it, and the idea that a human being made the conscious decision to use this data is almost certainly not what happened.
"This data processing pipeline processed the data we put in the pipeline" is not necessarily negligence unless you just hate Facebook and couldn't possibly imagine any scenario where they're not all mustache-twirling villains.
Yeah, sorry, no, I have to disagree.
We're seeing this broad trend in tech where we just want to shrug and say "gee whiz, the machine did it all on its own, who could've guessed that would happen, it's not really our fault, right?"
LLMs sharing dangerous false information, ATS systems disqualifying women at higher rates than men, black people getting falsely flagged by facial recognition systems. The list goes on and on.
Humans built these systems. Humans are responsible for governing those systems and building adequate safeguards to ensure they're neither misused nor misbehave. Companies should not be allowed to tech-wash their irresponsible or illegal behaviour.
If Facebook did indeed build a data pipeline and targeted advertising system that could blindly accept and monetize illegally acquired data without any human oversight, then Facebook should absolutely be held accountable for that negligence.
What does the system look like where a human being individually verifies every piece of data fed into an advertising system? Even taking the human out of the loop, how do you verify the "legality" of one piece of data vs. another coming from the same publisher?
None of your example have anything to do with the thing we're talking about, and are just meant to inflame emotional opinions rather than engender rational discussion about this issue.
That's not my problem to solve?
If Facebook chooses to build a system that can ingest massive amounts of third party data, and cannot simultaneously develop a system to vet that data to determine if it's been illegally acquired, then they shouldn't build that system.
You're running under the assumption that the technology must exist, and therefore we must live with the consequences. I don't accept that premise.
Edit: By the way, I'm presenting this as an all-or-nothing proposition, which is certainly unreasonable, and I recognize that. KYC rules in finance aren't a panacea. Financial crimes still happen even with them in place. But they represent a best effort, if imperfect, attempt to acknowledge and mitigate those risks, and based on what we've seen from tech companies over the last thirty years, I think it's reasonable to assume Facebook didn't attempt similar diligence, particularly given a jury trial found them guilty of misbehaviour.
> None of your example have anything to do with the thing we're talking about, and are just meant to inflame emotional opinions rather than engender rational discussion about this issue.
Not at all. I'm placing this specific example in the broader context of the tech industry failing to a) consider the consequences of their actions, and b) escaping accountability.
That context matters.
I often think about what having accountability in tech would entail. These big tech companies only work because they can neglect support and any kind of oversight.
In my ideal world, platforms and their moderation would be more localized, so that individuals would have more power to influence it and also hold it accountable.
It's difficult for me to parse what exactly your argument is. Facebook built a system to ingest third-party data. Whether you feel that such technology should exist to ingest data and serve ads is, respectfully, completely irrelevant. Facebook requires any entity (e.g. the Flo app) to gather consent from their users before sending user data into the ingestion pipeline, per the terms of their SDK. The Flo app, in a phenomenally incompetent and negligent manner, not only sent unconsented data to Facebook, but sent -sensitive health data-. Facebook then did what Facebook does best, which is ingest this data _that Flo attested was not sensitive and was collected with consent_ into their ads systems.
So let's consider the possibilities:
#1. Facebook did everything they could to evaluate Flo as a company and the data they were receiving, but they simply had no way to tell that the data was illegally acquired and privacy-invading.
#2. Facebook had inadequate mechanisms for evaluating their partners, and that while they could have caught this problem they failed to do so, and therefore Facebook was negligent.
#3. Facebook turned a blind eye to clear red flags that should've caused them to investigate further, and Facebook was malicious.
Personally, given Facebook's past extremely egregious behaviour, I think it's most likely to be a combination of #2 and #3: inadequate mechanisms to evaluate data partners, and conveniently ignoring signals that the data was ill-gotten, and that Facebook is in fact negligent if not malicious. In either case Facebook should be held liable.
pc86 is taking the position that the issue is #1: that Facebook did everything they could, and still, the bad data made it through because it's impossible to build a system to catch this sort of thing.
If that's true, then my argument is that the system Facebook built is too easily abused and should be torn down or significantly modified/curtailed as it cannot be operated safely, and that Facebook should still be held liable for building and operating a harmful technology that they could not adequately govern.
Does that clarify my position?
No one is arguing that FB has not engaged in egregious and illegal behavior in the past. What pc86 and I are trying to explain is that in this instance, based on the details of the court docs, Facebook did not make a conscious decision to process this data. It just did. Because this data, combined with the billion+ data points that Facebook receives every single second, was sent to Facebook with the label that it was "consented and non-sensitive health data" when it most certainly was not consented and very sensitive health data. But this is the fault of Flo. Not Facebook.
You could argue that Facebook should be more explicit in asking developers to self-certify and label their data correctly, or not send it at all. You could argue that Facebook should bolster their signal detection when they receive data from a new app for the first time. But to argue that a human at Facebook blindly built a system to ingest data illegally without any attempt to prevent it is a flawed argument, as there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive has the appropriate consents (which the data Flo sent claimed to have). This case is very squarely #1 in your example and maybe a bit of #2.
If FB is going to use the data, then it should have the responsibility to check whether they can legally use it. Having their supplier say "It's not sensitive health data, bro, and if it is, it's consented. Trust us" should not be enough.
To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.
>To use an extreme example, if someone posts CSAM through Facebook and says "It's not CSAM, trust me bro" and Facebook publishes it, then both the poster and Facebook have done wrong and should be in trouble.
AFAIK that's only because of mandatory scanning laws for CSAM, which were only enacted recently. There's no such obligations for other sensitive data.
Mens rea vs actus reus.
In some crimes actus reus is what matters. For example, if you're handling stolen goods (in the US), the authorities can seize these goods and any gains from them, even if you had no idea they were stolen.
Tech companies try to absolve themselves of mens rea by making sure no one says anything via email or other documented process that could otherwise be used in discovery. "If you don't admit your product could be used for wrong doing, then it can't!"
>Facebook did not make a conscious decision to process this data.
Yes, it did. When Facebook built the system and allowed external entities to feed it unvetted information without human oversight, that was a choice to process this data.
> without any attempt to prevent it is a flawed argument, as there are many controls, many disclosures, and (I'm sure) many internal teams and systems designed exactly for the purpose of determining whether the data they receive is has the appropriate consents
This seems like a giant assumption to make without evidence. Given the past bad behavior from Meta, they do not deserve this benefit of the doubt.
If those systems exist, they clearly failed to actually work. However, the court documents indicate that Facebook didn't build out systems to check if stuff is health data until afterwards.
> Facebook did not make a conscious decision to process this data. It just did.
What everyone else is saying is that what they did is illegal, and they did it automatically, which is worse. What you're describing was, in fact, built to do that. They are advertising to people based on the honor system of whoever submits the data pinky promising it was consensual. That's absurd.
"doing everything they could" is quite the high standard. Personally, I would only hold them to the standard of making a reasonable effort.
Yup, fair. I tried to acknowledge that in my paragraph about KYC in a follow-up edit to one of my earlier comments, but I agree, the language I've been using has been intentionally quite strong, and sometimes misleadingly so (I tend to communicate using strong contrasts between opposites as a way to ensure clarity in my arguments, but reality inevitably lands somewhere in the middle).
It is necessarily negligence if they are ingesting a lot of illegal data, right? I mean, it could be the case that this isn’t a business model that works given typical human levels of competence.
But working beyond your competence when it results in people getting hurt is… negligent.
You're absolutely right, a human being didn't make the conscious decision to use this data. They made a conscious decision to build an automated pipeline that uses this data and another conscious decision not to build in any checks on the legitimacy of said data. Do we want the law to encourage responsibility or intentional ignorance and plausible deniability?
I would expect an app with 150 million active users to trigger some kind of compliance review at Meta.
This is the argument companies use for having shitty customer support. "Our business is too big for our small support team."
Why are you scaling up a business that can't refrain from fucking over customers?
I would say you have a responsibility to ensure you are getting legal data. You don't buy stolen things. That is, meta has a responsibility to ensure that they are not partnering with crooks. Flo gets the largest blame but meta needs to show they did their part to ensure this didn't happen. (I would not call terms of use enough unless they can show they make you understand it)
>Flo gets the largest blame but meta needs to show they did their part to ensure this didn't happen. (I would not call terms of use enough unless they can show they make you understand it)
Court documents say that they blocked access as soon as they were aware of it. They also "built out its systems to detect and filter out “potentially health-related terms.”". Are you expecting more, like some sort of KYC/audit regime before you could get any API key? Isn't that the exact sort of stuff people were railing against, because indie/OSS developers were being hassled by the play store to undergo expensive audits to get access to sensitive permissions?
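Worth noting how weak such a filter necessarily is. A crude sketch of the general idea (the stems and event shape are invented; the court docs don't spell out Facebook's implementation):

    # Crude term filter of the kind described above - not Facebook's code.
    HEALTH_STEMS = ("menstr", "ovulat", "pregnan", "period")

    def looks_health_related(event_name: str) -> bool:
        name = event_name.lower()
        return any(stem in name for stem in HEALTH_STEMS)

    def ingest(event_name: str, payload: dict) -> None:
        if looks_health_related(event_name):
            return  # drop instead of feeding the ads models
        # ... normal analytics path ...

    # The obvious failure mode: an obfuscated name sails straight through.
    assert looks_health_related("PREGNANCY_WEEK_CHOSEN")
    assert not looks_health_related("PW_CHOSEN")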
Facebook chose to pool the data they received from customers and allow its use by others, so they are also responsible for the outcomes. If it's too hard to provide strong assurance that errors like Flo's won't result in adverse outcomes for the public, perhaps they should have designed a system that didn't work that way.
>Facebook chose to pool the data they received from customers and allow its use by others, so they are also responsible for the outcomes.
"chose" is doing a lot of the heavy lifting here. Suppose you ran a Mastodon server and it turned out some people were using it to share revenge porn unbeknownst to you. Suppose further that they did it in a way that didn't make it easily detectable by you (eg. they did it in DMs/group chats). Sure, you can dump out the database and pore over everything just to be sure, but it's not like you're going to notice it day to day. If a few months later the revenge porn ring got busted should you be charged with "intentionally eavesdropping" on revenge porn or whatever? After all, to some extent, you "chose" to run the Mastodon server.
I have the type of email address that regularly receives email meant for other people with a similar name. Invites, receipts, and at one point someone's Disney+ account.
At one point I was getting a stranger's fertility app updates - I didn't know her name, but I could tell you where she was in her cycle.
I've also had NHS records sent to me, again entirely unsolicited, although that had enough detail that I could find who it was meant for and inform them of the data breach.
I'm no fan of facebook, but I'm not sure you can criminalise receiving data; you can't control what others send you.
> ...you can't control what others send you.
Of course not. You can, however, control what you then do with said data.
If a courier accidentally dropped a folder full of nuclear secrets in your mailbox, I promise you that if you do anything with it other than call the FBI (in the US), you will be in trouble.
Except in this case it's unclear whether any intentional decision went on at meta. A better analogy would be if someone sent you a bunch of CSAM, it went to your spam folder, but then because you have backups enabled the CSAM got replicated to 3 different servers across state lines, and the FBI is charging you with "distributing" CSAM.
If Flo accepted the terms of use, then it means they understand it.
Really the only blame here should be on Flo.
> you don't buy stolen things.
This happens accidentally every single day and we don't punish the victim
We do punish the victim - we take away stolen goods, and if they knew they were stolen goods, they can be punished for it. Money laundering laws catch a lot of innocent people doing legal things.
That's why in these cases you'd prefer a judgment without a jury. Technical cases like this will always confuse jurors, who can't be expected to understand details about sdk, data sharing, APIs etc.
On the other hand, in a number of high-profile tech cases, you can see judges learning and discussing engineering at a deeper level.
> Technical cases like this will always confuse jurors... On the other hand, in a number of high-profile tech cases, you can see judges learning and discussing engineering at a deeper level.
Not to be ageist, but I find this highly counterintuitive.
Judges aren't necessarily brilliant, but they do spend their entire careers reading, listening to, and dissecting arguments. A large part of this requires learning new information at least well enough to make sense of arguments on both sides of the issue. So you do end up probably self-selecting for older folks able to do this better than the mean for their age, and likely for the population at large.
Let's just say with a full jury you're almost guaranteed to get someone on the other side of the spectrum, regardless of age.
How exactly? You expect the average Joe to have a better technical understanding, and more importantly a better ability to learn, than a judge? That is bizarre to me.
I expect the average Joe to use technology much more than a judge.
The judge is at their job. The jurors are conscripts who are often paying a financial penalty to be present.
Weird deference to authority
I've also heard you want a judge trial if you're innocent, jury if you're guilty. A judge will quickly see through prosecutorial BS if you didn't do it, and if you did, it only takes one to hang.
Is it easier for the prosecution to make the jury think Facebook is guilty or for Facebook to make the jury think they are not? I don’t see why one would be easier, except if the jury would be prejudiced against Facebook already. Or is it just luck who the jury sides with?
I'd imagine Facebook would look to have any potential juror in tech dismissed as quickly as possible, while the prosecution would be looking to seat as many tech jurors as they could luck their way into seating.
I mean, it totally depends what your views on democracy are. Juries are one of the few practices - likely the only one - taken from Ancient Athenian democracy, which was truly led by the people. The fact that juries still work this way is a testament to the practice.
With this in mind, I personally believe groups will always come to better conclusions than individuals.
Being tried by 12 instead of 1 means more diversity of thought and opinion.
I mostly agree here, but would add there's definitely a social pressure to go along with the group a lot of the time, even in jury trials. How many people genuinely have the fortitude to stand up to a group of 10+ others with a countering pov?
I don't disagree, but think of the pressures a judge has as an individual as well. Pressures from the legal community, the electorate, and being seen as impartial.
There is a wisdom of the crowd, and that wisdom comes in believing that we are all equal under the law. This wisdom is more self evident in democratic systems, like juries.
My understanding is defendants always get to choose, no? So that was an available option they chose not to avail themselves of.
>> Technical cases like this will always confuse jurors.
This has been an issue since the internet was invented. It's always been the duty of the lawyers on both sides to present the information in cases like this in a manner that is understandable to the jurors.
I distinctly remember during the OJ case, the media said many issues were most likely presented in such a detailed manner that many of the jurors seemed to check out. At the time, the prosecution spent days just on the DNA evidence. In contrast, the defense spent days just on how the LAPD collected evidence at the crime scene, with the same effect: many on the jury seemed to check out the deeper the defense dug into it.
So it's not just technical cases; any kind of court case that requires a detailed understanding of anything complex comes down to how the lawyers present it to the jury.
I tend to agree in this instance. But this is why you don't build a public brand of doing shit very much like this constantly.
Innocent until proven guilty is the right default, but at some point when you've been accused of misconduct enough times? No jury is impartial.
Suing Facebook instead of Flo makes perfect sense, because Facebook has much more money. Plus juries are more likely to hate FB than a random menstruation company.
They sued both.
[flagged]
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
> When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
> Please don't fulminate. Please don't sneer, including at the rest of the community.
Whenever you think of a court versus Facebook, imagine one of these mini mice trying to stick it to a polar bear. Or a goblin versus a dragon, or a fly versus an elephant.
These companies are for the most part effectively outside of the law. The only time they feel pressure is when they can lose market share, and there's risk of their platform being blocked in a jurisdiction. That's it.
>These companies are for the most part effectively outside of the law
You have it wrong in the worst way. They are wholly inside the law because they have enough power to influence the people and systems that get to use discretion to determine what is and isn't inside the law. No amount of screeching about how laws ought to be enforced will affect them because they are tautologically legal, so long as they can afford to be.
It's one of those "I'm not trapped here with you; you're trapped here with me" type things.
I think this situation is described best as being "above" the law.
Pedantic, but fair. You're right.
The worst part for me personally is that almost everyone I know cares about this stuff and yet they keep all of their Meta accounts. I really don't get it and frankly, find it kind of disturbing.
I know people that don't see anything wrong with Meta so they keep using it. And that's fine! Your actions seem to align with your stated values.
I get human fallibility. I've been human for a while now, and wow, have I made some mistakes and miscalculations.
What really puts a bee in my bonnet though is how dogmatic some of these people are about their own beliefs and their judgement of other people.
I love people, I really do. But what weird, inconsistent creatures we are.
Voting with your feet doesn't work if you don't have a place to go. People are afraid of losing their connections, which are some of the most precious things we have. Doesn't matter if it's an illusion, that's enough. Zuck is holding us hostage on our most basic human instincts. I think that's fucked up.
Eh, I care and I don't do it, but my wife does. I do not agree with her choices in that area and have voiced my concerns in a way I hoped would speak to her, but it does not work, as it is now a deeply ingrained habit.
I, too, have vices she tolerates so I don't push as hard as I otherwise would have, but I would argue it is not inconsistency. It is a question of what level of compromise is acceptable.
I keep sharing stories like this with them. Privacy violations, genocide, mental health, .... Whenever I think it might be something someone cares about, I share it with them. I also make an effort to explain to my non-tech folks that Meta is Facebook, Instagram, and WhatsApp, to make sure they understand and recognize the name. Many people do not know what Meta is. Sometimes I suspect the rename was a way to absorb the bad publicity and protect their brands.
> The worst part for me personally is that almost everyone I know cares about this stuff and yet they keep all of their Meta accounts.
They care as much as people who claim to care about animals but still eat them, or people who claim to love their wives and still beat them or cheat on them. Your actions are the sole embodiment of your beliefs.
All they need to do is impose a three-digit fine per affected user and Facebook will immediately feel intense pressure.
$1 for the first user, $2 for the second, $4 for the third... By the 30th user, it would be painful even for megacorps. By the 40th, it would be an absurd number.
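The doubling arithmetic checks out - it's just a geometric series:

    # Doubling fine: $1 for user 1, $2 for user 2, ..., $2^(n-1) for user n.
    def fine_for(n: int) -> int:
        return 2 ** (n - 1)

    def total_through(n: int) -> int:
        return 2 ** n - 1  # sum of the geometric series

    print(fine_for(30))       # 536_870_912     -> ~$537M for the 30th user alone
    print(total_through(30))  # 1_073_741_823   -> ~$1.07B cumulative
    print(fine_for(40))       # 549_755_813_888 -> ~$550B, absurd as promised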
Might also be worth trying to force them to display a banner on every page of the site "you're on facebook, you have no privacy here", like those warnings on cigarette boxes. These might not work though, people would just see and ignore them, just like smokers ignore warnings about cigarettes.
But these users were NOT on Facebook. It was an app using the FB SDK. So it should be the apps that use SDKs that put up large banners clearly identifying who they are sharing data with. Some of these sites are sharing with >100 third-party sites. It is outrageous.
Three digits? The only thing these folks understand is exponential growth per affected user.
Yes, three digits. Across roughly 150 million affected users, that would be 15 to 150 billion dollars, and Facebook would understand that amount.
Who's this "they" you speak of, and why would they bother doing that?
The court. Because it's their job.
I'm not using "fine" very literally. Damages paid to the victims.
Roblox lul
Everybody blames facebook, no one blames the legislators and the courts.
Stuff like this could easily be made to cost them multi-billion-dollar fines, and stuff that affects more users maybe even fines in the trillion range. When government workers come to pick up servers, chairs, and projectors from company buildings to sell at auction, because there is not enough liquid value in the company to pay the fines, they (well, the others) would reconsider quite fast and stop the illegal activities.
Sarah Wynn-Williams (if I have the name right) testified in the US Congress about Facebook's strategies for handling governments. Based on her book, it seems Brazil has been the most effective of the major democratic governments in confronting Facebook. Of course, you have China completely banning Facebook.
I think Mark Zuckerberg is acutely aware of the political power he holds and has been using this immense power for at least the last decade. But since Facebook is a US company and the US government is not interested in touching Facebook, I doubt anyone will see what Zuckerberg and Facebook are up to. The US would have to put Lina Khan back in at the FTC, or put her high up in the Department of Justice, to split Facebook into pieces. I guess the other hope is that states' attorneys general win an anti-monopoly lawsuit.
Don't get me wrong, I don't "blame Facebook". I lament the environment that empowers Facebook to exist and do harm. These companies should be gutted by the state, but they won't because they pump the S&P.
[flagged]
Funny, but this kinda implies that some person designed it this way. It's a resultant sum of small vectors, with corporate lobbying playing a significant role. Corporate lobbying systemically can't do anything other than try to increase profits, which usually means less regulation. A clean-slate design would require a system collapse.
> Corporate lobbying systemically can't do anything else than try to increase profits, which usually means less regulation.
Corporate lobbying can be for more regulation. It can disadvantage competitors. Zuckerberg has spoken in favour of greater regulation of social media in the past. The UK's Online Safety Act creates barriers to entry and provides an excuse for more tracking. I can think of examples, some acknowledged by the CEOs of the companies involved, ranging from British pubs to American investment banks.
When Facebook releases an AI Model for free: "Based Facebook. Zuckerberg is a genius visionary"
When Facebook does something unforgivable: "It's a systemic problem. Zuck is just a smol bean"
Zuck can take his model onto his private island and talk to it instead of trying to be a normal human being.
Don't conflate me with the personality worshippers on HN, I'm not one of them, even though it seems like it to you because I also post here. You won't find a single instance of me glazing tech leaders.
What's with this reductionist logic? Nothing is ever 100% good or 100% evil, everything is on a spectrum.
So just because Zuck does some good stuff for the tech world doesn't mean his work isn't a net negative to society.
> doesn't mean his work isn't a net negative to society
Oh he absolutely is.
I'm just saying that it's common in this community to attribute the achievements of big companies to leadership (E.g. the mythology of Steve Jobs), but dismiss all the evil stuff to "systemic issues".
> Funny, but this kinda implies that some person designed it this way
How do you get to that implication? I'm missing a step or two I think...
From "do you want X? this is how you get X". This invokes an image of talking to a person who decided the how, because they can be questioned on whether they want the X.
I once ran across Zuckerberg in a Palo Alto cafe. I only noticed him (I was in the process of ordering a sandwich, and don’t really care about shit like that) because he was being ‘eeeeeeee’d’ by a couple of random women that he didn’t seem to know. He seemed pretty uncomfortable about the whole thing. One of them had a stroller which she was profoundly ignoring during the whole thing, which I found a bit disturbing.
The next time I saw him in Palo Alto (a couple months later on the street), he had 2 totally-not-security-dudes flanking him, and I saw at least one random passerby ‘redirected’ away from him. This wasn’t at the cafe though, it wouldn’t surprise me if he didn’t go there again.
This was a decade before Luigi. Luigi came well after Meta was in the news for spending massive amounts of money on security, and after Zuck drew a lot of controversy for his ‘compound’ in PA.
I can assure you, Meta is well aware of the situation, and a Luigi isn’t going to have a chance in this situation.
The reality, in my experience, is that any random person given the amount of wealth these folks end up with would end up making similar (or worse) decisions, and while contra-pressure from Luigis is important in the overall system, folks like Zuckerberg are more a result of the system and rules than the cause of them (but they then influence the next system/rules, in a giant Ouroboros type situation).
Kind of a ‘we either die young a hero, or live to become the villain’ kind of thing. And the only reason anyone dies a young hero is that they lost the fight against the prior old villains. If they’d won (even in a heroic fashion), life would have turned them into the old villains shortly.
The wheel turns.
It's not the only way. The oppressed do not need to become the oppressor; it's just the simplest rut for the wheel to turn in.
Sure, they can stay the oppressed?
Using the entropic model you seem to indicate (which I also favor), us vs them seems to be the lowest energy state.
It’s certainly possible to not be there at any given time, but seems to require a specific and somewhat unique set of circumstances, which are not the most energetically stable.
> he was being ‘eeeeeeee’d’ by a couple of random women
Maybe I'm too old, but what in the world does being eeee'd mean?
>I can assure you, Meta is well aware of the situation, and a Luigi isn’t going to have a chance in this situation.
With all due respect, Luigi was just a CS student with a six-pack, a self-made gun, and an aching back, on a mission.
The Donald himself nearly got got by his ear while he had the Secret Service of the US of A to protect him, not some private goons for hire, and that was just a random redditor with a rifle, not a professional assassin.
So what would happen if, let's say, Meta's algorithms push a teenage girl to kill herself by exploiting her self-esteem issues to sell her more beauty products, and her ex-Navy SEAL dad with nothing more to lose grabs his McMillan TAC-338 boom stick and makes it his life's mission to avenge his lost daughter at the expense of his own? Zuck would need to be lucky every time, but that bad actor would only need to be lucky once.
I'm not advocating for violence btw, my comment was purely hypothetical.
Pretty much anyone without presidential-quality security clearing the place ahead of them stands to get clapped, Franz Ferdinand style, by anyone dedicated enough to camp out waiting.
And yet, Mr. Trump is up there trolling the world like he loves to do, and Zuck is out there doing whatever he wants.
The reality is, all those ex-Navy SEAL dads are (generally) wishing they could make the cut to get on those dudes’ payroll, not gunning for them. Or sucking up to the cult, in general.
The actual religious idea of Karma is not ‘bad things happen to bad people right now’, the way we would like.
Rather ‘don’t hate on king/priest/rich dude, they did something amazing in a prior life which is why they deserve all this wealth right now, and if they do bad things, they’ll go down a notch - maybe middle class - in the next life’.
It’s to justify why people end up suffering for no apparent reason in this life (because they had to have done something really terrible in a prior life), while encouraging them to do good things still for a hopefully better next life (if you think unclogging Indian sewers in this life is bad, you could get reincarnated as a roach in that sewer in the next life!). So they don’t go out murdering everyone they see, even if they get shit on constantly.
There is no magic bullet. Hoping someone else is going to solve all your problems is exactly how manipulative folks use you for their own purposes. And being a martyr to go after some asshole is being used that way too.
This is also why eventually an entire generation of hippies turned into accountants in the 80’s.
shrug
[dead]
"I can assure you, Meta is well aware of the situation, and a Luigi isn’t going to have a chance in this situation."
Luigi was a dude with a 3D printed gun.
I have LASERs with enough power to self-focus, have zero ballistic drop, and can dump as much power as a .50cal BMG in a millisecond burst of light which can hit you from the horizon's edge. All Zuck needs to do is stand by a window, and eyeballs would vaporize.
Mangione is going to either die rotting in prison, or preferably get sent to the electric chair. His life will be wasted. Meanwhile, UNH is continuing to do business as usual. One way or the other, Mangione will die knowing his life was wasted, and that his legacy is not reform but cold-blooded murder.
Call it a “day of rage” or just babyrage, but we build systems so our bus factor can increase above 1. Just killing people no longer breaks them. It makes someone nothing more than a juvenile murderer.
I don’t really care what lasers you have, I’d suggest you choose a different legacy for yourself.
[dead]
>His life will be wasted.
His life was already wasted due to his medical condition. Don't ever bet against people with nothing to lose.
FBI open up
> I only noticed him (I was in the process of ordering a sandwich) because he was being ‘eeeeeeee’d’ by a couple of random women that he didn’t seem to know. He seemed pretty uncomfortable about the whole thing.
Pretty funny, considering that Facebook's origin story was a site for comparing women's looks. Or this memorable quote:
> People just submitted it. I don't know why. They 'trust me'. Dumb fucks.
Have you ever ordered a really good steak, like amazing. And really huge, and inexpensive too.
And it really is amazing! And super tasty.
But it’s so big, and juicy, that by the end of it you feel sick? But you can’t stop yourself?
And then at the end of it, you’re like - damn. Okay. No more steak for awhile?
If not steak, then substitute cake. Or Whiskey.
Just because you got what you wanted doesn’t mean you’re happy with all the consequences, or can stomach an infinitely increasing quantity of it.
Of course, he can pay to mitigate most of them, and he gets all the largest steaks he could want now, so whatever. I’m not going to cry about it. I thought it was interesting to see develop however.
Personally, I see it as poetic justice. He started off objectifying women with FaceMash; he doesn't get to cry about being objectified and drooled over himself.
[flagged]
I don't think many of you read the article... the Flo app is the one in the wrong here, not Meta. The app's developers were sending user data to Meta with no restrictions on its use, however the court ruled.
> the Flo app is the one in the wrong here, not Meta.
Flo is wrong for using an online database for personal data.
Meta is wrong for facilitating an online database for personal data.
They're both morally and ethically wrong.
> The app's developers were sending user data to Meta with no restrictions on its use
And then Meta accessed it. So unless you put restrictions on data, Meta is going to access it. Don't you think it should be the other way around - that Meta has to ask for permission? Then we wouldn't have this sort of thing.
Do you think AWS should ask for permission before processing some random B2C app user's data?
From the article: "The jury ruled that Meta intentionally “eavesdropped on and/or recorded their conversations by using an electronic device,” and that it did so without consent."
If AWS wanted to eavesdrop and/or record conversations of some random B2C app user, for sure they would need to ask for permission.
If you read the court documents, "eavesdropped on and/or recorded" basically meant "Flo used Facebook's SDK to send analytics events to Facebook". It's not like they were MITMing connections to Flo's servers.
https://www.courtlistener.com/docket/55370837/1/frasco-v-flo...
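For anyone who hasn't seen it, a custom event through the Meta (Facebook) Android SDK is roughly the following. This is a minimal sketch in Kotlin; the event and parameter names are invented for illustration, not the actual ones from the case:

    import android.content.Context
    import android.os.Bundle
    import com.facebook.appevents.AppEventsLogger

    // Hypothetical sketch of a custom app event. Anything placed in the
    // Bundle leaves the device for Meta's servers, attached to identifiers
    // the SDK collects on its own.
    fun logSymptomSelected(context: Context) {
        val logger = AppEventsLogger.newLogger(context)
        val params = Bundle().apply {
            putString("selected_symptom", "cramps") // hypothetical parameter
        }
        logger.logEvent("R_SYMPTOM_SELECTED", params) // hypothetical event name
    }

The point is that, from the SDK's side, this is an ordinary analytics call; nothing in the API itself distinguishes a button A/B test from an intimate health datum, which is exactly what the dispute over Meta's knowledge and intent turns on.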
I think it's a distinction without a difference. To make it more obvious, imagine it was one of those AI assistant devices that records your conversations so you can recall them later. It's plainly obvious that accessing this data for any purpose other than servicing user requests is morally equivalent to eavesdropping on a person's conversations in the most traditional sense.
If the company sends your conversation data to Facebook, that's bad and certainly a privacy violation, but at that point nothing has actually been done with the data yet. Then Facebook accesses the data and folds it into their advertising signals; they have now actually looked at the data and acted on the information within. And that, to me, is eavesdropping.
To extend your analogy further, what if, instead of an AI assistant, it was your friend who listened to your secret, and instead of sending your data to Facebook, he told it to Google (e.g. "hey Gemini, my friend has hemorrhoids...")? Suppose further that Google uses Gemini queries for advertising purposes (e.g. upranking ad results for hemorrhoid creams). Should Gemini be on the hook for this breach of trust? What if, instead of a big evil tech company, it was the owner of a local corner shop who uses this factoid to inform his purchasing decisions?
I disagree - the blame lies with the people who sent that data to Facebook knowing it was sensitive. Whether Meta uses it for advertising or not is irrelevant.
By that logic, if I listen in on your conversations but don’t do anything about it I’m not eavesdropping?
I mean, this is more philosophical than anything: if you listened to my conversations but never told anyone what I said, altered your behavior, or took any action based on the information contained therein, then how would I or anyone even know?
And I know it sounds pedantic, but I don't think it is; it's why that data is allowed to be stored in S3 and Amazon isn't considered to be eavesdropping but Facebook is.
If they are going to add it to a person's profile and/or sell ads based on it, yes.
Here's the restriction: don't send it to fb in the first place!
If Disney mistakenly sends you a prerelease copy of the new Star Wars, playing that in your local movie theater is still a crime.
Possession of data does not give you complete legal freedom.
But if you have an agreement with Disney that says “if you send us a movie we will show it”, and Disney sends you the wrong thing, it’s Disney’s fault, not yours.
Which is what happened here.
I don't think comparisons are useful here. We are dealing with an evil corporation which, as we all know, has been proven many times to have broken the law and gotten away with it every time. Who are you protecting?
Here's another one: FB shouldn't use every piece of data it can collect.
5 years ago I was researching the iOS app ecosystem. As part of that exercise I was looking at the potential revenue figures for some free apps.
One developer had a free app to track some child health data. It was a long time ago, so I don't remember the exact data being collected. But when asked about the economics of his free app, the developer felt confident about a big payday.
As per him, the app's worth was in the data being collected. I don't know what happened to the app, but it seems that app developers know what they are doing when they invade the privacy of their users under the guise of a "free" app. After that I became very conscious about disabling as many permissions as possible and especially about not using apps to store any personal data, especially health data.
I don't understand why anyone would let these psychopathic corporations have any of their personal or health data. Why would you use an app that tracked health data, or use a wearable device from any of these companies that did that? You have to assume, based on their past behavior, that they are logging every detail and that it's going to be sold and saved in perpetuity.
I guess because people want to track some things about their health, and someone provides a good piece of software with a simple UI to do it, and this is more useful than, say, writing it down in a notebook, or in a text file or notes app.
I guess also people feel that corporations _shouldn't_ be allowed to do bad things with it.
Sadly, we already know with experience in the last 20 years, that many people don't care about what information they give to large corporations.
However, I do see more and more people increasingly concerned about their data. They are still mainly people involved in tech or related disciplines, but this is how things start.
Well maybe one reason this is hard to understand is that the plaintiff in this case hasn’t been harmed in any way. I suppose you could also argue, why would anyone go outside, there are literally satellites in space that image your every move in control of psychopathic corporations, logging every detail which they sell and save in perpetuity.
Don't use apps. It's as simple as that. 95% of the time they are not worth the incredible privacy invasion they impose on users.
Mozilla did a comparison of period-tracking apps, and there are some that appear to respect users' privacy:
https://www.mozillafoundation.org/en/privacynotincluded/cate...
Even beyond that, I expect software developers to prove to me that an Internet connection is necessary for whatever it is they're trying to do.
Pardon my ignorance, but can't you just solve this by disabling location permissions, etc for a given app?
You can -- the real problem here is that each app could violate your privacy in different ways. Unless you break TLS and inspect all the traffic coming from an app (and, do this over time since the reality of what data is sent will change over time) then you don't really know what your apps are stealing from you. For sure, many apps are quite egregious in this regard while some are legitimately benign. But, do you as a user have a real way to know this authoritatively, and to keep up with changes in the ecosystem? My argument would be that even security researchers don't have time to really do a thorough job here, and users are forced to err on the side of caution.
What they do then is create an app where location is necessary, make that app spin up a localhost server, then add JS to facebook.com and every site with a Like button that phones that localhost server and basically deanonymizes everyone.
How could this possibly work without port forwarding?
2 months ago: https://news.ycombinator.com/item?id=44169115.
Of course Facebook's JS won't add itself to websites, so half of the blame goes to webmasters willingly sending malware to browsers.
It happens on the same device; no forwarding necessary. And it was documented to happen; the story was on HN.
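Roughly: the app listens on the loopback interface, and JS running on any page in a browser on the same device can fetch() it, tying the "anonymous" browser session to the app's logged-in identity. A minimal, hypothetical Kotlin sketch of the app side (the reported real-world schemes were sneakier, e.g. using WebRTC tricks rather than a plain HTTP listener):

    import java.net.InetAddress
    import java.net.ServerSocket
    import kotlin.concurrent.thread

    // Hypothetical sketch of an app-side loopback "beacon". Browser JS on the
    // SAME device can call fetch("http://127.0.0.1:18734/") and receive the
    // app's user identifier. No port forwarding is involved: both ends share
    // the device's loopback interface. Port and protocol are invented here.
    fun startLoopbackBeacon(appUserId: String, port: Int = 18734) {
        thread(isDaemon = true) {
            ServerSocket(port, 8, InetAddress.getLoopbackAddress()).use { server ->
                while (true) {
                    server.accept().use { socket ->
                        socket.getInputStream().bufferedReader().readLine() // drop request line
                        val response = "HTTP/1.1 200 OK\r\n" +
                            "Access-Control-Allow-Origin: *\r\n" +
                            "Content-Length: ${appUserId.length}\r\n\r\n" + appUserId
                        socket.getOutputStream().write(response.toByteArray())
                    }
                }
            }
        }
    }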
The sad truth
True. Unfortunately, users are all humans - with miserably predictable response patterns to "Look at this Free New Shiny Thing you could have!" pitches, and the ruthless business models behind them.
[dead]
To any other women in here, check out Drip. https://dripapp.org They seem to be the most secure.
Honestly, this is something I would just self host. This isn't data I'd trust anyone with, and I don't even have sex with men.
I think that is the best approach for people who can do that. :)
My wife uses Flo, though every time I see her open the app and input information, the tech side of my brain is quite alarmed. An app like that keeps very, very personal information, and it really highlights for me the need to educate non-technical folks on information security.
And this is why I have a general no-apps policy on my phone... Or at least, I have a minimal number of apps on my phone. While this doesn't prevent a given website/webapp from sharing similar information, I just feel slightly better not giving hard device access.
In a similar vein, I cannot believe that, after the stunts LinkedIn pulled, they're even allowed on app stores at all.
It's very rare to see any privacy related news without Meta being involved in the story.
Google, Microsoft, and Amazon all like it that way.
Nothing will change until VPs start going to jail. Unlikely I know...
What features does this app have that couldn't be duplicated in an Excel file (or LibreOffice Calc or macOS Numbers)?
Why would an app that tracks menstrual cycles need to integrate with the Facebook SDK?? Pure insanity.
Why would an app that tracks menstrual cycles need to connect to the Internet at all? TFA mentions asking about quite a few other personal things as well. Is the app trying to do more than just tracking? If they're involved in any kind of diagnosis then I imagine there are further legal liability issues....
This is really disappointing. I used to have a fertility-tracking app on the iOS App Store: zero data sharing, all local, and thus private. But people don’t want to pay $1 for an app, and I can’t afford the marketing drive that an investor-backed company such as this has… and so we end up with situations like this. Pity :(
Stories like this one can be the basis for effective marketing. We need to normalize paying $1 (or more, where warranted) for apps that provide value in the form of not doing the things that allow the $0 ones to be $0.
Oh boy, what's Mark up to these days.
Thanks for asking! Also on the front page today: https://news.ycombinator.com/item?id=44898934
From that article:
Can someone explain me why this shouldn't be illegal?
Well, my steelman argument against regulating this repulsive shit: every time a government says it's going to regulate infotech to protect children, it almost invariably ends up having a chilling effect that ends up restricting children's access to high quality educational and medical information. Even if a law is well-intentioned and equitable, its enforcement never is.
In this case, I'm confident that for whoever wrote this section, just checking their hard drive should be sufficient to send them to jail.
It's already illegal to distribute pornographic material to children, though. Why shouldn't this be considered that?
Previously: https://news.ycombinator.com/item?id=44763949
No ifs, no buts. Stuff like this deserves ruinous fines for its executives.
Cycle data in the hands of many countries' authorities is outright dangerous. If you're storing healthcare data, it should require an explicit opt-in, IN BIG RED LETTERS, every single time that data leaves your device.
Zuckerberg does not seem to respect the law. There really should be criminal charges by now.
For those disinclined to read the article...
> [...] users, regularly answered highly intimate questions. These ranged from the timing and comfort level of menstrual cycles, through to mood swings and preferred birth control methods, and their level of satisfaction with their sex life and romantic relationships. The app even asked when users had engaged in sexual activity and whether they were trying to get pregnant.
> [...] 150 million people were using the app, according to court documents. Flo had promised them that they could trust it.
> Flo Health shared that intimate data with companies including Facebook and Google, along with mobile marketing firm AppsFlyer, and Yahoo!-owned mobile analytics platform Flurry. Whenever someone opened the app, it would be logged. Every interaction inside the app was also logged, and this data was shared.
> "[...] the terms of service governing Flo Health’s agreement with these third parties allowed them to use the data for their own purposes, completely unrelated to services provided in connection with the App,”
Bashing Facebook/Meta might give a quick dopamine hit, but they really aren't special here. The victims' data was routinely sold, en masse, per de facto industry practices. Victims should assume that hundreds of orgs, all over the world, now have copies of it. Ditto any government or criminal group that thought it could be useful. :(
[dead]
I mean... there are simply no repercussions for these companies, and only rivers of money on the other side. The law is laughably inept at keeping them in check. The titans of Surveillance Capitalism don't need to obey laws; CFOs line-item provisional legal settlement fees as (minor) COGS. And we digital serfs simply have no rights. Dumb f*cks, indeed.
The line between big business and the state is blurry and the state wants to advance big business as a means to advance itself. Once you understand this everything makes sense, or as much "sense" as it can.
Users gave their data to Flo, and Flo then gave it to Meta. What repercussions do you want for Meta?
Buying stolen goods does not mean they're yours because the seller never had any ownership to begin with. The same applies here, just because there's an extra step in the middle doesn't mean that you have any rights to the data.
Some percent of their revenue as a fine, per case. That's the only way to scare these companies at this point.
A significant portion, too, not fractions of a percent. Frankly, I want the fines to bankrupt them. That’s the point. I want their behavior to be punished appropriately. Killing the company is an appropriate response, imo: FB/Meta is a scourge on society.
Meta should never have used them. Deeply unethical behaviour
Your mistake was expecting ethical behavior from Mark Zuckerberg.
Another aspect of this is why Apple/Google let this happen in the first place. GrapheneOS is the only mobile OS I can think of that lets you disable networking on a per-app level. Why does a period-tracking app need to send data to Meta (why does it even need network access at all)? Why is there no affordance of user-level choice/control that lets users explicitly see the exact packets of data being sent off the device? It would be trivial to require apps to present a list of allowed IPs/hostnames for users to consent to or not; otherwise the app is not allowed on the Play Store.
Simply put, it should not be possible to send arbitrary data off the device without some sort of user consent/control, and to me this is where the GDPR has utterly failed. I hope one day users are given a legal right to control what data is sent off their device to a remote server, with serious consequences for non-compliance.
"GrapheneOS is the only mobile OS I can think of that lets you disable networking on a per-app level."
You don't need to "root" the phone and install GrapheneOS. The Netguard app blocks connections on a per-app basis. It generally works.
But having to take these measures, i.e., installing GrapheneOS or Netguard (plus Nebulo, etc.), is why "mobile OS" all suck. People call them "corporate OS" because the OS is not under the control of the computer owner, it is controlled by a corporation. Even GrapheneOS depends on Google's Android OS, relies on Google hardware, makes default remote connections to a mothership that happen without any user input (just like any corporate OS), and uses a Chromium-based default browser. If one is concerned about being tracked, perhaps it is best to avoid these corporate, mobile OS.
It is easy to control remote connections on a non-corporate, non-mobile OS where the user can compile the OS from source on a modestly resourced computer. The computer user can edit the source and make whatever changes they want. For example, I use one where, after compilation from source, everything is disabled by default (this is not Linux). The user must choose whether to create and enable network interfaces for remote connectivity.
> Why does a period-tracking app need to send data to Meta (why does it even need network access at all)?
In case you want to sync between multiple devices, networking is the least hassle way.
> Why is there no affordance of user-level choice/control that lets users explicitly see the exact packets of data being sent off the device? It would be trivial to require apps to present a list of allowed IPs/hostnames for users to consent to or not; otherwise the app is not allowed on the Play Store.
I don't know that it ends up being useful, because whoever the data is sent to can also send it further on.
Everybody misses the key information here - it's a Belarusian app. The CEO and CTO are Belarusian (and there are probably more C-level people who are Belarusian or Russian). Not only are users giving up their private information, but they are doing so to malevolent (by definition) regimes.
When the Western app says they don’t sell or give out private information, you can be suspicious but still somewhat trustful. When a dictator-ruled country’s app does so, you can be certain every character you type in there is logged and processed by the government.
The company cut all ties with Belarus more than three years ago, and all employees relocated to Europe.
Where in Europe? Belarus is in Europe, and so is much of Russia (the largest European country). Plenty of variation in the rest of Europe.
What do you mean by cut all ties? The owners and management have no assets in Belarus or ties to the country?
You can open the "contact us" page on their website.
Not sure how that helps answer my question.
A list of contact addresses is not a list of all locations, all employees, all contractors, all shareholders, or all financial interests.
The one thing the site tells me is that it is operated by two separate companies - Flo Inc and Flo Health UK. The directors of Flo Health Limited live in the UK and Cyprus; two are Belarusian nationals and one is Russian.
[flagged]
Please don't post nationalistic flamebait to HN. It leads to nationalistic flamewars, which are a bad thing on this site.
https://news.ycombinator.com/newsguidelines.html
It looks like many of them now live outside Belarus; should they have changed their names, and/or fired any Slavic nationals?
* Dmitry Gurski; CEO
* Tamara Orlova; CFO
* Anna Klepchukova; Chief Medical Officer
* Kate Romanovskaia; Chief Brand & Communications Officer
* Joëlle Barthel; Director of Brand Marketing
* Nick Lisher (British); Chief Marketing Officer
I would encourage you to read about the Edward Snowden guy and the PRISM program on Wikipedia, and about the EU's recent attempts to ban encryption.
Also, here is what Pavel Durov mentioned recently in an interview with Tucker Carlson:
> In the US you have a process that allows the government to actually force any engineer in any tech company to implement a backdoor and not tell anyone about it with using this process called the gag order.
It doesn't matter what anyone claims on the landing page. Assume that if it's stored somewhere, it'll get leaked eventually, and that the government where the data transits or is hosted already has access and the decryption keys.
You are right. I still think it's better if only our guys have this information than both our guys and their guys. At least Western companies have the possibility of getting regulated if the political winds change.
> When the Western app says they don’t sell or give out private information, you can be suspicious but still somewhat trustful.
Hey guys, that ycombinator "hacker" forum thing full of Champagne socialists employed by the Zucks/Altmans/Musks of the world told me everything is fine and I shouldn't worry. I remain trustful.
Surely some, ahem, spilled tea can't possibly occur again, right? I remain trustful.
Speaking of tea, surely all the random "id verification" 3rd parties used since the UK had a digital aneurysm have everything in order, right? I remain trustful.
---
Nah, I'll just give my data to my bank and that's about it. Everyone else can fuck right off. I trust Facebook about as much as I trust Putin.
[flagged]
> Yet between 2016 and 2019 Flo Health shared that intimate data with companies including Facebook and Google, along with mobile marketing firm AppsFlyer, and Yahoo!-owned mobile analytics platform Flurry. [...] Every interaction inside the app was also logged, and this data was shared.
Proposed resolution:
1. Wipe out Flo with civil damages, and also wipe out the C-suite and others at the company personally.
2. Prison for Flo's C-suite, and everyone else at Flo who knew what was going on and didn't stop it, or who was grossly negligent when they knew they were handling sensitive info.
3. Investigate what Flo's board and investors knew, for possible criminal and civil liability.
4. Investigate what Flo's data-sharing partner companies knew, and what was done with the data, for possible criminal and civil liability.
Tech industry gold rushes have naturally attracted much of the shittiest of society. And the Overton window within the field has shifted so much due to this, with some dishonest and underhanded practices as SOP, that even decent people have lost references for what's right and wrong. So the tech industry is going to keep doing every greedy, underhanded, and reckless thing they can, until society starts holding them accountable. That doesn't mean regulatory handslaps; that means predatory sociopaths rotting in prison, and VCs and LPs wiped out, as corporate veils of companies that the VCs knew were underhanded are pierced.
Meta truly is the worst company. In almost everything Meta does, it makes the most user-hostile, awful decisions, every single time.
Cambridge Analytica; the Rohingya genocide; suppressing Palestinian content during a genocide; damage to teenage (and adult) mental health.
Anyway, I mention this because some friends are building a social media alternative to Instagram: https://upscrolled.com, aiming to be pro-user, pro-ethics, and designed for people, not just to make money.
Your comment started very useful, then it became spam. Great way to lose goodwill.
Is posting a self-made alternative to meta not consistent with the rest of the post, even actively promoting the vibe?
A Show HN post would have been more appropriate; this seemed to me opportunistic at best.