> Google’s hash match may well have established probable cause for a warrant to allow police to conduct a visual examination of the Maher file.
Very reasonable. Google can flag accounts as CP, but then a judge still needs to issue a warrant for the police to actually go and look at the file. Good job court. Extra points for reasoning about hash values.
> a judge still needs to issue a warrant for the police to actually go and look at the file
Only in the future. Maher's conviction, based on the warrantless search, still stands because the court found that the "good faith exception" applies--the court affirmed the District Court's finding that the police officers who conducted the warrantless search had a good faith belief that no warrant was required for the search.
I wonder what happened to fruit of the poisonous tree? Seems a lot more liberty-oriented than "good faith exception" when police don't think they need a warrant (because police never seem to "think" they need a warrant).
I'm trying to imagine a more "real-world" example of this to see how I feel about it. I dislike that there is yet another loophole to gain access to peoples' data for legal reasons, but this does feel like a reasonable approach and a valid goal to pursue.
I guess it's like if someone noticed you had a case shaped exactly like a machine gun, told the police, and they went to check if it was registered or not? I suppose that seems perfectly reasonable, but I'm happy to hear counter-arguments.
The main factual components are as follows: Party A has rented out property to Party B. Party A performs surveillance on or around the property with Party B's knowledge and consent. Party A discovers very high probability evidence that Party B is committing crimes within the property, and then informs the police of their findings. Police obtain a warrant, using Party A's statements as evidence.
The closest "real world" analogy that comes to mind might be a real estate management company uses security cameras or some other method to determine that there is a crime occurring in a space that they are renting out to another party. The real estate management company then sends evidence to the police.
In the case of real property -- rental housing and warehouse/storage space in particular -- this happens all the time. I think that this ruling is eminently reasonable as a piece of case law (ie, the judge got the law as it exists correct). I also think this precedent would strike a healthy policy balance as well (ie, the law as it exists, if interpreted how the judge in this case interprets it, would make for a good policy situation).
Is there any such thing as this surveillance applying to the inside of the renter's bedroom, bathroom, or filing cabinet with medical or financial documents, or political documents for that matter?
I don't think there is, and I don't think you can reduce reality to being as simple as "owner has more right over property than renter". The renter absolutely has at least a few rights, in at least a few defined contexts, over the owner, because the owner "consented" to accept money in trade for use of the property.
If I import hundreds of pounds of poached ivory and store it in a shipping yard or move it to a long term storage unit, the owner and operator of those properties are allowed to notify police of suspected illegal activities and unlock the storage locker if there is a warrant produced.
Maybe the warrant uses some abstraction of the contents of that storage locker like the shipping manifest or customs declaration. Maybe someone saw a shadow of an elephant tusk or rhino horn as I was closing the locker door.
> Is there any such thing as this surveillance applying to the inside of the renter's bedroom, bathroom, or filing cabinet with medical or financial documents, or political documents for that matter?
Yes. Entering property for regular maintenance. Any time a landlord or his agent enters a piece of property, there is implicit surveillance. Some places are more formal about this than others, but anyone who has rented, owned rental property, or managed rental property knows that any time maintenance occurs there's an implicit examination of the premises also happening...
But here is a more pertinent example: the regular comings and goings of people or property can be and often are observed from outside of a property. These can contribute to probable cause for a search of those premises even without direct observation. (E.g., large numbers of disheveled children moving through an apartment, or an exterior camera shot of a known fugitive entering the property.)
Here the police could obtain a warrant on the basis of the landlord's testimony without the landlord actually seeing the inside of the unit. This is somewhat similar to the case at hand, since Google alerted the police to a hash match without actually looking at the image (ie, entering the bedroom).
> I don't think you can reduce reality to being as simple as "owner has more right over property than renter"
But I make no such reduction, and neither does the opinion. In fact, quite the opposite -- this is part of why the court determines a warrant is required!
> ...Google alerted the police to a hash match without actually looking at the image (ie, entering the bedroom).
Google cannot have calculated that hash without examining the data in the image. They, or systems under their control, obviously looked at the image.
It should not legally matter whether the eyes are meat or machine... if anything, machine inspection should be MORE strictly regulated, because of how much easier and cheaper it tends to make surveillance (mass or otherwise).
> It should not legally matter whether the eyes are meat or machine
But it does matter, and, perhaps ironically, it matters in a way that gives you STRONGER (not weaker) fourth amendment rights. That's the entire TL;DR of the fine article.
> if anything, machine inspection should be MORE strictly regulated, because of how much easier and cheaper it tends to make surveillance (mass or otherwise).
I don't disagree. In particular: I believe that the "Reasonable Person", to the extent that we remain stuck with the fiction, should be understood as having stronger privacy expectations in their phone or cloud account than they do even in their own bedroom or bathroom.
With respect to Google's actions in this case, this is an issue for your legislator and not the courts. The fourth amendment does not bind Google's hands in any way, and judges are not lawmakers.
The issue of course being the government then pressuring or requiring these companies to look for some sort of content as part of routine operations.
I agree. This is a case where the physical analogy leads us to (imo) the correct conclusion: compelling major property management companies to perform regular searches of their tenants' properties, and then to report any findings to the police, is hopefully something that most judges understand to be a clear violation of the fourth amendment.
> Party A discovers very high probability evidence that Party B is committing crimes within the property ...
This isn't accurate: the hashes were purposefully compared to a specific list. They didn't happen to notice it, they looked specifically for it.
And of course, what happens when it's a different list?
This is an excellent example, I think I get it now and I'm fully on-board. Thanks.
I could easily see an AirBNB owner calling the cops if they saw, for instance, child abuse happening on their property.
With their hidden camera in the bathroom.
I just meant it as an analogy, not that I'm specifically on-board with AirBNB owners putting cameras in bathrooms.
Anyways, that's why I just rent hotel rooms, personally. :)
But this court decision is a real world example, and not some esoteric edge case.
This is something I don’t think needs analogies to understand. SA/CP image and video distribution is an ongoing moderation, network, and storage issue. The right to not be under constant digital surveillance is somewhat protected in the constitution.
I like speech and privacy and am paranoid of corporate or government overreach, but I arrive at the same conclusion as you taking this court decision at face value.
Wait until Trump is in power and corporations are masterfully using these tools to “mow the grass” (if you want an existing example of this, look at Putin’s Russia, where people get jail time for any pro-Ukraine mentions on social media).
Yeah, I’m paranoid like I said, but in this case it seems like the hash of a file on Google’s remote storage was flagged as a potential match, and that was used as justification to request a warrant. That seems like common sense and did not involve employees snooping pre-warrant.
The Apple CSAM hash detection process, whose launch was rolled back, concerned me mainly because it was run on-device with no opt-out. If this is running on cloud storage then it sort of makes sense. You need to ensure you are not aiding or harboring actually harmful illegal material.
I get there are slippery slopes or whatever but the fact is you cannot just store whatever you wish in a rental. I don’t see this as opening mass regex surveillance of our communication channels. We have the patriot act to do that lol.
I think the real-world analogy would be to say that the case is shaped exactly like a machine gun and the hotel calls the police, who then open the case without a warrant. The "private search" doctrine allows the police to repeat a search done by a private party, but here (as in the machine gun case), the case was not actually searched by a private party.
It's like a digital 'smell'; Google is a drug sniffing dog.
I don't think the analogy holds for two reasons (which cut in opposite directions from the perspective of fourth amendment jurisprudence, fwiw).
First, the dragnet surveillance that Google performs is very different from the targeted surveillance that can be performed by a drug dog. Drug dogs are not used "everywhere and always"; rather, they are mostly used in situations where people have a less reasonable expectation of privacy than the expectation they have over their cloud storage accounts.
Second, the nature of the evidence is quite different. Drug-sniffing dogs are inscrutable and non-deterministic and transmit handler bias. Hashing algorithms can be interrogated and are deterministic and do not have such bias transferal issues; collisions do occur, but are rare, especially because the "search key" set is so minuscule relative to the space of possible hashes. The narrowness and precision of the hashing method preserves most of the privacy expectations that society is currently willing to recognize as objectively reasonable.
Here we get directly to the heart of the problem with the fictitious "reasonable person" used in tests like the Katz test, especially in cases where societal norms and technology co-evolve at a pace far more rapid than that of the courts.
This analogy can have two opposite meanings. Drug dogs can be anything from a prop used by the police to search your car without a warrant (a cop can always say in court the dog "alerted" them) to a useful drug detection tool.
Is it reasonable? Even if the hash was md5, given valid image files, the chances of it being an accidental collision are way lower than the chance that any other evidence given to a judge was false or misinterpreted.
This is NOT a secure hash. This is an image-similarity hash, which has many, many matches on unrelated images.
Unfortunately the decision didn't mention this at all even though it is important. If it were even as good as an md5 hash (which is broken), I think the search should be allowed without a warrant: even though an accidental collision is possible, the odds are so strongly against it that the courts can safely assume there isn't one (and of course if there is, the police would close the case). However, since this hash is not that good, the police cannot look at the image unless Google does.
I wish I could get access to the "App'x 29" being referenced so that I could better understand the judges' understanding here. I assume this is Federal Appendix 29 (in which case a more thorough reference would've been appreciated). If the Appeals Court is going to cite the Federal Appendix in a decision like this and in this manner, then the Federal Appendix is as good as case law and West Publishing's copyright claims should be ripped away. Either the Federal Appendix should not be cited in Appeals Court and Supreme Court opinions, or the Federal Appendix is part of the law and belongs to the people. There is no middle there.
> I think the search should be allowed without a warrant: even though an accidental collision is possible, the odds are so strongly against it that the courts can safely assume there isn't one
The footnote in the decision bakes this property into the definition of a hash:
A “hash” or “hash value” is “(usually) a short string of characters generated from a much larger string of data (say, an electronic image) using an algorithm—and calculated in a way that makes it highly unlikely another set of data will produce the same value.”
(Importantly, this is NOT an accurate definition of a hash for anyone remotely technical... of course hashing algorithms with significant hash collisions exist, and that is even a design criterion for some hashing algorithms...)
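To make that parenthetical concrete, here is a minimal sketch in Python (standard library only, nothing to do with whatever Google actually runs) showing both ends of the spectrum: a cryptographic hash turns a one-bit edit into an unrelated digest, while a deliberately tiny hash collides all the time by design.

    import hashlib

    original = b"pretend these bytes are an uploaded image file"
    tweaked = original[:-1] + bytes([original[-1] ^ 1])  # flip one bit

    # Cryptographic hash: any change yields an unrelated digest, and
    # accidental collisions are effectively impossible.
    print(hashlib.sha256(original).hexdigest())
    print(hashlib.sha256(tweaked).hexdigest())

    # Deliberately tiny "hash" (keep only the first byte of the digest):
    # unrelated inputs collide about 1 time in 256. Frequent collisions are
    # acceptable, even intended, for uses like hash tables or sharding,
    # but obviously useless as evidence that two files are the same.
    tiny = lambda data: hashlib.sha256(data).digest()[0]
    print(tiny(original), tiny(b"a completely different file"))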
So you're saying that I craft a file that has the same hash as a CSAM one, I give it to you, you upload it to google, but it also happens to be CSAM, and I've somehow framed you?
My point is that a hash (granted, I'm assuming that we're talking about a cryptographic hash function, which is not clear) is much closer to "This is the file" than someone actually looking at it, and that it's definitely more proof of them having that sort of content than any other type of evidence.
I don't understand. If you contend that it's even better evidence than actually having the file and looking at it, how is not reasonable to then need a judge to issue a warrant to look at it? Are you saying it would be more reasonable to skip that part and go directly to arrest?
It seems like a large part of the ruling hinges on the fact that Google matched the image hash to a hash of a known child pornography image, but didn't require an employee to actually look at that image before reporting it to the police. If they had visually confirmed it was the image they suspected it was based on the hash then no warrant would have been required, but the judge reads the image hash match as not equivalent to a visual confirmation of the image. Maybe there's some slight doubt about whether the image could be a hash collision, which depends on the hash method. It may be incredibly unlikely (near impossible?) for any hash collision to occur, depending on the specific hash strategy.
I think it would obviously be less than ideal for Google to require an employee visually inspect child pornography identified by image hash before informing a legal authority like the police. So it seems more likely that the remedy to this situation would be for the police to obtain a warrant after getting the tip but before requesting the raw data from Google.
Would the image hash match qualify as probable cause enough for a warrant? On page 4 the judge stops short of setting precedent on whether it would have or not. Seems likely that it would be solid probable cause to me, but sometimes judges or courts have a unique interpretation of technology that I don't always share, and leaving it open to individual interpretation can lead to conflicting results.
The hashes involved in stuff like this, as with copyright auto-matching, are perceptual hashes (https://en.wikipedia.org/wiki/Perceptual_hashing), not cryptographic hashes. False matches are common enough that perceptual hashing attacks are already a thing in use to manipulate search engine results (see the example in a random paper on the subject https://gangw.cs.illinois.edu/PHashing.pdf).
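For anyone curious what "perceptual" means in practice, here is a toy average-hash in plain Python. The decision doesn't say which algorithm Google uses, and real systems (PhotoDNA, NeuralHash, etc.) are far more sophisticated; this only shows the general shape: threshold an 8x8 grayscale thumbnail against its mean, then compare hashes by Hamming distance.

    # Toy perceptual hash ("average hash") over an 8x8 grayscale grid.
    # Real pipelines first resize/convert the image; that step is omitted here.
    def average_hash(pixels_8x8):
        flat = [p for row in pixels_8x8 for p in row]
        avg = sum(flat) / len(flat)
        return int(''.join('1' if p > avg else '0' for p in flat), 2)

    def hamming(a, b):
        return bin(a ^ b).count('1')  # small distance == "looks like the same image"

    img = [[(x * y) % 256 for x in range(8)] for y in range(8)]
    brighter = [[p + 10 for p in row] for row in img]  # global brightness shift

    print(hamming(average_hash(img), average_hash(brighter)))  # 0: still a "match"

The same robustness to re-encoding and brightness tweaks is exactly what makes both accidental and deliberate collisions far more plausible than with a cryptographic hash.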
It seems like that is very relevant information that was not considered by the court. If this were a cryptographic hash, I would say with high confidence that this is the same image and so Google examined it - there is a small chance that some unrelated file (which might not even be a picture) matches, but odds are the universe will end before that happens, and so the courts can consider it the same image for search purposes. However, because there are many false positive cases, there are reasonable odds that the image is legal and so a higher standard for search is needed - a warrant.
>so the courts can consider it the same image for search purposes
An important part of the ruling seems to be that neither Google nor the police had the original image or any information about it, so the police viewing the image gave them more information than Google matching the hash gave Google: for example, consider how the suspect being in the image would have changed the case, or what might happen if the image turned out not to be CSAM, but showed the suspect storing drugs somewhere, or was even, somehow, something entirely legal but embarrassing to the suspect. This isn't changed by the type of hash.
That makes sense - if they were using a cryptographic hash then people could get around it by making tiny changes to the file. I’ve used some reverse image search tools, which use perceptual hashing under the hood, to find the original source for art that gets shared without attribution (saucenao pretty solid). They’re good, but they definitely have false positives.
Now you’ve got me interested in what’s going on under the hood, lol. It’s probably like any other statistical model: you can decrease your false negatives (images people have cropped or added watermarks/text to), but at the cost of increased false positives.
The hash functions used for these purposes are usually not cryptographic hashes. They are "perceptual hashes" that allow for approximate matches (e.g. if the image has been scaled or brightness-adjusted). https://en.wikipedia.org/wiki/Perceptual_hashing
It seems like there just needs to be case law about the qualifications of an image hash in order to be counted as probable cause for a warrant. Of course you could make an image hash be arbitrarily good or bad.
I am not at all opposed to any of this "get a damn warrant" pushback from judges.
I am also not at all opposed to Google searching its cloud storage for this kind of content. There are a lot of kinds of potentially illegal activity I would mind a cloud provider going on fishing expeditions to find, but this I am fine with.
I do strongly object to companies searching content for illegal activity on devices in my possession absent probable cause and a warrant (that they would have to get in a way other than searching my device). Likewise I object to the pervasive and mostly invisible delivery to the cloud of nearly everything I do on devices I possess.
In other words, I want custody of my stuff and for the physical possession of my stuff to be protected by the 4th amendment and not subject to corporate search either. Things that I willingly give to cloud providers that they have custody of I am fine with the cloud provider doing limited searches and the necessary reporting to authorities. The line is who actually has the bits present on a thing they hold.
> As the district court correctly ruled in the alternative, the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required
So this means this conviction is upheld but future convictions may be overturned if they similarly don't acquire a warrant?
> the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required
This "good faith exception" is so absurd I struggle to believe that it's real.
Ordinary citizens are expected to understand and scrupulously abide by all of the law, but it's enough for law enforcement to believe that what they're doing is legal even if it isn't?
What that is is a punch line from a Chappelle bit[1], not a reasonable part of the justice system.
The courts accept good faith arguments at times. They will give reduced sentences or even none at all if they think you acted in good faith. There are enough situations where it is legal to kill someone that there are laws spelling out when one person can legally kill another (hopefully they never apply to you).
Note that this case is not about ignorance of the law. This is "I knew the law and was trying to follow it - I just honestly thought it didn't apply because of some tricky situation that isn't 100% clear."
The difference between "I don't know" and "I thought it worked like this" is purely a matter of degrees of ignorance. It sounds like the cops were ignorant of the law in the same way as someone who is completely unaware of it, just to a lesser degree. Unless they were misinformed about the origins of what they were looking at, it doesn't seem like it would be a matter of good faith, but purely negligence.
“Mens rea” is a key component of most crimes. Some crimes can only be committed if the perpetrator knows they are doing something wrong. For example, fraud or libel.
> “Mens rea” is a key component of most crimes. Some crimes can only be committed if the perpetrator knows they are doing something wrong. For example, fraud or libel.
We're talking about orthogonal issues.
Mens rea applies to whether the person performs the act on purpose. Not whether they were aware that the act was illegal.
Let's use fraud as an example since you brought it up.
If I bought an item from someone and used counterfeit money on purpose, that would be fraud. Even if I truly believed that doing so was legal. But it wouldn't be fraud if I didn't know that the money was counterfeit.
This specific conviction upheld, yes. But no, this ruling doesn't speak to whether or not any future convictions may be overturned.
It simply means that at the trial court level, future prosecutions will not be able to rely on the good faith exception to the exclusionary rule if warrantless inculpatory evidence is obtained under similar circumstances. If the government were to try to present such evidence at trial and the trial judge were to admit it over the objection of the defendant, then that would present a specific ground for appeal.
This ruling merely bolsters the 'better to get a warrant' spirit of the Fourth Amendment.
At the time, what they did was assumed to be legal because no one had ruled on it.
Now, there is prior case law declaring it illegal.
The ruling is made in such a way to say “we were allowing this, but we shouldn’t have been, so we wont allow it going forward”.
I am not a legal scholar, but that’s the best way I can explain it. The way that the judicial system applies to law is incredibly complex and inconsistent.
This is a deeply problematic way to operate. En masse, it has the right result, but, for the individual that will have their life turned upside down, the negative impact is effectively catastrophic.
This ends up feeling a lot like gambling in a casino. The casino can afford to bet and lose much more than the individual.
I don't care nearly as much about the 4th amendment when the person is guilty. I care a lot when the person is innocent. Searches of innocent people are costly for the innocent person, and so we require warrants to ensure such searches are minimized (even though most warrants are approved, the act of getting one forces the police to be careful). If a search were completely costless to the innocent I wouldn't be against them, but there are many ways a search that finds nothing is costly to the innocent.
Thus, it's not clear that any harm was caused because the right wasn't clearly enshrined and had the police known that it was, they likely would have followed the correct process. There was no intention to violate rights, and no advantage gained from even the inadvertent violation of rights. But the process is updated for the future.
> the private search doctrine, which authorizes a government actor to repeat a search already conducted by a private party without securing a warrant.
IANAL, etc. Does that mean that if someone breaks in to your house in search of drugs, finds and steals some, and is caught by the police and confesses all that the police can then search your house without a warrant?
IANAL either, but from what I've read before the courts treat searches of your home with extra care under the 4th Amendment. At least one circuit has pushed back on applying private search cases to residences, and that was for a hotel room[0]:
> Unlike the package in Jacobsen, however, which "contained nothing but contraband," Allen's motel room was a temporary abode containing personal possessions. Allen had a legitimate and significant privacy interest in the contents of his motel room, and this privacy interest was not breached in its entirety merely because the motel manager viewed some of those contents. Jacobsen, which measured the scope of a private search of a mail package, the entire contents of which were obvious, is distinguishable on its facts; this Court is unwilling to extend the holding in Jacobsen to cases involving private searches of residences.
So under your hypothetical, I'd expect the police would be able to test "your drugs" that they confiscated from the thief, and use any findings to apply for a warrant for a search of your house, but any search without a warrant would be illegal.
The problem with the internet nowadays is that a few big players are making up their own law. Very often it is against local laws, but nobody can fight it. For example, someone created some content, but another person uploaded it and got better scores, which got the original poster blocked. Another example: children were playing a violin concert and the audio got removed due to an alleged copyright violation. No possibility to appeal; nobody sane would go to court. It just goes this way...
"That, however, does not mean that Maher is
entitled to relief from conviction. As the district court correctly ruled in the
alternative, the good faith exception to the exclusionary rule supports denial of
Maher’s suppression motion because, at the time authorities opened his uploaded
file, they had a good faith basis to believe that no warrant was required."
"Defendant [..] stands convicted following a guilty plea in
the United States District Court for the Northern District of New York
(Glenn T. Suddaby, Judge) of both receiving and possessing approximately
4,000 images and five videos depicting child pornography"
A win for google, for the us judicial system, and for constitutional rights.
The harshness of the sentence is not for the act of keeping the photos in itself, but for the individual suffering and social damage caused by the actions that he incentivizes when he consumes such content.
Consumption per se does not incentivize it, though; procurement does. It's not unreasonable to causally connect one to the other, but I still think that it needs to be done explicitly. Strict liability for possession in particular is nonsense.
There's also an interesting question wrt simulated (drawn, rendered etc) CSAM, especially now that AI image generators can produce it in bulk. There's no individual suffering nor social damage involved in that at any point, yet it's equally illegal in most jurisdictions, and the penalties aren't any lighter. I've yet to see any sensible arguments in favor of this arrangement - it appears to be purely a "crime against nature" kind of moral panic over the extreme ickiness of the act as opposed to any actual harm caused by it.
Icky things were historically made illegal all the time, but most of those historical examples have not fared well in retrospect. Modern justice systems are generally predicated on some quantifiable harm for good reasons.
Given the extremely harsh penalties at play, I am not at all comfortable about punishing someone with a multi-year prison sentence for possession of a drawn or computer generated image. What exactly is the point, other than people getting off from making someone suffer for reasons they consider morally justifiable?
There's no room for sensible discussion like this in these matters. Not demanding draconian sentences for morally outraging crimes is morally outraging.
Assuming the person is a passive consumer with no messages / money exchanged with anyone, it is very hard to prove social harm or damage. Sentences should be proportional to the crime. Treating possession of cp as equivalent of literally raping a child just seems absurd to me. IMO, just for the legal protection of the average citizen, a simple possession should never warrant jail time.
The language is defined by how people actually use it, not by how a handful of activists try to prescribe its use. Ask any random person on the street, and most of them have no idea what CSAM is, but they know full well what "child porn" is. Dictionaries, encyclopedias etc also reflect this common sense usage.
The justification for this attempt to change the definition doesn't make any sense, either. Just because some porn is child porn, which is bad, doesn't in any way imply that all porn is bad. In fact, I would posit that making this argument in the first place is detrimental to sex-positive outlook on porn.
> Just because some porn is child porn, which is bad, doesn't in any way imply that all porn is bad.
I think people who want others to stop using the term "child porn" are actually arguing the opposite of this. Porn is good, so calling it "child porn" is making a euphemism or otherwise diminishing the severity of "CSAM" by using the positive term "porn" to describe it.
I don't think the established consensus on the meaning of the word "porn" itself includes some kind of inherent implied positivity, either; not even among people who have a generally positive attitude towards porn.
Stop doing this. You are confusing the perfectly noble aspect of calling it abuse material to make it victim centric with denying the basic purpose of the material. The people who worked hard to get it called CSAM do not deny that it’s pornography for its users.
The distinction you went on to make was necessary specifically for this reason.
It's a reasonable argument, but a concerning one because it hinges on a couple of layers of indirection between the person engaging in consuming the content and the person doing the harm / person who is harmed.
That's not outside the purview of US law (especially in the world post-reinterpretation of the Commerce Clause), but it is perhaps worth observing how close to the cliff of "For the good of Society, you must behave optimally, Citizen" such reasoning treads.
For example: AI-generated CP (or hand-drawn illustrations) are viscerally repugnant, but does the same "individual suffering and social damage" reasoning apply to making them illegal? The FBI says yes to both in spite of the fact that we can name no human that was harmed or was unable to give consent in their fabrication (handwaving the source material for the AI, which if one chooses not to handwave it: drop that question on the floor and focus on under what reasoning we make hand-illustrated cartoons illegal to possess that couldn't be applied to pornography in general).
> The FBI says yes to both in spite of the fact that we can name no
They have two arguments for this (that I am aware of). The first argument is a practical one, that AI-generated images would be indistinguishable from the "real thing", but that the real thing still being out there would complicate their efforts to investigate and prosecute. While everyone might agree that this is pragmatic, it's not necessarily constitutionally valid. We shouldn't prohibit activities based on whether these activities make it more difficult for authorities to investigate crimes. Besides, this one's technically moot... those producing the images could do so in such a way (from a technical standpoint) that they were instantly, automatically, and indisputably provable as being AI-generated.
All images could be mandated to require embedded metadata which describes the model, seed, and so forth necessary to regenerate it. Anyone who needs to do so could push a button, the computer would attempt to regenerate the image from that seed, and the computer could even indicate that the two images matched (the person wouldn't even need to personally view the image for that to be the case). If the application indicated they did not match, then authorities could investigate it more thoroughly.
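As a purely hypothetical sketch of that "push a button and check" idea (none of these functions exist; a real scheme would also need the embedded metadata to be tamper-evident, e.g. signed, which is glossed over here):

    import hashlib

    def matches_claimed_provenance(pixel_data: bytes, model_id: str, seed: int, generate) -> bool:
        # 'generate' stands in for a deterministic image generator keyed by model and seed.
        # If re-running it reproduces the exact pixel data, the provenance claim checks out.
        regenerated = generate(model_id, seed)
        return hashlib.sha256(regenerated).digest() == hashlib.sha256(pixel_data).digest()

    # Stand-in "generator" so the sketch actually runs: a keyed pseudo-random byte stream.
    fake_generate = lambda model_id, seed: hashlib.sha256(f"{model_id}:{seed}".encode()).digest() * 64

    image = fake_generate("toy-model-v1", 42)
    print(matches_claimed_provenance(image, "toy-model-v1", 42, fake_generate))  # True
    print(matches_claimed_provenance(image, "toy-model-v1", 43, fake_generate))  # False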
The second argument is an economic one. That is, if a person "consumes" such material, they increase economic demand for it to be created. Even in a post-AI world, some "creation" would be criminal. Thus, the consumer of such imagery does cause (indirectly) more child abuse, and the government is justified in prohibiting AI-generated material. This is a weak argument on the best of days... one of the things that law enforcement efforts excel at, when there are two varieties of a behavior, one objectionable and the other not, but both similar enough that they might at a glance be mistaken for one another, is greatly disincentivizing one without infringing on the other. Being an economic argument, one of the things that might be said is that economic actors seek to reduce their risk of doing business, and so would gravitate to creating the legal variety of material.
While their arguments are dumb, this filth is as reprehensible as anything. The only question worth asking or answering is: were the AI-generated variety legal, would it result in fewer children being harmed or not? It's commonly claimed that the easy availability of mainstream pornography has reduced the rate of rape since the mid-20th century.
I was using those md5 sums on images to flag images 20 years ago for the government. There were occasional false positives, but the safety team would review those, not operations. My only role was to burn the user's account to a DVD (via a script) and have the police officer pick up the DVD; we never touched the disk, and only burned the disk when there was a warrant. (We never saw/touched the user's data...)
I figured this is the common industry standard for chain of custody of evidence. Same with police videos: they are uploaded to the court's digital evidence repository, and everyone who looks at the evidence is logged.
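For the curious, an exact-match md5 flagging pass of the kind described above is roughly this shape (a hedged sketch in Python; the flag list and paths are placeholders, not the actual government tooling):

    import hashlib, os

    # Placeholder digest standing in for an externally supplied flag list.
    FLAGGED_MD5S = {"d41d8cd98f00b204e9800998ecf8427e"}

    def md5_of(path):
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    def scan(root):
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                if md5_of(path) in FLAGGED_MD5S:
                    yield path  # hand off to the safety/review team, not operations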
The old example is the email server administrator. If the email administrator has to view the contents of user messages as a part of regular maintenance, and that email administrator notices violations of the law in those user messages, they can report it to law enforcement. In that case law enforcement can receive the material without a warrant only if law enforcement never asked for it before it was gifted to them. There are no fourth amendment protections provided to offenders in this scenario of third party accidental discovery. Typically, in these cases the email administrator does not have an affirmative requirement to report violations of the law to law enforcement unless specific laws claim otherwise.
If on the other hand law enforcement approaches that email administrator to fish for illegal user content then that email administrator has become an extension of law enforcement and any evidence discovered cannot be used in a criminal proceeding. Likewise, if the email administrator was intentionally looking through email messages for violations of law even not at the request of law enforcement they are still acting as agents of the law. In that case discovery was intentional and not an unintentional product of system maintenance.
There is a third scenario: obscenity. Obscenity is illegal intellectual property, whether digital or physical, as defined by criminal code. Possession of obscene materials is a violation of criminal law for all persons, businesses, and systems in possession. In that case an email administrator that accidentally discovers obscene material does have a required obligation to report their discoveries, typically through their employer's corporate legal process, to law enforcement. Failures to disclose such discoveries potentially aligns the system provider to the illegal conduct of the violating user.
Google's discovery, though, was not accidental as a result of system maintenance. It was due to an intentional discovery mechanism based on stored hashes, which puts Google's conduct in line with law enforcement even if they specified their conduct in their terms of service. That is why the appeals court claims the district court erred by denying the defendant's right to suppression on fourth amendment grounds.
The saving grace for the district court was a good faith exception, such as inevitable discovery. The authenticity and integrity of the hash algorithm was never in question by any party, so no search for violating material was necessary, which established probable cause and thus allowed law enforcement reasonable grounds to proceed to trial. No warrant was required because the evidence was likely sufficient at trial even if law enforcement did not directly view the image in question, but they did verify the image. None of that was challenged by either party. What was challenged was just Google's conduct.
The good faith exception requires the belief be reasonable. Ignorance of clearly settled law is not reasonable, it should be a situation where the law was unclear, had conflicting interpretations or could otherwise be interpreted the way the police did by a reasonable person.
> It feels like it incentivizes the police to minimize their understanding of the law so that they can believe they are following it.
That's a bingo. That's exactly what they do, and why so many cops know less about the law than random citizens. A better society would have high standards for the knowledge expected of police officers, including things like requiring 4-year criminal justice or pre-law degree to be eligible to be hired, rather than capping IQ and preferring people who have had prior experience in conducting violent actions.
Yes, this likely explains part of why the Norwegian police behave like professionals who are trying to do their job with high standards of performance and behavior and the police in the US behave like a bunch of drinking buddies that used to be bullies in high school trying to find their next target to harass.
So now an algorithm can interpret the law better than a judge. It’s amazing how technology becomes judge and jury while privacy rights are left to a good faith interpretation. Are we really okay with letting an algorithmic click define the boundaries of privacy?
It's crazy that the most dangerous people one regularly encounters can do anything they want as long as they believe they can do it. The good faith exemption has to be one of the most fascist laws on the books today.
> "the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required."
In no other context or career can you do anything you want and get away with it just as long as you say you thought you could. You'd think police officers would be held to a higher standard, not no standard.
And specifically with respect to the law, breaking a law and claiming you didn't know you did anything wrong as an individual is not considered a valid defense in our justice system. This same type of standard should apply even more to trained law enforcement, not less, otherwise it becomes a double standard.
No, this is breaking the law while saying "this looked like one of the situations where I already know the law doesn't apply." If Google had looked at the actual image and said it was child porn, instead of just saying it was similar to some image that is child porn, this would be 100% legal, as the courts have already said. That difference is subtle enough that I can see how someone would get it wrong (and in fact I would expect other courts to rule differently).
It's not exactly the same imo, since Good Samaritan laws are meant to protect someone who is genuinely trying to do what a reasonable person could consider "something positive"
In this case you're correct. But the good faith exemption is far broader than this and applies to even officer's completely personal false beliefs in their authority.
I think the judge chose to relax a lot on this one due to the circumstances. Releasing a man in society found with 4,000 child porn photos in his computer would be a shame.
But yeah, this opens too wide a gate of precedent for tyranny, unfortunately...
The judge doesn't really understand a hash well. They say things like "Google assigned a hash" which is not true, Google calculated the hash.
Also I'm surprised the 3rd-party doctrine doesn't apply. There's the "private search doctrine" mentioned but generally you don't have an expectation of privacy for things you share with Google
"More simply, a hash value is a string of characters obtained by processing the contents of a given computer file and assigning a sequence of numbers and letters that correspond to the file’s contents."
Google assigned the hashing algorithm (maybe, assuming it wasn't chosen in some law somewhere; I know this CSAM hashing is something the big tech companies work on together).
Once the hashing algorithm was assigned, individual values are computed or calculated.
I don't think the judge's wording is all that bad but the word "assigned" is making it sound like Google exercised some agency when really all it did was apply a pre-chosen algorithm.
There's a password on my Google account, I totally expect to have privacy for anything I didn't choose to share with other people.
The hash is kind of metadata recorded by Google, I feel like Google using it to keep child porn off their systems should be reasonable. Same ballpark as limiting my storage to 1GB based on file sizes. Sharing metadata without a warrant is a different question though.
As should be expected from the lawyer world, it seems like whether you have an expectation of privacy using gmail comes down to very technical word choices in the ToS, which of course neither this guy nor anyone else has ever read. Specifically, it may be legally relevant to your expectation of privacy whether Google says they "may" or "will" scan for this stuff.
Out of curiosity, what is the false positive rate of a hash match?
If the FPR is comparable to asking a human "are these the same image?", then it would seem to be equivalent to a visual search. I wonder if (or why) human verification is actually necessary here.
The reason human verification is necessary is that the government is relying on something called the "private search" doctrine to conduct the search without a warrant. This doctrine allows them to repeat a search already conducted by a private party (i.e., Google) without getting a warrant. Since Google didn't actually look at the file, the government is not able to look at the file without a warrant, as that search exceeds the scope of the initial search Google performed.
I doubt sha1 hashes are used for this. Those image hashes should match files regardless of orientation, cropping, resizing, re-compression, color correction etc. Collisions could be far more frequent with these hashes.
The hash should ideally match even if you use photoshop to cut the one person out of the picture and put that person into a different photo. I'm not sure if that is possible, but that is what we want.
> Out of curiosity, what is the false positive rate of a hash match?
No way to know without knowledge of the 'proprietary hashing technology'.
Theoretically though, a hash can have infinitely many inputs that produce the same output.
Mismatching hash values from the same hashing algorithm can prove mismatching inputs, but matching hash values don't ensure matching inputs.
> I wonder if (or why) human verification is actually necessary here
It's not about frequency, it's about criticality of getting it right. If you are going to make a negatively life-altering report on someone, you'd better make sure the accusation is legitimate.
I'd say the focus on hashing is a bit of a red herring.
Most anyone would agree that the hash matching should probably form probable cause for a warrant, allowing a judge to sign off on the police searching (i.e., viewing) the image. So, if it's a collision, the cops get a warrant and open up your linux ISO or cat meme, and it's all good. Probably the ideal case is that they get a warrant to search the specific image, and are only able to obtain a warrant to search your home and effects, etc. if the image does appear to be CSAM.
At issue here is the fact that no such warrant was obtained.
I think it'll prove far more likely that the government creates incentives to lead Google/other providers to fully do the search on their behalf.
The entire appeal seems to hinge on the fact that Google didn't actually view the image before passing it to NCMEC. Had Google policy been that all perceptual hash hits were reviewed by employees first, this would've likely been a one page denial.
If the hash algorithm were CRC8, then obviously it should not be probable cause for anything. If it were SHA-3, then it's basically proof beyond reasonable doubt of what the file is. It seems reasonable to question how collisions behave.
I don't agree that it would be proof beyond reasonable doubt, especially because neither google nor law enforcement can produce the original image that got tagged.
> Most anyone would agree that the hash matching should probably form probable cause for a warrant
I disagree with this. Yes, if we were talking MD5, SHA, or some similar true hash algo, then the probability of a natural collision is small enough that I agree in principle.
But if the hash algo is of some other kind then I do not know enough about it to assert that it can justify probable cause. Anyone who agrees without knowing more about it is a fool.
That's fair. I came away from reading the opinion that this was not a perceptual hash, but I don't think it is explicitly stated anywhere. I would have similar misgivings if indeed it is a perceptual hash.
For non-broken cryptographic hashes (e.g., SHA-256), the false-positive rate is negligible. Indeed, cryptographic hashes were designed so that even nation-state adversaries do not have the resources to generate two inputs that hash to the same value.
These are not the kinds of hashes used for CSAM detection, though, because that would only work for the exact pixel-by-pixel copy - any resizing, compression etc would drastically change the hash.
Instead, systems like these use perceptual hashing, in which similar inputs produce similar hashes, so that one can test for likeness. Those have much higher collision rates, and are also much easier to deliberately generate collisions for.
Naively, 1/(2^{hash_size_in_bits}). Which is about 1 in 4 billion odds for a 32 bit hash, and gets astronomically low at higher bit counts.
Of course, that's assuming a perfect, evenly distributed hash algorithm. And that's just the odds that any given pair of images has the same hash, not the odds that a hash conflict exists somewhere on the internet.
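Putting rough numbers on "a conflict existing somewhere": the usual birthday-bound approximation for an idealized, evenly distributed k-bit hash looks like this (it says nothing about perceptual hashes, whose collisions are not random).

    import math

    def p_any_collision(n_items, hash_bits):
        # Birthday-bound approximation: P ~= 1 - exp(-n(n-1) / 2^(bits+1))
        return 1.0 - math.exp(-n_items * (n_items - 1) / (2.0 ** (hash_bits + 1)))

    print(p_any_collision(10**12, 32))    # ~1.0 -- a 32-bit hash collides constantly at this scale
    print(p_any_collision(10**12, 128))   # effectively zero (~1e-15)
    print(p_any_collision(10**12, 256))   # ~0.0 -- accidental collisions essentially never happen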
And it should be mostly injective (collision-free) under most conditions. (This is obviously impossible in practice, but hashes with common collisions shouldn't be allowed as legal evidence.) Also, neural/visual hashes like those used by big tech make things tricky.
The hash in question has many collisions. It is probably enough to put on a warrant application, but it may not be enough to get a warrant without some other evidence. (It can be enough evidence to look for other public signs of evidence, or perhaps it is enough because there are a number of images that match different hashes.)
> Google’s hash match may well have established probable cause for a warrant to allow police to conduct a visual examination of the Maher file.
Very reasonable. Google can flag accounts as CP, but then a judge still needs to issue a warrant for the police to actually go and look at the file. Good job court. Extra points for reasoning about hash values.
> a judge still needs to issue a warrant for the police to actually go and look at the file
Only in the future. Maher's conviction, based on the warrantless search, still stands because the court found that the "good faith exception" applies--the court affirmed the District Court's finding that the police officers who conducted the warrantless search had a good faith belief that no warrant was required for the search.
I wonder what happened to fruit of the poisoned tree? Seems a lot more liberty oriented than "good faith exception" when police don't think they need a warrant (because police never seem to "think" they need a warrant).
I'm trying to imagine a more "real-world" example of this to see how I feel about it. I dislike that there is yet another loophole to gain access to peoples' data for legal reasons, but this does feel like a reasonable approach and a valid goal to pursue.
I guess it's like if someone noticed you had a case shaped exactly like a machine gun, told the police, and they went to check if it was registered or not? I suppose that seems perfectly reasonable, but I'm happy to hear counter-arguments.
The main factual components are as follows: Party A has rented out property to Party B. Party A performs surveillance on or around the property with Party B's knowledge and consent. Party A discovers very high probability evidence that Party B is committing crimes within the property, and then informs the police of their findings. Police obtain a warrant, using Party A's statements as evidence.
The closest "real world" analogy that comes to mind might be a real estate management company uses security cameras or some other method to determine that there is a crime occurring in a space that they are renting out to another party. The real estate management company then sends evidence to the police.
In the case of real property -- rental housing and warehouse/storage space in particular -- this happens all the time. I think that this ruling is imminently reasonable as a piece of case law (ie, the judge got the law as it exists correct). I also thing this precedent would strike a healthy policy balance as well (ie, the law as it exists if interpreted how the judge in this case interprets it would a good policy situation).
Is there any such thing as this surveillence applying to the inside of the renters bed room, bath room, filing cabinet with medical or financial documents, or political for that matter?
I don't think there is, and I don't think you can reduce reality to being as simple as "owner has more right over property than renter" renter absolutely has at least a few rights in at least a few defined contextx over owner because owner "consented" to accept money in trade for use of property.
If I import hundreds of pounds of poached ivory and store it in a shipping yard or move it to a long term storage unit, the owner and operator of those properties are allowed to notify police of suspected illegal activities and unlock the storage locker if there is a warrant produced.
Maybe the warrant uses some abstraction of the contents of that storage locker like the shipping manifest or customs declaration. Maybe someone saw a shadow of an elephant tusk or rhino horn as I was closing the locker door.
> Is there any such thing as this surveillence applying to the inside of the renters bed room, bath room, filing cabinet with medical or financial documents, or political for that matter?
Yes. Entering property for regular maintenance. Any time a landlord or his agent enters a piece of property, there is implicit surveillance. Some places are more formal about this than others, but anyone who has rented, owned rental property, or managed rental property knows that any time maintenance occurs there's an implicit examination of the premises also happening...
But here is a more pertinent example: the regular comings and goings of people or property can be and often are observed from outside of a property. These can contribute to probable cause for a search of those premises even without direct observation. (E.g., large numbers of disheveled children moving through an apartment, or an exterior camera shot of a known fugitive entering the property.)
Here the police could obtain a warrant on the basis of landlord's testimony without the landlord actually seeing the inside of the unit. This is somewhat similar to the case at hand, since what Google alerted the police to a hash match without actually looking at the image (ie, entering the bedroom).
> I don't think you can reduce reality to being as simple as "owner has more right over property than renter"
But I make no such reduction, and neither does the opinion. In fact, quite the opposite -- this is contributory why the court determines a warrant is required!
> ...Google alerted the police to a hash match without actually looking at the image (ie, entering the bedroom).
Google cannot have calculated that hash without examining the data in the image. They, or systems under there control obviously looked at the image.
It should not legally matter whether the eyes are meat or machine... if anything, machine inspection should be MORE strictly regulated, because of how much easier and cheaper it tends to make surveillance (mass or otherwise).
> It should not legally matter whether the eyes are meat or machine
But it does matter, and, perhaps ironically, it matters in a way that gives you STRONGER (not weaker) fourth amendment rights. That's the entire TL;DR of the fine article.
> if anything, machine inspection should be MORE strictly regulated, because of how much easier and cheaper it tends to make surveillance (mass or otherwise).
I don't disagree. In particular: I believe that the "Reasonable Person", to the extent that we remain stuck with the fiction, should be understood as having stronger privacy expectations in their phone or cloud account than they do even in their own bedroom or bathroom.
With respect to Google's actions in this case, this is an issue for your legislator and not the courts. The fourth amendment does not bind Google's hands in any way, and judges are not lawmakers.
The issue of course being the government then pressuring or requiring these companies to look for some sort of content as part of routine operations.
I agree. This is a case where the physical analogy leads us to (imo) the correct conclusion: compelling major property management companies to perform regular searches of their tenant's properties, and then to report any findings to the police, is hopefully something that most judges understand to be a clear violation of the fourth amendment.
> Party A discovers very high probability evidence that Party B is committing crimes within the property ...
This isn't accurate: the hashes were purposefully compared to a specific list. They didn't happen to notice it, they looked specifically for it.
And of course, what happens when it's a different list?
This is an excellent example, I think I get it now and I'm fully on-board. Thanks.
I could easily see an AirBNB owner calling the cops if they saw, for instance, child abuse happening on their property.
With their hidden camera in the bathroom.
I just meant it as an analogy, not that I'm specifically on-board with AirBNB owners putting cameras in bathrooms.
Anyways, that's why I just rent hotel rooms, personally. :)
But this court decision is a real world example, and not some esoteric edge case.
This is something I don’t think needs analogies to understand. SA/CP image and video distribution is an ongoing moderation, network, and storage issue. The right to not be under constant digital surveillance is somewhat protected in the constitution.
I like speech and privacy and am paranoid of corporate or government overreach, but I arrive at the same conclusion as you taking this court decision at face value.
Wait until Trump is in power and corporations are masterfully using these tools to “mow the grass” (if you want an existing example of this, look at Putin’s Russia, where people get jail time for any pro-Ukraine mentions on social media).
Yeah I’m paranoid like I said, but this case it seems like the hash of a file on google’s remote storage flagged as potential match that was used as justification to request a warrant. That seems common sense and did not involve employees snooping pre-warrant.
The Apple CSAM hash detection process, that the launch was rolled back, concerned me namely because it was run on-device with no opt out. If this is running on cloud storage then it sort of makes sense. You need to ensure you are not aiding or harboring actually harmful illegal material.
I get there are slippery slopes or whatever but the fact is you cannot just store whatever you wish in a rental. I don’t see this as opening mass regex surveillance of our communication channels. We have the patriot act to do that lol.
I think the real-world analogy would be to say that the case is shaped exactly like a machine gun and the hotel calls the police, who then open the case without a warrant. The "private search" doctrine allows the police to repeat a search done by a private party, but here (as in the machine gun case), the case was not actually searched by a private party.
It's like a digital 'smell'; Google is a drug sniffing dog.
I don't think the analogy holds for two reasons (which cut in opposite directions from the perspective of fourth amendment jurisprudence, fwiw).
First, the dragnet surveillance that Google performs is very different from the targeted surveillance that can be performed by a drug dog. Drug dogs are not used "everywhere and always"; rather, they are mostly used in situations where people have a less reasonable expectation of privacy than the expectation they have over their cloud storage accounts.
Second, the nature of the evidence is quite different. Drug-sniffing dogs are inscrutable and non-deterministic and transmit handler bias. Hashing algorithms can be interrogated and are deterministic and do not have such bias transferal issues; collisions do occur, but are rare, especially because the "search key" set is so minuscule relative to the space of possible hashes. The narrowness and precision of the hashing method preserves most of the privacy expectations that society is currently willing to recognize as objectively reasonable.
Here we get directly to the heart of the problem with the fictitious "reasonable person" used in tests like the Katz test, especially in cases where societal norms and technology co-evolve at a pace far more rapid than that of the courts.
This analogy can have two opposite meanings. Drug dogs can be anything from a prop used by the police to search your car without a warrant (a cop can always say in court the dog "alerted" them) to a useful drug detection tool.
Is it reasonable? Even if the hash were md5, given valid image files, the chances of an accidental collision are way lower than the chance that any other piece of evidence given to a judge was false or misinterpreted.
This is NOT a secure hash. It's an image-similarity hash, which has many, many matches among unrelated images.
Unfortunately the decision didn't mention this at all even though it is important. If it were even as good as an md5 hash (which is broken) I think the search should be allowed without a warrant: even though an accidental collision is possible, the odds are so strongly against it that the courts can safely assume there isn't one (and of course if there is, the police would close the case). However, since this hash is not that good, the police cannot look at the image unless Google does.
I wish I could get access to the "App'x 29" being referenced so that I could better understand the judges' understanding here. I assume this is Federal Appendix 29 (in which case a more thorough reference would've been appreciated). If the Appeals Court is going to cite the Federal Appendix in a decision like this and in this manner, then the Federal Appendix is as good as case law and West Publishing's copyright claims should be ripped away. Either the Federal Appendix should not be cited in Appeals Court and Supreme Court opinions, or the Federal Appendix is part of the law and belongs to the people. There is no middle ground there.
> I think the search should be allowed without warrant because even though a accidental collision is possible odds are so strongly against it that the courts can safely assume there isn't
The footnote in the decision bakes this property into the definition of a hash:
A “hash” or “hash value” is “(usually) a short string of characters generated from a much larger string of data (say, an electronic image) using an algorithm—and calculated in a way that makes it highly unlikely another set of data will produce the same value.”
(Importantly, this is NOT an accurate definition of a hash for anyone remotely technical... hashing algorithms with significant collision rates certainly exist, and tolerating collisions is even a design criterion for some of them...)
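To make that concrete, here's a minimal sketch (nothing to do with whatever proprietary technology Google uses) contrasting a cryptographic hash, which does fit the footnote's "highly unlikely" claim, with a toy 8-bit checksum, for which collisions are trivial:

    import hashlib

    a = b"holiday photo, version 1"
    b = b"holiday photo, version 2"

    # SHA-256: a one-character difference gives a completely different digest,
    # and finding two inputs with the same digest is computationally infeasible.
    print(hashlib.sha256(a).hexdigest())
    print(hashlib.sha256(b).hexdigest())

    # Toy "hash": only 256 possible values, so unrelated inputs collide constantly.
    def checksum8(data: bytes) -> int:
        return sum(data) % 256

    print(checksum8(b"ab"), checksum8(b"ba"))  # same value: a trivial collision

Both are hash functions in the loose sense the footnote uses, but only the first kind supports the assumption that a match all but identifies the file.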
Yes. How else would you prevent framing someone?
So you're saying that I craft a file that has the same hash as a CSAM one, I give it to you, you upload it to google, but it also happens to be CSAM, and I've somehow framed you?
My point is that a hash (granted, I'm assuming that we're talking about a cryptographic hash function, which is not clear) is much closer to "This is the file" than someone actually looking at it, and that it's definitely more proof of them having that sort of content than any other type of evidence.
These are perceptual hashes designed on purpose to be a little vague and broad so they catch transformed images. Not cryptographic hashes.
I don't understand. If you contend that it's even better evidence than actually having the file and looking at it, how is not reasonable to then need a judge to issue a warrant to look at it? Are you saying it would be more reasonable to skip that part and go directly to arrest?
It seems like a large part of the ruling hinges on the fact that Google matched the image hash to a hash of a known child pornography image, but didn't require an employee to actually look at that image before reporting it to the police. If they had visually confirmed it was the image they suspected it was based on the hash then no warrant would have been required, but the judge reads that the image hash match is not equivalent to a visual confirmation of the image. Maybe there's some slight doubt in whether or not the image could be a hash collision, which depends on the hash method. It may be incredibly unlikely (near impossible?) for any hash collision depending on the specific hash strategy.
I think it would obviously be less than ideal for Google to require an employee visually inspect child pornography identified by image hash before informing a legal authority like the police. So it seems more likely that the remedy to this situation would be for the police to obtain a warrant after getting the tip but before requesting the raw data from Google.
Would the image hash match qualify as probable cause enough for a warrant? On page 4 the judge stops short of setting a precedent on whether it would have or not. It seems likely to me that it would be solid probable cause, but sometimes judges or courts have a unique interpretation of technology that I don't always share, and leaving it open to individual interpretation can lead to conflicting results.
The hashes involved in stuff like this, as with copyright auto-matching, are perceptual hashes (https://en.wikipedia.org/wiki/Perceptual_hashing), not cryptographic hashes. False matches are common enough that perceptual hashing attacks are already in use to manipulate search engine results (see the example in a random paper on the subject: https://gangw.cs.illinois.edu/PHashing.pdf).
It seems like that is very relevant information that was not considered by the court. If this were a cryptographic hash I would say with high confidence that it is the same image, and so it is effectively as if Google examined it - there is a small chance that some unrelated file (which might not even be a picture) matches, but odds are the universe will end before that happens, and so the courts can consider it the same image for search purposes. However, because there are many false positive cases, there are reasonable odds that the image is legal, and so a higher standard for search is needed - a warrant.
>so the courts can consider it the same image for search purposes
An important part of the ruling seems to be that neither Google nor the police had the original image or any information about it, so the police viewing the image gave them more information than Google matching the hash gave Google: for example, consider how the suspect being in the image would have changed the case, or what might happen if the image turned out not to be CSAM, but showed the suspect storing drugs somewhere, or was even, somehow, something entirely legal but embarrassing to the suspect. This isn't changed by the type of hash.
That's the exact conclusion that was reached - the search required a warrant.
That makes sense - if they were using a cryptographic hash then people could get around it by making tiny changes to the file. I’ve used some reverse image search tools, which use perceptual hashing under the hood, to find the original source for art that gets shared without attribution (saucenao pretty solid). They’re good, but they definitely have false positives.
Now you’ve got me interested in what’s going on under the hood, lol. It’s probably like any other statistical model: you can decrease your false negatives (images people have cropped or added watermarks/text to), but at the cost of increased false positives.
The hash functions used for these purposes are usually not cryptographic hashes. They are "perceptual hashes" that allow for approximate matches (e.g. if the image has been scaled or brightness-adjusted). https://en.wikipedia.org/wiki/Perceptual_hashing
These hashes are not collision-resistant.
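For anyone curious what "under the hood" can look like, here's a minimal average-hash sketch (one simple kind of perceptual hash, not necessarily what Google or NCMEC actually use): similar-looking images map to similar bit strings, so matching is a Hamming-distance threshold rather than exact equality, which is exactly where false positives come from.

    from PIL import Image  # pip install Pillow

    def average_hash(path: str, size: int = 8) -> int:
        # Shrink to size x size grayscale, then set one bit per pixel:
        # 1 if the pixel is at or above the mean brightness, else 0.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for p in pixels:
            bits = (bits << 1) | (1 if p >= mean else 0)
        return bits  # a 64-bit fingerprint for size=8

    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")

    # Hypothetical usage: a small distance suggests "probably the same picture"
    # even after resizing or recompression, but unrelated images can also land
    # within the threshold, unlike an exact cryptographic-hash match.
    # print(hamming(average_hash("original.jpg"), average_hash("resized_copy.jpg")))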
They should be called embeddings.
It seems like there just needs to be case law about the qualifications of an image hash in order to be counted as probable cause for a warrant. Of course you could make an image hash be arbitrarily good or bad.
I am not at all opposed to any of this "get a damn warrant" pushback from judges.
I am also not at all opposed to Google searching its cloud storage for this kind of content. There are a lot of fishing expeditions for potentially illegal activity that I would mind a cloud provider going on, but this one I am fine with.
I do strongly object to companies searching content for illegal activity on devices in my possession absent probable cause and a warrant (that they would have to get in a way other than searching my device). Likewise I object to the pervasive and mostly invisible delivery to the cloud of nearly everything I do on devices I possess.
In other words, I want custody of my stuff and for the physical possession of my stuff to be protected by the 4th amendment and not subject to corporate search either. Things that I willingly give to cloud providers that they have custody of I am fine with the cloud provider doing limited searches and the necessary reporting to authorities. The line is who actually has the bits present on a thing they hold.
> As the district court correctly ruled in the alternative, the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required
So this means this conviction is upheld but future convictions may be overturned if they similarly don't acquire a warrant?
> the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required
This "good faith exception" is so absurd I struggle to believe that it's real.
Ordinary citizens are expected to understand and scrupulously abide by all of the law, but it's enough for law enforcement to believe that what they're doing is legal even if it isn't?
That's a punch line from a Chappelle bit[1], not a reasonable part of the justice system.
---
1. https://www.youtube.com/watch?v=0WlmScgbdws
The courts accept good faith arguments at times. They will give reduced sentences, or even none at all, if they think you acted in good faith. There are enough situations where it is legal to kill someone that there are laws spelling out when one person may legally kill another (hopefully they never apply to you).
Note that this case is not about ignorance of the law. This is I knew the law and was trying to follow it - I just honestly thought it didn't apply because of some tricky situation that isn't 100% clear.
The difference between "I don't know" and "I thought it worked like this" is purely a matter of degrees of ignorance. It sounds like the cops were ignorant of the law in the same way as someone who is completely unaware of it, just to a lesser degree. Unless they were misinformed about the origins of what they were looking at, it doesn't seem like it would be a matter of good faith, but purely negligence.
There was a circuit split and a matter of first impression in this circuit.
“Mens rea” is a key component of most crimes. Some crimes can only be committed if the perpetrator knows they are doing something wrong. For example, fraud or libel.
> “Mens rea” is a key component of most crimes. Some crimes can only be committed if the perpetrator knows they are doing something wrong. For example, fraud or libel.
We're talking about orthogonal issues.
Mens rea applies to whether the person performs the act on purpose. Not whether they were aware that the act was illegal.
Let's use fraud as an example since you brought it up.
If I bought an item from someone and used counterfeit money on purpose, that would be fraud. Even if I truly believed that doing so was legal. But it wouldn't be fraud if I didn't know that the money was counterfeit.
This specific conviction upheld, yes. But no, this ruling doesn't speak to whether or not any future convictions may be overturned.
It simply means that at the trial court level, future prosecutions will not be able to rely on the good faith exception to the exclusionary rule if warrantless inculpatory evidence is obtained under similar circumstances. If the government were to try to present such evidence at trial and the trial judge were to admit it over the objection of the defendant, then that would present a specific ground for appeal.
This ruling merely bolsters the 'better to get a warrant' spirit of the Fourth Amendment.
At the time, what they did was assumed to be legal because no one had ruled on it.
Now, there is prior case law declaring it illegal.
The ruling is made in such a way to say “we were allowing this, but we shouldn’t have been, so we wont allow it going forward”.
I am not a legal scholar, but that’s the best way I can explain it. The way that the judicial system applies to law is incredibly complex and inconsistent.
This is a deeply problematic way to operate. En masse, it has the right result, but, for the individual that will have their life turned upside down, the negative impact is effectively catastrophic.
This ends up feeling a lot like gambling in a casino. The casino can afford to bet and lose much more than the individual.
I don't care nearly as much about the 4th amendment when the person is guilty. I care a lot when the person is innocent. Searches of innocent people are costly for the innocent person, and so we require warrants to ensure such searches are minimized (even though most warrants are approved, the act of getting one forces the police to be careful). If a search imposed no cost at all on the innocent I wouldn't be against them, but there are many ways a search that finds nothing is costly to the innocent.
I think the full reasoning here is something like
1. It was unclear if a warrant was necessary
2. Any judge would have given a warrant
3. You didn't get a warrant
4. A warrant was actually required.
Thus, it's not clear that any harm was caused because the right wasn't clearly enshrined and had the police known that it was, they likely would have followed the correct process. There was no intention to violate rights, and no advantage gained from even the inadvertent violation of rights. But the process is updated for the future.
It doesn’t seem like it was wrong in this specific case however.
Yep, that's basically it.
> the private search doctrine, which authorizes a government actor to repeat a search already conducted by a private party without securing a warrant.
IANAL, etc. Does that mean that if someone breaks in to your house in search of drugs, finds and steals some, and is caught by the police and confesses all that the police can then search your house without a warrant?
IANAL either, but from what I've read before the courts treat searches of your home with extra care under the 4th Amendment. At least one circuit has pushed back on applying private search cases to residences, and that was for a hotel room[0]:
> Unlike the package in Jacobsen, however, which "contained nothing but contraband," Allen's motel room was a temporary abode containing personal possessions. Allen had a legitimate and significant privacy interest in the contents of his motel room, and this privacy interest was not breached in its entirety merely because the motel manager viewed some of those contents. Jacobsen, which measured the scope of a private search of a mail package, the entire contents of which were obvious, is distinguishable on its facts; this Court is unwilling to extend the holding in Jacobsen to cases involving private searches of residences.
So under your hypothetical, I'd expect the police would be able to test "your drugs" that they confiscated from the thief, and use any findings to apply for a warrant for a search of your house, but any search without a warrant would be illegal.
[0] https://casetext.com/case/us-v-allen-167
I think the private search would have to be legal.
The 9th circuit ruled the same way a few years ago: https://www.insideprivacy.com/data-privacy/ninth-circuits-in...
The problem with the internet nowadays is that a few big players are making up their own law. Very often it is against local laws, but nobody can fight it. For example, someone created some content, but another person uploaded it and got better rankings, which got the original poster blocked. Another example: children were playing a violin concert and the audio got removed due to an alleged copyright violation. No possibility to appeal; nobody sane would go to court. It just goes this way...
"That, however, does not mean that Maher is entitled to relief from conviction. As the district court correctly ruled in the alternative, the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required."
"Defendant [..] stands convicted following a guilty plea in the United States District Court for the Northern District of New York (Glenn T. Suddaby, Judge) of both receiving and possessing approximately 4,000 images and five videos depicting child pornography"
A win for google, for the us judicial system, and for constitutional rights.
A loss for child abusers.
The Fourth Amendment didn't help here, unfortunately. Or, perhaps fortunately.
Still, 25 years for possessing kiddie porn, damn.
The harshness of the sentence is not for the act of keeping the photos in itself, but for the individual suffering and social damage caused by the actions he incentivizes when he consumes such content.
Consumption per se does not incentivize it, though; procurement does. It's not unreasonable to causally connect one to the other, but I still think that it needs to be done explicitly. Strict liability for possession in particular is nonsense.
There's also an interesting question wrt simulated (drawn, rendered etc) CSAM, especially now that AI image generators can produce it in bulk. There's no individual suffering nor social damage involved in that at any point, yet it's equally illegal in most jurisdictions, and the penalties aren't any lighter. I've yet to see any sensible arguments in favor of this arrangement - it appears to be purely a "crime against nature" kind of moral panic over the extreme ickiness of the act as opposed to any actual harm caused by it.
It's not an interesting question at all.
Icky things are made illegal all the time. There's no need to have a 'sensible argument'.
Icky things were historically made illegal all the time, but most of those historical examples have not fared well in retrospect. Modern justice systems are generally predicated on some quantifiable harm for good reasons.
Given the extremely harsh penalties at play, I am not at all comfortable about punishing someone with a multi-year prison sentence for possession of a drawn or computer generated image. What exactly is the point, other than people getting off from making someone suffer for reasons they consider morally justifiable?
There's no room for sensible discussion like this in these matters. Not demanding draconian sentences for morally outraging crimes is morally outraging.
Assuming the person is a passive consumer with no messages / money exchanged with anyone, it is very hard to prove social harm or damage. Sentences should be proportional to the crime. Treating possession of cp as equivalent of literally raping a child just seems absurd to me. IMO, just for the legal protection of the average citizen, a simple possession should never warrant jail time.
Respectfully, it's not pornography, it's child sexual abuse material.
Porn of/between consenting adults is fine. CSAM and sexual abuse of minors is not pornography.
EDIT: I intended to reply to the grandparent comment
Pornography is any multimedia content intended for (someone's) sexual arousal. CSAM is obviously a subset of that.
That is out of date
The language has changed as we (in civilised countries) stop punishing sex work; "porn" is different from CSAM.
In the bad old days pornographers were treated the same as sadists
The language is defined by how people actually use it, not by how a handful of activists try to prescribe its use. Ask any random person on the street, and most of them have no idea what CSAM is, but they know full well what "child porn" is. Dictionaries, encyclopedias etc also reflect this common sense usage.
The justification for this attempt to change the definition doesn't make any sense, either. Just because some porn is child porn, which is bad, doesn't in any way imply that all porn is bad. In fact, I would posit that making this argument in the first place is detrimental to sex-positive outlook on porn.
> Just because some porn is child porn, which is bad, doesn't in any way imply that all porn is bad.
I think people who want others to stop using the term "child porn" are actually arguing the opposite of this. Porn is good, so calling it "child porn" is making a euphemism or otherwise diminishing the severity of "CSAM" by using the positive term "porn" to describe it.
I don't think the established consensus on the meaning of the word "porn" itself includes some kind of inherent implied positivity, either; not even among people who have a generally positive attitude towards porn.
Stop doing this. You are confusing the perfectly noble aspect of calling it abuse material to make it victim centric with denying the basic purpose of the material. The people who worked hard to get it called CSAM do not deny that it’s pornography for its users.
The distinction you went on to make was necessary specifically for this reason.
Who do such harsh punishments benefit?
It's a reasonable argument, but a concerning one because it hinges on a couple of layers of indirection between the person engaging in consuming the content and the person doing the harm / person who is harmed.
That's not outside the purview of US law (especially in the world post-reinterpretation of the Commerce Clause), but it is perhaps worth observing how close to the cliff of "For the good of Society, you must behave optimally, Citizen" such reasoning treads.
For example: AI-generated CP (or hand-drawn illustrations) are viscerally repugnant, but does the same "individual suffering and social damage" reasoning apply to making them illegal? The FBI says yes to both in spite of the fact that we can name no human that was harmed or was unable to give consent in their fabrication (handwaving the source material for the AI, which if one chooses not to handwave it: drop that question on the floor and focus on under what reasoning we make hand-illustrated cartoons illegal to possess that couldn't be applied to pornography in general).
> The FBI says yes to both in spite of the fact that we can name no
They have two arguments for this (that I am aware of). The first argument is a practical one, that AI-generated images would be indistinguishable from the "real thing", but that the real thing still being out there would complicate their efforts to investigate and prosecute. While everyone might agree that this is pragmatic, it's not necessarily constitutionally valid. We shouldn't prohibit activities based on whether these activities make it more difficult for authorities to investigate crimes. Besides, this one's technically moot... those producing the images could do so in such a way (from a technical standpoint) that they were instantly, automatically, and indisputably provable as being AI-generated.
All images could be mandated to require embedded metadata which describes the model, seed, and so forth necessary to regenerate it. Anyone who needs to do so could push a button, the computer would attempt to regenerate the image from that seed, and the computer could even indicate that the two images matched (the person wouldn't even need to personally view the image for that to be the case). If the application indicated they did not match, then authorities could investigate it more thoroughly.
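Sketching what that push-button check might look like (the metadata keys and the regenerate() function here are hypothetical; this is not an existing standard or API):

    import hashlib, json

    def regenerate(model: str, seed: int, prompt: str) -> bytes:
        """Stand-in for deterministically re-running the named generator."""
        raise NotImplementedError("hypothetical image-generation call")

    def verify_provenance(image_bytes: bytes, metadata_json: str) -> bool:
        # e.g. {"model": "...", "seed": 1234, "prompt": "..."} embedded in the file
        meta = json.loads(metadata_json)
        regenerated = regenerate(meta["model"], meta["seed"], meta["prompt"])
        # The reviewer never has to look at either image: matching digests are
        # enough to confirm the file really is the claimed AI output.
        return hashlib.sha256(regenerated).digest() == hashlib.sha256(image_bytes).digest()

One practical wrinkle: bit-exact reproducibility of a generation run across different hardware and software versions is itself hard to guarantee, so the scheme is easier to describe than to mandate.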
The second argument is an economic one. That is, if a person "consumes" such material, they increase economic demand for it to be created. Even in a post-AI world, some "creation" would be criminal. Thus, the consumer of such imagery does cause (indirectly) more child abuse, and the government is justified in prohibiting AI-generated material. This is a weak argument on the best of days... one of the things that law enforcement efforts excel at, when there are two varieties of a behavior, one objectionable and the other not, but both similar enough that they might at a glance be mistaken for one another, is greatly disincentivizing one without infringing on the other. Being an economic argument, one of the things that might be said is that economic actors seek to reduce their risk of doing business, and so would gravitate toward creating the legal variety of material.
While their arguments are dumb, this filth is as reprehensible as anything. The only question worth asking or answering is: were it (the AI-generated material) legal, would it result in fewer children being harmed or not? It's commonly claimed that the easy availability of mainstream pornography has reduced the rate of rape since the mid-20th century.
I was using md5 sums for flagging images 20 years ago for the government; there were occasional false positives, but the safety team would review those, not operations. My only role was to burn the user's account to a DVD (via a script) and have the police officer pick up the DVD. We never touched the disk, and only burned it with a warrant (we never saw/touched the user's data...).
I figured this was the common industry standard for chain of custody of evidence. Same with police videos: they are uploaded to the courts' digital evidence repository, and everyone who looks at the evidence is logged.
Seems like a standard legal process was followed.
Status quo, there is no change here.
The old example is the email server administrator. If the email administrator has to view the contents of user messages as a part of regular maintenance, and that email administrator notices violations of the law in those user messages, they can report it to law enforcement. In that case law enforcement can receive the material without a warrant only if law enforcement never asked for it before it was gifted to them. There are no fourth amendment protections provided to offenders in this scenario of third party accidental discovery. Typically, in these cases the email administrator does not have an affirmative requirement to report violations to law enforcement unless specific laws say otherwise.
If on the other hand law enforcement approaches that email administrator to fish for illegal user content then that email administrator has become an extension of law enforcement and any evidence discovered cannot be used in a criminal proceeding. Likewise, if the email administrator was intentionally looking through email messages for violations of law even not at the request of law enforcement they are still acting as agents of the law. In that case discovery was intentional and not an unintentional product of system maintenance.
There is a third scenario: obscenity. Obscenity is illegal intellectual property, whether digital or physical, as defined by criminal code. Possession of obscene materials is a violation of criminal law for all persons, businesses, and systems in possession. In that case an email administrator that accidentally discovers obscene material does have a required obligation to report their discoveries, typically through their employer's corporate legal process, to law enforcement. Failures to disclose such discoveries potentially aligns the system provider to the illegal conduct of the violating user.
Google's discovery, though, was not accidental as a result of system maintenance. It was due to an intentional discovery mechanism based on stored hashes, which puts Google's conduct in line with law enforcement even if they specified their conduct in their terms of service. That is why the appeals court claims the district court erred by denying the defendant's right to suppression on fourth amendment grounds.
The saving grace for the district court was a good faith exception, such as inevitable discovery. The authenticity and integrity of the hash algorithm was never in question by any party, so no search for violating material was necessary, which established probable cause and thus allowed law enforcement reasonable grounds to proceed to trial. No warrant was required because the evidence was likely sufficient at trial even if law enforcement did not directly view the image in question, though they did verify the image. None of that was challenged by either party. What was challenged was just Google's conduct.
What is the context of this?
Wow, do I ever not know how I feel about the "good faith exception."
It feels like it incentivizes the police to minimize their understanding of the law so that they can believe they are following it.
The good faith exception requires the belief be reasonable. Ignorance of clearly settled law is not reasonable, it should be a situation where the law was unclear, had conflicting interpretations or could otherwise be interpreted the way the police did by a reasonable person.
> It feels like it incentivizes the police to minimize their understanding of the law so that they can believe they are following it.
That's a bingo. That's exactly what they do, and why so many cops know less about the law than random citizens. A better society would have high standards for the knowledge expected of police officers, including things like requiring 4-year criminal justice or pre-law degree to be eligible to be hired, rather than capping IQ and preferring people who have had prior experience in conducting violent actions.
In some countries you are required to study the law in order to become a police officer. It's part of the curriculum in the three year bachelor level course you must pass to become a police officer in Norway for instance. See https://en.wikipedia.org/wiki/Norwegian_Police_University_Co... and https://en.wikipedia.org/wiki/Norwegian_Police_Service
Yes, this likely explains part of why the Norwegian police behave like professionals who are trying to do their job with high standards of performance and behavior and the police in the US behave like a bunch of drinking buddies that used to be bullies in high school trying to find their next target to harass.
So now an algorithm can interpret the law better than a judge. It’s amazing how technology becomes judge and jury while privacy rights are left to a good faith interpretation. Are we really okay with letting an algorithmic click define the boundaries of privacy?
It's crazy that the most dangerous people one regularly encounters can do anything they want as long as they believe they can do it. The good faith exemption has to be one of the most fascist laws on the books today.
> "the good faith exception to the exclusionary rule supports denial of Maher’s suppression motion because, at the time authorities opened his uploaded file, they had a good faith basis to believe that no warrant was required."
In no other context or career can you do anything you want and get away with it just as long as you say you thought you could. You'd think police officers would be held to a higher standard, not no standard.
And specifically with respect to the law, breaking a law and claiming you didn't know you did anything wrong as an individual is not considered a valid defense in our justice system. This same type of standard should apply even more to trained law enforcement, not less, otherwise it becomes a double standard.
No, this is breaking the law while saying "this looked like one of the situations where I already know the law doesn't apply." If Google had looked at the actual image and said it was child porn, instead of just saying it was similar to some image that is child porn, this would be 100% legal, as the courts have already said. That difference is subtle enough that I can see how someone would get it wrong (and in fact I would expect other courts to rule differently).
> you do anything you want and get away with it just as long as you say you thought you could.
Isn't that the motto of VC? Uber, AirBnB, WeWork, etc...
Sorry, I should have been more explicit. I thought the context provided it.
> you do any illegal action you want and get away with it just as long as you say you thought you could.
And as for corporations: that's the point of incorporating. Reducing liability.
Good Samaritan laws tend to function similarly
It's not exactly the same imo, since GS laws are meant to protect someone who is genuinely trying to do what a reasonable person could consider "something positive"
It is not "you say you thought you could", it is "you have reasonable evidence a crime is happening".
The reasonable here is "google said it", and it was true.
If the police arrive at a house on a domestic abuse call, and hears screams for help, is breaking down the door done in good faith?
In this case you're correct. But the good faith exemption is far broader than this and applies to even officer's completely personal false beliefs in their authority.
> In no other context or career can you do anything you want and get away with it just as long as you say you thought you could
Many white collar crimes, financial and securities fraud/violations can be thwarted this way
Basically, ignorance of the law is no excuse except when you specifically write the law to say it is an excuse
Something that contributes to the DOJ not really trying to bring convictions against individuals at bigger financial institutions
And yeah, a lot of people make sure to write their industry’s laws that way
I think the judge chose to relax a lot on this one due to the circumstances. Releasing back into society a man found with 4,000 child porn photos on his computer would be a shame.
But yeah, this opens the gates of precedent too wide for tyranny, unfortunately...
The judge doesn't really understand a hash well. They say things like "Google assigned a hash" which is not true, Google calculated the hash.
Also I'm surprised the 3rd-party doctrine doesn't apply. There's the "private search doctrine" mentioned but generally you don't have an expectation of privacy for things you share with Google
Erm, "Assigned" in this context is not new: https://law.justia.com/cases/federal/appellate-courts/ca5/17...
"More simply, a hash value is a string of characters obtained by processing the contents of a given computer file and assigning a sequence of numbers and letters that correspond to the file’s contents."
From 2018 in United States v. Reddick.
The calculation is what assigns the value.
No. The calculation is what determines what the assignation should be. It does not actually assign anything.
This FOIA litigation by ACLU v ICE goes into this topic quite a lot: https://caselaw.findlaw.com/court/us-2nd-circuit/2185910.htm...
Yes, Google's calculation.
Did Google invent this hash?
Why is that relevant? Google used a hashing function to persist a new record within a database. They created a record for this.
Like I said in a sib. comment, this FOIA lawsuit goes into questions of hashing pretty well: https://caselaw.findlaw.com/court/us-2nd-circuit/2185910.htm...
Google at some point decided how to calculate that hash and that influences what the value is right? Assigned seems appropriate in that context?
Either way I think the judge's wording makes sense.
Google assigned the hashing algorithm (maybe; assuming it wasn't chosen in some law somewhere, I know this CSAM hashing is something the big tech companies work on together).
Once the hashing algorithm was assigned, individual values are computed or calculated.
I don't think the judge's wording is all that bad but the word "assigned" is making it sound like Google exercised some agency when really all it did was apply a pre-chosen algorithm.
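A tiny illustration of that point (assuming, for the sake of the example, an ordinary cryptographic hash rather than whatever Google actually runs): once the algorithm is fixed, the value is fully determined by the file's bytes, so the only discretionary step is choosing the algorithm.

    import hashlib

    data = b"the same file contents"
    # Anyone, anywhere, running SHA-256 over these bytes gets exactly this value.
    print(hashlib.sha256(data).hexdigest())
    print(hashlib.sha256(data).hexdigest())  # identical on every run, on any machine
    # The only real "assignment" is the choice of algorithm:
    print(hashlib.md5(data).hexdigest())     # different algorithm, different value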
There's a password on my Google account, I totally expect to have privacy for anything I didn't choose to share with other people.
The hash is a kind of metadata recorded by Google, and I feel like Google using it to keep child porn off their systems should be reasonable. Same ballpark as limiting my storage to 1GB based on file sizes. Sharing metadata without a warrant is a different question though.
As should be expected from the lawyer world, it seems like whether you have an expectation of privacy using gmail comes down to very technical word choices in the ToS, which of course neither this guy nor anyone else has ever read. Specifically, it may be legally relevant to your expectation of privacy whether Google says they "may" or "will" scan for this stuff.
Does a lab assign your DNA to you, or does it calculate it?
Do two different labs' DNA analyses come out exactly equal?
Remember that you can use multiple different algorithms to calculate a hash.
Out of curiosity, what is false positive rate of a hash match?
If the FPR is comparable to asking a human "are these the same image?", then it would seem to be equivalent to a visual search. I wonder if (or why) human verification is actually necessary here.
The reason human verification is necessary is that the government is relying on something called the "private search" doctrine to conduct the search without a warrant. This doctrine allows them to repeat a search already conducted by a private party (i.e., Google) without getting a warrant. Since Google didn't actually look at the file, the government is not able to look at the file without a warrant, as that search exceeds the scope of the initial search Google performed.
I doubt sha1 hashes are used for this. Those image hashes should match files regardless of orientation, cropping, resizing, re-compression, color correction etc. Collisions could be far more frequent with these hashes.
The hash should ideally match even if you use photoshop to cut the one person out of the picture and put that person into a different photo. I'm not sure if that is possible, but that is what we want.
> Out of curiosity, what is false positive rate of a hash match?
No way to know without knowledge of the 'proprietary hashing technology'. Theoretically though, a hash can have infinitely many inputs that produce the same output.
Mismatching hash values from the same hashing algorithm can prove mismatching inputs, but matching hash values don't ensure matching inputs.
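A quick way to see that asymmetry, using a deliberately truncated hash (hypothetical file names, obviously) so that a collision is easy to find by brute force:

    import hashlib

    def short_hash(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()[:4]  # keep only 16 bits of output

    target = short_hash(b"known flagged file")
    for i in range(10**6):
        candidate = f"innocent file {i}".encode()
        if short_hash(candidate) == target:
            print("collision after", i, "tries:", candidate)
            break

With only 16 bits of hash this finds an unrelated "match" in tens of thousands of tries; with a full 256-bit cryptographic hash the same loop would never terminate in practice. Where a perceptual hash sits between those extremes depends entirely on its design, which is why "proprietary hashing technology" is doing a lot of work in the false-positive question.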
> I wonder if (or why) human verification is actually necessary here
It's not about frequency, it's about criticality of getting it right. If you are going to make a negatively life-altering report on someone, you'd better make sure the accusation is legitimate.
I'd say the focus on hashing is a bit of a red herring.
Most anyone would agree that the hash matching should probably form probable cause for a warrant, allowing a judge to sign off on the police searching (i.e., viewing) the image. So, if it's a collision, the cops get a warrant and open up your linux ISO or cat meme, and it's all good. Probably the ideal case is that they get a warrant to search the specific image, and are only able to obtain a warrant to search your home and effects, etc. if the image does appear to be CSAM.
At issue here is the fact that no such warrant was obtained.
I think it'll prove far more likely that the government creates incentives to lead Google/other providers to fully do the search on their behalf.
The entire appeal seems to hinge on the fact that Google didn't actually view the image before passing it to NCMEC. Had Google policy been that all perceptual hash hits were reviewed by employees first, this would've likely been a one page denial.
If the hash algorithm were CRC8, then obviously it should not be probable cause for anything. If it were SHA-3, then it's basically proof beyond reasonable doubt of what the file is. It seems reasonable to question how collisions behave.
I don't agree that it would be proof beyond reasonable doubt, especially because neither google nor law enforcement can produce the original image that got tagged.
> Most anyone would agree that the hash matching should probably form probable cause for a warrant
I disagree with this. Yes, if we were talking MD5, SHA, or some similar true hash algo, then the probability of a natural collision is small enough that I agree in principle.
But if the hash algo is of some other kind then I do not know enough about it to assert that it can justify probable cause. Anyone who agrees without knowing more about it is a fool.
That's fair. I came away from reading the opinion that this was not a perceptual hash, but I don't think it is explicitly stated anywhere. I would have similar misgivings if indeed it is a perceptual hash.
For non-broken cryptographic hashes (e.g., SHA-256), the false-positive rate is negligible. Indeed, cryptographic hashes were designed so that even nation-state adversaries do not have the resources to generate two inputs that hash to the same value.
See also:
https://en.wikipedia.org/wiki/Collision_resistance
https://en.wikipedia.org/wiki/Preimage_attack
These are not the kinds of hashes used for CSAM detection, though, because that would only work for the exact pixel-by-pixel copy - any resizing, compression etc would drastically change the hash.
Instead, systems like these use perceptual hashing, in which similar inputs produce similar hashes, so that one can test for likeness. Those have much higher collision rates, and are also much easier to deliberately generate collisions for.
Naively, 1/(2^{hash_size_in_bits}). Which is about 1 in 4 billion odds for a 32 bit hash, and gets astronomically low at higher bit counts.
Of course, that's assuming a perfect, evenly distributed hash algorithm. And that's just the odds that any given pair of images has the same hash, not the odds that a hash conflict exists somewhere on the internet.
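As a sanity check on those numbers (with a made-up corpus size), the birthday-problem version is the one that matters for "a conflict exists somewhere on the internet":

    import math

    n = 10**9                       # hypothetical number of hashed images
    for bits in (32, 256):
        pair = 1 / 2**bits          # odds of two specific images colliding
        # Probability that at least one pair among n images collides (birthday bound)
        anywhere = 1 - math.exp(-n * (n - 1) / (2 * 2**bits))
        print(bits, pair, anywhere)

For a 32-bit hash the "anywhere" probability is essentially 1, while for a 256-bit hash it is indistinguishable from 0, which is why hash size (and whether it's a true cryptographic hash at all) matters far more than the naive per-pair figure.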
You need to know the input space as well as the output space (hash size).
If you have a 32bit hash but your input is only 16bit, you'll never have a collision (and you'll be wasting a ton of space on your hashes!).
Image files can get into the megabytes though, so unless the output hash is large the potential for collisions is probably not all that low.
A hash can be arbitrary; the only requirement is that it's a deterministic one-way function.
And it should be mostly collision-free under most conditions. (This is obviously impossible in practice, but hashes with common collisions shouldn't be allowed as legal evidence.) Also, neural/visual hashes like those used by big tech make things tricky.
The hash in question has many collisions. It is probably enough to put in a warrant application, but it may not be enough to get a warrant without some other evidence (it can be enough evidence to justify looking for other public signs of wrongdoing, or perhaps it suffices when a number of images match different hashes).