The main point raised in the article is that these bots may void attorney-client privilege.
But the real danger with these, IMO, is that they turn casual conversations into a permanent record, one that is fully discoverable in court should the company get into trouble later.
Plus they are super inaccurate. Gemini gets one of its three bullets subtly or majorly wrong almost every time. Just a few weeks ago Gemini said we're rolling out our payment setup in Russia. You know, the place we have 20+ sanctions packages on? We were talking about France in the meeting.
We've found they're surprisingly good if everyone on the call is using a decent headset.
The problems start with conference-room audio, or when someone is on their laptop mic. If they miss a word, they never write [unintelligible]; they just start playing Mad Libs based on the rest of the sentence.
We just went through a round of 100+ (non-sensitive) VoC interviews and they really cut down the workload of compiling all of the feedback. If the audio was a little shaky, though, we pretty much had to throw the transcripts away and do them from scratch like we used to.
> If they miss a word, they never write [unintelligible]; they just start playing Mad Libs based on the rest of the sentence.
IMO this is the single biggest flaw of LLMs. They're great at a lot of things, but not knowing when they're wrong (or when they don't have enough information to actually work with) is a critical weakness.
IMO there's no structural reason they shouldn't be able to spot this and correct themselves - I suspect it's a training issue. But presumably bots that infer context and fill in the gaps rank better on what people like... at the cost of accuracy.
It's just a token predictor; what do you expect? What we need are tools that embrace that and ping the agent to validate or double-check what it just said. But the trade-off is that this might hamper their capabilities to some degree.
While you're correct about what the audio models are - at least somewhat (they're not exactly like text-based LLMs) - you seem to brush his point away too quickly before fully exploring it.
This is a solvable issue; the current models and harnesses just aren't built with that assumption - hence they do "best effort while guessing if unsure".
Give it a few more months to years and things will likely settle the way he pitched - at least in the context of note taking: only let something become "lore" if the model didn't have to guess a word.
Currently there is basically only one mode - and it's optimized for conversation. The note taking is just glued on with that functionality as the backbone, and that's probably not going to stay.
I don't think it's a training issue. There's simply no inherent "I don't know" in the transformer architecture: unless the input really is something completely unknown, the nearest neighbor will be chosen, and that will be whatever sounds similar or relevant - even if it causes a problem.
It's not inherent in the transformer architecture, but we do try to ingrain a sense of uncertainty. It's difficult, though - not only technically but also philosophically/culturally. How confident do you want the model to be in its answer to “why did Rome fall”?
There are lots of tools in our toolbelt for better uncertainty calibration, but it trades off against other capabilities, and it can actually be rather frustrating to interact with in agentic contexts, since the model will constantly need input from you or otherwise be indecisive and overly cautious. It's not technically a limitation of the transformer architecture, but it is more challenging to deal with than in other architectures/statistical paradigms.
For example, you can maintain a belief state, generate conditional on it, and train to ensure the belief state stays stable and performant. But evals reward guessing at this point, and it's very, very hard to evaluate calibration in these open-ended contexts. We're slowly getting there, just not nearly as fast as other capabilities.
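For closed-ended questions, at least, calibration is measurable. Here's a minimal sketch of expected calibration error over (confidence, correctness) pairs - the numbers and bin count are illustrative, and it assumes you can extract a scalar confidence per answer, which is exactly what's missing in the open-ended case:

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    # Bucket answers by stated confidence; a calibrated model's 70%-confident
    # answers should be right ~70% of the time. ECE is the size-weighted gap
    # between accuracy and confidence across bins.
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece

# Two sure-and-right answers, one overconfident miss, one well-placed doubt.
print(expected_calibration_error([0.9, 0.9, 0.6, 0.3], [1, 1, 0, 0]))
```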
> How confident do you want the model to be in its answer to “why did Rome fall”?
The confidence level can be anything, as long as it's reported accurately often enough. "This is my conjecture, but", "I'm not completely sure, but", and "most historians agree that" are all perfectly valid ways to start a sentence - which LLMs never use. They state mathematical truths, general consensus, hotly debated stances, and total fabrications with the exact same assertiveness.
The thing is, if LLMs are stochastic parrots predicting the next word (i.e., a partially decent autocomplete), there's no reason they can't complete <specific question it can't answer> with "I don't know" - that's a perfectly valid sentence too.
That's why I'm still cautiously optimistic about LLMs eventually being good enough here. I don't know if or when someone will manage it, but I'm hopeful.
It's a benchmark and eval issue. Guessing gets them the right result sometimes, so the models rank better on error rate than they otherwise would. We need benchmarks that penalize being wrong WAY more than saying "I don't know".
Of course there's a secondary problem: the model may then overuse the "unintelligible" option. But that's a matter of training it properly against that eval.
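To make that concrete, here's a toy scoring rule of the kind I mean (the abstention string and penalty factor are invented, not any real benchmark's rubric):

```python
def score(prediction: str, truth: str, wrong_penalty: float = 4.0) -> float:
    # Abstaining scores zero; a wrong answer costs far more than a right
    # one earns, so blind guessing has negative expected value.
    if prediction == "I don't know":
        return 0.0
    return 1.0 if prediction == truth else -wrong_penalty
```

Under plain accuracy, any guess beats abstaining; under this rule, a guess only pays when the model is more than 80% sure, since p - 4(1 - p) > 0 requires p > 0.8.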
You could also try thresholding the output based on perplexity, removing the parts the model is less sure about, though I don't think that's going to be super accurate.
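Roughly like this, assuming your ASR stack exposes per-token logprobs (Whisper reports segment-level log-probabilities; the token-level version and the threshold here are my assumptions):

```python
def mask_low_confidence(tokens, logprobs, threshold=-2.5):
    # Keep a token only if the decoder was reasonably sure of it;
    # otherwise emit an explicit marker instead of its best guess.
    # (Perplexity of a span is exp(-mean logprob), so this is the
    # same signal at token granularity.)
    return " ".join(
        tok if lp >= threshold else "[unintelligible]"
        for tok, lp in zip(tokens, logprobs)
    )

# The Mad Libs failure from upthread, caught: the decoder was
# guessing on the country name.
print(mask_low_confidence(["rolling", "out", "payments", "in", "Russia"],
                          [-0.1, -0.2, -0.3, -0.4, -3.8]))
# -> rolling out payments in [unintelligible]
```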
Yeah, I broadly agree with you. I've tried explicitly adding a prompt to "ask questions and clarify", and even fairly decent models like Gemini Pro (2.5 or 3) tend to ask questions for the sake of it.
Which reminds me of another big issue with LLMs - they'll blindly do whatever you ask them to, without pushback. (Again, I miss 3.5/3.6-era Sonnet, which actually had half a spine. Fuck Anthropic for blindly chasing coding benchmarks at the cost of everything else.)
I've engaged in several "CMVs" (or "tell me why X is bad") with LLMs, and very often it's clear the model is just saying stuff to say it, offering terrible points for unjustifiable positions that collapse the moment I counter-argue even slightly rationally.
Given how financial services can impose silent, inexplicable lifetime bans for using the wrong words in the "what is this transaction for" field, I wonder at what point the AI will automatically report people for sanctions violations based on a mishearing.
> But the real danger with these, IMO, is that they turn casual conversations into a permanent record, one that is fully discoverable in court should the company get into trouble later.
I would add that there is no guarantee they are correct, either.
You'd use a computer-generated transcript as a guide, not as proof - the proof is the recording of the person actually saying the thing, not the LLM's best guess at what it imagined the person said.
“At timestamp X, person Y said Z” says the robot, and then you dutifully scrub the audio to timestamp X to verify.
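That scrubbing is easy to script if the raw audio is retained. A sketch, assuming the note-taker emits timestamped claims (the claim format is hypothetical) and ffmpeg is on PATH:

```python
import subprocess

# Hypothetical shape of a note-taker claim: at timestamp X, person Y said Z.
claim = {"start": 1325.0, "end": 1331.5, "speaker": "Y", "text": "..."}

def clip_for_review(audio_path: str, claim: dict, out_path: str = "review.wav"):
    # Cut just the claimed span out of the recording so a human can
    # listen and check the robot's version against what was actually said.
    subprocess.run(
        ["ffmpeg", "-y",
         "-ss", str(claim["start"]),                # seek to the claim
         "-t", str(claim["end"] - claim["start"]),  # keep only its duration
         "-i", audio_path, out_path],
        check=True,
    )
```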
Is audio always kept in addition to transcripts? (genuine question, I rarely record either)
This. The fact that LLMs can also amplify existing closed-set research means even smaller shops can now search through a flood of documents to find smoking guns or critical evidence much faster.
I've been saying it since the mid-2010s, but it's worth repeating: data isn't gold. It's more like oxygen in a room - the higher the concentration, the more likely it is to poison the inhabitants or explode at an errant spark (a lawsuit).
Collect only what's needed to perform the function, and store it only as long as necessary for compliance. Anything else is going to spook counsel.
What are you trying to get away with, I wonder?
The nuance here, too, is that just because someone is concerned about materials being discoverable does not mean the company is doing something illegal. Corporate law as it relates to legislation (US, in this perspective) is a dance between the company and the current administration. When it comes to antitrust and related legislation, the equilibrium is shades of gray that shift between administrations - and sometimes within the same administration. Companies optimize for their outcomes, and the government optimizes not so much for legality as for whatever the current administration sets as its main concern.
Basically, it will be harder to hide illegal and unethical stuff companies routinely engage in.
No, that would be a strict improvement. AI note-takers can easily "mishear" or "misreport" non-existent illegal and unethical things. They also seem to mess up numbers easily (a big problem, because a lot of decisions hinge on precise numbers - imagine inflating an inventory by an order of magnitude, and then imagine having to pay a tariff on something that never existed).
I have a friend who works at a large-ish company that imports and manufactures things (in one of the clerical/quantitative professions). A few years back, the IT department went on a kind of "inquisition": they forced employees to disable the summarization function that came with MS Teams and threatened to fire anyone who didn't. The resistance to this demand was surprising - most people are clueless about the cost of their own convenience. Worst of all, people would zone out of meetings because the AI was producing summaries, which they would then never read.
The effect of the technology was to make meetings vastly more expensive, because the supposed benefit of meetings was nullified by complacency, _and_ to make them a liability (incorrectly summarized meetings that could be used in the discovery process, sure, but that could also be sold by MSFT as a kind of market-research data to competitors in the space).
Nothing illegal has to happen in these meetings at all for this tech to cause an infinity of problems for the corporation. Every employee who uses these is effectively an unwitting spy. And if that's the case, the meetings might as well be recorded and uploaded to YouTube (or whatever people watch these days)[1].
[1]: Maybe this is the future. Which I am okay with, but only if the entire planet has to do it, and the penalties for not doing it are irrecoverably severe.
"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him" - Cardinal Richelieu
Be careful what you wish for. Particularly when it involves tech that often gets it very, very wrong.
That's an argument for recording everyone on earth 24/7. Is that what you mean?
With the level of surveillance and erosion of privacy, that is essentially what is happening. We all know that we are being watched and surveilled. There is no longer an "argument". Anything you say in public or private could potentially be used against you in the future.
No, there's the potential of that happening; it isn't what actually happens. If everyone's phone were continuously recording and storing everything 24/7, we'd need much bigger batteries, for one thing.
It'll just happen. Can't really fight technological progress.
Actually, many people fight this kind of "progress". Just look at what is happening to Flock right now. True "technological progress" would be using technology to empower humans, not to exploit and subjugate them.
Is it progress though?
Show me the man and I will show you the crime.
Modernized. Industrial AI scale.
It's also going to be harder to hide completely legal but not-ideal stuff, like randomly complaining about your boss to a colleague, or casually discussing a feature you're stuck working on that you think is a bad idea.
> casually discussing a feature you're stuck working on that you think is a bad idea.
I'll be honest: this is something I hope AI note-taking tools capture and incorporate into summaries of the company's status, especially if they act as an intermediary without revealing the specific person who said it. There's a lot of information latent within organizations that doesn't get properly shared, due to fear of retaliation or simple embarrassment, that would benefit everyone by being communicated sooner.
The people supplying this technology explicitly want it to tell them what their serfs are doing. There will be no "honest but anonymous informing of upper management".
That information is often intentionally not cascaded up the chain, because the higher up you go, the more rigid the thinking gets - at least often. Upstream doesn't want to hear the bad news, or hear about how their idea is dumb. They want us to just do the bad idea, and if it doesn't work out, they want to hang the ICs out to dry.
Maybe some smaller shops aren't like this, but the bigger your company is, the more you'll find this type of thinking persists.
In theory, I do like your idea - anonymously cascading feedback upstream. I just see no avenue for this to succeed in practice.
Back when I was in college, in a fraternity, we always assumed that the phones were tapped. Specifically, we never spoke about alcohol or marijuana (now legal) on the phone.
Even today, I generally assume that my phone could be tapped; even when talking with my trusted work colleagues, friends, and family. I'm extra careful about dirty jokes or "grey morality" in video conferences and email.
The same applies to speaking with lawyers. You never know when some motivated asshole wants to twist your words out of context, and the possibility of a recording just enables that behavior.
---
I know enough about security and encryption to know that unless I've exchanged keys physically with someone else, there really is no guarantee that someone hasn't compromised a certificate somewhere. (I.e., a "secure" connection on the internet is secure enough for a credit card, but not much more.)
This is where I think realtime transcription (or just-in-time transcription followed by deleting everything) will be the end state.
Real-time transcription where the AI actually takes notes (instead of recording every word and keeping a dump of it somewhere) can be especially appealing. Then there isn't any record of the raw sentences, and things that aren't relevant are immediately discarded without any written record.
OpenAI's realtime Whisper and other such models will become the default over time.
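The shape of that pipeline is simple enough. A sketch, with transcribe and distill as stand-ins for whatever realtime ASR and summarization models you'd actually plug in (both names are hypothetical):

```python
def take_notes(audio_chunks, transcribe, distill):
    # Raw words exist only inside a single loop iteration; only the
    # distilled notes survive, so no verbatim transcript is ever
    # written anywhere to be discovered later.
    notes = []
    for chunk in audio_chunks:
        raw_text = transcribe(chunk)      # ephemeral, never persisted
        notes = distill(notes, raw_text)  # fold into the running notes
    return notes
```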
https://archive.is/wPKhf
I would be concerned about transcription errors (e.g. with non-native speakers) where precision matters: engineering, compliance, regulation, legal, etc.
gift link: https://www.nytimes.com/2026/05/09/business/dealbook/ai-note...
Some companies want no records at all, see:
"2028 – A Dystopian Story By Jack Ganssle":
http://www.ganssle.com/articles/2028adystopianstory.htm
Known as "the Rule of 26", this is sometimes given as a reason NOT to keep engineering notebooks, etc. Under Federal Rule 26 you can be sanctioned if you did not volunteer the records before they were requested - including any backups.
From Cornell Law:
LII, Federal Rules of Civil Procedure, Rule 26. Duty to Disclose; General Provisions Governing Discovery
(a) Required Disclosures.
(1) Initial Disclosure.
(A) In General. Except as exempted by Rule 26(a)(1)(B) or as otherwise stipulated or ordered by the court, a party must, without awaiting a discovery request, provide to the other parties:
(i) the name and, if known, the address and telephone number of each individual likely to have discoverable information—along with the subjects of that information—that the disclosing party may use to support its claims or defenses, unless the use would be solely for impeachment;
(ii) a copy—or a description by category and location—of all documents, electronically stored information, and tangible things that the disclosing party has in its possession, custody, or control and may use to support its claims or defenses, unless the use would be solely for impeachment; …
https://www.law.cornell.edu/rules/frcp/rule_26
This was interesting and sent me down a research hole.
General conclusion:
Corporate litigation is mostly a series of self-investigations so that both sides can learn what both sides actually know, given that at the outset neither side knows much about itself OR the other side. At the same time, both sides are trying to stop the other side from getting the judge to order them to do more investigating.
See also the OpenAI vs. Musk trial, where Greg Brockman's diary and Sam Altman's texts have taken center stage.
https://www.nytimes.com/2026/05/09/business/dealbook/ai-note...
Honest question:
Do these systems not share data with the AI servers? Or are they all local (on-site, not on-computer)?
I am totally baffled by the trust people put in these systems, sharing the most obviously private data with them.
Most services have privacy policies that boil down to:
- we promise not to share PII (defined as narrowly as possible)
- we promise not to share payment information except with our payment system
- if you pay us, we promise not to train LLMs on your data
- you agree that everything else can be used for any business purpose, including marketing, intelligence gathering, and "sharing with our 1735 trusted partners".
> I am totally baffled by the trust people put on these systems
The average person doesn't care about online privacy.
They care, but realize that there is no such thing as privacy anymore. The amount of obsession required to maybe maintain some degree of privacy is not something most people are willing to do.
If you are in a high-trust industry like finance or healthcare, the popular ones generally have industry-wide privacy certifications like HIPAA compliance, SOC 2 Type 2, etc.
>> Executives and corporate boards generally expect conversations with their legal team about legal matters to have attorney-client privilege. They lose that protection if they share the same information with outside parties — and it’s possible that an A.I. note taker could have the same effect.
Total oversimplification. The fact is, privilege is a rule entirely in the hands of the court. Every time a new communications technology comes up, someone shouts about privilege, but the courts still accept it. (Telephones, cell phones, emails, IMs, Zoom court - each has had its day in the A-C privilege debate and been accepted.) What matters is that the parties intended and expected the communications to be privileged.
As an example: I had a crim law prof who had been a NYC public defender in the 70s/80s and regularly interviewed clients at Rikers Island. All interviews were listened to by guards, and she said you could even pay to get a copy of the recording. But those interviews were still covered by attorney-client privilege. No court would allow such evidence - which doesn't mean the prison couldn't use it for jail safety. Why does this matter? Because the presence of a third party doesn't mean anything. This isn't magic; an eavesdropper does not nullify the spell. Whether something is privileged depends on the rules of the local jurisdiction, and no jurisdiction has ever followed a simplistic "presence of a third party" rule.
Until someone demonstrates an example of an AI actually leaking privileged information, courts are going to chalk it up as just another electronic tool for recording communications.
Unrelated to the article, but how do you make a page that prevents the mouse scroll wheel from working? That's pretty impressive.
It's not impressive, it's the scummy hiding of news behind a paywall. They simply use some CSS trickery to set the height of the content to the size of your viewport (e.g. capping the body at height: 100vh with overflow: hidden), so there is nowhere to scroll to.
Paywall: can anyone share what the issue is?
Inaccuracy in meeting minutes?
Leaking private info, re security of notes?
I have never used them (don't trust them to accurately capture what is important in a meeting vs just noting what's mentioned), but the concept seems very useful to me.
Reminds me of when I worked for a small shop which had the copier maintenance contract at a local college --- when something went wrong and wasn't properly addressed, my bosses found themselves being held to account with their own words from prior phone calls being quoted back to them verbatim --- which they were mystified by until I explained that the administrators had all come up from the clerical pool and knew shorthand.
The main risk is attorney-client privilege, and it's already been tested in New York: if you transcribe a call, you need to turn over the transcriptions, and they can subpoena the company doing the transcription for the records if you refuse.
They are saying that it could invalidate attorney client privilege because the transcription could technically be available to an outside party.
I suspect what isn't being said by the lawyers is they want to keep attorney client privilege so they can outright lie.
It's in the viewable text on the page.
> A trendy productivity hack, A.I. note takers are capturing every joke and offhand comment in many meetings. They could also potentially waive attorney-client privilege.
By now everyone knows that AI notes that aren't curated by a human will catch every silly thing said in the meeting while omitting the context of tone or body language. Something as simple as "yeah, right" has vastly different meanings depending on how it was said. In a different context, it's already been established that using AI breaks attorney-client privilege [0], and this concern has been raised before by law firms [1][2] and the American Bar Association [3] (you can just hit Escape before the paywall loads to see the full content). A judge will have to weigh in on this one too.
I don't know what's with the wave of paywalled articles that keep making it to the front page without any workaround included in the submission. Even when you coax the text out of the page source, they're not very insightful to begin with.
[0] https://perkinscoie.com/insights/update/federal-court-rules-...
[1] https://www.smithlaw.com/newsroom/publications/the-silent-gu...
[2] https://natlawreview.com/article/when-ai-takes-notes-protect...
[3] https://www.americanbar.org/groups/gpsolo/resources/ereport/...
> It's in the viewable text on the page.
Not for me - there was no viewable text.
People opt in to the panopticon and then discover they have no more secrets. I'm surprised lawyers fall for that as well.
The doofus lawyer probably didn't realise; I wouldn't call it opt-in.
If a lawyer takes notes and puts them on a computer, or a cloud drive, or sends them over email, they are still covered by attorney-client privilege, right? If they use an AI to do it, it's treated more like a third party, no longer covered by the same privilege. If there's no court decision on this, it only takes one bad assumption to screw up by using AI.
To be fair, the attorney-client privilege should be completely technology/medium agnostic. If the intention is to have that info stay between client and attorney, nothing should change this.