This spiel is hilarious in the context of the product this company (https://juno-labs.com/) is pushing – an always on, always listening AI device that inserts itself into your and your family’s private lives.
“Oh but they only run on local hardware…”
Okay, but that doesn't mean every aspect of our lives needs to be recorded and analyzed by an AI.
Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
Have all your guests consented to this?
What happens when someone breaks in and steals the box?
What if the government wants to take a look at the data in there and serves a warrant?
What if a large company comes knocking and makes an acquisition offer? Will all the privacy guarantees still stand in the face of the $$$?
The fundamental problem with a lot of this is that the legal system is absolute: if information exists, it is accessible. If the courts order it, nothing you can do can prevent the information being handed over, even if that means a raid of your physical premises. Unless you encrypt it in a manner resistant to any way you can be compelled to decrypt it, the only way to have privacy is for information not to exist in the first place. It's a bit sad that, as the potential for technology to assist us grows, this may be the limit on how much we can fully take advantage of it.
I do sometimes wish it would be seen as an enlightened policy to legislate that personal private information held in technical devices is legally treated the same as information held in your brain. Especially for people for whom assistive technology is essential (deaf, blind, etc). But everything we see says the wind is blowing the opposite way.
Agreed. While we've tried to think through this and build in protections, we can't pretend there is a magical perfect solution. We do have strong conviction that doing this inside the walls of your home is much safer than doing it in any company's datacenter (I accept that some just don't want this to exist, period, and we won't be able to appease them).
Some of our decisions in this direction:
- Minimize how long we have "raw data" in memory
- Tune the memory extraction to be very discriminating and err on the side of forgetting (https://juno-labs.com/blogs/building-memory-for-an-always-on-ai-that-listens-to-your-kitchen)
- Encrypt storage with hardware-protected keys (we're building on top of the Nvidia Jetson SOM)
We're always open to criticism on how to improve our implementation around this.
It's definitely a strange pitch, because the target audience (the privacy-conscious crowd) is exactly the type who will immediately spot all the issues you just mentioned. It's difficult to think of any privacy-conscious individual who wouldn't want, at bare minimum, a wake word (and more likely just wouldn't use anything like this, period).
The non privacy-conscious will just use Google/etc.
A good example of this is what one of my family members' partner said: "Isn't it creepy that you just talked about something and now you are seeing ads for it? Guess we just have to accept it."
My response was: no, I don't get any of that, because I disable that technology; it is always listening and can never be trusted. There is no privacy in those services.
They did not like that response.
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
That's typically not how these things work. Speech is processed using ASR (automatic speech recognition) and then run through a prompt that checks for appropriate tool calls.
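For what it's worth, a minimal sketch of that kind of pipeline might look like this (the `asr_model` and `llm` objects and the tool names are hypothetical stand-ins, not any particular product's API):

```python
import json

# Toy tool implementations; a real device would integrate actual services.
def add_to_list(item: str) -> None:
    print("added to shopping list:", item)

def set_reminder(text: str, when: str) -> None:
    print("reminder set:", text, "at", when)

TOOLS = {"add_to_list": add_to_list, "set_reminder": set_reminder}

def handle_utterance(audio_chunk, asr_model, llm) -> None:
    # 1. Local ASR turns audio into text; the audio itself is not kept.
    text = asr_model.transcribe(audio_chunk)

    # 2. A prompt asks the model to pick a tool call, or nothing at all.
    prompt = (
        "You route household speech to tools. Reply with JSON like "
        '{"tool": "add_to_list", "args": {"item": "milk"}} '
        'or {"tool": "none"}.\n'
        f"Speech: {text}"
    )
    decision = json.loads(llm.complete(prompt))

    # 3. Only a recognized tool call has any lasting effect; everything
    #    else is simply dropped rather than stored.
    if decision.get("tool") in TOOLS:
        TOOLS[decision["tool"]](**decision.get("args", {}))
```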
I've been meaning to basically make this myself but I've been too lazy lately to bother.
I actually want a lot more functionality from a local only AI machine, I believe the paradigm is absurdly powerful.
Imagine an AI reminding you that you've been on HN too long, offering to save off the comment you're working on for later, and then moving the browser window to a different tab.
Having idle thoughts in the car about things you need to do and being able to just say them out loud, knowing important topics won't be forgotten about.
I understand that for people who aren't neurodiverse, the idea of just forgetting to do something incredibly critical to one's health and well-being isn't something that happens (often), but for plenty of other people a device that just helps them remember important things can be dramatically life-changing.
> Imagine an AI reminding you that you've been on HN too long, offering to save off the comment you're working on for later, and then moving the browser window to a different tab.
> Having idle thoughts in the car about things you need to do and being able to just say them out loud, knowing important topics won't be forgotten about.
> I understand that for people who aren't neurodiverse, the idea of just forgetting to do something incredibly critical to one's health and well-being isn't something that happens (often), but for plenty of other people a device that just helps them remember important things can be dramatically life-changing.
Those don't sound like things that you need AI for.
It really is a prosthetic for minds that struggle to organize themselves.
I agree. I also don't really have an ambient-assistant problem. My phone is always nearby and Siri picks up wake words well (or I just hold the power button).
My problem is Siri doesn't do any of this stuff well. I'd really love to just get it out of the way so someone can build it better.
Some of the more magical moments we've had with Juno are automatic shopping-list creation, where saying "oh no, we are out of milk and eggs" out loud becomes a shopping list without having to remember to tell Siri, and event tracking around the kids: "Don't forget next Thursday is early pickup." A nice freebie is moving the wake word to the end: "What's the weather today, Juno?" is much more natural than a prefixed wake word.
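As a toy illustration of a trailing wake word (a sketch under assumptions, not Juno's actual implementation, which would presumably work on token timestamps rather than a finished string):

```python
import re

WAKE_WORD = "juno"

def trailing_wake_query(utterance: str) -> str | None:
    """Return the query text if the wake word closes the utterance."""
    words = re.findall(r"[a-z']+", utterance.lower())
    if WAKE_WORD in words[-2:]:  # wake word at (or just before) the end
        return " ".join(w for w in words if w != WAKE_WORD)
    return None  # no wake word: the utterance is ignored

print(trailing_wake_query("What's the weather today, Juno?"))
# -> "what's the weather today"
```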
They seem quite honest with who they are and how they do what they do.
> Are you okay with private and intimate conversations and moments (including of underage family members) being saved for replaying later?
One of our core architecture decisions was to use a streaming speech-to-text model. At any given time, about 80ms of actual audio is in memory and about 5 minutes of transcribed audio (text) is in memory (this helps the STT model know the context of the audio for higher transcription accuracy).
Of that 5-minute transcript window, anything that doesn't become a memory is forgotten, so only selected extracted memories are durably stored. Currently we store the transcript with the memory (this was a request from our prototype users, to help them build confidence in the transcription accuracy), but we'll continue to iterate based on feedback on whether this is the correct decision.
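In rough Python, the rolling window they describe might look something like this (a sketch of the stated design, not their code; `extractor` stands in for the memory-extraction model):

```python
import time
from collections import deque

TEXT_WINDOW_SECONDS = 5 * 60  # ~5 minutes of transcript kept as STT context

class RollingTranscript:
    def __init__(self) -> None:
        self.segments: deque = deque()  # (monotonic_timestamp, text) pairs

    def add(self, text: str) -> None:
        now = time.monotonic()
        self.segments.append((now, text))
        # Anything older than the context window is forgotten outright.
        while self.segments and now - self.segments[0][0] > TEXT_WINDOW_SECONDS:
            self.segments.popleft()

    def extract_memories(self, extractor) -> list:
        window = " ".join(text for _, text in self.segments)
        # A discriminating extractor errs on the side of returning nothing;
        # whatever it doesn't return simply ages out of the deque.
        return extractor(window)
```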
I agree with the core premise that the big AI companies are fundamentally driven towards advertising revenue and other antagonistic but profit-generating functionality.
Also agree with paxys that the social implications here are deep and troubling. Having ambient AI in a home, even if it's caged to the home, has tricky privacy problems.
I really like the explorations of this space done in Black Mirror's The Entire History of You[1] and Ted Chiang's short story The Truth of Fact, the Truth of Feeling[2].
My bet is that the home and other private spaces almost completely yield to computer surveillance, despite the obvious problems. We've already seen this happen with social media and home surveillance cameras.
Just as spaces in Chiang's story were 'invaded' by writing, AI will fill the world, and those opting out will occupy the same marginal positions as dumb-phone users and people without home cameras or televisions.
Interesting times ahead.
1. https://en.wikipedia.org/wiki/The_Entire_History_of_You
2. https://en.wikipedia.org/wiki/The_Truth_of_Fact,_the_Truth_o...
> There needs to be a business model based on selling the hardware and software, not the data the hardware collects. An architecture where the company that makes the device literally cannot access the data it processes, because there is no connection to access it through.
Genuine question: is this business model still feasible? It's hard to imagine anyone other than Apple sustaining a business off of hardware; they have the power to spit out full hardware refreshes every year. How do you keep a team of devs alive on the seemingly one-and-done cash influx of first-time buyers?
This strikes me as a pretty weak rationalization for "safe" always-on assistants. Even if the model runs locally, there's still a serious privacy issue: unwitting victims having everything they say recorded.
Friends at your house who value their privacy probably won’t feel great knowing you’ve potentially got a transcript of things they said just because they were in the room. Sure, it's still better than also sending everything up to OpenAI, but that doesn’t make it harmless or less creepy.
Unless you’ve got super-reliable speaker diarization and can truly ensure only opted-in voices are processed, it’s hard to see how any always-listening setup ever sits well with people who value their privacy.
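The shape of such a filter is simple even if the reliability isn't. A hedged sketch, assuming a diarization model that yields (speaker embedding, text) pairs; the threshold is a made-up number, and tuning it is exactly the hard part:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def keep_opted_in(segments, enrolled, threshold: float = 0.75) -> list:
    """Keep only segments whose speaker matches an enrolled (opted-in) voice.

    Too loose a threshold and guests leak through; too tight and family
    members get dropped. That tradeoff is why "only opted-in voices" is
    hard to truly guarantee.
    """
    return [
        text
        for emb, text in segments
        if any(cosine(emb, e) >= threshold for e in enrolled)
    ]
```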
I wonder if the answer is that it is stored and processed in a way that a human can't access or read: somehow it's encrypted and unreadable, but tokenized so it can still be processed. I don't know how, but it feels possible.
We give an overview of the current memory architecture at https://juno-labs.com/blogs/building-memory-for-an-always-on...
This is something we call out under the "What we got wrong" section. We're currently collecting an audio dataset that should help us create a speech-to-text (STT) model that incorporates speaker identification, and that tag will be woven into the core of the memory architecture.
> The shared household memory pool creates privacy situations we're still working through. The current design has everyone in the family sharing the same memory corpus. Should a child be able to see a memory their parents created? Our current answer is to deliberately tune the memory extraction to be household-wide with no per-person scoping, because a kitchen device hears everyone equally. But "deliberately chose" doesn't mean "solved." We're hoping our in-house STT will allow us to do per-person memory tagging, and then we can experiment with scoping memories to certain people or groups of people in the household.
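To picture where that could go, here is a hypothetical memory record once speaker tags exist (a sketch, not their actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str                    # the extracted memory itself
    transcript: str              # kept today so users can audit accuracy
    speaker: str | None = None   # to be filled in by the speaker-tagging STT
    visible_to: set = field(default_factory=lambda: {"household"})

def can_see(memory: Memory, member: str) -> bool:
    # Current behavior: everything is household-wide. Per-person or
    # per-group scoping only becomes meaningful once `speaker` is reliable.
    return "household" in memory.visible_to or member in memory.visible_to
```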
Heyas! Glad to see someone making this.
I wrote a blog post about this exact product space a year ago. https://meanderingthoughts.hashnode.dev/lets-do-some-actual-...
I hope y'all succeed! The potential use cases for locally hosted AI dwarf what can be done with SaaS.
I hope the memory crisis isn't hurting you too badly.
Yes! We see a lot of the same things that really should have been solved by the first wave of assistants. Your _Around The House_ section reads similar to a lot of our goals, though we would love the system to be much more proactive than current assistants.
Feel free to reach out. Would love to swap notes and send you a prototype.
> I hope the memory crisis isn't hurting you too badly.
Oh man, we've had to really track our bill of materials (BOM) and average selling price (ASP) estimates to make sure everything stays feasible. Thankfully these models quantize well and the size-to-intelligence frontier is moving out all the time.
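The arithmetic behind that pressure is straightforward: weight memory scales linearly with quantization (the 8B model here is a made-up example, and the KV cache and activations come on top):

```python
def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    # params_billions * 1e9 weights * (bits / 8) bytes each, expressed in GB
    return params_billions * bits_per_weight / 8

for bits in (16, 8, 4):
    print(f"{bits}-bit: {weight_memory_gb(8, bits):.0f} GB")
# 16-bit: 16 GB, 8-bit: 8 GB, 4-bit: 4 GB
```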
The article is forgetting about Anthropic, which currently has the best agentic programmer and was the backbone for the recent OpenClaw assistants.
True, but we focused on hardware-embodied AI assistants (smart speakers, smart glasses, etc.), as those are the ones we believe will soon start leaving wake words behind and moving towards an always-on interaction design. The privacy implications of an always-listening smart speaker are orders of magnitude higher than those of OpenClaw, which you intentionally interact with.
This. Kids already have tons of those gadgets on them. Previously I only really had to worry about a cell phone, so even if someone was visiting it was a simple case of "plop all electronics here," but now with glasses I'm not even sure how to reasonably approach this, short of not allowing it period. Eh, brave new world.
Both are Pandora's boxes. OpenClaw has access to your credit cards, social media accounts, etc. by default (i.e., if you have them saved in your browser on the account that OpenClaw runs on, which most people do).
How long was web search objective, nice, and helpful? Ten years? Things are happening faster now, so we'll get at most five years, total, of AI prompts pretending they want to help.
This was the inevitable endpoint of the current AI unit economics. When inference costs are this high and open-source models are compressing SaaS margins to zero, companies can't survive on standard subscription models. They have to subsidize the compute by monetizing the user's context window. The real liability isn't just ads; it's what happens when autonomous agents start making financial decisions influenced by sponsored retrieval data.
I think local inference is great for many things, but this stance seems to conflate two claims: that you can't have privacy with server-side inference, and that you can't have nefariousness with client-side inference. A device that does 100% client-side inference can still phone home unless it's disconnected from the internet. Most people will want internet-connected agents, right? And server-side inference can be private if engineered correctly (strong zero-retention guarantees, maybe even homomorphic encryption).
> Every company building your AI assistant is now an ad company
Apple? [1]
[1] https://www.apple.com/apple-intelligence/
Yes, Apple is an ad company. Their annual ad revenue is in the billions, and climbing every year.
I guess it goes to show that the real value is in the broader market, to a certain extent: if they can't just sell people the power, they end up just earning a commission for helping someone else sell a product.
The level of trust I have in a promise made by any existing AI company that such a device would never phone home: 0.
This isn't a technology issue. Regulation is the only sane way to address the issue.
For once, we (as the technologists) have a free translator into layman-speak via the frontier LLMs, which can be an opportunity to educate the masses about the exact world on the horizon.
> This isn't a technology issue. Regulation is the only sane way to address the issue.
It is actually both a technology and regulation/law issue.
What can be solved with the former should be; what is left, solve with the latter, with the best case being where both consistently and redundantly uphold our rights.
I want legal privacy protections, consistent with privacy preserving technology. Inconsistencies create technical and legal openings for nefarious or irresponsible powers.
Ads in AI should be banned right now. We need to learn from mistakes of the internet (crypto, facebook) and aggressively regulate early and often before this gets too institutionalized to remove.
They did learn. That's why they are adding ads.
Boomers in government would be clueless on how to properly regulate and create correct incentives. Hell, that is still a bold ask for tech and economist geniuses with the best of intentions.
Would that be the same cohort of boomers jamming LLMs up our collective asses? So they don’t understand how to regulate a technology they don’t understand, but fucking by golly you’re going to be left behind if you don’t use it?
This is like a shitty Disney movie.
It's mostly SV grifters who shoved LLMs up our asses. They then get in cahoots with boomers in the government to create policies and "investment schemes" that inflate their stock in a Ponzi-like fashion and regulate away competition. Why do you think Trump has some no-name crypto firm, why Thiel has Vance as his whipping boy, or why Elon spent a fortune trying to get Trump to win? This is a multiparty thing, as most politicians are heavily bought and paid for.
Ads (at least in the classical, pre-AI sense) are orders of magnitude better than preventive laws.
I trust corporations far far far less than government or lawmakers (who I also don’t trust). I know corporations will use ads in the most manipulative and destructive manner. Laws may be flawed but are worth the risk.
Who would buy OpenAI's spy device? I think a lot of public discourse and backlash about the greedy, anticompetitive, and exploitative practices of the silicon valley elite have gone mainstream and will hopefully course correct the industry in time.
I'm continually surprised by how many people will buy and wear Meta's AI spy sunglasses.
If there's a market for a face camera that sends everything you see to Meta, there's probably a market for whatever device OpenAI launches.
> ...exploitative practices of the silicon valley elite have gone mainstream and will hopefully course correct the industry in time.
I have little hope that is true. Don't expect privacy laws and boycott campaigns. That very same elite controls the law via bribes to US politicians (and indirectly the laws of other countries via those politicians' threats; see the ongoing watering down of EU laws). They also directly control public discourse via ownership of the media and mainstream communication platforms. What backlash can they really suffer?
Always on is incompatible with data protection rights, such as the GDPR in Europe.
With cloud-based inference, we agree; this is just one more benefit of doing everything with "edge" inference (on-device, inside the home) as we do with Juno.
Pretty sure that a) it's not a matter of whether you agree, and b) the GDPR still considers always-on listening to be something the affected user has to actively consent to. Since someone in a household may not realize that another person's device is "always on" and may even lack the ability to consent, such as a child, you are probably going to find that it is patently illegal according to both the letter and the spirit of the law.
Is your argument that these affected parties are not users and that the GDPR does not require their consent?
Don't take this as hostility. I am 100% for local inference. But that is the way I understand the law, and I do think it benefits us to hold companies to a high standard. Because even such a device could theoretically be used against a person, or could have other unintended consequences.
It's interesting to me that there seems to be an implicit line being drawn around what's acceptable and what's not between video and audio.
If there's a camera in an AI device (like Meta Ray Ban glasses) then there's a light when it's on, and they are going out of their way to engineer it to be tamper resistant.
But audio - this seems to be on the other side of the line. Passively listening ambient audio is being treated as something that doesn't need active consent, flashing lights or other privacy preserving measures. And it's true, it's fundamentally different, because I have to make a proactive choice to speak, but I can't avoid being visible. So you can construct a logical argument for it.
I'm curious how this will really go down as these become pervasively available. Microphones are pretty easy to embed almost invisibly into wearables; a lot of them already have them. They don't use a lot of power, so it won't be too hard to just have them always on. If we settle on this as the line, what's it going to mean that everything you say, everywhere, will be presumed recorded? Is that OK?
> Passively listening ambient audio is being treated as something that doesn't need active consent
That’s not accurate. There are plenty of states that require everyone involved to consent to a recording of a private conversation. California, for example.
Voice assistants today skirt around that because of the wake word, but always-on recording obviously negates that defense.
AI "recording" software has never been tested in court, so no one can say what the legality is. If we are having a conversation (in a two party consent state) and a secret AI in my pocket generates a text transcript of it in real time without storing the audio, is that illegal? What about if it just generates a summary? What about if it is just a list of TODOs that came out of the conversation?
Speech-to-text has gone through courts before. It's not a new technology. You're out of luck on sneaking the use of speech-to-text in 2-party consent states.
Of course it's new! Now it's "AI"! /s