Regarding: “Find the emails of everyone who reacted to my latest LinkedIn post and send personalized outreach”
A forewarning and maybe advice to apply some heuristics — on another HN thread I received the brilliant advice to prefix my first name on LinkedIn with an emoji.
Now, whenever I get a message on LinkedIn that starts with “Hello [emoji] [first name]” I know it was automated spam, and I reflexively block the sender.
It seems like this collection of tools gives you a ton of lethal-trifecta risk for prompt injection attacks. How have you mitigated this—are you doing something like CaMeL?
We do a lot of processing on our backend to prevent against prompt injection, but there definitely still is some risk. We can do better on as is always the case.
Need to read up on how CaMel does it. Do you have any good links?
Here’s a paper offering a survey of different mitigation techniques, including CaMeL. Design Patterns for Securing LLM Agents against Prompt Injections (2025):
https://arxiv.org/abs/2506.08837
what's interesting about this one is that their claims about what makes slashy different are almost entirely wrong... almost all the big models let you connect and do all of the things mentioned. Not understanding MCP at all is hilarious. If an agent has toosl to access multiple data sources it will make calls across them to resolve things, not sure whatever claiming but there's no way you are actually indexing at scale and probably doing just the exact same thing.
the demo video is literally just single thread tool calling to external sources. Indexing data is also a really complex problem more than just adding some elastic search to gmail which also you will find does not scale easily, if that's even what you're doing.
How does the scraper work? e.g. LinkedIn aggressively blocks scraping and you'd need to be logged in to see most things you'd care about. How do you handle that?
> mostly uses text, no image, no desktop control (?)
hardly can see what this app brings. also, it is paid and requests are routed to someone else? shouldn't this be free, local, and with bring-your-own–key already with things like ollama/llama.cpp?
Find that the quality of them currently aren't there yet for a general system. They tend to be designed just to use that singular app instead of to be used in parallel with other apps.
But you are compatible with MCP, right? Otherwise users are going to miss out on the MCP ecosystem. And you are going to be spending all your time developing your own versions of MCP plugins. Wouldn't it be easier to improve the existing ones?
MCP is what you use to make tools you own compatible with agents (like Claude Code) that you don't --- or vice/versa. It's not doing anything useful in the scenario where you own both the tool calling code and the agent.
Are you sure they want to provide access to arbitrary random tools other people wrote? It's easy enough to add MCP support to native tool calls, but I don't know that that's a great idea given their problem domain.
that just sounds like you have no idea what MCP is, I don't even like MCPs but I can't even understand what angle you are coming from unless they specifically mean using external MCPs instead of your own, since it is you know open source...
> Out of all the AI slopups I've seen, y'all might the worst. Have fun, clowns.
You can't attack others like this here. We've banned the account.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
Do you worry that AI browser agents (comet etc) will eat this market of light integrations? Since the user is already logged in to various services like linkedin/email etc it's easy for tasks to be scripted together - or fully prompted.
also what did you use to make the video? looks better than most looms.
Congratulations on the launch.
I think it's a smart move to not use MCP here. Because your LLM really needs to understand how the different integrations work together.
Question: you say you do semantics search. If I understand correctly that means you must somehow index all data (Gmail, GDrive, ...) otherwise the AI would have to "download/scan" thousands of files each time you ask a question.
So how do you do the indexing?
For some background: I'm working on something similar. My clients are architects. They have about 300k files for just one building. With an added 50k issues and a couple of thousand emails. And don't forget all subcontractors.
Anything that gives any sort of system access to sensitive data and lets agents carry out actions on basically unchecked input sounds like a complete security and privacy nightmare by design.
You give it access, it grabs your ssh keys and exfiltrate to some third party server. That is not the access the user gave to your platform but it is what it would be capable of doing.
Ohh we don't give it computer use access or anything like that. We inject tokens post tool call, so to protect users from the agent doing anything malicious.
Seems to me that these kind of systems, by design, tick all three boxes. I've had many discussions with people that let agent systems read and act on their incoming email for instance, and I think it's utter insanity from a security perspective.
Slashy is great and the founders are so talented. I've been following Pranjali on Twitter for a while -- they've got great weekly videos where they keep releasing new features.
The team ships fast and I'm excited to see where they go
I really hate to be the curmudgeon here but won't foundation models end up having their own AI workflows like the GPT store but with MCP?
I could really envision saving an 'AI Workflow' template with integrated MCP clients that will balloon once adoption is reached. Right now adoption is low so its not a priority for them, once it is, they will tack it on.
I really wish this the best of luck its a great concept, but surely you must be thinking ahead to plan for this situation.
We do think there's a good chance they'll make their own version.
But we view it as a Dropbox situation where, the foundation models much like Apple and Google know that this will be the future, but are a bit slow to act on it.
I would argue Dropbox was a new product category rather than a feature and as such, was a much deeper strategic decision to enter that category than add a feature. My only recommendation would be to focus on deep complex workflows (E.g. N8N style) with extensive integrations or build out a developer community so you can build some data lock in, because if they are surface level templates surely these will get easily disrupted.
As an attorney (and this is not legal advice), I don't think it's quite that simple. The court held that the CFAA does not proscribe scraping of pages to which the user already has access and in a way that doesn't harm the service, and thus it's not a crime. But there are other mechanisms that might impact a scraper, such as civil liability, that have not been addressed uniformly by the courts yet. And if you scrape in such a way that does harm the operator (e.g. by denying service), it might still be unlawful, even criminal.
There's a relevant footnote in the cited HiQ Labs v. LinkedIn case:
"LinkedIn’s cease-and-desist letter also asserted a state common law claim of trespass to chattels. Although we do not decide the question, it may be that web scraping exceeding the scope of the website owner’s consent gives rise to a common law tort claim for trespass to chattels, at least when it causes demonstrable harm."
They also said: "Internet companies and the public do have a substantial interest in thwarting denial-of-service attacks and blocking abusive users, identity thieves, and other ill-intentioned actors."
It's a good idea to take legal conclusions from media sites with a grain of salt. Same goes for any legal discussion on social media, including HN. If you want a thorough analysis of legal risk--either for your business or for personal matters--hire a good lawyer.
Have you actually tried this approach? I’m curious as to the result, especially when you took it to your lawyer. Not a contract review but a business practice risk evaluation.
what a nonsense. they explicitly say "do not scrape us, unless we approve". they put paywalls and captchas. their service is literally selling access to users data.
now you scraping it. this is direct violation and direct harm to their business, despite their explicit statements for you to stop.
you're building a tool that is designed to sink its tentacles into peoples' most personal accounts and take unsupervised automated actions with them, using a technology that has serious, well known, documented security issues. you haven't demonstrated any experience with, awareness of, or consideration for the security issues at hand, so the ideal amount of code to share would likely be all of it.
i actually really like your product for what it's worth. don't listen to the haters. hackers build things.
i just won't use it, and nobody should, unless they can understand exactly how it works and reason for themselves about the risks they are taking. you clearly work hard and care deeply about what you are building, and it will be very useful. but it has the potential to cause widespread harm, no matter how trustworthy you are, how much you care about it, or what your intentions are.
with respect to user security and privacy, doing your best is not much better than yolo security. the minimum standard should be to research the threat landscape, study the state of the art in methods to mitigate those threats, implement them, and test them thoroughly, yourselves and through vendors. iterate through that process continuously, alongside your development. it will never end. or, you can open source it and the internet does this for you for free. build something people love, grow traction, convert that to money. THEN figure out how to make money from them.. not the other way around. or, more likely, some combination of all of the above.
someone else linked you to simon wilson's lethal trifecta page, i would absolutely start there, and read everything linked as well. pangea and spectreops both do good work in the llm pentesting space, i'm sure there are more.
So you do have access to all the data. It's not really a great look if you're lying about what you have access to, and this is a technical audience, it's not like we don't know how agents work.
Not really and this is totally not related to Slashy, it just look like the same as the other 20 Slashys launched last month. Launch HNs used to be exciting.
Maybe HN/ycombinator is just not interesting anymore. I saw some of you commenting that this might be similar to the famous Dropbox situation. That could not be more delusional and representative of what HN became, a meme of itself.
The strategy is throw a little bit of money at everything, hope one of them will become a unicorn, everyone gets richer.
Rinse and repeat.
You're right though ... these YC batches are not what they used to be. AI is hot right now, so it seems YC is throwing money at anything that seems like it can at least actually do something (not that it is necessarily good). If that product doesn't get hot, who cares? Plenty more money to go around on the next batch, because one of them probably will!
Regarding: “Find the emails of everyone who reacted to my latest LinkedIn post and send personalized outreach”
A forewarning and maybe advice to apply some heuristics — on another HN thread I received the brilliant advice to prefix my first name on LinkedIn with an emoji.
Now, whenever I get a message on LinkedIn that starts with “Hello [emoji] [first name]” I know it was automated spam, and I reflexively block the sender.
It seems like this collection of tools gives you a ton of lethal-trifecta risk for prompt injection attacks. How have you mitigated this—are you doing something like CaMeL?
We do a lot of processing on our backend to prevent against prompt injection, but there definitely still is some risk. We can do better on as is always the case.
Need to read up on how CaMel does it. Do you have any good links?
That’s a pretty scary answer, to be honest.
Regardless, here’s the CaMeL paper. Defeating Prompt Injections by Design (2025): https://arxiv.org/abs/2503.18813
Here’s a paper offering a survey of different mitigation techniques, including CaMeL. Design Patterns for Securing LLM Agents against Prompt Injections (2025): https://arxiv.org/abs/2506.08837
And here’s a high-level overview of the state of prompt injection from 'simonw (who coined the term), which includes links to summaries of both papers above: https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/
Thanks!
Don't worry have worked with a few friends experienced in prompt injection to help with the platform.
But will read these too :)
Re: CaMeL, Jesus, why not build a UI with explicit access controls at that point?
because you can't enjoy your pina coladas on the beach if your phone keeps buzzing every 10 seconds.
what's interesting about this one is that their claims about what makes slashy different are almost entirely wrong... almost all the big models let you connect and do all of the things mentioned. Not understanding MCP at all is hilarious. If an agent has toosl to access multiple data sources it will make calls across them to resolve things, not sure whatever claiming but there's no way you are actually indexing at scale and probably doing just the exact same thing.
Why wouldn't we be indexing at scale?
the demo video is literally just single thread tool calling to external sources. Indexing data is also a really complex problem more than just adding some elastic search to gmail which also you will find does not scale easily, if that's even what you're doing.
How does the scraper work? e.g. LinkedIn aggressively blocks scraping and you'd need to be logged in to see most things you'd care about. How do you handle that?
We don't scrape LinkedIn ourselves, instead work with large data providers who do live scraping.
Have a waterfall approach in case one source doesn't have the requested information!
You should make Slashy a chrome extension for BrowserOS (https://github.com/browseros-ai/BrowserOS), then it can read/extract Linkedin using user credentials :)
Hmm we've been considering a chrome extension
This
who is it? give them a plug, seems like it works well.
Unfortunately have an NDA with them so can't disclose (most of our providers are still in stealth)
> we build own MCP
> we use existing models via their API
> we use existing tools/services/platforms
> ChatGPT/OpenWebUI-like web interface
> mostly uses text, no image, no desktop control (?)
hardly can see what this app brings. also, it is paid and requests are routed to someone else? shouldn't this be free, local, and with bring-your-own–key already with things like ollama/llama.cpp?
We actually don't use MCP!
We just make our own tools in-house :)
Hmm the local open source model is something we've thought of, but currently haven't found open source models to be usable
Why __don't__ you use MCP?
Find that the quality of them currently aren't there yet for a general system. They tend to be designed just to use that singular app instead of to be used in parallel with other apps.
But you are compatible with MCP, right? Otherwise users are going to miss out on the MCP ecosystem. And you are going to be spending all your time developing your own versions of MCP plugins. Wouldn't it be easier to improve the existing ones?
It's a bit more complicated. We have a full custom single agent architecture, sort of like Manus that isn't fully compatible with MCP
MCP is what you use to make tools you own compatible with agents (like Claude Code) that you don't --- or vice/versa. It's not doing anything useful in the scenario where you own both the tool calling code and the agent.
The question is whether the tools are limited to what they offer.
Are you sure they want to provide access to arbitrary random tools other people wrote? It's easy enough to add MCP support to native tool calls, but I don't know that that's a great idea given their problem domain.
that just sounds like you have no idea what MCP is, I don't even like MCPs but I can't even understand what angle you are coming from unless they specifically mean using external MCPs instead of your own, since it is you know open source...
[flagged]
It's useful for quality.
For example we can read and attach pdfs to gmail which not a lot of people can, since we have our own internal storage api.
oh so now we are flagging people that think not having MCP support is bad?
HN is out of control.
[flagged]
> Out of all the AI slopups I've seen, y'all might the worst. Have fun, clowns.
You can't attack others like this here. We've banned the account.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
We don't use open source models as of right now!
nice launch!
Do you worry that AI browser agents (comet etc) will eat this market of light integrations? Since the user is already logged in to various services like linkedin/email etc it's easy for tasks to be scripted together - or fully prompted.
also what did you use to make the video? looks better than most looms.
Oh I used Screen Studio :)
Thanks for the compliment.
Not worried about browser agents, as we actually have pretty deep integrations (we include semantic search as well as user action graphs).
Naturally apis will always be better than browsers as apis are computer languages and browsers are human language.
The sale of Browser Company today too I think shows there's not that much of a ceiling for agentic browsers.
Congratulations on the launch. I think it's a smart move to not use MCP here. Because your LLM really needs to understand how the different integrations work together.
Question: you say you do semantics search. If I understand correctly that means you must somehow index all data (Gmail, GDrive, ...) otherwise the AI would have to "download/scan" thousands of files each time you ask a question. So how do you do the indexing?
For some background: I'm working on something similar. My clients are architects. They have about 300k files for just one building. With an added 50k issues and a couple of thousand emails. And don't forget all subcontractors.
Would Slashy be able to handle that?
Not sure we haven't ever done volume that size for one person, but in theory should be able too!
We use indexing similar to glean (but a bit less elegant without the ACLs)
Can talk more about your use case if you'd like to.
Send me a text at 262-271-5339
Here's a fun launch video we made as well :)
https://x.com/raidingAI/status/1955890345927172359
Anything that gives any sort of system access to sensitive data and lets agents carry out actions on basically unchecked input sounds like a complete security and privacy nightmare by design.
Why do you think privacy?
Security I understand, but if you consent to giving it access would it not be fine for privacy.
You give it access, it grabs your ssh keys and exfiltrate to some third party server. That is not the access the user gave to your platform but it is what it would be capable of doing.
Ohh we don't give it computer use access or anything like that. We inject tokens post tool call, so to protect users from the agent doing anything malicious.
I'm thinking about what this post explains more clearly than I can:
https://simonwillison.net/2025/Jun/16/the-lethal-trifecta
Seems to me that these kind of systems, by design, tick all three boxes. I've had many discussions with people that let agent systems read and act on their incoming email for instance, and I think it's utter insanity from a security perspective.
> We use a single agent architecture (as we found this reduces hallucinations)
Do you have a benchmark for this? in my experience, hallucinations have nothing to do with what framework you use.
We did a lot of internal testing but no official benchmark.
We find that the less the agent knows, the more it hallucinates
Slashy is great and the founders are so talented. I've been following Pranjali on Twitter for a while -- they've got great weekly videos where they keep releasing new features.
The team ships fast and I'm excited to see where they go
Not sure if we've ever spoken before but appreciate the support <3
I really hate to be the curmudgeon here but won't foundation models end up having their own AI workflows like the GPT store but with MCP?
I could really envision saving an 'AI Workflow' template with integrated MCP clients that will balloon once adoption is reached. Right now adoption is low so its not a priority for them, once it is, they will tack it on.
I really wish this the best of luck its a great concept, but surely you must be thinking ahead to plan for this situation.
Yep!
We do think there's a good chance they'll make their own version.
But we view it as a Dropbox situation where, the foundation models much like Apple and Google know that this will be the future, but are a bit slow to act on it.
I would argue Dropbox was a new product category rather than a feature and as such, was a much deeper strategic decision to enter that category than add a feature. My only recommendation would be to focus on deep complex workflows (E.g. N8N style) with extensive integrations or build out a developer community so you can build some data lock in, because if they are surface level templates surely these will get easily disrupted.
Yep!
That's our goal long term to get better templates
How much time do you spend in gmail now? have you continued to track that?
Now probably ~1 hour or so
This is quite useful where has this been all my life
Email drafting is decent since it reads my drive, previous emails, and everything else so it has a good bit of context
Nice!
Let me know how it goes, and feel free to text/call me at 262-271-5339 with any feedback
> scraping LinkedIn profiles
is this legal? last time I checked linkedin.com/robots.txt do not allow scraping, unless explicit approval from linkedin
If it is publicly available information it is legal to scrape it, regardless of what robots.txt says.
See: https://www.webspidermount.com/is-web-scraping-legal-yes/
As an attorney (and this is not legal advice), I don't think it's quite that simple. The court held that the CFAA does not proscribe scraping of pages to which the user already has access and in a way that doesn't harm the service, and thus it's not a crime. But there are other mechanisms that might impact a scraper, such as civil liability, that have not been addressed uniformly by the courts yet. And if you scrape in such a way that does harm the operator (e.g. by denying service), it might still be unlawful, even criminal.
There's a relevant footnote in the cited HiQ Labs v. LinkedIn case:
"LinkedIn’s cease-and-desist letter also asserted a state common law claim of trespass to chattels. Although we do not decide the question, it may be that web scraping exceeding the scope of the website owner’s consent gives rise to a common law tort claim for trespass to chattels, at least when it causes demonstrable harm."
They also said: "Internet companies and the public do have a substantial interest in thwarting denial-of-service attacks and blocking abusive users, identity thieves, and other ill-intentioned actors."
It's a good idea to take legal conclusions from media sites with a grain of salt. Same goes for any legal discussion on social media, including HN. If you want a thorough analysis of legal risk--either for your business or for personal matters--hire a good lawyer.
Smart
Or run your legal questions through a frontier model and then have a lawyer verify the answers. You can save a lot of money and time.
Yes, all LLM caveats apply. Due your diligence. But they are quite good at this now.
Have you actually tried this approach? I’m curious as to the result, especially when you took it to your lawyer. Not a contract review but a business practice risk evaluation.
Some context from coverage of GPT 5:
https://legaltechnology.com/2025/08/08/openai-launches-gpt-5...
https://www.artificiallawyer.com/2025/08/08/gpt-5-tops-harve...
Remember when "asking for a friend" was a thing?
Today's expression is "I asked a friend". You can try that when talking to your lawyer about your latest ChatGPT — they might still believe you.
Hmm this is a good idea too
what a nonsense. they explicitly say "do not scrape us, unless we approve". they put paywalls and captchas. their service is literally selling access to users data.
now you scraping it. this is direct violation and direct harm to their business, despite their explicit statements for you to stop.
you loose the case, it is clear as day.
what a nonsense. this is equivalent of "sovereign citizens" online. go and try it, and get yourself into jail.
Do not confuse strong language with strong argument. Yours is the former not the latter.
LinkedIn has api. So why to scrap?
because they are pulling what they are not supposed to. they are doing it illegally. that's why.
> they are doing it illegally.
ToS aren't real laws, mate.
Edit: oops, just saw a message from the creator of this thing saying he gets the data in the most illegal possible ways. They have no salvation.
It is possible to do what they propose legally tho the "agent" is just the users computer.
ToS are leagally binding contracts. there are there for a reason.
contracts are not laws themselves. but correctly done ToS (I bet LinkedIn does) hold very real legal power.
We get our data from third party data vendors who we assume have gotten explicit approval from linkedin!
You assume! Such due diligence!
Unfortunately not able to get into their codebase
Or yours...
What would you like to see?
Can tell you :)
you're building a tool that is designed to sink its tentacles into peoples' most personal accounts and take unsupervised automated actions with them, using a technology that has serious, well known, documented security issues. you haven't demonstrated any experience with, awareness of, or consideration for the security issues at hand, so the ideal amount of code to share would likely be all of it.
Fair enough makes sense to not have trust!
We like to believe we're pretty trustworthy, and do our best to make everything secure.
i actually really like your product for what it's worth. don't listen to the haters. hackers build things.
i just won't use it, and nobody should, unless they can understand exactly how it works and reason for themselves about the risks they are taking. you clearly work hard and care deeply about what you are building, and it will be very useful. but it has the potential to cause widespread harm, no matter how trustworthy you are, how much you care about it, or what your intentions are.
with respect to user security and privacy, doing your best is not much better than yolo security. the minimum standard should be to research the threat landscape, study the state of the art in methods to mitigate those threats, implement them, and test them thoroughly, yourselves and through vendors. iterate through that process continuously, alongside your development. it will never end. or, you can open source it and the internet does this for you for free. build something people love, grow traction, convert that to money. THEN figure out how to make money from them.. not the other way around. or, more likely, some combination of all of the above.
someone else linked you to simon wilson's lethal trifecta page, i would absolutely start there, and read everything linked as well. pangea and spectreops both do good work in the llm pentesting space, i'm sure there are more.
Looks nice, but little hesitant to give access to emails. What model is being used on backend ?
We use Claude/OpenAI right now with Groq for tool routing!
I'd say maybe to get comfortable try out the non email features first, but we don't have access to any of your data.
How do you not have access to the data if I give you access to my email?
The agent does!
We don't, and agent pulls in data only when executing queries
Does the agent run on hardware you control?
Runs on AWS for now!
So you do have access to all the data. It's not really a great look if you're lying about what you have access to, and this is a technical audience, it's not like we don't know how agents work.
Sad state of current launch HNs where OP don't even know they are talking to hackers, not people that get easily impressed.
So you have access to the users Gmail, not "the agent".
Hmm ig yeah I can be more granular.
Yeah we store our user credentials on our side and manage them. Along with refreshing tokens and so forth
This is horrifying. Everyone should be horrified.
I think they mean OAuth credentials (all these APIs use OAuth unless you're doing something terribly wrong).
Yep we're using Oauth, so it's easy for a user to disconnect.
Or an alt/throwaway email...
ooh good idea!
> connects to apps and does tasks
Gosh, I hope it also does things too!
haha, it certainly does :)
Honestly, what have HN become? These AI projects are looking more and more like shitcoins and their creators are shitcoin shillers.
Have you tried out Slashy?
What makes you say that
Not really and this is totally not related to Slashy, it just look like the same as the other 20 Slashys launched last month. Launch HNs used to be exciting.
Maybe HN/ycombinator is just not interesting anymore. I saw some of you commenting that this might be similar to the famous Dropbox situation. That could not be more delusional and representative of what HN became, a meme of itself.
The strategy is throw a little bit of money at everything, hope one of them will become a unicorn, everyone gets richer.
Rinse and repeat.
You're right though ... these YC batches are not what they used to be. AI is hot right now, so it seems YC is throwing money at anything that seems like it can at least actually do something (not that it is necessarily good). If that product doesn't get hot, who cares? Plenty more money to go around on the next batch, because one of them probably will!
Hmm that's fair, we're definitely not the most exciting launch out there compared to others in our batch.
I'd like to think the fact we do what we promise is exciting, but without trying the product hard to convey that well :)
[dead]
[dead]
[dead]