If your landing page doesn't look like this, you've launched too late: https://integuru.ai
Page source is amazing. I can't remember the last time I've seen a serious YC company launch page with absolutely zero JavaScript. Even the CSS is just a single selector.
I'm a fan.
I wish I could do this… best part of building for devs is being able to provide simple, good UX with minimal UI.
Still looks more interesting than that Next.js landing page template used by every startup these days.
Their website is this one though. :) https://www.taiki.ai/
@richardzhang what is the relationship between taiki and integuru? is this a pivot?
We should definitely further clarify this! We built Integuru as an internal tool while building the products for Taiki. Then we realized that other developers may need the agent too, so we decided to open-source Integuru. In terms of current focus, our team is spending most of its time on Integuru, because newly requested integrations take some of our resources to build and we want to continue improving the agent. I think the correct way to frame this is as a market expansion, where we're expanding beyond the tax industry.
I don't know what my PM would say but to me this is "excellent and appealing design"
This is what happens when your daily grind is cutting through all kinds of atrocious and excessive "web design" in order to get at information.
Literally peak graphics.
I just noticed over the weekend that the new Claude agreed to reverse engineer a GraphQL server with introspection turned off, something I'm pretty sure it would have refused for ethical reasons before the new version
it kept writing scripts, I would paste the output, and it would keep going, until it was able to create its own working discount code on an actual retail website
The only issue with these kinds of things is breaking robots.txt rules and the possibility that things will break without notice, and often
The use of unofficial APIs can be legally questionable [1]
[1] https://law.stackexchange.com/questions/93831/legality-of-us...
From the authors of what is essentially a hacking tool, I would expect at least some legal boilerplate language about not being liable
We are working on a way to auto-patch internal APIs that change by having another agent trigger the requests.
Regarding the legality aspects — really appreciate you mentioning this — we’ve put a lot of thought into these issues, and it’s something we’re continually working on and refining.
Ultimately, our goal is to allow each developer to make their own informed decision regarding the policies of the platforms that they're working with. There are situations where unofficial APIs can be both legal and beneficial, such as when they're used to access data that the end user rightfully owns and controls.
For our hosted service, we aim to balance serving legitimate data needs with safeguarding against bad actors, and we’re fully aware this can be a tricky line to navigate. What this looks like in reality would be to prioritize use cases where the end-user truly owns the data. But we know this is not always black-and-white, and will come up with the right legal language as you recommended. What does help our case is that many companies are making unofficial APIs for their own purposes, so there are legal precedents that we can refer to.
I have to disagree; it is definitely not legal in the US to use unauthorized access points to access authorized data. That's like saying you're allowed to get into your apartment by breaking your neighbor's door and climbing between the windows
In the US this is pretty squarely covered by the Computer Fraud and Abuse Act (CFAA), a federal law; the Computer Misuse Act is the UK equivalent
I'm not claiming you're liable, just surprised no lawyer pointed this out at YC
There is a carve-out if the data is "publicly available": https://en.wikipedia.org/wiki/HiQ_Labs_v._LinkedIn
If I open the Safeway app and it fetches what is available in a given store without any authentication and everyone sees the same data, that could possibly fall under that exemption.
If my browser is downloading some data, then what’s the difference if my AI agent is doing the same? I’ll even tell you it’s my browser. Who are you to say what qualifies as a browser?
The law will say what qualifies as a browser.
Computer programmers are not legal experts lol. The law is not a program.
The difference between you accessing it and a computer accessing it makes these things different.
A browser is a user agent, it's some software that makes requests to a server and renders them in a way I can understand. There's no difference between using a screen reader to vocalize content and using an AI agent to summarize it.
Just have the AI use the browser.
This analogy is completely off. A closer analogy is someone calls you on your phone letting you know they're here. You were expecting them, so you say "come on in." But, they were at the back door instead of the front door. I don't think anyone would consider that your friend did something illegal.
Yeah, the CFAA doesn't work by analogy unfortunately.
The CFAA has recently (2021) been limited by the Van Buren ruling.
In my experience reverse engineering is often the easy bit, or at least easy compared to what follows: maintenance. Knowing both when and how it fails (e.g. when the API stops returning any results but is otherwise still valid). Knowing when the response has changed in a way that is subtle to detect, like the format of a single field changing, which may still parse correctly but is now interpreted incorrectly.
How do you keep up with the maintenance?
We feel your pain with maintenance. We have plans to handle this by using LLMs to detect response anomalies.
From our experience, reverse engineering is still less prone to breakage compared to traditional browser automation. But we definitely want to make integrations even more reliable with maintenance features.
Wouldn't something like snapshot testing from a scheduled probe be more effective and reliable than using an LLM?
Every X hours, test the endpoints and validate that the types and field names are consistent... If they change, trigger some kind of alerting mechanism for the user.
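The scheduled-probe idea can be sketched in a few lines: snapshot an endpoint's response "shape" (field names and types) and diff each probe against a stored baseline. The helpers `snapshot_shape` and `diff_shapes` below are invented for illustration, not part of any real tool.

```python
def snapshot_shape(obj, prefix=""):
    """Flatten a JSON-like response into {field_path: type_name}."""
    shape = {}
    if isinstance(obj, dict):
        for key, value in obj.items():
            shape.update(snapshot_shape(value, f"{prefix}{key}."))
    elif isinstance(obj, list) and obj:
        # sample the first element as representative of the list's shape
        shape.update(snapshot_shape(obj[0], f"{prefix}[]."))
    else:
        shape[prefix.rstrip(".")] = type(obj).__name__
    return shape

def diff_shapes(baseline, current):
    """Return fields that were added, removed, or changed type."""
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    changed = {f for f in set(baseline) & set(current)
               if baseline[f] != current[f]}
    return added, removed, changed

# baseline captured when the integration was known to work
baseline = snapshot_shape({"id": 1, "name": "widget", "price": 9.99})
# a later probe: "price" is gone and "id" became a string
probe = snapshot_shape({"id": "1", "name": "widget"})
added, removed, changed = diff_shapes(baseline, probe)
```

Anything nonempty in `removed` or `changed` would trigger the alert the commenter describes.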
If the types and field names change, our parsing script should be able to detect that, so that case should be covered. I was talking about handling the subtle changes that are undetectable by checking field types and names.
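One non-LLM way to catch the "parses fine but means something else" case is to fingerprint each field's value format (not just its type) from known-good samples, then flag values that stop matching. The fields and patterns below are made up for the example:

```python
import re

# Format fingerprints learned from known-good responses (hypothetical).
FORMATS = {
    "date": re.compile(r"\d{4}-\d{2}-\d{2}$"),   # learned: ISO dates
    "amount": re.compile(r"\d+\.\d{2}$"),        # learned: 2-decimal strings
}

def format_anomalies(record):
    """Return fields whose values no longer match the learned format."""
    return [field for field, pattern in FORMATS.items()
            if not pattern.match(str(record.get(field, "")))]

ok = format_anomalies({"date": "2024-05-01", "amount": "19.99"})
# date format silently flipped to US style: still a valid string,
# same type, but now interpreted incorrectly downstream
bad = format_anomalies({"date": "05/01/2024", "amount": "19.99"})
```

Both values are strings either way, so a type check passes; only the format fingerprint catches the flip.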
This is awesome, but I'm not sure what the long-term use case is for the intersection of low-latency integrations and non-production-stable APIs. I'm saying this as someone with way more experience than I'd like in using reverse-engineered APIs as part of production products... You inevitably run into breakages, sometimes even actively hostile platforms, which degrade the user experience while users wait out your one-day window to fix their product again.
Though I suppose if you can auto-fix and retry issues within ~1 minute or so it could work?
This is a very important question. Thank you for bringing this up! Currently, fixing integrations requires human intervention, as someone needs to trigger the correct network request. We are planning on having another agent that triggers the network requests by interacting with the UI and then passes them to Integuru.
NewPipe breaks regularly. It's almost like YouTube changes the API on purpose to hurt third-party clients that don't show ads.
Either that, or they just straight up don't care.
I think it's pretty likely that they just don't look at or test Newpipe when they change their APIs. If the change doesn't break any official clients, it goes through.
With how large YouTube is, I imagine API changes are not infrequent.
What's the stance on security for handling private tokens/cookies/sessions/etc?
This is certainly an important question. We use a third-party vault to store tokens/keys.
Brilliant. Is the next part to monitor and autocorrect breakage when the API in scope changes unexpectedly underneath the system? This is a pain point of workflow automation systems that integrate with APIs in my experience, typically requiring a human to triage an alert (due to an unexpected external API change), pause worker queues, ship a fix, and then resume queue processing.
Love the landing page, please keep it.
Thanks and yes that's part of the roadmap!
Currently you need to trigger the UI actions manually to generate the network requests used by Integuru. But we're planning to automate the whole thing by having another agent auto-trigger the UI actions to generate the network requests first, and then have Integuru reverse-engineer the requests.
Ah, by clicking on the Taiki logo to see what the ... parent company? ... builds, I now understand how this came about. And I'll be honest, as someone who hates all that tax paperwork gathering with all my heart, this launch may have gotten you a new customer for Taiki :-)
Also, just as a friendly suggestion, given what both(?) products seemingly do, this section could use some love other than "we use TLS": https://www.taiki.ai/faq#:~:text=How%20does%20Taiki%20handle... since TLS doesn't care about storing credentials in plain text in a DB, for example
---
p.s. the GitHub organization in your .gitmodules is still pointing to Unofficial-APIs which I actually think you should have kept o/
Thank you for your suggestions, and really glad to hear you're excited about Taiki! We will update the FAQ with your suggestions — honestly, this part of the website is a bit outdated, and we will make sure to change it.
Regarding the Unofficial-APIs name, it was a really tough decision. We liked the name a lot but just thought it was a bit long. A really pleasant surprise that you found it :)
Wow this is great! I think this is kind of the future of automation and "computer use" once LLMs become powerful enough.
Every task on the web can be reduced to a series of backend calls, and the key is extracting the minimal graph that can replicate that task.
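A toy illustration of that "minimal graph" idea: each captured request may consume values produced by earlier ones (cookies, IDs, tokens), which induces a dependency graph, and replaying the task means walking that graph in topological order. The request names below are invented:

```python
from graphlib import TopologicalSorter  # stdlib since Python 3.9

# request -> set of requests whose responses it needs values from
deps = {
    "login": set(),
    "get_account": {"login"},                        # needs the session cookie
    "download_statement": {"get_account", "login"},  # needs the account id too
}

# static_order() yields each request only after its dependencies
order = list(TopologicalSorter(deps).static_order())
```

Requests captured during a session that contribute nothing to the final call (analytics beacons, prefetches) simply don't appear in the graph, which is what makes it "minimal".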
Thank you!
There are a lot of companies using old custom or self-hosted webapps that they control but can't change - maybe the third party that built it kept the code, maybe it's an orphan product, maybe the silo that owns it won't build an API.
Anyway, a lot of good points here about legalities and shifting APIs, but I think there are plenty of situations where this is great and none of that applies.
Really digging this idea.
I've spent plenty of time trying to dig into the network tab to automate requests to a website without an API. Cool to see the process streamlined with LLMs. Wishing you all the best of luck!
Thank you!
Will this work for SSR applications? E.g. think old-school .NET or JSP apps that make network requests, receive HTML that then needs to be parsed to extract the key pieces of information, and then make additional network requests.
I've found it relatively straightforward to reverse engineer SPA requests, but how would your service handle server-side rendered apps?
Good question. Finding the request that's responsible for the action you want will be a bit trickier for SSR, but it's still possible in most cases. Integuru auto-generates regex (for now) to parse the needed info out of the HTML template.
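The regex-extraction step for SSR responses might look roughly like this; the HTML and patterns below are invented for the example, not generated by Integuru:

```python
import re

# A server-rendered response: the data lives in the markup, not in JSON.
html = """
<div class="account">
  <span id="balance">$1,234.56</span>
  <input type="hidden" name="csrf_token" value="abc123def">
</div>
"""

# Key the patterns on stable attributes (id / name) rather than layout,
# so cosmetic markup changes are less likely to break extraction.
balance = re.search(r'id="balance">([^<]+)<', html).group(1)
csrf = re.search(r'name="csrf_token"\s+value="([^"]+)"', html).group(1)
```

Values like the hidden CSRF token are exactly the kind of thing a follow-up request in the chain would need.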
Another thing I've seen is that some of these old school apps are sending certain requests that don't modify the page but set server side context which subsequent requests are dependent on.
For example, a request sets the context to a particular group, and then subsequent navigation is filtered on that group even though the filter is not explicit on the client side; it comes from state stored remotely in the session.
This can also have implications on concurrency for a given session where you need to either create separate sessions or make sure there is some lock on particular parts of server side state.
Would this type of thing eventually be possible? Or at least hooks that enable us to add custom code such as session locks?
Very interesting to hear about your experience here! We haven't come across a website with this design and don't offer support for it just yet. We can certainly implement it if more people face a similar situation.
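For reference, the session-lock hook the commenter asks about could look roughly like the sketch below: when requests mutate server-side session state, serialize all calls that share a session so one workflow's "set context" can't race another's. `SessionLocks` is a hypothetical helper, not part of Integuru:

```python
import threading
from contextlib import contextmanager

class SessionLocks:
    """Hand out one lock per session id, created lazily."""
    def __init__(self):
        self._locks = {}
        self._registry_lock = threading.Lock()

    @contextmanager
    def locked(self, session_id):
        with self._registry_lock:
            lock = self._locks.setdefault(session_id, threading.Lock())
        with lock:
            yield

locks = SessionLocks()

def run_step(session_id, step, log):
    # e.g. step 1: POST /set-context?group=A
    #      step 2: GET /list (server filters by the group set in step 1)
    with locks.locked(session_id):
        log.append((session_id, step))

log = []
run_step("sess-1", "set-context", log)
run_step("sess-1", "list", log)
```

The alternative the commenter mentions, separate sessions per concurrent workflow, avoids locking entirely at the cost of logging in more than once.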
Would be cool to use a proxy to MITM to twiddle the bits (with its own API) if the use case isn't supported by a browser or robotic process automation driving the app's client side UX.
I was talking about web apps. But yeah, for old-school desktop apps or Windows-native apps, proxy MITM works.
Very cool! If Megacorps insist on pulling the Web 2.0 promises of APIs from under us then we will build them ourselves in the spirit of adversarial interoperability.
It's time for there to be a legal protection framework for OSS maintainers to stop being bullied with legal threats from Megacorps.
The best ideas are ones that start off with an internal pain point. Congrats!
Thank you!
Nice work, congrats! How do you deal with security related stuff like recaptcha, signed requests and so on?
Do you also support internal APIs of mobile applications? If so, how do you deal with AppCheck / PlayIntegrity / Android Key Attestation / Apple App Attest?
Thank you! Integuru itself doesn't handle recaptchas and signed requests, but we have a hosted solution where we use third-party services to handle recaptchas and manually create integrations for handling signed requests.
We do not directly support APIs for mobile applications; however, if you use MITM software and get all the network requests into a .har file, Integuru should work as expected. We do not handle AppCheck at the moment, unfortunately.
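The .har route is straightforward because HAR is just JSON (the HTTP Archive format that proxies like mitmproxy and Charles can export). A minimal sketch of pulling the captured requests out of one, with the file content inlined here for the example:

```python
import json

# What a tiny .har export looks like (log.entries[].request per the spec).
har_text = json.dumps({
    "log": {"entries": [
        {"request": {"method": "POST",
                     "url": "https://api.example.com/v1/login"}},
        {"request": {"method": "GET",
                     "url": "https://api.example.com/v1/items"}},
    ]}
})

har = json.loads(har_text)  # in practice: json.load(open("capture.har"))
requests_seen = [(e["request"]["method"], e["request"]["url"])
                 for e in har["log"]["entries"]]
```

From the agent's point of view, traffic captured this way from a mobile app is indistinguishable from traffic captured in a browser's network tab.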
This is really awesome. There are several platforms that intentionally gatekeep their APIs, and it makes it really annoying to build integrations with them. How do you work with these platforms without breaking their TOS?
Thank you! There are definitely platforms that intentionally gatekeep their APIs. A good example is LinkedIn, which many companies still try to build their own integrations with anyway. Our goal is to allow each developer to make their own informed decision regarding the policies of the platforms that they're working with. For our hosted service, we want to prioritize use cases where the end user truly owns the data. We can also point to legal precedents involving the many other companies that build unofficial APIs.
I don't think it really matters to them. As a provider giving access to these platforms, they're not the user (and they didn't agree to the terms). The end user did, so it's on them to decide whether they risk getting terminated or whatnot.
If they have deeper pockets than the user, they're the ones who will get sued for abuse they enable.
Again, this is one of those nice, interesting products that should be open source, but you shouldn't have taken the VC funding. That is an immediate red flag, and this product is guaranteed to go south in the future.
Very cool, congratulations! Would this work for graphql APIs with introspection disabled?
Thank you! As long as the network request contains the query, it should work as expected. So yes, it should work with introspection-disabled GraphQL APIs. Excited to see what you do with it!
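This works because disabling introspection only hides schema discovery; a captured request already carries the full query text, so replaying it just means rebuilding the same POST body. The query and variables below are invented examples:

```python
import json

# Query text lifted from a captured network request (hypothetical).
captured_query = """
query Order($id: ID!) {
  order(id: $id) { status total }
}
"""

def build_graphql_body(query, variables):
    """Recreate the JSON body a GraphQL client would POST."""
    return json.dumps({"query": query, "variables": variables})

body = build_graphql_body(captured_query, {"id": "42"})
decoded = json.loads(body)
```

Posting `body` to the captured endpoint with the captured headers replays the operation without ever needing the schema.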
Going from tax API reverse engineering to making it easier to reverse engineer any API is a smart pivot.
Hell yeah! Love to see this launch. We have spent a lot of time at Wren recently trying to reverse engineer some local law APIs to help make renewable energy developer lives easier (less parsing through hundreds of PDFs, dead links, etc.) -- going to try this out and see if it can speed up our workflow.
Thank you! Would love your feedback after you use it!
Congratulations! This is such a cool idea.
Thank you!