I don't get it. I've used Phind a lot over the last year, but now I type in the same prompts I used in the past and it's not Phind anymore; it doesn't work for me at all.
Dey phucked up phind.
sorry to hear that -- could you please elaborate?
The problem is I don't think every answer needs a mini-app. I'd argue there are very few answers that do.
For example, it feels like Google's featured snippet (quick answer box), but expanded. The thing is, many people don't like the featured snippet, and there's a reason it doesn't appear for many queries: it doesn't contribute meaningfully to them.
This functionality does exactly the opposite of the process of building good web apps: rather than "unpacking functionality" and making it specific for an audience, it "packs" all functionality into a generalized use case, at the cost of becoming extremely mediocre for each individual use case, which makes it strictly worse than any dedicated tool you'd use for that job.
As a specific example, I clicked your apartments-in-LES search (https://www.phind.com/search/find-me-options-for-a-72e019ce-...) and it shows just 4 listings...? It shows some arbitrary subset of everything I could find on StreetEasy, and then provides a subset of the search functionality, losing things such as days on market, neighborhood, etc.
It's a cool demo, but "on-demand software" is exactly "Solution-In-Search-of-a-Problem".
The difficult question you need to ask, as with the featured snippet, is: which questions are worth solving with this, and is the pain point big enough to be worth solving?
I tend to agree: I don’t understand what the “one-off app” is trying to achieve. In the example of the rental apartment—the user specified the parameters in the query. Just apply them, right?
I offer this in the spirit of feeling like I’m missing something, not out of negativity—I just genuinely don't understand the proposition.
What’s the advantage of trying to extract and normalize features from already-messy data sources, then provide controls that duplicate the query, rather than just applying the query and returning the results? Isn’t the user turning to a natural-language LLM specifically to avoid operating idiosyncratic UI controls?
For that matter, it takes time to learn to use an interface effectively, to understand how what it says it’s doing connects to what it’s actually doing. I know I can always trust McMaster-Carr’s filter controls, and I know I can never trust Amazon’s wacky random ones.
It seems to me that it’s much harder to pick the right controls and make them work correctly than it is to throw some controls in an interface. Maybe that’s what I’m missing: that just wiring in controls in the first place is the hard part for most people who don’t work in this space.
Is the idea here that I’d need to learn a brand new interface, and figure out whether I can trust it, with every query?
A hypothesis here is that a well-crafted UI helps you understand and see options for things you don't yet know about.
For example, here's one for a "day trip plan in Bristol" that contains a canonical example (directly based on the query), but also a customization widget that presents some options you might not have already thought about if you were just doing a text-based follow-up.
https://www.phind.com/search/make-me-a-day-plan-ac8c583b-ce6...
> I don’t understand what the “one-off app” is trying to achieve.
Many years ago in college I worked on building Java applets that let kids visualize math-related concepts. Sliders make things like sine/cosine and all sorts of other cool stuff way, way more intuitive. We had an applet that let you do ridiculous comparisons, like visualizing how many Empire State Buildings long a marathon is. We had a primitive 'engine' simulator that let you adjust inputs on a steam engine. Stuff like that.
Thanks for the feedback, and I agree that it is very much early days for this product category. To be clear, our goal is to make the software specific for an audience: you. What's exciting, though, is that models are rapidly improving at building on-demand software and this will directly benefit Phind. There are still many edge cases, but I think it will get better quickly.
I would like to see it detect when I want a one-sentence answer and when I want a full interactive explanation with flowcharts, tables, and diagrams.
My most common use case now is "give me a quick answer because I don't want to wade through the search engine results page and then wade through a blog post to get my one-liner." E.g.: "what's the command line to untar an xz over ssh?"
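For reference, the kind of one-liner being asked for here, assuming the .tar.xz sits on a remote host and should be extracted locally (user@host and the path are placeholders), looks something like:

    # stream the remote archive over ssh and extract it locally; -J selects xz, -f - reads from stdin
    ssh user@host 'cat /path/to/archive.tar.xz' | tar -xJf -

Swapping -x for -t lists the contents without extracting, which is a cheap sanity check before unpacking.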
I hear you, but why not use something like Google AI Mode or AI Overviews for that? That's pretty hard to beat for simple questions in terms of speed, especially for one-liners.
Hmm, this answer seems to have ignored everything I said, provided a generic answer ("you"), which is exactly the problem, and doubled down on the models/technology side (and edge cases?), which is neither built by Phind nor what I questioned.
I asked about the Peninsula Campaign during the Civil War and it gave me an overview, a map, profiles (with photos) of the main military commanders, a relevant YouTube video ... rough edges, but overall I love the format.
Rough edges:
- aspect ratios on photos (maybe because I was on mobile, cropping was weird)
- map was very hard to read (again, mobile)
- some formatting problems with tables
- it tried to show an embedded Gmap for one location but must have gotten the location wrong, was just ocean
Thanks for the feedback, this is helpful!
There is a small bug in your onboarding flow: when I select the research model and upgrade on Phind on my phone, it shows some features, but it's not possible to scroll down to the purchase button.
thanks for reporting!
I asked about Twinings Extra Spicy tea because I had just made some: https://www.phind.com/search/twinnings-extra-spicy-tea-bd067...
After about 90 seconds the mini-app was created. It had a few sliders for cardamom, cinnamon, and ginger, which was really confusing, and then it showed a bunch of other stuff that was also completely useless. I did the same search on Google (https://tinyurl.com/47sh4eah) and did not dislike that answer, because I know it didn't burn thousands of tokens for the query. Sorry for being a bit harsh, but I have never seen resources wasted as badly as this.
Oh wow, yeah this one is pretty funny. My attempt gave more reasonable results: https://www.phind.com/search/twinnings-extra-spicy-tea-b1742....
Holy shit, those sliders are so funny. What was it even trying to do?
I have no idea.
Yikes, lol.
Impressive. A few weeks ago I asked Claude how to use FreeCAD, got stuck, and Claude couldn't help me get unstuck.
When I told Phind I'm a complete novice, it came up with very detailed instructions and troubleshooting tips.
This might be super obvious, or it might already exist, but can you make Phind create mobile apps? I don't know of any site that builds a mobile app and actually gives you the app, instead of giving you half the app, or asking you to pay for credits, and never showing you the full, real app that you can install on your phone and actually use.
Phind user for ~2 years.
What type of apps would you like to see it make? How does this version of Phind work for it? And thanks for sticking with us :)
Neat idea!
I tried it out with a relatively basic Medicinal Chem/Pharmacology question, asking for an interactive Structure-Activity-Relationship viewer:
> "Build an interactive app showing SAR for a congeneric series. Use simple beta-2 agonists (salbutamol -> formoterol -> salmeterol). Display the common phenethylamine scaffold with R-group positions highlighted, and let me toggle substituents to see how logP, receptor binding affinity, and duration of action change."
It did not quite get it right. It put a bunch of pieces together, but the interactivity/functionality didn't work and the choice of visualization was poor for the domain.
Thanks for the feedback! The model you use in Phind makes a big impact. Claude 4.5 Opus in Phind gave a better answer than Phind Fast here: https://www.phind.com/search/build-an-interactive-app-showin....
Gemini 3's new "Dynamic View" responses do a pretty good job:
https://gemini.google.com/share/e0cdb00b1854
This is pretty cool, I asked it to visualize national import/export data, it did alright.
I was hoping to get a map with arrows like "$35B in agriculture" from China to USA. I wasn't able to make it do that, but the information was still there presented in a reasonable way!
Application error: a client-side exception has occurred while loading www.phind.com (see the browser console for more information).
Getting this error on the homepage. In the browser console I am just seeing
Content-Security-Policy: (Report-Only policy) The page’s settings would block a script (script-src-elem) at https://www.phind.com/_next/static/chunks/c857e369-746618a9672c8ed0.js?dpl=dpl_4dLj9qrNQMh6evFNeDZbEJjTnT9B from being executed because it violates the following directive: “script-src 'none'”
GET https://www.phind.com/_next/static/chunks/4844-90bb89386b9ed987.js?dpl=dpl_4dLj9qrNQMh6evFNeDZbEJjTnT9B [HTTP/1.1 403 Forbidden 716ms]
Interesting -- could you try with a vanilla browser (no extensions or VPN) please? Preferably Chrome or Safari.
It seems to work except when I connect to my work VPN, which is very permissive -- I haven't observed it to break anything else
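A quick way to narrow it down, assuming it's the VPN or a corporate proxy interfering (the URLs are copied from the console report above), is to compare what those requests return on and off the VPN:

    # the JS chunk that returned 403 above: does it load without the VPN?
    curl -sI 'https://www.phind.com/_next/static/chunks/4844-90bb89386b9ed987.js?dpl=dpl_4dLj9qrNQMh6evFNeDZbEJjTnT9B'
    # which CSP headers does the homepage actually send (report-only vs enforced)?
    curl -sI https://www.phind.com/ | grep -i content-security-policy

If the chunk comes back 200 off the VPN and 403 on it, it's the network path blocking or rewriting the request rather than anything in Phind's bundle.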
Awesome job! Is it possible to predefine spec features, and perhaps a layout as well? For instance, if I need a specific layout, font, order, or certain UI elements (whether for accessibility's sake or by preference), can the internal code used for app creation be made configurable, ensuring security of course?
hey michael, long-term phind user here. phind became absolute sh*t. almost every answer is wrong. web search should be on by default to get accurate info, but even then it ends up hallucinating a lot.
if every response starts with "You're absolutely right -- ..." you know phind is hallucinating and you can immediately close the tab.
hey, sorry to hear that. web search is on by default, but we had some teething issues with it in the last hour. it should be fully fixed now. can you send some links that failed?
People often can't share their searches due to privacy concerns; maybe you should at least provide an email address so they can share them privately, rather than posting on HN? (Going forward, does your app have a feedback button in each search? If not, it should.)
anyway I think you need better QA processes
been waiting for this, thanks for bringing it to the fore!
We tried to do this for learning purposes with Reasonote, but the tech wasn't quite there yet.
I'm excited to dig back in with some newer models.
Tried the prompt:
>A geometry app with nodes which interact based on their coordinates which may be linked to describe lines or arcs with side panels for variables and programming constructs.
which resulted in:
https://www.phind.com/search/a-geometry-app-with-nodes-ed416...
which didn't seem workable at all, and notably was lacking a side panel.
Hi, I just clicked the link and it's showing up for me. Could you refresh?
While I initially noted it as not showing up, things did appear after a while. But what I'm getting isn't what I would consider usable; in particular, the requested areas for values and variables do _not_ appear at the side as requested, and it's not workable for my needs/expectations.
I agree that this answer was a bit wacky. Phind Fast is the fast and free model. Selecting Phind Large, GPT-5.1, or the Claude models would be better for a modeling task like this.
Congrats on the launch, I love the idea! Super exciting to see these generative UIs.
I tried to make it generate an explainer page and it created an unrelated page: https://www.phind.com/search/explain-to-me-how-dom-66e58f3f-...
Hi, apologies for this -- it seems to have written a syntax error that it then failed to auto-fix (hence the white screen).
I tried generating your answer again: https://www.phind.com/search/explain-to-me-how-dom-78d20f04-....
This is good. It’s fascinating how it spins up interactive pages instantly. Some of the mini-apps actually feel useful, but others break in ways you wouldn’t expect.
I’m curious to see how it evolves with more complex, multi-step queries.
We're using a similar approach at https://hallway.com ... launching soon!
OK, I've had a chance to play with it in earnest.
First: my sense is that for most use cases, this will begin to feel gimmicky rather quickly and that you will do better by specializing rather than positioning yourself next to ChatGPT, which answers my questions without too much additional ceremony.
If you have any diehard users, I suspect they will cluster around very particular use cases, say business users trying to create quick internal tools, users who want to generate a quick app on mobile, or scientists who want quick apps to validate data. Focusing on those clusters (your actual ones, not these specific examples) and building something optimized for their use cases seems like a stronger long-term play for you.
Secondly, I asked it to prove a theorem, and it gave me a link to a proof. This is fine, since LLM-generated math proofs are a bit of a mess, but I was surprised that it didn't offer any visualizations or anything further. I then asked it for numerical experiments that support the conjecture, and it just showed me some very generic code and print statements for a completely different problem, unrelated to what I asked about. Not very compelling.
Finally, and least important really: please stop submitting my messages when I hit return/enter! Many of us like to send more complex, multi-line queries to LLMs.
Good luck
First time I'm seeing valid business advice on HN - unlike the infamous Dropbox comment haha :) But I strongly agree with the above advice on specializing for a vertical and hope the founders take it seriously!
The loading issues should be fixed now (as of 11am PST). Apologies for this -- one of our search providers went down right as we launched :(
It's definitely cool and, engineering-wise, close to SOTA given Lovable and all of the app generators.
But, assuming you are trying to sit in between Lovable and Google, how are you not going to be steamrolled by Google or Perplexity etc. the moment you get solid traction? Like, if your insight for v3 was that the model should make its own tools, so even less is hardcoded, then I just don't see a moat or any vertical direction. What really is the difference?
Thanks, and great question. The custom Phind models are really key here -- off-the-shelf models (even SOTA models from big labs) are slow and error-prone when it comes to generating full websites on-the-fly.
Our long-term vision is to build a fully personalized internet. For Google this is an innovator's dilemma, as Google currently serves as a portal to the current internet.
That is really cool! Congrats on the launch!
I was surprised not to see a share and embed button. I would expect that could be huge for growth.
Thank you! There is a share button in the upper-right corner of the answer page screen :)
I love the direction. It feels really fresh.
Thank you, great to hear :)
I-LOVE-IT!
At least to me, this is a totally fresh take on AI and providing answers. OpenAI is burning through billions without trying to make a nicer interface or come up with some innovation in how to train models (like Qwen and MiniMax have). Unlike Claude, which tries to smother you with content and emojis, I got a clean and focused answer to my query, and an app.
Again, love it, thank you. If you have to sell yourself, make sure you get a lot of billions.
BTW, I saw this approach with mini-apps on Cove.ai, and it surprised me how useful it can be. I got simulations of some business ideas I was developing there, and it was really helpful.
Great reminder that Phind still exists. Between Gemini Enterprise and ChatGPT, plus Phind always creating massive diagrams, I kind of stopped using it without noticing. Maybe I'll give it a try again.
Yep, we heard that feedback loud and clear; the diagrams are a lot less annoying in this new version.
Hey, to be fair, getting on the front page of HN floods a site with traffic, and that's even harder for an AI app. Just wait a bit and it will likely be fine.
Congrats on the launch and keep up the great work.
Hi, sorry about that -- we are receiving an HN traffic "hug" spike right now and I'm working on getting that fixed ASAP.