Very interesting!
My first thought when seeing this is, could I use this as a "progress map" for a subject I'm learning? So add my own notes, and use AI to find and recommend more resources?
My second thought is, can you build one of these for everything I've ever learned, and want to learn?
I've long (15 years?) been waiting for a system that knows not only my interests, but my knowledge, and can use that data to find or generate the optimal learning experience for any subject.
(Khan Academy used to have a big interconnected graph of how all the knowledge on their platform fit together (dependencies) but for some reason they removed it...)
AI is getting pretty close, especially now that they've rolled out memory and conversations... wild times we live in!
Smells like a knowledge graph
@andai check out https://www.perplexity.ai/spaces. It's _kind of_ what you're describing. Its UX is unstructured compared to a mind map or timeline, but we're starting to see the nascent stages of where all this is going. Exciting times indeed.
that's a very interesting use case, could be the long-term vision for the project, thanks for sharing!
I like this a lot. Great for autodidacts like myself. Often when entering a new topic I’m faced with many unknown unknowns. I don’t know _what_ I should be learning. So having an LLM effectively lay out a course of study would be very helpful.
glad you liked it! hope it’s useful
I'd say the README should have a pic of the results; otherwise I have to install it and run it to see if I want to install it and run it.
Also why not host it online and let users bring their own keys?
just updated the readme with the video: https://www.youtube.com/watch?v=Y-9He-tG3aM
I considered that, but if I were the user I'd be wary of adding my own keys to a random person's website haha. Now that you mention it, though, since the code is open-source I guess it's fine. Thanks for the feedback!
Thanks for that! You can use something like gifski to turn that video into a gif so that you can embed it into the README. Here's an example from the gifski repo: https://github.com/ImageOptim/gifski
You can use the CLI version but they also have executables with a dead simple GUI if you're so inclined. I have only ever used the GUI and it's perfect on a Mac (just drag and drop your video into it). Not sure if it's the exact same on Windows but I imagine it's amazing there too
Nice! Will replace the screenshot with a gif. If that doesn’t work for me, I guess ffmpeg may be able to do that too, thanks!
Ffmpeg can output a gif. The only difficult part might be figuring out which options you need to get the quality you want.
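As a sketch of those options, assuming an input file named `demo.mp4`: the usual trick for decent GIF quality is the two-pass palette filtergraph (the fps and scale values here are just illustrative starting points).

```shell
# Generate a custom 256-color palette and apply it in one filtergraph;
# without palettegen/paletteuse, ffmpeg's default GIF palette looks muddy.
ffmpeg -i demo.mp4 \
  -vf "fps=10,scale=640:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" \
  demo.gif
```

Lower `fps` and a smaller `scale` width are the main levers for keeping the output file size README-friendly.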
That’s cool! It would be great if you could easily expand each subtopic into further sub-subtopics.
Was there anything particularly interesting about how you built it or the prompts needed to get decent results?
I noticed that, at least with the models I tested (GPT-3.5, GPT-4o, and Llama 3.1 8B), the hardest part was getting a response with just the JSON, and then having it follow the exact structure so the topic and subtopics render correctly.
I ended up having to prompt for it twice, I think (at the beginning and the end), before it finally followed the exact JSON structure.
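One way to make that failure mode survivable on the client side, rather than relying on prompting alone, is to extract the first JSON object from whatever the model returned, validate it, and re-prompt only when validation fails. A minimal sketch (the `topic`/`subtopics` field names are assumptions for illustration, not the project's actual schema):

```python
import json
import re

def extract_topic_json(raw: str) -> dict:
    """Pull the first JSON object out of a model response, even if the
    model wrapped it in chatty prose, and check it has the expected keys."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in response")
    data = json.loads(match.group(0))
    # Hypothetical schema: a topic string plus a list of subtopics.
    if "topic" not in data or not isinstance(data.get("subtopics"), list):
        raise ValueError("response JSON missing required keys")
    return data
```

On a `ValueError` the app could simply retry the request, optionally feeding the error message back into the prompt.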
You can use structured outputs to tell ChatGPT-4o to create specific JSON matching a schema: https://platform.openai.com/docs/guides/structured-outputs/i...
It's a bit annoying because the schema has some limitations but it works with enough elbow grease
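For reference, the schema you hand to structured outputs looks roughly like this (a sketch; the `topic`/`subtopics` names are made up for illustration, and the exact request wrapper depends on the SDK version). The limitations mentioned above are real: strict mode requires every property to be listed in `required` and `additionalProperties` to be `false`.

```python
# JSON Schema passed as the "json_schema" response format; with
# "strict": True the model is constrained to emit exactly this shape.
mind_map_schema = {
    "name": "mind_map",
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "topic": {"type": "string"},
            "subtopics": {"type": "array", "items": {"type": "string"}},
        },
        # Strict structured outputs require all properties to be required
        # and extra keys to be forbidden.
        "required": ["topic", "subtopics"],
        "additionalProperties": False,
    },
}
```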
interesting, didn’t know about that feature, thanks for sharing!
I love the idea, but I have no OpenAI API credits.
If you set up Ollama and download a local model, all you have to do is follow the README instructions. Let me know if you need any help!
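For anyone else going the local route, the Ollama side is roughly this (the model tag is whatever you prefer; `llama3.1` is just an example):

```shell
# Download a local model, then start the Ollama server
# (serve may already be running if Ollama was installed as a service).
ollama pull llama3.1
ollama serve
```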
Would be great to have a video of it working so I can see what it does before installing. Thanks!
Also, I’m generally interested in UI/UX variations around LLMs. Hoping to see a round-up of examples like this at some point.
This reminds me of https://tree-of-knowledge.org/, posted a few months ago on HN. The branching/exploratory/canvas approach is better UX than a chat box.
just uploaded a demo on YouTube: https://www.youtube.com/watch?v=Y-9He-tG3aM thanks for checking it out!
How do you generate / validate the links to learn more? If they're generated by the LLM there's a really high chance they are hallucinated and won't work.
Human intervention would be necessary for any kind of reliable knowledge curation, if that were the goal. https://en.wikipedia.org/wiki/World_Brain
To be quite honest, I don't. I just manually tested with different topics and got working links almost every time, but I agree, that can definitely happen.
You could have a second automatic step that searches the web for the title of the link and validates it or gets the correct one.
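Even before a web-search step, a cheap first pass is just checking that each generated URL resolves at all. A sketch using only the standard library (a HEAD request; some sites reject HEAD or bot traffic, so a failure is better treated as "needs review" than "definitely hallucinated"):

```python
import urllib.error
import urllib.request

def link_is_live(url: str, timeout: float = 5.0) -> bool:
    """Cheap hallucination check: issue a HEAD request and treat any
    2xx/3xx status as 'probably real'. Malformed URLs and network
    failures count as dead."""
    try:
        req = urllib.request.Request(
            url, method="HEAD", headers={"User-Agent": "link-checker"}
        )
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False
```

Links that fail this check could then go through the heavier search-by-title step suggested above.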
Cool project!
good idea, I'll add that to the improvements list, thanks!
nice work!
hmm... perhaps there could be some compounded synergies with my https://VisualFlows.io
// also made with React Flow. i will DM you..
Could you add a screenshot based demo or example to the main page?
i'm guessing you were referring to the OP's app? he just added the video demo. but if not, our main page IS the app ;-)
Do you have any examples to look at? All I can see in the readme is a page with a search field and no mind maps.
just uploaded a demo on YouTube: https://www.youtube.com/watch?v=Y-9He-tG3aM thanks for checking it out!