30 comments

  • andai 2 hours ago

    Very interesting!

    My first thought when seeing this is, could I use this as a "progress map" for a subject I'm learning? So add my own notes, and use AI to find and recommend more resources?

    My second thought is, can you build one of these for everything I've ever learned, and want to learn?

    I've long (15 years?) been waiting for a system that knows not only my interests, but my knowledge, and can use that data to find or generate the optimal learning experience for any subject.

    (Khan Academy used to have a big interconnected graph of how all the knowledge on their platform fit together (dependencies) but for some reason they removed it...)

    AI is getting pretty close, especially now that they've rolled out memory and conversations... wild times we live in!

    • hm-nah 9 minutes ago

      Smells like a knowledge graph

    • artur_makly 2 hours ago

      @andai check out https://www.perplexity.ai/spaces. It's _kind of_ what you are describing, though its UX is unstructured compared to a mind map or timeline. But we are starting to see the nascent stages of where all this is going. Exciting times indeed.

    • arthurtakeda an hour ago

      That's a very interesting use case; it could be the long-term vision for the project. Thanks for sharing!

  • null0pointer 18 minutes ago

    I like this a lot. Great for autodidacts like myself. Often when entering a new topic I’m faced with many unknown unknowns. I don’t know _what_ I should be learning. So having an LLM effectively lay out a course of study would be very helpful.

    • arthurtakeda 15 minutes ago

      glad you liked it! hope it’s useful

  • airstrike 5 hours ago

    I'd say the README should have a pic of the results, otherwise I have to install it and run it to see if I want to install it and run it.

    Also why not host it online and let users bring their own keys?

    • arthurtakeda 5 hours ago

      just updated the readme with the video: https://www.youtube.com/watch?v=Y-9He-tG3aM

      I considered that, but if I were the user I'd be wary of adding my own keys to a random person's website haha. Now that you mention it, since the code is open-source I guess it's fine. Thanks for the feedback!

      • airstrike 4 hours ago

        Thanks for that! You can use something like gifski to turn that video into a gif so that you can embed it into the README. Here's an example from the gifski repo: https://github.com/ImageOptim/gifski

        You can use the CLI version but they also have executables with a dead simple GUI if you're so inclined. I have only ever used the GUI and it's perfect on a Mac (just drag and drop your video into it). Not sure if it's the exact same on Windows but I imagine it's amazing there too

        • arthurtakeda 4 hours ago

          Nice! Will replace the screenshot with a gif. If that doesn’t work for me, I guess ffmpeg may be able to do that too, thanks!

          • nosioptar an hour ago

            FFmpeg can output a gif. The only difficult part might be figuring out which options you need to get the quality you want.
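
            For reference, a single-pass recipe I've seen recommended for decent quality looks roughly like this (file names and the fps/scale values are placeholders, so double-check against the ffmpeg docs):

                ffmpeg -i demo.mp4 \
                  -vf "fps=12,scale=800:-1:flags=lanczos,split[s0][s1];[s0]palettegen[p];[s1][p]paletteuse" \
                  demo.gif

            The palettegen/paletteuse pair builds a custom 256-color palette from the video instead of using a generic one, which is where most of the quality improvement comes from.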

      • cj 5 hours ago

        That’s cool! It would be great if you could easily expand each subtopic into further sub-subtopics.

        Was there anything particularly interesting about how you built it or the prompts needed to get decent results?

        • arthurtakeda 4 hours ago

          I noticed that, at least with the models I tested (GPT-3.5, GPT-4o and Llama 3.1 8B), the hardest part was getting a response with just the JSON and then having it follow the exact structure so the topic and subtopics render correctly.

          I ended up having to state the format requirement twice in the prompt (at the beginning and at the end) before it finally followed the exact JSON structure.
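
          Roughly the shape of it, simplified rather than the exact code from the repo (the schema, model name and function here are just illustrative, using the official openai Node SDK):

              import OpenAI from "openai";

              const client = new OpenAI();

              // The format requirement is stated once up front and repeated at the end,
              // which is what finally got the models to stick to the schema.
              const systemPrompt = [
                "Reply with JSON only, no prose.",
                'Schema: {"topic": string, "subtopics": [{"title": string, "links": string[]}]}',
                "Remember: output a single JSON object matching the schema above, nothing else.",
              ].join("\n");

              export async function generateMindMap(subject: string) {
                const res = await client.chat.completions.create({
                  model: "gpt-4o",
                  // json_object mode guarantees syntactically valid JSON on models that
                  // support it; the exact field names still rely on the prompt.
                  response_format: { type: "json_object" },
                  messages: [
                    { role: "system", content: systemPrompt },
                    { role: "user", content: `Create a learning mind map for: ${subject}` },
                  ],
                });
                return JSON.parse(res.choices[0].message.content ?? "{}");
              }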

  • SuperHeavy256 24 minutes ago

    I love the idea, but I have no OpenAI API credits.

    • arthurtakeda 17 minutes ago

      If you set up Ollama and download a local model, all you have to do is follow the readme instructions. Let me know if you need any help!
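
      For reference, the general pattern (not necessarily the exact wiring in this repo) is to point an OpenAI-compatible client at Ollama's local endpoint; the base URL below is Ollama's default, and the model tag assumes you already ran ollama pull llama3.1:8b:

          import OpenAI from "openai";

          // Ollama exposes an OpenAI-compatible API on localhost:11434,
          // so the same client code works for both the hosted and the local model.
          const client = new OpenAI({
            baseURL: "http://localhost:11434/v1",
            apiKey: "ollama", // ignored by the local server, but the SDK wants something
          });

          const res = await client.chat.completions.create({
            model: "llama3.1:8b", // whatever model you pulled
            messages: [{ role: "user", content: "Create a mind map for linear algebra" }],
          });
          console.log(res.choices[0].message.content);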

  • dr_dshiv 5 hours ago

    Would be great to have a video of it working so I can see what it does before installing. Thanks!

    Also, I’m generally interested in UI/UX variations around LLMs. Hoping to see a round-up of examples like this at some point.

  • afro88 5 hours ago

    How do you generate / validate the links to learn more? If they're generated by the LLM there's a really high chance they are hallucinated and won't work.

    • downboots 2 hours ago

      Human intervention would be necessary for any kind of reliable knowledge curation, if that were the goal. https://en.wikipedia.org/wiki/World_Brain

    • arthurtakeda 5 hours ago

      To be quite honest, I don't. I just manually tested with different topics and got working links almost every time, but I agree, that can definitely happen.

      • afro88 5 hours ago

        You could have a second automatic step that searches the web for the title of the link and validates it or gets the correct one.

        Cool project!
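
        Even something as simple as a HEAD request per link would catch most dead ones. A rough sketch of the idea (function names made up, and some sites reject HEAD, so a GET fallback may be needed):

            // Check whether a suggested link actually resolves.
            async function linkIsLive(url: string): Promise<boolean> {
              try {
                const res = await fetch(url, { method: "HEAD", redirect: "follow" });
                return res.ok;
              } catch {
                return false;
              }
            }

            // Drop dead links before rendering the node; a web search on the link
            // title could then fill in replacements for the ones that fail.
            async function filterLinks(urls: string[]): Promise<string[]> {
              const alive = await Promise.all(urls.map(linkIsLive));
              return urls.filter((_, i) => alive[i]);
            }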

        • arthurtakeda 5 hours ago

          good idea, I'll add that to the improvements list, thanks!

  • artur_makly 3 hours ago

    nice work!

    Hmm... perhaps there could be some compounded synergies with my https://VisualFlows.io

    // Also made with React Flow. I will DM you.

    • pryelluw 3 hours ago

      Could you add a screenshot based demo or example to the main page?

      • artur_makly 2 hours ago

        I'm guessing you were referring to the OP's app? He just added the video demo. But if not, our main page IS the app ;-)

  • hmottestad 5 hours ago

    Do you have any examples to look at? All I can see in the readme is a page with a search field and no mind maps.