Show HN: I built a synthesizer based on 3D physics

(anukari.com)

462 points | by humbledrone a day ago

108 comments

  • AaronAPU 15 hours ago

    Glad I’m not the only audio developer around here.

    The landing page needs an immediate audio-visual demo. Not an embedded YouTube video, but videojs or similar. Low friction: visitors should get what it sounds and feels like immediately.

    My 2 cents

    • kookamamie 10 hours ago

      Exactly. Had to scroll for ages to find anything to do with demo audio. A good demo song/track should be the first thing on the page, I think.

    • senbrow 13 hours ago

      1000% - I needed to be able to find something listenable

    • jahnu 6 hours ago

      > Glad I’m not the only audio developer around here

      There are a few of us :)

      This synth is very cool. Highly original. Kudos.

  • deng 8 hours ago

    This looks incredible! But to be honest, it also looks incredibly daunting.

    As a programmer and former physicist, I'm fascinated. As a musician, I'm not so sure. At the moment, my feeling is that your landing page primarily addresses me as a programmer/physicist, and I'll definitely try it. But if you also want to sell this to musicians, what is really missing is more complex sound examples, like a tour of the existing presets and how you can manipulate them. There is your introduction video, but to be perfectly honest, the sounds you feature there do not really impress me. From what I can hear, it sounds very much like the already existing physical modeling plugins, for instance AAS Chromaphone, and I already have plenty of those and they are much easier to use (also, their product page is a good example of how to sell a product to musicians). I can see of course that your VST allows me to dive much deeper into the weeds, and as a programmer/physicist I'm interested, but the musician in me is doubtful the invested work will be worth it.

    Again, this looks awesome, and I really hope you can make this into a business, so please see my critique above as encouragement.

    • deng 4 hours ago

      OK, I've played around with the demo and insta-bought it, if just to support you. This is incredible work.

      • polotics 2 hours ago

        Same here, and it is excellent. I am getting a few buffer-drop clicks on an M3 MBP; reducing the polyphony solved it, but just in case, to the author: how much more efficiency do you think you can still squeeze out of this amazing plugin?

  • airstrike 16 hours ago

    Really cool stuff! I would suggest putting a 60-second video at the very top of the page that stitches together short clips of the many ways it is awesome.

  • nayuki 20 hours ago

    This reminds me of the reverse, where music drives 3D animations. I remember Animusic from the early 2000s.

    https://en.wikipedia.org/wiki/Animusic , https://www.animusic.com/ , https://www.youtube.com/results?search_query=animusic , https://www.youtube.com/@julianlachniet9036/videos

    • humbledrone 19 hours ago

      I'm a huge fan of Animusic. I remember seeing it for the first time in some big fancy mall in LA, where they had it projected on a wall, and I was blown away. It was absolutely an inspiration! Animusic-type ideas are a big part of why I made the 3D graphics fully user-customizable, for anyone who wants to go deep down that rabbit hole.

    • omneity 5 hours ago

      This rings such a vague and distant bell...

      I'm several videos in and totally hooked, thank you for sharing. This would be an amazing interactive music app in VR, both to perform and to record trippy music videos.

    • mjcohen 19 hours ago

      I have the first two Animusic reels (vhs and dvd) and thought they were great. Unfortunately, the creator scammed people by taking money for Animusic 3 and then not making anything.

      Most of them are on youtube.

  • tarentel a day ago

    Not sure I'll ever use this as it seems like a lot of work but wanted to say thank you for allowing me to download a demo without giving an email.

    Also, even though I said I wouldn't use it, something that would be nice is a master volume, maybe I missed it. I often use VSTs standalone and being able to change the volume without messing with the preset would make it a bit easier to use.

    Definitely the most interesting synth I've ever seen.

    • humbledrone a day ago

      Thanks, yeah, it really should have master volume -- you didn't miss it, it's just not there yet!

  • sitkack 21 hours ago

    I would love to watch (and listen to) a discussion between you and Noah from Audiocube, a 3D spatial DAW: https://news.ycombinator.com/item?id=42877399 https://main.audiocube.app/

    • humbledrone 19 hours ago

      I have been peripherally aware of Audiocube for a while, and it seems ridiculous that he and I have not interacted in any way. Maybe I'll bug him sometime. :)

  • ssfrr 17 hours ago

    I’m very curious about your experience doing audio on the GPU. What kind of worst-case latency are you able to get? Does it tend to be pretty deterministic or do you need to keep a lot of headroom for occasional latency spikes? Is the latency substantially different between integrated vs discrete GPUs?

    • humbledrone 15 hours ago

      Short answer: it has been a big pain in the butt. The GPU hardware is mostly really great, but the drivers/APIs were not designed for such a low-latency use case. There's (for audio) a large overhead latency in kernel execution scheduling. I've had to do a lot of fun optimization in terms of just reducing the runtime of the kernel itself, and a lot of less-fun evil dark magic optimization to e.g. trick macOS into raising the GPU clock speed.

      Long answer: I've written a fair bit about this on my devlog. You might check out these tags:

      https://anukari.com/blog/devlog/tags/gpu https://anukari.com/blog/devlog/tags/optimization
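
The amortization math here can be made concrete with a toy model. Both cost constants below are illustrative assumptions for the sketch, not Anukari's measured numbers:

```python
# Toy model of GPU kernel-launch overhead amortization.
# Both constants are illustrative assumptions, not measurements.
DISPATCH_OVERHEAD_US = 50.0    # assumed fixed cost per kernel launch
PER_SAMPLE_COMPUTE_US = 0.5    # assumed simulation cost per audio sample

def total_time_us(samples_per_dispatch: int, total_samples: int) -> float:
    """Time to produce total_samples when each kernel launch
    advances the simulation by samples_per_dispatch samples."""
    dispatches = total_samples / samples_per_dispatch
    return dispatches * DISPATCH_OVERHEAD_US + total_samples * PER_SAMPLE_COMPUTE_US

naive = total_time_us(1, 512)      # one launch per sample: overhead dominates
batched = total_time_us(512, 512)  # one launch per audio buffer

# 512 launches pay 512 * 50 us of pure overhead; one launch pays 50 us.
assert naive > batched
```

The fixed dispatch cost is why the whole buffer's worth of simulation steps has to run inside a single kernel launch.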

      • ssfrr 13 hours ago

        Thanks for the extra info, I read through some of your entries on GPU optimization and it definitely seems like it's been a journey! Thanks for blazing the trail.

  • florilegiumson 17 hours ago

    Really cool to see GPUs applied to sound synthesis. Didn’t realize that all one needed to do to keep up with the audio thread was to batch computations at the size of the audio buffer. I’m fascinated by the idea of doing the same kind of thing for continua in the manner of Stefan Bilbao: https://www.amazon.com/Numerical-Sound-Synthesis-Difference-...

    Although I wonder if mathematically it’s the same thing …
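
The core of a Bilbao-style finite-difference scheme is small. A minimal sketch, assuming an ideal lossless string with fixed ends (real schemes add stiffness, loss, and more careful boundary and initialization terms):

```python
import numpy as np

# Explicit finite-difference scheme for the 1D wave equation u_tt = c^2 u_xx
# on a string with clamped ends. lam = c*dt/dx must be <= 1 for stability.
N = 64              # spatial grid points
lam = 0.95          # Courant number, just inside the stability limit
u = np.zeros(N)
u[N // 2] = 1.0     # "pluck": displace the middle of the string
u_prev = u.copy()   # start from rest

pickup = []
for _ in range(200):
    u_next = np.zeros(N)          # endpoints stay clamped at zero
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + lam**2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next
    pickup.append(u[N // 4])      # read displacement at one point, like a mic

assert np.isfinite(u).all() and max(abs(s) for s in pickup) > 0
```

Each time step only touches neighboring grid points, which is exactly the kind of data-parallel update that maps well onto a GPU.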

  • adzm 18 hours ago

    Note they are referring to Mick Gordon, who is notable for the recent DOOM soundtracks. DOOM Eternal has a truly phenomenal score. "Mick Cormick" is a typo, I believe.

    Congratulations!!

  • sunray2 a day ago

    Thank you for this, it looks very cool!

    Reminds me of Korg's Berlin branch with their Phase8 instrument: https://korg.berlin/ . Life imitates art imitates life :)

    I highly support and encourage this. Is there a way I could contribute to Anukari at all (I'm a physicist by day)? These kinds of advancements are the stuff I would live for! However I should stay rooted in what's possible or helpful: I'm not sure if this is open-source for example. As long as I could help, I'm game.

    • humbledrone a day ago

      For the foreseeable future I'm just going to be working on stability/performance, but eventually I will get back to adding more cool physics stuff. It's not open-source, but certainly I'd enjoy talking to a real physicist (I'm something a couple notches below armchair-level). Hit me up at evan@anukari.com sometime if you like!

      • sunray2 21 hours ago

        Thanks, will hit you up later!

        I was using the demo just now: the sounds you get out of this are actually better than I expected! And I see what you meant in the videos about intuitive editing, rather than abstract.

        Although, I was often hitting 100% CPU with some presets, with the sound glitching accordingly, so I could only experiment in part. I'm on an M1 Pro. Initially I set a 128-sample buffer size in Ableton, but most presets were glitching; setting it to 2048 helped, though that does seem a bit high. Maybe my audio settings are incorrect? I can give more info later if it helps you.

        • humbledrone 19 hours ago

          Yeah performance at low buffer sizes is a big challenge, generally I recommend 512 or higher, which I know is not great but right now it's the most practical thing. The issue is that the computation is all done on the GPU, and there's a round-trip latency that has to be amortized. One day I'd like to convince Apple to work on the kernel scheduling latency...
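
The recommendation follows from standard audio-callback arithmetic (nothing Anukari-specific): each callback must finish within buffer_size / sample_rate, so a fixed GPU round-trip cost eats a much larger share of the budget at small buffer sizes:

```python
SAMPLE_RATE = 48_000  # Hz; a common DAW setting

def callback_budget_ms(buffer_size: int) -> float:
    """Wall-clock time one audio callback may take before the next is due."""
    return buffer_size / SAMPLE_RATE * 1000

assert round(callback_budget_ms(128), 2) == 2.67   # under 3 ms for everything
assert round(callback_budget_ms(512), 2) == 10.67  # 4x the headroom
```

At 128 samples, a round-trip latency of even a millisecond or two leaves almost nothing for the simulation itself; at 512 the same fixed cost is a small fraction of the budget.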

  • 1R053 8 hours ago

    I think to be useful it needs a playing mode that is always musical / in tune.

    Not yet sure how to really do it, but one concept I like from NI plugins is that you have multiple keyboard zones: one zone is for notes, others are e.g. for patterns or styles. Imagine a guitar where one zone is for the chord type and tone, another for the striking pattern...

    The challenge here is probably the resonance algorithm for multiple systems based on multiple notes... Maybe the piano concept would be handy here: imagine that instead of having 3 strings per key like a piano, the instrument has one system per key, and the systems excite each other via air or direct resonance points. The systems should be automatically tuned relative to one reference system (e.g. using automatic string length or tension scaling).
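
The "automatic string length or tension scaling" part is well-trodden physics: for an ideal string f = sqrt(T/mu) / (2L), so with tension and density fixed, pitch scales inversely with length, and equal temperament becomes a 2^(-n/12) length scaling. A sketch with illustrative numbers (not from any real instrument spec):

```python
# Ideal-string frequency: f = sqrt(T / mu) / (2 * L).
def string_freq(length_m: float, tension_n: float, mu_kg_per_m: float) -> float:
    return (tension_n / mu_kg_per_m) ** 0.5 / (2 * length_m)

# Illustrative guitar-ish numbers.
f0 = string_freq(0.65, 60.0, 0.0006)

# Halving the length doubles the pitch (one octave up).
assert abs(string_freq(0.325, 60.0, 0.0006) - 2 * f0) < 1e-9

# Equal-temperament "auto-tuning" by length: key n gets 2^(-n/12) of the
# reference length, with tension and density held fixed.
lengths = [0.65 * 2 ** (-n / 12) for n in range(13)]
assert abs(string_freq(lengths[12], 60.0, 0.0006) - 2 * f0) < 1e-9
```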

    Anyway, amazing work and having it on GPU allows this really to scale.

  • imhoguy 19 hours ago

    This is so cool and has unlimited potential, like you could model real instruments, e.g. guitar to experiment with resonant chamber shapes, materials etc. Can't upvote enough on good old perpetual licensing model!

  • michaelhoney 18 hours ago

    So many of us have ideas for something cool and never build them. You did it. I salute you, you madman

  • corytheboyd a day ago

    Whoa this looks really cool! I love how you made something physically 3D to stand out in a world full of 2D knobs and sliders… but it still has 2D sliders because those work the best for dialing things in with precision.

  • modeless 16 hours ago

    Love physics based audio! Using the GPU is a great idea.

    Another physical audio simulation I like is the engine sound simulator made by AngeTheGreat: https://youtu.be/RKT-sKtR970?si=t193nZwh-jaSctQM

    • humbledrone 16 hours ago

      His stuff is so incredibly cool. He has a video on physical modeling for trumpets using the GPU and for a second I thought he might be building a competitor! :)

  • smolder 10 hours ago

    I have mixed feelings about all of my supposed clever ideas being executed on by other people way ahead of me, but this is cool and you have my respect.

    • devrandoom 5 hours ago

      Haha, I know that feeling. Usually I get an idea, wait 4 years and someone else will have done it.

      Even "bad" ideas. I had an idea in the mid-90s about mass internet surveillance, but then I thought it would be so disgusting that no one would do it. I was naive.

  • hamoid 18 hours ago

    Looks very fun :-) Does anyone notice issues with the sound quality? In many of the examples I hear clicking: sometimes as if the attack is too high, or as if there is some kind of aliasing or sample rate issue, or just clipping. Probably noticeable with headphones. For instance in the announcement video at 3:11 or in the "J.S. Bach, Prelude in C Major (BWV 846)" video between 4.4s and 7.2s. It's somewhat visible if I load that audio in Audacity and turn on the spectrogram view with these settings: Logarithmic, 200 to 6000 Hz. Algorithm: Reassignment, 1024, Blackman-Harris, 1. Colors: 50, 40, 50.

    What's odd is that I hear the glitches in Firefox and in the file downloaded with yt-dlp, but not in Chromium. Is Google serving me bad audio on purpose?

    Correction: some videos also have glitches on Chromium.

    • humbledrone 18 hours ago

      Screen-recording Anukari has been a bit of a challenge, as OBS works best while using GPU encoding, and also seems to do things that the GPU doesn't like in general (and Anukari uses the GPU). I suspect what you're hearing in the videos has to do with that. But also I'm sure that the model for the mic compression could be improved, and I'm not sure about the default attack time, etc.

      • fc417fc802 17 hours ago

        If screen recording is actually the thing causing the issue you might try CPU encoding with one of the fast lossless codecs and doing the "real" encoding in a second pass later. As a bonus, software encoding should also give a higher quality result. That does require an SSD and quite a bit of free space though.

  • siavosh 18 hours ago

    So beautiful - I wonder what kind of an instrument an AI can build, creating sounds never before heard...

  • gregschlom a day ago

    Absolutely awesome! I know nothing about music production but I want to play with it just for fun. Maybe a very simplified, web-based version for people who just want to play a bit? Would be awesome.

    Congrats on the hard work and the launch, in any case!

    Edit: I see you have a demo mode, that's great! Exactly what I was looking for

  • danw1979 8 hours ago

    Sounds like a lot of work has gone into the infrastructure of getting this to work on the GPU.

    Did you find it more interesting doing this or the physics simulation ?

    Have you considered simulating strings or moving air too ?

  • nyanpasu64 11 hours ago

    The sounds seem to mostly be modulated sines with limited timbral variety? I'm not sure how https://youtu.be/NYX_eeNVIEU?t=179 got harmonic series out of the building blocks.

  • bufferoverflow 20 hours ago

    Dang it. I am working on the same thing, but in 2D.

    • sunray2 20 hours ago

      Don't be discouraged! It might even be that 2D is better than 3D in this case: it's all about how it sounds, right? And if a 2D simulation can be less expensive than a 3D while sounding just as good or better, it works in your favour!

      I think that's the real key to this stuff: what makes these things actually sound good?

    • IshKebab 20 hours ago

      I think 2D is probably the better move for this thing. 3D doesn't really open many possibilities that you can't do in 2D, and it adds loads of UI awkwardness.

    • humbledrone 19 hours ago

      There are a lot of advantages to 2D -- you could simulate more objects and more complex interactions with lower computational demands, and as other comments say, it would likely be much easier to build a good GUI. Think about how many compressor VSTs are out there, and still people keep making them! And a 2D Anukari could differ from 3D Anukari far more than most compressors differ from one another.

    • chadcmulligan 17 hours ago

      3D is cooler, but 2D is easier for people to use; there was some research on it, though I don't have it at hand.

      edit: an HN thread: https://news.ycombinator.com/item?id=19961812

  • an_aparallel 20 hours ago

    Hey Evan, just wondering: can you import 3D models into this environment? I'm still pining for a less "code"-driven environment for this than Max/MSP Modalys.

    • humbledrone 19 hours ago

      You can replace all the 3D models and skyboxes. Everything is in open formats, wrapped up in zip files. See:

      https://anukari.com/support/faq#custom-skyboxes https://anukari.com/support/faq#custom-skins

      AFAIK nobody has attempted this yet, so the write-up might not be perfect. If you try it, let me know how it goes!

      • an_aparallel 17 hours ago

        Awesome. Say I import a bugle model, for argument's sake -- does Anukari support a wind stream? I don't see air/wind listed as an object like you see bow and pluck.

        • humbledrone 16 hours ago

          No wind-type model so far, though the bow model can do some rather flute-y sounds (in fact for the current bow model, I might even argue that it is better for pan flute stuff than sounding like a realistic bow).

          • an_aparallel 11 hours ago

            Thanks Evan. Is that because wind is complex to model?

  • dylanz 18 hours ago

    This is insane. I've used tons of virtual synths in my life but this is by far the coolest I've ever seen. Mick's video was amazing!

  • KeplerBoy 6 hours ago

    I don't have the first idea about audio but damn this looks powerful.

  • brookst 13 hours ago

    I’m so tempted to buy, but some info is missing on the website:

    - If I buy once can I run it on both my Windows desktop and MacBook travel computer?

    - If so, are files compatible between them?

    - What are GPU requirements on Windows? I’m sure it scales, but is a 3080 overkill or not enough?

    • mutagen 11 hours ago

      My account shows 3 devices available to install on and I can disable computers on demand. Runs well on my M1 and on my 3060 and even all but the most demanding of assemblies on my little work laptop with onboard Intel graphics.

      I assume files are compatible, presets are the same on both MacOS and Windows.

  • ziddoap 15 hours ago

    I'm not really familiar with audio stuff, but holy do I ever appreciate the write-ups you've done. This is absolutely fascinating stuff. I'm eager to keep reading. The video from Mick Gordon was awesome, too.

    Congratulations on the launch, and best of luck!

  • jbverschoor 10 hours ago

    Man. “Sheet music is not portable”. I immediately thought about the Apple Vision Pro.

    The battery-less medium is something we desperately try to mimic.

  • junon 21 hours ago

    Ha! I've had this idea for ages but never had the urge to make it. I'll have to play with it, thanks for sharing!

  • peteforde 14 hours ago

    I am your target market, and I can't wait to try out the demo once I'm back from Superbooth and allowed to be distracted even a little.

    ... but I wanted to say that even with all of the glowing feedback, US$70 for a beta v1 soft synth is a big enough ticket that it will be off-putting to some and difficult to afford for others. Yes, there are many [much] more expensive virtual instruments, and this occupies a pretty unique space. But if you're open to feedback, this is my initial gut reaction.

    One thing I am surprised by is that there's no mention of VR/AR ambitions. When I fantasize about 3D instruments, I do so in the context of wanting to interact with them in a space I inhabit. Does this speak to you as well?

  • rvba a day ago

    The website should have a better youtube video and at the beginning

    • humbledrone 21 hours ago

      You are 100% correct. It turns out that I am an OK engineer, and a terrible marketer. There is a LOT that I need to improve on the site and the videos.

      • mkl 19 hours ago

        Every one of those pretty-looking screenshots should have a play button with a few seconds of audio. It's really strange to market a synthesiser with visuals instead of sound.

      • tbalsam 19 hours ago

        Yes, it's a synthesizer -- you may know it inside and out, but having demo videos showing what it can do will help people with no context get that quick "ahhh, that makes sense" moment from things. :)

      • pierrec 14 hours ago

        >a terrible marketer

        I wouldn't go so far, apart from this point the landing page is excellent.

      • shannifin 21 hours ago

        Some little audio examples would also be nice so visitors don't have to scroll through the video to hear them.

        Still, awesome work!

        • imhoguy 20 hours ago

          Yeah, all the cool 3D pictures should play demo videos on click!

  • erwincoumans 13 hours ago

    Very nice to see. Maybe nice in VR, streaming to Quest 3/Apple Vision Pro (OpenVR/WebXR?)

  • titaphraz 18 hours ago

    Seriously cool! A Linux build on the horizon?

  • dfedbeef a day ago

    Any chance we'll get a Linux VST or CLAP?

    • humbledrone a day ago

      Linux: I very much want to do this, but unfortunately it's lower priority than getting things rock-solid on windows/mac. It's not a gigantic amount of work, but it's not trivial. Hopefully I can do it once things calm down with the Beta.

      CLAP: I'm using the JUCE framework for plugin integrations, which doesn't currently support CLAP. But their roadmap says that the next major version will support CLAP, and I will definitely implement that in Anukari. Not sure when JUCE 9 comes out though, it could be a while.

      • dfedbeef an hour ago

        Hopefully they make it easy. WINE has some issues with JUCE 8, I think.

        It seems like VST problems with WINE always come down to issues with license auth and graphics libs.

      • thrtythreeforty 20 hours ago

        There's an external plugin to emit a CLAP plugin for JUCE 8: https://github.com/free-audio/clap-juce-extensions

        It works quite well, but it's also reasonable to wait for official framework support.

        • humbledrone 19 hours ago

          Thanks, that could be good if the JUCE support is going to be way off.

  • royal__ 18 hours ago

    This is crazy, incredible work.

  • ghawkescs 21 hours ago

    Incredible work and a very creative product. I can't wait to see what is created using Anukari.

  • ww520 21 hours ago

    Wow. This is so cool. It opens up a different approach to the problem.

  • throwpoaster a day ago

    Very neat!

    Is the simulation deterministic?

    • humbledrone a day ago

      Yep! I have a lot of unit/integration tests that were a lot easier to write and more reliable by making the simulation fully-deterministic. It does produce slightly different results on different GPUs due to small differences in the FP operations. (For this application it's really beneficial to let the compiler go crazy with FP re-ordering to get speed.)
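
The floating-point point is easy to demonstrate: addition is not associative, so any reordering of sums (by a compiler, or a GPU reduction) can change the low bits of the result:

```python
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # evaluates to 0.6000000000000001
right = a + (b + c)  # evaluates to 0.6

# Same mathematical sum, different rounding order, different bits.
assert left != right
```

This is why the same preset can sound bit-identically deterministic on one GPU but slightly different across GPU models.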

      • fsckboy 21 hours ago

        does it benefit from or even require top end GPUs to get the best results?

        • humbledrone 19 hours ago

          mutagen sums up my experience pretty well -- I have tested it on a laptop with a pretty wimpy Intel Iris chipset and it definitely works, but you might be limited in terms of how complex of a preset you can run. But there is a LOT of fun to be had with small presets so it may still be worthwhile. Be sure to install the absolute most up-to-date drivers from Intel directly, and use a larger buffer size.

        • mutagen 19 hours ago

          I've been able to run it on an Intel laptop with integrated video. I haven't been able to test the most complex models / presets, I might give that a shot this weekend and see where it falls apart.

  • Eduard 18 hours ago

    Is this about sound generation? Because I didn't find any sample sounds in this wall of text and pictures.

    • humbledrone 17 hours ago

      Sorry about that. It's just me working on it, and so far my personal tenet is that the product itself is the top priority, and all else including marketing comes second, so none of the website/youtube/etc are as good as I'd like. Possibly I'll soon have money to hire help with some of that, or I'll get the product to a place where I'm happy enough to work more on marketing myself.

  • pjbk a day ago

    Cool idea. Sounds like AAS Chromaphone on steroids.

  • akomtu 15 hours ago

    At a glance, this looks like a bunch of coupled oscillators. A natural extension of this idea is strings: a 1D array of oscillators modelling a wave equation. For example, a piano sound can be modelled by attaching a basic oscillator to one end of a string and a mic to the other end. The string and the oscillator push each other, creating the piano tone. Real pianos use 3 such strings with different properties.

    Another idea: what if you make a circular string and attach 1 or more oscillators at random points? Same idea as above, but more symmetric. This "sound ring" instrument may produce unreal sounds.
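
The 1D-array-of-oscillators idea can be sketched directly as a driven mass-spring chain; all constants below are arbitrary illustrative choices, not a tuned instrument:

```python
import math

# Driven mass-spring chain: mass 0 is the "oscillator", the last mass is
# clamped, and the "mic" reads displacement near the clamped end.
N = 32        # number of masses
K = 0.5       # spring constant between neighbors
DAMP = 0.999  # velocity damping per step
pos = [0.0] * N
vel = [0.0] * N

mic = []
for step in range(400):
    pos[0] = math.sin(2 * math.pi * 0.02 * step)  # sine drive at one end
    for i in range(1, N - 1):
        # each mass is pulled toward both of its neighbors
        force = K * (pos[i - 1] - pos[i]) + K * (pos[i + 1] - pos[i])
        vel[i] = (vel[i] + force) * DAMP          # semi-implicit Euler, dt = 1
    for i in range(1, N - 1):
        pos[i] += vel[i]
    mic.append(pos[N - 2])

assert all(math.isfinite(s) for s in mic) and any(abs(s) > 1e-6 for s in mic)
```

The circular "sound ring" variant is the same loop with wrap-around indexing (`(i + 1) % N` and `(i - 1) % N`) instead of clamped ends.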

  • chaosprint 12 hours ago

    very impressive. is it built with juce?

  • sneak 10 hours ago

    Consider putting the purchase CTA inside the app, and just having a giant DOWNLOAD NOW button above the fold. The download for the demo and the real one should be the same download.

    Then I don’t have to make a buying decision up front - I can get it on my computer and running first in all cases.

  • fractallyte 21 hours ago

    I second the recommendation: even if you don't have a Twitter/X account, find some way to watch Mick Gordon's session!

    I find it hysterically funny, but at the same time, it really shows what this synth is capable of.

    Excellent!

  • exodust 12 hours ago

    I like the perpetual license, no AI, and customisable 3D models and animations. That last feature hopefully opens up the potential for creatively synchronised graphics, such as animating expression -- say, a mouth shape on a 3D face responding to modulation. I wonder if the animation needs to be a fixed number of frames or a fixed length?

  • nprateem 12 hours ago

    Your beta video assumes I know why I should care and jumps straight into technicals.

    This is potentially new to producers. Tell them why they should care first.

  • carterschonwald a day ago

    This is super super duper cool. Thx for sharing

  • TheOtherHobbes 19 hours ago

    Fun :)

  • drcongo a day ago

    I like the look of this, do you have any plans to release it for iPad?

    • humbledrone 21 hours ago

      The thought has definitely occurred to me. It's always been in the back of my mind on the "if it shows some success..." list. Glad to hear there's interest, I think it would be really fun if I implemented proper multi-touch. There are some other details I'd need to think through, though, since right now it mostly assumes you have a MIDI keyboard, but on an iPad it's just begging for touch controls for a lot of that stuff.

      • endofreach 21 hours ago

        I'd focus on ipad & an awesome multitouch experience. App store sales are easier & i'd bet apple would spotlight it.

  • yapyap a day ago

    It definitely looks really cool! As an outsider to audio stuff with an okay amount of knowledge, I'm certainly curious about the workflow.

  • moffkalast 4 hours ago

    Ah yes, the new Incredible Machine sequel is looking good.

  • anigbrowl 19 hours ago

    Not supported: Intel-based Macs

    Boo

    • TheOtherHobbes 19 hours ago

      I don't think Intel Macs have the grunt for this. Physical modelling of complex networks is pretty intensive.

      You can take it a stage further and model networks of complex shapes like metal plates. That gets even more interesting because you get multiple resonant modes.

      In the limit you could use finite element modelling to create precise simulations of acoustic instruments - like all of the strings in a piano, all of the dampers, the resonator, and the wooden enclosure.

      But that's a brute-force way to do it, and there isn't nearly enough compute available to make it happen in real time. (You might be able to do it on a supercomputer. I'm not aware of anyone who's tried.)

      • anigbrowl 18 hours ago

        I feel like you're overlooking the fact that it's GPU-based (sorta the whole point), and that's why it also runs on Windows.

  • newobj a day ago

    Whoa this is sick

  • badmonster a day ago

    looks really cool! congrats!

  • vid43 a day ago

    This is so cool.