Tools like Suno are fundamentally enabling. I'm about 40 years old and never "had the music" - not for lack of trying (music lessons at a young age)... but I could never carry a tune or keep rhythm. I suppose it's what being dyslexic feels like. If I had been educated in a culture where music was fundamentally as important as reading or math, I suppose I would have spent enough hours on it to eventually be passable... but I got frustrated, and the music lessons stopped. But that doesn't mean I stopped appreciating or wanting to make music!
And then comes Suno (and OpenAI's Jukebox before that), and it felt like my brain exploded... like the classic scene in a superhero movie when the hero is given their power. Is my music good? No - but I spent years writing and fashioning poetry, and all of a sudden I can put that to music... it's hard to explain how awesome that feels. And I love using the tools, they keep getting better, and it's been fundamentally empowering. I know it's easy to say generative art is generative swill... but "learning Suno" is no different than "learning guitar".
If you enjoy it, I'd leave it at that - that's all that matters.
It's a pretty absurd claim to say that learning Suno is no different than learning a musical instrument. My 8-year-old nephew was cranking out "songs" in Suno within an hour of being introduced to it. Reminds me of when parents were super impressed that their 3-year-old could use an iPad.
Generative tools (visual, auditory, etc.) can serve as powerful tools of augmentation for existing creators. For example, you've put together a song (melody/harmony) and you'd like to have AI fill out a simple percussive section to enrich everything.
However, with a translation as vast as "text" -> "music" in terms of medium, you can't really insert much of yourself into a brand-new piece outside of the lyrics, though I'd wager 99% of Suno users are too lazy to even write those themselves. I suppose you can act as a curator reviewing hundreds of generated pieces, but that's a very different thing.
I always get a little confused when I hear non-musicians say that something like Suno is empowering when all they did was type in, "A Contrapuntal hurdy-gurdy fugue with a backing drum track performed by a man who swallowed too many turquoise beads doing the truffle shuffle while a choir gives each other inappropriate friendly tickles".
> My 8-year-old nephew was cranking out "songs" in Suno within an hour of being introduced to it. Reminds me of when parents were super impressed that their 3-year-old could use an iPad.
You imply "it is Prompt -> Song", but in reality it is "Prompt -> Song -> Reflection -> New Prompt -> New Song...". It is a dialogue. And in a dialogue you can get to places where neither of you could go alone.
As software developers we know that multiple people contribute to a project inside a git repo, and if you take one person's work out, it does nothing useful by itself. Only when they come together do they make sense. What one dev writes builds on what other devs write. It's a recursive dependency.
The interaction between human and AI can take a similar path. It's not a push-button vending machine for content. It is like a story writing itself, discovering where it will end up along the way. The credit goes to the process, not any one in isolation.
It’s really not. It’s like having interdimensional Spotify where you can describe any song and they will pull it up from whatever dimension made it and play it for you. It may empower you as a consumer but it does not make you a creator.
I dunno, based on Spotify's recommendation engine, AI is absolutely sufficient to make anyone a creator ;P
Almost all naturally-generated music is derivative to one degree or another. And new tools like AI provide new ways to produce music, just like all new instruments have done in the past.
Take drum and bass. Omni Trio made a few tracks in the early 90s. It was interesting at the time, but it wasn't suddenly a genre. It only became so because other artists copied them, then copied other copies, and more and more kept doing it because they all enjoyed doing so.
Suno ain't gonna invent drum and bass, just like drum machines didn't invent house music. But drum machines did expand the kinds of music we could make, which led to house music, drum and bass, and many other new genres. Clever artists will use AI to make something fun and new, which will eventually grow into popular genres of music, because that's how it's always been done.
You can do exactly what you describe with interdimensional Spotify. People can describe all kinds of fun and interesting things that can be statistically generated for them, but they still didn't make anything themselves, unlike in your other examples of using new tools.
Japanese oldies became a trend for a while - the people who found and repopularised the music don't get to say they created it and how it's so awesome to have mastered the musical instrument of describing or searching for things. Well, of course they can, but forgive me if I don't buy it.
Maybe when there is actual AGI then the AI will get the creative credit, but that’s not what we have and I still wouldn’t transfer the creative credit to the person who asked the AGI to write a song.
> Maybe when there is actual AGI then the AI will get the creative credit, but that’s not what we have and I still wouldn’t transfer the creative credit to the person who asked the AGI to write a song.
When artists made trance, the creative credit didn't go to Roland for the JP-8000 and 909, even though Roland was directly responsible for the fundamental sounds. Instead, the trance artists were revered. That's good.
> Japanese oldies became a trend for a while - the people who found and repopularised the music don't get to say they created it and how it's so awesome
I'd bet there are modern artists who sampled that music and edited it into very-common rhythm patterns, resulting in a few hit songs (i.e. The Manual by The KLF).
> Take drum and bass. Omni Trio made a few tracks in the early 90s. It was interesting at the time, but it wasn't suddenly a genre. It only became so because other artists copied them, then copied other copies, and more and more kept doing it because they all enjoyed doing so.
Musicians don't just copy; everyone adds something new. It's like programmers taking some existing algorithm (like sorting) and improving it. The question is, can a Suno user add something new to the drum-and-bass pattern, or can they only copy? Also, as it uses a text prompt, I cannot imagine how you would even edit anything. "Make note number 3 longer by a half"? It must be a pain to edit a melody this way.
> Musicians don't just copy; everyone adds something new
Not everyone. I've followed electronic music for decades, and even in a paid-music store like Beatport, most artists reproduce what they've heard, and are often just a pale imitation because they have no idea how to make something better. That's the fundamental struggle of most creatives, regardless of tool or instrument.
I haven't tried Suno, but I imagine it's doing something similar to modern software: start with a pre-made music kit and hit the "Randomize" button for the sequencer & arpeggiator. It just happens to be an "infinite" bundle kit.
Well, I made songs with my lyrics that bring tears and memories to my audience. I don't know what other creator things you are talking about, but this to me is creating.
Sampling is not just cutting a fragment from a song and calling it a day. Usually (if you look at Prodigy's tracks for example) it includes transformation so that the result doesn't sound much like the original. For example, you can sample a single note and make a melody from it. Or turn a soft violin note into a monster's roar.
As for DJing, I would say it is a pretty limited form of art, and it requires a lot of skill to create something new this way.
Yes, that's what people are doing with AI music as well. Acting like there's some obvious "line" of what constitutes meaningful transformation is silly.
clicking on a button until you like what you hear is not "making music". I have nothing against these tools, but the hubris of the people using them is insane
I just tried it out because of the discussions on this thread, and I have to say I land squarely on the side of: this is neat, but it is not artistry. Every little thing I generated sounded like things I've heard before. I was trying hard to get it to create something unique, using obscure language or ideas. It didn't even get close to something interesting in my opinion; every single output was as if you combined every top 40 song ever made and then only distilled out the parts relevant to certain keywords in a prompt.
These tools will probably be great for making music for commercials. But if you want to make something interesting, unique, or experimental, I don't think these are quite suited for it.
It seems to be a very similar limitation to text-based llms. They are great at synthesizing the most likely response to your input. But never very good at coming up with something unique or unlikely.
What button? Again with the vending machine idea. No, it's language prompting, language has unbounded semantic space. It's not one choice out of 20, it's one choice out of countless possibilities.
I give my idea to the model, the model gives me new ideas, I iterate. After enough rounds I get some place where I would never have gotten on my own, nor would the model have gotten there without me.
I am not the sole creator, neither is the model, credit belongs to the Process.
So if I have a melody in my head, how do I make AI render it using language? Even simpler, if I can beatbox a beat (like "pts-ts-ks-ts"), how do I describe it using language? I don't feel like I can make anything useful by prompting.
You record yourself whistling it and put it in as an input.
I've been recording myself on guitar and using Suno to turn it into professional-quality recordings with a full backing band.
And I'm not trying to sell it; I just like hearing the ideas in my head turned into fully fleshed-out music of higher quality than I could produce with 100x more time to invest in it.
This is closer to actually creating music rather than just generating it. However, this cannot be done with a text prompt, which the comment above claimed is expressive enough.
Actually, having an "autotune" AI that turns poor, out-of-key singing into a beautiful melody while keeping the voice's timbre would not be bad.
Well then I have news for you... That's what Suno is. You can generate from a simple text prompt, or you can describe timings, chord progressions, and song structure. You can get very detailed, even providing recordings.
Yes, the barrier to entry is low, but there is a very high ceiling as well.
I messed around with Udio when it first came out, and it wasn't just "write a prompt, and there's your song".
You got 30 seconds, in which there might have been an interesting hook. So you would crop the hook and re-generate to get another 30 seconds before or after that, and so on.
I would liken it more to being the producer stitching together the sessions a band has recorded to produce a song.
Trying to convince some tech people about how artistic creation works, and why it's more than just the right amount of "optimization" of bits for rapid results, is about as pointless as trying to make a chimpanzee understand the intricacies of Bach. The reductiveness of some of you is amusing, but also grotesque in the context of what art should mean for human experience.
I don't think you really understood what I was saying, or what you're even talking about. I've got nothing to "gatekeep" and a defense of skill over automated regurgitation in creating things certainly isn't gatekeeping. People can use whatever tools they like, but they should keep in mind what distinguishes knowing how to create something from having it done for you at the metaphorical push of a button.
No, I understand the insults and ad hoc requirements just fine. And I can point you back to the decades and decades of literature about how anyone can be an artist and how anything can be art. The stuff that was openly and readily said until the second people started making art with AI. As for "push of a button", Visarga has already done a decent job of explaining how that's not actually the case. Not that I have any issue with people doing the metaphorical button push either.
If you're too lazy to put effort into learning how to create art so you can adequately express yourself, why should some technology do all the work for you, and why should anyone want to hear what "you" (ie: the machine) have to say?
This is exactly how we end up with endless slop, which doesn't provide a unique perspective, just a homogenized regurgitation of inputs.
Again, I wholly reject the idea that there's a line between 'tech people' and 'art people'. You can have an interest in both art and tech. You can do both 'traditional art' and AI art. I also reject the idea that AI tools require no skill, that's clearly not the case.
>nature
This can so easily be thrown back at you.
>why should anyone want to hear what "you" (ie: the machine) have to say?
So why are we having this discussion in the first place? Right, hundreds of millions are interested in exploring and creating with AI. You are not fighting against a small contingent who are trying to covet the meaning of "artist" or whatever. No, it's a mass movement of people being creative in a way that you don't like.
• I didn't say there's a line between "tech people" and "art people". Why would there be?
• We're having this discussion because people are trying to equate an auto-amalgamation/auto-generation machine with the artistic process, and in doing so, redefining what "art" means.
• Yes, you can "be creative" with AI, but don't fool yourself-- you're not creating art. I don't call myself a chef because I heated up a microwave dinner.
• The other guy certainly did. And your subsequent reply was an endorsement of his style of gatekeeping, so. I mean, just talk to some of the more active people in AI art. Many of them have been involved in art for decades.
• If throwing paint at a canvas is art (sure, why not?) then so is typing a few words into a 'machine'. Of course many people spend a considerable amount more effort than that. No different than learning Ableton Live or Blender.
I have claves, which are literally two sticks. I've also got a couple egg shakers, a couple tambourines.
Do you have ANY IDEA how hard these things are to play well?
I don't care if haphazard bashing of sticks with intent to make noise counts as 'music'. I do care if this whole line of discussion fundamentally equates any such bashing with, say, Jack Ashford.
I would be surprised if the name meant anything to you, as he's more obscure than he should be: the percussionist and tambourine player for the great days of Motown. Some of you folks don't know why that is special.
Maybe you need to refresh the context - 99.99% of AI-generated music, images, or text is seen/heard only once, by the AI user. It's a private affair. The rest of the world is not invited.
If I write a song about my kid and cat it's funny for me and my wife. I don't expect anyone else to hear or like it. It has value to me because I set the topic. It doesn't even need to be perfect musically to be fun for a few minutes.
You seem to be the one who doesn't understand how special it is if you think good music is so simple that AI can zero shot it.
People are mixing and matching these songs and layering their own vocals etc to create novel music. This is barely different from sampling or papier mache or making collages.
People made the same reductionist arguments you're making about electronic music in the early days. Or digital art.
Dumping money into a company until you get the desired results is not "building a company". I have nothing against capital, but the hubris of the people investing is insane. /s
Look, sarcasm aside, for you and the many people who agree with you, I would encourage opening your minds a bit. There was a time where even eating food was an intense struggle of intellect, skill, and patience. Now you walk into a building and grab anything you desire in exchange for money.
You can model this as a sort of "manifestation delta." The delta time & effort for acquiring food was once large, now it is small.
This was once true for nearly everything. Many things are now much much easier.
I know it is difficult to cope with, because many held a false belief that the arts were some kind of untouchable holy grail of pure humanness, never to be remotely approached by technology. But here we are, it didn't actually take much to make even that easier. The idea that this was somehow "the thing" that so many pegged their souls to, I would actually call THAT hubris.
Turns out, everyone needs to dig a bit deeper to learn who we really are.
This generative AI stuff is just another phase in a long line of evolution via technology for humanity. It means that more people can get what they want more easily. They can go from thought to manifestation faster. This is a good thing.
The artists will still make art, just like blacksmiths still exist, or bow hunters still exist, or all the myriad of "old ways" still exist. They just won't be needed. They will be wanted, but they won't be needed.
The fewer middlemen to creation, the better. And when someone desires a thing created, and they put in the money, compute time, and prompting to do so, then they ARE the creator. Without them, the manifestation would stay in a realm of unrealized dreams. The act itself of shifting idea to reality is the act of creation. It doesn't matter how easy it is or becomes.
Your struggle to create is irrelevant to the energy of creation.
It doesn’t even have to be art. If someone told me they were a chef and cooked some food but in reality had ordered it I’d think they were a bit of a moron for equating these things or thinking that by giving someone money or a request for something they were a creator, not a consumer.
It may be nice for society that ordering food is possible, but it doesn’t make one a chef to have done so.
I enjoy this take. Funding something is not the same as creating it. The Medicis were not artists; Michelangelo, Botticelli, Raphael, etc. were.
You might not be a creator, but you could make an argument for being an executive producer.
But then, if working with an artist is reduced to talking at a computer, people seem to forget that whatever output they get is equally obtainable by everyone and therefore immediately uninteresting, unless the art engages the audience only in what could already be described using language, rather than through the medium itself. In other words, you might ask for something different, but that ask is all you are expressing; nothing is expressed through the medium, which is the job of the artist you have replaced. It is simply generated to literally match the words. Want to stand out? Well, looks like you’ll have to find somebody to put in the work…
That being said, you can always construct from parts. Building a set of sounds from suno asks and using them like samples doesn’t seem that different from crate digging, and I’d never say Madlib isn’t an artist.
Assuming that 1. food is free and instant to get, and 2. there are infinite possibilities for food - then yes, if you ordered such a food from an infinite catalog you would get the credit.
But if you ordered 100 dishes, iterating between designing your order, tasting, refining your order, and so on - maybe you would even discover something new that nobody has realized before.
The gen-AI process is a loop, not a one-step prompt->output process.
I disagree with the characterization of equating AI to an instrument as “absurd”. As you just said, it is a powerful tool. I would equate basic Suno prompting to a beginner on an instrument, as instruments are tools like anything else. Just because you get music out, it doesn’t mean it is actually “good”, any more than if I smash random keys on a piano.
Controlling that flow of generation, re-prompting, adjusting, splicing, etc. to create a unique song that expresses your intention is significantly more work and requires significantly more creativity. The more you understand this “instrument”, the more accurate and efficient you become.
What you’re comparatively suggesting is that if a producer were to grab samples off Splice, slice them and dice them to rearrange them and make a unique song, that they didn’t “actually” make music. That seems like it would be a more absurd position than suggesting AI could be viewed as an instrument.
Tools like Suno make people feel like “their own music” is good and that they have accomplished something, because they raise the floor for someone who is bad at a tool (like all technological improvements do). They feel like they have been able to express their creativity and are proud, like a kid showing off a doodle. They share it with their friends, who will listen to it exactly one time in most cases and likely tell them it is “really good” and they “really like it” before never listening again.
That type of AI use is akin to a coloring book, but certainly doesn’t make for “good” music. When a kid shows off their badly colored efforts proudly, should we yell at them they aren’t doing “real art”, that their effort was meaningless, and that they should stop acting proud of such crap until they go to art school and do it “properly”?
> learning suno is no different than learning guitar
It most definitely is different and you’ve proven it with your own post. Guitar takes a long time to get to a place where you can produce the sounds you hear in your head. Suno gives you instant gratification.
Look, if it gives you pleasure to make Suno music then you should do it, but if you think having an AI steal a melody and add it to your songs is the same as creating something, you're kidding yourself. At best you are a lyricist relying on a robo-composer to do the hard part. You could have achieved the same thing years ago by collaborating with a musician, like Bernie Taupin did with Elton John.
There are drawbacks to being a skilled (trained/practiced) musician. You specialize in one instrument, and tend to have your creativity guided by its strengths/weaknesses.
I think that soon, some very accomplished musicians will learn to leverage tools like Suno, but they aren't in the majority yet. We're still in the "vibe-coding" phase of AI music generation.
We saw this happen with CG. When it started, engineers did most of the creating, and we got less-than-stellar results[0].
Then, CG became its own community and vocation, and true artists started to dominate.
Some of the CG art that I see nowadays is every bit as impressive as the Great Masters.
We'll see that, when AI music generation comes into its own. It's not there, yet.
>Some of the CG art that I see nowadays is every bit as impressive as the Great Masters.
Really? Except for the minor part where a great master spent months to years creating one of his works, instead of a literally mindless digital system putting it together (digitally, no pigments here) instantly.
The technology is impressive, sure, but I see nothing artistically impressive about it, nor anything emotionally satisfying, given the world and life of creation it utterly lacks.
If you're an actual artist, who's taken the time to paint and learn its intricacies, yet you're still just as impressed by an automated CG rendering of a work in Old Master style vs. one really done by a dedicated human hand, then you either hate the thing you learned because something about it frustrated you, or you have no clue about forming qualitative measurements of skill.
Also, "old-fashioned"? This to imply that someone rendering painterly visuals in seconds with AI is some new kind of artist? If so, then no, what they do isn't art to begin with. That at least requires an act of effortful creation.
It might be enlightening to find out a bit about the process of creating CGI; especially 3D scenes. Many works can definitely take over a year.
I spent some time making CG art, and found it to be very difficult; but that was also back before some of the new tools were available. Apps like Procreate, with Apple Pencil and iPad Pro, are game-changers. They don't remove the need for a trained artist, though.
But really, some of the very best stuff comes quickly, from skilled hands. Van Gogh used to spit out paintings at a furious pace (and barely made enough to live on. Their value didn't really show until long after his death).
I fail to see how you're disagreeing with me if you say this, or maybe our signals are getting mixed. I'm specifically arguing against being impressed by a visual of some kind that was sludged out automatically by an LLM; my argument isn't against digital art by itself (I know how hard CGI can be, and there's nothing to be dismissed about it because it doesn't directly use physical materials), or against artists who refine their craft to such a point that they can create visual marvels in no time. Both of those require effort. They require a combination of effort with learning, exploring, and to some extent also talent, I'd say.
Briefly instructing an image model to imitate an Old Master and having it do so in seconds fulfills none of those needs, and at least to me there's nothing impressive about it as soon as I know how it was created (yes, there is a distinction there, even if at first glance it might be hard to tell a photo of a real Old Master from an AI-rendered imitation).
The latter is not art, and the people who churn it out with their LLM of choice are not artists, at least not if that's their only qualification for professing to be such.
Well, I’m still not interested in arguing, so I’m not really “disagreeing,” as I think that we’re probably not really talking about the same thing, but I feel that I do have a fairly valid perspective.
When airbrushing became a thing, “real” artists were aghast. They screeched about how it was too “technical,” and removed the “creativity” from the process. Amateurs would be churning out garbage, dogs and cats would be living together, etc.
In fact, airbrushes sucked (I did quite a bit of it, myself), but they ushered in a new way of visualizing creative thinking. Artists like Roger Dean used them to great effect.
So people wanted what airbrushes gave you, but the tool was so limited that it frustrated more than it enabled. Some real suckass "artists" definitely churned out a bunch of dross.
Airbrushing became a fairly “mercenary” medium; used primarily by commercial artists. That said, commercial artists have always used the same medium as fine artists. This was a medium that actually started as a commercial one.
Airbrushing is really frustrating and difficult. I feel that, given time, the tools could have evolved, but they were never given the chance.
When CG arrived, it basically knocked airbrushes into a cocked hat. It allowed pretty much the same visual effect, and was just as awkward, but not a whole lot more difficult. It also had serious commercial appeal. People could make money, because it allowed easy rendering, copying, and storage. There was no longer an “original,” but that really only bothered fine artists.
This medium was allowed to mature, and developed UI and refined techniques.
The exact same thing happened with electric guitars, digital recording and engineering, synthesizers, and digital photography. Every one of these tools was decried as "the devil's right hand," but became fundamental once true creatives mastered them and the tools matured.
“AI” (and we all know that it’s not really “intelligence,” but that’s what everyone calls it, so I will, too. No one likes a pedant) is still in the “larval” stage. The people using it, are still pretty ham-handed and noncreative. That’s going to change.
If you look at Roger Dean's work, it's pretty "haphazard." He mixes mediums, sometimes using their antipathy to each other to produce effects (like mixing water and oil). He cuts out photos, and glues them onto airbrushed backgrounds, etc. He is very much a "modern" creative. Kai Krause is another example. Jimi Hendrix made electric guitars into magical instruments. Ray Kurzweil advanced electronic keyboards, but people like Klaus Schulze made them into musical instruments. These are folks that are masters of the new tools.
I guarantee that these types of creatives will learn to master the new tools, and will collaborate with engineers, to advance them. I developed digital imaging software, and worked with many talented photographers and retouchers, to refine tools. I know the process.
Of course, commercial applications will have outsized influence, but that’s always the case. Most of the masters were sponsored by patrons, and didn’t have the luxury to “play.” They needed to keep food on the table. That doesn’t make their work any less wonderful.
We’re just at the start of a new revolution. This will reach into almost every creative discipline. New techniques and new tribal knowledge will need to be developed. New artists will become specialists.
Personally, I’m looking forward to what happens, once true creatives start to master the new medium.
> Guitar takes a long time to get to a place where you can produce the sounds you hear in your head. Suno gives you instant gratification.
Wrong. It takes an extremely long time before you can make the sounds in your head fit into a scale and recognize them; with Suno, however, it is impossible.
I would compare Suno to a musician-for-hire. You describe what you want, some time later he sends you the recording, you write clarifications, get a second revision, and so on. Suno is the same musician, except much faster, cheaper, and with poor vocal skills. Everything you can do with Suno today, you could have made before, albeit at a much higher price.
The fact that people still think this is how these models work is astonishing.
Even if that were true, sampling is an artform and is behind one of the most popular and successful genres today (hip hop). So is DJing, or is that also not a skill?
The same puritanism that claimed jazz wasn't music, then rap wasn't music, then EDM wasn't music, blah blah
Gatekeepers of what is and isn't art always end up wrong and crotchety on the other side. It's lame and played out.
I actually make a lot of sample-based music, and it’s as much an art as you make it. Downloading a couple of loops from Splice and layering them is lame; actually chopping and repurposing samples is not.
I never said Suno wasn’t “art”. The opposite is true. If you want to put your name on something that took no effort or skill and call it art, more power to you. You could do the same in other areas, and lame, low-effort “art” precedes AI by millennia. You are as welcome as anybody to call yourself a creator, however lame that effort may be.
But man the chutzpah of comparing that low effort drivel with people pushing genre boundaries.
Yes, sampling is an artform. We are on the same page here.
But your original comment implies that using Suno would be like sampling.
Therefore I mentioned that you need to properly credit the usage of samples, which Suno is not doing; Suno is stealing from real artists.
Hope that is clearer now.
Defending the idea that complex, nuanced effort for the sake of coherent creation is a demonstration of skill is gatekeeping?
I'd love to see programmers' reactions to having the measure of their work reduced in such a way as more people vibe-code past all the technical nonsense.
Live programming music using tools like SuperCollider is/was a (very niche) thing. Someone is on stage with a laptop and, starting from a blank screen that is typically projected for everyone to see, types in code that makes sounds (and sometimes visual effects). A lot of it involves procedurally generated sounds using simple random generators. Live prompting as part of such shows would not seem entirely out of place and someone might figure out how to make that work as a performance?
SuperCollider enthusiast here, I think you missed the "is no different than" part. Working with SuperCollider is very different from playing any instrument live, and I doubt that'll change.
Where playing an instrument means balancing the handling of tempo, rhythm and notes while mastering your human limitations, a tool like SuperCollider lets you just define these bits as reactive variables. The focus in SuperCollider is on audio synthesis and algorithmic composition, that's closer to dynamically stringing a guitar in unique rule-based ways - mastering that means bridging music- and signal-processing theories while balancing your processing resources. Random generators in audio synthesis are mostly used to bring in some human depth to it.
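For a rough sense of what "algorithmic composition with simple random generators" can look like, here's a minimal, purely illustrative Python sketch (standard library only; the pentatonic scale, note length, and output filename are arbitrary choices of mine, and this is obviously not SuperCollider syntax or anything Suno-related):

    import math
    import random
    import struct
    import wave

    SAMPLE_RATE = 44100

    def midi_to_hz(midi_note):
        # Equal temperament, A4 (MIDI 69) = 440 Hz
        return 440.0 * 2 ** ((midi_note - 69) / 12)

    def sine_note(freq, duration, amp=0.3):
        # Render one note as a list of float samples in [-amp, amp]
        n = int(SAMPLE_RATE * duration)
        return [amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE) for i in range(n)]

    # C major pentatonic around middle C; the "composition" is just random choices
    SCALE = [60, 62, 64, 67, 69, 72]
    random.seed(1)  # reproducible "composition"
    melody = [random.choice(SCALE) for _ in range(16)]

    samples = []
    for note in melody:
        samples.extend(sine_note(midi_to_hz(note), 0.25))

    # Write 16-bit mono PCM
    with wave.open("random_melody.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

The point being that the randomness does very little of the musical work; the interesting decisions (scale, timing, timbre, the rules themselves) are still made by the person writing them, which is exactly what mastering this kind of tool means.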
I think "learning guitar" is different from "learning Suno" because with guitar you have control over what you play. I also love music, and making music, and have no natural musical talent, but I see no interest in generating a song without me deciding every aspect and choosing every note. It's like taking the most interesting and creative part from me.
Personally, I wouldn't be able to reconcile the fact that these generated stems are basically the same as AI-generated images--built from the digital bits of existing tracks/music/recordings that someone else spent time and hard work making and sharing, only to have it unexpectedly hoovered up by these corporations as part of their giant training data set.
The year is 2027. A 16 year old at a house party pulls out his laptop and asks his friends to gather round. He starts typing “a song about a wonderful wall” and completely original music starts playing. A girl in the corner, hearing the heartfelt melody, starts to fall for the boy.
I’ll agree that you+AI is creating a pleasant sequence of sounds, i.e. music. And I don’t think anyone has the right to say (within reason) what is or isn’t music.
But we might need new vocabulary to differentiate that from the act of learning & using different layers of musical theory + physical ability on an instrument (including tools like supercollider) + your lived experience as a human to produce music.
Maybe some day soon all the songs on the radio and Spotify will be ai generated and hyper personalized and we’ll happily dance to it, but I’ll bet my last dollar that as long as humans exist, they’ll continue grinding away (manually?) at whatever musical instrument of the time.
I see it as like having the answer key to every homework assignment for a course. It's easy to convince yourself that it doesn't hurt learning -- but there's probably a reason the answers aren't given to you. The struggle, the experience of "being stuck", the ability to understand why things don't work -- may be necessary precursors to true understanding. We're seeing pretty discouraging results from people who are learning to "vibe code" without an understanding of how they would write the code themselves.
You may wish that learning Suno is no different than learning guitar, but I think the effects of AI may be a bit pernicious, and lead to a stagnation that takes a while to truly be felt. Nobody can say one way or the other yet. That said, I'm happy you can make music that you enjoy, and that Suno enables you to do it. Such tools are at their best when they're helping people like you.
I guess it’s similar to learning by watching masters on YouTube - I’m convinced that passively watching them creates the illusion in the viewer that they are also capable of the same, but if they were to actually try, they’d miss all the little details and experience that make the masters' performance possible. Watching a chess GM play, for example, can make you feel like you understand what’s happening, but if you don’t actually learn and get experience you’re still going to get beat all the time by people, even beginners, who did. But as long as you never test this, you get to live with the self-satisfaction of having “mastered” something yourself.
Of course, nothing wrong with watching and appreciating a master at work. It’s just when this is sold as the illusion of education passively absorbed through a screen that I think it can be harmful. Or at least a waste of time.
It gets very real very quickly with skateboarding. You can watch all the YouTube and Instagram you want about how to do an Ollie or a kickflip in 30s; now go out and try.
The learning is in the failing; the satisfaction of landing it is in the journey that put you there.
Love the passionate replies!
I think I especially agree with this comment:
>> I think that soon, some very accomplished musicians will learn to leverage tools like Suno, but they aren't in the majority yet. We're still in the "vibe-coding" phase of AI music generation. We saw this happen with CG. When it started, engineers did most of the creating, and we got less-than-stellar results[0]. Then, CG became its own community and vocation, and true artists started to dominate.
Hey, it's likely not going to be me, but let's be real - any user of this technology who has gone beyond the "type in a prompt and look, I got a silly song about poop" stage will probably agree: someone's going to produce some bangers using this tech. It's inevitable, and if you don't think so, it's likely you haven't done anything more than "low-effort" work on these platforms. "Low-effort" work - which is what the majority of AI swill is - is going to suck, whether it's AI or not.
And while I have the forum, I do want to make another point. I pay more per month for Suno than for Spotify ($25 vs $9). Suno/Udio etc.: do what you need to do to make sure the artists and catalogues are getting compensated... as a user I would pay even more knowing that was settled.
Guys, I think we're being too hard on this guy. Why are we so upset that songeater is now Jimi Hendrix because of Suno? I know I'm jealous; I've been beating on my guitar for decades and I'm still pretty meh, but it's because I lack the true creative genius required to type suno.com into my browser. Not everyone is cut out to be a literal GUITAR GOD like songeater here. Let's give him the props he deserves for the massive investment of backbreaking labor over the past decades^w years^w weeks^w days^w hours^w halfhourmaybe it took for him to learn pseudo-guitar.
I apologise for this comment of mine that I am replying to; it seems like it's sort of an unpopular opinion. But in the spirit of AI, please allow me some personal backpropagation with my own low-poly blob of neural networkedness, so I can use this useful training to adjust my personal weights (known to low-IQ normies as "values") to better fit in with the community here.
Suno is moving toward becoming a browser-based DAW that happens to use AI. There are already more capable and established DAWs, and I see no reason they can't implement AI into their workflows-- in a more precise manner, where it's actually useful, instead of wholesale as a gimmick. Many are already doing this. So I don't understand where Suno is going with any of this.
It either needs to be:
1. So easy anyone can press a button and magically get exactly what they want with perfect accuracy and quality.
2. So robust and powerful it enables new kinds of music production and super-charges human producers.
This is neither. And I don't buy Suno's argument that they're solving a real problem here. Creative people don't hate the process of creating art-- it's the process itself and the personal expression that make it worthwhile. And listeners/consumers can tell the difference between art created with intent and soul, and a pale imitation of that.
> It either needs to be: 1. So easy anyone can press a button and magically get exactly what they want with perfect accuracy and quality. 2. So robust and powerful it enables new kinds of music production and super-charges human producers.
Don't forget the secret third option - facilitate a tidal wave of empty-calorie content which saturates every avenue for discovery and "wins" purely by drowning everything else out through sheer volume. We're at the point where some genAI companies are all but admitting that's their goal.
That seems to be the purpose. It doesn’t have to sound that good to the listener. It’s just made to extract dollars from Spotify when you flood the platform with so much slop that some of it starts getting played by users who just let the machine pick the next song.
This tidal wave has already destroyed the gaming industry: lots of low-quality AI slop games have flooded app stores, Steam, etc., leaving both gamers and creators frustrated.
I gave up on synthwave, which was a genre I loved, because there's so much AI that it's just not worth the effort to find new music. I'll listen to old songs, but I have zero interest in new songs. I moved to a more niche genre where there's no AI yet.
Same. I listen to a synthwave playlist on Spotify, and once it ends Spotify starts playing 'similar' music, and at that point I just start feeling gross.
Yes I'm already doing this manually with Reason. I'll compose something that's quite bare bones, export the audio and run it through Suno, asking it to cover and improvise with a specific style, then when I have something I like, I split that into stems, import some or all of these to Reason and then reconstruct and enhance the sound using instruments in Reason, mostly by replaying the parts I like on keyboard and tweaking it in the piano roll. Often I get additional inspiration just by doing that. Eventually I delete all the tracks that came from Suno stems when I've finished this process.
That way I get new musical ideas from Suno but without any trace of Suno in the final output. Suno's output, even with the v5 model, is never quite what I want anyway so this way makes most sense to me. Also it means there's no Suno audio watermarking in the final product.
This is similar to what I do. There are all kinds of useful ways to incorporate AI into the music production process. It should be treated like a collaborative partner, or any other tool/plugin.
It shouldn't be a magic button that does everything for you, removing the human element. A human consciously making decisions with intent, informed by life experience, to share a particular perspective, is what makes art art.
We use AI assisted coding to be more productive or to do boring stuff. If the 'making the music' part is what you are getting away from, why make music? You're basically a shitty 'producer' (decent producers are amazing at those boring parts you are skipping and can fill out a track without hitting up a robot) at that point.
Music is math. Art is patterns. Like how we're using AI to iterate through design and code, musicians could use it for generating musical patterns including chords, harmonies, melodies, and rhythms. In theory, it can pull up and manipulate instruments and effects based on description rather than rifling through file names and parameters (i.e. the boring stuff).
Most success as a musician stems from developing a unique style, having a unique timbre, and/or writing creative lyrics. Whether a coder, designer, artist, or musician, the best creatives start by practicing the patterns of those who came before. But most will never stand out and just follow existing patterns.
AI is nothing more than mixing together existing patterns, so it's not necessarily bad. Some people just want to play around and get a result. Others will want to learn from it to find their own thing. Either way works.
With art and AI, people seem to enjoy the part where they say they made something and get credit for it, but didn’t actually have to bother. People used to find art of people on the internet and claim it as their own, now an AI can statistically generate it for you and it maybe feels a bit less icky. Though I have to agree it all seems sort of pointless, like buying trophies for sports you didn’t play.
"Creative people don't hate the process of creating art"
Yep. I was a professional music producer before the pandemic, and I couldn't agree more.
Honestly, I'm glad we are destroying every way possible to earn money with music, so we find another profession for that purpose and then we can make music for fun and love again.
> And listeners/consumers can tell the difference between art created with intent and soul, and a pale imitation of that.
Strong disagree there. I think that's true of a very small % of consumers nowadays. I mean, in total honesty, I think that Suno is no worse than a large fraction of the commercial pop made by humans (maybe) that tops the charts regularly. That's already extremely formula-based, artificial music made by professional hit makers from Sweden or Korea.
The objective was never to grab discerning listeners but the mass of people. It would work even if they grabbed 50%, but honestly I think it's going to be higher.
But then you look at image gen. The established player, namely Adobe, is surprisingly not winning the AI race.
Then you look at code gen. The established IDEs are doing even worse.
I don't rule out the possibility of music being truly special, but the idea of "established tools can just easily integrate AI right" isn't universally true.
Agreed. The problem with being an incumbent in this era is that much of the existing UI/UX assumptions are based on the idea of manual manipulation. We're so early that foundational assumptions are still up for debate, and for large companies like Adobe, there's just no way they'd be able to move at the required pace to keep up. Heck I'm at a company that's less than 2 years old, with less than 20 people, solely devoted to AI, and it's still hard for us to keep up.
What Adobe and others ought to be doing is setting up internal labs that have free rein to explore whatever ideas they want, with no barriers or formality. I doubt any of them will do that.
The innovator's dilemma is real. IMO none of the big DAWs are well-positioned to capitalize on AI, but that doesn't mean they couldn't.
I'd argue music generation is different from image or code generation. It's closer to being purely art. Take image generation for example. Most of the disruption is coming from competition with graphic design, marketing, creative/production processes, etc. The art world isn't up in arms about AI "art" competing with human art.
It does mean that. The switch from writing “applicable” software to creating cutting-edge AI is almost impossible. The parent comment gives great examples; we can add to that list JetBrains (amazing IDEs, zero ability to catch up on ML), for example. It’s a very different, fast-paced, science-driven domain.
> And listeners/consumers can tell the difference between art created with intent and soul, and a pale imitation of that.
Um, have you seen the pop charts at any time in the past... well, since forever, actually?
The majority of commercially produced music today is created with intent to take your money and nothing else, with performers little more than actors lip-syncing to the same tired beat. Because it sells.
> And listeners/consumers can tell the difference between art created with intent and soul, and a pale imitation of that.
Respectfully I disagree. We have had curated, manufactured pop, built by committee and sung by pretty mouthpieces with no emotional connection, for a long time now, and they make big money.
It’s soulless by definition, designed by committee and sung by a machine. Entirely manufactured. But people like it.
It’s a counterpoint to the above argument that listeners will be dismissive of AI-produced music because it is a pale imitation of art created with intent and soul. On the contrary, such music thrives and is very popular already.
People love to be snobs about pop music, but it's music.
A particular piece of art isn't "soulless" just because it didn't move you. There were still plenty of humans involved in making it, who made specific artistic decisions. In pop music, the creative decisions are often driven by a desire to be as broadly appealing as possible. That's not a good or bad thing unless you judge it as such. It's still art.
The contention that there is something so ethereal, uncapturable, so uniquely and indescribably human that is put into even the blandest piece of mass-market pop that an AI trained on all the music ever made couldn’t create a track that people would accept due to some intangible hollowness, some void where the inestimable quintessence of human existence should be…
That’s hilarious.
I’m not saying it’s not ‘art’, whatever that might mean; I am saying this idea that people won’t accept and enjoy an AI version is a fantasy.
AI "music" sounds like ass. The Temu of sonic products. It will sell, but only in niche markets. Traditionally manufactured pop "music" is a more upscale and widely applicable market position.
Honestly it sounds like AI generated music will just widen the bimodal distribution: those who care about craft and authenticity on one end, and a race to the bottom on the other. Everything in the middle will be squeezed.
> Creative people don't hate the process of creating art
I mean, I hate when it's difficult to get the medium to express my vision... not that AI especially would help with that when I'm actually attached to that vision in detail....
What's special about Suno 5 is that the songs are actually good enough to listen to in place of professionally produced songs. For example, my favorite genre is new jack swing, and there is a very limited amount of it, as it was only briefly popular during the 90s. Now I have an endless supply of it, and you can't tell that it's AI-produced anymore. Sure, an expert might be able to detect it, but to consumers it's just as good as a Spotify playlist.
This is the first time I'm actually paying for generated AI content, because the value I get is immense. I really think we are headed towards an oversupply of content, where there will be more stuff to read, watch, and listen to, with very real value in all of it.
This spells out the inevitable change in the labor market for content creators. There will always be value in human-created content, and some will make more money, but it will always have AI-generated content competing with it, to the point where it will be hard to stay ahead and eventually people will stop caring.
Case in point: I see some comments being snarky towards Suno, but as a consumer I couldn't care less whether you put your soul and years into producing art vs. the kind I can get plenty of here and now, especially when there is virtually no difference in quality.
Truly an amazing accomplishment from the Suno team, and probably the first time I've subscribed to a music service after decades of downloading mp3s and hunting down new songs to listen to on YouTube. Suno "Steam-ified" this process, and while I will still use YouTube to discover new genres, I am now spending most of my time in Suno, listening to an endless amount of the exact sound I am looking for.
> as a consumer I couldn't care less whether you put your soul and years into producing art vs. the kind I can get plenty of here and now, especially when there is virtually no difference in quality
> I really think we are headed towards an oversupply of content, where there will be more stuff to read, watch, and listen to, with very real value in all of it
Yes, but not uniformly so - some niches are very popular, but there's plenty of obscure ones where if you're a fan you literally know everyone making music in some very niche genre because there are so few of them.
I simultaneously feel repulsed by AI music and "art" and yet am totally open to being captivated by AI music if I really feel something is musically better than almost any human-made music I've heard.
I just haven't heard anything that isn't "slopful" yet. If I do, I will still feel weird about it, but I'm a big believer in the value of "aesthetic objects in themselves", so I am eager to find something I do actually like.
Even just knowing something was drawn or composed by an AI will negatively taint my opinion from the start, but I'm still open.
I don't totally discount the position that the human "soul" is what makes art art and all that, but I still do think something can be very enjoyable and good without being created by a sentient entity, in theory.
Don't you need more focus and aggression to make even sell-out weak tea dubstep? I feel the generative process really severely fails to deliver anywhere near the correct sound, even for 'bad artificial lol dubstep' sounds.
Couldn't agree more. Instead of seeking out people making that art, we are now leaving "art" or human expressive emotion to random noise and paying for it.
I've had some fun with Suno 5, but the songs absolutely don't replace well-made human music. They're much more formulaic and over-produced. They're usually forgettable. People I play them for can always tell they're AI produced.
True music enthusiasts will hold out for a while, but I think AI music will easily replace most pop currently on the radio and streaming for your average Joe. That stuff has been "fake" since as early as the mid-2000s, being quantized straight to the grid, pitched, with programmed drums, guitar, even vocals, and then churned out like widgets on a conveyor belt.
Currently, for me, in the type of music I enjoy, v4.5+ has given the best results; v5 of their model is a regression.
I was very impressed that with v4.5+ I could get quite good songs evocative of Devo, the Yeah Yeah Yeahs, Metric, etc.
With version 5 it is currently harder (or I haven't figured out a way) to generate this kind of chopped/produced sound. It doesn't follow complex style definitions and tends to generate songs that are too slow and "smoothed" over.
V5 could be built differently, and it will take a few revisions until they match v4's “creativity”, for lack of a better word. What v5 brings is quality in the recording: less AI shimmering in the background. But yes, I agree the songs are more… flat.
I find it depends how you generate it. Asking Suno to make covers of uploaded recordings tends to give much, much better results than asking it to cook a song from scratch. There are still quite a few tells that it's AI-made but it's not bad at all, at least in my experience so far.
Rather than call it a lie, I think it would be more fair to say in this instance it's a matter of taste/perspective. Personally, I enjoy listening to Suno songs.
These things are the composition equivalent of Guitar Hero...
They give users (players?) a sense of agency, making it satisfying. But in reality, you're no more composing than a Guitar Hero player is playing the guitar, nor will you learn how to from doing so. No matter how sophisticated the transformations in an LLM, you're ultimately using other people's music in a sophisticated mashup game.
However, in Guitar Hero, the people whose music was being used at least got royalties. :-/
Suno does have the cover feature, where you can upload your own playing, say playing a simple melody on a guitar, and it will take that and create a song from it, together with lyrics you wrote. So it can be fairly fulfilling. What it doesn’t allow you is full control of the composition. But that’s AI.
Well, that's LLMs. I'm doing a PhD in computer science and music right now, focused on programming languages for music and algorithmic composition. There are other (IMHO) more interesting techniques that can be used in music and that used to be called AI, with which you do have control over and understanding of exactly what's happening, such as constraint satisfaction programming and dynamic search.
Meanwhile I've mixed a song of mine down on a Tascam 688 8-track tape recorder. I have a big smile on my face because I find this very enjoyable. The haptics, the sound and my hand-crafted product. A piece of art made by a human. No AI will replace this for me.
This argument always seemed weird to me because it's obviously the case that ‘intent and creativity’ is at best a slim minority of consumer preference. If I want a simple dubstep line with random romantic words meaninglessly peppered in it, or the same 4 chord pop song as every other pop song, or you know, anything popular, that's easy. If I want something that's saying something interesting in the topics I find interesting, or doing something new with the medium that's also well executed and listenable to on repeat, finding that is actually just hard, and it's rarely the stuff with the dollars and overlapping interests to get the high production values.
I'm not going to claim AI audio isn't also awash with popular themes and tropes, or that it's a bastion of creativity. I'm also not going to claim that the deepest, really creative ideas aren't expressed in human written works. There are enough people to make truly exceptional songs and prompt many truly mindless AI generations. And there's also nothing wrong with most songs optimizing for personal preferences that are not that; I'm not trying to 'argue against' popular music.
But I am going to claim, for me, that it just hasn't been practical to saturate my tastes from public media, and that most of the reason I personally listen to AI music is that I want something that says or does something I think is creative, exploratory, or intellectually interesting that I don't know how to get from anywhere else.
I was alive to hear the evolution of "dubstep" before it became a cliche (the wobble stuff). AI couldn't invent a new sound.
The way you describe music, sure, there will be an AI that is able to provide you with a continuous stream of auditory stimuli, like the Penfield Mood Organ from "Do Androids Dream of Electric Sheep?".
That's just not what makes art or music interesting to me, and why I also don't listen to auto-curated "mood" playlists on Spotify.
> Penfield mood organ - Humans use the mood organ to dial specific emotions so they can experience emotions without actually possessing them. In the beginning of the novel, Rick implores his wife to use her dialing console to prevent a fight. He wants her to thoughtlessly dial emotions like "the desire to watch TV" or "awareness of the manifold possibilities open to [her] in the future" (Dick, 6). When emotions can be easily avoided with the mood organ, humans no longer require personal relationships to overcome feelings of isolation or loneliness.
I also don't want mood playlists, the majority of the time. I'm saying I use AI generations for exactly the opposite — so that I can explore and listen to things that are more intellectually interesting in the ways I find intellectually interesting, because the human music it's easy to find is common denominator music.
The way you describe AI ('continuous stream of auditory stimuli') is the way I'd describe Spotify. Sure, you could use AI to make a faux Spotify, but, like, why would you? The popular stuff already has saturating supply, and it will sound much better than an AI generation.
I also had Spotify in mind when writing about algorithmic curation and GenAI in music ;)
Regarding this:
> I'm saying I use AI generations for exactly the opposite — so that I can explore and listen to things that are more intellectually interesting in the ways I find intellectually interesting
I just have not found any AI music that would satisfy this description. But I am very interested in failure modes of GenAI. Especially in Suno, it was cracking me up at times.
I'm also sure there will be a space for interesting and/or challenging music generated with neural networks involved.
But I don't see any revolution here so far.
Care to share examples of AI-assisted music you find interesting? To elaborate, I don't find jarring or curious combinations of cliches interesting.
AI could not invent a new style, it seems to me - to repeat that point.
And I've never had any problem finding interesting music.
The key for me is diving into labels, artists and their philosophy after I get interested in particular ones (the other way around doesn't work for me).
I adore discogs.com for that. Regarding interviews and stuff, there's sadly a huge decline in quality written material about music, I feel.
"Lowest-common-denominator music" is exactly what Suno produces, at least in my ears.
I could go on and list music I like, but generally avoid that.
Wait, I'll do it anyway for a bit... at the moment, I like
Punctum - Remote Sensing EP (Caterina Barbieri)
and
AtomTM vs Pete Namlook - Jet Chamber LP
just for example
I also love so much other music.
To me, such music is miles apart from the slop I heard from AI.
I heard there's research into generating music in the style of JS Bach as well. How's that going?
I'd guess: probably not too well, because the genius of Bach is not only in complexity or counterpoint rules.
His music is very emotional to me (at least the portions I like).
And, like any good music, it has moments of surprise. It's not just a formula, or a "vibe", or a "genre".
Could AI create a new Techno, a new Blues, a new Bossa Nova?
I've been avoiding trying to class what counts as 'interesting' because it feels like the wrong point to make, for a medium so entwined with personal preferences that are so often at odds with each other. To single out my tastes feels like it would be saying 'paleontology is the true interesting subject, unlike architecture'. The tracks you listed here aren't really my thing, but I can see why AI audio doesn't help with it. That's fine.
I will also repeat that I'm well aware that the best stuff is definitely all human. It's not my genre either, but traditional composers like Bach certainly made extremely interesting, clever, even deeply-studiable pieces and AI 'in the style of' those composers surely won't capture much of that. There's a lot of stuff AI can't do wholesale; one particularly strong example is if you're Jacob Collier, AI is not going to make the complex harmonizations and song structures there.
AI is pretty bad at these textural or instrument exploration things like from Collier above or Mike Dawes or Yosi Horikawa or Yoko Kanno or Keiichi Okabe. There's a bunch of music I listen to because it's generically a genre or mood I like and it's well produced, which I won't list here, and AI audio can often do stuff like that at baseline but not especially well. There's also nostalgia; I'm also certain a huge part of the reason I like the Celeste soundtrack so much is in part that I liked the game so much.
But then there's a whole category of music I listen to where the texture is supplemental to the part that defines it, like most of Acapella Science or Bug Hunter or Tom Lehrer. E.g. Prisencolinensinainciusol isn't interesting to me because it's musically complex; the part I care about is that it's a listenable execution of an idea, not precisely how it was executed. I don't keep coming back annually to I Will Derive, by some random schoolkids recorded on a potato 17 years ago, because it's sung well or because they were particularly clever in how they took another song and changed the words; I come back to it because it's fun and it reflects onto a part of my past that I remember fondly, and these things make me happy.
All these words and I've still only addressed half the comment. Ok, let's consider the idea that it's not enough for AI audio to facilitate the creation of interesting musical pieces, and it instead has to create whole interesting musical styles. I take issue with this in a bunch of places. I don't reject artists who I judge not likely able to create a new Bossa Nova. I judge artists based on whether the output they produce is something I want. I do the same for AI.
I also think the question about whether AI could 'create' a new style is somewhat misplaced. A style is a cultural centroid, not just a piece of audio. AI can definitely create new musical textures or motifs, but it's always being pulled towards the form of what it's being asked to produce. As long as we're talking about systems that work like today's systems, the question still needs to involve the people who are selecting for the outputs they want. Could that connected system create something as distinct and audibly novel as a new genre? Yeah, probably, given time and a chance for things to settle. That's a different question from whether it'll do so in response to a nonspecific prompt thrown at it.
That being said, there is a spectrum, sure.
I am interested in generative music.
I do have scenarios where I listen to music and want it to blend into the background (e.g. soma FM).
But even then I love the short moment when a song comes up and I want to note it because it's distinct.
I am not interested in being robbed of that.
Also: why? Why, why, why?
Music is not just a recording packaged as a product. It is a thing humans do. And I say that as a person that enjoys mainly electronic music!
There are many talented humans, there is absolutely zero need for AI muzak, other than decreasing the price.
Musicians leveraging generative AI for creative purposes might become a thing and I am fine with that in principle, but the thought is a joke to me, as of now.
Creating audio from an idea is not the same as letting a machine create an interpolation of stolen ideas to match a prompt.
Culture is fluid. Music is about exploring the boundaries of what sounds good, often because of feelings. Related to the society in which the music is "consumed".
AI music is a commodity and generally uninteresting, like artists who only imitate styles.
But just like annoying over-commercialized music that only tries to scratch existing itches and match expectations, it can still work to a degree.
Intent is not a lofty concept, it's at the heart of what art is.
I was using ‘consumer preference’ here as a proxy for ‘the things that people tend to care about.’ The point I'm highlighting is that it's weird that when AI audio comes up, there's a reliable crowd of people discrediting it for missing a thing that clearly doesn't matter that much to most people in most contexts.
It's like, sure you can want things from music that are to your specific taste, but it's like coming into a post about, idk, a folk band and complaining that it's not metal. You're allowed to like your thing, but clearly most music is allowed not to be metal, why is this music specifically bad for not being metal?
And in this case the point I'm making is stronger, in that AI audio actually unlocks a lot of ability to listen to things that are ‘interesting and creative’ but not widely available because of consumer preference, so it's actually more like showing up to a folk metal fusion band and saying the problem with this band is that it isn't metal.
You still totally miss the point of this thread: making music and creating art for the sake of enjoyment. Consumer preferences are irrelevant here. I am totally fine if you like AI music. And I am sure someone can create art by interweaving AI elements. But keep in mind, the training data for Suno comes from (gets stolen from) someone like me, who creates real music. Personally I do not enjoy making music with proprietary third-party tools that can put up a garden wall at any time, like Suno. But that is personal preference.
> Then tried to generate interesting music, failed spectacularly.
You can upload music and let suno arrange it in different styles. I'm a musician myself and am also interested in "interesting" music. I made experiments with my own music and was positively surprised by the "musicality" and creativity of the generated arrangements (https://rochus-keller.ch/?p=1350).
I consider myself an amateur musician and pulled a decent side hustle teaching piano back in university. I also worked the occasional gig as a cocktail pianist.
I've actually had a lot of fun using tools like Suno/Udio as a means of sonic exploration to see how some of my older compositions would sound in different mediums.
When I composed this piece of classical music practically a decade ago, it was intended for strings but at the time I only played piano so that's where it stayed. By increasing the "Audio Influence Slider", Suno arranged it in a chamber quartet style but stayed nearly 1:1 faithful with the original in terms of melody / structure.
What an interesting use case! And interesting composition.
One thing that's interesting about the AI violin cover is that I'm not sure those runs would be physically possible at that speed on a real violin. So that composition can _only_ be played digitally, I believe.
When I used to do larger, more orchestral arrangements, I was constantly getting dinged by the instrumentalists that, while certain runs or passages were theoretically possible, they were very unnatural on the instrument I had scored them for.
For a long time I really hoped that some of the more professional notation tools, such as Finale, would add the ability to analyze passages and determine how realistic/natural they were for the instrument they were set to.
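As a rough illustration of what such an analysis could look like (a hypothetical toy check, not a feature of Finale or any other notation software; the range and speed threshold here are arbitrary assumptions): flag notes outside an instrument's range and runs that are implausibly fast for a human player.

    # Hypothetical playability check for a violin part (illustrative only).
    VIOLIN_RANGE = (55, 100)     # roughly G3 to E7, as MIDI note numbers
    MAX_NOTES_PER_SECOND = 12    # arbitrary threshold for an "unnatural" run

    def check_passage(notes, tempo_bpm):
        """notes: list of (midi_pitch, duration_in_beats) tuples."""
        seconds_per_beat = 60.0 / tempo_bpm
        warnings = []
        for i, (pitch, beats) in enumerate(notes):
            if not VIOLIN_RANGE[0] <= pitch <= VIOLIN_RANGE[1]:
                warnings.append(f"note {i}: pitch {pitch} is outside the range")
            if 1.0 / (beats * seconds_per_beat) > MAX_NOTES_PER_SECOND:
                warnings.append(f"note {i}: faster than {MAX_NOTES_PER_SECOND} notes/sec")
        return warnings

    # a run of 16th notes at 200 bpm is roughly 13 notes per second
    print(check_passage([(76, 0.25)] * 8, tempo_bpm=200))

A real tool would need per-instrument rules for string crossings, breath, hand position and so on, but even a crude pass like this would catch the worst offenders.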
Right now, the line for me is "who makes the riffs". Once that line is blurred and users can inject whatever it is that humans contribute into actual realized, appealing audio tracks, that's when we'll see the first "AI-using superstar" numbers on Spotify. I don't know where training data is on the legal radar, but I'm also betting that AI winning on this front is likely, given how much politics loves AI.
I have been a paid user for months. Actually, the most impressive upgrade from 4.5 to V5 is being able to get MIDI.
But for now the UI is still very chaotic and buggy.
As a VC-backed company they're able to train models thanks to VC money, but that's clearly not enough to sustain their income simply by becoming the new Ableton. Balancing the needs of professionals with those of the general public is a complex proposition. Are ordinary people really willing to pay such high monthly fees to produce music?
I've been working on an AI detector for the last few months. Updated it to handle Suno V5 on Wednesday - looks like it's very similar to V4.5. Am curious to see how this Studio version impacts the model I've trained.
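I don't know how that detector actually works, but as a minimal sketch of the general idea, assuming the common recipe of summarizing each clip into features and fitting a classifier on top (the file names below are placeholders, and this is not the commenter's model):

    # Toy AI-music detector sketch - illustrative only.
    import numpy as np
    import librosa
    from sklearn.linear_model import LogisticRegression

    def clip_features(path):
        y, sr = librosa.load(path, sr=22050, mono=True, duration=30.0)
        mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
        logmel = librosa.power_to_db(mel)
        # mean and std per mel band -> fixed-length feature vector
        return np.concatenate([logmel.mean(axis=1), logmel.std(axis=1)])

    train_paths = ["ai_clip.mp3", "human_clip.mp3"]  # hypothetical files
    labels = [1, 0]                                  # 1 = AI-generated

    X = np.stack([clip_features(p) for p in train_paths])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print(clf.predict_proba(X)[:, 1])  # probability each clip is AI-generated

In practice you'd need far more data and a stronger model; this is just the smallest end-to-end version of the idea.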
After login it's free. But my site has been targeted by a lot of spam/abuse over the last decade, and login is something I've needed to set up to avoid that :(
As someone who has been producing music for 20 years, this is absolutely incredible. I remember when I started building things from samples in FL4 and just how mind bending that was. I am so excited for this to actually get new ORIGINAL samples.
What people miss is that since the creation of Splice, basically all new music that isn't from an already established artist is paint by numbers. You can get any sample in any key, given to Splice by artists. You probably hear a lot of the same sounds in most modern music. This breaks that open.
The Suno team has been doing this exactly right and this is just another step in their evolution.
Major congrats to the product team for this, I can’t wait to see the next iteration!
Correct, but think upstream from Splice. Most percussion, risers and ear candy come from the same place. Think back to Super Seal records and crate digging. If you use samples, regardless of whether you subscribe to Splice or not, the sample you found is on Splice.
Correct. For the main composition, many artists don’t use samples. But for ear candy and other flavors, I would argue that most do. Remember, there is a difference between producers and musicians, though with a lot of overlap.
I wasn't expecting to, but I got chills listening to some Suno creations from artists who are clearly very talented at using this new medium.
Much like those of us hammering away at LLMs who eventually get incredible results through persistence, people are doing the same with these other AI tools, creating in an entirely new way.
I'm sure Suno are working hard on this and these AI tools can only come together as fast as we can figure out the UX for all this stuff, but I'm holding out for when I can guide the music with specific melodies using voice or midi.
For "conventional" musicians, we (or at least I) would love to have that level of control. Often we know exactly what it should sound like, but might not have session musicians or expensive VSTs (or patience) on hand to get exactly the sound we want. Currently we make do with what we have - but this tech could allow many to take their existing productions to the next level.
What I tend to find is that although almost everyone listens to some form of music, the average person tends to like things which are squarely in the middle of the Gaussian curve, and that are inherently very predictable, as though the creator had chosen the most statistically likely outcome for every creative decision made while creating it. Similar trends hold for almost anything creative: cinema, literature, food, etc.
This is basically what all the Suno creations sound like to me, which is to say they definitely have a market, but that market isn't for people who have a more than average interest in music.
Not OP, but on the off chance you haven't seen this, I found the suno explorer thing quite nice. Hitting random a few times, I'll usually stumble onto something interesting. This was the first demo I heard where some of the AI tunes gave me goosebumps close to what human music does.
I'm not the person you responded to, but these are some examples from someone I know that had accompanying music videos (actual video production) made for them:
I feel I must push back on this, dang. I was being kind and not snarky, but criticism was earned once I listened to the tracks that were suggested. Had I said these were wonderful, no flag. The OP stated something that would lead someone to believe they were as good as the grandparent comment's description. It was in fact not audibly pleasant, despite the great visuals.
They probably won't. And if they do probably everyone will ridicule the songs. But maybe they will link the songs and maybe at least half the repliers will say the songs actually are good. I like rooting for the underdog.
Definitely more Boards of Canada, but Aphex is a big inspiration behind a lot of my prompts (I really just said that, yeah - it's kind of hilarious talking about generated music):
I listened to it when you posted it before. Better than most of the others I have listened to, which were all much more "cold".
The visual stuff also helps to make it more powerful and cohesive.
The bad part is that it wanders a lot and gets nowhere, and it does not create a climax that bridges into the second part. The same sounds and ambience, with a producer behind them creating an arrangement, would be much more powerful.
I don't want traditional DAWs replaced with an AI generator. What I want is to use AI to improve/speed up existing processes, for example:
- extracting melody/instrument from a clip to be able to edit the notes and render it back with the same instrument (a rough sketch of the extraction step is included below)
- extracting and reordering stems in a drum clip
- fixing timing and cleaning noise from a sloppily played guitar melody
- generating high-quality multi-mic instrument samples for free
- AI checking the melody and pointing out which notes are wrong or boring and how they can be fixed. I want to write the notes myself, but help would be very useful.
- AI helping to build harmony (pick chords for the melody, for example)
This would help a lot, but current models, which generate a whole song without any controls, are not what I want.
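For the first item on that list, here is a rough sketch of how the extraction step might look (my own illustrative example using librosa's pitch tracker, not a feature of any particular DAW; the input file name is hypothetical):

    # Estimate the pitch track of a monophonic clip and quantize it to MIDI
    # notes, so the notes could then be edited and re-rendered.
    import numpy as np
    import librosa

    def clip_to_notes(path):
        y, sr = librosa.load(path, mono=True)
        # probabilistic YIN pitch tracking; f0 is NaN on unvoiced frames
        f0, voiced, _ = librosa.pyin(
            y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
        )
        midi = np.round(librosa.hz_to_midi(f0))
        times = librosa.times_like(f0, sr=sr)
        # collapse consecutive frames with the same pitch into (start, end, note)
        notes, start = [], None
        for i, m in enumerate(midi):
            if start is None or m != midi[start]:
                if start is not None and voiced[start]:
                    notes.append((times[start], times[i], int(midi[start])))
                start = i
        if start is not None and voiced[start]:
            notes.append((times[start], times[-1], int(midi[start])))
        return notes  # ready for a MIDI writer or a piano-roll editor

    print(clip_to_notes("guitar_riff.wav")[:5])  # hypothetical input file

Rendering the edited notes back with the original instrument's timbre is the genuinely hard part; the note extraction itself is old, well-understood DSP.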
I watched the whole video, and the only new feature I saw compared to other DAWs is converting a singing clip to a trumpet clip (anyway, it's better to generate track by track rather than have the whole song generated without any controls). In other respects it looks more like a toy DAW, like some iPad app with very limited features (it doesn't even have recording delay compensation). Also, the UI looks like a web app: giant buttons and a lot of unnecessary decoration.
I wonder how they trained the model to generate clips. Did they hire a lot of musicians to record the samples, or just scrape multiple commercial instrument libraries without permission?
Well there's now a polka about diarrhea after eating spicy food. And a rock ballad duet about macaroni and cheese with a female singer and a screamo male part.
How is the stem support these days? In particular I would like to create some songs with vocals (my lyrics), then be able to remove the vocal track and replace it with my own.
Without a doubt, Suno Studio boasts some impressive features. One of the introductory videos features a voice-to-saxophone demonstration. The result sounded surprisingly good to me and was perfectly usable.
I wonder how AI-assisted music production like Suno will change the profession of being a musician. I think people want their favorite music artists to be real humans they can relate to. For that reason, I guess real singers won't be out of a job anytime soon. The same may apply to performers of real musical instruments. No one wants to see music played entirely from a computer during a live concert.
However, I predict that it will be very difficult to become even moderately well-known as a musician by just being a Suno Studio creator alone. A lot of good-sounding content will be created this way, and if an artist can't perform live or doesn't have a unique persona or story to attract an audience, it'll be hard to stand out from the endless mass of AI-generated content.
I think music used for commercials, movie trailers, etc. will move to being AI-generated (taking a revenue stream from artists into soulless corporations).
But for cases where music is the primary product, I don’t foresee AI-generated music overtaking anything.
Yeah personally I don't give a toss about the person who created the music and their backstory or stage performance.
However, I do care that the person who created the music made hundreds of micro decisions during the creation of the piece, such that it is coherent and has personality and structure, toward the goal of satisfying that individual's sense of aesthetics. Unsurprisingly, this is not something you get from current AI-generated music.
I'm a musician, and have been for 25+ years. In fact, I made a living from music before eventually getting my education and working in tech.
I've played around with Suno for a couple of months. It works for some things, but to me - it just doesn't give any sense of...accomplishment? I'd much rather sit down with my instruments, and come up with the stuff myself.
What is more, I get no feel of ownership. It is not me that's making music, I'm just feeding it prompts. That's it.
It's like paying some painter 5 bucks, and telling them what to paint. In the end, you'll have your painting, but you didn't paint it.
With that said, these tools have their uses. Generating jingles and muzak is easier than ever.
AI is a shortcut from an idea to something that resembles your idea in solid form. If your goal is to create something and get a sense of accomplishment from that creative act then AI will never work for you because the artistic process is exactly what the AI shortcut is short-cutting around.
On the other hand, that AI shortcut is circumventing decades of practice, so if you're at the start and you just want the output so you can use it in some way it's awesome.
If you're just using prompts it's almost always just scratching the surface when it comes to creative AI use.
With Suno, using your audio uploads to effectively 'filter' your ideas into something usable, compositing in the DAW, and putting a full song together -- that's going to be a much different experience than "make me a song"
Doesn't even refute it... Warhol's fame came from claiming the absurdities of capitalism and the art market as his working material. He was a "business artist".
Generative AI for low level things like creating samples seems reasonable. Mastering and composition is where this starts to jump the shark. The idea that a machine learning model will be responsible for figuring out what the final mix should sound like is insane to me.
I wanted to try the advanced Suno models but they are locked away, and I don't want to take out a membership just to try a couple of generations. So, for months, I gave up on Suno. It's locked at v3.5 for me. The last three models have all been closed.
I tried to sample a few songs generated by others, but I can't find their appeal.
I'm really curious how the traditional DAW features stack up against the incumbents. A good DAW has a ton of features. Developing a whole DAW from scratch just to add a "generate part" button sounds like a lot.
Okay, but why would you? What's the advantage of that? VSTs are resource hogs (when you have like 10+ running at once), not DAWs.
I somehow doubt a full-blown browser connected to more than a couple of VSTs would be less of a resource hog than doing the same in a DAW. On your computer. That you own. In your house. Without like additional 50ms of latency for the data to travel to the server and back.
Considering when the standard was written (v2 in 1999 and v3 in 2008), what else would they be?
As horrible as it sounds, a VST is just a .dll file you're running straight from the Internet. On a "positive" note, they're backwards-compatible with like Windows Vista!
I split pretty evenly between Bitwig, Logic X, and MuseScore and I can tell you the first big reservation I have with something like this is that it's an online DAW.
What does that mean? It means that your compositions (outside of bouncing them down to audio stems) exist within a highly proprietary SaaS format, and the moment you stop paying, you've got NOTHING.
99% of major DAWs (Ableton, Logic, FL Studio, Bitwig, Studio One, etc.) are sold as perpetual licenses.
The video seems to be trying to convince me that this is totally targeted at actual musicians. But I guess maybe that's how their target users imagine themselves?
As an actual musician who doesn't have any trouble creating my own melodies and timbres (both generated and 'manually' created), it's very obviously not targeted at someone like me.
It seems similar to GarageBand-type software, aiming to entice people with little audio production experience and give them an interesting-sounding snippet they can play back to friends.
For example, the only actual audio editing they displayed was slicing and re-pitching (you can't even choose the time-stretch algorithm), which is conceptually very simple to understand.
There's no ability to actually edit dynamics or do very accurate frequency adjustments that I can see from the demos, so it's basically useless for anything I would want to do.
Is a scrapbooker an artist? If a scrapbooker is an artist because of what collage is, is a MtG card collector picking layouts to arrange their cards in the array of plastic sleeves you can get, an artist?
If the MtG card collector is not an artist does that mean they're bad and need to stop?
If the MtG collector orders the cards only for the joy of making a pleasant composition, and it serves no function like finding the cards faster or keeping them in good shape, then I think they are doing art.
If the main reason is to keep their items clean, then no matter how much time they spend on the composition or how good it looks, they are not an artist.
Before AI there was a general consensus that creative areas (e.g. cities) were becoming a homogenized experience. A Starbuckization, if you will. I can’t help but wonder what gets lost when using tools like this.
It's unclear to me whether it will result in more homogeneity, as a result of prompts being a coarse medium that has the AI choosing what it's seen to fill in the rest, or less homogeneity, as a result of more people with non-mainstream tastes being able to create music aligned with their niche that otherwise wouldn't exist due to time/money restrictions. I think the latter seems a bit more likely, but time will tell.
There's not really any need to speculate when this has already played out in other mediums - would you say that the proliferation of LLMs has led to an explosion of novel and interesting works of fiction, or just an explosion of cookie-cutter slop ebooks?
I would say it's too soon to tell. There has been an uptick in ebook slop, but I'm not sure if it's impacted the homogeneity of literature, because I don't think anyone is reading AI ebooks. It's not enough for it to exist to impact culture; it has to be consumed.
Music is a uniquely interesting case, since music has a much lower barrier to entry for consumption.
Exactly my thought about the comment you replied to. Am I going to invest several days to read an AI slop novel? No. But I will take several minutes to read a blog post, and I have likely read many that were AI-generated or assisted.
Since you get exactly the kind of music you want, I think it leads to extremely small bubbles, which is pretty much the opposite of homogeneity.
For example, I had never heard epic power metal about birds, but with Suno I got exactly what I wanted. Sure, the sound quality (I only used v3.5) could be better and the songs could be longer, but I don’t care, I now have epic songs about my Bourke’s parakeet. However, I’m not pretentious enough to think those songs are interesting to anyone other than my wife and me, hence the smallness of the bubble.
Generating ‘content’ tailored to you and not meant for someone else’s taste.
Human artists need to make money and those who create music for a tiny bubble probably can’t make enough.
So as an artist what do you do? Do you have to create music with mass market appeal from the beginning?
Or do you need to bank on luck that your music for ‘small bubbles’ gets discovered?
Or you have to have clever marketing strategies to get your music in front of more ears to hopefully gain more fans. And create merch, tour etc.
I wonder how all this AI music is going to impact indie artists. Spotify and the like are just ripping them off, and on top of that their music is being / has been stolen by these AI data gobblers.
I don’t see how at this stage it can replace human expression though (singing, playing violin, piano, etc) which is very nuanced.
Same with acting… nuanced expressions that matter. I’m not sure AI can replicate the acting skills of Denise Gough (Dedra from Andor) for example… and many others.
But it would be awesome to generate more story lines or episodes from your favourite TV shows, for example shows from over 20 years ago.
Imagine being able to create more episodes of Star Trek TNG or DS9, maintaining the feel of that era, without letting someone like Kurtzman ruin it and tell you how new Star Trek should be.
But how do you ensure actors, writers and other creatives from that show will be compensated directly?
Or maybe this will only be possible in a Star Trek like world, where profit uber alles is not the focus anymore.
If no one is creating new music/styles for the models to steal, you will only get remixes of what already exists. AI is an entropy machine, it sucks all of the energy/momentum out of everything it touches.
Is no one going to mention that the music industry at large despises Suno because they stole a lot of their training data?
Why do we continue to prop up these companies when there are ethical alternatives? We are rapidly replacing all experts with AI trained in their data, and all the money goes to the AI companies. It should be intuitively obvious this isn’t good.
While I like using AI to assist with repetitive programming, I can’t help but feel sorry for my producer and illustrator friends who now have to compete with generative tools.
Is it snobby of me to look down upon art that is created using these tools as lesser because the human did not make every tiny decision going into a peice? That a persons taste and talent is no longer fully used to produce something and for someone reason to me what is what makes the art impressive and meaningful?
Something about art with imperfections still feels exciting, maybe even more so than something that is perfect; but if I see an AI-gen picture with 6 fingers, I just write it all off as slop.
I am happy to allow my generated code to come from “training data” but I see the use of AI in art, writing and music as using stolen artists hard work.
I feel like as time goes on, I feel even more conflicted about it all.
Applying your logic, did you feel bad for seamstresses when the industrial revolution took off? Did you feel bad for hardware manufacturers in America when they were outsourced to China? Art is also a form of labor, and whoever can produce quality at quantity wins. Idealizing art with some sort of religious idolatry is just plain silly. We haven't had the Picassos or Mozarts or Oscar Petersons for quite some time now, yet the world is just fine. People play playlists in front of live crowds of millions and get accolades for it vs playing real instruments. Times change, technology changes and art changes.
You either adapt or go hungry just like everybody else and art shouldn't be exempt from the mechanics of supply and demand.
I almost agree with you that this is about quality, but I still feel that the context in which art comes from influences how I perceive it.
Take, for example, a track by Fontaines D.C., a band from Ireland that writes extensively about the lived social and political experience.
Knowing where they are from and the general themes of their work makes their tracks feel authentic, and you can appreciate the worldview they have and the time spent producing the art, even if it does not align with your own tastes.
Trying to create something of the same themes and quality from a prompt of “make me an Irish pop rock track about growing up in the country” suddenly misses any authenticity.
Maybe this is what I am trying to get at. But like I said, I feel some conflict about this, as I personally value these tools for productivity.
You as a human chose to write this very common opinion and even include writing errors like the following
> That a persons taste and talent is no longer fully used to produce something and for someone reason to me what is what makes the art impressive and meaningful?
Human output isn't sacred. Yes, this is snobbery, a useless feeling of superiority.
I feel the same, including code. I cannot justify it. I can easily counter my own arguments. Still, the further we automate human thought and creativity the worse it makes me feel. I am disappointed that so many are content with mediocre imitation.
Nothing is being "stolen". It never was. Copyright law grants you rights over specific works. It doesn't protect styles, genres, general ideas, methods, or concepts. And it most certainly doesn't protect anyone from competition or the unyielding march of progress. Nothing can protect you from that.
Hey, Suno SWE here — I realize this wording might be slightly ambiguous, but you do not need to maintain a perpetual license to have commercial rights. That blurb is saying that songs created while you are subscribed are granted commercial use rights.
> If you made your songs while subscribed to a Pro or Premier plan, those songs are covered by a commercial use license.
It's human nature to want to feel like we've accomplished something. AI generators like Suno, where all you have to do is type in a prompt and you get the final result, take that sense of accomplishment away from us.
However, if we start working on a project where we're assisted by AI, for example, we're making a game where the sprites are generated by AI or the background music is generated by AI, but the overall game is still directed by humans, that sense of accomplishment stays.
But at some point we're going to reach the stage where the entire game can be generated in high quality, at the same level as humans. What then?
At least I'm an AI that plans WAY ahead since I created this account almost 8 years ago and have made hundreds of posts which have close to 8,000 karma.
Maybe lighten up on imagining AI slop under every bush?
Almost all naturally-generated music is derivative to one degree or another. And new tools like AI provide new ways to produce music, just like all new instruments have done in the past.
Take drum and bass. Omni Trio made a few tracks in the early 90s. It was interesting at the time, but it wasn't suddenly a genre. It only became so because other artists copied them, then copied other copies, and more and more kept doing it because they all enjoyed doing so.
Suno ain't gonna invent drum and bass, just like drum machines didn't invent house music. But drum machines did expand the kinds of music we could make, which led to house music, drum and bass, and many other new genres. Clever artists will use AI to make something fun and new, which will eventually grow into popular genres of music, because that's how it's always been done.
You can do exactly what you describe with interdimensional Spotify. People can describe all kinds of fun and interesting things that can be statistically generated for them, but they still didn’t make anything themselves, unlike in your other examples of using new tools.
Japanese oldies became a trend for a while - the people who found and repopularised the music don’t get to say they created it, or that it’s so awesome to have mastered the musical instrument of describing or searching for things. Well, of course they can, but forgive me if I don’t buy it.
Maybe when there is actual AGI then the AI will get the creative credit, but that’s not what we have and I still wouldn’t transfer the creative credit to the person who asked the AGI to write a song.
> Maybe when there is actual AGI then the AI will get the creative credit, but that’s not what we have and I still wouldn’t transfer the creative credit to the person who asked the AGI to write a song.
When artists made trance, the creative credit didn't go to Roland for the JP-8000 and 909, even though Roland was directly responsible for the fundamental sounds. Instead, the trance artists were revered. That's good.
> Japanese oldies became a trend for a while - the people who found and repopularised the music dont get to say they created it and how it’s so awesome
I'd bet there are modern artists who sampled that music and edited it into very-common rhythm patterns, resulting in a few hit songs (i.e. The Manual by The KLF).
> Take drum and bass. Omni Trio made a few tracks in the early 90s. It was interesting at the time, but it wasn't suddenly a genre. It only became so because other artists copied them, then copied other copies, and more and more kept doing it because they all enjoyed doing so.
Musicians don't just copy; everyone adds something new. It's like programmers taking some existing algorithm (like sorting) and improving it. The question is, can a Suno user add something new to the drum-and-bass pattern? Or can they only copy? Also, as it uses a text prompt, I cannot imagine how you even edit anything. "Make note number 3 longer by a half"? It must be a pain to edit the melody this way.
> Musicians don't just copy; everyone adds something new
Not everyone. I've followed electronic music for decades, and even in a paid-music store like Beatport, most artists reproduce what they've heard, and are often just a pale imitation because they have no idea how to make something better. That's the fundamental struggle of most creatives, regardless of tool or instrument.
I haven't tried Suno, but I imagine it's doing something similar to modern software: start with a pre-made music kit and hit the "Randomize" button for the sequencer & arpeggiator. It just happens to be an "infinite" bundle kit.
Well, I made songs with my lyrics that bring tears and memories to my audience. I don’t know what other creator things you are talking about, but this, to me, is creating.
And so sampling, dj'ing, these aren't skills? This isn't music?
Sampling is not just cutting a fragment from a song and calling it a day. Usually (if you look at Prodigy's tracks for example) it includes transformation so that the result doesn't sound much like the original. For example, you can sample a single note and make a melody from it. Or turn a soft violin note into a monster's roar.
As for DJing, I would say it is a pretty limited form of art, and it requires a lot of skill to create something new this way.
Yes, that's what people are doing with AI music as well. Acting like there's some obvious "line" of what constitutes meaningful transformation is silly.
I see DJing as more akin to being a skillful curator than being an artist. They are related but not equivalent.
I was a trance and dnb DJ, I definitely never claimed I was writing the songs I played and I think it would have been dishonest to do so.
Clicking on a button until you like what you hear is not "making music". I have nothing against these tools, but the hubris of the people using them is insane.
I just tried it out because of the discussions on this thread, and I got to say I land squarely on the side of this is neat, but it is not artistry. Every little thing I generated sounded like things I've heard before. I was trying hard to get it to create something unique, using obscure language or ideas. It didn't even get close to something interesting in my opinion, every single output was like if you combined every top 40 song ever made and then only distilled out the parts relevant to certain keywords in a prompt.
These tools will probably be great for making music for commercials. But if you want to make something interesting, unique, or experimental, I don't think these are quite suited for it.
It seems to be a very similar limitation to text-based llms. They are great at synthesizing the most likely response to your input. But never very good at coming up with something unique or unlikely.
What button? Again with the vending machine idea. No, it's language prompting, language has unbounded semantic space. It's not one choice out of 20, it's one choice out of countless possibilities.
I give my idea to the model, the model gives me new ideas, I iterate. After enough rounds I get some place where I would never have gotten on my own, or the model gotten there without me.
I am not the sole creator, neither is the model, credit belongs to the Process.
> language has unbounded semantic space
So if I have a melody in my head, how do I make AI render it using language? Even simpler, if I can beatbox a beat (like "pts-ts-ks-ts"), how do I describe it using language? I don't feel like I can make anything useful by prompting.
You record yourself whistling it and put it in as an input.
I've been recording myself on guitar and using suno to turn it into professional quality recordings with full backing band.
And I'm not trying to sell it, I just like hearing the ideas in my head turned into fully fleshed music of higher quality than I could produce with 100x more time to invest into it
This is closer to actually creating rather than generating music. However, this cannot be done with a text prompt, which the comment above claimed is expressive enough.
Actually, having an "autotune" AI that turns out-of-key, poor singing into a beautiful melody while keeping the voice timbre would not be bad.
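The pitch-snapping half of that idea is conceptually simple; keeping the timbre natural is the hard, model-heavy part. Here is a toy illustration of the snapping step only (assumptions: C major as the target scale, and the input is a list of already-detected frequencies):

    # Snap detected sung frequencies to the nearest note of C major.
    import math

    A4 = 440.0
    C_MAJOR_PCS = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of C major

    def hz_to_midi(f):
        return 69 + 12 * math.log2(f / A4)

    def midi_to_hz(m):
        return A4 * 2 ** ((m - 69) / 12)

    def snap_to_scale(f_hz):
        m = hz_to_midi(f_hz)
        candidates = [n for n in range(int(m) - 2, int(m) + 3)
                      if n % 12 in C_MAJOR_PCS]
        return midi_to_hz(min(candidates, key=lambda n: abs(n - m)))

    sung = [262.0, 290.0, 315.0, 348.0]  # slightly off-key frequencies
    print([round(snap_to_scale(f), 1) for f in sung])

Commercial pitch correction does roughly this on the analysis side and then resynthesizes the voice so the formants, i.e. the timbre, stay put.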
Well then I have news for you... that's what Suno is. You can generate from a simple text prompt, and you can describe timings and chord progressions and song structure. You can get very detailed, even providing recordings.
Yes the barrier for entry is low, but there is a very high ceiling as well
Refining a request that generates music is "making music", however you slice it.
I messed around with Udio when it first came out, and it wasn't just writing a prompt and there's your song.
You got 30 seconds, of which there might have been a hook that was interesting. So you would crop the hook and re-generate to get another 30 seconds before or after that, and so on.
I would liken it more to being the producer stitching together the sessions a band has recorded to produce a song.
If so, then twiddling the radio knob is also "making music".
You probably never prompted an AI model until you got something good from it.
When I commission a painting and tell the artist what I want painted, am I in fact the artist?
Trying to convince some tech people about how artistic creation works, and why it's more than just the right amount of "optimization" of bits for rapid results, is about as pointless as trying to make a chimpanzee understand the intricacies of Bach. The reductiveness of some of you is amusing, but also grotesque in the context of what art should mean for human experience.
You are really going to dislike my "code is art" opinion.
There's no magical line between "tech people" and "art people". The gatekeeping here is getting desperate as hell when you're now forced to cite Bach.
I don't think you really understood what I was saying, or what you're even talking about. I've got nothing to "gatekeep" and a defense of skill over automated regurgitation in creating things certainly isn't gatekeeping. People can use whatever tools they like, but they should keep in mind what distinguishes knowing how to create something from having it done for you at the metaphorical push of a button.
No, I understand the insults and ad hoc requirements just fine. And I can point you back to the decades and decades of literature about how anyone can be an artist and how anything can be art. The stuff that was openly and readily said until the second people started making art with AI. As for "push of a button", Visarga has already done a decent job of explaining how that's not actually the case. Not that I have any issue with people doing the metaphorical button push either.
Skill is nature's way of gatekeeping.
If you're too lazy to put effort into learning how to create an art so you can adequately express yourself, why should some technology do all the work for you, and why should anyone want to hear what "you" (ie: the machine) have to say?
This is exactly how we end up with endless slop, which doesn't provide a unique perspective, just a homogenized regurgitation of inputs.
>skill
>too lazy
Again, I wholly reject the idea that there's a line between 'tech people' and 'art people'. You can have an interest in both art and tech. You can do both 'traditional art' and AI art. I also reject the idea that AI tools require no skill, that's clearly not the case.
>nature
This can so easily be thrown back at you.
>why should anyone want to hear what "you" (ie: the machine) have to say?
So why are we having this discussion in the first place? Right, hundreds of millions are interested in exploring and creating with AI. You are not fighting against a small contingent who are trying to covet the meaning of "artist" or whatever. No, it's a mass movement of people being creative in a way that you don't like.
• I didn't say there's a line between "tech people" and "art people". Why would there be?
• We're having this discussion because people are trying to equate an auto-amalgamation/auto-generation machine with the artistic process, and in doing so, redefining what "art" means.
• Yes, you can "be creative" with AI, but don't fool yourself-- you're not creating art. I don't call myself a chef because I heated up a microwave dinner.
> I don't call myself a chef because I heated up a microwave dinner.
A better analogy would be "I don't call myself a chef when ordering from Uber Eats".
I'm not the artist just because I commissioned the painting and sent them a picture of my dog.
• The other guy certainly did. And your subsequent reply was an endorsement of his style of gatekeeping, so. I mean, just talk to some of the more active people in AI art. Many of them have been involved in art for decades.
• If throwing paint at a canvas is art (sure, why not?) then so is typing a few words into a 'machine'. Of course many people spend a considerable amount more effort than that. No different than learning Ableton Live or Blender.
• See previous points.
No, the hubris of people deciding they can define what music is is insane
Banging two sticks together is music. Get off your high horse.
I have claves, which are literally two sticks. I've also got a couple egg shakers, a couple tambourines.
Do you have ANY IDEA how hard these things are to play well?
I don't care if haphazard bashing of sticks with intent to make noise counts as 'music'. I do care if this whole line of discussion fundamentally equates any such bashing with, say, Jack Ashford.
I would be surprised if the name meant anything to you, as he's more obscure than he should be: the percussionist and tambourine player for the great days of Motown. Some of you folks don't know why that is special.
Maybe you need to refresh the context - 99.99% of AI-generated music, images or text is seen/heard only once, by the AI user. It's a private affair. The rest of the world is not invited.
If I write a song about my kid and cat it's funny for me and my wife. I don't expect anyone else to hear or like it. It has value to me because I set the topic. It doesn't even need to be perfect musically to be fun for a few minutes.
You seem to be the one who doesn't understand how special it is if you think good music is so simple that AI can zero shot it.
People are mixing and matching these songs and layering their own vocals etc to create novel music. This is barely different from sampling or papier mache or making collages.
People made the same reductionist arguments you're making about electronic music in the early days. Or digital art.
Dumping money into a company until desired results is not "building a company". I have nothing against capital, but the hubris of the people investing is insane. /s
Look, sarcasm aside, for you and the many people who agree with you, I would encourage opening your minds a bit. There was a time where even eating food was an intense struggle of intellect, skill, and patience. Now you walk into a building and grab anything you desire in exchange for money.
You can model this as a sort of "manifestation delta." The delta time & effort for acquiring food was once large, now it is small.
This was once true for nearly everything. Many things are now much much easier.
I know it is difficult to cope with, because many held a false belief that the arts were some kind of untouchable holy grail of pure humanness, never to be remotely approached by technology. But here we are, it didn't actually take much to make even that easier. The idea that this was somehow "the thing" that so many pegged their souls to, I would actually call THAT hubris.
Turns out, everyone needs to dig a bit deeper to learn who we really are.
This generative AI stuff is just another phase of a long line of evolution via technology for humanity. It means that more people can get what they want easier. They can go from thought to manifestation faster. This is a good thing.
The artists will still make art, just like blacksmiths still exist, or bow hunters still exist, or all the myriad of "old ways" still exist. They just won't be needed. They will be wanted, but they won't be needed.
The fewer middlemen to creation, the better. And when someone desires a thing created, and they put in the money, compute time, and prompting to do so, then they ARE the creator. Without them, the manifestation would stay in a realm of unrealized dreams. The act itself of shifting idea to reality is the act of creation. It doesn't matter how easy it is or becomes.
Your struggle to create is irrelevant to the energy of creation.
It doesn’t even have to be art. If someone told me they were a chef and had cooked some food, but in reality had ordered it, I’d think they were a bit of a moron for equating those things, or for thinking that by giving someone money or a request for something they were a creator, not a consumer.
It may be nice for society that ordering food is possible, but it doesn’t make one a chef to have done so.
I enjoy this take. Funding something is not the same as creating it. The Medicis were not artists, Michelangelo, Botticelli, Raphael, etc were.
You might not be a creator, but you could make an argument for being an executive producer.
But then, if working with an artist is reduced to talking at a computer, people seem to forget that whatever output they get is equally obtainable by everyone and therefore immediately uninteresting, unless the art engages the audience only in what could already be described using language rather than through the medium itself. In other words, you might ask for something different, but that ask is all you are expressing; nothing is expressed through the medium, which is the job of the artist you have replaced. It is simply generated to literally match the words. Want to stand out? Well, looks like you’ll have to find somebody to put in the work…
That being said, you can always construct from parts. Building a set of sounds from suno asks and using them like samples doesn’t seem that different from crate digging, and I’d never say Madlib isn’t an artist.
Assuming that 1. food is free and instant to get, and 2. there are infinite possibilities for food - then yes, if you ordered such a food from an infinite catalog you would get the credit.
But if you ordered 100 dishes iterating between designing your order, tasting, refining your order, and so on - maybe you even discover something new that nobody has realized before.
The gen-AI process is a loop, not a one-step prompt->output process.
Am I a chef then because I tell my private chefs what to make on an ongoing basis?
I disagree with the characterization as “absurd” to equate AI to an instrument. As you just said, it is a powerful tool. I would equate basic Suno prompting to a beginner on an instrument, as instruments are tools like anything else. Just because you get music out, it doesn’t mean it is actually “good” any more than if I smash random keys on a piano.
Controlling that flow of generation, re-prompting, adjusting, splicing, etc. to create a unique song that expresses your intention is significantly more work and requires significantly more creativity. The more you understand this “instrument”, the more accurate and efficient you become.
What you’re comparatively suggesting is that if a producer were to grab samples off Splice, slice them and dice them to rearrange them and make a unique song, that they didn’t “actually” make music. That seems like it would be a more absurd position than suggesting AI could be viewed as an instrument.
Tools like Suno make people feel like “their own music” is good and they have accomplished something because they elevate the floor of being bad at a tool (like all technological improvements do). They feel like they have been able to express their creativity and are proud, like a kid showing off a doodle. They share it with their friends, who will listen to it exactly one time in most cases and likely tell them it is “really good” and they “really like it” before never listening again.
That type of AI use is akin to a coloring book, but certainly doesn’t make for “good” music. When a kid shows off their badly colored efforts proudly, should we yell at them they aren’t doing “real art”, that their effort was meaningless, and that they should stop acting proud of such crap until they go to art school and do it “properly”?
> learning suno is no different than learning guitar
It most definitely is different and you’ve proven it with your own post. Guitar takes a long time to get to a place where you can produce the sounds you hear in your head. Suno gives you instant gratification.
Look, if it gives you pleasure to make Suno music then you should do it, but if you think having an AI steal a melody and add it to your songs is the same as creating something, you’re kidding yourself. At best you are a lyricist relying on a robo-composer to do the hard part. You could have achieved the same thing years ago by collaborating with a musician, like Bernie Taupin did with Elton John.
I agree with this.
There are drawbacks to being a skilled (trained/practiced) musician. You specialize in one instrument, and tend to have your creativity guided by its strengths/weaknesses.
I think that soon, some very accomplished musicians will learn to leverage tools like Suno, but they aren't in the majority yet. We're still in the "vibe-coding" phase of AI music generation.
We saw this happen with CG. When it started, engineers did most of the creating, and we got less-than-stellar results[0].
Then, CG became its own community and vocation, and true artists started to dominate.
Some of the CG art that I see nowadays, is every bit as impressive as the Great Masters.
We'll see that, when AI music generation comes into its own. It's not there, yet.
[0] https://www.youtube.com/watch?v=wTP2RUD_cL0
> will learn to leverage tools like Suno
Suno isn't a tool. Tools are characterized by precision and a steep learning curve, and "AI" is nothing of the sort.
>Some of the CG art that I see nowadays, is every bit as impressive as the Great Masters.
Really? Except for the minor detail that a great master spent months to years creating one of his works, instead of a literally mindless digital system assembling it (digitally, no pigments here) in an instant.
The technology is impressive, sure, but I see nothing artistically impressive about it, or emotionally satisfying, given how utterly it lacks the world and life behind real creation.
This is not something I feel like arguing about. We’ll never agree.
I’m an artist (the old-fashioned kind).
That’s where I come from, so my viewpoint is colored by my own experience and training.
If you're an actual artist, who's taken the time to paint and learn its intricacies, yet you're still just as impressed by an automated CG rendering of a work in Old Master style vs. one really done by a dedicated human hand, then you either hate the thing you learned because something about it frustrated you, or you have no clue about forming qualitative measurements of skill.
Also, "old-fashioned"? This to imply that someone rendering painterly visuals in seconds with AI is some new kind of artist? If so, then no, what they do isn't art to begin with. That at least requires an act of effortful creation.
It might be enlightening to find out a bit about the process of creating CGI; especially 3D scenes. Many works can definitely take over a year.
I spent some time, making CG art, and found it to be very difficult; but that was also back before some of the new tools were available. Apps like Procreate, with Apple Pencil and iPad Pro, are game-changers. They don't remove the need for a trained artist, though.
But really, some of the very best stuff comes quickly, from skilled hands. Van Gogh used to spit out paintings at a furious pace (and barely made enough to live on; their value didn't really show until long after his death).
I fail to see how you're disagreeing with me if you say this, or maybe we're at mixed signals. I'm specifically arguing against being impressed by a visual of some kind that was sludged out automatically by an LLM, my argument isn't against digital art by itself (I know how hard CGI can be, and there's nothing to be dismissed about it because it doesn't directly use physical materials), or against artists who refine their craft to such a point that they can create visual marvels in no time. Both of those require effort. They require a combination of effort with learning, exploring and to some extent also talent I'd say.
Briefly instructing an image model to imitate an Old Master and having it do so in seconds fulfills none of those needs, and at least to me there's nothing impressive about it once I know how it was created (yes, there is a distinction there, even if at first glance it might be hard to tell a photo of a real Old Master from an AI-rendered imitation).
The latter is not art, and the people who churn it out with their LLM of choice are not artists, at least not if that's their only qualification for professing to be such.
Well, I’m still not interested in arguing, so I’m not really “disagreeing,” as I think that we’re probably not really talking about the same thing, but I feel that I do have a fairly valid perspective.
When airbrushing became a thing, “real” artists were aghast. They screeched about how it was too “technical,” and removed the “creativity” from the process. Amateurs would be churning out garbage, dogs and cats would be living together, etc.
In fact, airbrushes sucked (I did quite a bit of it, myself), but they ushered in a new way of visualizing creative thinking. Artists like Roger Dean used them to great effect.
So people wanted what airbrushes gave you, but the tool was so limited, that it frustrated, more than enabled. Some real suckass “artists” definitely churned out a bunch of dross.
Airbrushing became a fairly “mercenary” medium; used primarily by commercial artists. That said, commercial artists have always used the same medium as fine artists. This was a medium that actually started as a commercial one.
Airbrushing is really frustrating and difficult. I feel that, given time, the tools could have evolved, but they were never given the chance.
When CG arrived, it basically knocked airbrushes into a cocked hat. It allowed pretty much the same visual effect, and was just as awkward, but not a whole lot more difficult. It also had serious commercial appeal. People could make money, because it allowed easy rendering, copying, and storage. There was no longer an “original,” but that really only bothered fine artists.
This medium was allowed to mature, and developed UI and refined techniques.
The exact same thing happened with electric guitars, digital recording and engineering, synthesizers, and digital photography. Every one of these tools was decried as “the devil’s right hand,” but became fundamental once true creatives mastered them and the tools matured.
“AI” (and we all know that it’s not really “intelligence,” but that’s what everyone calls it, so I will too. No one likes a pedant) is still in the “larval” stage. The people using it are still pretty ham-handed and uncreative. That’s going to change.
If you look at Roger Dean’s work, it’s pretty “haphazard.” He mixes mediums, sometimes using their antipathy to each other to produce effects (like mixing water and oil). He cuts out photos, and glues them onto airbrushed backgrounds, etc. He is very much a “modern” creative. Kai Krause is another example. Jimi Hendrix made electric guitars into magical instruments. Ray Kurzweil advanced electronic keyboards, but people like Klaus Schulze made them into musical instruments. These are folks that are masters of the new tools.
I guarantee that these types of creatives will learn to master the new tools, and will collaborate with engineers, to advance them. I developed digital imaging software, and worked with many talented photographers and retouchers, to refine tools. I know the process.
Of course, commercial applications will have outsized influence, but that’s always the case. Most of the masters were sponsored by patrons, and didn’t have the luxury to “play.” They needed to keep food on the table. That doesn’t make their work any less wonderful.
We’re just at the start of a new revolution. This will reach into almost every creative discipline. New techniques and new tribal knowledge will need to be developed. New artists will become specialists.
Personally, I’m looking forward to what happens, once true creatives start to master the new medium.
> Guitar takes a long time to get to a place where you can produce the sounds you hear in your head. Suno gives you instant gratification.
Wrong. It takes extremely long before you can make the sounds in your head fit into a scale and recognize them; with Suno it is impossible.
I would compare Suno to a musician-for-hire. You describe what you want, some time later he sends you the recording, you write clarifications, and get second revision, and so on. Suno is the same musician, except much faster, cheaper and with poor vocal skills. Everything you can do with Suno today, you could make before, albeit at much higher price.
> having an ai steal a melody and add it
The fact people still think this is how these models work is astonishing
Even if that were true, sampling is an artform and is behind one of the most popular and successful genres today (hip hop). The same goes for DJing - or is that also not a skill?
The same puritanism that claimed jazz wasn't music, then rap wasn't music, then EDM wasn't music, blah blah
Gatekeepers of what is and isn't art always end up wrong and crotchety on the other side. It's lame and played out.
I actually make a lot of sample based music, and it’s as much an art as you make it. Downloading a couple of loops from splice and layering them is lame, actually chopping and repurposing samples is not.
I never said Suno wasn’t “art”. The opposite is true. If you want to put your name on something that took no effort or skill and call it art, more power to you. You could do the same in other areas, and lame, low-effort “art” precedes AI by millennia. You are as welcome as anybody to call yourself a creator, however lame that effort may be.
But man the chutzpah of comparing that low effort drivel with people pushing genre boundaries.
If you sample you have to credit the creator of the original samples properly. It is astonishing that you do not know that.
What the hell does that have to do with the supposed artistry of it?
Sure, make the models credit the original artists, who cares. That doesn't change if it's an art that should be respected or not.
If you do not credit properly, it is stealing, not art. Suno does not credit properly.
So if Suno credits the artists suddenly AI music becomes art? What weird logic.
Your context window seems to be very narrow, please read your own original post and don't twist my words.
Yes, my original comment was saying that sampling is an artform. You for some reason think copyright law is what makes sampling an artform.
Yes, sampling is an artform. We are on the same page here. But your original comment implies that using Suno would be like sampling. Therefore I mentioned that you need to properly credit the usage of samples, which Suno does not do; Suno is stealing from real artists. Hope it is clearer now.
> "learning Suno" is no different than "learning guitar"
What an insanely disrespectful take.
It's only disrespectful if you're a gatekeeper.
defending the idea of complex, nuanced effort for the sake of coherent creation being a demonstration of skill is gatekeeping?
I'd love to see programmers reactions to having the measure of their work reduced in such a way as more people vibe code past all the technical nonsense.
Last I checked programming isn't judged as an art.
Your supposed judgment on skill has nothing to do with something's value as an artform
Well, that's nice for you, but you did absolutely nothing musically. It's like applying a "paint" filter to a picture and calling yourself a painter.
>... but "learning Suno" is no different than "learning guitar".
No it absolutely is not.
The difference shows up when you perform - perform with Suno on stage and then with Guitar, you will know then.
Live programming music using tools like SuperCollider is/was a (very niche) thing. Someone is on stage with a laptop and, starting from a blank screen that is typically projected for everyone to see, types in code that makes sounds (and sometimes visual effects). A lot of it involves procedurally generated sounds using simple random generators. Live prompting as part of such shows would not seem entirely out of place and someone might figure out how to make that work as a performance?
SuperCollider enthusiast here, I think you missed the "is no different than" part. Working with SuperCollider is very different from playing any instrument live, and I doubt that'll change.
Where playing an instrument means balancing the handling of tempo, rhythm and notes while mastering your human limitations, a tool like SuperCollider lets you just define these bits as reactive variables. The focus in SuperCollider is on audio synthesis and algorithmic composition, that's closer to dynamically stringing a guitar in unique rule-based ways - mastering that means bridging music- and signal-processing theories while balancing your processing resources. Random generators in audio synthesis are mostly used to bring in some human depth to it.
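For anyone who hasn't seen this kind of live coding, the "simple random generators" part can be sketched outside SuperCollider too. Below is a rough, minimal Python version (just numpy and the standard wave module; the scale, note length and envelope are arbitrary choices for illustration, not SuperCollider's or Suno's API): a random walk over a pentatonic scale, rendered as sine tones.

    # Rough analogue of "procedurally generated sounds using simple random
    # generators": a random walk over a pentatonic scale rendered as sine tones.
    # All names and numbers are illustrative, not anyone's real API.
    import random
    import wave
    import numpy as np

    SR = 44100                       # sample rate in Hz
    SCALE = [0, 2, 4, 7, 9]          # pentatonic degrees, semitones above the root
    ROOT_HZ = 220.0                  # A3

    def tone(freq, dur, amp=0.3):
        """One sine tone with a simple linear fade-out envelope."""
        t = np.linspace(0.0, dur, int(SR * dur), endpoint=False)
        env = np.linspace(1.0, 0.0, t.size)
        return amp * env * np.sin(2 * np.pi * freq * t)

    # Random walk: each step moves at most one scale degree up or down.
    idx, chunks = 0, []
    for _ in range(32):
        idx = max(0, min(len(SCALE) - 1, idx + random.choice([-1, 0, 1])))
        chunks.append(tone(ROOT_HZ * 2 ** (SCALE[idx] / 12), dur=0.25))

    pcm = (np.concatenate(chunks) * 32767).astype(np.int16)
    with wave.open("random_walk.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)            # 16-bit samples
        f.setframerate(SR)
        f.writeframes(pcm.tobytes())

The real appeal of live coding is that you can rewrite rules like these while the sound is playing; a static script only hints at the idea.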
I think "learning guitar" is different from "learning Suno" because with guitar you have control over what you play. I also love music, and making music, and have no natural musical talent, but I see no interest in generating a song without me deciding every aspect and choosing every note. It's like taking the most interesting and creative part from me.
Personally, I wouldn't be able to reconcile the fact that these generated stems are basically the same as AI-generated images: built from the digital bits of existing tracks/music/recordings that someone else spent time and hard work making and sharing, only to have them unexpectedly hoovered up by these corporations as part of their giant training data set.
Guitar, make it sound more like someone else's hard work.
The year is 2027. A 16 year old at a house party pulls out his laptop and asks his friends to gather round. He starts typing “a song about a wonderful wall” and completely original music starts playing. A girl in the corner, hearing the heartfelt melody, starts to fall for the boy.
I made an image, then a video of this…it felt cute
https://limewire.com/d/ShyFU#hGrEzdZegs
TIL that the lime still wires
"Robot experience this tragic irony for me!"
https://www.youtube.com/watch?v=QKZNOnLUBmQ
I’ll agree that you+ai is creating a pleasant sequence of sounds ie music. And I don’t think anyone has the right to say (within reason) what is music or isn’t.
But we might need new vocabulary to differentiate that from the act of learning & using different layers of musical theory + physical ability on an instrument (including tools like supercollider) + your lived experience as a human to produce music.
Maybe some day soon all the songs on the radio and Spotify will be ai generated and hyper personalized and we’ll happily dance to it, but I’ll bet my last dollar that as long as humans exist, they’ll continue grinding away (manually?) at whatever musical instrument of the time.
I see it as like having the answer key to every homework assignment for a course. It's easy to convince yourself that it doesn't hurt learning -- but there's probably a reason the answers aren't given to you. The struggle, the experience of "being stuck", the ability to understand why things don't work -- may be necessary precursors to true understanding. We're seeing pretty discouraging results from people who are learning to "vibe code" without an understanding of how they would write the code themselves.
You may wish that learning Suno is no different than learning guitar, but I think the effects of AI may be a bit pernicious, and lead to a stagnation that takes a while to truly be felt. Nobody can say one way or the other yet. That said, I'm happy you can make music that you enjoy, and that Suno enables you to do it. Such tools are at their best when they're helping people like you.
I guess it’s similar to learning by watching masters on YouTube - I’m convinced that passively watching them causes the illusion in the viewer that they are also capable of the same, but if they were to actually try they miss all the little details and experience that makes their performance possible. Watching a chess GM play, for example, can make you feel like you understand what’s happening but if you don’t actually learn and get experience you’re still going to get beat all the time by people, even beginners, who did. But as long as you never test this, you get to live with the self-satisfaction of having “mastered” something yourself.
Of course, nothing wrong with watching and appreciating a master at work. It’s just when this is sold as the illusion of education passively absorbed through a screen that I think it can be harmful. Or at least a waste of time.
It gets very real very quickly with skateboarding. You can watch all the YouTube and Instagram you want about how to do an Ollie or a kickflip in 30s; now go out and try.
The learning is in the failing; the satisfaction of landing it is in the journey that put you there.
Love the passionate replies! I think I especially agree with this comment:
>> think that soon, some very accomplished musicians will learn to leverage tools like Suno, but they aren't in the majority yet. We're still in the "vibe-coding" phase of AI music generation. We saw this happen with CG. When it started, engineers did most of the creating, and we got less-than-stellar results[0]. Then, CG became its own community and vocation, and true artists started to dominate.
Hey, it's likely not going to be me, but let's be real - any user of this technology who has gone beyond the "type in a prompt and look, I got a silly song about poop" stage will probably agree: someone's going to produce some bangers using this tech. It's inevitable, and if you don't think so, it's likely you haven't done anything more than "low-effort" work on these platforms. "Low-effort" work - which the majority of AI swill is - is going to suck, whether it's AI or not.
And while I have the forum, I do want to make another point. I pay more per month for Suno than for Spotify ($25 vs $9). Suno/Udio etc: do what you need to do to make sure the artists and catalogues are getting compensated... as a user I would pay even more knowing that was settled.
I’m gonna barf
> but "learning Suno" is no different than "learning guitar"
Ehh... No.
guys I think we're being too hard on this guy, why are we so upset that songeater is now Jimi Hendrix because of Suno? I know I'm jealous, I've been beating on my guitar for decades and I'm still pretty meh, but its because I lack the true creative genius required to type suno.com into my browser, not everyone is cut out to be a literal GUITAR GOD like songeater here. Lets give him the props he deserves for the massive investment of backbreaking labor over the past decades^w years^w weeks^w days^w hours^w halfhourmaybe it took for him to learn pseudo-guitar.
I apologise for this comment of mine that I am replying to, it seems like it's sort of an unpopular opinion, but in the spirit of AI, please allow me some personal back propagation with my own low-poly blob of neural networkedness and use this useful training to adjust my personal weights (known to low IQ normies as "values") to better fit in with the community here.
So, this songeater guy, what a poser eh?
Suno is moving toward becoming a browser-based DAW that happens to use AI. There are already more capable and established DAWs, and I see no reason they can't implement AI into their workflows-- in a more precise manner, where it's actually useful, instead of wholesale as a gimmick. Many are already doing this. So I don't understand where Suno is going with any of this.
It either needs to be: 1. So easy anyone can press a button and magically get exactly what they want with perfect accuracy and quality. 2. So robust and powerful it enables new kinds of music production and super-charges human producers.
This is neither. And I don't buy Suno's argument that they're solving a real problem here. Creative people don't hate the process of creating art-- it's the process itself and the personal expression that make it worthwhile. And listeners/consumers can tell the difference between art created with intent and soul, and a pale imitation of that.
> It either needs to be: 1. So easy anyone can press a button and magically get exactly what they want with perfect accuracy and quality. 2. So robust and powerful it enables new kinds of music production and super-charges human producers.
Don't forget the secret third option - facilitate a tidal wave of empty-calorie content which saturates every avenue for discovery and "wins" purely by drowning everything else out through sheer volume. We're at the point where some genAI companies are all but admitting that's their goal.
https://www.hollywoodreporter.com/business/digital/ai-podcas...
That seems to be the purpose. It doesn’t have to sound that good to the listener. It’s just made to extract dollars from Spotify when you flood the platform with so much slop that some of it starts getting played by users who just let the machine pick the next song.
Reminds me of the saying about propaganda that argues it's not about convincing, but about drowning out the rest.
This tidal wave has already destroyed the gaming industry; lots of low-quality AI slop games have flooded app stores, Steam, etc., leaving both gamers and creators frustrated.
I gave up on synthwave, which was a genre I loved, because there's so much AI it's just not worth the effort to find new music. I'll listen to old songs but I have zero interest in new ones. I moved to a more niche genre where there's no AI yet.
same, I listen to a synthwave playlist on spotify, and once it ends spotify starts playing 'similar' music and at that point I just start feeling gross.
Yes I'm already doing this manually with Reason. I'll compose something that's quite bare bones, export the audio and run it through Suno, asking it to cover and improvise with a specific style, then when I have something I like, I split that into stems, import some or all of these to Reason and then reconstruct and enhance the sound using instruments in Reason, mostly by replaying the parts I like on keyboard and tweaking it in the piano roll. Often I get additional inspiration just by doing that. Eventually I delete all the tracks that came from Suno stems when I've finished this process.
That way I get new musical ideas from Suno but without any trace of Suno in the final output. Suno's output, even with the v5 model, is never quite what I want anyway so this way makes most sense to me. Also it means there's no Suno audio watermarking in the final product.
This is similar to what I do. There are all kinds of useful ways to incorporate AI into the music production process. It should be treated like a collaborative partner, or any other tool/plugin.
It shouldn't be a magic button that does everything for you, removing the human element. A human consciously making decisions with intent, informed by life experience, to share a particular perspective, is what makes art art.
That's the same process as AI-assisted coding. Or AI-assisted writing. Or AI-assisted anything.
We use AI-assisted coding to be more productive or to do boring stuff. If the 'making the music' part is what you are getting away from, why make music? You're basically a shitty 'producer' at that point (decent producers are amazing at those boring parts you are skipping and can fill out a track without hitting up a robot).
Music is math. Art is patterns. Just as we use AI to iterate through design and code, musicians could use it to generate musical patterns, including chords, harmonies, melodies, and rhythms. In theory, it could pull up and manipulate instruments and effects based on a description rather than rifling through file names and parameters (i.e. the boring stuff).
Most success as a musician stems from developing a unique style, having a unique timbre, and/or writing creative lyrics. Whether a coder, designer, artist, or musician, the best creatives start by practicing the patterns of those who came before. But most will never stand out and just follow existing patterns.
AI is nothing more than mixing together existing patterns, so it's not necessarily bad. Some people just want to play around and get a result. Others will want to learn from it to find their own thing. Either way works.
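To make the "patterns" point above concrete, here's a tiny, generic sketch of pattern generation in plain Python: it spells out the diatonic triads of the ubiquitous I-V-vi-IV progression. This is textbook music theory, nothing specific to Suno or any particular tool.

    # Toy illustration of "music is patterns": spell out the diatonic triads of a
    # common I-V-vi-IV progression in a given major key.
    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]      # semitone offsets of a major scale

    def triad(key_root, degree):
        """Note names of the diatonic triad on a 1-based scale degree."""
        scale = [(key_root + step) % 12 for step in MAJOR_STEPS]
        return [NOTE_NAMES[scale[(degree - 1 + i) % 7]] for i in (0, 2, 4)]

    progression = [1, 5, 6, 4]                 # the ubiquitous I-V-vi-IV
    print([triad(0, d) for d in progression])  # key of C major
    # [['C', 'E', 'G'], ['G', 'B', 'D'], ['A', 'C', 'E'], ['F', 'A', 'C']]

Swap the progression list or the key root and you get a different, equally formulaic harmonic skeleton, which is roughly the level at which a lot of by-the-numbers pop writing operates.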
With art and AI, people seem to enjoy the part where they say they made something and get credit for it, but didn’t actually have to bother. People used to find art of people on the internet and claim it as their own, now an AI can statistically generate it for you and it maybe feels a bit less icky. Though I have to agree it all seems sort of pointless, like buying trophies for sports you didn’t play.
People like different things, obviously? Boring is very subjective.
"Creative people don't hate the process of creating art"
Yep. I was a professional music producer before the pandemic, and I couldn't agree more.
Honestly, I'm glad we are destroying every way possible to earn money with music, so we find another profession for that purpose and then we can make music for fun and love again.
> And listeners/consumers can tell the difference between art created with intent and soul, and a pale imitation of that.
Strong disagree there. I think that's true of a very small % of consumers nowadays. I mean, total honesty, I think that Suno is not worse than a large fraction of the commercial pop made by humans (maybe) that tops the charts regularly. It's already extremely formula based artificial music made by professional hit makers from Sweden or Korea.
The objective was never to grab discerning listeners but the mass of people. It would work even if they grab 50% but honestly I think it's going to be higher.
The difference between human and computer music would be obvious at a live concert for anyone.
All you said is very reasonable.
But then you look at image gen. The established one, namely Adobe, are surprisingly not winning the AI race.
Then you look at code gen. The established IDEs are doing even worse.
I don't rule out the possibility of music being truly special, but the idea of "established tools can just easily integrate AI right" isn't universally true.
Agreed. The problem with being an incumbent in this era is that much of the existing UI/UX assumptions are based on the idea of manual manipulation. We're so early that foundational assumptions are still up for debate, and for large companies like Adobe, there's just no way they'd be able to move at the required pace to keep up. Heck I'm at a company that's less than 2 years old, with less than 20 people, solely devoted to AI, and it's still hard for us to keep up.
What Adobe and others ought to be doing is setting up internal labs that have free rein to explore whatever ideas they want, with no barriers or formality. I doubt any of them will do that.
The innovator's dilemma is real. IMO none of the big DAWs are well-positioned to capitalize on AI, but that doesn't mean they couldn't.
I'd argue music generation is different from image or code generation. It's closer to being purely art. Take image generation for example. Most of the disruption is coming from competition with graphic design, marketing, creative/production processes, etc. The art world isn't up in arms about AI "art" competing with human art.
It does mean that. The switch from writing “applicable” software to creating cutting-edge AI is almost impossible. The parent comment gives great examples; we can add JetBrains to that list (amazing IDEs, zero ability to catch up with ML), for example. It’s a very different, fast-paced, science-driven domain.
> And listeners/consumers can tell the difference between art created with intent and soul, and a pale imitation of that.
Um, have you seen the pop charts at any time in the past... well, since forever, actually?
The majority of commercially produced music today is created with intent to take your money and nothing else, with performers little more than actors lip-syncing to the same tired beat. Because it sells.
> And listeners/consumers can tell the difference between art created with intent and soul, and a pale imitation of that.
Respectfully I disagree. We have had curated, manufactured pop, built by committee and sung by pretty mouthpieces with no emotional connection, for a long time now, and they make big money.
And look at the vocaloid stuff too.
Those who care, care. Everyone else?
> And look at the vocaloid stuff too.
What about the vocaloid stuff?
It’s soulless by definition, designed by committee and sung by a machine. Entirely manufactured. But people like it.
It’s a counterpoint to the above argument that listeners will be dismissive of AI-produced music because it is a pale imitation of art created with intent and soul. On the contrary, such music thrives and is very popular already.
People love to be snobs about pop music, but it's music.
A particular piece of art isn't "soulless" just because it didn't move you. There were still plenty of humans involved in making it, who made specific artistic decisions. In pop music, the creative decisions are often driven by a desire to be as broadly appealing as possible. That's not a good or bad thing unless you judge it as such. It's still art.
The contention that there is something so ethereal, uncapturable, so uniquely and indescribably human that is put into even the blandest piece of mass-market pop that an AI trained on all the music ever made couldn’t create a track that people would accept due to some intangible hollowness, some void where the inestimable quintessence of human existence should be…
That’s hilarious.
I’m not saying it’s not ‘art’ whatever that might mean, I am saying this idea that people won’t accept and enjoy an AI version is a fantasy.
It might be "soulless" compared to real vocal, but vocaloid music is still created and played on keyboard by a human.
Surely it's no more soulless than a song without words at all?
AI "music" sounds like ass. The Temu of sonic products. It will sell, but only in niche markets. Traditionally manufactured pop "music" is a more upscale and widely applicable market position.
Honestly it sounds like AI generated music will just widen the bimodal distribution: those who care about craft and authenticity on one end, and a race to the bottom on the other. Everything in the middle will be squeezed.
I do care; in fact I started buying vinyl and CDs again. I will not consume AI slop.
Wait until you dive into the Suno community to discover some people got signed by labels and are publishing CDs and vinyls…
> Creative people don't hate the process of creating art
I mean, I hate when it's difficult to get the medium to express my vision... not that AI especially would help with that when I'm actually attached to that vision in detail....
What's special about Suno 5 is that the songs are actually good enough to listen to in place of professionally produced songs. For example, my favorite genre is new jack swing, and there is a very limited amount of it since it was only briefly popular during the 90s. Now I have an endless supply of it, and you can't tell that it's AI-produced anymore. Sure, an expert might be able to detect it, but for consumers it's just as good as a Spotify playlist.
This is the first time I'm actually paying for generated AI content, because the value I get is immense. I really think we are headed towards an oversupply of content, where there will be more stuff to read, watch, and listen to, with very real value in all of them.
This spells out the inevitable change in the labor market for content creators. There will always be value in human-created content, and some will make more money, but it will always have AI-generated content competing with it, to the point where it will be hard to stay ahead and eventually people will stop caring.
Case in point, I see some comments being snarkish towards Suno, but as a consumer I could care less if you put your soul and years into producing art vs the one I can get a lot of today and now especially when there is virtually no difference in quality.
Truly an amazing accomplishment from the Suno team, and probably the first time I've subscribed to a music service after decades of downloading mp3s and hunting down new songs to listen to on Youtube. Suno "steamified" this process, and while I will still use Youtube to discover new genres, I am now spending most of my time in Suno, listening to an endless amount of the exact sound I am looking for.
> as a consumer I could care less if you put your soul and years into producing art vs the one I can get a lot of today
a quantity over quality argument with regard to art is wild.
people just start liking things if there is a lot of it
> as a consumer I could care less if you put your soul and years into producing art vs the one I can get a lot of today and now especially when there is virtually no difference in quality
as a fellow consumer I care a lot actually
> I really think we are headed towards an oversupply of content, where there will be more stuff to read, watch, and listen to, with very real value in all of them.
We're already here with human created content.
Yes, but not uniformly so - some niches are very popular, but there's plenty of obscure ones where if you're a fan you literally know everyone making music in some very niche genre because there are so few of them.
> as a consumer I could care less if you put your soul and years into producing art
> my favorite genre is new jack swing
My friend, where do you think your favorite genre that AI is now parroting comes from…
What a dystopian and depressing outlook on the valuation and enjoyment of art. Truly hope you're an anomaly.
I simultaneously feel repulsed by AI music and "art" and yet am totally open to being captivated by AI music if I really feel something is musically better than almost any human-made music I've heard.
I just haven't heard anything that isn't "slopful" yet. If I do, I will still feel weird about it, but I'm a big believer in the value of "aesthetic objects in themselves", so I am eager to find something I do actually like.
Even just knowing something was drawn or composed by an AI will negatively taint my opinion from the start, but I'm still open.
I do love generative music. I don't care if you get your notes from a Markov chain, a shift register, an LLM, or your brain.
The problem with AI music is that it just sounds like shit.
Right. That's pretty much my stance.
I don't totally discount the position that the human "soul" is what makes art art and all that, but I still do think something can be very enjoyable and good without being created by a sentient entity, in theory.
Even AI generated dubstep?
https://suno.com/s/gJhedd4hmIsHbccc
https://suno.com/s/qfFWu3kyQ2cXW8mT
Oh wow that sounds so bad, even worse than I imagined.
I hang out with dubstep and DnB guys on a UK forum. If they're looking to rave out they'll make something like this: https://www.youtube.com/watch?v=Sq0Pg7fnkAg
You'll notice many similarities in instrumentation, but how is Suno not like a bad RealAudio take on some of these noises haphazardly lumped together?
Or, same artist, different track: https://www.youtube.com/watch?v=UhvpCHfe0m0
Don't you need more focus and aggression to make even sell-out weak tea dubstep? I feel the generative process really severely fails to deliver anywhere near the correct sound, even for 'bad artificial lol dubstep' sounds.
Another even closer to the intent of the Suno one: https://www.youtube.com/watch?v=G3q_kmpq-9Y
Yes
[dead]
Couldn't agree more. Instead of seeking out people making that art, we are now leaving "art" or human expressive emotion to random noise and paying for it.
I've had some fun with Suno 5, but the songs absolutely don't replace well-made human music. They're much more formulaic and over-produced. They're usually forgettable. People I play them for can always tell they're AI produced.
True music enthusiasts will hold out for a while, but I think AI music will easily replace most pop currently on the radio and streaming for your average Joe. That stuff has been "fake" since as early as the mid-2000s: quantized straight to the grid, pitch-corrected, with programmed drums, guitar, even vocals, and then churned out like widgets on a conveyor belt.
Currently for me, for the type of music I enjoy, v4.5+ is the sweet spot; V5 of their model is a regression.
I was very impressed that with v4.5+ I could get quite good songs evocative of Devo, the Yeah Yeah Yeahs, Metric, etc.
Version 5 currently makes it harder (or I haven't figured out a way) to generate this kind of chopped/produced sound. It doesn't follow complex style definitions and tends to generate songs that are too slow and "smoothed" over.
V5 could be built differently, and it will take a few revisions until they match v4's “creativity,” for lack of a better word. What v5 brings is quality in the recording: less AI shimmering in the background. But yes, I agree the songs are more… flat.
I find it depends how you generate it. Asking Suno to make covers of uploaded recordings tends to give much, much better results than asking it to cook a song from scratch. There are still quite a few tells that it's AI-made but it's not bad at all, at least in my experience so far.
[flagged]
Rather than call it a lie, I think it would be more fair to say in this instance it's a matter of taste/perspective. Personally, I enjoy listening to Suno songs.
[flagged]
These things are the composition equivalent of Guitar Hero...
They give users (players?) a sense of agency, which makes them satisfying. But in reality, you're no more composing than a Guitar Hero player is playing the guitar, nor will you learn how to from doing so. No matter how sophisticated the transformations in an LLM, you're ultimately using other people's music in a sophisticated mashup game.
In Guitar Hero, however, the people whose music was being used at least got royalties. :-/
Suno does have the cover feature, where you can upload your own playing, say playing a simple melody on a guitar, and it will take that and create a song from it, together with lyrics you wrote. So it can be fairly fulfilling. What it doesn’t allow you is full control of the composition. But that’s AI.
Well, that's LLMs. I'm doing a PhD in computer science and music right now, focused on programming languages for music and algorithmic composition. There are other (IMHO) more interesting techniques that can be used in music and that used to be called AI, with which you do have control over and understanding of exactly what's happening, such as constraint satisfaction programming and dynamic search.
Ge Wang (professor in my field) wrote a great article on why LLMs are so uninteresting from a musical perspective. https://hai.stanford.edu/news/ge-wang-genai-art-is-the-least...
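To give a flavor of the constraint-satisfaction approach mentioned above, here is a deliberately tiny Python sketch: hand-rolled backtracking rather than a real solver, with a few arbitrary example rules (start on the tonic, end on a tonic, no immediate repeats, no leap larger than five semitones) over an 8-note melody in C major. It's only an illustration of the general idea, not anything from the research linked above.

    # Tiny constraint-satisfaction sketch: backtracking search for an 8-note melody
    # in C major. The rules are arbitrary examples; real systems use proper solvers
    # and far richer musical constraints.
    import random

    SCALE = [60, 62, 64, 65, 67, 69, 71, 72]   # MIDI C4..C5, C major
    LENGTH = 8

    def ok(melody):
        """Start on the tonic, no immediate repeats, leaps of at most 5 semitones;
        a complete melody must also end on a tonic."""
        if melody[0] != 60:
            return False
        for a, b in zip(melody, melody[1:]):
            if a == b or abs(a - b) > 5:
                return False
        if len(melody) == LENGTH and melody[-1] not in (60, 72):
            return False
        return True

    def solve(melody=()):
        """Depth-first search with backtracking over the scale."""
        if len(melody) == LENGTH:
            return list(melody)
        for note in random.sample(SCALE, len(SCALE)):
            candidate = melody + (note,)
            if ok(candidate):
                result = solve(candidate)
                if result:
                    return result
        return None

    print(solve())   # one melody satisfying every rule, e.g. [60, 65, 62, 64, 67, 71, 67, 72]

The point is that every note in the result is explainable by a rule you wrote, which is exactly the kind of control and understanding a text prompt doesn't give you.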
Meanwhile I've mixed a song of mine down on a Tascam 688 8-track tape recorder. I have a big smile on my face because I find this very enjoyable. The haptics, the sound and my hand-crafted product. A piece of art made by a human. No AI will replace this for me.
I'm pretty sure your creation will be more interesting than any AI-generated "art".
Suno can create catchy songs and succeed in matching genre expectations / cliches.
I've been in phases where I had output I generated with it playing in my head constantly (due to repeated listening).
The output was catchy.
Then tried to generate interesting music, failed spectacularly.
And I, among other stuff, enjoy a lot of music that people consider formulaic, abstract or straight-up boring.
What's missing in AI "art" is intent and well... creativity.
I think it will have a disrupting influence on commercial pop culture, no question.
I also wouldn't claim to be able to classify correctly whether something is AI output.
But art is something entirely different.
This argument always seemed weird to me because it's obviously the case that ‘intent and creativity’ is at best a slim minority of consumer preference. If I want a simple dubstep line with random romantic words meaninglessly peppered in it, or the same 4 chord pop song as every other pop song, or you know, anything popular, that's easy. If I want something that's saying something interesting in the topics I find interesting, or doing something new with the medium that's also well executed and listenable to on repeat, finding that is actually just hard, and it's rarely the stuff with the dollars and overlapping interests to get the high production values.
I'm not going to claim AI audio isn't also awash with popular themes and tropes, or that it's a bastion of creativity. I'm also not going to claim that the deepest, really creative ideas aren't expressed in human written works. There are enough people to make truly exceptional songs and prompt many truly mindless AI generations. And there's also nothing wrong with most songs optimizing for personal preferences that are not that; I'm not trying to 'argue against' popular music.
But I am going to claim, for me, that it just hasn't been practical to saturate my tastes from public media, and that most of the reason I personally listen to AI music is that I want something that says or does something I think is creative, exploratory, or intellectually interesting that I don't know how to get from anywhere else.
I was alive to hear the evolution of "dubstep" before it became a cliche (the wobble stuff). AI couldn't invent a new sound.
The way you describe music, sure, there will be an AI that is able to provide you with a continuous stream of auditory stimuli, like the Penfield Mood Organ from "Do Androids Dream of Electric Sheep?".
That's just not what makes art or music interesting to me, and why I also don't listen to auto-curated "mood" playlists on Spotify.
> Penfield mood organ - Humans use the mood organ to dial specific emotions so they can experience emotions without actually possessing them. In the beginning of the novel, Rick implores his wife to use her dialing console to prevent a fight. He wants her to thoughtlessly dial emotions like "the desire to watch TV" or "awareness of the manifold possibilities open to [her] in the future" (Dick, 6). When emotions can be easily avoided with the mood organ, humans no longer require personal relationships to overcome feelings of isolation or loneliness.
I also don't want mood playlists, the majority of the time. I'm saying I use AI generations for exactly the opposite — so that I can explore and listen to things that are more intellectually interesting in the ways I find intellectually interesting, because the human music it's easy to find is common denominator music.
The way you describe AI ('continuous stream of auditory stimuli') is the way I'd describe Spotify. Sure, you could use AI to make a faux Spotify, but, like, why would you? The popular stuff already has saturating supply, and it will sound much better than an AI generation.
I was also having Spotify in mind when writing about algorithmic curation and GenAI in music ;)
Regarding this:
> I'm saying I use AI generations for exactly the opposite — so that I can explore and listen to things that are more intellectually interesting in the ways I find intellectually interesting
I just have not found any AI music that would satisfy this description. But I am very interested in failure modes of GenAI. Especially in Suno, it was cracking me up at times.
I'm also sure there will be a space for interesting and/or challenging music generated with neural networks involved.
But I don't see any revolution here so far.
Care to share examples of AI-assisted music you find interesting? To elaborate, I don't find jarring or curious combinations of cliches interesting.
AI could not invent a new style, it seems to me. To repeat this point.
And I've never had any problem finding interesting music.
Key to me is diving into labels, artists and their philosophy, after I got interested into particular ones (the other way around doesn't work for me).
I adore discogs.com for that. Regarding interviews and stuff, there's sadly a huge decline in quality written material about music, I feel.
"Lowest-common-denominator music" is exactly what Suno produces, at least in my ears.
I could go on and list music I like, but generally avoid that.
Wait, I'll do it anyway for a bit... at the moment, I like
Punctum - Remote Sensing EP
(Caterina Barbieri)
and
AtomTM vs Pete Namlook - Jet chamber LP
just for example
I also love so much other music.
To me, such music is miles apart from the slop I heard from AI.
I heard there's research into generating music in the style of JS Bach as well. How's that going?
I'd guess: probably not too well, because the genius of Bach is not only in complexity or counterpoint rules.
His music is very emotional to me (at least the portions I like).
And, like any good music, it has moments of surprise. It's not just a formula, or a "vibe", or a "genre".
Could AI create a new Techno, a new Blues, a new Bossa Nova?
I doubt it.
I've been avoiding trying to class what counts as 'interesting' because it feels like the wrong point to make, for a medium so entwined with personal preferences that are so often at odds with each other. To single out my tastes feels like it would be saying 'paleontology is the true interesting subject, unlike architecture'. The tracks you listed here aren't really my thing, but I can see why AI audio doesn't help with it. That's fine.
I will also repeat that I'm well aware that the best stuff is definitely all human. It's not my genre either, but traditional composers like Bach certainly made extremely interesting, clever, even deeply-studiable pieces and AI 'in the style of' those composers surely won't capture much of that. There's a lot of stuff AI can't do wholesale; one particularly strong example is if you're Jacob Collier, AI is not going to make the complex harmonizations and song structures there.
AI is pretty bad at these textural or instrument exploration things like from Collier above or Mike Dawes or Yosi Horikawa or Yoko Kanno or Keiichi Okabe. There's a bunch of music I listen to because it's generically a genre or mood I like and it's well produced, which I won't list here, and AI audio can often do stuff like that at baseline but not especially well. There's also nostalgia; I'm also certain a huge part of the reason I like the Celeste soundtrack so much is in part that I liked the game so much.
But then there's a whole category of music I listen to where the texture is supplemental to the part that defines it, like most of Acapella Science or Bug Hunter or Tom Lehrer. Eg. Prisencolinensinainciusol isn't interesting to me because it's musically complex; the part I care about is that it's a listenable execution of an idea, not precisely how it was executed on. I don't keep coming back to I Will Derive by some random schoolkids recorded on a potato 17 years ago annually because it's sung well or they were particularly clever with how they took another song and changed the words; I come back to it because it's fun and reflects for me onto a part of my past that I remember fondly, and these things make me happy.
All these words and I've still only addressed half the comment. Ok, let's consider the idea that it's not enough for AI audio to facilitate the creation of interesting musical pieces, and it instead has to create whole interesting musical styles. I take issue with this in a bunch of places. I don't reject artists who I judge not likely able to create a new Bossa Nova. I judge artists based on whether the output they produce is something I want. I do the same for AI.
I also think the question about whether AI could 'create' a new style is somewhat misplaced. A style is a cultural centroid, not just a piece of audio. AI can definitely create new musical textures or motifs, but it's always being pulled towards the form of what it's being asked to produce. As long as we're talking about systems that work like today's systems, the question still needs to involve the people that are selecting for the outputs they want. Could that connected system create something as distinct and audibly novel as a new genre? Yeah, probably, given time and a chance for things to settle. That's a different question from whether it'll do so in response to a nonspecific prompt thrown at it.
That being said, there is a spectrum, sure.
I am interested in generative music.
I do have scenarios where I listen to music and want it to blend into the background (e.g. soma FM).
But even then I love the short moment when a song comes up and I want to note it because it's distinct.
I am not interested in being robbed of that.
Also: why? Why, why, why?
Music is not just a recording packaged as a product. It is a thing humans do. And I say that as a person that enjoys mainly electronic music!
There are many talented humans, there is absolutely zero need for AI muzak, other than decreasing the price.
Musicians leveraging generative AI for creative purposes might become a thing and I am fine with that in principle, but the thought is a joke to me, as of now.
Creating audio from an idea is not the same as letting a machine create an interpolation of stolen ideas to match a prompt.
The poster you are responding to did not make any reference to "consumer preference"
Thanks.
Culture is fluid. Music is about exploring the boundaries of what sounds good, often because of feelings. Related to the society in which the music is "consumed".
AI music is a commodity and generally uninteresting, like artists who only imitate styles.
But just like annoying over-commercialized music that only tries to scratch existing itches and match expectations, it can still work to a degree.
Intent is not a lofty concept, it's at the heart of what art is.
I was using ‘consumer preference’ here as a proxy for ‘the things that people tend to care about.’ The point I'm highlighting is that it's weird that when AI audio comes up, there's a reliable crowd of people discrediting it for missing a thing that clearly doesn't matter that much to most people in most contexts.
It's like, sure you can want things from music that are to your specific taste, but it's like coming into a post about, idk, a folk band and complaining that it's not metal. You're allowed to like your thing, but clearly most music is allowed not to be metal, why is this music specifically bad for not being metal?
And in this case the point I'm making is stronger, in that AI audio actually unlocks a lot of ability to listen things that are ‘interesting and creative’ but not widely available because of consumer preference, so it's actually more like showing up to a folk metal fusion band and saying the problem with this band is that it isn't metal.
You still totally miss the point of this thread: making music and creating art for the sake of enjoyment. Consumer preferences are irrelevant here. I am totally fine if you like AI music. And I am sure someone can create art by interweaving AI elements. But keep in mind, the training data for Suno comes from (gets stolen from) someone like me, who creates real music. Personally I do not enjoy making music with proprietary 3rd-party tools that can be walled off at any time, like Suno. But that is personal preference.
Completely agree with this. "The audience comes last" is a phrase that perfectly encapsulates this sentiment (can't recall who said it).
Somehow it's assumed that artists make music for the audience, but many make it for themselves, because they enjoy the process.
Contrary to other comments in this thread, typing prompts on a keyboard is not the same as picking up a guitar and playing it.
I think moritzwarhier's replies are highlighting that I didn't miss the point. Maybe I expressed it badly.
> "the things that people tend to care about"
Weird. That's another phrase I don't see in the post.
>You're allowed to like your thing
Massively generous of you, thanks.
[dead]
> Then tried to generate interesting music, failed spectacularly.
You can upload music and let Suno arrange it in different styles. I'm a musician myself and am also interested in "interesting" music. I experimented with my own music and was positively surprised by the "musicality" and creativity of the generated arrangements (https://rochus-keller.ch/?p=1350).
I consider myself an amateur musician and pulled a decent side hustle teaching piano back in university. I also worked the occasional gig as a cocktail pianist.
I've actually had a lot of fun using tools like Suno/Udio as a means of sonic exploration to see how some of my older compositions would sound in different mediums.
When I composed this piece of classical music practically a decade ago, it was intended for strings but at the time I only played piano so that's where it stayed. By increasing the "Audio Influence Slider", Suno arranged it in a chamber quartet style but stayed nearly 1:1 faithful with the original in terms of melody / structure.
Comparison blog piece
https://mordenstar.com/blog/screwdriver-sonata
What an interesting use case! And interesting composition.
One thing that's interesting about the AI violin cover is that I'm not sure those runs would be physically possible at that speed on a real violin. So that composition can _only_ be played digitally, I believe.
I love that you brought up that particular point!
When I used to do larger, more orchestral arrangements, I was constantly getting dinged by the instrumentalists because, while certain runs or passages were theoretically possible musically, they were very unnatural on the instrument I had scored them for.
For a long time I really hoped that some of the more professional notation tools such as Finale would add the ability to analyze passages and determine how realistic/natural they are for the instrument they're set to.
Right now, the line for me is "who makes the riffs". Once that line is blurred and users can inject whatever it is that humans contribute, in a really appealing way, into actual realized audio tracks, that's when we'll see the first "AI-using superstar" numbers on Spotify. I don't know where training data sits on the legal radar, but I'm also betting that AI wins on this front, given how much politics loves AI.
DAW stands for Digital Audio Workstation (to save anyone else who didn't know from looking it up). https://en.wikipedia.org/wiki/Digital_audio_workstation
I've been a paid user for months. Actually, the most impressive upgrade from 4.5 to V5 is being able to get MIDI, but for now the UI is still very chaotic and buggy. As a VC-backed company they're able to train models thanks to VC money, but they clearly can't sustain their revenue simply by becoming the new Ableton. Balancing the needs of professionals with those of the general public is a complex proposition. Are ordinary people really willing to pay such high monthly fees to produce music?
I've been working on an AI detector for the last few months. Updated it to handle Suno V5 on Wednesday - looks like it's very similar to V4.5. Am curious to see how this Studio version impacts the model I've trained.
If you want to test it, here's the link: https://www.submithub.com/ai-song-checker
If only I could use it before logging in.
After login it's free. But my site has been targeted by a lot of spam/abuse over the last decade, and login is something I've needed to set up to avoid that :(
We just can't have nice things anymore.
As someone who has been producing music for 20 years, this is absolutely incredible. I remember when I started building things from samples in FL4 and just how mind bending that was. I am so excited for this to actually get new ORIGINAL samples.
What people miss is that since the creation of Splice, basically all new music that isn't from an already established artist is paint-by-numbers. You can get any sample in any key, supplied to Splice by artists. You probably hear a lot of the same sounds in most modern music. This breaks that open.
The Suno team has been doing this exactly right and this is just another step in their evolution.
Major congrats to the product team for this, I can’t wait to see the next iteration!
This is not true. I’m sorry but the Splice userbase is hardly the size of earth’s music producing population.
Correct, but think upstream from Splice. Most percussion, risers and ear candy come from the same place. Think back to Super Seal records and crate digging. If you use samples, regardless of whether you subscribe to Splice or not, the sample you found is on Splice.
Many artists don't use samples.
Correct. For the main composition, many artists don’t use samples. But for ear candy and other flavors, I would argue that most do. Remember, there's a difference between producers and musicians, though there is a lot of overlap.
I wasn't expecting to, but I got chills listening to some Suno creations from artists who are clearly very talented at using this new medium.
Much like those of us hammering away at LLMs who eventually get incredible results through persistence, people are doing the same with these other AI tools, creating in an entirely new way.
I'm sure Suno are working hard on this and these AI tools can only come together as fast as we can figure out the UX for all this stuff, but I'm holding out for when I can guide the music with specific melodies using voice or midi.
For "conventional" musicians, we (or at least I) would love to have that level of control. Often we know exactly what it should sound like, but might not have session musicians or expensive VSTs (or patience) on hand to get exactly the sound we want. Currently we make do with what we have - but this tech could allow many to take their existing productions to the next level.
What I tend to find is that although almost everyone listens to some form of music, the average person tends to like things which are squarely in the middle of the Gaussian curve, and that are inherently very predictable, as though the creator had chosen the most statistically likely outcome for every creative decision they made while creating it. Similar trends show up in almost anything creative: cinema, literature, food, etc.
This is basically what all the Suno creations sound like to me, which is to say they definitely have a market, but that market isn't for people who have a more than average interest in music.
That sounds really interesting. Would you mind sharing some examples of such creations?
Sure
https://suno.com/song/5be7dd78-8af8-40a4-bb79-9dd5a9e8b71b https://suno.com/song/24f88c40-8459-4f67-8d51-30298d6b9d00 https://suno.com/song/b0c6f4a6-4523-4b39-bbbd-24a0d39a8b6c
Not OP, but on the off chance you haven't seen this, I found the suno explorer thing quite nice. Hitting random a few times, I'll usually stumble onto something interesting. This was the first demo I heard where some of the AI tunes gave me goosebumps close to what human music does.
https://suno.com/explore/
I'll share something! I really enjoy this artist @kant, it's rap which isn't everyone's cup of tea but here's one of his more approachable songs.
https://suno.com/song/07ab552c-1a76-43f4-a619-21a74e774dbd
I'm not the person you responded to, but these are some examples from someone I know that had accompanying music videos (actual video production) made for them:
Sunscreen: https://youtu.be/VBaWtOHPTZw
Purple Sunset Over Lake 2: https://youtu.be/lD7rSxPncs4
[flagged]
Please don't break the site guidelines like this, no matter how bad something is or you feel it is.
Maybe you don't owe AI audio creations better, but you owe this community better if you're participating in it.
https://news.ycombinator.com/newsguidelines.html
I feel I must push back on this, dang. I was being kind and not snarky, but criticism was earned once I listened to the tracks that were suggested. Had I said these were wonderful, there would have been no flag. OP stated something that would lead someone to believe they were as good as the grandparent comment's description. They were in fact not pleasant to listen to, despite the great visuals.
Thanks, I'll let them know.
I enjoy this AI generated album https://open.spotify.com/album/6C6PJzxkHctvk1ibKM2zMx?si=YgL...
[flagged]
They probably won't. And if they do probably everyone will ridicule the songs. But maybe they will link the songs and maybe at least half the repliers will say the songs actually are good. I like rooting for the underdog.
Suno, with its $125 million VC funding, is anything but an underdog, my friend.
By underdog I meant the above HN commenter wanting to prove to us that the AI-created songs he likes are actually considered good by others.
I actually have listened to all the links people posted. They are not that bad.
But when they say it can replace pop music, I can only laugh. It is the most boring early-2000s R&B ever created, and it sounds thin.
Any Aphex Twin model out there?
Definitely more Boards of Canada, but Aphex is a big inspiration behind a lot of my prompts (I really just said that, yeah - it's kind of hilarious talking about generated music):
https://suno.com/song/72fbfb06-4af8-407c-824d-051ac4afd64f
https://www.youtube.com/watch?v=lD7rSxPncs4
I listened to it when you posted it before. Better than most of the others I have listened to, which were all much more "cold".
The visual stuff also helps to make it more powerful and cohesive.
The bad part is that it wanders a lot without getting anywhere, and it does not create a climax that bridges into the second part. The same sounds and ambience with a producer behind them creating an arrangement would be much more powerful.
Getting chills from a recording/generated sound isn't indicative of talent in any way.
I don't want traditional DAWs replaced with AI generator. What I want is using AI to improve/speedup existing processes, for example:
- extracting melody/instrument from a clip to be able to edit the notes and render it back with the same instrument (a rough sketch of the extraction half follows this list)
- extracting and reordering stems in a drum clip
- fixing timing and cleaning noise from sloppily played guitar melody
- generating high-quality multi-mic instrument samples for free
- AI checking the melody and pointing out which notes are wrong or boring and how it can be fixed. I want to write the notes myself but help would be very useful.
- AI helping to build harmony (pick chords for the melody for example)
This would help a lot, but current models that generate a song without any controls, are not what I want.
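On the first point, here's the rough sketch mentioned above: a minimal melody-to-MIDI extraction using librosa's pyin pitch tracker and pretty_midi. This is just one illustrative approach (not anything Suno does); the file paths, the piano placeholder timbre, and the note-segmentation heuristic are all assumptions, and it only works on mostly monophonic clips.

    # Minimal sketch: pull a monophonic melody out of a clip as editable MIDI notes.
    # Assumes librosa and pretty_midi are installed; illustrative only.
    import numpy as np
    import librosa
    import pretty_midi

    def clip_to_midi(path, out_path="melody.mid"):
        y, sr = librosa.load(path, sr=None, mono=True)
        # Frame-wise fundamental frequency estimation (probabilistic YIN).
        f0, voiced, _ = librosa.pyin(
            y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
        )
        times = librosa.times_like(f0, sr=sr)

        pm = pretty_midi.PrettyMIDI()
        inst = pretty_midi.Instrument(program=0)  # piano as a placeholder timbre

        # Collapse consecutive voiced frames with the same rounded pitch into notes.
        current_pitch, note_start = None, None
        for t, freq, v in zip(times, f0, voiced):
            pitch = int(round(librosa.hz_to_midi(freq))) if v and not np.isnan(freq) else None
            if pitch != current_pitch:
                if current_pitch is not None:
                    inst.notes.append(pretty_midi.Note(100, current_pitch, note_start, t))
                current_pitch, note_start = pitch, t
        if current_pitch is not None:
            inst.notes.append(pretty_midi.Note(100, current_pitch, note_start, times[-1]))

        pm.instruments.append(inst)
        pm.write(out_path)
        return out_path

Once the notes are in MIDI you can edit them in any sequencer; the "render it back with the same instrument" half is the part that still needs a model.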
I watched the whole video, and the only new feature I saw compared to other DAWs is converting a singing clip to a trumpet clip (anyway, it's better to generate track by track rather than have the whole song generated without any controls). In other respects, it looks more like a toy DAW, like some iPad app with very limited features (it didn't even have recording delay compensation). Also, the UI looks like some web app: giant buttons and a lot of unnecessary decoration.
I wonder how they trained the model to generate clips: did they hire a lot of musicians to record the samples, or just scrape multiple commercial instrument libraries without permission?
Well there's now a polka about diarrhea after eating spicy food. And a rock ballad duet about macaroni and cheese with a female singer and a screamo male part.
Yup totally won't mess with their algorithms.
How is the stem support these days? In particular I would like to create some songs with vocals (my lyrics), then be able to remove the vocal track and replace it with my own.
I don't know if Suno does it, but it should be doable.
For voice removal I use Ultimate Vocal Remover; it's on GitHub.
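If you'd rather script that vocals-out step, here's a rough sketch using the open-source Demucs separator (pip install demucs). The output folder layout and the default model name (htdemucs) are assumptions about a stock install, so check where your copy actually writes the stems.

    # Rough sketch: split a track into vocals / accompaniment with the Demucs CLI.
    import subprocess
    from pathlib import Path

    def split_vocals(song: str, out_dir: str = "separated") -> dict:
        """Separate `song` into vocal and non-vocal stems and return their paths."""
        subprocess.run(
            ["demucs", "--two-stems", "vocals", "-o", out_dir, song],
            check=True,
        )
        # Demucs typically writes <out_dir>/<model>/<track_name>/{vocals,no_vocals}.wav
        stem_dir = Path(out_dir) / "htdemucs" / Path(song).stem
        return {
            "vocals": stem_dir / "vocals.wav",
            "instrumental": stem_dir / "no_vocals.wav",
        }

    # e.g. stems = split_vocals("my_suno_song.mp3"); then drop no_vocals.wav into a
    # DAW and record your own vocal take over it.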
Without a doubt, Suno Studio boasts some impressive features. One of the introductory videos features a voice-to-saxophone demonstration. The result sounded surprisingly good to me and was perfectly usable.
I wonder how AI-assisted music production like Suno will change the profession of being a musician. I think people want their favorite music artists to be real humans they can relate to. For that reason, I guess real singers won't be out of a job anytime soon. The same may apply to performers of real musical instruments. No one wants to see music played entirely from a computer during a live concert.
However, I predict that it will be very difficult to become even moderately well-known as a musician by just being a Suno Studio creator alone. A lot of good-sounding content will be created this way, and if an artist can't perform live or doesn't have a unique persona or story to attract an audience, it'll be hard to stand out from the endless mass of AI-generated content.
I think music used for commercials, movie trailers etc will move to be AI generated (taking a revenue stream from artists into soul less corporations).
But for cases where music is the primary product, I don’t foresee AI-generated music overtaking anything.
>No one wants to see music played entirely from a computer during a live concert.
Tomorrowland begs to differ
Yeah personally I don't give a toss about the person who created the music and their backstory or stage performance.
However, I do care that the person who created the music made hundreds of micro-decisions during the creation of the piece, such that it is coherent and has personality and structure in service of that individual's sense of aesthetics. Unsurprisingly, this is not something you get from current AI-generated music.
I'm a musician, and have been for 25+ years. In fact, I made a living from music before eventually getting my education and working in tech.
I've played around with Suno for a couple of months. It works for some things, but to me - it just doesn't give any sense of...accomplishment? I'd much rather sit down with my instruments, and come up with the stuff myself.
What is more, I get no feel of ownership. It is not me that's making music, I'm just feeding it prompts. That's it.
It's like paying some painter 5 bucks, and telling them what to paint. In the end, you'll have your painting, but you didn't paint it.
With that said, these tools have their uses. Generating jingles and muzak is easier than ever.
> It is not me that's making music..
AI is a shortcut from an idea to something that resembles your idea in solid form. If your goal is to create something and get a sense of accomplishment from that creative act then AI will never work for you because the artistic process is exactly what the AI shortcut is short-cutting around.
On the other hand, that AI shortcut is circumventing decades of practice, so if you're at the start and you just want the output so you can use it in some way it's awesome.
If you're just using prompts it's almost always just scratching the surface when it comes to creative AI use.
With Suno, using your audio uploads to effectively 'filter' your ideas into something usable, compositing in the DAW, and putting a full song together -- that's going to be a much different experience than "make me a song"
> It's like paying some painter 5 bucks, and telling them what to paint. In the end, you'll have your painting, but you didn't paint it.
That didn’t stop Andy Warhol from becoming a famous artist.
Refuting an analogy only proves that the analogy doesn't fit perfectly.
Doesn't even refute it... Warhol's fame came from claiming the absurdities of capitalism and the art market as his working material. He was a "Business artist".
Generative AI for low level things like creating samples seems reasonable. Mastering and composition is where this starts to jump the shark. The idea that a machine learning model will be responsible for figuring out what the final mix should sound like is insane to me.
I wanted to try the advanced Suno models, but they are locked away, and I don't want to pay for a membership just to try a couple of generations. So, for months, I've given up on Suno. It's stuck at v3.5 for me; the last 3 models have all been closed off.
I tried sampling a few songs generated by others, but I couldn't find their appeal.
What genre do you like? I have one but it’s pop. https://suno.com/s/rAz8rUfVst4pw1S5
I like your song.
I wonder what https://www.renoise.com/ would be like with something like this.
Well, it has Lua scripting, so it just needs someone determined.
Nice, I commented looking for this last year: https://news.ycombinator.com/item?id=40588038
Looks like the "covers" need some better instrument isolation, but this is really huge for the music industry.
I'm really curious how the traditional DAW features stack up against the incumbents. A good DAW has a ton of features. Developing a whole DAW from scratch just to add a "generate part" button sounds like a lot.
I can tell you exactly how any professional is gonna evaluate this DAW: press Ctrl+F, type "VST", see 0 results, close the page.
If they do add VST support I could see them becoming a legitimate player. Without it it's definitely just a toy.
They're never gonna let you upload and run your own executable files to their infrastructure.
That + latency with MIDI devices is why every DAW-in-a-browser is just a toy.
Didn't realize this was in a browser. That tells me all I need to know.
Coming soon: Suno Desktop (Electron version). "Look ma, no browser!"
I’ve seen at least one AI-focused, Electron-based DAW already.
It's a pipe dream of course, but WebAssembly would be an ideal target for C++ VSTs for its portability and sandboxing
You can get a vst to run locally and still work with the browser.
Okay, but why would you? What's the advantage of that? VSTs are resource hogs (when you have like 10+ running at once), not DAWs.
I somehow doubt a full-blown browser connected to more than a couple of VSTs would be less of a resource hog than doing the same in a DAW. On your computer. That you own. In your house. Without like additional 50ms of latency for the data to travel to the server and back.
I just assumed all the audio data would be local even in a "browser DAW", so VST calls wouldn't go through a network.
... VSTs are executable code?
Considering when the standard was written (v2 in 1999 and v3 in 2008), what else would they be?
As horrible as it sounds, a VST is just a .dll file you're running straight from the Internet. On a "positive" note, they're backwards-compatible with like Windows Vista!
Are you genuinely surprised?
I hadn't thought about it, frankly.
I split pretty evenly between Bitwig, Logic X, and MuseScore and I can tell you the first big reservation I have with something like this is that it's an online DAW.
What does that mean? It means that your compositions (outside of bouncing them down to audio stems) exist within a highly proprietary SaaS format, and that the moment you stop paying, you've got NOTHING.
99% of major DAWs (Ableton, Logic, FL Studio, Bitwig, Studio One, etc.) are a perpetual license.
The people using Suno AI don't care about writing or mixing music.
They don't need the same feature list.
The video seems to be trying to convince me that this is totally targeted at actual musicians. But I guess maybe that's how their target users imagine themselves?
As an actual musician who doesn't have any trouble creating my own melodies and timbres (both generated and 'manually' created), it's very obviously not targeted at someone like me.
It seems similar to a Garage Band type of software, aiming to entice people with little audio production experience and give them an interesting sounding snippet they can play back to friends.
For example, the only actual audio editing they displayed was slicing and re-pitching (you can't even choose the time-stretch algorithm), which is conceptually very simple to understand.
There's no ability to actually edit dynamics or do very accurate frequency adjustments that I can see from the demos, so it's basically useless for anything I would want to do.
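For anyone unfamiliar with what "choosing the time-stretch algorithm" means in practice, here's a minimal sketch of a plain phase-vocoder stretch/repitch via librosa; a full DAW lets you swap this kind of thing for Rubber Band, élastique, or similar, which is exactly the choice that seems to be missing here. File names are placeholders.

    # Minimal sketch: phase-vocoder time-stretch and pitch-shift with librosa.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("clip.wav", sr=None)

    # Slow the clip to 80% speed without changing pitch...
    stretched = librosa.effects.time_stretch(y, rate=0.8)
    # ...and transpose it up a minor third (3 semitones) without changing speed.
    repitched = librosa.effects.pitch_shift(y, sr=sr, n_steps=3)

    sf.write("clip_stretched.wav", stretched, sr)
    sf.write("clip_up3.wav", repitched, sr)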
As much as I think making AI music is the pinnacle of laziness and instant gratification, in my opinion they are actual musicians.
What I mean is that in a DAW you have a lot of tools that don't make sense in an AI context.
Like for example, people who use agentic workflows don't need a Visual Studio license.
Well, music is art. Some of it is done for hire or commission. Most code is written for an employer or contract. Code written as art is an edge case.
For advertising jingles it makes some sense. For artistic expression, like... what's the point?
If the act of creating this AI music provides joy to them... I think they are artists.
I used to release music that people listened to, but not anymore; now the only joy comes from making it for myself. Am I still an artist?
In my book, you are definitely an artist. Perhaps the purest kind, not that it's a competition.
If the act of creating this AI music provides joy to some, they should do it. I just have a hard time understanding that.
Is a scrapbooker an artist? If a scrapbooker is an artist because of what collage is, is a MtG card collector picking layouts to arrange their cards in the array of plastic sleeves you can get, an artist?
If the MtG card collector is not an artist does that mean they're bad and need to stop?
If the MtG collector only arranges the cards for the joy of making a pleasant composition, and the arrangement serves no function like finding the cards faster or keeping them in good shape, I think they are doing art.
If the main reason is to keep their items clean, then no matter how much time they spend on the composition or how good it looks, they are not an artist.
[dead]
What is the current SOTA for open source or open weights music generation model?
Before AI there was a general consensus that creative areas (e.g. cities) were becoming a homogenized experience. A Starbuckization, if you will. I can’t help but wonder what gets lost when using tools like this.
It's unclear to me whether it will result in more homogeneity, as a result of prompts being a coarse medium that leaves the AI to fill in the rest with what it's seen, or less homogeneity, as a result of more people with non-mainstream tastes being able to create music aligned with their niche that otherwise wouldn't exist due to time/money restrictions. I think the latter seems a bit more likely, but time will tell.
There's not really any need to speculate when this has already played out in other mediums - would you say that the proliferation of LLMs has led to an explosion of novel and interesting works of fiction, or just an explosion of cookie-cutter slop ebooks?
I would say it's too soon to tell. There has been an uptick in ebook slop, but I'm not sure it's impacted the homogeneity of literature, because I don't think anyone is reading AI ebooks. It's not enough for it to exist to impact culture; it has to actually be consumed.
Music is a uniquely interesting case, since music has a much lower barrier of entry to consume.
My thought exactly, in reply to the comment above. Am I going to invest several days in reading an AI slop novel? No. But I will take several minutes to read a blog post, and I have likely read many that were AI-generated or assisted.
Culture
Since you get exactly the kind of music you want, I think it leads to extremely small bubbles, which is pretty much the opposite of homogeneity.
For example, I had never heard epic power metal about birds, but with Suno I got exactly what I wanted. Sure, the sound quality (I only used v3.5) could be better and the songs could be longer, but I don’t care, I now have epic songs about my Bourke’s parakeet. However, I’m not pretentious enough to think those songs are interesting to anyone other than my wife and me, hence the smallness of the bubble.
This is an interesting perspective.
Generating ‘content’ tailored to you and not meant for someone else’s taste.
Human artists need to make money and those who create music for a tiny bubble probably can’t make enough.
So as an artist what do you do? Do you have to create music with mass market appeal from the beginning?
Or do you need to bank on luck that your music for ‘small bubbles’ gets discovered?
Or you have to have clever marketing strategies to get your music in front of more ears to hopefully gain more fans. And create merch, tour etc.
I wonder how all this AI music is going to impact indie artists. Spotify and the like are already ripping them off, and on top of that their music is being / has been stolen by these AI data gobblers.
I don’t see how at this stage it can replace human expression though (singing, playing violin, piano, etc) which is very nuanced.
Same with acting… nuanced expressions that matter. I’m not sure AI can replicate the acting skills of Denise Gough (Dedra from Andor) for example… and many others.
But it would be awesome to generate more story lines or episodes from your favourite TV shows, for example shows from over 20 years ago.
Imagine being able to create more episodes of Star Trek TNG or DS9, maintaining the feel of that era without letting someone like Kurtzman ruin it and tell you how new Star Trek should be.
But how do you ensure actors, writers and other creatives from that show will be compensated directly?
Or maybe this will only be possible in a Star Trek like world, where profit uber alles is not the focus anymore.
How will friendships be formed I wonder when everyone has their own version of their interests?
If no one is creating new music/styles for the models to steal, you will only get remixes of what already exists. AI is an entropy machine, it sucks all of the energy/momentum out of everything it touches.
[dead]
Is no one going to mention that the music industry at large despises Suno because they stole a lot of their training data?
Why do we continue to prop up these companies when there are ethical alternatives? We are rapidly replacing all experts with AI trained in their data, and all the money goes to the AI companies. It should be intuitively obvious this isn’t good.
While I like using AI for assisting with repetitive programming, I can’t help but feel sorry for my producer and illustrator friends who are now having to compete with generative tools.
Is it snobby of me to look down upon art that is created using these tools as lesser because the human did not make every tiny decision going into a piece? That a persons taste and talent is no longer fully used to produce something and for someone reason to me what is what makes the art impressive and meaningful?
Something about art with imperfections still feels exciting, maybe even more so than if I see something that is perfect but if I see an AI gen picture with 6 fingers, I just write it all off as slop.
I am happy to allow my generated code to come from “training data” but I see the use of AI in art, writing and music as using stolen artists hard work.
I feel like as time goes on, I feel even more conflicted about it all.
Applying your logic, did you feel bad for seamstresses when the industrial revolution took off? Did you feel bad for hardware manufacturers in America when their work was outsourced to China? Art is also a form of labor, and whoever can produce quality at quantity wins. Idealizing art as some sort of religious idol is just plain silly. We haven't had a Picasso or Mozart or Oscar Peterson for quite some time now, yet the world is just fine. People play playlists in front of crowds of millions and get accolades for it versus playing real instruments. Times change, technology changes and art changes.
You either adapt or go hungry just like everybody else and art shouldn't be exempt from the mechanics of supply and demand.
I almost agree with you that this is about quality, but I still feel that the context in which art comes from influences how I perceive it.
Take, for example, a track by Fontaines D.C., a band from Ireland that writes extensively about the lived social and political experience. Knowing where they are from and the general themes of their work makes their tracks feel authentic, and you can appreciate the worldview they have and the time spent producing the art, even if it does not align with your own tastes.
Trying to create something of the same themes and quality from a prompt of “make me an Irish pop rock track about growing up in the country” suddenly misses any authenticity.
Maybe this is what I am trying to get at, but like I said, I feel some conflict about this, as I personally value these tools for productivity
Saying that, maybe a DAW experience makes what can be created more personal
I hear this but this is not the industrial revolution buddy.
You as a human chose to write this very common opinion and even include writing errors like the following
> That a persons taste and talent is no longer fully used to produce something and for someone reason to me what is what makes the art impressive and meaningful?
Human output isn't sacred. Yes, this is snobbery, a useless feeling of superiority.
absolutely, why should I go outside and touch grass when suno can do it for me?
I feel the same, including code. I cannot justify it. I can easily counter my own arguments. Still, the further we automate human thought and creativity the worse it makes me feel. I am disappointed that so many are content with mediocre imitation.
> Is it snobby of me
Yes. But aesthetic taste and snobbery usually go hand in hand.
Nothing is being "stolen". It never was. Copyright law grants you rights over specific works. It doesn't protect styles, genres, general ideas, methods, or concepts. And it most certainly doesn't protect anyone from competition or the unyielding march of progress. Nothing can protect you from that.
v5 is really good. I can't believe how much progress Suno has made in such a short time.
I don't understand why their vocals are still so bad, though. They always have this tinny, synthy vibe that's very noticeable.
It doesn't interfere with my enjoyment of Suno 5's output, and it's enough for me to pay for it now.
Suno 6 should solve those issues.
> Commercial use rights for songs made while subscribed
LOL oh hell no! Why would anyone use this if a perpetual subscription is required to maintain the rights? Absurd.
Hey, Suno SWE here — I realize this wording might be slightly ambiguous, but you do not need to maintain a perpetual license to have commercial rights. That blurb is saying that songs created while you are subscribed are granted commercial use rights.
> If you made your songs while subscribed to a Pro or Premier plan, those songs are covered by a commercial use license.
More info here: https://help.suno.com/en/articles/2410177
You would be truly surprised by how many people are doing exactly the same thing when it comes to releasing their music on streaming platforms.
Degenerative art in full swing.
One practical question. Is it still as noisy as it was in v3.5?
Hard pass on a generative AI company trying to reinvent a DAW. Make VST/VSTis please.
[dead]
[flagged]
It's human nature to want to feel like we've accomplished something. AI generators like Suno, where all you have to do is type in a prompt and you get the final result, take that sense of accomplishment away from us.
However, if we start working on a project where we're assisted by AI, for example, we're making a game where the sprites are generated by AI or the background music is generated by AI, but the overall game is still directed by humans, that sense of accomplishment stays.
But at some point we're going to reach the stage where the entire game can be generated in high quality, at the same level as humans. What then?
[flagged]
God this entire thread is just AI generated astroturfing accounts for Suno, wtf is going on @dang?
At least I'm an AI that plans WAY ahead since I created this account almost 8 years ago and have made hundreds of posts which have close to 8,000 karma.
Maybe lighten up on imagining AI slop under every bush?