I'm really tired of editing videos in the cloud. I'm also tired of all these AI image and video tools that make you work over a browser. Your workflow feels so second class, buried amongst all the other browser tabs.
I understand that this is how to deploy quickly to customers, but it feels so gross working on "heavy" media in a browser.
we've done a ton of work to optimize uploads / downloads / transcoding to handle beefy files using proxies, and you can also export XML back to traditional editing tools that link back to your "heavy" media. But I hear you, and I think anything running locally on device is just going to feel faster.
it does present its own set of challenges, but it's something we've thought about
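That XML round trip depends on frame-accurate timecode. As an illustration (not Mosaic's actual exporter), the frame-to-timecode conversion any exporter needs can be sketched in a few lines, assuming a simple non-drop-frame rate; real NLE interchange also has drop-frame rates like 29.97 that need extra handling:

```python
def frames_to_timecode(total_frames: int, fps: int = 24) -> str:
    """Convert a frame count to non-drop-frame SMPTE timecode (HH:MM:SS:FF)."""
    frames = total_frames % fps
    total_seconds = total_frames // fps
    seconds = total_seconds % 60
    minutes = (total_seconds // 60) % 60
    hours = total_seconds // 3600
    return f"{hours:02d}:{minutes:02d}:{seconds:02d}:{frames:02d}"

# e.g. one hour, one second, and one frame into a 24 fps timeline:
tc = frames_to_timecode(24 * 3600 + 25)  # "01:00:01:01"
```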
There's plenty of great native desktop apps for video editing. And there have been for almost 30 years. I also don't understand why anyone would want to use a browser for this.
As a creator who films long form content, editing (specifically clipping for short form) is such a nightmare - this solves such a huge problem and the ui is insanely clean.
great to hear — I'd recommend using the clips tile to create clips, but you can also use the rough cut tile to help edit down the raw footage for the long-form
"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
our original name was Frame, only to realize that frame.io existed already.
we brainstormed names for a while and had several notes full of possible names
mosaic is one which stood out to us because it not only represents artwork, but also the tiles (nodes) in the canvas come together to form your mosaic — we thought that was a fitting name
Last year, I made a YouTube documentary series showcasing the prolific corruption in a small city government. I downloaded all the city government meetings, used Whisper to transcribe them, and then set up a basic RAG so I could query across a decade of committee meetings (around 1 TB of video). Once I had the timestamps I was interested in, I then had to embark on a tedious manual process: locating the file, cutting a few seconds or minutes out of a multi-hour video, and then ordering all the clips into a cohesive narrative.
These seem like problems that LLMs are especially well-suited for. I might have spent a fraction of the time if there had been some system that could "index" my content library and intelligently pull relevant clips into a cohesive storyline.
I also spent an ungodly amount of time on animations - it felt like "1 hour of work for 1 minute of animation". I would gladly pay for a tool which reduces the time investment required to be a citizen documentarian.
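The query-across-transcripts step described above can be sketched. This toy version uses keyword overlap in place of real embeddings, and the segment data is invented for illustration:

```python
# Toy retrieval over timestamped transcript segments (a stand-in for a real
# embedding-based RAG index). Each segment: (video_file, start_seconds, text).
segments = [
    ("meeting_2019_03.mp4", 412.0, "the contract was awarded without a bid process"),
    ("meeting_2021_07.mp4", 1880.5, "budget line items for the mayor's travel"),
    ("meeting_2019_03.mp4", 95.0, "approval of the previous meeting minutes"),
]

def search(query: str, top_k: int = 2):
    """Rank segments by word overlap with the query; return (file, start, text) hits."""
    q = set(query.lower().split())
    scored = [(len(q & set(text.lower().split())), (f, t, text))
              for f, t, text in segments]
    scored.sort(key=lambda s: -s[0])
    return [hit for score, hit in scored[:top_k] if score > 0]

hits = search("no-bid contract awarded")  # points at meeting_2019_03.mp4 @ 412.0s
```

A real pipeline would swap the overlap score for cosine similarity over embedding vectors, but the shape of the result (file + timestamp to jump to) is the same.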
you should check mixedbread out. we support indexing multimodal data and making data ready for ai. we are adding video and audio support by the end of the year. might be interesting for the OP as well.
we have a couple of investigative journalists and lawyers using us for a similar use case.
hey, thanks for sharing about your documentary series. would love to check it out if you don't mind linking it!
we don't yet support that volume of footage (1 TB); however, if you'd like to try this at a smaller scale, you can already do this today with the Rough Cut tile — simply prompt it for the moments you're interested in (it can take visual cues, auditory cues, timestamp cues, script cues) and it will create an initial rough cut or assembly edit for you.
I'd also recommend checking out the new Motion Graphics tile we added for animations. You can also single-point generate motion graphics using the utility on the bottom right of the timeline. Let me know if you have any questions on that.
Absolutely - the channel is called "Dolton Documentaries" on YouTube. I'll definitely check out the features you mentioned, and am super excited to see where this goes!
I like the tile-based workflow approach. I’m curious, is integration with tools like 11labs/cartesia or HeyGen on the cards? It would make it much easier to produce influencer-style POV/first-person content using digital avatars and cloned voice-overs.
Also, do you have an API available to trigger workflows programmatically?
I think this is a great endeavor. I was thinking about a channel that I like watching on YouTube. They travel to exotic places by boat and film themselves, nature documentary style. To make good videos requires going to these places, a ton of filming, AND a ton of editing. They put out a video every 2 weeks or so on their trips. I imagine the editing is the hard part.
This is a long-winded way of saying that I think creators need what you're making! People who have hours of awesome footage but have to spend dozens of hours cutting it down need this. Then also people who have awesome footage but aren't good at editing or hiring an editor, same thing. I'd love to see someone solve this so that 90th percentile editing is available to all, and then it can be more about who has the interesting content, rather than who has the interesting content and editing skills.
thanks! Mosaic can already do the rough cuts for you — so you can upload all your footage from your travel, and prompt it to "make a 2 minute highlight reel of your trip to Japan", for instance.
soon, we also plan to incorporate style transfer, so you could even give it a video from the channel you enjoy watching + your raw footage, and have the agent edit your footage in the same style as the reference video.
> you can upload all your footage from your travel, and prompt it to "make a 2 minute highlight reel of your trip to Japan"
In relation to the demo requests below, I think this would be a good example of how an average person might use your platform.
for a demo, check out this one that I put together using 81 clips from a skydiving trip we took in Monterey, CA:
https://edit.mosaic.so/links/c51c0555-3114-45f4-ab8f-c25f172...
this seems rather basic. From watching a bit, it just seems to cut/combine the videos? No transitions, no bg music which would fit nicely with the cut timing etc.
I've been waiting a long time for a tool that does stuff along those lines. Apps like DJI's kinda do it, but they have generic music, the cuts don't fit the tune at all, and are rather random. Doing it myself with little effort in DaVinci or Premiere takes ~30 minutes, but the results are 5 times better.
I was hoping this app would do it for me. And even if it did, if it cost more than $X to create a video like that, I'd probably still do it myself.
Hey, this is super cool. congrats on the product and the launch!
I'm building something very similar and couldn't believe my eyes when I saw the HN post. What I'm building (chatoctopus.com) is more of a chat-first agent for video editing, only at a prototype stage. But what you guys have achieved is insane. Wishing you lots of success.
to healthy competition!
thank you! chatoctopus looks pretty cool, I'm trying it out right now!
how did you find the chat-first interface to work out for video? what we found is that the response times can be so long that the chat UX breaks down a bit. how are you thinking about this?
looks like I got a network error
And what’s the plan for determinism? For repeat workflows it’s important that the same pipeline produces the same cut each time. Are node outputs consistent or does the model vary run to run?
since we're building on top of LLMs, which are by nature probabilistic, you won't get the exact same frame-level cut each time, but there is still determinism in the expected outputs
for example, if you have a workflow set up to create 5 clips from a podcast, add b-rolls and captions, and reframe to a few different aspect ratios, then any time you invoke this workflow (regardless of which podcast episode you provide as input), you'll get back 5 clips that have b-rolls and captions and are reframed to those aspect ratios
however, which clips are selected, what b-rolls are generated, where they're placed — this is all non-deterministic
you can guide the agent via prompting the tiles individually, but that's still just an input into a non-deterministic machine
Or just let the user adjust the seed and temperature themselves, or hide it under a checkbox that says deterministic with your chosen seed and temperature.
good point — we could enable these more granular-level knobs for users if it seems to be something people want
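The seed/temperature knob being discussed can be sketched at the sampling level: with a fixed seed and temperature, the same scores always yield the same pick. This is illustrative only; a hosted LLM API may not guarantee bit-identical output even when it exposes a seed parameter:

```python
import math
import random

def sample(scores, temperature=1.0, seed=None):
    """Softmax-sample an index from raw scores; temperature -> 0 behaves like argmax."""
    rng = random.Random(seed)                       # fixed seed => reproducible draw
    scaled = [s / max(temperature, 1e-9) for s in scores]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]     # numerically stable softmax numerators
    total = sum(weights)
    r = rng.random() * total
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(scores) - 1

# Same seed and temperature => identical picks across runs.
picks = [sample([2.0, 1.0, 0.5], temperature=0.7, seed=42) for _ in range(3)]
```

Exposing `seed` per workflow run would give the "same pipeline, same cut" behavior the parent asks for, at the cost of the agent never exploring alternatives.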
Can it work for this use case? I have lots of short videos (15 seconds to 1 minute) of my kids and want to upload them all (let's say 10 videos) and have the agent make a single video with all the best bits of them.
yes! you can upload as many videos as you want (file limits are currently 20 GB and 90 minutes per file). Then I'd recommend using either the Rough Cut tile or the Montage tile to stitch them all together. In those tiles, you can prompt particular visual cues for how you want the videos to be combined. Let me know if you have any questions.
I didn't get this part: "After talking to users though, we realized that the chat UX has limitations for video: (1) the longer the video, the more time it takes to process. Users have to wait too long between chat responses. (2) Users have set workflows that they use across video projects. Especially for people who have to produce a lot of content, the chat interface is a bottleneck rather than an accelerant." What are you processing? Frame-by-frame images?
what I mean here is that processing / analyzing a beefy file format like video will take longer than processing text input
same with returning that back to the user as manipulated output (text / code generation is much more rapid than rendering a video)
I’ve had a lot of fun with Remotion and Claude Code for CLI video editing. I’ve been impressed with how much traditional video editing I can manage.
I will be checking this out!
that's super interesting — what kind of things have you done with remotion and Claude Code?
they're very powerful; when you put them together, it almost feels like Cursor for Video Editing
Mostly using it for technical marketing/explainer videos eg https://x.com/mattarderne/status/1987441582413345016
I just signed up for a Creator plan, but it looks like the automated "Thank you for being a Mosaic Creator" email going out is not configured correctly. Instead of having my company name, it referenced a different business name and description (that seems to exist/be accurate, so not a placeholder).
Woah, yikes.
Hey! Thanks for calling this out — looking into what happened here & fixing right now.
This has been fixed now.
Have y'all talked with Max and the Ozone team? Suppose you would have lots to learn from them as you take on this space. Best of luck, video is hard!
Haven't chatted with them but their platform looks interesting!
Video is hard, but it's also a fun modality that presents some interesting challenges. And it's where content is converging.
I just clicked the link and encountered a non-scrollable, dark, fixed content pane with loads of flickering images and scrolling text in random font sizes without much meaning. I felt imprisoned, subjected to unexpected suffering, couldn't scroll away, got scared, raced for the window close button, and then breathed easy.
seems like the landing page is detracting from the main product; this is good feedback, so thanks! For now, avoid the scaries and head directly to https://edit.mosaic.so to try the actual canvas interface
Since video is your thing, I feel like you need to just make a very edited demo reel and put all your energy into trying to get people to watch that video. Meaning, remove almost all text and bloat from the site and just show us all the cool stuff the product does for/to video editing. Distill it to 60-120 seconds and put that on your landing page; hell, put it on autoplay if you want, so long as it's clear that's the one thing I'm supposed to be paying attention to.
yeah, I think a demo reel of a BEFORE vs AFTER, right in the hero or immediately below it, would be helpful
I've put the /edit and /docs links in the first sentence above to soften the blow as well :)
They really managed to handcraft a unique user experience, that's for sure.
we did but the landing page seems to be detracting from it — head directly to https://edit.mosaic.so to try the actual canvas interface
I had the same reaction. About what you would expect from a team steeped in the Tesla mindset.
Please don't cross into personal attack. We're trying for the opposite on this site.
https://news.ycombinator.com/newsguidelines.html
thanks for the feedback — you can head directly to https://edit.mosaic.so to try the actual canvas interface
Really interesting direction. The node-based canvas feels like a more scalable abstraction for video automation than the usual chat-only interface. I’m curious how you’re handling long-form content where temporal context matters (e.g., emotional shifts, pacing, narrative cues).
Multimodal models are good at frame-level recognition, but editing requires understanding relationships between scenes. Have you found any methods that work reliably there?
hey, thanks for the comment!
we've actually found that multimodal models are surprisingly good at maintaining temporal context as well
that being said, we also do a bunch of additional processing using more traditional CV / audio analysis to extract this information (both frame-level and temporal) as part of video understanding
for example, mean-motion analysis shows how subjects move over time, which helps determine where the important things are happening in the video and ultimately leads to better edit placement.
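A rough sketch of what a mean-motion signal could look like (my reading of the comment, not Mosaic's actual pipeline): score each frame transition by mean absolute pixel difference, then pick the most active window as an edit candidate.

```python
def motion_scores(frames):
    """Mean absolute difference between consecutive frames (frames = lists of pixel values)."""
    return [
        sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        for prev, cur in zip(frames, frames[1:])
    ]

def busiest_window(scores, width):
    """Start index of the window with the highest total motion."""
    return max(range(len(scores) - width + 1),
               key=lambda i: sum(scores[i:i + width]))

# Tiny synthetic clip: 4-pixel "frames" with a burst of motion in the middle.
frames = [[0, 0, 0, 0], [0, 0, 0, 0], [9, 9, 9, 9], [0, 0, 0, 0], [0, 0, 0, 0]]
scores = motion_scores(frames)   # [0.0, 9.0, 9.0, 0.0]
start = busiest_window(scores, 2)  # window starting at transition 1
```

Real systems would run this on downsampled grayscale frames (or optical flow) rather than raw pixel lists, but the signal shape is the same.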
Very cool. It definitely feels to me that the power of pro tools should be available to more people with AI.
Would have been nice if there was a killer demo on your landing page of a video made with Mosaic.
that's our perspective as well.
a lot of tooling is being built around generative AI in particular, but there's still a big gap for people that want to share their own stories / experiences / footage but aren't well-versed with pro tools.
valid feedback on the landing page — something we'll add in.
The problem is, any video demo of a tool like this is just an entirely unrelated video.
can you clarify what you mean here? check out this demo video: https://screen.studio/share/SP7DItVD
These comments real sus.
It's true that a bunch of positive comments from accounts without much posting history appeared in this thread. I assume that the OP's friends got wind of their launch. We tell founders to avoid that—see the bold part of https://news.ycombinator.com/yli.html—but if one wants to be fair, it's also the case that (1) this is not always easy to control, and (2) the people posting such comments think they're helping and don't have enough experience of HN to realize that it has a counter-effect.
I'm going to move the overly sus ones to a collapsed stub now. (https://news.ycombinator.com/item?id=45988584)
thanks dan
i agree, things are a bit too kind. give me some more feedback.
I absolutely love your approach of "expert tools". If I understand it correctly, you aren't just feeding a video into a multimodal LLM and asking it "what is the bounding box of the optimal caption region?" -- you have built tools with discrete algorithms (traditional CV techniques) that use things like object detection boxes and motion analysis to give "expert opinions" to the LLM in the form of tool calls -- such as finding the regions of minimal saliency and minimal movement as the best places for caption placement.
If the LLM needs to place captions, it calls one of these expert discrete-algorithm tools to determine the best place to put the captions -- you aren't just asking the LLM to do it on its own.
If I'm correct about that, then I absolutely applaud you -- it feels like THIS is a fantastic model for how agentic tools should be built, and this is absolutely the opposite of AI slop.
Kudos!
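The commenter's guess can be sketched as a tool-call shape (hypothetical names and regions, not Mosaic's actual API): a discrete scorer picks the caption region, and the LLM only ever sees the tool's verdict.

```python
# Hypothetical "expert tool" for caption placement: score candidate regions by
# how much motion/saliency they contain and return the quietest one. The LLM
# would call this tool instead of guessing a bounding box itself.
CANDIDATES = {
    "top": (0.0, 0.0, 1.0, 0.2),      # normalized (x0, y0, x1, y1)
    "bottom": (0.0, 0.8, 1.0, 1.0),
    "center": (0.2, 0.4, 0.8, 0.6),
}

def best_caption_region(activity):
    """activity: region name -> mean motion/saliency score in [0, 1]. Lower is better."""
    name = min(CANDIDATES, key=lambda r: activity.get(r, 1.0))
    return {"region": name, "bbox": CANDIDATES[name]}

# e.g. a talking-head shot: the subject dominates the center, the bottom is calm.
result = best_caption_region({"top": 0.3, "bottom": 0.05, "center": 0.9})
```

The division of labor is the point: the deterministic scorer owns the pixel-level decision, and the model only decides when to invoke it.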
thanks for the comment, that's exactly right
we're using a mix of out-of-the-box multimodal AI capability + traditional audio / video analysis techniques as part of our video understanding pipeline, all of which become context for the agent to use during its editing process
Hey, good luck with Mosaic.
Some initial feedback on the landing page: it looks great, but for me there's too much motion going on on the homepage and the use cases page. May be an unpopular opinion!
Agreed, homepage was confusing for me also. I tried to scroll around and see a demo. For a product like this that is so visual, I expected to be able to find a 30s demo clip somewhere but couldn't see one on the homepage or product page (and the scrolling on the product page was annoying for me).
the sad part is we spent so long on the product page scrolling animation haha
very valid point though — I think a demo clip of a BEFORE vs AFTER, right in the hero or immediately below it, would be helpful
thanks for the feedback
valid points, thanks for the feedback. i had gone for a certain aesthetic but you're right in that it may be a bit too overwhelming.
this is going to save me so much time, hell yeah guys!
thank you! let us know if you have any feedback!
Loom for Loom?
loom is focused on screen recordings / demos
It can record video and has AI editing.
Good luck. I've dabbled with this myself and ultimately decided that DaVinci Resolve would end up doing this natively. But then again they haven't yet so who knows!
Good luck with it, sincerely.
thanks! curious what you started dabbling with and if you have any thoughts to share :)
> We got frustrated trying to accomplish simple tasks in video editors like DaVinci Resolve and Adobe Premiere Pro. Features are hidden behind menus, buttons, and icons, and we often found ourselves Googling or asking ChatGPT how to do certain edits.
Hidden behind a UI? Most of the major tools like blade, trim, etc. are right there on the toolbars.
> We recorded hours of cars driving by, but got stuck on how to scrub through all this raw footage to edit it down to just the Cybertrucks.
Scrubbing is the easiest part. Mouse over the clip, it starts scrubbing!
I’m being a bit tongue in cheek, and I totally agree there is a learning curve to NLEs, but those complaints were also a bit striking to me.
hey! you're right that most of the basic tools like splitting / trimming are available right in the timeline. but for things like adding a keyframe to animate a counter, I had no idea where to go or how to start.
Scrubbing is easy enough when you have short footage, but imagine scrubbing through the footage we had of 5 hours of cars driving by, or maybe a bunch of assets. This quickly becomes very tedious.
Hey I just wanted to come back and be clear that yeah I was being tongue in cheek, but looking back at it comes off as a little snarky/“this isn’t even a thing!” and I’m sorry for that - what you built is really cool and I’m excited to try it out.
Good luck out there!
I don’t need to imagine, I do it haha but again I was being tongue in cheek. I personally would love an effective tool that can mark and favorite clips for me based on written prompts. Would save me an awful amount of time!
curious — what kind of content do you edit?
Now? Mostly long form educational content. But historically? Everything more or less! Freelancer for about 15 years until my current in-house producer role.
obligatory https://news.ycombinator.com/item?id=9224
Like I said, the description of some of the issues was just kind of funny to me - I think this could be a potentially very useful tool.
Do you think this is the next Dropbox?
next dropbox? let's go!
That question definitely sounded way more skeptical than I intended! Man I just can’t get my tone right today
Mosaic team dev here. Hanging in the comments all day and pushing updates as fast as we can - really appreciate the feedback!
Is there a way to keep up to date on updates and new announcements? TIA.
yes! please join our discord https://discord.gg/26SAZzBTaP or follow us on X https://x.com/mosaic_so to stay up to date on new features and announcements
This is so cool. Good luck with your venture.
Thank you :)
Can you make this a desktop app?
I'm really tired of editing videos in the cloud. I'm also tired of all these AI image and video tools that make you work over a browser. Your workflow seems so second class buried amongst all the other browser tabs.
I understand that this is how to deploy quickly to customers, but it feels so gross working on "heavy" media in a browser.
we've done a ton of work to optimize uploads, downloads, and transcoding so we can handle beefy files using proxies, and you can also export XML back to traditional editing tools that will relink to your "heavy" media. but I hear you, and I think anything running locally on device is just going to feel faster
it does present its own set of challenges, but something we've thought about
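(For readers unfamiliar with proxy workflows: the editor generates small, cheap-to-decode copies of heavy source files, scrubs and cuts against those, then relinks to the originals at export. Mosaic's actual pipeline isn't public; below is a minimal hypothetical sketch of how a proxy transcode command might be built for ffmpeg. The resolution, codec, and helper names are assumptions for illustration, not Mosaic's real settings.)

```python
from pathlib import Path

def proxy_command(src: str, out_dir: str = "proxies",
                  height: int = 540, crf: int = 28) -> list[str]:
    """Build an ffmpeg command that renders a lightweight 'proxy' copy
    of a heavy source file. The editor scrubs the proxy; the final
    export relinks to the original media. (Hypothetical settings.)"""
    out = Path(out_dir) / (Path(src).stem + "_proxy.mp4")
    return [
        "ffmpeg", "-i", src,
        "-vf", f"scale=-2:{height}",          # downscale, keep aspect ratio
        "-c:v", "libx264", "-crf", str(crf),  # higher CRF = smaller file
        "-preset", "veryfast",
        "-c:a", "aac", "-b:a", "96k",
        str(out),
    ]

cmd = proxy_command("footage/cybertrucks_day1.mov")
```

The returned list can be handed to `subprocess.run(cmd)`; building it as a list (rather than a shell string) avoids quoting problems with filenames containing spaces.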
There's plenty of great native desktop apps for video editing. And there have been for almost 30 years. I also don't understand why anyone would want to use a browser for this.
there is some friction even in downloading a new app
if our goal is to bring more people into the fold, minimizing the steps for them to start editing is something we want to optimize for
that being said, being on the browser presents its own set of challenges, many of which are rightfully mentioned in this thread
Sorry, not buying the argument. I think it's more like: that's the current zeitgeist.
As a creator who films long form content, editing (specifically clipping for short form) is such a nightmare - this solves such a huge problem and the ui is insanely clean.
Will be using this a ton in the future
great to hear — I'd recommend using the clips tile to create clips, but you can also use the rough cut tile to help edit down the raw footage for the long-form
Not related to NCSA Mosaic (RIP).
if you take a snippet of Ben Horowitz's interview out of context, he has a lot of good things to say about our product :)
Can we stop with the overloaded names? "Mosaic" is a well-known web browser.
"Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting."
https://news.ycombinator.com/newsguidelines.html
> "Mosaic" is a well-known web browser
A browser that was discontinued 30 years ago.
>> "Mosaic" is a well-known web browser.
Not really relevant anymore, though? As long as it's not called "Project: Prometheus" I think we can count it as a win.
naming is hard
our original name was Frame, only to realize that frame.io existed already.
we brainstormed names for a while and had several notes full of possible names
mosaic is one which stood out to us because it not only represents artwork, but also the tiles (nodes) in the canvas come together to form your mosaic — we thought that was a fitting name