I wonder if there are opportunities for collaboration. It seems we're the only cloud-agnostic video embedding model that lets users own their embeddings.
Here's a reverse video search tutorial: https://www.youtube.com/watch?v=H92cEhG9uMI&ab_channel=Mixpe...
I made an experiment in a similar style a while ago, but I decided it was too difficult to keep going as a "tiny" side project, so I never really released anything beyond a demo that you can see here:
There could be really interesting abstractions that people might build on top of this, like automatically creating and animating infographics, generating background sounds, or cutting and recycling video. If you spin this 100x further, an entire video creation studio might emerge.
Which parts of Video Infrastructure do you want to build first? Which other higher-level parts could be built by you or users? Where could this go?
I’ve struggled to find a pure client-side encoder that is as fast, lightweight, and high quality (in terms of lossiness) as what I had going with mp4-h264[1]. I suspended the project out of legal concerns, but it seems the patents are finally starting to run their course, and it might be worth exploring again. I’ve been able to use it to stream massive 8K near-pixel-perfect MP4s for generative art (where quality matters more than file size), whereas WebCodecs always left me with a too-lossy result.
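For what it's worth, WebCodecs output quality is largely driven by the encoder configuration you pass in. Here's a minimal sketch of computing a generous bitrate budget; the 0.5 bits-per-pixel heuristic, the function name, and the High-profile codec string are my own assumptions, not anything from mp4-h264 or Revideo:

```typescript
// Hedged sketch: pick a generous bitrate for a WebCodecs VideoEncoder
// so H.264 output stays near-pixel-perfect. The 0.5 bits/pixel/frame
// budget (far above typical streaming budgets) is an assumption.
function nearLosslessConfig(width: number, height: number, fps: number) {
  const bitsPerPixel = 0.5;
  const bitrate = Math.round(width * height * fps * bitsPerPixel);
  return {
    codec: "avc1.640033", // H.264 High profile, level 5.1
    width,
    height,
    framerate: fps,
    bitrate,
    latencyMode: "quality" as const, // favor quality over realtime encoding
  };
}

// In a browser you would hand this to VideoEncoder.configure(...):
//   const encoder = new VideoEncoder({ output, error });
//   encoder.configure(nearLosslessConfig(7680, 4320, 30));
```

Even with a budget like this, H.264 stays lossy; for truly pixel-perfect frames you'd need a lossless codec, which is where mp4-h264's tuning had the edge for me.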
Noob question: how would you explain, in the simplest terms, the difference between FOSS and source-available? In other words, what does Remotion lack that would make it FOSS?
How does this compare to Remotion, which uses the React mental model?
Would you mind sharing a bit about your pivot? I always find these stories interesting!
The use case is a service where people can upload certain data and I use it to generate a video. Say I give you the option to make a speed-gauge video that displays the values you input, one after another, for a second each. If you upload 60 values, that's a one-minute video; if you upload your speed every second for an hour, that's an hour-long video. But it should ideally not take an hour to render. Unfortunately, most browser-based tools I've seen can't render faster than playback, so the user would effectively have to watch the whole video just to download it.
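To make the requirement concrete: the key to faster-than-playback rendering is a deterministic frame-to-sample mapping, so frames can be drawn and encoded as fast as the machine allows rather than captured in real time. A minimal sketch of that mapping (the function names are illustrative, not any library's API):

```typescript
// Hedged sketch: map an output frame index to the uploaded sample it
// should display, given the output frame rate and the upload rate.
// Each sample is shown for (fps / sampleRateHz) consecutive frames.
function sampleForFrame(frame: number, fps: number, sampleRateHz = 1): number {
  return Math.floor((frame * sampleRateHz) / fps);
}

// Total frames needed to show every sample for its full duration.
function totalFrames(sampleCount: number, fps: number, sampleRateHz = 1): number {
  return Math.ceil((sampleCount * fps) / sampleRateHz);
}
```

A renderer can then loop `for (let f = 0; f < totalFrames(values.length, fps); f++)`, draw `values[sampleForFrame(f, fps)]` on the gauge, and hand each frame straight to an encoder, so an hour of footage takes only as long as drawing and encoding 108,000 frames, not an hour of wall-clock playback.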
On a selfish note, as a canvas library developer/maintainer, I do have questions around your choice of Motion Canvas: what attracted you to that library in particular (I'm assuming it's the Editor, but could be wrong)?
On a broader note, my main interest in canvas+video centers on responsive, interactive, and accessible video displays in web pages. Have you had any thoughts on how you'd like to develop Revideo to support this sort of functionality?
I’ve only skimmed the docs and nothing jumped out on this: would it be possible to use a 3D canvas context? For example, integrating a dynamic three.js layer/asset into the video?
When the text-to-code capabilities of LLMs become more mature, libraries like these are going to create a lot of novel use cases and opportunities.
Seems like there might be room for a "LangChain for Video" in this space...