vessenes
Paper includes a Blender plugin. Very nice. To my non-professional eye, this looks like it works well enough for an indie game dev to give it a try. The method integrates with existing poly-based scenes, so the days of needing a digital hero model built by hand might be over soon. And now you can use a physical model, which is cool and fun.

The general idea here, by the way, is to take one unconstrained Gaussian pass, hold those Gaussians aside, then take another pass that focuses on densification (they call it regularization), hard-constraining the Gaussians to the denser areas of detail. That second set is called the frosting, and it has the nice property of being smoother; you don't render with it, because it's too smooth, but you do use it to build your meshes. Then they do some magic to key the OG Gaussians off the regularized ones, which I didn't understand.
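
To make that two-pass idea concrete, here's a toy numpy sketch of my reading of it (this is not the paper's code; every function, weight, and the k-means-style fit are made up for illustration). Pass 1 fits unconstrained Gaussians, pass 2 refits with a flatness penalty and drives the "mesh", and the pass-1 Gaussians get bound to that mesh:

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic "scene": noisy points on a circle (stand-in for a surface).
    theta = rng.uniform(0, 2 * np.pi, 500)
    points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(500, 2))

    def fit_gaussians(points, k, flatness_weight=0.0, iters=20):
        """Crude k-means-style fit; flatness_weight shrinks each Gaussian's
        thinnest axis, mimicking a surface-alignment regularizer."""
        centers = points[rng.choice(len(points), k, replace=False)]
        covs = np.stack([np.eye(2) * 0.01] * k)
        for _ in range(iters):
            labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
            for j in range(k):
                mask = labels == j
                if mask.sum() < 2:
                    continue
                centers[j] = points[mask].mean(0)
                cov = np.cov(points[mask].T) + 1e-6 * np.eye(2)
                if flatness_weight > 0:
                    # Shrink the smallest principal axis -> flatter Gaussian.
                    vals, vecs = np.linalg.eigh(cov)
                    vals[0] *= (1.0 - flatness_weight)
                    cov = vecs @ np.diag(vals) @ vecs.T
                covs[j] = cov
        return centers, covs

    # Pass 1: unconstrained Gaussians (the set you actually render with).
    render_centers, render_covs = fit_gaussians(points, k=40)

    # Pass 2: regularized "frosting" pass (flatter; used only for meshing).
    mesh_centers, _ = fit_gaussians(points, k=40, flatness_weight=0.9)

    # "Mesh" extraction: order the regularized centers around the circle.
    order = np.argsort(np.arctan2(mesh_centers[:, 1], mesh_centers[:, 0]))
    mesh_vertices = mesh_centers[order]

    # Keying: bind each render Gaussian to its nearest mesh vertex, so
    # editing the mesh drags the renderable Gaussians along with it.
    binding = np.argmin(
        ((render_centers[:, None] - mesh_vertices) ** 2).sum(-1), axis=1)
    print("render Gaussians:", len(render_centers),
          "mesh vertices:", len(mesh_vertices),
          "example binding:", binding[:5])

The real regularizer and binding are obviously far more involved; this just shows which set of Gaussians lives where in the pipeline.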

Upshot: this is basically SOTA or close for most Gaussian techniques, and definitely SOTA for editable Gaussian techniques in terms of render quality. It's also pretty fast: training a model takes something like 90 minutes on a V100. They mention in passing that you can use unconstrained mobile photos, and demo a few, but they don't say much about that pipeline. High-quality pipelines that can estimate camera poses without high-grade camera information are only just starting to appear; I'll be curious to look at the code and see what they do there.
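
For the unconstrained-photos route, the usual first step is COLMAP-style structure-from-motion to recover camera poses; I don't know what their repo actually does, but a minimal sketch with pycolmap (paths are placeholders) would look like:

    from pathlib import Path
    import pycolmap

    images = Path("photos/")      # your unconstrained mobile photos
    work = Path("colmap_out/")
    work.mkdir(exist_ok=True)
    db = work / "database.db"

    pycolmap.extract_features(str(db), str(images))   # SIFT features per image
    pycolmap.match_exhaustive(str(db))                # pairwise feature matching
    maps = pycolmap.incremental_mapping(str(db), str(images), str(work))

    rec = maps[0]                 # best reconstruction: poses + sparse points
    print(rec.summary())
    for image_id, image in rec.images.items():
        # Each image now has an estimated pose you can feed to a splatting
        # trainer (cam_from_world in recent pycolmap; older versions expose
        # qvec/tvec instead).
        print(image.name, image.cam_from_world)
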

peppertree
I can see the next paper: raytracing for refractive objects, PBR for hard surfaces, and GS for soft surfaces.
adzm
I can't be the only one who read "edible" in the title. Fascinating work regardless.