
Gaussian splat automated photobooth: 7-camera quick demo!

May 12, 2026

https://www.youtube.com/watch?v=J5rD3vKRivk

3D Gaussian Splat photobooth demo with the latest CameraServer / XangleCS beta pipeline.

Yes, this flow is fully automated.

In the run shown here, it takes about 37 seconds to go from capture to the final result: a rendered MP4 of the Gaussian Splat. That is the part I wanted to highlight most in this short demo. This is not only about producing a 3D reconstruction. It is about taking a real capture, pushing it through the processing stack, and ending with a final watchable file without stopping for a bunch of manual steps in the middle.

What you are seeing here is a much more practical version of the workflow: trigger the cameras, let the dataset land, let process pick the job up automatically, let the Gaussian Splat pipeline do its work, and let the Splat Player turn that result into a finished camera move for the MP4 render. That is a very different experience from the older style of workflow where the splat itself was the finish line and the final playback path still had to be figured out separately afterward.
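The "let the dataset land, let it get picked up automatically" part of the flow can be sketched as a simple folder watcher. To be clear, this is not the actual CameraServer / XangleCS implementation (that code is not public); the paths, folder layout, and `process_dataset` stand-in are all invented to illustrate the hands-off pattern:

```python
from pathlib import Path
import tempfile

def process_dataset(dataset_dir: Path) -> Path:
    """Stand-in for the real pipeline: align -> train splat -> render MP4."""
    out = dataset_dir / "result.mp4"
    out.touch()  # the real step would write a rendered video here
    return out

def pick_up_new_jobs(capture_root: Path, seen: set[str]) -> list[Path]:
    """Return results for any dataset folders that have not been processed yet."""
    results = []
    for d in sorted(p for p in capture_root.iterdir() if p.is_dir()):
        if d.name not in seen:
            seen.add(d.name)
            results.append(process_dataset(d))
    return results

# Demo: a freshly captured dataset folder is picked up with no manual step.
root = Path(tempfile.mkdtemp())
(root / "shot_0001").mkdir()
seen: set[str] = set()
outputs = pick_up_new_jobs(root, seen)
print([o.name for o in outputs])  # -> ['result.mp4']
```

A real version would poll (or use filesystem events) in a loop, but the key idea is the same: once a capture lands on disk, the job starts without anyone clicking anything.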

One of the biggest improvements is on the Splat Player side. The splat is not just a static artifact anymore. It can now move through preset-driven playback, which makes the result much more usable for actual demos, photobooth outputs, branded activations, and repeatable production. Instead of opening a splat and improvising the camera every time, I can choose a preset and get a move that already feels deliberate.

That matters because playback is not a small detail. A technically successful splat can still feel weak if the camera path is awkward, random, or disconnected from the subject. The preset system is what starts to turn the reconstruction into a presentation format. It gives the result shape. It gives it rhythm. It makes the final MP4 feel intentional instead of accidental.

There are already many playback presets available, and that part is important because speed is a big deal when you want repeatable output. If I want a fast orbit, a more dramatic reveal, or a consistent demo path that I know works well on a certain kind of subject, I can start from a preset instead of rebuilding the move from scratch each time. But the other half of the update is just as important: I am not locked into those defaults. If I want a different feel, I can make my own preset and save the move I actually want.

That makes the system useful in two ways at once. Presets make it fast to get a result, and custom presets make it possible to build a recognizable style. For a booth, an activation, or any recurring setup, that means the playback can become part of the look of the experience instead of just being a generic orbit around the subject.

Another part I wanted to highlight is that the selected preset is not only a viewer convenience. In the current flow, it is what tells the Gaussian Splat branch to produce the final MP4 render. In other words, the preset is not just decoration on top of the pipeline. It is part of the handoff from processing to final output. That is why this feels like such a meaningful step forward. The render path is now tied to an intentional playback choice.
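One way to think about a playback preset is as a small piece of data that fully determines the rendered camera move. The sketch below is a toy model, not the Splat Player's actual preset format (which I have no visibility into); every field name here is made up. It shows how a named preset like a fast orbit can deterministically generate one camera pose per rendered frame, which is exactly what makes the move repeatable:

```python
import math
from dataclasses import dataclass

@dataclass
class OrbitPreset:
    """Toy stand-in for a playback preset; field names are invented."""
    name: str
    radius: float      # distance from the subject
    height: float      # camera height relative to the subject center
    degrees: float     # total sweep of the orbit
    duration_s: float  # playback length
    fps: int = 30

    def camera_path(self):
        """Yield one (x, y, z, yaw_deg) camera pose per rendered frame."""
        n_frames = int(self.duration_s * self.fps)
        for i in range(n_frames):
            a = math.radians(self.degrees * i / max(n_frames - 1, 1))
            yield (self.radius * math.cos(a),
                   self.height,
                   self.radius * math.sin(a),
                   math.degrees(a) + 180.0)  # yaw back toward the subject

fast_orbit = OrbitPreset("fast-orbit", radius=2.5, height=1.4,
                         degrees=360.0, duration_s=4.0)
poses = list(fast_orbit.camera_path())
print(len(poses))  # 120 frames for a 4 s move at 30 fps
```

Because the preset is just data, saving a custom move is saving a few numbers, and replaying it produces the same camera path every time.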

The automation on the process side is also getting much more useful. JPG Alignment and Gaussian Splat can be treated as separate automation branches, which means the system can be configured around the kind of result I want instead of forcing everything into one fixed path. If I want a fast aligned replay, I can use JPG Alignment. If I want the 3D playback result, I can enable Gaussian Splat and let the selected preset drive the final MP4. If I want both outputs from the same capture, both branches can run independently.
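The branch separation described above can be pictured as two independent toggles over the same dataset. Again, this is a hypothetical sketch; the real XangleCS settings and function names are not public, so everything here is invented to show the shape of the idea:

```python
# Invented config mirroring the two independent automation branches.
BRANCHES = {
    "jpg_alignment": True,    # fast aligned JPG replay
    "gaussian_splat": True,   # full 3D result; the preset drives the MP4
}

def run_jpg_alignment(dataset: str) -> str:
    return f"{dataset}/aligned_replay.mp4"

def run_gaussian_splat(dataset: str, preset: str) -> str:
    return f"{dataset}/{preset}_splat.mp4"

def run_enabled_branches(dataset: str, preset: str = "fast-orbit") -> list[str]:
    """Each branch runs (or not) on its own; neither blocks the other."""
    outputs = []
    if BRANCHES["jpg_alignment"]:
        outputs.append(run_jpg_alignment(dataset))
    if BRANCHES["gaussian_splat"]:
        outputs.append(run_gaussian_splat(dataset, preset))
    return outputs

print(run_enabled_branches("shot_0001"))
# -> ['shot_0001/aligned_replay.mp4', 'shot_0001/fast-orbit_splat.mp4']
```

Flipping either flag off changes nothing about the other branch, which is the whole point: one capture, up to two outputs, configured per event.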

That separation is a big deal in practice. A fast JPG-based result and a more advanced Gaussian Splat result do not have to compete for the same exact role anymore. They can serve different needs from the same dataset. One can be the fast operational output. The other can be the more impressive showcase output. That is a much better fit for real event workflows, real demos, and real iteration.

For me, that is what makes this short clip more interesting than a normal reconstruction test. It is not only proof that the software can build a splat. It is proof that the capture-to-processing-to-presentation chain is tightening up. Once the data is captured, the system can keep moving without a lot of babysitting.

The speed number matters here too. Around 37 seconds from capture to a final splat MP4 on my setup is the kind of result that starts to feel operational, not just experimental. Of course, timing will depend on the machine, the dataset, the rig, the preset, and the render settings. But the point is that this is moving toward a workflow that makes sense in live environments, not just offline testing.

Another thing worth pointing out is the camera spacing in this demo. Here, the cameras are actually very close to each other. That kind of tight arrangement comes more from my traditional bullet-time work than from what I would consider the ideal setup for Gaussian Splat.

For Gaussian Splat, I generally want more spacing between cameras so the system gets stronger viewpoint differences across the subject. That wider spread tends to be a better fit for reconstruction and for more interesting final playback. So this demo is not meant to show the perfect Gaussian Splat rig geometry. It is more about showing that the pipeline is already working end to end, even on a setup that is still physically biased toward classic bullet-time habits.

That distinction matters to me. The result here is already fast and usable, but if I were optimizing specifically for Gaussian Splat instead of for traditional bullet-time capture, I would spread the cameras farther apart.
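A quick back-of-the-envelope calculation shows why spacing matters. For cameras placed evenly along an arc, the baseline between neighbors grows with the arc width, and a bigger baseline means stronger viewpoint differences for reconstruction. The numbers below are illustrative only, not measurements from the demo rig:

```python
import math

def camera_spacing(n_cameras: int, arc_degrees: float, radius_m: float):
    """Angular step and straight-line baseline between neighboring
    cameras placed evenly along a circular arc around the subject."""
    step_deg = arc_degrees / (n_cameras - 1)
    # chord length between two points step_deg apart on the circle
    baseline_m = 2 * radius_m * math.sin(math.radians(step_deg) / 2)
    return step_deg, baseline_m

# Tight bullet-time style arc vs a wider spread, 7 cameras at 2 m.
for arc in (30.0, 120.0):
    step, base = camera_spacing(7, arc, radius_m=2.0)
    print(f"{arc:5.0f} deg arc: {step:4.1f} deg/step, {base:.2f} m baseline")
```

Going from a 30 degree arc to a 120 degree arc roughly quadruples the baseline between neighboring cameras, which is the kind of viewpoint diversity the splat reconstruction benefits from.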

So this short demo is really showing several recent improvements at once:

  • fully automated capture to final result
  • about 37 seconds from trigger to finished splat MP4 on my setup
  • Splat Player playback presets for repeatable motion
  • the ability to create your own presets when you want a more specific move
  • separate JPG Alignment and Gaussian Splat automation branches inside process
  • a more practical handoff from reconstruction to final presentation
  • a working end-to-end result even on a tight camera layout that is not yet the ideal Gaussian Splat configuration

Current requirements for this workflow on my side:

  • RealityScan 2.1
  • Postshot
  • NVIDIA GPU (I am using an RTX 4090)
  • XangleCS beta v26.05.12

If you are working on photobooths, event capture, creative kiosks, or experimental multi-camera 3D workflows, this is the part I think is getting interesting now: not just generating a splat, but automating the path from capture to a finished playback file that is actually ready to show.

More CameraServer / XangleCS updates soon.
