Scaffold-GS: Technical Deep Dive
Structured 3D Gaussians for View-Adaptive Rendering
Authors: Tao Lu, Mulin Yu, Linning Xu, Yuanbo Xiangli, Limin Wang, Dahua Lin, Bo Dai. Primary reference: Lu et al., Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering [1].
Introduction
We have covered NeRFs [3] and 3DGS [2] in previous posts. While 3D-GS delivers real-time, high-fidelity rendering, it optimizes millions of independent Gaussians with little regard for the underlying scene structure, which leads to redundant primitives, large storage footprints, and overfitting to the training views (visible as artifacts under significant viewpoint or level-of-detail changes).
Scaffold-GS addresses these limitations by introducing a hierarchical, region-aware scaffold of anchor points and by predicting Gaussian attributes on the fly in a view-adaptive manner. The method preserves the efficiency of primitive-based rasterization yet recovers robustness and compactness via structure-aware encoding and dynamic decoding of local Gaussians [1].
This post adapts and expands the original paper into a self-contained technical exposition.
Preliminaries (Revision)
A 3D Gaussian used in 3D-GS is an anisotropic Gaussian density centered at a mean $\mu \in \mathbb{R}^3$:

$$G(\mathbf{x}) = \exp\!\left(-\tfrac{1}{2}(\mathbf{x}-\mu)^\top \Sigma^{-1} (\mathbf{x}-\mu)\right),$$

with covariance factorized as $\Sigma = R S S^\top R^\top$ so that $\Sigma$ remains positive semidefinite (here $S$ is a diagonal scale matrix and $R$ a rotation). Color is typically represented by SH coefficients or direct RGB, and each Gaussian bears an opacity $\alpha$. These 3D Gaussians are projected to 2D Gaussians on the image plane and rasterized with differentiable tile-based splatting. Pixel colors are composed via an ordered $\alpha$-blending accumulation:

$$C(\mathbf{x}') = \sum_{i \in N} c_i\, \sigma_i \prod_{j=1}^{i-1} (1 - \sigma_j),$$

where $\sigma_i = \alpha_i\, G^{2D}_i(\mathbf{x}')$ is evaluated from the projected 2D Gaussian $G^{2D}_i$ at pixel $\mathbf{x}'$, and $N$ denotes the Gaussians overlapping pixel $\mathbf{x}'$. The whole pipeline is differentiable and amenable to gradient-based optimization [2].
(References: NeRF background [3]; the canonical 3D-GS work [2].)
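To make the compositing formula concrete, here is a minimal PyTorch sketch of the per-pixel accumulation. It illustrates the math only; the real pipeline uses a tile-based CUDA rasterizer, and the function name and tensor layout here are my own.

```python
import torch

def composite_pixel(colors: torch.Tensor, sigmas: torch.Tensor) -> torch.Tensor:
    """Front-to-back alpha blending for a single pixel.

    colors: (N, 3) RGB of the Gaussians overlapping this pixel, sorted near-to-far.
    sigmas: (N,)   per-Gaussian alpha at this pixel (opacity times the projected
                   2D Gaussian evaluated at the pixel center).
    """
    # Transmittance in front of each Gaussian: prod_{j<i} (1 - sigma_j)
    transmittance = torch.cumprod(torch.cat([torch.ones(1), 1.0 - sigmas[:-1]]), dim=0)
    weights = sigmas * transmittance           # per-Gaussian contribution
    return (weights[:, None] * colors).sum(0)  # final pixel color

# Example: five Gaussians with random colors and moderate alphas.
pixel = composite_pixel(torch.rand(5, 3), 0.5 * torch.rand(5))
```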
Key idea of Scaffold-GS (high level)
Scaffold-GS replaces an unstructured, per-Gaussian optimization with a two-layer hierarchical representation:
- Anchor points: a sparse, regular voxel grid of anchors initialized from SfM point clouds (COLMAP) that roughly encodes where scene content exists; each anchor stores a compact, learnable feature and scale.
- Neural Gaussians: spawned on the fly from each anchor during rendering. For each visible anchor, we predict the attributes (position offsets, opacity, color, scale, rotation) of a small set of Gaussians via dedicated MLPs conditioned on view information (distance and view direction) and the anchor feature.
This design yields three practical effects: (i) geometry-aware distribution of Gaussians (anchors scaffold the coverage); (ii) view-adaptive Gaussians (attributes are decoded on demand, so they can vary with camera pose); and (iii) compactness (the model stores anchors plus MLPs instead of millions of independent Gaussian parameters). See Fig. 2 in [1] for a global overview.
Mathematical and algorithmic details
We now step through the major components and reproduce the core equations from the paper.
1. Anchor scaffold initialization
Start from an SfM point cloud $\mathbf{P} \in \mathbb{R}^{M \times 3}$ (COLMAP) and voxelize at grid size $\epsilon$:

$$\mathbf{V} = \left\{\left\lfloor \frac{\mathbf{P}}{\epsilon} \right\rceil\right\} \cdot \epsilon,$$

where $\{\cdot\}$ denotes deduplication of voxel center coordinates and $\lfloor\cdot\rceil$ is rounding to the nearest voxel center. Each voxel center $\mathbf{x}_v \in \mathbf{V}$ becomes an anchor with parameters:
- a local context feature $f_v \in \mathbb{R}^{32}$,
- a learnable anisotropic scaling $l_v \in \mathbb{R}^3$,
- $k$ learnable offsets $\mathcal{O}_v \in \mathbb{R}^{k \times 3}$ (these define the relative positions of spawned neural Gaussians).
This initialization concentrates anchors where SfM believes geometry exists and reduces the irregularity of raw SfM points.
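A minimal sketch of the voxelization step, assuming PyTorch tensors. The dictionary layout, parameter names, and the scale initialization are illustrative choices, not the paper's exact implementation.

```python
import torch

def init_anchors(sfm_points: torch.Tensor, eps: float, feat_dim: int = 32, k: int = 10):
    """Voxelize SfM points into deduplicated anchors with learnable parameters.

    sfm_points: (M, 3) COLMAP point positions; eps: voxel size.
    """
    # V = { round(P / eps) } * eps  -- round to voxel centers, then deduplicate.
    centers = torch.unique(torch.round(sfm_points / eps), dim=0) * eps
    n = centers.shape[0]
    return {
        "x_v": centers,                                       # anchor positions
        "f_v": torch.nn.Parameter(torch.zeros(n, feat_dim)),  # local context features
        "l_v": torch.nn.Parameter(eps * torch.ones(n, 3)),    # anisotropic scales (init is an assumption)
        "O_v": torch.nn.Parameter(torch.zeros(n, k, 3)),      # k learnable offsets per anchor
    }
```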
2. Multi-resolution view-dependent anchor feature
To make anchor features view-adaptive and multi-scale, the authors maintain a feature bank per anchor:

$$\{f_v,\; f_{v\downarrow_1},\; f_{v\downarrow_2}\},$$

i.e., the base feature and two downsampled/sliced variants. Given a camera position $\mathbf{x}_c$ and anchor position $\mathbf{x}_v$, compute the relative distance and direction:

$$\delta_{vc} = \lVert \mathbf{x}_v - \mathbf{x}_c \rVert_2, \qquad \vec{\mathbf{d}}_{vc} = \frac{\mathbf{x}_v - \mathbf{x}_c}{\lVert \mathbf{x}_v - \mathbf{x}_c \rVert_2}.$$

A tiny MLP $F_w$ maps $(\delta_{vc}, \vec{\mathbf{d}}_{vc})$ to a 3-way softmax weight vector:

$$\{w, w_1, w_2\} = \mathrm{Softmax}\!\left(F_w(\delta_{vc}, \vec{\mathbf{d}}_{vc})\right),$$

and the enhanced anchor feature $\hat{f}_v$ is a weighted sum:

$$\hat{f}_v = w \cdot f_v + w_1 \cdot f_{v\downarrow_1} + w_2 \cdot f_{v\downarrow_2}.$$
Intuition: the learned weights select an appropriate resolution mixture depending on viewing distance and orientation, enabling coarse vs. fine local detail to be modulated automatically. Implementation uses slicing/repetition to cheaply create multi-resolution features (supplementary details in [1]).
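The following sketch shows one way to implement this view-dependent mixing, assuming the bank is pre-built as an (N, 3, feat_dim) tensor; the MLP width and the module/parameter names are assumptions.

```python
import torch
import torch.nn as nn

class FeatureBankMixer(nn.Module):
    """Blend an anchor's multi-resolution feature bank with view-dependent weights."""

    def __init__(self, feat_dim: int = 32, hidden: int = 32):
        super().__init__()
        # Tiny MLP F_w: (distance, view direction) -> 3 logits, one per bank entry.
        self.F_w = nn.Sequential(nn.Linear(4, hidden), nn.ReLU(), nn.Linear(hidden, 3))

    def forward(self, f_bank: torch.Tensor, x_v: torch.Tensor, x_c: torch.Tensor):
        # f_bank: (N, 3, feat_dim) = {f_v, f_v_down1, f_v_down2}; x_v: (N, 3); x_c: (3,)
        delta = (x_v - x_c).norm(dim=-1, keepdim=True)      # relative distance
        direction = (x_v - x_c) / delta                     # unit view direction
        w = torch.softmax(self.F_w(torch.cat([delta, direction], dim=-1)), dim=-1)
        return (w.unsqueeze(-1) * f_bank).sum(dim=1)        # enhanced feature f_hat_v
```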
3. On-the-fly neural Gaussian derivation (decoding)
For each anchor visible in the camera frustum, spawn $k$ candidate neural Gaussians. Their positions are computed from the learned offsets, scaled by the anchor's per-axis scale:

$$\{\mu_0, \ldots, \mu_{k-1}\} = \mathbf{x}_v + \{\mathcal{O}_0, \ldots, \mathcal{O}_{k-1}\} \cdot l_v,$$

where $\cdot$ denotes elementwise scaling. The remaining Gaussian attributes are decoded in a single pass from $\hat{f}_v$, $\delta_{vc}$ and $\vec{\mathbf{d}}_{vc}$ via small MLPs:

$$\{\alpha_i\} = F_\alpha(\hat{f}_v, \delta_{vc}, \vec{\mathbf{d}}_{vc}),\quad \{c_i\} = F_c(\cdot),\quad \{q_i\} = F_q(\cdot),\quad \{s_i\} = F_s(\cdot),$$

where $q_i$ is a quaternion parameterizing orientation, $s_i$ a scale, and $c_i$ the color. Importantly, opacity thresholding prunes trivial Gaussians: only neural Gaussians with $\alpha_i \ge \tau_\alpha$ are kept for rasterization.
This dynamic decoding reduces both computation (only decode for visible anchors) and overfitting (Gaussians adapt with view).
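Below is a hedged sketch of the single-pass attribute decoding with the opacity pre-filter. Head widths, activation choices (e.g. tanh on opacity), and the default threshold are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class GaussianDecoder(nn.Module):
    """Decode k neural Gaussians per visible anchor from (f_hat_v, delta, direction)."""

    def __init__(self, feat_dim: int = 32, k: int = 10, hidden: int = 32):
        super().__init__()
        in_dim = feat_dim + 1 + 3  # enhanced feature + distance + view direction

        def head(out_dim):  # one small MLP per attribute group
            return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, k * out_dim))

        self.F_alpha, self.F_c = head(1), head(3)   # opacity, color
        self.F_q, self.F_s = head(4), head(3)       # quaternion, scale
        self.k = k

    def forward(self, f_hat, delta, direction, x_v, offsets, l_v, tau_alpha=0.0):
        n = f_hat.shape[0]
        z = torch.cat([f_hat, delta, direction], dim=-1)
        mu = x_v[:, None, :] + offsets * l_v[:, None, :]     # (N, k, 3) positions
        alpha = torch.tanh(self.F_alpha(z)).view(n, self.k)  # activation choice is an assumption
        color = self.F_c(z).view(n, self.k, 3)
        quat = self.F_q(z).view(n, self.k, 4)
        scale = self.F_s(z).view(n, self.k, 3)
        keep = alpha >= tau_alpha                            # opacity pre-filter
        return mu[keep], alpha[keep], color[keep], quat[keep], scale[keep]
```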
4. Differentiable rasterization & image formation
Each neural Gaussian is projected and rasterized as in 3D-GS; the final pixel color is computed by the standard ordered alpha blending shown in the Preliminaries. During training, gradients flow to anchor positions, offsets, feature banks, and the MLP weights that decode attributes, enabling end-to-end optimization. The architecture leverages tile-based rasterizers for GPU efficiency, matching real-time constraints.
5. Dynamic anchor refinement: growing & pruning
Initial anchors from SfM can be noisy or sparse. Scaffold-GS refines anchors during training using a neural-Gaussian-based gradient signal:
- Growing: accumulate the gradients of neural Gaussians, grouped per voxel, over a window of iterations; if the accumulated gradient norm exceeds a threshold, spawn new anchors there (improving coverage in under-reconstructed regions that still produce large gradients).
- Pruning: remove anchors that consistently spawn neural Gaussians with negligible opacity or low importance, controlling memory.
This bi-directional policy maintains compactness while recovering coverage in missed areas. Ablations in the paper show that growing is essential to fidelity while pruning controls size.
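The refinement policy can be summarized as two masks computed from running statistics. The accumulator names and thresholds below are illustrative assumptions, not the paper's exact values.

```python
import torch

def refine_anchors(grad_accum, render_count, opacity_accum,
                   grow_thresh=2e-4, prune_thresh=5e-3):
    """Decide which voxels need new anchors and which anchors to remove.

    grad_accum:    (V,) accumulated gradient norms of neural Gaussians, grouped per voxel
    render_count:  (V,) how many iterations contributed to each voxel's accumulator
    opacity_accum: (A,) accumulated opacity of the Gaussians spawned by each anchor
    """
    avg_grad = grad_accum / render_count.clamp(min=1)
    grow_mask = avg_grad > grow_thresh          # under-reconstructed voxels -> spawn new anchors
    prune_mask = opacity_accum < prune_thresh   # anchors whose Gaussians stay nearly transparent
    return grow_mask, prune_mask
```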
6. Loss and regularization
Training is supervised by reconstruction and regularization losses. The total loss is:

$$\mathcal{L} = \mathcal{L}_1 + \lambda_{\mathrm{SSIM}}\, \mathcal{L}_{\mathrm{SSIM}} + \lambda_{\mathrm{vol}}\, \mathcal{L}_{\mathrm{vol}},$$

where $\mathcal{L}_1$ is the per-pixel L1 between rendered and ground-truth RGB, $\mathcal{L}_{\mathrm{SSIM}}$ is a structural similarity loss [7], and $\mathcal{L}_{\mathrm{vol}}$ is a volume regularizer that discourages excessive occupied volume, e.g. $\mathcal{L}_{\mathrm{vol}} = \sum_i \mathrm{Prod}(s_i)$, the sum over neural Gaussians of the product of their axis scales. The paper gives implementation details and weight choices in the supplementary material.
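A minimal sketch of the total loss follows, assuming a simplified windowless SSIM stand-in (real training would use the windowed SSIM of [7]) and placeholder loss weights.

```python
import torch

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified, windowless SSIM stand-in (real training uses windowed SSIM [7])."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def scaffold_loss(pred, gt, scales, lambda_ssim=0.2, lambda_vol=1e-3):
    """L = L1 + lambda_ssim * (1 - SSIM) + lambda_vol * sum_i prod(s_i)."""
    l1 = (pred - gt).abs().mean()
    d_ssim = 1.0 - global_ssim(pred, gt)
    vol = scales.prod(dim=-1).sum()   # volume proxy: product of axis scales per Gaussian
    return l1 + lambda_ssim * d_ssim + lambda_vol * vol
```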
Implementation notes (practical recipe)
Below are reproducible details distilled from the paper and supplement:
- Initialization: obtain SfM points and camera poses using COLMAP [4]. Voxelize with the grid size $\epsilon$ chosen by scene scale (see supplement). Deduplicate voxels to get anchors.
- Anchor feature size: a 32-dimensional latent per anchor; build a feature bank by slicing and repeating the vector to form multi-resolution variants.
- k (neural Gaussians per anchor): $k = 10$ is typical, but the paper shows robustness across choices; the number of active Gaussians converges to a stable value via pruning.
- MLP architectures: small MLPs for the feature-bank weights ($F_w$), opacity ($F_\alpha$), color ($F_c$), rotation ($F_q$), and scale ($F_s$); attributes are decoded in one forward pass (efficiency). Exact layer widths/depths are provided in the supplement.
- Filtering for speed: two pre-filters are used: (1) view-frustum culling of anchors, and (2) an opacity threshold ($\tau_\alpha$). Filtering dramatically increases FPS with negligible fidelity loss when properly tuned.
- Training schedule: a mix of multi-scale (coarse-to-fine) training across datasets; loss weights and optimizer schedules follow the supplementary material. At inference, the authors report roughly 100 FPS at 1K resolution for typical scenes.
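Collecting the recipe into one place, a hypothetical configuration object might look like the following; every default is an illustrative assumption to be re-tuned per scene or taken from the supplement.

```python
from dataclasses import dataclass

@dataclass
class ScaffoldGSConfig:
    """One place to hold the recipe's knobs; defaults are illustrative, not the paper's."""
    voxel_size: float = 0.01         # epsilon, pick per scene scale
    anchor_feat_dim: int = 32        # per-anchor latent dimension
    k: int = 10                      # neural Gaussians spawned per anchor
    opacity_threshold: float = 0.0   # tau_alpha pre-filter (sweep per the paper)
    frustum_culling: bool = True     # skip anchors outside the view frustum
    lambda_ssim: float = 0.2         # loss weights (see supplement for actual values)
    lambda_vol: float = 1e-3
```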
Experiments and empirical findings
Scaffold-GS is evaluated across synthetic Blender scenes, Mip-NeRF360, Tanks & Temples, Deep Blending [6], BungeeNeRF, and VR-NeRF datasets, with comparisons against 3D-GS [2] and fast NeRF variants such as Instant-NGP [5]. Key empirical takeaways:
- Better generalization and robustness: while 3D-GS can achieve slightly higher training PSNR (indicating overfitting to views), Scaffold-GS exhibits higher testing PSNR and consistently better generalization to unseen views, particularly in scenes with varying levels of detail or difficult view-dependent effects.
- Significant storage reduction: the anchor+MLP representation requires far less storage than a fully optimized Gaussian cloud. Table comparisons in the paper show storage dropping from hundreds of MB to a few dozen MB on some datasets while keeping comparable fidelity.
- Speed tradeoffs: with frustum culling and opacity filtering, inference speed matches or closely trails 3D-GS, while delivering denser coverage and fewer artifacts.
- Ablations: the paper carefully ablates k, filtering strategies, and anchor refinement. Results show that (i) filtering primarily impacts speed, not fidelity; (ii) growing is crucial for recovering from poor SfM initializations; and (iii) pruning tames memory growth without hurting quality when combined with growth.
Why it works and where to be careful
Why it reduces redundancy. Anchors impose spatial structure: instead of letting Gaussians drift to fit each view, anchors constrain where Gaussians are spawned. The view-conditioned decoding then lets local appearance change with viewpoint without proliferating primitives. This balances compactness (few anchors + MLPs) and expressiveness (many view-adaptive Gaussians when necessary).
View-adaptivity is key. Many artifacts in 3D-GS stem from baking view effects per Gaussian. By decoding attributes conditioned on camera direction and distance, Scaffold-GS models view-dependent BRDF-like changes implicitly and smoothly.
Limitations and failure modes.
- Dependence on SfM: anchor initialization relies on reasonable SfM points. Extremely poor SfM may require more aggressive growing or additional priors. The authors mitigate this via gradient-based growing but practitioners should still check SfM quality.
- MLP overhead: although the MLPs are small and decoding is limited to visible anchors, for extremely dense anchor grids the decoding cost scales up. Filtering strategies are essential to keep real-time targets.
- Complex materials & lighting: as with most view-synthesis methods that use image supervision, disentangling lighting and view-dependent material responses remains implicit and can lead to hallucinations under strong illumination changes.
Practical tips for researchers & engineers
- Start with quality SfM: use COLMAP and visually inspect point clouds. If SfM is noisy, increase growing sensitivity early in training.
- Tune $\epsilon$ to scene scale: the voxel size controls anchor density; prefer slightly coarse anchors and rely on growing to fill missing areas.
- Set the opacity threshold ($\tau_\alpha$) conservatively: too high removes thin structures; too low wastes rendering time. Use the paper's recommended sweep.
- Batch decode MLPs: implement the attribute MLPs so that all decoded attribute vectors are produced in a single fused forward pass per anchor to maximize GPU efficiency.
- Combine several evaluation metrics: PSNR, SSIM, LPIPS, and storage/FPS together tell a fuller story (the paper reports all).
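A tiny PSNR helper is often all you need to start logging; SSIM [7] and LPIPS should come from their standard reference implementations, and storage (MB of anchors plus MLP weights) and FPS should be reported alongside.

```python
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, data_range: float = 1.0) -> float:
    """PSNR in dB between rendered and ground-truth images (float arrays in [0, 1])."""
    mse = float(np.mean((pred - gt) ** 2))
    return 10.0 * np.log10(data_range ** 2 / mse)
```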
Conclusion
Scaffold-GS presents a principled and practical strategy to reconcile the competing demands of fidelity, compactness and speed in Gaussian-based neural rendering. Its anchor scaffold enforces geometry-aware allocation of primitives while the view-conditioned MLP decoders provide the flexibility to produce view-dependent, multi-scale appearance without exploding the number of stored primitives. Empirically, Scaffold-GS achieves strong generalization, reduced storage, and real-time capable inference using standard rasterization pipelines [1].
For applied teams building real-time view synthesis systems, Scaffold-GS is worth implementing as a next step beyond vanilla Gaussian splatting: it is especially attractive when storage, robustness to viewpoint change, and multi-scale scene coverage are priorities.
References
[1] Lu, T., Yu, M., Xu, L., Xiangli, Y., Wang, L., Lin, D., & Dai, B. (2023). Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering. arXiv preprint arXiv:2312.00109.
[2] Kerbl, B., Kopanas, G., Leimkühler, T., & Drettakis, G. (2023). 3D Gaussian Splatting for Real-Time Radiance Field Rendering. ACM Transactions on Graphics (ToG), 42(4), Article 139.
[3] Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2020). NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis. Communications of the ACM, 65(1), 99-106.
[4] Schönberger, J. L., & Frahm, J.-M. (2016). Structure-from-Motion Revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4104-4113.
[5] Müller, T., Evans, A., Schied, C., & Keller, A. (2022). Instant Neural Graphics Primitives with a Multiresolution Hash Encoding. ACM Transactions on Graphics (ToG), 41(4), Article 102.
[6] Hedman, P., Philip, J., Price, T., Frahm, J.-M., Drettakis, G., & Brostow, G. (2018). Deep Blending for Free-Viewpoint Image-Based Rendering. ACM Transactions on Graphics (ToG), 37(6), Article 257.
[7] Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Transactions on Image Processing, 13(4), 600-612.