REFERENCES

[1] H.-Y. Shum, S. B. Kang, and S.-C. Chan, ``Survey of image-based representations and compression techniques,'' IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, no. 11, pp. 1020-1037, Nov. 2003.
[2] R. Jensen, A. Dahl, G. Vogiatzis, E. Tola, and H. Aanæs, ``Large scale multi-view stereopsis evaluation,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 406-413, Jun. 2014.
[3] Y. Yao, Z. Luo, S. Li, J. Zhang, Y. Ren, and L. Zhou, ``BlendedMVS: A large-scale dataset for generalized multi-view stereo networks,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 1790-1799, Jun. 2020.
[4] A. Knapitsch, J. Park, Q.-Y. Zhou, and V. Koltun, ``Tanks and temples: Benchmarking large-scale scene reconstruction,'' ACM Transactions on Graphics, vol. 36, no. 4, pp. 1-13, Jul. 2017.
[5] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng, ``NeRF: Representing scenes as neural radiance fields for view synthesis,'' Communications of the ACM, vol. 65, no. 1, pp. 99-106, Dec. 2021.
[6] S. Choi, Q.-Y. Zhou, S. Miller, and V. Koltun, ``A large dataset of object scans,'' arXiv preprint arXiv:1602.02481, 2016.
[7] A. Tewari, O. Fried, J. Thies, V. Sitzmann, S. Lombardi, et al., ``State of the art on neural rendering,'' Computer Graphics Forum, vol. 39, no. 2, pp. 701-727, Jul. 2020.
[8] L. Liu, J. Gu, K. Z. Lin, T.-S. Chua, and C. Theobalt, ``Neural sparse voxel fields,'' Advances in Neural Information Processing Systems, vol. 33, pp. 15651-15663, 2020.
[9] C. Sun, M. Sun, and H.-T. Chen, ``Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction,'' Proc. of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5459-5469, Jun. 2022.
[10] S. Fridovich-Keil, A. Yu, M. Tancik, W. Chen, B. Recht, and A. Kanazawa, ``Plenoxels: Radiance fields without neural networks,'' Proc. of IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5501-5510, Jun. 2022.
[11] S. J. Garbin, M. Kowalski, M. Johnson, J. Shotton, and J. Valentin, ``FastNeRF: High-fidelity neural rendering at 200FPS,'' Proc. of IEEE/CVF International Conference on Computer Vision (ICCV), pp. 14346-14355, Oct. 2021.
[12] C. Reiser, S. Peng, Y. Liao, and A. Geiger, ``KiloNeRF: Speeding up neural radiance fields with thousands of tiny MLPs,'' Proc. of IEEE/CVF International Conference on Computer Vision (ICCV), pp. 14335-14345, Oct. 2021.
[13] A. Chen, Z. Xu, A. Geiger, J. Yu, and H. Su, ``TensoRF: Tensorial radiance fields,'' Proc. of European Conference on Computer Vision, vol. 13692, pp. 333-350, Nov. 2022.
[14] T. Müller, A. Evans, C. Schied, and A. Keller, ``Instant neural graphics primitives with a multiresolution hash encoding,'' ACM Transactions on Graphics, vol. 41, no. 4, pp. 1-15, Jul. 2022.
[15] A. Yu, R. Li, M. Tancik, H. Li, R. Ng, and A. Kanazawa, ``PlenOctrees for real-time rendering of neural radiance fields,'' Proc. of IEEE/CVF International Conference on Computer Vision (ICCV), pp. 5752-5761, Oct. 2021.
[16] Z. Chen, T. Funkhouser, P. Hedman, and A. Tagliasacchi, ``MobileNeRF: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 16569-16578, Jun. 2023.
[17] A. Chen, Z. Xu, F. Zhao, X. Zhang, F. Xiang, and J. Yu, ``MVSNeRF: Fast generalizable radiance field reconstruction from multi-view stereo,'' Proc. of IEEE/CVF International Conference on Computer Vision (ICCV), pp. 14124-14133, Oct. 2021.
[18] P. Wang, Y. Liu, Z. Chen, L. Liu, Z. Liu, and T. Komura, ``F$^2$-NeRF: Fast neural radiance field training with free camera trajectories,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 4150-4159, Jun. 2023.
[19] K. Deng, A. Liu, J.-Y. Zhu, and D. Ramanan, ``Depth-supervised NeRF: Fewer views and faster training for free,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 12882-12891, Jun. 2022.
[20] R. Clark, ``Volumetric bundle adjustment for online photorealistic scene capture,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 6124-6132, Jun. 2022.
[21] B. Kerbl, G. Kopanas, T. Leimkühler, and G. Drettakis, ``3D Gaussian splatting for real-time radiance field rendering,'' ACM Transactions on Graphics, vol. 42, no. 4, pp. 1-14, Jul. 2023.
[22] B. Roessle, N. Müller, L. Porzi, S. R. Bulò, P. Kontschieder, and M. Nießner, ``GANeRF: Leveraging discriminators to optimize neural radiance fields,'' arXiv preprint arXiv:2306.06044, 2023.
[23] J. Kulhanek and T. Sattler, ``Tetra-NeRF: Representing neural radiance fields using tetrahedra,'' arXiv preprint arXiv:2304.09987, 2023.
[24] D. Lee, M. Lee, C. Shin, and S. Lee, ``Deblurred neural radiance field with physical scene priors,'' arXiv preprint arXiv:2211.12046, 2022.
[25] F. Warburg, E. Weber, M. Tancik, A. Holynski, and A. Kanazawa, ``Nerfbusters: Removing ghostly artifacts from casually captured NeRFs,'' arXiv preprint arXiv:2304.10532, 2023.
[26] K. Zhou, W. Li, Y. Wang, T. Hu, N. Jiang, and X. Han, ``NeRFLiX: High-quality neural view synthesis by learning a degradation-driven inter-viewpoint MiXer,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 12363-12374, Jun. 2023.
[27] P. Wang, L. Liu, Y. Liu, C. Theobalt, T. Komura, and W. Wang, ``NeuS: Learning neural implicit surfaces by volume rendering for multi-view reconstruction,'' arXiv preprint arXiv:2106.10689, 2021.
[28] J. Y. Zhang, G. Yang, S. Tulsiani, and D. Ramanan, ``NeRS: Neural reflectance surfaces for sparse-view 3D reconstruction in the wild,'' Advances in Neural Information Processing Systems, vol. 34, pp. 29835-29847, 2021.
[29] T. Takikawa, A. Glassner, and M. McGuire, ``A dataset and explorer for 3D signed distance functions,'' Journal of Computer Graphics Techniques, vol. 11, no. 2, Apr. 2022.
[30] L. Yariv, J. Gu, Y. Kasten, and Y. Lipman, ``Volume rendering of neural implicit surfaces,'' Advances in Neural Information Processing Systems, vol. 34, pp. 4805-4815, 2021.
[31] D. Azinovic, R. Martin-Brualla, D. B. Goldman, M. Nießner, and J. Thies, ``Neural RGB-D surface reconstruction,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 6290-6301, Jun. 2022.
[32] R. Liu, R. Wu, B. V. Hoorick, P. Tokmakov, S. Zakharov, and C. Vondrick, ``Zero-1-to-3: Zero-shot one image to 3D object,'' Proc. of IEEE/CVF International Conference on Computer Vision (ICCV), pp. 9298-9309, Oct. 2023.
[33] J. Tang, T. Wang, B. Zhang, T. Zhang, R. Yi, L. Ma, and D. Chen, ``Make-It-3D: High-fidelity 3D creation from a single image with diffusion prior,'' arXiv preprint arXiv:2303.14184, 2023.
[34] J. Ling, Z. Wang, and F. Xu, ``ShadowNeuS: Neural SDF reconstruction by shadow ray supervision,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 175-185, 2023.
[35] R. A. Rosu and S. Behnke, ``PermutoSDF: Fast multi-view reconstruction with implicit surfaces using permutohedral lattices,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 8466-8475, 2023.
[36] C. Wang, M. Chai, M. He, D. Chen, and J. Liao, ``CLIP-NeRF: Text-and-image driven manipulation of neural radiance fields,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3835-3844, 2022.
[37] C. Wang, R. Jiang, M. Chai, M. He, D. Chen, and J. Liao, ``NeRF-Art: Text-driven neural radiance fields stylization,'' IEEE Transactions on Visualization and Computer Graphics, vol. 30, no. 8, pp. 4983-4996, 2024.
[38] A. Radford et al., ``Learning transferable visual models from natural language supervision,'' Proc. of the 38th International Conference on Machine Learning, vol. 139, pp. 8748-8763, Jul. 2021.
[39] Y.-H. Huang, Y. He, Y.-J. Yuan, Y.-K. Lai, and L. Gao, ``StylizedNeRF: Consistent 3D scene stylization as stylized NeRF via 2D-3D mutual learning,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 18342-18352, Jun. 2022.
[40] X. Li, Z. Cao, H. Sun, J. Zhang, K. Xian, and G. Lin, ``3D cinemagraphy from a single image,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 4595-4605, Jun. 2023.
[41] C. Jambon, B. Kerbl, G. Kopanas, S. Diolatzis, T. Leimkühler, and G. Drettakis, ``NeRFshop: Interactive editing of neural radiance fields,'' Proc. of the ACM on Computer Graphics and Interactive Techniques, vol. 6, no. 1, pp. 1-21, Mar. 2023.
[42] A. Haque, M. Tancik, A. A. Efros, A. Holynski, and A. Kanazawa, ``Instruct-NeRF2NeRF: Editing 3D scenes with instructions,'' arXiv preprint arXiv:2303.12789, 2023.
[43] K. Kania, K. M. Yi, M. Kowalski, T. Trzciński, and A. Tagliasacchi, ``CoNeRF: Controllable neural radiance fields,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 18623-18632, Jun. 2022.
[44] Z. Li, T. Müller, A. Evans, R. H. Taylor, M. Unberath, and M.-Y. Liu, ``Neuralangelo: High-fidelity neural surface reconstruction,'' Proc. of IEEE Conference on Computer Vision and Pattern Recognition, pp. 8456-8465, Jun. 2023.