Chen Yiwen

I am Yiwen, a second-year Ph.D. student at S-Lab, Nanyang Technological University, focusing on 3D AIGC. I am fortunate to be supervised by Prof. Guosheng Lin. I received my Bachelor's degree in Computer Science from Beihang University in 2022, where I spent a wonderful time working with Prof. Si Liu.

Email  /  Google Scholar  /  GitHub

profile photo

Research

My research interests lie in Gaussian Splatting, Diffusion Models, Text-to-3D, and 3D GANs. I am eager to explore the vast potential of AIGC.

Learn to Optimize Denoising Scores for 3D Generation: A Unified and Improved Diffusion Prior on NeRF and 3D Gaussian Splatting
Xiaofeng Yang*, Yiwen Chen*, Cheng Chen, Chi Zhang, Yi Xu, Xulei Yang, Fayao Liu, Guosheng Lin
arXiv, 2023
project page / code / arXiv

A unified framework that enhances diffusion priors for 3D generation tasks.

GaussianEditor: Swift and Controllable 3D Editing with Gaussian Splatting
Yiwen Chen*, Zilong Chen*, Chi Zhang, Feng Wang, Xiaofeng Yang, Yikai Wang, Zhongang Cai, Lei Yang, Huaping Liu, Guosheng Lin
CVPR, 2024
project page / code / arXiv

GaussianEditor enables controllable, diverse, and interactive high-resolution 3D editing, requiring only 2-7 minutes and 10-20 GB of GPU memory on a single A6000 GPU.

IT3D: Improved Text-to-3D Generation with Explicit View Synthesis
Yiwen Chen*, Chi Zhang*, Xiaofeng Yang, Zhongang Cai, Gang Yu, Lei Yang, Guosheng Lin
AAAI, 2024
project page / arXiv

IT3D tackles the oversaturation and unrealistic appearance problems of text-to-3D methods through explicit view synthesis.

StyleAvatar3D: Leveraging Image-Text Diffusion Models for High-Fidelity 3D Avatar Generation
Chi Zhang, Yiwen Chen, Yijun Fu, Zhenglin Zhou, Gang Yu, Billzb Wang, Bin Fu, Tao Chen, Guosheng Lin, Chunhua Shen
arXiv, 2023
project page / arXiv

StyleAvatar3D generates multi-view images of avatars in various styles without the need for real-world datasets.

MoDA: Modeling Deformable 3D Objects from Casual Videos
Chaoyue Song, Tianyi Chen, Yiwen Chen, Jiacheng Wei, Chuan-Sheng Foo, Fayao Liu, Guosheng Lin
arXiv, 2023
project page / code / arXiv

MoDA models the shape, texture, and motion of deformable 3D objects from monocular casual videos.

StyleRF: Zero-shot 3D Style Transfer of Neural Radiance Fields
Kunhao Liu, Fangneng Zhan, Yiwen Chen, Jiahui Zhang, Yingchen Yu, Abdulmotaleb El Saddik, Shijian Lu, Eric Xing
CVPR, 2023
project page / code / arXiv

StyleRF is a 3D style transfer technique that resolves the dilemma among accurate geometry reconstruction, high-quality stylization, and generalization to arbitrary new styles.


Web page design credit to Jon Barron