Fenggen Yu (余锋根)

I am an Applied Scientist at Amazon. I obtained my Ph.D. from the GrUVi Lab, School of Computing Science at Simon Fraser University, under the supervision of Prof. Hao (Richard) Zhang. I received my bachelor's and master's degrees from Nanjing University, under the supervision of Associate Prof. Yan Zhang. My research interests include foundation models for 3D shape analysis/segmentation/labeling/grounding, 3D reconstruction/generation/abstraction/editing, and image and video generation/editing, spanning spatial intelligence, world models, functional AI, and embodied AI.

News


Jan 24, 2026. I served on the SIGGRAPH 2026 Technical Papers Committee.

Jan 1, 2026. One paper accepted to ICASSP 2026.

Nov 24, 2025. I joined Amazon Prime Video & MGM Studios to work on image/video/3D/4D generation projects.

Aug 1, 2025. One paper accepted to ACM MM 2025.

Jan 24, 2025. I served on the SIGGRAPH 2025 Technical Papers Committee.

Sep 23, 2024. I joined Amazon WWW Store as an Applied Scientist to work on 3D vision projects.

Sep 15, 2024. I gave a talk, Learning Structured Representations of 3D CAD Models, at the GAMES Webinar.

Aug 15, 2024. I served on the AAAI 2025 Committee.

July 1, 2024. Three papers accepted to ECCV 2024.

Photo in Hollywood, LA, 2026.

Selected Publications

(More here)

Hao Sun, Fenggen Yu, Huiyao Xu, Tao Zhang, Changqing Zou.

LL-Gaussian: Low-Light Scene Reconstruction and Enhancement via Gaussian Splatting for Novel View Synthesis.

[Paper], Accepted to [ACM MM 2025].

We propose LL-Gaussian, a novel framework for 3D reconstruction and enhancement from low-light sRGB images, enabling pseudo normal-light novel view synthesis.

Fenggen Yu, Yiming Qian, Xu Zhang, Francisca Gil-Ureta, Brian Jackson, Eric Bennett, and Hao (Richard) Zhang.

DPA-Net: Structured 3D Abstraction from Sparse Views via Differentiable Primitive Assembly.

[Paper], Accepted to [ECCV 2024].

We present a differentiable rendering framework to learn structured 3D abstractions in the form of primitive assemblies from sparse RGB images capturing a 3D object.

Fenggen Yu, Qimin Chen, Maham Tanveer, Ali Mahdavi-Amiri, and Hao (Richard) Zhang.

D2CSG: Unsupervised Learning of Compact CSG Trees with Dual Complements and Dropouts.

[Paper], Accepted to [NeurIPS 2023].

We present D2CSG, a neural model composed of two dual and complementary network branches, with dropouts, for unsupervised learning of compact constructive solid geometry (CSG) representations of 3D CAD shapes.

© All Rights Reserved | Designed by Fenggen Yu