MEAT: Multiview Diffusion Model for Human Generation on Megapixels with Mesh Attention
CVPR 2025
- Yuhan Wang¹
- Fangzhou Hong¹
- Shuai Yang²
- Liming Jiang¹
- Wayne Wu³
- Chen Change Loy¹
¹S-Lab, Nanyang Technological University · ²Peking University · ³UCLA
MEAT is the first human multiview diffusion model that can generate dense, view-consistent multiview images at a resolution of 1024×1024.
Abstract
Multiview diffusion models have shown considerable success in image-to-3D generation for general objects. However, when applied to human data, existing methods have yet to deliver promising results, largely due to the challenges of scaling multiview attention to higher resolutions. In this paper, we explore human multiview diffusion models at the megapixel level and introduce a solution called mesh attention that enables training at 1024×1024 resolution. Using a clothed human mesh as a central coarse geometric representation, the proposed mesh attention leverages rasterization and projection to establish direct cross-view coordinate correspondences. This approach significantly reduces the complexity of multiview attention while maintaining cross-view consistency. Building on this foundation, we devise a mesh attention block and combine it with keypoint conditioning to create our human-specific multiview diffusion model, MEAT. In addition, we present valuable insights into using multiview human motion videos for diffusion training, addressing the longstanding issue of data scarcity. Extensive experiments show that MEAT effectively generates dense, consistent multiview human images at the megapixel level, outperforming existing multiview diffusion methods.
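To make the correspondence idea concrete, below is a minimal PyTorch sketch of the geometric step mesh attention builds on: the clothed human mesh is rasterized into a per-pixel 3D coordinate map for the target view, and each surface point is projected into a source view to locate the pixel holding the matching feature. The tensor shapes and the `project` / `gather_cross_view_features` helpers here are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of cross-view correspondence via rasterization + projection.
# Assumes a rasterized per-pixel 3D coordinate map is already available
# (e.g. from nvdiffrast or PyTorch3D); all names/shapes are illustrative.
import torch
import torch.nn.functional as F

def project(points_3d, K, w2c):
    """Project world-space points (B, H, W, 3) to pixel coords in a source view."""
    B, H, W, _ = points_3d.shape
    pts = points_3d.reshape(B, -1, 3)
    pts_h = torch.cat([pts, torch.ones_like(pts[..., :1])], dim=-1)  # (B, N, 4)
    cam = (pts_h @ w2c.transpose(1, 2))[..., :3]      # world -> camera frame
    pix = cam @ K.transpose(1, 2)                     # camera -> image plane
    xy = pix[..., :2] / pix[..., 2:].clamp(min=1e-6)  # perspective divide
    return xy.reshape(B, H, W, 2)

def gather_cross_view_features(feat_src, xyz_tgt, K_src, w2c_src):
    """Sample source-view features at the pixels where each target-view
    mesh surface point reprojects (the mesh-given cross-view match)."""
    B, C, H, W = feat_src.shape
    xy = project(xyz_tgt, K_src, w2c_src)             # (B, H, W, 2)
    grid = torch.stack([xy[..., 0] / (W - 1),         # normalize to [-1, 1]
                        xy[..., 1] / (H - 1)], dim=-1) * 2 - 1
    return F.grid_sample(feat_src, grid, align_corners=True)  # (B, C, H, W)

# Toy shapes standing in for rasterizer output and diffusion features.
B, C, H, W = 1, 8, 64, 64
feat_src = torch.randn(B, C, H, W)    # source-view feature map
xyz_tgt = torch.randn(B, H, W, 3)     # target-view 3D coordinate map
K = torch.tensor([[100., 0., 32.],
                  [0., 100., 32.],
                  [0., 0., 1.]]).expand(B, 3, 3)
w2c = torch.eye(4).expand(B, 4, 4)
matched = gather_cross_view_features(feat_src, xyz_tgt, K, w2c)
```

With these matches in hand, attention at each pixel can be restricted to its geometrically corresponding locations in the other views rather than attending densely across all pixels of all views, which is what keeps multiview attention tractable at 1024×1024.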
Comparison with Multiview Diffusion Methods
Comparison with Monocular Reconstruction Methods
Mesh Attention for Cross-view Consistency Preservation