Video diffusion models have advanced rapidly in recent years as a result of a series of architectural innovations (e.g., diffusion transformers) and the use of novel training objectives (e.g., flow matching). In contrast, less attention has been paid to improving the feature representation power of such models. In this work, we argue that video diffusion model training can benefit from aligning the intermediate features of the video generator with features obtained from pre-trained vision encoders. We propose a new metric and conduct an in-depth analysis of various vision encoders to evaluate their discriminability and temporal consistency, thereby assessing their suitability for video feature alignment. Based on this analysis, we present Align4Gen, a novel multi-feature fusion and alignment method integrated into video diffusion model training. We evaluate Align4Gen on both unconditional and class-conditional video generation tasks and show that it improves video generation quality as quantified by various metrics.
We present Align4Gen, a training-time framework designed to enhance video diffusion models by leveraging the rich and diverse representations of pre-trained vision encoders. To guide this design, we first introduce a novel metric that measures both the discriminability and temporal consistency of visual features. Through extensive analysis, we find that features from image-based encoders not only provide stronger semantic signals but also exhibit greater temporal invariance than those from video-based models. Additionally, we observe that different image encoders capture complementary frequency information, with some focusing on coarse structures and others emphasizing fine-grained details. Based on these insights, Align4Gen introduces a feature fusion and alignment strategy that integrates multi-frequency features from multiple image encoders into the training process. Aligning the intermediate features of the video diffusion model with those extracted from frozen pre-trained encoders encourages the model to learn temporally consistent and semantically meaningful representations. This alignment is applied only during training and leaves the inference pipeline unchanged. Align4Gen is evaluated on both unconditional and class-conditional video generation tasks and consistently improves the temporal quality, visual fidelity, and motion smoothness of the generated videos.
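To make the alignment idea concrete, the following PyTorch sketch shows one plausible form of such a training-time objective: intermediate features of the video diffusion backbone are passed through a trainable projection head and encouraged, via a cosine-similarity loss, to match fused features from frozen image encoders. The class name, the per-encoder linear adapters, the averaging fusion, and the specific loss form are illustrative assumptions, not the exact method described in this section.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureAlignmentLoss(nn.Module):
    """Hypothetical sketch of a feature-alignment objective: project intermediate
    diffusion features and maximize their cosine similarity with fused features
    from frozen pre-trained image encoders."""

    def __init__(self, diff_dim: int, enc_dims: list[int], proj_dim: int = 1024):
        super().__init__()
        # Trainable projection from the diffusion backbone's hidden size.
        self.proj = nn.Sequential(
            nn.Linear(diff_dim, proj_dim), nn.SiLU(), nn.Linear(proj_dim, proj_dim)
        )
        # One linear adapter per frozen encoder so their features can be fused
        # in a shared space (adapter design is an assumption).
        self.adapters = nn.ModuleList([nn.Linear(d, proj_dim) for d in enc_dims])

    def fuse(self, enc_feats: list[torch.Tensor]) -> torch.Tensor:
        # enc_feats: list of (B*T, N, D_i) token features from the frozen encoders,
        # assumed to be resampled to a common token count N beforehand.
        fused = [adapter(f) for adapter, f in zip(self.adapters, enc_feats)]
        return torch.stack(fused, dim=0).mean(dim=0)  # simple averaging fusion (assumption)

    def forward(self, diff_feats: torch.Tensor, enc_feats: list[torch.Tensor]) -> torch.Tensor:
        # diff_feats: (B*T, N, D_diff) intermediate features of the video diffusion model.
        h = self.proj(diff_feats)
        target = self.fuse(enc_feats).detach()  # encoders stay frozen
        # Negative cosine similarity averaged over tokens and frames.
        return -F.cosine_similarity(h, target, dim=-1).mean()
```

In such a setup, the total training objective would be the standard diffusion (or flow-matching) loss plus this alignment term scaled by a weighting coefficient; since the projection head and adapters are only used to compute the auxiliary loss, they can be discarded at inference time, keeping the generation pipeline unchanged.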