
Automated music video generation using multi-level feature-based segmentation

Jong-Chul Yoon, In-Kwon Lee, Siwoo Byun

pp. 385-401

The expansion of the home video market has created demand for video-editing tools that allow ordinary people to assemble videos from short clips. However, professional skills are still necessary to create a music video, which requires the video stream to be synchronized with pre-composed music. Because the music and the video are produced in separate environments, even a professional typically needs a number of trials to obtain a satisfactory synchronization, something most amateurs cannot achieve.

Our aim is to extract a sequence of clips from a video automatically and assemble them to match a piece of music. Previous authors [8, 9, 16] have approached this problem by trying to synchronize passages of music with arbitrary frames in each video clip using predefined feature rules. However, each shot in a video is an artistic statement by the video-maker, and we want to preserve the coherence of the video-maker's intentions as far as possible.
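The chapter itself details the segmentation and matching pipeline; as a rough, hypothetical illustration of the general idea only (not the authors' actual algorithm), the Python sketch below reduces music and video to synthetic one-dimensional feature curves, segments each curve where the feature changes sharply, and greedily assigns whole video shots to music segments by duration and feature similarity, so shot boundaries are never broken. All function names, thresholds, and data here are invented placeholders.

```python
# Toy sketch of feature-based segmentation and shot-to-music matching.
# Everything below is illustrative and assumed, not taken from the chapter.
import numpy as np

def segment_boundaries(curve, threshold):
    """Split a 1-D feature curve wherever the jump between consecutive
    samples exceeds `threshold` (a deliberately simple boundary detector)."""
    jumps = np.flatnonzero(np.abs(np.diff(curve)) > threshold) + 1
    return [0, *jumps.tolist(), len(curve)]

def segments(curve, threshold):
    """Return (start, end, mean feature level) for each detected segment."""
    b = segment_boundaries(curve, threshold)
    return [(s, e, float(curve[s:e].mean())) for s, e in zip(b, b[1:])]

def match_shots_to_music(music_segs, video_shots):
    """Greedy matching: for each music segment, pick the unused video shot
    whose duration and feature level are closest. Shots are used whole,
    so the video-maker's original cuts are preserved."""
    unused = list(range(len(video_shots)))
    timeline = []
    for ms, me, mlevel in music_segs:
        mdur = me - ms
        def cost(i):
            vs, ve, vlevel = video_shots[i]
            return abs((ve - vs) - mdur) + abs(vlevel - mlevel)
        best = min(unused, key=cost)
        unused.remove(best)
        timeline.append((ms, me, best))
        if not unused:
            break
    return timeline

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for real feature curves, e.g. audio energy for
    # the music and frame-to-frame motion for the video.
    music_curve = np.repeat(rng.uniform(0, 1, 6), rng.integers(20, 60, 6))
    video_curve = np.repeat(rng.uniform(0, 1, 8), rng.integers(15, 70, 8))
    music_segs = segments(music_curve, 0.1)
    video_shots = segments(video_curve, 0.1)
    for ms, me, shot in match_shots_to_music(music_segs, video_shots):
        print(f"music segment [{ms:4d},{me:4d}) -> video shot {shot}")
```

A greedy assignment is the simplest reasonable choice for a sketch like this; a full system could instead optimize the whole timeline jointly, and would compute the feature curves from actual audio and frame data rather than synthetic arrays.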

Publication details

DOI: 10.1007/978-0-387-89024-1_17

Full citation:

Yoon, J., Lee, I., Byun, S. (2009). Automated music video generation using multi-level feature-based segmentation, in B. Furht (ed.), Handbook of Multimedia for Digital Entertainment and Arts, Dordrecht: Springer, pp. 385-401.
