Project number: 2024C-444
Date: 2024/12/17
Research title: Development of design support technology for comic works
| | Affiliation (at the time) | Position | Name |
|---|---|---|---|
| (Principal Investigator) | Faculty of Science and Engineering, School of Fundamental Science and Engineering | Lecturer | 福里 司 |
- Summary of research results
- Converting video sequences into comics is a powerful way to understand their stories efficiently, and the varied sizes and shapes of comic panels give readers a unique experience. However, unlike comics, video sequences carry temporal information about the story, so important information may be lost when comics are made from them. Thus, in recent years, comic designers have extended traditional comics by pasting video segments of a few seconds onto each panel instead of static images; the result is called a "dynamic" comic. The advantage of dynamic comics is that characters' physical and emotional movements, as well as the camera work in videos, can be represented easily. In addition, dynamic comics create a multimodal narrative space in which time and space jointly influence the unfolding of the story. However, although various algorithms have been proposed for automatically generating standard (i.e., static) comics from image/video sequences, making dynamic comics remains challenging. The main reason is that it requires special skills (i.e., image/video manipulation while imagining compositions and final designs), and users must repeatedly (1) switch between video editing tools and image editing tools and (2) check the results until they are satisfied, which is a time-consuming and tedious process. Moreover, state-of-the-art artificial intelligence methods fall short in interpreting users' detailed intentions in video editing and comic layout creation, and the generated results may lack precise controllability. Therefore, this research aims to reduce the manual effort required to make dynamic comics. We propose a first-step system that enables users, even non-professionals, to interactively design dynamic comics from video sequences by integrating a simple video editor into a comic layout editor.
First, users manually prepare video segments in the video editor and build a comic layout using a parametric model. The system then automatically assigns the video segments to the panels. Our system can also incorporate various existing methods, such as automatic video segmentation.
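The report does not include an implementation, but the two-step workflow above (a parametric layout model followed by automatic segment-to-panel assignment) can be sketched roughly as follows. This is a minimal illustrative sketch under assumed details: the simple grid-based parametric model, the reading-order assignment rule, and all names (`Panel`, `parametric_layout`, `assign_segments`) are assumptions for illustration, not the system's actual method.

```python
from dataclasses import dataclass

@dataclass
class Panel:
    """A rectangular comic panel on the page (position and size in pixels)."""
    x: float
    y: float
    w: float
    h: float

def parametric_layout(rows: int, cols: int,
                      page_w: float = 800, page_h: float = 1200,
                      gutter: float = 10) -> list[Panel]:
    """One possible parametric layout model: a uniform grid whose
    panel sizes are derived from a few parameters (rows, cols, gutter)."""
    pw = (page_w - gutter * (cols + 1)) / cols
    ph = (page_h - gutter * (rows + 1)) / rows
    return [
        Panel(gutter + c * (pw + gutter), gutter + r * (ph + gutter), pw, ph)
        for r in range(rows)
        for c in range(cols)
    ]

def assign_segments(segments: list[str], panels: list[Panel]) -> list[tuple[Panel, str]]:
    """Assign video segments to panels in reading order, one per panel.
    (zip truncates at the shorter list if the counts differ.)"""
    return list(zip(panels, segments))
```

A user would first prepare segments (e.g., via an automatic segmentation method), generate a layout such as `parametric_layout(2, 2)`, and then call `assign_segments` to fill the panels; a real system would preview each panel's segment as a looping clip rather than a still image.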