Project Number: 2025E-026  Date: 2026/02/04
Research Topic: Pedestrian Trajectory Prediction with Efficient Interaction between Spatial and Temporal Features
Researcher: Affiliation (at the time) / Position / Name
(Principal Investigator) Faculty of Science and Engineering, Information, Production and Systems Research Center, Research Associate, 李 東晨
Summary of Research Results

I conducted two complementary studies on generalized, deployment-oriented pedestrian trajectory prediction that integrate social scene information and motion priors. The first study, training-free prediction via segmentation-guided path planning, derives a walkable-region representation from scene segmentation and converts it into an occupancy grid or cost map. Future trajectories are generated by classical path planning under feasibility constraints from the environment and simple kinematic consistency cues from short observations, such as heading and speed trends. This eliminates the need for model training or dataset-specific fine-tuning, which is advantageous under data scarcity, domain shift, or tight deployment timelines.
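The report does not specify which classical planner is used; as a minimal sketch under the assumption of a binary walkable grid derived from segmentation (all names and the toy map below are hypothetical), the feasibility-constrained planning step could look like A* search restricted to walkable cells:

```python
import heapq

def astar(walkable, start, goal):
    """A* on a binary walkable grid (1 = walkable, 0 = obstacle).

    A stand-in for the segmentation-derived cost map in the report:
    4-connected moves, unit step cost, Manhattan-distance heuristic.
    Returns a list of (row, col) cells from start to goal, or None.
    """
    rows, cols = len(walkable), len(walkable[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), start)]      # (f = g + h, node)
    came_from = {}
    g_cost = {start: 0}
    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:                # reconstruct path back to start
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and walkable[nr][nc]:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = node
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None  # goal unreachable within the walkable region

# Toy walkable map (as if produced by scene segmentation).
grid = [
    [1, 1, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
]
path = astar(grid, (0, 0), (2, 3))
```

In the full method, the short observed trajectory would additionally bias the goal and step costs toward the pedestrian's current heading and speed; that weighting is omitted here for brevity.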

The second study, Kinematic Temporal VAE for generalized pedestrian prediction, proposes a lightweight temporal generative model that explicitly encodes kinematic and temporal structure. Given a short observed trajectory, the model learns a compact latent representation and produces multi-modal future trajectories that preserve motion realism, for example, smoothness and physically consistent progression. The design emphasizes efficient inference and robust generalization across heterogeneous scenes and observation conditions, and it supports uncertainty-aware prediction by sampling diverse plausible futures.

Overall, these works bridge classical feasibility priors and probabilistic sequence generation. The segmentation-guided planning approach provides strong environmental validity without training, while the kinematic temporal VAE captures uncertainty and behavioral diversity with an efficient learned model. Together, they offer a practical toolkit for pedestrian prediction in autonomous driving and mobile robotics, improving reliability across varying social scene contexts and deployment settings.