Project number: 2025C-091  Date: 2026/04/02
Research topic: Research on probabilistic machine learning based on stochastic analysis
Researcher: affiliation (at the time), position, name
(Principal investigator) Faculty of Science and Engineering, School of Fundamental Science and Engineering, Professor 笠井 裕之
Summary of research results
Our research outcomes this year cover the following two topics. The first is not directly related to the stochastic regime, but sampling-based walk exploration in graph neural networks can be analyzed probabilistically, which we leave as a future research topic. The second is the optimization of noise combinations in diffusion models.

(1) Graph data, with its structural variability, represents complex real-world phenomena such as chemical compounds, protein structures, and social networks. Traditional Graph Neural Networks (GNNs) primarily rely on the message-passing mechanism, but their expressive power is limited and their predictions lack explainability. To address these limitations, researchers have focused on graph substructures. Subgraph GNNs (SGNNs) compute graph representations from bags of subgraphs to enhance their expressive power. However, they often rely on predefined, algorithm-based sampling strategies that are inefficient. GNN explainers adopt data-driven approaches to generate salient subgraphs that provide explanations. Nevertheless, their explanations are difficult to translate into practical improvements to GNNs. To overcome these issues, we propose a novel self-supervised framework that integrates SGNNs with the generation approach of GNN explainers, named the Reinforcement Walk Exploration SGNN (RWE-SGNN). Our approach features a sampling model trained in an explainer fashion, optimizing subgraphs to enhance model performance. To achieve a data-driven sampling approach, we propose a novel walk exploration process that efficiently extracts important substructures, simplifying the embedding process and avoiding isomorphism issues. Moreover, we prove that our proposed walk exploration process has generation capability equivalent to the traditional subgraph generation process.
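The idea of growing a subgraph by a stepwise walk, rather than sampling a whole subgraph at once, can be illustrated with a minimal sketch. The function `walk_explore`, the toy adjacency list, and the uniform-random stand-in for the learned (reinforcement-trained) selection policy are our illustrative assumptions, not the RWE-SGNN implementation.

```python
import random

def walk_explore(adj, start, num_steps, policy=None, seed=0):
    """Grow a connected subgraph by a walk: at each step, pick one
    frontier node adjacent to the current subgraph and add it.
    `policy` scores candidate nodes; uniform random is a stand-in
    for a learned policy when `policy` is None."""
    rng = random.Random(seed)
    nodes = {start}
    walk = [start]
    for _ in range(num_steps):
        # frontier: neighbours of the current subgraph not yet included
        frontier = sorted({v for u in nodes for v in adj[u]} - nodes)
        if not frontier:
            break  # the walk has absorbed its connected component
        if policy is None:
            nxt = rng.choice(frontier)
        else:
            scores = [policy(v) for v in frontier]
            nxt = frontier[scores.index(max(scores))]
        nodes.add(nxt)
        walk.append(nxt)
    return walk, nodes

# toy graph: a triangle (0,1,2) with a pendant node 3
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
walk, sub = walk_explore(adj, start=0, num_steps=2)
```

Because every added node is adjacent to the subgraph built so far, the extracted node set is connected by construction, which is what lets a walk-based process stand in for general connected-subgraph generation.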

(2) Pretrained diffusion models have demonstrated strong capabilities in zero-shot inverse problem solving by incorporating observation information into the generation process of the diffusion model. However, this incorporation presents an inherent dilemma: excessive integration can disrupt the generative process, while insufficient integration fails to enforce the constraints imposed by the inverse problem. To address this, we propose Noise Combination Sampling, a novel method that synthesizes an optimal noise vector from a noise subspace to approximate the measurement score, replacing the noise term in the standard Denoising Diffusion Probabilistic Model (DDPM) sampling process. This enables conditional information to be naturally embedded in the generation process without relying on stepwise hyperparameter tuning. Our method can be applied to a wide range of inverse problem solvers, including image compression, and, particularly when the number of generation steps T is small, achieves superior performance with negligible computational overhead, significantly improving robustness and stability.
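The core step, choosing a combination of candidate noise vectors that best matches a target direction, can be illustrated as a least-squares projection onto the noise subspace. The function `noise_combination` and the Gaussian vector standing in for the measurement score are our illustrative assumptions; the actual method operates inside the DDPM sampling loop.

```python
import numpy as np

def noise_combination(noises, target):
    """Combine candidate noise vectors to best approximate `target`
    (a stand-in for the measurement score) in the least-squares sense.
    noises: (K, d) array whose rows span the noise subspace.
    Returns the combined vector lying in span(noises)."""
    N = np.asarray(noises)                              # (K, d)
    # solve min_w || N.T @ w - target ||_2 over coefficients w
    coeffs, *_ = np.linalg.lstsq(N.T, target, rcond=None)
    return N.T @ coeffs

rng = np.random.default_rng(0)
d, K = 8, 4                      # ambient dimension, subspace size
noises = rng.standard_normal((K, d))
target = rng.standard_normal(d)  # stand-in for the measurement score
combo = noise_combination(noises, target)
resid = target - combo           # part of the target outside the subspace
```

The combined vector is the orthogonal projection of the target onto the span of the sampled noises, so the residual is orthogonal to every candidate noise; this is what lets a single synthesized noise term carry the conditional information without per-step weight tuning.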