| Title |
Diffusion Model-based Generative Event Grid Representation Prediction for Image Restoration |
| Authors |
맹주완(Ju-Wan Maeng) ; 오진선(Jin-Seon Oh) ; 김태현(Tae-Hyun Kim) ; 이수찬(Soo-Chahn Lee) |
| DOI |
https://doi.org/10.5573/ieie.2025.62.11.55 |
| Keywords |
Image deblurring; Event camera; Joint deblurring and low-light enhancement; Diffusion model
| Abstract |
Although many prior works on image restoration, particularly deblurring, have leveraged event-grid representations synthesized from event camera recordings to achieve impressive results, these approaches cannot be applied in smartphone or conventional digital camera environments that lack event sensors. One might consider using an event simulator, but in real-world test scenarios where only a single blurred image is available (without any sequence of sharp frames), simulator-based approaches are also infeasible. This limitation underscores the need for a model that predicts an event-grid representation directly from the degraded image. In this paper, we train a model to generate event-grid representations directly from low-quality (especially blurred) images, and we insert this generative module upstream of existing deblurring or joint deblurring and low-light enhancement networks for end-to-end training, thereby improving restoration performance over the conventional backbones alone. We demonstrate that even in environments without event cameras, the event data synthesized by our grid-channel generation model provides substantial benefits across a variety of image restoration tasks.
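| Note |
To make the described pipeline concrete, below is a minimal PyTorch sketch of the overall architecture the abstract outlines: a generative module predicts an event-grid representation from a single blurred image, and that prediction is concatenated with the image and fed to an existing restoration backbone, with both parts trained end to end. All class names, channel counts, and the simple convolutional stacks are illustrative assumptions, not the paper's implementation; in particular, EventGridPredictor is a hypothetical stand-in for the paper's diffusion-based generator, and RestorationBackbone stands in for any existing deblurring or joint deblurring and low-light enhancement network.

```python
# Sketch only: illustrates inserting a generative event-grid module upstream
# of a restoration backbone for end-to-end training. Not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EventGridPredictor(nn.Module):
    """Hypothetical stand-in for the diffusion-based event-grid generator."""

    def __init__(self, in_ch: int = 3, event_bins: int = 5, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, event_bins, 3, padding=1),
        )

    def forward(self, blurred: torch.Tensor) -> torch.Tensor:
        # Predict a voxel-grid-like event representation of shape (B, bins, H, W).
        return self.net(blurred)


class RestorationBackbone(nn.Module):
    """Placeholder for an existing restoration network that consumes the
    blurred image together with an event-grid representation."""

    def __init__(self, in_ch: int = 3, event_bins: int = 5, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch + event_bins, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, in_ch, 3, padding=1),
        )

    def forward(self, blurred: torch.Tensor, event_grid: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([blurred, event_grid], dim=1))


class EventAugmentedRestoration(nn.Module):
    """Generative event-grid module inserted upstream of the backbone."""

    def __init__(self):
        super().__init__()
        self.event_predictor = EventGridPredictor()
        self.backbone = RestorationBackbone()

    def forward(self, blurred: torch.Tensor):
        event_grid = self.event_predictor(blurred)  # no event camera needed
        restored = self.backbone(blurred, event_grid)
        return restored, event_grid


if __name__ == "__main__":
    model = EventAugmentedRestoration()
    blurred = torch.rand(1, 3, 64, 64)  # single blurred RGB frame
    sharp = torch.rand(1, 3, 64, 64)    # ground-truth sharp frame
    restored, event_grid = model(blurred)
    # End-to-end restoration loss; a real setup would likely add an
    # event-grid supervision term when simulated grids exist at train time.
    loss = F.l1_loss(restored, sharp)
    loss.backward()
    print(restored.shape, event_grid.shape, float(loss))
```

Because the event-grid predictor sits inside the computation graph, the restoration loss back-propagates through it, so the generated grids are shaped by the downstream task rather than trained in isolation.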