Title
Graphic Model Generation with Damage Information from Deep Learning-Based UAV
Authors
Yu-Been Kim; Jong-Han Lee
DOI
https://doi.org/10.11112/jksmi.2026.30.2.10
Keywords
UAV; Deep learning; Damage detection; Damage quantification; Localization; Graphic model
Abstract
This study proposes an automated method for generating a damage-integrated graphic model by combining UAV-based deep learning damage detection, quantification, and localization. The proposed framework detects three types of bridge damage (crack, spalling, and exposed rebar) using fine-tuned YOLOv11 models. Among the detected damages, cracks are quantified at the pixel level to measure length and width, while the sizes of spalling and exposed rebar regions are estimated from their bounding box dimensions. The detected damages are then localized by establishing correspondences between the UAV imagery and the graphic model, allowing each damage instance to be visualized at its corresponding location within the model. Compared with conventional BIM-based approaches, which are often complex and computationally heavy, the proposed method uses lightweight graphic models that offer intuitive visualization and rapid damage integration. This approach supports efficient bridge condition assessment and maintenance decision-making, particularly when integrated with large-scale UAV datasets.
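The quantification step in the abstract maps detected pixels and bounding boxes to physical dimensions. The following is a minimal sketch of how such a conversion could work, assuming a standard ground-sampling-distance (GSD) relation; the camera parameters, function names, and the crude crack width estimate (mask area divided by end-to-end extent) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             flight_height_m, image_width_px):
    """Ground sampling distance (m/pixel) from the standard photogrammetric
    relation: GSD = (sensor width * flight height) / (focal length * image width)."""
    return (sensor_width_mm * flight_height_m) / (focal_length_mm * image_width_px)

def quantify_bbox(bbox_xyxy, gsd_m_per_px):
    """Estimate physical length and width (m) of spalling or exposed rebar
    from a detector's bounding box (x1, y1, x2, y2) in pixels."""
    x1, y1, x2, y2 = bbox_xyxy
    return abs(x2 - x1) * gsd_m_per_px, abs(y2 - y1) * gsd_m_per_px

def quantify_crack(mask, gsd_m_per_px):
    """Rough pixel-level crack sizing from a binary mask (illustrative only):
    length = end-to-end extent of crack pixels, width = area / length."""
    ys, xs = np.nonzero(mask)
    length_px = np.hypot(xs.max() - xs.min(), ys.max() - ys.min())
    length_px = max(length_px, 1.0)  # guard against a single-pixel mask
    width_px = mask.sum() / length_px
    return length_px * gsd_m_per_px, width_px * gsd_m_per_px

# Hypothetical camera: 20 mm sensor, 10 mm lens, 10 m standoff, 4000 px image
gsd = ground_sampling_distance(20.0, 10.0, 10.0, 4000)  # 0.005 m/pixel
length_m, width_m = quantify_bbox((100, 50, 180, 90), gsd)
```

In a full pipeline, the bounding boxes and masks would come from the fine-tuned YOLOv11 models, and the resulting dimensions would be attached to the damage markers placed in the graphic model.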