| Title |
Keypoint-based Re-identification with Part-level Semantic Attributes |
| Authors |
이수빈 (Subin Lee); 박진선 (Jinsun Park) |
| DOI |
https://doi.org/10.5573/ieie.2025.62.12.119 |
| Keywords |
Person re-identification; Vision-language model; Keypoint-based feature localization; Semantic attribute embedding; Occlusion handling |
| Abstract |
Person re-identification (Re-ID) remains a challenging task due to occlusion and appearance ambiguity frequently encountered in real-world crowded scenes. To address these issues, we propose a part-level Re-ID framework that integrates semantic attributes using keypoint-based prompts and vision-language models (VLMs). Our method describes each body part using a natural language sentence and embeds the corresponding semantics into the spatial visual features, enabling robust and discriminative representations even under partial occlusion or appearance variation. Specifically, we leverage a CLIP-based pretrained model to select the most relevant attribute sentence for each detected keypoint region and inject it into the corresponding spatial features as semantic guidance during encoding. Experiments conducted on the Occluded-Duke and Market-1501 datasets demonstrate that our method outperforms existing approaches. |
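The abstract's core step, selecting the most relevant attribute sentence for each keypoint region via a CLIP-style similarity and injecting it into the spatial features, can be sketched as below. This is a minimal illustration, not the authors' implementation: the function name `select_and_inject`, the cosine-similarity selection over precomputed text embeddings, and the convex-blend injection with weight `alpha` are all assumptions, since the abstract does not specify the fusion mechanism.

```python
import numpy as np

def select_and_inject(part_feats, text_embs, alpha=0.5):
    """For each keypoint-region visual feature, pick the most similar
    attribute-sentence embedding (cosine similarity, as in CLIP) and
    blend it into the visual feature as semantic guidance.

    Hypothetical sketch: the blend-based injection is an assumption,
    not the paper's actual mechanism.
    """
    # L2-normalize both modalities before computing similarity.
    v = part_feats / np.linalg.norm(part_feats, axis=-1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=-1, keepdims=True)
    sim = v @ t.T                 # (num_parts, num_sentences)
    best = sim.argmax(axis=-1)    # best attribute sentence per keypoint
    # Inject the selected sentence embedding into the visual feature.
    fused = (1.0 - alpha) * v + alpha * t[best]
    return fused, best

# Toy usage with two keypoint regions and three candidate sentences.
part_feats = np.array([[1.0, 0.0], [0.0, 1.0]])
text_embs = np.array([[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]])
fused, best = select_and_inject(part_feats, text_embs)
```

In a real pipeline the `text_embs` would come from a frozen CLIP text encoder applied to part-level attribute sentences (e.g. descriptions of a person's upper body or legs), and `part_feats` from visual features pooled around detected keypoints; occluded keypoints can simply be skipped so that no unreliable region contributes to the final representation.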