
Conference: Papers with proceedings at an international congress

Within the framework of Industry 5.0, affordances enable intuitive and adaptive interactions between operators and their industrial work environments. Accurately perceiving these affordances enhances overall production performance, safety, and operator effectiveness. This paper focuses on the initial step of a larger affordance characterization pipeline: detecting the tools used by operators during manual assembly tasks. To address the challenges of data scarcity and annotation effort in industrial contexts, we train a custom YOLOv9-based deep learning model on an augmented dataset combining real-world and synthetic images, the latter generated automatically from a digital model of an industrial workstation in Unity3D. Through extensive experiments, we varied the dataset size (50–300 images) and the proportion of real-world data in the augmented training sets (0%–50%) to assess their impact on tool detection. Results show that as little as 10% real-world data is sufficient to achieve strong performance across all dataset sizes. A tool-specific analysis reveals that visual characteristics such as size and shape influence detection. These findings highlight the effectiveness of combining synthetic and real data to reduce annotation effort while supporting robust tool detection for affordance characterization.
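The experimental grid described above (dataset sizes of 50–300 images crossed with real-world proportions of 0%–50%) can be sketched as a simple dataset-composition step. The following is a minimal illustration, not the authors' code; all function and variable names are hypothetical.

```python
import random

def build_mixed_dataset(real_images, synthetic_images, total_size, real_fraction, seed=0):
    """Compose a training set of `total_size` images in which `real_fraction`
    of the samples are drawn from the real-world pool and the remainder from
    the synthetic pool (illustrative sketch, not the paper's implementation)."""
    rng = random.Random(seed)
    n_real = round(total_size * real_fraction)
    n_synth = total_size - n_real
    sample = rng.sample(real_images, n_real) + rng.sample(synthetic_images, n_synth)
    rng.shuffle(sample)  # mix real and synthetic samples before training
    return sample

# Hypothetical sweep mirroring the ranges reported in the abstract:
sizes = [50, 100, 150, 200, 250, 300]
real_fractions = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
```

Each (size, fraction) pair would then yield one training set for a separate YOLOv9 training run, allowing the detection metrics to be compared across the grid.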