FUSION – Framework for Universal Software Integration in Open robotics


- Partners: Assystem, CESI, Conscience Robotics.
- Call for projects: Regionalized i-Démo
- CESI project budget: €564k
- Project launch: November 14, 2023
- Project duration: 48 months

The explosion of robotics and its uses has led robot manufacturers to develop their own programming software to equip their machines with intelligent capabilities. However, these programming environments have certain limitations: they are very often proprietary and therefore do not allow access to lower programming layers, and interoperability between different robots is difficult since each robot has its own programming environment.
To overcome these constraints, robotic middleware emerged in the early 2000s. The concept is simple: robotic middleware is a software overlay that can be installed on an existing operating system (such as Linux or Windows) and natively contains generic tools and libraries to facilitate the programming of robots’ intelligence capabilities.
In 2015, Magyar et al. published a comparative study of the four main robotics middlewares: RT-Middleware, ROS, OPRoS, and Orocos. A similar but more comprehensive study was undertaken by Sahni et al. in 2019: the authors identified 14 main middlewares and, for each, referenced the natively integrated tools and libraries, those that are not natively integrated, the supported operating system and programming language, and the data transmission mode (synchronous/asynchronous).
Of all these, ROS (Robot Operating System), developed in 2009 by Quigley et al. as part of the STAIR project, is the most widely used middleware in the scientific community and probably within companies. Despite its extreme popularity, ROS is only suitable for users with a high level of expertise in robotics and programming: it has no graphical interface, all operations are performed from the command line, and it does not currently integrate the new eXtended Reality (XR) technologies (virtual reality, augmented reality, mixed reality) that open up new possibilities in human-robot interaction.
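As a purely illustrative example (not taken from the FUSION project), the canonical "talker" node from the ROS 1 tutorials shows the code- and command-line-centric workflow involved: even this minimal publisher has to be written in Python or C++ and launched from a terminal (roscore, then rosrun).

    # Illustrative ROS 1 node (rospy): publishes a string message once per second.
    # Node and topic names are the ones used in the official ROS tutorials.
    import rospy
    from std_msgs.msg import String

    def talker():
        pub = rospy.Publisher('chatter', String, queue_size=10)
        rospy.init_node('talker', anonymous=True)
        rate = rospy.Rate(1)  # 1 Hz
        while not rospy.is_shutdown():
            pub.publish(String(data='hello'))
            rate.sleep()

    if __name__ == '__main__':
        try:
            talker()
        except rospy.ROSInterruptException:
            pass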
It is within this context that the FUSION project is positioned, whose main objective is to democratize the use of robotics by bringing about a real revolution that puts the user at the center of the system, through:
- The introduction of XR for robotic mission design and remote robot operation via digital twins,
- The reuse and sharing of software building blocks available to all,
- An innovative approach to robotic perception using semantic maps to update the digital twin, making robots increasingly autonomous.
Our ambition is to modernize the industry by making it possible for any user to use a robotic system supported by Artificial Intelligence and XR.
Our objectives are therefore as follows:
- Provide designers with an integrated development environment (or IDE) that allows them to intuitively create and share high-level robotic features (called skills) based on reusable robotic building blocks (a hypothetical skill descriptor is sketched after this list).
- Make skills available to all robot users on a universal store.
- Integrate robotic perception based on environment recognition and the concept of semantic maps for next-generation autonomous navigation and efficient updating of digital twins of the environment.
- Create eXtended Reality (or XR) interfaces associated with digital twins that allow a designer (within the framework of robotic skills) or a business user to program and manipulate the robot through an innovative HMI.
- Demonstrate the relevance of the innovation through a concrete application case implementing all the techniques mentioned above to meet a need of the nuclear industry, with a robot programmed and controlled via XR and the IDE.
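To make the notion of a shareable skill more concrete, the sketch below shows a hypothetical skill descriptor in Python; the field names, building-block identifiers, and values are illustrative assumptions, not the FUSION format.

    # Hypothetical skill descriptor: a high-level skill assembled from reusable
    # building blocks, packaged so it can be published to a shared store.
    from dataclasses import dataclass, field

    @dataclass
    class Skill:
        name: str
        version: str
        building_blocks: list[str]           # identifiers of reused low-level blocks
        parameters: dict = field(default_factory=dict)

    explore_area = Skill(
        name="explore_area",
        version="0.1.0",
        building_blocks=["navigation.waypoint_follow", "perception.obstacle_stop"],
        parameters={"waypoints": [[0.0, 0.0], [2.5, 1.0]], "speed_mps": 0.5},
    )
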
Achievements as of March 31, 2025:
The main achievements of the FUSION project are:
- Continued development of XR interfaces (WP2) (see Figure 1)
o Integration of Place and Object retrieval via API, with logo display.
o Addition of the “Explore the area” skill: predefined multi-point routes for the robot.
o Integration of an ambient camera into the robot’s available streams.
o Securing exchanges with the Conscience API via encryption of commands and responses (an illustrative sketch follows this list).
o Addition of the “3D Scan” skill to generate a representation of the environment.
o Display of visual feedback (validation or red cross) when an entity is moved.
o Implementation of Pause, Stop, and Resume actions during movement.
o Creation of the “Remove” skill to delete objects, places, and entities.
o Refactoring of several modules to improve readability and maintainability.
o Receipt and commissioning of the Ubik robot in the laboratory.
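As an illustration of the securing of exchanges mentioned above, a command payload can be protected with an authenticated cipher before being sent to the API; the sketch below uses AES-GCM via Python's cryptography package. The key handling, payload format, and associated data shown here are assumptions, not the scheme actually deployed with the Conscience API.

    # Hypothetical sketch: authenticated encryption of a robot command payload.
    import json, os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # in practice, a provisioned shared key
    aesgcm = AESGCM(key)

    command = {"skill": "explore_area", "waypoints": [[0.0, 0.0], [2.5, 1.0]]}
    plaintext = json.dumps(command).encode("utf-8")

    nonce = os.urandom(12)                      # must be unique per message
    ciphertext = aesgcm.encrypt(nonce, plaintext, b"fusion-command")

    # The receiver decrypts with the same key and nonce; any tampering raises an error.
    decrypted = aesgcm.decrypt(nonce, ciphertext, b"fusion-command")
    assert json.loads(decrypted) == command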


- Continuation of work on Gaussian Splatting (GS) approaches (WP3) (the standard rendering formulation is recalled after this list)
o Data collection in the CESI environments (Flexible Production Workshop and reception hall) for initial implementations, tests, and evaluations of 3D renderings (see Figure 2).
o Launch of a study on the detailed evaluation of GS approaches on weakly textured objects, using the digital twin to determine their points of sensitivity.
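For reference, the standard 3D Gaussian Splatting rendering model from the literature (not a project-specific formulation): the colour of a pixel x is obtained by depth-ordered alpha blending of the Gaussians projected onto the image,

    C(x) = sum_i c_i · α_i(x) · prod_{j<i} (1 − α_j(x)),
    α_i(x) = o_i · exp( −½ (x − μ_i)ᵀ Σ_i⁻¹ (x − μ_i) ),

where c_i is the (view-dependent) colour of Gaussian i, o_i its opacity, and μ_i, Σ_i the mean and covariance of its 2D projection. Weakly textured surfaces provide few photometric constraints for optimizing these parameters, which motivates the sensitivity study mentioned above.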

- Interactions WP2 – WP3
o Initial trials of integrating GS rendering into VR interfaces to enhance operator immersion during the robot control phase.

Human resources:
- Research Engineer Recruitment:
o Céliane Deschamps, Research Engineer, “XR Interfaces – WP3”
o Mickyas Tamiru Asfaw, Research Engineer, “Perception – WP2”
- PhD Student Recruitment:
o Majd Karoui, PhD student, “XR interfaces – WP3”
- Intern recruitment:
o 1 four-month internship completed by Julien Petit-Colboc