{"id":191254,"date":"2025-10-22T16:00:51","date_gmt":"2025-10-22T14:00:51","guid":{"rendered":"https:\/\/lineact.cesi.fr\/?post_type=projets&#038;p=191254"},"modified":"2025-10-29T08:52:08","modified_gmt":"2025-10-29T07:52:08","slug":"scopes-collaborative-semantics-for-evidentiary-perception-of-situations","status":"publish","type":"projets","link":"https:\/\/lineact.cesi.fr\/en\/projets\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\/","title":{"rendered":"Scopes\u2014collaborative semantics for evidentiary perception of situations"},"content":{"rendered":"\n<div class=\"wp-block-group is-style-editorial-aside-big has-white-background-color has-background is-layout-constrained wp-block-group-is-layout-constrained\">\n<figure class=\"wp-block-image size-full is-resized list-partners\"><img loading=\"lazy\" decoding=\"async\" width=\"1093\" height=\"269\" src=\"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/2025-10-06_09h20_44.png\" alt=\"\" class=\"wp-image-190010\" style=\"width:565px;height:auto\" srcset=\"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/2025-10-06_09h20_44.png 1093w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/2025-10-06_09h20_44-360x89.png 360w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/2025-10-06_09h20_44-500x123.png 500w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/2025-10-06_09h20_44-768x189.png 768w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/2025-10-06_09h20_44-1024x252.png 1024w\" sizes=\"auto, (max-width: 1093px) 100vw, 1093px\" \/><\/figure>\n<\/div>\n\n\n\n<div class=\"wp-block-group section is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-group wrapper__inner is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-group editorial__picture editorial__aside editorial__aside--big editorial__aside--picture is-layout-constrained 
wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\"><\/div>\n\n\n\n<div class=\"wp-block-group is-layout-constrained wp-block-group-is-layout-constrained\">\n<figure class=\"wp-block-image alignright size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"827\" height=\"256\" src=\"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/2025-10-06_09h18_25.png\" alt=\"\" class=\"wp-image-190008\" style=\"width:413px;height:auto\" srcset=\"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/2025-10-06_09h18_25.png 827w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/2025-10-06_09h18_25-360x111.png 360w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/2025-10-06_09h18_25-500x155.png 500w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/2025-10-06_09h18_25-768x238.png 768w\" sizes=\"auto, (max-width: 827px) 100vw, 827px\" \/><\/figure>\n<\/div>\n<\/div>\n\n\n\n<h2 class=\"wp-block-heading title--5\"><span><span class=\"icon\" aria-hidden=\"true\"><\/span> <\/span><\/h2>\n<\/div>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Partners:<\/strong> CESI LINEACT, LITIS, IRSEEM<\/li>\n\n\n\n<li><strong>Call for projects:<\/strong> ANR ASTRID Robotics 2021<\/li>\n\n\n\n<li><strong>CESI project budget (funding):<\/strong> \u20ac210k (\u20ac105k)<\/li>\n\n\n\n<li><strong>Overall budget (funding):<\/strong> \u20ac547k (\u20ac300k)<\/li>\n\n\n\n<li><strong>Project launch:<\/strong> January 1, 2022<\/li>\n\n\n\n<li><strong>Project duration:<\/strong> 39 months<\/li>\n<\/ul>\n\n\n\n<p><\/p>\n\n\n\n<p>The fields of Industry 5.0 and defense are increasingly relying on systems of systems where robotic agents must adapt to the humans with whom they interact. 
The use of heterogeneous fleets of agents equipped with perception devices makes it possible, once individual information has been fused, to address problems such as optimizing fleet operation, securing convoys, improving safety and security for human operators, and increasing flexibility when situations or the environment are reconfigured.<\/p>\n\n\n\n<p>Pooling information makes it possible to produce a global view of the situation from the individual perceptions of each robotic or non-robotic agent. Each individual perception module produces an interpretation of the scene that is inherently subject to uncertainty. The consequences of deploying the fleet in complex or hostile environments must also be considered: the communication link required for information exchange is subject to bandwidth constraints, and can be very limited or even unavailable when the link is broken, if only temporarily. The positions of the viewpoints needed to build the situational view also depend on the quality of the localization sources, when these are available.<\/p>\n\n\n\n<p>The <strong>SCOPES<\/strong> project proposes to develop a solution for producing a situational view augmented with uncertainty, serving as a source of decision-making information. The project&#8217;s contributions will be:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A formal representation of the situational view, integrating the various sources of uncertainty and allowing for human interpretation.<\/li>\n\n\n\n<li>A robust localization method based on the graph paradigm and on the semantic information provided by each agent.<\/li>\n\n\n\n<li>A functional specification and associated datasets for the objective, quantitative evaluation of collaborative perception situations, using the project partners&#8217; technological platforms.<\/li>\n<\/ul>\n<\/div>\n\n\n\n<p>The SCOPES project will result in TRL 4-level outputs. 
The project&#8217;s value to economic stakeholders has been recognized by its certification by <strong><a href=\"https:\/\/www.nae.fr\/\">NAE (Normandie AeroEspace)<\/a><\/strong>.<\/p>\n\n\n\n<p><strong>Achievements as of December 31, 2024:<\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Formalization and representation of an evidential semantic grid.<\/li>\n\n\n\n<li>Robust and lightweight graph-based localization and mapping in complex environments through spatio-temporal consistency.<\/li>\n\n\n\n<li>Methodology for data production using extended reality.<\/li>\n\n\n\n<li>Situation overview taking into account sources of uncertainty (HMI).<\/li>\n\n\n\n<li>Representation, restitution, and visualization of uncertainty in a map for human decision-making.<\/li>\n\n\n\n<li>Software production:<ul>\n<li>Simulated data generation code (CARLA\/OpenScenario),<\/li>\n<li>Code for estimating evidential semantic occupancy grids,<\/li>\n<li>Code for the RGBD SLAM approach with spatial and temporal consistency, on GitHub,<\/li>\n<li>Software for multimodal augmentation of RGBD data,<\/li>\n<li>Software for visualizing evidential semantic occupancy grids in a digital twin.<\/li>\n<\/ul><\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"713\" height=\"571\" data-id=\"190152\" src=\"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-1-3.bmp\" alt=\"\" class=\"wp-image-190152\" srcset=\"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-1-3.bmp 713w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-1-3-360x288.png 360w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-1-3-500x400.jpg 500w\" sizes=\"auto, (max-width: 713px) 100vw, 713px\" \/><\/figure>\n\n\n\n<figure 
class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"400\" height=\"361\" data-id=\"190154\" src=\"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-2-5.bmp\" alt=\"\" class=\"wp-image-190154\" srcset=\"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-2-5.bmp 400w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-2-5-360x325.jpg 360w\" sizes=\"auto, (max-width: 400px) 100vw, 400px\" \/><\/figure>\n<figcaption class=\"blocks-gallery-caption wp-element-caption\">Image 1 : Collaborative perception from a heterogeneous fleet of sensors and vectors. Image 2 : Evidential semantic grid.<\/figcaption><\/figure>\n\n\n\n<figure class=\"wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex\">\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"709\" height=\"886\" data-id=\"190144\" src=\"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-3-4.bmp\" alt=\"\" class=\"wp-image-190144\" srcset=\"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-3-4.bmp 709w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-3-4-360x450.jpg 360w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-3-4-500x625.jpg 500w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-3-4-627x783.jpg 627w\" sizes=\"auto, (max-width: 709px) 100vw, 709px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"827\" height=\"931\" data-id=\"190148\" src=\"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-4-2.bmp\" alt=\"\" class=\"wp-image-190148\" srcset=\"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-4-2.bmp 827w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-4-2-360x405.jpg 360w, 
https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-4-2-500x563.jpg 500w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-4-2-768x865.jpg 768w, https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/7.2-image-4-2-696x783.jpg 696w\" sizes=\"auto, (max-width: 827px) 100vw, 827px\" \/><\/figure>\n<figcaption class=\"blocks-gallery-caption wp-element-caption\">Image 3: Robust semantic map by SLAM. Image 4: Multimodal augmented data.<\/figcaption><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p>The project&#8217;s closing meeting took place on March 13 at the CESI campus in Rouen, attended by partners and funders, and the project officially ended on March 31, 2025.<\/p>\n<\/div>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Industry 5.0 and defense rely on systems that integrate fleets of robotic agents capable of interacting and adapting to humans. The fusion of their data optimizes operations, enhances security, safety, and flexibility, while improving human-machine cooperation.<\/p>\n","protected":false},"featured_media":191689,"menu_order":0,"template":"","categories":[528],"tags":[],"class_list":["post-191254","projets","type-projets","status-publish","has-post-thumbnail","hentry","category-en"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Scopes\u2014collaborative semantics for evidentiary perception of situations - CESI LINEACT<\/title>\n<meta name=\"description\" content=\"Industry 5.0 and defense bring robots and humans together to optimize operations, enhance safety, and increase flexibility.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/lineact.cesi.fr\/en\/projets\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\/\" 
\/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Scopes\u2014collaborative semantics for evidentiary perception of situations - CESI LINEACT\" \/>\n<meta property=\"og:description\" content=\"Industry 5.0 and defense bring robots and humans together to optimize operations, enhance safety, and increase flexibility.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/lineact.cesi.fr\/en\/projets\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\/\" \/>\n<meta property=\"og:site_name\" content=\"CESI LINEACT\" \/>\n<meta property=\"article:modified_time\" content=\"2025-10-29T07:52:08+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/image-scopes-2-scaled.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"2048\" \/>\n\t<meta property=\"og:image:height\" content=\"1366\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"4 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/projets\\\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\\\/\",\"url\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/projets\\\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\\\/\",\"name\":\"Scopes\u2014collaborative semantics for evidentiary perception of situations - CESI LINEACT\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/projets\\\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\\\/#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/projets\\\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\\\/#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/lineact.cesi.fr\\\/wp-content\\\/uploads\\\/2025\\\/10\\\/image-scopes-2-scaled.jpg\",\"datePublished\":\"2025-10-22T14:00:51+00:00\",\"dateModified\":\"2025-10-29T07:52:08+00:00\",\"description\":\"Industry 5.0 and defense bring robots and humans together to optimize operations, enhance safety, and increase 
flexibility.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/projets\\\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\\\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/projets\\\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\\\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/projets\\\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\\\/#primaryimage\",\"url\":\"https:\\\/\\\/lineact.cesi.fr\\\/wp-content\\\/uploads\\\/2025\\\/10\\\/image-scopes-2-scaled.jpg\",\"contentUrl\":\"https:\\\/\\\/lineact.cesi.fr\\\/wp-content\\\/uploads\\\/2025\\\/10\\\/image-scopes-2-scaled.jpg\",\"width\":2048,\"height\":1366,\"caption\":\"Business hand robot handshake, artificial intelligence digital transformation\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/projets\\\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\\\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Projets\",\"item\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/projets\\\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Scopes\u2014collaborative semantics for evidentiary perception of situations\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/#website\",\"url\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/\",\"name\":\"CESI LINEACT\",\"description\":\"Laboratoire de recherche et 
d&#039;innovation\",\"publisher\":{\"@id\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/#organization\",\"name\":\"CESI LINEACT\",\"url\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/lineact.cesi.fr\\\/wp-content\\\/uploads\\\/2022\\\/11\\\/cropped-LOGOTYPE_CESI_QUADRI_RVB_2.png\",\"contentUrl\":\"https:\\\/\\\/lineact.cesi.fr\\\/wp-content\\\/uploads\\\/2022\\\/11\\\/cropped-LOGOTYPE_CESI_QUADRI_RVB_2.png\",\"width\":862,\"height\":112,\"caption\":\"CESI LINEACT\"},\"image\":{\"@id\":\"https:\\\/\\\/lineact.cesi.fr\\\/en\\\/#\\\/schema\\\/logo\\\/image\\\/\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","yoast_head_json":{"title":"Scopes\u2014collaborative semantics for evidentiary perception of situations - CESI LINEACT","description":"Industry 5.0 and defense bring robots and humans together to optimize operations, enhance safety, and increase flexibility.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/lineact.cesi.fr\/en\/projets\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\/","og_locale":"en_US","og_type":"article","og_title":"Scopes\u2014collaborative semantics for evidentiary perception of situations - CESI LINEACT","og_description":"Industry 5.0 and defense bring robots and humans together to optimize operations, enhance safety, and increase flexibility.","og_url":"https:\/\/lineact.cesi.fr\/en\/projets\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\/","og_site_name":"CESI LINEACT","article_modified_time":"2025-10-29T07:52:08+00:00","og_image":[{"width":2048,"height":1366,"url":"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/image-scopes-2-scaled.jpg","type":"image\/jpeg"}],"twitter_card":"summary_large_image","twitter_misc":{"Est. 
reading time":"4 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/lineact.cesi.fr\/en\/projets\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\/","url":"https:\/\/lineact.cesi.fr\/en\/projets\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\/","name":"Scopes\u2014collaborative semantics for evidentiary perception of situations - CESI LINEACT","isPartOf":{"@id":"https:\/\/lineact.cesi.fr\/en\/#website"},"primaryImageOfPage":{"@id":"https:\/\/lineact.cesi.fr\/en\/projets\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\/#primaryimage"},"image":{"@id":"https:\/\/lineact.cesi.fr\/en\/projets\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\/#primaryimage"},"thumbnailUrl":"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/image-scopes-2-scaled.jpg","datePublished":"2025-10-22T14:00:51+00:00","dateModified":"2025-10-29T07:52:08+00:00","description":"Industry 5.0 and defense bring robots and humans together to optimize operations, enhance safety, and increase flexibility.","breadcrumb":{"@id":"https:\/\/lineact.cesi.fr\/en\/projets\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/lineact.cesi.fr\/en\/projets\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lineact.cesi.fr\/en\/projets\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\/#primaryimage","url":"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/image-scopes-2-scaled.jpg","contentUrl":"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2025\/10\/image-scopes-2-scaled.jpg","width":2048,"height":1366,"caption":"Business hand robot handshake, artificial intelligence digital 
transformation"},{"@type":"BreadcrumbList","@id":"https:\/\/lineact.cesi.fr\/en\/projets\/scopes-collaborative-semantics-for-evidentiary-perception-of-situations\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/lineact.cesi.fr\/en\/"},{"@type":"ListItem","position":2,"name":"Projets","item":"https:\/\/lineact.cesi.fr\/en\/projets\/"},{"@type":"ListItem","position":3,"name":"Scopes\u2014collaborative semantics for evidentiary perception of situations"}]},{"@type":"WebSite","@id":"https:\/\/lineact.cesi.fr\/en\/#website","url":"https:\/\/lineact.cesi.fr\/en\/","name":"CESI LINEACT","description":"Laboratoire de recherche et d&#039;innovation","publisher":{"@id":"https:\/\/lineact.cesi.fr\/en\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/lineact.cesi.fr\/en\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/lineact.cesi.fr\/en\/#organization","name":"CESI LINEACT","url":"https:\/\/lineact.cesi.fr\/en\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/lineact.cesi.fr\/en\/#\/schema\/logo\/image\/","url":"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2022\/11\/cropped-LOGOTYPE_CESI_QUADRI_RVB_2.png","contentUrl":"https:\/\/lineact.cesi.fr\/wp-content\/uploads\/2022\/11\/cropped-LOGOTYPE_CESI_QUADRI_RVB_2.png","width":862,"height":112,"caption":"CESI 
LINEACT"},"image":{"@id":"https:\/\/lineact.cesi.fr\/en\/#\/schema\/logo\/image\/"}}]}},"_links":{"self":[{"href":"https:\/\/lineact.cesi.fr\/en\/wp-json\/wp\/v2\/projets\/191254","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lineact.cesi.fr\/en\/wp-json\/wp\/v2\/projets"}],"about":[{"href":"https:\/\/lineact.cesi.fr\/en\/wp-json\/wp\/v2\/types\/projets"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/lineact.cesi.fr\/en\/wp-json\/wp\/v2\/media\/191689"}],"wp:attachment":[{"href":"https:\/\/lineact.cesi.fr\/en\/wp-json\/wp\/v2\/media?parent=191254"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lineact.cesi.fr\/en\/wp-json\/wp\/v2\/categories?post=191254"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lineact.cesi.fr\/en\/wp-json\/wp\/v2\/tags?post=191254"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}