Intel® SceneScape
Intel® SceneScape is a multimodal scene intelligence software framework for monitoring and tracking use cases and for creating a fabric of interconnected, intelligent scenes.
Product Description
Overview
Intel® SceneScape unlocks business applications from raw sensor data by providing an abstraction layer built on a digital twin of each scene. Objects, people, and vehicles within the scene are represented as overlays on the dynamic structure of the digital twin. Applications, autonomous systems, and mobile systems securely access the digital twin abstraction layer to make decisions about the state of the scene, such as whether a person is in danger, a part is worn or broken, someone has been waiting in line too long, a product has been mis-shelved, or a child has called out for help. Powerful AI algorithms and AI hardware process all available sensor data to maintain the 4D scene graph (3D space plus time) as quickly and accurately as possible. With the Intel® Distribution of OpenVINO™ toolkit, Intel® SceneScape builds this 4D semantic digital replica from raw sensor data by ingesting detections from 2D cameras and mapping them into the abstraction layer. The Intel® Distribution of OpenVINO™ toolkit also helps abstract the different types of Intel® hardware accelerators, including CPU, GPU, VPU, FPGA, and GNA, enabling developers to write code once and deploy it across these devices.
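To make the ingestion step concrete, the following is a minimal sketch of running a detection model with the OpenVINO™ runtime on a single camera frame. The model file, device choice, input layout, and SSD-style output format are assumptions for illustration only and do not reflect Intel® SceneScape's internal pipeline.

```python
import cv2
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("person-detection.xml")   # hypothetical IR model file
compiled = core.compile_model(model, "AUTO")      # let OpenVINO pick CPU/GPU/etc.

def detect(frame_bgr: np.ndarray, conf_threshold: float = 0.5):
    """Return [(x_min, y_min, x_max, y_max, confidence), ...] in pixel units."""
    h, w = frame_bgr.shape[:2]
    _, _, in_h, in_w = compiled.input(0).shape    # assumes an NCHW input layout
    resized = cv2.resize(frame_bgr, (int(in_w), int(in_h)))
    blob = resized.transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    # Assumes an SSD-style output of shape [1, 1, N, 7], each row being
    # [image_id, label, confidence, x_min, y_min, x_max, y_max] (normalized).
    results = compiled(blob)[compiled.output(0)]
    boxes = []
    for _, _, conf, x0, y0, x1, y1 in results.reshape(-1, 7):
        if conf >= conf_threshold:
            boxes.append((x0 * w, y0 * h, x1 * w, y1 * h, float(conf)))
    return boxes
```

Each detection produced this way is still in camera pixel coordinates; the scene context described below is what lifts it into building coordinates.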
Highlights
Scene Context: Scene and sensor management uses knowledge about each sensor to apply scene context. For example, knowing the position of a smart camera in a building allows detections of people to be mapped from the camera view into building coordinates (a sketch of this projection appears after this list).
Multimodal detection: Multimodal tracking lets users choose the sensors that best fit their operational needs. Intel® SceneScape readily handles visual, infrared, radio frequency (RF), and Intel® RealSense™ tracking and depth-sensing cameras, as well as other environmental sensors.
Multi-sensor data fusion: Fusing data from multiple sensors means Intel® SceneScape can detect an object of interest from several angles across different sensors yet represent it only once in the scene graph, removing duplicates and reducing error (see the deduplication sketch after this list).
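The sketch below illustrates the scene-context idea from the first highlight: the bottom-center of a detected person's bounding box is projected from camera pixels onto building floor-plan coordinates. It assumes a planar floor and a 3x3 homography obtained from camera calibration; the matrix values and function names are hypothetical and not part of SceneScape's API.

```python
import numpy as np

# Hypothetical homography mapping pixel coordinates to floor-plan meters,
# derived offline from the camera's known position and calibration.
H_cam_to_floor = np.array([
    [0.01, 0.00, -3.2],
    [0.00, 0.02, -1.5],
    [0.00, 0.001, 1.0],
])

def pixel_to_floor(x_px: float, y_px: float) -> tuple[float, float]:
    """Map a pixel location to (x, y) in building coordinates (meters)."""
    u, v, w = H_cam_to_floor @ np.array([x_px, y_px, 1.0])
    return u / w, v / w

# Use the bottom-center of a bounding box as the person's ground-contact point.
x0, y0, x1, y1 = 420.0, 180.0, 500.0, 460.0
foot_x, foot_y = (x0 + x1) / 2.0, y1
print(pixel_to_floor(foot_x, foot_y))
```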
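The next sketch shows the deduplication idea behind multi-sensor data fusion in its simplest form: detections from different sensors that land close together in scene coordinates are merged into a single object. Real fusion also uses temporal tracking and uncertainty estimates; this greedy distance-based merge, with made-up sensor names and a chosen merge radius, is purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor_id: str
    x: float          # scene coordinates, meters
    y: float
    confidence: float

def fuse(detections: list[Detection], merge_radius: float = 0.75) -> list[Detection]:
    """Greedily keep the highest-confidence detection and drop nearby duplicates."""
    kept: list[Detection] = []
    for det in sorted(detections, key=lambda d: d.confidence, reverse=True):
        duplicate = any(
            (det.x - k.x) ** 2 + (det.y - k.y) ** 2 <= merge_radius ** 2
            for k in kept
        )
        if not duplicate:
            kept.append(det)
    return kept

scene = [
    Detection("cam-entrance", 4.1, 2.0, 0.93),
    Detection("cam-lobby",    4.3, 2.1, 0.88),   # same person seen by a second camera
    Detection("cam-lobby",    9.7, 6.4, 0.81),
]
print(fuse(scene))   # two fused objects instead of three raw detections
```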