The U.S. Army has put out a research solicitation for technology that will improve situational awareness on the tactical battlefield.
The Small Business Technology Transfer (STTR) solicitation, “Real Time 3-D Modeling and Immersive Visualization for Enhanced Soldier Situation Awareness,” seeks technology that can visualize internal and external structures of buildings as well as potential threats, and then disseminate that information to soldiers and small-unit leaders. “This must be combined with the ability to simultaneously self-locate and construct a textured, 3-D environment map, often in the absence of GPS,” states the solicitation, which closes March 28.
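That self-location requirement is, in essence, the simultaneous localization and mapping (SLAM) problem from robotics. As a toy illustration of why it is hard, the sketch below (my own example, not drawn from the solicitation) dead-reckons a 2-D pose from odometry increments; without GPS fixes or map corrections, small per-step errors compound into drift, which is exactly what a full SLAM system must continually correct.

```python
import math

def integrate_odometry(pose, motions):
    """Accumulate (forward_dist, turn_radians) increments into a 2-D pose.

    pose is (x, y, heading_radians). Pure dead reckoning like this
    drifts without correction, which is why SLAM systems also match
    sensor observations against the map they are building.
    """
    x, y, heading = pose
    for dist, turn in motions:
        heading += turn          # apply the turn first
        x += dist * math.cos(heading)
        y += dist * math.sin(heading)
    return (x, y, heading)

# Drive 1 m forward, turn 90 degrees, drive 1 m: end near (1, 1).
print(integrate_odometry((0.0, 0.0, 0.0), [(1.0, 0.0), (1.0, math.pi / 2)]))
```

In a real system the increments would come from wheel encoders, visual odometry, or inertial sensors, and loop-closure constraints would pull the drifting trajectory back into alignment with the map.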
The Army specifies several objectives for the project, including algorithms that can use video and 3-D point cloud data from air and ground sensors to create high-fidelity, geo-referenced 3-D models. Beyond visualization, the Army also wants an automated detection feature that can alert users to the presence of moving or stationary objects, and the ability for users to customize that feature.
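As a minimal illustration of the kind of customizable change detection the solicitation describes, the sketch below (my own example; the function and parameter names are hypothetical) diffs two occupancy-grid snapshots and suppresses alerts below a user-set size threshold, a crude stand-in for the tunable alerting the Army asks for.

```python
def detect_changes(prev_cells, curr_cells, min_cluster=1):
    """Flag grid cells that became occupied between two sensor snapshots.

    prev_cells and curr_cells are sets of (ix, iy) occupied-cell indices.
    min_cluster is the user-tunable knob: ignore changes smaller than
    this many cells, so small sensor noise does not trigger an alert.
    """
    appeared = curr_cells - prev_cells
    return appeared if len(appeared) >= min_cluster else set()

prev = {(0, 0), (1, 0)}
curr = {(0, 0), (1, 0), (5, 5)}
print(detect_changes(prev, curr))                  # {(5, 5)}
print(detect_changes(prev, curr, min_cluster=2))   # set() -- below threshold
```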
The Army notes the project will require cutting-edge technology in multiple fields, including high-speed graphics computing, 3-D imaging, virtual reality, and visualization. More significantly, for this project to work, fundamental advances will be needed in other areas, such as real-time 3-D point cloud processing algorithms. The solicitation calls for solutions with civilian as well as military applications, including commercial robotics, video games, training, and law enforcement.
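One concrete example of real-time point cloud processing is voxel-grid downsampling, a standard first step that bounds how many points later stages must handle. The sketch below is illustrative only (my own names, not from any named system): it bins points into cubic voxels and keeps one averaged point per occupied voxel.

```python
from collections import defaultdict

def voxel_downsample(points, voxel=0.5):
    """Reduce a 3-D point cloud to one averaged point per occupied voxel.

    points is a list of (x, y, z) tuples; voxel is the cube edge length
    in the same units. Averaging within each voxel preserves the rough
    shape of the cloud while cutting the point count, which is what
    makes downstream real-time processing tractable.
    """
    bins = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        bins[key].append((x, y, z))
    # Average each voxel's points coordinate-by-coordinate.
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in bins.values()]

cloud = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (3.0, 3.0, 3.0)]
print(voxel_downsample(cloud))  # two nearby points merge; the far one survives
```

Production systems (e.g., robotics perception stacks) perform this kind of reduction continuously on streaming sensor data before registration, meshing, or object detection.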
“A key point is that real-time streaming of accurate, dynamic data and imagery equips the troops with a deeper awareness and understanding of their ever-changing surroundings,” said Eric Simon, vice-president of modeling and simulation at Presagis.
“Enhancing situational awareness allows troops to better plan and mitigate the risks of the mission because they are able to make decisions at any given moment that reflect the current environment, rather than making a decision based on a static picture taken prior to the start of the mission.”
Simon pointed to several challenges that the project must overcome.
“Fusing images from various sources is complicated,” he said. “The fact that these sources have potentially different levels of fidelity makes it even more challenging. Another challenge is the identification and attribution of objects in real-time. There are currently some of these capabilities that are being prototyped, or used in a simplified way, but not to the extent of this project.”