Finding The Right Stuff In Full Motion Video

Helping analysts deal with the massive data volume of aerial video


Imagine you are watching a compound or city neighborhood busy with activity. Your job is to identify anyone in the scene with bad intentions and alert your soldiers on the ground. What if you miss something because there is so much going on? The capability to be automatically alerted when suspicious behavior appears in the scene is a game changer, and it can be the difference between life and death for those who depend on your assessment.


A team of U.S. forces is on its way to a compound of interest that appears to be involved in insurgent activity. A UAV overhead delivers full motion video (FMV) to a team of analysts tasked with monitoring all activity at the compound to ensure a safe mission for the troops on the ground. The area is extremely active, so how do the analysts distinguish threatening from non-threatening activity? Advanced analytics developed through this SBIR can focus an analyst's attention on the most important events through real-time alerts and queries against large video archives.


To dramatically reduce workload and improve surveillance, analysts need technology that automatically and accurately detects and characterizes the appearance and behavior of moving objects in video. Without these advanced analytics, analysts must manually identify vehicles, persons, objects, and threats; fatigue, poor video quality, and slow response time can be life-threatening for those on the ground. Some video contains so many moving objects that it's impossible for analysts to assess all of them manually, greatly increasing the risk of missing important or threatening activity, and analysts may not record behaviors that only become relevant after the fact. The U.S. and our allies employ a vast array of video sensors on the battlefield, and video analysts struggle under an ever-increasing volume of video data from various sensors and locations. The current process requires multiple analysts to manually view and annotate the data, leaving much critical data unexploited, and it cannot scale to future data sizes. Data that is exploited in real time is often reviewed with a specific objective in mind, so other relevant content is missed.


Kitware teamed with DARPA and AFRL researchers to develop technologies that address the needs of operational video analysts. Building on multiple efforts over the last eight years, we have developed a sophisticated set of technologies for automatic analysis of full motion video. On prior efforts, we teamed with some of the premier research institutions in the country to apply state-of-the-art video analysis techniques to automated FMV analysis. In our recent work with AFRL, we have focused on extending those technologies to additional problems, such as automated change detection, improving their reliability and performance, and making them more usable for analysts.

"Leveraging behavior-based recognition, auto-classification, and indexing dramatically reduces the work of analyzing each piece of video data, so analysts can evaluate more data with less effort." — Matthew Turek


Our approach to automated video analysis leverages state-of-the-art techniques from academia and industry, developed over multiple efforts and years. Our algorithms automatically ingest streams of video data, searching for objects such as moving people and vehicles. Moving objects are detected and tracked, then characterized in terms of their motion, appearance, and behavior, and this information is recorded in a database. A novel user interface allows an analyst to query the system for specific types of behavior, perhaps at particular locations and times, and the system can learn from user feedback to narrow the search results further. In addition, a user can provide a video clip showing a new behavior example, previously unseen by the system; in conjunction with limited user feedback, the system can then search a historical video archive for instances of that behavior. Automated algorithms can alert the analyst to user-specified behaviors in particular areas or cue the analyst to changes in the scene, such as vehicles that have entered or left it. These features have the potential to significantly reduce the workload on a group of analysts, automating tedious tasks and enabling them to process large volumes of video data more effectively.
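The detect-track-characterize-query workflow described above can be sketched as a toy in-memory index. The class names, record fields, and behavior labels below are illustrative assumptions for exposition, not Kitware's actual schema or algorithms; a real system would populate such records from automated detection and tracking rather than by hand.

```python
from dataclasses import dataclass

@dataclass
class TrackRecord:
    """One tracked moving object, characterized (as the text describes)
    by label, behavior, location, and time, then stored for later query."""
    track_id: int
    label: str              # e.g. "person", "vehicle" (illustrative)
    behavior: str           # e.g. "loitering", "u-turn" (illustrative)
    location: tuple         # (x, y) scene coordinates
    start_time: float       # seconds into the video
    end_time: float

class TrackDatabase:
    """Toy stand-in for the archive that analysts query by behavior
    type, location, and time window."""
    def __init__(self):
        self.records = []

    def ingest(self, record):
        self.records.append(record)

    def query(self, behavior=None, region=None, time_range=None):
        """Return records matching an optional behavior type, bounding
        region (xmin, ymin, xmax, ymax), and (t0, t1) time window."""
        results = []
        for r in self.records:
            if behavior is not None and r.behavior != behavior:
                continue
            if region is not None:
                xmin, ymin, xmax, ymax = region
                x, y = r.location
                if not (xmin <= x <= xmax and ymin <= y <= ymax):
                    continue
            if time_range is not None:
                t0, t1 = time_range
                # Keep any track whose lifetime overlaps the window.
                if r.end_time < t0 or r.start_time > t1:
                    continue
            results.append(r)
        return results

# Example query: loitering people seen in the first minute of video.
db = TrackDatabase()
db.ingest(TrackRecord(1, "vehicle", "u-turn", (120, 80), 10.0, 25.0))
db.ingest(TrackRecord(2, "person", "loitering", (300, 210), 12.0, 400.0))
hits = db.query(behavior="loitering", time_range=(0.0, 60.0))
```

A production archive would use a real database with spatial and temporal indexes; the point here is only the shape of the query interface the text describes.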


Preserving the lives of our soldiers, allies, and civilian non-combatants is a top priority. Video analysts currently struggle under a high volume of video data, exploiting only a fraction of it to assess potential threats and leaving our personnel at risk. The developed technology automatically classifies and indexes the content of video, increasing each analyst's ability to identify threats and communicate that information to the team on the ground.

This line of research and development has helped our company build a portfolio of state-of-the-art video analysis capabilities. We have collaborated with multiple U.S. government organizations to continue developing these technologies and to begin transitioning them to operational analyst use. These video analysis capabilities will be a critical piece of our company's technology portfolio, particularly as commercial and hobbyist use of UAVs and video takes off.

Our project has helped develop automated video analysis technologies that will be crucial for future U.S. competitiveness. These technologies are not only important militarily but will also be increasingly important in the nascent commercial and hobbyist UAV market. This market is poised to be a significant commercial opportunity in the U.S. and abroad, and the technologies developed on this effort will be highly relevant in that space.

"Now, analysts can quickly gain a comprehensive understanding of what they are watching. This capability could be transformative for video exploitation and will help save lives." — Anthony Hoogs, Senior Director of Computer Vision at Kitware, Inc.

Kitware, Inc.

Clifton Park, NY

Kitware, Inc. is a leader in the creation and support of open-source software and state-of-the-art computing technology. Kitware provides robust scientific software solutions, developing technology for real-world data that can be transferred and readily deployed to yield dramatic advances.

Anthony Hoogs, Ph.D.

Director of Computer Vision

Matt Turek, Ph.D.

Assistant Director of Computer Vision

