It’s a common science fiction theme that drones and other surveillance tools may someday be used to track individuals, with AI and teams of government officials identifying people from aerial images. There’s no need to get nervous yet, though – a paper published by two UK-based researchers says that facial recognition from aerial footage – especially any method that requires a human being to match drone images against images from a passport or driver’s license – isn’t particularly effective.
Matthew C. Fysh and Markus Bindemann, both researchers at the School of Psychology at the University of Kent, published a paper titled Person Identification from Drones by Humans: Insights from Cognitive Psychology on September 28, 2018. The full article is available for download here. Fysh and Bindemann concluded that people have trouble performing facial recognition accurately from drone images.
Facial recognition is a well-established method of surveillance that has been in use for decades. Closed-circuit television (CCTV), also known as video surveillance, has long been deployed in a variety of settings: homes, offices, warehouses, and urban environments all frequently use CCTV. In 2012, the number of CCTV cameras in London was estimated at around one camera for every 14 people, for a total of about 422,000. A report from the British Security Industry Association (BSIA) estimates that there are between 4 million and 5.9 million CCTV surveillance cameras in the UK.
In the US, where CCTV is not as prevalent as in the UK, footage was used after the Boston Marathon bombings to identify the suspects in the case, and it has been used to monitor major events for years.
But does drone footage work just as well? Comparing high-quality images with drone images, the study finds that the rate of correct identity matches drops from about 90% with high-quality images to about 48% with low-resolution drone images.
Automating this process with AI software may prove a better first screen for identity matching, but the process usually calls for an image to be identified or “flagged” by AI and then reviewed by a person. That’s where the system tends to fail. Even with two high-quality images taken minutes apart, humans show a considerable margin of error in deciding whether they depict the same person. These images, designed to show the best-case scenario, demonstrate that people fail to find an accurate identity match 20% of the time.
In the example, the top row presents two different images of the same person (an identity match), whereas the bottom pair depicts two different people (an identity mismatch).
The researchers say a number of factors can degrade aerial images, including the altitude of the drone and the presence of trees and other objects that may interfere with the clarity and angle of the images.
Miriam McNabb is the Editor-in-Chief of DRONELIFE and CEO of JobForDrones, a professional drone services marketplace, and a fascinated observer of the emerging drone industry and the regulatory environment for drones. Miriam has penned over 3,000 articles focused on the commercial drone space and is an international speaker and recognized figure in the industry. Miriam has a degree from the University of Chicago and over 20 years of experience in high tech sales and marketing for new technologies.
For drone industry consulting or writing, Email Miriam.
TWITTER: @spaldingbarker
Subscribe to DroneLife here.