Artificial intelligence is amazing in all of its guises, whether that’s a DJI drone using computer vision to stop short of a collision or an Intel-powered UAV using machine learning to identify whales. There are plenty of applications combining AI and drones that are changing lives and industries for the better.
However, care obviously needs to be taken when developing software, algorithms and machines capable of making decisions and judgements that affect humans. This week, research conducted at the University of Cambridge, India's National Institute of Technology and the Indian Institute of Science raised ethical questions over the combination of drones, aerial surveillance and AI.
The technology developed allows a drone to fly above crowds and use machine learning to detect signs of violence.
In a paper titled "Eye in the Sky," the researchers describe a Parrot AR quadcopter that sends live video from above a crowd for real-time analysis. Machine learning algorithms then churn through the footage, estimating the poses of the people in shot and recognizing postures and gestures that have previously been designated as "violent."
‘What is violence?’ you might ask. Well, for the sake of the research, the team suggested five poses: strangling, punching, kicking, shooting, and stabbing.
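The paper details its own detection pipeline, but purely as an illustration of what a pose-classification step like this might look like, here is a minimal sketch in Python. Everything in it is assumed for illustration rather than taken from the paper: the 14-keypoint skeleton layout, the SVM standing in as the classifier, and the synthetic training data.

```python
# Minimal illustrative sketch of pose classification (NOT the paper's
# actual pipeline). Assumes 2D keypoints already come from some
# off-the-shelf pose estimator; the SVM and synthetic data here are
# stand-ins for illustration only.
import numpy as np
from sklearn.svm import SVC

POSES = ["neutral", "strangling", "punching", "kicking", "shooting", "stabbing"]
N_KEYPOINTS = 14  # assumed skeleton: head, neck, shoulders, elbows, wrists, hips, knees, ankles

def pose_to_feature(keypoints: np.ndarray) -> np.ndarray:
    """Flatten (x, y) keypoints into a feature vector, normalized so the
    classifier is insensitive to where the person stands in the frame
    and to the drone's altitude (i.e. apparent body size)."""
    kp = keypoints - keypoints.mean(axis=0)         # translation invariance
    scale = np.linalg.norm(kp, axis=1).max() or 1.0
    return (kp / scale).ravel()                     # scale invariance

# Synthetic stand-in for a hand-annotated training set of poses.
rng = np.random.default_rng(seed=0)
X = np.array([pose_to_feature(rng.normal(size=(N_KEYPOINTS, 2)))
              for _ in range(300)])
y = rng.integers(0, len(POSES), size=300)           # random labels, illustration only

clf = SVC(kernel="rbf").fit(X, y)

# At inference time, each person detected in a video frame would be run
# through the pose estimator and their keypoints classified like so:
new_pose = rng.normal(size=(N_KEYPOINTS, 2))
print(POSES[int(clf.predict([pose_to_feature(new_pose)])[0])])
```

The design point worth noting is the normalization: an aerial camera sees people at varying distances and positions, so raw pixel coordinates would make poor features – centering and rescaling each skeleton lets a simple classifier focus on body configuration alone.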
An eye in the sky
On paper, that sounds like a great idea. Drones can hover over festivals, sporting events, any place where people are gathering – and alert authorities when something untoward is occurring. These kinds of events can be difficult to police; sometimes an aerial view is needed. If that aerial view can also pinpoint violence, what’s not to love?
The problem lies in defining violence and training an AI system to recognize it accurately. Then there's the slippery slope of automated surveillance: what if the technology moves beyond violence to spot "suspicious" behavior more broadly? Who defines what that is, and what safeguards are in place to stop that power from being abused?
These are important ethical questions that need answering before we see this kind of automated policing in action. It's also worth bearing in mind that the technology isn't quite there yet. The system is currently 94 percent accurate at identifying "violent" poses, but that figure drops as more people appear in the frame – to 79 percent with 10 people in the scene, for example.
Either way, it's an interesting application of drone technology – and no doubt this won't be the last we hear of automated, machine learning-driven aerial surveillance. In the right hands, it's hard to deny that it could help prevent crimes and bring people to justice.
But in the wrong hands, whether that’s an authoritarian government or an unchecked police force, it could just be another tool of tyranny.
Malek Murison is a freelance writer and editor with a passion for tech trends and innovation. He handles product reviews, major releases and keeps an eye on the enthusiast market for DroneLife.
Email Malek
Twitter: @malekmurison