Paranoia Mounts with Eyeballs at the Edge – My VantagePoint

With Eyeballs at the Edge becoming more prevalent – meaning smart video cameras in every hallway, building entrance, street corner, and more – there are growing calls to limit AI video analytics in order to protect privacy. Who wants to think that big brother, any big brother, is following them around in their daily lives like some futuristic sci-fi movie?

As edge computing blossoms, along with new video analytics chips especially well-suited for edge applications, this risk isn’t far-fetched. Perhaps the highest risk comes from municipal governments, which can deploy smart video cameras across an entire metropolis yet may not have the most sophisticated security and privacy practices. Less worrisome is the Orwellian scenario in which gaggles of companies with access to disparate troves of venue video somehow stitch it together into a coherent picture. Like any relatively new technology, AI video analytics brings both societal benefits and costs – so what are the tradeoffs?

If you are a parent trying to find your kidnapped child, you want facial recognition systems to work. If your child is under attack at a middle school, you want these systems to rapidly identify every adult coming onto campus. If you are a potential sexual assault victim walking back to your dorm late at night, you want these systems to work and to serve as a deterrent. If you are at an airport on high alert, searching for suspects on a no-fly list, you probably also want these systems to work. What if a hospital discovered that one of its staff members had the coronavirus? Wouldn’t it be useful to the health and safety of everyone else to know when and where that employee was in the days just before symptoms became obvious?

That all said –

If you are at a shopping mall, you may not want the mall operator to identify you and treat you differently than any other patron who walks into a store. You might be OK with the mall counting people to help manage food court staffing or to make sure the restrooms are clean – but that’s probably your limit. Likewise, if you are in a city park, you may not mind the city knowing when an adult is in the park, but you might not want it to know who you are and correlate that with anything about you out on the web or social media. What if you had a mistaken arrest in your distant past? Do you want the ghosts of a mistaken infraction to haunt you through the power of ML and AI analytics and statistics? Do you want law enforcement to shadow your movements via hundreds of eyeballs at the edge, no matter where you are in your town? And, given the state of the art, all of this could – and in fact most certainly will – result in false positive recognitions. That is creepy for sure, or even worse.

Ethicists have not yet arrived at an accepted answer to the tradeoffs between individual and societal rights, and the balance between them seems to teeter back and forth over time. It’s my view from the cheap seats of the technology sector that, as with so many things in life, there may be an 80/20 rule of moderation between the rights of commercialization and societal benefit on the one hand and the risk of bodily or social harm on the other. What isn’t clear, though, is whether there is an 80/20 middle ground. Perhaps it is only 50/50. Further, are we even at the point in the evolution of facial recognition where we can be so precise? Today’s state of the art seems to have a high false positive rate, so the risk of misuse or unacceptable consequences is potentially large. AI recognizing a human shape isn’t bad. True rock-solid, “I’d bet my life on it” facial recognition is an entirely different matter – especially at any meaningful distance. To make matters worse, it is very clear that our body politic is not yet able to process real technological and scientific facts and come to answers derived from those facts. Facts and data alone are endlessly spun based upon agendas totally unrelated to the matter at hand.
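To see why a high false positive rate matters at city scale, here is a back-of-the-envelope sketch of the underlying base-rate arithmetic. Every number in it – the daily face count, the number of genuine watch-list matches, the 99% accuracy figure – is a purely illustrative assumption, not a measurement of any real system:

```python
# Base-rate illustration: all numbers below are hypothetical
# assumptions, not measurements of any real recognition system.

faces_scanned_per_day = 100_000   # faces a city-wide camera network might see
true_suspects_present = 10        # genuine watch-list matches in that crowd
true_positive_rate = 0.99         # recognizer catches 99% of real suspects
false_positive_rate = 0.01        # and wrongly flags 1% of everyone else

innocent_faces = faces_scanned_per_day - true_suspects_present
true_alerts = true_suspects_present * true_positive_rate
false_alerts = innocent_faces * false_positive_rate

# Precision: the chance that any given alert points at a real suspect.
precision = true_alerts / (true_alerts + false_alerts)

print(f"True alerts per day:  {true_alerts:.0f}")    # ~10
print(f"False alerts per day: {false_alerts:.0f}")   # ~1,000 innocent people
print(f"Chance a given alert is a real suspect: {precision:.1%}")  # ~1%
```

Under those assumed numbers, roughly a thousand innocent people would be flagged each day against about ten real matches, so any individual alert has only about a 1% chance of being right. That lopsided ratio is the statistical heart of the false-positive worry.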

So, what to do here?

In some very real sense, society has in fact already arrived at a paradigm that governs this balance. Ethics is part of it, as is a certain amount of capitalism mixed with “what can we (or should we) get away with” thinking. But how so? If we stop to consider it, the companies that build these products and services, and the organizations that deploy them, are balancing every single day the potential benefits of these capabilities against a future “jury of your peers” review and the impact of that review on the company brand. That is the de facto control mechanism now in place. A future jury might see the world very differently on these matters than a corporation does. Added to this are broad regulatory mandates – the EU being the most engaged in this area. Is this de facto mechanism sufficient? It is hard to tell.

Do I want my children, friends, and family to be safe and to benefit from the potential of facial recognition from video AI analytics? Absolutely. Do I want a virtual law enforcement ghost to shadow my children, friends, and family because of a false positive facial recognition that could interfere with their lives? Absolutely not. What I do know is that we are going to see some amazingly useful benefits from everything from simple people detection to more advanced video AI at the edge. Yet, at the same time, one day each of us is sure to read a horrendous story about someone whose life was ruined as a direct result of this technology. Moderation, the golden rule, and the hope that our de facto legal risk-driven system will work for us are the best we’ve got at this point.

This blog was originally authored by Chief Technology Officer Eric Broockman.

