Beyond deliberate abuse, there are many ways for this technology to fail. Among the most pressing are misidentifications that can lead to false arrests and accusations. … Mistaken identity is more than an inconvenience and can lead to grave consequences.
If facial recognition technology is being used on your campus, would you know about it? If it is being used, do you know how it’s impacting the communities you serve?
It’s easy to check out when you hear about facial recognition technology. The term still conjures up images of Minority Report or, more recently, thoughts of China. Facial recognition in everyday life feels kind of far off if you’re not working in AI or security industry spaces.
If you’re working in a cozy office, it’s easy to ignore the plight of Amazon warehouse workers and scroll right past their stories in your feed.
So I encourage you to watch this short clip from Frontline, not just to understand what happens behind the scenes when you click that purchase button, but to see how Amazon is shaping our workplaces.
This focus on data-driven management and efficiency over people won’t stay limited to Amazon. Amazon is a leader in everything they do. When they experiment with data-driven management and efficiency and it works, others will follow. From the video:
“Amazon is the cutting edge. Other warehouses are starting to adopt these technologies. Other companies are starting to do what Amazon is doing. Data collection can become the standard for all workers. You’re never good enough. You’re never able to keep up.”
Data-driven management mixed with workplace surveillance creates a brutal work environment. This shouldn’t be what we’re building for the future of work.
I don’t know what the solution is. Listen to these stories. Support unions. Don’t order Prime (or order it less). If you’re in tech, don’t use your talents to work for Amazon.
These workers don’t deserve this. This isn’t the future of work we deserve. We have the power to change it.
Just dropping this magnificent podcast episode off here.
Ifeoma Ajunwa, author of the upcoming book The Quantified Worker, goes pretty deep on automated hiring systems and how humans encode bias into AI-powered hiring tools. She shares examples of how AI-powered hiring platforms are problematic and how they impact job seekers:
“A lot of hiring systems make use of machine learning algorithms. Algorithms are basically a step-by-step process for solving any known problem. What you have is a defined set of inputs and you’re hoping to get a defined set of outputs like hire, don’t hire, or in between. When you have machine learning algorithms, it kind of makes it murkier. You have a defined set of inputs, but the algorithm itself is learning. So the algorithm itself is actually creating new algorithms, which you are not defining. The algorithm is learning how you react to the choices it gives you. It creates new algorithms from that. It can become murky in terms of discerning what attributes the algorithm is defining as important because it’s constantly changing.”
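To make the quote concrete, here is a minimal sketch of the dynamic Ajunwa describes. It is a hypothetical toy model, not any real hiring platform: defined inputs (candidate features), defined outputs (“hire” / “don’t hire”), and a perceptron-style update rule that nudges the model’s weights every time a recruiter reacts to a candidate. Because the weights shift with each piece of feedback, the answer to “which attributes does the algorithm consider important?” is a moving target.

```python
def predict(weights, features):
    """Score a candidate; output 'hire' if the weighted sum is positive."""
    score = sum(w * f for w, f in zip(weights, features))
    return "hire" if score > 0 else "don't hire"


def learn(weights, features, recruiter_liked, lr=0.1):
    """Perceptron-style update: nudge weights toward the recruiter's reaction."""
    target = 1 if recruiter_liked else -1
    return [w + lr * target * f for w, f in zip(weights, features)]


# Hypothetical features: [years_experience, elite_school, employment_gap]
weights = [0.0, 0.0, 0.0]

# The feedback loop: the model learns from which candidates recruiters favor,
# silently absorbing whatever biases drive those reactions.
feedback = [
    ([5, 1, 0], True),
    ([3, 1, 1], False),
    ([7, 0, 1], False),
    ([4, 1, 0], True),
]
for features, liked in feedback:
    weights = learn(weights, features, liked)

# The "important" attributes are whatever the weights happen to be right now,
# and they change again with the very next piece of feedback.
print(weights)
```

Even in this tiny example, the weights after four feedback events bear no obvious relation to any attribute a human would name as a hiring criterion, and they keep changing. That is the murkiness in the quote: the system is auditable in principle, but what it treats as important is constantly being rewritten by the feedback it receives.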