Podcast rec: The Quantified Worker and Worker Surveillance with Ifeoma Ajunwa

Just dropping this magnificent podcast episode off here.

Ifeoma Ajunwa, author of the upcoming book The Quantified Worker, goes pretty deep on automated hiring systems and how humans encode bias into them. She shares examples of how AI-powered hiring platforms are problematic and how they affect job seekers:

“A lot of hiring systems make use of machine learning algorithms. Algorithms are basically a step-by-step process for solving any known problem. What you have is a defined set of inputs and you’re hoping to get a defined set of outputs like hire, don’t hire, or in between. When you have machine learning algorithms, it kind of makes it murkier. You have a defined set of inputs, but the algorithm itself is learning. So the algorithm itself is actually creating new algorithms, which you are not defining. The algorithm is learning how you react to the choices it gives you. It creates new algorithms from that. It can become murky in terms of discerning what attributes the algorithm is defining as important because it’s constantly changing.”
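To make that murkiness concrete, here is a minimal, hypothetical sketch (mine, not from the podcast) of the dynamic she describes: a hiring model retrained on new feedback quietly shifts which attributes it weighs, which is why it is hard to audit what it treats as important. The feature names, data, and "feedback" are invented for illustration, using scikit-learn's LogisticRegression.

```python
# Hypothetical illustration only: features, data, and "feedback" are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["years_experience", "resume_gap", "keyword_match"]

# Round 1: fit on an initial batch of labeled applications.
X1 = rng.normal(size=(200, 3))
y1 = (X1[:, 0] + 0.5 * X1[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)
model = LogisticRegression().fit(X1, y1)
print("round 1 weights:", dict(zip(features, model.coef_[0].round(2))))

# Round 2: refit on labels derived from how recruiters reacted to earlier picks.
# If those reactions penalize resume gaps, the retrained model shifts its weight
# onto that attribute without anyone explicitly defining that rule.
X2 = rng.normal(size=(200, 3))
y2 = (X2[:, 0] - 1.5 * X2[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)
model = LogisticRegression().fit(X2, y2)
print("round 2 weights:", dict(zip(features, model.coef_[0].round(2))))
```

The learned weights change between rounds, so the attributes the system is effectively screening on are a moving target, which is exactly the murkiness she's pointing at.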

Plus, she covers how employers spy on their workers. This is a must-listen for anyone curious about AI in the workplace.

Follow her @iajunwa.


This call is being monitored (and used to discipline call center workers)

The premise of using affect as a job-performance metric would be problematic enough if the process were accurate. But the machinic systems that claim to objectively analyze emotion rely on data sets rife with systemic prejudice, which affects search engine results, law enforcement profiling, and hiring, among many other areas. For vocal tone analysis systems, the biased data set is customers’ voices themselves. How pleasant or desirable a particular voice is found to be is influenced by listener prejudices; call-center agents perceived as nonwhite, women or feminine, queer or trans, or “non-American” are at an entrenched disadvantage, which the datafication process will only serve to reproduce while lending it a pretense of objectivity.

Recorded for Quality Assurance

All of us are used to hearing the familiar phrase “This call is being monitored for quality assurance” when we contact customer service.

Most of us don’t give a second thought to what happens to the recording after our problem is solved.

The article above takes us into the new world of call center work, where your voice is monitored, scored by AI, and used to discipline workers.

“Reps from companies claim their systems allow agents to be more empathetic, but in practice, they offer emotional surveillance suitable for disciplining workers and manipulating customers. Your awkward pauses, over-talking, and unnatural pace will be used against them.”
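To give a rough sense of what “scored by AI” can mean mechanically, here is a minimal, hypothetical sketch, not taken from any vendor's product, that computes the kinds of metrics the excerpt mentions (awkward pauses, over-talking, pace) from diarized call segments. The segment format, thresholds, and example data are all assumptions.

```python
# Hypothetical call "scoring" sketch: segment format, thresholds, and data are assumptions.
from dataclasses import dataclass

@dataclass
class Segment:
    speaker: str   # "agent" or "customer"
    start: float   # seconds
    end: float     # seconds
    words: int     # words spoken in this segment

def score_call(segments, max_pause=2.0, wpm_range=(110, 170)):
    segments = sorted(segments, key=lambda s: s.start)
    awkward_pauses = overtalk = 0
    for prev, cur in zip(segments, segments[1:]):
        if cur.start - prev.end > max_pause:
            awkward_pauses += 1          # long silence between turns
        if cur.start < prev.end and cur.speaker != prev.speaker:
            overtalk += 1                # one speaker talking over the other
    agent = [s for s in segments if s.speaker == "agent"]
    agent_minutes = sum(s.end - s.start for s in agent) / 60 or 1e-9
    wpm = sum(s.words for s in agent) / agent_minutes
    return {
        "awkward_pauses": awkward_pauses,
        "overtalk": overtalk,
        "agent_wpm": round(wpm, 1),
        "unnatural_pace": not (wpm_range[0] <= wpm <= wpm_range[1]),
    }

call = [Segment("customer", 0.0, 6.0, 18), Segment("agent", 8.5, 20.0, 55),
        Segment("customer", 19.0, 25.0, 20), Segment("agent", 25.5, 40.0, 30)]
print(score_call(call))
```

Crude counts like these can then feed per-agent scores, and the emotion analysis layered on top inherits the bias problems described above.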

The more I read about workplace surveillance, the more dystopian the future of work looks. Is this really what we want? Is this what managers and leadership want?

What if we used the voice analysis on leadership? Why aren’t we monitoring and analyzing how leaders speak to their subordinates or peers in meetings? Granted, I don’t think that would actually produce a healthy work environment, but it only seems like a fair deal for the leaders who implement and use these algorithms in their organizations.

On a related note, there’s a collection of papers out from Data & Society that seek to “understand how automated, algorithmic, AI, or otherwise data-driven technologies are being integrated into organizational contexts and processes.” The collection, titled Algorithms on the Shop Floor: Data-Driven Technologies in Organizational Contexts, shows off the range of contexts in which new technology is fundamentally reshaping our workforce.

With companies racing to implement automated platforms and AI technology in the workplace, we need so much more of this research.