These are the “robots” that are taking your jobs

Since I’ve written a book with the phrase “beat the robots” in the title, I get asked a lot whether or not robots are going to take our jobs.

It’s a common question but a bit off the mark. Robots aren’t taking our jobs (unless you’re in manufacturing or retail, in which case robots are actually taking jobs).

Instead, it’s software that is changing how we do our jobs and, in some cases, creating fewer job opportunities in traditional occupations. This software is usually called automation software or RPA (Robotic Process Automation): sophisticated software that mimics repetitive human tasks and performs them 24/7.

But in rarer cases, this software is taking jobs outright.

Case in point, this article: Microsoft lays off journalists to replace them with AI

Microsoft is laying off dozens of journalists and editorial workers at its Microsoft News and MSN organizations. The layoffs are part of a bigger push by Microsoft to rely on artificial intelligence to pick news and content that’s presented on MSN.com, inside Microsoft’s Edge browser, and in the company’s various Microsoft News apps. Many of the affected workers are part of Microsoft’s SANE (search, ads, News, Edge) division, and are contracted as human editors to help pick stories.

The craziest part of that article, beyond the fact that humans were being replaced by AI, is that they had to clarify that the editors were in fact human.

In a time of global pandemic and anti-racist protests, we need good journalists who understand nuance and context more than ever. Corporate America doesn’t seem to agree.

And of course, this automation trend isn’t limited to writers. In January, the radio giant iHeartMedia laid off hundreds of DJs and replaced them with AI:

The dominant player in U.S. radio, which owns the online music service iHeartRadio and more than 850 local stations across the United States, has called AI the muscle it needs to fend off rivals, recapture listeners and emerge from bankruptcy. The company, which now uses software to schedule music, analyze research and mix songs, plans to consolidate offices around what executives call “AI-enabled Centers of Excellence.”

(Side note: This is my plug for the best non-AI radio station out there: KEXP, whose fundraising tagline in 2018 was “robot-free radio” and which continues to play human-curated playlists.)

With coronavirus accelerating automation in the workplace, I’m not optimistic this trend is going to go away.

Curious about this subject? I recommend checking out my book on how you can adapt to this changing workplace.


A selection of dystopian af quotes on surveillance in schools

I’d love to write a more thorough post on this subject (and maybe soon I will), but for now I’m just going to drop some terrifying quotes from a recent Guardian article, Clear backpacks, monitored emails: life for US students under constant surveillance. The entire article is a must-read for all parents, in the hope that the more we understand, the better we’ll be at asking tough questions about surveillance in schools.

I’m dropping these quote nuggets to spark thoughtful discussion between you and your partner:

Tech companies are now offering a range of products that help schools track the websites kids are visiting and the searches they are making; that monitor everything students are writing in school emails, chats and shared documents; or that even attempt to track what students are posting on their public social media accounts.

Some parents said they were alarmed and frightened by schools’ new monitoring technologies. Others said they were conflicted, seeing some benefits to schools watching over what kids are doing online, but uncertain if their schools were striking the right balance with privacy concerns. Many said they were not even sure what kind of surveillance technology their schools might be using, and that the permission slips they had signed when their kids brought home school devices had told them almost nothing.

“It’s the school as panopticon, and the sweeping searchlight beams into homes, now, and to me, that’s just disastrous to intellectual risk-taking and creativity.”

As of 2018, at least 60 American school districts had also spent more than $1m on separate monitoring technology to track what their students were saying on public social media accounts, an amount that spiked sharply in the wake of the 2018 Parkland school shooting, according to the Brennan Center for Justice, a progressive advocacy group that compiled and analyzed school contracts with a subset of surveillance companies.

There are virtual learning platforms, platforms for coordinating with teachers, platforms that specialize in teaching kids math.
“They are all mandatory, and the accounts have been created before we’ve even been consulted,” he said. Parents are given almost no information about how their children’s data is being used, or the business models of the companies involved.

Will the data generated by the accounts his kids use at school be factored into decisions about whether they get a job later in life, or how much they have to pay for insurance? “It’s not really a far future,” he said.

Parents, I encourage you to read the whole thing. Then start asking questions and hosting discussions with your community and school about the impact of surveillance in schools:

  • Who benefits from the surveillance of children?
  • Who suffers from the surveillance of children?
  • How much money is made off of the surveillance of children?
  • What are better ways to solve problems around safety in the classroom?
  • How are children responding to increased surveillance?
  • How would you feel if this tech was incorporated into the workplace? (btw, this surveillance tech is surely coming for the workforce, and in some places it is already here.)
  • How might the data being collected be used in the future?
  • What predictions are being made with this data?
  • How many false positives occur with this technology?
  • What is the recourse for someone falsely identified as a suspect/troublemaker/future-crime committer by surveillance technology?
  • How should children or parents challenge surveillance in schools?
  • How should children or parents opt out of surveillance in schools?

And if you’re reading this and you’re thinking, my family has nothing to hide, we follow the rules, then I encourage you to read this entire piece by a leading AI researcher and teacher:

8 Things You Need to Know About Surveillance

Here’s the tl;dr outline from the article:

[outline image: surveillance in schools]

The big, disturbing AI experiment in the classroom

“These classrooms are laboratories for future generations… just how this all works out won’t be apparent until they become adult citizens.”

China’s capacity to implement and integrate new AI technology into their society never fails to shock me. The video below on all the ways China is using AI in the classroom is no different.

Watching it reminded me of a term I just learned: parental anxiety management. The term comes from the article The Case Against Spying on Your Kids With Apps, which does an excellent job of showing off the creepy ways parents can track their kids, alongside reasons why they shouldn’t. The takeaway is that surveillance tech markets itself as the solution to parents’ collective anxiety about their kids’ safety and health.

While China’s government certainly plays a massive role in the expansion of AI, the cultural acceptance of new technology that supposedly benefits our kids isn’t limited to China. Watch the Chinese parents share why they think it’s a good idea to use artificial intelligence in the classroom, and you’ll see little difference between those parents and US parents who just want what’s best for their kids.

I can only hope that our government steps up and puts huge limits on AI in the classroom so our kids don’t end up monitored, tracked, and shamed as they go through the education system.

This call is being monitored (and used to discipline call center workers)

The premise of using affect as a job-performance metric would be problematic enough if the process were accurate. But the machinic systems that claim to objectively analyze emotion rely on data sets rife with systemic prejudice, which affects search engine results, law enforcement profiling, and hiring, among many other areas. For vocal tone analysis systems, the biased data set is customers’ voices themselves. How pleasant or desirable a particular voice is found to be is influenced by listener prejudices; call-center agents perceived as nonwhite, women or feminine, queer or trans, or “non-American” are at an entrenched disadvantage, which the datafication process will only serve to reproduce while lending it a pretense of objectivity.

Recorded for Quality Assurance

All of us are used to hearing the familiar phrase “This call is being monitored for quality assurance” when we contact customer service.

Most of us don’t give a second thought to what happens to the recording after our problem is solved.

The article above takes us into the new world of call center work, where your voice is monitored, scored by AI, and used to discipline workers.

“Reps from companies claim their systems allow agents to be more empathetic, but in practice, they offer emotional surveillance suitable for disciplining workers and manipulating customers.” Your awkward pauses, over-talking, and unnatural pace will be used against you.

The more I read about workplace surveillance, the more dystopian the future of work looks. Is this really what we want? Is this what managers and leadership want?

What if we used voice analysis on leadership? Why aren’t we monitoring and analyzing how leadership speaks to their subordinates or peers in meetings? Granted, I don’t think that would actually produce a healthy work environment, but it only seems like a fair deal for leaders who implement and use these algorithms in their organizations.

On a related note, there’s a collection of papers out from Data & Society that seek to “understand how automated, algorithmic, AI, or otherwise data-driven technologies are being integrated into organizational contexts and processes.” The collection, titled Algorithms on the shop floor: Data driven technologies in organizational contexts, shows off the range of contexts in which new technology is fundamentally reshaping our workforce.

With companies racing to implement automated platforms and AI technology in the workplace, we need so much more of this research.


Didn’t see this coming: Facial recognition at summer camp

I spend a lot of time reading and writing about AI in the workplace which means I spend a lot of time reading about AI in general. But I wasn’t at all prepared for this:

Now hundreds of summer camps across the United States have tethered their rustic lakefronts to facial-recognition software, allowing parents an increasingly omniscient view into their kids’ home away from home.

If you just yelled “what the fuckity fuck” when you read that quote, then you’re really not going to like the article, As summer camps turn on facial recognition, parents demand: More smiles, please. The article details how summer camps are using facial recognition tech to keep parents up to date on their teens, often without the kids knowing it.

I spend a lot of time reading about AI products and their impact on society, but using facial recognition on teens at a summer camp (and a phone-free one at that) so companies can sell fear and anxiety to parents who then transfer that anxiety right back onto their kids, really caught me off guard.

If this fires you up, follow @ruchowdh and @hypervisible on Twitter.

Excellent analysis by @drewharwell – “Some of the kids… are so accustomed to constant photography that they barely notice the camera crew.” – we are acclimating the next generation to a surveillance state.— Rumman Chowdhury (@ruchowdh) August 9, 2019

Then read Shoshana Zuboff’s new book, The Age of Surveillance Capitalism.