“I want people to know how powerless you feel when your income comes from a faceless app and when you open it up one morning, things are just different and you’re earning less money and there’s no boss you can talk to, you weren’t told about it, you just see your income is lower today and you just have to deal with it.”
Management by algorithm and faceless bosses: this is the future of work. Consider last week’s report from Business Insider:
“A new report indicates that the company doesn’t just track worker productivity at its warehouses — it also has a system that can automatically generate the paperwork to fire them if they’re not meeting targets.”
Companies like Uber and Amazon are leading the way toward workforces managed by algorithm. They’re experimenting on the most vulnerable workers first: contractors. But you can expect them to apply what they learn to the white-collar workforce next.
Though Intel forecasts flat sales in 2019, people inside the company said this week’s layoffs don’t appear to be strictly a cost-cutting move. Rather, they said the cuts appeared to reflect a broad change in the way Intel is approaching its internal technical systems… Intel will now consolidate operations under a single contractor, the Indian technology giant Infosys.
Intel is laying off hundreds of its IT staff, according to the Oregonian. Unless you or a friend or family member is immediately affected, you’ve probably scrolled right past the news. No shame in that; stories of layoffs are a dime a dozen in our newsfeeds.
One of the things that sort of keeps us up at night is if you think about the way that we check that our current systems are fair in, say, criminal justice is that we have a system of appeals. We have a system of rulings. You actually have a thing called due process, which means you can check the evidence that’s being brought against you. You can say, “Hey, this is incorrect.” You can change the data. You can say, “Hey, you’ve got the wrong information about me.”
This is actually not how AI works right now. In many cases, decisions are gonna be made about you. You’re not even aware that an AI system is working in the background. Let’s take HR for a classic case in point right now. Now, many of you have probably tried sending CVs and résumés in to get a job. What you may not know is that in many cases, companies are using AI systems to scan those résumés, to decide whether or not you’re worthy of an interview, and that’s fine until you start hearing about Amazon’s system, where they took two years to design, essentially, an AI automatic résumé scanner. – How will AI change your life? AI Now Institute founders Kate Crawford and Meredith Whittaker explain.
Everyone who works on AI products needs to understand the ethical implications of their work. AI engineers and product managers need to understand their product’s impact on users. Business leaders and engineers need to bring in diverse voices and specialties to help ensure their product doesn’t have negative implications. Human resources leads need to hire interdisciplinary workers who connect the dots between design, engineering, and business performance.
All of this is of course easier said than done. Judging by the many, many, many fails in AI product development, we aren’t even close to that point inside AI organizations. These “fails” have a tremendous impact on people’s lives.
Ethics is a loaded term, and businesses aren’t quite sure what ethical AI even looks like. Just look at the recent dissolving of Google’s AI ethics board. While many questioned who got to be on that board, many others questioned exactly how an ethics board translates into ethical business practices and products.
Thankfully there are several individuals and organizations working at the intersection of AI and ethics. My personal favorite is the AI Now Institute. I could have pulled so many other impactful quotes from their recent interview on the Recode Decode podcast. Have a listen to that episode to get your head around the many challenges of AI and ethics. And if you’re really into AI and ethics, check out this list of people to follow on Twitter.
Now that my first book on the future of work is moving forward, I’m turning my research towards AI and ethics, specifically how organizations train talent to reduce bias in AI products. So expect more of this type of content in the coming months.
I’m also speaking at Portland’s Machine Learning for All conference on how to have curious conversations. I’ll be teaching software and machine learning engineers how to hone their soft skills to build connections and work across disciplines to ensure they’re bringing the right voices into their work.
I just learned how to write a first draft of a book. I’m fresh off of four months of nearly daily writing to wrangle 60,427 words into a first draft. Actually, I topped out around 68,000 but chopped it down before handing it over to a developmental editor, who will cut it down even more. My first draft is quite the beast.
Wrangling your ideas and thoughts into a coherent narrative isn’t easy. It doesn’t come naturally to most. The funny thing about writing a book is that you don’t need to be good at writing to write a book. Instead, you need to be good at discipline. You need to commit to writing until it’s all out of your head and fight the feeling of quitting because your words look so awkward outside of your head. You also have to slay the procrastination monster on the regular.
I’ve spent the last four months learning how to write a first draft. I learned how to manage the logistics of writing at volume. I learned how to build an outline, find a pace that worked for me, and manage my writing time. Most importantly, I learned how to overcome doubt.
Writing a book is a BFD and it’s pretty overwhelming at the start. So I started the process by breaking it down into manageable parts. I’ve conquered the first part: writing a first draft.
Here’s what worked to get me there and what might work for you if you’re trying to write your first draft.
Talespin, a VR/AR/AI company, is bringing soft skills training to organizations using VR and AI. Call it a Choose Your Own Virtual Reality Management Adventure: these training tools help managers and leadership develop the soft skills they need to perform in complex organizations.
Employers are in a desperate search for employees with soft skills. As we retreat more into our digital spaces, we are collectively losing the ability to have conversations with one another. The result is that our relationships, collaboration, and creativity suffer in the workplace. Soft skills are all about people: how to work with, talk to, learn from, give feedback to, negotiate with, listen to, and create with people.
Enter more tech to solve the problem.
My first reaction was this: shouldn’t people learn people skills through interaction with… people? Why are we outsourcing people skills to the virtual machines? How do fake humans teach humans how to be more human?
Also, this tech is an indirect threat to my own work. I teach people and organizations how to build soft skills. From relationship building to negotiation to how to have curious conversations, I help people build their soft skills. So yeah, maybe I felt a bit threatened when I first saw it.
Then I stepped back. And I looked closer. And I saw the truly wild stuff going on with this tech. From the article:
“The great thing about VR is you can do something that’s rare in nature, and give people extra repetitions,” Bailenson says. “The cool part of using computer graphics for this, virtual humans, is you can go through as the manager and have this difficult conversation—then you can relive the experience from the point of view of the employee, get to hear your voice coming out of an avatar you’ve chosen to look like you. Now that you’ve got this newly emotionally understood information from being on the receiving end of this bad news, you get to repeat it and do it again.” – Boss Acting Nicer Recently? You May Have VR to Thank
Honestly, I can think of at least five managers from my past who could have used training like this. A lot of HR Tech companies are developing AI that will make your manager worse. Talespin is using AI and VR in an attempt to make them better.
People still need to practice building soft skills outside of a VR experience, so my work isn’t going away any time soon. But it’s wild to see this type of training applied using new technology. In the future I’d love to see research on how the emotional impact of these virtual reality scenarios changes managers for the better.
I’m also stoked for all the potential types of jobs emerging tech creates. As a creative who runs in HR circles (and worked in HR), I find the HR industry borderline stifling for creative types. Seeing a creative HR product that aims to improve the lives of employees is a welcome surprise.
I’m also curious about employees in this field. I’m curious who writes the scripts, how they work with designers, how the characters are modeled. After all, it’s real humans who build the fake humans who teach humans how to be more human.
I’m curious what type of employees they hire. What skills and backgrounds make up their teams? What type of employees succeed at their company? (Update: it looks like men. More than 90% of their 40+ employees on LinkedIn are men… that’s obviously a problem, especially when it comes to scenarios navigating inclusion in the workplace)
Buried at the bottom of an HBR post titled 8 Ways Machine Learning is Improving Company Processes is a little nugget about the ways machine learning might soon affect career planning. Machine learning could help employees navigate their career development by providing:
Recommendations (that) could help employees choose career paths that lead to high performance, satisfaction, and retention. If a person with an engineering degree wishes to run the division someday, what additional education and work experience should they obtain, and in what order?
Could this be a career coach in the future of work? It’s a fascinating idea and I’d love to see it in practice. We’ve already seen machine learning technology take over some parts of a career advisor’s job. There’s even a chatbot in development that’s trying to be a career coach (let’s hope it’s better than LinkedIn’s mediocre job recommendation algorithm). IBM uses AI to guide job seekers through their search.
A good career coach will listen to you, help you work out ideas, guide you through an ambiguous process, support you emotionally, and reflect your own words back to you. Machine learning technology can’t do this yet, in answer to my clickbait title.
But there aren’t enough good career coaches to go around. And few people can even afford a good career coach. Moreover, not every organization offers career coaching that helps employees navigate their next steps. Tools that help people navigate a world full of increasingly ambiguous career paths are mighty helpful.
Like many jobs, career coaching won’t be fully replaced by robots or artificial intelligence anytime soon; there will always be people who prefer working with people over machines. But the role of career coaches will change as new tools and technology emerge. With the workplace and available roles shifting rapidly, career coaches need to be aware of these changes, coach their clients through them, and rethink outdated career advice, especially as the job search becomes less human. University career departments in particular need to upskill.
Today’s post is brought to you by my halfway mark to 50K words for #NaNoWriMo. I’m deep into a chapter on the future of work for my book and still finding a ton of good content to write about. The challenge, of course, is to write about it and not just read about it. Reading is not writing, I have to remind myself a bajillion times a day.
If you’re into this type of stuff, subscribe and I’ll send you things about careers, future of work, and probably a bunch of gifs.
In the age of big data, a measure-everything mindset is emerging. Julia Ticona, a sociologist and researcher with the Data and Society think tank in New York, says that the same types of apps that track and keep tabs on restaurant workers or delivery people 24/7 are now migrating to white-collar jobs.
But while service and manufacturing industry workers are more used to overt productivity measurements, such systems are often sold to office workers as opportunities to maximize their own productivity, she explains. “For lower wage folks, it’s about scheduling and hours,” says Ticona. “For the white collar folks, it’s about being the ‘best you.’” – The inevitable future of Slack is your boss using it to spy on you
This article details the many ways your employer uses new technology and invasive data collection techniques to spy on you at work. There’s even an example of a company that tracks its employees outside of work hours. Your workplace is creeping ever closer to The Circle.
So much of the future of work is focused on robots taking our jobs. But that discussion overlooks much of what’s happening outside of robots, mainly the erosion of employee privacy. The idea that companies should have the right to all data an employee produces in the course of their workday is absurd. Employee surveillance shouldn’t be normalized. Moreover, we need more discussion about the people making decisions about what constitutes worker productivity. Who are they, and how are they qualified to make these decisions? You can bet the executives and upper management aren’t being tracked like this.
I disagree that this is all inevitable. We have the power to say no to it. We have the power to teach emerging leaders how not to use this technology, or to point out its potential for abuse. Employee privacy shouldn’t be a trade-off for a paycheck. Employees have the power to ask questions: How are you using my personal data? What data are you monitoring? What assumptions are you making about my work when you build productivity-measuring algorithms?
Future employees have the power to ask the right questions during their job interviews. Let’s start teaching people the right questions to ask in an interview for a white collar role. How do you measure success in this role? How do you track worker productivity? How much data do you collect on your employees and what do you use it for?
We’re in the middle of a massive transition to a quantified workplace where leadership wants to measure everything in the pursuit of pure productivity. The people most affected by this system must participate in shaping this transformation and in pushing back.
“Workers increasingly see assignments and wages doled out by artificial systems rather than human managers, and have to rely on AI, not HR, when things go wrong. According to tech experts, the rise of algorithms is changing not only how we earn a living, but who gets access to jobs and other opportunities — if their data checks out — or not.” – Forbes, Algorithms And ‘Uberland’ Are Driving Us Into Technocratic Serfdom
The Forbes article was referencing the book UBERLAND: How Algorithms Are Rewriting The Rules Of Work, which has just rocketed to the top of my reading list. Until then, I’m definitely looking out for the author on the podcast circuit.
How much employee data collection is too much? Because it seems our employers – or at least the big corporate ones – want every single piece of our personal data. Is there any option for pushing back on your employer’s personal data grab?
The Kaiser Family Foundation’s annual review of employer-based insurance shows that 21% of large employers collect health information from employees’ mobile apps or wearable devices, as part of their wellness programs — up from 14% last year.
David Autor, an economist at M.I.T., says it is plausible to foresee a future in which — as airlines have done — hotels deploy humans to tend to elite guests and automated systems for everybody else. Workers generate costs well beyond their hourly wage, Professor Autor argued. They get sick and take vacations and require managers. “People are messy,” he noted. “Machines are straightforward.”