Hiring practices are about to get even more opaque

All that advice about plugging keywords into your resume to make sure it gets past applicant tracking systems is about to be useless. Here’s an excerpt from AI for Recruiting: A Definitive Guide for HR Professionals by Ideal.com, an AI-powered resume screening and candidate tracking solution for busy recruiters.

Intelligent screening software automates resume screening by using AI (i.e., machine learning) on your existing resume database. The software learns which candidates moved on to become successful and unsuccessful employees based on their performance, tenure, and turnover rates. Specifically, it learns what existing employees’ experience, skills, and other qualities are and applies this knowledge to new applicants in order to automatically rank, grade, and shortlist the strongest candidates. The software can also enrich candidates’ resumes by using public data sources about their prior employers as well as their public social media profiles.
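The excerpt doesn’t say how the model is actually built, so here is a minimal sketch of what “learning from your existing resume database” might look like in practice. Everything in it is an assumption for illustration: the features, the “successful hire” labels, and the logistic-regression model are mine, not Ideal.com’s.

```python
# Illustrative sketch only: the features, labels, and model are assumptions,
# not a description of Ideal.com's (or anyone's) actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical historical data: resume text plus an outcome label derived from
# performance, tenure, and turnover (1 = the hire worked out, 0 = they didn't).
past_resumes = [
    "5 years enterprise sales, exceeded quota, Salesforce admin, cold calling",
    "recent graduate, marketing internship, social media campaigns",
    "8 years account management, CRM administration, quota attainment",
    "career changer, retail background, strong customer service",
]
outcomes = [1, 0, 1, 0]

# Turn resumes into features and fit a simple classifier on past outcomes.
vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(past_resumes), outcomes)

# "Grade" new applicants: the score is just the predicted probability that a
# resume resembles yesterday's successful hires, which is exactly how
# historical bias gets baked in.
new_resumes = [
    "3 years sales experience, HubSpot, quota attainment",
    "teaching background, strong writing skills, volunteer management",
]
scores = model.predict_proba(vectorizer.transform(new_resumes))[:, 1]
for score, resume in sorted(zip(scores, new_resumes), reverse=True):
    print(f"{score:.2f}  {resume}")
```

Note that the toy model never sees “merit”; it only learns whatever patterns separate yesterday’s keepers from yesterday’s rejects.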

Now for all the questions: What are the “other qualities” that they measure? How much weight do they give to experience vs. skills? How much data does a company need to use these algorithms effectively? How does a company without loads of data use this technology? Who decides which data to use? Who reviews the training data for accuracy and bias – the company or the vendor? How does a company avoid bias, especially if the people who advance are all white men (due to unconscious bias in the promotion process)? What data points are most valuable on candidates’ social profiles? Which social profiles are they pulling from? Are personal websites included? Which companies are using this technology? Are candidates without publicly available social media data scored lower? Of the companies using these technologies, who’s responsible for asking the questions above?

This technology gives a whole new meaning to submitting your resume into a black hole.

Are you prepared to diversify your career?

“More and more independent thinkers are realizing that when being an employee is the equivalent to putting all your money into one stock – a better strategy is to diversify your portfolio. So you’re seeing a lot more people looking to diversify their career.”

– What jobs will be around in 20 years?

I like the concept of diversifying careers. It’s the first time I’ve seen the idea framed this way. I usually refer to it as a tactic – collect skills. But framing it as diversifying your career makes it strategic. To succeed in the future, people need to think beyond one industry, one set of skills, or a single professional domain of expertise.

I’ve been stuck on the idea that the word career is terribly outdated. The definition of career is:

an occupation undertaken for a significant period of a person’s life and with opportunities for progress.

Yet we’re seeing more people take on different occupations throughout their lives. And with automation and the drastic changes coming for the workforce, there’s less guarantee of long-term opportunities and progress.

We need a new way of talking about careers that gets people thinking about upskilling, continuous learning, adaptation, a growth mindset, creative solutions, etc. The term career is rooted in stability and the notion that you’ll be rewarded just by showing up and doing your job. And that’s pretty much the opposite of the future of work:

Professor Richard Susskind, author of The Future of the Professions and Tomorrow’s Lawyers, echoes this distinction. “What you’re going to see for a lot of jobs is a churn of different tasks,” he explains. “So a lawyer today doesn’t develop systems that offer advice, but the lawyer of 2025 will. They’ll still be called lawyers but they’ll be doing different things.”

I’m building a school that teaches these concepts but I cringe when I pitch the idea because I have to use the term career, a term rooted in old-school thinking. So to get people thinking about new career expectations, I’m trying out new terms: career portfolios, portable careers, fluid careers, mobile careers. Then again, sometimes I just wrap it in clickbait: robot-proof careers.

Will black box algorithms be the reason you don’t get your next job?

A good example is today’s workplace, where hundreds of new AI technologies are already influencing hiring processes, often without proper testing or notice to candidates. New AI recruitment companies offer to analyze video interviews of job candidates so that employers can “compare” an applicant’s facial movements, vocabulary and body language with the expressions of their best employees. But with this technology comes the risk of invisibly embedding bias into the hiring system by choosing new hires simply because they mirror the old ones.

– Artificial Intelligence—With Very Real Biases

Beyond bias, we should be asking serious questions about the data these algorithms are built on: what evidence connects facial movements, vocabulary, and body language to job performance?
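For what it’s worth, here is roughly what “compare an applicant to your best employees” can reduce to, stripped down to a toy example. The feature names and the similarity scoring are my assumptions, not any vendor’s published method; the point is only that “best” quietly becomes “most similar to the incumbents.”

```python
# Toy version of "compare the applicant to your best employees".
# The features and scoring are illustrative assumptions, not a real product.
import numpy as np

# Hypothetical per-person features extracted from interview video:
# [smile frequency, speech rate, filler-word rate, eye contact ratio]
best_employees = np.array([
    [0.8, 1.2, 0.05, 0.90],
    [0.7, 1.1, 0.07, 0.85],
    [0.9, 1.3, 0.04, 0.95],
])
template = best_employees.mean(axis=0)  # the "ideal" profile is just an average

def similarity(candidate: np.ndarray) -> float:
    """Cosine similarity to the template, i.e. how much you mirror the incumbents."""
    return float(candidate @ template /
                 (np.linalg.norm(candidate) * np.linalg.norm(template)))

candidates = {
    "mirrors the current team": np.array([0.85, 1.2, 0.05, 0.90]),
    "different affect, same competence": np.array([0.30, 0.9, 0.15, 0.60]),
}
for name, features in candidates.items():
    print(f"{name}: {similarity(features):.3f}")
```

Whoever looks and sounds most like the current team gets the highest score, regardless of whether any of those signals have anything to do with doing the job well.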

More from the article above:

“New systems are also being advertised that use AI to analyze young job applicants’ social media for signs of “excessive drinking” that could affect workplace performance. This is completely unscientific correlation thinking, which stigmatizes particular types of self-expression without any evidence that it detects real problems. Even worse, it normalizes the surveillance of job applicants without their knowledge before they get in the door.”

The Future of Work from an L&D perspective

As stewards of your company’s value, you need to understand how to get your people ready—not because it’s a nice thing to do but because the competitive advantage of early adopters of advanced algorithms and robotics will rapidly diminish. Simply put, companies will differentiate themselves not just by having the tools but by how their people interact with those tools and make the complex decisions that they must make in the course of doing their work. The greater the use of information-rich tools, the more important the decisions are that are still made by people. That, in turn, increases the importance of continuous learning. Workers, managers, and executives need to keep up with the machines and be able to interpret their results.

– Putting Lifelong Learning on the Agenda, McKinsey Insights

Here’s a company that’s living that advice:

“The future of learning sabbaticals at Buffer is closely tied with our desire to help create the future of work. There’s a quote from Stephanie Ricci, head of learning at AXA that’s really powerful in explaining how much impact learning will have for employees in the future:

“By 2020, the core skills required by jobs are not on the radar today, hence we need to rethink the development of skills, with 50% of our jobs requiring significant change in terms of skillset”

That is a huge amount of jobs that will require new skills and for organizations and workers that means a lot of learning and developing.”

– Why this company implemented a learning sabbatical for its employees, FastCo

Do you ever feel like you need to go back to school so you can catch up?

“This thirst for AI has pushed all AI-related courses at Stanford to way over their capacity. CS224N: Natural Language Processing with Deep Learning had more than 700 students. CS231N: Convolutional Neural Networks for Visual Recognition had the same. According to Justin Johnson, co-instructor of CS231N, the class size is exponentially increasing. At the beginning of the quarter, instructors for both courses desperately scramble to find extra TAs. Even my course, first time offered, taught by an obscure undergraduate student, received 350+ applications for its 20 spots. Many of the students who took these courses aren’t even interested in the subject. They just take those courses because everyone is doing it”

– excerpt from Confession of a so-called AI Expert

The author, Chip Huyen, is a third-year student and TensorFlow TA at Stanford. She’s got a fab internship at Netflix and a killer writing style. The full article is a must-read, in part so you can fully appreciate the last sentences:

“Maybe one day people would realize that many AI experts are just frauds. Maybe one day students would realize that their time would be better spent learning things they truly care about. Maybe one day I would be out of job and left to die alone on the sidewalk. Or maybe the AI robot that I build would destroy you all. Who knows?”

AI is going to make your asshole manager even worse

Before you continue reading, reflect on the last bad manager you had. Remember how they made you feel. Remember the things they did that made your life miserable. Remember the incompetence. Remember that managers don’t get promoted to management because they’re good managers.

I know, it’s not pleasant. I’ve had some pretty awful managers too (but I’ve also had a billion jobs, so it’s inevitable).

Ok. Now read on.

HR tech is hot. Nearly $2 billion in investment hot. And AI is hotter than bacon. So combining HR tech and AI is a sizzling idea (still with me?).

Enter all the startups ready to make managers’ lives easier and employees’ lives more miserable with algorithms to solve all the HR problems. The Wall Street Journal takes a peek into the future of management in How AI Is Transforming the Workplace:

“Veriato makes software that logs virtually everything done on a computer—web browsing, email, chat, keystrokes, document and app use—and takes periodic screenshots, storing it all for 30 days on a customer’s server to ensure privacy. The system also sends so-called metadata, such as dates and times when messages were sent, to Veriato’s own server for analysis. There, an artificial-intelligence system determines a baseline for the company’s activities and searches for anomalies that may indicate poor productivity (such as hours spent on Amazon), malicious activity (repeated failed password entries) or an intention to leave the company (copying a database of contacts). Customers can set activities and thresholds that will trigger an alert. If the software sees anything fishy, it notifies management.”
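For the curious, here is roughly what “determines a baseline … and searches for anomalies” can mean in practice, reduced to a toy example. The activity metrics, the isolation forest, and the thresholds are all my assumptions for illustration, not Veriato’s actual pipeline.

```python
# Toy anomaly detector over activity metadata. The features and model are
# illustrative assumptions, not any vendor's real implementation.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one day for one employee:
# [hours on shopping sites, failed logins, contacts exported]
baseline_days = np.array([
    [0.2, 0, 0], [0.5, 1, 0], [0.1, 0, 0], [0.3, 0, 0],
    [0.4, 0, 0], [0.2, 1, 0], [0.6, 0, 0], [0.3, 0, 0],
])

# Fit on "normal" history, then score new days; -1 means "anomaly".
detector = IsolationForest(contamination=0.1, random_state=0).fit(baseline_days)

new_days = np.array([
    [0.3, 0, 0],    # ordinary day
    [4.0, 0, 0],    # hours on Amazon -> flagged as "poor productivity"
    [0.2, 9, 0],    # repeated failed passwords -> flagged as "malicious"
    [0.1, 0, 500],  # exported the contact database -> flagged as "flight risk"
])
for day, label in zip(new_days, detector.predict(new_days)):
    print(day, "ALERT" if label == -1 else "ok")
```

Note what the alert doesn’t contain: context. The model only knows that the numbers deviated from the baseline; the interpretation is left to whoever reads the alert.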

Now remember your asshole manager. Imagine if they had access to this tool. Imagine the micromanagement.

Brutal.

(Side note: I wonder if employees get access to their bosses’ computer logs. Imagine that!)

Let’s keep going.

Another AI service lets companies analyze workers’ email to tell if they’re feeling unhappy about their job, so bosses can give them more attention before their performance takes a nose dive or they start doing things that harm the company.

Yikes.

It’s hard not to read that as saying an unhappy worker is somehow a threat to the company. Work isn’t all rainbows and unicorns. We can’t be happy 40 hours a week even in the best of jobs. Throughout our work lives we deal with grief, divorce, strained friendships, children, boredom, indecision, bad coworkers, bad bosses, bad news, financial stress, taking care of parents, etc etc etc. And sometimes that comes out in the course of our days spent buried in emails. The idea of management analyzing your emails, on the lookout for anything that isn’t rainbows, ignores the reality of our work lives.

What data is the algorithm built on? What are the signs of unhappiness? Bitching about a coworker? Complaining about an unreasonable deadline? Micromanaging managers? What’s the time frame? One day of complaints or three weeks? Since algorithms take time to tweak and learn, what happens to employees (and their relationships with management) who are incorrectly flagged as unhappy while the algorithm learns?
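To make the false-flag problem concrete, here is a toy version of email “unhappiness” flagging. The word list, the scoring, and the threshold are invented for illustration; real products are presumably fancier, but the failure mode of context-free scoring is the same.

```python
# Toy "unhappiness" flagger. Lexicon, scoring, and threshold are invented for
# illustration; the point is how easily context-free scoring misfires.
NEGATIVE_WORDS = {"unhappy", "frustrated", "hate", "awful", "terrible", "sick", "quit"}
FLAG_THRESHOLD = 2  # flag anyone with 2+ negative words in a week (arbitrary)

weekly_emails = {
    "alice": [
        "I hate to say it, but the terrible weather may delay the shipment.",
        "I'm sick today, can we move the 1:1?",
    ],
    "bob": [
        "Great progress this sprint, thanks everyone!",
    ],
}

for employee, emails in weekly_emails.items():
    words = " ".join(emails).lower().replace(",", " ").replace(".", " ").split()
    score = sum(1 for w in words if w in NEGATIVE_WORDS)
    status = "FLAGGED as unhappy" if score >= FLAG_THRESHOLD else "ok"
    print(f"{employee}: score={score} -> {status}")
```

Alice gets flagged for apologizing about the weather and having a cold; Bob, who may be quietly miserable, sails through.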

Moreover, what do those conversations look like when “unhappy” employees are being called into management’s office?

Manager: Well we’ve called you in because our Algorithm notified me that you’re unhappy in your role.

Employee:

Manager: Right… so … can you tell me what’s making you so unhappy?

Employee: I’m fine.

Manager: Not according to The Algorithm. It’s been analyzing all your emails. I noticed you used the word “asshat” twice in one week to describe your cubicle mate. Your use of the f word is off the charts compared to your peers on the team. You haven’t used an exclamation point to indicate anything positive in at least three weeks. The sentiment analysis shows you’re an 8 out of 10 on the unhappy chart. Look, here’s the emoji the algorithm assigned to help you understand your unhappiness level.

Employee: It’s creepy you’re reading my emails.

Manager: Now remember, you signed that privacy agreement at the beginning of your employment and consented to this. You should never write anything in a company email that you don’t want read.

Employee:

And do the companies that purchase this technology even ask the hard questions?

The issue I have with this tech, apart from it being ridiculously creepy, is that it rests on some seriously bad assumptions:

  • All managers have inherently good intentions
  • All managers are competent
  • All organizations train their managers on how to be effective managers
  • All organizations train their managers on appropriate use of technology
  • Managers embrace new technology

Those are terrible assumptions. Here’s a brief, non-exhaustive list of issues I’ve had with managers over the past ten years:

  • Managers who can’t define what productivity looks like (beyond DO ALL THE THINGS)
  • Managers who can’t set and communicate goals
  • Managers who can’t listen to concerns voiced by the team (big egos)
  • Managers who can’t understand lead scoring and Google Analytics (from the CEO and VP of sales and marketing, no doubt)
  • Managers who can’t use a conference call system (technology-am-I-right?!)
  • Managers with no interpersonal communication skills and lack of self-awareness

Maybe we can all save ourselves by adding a new question when it’s our turn to ask questions in the interview:

“Tell me about your approach to management. What data do you use to ensure your AI technology accurately assesses employee happiness?”

Maybe I’m just cynical. Maybe it’s because I’ve had a few too many bad managers (as have my peers). Maybe I just feel sorry for good employees struggling under bad management. And maybe organizations should get better at promoting people who can manage (i.e., people with soft skills) instead of those who can’t before this technology is adopted.

Anyhow, to wrap up, this whole post has me feeling so grateful for the good managers I’ve had. The ones who got it right. Who listened, encouraged, and provided constructive feedback on all my work. And though I’m sure they’re not reading this post, a shout out to my favorite, amazing managers from two very different jobs: Kirsten and Cathy. They didn’t need an algorithm to understand their team’s performance and happiness. They had communication skills, empathy, and the kind of competence that made working for them a delight.

Adventures in awkward storytelling

I’m working on many projects right now: I’m consulting, writing, and building. Eventually it will all come together in one big reveal, but I’m not there yet. So when someone asks me what I do, I have a ton of flexibility in how I answer. I love the challenge of trying out new professional narratives in casual networking situations.

Last week I bombed hard while trying out a new professional narrative. At a dinner party with my partner’s coworkers, someone said to me, “So I hear you’re working on some coaching stuff.” I winced a bit. I’m not coaching. In fact, I’m trying to avoid coaching. So I tried out a new story:

Me: I used to coach but not anymore. Now I’m doing some consulting, working with career services to upgrade their curriculums for international students. But that’s just for right now because I’m launching a school to prepare students for the Fourth Industrial Revolution.

Him: Awkward silence and polite smile. 

Imagine it’s a fine summer evening and you’re enjoying some delicious ceviche, chatting with the group about the fresh scallops and vacation plans. And then someone tells you they’re working on preparing people for the Fourth Industrial Revolution.

WTF does that even mean?

He had no idea. I don’t blame him. I don’t even know why I said it. A polite silence ensued. He walked away. I went back to eating my ceviche and wallowed in the awkwardness.

Then I made a mental note: spend a little less time on the interwebs reading reports of robots taking over all the jobs and more time talking to real people.