Start upskilling for AI now

In 2017, roughly 70,000 postings requested AI skills in the U.S., according to our analysis of job postings. That’s a significant change, amounting to growth of 252% compared to 2010. Burning Glass also found that demand for AI skills is now showing up in a wide range of industries including retail, health care, finance and insurance, manufacturing, information and professional services, technical services, and science/research. – Burning Glass Technologies

I’ve been seeing AI skills pop up in random job posts. I’ve wondered if it’s part of a bigger trend. It’s hard to get perspective since I’m not in the job market. Amazon leads the hiring for AI skills by a mile but GM, Accenture and Deloitte are also investing heavily. The most in-demand AI skills:

software developer/engineer, data scientist, data mining/data analyst, data engineer, computer systems engineer/architect, medical secretary, systems analyst, product manager and business management analyst.

Medical secretary threw me for a loop. Maybe it’s because they’re working with new AI medical technology? Regardless, it’s time to upskill.


AI for the doctor’s office

SmartExam acts as a virtual physician’s assistant – an automated medical resident, if you will – that enables primary care providers to deliver efficient remote care while cutting costs and improving outcomes… The intelligent software dynamically interviews patients, using answers to garner more information and support providers in the care delivery process… SmartExam lets providers achieve as much, or more, in a two-minute virtual patient visit as the 20 minutes of provider time needed for an office visit, the company said… “It allows clinicians to operate at the tops of their licenses,” said Constantini. “They can focus on what they do best — diagnosis and treatment.” – Bright.MD raises another $8M for “virtual physician’s assistant” SmartExam

I wonder if current medical students are taught how to integrate AI software into their training.

Hiring practices are about to get even more opaque

All that advice about plugging keywords into your resume to make sure it passes ATS systems is about to be useless. Here’s an excerpt from AI for Recruiting: A Definitive Guide for HR Professionals, published by an AI-powered resume screening and candidate tracking vendor for busy recruiters.

Intelligent screening software automates resume screening by using AI (i.e., machine learning) on your existing resume database. The software learns which candidates moved on to become successful and unsuccessful employees based on their performance, tenure, and turnover rates. Specifically, it learns what existing employees’ experience, skills, and other qualities are and applies this knowledge to new applicants in order to automatically rank, grade, and shortlist the strongest candidates. The software can also enrich candidates’ resumes by using public data sources about their prior employers as well as their public social media profiles.
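That excerpt describes garden-variety supervised learning: featurize past employees, label them by outcome, then score new applicants against the learned pattern. Here’s a deliberately toy sketch of the idea in Python – every feature, number, and name is invented, and real vendors obviously use far richer signals and models:

```python
# Toy illustration of outcome-based resume ranking (all data hypothetical).
# Real vendors train on performance/tenure labels; here "success" is a 0/1 flag.

def train_profile(employees):
    """Average the feature vectors of successful past employees."""
    successes = [e["features"] for e in employees if e["success"]]
    n = len(successes)
    dims = len(successes[0])
    return [sum(v[i] for v in successes) / n for i in range(dims)]

def rank_candidates(profile, candidates):
    """Score each candidate by dot product with the learned profile, best first."""
    scored = [(sum(p * f for p, f in zip(profile, c["features"])), c["name"])
              for c in candidates]
    return [name for score, name in sorted(scored, reverse=True)]

# Features: [years_experience, num_matching_skills, has_degree]
employees = [
    {"features": [5, 4, 1], "success": True},
    {"features": [1, 1, 0], "success": False},
    {"features": [7, 5, 1], "success": True},
]
candidates = [
    {"name": "A", "features": [2, 1, 0]},
    {"name": "B", "features": [6, 4, 1]},
]
profile = train_profile(employees)
print(rank_candidates(profile, candidates))  # ['B', 'A']
```

Even in this ten-line version you can see the core problem: the ranking can only ever reproduce whatever made past hires “successful,” bias included.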

Now for all the questions: What are the “other qualities” that they measure? How much weight do they give to experience vs. skills? How much data does a company need to use these algorithms effectively? How does a company without loads of data use this technology? Who decides which data to use? Who reviews the training data for accuracy and bias – the company or the vendor? How does this company avoid bias, especially if the people who advance are all white men (due to unconscious bias in the promotion process)? What data points are most valuable on candidates’ social profiles? Which social profiles are they pulling from? Are personal websites included? Which companies are using this technology? Are candidates without publicly available social media data scored lower? Of the companies using these technologies, who’s responsible for asking the questions above?

This technology gives a whole new meaning to submitting your resume into a black hole.

Harsh words, harsher realities

“You’re out of time. If you can’t already write a piece of code to find the longest palindrome in a string, you probably won’t be able to do so before the automation revolution deals a body blow to your banking job sometime around 2022. Cathy Bessant, the chief technology and operations officer at Bank of America, said as much in conversation with Bloomberg last month. If you’re a bank employee who’s technologically illiterate, Bessant said it’s no good rushing to do a few coding courses on the side. You’re too late: things are moving too fast. “The kind of skills that we’ll need have to be taught beginning at a much earlier age,” said Bessant. “Whether you can train the same worker at the same time you’re changing their job remains to be seen.” – Can’t Code? The only other thing that will save your job
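For the record, the interview chestnut Bessant alludes to isn’t exotic. A longest-palindromic-substring function is a few lines of Python using the classic expand-around-center approach (my sketch, not anything from the article):

```python
def longest_palindrome(s: str) -> str:
    """Return the longest palindromic substring of s (expand around center)."""
    if not s:
        return ""
    best = s[0]
    for i in range(len(s)):
        # Try odd-length centers (i, i) and even-length centers (i, i+1).
        for lo, hi in ((i, i), (i, i + 1)):
            while lo >= 0 and hi < len(s) and s[lo] == s[hi]:
                lo -= 1
                hi += 1
            # The loop overshoots by one on each side.
            candidate = s[lo + 1:hi]
            if len(candidate) > len(best):
                best = candidate
    return best

print(longest_palindrome("bananas"))  # 'anana'
```

Whether being able to write this saves anyone’s banking job by 2022 is, of course, a separate question.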

Immediate thoughts: 

Does this include executives and leadership?

Are they doing any work to train their best and brightest in these skills?

Where will these bankers go now?

Does it even matter since this is the new reality:

“Huy Nguyen Trieu, the former head of macro structuring at Citi, told us he knows of a team of just four algorithmic traders who now manage 70% of the trades that were done by 140 people in 2010”

Luckily, there’s a sliver of hope for bankers in the form of soft skills:

“Not for nothing has Goldman Sachs president David Solomon been extolling the virtues of a well-rounded education that incorporates public speaking and communication. Just as banks need geeks, they’ll also need exceptionally charismatic individuals to act as the face of the new automated reality.”

Maybe I should launch a new workshop as part of my power skills series: How to Charm the Pants Off of Your Audience and Save Your Job

Will black box algorithms be the reason you don’t get your next job?

A good example is today’s workplace, where hundreds of new AI technologies are already influencing hiring processes, often without proper testing or notice to candidates. New AI recruitment companies offer to analyze video interviews of job candidates so that employers can “compare” an applicant’s facial movements, vocabulary and body language with the expressions of their best employees. But with this technology comes the risk of invisibly embedding bias into the hiring system by choosing new hires simply because they mirror the old ones.

– Artificial Intelligence—With Very Real Biases

Beyond bias we should be asking serious questions about the data that these algorithms are based on: what data are they using to determine the connection between facial movements, vocabulary, and body language as predictors of job performance?

More from the article above:

“New systems are also being advertised that use AI to analyze young job applicants’ social media for signs of “excessive drinking” that could affect workplace performance. This is completely unscientific correlation thinking, which stigmatizes particular types of self-expression without any evidence that it detects real problems. Even worse, it normalizes the surveillance of job applicants without their knowledge before they get in the door.”

AI is going to make your awful manager even worse

Before you continue reading past the click-bait title, reflect on the last bad manager you had. Remember how they made you feel. Remember the things they did that made your life miserable. Remember the incompetence. Remember that managers don’t get promoted to management because they’re good managers.

I know, it’s not pleasant. I’ve had some pretty awful managers too (but I’ve also had a billion jobs, so it’s inevitable).

Ok. Now read on.

HR tech is hot. Nearly $2 billion in investment hot. And AI is hotter than bacon. So combining HR tech and AI is a sizzling idea (still with me?).

Enter all the startups ready to make managers’ lives easier and employees’ lives more miserable with algorithms to solve all the HR problems. The Wall Street Journal takes a peek into the future of management in How AI Is Transforming the Workplace.

“Veriato makes software that logs virtually everything done on a computer—web browsing, email, chat, keystrokes, document and app use—and takes periodic screenshots, storing it all for 30 days on a customer’s server to ensure privacy. The system also sends so-called metadata, such as dates and times when messages were sent, to Veriato’s own server for analysis. There, an artificial-intelligence system determines a baseline for the company’s activities and searches for anomalies that may indicate poor productivity (such as hours spent on Amazon), malicious activity (repeated failed password entries) or an intention to leave the company (copying a database of contacts). Customers can set activities and thresholds that will trigger an alert. If the software sees anything fishy, it notifies management.”
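Strip away the marketing and the baseline-plus-anomaly system described there is basic statistics: learn a normal range per activity metric, then alert when today’s value lands far outside it. A toy z-score version (the metric names, numbers, and threshold are all invented – Veriato’s actual models aren’t public):

```python
import statistics

def flag_anomalies(baseline, today, z_threshold=2.0):
    """Flag metrics whose value today deviates sharply from the historical baseline.

    baseline: {metric: [historical daily values]}; today: {metric: value}.
    Metric names and the threshold are made up for illustration.
    """
    alerts = []
    for metric, history in baseline.items():
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
        z = (today[metric] - mean) / stdev
        if abs(z) > z_threshold:
            alerts.append(metric)
    return alerts

baseline = {
    "shopping_minutes": [5, 8, 6, 7, 4],   # "hours spent on Amazon"
    "failed_logins": [0, 1, 0, 0, 1],      # "repeated failed password entries"
}
today = {"shopping_minutes": 120, "failed_logins": 1}
print(flag_anomalies(baseline, today))  # ['shopping_minutes']
```

Note what the sketch can’t tell you: whether 120 minutes of “shopping” was a compliance purchase, research, or an employee checking out. The alert fires either way.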

Now remember your asshole manager. Imagine if they had access to this tool. Imagine the micromanagement.


(Side note: I wonder if employees get access to their bosses’ computer logs. Imagine that!)

Let’s keep going.

Another AI service lets companies analyze workers’ email to tell if they’re feeling unhappy about their job, so bosses can give them more attention before their performance takes a nose dive or they start doing things that harm the company.


It’s hard not to read that as an unhappy worker is somehow a threat to the company. Work isn’t all rainbows and unicorns. We can’t be happy 40 hours a week even in the best of jobs. Throughout our work lives we deal with grief, divorce, strained friendships, children, boredom, indecision, bad coworkers, bad bosses, bad news, financial stress, taking care of parents, etc etc etc. And sometimes that comes out in the course of our days spent buried in emails. The idea of management analyzing your emails on the watch for anything that isn’t rainbows ignores the reality of our work lives.

What data is the algorithm built on? What are the signs of unhappiness? Bitching about a coworker? Complaining about an unreasonable deadline? Micromanaging managers? What’s the time frame? One day of complaints or three weeks? Since algorithms take time to tweak and learn, what happens to employees (and their relationships with management) who are incorrectly flagged as unhappy while the algorithm learns?
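Nobody outside the vendor can answer those, but the crudest imaginable version of email unhappiness detection is a keyword tally – something like this invented sketch, which should make the brittleness obvious:

```python
# Crude sketch of email "unhappiness" flagging. The lexicon, scaling,
# and threshold are all invented for illustration.
NEGATIVE = {"frustrated", "unfair", "quit", "miserable", "pointless"}
POSITIVE = {"thanks", "great", "excited", "happy"}

def unhappiness_score(emails):
    """Net negative-word count across a batch of emails, scaled to 0-10."""
    words = [w.strip(".,!?").lower() for e in emails for w in e.split()]
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    return min(10, max(0, (neg - pos) * 2))

emails = ["This deadline is pointless and I am frustrated.",
          "Thanks for the notes."]
print(unhappiness_score(emails))  # 2
```

A bad day, a sarcastic joke, or a missing exclamation point all look identical to a lexicon like this – which is exactly why the questions above matter.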

Moreover, what do those conversations look like when “unhappy” employees are being called into management’s office?

Manager: Well we’ve called you in because our Algorithm notified me that you’re unhappy in your role.


Manager: Right… so … can you tell me what’s making you so unhappy?

Employee: I’m fine.


Manager: Not according to The Algorithm. It’s been analyzing all your emails. I noticed you used the word “asshat” twice in one week to describe your cubicle mate. Your use of the f word is off the charts compared to your peers on the team. You haven’t used an exclamation point to indicate anything positive in at least three weeks. The sentiment analysis shows you’re an 8 out of 10 on the unhappy chart. Look, here’s the emoji the algorithm assigned to help you understand your unhappiness level.

Employee: It’s creepy you’re reading my emails.


Manager: Now remember, you signed that privacy agreement at the beginning of your employment and consented to this. You should never write anything in a company email that you don’t want read.



And do companies who purchase this technology even ask the hard questions?

The issue I have with this tech, apart from it being ridiculously creepy, is that it makes some seriously bad assumptions. They assume:

  • All managers have inherently good intentions
  • All managers are competent
  • All organizations train their managers on how to be effective managers
  • All organizations train their managers on appropriate use of technology
  • Managers embrace new technology

Those are terrible assumptions. Here’s a brief, non-exhaustive list of issues I’ve had with managers over the past ten years:

  • Managers who can’t define what productivity looks like (beyond DO ALL THE THINGS)
  • Managers who can’t set and communicate goals
  • Managers who can’t listen to concerns voiced by the team (big egos)
  • Managers who can’t understand lead scoring and Google Analytics (from the CEO and VP of sales and marketing, no less)
  • Managers who can’t use a conference call system (technology-am-I-right?!)
  • Managers with no interpersonal communication skills and lack of self-awareness

Maybe we can all save ourselves by adding a new question when it’s our turn to ask questions in the interview:

“Tell me about your approach to management. What data do you use to ensure your AI technology accurately assesses employee happiness?”

Maybe I’m just cynical. Maybe it’s because I’ve had a few too many bad managers (as have my peers). Maybe I just feel sorry for good employees struggling under bad management. And maybe organizations should get better about promoting people who can manage (i.e. people with soft skills) instead of those who can’t before this technology is adopted.

Anyhow, to wrap up, this whole post has me feeling so grateful for the good managers I’ve had. The ones who got it right. Who listened, encouraged, and provided constructive feedback on all my work. And though I’m sure they’re not reading this post, a shout out to my favorite, amazing managers from two very different jobs: Kirsten and Cathy. They didn’t need an algorithm to understand their team’s performance and employee happiness. They had communication skills, empathy, and damn good personalities that made working for them a delight.

Chatbot Conversation Design: The future of English major jobs?

“So what’re you going to do with an English degree?” – Clueless relatives and friends of English majors everywhere. 

English majors have skills. They create narratives. They’re creative, or at least understand the creative process. They’re comfortable with ambiguity, critical thinkers, can make sense out of massive amounts of information, and have a damn good command of the English language. They’re good at thinking from different perspectives (the foundation of UX!). Yet English majors get a lot of shit for their pursuit of words and language, despite the fact that it’s going to be English majors with mad soft skills who will survive the future robots-take-our-jobs-apocalypse.

Soon the answer to WTF-are-you-going-to-do-with-an-English-degree may just be: conversation design. Chatbots are everywhere, which means there’s a need for people who can write the scripts and design conversation flow. There’s not a steady stream of conversation design jobs yet, but I’m seeing more pop up. Yesterday I saw the job post above and it screamed English major (albeit an English major with UX training, but hey, that’s what GA is for). Excellence in English writing and communications? Check. Copywriting and content creation? Check; easy to come by for any English major who’s ever had a blog, run a club’s social accounts in school, etc. Knowledge of current conversational bots? Check, they’re everywhere. The rest can be gained with a little YouTube tutoring and Googling.

I’m a bit obsessed with chatbot design right now. I was super impressed by Cindy Gallop’s negotiation chatbot. Mostly though I’m curious about the people who design the conversations, how chatbots improve, and the fine line between shitty and helpful. I also think there’s great potential for chatbots in the career advising space. I’d love to work on a project designing a chatbot for career changers. So if you’re a chatbot company interested in exploring this area, get in touch with me.

Can Artificial Intelligence find me a job?

Imagine if LinkedIn had a smart technology that guided you through each step of your job search. Imagine if it could accurately match you to jobs based on your background, conduct a skill gap analysis, and recommend courses to make you more qualified for a job. Imagine if it could pair you with a mentor and recommend conversational topics and questions based on mutual interests.

Admittedly, that’s all a bit of a wish list. But my hopes were up when I saw an IBM College tweet about a new service with Watson. For job seekers interested in working at IBM, Watson will help provide “job recommendations that match your skills and interests.” Watson, the do-it-all cognitive technology, is dipping its non-existent toes into career coaching waters. As a career coach who’s spent years helping people figure out which jobs are right for them, I had to give Watson a try.

Interacting with Watson starts off easy. Like any good coach, Watson gives you options. It offers the option to explore common questions, answer questions about your experience, or upload your resume to let Watson recommend opportunities for you. I chose the easiest option, the resume upload, because it’s the laziest.

Seconds later, Watson had a list of job recommendations, and the initial recommendations were in line with my background. It recommended three job categories at IBM to explore: Marketing, Consulting, and HR. Each category contained 50 jobs. Watson ranked each job by best match, with an icon indicating how well I matched the job opportunity and an info box showing which skills made me a match for the job. Unfortunately, the job opportunities ranged greatly in experience level, education, and responsibilities. Oddly, internship opportunities ranked high in my results, though I’ve been out of grad school for 8 years and have 10 years of relevant experience. I assumed Watson would only recommend jobs relevant to my years of experience.

Feeling mildly overwhelmed with 150 matched opportunities, I returned to the beginning to answer questions so Watson could get to know me better. Watson’s questions were related to my work experience, skills, and passions. After answering all of them, Watson recommended a new category to explore: Design and Offer Management. It was a happy discovery. I’m obsessed with UX and immediately found a cool job for a User Experience Designer at Bluemix Garage, their innovation and transformation consultancy, which does work with startup communities around the world. Dreamy.

Watson made discovering opportunities relatively seamless because I didn’t have to experiment with keywords or job titles to find jobs that might be a good fit – a challenge most job seekers struggle with. Watson also shows which of your skills matched you to a job. Compare that to LinkedIn’s job recommendations, which are frequently odd and a mystery, and suddenly Watson seemed quite helpful.
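That skills-matched display sounds like straightforward set overlap between resume skills and a job’s listed requirements. A minimal sketch of that kind of ranking (the jobs, skills, and scoring rule here are my invention, not IBM’s):

```python
def match_jobs(resume_skills, jobs):
    """Rank jobs by skill overlap with the resume, returning (title, matched skills)."""
    resume = set(s.lower() for s in resume_skills)
    results = []
    for job in jobs:
        required = set(s.lower() for s in job["skills"])
        matched = sorted(resume & required)
        score = len(matched) / len(required)  # fraction of requirements met
        results.append((score, job["title"], matched))
    results.sort(reverse=True)  # best match first
    return [(title, matched) for score, title, matched in results]

jobs = [
    {"title": "UX Designer", "skills": ["wireframing", "research", "writing"]},
    {"title": "Data Engineer", "skills": ["python", "sql", "etl"]},
]
print(match_jobs(["writing", "research", "SQL"], jobs))
# [('UX Designer', ['research', 'writing']), ('Data Engineer', ['sql'])]
```

Showing the matched skills alongside the rank is the part LinkedIn gets wrong: the overlap itself is the explanation, and surfacing it costs nothing.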

Unfortunately Watson’s helpful magic stalled as I moved from recommendations to interactive chat. As I explored recommended categories, Watson encouraged me: “While you explore these jobs, feel free to ask me any questions you may have about IBM. For example, I want to know about… to learn about company culture, locations and more.”

So I inquired. “Do you have jobs in Portland?” Watson displayed jobs in my category that were tagged “multiple cities” as well as San Francisco and Austin, etc. Was Watson making a hipster connection? I’m still not sure if they have jobs in Portland.

Then I thought of questions a college student might ask. I asked, “what jobs do you have for college graduates?” Watson replied: Based on your resume, it looks like you would be interested in these job categories. You can learn more about these categories or explore opportunities in each category.

I tried to get more specific. “What internships do you have for college graduates?” Watson repeated the same answer as above. “What is the best way to get a job at IBM?” Same reply: Based on your resume… I asked, “How long is the hiring process?” I was directed to a web page on the recruiting process which had little detail. I asked if IBM has MBA-level jobs. The response was “Super, I found opportunities in one job category.” The category: Project Executive. I tried to clarify, “Are there roles for MBAs in strategy?”

Watson’s response: “Sorry that area is out of my expertise.” Watson apparently hasn’t met IBM’s MBA team.

With daily articles on artificial intelligence and the power of machine learning appearing in my newsfeed regularly, it’s easy to get caught up in the hype of cognitive technology. I made a lot of assumptions as I started to interact with Watson, all driven by hype. I assumed Watson would show me unique opportunities based on my specific questions. I assumed it would offer insights about working at IBM beyond a link to a corporate webpage. I assumed it understood job seekers better. Watson isn’t there yet. To be fair, this service is in its infancy. IBM notes that Watson is learning and can’t answer all the questions.

I remain optimistic though. Artificial intelligence applied to the job search is a potential too good to ignore. Some companies already see the future. WayUp just raised $18 million for their platform, which uses machine learning to improve job matches between students and employers. Looking beyond improved matching, a smart service that helps people navigate the job search – an anxious, joyless, and time-consuming process that everyone dislikes – is enticing. There are not enough human career coaches to assist people through the coming workforce disruption. People need guidance as they think through retraining and upskilling options. A smarter Watson could serve as a virtual career coach and support system to help people navigate an increasingly ambiguous future of work.

I look forward to that day.

Soft skills are anything but soft

“If there’s one lesson you can take away from the work I’ve done recently on social skills is that you need to have both types of skills. The thing about being a good conversationalist is that lots of people are. So that alone won’t get you anywhere. What you need is to be well-rounded, I don’t mean that in a loose way but in a rigorous way. Try to be good at two things, especially two things that are not that closely related to each other. Two things that it’s uncommon to be good at together. One of them is that most people are really good coders or programmers, a lot of them might be not so socially skilled. So if you can do both those things you’re going to be incredibly valuable because you have an unusual combination of skills and you’re hard to replace. So if you got good technical skills and soft skills you’re like gold to employer. So seek out opportunities to be good at unusual combination of things.”

– David Deming, Professor of Education and Economics at the Harvard Graduate School of Education, giving advice to employees on the role of soft skills in the future of work, on the Future of Work Podcast episode, The Future of Education, Skills, and the Economy.

There’s much to dive into in this podcast: the unbundling of higher education, the role of soft skills with AI technology and who is responsible for teaching those skills, income inequality, and a discussion on what we actually mean when we refer to the skills gap.

I’m also on a mission to reframe soft skills. Soft skills are power skills. If you can build relationships, influence, and communicate your ideas in a powerful narrative with impact, those are power skills. There’s nothing soft about those skills.