So about that graduate program you’re thinking about doing

Nearly 30% of professionals believe their skills will be redundant in the next 1-2 years, if they aren’t already, with another 38% stating they believe their skills will be outdated within the next 4-5 years. – LinkedIn Economic Graph

Has anyone told the students who are putting down $10K for graduate certificates, or taking on $90K in debt, to pursue uncertain career paths that are at risk of AI disruption? Who’s working to make sure that these programs – especially those outside of elite schools – prepare students for emerging jobs?

Who is responsible for that discussion? Admissions? Career services? Deans?

Hiring practices are about to get even more opaque

All that advice about plugging keywords into your resume to make sure it passes applicant tracking systems (ATS) is about to be useless. Here’s an excerpt from AI for Recruiting: A Definitive Guide for HR Professionals, published by a vendor of AI-powered resume screening and candidate tracking software for busy recruiters.

Intelligent screening software automates resume screening by using AI (i.e., machine learning) on your existing resume database. The software learns which candidates moved on to become successful and unsuccessful employees based on their performance, tenure, and turnover rates. Specifically, it learns what existing employees’ experience, skills, and other qualities are and applies this knowledge to new applicants in order to automatically rank, grade, and shortlist the strongest candidates. The software can also enrich candidates’ resumes by using public data sources about their prior employers as well as their public social media profiles.
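The learning loop the vendor describes can be sketched in a few lines. This is a deliberately toy version under my own assumptions – real systems use far richer features than skill lists – but it shows the core mechanic: weight whatever correlates with past “successful” employees, then rank new applicants by those weights. All names and data below are made up.

```python
# Toy sketch of the screening loop: learn skill weights from past
# employees labeled by outcome, then rank new applicants.
from collections import Counter

def train_skill_weights(employees):
    """employees: list of (skills, successful) tuples."""
    good, bad = Counter(), Counter()
    for skills, successful in employees:
        (good if successful else bad).update(skills)
    # Weight = how much more often a skill appears among successful hires.
    return {s: good[s] - bad[s] for s in set(good) | set(bad)}

def rank_applicants(weights, applicants):
    """applicants: dict of name -> set of skills. Returns names, best first."""
    score = lambda skills: sum(weights.get(s, 0) for s in skills)
    return sorted(applicants, key=lambda n: score(applicants[n]), reverse=True)

# Hypothetical training data: who got hired and how they turned out.
history = [
    ({"python", "sql"}, True),
    ({"python", "excel"}, True),
    ({"excel"}, False),
]
weights = train_skill_weights(history)
order = rank_applicants(weights, {"ana": {"python", "sql"}, "bo": {"excel"}})
# order -> ["ana", "bo"]
```

Note what even this toy makes obvious: the model can only reproduce whatever pattern shaped the historical labels. If past success was skewed by biased promotion decisions, the weights inherit that skew automatically.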

Now for all the questions:

  • What are the “other qualities” they measure?
  • How much weight do they give to experience vs. skills?
  • How much data does a company need to use these algorithms effectively? How does a company without loads of data use this technology?
  • Who decides which data to use?
  • Who reviews the training data for accuracy and bias – the company or the vendor?
  • How does a company avoid bias, especially if the people who advance are all white men (due to unconscious bias in the promotion process)?
  • Which data points are most valuable on candidates’ social profiles? Which social profiles are they pulling from? Are personal websites included?
  • Are candidates without publicly available social media data scored lower?
  • Which companies are using this technology, and of those, who’s responsible for asking the questions above?
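One of the questions above – who checks for bias – does have a concrete, well-established starting point: compare selection rates across demographic groups, the EEOC’s “four-fifths rule” heuristic. A minimal sketch, with entirely made-up groups and numbers:

```python
# Adverse-impact check: if the lowest group's selection rate falls
# below 80% of the highest group's, that's the conventional red flag.
def selection_rates(outcomes):
    """outcomes: dict of group -> (selected, total_screened)."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def adverse_impact_ratio(outcomes):
    """Lowest group's selection rate divided by the highest group's."""
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Hypothetical screening results from an automated shortlister.
screened = {"group_a": (40, 100), "group_b": (20, 100)}
ratio = adverse_impact_ratio(screened)  # 0.2 / 0.4 = 0.5 -> red flag
```

This is only the crudest audit – it says nothing about *why* the rates differ – but a vendor or employer that can’t even produce these numbers clearly isn’t asking the harder questions.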

This technology gives a whole new meaning to submitting your resume into a black hole.

Will black box algorithms be the reason you don’t get your next job?

A good example is today’s workplace, where hundreds of new AI technologies are already influencing hiring processes, often without proper testing or notice to candidates. New AI recruitment companies offer to analyze video interviews of job candidates so that employers can “compare” an applicant’s facial movements, vocabulary and body language with the expressions of their best employees. But with this technology comes the risk of invisibly embedding bias into the hiring system by choosing new hires simply because they mirror the old ones.

– Artificial Intelligence—With Very Real Biases

Beyond bias, we should be asking serious questions about the data these algorithms are built on: what data are they using to establish facial movements, vocabulary, and body language as predictors of job performance?

More from the article above:

“New systems are also being advertised that use AI to analyze young job applicants’ social media for signs of “excessive drinking” that could affect workplace performance. This is completely unscientific correlation thinking, which stigmatizes particular types of self-expression without any evidence that it detects real problems. Even worse, it normalizes the surveillance of job applicants without their knowledge before they get in the door.”

GE helps employees make their internal moves

GE isn’t a company that comes to mind as innovative, yet their current work in talent development and helping employees navigate their careers is quite forward-thinking:

Using data on the historical movement of GE employees and the relatedness of jobs (which is based on their descriptions), the app helps people uncover potential opportunities throughout the company, not just in their own business unit or geography. Lots of companies post open positions on their websites. What’s different about this tool, says Gallman, is that it shows someone jobs that aren’t open so that he or she can see what might be possible in his or her GE career.

Showing employees what’s possible, regardless of whether the opportunity is currently open, is a smart move. It anchors the company in employees’ minds by giving them a path to work toward. I left a few jobs because I had no idea what was possible (and neither did my boss). Having multiple paths to explore can open up valuable conversations and go a long way toward retaining talent. Pair that with a new tool that “recommends the training or education someone needs to better perform his or her existing job and to progress,” and GE is making clever use of new analytics and algorithmic tools to retain employees.
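GE hasn’t published how it computes “the relatedness of jobs (which is based on their descriptions),” but the simplest plausible version is plain text similarity. A hedged sketch, using word-overlap (Jaccard) similarity on invented job descriptions:

```python
# Hypothetical "job relatedness from descriptions": score job pairs by
# the overlap of words in their descriptions. GE's actual model is not
# public; this is the simplest illustration of the idea.
def relatedness(desc_a, desc_b):
    a, b = set(desc_a.lower().split()), set(desc_b.lower().split())
    return len(a & b) / len(a | b)  # 0.0 = nothing shared, 1.0 = identical

jobs = {
    "data analyst": "analyze business data and build reports",
    "data scientist": "analyze data and build predictive models",
    "field engineer": "maintain turbines on customer sites",
}

def related_jobs(current, jobs):
    """Rank all other jobs by relatedness to the current one."""
    others = [j for j in jobs if j != current]
    return sorted(others, key=lambda j: relatedness(jobs[current], jobs[j]),
                  reverse=True)
```

Even this crude measure would surface “data scientist” as a next step for a data analyst while ranking “field engineer” far lower – enough to start the career conversation the article describes, without any position needing to be open.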

How was this algorithm designed?

Algorithms are everywhere. They make decisions for us, and most of the time we don’t realize it. Remember the United story where a passenger was violently dragged out of his seat? The decision to remove that specific passenger was the result of an algorithm.

As more algorithms shape our life we must ask questions like who’s designing these algorithms, what assumptions do these designers make, and what are the implications of those assumptions?

So I’m giving a huge shout-out to the podcast 99% Invisible for their episode on how algorithms are designed.

The Age of the Algorithm

Featuring the author of Weapons of Math Destruction, the episode takes a look at the subjective data behind algorithms that determine recidivism rates and reject job applicants. The examples used and questions raised in this episode should have us asking much more about the people and companies designing the algorithms that run in the background of our online and offline lives.

“Algorithms … remain unaudited and unregulated, and it’s a problem when algorithms are basically black boxes. In many cases, they’re designed by private companies who sell them to other companies. The exact details of how they work are kept secret.”

Do AI company founders watch Black Mirror?

“Cameras are no longer just for memories but are fundamental to improving our daily lives – both in our personal and professional lives.” – It’s Coming, The Internet of Eyes will allow objects to see, The Next Web

Read the glowing article above where founders gush over a soon-to-be world in which all inanimate objects have tiny cameras that monitor our everyday movements. How does it make you feel? Is this the first time you’ve ever heard of the Internet of Eyes?

“Similar to the Internet of Things, the IoEyes is a network of cameras and visual sensors connected via the internet enabling the collection and exchange of visual data on a scale unimaginable before.”

This was the first time I’d heard of the Internet of Eyes (IoEyes), and it’s absolutely terrifying. Equally terrifying are the founders who believe “IoEyes will only have a positive effect on society as a whole.” These guys seem clueless about the negative impact these technologies will have on society. You’d think there’d be a second thought about the “trillions of frames of potentially actionable data” they’re sucking up when data breaches are happening at a record pace. Or maybe the founders just don’t care because profit and brand. And they’re doing it all to give us a better quality of life, to give us things like better data from our toothbrushing experience:

“Imagine performing a simple daily task and knowing what’s going on inside your body. A real-time visual feed of you brushing your teeth will generate not just one visual signal but millions of layers of signals, including analyzing heart rates, blood conditions, DNA structure, temperature, and emotional state.”

Regardless, these founders (and maybe tech journalists) need to take a break from building (and reporting on) the future of surveillance for a bit of Netflix and chill with Black Mirror. The show is notorious for its dark take on how technologies affect society; its episodes stay in your head long after the credits roll and make you rethink the impact of technology in a visceral way. Every time I read an article like the one above, I wonder whether any of these founders watch the show.

So my Netflix and chill recommendation for the founders is as follows: start with the episode The Entire History of You, then move on to Nosedive, followed swiftly by Shut Up and Dance. Throw in the Christmas episode for fun.

Then get back to me about how positive these technological advances are for society.

PS: IoEyes is also helping to reinforce those pesky gender stereotypes and support controlling personalities:

“The benefits of biometrics and sensors offer invaluable support. From deterring people from driving when they are too intoxicated, to making sure your teenage daughter isn’t bringing home that boy you don’t like when you aren’t around.” 


How Artificial Intelligence will change the world

“Many people I know which are older than I am usually talk about having one job, and one job for life. However, almost everybody who is the age of my students are talking about having multiple jobs. I will be a consultant here, a consultant there, I will work with this company for three days and so on.” – Maja Pantic, professor of affective and behavioral computing at Imperial College London

The Guardian Science podcast hosted a live event on How Artificial Intelligence will change the world, featuring a panel of leading scientists and a robot ethicist. The podcast is worth listening to in full, especially as they go in depth on the difference between narrow and general AI and the implications of general AI.

Like most panels on the future of AI, the discussion turns to jobs and how artificial intelligence will affect them.

Maja Pantic, professor of affective and behavioral computing at Imperial College London:

“The assembly jobs, those are already taken by robots, industry robots [that perform] very simple techniques. However, I believe the Fourth Industrial Revolution is about to come or is coming each day closer. It’s because of how the whole world is moving. There are a couple of things that are important. So one is digitization. Many people I know which are older than I am usually talk about having one job, and one job for life. However, almost everybody who is the age of my students are talking about having multiple jobs. I will be a consultant here, a consultant there, I will work with this company for three days and so on. So it will be the way we do the jobs. Because we have the internet and we can have a lot of different jobs and doing these pieces and giving our expertise as needed. A lot of jobs will be a symbiosis between machines and humans. Doctors already do that.”

Alan Winfield, professor of robot ethics at UWE, Bristol:

“It’s pretty clear that when a job is threatened, even by change, it doesn’t even have to be threatened by going out of existence, just by change, and it’s a job that has a great deal of political or social voice, there is going to be a lot of grumbling heard. Any routine job that you can give a crisp problem definition of, that is somewhat threatened. It may take a long while to before you get there but that’s why I have the best, safest job ever: philosopher. Nobody has a clue what it is, not even philosophers! But in general this is true for many of jobs. Many jobs have some weird core where it’s slightly ill defined what’s going on. But then you have the routine parts and they can be automated. Whether we want to automate them or not depends on how we want to style the job.”

Maja again, this time on the tech industry’s poaching of the brightest minds on AI:

“All these PhD students which they took and all these post-docs which they took, were educated by us, by public money. So it’s absolutely not true that the innovation is theirs and that it can remain in private domain. This is absolutely outrageous that we currently have Google, Amazon, and Facebook, like five companies that are taking absolutely everybody in academia, the PhDs and post-docs. Because we don’t have the next generation. Who will actually educate those people who need reeducation? Who will educate our kids? I think this is outrageous that they will also – because they bought all these really smart guys, they will actually own the innovation.”

Thought parking:

  • Career education is stuck in the one-job-for-life mentality.
  • I wonder how different generations will adapt to jobs that are a symbiosis between human and machine. I’ve had plenty of managers who can’t grasp PowerPoint and CRMs. How do managers plan for that symbiosis now?
  • Job styling seems like it could be a job in its own right – an ethnographer who observes the day-to-day work of employees, conducts interviews with those who do the tasks, and develops recommendations on how automation can improve job categories.
  • I’ve read plenty of articles about tech companies poaching from academia. I always thought of it in positive terms – the researchers are going to make so much more money and see their impact so much quicker – yet never considered the implications for future generations. Each time tech poaches from academia there are fewer people to teach, mentor, engage, and contribute to the higher education communities.