“I think it’s important to note what the use of facial recognition [in airports] means for American citizens,” Jeramie Scott, director of EPIC’s Domestic Surveillance Project, told BuzzFeed News in an interview. “It means the government, without consulting the public, a requirement by Congress, or consent from any individual, is using facial recognition to create a digital ID of millions of Americans.” – The US Government Will Be Scanning Your Face At 20 Top Airports, Documents Show
Facial recognition systems are headed to the airport, and it’s happening at a rapid pace, without public comment or guardrails around data privacy and data quality.
Consider this news alongside new reporting from ProPublica, which found that TSA’s body scanning technology discriminates against black women by regularly flagging them as security threats, resulting in increased screening for those women. Then add the reporting out from the New York Times last week called The Privacy Project. One of the most impactful pieces took footage from three cameras in Bryant Park and used it to build facial recognition tracking software for less than $100. In the end they used it to identify one of the people in the park, and it took only a few days’ work.
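Mechanically, a system like the one the Times built boils down to comparing face embeddings: a model turns each face image into a vector of numbers, and two faces “match” when their vectors are close. Here’s a minimal sketch of that matching step – the vectors are made up stand-ins for real embeddings, and the 0.6 threshold is borrowed from the convention used by the open-source face_recognition library:

```python
# Face matching, stripped to its core: compare embedding vectors.
# The vectors below are invented for illustration; real embeddings
# come from a neural network and have 128+ dimensions.

def euclidean(a, b):
    """Straight-line distance between two embedding vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def is_match(known, candidate, threshold=0.6):
    """Treat two faces as the same person when their embeddings
    are closer than the threshold (0.6 is a common default)."""
    return euclidean(known, candidate) < threshold

known_face = [0.1, 0.4, 0.8]       # the stored "digital ID"
camera_frame = [0.12, 0.41, 0.79]  # embedding from park footage
stranger = [0.9, 0.1, 0.2]         # someone else entirely
```

With `is_match(known_face, camera_frame)` returning a match and `is_match(known_face, stranger)` not, you can see why the hard (and expensive) part was never the matching logic – it’s the cameras and the face database.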
An AI dystopia in which bias is encoded into the algorithms and marginalized communities are further marginalized is hurtling towards us faster than the average person can keep up.
The increased use of facial recognition in public spaces puts our society on track toward a system that’s not entirely different from China’s social credit system. From the BuzzFeed article quoted above:
The big takeaway is that the broad surveillance of people in airports amounts to a kind of “individualized control of citizenry” — not unlike what’s already happening with the social credit scoring system in China. “There are already people who aren’t allowed on, say, a high-speed train because their social credit scores are too low,” he said, pointing out that China’s program is significantly based in “identifying individual people and tracking their movements in public spaces through automated facial recognition.”
It all reminded me of a tweet I saw this week which captures my frustration at American journalists’ continued reporting on China’s social credit system while ignoring our own American AI nightmare that’s headed full steam ahead:
*whispers* the us invests in mass surveillance and social credit systems the same way china does and yet some of us only ever point to china with outrage and it’s getting tiring— a once blue haired enby from oakland | tired of it (@WellsLucasSanto) April 16, 2019
The use of facial recognition technology isn’t limited to the government. Companies are already doing a bang-up job using facial recognition technology in unexpected places:
All of this makes me wonder: how do average people, those outside of tech, academia, and spheres of influence, push back against these technologies? Can you opt out of facial recognition tech at the airport? How do you know to opt out if you didn’t know it was being used to begin with? What happens when you opt out? Will you be subjected to more invasive searches? Will opting out delay your next flight? So many questions and sadly zero answers.
Talespin, a VR/AR/AI company, is bringing soft skills training to organizations using VR and AI. Call it a Choose Your Own Virtual Reality Management Adventure: these training tools help managers and leadership develop the soft skills they need to perform in complex organizations.
Employers are in a desperate search for employees with soft skills. As we retreat more into our digital spaces we are collectively losing the ability to have conversations with one another. The result is that our relationships, collaboration, and creativity suffer in the workplace. Soft skills are all about people: how to work with, talk to, learn from, give feedback to, negotiate with, listen to, create with, people.
Enter more tech to solve the problem.
My first reaction was this: shouldn’t people learn people skills through interaction with… people? Why are we outsourcing people skills to the virtual machines? How do fake humans teach humans how to be more human?
Also this tech is an indirect threat to my own work. I teach people and organizations how to build soft skills. From relationship building to negotiation to how to have curious conversations, I help people build their soft skills. So yeah, maybe I felt a bit threatened when I first saw it.
Then I stepped back. And I looked closer. And I saw the truly wild stuff going on with this tech. From the article:
“The great thing about VR is you can do something that’s rare in nature, and give people extra repetitions,” Bailenson says. “The cool part of using computer graphics for this, virtual humans, is you can go through as the manager and have this difficult conversation—then you can relive the experience from the point of view of the employee, get to hear your voice coming out of an avatar you’ve chosen to look like you. Now that you’ve got this newly emotionally understood information from being on the receiving end of this bad news, you get to repeat it and do it again.” – Boss Acting Nicer Recently? You May Have VR to Thank
Honestly, I can think of at least five managers from my past who could have used training like this. A lot of HR Tech companies are developing AI that will make your manager worse. Talespin is using AI and VR in an attempt to make them better.
People still need to practice building soft skills outside of a VR experience, so my work isn’t going away any time soon. But it’s wild to see this type of training delivered through new technology. In the future I’d love to see research on how the emotional impact of these virtual reality scenarios changes managers for the better.
I’m also stoked for all the potential types of jobs emerging tech creates. As a creative who runs in HR circles (and worked in HR), I find the HR industry borderline stifling for creative types. Seeing a creative HR product that aims to improve the lives of employees is a welcome surprise.
I’m also curious about employees in this field. I’m curious who writes the scripts, how they work with designers, how the characters are modeled. After all, it’s real humans who build the fake humans who teach humans how to be more human.
I’m curious what type of employees they hire. What skills and backgrounds make up their teams? What type of employees succeed at their company? (Update: it looks like men. More than 90% of their 40+ employees on LinkedIn are men… that’s obviously a problem, especially when it comes to scenarios navigating inclusion in the workplace)
Students are looking for ways to beat AI recruiting tools like HireVue. And now coaching services are offering help:
“A start-up called Finito claims it can coach candidates to beat AI for as long as it takes them to get a job — but at a total cost of nearly £9,000. Candidates are steered through interview dry runs and get tips on what skills are needed to get past robot selections, in sectors including finance, public relations and the arts. They then watch footage back to spot foibles that could be flagged up as nerves.”
Curious about how AI technology might change your job? The NYT offers a glimpse at how algorithms are changing traditional roles. In retail, fashion buyers, who are normally tasked with making purchasing decisions, are increasingly handing the task to algorithms. These algorithms make fashion decisions and predict the next big trend, a task normally associated with creative geniuses. With so much consumer data, predicting trends and stock levels is left to the machines, no intuition needed.
“Retailers adept at using algorithms and big data tend to employ fewer buyers and assign each a wider range of categories, partly because they rely less on intuition.
At Le Tote, an online rental and retail service for women’s clothing that does hundreds of millions of dollars in business each year, a six-person team handles buying for all branded apparel — dresses, tops, pants, jackets.”
The result is two-fold: the industry is using fewer buyers in the decision-making process and retailers are increasingly hiring people who can “stand between machines and customers.” The article notes that there are plenty of areas where automation can’t do the job. Negotiating with suppliers, assessing fabric transparency, and styling all need a human touch.
Instead of replacing all the humans, algorithms are changing how we work. As a result, future roles (and managers) will demand employees who understand how to use algorithms to make decisions that improve the final product, while also understanding the limitations of the technology.
In the future of work (which is already here and we need a better phrase), we’re going to need a lot more of these employees.
Today I tried the Google trick to read a WSJ article, Seven Jobs Robots Will Expand, whose title is clickbait for future of work people like myself. Most of WSJ is behind a paywall, but normally you can access an article through a simple Google search. It turns out WSJ closed that Google loophole some time back. While researching why they did that (to get more subscribers, obvi) and looking for new ways around the paywall (there aren’t any), I found something far more interesting. WSJ has applied a machine learning model to predict whether or not you’ll subscribe to their paper. Based on that score they’ll decide whether or not to show you the article you requested. Visitors are categorized as hot, warm, or cold. More on this move from NiemanLab:
Non-subscribed visitors to WSJ.com now each receive a propensity score based on more than 60 signals, such as whether the reader is visiting for the first time, the operating system they’re using, the device they’re reading on, what they chose to click on, and their location (plus a whole host of other demographic info it infers from that location). Using machine learning to inform a more flexible paywall takes away guesswork around how many stories, or what kinds of stories, to let readers read for free, and whether readers will respond to hitting the paywall by paying for access or simply leaving.
This is wild. I’m off to go play with new browsers to see if I can get that clickbait article (this is the only time I ever use sad Safari).
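For intuition, here’s a minimal sketch of how a propensity score like this might work. Everything here is invented for illustration – the signal names, weights, and tier cutoffs are all assumptions; WSJ’s real model learns from 60+ signals rather than using hand-set weights:

```python
import math

# Hypothetical signals and hand-picked weights, purely illustrative.
# A real model would learn these coefficients from subscriber data.
WEIGHTS = {
    "is_first_visit": -1.2,        # first-timers rarely subscribe
    "visits_past_30d": 0.15,       # frequent readers score higher
    "clicked_markets_story": 0.8,  # high-intent content signal
    "mobile_device": -0.3,
}
BIAS = -0.5

def propensity_score(signals):
    """Logistic score in (0, 1): estimated probability of subscribing."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in signals.items())
    return 1 / (1 + math.exp(-z))

def bucket(score):
    """Map a score to the hot/warm/cold tiers the article describes."""
    if score >= 0.6:
        return "hot"
    if score >= 0.3:
        return "warm"
    return "cold"

visitor = {"is_first_visit": 1, "visits_past_30d": 2,
           "clicked_markets_story": 0, "mobile_device": 1}
score = propensity_score(visitor)
```

A first-time mobile visitor like the one above lands in the “cold” bucket, which under this scheme is exactly who gets the hard paywall.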
What tweaks could we make to the college curriculum that would help students prepare for the changing workforce? This quote from the article, The Global University Employability Ranking 2017, at the Times Higher Education, offers a clever solution:
“The way organisations have to work these days needs to be very fluid. In that kind of world it is important to have people who are really flexible, able to create networks within their organisations and very comfortable working in virtual teams and particularly [what we call] leading beyond authority: not necessarily having to get things done because they are in a team that has a boss,” he says.
But he is “not sure” that the implications of this are “well understood by the academic world and, therefore, when we throw a new graduate into [work] it can be quite overwhelming [for the graduate]”. One solution, he suggests, is for university courses to have more group projects, with assessment focused on the process that the participants go through, rather than the outcome.
Flourishing in such an environment requires “reflection and understanding”, and especially learning from mistakes, Saha says. He is sceptical that this aspect of professional competence is well explored in universities currently, but “in the working world, that is the bit that can be make or break”.
He’s spot on in his assessment and solution. Focusing on group work and assessing participants on their process, instead of outcomes, could go a long way to help students identify their strengths, weaknesses, and improve their leadership and collaboration skills. What really struck me in that sentence is that focusing on process, rather than outcomes, is the opposite of American business culture. American learning and working culture is focused specifically on outcomes – we’re obsessed with assessing programs. Managers evaluate employees based on their results, not collaboration.
I’ve never in my work life been on a team that was evaluated on how well they worked on a project together. It’s almost a revolutionary suggestion.
Hybrid jobs are all the rage and are among the top-paying jobs in the market right now. If you’ve got soft skills, business acumen, and technical skills, you’ve got the ticket to a high-paying job.
Hybrid roles are super interesting to follow because they are so new. Their descriptions and responsibilities differ from one organization to another. This is particularly the case with AI interaction designers, an emerging job category I’m paying a lot of attention to lately (in part because I’m slightly obsessed with chatbot design). Diane Kim, who designs the friendly virtual assistant bot at x.ai, summed up this emerging field in her interview with Wendy and Wade, a career advising chatbot:
“The fact that AI Interaction Design is so new gives me the freedom to be experimental. I also have the unique opportunity to be part of defining an entirely new field. This is actually both what is most exciting and most challenging about my job…But it’s challenging because none of us really know what this is yet — we’re all figuring it out together. It’s really different from, say, being a recent grad in your typical UX role for a visual interface, with decades of research and best practices to follow. We don’t have the same industry standards or guidelines yet for conversational design, but the fun part is figuring them out as we go.”
So it’s within that context that I examined this AI chatbot writer role from JustAnswers.
The skill requirements on this role are massive. Let’s break it down.
You need quantitative and qualitative skills
You need to be seriously good at writing (perfect tone!)
You need to understand Sales (identify (and contribute to?) revenue opps!)
You need to be an experimenter – test and retest
You need mad research skills
You need the collaboration skills to work with diverse teams
You need to understand user experience
You need to dive into professional fields that require years of study AND anticipate which questions users would ask AND write the answers.
This is one hell of a robust skill set. That last ask – expertise in deep professional fields like medicine and law – really threw me off. Who is this person? And will you pay them a shit ton of money for this expertise and skill set?
It’s likely this job is like most job postings: crammed with all the ideal things. There is probably flexibility – an applicant doesn’t have to have all those things.
I’m curious about how much this role pays because writing is an underpaid profession. Some managers who don’t write assume it’s easy – after all they write emails and reports! Copy is everywhere and people assume it’s easy to produce. Thoughtful copy – the kind that strikes the perfect tone! – takes time and creativity to produce. People in quantitative fields tend to overlook that.
But bad writing, especially in AI conversation design, leads to awkward interactions with the product. For example, this was my recent convo with a new recruiting bot, Robo Recruiter:
If writing is underpaid but AI is a hot hot hot field, how much should we be paying our AI chatbot writers?
I’m crowdsourcing your answers below in the comments: how much do you think this job pays? Do you think it pays as much as a machine learning engineer? As a product manager?
I’m a liberal arts grad. I love words and language. I teach soft skills. Qualitative data is my jam. I’m also obsessed with machine learning (ML) and artificial intelligence (AI).
In 2015 I tumbled down the AI rabbit hole after discovering a long read on the fabulous site Wait But Why. The site explains complex ideas paired with hilarious stick figures. The two part series on AI, The Artificial Intelligence Revolution, was my gateway article to the world of AI, and later ML as part of AI.
So far my self-directed learning journey has only included reading about AI and writing about its effect on hiring and the future of work. I can’t code in Python (and have zero plans to do anything with R). My data background includes data analytics, cleaning data, and putting it into Tableau, but nothing close to data science. I also have no interest in going that far professionally. As a non-tech person trying to access ML/AI, it’s been a challenge to figure out where I fit in. I’ve uncharacteristically avoided meetup groups and conferences on the subject since I don’t have the tech skills.
Last month I changed that. I got tired of reading. I wanted idea exchanges. So I attended an ML/AI unconference in PDX. And hot damn I found my people!
An unconference is the opposite of the standard conference setup. Instead of corporate-sponsored keynotes paired with bland chicken and an abundance of shy speakers who read PowerPoints, the participants choose the content. We pitched and voted on what we wanted to talk about. The result was facilitated conversations about subjects we were curious about and a format that flowed. It was the ideal setup for idea exchange and learning. If you’re conference-weary, an unconference will restore your faith in professional development.
Many people at the unconference were data scientists or computer scientists, and some working on ML projects. A few were students or job seekers. I met one other person who is like me, a communications expert without a technical background who works for a machine learning platform, BigML (and they’re doing rad stuff).
In our sessions we covered a roving range of topics about ML/AI: novel data sets, making AI more accessible to the masses, establishing trust with users, data security, AI decision making re: self-driving cars and the Arizona accident, becoming a data scientist and machine learning engineer, the future of companies and jobs (my pitch!), learning ML/AI as a new person (do you learn the math, the code, or find a project first? plenty of debate on this!), and plenty more side conversations that spilled out of the main sessions.
As a non-tech outsider it’s a bit intimidating to participate in such a cutting-edge tech space. I think ML/AI people forget that at times. One of the guys I met at the conference noted that when you’re an expert it’s hard to remember how hard it is for others to start in your field. I’ll add that this goes double if you’re in a quant and code heavy field like machine learning. Luckily most everyone at the unconference made it easy to participate (as did the unconference format).
My main takeaway though is that you don’t need to be a software engineer, data science expert, or code wizard to understand ML/AI.
So for all the people who are curious about ML/AI but don’t know how to start engaging in these communities, here’s how.
Learn the basics: Know the difference between machine learning and AI; understand the difference between Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Super Intelligence; understand the basics of data science. There are no shortage of intro articles and videos on the subject (two examples below).
Prior to the unconference I was slightly worried I’d be left out of the conversation if it turned too technical. I prepared by returning to a set of YouTube videos I’d skimmed a while back: Fun and Easy Machine Learning. The playlist animates more than 15 models to better explain machine learning.
Ignore the math and coding right now: Unless you want to become a data scientist or machine learning engineer, ignore it. You don’t need it to understand the basics or to explore the products and impacts of ML/AI. For example, the Fun and Easy Machine Learning series sometimes dives into the math behind the models. Treat it as you would a foreign language: when you don’t understand the meaning, keep moving forward and focus on what you do understand. Fill in the blanks later.
Read everything about ML/AI in the area you’re interested in. ML/AI for non tech people is a huge field. So narrow it down. Start with general articles about artificial intelligence and learn about its expected impact. The World Economic Forum has good articles with a global perspective. For business impacts, check out this history of ML/AI technology by industry/verticals. Then head over to CB Insights to study ML/AI companies (and subscribe to their newsletter as they’re cutting edge everything). Then pick an industry that interests you. Either one that you work in or one that you want to work in. Read everything you can about how machine learning is affecting that industry (it’s affecting all of them – right now finance, healthcare, and insurance are some of the industries talked about the most.) Explore products and platforms in that industry that use ML/AI. Read case studies. I study the future of work. So I read everything I can about ML/AI and its effect on workers and organizations: McKinsey, AXIOS, MIT, plus I play with HR Tech.
Avoid the hype. It’s easy to get caught up in the shiny promises of AI. Instead, pay attention to counter narratives, often published outside of the tech reporting ecosystem. Find the counter narrative about AI in your field. I read the amazing research and work by Audrey Watters at Hack Education for a counter narrative to AI edtech hype. Explore bias in ML/AI. Understand how AI isn’t neutral and that gender and race bias is coded into AI systems. Weapons of Math Destruction is an excellent book (and 99% Invisible has a good podcast episode on it). We need diverse perspectives and people in ML/AI fields to fight these biases, and non-technical people are part of that fight.
Take a course: FutureLearn, an online learning platform with a name after my own heart, offers an Intro to Data Mining course where you’ll learn the basics of classification algorithms. It’s a smooth intro to applied machine learning. They also offer an advanced course to build your skills further.
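If “classification algorithms” sounds opaque, here’s the idea in miniature before you ever open a course: a classifier assigns a label to a new example based on labeled examples it has already seen. This toy nearest-neighbor classifier uses made-up data (the features, labels, and scenario are all invented for illustration):

```python
# Toy 1-nearest-neighbor classifier: label a new example with the
# label of the most similar training example. All data is made up.

def distance(a, b):
    """How far apart two feature vectors are."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(sample, training_data):
    """Return the label of the training example closest to the sample."""
    nearest = min(training_data, key=lambda pair: distance(sample, pair[0]))
    return nearest[1]

# Features: (hours of sleep, cups of coffee); label: a made-up outcome.
training = [
    ((8, 1), "productive"),
    ((7, 2), "productive"),
    ((4, 5), "not productive"),
    ((5, 4), "not productive"),
]
label = classify((7.5, 1), training)
```

Everything in an intro classification course – decision trees, logistic regression, neural nets – is a more sophisticated way of drawing that same boundary between labeled groups.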
Go to an event and talk to people: This is the intimidating part. But get over it, embrace the awkwardness, and commit to asking curious questions. Remind yourself of the things that you know. Write down the things that you want to learn. Talk to people until you get the answers to your questions. Ask people how they got into their work, what impact they’re having, and how they’d explain their work to a non tech person. Tell them you’re curious. Some people will just talk at you. Others will teach you. Keep in touch with the people who teach you and simply move on from the ones who talk at you.
Get a project: This builds on not worrying about the math and coding. Instead, get a project. What problem do you want to solve? What problem does your organization need to solve? What data is available? What data is missing? How could ML/AI solve your problem? Starting there will lead you in the right direction. You might not have an answer right away. That’s ok. It may take a while to solve it. But that’s the point. You’re learning. Ambiguity is part of the process. So ask around your workplace. Visit the data science or computer science team in your organization (assuming you have one). Find a data scientist in your network or at ML/AI events and ask them how they’d solve your problem. Ask them to break it down. Ask a computer science student what they think.
Start with curiosity, ignore the part about not having a technical background, and see where it takes you.
In 2017, roughly 70,000 postings requested AI skills in the U.S., according to our analysis of job postings. That’s a significant change, amounting to growth of 252% compared to 2010. Burning Glass also found that demand for AI skills is now showing up in a wide range of industries including retail, health care, finance and insurance, manufacturing, information and professional services, technical services, and science/research. – Burning Glass Technologies
I’ve been seeing AI skills pop up in random job posts. I’ve wondered if it’s part of a bigger trend. It’s hard to get perspective since I’m not in the job market. Amazon leads the hiring for AI skills by a mile but GM, Accenture and Deloitte are also investing heavily. The most in-demand AI skills:
software developer/engineer, data scientist, data mining/data analyst, data engineer, computer systems engineer/architect, medical secretary, systems analyst, product manager and business management analyst.
SmartExam acts as a virtual physician’s assistant – an automated medical resident, if you will – that enables primary care providers to deliver efficient remote care while cutting costs and improving outcomes… The intelligent software dynamically interviews patients, using answers to garner more information and support providers in the care delivery process… SmartExam lets providers achieve as much, or more, in a two-minute virtual patient visit as the 20 minutes of provider time needed for an office visit, the company said… “It allows clinicians to operate at the tops of their licenses,” said Constantini. “They can focus on what they do best — diagnosis and treatment.” – Bright.MD raises another $8M for “virtual physician’s assistant” SmartExam
I wonder if current medical students are taught how to integrate AI software into their training.