How to learn about ML/AI if you don’t have tech skills

Art by AI

I’m a liberal arts grad. I love words and language. I teach soft skills. Qualitative data is my jam. I’m also obsessed with machine learning (ML) and artificial intelligence (AI).

In 2015 I tumbled down the AI rabbit hole after discovering a long read on the fabulous site Wait But Why, which pairs explanations of complex ideas with hilarious stick figures. The two-part series on AI, The Artificial Intelligence Revolution, was my gateway article to the world of AI, and later to ML as part of AI.

So far my self-directed learning journey has only included reading about AI and writing about its effect on hiring and the future of work. I can’t code in Python (and have zero plans to do anything with R). My data background includes data analytics, cleaning data, and putting it into Tableau, but nothing close to data science. I also have no interest in going that far professionally. As a non-tech person trying to access ML/AI, it’s been a challenge to figure out where I fit in. I’ve uncharacteristically avoided meetup groups and conferences on the subject since I don’t have the tech skills.

Not me.

Last month I changed that. I got tired of reading. I wanted idea exchanges. So I attended an ML/AI unconference in PDX. And hot damn, I found my people!

An unconference is the opposite of the standard conference setup. Instead of corporate-sponsored keynotes paired with bland chicken and an abundance of shy speakers who read PowerPoints, the participants chose the content. We pitched and voted on what we wanted to talk about. The result was facilitated conversations about subjects we were curious about and a format that flowed. It was the ideal setup for idea exchange and learning. If you’re conference-weary, an unconference will restore your faith in professional development.

Many people at the unconference were data scientists or computer scientists, and some were working on ML projects. A few were students or job seekers. I met one other person like me: a communications expert without a technical background who works for a machine learning platform, BigML (and they’re doing rad stuff).

In our sessions we covered a wide range of topics in ML/AI: novel data sets, making AI more accessible to the masses, establishing trust with users, data security, AI decision making re: self-driving cars and the Arizona accident, becoming a data scientist or machine learning engineer, the future of companies and jobs (my pitch!), learning ML/AI as a newcomer (do you learn the math, the code, or find a project first? plenty of debate on this!), and more side conversations that spilled out of the main sessions.

As a non-tech outsider, it’s a bit intimidating to participate in such a cutting-edge tech space. I think ML/AI people forget that at times. One of the guys I met at the conference noted that when you’re an expert, it’s hard to remember how hard it is for others to start in your field. I’ll add that this goes double if you’re in a quant- and code-heavy field like machine learning. Luckily, most everyone at the unconference made it easy to participate (as did the unconference format).

My main takeaway though is that you don’t need to be a software engineer, data science expert, or code wizard to understand ML/AI.

So for all the people who are curious about ML/AI but don’t know how to start engaging in these communities, here’s how. 

Learn the basics: Know the difference between machine learning and AI; understand the difference between Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Super Intelligence; understand the basics of data science. There is no shortage of intro articles and videos on the subject (two examples below).

Here’s a helpful Quora answer about the differences between a data scientist and a machine learning engineer. 

Prior to the unconference I was slightly worried I’d be left out of the conversation if it turned too technical. I prepared by returning to a set of YouTube videos I’d skimmed a while back: Fun and Easy Machine Learning. The playlist uses animation to explain more than 15 models and help you better understand machine learning.

Ignore the math and coding right now: Unless you want to become a data scientist or machine learning engineer, ignore it. You don’t need it to understand the basics or to explore the products and impacts of ML/AI. For example, the Fun and Easy Machine Learning series sometimes dives into the math behind the models. Treat it as you would a foreign language: when you don’t understand the meaning, keep moving forward and focus on what you do understand. Fill in the blanks later.

Read everything about ML/AI in the area you’re interested in. ML/AI for non-tech people is a huge field, so narrow it down. Start with general articles about artificial intelligence and learn about its expected impact. The World Economic Forum has good articles with a global perspective. For business impacts, check out this history of ML/AI technology by industry/verticals. Then head over to CB Insights to study ML/AI companies (and subscribe to their newsletter, as they’re cutting edge everything). Then pick an industry that interests you, either one that you work in or one that you want to work in. Read everything you can about how machine learning is affecting that industry (it’s affecting all of them; right now finance, healthcare, and insurance are among the industries talked about the most). Explore products and platforms in that industry that use ML/AI. Read case studies. I study the future of work, so I read everything I can about ML/AI and its effect on workers and organizations: McKinsey, AXIOS, MIT, plus I play with HR Tech.

Avoid the hype. It’s easy to get caught up in the shiny promises of AI. Instead, pay attention to counter-narratives, often published outside the tech reporting ecosystem. Find the counter-narrative about AI in your field. I read the amazing research and writing by Audrey Watters at Hack Education for a counter-narrative to AI edtech hype. Explore bias in ML/AI. Understand that AI isn’t neutral and that gender and race bias gets coded into AI systems. Weapons of Math Destruction is an excellent book (and 99% Invisible has a good podcast episode on it). We need diverse perspectives and people in ML/AI fields to fight these biases, and non-technical people are part of that fight.

Take a course: FutureLearn, an online learning platform with a name after my own heart, offers an Intro to Data Mining course where you’ll learn the basics of classification algorithms. It’s a smooth intro to applied machine learning. They also offer an advanced course to build your skills further.
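If “classification algorithms” sounds abstract: a classifier learns from labeled examples and predicts the label of something new. Here’s a minimal sketch in Python of one of the simplest, a 1-nearest-neighbor classifier (the study-hours data and labels are invented purely for illustration):

```python
# A tiny 1-nearest-neighbor classifier: label a new point
# with the label of the closest training example.
# The (hours studied, outcome) data below is made up for illustration.

def classify(point, examples):
    """Return the label of the training example nearest to `point`."""
    nearest = min(examples, key=lambda ex: abs(ex[0] - point))
    return nearest[1]

training = [(1, "fail"), (2, "fail"), (6, "pass"), (8, "pass")]

print(classify(3, training))  # closest example is (2, "fail") -> "fail"
print(classify(7, training))  # closest example is (6, "pass") -> "pass"
```

Real courses cover far more robust algorithms, but the core idea is the same: learn patterns from labeled data, then apply them to new cases.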

Go to an event and talk to people: This is the intimidating part. But get over it, embrace the awkwardness, and commit to asking curious questions. Remind yourself of the things that you know. Write down the things that you want to learn. Talk to people until you get the answers to your questions. Ask people how they got into their work, what impact they’re having, and how they’d explain their work to a non-tech person. Tell them you’re curious. Some people will just talk at you. Others will teach you. Keep in touch with the people who teach you and simply move on from the ones who talk at you.

Get a project: This builds on not worrying about the math and coding. Instead, get a project. What problem do you want to solve? What problem does your organization need to solve? What data is available? What data is missing? How could ML/AI solve your problem? Starting there will help lead you in the right direction. You might not have an answer right away. That’s ok. It may take a while to solve it. But that’s the point. You’re learning. Ambiguity is part of the process. So ask around your workplace. Visit the data science or computer science team in your organization (assuming you have one). Find a data scientist in your network or at ML/AI events and ask them how they’d solve your problem. Ask them to break it down. Ask a computer science student what they think.

Start with curiosity, ignore the part about not having a technical background, and see where it takes you.

Your university is watching/nudging you

Universities are now collecting loads of data on students: their physical whereabouts, their course progress, when they get online, and even what they do when they’re online.

The president of Purdue penned an op-ed challenging higher education (and hopefully edtech) to think critically about how we use students’ data, especially when it comes to behavioral nudging, lest we end up with a social rating system like China’s:

Somewhere between connecting a struggling student with a tutor and penalizing for life a person insufficiently enthusiastic of a reigning regime, judgment calls will be required and lines of self-restraint drawn. People serene in their assurance that they know what is best for others will have to stop and ask themselves, or be asked by the rest of us, on what authority they became the Nudgers and the Great Approvers. Many of us will have to stop and ask whether our good intentions are carrying us past boundaries where privacy and individual autonomy should still prevail.

Even more reason to make that LinkedIn connection

I’m still on an HR Tech deep dive. This time I found a remarkable platform that takes a proactive approach to employee referrals. Teamable helps employees make referrals and reach out to their contacts for opportunities. They do it by mining current employees’ social contacts and building profiles of potential candidates.

Here’s how it works:

This is even more motivation to connect with people: build relationships and get discovered.

I’m still conflicted about all the HR Tech that creeps on you. There’s a great deal of social scraping going on across HR Tech. But at least this platform helps existing employees improve their referrals (and get money) and helps people who are actively building relationships get seen and, hopefully, hired.

Here’s a little more on what Teamable is up to and what they’ll do with their $5 mil round of funding that they don’t need.

BONUS: The founder’s badass bio mentioned rugby and travel, which is basically the greatest:


Rugby and travel also taught me everything I need to know about business.

Hype slayer

Which reminds me:

Navigating AI in the Job Hunt

Just dropping this Guardian article off here: ‘Dehumanising, impenetrable, frustrating’: the grim reality of job hunting in the age of AI

It features plenty of questions we should all be asking about AI in the job search. It also centers the discussion on the maddening experience of searching for work when AI is your evaluator and the gatekeeper to getting hired. It’s ironic that organizations want more employees with soft skills, yet the recruiting experience is becoming a less human process. On top of that, we’re outsourcing the identification of those soft skills to technology that still isn’t very good at it.

This shift has already radically changed the way that many people interact with prospective employers. The standardised CV format allowed jobseekers to be evaluated by multiple firms with a single approach. Now jobseekers are forced to prepare for whatever format the company has chosen. The burden has been shifted from employer to jobseeker – a familiar feature of the gig economy era – and along with it the ability of jobseekers to get feedback or insight into the decision-making process. The role of human interaction in hiring has decreased, making an already difficult process deeply alienating.

Beyond the often bewildering and dehumanising experience lurk the concerns that attend automation and AI, which draws on data that’s often been shaped by inequality. If you suspect you’ve been discriminated against by an algorithm, what recourse do you have? How prone are those formulas to bias, and how do the multitude of third-party companies that develop and license this software deal with the personal data of applicants? And is it inevitable that non-traditional or poorer candidates, or those who struggle with new technology, will be excluded from the process?

Job seekers will be battling the robots on two fronts: in the recruiting process and as they advance in their careers. It’s not going to get any easier.

Are employers telling candidates AI is evaluating them?

There’s a standout line from a recent INC article on how AI is changing the hiring process. In the post, AI Is Now Analyzing Candidates’ Facial Expressions During Video Job Interviews, the journalist asks:

“Are job candidates told that their facial expressions will be analyzed by algorithm?”

It’s a basic question that needs more examining as new technologies that use AI to screen candidates become more mainstream in the hiring process. The product in question here is HireVue, a video interview platform that uses machine learning to make predictive assessments about a candidate’s future performance. It’s received over $93 million in funding and is used by a variety of organizations like Unilever, Goldman Sachs, Atlanta Public Schools, and BYU.

HireVue is one of the most high profile technologies in the HR Tech space. They’re using technology that enables recruiters to hire more efficiently. But the technology fundamentally changes the way candidates interact with employers and how they are evaluated. A journalist over at Business Insider tried the software and describes the process:

HireVue uses a combination of proprietary voice recognition software and licensed facial recognition software in tandem with a ranking algorithm to determine which candidates most resemble the ideal candidate. The ideal candidate is a composite of traits triggered by body language, tone, and key words gathered from analyses of the existing best members of a particular role.

After the algorithm lets the recruiter know which candidates are at the top of the heap, the recruiter can then choose to spend more time going through the answers of these particular applicants and determine who should move onto the next round, usually for an in-person interview.
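The ranking step described above can be pictured as a similarity score: reduce each candidate to a vector of features, then measure how closely it resembles the composite “ideal candidate” vector. Here’s a rough sketch in Python using cosine similarity — the feature names and numbers are entirely invented for illustration, and this is not HireVue’s actual model:

```python
import math

# Hypothetical composite "ideal candidate" profile -- invented features,
# not HireVue's real ones.
ideal = {"keyword_match": 0.9, "vocal_energy": 0.7, "smile_rate": 0.6}

candidates = {
    "A": {"keyword_match": 0.8, "vocal_energy": 0.6, "smile_rate": 0.7},
    "B": {"keyword_match": 0.3, "vocal_energy": 0.9, "smile_rate": 0.2},
}

def cosine(u, v):
    """Cosine similarity between two feature dicts with the same keys."""
    dot = sum(u[k] * v[k] for k in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

# Rank candidates by resemblance to the ideal profile, best first.
ranked = sorted(candidates, key=lambda c: cosine(candidates[c], ideal),
                reverse=True)
print(ranked)  # the candidate most like the "ideal" comes first
```

Whatever the real implementation, the consequence is the same: whoever the model scores as most “ideal” gets the recruiter’s attention, which is exactly why candidates deserve to know it’s happening.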

The journalist also reported how awkward the experience is. You’re not interacting with anyone during the experience. Instead you’re staring at your own face. And it’s not just journalists who feel this way. For a good chuckle, take a look at the feedback on a HireVue experience on Reddit:

“Hirevue interviews are awkward as well.” (from r/jobs)

“Should I send a thank you letter after a Hirevue interview?” (from r/jobs)

“Was Goldman Sach’s HireVue interview really awkward, or is it just me?” (from r/cscareerquestions)

Interestingly, none of these posts mention being evaluated by AI. And a quick look through company tutorials on how to use HireVue turns up nothing about AI making judgements about your microexpressions and voice.

Obviously this isn’t a representative sample. But companies have a responsibility to tell candidates how they’re being evaluated. And candidates need to ask tougher questions about the evaluation process so they can prepare and adapt accordingly.

And for job seekers who are navigating this impersonal world of HR Tech, here’s some handy advice from the INC article:

“For job candidates, knowing your emotions will be read, it’s a good reason not to apply for any job or to any company you’re not genuinely enthusiastic about. Or it may be a good reason to brush up on your acting skills.”

If you’re curious, here’s how HireVue works:


So about that job offer at Facebook

Corporate surveillance is all the rage among the top tech companies according to this Guardian article, How Silicon Valley keeps a lid on leakers:

For low-paid contractors who do the grunt work for big tech companies, the incentive to keep silent is more stick than carrot. What they lack in stock options and a sense of corporate tribalism, they make up for in fear of losing their jobs. One European Facebook content moderator signed a contract, seen by the Guardian, which granted the company the right to monitor and record his social media activities, including his personal Facebook account, as well as emails, phone calls and internet use. He also agreed to random personal searches of his belongings including bags, briefcases and car while on company premises. Refusal to allow such searches would be treated as gross misconduct.

There are some truly shitty practices happening at top technology companies like Facebook and Google. The paranoia is so bad in some companies that “some employees switch their phones off or hide them out of fear that their location is being tracked.”

So how does a job seeker know to avoid companies that treat their employees like this? And does it even matter, because the long-term benefits of getting Facebook or Google on your resume and working on cutting-edge projects outweigh the risks of daily corporate surveillance? (Yes, it should matter, but try telling that to a new graduate.)

Maybe these practices are more of a reflection on just how comfortable we seem to be getting with corporate surveillance in our professional and personal lives.