The dystopian nightmare that’s coming for your international vacations

“I think it’s important to note what the use of facial recognition [in airports] means for American citizens,” Jeramie Scott, director of EPIC’s Domestic Surveillance Project, told BuzzFeed News in an interview. “It means the government, without consulting the public, a requirement by Congress, or consent from any individual, is using facial recognition to create a digital ID of millions of Americans.” – The US Government Will Be Scanning Your Face At 20 Top Airports, Documents Show

Facial recognition systems are headed to the airport, and they’re arriving at a rapid pace, without public comment or guardrails around data privacy and data quality.

Consider this news alongside new reporting from ProPublica, which found that the TSA’s body scanning technology discriminates against black women, regularly flagging them as security threats and subjecting them to extra screening. Then add the New York Times’ Privacy Project, published last week. One of its most impactful reports took footage from three cameras overlooking Bryant Park and used it to build facial recognition tracking software for less than $100. The reporters then used it to identify one of the people in the park, and it took only a few days’ work.
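The Times didn’t publish its code, but to get a feel for how low the barrier has become, here’s a rough sketch of what such a tracker could look like, built on the open-source face_recognition Python library. To be clear, this is my own illustration and not the Times’ actual pipeline; the filenames and match tolerance below are made-up placeholders.

```python
# A minimal "who was in the park?" sketch using the open-source
# face_recognition library (pip install face_recognition).
# Filenames and the tolerance value are illustrative placeholders.
import face_recognition

# One reference photo of the person we're looking for.
known_image = face_recognition.load_image_file("reference_photo.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Stills grabbed from a public webcam feed.
frames = ["frame_0001.jpg", "frame_0002.jpg", "frame_0003.jpg"]

for path in frames:
    frame = face_recognition.load_image_file(path)
    # Find every face in the frame and compute a 128-d embedding for each.
    locations = face_recognition.face_locations(frame)
    encodings = face_recognition.face_encodings(frame, locations)
    for location, encoding in zip(locations, encodings):
        # Compare against the reference photo; lower tolerance = stricter match.
        is_match = face_recognition.compare_faces(
            [known_encoding], encoding, tolerance=0.6
        )[0]
        if is_match:
            print(f"Possible match in {path} at {location}")
```

That’s it: a reference photo, a camera feed, and a free library. The hard part has already been done and packaged for anyone to use.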

An AI dystopia in which bias is encoded into the algorithms and marginalized communities are further marginalized is hurtling towards us faster than the average person can keep up.

The increased use of facial recognition in public spaces puts our society on track to develop a system that’s not entirely different from China’s social credit system. From the BuzzFeed article quoted above:

The big takeaway is that the broad surveillance of people in airports amounts to a kind of “individualized control of citizenry” — not unlike what’s already happening with the social credit scoring system in China. “There are already people who aren’t allowed on, say, a high-speed train because their social credit scores are too low,” he said, pointing out that China’s program is significantly based in “identifying individual people and tracking their movements in public spaces through automated facial recognition.”

It all reminded me of a tweet I saw this week, which captures my frustration with American journalists’ continued reporting on China’s social credit system while ignoring our own American AI nightmare, coming at us full steam ahead:

*whispers* the us invests in mass surveillance and social credit systems the same way china does and yet some of us only ever point to china with outrage and it’s getting tiring — a once blue haired enby from oakland | tired of it (@WellsLucasSanto), April 16, 2019

Consider this: just last month, landlords in NYC announced their interest in installing facial recognition technology in rent-subsidized apartments. Meanwhile in Beijing, 47 public housing projects had already deployed the technology as of last year.

The use of facial recognition technology isn’t limited to the government. Companies are already doing a bang-up job of deploying facial recognition in unexpected places.

All of this makes me wonder: how do average people, those outside of tech, academia, and spheres of influence, push back against these technologies? Can you opt out of facial recognition tech at the airport? How do you know to opt out if you didn’t know it was being used to begin with? What happens when you opt out? Will you be subjected to more invasive searches? Will opting out delay your next flight? So many questions and sadly zero answers.

Stop shaming job hoppers: The future of work belongs to the job hoppers

Hopping right on out of that bad job

Though Intel forecasts flat sales in 2019, people inside the company said this week’s layoffs don’t appear to be strictly a cost-cutting move. Rather, they said the cuts appeared to reflect a broad change in the way Intel is approaching its internal technical systems… Intel will now consolidate operations under a single contractor, the Indian technology giant Infosys.

Intel is laying off hundreds of its IT staff, according to the Oregonian. Unless you or a friend or family member is immediately affected, you’ve probably scrolled right past the news. No shame in that; stories of layoffs are a dime a dozen in our newsfeeds, and it’s easy to scroll right on past.

In March alone, EA laid off 350 people and Bed Bath & Beyond laid off 150 workers. SAP is cutting 450 US jobs. Oracle is heading into layoffs while playing coy, but rumors put the count in the thousands. PayPal plans to cut close to 400 jobs. Fiat Chrysler just announced that 1,500 employees at one of its plants will lose their jobs by September. In Wisconsin, Shopko is laying off 1,700 people. And Disney’s recent merger with Fox is generating speculation that anywhere from 4,000 to 10,000 workers will be laid off.


If you want to understand AI and ethics, start with this podcast

One of the things that sort of keeps us up at night is if you think about the way that we check that our current systems are fair in, say, criminal justice is that we have a system of appeals. We have a system of rulings. You actually have a thing called due process, which means you can check the evidence that’s being brought against you. You can say, “Hey, this is incorrect.” You can change the data. You can say, “Hey, you’ve got the wrong information about me.”

This is actually not how AI works right now. In many cases, decisions are gonna be made about you. You’re not even aware that an AI system is working in the background. Let’s take HR for a classic case in point right now. Now, many of you have probably tried sending CVs and résumés in to get a job. What you may not know is that in many cases, companies are using AI systems to scan those résumés, to decide whether or not you’re worthy of an interview, and that’s fine until you start hearing about Amazon’s system, where they took two years to design, essentially, an AI automatic résumé scanner. – How will AI change your life? AI Now Institute founders Kate Crawford and Meredith Whittaker explain.

Everyone who works on AI products needs to understand the ethical implications of their work. AI engineers and product managers need to understand their product’s impact on users. Business leaders and engineers need to bring in diverse voices and specialties to help ensure their product doesn’t have negative implications. Human resources leads need to hire interdisciplinary workers who connect the dots between design, engineering, and business performance.

All of this is, of course, easier said than done. Judging by the many, many, many fails in AI product development, we aren’t even close to that point inside AI organizations. These “fails” have a tremendous impact on people’s lives.

Ethics is a loaded term, and businesses aren’t quite sure what ethics in AI even looks like. Just look at the recent dissolution of Google’s AI ethics board. While many questioned who got to be on that board, many others questioned exactly how an ethics board translates into ethical business practices and products.

Thankfully there are several individuals and organizations working at the intersection of AI and ethics. My personal favorite is the AI Now Institute. I could have pulled so many other impactful quotes from their recent interview on the Recode Decode podcast. Have a listen to that episode to get your head around the many challenges of AI and ethics. And if you’re really into AI and ethics, check out this list of people to follow on Twitter.

Now that my first book on the future of work is moving forward, I’m turning my research towards AI and ethics, specifically how organizations train talent to reduce bias in AI products. So expect more of this type of content in the coming months.

I’m also speaking at Portland’s Machine Learning for All conference on how to have curious conversations. I’ll be teaching software and machine learning engineers how to hone their soft skills so they can build connections, work across disciplines, and bring the right voices into their work.