Hey everyone, welcome back to Bionic Bug podcast! You’re listening to episode 31. This is your host Natasha Bajema, fiction author, futurist, and national security expert. I’m recording this episode on November 25, 2018.
Let’s talk tech:
- My first headline is “Wanted: The ‘perfect babysitter.’ Must pass AI scan for respect and attitude” published in the Washington Post on November 23.
- Machine learning tools are beginning to impact our lives in big ways without us taking a moment to think through the limitations of such tools or the potential consequences. It’s scary.
- This article describes new software tools/services driven by machine-learning to improve the screening of potential employees—in this case, babysitters.
- Parents are increasingly turning to online services like Predictim to make choices about babysitters. This service leverages “artificial intelligence” to screen the social media activity of prospective babysitters and generate an automated risk rating. Babysitters are assessed for their risk of negative behaviors such as drug abuse, violence, bullying, and harassment.
- The system is a black box – it spits out a number, but doesn’t explain how it produced its ratings.
- The article talks about parents who previously thought their babysitter was trustworthy, but began to have doubts when the risk assessment score came back as a 2 instead of a perfect 1.
- The system is based on the concept that social media reveals a person’s character. I can imagine a future system also analyzing a person’s Google searches, in which case I and all other fiction writers are doomed.
- This is problematic on so many levels.
- Computer software tends to inspire more trust than human instinct does. That’s a problem when the tool is flawed, and I’ve never encountered software that wasn’t flawed.
- Unlike standard computer software, machine learning tools aren’t given their rules by human programmers. A machine learning tool derives its own rules from patterns in data.
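To make that distinction concrete, here is a minimal toy sketch (my own illustration, not anything from the article or from Predictim). The hand-coded function uses a threshold a human chose; the second function “learns” its threshold from labeled examples, the way a one-split decision stump does. The feature name `posts_per_day`, the labels, and the data are all hypothetical.

```python
def hand_coded_rule(posts_per_day):
    # Traditional software: a human programmer picks the threshold.
    return "risky" if posts_per_day > 20 else "ok"

def learn_stump(samples):
    """Learn a threshold rule (a one-split "decision stump") from labeled data.

    samples: list of (value, label) pairs, where label is "risky" or "ok".
    Returns the threshold t that minimizes misclassifications when
    predicting "risky" for value > t.
    """
    values = sorted({v for v, _ in samples})
    # Candidate cuts halfway between adjacent observed values.
    candidates = [(a + b) / 2 for a, b in zip(values, values[1:])]
    best_t, best_errors = None, len(samples) + 1
    for t in candidates:
        errors = sum(
            1 for v, label in samples
            if ("risky" if v > t else "ok") != label
        )
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Hypothetical training data: (posts_per_day, label from a human reviewer).
data = [(2, "ok"), (5, "ok"), (8, "ok"), (30, "risky"), (45, "risky")]
threshold = learn_stump(data)  # the rule now comes from the data: 19.0
```

The point of the sketch: nobody typed “19” into the program. Change the training data and the rule changes with it, which is exactly why a flawed or biased dataset quietly becomes a flawed or biased rule.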
- Here’s a question: how many of us are honest on social media? Platforms like Facebook, Instagram, and Twitter lend themselves to the creation of online personas – designer versions of our true selves. Are the images we present for public consumption actually the full truth?
- How well can these tools determine the context of social media posts? Can they distinguish between sarcastic and serious posts? How do they tell the difference between things people actually say and articles or people they may be quoting?
- According to the article, “The technology is reshaping how some companies approach recruiting, hiring and reviewing workers, offering employers an unrivaled look at job candidates through a new wave of invasive psychological assessment and surveillance.”
- My second headline is “China blacklists millions of people from booking flights as ‘social credit’ system introduced” published in The Independent on November 23.
- I’ve talked about such headlines before. Many of you know that China plans to fully implement its social credit system by 2020. This system tracks citizens’ behavior through their data and evaluates their “trustworthiness,” as defined by the behaviors the government wishes to encourage or discourage. Citizens with low scores will be denied access to travel, high-speed internet, good schools, certain jobs, certain hotels, and even the right to own pets.
- You might breathe a sigh of relief that at least you don’t live in China. Most democracies have resisted the alluring pull of monitoring technologies in the name of protecting privacy. Or have we? If our data trail is not being funneled to our government, then to whom are we giving that power? And do we trust them to do the right thing?
Let’s turn to Bionic Bug. Last week, Rob, Lara and Vik went to check out Linda’s apartment, but just missed her. Let’s find out what happens next.
The views expressed on this podcast are my own and do not reflect the official policy or position of the National Defense University, the Department of Defense or the U.S. Government.