I haven’t written about robots for a while – but that doesn’t mean I’m not following all things robot-related! The annual Disrupt Sydney conference is also a little over a month away (Friday 21 September), and this year’s theme is: Robots Against the Machine? What chance do we have of ‘raging against the machine’ if we are trained to act like robots?
We’ll be posting more about Disrupt Sydney over the coming weeks, including special rates for readers of our Friday Faves! (Here’s a link to Nat’s first blog post about our workshop).
Now – back to the robots. This week I’ve selected two readings – intentionally, as they represent contrasting opinions on our future with robots. Both articles appear in the New York Times’ Big Ideas, a special section of the Times’ philosophy series, The Stone, in which authors, artists, philosophers, scientists and entrepreneurs answer the question, “What does it mean to be human today?”
Both articles also accept the presence of robots and AI in our current and future lives. I personally align with the more optimistic future outlined in the first article. However, Sherry Turkle has always presented alternative scenarios and views with balance – views that frequently challenge my own perspective while reminding me to question my position.
Article 1: We Are Merging With Robots. That’s a Good Thing.
“The old boundaries of the human self are being blurred by technology. The risks are real, but the potential is astounding.”
The article starts with a brief review of the current state: how emerging technologies more broadly, and robots more specifically, are enabling us to achieve tasks and activities – some of which may have seemed previously impossible.
“It’s… a world permeated by a growing swath of alien intelligences.”
The author then poses a question about the blurring boundary between being human and technology (or machines), followed by a warning – or is it an observation?
“Part of this process involves getting used to the alien nature and pervasive reach of the many new subintelligences that now surround us.”
Again – more questions: “How should we negotiate this dauntingly large space of human possibility? And what costs are we willing to tolerate along the way?”
The key elements of the answer: practice and ethics. What are we willing to accept to enhance our lives? And what are the implications? Intriguing and powerful opportunities lie ahead; perhaps the tension arises more from the speed of change, when we don’t stop to consider the ethics and impacts.
Article 2: There Will Never Be an Age of Artificial Intimacy
“Robots may be better than nothing, but they still won’t be enough.”
Sherry Turkle, an MIT professor, illustrates her views through storytelling – scenarios that allow us to imagine how the future may play out. This article is about empathy, about relationships with inanimate objects – or robots – that allow us to avoid, augment or replace real-life relationships. It’s not a new perspective; it has already been played out in movies like Her.
Here’s her punch line:
“Yet through our interactions with these machines, we seem to ignore this fact; we act as though the emotional ties we form with them will be reciprocal, and real, as though there is a right kind of emotional tie that can be formed with objects that have no emotions at all.”
Now – as someone who tends to give names to all sorts of objects – from my first few cars (I’ve stopped now) to pot plants and some devices (not my laptop or mobile) – I’m not surprised by this comment. However, the concern Turkle raises is a valid reminder: people are prepared to substitute companion robots for human contact on the basis that they would be “better than nothing”. We’re prepared to delegate the care of the elderly to robots, and even to take them as life partners because they won’t break our hearts! Her opinion weaves a pathway through immortality and what we’re going to be prepared to sacrifice, and leaves us with this thought:
“Being human today is about the struggle to remain genuinely empathic ourselves. To remember why it matters, to remember what we cherish.”
Perhaps her concerns can be combined with Andy Clark’s optimism: with practice and ethics, we have the ability to navigate these future scenarios. Another common element: both authors raise concerns that require us to imagine how the future with robots and AI will play out. Not later – now. It’s not sci-fi anymore; it’s real.
This post is part of a weekly Friday Faves series contributed by the team at Ripple Effect Group. Read the entire series and collections from other team members here.