More about robots. Building on last week's thought-provoking questions, this week's articles demonstrate some of the current research and its impact. As you read them, keep in mind some of last week's concerns (how we may emotionally engage, or not, with robots, and what we need to think about). I think both articles raise further concerns while exploring the kind of exciting research that helps us understand ourselves even as we develop new technologies.
Article 1: How rude humanoid robots can mess with your head
This is interesting research: what if your humanoid robot gave you negative responses when you asked a question? How would you react? The research, conducted by a group in France, investigates how a robot's attitude may affect your ability to do a task (perhaps not so different to co-workers or managers?). It also extends to the influence of robots more broadly, including how they may be able to influence children's decisions.
The key finding:
“…how the development of advanced social robots is far outpacing our understanding of how they’re going to make us feel. What these studies suggest is that humanoid robots can manipulate us in complex ways. And scientists are just barely beginning to understand those dynamics…”
As previously highlighted, we need to understand this impact further before we find ourselves encouraging or limiting behaviours in ways that were never intended, particularly in children!
Article 2: How to make a robot use theory of mind
Meanwhile, other researchers are trying to create robots that can respond to our needs by anticipating what it is we need. The opening example is useful for understanding this. You're in an elevator and you see your colleagues running along the corridor. You put your arm into the door opening to hold the lift for them. You interpreted their behaviour: you understood that their rapid movement meant they were trying to catch your lift, and you responded by holding it for them. Easy, everyday sorts of reactions... unless you're trying to program a robot with what are called predictive social skills!
The research being conducted at the University of New England is attempting to develop:
“…understanding through “simulation theory of mind,” an approach to AI that lets robots internally simulate the anticipated needs and actions of people, things and other robots—and use the results (in conjunction with pre programmed instructions) to determine an appropriate response.”
This sounds almost unachievable to me, as it requires us to understand ourselves first. This is where research into Theory of Mind comes in, a term used to describe the:
“…ability to predict the actions of self and others by imagining ourselves in the position of something or someone else.”
This type of programming is not machine learning; it requires a simulation-based approach built on an internal programmed model. The researchers have had some limited success with simple tasks, but given that theory of mind is not well understood even in humans, it makes you realise that some of the hype about robots taking over our jobs is still a long way off.
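To make the idea concrete, here is a minimal toy sketch of what "internally simulating" another agent might look like. All the function names, rules, and behaviours here are hypothetical illustrations, not the researchers' actual system: the robot runs a small internal model of the person forward to predict their goal, then combines that prediction with pre-programmed instructions to choose a response.

```python
# Toy sketch of "simulation theory of mind" (all names and rules are
# hypothetical): the robot holds a tiny internal model of a person,
# predicts their goal from observed behaviour, and uses the prediction
# plus pre-programmed instructions to decide what to do.

def simulate_person(observed_behaviour: str) -> str:
    """Internal model: predict the person's goal from what we observe."""
    # Hand-coded rules standing in for a programmed internal model.
    rules = {
        "running towards lift": "wants to catch the lift",
        "walking away": "does not need the lift",
    }
    return rules.get(observed_behaviour, "goal unknown")

def choose_response(observed_behaviour: str) -> str:
    """Map the simulated goal to a pre-programmed response."""
    predicted_goal = simulate_person(observed_behaviour)
    if predicted_goal == "wants to catch the lift":
        return "hold the lift door open"
    return "let the doors close"

print(choose_response("running towards lift"))  # hold the lift door open
```

Of course, the hard part the researchers face is everything this sketch hand-waves away: real perception, open-ended behaviours, and models of other minds that cannot be written as a lookup table.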
The article goes on to describe some of the researchers' further intentions: what else they're trying to develop and the likely impact.
Moral of the story: while a robot may be programmed to be rude, cheeky, or encouraging, don't expect one to hold the lift door open for you anytime soon!
Read Article 1: https://www.wired.com/story/how-rude-humanoid-robots-can-mess-with-your-head
Read Article 2: https://www.scientificamerican.com/article/how-to-make-a-robot-use-theory-of-mind/
This post is part of a weekly Friday Faves series contributed by the team at Ripple Effect Group. Read the entire series and collections from other team members here.