Chats with robots 'not far off'

Robots and computers could soon be having meaningful conversations and even arguments with humans.

A new research project will develop systems that allow humans and machines to debate decisions.

It opens up the possibility of human operators discussing action plans with robots and, if necessary, ordering them to break rules.

For their part, the computers would be able to argue in favour of decisions or inform their operators that certain tasks are impossible.

Early versions of the software could be available in just three years.

Lead researcher Dr Wamberto Vasconcelos, from the University of Aberdeen, said the aim is to increase human trust in intelligent technology.

"Autonomous systems such as robots are an integral part of modern industry, used to carry out tasks without continuous human guidance," he said.

"Employed across a variety of sectors, these systems can quickly process huge amounts of information when deciding how to act. However, in doing so, they can make mistakes which are not obvious to them or to a human.

"Evidence shows there may be mistrust when there are no provisions to help a human to understand why an autonomous system has decided to perform a specific task at a particular time and in a certain way.

"What we are creating is a new generation of autonomous systems which are able to carry out a two-way communication with humans."

Computers able to converse with humans have long been a mainstay of science fiction. Examples include HAL, the deadpan-voiced computer in the film '2001: A Space Odyssey', which goes mad and sets out to murder the crew of a spaceship.

The system Dr Vasconcelos is developing will communicate with words on a computer screen rather than speech. Potential applications could include unmanned robot missions to planets or the deep sea, defence systems and exploring hostile environments such as nuclear installations.

A typical dialogue might involve a human operator asking a computer why it made a particular decision, what the alternatives were and why they were not followed.

"It gives the human operator and opportunity to challenge or overrule the robot's decision," said Dr Vasconcelos.

"You can authorise the computer system to break or bend the rules if necessary, for instance to make better use of resources or in the interests of safety.

"Ultimately, this conversation is to ensure that the system is one the human is comfortable with. But the dialogue will be a two-way thing. The supervisor might not like a particular solution but the computer might say: sorry, this is all I can do."

One factor that has to be taken into account is ensuring the computer's responses do not seem threatening, rude or confrontational.

"That's something we're going to have to look at," said Dr Vasconcelos. A psychologist has joined the team to help with this aspect of the research.

Conversing with robots would actually make humans more accountable, since failures could not conveniently be blamed on computer error, Dr Vasconcelos added.

"With power also comes responsibility. All these dialogues are going to be recorded, so there is a name to blame if something goes wrong. It's a good side-effect."

