Improving communication between humans and robots in 20 noisy questions

Hero and his team may have discovered a better way to facilitate communication using a twist on the classic game of 20 Questions.

Image: Digital composite of human and robot hand

A team of information scientists led by Alfred Hero, John H. Holland Distinguished University Professor of EECS at the University of Michigan, has discovered a better way to facilitate communication between humans and robots, using a twist on the classic game of 20 Questions. The unexpected technique borrows from classic information theory.

As robots and other autonomous systems become more pervasive in our everyday lives, there will be more and more opportunities for meaningful collaborations. We can already rely on our electronic friends to play our favorite music, call a friend, or help us remember what to buy. And in 2004, computers began playing the popular game of 20 Questions against consumers, and winning.

In the game of 20 Questions, a computer figures out what you’re thinking by asking you 20 questions or fewer, each requiring a simple yes or no answer. The potential applications extend far beyond a fun game, however.
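A short sketch of why 20 yes/no questions are enough: each answer can halve the set of remaining candidates, so 20 answers distinguish up to 2^20, roughly one million, possibilities. The secret number and midpoint questions below are illustrative, not the actual game's question strategy.

```python
def guess_number(secret, low=0, high=2**20 - 1):
    """Find `secret` in [low, high] via yes/no questions at midpoints."""
    questions = 0
    while low < high:
        mid = (low + high) // 2
        questions += 1
        if secret <= mid:      # answer to "Is your number <= mid?" is yes
            high = mid
        else:                  # answer is no
            low = mid + 1
    return low, questions

value, asked = guess_number(314159)
print(value, asked)  # finds 314159 in exactly 20 questions
```

Because the starting range holds exactly 2^20 values, the loop always terminates after 20 questions, which is where the game gets its name.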

Future scenarios of human/computer communication include gathering information from a wide array of individuals to assist in potentially life-saving scenarios. Protecting soldiers in enemy territory is one such scenario; locating the source of a deadly flu outbreak is another.

The game of 20 Questions, with its binary yes/no framework, works well for computers because computers are very good at calculations and logic. Humans, on the other hand, are experts at qualitative observations – making some kind of value judgement based on what they are sensing through sight, sound, touch, smell, and taste.

In 2009, Prof. Hero embarked on research with former student Ted Tsiligkaridis (PhD EE:S 2014) that focused on the uniquely human qualitative answers that could be gained through a computer interacting with a person.

A simple example of the process is what happens at the optometrist’s office when you get your eyes checked. With a series of yes and no questions, the doctor finds the correct lens for your eyes. But just as it becomes harder for the patient to decide which option is better as the choices grow closer together, Hero and Tsiligkaridis realized that, in general, the more questions asked in any scenario, the greater the amount of “noise,” or uncertainty, that enters the answers.

“We contributed two things to this stage in the study,” said Hero. “We determined the optimality of a rule called the bisection rule, sometimes called the Golden Ratio Search, which tells you how to subdivide. And we showed that it was the best rule even when the response was noisy.”
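A hedged sketch of bisection with noisy answers: when a response can be wrong, discarding half the candidates outright is unsafe, so the questioner instead keeps a posterior over all candidates, asks about the posterior median, and reweights by the assumed error rate. This is the standard probabilistic-bisection idea; the function names, symmetric error model, and known error rate `eps` are our assumptions, not details from the paper.

```python
import random

def noisy_bisection(secret, n=1024, eps=0.1, num_questions=60, seed=0):
    """Locate `secret` in 0..n-1 from yes/no answers flipped w.p. eps."""
    rng = random.Random(seed)
    post = [1.0 / n] * n                      # uniform prior over 0..n-1
    for _ in range(num_questions):
        # Ask about the posterior median (the bisection point).
        acc, m = 0.0, 0
        for m in range(n):
            acc += post[m]
            if acc >= 0.5:
                break
        # Truthful answer to "is secret <= m?", flipped with prob eps.
        truth = secret <= m
        answer = truth if rng.random() > eps else not truth
        # Bayes update: boost the half consistent with the answer.
        for i in range(n):
            consistent = (i <= m) == answer
            post[i] *= (1 - eps) if consistent else eps
        total = sum(post)
        post = [p / total for p in post]
    return max(range(n), key=lambda i: post[i])

print(noisy_bisection(secret=700))
```

With `eps=0` this reduces to ordinary bisection; with noise, extra questions buy back the certainty the noise destroys.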

Importantly, they also generalized the rule to multiple people, turning it into a crowdsourcing problem.
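One way to see why a crowd helps: if several people independently answer the same yes/no question, each correct with probability 1 - eps, combining their votes drives the effective error rate down. The simple majority vote below is only an illustration of this effect; the paper's aggregation scheme is more sophisticated.

```python
from math import comb

def majority_error(k, eps):
    """P(the majority of k independent voters is wrong), k odd,
    each voter independently wrong with probability eps."""
    need = k // 2 + 1   # wrong votes needed for a wrong majority
    return sum(comb(k, j) * eps**j * (1 - eps)**(k - j)
               for j in range(need, k + 1))

for k in (1, 3, 9, 25):
    print(k, round(majority_error(k, 0.3), 4))
```

A single respondent who errs 30% of the time is unreliable, but a crowd of 25 such respondents almost never produces a wrong majority.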

The next exciting stage of the research, said Hero, happened when he partnered with Michigan postdoctoral researcher Hye Won Chung.

Chung had the rather audacious idea of creating algorithms for the 20 Questions technique from a non-adaptive perspective, rather than the adaptive one that had been the accepted norm since the 1990s.

It simply didn’t make sense, in most people’s minds, to think that asking all 20 questions at once could provide the same results as an adaptive method, which offers the computer a chance to assimilate new information before asking the next question; in essence, to learn from previous answers.

“But we showed you could do it – you just had to pose the correct 20 questions,” said Hero.
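A minimal sketch of what a non-adaptive questioner looks like: all 20 questions are fixed in advance, here as "is bit i of your item's index equal to 1?", so no question depends on a previous answer, yet the 20 answers together still pin down any of 2^20 items. The bit-query design is our illustration of the noiseless case; the actual scheme replaces these raw queries with coded questions that survive noise.

```python
N_BITS = 20

def questions():
    """The fixed, pre-designed question list: one predicate per bit."""
    return [lambda x, i=i: (x >> i) & 1 == 1 for i in range(N_BITS)]

def identify(answers):
    """Decode the item's index from the 20 yes/no answers."""
    return sum(1 << i for i, yes in enumerate(answers) if yes)

secret = 271828
answers = [q(secret) for q in questions()]   # all answered at once
print(identify(answers))  # recovers 271828
```

Nothing is recomputed between questions; the intelligence is all in designing the question list up front.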

They came up with their novel approach by turning to classical information theory, which has its roots in the work of Claude Shannon.

“By combining superposition coding and unequal error protection,” explains Hero, “we’re able to achieve the same rate, the same small number of questions needed, as in the adaptive method. But the benefit is, we have a simpler mechanism. We don’t have to keep recalculating each time we get an answer to a question because it’s all designed in advance. We only use the channel once (twice to then get the answer back). And because there is always channel delay, which is multiplied for each additional question asked, it’s faster. We also don’t need to compute 20 times; we compute one time.”

“In summary – our method has none of the disadvantages that folklore was saying, it’s faster, and there is less disruption from noisy channels.”

How does this work? The robot formulates all 20 questions in advance, taking into account the fact that when you give answers, you’re going to be more certain of some answers than others, which introduces varying degrees of noise. “That’s where the unequal error protection comes in,” says Hero.
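To convey the unequal-error-protection idea concretely: with noisy answers, a mistake in a high-order bit moves the final estimate much farther than a mistake in a low-order bit, so the pre-designed question list should spend more of its questions protecting the important bits. The crude repetition-plus-majority scheme below is our stand-in for the paper's superposition coding, which achieves the same goal far more efficiently; all parameters are illustrative.

```python
import random

def ask_with_uep(secret, eps=0.2, repeats=(7, 5, 3, 1), seed=0):
    """Recover a 4-bit secret from noisy answers; bit i (MSB first)
    is asked repeats[i] times, so important bits get more protection."""
    rng = random.Random(seed)
    bits = []
    for i, r in enumerate(repeats):            # i=0 is the most significant bit
        truth = (secret >> (3 - i)) & 1
        # Ask the same question r times; each answer flips w.p. eps.
        votes = sum(truth if rng.random() > eps else 1 - truth
                    for _ in range(r))
        bits.append(1 if votes * 2 > r else 0)  # majority vote per bit
    return sum(b << (3 - i) for i, b in enumerate(bits))

print(ask_with_uep(0b1011))
```

The most significant bit is asked seven times and is therefore very unlikely to be decoded wrongly, while the least significant bit gets only one question; any residual error is confined to the cheap, low-order end.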

The human then responds to all the questions at once, and the robot makes a decision based on its calculation of the responses. For example, the robot may decide to move in a certain direction in order to view something better, or take better measurements.

The technique also facilitates information exchange between robots and other computers, or sensor networks.

Hero is now extending this method to crowdsourcing, so the robot can draw on the knowledge of thousands of people – bringing it into the realm of big data.

The research was published as “Unequal error protection querying policies for the noisy 20 questions problem,” by H. W. Chung, B. M. Sadler, L. Zheng, A. O. Hero in IEEE Transactions on Information Theory, vol. 64, no. 2, pp. 1105-1131, February 2018.

Alfred Hero is the John H. Holland Distinguished University Professor of EECS and the R. Jamison and Betty Williams Professor of Engineering. In addition to his home department of Electrical Engineering and Computer Science, he is a Professor of Biomedical Engineering and Statistics. He is also co-director of the Michigan Institute for Data Science (MIDAS).

Dr. Hye Won Chung is an assistant professor at the School of Electrical Engineering at KAIST. She studied under Lizhong Zheng, Professor of Electrical Engineering at the Massachusetts Institute of Technology. Brian Sadler, who has been affiliated with the 20 Questions research from the beginning, is the U.S. Army Senior Scientist for Intelligent Systems at the Army Research Laboratory.