Researchers Aim to Spot a Liar Using Online Polygraph

March 19, 2019

Can you spot a liar?

It’s tricky enough in face-to-face conversations, where facial expressions, gestures and tone of voice add context. Spotting a liar gets even tougher in text-only computer conversations that strip away those physical cues.

Florida State University researcher Shuyuan Ho wants to shed those blinders by creating a revolutionary online polygraph.

“The future of my research is an online polygraph that could be used many different ways,” said Ho, an associate professor in the College of Communication and Information. “You could use it for online dating, Facebook, Twitter — the applications are endless. I think the future is unlimited for an online polygraph system.”

Ho envisions a future where technology can identify liars and truth-tellers based on the words they write in electronic messages. Her latest research dove into the murky depths of internet deception where trolling, identity theft and phishing for credit card numbers snag an increasing number of online users.

The research study, published in the journal Computers in Human Behavior, detailed the findings of an online game that she created to measure truthful and deceptive communications between two people.

Ho parsed the words in those conversations, hoping to extract context from millions of bits of data in many messages — described as language-action cues — just as people get context from seeing physical cues that indicate whether someone is telling the truth or lying.

She says the results were startling.

The experiments revealed a person could spot lies in messages about 50 percent of the time, while a machine-learning approach could identify deception with an accuracy rate ranging from 85 to 100 percent.

Ho is excited about the potential for this research. “I want to get the world’s attention on this research so we can hopefully make it into a commercial product that could be attached to all kinds of online social forums,” Ho said. “This basic research offers great potential to develop an online polygraph system that helps protect our online communication.”

Ho supervises the iSensor Lab on FSU’s campus where researchers conduct experiments to better understand deception in online communications. To facilitate those conversations, she created an online game designed to identify language cues that unmask deceivers and truth-tellers. The game randomly assigned players to play the roles of “The Saint” and “The Sinner.” As sinners and saints interacted via computers, researchers in the iSensor Lab captured those conversations and used machine-learning technology to scrutinize patterns of words and writing.
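
The study does not spell out which features or learning algorithm the lab used, but the general approach can be illustrated with a short, hypothetical sketch: turn each captured message into word-based features and train a simple classifier to separate “sinner” messages from “saint” messages. Everything in the example below, from the toy messages to the scikit-learn pipeline, is an illustrative assumption rather than the lab’s actual code.

```python
# A minimal sketch of the general approach described above: train a text
# classifier to separate "sinner" (deceptive) from "saint" (truthful) messages.
# The study does not specify the exact features or model, so this pipeline
# and the toy data are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled chat messages (1 = deceptive, 0 = truthful).
messages = [
    "I always pay my bills on time, you know that.",
    "Perhaps I could send it tomorrow, because the bank was closed.",
    "I never said that, and I know exactly what happened.",
    "I guess we should double-check before we decide.",
]
labels = [1, 0, 1, 0]

# Bag-of-words features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

# Score a new, unseen message.
print(model.predict_proba(["I always know exactly what happened."]))
```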

Some fascinating language tendencies emerged from that analysis. The lying sinners were found to be less expressive, but they used more decorative words per message. They displayed more negative emotions and appeared more anxious when they communicated with truth-tellers.

Deceivers also took less time to respond and used more words of insight, such as “think” and “know,” and they tended to use more words of certainty, including “always” or “never.”

Conversely, truth-tellers used more words of speculation, such as “perhaps” and “guess,” and they took longer to respond to inquiries. These saints provided more reasoned ideas by using words of causation — “because” — and they expressed more reflective thinking with words like “should” and “could.”
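
These word-category cues lend themselves to straightforward counting. The brief sketch below shows one hypothetical way to tally certainty, speculation and causation words in a message; the word lists are abbreviated examples taken from this article, not the study’s actual lexicon.

```python
# A small illustration of the word-category cues mentioned above: counting
# "certainty" words (more typical of deceivers in the study) versus
# "speculation" and "causation" words (more typical of truth-tellers).
# The word lists are abbreviated examples, not the study's actual lexicon.
CERTAINTY = {"always", "never"}
SPECULATION = {"perhaps", "guess"}
CAUSATION = {"because"}

def cue_counts(message: str) -> dict:
    """Return per-category cue-word counts for one chat message."""
    words = [w.strip(".,!?") for w in message.lower().split()]
    return {
        "certainty": sum(w in CERTAINTY for w in words),
        "speculation": sum(w in SPECULATION for w in words),
        "causation": sum(w in CAUSATION for w in words),
    }

print(cue_counts("I never lie, because I always tell the truth."))
# {'certainty': 2, 'speculation': 0, 'causation': 1}
```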

“Truth-tellers tend to say ‘no’ a lot. Why?” Ho asked. “They like to provide emphasis when they explain their reasons. If you ask them, ‘Is this true?’ They tend to say ‘no’ because there is another true reason.”

Researchers also calculated time lags between every sentence, and even parts of a sentence, by placing time stamps on the words. That precise breakdown showed exactly how long a person paused during interactions, another language-action cue. Those pauses might be too slight for a person to notice, but machine-learning technology could spot them.
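
As a rough illustration of that timing cue, the sketch below computes the pause before each message from time-stamped chat data. The sample data and format are assumptions for illustration; the study recorded timing at a much finer granularity, down to parts of sentences.

```python
# A minimal sketch of the timing cue described above: given time-stamped
# messages from a chat session, compute the pause before each reply.
# The timestamps and messages here are hypothetical examples.
from datetime import datetime

chat = [
    ("2019-03-19 10:00:01.200", "Is this true?"),
    ("2019-03-19 10:00:03.950", "Perhaps, I guess it could be."),
    ("2019-03-19 10:00:04.100", "No, I know it is. Always."),
]

def pauses(messages):
    """Yield the delay, in seconds, before each message after the first."""
    times = [datetime.strptime(t, "%Y-%m-%d %H:%M:%S.%f") for t, _ in messages]
    for prev, cur in zip(times, times[1:]):
        yield (cur - prev).total_seconds()

print(list(pauses(chat)))  # [2.75, 0.15] -- even sub-second gaps are visible
```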

Ho said she hopes her research will eventually give people better protection when they’re online.

“I think we all have good common sense about the people we meet face to face, but how much common sense do we have with the strangers we encounter online, where you can meet a lot of people very fast?” Ho said. “This research is so important because it can provide another reference point offering more protection. All of society can benefit.”

This research was funded by the National Science Foundation, a Florida Center for Cybersecurity Collaborative Seed Grant and a Florida State University Council for Research and Creativity Planning Grant.

Source: Florida State University
