
Towards Detecting Deception using K-Nearest Neighbour Model
Abstract
Security has remained a major concern over the years, especially for law enforcement agencies. One way of addressing this concern is to be able to reliably detect deception. Detecting deception remains a difficult task, as no perfect method has been found for it. Past studies made use of a single cue (verbal or nonverbal); it has since been found that examining combinations of cues detects deception better than examining a single cue. Since no single verbal or nonverbal cue is able to successfully detect deception, this research proposes to use both verbal and nonverbal cues. Therefore, this research aims to develop a K-Nearest Neighbour (KNN) model for classifying the extracted verbal, nonverbal and VerbNon (combined) features as deceptive or truthful. The system extracted the desired features from the dataset of Perez-Rosas. The verbal cues capture the speech of the suspect, while the nonverbal cues capture the facial expressions of the suspect. The verbal cues include voice pitch (in terms of variations), frequency perturbation (also known as jitter), pauses (voiced or silent), and speech rate (the rate at which the suspect is speaking). Praat, a tool for speech analysis, was used to extract all the verbal cues, while the nonverbal features were extracted using the Active Shape Model (ASM). The work was implemented in MATLAB 2015a, and the classification was done using the KNN model. KNN performed best on the VerbNon dataset, with a score of 96.2%.

Introduction
Deception, an everyday occurrence, is as old as life itself. It had its origin in the Garden of Eden, when Eve was deceived by the serpent. It is asserted that life and deception are inseparable, since life is intrinsically linked with information, which in turn is synonymous with communication. In warfare, deception dates back to 800 BC, when it was first used as a key to success (Baron-Cohen et al., 2001). In the biological world, some organisms display deceptive characteristics (as a way of life), such as the mimicry of plants to attract pollinators or the camouflage of fish to escape predators. Others exhibit deception as a cognitively conscious behaviour, as when monkeys and humans mislead their colleagues to obtain a benefit for themselves (Picornell, 2013).
The crucial question is which behaviour attention should be paid to. This question is difficult to answer, as research has shown that deception itself is not related to a unique pattern of specific behaviours (DePaulo et al., 1985; Ekman, 1992; Vrij, 1998, 2000; Zuckerman et al., 1981). In other words, there is nothing like Pinocchio's growing nose. However, liars might experience emotions while lying, and the three most common types of emotion associated with deceit, as captured in Ekman (1985, 1992), are fear, excitement ('duping delight') and guilt. Detecting deception remains a difficult task (Navarro and Schafer, 2001), as no perfect method has been found for it (DePaulo et al., 2003). In fact, multiple studies have established that lie detection amounts to little more than a 50/50 chance, even for experienced investigators. Although detecting deception remains difficult, investigators can increase the odds of success by learning a few basic nonverbal (psychological) and verbal (speech) cues of deception. Lying requires the deceiver to keep the facts straight, make the story believable, and be able to withstand scrutiny. Navarro and Schafer (2001) stated that when individuals tell the truth, they often make every effort to ensure that other people understand, while liars, on the other hand, attempt to manage people's perceptions. Consequently, people unwittingly signal deception via nonverbal and verbal cues (cues being those indicators or variables that can be observed and measured and are believed to be indicative of deception).
Repeated studies have shown that traditional methods of detecting deception during interviews succeed only 50% of the time, even for experienced law enforcement officers. In spite of this, investigators still need the ability to test the veracity of those they interview. To do so, investigators require a model that combines research findings with empirical experience to differentiate honesty from deception.


Unfortunately, no particular nonverbal or verbal cue evinces deception. Investigators' abilities to detect deceptive behaviour depend largely on their ability to observe, catalogue, and differentiate human behaviour. Sporer and Schwandt (2006, 2007) stated that deception can be detected by observing nonverbal behaviour such as body language and vocal pitch. Vrij (2008) found that examining multiple cues was a significantly more reliable indicator of deception than examining a single cue.

Since no single verbal or nonverbal cue is able to successfully detect deception (based on past work), this research proposes to use a combination of verbal and nonverbal cues (VerbNon) to detect deception.

Related Works
Detecting deception has been an issue in scientific research, as no single cue can reliably detect deception (Vrij, 2000; 2008). Human investigators perform only a little better than chance, and as such a reliable means of effectively detecting deception becomes paramount.

DePaulo et al. (2003) studied the behaviour of people when they are lying compared with when they are telling the truth. Their results show that liars are less forthcoming than truth tellers and tell less compelling tales. The researchers also reported that liars make a more negative impression and are more tense. However, many behaviours showed no discernible links, or only weak links, to deceit. Cues to deception were more pronounced when people were motivated to succeed, especially when the motivations were identity-relevant rather than monetary or material. Cues to deception were also stronger when lies were about transgressions. These cues are verbal and nonverbal: verbal cues are linguistic patterns exhibited in spoken messages, while nonverbal cues are leakages or deformations that occur in the body channels of the deceiver.
The work of Enos et al. (2006) examined certain systematically identifiable segments, called CRITICAL SEGMENTS, that bear propositional content directly related to the topics of most interest in the interrogation. They augmented the approach with techniques for adjusting the class imbalance in the data. The results, as much as a 23.8% relative improvement over chance, substantially exceed human performance at the task of TRUTH and LIE classification. Further, models generated using these segments employ features consistent with hypotheses in the literature and the expectations of practitioners (Reid, 2000) about cues to deception.

Zhou et al. (2003) stated that since some cues (micro-expressions) appear for only a fraction of a second, detection of deception by both trained and untrained professionals is only a little better than chance. They stated further that developing an automated tool to help flag these deceptive cues is paramount.

Rothwell et al. (2006) focused on how the behaviour of previously unseen persons can be charted using a back-propagation neural network. The work was carried out using a simulated theft scenario in which 15 participants were asked either to steal or not to steal some money and were later interviewed about the location of the money. A video of each interview was presented to an automatic system, which collected vectors containing nonverbal behavioural data. Each vector represented a participant's nonverbal behaviour related to "deception" or "truth" for a short period of time. These vectors were used for training and testing a back-propagation ANN, which was subsequently used for charting the behavioural state of others.
In Almela et al. (2012), the authors addressed the question of the nature of deceptive language. The research aimed to explore deceit in Spanish written communication. The work designed an automatic classifier based on Support Vector Machines (SVM) for the identification of deception in an ad hoc opinion corpus. In order to test the effectiveness of the LIWC2001 categories in Spanish, the authors drew a comparison with a Bag-of-Words (BoW) model. The results indicate that the classification of the texts was successful. They concluded that the findings were potentially applicable to forensic linguistics and opinion mining, where extensive research on languages other than English is needed.

Bachenko et al. (2008) developed and implemented a system for automatically identifying deceptive and truthful statements in narratives and transcribed interviews. The research focused exclusively on verbal cues to deception for this initial experiment, ignoring potential prosodic cues. The authors describe a language-based analysis of deception that was constructed and tested using "real world" sources such as criminal narratives, police interrogations and legal testimony. The analysis comprises two components: a set of deception indicators used for tagging a document, and an interpreter that associates tag clusters with deception likelihood. The researchers tested the analysis by identifying propositions in the corpus that could be verified as true or false and then comparing the predictions of the model against this corpus of ground truth. The analysis achieved an accuracy of 74.9%.
Vrij et al. (2008) examined the hypotheses that (1) a systematic analysis of nonverbal behaviour could be useful in the detection of deceit and (2) lie detection would be most accurate if both verbal and nonverbal indicators of deception are taken into account. Seventy-three nursing students participated in their study about "telling lies" and either told the truth or lied about a film they had just seen. The interviews were videotaped and audiotaped, and the nonverbal behaviour (NVB) and speech content of the liars and truth tellers were analysed, the latter with the Criteria-Based Content Analysis (CBCA) technique and the Reality Monitoring (RM) technique. Results revealed several nonverbal and verbal indicators of deception. On the basis of nonverbal behaviour alone, 78% of the lies and truths were correctly classified. The researchers speculated that a higher percentage could be correctly classified when all three detection techniques (i.e., NVB, CBCA, RM) were taken into account.
Since no single cue can reliably detect deception, a combination of verbal and nonverbal cues will help in detecting deception to a reasonable degree. Perez-Rosas et al. (2015) presented a multimodal deception detection model using real-life occurrences of deceit. They introduced a novel dataset covering recordings from public real trials and street interviews, and used this dataset to perform both qualitative and quantitative experiments. The analysis of nonverbal behaviours occurring in deceptive and truthful videos brought insight into the gestures that play a role in deception. They built classifiers relying on individual or combined sets of verbal and nonverbal features and achieved accuracies in the range of 77-82%. Their automatic system outperforms human detection of deceit by 6-15%. Their work is the first to automatically detect instances of deceit using both verbal and nonverbal features extracted from real deception data. However, it is not a fully automated deception detection system, since human coders were used to extract the cues.

System Design
Dataset
The dataset consists of real-life trial videos (Perez-Rosas, 2015); some of these videos are publicly available on YouTube channels and other public websites. The dataset contains statements made by exonerees after exoneration and a few statements from defendants during crime-related TV episodes. The speakers in the videos are either defendants or witnesses. The video clips are labelled as deceptive or truthful based on guilty verdicts, not-guilty verdicts, and exonerations.
The dataset consists of 121 videos, including 61 deceptive and 60 truthful trial clips. The average length of the videos in the dataset is 28.0 seconds; the average video length is 27.7 seconds for the deceptive clips and 28.3 seconds for the truthful clips. The data comprise 21 unique female and 35 unique male speakers, with ages ranging approximately between 16 and 60 years.

Feature Extraction
Features are the characteristics of the objects of interest, or salient attributes, in an image. Feature extraction is the process of extracting these salient features from images of different categories in such a way that between-class similarity is minimised while within-class similarity is maximised.

In classification problems, the use of salient features is essential for accuracy (Hermosilla et al., 2010). The use of a model that can fit the shape of the image of interest from the dataset therefore becomes paramount. The verbal and nonverbal cue data were extracted from a sufficiently large voice and video database: the verbal cues were extracted using Praat, while the nonverbal cues were extracted using the Active Shape Model (ASM).

Model Design
For extracting the Pitch feature, Praat uses the autocorrelation algorithm as shown in equation 1.

$r_x(\tau) \approx \dfrac{r_{x,w}(\tau)}{r_w(\tau)}$    (1)
where $r_x(\tau)$ represents the autocorrelation of the original signal, $r_{x,w}(\tau)$ is the autocorrelation of the windowed signal, and $r_w(\tau)$ is the autocorrelation of the window.
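Praat's full pitch tracker involves further steps (interpolation, candidate selection and path finding), but the core correction of equation 1 can be illustrated with a minimal sketch. The Python/NumPy function below is only an illustration, not the authors' MATLAB/Praat implementation; the function name, the Hanning window and the pitch range limits are assumptions.

```python
import numpy as np

def pitch_from_frame(frame, fs, fmin=75.0, fmax=500.0):
    """Estimate the fundamental frequency of one frame using the
    autocorrelation correction of equation 1: r_x ~ r_xw / r_w."""
    window = np.hanning(len(frame))
    xw = (frame - frame.mean()) * window

    # Autocorrelation of the windowed signal and of the window itself.
    r_xw = np.correlate(xw, xw, mode="full")[len(xw) - 1:]
    r_w = np.correlate(window, window, mode="full")[len(window) - 1:]

    # Equation 1: divide out the autocorrelation of the window.
    r_x = r_xw / (r_w + 1e-12)

    # The strongest peak inside the allowed pitch range gives the period.
    lag_min, lag_max = int(fs / fmax), int(fs / fmin)
    lag = lag_min + np.argmax(r_x[lag_min:lag_max])
    return fs / lag  # fundamental frequency in Hz
```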

For the jitter extraction, the algorithm used is presented in equation 2.

$\text{jitter} = \dfrac{100}{N-1} \sum_{i=2}^{N} \left| f_i - f_{i-1} \right|$    (2)

where $f_i$ is the fundamental frequency of the $i$-th period and $N$ is the number of periods.
The pause feature is extracted using equation 3.

$P_a = T_t - P_t$    (3)

where $P_a$ is the total pause duration, $T_t$ is the total length of time taken for the suspect to talk, and $P_t$ is the phonation time (the actual time spent talking).

The speech rate is extracted using equation 4.

$S_r = \dfrac{N_s}{T_t}$    (4)

where $S_r$ denotes the speech rate, $N_s$ the number of syllables, and $T_t$ the total time taken.
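Once the per-period fundamental frequencies and the timing measurements are available (here assumed to come from the Praat analysis), the verbal cues of equations 2-4 reduce to simple arithmetic. The following Python/NumPy sketch is illustrative; the function name and the example values are hypothetical, not taken from the paper's dataset.

```python
import numpy as np

def verbal_cues(f0, total_time, phonation_time, n_syllables):
    """Compute the verbal cues of equations 2-4 from per-period
    fundamental frequencies and timing measurements."""
    f0 = np.asarray(f0, dtype=float)

    # Equation 2: jitter as the scaled mean absolute difference between
    # consecutive fundamental-frequency values.
    jitter = 100.0 / (len(f0) - 1) * np.sum(np.abs(np.diff(f0)))

    # Equation 3: pause = total talking time minus phonation time.
    pause = total_time - phonation_time

    # Equation 4: speech rate = number of syllables / total time.
    speech_rate = n_syllables / total_time

    return jitter, pause, speech_rate

# Hypothetical example values (not from the paper's dataset).
print(verbal_cues([210.0, 205.0, 212.0, 208.0], 28.0, 21.5, 60))
```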

The nonverbal cues (that is, the facial expressions) to be extracted using the Active Shape Model (ASM) are: eyelid blinking, lip movement, and eyebrow movement. To form the shape model, a large number of training examples (in this case, different faces) were collected and the correspondences between the training examples were established. Consider a face $j$ from the set of training examples; the vector of its $n$ feature points, and likewise for every face in the training set, is given by equation 5.

$z_j = (x_{1j}, y_{1j}, x_{2j}, y_{2j}, x_{3j}, y_{3j}, \ldots, x_{nj}, y_{nj})$    (5)
Since all the shapes may not be properly aligned, the shapes are rotated and translated so that they are centred at the origin (0, 0). After translation, the dimensionality of the set of aligned shapes is reduced using Principal Component Analysis (PCA). Any shape can then be approximated using equation 6.

$Z = \bar{Z} + Pb$    (6)
where $\bar{Z}$ is the mean shape, $b$ is the vector of model parameters, and $P = (V_1, V_2, \ldots, V_k)$ is the matrix of the first $k$ eigenvectors obtained from the PCA.
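Equations 5 and 6 describe a standard point distribution model. The Python/NumPy sketch below shows how the mean shape and the eigenvector matrix $P$ could be obtained from aligned landmark vectors and how a shape is approximated; it illustrates the general ASM formulation rather than the exact routine used in this work, and the landmark counts are hypothetical.

```python
import numpy as np

def build_shape_model(shapes, k):
    """Fit the point distribution model of equations 5-6.
    `shapes` is an (m, 2n) array: one aligned landmark vector z_j per face."""
    mean_shape = shapes.mean(axis=0)
    # PCA on the aligned shapes: eigenvectors of the covariance matrix.
    cov = np.cov(shapes - mean_shape, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]       # largest variance first
    P = eigvecs[:, order[:k]]               # P = (V1, V2, ..., Vk)
    return mean_shape, P

def approximate_shape(z, mean_shape, P):
    """Equation 6: Z ~ mean_shape + P b, with b the model parameters."""
    b = P.T @ (z - mean_shape)
    return mean_shape + P @ b

# Hypothetical example: 10 training faces with 68 (x, y) landmarks each.
shapes = np.random.rand(10, 136)
mean_shape, P = build_shape_model(shapes, k=5)
z_hat = approximate_shape(shapes[0], mean_shape, P)
```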
K-Nearest Neighbour
In KNN classification, the neighbours closest to a data point with unknown classification are used to classify that point. Using KNN requires three things:
The set of stored records known as the training dataset
Distance metric to compute distance between records
The value of K which is the number of nearest neighbours to retrieve.

The dataset comprises the features extracted using ASM and Praat.

The dataset is grouped into:
The verbal dataset
The nonverbal dataset
VerbNon dataset (combination of the verbal and nonverbal dataset)
Euclidean Distance: After acquiring the dataset, the next step is to calculate the distance to determine the nearest point to the new data. The Euclidean distance was used as shown in equation 7.

$d(p, q) = \sqrt{\sum_{i=1}^{n} (p_i - q_i)^2}$    (7)

where $p$ and $q$ are points in the feature set. After calculating the distance, the next step is to determine the class from the nearest-neighbour list. This is done by considering the majority vote of class labels among the K nearest neighbours.

The vote is weighted according to distance using equation 8.

$w_i = \dfrac{1}{d(x_q, x_i)^2}$    (8)

where $w_i$ is the weight given to the vote of the $i$-th nearest neighbour and $d(x_q, x_i)$ is the distance between the query point $x_q$ and that neighbour. The value of $K$ was calculated using equation 9.

$K = \sqrt{N}$    (9)

where $K$ is the number of nearest neighbours to retrieve and $N$ is the total number of training records considered.
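Putting equations 7-9 together, a distance-weighted KNN classifier can be sketched as follows. This Python/NumPy sketch is only an illustration of the technique, not the MATLAB implementation used in the study; the toy feature vectors and labels are hypothetical.

```python
import numpy as np
from collections import defaultdict

def knn_predict(X_train, y_train, x_query):
    """Distance-weighted KNN following equations 7-9."""
    N = len(X_train)
    K = max(1, int(round(np.sqrt(N))))          # equation 9: K = sqrt(N)

    # Equation 7: Euclidean distance to every training record.
    d = np.sqrt(((X_train - x_query) ** 2).sum(axis=1))

    # Equation 8: weight each of the K nearest neighbours by 1 / d^2.
    nearest = np.argsort(d)[:K]
    votes = defaultdict(float)
    for i in nearest:
        votes[y_train[i]] += 1.0 / (d[i] ** 2 + 1e-12)

    return max(votes, key=votes.get)

# Hypothetical toy example: two-feature VerbNon vectors.
X = np.array([[0.2, 0.1], [0.3, 0.2], [0.9, 0.8], [1.0, 0.7]])
y = np.array(["truthful", "truthful", "deceptive", "deceptive"])
print(knn_predict(X, y, np.array([0.85, 0.75])))   # -> "deceptive"
```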
Result and Discussion
Performance Metrics
The metrics used for carrying out the performance evaluation are listed as:
False Positive Rate (FPR):
$FPR = \dfrac{FP}{FP + TN}$    (10)
True Positive Rate (TPR):
$TPR = \dfrac{TP}{TP + FN}$    (11)
Accuracy: The overall accuracy is given by the sum of true and false utterances correctly classified, out of all the classifications carried out. It is the number of correct predictions over the total number of predictions.

$\text{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$    (12)
Where TP, TN, FP and FN are the True positive, True negative, False positive and False negative values respectively.
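As an illustration, equations 10-12 can be computed directly from confusion-matrix counts. The short Python sketch below uses the counts reported in Figure 1 and assumes that the truthful class is treated as the positive class; that assignment is an assumption, not stated in the text.

```python
def classification_metrics(tp, tn, fp, fn):
    """Equations 10-12: rates and accuracy from confusion-matrix counts."""
    fpr = fp / (fp + tn)                        # false positive rate
    tpr = tp / (tp + fn)                        # true positive rate
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return fpr, tpr, accuracy

# Counts read from the confusion matrix of Figure 1, with "truthful"
# assumed to be the positive class.
print(classification_metrics(tp=833, tn=516, fp=17, fn=33))
```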

Confusion Matrix: It is a table used to describe the performance of the classification model on the dataset.

Figure 1 shows the confusion matrix for the K-Nearest Neighbour classifier. From the figure, 833 instances were correctly classified as truthful, corresponding to 96.2%, while 33 were wrongly classified as deceptive, corresponding to 3.8%. Also, 516 instances were correctly classified as deceptive, representing 96.8%, and 17 were falsely classified as truthful, representing 3.2%. Figure 2 shows the ROC curve of the KNN classifier.

Figure 1: Confusion Matrix

Figure 2: ROC Curve

Figure 3: Accuracy vs Error
From Figure 3, it is observed that as the accuracy increases, the error decreases. KNN performs well with the combination of verbal and nonverbal cues.

Conclusion
This research used verbal, nonverbal and VerbNon cues (a combination of both) to detect deception. The verbal cues were extracted using Praat, while the nonverbal cues were extracted using the Active Shape Model.
The proposed system was implemented in MATLAB 2015a on Windows 7 with 2 GB of RAM. Each of the extracted datasets was divided into training and test data. The classification was done using the KNN model on the different datasets, and from the comparative analysis it was found that the KNN model works best on the VerbNon dataset for detecting deception. The result obtained using only verbal cues was 93.4% and that of nonverbal cues was 95.6%, while the VerbNon dataset yielded 96.2%, which is far better than the chance level of 50%.

References
Vrij A. Detecting Lies and Deceit: Pitfalls and Opportunities. Chichester: Wiley, 2008.

Vrij A. "Telling and detecting lies as a function of raising the stakes." In New Trends in Criminal Investigation and Evidence, by C. M. Breur, M. M. Kommer, J. F. Nijboer and J. M. Reintjes, 699-709. Antwerpen, Belgium, 2000.

Almela A., Valencia-Garcia R., and Cantos P. “Seeing through Deception: A Computational approach to deceit detection in written communication.” In proceedings of the Workshop on Computational Approaches to Deception Detection. Avignon, France, 2012. 15-22.

Bachenko Joan, Fitzpatrick Eileen, and Schonwetter Michael. "Verification and implementation of language-based deception indicators in civil and criminal narratives." Proceedings of the 22nd International Conference on Computational Linguistics. Manchester, U.K., 2008. 41-48.

Baron-Cohen Simon, Wheelwright Sally, Hill Jacqueline, Raste Yogini and Plumb Ian. "The 'Reading the Mind in the Eyes' Test Revised Version: A Study with Normal Adults, and Adults with Asperger Syndrome or High-functioning Autism." Journal of Child Psychology and Psychiatry, 2001: 241-251, vol. 42, no. 2.

DePaulo B. M., Lindsay J. J., Malone B. E., Muhlenbruck L., Charlton K., and Cooper H. "Cues to Deception." Psychological Bulletin, 2003: 74-112, vol. 129.

DePaulo B. M., Stone J. I., and Lassiter G. D. "Deceiving and Detecting Deceit." In The Self and Social Life, 323-370. New York: McGraw-Hill, 1985.

Enos F., Benus S., Cautin R. L., Graciarena M., Hirschberg J., and Shriberg E. “Personality factors in human deception detection: Comparing human to machine performance.” Proceedings of the 9th International Conference on Spoken Language Processing, ISCA 2006. Pittsburgh, USA, 2006.

Picornell I. C. "Cues to Deception in a Textual Narrative Context: Lying in Written Witness Statements." A thesis submitted for the degree of Doctor of Philosophy, Centre for Forensic Linguistics, School of Languages and Social Sciences, Aston University, 2013.

Navarro J., and Schafer R. J. Detecting Deception. FBI Law Enforcement Bulletin, 2001.

Ekman P. Telling Lies: Clues to Deceit in the Marketplace, Politics and Marriage. New York: Norton, 1985/1992.

Ekman P. "An Argument for Basic Emotions." Cognition and Emotion, 1992: 169-200, vol. 6, no. 3/4.

Perez-Rosas V., Abouelenien M., Mihalcea R., Xiao Y., Linton C. J., and Burzo M. "Verbal and Nonverbal Clues for Real-Life Deception Detection." Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal: Association for Computational Linguistics, 2015. 2336-2346.

Reid J. and Associates. The Reid Technique of Interviewing and Interrogation. Chicago: John E. Reid and Associates, 2000.

Rothwell J., Bandar Z., O'Shea J., and McLean D. "Charting the behavioural state of a person using a Backpropagation Neural Network." Neural Computing and Applications, 2006.

Sporer S. L., and Schwandt B. “Paraverbal Correlates of Deception: A meta-analysis.” Applied Cognitive Psychology, 2006: 421-446, vol. 20.

Vrij A., Fisher, Mann and Leal. “A Cognitive Load Approach to Lie Detection.” Journal of Investigative Psychology and Offender Profiling, 2008: 39-43, vol. 5, issue 1-2.

Zhou L., Twitchell D. P., Qin T., Burgoon J. K., and Nunamaker J. F., Jr. “An exploratory study into deception detection in text-based computer-mediated communication.” Proceedings of the 36th Annual Hawaii International Conference on System Sciences . Hawaii, 2003.

Zuckerman M., DePaulo B. M., and Rosenthal R. "Verbal and Nonverbal Communication of Deception." In Advances in Experimental Social Psychology, edited by Berkowitz L., 1-59. New York: Academic Press, 1981.
