Detecting the emotional state of others from facial expressions is a key ability in emotional competence, and several instruments have been developed to assess it. Typical emotion recognition tests are assumed to be unidimensional, use pictures or videos of emotional portrayals as stimuli, and ask the participant which emotion is depicted in each stimulus. However, using actor portrayals adds a layer of difficulty in developing such a test: the portrayals may fail to be convincing and may convey a different emotion than intended. For this reason, evaluating and selecting stimuli is of crucial importance. Existing tests typically base item evaluation on consensus or expert judgment, but these methods can favor items with high agreement over items that better differentiate ability levels, and they cannot formally test the item pool for unidimensionality. To address these issues, the authors propose a new test, the Facial Expression Recognition Test (FERT), developed using an item response theory two-parameter logistic model. Data from 1,002 online participants were analyzed using both a unidimensional and a bifactor model and showed that the item pool could be considered unidimensional. Item selection was based on the items' discrimination parameters, retaining only the most informative items with respect to the latent ability. The resulting 36-item test was reliable and quick to administer.

Faces play important roles in the social lives of humans. Besides real faces, people also encounter numerous cartoon faces in daily life, which convey basic emotional states through facial expressions. Using event-related potentials (ERPs), we conducted a facial expression recognition experiment with 17 university students to compare the processing of cartoon faces with that of real faces. Reaction time, recognition accuracy, and the amplitudes and latencies of emotion processing-related ERP components such as the N170, VPP (vertex positive potential), and LPP (late positive potential) were used as dependent variables. The behavioral results showed that reaction times for happy faces were shorter than those for angry faces; that females showed higher accuracy than males; and that males showed higher recognition accuracy for angry faces than for happy faces. The ERP results revealed that cartoon faces elicited larger N170 and VPP amplitudes, as well as a shorter N170 latency, than real faces, and that real faces induced larger LPP amplitudes than cartoon faces. In addition, the results showed a significant difference across brain regions, reflected in a right-hemispheric advantage. Cartoon faces showed higher processing intensity and speed than real faces during the early processing stage, whereas more attentional resources were allocated to real faces during the late processing stage. Due to the sample size, these results suggest, but do not rigorously demonstrate, differences in facial expression recognition and neurological processing between cartoon faces and real faces.

Results of previous studies point to the importance of different face parts for recognizing certain emotions, and also show that emotions are better recognized in photographs than in caricatures of faces. Therefore, the aim of the study was to examine the accuracy of recognizing facial expressions of emotion in relation to the type of emotion and the type of visual presentation. Stimuli contained facial expressions shown as a photograph, a face drawing, or an emoticon. The task for the participant was to click on the emotion they thought was shown on the stimulus. As factors, the type of displayed emotion varied (happiness, sorrow, surprise, anger, disgust, fear), as well as the type of visual presentation (photo of a human face, a drawing of a human face, and an emoticon). As the dependent variable, we used the number of accurately recognized facial expressions across all 18 situations. The results showed an interaction between the type of emotion being evaluated and the type of visual presentation, F(10, 290) = 10.55, p < . The facial expression of fear was most accurately assessed in the drawing of the human face. The emotion of sorrow was most accurately recognized in the emoticon, while the expression of disgust was recognized worst in the emoticon. Other expressions of emotion were assessed equally well regardless of the type of visual presentation. The type of visual presentation has thus proven to be important for recognizing some emotions, but not for all of them.
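The FERT abstract above describes keeping the items with the highest discrimination under a two-parameter logistic (2PL) model. A minimal sketch of that selection idea, with made-up item parameters (none of these values come from FERT itself):

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: probability of a correct
    response at ability theta, given discrimination a and
    difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a**2 * p * (1.0 - p)

# Illustrative 100-item pool with random parameters
# (hypothetical, for demonstration only).
rng = np.random.default_rng(0)
a = rng.uniform(0.5, 2.5, size=100)   # discrimination
b = rng.normal(0.0, 1.0, size=100)    # difficulty

# Average each item's information over a grid of abilities and
# keep the 36 most informative, mirroring the selection
# strategy described in the abstract.
grid = np.linspace(-3.0, 3.0, 61)
mean_info = item_information(grid[:, None], a, b).mean(axis=0)
keep = np.argsort(mean_info)[-36:]
```

Items with larger discrimination `a` contribute more information near their difficulty `b`, which is why ranking on information tends to retain the most discriminating items.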
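The emoticon study above reports its key effect as a two-way emotion-by-presentation interaction, F(10, 290) = 10.55. As a sketch of how such an interaction term is tested, here is a plain two-way ANOVA on synthetic stand-in data (the study's actual data are not available here, and a plain OLS ANOVA ignores the repeated-measures structure the original design likely had):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Synthetic stand-in data: one row per participant per
# emotion x presentation cell, accuracy 0 or 1 (one trial
# per cell, as in the 6 x 3 = 18 situations above).
rng = np.random.default_rng(1)
emotions = ["happiness", "sorrow", "surprise", "anger", "disgust", "fear"]
presentations = ["photo", "drawing", "emoticon"]
rows = []
for subj in range(30):
    for emo in emotions:
        for pres in presentations:
            rows.append({"emotion": emo, "presentation": pres,
                         "accuracy": int(rng.integers(0, 2))})
df = pd.DataFrame(rows)

# Fit a model with main effects and the interaction term,
# then read the interaction row off the ANOVA table.
model = ols("accuracy ~ C(emotion) * C(presentation)", data=df).fit()
table = sm.stats.anova_lm(model, typ=2)
print(table)
```

The interaction's numerator degrees of freedom are (6 - 1) x (3 - 1) = 10, matching the 10 in the reported F(10, 290); the denominator df depends on the error term of the actual repeated-measures design, which this sketch does not reproduce.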