
Exploring cross-cultural and gender differences in facial expressions: a skin tone analysis using RGB Values

Abstract

Facial expressions serve as crucial indicators of an individual's psychological state, playing a pivotal role in face-to-face communication. This research focuses on advancing collaboration between machines and humans by undertaking a thorough investigation into facial expressions. Specifically, we analyze emotional variations related to changes in skin tone across genders and cultural backgrounds (Black and White). The research methodology is structured across three phases. In Phase I, image data is acquired and processed from the Chicago Face Dataset, resulting in 12,402 augmented images across five expression classes. Phase II involves the identification of Regions of Interest (ROI) and the extraction of RGB values as features from these ROIs. Various methods, including those proposed by Kovac, Swift, and Saleh, are employed for precise skin identification. The final phase, Phase III, centers on the in-depth analysis of emotions and presents the research findings. Statistical techniques, including descriptive statistics, independent-sample T-tests for gender and cross-cultural comparisons, and two-way ANOVA, are applied to RED, BLUE, and GREEN pixel values as response variables, with gender and emotions as explanatory variables. The rejection of null hypotheses prompts a post hoc test to discern significant pairs of means. The results indicate that both cross-cultural background and gender significantly influence pixel colors, underscoring the impact of different localities on pixel coloration. Across various expressions, our results exhibit a minimal 0.05% error rate in all classifications. Notably, the study reveals that the green pixel color does not exhibit a significant difference between the Anger and Neutral emotions, suggesting a near-identical appearance of green pixels in these emotional states. These findings contribute to a nuanced understanding of the intricate relationship between facial expressions, gender, and cultural backgrounds, providing valuable insights for future research in human–machine interaction and emotion recognition.

Introduction

Facial expressions are a universal and profound means of human communication, serving as windows into our thoughts and emotions. They convey a wealth of information, from subtle nuances to intense feelings, playing an indispensable role in our daily interactions. Understanding the intricacies of facial expressions is not only vital in deciphering the emotions of those around us but also holds great promise in advancing human–machine interactions and cross-cultural communication. The ability to recognize and respond to emotions through facial cues is an essential element of empathy and effective social engagement, and it has the potential to revolutionize the way we interact with technology and each other. In recent years, the convergence of machine learning, computer vision, and artificial intelligence has provided us with the tools and methodologies to delve deeper into the world of facial expressions. By discerning subtle changes in facial features, such as muscle contractions, skin tone, and microexpressions, we can gain insights into the emotional states of individuals. This capability is invaluable in improving human–computer interaction and has applications in fields as diverse as psychology, healthcare, marketing, and beyond.

Emotions are mental states accompanied by physiological changes and are variously connected with thoughts, feelings, social reactions, and degrees of happiness or dejection. Several studies have shown how human emotions affect the skin, with both positive and negative consequences for the body. Anxiety, a principal affliction of young people today, has the potential to prematurely age the face over time. When neuropeptide receptors in the skin receive signals, embarrassment can travel from the brain to the skin, causing blushing and reddened cheeks. The sensitivity of the sympathetic nervous system controls how often and how readily one blushes, as well as how hot the skin feels. Fear works similarly: when a person feels threatened or in danger, the brain's first response is to signal the adrenal glands to release epinephrine (adrenaline). As a result, the skin turns pale, the heart rate increases, and, when a burst of energy is needed to run rapidly, blood is redirected to the body's large power muscles.

Adrenaline also diverts some of the blood away from the skin and face, constricting the blood vessels of the skin to regulate and restrict bleeding in the event of a wound. Psychodermatology (PD) is the study of how the mind and skin interact, emphasizing psychopathological variables in the onset, progression, and prognosis of dermatological illnesses. PD has even proven able to disclose hidden psychological issues underlying skin disorders [1]. The skin, the most visible part of the body and one that can be affected by psychological factors, is made up of three basic layers, as shown in Fig. 1: the epidermis, dermis, and hypodermis. The epidermis is the top layer, the hypodermis is the bottom layer, and the dermis is the layer between the two. Consequently, many forms of emotional conflict manifest depending on which skin layer is affected. This is unmistakable evidence of the link between emotions and skin.

Fig. 1

The structure of human skin consists of three layers [2]

This research is motivated by the growing need to comprehend and harness the power of facial expressions in an increasingly interconnected and technologically driven world. The ability to recognize emotions accurately and empathetically can be pivotal in creating more intuitive and responsive machines, capable of understanding and accommodating the emotional needs of their users. Additionally, it can facilitate cross-cultural understanding by bridging the gap in non-verbal communication cues that can vary widely across different cultures and regions. In this context, our study aims to develop an architecture for analyzing emotions from human skin, which adds a novel dimension to the study of facial expressions. By examining how skin tone varies across different localities, genders, and emotions, we seek to shed light on the intricate relationship between human emotions and their external manifestations. The outcomes of this research have the potential to not only advance the field of emotion recognition but also have practical applications in technology, psychology, and human well-being. This study endeavors to contribute to a deeper understanding of facial expressions and their significance in human–machine interactions, ultimately paving the way for more emotionally intelligent and culturally sensitive technology.

The main contribution of this work is a comprehensive examination of facial expressions, specifically analyzing emotional variations related to changes in skin tone across different genders and cultural backgrounds (Black and White). The study employs a structured three-phase methodology:

  • Data acquisition and preparation using the Chicago Face Dataset, which, after augmentation, yields 12,402 images across five expression classes.

  • Identification of Regions of Interest (ROI) in the images and the extraction of RGB values from these ROIs, followed by skin identification using methods from Kovac, Swift, and Saleh.

Through rigorous statistical analysis, including descriptive statistics, independent-sample T-tests, and two-way ANOVA, this research demonstrates that both gender and cultural background significantly influence pixel colors in facial images. This indicates that regional influences do impact the color of pixels associated with emotional expressions. The study's findings provide deeper insight into how emotions, gender, and cultural backgrounds interplay in facial imagery, with the notable observation that the green pixel color did not significantly differ between the Anger and Neutral emotions.

Related work

The skin is an organ physiologically tied to expressive activity: sweating, pallor, redness, and eagerness can all be symptoms of corporal activation, conveying a variety of emotional states. The link between mental distress and skin changes has long been a source of fascination for clinicians and researchers. Because the human epidermis, unlike many other organs in the body, reacts quickly to mental stress, more than a few authors set out to prove the supposed "connection of brain & skin" [3]. That work also states that physical, mental, and emotional stress all have an impact on the skin, which could be due to a variety of factors: hormone release during stressful situations promotes irritation and decreased blood flow to the skin, stimulating skin nerves and promoting inflammatory allergic reactions, with systemic changes in immunological and neuroendocrine parameters [4].

According to Carlos F. [5], the hue of one's skin is an effective way of visibly transmitting feelings. It is hypothesized that observable facial colors enable spectators to convey and visually understand sentiments even when facial muscles are not activated. Two of the research questions noted are: "Are visible face colors consistent and different across emotion types and optimistic vs adverse valence?" and "Does the human visual system employ these facial colors to decode emotion from faces?". Abdul Rahman K. in [6] examines the appropriateness of galvanic skin response (GSR) as a psychological marker for monitoring participants' moods and tension during the presentation of three sets of photographs (Happy, Neutral, and Sad); after using GSR [7] for this purpose, it is concluded that GSR might not be reliable.

In [8], it is investigated whether facial color affects facial expression recognition and vice versa. The ATR face expression database was used in several trials, while SPSS was employed for statistical analysis. In face perception, the results revealed an interacting but disproportionate link between facial color and expression. Color analysis of facial skin is performed by Geovany A. [9], showing that changes in facial skin color alone can be used to infer a person's emotional state. A dataset of spontaneous human emotions was developed with a diverse range of human volunteers of various ages and nationalities. Different experiments were conducted using several ML algorithms, including decision trees [10, 11], multinomial logistic regression [12], and latent-dynamic conditional random fields [13].

Geovany A. achieved a 77.08% accuracy rate. In [14], A.K. Dąbrowska describes the link between skin function, barrier quality, and body-dependent factors. A number of in vivo studies concentrating on features of human skin related to barrier qualities and body-dependent factors have been conducted, and it has been determined that skin characteristics vary across persons of different ages, genders, ethnicities, and skin types. Body-dependent elements that influence human skin are depicted in Fig. 2.

Fig. 2

Body-dependent elements that influence human skin

Posed facial expressions refer to those artificially generated upon instruction, typically induced under controlled conditions or within predetermined observational environments. Contrarily, spontaneous expressions emerge organically without an individual's overt awareness [15,16,17,18,19,20]. Historically, Facial Expression Recognition (FER) research has predominantly centered on posed expressions, a preference attributed to the challenges associated with securing datasets encompassing genuine, spontaneous expressions. A prevailing technique researchers employ to induce authentic emotional responses involves presenting subjects with emotion-evoking cinematic material. However, it was observed that inducing genuine expressions of sadness and fear remained arduous. Yet, a study contended that while inducing fear was intricate, the same was not true for sadness, possibly due to disparities in the selected video stimuli. Intriguingly, anger surfaced as an emotion particularly challenging to stimulate through visual media, given its inherent need for deeper personal engagement. An additional complexity pertains to the veracity of spontaneous expressions. The contextual backdrop wherein subjects are placed can potentially skew genuine reactions, instigating altered or guarded expressions. Despite these intricacies, a significant faction within the psychological research community posits the superiority of spontaneous over posed expression recognition, attributing it to the authenticity and nuanced emotional intricacies it captures. This investigation aims to juxtapose and elucidate findings from both posed and spontaneous facial expression datasets [21,22,23,24,25].

Methodology

For the analysis of facial expressions through static images, a specialized algorithm is required to extract skin color information. Our approach to detecting human skin features is based on the most up-to-date research in the field. This work is divided into three distinct phases. In Phase I, we focus on data acquisition and preparation. Phase II involves the identification of Regions of Interest (ROI), feature extraction, and skin identification. Feature extraction is a critical step in human skin color detection, and we have examined various statistical techniques and parameters to ensure the accuracy of this phase. We implement a technique to extract pixel values from emotional images and then extract RGB values from them. After extracting pixel values, we assess whether the detected skin is indeed human; this skin descriptor is applied to obtain essential information, such as the pixel values of the area of interest for each image corresponding to the basic emotions of fear, anger, neutrality, and happiness. The feature data is organized and input into a dedicated Excel sheet after extraction from the images. In Phase III, we conduct further analysis using different statistical techniques, including the independent-sample T-Test, descriptive statistics, and two-way ANOVA, with the RED, BLUE, and GREEN values as response variables. Figure 3 illustrates the complete framework.

Fig. 3

Representation of the proposed framework

Data acquisition

Data acquisition plays a vital role in computer science as in other fields. In this work, the gathered data is used to improve efficiency and ensure the reliability of emotion recognition in different ways. All processing is performed on the Chicago Face Database version 2.0.3, July 2016 [26]. The CFD subset used here consists of 158 high-resolution, standardized images of White and Black females and males aged 18 to 40 years. The development of the CFD involved gathering stimuli and assembling data about every target. High-resolution digital images were taken of subjects showing a range of facial expressions under consistent conditions, for example lighting, eye level, and face angle. Some of the acquired images from the CFD dataset are shown in Fig. 4.

Fig. 4

Sample images from Chicago face dataset [26] A Female “Black” B Male “Black” C Female “White” D Male “White”

Preparation and description of dataset

The Chicago Face Database (CFD) is a publicly available dataset designed to provide high-resolution, standardized photographs of male and female faces of varying ethnicity and age. Created by researchers at the University of Chicago, the dataset aims to aid in psychological and neuroscientific studies that require stimulus control across participants in areas like facial perception and emotion recognition. Chicago Face Database (CFD) contains images of 597 individuals: 158 Black (77 Female, 81 Male), 218 White (108 Female, 110 Male), 79 Asian (40 Female, 39 Male), 80 Latino (39 Female, 41 Male), and 62 other/mixed-race individuals (32 Female, 30 Male).

Here are some of the key features of the CFD:

  • Diverse Representation: The CFD contains faces of individuals from various ethnic backgrounds, including White, Black, Hispanic, Asian, and more. This makes the dataset uniquely suited for studies that wish to understand ethnic differences in facial perception.

  • Standardized Conditions: Faces in the CFD were captured under controlled lighting conditions and are centered with neutral facial expressions. This uniformity ensures that any observed effects in studies using this dataset can be attributed to the variables of interest rather than differences in photo conditions.

  • Variety of Measures: Along with the photographs, the dataset also provides measures such as subjective ratings on perceived attractiveness, dominance, trustworthiness, and age, among others. This allows for richer analyses and exploration of how faces vary along these dimensions.

  • Facial Manipulations: The CFD has been used to create facial stimuli with manipulated features (like changing apparent age or morphing faces together) to study specific research questions in facial perception.

The CFD is a valuable tool for researchers in the fields of psychology, neuroscience, and computer science, especially those focusing on facial recognition, emotion processing, and other related areas. One of the most essential aspects of research is choosing or creating a solid dataset. The dataset is prepared according to the requirements of this work: male and female images belonging to the Black and White categories are divided into separate folders, as shown below in Fig. 5, which elaborates the preparation of the dataset in detail.

Fig. 5

Preparation of dataset for further processing
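As an illustration of this preparation step, the following Python sketch (our assumption, not the authors' script) sorts CFD images into culture/gender folders. It assumes the database's dash-separated file-name convention; paths and folder names are illustrative.

```python
from pathlib import Path
import shutil

# Assumed CFD file naming, e.g. "CFD-BF-001-025-N.jpg", where the second
# dash-separated token encodes ethnicity and gender (B/W + F/M).
SRC = Path("CFD/Images")   # hypothetical source folder
DST = Path("dataset")      # hypothetical prepared-dataset root
folders = {"BF": "Black/Female", "BM": "Black/Male",
           "WF": "White/Female", "WM": "White/Male"}

for img in SRC.glob("*.jpg"):
    parts = img.name.split("-")
    key = parts[1] if len(parts) > 1 else ""
    if key in folders:  # only the Black and White groups are used in this work
        target = DST / folders[key]
        target.mkdir(parents=True, exist_ok=True)
        shutil.copy(img, target / img.name)
```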

Region of interest

A region of interest is the part of an image on which some operation or filter is to be applied. All images used in this research are cropped as required so that the regions of interest are more visible. This work defines only one ROI per image, though more than one region can be defined if required. The right cheek is extracted, as shown below in Fig. 6.

Fig. 6

Representing obtained region of interest

Four coordinates (two size-2 tuples) are required for cropping. The first pair specifies the upper-left corner of the ROI, and the second pair denotes its bottom-right corner. In our case, the coordinates for the ROI would be (1050, 880) and (1100, 880) (assuming row-major indexing is used). Images are simply large matrices in which each value is a pixel, positioned row-wise and column-wise. Cropping an image is therefore just extracting a sub-matrix of the image matrix. The size of the sub-matrix (the cropped image) can be chosen freely; it is essentially the height and width. One more piece of information is needed to crop the image: the starting position. From the starting position, given the height and width, we can easily crop the image.

The three important things are:

  • Starting position.

  • Length (height).

  • Width.

Based on these three things, we can construct our cropping function completely.

Equation 1 shows how the cropping function works.

$$ROI = \mathrm{imcrop}\left(I,\ \left[x_{min}\;\; y_{min}\;\; Width\;\; Height\right]\right)$$
(1)

The ROI with its coordinates is shown below in Fig. 7, where:

Fig. 7

Coordinates of ROI

$$x_{min} = \min(x_1, x_2)$$
(2)
$$y_{min} = \min(y_1, y_2)$$
(3)
$$Width = x_2 - x_1$$
(4)
$$Height = y_2 - y_1$$
(5)
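For readers who prefer runnable code to MATLAB's imcrop, the sketch below mirrors Eqs. (1)–(5) in Python. Pillow/NumPy as tooling, the file name, and the example corner points are our assumptions.

```python
import numpy as np
from PIL import Image

def crop_roi(image, p1, p2):
    """Crop a rectangular ROI from two corner points, following Eqs. (2)-(5)."""
    (x1, y1), (x2, y2) = p1, p2
    x_min, y_min = min(x1, x2), min(y1, y2)      # Eqs. (2) and (3)
    width, height = abs(x2 - x1), abs(y2 - y1)   # Eqs. (4) and (5)
    # An image is a row-major matrix: rows index y, columns index x,
    # so cropping is just taking the sub-matrix.
    return image[y_min:y_min + height, x_min:x_min + width]

img = np.asarray(Image.open("face.jpg"))       # hypothetical input image
roi = crop_roi(img, (1050, 880), (1100, 930))  # illustrative corner points
```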

Feature extraction

The main features of this work depend upon pixel values. The value of each pixel represents how bright that pixel is and what its color should be. In the simplest case of binary images, a pixel value is a 1-bit number representing either the background or the foreground. For grayscale values, numbers closest to zero represent black and numbers near 255 represent white. Rather than raw pixel intensities, the main focus of this work is on the RGB values extracted from the pixels of the region of interest. Figure 8 gives more clarity around this idea.

Fig. 8

Extraction of RGB values from each individual pixel

Algorithm: Feature extraction

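The published algorithm listing is reproduced only as an image in the article; the sketch below is our Python reconstruction of the steps it describes (crop the ROI, read the R, G, and B values of every pixel, and tabulate the features for the Excel sheet). The file paths, column names, and the helper crop_roi from the earlier cropping sketch are assumptions.

```python
import numpy as np
import pandas as pd
from PIL import Image

def extract_rgb_features(image_path, emotion, gender, culture):
    """Return one row per ROI pixel with its R, G, B values and labels."""
    img = np.asarray(Image.open(image_path).convert("RGB"))
    roi = crop_roi(img, (1050, 880), (1100, 930))  # cheek ROI (illustrative)
    return pd.DataFrame({
        "RED":   roi[..., 0].ravel(),
        "GREEN": roi[..., 1].ravel(),
        "BLUE":  roi[..., 2].ravel(),
        "Emotion": emotion, "Gender": gender, "Culture": culture,
    })

# Frames from all images are concatenated and written to the feature sheet,
# as described in the Methodology:
# pd.concat(frames).to_excel("features.xlsx", index=False)
```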

Skin identification

Skin detection from images serves as the foundation for various applications, including face detection and recognition, identification of inappropriate content, person tracking, hand gesture recognition, and assessment of emotional states. In the present study, we have employed four distinct techniques for skin identification. Each of these methods has demonstrated significant efficacy and is outcome-driven. Notably, these methods [27] (RGB ratios), [28] (RGB ratios), [29] (RGB ranges), and [30] (relationship) are grounded on RGB ratios, RGB ranges, and their relative relationships. Additionally, Siddiqui [15] expanded upon these techniques in his research, proposing an innovative approach specifically tailored for human skin detection.

Osman's method [27] is based on RGB ratios; according to Osman, a pixel is a skin pixel if:

$$0.0 \le ({R}-{G})/(R+G) \le 0.5$$
(6)
$${B}/(R+G)\le 0.5$$
(7)

Swift [28] defines a pixel as not a skin-color pixel if:

$$B > R \;\mathrm{or}\; G < B \;\mathrm{or}\; G > R \;\mathrm{or}\; B < R/4 \;\mathrm{or}\; B > 200$$
(8)

Saleh [29] defines a pixel as skin if:

$$20 < R - G < 80$$
(9)

Kovac [30] defines the rules below; a pixel is a skin pixel if:

$$1^{\mathrm{st}}\ \mathrm{Rule}: R > 95,\; G > 40,\; B > 20$$
(10)
$$2^{\mathrm{nd}}\ \mathrm{Rule}: \max\left(R, G, B\right) - \min\left(R, G, B\right) > 15$$
(11)
$$3^{\mathrm{rd}}\ \mathrm{Rule}: \left|R - G\right| > 15$$
(12)
$$4^{\mathrm{th}}\ \mathrm{Rule}: R > G\ \mathrm{and}\ R > B$$
(13)
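Eqs. (6)–(13) translate directly into code. The following minimal Python rendering of the four rules is a sketch for a single 8-bit (R, G, B) pixel; the guard against division by zero in Osman's ratios is our addition.

```python
def is_skin_osman(r, g, b):
    # Osman [27], Eqs. (6)-(7): RGB-ratio conditions.
    s = r + g
    return s > 0 and 0.0 <= (r - g) / s <= 0.5 and b / s <= 0.5

def is_skin_swift(r, g, b):
    # Swift [28], Eq. (8) lists NON-skin conditions; skin pixels fail them all.
    return not (b > r or g < b or g > r or b < r / 4 or b > 200)

def is_skin_saleh(r, g, b):
    # Saleh [29], Eq. (9): skin if 20 < R - G < 80.
    return 20 < r - g < 80

def is_skin_kovac(r, g, b):
    # Kovac [30], Eqs. (10)-(13): range, spread, and channel-order rules.
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15
            and r > g and r > b)
```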

Analysis of emotions

The RGB values obtained after the skin identification procedure were analyzed using several statistical methodologies. Specifically, we utilized the independent-sample T-test, descriptive statistics, two-way analysis of variance (ANOVA), and post hoc testing for a more granular understanding of the data. Our analysis was rooted in three primary categorical distributions: gender, group classification, and ethnocultural variance. The efficacy of emotion recognition within this study was high; however, noteworthy discrepancies were observed in emotion recognition capabilities when compared with the extant literature. The research yielded a comprehensive dataset delineating key features, which holds academic merit and offers potential applicability in advancing medical research.
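As a sketch of this analysis pipeline (not the authors' actual scripts, which likely used SPSS), the tests can be reproduced in Python with scipy and statsmodels. The column names are assumptions carried over from the feature-extraction sketch, and Tukey's HSD stands in for the Fisher LSD post hoc test reported in the paper.

```python
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_excel("features.xlsx")  # assumed Phase II feature sheet

# Independent-sample t-test: RED pixel values, male vs. female.
male = df.loc[df["Gender"] == "Male", "RED"]
female = df.loc[df["Gender"] == "Female", "RED"]
print(stats.ttest_ind(male, female))

# Two-way ANOVA: RED as response, gender and emotion as explanatory factors.
model = smf.ols("RED ~ C(Gender) * C(Emotion)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post hoc pairwise comparison of emotions (Tukey HSD as a stand-in for LSD).
print(pairwise_tukeyhsd(df["RED"], df["Emotion"]))
```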

Parameter description

This study used the following parameter settings:

Dataset:

  • Source: Chicago Face Dataset.

  • Original Images: 1,030.

  • Augmented Images: 12,402.

  • Classes: 5 (facial-expression classes).

  • Regions of Interest (ROI):

  • Extraction of RGB values as features.

  • Focusing primarily on emotional variations in relation to skin tone changes.

Skin identification methods:

  • Kovac.

  • Swift.

  • Saleh.

Statistical techniques for analysis:

  • Descriptive statistics.

  • Independent sample T-tests (For gender and cross-cultural comparisons).

Two-way ANOVA:

  • Response Variables: RED, BLUE, and GREEN pixel values.

  • Explanatory Variables: Gender and emotions.

Post HOC testing:

  • Performed after rejecting the null hypothesis.

  • Aim: Determine which pairs of means are significant.

Error rate:

  • 0.05% error across all classifications.

Observational note:

  • No significant difference in the green pixel color between the emotions of Anger and Neutral.

Results

Demographic characteristics

The total number of respondents in this research is 145, of which 66 belong to the White culture and 79 to the Black culture. Regarding gender, there are 62 male respondents and 83 female respondents across both localities. Figure 9 shows a graphical representation of the respondents.

Fig. 9

Demographic characteristics in the study

The total number of respondents in the original database was 158, but due to some technical issues, the image processing of 13 respondents was not possible.

Group wise statistics

Table 1 presents the descriptive statistics for the pixel colors with respect to gender. The highest Mean ± SD for male respondents is 197.09 ± 1.687, for the red color pixel. Similarly, the highest Mean ± SD for female respondents is 202.95 ± 17.051.

Table 1 Group statistics as per RGB values

Figure 10 gives the complete graphical representation of the group statistics, showing the mean and standard deviation of the RGB intensity values for males and females individually.

Fig. 10

Mean and Std. Deviation differences among RGB values of male and female

Evaluation through T-Test

The independent-sample t-test in Table 2 shows that there is a statistically significant difference between the cross-cultural respondents (i.e., White and Black) with respect to gender (male/female), indicating that cross-cultural background has an impact on the pixel colors. Table 2 presents the independent-sample T-test for the cross-cultural comparison.

Table 2 Independent sample T-Test for cross-cultural (White/Black)

The independent-sample T-test also shows that there is a statistically significant difference (Sig. < 0.05) between male and female respondents with respect to the three color channels (Red, Green, and Blue), indicating that gender has an impact on the pixel colors, as shown below in Table 3.

Table 3 Independent sample t test (Gender)

ANOVA by taking RGB as the response variable individually

A two-way analysis of variance was applied to the response variable (Red), as per Table 4, with cross-culture, gender, and emotions as factors, to test the significance of the red color pixel with respect to these factors. The results showed a significant difference at the 5% level of significance. As the ANOVA results are significant, the post hoc LSD test was applied, and significant results were found for all four emotions.

Table 4 ANOVA Table for response variable (Red)

By changing the factors, i.e., cross-culture, gender, and emotions, the red color in the pixel changes according to the factor levels (Cross-Cultural: White, Black), (Gender: Male, Female), and (Emotions: Anger, Fear, Happy, Neutral), and the response variable changes accordingly.

Table 5 shows the results for the response variable (Red) with cross-culture, gender, and emotions as factors. The results are statistically significant (Sig. < 0.05), meaning that cross-culture, gender, and emotions all have an impact on the red color pixels (the response variable).

Table 5 Multiple comparisons with respect to Red values

Two-way ANOVA was applied to the response variable (Green), as shown below in Table 6, with cross-culture, gender, and emotions as factors, to test the significance of the green color pixel. It shows the same pattern of results as the response variable Red.

Table 6 ANOVA Table for response variable (Green)

Table 6 shows the results for the response variable (Green) with cross-culture, gender, and emotions as factors. By changing these factors, the green color in the pixel changes according to the factor levels (Cross-Cultural: White, Black), (Gender: Male/Female), and (Emotions: Anger, Fear, Happy, Neutral), and the response variable changes accordingly.

Table 7 presents the post hoc LSD test for emotions with the response variable (Green): the emotion Anger is statistically different at the 5% level of significance from the other three emotions (Fear, Happy, and Neutral) for the green color pixel. However, Fear versus Neutral is statistically insignificant (Sig. > 0.05).

Table 7 Multiple comparisons with respect to Green

Table 8 presents the results of our analysis with "Blue" as the response variable, and we considered three factors: Cross-Cultural background, Gender, and Emotions. The statistical results indicate that there is a significant impact of these factors on the "Blue" color (Response variable). In other words, the differences observed in the "Blue" color are not due to random chance but are influenced by the specific factors we studied.

Table 8 ANOVA table for response variable (Blue)

To provide a deeper understanding, the breakdown of the factor categories is:

  • Cross-Cultural Factor: This factor looks at the impact of different cultural backgrounds, specifically White and Black in this context, on the "Blue" color. It means that the "Blue" color in the pixel changes based on the cultural background. People from different cultural backgrounds may express their emotions differently, and this can be reflected in the "Blue" color of the pixel.

  • Gender Factor: The Gender factor considers Male and Female categories. It suggests that the "Blue" color varies depending on the gender of the individuals being studied. This implies that males and females may display different "Blue" color patterns in response to the same emotional stimuli, further indicating a gender-related influence on this color component.

  • Emotions Factor: This factor explores the impact of different emotions (Anger, Fear, Happy, Neutral) on the "Blue" color. Emotions have a substantial effect on how our skin reflects light and, consequently, the "Blue" color in the pixel. For example, someone experiencing anger may exhibit a different "Blue" color response than someone feeling happiness.

Table 9 presents the post hoc LSD test for emotions: the emotion Anger is statistically different (Sig. < 0.05) from the other three emotions (Fear, Happy, and Neutral) for the blue color pixel. Similarly, Fear versus Happy, Neutral, and Anger; Happy versus Fear, Neutral, and Anger; and Neutral versus Fear and Happy are all statistically different for the blue color pixel.

Table 9 Multiple comparisons with respect to Blue

Discussion

This study has practical implications in several domains, particularly in the fields of psychology, human–computer interaction, and machine learning. Here are some practical implications of this research:

  • Emotion Recognition Technology: The study's focus on analyzing facial expressions and their connection to gender and cross-cultural differences has direct applications in the development of emotion recognition technology. Emotion recognition is increasingly important in fields like human–computer interaction, virtual reality, and customer service. Understanding how skin tone, gender, and cultural background affect the recognition of emotions can lead to more accurate and culturally sensitive emotion recognition systems.

  • Cross-Cultural Sensitivity: The research findings regarding cross-cultural differences in facial expressions and skin tone can be applied to enhance cross-cultural communication and sensitivity. In diverse workplaces or international business settings, this knowledge can help people better understand and interpret the emotional cues of individuals from different cultural backgrounds, potentially reducing miscommunication and misunderstandings.

  • Psychological Research: The study contributes to our understanding of how gender and cultural factors influence facial expressions and emotions. Psychologists and researchers can use this information to design more culturally inclusive studies and interventions. It can also help therapists and counselors develop more effective approaches to working with clients from different cultural backgrounds.

  • Human–Machine Interaction: In human–computer interaction and artificial intelligence, the research findings can be used to improve how machines interpret and respond to human emotions. For example, chatbots and virtual assistants could become more sensitive to users' emotional states and provide more appropriate and empathetic responses.

  • Diversity and Inclusion: The findings on gender and cross-cultural differences may be valuable in promoting diversity and inclusion in various settings. Organizations can use this information to create more inclusive workplaces and services that consider the emotional expressions and needs of individuals from diverse backgrounds.

  • Product Design and Marketing: Companies in the consumer product and advertising industries can benefit from this research by creating products and advertisements that resonate with the emotional expressions and expectations of different demographic groups. This can lead to more effective marketing strategies and product designs.

  • Education and Training: Educators and trainers can incorporate the knowledge gained from this study into programs that help people develop their emotional intelligence and cultural competence. This can be particularly important in professions where understanding and responding to emotions is crucial, such as teaching, healthcare, and customer service.

  • Policy and Legislation: In some cases, these findings may inform policies and legislation related to discrimination, bias, or cultural sensitivity, helping to address issues related to fairness and equality.

It's important to note that the practical implications of this study should be applied with caution, considering ethical and privacy concerns, and taking into account individual consent and cultural context. Additionally, ongoing research and validation of these findings may be necessary to ensure their applicability in various real-world scenarios. The study also has several limitations that should be considered when interpreting its results and practical implications:

  • Sample Size and Diversity: The study's findings are limited by the size and diversity of the dataset used. If the dataset is not representative of a broad range of ages, ethnicities, and cultural backgrounds, the generalizability of the results may be compromised.

  • Cultural Sensitivity: While the study examines cross-cultural differences, it may oversimplify the complexities of cultural expressions and emotions. Emotions and their expressions are highly context-dependent, and cultural nuances can be challenging to capture accurately.

  • Data Acquisition and Selection: The accuracy of the study depends on the quality and representativeness of the data acquired. If the data selection process is biased or incomplete, it could lead to erroneous conclusions.

  • Skin Identification Methods: The study uses various methods for skin identification, and the accuracy of these methods can vary. The choice of method and its settings may impact the results. Some methods may not work equally well for all skin tones.

  • Emotion Analysis: The accuracy of emotion analysis from facial expressions is a subject of ongoing debate in the field. Emotions are complex and can manifest differently in different individuals. The algorithms used for emotion analysis may have limitations in accurately categorizing emotions.

  • Simplification of Emotions: The study focuses on a limited set of emotions (e.g., anger and neutral). Emotions are multifaceted, and individuals may express them in various ways. The study's findings may not apply to a broader range of emotions.

  • External Factors: The study does not consider external factors that can influence facial expressions, such as social context, individual differences, or non-verbal cues. These factors can significantly impact the interpretation of facial expressions.

  • Ethical and Privacy Concerns: The use of facial recognition technology and the collection of facial expression data raise ethical and privacy concerns. It is essential to address issues related to informed consent, data security, and potential misuse of the data.

  • Statistical Assumptions: The statistical analyses used in the study may make certain assumptions that are not always met. Violation of these assumptions can affect the validity of the results.

  • Temporal Aspect: The study appears to be cross-sectional, examining emotions at a single point in time. Emotions can change over time, and a longitudinal study might provide a more comprehensive understanding of emotional expression.

  • Technology Limitations: The study might not account for advances in technology that could impact the accuracy of emotion recognition and skin tone identification. New methods or tools might be available that could enhance accuracy. Future work should consider other researchers' efforts toward improving such results [25, 31,32,33].

  • Causation vs. Correlation: The study may identify correlations between variables (e.g., skin tone and emotion expression), but it may not establish causation. Causation would require additional experimental designs or longitudinal studies.

  • Generalization: Findings from this study should not be overgeneralized to all situations and populations. The relationships between gender, culture, skin tone, and emotions are likely to be context-specific.

In conclusion, while the study provides valuable insights into the relationship between facial expressions, gender, skin tone, and emotions, its limitations should be acknowledged. The incorporation of state-of-the-art image processing methods holds the promise of significantly enhancing the accuracy of results in facial expression analysis. The continual evolution of image processing techniques enables researchers to employ advanced methodologies that can capture nuanced details and improve the overall precision of emotion recognition systems [34,35,36,37,38,39].

Utilizing the latest image processing methods offers several potential benefits:

  • Feature extraction refinement:

Modern image processing methods provide sophisticated algorithms for feature extraction. By leveraging these techniques, researchers can refine the extraction of facial features critical for emotion analysis, ensuring a more comprehensive and accurate representation of expressions.

  • Deep learning architectures:

Deep learning, particularly convolutional neural networks (CNNs), has demonstrated exceptional capabilities in image analysis. Implementing the latest CNN architectures allows for hierarchical feature learning, enabling the model to discern intricate patterns in facial expressions that may be challenging for traditional methods [40,41,42,43,44].

  • Real-time processing:

Recent advancements in image processing have led to the development of real-time processing capabilities. This is particularly beneficial for applications where timely and accurate emotion recognition is essential, such as in human–computer interaction systems or affective computing.

  • Noise reduction and augmentation:

Cutting-edge image processing methods provide efficient ways to reduce noise and enhance image quality. Additionally, data augmentation techniques can be employed to generate diverse facial expressions, augmenting the training dataset and improving the model's ability to generalize to various expressions.

  • Adaptive algorithms:

Adaptive algorithms, capable of dynamically adjusting parameters based on contextual information, contribute to the adaptability of the system across different individuals, lighting conditions, and cultural nuances, ultimately improving the accuracy of emotion recognition.

Conclusions

The primary objective of this study is to develop an architecture for analyzing emotions through human skin. After subjecting the extracted features to various statistical tests, it is evident that different factors, such as localities, have a discernible impact on pixel colors. The results of independent t-tests reveal significant differences across both cross-cultural and gender categories, suggesting that these factors influence pixel colors. To further investigate the significance of red, green, and blue color pixels concerning cross-culture, gender, and emotions, we applied a two-way Analysis of Variance (ANOVA). The results indicate a significant difference at a 5% level of significance. Subsequently, a Post Hoc LSD test was conducted, which confirmed significant results for all four emotions, except for the green color. Interestingly, gender also emerged as a significant variable affecting pixel colors. This implies that the pixel colors for male respondents differ from those of female respondents. However, it is worth noting that for the Anger and Neutral emotions, the green pixel color did not show a significant difference, suggesting that the green color pixels for Anger and Neutral emotions are nearly identical.

Availability of data and materials

The data presented in this study is available on request from the corresponding author. The data is not publicly available due to privacy issues.

References

  1. Hafi B, Abdul Latheef EN, Uvais NA, Jafferany M, Razmi TM, Afra TP, SilahAysha KS (2020) Awareness of psychodermatology in Indian dermatologists: A South Indian perspective. Dermatol Ther 33(6):e14024


  2. Dyring-Andersen B, Løvendorf MB, Coscia F, Santos A, Møller LBP, Colaço AR, Mann M (2020) Spatially and cell-type resolved quantitative proteomic atlas of healthy human skin. Nature Commun 11(1):1–14


  3. Mento C, Rizzo A, Muscatello MRA, Zoccali RA, Bruno A (2020) Negative emotions in skin disorders: a systematic review. Int J Psychol Res 13(1):71–86


  4. Pavlovic S, Daniltchenko M, Tobin DJ, Hagen E, Hunt SP, Klapp BF, Peters EM (2008) Further exploring the brain–skin connection: stress worsens dermatitis via substance P-dependent neurogenic inflammation in mice. J Invest Dermatol 128(2):434–446


  5. Benitez-Quiroz CF, Srinivasan R, Martinez AM (2018) Facial color is an efficient mechanism to visually transmit emotion. Proc Natl Acad Sci 115(14):3581–3586


  6. Eesee AK. (2019). The suitability of the Galvanic Skin Response (GSR) as a measure of emotions and the possibility of using the scapula as an alternative recording site of GSR. In 2019 2nd International Conference on Electrical, Communication, Computer, Power and Control Engineering (ICECCPCE) 80–84. IEEE, USA

  7. Iadarola G, Poli A, Spinsante S. (2021). Analysis of Galvanic Skin Response to Acoustic Stimuli by Wearable Devices. In 2021 IEEE International Symposium on Medical Measurements and Applications (MeMeA) 1–6. IEEE, USA

  8. Nakajima K, Minami T, Nakauchi S (2017) Interaction between facial expression and color. Sci Rep 7(1):1–9


  9. Ramirez GA, Fuentes O, Crites Jr SL, Jimenez M, Ordonez J. (2014). Color analysis of facial skin: Detection of emotional state. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops 1:468–473

  10. Charbuty B, Abdulazeez A (2021) Classification based on decision tree algorithm for machine learning. J Appl Sci Technol Trends 2(01):20–28


  11. Bhatti UA, Marjan S, Wahid A, Syam MS, Huang M, Tang H, Hasnain A (2023) The effects of socioeconomic factors on particulate matter concentration in China’s: New evidence from spatial econometric model. J Clean Prod 417:137969


  12. Bhatti UA, Huang M, Neira-Molina H, Marjan S, Baryalai M, Tang H, Bazai SU (2023) MFFCG–Multi feature fusion for hyperspectral image classification using graph attention network. Exp Syst Appl 229:120496


  13. Neogi S, Dauwels J (2019) Factored latent-dynamic conditional random fields for single and multi-label sequence modeling. Pattern Recogn 122:108236


  14. Dąbrowska AK, Spano F, Derler S, Adlhart C, Spencer ND, Rossi RM (2018) The relationship between skin function, barrier properties, and body-dependent factors. Skin Res Technol 24(2):165–174


15. Siddiqui KTA, Wasif A (2015) Skin detection of animation characters. arXiv preprint arXiv:1503.06275

  16. Zhang J, Zhu C, Zheng L, Xu K (2021) ROSEFusion: random optimization for online dense reconstruction under fast camera motion. ACM Transact Graph 40(4):1–17. https://doi.org/10.1145/3450626.3459676


  17. She Q, Hu R, Xu J, Liu M, Xu K, Huang H, (2022) Learning high-DOF reaching-and-grasping via dynamic representation of gripper-object interaction. ACM Trans Graph 41(4). https://doi.org/10.1145/3528223.3530091

18. Xu J, Zhang X, Park SH, Guo K (2022) The alleviation of perceptual blindness during driving in urban areas guided by saccades recommendation. IEEE Trans Intell Transp Syst 23:1–11. https://doi.org/10.1109/TITS.2022.3149994


19. Xu J, Zhang X, Park SH, Guo K (2022) The improvement of road driving safety guided by visual inattentional blindness. IEEE Trans Intell Transp Syst 23(6):4972–4981. https://doi.org/10.1109/TITS.2020.3044927


  20. Xu J, Guo K, Sun PZ (2022) Driving performance under violations of traffic rules: novice vs. Experienced drivers. IEEE Trans Intell Vehicles 7:908. https://doi.org/10.1109/TIV.2022.3200592


  21. Yan L, Shi Y, Wei M, Wu Y (2023) Multi-feature fusing local directional ternary pattern for facial expressions signal recognition based on video communication system. Alex Eng J 63:307–320. https://doi.org/10.1016/j.aej.2022.08.003


  22. Liu H, Xu Y, Chen F (2023) Sketch2Photo: synthesizing photo-realistic images from sketches via global contexts. Eng Appl Artif Intell 117:105608. https://doi.org/10.1016/j.engappai.2022.105608


  23. Liu X, Zhou G, Kong M, Yin Z, Li X, Yin L, Zheng W (2023) Developing multi-labelled corpus of twitter short texts: a semi-automatic method. Systems 11(8):390. https://doi.org/10.3390/systems11080390


24. Zhang X, Huang D, Li H, Zhang Y, Xia Y, Liu J (2023) Self-training maximum classifier discrepancy for EEG emotion recognition. CAAI Trans Intell Technol. https://doi.org/10.1049/cit2.12174

  25. Liu X, Wang S, Lu S, Yin Z, Li X, Yin L, Zheng W (2023) Adapting feature selection algorithms for the classification of Chinese texts. Systems 11(9):483. https://doi.org/10.3390/systems11090483


  26. Ma DS, Correll J, Wittenbrink B (2015) The Chicago face database: A free stimulus set of faces and norming data. Behav Res Methods 47(4):1122–1135


27. Osman G, Hitam MS, Ismail MN (2012) Enhanced skin colour classifier using RGB ratio model. arXiv preprint arXiv:1212.2692

28. Swift DB (2006) Evaluating graphic image files for objectionable content. US Patent 7027645 B2

  29. Al-Shehri SA (2004) A simple and novel method for skin detection and face locating and tracking. In Asia-Pacific conference on computer human interaction. Springer, Berlin, pp 1–8


  30. Kovac J, Peer P, Solina F (2003) Human skin color clustering for face detection. IEEE 2:144–148


  31. Liu X, Shi T, Zhou G, Liu M, Yin Z, Yin L, Zheng W (2023) Emotion classification for short texts: an improved multi-label method. Hum Soc Sci Commun 10(1):306. https://doi.org/10.1057/s41599-023-01816-6


  32. Lu S, Liu M, Yin L, Yin Z, Liu X, Zheng W, Kong X (2023) The multi-modal fusion in visual question answering: a review of attention mechanisms. PeerJ Comp Sci 9:e1400. https://doi.org/10.7717/peerj-cs.1400


  33. Wang Y, Xu N, Liu A, Li W, Zhang Y (2022) High-order interaction learning for image captioning. IEEE Trans Circuits Syst Video Technol 32(7):4417–4430. https://doi.org/10.1109/TCSVT.2021.3121062


34. Nie W, Bao Y, Zhao Y, Liu A (2023) Long dialogue emotion detection based on commonsense knowledge graph guidance. IEEE Trans Multimedia. https://doi.org/10.1109/TMM.2023.3267295

  35. Shen X, Jiang H, Liu D, Yang K, Deng F, Lui JC, Liu J, Luo J (2022) PupilRec: leveraging pupil morphology for recommending on smartphones. IEEE Internet Things J 9(17):15538–15553. https://doi.org/10.1109/JIOT.2022.3181607


  36. Gao H, Liu Z, Yang CC (2023) Individual investors’ trading behavior and gender difference in tolerance of sex crimes: evidence from a natural experiment. J Empir Financ 73:349–368. https://doi.org/10.1016/j.jempfin.2023.08.001


  37. Liu Y, Li G, Lin L (2023) Cross-modal causal relational reasoning for event-level visual question answering. IEEE Trans Pattern Anal Mach Intell 45(10):11624–11641. https://doi.org/10.1109/TPAMI.2023.3284038


38. Liu Z, Wen C, Su Z, Liu S, Sun J, Kong W, Yang Z (2023) Emotion-semantic-aware dual contrastive learning for epistemic emotion identification of learner-generated reviews in MOOCs. IEEE Trans Neural Netw Learn Syst. https://doi.org/10.1109/TNNLS.2023.3294636

  39. Bhatti UA, Tang H, Wu G, Marjan S, Hussain A (2023) Deep learning with graph convolutional networks: an overview and latest applications in computational intelligence. Int J Intell Syst 2023:1–28


  40. Hamid Y, Elyassami S, Gulzar Y, Balasaraswathi VR, Habuza T, Wani S (2023) An improvised CNN model for fake image detection. Int J Inf Technol 15(1):5–15


  41. Anand V, Gupta S, Gupta D, Gulzar Y, Xin Q, Juneja S, Shaikh A (2023) Weighted average ensemble deep learning model for stratification of brain tumor in MRI images. Diagnostics 13(7):1320


  42. Ayoub S, Gulzar Y, Rustamov J, Jabbari A, Reegu FA, Turaev S (2023) Adversarial approaches to tackle imbalanced data in machine learning. Sustainability 15(9):7097


  43. Zhang Y, Chen J, Ma X, Wang G, Bhatti UA, Huang M (2024) Interactive medical image annotation using improved Attention U-net with compound geodesic distance. Expert Syst Appl 237:121282


44. Wang S, Khan A, Lin Y, Jiang Z, Tang H, Alomar SY, Bhatti UA (2023) Deep reinforcement learning enables adaptive-image augmentation for automated optical inspection of plant rust. Front Plant Sci 14:1–15


Funding

Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R384), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.

Author information

Authors and Affiliations

Authors

Contributions

Sajid Ali, Muhammad Abdullah Sarwar, Asad Khan, Muhammad Aamir, Hend Khalid Alkahtani, Samih M. Mostafa, and Yazeed Yasin Ghadi wrote the main manuscript. Asad Khan, Muhammad Sharoze Khan, Hend Khalid Alkahtani, Samih M. Mostafa, and Yazeed Yasin Ghadi arranged the funding. All authors reviewed the manuscript.

Institutional review board statement

Not applicable.

Corresponding authors

Correspondence to Asad Khan or Hend Khalid Alkahtani.

Ethics declarations

Consent for publication

Informed consent was obtained from all subjects involved in the study.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Ali, S., Khan, M.S., Khan, A. et al. Exploring cross-cultural and gender differences in facial expressions: a skin tone analysis using RGB Values. J Cloud Comp 12, 161 (2023). https://doi.org/10.1186/s13677-023-00550-3
