Interpersonal behaviors like laughter, facial expressions, mimicry, and physical proximity are social signals that help us influence and connect with other people. They are the glue that holds conversations, interactions, and relationships together. But social signals are subtle, dynamic, often co-occurring, and variable from one situation to the next. How do we understand and react to each other’s social signals? Why do they look and sound the way they do? Why are these nonverbal behaviors—to which we rarely pay explicit attention—so important for social connection? How are social signals shaped by the social environment? These are some of the questions studied in the EB Lab.
Ongoing research lines
The forms and functions of social signals
We study the social functions that different nonverbal signals serve. The physical form of a signal (how it looks or sounds) can tell us what those functions might be. For instance, we have examined the form-function mapping in laughter. Laughter is a ubiquitous social signal that can smooth interactions (Wood & Niedenthal, 2018). We identified three distinct social tasks accomplished by laughter (and its visual analogue, smiles; Martin, Rychlowska, Wood, & Niedenthal, 2017): rewarding the desirable behavior of others (reward), soothing and signaling nonthreat (affiliation), and indirectly challenging the status or behavior of others (dominance).
One study we conducted related participant judgments of the social intentions of 400 laughter samples to the laughs’ acoustic features, which we extracted using acoustic analysis software (Wood, Martin, & Niedenthal, 2017). Distinct acoustic profiles predicted the three proposed social functions, suggesting that systematic changes to the voice during laughter can convey reward, affiliation, or dominance. These relationships often depended on the sex of the expresser: louder laughter, for instance, was perceived as more affiliative in males, but less affiliative in females. In a recent naturalistic study (Wood, in prep), we found that pairs of people laugh differently during rewarding, affiliative, and dominant conversations.
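As a rough illustration of the kind of feature extraction involved (this is a minimal sketch, not the lab's actual pipeline, which used dedicated acoustic analysis software), the snippet below computes three basic acoustic features of a vocal sample: duration, loudness (root-mean-square amplitude), and a simple autocorrelation-based pitch estimate. The function name and the synthetic "laugh" signal are illustrative only.

```python
import numpy as np

def extract_features(y, sr, fmin=65.0, fmax=600.0):
    """Return duration, RMS amplitude, and an autocorrelation-based
    fundamental-frequency estimate for an audio signal y sampled at sr Hz.
    The pitch search is restricted to a plausible human vocal range."""
    duration = len(y) / sr
    rms = float(np.sqrt(np.mean(y ** 2)))
    # Autocorrelation peaks at lags corresponding to the signal's period.
    ac = np.correlate(y, y, mode="full")[len(y) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag window for fmin..fmax
    lag = lo + int(np.argmax(ac[lo:hi]))
    return {"duration_s": duration, "rms": rms, "f0_hz": sr / lag}

# Demo on a synthetic 220 Hz tone standing in for a laughter sample.
sr = 22050
t = np.arange(sr) / sr                       # 1 second of audio
y = 0.5 * np.sin(2 * np.pi * 220.0 * t)
feats = extract_features(y, sr)
```

In a study like the one described, features of this sort would be computed per laugh and then entered as predictors of listeners' judgments of the laugher's social intent.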
How environments shape social behavior
We study features of social environments that shape the use of nonverbal social signals. We argue that cultures arising from the intersection of other cultures, such as in the U.S., initially lacked a clear social structure, shared norms, and a common language (Niedenthal, Rychlowska, & Wood, 2017). Such cultures would have to increase their reliance on nonverbal signals, establishing a cultural norm of expressive clarity. For instance, 83 countries have made significant contributions to the current population of the U.S., a country high on ancestral diversity, but only 5 countries contributed to Russia's current population, making it low on ancestral diversity. We re-analyzed data from 92 cross-cultural emotion recognition studies (involving 82 unique countries) and showed that facial expressions from more heterogeneous cultures are better recognized cross-culturally than those from homogeneous cultures (Wood, Rychlowska, & Niedenthal, 2016). People from heterogeneous backgrounds also laugh and smile more frequently (Niedenthal, Rychlowska, Wood, & Zhao, 2018), which we suggest facilitates the formation of new social ties and communicates trustworthiness to strangers.
How we connect with our social networks
In a developing research line, we are studying how people make friends when they join a new social network, for instance when first-year students arrive on a new college campus. An ongoing project uses computer vision to analyze the movements of people at a get-to-know-you mixer to see if we can predict who will end up well-connected or socially isolated months later. Another study found that estimates of a person’s prior exposure to cultural diversity predict how well-connected they will become in a new social network (Wood, Kleinbaum, & Wheatley, in prep).
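One simple way to quantify "well-connected" versus "socially isolated" is degree centrality: the fraction of other network members a person is directly tied to. The toy sketch below (the names and ties are invented for illustration; real studies use richer measures and larger networks) computes this from a list of reported friendship ties.

```python
from collections import defaultdict

def degree_centrality(edges):
    """Map each person to the fraction of other network members
    they share a tie with (simple undirected degree centrality)."""
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    n = len(neighbors)
    return {p: len(nbrs) / (n - 1) for p, nbrs in neighbors.items()}

# Hypothetical friendship ties among five new arrivals.
ties = [("A", "B"), ("A", "C"), ("A", "D"), ("B", "C"), ("E", "D")]
centrality = degree_centrality(ties)
most_connected = max(centrality, key=centrality.get)
```

Predicting scores like these months in advance, from behavior at an initial mixer or from prior diversity exposure, is the kind of question this research line asks.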
How we understand the social signals of others
People are experts at perceiving subtle nonverbal expressions and inferring the underlying emotions. Evidence suggests that part of this challenging perceptual task is accomplished through embodied simulation processes, in which the perceiver's brain and facial muscles partially recreate the perceived expression and associated emotion (Wood, Rychlowska, Korb, & Niedenthal, 2016). We have demonstrated that interfering with people's facial muscles while they look at facial expressions reduces the accuracy with which they can perceptually discriminate between and judge the meaning of various emotions. For instance, wearing a gel facemask that constricts facial movement disrupts people's ability to detect a facial expression next to a highly similar distractor (Wood, Lupyan, Sherrin, & Niedenthal, 2015). We also have a long-term collaboration with the University of Wisconsin Facial Nerve Clinic that will allow us to test, among other things, the impact of facial paralysis on emotion perception and experience (Korb, Wood, Banks, Agoulnik, Hadlock, & Niedenthal, 2016).
Recent work highlights the flexibility of emotion perception and shows that children and adults track targets’ expressivity and update their responses accordingly (Plate*, Wood*, Woodard, & Pollak, 2019). We have also shown that dynamic facial expressions are perceived differently when combined with learned conventional gestures like a thumbs up (Wood, Martin, Alibali, & Niedenthal, 2018).