January 12, 2026 – Facial gestures aren’t controlled by two separate “systems,” as scientists long assumed, according to a new study by researchers from the Hebrew University of Jerusalem (HU). 

A new study published in Science, led by Prof. Winrich A. Freiwald of The Rockefeller University in New York and Prof. Yifat Prut of the Edmond & Lily Safra Center for Brain Sciences at the Hebrew University, working with Dr. Geena Ianni and Dr. Yuriria Vázquez from The Rockefeller University, uncovers how the brain prepares and produces these gestures through a temporally organized hierarchy of neural “codes,” including signals that appear well before movement begins. They found that multiple face-control regions in the brain work together, using different kinds of signals: some are fast and shifting, like real-time choreography, while others are steadier. Remarkably, these brain patterns emerge before the face even moves, meaning the brain starts preparing a gesture in advance, shaping it not just as a movement, but as a socially meaningful message. 

For decades, neuroscience has leaned on a tidy division: lateral cortical areas in the frontal lobe control deliberate, voluntary facial movements, while the medial areas govern emotional expressions. This view was shaped in part by clinical evidence from individuals with focal brain lesions.  

“Facial gestures may look effortless,” the researchers note, “but the neural machinery behind them is remarkably structured and begins preparing for communication well before movement even starts.” 

By directly measuring activity from individual neurons across both cortical regions, the researchers found something striking: both regions encode both voluntary and emotional gestures, and they do so in ways that are distinguishable well before any visible facial movement occurs. In other words, facial communication appears to be orchestrated not by two separate systems, but by a continuous neural hierarchy, where different regions contribute information at different time scales: some fast-changing and dynamic, others stable and sustained. 

By demonstrating that multiple brain regions work in parallel, each contributing different timing-based codes, the study opens new pathways for exploring how the brain produces socially meaningful behavior. Understanding how the brain builds these gestures helps explain what can go wrong after brain injury or in conditions that affect social signaling, and may eventually guide new ways to restore or interpret facial communication when it’s lost. 

The research paper, titled “Facial gestures are enacted through a cortical hierarchy of dynamic and stable codes,” is now available in Science.

Researchers:

Geena R. Ianni1, Yuriria Vázquez1, Adam G. Rouse2, Marc H. Schieber3, Yifat Prut4, Winrich A. Freiwald1,5

Institutions:
  1. Laboratory of Neural Systems, The Rockefeller University, New York, NY
  2. Department of Neurosurgery, Department of Cell Biology & Physiology, University of Kansas Medical Center
  3. Department of Neurology, University of Rochester Medical Center, Rochester, NY
  4. The Ruth & Stan Flinkman Family Chair in Brain Research, Edmond & Lily Safra Center for Brain Sciences, The Hebrew University of Jerusalem
  5. The Price Family Center for the Social Brain, The Rockefeller University, New York