Projects

R34 (NIH Planning Grant Program) Projects

R34DA059510 - A modeling framework and arena for measuring contextual influences of behavior
PI(s) - Dyer, Eva, McGrath, Patrick T*
Institution(s) - Georgia Institute of Technology

Social behaviors are essential for survival and reproduction. They also evolve rapidly and can vary even among closely related species. Social behaviors are traditionally difficult to study because of the complexity of their inputs: conspecifics are required to trigger aggressive, cooperative, parental, or reproductive behaviors, and contextual factors such as hierarchical status and the environment also play a role. This grant proposes to create a behavioral arena capable of mimicking the natural environments required for social reproductive behaviors, including interactions among a large number of conspecifics, environmental factors such as male displays, and contextual information such as hierarchical status among males. Tools will be created to track animals in this arena, and a computational framework will be built to measure and compare social behavioral dynamics. This work will utilize Lake Malawi cichlids, a powerful evolutionary model for identifying genes and neural circuit changes associated with differences in behavior. This project will generate new tools and datasets for modeling social behaviors, paving the way for a large-scale R01.

R34DA059509 - Behavioral quantification through active learning and multidimensional physiological monitoring
PI(s) - Grover, Pulkit, Kuang, Zheng, Rubin, Jonathan E, Yttri, Eric*
Institution(s) - Carnegie Mellon University, University of Pittsburgh

Naturalistic contexts provide the opportunity to study the brain and behavior in response to the ethological problems an animal is evolutionarily designed to solve. We seek to expand the capabilities of our current behavioral segmentation approaches to provide a more precise and comprehensive account of behavior. By incorporating recent innovations in machine learning, segmentation approaches that can account for behavioral dynamics at multiple timescales, and increased breadth in the sampling modalities used to classify behaviors, we will create a toolkit that our team and others can use to quantify complex, spontaneous behaviors. We will implement an analysis pipeline to capture and make use of patterns of mouse body position, vocalizations, and arousal states. We also aim to capitalize on recent insights into the role of the gut-brain axis in shaping behavior. After validating our acquisition and analytical approaches, we will monitor these outputs in response to controlled, parametric environmental manipulations in two distinct, ethologically relevant contexts: intruder response to resident urine signals and limited access to water. The exploratory data collected in these experiments will be vital for validating our algorithmic advances and for piloting future grant proposals. The foundation of this work is a diverse team approach. Our team, comprising experts in social behavior ethology, microbiota research, information theory, and data-driven computational modeling, will take an end-to-end approach in executing this proposal. By starting with experimental design informed by all parties, we will ensure that the resulting pipeline possesses sufficient structure and richness for meaningful analysis. The team will help guide long-term research avenues that are both ethologically appropriate and computationally rigorous. Lastly, we recognize that open access will greatly accelerate the validation and adoption of these technologies, a stated aim of this RFA. Dissemination and access to our deliverables will benefit substantially from ongoing relationships with the Pittsburgh Supercomputing Center and OpenBehavior. The partnered hardware and software advances of Aims 1a and 1b represent the overarching goal of this proposal: an advanced and comprehensive behavior segmentation platform. Aim 2 will interrogate temporally dynamic urine protein signals, and Aim 3 will study how progressively increasing thirst induced through water restriction affects neurobehavioral measures. These contexts will be used to benchmark the broad applicability of Aim 1, as well as to explore the potential to address targeted research questions within these frameworks.

R34DA059513 - Computational attribution and fusion of vocalizations, social behavior, and neural recordings in a naturalistic environment
PI(s) - Sanes, Dan Harvey*, Schneider, David Michael, Williams, Alexander Henry
Institution(s) - New York University

Social vocalizations and movement-generated sounds often provide pivotal knowledge about an animal’s identity, location, or state, yet most studies of natural behavior fail to integrate acoustic information with simultaneous recordings of high-dimensional neural activity and behavioral dynamics. This proposal will develop novel experimental and computational methods to attribute vocal and non-vocal sounds to individuals in a naturalistic, acoustically complex, multi-animal environment. By integrating this rich acoustic information with simultaneous video and wireless neural recordings, we seek to predict auditory cortical responses to auditory cues as a function of social context and individual identity within the family. Aim 1 will develop new tools with which to attribute vocal and non-vocal sounds to individual animals in a multi-animal setting (i.e., the “who said what” problem). In Aim 1A, we will collect, curate, and publicly release a range of benchmark datasets containing simultaneous camera and microphone array recordings of multi-animal interactions with ground-truth labels of sound sources. We will use these benchmarks to validate new models for sound localization. In Aim 1B, we will develop and release deep learning models that localize sounds with calibrated confidence intervals, using synchronized video measurements to enhance predictions. Aim 2 will use these tools to identify archetypal, acoustically driven social behaviors. We will establish a new experimental paradigm that permits months-long monitoring of rodent social behavior in a large, naturalistic environment with simultaneous camera and microphone array recordings. Using these data, we will develop novel data analytic approaches that leverage synchronized audio and video data streams to identify social interaction sequences. A key goal is to assess individual differences in social behavior across families. Aim 3 is a proof-of-concept experiment in which we determine how acoustically driven social behaviors (established in Aim 2) predict auditory cortex responses to both vocal and movement-generated sounds. To accomplish this, we will make continuous wireless electrophysiological recordings from the auditory cortex of adolescent and adult gerbils within their naturalistic family environment. We will build regression models (encoding models) to quantify how well neural responses can be predicted from auditory and behavioral covariates.
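As an illustration of the encoding-model idea described above, a minimal sketch might look like the following; the data, shapes, and variable names are simulated and hypothetical, and scikit-learn ridge regression stands in for whatever models the project ultimately uses.

```python
# Illustrative sketch only: a ridge-regression encoding model predicting a neural
# response from time-lagged auditory/behavioral covariates. Data and variable names
# are simulated, not the project's actual pipeline.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
T, n_features, n_lags = 5000, 8, 10            # time bins, covariates, history length

covariates = rng.standard_normal((T, n_features))   # e.g., call rate, caller ID, movement
firing_rate = rng.standard_normal(T)                 # e.g., binned spike counts of one unit

def lagged_design(X, n_lags):
    """Stack time-shifted copies of X so the model can use recent history."""
    shifted = [np.roll(X, lag, axis=0) for lag in range(n_lags)]
    D = np.hstack(shifted)
    D[:n_lags] = 0.0                                  # remove wrap-around from np.roll
    return D

X = lagged_design(covariates, n_lags)
X_tr, X_te, y_tr, y_te = train_test_split(X, firing_rate, test_size=0.2, shuffle=False)

model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))       # predictive power of the covariates
```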

R34DA059507 - Development of a smart aviary to probe neural dynamics of complex social behaviors in a gregarious songbird
PI(s) - Aflatouni, Firooz, Balasubramanian, Vijay, Daniilidis, Kostas, Schmidt, Marc F*
Institution(s) - University of Pennsylvania

The nervous system of social species has evolved to perceive and evaluate signals within a social context. Social information therefore must impact how the brain processes information, yet little is known about how the brain integrates social information to produce actions in a social context. This lack of knowledge exists in part because social context is difficult to quantify and because the majority of studies are performed in species that do not have a particularly rich social structure. Here we propose to study the brown-headed cowbird (Molothrus ater), a highly gregarious songbird species whose social behavior has been well studied and where vocal and non-vocal communication signals form a central and critical component of its social system. We have created a “smart aviary” equipped with cameras and microphones that is capable of monitoring behavior in each individual during the entire breeding season. Our aim is to create a fully automated system using computer vision and machine learning technology to evaluate moment-to-moment behavioral interactions between all members of the group (9 females and 7 males) over the entire breeding season. We have assembled an interdisciplinary team of engineers, neurobiologists, and computational scientists to create a platform where we can directly record brain dynamics and evaluate how they are shaped within a complex social context over an ethologically relevant timescale. To quantify moment-to-moment behavior in each individual bird, we are developing a novel machine learning approach that tracks each bird and predicts its position, orientation, pose, and shape from images using artificial neural networks and a 3D articulated mesh model. By collecting output of the model over consecutive frames we will obtain a pose trajectory, which we will segment and classify into discrete behavioral types. We also aim to develop a miniature, non-invasive, wirelessly powered and transmitting recording device optimized for long-duration recording in our aviary that critically does not impact the birds' individual or social behavior. Such a device would enable us to link neural activation patterns to discrete behavioral events (e.g., male song) within the context in which these specific events occurred. Supplied with our rich dataset, we aim to develop the mathematical tools necessary to generate social network models that will allow us to quantify the specific state of the bird social network associated with neural activation patterns and with individual behavioral events. To the best of our knowledge, linking social network state to neural activation in a precise quantitative manner has never before been attempted. Through these efforts, we will be well positioned to subsequently pursue a Targeted Brain Circuits Projects R01 to investigate in a quantitative manner how social context influences brain activity.

R34DA059718 - Harnessing biological rhythms for a resilient social motif generator
PI(s) - Padilla Coreano, Nancy*, Saxena, Shreya, Wesson, Daniel W
Institution(s) - University of Florida, Yale University

How does the brain enable social interactions? The study of social behavior in non-human animals has long relied on coarse behavioral metrics like time spent interacting with another animal or simply the number of interactions. Although this approach has informed major insights into neural circuits that have a role in sociability, we still do not know how these circuits orchestrate patterns of social behaviors, especially under different social contexts where interactions have nuanced differences. Our long-term goal is to identify the neural mechanisms supporting social behavior in affiliative vs. antagonistic social contexts. To close the knowledge gap towards this goal, in this R34 we will build artificial intelligence (AI) tools capable of integrating multivariate sources of behavior data to quantify spatiotemporal signatures or “motifs” of diverse repertoires of social behaviors. Behavioral motifs have the potential to be captured by examining concurrent autonomic rhythms, especially breathing and heart rate. Indeed, we have long known that changes in the frequency of these rhythms coincide with specific affective and behavioral contexts. However, spatiotemporal signatures of social behaviors have not been captured in prior studies, which have considered either breathing or heart rate in isolation. Nor have prior studies unleashed the potential to identify novel social behavioral motifs by using these autonomic rhythms in combination with video measures. The research objective of this BRAIN Initiative proposal is to develop semi-supervised artificial intelligence methods that result in a hierarchical multi-timescale model of social behavioral motifs directly from video, breathing, heart rate, and movement data via a head-mounted accelerometer. To accomplish this, we will use partial labels of mouse social behaviors, as well as physiologic measurements, to elucidate the full range of social behavior motifs across affiliative vs. antagonistic contexts. In Aim 1, we will define low-dimensional social behavioral states while incorporating autonomic rhythms, and in Aim 2, we will elucidate a multi-timescale hierarchical representation of social behavior in affiliative vs. antagonistic social contexts. For both aims, we will integrate computer vision techniques with high-dimensional video and physiological data from mice while varying their isolation levels and whom they are interacting with. The end product will be a validated toolkit enabling the sensitive and robust identification of behavioral motifs. The easy-to-use toolkit, which we call the Social Motif Generator (So-Mo), will enable future studies to probe neural circuits during complex mouse behaviors at unprecedented resolution.
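For intuition, the following assumption-laden sketch shows only the unsupervised core of this kind of motif discovery: windowing synchronized breathing, heart-rate, and accelerometer traces and clustering the windows into candidate motifs. All signals are simulated, and the hierarchical, semi-supervised model So-Mo will actually use is not represented here.

```python
# Minimal sketch (assumed, not the So-Mo implementation): window synchronized
# physiological and movement traces, then cluster windows into candidate "motifs".
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

fs = 50                                    # shared sampling rate after resampling (Hz)
T = 60 * fs                                # one minute of simulated data
rng = np.random.default_rng(1)
breathing = rng.standard_normal(T)
heart_rate = rng.standard_normal(T)
accel = rng.standard_normal((T, 3))        # head-mounted accelerometer (x, y, z)

win = 2 * fs                               # 2 s windows
def window_features(start):
    sl = slice(start, start + win)
    return np.array([
        breathing[sl].std(),                         # breathing variability
        heart_rate[sl].mean(),                       # mean heart rate
        np.linalg.norm(accel[sl], axis=1).mean(),    # overall movement energy
    ])

starts = range(0, T - win, win // 2)       # 50% overlap
features = np.stack([window_features(s) for s in starts])

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(features))
print(labels[:20])                         # sequence of candidate motif labels over time
```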

R34DA059506 - High-resolution 3D tracking of social behaviors for deep phenotypic analysis
PI(s) - Dunn, Timothy William*, Olveczky, Bence P
Institution(s) - Duke University, Harvard University

The aim of this proposal is to plan for and deliver a proof-of-concept solution for an innovative and easy-to-use experimental platform for measuring and quantifying social behaviors in animal models. Efforts during this initial grant period will be restricted to rats and mice, experimental animals with rich social behaviors, but we hope in future iterations of this program to expand to other model organisms, including birds and monkeys. Capturing the kinematic details of whole-body movement during social behaviors requires novel solutions for dealing with the inevitable occlusions that result from social interactions. To overcome the limitations of current approaches, we will build and validate a novel deep neural network that learns to combine images across multiple synchronized cameras and infer the 3D physical coordinates of multiple animals. Preliminary studies have been very positive and suggest large improvements over current methods, both in the range of social behaviors that can be tracked and in the precision with which they can be measured. Importantly, all new technology will be readily shared with the scientific community, thereby leveraging this single grant to help numerous investigators dramatically improve the efficiency of research programs that require rigorous quantitative descriptions of animal behavior.

R34DA059512 - High-throughput, high-resolution 3D measurement of ethologically relevant rodent behavior in a dynamic environment
PI(s) - Dunn, Timothy William*, Field, Gregory Darin, Tadross, Michael R
Institution(s) - Duke University

The aim of this proposal is to develop an innovative system, including hardware assemblies and machine learning algorithms, for continuous, high-resolution 3D quantification of behavioral and eliciting-stimulus dynamics in a natural mouse prey capture paradigm. The system will satisfy a critical unmet need for an easily adoptable, modern behavioral measurement technology that extends well beyond current offerings, which are difficult to set up and limited largely to measuring spontaneous animal movement in impoverished, static environments. Our system consists of a 3D convolutional neural network processing multi-perspective video recordings to provide detailed measurements of both predator (mouse) and prey (cricket) spatiotemporal movement patterns within an enclosed, compact apparatus permitting precise control over the visual environment. To reduce implementation complexity and enhance usability in other labs, the system will use only a single commercial video camera and a set of low-cost mirrors to provide the multiple perspectives required for 3D behavior tracking. By using only a single camera, we also reduce the instrument’s physical footprint, thus facilitating high-throughput studies across multiple setups. Furthermore, our 3D tracking algorithm will be built to support out-of-the-box generalization to cloned setups, meaning other labs can immediately start doing science with the instrument without laborious data labeling and training steps. As part of our system, we will also develop new methods for analyzing the rich 3D mouse and cricket data to isolate key kinematic and action variables, along with comprehensive characterization of stimulus-behavior relationships. We will then investigate how these new measurements can be used to better understand retinitis pigmentosa and Parkinson’s disease. Preliminary experiments have been quite successful and illustrate the promise and power of our approach to collect large amounts of quantitative behavior data and identify new phenotypes of motor disorders. As our vision is to make as large an impact as possible, our system and datasets will be shared openly with the community to catalyze a wide range of new research into brain function and treatments for neurological disease.
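To illustrate why mirror-generated virtual views can substitute for extra cameras, the hypothetical sketch below triangulates a 3D keypoint from several calibrated views by linear least squares (DLT). The projection matrices and pixel observations are invented; this is not the proposed network-based tracker.

```python
# Illustrative sketch (not the proposed system's code): triangulating a 3D keypoint
# from several calibrated views, e.g., one real camera plus mirror-generated views.
import numpy as np

def triangulate(proj_mats, pixels):
    """Linear (DLT) triangulation from >= 2 views.
    proj_mats: list of 3x4 camera projection matrices.
    pixels:    list of (u, v) observations of the same keypoint."""
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])       # each view contributes two linear constraints
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]
    return X[:3] / X[3]                    # homogeneous -> Euclidean coordinates

# Toy example: two views of the point (0.1, 0.2, 2.0).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                        # reference view
R = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])   # 90-degree "virtual" view
t = np.array([[2.0], [0.0], [2.0]])
P2 = np.hstack([R, t])

def project(P, X):
    x = P @ X
    return x[:2] / x[2]

X_true = np.array([0.1, 0.2, 2.0, 1.0])
print(triangulate([P1, P2], [project(P1, X_true), project(P2, X_true)]))  # ~[0.1, 0.2, 2.0]
```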

R34DA059716 - Interpersonal behavioral synchrony in virtual and in-person dyadic conversation
PI(s) - Corcoran, Cheryl Mary*, Grinband, Jack, Parvaz, Muhammad Adeel
Institution(s) - Icahn School of Medicine at Mount Sinai, Columbia University

Human dyadic social communication entails a rich repertoire of expression, including not only facial expression (and gaze), but also acoustics (prosody and pauses), turn-taking, gestures, and language. Communication has evolved in humans within a social context, beginning with the parent-infant dyad, with mirroring of facial expressions and sounds. Its natural ecology is face-to-face dyadic interaction, both in person and, increasingly, via remote platforms for teleconferencing and telehealth. Social communication is a “complex orchestration” in real time: its signals are multiple and temporally offset. It is a continuous exchange that is highly coordinated between speakers, with norms for turn-taking and alignment of facial expression, gesture, semantic content, and speech rates. A critical gap remains: we lack tools to quantify and analyze temporal patterns of multimodal communication behavior between two individuals in face-to-face communication, in an ecologically valid setting, with the same rigor and reproducibility as hyperscanning approaches that record brain activity during dyadic conversation. This tool must be developed to realize the true potential of second-person neuroscience. This planning proposal for tool development entails several key activities, beginning with the convening of a diverse multidisciplinary team of experts from various fields, including ethics/regulatory, anthropology, cognitive neuroscience, computer science, engineering, physics, mathematics, psychiatry, and neurology. This team will discuss ethics, diversity, paradigm development, and computational frameworks, and will provide iterative feedback, convening also with advocacy groups. We will also build two testing rooms for multimodal recording of dyadic communication, to demonstrate the feasibility of acquiring and synchronizing high-temporal-resolution data. Pilot EEG hyperscanning will be done concurrently in a subcohort. Further, given the increased use of teleconferencing, dyadic communication data will be collected via a remote platform and compared with in-person data, to determine how information may be degraded by differences in resolution and streaming delays. We will also develop computational frameworks for analyses of multimodal data.
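As one hypothetical illustration of quantifying interpersonal synchrony, the sketch below computes the peak lagged cross-correlation between two speakers' behavioral time series (for example, frame-by-frame smile intensity or speech energy). The signals are simulated; this is not the proposed multimodal toolkit.

```python
# Hypothetical sketch: dyadic synchrony as the peak lagged cross-correlation between
# two speakers' behavioral time series. Signals here are simulated.
import numpy as np

fs = 30                                    # video frame rate (Hz)
T = 120 * fs                               # two minutes
rng = np.random.default_rng(2)
speaker_a = rng.standard_normal(T)
speaker_b = np.roll(speaker_a, 15) + 0.5 * rng.standard_normal(T)   # B mirrors A ~0.5 s later

def lagged_corr(x, y, max_lag):
    """Correlation of x(t) with y(t + lag) for lags in [-max_lag, max_lag]."""
    n = x.size
    lags = np.arange(-max_lag, max_lag + 1)
    vals = []
    for lag in lags:
        if lag >= 0:
            vals.append(np.corrcoef(x[: n - lag], y[lag:])[0, 1])
        else:
            vals.append(np.corrcoef(x[-lag:], y[: n + lag])[0, 1])
    return lags, np.array(vals)

lags, corr = lagged_corr(speaker_a, speaker_b, max_lag=2 * fs)
best = lags[np.argmax(corr)]
print(f"peak synchrony r={corr.max():.2f} at lag {best / fs:.2f} s")   # B follows A by ~0.5 s
```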

R34DA059723 - Multimodal behavioral analysis of oromanual food-handling in freely moving animals
PI(s) - Shepherd, Gordon M
Institution(s) - Northwestern University

Oromanual food-handling – in which the hands and forelimbs work in a coordinated manner with the mouth and jaw to manipulate and consume a food item – is a fundamental behavior common to many rodent species as well as primates. Despite its ethological significance, oromanual food-handling has received remarkably little experimental attention, reflecting the technical challenges of recording, at sufficient spatiotemporal resolution, a behavior involving small, fast, and often visually occluded movements. We recently initiated efforts to overcome these challenges, developing paradigms for analyzing food-handling in mice using high-speed close-up video methods coupled with AI-based kinematic tracking. Here we propose to build on these advances through a set of planning activities leading to a powerful new approach for in-depth investigation of this behavior. The overall objective is to develop new experimental and analytical paradigms for recording food-handling behavior with high spatiotemporal resolution in freely moving animals, with a focus on understanding how elemental sub-movements are assembled into distinct goal-directed actions coordinated across multiple body parts. The technical approach includes design of a videographic recording arena incorporating a robotic camera positioning system. Electromyography from jaw-controlling and forelimb muscles in freely moving mice will enable characterization of the elements of dexterous coordination involved in manipulating the food with both hands and jaw. Intranasal detection of breathing will enable characterization of sniff-related movements during food-handling, posited to represent an additional aspect of the behavior engaging the olfactory and respiratory systems. Exploratory studies will extend the approach in comparative, ecological, and developmental directions. The analytical approach will conceptualize the behavior in terms of the multiple components of the motor plant and behavioral modes and actions, including detailed ethogramming and incorporating machine learning-based tracking, modeling, and related computational methods. The anticipated results will constitute an innovative paradigm for quantitative analysis of food-handling, setting the stage for future investigation of the neural mechanisms of this natural form of goal-directed dexterous behavior. Results from this research program furthermore have high potential to identify common principles of natural, complex motor behavior in mammals in general.

R34DA059514 - Towards High-Resolution Neuro-Behavioral Quantification of Sheep in the Field to Study Complex Social Behaviors
PI(s) - Kemere, Caleb
Institution(s) - Rice University

Social animals, including humans, engage in complex collective behaviors in the field. While there are simple models of collective decision making and movement that are amenable to study in traditional laboratory environments, they inevitably fail to capture the full complexity of natural behaviors as they occur in the field. Moreover, standard mammalian laboratory species either exhibit only simple social behaviors (e.g., mice) or are too challenging to house in large groups (e.g., primates). Here, we will leverage decades of extensive experience studying sheep in their normal pasture settings and during interactions between ewes and their lambs. We propose to develop a paradigm for acquiring high-resolution (in space and time) measurements of individual herd members, including head-mounted devices to sense their visual sensorium. We will test these devices in existing herds maintained for agricultural study, while also developing a robust paradigm for conducting neural recording experiments. If successful, this work will lay the foundation for future study of the neural circuits underlying complex collective behaviors in a large-brained, highly social animal model.

R34DA059500 - Transformative Optical Imaging of Brain & Behavior in Navigating Genetic Species
PI(s) - Nagel, Katherine, Schoppik, David*, Shaner, Nathan Christopher, Wang, Jane
Institution(s) - New York University School of Medicine, University of California San Diego, Cornell University

Our long-term goal is to define general principles that connect neuronal activity to unconstrained behaviors in natural sensory environments. Achieving this goal will require the development of new tools to quantitatively compare behavior across species in complex environments and to monitor neural activity in freely moving animals. Here we propose to bring together a diverse, multi-disciplinary team of five labs with a track record of productively measuring and modeling neural activity and behavior. We will use two parallel approaches to make progress towards imaging neural activity from freely moving small genetic model organisms as they navigate complex sensory environments. In Aim 1 we will develop a behavioral apparatus that allows for experimenter control of sensory stimuli (mechanosensory flow, odor, and visual stimuli) while monitoring unconstrained behavior at two resolutions. Low-resolution measurements allow for quantification of stimulus-guided navigation, while high-resolution measurements allow for detailed quantification of body posture and limb kinematics, and will eventually permit imaging of neural activity. In Aim 2, we will develop bioluminescence-based transgenic animals and techniques for imaging neural activity in freely moving flies and fish. Optimization of these reagents and protocols will allow for eventual simultaneous measurements of neural activity and behavior in larger and more complex environments. This work draws on our collective expertise in quantitative behavior, bioluminescent indicators, real-time tracking of animal behavior at high resolution, and physical modeling of animal behavior. This work will advance technologies for studying neural activity in unconstrained animals and establish a collaborative team to pursue this work.

R34DA061984 - Quantifying organism-environment interactions in a new model system for neuroscience
PI(s) - Srivastava, Mansi
Institution(s) - Harvard University


R34DA061924 - Mapping dynamic transitions across neural, behavioral, and social scales in interacting animals
PI(s) - Frohlich, Flavio, Zhang, Mengsen*
Institution(s) - Michigan State University, University of North Carolina at Chapel Hill

Human and animal behavior is shaped by many processes across spatiotemporal scales – from the activities of neurons to the dynamics of social interaction. Mapping behavior and brain dynamics across scales is key to a systemic understanding of cognition and neuropsychiatric disorders, and to developing personalized interventions. In social neuroscience, the mapping between social behavior and brain dynamics has primarily been achieved by constraining the behavior to a well-controlled, low-dimensional task space, where common linear statistical methods suffice to discover cross-scale relations. However, the complex, dynamic, and interactive nature of real-world social interaction is largely lost in such task-constrained settings. More recently, both human and animal social neuroscience have begun to embrace a multi-brain interactive approach, in which brain activities are recorded simultaneously and have been found to synchronize during live social interaction. Without task constraints, animals can adopt and transition between a diverse range of behavioral patterns, which are signatures of nonlinear, high-dimensional dynamical systems. There is a critical need for a computational-experimental framework to characterize the complex dynamics of naturalistic interaction and connect them across neural and behavioral scales. The main objective of this project is to develop a computational-experimental framework to construct multiscale models of naturalistic social interaction connecting the spiking dynamics of neurons, brain oscillations, body movements, and macroscopic behavioral states. To achieve this objective, the project will utilize simultaneously recorded behavioral and electrophysiology data from ferrets during naturalistic interaction. Ferrets are chosen as a model of dynamic social interaction for their high social skills and complex visuomotor communication, which allows for fine-grained characterization of social dynamics based on expressive body movements. At the neural level, ferrets’ frontoparietal networks exhibit oscillations similar to those in humans that were found to synchronize during social interaction, paving the way for future comparative studies. Animal data also provide the opportunity to include neuronal activity in the multiscale framework, which is not commonly accessible in humans. Computationally, the toolbox will build upon recent advances in topological time series analysis to extract states and transitions from complex dynamics at different scales of measurement, combined with computer vision techniques for motion tracking, machine learning for cross-scale mapping of state transitions, and expert annotations. This project integrates diverse perspectives from cognitive social neuroscience to nonlinear dynamics, computational topology, and machine learning. The project will significantly impact neuroscience by providing much-needed tools to examine multiscale relations in brain and behavior in real-world settings, and by informing the future design of state-dependent treatments for neuropsychiatric and behavioral disorders that combine pharmacological treatments, brain stimulation, and psychosocial interventions.
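For intuition, the simplified sketch below shows a sliding-window delay embedding, a standard first step in topological time-series analysis, followed by a crude, non-topological score that flags where the underlying dynamics change. The signal and parameters are invented; this is not the project's toolbox.

```python
# Simplified sketch under stated assumptions: delay-embed a behavioral time series
# and flag the window where the dynamics shift.
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(0, 60, 0.01)
# Toy signal that switches regime halfway through (two behavioral "states").
signal = np.where(t < 30, np.sin(2 * np.pi * 1.0 * t), np.sin(2 * np.pi * 2.5 * t))
signal = signal + 0.05 * rng.standard_normal(t.size)

def delay_embed(x, dim, tau):
    """Map x(t) to vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = x.size - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

emb = delay_embed(signal, dim=10, tau=20)                # point cloud tracing the dynamics

step = np.linalg.norm(np.diff(emb, axis=0), axis=1)      # speed through the embedded space
win = 200
speed = np.array([step[s : s + win].mean() for s in range(0, step.size - win, win)])
print(int(np.argmax(np.abs(np.diff(speed)))))            # window index where the regime shifts
```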

R61/R33 (Translational Neural Devices) Projects

R61MH135106 - Synchronized neuronal and peripheral biomarker recordings in freely moving humans
PI(s) - Suthana, Nanthia A
Institution(s) - University of California, Los Angeles

Recent technological advancements have allowed for single-neuron and intracranial electroencephalographic (iEEG) recordings in freely moving humans. However, these implanted neural recording devices have not been integrated with non-invasive peripheral biochemical recordings. The emergence of an experimental platform combining mobile deep brain recordings with wearable biochemical and biophysical sensors for use in real-world settings is unprecedented. The proposed project will develop a novel platform that enables simultaneous single-neuron or iEEG, biochemical (cortisol, epinephrine), and biophysical (heart rate, skin conductance, and body and eye movements) activity to be recorded in freely moving human participants. As proof of concept, we will use this platform to investigate the neural and peripheral biomarker mechanisms underlying approach-avoidance behaviors during spatial navigation. Through an interdisciplinary collaboration between UCLA, Stanford University, and the Veteran’s Administration Greater Los Angeles Healthcare System (VAGLAHS), the program will have access to human participants who have electrodes implanted within prefrontal cortex, amygdala, hippocampus, or nucleus accumbens regions. The proposed project outcomes will empower future studies and other researchers to investigate, for the first time, the deep brain and peripheral biomarker mechanisms underlying freely moving human behavior in naturalistic and ecologically valid environments.

R61MH135109 - Capturing Autobiographical memory formation in People moving Through real-world spaces Using synchronized wearables and intracranial Recordings of EEG
PI(s) - Inman, Cory Shields
Institution(s) - University of Utah

This project aims to unlock the potential of combining wearable mobile recording devices, such as smartphones with continuous audio-visual, accelerometry, GPS, subjective report, autonomic physiology, and wearable eye-tracking recordings, with precisely synchronized intracranial neural recordings during real-world behaviors. Autobiographical memory (AM) formation is a critical human behavior that has been difficult to study with traditional neuroimaging methods. It involves a range of real-world cognitive processes, including attention, decision making, emotion, episodic memory, social interactions, and navigation. AM refers to memory for one’s own life experiences. AMs are typically more detailed and personal than general episodic memories and, due to this feature, have been difficult to capture as they are being formed, particularly the neural correlates of AM encoding. By studying how the brain processes, encodes, and retrieves verifiable, real-world autobiographical experiences, we hope to gain new insights into cognitive and neural processes that can fail in neurological disorders like Alzheimer’s disease. There is a critical need to develop technical, methodological, and computational approaches to understanding the cognitive and neural mechanisms underlying memory-related behaviors in continuous, complex real-world settings, and to then translate this understanding into reliable treatments for enhancing memory or cognition in daily life. The proposed project will take important first steps towards addressing these needs with a novel and unique approach to recording directly from the human brain as people navigate and create AMs in the temporal contexts and at the spatial scales of daily life. By capturing electrophysiological recordings synchronized with a novel experiential recording device, our project will take the key translational step needed to push neuroscientific insights into autobiographical memory from the laboratory toward one day restoring real-world memory for those suffering from devastating memory disorders. As neural stimulation tools and techniques for memory enhancement develop, insights from the proposed study will establish the foundation on which to build neuromodulation approaches that can rescue memory during real-world experiences. Thus, the proposed research project aims to develop a smartphone-based recording application (CAPTURE app; R61 phase) synchronized with wearables and invasive neural recordings during real-world behaviors like autobiographical memory encoding (R33 phase). We will develop novel recording and analytic methods for integrating multimodal data streams with invasive neural recordings in humans during real-world experiences. Over 2,000 potential research participants in the U.S. have sensing and stimulation devices (i.e., the NeuroPace Responsive Neurostimulation System; RNS) chronically implanted in their brains for the treatment of epilepsy. Our next-generation tool and approach will allow us to precisely capture real-world behaviors that encompass a variety of cognitive processes like autobiographical memory formation, and to synchronize these data with direct neural recordings in humans.
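As a hypothetical illustration of the synchronization problem, the sketch below aligns an irregularly sampled wearable stream to an intracranial recording clock using a shared sync event and interpolation. All values are simulated; this is not the CAPTURE pipeline.

```python
# Minimal sketch (hypothetical): align a wearable data stream to neural recording
# time by interpolating onto the neural clock, given a shared sync event.
import numpy as np

fs_neural = 250.0                                    # neural sampling rate (Hz)
neural_t = np.arange(0, 600, 1 / fs_neural)          # 10 min of neural timestamps (s)

# Wearable samples arrive irregularly (~32 Hz) with a clock offset and jitter.
rng = np.random.default_rng(4)
wear_t = np.cumsum(rng.uniform(0.025, 0.037, size=19000)) + 0.85   # 0.85 s start offset
heart_rate = 70 + 5 * np.sin(2 * np.pi * wear_t / 60) + rng.standard_normal(wear_t.size)

# Subtract the offset measured at a shared sync event (e.g., a button press seen by both).
sync_offset = 0.85
wear_t_aligned = wear_t - sync_offset

# Resample heart rate onto the neural clock for sample-by-sample joint analysis.
hr_on_neural_clock = np.interp(neural_t, wear_t_aligned, heart_rate,
                               left=np.nan, right=np.nan)
print(hr_on_neural_clock[int(5 * fs_neural)])        # heart-rate estimate 5 s into the recording
```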

R61MH135114 - Integrated movement tracking for pediatric OPM-MEG studies of intellectual disability
PI(s) - Welsh, John P*, Roberts, Timothy P
Institution(s) - Seattle Children’s Hospital, Children’s Hospital of Philadelphia

This R61/R33 project will develop an advanced technology for non-invasive recording of whole-brain physiology with synchronized video-tracking of movement for use in children with intellectual disability and will use it to elucidate the brain-circuit electrophysiology of intellectual development. The technological advances will have immediate benefits for pediatric neurology and will be widely applicable to many neurological disorders in which safe and convenient, non-invasive recordings of brain physiology are desired to inform diagnosis, prognosis, and treatment. The R61 phase will be performed by FieldLine Medical (Boulder, CO), which will contribute their recent advances in optically-pumped magnetometer magnetoencephalography (OPM-MEG), a transformative technology for safe, physiological brain imaging that greatly increases sensitivity to brain electrical signals as compared to SQUID-MEG and EEG and provides greater coverage than invasive electrophysiology. FieldLine will: 1) expand the capabilities of their HEDscan OPM-MEG system, as a “wearable” brainwave scanning technology, for high-fidelity MEG recordings in freely moving children; and 2) integrate synchronized video-tracking of voluntary movements for kinematic analysis to create a technology named HEDscanV. The R33 phase will deploy HEDscanV in two pediatric neuroscience laboratories at the Children's Hospital of Philadelphia and Seattle Children's Research Institute. After validating HEDscanV in children against SQUID-MEG, the R33 phase will leverage advances in autism research enabled by sensory, motor, and associative learning paradigms that were developed by the MPIs to identify intellectual disability with high accuracy. By disseminating HEDscanV and sensory-motor paradigms across clinical sites in Philadelphia and Seattle, we will work together to identify the bandwidths of electrical activity coherence in brain circuits at the interface of movement and cognition that promote intellectual development. Our success will be ensured by the support of two nationally-recognized autism centers at the University of Washington and Children's Hospital of Philadelphia, where high-fidelity clinical assessments and diagnostic testing will be conducted. By establishing the locations and bandwidths of activity coherence in the brains of children that promote intellectual development, the project will begin to lay the essential groundwork needed to establish therapies intending to normalize brain pathophysiology and facilitate intellectual development in children with neurodevelopmental disorders.

R61MH135405 - Developing the Context-Aware Multimodal Ecological Research and Assessment (CAMERA) Platform for Continuous Measurement and Prediction of Anxiety and Memory State
PI(s) - Jacobs, Joshua*, Ortiz, Jorge, Widge, Alik S, Youngerman, Brett E
Institution(s) - Columbia University Health Sciences, Rutgers University, University of Minnesota-Twin Cities

This project seeks to develop the CAMERA (Context-Aware Multimodal Ecological Research and Assessment) platform, a state-of-the-art open multimodal hardware/software system for measuring human brain–behavior relationships. CAMERA will record neural, physiological, behavioral, and environmental signals, as well as measurements from ecological momentary assessments (EMAs), to develop a continuous high-resolution prediction of a person’s level of anxiety and cognitive performance. CAMERA will provide a significant advance over current methods for human behavioral measurement because it leverages the complementary features of multimodal data sources and combines them with interpretable machine learning to predict human behavior. A further distinctive aspect of CAMERA is that it incorporates context-aware, adaptive EMA, where the timing of assessments depends on the subject’s physiology and behavior to improve response rates and model learning. Our initial work on CAMERA focuses on predicting anxiety state and concurrent memory performance, but the platform is flexible for use in various domains. Our work on CAMERA consists of two phases. First, in the R61 phase, we will develop the CAMERA hardware/software framework, which includes methods for recording continuous neural, physiologic, audiovisual, and smartphone-usage data (Aim 1) and synchronizing these signals with intermittent EMAs (Aim 2). After demonstrating that CAMERA can successfully combine multimodal features to predict a subject’s anxiety state and memory efficiency (Aim 3), we will proceed to the R33 phase of the project. In the R33, we will use CAMERA in conjunction with closed-loop neurostimulation to modulate the subject’s anxiety state and associated memory performance (Aim 1), and to characterize the causal effect of modulation on neural, physiologic, and behavioral biomarkers (Aim 2). Beyond our initial work in the domain of anxiety and memory, we anticipate that CAMERA will have widespread impact by providing a general platform for exploratory and hypothesis-driven research on various aspects of complex human internal states, behavior, and cognition in real-world environments while minimizing burden on subjects.
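To illustrate the idea of context-aware, adaptive EMA, the toy sketch below fires a prompt only when a physiological arousal proxy stays elevated and a minimum interval has elapsed since the last prompt. The thresholds and trigger logic are assumptions, not the CAMERA implementation.

```python
# Toy sketch (assumed logic): a context-aware EMA trigger with rate limiting.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EMATrigger:
    threshold: float = 1.5         # z-scored arousal level counted as "elevated"
    sustain_s: float = 30.0        # arousal must stay elevated this long before a prompt
    min_gap_s: float = 3600.0      # at most one prompt per hour
    _elevated_since: Optional[float] = None
    _last_prompt: float = float("-inf")

    def update(self, t: float, arousal_z: float) -> bool:
        """Return True if an EMA prompt should fire at time t (seconds)."""
        if arousal_z < self.threshold:
            self._elevated_since = None
            return False
        if self._elevated_since is None:
            self._elevated_since = t
        sustained = (t - self._elevated_since) >= self.sustain_s
        rate_ok = (t - self._last_prompt) >= self.min_gap_s
        if sustained and rate_ok:
            self._last_prompt = t
            self._elevated_since = None
            return True
        return False

trig = EMATrigger()
# With arousal pinned high, prompts fire at 30 s and again one hour later (3630 s).
print([t for t in range(0, 7200, 10) if trig.update(float(t), arousal_z=2.0)])
```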

R61MH135407 - Novel multimodal neural, physiological, and behavioral sensing and machine learning for mental states
PI(s) - Shanechi, Maryam
Institution(s) - University of Southern California

Studying how human brain activity gives rise to mental states can reveal the neural mechanisms of emotional functioning and provide novel neural-physiological markers to enable personalized therapies for diverse mental disorders. For brain monitoring alone, intracranial EEG (iEEG) can measure multi-region, multiday brain activity with high temporal resolution. However, the above goals hinge upon the ability to perform simultaneous brain-behavior monitoring, which remains immensely difficult for mental states due to challenges on the physiology, behavior, machine learning, and ethics fronts. First, physiological monitoring beyond a single modality – e.g., electrodermal vs. cortisol – is not possible with current wearables, and the demonstrated wearables do not measure cortisol. Second, behavioral monitoring during intracranial recordings is largely limited to self-reports, which are sparse. Also, while social processes are a major trans-diagnostic domain of emotional functioning in NIMH’s RDoC framework and are adversely affected in diverse mental disorders, they are largely absent in current brain-behavior monitoring, which does not afford systematic, scalable measurement of mental states during social interactions. Third, modeling of concurrent neural-physiological-behavioral data introduces a machine learning challenge, involving many modalities, nonlinearity, and mixed behaviorally relevant and irrelevant dynamics to dissociate. Finally, there are ethical issues. We have assembled an interdisciplinary team of engineers, psychiatrists and behavioral scientists, computer scientists, neurosurgeons, neuroscientists, and neuroethicists to address these challenges. We will develop novel software and hardware tools to enable multimodal neural-physiological-behavioral sensing and machine learning for mental states within social processes and beyond. The R61 phase, in years 1-4, will develop and validate the tools in healthy subjects (Aims 1 and 2) and in epilepsy patients with already-implanted iEEG electrodes that cover many regions related to mental states (Aim 3). In the R61, we will develop (i) an integrated wearable skin-like sensor for multimodal physiological, biomechanical, and cortisol sensing; (ii) conversational virtual humans to evoke naturalistic social processes and enable emotion recognition using multimodal audio-visual-language modalities; and (iii) a nonlinear, multimodal brain-behavior modeling, learning, and inference framework for mental states. We will also study the ethics of multimodal data collection, mental privacy, and self-trust. Once the R61 tools are validated, we will combine them with intracranial brain activity in epilepsy patients in the R33 phase, in year 5, to learn multimodal biomarkers of mental states. Our approach spans multiple RDoC systems, including Negative Valence, Arousal and Regulatory Systems, and Social Processes. It enables several levels of analysis, including Circuits, Physiology, Behavior, and Self-Report. These systems span diverse disorders such as anxiety and depression. Thus, our multimodal, convergent, and integrated approach will likely enable unique brain-behavior insights into human emotional functioning applicable to broad domains of mental health.

R61MH138966 - A naturalistic multimodal platform for capturing brain-body interactions in people during physical effort-based decision making
PI(s) - Rozell, Christopher John
Institution(s) - Georgia Institute of Technology

The challenge of understanding how neural activity results in human behavior and cognition in health and disease is a crucial one for neuroscience. Traditional research methods often employ abstract tasks focusing on discrete cognitive processes, which may not fully capture the complexity of real-world behaviors and their neural underpinnings. This limitation has hindered progress in understanding and treating psychiatric disorders. In particular, motivational deficits, characterized by a reduced propensity to expend effort for rewarding outcomes, are pervasive across disorders such as major depression as well as many others (e.g., Parkinson's disease, schizophrenia). This deficit in effort-based decision-making (EBDM) leads to reduced quality of life in patients who experience these symptoms. To address this gap, there is a need to employ naturalistic paradigms with concurrent measurements of different systems of the body to characterize behavior comprehensively. The proposed project aims to develop and pilot a platform for synchronized multimodal measurement, the HOlistic Realtime Measurement of Effort-based deciSion-making (HORMES) system. The HORMES system will be centered around a new naturalistic EBDM (nEBDM) task in an immersive virtual environment requiring effortful locomotion. The system will measure behavior across decision-making, embodied, affective, and clinical domains, synchronized with measurements of relevant neural circuits at appropriate spatial and temporal scales, and will include new analysis methods based on latent variable models to characterize brain-body interactions. The project consists of two phases: the R61 phase activities will include the design and refinement of the HORMES system using data from healthy participants and patients with major depression, while the R33 phase will pilot the system in a clinical trial with treatment-resistant depression patients undergoing subcallosal cingulate cortex deep brain stimulation. This experimental neuromodulation therapy often leads to changes in psychomotor, interoceptive, and cognitive symptoms at different timescales, and it provides access to intracortical electrophysiology with extreme spatial specificity in a brain network known to be critical in nEBDM. Taken together, this trial is an ideal context for a pilot evaluation of the HORMES system in a longitudinal study. Furthermore, the project integrates neuroethics research and involves the creation of a Council of Lived Experience Advocates (CLEA), comprising patients with intracranial implants across a variety of disorders. The CLEA will provide input on the HORMES project and offer guidance on the ethical implications and future applications of high-resolution biobehavioral data. Successful completion of the project is expected to advance our understanding of motivational deficits and inform novel treatments for these symptoms, representing a significant step towards bridging the gap between basic neuroscience research and clinical practice in psychiatry.
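As a hypothetical illustration of a latent-variable view of brain-body interaction, the sketch below applies canonical correlation analysis to simulated neural and peripheral feature matrices. It is not the HORMES analysis method, only one simple member of the family of latent variable models mentioned above.

```python
# Illustrative sketch: canonical correlation analysis between windowed neural features
# and peripheral/behavioral features. All data are simulated.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(5)
n_windows = 1000
latent = rng.standard_normal((n_windows, 2))                   # shared brain-body drive

neural = latent @ rng.standard_normal((2, 16)) + 0.5 * rng.standard_normal((n_windows, 16))
body = latent @ rng.standard_normal((2, 6)) + 0.5 * rng.standard_normal((n_windows, 6))
# e.g., 16 spectral-power features vs. 6 gait / heart-rate / skin-conductance features

cca = CCA(n_components=2).fit(neural, body)
U, V = cca.transform(neural, body)
for k in range(2):
    r = np.corrcoef(U[:, k], V[:, k])[0, 1]
    print(f"canonical correlation {k}: {r:.2f}")                # strength of each shared mode
```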

U24 Data Coordination and AI Center (DCAIC)

U24MH136628 - BBQS AI Resource and Data Coordinating Center (BARD.CC)
PI(s) - Ghosh, Satrajit*, Cabrera, Laura, Kennedy, David N
Institution(s) - Massachusetts Institute of Technology, Penn State, UMass Chan Medical School

Understanding the complex relationship between brain activity and behavior is one of the most exciting and challenging pursuits in neuroscience. The proposed BBQS AI Resource and Data Coordinating Center (BARD.CC) aims to facilitate innovative research in this area by managing, sharing, and harnessing the power of vast amounts of data and machine learning resources generated by various projects within the BBQS consortium. We will focus on five interrelated aims: 1) Data Management; 2) Data Standards; 3) Machine Learning and Artificial Intelligence (ML/AI) Resources; 4) Data Ecosystem; and 5) Dissemination, Training, and Coordination. The first aim is to serve as a hub for efficient data curation, management, and sharing. We will collaborate with other BBQS projects and coordinate with existing BRAIN data archives to curate and harmonize project data. Data management will be handled by a combination of automated data ingestion and human oversight, transitioning to a fully automated system over time. We will work with scientists and relevant communities to implement robust quality assurance and control solutions. The second aim focuses on establishing data standards for novel sensors and multimodal data integration, as informed by the use of existing standards and best practices from similar efforts. We will aggregate relevant standards for data and metadata, data processing methods, appropriate ontologies, and common data elements, and adapt as needed for evolving methodologies. The third aim involves the development and definition of ML/AI resources for BBQS. We will evaluate and curate relevant ML/AI models and platforms, aggregating datasets, models, and other ML/AI resources from both within and outside the BBQS consortium. These resources will be made available to consortium members, with each resource's origin documented and evaluated for performance and ethical generation and use. Moreover, all models will be made available through public repositories, allowing for widespread access and utilization. The fourth aim involves creating a cloud-based data ecosystem and computational platform. We will collaborate with relevant archives and computing facilities to develop a computational platform in the cloud. This platform will enable access to and processing of even very large data sets with commonly used pipelines and provide a wide range of users, even those with limited resources, with computational capability to analyze and visualize data, models, and model outputs. Finally, the fifth aim is centered around efficient dissemination, training, and coordination of BBQS research resources. We will coordinate data sharing, offer training on relevant topics like neuroinformatics, neuroethics, and ML/AI, and maintain a consortium Web portal. Furthermore, center staff will coordinate consortium activities like meetings, working groups, and policy and ethics discussions, ensuring smooth and effective operation. In summary, BARD.CC aims to catalyze the discovery of valuable insights from intricate relationships between brain activity and behavior, which in turn could advance neuroscience and our understanding of the human mind.

R24 Data repository

R24MH136632 - Ecosystem for Multi-modal Brain-behavior Experimentation and Research (EMBER)
PI(s) - Wester, Brock A
Institution(s) - Johns Hopkins University (Applied Physics Laboratory)

Neuroscience research has historically relied on observing tightly controlled behaviors in siloed laboratory experiments, constraining our understanding of the neural bases of complex behaviors observed in naturalistic settings. With ongoing advances in unobtrusive sensing technology, artificial intelligence and machine learning (AI/ML), and the availability of computing power, the field of neuroscience has been afforded an opportunity to make large-scale discoveries hitherto unimaginable. For this to be realized, however, it is crucial to facilitate secondary analyses that cut across individual datasets, allowing for research that transcends individual project designs. Such a goal cannot be achieved without a data archive that provides a compelling technical solution for storing and curating datasets, and that provides close integration with analytical resources that require minimal technical expertise to be leveraged. Here, we propose the Ecosystem for Multi-modal Brain-behavior Experimentation and Research (EMBER), a data archive specifically tailored to serve the unique needs of the Brain Behavior Quantification and Synchronization (BBQS) research community, which will be at the forefront of advancing neurobehavioral knowledge in coming years. At the heart of EMBER is a scalable, hybrid data archive which will house and manage multimodal and multi-species data collected by diverse research groups. Crucially, our hybrid architecture will not only automatically execute the optimal storage scheme for different modalities of data, leveraging existing BRAIN Informatics resources, but will also achieve the dual objectives of ensuring the security of behavioral and environmental data — which may include Protected Health Information (PHI) and Personally Identifiable Information (PII) — and expediting querying and data access not only within BBQS datasets but also with other BRAIN resources. Different cadres of EMBER users, such as BBQS data generators and analysts, as well as the broader neuroscience community, will be able to ingest, curate, and instigate discovery from data using a user-friendly portal that will streamline highly technical data harmonization and synchronization steps. In particular, development, testing, and deployment of analysis tools will be supported by cloud-based sandboxes that are seamlessly integrated with ML/AI resources developed by the BBQS Data Coordination and Artificial Intelligence Center (DCAIC). Integral to EMBER’s success will be acceptance in the community as the gold-standard engine for discovery, providing utility beyond being simply a passive, program-mandated data archive. Throughout its lifecycle, we will nurture bidirectional collaboration with data generators, analysts, and the broader neuroscience research community to introduce and maintain tools for sharing, querying, and analyzing data. We anticipate that EMBER and associated data resources will maximize the BBQS program’s potential to reach its ambitious objectives of transforming our understanding of the link between brain and behavior.

Institutions

Massachusetts Institute of Technology, Georgia Institute of Technology, Carnegie Mellon University, University of Pittsburgh, New York University, University of Pennsylvania, University of Florida, Yale University, Duke University, Harvard University, Icahn School of Medicine at Mount Sinai, Columbia University, Northwestern University, Rice University, NYU Langone Health, University of California San Diego, Cornell University, University of California at Los Angeles, University of Utah, Seattle Children's, Children's Hospital of Philadelphia, Rutgers University, University of Minnesota Twin Cities, University of Southern California, Penn State University, UMass Chan Medical School, Johns Hopkins University

*Contact PI/Project Lead