2022 Grant Recipients

About the CompX Faculty Grants Program

Winners of the 2021-2022 Neukom Institute CompX Faculty Grants Program have been announced; the grants fund one-year projects by Dartmouth faculty. The Neukom Institute received over $675K in total requests and awarded $215K in financial support, supplemented by programming support from Research Computing and the Neukom Scholars program.

The program seeks to fund both the development of novel computational techniques and the application of computational methods to research across the campus and professional schools.

Dartmouth College faculty across the undergraduate, graduate, and professional schools were eligible to apply for these competitive grants.

Note: * indicates an award that is partnered with assistance from Dartmouth College Research Computing.

Biology and Psychological & Brain Sciences

Aman Aberra*

Optical imaging and computational modeling of electric field stimulation of single neurons


Transcranial brain stimulation techniques allow for the noninvasive delivery of electrical current to the brain, which can be used to activate, inhibit, or modulate brain activity. These techniques allow for causal investigation of human brain function as well as targeted therapies for neurological and psychiatric disorders. However, despite decades of use, their neuromodulatory and therapeutic efficacy can be relatively weak and variable between individuals. A primary obstacle to developing more effective protocols and devices for brain stimulation is the gap in our understanding of the fundamental mechanisms by which single neurons and their microscopically thin axonal and dendritic branches respond to external electric fields. Furthermore, the underlying molecular properties of axons and synapses governing membrane excitability and synaptic transmission are not well characterized due to their relative inaccessibility to conventional electrical recording techniques.

This project seeks to address these challenges by combining optogenetic techniques for resolving the electrical activity and molecular properties of single cultured hippocampal neurons with biophysically realistic computational neuron models. First, we will use genetically encoded voltage indicators to record membrane polarization and excitation by electric fields at high spatiotemporal resolution, allowing us to map the subcellular effects of stimulation across a range of spatial and temporal parameters. Second, we will extend a recently published method for endogenous labeling of neuronal proteins using CRISPR-Cas9 to label and quantify the distribution of voltage-gated sodium and potassium channels in hippocampal axons. These electrical and molecular data will then be used to build and constrain the parameters of compartmental neuron models reproducing the experimental responses to stimulation. This project will provide a mechanistic framework for predicting the direct response of single neurons to electric fields, which is a critical and necessary step in understanding and optimizing the complex effects of brain stimulation on brain networks and behavior.
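As a toy illustration of the compartmental-modeling idea described above (not the project's actual biophysical models), the sketch below integrates a two-compartment passive neuron in a uniform extracellular field. All parameters and values are invented for illustration; real models constrain such parameters with experimental data.

```python
# Minimal two-compartment passive neuron in a uniform extracellular field.
# Illustrative only: conductances, capacitance, and field strength are
# invented, not taken from the project's models.

def simulate(E_field_mV=10.0, steps=20000, dt=1e-3):
    """Euler integration of two coupled passive compartments.

    E_field_mV: extracellular potential difference between the two
    compartments induced by the applied field (assumed value).
    Returns membrane polarization (mV) of each compartment.
    """
    g_m = 1.0      # membrane leak conductance (arbitrary units)
    g_c = 0.5      # axial coupling conductance between compartments
    c_m = 1.0      # membrane capacitance
    v1 = v2 = 0.0  # membrane polarization relative to rest
    d = E_field_mV  # Ve2 - Ve1 along the field direction
    for _ in range(steps):
        # axial current depends on intracellular potentials, hence on the
        # extracellular potential difference d imposed by the field
        dv1 = (-g_m * v1 + g_c * ((v2 - v1) + d)) / c_m
        dv2 = (-g_m * v2 + g_c * ((v1 - v2) - d)) / c_m
        v1 += dt * dv1
        v2 += dt * dv2
    return v1, v2
```

Even this toy model reproduces the qualitative effect the abstract describes: a uniform field depolarizes one end of the cell and hyperpolarizes the other, with magnitude set by the membrane and coupling parameters.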

Film & Media Studies

John Bell

Augmenting the Organic World: Minimally Invasive Markers


Augmented Reality (AR) promises to extend our understanding of our surroundings by inserting digital constructs (sounds, images, text, and 3D models) into real-time camera views of the world around us. Driven by the growing capabilities of smartphones, AR has taken off as a medium for gaming, advertising, and industry training, among other fields. The methods behind these new applications, however, almost always assume an artificial setting, such as a house or city, where humans control the environment's appearance and can use consistent physical structures as reference points to place digital constructs. This CompX project will investigate methods for anchoring AR constructs in an organic setting where visual reference points change rapidly, and will apply those methods by annotating a forest trail as part of a pilot project: Damiano Benvegnù's Entangled Ecologies.

The project setting requires development of an AR relocalization system that merges data from multiple tracking systems to accurately place digital artifacts. The system is based on "content clusters" containing annotations added to the environment at specific sites. Once the user arrives, the relocalization system will place digital constructs using a combination of GPS, image recognition of custom fiducial markers to establish camera pose, and visual-inertial tracking to orient the user when those markers are out of view. By merging multiple tracking methods, we can take augmented reality to locations it has never been able to support before.

Anthropology

Jesse Casana

Drone Lidar 2.0


Over the past decade, aerial lidar has become a transformative tool for archaeology, enabling the discovery and documentation of vast, previously unknown ancient cities, agricultural landscapes, and ritual installations around the world. Using a low energy laser to scan the surface of the earth, lidar produces high-resolution topographic data, penetrating tree canopy and other dense vegetation and helping to reveal otherwise hidden archaeological features. Yet most lidar data are collected by a costly aircraft-mounted instrument, making aerial lidar surveys extremely expensive to conduct and thereby limiting the ability of researchers to deploy this powerful technology. Thanks to rapid developments in compact, low-cost lidar systems alongside ever more sophisticated drone technology and processing software, it is now possible to collect very high-resolution lidar data using a drone-mounted sensor, but the utility of these new systems for archaeological research remains largely untested.

Building on previous CompX-supported research, this project will use a new, ultra-compact DJI L1 lidar sensor deployed on the Matrice 300 drone to undertake lidar surveys at several key archaeological sites in forested regions. We plan to use the lidar system to investigate 1) an ancestral Puebloan settlement at Picuris Pueblo, New Mexico, where stone-built houses and agricultural terraces are hidden below piñon-juniper forest; 2) the Menominee Reservation, Wisconsin, where remains of ancient house basins, raised fields, and ceremonial mounds are present throughout forests; and 3) the Upper Connecticut Valley in New Hampshire and Vermont, where both historic and prehistoric archaeological features are obscured by mixed deciduous-evergreen forest. The project will explore the opportunities and challenges of this new generation of drone lidar technology, revealing many previously undocumented archaeological features and facilitating a range of other projects by Dartmouth researchers.

Government

Charles D. Crabtree

Discrimination Against those of Asian Descent


To what extent do members of the public and various economic elites (e.g., employers, elected officials, educators, physicians, etc.) discriminate against those of Asian descent in the many economic and social interactions that make up our everyday lives? In the last couple of years there have been many documented instances of hostility against Asians the world over that range from verbal harassment to brutal physical abuse. These have occurred in many advanced democracies including—but not limited to—the United States, France, Italy, Croatia, Finland, Hungary, Ireland, the Netherlands, Russia, Germany, Sweden, Belgium, Canada, and the UK as well as in various developing nations in South America, Oceania, and Africa. With public officials in many nations adding pejorative labels to COVID-19 (e.g. calling it the "Chinese virus") and racial animus on the rise against other racial/ethnic groups the world over, it is vitally important to explore the extent, scope, nature, and origins of hostility against those of Asian descent in economies across the globe.

This project's key objective is to expand our understanding of the scope, nature, and origins of discrimination against those of Asian descent across various OECD countries and, in so doing, to lay the groundwork for expanding the community of academic and non-academic leaders studying and seeking to address this core issue. To achieve this important goal, I will use a unique and novel combination of advanced methods for measuring discrimination in a series of survey experiments, correspondence experiments, and lab-in-the-field experiments to measure discriminatory behaviors against those of various Asian groups—including those from China, Japan, Korea, and other Asian nations. These will provide us with a comprehensive and robust understanding of the nature of bias against those of Asian descent in both high and low-stakes environments. 
 

Institute for Writing and Rhetoric

Christiane Donahue

Textual Moves and the Voices of Others: Longitudinal Research on Student Source Use


Tiane Donahue, Isaac Feldman, Nick Van Kley, Sarah Smith, Annika Konrad

Student writers work frequently with written assignments that involve interacting with other texts. The strategies they use and the ways they position themselves in relation to these other voices tell us volumes about their entrance into the scholarly community. Many of the texts students post to their college digital portfolios use and interact with sources, in relation to the particular text types assigned and the context of different courses. Our project analyzes the qualitative data of these texts via two quantitative methods: human coding (a social-sciences research approach) and NLP-driven automated coding. The goal is to identify statistically significant trends and to put these quantitative results in conversation with our qualitative analyses, notably case study. We will use the open-source tool AntConc, along with Atlas.ti, for some feature analysis, similar to work in big-data and digital humanities studies, grounded in textual corpus analysis, which is best adapted to studying change over time, patterns, and comparisons across contexts.

This descriptive study will produce simple analytics (for example, frequency of different text types and textual phenomena); identify statistically significant trends in source use, text use, and use of other kinds of evidence across different text types and course types; track statistically significant change over time in student productions and approaches; and track significant correlations among various contextual factors. The study will inform national conversations about how students work with sources to establish their authority, create strong arguments, and synthesize available knowledge. It will also enable network-building with other US and international teams studying these questions.

Classics

Julie Hruby

Associating Fingerprint Patterns with Age and Sex: A Quantifiable Approach


A wide range of prehistoric and ancient Greek ceramic objects, including vessels, ceramic sculpture, seal impressions, and writing tablets, preserve the fingerprint impressions of their producers. Traditionally, archaeologists have matched prints in an effort to understand ancient labor systems, but more recently, we have also begun to ask a much wider range of questions. Questions about the ages and sexes of producers are among them, but so far the techniques used to reconstruct those factors have typically worked at the level of populations rather than individuals, and they have also been subject to challenges posed by differential clay shrinkage rates.

The current project will improve the accuracy of sexing and aging producers of ancient Greek artifacts by using fingerprints that were accidentally impressed in objects made by modern Greek ceramicists as a reference sample. Fingerprint impressions from modern Greek adult potters of known sexes and age grades have already been collected and scanned with a high-resolution 3D scanner, and a Greek attorney has assisted us in complying with both European Union and Greek law as they relate to the collection of prints from juveniles. We have also begun the process of collecting and scanning modern prints from juveniles, and we will begin archiving our raw data.

Thayer School of Engineering

Jiwon Lee*

Hyperglycosylation of Immunogens via In Silico Engineering (HyperImmunISE)


Glycosylation is one of the most important post-translational modifications of proteins. Enveloped viruses such as HIV-1 and SARS-CoV-2 often exploit the machinery of their host to heavily glycosylate their surface proteins. These glycans can effectively mask various antigenic epitopes on viral proteins to avoid recognition by antibodies, allowing the virus to evade the immune system. We aim to leverage this process by developing an algorithm that introduces non-native glycans on immunogens to block areas of the protein surface, with the goal of designing vaccines that focus immune responses on epitopes associated with broad protection. Completion of the proposed study will facilitate in silico protein design and enable the use of these design tools outside of our research group: the computational design tool is versatile enough to engineer many different proteins and will be made publicly accessible as a webserver.

Geography

Justin Mankin*

National Attribution of Historical Climate Damages: Data in Service of Climate Litigation


Quantifying which nations are culpable for the economic impacts of anthropogenic warming is central to informing climate litigation and claims for restitution for climate damages. However, for a country seeking legal redress, the magnitude of economic losses from warming that are attributable to individual emitters is not known from existing work, undermining its standing for climate liability claims. We have addressed this gap, combining historical data with climate models of varying complexity in an integrated framework to quantify each nation's culpability for historical temperature-driven income changes in every other country. By linking individual emitters to country-level income losses from warming, our results provide critical insight into climate liability and national accountability for climate policy. Based on our collaboration with the Sabin Center for Climate Change Law at Columbia University, it is essential to publicly serve these data, which are the first of their kind, to support the domestic and international legal communities pursuing ongoing and future climate litigation. Our project has two goals: (1) build a Dartmouth-based website to publicly serve the data we have generated from this project to the legal community and (2) seed the next steps of our work furthering the attribution of climate damages. Our computational work provides evidence for liability claims of the monetary losses countries have suffered based on the actions of specific emitters. Crucially, the distribution of these impacts is highly unequal, emphasizing the inequities embedded in the causes and consequences of historical warming. We will serve these data, and the science that developed them, in a transparent and interpretable manner, while positioning ourselves to extend our computational accounting framework to other actors, such as individual firms.

Thayer School of Engineering

Colin Meyer*

Storage of water in snow during climate change: preferential flow through snow


The Arctic is warming faster than anywhere else on Earth. In 2019, the Summit region of the Greenland Ice Sheet experienced surface melting, which has happened only a handful of times in the last thousand years, and the frequency is increasing: the previous time the surface melted at Summit was in 2012. At the same time that new areas of Antarctica and Greenland are melting because of climate change, alpine snowpacks are experiencing more weather variability, increasing the frequency of deadly avalanches and putting snow supply for water resources at risk. These downstream effects present a need for predictions of how snow will evolve on the surface of ice sheets and in alpine systems. Numerical models for these processes are being developed but are limited to one vertical dimension and based on empirical parametrizations. In this CompX project, we will develop a two-dimensional snow evolution model to understand the heterogeneous infiltration of water into snowpacks. Focusing on the role of lateral water flow, we will numerically analyze the structure of flow pathways, the location of refreezing, and net water fluxes. We will compare our simulation results to existing one-dimensional simulations and to field observations from the surface of the Greenland Ice Sheet.
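The difference between one-dimensional and two-dimensional treatments can be illustrated with a toy grid model (not the project's physically based model): when columns of a snowpack have different permeabilities and exchange water laterally, water concentrates in fast pathways instead of infiltrating uniformly. All values below are invented for illustration.

```python
# Toy 2-D grid illustrating preferential flow: water infiltrates a snowpack
# whose columns have different permeabilities, with weak lateral exchange.
# Permeabilities and rates are invented; a real model would solve
# physically based equations for water transport and refreezing.

def infiltrate(perm, depth=10, steps=30, inflow=1.0):
    """perm: per-column permeability (fraction of water passed down per step).
    Returns water content grid[row][col], with row 0 at the surface."""
    ncol = len(perm)
    theta = [[0.0] * ncol for _ in range(depth)]
    for _ in range(steps):
        # new meltwater arrives at the surface each step
        for j in range(ncol):
            theta[0][j] += inflow
        # move water downward, bottom rows first so each parcel moves
        # at most one cell per step
        for i in range(depth - 2, -1, -1):
            for j in range(ncol):
                flux = perm[j] * theta[i][j]
                theta[i][j] -= flux
                theta[i + 1][j] += flux
        # weak lateral exchange between neighboring columns
        for i in range(depth):
            for j in range(ncol - 1):
                ex = 0.05 * (theta[i][j] - theta[i][j + 1])
                theta[i][j] -= ex
                theta[i][j + 1] += ex
    return theta
```

Running this with one high-permeability and one low-permeability column shows water reaching depth much sooner along the fast pathway, the qualitative behavior a 1-D model averages away.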

Chemistry

Katherine Mirica

VeRidium: A Virtual Reality Platform for University Science


Connecting the macroscopic and atomic dimensions represents one of the conceptual challenges that students face in chemistry and physics courses. In introductory courses, the concepts of quantum mechanics, crystal symmetry, and atomic orbitals pose challenges to students, as they represent a departure from classical continuum models toward quantum phenomena that are not easily visualized in the macroscopic world. In more advanced courses that go beyond atoms, many challenges in the study of molecular and materials structures arise from the difficulty of applying conventional methods of visualization to three-dimensional (3D) models that are intrinsically inaccessible within the macroscopic world. To overcome this challenge, a Virtual Reality (VR) experience can go beyond the two-dimensional (2D) confines of the printed page or a screen to reveal the key features of the quantum world in 3D.

The overarching goal of this project is to develop a process for using VR to aid visualization of atoms, molecules, and materials at university-level depth. The use of VR in chemistry and physics is not to replace real hands-on laboratory experience, but to enable students to grasp abstract concepts that typically demand high cognitive load and strong spatial cognition, where conventional visualization resources often prove inadequate. Our approach toward this goal is organized into two specific aims: (1) partnering with the student-run Digital Applied Learning and Innovation (DALI) Lab at Dartmouth to develop an independent VR app for Oculus Quest that helps students learn the solid-state structure of materials; and (2) implementing VR-based modules in courses to assess the efficacy of VR-based visualization on the student learning experience compared to traditional methods. Despite the enthusiasm for VR in science education, there is currently limited information on its efficacy in aiding 3D molecular and material visualization, a lack of information about best practices, no clear, scalable process for broad implementation and dissemination (e.g., cost, inclusion, etc.), and a shortage of VR-based educational resources at university-level depth. This Neukom CompX grant will address these gaps in VR-based activity development and implementation.

Thayer School of Engineering

Elizabeth Murnane

Building Children's AI Literacy through Play-based Educational Technologies

 


Artificially intelligent (AI) technologies continue to integrate into daily life, including for children born into today's digital era. As AI advances, it is imperative to foster children's "AI literacy" to enhance their skills and savviness surrounding emerging technology. Research finds that such early introductions to computational concepts in childhood help increase later interest in computer science, including among students from traditionally underrepresented groups. Early exposure can also build professional AI competencies that are likely to be in high demand and to exceed workforce supply. Furthermore, demonstrating the limitations of AI can help prepare young people to guard against age-of-AI hazards such as data breaches, surveillance capitalism, biased algorithms, and dis/misinformation.

This CompX project will undertake the iterative design and proof-of-concept evaluation of a play-based educational platform that builds children's intuition and understanding of AI. Specifically, we will use the interactive scaffold of mini-games, crafted through human-centered design work with kids and families to develop effective, engaging, and inclusive learning activities. We have begun creating prototypes for a suite of games that will be playable on-demand and can be adaptively recommended. To evaluate our designs, we will conduct lab studies and short-term pilot deployments to measure children's performance on recognition and recall tests, assess self-efficacy and other psychological reactions, and gather metrics of user experience and system acceptance. Findings will inform longitudinal studies to monitor effects over time. Moving forward, we also aim to extend these ideas to additional aspects of computing/STEM education, other interaction paradigms, and broader learner populations.
 

Chemistry

Jacquelyne Read*

Development of Predictive Models for EDA Complex Formation in Asymmetric Photoredox Catalysis


Light touches every aspect of our daily life. Plants learned long ago to harvest visible light to make chemical compounds through the process known as photosynthesis, and chemists are still catching up. Chemical synthesis using light as an energy source allows us to generate radical intermediates under very mild conditions, which in turn serves as a powerful tool enabling a wide range of chemical transformations. This field has grown immensely over the past decade, paving the way for the development of synthetic methods capable of synthesizing high-value molecules.

Our studies focus on harnessing visible light for chemical reactivity through the in situ formation of photoexcited electron donor–acceptor (EDA) complexes. EDA complexes are ground-state aggregations of an electron-rich donor molecule and an electron-poor acceptor compound held together by weak intermolecular forces. Despite the evolution of strategies to exploit EDAs for use in chemical transformations, identification of new EDA complexes continues to be limited by trial-and-error empiricism. We will use the computational resources provided by this CompX Grant to create models capable of predicting new EDA molecular pairings, thereby expediting the development of new light-driven chemical transformations.

Orthopaedics - Geisel School of Medicine

Peter L Schilling, MD, MSC

Role of Transfer Learning in the Analysis of Medical Images


We seek to answer discrete questions about the role of transfer learning in the analysis of medical images – a necessary next step for advancing the application of deep learning in medical imaging, both in our lab and others.

The advent of deep learning has enabled huge advances in computer vision and thus in medical imaging research. The societal benefit of automated interpretation of medical imaging studies, like X-rays, with deep learning is well described. Broadly speaking, well-trained models have the potential to read imaging studies faster and more accurately while supporting clinicians' workloads and decision processes. Automated interpretation of medical images also democratizes access to expert interpretation of an imaging study anywhere a digital representation of the study can be obtained.

Since deep learning's emergence, the focus has been on ways to learn faster, with less data, and with greater accuracy. The biggest success has been a technique called transfer learning. Rather than train a computer vision model from scratch, a model developed for one task (e.g., identifying cats in natural images) is repurposed as the starting point for training a model for a different task (e.g., identifying fractures in a radiograph). As a result of its presumed benefits, transfer learning has become the de facto method for applying deep learning to medical imaging.

Yet the benefits of transfer learning are not a given when it comes to medical imaging. Successful transfer learning requires that the original and final tasks share enough similarity; when they don't, final models take longer to train or lose accuracy. Natural images (e.g., pictures of cats, dogs, cars, buses, etc.) and medical images (X-rays, MRIs, etc.) differ in fundamental ways, and transfer learning's effectiveness is unclear given the imaging domains' unique attributes and tasks. It is possible that transfer learning starting from models trained on natural images offers little performance benefit, while simple, lightweight models trained on domain-specific images perform comparably or better.
As such, our lab is working to answer discrete questions about the effectiveness of transfer learning in the analysis of medical images – specifically musculoskeletal radiographs.
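The core mechanic of transfer learning, reusing parameters learned on a source task as the starting point for a related target task, can be shown with a deliberately tiny sketch (a linear model under gradient descent, not the lab's actual deep networks; the tasks and data are synthetic). When the tasks are similar, the warm-started model reaches lower error in the same number of fine-tuning steps than a model trained from scratch, which is exactly the benefit that may evaporate when the tasks are too dissimilar.

```python
# Toy transfer learning as warm-starting: fit y = w*x + b on a "source"
# task, then fine-tune on a related "target" task, versus training from
# scratch for the same number of steps. All tasks/data are synthetic.

def fit(xs, ys, w0, b0, lr=0.05, steps=50):
    """Gradient descent on mean squared error for y = w*x + b."""
    w, b = w0, b0
    n = len(xs)
    for _ in range(steps):
        gw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        gb = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * gw
        b -= lr * gb
    return w, b

def mse(xs, ys, w, b):
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Source task: y = 2x + 1; target task: y = 2.2x + 0.9 (closely related).
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
src = [2 * x + 1 for x in xs]
tgt = [2.2 * x + 0.9 for x in xs]

w_src, b_src = fit(xs, src, 0.0, 0.0, steps=2000)  # "pretraining"
w_t, b_t = fit(xs, tgt, w_src, b_src, steps=5)     # fine-tune, few steps
w_s, b_s = fit(xs, tgt, 0.0, 0.0, steps=5)         # from scratch, few steps
```

If the target task were instead very different from the source, the warm start would confer little or no advantage, mirroring the open question about natural-image pretraining for radiographs.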

 

Linguistics

James Stanford & Rolando A. Coto Solano

Gamification of Spoken Data Collection for Sociolinguistic Research


Using a novel approach for collecting speech samples for sociolinguistic research, we aim to produce the largest, most geographically comprehensive audio dataset of North American English dialects ever attempted. The results will have the potential to greatly increase scholarly knowledge of English dialects and vowel systems in general. In this project, we will develop and implement a smartphone app game/research tool designed to be widely downloaded and used for free by iPhone and Android users. We will process the acoustic phonetic data using the online system DARLA, which was created with a prior Neukom CompX grant. Research output will include detailed acoustic analyses of vowel features and statistical correlations with geographic and demographic patterns. The results are likely to provide new, potentially transformative perspectives on vowel systems in North American English.


Psychological and Brain Sciences

Mark Thornton

Data-driven discovery and capture of collective human group states


Groups of people experience wide varieties of collective states, ranging from tense meetings to creative brainstorming sessions to relaxing get-togethers. Individuals can often skillfully "read the room" – picking up on the collective state of the group from cues such as body language, facial expressions, and tones of voice. This ability allows people to tailor their actions appropriately to the context and navigate complex social situations.

Artificial systems could also benefit from the ability to perceive human group states: imagine a party where the music automatically changed to match the mood, or a tool to help leaders manage their team meetings. This project will develop a machine learning pipeline to discover group states directly from natural social interactions. By combining state-of-the-art models for detecting body pose, facial expressions, and speech characteristics, we will map out the space of possible group states in a data-driven way. The ability to automatically detect where a group is within this space will facilitate a wide range of applications in human-computer interaction.
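The data-driven discovery step can be sketched in miniature: represent each moment of a group interaction as a feature vector and cluster the moments to reveal recurring states. The features and values below are invented stand-ins (e.g., movement energy and smiling intensity); the real pipeline would extract such features with pose, face, and speech models.

```python
# Sketch of discovering group states by clustering feature vectors for
# moments of interaction. Feature values are synthetic inventions; a real
# pipeline would derive them from pose/face/speech detectors.

def kmeans(points, centers, iters=10):
    """Plain k-means on lists of equal-length feature vectors."""
    for _ in range(iters):
        # assign each moment to its nearest current center
        groups = [[] for _ in centers]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            groups[dists.index(min(dists))].append(p)
        # move each center to the mean of its assigned moments
        centers = [
            [sum(p[k] for p in g) / len(g) for k in range(len(g[0]))] if g else c
            for g, c in zip(groups, centers)
        ]
    return centers

# Moments from a "tense meeting" (low movement, low smiling) versus a
# "brainstorm" (high movement, high smiling) -- synthetic [movement, smiling].
moments = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],
           [0.9, 0.8], [0.8, 0.9], [0.85, 0.85]]
states = kmeans(moments, centers=[[0.0, 0.0], [1.0, 1.0]])
```

The recovered centers summarize the two group states; classifying a new moment by its nearest center is then the "read the room" step an artificial system would perform.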