2016 Winners

Program Overview

Winners of the 2016-2017 Neukom Institute CompX Faculty Grants Program for Dartmouth faculty were announced, with awards of up to $20,000 for one-year projects.

The program seeks to fund both the development of novel computational techniques and the application of computational methods to research across the campus and professional schools.

Dartmouth College faculty, including those in the undergraduate, graduate, and professional schools, were eligible to apply for these competitive grants. This year's winners are:

Note: An asterisk indicates a significant contribution from Research Computing.

Computer Science

Devin Balkcom and Emily Whiting

Computational Design of Deployable Structures

Deployable structures such as tents, inflatable rafts, folding space telescope mirrors, and toy Lego buildings can be packed for easy storage or transit, and then reconfigured to serve their intended purposes. The time is right for computational exploration of automated design of deployable structures. 3D printing technology allows rapid testing of newly imagined geometries, and increasingly available computational power allows analysis of increasingly complex design problems. Furniture, buildings, and machines are typically built out of smaller pieces, allowing complex structures to be built from a small selection of easily manufacturable parts. These smaller parts form a language for the design of devices; the range of what can be built depends on what parts are available. Some parts, such as screws and nuts, are intended to allow easy disassembly as well as assembly, to allow repair, replacement, or re-use. Parts may be connected in different ways: by glue, cement, or geometric constraints.

In this work, we propose to study three specific motivating example problems: reusable building blocks that can be joined together in interlocking rigid patterns without relying on fasteners or glue, linear chains that can be folded into lightweight rigid structures, and foldable origami structures that can be deployed automatically. These problems are chosen to span the study of reusable building blocks of different dimensionality: 3D building blocks, 2D origami facets, and 1D chains. Understanding deployable structures made from simpler components, especially for this exploratory proposal, is a problem of fundamental science, but we also expect this work to have practical impact. We are particularly interested in understanding how to design medium- and large-scale structures (such as prefabricated housing, deployable bridges, and trusses for construction in space) that are expensive or impractical to build using 3D printing technology, and which benefit from easy transport and reusability.


Jesse Casana

Advancing Methodologies for Archaeological Aerial Thermography

One of the most under-utilized yet most promising methods for the discovery and mapping of archaeological remains is aerial thermographic imaging, the principles of which are relatively simple: due to differences in composition, density, and moisture content, materials on and below the ground surface absorb, emit, and reflect thermal infrared radiation at different rates.  Since the 1970s, archaeologists have speculated that a wide range of archaeological features, including buried walls, pits and ditches, tracks and pathways, and surface artifact concentrations, should all theoretically be visible in a thermal image if: 1) there is a sufficient contrast in the thermal properties of archaeological features and the surrounding soil matrix, and 2) the image is acquired at a time in the diurnal cycle when such differences are pronounced.  Put simply, a buried stone wall will heat and cool at a different rate than the surrounding soil, and at the right time of day or night, this difference is visible in a thermal image.

Despite the potential of aerial thermography in archaeology, the method has scarcely ever been employed, because until recently, acquiring high-resolution thermal images required a large, liquid nitrogen-cooled camera system, rigged onto a specially equipped plane that had to be flown perilously close to the ground by an experienced pilot.  However, recent advances in commercial drone technology, small uncooled thermal camera systems, and digital photogrammetry software packages together make collection, geometric correction, and mosaicking of thermal imagery relatively straightforward.  Recent research by this project’s PI demonstrated the power of these technologies to reveal archaeological remains in a case study at an ancient Chacoan settlement in New Mexico.

This CompX project will work to develop improved methods for the collection and processing of aerial thermal imagery to aid in the discovery and documentation of archaeological sites.  We will employ a new camera system, mounted on a small quadcopter, to collect thermal imagery at much higher spectral and spatial resolution than has previously been possible.  The project will undertake aerial thermographic surveys at archaeological sites in the US, Mexico, Cyprus, and Iraq, and using data collected on these surveys, we will explore new quantitative raster-based methods for filtering out noise, improving feature recognition, and performing feature discrimination.  We anticipate that the project's results will greatly enhance our knowledge of the individual archaeological sites targeted for survey and, more importantly, that the project will develop a new set of methods for both the collection and processing of thermal imagery in archaeology that will be of broad interest to researchers working around the globe.
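One simple instance of the raster-based noise filtering described above is subtracting a local mean, which suppresses broad temperature gradients while letting narrow anomalies such as a buried wall stand out. The sketch below is purely illustrative: the raster, anomaly placement, and window size are invented and are not the project's actual processing pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic thermal raster: a smooth background temperature gradient plus a
# subtle "buried wall" anomaly and sensor noise (arbitrary units).
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
scene = 0.05 * yy                      # broad background gradient
scene[30:34, 10:54] += 0.6             # narrow linear anomaly ("wall")
scene += 0.1 * rng.standard_normal((h, w))

# High-pass filter: subtract the mean of a (2k+1) x (2k+1) neighborhood so
# that slow spatial trends cancel while compact anomalies survive.
k = 7
padded = np.pad(scene, k, mode="edge")
local_mean = np.zeros_like(scene)
for dy in range(-k, k + 1):
    for dx in range(-k, k + 1):
        local_mean += padded[k + dy:k + dy + h, k + dx:k + dx + w]
local_mean /= (2 * k + 1) ** 2
residual = scene - local_mean

# The anomaly row should now stand out against a plain background row.
print(residual[31, 10:54].mean() > residual[10, 10:54].mean())
```

In the residual image the gradient is largely removed, so simple thresholding or feature-detection steps can operate on the anomaly directly.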


Michael Casey

Collaborative Brain Waves: Active Electrode Brain Computer Interfaces for Collaboration

Measuring and Capturing Collaborative Problem Solving via Inverse EEG

That there exist differences in cognitive problem-solving strategies between artists and scientists is not surprising. However, empirical evidence for such differences is scarce but increasing [1]. fMRI scanning allows us to probe representations of problem solving in individuals and to identify spatial and functional differences between groups of individuals [2]. However, the low temporal resolution of fMRI precludes us from studying processes of collaboration, synchronization, disruption, reaction, consensus, and discovery between individuals. Our goal is high-temporal-resolution EEG data collection and computational modeling for 3D spatial localization of EEG sources via inverse modeling [3], in order to understand the mental processes of collaborative problem solving.

The goal is to see if there are significant measurable differences in mutual information of EEG between the SS and SA groups.

The computational problem: to interpret and understand our data, we need to go beyond analysis of the signal at the electrode and solve the inverse problem for source localization. In the EEG inverse problem, there are nE instantaneous measurements and nV voxels in the brain; the voxels are obtained by uniformly dividing the 3D solution space, and each voxel contains a point source, which may be a vector with three unknown components (the three dipole moments) or a scalar (unknown dipole amplitude, known orientation). Several algorithms have emerged from the signal separation and latent variable literature to solve the inverse modeling problem, including low-resolution electromagnetic tomography (LORETA) [5], focal underdetermined system solution (FOCUSS) [6], recursive multiple signal classification (MUSIC) [7], standardized LORETA, and others. We will explore using these algorithms with high-quality data collection for modeling mental processes in collaborative problem solving. Prior source localization studies have shown that 32 channels are sufficient to yield excellent results for the inverse problem (thereby recovering 3D structural information about brain activation) [9]. However, the signals need to have sufficient gain and be relatively free of interference noise. We propose to purchase enough EEG channels with active electrode caps to fulfill these requirements and improve the data analysis of our collaborative problem solving study.

Social Brain Sciences

Luke Chang

Crowd Sourced Development and Validation of Neuro-Computational Models of Affective Processes*

Objective biomarkers of pathology exist for a number of diseases, and their development is one of the great advances of modern allopathic medicine. However, objective assessment of affective processes related to mental health disorders has lagged far behind. Currently, the only way to diagnose mental illnesses like depression is through self-reported symptoms such as increased feelings of sadness, guilt, or irritability and decreased interest in activities, concentration, and energy. Yet these symptom-based diagnoses are astonishingly unreliable across providers (Kappa coefficient = 0.25) [1], likely because such illnesses inherently degrade individuals’ ability to make these judgments accurately (e.g., depressive realism bias [2]). Thus, developing reliable objective biomarkers could dramatically improve diagnosis and treatment by allowing mental health to be characterized on the basis of underlying neuropathology rather than external self-reported symptoms. Direct measures of brain function provide a promising avenue for developing biomarkers of emotion pathophysiology. In the past several years, major advances in combining functional magnetic resonance imaging (fMRI) with machine learning techniques—algorithms for finding predictive patterns in complex datasets—have brought the goal of fMRI-based assessment of affect within reach. We have recently demonstrated for the first time that fMRI activity can predict whether an individual person is experiencing a high or low emotional response to arousing pictures with over 90% accuracy [3]. Critically, this biomarker is sensitive and specific to emotional responses when compared with other salient and arousing affective events such as thermal pain.

This preliminary success raises a number of issues that must be addressed before fMRI-based biomarkers can be used in large-scale clinical trials and clinical practice, including demonstrating: a) robustness across laboratories and procedures, b) specificity to type of emotion and elicitation method, c) applicability to clinical populations, and d) sensitivity to responses in clinical interventions.

The goal of this project is to develop http://neuro-learn.org, an open-source, web-based software platform that can facilitate neuroimaging data sharing and provide integrated machine-learning analysis tools. The Neurolearn platform will consist of three parts. First, it will provide an online repository that facilitates the uploading, storing, viewing, and sharing of neuroimaging datasets. This repository will offer user accounts with the ability to flexibly specify sharing permissions and to enter metadata accompanying the imaging data. Second, the website will feature a server-side analysis toolkit running machine learning algorithms in Python to facilitate the development and evaluation of brain-based signatures of affect. This toolkit will provide an array of algorithms for performing regression and classification (e.g., support vector machines, penalized regressions, and random forests), multiple options for cross-validation (e.g., k-folds and leave-one-subject-out), as well as methods to evaluate the sensitivity and specificity of the brain patterns (e.g., receiver operating characteristic curves). Finally, the website will provide a clean, intuitive, and responsive web interface to the machine learning tools, built with Javascript, Bootstrap, HTML, CSS, and Flask. Neurolearn will store both person-level neuroimaging maps (with metadata) and ‘research products’ or brain signatures (i.e., trained affect models) to be applied to new datasets. Users will be required to log in to use the tool and to specify the sharing permissions for their data, brain models, and test results (e.g., single user, a specific group of users, or the general public).
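The flavor of the toolkit's analysis loop, cross-validating a brain-pattern classifier with leave-subjects-out folds and scoring it by ROC AUC, can be sketched with NumPy alone. The data, the simple nearest-class-mean "signature" classifier, and all sizes below are synthetic illustrations rather than the platform's actual algorithms, which will come from standard Python machine-learning libraries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "brain pattern" data: 40 subjects x 4 trials, 100 features.
n_subj, n_trials, n_feat = 40, 4, 100
X = rng.standard_normal((n_subj * n_trials, n_feat))
y = np.tile([0, 0, 1, 1], n_subj)        # low vs. high emotional response
X[y == 1, :10] += 1.0                    # signal carried by 10 features
subj = np.repeat(np.arange(n_subj), n_trials)

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) statistic."""
    ranks = scores.argsort().argsort() + 1
    n1 = labels.sum()
    n0 = len(labels) - n1
    return (ranks[labels == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

# Leave-subjects-out cross-validation: each fold holds out whole subjects,
# trains a class-difference "signature" pattern, and scores held-out trials
# by their projection onto that pattern.
fold_aucs = []
for fold in range(5):
    test = (subj % 5 == fold)
    tr_X, tr_y = X[~test], y[~test]
    w = tr_X[tr_y == 1].mean(0) - tr_X[tr_y == 0].mean(0)
    fold_aucs.append(auc(X[test] @ w, y[test]))

print(round(float(np.mean(fold_aucs)), 2))   # mean cross-validated ROC AUC
```

Holding out whole subjects, rather than individual trials, is what keeps the AUC estimate honest about generalization to new people, which is the clinically relevant question.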

Neurolearn will leverage existing code from Neurosynth.org [4] and Neurovault.org [5] for uploading and storing image data, interactively viewing brain images, and searching and selecting data. Novel features that we will develop in our prototype include: (a) a database for neuroimaging “person-level maps” and “research products” (i.e., brain signature) data types; (b) metadata specification and entry; (c) user accounts and sharing permissions; (d) server-side machine learning algorithms to generate new signature maps; and (e) server-side applications to apply signature maps to subject data and evaluate sensitivity/specificity relative to user-specified outcomes. Finally, we will deploy our application to a local webserver optimized to accommodate both the large storage demands of the imaging repository and the high RAM and CPU-core counts needed to process multiple queued user jobs simultaneously.


Jonathan Elliott

High-Dynamic Range Video of High-Contrast Fluorescence Overlays During Surgery*

Fluorescence guided surgery is a very promising new technique that causes brain tumors to glow during surgery. The patient is given a contrast agent, which is specially engineered to stick to their type of tumor, and then fluoresce during surgery when illuminated by a special light. New state-of-the-art contrast agents--ones that only stick to tumors, and stick to even the smallest tumors abundantly--are invisible to the unaided eye, because they give off light outside of the visible spectrum (i.e., near-infrared). We have sensitive cameras that can readily detect this "invisible" glow, even at very faint levels. The purpose of this project is to understand how these infrared images can be merged or blended with what a surgeon normally sees during surgery, in a way that is natural, doesn't require making adjustments or changing settings, and doesn't otherwise interfere with the normal procedure. The result will be an important tool that enables the surgeon to see this "invisible" information in a real-time, augmented reality sort of way.

Social Brain Sciences

Maria Gobbini

Brain Encoding and Decoding of Human Social Actions

In our everyday lives we rely on an uncanny ability to recognize and infer intentions from the actions of others. This ability is unrivaled, and insights into how our brain empowers us with it can inform computer vision researchers by providing biologically inspired mechanisms. In understanding other people’s actions, not only can we easily draw a distinction between social and non-social actions, but we can also readily decipher the type of action within each domain and the associated intentions.  Imagine being invited to a dinner. Subtle conversational cues during that night can trigger a cascade of inferences (“why are they looking away so often?”, “they yawned: am I boring?”) that we can hardly control.

With this project, we want to understand how different types of actions with social and non-social meaning are represented in the brain, using functional Magnetic Resonance Imaging (fMRI) and machine learning methods, and to investigate parallels with current state-of-the-art computational models of action recognition.


Misha Gronas

“A Fox Knows Many Things, but a Hedgehog Knows One Important Thing”: Modeling Scholarly Trajectories in Scientific Space

We frequently observe two opposing strategies (cognitive styles), associated with breadth vs. depth, interdisciplinarity vs. focus, etc., in our own and others’ intellectual pursuits. Our task in this project is to formalize this intuition and model such strategies in the domain of scientific research by examining scientists’ publication histories. The publication history of a given scholar will be modeled as a trajectory, or path, within the space of scholarly publications. These trajectories will then be analyzed in terms of typical patterns and pattern similarities. Our first task will be to create a typology of patterns (trajectories) observable in a given scholarly field and across fields. We will then correlate specific path types (patterns) with objective measures of scholarly effectiveness and impact, such as citation indices, with the aim of predicting potentially “successful” academic career strategies. We will also analyze different factors influencing strategy choices, such as field, discipline, career stage, and gender. We can quantify interesting features of the trajectories, including: (1) experience (trajectory length), (2) interdisciplinarity of research interests (number of clusters), (3) erudition (the longest distance between different clusters), and (4) focus (the longest stay within one cluster). The final task is to identify whether certain typical career paths are associated with success in different fields.
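The four trajectory features listed above can be computed directly once a publication history is represented as a sequence of cluster labels and positions in an embedded "scientific space". The history, clusters, and coordinates below are invented for illustration, and using the largest pairwise distance between visited points as a proxy for erudition is one simple choice among several.

```python
import math

# Hypothetical publication history: each entry is (cluster_id, (x, y)),
# where (x, y) is the paper's position in a 2-D embedding of the space
# of scholarly publications.
history = [(0, (0.0, 0.0)), (0, (0.2, 0.1)), (0, (0.1, 0.3)),
           (1, (3.0, 0.5)), (1, (3.1, 0.4)),
           (2, (0.5, 4.0)),
           (0, (0.2, 0.2))]

clusters = [c for c, _ in history]
points = [p for _, p in history]

experience = len(history)                    # (1) trajectory length
interdisciplinarity = len(set(clusters))     # (2) number of clusters

# (3) erudition: largest distance between any two visited points,
# a simple proxy for the longest distance between clusters.
erudition = max(math.dist(p, q) for p in points for q in points)

# (4) focus: longest consecutive run of papers within a single cluster.
longest, run = 1, 1
for prev, cur in zip(clusters, clusters[1:]):
    run = run + 1 if cur == prev else 1
    longest = max(longest, run)

print(experience, interdisciplinarity, round(erudition, 2), longest)
# prints: 7 3 4.44 3
```

A "hedgehog" history would show a long focus run and few clusters; a "fox" history shows many clusters and a large erudition distance for the same trajectory length.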

Classics & Biological Sciences

Julie Hruby and Mark McPeek

Fingerprinting the Potters of Antiquity

Hundreds of thousands of archaeological artifacts from around the world preserve the impressions of ancient fingers and palms. Pottery, ceramic figurines, lost-wax-cast bronzes, plaster, and clay tablets all preserve prints. This project’s goal is to develop the methodology to evaluate the sex and, when possible, the ages of the producers of ancient clay artifacts. Studies of sexual dimorphism in modern fingerprints can be relatively reliable, reaching rates of accuracy comparable to those available from analysis of skeletal material. However, they rely on two-dimensional images of the prints of all ten fingers, and archaeological objects rarely preserve all ten prints. Our hypothesis is that the use of data from the third dimension will allow us to sex and age prints while working with smaller samples. We will use this funding toward the purchase of a very efficient, high-resolution laser 3D scanner that we will use to make the project feasible.

French & Italian

Kathrina LaPorta

Digitized Dissent: Mocking Monarchy in Absolutist France

The “Digitized Dissent” web-based critical edition is an interdisciplinary project linking history, literature, and digital humanities to provide a new way to read and study the pamphlets written against French monarch Louis XIV (r. 1643-1715). The website will present a full critical edition of two satirical pamphlets, L’Alcoran de Louis XIV (1695) [Louis XIV’s Koran] and Conseil privé de Louis le Grand (1696) [The Privy Council of Louis the Great], supplemented by hyperlinked annotations, prefatory information, and contextual notes. It is our hope that the website will become a portal for the addition of other pamphlets, or in the future, for English translations of the texts.

The digital format will provide an interactive platform to showcase the textual features and scholarly interest of early modern pamphlets in new ways: 1) the inclusion of hyperlinks allowing users to “jump” between ideas that appear in several pamphlets will highlight – in a dynamic manner – the shared literary references in a corpus more typically read for its political content; 2) the digital scholarly apparatus will more seamlessly integrate the critical discourse with the primary texts themselves; and 3) the full-text search option will create a network of references that enhances the possibilities for critical engagement within an interactive infrastructure. By combining the erudition of a critical edition with the interactive display possibilities a web page can offer, the project will bring to life for contemporary readers the political personages who populated the early modern French cultural imaginary.


Kimberly Rodgers

Modeling Identity Dynamics and Uncertainty in Social Interaction: Bayesian Affect Control Theory

CompX funding for this project will support the development of a simulation and visualization interface, powered by a novel computational model of social interaction dynamics. Decades of sociological research have mapped the contours of shared cultural knowledge, exploring how we internalize beliefs about the relative status, power, and agency of particular social groups, and how these beliefs guide our interactions with others. Affect control theorists first developed models of this impression formation process in the 1970s, which have been used in conjunction with data about shared cultural sentiments to generate testable predictions about actors’ behavioral and emotional responses to ongoing social events. These predictions have been supported by survey, experimental, and naturalistic evidence in a research program spanning several decades.

Despite empirical support and conceptual compatibility with major theories of emotion and social cognition across the disciplines, ACT’s models do not suitably capture the uncertain, dynamic, and sometimes paradoxical nature of identity noted in the social psychological literature. The PIs have recently developed a Bayesian extension of ACT (BayesACT), which accounts for the dynamic fluctuation of identity meanings during social interaction, explains how actors learn and adjust meanings through social experience, and shows how stable patterns of social interaction can emerge from individuals’ uncertain and noisy perceptions of identities. Using partially observable Markov decision process (POMDP) models, BayesACT allows us to account for cultural consensus on meanings as well as subcultural and idiosyncratic deviations from that consensus. By modeling social beliefs as probability distributions, our model can account for the fact that people have multiple identities that influence their social actions, and handle dynamic adjustments of identity salience during social situations. Most importantly, BayesACT can account for both agency and social structure in modeling social interaction, and show how durable structures can emerge, even in highly uncertain situations.
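The belief-updating at the heart of such models can be conveyed by a minimal discrete Bayes filter over identity hypotheses. BayesACT itself maintains continuous distributions over sentiment space within a POMDP, so the identities, behaviors, and probabilities below are purely illustrative.

```python
import numpy as np

# Two hypothetical identity hypotheses for an interaction partner, with a
# prior reflecting cultural consensus about how likely each identity is.
identities = ["friend", "rival"]
belief = np.array([0.7, 0.3])            # prior P(identity)

# Observation model P(behavior | identity); rows: identities, cols: behaviors.
behaviors = ["compliment", "interrupt"]
likelihood = np.array([[0.8, 0.2],
                       [0.3, 0.7]])

def update(belief, obs_idx):
    """One Bayesian belief update after observing a behavior."""
    posterior = belief * likelihood[:, obs_idx]
    return posterior / posterior.sum()

# A noisy sequence of observed behaviors shifts the belief toward "rival"
# without ever collapsing it to certainty.
for b in ["interrupt", "interrupt", "compliment"]:
    belief = update(belief, behaviors.index(b))

print(identities[int(np.argmax(belief))])   # prints: rival
```

Keeping the full posterior, rather than a single identity label, is what lets models of this kind represent uncertain, multiple, and dynamically shifting identities during an interaction.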

Our theory and model of social interaction have already made a tremendous impact in the field, as they offer many novel contributions to the study of social interaction. Nonetheless, our efforts to this point have focused primarily on building the model and executing targeted, illustrative simulations. More comprehensive validation of the BayesACT family of models would be a significant step toward the broader development and application of the theory, including its application to the study of more complex network dynamics. CompX support for this work will be used to develop point-and-click simulation and visualization tools, which will allow researchers to generate precise, testable mathematical predictions about interaction dynamics based on our new model. These tools will allow users to set advanced model parameters to predict the consequences of, for instance, different levels of uncertainty or noise in identity representations, multiple hypotheses about an actor’s identity, disagreement in the actors’ interpretations of the event, or tradeoffs between the motive for meaning maintenance and other types of situated goals. Each of these reflects an open question in the social psychological literature, about which testable model predictions will be generated and around which novel empirical studies will be designed. Model predictions will be tested for accuracy through planned empirical work using a variety of methods (e.g., survey research, experiments, naturalistic observation).

French & Italian

Scott M. Sanders

Visualization Tool for Multimedia in the Long Eighteenth Century*

This project proposes building a complex visual query and processing web interface on top of analysis tools and results from the Multimedia in the Long Eighteenth Century project. These visual tools and data visualizations will allow scholars to interrogate, visualize, and formulate additional computational analyses of textual works. The effort will create a Digital Humanities instrument that gives nonspecialists access to the tools, training sets, and analyses that have been produced to support the search for musical paratext in large bodies of work.

Multimedia in the Long Eighteenth Century (MMLEC) seeks to quantify the frequency with which musical paratext, including both lyrics and musical notation, appear in English and French language novels published between 1688 and 1815.


Roberta Stewart

The Ancient Coin Museum Project*

Roman coinage offers a rare type of evidence from the ancient world: a continuous—at times annual—record. Of even more potential value for the ancient historian, in issuing coins the Roman government exploited the potential of a commonly circulated medium to carry value-laden symbols. The individual coin represents the distillation of an artistic and aesthetic tradition (the iconography and design of the coin, both symbols and words) at a particular historical moment. The proposed computer-based tool will facilitate the study of ancient coins in three important ways. Fundamentally, the computer allows for the development of a digital archive of the museum collection and so the preservation of knowledge. A further value-added: the size of ancient coins and the oftentimes poor preservation of bronze coins represent a primary obstacle to their study, and digital images allow for necessary enhancement. Finally and crucially, interpretation of the symbols on the coin requires careful collection of contextual data in order to re-create the visual experience of seeing the coin. Here the computational component provides a unique advantage. The computer permits both broad views and selective sorting of the provenance or temporal range of the coins, their design, and iconographic detail. It also enables the complementary display of the multiple images and texts that help us reconstruct the visual memory and historical understanding of the ancient viewer. The coin tool thus will permit the modern observer to explore the textual and artistic contexts of coins, and the significance of coin symbols in historical moments.


John Voight and Edgar Costa

L-Functions and Modular Forms Database (LMFDB)

The Langlands program, first formulated by Robert Langlands in the 1960s, is a set of far-reaching conjectures rooted in deep theories of mathematical symmetry; it gives schematic directions for navigating between a dizzying array of subfields of mathematics, including number theory, representation theory, algebraic geometry, and harmonic analysis--and in the 21st century its reach continues to expand.  Only recently has it become feasible to carry out large-scale computational verification of the predictions of the Langlands program, to test the conjectures in higher-dimensional cases, and, in particular, to present the results in a way that is widely accessible to mathematicians.  To provide compelling visual and computational displays of the Langlands program "in action", the L-functions and Modular Forms Database (LMFDB) was created, available at http://www.lmfdb.org/.

We seek to advance fundamental computational research in the Langlands program in two aspects.  On one hand, we will work on improving the L-functions and Modular Forms Database user experience worldwide by improving the stability and reliability of the database itself.  On the other hand, we seek to advance both the mathematical underpinnings and the computational infrastructure of the LMFDB in the areas of genus 2 curves over the rationals and K3 surfaces.

Geography, Math, Biological Sciences, and Earth Sciences

John Winter, Dorothy Wallace, Matt Ayres, and Erich Osterberg

Expansion of Lyme Disease in the Northeast: Climate, Land Use, and Ticks

The incidence of Lyme disease is expanding and intensifying rapidly in the northeastern US for reasons that are not well understood.  An interdisciplinary team of researchers from Dartmouth’s departments of Geography, Biological Sciences, Earth Sciences, and Mathematics is working to determine the relative importance of climate change, evolving land use/land cover, and the dynamics of host animal populations to the spread of Lyme disease throughout the Northeast.  The team is employing climate models, satellite-derived regional land cover, and mathematical models of the black-legged ticks that carry and transmit Lyme disease to assess how recent changes in landscape and climate have contributed to current levels of Lyme disease in the Northeast, and to simulate future Lyme disease incidence under various scenarios for 21st-century climate and land cover change in this region.