
2015 CompX Winners

Neukom CompX Faculty Grants

Winners of the 2015-2016 Neukom Institute CompX Faculty Grants Program for Dartmouth faculty have been announced with awards of up to $20,000 for one-year projects.

The program seeks to fund both the development of novel computational techniques as well as the application of computational methods to research across the campus and professional schools.

Dartmouth College faculty across the undergraduate, graduate, and professional schools were eligible to apply for these competitive grants. This year's winners are:


Virginia Beahan
Senior Lecturer
Studio Art

Virginia Beahan, Studio Art; “Elegy for an Ancient Sea”

For the last three years, I have been photographing the landscape and built environment around the Salton Sea in Southern California.  This otherworldly area represents a collision of dramatic circumstances, both "natural" and man-made, and represents the myriad and complex challenges we face as a growing modern society.  The effects of economic inequalities, resource wars, and the insatiable demands of a consumer culture are starkly visible in an improbable landscape of extreme heat and little water.  And yet, this is also a place of dramatic beauty, inscribed by history and rich with possibility. This project will be exhibited at the Museum of Contemporary Art San Diego later this summer.  I am close to finishing the photographing phase, but wish to make one final trip this April.  I also have negatives that require scanning and proofing, and I will be creating about 30 large format digital prints for the exhibition.  A student assistant would be particularly helpful in making it possible for me to meet the deadlines that are fast approaching.  


Miles Blencowe
Physics & Astronomy

Miles Blencowe, Physics & Astronomy; “High Throughput Tracking of Beating Cilia”

This concerns a relatively new project involving a collaboration between Dr. Elizabeth Smith in the Biology Department and myself. Our CompX proposal is specifically to fund two visits by Dr. Andrew Berglund later this year; neither Dr. Smith’s nor my existing grants have specified funds to cover his visits. Dr. Berglund, an expert on optical nanoparticle tracking, has been developing MATLAB-based software that will accurately and efficiently track an evolving microtubule profile (microtubules are the basic inner components of cilia) as it slides relative to neighboring microtubules.

Cilia, or flagella, are cylindrical appendages of biological cells, roughly 5-10 micrometers long and about 200 nanometers in thickness. Cilia are able to flex, waving back and forth with frequencies of tens of cycles per second. This periodic waving motion enables, for example, single-celled organisms such as the E. coli bacterium or an algal cell such as Chlamydomonas to swim in water, seeking and sweeping in nutrients. In mammals, and humans in particular, cilia are required for sperm propulsion, removal of debris from the respiratory tract, circulation of cerebrospinal fluid, and the determination of the left-right body plan during development. Defects in cilia motility may result in impaired fertility, respiratory distress, hydrocephalus, heart defects, and other health conditions. Failure to form cilia in mice is often lethal; it is fair to say that the cilium organelle is required for life.


Nicola Camerlenghi
Assistant Professor
Art History

Nicola Camerlenghi, Art History; “The Virtual Basilica Project”

“The Virtual Basilica Project” aims to apply computational techniques of visualization to the humanistic discipline of architectural history by creating digital reconstructions of one of the earliest and most important Christian buildings in Rome: the Basilica of San Paolo fuori le mura. Because a fire devastated the building in the nineteenth century, it has remained enigmatic and difficult for scholars to study. This novel approach promises to radically shift prevailing practices in the discipline. Until recently, scholars were limited by conventional modes of representation and analysis (sketches, plans, sections, elevations, or even photographs), which isolate a work at a single moment in time. It is no wonder that buildings like San Paolo are rarely considered throughout their lives, for these visual tools curtail a holistic study of a work across its long history. Worse, they condition our understanding of architectural design and construction as a product rather than a process. In contrast, I approach buildings as continuously morphing fabrics, the changes to which reveal which features were perceived to have ongoing relevance and which were deemed dispensable: which survived, and which were slated for demolition. “The Virtual Basilica Project” challenges the status quo by applying computational tools in order to better understand San Paolo as an aggregate of all its temporal moments. The project draws upon a vast compendium of analog media related to the no-longer-extant basilica and combines these sources into a digital model of the structure through time.


Sol Diamond
Associate Professor
Thayer School

Sol Diamond, Thayer School; “Model Development for Computational Magnetic Particle Imaging”

Magnetic nanoparticles (mNP) hold great promise for use in medicine as targeted therapeutics and imaging contrast agents. Over the past decade researchers have begun to understand how to exploit the nonlinear magnetic properties of mNPs for medical imaging. This emerging method is broadly termed Magnetic Particle Imaging (MPI). Our group has recently introduced several new algorithms for mNP imaging that have opened the door to developing a relatively low cost and portable MPI system. Our fundamental innovation is to view MPI as a computational rather than hardware design problem.

We demonstrated this idea in a method we call Susceptibility Magnitude Imaging (SMI). We then extended our algorithm and demonstrated that it is possible to perform spectroscopic characterization of mNPs with AC Susceptibility Imaging (sASI). Most recently, we have demonstrated the theory necessary to surpass conventional resolution limits in a method we call nonlinear Susceptibility Magnitude Imaging (nSMI). Our prior work on MPI algorithms has been at a pilot scale with minimal imaging complexity and only 3 to 12 imaging voxels at resolutions of 5 to 10 mm.
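The core idea of treating MPI as a computational rather than hardware problem can be sketched as a linear inverse problem: measurements are modeled as weighted sums of per-voxel nanoparticle concentration, and the image is recovered by solving for those concentrations. The sketch below is illustrative only; the sensitivity matrix, noise level, and hot-spot placement are invented (the 12-voxel size matches the pilot scale mentioned above), and the actual SMI/sASI/nSMI algorithms are more sophisticated.

```python
import numpy as np

# Toy linear forward model: each of 40 measurements is a weighted sum
# of nanoparticle concentration over 12 voxels. The sensitivity matrix
# A is invented for illustration.
rng = np.random.default_rng(0)
n_voxels, n_meas = 12, 40
A = rng.uniform(0.1, 1.0, size=(n_meas, n_voxels))

x_true = np.zeros(n_voxels)
x_true[3] = 1.0   # a concentrated "hot spot" of nanoparticles
x_true[7] = 0.5

b = A @ x_true + rng.normal(0.0, 0.01, n_meas)  # noisy measurements

# Reconstruction = solving the linear inverse problem by least squares.
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(x_hat, 2))
```

Scaling this from 12 voxels to clinical 3D grids at 1-mm resolution is exactly the computational challenge described below: the matrix grows by orders of magnitude and direct least-squares solves become impractical.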

To continue this work, we are now facing the significant challenge of scaling up these algorithms to meet clinical resolutions. We believe our computational MPI method could become a viable medical imaging alternative to Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) but we must first demonstrate 3-dimensional imaging with 1-mm resolution and the capability to dramatically scale up our image reconstructions.


Sergi Elizalde
Associate Professor
Mathematics

Sergi Elizalde, Mathematics; “A mathematical model for the dynamics of tumor heterogeneity derived from clonal karyotypic evolution”

This is a new interdisciplinary project started in the fall of 2014 in collaboration with Samuel Bakhoum, from the Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York. Our goal is to understand the role of numerical chromosomal instability in the dynamics of tumors. Due to experimental limitations, fundamental characteristics of karyotypic changes in cancer are poorly understood. We have developed an experimentally inspired model for clonal karyotypic evolution, based on the potency and chromosomal distribution of oncogenes and tumor suppressor genes, which we have used so far to replicate some experimental data. In this project we plan to refine and enhance our model, run more simulations to understand key parameters that govern the dynamics of clonal karyotypic evolution, and analyze the model mathematically in order to explain why tumors evolve the way they do.
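A minimal sketch of the kind of stochastic simulation such a model could be built on: each cell carries a vector of chromosome copy numbers, each division missegregates a chromosome with some probability, and daughters that lose all copies of any chromosome (nullisomy) die. All parameter values here are illustrative assumptions, not values from the actual model.

```python
import random

random.seed(42)

N_CHROM = 23        # chromosome types (illustrative)
P_MISSEG = 0.05     # per-division missegregation probability (illustrative)

def divide(karyotype):
    """One cell division: a missegregation event moves one chromosome
    copy from one daughter to the other."""
    d1, d2 = list(karyotype), list(karyotype)
    if random.random() < P_MISSEG:
        c = random.randrange(N_CHROM)
        d1[c] += 1   # one daughter gains the missegregated copy...
        d2[c] -= 1   # ...the other loses it
    return d1, d2

def viable(karyotype):
    # Nullisomy (zero copies of any chromosome) is assumed lethal.
    return all(n > 0 for n in karyotype)

# Start from a single diploid cell and simulate a few generations.
population = [[2] * N_CHROM]
for _ in range(8):
    next_gen = []
    for cell in population:
        next_gen.extend(d for d in divide(cell) if viable(d))
    population = next_gen

aneuploid = sum(1 for c in population if c != [2] * N_CHROM)
print(len(population), aneuploid)
```

Fitness effects of oncogenes and tumor suppressors would enter such a model as karyotype-dependent survival or division probabilities in place of the simple viability rule above.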


Maria Gobbini
Associate Professor

Maria Gobbini, Psychological and Brain Sciences; “Supramodal Neural Representation of Familiar Individuals”

Efficient social interactions are based on quick recognition of identity and accurate reading of social cues. Faces and voices are the main vehicles for extracting this information during social interactions. Within the category of faces, a qualitative difference characterizes the neural representation of familiar and unfamiliar faces. Personally familiar faces, in contrast to the faces of strangers, are detected faster (Gobbini et al. 2013a, 2013b) and recognized with great efficiency in conditions of poor visibility and over large changes in head angle, lighting, partial occlusion, and age (Burton et al., 1999; Visconti di Oleggio Castello et al., 2014). The representation of personally familiar faces is amplified by person knowledge and emotion, which play a critical role in successful recognition (Gobbini & Haxby, 2007). By contrast, recognition of unfamiliar faces is surprisingly inaccurate (Burton et al., 1999; O’Toole et al. 2006). Thus, personally familiar faces are among the most highly learned and salient visual stimuli for humans and are associated with changes in the representation of visual appearance and semantic knowledge that afford highly efficient and robust recognition.

Learning robust representations of familiar individuals that are invariant across stimulus variations, both within and between sensory modalities, is one of the most important functions for adaptive social behavior. The proposed research project will apply cutting edge computational approaches to discover and decode these representations, to investigate how they are instantiated in functional neural architecture, and to investigate the extent to which these representations are similar across individuals. Establishing methods for investigating these issues will provide a basis for further studies of social interaction and individual differences in social cognition that may play a role in social success and clinical disorders, such as social anxiety, autism, and schizophrenia.


Oxford: Disentangling the Representation of Identity from Head View Along the Human Face Processing Pathway


Ryan Halter
Assistant Professor 
Thayer School

Ryan Halter, Thayer School; “Towards Autonomous Robotic Surgery”

The goal of this project is to develop a software suite to record kinematic data from a surgical robot, use the data to visualize surgical tool trajectories during surgical procedures, and finally to establish an initial database of these trajectories for certain surgical tasks and procedures. Our initial long-term goal is to use this database to train a surgical robot to perform certain surgical tasks autonomously (potentially using image-guided feedback), and ultimately we aim to push the frontier of medical robotics towards near-autonomous surgery. My group has extensive experience in developing surgical tools and medical imaging instruments for deployment during surgical procedures; the aims outlined here represent a transition in my lab's activities toward evaluation and control of surgical robot manipulators, with the long-term goal of bridging our experience in device and image system development into the realm of surgical robotics.

Minimally-invasive robot-assisted surgery is becoming a standard technology found in the modern operating room and is used for surgeries ranging from throat cancer removal to partial nephrectomy to radical prostatectomy. The daVinci Surgical System (Intuitive Surgical, Inc., Sunnyvale, CA) is a multi-armed, surgeon-controlled device representing the primary surgical robot on the market. It provides stereoscopic (3D) visualization of the internal anatomy to the surgeon (without requiring a large incision). Typically, 3-5 small (~1-2 cm) incisions are made and long surgical instruments are guided through ports placed through these incisions. The tools are interfaced to the multi-jointed robotic arms. It is important to note that this device is a “fly-by-wire” device, meaning that at all times its movements are controlled by the surgeon; in this way, the robot acts as an extension of the surgeon. Besides being minimally invasive, the benefits associated with use of this technology include tremor reduction, 3D visualization of the surgical field, and 6 degree-of-freedom articulation of the end-effector (active tool). Currently, the robot does not perform any tasks autonomously, and as a result there is still the chance that human error during the procedure could lead to acute or chronic post-surgical complications. One approach to reduce surgeon error and to improve outcomes may be to incorporate some autonomous control for certain tasks. Example tasks that might be automated include suture tying, image-guided resection, vessel occlusion, and vascular clip deployment.
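The project's first step, recording timestamped tool kinematics and computing per-task trajectory metrics, can be sketched as follows. The record structure, field names, and the path-length metric are illustrative assumptions; the actual da Vinci data interface is not shown here.

```python
import math
from dataclasses import dataclass

@dataclass
class ToolPose:
    """One timestamped sample of a surgical tool's 6-DOF pose (fields illustrative)."""
    t: float                      # seconds since procedure start
    x: float; y: float; z: float  # tool-tip position (mm)
    roll: float; pitch: float; yaw: float  # orientation (radians)

def path_length(trajectory):
    """Total tool-tip distance traveled: a simple per-task metric."""
    return sum(
        math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))
        for a, b in zip(trajectory, trajectory[1:])
    )

# A short synthetic trajectory: the tool tip moves 1 mm in x per 0.1 s sample.
traj = [ToolPose(t=0.1 * i, x=float(i), y=0.0, z=0.0,
                 roll=0.0, pitch=0.0, yaw=0.0) for i in range(10)]
print(path_length(traj))  # 9 unit steps -> 9.0 mm
```

A database of such trajectories, grouped by surgical task, is what would later serve as training data for autonomous task execution.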


Jane Hill
Associate Professor

Jane Hill, Thayer School; “Integrating rich data from lung infections for contextually-framed biomarker discovery”

Recent advances in sequencing technologies have provided astonishing census information on the myriad bacteria, fungi, and viruses that live in and on our bodies, and these microbes have critical functions in human health and disease. Dartmouth has a very strong cystic fibrosis (CF) research community focused, in part, on microbial contributions to the progression of CF-related lung disease. People with CF have mutations in the CFTR gene, which is important for maintaining proper hydration of mucus membranes. In healthy lungs, mucus traps and helps to transport microbes and debris out of the lungs. In people with CF, the mucus is too thick for transport, allowing the trapped microbes to establish chronic, polymicrobial lung infections [1]. The infections elicit strong inflammatory immune responses in the lung, which damage the tissues and cause lung function decline and, ultimately, death [1]. In addition, over time, these infections acquire traits such as antibiotic resistance, mucoidy due to excess exopolysaccharide production, and quorum sensing dysregulation that help the bacteria to evade eradication and are correlated with worse patient outcomes [2]. Therefore, understanding the polymicrobial communities in the CF lung – which microbes are present, and how these organisms are living, changing, and interacting with each other and their host – will have direct impacts on improving the methods for detecting and treating the infections, and thus the health and longevity of persons with CF.


Bill McKeeman
Adjunct Professor
Computer Science 

Bill McKeeman, Computer Science; “bGauss: A second language interface for MATLAB”

The general area is programming languages; the specific task is implementing a new scientific language. MATLAB is one of the most successful programming systems for scientific computation. There are more than 1,000,000 users worldwide; the system is actively supported by The MathWorks, a company based in Natick, Massachusetts. MathWorks is, unfortunately, trapped by its user base: any change to the MATLAB programming language potentially invalidates billions of lines of code. They choose not to make such a move. In the 30+ year history of the company, the language has accumulated some flaws (we geeks call them “warts”). This work corrects the warts, simplifies the language, extends it, makes it more usable, and delivers it on top of the existing MATLAB. In effect we will be presenting a second language interface to the existing implementation, including tens of thousands of library functions already in the product and submitted by MATLAB users. The programming support tools in the MATLAB product are available to us, as developers, and to the eventual users of what we build. The new language is as similar to the old as possible within these constraints. We call it bGauss, implying the user can “be like Gauss,” the famous German computing prodigy.

An existing partial implementation of bGauss has been done in MATLAB (some with previous Neukom support). The technical details are beyond the scope of this proposal. In summary, there are some principles being applied: whole-program compilation, static strong typing without declarations, functions as first-class entities, user-defined operators, compilation at call-site, automatic parallel computation, the generalization of data access into three orthogonal primitives, and the simplification of name spaces.


Deb Nichols

Deb Nichols, Anthropology; “Establishing a compositional database for DNA and isotopic analyses”

Last summer we excavated the earliest farming village and obsidian workshop in the northeastern Basin of Mexico. Another part of the project involves establishing a compositional database for this time period for Central Mexico for both obsidian and ceramics, which ties into our CompX grant. 

This is the earliest known village site that still exists in this region. We found a series of burials, including one high-status individual that was completely unexpected; it's clear that the founders of the village came from somewhere else. One hypothesis is that farmers expanded from areas to the south, but the burial patterns also suggest perhaps western Mexico, where maize was first domesticated. No one has worked on a site of this time period in this region since the 1970s. In the intervening years, isotopic analyses of ancient bones and teeth have made it possible to track ancient migrations. Just this year the first ancient DNA study was done of remains dating to a later period in this region. This work was done at the University of Texas in a lab directed by Deborah Bolnick. I have been in touch with her, and she thinks there is a very good chance of sufficient ancient DNA in the remains. We have at least seven individuals represented; a burial we excavated at the very end of the field season had at least three individuals. Deborah and I are really excited to pursue this research. We have already lined up Rebecca Storey, a biological anthropologist who specializes in the analysis of ancient skeletons, to analyze the burials in Mexico and select the samples for DNA and isotopic analyses. Jime Mata-Muiguez is a grad student in Bolnick's lab who is interested in working with us, and I attach the article they published this year on aDNA of Aztec remains. The DNA analysis, as you may know, involves significant computational components. This should be breakthrough research: who were the first farmers?



Tracy Onega
Associate Professor
Geisel School

Tracy Onega, Geisel School; “A GeoComputational Approach to Giving Population Context to Social Media: ‘Textation’ without Representation? “

Every day, over 65 million tweets (short messages) are sent on the social networking platform Twitter. (1) Increasingly, researchers are using this voluminous source of social media data to track population trends, monitor illnesses, describe behaviors, and characterize diffusion of information. In addition to its large volume of messaging, Twitter is also attracting researchers due to its diverse population of over 255 million users (2), its user profiles, and its amenable Terms of Service. Public health and epidemiologic researchers are harnessing the potential of Twitter, and other social media, to study health-related characteristics of the population, and are now frequently designing health interventions based on social media. From 2010-2014, the number of PubMed-referenced studies using social media to study health-related topics nearly doubled, from about 250 to nearly 500. But how representative of the population are these individuals? To date, tools to readily answer this question do not exist. Many studies focus on the geographic distribution of individuals, such as in using Twitter to track the spread of influenza in the U.S. (3) While the absolute number of individuals identified with the flu over time and location via Twitter is valuable, a critical need is to know the relative number of individuals affected; do 2,000 individuals represent 5% of the population or 50% of the population?

We aim to address this crucial gap in how public health and epidemiologic studies using social media are able to be designed and interpreted. By developing computational algorithms linking Twitter feeds with geographic information systems (GIS) and US Census data, we can develop a platform for periodic monitoring and reporting of how Twitter reflects the underlying population. This will enable researchers from all disciplines to draw more meaningful and appropriate conclusions from their findings, as well as use more sophisticated approaches for study design based on an understanding of the populations included. Our key objectives for this proposed work are:

Specific Aims: 

1. To characterize Twitter data in relation to the underlying population composition. Using profile, location, and message content information, we will develop an algorithm to integrate geospatial and socio-demographic data and perform automated analyses to characterize the population represented by Twitter data across a range of spatial units.

2. Develop a platform for periodic monitoring and reporting of how Twitter reflects the population. For the work in Aim 1 to have an impact it needs to be accessible and current. Thus, we will create a platform for automated refreshing of the analytic results from the algorithm above and provide usable reporting tools.
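The "rate, not count" motivation behind these aims can be illustrated with a toy calculation: the same absolute tweet count means very different things relative to the underlying Census population. The county names and all numbers below are invented for illustration.

```python
# Identical flu-tweet counts, very different population rates.
census_population = {"County A": 40_000, "County B": 400_000}
flu_tweets = {"County A": 2_000, "County B": 2_000}

for county, pop in census_population.items():
    rate = flu_tweets[county] / pop * 100
    print(f"{county}: {flu_tweets[county]} tweets = {rate:.1f}% of population")
```

The proposed platform would automate this denominator step at scale, joining geolocated tweets to Census spatial units instead of a hand-built dictionary.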


Nick Reo
Assistant Professor
Native American Studies

“Developing novel computational approaches in support of indigenous-led Earth Stewardship of Great Lakes coastal wetlands”


Jim Stanford
Associate Professor

Jim Stanford, Linguistics; “Toward completely automated vowel extraction”

This project will use Automatic Speech Recognition (ASR) in a novel way that could have a significant impact on the field of sociolinguistics. Dialect research relies on quantitative acoustic analyses of vowel resonant frequencies or “formants.” In the last few years, sociolinguists have begun using semi-automated ASR methods to extract this acoustic information, such as Forced Alignment Vowel Extraction (FAVE, Rosenfelder, Fruehwald, Evanini and Yuan 2011). In the semi-automated approach, human annotators must first create sentence-level transcriptions from the voice recording. The system then aligns each segment of the acoustic spectrum with the phonemes in a sentence, and automatically measures the formant frequencies for every vowel. Semi-automated systems have accelerated the pace of sociolinguistic research, but such methods still require significant human effort to manually create the sentence-level transcriptions. We believe that sociolinguistics is on the brink of an even more transformative technology: large-scale, completely automated vowel extraction without any need for human transcription. With such a system, it would be possible to quickly extract pronunciation features from virtually limitless hours of recordings, including YouTube, large audio/video archives, and even live-streaming video. 
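Once forced alignment has located each vowel in time, the extraction step reduces to sampling formant values at, for example, each vowel's midpoint. The sketch below uses a synthetic stand-in for the formant tracker and invented alignment intervals; real pipelines such as FAVE obtain F1/F2 from the audio with acoustic analysis tools like Praat.

```python
# Schematic of vowel extraction after forced alignment: given vowel
# intervals (from the aligner) and a formant track (from an acoustic
# analyzer), sample F1/F2 at each vowel's midpoint.

def formants_at(time_s):
    """Stand-in for a real formant tracker; returns synthetic (F1, F2) in Hz."""
    return 400 + 50 * time_s, 2000 - 100 * time_s

# (vowel label, start_s, end_s) intervals as a forced aligner would emit.
alignment = [("IY", 0.10, 0.25), ("AE", 0.60, 0.85)]

measurements = []
for vowel, start, end in alignment:
    midpoint = (start + end) / 2
    f1, f2 = formants_at(midpoint)
    measurements.append((vowel, round(f1, 1), round(f2, 1)))

print(measurements)
```

Fully automated extraction replaces the human-made `alignment` input with transcriptions produced by ASR, which is precisely the transformative step the project targets.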

Recent website: DARLA (Dartmouth Linguistic Automation) provides a suite of automated vowel formant extraction tools tailored to research questions in sociophonetics.



Wen Xing
Asian & Middle Eastern Languages 

Wen Xing, Asian & Middle Eastern Languages; “A Geographic Chronological Model of the Western Han (206 BCE – 25 CE) Chinese Clerical Calligraphy”

The Clerical Script, also known as Hanli or Han Clerical Script, was fully developed in the Han Dynasties (206 BCE – 220 CE) in China. It has been one of the most popular calligraphic styles for Dartmouth students of CHIN 62.01 “Chinese Calligraphy” and CHIN 82 “Chinese Calligraphy and Manuscript Culture” to choose for their course projects. It is also a popular script in which forgeries of ancient Chinese bamboo and wood manuscripts have been produced. As the first half of the Han Dynasties, the Western Han (206 BCE – 25 CE) is the period in which the Han Clerical Script emerged, developed, and was finalized. The Geographic Chronological Model (GCM) of the Western Han Chinese Clerical Calligraphy (WHCCC) that I propose here identifies at what rate a specific brushstroke demonstrates reliable Western Han Clerical brush techniques with particular geographic and chronological identities, as well as the geographic and chronological distributions of representative brush techniques in the WHCCC. A rate variation of a particular brush technique could indicate the probability of either qualification or authenticity of claimed Han Clerical brushstrokes, individual characters, or even whole pieces of manuscripts.


The proposed GCM features two aspects: (1) a chronological database of selected originally self-dated scripts written in the WHCCC, including images of both individual characters and whole manuscript pieces; and (2) a model of Geographic Information System (GIS) featuring both geographic and chronological distributions of particular brushstrokes and brush techniques in the WHCCC.
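The rate computation at the heart of the GCM can be sketched as a simple tally: for each (site, period) pair, the proportion of observed brushstrokes that exhibit a given technique. The observations below are entirely invented for illustration; the actual model would draw on the self-dated manuscript database described above.

```python
from collections import defaultdict

# (site, period, uses_technique) observations for one brushstroke type.
# All entries are invented example data.
observations = [
    ("Dunhuang", "early Western Han", True),
    ("Dunhuang", "early Western Han", False),
    ("Dunhuang", "late Western Han", True),
    ("Juyan", "late Western Han", True),
    ("Juyan", "late Western Han", True),
]

counts = defaultdict(lambda: [0, 0])  # (site, period) -> [hits, total]
for site, period, hit in observations:
    counts[(site, period)][0] += int(hit)
    counts[(site, period)][1] += 1

rates = {key: hits / total for key, (hits, total) in counts.items()}
print(rates)
```

Deviations of a questioned manuscript's per-technique rates from these geographic and chronological baselines would then flag possible forgeries.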



Last Updated: 1/18/17