Winners of the Neukom Scholars Program have been announced with awards of up to $1,000 per term for up to two terms.
The program's goal is to fund undergraduate students engaged in faculty-advised research in the development of novel computational techniques or the application of computational methods to problems in the Sciences, Social Sciences, Humanities, and the Arts.
Third- and fourth-year students are eligible to apply for these competitive grants. Winners for the Summer, Fall, and Winter terms are:
Amyotrophic lateral sclerosis (ALS), otherwise known as Lou Gehrig’s disease, is a neurodegenerative disease caused by apoptosis of motor neurons. Though it may not be as prominent as Alzheimer’s disease or Parkinson’s disease, it is lethal enough that only about 10% of patients survive 10 years or more after the onset of symptoms. Our goal is to identify complex relationships between DNA sequence variations across the genome and the risk of ALS. We are working with two patient data sets, one American and one Irish, each consisting of more than 400 patients and approximately 500,000 genetic variations. These data were analyzed using the novel Multifactor Dimensionality Reduction (MDR) machine learning algorithm to identify nonlinear gene-gene interactions. The resulting pairwise interactions were then analyzed and interpreted using EVA (Exploratory Visual Analysis). We found that genes associated with liver failure and liver diseases were statistically significant in the American patient data. Some of these genes were described as neuronal activity-related. Interestingly, we were able to reproduce the same result with the Irish data, suggesting these results are not spurious.
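As a rough illustration of how MDR collapses two-locus genotypes into a single high/low-risk attribute, here is a minimal Python sketch with invented toy data (the actual analysis scans roughly 500,000 variants across hundreds of patients, and uses cross-validation rather than the training accuracy shown here):

```python
# Minimal sketch of the MDR idea for a single SNP pair.
# Genotypes are coded 0/1/2; label 1 = case (ALS), 0 = control.
# All values below are invented toy data.
from collections import defaultdict

snp_a = [0, 0, 1, 1, 2, 2, 0, 1, 2, 1]
snp_b = [0, 1, 0, 2, 2, 1, 1, 0, 2, 2]
label = [0, 0, 0, 1, 1, 0, 0, 0, 1, 1]

# Step 1: count controls and cases in each two-locus genotype cell.
cells = defaultdict(lambda: [0, 0])
for a, b, y in zip(snp_a, snp_b, label):
    cells[(a, b)][y] += 1

# Step 2: label a cell "high risk" when cases outnumber controls,
# collapsing the 3x3 genotype table to one binary attribute.
high_risk = {g for g, (ctrl, case) in cells.items() if case > ctrl}

# Step 3: score how well the high/low-risk labeling separates cases
# from controls (real MDR uses cross-validated accuracy, not the
# training accuracy computed here).
correct = sum(((a, b) in high_risk) == bool(y)
              for a, b, y in zip(snp_a, snp_b, label))
print("training accuracy:", correct / len(label))
```

On this toy example the two-locus rule separates cases from controls perfectly; real genome-wide data is far noisier, which is why the nonlinear, multi-locus search that MDR performs is needed at all.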
Using what we have found so far as the foundation, we wish to further examine the relationship between liver failure, liver diseases, and ALS. Our goal is to develop a predictive model of ALS that moves beyond the one-factor at a time approach that depends heavily on univariate statistical methods. We anticipate that these novel approaches to genetic analysis will have a positive impact on understanding neurological diseases such as ALS.
Gene ontology analysis of pairwise genetic associations in two genome-wide studies of sporadic ALS. BioData Mining 2012, 5:9. Published: 28 July 2012.
I propose to develop a computational model of interaction between speakers of different dialects. By modeling actors interacting subject to the mechanisms thought to influence dialect change, interactions and trends can be compared to those predicted or expected by existing theories of dialectal change.
Sociolinguistic principles to be tested include the notion of prestige dialects – the idea that certain dialects or dialectal features are more respected and may confer certain advantages to their speakers. Along with this is the idea of covert prestige – the idea that speakers may also highly value speakers of their own dialects, in spite of the higher status of the prestige dialect. Sociolinguistics also includes network theory, "gravitational" theories of dialect contact between different-sized populations, and issues of child dialect acquisition. Prof. Stanford's own research has involved many of these overall principles as drawn from sociophonetic observations (Stanford 2008a-b, 2010a-b). However, he has always wanted to help cross-pollinate the observational results of his field with computational modeling. Major quantitative sociolinguistics journals such as Journal of Sociolinguistics and Language Variation and Change rarely include computational modeling. Instead, the paradigm of quantitative sociolinguistics (e.g., Labov 1994, 2001) remains largely focused on observing phonetic/syntactic variation and analyzing the results in various models of social theory, such as social constructionist approaches (Giddens 1979; Gergen 1994). Our proposed research project would therefore help build connections between computational science and sociolinguistics. I will use C++ to develop the model.
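As a flavor of what such a model might look like (the project itself will be written in C++), here is a deliberately tiny Python sketch of prestige-biased dialect change; the agents, parameters, and update rule are all invented for illustration:

```python
# Toy agent-based sketch of prestige-driven dialect change.
# Agents, parameters, and the update rule are invented for
# illustration; the real model will be far richer.
import random

random.seed(1)

# Each agent speaks dialect "P" (prestige) or "V" (vernacular).
agents = ["P"] * 20 + ["V"] * 80
PRESTIGE_BIAS = 0.7  # chance the prestige form wins a mixed interaction

for _ in range(5000):
    i, j = random.sample(range(len(agents)), 2)
    if agents[i] != agents[j]:
        # The pair converges: the prestige form usually wins, but
        # covert prestige lets the vernacular win some of the time.
        winner = "P" if random.random() < PRESTIGE_BIAS else "V"
        agents[i] = agents[j] = winner

print("prestige speakers:", agents.count("P"), "of", len(agents))
```

Even this stripped-down "biased voter" dynamic lets one ask the kinds of questions the project targets, such as how strong covert prestige must be for a vernacular to survive contact with a prestige dialect.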
Most of us are familiar with the common metaphor for chaos, the butterfly effect: tiny happenings – the flapping of a butterfly’s wing in Brazil – may in time have significant consequences – altering the path of a tornado in Texas; classical chaos is the exponential divergence of infinitesimally perturbed trajectories. Ever since Poincaré’s pioneering work in the late 19th century, chaos has been shown to exist and play an important role in a variety of systems and processes, ranging from simple hand-held objects (e.g. a double-pendulum) to astronomical bodies (e.g. the Sun-Earth-Moon system, asteroids, even the motion of planets) to complex macroscopic phenomena (e.g. from the milk mixing in our coffee to financial markets, cardiac rhythms, and so on).
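The exponential divergence that defines classical chaos can be seen in even the simplest systems. As an illustration (not part of the project itself), the following Python sketch iterates the logistic map, a standard textbook example, from two nearly identical starting points:

```python
# Illustrative sketch: exponential divergence of nearby trajectories
# in the chaotic logistic map x -> r*x*(1-x) with r = 4.

def logistic_trajectory(x0, r=4.0, steps=40):
    """Iterate the logistic map from x0 and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

# Two trajectories starting a tiny perturbation apart.
a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)

# The separation grows roughly exponentially until it saturates at
# the size of the attractor, after which the trajectories are
# effectively unrelated.
for n in (0, 10, 20, 30, 40):
    print(n, abs(a[n] - b[n]))
```

A perturbation of one part in ten billion grows to order one within a few dozen iterations, which is exactly the "butterfly effect" described above.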
We will be studying the chaoticity properties of spin-1/2 (qubit) systems interacting according to a long-range dipolar Hamiltonian. While the problem of “many-body quantum chaos” is a fundamental open problem in quantum theory, this research is also directly relevant to quantum information science (QIS), in particular from the standpoint of building a quantum simulator on solid-state spin devices, where dipolar couplings occur naturally.
The Lettres tahitiennes, first published in 1784 by Madame de Monbart, takes on Europe's recent encounter with the South Pacific island of Tahiti from the point of view of a young female Tahitian. We are working from a scanned copy of the edition available at the Bibliothèque nationale de France. We propose to convert the PDF version of the novel into a searchable and editable text, then compile data about the three different methods used to establish the text of the novel and compare their efficiency and reliability. The results of this study could be published in a collaborative article aimed at the larger community of scholars in the Humanities who prepare modern editions of early-modern texts.
After a devastating Civil War and a decade of Reconstruction, the United States enjoyed a period of massive economic growth fueled by technological advancement and industrial expansion. The rise of a modern economy during this time is often characterized as a period of gross excess and extravagance, famously described as “The Gilded Age” by American author Mark Twain. Modern corporations formed as investors and tycoons reaped the unregulated rewards of exploitation and opportunism in a volatile, fluctuating economy. The wealthy grew wealthier while masses of poor immigrants labored in factories for almost nothing. In spite of this, some of today's basic reforms and limitations on government and business were first conceived during this era. The Gilded Age saw the rise of Progressivism and the first credible nationwide movements in opposition to systematic exploitation and a quickly expanding wealth gap.
Our project seeks to identify and explain trends in corruption and reform during the American Gilded Age by applying text-based analytics to digitized Gilded Age newspapers. Analysis will focus on word frequency, associations between different words and phrases, and other natural language processing techniques. These data will be correlated with geographical and socioeconomic context based on the location and audience of the newspapers. The ideas and opinions of millions of Americans can be gleaned from patterns in these pages of print. By understanding the beginnings of corruption and reform in an earlier period of capitalism, exploitation, and consumption, we can begin to explain the world as it is today and draw meaningful conclusions about the spread of corruption and the effectiveness of economic reforms.
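As a minimal sketch of the word-frequency step, assuming the digitized pages arrive as plain text (the snippet below is invented, not an actual Gilded Age article):

```python
# Minimal word-frequency sketch; the "page" below is an invented
# snippet, not an actual Gilded Age article.
import re
from collections import Counter

page = """The railroad trust again defied the state legislature.
Reformers demand the legislature curb the trust and its railroad."""

# Tokenize: lowercase, keep alphabetic runs only.
words = re.findall(r"[a-z]+", page.lower())
freq = Counter(words)

# The most frequent content words hint at the themes of a page.
print(freq.most_common(4))
```

In practice the pipeline would also drop stopwords like "the" and normalize OCR errors, but the same counting step, scaled up across thousands of pages and dates, is what makes trends in words like "trust" or "reform" visible over time.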
Since high-frequency waves attenuate rapidly in the earth's crust, I look at borehole stations in the area (which have significantly less background noise) for precisely these high-frequency signals, which indicate a local source. If these signals coincide with the arrival of large surface waves from distant, large-magnitude earthquakes, this is good evidence that the energy from the teleseismic event triggered a local quake. The eight borehole stations in the region came online in December 2007, so our project is constrained to earthquakes that have occurred since then. We further constrain our data set by looking only at events of magnitude 7.5 and over that occur more than 2000 km away, and magnitude 6.5 and over that occur within 2000 km. Earlier studies suggest that earthquakes of this size are at the lower boundary of being able to generate enough energy to cause local triggering.
The application of computational methods is critical to solving the problems of this project. I will be using two important programs written for Linux, SAC (Seismic Analysis Code) and GMT (Generic Mapping Tools), developed at the University of California system and the University of Hawaii, respectively. These languages and programs allow us to cut, filter, plot, manipulate, and otherwise analyze the vast amounts of data available from the Southern California Seismic Network.
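The core filtering idea can be sketched in Python (the project itself performs this kind of step in SAC; the sampling rate, cutoff frequency, and synthetic trace below are all assumptions for illustration):

```python
# Sketch of the high-pass filtering idea on a synthetic trace.
# Sampling rate, cutoff, and signal are assumptions for illustration;
# the project itself performs this kind of step in SAC.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100.0                    # sampling rate in Hz (assumed)
t = np.arange(0, 60, 1 / fs)  # one minute of synthetic data

# Synthetic record: a slow teleseismic surface wave plus a brief
# high-frequency burst standing in for a locally triggered event.
trace = np.sin(2 * np.pi * 0.05 * t)
burst = (t > 30) & (t < 32)
trace[burst] += 0.5 * np.sin(2 * np.pi * 20.0 * t[burst])

# Zero-phase 4th-order Butterworth high-pass above 5 Hz.
b, a = butter(4, 5.0 / (fs / 2), btype="highpass")
hp = filtfilt(b, a, trace)

# The filtered trace is quiet except during the local burst.
rms_in = np.sqrt(np.mean(hp[burst] ** 2))
rms_out = np.sqrt(np.mean(hp[~burst] ** 2))
print(f"RMS inside burst: {rms_in:.3f}, outside: {rms_out:.3f}")
```

The high-pass filter removes the long-period teleseismic wave entirely, so any remaining energy marks a high-frequency, and therefore local, source.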
Studies of human image processing have found that our visual preference for a repeated 2D image depends on the image content category. When viewing human faces, subjects rated familiar faces as more attractive; on the contrary, when viewing natural scenes, subjects showed higher preference for novel ones (Zajonc, 1968; Park, Shimojo and Shimojo, 2010). There thus exists a dichotomy between familiarity-driven and novelty-driven visual preferences in humans. Each image category possesses its own unique visual features and image statistics. Human faces, for instance, differ greatly from natural scenes in terms of spatial autocorrelation, distribution of contrast relationships, color statistics, and so on.
My research project will use computational modeling to identify statistical regularities that underlie human visual preferences, specifically the visual features that distinguish familiarity-driven and novelty-driven preferences. Using computational perception methods, I can better characterize these visual features and test their effects by simulating a human preference task. My ultimate vision for this project is to computationally identify feature-level components of human aesthetics, so that artificial intelligence can be built to facilitate visual art design.
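As one concrete example of such an image statistic, the sketch below computes lag-1 spatial autocorrelation on two synthetic "images" (random noise versus a smooth gradient); no real face or scene data is involved:

```python
# One simple image statistic: lag-1 spatial autocorrelation,
# computed on two synthetic "images" (random noise vs. a smooth
# gradient); no real face or scene data is used.
import numpy as np

def lag1_autocorr(img):
    """Correlation between each pixel and its right-hand neighbor."""
    left = img[:, :-1].ravel()
    right = img[:, 1:].ravel()
    return float(np.corrcoef(left, right)[0, 1])

rng = np.random.default_rng(0)
noise = rng.random((64, 64))                        # no spatial structure
gradient = np.tile(np.linspace(0, 1, 64), (64, 1))  # smooth structure

print("noise:", lag1_autocorr(noise))        # near 0
print("gradient:", lag1_autocorr(gradient))  # near 1
```

Statistics like this one, computed across many image categories, are the kind of feature-level measurement that could separate familiarity-driven from novelty-driven stimuli.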
In recent years, firms have fragmented their production chains across borders, with input suppliers spread across many countries. To quantify this process, Professor Johnson (with co-authors) has combined national input-output tables with bilateral trade data to form a global bilateral input-output table. This table describes how individual sectors in each country source inputs from home and abroad, as well as how countries trade final goods. That is, these tables provide a description of the network structure of global final and intermediate input linkages. To date, this work has focused on analysis of input sourcing networks at a single point in time.
The aim of the Neukom project is to use network analysis to characterize the structure of global input sourcing networks and changes in those networks through time. To analyze the structure of the global trade network, we will employ a number of linear-algebraic computational techniques. The key idea is that the input-output matrix can be translated into an adjacency matrix describing connectivity between sectors and countries. We will use this adjacency matrix to describe the network’s general properties. We can also identify the presence of clusters at different levels in the system. Further, we can assess the density and assortativity of the overall system (i.e., evaluate how plugged in different sectors/countries are and how likely each is to form links with others). Having described the properties of the network, we will trace out the “backbone” of the network (i.e., the most vital or intensive parts of the system). To aid understanding of the structure, we will then construct appropriate visualizations (maps and network graphs) of the network. Finally, we will build a framework for shock simulations, allowing us to analyze the effects of system-wide disruptions in economic activity (e.g., transmission of demand shocks across countries).
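The translation from input-output table to adjacency matrix can be sketched as follows; the sector names, values, and the 0.15 link threshold are all invented for illustration, not project data:

```python
# Toy illustration of turning an input-output table into a directed
# adjacency matrix; sector names, values, and the 0.15 threshold are
# all invented, not project data.
import numpy as np

sectors = ["US:autos", "US:steel", "DE:machinery", "CN:electronics"]

# Entry (i, j) is the value of inputs sector j sources from sector i.
io = np.array([
    [0.0, 0.1, 0.3, 0.2],
    [0.4, 0.0, 0.2, 0.1],
    [0.2, 0.0, 0.0, 0.3],
    [0.1, 0.2, 0.1, 0.0],
])

# Keep a directed link i -> j when j sources a non-trivial share
# from i, then read off basic network properties.
adj = (io > 0.15).astype(int)
n = len(sectors)
out_degree = adj.sum(axis=1)          # links each sector supplies
in_degree = adj.sum(axis=0)           # links each sector sources
density = adj.sum() / (n * (n - 1))   # share of possible links present

for s, o, i in zip(sectors, out_degree, in_degree):
    print(f"{s}: supplies {o}, sources {i}")
print("network density:", density)
```

On the real global table the same adjacency matrix feeds standard network measures (clustering, assortativity, backbone extraction) rather than the simple degree counts shown here.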
I am researching OpenCourseWare and other types of Open Education Resources (OER) while building a prototype website for sharing educational materials produced by Dartmouth.
My research involves all aspects of OER websites—creation, management, technology, funding, intellectual property, and the politics of institutional and faculty support. I hope to synthesize writings about and interviews from peer institutions and other OER organizations with interviews within the Dartmouth community, in order to arrive at a resource-sharing website that makes sense for Dartmouth and even pushes forward the greater movement for Open Education Resources.
Last Updated: 5/31/16