2020 Grant Recipients

About the CompX Faculty Grants Program

Winners of the 2020 Neukom Institute CompX Faculty Grants Program for Dartmouth faculty have been announced for one-year projects. The Neukom Institute received over $700K in total requests and awarded $235K in financial support, supplemented by programming support from Research Computing and research assistants from the Neukom Scholars program.

The program seeks to fund both the development of novel computational techniques and the application of computational methods to research across the campus and professional schools.

Dartmouth College faculty, including those in the undergraduate, graduate, and professional schools, were eligible to apply for these competitive grants.


* indicates an award that is partnered with assistance from Dartmouth College Research Computing.

+ indicates an award that is partnered with a research assistant (RA) provided through the Neukom Scholars program.

Earth Sciences

Akshay Mehra & Justin Strauss

Acquisition of Fixed-Wing Unmanned Aerial Vehicles for Training the Next Generation of Scientists





At all times, Earth's surface is in some state of being covered, denuded, shifted, and/or altered. Some of these processes, such as the movement of tectonic plates, are driven by large-scale physical forcings and can only be observed passively. Other fluctuations, including rising sea levels, desertification, and deforestation, are actively affected by human behavior. All of these changes have the potential to drastically alter the way(s) of life for the biological inhabitants of Earth. By reconstructing the past, and predicting the future, of the evolution of the planet's surface, we can better understand what impacts processes like mining, clear cutting, and a wide variety of other environmental and societal phenomena will have, and how we may avoid and/or mitigate undesired consequences of change.

One powerful observational methodology is remote sensing, which encompasses the vast amounts of data collected by an increasingly large number of space- or air-borne sensors. These data, when processed, can be (and have been) used to quantify a wide range of physical processes, including the waxing and waning of ice sheets, vegetation growth or loss, and even the creation of brand new landscapes. For those interested in studying how the surface of Earth has changed, is changing, and will change in the future, the availability of remotely sensed data means that a solid understanding of how to acquire, process, and analyze such information is necessary. Unmanned aerial vehicles (UAVs, or "drones"), which are relatively inexpensive and easily operated by small teams, are ideal tools to teach students these required skills.

This funding will enable the acquisition of two fixed-wing UAVs, both to help train the next generation of field scientists and to elevate remote sensing-based research at Dartmouth. The UAVs will be incorporated into ongoing projects, such as the mapping of two-billion-year-old microbial buildups to better understand the co-evolution of life and environment on Earth, and become a central part of a remote sensing course in EARS. Additionally, owing to the ease of use of fixed-wing drones, we anticipate that faculty and students in Anthropology, Geography, Biological Sciences, and Environmental Studies, among others, will take advantage of the equipment for their own research goals.

Geisel School of Medicine

Aravindhan Sriharan*

Motivation: Flow Cytometry Technology and its Importance in Lymphoma/Leukemia Diagnosis



Almost all cancers are diagnosed by examination of their microscopic appearance. Systematic examination of microscopic morphology yields critical information regarding the type of cancer in the case at hand, whether it has features that portend a good or a bad prognosis, and, in some cases, whether it would be amenable to certain targeted therapies. This type of morphologic analysis has proven an invaluable tool, with impressive performance over millions of cases for over a century. But it is clear that some cancers can closely resemble others, and even mimic entirely benign lesions. In some cases, additional tools are needed.

Flow cytometry was developed to provide information critical to the diagnosis of leukemias and lymphomas. The technique separates tumor cells and individually passes them through the path of a laser. By measuring the degree and type of scatter that the laser undergoes, different cell types can be identified. In addition, the cells are exposed to fluorescently labelled antibodies; if the cells express a given protein on the surface, the antibody will bind, and the dye will fluoresce once exposed to the laser. In this way, approximately 10,000 cells from a given case are analyzed. The results are graphed on a scatterplot, which is then analyzed by a specially trained physician, and the information gleaned is used to complement that obtained from analysis of the tumor's microscopic morphology. Used together for decades, morphology and flow cytometry have yielded accurate diagnoses for millions of leukemia and lymphoma patients around the world.

But flow cytometry has its shortcomings. It requires tissue to be placed in a specialized medium soon after it is removed from the patient. If the specialized medium is not used, flow cytometry may be impossible for that patient's sample. In addition, flow cytometry is typically performed by specially trained technologists, prior to being analyzed by the pathologist. In resource-limited settings, such technologists are not readily available. Finally, flow cytometry is an antibody-based assay and therefore can be expensive, adding significantly to the cost of cancer care.

Our team has recently published related work on the development of an algorithm capable of yielding other antibody-based data from images of the tumor's microscopic appearance. Our goal now is to develop and train an artificial intelligence algorithm capable of yielding data similar to that of flow cytometry. The development of a rapid, easy-to-use tool capable of yielding clinically useful information may prove beneficial to patients with these forms of cancer.

Physics and Astronomy

Brian Chaboyer*

The Age of the Universe 



Ever since Edwin Hubble's discovery in 1929 that the universe was expanding, a key goal of astronomy has been to determine the rate of expansion. A puzzling discrepancy in the measurements of the expansion rate has emerged in recent years. The "local" measurements (out to distances of 1 billion light-years) conflict with measurements on the largest scale (distances of 46 billion light-years). The expansion rate of the universe is closely connected to the age of the universe, with faster expansion rates implying a younger age. An accurate estimate of the age of the universe would provide an independent test of these conflicting estimates for the expansion rate of the universe.

This project will determine an accurate age of the universe by estimating the ages of the oldest stars in our Milky Way galaxy. Observational evidence shows that the oldest stars formed when the universe was 0.5 billion years old, so the age of the universe will be determined by adding the age of the oldest stars to their formation age. To determine the ages of the oldest stars, we will generate a suite of computational stellar models using a Monte Carlo approach. In this method, instead of using the best estimate for how matter behaves at high temperatures, we randomly select a particular value from a distribution of values (which reflects the known uncertainties) and then create a stellar model. Each of these stellar models will be used to determine an estimate for the age of the oldest stars. The median of these ages will be our best estimate for the ages of the oldest stars, and the distribution of derived ages will provide an estimate of the uncertainty in this age estimate.
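
The Monte Carlo logic described above can be sketched in a few lines of Python. Here `stellar_model_age` is a hypothetical stand-in for a full stellar evolution code, and the single input-physics parameter with a 2% Gaussian uncertainty is purely illustrative, not the project's actual model inputs:

```python
import random
import statistics

def stellar_model_age(opacity_factor):
    # Hypothetical stand-in for a full stellar evolution code:
    # maps a sampled physics parameter to an inferred stellar age (Gyr).
    return 13.0 * opacity_factor

random.seed(0)
ages = []
for _ in range(10_000):
    # Draw the uncertain input from its error distribution
    # (here, a 2% Gaussian uncertainty around the best estimate of 1.0).
    factor = random.gauss(1.0, 0.02)
    ages.append(stellar_model_age(factor))

# Median = best age estimate; the spread of the distribution
# gives the uncertainty on that estimate.
best_age = statistics.median(ages)
ages.sort()
lo, hi = ages[int(0.16 * len(ages))], ages[int(0.84 * len(ages))]
```

In the actual project, each draw would require running a full stellar model, so the number of samples is limited by compute time rather than by the simple loop shown here.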

Earth Sciences

Carl Renshaw*

Monitoring Global-Scale Spatial and Temporal Variation in River Suspended Sediment Transport



Riverine transport of suspended sediment maintains riparian habitat and complexity, bolsters wetlands and deltas, delivers important nutrients to the oceans, and, through its influence on coastal erosion, impacts coastal infrastructure.  It is also a key indicator of anthropogenic disturbance to watersheds, such as that due to the proliferation of Artisanal Scale Gold Mining (ASGM) in the headwaters of the Amazon.  Robust monitoring techniques for riverine suspended sediment are key to understanding the likely sources and impacts of these disturbances.

We have recently developed a methodology to efficiently search through the millions of Landsat images of Earth to create a timeseries (since 1984) of the suspended sediment carried by any major river on Earth. Significantly, we have shown that as few as five in situ samples of sediment concentration from a given river reduce the uncertainty in satellite estimates of sediment concentration by a factor of two. This is a game changer in how we monitor rivers. Historically, agencies such as the United States Geological Survey put tremendous effort into building the infrastructure to monitor a single river location for a long period of time. The satellite record instead allows a more cosmopolitan approach: collecting just a small number of samples from a larger number of rivers. For the many global rivers with sparse or no monitoring, well-calibrated satellite-derived measurements may provide critical insight into patterns and trends in riverine suspended sediment transport that cannot be obtained using conventional methods due to logistical and financial constraints.
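
As an illustration of this calibration idea, the sketch below fits a simple linear correction from a handful of paired measurements. The five sample values are invented for demonstration, and a plain least-squares line is an assumption; the project's actual calibration may be more sophisticated:

```python
def fit_linear(xs, ys):
    # Ordinary least-squares fit of ys = a * xs + b.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical paired measurements for one river: satellite-derived
# vs. in-situ suspended sediment concentration (mg/L).
satellite = [120.0, 85.0, 200.0, 150.0, 60.0]
in_situ = [100.0, 70.0, 170.0, 128.0, 48.0]

a, b = fit_linear(satellite, in_situ)

def calibrate(sat_estimate):
    # Apply the river-specific correction to any satellite estimate.
    return a * sat_estimate + b
```

Once fitted from a few ground samples, the correction can be applied across the full satellite timeseries for that river, which is what makes so few in situ measurements so valuable.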

We are working to create a web platform where anyone can simply click on any major river on a map and a timeseries of suspended sediment transport is automatically created. Further, we will enable anyone to submit concentration measurements from any location along a major river, and those data will be automatically used to update and improve the accuracy of the satellite-based estimates. There are many challenges to this, not the least of which is quality control. But the potential payoff is huge. Currently, we have good data on rivers in most of the developed world, but we know little about rivers in the developing world. This platform would provide an efficient method for improving our understanding of rivers in these parts of the world with a minimum of effort and cost.

Psychological and Brain Sciences

Caroline Robertson

Deep Learning Approaches to Identifying Signatures of Autism in Real-World Behavior



Autism is estimated to affect 1 in 59 individuals in the United States today, but little is known about its neurobiology. Scientific progress is limited by a lack of strong, objective phenotypic markers of the condition. Recently, my lab and others have identified promising markers of autism in the domain of visual perception using in-lab tasks, but little is known about how visual symptoms manifest in real-world contexts.

Here, I propose to pilot a new approach to develop scalable, objective measures of autism in naturalistic visual behavior. Specifically, I propose to combine eye-tracking, immersive virtual reality (VR), and computational approaches to: (1) identify the key signatures of autistic visual behavior in complex, real-world environments, and (2) link these signatures to particular stages of neural processing using computational approaches. Going forward, we hope that these studies will pave the way for potential screening tools for autism and advance our understanding of the mechanistic underpinnings of the condition.  

French and Italian

Damiano Benvegnu*

Digital Pilgrims: Thick-Mapping the Smith Pond Shaker Forest



My project is an exploratory attempt to use augmented reality to create a "thick-map" application for the Smith Pond Shaker Forest (hereinafter SPSF) that can provide both a dynamic historical record of the human-nonhuman interaction in the area and an opportunity for those who want to hike the property to be socio-environmentally engaged.

The 995-acre SPSF is a property near Mascoma Lake in Enfield, NH, owned and managed by the Upper Valley Land Trust. The now-forested and seemingly pristine property is the result of a unique socio-environmental interaction between the Shakers – a Christian sect founded in England that partially settled along Mascoma Lake at the end of the eighteenth century – and the surrounding landscape. The Shakers greatly manipulated the natural streams and the wetland on this property to provide water for their settlement and mills. Not only did they create Smith Pond by building the original Shaker Dams, but several other remnants of the interaction between the religious community and the nonhuman environment remain visible on the property today in varying states of preservation.

The goal of this project is to provide users with an iOS "thick-map" application that allows them to experience the SPSF as a set of interacting landmarks (or interpretative points of access) capable of narrating socio-environmental events in space and time, based on actors and actions both biotic and abiotic, while they are actually walking through the landscape. Designed in collaboration with the Upper Valley Land Trust, the SPSF augmented reality app would offer an opportunity to experience one's ecological positionality through alternative cultural and material narratives of socio-environmental engagement within a discrete area.

Environmental Studies

David Lutz

A Wireless Sensor Network for Monitoring Fine-Scale Forest Ecosystem Responses to an Invasive Insect Pest



White Ash trees are all around us. There are billions of them located throughout the Eastern United States, and in Vermont and New Hampshire roughly 1 out of every 20 trees is an ash. However, the number of ash trees in North America is declining at an alarming rate due to the spread of an invasive pest, the Emerald Ash Borer. When a stand of ash is infested by the Emerald Ash Borer, the mortality rate is generally 99%, and consequently many states have seen a near-complete eradication of their ash populations. Ash trees play a unique role in forest ecosystems due to their late spring leaf-out times and associations with specific understory plants. But since the spread of this insect is rapid, there has been little time for scientists to take pre-invasion measurements and understand the full ramifications of the elimination of ash trees from forest landscapes.

Dartmouth owns a wood lot in Corinth, Vermont that consists of a very high density of ash trees (80%) but that has not yet been invaded by the Emerald Ash Borer. We suspect that this stand will be invaded in the next few years, but in the meantime we have a unique opportunity to capture pre-invasion information about this threatened ecosystem. This project harnesses new computational technologies to capture such data, including a set of wireless sensor networks that operate autonomously and are controlled by open-source Arduino-based data loggers. These networks will collect information on air and soil properties that will help describe the drastic impacts of a biological invasion on nutrient and hydrological cycles. Additionally, we will utilize a set of bioacoustic recorders to collect information on bird populations to understand how changes in this forest type may affect bird species presence and absence. Once established, the network funded by this project will serve as a long-term data source for ecosystem science labs on campus, will provide the opportunity for interdisciplinary training for students that connects computing with the environmental sciences, and will generate data for the broader scientific community on a critically important environmental issue.

Women's, Gender, and Sexuality Studies

Jacqueline Wernimont*

Trace: A Project Designed to Understand Several Interrelated Features of a Small Contained "Smart" Community



"Trace" is a new multi-disciplinary and multimodal project sparked by student research questions articulated in the 2019 "Data and Bodies" course. Inspired by Charlotte Kuller ('20), we propose a project to explore the use of high performance computing, large-scale activity datasets, data privacy, and creative visualization. There is ever-growing interest in data ethics and privacy, as well as in robust and compelling data visualization, and this project thinks specifically about the affordances of media arts to render potentially sensitive data in ways that protect privacy. In the tradition of critical digital humanities and design, we plan to account for both the energy and currency costs of the project itself in order to ask, "Are the insights of this project worth the costs incurred?" While there is a significant body of literature that deals with speculative fiction, speculative design, and speculative engineering, our concept of "speculative accounting," which works to understand the costs and income streams of possible future data deployments, is novel.

This project uses a 118-million-record data set of anonymized activity in the Dartmouth ecosystem between June 2018 and December 2019. Using a variety of machine-learning tools and techniques, we will develop an interactive installation using a series of suspended, individually addressable LED lights. One of the affordances of the individually addressable lights is that we can create a dynamic, three-dimensional, but non-representational "map" of activity that people explore in both time and space. To help understand the costs of this work, we will synthesize the accountings from both prior phases to develop a speculative accounting methodology that allows both the program team and our viewers to evaluate the energy and fiscal resources used to generate work like "Traces." We plan to create a speculative "balance sheet" that will both render visible the known costs and speculate about future storage and usage costs for this kind of work. This will function as a prompt for viewers to consider the value of computational data analysis, large-scale data storage, and new media installations, a value that is often taken as a given.


Jeremy Mikecz

The Andes Mapped: Deconstructing Early Colonial Texts and Reconstructing Indigenous Geographies and Histories



This project proposes and applies a novel approach to rethinking early colonial history in the Americas. In particular, it asks: How did the social, political, economic, and environmental geography of Indigenous Peru change over two centuries (from Inka to Spanish dominion)? The increasing availability of digitized historical texts and the development of new digital text analysis tools offer the potential to revolutionize our study of the human past. To this end, I have created the Early Colonial Andes (ECA) text corpus, the first digital collection of many of the most important Spanish- and Andean-authored texts from the sixteenth- and seventeenth-century Andes. Constructing this corpus has involved digitizing texts and then encoding information in these texts using a semi-automated approach with Python and XML/TEI.

Ultimately, I will link this 'database' of approximately 200 texts with an online query tool as well as a digital mapping platform, allowing scholars from around the world to perform complex queries of this corpus and to create maps showing the spatial distribution of information from these texts. Scholars have not yet explored the potential of such an approach to rethink and re-imagine early modern encounters and Indigenous geographies. CompX funding is allowing me to complete the first phase of the ECA digital text corpus (the digitization and encoding of texts) and to begin the second (digital text analysis) and third phases (the building of the interactive, online version of this corpus, complete with query and mapping tools). This work will benefit not only scholars of early colonial Peru and Indigenous-colonial encounters more broadly, but also researchers desiring new, digitally-aided methods for the analysis and recovery of hidden patterns found within the millions of texts appearing in the world's rapidly growing digital archives.


Julie Hruby

Associating Fingerprint Patterns with Age and Sex: A Quantifiable Approach



A wide range of prehistoric and ancient Greek ceramic objects, including vessels, ceramic sculpture, seal impressions, and writing tablets preserve the fingerprint impressions of their producers. Traditionally, archaeologists have matched prints in an effort to understand ancient labor systems, but more recently, we have also begun to ask a much wider range of questions. The ages and sexes of producers are among those questions, but so far, the techniques that have been used to reconstruct those factors have typically been able to work on the level of populations rather than individuals, and they have also been subject to challenges posed by differential clay shrinkage rates.

The current project will improve the accuracy of sexing and aging producers of ancient Greek artifacts by using fingerprints that were accidentally impressed in objects made by modern Greek ceramicists as a reference sample. Fingerprint impressions from modern Greek adult potters of known sexes and age grades have already been collected and scanned with a high-resolution 3D scanner, and a Greek attorney has assisted us in complying with both European Union and Greek law as they relate to the collection of prints from juveniles. We have also begun the process of collecting and scanning modern prints from juveniles, and we will begin archiving our raw data.


Justin Mankin*

National Attribution of Historical Climate Damages: Data in Service of Climate Litigation



Quantifying which nations are culpable for the economic impacts of anthropogenic warming is central to informing climate litigation and claims for restitution for climate damages. However, for a country seeking legal redress, the magnitude of economic losses from warming that are attributable to individual emitters is not known from existing work, undermining its standing for climate liability claims. We have addressed this gap, combining historical data with climate models of varying complexity in an integrated framework to quantify each nation's culpability for historical temperature-driven income changes in every other country. By linking individual emitters to country-level income losses from warming, our results provide critical insight into climate liability and national accountability for climate policy. Based on our collaboration with the Sabin Center for Climate Change Law at Columbia University, it is essential to publicly serve these data, which are the first of their kind, to support the domestic and international legal communities pursuing ongoing and future climate litigation.

Our project has two goals: (1) build a Dartmouth-based website to publicly serve the data we have generated from this project to the legal community and (2) seed the next steps of our work furthering the attribution of climate damages. Our computational work provides evidence for liability claims of the monetary losses countries have suffered based on the actions of specific emitters. Crucially, the distribution of these impacts is highly unequal, emphasizing the inequities embedded in the causes and consequences of historical warming. We will serve these data, and the science that developed them, in a transparent and interpretable manner, while positioning ourselves to extend our computational accounting framework to other actors, such as individual firms.


Katherine Mirica

VeRidium: A Virtual Reality Platform for University Science



Connecting the macroscopic and atomic dimensions represents one of the conceptual challenges that students face in chemistry and physics courses. In introductory courses, the concepts of quantum mechanics, crystal symmetry, and atomic orbitals pose challenges to students, as they represent a departure from classical continuum models to describe quantum phenomena that are not easily visualized in the macroscopic world. In more advanced courses that go beyond atoms, many challenges in the study of molecular and materials structures arise from the difficulty of applying conventional methods of visualization to three-dimensional (3D) models that are intrinsically inaccessible within the macroscopic world. To overcome this challenge, a Virtual Reality (VR) experience can make it possible to go beyond the two-dimensional (2D) confines of the printed page or a screen to reveal the key features of the quantum world in 3D.

The overarching goal of this project is to develop a process for using VR to aid the visualization of atoms, molecules, and materials at university-level depth. The use of VR in Chemistry and Physics is not meant to replace real hands-on laboratory experience, but to enable students to grasp abstract concepts that typically require high cognitive load and strong spatial cognition, where conventional visualization resources often prove inadequate. Our approach toward this goal is organized into two specific aims: (1) partnership with the student-run Digital Applied Learning and Innovation (DALI) Lab at Dartmouth to develop an independent VR app for Oculus Quest to assist students in learning the solid-state structure of materials; and (2) implementation of VR-based modules in courses to assess the efficacy of VR-based visualization on the student learning experience, compared to traditional methods. Despite the enthusiasm for the use of VR in science education, there is currently limited information on the efficacy of VR in aiding 3D molecular and material visualization, a lack of information about best practices, no clear, scalable process for broad implementation and dissemination (e.g., cost, inclusion, etc.), and a lack of VR-based educational resources at university-level depth. This Neukom CompX grant will address these gaps in VR-based activity development and implementation.

Geisel School of Medicine

Louis Vaickus*

Student Computational Resources for Deep Learning in Digital Pathology



The Pathology Virtual Laboratory is entering its third year with a new crop of 35 interns. This year we have focused on diversity of applicants at all levels and have recruited from suburban, urban, and rural high schools with an emphasis on underserved students. We have 6 students from previous years returning as "near peer" mentors to serve as team leaders. We have additionally added several new faculty mentors.

Dr. Vaickus's research focuses on the integration of quantitative computational techniques into the practice of digital pathology, with a focus on deep learning and digital histomorphometry. He is a founding member of the Department of Pathology and Laboratory Medicine's (DPLM) new research and educational program: Emerging Diagnostic and Investigational Technologies (EDIT). The EDIT lab's computational research and educational arm is focused on creating a high-performance infrastructure for use by faculty and students. DPLM has invested significant capital in acquiring a state-of-the-art computational resource (QDP-Alpha) on the Research Computing Discovery cluster. This resource, combined with the department's vast whole slide imaging and genomics database, will enable our researchers and students to perform high-quality machine learning investigations in an efficient manner. EDIT is also committed to creating a "virtual laboratory" to allow researchers of all skill levels to fully utilize QDP-Alpha. This environment will be secure, support experimentation and prototyping, and offer robust visualization capabilities, and it will directly interface with QDP-Alpha to allow for quick deployment of mature algorithms and very large jobs. The Neukom CompX award will allow for the rapid creation of EDIT's virtual lab and ensure that our first crop of remote interns will have world-class resources. Dr. Vaickus and the EDIT team are excited to see what our students will create.

Native American Studies

Michelle Brown*

Eel Elder VR and The Txitxardin Project



Eel Elder VR is a core component of The Txitxardin Project: a threefold approach to (re)coding Euskaldunak-eel relations through art and research. This VR experience came about through my ongoing participation in the Indigenous Protocol and Artificial Intelligence Working Group and my previous background as an invited Oculus LaunchPad participant. Eel Elder VR has two main goals: first, to (re)code players and readers. (Re)coding here is used to indicate hacking us as Western media users – altering how we have seen and consumed eels to (re)orient people to actively care for eels, not just "consider" them, as posited by recent extinction-as-spectacle publications. Secondly, this project takes up how AI and ancient eel relational practices might shape each other when both are framed as advanced technologies and these eel-coded AI are taken up as kin in their own right – entities that have networks of relations beyond those we currently mark as meaningful or important.

Hack and (re)code here refer to human and nonhuman cognition and computation – our ancient and ongoing way of defining 'being Basque' is to speak the language, to allow it to carry and shape us. How it influences our thinking, and how it prescribes our actions toward the world around us, is central to who we are. Moving away from more recent blood quantum and heteronormative reproduction as 'proof' of belonging, this VR project invokes teaching and learning Euskara (the Basque language) as a core component of the AI-eel-VR experience's power to code Euskara and non-Euskara players in generative ways. This is not to say that our way of caring for our eels is the only one, nor are all eels universalized. This is for a specific lineage of eels, and our ways of tending to them are infused in the project components.

The second goal of the Eel Elder VR project involves taking up these relational practices as advanced technologies: how we care for eels and AI, and how AI and eels relate to and nourish us. How can AI be a thread of these networks of relations? How can it – and this project – preserve and (re)new Iparralde relational practices with our eel kin as climate change and overharvesting may mean the eels' physical extinction? Paraphrasing Melanie B Taylor in Afterlives of Indigenous Archives: Essays in honor of the Occom Circle, this is my way of populating eels' fertile afterlives in physical and digital spaces. This project involves thinking with eels to shape AI systems and recognizing the rich, active archive of their bodies and their relations into renewed afterlives. I cannot imagine futures without them: this is one way of sequencing them into AI-infused afterlives, allowing them to hack, wriggle, and corrupt highly corrupt systems and structures to thrive in unexpected places and spaces.

Physics and Astronomy

Miles Blencowe

Quantum Music



Most of us have an appreciation of music; a familiar song or part of a symphony can elicit an immediate emotive response on the part of the listener, without the need to understand any underlying musical theory. On the other hand, in the apparently far-removed field of quantum physics, often-discussed notions such as particles behaving as quantum waves or being quantum entangled with each other may evoke a sense of curiosity and wonder, but without conveying much sense of their meaning. We have recently embarked on a multifaceted program that sets out to explore quantum physics through 'listening' and 'playing', with the dual goals of making quantum physics more accessible to the non-expert and of ultimately developing new types of musical instruments and music based on quantum principles. The program involves a collaboration among quantum physicists, philosophers, and musicians, and includes both graduate and undergraduate students.

In this project we will convert data sets obtained from various experimental quantum physics groups into audio signals (a process known as sonification) and compose a quantum soundscape from the resulting audio. We will also play music with actual quantum systems: various companies are currently building quantum computers and making them remotely accessible to researchers for programming purposes. We will develop software that sonifies the quantum computer's voltage signals as a given quantum program is executed by the musician-programmer. Through this project we will explore the question: what can we learn about a quantum system's dynamics by listening to it? We also aim to identify the elements of a theory of quantum music and, more broadly, to make quantum physics more accessible through listening.
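As a minimal, hypothetical sketch of the sonification step (the pitch range, tone duration, and sample rate below are illustrative choices, not the project's actual design), a data series can be turned into audio by assigning each value a sine tone whose frequency interpolates linearly over a chosen range:

```python
import math
import struct
import wave

def sonify(values, duration=0.25, sample_rate=44100,
           f_lo=220.0, f_hi=880.0):
    """Map each data value to a sine tone: the smallest value gets
    pitch f_lo, the largest f_hi, with linear interpolation between.
    Returns a list of float samples in [-1, 1]."""
    v_min, v_max = min(values), max(values)
    span = (v_max - v_min) or 1.0  # avoid division by zero for flat data
    samples = []
    n = int(duration * sample_rate)  # samples per tone
    for v in values:
        freq = f_lo + (v - v_min) / span * (f_hi - f_lo)
        for i in range(n):
            samples.append(math.sin(2 * math.pi * freq * i / sample_rate))
    return samples

def write_wav(path, samples, sample_rate=44100):
    """Write float samples as a 16-bit mono WAV file."""
    with wave.open(path, "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(sample_rate)
        w.writeframes(b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples))

# Toy "measurement" series (illustrative data, not real quantum output)
data = [0.1, 0.5, 0.9, 0.4, 0.2]
audio = sonify(data)
write_wav("tones.wav", audio)
```

Real sonification pipelines would of course smooth tone transitions and map richer features (amplitude, timbre) to the data, but the core idea is this kind of value-to-pitch mapping.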

Art History

Nicola Camerlenghi

The Interactive Nolli Map Website of Rome - A New Edition 



The goal of this project is to revitalize, permanently preserve, and ensure universal open access to the Interactive Nolli Map Website, which will go offline once Flash-based content is removed from browsers on December 31, 2020. The pioneering but nearly 15-year-old website is widely considered an authoritative resource on the architect and surveyor Giambattista Nolli (1701-56) and his 1748 masterpiece La Pianta Grande di Roma ("The Great Plan of Rome"). That map is a milestone of cartography: it represents almost eight square miles of the densely built city within the ancient walls as well as the surrounding terrain, identifies 1,320 sites of cultural significance, and renders hundreds of building interiors with detailed plans.

While the Nolli Map website was an innovative design in 2005, won many awards, and attracted over 325,000 unique visitors, advances in geospatial web technology offer new opportunities to reinvigorate this important resource. Our new site will feature a redesigned user interface and interactive map consistent with current web design, coding, and accessibility standards. We will transform the graphical presentation of the map into a geospatial web app, allowing us to overlay a trove of already georeferenced data spanning from antiquity to the modern day. At the same time, we intend to publish the geographic features derived from the Nolli map as a set of open geodata complying with OGC (Open Geospatial Consortium) standards. The project is being undertaken in collaboration with scholars at the University of Oregon and Stanford University.
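As an illustrative sketch of what OGC-friendly open geodata can look like (the helper function, site record, and coordinates below are hypothetical examples, not the project's actual dataset), features derived from the map could be published as a GeoJSON FeatureCollection, the common interchange format consumed by geospatial web apps:

```python
import json

def nolli_feature(nolli_id, name, lon, lat):
    """Package one indexed map site as a GeoJSON Feature:
    a point geometry in WGS84 (longitude, latitude) plus
    descriptive properties."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {"nolli_id": nolli_id, "name": name},
    }

# Illustrative record; id, name, and coordinates are placeholders.
collection = {
    "type": "FeatureCollection",
    "features": [
        nolli_feature(1, "Porta del Popolo", 12.4763, 41.9115),
    ],
}

geojson = json.dumps(collection, ensure_ascii=False)
```

A file in this shape can be loaded directly by mapping libraries and served by OGC-compliant feature services, which is what makes it attractive as an open-data distribution format.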

Computer Science

Wojciech Jarosz*

Computational Appearance Matching of Translucent Materials



Most of the world around us is composed of materials that are translucent: that is, materials where light does not simply bounce off a surface, but penetrates into the material before scattering around volumetrically. This includes things that we immediately recognize as volumetric in nature, like clouds, fog, or smoke, but also materials that we might otherwise consider "opaque" or "solid," such as the foods we eat, the liquids we drink, and our own skin.

In this project we will develop new theoretical models and computational algorithms for predictively simulating and matching translucent material appearance. This problem lies on the critical path of many disciplines of great socio-economic importance. We will investigate fundamental problems spanning computer graphics and 3D printing (can we render realistic images of clouds, or 3D print an object that faithfully matches a target translucent appearance?), and partner with experts in atmospheric sciences (can we accurately deduce atmospheric content from satellite images of clouds?) and nuclear engineering (can we more accurately simulate particles in a nuclear reactor to ensure its safe operation?).
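To illustrate the kind of volumetric light transport that underlies translucent appearance (a textbook sketch, not the project's actual models), the code below estimates how much light crosses a homogeneous slab without interacting, by Monte Carlo free-flight sampling, and compares it against the Beer-Lambert closed form:

```python
import math
import random

def transmittance_mc(sigma_t, depth, n_samples=100_000, seed=42):
    """Estimate uncollided transmittance through a homogeneous slab of
    thickness `depth` with extinction coefficient `sigma_t`. Free-flight
    distances are drawn from the exponential distribution,
    d = -ln(1 - u) / sigma_t, the standard distance-sampling scheme in
    volumetric rendering."""
    rng = random.Random(seed)
    escaped = 0
    for _ in range(n_samples):
        d = -math.log(1.0 - rng.random()) / sigma_t
        if d >= depth:  # the photon crosses the slab uncollided
            escaped += 1
    return escaped / n_samples

sigma_t, depth = 2.0, 0.5
estimate = transmittance_mc(sigma_t, depth)
analytic = math.exp(-sigma_t * depth)  # Beer-Lambert law
```

In a full simulation, photons that do collide would go on to scatter in new directions according to a phase function; this exponential sampling of interaction distances is the building block those simulations repeat millions of times.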