Summer 2018: Researching ALS across Different Model Organisms

By Sophia Nelson

Amyotrophic lateral sclerosis (ALS), one of the world’s leading neurodegenerative diseases, destroys the nervous system in ways that remain largely unknown. Recent research has shown that a hexanucleotide repeat, GGGGCC, in intron one of c9orf72 is the most frequent genetic cause of ALS, with the number of repeats in affected patients ranging anywhere from hundreds to thousands. Although these repeat sequences lie in a non-coding region of the gene, they still appear to contribute to toxicity through repeat-associated non-ATG translation, which can begin at any point without the presence of the start codon, ATG. This unconventional translation allows the repeats to be translated into five different dipeptide repeat proteins (DPRs): GA, GP, GR, PA, and PR. One of these proteins, GA, forms paranuclear amyloidogenic aggregates that have been found in the brains of human ALS patients. However, the direct role of GA, or any DPR, in disease pathology and toxicity is not yet known, particularly because the toxicity of GA varies heavily depending on the model system in use.

As a Velay Scholar this past summer, I got the chance to work with professors Robert Fairman and Roshan Jain to investigate the protein GA through a comparative study characterizing its aggregation and toxicity in three model systems: worms (C. elegans), zebrafish (D. rerio), and fruit flies (D. melanogaster). The bulk of my work was in worms and flies, as my fish had not yet grown large enough for testing! In each of the two model systems, I expressed GA in the neurons (ALS is known to attack motor neurons, so neuronal expression is important to study) and then performed behavioral and confocal imaging analyses for comparison. In the imaging studies, I was looking for two things: first, the large paranuclear puncta that are a hallmark of GA aggregation; and second, the localization pattern within each organism. Behaviorally, I was trying to determine whether GA expression within neurons was toxic by comparing organism performance in simple behavioral assays with and without GA. For worms, this behavior was thrashing; for flies, it was the ability of larvae to crawl. I learned so much throughout the summer! I dissected fruit flies (both larvae and adults) and removed and imaged their brains, which are barely the size of a poppyseed. I also learned behavioral testing across all three organisms, along with PCR genotyping, staining and imaging techniques, confocal microscopy, and the beginning stages of biochemical assays, such as lysate preparation.

Dissection of fruit fly larvae showed that GA was heavily concentrated within their developing brains, particularly in the progenitor of the neural column and in the neuronal ganglia that develop in association with the eyes. The confocal images of my fly brains were easily the coolest result of my whole summer! In worms, GA puncta are found throughout the body, heavily concentrated in the brain and along both the ventral and dorsal nerve cords. Behavioral assays indicated that the presence of GA had a negative effect on C. elegans thrashing. Larval crawling data were collected, and differences between controls and GA-positive larvae were found, but these data are still being followed up on to conclusively determine any effects. Biochemical results, obtained through SDD-AGE, will be gathered this spring. Overall, this study indicates that the form of GA aggregation is consistent across species despite slight differences in localization, and that the presence of GA appears to have a negative effect on behavior, suggesting it may play a role in disease toxicity and should be tested further.

Performing these experiments was an incredible experience! My mentors were both amazing and taught me so much. It was my first time working in a research lab, and as a biology major with minors in neuroscience and health studies, the experience has only confirmed my interests further. After graduation, I now plan on going to medical school and graduate school for an M.D.-Ph.D. so that I can continue to perform research while also gaining a clinical perspective and directly helping patients. I am so excited to apply the knowledge I gained this summer (and will continue to gain in my last year at Haverford) in my future career, and to work on a topic that has the potential to have a direct impact on many people’s lives.

Dissected and removed fruit fly larval brain neuronally expressing the protein GA. The middle section is the progenitor of the neural column.

Summer 2018: Mitigating ACEs at Vanderbilt Medical Center

“It’s easier to build strong children than to repair broken men.” – Frederick Douglass

Adverse childhood experiences (ACEs) come in many shapes and forms, including neglect, abuse, and household dysfunction. But how influential are they in a child’s health outcomes? Research has repeatedly shown that ACEs can affect brain health significantly enough to contribute to cognitive impairment, risky behavior in adult life, and long-term risks of disease and mental illness. That leads to the next question: how do we mitigate ACEs? That’s where I come in.

ACEs can range from parenting-related to environmental.

This summer, I’m working alongside Dr. Seth Scholer, a pediatrician at Vanderbilt University Medical Center Children’s Hospital. Dr. Scholer has spent over a decade conducting research on ACEs and how to successfully assess and alleviate them through pediatric primary care. With funding from the state of Tennessee, my research this summer has mostly focused on a randomized control trial (RCT) in which we hope to demonstrate that a brief parenting intervention can reduce unhealthy parenting tactics and thus nurture brain health in the clinic’s patients.

The utilization of an ACEs Screening Tool can improve health outcomes of children by identifying and addressing ACEs early in life.

My personal research project this summer is definitely simpler than an RCT, but it has its own challenges. All previous research utilizing ACEs screening tools has taken place in pediatric clinics associated with research institutions such as Vanderbilt. The next step, however, is employing a screening tool statewide, which requires additional research into how to implement the screening tool in private medical practices.

Therefore, for my summer research project, I have been implementing an ACEs Algorithm and screening tool at a private pediatric primary care clinic. The screening tool is a quick survey that measures a child’s household and environmental stressors and the degree to which their parent(s) use healthy discipline strategies. The ACEs Algorithm helps health-care providers interpret their patients’ scores and flags when children are at risk of ACEs, from low to high. This is the first research study of its kind, and it requires working hands-on with the doctors and nurses at the private clinic to maximize the efficiency and effectiveness of the screening tool. Overall, this project has been a great opportunity to work on the front lines of ACEs research.

Health care providers use this ACEs Algorithm to interpret a child’s parenting-related ACEs and environmental ACEs (or other childhood stressors), after their caregiver completes a short ACEs Screening Survey. I worked with Dr. Scholer on the development of this algorithm throughout the summer, and this image is our final result.

As a Psychology major with minors in Neuroscience and Health Studies, I’ve found that this research experience fits perfectly into the little niche formed at the intersection of my three fields of study. A typical day for me involves lots of patient/provider interaction and data management, with some manuscript and literature review writing in between. This has helped me build concrete clinical research skills that are hard to learn in a classroom. Furthermore, I’m ecstatic about being able to work on a research topic that is having a direct impact on people’s lives.


a nightmare about an important species which fails a quality control criterion


B. rapa is an important autopolyploid plant species in my research. I had spent a huge portion of my research time selecting autopolyploids from among polyploid plant species while eliminating allopolyploid ones. At the height of my literature search, I woke up from this very nightmare, which could potentially have spoiled all my previous effort. To my relief, the dream was only a dream…


art: June 18, 2018; text: July 5, 2018

How I Got into Whalen’s Lab

As another internship season approaches, many friends are asking me how I got my summer 2017 biology research position without having taken any biology at Haverford. [1]

The process was not straightforward. I set out considering a major at Bryn Mawr, then halfway through switched to Swarthmore, and ended up determined: “If biology, then Haverford.” Last summer was my chance to get a taste of biology at Haverford.

Once the decision was made, I immediately reached out to Professor Whalen, whom I had chatted with at academic tea, a casual gathering at the beginning of every semester where representatives of all departments answer students’ questions. I had also browsed professors’ Haverford webpages, where their CVs and research directions are listed. Although my tentative major was biology, I could not understand the content of any project our biology professors listed. Still, marine science, drug resistance, and a photo of Professor Whalen smiling against a solid blue ocean-sky background attracted me above all.

I sent out my first email on Dec. 22, 2016, and went off on a trip to New Haven. When I came back, a reply had been sitting in my inbox since the day I left. What a quick response! To this day, Professor Whalen’s efficiency still surprises, motivates, and occasionally intimidates me. Back then, I immediately arranged to meet with my first-ever mentor.

She showed me all her undertakings and some summer opportunities, and asked me to show up at lab meetings the coming semester. Since, as an international student, I could only work on campus, we decided to begin with the “bacterial response to a chemical” project ongoing in her lab. That was it! I became part of Whalen’s lab. When the summer scholarship application season came, she encouraged me to apply (for the Kovaric Fellowship [2]). When I didn’t get it, she applied for funding from the Provost’s office for me so that I could get paid. Everything was settled as early as March 18, 2017, after which I just sat back and pictured the richness of the coming summer.


Takeaways for new applicants:

  1. Start collecting information early; browse professors’ and institutions’ webpages, and from there find out more
  2. After narrowing down your choices, reach out (sometimes it takes a while to receive a reply; don’t feel discouraged or overly anxious)
  3. Academic tea is a great space to ask any lay (or expert) questions about subjects, courses, majors, internships, and more; professors are there for you
  4. Don’t be afraid if a professor’s research seems hard to understand!


[1] Only sophomores and above can take biology courses at Haverford.

[2] For funding opportunities, please see:

Summer Research Report: Predicting March Madness

Last summer I received funding from a Velay Fellowship to do sports analytics research at Davidson College in North Carolina, and my main project was developing an algorithm to predict outcomes in the NCAA Division I March Madness Men’s Basketball Tournament. I finished the project in August and was able to backtest five years of tournaments, demonstrating that my algorithm would have outperformed other methods (including FiveThirtyEight, Power Rank, and numberFire), but I knew the first real test would come now, in March, as my algorithm tackles a tournament field in real time for the first time. Below is the table of predicted outcomes it produced, along with some further explanation and insight into what I’ve learned.

What am I looking at?

Each row of this table represents one of 64 teams in the 2018 March Madness Tournament. Choose a team in the first column and move to the second column to see the probability that that team will appear in the second round or, equivalently, the chance they will win their first matchup. The probabilities range from 0 to 1 with, for example, .54 indicating a 54% chance of the team successfully making it to the second round. The following columns will give the chance that the team appears in each of the subsequent rounds of the tournament, and the final column gives the probability that the team will be named champions. The chance any given team will appear in the second round (Round of 32) is greater than the chance they will appear in the third round (Sweet Sixteen) which is in turn greater than the chance they will appear in the fourth round (Elite Eight) and so on. According to this table, Virginia has the highest chance (99.2 percent) of winning their first game and making it to the second round and a 28.66 percent chance of winning the whole tournament.

What exactly determined these probabilities?

I pulled data and used JMP statistical software to look for correlations between over 50 team and player stats and game outcomes. Sometimes there is a clear correlation that can easily be modeled by a linear regression: for example, points per possession is strongly correlated with winning games. Other statistics are more complicated: for example, player experience doesn’t always correlate strongly with success. The best teams are often either heavy with “one-and-done” freshmen or loaded with experienced upperclassmen. JMP tools showed that the relationship between player experience and success was best modeled by a quadratic, not linear, equation. It was also important to be careful with team statistics that were highly correlated with each other. For example, my team’s turnovers and my opponents’ steals essentially measure the same component of the game; including both statistics would risk overweighting the importance of that component.
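The quadratic-versus-linear distinction above can be illustrated with a small sketch. The data here are synthetic (not the actual JMP dataset): win rate is generated U-shaped in average player experience, mimicking teams that succeed at either the freshman-heavy or upperclassman-heavy extreme, and a degree-2 polynomial fit recovers it far better than a straight line does.

```python
import numpy as np

# Synthetic illustration (not the real data): the strongest teams sit at the
# freshman-heavy and upperclassman-heavy extremes, so win rate is U-shaped
# in average player experience.
rng = np.random.default_rng(0)
experience = rng.uniform(0.0, 4.0, 200)              # avg. years of experience
win_rate = 0.45 + 0.08 * (experience - 2.0) ** 2 + rng.normal(0.0, 0.02, 200)

rmse = {}
for degree in (1, 2):                                # linear vs. quadratic fit
    coeffs = np.polyfit(experience, win_rate, degree)
    residuals = win_rate - np.polyval(coeffs, experience)
    rmse[degree] = np.sqrt(np.mean(residuals ** 2))

print(f"linear RMSE:    {rmse[1]:.4f}")
print(f"quadratic RMSE: {rmse[2]:.4f}")
```

The quadratic fit’s error falls to roughly the noise level, while the linear fit can only draw a flat-ish line through the U and misses both extremes.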

I used a Python program to implement the logistic regression I designed to predict every possible tournament matchup. In later rounds of the tournament, it is important to note that we are dealing with compounding probabilities. A team could face any of several possible opponents in a later round, so the probability it appears in the next round is the sum, over each potential opponent, of the probability the team has made it to that game, times the probability that opponent has made it, times the probability the team wins the matchup. The final results were formatted in Tableau to create the above table.
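That compounding calculation can be sketched in a few lines of Python. The pairwise win-probability matrix `p` here is a hypothetical stand-in for the logistic-regression matchup model; the propagation over bracket blocks is the general single-elimination logic described above.

```python
import math

def round_probabilities(p):
    """p[i][j] = probability that team i beats team j (a stand-in for the
    matchup model). Teams are listed in bracket order. Returns reach, where
    reach[r][i] is the probability team i appears in round r (round 0 is the
    first round, so reach[0] is all 1s)."""
    n = len(p)
    rounds = int(math.log2(n))
    reach = [[1.0] * n]
    for r in range(rounds):
        block = 2 ** r                      # size of each settled sub-bracket
        nxt = [0.0] * n
        for i in range(n):
            # Potential opponents occupy the adjacent block of `block` slots.
            start = (i // (2 * block)) * (2 * block)
            if i < start + block:
                opponents = range(start + block, start + 2 * block)
            else:
                opponents = range(start, start + block)
            # Sum over opponents: P(i got here) * P(opp got here) * P(i wins)
            nxt[i] = reach[r][i] * sum(reach[r][j] * p[i][j] for j in opponents)
        reach.append(nxt)
    return reach

# Sanity check: four evenly matched teams, 0.5 chance in any game.
even = [[0.5] * 4 for _ in range(4)]
reach = round_probabilities(even)
print(reach[1])   # each team reaches the final with probability 0.5
print(reach[2])   # and wins the two-round tournament with probability 0.25
```

A useful invariant for checking a table like the one above: within any round, the "reach" probabilities must sum to the number of game slots in that round (and the championship column must sum to 1).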

How is this model different than other prediction methods?

There have been many, many previous efforts to correlate team and player statistics with winning games and to use regressions to predict future games. For the most part, the quality of the two teams (as dictated by their season stats) does a good job of indicating who is likely to win, but sometimes teams with worse stats beat teams with better stats, and sometimes those results are predictable. Most sports analysts would call those games “bad matchups.” My go-to example is Villanova and Butler. Over the past four years, Villanova has maintained a stronger statistical profile and consistently placed well above Butler in rankings and polls, yet dropped three games in a row to Butler in 2017. That type of result inspired me to look for correlations between two teams’ stat differentials and the result. Instead of a regression that asks “how likely is a team this good to beat a team that good?” I wanted a regression that asks “how likely is this team to win against a team that’s this much better at shooting free throws and this much worse at causing turnovers?” If another team popped up in the tournament that was clearly statistically inferior to Villanova but strong in the same categories as Butler, my algorithm would have a better chance of picking up that potential upset.

Why publish probabilities and not just predict winners?

College basketball is inherently unpredictable, but analysts have shown both success and improvement. There certainly are methods that provide vast advantages over a 50-50 coin toss, and some prediction algorithms have demonstrated upwards of 70 percent accuracy over tens of thousands of games. With better data and methods, accuracy has improved and will likely continue to, but the consensus is that the cap is well below perfection. There will never be a way to fully account for the freak accidents, the emotions, the technical failures, and the other unquantifiables that can affect the outcome of a game. I’m personally inclined to believe that college basketball is no more than 80 percent predictable. A list of only predicted winners would undoubtedly contain incorrect results, with no way to help you identify which those might be. Publishing a list of probabilities instead gives you an idea of which games are more competitive and likely to go either way.

How do I turn this information into a bracket?

The simplest way to translate this information into picks for your bracket is to advance all the teams with a probability greater than .50 (fifty percent) of appearing in the second round, then advance all the teams with a greater than .50 probability of making the third round, and so on.

Will this deliver a perfect bracket? Almost certainly not. Even if every probability were spot on (i.e., every team the algorithm gives a .25 probability of advancing actually has exactly a 25 percent chance), the chance that the more likely team would win every matchup would still be about 1 in a couple million. This table of probabilities will probably favor more than a couple of losing teams, and you are smart enough to pick some of those games. Perhaps a team is favored but its best player has a nagging injury and has under-performed in the last few games, or perhaps a team isn’t favored but has just been gifted a tournament location 20 miles from campus, giving it a pseudo-home-court advantage. Another thing to consider is the value of predicting upsets, which many bracket contests reward with bonus points. It could be smart to bet on a 12 seed that is given a .4 probability, because the expected return is higher than that of the safer 5 seed (compute .4 x reward for picking the upset versus .6 x reward for picking the winning high seed). There’s a lot to think about, and the optimal way to treat this table is as a tool, not an authority.
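The 12-seed example works out concretely like this. The point values below are hypothetical (scoring rules vary by bracket contest); the point is only that a bonus for upsets can make the lower-probability pick the higher-expected-value one.

```python
def expected_points(win_prob, base_points, upset_bonus=0.0):
    """Expected bracket points from one pick, under a hypothetical scoring
    scheme where a correct upset pick earns a bonus on top of base points."""
    return win_prob * (base_points + upset_bonus)

# The 12-seed upset versus the safer 5-seed, assuming a made-up scheme of
# 10 points per correct pick plus an 8-point upset bonus:
safe_pick = expected_points(0.6, base_points=10)                  # 0.6 x 10
upset_pick = expected_points(0.4, base_points=10, upset_bonus=8)  # 0.4 x 18

print(f"5-seed expected points:  {safe_pick}")
print(f"12-seed expected points: {upset_pick}")
```

Under this (assumed) scheme the underdog pick is worth more on average even though it wins less often; with no upset bonus, the favorite would be the better bet.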

What about the play-in games?

The March Madness Tournament actually begins with 60 teams set and four spots to be filled by the winners of four play-in games. This aspect was very difficult to build into my prediction table because the four play-in winners are not slotted into the same places in the bracket every year (sometimes more than one is put into the same region and none into another). Thus, I wrote my program to handle a 64-team single-elimination tournament and predicted the play-in games separately, using the same logistic-regression-based algorithm. The predicted winners of the play-in games are among the teams included in the table.

EEG and Eye Tracking: My Summer in the Compton Lab

This summer I worked in Rebecca Compton’s cognitive neuroscience lab, studying the effects of mind wandering, the ERN (error-related negativity), and error-related alpha suppression. A majority of the summer was spent testing out and preparing the lab’s new Tobii eye-tracking system and working with Curry 7, new EEG software. After learning the two new programs, the other RAs and I began running participants for Becky’s grant proposal.

In Study 1a, we examined the differences in pupil diameter after correct and incorrect responses. Using E-Prime and Tobii eye-tracking software, we designed a Stroop task (a color-word task in which participants must press a key indicating the color of the word, not its meaning) to analyze correct and incorrect responses. The task consisted of 6 blocks of 72 trials each. Participants responded with a ~93% overall rate of accuracy. In this study, we found a significant main effect of period, F(2,18) = 27.5, p < .001, indicating that pupil diameter was greatest following the response button press. We also found an interaction effect of trial type by period, F(2,18) = 7.5, p < .005, indicating that pupil diameter was significantly greater for errors compared to correct trials during the post-response period. This study replicated prior findings of error-related pupil dilation.

In Study 1b, we combined eye-tracking and EEG methods to simultaneously examine pupil diameter and EEG oscillations following correct and incorrect responses. As in Study 1a, we found that pupil diameter was significantly greater for error vs. correct trials during the post-response period. There was a main effect of period, F(2,18) = 5.5, p < .02, and an interaction effect of trial by period, F(2,18) = 6.6, p < .008. Further, we found that there was more alpha suppression following error trials compared to correct trials, F(1,9) = 11.6, p < .01. These findings replicated Carp & Compton’s (2009) prior finding of greater alpha suppression following error than correct trials.

Following Studies 1a and 1b, this year we will be running participants for Study 1c. We hope to replicate these findings with a larger sample size and to examine between- and within-subjects correlations between error-related pupillary and EEG effects.

Thanks to Becky, Liz, Steph, and all of the Psych department and KINSC this summer for your support on our work!

Photoelasticity technique for studying a granular system — it reveals the “force chains”


Week 4 in the Harvey Lab: Calcium Confirmation

Xenic Results

By “calcium confirmation,” I mean that we have determined that intracellular calcium is NOT involved in our algicidal compound’s mechanism of action. Sometimes that’s how it goes in science, especially in a field where so little is known: you have to weed through many negative results to find the positive hits.

This is the case with my phytoplankton bioassays. Each new crude extract has the potential to contain an algicidal compound, but many crudes are not active against the phytos or even enhance phyto growth (which is cool too!).

The element of chance in my work is one of my favorite aspects. When the crudes are spun down, they look pretty much the same. But when cell counts for a particular crude come back 10 times lower than they started in an experiment, I think to myself, “Wow, whoa, what makes this one so special?” Another exciting aspect is the fact that we have the technology to find out exactly what compound makes them “special,” and then we can go a step further and determine exactly why they function in this “special” way in the ocean.

My campaign to elucidate this mechanism of action continues next week, as I test the cells for reactive oxygen species.