Due dates

  • In-class presentations: 10:15 - 11:30am, Thursday, October 21.
  • Final reports: 11:59pm, Monday, October 25.
  • Evaluation of team members: 11:59pm, Monday, October 25.

General instructions

Team work

You MUST work within your assigned teams.

  • Each team member must work on understanding the data, exploring the data, building the models and providing answers to the questions of interest. This should be the focus of the team meetings.
  • The other responsibilities must be divided according to the following designations:
    • Checker: Double-checks the work for reproducibility and errors. Also responsible for submitting the report and presentation files.
    • Coordinator: Keeps everyone on task and makes sure everyone is involved. Also responsible for coordinating team meetings and defining the objectives for each meeting.
    • Presenter: Primarily responsible for organizing and putting the team presentations together.
    • Programmer: Primarily responsible for all things coding. The programmer is responsible for putting everyone’s code together and making sure the final product is “readable”.
    • Writer: Prepares the final report.
  • All teams must rotate all designations from Team Project 1. That is, each role from Team Project 1 must be handled by a different team member for this project.

GitHub

Each MIDS team MUST collaborate using GitHub. This is not compulsory for non-MIDSters; however, I would recommend collaborating using GitHub as well. A blank repository has been created for this project. Follow this link: https://classroom.github.com/a/LSi0yemR to gain access. You should already know how to clone the repository locally once you gain access. The first student to accept the invitation within each team will be responsible for creating the team name. All other members of the team will be able to join the team once the first person has added the team name. You should only join your pre-assigned team. Feel free to create other folders within the repository as needed, but you must push your final reports and presentation files to the corresponding folders already created for you.

Presentations

Each team will have 6 minutes to present their findings in class. Feel free to get creative with the presentations; fun animations are welcome!

  • Teams 1, 3, 5, and 7 should only present findings and results for Part I of this project, while Teams 2, 4, 6, and 8 should only present findings and results for Part II of this project.
  • The presentation for each team should only cover a brief introduction as well as your most interesting and important findings. At least one EDA plot MUST be included.
  • If you choose to use presentation slides, you should create them with the time limit in mind. You should consider making between 5 and 7 slides (excluding the title slide), so that you have approximately one minute to get through each one. You are free to create your presentation slides using PowerPoint, LaTeX or any other application you choose.
  • The order of the in-class presentations will be randomized for each part of the project; each team is expected to come to class fully prepared to present first.
  • Each team’s presentation files must be submitted on Sakai by 9:00am on Thursday, October 21. The checker should upload the files to their Dropbox folder on Sakai.

Reports

Each team MUST turn in only one report, with team members’ names and their designations (checker, coordinator, presenter, programmer, and writer) listed at the top of the report.

  • Please limit your combined PDF document for both parts of the project to 10 pages in total. You will be penalized should your combined report exceed 10 pages!
  • Please type your reports using R Markdown, LaTeX or any other word processor but be sure to knit or convert the final output file to .pdf.
  • DO NOT INCLUDE R CODE OR OUTPUT IN YOUR SOLUTIONS/REPORTS. All R code must instead go in an appendix, and R output should be converted to nicely formatted tables; feel free to use packages/functions such as knitr::kable, xtable, or stargazer (see the sketch after this list).
  • In the appendix, feel free to also include any supplemental material that is important for your analysis, such as diagnostic checks or exploratory plots that you feel justify the conclusions in your report.
  • All reports must be submitted on Gradescope: go to Assignments → Team Project 2 Reports. Gradescope will let you select your teammates when submitting, so make sure to do so. The checker is responsible for submitting the report.
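
For instance, a minimal sketch of converting regression output into a formatted table (the model object final_model is illustrative; this is not a prescribed workflow):

library(knitr)   # kable() renders matrices/data frames as formatted tables

# round the coefficient table from a fitted model and render it for the PDF;
# confidence intervals from confint(final_model) can be bound on as extra columns
coef_table <- coef(summary(final_model))
kable(round(coef_table, 3),
      caption = "Estimated coefficients and standard errors")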

Evaluations

All team members must complete a very short written evaluation, quickly describing the effort put forth by other team members.

  • The evaluation should include a list of all other team members (not including you) and, considering all the work on the assignment that was not done by you, a breakdown of the fraction of work done by each of the other team members. For example, if you are on a 5-person team and the other team members all contributed equally, you would assign the fraction 1/4 to each of them (regardless of whether you all did 1/5 of the work overall, or whether you personally did half the work and the others each did 1/8).
  • If you feel one or more team members deserve a different grade on the assignment than the others, you should provide a description of why that member or members deserve a different grade. (In this case you can talk about your own relative contribution.)
  • Submit on Gradescope: go to Assignments → Team Project 2 Evaluations.

Analyses

Your analyses MUST address the two sets of questions directly. The report should be written so that there is a section which clearly answers the first set of questions for Part I and another section which clearly answers the second set of questions for Part II. If you prefer, you can write two 5-page reports, one to address each set of questions. Be sure to also include the following in your report:

  • the final models you ultimately decided to use,
  • clear model building, that is, justification for the models (e.g., why you chose certain transformations and why you decided the final models are reasonable),
  • model assessment for the final models,
  • the relevant regression output (including a table with coefficients, standard errors, and confidence intervals),
  • your interpretation of the results in the context of the questions of interest, including clear and direct answers to the questions posed, and
  • any potential limitations of the analyses.

Part I: StreetRx

Introduction

In this case study, we consider data provided by StreetRx. From their website,

StreetRx (streetrx.com) is a web-based citizen reporting tool enabling real-time collection of street price data on diverted pharmaceutical substances. Based on principles of crowdsourcing for public health surveillance, the site allows users to anonymously report prices they paid or heard were paid for diverted prescription drugs. User-generated data offers intelligence into an otherwise opaque black market, providing a novel data set for public health surveillance, specifically for controlled substances.

Prescription opioid diversion and abuse are major public health issues, and street prices provide an indicator of drug availability, demand, and abuse potential. Such data, however, can be difficult to collect and crowdsourcing can provide an effective solution in an era of Internet-based social networks. Data derived from StreetRx generates valuable insights for pharmacoepidemiological research, health-policy analysis, pharmacy-economic modeling, and in assisting epidemiologists and policymakers in understanding the effects of product formulations and pricing structures on the diversion of prescription drugs.

StreetRx operates under strong partnership with the Researched Abuse, Diversion, and Addiction-Related Surveillance System (RADARS), a surveillance system that collects product- and geographically-specific data on abuse, misuse, and diversion of prescription drugs. The site was launched in the United States in November 2010. Since then, there have been over 300,000 reports of diverted drug prices. StreetRx has expanded into Australia, Canada, France, Germany, Italy, Spain, and the United Kingdom.

Data

This data is NOT to be shared outside of class and specifically, NOT to be shared beyond this case study.

The data can be found on Sakai. The file streetrx.RData contains the actual data, and the instructions plus data dictionary (also given below) can be found in the file StreetRx Data Dictionary and Instructions_1q19.docx.

Each team has been assigned a different drug or drug family for investigation as given below. Subset the data and only focus on the drug or drug family assigned to your team.

Team Drug
Group 1 Methadone
Group 2 Codeine
Group 3 Morphine
Group 4 Oxymorphone
Group 5 Diazepam
Group 6 Lorazepam
Group 7 Tramadol
Group 8 Hydrocodone

Variables provided to us by StreetRx are given below; you will only use a subset of them, as indicated.

Variable Description
ppm Price per mg – outcome of interest
yq_pdate Year and quarter drug was purchased (format YYYYQ, so a purchase in March 2019 would be coded 20191) – DO NOT USE
price_date Date of the reported purchase (MM/DD/YY) (a finer-grained time variable than yq_pdate) – DO NOT USE
city manually entered by user and not required – DO NOT USE
state manually entered by user and not required
country manually entered by user and not required – only has one unique value anyway, so discard!
USA_region based on state and coded as northeast, midwest, west, south, or other/unknown
source source of information; allows users to report purchases they did not personally make
api_temp active ingredient of drug of interest – use it to subset the data to the drug assigned to your team above
form_temp this variable reports the formulation of the drug (e.g., pill, patch, suppository)
mgstr dosage strength in mg of the units purchased (so ppm*mgstr is the total price paid per unit)
bulk_purchase indicator for purchase of 10+ units at once
primary_reason primary reason for the purchase; data collection for this variable began in the 4th quarter of 2016 – DO NOT USE. Values include:
0 = Reporter did not answer the question
1 = To treat a medical condition (ADHD, excessive sleepiness, etc.)
2 = To help me perform better at work, school, or other task
3 = To prevent or treat withdrawal
4 = For enjoyment/to get high
5 = To resell
6 = Other reason
7 = Don’t know
8 = Prefer not to answer
9 = To self-treat my pain
10 = To treat a medical condition other than pain
11 = To come down
12 = To treat a medical condition (anxiety, difficulty sleeping, etc.)
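
As a rough sketch, loading the data and subsetting to your team's drug might look like the following (the object name streetrx inside the .RData file and the lowercase spelling of the drug value are assumptions to verify once you load the file; morphine is just an example):

load("streetrx.RData")                      # assumed to load a data frame called streetrx

# keep only your team's assigned drug (example: morphine)
my_drug <- subset(streetrx, api_temp == "morphine")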

Questions

Use a multi-level model to investigate factors related to the price per mg of your drug, accounting for potential clustering by location and exploring heterogeneity in pricing across locations.

As part of your analysis, explore how the factors provided are, or are not, associated with pricing per milligram. One challenge with StreetRx data is that entries are submitted by users, who may not always be truthful (e.g., I could go on the website now and say I paid a million dollars for one Xanax on the island of Aitutaki, and that would be reflected in the database), so exploratory data analysis will be important for identifying unreasonable observations.
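
A rough sketch of these steps is below (everything here is illustrative: the data frame my_drug from the subsetting sketch above, the specific filters, the log transformation, the predictors, and the choice of state as the grouping variable are all assumptions to revisit, not requirements):

summary(my_drug$ppm)                       # look for implausible prices per mg
# e.g., drop rows with missing or non-positive prices before modeling
my_drug <- subset(my_drug, !is.na(ppm) & ppm > 0)
hist(log(my_drug$ppm))                     # heavy right skew often suggests a log scale

library(lme4)
# varying intercept for location; something like (1 + mgstr | state) could be
# used to explore heterogeneity in pricing across locations
model_ppm <- lmer(log(ppm) ~ mgstr + bulk_purchase + source + form_temp +
                    (1 | state),
                  data = my_drug)
summary(model_ppm)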

Part II: Voting in NC (2020 General Elections)

Introduction

The North Carolina State Board of Elections (NCSBE) is the agency charged with the administration of the elections process and campaign finance disclosure and compliance. Among other things, they provide voter registration and turnout data online (https://www.ncsbe.gov/index.html, https://www.ncsbe.gov/results-data). Using the NC voter files for the general elections in November 2020, you will attempt to identify/estimate how different groups voted in the 2020 elections, at least among those who registered. Here’s an interesting read on turnout rates for NC in 2016: https://democracync.org/wp-content/uploads/2017/05/WhoVoted2016.pdf (you might consider creating a graphic similar to the one on page 4).

Data

The data for this part of the project can be found on Sakai. The file voter_stats_20201103.txt contains information about the aggregate counts of registered voters by the demographic variables; the data dictionary can be found in the file DataDictionaryForVoterStats.txt. The file history_stats_20201103.txt contains information about the aggregate counts of voters who actually voted by the demographic variables.

You will only work with a subset of the overall data. Take a random sample of 25 counties out of all the counties in both datasets. You should indicate the counties you sampled in your final report. You will need to merge the two files voter_stats_20201103.txt and history_stats_20201103.txt by the common variables for the counties you care about. Take a look at the set of join functions in the dplyr package in R (https://www.rdocumentation.org/packages/dplyr/versions/0.7.8/topics/join) or the merge function in base R; I recommend the functions in dplyr. You may choose to merge the datasets before or after selecting the sample you want, but be careful if you decide to do the latter.
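
A sketch of the sampling and subsetting step (this assumes the files are tab-delimited with a header row and that the county column is named county_desc, as in the data dictionary; verify both against your download):

library(dplyr)

voters  <- read.delim("voter_stats_20201103.txt")
history <- read.delim("history_stats_20201103.txt")

set.seed(2020)   # make the random sample of counties reproducible
sampled_counties <- sample(unique(voters$county_desc), 25)

voters_sub  <- filter(voters,  county_desc %in% sampled_counties)
history_sub <- filter(history, county_desc %in% sampled_counties)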

Unfortunately, the data dictionary from the NCSBE does not provide the exact difference between the variables party_cd and voted_party_cd in the history_stats_20201103.txt file (if you are able to find documentation on the difference, do let me know). However, I suspect that the voted party code encodes information about people who changed their party affiliation as of the registration deadline, whereas the first party code is everyone’s original affiliation. Voters are allowed to change their party affiliation in NC, so that lines up. The two variables are thus very similar, and only a small percentage of the rows in the history_stats_20201103.txt file have different values for the two variables. I would suggest using the voted party code (voted_party_cd) for the history_stats_20201103.txt dataset.

You should discard the following variables before merging: election_date, stats_type, and update_date. Also, you cannot really merge by or use the voting_method and voting_method_desc variables in your analysis, because that information is only available in the history_stats_20201103.txt data and not the other dataset. That means you should not use those two variables when merging.

Before discarding the variables, however, you need to aggregate to make sure that you are merging correctly. As a simple example, suppose 4 males voted in person and 3 males voted by mail; you need to aggregate out the method of voting so that you have 7 males in total. This is because we are unable to separate people who voted by different voting methods in the voter_stats_20201103.txt file we want to merge with. The simplest way is to use the aggregate function in R. As an example, the code:

# sum total_voters over voting method, within each combination of age and party
aggregated_data <- aggregate(Data$total_voters,
                             list(Age = Data$age, Party = Data$party_cd), sum)

will sum total voters within each combination of age group and party. You can also use the dplyr package to aggregate in the same way if you prefer.
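
For example, a dplyr sketch of the same aggregation (in practice, group by every variable you plan to merge on, not just age and party):

library(dplyr)

aggregated_data <- Data %>%
  group_by(age, party_cd) %>%      # add every other variable you will merge on
  summarise(total_voters = sum(total_voters), .groups = "drop")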

Once you have this clean data for the history_stats_20201103.txt file, you should then grab the information on total registered voters from voter_stats_20201103.txt by merging on all variables in history_stats_20201103.txt except total_voters.
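
A sketch of that merge (the data frame names history_agg and voter_agg and the key columns are assumptions based on the data dictionary; here the suggested voted_party_cd in the aggregated history data is matched to party_cd in the voter file, and adjust the keys to the columns you actually kept):

turnout <- left_join(
  history_agg,                                # aggregated history_stats data
  voter_agg,                                  # aggregated voter_stats data
  by = c("county_desc", "age", "race_code", "ethnic_code", "sex_code",
         "voted_party_cd" = "party_cd"),
  suffix = c("_voted", "_registered")         # distinguishes the two total_voters columns
)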

Questions of interest

Your job is to use a hierarchical model to answer the following questions of interest.

  • How did different demographic subgroups vote in the 2020 general elections? For example, how did the turnout for males compare to the turnout for females after controlling for other potential predictors?
  • Did the overall probability or odds of voting differ by county in 2020? Which counties differ the most from other counties?
  • How did the turnout rates differ between females and males for the different party affiliations?
  • How did the turnout rates differ between age groups for the different party affiliations?


For Part II, basic model assessment and model validation are sufficient!
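
As a rough starting point, one candidate hierarchical specification could look like the sketch below (assuming the merged data frame turnout from the data steps above, with total_voters_voted and total_voters_registered as the two counts; the column names, predictors, and interactions are illustrative, not prescribed):

library(lme4)

# varying-intercept logistic model for turnout, with counties as groups;
# interactions such as sex_code:voted_party_cd and age:voted_party_cd speak to
# the questions about how turnout gaps differ across party affiliations
model_turnout <- glmer(
  cbind(total_voters_voted, total_voters_registered - total_voters_voted) ~
    sex_code + age + race_code + voted_party_cd +
    sex_code:voted_party_cd + age:voted_party_cd +
    (1 | county_desc),
  family = binomial, data = turnout)
summary(model_turnout)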

Grading

40 points: 20 points for each part.