1  Evaluation data: input/features

Below, the evaluation data is read in from an Airtable, which itself was largely hand-entered from evaluators’ reports. As the PubPub platform is built out (target: end of Sept. 2023), it will allow us to include the ratings and predictions as structured data objects. We then plan to pull this data directly from PubPub (via its API?) into the present analysis. This will improve automation and limit the potential for data-entry errors.
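A minimal sketch of the Airtable pull using the `airtabler` package (the base ID, table names, and key below are placeholders, not the actual ones used; a valid key must be set in the environment for this to run):

```r
# Placeholder key: airtabler reads the AIRTABLE_API_KEY environment variable.
Sys.setenv(AIRTABLE_API_KEY = "keyXXXXXXXXXXXXXX")

# Placeholder base ID and table names.
unjournal_base <- airtabler::airtable(
  base = "appXXXXXXXXXXXXXX",
  tables = c("pub_records", "evals")
)

# Download all records from one table as a data frame.
pub_records <- unjournal_base$pub_records$select_all()
```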


Reconcile uncertainty ratings and CIs

Where evaluators gave only confidence-level ‘dots’, we impute CIs (confidence/credible intervals), following the correspondence described here. (Where they gave actual CIs, we use those.)1

5 = Extremely confident: 90% confidence interval spans +/- 4 points or less

For 0-100 ratings, code the LB as \(\max(R - 4, 0)\) and the UB as \(\min(R + 4, 100)\), where R is the stated (middle) rating. This truncates the implied interval at the bounds of the 0-100 scale, giving a maximum width of 8 points; e.g., where the rating is 96 or above, the UB is capped at 100.

4 = Very confident: 90% confidence interval +/- 8 points or less

For 0-100 ratings, code the LB as \(\max(R - 8, 0)\) and the UB as \(\min(R + 8, 100)\), where R is the stated (middle) rating.

3 = Somewhat confident: 90% confidence interval +/- 15 points or less

2 = Not very confident: 90% confidence interval +/- 25 points or less

We apply comparable scaling for the 2 and 3 ratings as for the 4 and 5 ratings: for a 3, code the LB as \(\max(R - 15, 0)\) and the UB as \(\min(R + 15, 100)\); for a 2, use 25 points in place of 15.

1 = Not confident: 90% confidence interval +/- more than 25 points

Code the LB as \(\max(R - 37.5, 0)\) and the UB as \(\min(R + 37.5, 100)\).

This is just a first pass; there may be a more information-theoretic way of doing this. On the other hand, we may soon switch the evaluations to a different tool, perhaps dropping the 1-5 confidence ratings altogether.
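The imputation rule above can be sketched in base R as follows (function names are illustrative, not the code used in the actual analysis):

```r
# Map a 1-5 confidence 'dot' rating to the imputed 90% CI half-width
# given by the correspondence above (first-pass rule).
ci_half_width <- function(conf_dots) {
  widths <- c(`5` = 4, `4` = 8, `3` = 15, `2` = 25, `1` = 37.5)
  unname(widths[as.character(conf_dots)])
}

# Impute bounds for a 0-100 rating, truncating at the bounds of the scale.
impute_ci <- function(rating, conf_dots) {
  hw <- ci_half_width(conf_dots)
  list(lb = pmax(rating - hw, 0), ub = pmin(rating + hw, 100))
}

impute_ci(96, 5)  # lb = 92, ub = 100 (UB truncated at the scale maximum)
```

Because `pmax`/`pmin` are vectorized, this also applies directly to whole columns of ratings and dot scores.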


We cannot publicly share the ‘papers under consideration’, but we can share some of the statistics on these papers. Let’s generate an ID (or later, a salted hash) for each such paper, and keep only the shareable features of interest.
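An illustrative base-R sketch of this step (the function and column names are hypothetical; a salted hash, e.g. via the `digest` package, would additionally hide the input order, whereas this sketch uses sequential IDs):

```r
# Keep only shareable columns and attach an anonymous sequential ID,
# dropping identifying fields such as the paper title.
make_shareable <- function(papers_df, keep_cols) {
  out <- papers_df[, keep_cols, drop = FALSE]
  out$id <- sprintf("paper_%03d", seq_len(nrow(out)))
  out
}

make_shareable(
  data.frame(title = c("A", "B"), rating = c(80, 65)),
  keep_cols = "rating"
)
```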


  1. Note this is only a first-pass; a more sophisticated approach may be warranted in future.↩︎