C Abstract Review Systems at useR!

C.1 Prior to 2012

The number of abstracts prior to 2012 was low (~185 abstracts were submitted in 2011) and the review process was not very competitive, so a dedicated review system was not needed. A typical set-up was as follows:

  • Abstracts available to download individually or in combined PDF from a password-protected webpage (or emailed to reviewers)
  • Each abstract assigned to 1 reviewer, who was asked to:
    1. Check whether the abstract was suitable as a talk, otherwise offer a poster. Only totally unsuitable abstracts were rejected.
    2. If relevant, decide whether the talk was suitable for a “Kaleidoscope” session, i.e., a higher-profile session (single track or fewer sessions in parallel) highlighting talks across a range of topics.
  • Reviewers asked to add their reviews to a CSV file and return it, as sketched below
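
Purely as an illustration (not the organisers’ actual process), the returned files could be collated in R along these lines; the directory and column names are assumptions:

```r
# Collate the review CSVs returned by reviewers into one table.
# "returned_reviews/" and the "recommendation" column are assumptions.
files <- list.files("returned_reviews", pattern = "\\.csv$", full.names = TRUE)
reviews <- do.call(rbind, lapply(files, read.csv, stringsAsFactors = FALSE))

# e.g. tally the recommendations (talk/poster/Kaleidoscope/reject)
table(reviews$recommendation)
```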

C.2 2012-2014

Between 2012 and 2014, useR! used a web application designed by Matt Shotwell.

  • Each abstract assigned to 1 reviewer who was asked to:

    1. Indicate yes/no/maybe and comment
    2. Group abstracts into sessions (max 4 per session) and propose session titles (suggestions for good pairings were always encouraged in review, but here it was actually part of the system). The session could be “Kaleidoscope” or “Poster” as appropriate.
  • Any member of the program committee could review any abstract, view other people’s reviews, and propose alternative groupings

In 2014, a Google spreadsheet with links to PDFs of the submitted abstracts was used initially, but reviewing moved to the web app because it allowed multiple reviews per abstract. That year people self-selected which abstracts to review. This is not ideal, as inevitably some people do a lot more work than others and some abstracts are left unreviewed.

C.3 2015-2016

useR! 2015 and 2016 used a web application designed by Torben Tvedebrink that had both a front end for submitters and a back end for reviewers (previously, abstracts were submitted via a separate web form).

  • Aimed for at least 2 reviews per abstract. Reviewers were asked to:
    1. Recommend format: poster, lightning talk, regular talk
    2. Evaluate: Accept/probably OK/probably not OK/reject + comment
  • Peter Dalgaard wrote a tool to extract all the information into a spreadsheet, which many reviewers found easier for skimming through the abstracts and reviewing (a generic sketch of this kind of extraction follows below)
  • Reviewers could start reviewing as soon as abstracts came in, and the Chair could accept or reject as soon as there was a clear outcome (e.g. 2 x “accept” or 2 x “reject”)
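
The extraction tool itself is not reproduced here. As a generic sketch only, assuming the system could export submissions as JSON, flattening them for a spreadsheet might look like this:

```r
# Illustrative only: flatten exported submissions into one row per abstract
# so they can be skimmed in a spreadsheet. "abstracts.json" and the field
# names are assumptions, not the actual export format.
library(jsonlite)

subs <- fromJSON("abstracts.json")  # assumed: an array of submission records
out <- data.frame(
  id       = subs$id,
  title    = subs$title,
  authors  = subs$authors,
  abstract = gsub("\\s+", " ", subs$abstract)  # collapse to a single line
)
write.csv(out, "abstracts_for_review.csv", row.names = FALSE)
```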

By this stage the process was beginning to be more selective, but it was still feasible for the PC as a whole to discuss and decide on rejections (~250 abstracts in 2015).

C.4 2017

useR! 2017 used a system implemented in Redmine by Open Analytics (co-organizers that year). The motivation was to avoid a disorganised “free-for-all” and to allocate abstracts so that each abstract was reviewed by 2 reviewers.

It worked as follows:

  • All abstracts were submitted programmatically to a top-level project “abstract-mgt”. Abstracts received status “New”.
  • The chairs allocated abstracts to sub-projects corresponding to topics (e.g. “biostatistics”), with two reviewers per topic. Abstracts received status “Pending review”.
  • The reviewers received a notification when their abstracts were available and could start reviewing.
  • The first reviewer of a submission changed the status to “first review complete”
  • The second reviewer changed the status to “second review complete”
  • The chairs read the completed reviews and changed the status to “Accepted” or “Rejected” and the submission type to “Talk”, “Poster” or “Lightning Talk”.
  • Decisions were extracted programmatically to email submitters and to obtain the information needed to form the schedule.

R scripts were written to automate steps, e.g. allocate abstracts to sub-projects, download materials for a sub-project, etc. Redmine also allows bulk-editing, e.g. to change status for several issues at once.
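
Those scripts are not reproduced here, but as a rough sketch of what is possible, the Redmine REST API can be driven from R with httr; the base URL, project identifier, issue ids and status id below are placeholders:

```r
# Sketch of talking to the Redmine REST API from R (placeholders throughout).
library(httr)

base_url <- "https://redmine.example.org"
key      <- Sys.getenv("REDMINE_API_KEY")

# list the abstracts (issues) in one topic sub-project, e.g. "biostatistics"
issues <- content(GET(
  paste0(base_url, "/projects/biostatistics/issues.json"),
  query = list(limit = 100, key = key)
))$issues

# bulk-change a set of reviewed abstracts to "Accepted", assuming 5 is the
# id of that status on this particular Redmine instance
for (id in c(101, 102, 103)) {
  PUT(
    paste0(base_url, "/issues/", id, ".json"),
    body = list(issue = list(status_id = 5)),
    encode = "json",
    query = list(key = key)
  )
}
```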

Pros: smooth process for assigning abstracts to reviewers and tracking progress; able to interact with the system using R scripts.

Cons: fiddly to download and view materials for each submission (though an R script was provided); relied on some manual steps (reviews had to be entered online and the reviewer had to change the abstract status); reviews were added in the same space (the project description field), so the second reviewer would see the first review before adding their own; reviews were unstructured (no dedicated place to put an overall recommendation); one reviewer's comments were lost/not saved; inputting results online was slow.

We had ~400 abstracts plus ~20 extended abstracts for a Young Academic scholarship.

C.5 2018

useR! 2018 used a Shiny app for reviewing; the code is here: https://github.com/useR-2018/cooee.

  • Abstracts were submitted by Google form
  • The Shiny app obtained data from a Google Sheet to present to reviewers (a minimal sketch of this set-up follows after this list)
  • Reviewers gave a score (in Aussie lingo: “Bloody ripper”, “Beaut”, “Okey-dokey”, “Sorry”) and a comment
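
As a minimal sketch of this kind of set-up (not the cooee code itself; the sheet IDs and column names are made up), a Shiny reviewing app backed by a Google Sheet could look like this:

```r
# Minimal reviewing app backed by a Google Sheet; everything specific
# (sheet IDs, column names) is an illustrative assumption.
library(shiny)
library(googlesheets4)

abstract_sheet <- "ABSTRACT_SHEET_ID"  # sheet filled by the Google form
review_sheet   <- "REVIEW_SHEET_ID"    # sheet collecting reviews
# gs4_auth() would be needed here for private sheets

abstracts <- read_sheet(abstract_sheet)

ui <- fluidPage(
  selectInput("id", "Abstract", choices = abstracts$id),
  uiOutput("abstract_text"),
  radioButtons("score", "Score",
               c("Bloody ripper", "Beaut", "Okey-dokey", "Sorry")),
  textAreaInput("comment", "Comment"),
  actionButton("submit", "Submit review")
)

server <- function(input, output, session) {
  output$abstract_text <- renderUI({
    current <- abstracts[abstracts$id == input$id, ]
    tagList(h4(current$title), p(current$abstract))
  })
  observeEvent(input$submit, {
    # append one review row to the review sheet
    sheet_append(review_sheet,
                 data.frame(id = input$id, score = input$score,
                            comment = input$comment))
    showNotification("Review saved")
  })
}

shinyApp(ui, server)
```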

Pros: an all-R solution.

Cons: reviewing had to be done entirely online.

Unsure: were reviewers allocated to abstracts? Could reviews be edited?

C.6 2019

useR! 2019 received 467 abstracts and used https://www.sciencesconf.org/, which integrated registration and abstract management.

  • Abstracts were allocated to a single topic (this required some manual curation, not least because we could not constrain submitters to a single topic choice during sciencesconf submission).
  • Reviewers were allocated 2-3 topics, such that each reviewer had 45-55 abstracts and each abstract had two reviewers (a rough sketch of such an allocation is given after this list)
  • Reviewers were provided with combined HTML of all abstracts plus combined HTML for each topic, for reference/offline review.
  • Reviews had to be completed on sciencesconf. The review form had the following parts:
    1. Overall score (0 - Faux-pas (reject); 4 - Cliché (possibly reject); 7 - Connoisseur (probably accept); 10 - Tour de force (definitely accept))
    2. Internal comment to chairs
    3. Optional comment that would be shared with contributor
    4. Recommended format (regular talk/lightning talk/poster)
    5. Recommended session topic
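
The actual allocation was done by the organisers; as a rough sketch only, a balanced allocation along these lines could be approximated in R (the numbers of topics and reviewers below are made up):

```r
# Rough sketch (not the actual 2019 process) of allocating topics to
# reviewers so that each abstract gets two reviewers and workloads stay
# roughly balanced.
set.seed(2019)

abstracts <- data.frame(
  id    = seq_len(467),
  topic = sample(paste("Topic", 1:25), 467, replace = TRUE)
)
reviewers <- paste("Reviewer", 1:20)

# abstracts per topic, largest topics first so they are placed early
topic_sizes <- sort(table(abstracts$topic), decreasing = TRUE)

load <- setNames(numeric(length(reviewers)), reviewers)
allocation <- list()

for (topic in names(topic_sizes)) {
  # give each topic to the two currently least-loaded reviewers
  chosen <- names(sort(load))[1:2]
  allocation[[topic]] <- chosen
  load[chosen] <- load[chosen] + topic_sizes[[topic]]
}

sort(load)  # ideally each reviewer ends up with roughly 45-55 abstracts
```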

Pros: system to allocate abstracts to reviewers; structured review form.

Cons: reviews and final decisions had to be entered manually (the interface required a lot of clicking on small buttons!). Twice, decisions were accidentally bulk-edited and some abstracts were accepted that were not intended to be; submitters could log in to the system and see decisions before they were finalized.

C.7 Others

eRum 2020 used a Shiny app developed by Federico Marini that allocated abstracts to reviewers and had Google Sheets in the background.

LatinR uses EasyChair for abstract review, but Elio Campitelli wrote an R package that enables users to write abstracts in Markdown and submit them from R.

Under an R Consortium project started in 2016, we reviewed some open source conference management systems; see the report and presentation. The next step was to work on a website template that could interface with different systems; see this report: https://github.com/lockedata/rcms/blob/master/milestone_2_evaluation_outcome_and_next_steps.md. This worked for satRdays (whom we partnered with on this) and has given a useful template for 2020 that we want to keep using and developing. It could be good to build on this with our own review system.

Some other options that were not included in the R Consortium review, but have promise:

  • Microsoft CMT, free for academic conferences. This has some nice features (ability to import offline reviews, bulk import paper status, option to use the Toronto Paper Matching System to match papers with reviewers). Reasonable documentation.
  • pretalx, an actively developed conference management system using Python/Django. Option to self-host (free), reasonable cost for academic events (~EUR 500, assuming 50% discount). Has a REST API, which should be useful for building R scripts around (see the sketch below). Seems well documented.
  • OpenReview https://openreview.net/, used by EuroBioc2020. Needs some maintenance to get working, but the team behind it is super friendly and responsive, so it has potential. Possibly best for smaller conferences (< 100 submissions).
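
As a quick sketch of the kind of R scripting the pretalx REST API should support (the event slug and token are placeholders; check the pretalx API documentation for the exact endpoints and fields):

```r
# Pull submissions from a pretalx instance via its REST API.
# The event slug and the PRETALX_TOKEN environment variable are
# placeholders; field names should be checked against the docs.
library(httr)

base  <- "https://pretalx.com/api/events/user2021"  # hypothetical event
token <- Sys.getenv("PRETALX_TOKEN")

resp <- GET(paste0(base, "/submissions/"),
            add_headers(Authorization = paste("Token", token)))
subs <- content(resp)$results  # paginated: follow content(resp)$`next` for more

vapply(subs, function(s) s$title, character(1))
```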