NovEval

Automatically Evaluating Scholarly Novelty in Alignment with Human Assessments

NovEval is a tool designed to automatically evaluate the novelty of an academic manuscript. It estimates how rarely a given sequence of words (i.e., the manuscript) occurs in the universe of scholarly discourse, using a GPT-2 model trained solely on English Wikipedia. NovEval's judgments have been shown to correlate significantly with those of human experts. See our manuscript for details.
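For intuition, here is a minimal sketch of how per-token surprisal can be computed with a GPT-2 language model. It uses the public "gpt2" checkpoint from Hugging Face Transformers as a stand-in for NovEval's Wikipedia-trained model; the checkpoint name and the helper function are illustrative assumptions, not NovEval's actual implementation.

```python
import math

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Stand-in checkpoint; NovEval's model is trained solely on English Wikipedia.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str):
    """Return (token, surprisal in bits) for each position after the first."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits, dim=-1)
    # Log-probability of each observed token given its preceding context.
    target = ids[0, 1:]
    token_logp = log_probs[0, :-1].gather(1, target.unsqueeze(1)).squeeze(1)
    surprisal_bits = [-lp / math.log(2) for lp in token_logp.tolist()]
    tokens = tokenizer.convert_ids_to_tokens(target.tolist())
    return list(zip(tokens, surprisal_bits))
```

Higher surprisal means the model found the token less predictable from its context; averaging such scores over a manuscript gives one way to quantify how "unexpected" its wording is.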

Please paste the manuscript you wish to examine into the box. Note that the manuscript should be longer than 2,560 tokens (approximately 2,000 words).


The surprisal score for each token is calculated and visualized. The deeper the shade, the greater the "surprise" the customized GPT-2 would perceive. Hover over a token to view its surprisal, probability, and the top candidate tokens for that position.
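The hover details can be derived from the same model outputs. The sketch below, again using the public "gpt2" checkpoint as a stand-in and an illustrative function name, shows how the observed token's probability and the top-k candidates at a position might be obtained.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def position_details(text: str, position: int, k: int = 5):
    """Probability of the token at `position` (>= 1) and the top-k candidates there."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Distribution over the vocabulary given the context up to position - 1.
    probs = torch.softmax(logits[0, position - 1], dim=-1)
    observed_prob = float(probs[ids[0, position]])
    top = torch.topk(probs, k)
    candidates = [(tokenizer.convert_ids_to_tokens(int(i)), float(p))
                  for p, i in zip(top.values, top.indices)]
    return observed_prob, candidates
```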

NovEval is currently under active development. For improvements and suggestions, contact us.

Licensed under 0BSD | Version: 0.3.0