Hey Casey! I'll take the questions in turn :)
Code
Features
Okay, longer explanation: right now, peer review in the journals serves two purposes: editorial feedback (helping authors polish their work) and quality control (keeping bad or dishonest work from getting published). The latter has stopped functioning. When a paper gets rejected from a reputable journal, a dishonest author can just take it to a pay-to-play journal that doesn't have quality checks. As far as the public is concerned, the end result is the same: it got published. There are over 10,000 journals; even academics have a hard time keeping straight which ones are reputable. And the landscape keeps shifting: some open-access journals that were promising early on have dropped in quality, meaning they've become a net negative on the CVs of academics who published with them before the decline.
My intention is to take peer review and split its two functions. Review on a draft will be solely about polishing the work and giving editorial feedback. The quality control will come from the reputation system post-publish.
Here's how it works (a rough code sketch is at the end of this comment): an academic posts a draft and tags it with the relevant fields. Users who have enough reputation in those fields can offer reviews of the draft. The author can accept or reject the reviews based on whether or not they are helpful (granting reputation to those accepted). When the author is ready, they publish the paper. At that point, users who have enough reputation in those fields can vote it up or down (granting the author reputation) and post public responses, which can themselves be voted up or down by others with reputation in the fields, granting less reputation.
Tying reputation to the fields and restricting those features to users with reputation in those fields is necessary (at least at first) in order to get buy-in. Academics don't want to have to sift through a ton of feedback from people without expertise in their fields; they wouldn't use the site. Eventually, if it gets traction, my thought was to introduce comments on published papers (separate from the responses) that anyone could post to ask questions, make suggestions, and generally interact with the experts in the field. In that way, the site could facilitate scientific communication. But I don't think that's an MVP feature.
Higher-level Project Questions
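To make the Features flow above concrete, here's a minimal TypeScript sketch of the gating rules. Every type name and threshold here is made up for illustration; none of it is actual project code.

```typescript
// Hypothetical types and thresholds -- just illustrating the flow described above.
interface User {
  id: string;
  // Reputation earned per field, e.g. { biology: 120, statistics: 40 }.
  reputation: Record<string, number>;
}

interface Paper {
  authorId: string;
  fields: string[]; // fields the author tagged the draft with
  status: "draft" | "published";
}

const REVIEW_THRESHOLD = 100; // rep needed in a field to review a draft
const VOTE_THRESHOLD = 50;    // rep needed in a field to vote on a published paper

// A user can review a draft if they have enough reputation in at least
// one of the fields the draft is tagged with.
function canReview(user: User, paper: Paper): boolean {
  return (
    paper.status === "draft" &&
    user.id !== paper.authorId &&
    paper.fields.some((f) => (user.reputation[f] ?? 0) >= REVIEW_THRESHOLD)
  );
}

// Voting on a published paper works the same way, with a lower bar.
function canVote(user: User, paper: Paper): boolean {
  return (
    paper.status === "published" &&
    user.id !== paper.authorId &&
    paper.fields.some((f) => (user.reputation[f] ?? 0) >= VOTE_THRESHOLD)
  );
}

// When the author accepts a review, the reviewer earns reputation
// in the draft's tagged fields.
function acceptReview(reviewer: User, paper: Paper, grant = 10): void {
  for (const f of paper.fields) {
    reviewer.reputation[f] = (reviewer.reputation[f] ?? 0) + grant;
  }
}
```

Whether reviewing should require reputation in one tagged field or all of them is still an open design choice; in the sketch that's just the difference between `some` and `every`.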
Thanks for the extremely thorough response, Dan. All of that makes complete sense to me, especially with the time frame you've got. I like how you're going about the reputation stuff; it makes a lot more sense than what I was imagining! With all of that said, is there anything I can do to help things along? Also, is the code in
Hey Dan! Just some thoughts/questions I had while looking over the project:
Code
Any interest in using TypeScript? If so, I've got a template project I use for my own projects that I could adapt this code to pretty quickly. It's based on an ejected create-react-app, so the scripts are a little heavy-handed, but solid. It supports TS for both client and server, is already set up to serve both concurrently with hot reloading, and deploys to Heroku (granted, you probably don't want that part, haha). Might need some tweaks to work in Docker, but that shouldn't be too hard.
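For reference, the scripts section of the template looks roughly like this (from memory, so the exact script names will differ):

```json
{
  "scripts": {
    "client": "node scripts/start.js",
    "server": "nodemon --watch server --exec ts-node server/index.ts",
    "dev": "concurrently \"npm run server\" \"npm run client\"",
    "build": "node scripts/build.js && tsc -p server",
    "heroku-postbuild": "npm run build"
  }
}
```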
Any interest in using a component library like MaterialUI or ReactBootstrap?
I think I know your thoughts on eslint/prettier, but figured I'd ask if there's any interest in using them for consistency between devs?
I noticed it seems like you're planning to roll your own auth. Curious about your thoughts on using a third party like Okta for user management to potentially speed up the MVP? Granted, that introduces its own security and data-management risks.
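For example, here's a sketch of what the server side might look like with Okta's @okta/jwt-verifier package. I'm going from their docs, and the issuer/audience values are placeholders from the Okta app config, so double-check the details:

```typescript
// Sketch: Express middleware that verifies Okta-issued access tokens.
// Requires esModuleInterop; issuer/audience are placeholders.
import express from "express";
import OktaJwtVerifier from "@okta/jwt-verifier";

const verifier = new OktaJwtVerifier({
  issuer: "https://YOUR_OKTA_DOMAIN/oauth2/default",
});

async function requireAuth(
  req: express.Request,
  res: express.Response,
  next: express.NextFunction
) {
  const match = (req.headers.authorization ?? "").match(/^Bearer (.+)$/);
  if (!match) {
    return res.status(401).json({ error: "missing bearer token" });
  }
  try {
    // Verifies signature, expiry, issuer, and audience.
    const jwt = await verifier.verifyAccessToken(match[1], "api://default");
    (req as any).userId = jwt.claims.sub;
    next();
  } catch {
    res.status(401).json({ error: "invalid token" });
  }
}

const app = express();
app.get("/api/me", requireAuth, (req, res) => {
  res.json({ userId: (req as any).userId });
});
```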
I have an urge to comment on some of the documentation to ask questions directly, but GH doesn't provide a good way to do this. Any thoughts on using google docs or notion or something for it?
What's the plan (if any yet) for database migrations?
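For example, if you went with something like Knex, each schema change would live in a dated migration file with an up/down pair. The table here is just invented for illustration:

```typescript
// Example Knex migration -- hypothetical "papers" table, showing the up/down shape.
import { Knex } from "knex";

export async function up(knex: Knex): Promise<void> {
  await knex.schema.createTable("papers", (table) => {
    table.increments("id").primary();
    table.integer("author_id").notNullable();
    table.string("title").notNullable();
    table.string("status").notNullable().defaultTo("draft");
    table.timestamps(true, true); // created_at / updated_at
  });
}

export async function down(knex: Knex): Promise<void> {
  await knex.schema.dropTableIfExists("papers");
}
```

Then `npx knex migrate:latest` applies pending migrations and `npx knex migrate:rollback` undoes the last batch.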
Features
I noticed that the primary way for users to submit papers is to upload them. I'm assuming this is because most papers are going to be in PDF format, and to keep users from having to copy/paste/format things into an RTE or form of some sort? I was thinking that if papers were in text format, users could easily comment inline on things like they do during GH reviews. Maybe something like https://mathpix.com/ocr could be used to convert an uploaded PDF, and then let the user tweak as needed before submission. Could also be a good way to provide diffs for updates and "PRs".
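Rough shape of what I mean. The endpoint and field names here are from my reading of Mathpix's docs, so they'd need verifying:

```typescript
// Sketch: convert an uploaded PDF to Markdown via Mathpix's v3/pdf API,
// then let the author review/tweak the text before submission.
// Endpoint and field names are assumptions from their docs -- verify before use.
async function convertPdf(pdfUrl: string): Promise<string> {
  const headers = {
    app_id: process.env.MATHPIX_APP_ID!,
    app_key: process.env.MATHPIX_APP_KEY!,
    "Content-Type": "application/json",
  };

  // Kick off the conversion job.
  const submit = await fetch("https://api.mathpix.com/v3/pdf", {
    method: "POST",
    headers,
    body: JSON.stringify({ url: pdfUrl, conversion_formats: { md: true } }),
  });
  const { pdf_id } = await submit.json();

  // Poll until the job finishes, then fetch the Markdown output.
  for (;;) {
    const poll = await fetch(`https://api.mathpix.com/v3/pdf/${pdf_id}`, { headers });
    const { status } = await poll.json();
    if (status === "completed") break;
    if (status === "error") throw new Error("conversion failed");
    await new Promise((r) => setTimeout(r, 2000));
  }

  const md = await fetch(`https://api.mathpix.com/v3/pdf/${pdf_id}.md`, { headers });
  return md.text();
}
```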
I was curious about the peer review / reputation idea outlined in the peer-review planning doc -- it seems to be suggesting that only people with a high enough reputation (in a given field?) can peer review docs. Just a thought/idea I had about this: it would be interesting if anyone could potentially comment on a doc, but if their reputation was low, their comments would sort below those from users with higher reputation, and perhaps render dimmed to indicate the user has a low rep. This allows anyone to potentially point out interesting points or flaws in a paper, while gaining reputation if others upvote their comments. Spam could be an issue, though. Maybe handled through moderation and collapsing of comments that are severely downvoted?
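Roughly what I'm picturing, with all thresholds made up:

```typescript
// Sketch of the open-comments idea: anyone can comment, but low-rep
// comments sort below high-rep ones, render dimmed, and collapse
// entirely if heavily downvoted. Thresholds are invented for illustration.
interface Comment {
  body: string;
  authorRep: number; // author's reputation in the paper's field
  score: number;     // net up/down votes on the comment itself
}

const LOW_REP = 50;
const COLLAPSE_SCORE = -5;

type Display = "normal" | "dimmed" | "collapsed";

function displayState(c: Comment): Display {
  if (c.score <= COLLAPSE_SCORE) return "collapsed";
  return c.authorRep < LOW_REP ? "dimmed" : "normal";
}

// High-rep authors first; ties broken by the comment's own vote score.
function sortComments(comments: Comment[]): Comment[] {
  return [...comments].sort(
    (a, b) => b.authorRep - a.authorRep || b.score - a.score
  );
}
```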
Higher-level Project Questions
These might be easier to chat about out loud rather than here, but I figured I'd ask questions here while they're on my mind.