Original post: http://www.scienceoverflow.com/
This is just a beta/gamma idea. I will be glad to hear your opinion about it!
What is ScienceOverflow?
A project for a non-profit organization providing an alternative model to centralize and distribute scientific work within the scientific community. The idea is to provide peer review that is self-regulated by the community. It is a system to publish papers and other scientific work, and a place to share solid negative results, raw data, and non-publishable work that still has the potential to generate new work and collaboration with others.
The aim is to be independent of the current journal system, but still compatible with the existing practice of “counting publications”, which is the current measurement of scientific production.
Inside the community, a roster of reviewers will emerge. Each one will have a “qualification” earned over time based on community/author opinions, or granted up front because of expertise established in the field before ScienceOverflow. Authors pre-classify their own work based on their own expectations. The classification ranges from draft (not yet publishable work), to work that is open source and editable by the community (git- or wiki-like), to raw data, code, or paper. The paper category is the bridge to the existing system, and it will count as a publication for the current metrics.
If the author wants to publish a paper, or upgrade a draft to a paper, it must be peer-reviewed by the community, as happens now. Each paper will have a “review threshold” depending on the policies applied when reviewing it. This method will allow the community to discover and properly tag “really high quality work” in the process.
Only experienced and well-regarded reviewers (A reviewers) can grant “A quality” to a paper. This grade will be similar or equivalent to the impact factor of top journals (Physical Review Letters, Nature, Science, and a few others), with some extra value because of the scaled review process. B papers are the standard: really good pieces of work. The B-paper impact factor corresponds to the serious but narrow journals out there. C papers are work that is published because it is solid science, even though its impact factor might be low. This category corresponds to negative results, failed experiments, and similar work. There is no equivalent to it out there, because no journal wants to publish things that may lower its impact factor. But it is good science and might be useful to others in the same field.
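As a minimal sketch of the grading rule above (the function name and level numbers are my own assumptions, not part of the proposal), the grade a paper ends up with could simply follow the highest-level reviewer who approved it:

```python
# Hypothetical sketch: the paper's grade (A/B/C) follows the highest
# reviewer level that approved it, as described in the text.
# Level numbering is assumed: 1 -> C reviewer, 2 -> B, 3+ -> A.

def paper_grade(approving_reviewer_levels):
    """Return "A", "B", or "C" from the levels of approving reviewers."""
    if not approving_reviewer_levels:
        return None  # not yet reviewed/approved
    top = max(approving_reviewer_levels)
    if top >= 3:
        return "A"
    if top == 2:
        return "B"
    return "C"

print(paper_grade([1, 2, 3]))  # "A": an A-level reviewer approved it
print(paper_grade([1, 1]))     # "C": only C-level approvals so far
```

A real rule would also encode the per-paper “review threshold”, but the mapping from approvals to a public grade is the core idea.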
- Draft. You are writing “online” in a public form. Your draft can receive comments, notes, and every other kind of community interaction, but it is not officially peer-reviewed until you think it is ready (maybe never). The draft is tagged with the proper fields, like regular papers, and it can be found by search engines, but interaction with the community is expected to be low because of its limited visibility.
- Open Manuscript. Git/wiki inspired. Your draft is “editable” by any user: they “commit” (git style) new additions, or patch something, in a new branch, and you can then merge that branch into the main one. There is no authorship recognition; the content is open source by nature, but the “project” must be referenced if it is used in future private work. Ideally there will be some recognition for the members who contribute the most.
- Raw data. A bundle of “open source” data with authorship recognition. It can be used and referenced by any future work. Detailed information about the data acquisition is required, and it might need review, just like papers.
- Scripts, plug-ins, code projects. Open source, but citation of the main developer/main group might be imposed.
- Papers. Reviewed and published (DOI number, citable, etc.). Graded A, B, or C depending on the expected impact factor and the review process, but the impact factor (and other metrics) can keep growing afterwards through community interaction.
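The categories above could be modeled very roughly like this (a hypothetical sketch; every class and field name is an assumption, not a specification):

```python
# Hypothetical data model for the work categories listed above.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Category(Enum):
    DRAFT = "draft"                      # public, commentable, not reviewed
    OPEN_MANUSCRIPT = "open_manuscript"  # git/wiki style, community-editable
    RAW_DATA = "raw_data"                # open data with authorship credit
    CODE = "code"                        # scripts, plug-ins, code projects
    PAPER = "paper"                      # peer-reviewed, DOI, citable

@dataclass
class Work:
    title: str
    authors: List[str]
    category: Category
    tags: List[str] = field(default_factory=list)
    grade: Optional[str] = None  # "A", "B", or "C"; papers only

    def is_citable(self) -> bool:
        # In this sketch, only reviewed papers get a DOI.
        return self.category is Category.PAPER

w = Work("Negative result on X", ["A. Author"], Category.DRAFT)
print(w.is_citable())  # False: a draft is not yet a citable paper
```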
The process of getting a paper published might have a cost associated with it. Getting a DOI (digital object identifier) has a cost, which the user might pay. If the author self-rates his work highly, a higher fee can be charged (to access top reviewers). This does not mean that you can only get an A paper if you pay for it; ScienceOverflow will remain a non-profit organization. /TODO
The paper category must be peer-reviewed. By whom? By the community? Triggered by what? People won’t spend their valuable time searching a sea of chaotic papers, looking for something to review. To solve this, the webpage will assign a paper to each user once in a while, according to their expertise and willingness. The users eligible for this are registered with a real ID and are associated with a research institute. The page itself is free access, with free information, but interaction and access to profiles require registration.
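A minimal sketch of that assignment step, assuming a naive tag-overlap score gated by willingness (all names and data structures are invented for illustration):

```python
# Hypothetical sketch: match papers to reviewers by expertise-tag
# overlap, skipping reviewers who have opted out. A real system would
# be far more elaborate (load balancing, conflicts of interest, etc.).

def match_score(paper_tags, reviewer):
    """Score a (paper, reviewer) pair: tag overlap, gated by willingness."""
    if not reviewer["willing"]:
        return 0
    return len(set(paper_tags) & set(reviewer["expertise_tags"]))

def assign_reviewers(paper_tags, reviewers, n=2):
    """Pick the n best-matching willing reviewers for a paper."""
    ranked = sorted(reviewers, key=lambda r: match_score(paper_tags, r),
                    reverse=True)
    return [r["name"] for r in ranked[:n] if match_score(paper_tags, r) > 0]

reviewers = [
    {"name": "Ana",  "expertise_tags": ["optics", "lasers"], "willing": True},
    {"name": "Bo",   "expertise_tags": ["optics"],           "willing": False},
    {"name": "Caro", "expertise_tags": ["optics", "fibers"], "willing": True},
]
print(assign_reviewers(["optics", "fibers"], reviewers))  # ['Caro', 'Ana']
```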
The process of becoming a reviewer (a prestigious position in the community) is simple: authors who are publishing in a field qualify, which is also the criterion used by the journals on the market. They search for reviewers based on published articles, associated tags, previous reviews, etc. If a registered user doesn’t qualify as a “raw” reviewer (90%??), they start as an editor.
- Editor (non-scientific content) (lv0): The first filter. It only takes editorial properties into account: proper language and format policies. It should also apply an automatic plagiarism-detection algorithm. It is not related to scientific content. It also considers the author’s expectations for the work and, if they are reasonable, sends it to the next review step.
- Reviewer C (lv1): can review C papers.
- Reviewer B (lv2): can review B papers.
- Reviewer A (lv3): They only receive A-level papers and will put much more effort into them. They can squeeze a bit more out of the reviews, asking for more data, etc. All their answers must be well elaborated, and comments/guidance for the author must be provided. They can downgrade a paper to level B if it is not significant enough.
- Reviewer lv4+: A position of honor in the community; walking gods.
Users can move between levels subject to some requisites, the main one being approval from the general community and from high-level reviewers.
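The promotion rule could be sketched like this (the thresholds are invented placeholders; the real requisites would be set by the community):

```python
# Hypothetical sketch of the promotion rule described above: a reviewer
# moves up one level only when both the general community and senior
# (high-level) reviewers approve. Thresholds are invented for illustration.

def can_promote(reviewer_level, community_votes, senior_approvals,
                min_votes=50, min_senior=3):
    """True if the reviewer may move from reviewer_level to the next one."""
    if reviewer_level >= 4:  # lv4+ is the top of the ladder
        return False
    return community_votes >= min_votes and senior_approvals >= min_senior

print(can_promote(1, community_votes=80, senior_approvals=5))  # True
print(can_promote(2, community_votes=80, senior_approvals=1))  # False
```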
If a reviewer enters the system (their name matches the database of articles), the webpage will assign a paper to them every X days/weeks/months; the user can tune this parameter. As soon as the system reaches a critical mass, the engine will be ON.
What is the actual position of reviewers? The system works exclusively thanks to their labor. They spend time on others’ work. Why? Work ethic, maybe? The payment is the good feeling of a job well done, and nothing else. The money generated by the publication goes to others. It is an altruistic job that they even have to pay to do (their university buys the article in the end).
How can scientists, as a community, reward good reviewers? That is the main question.
I am pretty sure that the system of community reviewing (open reviews) will work once you reach a critical mass of users and reviewers. That army of reviewers is out there, working right now, doing the same job for private companies that sell the product back to the very community that generates it. This alternative would remove the intermediary journal step, classify the output better, and add fresh air to the machinery.
ScienceOverflow takes elements from models that are successfully driven by the community:
- Stack Exchange (StackOverflow, MathOverflow, etc.)
- The open-source code community (GitHub)
Each piece of work will have an internal ID, which can be internally referenced (and counted). You can ask questions (Stack Exchange style) and link them to a draft or open manuscript that you are writing.
Open manuscripts can be edited the Wikipedia way. Even private drafts can receive commits (minor corrections or similar) from external users, with no change of authorship but a mention in the draft.
The organic development of open source is inspiring and can be applied to science as well.
arXiv is the precursor of this idea: a huge server of preprints, with tags, etc., but with no review involved.
Places to get ideas:
ResearchGate, the Facebook for scientists. No information, just networking. Umm, no. http://www.researchgate.net/
Publons: a Kiwi startup, a way to reward reviewers with DOIs, things that might “count” as a publication. The idea of rewarding the reviewer is good; giving them DOIs might not be enough. https://publons.com
The current journal system, simplified version:
You are a scientist. You are an expert in a narrow field of science. Maybe you are established and well-known in your area of expertise, maybe you are just starting; in both cases you are a learner. You have been working hard on something, and you have found results that deserve to be known by your community.
You try to give that information to the community. There are journals/magazines that accomplish that. You give your work to the journal in a “paper/article” format, hoping that your community finds it useful and that it adds knowledge to the area. Your reward for doing this is simple: your reputation in that field increases. How this reputation is turned into something more “solid” by the system is another complex question.
That magazine’s main function (ideally) is to provide the communication channel within the scientific community.
The magazine consists of an editor, who receives the scientist’s paper, a review committee, and a publishing platform (a print or digital edition of the article).
The editor is usually able to evaluate the paper in terms of its impact on the community. If the editor thinks the paper is worth it and complies with the particular journal’s policy, he sends it to some reviewers (2-3) for further, proper scientific evaluation.
The reviewers, or “peer reviewers”, are members of the same field as the author trying to publish his work. They are usually established experts in the field (they have already published papers in the area). But they are at the same level as the author: everybody’s work must be peer-reviewed, because even if the author is the king of the field, his experimental data might be corrupted. Extra eyes before publishing to a wider (and non-scientific) community are always good.
Filters. The journal publishing step is a filtering process. After your work has been published, it is (officially) worthwhile and trustworthy: it has been revised by the author’s peers, and they have found it good enough.
Final journal step: publication, providing a communication channel between scientists. This is done by selling the edited, reviewed article, prepared to be “consumed” by other scientists. Usually the universities, where most scientists work, pay for subscriptions to the journals of interest.
A high percentage of journals, open access and not, are crap.
The editor does not evaluate the paper based only on editorial variables (language, plagiarism, rigor of the scientific method), but also on a personal estimation of its future impact factor (journal policies), or on how appealing or fancy it appears to be.
The peer-review process is incomplete, or even nonexistent.
Journal access is usually really expensive (there are also open-access journals). Journals usually bundle “packs” of magazines together and sell them to the very universities that generate the product. Harvard, the richest university in the world, had to cut back its journal subscriptions because of their wild proliferation and their cost.
In the end the community is doing all the work, and private companies are making money from a job that can be done freely in this information/internet era. Fifty years ago, the only way to access knowledge was the library, the university, or face-to-face contact with others; the only way to access others’ work in your field was buying a magazine in your field. What about now? Science can get rid of the intermediary, and gain freshness and independence along the way.
Open access journals.
Instead of charging for access to the content, the journal charges for publishing in it. It is free to read, but authors have to pay ($2500 per article in PLOS ONE).
The open-access model is easily corruptible. The journal gets money for publishing; the author keeps his job by publishing. What is the obstacle to keeping everybody happy? The scientific method, with its peer review. Some companies just get rid of it: http://www.sciencemag.org/content/342/6154/60.full
That article came from a non-open-access journal, and I think it is biased against them; in my opinion this is not an open-journal problem, it is a problem endemic to the whole current system. This is a more precise point of view: