Empirical scientific publications are all about spreading the word on "experimental evidence". You'd say: "If you run electricity through water using two terminals, you can produce hydrogen and oxygen gas."
After you published this, people would know that, if they tried this themselves, they could expect the same result. Science thus became a Team Sport. For quality-control and scaling purposes, no single person would replicate everything, only a few "peers" (people in the same area) would check the results to make sure that they made sense. Thus Peer Review was born.
It worked well then, and it works pretty well now. The problem I have with it is that it sometimes doesn't scale past the environment it evolved in (one where everyone knew everyone else's name and what they were working on, and one where people only published something they felt would replicate). I might be wrong, but there are some big differences today. A replication study ("we got the same results as X") is practically unpublishable (worthless), you need cash just to look at the insiders' club from afar (journal articles are rarely open access), and to join (ie, publish something), I believe that, in certain instances, careful flattery of the reviewers counts for more than accuracy. There's a difference between empirical predictions that sound reasonable to a target audience and those that are actually correct. Lastly, there is a "publication bias" toward "being interesting". For example, in economics, it is difficult to find a study concluding that 'everything is fine' with respect to current policy (about anything). It's just more fun to criticize.
There is another way: a Truthcoin Dominance Assurance Contract.
I'm going to try one out that's slightly different from what I've previously described. Imagine a 2 x 2 PM with "Will Trusted Replication Firm attempt to replicate Study X?" along the rows and "Will the results of Study X be upheld?" along the columns. We have four states:
| ||Not Replicated||Replicated|
|Not Attempted: ||1||2|
|Attempted: ||3||4|
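The table above can be sketched in code. This is just an illustration (the function and variable names are my own, not part of any Truthcoin spec): each state is a (replication attempted, results upheld) pair, and exactly one state pays out 1 when the market resolves.

```python
# Illustrative sketch of the 2x2 market's state space.
# State numbers match the table: rows = attempted?, columns = upheld?

STATES = {
    1: ("not attempted", "not upheld"),
    2: ("not attempted", "upheld"),
    3: ("attempted", "not upheld"),
    4: ("attempted", "upheld"),
}

def winning_state(attempted: bool, upheld: bool) -> int:
    """Map the resolved outcome to the single state that pays out 1."""
    if not attempted:
        return 2 if upheld else 1
    return 4 if upheld else 3
```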
Individuals creating these Markets would do so to capitalize on any perceived disagreement, just like any other PM-Author.
Individuals buying States 1 and 2 would be those who either felt that the study wouldn't be chosen for replication, or wanted to subsidize the contract for replication (give Trusted Replication Firm some $-reasons to replicate the study).
Individuals buying State 3 would be those who feel the study is 'bad' and would not replicate if a replication was attempted.
Individuals buying State 4 would be those who feel the study is 'good' and that it would replicate if a replication was attempted.
The audit firm would buy States 3 and 4 equally, just before they decided to audit the study. They can uniquely profit because only they know which studies they will choose to replicate (ideally, this choice would be random). They receive a payout of 1 no matter what the outcome is, which they purchased for less than 1, so they profit as well.
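The firm's hedge is simple arithmetic, which a short sketch may make concrete (the prices here are hypothetical, and the function name is my own): once replication is attempted, exactly one of States 3 and 4 pays 1, so an equal position in both pays 1 per share-pair regardless of whether the study replicates.

```python
# Sketch of the audit firm's guaranteed profit per share-pair,
# assuming replication is in fact attempted (so State 1/2 cannot win).

def audit_hedge_profit(price_state3: float, price_state4: float) -> float:
    """Profit from buying one share each of States 3 and 4."""
    cost = price_state3 + price_state4  # paid up front
    payout = 1.0                        # exactly one of 3/4 resolves to 1
    return payout - cost                # positive whenever cost < 1

# e.g. hypothetical prices 0.30 and 0.45 lock in 0.25 per pair
```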
The whole point of doing this, however, is that one would be able to look at all of the market prices for all studies (not just those that one attempted to replicate) and assess the quality of work that way.
These markets might be a little thin. Would universities or governments subsidize them? What about a market on a single study, perhaps one with a very controversial or interesting result?