Academics, myself included, often complain about the peer-review process: slow, unfair, biased, unpaid, irrelevant…until their paper has been accepted somewhere. Publish or perish. In this post, I will discuss some points worth sharing about the “peer-review game”. Interestingly, we experienced all of them during the publication process of our recent opinion paper discussing why jump height is not always a reliable indicator of lower limb power capability (access here).
The underestimated importance of the “Cover Letter” and the “pre-review” process
The famous “cover letter” is something I did not consider important until I realized it is in fact the only way authors can detail and “defend” the importance of their work, and help the Editor better understand why the paper should be reviewed and eventually published. This letter is compulsory in most Journals, and I used to insert in the document very neutral, basic information about the submitted paper: title, type of article, and the compulsory statements requested by the Journal. Given that the Editor will likely read this cover letter before deciding whether to send the paper to reviewers, this poor content was a mistake. About 10-20 papers ago, I decided to explain, in 10-15 lines max, the main strengths of the submitted study/paper (and taught my students to do the same). After all, if you don’t “sell” your own work with what you think are the best arguments, no one will do it for you. It is a lawyer’s job: trying to convince the Editor that the work should be reviewed. I have no idea what effect writing such an informative letter has compared with the neutral, poor content described above, but as I often tell my students: “at worst, it will not change the outcome”. Yes, it is a significant amount of additional work: writing an efficient cover letter may take several hours, to recall the literature context leading to the study, why it is innovative, relevant to the field, and so on.
Do you offer flowers without saying why? Same for a paper submission, add a wish card.
Well, with the Sports Medicine paper, this strategy failed. The cover letter included our arguments, but the paper was rejected by the Editor without peer-review. For those who do not know the process, this is the email we received:
« Based on our initial editorial assessment, we regret to advise that we are unable to pursue publication of your manuscript in Sports Medicine. While the topic falls within the aims and scope of the journal, we felt we were not able to prioritize your article ahead of the large number of other worthy articles being submitted to the journal, many of which we also have to turn away because of journal space and resource limitations. »
The main scientific argument for this rejection-by-the-Editor was: missing.
We were convinced this paper was interesting for the community, so we decided to improve our overall submission (cover letter + manuscript) with two additional steps: a pre-review and feedback on the audience of the paper. These two steps were made possible by publishing the paper on two open-archive “preprint” platforms: SportRxiv and ResearchGate. Then, we used social media (mainly Facebook, Twitter and LinkedIn) to spread the link to the paper, and carefully monitored for 2 months (i) how many views/downloads the paper got and (ii) the feedback and comments people sent us. I will discuss the “why the hell did you do that” arguments below, but the main outcome is that after these 2 months, we had received several relevant and constructive comments, which allowed us to improve the manuscript. This is open-science pre-review. The manuscript also generated clear interest, with several hundred downloads and thousands of views in total. This is not only because we are fortunate enough to have many people following our work; I am convinced this “2-month stat” is also due to the content of the paper being interesting to the sport science and training community. Why 2 months? Long enough to gather solid feedback, and short enough a wait until re-submission.
Then, we re-submitted to the same Journal, this time adding to the cover letter the argument of the work improved through pre-review, along with the overall audience metrics as evidence of the potential interest of the paper for the community. This time, the Editor (we don’t know if it was the same Editor) sent the paper out for review, which was our objective. The Reviewers’ feedback was overall positive, and the paper was accepted after one round of revision.
I cannot tell if, or to what extent, this pre-review process helped, but I am convinced it did, even if only a little. However, when I explain this strategy to colleagues in academia, these are the main “yes but” comments I get, and my answers.
Yes, but is it “ok” to publish your work before submitting it?
This is very easy to verify since most Journals have a preprint policy detailed in their instructions to authors. On the SHERPA/RoMEO website, you can also check whether the Journal you are targeting is “RoMEO green”, which means publishing the work prior to submission is accepted. If not, you can still try it and eventually modify the content based on pre-review remarks, or simply opt for a RoMEO green Journal. In the long term, a good way to change Journals’ policies is to change the Journals you submit to. Vox populi, likely more effective than Twitter rants in which you tag the Journal.
Yes, but there is a risk that someone steals or copies your idea and submits similar content first
The good thing with preprints on the SportRxiv platform, for example, is that once the preprint is published on the website, it is public, with a DOI and a time stamp. You can then prove when your work was first made public, if needed. A friend of mine experienced such bad practice in the context of classical peer-review: the review process is not public, lasts for months, ends with a rejection decision, and a few weeks later a very similar paper pops up…By making the work publicly available, you “leave your mark” clearly and officially. In addition, the main reason I like the preprint process for ready-to-submit content is that people can immediately read the work and benefit from the research conclusions, whereas even a lucky and fast review process takes, in my experience, 3 to 6 months all-in-all. In our immediate-delivery world, it is very frustrating to work for months on a research project and then, once the conclusions are available to the co-authors, know they are still several months away from their targeted audience. Just like blog posts, pre-review and preprints are a good way to make your results (at least in the authors’ version) public whenever you want.
Yes, but I don’t have 10k+ followers so no way I can get good “impact” stats
I disagree: the sports and exercise science community is so small, and the links between people so numerous (remember we are all within a 5-6 direct-link network), that if your work is interesting, it will very quickly be shared, read and mentioned by people with a larger audience who care about it.
For other “yes but” questions, see this great post on the SportRxiv website and this Twitter thread by Dan Quintana.
The final point here is that the open-archive publication of two of our recent works resulted in colleague “pre-reviewers” sending us very interesting and relevant comments, because they cared about our work.
When discussion with reviewers deserves publication
During the review process of our Sports Medicine paper, a reviewer raised questions that we very often address during our lectures, conferences and interactions with other academics and practitioners. We had the opportunity (obligation…) to detail our points, and we were happy with the overall discussion. Unfortunately, this discussion does not appear in the final version of the paper, nor anywhere else. From a scientific perspective, this is a total waste of time, energy, and likely knowledge. So I decided to publish this discussion here, and I hope you will find it interesting.
Reviewer: About the first objective, the authors describe that push-off phase is a determinant aspect for jump performance. The main issue is that athletes with different lower limb segment lengths may have different hPO (considering the same knee starting angle), confounding the jump height-Pmax relationship. In addition, as stated by the authors, hPO is equivalent to squat depth. In this line of thought, some studies (McBride et al. 2010, Kirby et al. 2011, Gheller et al., 2015) have already shown that jump height, in both SJ and CMJ, was higher in the jumps performed from a deeper squat position when compared to the jumps performed from a smaller squat depth, and the opposite occurring for power output. In addition, higher impulse was produced in the jumps performed from a deeper squat depth. This suggests the impulse and not power output as the main determinant of jump performance (almost perfect correlation can be found between impulse and jump height). Thus, I suggest the authors add to the text that different hPO imply in the manipulation of the generated impulse and consequently final performance (jump height).
Response: The first comments here are in line with our point about hPO (or squat depth, indeed): it is a factor that influences (all other things being equal) jump height, and we have cited some theoretical and experimental studies showing it, so we do not think it is necessary to cite all the available studies on this topic. Whether this factor is itself influenced by anthropometric or “technical” components of performance is beyond the topic of this paper. In any case, all these factors are actually “integrated” into the hPO value.
As to the impulse-versus-power comment, we fully agree that, mechanically, impulse “explains” the change in center of mass velocity during push-off. However, we want to clarify that jumping performance is determined by the capability to accelerate body mass as much as possible, to reach the highest velocity at the end of the push-off, i.e. the lower limb extension. From Newton’s second law of motion, the velocity reached by the body center of mass (CM) at the end of a push-off (take-off velocity) directly depends on the mechanical impulse developed in the movement direction, i.e. the integral of the force produced over time (Winter 2005; McBride et al. 2010; Knudson 2009). However, the ability to develop a high impulse cannot be considered a mechanical capability of the neuromuscular system: we cannot say that an athlete presents a high “mechanical impulse ability”. The mechanical impulse is directly associated with the movement/task constraints, and not only with the individual’s properties. It is important to differentiate mechanical outputs characterizing the movement/task (e.g. external force, movement velocity, power output, impulse, mechanical work) from mechanical capabilities of the neuromuscular system, which represent the maximal limit of what the athlete’s muscles can produce. So, the issue is to identify which mechanical muscle capability(ies) determine(s) the ability to produce a high mechanical impulse.
Developing a high impulse during a lower limb push-off, and in turn accelerating a mass as much as possible, has often been assumed to depend on muscle power capabilities (Vandewalle et al. 1987; James et al. 2007; Yamauchi and Ishii 2007; Newton and Kraemer 1994; Frost et al. 2010; Samozino et al. 2008; McBride et al. 2010). This is why many sports performance practitioners interested in ballistic performance focus on improving muscular power (Cormie et al. 2011b; Frost et al. 2010; Cronin and Sleivert 2005; McBride et al. 2002; Cormie et al. 2011a). Moreover, in many sports, hPO is not a variable we can manipulate to improve performance, since it is fixed by the athlete’s properties (anthropometry, force-length relationship) and the sport’s requirements, notably the time the athlete has to perform the push-off: for instance, a basketball player cannot use a deep squat position since he has to jump quickly relative to his opponents, which is less the case for a volleyball player. We have recently shown that developing a high impulse during a push-off characterized by a given hPO, and thus jumping performance, depends on both maximal power and the FV profile (Samozino et al. 2014).
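The Newton’s-second-law argument above can be illustrated with a short numeric sketch. The body mass and net impulse values below are hypothetical, chosen only to show the chain from net impulse to take-off velocity to jump height; they are not taken from the paper:

```python
# Illustrative sketch: net vertical impulse -> take-off velocity -> jump height.
# Body mass and impulse values are hypothetical, not taken from the paper.
g = 9.81        # gravitational acceleration (m/s^2)
m = 75.0        # body mass (kg), hypothetical athlete
j_net = 225.0   # net vertical impulse over the push-off (N*s), hypothetical

v_to = j_net / m           # impulse-momentum theorem: J_net = m * v_to (starting from rest)
h = v_to ** 2 / (2 * g)    # ballistic flight: height reached by the CM after take-off

print(f"take-off velocity = {v_to:.2f} m/s, jump height = {h:.3f} m")
```

With these numbers the take-off velocity is 3.0 m/s and the jump height about 0.46 m. Note that the same net impulse yields a different height for a different body mass, consistent with the point above that impulse characterizes the task outcome rather than a capability of the neuromuscular system.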
REFERENCES CITED IN THIS RESPONSE:
Cormie P, McGuigan MR, Newton RU (2011a) Developing maximal neuromuscular power: part 1 – biological basis of maximal power production. Sports Med 41 (1):17-38
Cormie P, McGuigan MR, Newton RU (2011b) Developing maximal neuromuscular power: part 2 – training considerations for improving maximal power production. Sports Med 41 (2):125-146
Cronin J, Sleivert G (2005) Challenges in understanding the influence of maximal power training on improving athletic performance. Sports Med 35 (3):213-234
Frost DM, Cronin J, Newton RU (2010) A biomechanical evaluation of resistance: fundamental concepts for training and sports performance. Sports Med 40 (4):303-326
James RS, Navas CA, Herrel A (2007) How important are skeletal muscle mechanics in setting limits on jumping performance? J Exp Biol 210 (Pt 6):923-933
Knudson DV (2009) Correcting the use of the term “power” in the strength and conditioning literature. J Strength Cond Res 23 (6):1902-1908
McBride JM, Kirby TJ, Haines TL, Skinner J (2010) Relationship between relative net vertical impulse and jump height in jump squats performed to various squat depths and with various loads. Int J Sports Physiol Perform 5 (4):484-496
McBride JM, Triplett-McBride T, Davie A, Newton RU (2002) The effect of heavy- vs. light-load jump squats on the development of strength, power, and speed. J Strength Cond Res 16 (1):75-82
Newton RU, Kraemer WJ (1994) Developing explosive muscular power: implications for a mixed methods training strategy. Strength and Conditioning 16 (5):20-31
Samozino P, Edouard P, Sangnier S, Brughelli M, Gimenez P, Morin JB (2014) Force-velocity profile: imbalance determination and effect on lower limb ballistic performance. Int J Sports Med 35 (6):505-510. doi:10.1055/s-0033-1354382
Vandewalle H, Peres G, Monod H (1987b) Standard anaerobic exercise tests. Sports Med 4 (4):268-289
Winter EM (2005) Jumping: Power or Impulse. Med Sci Sports Exerc 37 (3):523-524
Yamauchi J, Ishii N (2007) Relations between force-velocity characteristics of the knee-hip extension movement and vertical jump performance. J Strength Cond Res 21 (3):703-709
Reviewer: It has been shown that mean power in vertical jump is not a good predictor of sports performance. Therefore, I question the authors if it is better to use jump height to evaluate athletes (which presents moderate correlation with peak power output) or to use the mean power obtained by equations.
Response: This is a very interesting and unresolved discussion, and we would be interested in reading the paper(s) that support your first statement. We think on the contrary that average power is likely a more appropriate approach, as detailed by Andrews (1983) who suggested that:
Instantaneous values are adapted to describe the value of a variable at a specific time of the movement (e.g. take-off during jumping or heel-ground contact during running) or to characterize extreme values of a parameter over a movement (e.g. extreme joint angles to compute range of motion, maximal running speed, minimum heart rate).
Averaged values (or more generally values representing a time interval) are adapted to characterize an effort or a movement in its entirety, notably when the parameter significantly changes over the effort or the movement.
It is worth noting that the two types of values are strongly related during ballistic movements, with, for instance, averaged power output values between 40 and 60% of maximal instantaneous values (Marsh 1994; Martin et al. 1997; Driss et al. 2001). So, the general shape of the Force- and Power-velocity relationships is almost exactly the same; only the magnitudes of the values change (Martin et al. 1997).
When we aim to evaluate muscle mechanical capabilities, we want to characterize the lower limbs’ maximal capability to produce force or power over one extension. However, force production capabilities change throughout the lower limb extension: in addition to being influenced by movement velocity, they are affected by the torque-angle (force-length) relationship of the muscle groups involved at each joint (e.g. Thorstensson et al. 1976), by the time required for muscles to reach their maximum active state (e.g. van Soest and Casius 2000), and by muscle coordination patterns (e.g. Suzuki et al. 1982; Van Soest et al. 1994). So, focusing only on the instantaneous peak value measured during a functional movement does not make much sense, since this value corresponds to a very specific anatomical and neuromuscular configuration and does not represent the whole dynamic lower limb capabilities. Consequently, even if it is still a source of debate (Dugan et al. 2004; Vandewalle et al. 1987b), we think that using force, velocity and power values averaged over the entire extension movement is well adapted to characterizing these mechanical capabilities (e.g. Arsac et al. 1996; Bassey and Short 1990; Samozino et al. 2012; Samozino et al. 2007; Rahmani et al. 2001). Finally, from a purely mechanical point of view, dynamic principles show that the change in momentum of a system depends directly on the net mechanical impulse applied to it over the entire movement. So, ballistic performance does not depend on the maximum force or power output the lower limb muscles are able to produce at a given (very short) instant during their extension, but rather on the force or power output the muscles are able to produce over the entire extension phase, which maximizes the net mechanical impulse. And this is better described by averaged than by peak instantaneous values.
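The peak-versus-mean distinction can be made concrete with a minimal sketch assuming a constant net force on the center of mass during the push-off. This is a deliberate simplification with hypothetical numbers (real force-time profiles are not constant), used only to show why the two quantities diverge:

```python
# Minimal sketch: mean vs. peak instantaneous power during a push-off,
# assuming a constant net vertical force (deliberate simplification,
# hypothetical numbers).
m = 75.0        # body mass (kg), hypothetical
f_net = 750.0   # constant net vertical force (N), hypothetical
t_po = 0.30     # push-off duration (s), hypothetical

a = f_net / m                 # constant acceleration of the CM
v_to = a * t_po               # velocity rises linearly up to take-off
p_peak = f_net * v_to         # instantaneous power is highest at take-off
p_mean = f_net * v_to / 2.0   # mean power over push-off (mean velocity = v_to / 2)

print(f"mean/peak power ratio = {p_mean / p_peak:.2f}")
```

Under this constant-force assumption the mean power is exactly 50% of the peak, which falls within the 40-60% range reported above for real ballistic movements (Marsh 1994; Martin et al. 1997; Driss et al. 2001).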
Finally, as to your last comment, this is exactly the aim of our paper: discussing the fact that in some cases (much more frequent than one might think), inferring lower limb maximal power output from a measurement of jump height is associated with small to large prediction errors (due to all the factors discussed). It is correct that in some cases two athletes with the same SJ or CMJ height also have the same maximal power output, but (i) we do not know this a priori and (ii) we think it is better to anticipate these potential errors and directly compute Pmax from jump height, as proposed here. This is why we opted for the conditional title “When jump height…” and not a more categorical one such as “Jump height is not a good…”.
REFERENCES CITED IN THIS RESPONSE:
Andrews GC (1983) Biomechanical measures of muscular effort. Med Sci Sports Exerc 15 (3):199-207
Arsac LM, Belli A, Lacour JR (1996) Muscle function during brief maximal exercise: accurate measurements on a friction-loaded cycle ergometer. Eur J Appl Physiol 74 (1-2):100-106
Bassey EJ, Short AH (1990) A new method for measuring power output in a single leg extension: feasibility, reliability and validity. Eur J Appl Physiol Occup Physiol 60 (5):385-390
Driss T, Vandewalle H, Quièvre J, Miller C, Monod H (2001) Effects of external loading on power output in a squat jump on a force platform: a comparison between strength and power athletes and sedentary individuals. J Sports Sci 19 (2):99-105
Dugan EL, Doyle TLA, Humphries B, Hasson CJ, Newton RU (2004) Determining the optimal load for jump squats: a review of methods and calculations. J Strength Cond Res 18 (3):668-674
Marsh RL (1994) Jumping ability of anuran amphibians. Adv Vet Sci Comp Med 38B:51-111
Martin JC, Wagner BM, Coyle EF (1997) Inertial-load method determines maximal cycling power in a single exercise bout. Med Sci Sports Exerc 29 (11):1505-1512
Rahmani A, Viale F, Dalleau G, Lacour JR (2001) Force/velocity and power/velocity relationships in squat exercise. Eur J Appl Physiol 84 (3):227-232
Samozino P, Horvais N, Hintzy F (2007) Why Does Power Output Decrease at High Pedaling Rates during Sprint Cycling? Med Sci Sports Exerc 39 (4):680-687
Samozino P, Rejc E, Di Prampero PE, Belli A, Morin JB (2012) Optimal Force-Velocity Profile in Ballistic Movements. Altius: citius or fortius? Med Sci Sports Exerc 44 (2):313-322
Suzuki S, Watanabe S, Homma S (1982) EMG activity and kinematics of human cycling movements at different constant velocities. Brain Res 240 (2):245-258.
Thorstensson A, Grimby G, Karlsson J (1976) Force-velocity relations and fiber composition in human knee extensor muscles. J Appl Physiol 40 (1):12-16.
Van Soest AJ, Bobbert MF, Van Ingen Schenau GJ (1994) A control strategy for the execution of explosive movements from varying starting positions. J Neurophysiol 71 (4):1390-1402.
van Soest O, Casius LJ (2000) Which factors determine the optimal pedaling rate in sprint cycling? Med Sci Sports Exerc 32 (11):1927-1934
Vandewalle H, Peres G, Monod H (1987b) Standard anaerobic exercise tests. Sports Med 4 (4):268-289
My collaborators and I have recently experienced some very intriguing review practices, such as papers under review being circulated and comments popping up on Twitter from well-informed people, or reviewers deliberately slowing down or blocking the review process because of arguments inconvenient to their own work, and so on. To see the glass half-full, I think it is possible to improve the review process overall (among other possibilities discussed elsewhere) with:
- Authors caring about better informing the Editor of the importance and impact of their work. A pre-review process might help form stronger arguments and a more efficient cover letter
- Editors caring about authors’ cover letter arguments, and about reviewer selection
- Reviewers caring about how to improve papers, whether they advise acceptance or rejection
- Community members (academics, students, practitioners) caring about sharing content (preprints, blog posts) they find interesting
2 thoughts on “Peer-Review isn’t dead – provided everybody cares”
This discussion with the reviewers was a master class for me. Congratulations on the text!