Thursday, April 18, 2013

PLoS ONE rejects our paper

Background:

Then this:

PONE-D-13-07851R1
In Silico Screening of 393 Mutants Facilitates Enzyme Engineering of Amidase Activity in CalB
PLOS ONE

Dear Dr. Jensen,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we have decided that your manuscript does not meet our criteria for publication and must therefore be rejected. 

Specifically:

The reviewers have considered your responses and revisions and did not find them convincing. They again raised strong concerns about the methodology and the overall significance of the conclusions.

I am sorry that we cannot be more positive on this occasion, but hope that you appreciate the reasons for this decision.

Yours sincerely,

xxx
Academic Editor
PLOS ONE


Reviewers' comments:



Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass this form and submit your "Accept" recommendation.

Reviewer #1: (No Response)

Reviewer #2: (No Response)



Please explain (optional).

Reviewer #1: (No Response)

Reviewer #2: In the revised version of the manuscript entitled "In silico screening of 393 mutants facilitates enzyme engineering of amidase activity in CalB" by Martin R. Hediger, Luca De Vico, Allan Svendsen, Werner Besenmatter and Jan H. Jensen, the authors have included minor changes, stating that the presented assay is intended to deliver potentially interesting mutants for further study.
As the authors claim in their answer, they compare to experiment, the gold standard in science. This comparison is missing for all presented mutants. Unfortunately, no further experimental or computational characterization of the selected mutants was carried out, so the study remains inconclusive and incomplete. As in experimental screening methods, a rescreening of interesting hits is mandatory in any case.
Regarding my specific questions, none of them was sufficiently answered and no changes were applied to the manuscript.
E.g., my simple question was why certain active site residues were not considered in the chosen set. The answer, that the criteria are already given in the text, is complete nonsense, because all residues questioned by me fulfill exactly the authors' diffuse criteria, yet were not selected. This is highly disappointing and not scientifically sound, because from the given criteria one is not able to reproduce the expert choice of residues performed by the authors. Especially for the protonation of P38H, I expected a more competent answer from the group of Prof. Jensen instead of no answer at all.
Regarding the quantitative interpretation of the computed results, the authors claim that the intent of the method is not a quantitative ranking. Nevertheless, they still give a discrete energy cutoff in the paper, suggesting a quantitatively meaningful barrier to the reader.
If the goal is just to identify N interesting mutants from a larger subset, this should be clarified in the manuscript and not only in the answer to the editor. 
I can see no attempt at a scientific discussion of the accuracy and the aim of the method compared with current state-of-the-art methods for predicting enzyme activity and the conformational space of protein mutants. Therefore the scientific perspective and an evaluation of the scientific contribution with regard to existing methods are completely missing. This is in my view not acceptable for a scientific publication, even from an industrial perspective.



2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: No

Reviewer #2: Partly



Please explain (optional).

Reviewer #1: This paper is based on unsound methodology.

1. As any textbook will show, the transition state is a saddle point and should therefore have only one negative eigenvalue. Calculation of the eigenvalues is therefore standard and common practice to prove that the transition state was indeed obtained. The authors' claim in the rebuttal that a vibrational analysis would not be valid does not make any sense; without eigenvalues it cannot be proven that a transition state was obtained. Such a proof is absolutely necessary to show that the method works (especially given my other concerns, see below).
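The eigenvalue test the reviewer describes is easy to illustrate: at a first-order saddle point the Hessian has exactly one negative eigenvalue. A minimal sketch with NumPy on a toy double-well potential (a made-up 2D surface, not the CalB system or the authors' PM6 calculations):

```python
import numpy as np

# Toy potential with minima at (±1, 0) and a first-order saddle
# point at the origin; purely illustrative, not the CalB system.
def potential(x, y):
    return (x**2 - 1)**2 + y**2

def hessian(f, x, y, h=1e-4):
    """Central finite-difference Hessian of f at (x, y)."""
    fxx = (f(x + h, y) - 2*f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2*f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4*h**2)
    return np.array([[fxx, fxy], [fxy, fyy]])

# Eigenvalues at the candidate transition state (0, 0):
eigvals = np.linalg.eigvalsh(hessian(potential, 0.0, 0.0))
n_negative = int(np.sum(eigvals < 0))
print(eigvals, n_negative)  # one negative eigenvalue -> genuine saddle
```

The analytic Hessian at the origin has eigenvalues (-4, 2), so the check reports exactly one negative eigenvalue; zero or two negative eigenvalues would mean the optimized structure is not a transition state.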

2. The authors cherry-picked data by deciding that certain shapes of the transition barrier should be thrown away, that certain atoms moved too much in the minimization and should be held fixed, etc. How can anyone believe this is a proper procedure, with so much arbitrary and manual input, especially without further proof that the transition states were indeed identified?

3. All this manual and arbitrary input indicates that the procedure is not robust; therefore, it cannot be used for high-throughput screening.

4. The authors' claim in the rebuttal that an error analysis is not needed since a comparison is made to experiments would be correct if exactly the same property were compared in the experiments as in the computation, but here this is not the case. Experimental activities are characterized by kcat/KM, while the computationally obtained number is a barrier height. Since the entropic contribution is missing and since the authors do not know the value of the transmission coefficient, not even kcat can be correctly calculated. Given the arbitrariness of the procedure, and the inherent limitations of a semiempirical method like PM6, an error analysis would be highly appropriate.
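The reviewer's point about barrier heights versus rates can be made concrete with the Eyring equation from transition-state theory (assuming a transmission coefficient of 1, exactly the assumption the reviewer objects to). Because the barrier enters exponentially, even a small error in the computed barrier shifts the predicted rate substantially; the barrier values below are arbitrary, chosen only to show the sensitivity:

```python
import math

KB = 1.380649e-23      # Boltzmann constant, J/K
H = 6.62607015e-34     # Planck constant, J*s
R = 1.98720425864e-3   # gas constant, kcal/(mol*K)

def eyring_rate(barrier_kcal, T=298.15):
    """TST rate constant (1/s) from a free-energy barrier in kcal/mol,
    assuming a transmission coefficient of 1."""
    return (KB * T / H) * math.exp(-barrier_kcal / (R * T))

# A 1 kcal/mol change in the barrier alters the rate roughly 5-fold
# at room temperature, which is why an error analysis on computed
# barriers matters before comparing to experimental activities.
ratio = eyring_rate(15.0) / eyring_rate(16.0)
print(round(ratio, 1))  # ≈ 5.4
```

This sensitivity, combined with the missing entropic contribution and unknown transmission coefficient, is what makes a direct comparison of PM6 barrier heights to kcat/KM values questionable without error bars.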

In conclusion, the desire to have a high-throughput algorithm has led to far too many concessions on accuracy and robustness; without further proofs, the accuracy of the data and conclusions is in question.


Reviewer #2: (No Response)
