RESEARCHERS IDENTIFY EVOLUTIONARY ORIGINS OF SARS-COV-2 – IT’S NOT RaTG13
The researchers found that the lineage of viruses to which SARS-CoV-2 belongs diverged from other bat viruses about 40-70 years ago. Importantly, although SARS-CoV-2 is genetically similar (about 96%) to the RaTG13 coronavirus, which was sampled from a Rhinolophus affinis horseshoe bat in 2013 in Yunnan province, China, the team found that it diverged from RaTG13 a relatively long time ago, in 1969.
What evidence do they offer to support their claim? I hope Dr. Mayer will kindly respond.
OK, now we’ve got to find that bat in the flesh and not just on a computer.
Funny thing about virus evolution…it can happen a LOT faster in the lab.
The evolutionary clock assumes a rate of mutation, both synonymous and non-synonymous. Count the mutations up, multiply by the average time per mutation, and – voilà! – you get the age of the divergence.
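For concreteness, the clock arithmetic described above can be sketched in a few lines. All three constants below are assumed round numbers for illustration, not values inferred by the paper:

```python
# A naive molecular-clock estimate (a sketch; all three constants below are
# assumed round numbers, not values inferred by the paper).

GENOME_LENGTH = 29_900          # approx. SARS-CoV-2 genome size (nt)
IDENTITY = 0.96                 # reported SARS-CoV-2 / RaTG13 identity
RATE = 1e-3                     # assumed substitutions per site per year

divergence = 1 - IDENTITY                      # fraction of differing sites
differing_sites = divergence * GENOME_LENGTH   # ~1,200 sites

# Both lineages accumulate substitutions after the split, hence the factor 2.
years_since_split = divergence / (2 * RATE)

print(f"~{differing_sites:.0f} differing sites")
print(f"naive estimate: ~{years_since_split:.0f} years since divergence")
```

With these round numbers the naive answer is about 20 years; halving the assumed rate doubles it. The estimate is entirely at the mercy of the assumed rate, which is exactly the sensitivity being argued about in this thread.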
Thing is, those are the sorts of things that are routinely sped up in the lab, either by performing very aggressive serial passages which speed up the clock or by actively introducing mutagens, which is also a thing.
Any time I see an ‘article’ like this one that presupposes that any changes must have happened naturally – and that the assessed 50-70-year evolutionary span is therefore proof this happened outside of a lab – well, I know I am being misled.
That’s not how deductive logic actually works.
Chris is completely right: the technology to artificially speed up virus evolution is quite well established. Try researching error-prone PCR. With that technology it is possible to introduce random mutations, commonly at a rate of 1-3 mutations per 1,000 nucleotides per PCR cycle! After 10-20 cycles of error-prone PCR we will have generated a number of random mutations equal to the difference between RaTG13 and SARS-CoV-2.

Subsequently we transfect cells with the generated mutants; this allows the removal of disadvantageous mutations, as these will not proliferate in cell culture. After culturing, we separate out the virus particles, isolate their RNA molecules, and perform error-prone PCR again. Iterate a few times and you have a new virus that has optimized its ability to grow in certain host cells while accumulating some non-disabling mutations, making it appear as a new virus when compared to the original.
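Under the rate quoted above (1-3 mutations per 1,000 nt per cycle), the accumulation can be sketched with a simple linear model. Real error-prone PCR accumulation also depends on how templates are duplicated each cycle, so treat these numbers as order-of-magnitude only:

```python
# Linear sketch of mutation accumulation in error-prone PCR, using the rate
# quoted above. Ignores template-duplication dynamics; order-of-magnitude only.

GENOME_LENGTH = 29_900   # approx. SARS-CoV-2 genome length (nt)
RATE_PER_NT = 0.002      # assumed midpoint of 1-3 mutations per 1,000 nt/cycle

def expected_mutations(cycles: int) -> float:
    """Expected mutation count after `cycles` rounds (linear approximation)."""
    return GENOME_LENGTH * RATE_PER_NT * cycles

# RaTG13 and SARS-CoV-2 differ at roughly 4% of sites.
target_gap = 0.04 * GENOME_LENGTH
for cycles in (5, 10, 20):
    frac = expected_mutations(cycles) / target_gap
    print(f"{cycles:2d} cycles: ~{expected_mutations(cycles):.0f} mutations "
          f"({frac:.0%} of the RaTG13 gap)")
```

At the assumed midpoint rate, 20 cycles is indeed roughly enough to cover the whole 4% gap, which is where the 10-20 cycle figure comes from.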
If it wasn’t for the health risk, most undergrads within molecular biology, biochemistry or biotech related educations could perform the experiments. It is that basic!
The methodology used in this study is extremely questionable. Honestly, common sense tells you that unless they had genetic material from 60 years ago, this is pure speculation based on sketchy modeling.
@JoeVickers – Bayesian phylogenetic tree construction is a fairly routine technique – granted, there are some priors that need to be input, which can change the outcome. I can see Chris’s point about how the time to mutate can be reduced to a very short period in a lab. But it also raises the question of whether it was an accident or a deliberate release. If an accident, you would expect there to have been far more accidents while tinkering with these viruses. If it was deliberately released, then it makes no sense that it was released next to the research center.
Also, the error-prone PCR theory seems to be a bit of a leap. Here is the paper and the phylogenetic tree, in case you are interested.
Dr. Martenson’s previous analysis of this paper points to its weaknesses regarding the speed of natural virus evolution compared to the speed of lab-induced creation, and shows that it is misleading to say that natural evolution is more likely.
Also, the paper appears to be based more on the writers’ opinionated review of history than on observation of laboratory data derived in accordance with the scientific method.
Wait, so if there are 0.1-0.3% mutations per PCR cycle, and PCR manufacturer guidelines suggest anywhere between 30 and 45 cycles, with 40 being the most popular cycle threshold (Ct or Cq)… that’s a 3-13.5% variation!
The MIQE guidelines for the use and reporting of RT-qPCR warn that “Cq values >= 40 are suspect because of the implied low efficiency and generally should not be reported”, specifically warning of the risk of false positives.
Yet even at 40 cycles that is still a huge variation of 4-12%.
This is why some people have said that, if you set the sensitivity cutoff at 20 cycles, everybody(?) could be negative, while if you set it to 50, everybody could be positive… Crazy!
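Spelling out the cycle-count arithmetic above (the figures are the ones quoted in this thread, used here purely for illustration, not measured values):

```python
# Per-cycle rate times number of cycles, using the figures quoted in the
# thread above (illustrative values only).

RATE_RANGE = (0.001, 0.003)   # 0.1-0.3% per cycle, as quoted
CYCLE_RANGE = (30, 45)        # cycle counts mentioned above

lo = RATE_RANGE[0] * CYCLE_RANGE[0]               # best case:  0.1% x 30
hi = RATE_RANGE[1] * CYCLE_RANGE[1]               # worst case: 0.3% x 45
at_40 = (RATE_RANGE[0] * 40, RATE_RANGE[1] * 40)  # at the popular 40 cycles

print(f"range: {lo:.1%} to {hi:.1%}")
print(f"at 40 cycles: {at_40[0]:.0%} to {at_40[1]:.0%}")
```

That reproduces the 3-13.5% spread, and the 4-12% figure at 40 cycles.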
From what I understand, the tests look for any segments of the SARS-CoV-2 genome, which amounts to about 1% or less of the total genome. How many segments to look for is another decision that varies from lab to lab and country to country. Some look for only 1, some look for 2 but require only 1, some look for 2 and require 2, some look for 3 and require 2, and some look for 3 and require 3.
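The lab-to-lab variation described above can be sketched as positivity rules: each hypothetical policy pairs the number of genome segments tested with the number required for a positive call. The policy list mirrors the combinations named in the comment; the sample is invented for illustration:

```python
# Hypothetical (segments tested, segments required) policies, mirroring the
# combinations listed above.
POLICIES = [(1, 1), (2, 1), (2, 2), (3, 2), (3, 3)]

def is_positive(detected: int, required: int) -> bool:
    """Call a sample positive when at least `required` targets amplify."""
    return detected >= required

# One hypothetical sample, in which 2 of up to 3 targets would amplify,
# gets different calls under different policies:
for tested, required in POLICIES:
    detected = min(2, tested)  # only targets actually tested can be detected
    verdict = "positive" if is_positive(detected, required) else "negative"
    print(f"test {tested}, require {required}: {verdict}")
```

Under these rules the same sample comes back positive everywhere except in the strictest 3-of-3 lab, which is the point: the call depends on policy, not just on the sample.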
Dr. Mayer says hCoV-OC43 and SARS-CoV-2 (both betacoronaviruses) have 66% RdRp sequence homology, which makes them confusable by PCR testing, leading to probably over 40% false positives…
So what are all the factors influencing PCR false positives?
-mutations during amplification (0.1-0.3% per cycle)
-number of amplification cycles (30-45)
-number of segments amplified and compared (1-3)
-presence of hCoV-OC43 (is routinely around and can manifest as a common cold)
I’m sure I’m missing some relevant factors here — hopefully Dr. Mayer can bring more clarity to the picture.
Notice that the technology I describe is called error-prone PCR. It uses a low-fidelity DNA polymerase, such as Taq without proofreading, in buffer conditions that increase the incorporation of incorrect nucleotides.
A normal PCR reaction can be run with high-fidelity DNA polymerases with proofreading, such as Pfu or Phusion, in buffer conditions that inhibit the incorporation of incorrect nucleotides.
While both are PCR techniques, their uses are significantly different.
A comparison of high fidelity DNA polymerases can be found here: https://www.hindawi.com/journals/mbi/2014/287430/
Notice that their Taq polymerase has proofreading activity and therefore a roughly 100x lower mutation rate than the rate in error-prone PCR.
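To put rough numbers on that fidelity gap, here is a back-of-envelope comparison for the polymerase classes discussed above. The per-base error rates are ballpark values I am assuming for illustration, not figures taken from the linked review:

```python
# Rough expected-error comparison for the polymerase classes discussed above.
# The per-base error rates are assumed ballpark values, not figures from the
# linked review.

GENOME_LENGTH = 29_900   # approx. SARS-CoV-2 genome length (nt)
CYCLES = 30              # an assumed number of PCR cycles

ERROR_RATES = {
    "error-prone PCR (Taq, mutagenic buffer)": 2e-3,  # ~1-3 per 1,000 nt/cycle
    "standard Taq": 1e-5,                             # typical order of magnitude
    "proofreading (Pfu/Phusion class)": 1e-6,         # ~10-100x better than Taq
}

for name, rate in ERROR_RATES.items():
    expected_errors = rate * GENOME_LENGTH * CYCLES
    print(f"{name}: ~{expected_errors:,.1f} expected errors in {CYCLES} cycles")
```

With these assumed rates, the mutagenic setup produces errors by the thousand over a genome-length template, while the high-fidelity enzymes stay in single digits or below, which is why the two techniques serve such different purposes.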