
Error catastrophe



Error catastrophe is a term used to describe the extinction of an organism (often in the context of microorganisms such as viruses) as a result of excessive RNA mutations. The term specifically refers to the predictions of mathematical models similar to that described below, and not to an observed phenomenon.

Many viruses 'make mistakes' (mutate) during replication. The resulting mutations increase the diversity of the viral population and help the virus evade recognition by a host's immune system in subsequent infections. The more mutations (mistakes) a virus makes during replication, the more likely it is to avoid recognition by the immune system and the more diverse its population will be (see the article on biodiversity for an explanation of the selective advantages of this). However, if it makes too many mutations, it may lose biological features which have evolved to its advantage, including its ability to reproduce at all.

The question arises: how many mutations can be made during each replication before the population of viruses begins to lose self-identity?

A basic mathematical model

Consider a virus which has a genetic identity modelled by a string of ones and zeros (e.g. 11010001011101...). Suppose that the string has fixed length L and that, during replication, the virus copies each digit one by one, making a mistake with probability q independently of all other digits.

Due to the mutations resulting from erroneous replication, there exist up to 2^L distinct strains derived from the parent virus. Let x_i denote the concentration of strain i; let a_i denote the rate at which strain i reproduces; and let Q_{ij} denote the probability of a virus of strain i mutating to strain j.
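As a concrete illustration of this copying model, here is a minimal sketch (the function and parameter names are ours, not from the article) that simulates one replication with an independent per-digit error probability q:

import random

# Copy a binary genome digit by digit, flipping each digit with
# independent probability q (the per-digit error rate of the model).
def replicate(genome, q):
    return "".join(
        ("1" if d == "0" else "0") if random.random() < q else d
        for d in genome
    )

parent = "1101000101110100"   # a string of length L = 16
child = replicate(parent, q=0.05)
print(child, "with", sum(p != c for p, c in zip(parent, child)), "mutation(s)")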

Then the rate of change of the concentration x_j is given by

\dot{x}_j = \sum_i a_i Q_{ij} x_i
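For a small genome this system can be integrated numerically. In the sketch below (illustrative parameters, not from the article), Q_{ij} is computed from the per-digit error rate q and the Hamming distance between strains, and strain 0 is taken to be the fittest:

import numpy as np

L = 4                        # genome length; 2^L = 16 strains
q = 0.05                     # per-digit error probability
n = 2 ** L

def hamming(i, j):
    return bin(i ^ j).count("1")

# Q[i, j]: probability that a copy of strain i comes out as strain j.
Q = np.array([[q ** hamming(i, j) * (1 - q) ** (L - hamming(i, j))
               for j in range(n)] for i in range(n)])

a = np.ones(n)
a[0] = 2.0                   # strain 0 (the 'master' sequence) is fittest

x = np.full(n, 1.0 / n)      # initial concentrations
dt = 0.01
for _ in range(5000):        # simple Euler integration
    x = x + dt * ((a * x) @ Q)
    x = x / x.sum()          # renormalise to track relative frequencies
print(x[0])                  # final fraction of the master sequence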

At this point, we make a mathematical idealisation: we pick the fittest strain (the one with the greatest reproduction rate a_j) and assume that it is unique (i.e. that the chosen a_j satisfies a_j > a_i for all i ≠ j); we then group the remaining strains into a single group. Let the concentrations of the two groups be x and y, with reproduction rates a > b; let Q be the probability of a virus in the first group mutating to a member of the second group, and let R be the probability of a member of the second group returning to the first (via an unlikely and very specific mutation). The equations governing the development of the populations are:

\begin{cases} \dot{x} = a(1-Q)x + bRy \\ \dot{y} = aQx + b(1-R)y \end{cases}

We are particularly interested in the case where L is very large, so we may safely neglect R and instead consider:

\begin{cases} \dot{x} = a(1-Q)x \\ \dot{y} = aQx + by \end{cases}
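A quick numerical check of this reduced system (a sketch with illustrative parameter values) shows the ratio x/y settling to a constant, anticipating the steady state derived below:

# Integrate the reduced two-group system with forward Euler steps and
# compare z = x/y with the steady-state value (a(1-Q) - b) / (aQ).
a, b, Q = 2.0, 1.0, 0.2      # illustrative values with a(1-Q) > b
x, y = 1.0, 1.0
dt = 0.001
for _ in range(20000):
    x, y = x + dt * a * (1 - Q) * x, y + dt * (a * Q * x + b * y)
print(x / y)                           # numerical z at large time
print((a * (1 - Q) - b) / (a * Q))     # predicted value, here 1.5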

Then setting z = x/y we have

\begin{align} \frac{dz}{dt} & = \frac{\dot{x} y - x \dot{y}}{y^2} \\ & = \frac{a(1-Q)xy - x(aQx + by)}{y^2} \\ & = a(1-Q)z - (aQz^2 + bz) \\ & = z(a(1-Q) - aQz - b). \end{align}

Assuming z approaches a steady value over time, z settles down to satisfy

z(\infty) = \frac{a(1-Q)-b}{aQ}

(which is deduced by setting the derivative of z with respect to time to zero).
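Explicitly, setting the derivative to zero in the last line above gives

z \left( a(1-Q) - aQz - b \right) = 0 \quad \Rightarrow \quad z = 0 \quad \text{or} \quad z = \frac{a(1-Q)-b}{aQ},

and the nonzero root is the relevant steady state whenever the fittest strain survives.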

So the important question is: for what parameter values does the original population persist (continue to exist)? The population persists if and only if the steady-state value of z is strictly positive, i.e. if and only if:

z(\infty) > 0 \iff a(1-Q)-b >0 \iff (1-Q) > b/a .

This result is more popularly expressed in terms of the ratio a : b and the error rate q of individual digits: set b/a = (1-s). Since an entire copy is error-free only when all L digits are copied correctly, 1-Q = (1-q)^L, and the condition becomes

z(\infty) > 0 \iff (1-Q) = (1-q)^L > 1-s

Taking logarithms on both sides and approximating for small q and s, one gets

L \ln{(1-q)} \approx -Lq > \ln{(1-s)} \approx -s

reducing the condition to:

Lq < s
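As a sanity check, the exact condition (1-q)^L > 1-s and the approximation Lq < s can be evaluated directly (the numbers below are illustrative, not from the article):

# Evaluate the exact persistence condition and its approximation.
L, q, s = 10_000, 5e-5, 0.6
print((1 - q) ** L > 1 - s)    # exact condition: True here
print(L * q < s)               # approximation:   True here
print(round(s / q))            # approximate maximum L at this q: 12000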

RNA viruses which replicate close to the error threshold have a genome size of order 10^4 base pairs. Human DNA is about 3.3 billion (3.3 × 10^9) base units long. This means that the replication mechanism for DNA must be orders of magnitude more accurate than that for RNA.
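Roughly, since s is at most of order one, the condition Lq < s bounds the per-digit error rate by q ≲ 1/L (our back-of-envelope reading of the result):

q \lesssim \frac{s}{L} \sim \frac{1}{L}: \qquad L \sim 10^4 \Rightarrow q \lesssim 10^{-4}, \qquad L \sim 3.3 \times 10^9 \Rightarrow q \lesssim 3 \times 10^{-10}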

The theory of error catastrophe has been criticized as being based on an unrealistic assumption, namely that all variants of the fittest strain j have a finite replication rate, i.e. that no replication error, or combination of errors, will cause replication to cease. Under this assumption, at error rates above the threshold value, a population of replicating organisms with random genomic sequences is produced which out-competes strain j, eventually driving it to extinction. The assumption is thus incompatible with the well-established principle in biology that the genomic sequence encodes the biological functions of the organism. The error catastrophe predicted by the mathematical model has not been convincingly shown to occur.

Applications of the theory

Some viruses, such as polio or hepatitis C, operate very close to the critical mutation rate (i.e. the largest q that their genome length L will allow). Drugs have been created to increase their mutation rate, in order to push them over the critical boundary so that they lose self-identity. However, given the criticism of the mathematical model's basic assumption, this approach is problematic.

Recently, scientists have discovered an enzyme (A3G) that may cause HIV to mutate to death, which could allow error catastrophe to become a usable treatment method for AIDS. [1] Researchers have also identified a pharmaceutical agent, KP-1461, that is similarly believed to act on HIV by introducing errors into the viral genome. It is incorporated into a copy strand of viral DNA as an analog of cytidine, which should normally pair with guanosine but instead pairs with adenosine, introducing a mutation into the genome. Over time these guanosine-to-adenosine mutations, which can occur randomly anywhere throughout the viral genome, would be expected to build up, leading to an error catastrophe. Although in vitro tests have demonstrated viral population collapse using this method, it has not been proven to work in vivo. [2]


In repeated in vitro studies, KP-1461 has demonstrated irreversible viral extinction. Results of Phase 1 studies show that treatment with KP-1461 is generally safe and well tolerated when given for two weeks to people infected with HIV. Most of the side effects seen in this study were mild to moderate, and no patients stopped the study because of side effects. Though these studies (10 days of treatment) were not expected to show any effect on the virus, trends in the amount of virus in the blood, and in how well the virus functioned after exposure to KP-1461, were encouraging. KP-1461 is currently being studied in the next phase of drug development: a Phase 2a trial is underway to evaluate the safety, efficacy and tolerability of KP-1461 when administered to treatment-experienced HIV patients (twice a day for 4 months). The study is ongoing at 30 HIV-research specialty centers in the United States, including Kansas City University of Medicine and Biosciences' Dybedal Clinical Research Center, under the direction of Dr. Clay.

The error-threshold result also introduces a catch-22 mystery for biologists: in general, large genomes are required for accurate replication (high-fidelity replication is achieved with the help of enzymes, which a large genome must encode), but a large genome requires a low per-digit error rate q to persist. Which comes first, and how does it happen? An illustration of the difficulty: since Lq < s with s at most of order one, L can only be about 100 if the per-digit accuracy 1 - q is at least 0.99, a very small string length in terms of genes.

 
This article is licensed under the GNU Free Documentation License. It uses material from the Wikipedia article "Error_catastrophe". A list of authors is available in Wikipedia.