Monday, May 25, 2015

QC and the limits of detection

It’s been a while since I’ve blogged about anything – things have been quite busy around SMRU and I haven't had many chances to sit down and write for fun [i] (some press about our current work here, here, here and here). 

Today I’m sitting in our home, listening to waves of bird and insect songs, and the sound of a gentle rain. It is the end of another hot season and the beginning of another rainy season, now my 5th in a row here in northwestern Thailand, and the 3rd rainy season for our family (my wife Amber, our son Salem, and me) since moving here full time in 2013. The rains mean that the oppressive heat will subside a bit. They also mean that malaria season is taking off again.


Just this last week the final chapter of my dissertation was accepted for publication in Malaria Journal.  What I’d like to do over a few short posts is to flesh out a couple of parts of this paper, hopefully to reach an audience that is interested in malaria and infectious diseases, but perhaps doesn’t have time to keep up on all things malaria or tropical medicine.  This also gives me a chance to go into more detail about specific parts of the paper that I needed to condense for a scientific paper. 


The project that I worked on for my dissertation included several study villages, made up of mostly Karen villagers, on the Thai side of the Thailand-Myanmar border. 

This particular research had several very interesting findings. 

In one of the study villages we did full blood surveys, taking blood from every villager who would participate, every 5 months, for a total of 3 surveys over 1 year. These blood surveys included making blood smears on glass slides as well as taking blood spots on filter papers that could later be PCR’d to test for malaria. Blood smears are the gold standard of malaria detection (if you're really interested, see here and here). A microscopist uses the slides to look for malaria parasites within the blood. Diagnosing malaria this way requires some skill and training. PCR is generally considered a more sensitive means of detecting malaria, but isn’t currently a realistic approach to use in field settings.[ii]

Photo: Collecting blood on both slides (for microscopic detection of malaria) and filter papers (for PCR detection of malaria).


The glass slides were prepared with a staining chemical and immediately read by a field microscopist, someone who works at a local malaria clinic, and anyone who was diagnosed with malaria was treated. The slides were then shipped to Bangkok, where an expert microscopist, someone who has been diagnosing malaria using this method for over 20 years and who trains others to do the same, also read through the slides. Then, the filter papers were PCR’d for malaria DNA. In this way we could compare three different modes of diagnosing malaria – the field microscopist, the expert microscopist, and PCR.

And basically what we found was that the field microscopist missed a whole lot of malaria. Compared to PCR, the field microscopist missed about 90% of all cases (detecting 8 cases compared to 75 that were detected by PCR). Even the expert microscopist missed over half of the cases, detecting only 34 of the 75 infections.
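For anyone who wants to see the arithmetic behind those percentages, here is a minimal sketch in Python using only the three counts reported above (treating PCR as the reference standard is my framing for this illustration, not something taken verbatim from the paper):

```python
# Back-of-the-envelope sensitivity relative to PCR, using the counts above.
pcr_detected = 75      # infections detected by PCR (used as the reference here)
field_detected = 8     # infections detected by the field microscopist
expert_detected = 34   # infections detected by the expert microscopist

for label, detected in [("field microscopist", field_detected),
                        ("expert microscopist", expert_detected)]:
    sensitivity = detected / pcr_detected
    print(f"{label}: detected {detected}/{pcr_detected} "
          f"({sensitivity:.0%} of PCR-positive infections), "
          f"missed {1 - sensitivity:.0%}")
```

That works out to roughly 11% detected (89% missed) for the field microscopist and roughly 45% detected (55% missed) for the expert – which is where the “about 90%” and “over half” figures come from.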

What does this mean though? 

Let’s start with how this happens. To be fair, it isn’t just that the microscopists are bad at what they do. There are at least two things at play here: one has to do with training and quality control (QC), while the other has to do with limits of detection.

Microscopy requires proper training, upkeep of that training, quality control systems, and upkeep of the materials (reagents, etc.). In a field setting, all of these things can be difficult. Mold and algae can grow inside of a microscope. The chemicals used in microscopy, in staining the slides, etc. can go bad, and will probably do so more quickly under very hot and humid conditions. In more remote areas, retraining workshops and frequent quality control testing are more difficult to accomplish and therefore less likely to happen. There is a brain drain problem too. Many of the most capable laboratory workers leave remote settings as soon as they have a chance (for example, if they can get a better salary and benefits for doing the same job elsewhere – perhaps by becoming an “expert microscopist”?).

Regarding the second point, I think that most people would expect PCR to pick up more cases than microscopy. Part of this comes down to sampling probability. When we prick someone’s finger and make a blood smear on a glass slide, there is a chance that even if there are malaria parasites in that person’s blood, none of them wind up in the blood that ends up on the slide. The same is true when we make a filter paper for doing PCR work. On top of that, the microscopist is unlikely to look at every single spot on the glass slide, so there is a second layer of chance: there could be parasites on the slide, but only in a far corner where the microscopist doesn’t happen to look. These are the infections that would hopefully be picked up through PCR.

Presumably, many of these infected people had a low level of parasitemia, meaning relatively few parasites in their blood, making it more difficult to catch the infection through microscopy. Conversely, when people have lots of parasites in their blood, the infection should be easier to catch regardless of the method of diagnosis.
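To make the sampling and detection-limit point a bit more concrete, here is a toy model. If parasites are spread more or less randomly through the blood, then the number that ends up in the small volume that actually gets examined is roughly Poisson-distributed, and the chance of seeing none at all climbs quickly as parasite density drops. The volumes below are illustrative assumptions of mine (on the order of 0.25 µL effectively examined on a thick smear versus a few microlitres of blood represented in a PCR reaction), not figures from our protocols:

```python
import math

def chance_of_missing(parasites_per_uL, examined_uL):
    """Probability of seeing zero parasites, assuming a Poisson-distributed
    count in the examined volume (a deliberately simple toy model)."""
    expected_parasites = parasites_per_uL * examined_uL
    return math.exp(-expected_parasites)

# Illustrative assumptions only: ~0.25 uL examined under the microscope,
# ~5 uL of blood effectively represented in a PCR reaction.
for density in [1, 5, 20, 100]:  # parasites per microlitre of blood
    print(f"{density:>4} parasites/uL | "
          f"missed by microscopy: {chance_of_missing(density, 0.25):.0%} | "
          f"missed by PCR sampling: {chance_of_missing(density, 5):.0%}")
```

This ignores everything else that can go wrong (staining quality, reader fatigue, and so on), so real-world microscopy will miss even more than pure sampling suggests. The point is simply that when parasitemia is low, the blood that is actually examined may genuinely contain no parasites at all.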


These issues lead to a few more points. 

Some people have very few parasites in their blood while others have many.  The common view on this is that in high transmission[iii] areas, people will be exposed to malaria very frequently during their life and will therefore build up some immunity.  These people will have immune systems that can keep malaria parasite population numbers low, and as a result should not feel as sick.  Conversely, people who aren’t frequently exposed to malaria would not be expected to develop this type of “acquired” (versus inherited or genetic) immunity.  Here in Southeast Asia, transmission is generally considered to be low – and therefore I (and others) wouldn’t normally expect high levels of acquired malaria immunity.  Why then are we finding so many people with few parasites in their blood? 

Furthermore, those with very low numbers of parasites may not know they’re infected. In fact, even if they are tested by the local microscopist they might not be diagnosed (probably because their infection is “submicroscopic” – that is, below what microscopy can reliably detect). From further work we’ve been doing along these lines at SMRU and during my dissertation work, it seems that many of these people don’t have symptoms, or if they do, those symptoms aren’t very strong (that is, some are “asymptomatic”).

It also seems like this isn’t exactly a rare phenomenon, and this leads to all sorts of questions: How long can these people actually carry parasites in their blood – that is, how long does a malaria infection like this last? In the paper I’m discussing here we found a few people with infections across multiple blood screenings. This means it is at least possible that they had the same infection for 5 months or more (people with no symptoms, who were only diagnosed by PCR quite a few months later, were not treated for malaria infection). Also, does a person with very few malaria parasites in her blood, with no apparent symptoms, actually have “malaria”? If they’re not sick, should they be treated? Should we even bother telling them that they have parasites in their blood? Should they be counted as a malaria case in an epidemiological data system?

For that matter, what then is malaria?  Is it being infected with a Plasmodium parasite, regardless of whether or not it is bothering you?  Or do you only have malaria when you're sick with those classic malaria symptoms (periodic chills and fevers)?  

Perhaps what matters most here though is another question: Can these people transmit the disease to others? Right now we don’t know the answer to this question. It is not enough to only have malaria parasites in your blood – you must have a very specific life stage of the parasite, the gametocyte, present in your blood in order for an Anopheles mosquito to pick the parasite up when taking a blood meal. The PCR methods used in this paper would not allow us to differentiate between life stages – they only tell us whether or not malaria parasites are present. This question should, however, be answered in future work.




*** As always, my opinions are my own.  This post and my opinions do not necessarily reflect those of Shoklo Malaria Research Unit, Mahidol Oxford Tropical Medicine Research Unit, or the Wellcome Trust. For that matter, it is possible that absolutely no one agrees with my opinions and even that my opinions will change as I gather new experiences and information.  

    





[ii] PCR can be expensive, can take time, and requires machinery and materials that aren’t currently practical in at least some field settings.
[iii] In a high transmission area, people would have more infectious bites by mosquitoes per unit of time when compared to a low transmission area. For example, in a low transmission area a person might only experience one infectious bite per year, whereas in a high transmission area a person might have one infectious bite per month.

2 comments:

Peter said...

How are you guarding against false positives in the PCR reaction? Did (any/some/all) of the individuals with PCR-detected malaria subsequently progress to having microscopically detectable disease?

Unknown said...

Hi Peter, thanks for the question.

I would say that there is always a possibility of there being false positives. The only way to know that there are false positives is to compare the method (in this case, nested PCR) to a method that is superior. In this case the “gold standard” is microscopy, but in my opinion the only reason microscopy remains the gold standard is because it has been for such a long period of time, not because it is actually a superior approach.

There have been several studies that looked at variability in detection by microscopy; some have indicated that in low parasite density infections, that variability is greater (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2504332/). What I think is going on here is that there are strong limits to detection. I would expect that a microscopist who isn’t as well trained or doesn’t have high quality lab materials would find fewer cases than one who is well trained, but that even one who is well trained wouldn’t be able to find as many cases as a more sensitive approach such as nested PCR.

Our findings in this study are quite consistent with this explanation. Furthermore, the nested PCR was done in two rounds – the first looking for malaria at the genus level and then a second round looking for malaria at the species level – and in this case we were always able to identify a species in the second round. This gives me further confidence in this approach. (If you’re interested, here is a link to our paper describing this approach, the ways we’ve tried to verify it, and its sensitivity and specificity: http://www.malariajournal.com/content/13/1/175 ). However, if we’d used an even more sensitive approach (such as quantitative PCR) we may have found even more undetected cases than we did by nested PCR, and some of those may have had such low parasitemias that we might not have been able to differentiate them at the species level using a second round of nested PCR, genotyping, etc.

I don’t have the numbers with me right now, but a few of the people who were detected by PCR only did later have symptomatic malaria. This particular study wasn’t set up to look at that; the participants could have visited a malaria post or clinic that wasn’t a part of this study and we would never have known. We would also have to do genotyping of the parasites to see if the parasite strain being carried by the participant at an earlier time point was in fact the same as the parasite strain that eventually led to that patient being ill. It is possible that a person has an asymptomatic infection (perhaps of a given genotype), is later infected by a new, unfamiliar parasite strain, and then develops a symptomatic infection because of that new strain. There would be some difficulties even with this because of mixed-strain infections. If we had done this, though, I wouldn’t take it as a gold standard. Asymptomatic malaria is a well-known, though perhaps not very well understood, phenomenon.

So perhaps this isn’t a very convincing answer. I think that there is a possibility of false positives, but I don’t think it would change our results here. I think that the most likely scenario is that asymptomatic, low-parasitemia cases are missed because they are difficult to detect using methods (like microscopy) that are less sensitive than DNA-based methods – and there are a growing number of other studies coming to the same conclusions. As with several other “gold standards” in tropical infectious diseases (probably in other diseases too), I think that our gold standard for malaria just isn’t very good.
Daniel