Hints on evaluating a study
Given how frequently studies are posted on a.s.m., and how many press releases and news reports begin with "a study released today" or "studies suggest," it is important to know how to evaluate a study and examine its conclusions. This brief outline lists some of the principles needed to do such evaluations. 

(A mnemonic: People say fleas on every dog really seem unhealthy) 

1. Examine population.
2. Look at size of study. 
3. Find source of funds if possible. 
4. How are results obtained? 
5. Are endpoints or actual diseases being studied? 
6. How is prevention/improvement defined? 
7. Look at raw numbers and beware if none are supplied. 
8. Does the study support the conclusion? 
9. Is the conclusion being used appropriately in subsequent claims? 

1. Examine the population of the study from two different angles
   a) What is the relationship of the study population to the general population as a whole? Do the study participants represent a large cross-section of the general population, or is the study confined to a very small, isolated group? Have they been chosen at random? If not, who has been excluded or included? 

    b) Are controls and subjects evenly matched on all relevant variables? If not, does the study point out the variables that may have affected the results but could not be controlled? 
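To see why random assignment matters, here is a minimal sketch in Python (the participant pool and the age covariate are hypothetical): with a reasonably large group, randomization tends to balance a variable like age across treatment and control, which is the kind of balance question b) asks about.

    import random
    import statistics

    # Hypothetical pool of 1,000 participants aged 40-70.
    random.seed(1)
    ages = [random.randint(40, 70) for _ in range(1000)]

    # Randomly assign half to treatment and half to control.
    random.shuffle(ages)
    treatment, control = ages[:500], ages[500:]

    # With random assignment the group averages should be close;
    # a large gap would flag the matching problem described above.
    print(statistics.mean(treatment), statistics.mean(control))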

2. Look at the size of the study population
The larger the study population, the better. How long a time period does the study cover? Again, the longer the time period, the better. 
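A rough way to see why size matters: the uncertainty in an observed rate shrinks with the square root of the sample size. A minimal sketch, assuming a simple proportion and the usual normal approximation (the 30% response rate is hypothetical):

    import math

    # Approximate 95% margin of error for an observed proportion p
    # in a sample of size n (normal approximation).
    def margin_of_error(p, n):
        return 1.96 * math.sqrt(p * (1 - p) / n)

    # The same 30% rate is far less certain in a small study:
    # roughly +/-16 points at n=30, but +/-0.5 points at n=30,000.
    for n in (30, 300, 3000, 30000):
        print(n, round(margin_of_error(0.30, n), 3))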

3. Who is financing/supporting the study?
Who is supplying the medication if medication is being studied? This is often difficult to establish, since drug companies may route their funding through intermediaries to obscure their involvement. Assume that the drug company providing the medication has a vested interest in the results, and read them with that caveat in mind. 

4. How are the results being obtained?
    a) Self-reporting, or relying on the memory of the participant, is the least reliable method. Retrospective studies that rely on old medical records are marginally more reliable, but they assume that the old medical record is accurate - a somewhat questionable assumption. Current, ongoing objective testing of the participants is the most reliable method. 

    b) Is there a distinction made between in vivo results (i.e. results in living human beings) and in vitro results (i.e. results in test tubes and petri dishes)? Very frequently results in test tubes and petri dishes do not translate into results in living human beings. Similarly, results in animals are a step removed from results in humans and frequently will not translate. 

5. What is being studied?
Surrogate endpoints (factors associated with the disease) or the actual disease? If endpoints rather than the disease itself, are there studies showing that the endpoints are actually connected to the disease? Those studies must also be evaluated. 

6. How is improvement or prevention defined?
What is the practical significance of the improvement as the researcher defines it? Remember that statistical significance may not mean clinical significance. 
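To see how the two can diverge, here is a minimal sketch (all numbers hypothetical): a 1-point improvement on a 100-point symptom scale is clinically trivial, yet it becomes statistically significant once enough people are enrolled.

    import math

    # Two-sided p-value from a two-sample z-test for a difference
    # in means, assuming a known standard deviation in each group.
    def z_test_p(mean1, mean2, sd, n_per_group):
        se = sd * math.sqrt(2.0 / n_per_group)
        z = (mean1 - mean2) / se
        return math.erfc(abs(z) / math.sqrt(2))

    # A 1-point gain on a 100-point scale (SD 15): p is about 0.64
    # at n=100 per group, about 0.03 at n=2,000, and far below
    # 0.001 at n=10,000 - "significant" yet clinically meaningless.
    for n in (100, 2000, 10000):
        print(n, z_test_p(51.0, 50.0, 15.0, n))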

7. Examine the raw numbers.
Beware of abstracts and studies that report results as percentages without giving the raw numbers. Remember that 100% of x is 0 if x is 0. A 50% reduction in something is insignificant if it amounts to preventing 1 death in 100,000 people. 
A recently published study on Fosamax highlights this problem of percentages rather than raw numbers. After three years, according to the conclusion, women taking Fosamax had lost 35% less height than women not on Fosamax. Sounds impressive until you read the fine print - 35% is equal to 1.6 millimeters. Over three years. In thirty years the difference (assuming that it's linear) would be less than 3/4 of an inch. And this assumes that the measurement was sufficiently accurate to conclude that 1.6 mm was actual and not a measurement error. 
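The arithmetic behind both examples can be checked directly. A minimal sketch, assuming the implied baseline of 2 deaths per 100,000 in the 50% example and taking the Fosamax figures from the paragraph above:

    # A "50% reduction" from 2 deaths to 1 death per 100,000 people
    # is an absolute change of only 1 in 100,000.
    baseline, treated = 2 / 100_000, 1 / 100_000
    print((baseline - treated) / baseline)   # 0.5 -> the headline "50%"
    print(baseline - treated)                # 0.00001 -> the raw change

    # Fosamax height example: 1.6 mm over 3 years, extrapolated
    # linearly to 30 years and converted to inches.
    print(1.6 * (30 / 3) / 25.4)             # about 0.63 inch, under 3/4 inch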

8. Does the body of the study actually support the conclusion?
This is frequently where the contrast between raw numbers and percentages becomes crucial. 

9. Finally, even if the study itself seems logical and all seems to be in order, is it being used appropriately? Or is it being used to justify a course of action or to prove a theory which does not logically follow? 


I have quoted the following with the permission of Dr. William Rich, who has authored a web site on gyn cancers at http://www.gyncancer.com. I highly recommend Dr. Rich's web site. He has given permission for us to use it in any way which we feel will help the members of this group. 
"Numbers don't lie, but what is inferred from them is almost always a distortion. For instance, what is the probability that the next time you fly someone will have brought a bomb onboard to blow up the plane? This can be estimated and will be a very small number. Assume that it is one in one hundred thousand flights (1:100,000). What is the probability that there will be two people with bombs on your flight? This will be an exceedingly small number, and is calculated by multiplying 1:100,000, by 
1:100,000. This is 1:10,000,000,000. 

Conclusion: always bring your own bomb when you fly because the likelihood of there being two bombs is infinitesimal. So, provided you don't blow yourself up, there is an infinitesimal likelihood that anybody else on the plane will also have a bomb. The conclusion seems reasonable, but it is not. Do you know why the conclusion is mathematically erroneous?" 

When you look at the statistics of a study, make sure no one has invalidated the equation in advance by bringing his/her own bomb aboard. 

If you don't know what's wrong with the mathematical conclusion here, email me. Tetje 
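For readers who would rather check the arithmetic first, here is a minimal sketch using the article's 1:100,000 figure. The catch is independence: once you condition on your own bomb being aboard, the probability that someone else brings one is unchanged.

    p_bomb = 1 / 100_000

    # Probability that two independent passengers both bring bombs.
    p_two_bombs = p_bomb * p_bomb            # 1 in 10,000,000,000

    # But given that you brought yours, the chance that someone
    # else brings one is right back where it started.
    p_other_given_yours = p_two_bombs / p_bomb
    print(p_two_bombs, p_other_given_yours)  # 1e-10, 1e-05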

The articles below appear in the British Medical Journal. They are eminently readable. 
How to read a paper: The Medline database
How to read a paper: Getting your bearings (deciding what the paper is about)
Extract: The science of "trashing" papers 
It usually comes as a surprise to students to learn that some (perhaps most) published articles belong in the bin, and should certainly not be used to inform practice. Below are some common reasons why papers are rejected by peer reviewed journals. 
    • The study did not address an important scientific issue 
    • The study was not original (someone else had already done the same or a similar study) 
    • The study did not actually test the authors' hypothesis 
    • A different type of study should have been done 
    • Practical difficulties (in recruiting subjects, for example) led the authors to compromise on the original study protocol 
    • The sample size was too small 
    • The study was uncontrolled or inadequately controlled 
    • The statistical analysis was incorrect or inappropriate 
    • The authors drew unjustified conclusions from their data 
    • There is a significant conflict of interest (one of the authors, or a sponsor, might benefit financially from the publication of the paper and insufficient safeguards were seen to be in place to guard against bias) 
    • The paper is so badly written that it is incomprehensible
How to read a paper: Assessing the methodological quality of published papers
How to read a paper: Statistics for the non-statistician
How to read a paper: Statistics for the non-statistician. II: "Significant" relations and their pitfalls
How to read a paper: Papers that report drug trials
How to read a paper: Papers that report diagnostic or screening tests

http://cebm.jr2.ox.ac.uk/docs/studies.html
Study Design
This page gives a brief comparison of the advantages and disadvantages of the different types of study. 

     Case-Control Study 
     Cross-Sectional Survey 
     Cohort Study 
     Randomised Controlled Trial 
     Crossover Design 
