Saturday, April 25, 2009

Autism under the sun: Epidemiology from Aruba

A recently e-published paper reports autism epidemiology in Aruba (van Balkom et al., in press). The findings: autism prevalence of ~19/10,000 and autistic spectrum prevalence of ~53/10,000 (of which only ~2/10,000 were identified as Asperger individuals). Because the case-finding methods were very conservative and limited, the authors state that:

These prevalence estimates should be considered minimum prevalence.
Even so, these figures are

similar to recent reports from the United Kingdom and the United States.
For example, the combined Chakrabarti and Fombonne (2001, 2005) studies reported autism prevalence of ~19/10,000 and autistic spectrum prevalence of ~61/10,000. Very similar figures from the UK were recently found by Williams et al. (2008). In fact, the reported minimum autistic spectrum prevalence in Aruba is remarkably similar to the reported autistic spectrum prevalence in the Faroe Islands (~56/10,000; Ellefsen et al., 2007).
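The arithmetic behind these figures is easy to check. Here is a minimal sketch in Python (the counts below are invented purely for illustration and are not taken from any of the studies cited) of how a per-10,000 prevalence estimate and a rough confidence interval are computed:

    import math

    def prevalence_per_10k(cases, population):
        # Point estimate and rough 95% CI (normal approximation), per 10,000
        p = cases / population
        se = math.sqrt(p * (1 - p) / population)
        return (p * 10_000,
                max(p - 1.96 * se, 0.0) * 10_000,
                (p + 1.96 * se) * 10_000)

    # Entirely hypothetical counts, just to show the arithmetic:
    est, lo, hi = prevalence_per_10k(25, 13_000)
    print(f"{est:.1f}/10,000 (approx. 95% CI {lo:.1f}-{hi:.1f})")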

I am no climatologist, but all available reports agree that Aruba, which sits not far from the equator, is a remarkably sunny and dry place. The same cannot be said of the Faroe Islands, which have a cloudy, rainy, foggy climate where sunshine is rare.

Speculation that lack of sun exposure may cause autism via vitamin D deficiency is back in the media, a reminder that Michael Waldman and his Cornell colleagues persist in claiming that less precipitation means fewer autistics. Aruba's recent autism epidemiology sheds some bright light on these hypotheses.


References:

Chakrabarti, S., & Fombonne, E. (2001). Pervasive developmental disorders in preschool children. JAMA, 285, 3093-9.

Chakrabarti, S., & Fombonne, E. (2005). Pervasive developmental disorders in preschool children: Confirmation of high prevalence. American Journal of Psychiatry, 162, 1133-41.

Ellefsen, A., Kampmann, H., Billstedt, E., Gillberg, I.C., & Gillberg, C. (2007). Autism in the Faroe Islands: an epidemiological study. Journal of Autism and Developmental Disorders, 37, 437-44.

van Balkom, I., Bresnahan, M., Vogtländer, M., Hoeken, D., Minderaa, R., Susser, E., & Hoek, H. (2009). Prevalence of treated autism spectrum disorders in Aruba. Journal of Neurodevelopmental Disorders. DOI: 10.1007/s11689-009-9011-1

Waldman, M., Nicholson, S., Adilov, N., & Williams, J. (2008). Autism prevalence and precipitation rates in California, Oregon, and Washington counties. Archives of Pediatrics and Adolescent Medicine, 162, 1026-34.

Williams, E., Thomas, K., Sidebotham, H., & Emond, A. (2008). Prevalence and characteristics of autistic spectrum disorders in the ALSPAC cohort. Developmental Medicine and Child Neurology, 50, 672-677.

17 comments:

Anonymous said...

If precipitation were significantly associated with autism, prevalence by birth year series would look quite noisy. I looked at data from California here.
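(Purely as an illustration of that point, with every number invented and assuming numpy is available: a small simulation of how a precipitation-driven risk would show up as year-to-year noise in a prevalence-by-birth-year series.)

    import numpy as np

    # Toy simulation, all numbers invented: if autism risk tracked annual
    # precipitation, the birth-year prevalence series would inherit the
    # year-to-year variability of the rainfall itself.
    rng = np.random.default_rng(0)
    years = np.arange(1990, 2005)
    births_per_year = 500_000
    precip = rng.normal(40, 10, size=years.size)   # inches of rain per year, invented
    base_rate = 30 / 10_000                        # invented baseline prevalence
    risk = base_rate * (1 + 0.01 * (precip - precip.mean()))  # +1% relative risk per extra inch
    cases = rng.binomial(births_per_year, risk)
    for y, c in zip(years, cases):
        print(y, round(c / births_per_year * 10_000, 1))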

Michelle Dawson said...

That's interesting looking data but the California DDS numbers aren't epidemiology.

MalchowMama said...

I had heard (from my Mother!) that the allegedly higher rates of autism in rainy places were meant to be because children there are inside watching more television! I am relieved to hear this is not the case, as my sons were born in Ireland, where it is nothing if not rainy. They don't watch television, but we do have a fairly extensive DVD collection that they both enjoy . . .

I was thinking about this type of thing in terms of the many parents (especially in Ireland) who are convinced that vaccines are the cause of their children's autism simply because they began to notice it around the time they received the MMR. By this logic, I have decided that moving to Germany must have caused Sammy's autism, since we didn't notice anything unusual about him until shortly after we moved here . . .

Michelle Dawson said...

Re autism being caused by television, you can thank Michael Waldman and his colleagues for that one too. See this 2006 unpublished paper.

Dr Waldman's more-rain-equals-more-autism paper is more or less his earlier television-causes-autism paper without the television.

Anonymous said...

"That's interesting looking data but the California DDS numbers aren't epidemiology."

What they aren't are exact counts, or counts that are uniformly equivalent over time. So long as that is understood, there's no reason to think the data is useless in research.

It has been used in peer-reviewed papers, not only the one you linked to (which I've critiqued), but also Schechter & Grether (2008).

In fact, I seem to recall a paper by Gernsbacher, Dawson & Goldsmith that uses California DDS data to make a point about epidemiology.

Databases with similar limitations, such as the Danish psychiatric registry, are used in epidemiology all the time. Of course it would be better to do whole population screenings with the same exact methodology over time. If this were practical and existed, perhaps no one would be discussing the alleged autism "epidemic."

Michelle Dawson said...

In Gernsbacher et al. (2005), there is an entire section about the misuse of CDDS data as epidemiology.

Schechter and Grether (2008) explicitly used CDDS data only in response to the misuse of these data by those promoting vaccine-autism hypotheses. The authors note that:

"The DDS data are generated from an administrative system that was designed to track complex enrollment and fiscal data, not to measure the occurrence of developmental disabilities in the population."

Anonymous said...

The statement of limitations of the Cal DDS data is absolutely true, but I disagree that it means Cal DDS data is misused when it's used for epidemiology.

It's misused when people claim "see, here's a clear rise in true prevalence" and more specifically, when they correlate it to something else and disregard the non-stationary nature of the administrative prevalence series.

But it's not misused if I were to claim, for example, "there's been a clear rise in the administrative prevalence of autism in California." This is an entirely true claim, epidemiologically, and it can be researched.

It would be entirely appropriate to use Cal DDS data or IDEA data to study administrative diagnostic substitution, for example. This is epidemiology too.

Data like that has an explanation. To simply say "it's not epidemiology" is an easy cop-out and not very informative, in my view.

Michelle Dawson said...

When it comes to autism (I haven't studied a lot of other areas), you can use DDS-type administrative data to study DDS-type administrative questions (how many individuals are getting which services under which categories and how this has shifted over time within a specific administrative system).

But you can't use DDS-type (or IDEA-type) administrative data as is, to study prevalence of autism over time.

Anonymous said...

"But you can't use DDS-type (or IDEA-type) administrative data as is, to study prevalence of autism over time."

I basically agree with that. For example, the Cal DDS 3-5 prevalence of autism is about 40 in 10,000. It would be nonsense to claim this is the prevalence of ASD in California. 40 in 10,000 is a lower bound, assuming all children with a classification of autism are really autistic (and I believe there's data that says almost all of them are.)

Let's say that in a couple years the Cal DDS prevalence becomes 200 in 10,000. Obviously, "it's not epidemiology" wouldn't cut it as an explanation. It would either mean that children are being over-diagnosed in droves or there would have to be an environmental explanation of some sort.

So I think it's more complicated than "we can use it for this but not for this other thing."

Similarly, I don't think it's far fetched to assume that there's a changing relationship between administrative prevalence and true prevalence. If there were a big spike in true incidence one quarter, probabilistically, you'd expect to see a spike in administrative incidence maybe the next quarter or some time in the near future.
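(A toy version of that reasoning, with assumed numbers and assuming numpy: true incidence spikes in one quarter, and the administrative series picks it up a quarter or two later.)

    import numpy as np

    rng = np.random.default_rng(1)
    quarters = 20
    true_new_cases = np.full(quarters, 100)      # hypothetical new cases per quarter
    true_new_cases[10] = 300                     # a one-quarter spike in true incidence
    admin_new_cases = np.zeros(quarters + 2, dtype=int)
    for q, n in enumerate(true_new_cases):
        delays = rng.choice([1, 2], size=n)      # enrolment delay before reaching the agency
        for d in delays:
            admin_new_cases[q + d] += 1          # the spike surfaces 1-2 quarters later
    print("true :", true_new_cases.tolist())
    print("admin:", admin_new_cases[:quarters].tolist())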

The problem I had with H-P et al. (2009) was not that they used Cal DDS data as epidemiology. The problem was that they made incorrect implicit assumptions - many of them - and they didn't consider awareness.

Michelle Dawson said...
This comment has been removed by the author.
Michelle Dawson said...

(I'm reposting this comment because I messed up the formatting in the original, sorry...)

You can take guesses, based on a speculated relationship between DDS data and actual autism prevalence. But unless this relationship is actually studied for one or more specific points in time, then this is speculation, not epidemiology.

As the CDDS writes in their latest report (the below is the only mention of epidemiology):

"The information presented in this report is purely descriptive and should not be used to draw scientifically valid conclusions about the incidence or prevalence of ASD in California. Numbers of people with ASD described in this report reflect point-in-time counts and do not constitute formal epidemiological measures of incidence or prevalence. The information contained in this report is limited by factors such as case finding, accuracy of diagnosis, hand entry, and possible error, by case workers of large amounts of information onto state forms. Finally, it is important to note that entry into and exit from California’s developmental services system is voluntary. This may further alter the data presented herein relative to the actual population of persons with ASD in California."

Larry Arnold PhD FRSA said...

It's like trying to measure accident prevalence or crime by the number of insurance claims.

There is a correlation, but also an almighty big third factor: financial incentive and reward for general "claim inflation".

Anonymous said...

I'm in agreement that Cal DDS "should not be used to draw scientifically valid conclusions about the incidence or prevalence of ASD in California," as Cal DDS themselves say.

However, this doesn't mean that it can't be used for epidemiology, and they obviously allow lots of researchers to use it for epidemiology.

All epidemiological studies I can think of that test a hypothesized relationship between autism and some other factor rely on passive data much like that of Cal DDS. It would be very difficult to do it any other way.

Take Fombonne et al. (2006). They use data on psychiatric diagnostic categories provided by MEQ in Quebec.

If you were to believe his figures, then the prevalence of PDD has been increasing over time in Canada.

Michelle Dawson said...
This comment has been removed by the author.
Michelle Dawson said...

(I'm again re-posting a message due to formatting problems, sorry again...)

Fombonne et al. (2006) write about their sample that:

"a majority of these children (N = 155; 86.1%) have been diagnosed at the Montreal Children’s Hospital"

Using records about autism means having to look at them, and apply standards, and so on.

Joseph wrote: "they obviously allow lots of researchers to use it for epidemiology"

You mean the DDS states to researchers that yes, DDS data related to autism are valid measures of prevalence, while telling the public they are not?

Anonymous said...

"You mean the DDS states to researchers that yes, DDS data related to autism are valid measures of prevalence, while telling the public they are not?"

No, and there's no contradiction. I mean that you can use records that are not valid measures of prevalence to do epidemiology. Of course, you need to take into account that you're looking at administrative counts, and any differences between regions and changes over time could be due to changes in ascertainment, awareness, and so forth. There are ways you can control for such confounds statistically.
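(One standard way to do that, sketched here with a hypothetical data set and assuming pandas and statsmodels are installed: regress the administrative case counts on the factor of interest while adjusting for region and birth year. This is a sketch of the general idea, not a description of any published study's method.)

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical table, one row per region x birth year, with columns
    # cases, population, exposure, region, birth_year (file name invented).
    df = pd.read_csv("admin_autism_counts.csv")

    # Poisson regression of administrative counts on the exposure of interest,
    # adjusting for region and birth year, with population as an offset.
    fit = smf.glm(
        "cases ~ exposure + C(region) + birth_year",
        data=df,
        family=sm.families.Poisson(),
        offset=np.log(df["population"]),
    ).fit()
    print(fit.summary())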

VSD, for example, is not a valid measure of prevalence. Neither is the Danish Psychiatric registry.

Cal DDS data is obviously used all the time by the MIND Institute. Some specific researchers that I can think of are Croen, Hertz-Picciotto, Windham, Roberts, etc.

A study like Windham et al. (2006) would be worthy of consideration if only it had controlled for (log) population density of the child's place of diagnosis. At that point a positive result would not be convincingly refuted by saying "well, you used Cal DDS data, and you shouldn't have done that."

I'll end this comment by quoting an excerpt from a good paper by Gernsbacher, Dawson & Goldsmith:

"Two further aspects of the California data suggest that the criteria must have broadened. First, children in the more recent cohort were dramatically less likely to have intellectual impairment: Whereas 61% of the children in the earlier cohort were identified as having intellectual impairments, only 27% of the children in the more recent cohort were so identified. The lower rate of intellectual impairment in the more recent cohort matches recent epidemiological data, and the difference between the two rates suggests a major difference between the two cohorts (e.g., that the more recent cohort was drawn from a less
cognitively impaired population)."

BTW, I think it's possible to model admin. prevalence vs. proportion of autistic clients with MR, and see if the model is consistent with a cultural hypothesis. It's perfectly valid to try to understand the behavior of these databases.
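(A minimal version of that idea, with an invented file name and column names, assuming pandas and statsmodels: regress cohort-level administrative prevalence on the cohort's proportion of clients with MR and on birth year, and see how much of the trend the case-mix variable absorbs.)

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical cohort-level table: birth_year, admin_prev_per_10k, pct_clients_with_mr
    cohorts = pd.read_csv("cdds_birth_cohorts.csv")

    # Does rising administrative prevalence track the falling proportion of
    # clients with MR, as a broadening-criteria account would predict?
    fit = smf.ols("admin_prev_per_10k ~ pct_clients_with_mr + birth_year",
                  data=cohorts).fit()
    print(fit.params)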

Michelle Dawson said...

The quoted excerpt from Gernsbacher et al. (2005) does not involve data reported by the CDDS, but data from an unrefereed study involving a small number of children who received CDDS services. However, the reported non-CDDS data confirm that CDDS data cannot be used as epidemiology--just as the CDDS currently states.

Joseph wrote: "It's perfectly valid to try to understand the behavior of these databases."

As I wrote above, you can use CDDS data to study the number of individuals who receive CDDS services under the category of "autism," and to study the CDDS-reported demographics of these individuals. But as the CDDS states, these data cannot, in the form they are reported by the CDDS, be used as autism epidemiology.