By Rachel Fairbank
Published January 29, 2024
A new digital screening tool may help fast-track the diagnosis of children with autism.
One out of every 36 children is autistic, according to recent estimates from the CDC. The increase in prevalence in recent decades is thought to reflect better recognition of symptoms and improved screening procedures. Even so, families still face challenges, including delayed diagnosis. And for girls and children from minority groups, this delay is often longer, owing to difficulty accessing the right experts and the way symptoms vary from child to child.
Currently, there is no medical test for autism. Instead, experts make the diagnosis by evaluating a child’s developmental history and behavior.
“For the majority of kids, there’s no objective test other than observation of behavior,” says Geraldine Dawson, a psychologist at Duke University, and the lead author of the research study. “We’re only relying on parent reports.”
For Black and Hispanic children, although their parents start noticing the signs of autism around the same time as other parents, they are still diagnosed later than their peers, says Daniel Geschwind, a physician-researcher at the University of California, Los Angeles, whose research focuses on autism. As Geschwind notes, these children also tend to require more doctor's appointments and are at a higher risk of receiving an inaccurate diagnosis than their peers.
In a recent paper, published in the journal Nature Medicine, researchers describe a digital screening device that uses machine learning to analyze various aspects of behavior to determine whether a child has a high probability of being autistic or not. When they tested this screening tool—called the SenseToKnow app—in a sample of 475 children, they found it had a high accuracy rate for predicting which children were eventually diagnosed as autistic.
Barriers to a timely diagnosis
As Dawson notes, parents are quite good at detecting when something is different about their child. But reporting those concerns to their doctors poses significant challenges, whether it's difficulty framing the context or finding the right words to describe what they are observing. This is further complicated because autism manifests differently in each child, and the timing of early symptoms can also vary.
Given how variable these signs can be, even when parents report their concerns, pediatricians often don’t have the right knowledge and training to pick up on the fact that it’s autism, rather than something else. “There aren’t enough providers with expertise, and most general pediatricians don’t have the expertise to do this,” Geschwind says.
The main screening tool, called the Modified Checklist for Autism in Toddlers, Revised with Follow-Up, or M-CHAT-R/F for short, is a formal screening questionnaire covering a child's behavior and developmental milestones. It is then followed up with further questions from a pediatrician.
The M-CHAT-R/F works well in a formal research setting, but when applied in a busy pediatrician’s office where appointments can be rushed, this accuracy drops. This drop in accuracy disproportionately affects girls, as well as Black and Hispanic children.
“Of those [children] who screen positive, only half are referred to early intervention,” says David Mandell, a professor of psychiatry at the University of Pennsylvania, whose research focuses on racial, ethnic and socioeconomic health disparities in access to autism resources. Mandell was not a part of the study.
How the SenseToKnow app works
The new screening tool works like this: parents have their child watch a 10-minute video while the camera records various aspects of behavior. The test predicts whether the child has a high probability of being autistic based on several factors—what they pay attention to in the video, what facial expressions they make, how they move their head, and how they respond to their name.
“We found differences in face expression that are extremely subtle,” Dawson says. In practical terms, reporting these subtleties can be challenging for parents. “It is very difficult for a parent to quantify and to even describe,” Dawson says.
Of the 475 children who were screened with this app during a primary care visit, 49 were eventually diagnosed with autism, while another 98 were diagnosed with other developmental delays. This prevalence, which is higher than average, was likely due to the opt-in nature of the study, which may have led parents already concerned about their child's development to enroll.
Sensitivity vs specificity
A good screening tool will reliably identify the children who are autistic while also identifying the children who aren't. These two aspects of a test's accuracy are called sensitivity and specificity.
A test’s sensitivity is its ability to correctly detect autism when it is present; specificity is a test’s ability to correctly detect when autism isn’t present.
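The two definitions amount to simple ratios over a test's results. As a minimal sketch (the counts below are made up for illustration and are not figures from the study):

```python
# Sensitivity and specificity from test-result counts.
# These counts are hypothetical, purely for illustration.
true_positives = 45   # autistic children the test correctly flagged
false_negatives = 5   # autistic children the test missed
true_negatives = 80   # non-autistic children the test correctly cleared
false_positives = 20  # non-autistic children incorrectly flagged

# Sensitivity: of all autistic children, what share did the test catch?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of all non-autistic children, what share did the test clear?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.1%}")  # 90.0%
print(f"specificity = {specificity:.1%}")  # 80.0%
```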
If a test has poor sensitivity but high specificity, most children with a positive result really will be autistic; however, many autistic children will be falsely labeled as not autistic (false negatives).
If a test has high sensitivity but poor specificity, then there will be a lot of children incorrectly flagged as being autistic (false positives), but very few autistic children being overlooked.
When many autistic children are overlooked, they face delays in receiving the services and accommodations they need; when many children are incorrectly flagged as autistic, the result is long waiting lists to see an expert who can carry out a full evaluation.
“You want to balance your sensitivity and your specificity, to try to find as many of the true positives as you possibly can, to get those kids started on intervention services, without clogging up the system with a lot of false positives,” says Diana Robins, a psychologist at Drexel University, whose research focuses on autism. Robins, who is one of the creators of the M-CHAT-R/F screening tool, was not involved in the Nature study.
The SenseToKnow app was shown to have a sensitivity of 87.8 percent and a specificity of 80.8 percent.
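To see what those rates mean in practice, one can apply them back to the study cohort of 475 children, 49 of whom were diagnosed as autistic. The rounded counts below are a back-of-the-envelope illustration, not figures reported in the paper:

```python
# Rough illustration: apply the reported rates to the study cohort.
# Rounded estimates only; the paper's actual counts may differ.
cohort = 475
autistic = 49
sensitivity = 0.878
specificity = 0.808

caught = round(autistic * sensitivity)       # autistic children flagged
missed = autistic - caught                   # false negatives
non_autistic = cohort - autistic
cleared = round(non_autistic * specificity)  # non-autistic correctly passed
false_alarms = non_autistic - cleared        # false positives

print(caught, missed, false_alarms)  # 43 6 82
```

At these rates, only a handful of autistic children would be missed, but a sizable number of non-autistic children would be flagged for a full evaluation, which is exactly the sensitivity-specificity trade-off Robins describes.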
More research is needed
Before the SenseToKnow app is ready for use in a primary care setting, it will need further studies, including validation of its accuracy in different groups of children.
“The next step is to test this in an independent population, to understand its generalizability more broadly,” says Geschwind, who was not part of the study. “Can it predict outside of the sample that it learned on?”
Dawson and her collaborators are currently carrying out this research, testing the SenseToKnow app in a bigger, more diverse set of patients, to see if it can still accurately predict autism. Although the SenseToKnow’s accuracy was generally quite good, these results weren’t uniform among all groups of patients.
“The sensitivity in Black children was great,” Mandell says. “The specificity was not great.”
This lower specificity would mean a higher chance of a child receiving a false positive result—in which the test predicts that a child is autistic when they are not. Given the relatively low numbers of Black children who were enrolled in the study, this accuracy can most likely be improved with further testing.
“The next step,” Robins says, “is to test 5,000 or 10,000 kids at checkup and see how it works.”
Copyright for syndicated content belongs to the linked Source : National Geographic – https://www.nationalgeographic.com/science/article/digital-tool-autism-diagnosis-minority-children