Evidence-based is a term we hear constantly across healthcare, education, and disability services, but its meaning can shift dramatically depending on who’s using it. For clinicians and therapists, it can range from peer-reviewed studies with statistically significant results to clinical opinions developed over years of consistent, observable outcomes. Policymakers tend to seek large-scale, longitudinal data supporting specific outcomes in a set population, often drawn from reviews with strict inclusion and exclusion criteria. For individuals and families living with disability, it is often far more personal: “evidence” might mean the recommendation of a trusted, experienced professional, or the one therapy that finally helped their child sleep through the night, even if the literature reports outcomes that are not statistically significant.
The recent discussion about creating a final version of the NDIS lists of supported and unsupported therapies has reignited debate, especially around those classified as not “evidence-based” or “cost-effective.” Full disclosure: I provide neurofeedback, a therapy currently listed as not evidence-based, so I do have some inherent bias and may benefit from changes to the lists. Last year, when the draft version was released, I asked how this conclusion was reached: what definition of evidence was applied, what criteria were used for research inclusion, and who conducted the evaluation. I received a generic template letter citing upcoming role applications for the proposed Evidence Advisory Committee (EAC). In short, it appeared that no clear assessment had been conducted and that the public and professional submissions to the consultation had been overlooked.
With another consultation closing soon, and final lists seemingly being proposed even before the committee has reviewed anything, it raises a critical question: How will the NDIA define evidence-based practice?
The Evidence Advisory Committee’s framework will shape not just funding decisions, but the therapeutic futures of thousands of Australians with disability and the public opinion of the therapies on the lists. Yet there is still no public definition or criteria from the NDIA for what constitutes “evidence-based.” Will double-blinded randomised controlled trials (RCTs) dominate, or will the definition allow space for qualitative data, clinical experience, practice-based results, and lived experience? Not all research carries the same weight, nor should it. Evidence exists on a spectrum, from systematic reviews and RCTs to cohort studies, case series, and clinical observations. Even within high-level reviews, standards can be inconsistently applied. Some therapies are criticised for lacking double-blind trials, even when blinding is impractical, while others are given leeway under similar conditions. This inconsistency often privileges pharmacological, highly standardised, and add-on therapies while dismissing those grounded in individualised, real-world practice.
Part of the problem lies in the medicalisation of research itself. Strict inclusion and exclusion criteria attempt to isolate variables, but in doing so they strip away the complexity of human experience. I personally feel that some comorbid symptoms are actually extensions of a single diagnosis and should be viewed as such to best support the individual. Yet I also acknowledge that, in the real world, diagnoses rarely occur in isolation. Co-occurring conditions, trauma histories, and environmental factors all influence therapeutic outcomes. When research focuses only on diagnosis and group-level significance, it often misses the personal, functional changes that matter most to individuals. Take, for example, the common overlap of anxiety and autism. Despite being deeply interconnected, they are often treated as separate conditions, with referrals made to mental health care plans for the anxiety alone, an approach that fails to match the complexity of the situation.
I see similar patterns in my neurofeedback work with clients diagnosed with ADHD. Although clients may share the same diagnosis and symptoms, a QEEG and ERP assessment will show that their underlying brain activity can vary dramatically. One may show the common ADHD pattern of excess frontal theta frequencies, another an anxiety pattern in attention-related regions, while a third may present with paroxysmal activity or disruptions in visual or auditory processing. An individual can present with just one of these patterns or a combination of several. This diversity guides my clinical decision-making. I tailor neurofeedback protocols to the individual, their symptoms, and their physiology, not their diagnostic label. I measure progress against their specific difficulties and goals, not the scale that determined their diagnosis. Yet reviews of neurofeedback often include only approaches that use standardised protocols, such as theta-beta training or training at set sites. Some go as far as to use a control group given some other form of training, whether nutrition, sleep, technology, or sham feedback. These studies typically include only those using scales that measure the specific diagnostic symptoms, rather than the actual difficulties the person experiences. While standardisation may aid data analysis, it flattens the nuanced, individualised responses that make these therapies effective in practice. Focusing on these studies while excluding those that personalise training dilutes the efficacy observed by professionals applying the therapy. Unfortunately, this dilution of outcomes extends into practice when individuals are rented generic at-home systems with little oversight or follow-up from trained practitioners.
In my experience, clients frequently report meaningful improvements: reduced anxiety, better sleep, improved focus, more stable mood, enhanced memory, and greater tolerance of sensory input. These are tangible, life-changing outcomes. Yet they are difficult to capture in studies that do not account for personalised protocols, or that select outcome measures failing to align with client goals. For one person, the noise in their head may finally settle. Another might study for longer without frustration. Someone else may begin to tolerate social environments for the first time in years. These changes do not need to be statistically significant to be deeply impactful. Being able to study longer reduces distress and anxious build-up. Improving regulation reduces reactivity and strengthens relationships. Greater tolerance of social environments builds confidence and connection. For children, these changes create opportunities to develop vital social skills, opportunities missed when their nervous system is in a state of dysregulation.
This reflects a broader problem in disability research: a persistent focus on diagnosis over function. Research that groups participants by label, with outcomes measured by symptom severity or adaptive behaviour scales, does not align with the intention of the NDIS. What happens when the change that matters, such as a non-verbal child learning to regulate and express a need, does not shift a group average enough to reach statistical significance? Do we disregard that transformation simply because it is not captured by the chosen metric?
The NDIA must ensure the final lists are not published before proper reviews of the supports are performed. This process deserves careful, participant-informed consideration, not premature decisions made before thorough review has taken place. The first step must be clear definitions of what is meant by the terms ‘evidence-based’ and ‘cost-effective.’ Yes, I am asking for transparency in decision-making. I am asking for public and professional submissions to the consultation to be used in the review. And I am asking for the voices of participants and professionals who use these therapies not just to be listened to, but heard and acknowledged.
It is time for the NDIA and their Evidence Advisory Committee to define what they will consider valid evidence. The criteria must be transparent, inclusive, and reflective of the full spectrum of human complexity. Because when it comes to disability support, real lives hang in the balance.