Author: Heta Pukki
Image above: Kat Van der Poorten trains an international group of autistic activists in Prague in December 2023.
It is now just over a year since we started our journey to build our capacity to advocate for autistic people in matters related to AI. We started from a situation where the board of the European Council of Autistic People found that we had opinions and perspectives to contribute to the discourse on AI and its impact on minority communities – knowledge that was needed, and that no one else was providing.
At the same time, we found that there were gaps in our own knowledge, preventing us from moving towards clear statements that we could all agree on. The EAISF 2022 call for proposals thus matched our needs perfectly, and the AIRA project was conceived.
We were initially faced with questions such as: Can emotion recognition technologies serve autistic people as assistive technologies, and should they be allowed for personal use? Can emotion recognition be helpful for therapeutic purposes? Our response was, in essence, that they will probably not prove particularly helpful for a significant proportion of autistic people. In our view, suggesting such benefits demonstrates a fundamental misperception of autism. We also saw potential risks in such applications of the technology: they could reinforce the normative message that only certain types of emotional expression are correct and acceptable, and that ours are inherently and comprehensively pathological, thus encouraging harmful masking of autism. Since EUCAP as an organisation promotes the acceptance of typically autistic forms of expression rather than aiming to hide or remove them, we are keenly aware that AI in this context could exacerbate existing inequality and oppression – a concern that we have found is widely shared among disability communities in other contexts.
While we do not generally support the idea of blanket bans on new technologies, we also did not appreciate the impression that we were being used as an excuse by those arguing that the benefits to autistic people should justify lighter regulation of emotion recognition systems. Such arguments did not convince us: we saw no call from our communities for the development and use of such applications, and no participation by autistic people’s organisations in the debates concerning the regulation or the research.
With this as our starting point, we felt it was important for our project to first gain an overview of AI applications that could affect autistic people specifically, as well as an overview of knowledge and perceptions in our communities, and then share these with our member organisations. The timing has not been ideal for influencing the AI Act, beyond some collaboration to begin learning about the European Disability Forum’s role in the Trilogues. In this second year of our project, we will be catching up, equipped with our new knowledge, keeping an eye on the finalisation and implementation of the AI Act and bringing our voice to the debates.
Image above: AIRA project team getting together for training days in Brussels. From left to right, Silke Rudolph, Imke Heuer, Khiah Strachan, Kat Van der Poorten.
Visiting Kave Noori (right), AI Policy Coordinator of the European Disability Forum.
Autistic people as targets of AI applications
We have learned that autistic people are the targets of extensive and growing research efforts involving the use of AI and the development of new technologies. The number of research articles featuring the use of AI applications in the context of autism research has been increasing exponentially since 2010. More than a thousand such articles were located by the AIRA literature review team. While participatory research and efforts to bring community and funder priorities closer together are major themes in current autism research, in the literature involving AI we saw very little evidence of working in this direction. The autistic voice appeared to be largely absent.
The greatest number of articles by far concerned diagnosing autism with the help of AI, as exemplified by a recent article in Nature Medicine. The second most common type covered biomedical research involving AI/big data, followed by other types of interventions, including behavioural approaches that have long been widely criticized by autistic people’s organisations, including ours. In contrast, there are areas where we initially imagined seeing lively research with great potential benefit, such as alternative and augmentative communication, which seem to have received minimal attention.
The AIRA project team has observed many developments that clearly involve both risks and possibilities. For example, we would like to see autism diagnoses become more affordable and more widely available throughout Europe, as well as more objective and reliable, aided by tools that might counteract clinicians’ gender and ethnicity related biases. At the same time, we are concerned about potential over-reliance on AI based diagnostic tools, which could replace or override human judgement where it remains vitally important. We also see diagnostic tools being developed in the context of excessively deficit-focused, overly medicalizing views of autism, limited almost exclusively to early diagnosis and justified by the goal of altering brain development – which might be accomplished through AI-enhanced behavioural interventions. The overwhelming emphasis on these types of research means that vast populations of autistic adults in desperate need of diagnosis and recognition are excluded. They could benefit from AI being applied to adult diagnosis and needs assessment, and their lives could improve tremendously if technologies were developed to support more efficient provision of services, accommodations, disability benefits and health care. Overall, our impression is that, for the most part, the way AI is being used in research targeting autistic people does not align with the priorities of autistic communities.
Images: An additional Learning and Development grant allowed us to organise training to address general organisational capacity issues that limit our ability to carry out effective advocacy.
Shared concerns and promising new directions
By following the civil society discourse on AI, we have learned where our concerns coincide with those of other minorities and people with other types of disabilities. We are in the same boat as numerous others in needing beneficial AI applications to be accessible, and in needing to participate in research and design processes from the beginning to accomplish this. Similarly, the fear that AI developments may reinforce practices that we find negative or oppressive is not unique to us. Biometric categorization would probably lead to harmful misinterpretations stemming from typically autistic differences in expressions, gestures, body movements and voice characteristics. We are thus at risk of becoming targets of automated exclusion, along with many other groups. We also need to consider intersectionality: how would an autistic refugee be interpreted by an AI application meant to identify their accent in order to judge the credibility of their story, when they may never have picked up the accent of the people around them?
In addition to potential threats, we have encountered AI applications and research that have aroused interest, enthusiasm and optimism. We originally viewed the detection of any emotional expression by AI with skepticism, but the project team learned about an application that appears to offer genuine benefits for a group of people that partly overlaps with the autistic spectrum: those with no spoken or written language. We also saw AI starting to be applied to noise cancellation, an area where existing technology has already enhanced our wellbeing, and where many of us eagerly wait for new developments. We still see tremendous unexplored potential in the field of augmentative and alternative communication, and we will follow with interest the developments in fields such as brain-computer interface studies, with researchers engaging in groundbreaking work to reach people with the greatest communication challenges.
Outside the world of research, we hear from individual autistic people how they are learning to use common AI tools in creative ways, adapting them to counteract common challenges such as difficulty initiating communication, navigating everyday tasks that involve multiple steps, or switching from one situation or task to another – the last commonly referred to as ‘autistic inertia’, and increasingly researched as something autistic people recognise as impacting our quality of life. Such spontaneous adaptations appear to hold promise and would benefit from further investigation. At the same time, we see ever-increasing demand for autistic talent in the IT sector, including the development of AI applications. We have encountered autistic experts in a variety of roles, from the most highly skilled to those with a knack for routine data annotation. We even have a bespoke generative AI tool, created for EUCAP by the tech company Visium, waiting for us to start experimenting and discovering how it can be made to serve our advocacy and campaigning.
Images below: Some of the people who have contributed to our understanding of AI issues through collaborations and consultations.
These observations give us hope that entirely new kinds of collaborations could emerge, bringing diversity to research and industry, and allowing us to move from the position of target population to that of partners, participants, end users and consumers, while bringing our voice to the political debates that will influence our lives along with those of other minorities.
Preparing to take the next steps by growing the team: For Reetta Viljanen (left) from Finland and Brian D’Arcy from Ireland, the training at the end of 2023 was their first contact with AIRA. Reetta’s presence was a visible sign of another collaboration, between AIRA and the Finnish project AAVA, which works to make advocacy roles accessible to a broad variety of autistic people.