Joe Alderman

@jaldmn

Followers: 801 · Following: 2K · Media: 422 · Statuses: 4K

Medical AI researcher. Anaesthesia & critical care doctor. Triathlete (kinda) @unibirmingham @UHBTrust @diversedata_ST

Birmingham, England
Joined March 2009
@jaldmn
Joe Alderman
5 months
At a loose end this afternoon? Join @DrXiaoLiu and me online at 1pm for a discussion about algorithmic bias and the STANDING Together recommendations. STANDING Together gives guidance on how to minimise the risk of bias in medical AI technologies.
aiforgood.itu.int
Health data is highly complex and can be challenging to interpret without knowing the context in which it was created. Data biases can be encoded into
@jaldmn
Joe Alderman
5 months
RT @FICMNews: Speakers for the FICM Annual Meeting include Dr Joe Alderman on using AI health technology, Dr Dale Gardiner on Diagnosing de….
@jaldmn
Joe Alderman
7 months
RT @carlosepinzon: Addressing #IA (AI) bias and the lack of transparency in health datasets. Consensus recommendations….
@jaldmn
Joe Alderman
7 months
RT @DrXiaoLiu: One of my biggest joys is seeing an entire community take a stand for something important. STANDING Together, recommendation….
thelancet.com
Without careful dissection of the ways in which biases can be encoded into artificial intelligence (AI) health technologies, there is a risk of perpetuating existing health inequalities at scale. One...
@jaldmn
Joe Alderman
8 months
RT @jaldmn: We hope STANDING Together helps everyone across the AI development lifecycle to make thoughtful choices about the way they use….
thelancet.com
@jaldmn
Joe Alderman
8 months
Also published by @NEJM_AI and available here:
ai.nejm.org
Without careful dissection of the ways in which biases can be encoded into artificial intelligence (AI) health technologies, there is a risk of perpetuating existing health inequalities at scale. O...
@jaldmn
Joe Alderman
8 months
@unisouthampton @WHO @pioneer_hub @BSI_UK @MoorfieldsBRC. Special thanks to our funders & supporters: The NHS AI Lab, The Health Foundation and the NIHR @NHSEngland @HealthFdn @NIHRresearch. (end)
@jaldmn
Joe Alderman
8 months
Last thing to say is an enormous THANK YOU to all who have contributed their time, energy and expertise to this work. Thanks for STANDING Together with us these last few years 🥹. (11/.
@jaldmn
Joe Alderman
8 months
We hope STANDING Together helps everyone across the AI development lifecycle to make thoughtful choices about the way they use data, reducing the risk that biases in datasets feed through to biases in algorithms and downstream patient harm. (10/.
thelancet.com
@jaldmn
Joe Alderman
8 months
These recommendations are the culmination of nearly 3 years of work by an international group of researchers, healthcare professionals, policy experts, funders, medical device regulators, AI/ML developers, and many more besides. (9/.
@jaldmn
Joe Alderman
8 months
STANDING Together = STANdards for data Diversity, INclusivity and Generalisability. We have worked with >350 stakeholders from 58 countries to agree a set of recommendations to improve the documentation and use of health datasets. (8/.
@jaldmn
Joe Alderman
8 months
Key point: there is (probably) no such thing as a perfect dataset! Knowledge of a dataset's limitations is not a negative - it is actually a positive, as steps might then be taken to mitigate any issues. Not knowing ≠ there are no issues. (7/.
@jaldmn
Joe Alderman
8 months
Those using datasets should carefully appraise the suitability of the dataset for their purpose, and consider how they might mitigate any biases or limitations contained within. (6/.
@jaldmn
Joe Alderman
8 months
To prevent this happening, it's really important that those creating datasets also supply documentation. This should transparently explain what data they contain, and describe any limitations or related issues which those using data should be aware of. (5/.
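One lightweight way to act on this tweet's point is to ship a machine-readable "datasheet" alongside the data itself. This is purely an illustrative sketch: the field names below are hypothetical, and STANDING Together is a set of written recommendations, not a code standard.

```python
# Illustrative only: a minimal machine-readable "datasheet" a dataset
# creator might publish next to the data. Field names are hypothetical.
import json

datasheet = {
    "name": "example_icu_vitals",  # hypothetical dataset name
    "collection_context": "Single-centre ICU, 2015-2020",
    "known_limitations": [
        "Under-representation of patients aged under 18",
        "Pulse oximetry less accurate for darker skin tones",
    ],
    "recommended_uses": ["research"],
    "discouraged_uses": ["deployment without local validation"],
}

# Serialise so the documentation travels with the data.
print(json.dumps(datasheet, indent=2))
```

Stating known limitations up front is exactly the "not a negative" point made later in the thread: users can only mitigate issues they have been told about.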
@jaldmn
Joe Alderman
8 months
There are lots of reasons why algorithms can be biased. One key driver is the data used to develop or evaluate them. Biases in data can pass along the chain and drive biases in algorithms, leading to downstream issues which can be hard to predict in advance. (4/.
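The "biases pass along the chain" mechanism can be shown with a toy example (invented numbers, not from the thread): a decision threshold fitted on data dominated by one group silently inherits that group's statistics, and misfires on an under-represented group whose values run higher.

```python
# Toy illustration: a biomarker runs systematically higher in group B
# than group A, and group B is barely present in the training data.
group_a = [10.0, 11.0, 12.0, 13.0, 14.0] * 20   # well represented (100 samples)
group_b = [15.0, 16.0, 17.0]                     # under-represented (3 samples)
train = group_a + group_b

# "Model": flag values above the training mean as abnormal.
threshold = sum(train) / len(train)              # ~12.1, dominated by group A

flagged_b = [x for x in group_b if x > threshold]
# Every group-B value exceeds the threshold: a normal-range reading for
# group B is always flagged, purely because the data skewed toward group A.
print(threshold, len(flagged_b), len(group_b))
```

The bias was never written into the "algorithm" - it flowed in from the dataset's composition, which is the thread's point.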
@jaldmn
Joe Alderman
8 months
BUT: these benefits are not guaranteed. In fact, there is growing evidence that medical AI works better for certain groups than others. This may contribute to health inequity and cause patients harm. (3/.
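A hedged sketch of how this kind of evidence surfaces in practice: reporting performance per subgroup rather than a single overall number. The records below are invented for illustration.

```python
# Per-subgroup accuracy: a single overall metric can hide a large gap
# between groups. Data is fabricated purely to illustrate the check.
from collections import defaultdict

records = [
    # (group, true_label, predicted_label)
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 1, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, pred in records:
    total[group] += 1
    correct[group] += int(truth == pred)

accuracy = {g: correct[g] / total[g] for g in total}
print(accuracy)  # overall accuracy (0.75 here) hides the gap between groups
```

Here the model is perfect for group A and no better than a coin flip for group B, which an aggregate metric alone would not reveal.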
@jaldmn
Joe Alderman
8 months
The world of medical artificial intelligence is moving at a remarkable pace, with a dizzying range of AI/ML tools already available for use in patients' care today. These tools are undoubtedly cool, and have great potential to improve health! (2/.