MIT - Critical Data

A global consortium of computer scientists, clinicians, and other health professionals, led by the MIT Laboratory for Computational Physiology.

Connect with MIT Critical Data via social media:

Twitter: https://twitter.com/mitcriticaldata
Instagram: https://www.instagram.com/mitcriticaldata/

Critical Data Affiliates:
- Lab for Computational Physiology: http://lcp.mit.edu/
- Sana: http://sana.mit.edu/

03/24/2025

Knowledge, our understanding of truth, is based purely on experiences and observations; there is no such thing as “ground truth”. How we see things objectifies, or more aptly, “subjectifies” those things. This brings to mind Schrödinger’s cat. Just like truth, at any one time, the cat is neither dead nor alive. This is why scientific thinking requires a plurality of perspectives. Without diversity, there is no science. In this paper, we brought together the perspectives of nurses, pharmacists, respiratory therapists, social workers, doctors and computer scientists to reflect on the unfolding of AI in healthcare.

https://www.jscai.org/article/S2772-9303(25)00053-5/fulltext

03/13/2025

The pursuit of diversity goes much deeper than simply being a core tenet of some “liberal” ideology. For artificial intelligence, specifically reinforcement learning, successful adaptation to changing conditions requires a level of diversity among agents’ actions. In Acemoglu’s work, the most effective social networks combine a large fraction of strongly connected agents with smaller communities that maintain diverse choices through weak links. Scientific thinking requires a plurality of perspectives. Without diversity, there is no science. Without diversity, there is no truth.
https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000495
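
The reinforcement-learning claim above can be made concrete with a toy experiment. The sketch below is our own illustration, not Acemoglu's model or the paper's setup: it ignores network structure entirely and only shows that a population whose agents retain some diversity in their actions (different exploration rates) can adapt when the environment changes, while a homogeneous, purely exploitative population cannot.

```python
# Toy two-armed bandit whose best arm flips once; compare a homogeneous
# population of greedy agents with a population that keeps some explorers.
import random

def run_population(epsilons, horizon=3000, flip_at=1000, seed=0):
    """Average per-agent reward after the environment changes."""
    rng = random.Random(seed)
    estimates = [[0.0, 0.0] for _ in epsilons]   # per-agent value estimates
    counts = [[0, 0] for _ in epsilons]
    reward_after_flip = 0.0
    for t in range(horizon):
        best_arm = 0 if t < flip_at else 1       # the rewarding arm flips once
        for i, eps in enumerate(epsilons):
            if rng.random() < eps:
                arm = rng.randrange(2)           # explore
            else:
                arm = 0 if estimates[i][0] >= estimates[i][1] else 1  # exploit
            reward = 1.0 if arm == best_arm else 0.0
            counts[i][arm] += 1
            estimates[i][arm] += (reward - estimates[i][arm]) / counts[i][arm]
            if t >= flip_at:
                reward_after_flip += reward
    return reward_after_flip / (len(epsilons) * (horizon - flip_at))

# purely greedy agents never rediscover the new best arm after the flip
print("homogeneous:", run_population([0.0] * 10))
# the same population with a few exploring agents recovers part of the reward
print("diverse:    ", run_population([0.0] * 7 + [0.1, 0.2, 0.3]))
```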

02/07/2025

It is unrealistic to expect the machine learning community to be able to understand the social patterning of the data generation process. The requisite perspectives to unearth all the “data artefacts” that are hidden in plain sight within EHRs are beyond the “expertise” of any group.

In this paper, we propose that a glossary of data artefacts (e.g., measurement bias of devices and instruments, or variation in the frequency of screening and monitoring that is not explained by the disease), which have profound effects on distal prediction and classification algorithms, be created and maintained by a community of practice around each dataset. Data curation goes beyond exploratory data analysis and entails a deep understanding of the data generation process that even clinicians lack. The onus to understand the backstory of the data should not rest on individual research groups but should be shared by the wider community that learns together.

https://jbiomedsci.biomedcentral.com/articles/10.1186/s12929-024-01106-6
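
As a rough illustration of what a community-maintained glossary of data artefacts could look like in practice, here is a minimal sketch. The entry format, field names and example entries are our own assumptions, not a schema proposed in the paper.

```python
# A possible shape for a shared glossary of data artefacts: structured entries
# that downstream modelers can consult before using a feature.
from dataclasses import dataclass, field

@dataclass
class DataArtefact:
    feature: str                  # variable in the dataset the artefact affects
    description: str              # what the artefact is and how it arises
    affected_groups: list = field(default_factory=list)
    downstream_risk: str = ""     # likely effect on prediction/classification
    references: list = field(default_factory=list)   # citations, discussion threads

glossary = [
    DataArtefact(
        feature="spo2",
        description="Pulse oximeters can overestimate oxygen saturation in "
                    "patients with darker skin pigmentation (measurement bias).",
        affected_groups=["patients with darker skin pigmentation"],
        downstream_risk="Models may systematically miss hypoxemia in affected groups.",
    ),
    DataArtefact(
        feature="lactate",
        description="How often lactate is measured varies with perceived illness "
                    "severity and local practice, not only with the disease.",
        downstream_risk="Measurement frequency leaks treatment intensity into "
                        "features and labels.",
    ),
]

for entry in glossary:
    print(f"{entry.feature}: {entry.downstream_risk}")
```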

02/04/2025

How can we do fairness evaluation and health service research when we don’t have accurate demographic labels reflecting social determinants of health in the majority of EHR datasets? In most countries, health systems are forbidden from collecting race-ethnicity information because doing so is considered racist. Asking about sexual identity is not only embarrassing but also unlikely to yield accurate information because of societal stigma.

We introduce the concept of care phenotypes, an objective representation of the care patients receive based on how they are treated, tested and monitored. We describe the creation of these labels based on the performance of routine care in this paper, and in other papers, based on the intensity of monitoring and screening that results from social determinants of health and social determinants of care. This study quantifies essential care procedures in the intensive care unit, such as turning and mouth care for patients who are sedated and intubated, to measure the performance of routine care protocols. We demonstrate a distribution in the frequency with which routine care is administered and propose leveraging it for health service research and fairness evaluation in machine learning.

https://www.medrxiv.org/content/10.1101/2025.01.24.25320468v1
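
To make the idea concrete, here is a minimal sketch of how a care-phenotype feature could be computed from documented routine-care events. The events table, column names (patient_id, event_type, charttime) and normalization are placeholders; the paper's actual cohort and definitions are more involved.

```python
# Derive a simple care-phenotype feature: documented turning events per
# 8 hours of observation for each patient.
import pandas as pd

events = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 3],
    "event_type": ["turning"] * 6,
    "charttime": pd.to_datetime([
        "2025-01-01 00:00", "2025-01-01 02:10", "2025-01-01 04:30",
        "2025-01-01 00:00", "2025-01-01 08:00",
        "2025-01-01 00:00",
    ]),
})

# hours observed per patient (here: span between first and last documented event)
span_hours = (
    events.groupby("patient_id")["charttime"]
    .agg(lambda s: (s.max() - s.min()).total_seconds() / 3600)
)
turns = events[events["event_type"] == "turning"].groupby("patient_id").size()

# turning events per 8 hours of observation; clip guards against
# zero-length observation windows
care_phenotype = (turns / span_hours.clip(lower=1)) * 8
print(care_phenotype)
```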

01/24/2025

To advance AI in healthcare, it is crucial that developers understand (1) how the data came about, (2) the accuracy of the instruments and devices used to measure physiologic signals, (3) the impact of variation in the measurement frequency of features and in the capture of outcomes across patients (care phenotypes), and (4) local clinical practice patterns and provider perceptions of the patient, which are almost never fully captured but which we know have a huge effect on outcomes, including complications, among other very complex aspects of the social patterning of the data generation process. A diversity of expertise, perspectives and lived experiences is requisite to understanding the data and developing safe AI models. We need to invest in the “who” and the “how” rather than just the “what” if we are to leverage this beast of a technology that has the potential to truly disrupt legacy systems with data-informed redesign.

https://bmjopen.bmj.com/content/15/1/e086982.full
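
Point (3) in particular lends itself to a simple pre-modeling audit. The sketch below is not the paper's protocol; the tables and column names (a labs table with subject_id, itemid, charttime, and a cohort table with a group attribute) are hypothetical. It simply estimates how often a feature is measured per patient-day and stratifies that frequency by a cohort attribute.

```python
# Audit measurement frequency of a feature before using it in a model.
import pandas as pd

def measurement_frequency(labs: pd.DataFrame, itemid: str) -> pd.Series:
    """Measurements per patient-day for one lab item."""
    sel = labs[labs["itemid"] == itemid]
    per_patient = sel.groupby("subject_id")["charttime"].agg(["count", "min", "max"])
    # observation window in days; treat very short windows as one day
    days = ((per_patient["max"] - per_patient["min"]).dt.total_seconds() / 86400).clip(lower=1)
    return per_patient["count"] / days

def frequency_by_group(labs: pd.DataFrame, cohort: pd.DataFrame, itemid: str) -> pd.Series:
    """Median measurements per day, stratified by a cohort attribute."""
    freq = measurement_frequency(labs, itemid).rename("per_day")
    merged = cohort.join(freq, on="subject_id")
    return merged.groupby("group")["per_day"].median()

# usage, with your own extracted tables:
# print(frequency_by_group(labs, cohort, itemid="lactate"))
```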

01/21/2025

Statistical measures of model performance are only the beginning of the continuous validation and evaluation that AI tools require. AUC, precision, recall, F1 score, calibration plots, SHAP values, etc. do not translate into better patient outcomes or health system efficiency. Journals and ML conferences should downgrade the importance of these artificial measures of value. Everyone is racing to build models that are overfitted to these metrics. We have to come up with better evaluation frameworks around what we ought to value, rather than assigning value to what we can measure.

https://journals.plos.org/globalpublichealth/article?id=10.1371/journal.pgph.0004171
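
For reference, the "beginning" this post refers to is cheap to compute, which is part of the problem. The sketch below generates toy scores and reports the usual discrimination and calibration numbers; nothing in it says whether a deployed tool improves patient outcomes or workflow.

```python
# The standard statistical report card, on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score, precision_score, recall_score, f1_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_prob = np.clip(y_true * 0.6 + rng.normal(0.2, 0.2, size=1000), 0, 1)  # toy scores
y_pred = (y_prob >= 0.5).astype(int)

print("AUC:      ", roc_auc_score(y_true, y_prob))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))

# calibration: observed event rate per bin of predicted probability
frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)
for p, o in zip(mean_pred, frac_pos):
    print(f"predicted {p:.2f} -> observed {o:.2f}")
```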

12/20/2024

We are offering a short health AI course immediately before the Society of Critical Care Medicine annual congress on February 22, 2025 in Orlando, FL. This course will not teach how to build machine learning models. Instead, it will provide a landscape of the field and guidance on how to navigate the immediate future of healthcare, which is increasingly incorporating AI tools. Capacity is limited, so please register early.

Gain insights into the practical aspects of AI implementation and integration within healthcare settings, addressing the technical, organizational, and infrastructural challenges involved. The course also emphasizes the importance of ethical and legal considerations, fostering a critical understanding of data privacy, consent, accountability, and the potential biases inherent in AI systems.

https://sccm.org/education-center/educational-programming/deepdive/deep-dive-ai

12/08/2024

We are looking into how to instill epistemic humility and the pursuit of plurality in scientific thinking. We see AI everywhere: Arrogance and Ignorance. Epistemic humility acts as a forcing function for seeking to work with people who think differently. Humility and plurality, which go hand in hand, keep us grounded.

On this note, we are organizing a symposium on epistemic humility at MIT on January 16, 2025. The event will take a deep dive into the human design flaws and societal techno-optimism that may impair our ability to innovate and evaluate technologies.

https://www.eventbrite.com/e/ai-health-equity-and-ethics-symposium-tickets-1073599227189?aff=oddtdtcreator

12/07/2024

In the age of multi-modal modeling, it is crucial to have a strategy to reduce the risk of AI agents learning shortcut features. Optimizing accuracy alone is insufficient when building artificial intelligence. If a strategy to mitigate the risk of learning shortcut features is not in place, then images should not be used as data input. Otherwise, you will be contributing to the problem of algorithmic bias.

https://link.springer.com/article/10.1007/s10278-024-01335-z?utm_source=rct_congratemailt&utm_medium=email&utm_campaign=nonoa_20241206&utm_content=10.1007/s10278-024-01335-z
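
One widely used check in this space, not necessarily the paper's specific protocol, is an attribute probe: test whether a protected or site-specific attribute can be recovered from the image model's embeddings. If a simple probe does well, the model has access to a potential shortcut feature. The embeddings and attribute below are synthetic, with a leak injected purely for illustration.

```python
# Probe for shortcut risk: can a linear model recover a sensitive attribute
# from the image embeddings?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# placeholders: in practice these come from your trained encoder and metadata
embeddings = rng.normal(size=(500, 64))      # image embeddings
attribute = rng.integers(0, 2, size=500)     # e.g. acquisition site or self-reported race
embeddings[:, 0] += attribute * 1.5          # injected leak, for illustration only

probe = LogisticRegression(max_iter=1000)
auc = cross_val_score(probe, embeddings, attribute, cv=5, scoring="roc_auc").mean()
print(f"probe AUC for the attribute: {auc:.2f}  (near 0.5 = little leakage)")
```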

04/27/2023

👀 Our latest article "The impact of commercial health datasets on medical research and health-care algorithms" is available in The Lancet Digital Health!

Curation of health datasets should be performed by diverse teams in a transparent manner in order to prevent the encoding of disparities and biases in clinical algorithms. Developing and validating models on commercial datasets created with a "black box" curation process, and without an understanding of the social patterning of the data generation process, is a perfect set-up for algorithms perpetuating health inequities.

READ HERE: https://bit.ly/3n6feYS

♦︎ MIT Critical Data:
Website: https://criticaldata.mit.edu/
LinkedIn: https://www.linkedin.com/company/mitcriticaldata/
FB: https://www.facebook.com/mitcriticaldata
Instagram: https://www.instagram.com/mitcriticaldata/
Twitter: https://twitter.com/mitcriticaldata

04/17/2023

📘 Read our new article in PLOS DIGITAL HEALTH, "Equity should be fundamental to the emergence of innovation".

The focus on health equity is not stifling innovation. In fact, health equity, not profit, should be the goal of innovation.

READ HERE: https://bit.ly/3GOhKcY


03/31/2023

✏️ Our newest article "Critical Bias in Critical Care Devices" is out in Critical Care Clinics!

The numbers in real-world health data mean different things depending on who you are, because the accuracy of medical devices differs across patients. The encoders and transformers that we use to build AI are not aware of this and therefore learn disparities-laced truths. In this paper, we describe a few of the health technologies in the ICU that perform differently across patient populations. What do these differences mean for downstream clinical tasks? Are we sealing the fate of certain populations with poor outcomes through our algorithms? AI for whose good?

READ HERE: https://bit.ly/3KmaS8Q
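
Pulse oximetry is the canonical example of device bias of this kind: overestimation of oxygen saturation in patients with darker skin pigmentation is well documented. As a rough illustration, the sketch below compares device SpO2 readings against paired arterial SaO2 values and flags "hidden hypoxemia" by group, using one commonly cited definition. The table and column names are placeholders, not the paper's data.

```python
# Audit device bias in paired measurements: SpO2 (device) vs SaO2 (blood gas).
import pandas as pd

pairs = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "spo2":  [93, 96, 97, 94, 95, 97],    # device reading (%)
    "sao2":  [91, 95, 96, 86, 87, 95],    # arterial blood gas (%)
})

pairs["bias"] = pairs["spo2"] - pairs["sao2"]
# hidden hypoxemia: device looks acceptable (>= 92%) while true saturation is low (< 88%)
pairs["hidden_hypoxemia"] = (pairs["spo2"] >= 92) & (pairs["sao2"] < 88)

print(pairs.groupby("group")[["bias", "hidden_hypoxemia"]].mean())
```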


Address

45 Carleton Street
Cambridge, MA
02139
