MIT - Critical Data

A global consortium led by the MIT Laboratory for Computational Physiology.

Connect with MIT Critical Data on social media:

Twitter: https://twitter.com/mitcriticaldata
Instagram: https://www.instagram.com/mitcriticaldata/

Critical Data Affiliates:
- Lab for Computational Physiology: http://lcp.mit.edu/
- Sana: http://sana.mit.edu/

10/13/2025

Intelligence evolved as a way for living systems to resist entropy, the universal drift toward disorder. It is life’s strategy for maintaining itself in an ever-degrading universe. Current AI systems are extensions of the human intelligence handpicked by their developers. Like a digital prosthesis, they are tools for their ends, not ours.

AI systems rely entirely on human-generated data; not all of it, but only what their creators can easily access, as far as their eyes can see, and not the bodies of knowledge hiding in plain sight: those of religions, non-Western civilizations, and indigenous communities.

Trained, fine-tuned, and finally monetized by humans operating in a capitalist world, AI exhibits the behavior of those who create it: confident, dominant, all-knowing, designed to impress and to please, because, after all, it is optimized for profit.

Join us at AI & the Future of Medicine at the University of British Columbia in Vancouver, Canada, on November 15-16, 2025, as we forge a path forward in the chaos that the AI hype has wrought.

DASH Event: AI & the Future of Medicine: Bridging AI Innovation and Health Equity, November 15-16, 2025. A weekend of interactive panel discussions with audience participation and hands-on collaboration.

09/23/2025

Medical education does not exist in a vacuum, and attempts to reform it in isolation are fundamentally inadequate if the broader healthcare system remains broken. The integration of artificial intelligence (AI) into medical practice necessitates a complete reimagining of how we prepare future physicians, but this transformation cannot succeed without simultaneously addressing the systemic failures that plague healthcare delivery.

Current medical education perpetuates epistemic injustices by training students on knowledge derived primarily from observations of white men in rich countries, then expecting these findings to apply universally. This approach creates physicians who are unprepared to serve diverse populations effectively. Meanwhile, the healthcare system that medical students are preparing to join faces profound systemic failures. It produces more carbon emissions than any other industry, while deaths from preventable medical errors have risen from 96,000 annually, based on the "To Err Is Human" report published by the Institute of Medicine in 1999, to a staggering 800,000 based on recent estimates, despite billions invested in patient safety initiatives.

The challenge extends beyond curriculum reform. AI deployment in healthcare reveals deeper structural problems: commercial interests that prioritize profit over patient outcomes, evaluation systems that reward individual expertise over collective wisdom, and the historical roots of a knowledge creation system tinged with racism and sexism, among other epistemic injustices. And now, medical students entering this environment face a world where the majority of hospitals already use AI systems, trained on historical data marinated in unconscious biases, in day-to-day clinical decision-making, while those students receive hardly any training on how to critically evaluate these tools or understand their limitations.

The illustration is provided by Dancing with Markers.

09/21/2025

Here are the videos of the talks, discussions and workshops held in Rome around AI and faith on September 8, 2025. The key takeaway message from the dialogues is that we need to reflect and reimagine. This does not have to be the Age of Disillusionment. This can be the Age of Great Reflection, the Age of Reimagination. Why do we need to reflect?

AI has democratized access to knowledge. In the past, only doctors could answer complex questions about health and disease. That is no longer the case. One does not need a PhD to have a deep understanding of a topic. To a certain extent, knowledge has been devalued by AI. There is a lot of benefit to that: AI might allow the dismantling of knowledge hierarchies, hierarchies that enable and preserve power structures. The powerful have access to knowledge and information that the masses do not possess.

But knowledge democratization brings a new set of problems. Carl Jung posited that if knowledge is handed to us rather than learned, we will not know how to use it well. Knowledge is out. Behavior is in. So how do we teach behavior? How do we evaluate behavior? How do we teach collective instead of individual learning?

Another recurring question was how to teach critical thinking. We don't, because we can't. By its very nature, critical thinking cannot be taught. It needs to be discovered. What we can do is create an environment that allows for the discovery of critical thinking. There are two key components of such an environment: (1) it brings learners with different backgrounds and lived experiences together, and (2) it creates a space of psychological safety for learners to challenge each other. There is no room for hubris in this space, only curiosity.


09/09/2025

Please join us at the Vatican on September 8th for dialogues around AI and faith.

“Faith and Artificial Intelligence: Using language models to promote human connection”

How do we make AI worth its cost? By harnessing it to address humanity’s greatest challenges. This conference brings together religious leaders, researchers, and lay communities to examine both AI’s promise and its inherent risks. Rather than approaching AI as merely a technological advancement, we explore it through the lens of Catholic social teaching, emphasizing human dignity and our responsibility to protect the most vulnerable. The event addresses critical questions: How might AI reshape human relationships and community bonds? What safeguards ensure technology serves rather than supplants our mission to care for others? Through dialogues and workshops, participants will discuss AI development and use that recognize its transformative potential but also the irreplaceable value of human connection.

Please register here: https://sites.google.com/view/ai-faith-dialogues
The event will be livestreamed on Facebook.

03/24/2025

Knowledge, our understanding of truth, is purely based on experiences and observations; there is no such thing as “ground truth”. How we see things objectifies, or more aptly, “subjectifies” those things. This brings to mind Schrödinger's cat. Just like truth, at any one time, the cat is neither dead nor alive. This is why scientific thinking requires a plurality of perspectives. Without diversity, there is no science. In this paper, we brought together the perspectives of nurses, pharmacists, respiratory therapists, social workers, doctors, and computer scientists to reflect on the unfolding of AI in healthcare.

https://www.jscai.org/article/S2772-9303(25)00053-5/fulltext

03/13/2025

The pursuit of diversity goes much deeper than simply being a core tenet of some “liberal” ideology. For artificial intelligence, specifically reinforcement learning, successful adaptation to changing conditions requires a level of diversity among agents' actions. In Acemoglu's work, the most effective social networks combine a large fraction of strongly connected agents with smaller communities that maintain diverse choices through weak links. Scientific thinking requires a plurality of perspectives. Without diversity, there is no science. Without diversity, there is no truth.
https://journals.plos.org/digitalhealth/article?id=10.1371/journal.pdig.0000495

02/07/2025

It is unrealistic to expect the machine learning community to understand, on its own, the social patterning of the data generation process. The requisite perspectives to unearth all the “data artefacts” that are hidden in plain sight within EHRs are beyond the “expertise” of any single group.

In this paper, we propose that a glossary of data artefacts, e.g. measurement bias of devices and instruments, or variation in the frequency of screening and monitoring that is not explained by the disease, which have profound effects on distal prediction and classification algorithms, be created and maintained by a community of practice around each dataset. Data curation goes beyond exploratory data analysis and entails a deep understanding of the data generation process that even clinicians lack. The onus to understand the backstory of the data should not rest on individual research groups but should be shared by the wider community that learns together.
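As a sketch of what one entry in such a community-maintained glossary might look like, consider the following. The field names and the pulse-oximetry example are illustrative assumptions, not a schema from the paper:

```python
# Hypothetical glossary entry for one data artefact. The fields and the
# pulse-oximetry example are illustrative, not taken from the paper.
artefact_entry = {
    "dataset": "example-icu-ehr",
    "artefact": "pulse_oximetry_measurement_bias",
    "kind": "measurement bias of a device",
    "description": (
        "Pulse oximeters can overestimate oxygen saturation in patients "
        "with darker skin, so any model using SpO2 inherits that bias."
    ),
    "affected_features": ["spo2"],
    "downstream_risk": "hidden hypoxemia missed by prediction models",
    "maintained_by": "community of practice around the dataset",
}
```

A shared, versioned list of such entries would let each research group start from the community's accumulated understanding of the data generation process rather than rediscovering every artefact alone.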

https://jbiomedsci.biomedcentral.com/articles/10.1186/s12929-024-01106-6

02/04/2025

How can we do fairness evaluation and health services research when we don't have accurate demographic labels reflecting social determinants of health in the majority of EHR datasets? In most countries, health systems are forbidden from obtaining race-ethnicity information because collecting it is considered racist. Asking about one's sexual identity is not only embarrassing but unlikely to yield accurate information because of societal stigma.

We introduce the concept of care phenotypes: an objective representation of the care patients actually receive, based on how they are treated, tested, and monitored. In this paper we describe the creation of these labels from the performance of routine care; in other papers, from the intensity of monitoring and screening that results from social determinants of health and social determinants of care. This study quantifies essential care procedures in the intensive care unit, such as turning and mouth care for patients who are sedated and intubated, to measure the performance of routine care protocols. We demonstrate a distribution in the frequency with which routine care is administered and propose leveraging these phenotypes for health services research and fairness evaluation in machine learning.
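To make the idea concrete, here is a minimal sketch, in plain Python with made-up chart rows, of how one might derive a simple care phenotype: the average number of documented turning events per patient-day. The row layout and event names are assumptions for illustration, not the paper's actual data model.

```python
# Sketch: deriving a simple "care phenotype" from charted events.
# Rows and event names are hypothetical illustrations.
from collections import defaultdict

def turns_per_day(events):
    """events: iterable of (patient_id, day, event_type) chart rows.
    Returns average documented turns per patient-day, keyed by patient."""
    counts = defaultdict(int)   # (patient, day) -> documented turns
    days = defaultdict(set)     # patient -> set of days observed
    for pid, day, etype in events:
        days[pid].add(day)
        if etype == "turn":
            counts[(pid, day)] += 1
    return {
        pid: sum(counts[(pid, d)] for d in ds) / len(ds)
        for pid, ds in days.items()
    }

rows = [
    ("a", 1, "turn"), ("a", 1, "turn"), ("a", 1, "mouth_care"),
    ("a", 2, "turn"),
    ("b", 1, "turn"),
]
rates = turns_per_day(rows)
# Patient "a": 3 turns over 2 days (1.5/day); patient "b": 1 turn over 1 day.
```

The resulting distribution of rates across patients, rather than any demographic label, is what would then be examined for systematic differences in how patients are cared for.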

https://www.medrxiv.org/content/10.1101/2025.01.24.25320468v1

01/24/2025

In order to advance AI in healthcare, it is crucial that developers understand (1) how the data came about, (2) the accuracy of the instruments and devices used to measure physiologic signals, (3) the impact of variation in the measurement frequency of features and in the capture of outcomes across patients (care phenotypes), and (4) local clinical practice patterns and provider perceptions of the patient, which are almost never fully captured but are known to have a huge effect on outcomes, including complications, among other very complex social patterning of the data generation process. A diversity of expertise, perspectives, and lived experiences is requisite to understand the data and develop safe AI models. We need to invest in the “who” and the “how” rather than just the “what” if we are to leverage this beast of a technology, which has the potential to truly disrupt legacy systems through data-informed redesign.

https://bmjopen.bmj.com/content/15/1/e086982.full

01/21/2025

Statistical measures of model performance are only the beginning of the continuous validation and evaluation that AI tools require. AUC, precision, recall, F1 score, calibration plots, SHAP values, etc. do not translate to better patient outcomes or health system efficiency. Journals and ML conferences should downgrade the importance of these artificial measures of value. Everyone is racing to build models that are overfitted to these metrics. We have to come up with better evaluation frameworks for what we ought to value, rather than assigning value to what we can measure.
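It helps to remember how little a headline metric encodes. AUC, for example, reduces to a single pairwise-ranking probability: the chance that a randomly chosen positive case is scored above a randomly chosen negative one. A few lines of plain Python, with made-up labels and scores, compute it exactly; nothing in the number speaks to patient outcomes.

```python
# AUC as a pairwise-ranking probability: P(score of a random positive
# exceeds score of a random negative), with ties counted as half.
def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))

# Illustrative, made-up labels and model scores.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(auc(labels, scores))  # 0.8888888888888888
```

A model can push this number arbitrarily high on a held-out set while changing nothing about how patients are treated, which is exactly why such metrics are the beginning, not the end, of evaluation.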

https://journals.plos.org/globalpublichealth/article?id=10.1371/journal.pgph.0004171

12/20/2024

We are offering a short health AI course immediately before the Society of Critical Care Medicine annual congress, on February 22, 2025 in Orlando, FL. This course will not teach how to build machine learning models. Instead, it will provide a landscape of the field and guidance on how to navigate the immediate future of healthcare, which is increasingly incorporating AI tools. Capacity is limited, so please register early.

Gain insights into the practical aspects of AI implementation and integration within healthcare settings, addressing the technical, organizational, and infrastructural challenges involved. The course also emphasizes the importance of ethical and legal considerations, fostering a critical understanding of data privacy, consent, accountability, and the potential biases inherent in AI systems.

https://sccm.org/education-center/educational-programming/deepdive/deep-dive-ai

Address

45 Carleton Street
Cambridge, MA
02139
