MIT - Critical Data

A global consortium of computer scientists, engineers, and clinicians led by the MIT Laboratory for Computational Physiology.

Connect with MIT Critical Data on social media:

Twitter: https://twitter.com/mitcriticaldata
Instagram: https://www.instagram.com/mitcriticaldata/

Critical Data Affiliates:
- Lab for Computational Physiology: http://lcp.mit.edu/
- Sana: http://sana.mit.edu/

12/02/2025

Standard fairness metrics often rely on flawed foundations - biased ground truth labels, imperfect predictions, and oversimplified demographic categories that mask the true complexity of health disparities. The path forward requires moving beyond technical fixes to embrace multidisciplinary collaboration, where clinicians, data scientists, ethicists, and policymakers work together to redefine fairness in ways that reflect both clinical realities and lived experiences. True equity in cancer care won’t come from perfect metrics, but from shared accountability in how we design, deploy, and govern AI systems.

https://authors.elsevier.com/c/1mCVX8Z12ybXGd


11/25/2025

Behind this paper: a physician from Iran, a social scientist from Australia, a behavioral scientist from Norway, a history and philosophy enthusiast from the US, and a data scientist from the US. In research, we tend to emphasize what we write about, but the who and how matter just as much, if not more. This collaboration generated more constructive tension than any project I’ve been part of—continuous pushback, revision, and hard-won consensus. The result is richer for it. I hope you enjoy reading it as much as we enjoyed learning from each other.

This paper argues that cultivating epistemic humility—the practice of acknowledging uncertainty and the limitations of human cognition—is essential for revitalizing science in an era of climate change, pandemics, and AI development. While human evolution optimized our minds for rapid, survival-oriented judgments, the scientific method succeeds by deliberately engaging slower, more analytical thinking that questions assumptions and welcomes revision. We propose practical strategies including diverse teams, careful AI integration, and metacognitive training to counter our “bias blind spot” and strengthen critical thinking in future scientists and physicians. Amid a growing crisis of public trust in science fueled by misinformation and polarization, scientists who openly acknowledge uncertainty are actually perceived as more trustworthy. By fostering a culture that values doubt, embraces complexity, and remains open to revision, science can renew itself and guide society toward discovery rather than dogma.

https://www.sciencedirect.com/science/article/pii/S2667193X25003266


11/12/2025

To ensure AI delivers meaningful value, we need five fundamental tools: a mirror, a flashlight, a microscope, a paintbrush, and a podium.

The mirror helps us examine our own motivations and values—ensuring we're building AI for the right reasons. The flashlight illuminates the systems we're embedded in, revealing structures we often don't even notice. The microscope lets us analyze these systems in detail, identifying both their strengths and their vulnerabilities. The paintbrush empowers us to reimagine and redesign what we find. And the podium creates space for everyone to share what worked, what failed, and what we learned.

Please join us in Paris on December 9, 2025 and/or in Bordeaux on December 13, 2025 to continue our series of collaborative design sessions that will bring together leaders from healthcare, technology, engineering, and creative sectors to reflect, reimagine, and reanimate the healthcare system.

Bordeaux: https://forms.gle/ZvqX1AizoQzU2Fjd7
Paris: https://forms.gle/N6fWuNhWWpucZ3dH7

11/08/2025

Open Science has enormous potential, but realizing it requires more than technical reforms. It demands a fundamental rethinking of power in knowledge production. As long as epistemic authority remains concentrated, as long as community knowledge systems are marginalized, and as long as research priorities serve external agendas, "open" access will continue to reproduce the inequities it claims to solve.

The question isn't whether we can make science more open. It's whether we can make it more just.

This paper critically examines how current forms of Open Science (OS) fall short of advancing health equity in global health. While OS is promoted as a public good, promising transparency, efficiency, and inclusivity, current practices often reproduce rather than dismantle entrenched inequities.

10/31/2025

We need AI that will keep humans on the rails, and humans that will keep AI on the rails. Bruno Latour would say AI systems aren’t neutral tools but active participants in networks that reconfigure relationships between humans, institutions, and knowledge production. A huge thanks to the MIT Critical Data village behind this essay. I am simply the messenger.

https://www.nature.com/articles/s41591-025-04013-x

10/13/2025

Intelligence evolved as a way for living systems to resist entropy, the universal drift toward disorder. It is life’s strategy for maintaining itself in an ever-degrading universe. Current AI systems are extensions of the human intelligence handpicked by their developers. Like a digital prosthesis, they are tools for their ends, not ours.

AI systems rely entirely on human-generated data, yet not on all of it: only on the data easily accessed by their creators, as far as their eyes can see, not on the knowledge hiding in plain sight in religions, non-Western civilizations, and Indigenous communities.

Trained, fine-tuned and finally monetized by humans operating in a capitalist world, AI exhibits the behavior of those who create it. Confident, dominant, all-knowing, designed to impress and designed to please, because after all, it is optimized for profit.

Join us at the AI & Future of Medicine at the University of British Columbia in Vancouver, Canada on November 15-16, 2025 as we forge a path forward in the chaos that the AI hype has wrought.

DASH Event: AI & the Future of Medicine: Bridging AI Innovation and Health Equity, November 15-16, 2025. A weekend of interactive panel discussions with audience participation and hands-on collaboration.

09/23/2025

Medical education does not exist in a vacuum, and attempts to reform it in isolation are fundamentally inadequate if the broader healthcare system remains broken. The integration of artificial intelligence (AI) into medical practice necessitates a complete reimagining of how we prepare future physicians, but this transformation cannot succeed without simultaneously addressing the systemic failures that plague healthcare delivery.

Current medical education perpetuates epistemic injustices by training students on knowledge derived primarily from observations of white men in rich countries, then expecting these findings to apply universally. This approach creates physicians who are unprepared to serve diverse populations effectively. Meanwhile, the healthcare system that medical students are preparing to join faces profound systemic failures. It produces more carbon emissions than any other industry, while deaths from preventable medical errors have risen from 96,000 annually, based on the "To Err Is Human" report published by the Institute of Medicine in 1999, to a staggering 800,000 based on recent estimates, despite billions invested in patient safety initiatives.

The challenge extends beyond curriculum reform. AI deployment in healthcare reveals deeper structural problems: commercial interests that prioritize profit over patient outcomes, evaluation systems that reward individual expertise over collective wisdom, and a knowledge creation system whose historical roots are tinged with racism and sexism, among other epistemic injustices. Medical students entering this environment face a world where the majority of hospitals already use AI systems, trained on historical data marinated in unconscious biases, in day-to-day clinical decision-making, while receiving hardly any training on how to critically evaluate these tools or understand their limitations.

The illustration is provided by Dancing with Markers.

09/21/2025

Here are the videos of the talks, discussions and workshops held in Rome around AI and faith on September 8, 2025. The key takeaway message from the dialogues is that we need to reflect and reimagine. This does not have to be the Age of Disillusionment. This can be the Age of Great Reflection, the Age of Reimagination. Why do we need to reflect?

AI has democratized access to knowledge. In the past, only doctors could answer complex questions about health and disease. That is no longer the case. One does not need a PhD to have a deep understanding of a topic. To a certain extent, knowledge has been devalued by AI. There is a lot of benefit to that: AI might allow the dismantling of knowledge hierarchies, the very hierarchies that enable and preserve power structures. The powerful have access to knowledge and information that the masses do not possess.

But knowledge democratization brings a new set of problems. Carl Jung posited that if knowledge is handed to us rather than learned, we will not know how to use it well. Knowledge is out. Behavior is in. So how do we teach behavior? How do we evaluate behavior? How do we teach collective instead of individual learning?

Another recurring question was how to teach critical thinking. We don't, because we can't: by its very nature, critical thinking cannot be taught. It needs to be discovered. What we can do is create an environment that allows for that discovery. Such an environment has two key components: (1) it brings together learners with different backgrounds and lived experiences, and (2) it creates a space of psychological safety in which learners can challenge each other. There is no room for hubris in this space, only curiosity.


09/09/2025

Please join us at the Vatican on September 8th for dialogues around AI and faith.

“Faith and Artificial Intelligence: Using language models to promote human connection”

How do we make AI worth its cost? By harnessing it to address humanity’s greatest challenges. This conference brings together religious leaders, researchers, and lay communities to examine both AI’s promise and its inherent risks. Rather than approaching AI as merely a technological advancement, we explore it through the lens of Catholic social teaching, emphasizing human dignity and our responsibility to protect the most vulnerable. The event addresses critical questions: How might AI reshape human relationships and community bonds? What safeguards ensure technology serves rather than supplants our mission to care for others? Through dialogues and workshops, participants will discuss AI development and use that recognize its transformative potential but also the irreplaceable value of human connection.

Please register here: https://sites.google.com/view/ai-faith-dialogues
The event will be livestreamed on Facebook.

Address

45 Carleton Street
Cambridge, MA
02139
