13/02/2026
The claim that degrees in medicine and law are becoming obsolete because of rapid advances in artificial intelligence is provocative, but it requires careful unpacking, especially in high-stakes professions such as clinical medicine and jurisprudence.
First, there is a categorical distinction between knowledge acquisition and professional authority. Systems such as OpenAI’s large language models can perform at or near expert level on standardized exams. However, professional practice in medicine or law is not exam performance; it involves licensing, accountability, ethical reasoning, fiduciary duty, uncertainty management, and legally recognized responsibility. AI currently augments cognition but does not assume liability. That difference is foundational.
Second, medicine, particularly clinical cardiology, surgery, emergency care, and critical care, is embodied, relational, and procedural. Clinical reasoning integrates probabilistic inference, pattern recognition, non-verbal cues, contextual social determinants, and risk-benefit negotiation under uncertainty. AI can assist with diagnostics, risk stratification, and literature synthesis. It does not obtain informed consent, perform bedside ultrasound, manage a crashing airway, or assume medicolegal responsibility. Even if AI supports decision-making, the physician remains the accountable agent.
Third, regulatory and sociotechnical inertia matters. Licensing bodies, malpractice frameworks, and hospital governance structures evolve far more slowly than AI models improve. The diffusion of responsibility in autonomous decision systems remains unresolved, both legally and ethically. Until liability, trust calibration, validation in diverse populations, and bias mitigation are fully addressed, independent AI replacement of licensed professionals is unlikely.
Fourth, historical analogies are instructive. Automation in radiology, pathology, and anesthesiology was predicted to eliminate specialists decades ago. Instead, the fields transformed: productivity increased, sub-specialization expanded, and new competencies emerged, such as AI oversight, interventional techniques, and systems-based practice.
Where the concern has merit is in skill composition. Routine cognitive tasks, such as document drafting, chart summarization, precedent search, coding, and literature review, are increasingly automatable. Graduates who rely solely on memorized knowledge without procedural skill, human judgment, or adaptive reasoning may indeed face reduced differentiation. Therefore, the comparative advantage shifts toward integrative thinking, procedural expertise, leadership, and ethical stewardship of AI tools.
In synthesis, AI will likely reshape medicine and law rather than render their degrees worthless. The long training period may increasingly include AI literacy, data interpretation, and human–machine collaboration. The economic return may depend less on the credential alone and more on how effectively professionals leverage AI. For students, the rational strategy is not avoidance of medicine or law, but intentional adaptation: cultivate clinical mastery, procedural competence, systems thinking, communication, and technological fluency. In high-stakes domains involving life, liberty, and fiduciary trust, society still requires accountable human professionals, even if they practice alongside powerful AI systems.