[Epistemic status: Low confidence, but this idea seems important enough that I want a clear reason why it could or couldn’t work, and what might change my mind.]
Imagine a world where the most brilliant medical mind is at your fingertips, ready to guide you toward optimal health and well-being. This doctor isn't just the best doctor alive today; it is a nearly omniscient entity that embodies the collective wisdom of all medical knowledge—a vast repository of information that no human could ever hope to master.
I am, of course, talking about a world where AI-based chatbots play a large role in our healthcare. Recent advances in AI have created this opportunity, and we would be wise to take it. Achieving its full potential would require us to allow AI to take on some tasks traditionally reserved for medical professionals, up to and including doctors, such as diagnosing conditions, prescribing medications, and ordering medical tests. The potential benefits of creating “AI doctors” are monumental. The path forward would require a fundamental shift in our approach to healthcare, but the rewards—a healthier, happier, and more prosperous society—are well worth the challenge.
Easy Problems and Hard Problems
AI's impact on healthcare will be far-reaching, potentially improving many aspects of the industry. Some applications, such as drug discovery, align well with our current societal norms and technological adoption patterns. We can consider these "easy" problems from a societal perspective, as they require minimal changes to our existing structures and processes. Consider, for instance, using tools like AlphaFold to predict protein structures and accelerate drug discovery. These aren’t “easy” in the sense that I can do them, but they're easy in the sense that they don’t require a societal shift in how we think about medicine.
However, despite claims that AI will “revolutionize” the medical world, I doubt it will, given our current approach to AI adoption. I’m sure that it will have an impact, but to tap the transformative potential of AI in healthcare we would need to tackle the "hard" problems—those that challenge our current practices and require us to change social attitudes.
AI for Information Processing Tasks
The key to dramatically improving human health outcomes and reducing healthcare costs lies in using AI to perform information-processing tasks that are currently done by human healthcare professionals, such as physicians and therapists. These tasks encompass a significant portion of the healthcare industry, including diagnosing conditions, prescribing medications, and educating patients. When I refer to "information processing," I am not only talking about technical tasks like data analysis and treatment planning but also the interpersonal aspects of care, such as empathizing with patients and providing emotional support, insofar as an AI can provide them.
Consider the process of obtaining a prescription. You describe your symptoms to a doctor, they ask you some questions, you answer them, and eventually, you receive a prescription based on their assessment. In many cases, this entire process is essentially an exchange of information, making it an ideal task for an AI doctor.
The same principle applies to ordering diagnostic tests. If a patient has some set of symptoms, an AI doctor could determine that an MRI is needed and order the test. The patient would still need to go to a facility for the MRI, but the decision-making process of determining whether an MRI is necessary could be done by an AI doctor.
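To make the "this is just information processing" framing concrete, here is a toy sketch of the test-ordering decision. Everything in it is invented for illustration: the red-flag list, the function names, the output strings. A real AI doctor would be a trained model constrained by clinical guidelines, not a hand-written rule, but the input/output shape would be the same.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    symptoms: set
    history: set

# Hypothetical red flags; a real system would apply actual clinical criteria.
RED_FLAGS = {"persistent headache", "focal weakness", "new seizure"}

def needs_mri(patient: Patient) -> bool:
    # Toy stand-in for the AI doctor's judgment: symptoms in, decision out.
    return bool(patient.symptoms & RED_FLAGS)

def order_test(patient: Patient) -> str:
    if needs_mri(patient):
        # Only the decision is automated; the scan itself still happens
        # at a physical imaging facility.
        return "MRI ordered: patient directed to an imaging facility"
    return "no imaging indicated; continue monitoring"

print(order_test(Patient(symptoms={"persistent headache", "fatigue"}, history=set())))
```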
Beyond diagnosis and treatment, AI doctors could also play a big role in patient education and disease management. They could provide personalized, round-the-clock assistance: explaining health conditions, answering questions, offering guidance, and monitoring patient progress over time, alerting human providers if intervention is needed.
Implementing AI Doctors
Transitioning to a world with AI doctors would involve addressing numerous challenges. As a general principle, my baseline response to "How would AI doctors handle X?" is that they would do it similarly to how human doctors currently handle X. For example, there is an existing process for dealing with medical liability, typically involving doctors working for an entity that can be held legally liable for mistakes. In the case of AI doctors, hospitals could legally "own" the AIs (i.e., be legally responsible for their actions) while licensing the underlying models from tech companies. Specialized malpractice insurance for AI doctors would also probably be necessary.
To ensure the safety and effectiveness of AI doctors, they would undergo a rigorous accreditation process, akin to the one human doctors go through. However, the AI doctor tests would be far more comprehensive than those for human doctors. One reason for this is that AIs have a greater ability to memorize and provide correct responses without grokking underlying concepts, so the tests would need to cover a larger range of scenarios. Additionally, certain skills that human doctors are expected to possess, such as common sense and the ability to detect lies, are not (I assume) explicitly tested for in human medical exams. These skills would need to be specifically evaluated in AI doctors to prevent any potential gaps in their capabilities that could lead to suboptimal care.
Additionally, the accreditation process could be designed to allow for a gradual rollout of AI doctors, with incremental expansion of their responsibilities based on demonstrated performance. One potential framework for this incremental approach is to use the existing system of drug scheduling. AI doctors could be granted prescribing authority for different drug schedules based on their performance on various tests. For example, as AI doctors demonstrate greater accuracy and reliability, they could be allowed to prescribe non-controlled substances (e.g., antibiotics), followed by Schedule V drugs (e.g., cough suppressants with codeine), Schedule IV (e.g., Xanax), Schedule III (e.g., ketamine), and eventually Schedule II drugs (e.g., OxyContin, Adderall). I’m not claiming that the schedules are perfect, or even that they’re the best framework to use, but that there are practical ways to gradually integrate AI doctors into healthcare using a safe and incremental approach.
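As a minimal sketch of what this gating could look like, consider the following. Every threshold here is an invented placeholder; setting real ones would be the job of the accreditation body, not me.

```python
# Hedged sketch: map accreditation-test performance to prescribing tiers.
# The scores and thresholds are placeholders, not proposals.

SCHEDULE_LADDER = [
    # (minimum composite accreditation score, prescribing tier unlocked)
    (0.95,  "non-controlled substances (e.g., antibiotics)"),
    (0.97,  "Schedule V (e.g., cough suppressants with codeine)"),
    (0.98,  "Schedule IV (e.g., Xanax)"),
    (0.99,  "Schedule III (e.g., ketamine)"),
    (0.995, "Schedule II (e.g., OxyContin, Adderall)"),
]

def prescribing_authority(score: float) -> list:
    """Return every prescribing tier whose threshold the score meets."""
    return [tier for threshold, tier in SCHEDULE_LADDER if score >= threshold]

# An AI doctor scoring 0.985 could prescribe antibiotics and Schedule V/IV
# drugs, but not yet Schedule III or II.
print(prescribing_authority(0.985))
```

The design choice worth noting is that authority is earned per tier rather than all at once, so a failure at the frontier never jeopardizes the lower-risk capabilities already validated.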
Although the current generation of LLM-based chatbots would likely not pass these tests, ones that could might not be far off. The next generation of AI models could be capable enough to serve as foundations from which to build this technology. Once a sufficiently capable foundation exists, specialized AI systems designed specifically for healthcare could be developed relatively quickly. These models would be trained on vast amounts of medical knowledge and optimized for healthcare interactions. With the right regulatory environment, the tech industry would likely invest immense resources into making this vision a reality, with major players like OpenAI and Google, as well as numerous startups, leading the charge.
Opt-In
The use of AI doctors would be an opt-in system, allowing individuals to choose between using an AI doctor or continuing to see a human doctor. Some people prefer the emotional support and human connection provided by a traditional doctor-patient relationship. Some want a doctor to put an arm around their back, to break bad news gently, and to console them. They should have that option.
Other people, like me, don’t want any of this. I want access to the best medical information and medicines in the world from the comfort of my own home. I don’t want an emotional support doctor, but I also wouldn’t want to deprive anyone else of one.
Hurdles
I can’t figure out whether AI doctors are a no-brainer or extremely radical. I think they might be both, which is not a good sign. In a way, they remind me of human challenge trials. There is the same sense of “Yes, it would save countless lives, but it would also expose companies to lawsuits, so let’s not do it.”
One of the biggest hurdles to overcome is the heavily regulated nature of the healthcare industry, particularly when it comes to prescription medicine. As I’m sure you're aware, AIs are currently not permitted to prescribe medicine. Allowing AI doctors to write prescriptions and order tests would be a revolutionary change in regulation. So much so that I doubt we have the stomach for it.
Consider how we have approached telemedicine. It has been possible for years, yet hamstrung by state-by-state licensing requirements that made it difficult for physicians to practice across state lines. It took the global COVID-19 pandemic to catalyze the rapid expansion of telemedicine, as the crisis forced regulators to temporarily waive many of these restrictions.
So, after being forced into it, how has it gone? A survey of physicians found that they considered telehealth visits as good as in-person visits, and that they felt telehealth should continue to play a larger role in post-pandemic health care. However, despite these positive signs, many of the temporary waivers are expiring, and many states and medical boards seem fine with that, mainly because it’s easier to regulate. The plan is to go back to the way things were before.
Despite being forced into a situation that appears to be better for both patients and doctors, we are allowing ourselves to return to a worse one because it’s easier for regulators. The issues with implementing AI doctors are even more difficult. First, it’s a more revolutionary change. It’s not about doctors from a different state prescribing medicine, it’s about doctors from a different “species” prescribing medicine. Second, with telemedicine we were forced to try it, so now there’s less status quo bias against keeping it. But with AI doctors, the status quo is decidedly on the “don't use AI for medical purposes” side. In addition, the implementation of AI doctors would be the kind of challenge that we’re worst at: balancing the fear of specific harms against the promise of tremendous, but as yet unrealized, benefits.
Still, I want to make the case for AI doctors because I believe in that tremendous, unrealized benefit.
Impacts
The implementation of AI doctors could have far-reaching impacts on the healthcare industry, with the two most significant being cost reduction and improved patient care.
Cost Reduction
Healthcare spending is so enormous that even tackling a small piece of it could have a major impact. For example, the UK’s NHS reported that 83.4 million antidepressant items were prescribed in the 2021/2022 fiscal year. The most common antidepressants, selective serotonin reuptake inhibitors (SSRIs) like Prozac, Lexapro, and Zoloft, are not controlled substances and are remarkably safe. One study found that overdosing at up to 30 times the common daily dose is associated with minor or no symptoms. (Obvious disclaimer: This information is not intended to encourage anyone to take excessive doses of their prescribed medication.) Seeing a human psychiatrist for these prescriptions is expensive, so getting them from an AI doctor could significantly reduce expenses for many people.
While it's challenging to precisely estimate the proportion of the healthcare industry that involves only information processing, it likely constitutes a substantial portion. For example, WebMD reports that 75-90% of all doctor's office visits are for “stress-related ailments and complaints”. Although not all of these cases can be resolved through information processing alone, I would guess a large percentage could be. I don’t know exactly what the numbers would turn out to be; my point here is that there is a lot of relatively low-hanging fruit.
The cost reduction alone could be the most impactful aspect of implementing AI doctors. And that would be a big deal; a 2019 Pew Research Center survey found that the affordability of healthcare was the top concern among major issues facing the country. This sentiment is further supported by data from the Bureau of Labor Statistics in the figure below, which shows healthcare costs rising at an alarming rate, significantly outpacing inflation.
Better Care
As AI doctors increasingly take on information-processing tasks, human doctors can focus on the aspects of healthcare where they have the most comparative advantage. This would allow them to become more specialized and proficient in their respective fields, ultimately leading to improved patient care.
Moreover, AI doctors could streamline the patient experience by making human doctor visits more focused on parts that require a human. If patients can have their questions answered and receive preliminary assessments through AI doctors, they will arrive at their appointments better prepared and informed. This would result in more focused visits, enabling doctors to see more patients and provide better care.
Also, AI doctors could be designed to be far more interpretable than human doctors. While there is a common belief that humans are very interpretable, may I remind you that humans are full of shit? In addition, there is a belief that AI systems are inherently opaque and difficult to understand, but this need not be the case. With proper design and implementation, AI doctors could provide auditable explanations for their decisions and recommendations, allowing patients and human doctors to better understand the reasoning behind their actions. This transparency would improve patient trust, facilitate collaboration between AI and human doctors, and make improving AI doctors easier.
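To give a flavor of what “auditable” could mean in practice, here is a minimal sketch of a decision record. The fields and example values are my assumptions about what auditors would want to see, not an existing standard.

```python
from dataclasses import dataclass
import json

@dataclass
class DecisionRecord:
    recommendation: str
    evidence: list           # patient findings the model relied on
    guidelines_cited: list   # clinical guidance it consulted
    confidence: float        # calibrated probability the call is correct
    escalated_to_human: bool

# Hypothetical example; the clinical details are illustrative only.
record = DecisionRecord(
    recommendation="Start sertraline 50 mg daily",
    evidence=["PHQ-9 score of 16", "no history of mania"],
    guidelines_cited=["national depression-treatment guideline"],
    confidence=0.91,
    escalated_to_human=False,
)

# Stored alongside the chart, this gives patients and human doctors
# something concrete to review, contest, or learn from.
print(json.dumps(record.__dict__, indent=2))
```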
Even though I’ve been calling them “AI doctors”, this probably sells them short of their full potential. It suggests a narrow focus on medical treatment, whereas they could provide immense value in promoting overall wellness. Beyond prescribing drugs, AI can offer personalized guidance, support, and motivation to help individuals maintain healthy lifestyles. While not a perfect analog, wellness coaches might be the closest existing example. Rather than solely interacting with them when you're ill, these AI doctors would serve as ongoing wellness partners.
The impact of exercise and healthy eating on overall health is well-known, yet many people struggle to maintain these habits consistently. With an AI serving as a dedicated wellness partner, individuals may find it easier to stay committed to their health goals. The AI would be solely focused on checking in with users daily, offering personalized guidance, and providing the motivation needed to achieve and sustain a healthy lifestyle. It's hard to overstate how transformative it would be to have the most knowledgeable doctor in the world, personalized for you, with nothing better to do than help you stay healthy.
Just by making information more easily accessible, AI wellness coaches could allow people to take more control of their health. This wouldn't be for everyone, but some people (like me) would really want this. I think it could do a lot to reduce hospital visits and shift our focus from merely providing better healthcare to achieving improved human health—even if that means relying less on formal healthcare systems.
AI can also help bridge the gap in healthcare access and quality across different regions and demographics. By providing a consistent level of care and expertise, we can ensure that patients receive reliable information and support regardless of their location or economic status. This could be particularly transformative for underserved communities and rural areas where access to healthcare professionals may be limited.
Potential Risks and Mitigation Strategies
As with any significant technological shift, the integration of AI in healthcare will undoubtedly face challenges and resistance. We should clarify and operationalize those challenges as much as possible and see if we can overcome them. We shouldn’t allow vague worries and the fear of potential obstacles to be used as a cudgel to prevent innovation that could drastically improve the lives of countless people worldwide. Instead, we should adopt a solutions-oriented mindset when addressing these concerns.
It's worth confronting some potential concerns directly, providing context, and developing strategies to mitigate potential risks. But before I get into specifics, I want to make a few quick points. We are talking about a significant change to a crucial system, and it’s important to recognize the status quo bias. We naturally resist major changes, but we shouldn’t assume that the current system is inherently good just because it’s familiar. It certainly is good in some ways, but it is also bad in some ways.
Another thing to remember is that the proposed AI-enabled healthcare system would be opt-in, allowing individuals to choose whether they prefer to engage with AI doctors or continue seeing human doctors. This ensures that patients can make decisions based on their personal preferences and comfort levels. However, those who are skeptical of or uncomfortable with AI doctors shouldn’t have the power to deny them to others.
It Will Make Mistakes
It certainly will. But so do human doctors. The point of the accreditation system is to set levels of accuracy that we would be comfortable with for given levels of authority. And, again, it will be incremental, starting with making recommendations and the lowest-risk interventions.
It Will Cause Job Loss
Deploying AI doctors at scale would significantly disrupt employment within the healthcare industry, particularly in fields like therapy and psychiatry, which primarily involve information processing. While AI therapists would be optional, allowing patients to choose human therapists if they prefer, the demand for human therapists might be lower than anticipated, especially among younger generations. The anonymity and lack of perceived judgment offered by AI doctors could make many people feel more at ease when discussing their mental health.
This would result in fewer employment opportunities for human therapists. While this concern is understandable, it is important to remember that the primary purpose of the therapy industry is to improve people's well-being, not to provide jobs for therapists. As AI demonstrates its effectiveness in delivering therapy, we may need to adapt our workforce and support those affected by the transition.
In addition, many established businesses within the healthcare industry may also face challenges as the industry changes. Some may need to pivot their business models or risk failing altogether. This disruption is an inevitable part of progress.
Lying and Manipulation
Another potential issue is that people may find it easier to lie to an AI doctor than a human doctor. There are a few ways to address this. One is to ensure the AI doctors have at least some baseline ability to detect this; I mentioned before that the medical accreditation tests would include lie detection.
However, it's important to note that these AI doctors will not simply be ChatGPT-like interfaces; they will be fully-fledged doctors with access to patients' medical records, just like human doctors. If an AI doctor has concerns about whether a particular medication is right for a patient, it will ask follow-up questions and even order tests if appropriate. Additionally, AI doctors will be able to consult with other doctors, including human ones, to ensure the best possible care for their patients. AI doctors aren’t under an obligation to prescribe drugs just because a patient asks for them.
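Here is a minimal sketch of those safeguards, assuming a hypothetical patient-record format; none of this is an existing system, just the shape of the logic.

```python
# Hedged sketch of the safeguards described above: check the record, ask
# follow-ups, escalate to a human, and never prescribe merely on request.

def handle_prescription_request(record: dict, drug: str) -> str:
    if drug in record.get("contraindications", []):
        return "declined: contraindicated, with an explanation to the patient"
    if record.get("inconsistencies"):
        # e.g., reported symptoms conflict with history or prior test results
        return "follow-up questions asked and/or confirmatory tests ordered"
    if drug in record.get("requires_human_signoff", []):
        return "escalated: a human clinician is consulted before prescribing"
    return "prescribed, with the decision logged for audit"

print(handle_prescription_request(
    {"contraindications": [], "inconsistencies": ["history conflicts"]},
    "alprazolam",
))
```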
Lastly, let’s not pretend the current system is perfect. People lie to doctors all the time. This University of Michigan article says that “Between 60 and 80 percent of people surveyed have not been forthcoming with their doctors about information that could be relevant to their health.” However, this study says it’s 20%, so it probably depends on how exactly the question is worded. Either way, it’s not a problem unique to AI doctors. I’m not a doctor and don’t know what the industry practices are for when a doctor believes they’re being lied to, but my guess is there’s some guidance we could apply to AI doctors as well.
Dealing with Bias
Bias in AI systems is a valid concern that deserves a separate, in-depth discussion. There are plenty of examples of unacceptable AI bias, even in the medical industry. For example, a 2019 study by Obermeyer et al. found racial bias in a widely used algorithm that predicts which patients need additional medical care. The model was trained on health costs as a proxy for health needs. However, because black patients spent less on healthcare, it concluded that they had fewer needs. This was an avoidable problem.
Fear of bias shouldn’t be used as a blanket reason to halt the progress of AI in healthcare. By acknowledging and actively working to mitigate bias, we can harness the power of AI to improve health outcomes for all individuals, regardless of their background.
Data Privacy
Data privacy is a major concern in the medical world. There should be appropriate data privacy laws, similar to those that govern traditional doctor-patient relationships. AI doctors shouldn’t fundamentally change this.
Conclusion
The potential for AI to revolutionize healthcare and improve human well-being is immense, but its success is not guaranteed. While partial adoption of AI in healthcare is relatively easy, realizing its full potential requires a fundamental shift in our approach to medicine and a willingness to tackle the "hard" problems.
The question of how much impact AI will have in healthcare is not primarily a technological one; it is a societal one. It will set the tone for how we handle AI in all aspects of our lives. The biggest determinant of how much AI improves our well-being lies in our willingness to embrace it and adapt our systems to maximize its benefits.
Soon, we will have the technological capabilities to unlock the greatest revolution in human health the world has ever seen. However, we will only achieve this if we are bold enough to pursue it. The potential benefits of AI doctors are staggering. They could provide consistent, high-quality care to patients regardless of their location or background, effectively erasing disparities in medical access. Even if we could achieve only a fraction of the potential described here, it would still rank among the greatest medical advances of all time. Even a tool that couldn’t prescribe medicine but could still perform triage and answer patients’ basic questions about their health could have an enormous impact on public health.
The hurdles we face in implementing AI doctors are not insignificant, but they pale in comparison to the potential benefits. We shouldn’t let fear of lawsuits or an unhealthy attachment to the status quo prevent us from pursuing something that could be so incredibly good for humanity.
I worry that we will not do this. In fact, I think that’s the most likely outcome. I think it’s far more likely that we implement AI in healthcare by only tackling the “easy” problems. While these would still be valuable, they represent a mere fraction of what AI could achieve in healthcare.
Implementing AI in healthcare is not without its challenges, particularly given the heavily regulated nature of the industry. But it would be a failure to cite this as a reason that we cannot have a better future. We as a society must be agentic. We made these regulations; we can remove or update them. If we don’t, it won’t be because we “couldn’t”; it will be because we could and failed to.
Despite the ongoing discourse surrounding healthcare disparities, it is remarkable that so little attention has been given to a potential solution that could have such a profound impact. The significance of this opportunity cannot be overstated. Disparities in access to care are a pervasive issue not only within the United States but also on a global scale. The implementation of AI doctors has the potential to eliminate or at least greatly reduce these disparities, provided that we are willing to invest the necessary effort and resources.
The amount of work required seems relatively small compared to the potential benefits. As I said, one thing that would need to be done is to create a comprehensive evaluation set for AI doctors. It’s hard in the sense that I couldn’t do it, but how hard is it? How much could you get done with $100 million? Or $500 million? Given that current US spending on healthcare is $4.5 trillion a year, $500 million is the equivalent of roughly one hour of healthcare spending.
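For the record, the back-of-the-envelope arithmetic behind that equivalence (using 365 × 24 = 8,760 hours in a year):

\[
\frac{\$4.5 \times 10^{12}\ \text{per year}}{8{,}760\ \text{hours per year}} \approx \$5.1 \times 10^{8}\ \text{per hour} \approx \$514\ \text{million per hour}
\]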
One hour or even one day of medical spending to significantly diminish disparities in medical access seems like an absurdly good deal. So absurd that I’m left wondering, “What am I missing?” Yes, it’s hard, and not possible with today’s tools. But prediction markets expect GPT-5 to be released this year. If this becomes possible with specially-tuned models based on GPT-5, we would be foolish not to be laying the groundwork for it already.
You may already know this, but there are companies trying to do this sort of thing right now. I’m aware of CurAI (www.curaihealth.com), which, as far as I know, is trying to use a combination of AI and humans via a chat interface to handle routine questions, prescriptions, and wellness coaching. And I think I’ve heard of at least one other company using AI for mental health advice and coaching, but I can’t remember the name now. I’m sure there are more; these are just the ones I’ve heard of.
Although, as far as I understand, you can already get things like Adderall or antidepressants by basically giving the right answers to a few questions over a Zoom call, or maybe even a chat. And even the blood tests I order online out of pocket are technically prescribed by some physician somewhere, judging by the lab form. From a technical standpoint, it should be relatively trivial to replace these with even current-generation LLMs. But I’m not sure how much of an improvement it would be: if one MD in Nebraska can approve hundreds of prescriptions per day, are you really saving that much by deploying a purpose-built frontier model to approve all your prescriptions? More importantly, if we authorize an AI to make such prescriptions, it feels like legally admitting that we’re OK with giving these things to basically anyone who wants them, at which point we could just drop the pretense and deregulate them.
Beyond that, I’m not sure how much room there is between “useless for serious medical advice” and “full-blown AGI, singularity ensues.” I think (maybe naively?) that medicine is one of the professions where you need more or less the full range of human intellectual capabilities to be good at it: you need to work with multimodal data, have decent social skills, good abstract reasoning, at least some numerical reasoning, the ability to deal with ambiguity, the ability to think outside the box (at least sometimes), and general “good judgment” or “common sense,” however that is defined. I’m not saying we won’t be able to train all of that into an AI one day, perhaps soon-ish, but by the time we have, we will have built a human-plus-level AGI that can replace nearly any human at any job, and the consequences for society would extend vastly beyond disrupting an industry or two.