You may already know this, but there are companies trying to do this sort of thing right now. I'm aware of Curai (www.curaihealth.com), which afaik is trying to use a combination of AI and humans via a chat interface to handle routine questions, prescriptions, and wellness coaching. I think I've heard of at least one other company using AI for mental health advice and coaching, but I can't remember the name now. I'm sure there are more; these are just the ones I've heard of.
Although as far as I understand, you can already get things like Adderall or antidepressants by basically giving the right answers to a few questions over a Zoom call, or maybe even a chat. And even the blood tests I order online out of pocket are technically prescribed by some physician somewhere, judging by the lab form. From a technical standpoint, it should be relatively trivial to replace these with even current-gen LLMs. But I'm not sure how much of an improvement that would be: if you can get one MD in Nebraska approving hundreds of prescriptions per day, are you really saving that much by deploying a purpose-built frontier model to approve all your prescriptions? And more importantly, if we authorize an AI to make such prescriptions, it feels like legally admitting that we're OK with giving these things to basically anyone who wants them, at which point we could just drop the pretense and deregulate them.
Beyond that, I'm not sure how much room there is between "useless for serious medical advice" and "full-blown AGI, singularity ensues". I think (maybe naively?) that medicine is one of the professions where you need more or less the full range of human intellectual capabilities to be good at it: you need to work with multimodal data, have decent social skills, good abstract reasoning, at least some numerical reasoning, deal with ambiguity, think outside the box (ideally, sometimes), and just have the general "good judgement" or "common sense", however that's defined. I'm not saying we won't be able to train all of that into an AI one day, perhaps even soon-ish, but by the time we have, we'll have built a human+ level AGI that can replace nearly any human at any job, and the consequences for society go vastly beyond disrupting an industry or two.