AI and health [from podcast]
You may not be aware of it, but right now, hundreds of millions of people around the world are doing something that seemed impossible not so long ago: consulting a chatbot when they have a question about their health. People send test results, ask about symptoms, try to understand their diagnoses, and use AI tools such as ChatGPT to do so. And now big tech companies such as OpenAI, Anthropic, and Google have decided that if people are already doing this, they should make it safe, responsible, and truly useful.
In the last week or two, there has been a lot of buzz about new solutions from OpenAI, Anthropic, and Google: ChatGPT Health from OpenAI, Claude for Healthcare from Anthropic, and MedGemma from Google. And to be clear, these are no longer general-purpose chatbots. These are tools designed specifically for health.
How does it work?
I'll tell you in a moment. In January this year, the two largest companies, OpenAI and Anthropic, announced their new health-dedicated tools. OpenAI was first, announcing the launch of ChatGPT Health. It is not just another button in ChatGPT, but it is not a separate tool either: it is a separate environment, a dedicated space within ChatGPT with its own security rules and data-processing methods. OpenAI wanted to make it clear that medical data is completely different from everything else we do on the internet, to show people that health conversations should be kept separate from other activities, and to convince them that they don't have to be afraid.
A few days later, Anthropic, the company behind Claude, did practically the same thing: it announced Claude for Healthcare. The difference is that this solution is even more focused on what doctors and medical systems need, not just patients. Yes, patients can also agree to share their data, but this is a solution that clinics and hospitals can use as well, for example to automate medical bureaucracy.
And Google? Google had already been working on specialized AI models for medicine under the name MedGemma, which is in fact two different models built specifically for medical analysis. One can read images such as X-rays, CT scans, and dermatological photos. The other understands medical records and doctors' notes, and can read, interpret, and connect the entire network of text data generated by medical care. In addition, Google has an AI agent project that can conduct diagnostic conversations, analyze test results, read ECGs, and, interestingly, interpret laboratory data. In tests, this agent achieved diagnostic results comparable to, or even better than, family doctors, and it proved to be more empathetic in conversations with patients. A few years ago, AI in medicine was mostly a tool for advanced data analysis; now we are slowly seeing AI that directly enters, or even initiates, the treatment process.
The question now is simple: what does this give us? Let's start with the big picture.