This month, for example, it was revealed that researchers at the University of Aberdeen, NHS Grampian and an industry partner had made a significant breakthrough in using AI to speed up the detection of breast cancer.
Their 3-year project, funded by UK Research and Innovation, involved analysing 220,000 mammograms from more than 55,000 patients, using AI-powered breast screening technology.
It proved to be highly effective at identifying potentially missed cancer cases, known as interval cancers, that would have remained undetected under current screening procedures until patients developed symptoms.
Had the AI system been in use at the time of screening, it could have prompted the recall of 34.1% of the women who went on to develop interval cancers. In a separate Swedish study of 80,000 mammograms from women who had undergone screening, an AI-assisted system detected 20% more cases of cancer than human radiologists.
The global market for AI in medical diagnostics is experiencing significant growth, projected to reach $2.85 billion in 2023 (up by 43.1% on the previous year). The market is expected to grow by 42.5% annually over the next 4 years, by which point it will be worth $11.75 billion.
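For readers who want to sanity-check those projections, the quoted figures are consistent with simple compound annual growth. A quick illustrative calculation, using only the numbers cited above:

```python
# Compound-growth check of the market figures quoted above.
base_2023 = 2.85       # market value in 2023, in $ billions
annual_growth = 0.425  # projected annual growth rate (42.5%)
years = 4              # projection horizon

projected = base_2023 * (1 + annual_growth) ** years
print(f"${projected:.2f} billion")  # prints "$11.75 billion", matching the quoted figure
```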
It includes services such as medical imaging, robotic process automation, machine learning, natural language processing and rule-based expert systems, along with sales of various medical devices used to provide AI-based diagnostics services.
AI also has the potential to identify and even design up to 50 new therapeutic drugs in the next decade, representing a $50 billion opportunity for the global pharmaceutical industry and a 20–40% reduction in preclinical costs.
There is fevered speculation about how the technology might be used in other areas of medical technology to improve patient outcomes and reduce the burden on hard-pressed, and often under-resourced, medical practitioners.
Imagine a future in which AI-powered smartphone cameras could analyse skin conditions and provide instant feedback on potential issues. By enhancing early detection, AI could save countless lives and improve overall healthcare outcomes.
It may also soon be able to contribute to the development of personalised medicine, with treatment plans tailored to each individual patient’s unique characteristics, including their genetic make-up, lifestyle factors and medical history.
By analysing vast datasets and applying predictive analytics, algorithms can identify optimal treatment options and predict potential responses to specific therapies. The integration of AI in surgery, for example, has the potential to optimise treatment efficacy, minimise adverse effects and improve patient satisfaction.
However, while pharmaceutical corporations, MedTech manufacturers and national healthcare providers anticipate huge cost savings, faster and more effective treatment of more patients, and better health outcomes, not everyone shares their unequivocally optimistic view.
Two sides of the AI coin
AI will undoubtedly change the roles and responsibilities of medical professionals, with routine diagnostics being handled by AI systems, freeing up doctors to focus on more specialist and complex tasks.
Just as motor mechanics have shifted to specialise in managing on-board computing systems in cars, medics may need to adapt and evolve their roles to work alongside AI systems. To do this they will need to learn new skills and some employers may struggle to find enough qualified AI experts — or be prepared to pay a premium — to secure their services.
The likelihood is that many AI developments will end up being concentrated in a few consultancy firms, leading to a potential monopolisation of expertise and, as a result, increased costs for end-users.
The growing use of AI in healthcare also raises important ethical considerations. Although technology can provide data-driven recommendations and diagnostics, it lacks the capacity for empathy and sensitivity.
The human touch is crucial in healthcare, especially when addressing mental health issues. A balance must be struck between using AI for efficient diagnostics and ensuring that human healthcare professionals maintain a central role in providing emotional support and understanding to patients.
A recent study by the University of Arizona Health Sciences found that 52% of participants would choose a human doctor rather than AI for diagnosis and treatment. Researchers found that most patients are not convinced that the diagnoses provided by AI are as trustworthy as those delivered by human medical professionals.
The omnipresence of AI monitoring and its capacity for data collection may also raise concerns about privacy and individual agency.
Patient confidentiality
Although AI may offer real-time insights into our health and provide us with tailored recommendations, it also poses a risk of intrusion and oversight.
Patients may experience anxiety and uncertainty when AI systems predict their future health outcomes, leading to potential psychological impacts. Part of a healthcare system's value lies in its power of mediation: its ability to shield patients from the full impact of existential reality.
An AI-enabled system that tells an 18-year-old patient they are likely to develop a lifestyle-related cancer by the age of 40 will also have a duty of care to ensure that the recipient is psychologically equipped to deal with such news, and to present them with advice on how to change their lifestyle.
And, although some people may want to hear, years in advance, how long they have to live, the main beneficiaries of such information will be life insurance companies, which will see an opportunity to deal in greater certainty while minimising, or even eliminating, risk.
Standards and regulation
And, yes, there will be some patients who believe that having a chip implanted under their skin — that permanently monitors their health — will drastically improve their life chances; for others, it will be an unacceptable intrusion.
Although we may still be in the early days of AI in healthcare, there is already an urgent need to establish ethical guidelines and regulatory standards to govern its use. Ensuring transparency, accountability and patient autonomy must be prioritised.
Decision-making algorithms must be explainable and unbiased, and the data collected must be handled responsibly and securely to protect patient privacy.