What are AI and Machine Learning? An introduction for clinicians


Artificial intelligence (AI) seeks to automate human reasoning and problem-solving: ideally, to build a machine that mimics a human being, or better still an ideal human being. In classical AI, humans set the rules and logic, and the machine follows those rules and works within that logic. Healthcare software has used AI since the 1970s, but the field has recently seen huge technological advances such as machine learning, deep learning, artificial neural networks, natural language processing and chatbots, in which the machine learns from examples provided in different formats.

Machine learning is one way of building AI systems, and artificial neural networks are among its most widely used approaches.

Artificial neural networks progressively improve a machine's ability at a particular task by studying examples; such systems find connections in the data on their own. For example, we train a system by providing images labelled 'tumor' or 'no tumor'. With enough training, a large enough data set and a powerful enough computer, the network gets better and better at the task.
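
To make this concrete, here is a minimal sketch of that training loop in Python using scikit-learn. Everything in it is invented for illustration: the 'images' are synthetic stand-in feature vectors and the labelling rule is a toy; a real imaging system would learn from pixel data with a convolutional network.

    # A toy supervised-learning loop: train a small neural network on
    # labelled examples, then measure how well it does on unseen ones.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # 1,000 synthetic 'images' of 64 features each, labelled 1 ('tumor')
    # or 0 ('no tumor') by an invented rule - purely for illustration.
    X = rng.normal(size=(1000, 64))
    y = (X[:, :8].sum(axis=1) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A small feed-forward network; training adjusts the connections
    # ('weights') between neurons to fit the labelled examples.
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                          random_state=0)
    model.fit(X_train, y_train)

    # Accuracy on examples the network has never seen during training.
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")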

Limitations of training through neural networks

Explainability

Machine learning algorithms are 'black boxes': it is very hard for a human to understand how a result was produced, because decisions emerge from a large number of connections between 'neurons'. That makes it difficult to assess reliability, detect bias or spot malicious attacks.
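
One way to see why: even the small toy network sketched in the introduction stores everything it has learned in thousands of individual weights, and no single weight means anything on its own. The layer sizes below are the hypothetical ones from that sketch.

    # Counting the learned parameters of the toy network above
    # (64 inputs -> 32 -> 16 -> 1 output).
    layers = [64, 32, 16, 1]
    weights = sum(a * b for a, b in zip(layers, layers[1:]))  # connections
    biases = sum(layers[1:])                                  # one per neuron
    print(weights + biases)  # 2625 numbers, none individually interpretable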

Data requirements

Training a neural network requires a large data set that is also accurate and reliable; inaccurate or unrepresentative data can lead to wrong decisions. Health data, unfortunately, is heterogeneous, complex and often poorly coded.

Transferability

What happens when an algorithm is applied to data it has never seen? An algorithm may perform well on the specific task and data it was trained on, yet give incorrect results on data it has not encountered before.
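
A standard safeguard, sketched below with the same hypothetical toy setup as the introduction, is to score the trained model on independent data from the setting where it will actually be deployed, not only on a slice held out from its own training distribution. The 'hospital' names and the distribution shift are invented; the point is the shape of the check.

    # External validation: train on data from one source, then score on
    # data from another whose feature distribution differs.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    def label(X):
        # Invented ground-truth rule, shared by both 'hospitals'.
        return (X[:, :8].sum(axis=1) > 0).astype(int)

    # Train on 'hospital A' data...
    X_a = rng.normal(size=(1000, 64))
    model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                          random_state=0).fit(X_a, label(X_a))

    # ...then check it on 'hospital B', whose data is shifted and noisier.
    # The score on B, not on A, is what matters before deployment there.
    X_b = rng.normal(loc=0.5, scale=1.5, size=(1000, 64))
    print(f"hospital A accuracy: {model.score(X_a, label(X_a)):.2f}")
    print(f"hospital B accuracy: {model.score(X_b, label(X_b)):.2f}")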


The drawbacks of machine learning in healthcare:

  • The algorithm may fail if it is trained or tested on data that is not clinically meaningful.
  • The algorithm may fail if it is not trained or tested on independent, blinded, real-world data.
  • A narrow algorithm may not generalize to broader clinical use.
  • Performance may be measured by inconsistent means, as illustrated in the sketch after this list.
  • An application's commercial value may rest on unpublished, untested and unverifiable results.
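
To see how a single headline metric can mislead, consider this small sketch with invented numbers: on imbalanced data, raw accuracy can look excellent while the clinically important measure, sensitivity, is zero.

    # 95 'no tumor' cases and 5 'tumor' cases; a model that always says
    # 'no tumor' is 95% accurate yet detects no tumors at all.
    import numpy as np

    y_true = np.array([0] * 95 + [1] * 5)
    y_pred = np.zeros(100, dtype=int)          # always predicts 'no tumor'

    accuracy = (y_true == y_pred).mean()
    sensitivity = y_pred[y_true == 1].mean()   # fraction of tumors caught
    print(f"accuracy={accuracy:.2f}, sensitivity={sensitivity:.2f}")
    # -> accuracy=0.95, sensitivity=0.00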

Domains

Patient safety

When the introduction of AI to healthcare is discussed, the most fundamental question arises: will patients be safe, or safer? Proponents argue that machines don't get tired, don't let emotion colour their decisions, can find solutions faster and can be programmed to learn more readily than humans. Opponents argue that human judgment is a fundamental component of any clinical activity, and that a doctor brings a holistic approach to patient care.

Computerized clinical decision-support tools can remove unnecessary variation in patient care. A healthcare system could adopt algorithms that standardize tests, prescriptions and procedures, kept up to date with the latest guidelines in the same way an antivirus program updates its virus definitions. Patients could be advised in real time when they need referral to a secondary or tertiary provider, and to one in their own locality. Digital consultation makes it possible to provide direct-to-patient services regardless of geography, time of day or communication needs, including language.
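
As a toy illustration of the 'virus definitions' analogy, the guideline content can live in a versioned data structure that is refreshed centrally, so every site checks prescriptions against the same current rules. All names and the single dose limit below are hypothetical, not real clinical guidance.

    # A versioned guideline set, updated centrally like virus definitions.
    from dataclasses import dataclass

    @dataclass
    class GuidelineSet:
        version: str
        max_daily_dose_mg: dict  # drug name -> guideline ceiling (mg/day)

    def check_prescription(drug: str, dose_mg: float, g: GuidelineSet) -> str:
        limit = g.max_daily_dose_mg.get(drug)
        if limit is None:
            return f"no guideline for {drug} (rules v{g.version})"
        return "within guideline" if dose_mg <= limit else "exceeds guideline"

    # A central update swaps in a whole new set, so every site applies
    # the same, current version of the rules.
    rules = GuidelineSet(version="2024.1",
                         max_daily_dose_mg={"paracetamol": 4000})
    print(check_prescription("paracetamol", 5000, rules))  # exceeds guideline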

The tech mantra 'move fast and break things' cannot be applied to patient care. Algorithms can give unsafe advice: an AI system may be poorly programmed or poorly trained, used in inappropriate situations, fed incomplete data, or misled or hacked. And unlike a single clinician's error, AI can replicate harm at scale.

Ethical issues:

The widespread introduction of new AI healthcare technology can help some patients while exposing others to unforeseen risks. What threshold is acceptable: how many people must be helped for every person put at risk? And how does that compare with the standard to which a human clinician is held?

If AI healthcare technology harms someone, who is responsible: the clinician, the regulator, the tech company or the programmer?

Should a machine ever have the right to overrule a doctor's diagnosis or decision, and vice versa?


Practical challenges:

Accountability for decisions

Who is responsible if something goes wrong? This is a fundamental question at the center of the conversation between AI developers, healthcare organizations, clinicians and policymakers.

Should healthcare providers be required to understand the intricacies of AI technology, and technology firms the realities of clinical practice? If so, to what extent?

Rapid development and growing complexity in AI technologies lead to more unforeseen errors and consequences. Technology companies currently focus on apps that support clinicians rather than replace them or their clinical decisions, which implies that accountability for mishaps and mistakes remains with the clinician.

But a clear distinction is needed between accountability for content and accountability for operation. If harm is caused by incorrect content, the technology firm that designed and quality-assured it should be accountable; the clinician is accountable for failing to use an algorithm or device correctly.

Defining who is accountable for an error is not easy, because humans tend to trust a machine more than they trust themselves. If the decision-making process is well documented but the clinician simply rubber-stamps whatever the algorithm recommends, who is responsible when an error is made?

It is very difficult to express the reasoning behind an AI system's decision in a form a clinician can understand. A further problem is that the software may be unavailable for analysis for intellectual-property reasons, and the training data for privacy reasons, making true accountability even harder to establish. Patients and clinicians may then have no real opportunity to check or challenge the machine's recommended course of action.

The doctor-patient relationship


The patient-clinician relationship has its own culture, in which the doctor weighs the patient's wishes within a holistic approach to decision-making. The use of AI technologies could shift this culture of interaction between clinicians and patients.

Much depends on the type of interface between the patient and the AI. At one end, AI applications may support the doctor's decisions without the patient even noticing; at the other, an autonomous AI system may diagnose and treat a condition without any human clinical involvement.

It is difficult for a digital tool to replicate a real consultation, because doctors pick up non-verbal signs such as tone of voice, facial expressions, or an exaggerated reaction to a diagnosis. Without a human clinician, it is much harder to become aware of a patient's loneliness, safeguarding concerns or social needs.


Patients could be offered a second opinion after AI-generated advice, for quality assurance and interpretation, but that reintroduces the cost of involving a human clinician and requires the patient to interact with one.

Another problem: if AI-generated advice uses medical terminology the patient does not understand, the patient may under- or overestimate the severity of a condition and misunderstand the scale of the risks.

Ethical issues:

It is difficult to explain how an AI algorithm reaches its decision, because the reasoning of techniques such as deep neural networks and fuzzy logic is a 'black box' to the clinician. Should doctors be expected to explain these concepts to patients?

If a clinician cannot understand or correct an AI system's errors, why should the clinician bear the psychological burden when the system harms a patient?

Replacing doctors' advice with AI tools could reduce trust and, in the eyes of the public, degrade the quality of the relationship.

Practical challenges:

If the AI and the doctor disagree, who decides which is 'right'? Relative trust in AI technology and in healthcare professionals may differ between individuals and between generations.

If AI tools reduce face-to-face contact, it becomes harder for clinicians to promote patients' interests through advocacy and mediation between service providers.

Public acceptance and trust


AI is a hard concept for the public to grasp: how it works, and what it can and cannot do. Most people are not interested in how an app produces its answers; they update their beliefs based on their own experiences. It is therefore safe to assume that patients will not worry about the details of how AI works. They just need to know that it works on demand and can be trusted, and that other users' experience of it has been safe.

The most essential step in developing an AI system for healthcare is gaining the trust of clinicians and patients. Developers are therefore advised to keep focusing on the system's utility to the individual rather than seeking approval from the outset. AI systems can and do gain trust: mental-health apps and chatbots aimed at young people, and home monitoring systems that track our daily routines, are good examples.

Outside health, AI is already embedded in our daily lives, and growing acceptance that a machine can make decisions will increase the appetite for health-related AI tools. AI health tools will need the 'social licence' that AI has earned in other fields, and it will be a precious commodity. Developers should learn from historic controversies over AI tools and should not take public acceptance and trust for granted.

Clinical considerations:

So far there are no nationally or internationally accepted quality standards. Do we need such standards, and if so, who should set them?

And would such standards choke opportunities for innovation?


How can a patient or clinician tell a good AI system from a bad one? An app can easily have a polished user interface yet be built on a poor study or poor data.

Ethical issues:

How should the cost of AI services be decided: should 'self-help' AI services be free, or paid for by users? Without standard measures of quality, it is also hard to decide what a clinician should charge a patient when AI services are used.

Should marketing restrictions apply to these online resources and apps?

Should AI service providers be required to be transparent about their service, and about how patients' data will be used?

Practical challenges:

Younger people accept and rely on AI tools more readily, while older patients place more trust in their doctor. This generational difference in acceptance could lead to a two-tier health system.

If AI is oversold by developers or salespeople and fails to deliver the promised benefits, there is a risk that the public will reject its use in healthcare.

Should patients always be given the choice between a doctor and an AI tool for diagnosis?

Conclusion

Dreamsoft4u provides healthcare IT services in India and the USA. Our services include healthcare software solutions, EMR software and wearable app development for healthcare. We are proud to serve clients based in India, the USA, Australia and the UAE.

Want to connect with us? Contact us at (+1)-949-340-7490 or email enquiry@localhost.

