Medicine, AI, and bias: Will bad data undermine good technology?

May 18, 2022 – Imagine walking into the Library of Congress, with its millions of books, and trying to read them all. Impossible, isn't it? Even if you could read every word of every work, you wouldn't be able to remember or understand it all, even if you spent a lifetime trying.

Now suppose you somehow had a superpowered brain capable of reading and understanding all that information. You would still have a problem: you wouldn't know what was not included in those books, what questions they failed to answer, whose experiences they left out.

Similarly, today's researchers have a staggering amount of data to sift through. The world's biomedical literature alone contains more than 34 million citations. Millions more data sets explore how things like blood work, medical and family history, genetics, and social and economic traits affect patient outcomes.

Artificial intelligence lets us use more of this material than ever before. Emerging models can quickly and accurately organize vast amounts of data, predict patient outcomes, and help doctors make calls about treatment or preventive care.

This advanced math holds great promise. Some algorithms, sets of problem-solving instructions, can diagnose breast cancer more accurately than pathologists. Other AI tools are already in use in medical settings, letting doctors look up a patient's medical history faster or improve their ability to analyze radiology images.

But some AI experts in medicine suggest that while the benefits may seem obvious, less noticeable biases could undermine these technologies. In fact, they warn that bias can lead to ineffective or even harmful decisions in patient care.

New tools, same old biases?

While many people associate "bias" with personal, ethnic, or racial prejudice, broadly defined, bias is a tendency to lean in a certain direction, either for or against something.

In a statistical sense, bias occurs when data do not fully or accurately represent the population they are meant to model. This can happen when the data are bad from the start, or when data from one population are wrongly applied to another.

Both kinds of bias, statistical and racial/ethnic, exist in the medical literature. Some populations have been studied more, while others are underrepresented. This raises a question: if we build artificial intelligence models from existing information, are we simply passing old problems on to new technology?

"Well, that's definitely a concern," says David M. Kent, MD, director of the Center for Predictive Analytics at Tufts Medical Center.

In a new study, Kent and a team of researchers examined 104 models that predict heart disease, models designed to help doctors decide how to prevent the condition. The researchers wanted to know whether models that had performed accurately before would do as well when tested on a new set of patients.

Their findings?

The models “did worse than people expected,” says Kent.

The models were not always able to distinguish high-risk patients from low-risk ones. Sometimes the tools over- or underestimated a patient's risk of disease. Alarmingly, most of the models had the potential to cause harm if used in a real clinical setting.

Why did the models perform so differently in their original tests compared with now? Statistical bias.

"Predictive models don't generalize as well as people think they do," Kent says.

When you move a model from one database to another, or when things change over time (from one decade to the next) or space (from one city to another), the model fails to capture those differences.

That creates statistical bias. As a result, the model no longer represents the new population of patients, and it may not work as well.
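A minimal sketch can make this concrete. The toy model below (invented for illustration; it is not any model from the study) learns a single decision threshold for a simulated biomarker in one patient population, then is applied unchanged to a second population whose baseline values have shifted, the kind of drift across time or place described above:

```python
import random

random.seed(0)

def make_population(mean_healthy, mean_sick, n=1000):
    """Simulate a biomarker where sick patients score higher on average."""
    data = []
    for _ in range(n):
        sick = random.random() < 0.5
        mean = mean_sick if sick else mean_healthy
        data.append((random.gauss(mean, 1.0), sick))
    return data

def fit_threshold(data):
    """'Train' a one-parameter model: the midpoint between class means."""
    sick = [x for x, s in data if s]
    healthy = [x for x, s in data if not s]
    return (sum(sick) / len(sick) + sum(healthy) / len(healthy)) / 2

def accuracy(threshold, data):
    """Fraction of patients correctly classified by the threshold."""
    return sum((x > threshold) == s for x, s in data) / len(data)

# Population A: the data the model was built on.
pop_a = make_population(mean_healthy=0.0, mean_sick=2.0)
# Population B: same disease, but the biomarker baseline has drifted up.
pop_b = make_population(mean_healthy=1.5, mean_sick=3.5)

threshold = fit_threshold(pop_a)
print(f"accuracy on population A: {accuracy(threshold, pop_a):.2f}")
print(f"accuracy on population B: {accuracy(threshold, pop_b):.2f}")
```

The threshold that separated the classes well in population A now sits below population B's healthy baseline, so many healthy patients get flagged as high risk: the model "fails" not because the math changed, but because the population did.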

That's not to say artificial intelligence shouldn't be used in health care, Kent says. But it does show why human oversight is so important.

"The study does not show that these models are especially bad," he says. "It highlights a general vulnerability of models that try to predict absolute risk. It shows that better auditing and updating of models is needed."

But even human oversight has its limits, researchers caution in a new paper arguing for a standardized process. Without such a framework, they note, we can only find the biases we think to look for. Again, we don't know what we don't know.

Bias in the black box

Race is a mixture of physical, behavioral, and cultural attributes. It is an important variable in health care. But race is a complicated concept, and problems can arise when race is used in predictive algorithms. While there are health differences among racial groups, it cannot be assumed that all people in a group will have the same health outcome.

David S. Jones, MD, PhD, professor of the culture of medicine at Harvard University and co-author of "Hidden in Plain Sight," a paper reconsidering the use of race correction in algorithms, says that "a lot of these tools [analog algorithms] seem to be directing health care resources toward white people."

Around the same time, biases in AI tools were being identified by researchers Ziad Obermeyer, MD, and Eric Topol, MD.

The lack of diversity in the clinical studies that influence patient care has long been a concern. A concern now, Jones says, is that using those studies to build predictive models not only passes those biases along, but also makes them more obscure and harder to detect.

Before the dawn of AI, analog algorithms were the only clinical option. These types of predictive models are calculated by hand rather than automatically.

"When using an analog model," Jones says, "a person can easily look at the information and know exactly what patient information, like race, has or has not been included."

Now, with machine learning tools, the algorithm may be proprietary, meaning the data are hidden from the user and can't be changed. It's a "black box." That's a problem because the user, the care provider, may not know what patient information has been included, or how that information might affect the AI's recommendations.
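Jones's contrast between analog and black-box models can be sketched in a few lines. The risk score below is entirely hypothetical (the factors and weights are made up for illustration, not taken from any real clinical tool), but it shows what "analog" transparency means: every input is listed, so a clinician can immediately see, and question, a race-based term. A proprietary model exposes none of this; the user sees only a prediction come out.

```python
# A hand-calculated ("analog") risk score: every factor and weight is
# visible. All variables and weights here are invented for illustration.
ANALOG_SCORE = {
    "age_over_65": 2,
    "systolic_bp_over_140": 3,
    "smoker": 2,
    # A reader can spot (and question) a race-correction term at a glance:
    "black": -1,
}

def analog_risk(patient):
    """Sum the weights of every listed factor the patient has."""
    return sum(w for factor, w in ANALOG_SCORE.items() if patient.get(factor))

patient = {"age_over_65": True, "smoker": True, "black": True}
print(analog_risk(patient))  # 2 + 2 - 1 = 3
```

With a black-box model, by contrast, the equivalent of `ANALOG_SCORE` is hidden, so neither the provider nor the patient can tell whether a term like the race correction above is in there at all, which is exactly the transparency problem Jones describes.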

"If we are going to use race in medicine, it has to be totally transparent so we can understand it and make reasoned judgments about whether its use is appropriate," Jones says. "The questions that need to be answered are: how, and where, to use race labels so they do good without doing harm."

Should You Be Concerned About AI in Clinical Care?

Despite the flood of AI research, most clinical models have yet to be adopted in real-world care. But if you're concerned about your provider's use of technology or race, Jones suggests being proactive. You can ask the provider: "Are there ways in which your treatment of me is based on your understanding of my race or ethnicity?" This can open a dialogue about how the provider makes decisions.

In the meantime, the consensus among experts is that problems with statistical and racial bias in artificial intelligence in medicine exist, and they must be addressed before the tools are put to widespread use.

"The real danger is having tons of money poured into new companies creating prediction models under pressure for a good [return on investment]," Kent says. "That could create conflicts around distributing models that may not be ready or sufficiently tested, which could make the quality of care worse instead of better."
