Published April 5, 2023, 10:05

Updated April 5, 2023, 10:56

Lying and not blushing: doctors warn against medical consultations with chatbots

April 5, 2023, 10:05
Chatbots are prone to so-called "hallucination": when the AI cannot find an answer, it offers a fabrication of its own that it considers plausible. If you ask it again, it will continue to insist on the false information, not realizing that it is a product of its own imagination.

The journal Radiology has published an article by doctors from the University of Maryland who assessed how useful consultations with the popular chatbot ChatGPT are, the Daily Mail reports. They turned out to be not just unhelpful but unsafe: the neural network gives users scant and sometimes incorrect information about cancer.

The researchers asked ChatGPT 25 questions about breast cancer screening recommendations. Because the chatbot tends to vary its answers, each question was asked three times. The responses were then analyzed by three radiologists specializing in mammography. In their assessment, the majority of the chatbot's responses (88%) were surprisingly relevant and easy to understand; "an additional advantage was that it summarized the information in a form easily digestible by the user." ChatGPT correctly answered questions about the symptoms of breast cancer, who is at risk, and the recommended age and frequency for mammography.

But some of the answers turned out to be inaccurate or outright false. For example, the chatbot recommended postponing a mammogram for a month and a half after vaccination against Covid-19, even though guidance in effect for more than a year says that women should not wait. ChatGPT also gave contradictory answers to questions about a person's risk of developing breast cancer and about where to get a mammogram. Among other shortcomings, the experts noted that the answers differed "noticeably" each time the same question was asked. Moreover, even a simple Google search turned up more comprehensive information, since the chatbot draws on a fairly narrow set of sources.

ChatGPT launched at the end of last year and became a sensation. Millions of users around the world turn to it for a variety of purposes, from writing school essays to seeking health advice. Microsoft has invested heavily in the software that underlies ChatGPT and is working to integrate it into its Bing search engine and into Office 365, which includes Word, PowerPoint, and Excel. At the same time, Microsoft acknowledges that ChatGPT is not error-free.

Errors often stem from so-called "hallucination": when the chatbot cannot find an answer in the data it was trained on, it offers a fabrication of its own that it considers plausible. If you ask it again, it will continue to insist on the false information, not realizing that it is a product of its own imagination. "Experience has shown that ChatGPT sometimes creates fake journal articles or medical consortia to support its claims," the study's authors say. "Patients should be aware that these are new, unproven technologies and should rely on the advice of a doctor, not of ChatGPT."

"