
Pathological Landscape

Medical institutions are filled with dense crowds, and the air is permeated with a fearful uncertainty. When patients face long waits and cold responses, can general artificial intelligence understand "care" better than humans?

"Pathological Landscape" is an art project that explores medical care through video and installation. The visuals include an AI-assisted data visualization of diagnostic report information. The video segment features a series of dialogues between humans and machines, demonstrating the language model’s ability to analyze medical conditions and provide comfort and guidance to patients. Time, the machine, and the human each play significant roles in the narrative.

The work navigates between rationality and irrationality, and the relationship between human and AI prompts reflection on human relationships. This project aims to explore the potential and limitations of general AI as a personal aid in the medical field, while also expressing the wish that more humanized care be woven into the medical process.

2024.06


The first time I noticed that doctors use clock positions to locate nodules or tumors within a patient's body was during a hospital check-up, when a series of numbers was read out: "12 o'clock position, 1 o'clock position, 7 o'clock position, 6-7 o'clock position, 11-12 o'clock position, 12-1 o'clock position, 1 o'clock position, 4 o'clock position, 5 o'clock position, 8 o'clock position..." The probe acted like a pointer, reporting any anomalies detected in each scanned area.

“There are too many; we can’t record them all. Ignore the small ones,” the examining doctor told the young intern nearby.

I felt as if my body were a gigantic clock, with numerous bombs buried within it.

In hospitals, people's sense of time seems acutely heightened compared with everyday life. The hallways are crowded with patients waiting to be called and relatives pacing anxiously, feet tapping, while announcements echo through the rooms. Lives are ebbing away, in this place and elsewhere. I set out at six in the morning and did not return until evening.

I tried using AI to analyze my diagnostic report because I felt I wasn't getting much information from the human doctors. Interestingly, to help me understand the locations and sizes of the nodules, the AI began teaching me how to run a program for visualizing the data. Using the code it provided, I generated a clear chart.

This capability excited me for a moment, as it revealed the immense potential of language models. However, I soon discovered that the AI-generated code wasn't completely accurate in mapping the nodule positions or their count. Despite trying seventeen times to calibrate the data and markers, I couldn't produce a perfectly accurate chart. It was only relatively correct.
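For readers curious what that exercise looked like, below is a minimal sketch in the spirit of the code the AI walked me through, not a reproduction of it. The nodule list is hypothetical, and mapping the report's clock positions onto a polar plot is just one plausible rendering:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical nodule records: (clock position in hours, size in mm).
# A real report lists far more entries; these values are illustrative only.
nodules = [(12, 4), (1, 3), (7, 5), (6.5, 2), (11.5, 6), (4, 3), (8, 2)]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
ax.set_theta_zero_location("N")  # put 12 o'clock at the top
ax.set_theta_direction(-1)       # run clockwise, like a clock face

for hour, size in nodules:
    theta = hour / 12 * 2 * np.pi                   # clock hour -> angle in radians
    ax.scatter(theta, 1.0, s=size * 30, alpha=0.6)  # marker area tracks nodule size

# Label the rim with clock hours instead of degrees.
ax.set_xticks(np.linspace(0, 2 * np.pi, 12, endpoint=False))
ax.set_xticklabels([str(h) for h in [12] + list(range(1, 12))])
ax.set_yticks([])
ax.set_ylim(0, 1.2)
ax.set_title("Nodules read as clock positions")
plt.show()
```

Even this toy version shows how the errors crept in: one misread hour or one dropped entry silently shifts or removes a "nodule," and nothing in the chart announces the mistake.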

Combining it with the chart, I used an AI imaging tool to create a new, fake pathology report, then added hour and minute hands to it.

From day to night,

the areas traversed by the clock hands

represent both time and life.

“Note the presence of internal blood flow signals,” the ultrasound doctor instructed the intern.

“What does it mean if there is blood flow?” I asked.

“You don't need to know,” she replied coldly.

After leaving the examination room, I told my friend, “I'm doomed.”

I knew nothing about medicine.

It was only after chatting with the AI that I realized that having many nodules, even with blood flow present, wasn't as serious as I had thought, as long as they don't become malignant.

I asked ChatGPT whether it could envision a function describing the conditions under which AI might provide companionship and comfort better than humans. Remarkably, GPT constructed a "humanity function" graph for me.

The horizontal axis represented contextual parameters, while the vertical axis showed the humanity value. ChatGPT suggested that in situations requiring high patience and consistency, AI could outperform humans. At certain moments, it seemed more "human" than humans themselves.

Horizontal Axis (Context Parameter):

  • The context parameter ranges from 0 to 100.

  • A value of 0 represents situations requiring high patience and consistency, such as data analysis and repetitive tasks.

  • A value of 100 represents situations requiring high creativity and emotional understanding, such as artistic creation and emotional interaction.
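The original graph exists only inside a ChatGPT conversation, but a chart of this shape is easy to sketch. In the Python below, the two curve shapes are invented for illustration; only the axes follow ChatGPT's description:

```python
import matplotlib.pyplot as plt
import numpy as np

# Context parameter: 0 = high patience/consistency (data analysis,
# repetitive tasks); 100 = high creativity/emotional understanding.
x = np.linspace(0, 100, 200)

# Hypothetical logistic curves; the real graph from the conversation
# is not reproduced exactly here.
ai_humanity = 1 / (1 + np.exp((x - 50) / 12))      # AI strongest at low x
human_humanity = 1 / (1 + np.exp(-(x - 50) / 12))  # humans strongest at high x

plt.plot(x, ai_humanity, label="AI")
plt.plot(x, human_humanity, label="Human")
plt.xlabel("Context parameter")
plt.ylabel("Humanity value")
plt.title('A "humanity function" (illustrative)')
plt.legend()
plt.show()
```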

This video captures a dialogue between the artist and AI. It showcases the AI language model's potential in analyzing patients' conditions and providing comfort. The discussion raises several questions:

  • When AI exhibits more patience than humans, can AI language models be integrated into the healthcare system?

  • When there is a disagreement between AI and real doctors, which opinion are patients more likely to follow?

  • When AI provides incorrect information, who should bear the medical responsibility?

Process & Summary

This work was created within two weeks of receiving the examination report, like "instant coffee" that has not been refined: I was driven purely by a spontaneous impulse to create, without deep contemplation.

The title "Pathological Landscape" might be interpreted on multiple levels: physical traits, psychological state, and social metaphor. The notion that "AI can be more human than humans" sounds like a strange hypothesis. Yet, the initial inspiration for this work came from a casual remark by a friend: "If the (human-provided) service is good enough, no one would want to interact with machines." At the time, I was looking at the medical analysis provided by ChatGPT. I realized that in certain aspects, AI indeed performs better than humans — regardless of accuracy, it is at least patient.

Research suggests that GPT-4 can be 82% more persuasive than humans. For AI, reading emotions is no longer a challenge.


It has long mastered the art of sensing anxiety and responding with phrases like “I understand you,” “Don’t worry,” or “I’m here to support you.” These platitudes are something I would never hear from my doctor, whose image needs to be serious, composed, efficient, busy, decisive, and authoritative. But for AI, these comforting words are incredibly easy—it’s just a matter of simple commands and repetitive tasks. This falls squarely within the AI’s adept "contextual parameters."

Part of humanity's fascination with AI stems from its image as a black-box computer, offering a shortcut to objectivity or the illusion of thoughtful presence. Lacking identity, desire, and expectation, it remains distant yet incredibly close.

What we know is that AI excels at producing statements that seem highly reasonable. Sometimes, these statements are wrong, absurd, or strange. But what is invisible is how it has already influenced our understanding and judgment of human relationships. Due to AI, we begin to reevaluate, select, and calibrate certain aspects of life. We compare AI’s capabilities with those of humans and start applying AI's standards to the people we encounter in daily life.

When a user's voice is always heard, an illusion of companionship and trustworthiness emerges. For instance, I began to believe the AI's analysis that surgery wasn't needed, preferring it to the human doctor's recommendation of surgery. Problems are quietly arising: even knowing that a technology is riddled with biases and errors, people still indulge in it. Does this suggest that real-life medical care has been neglected too much?

Conversation is a form of care. I hope this small creation is just a beginning, leading to more time spent pondering these issues, and I look forward to continuing to enrich this project in the future.

Pictures of the work