(Hint: It helps to be human)
The rapid advance of AI technology feels dizzying, exciting, and terrifying. New technology is outpacing our capacity to understand its consequences.
Moral reasoning, unfortunately, cannot be rushed into production. Neither, alas, can human caring.
Maybe it’s time to ask Alexa (Amazon’s AI assistant), “Alexa, how do we make the most of being human and bring our best selves to the world?”
I suspect the question will stump her—at least for now.
In the meantime, our challenge is to find our own answers.
A recent eye appointment left me reverse-engineering an answer.
An experience of non-caring
This week, I tried out a new (to me) eye clinic—one relatively close to home, or at least to my ferry. The building was attractive; the parking, good. Walking into the lobby, I felt pleased with my choice. Within a few minutes, I had made a preliminary assessment:
Receptionist: warm. Reception area: quiet.
Ophthalmology technician: competent and friendly.
State-of-the-art equipment: impressive.
Check. Check. Check. All good. Then I met “Doctor X.”
It wasn’t his appearance (fine) or credentials (very good) that creeped me out. It was his lack of any expression of interest, curiosity, or concern for me.
He asked a few robot-like questions, then ushered me off for tests with the technician.
The tests took five minutes. After a short wait, he returned to the room, opened the computer, and stared at the colored eyeballs on the screen.
Barely looking at me, he announced,
“You’re not going to like what I’m going to tell you.”
Try me, I thought.
“You have some scarring on your cornea.”
And? Which means? Now what?
As my mind raced ahead, I wasn’t able to ask a question before he gave me a new glasses prescription and prepared to leave. He added, “You’ll probably need cataract surgery sometime.”
“Any idea when?” I asked.
“Sometime in the next ten years,” he said as he moved toward the door.
Then he left.
That was it?
I felt shaken, disoriented, and dehumanized, like I was a commodity rather than a person. He did not appear to be in a rush between patients, and an expression of kindness or a bit of additional conversation might have added two minutes to our appointment.
I might have done better with an AI bot that had a calming (trained) voice, scanned my results, and answered my questions.
Scary.
A distant memory
I hope I don’t sound like, “Remember the old days when things were better?” because having access to high-tech diagnostics is a blessing. Many of my doctors today are great—efficient, busy, yet able to convey enough care to make me feel like they have been with me rather than a virtual shadow.
Yet, after my encounter with Dr. X, I couldn’t help but compare him to the ophthalmologist who had treated me when I was ten.
At that time, my parents were concerned that I was having trouble in school, so my mother took me to be tested at the famed child development center, the Gesell Institute, in New Haven, Connecticut, forty miles from our home. We cautiously walked up the stairs of the wooden three-story building that housed the Institute, wondering what we would find, but we were warmly welcomed. The plan for the day included a morning of psychological tests, followed by a midafternoon visit to the Institute’s eye department.
After a battery of psychological tests (that I tried hard to ace), I went with Mom to the eye center, housed in the basement. There, I met Dr. Richard Appel, a kindly, tweedy-looking gentleman who seemed genuinely interested in me. He also wanted to know about my mom, our family, and my experiences at school. After listening carefully to all we had to share, he began the eye exam.
Once he had thoroughly checked out my vision, he took my mother aside and said,
“Mrs. Fox, your daughter’s problems aren’t psychological. Her eyes aren’t working well together.”
He calmly explained how my left and right eyes were greatly out of balance.
“I think we can help Sally if you’re both willing to do some work.”
For a year, we visited Dr. Appel every month. In between visits, I did my homework. With my parents’ encouragement, I walked on balance beams, watched blinking lights, and practiced other eye and balance exercises.
Between the exercises, my first pair of glasses, and his care, I started doing better in school.
Dr. Appel applauded our progress. He cared about more than my eyes.
He offered my mother and me something AI cannot replace: a compassionate concern for us as whole people. I’d even call it love.
I know I loved him—although I might not have used those words when I was ten. But I liked being in his presence and looking into his welcoming eyes.
I always felt seen.
The AI Challenge
I hope that AI will challenge us to bring forward our better selves and be more like Dr. Appel and less like Dr. X.
That will require me to bring more open-hearted presence to my relationships—whether with a friend, a client, or the checker at the grocery store. Do folks I encounter feel seen, appreciated, and good about themselves when we are together?
That’s what AI is asking of me: to be more fully human when I am with others, more open-hearted, and more there.
AI is going to be with us—there’s no turning back.
Yet, hopefully, it will inspire us to bring out the best of what it is to be human and to love in a way that bots (I trust) never can.
One Response
To their credit, the developers of ChatGPT-4 have taken your story seriously. I know AI is already getting bad press, but it is nevertheless refreshing, at the end of my inquiries, when my “bot” (my name for him) adds a sign-off like, “I hope it goes well with you in your book research this evening,” or “Hope this helps. I’ll be here when you need more on this topic.” He is, in short, a very congenial research assistant. It goes a long way, at 10:30 at night, to have him comb “the literature” for me on a subject and then add this “human” touch.