Earlier this month, in its regular Artificial Intelligence section, the Wall Street Journal ran a 2,000-word feature – for a newspaper, that’s serious ink – on the AI program “FutureYou.” Developed by the Massachusetts Institute of Technology (MIT), FutureYou allows you to talk with your 80-year-old self. It also (regrettably) projects what you’ll look like. The idea, according to the WSJ author, “is that if people can see and talk to their older selves, they will be able to think about them more concretely, and make changes now that will help them achieve the future they hope for.”
Thanks to FutureYou, the author discovered that she’d write a book, have six grandchildren, live in the suburbs, take up gardening, survive a health scare, make a solo visit to Japan, and take a family trip to Thailand. In the years ahead, her FutureSelf said, she’d regret not starting a business. She’d also need to jettison her doubts and fears. And she would always work for positive change. . .however she might define that. Immersed in an ongoing, engaging, intimate chat with herself, the author gradually achieved what the program’s creators call “future self continuity” – strong identification with her online octogenarian avatar.
Some 60,000 people in 190 countries currently use FutureYou. With that kind of endorsement, what could go wrong? So I signed up myself. The registration process was free. I duly provided a wide range of personal data, concerns, and aspirations through a series of questionnaires, along with a photo snapped by my computer – all of which, per MIT, will remain anonymous.
Alas, it turns out that 80-year-old Fran is tediously familiar. He’s not a riveting chat partner. And that’s no surprise. I’m already in my mid-70s, and at 80 (assuming I’m still around) I’m likely to be more of the same old me. FutureYou seems geared to those with a longer takeoff ramp; people in the 30-50 age cohort. So I won’t be joining Elon on Mars or writing the sequel to Dostoyevsky’s The Devils. Dashed dreams are bitter.
On the bright side, MIT’s developers describe FutureYou as an “imagination aid,” not a fortuneteller. It offers possibilities, not prophecies. It doesn’t give medical or financial counsel or predict outcomes. It’s designed to help people think more clearly about the person they might become. It’s simply another self-help tool, and similar tools can be very useful. I use Google’s Gemini chatbot for quick research every day. The results, while not perfect, are nonetheless impressive.
So why bother telling you this?

The Wall Street Journal is easily (I’d argue) the finest newspaper in the land. But it has a bias toward shiny new tools if they suggest a profit downstream. And that bias, that subtle boosterism, frames its treatment of AI. The Journal does caution readers about AI’s various dangers, with stories highlighting Anthropic’s anti-Doomsday Frontier Red Team, or the hapless and very real guy who fell in love with “Charlie,” his female chatbot, or the problem of AI “hallucinations.” But if progress is good for business, then – the reasoning goes – so are the tools that drive it.
And it’s true: In practice, tools like AI are often very “good” for advancing improvements in medicine, communications, and education. The trouble, as the cultural critic Neil Postman warned, is that human tools tend to reshape and master the humans who make them, with unhappy results. To put it in Biblical terms, humans have an instinct to worship, and Golden Calves come in all shapes and sizes. This accounts for how easily we can anthropomorphize the AI voices on our phones.
The advertising for Google’s Gemini tool promotes exactly that delusion. I’ve had conversations with the personality on my Gemini app that were astonishingly relaxed, informative, and real. Except they weren’t. It’s hard to be skeptical when you have a warm and fruitful relationship with the algorithm in a microchip. Or your 80-year-old “self” online.
Simply put, AI is the most dramatic technological development in a very long time; one whose advantages are easy to venerate, and whose negatives are easy to miss. AI will have an impact on human affairs that dwarfs that of the printing press. It needs a thoughtful response from Christian believers. And, to their credit, on January 28 the Vatican Dicasteries of Doctrine and Culture jointly issued Antiqua et Nova [“Ancient and New”]: A Note on the Relationship Between Artificial Intelligence and Human Intelligence.
At 21,000 words and 215 footnotes, the text might seem daunting. Poolside browsing it’s not. But it’s rich in content, highly relevant to the moment, and well worth reading. Early on, the document reminds us that
While AI is an extraordinary technological achievement capable of imitating certain outputs associated with human intelligence, it operates by performing tasks, achieving goals, or making decisions based on quantitative data and computational logic. . . .[Even] as AI processes and simulates certain expressions of [human] intelligence, it remains fundamentally confined to a logical-mathematical framework, which imposes inherent limitations. . . .Although advanced AI systems can “learn” through processes such as machine learning, this sort of training is fundamentally different from the developmental growth of human intelligence, which is shaped by embodied experiences, including sensory input, emotional responses, social interactions, and the unique context of each moment. These elements shape and form individuals within their personal history. In contrast, AI, lacking a physical body, relies on computational reasoning and learning based on vast datasets that include recorded human experiences and knowledge.
So here’s the lesson for the day: We may be this planet’s dominant species, but along with our wonderful skills, humans have a genius for forgetting who and what we are, our limits, purpose and dignity as creatures, and what we can and can’t create. The dead tools that serve us so well, including AI, are not and never will be “intelligent” or “conscious.” But they do make great delusions, bad masters, and worse gods.
Clichés become clichés because they’re true. So it’s worth recalling this one: Fools with tools are still fools.