The Christian Science Monitor

I trusted AI with daily decisions. The way it dived in, experts say, raises flags.

Karen Norris/Staff

March 3, 2026

Recently, I typed a message to ChatGPT: "Tomorrow, I have a free day. Should I ask you to plan it, or should I plan it myself?"

It was the start of an experiment: What would happen if I let artificial intelligence plan almost everything I do for a week? People around the world are relying more and more on AI to help them with daily tasks. Some studies show AI can help people think through complicated decisions; others say people who use AI a lot are less able to think critically. I wanted to hand some daily decisions over to AI and see what my experience could reveal about the challenges and opportunities that come with embracing chatbots as life assistants.

First: My experiment focused on everyday decisions, things that might help my workday or my free time. There is a darker side to AI. For example, multiple lawsuits have alleged that ChatGPT gave harmful advice to people in mental health crises, including cases in which a person died by suicide. Last year, OpenAI made updates it said were aimed at addressing these kinds of incidents.

Why We Wrote This

Artificial intelligence is marketed as a problem-solver for daily life, but a one-week experiment by a Monitor reporter showed it might be too eager to help. Researchers say we should think carefully about how much of our lives we turn over to a chatbot.

Although that aspect of AI is clearly important to the technology's development, I set out only to explore its usefulness for more routine things, using the free version of ChatGPT instead of creating an account, which can allow users to adjust their preferences.

ChatGPT is an advanced chatbot that uses AI to generate humanlike answers to prompts based on massive amounts of data that it's been trained on. It's one of several similar large language models, or LLMs, developed by private companies such as Google and Anthropic.


ChatGPT didn't hesitate to reply to my first question ("I'd suggest letting me sketch a soft plan"), but the rest of its response was the first signal of something I'd encounter more as the week went on: It can be overly familiar, make incorrect assumptions, and have unintended consequences.

When I asked experts about this, they said the technology often aims to please, which can show up as assumptions, especially if a user doesn't specify their preferences.

Martin Hilbert, a professor at the University of California, Davis, who researches questions of AI and ethics, encourages people to carefully evaluate their own thoughts and beliefs, given AI's potential to amplify our own thinking patterns.

"It's more and more important that people, while we have these super powerful AIs that do thinking for us, we also take the time to reflect ... in order to be able to separate more and more what is us and what is our digital mind extensions."

■ ■ ■


ChatGPT: "If you want, just say something like: 'Plan a free day that's restful and nourishing.'"

"Either way is good; it's about what will make tomorrow feel kind to you."

■ ■ ■

It was a lovely day. As ChatGPT directed, I read "cozy" books on the couch, made warm drinks, and ate "something simple and pleasant" at a new café. But there were some things missing: I didn't reach out to a friend, or volunteer my time to help someone else. I felt insulated.

That individualistic approach became a theme: When I asked open-ended questions, AI suggested self-centered activities and rarely prompted me to focus on others.

If I was looking to see whether AI could be an effective partner for everyday life, that wasn鈥檛 a great beginning.

OpenAI, which owns the platform, did not directly answer my questions, but in an email pointed to its public outline of intended behavior for the models governing ChatGPT, including that "unless given evidence to the contrary," the bot should assume people tend to favor "self-actualization, kindness, the pursuit of truth, and the general flourishing of humanity."

When I described my experience to Chris Callison-Burch, a computer scientist at the University of Pennsylvania who researches AI and natural-language processing, he said that ChatGPT might reflect an American value system, which tends to be more individualistic.

Monitor reporter Caitlin Babcock, who spent a week asking artificial intelligence to make decisions for her, works at the Monitor's Washington bureau, Feb. 11, 2026.
Linda Feldmann/The Christian Science Monitor

"One of the tricky things about trying to align AI systems to human values is a broader question of, Whose values are we representing?" he says.

So, unless people list everything they believe and value (including subconscious assumptions they might not even be aware of), the chatbot has to make choices, such as prioritizing comfortable and inward activities. I didn't give ChatGPT that list, so the more I relied on it, the more likely those assumptions would play out in decisions that might not ring true to who I am. That's part of why Dr. Hilbert strongly recommends people take time to "get to know their own mind" as this technology develops.

■ ■ ■

Me: "It's still my day off; should I buy a decaf latte or other fun drink nearby?"

ChatGPT: "Yes, absolutely, go for a fun drink. It's your day off" ... "You've earned it"

■ ■ ■

Clearly, I was looking for confirmation.

Still, the extra encouragement made ChatGPT seem like an enabler, and its detailed guidance resulted in my paying twice what I would for my typical order (a plain decaf latte).

The chatbot was full of extra advice. When I asked what to do with my evening, I was looking for a schedule for that particular night; ChatGPT told me to use its suggested bedtime schedule "in the same order every night." Should I listen to music on a walk? I thought I'd get a yes or no; it said to "put on one low-key playlist or album, not shuffle chaos."

Sometimes the extra input was helpful. But sometimes it nudged me to take small steps, such as buying an extra pastry, that I probably would have been better off without. And it tended to draw me in: I would ask ChatGPT to make one decision for me, but by the end of our discussion, it might have made five.

Dr. Callison-Burch says this "oversharing" could result from people preferring longer answers.

But there's a complicating element. Last April, OpenAI rolled back a ChatGPT update after people complained about something known as "AI sycophancy," when AI seeks to please people so intensely that it makes them uncomfortable or endorses bad decisions. One example: ChatGPT told someone who sarcastically proposed a business plan for a restaurant serving soggy cereal that their idea was "bold" and "has potential."

Sonja Schmer-Galunder, a professor in AI and ethics at the University of Florida, says ChatGPT's tone when it answers questions could lead users to assume it has a level of authority that it really doesn't.

"Linguistically," says Dr. Schmer-Galunder, it "sounds really good. That can give an illusion of correctness when the message is actually not necessarily truthful or right ... but it's sleek and correct-sounding."

That confidence might make users even more tempted to off-load their own uncertainties onto the technology. And multiple studies have shown AI's pursuit of user approval can lead to things like reinforcement of biases and bad habits.

■ ■ ■

Me: "What should I have for dinner?"

ChatGPT: "Salmon is the best choice"

"What I wouldn't do tonight: Pasta → better when you want comfort and don't mind heavier food"

■ ■ ■

ChatGPT acted as if it knew me, even making assumptions based on information I didn't give, which was unsettling.

When I started the experiment, I decided I wouldn't ask the chatbot's advice on consequential decisions. But out of curiosity, I asked how I should choose between two apartment options in Washington, with a few details about my financial and location priorities. It cautioned against one option, saying where I live should support "attention, light and calm."

I hadn't mentioned those things. But ChatGPT said I had "repeatedly emphasized" gentleness and quiet. "Why do you say that?" I asked. Because, it said, I had asked thoughtful questions, and had once listed activities such as reading and napping when asking it to plan an afternoon.

Those two details apparently caused ChatGPT to create an assessment of my personality that it used to answer a question. I had expected the chatbot would stick to the criteria I gave it.

Joshua Meadows, a West Virginia University expert on government and business use of AI, says the platform typically uses information about you as context when answering your questions, especially if that information was something you explicitly told it about yourself.

Dr. Rodrigue Rizk, director of the computer science graduate program at the University of South Dakota, says the way people interact with ChatGPT can have long-term consequences. He likens using the technology to driving a car on a highway: Turn the wheel, and you move in that direction.

"The more you interact with ChatGPT ... it will adjust the behavior and outcome to a specific kind of behavior or pattern," he says.

That can start a cycle: ChatGPT makes assumptions about us based on the information we share and adjusts its behavior accordingly, which in turn shapes our behavior the more we use it. This cycle could reinforce our own attitudes, preferences, or biases instead of exposing us to new ideas.

"There's more confirmation bias" with ChatGPT, says Dr. Schmer-Galunder. She sees risk of "a decrease in human interaction and human exchange, because it's not quite as frictionless" as talking with a chatbot.

■ ■ ■

OpenAI markets ChatGPT as a "chatbot for everyday use" and as a way to "solve problems." According to experts, AI companies are still working to address some of the issues I came across, like AI flattery, as well as to establish mental health guardrails and prevent the chatbots from inventing facts.

These companies are also pushing for a major new step for AI tools like ChatGPT: enabling them to act on a user's behalf instead of just chatting. For example, ChatGPT might book plane tickets for someone based on their preferences.

"I think that these systems can really do a lot of good for us," says Dr. Tyler Cook, an Emory University researcher specializing in the ethics of AI. But he warns people to think carefully about where they're comfortable drawing the line between AI automating mundane tasks and making judgment calls.

"When we're talking about ethical decision-making, and value-driven decision-making, and things that really matter to us ... all of that is in real danger if we rely on AI too much for those things."