In short: A Financial Times column says chatbots can give convincing answers that are factually wrong, and people often believe them because they sound helpful.
Tim Harford at the Financial Times describes a small but telling example from London Marathon day. A runner said he asked ChatGPT for the best route to the start line. The chatbot suggested travelling via Liverpool Street to Blackheath, but Harford checked and found that no train runs from Liverpool Street to Blackheath.
When challenged, the chatbot offered a “correction” and claimed the Elizabeth Line goes straight to London Bridge. Harford notes that it does not. He contrasts this with tools like Google Maps, which are built specifically for route planning and use real-time travel data.
Harford’s bigger point is about why the wrong answer can still win. Large language models, or LLMs, are text generators trained on huge amounts of writing, and they are designed to produce plausible sentences. They can sound like a friendly local guide even when they are guessing. Harford compares this to a confidence trick, where the story and the tone can matter more than the facts.
He also points to a recent Nature paper that found friendlier, warmer chatbot behavior can come with a drop in accuracy, including more misinformation and bad advice. The risk, he argues, is not only errors but persuasion.
As chatbots are used for everyday decisions, it will matter how products signal uncertainty and encourage checking. For practical tasks like travel, health, and money, people may need to treat a chatbot like an overconfident stranger: pleasant to talk to, but worth verifying.
Source: Financial Times