A Wired writer reports ChatGPT failed to name the products Wired reviewers actually recommended, raising questions about using chatbots for specific sourcing.
In short: A Wired test found ChatGPT gave incorrect answers when asked what Wired reviewers recommend for products like TVs and laptops.
Wired published a story saying it asked ChatGPT to name the specific TVs, headphones, and laptops that Wired reviewers have tested and picked as “best.” According to the article, ChatGPT’s answers were all wrong.
This is a different task from general shopping help. It is closer to asking a chatbot to accurately repeat a publication’s exact list of winners, like asking a friend to quote a menu item word for word instead of just suggesting something tasty.
The article points to a basic weakness of chatbots: they can sound confident while still mixing up sources, names, or lists, especially when asked for a very specific set of recommendations tied to one outlet’s testing.
Many people now use chatbots to decide what to buy, and some surveys suggest a lot of shoppers trust these tools. But Wired’s test is a reminder to double-check when you need an exact answer, such as “What did this publication’s reviewers pick?” A safer approach is to click through to the original review pages or product lists, instead of relying on a summary from a chatbot.
Source: Wired