In short: Former Meta news chief Campbell Brown says today’s AI chatbots often get important topics wrong, and her startup Forum AI is trying to measure that and push for better answers.
Campbell Brown, a former TV journalist who later led news partnerships at Meta, spoke with TechCrunch about her company, Forum AI. The company evaluates how well major AI systems handle “high-stakes topics,” including geopolitics, mental health, finance, and hiring.
Forum AI’s approach is to bring in well-known experts to design tests, then use “AI judges” to score chatbot answers at large scale. Think of it like creating a detailed grading rubric with top teachers, then training an assistant grader to apply it consistently. Brown said Forum AI aims for its AI judges to match human expert consensus about 90% of the time, and that the company has reached that level.
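That agreement target is easy to picture as a simple calculation. The sketch below is illustrative only (it is not Forum AI's actual system): it compares an AI judge's grades against human expert consensus labels on a set of chatbot answers, using hypothetical pass/fail grades.

```python
# Illustrative sketch (hypothetical data, not Forum AI's system): measuring how
# often an automated "AI judge" matches human expert consensus when grading
# chatbot answers against a shared rubric.

def agreement_rate(judge_scores, expert_scores):
    """Fraction of items where the AI judge's grade matches the expert consensus."""
    assert len(judge_scores) == len(expert_scores) and judge_scores
    matches = sum(j == e for j, e in zip(judge_scores, expert_scores))
    return matches / len(judge_scores)

# Hypothetical rubric grades for ten chatbot answers.
judge  = ["pass", "fail", "pass", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
expert = ["pass", "fail", "pass", "fail", "fail", "pass", "pass", "fail", "pass", "pass"]

print(agreement_rate(judge, expert))  # 0.9 — the ~90% level Brown cites
```

In practice such evaluations track agreement across many topic areas, but the core metric is this simple match rate between the automated grader and the expert-designed rubric.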
Brown said she started Forum AI after seeing ChatGPT’s public launch while she was at Meta. She believed AI would become a main gateway for information, but she felt it was “not very good,” especially for everyday questions.
She also said accuracy has not been the top priority for many AI makers, who have emphasized areas like coding and math instead. From Forum AI’s evaluations, Brown cited examples such as chatbots pulling information from Chinese Communist Party websites when those sources were not relevant, and what she described as a left-leaning bias across many models.
More people now use chatbots as a shortcut to learn things, and an unreliable shortcut can mislead them on topics that affect their jobs, money, and health. Brown argues that businesses may push hardest for accuracy, because wrong AI answers in lending, insurance, and hiring can create legal risk.
Source: TechCrunch AI