TechCrunch updated a living glossary that explains common AI words like LLM, AI agent, hallucination, and training in plain language.
In short: TechCrunch has published and regularly updates a glossary that explains common artificial intelligence terms in plain language.
TechCrunch put together a running list of AI terms that often show up in news, product pages, and workplace conversations. The article is designed for readers who keep seeing acronyms and phrases like “LLM,” “RAG,” and “RLHF” and want a clear explanation.
The glossary defines many ideas, including “large language model” (the kind of AI behind chatbots), “AI agent” (software that can take steps for you, like booking a table), and “hallucination” (when an AI makes something up and presents it as fact). It also covers behind-the-scenes concepts like “training” (teaching the AI by showing it lots of examples) and “inference” (when the trained AI is actually used to answer questions).
Some entries use simple comparisons. For example, API endpoints are described as “buttons” on the back of software that other programs can press, and parallelization is likened to having several employees working at the same time instead of one person doing everything in order.
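The parallelization analogy maps directly onto concurrency tools in most programming languages. As an illustrative sketch (not from the TechCrunch article), here is how Python's standard `concurrent.futures` module expresses "several employees at once" versus one worker handling tasks in order; the `handle_ticket` function is a made-up stand-in for any slow task:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_ticket(ticket_id):
    # Hypothetical stand-in for a slow task, e.g. a network call.
    return f"ticket {ticket_id} resolved"

tickets = [1, 2, 3, 4]

# Sequential: one "employee" handles every ticket, one after another.
sequential = [handle_ticket(t) for t in tickets]

# Parallel: a pool of "employees" works on tickets at the same time.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(handle_ticket, tickets))

# Same results either way; the parallel version can finish in less
# wall-clock time when the individual tasks are genuinely slow.
assert parallel == sequential
```

The point of the analogy holds: parallelization changes how fast the work gets done, not what the work produces.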
TechCrunch says it updates the glossary regularly as new terms appear and old ones change meaning.
AI is now part of everyday tools, from customer support chats to writing helpers, but the language around it can be hard to follow. A clear glossary helps people understand what companies are claiming, spot when an AI feature might be risky, and ask better questions, especially when accuracy problems like hallucinations can affect real decisions.
Source: TechCrunch AI