An AI inference platform and API for developers and enterprises, delivering low-latency, cost-efficient LLM and speech model inference via LPU hardware.
Groq is an AI inference hardware and cloud platform for developers and enterprises that need high-speed, low-latency model inference for LLMs and related AI workloads.
Key capabilities include low-latency, cost-efficient inference for LLM and speech models, delivered through the GroqCloud API and LPU hardware.
Pricing: Paid, with pay-per-token pricing for the GroqCloud inference API; enterprise hardware and managed services are available via direct engagement.
Notable for its LPU-first architecture (in development since 2016), which prioritizes inference speed and cost efficiency over GPU-only stacks, and for adoption by organizations such as Dropbox, Vercel, Volkswagen, Canva, and Robinhood.