In short: In 2026, more workplaces are using “agentic” AI tools that can plan and complete routine tasks on their own, leaving people to supervise and make bigger decisions.
Agentic AI is becoming common in jobs that involve many repetitive steps, like security monitoring, IT support, engineering, and supply chain planning. “Agentic” means the software does not just answer questions. It can decide what to do next, take actions, and keep going until a goal is reached (like a junior assistant who can run errands without being told every step).
These tools often work in teams of “agents”, which are separate AI workers with different roles. One might check compliance rules while another confirms customer details, then they compare results. They also connect to other software through APIs, which are like plugs that let apps talk to each other.
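The team-of-agents idea above can be sketched in a few lines. This is a minimal illustration, not any vendor's real product: the agent roles, field names, and the simple rules they check are all invented for the example.

```python
# Hypothetical sketch of a "team of agents": two workers with different
# roles each check the same order, then a coordinator compares results.
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str       # which agent produced this result
    approved: bool   # did this agent's check pass?
    reason: str      # why, for the comparison step

def compliance_agent(order: dict) -> Finding:
    # Role 1: check the order against a (made-up) compliance rule.
    ok = order["amount"] <= 10_000
    return Finding("compliance", ok, "within limit" if ok else "over limit")

def customer_agent(order: dict) -> Finding:
    # Role 2: confirm customer details are on file.
    ok = bool(order.get("customer_id"))
    return Finding("customer", ok, "customer known" if ok else "missing customer")

def coordinator(order: dict) -> bool:
    # Compare both agents' results; proceed only if all approve.
    findings = [compliance_agent(order), customer_agent(order)]
    return all(f.approved for f in findings)

print(coordinator({"amount": 500, "customer_id": "C-42"}))     # True
print(coordinator({"amount": 50_000, "customer_id": "C-42"}))  # False
```

In real systems each "agent" would call an external service over an API rather than a local function, but the pattern is the same: independent checks, then a comparison step before anything proceeds.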
Examples include systems that sort low-priority security alerts, pull related log files, and start an investigation. In software work, they can draft pieces of a build process, such as basic API code and tests, which a human then reviews. In operations, they can watch data flows, spot problems like a file format changing, and try to fix the issue.
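The alert-triage example can be sketched as a simple routing rule: handle the low-priority case automatically, escalate everything else. The alert data, log store, and action names here are all illustrative assumptions.

```python
# Hypothetical alert triage: low-priority alerts get related log lines
# gathered and an investigation opened; anything else goes to a human.
ALERTS = [
    {"id": 1, "severity": "low", "host": "web-1"},
    {"id": 2, "severity": "high", "host": "db-1"},
]
LOGS = [
    {"host": "web-1", "line": "login failed"},
    {"host": "db-1", "line": "disk full"},
]

def triage(alert: dict) -> dict:
    if alert["severity"] != "low":
        # High-priority: the agent stops and hands off to a person.
        return {"alert": alert["id"], "action": "escalate_to_human"}
    # Low-priority: pull related log lines and open an investigation.
    related = [l["line"] for l in LOGS if l["host"] == alert["host"]]
    return {"alert": alert["id"], "action": "auto_investigate", "logs": related}

for a in ALERTS:
    print(triage(a))
```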
As these systems take on more work, companies will need clear rules for when the AI must stop and ask a person, plus records of what it did and why. The Financial Times notes rising interest in multi-agent setups, and Gartner reported a sharp jump in related inquiries by mid-2025. The near-term shift is likely to be more oversight roles, where people manage and approve AI-run workflows instead of doing every step themselves.
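The oversight pattern described above, a rule for when the AI must stop and ask a person, plus a record of what it did and why, might look like the following sketch. The threshold, action names, and log shape are assumptions made for illustration.

```python
# Minimal human-in-the-loop gate with an audit trail (all names invented).
AUDIT_LOG: list[dict] = []

def record(step: str, reason: str) -> None:
    # Keep a record of what the agent did and why.
    AUDIT_LOG.append({"step": step, "reason": reason})

def needs_human(action: str, cost: float) -> bool:
    # Rule: risky or expensive actions must stop and ask a person.
    return action == "delete_data" or cost > 1_000

def run_step(action: str, cost: float) -> str:
    if needs_human(action, cost):
        record(action, f"paused at cost={cost}, awaiting approval")
        return "paused_for_approval"
    record(action, f"auto-approved at cost={cost}")
    return "done"

print(run_step("restart_service", 10))  # done
print(run_step("delete_data", 0))       # paused_for_approval
```

The point of the audit list is the "records of what it did and why": when a person later reviews or approves a paused workflow, the trail shows every automated step that led there.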
Source: Financial Times