The 5 Workflows to Automate First With an AI Coworker
Key Takeaways
- Most teams start with the wrong workflow. They pick the one that looks most impressive in a demo, not the one that pays back in the first week.
- The right first workflows share three traits. High frequency, low judgment risk, and a clear owner who can approve the draft.
- Inbox triage is the fastest payback. Every role opens Gmail or Outlook in the morning. An AI coworker that sorts, drafts, and waits for a human to send pays back on day one.
- Weekly reporting and ticket routing are the next two. Both are repetitive, low-judgment, and live in tools an AI coworker already reads (Stripe, Slack, Linear, Pylon).
- Keep judgment work manual on purpose. Commercial calls, headcount decisions, and anything that touches a live customer dispute stay with humans. Starting with judgment work is how teams kill their own pilot.
Why most AI pilots start with the wrong workflow
The first workflow you hand to an AI coworker sets the narrative for the next six months. Pick a good one and the team asks for more. Pick a bad one and the team quietly stops @mentioning the agent, and you are back to the old way inside a month.
Most teams pick badly. They chase the workflow that looked most impressive in the demo, usually "draft a 10-page strategy document from three bullet points" or "auto-reply to every customer ticket." Both are wrong. The first is low frequency, so nobody remembers to use the agent. The second is high judgment risk, so the first week produces a customer complaint and the team loses trust.
Gartner's 2024 generative AI forecast estimated that 30%+ of generative AI projects would be abandoned after proof-of-concept by the end of 2025, and the failure mode is almost always the same: teams picked a workflow that was too ambitious or too rare, and never got to the repeating value.
The good first workflows share three traits:
- High frequency. Every day or every week, not every quarter.
- Low judgment risk. A wrong draft is embarrassing, not expensive.
- Clear owner. One named human who approves the output before it ships.
The five workflows below are ranked by how quickly a 10-to-50 person team tends to see payback. Your team is different. Read the list, pick the one that matches your highest-frequency pain, and start there.
#1: Morning inbox triage
Every role opens an inbox in the morning. Founders, salespeople, operators, engineers, support leads. The first 45 minutes of the day disappear into reading, categorizing, and drafting replies to messages that do not need deep thought.
An AI coworker reads the inbox, sorts by what actually needs a human reply (vs newsletters, calendar invites, auto-replies, vendor outreach), and drafts a response to each one. You skim the drafts, edit or approve, and hit send.
@Viktor go through my Gmail inbox from the last 12 hours. Filter out
newsletters, calendar accept/decline, SignWell view notifications, and
cold outbound. For anything that genuinely needs my reply, draft a
response in my voice (short, direct, no hollow empathy). Group the
drafts into a single Slack message, ranked by urgency. I will approve
them one by one.
Viktor reads 140 emails, flags 11 that need a real reply, drafts each one, and drops the batch in Slack. The operator reads the batch over coffee, edits three, approves eight, and sends. The 45-minute morning ritual takes 12 minutes.
Why this one first: daily frequency, low judgment risk (you approve every send), and one clear owner (you). If the draft is wrong, you edit it. Nobody sees the bad version.
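To make the filtering step concrete, here is a minimal Python sketch of the triage logic. The patterns, senders, and subjects are invented for illustration; Viktor's actual classification is context-driven rather than a keyword list.

```python
import re

# Hypothetical sender/subject patterns for messages that rarely need a reply.
SKIP_PATTERNS = [
    r"newsletter",
    r"no-?reply@",
    r"calendar-notification",
    r"viewed your document",  # e.g. SignWell view notifications
]

def needs_reply(sender: str, subject: str) -> bool:
    """Return True when a message likely needs a human-reviewed draft."""
    haystack = f"{sender} {subject}".lower()
    return not any(re.search(p, haystack) for p in SKIP_PATTERNS)

inbox = [
    ("noreply@signwell.com", "Acme viewed your document"),
    ("updates@news.example.com", "Weekly newsletter"),
    ("cfo@customer.com", "Question about the March invoice"),
]

# Only messages that survive the filter get a draft queued for approval.
to_draft = [(s, subj) for s, subj in inbox if needs_reply(s, subj)]
```

Of the three sample messages, only the invoice question survives the filter; the drafting and approval steps then operate on that short list, which is why the human's morning shrinks to skimming a handful of drafts.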
#2: Weekly reporting
Every growing team has a weekly report that nobody wants to write. Revenue from Stripe. Pipeline from HubSpot. Engineering velocity from Linear. Marketing from Google Ads and Meta Ads. The human who writes it spends 3-4 hours pulling data, copying numbers into a template, and making the commentary sound confident.
An AI coworker pulls the data, writes the first draft, and drops it in the channel where the team reviews it. The human tightens the commentary, pushes back on one number they disagree with, and publishes.
| Report | Data source | Human approver | Hours saved per week |
|---|---|---|---|
| Revenue report | Stripe, HubSpot | Growth lead | 3-4 |
| Engineering velocity | Linear, GitHub | VP Engineering | 2 |
| Marketing performance | Google Ads, Meta Ads, HubSpot | Marketing lead | 3 |
| Support load | Pylon, Slack | Support lead | 1-2 |
We wrote about this specific pattern in how to replace weekly reporting with an AI coworker. The mechanics are the same across every report: the agent does the data gathering, the human does the commentary.
Why this one second: weekly frequency, every team has it, and the data sources are clean. Payback shows up in the first Monday.
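To show the division of labor, here is a minimal Python sketch of the drafting step. The metric names and numbers are placeholders; in practice the values would come from the Stripe and HubSpot APIs, and the agent fills them in before a human writes the commentary.

```python
from datetime import date

# Placeholder numbers standing in for what an agent would pull from
# Stripe and HubSpot; the real values come from those APIs.
metrics = {
    "MRR": ("$42,300", "+4.1% WoW"),
    "New pipeline": ("$118,000", "+9 deals"),
    "Churned": ("$1,200", "1 account"),
}

def draft_report(metrics: dict, week_of: date) -> str:
    """Assemble the agent's first draft; the commentary slot stays human."""
    lines = [f"Weekly revenue report - week of {week_of.isoformat()}", ""]
    for name, (value, delta) in metrics.items():
        lines.append(f"- {name}: {value} ({delta})")
    lines.append("")
    lines.append("Commentary: [approver fills in before publishing]")
    return "\n".join(lines)

print(draft_report(metrics, date(2025, 3, 3)))
```

The design choice worth copying is the explicit commentary placeholder: the agent never ships an opinion about the numbers, it only assembles them.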
#3: Support ticket routing
Support teams spend 20-30% of their day on routing, not replying. A ticket comes in, someone reads it, decides if it is a bug, a billing issue, a feature request, or a how-to question, and tags the right owner.
An AI coworker reads each incoming Pylon or Zendesk ticket, looks up the customer in Stripe, checks if a similar ticket landed in the last 30 days, and proposes a routing with the suggested owner. The support lead confirms or overrides in one click.
| Ticket type | What Viktor pulls | What Viktor proposes |
|---|---|---|
| Billing question | Stripe customer record, last invoice | Route to billing, draft one-line answer |
| Product bug | Linear search for similar issues | Route to engineering, flag similar open ticket |
| How-to question | KB search in Notion, past similar tickets | Route to support, draft KB link reply |
| Feature request | Linear search in roadmap | Route to PM, tag for backlog review |
Why this one third: daily frequency, clear owner in the support lead, and the downside of a wrong routing is small (you re-route, no customer harm). Our piece on AI for customer support names it the highest-ROI support workflow.
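A rough sketch of the routing decision, with invented keyword rules. A real router would also use the Stripe record and similar-ticket search, as the table above describes; keywords alone are a stand-in for illustration.

```python
# Illustrative keyword rules only; these teams and phrases are hypothetical.
ROUTES = [
    (("invoice", "charge", "refund", "billing"), "billing"),
    (("error", "crash", "broken", "bug"), "engineering"),
    (("how do i", "how to", "where is"), "support"),
    (("feature", "would be great", "request"), "product"),
]

def propose_route(ticket_text: str) -> str:
    """Return the proposed owning team for a ticket; a human confirms it."""
    text = ticket_text.lower()
    for keywords, team in ROUTES:
        if any(k in text for k in keywords):
            return team
    return "support"  # default owner; the support lead can re-route in one click

assert propose_route("I was charged twice this month") == "billing"
assert propose_route("The export button is broken") == "engineering"
```

The default branch matters: an uncertain ticket lands with the support lead rather than being guessed into the wrong queue, which keeps the cost of a miss at one re-route.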
#4: Onboarding checklist execution
New hires lose their first week to setup. Accounts, tools, documents, a buddy to ask questions. Most companies have a checklist, and the checklist is half-stale.
An AI coworker reads the new-hire checklist from Notion, creates the accounts it can (in tools where the AI has permissions), drafts the welcome message, schedules the first-day buddy meeting, and flags which steps need a human (security access, physical badge, executive intros). The people lead reviews the flagged steps and approves.
@Viktor onboard Alex, starting next Monday as a software engineer.
Pull the engineering onboarding checklist from Notion. Create the
GitHub org invite, the Linear account, the Slack channels, and the
Notion workspace access. Draft the Day 1 welcome message. Schedule
a 30 min intro with Alex and their buddy. Flag anything that needs
a human approver (IT security, badge, executive intros).
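The split the prompt asks for, between steps the agent executes and steps it flags, can be sketched like this. The checklist rows are hypothetical; the real list would come from the Notion page the prompt references.

```python
# Hypothetical checklist rows; the real checklist lives in Notion.
checklist = [
    {"step": "GitHub org invite", "needs_human": False},
    {"step": "Linear account", "needs_human": False},
    {"step": "Slack channels", "needs_human": False},
    {"step": "IT security access", "needs_human": True},
    {"step": "Badge pickup", "needs_human": True},
]

# The agent executes what it has permissions for and surfaces the rest.
automated = [s["step"] for s in checklist if not s["needs_human"]]
flagged = [s["step"] for s in checklist if s["needs_human"]]

print("Agent executes:", ", ".join(automated))
print("Flagged for people lead:", ", ".join(flagged))
```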
Gallup's research on employee engagement shows that only 12% of employees strongly agree their organization does great onboarding, and the cost of a bad first week compounds for months. An AI coworker is not the fix for a bad onboarding playbook. But if the playbook is reasonable and the problem is execution, an AI coworker removes the execution gap.
Why this one fourth: lower frequency than the top three, but very high impact per run. A smooth first week correlates with faster ramp and better retention. We covered the broader pattern in how we onboarded a new hire without HR touching anything twice.
#5: Standup digest
Engineering standups eat roughly 150 minutes per week per engineer: a 30-minute meeting, five days a week. Most of that time is status updates that could have been a Slack thread.
An AI coworker reads the last 24 hours of activity (GitHub commits, Linear issue transitions, Slack updates per engineer) and drafts a standup digest the team reviews before the meeting starts. The meeting itself focuses on blockers and judgment calls, not on what each engineer did yesterday.
| Source | What the digest surfaces |
|---|---|
| GitHub | PRs opened, merged, reviewed per engineer |
| Linear | Issues transitioned, blockers flagged |
| Slack | Any @here / @channel pings on engineering |
| Deployment logs | What shipped, what rolled back |
We covered this workflow in more depth in the async playbook for replacing standup, weekly sync, and status review with Slack reports. The pattern is identical: the agent produces the status, the humans spend the meeting time on judgment.
Why this one fifth: the digest runs daily for most teams, and the time savings compound the more engineers you have. Starts paying back above 6 engineers, strongly above 12.
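A minimal sketch of the digest-assembly step, with invented engineer names and activity counts standing in for what the GitHub, Linear, and Slack APIs would return.

```python
# Hypothetical per-engineer activity; real data would come from the
# GitHub, Linear, and Slack APIs over the last 24 hours.
activity = {
    "maya": {"prs_merged": 2, "issues_done": 3, "blockers": []},
    "jon": {"prs_merged": 0, "issues_done": 1,
            "blockers": ["waiting on staging DB"]},
}

def standup_digest(activity: dict) -> str:
    """Render the status portion of standup so the meeting covers blockers."""
    lines = ["Standup digest (last 24h):"]
    for name, a in activity.items():
        lines.append(
            f"- {name}: {a['prs_merged']} PRs merged, "
            f"{a['issues_done']} issues done"
        )
        for blocker in a["blockers"]:
            lines.append(f"  BLOCKER: {blocker}")
    return "\n".join(lines)

print(standup_digest(activity))
```

Blockers are the only lines that demand discussion, which is the point: the digest absorbs the status round, and the meeting starts at the judgment calls.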
What to keep manual on purpose
The five workflows above are where to start. The five below are what a team should deliberately keep manual for the first 90 days.
- Commercial judgment calls. How to respond to a competitor's proposal, whether to accept unusual contract terms, and walk-away calls on a negotiation. Judgment-heavy, high blast radius.
- Customer escalation responses. When a customer is angry, the email goes from a human. Always. Anthropic's December 2024 guide on building effective AI agents explicitly flags this pattern: use an agent to draft, but never let it send a response to a contested situation.
- Headcount decisions. Whether to post a role, which candidates to interview, and when to extend employment terms. All human.
- Legal sign-off. Any contract, NDA, DPA, or policy change. Draft yes, send no.
- Security access grants. The approver is a human, named, with a known access review cadence. The AI coworker can file the request. It does not grant the access.
The pattern across all five: high judgment, high blast radius, low frequency. The opposite of the first workflows.
How to pick your first workflow for next week
If you are picking one workflow to start with and you have not decided yet, the short version is:
- If your team reads inboxes all morning, start with inbox triage. Every role benefits.
- If your Monday is eaten by reporting, start with weekly reporting. One channel, one scheduled cron, ~3 hours back.
- If your support team is drowning in routing, start with support ticket routing. The payback is daily.
- If you have a hire starting in the next 30 days, start with onboarding. The first hire through the new playbook sets the template for the next ten.
- If you have an engineering team above 8, start with standup digest. Each engineer gets 15 minutes back per day.
For a deeper read on the decision, our 8-question checklist before you buy an AI agent covers the approval, audit, and integration questions to answer before rollout.
How Viktor handles the review loop
A recurring concern when teams start: what happens when the draft is wrong?
Viktor runs review-first by default. For every write action in any of the five workflows above, Viktor drafts the action, shows the source data it used, and waits for the named approver to confirm before the action lands. The approver sees:
- The exact source rows or messages Viktor read
- The proposed action (email body, ticket content, report text)
- A confidence flag when any input was ambiguous
- A link back to the raw source for spot-checking
Every action is logged with a timestamp and the human approver. If Viktor drafts an inbox reply, the audit trail shows the proposed draft and the human who hit send. We wrote the broader argument in why an AI agent that acts without asking is a liability.
Frequently Asked Questions
Which workflow pays back fastest? Morning inbox triage. Daily frequency, low risk, every role uses an inbox. Most teams see the payback inside the first week.
Do I need to set up integrations before starting? Viktor connects through the same OAuth your team already uses for Slack, Gmail, Stripe, HubSpot, Linear, Notion, and 3,000+ other tools. Most first workflows run on integrations your team already has.
What happens if the draft is wrong? You edit or reject it. The action does not ship until a human approves. The wrong draft never reaches a customer.
How long before the team trusts the drafts? Most teams report the approval rate climbing from 50-60% in the first week to 80-90% by week three. The jump comes from the agent learning team context (team voice, approver preferences, which channels to post where).
Can I run all five workflows at once? You can, but most teams move faster by picking one, letting the team @mention Viktor for two weeks, and then adding the second. A sequenced rollout avoids the "too many new surfaces" problem.
Where should I keep humans in the loop forever? Commercial calls, customer escalations, headcount decisions, legal sign-offs, and security access grants. These are judgment work, and we recommend leaving them manual on purpose.
What if my team already uses Zapier or Gumloop for some of these? Keep them. A scheduled Zapier flow for a stable nightly job is a good fit. Use Viktor for the conversational, context-heavy work that lives in Slack threads. Most teams run both.
Viktor is an AI coworker that lives in Slack, connects to 3,000+ integrations, and does real work for your team. Add Viktor to your workspace.