12 Repetitive Tasks Our Team Killed in 30 Days With an AI Coworker

Key Takeaways

  • In 30 days we replaced 12 specific tasks across ops, growth, support, and engineering. Total estimated time saved: 47 hours per week.
  • Every task had the same shape. Cross-tool, repeatable, low-judgment, frequent enough that a human felt it as a tax.
  • We kept some tasks manual on purpose. Sales calls, hiring decisions, customer escalations, anything where the cost of being wrong was higher than the cost of doing it ourselves.
  • The biggest win was not any single task. It was that nobody owned them, and now they get done. Work that "everyone forgot to do" is the cleanest fit for an AI coworker.
  • The integrations that mattered most: Slack, Stripe, HubSpot, Linear, Notion, Google Ads, Pylon, GitHub. Not exotic tools. The boring stack everyone already has.

Why we did this

We spent Q1 in firefighting mode. The team had grown from 18 to 31 in six months. Same number of tools (about 20). Three times the number of seats inside each tool. The work to keep all of it tidy compounded faster than we could hire for it.

Specifically, three things were going wrong:

  • Reports that used to take 30 minutes were taking 2 hours because the data was spread across more places.
  • Things that were supposed to happen weekly were not happening, because they had no clear owner.
  • Customer-facing work was getting slower because operators were drowning in internal work.

We rolled out an AI coworker, Viktor, at the start of the month with one rule: every Friday, we picked one workflow that was a tax on the team and tried to move it to the agent. By the end of the month, we had moved 12.

Here are all of them, with what we actually do, what we kept human, and how much time we got back.


1. Weekly revenue snapshot

What used to happen. Every Monday morning, our finance lead opened Stripe, copied invoice data into a spreadsheet, calculated week-over-week changes, and posted a summary in Slack. About 45 minutes.

What happens now. Every Monday at 8 AM, Viktor pulls last week's invoices from Stripe, calculates the deltas, and posts a draft summary to a Slack thread. The finance lead reviews it (about 3 minutes), edits if needed, and posts.

Time saved per week. ~40 minutes. Integrations used. Stripe, Slack. What we kept human. Edits and any commentary about anomalies.


2. Pipeline-at-risk audit

What used to happen. Once a quarter, our head of sales would manually go through HubSpot looking for deals stuck in stages too long. He would do this when he remembered. Often that was after a deal had silently died.

What happens now. Every Wednesday, Viktor runs a query against HubSpot for any deal in the same stage for more than 30 days, with no recent activity. It posts a list to the sales channel and tags the owner.

Time saved per week. ~1 hour, but the bigger win is that it actually happens now. Integrations used. HubSpot, Slack. What we kept human. The decision of what to do about each stuck deal.
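The stuck-deal check above is simple enough to sketch. This is an illustrative version only, not Viktor's actual query: the deal records are plain dicts, and a real version would pull from the HubSpot CRM API.

```python
from datetime import date

# Assumed threshold: flag deals parked in one stage past this many days.
STAGE_LIMIT_DAYS = 30

def stuck_deals(deals, today):
    """Return names of deals stuck in one stage with no recent activity."""
    flagged = []
    for deal in deals:
        days_in_stage = (today - deal["stage_entered"]).days
        days_idle = (today - deal["last_activity"]).days
        if days_in_stage > STAGE_LIMIT_DAYS and days_idle > STAGE_LIMIT_DAYS:
            flagged.append(deal["name"])
    return flagged

# Hypothetical sample data.
deals = [
    {"name": "Acme expansion", "stage_entered": date(2024, 1, 2),
     "last_activity": date(2024, 1, 5)},
    {"name": "Globex renewal", "stage_entered": date(2024, 2, 20),
     "last_activity": date(2024, 2, 28)},
]
print(stuck_deals(deals, today=date(2024, 3, 1)))  # → ['Acme expansion']
```

The "no recent activity" condition matters: a deal can sit in a late stage for weeks legitimately, as long as someone is still working it.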


3. Ticket triage at the start of each shift

What used to happen. Our support lead would log into Pylon every morning, scan the queue, prioritize tickets, and assign them. About 20 minutes per shift.

What happens now. Viktor pre-triages the queue based on customer tier, ticket age, and content classification. It drafts an assignment list and posts it to the support channel for the lead to approve in one click.

Time saved per week. ~1.5 hours. Integrations used. Pylon, Slack. What we kept human. Final assignment for tier-1 customer escalations and any ticket that mentioned a specific dollar amount.


4. Underperforming Google Ads campaigns

What used to happen. Our growth lead would check Google Ads twice a week, look for campaigns spending without conversions, and pause them. Sometimes he forgot. Once we wasted two weeks of budget before anyone noticed.

What happens now. Every weekday at 6 PM, Viktor pulls Google Ads spend data, flags campaigns with high spend and low conversion, and proposes pauses in the growth channel. Approval is one click.

Time saved per week. ~30 minutes, plus ~$2,000 in saved spend per month. Integrations used. Google Ads, Slack. What we kept human. The approval. Always. We never auto-pause.
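The "propose, never auto-pause" rule can be sketched in a few lines. The thresholds and campaign data here are assumptions for illustration; a real version would read from the Google Ads API and post the proposal to Slack for a one-click approval.

```python
# Assumed thresholds, not real account settings.
SPEND_THRESHOLD = 200.0   # dollars spent since the last check
MIN_CONVERSIONS = 1

def propose_pauses(campaigns):
    """Return campaigns worth pausing. A human approves each one; nothing
    is paused automatically."""
    return [
        c["name"] for c in campaigns
        if c["spend"] >= SPEND_THRESHOLD and c["conversions"] < MIN_CONVERSIONS
    ]

# Hypothetical sample data.
campaigns = [
    {"name": "brand-search", "spend": 450.0, "conversions": 12},
    {"name": "broad-display", "spend": 390.0, "conversions": 0},
]
print(propose_pauses(campaigns))  # → ['broad-display']
```

Keeping the pause behind an approval is the whole design: the cost of the agent wrongly pausing a healthy campaign is higher than the cost of a human clicking a button once a day.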


5. Linear issue grooming

What used to happen. The engineering team held a weekly grooming session where they hunted for stale issues, missing labels, and unclear descriptions. About 90 minutes for the team.

What happens now. Every Sunday night, Viktor scans the Linear board for issues that are missing labels or estimates, or that have gone untouched for more than 60 days. It posts a list to engineering. The team reviews and acts on it Monday in 10 minutes.

Time saved per week. ~80 minutes for the team. Integrations used. Linear, Slack. What we kept human. Actually closing or de-prioritizing issues. Easy decisions, but they need a human owner.


6. Customer churn risk flag

What used to happen. We did not do this. We learned about churn after it happened.

What happens now. Every Monday, Viktor cross-references Stripe (subscription status), Pylon (recent ticket sentiment), and HubSpot (deal notes) to surface accounts that appear to be at risk. It posts a short list to the customer success channel.

Time saved per week. This was new work. It surfaced 4 at-risk accounts in the first month, two of which our CS lead said she did not know were at risk. Integrations used. Stripe, Pylon, HubSpot, Slack. What we kept human. All of the customer outreach. The agent flags. Humans call.
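A minimal sketch of what a cross-tool churn flag can look like. The three inputs stand in for Stripe subscription status, Pylon ticket sentiment, and HubSpot deal notes; the two-of-three scoring rule is our assumption for illustration, not Viktor's actual model.

```python
def at_risk(account):
    """Flag an account when at least two independent risk signals line up."""
    signals = 0
    if account["payment_failed"]:          # Stripe: a recent failed charge
        signals += 1
    if account["negative_tickets"] >= 2:   # Pylon: sentiment on recent tickets
        signals += 1
    if account["renewal_days"] <= 45:      # HubSpot: renewal window closing
        signals += 1
    return signals >= 2

# Hypothetical sample data.
accounts = [
    {"name": "Initech", "payment_failed": True, "negative_tickets": 3,
     "renewal_days": 120},
    {"name": "Hooli", "payment_failed": False, "negative_tickets": 0,
     "renewal_days": 30},
]
print([a["name"] for a in accounts if at_risk(a)])  # → ['Initech']
```

Requiring two signals rather than one keeps the Monday list short enough that the CS lead actually reads it.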


7. Weekly content briefing

What used to happen. Our content lead would scan competitors, read industry news, and pull together a Friday content update. About 2 hours.

What happens now. Viktor scans defined sources (specific blogs, X accounts, Reddit subs), summarizes what changed, and posts a draft briefing. The content lead reviews and adds her commentary.

Time saved per week. ~90 minutes. Integrations used. Web search, RSS, Slack. What we kept human. Commentary and editorial choices about what to actually do with the information.


8. Recruiting candidate first-pass screening

What used to happen. The hiring manager would manually open each new application in Ashby, read the resume, and decide whether to advance to a screening call. About 5 minutes per candidate. With 40-60 candidates per role, this was a couple of hours per week.

What happens now. Viktor reviews each new candidate against the role's must-haves and nice-to-haves, drafts a recommendation (advance, reject, edge case), and posts to the recruiting channel. The hiring manager approves in seconds for clear cases and reviews carefully for edge cases.

Time saved per week. ~2 hours per active role. Integrations used. Ashby, Slack. What we kept human. All actual decisions. Viktor recommends. Humans decide.


9. PR review for documentation

What used to happen. Documentation PRs would sit in the queue for days because the engineers were focused on code PRs. Docs got out of date.

What happens now. Viktor reviews documentation PRs for style consistency, broken links, and accuracy against the actual product behavior. It posts a comment with suggested changes. A human still has to merge.

Time saved per week. ~2 hours, plus docs are now actually current. Integrations used. GitHub, Slack. What we kept human. Final merge and any judgment about whether the doc change is correct.


10. Notion knowledge audit

What used to happen. Our knowledge base in Notion was rotting. Pages with outdated info, broken internal links, dead screenshots. Nobody owned it.

What happens now. Once a month, Viktor crawls our Notion workspace, identifies pages that have not been updated in 6+ months, and lists pages with broken internal links. It posts the list to the team and proposes which pages to update or archive.

Time saved per month. ~3 hours, but the real value is that the knowledge base is no longer untrustworthy. Integrations used. Notion, Slack. What we kept human. Updating or archiving. The agent identifies; the team acts.


11. Engineering on-call summary

What used to happen. Our on-call engineer would write a handoff message at the end of their shift summarizing what happened. Sometimes they forgot. Sometimes the summary was sparse.

What happens now. Viktor pulls from the on-call channel, the incident log, and the last 24 hours of error rates, and drafts the handoff message at the end of each shift. The on-call engineer reviews, edits, and posts.

Time saved per shift. ~15 minutes. Integrations used. Slack, internal monitoring. What we kept human. The actual content of the handoff. The agent provides scaffolding.


12. Quarterly contract renewal radar

What used to happen. Our COO would manually scan invoices for SaaS tools coming up for renewal, look at usage data to see if we still needed each one, and flag for review. He did this when he remembered. Sometimes auto-renewals went through that should not have.

What happens now. Every Tuesday, Viktor cross-references our subscription tracker (a Notion database) with calendar dates, surfaces anything renewing in the next 60 days, and proposes a recommendation (renew as is, downgrade, cancel) based on usage.

Time saved per quarter. ~3 hours, plus an estimated $4,000 in saved costs from cancellations we would have missed. Integrations used. Notion, Stripe, Slack. What we kept human. All actual cancellation decisions. The agent recommends. Humans negotiate or cancel.
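The 60-day renewal window check can be sketched as follows. The subscription rows stand in for the Notion database, and the seat-usage recommendation rule is an assumption for illustration, not our actual policy.

```python
from datetime import date

RENEWAL_WINDOW_DAYS = 60  # assumed look-ahead window

def renewal_radar(subs, today):
    """Surface tools renewing soon, with a recommendation based on seat usage.
    Humans make the actual renew/downgrade/cancel call."""
    upcoming = []
    for sub in subs:
        days_out = (sub["renews_on"] - today).days
        if 0 <= days_out <= RENEWAL_WINDOW_DAYS:
            usage = sub["active_seats"] / sub["paid_seats"]
            if usage > 0.8:
                rec = "renew"
            elif usage > 0.3:
                rec = "downgrade"
            else:
                rec = "cancel"
            upcoming.append((sub["tool"], rec))
    return upcoming

# Hypothetical sample data.
subs = [
    {"tool": "DesignApp", "renews_on": date(2024, 4, 10),
     "paid_seats": 20, "active_seats": 3},
    {"tool": "ChatTool", "renews_on": date(2024, 9, 1),
     "paid_seats": 40, "active_seats": 38},
]
print(renewal_radar(subs, today=date(2024, 3, 1)))  # → [('DesignApp', 'cancel')]
```

Running it weekly with a 60-day window means every renewal gets surfaced several times before the auto-renew date, which is what catches the ones a quarterly manual scan missed.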


What we deliberately kept manual

This list is as important as the one above.

  • Sales calls and discovery. No agent on the call.
  • Performance conversations with the team. Always human.
  • Customer escalations from named accounts. Reviewed and replied by a human, even if drafted with help.
  • Hiring decisions. Agent screens, humans decide.
  • Final wording on anything externally branded. Press, marketing copy, board emails. Drafts welcome, decisions human.
  • Anything that touches money over a threshold we set. $500 in our case. Agent can flag, agent cannot send.

The principle: if the cost of being wrong is higher than the cost of doing it ourselves, do it ourselves.


What did the rollout actually look like?

  • Week 1: tasks 1-3 moved, ~3.5 hours saved that week. Started with simple read-only tasks.
  • Week 2: tasks 4-6 moved, ~3 hours saved. Added the first action-taking task (campaign pauses).
  • Week 3: tasks 7-9 moved, ~5.5 hours saved. A mix of internal and external-facing work.
  • Week 4: tasks 10-12 moved, ~2 hours saved. Audit-style tasks with lower frequency.

The pattern that worked: start with internal, low-stakes, read-only tasks. Add action-taking tasks once the team trusted the drafts. Save the higher-stakes tasks for when the audit log is well-trusted.


What kind of team is this for?

If you are 10-50 people and you have 15+ tools in your stack, you have at least 12 tasks of this shape. Probably more. They are sitting in your team's heads as "I need to remember to do that this week."

If you are smaller, you probably have 3-5 of these. Still worth it for the time and the peace of mind.

If you are larger and have a real ops function, an AI coworker is more likely to augment that function than to replace it. The shape of the work is the same; the volume is bigger.


How does Viktor compare to general-purpose automation tools?

A reasonable question. Tools like Zapier and Make can wire together many of the same integrations. We have used both.

The difference is that automation tools execute fixed workflows. They cannot reason. They cannot draft. They cannot wait for human approval gracefully. They cannot ask follow-up questions.

For tasks that are pure plumbing ("when X happens, do Y"), Zapier and Make are great and often cheaper. We use them too. For tasks that need any reasoning at all (which deal is at risk, which campaign should be paused, which candidate looks like a fit), an AI coworker is built for it.

We covered this in more depth in The Best Zapier Alternative for Teams Tired of Workflow Spaghetti.


Frequently Asked Questions

How long did it take to set up each task? Average about 30-45 minutes from idea to running. The first one took 90 minutes because we were learning. The last one took 15 minutes.

What if a task fails? The agent surfaces the failure in the channel where the work was scheduled. We see it within minutes. We have not had a silent failure in production yet.

How much did this cost? Our Viktor bill in this 30-day period was ~$1,200. The estimated time saved was 47 hours per week. At our blended cost, payback was clearly positive in week 2.

Did your team push back? Some. Two team members were initially skeptical. Both came around once they saw their own time freed up after their workflows moved over. The trick is to start with workflows the operator hates doing, not workflows the operator loves doing.

Can I see what an action looked like before the agent took it? Yes. The audit log includes every draft, every approval, and every action. We rely on this for compliance and for debugging.


Viktor is an AI coworker that lives in Slack, connects to 3,000+ integrations, and replaces the cross-tool work nobody owns. Add Viktor to your workspace, free to start →