How to Manage an AI Coworker Like a Team Member (Not a Tool)
Key Takeaways
- An AI coworker is not a feature you configure. It is a team member you manage. Same rhythm: onboarding, clear scope, regular review, expanding responsibility as trust grows.
- Someone on your team has to own it. The fastest way to waste money on an AI coworker is to install it and let everyone expect someone else to set it up.
- The first two weeks are onboarding, not launch. You teach it your tools, your tone, and your definition of done. Fast teams treat this the same way they treat a new hire's first month.
- The weekly review matters. Ten minutes of "what went well, what did not, what should change" keeps the AI coworker pointed at the right work.
- Expand scope the same way you would with a junior hire. Start with narrow, well-defined tasks. Earn trust. Then hand over the messier, higher-leverage work.
The frame most teams get wrong
When a team installs an AI coworker, the default mental model is the software one: we pay for it, we turn it on, it works. That is the wrong frame.
The right frame is the team member one: we hired it, we onboard it, we manage it, we expand its scope as it proves itself.
Teams that treat an AI coworker like software typically get 3 months in and say "it did not do what we expected." When you look at their setup, they skipped the onboarding loop. Nobody on their team taught the AI coworker their specific workflow, specific tone, and specific definition of done. They expected a turnkey solution and got a general-purpose tool.
Teams that treat it like a team member get 3 months in and say "we cannot imagine running ops without it." The difference is management, not the product.
Anthropic's December 2024 engineering guide on building effective agents makes the same point from the developer side: agents that ship in production have a clear human in the loop, well-defined scopes, and an explicit review cycle. The same holds for deploying one inside your team.
Who owns the AI coworker?
The single biggest predictor of whether an AI coworker works out in a team is whether one named person owns it. Not a committee. Not "everyone." One person.
The owner does three things:
- Defines the scope. Which workflows the AI coworker is running, which ones it is not.
- Reviews the work. Reads the outputs every week and catches the drift before it compounds.
- Expands the scope. Decides when the AI coworker has earned the next responsibility.
The best owners we have seen are usually operations leads, chiefs of staff, and technical project managers. They already think in workflows and trust loops. Engineering teams can own it too, though they tend to under-invest in the tone and workflow side and over-invest in the integration side.
What does not work: making "the whole team" the owner, making the founder the owner without delegation, or making IT the owner. None of those have the workflow context to manage the AI coworker well.
The first two weeks: treat it like a new hire
A new hire's first two weeks are about context. The same is true for an AI coworker. These are the five things a team should teach it in week one.
1. The stack. Connect every tool the AI coworker will touch: Slack, Gmail, Stripe, HubSpot, Linear, Notion, Google Ads, Meta Ads, GitHub, Ashby, Pylon, Zendesk, Salesforce, whatever your team runs on. Half of the failure cases we see at day 30 trace back to an integration that was never connected on day one.
2. The tone. Paste five recent messages written by the team lead or founder. Let the AI coworker see how your team actually talks. This alone removes most of the "it sounds generic" complaints.
3. The definition of done. Show it what a finished deliverable looks like. Link to last week's weekly report. Paste a good customer reply. Attach a well-written Linear ticket. This works the same way it does for a junior hire.
4. The review loop. Decide which work needs human approval (most of it, week one) and which does not (almost none of it, week one). Configure the approval flow in Slack so the owner sees every output before it ships.
5. The scope. Pick three to five workflows to run in the first two weeks. Resist adding the 20 other things you want it to do. The first two weeks are about earning trust in a small scope, not covering ground.
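If it helps to make the week-one defaults concrete, here is a minimal sketch of points 4 and 5 written down as data: a small scope list with approval flags. The workflow names and field names are illustrative, not a real configuration format.

```python
# Hypothetical sketch: week-one scope and approval defaults as data.
# Workflow names and fields are illustrative, not a real API.

WEEK_ONE_SCOPE = [
    {"workflow": "monday_growth_digest", "requires_approval": True},
    {"workflow": "daily_support_summary", "requires_approval": True},
    {"workflow": "friday_eng_status", "requires_approval": True},
]

def can_ship(workflow: str, approved_by_owner: bool) -> bool:
    """Week-one rule: nothing ships without the owner's approval."""
    entry = next(w for w in WEEK_ONE_SCOPE if w["workflow"] == workflow)
    return approved_by_owner or not entry["requires_approval"]
```

The point of writing it down this way is that "three to five workflows, all human-approved" becomes a list the owner can read and change, not tribal knowledge.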
How to review the work each week
The weekly review is the single highest-leverage management behavior. Ten minutes every Friday. Three questions.
```
@Viktor post the weekly review for the owner to read.
Include: every action you took this week with the outcome,
every time a human rejected or corrected your draft,
and the three patterns from the corrections that changed your approach.
```
The review reads like a manager reading a report from a junior. You see what the AI coworker did, where it got corrected, and what it learned from the corrections. The owner scans it, catches the drift, and adjusts the scope or the tone guidance.
Three things to look for in the weekly review:
- Corrections that repeat. If the same correction shows up three weeks in a row, the scope or the instructions need updating. Do not just correct it again.
- Work that sat unused. If the AI coworker produced a weekly report that nobody read, kill it. Ruthlessly remove work nobody is using.
- Patterns that worked. If a workflow is running clean for three weeks, expand it. Give the AI coworker more of the same shape of work.
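The first check above, corrections that repeat, is the one that is easiest to miss when scanning by eye. For teams that log corrections, a minimal sketch of the check might look like this; the log format (one set of correction labels per week) is an assumption for illustration.

```python
from collections import Counter

def repeated_corrections(weekly_logs, weeks=3):
    """Flag correction labels that appear in every one of the last `weeks` reviews.

    weekly_logs: list of sets, one per week, newest last; each set holds
    correction labels such as "wrong tone" or "stale Stripe number".
    The log format is hypothetical, for illustration only.
    """
    recent = weekly_logs[-weeks:]
    if len(recent) < weeks:
        return set()  # not enough history to call anything a repeat
    counts = Counter(label for week in recent for label in set(week))
    return {label for label, n in counts.items() if n == weeks}
```

Anything this returns is a signal to update the instructions or the scope, not to correct the same output a fourth time.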
A comparison: managing a tool vs managing a team member
The table below is the shift in practice. Same AI coworker, different management style, very different outcome at day 90.
| Management moment | "AI coworker is a tool" frame | "AI coworker is a team member" frame |
|---|---|---|
| Day 1 setup | Install and turn on | Install, connect tools, teach tone, set scope with one owner |
| First wrong output | File a support ticket | Review the correction, adjust the prompt or scope |
| Week 2 | Evaluate "is it working?" | Run a weekly review with the owner, expand if going well |
| Month 1 | Consider replacing it | Review the trend of corrections, formally expand scope |
| New workflow need | Buy another tool | Extend the AI coworker's scope with a new prompt and training |
| Team member pushback | Escalate to IT | Have the owner walk the team member through how the AI coworker got it right |
The team-member frame is not fluffy. It is the cheaper one at month 3, because the scope compounds and the corrections stop repeating.
A concrete example: week 3 scope expansion at a 40-person team
To make this less abstract, here is a composite of how scope actually expands at the teams we work with.
Week 1-2 scope: Monday growth digest from Stripe and Google Ads, daily support queue summary from Pylon, Friday engineering status from Linear and GitHub.
By the end of week 2, the ops lead (the owner) has run two weekly reviews. Corrections have slowed to one or two per workflow. The team is asking for specific additions. Time to expand.
The week-3 prompt to expand one workflow:
```
@Viktor starting Monday the weekly growth digest expands.
Keep the Stripe MRR and Google Ads sections as-is. Add:
(1) a Meta Ads section with top 3 campaigns by CPA,
(2) a HubSpot section with deals over $10K that moved stage this week,
(3) a PostHog section flagging any product event that dropped
more than 20% week over week.
Run the first expanded version in the #growth channel
at 9 AM Monday. Flag the new sections so the team knows what is new.
```
One prompt. One review in Slack. The ops lead reads the first expanded output on Monday, accepts with two line edits, and the new digest ships. No rebuild. No new integration setup. The scope expansion took 5 minutes of management time.
This is what management looks like when it works. The first two weeks feel like effort. By week 3, each expansion takes minutes because the AI coworker already knows the team, the tone, and the tools.
Expanding scope the way you would expand a junior hire
When a junior hire does a good job on their starter task, you give them a harder one. Same idea.
The scope ladder we have seen work, in order:
- Recurring reports (weekly, monthly) that pull from clean data sources
- Inbox triage and drafting replies in your tone
- CRM and backlog hygiene (updating HubSpot or Linear from signals)
- Research briefs before calls
- Prep work for recurring meetings (board meetings, 1:1s, pipeline reviews)
- Cross-tool reconciliation (finance close, expense audit, ticket-to-PR sync)
- Customer-facing work with human approval (support drafts, outbound email drafts)
- Customer-facing work without human approval (only for the specific workflows where the team has earned trust over months)
Most teams land between steps 3 and 5 by day 90. The teams that push past step 6 are the ones with an active owner and a strong weekly review.
The trust model
Every step up the scope ladder is a trust decision. A team member you trust is one whose judgment you have seen repeatedly. The same applies to an AI coworker.
The practical tools to build trust:
- Review-first defaults. Until the team has seen a workflow run correctly 20 times, every output is human-approved. This is where trust is actually earned.
- An audit trail the team can read. Every action is visible in Slack. No hidden work.
- A kill switch. One Slack command pauses the AI coworker across every workflow.
- A named owner who reads the weekly review and adjusts.
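The review-first default above is simple enough to state as logic: a workflow needs human approval until it has run clean past a threshold, and a correction resets the streak. This is a sketch under those assumptions; the 20-run threshold and the reset-on-correction policy are the article's rule of thumb, not a product setting.

```python
class ApprovalGate:
    """Sketch of a review-first default: a workflow keeps requiring human
    approval until it has run uncorrected `threshold` times in a row.
    Threshold and reset policy are illustrative assumptions."""

    def __init__(self, threshold: int = 20):
        self.threshold = threshold
        self.clean_runs: dict[str, int] = {}

    def record(self, workflow: str, corrected: bool) -> None:
        if corrected:
            self.clean_runs[workflow] = 0  # a correction resets the streak
        else:
            self.clean_runs[workflow] = self.clean_runs.get(workflow, 0) + 1

    def needs_approval(self, workflow: str) -> bool:
        # Unknown workflows have zero clean runs, so they always need review.
        return self.clean_runs.get(workflow, 0) < self.threshold
```

Whether the reset should be full or partial is a judgment call for the owner; the structure is the point, not the numbers.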
Anthropic, OpenAI, and independent research groups all converge on the same principle in their agent deployment guidance: the systems that work in production are the ones that keep a human in the evaluation loop. The question is not whether to have that loop. The question is who owns it.
What to do when it is not working at month 3
Not every AI coworker deployment goes well. If you hit month 3 and your team is not getting value, the diagnostic is the same three-step check you would run on a struggling hire.
- Is the scope too broad? Most teams try to run 20 workflows at once and drift on every one. Cut to 5 and get them clean.
- Is the owner actually owning it? If the weekly review has not happened in a month, that is the reason. Assign a new owner or reinstate the ritual.
- Is the team using the output? If the Monday report is going into a channel nobody reads, the problem is not the AI coworker. Kill unused work and redirect the AI coworker's time.
Most of the time, the fix is one of those three. The AI coworker itself is almost never the blocker at month 3. Management is.
For teams still early in the process, our 8-question checklist before buying an AI agent covers the evaluation questions to ask before you install. And if you want the workflow inventory to start with, 12 tasks we killed in 30 days is the concrete menu.
Frequently Asked Questions
Is an AI coworker really a team member, or is this just framing? It is framing that produces different behavior. The teams that use the framing end up investing in onboarding, review, and scope expansion. Those investments produce measurably better outcomes at day 90.
Who should own the AI coworker on my team? Ideally an operations lead, chief of staff, or technical project manager. Someone who already thinks in workflows and has time for a weekly review. Founders can own it early, but should hand it off once the team grows past 10.
How much time should the owner spend managing an AI coworker? Week one: about 3-5 hours on setup. Week two: 1-2 hours. Steady state: 30 minutes to an hour per week on the review and scope adjustments.
What does the weekly review look like in practice? The owner reads a Viktor-drafted summary of the week's work, scans the corrections, and decides one thing to expand, one thing to adjust, and one thing to kill. Ten to fifteen minutes.
When should I expand an AI coworker's scope? When a workflow has run clean for 2-3 weeks with no new corrections. Extend in the direction of the existing work (a new report, a new variant of inbox triage) rather than jumping to a different category.
What is the biggest management mistake teams make? Not assigning an owner. The AI coworker drifts because nobody is reading its work or adjusting the scope. Six months in, the team gives up without ever having actually managed the deployment.
Does managing an AI coworker require technical skills? No. The owner needs to write a clear prompt and review work, not write code. Most successful owners are ops leads and chiefs of staff, not engineers.
Viktor is an AI coworker that lives in Slack, connects to 3,000+ integrations, and does real work for your team. Add Viktor to your workspace, free to start.