How to Prompt an AI Coworker (Stop Using Your ChatGPT Habits)
Key Takeaways
- ChatGPT habits do not transfer. ChatGPT knows nothing about your team, your tools, or your channel conventions. An AI coworker like Viktor already does, and the prompt style changes because of it.
- The three rules that matter. Name the channel the output should land in. Name the human who approves. Name the tools the agent should use.
- Short prompts beat long prompts. With an AI coworker, the context is already in the thread. You do not need to re-explain what HubSpot is or who Lena is.
- Include the stop condition. Tell the agent when to stop and wait for approval. "Draft the email, do not send" is a better prompt than "send an email."
- If you have to re-prompt, the first prompt was underspecified. Write the clarifying question into the prompt the first time.
Why ChatGPT habits break when you move to an AI coworker
ChatGPT is a blank surface. You open a browser tab, and whatever the agent knows, you typed in yourself. That shape produced a specific prompting style: long setup, role-play ("You are a senior marketer..."), explicit formatting rules, and a lot of output-steering at the end.
An AI coworker like Viktor is the opposite. It lives inside Slack, reads the channel description, the pinned docs, the last 30 messages, and knows who the humans in the thread are. When Lena drops a request, Viktor already knows that Lena is the controller, that "our finance channel" means the one she is posting in, and that "the Monday report" is the one she ships every Monday at 9.
The prompting style has to change because the context does. A ChatGPT prompt that starts "You are a world-class finance analyst..." wastes space on a stage the AI coworker does not need. The better prompt is the one-sentence version that names the channel, the approver, and the tools.
Anthropic's December 2024 guide on building effective AI agents makes the same point: agents work best when the prompt names the environment they are already in, not when the prompt tries to recreate a neutral one.
The three rules that actually matter
After a year of watching teams @mention Viktor in real work, the prompts that consistently produce useful output share three traits.
Rule 1: Name the channel the output should land in
An AI coworker is going to produce something: a draft email, a Slack report, a Linear ticket, a Notion page, a Viktor Space. Name where it lands; otherwise the agent picks, and it will pick wrong.
- Weaker: "Write a weekly revenue report."
- Stronger: "Write a weekly revenue report and post it as a message in #growth."
- Stronger still: "Post it as a threaded reply to this Monday's report thread in #growth."
Same request, three very different outputs. The first produces a PDF you never share. The third produces a threaded update your team already knows where to find.
Rule 2: Name the human who approves
AI coworker work is review-first. The prompt that names the approver is the one where review goes smoothly.
- Weaker: "Draft a reply to this customer."
- Stronger: "Draft a reply to this customer and ping @Lena for approval before sending."
The second prompt tells Viktor: produce the draft, stop, wait. The first prompt leaves the stop condition implicit, and when Viktor defaults to review-first, the team sometimes reads the pause as broken.
Rule 3: Name the tools the agent should use
Viktor connects to 3,000+ integrations. That is a feature and a trap. If your team pays for HubSpot and Pipedrive, Viktor needs to know which CRM holds the truth for this request. If you say "pull the pipeline," Viktor might read the wrong one.
- Weaker: "Pull our pipeline for Q2."
- Stronger: "Pull the pipeline from HubSpot (not Pipedrive, that is legacy). Stage = decision-maker meeting, date range = Q2."
The second prompt cuts the ambiguity in one line. No clarifying question needed, no wrong-data draft sent to the team.
A working example
Here is a full prompt from our finance channel, annotated:
@Viktor pull the Stripe payout reconciliation against NetSuite for
April. Flag variances over $50. Post the exception list as a threaded
reply to this Monday's close thread in #finance-ops, and ping @Lena
for sign-off on each proposed journal entry. Do not post any journal
entries to NetSuite without her approval.
The prompt does five things in roughly 50 words:
- Names the source tools (Stripe, NetSuite)
- Names the period (April)
- Sets the threshold (variances over $50)
- Names the output channel and thread (#finance-ops, this Monday's thread)
- Names the approver and the stop condition (@Lena, do not post without approval)
The result is a draft the controller reviews in 8 minutes. Compare that to a ChatGPT prompt for the same work:
You are a senior finance analyst. I need you to reconcile my Stripe
payouts against NetSuite for April. Please format the output as a
table. Be thorough. Double-check your work. Do not make up any
numbers. Output should be in markdown.
ChatGPT cannot actually touch Stripe or NetSuite. The prompt produces a template, not a reconciliation. Roughly the same length, very different outcome.
Why prompting gets shorter, not longer
The working pattern teams settle into: two sentences, three clauses each. The first sentence names the work and the tools. The second sentence names the channel and the approver.
| Field | What belongs here |
|---|---|
| Work | The verb and the artifact ("reconcile payouts," "draft the reply," "open the ticket") |
| Tools | Named integrations ("from Stripe," "in HubSpot," "using the Notion runbook") |
| Scope | The window or filter ("for April," "since Monday," "for enterprise accounts") |
| Output | Channel, thread, or file ("post in #growth," "draft in this thread," "save as a Notion page") |
| Approver | Named human who signs off ("ping @Lena," "wait for @Kris to approve") |
| Stop condition | The explicit pause ("do not send," "do not post until approved") |
If the prompt has all six, the draft is usually close enough to ship with one round of edits. If the prompt is missing any of them, Viktor either asks a clarifying question or proposes a draft that needs more rework than it should.
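Here is what the two-sentence shape looks like with all six fields filled in. The specifics (the workflow, the pinned doc, the exact names) are illustrative, not a real prompt from a customer channel:
@Viktor draft Gmail follow-ups for the enterprise accounts that went
quiet since Monday, using the pricing one-pager pinned in this channel.
Post the drafts as a threaded reply in #sales and ping @Kris to approve
each one before anything sends.
Work, tools, scope in the first sentence; output, approver, stop condition in the second.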
The anti-patterns we see most often
Teams new to an AI coworker tend to make the same three mistakes for the first few weeks.
Anti-pattern 1: Role-play preamble. "You are a senior sales engineer who specializes in enterprise deals..." wastes tokens and produces a more generic draft, not a more specific one. Viktor is already in the sales channel. It knows the role. (We wrote about this in the broader context of what AI agents still cannot do.)
Anti-pattern 2: Over-formatting instructions. "Output as a markdown table with 4 columns and 10 rows and bold the first column..." is a ChatGPT habit. The better version is "post it as a Slack message in #growth." Viktor picks the format that matches the channel.
Anti-pattern 3: Missing the stop condition. "Send the email" vs "Draft the email, do not send." The first prompt produces an awkward pause where Viktor drafts and waits anyway (review-first is on). The second prompt aligns the behavior with the phrasing.
We wrote about the broader version of this in why an AI agent that acts without asking is a liability. The prompt is the place to set the pause explicitly.
When a longer prompt is actually the right move
Short prompts are the default. Two cases justify going longer.
Case 1: A new workflow with no precedent. The first time your team asks Viktor to do a specific workflow, spend 10 minutes writing a prompt that covers the six fields in the table above. Save it as a Slack canvas or a pinned message. The next person to run the workflow reuses it.
Case 2: A workflow that touches production systems with real blast radius. Billing changes, outbound customer emails to large accounts, anything that writes to a system outside your own team's tools. Longer prompts are worth it here, and the stop condition should be very explicit.
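As a sketch of what that looks like (the account, channel, and amounts are illustrative):
@Viktor prepare the proration credit for Acme's plan downgrade in Stripe.
Pull the current subscription and the last three invoices, show the math
behind the credit amount, and post the proposal as a threaded reply in
#billing-ops. Do not create the credit note or modify the subscription in
Stripe until @Lena approves the exact amount in this thread.
The extra length buys an unambiguous stop condition on a system with real blast radius.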
For everything else, two sentences is the target.
How Viktor handles an underspecified prompt
If you leave a field ambiguous, Viktor does not guess and send. It asks a clarifying question in the thread.
@Viktor draft the customer follow-up.
Got it. Two clarifications before I draft:
1. Which customer? There are 3 active threads in #sales tagged for
follow-up this week.
2. Which Gmail account should this send from (yours or Lena's)?
Once you confirm, I will draft the reply and wait for you to approve
before sending.
The first prompt should name the customer and the sender. If it does not, Viktor asks. The agent never sends a wrong-customer email from the wrong sender.
This is the default, and it is deliberate. Gartner's 2024 generative AI forecast estimated that 30%+ of generative AI projects would be abandoned after proof-of-concept by the end of 2025, and the common failure mode is an agent that guessed when it should have paused. The prompt that prevents the guess is the prompt that ships the right work.
Where this still breaks
A clean prompt does not save you from every failure mode. Two edges are worth naming.
Stale context. If your Slack channel description says "finance ops, contact @Lena" but Lena moved to another team two months ago, Viktor reads the stale description. Keep channel docs current. The AI coworker is only as accurate as the team's own documentation.
Cross-team work. A prompt that spans three teams ("pull the marketing data, run it by sales, then post it in finance") tends to produce a draft with three different voices. Either split the prompt into three, or accept that the draft needs a heavier review pass.
Frequently Asked Questions
Do I need to write long prompts like I would for ChatGPT? No. Viktor already has the context ChatGPT does not (channel, pinned docs, last 30 messages). Two sentences is the target.
What fields should I include in every prompt? Work, tools, scope, output channel, approver, stop condition. Six fields, usually two sentences.
Does Viktor ask clarifying questions if the prompt is underspecified? Yes, by default. If any of the six fields is ambiguous, Viktor asks in the thread before drafting.
Can I save a good prompt as a template? Yes. Most teams save the high-value prompts as Slack canvases or pinned messages in the channel they live in, so the next person running the workflow reuses the exact text.
Should I ever use role-play ("you are a senior...")? Rarely. Viktor reads the channel, the pinned docs, and the thread. The role is already clear. Role-play preamble wastes tokens and produces a more generic draft.
What is the right stop condition for a production system? "Draft the action, do not execute, ping @{approver} for sign-off." If the action is a Stripe refund, a HubSpot write, or an outbound email to a named customer, the stop condition is not optional.
Where should I start if my team has never prompted an AI coworker? Pick one workflow, write the six-field prompt once, save it to the channel, and let the team reuse it for two weeks. The patterns that repeat are the prompts worth refining.
Viktor is an AI coworker that lives in Slack, connects to 3,000+ integrations, and does real work for your team. Add Viktor to your workspace.