LLMs are only as useful as the prompts that guide them. In revenue operations, the difference between a mediocre prompt and an excellent one can mean the difference between generic automation and genuinely intelligent workflows.
This guide covers practical prompt engineering for common RevOps use cases—lead qualification, account research, message generation, and data enrichment.
Prompt Engineering Fundamentals
Before diving into specific use cases, let’s establish core principles:
Principle 1: Be Specific About Role and Context
LLMs perform better when they understand the role they're playing and the task at hand:
Weak:
Summarize this company.
Strong:
You are a B2B sales research analyst preparing
account briefs for enterprise software sales reps.
Summarize this company focusing on:
- Business model and target customers
- Recent news that might create buying triggers
- Likely pain points related to revenue operations
- Key stakeholders in sales/marketing leadership
Principle 2: Provide Examples (Few-Shot Learning)
Show the output format you want:
Weak:
Score this lead.
Strong:
Score this lead on a scale of 1-100 based on ICP fit
and buying signals.
Example Output:
Score: 78/100
ICP Fit: 85/100 - Strong match on company size and
industry, slight mismatch on tech stack complexity
Buying Signals: 71/100 - Recent VP Sales hire suggests
GTM investment, but no direct intent signals detected
Summary: High-potential lead, prioritize for outreach
Next Best Action: SDR call within 48 hours
Now score this lead:
[Lead data]
Principle 3: Constrain Output Format
Structured outputs are easier to process downstream:
Return your analysis as JSON with this schema:
{
"score": number (1-100),
"fit_score": number (1-100),
"intent_score": number (1-100),
"summary": string (max 100 words),
"recommended_action": string,
"confidence": "high" | "medium" | "low"
}
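Downstream code should enforce that schema before trusting the response. A minimal sketch in Python — the field names and ranges come from the schema above, but the validator itself is illustrative; how you retry on failure is up to your workflow:

```python
import json

# Field names mirror the prompt's schema above.
REQUIRED = {"score", "fit_score", "intent_score", "summary",
            "recommended_action", "confidence"}

def validate_lead_analysis(raw: str) -> dict:
    """Parse and sanity-check an LLM's JSON reply; raise ValueError so
    the workflow can retry with a corrective prompt."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    missing = REQUIRED - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    for field in ("score", "fit_score", "intent_score"):
        if not 1 <= data[field] <= 100:
            raise ValueError(f"{field} out of range: {data[field]}")
    if data["confidence"] not in {"high", "medium", "low"}:
        raise ValueError(f"bad confidence: {data['confidence']}")
    return data
```

Rejecting and retrying on format errors is usually cheaper than trying to repair a malformed response in code.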
Principle 4: Include Guardrails
Prevent hallucination and off-topic responses:
Important guidelines:
- Only use information provided in the context
- If information is missing, say "Unknown" rather than guessing
- Do not make up company details or statistics
- Flag any claims you're uncertain about
Use Case 1: Lead Qualification Prompts
Basic Qualification Prompt
You are a B2B lead qualification specialist for a
revenue operations platform.
ICP Definition:
- Company Size: 50-500 employees
- Industries: B2B SaaS, FinTech, E-commerce
- Funding: Seed to Series C
- Tech Stack: Must use a CRM (Salesforce, HubSpot, etc.)
- GTM Motion: Has dedicated sales team
Qualification Criteria:
1. ICP Fit (0-40 points): How well does the company
match our ideal customer profile?
2. Timing Signals (0-30 points): Are there indicators
of active evaluation or need?
3. Engagement (0-20 points): How has the lead engaged
with our content/product?
4. Stakeholder (0-10 points): Is this person a
decision-maker or influencer?
Lead Information:
[Insert lead data]
Provide your qualification as JSON:
{
"total_score": number,
"icp_fit": {"score": number, "reasoning": string},
"timing": {"score": number, "reasoning": string},
"engagement": {"score": number, "reasoning": string},
"stakeholder": {"score": number, "reasoning": string},
"qualification": "Qualified" | "Nurture" | "Disqualify",
"recommended_action": string
}
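Because the components carry explicit point caps, a quick consistency check catches responses whose arithmetic doesn't add up. A hypothetical sketch against the rubric above:

```python
# Caps mirror the rubric: ICP fit 0-40, timing 0-30,
# engagement 0-20, stakeholder 0-10.
CAPS = {"icp_fit": 40, "timing": 30, "engagement": 20, "stakeholder": 10}

def check_qualification(result: dict) -> list[str]:
    """Return a list of rubric violations; an empty list means the
    component scores are in range and sum to the reported total."""
    problems = []
    component_sum = 0
    for name, cap in CAPS.items():
        score = result[name]["score"]
        component_sum += score
        if not 0 <= score <= cap:
            problems.append(f"{name} score {score} outside 0-{cap}")
    if result["total_score"] != component_sum:
        problems.append(
            f"total {result['total_score']} != component sum {component_sum}")
    return problems
```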
Advanced Qualification with Reasoning
You are evaluating a lead for sales follow-up.
Think through this step-by-step.
Step 1: Assess Company Fit
- Does this company match our ICP on firmographics?
- Are there any disqualifying factors?
Step 2: Evaluate Timing
- What signals suggest they might be in a buying cycle?
- What signals suggest they're not ready?
Step 3: Analyze Engagement
- What actions has this lead taken?
- What do these actions suggest about intent?
Step 4: Determine Stakeholder Fit
- Can this person influence or make purchasing decisions?
- Should we also target other stakeholders?
Step 5: Synthesize
- Weigh the factors and provide a final recommendation
Lead Data:
[Insert comprehensive lead data]
Provide your step-by-step analysis, then conclude with:
QUALIFICATION: [Qualified/Nurture/Disqualify]
CONFIDENCE: [High/Medium/Low]
ACTION: [Specific recommended next step]
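Ending a free-form analysis with a fixed, labeled footer makes it easy to extract the decision programmatically. A sketch of that extraction — it returns None for any label the model omitted rather than failing:

```python
import re

def parse_conclusion(text: str) -> dict:
    """Pull the QUALIFICATION / CONFIDENCE / ACTION lines out of a
    free-form step-by-step analysis."""
    fields = {}
    for label in ("QUALIFICATION", "CONFIDENCE", "ACTION"):
        m = re.search(rf"^{label}:\s*(.+)$", text, re.MULTILINE)
        fields[label.lower()] = m.group(1).strip() if m else None
    return fields
```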
Use Case 2: Account Research Prompts
Company Research Synthesis
You are a sales intelligence analyst preparing an
account brief for an enterprise sales rep.
Research Sources Provided:
- Company website content
- Recent press releases
- LinkedIn company page
- News articles from past 6 months
Create an account brief covering:
1. COMPANY OVERVIEW (2-3 sentences)
Business model, target market, and competitive position
2. RECENT DEVELOPMENTS (bullet points)
- Funding, M&A, or financial news
- Leadership changes
- Product launches or pivots
- Partnerships or integrations
3. POTENTIAL PAIN POINTS
Based on company stage and recent news, what challenges
might they face related to:
- Sales efficiency and pipeline
- Marketing and demand generation
- Revenue operations and data
4. BUYING TRIGGERS
Events that might create urgency for our solution
5. KEY STAKEHOLDERS
Likely decision-makers and influencers for revenue
operations tools
6. RECOMMENDED APPROACH
Suggested angle and messaging for initial outreach
Research Data:
[Insert scraped content]
Format as a clean, scannable brief that a sales rep
can review in 2 minutes.
Competitive Intelligence Prompt
You are a competitive intelligence analyst. Based on
the provided information, assess this account's
competitive landscape.
Current Stack (from enrichment data):
[List of technologies]
Questions to answer:
1. What relevant competitors do they currently use?
2. How entrenched is their current solution?
3. What displacement opportunities exist?
4. What integration requirements would they have?
5. What competitive positioning should we use?
Provide analysis in this format:
COMPETITIVE STATUS:
- Current Solution: [Tool name or "None detected"]
- Entrenchment Level: High/Medium/Low
- Displacement Difficulty: Hard/Medium/Easy
OPPORTUNITY ASSESSMENT:
[2-3 sentences on competitive dynamics]
RECOMMENDED POSITIONING:
[Specific angle to use against current/likely solution]
INTEGRATION REQUIREMENTS:
[List of must-have integrations based on their stack]
Use Case 3: Message Generation Prompts
Cold Email Generation
You are a sales copywriter creating personalized cold
emails for B2B SaaS outreach.
Brand Voice Guidelines:
- Conversational and direct, not salesy
- Lead with insight, not pitch
- Short paragraphs (1-2 sentences each)
- Total length under 100 words
- One clear, low-friction CTA
Email Framework:
1. Hook: Reference something specific and recent about
their company/role
2. Problem: Briefly articulate a challenge they likely face
3. Credibility: One proof point (similar customer, result)
4. CTA: Specific ask with clear next step
Personalization Data:
[Company research, contact info, trigger events]
Generate 3 email variations:
- Variation A: Lead with company trigger
- Variation B: Lead with role-specific pain point
- Variation C: Lead with industry trend
For each, provide:
- Subject line (under 50 characters)
- Email body
- Why this approach might resonate
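Hard constraints like these are cheap to verify in code rather than trusting the model to count. A hypothetical checker using the limits stated in the guidelines above (subject under 50 characters, body under 100 words):

```python
def check_email(subject: str, body: str) -> list[str]:
    """Flag violations of the brand guidelines; thresholds come
    straight from the prompt's constraints."""
    problems = []
    if len(subject) >= 50:
        problems.append(f"subject is {len(subject)} chars (limit 50)")
    words = len(body.split())
    if words >= 100:
        problems.append(f"body is {words} words (limit 100)")
    return problems
```

Running a check like this before sending (or regenerating on failure) keeps guideline enforcement deterministic.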
Follow-Up Email Prompt
You are writing a follow-up email after no response
to initial outreach.
Context:
- Original email sent [X days ago]
- Subject: [Original subject]
- Key point: [What the first email emphasized]
- No open/click data available
Follow-up Guidelines:
- Don't repeat the first email
- Add new value (insight, resource, question)
- Keep it even shorter than the original (under 75 words)
- Make it easy to respond
Generate a follow-up that:
1. Acknowledges they're busy (briefly)
2. Provides a new angle or piece of value
3. Asks a specific question OR offers specific help
4. Has a clear CTA
Output format:
Subject: [New subject line]
Body: [Email text]
Approach: [Why this follow-up angle]
LinkedIn Connection Request
You are writing LinkedIn connection requests for
B2B sales outreach.
Constraints:
- Maximum 300 characters (LinkedIn limit)
- No sales pitch
- Personal but professional
- Give a reason to connect
Personalization Data:
[Contact info, company, recent activity, mutual connections]
Generate 3 connection request variations:
1. Shared interest/background approach
2. Compliment on their content/work approach
3. Direct but non-salesy professional approach
For each, provide:
- Connection request text (under 300 chars)
- What makes this personalized
Use Case 4: Data Enrichment Prompts
Extracting Structured Data from Unstructured Text
You are a data extraction specialist. Extract
structured information from the provided text.
Source Text:
[Company "About" page, press release, or other content]
Extract the following fields (return "Not found" if
information is not available):
{
"company_description": string (1-2 sentences),
"business_model": "B2B" | "B2C" | "B2B2C" | "Marketplace",
"target_market": string,
"founding_year": number | "Not found",
"headquarters": string,
"employee_count_estimate": string (range like "50-100"),
"funding_stage": string | "Not found",
"key_products": [string],
"industries_served": [string],
"notable_customers": [string] | "Not found"
}
Important:
- Only extract information explicitly stated in the text
- Do not infer or guess missing information
- Use "Not found" for any field without clear evidence
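A small normalization pass can convert the "Not found" sentinel into proper nulls before the record is written anywhere. A sketch — the sentinel string matches the prompt above, but the decision to also null out empty lists is an assumption, not Cargo behavior:

```python
NOT_FOUND = "Not found"

def normalize_record(record: dict) -> dict:
    """Map the prompt's "Not found" sentinel (and, by assumption,
    empty lists) to None so downstream enrichment steps can rely on
    ordinary null checks."""
    cleaned = {}
    for key, value in record.items():
        cleaned[key] = None if value == NOT_FOUND or value == [] else value
    return cleaned
```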
ICP Classification
You are classifying companies against an Ideal
Customer Profile.
ICP Definition:
[Detailed ICP criteria]
Company Data:
[All available firmographic and technographic data]
Classify this company:
1. PRIMARY CLASSIFICATION
- Tier 1 (Ideal): Matches all must-have criteria
- Tier 2 (Good Fit): Matches most criteria with
minor gaps
- Tier 3 (Possible): Some fit but significant gaps
- Tier 4 (Poor Fit): Does not match ICP
2. CRITERIA MATCH
For each ICP criterion, assess:
- Criterion: [Name]
- Match: Yes/No/Partial/Unknown
- Evidence: [What data supports this]
3. GAPS AND CONCERNS
What criteria are not met? What information is missing?
4. RECOMMENDATION
Should this account be targeted? Why or why not?
Return structured assessment.
Prompt Templates in Cargo
Cargo’s workflow engine supports reusable prompt templates:
Template Variables
You are scoring a lead for {{company_name}}.
Company Details:
- Industry: {{industry}}
- Size: {{employee_count}} employees
- Funding: {{funding_stage}}
Contact Details:
- Name: {{contact_name}}
- Title: {{job_title}}
- Department: {{department}}
Recent Activity:
{{#each activities}}
- {{this.type}}: {{this.description}} ({{this.date}})
{{/each}}
[Rest of prompt...]
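The substitution behavior can be sketched in a few lines of Python. This is an illustrative mini-renderer, not Cargo's actual template engine — it handles only plain {{var}} substitution and a simple {{#each}} block:

```python
import re

def render(template: str, ctx: dict) -> str:
    """Minimal Handlebars-style rendering: {{var}} lookups from ctx,
    and {{#each name}}...{{/each}} blocks where {{this.field}}
    resolves against each item in ctx[name]."""
    def expand_each(match):
        name, body = match.group(1), match.group(2)
        rendered = []
        for item in ctx.get(name, []):
            rendered.append(re.sub(
                r"\{\{this\.(\w+)\}\}",
                lambda m: str(item.get(m.group(1), "")),
                body))
        return "".join(rendered)

    # Expand {{#each}} blocks first, then plain {{var}} placeholders.
    out = re.sub(r"\{\{#each (\w+)\}\}\n?(.*?)\{\{/each\}\}", expand_each,
                 template, flags=re.DOTALL)
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(ctx.get(m.group(1), "")), out)
```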
Conditional Prompt Sections
{{#if has_intent_signals}}
Intent Signals Detected:
{{#each intent_signals}}
- {{this.topic}}: {{this.strength}} strength
{{/each}}
{{else}}
No intent signals detected for this account.
{{/if}}
Prompt Chaining
Use output from one prompt as input to another:
Workflow:
1. Research Prompt → Account Brief
2. Scoring Prompt (uses Account Brief) → Lead Score
3. Message Prompt (uses Score + Brief) → Personalized Email
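The chain above is ordinary function composition: each step's output is interpolated into the next prompt. Here `call_llm` is a hypothetical stand-in for whatever LLM client the workflow uses:

```python
def run_chain(account_data: str, call_llm) -> str:
    """Research -> score -> message, where call_llm is any function
    mapping a prompt string to a completion string (a hypothetical
    stand-in for your LLM client)."""
    brief = call_llm("Research this account:\n" + account_data)
    score = call_llm("Score this lead using this brief:\n" + brief)
    email = call_llm("Write a personalized email.\nBrief:\n" + brief
                     + "\nScore:\n" + score)
    return email
```

Keeping each step's prompt small and focused tends to be more reliable than one giant prompt that researches, scores, and writes in a single call.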
Measuring Prompt Effectiveness
Track these metrics for your prompts:
Quality Metrics
- Human approval rate on generated content
- Accuracy of extracted/classified data
- Downstream conversion rates
Efficiency Metrics
- Token usage per task
- Latency per prompt execution
- Retry rates due to format errors
Consistency Metrics
- Variance in outputs for similar inputs
- Format compliance rate
- Edge case handling
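Given per-run logs, these roll up into a handful of aggregates. A sketch with hypothetical log fields — your logging schema will differ:

```python
def prompt_metrics(runs: list[dict]) -> dict:
    """Aggregate per-run logs into summary metrics. Each run dict is a
    hypothetical log entry with 'approved', 'tokens', 'latency_ms',
    and 'format_ok' keys."""
    n = len(runs)
    return {
        "approval_rate": sum(r["approved"] for r in runs) / n,
        "avg_tokens": sum(r["tokens"] for r in runs) / n,
        "avg_latency_ms": sum(r["latency_ms"] for r in runs) / n,
        "format_compliance": sum(r["format_ok"] for r in runs) / n,
    }
```

Tracking these per prompt version makes it obvious whether a prompt change actually improved things.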
Prompt Optimization Tips
- Start simple, add complexity: Begin with basic prompts and iterate based on output quality
- Test with diverse inputs: Ensure prompts handle edge cases and unusual data
- Version control prompts: Track changes and their impact on output quality
- A/B test variations: Compare prompt approaches on real data
- Collect feedback: Build loops for users to flag bad outputs
Effective prompt engineering is a skill that compounds. The teams that invest in optimizing their prompts will see dramatically better results from their LLM-powered workflows.
Ready to build LLM-powered revenue workflows? Cargo’s platform makes it easy to design, test, and deploy prompts at scale.
Key Takeaways
- Four core principles for effective prompts: be specific about role/context, provide examples (few-shot learning), constrain output format, and include guardrails against hallucination
- Structured outputs (JSON, defined schemas) make prompts more reliable and easier to process downstream
- Four primary RevOps use cases: lead qualification, account research, message generation, and data enrichment—each requires different prompt strategies
- Prompt templates with variables enable personalized outputs at scale without rewriting prompts for each record
- Measure prompt effectiveness through quality metrics (approval rate, accuracy), efficiency metrics (tokens, latency), and consistency metrics (variance, compliance)