Sales teams operate in real time. A hot lead visits your pricing page right now. A key account shows intent signals today. A deal moves forward this moment. But most data infrastructure operates in batch: nightly syncs, daily updates, weekly reports.
This gap between data availability and sales action costs opportunities. Real-time data pipelines close that gap.
The Real-Time Imperative #
Why Timing Matters
| Signal | Value at Real-Time | Value at +24 Hours |
|---|---|---|
| Pricing page visit | 21x more likely to convert | Competitor may have won |
| Intent spike | Perfect outreach timing | Window may have closed |
| Champion job change | Day-one outreach | Too late, already contacted |
| Product usage surge | Expansion opportunity | Moment passed |
Batch vs. Real-Time
| Aspect | Batch | Real-Time |
|---|---|---|
| Latency | Hours to days | Seconds to minutes |
| Use case | Analytics, reporting | Alerts, actions |
| Complexity | Lower | Higher |
| Cost | Lower | Higher (but worth it) |
Real-Time Pipeline Architecture #
Architecture Overview
```mermaid
flowchart TB
subgraph Sources[EVENT SOURCES]
Website
Product
CRM1[CRM]
Email
Intent
Enrichment
end
subgraph Streaming[EVENT STREAMING]
Stream[Kafka, Kinesis, Pub/Sub, Segment]
end
Sources --> Streaming
Streaming --> RealTime[Real-Time Processing<br/>Flink]
Streaming --> Warehouse[Data Warehouse<br/>Snowflake]
Streaming --> Operational[Operational Systems<br/>CRM]
RealTime --> Actions[ACTIONS<br/>Alerts, Routing, Personalization]
Operational --> Actions
```
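To make the fan-out concrete, here is a minimal sketch of the streaming layer's consumer side in Python, assuming Kafka via the kafka-python client; the topic name and the three sink functions are illustrative stand-ins for a warehouse loader, a stream processor, and a CRM sync:

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Illustrative sinks standing in for a warehouse loader, a stream
# processor, and a CRM API client.
def load_to_warehouse(event: dict) -> None: ...
def process_realtime(event: dict) -> None: ...
def sync_to_crm(event: dict) -> None: ...

consumer = KafkaConsumer(
    "sales-events",                       # assumed topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# Each event fans out to the analytics, real-time, and operational paths.
for message in consumer:
    event = message.value
    load_to_warehouse(event)   # batch-friendly analytics path
    process_realtime(event)    # latency-sensitive alerting/scoring path
    sync_to_crm(event)         # operational path
```

In production each path would typically run as its own consumer group, so a slow warehouse load can never delay an alert.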
Component Deep Dive
Event Sources
Everything generating sales-relevant data.
| Source | Event Types | Latency Requirement |
|---|---|---|
| Website | Page views, form fills | Real-time |
| Product | Signups, usage | Real-time |
| Email | Opens, clicks, replies | Near real-time |
| CRM | Status changes | Near real-time |
| Intent | Topic spikes | Hourly acceptable |
| Enrichment | Data updates | On-demand |
Event Streaming Layer
Collects and distributes events.
| Technology | Strength | Best For |
|---|---|---|
| Segment | Easy implementation | Most companies |
| Kafka | Scale, flexibility | High volume |
| Kinesis | AWS integration | AWS shops |
| Pub/Sub | GCP integration | GCP shops |
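With Segment as the streaming layer, emitting an event is a single call. A minimal sketch using Segment's Python library (segment-analytics-python); the write key, user ID, and properties are placeholders:

```python
import segment.analytics as analytics  # pip install segment-analytics-python

analytics.write_key = "YOUR_WRITE_KEY"  # placeholder

# One track call; Segment fans the event out to every connected
# destination (warehouse, CRM, alerting tools).
analytics.track(
    user_id="usr_123",
    event="Pricing Page Viewed",
    properties={"page_url": "/pricing", "time_on_page": 120},
)
analytics.flush()  # force delivery before a short-lived script exits
```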
Processing Layer
Transforms and acts on events.
| Approach | Use Case |
|---|---|
| Stream processing | Aggregation, scoring |
| Operational platform | Routing, actions |
| Direct delivery | Simple pass-through |
Key Real-Time Use Cases #
Use Case 1: Hot Lead Alerts
```
Event: Website visitor views pricing page

Pipeline:
1. Website tracks page view (Segment)
2. Event enriched with company (Clearbit)
3. Account identified and scored
4. If score > threshold AND in TAL:
   → Slack alert to assigned AE
   → CRM task created
   → Priority sequence triggered

Latency: < 5 minutes
```
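A condensed sketch of steps 3 and 4, assuming the event was already enriched upstream: the scoring helper, threshold, and target account list (TAL) are illustrative, while the Slack delivery uses the standard incoming-webhook POST:

```python
import requests  # pip install requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/..."  # placeholder
SCORE_THRESHOLD = 80                                        # assumed cutoff
TARGET_ACCOUNT_LIST = {"acme.com", "globex.com"}            # illustrative TAL

def score_account(account: dict) -> int:
    """Illustrative fit score; real scoring would apply your ICP criteria."""
    return 90 if account.get("employees", 0) > 100 else 40

def handle_pricing_page_view(event: dict) -> None:
    account = event["account"]  # enriched upstream (e.g., by Clearbit)
    score = score_account(account)
    if score > SCORE_THRESHOLD and account["domain"] in TARGET_ACCOUNT_LIST:
        # Slack incoming webhooks accept a simple JSON payload.
        requests.post(
            SLACK_WEBHOOK_URL,
            json={"text": f"Hot lead: {account['name']} viewed /pricing (score {score})"},
            timeout=5,
        )
        # CRM task creation and sequence enrollment would follow here.
```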
Use Case 2: Intent Signal Response
```
Event: Account intent score spikes

Pipeline:
1. Intent provider detects surge (Bombora)
2. Signal received via webhook/API
3. Account matched to CRM record
4. Combined with existing score
5. If total score > threshold:
   → Alert sales team
   → Update account tier
   → Trigger outreach sequence

Latency: < 1 hour
```
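Steps 2 through 5 typically start with a small webhook receiver. A sketch using Flask; the payload shape is an assumption (Bombora's real schema differs), and the CRM lookup and alert are stand-ins:

```python
from flask import Flask, request  # pip install flask

app = Flask(__name__)
INTENT_THRESHOLD = 70                                           # assumed cutoff
CRM_ACCOUNTS = {"acme.com": {"name": "Acme", "fit_score": 55}}  # stand-in for a CRM lookup

def alert_sales_team(account: dict, score: int) -> None:
    print(f"ALERT: {account['name']} combined score {score}")   # placeholder action

@app.post("/webhooks/intent")
def intent_webhook():
    payload = request.get_json()                    # assumed shape, not Bombora's schema
    account = CRM_ACCOUNTS.get(payload["domain"])   # step 3: match to CRM record
    if account is None:
        return {"status": "unmatched"}, 202         # park for later reconciliation
    total = account["fit_score"] + payload["surge_score"]  # step 4: combine scores
    if total > INTENT_THRESHOLD:                    # step 5: threshold check
        alert_sales_team(account, total)
    return {"status": "ok"}, 200
```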
Use Case 3: Product Engagement Routing
```
Event: User completes key activation

Pipeline:
1. Product event fired (Amplitude/Segment)
2. Event matched to account/contact
3. Engagement score updated
4. PQL criteria evaluated
5. If PQL threshold crossed:
   → Route to sales assist
   → Create opportunity
   → Send internal notification

Latency: < 15 minutes
```
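The product-qualified lead (PQL) check in steps 4 and 5 usually reduces to a few boolean criteria over the updated engagement profile. A sketch with invented thresholds:

```python
from dataclasses import dataclass

@dataclass
class EngagementProfile:
    seats_active: int
    key_features_used: int
    days_since_signup: int

def is_pql(profile: EngagementProfile) -> bool:
    """Illustrative PQL criteria; real thresholds come from your own data."""
    return (
        profile.seats_active >= 3
        and profile.key_features_used >= 2
        and profile.days_since_signup <= 14
    )

profile = EngagementProfile(seats_active=4, key_features_used=2, days_since_signup=6)
if is_pql(profile):
    print("Route to sales assist, create opportunity, notify team")
```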
Use Case 4: Deal Risk Detection
```
Event: No activity on opportunity for X days

Pipeline:
1. Daily scan of open opportunities
2. Calculate activity recency
3. Flag accounts with no recent activity
4. Cross-reference with engagement signals
5. If high value + stalled:
   → Alert manager
   → Suggest intervention
   → Update risk score

Latency: Daily (near real-time for high-value)
```
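Steps 1 through 3 are plain date arithmetic. A sketch with invented opportunity records, a 14-day staleness cutoff, and a $50k high-value threshold:

```python
from datetime import date, timedelta

STALL_DAYS = 14           # assumed staleness cutoff
HIGH_VALUE = 50_000       # assumed deal-size threshold

opportunities = [  # stand-in for a CRM query of open opportunities
    {"name": "Acme - Expansion", "amount": 80_000, "last_activity": date(2025, 1, 2)},
    {"name": "Globex - New", "amount": 12_000, "last_activity": date(2025, 1, 14)},
]

today = date(2025, 1, 20)
for opp in opportunities:
    idle = today - opp["last_activity"]
    if idle > timedelta(days=STALL_DAYS) and opp["amount"] >= HIGH_VALUE:
        print(f"RISK: {opp['name']} has had no activity for {idle.days} days; alert manager")
```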
Implementation Guide #
Step 1: Identify Critical Events
What events require real-time response?
| Event Category | Examples | Response Needed |
|---|---|---|
| High-intent website | Pricing, demo, contact | Immediate outreach |
| Product activation | Key feature used | Sales assist |
| Champion change | Job change detected | Quick outreach |
| Intent spike | Topic surge | Campaign adjustment |
| Deal signals | Stage change, stall | Process intervention |
Step 2: Design Event Schema
Standardize your event structure:
```json
{
  "event_id": "uuid",
  "event_type": "page_view",
  "timestamp": "2025-01-15T10:30:00Z",
  "source": "website",
  "user": {
    "anonymous_id": "...",
    "email": "...",
    "account_id": "..."
  },
  "context": {
    "page_url": "/pricing",
    "referrer": "google.com",
    "utm_source": "...",
    "ip": "...",
    "country": "US"
  },
  "properties": {
    "time_on_page": 120,
    "scroll_depth": 80
  }
}
```
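A shared schema only helps if producers conform to it, so it is worth validating events at the door. A minimal sketch that checks the required fields from the schema above:

```python
REQUIRED_TOP_LEVEL = {"event_id", "event_type", "timestamp", "source", "user"}

def validate_event(event: dict) -> list[str]:
    """Return a list of problems; an empty list means the event is acceptable."""
    problems = [f"missing field: {f}" for f in REQUIRED_TOP_LEVEL - event.keys()]
    user = event.get("user", {})
    if not (user.get("anonymous_id") or user.get("email")):
        problems.append("user needs anonymous_id or email for identity resolution")
    return problems

event = {"event_id": "e1", "event_type": "page_view", "source": "website",
         "user": {"email": "jane@acme.com"}}
print(validate_event(event))  # -> ['missing field: timestamp']
```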
Step 3: Build Collection Layer
Get events from all sources:
Website Events
- Segment/RudderStack tracking
- Custom event tracking
- Form submission hooks
Product Events
- Amplitude/Mixpanel events
- Custom product tracking
- Feature flag events
Third-Party Events
- Webhook listeners
- API polling
- Integration platforms
Step 4: Build Processing Layer
Process events for action:
```mermaid
flowchart TB
A[Raw Event] --> B[Enrichment<br/>add context]
B --> C[Identity Resolution<br/>match to account/contact]
C --> D[Scoring<br/>update scores]
D --> E[Rule Evaluation<br/>check thresholds]
E --> F[Action Triggering<br/>alerts, routing, automation]
```
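The five stages compose naturally as a chain of functions. A sketch in which every helper is an illustrative stand-in for the real enrichment, matching, and scoring services:

```python
def enrich(event: dict) -> dict:
    event["company"] = {"domain": "acme.com", "employees": 250}  # stand-in for Clearbit etc.
    return event

def resolve_identity(event: dict) -> dict:
    event["account_id"] = "acct_42"  # stand-in for matching email/domain to a CRM record
    return event

def score(event: dict) -> dict:
    event["score"] = 85              # stand-in for fit + engagement scoring
    return event

def evaluate_rules(event: dict) -> list[str]:
    return ["slack_alert", "crm_task"] if event["score"] > 80 else []

def trigger(event: dict, actions: list[str]) -> None:
    for action in actions:
        print(f"triggering {action} for {event['account_id']}")

raw = {"event_type": "page_view", "page_url": "/pricing"}
event = score(resolve_identity(enrich(raw)))
trigger(event, evaluate_rules(event))
```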
Step 5: Build Action Layer
Turn insights into actions:
Alert Actions
- Slack notifications
- Email alerts
- CRM tasks
Routing Actions
- Lead assignment
- Tier changes
- Queue updates
Automation Actions
- Sequence enrollment
- Campaign triggers
- Record updates
Real-Time Pipelines with Cargo #
Cargo provides real-time processing:
Event Processing
```
Workflow: Real-Time Lead Routing

Trigger: Webhook (form submission)
→ Enrich: Company and contact data
→ Score: Calculate ICP fit
→ Score: Add engagement points
→ Match: Check against TAL
→ Route: Based on score and segment
→ Notify: Alert assigned rep
→ Track: Log for analytics

Latency: < 2 minutes
```
Signal Aggregation
```
Workflow: Multi-Signal Processing

Triggers:
- Website events
- Intent signals
- Product events
→ Aggregate: All signals for account
→ Calculate: Combined score
→ Evaluate: Against thresholds
→ If threshold crossed:
   → Update: Account status
   → Alert: Sales team
   → Trigger: Appropriate action
```
Intelligent Routing
```
Workflow: Smart Lead Distribution

Trigger: New lead created
→ Enrich: Full data enhancement
→ Score: Multi-factor scoring
→ Classify: Segment assignment
→ Route: Based on rules
   - Enterprise → Enterprise AE
   - Mid-market → MM team round-robin
   - SMB → Self-serve or nurture
→ Notify: Within SLA
```
Measuring Pipeline Performance #
Latency Metrics
| Metric | Target |
|---|---|
| Event ingestion | < 1 second |
| Processing time | < 30 seconds |
| End-to-end (event to action) | < 5 minutes |
| Alert delivery | < 1 minute |
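These targets are only enforceable if every stage stamps the event as it passes through. A sketch that derives the three timing metrics from per-stage timestamps; the field names are an assumed convention:

```python
from datetime import datetime

def parse(ts: str) -> datetime:
    # fromisoformat (3.7+) does not accept a trailing "Z", so normalize it.
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

# Assumed convention: each stage appends its own timestamp to the event.
event = {
    "occurred_at":  "2025-01-15T10:30:00.000000Z",  # user action
    "ingested_at":  "2025-01-15T10:30:00.400000Z",  # hit the streaming layer
    "processed_at": "2025-01-15T10:30:12.000000Z",  # scoring and rules done
    "actioned_at":  "2025-01-15T10:31:05.000000Z",  # alert delivered
}

ingestion  = parse(event["ingested_at"])  - parse(event["occurred_at"])
processing = parse(event["processed_at"]) - parse(event["ingested_at"])
end_to_end = parse(event["actioned_at"])  - parse(event["occurred_at"])

print(ingestion.total_seconds(), processing.total_seconds(), end_to_end.total_seconds())
```

Track the p95 of these values, not just the average; tail latency is what reps actually feel.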
Quality Metrics
| Metric | Target |
|---|---|
| Event delivery rate | > 99.9% |
| Match rate | > 90% |
| False positive rate | < 5% |
| Action success rate | > 95% |
Business Metrics
| Metric | Measurement |
|---|---|
| Response time improvement | Minutes saved |
| Conversion lift | Real-time vs. delayed |
| Pipeline from signals | $ attributed |
Best Practices #
- Start with highest-value events: don't boil the ocean
- Design for failure: events will be lost; handle gracefully (see the sketch after this list)
- Monitor latency: degradation kills value
- Balance real-time vs. batch: not everything needs instant
- Test thoroughly: real-time mistakes propagate fast
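In practice, "design for failure" means bounded retries plus a dead-letter path, so a failing destination degrades gracefully instead of silently dropping events. A minimal sketch:

```python
import time

dead_letter_queue: list[dict] = []  # in production: a real queue or table

def deliver_with_retries(event: dict, send, max_attempts: int = 3) -> bool:
    """Try to deliver an event via send(); park it in the DLQ rather than lose it."""
    for attempt in range(1, max_attempts + 1):
        try:
            send(event)
            return True
        except Exception:
            if attempt < max_attempts:
                time.sleep(2 ** attempt)  # exponential backoff: 2s, 4s
    dead_letter_queue.append(event)       # replayable later; nothing silently dropped
    return False
```

Pair this with delivery-rate monitoring so a growing dead-letter queue becomes an alert rather than a surprise.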
Real-time data pipelines transform sales from reactive to proactive. The investment in infrastructure pays back in opportunities captured that would otherwise be lost.
Ready to build real-time sales intelligence? Cargo processes events in real-time and triggers immediate actions across your GTM systems.
Key Takeaways #
- Timing matters: hot leads visiting your pricing page are 21x more likely to convert when contacted immediately; 24 hours later, competitors may have won
- Real-time architecture: event sources → event streaming (Segment/Kafka) → processing → actions (alerts, routing, automation)
- Key use cases: hot lead alerts (pricing page visits), intent signal response (topic spikes), PQL routing (product activation), deal risk detection (activity staleness)
- Balance real-time vs. batch: not everything needs instant; match latency to business value (hot leads: minutes, intent signals: hourly, analytics: daily)
- Latency targets: event ingestion < 1 second, processing < 30 seconds, end-to-end < 5 minutes, alert delivery < 1 minute
- Design for failure: events will be lost, so build in delivery guarantees, monitoring, and graceful degradation