Cold email doesn't work because you don't know how to test.

Save this post. Here's the framework that 3X'd our response rates and got us meetings with CTOs at Microsoft, Oracle, and Salesforce.

Most companies screw this up completely. They create 50+ variations. They tweak signature formats. They A/B test subject lines endlessly. But they're optimizing the wrong things.

Here's the framework we use to crack outbound for our clients in under 30 days:

1. TESTING vs. OPTIMIZING

Testing = the big stuff that actually matters:
- Value proposition
- Pain points addressed
- Target persona
- Core offer

Optimizing = minor tweaks that only matter AFTER you have a winning message:
- Subject line variations
- CTA placement
- Signature style
- Send time

2. THE TESTING SEQUENCE THAT WORKS

Step 1: Start with ONE core message
- Focus on a single, clear value proposition
- Target ONE specific pain point
- Keep it under 150 words

Step 2: Test different OFFERS with the same message
- Not getting responses? Don't rewrite the entire email
- Change what you're offering at the end
- Demo → Case study → Quick call → Coffee chat

Step 3: Once an offer converts, create 3-5 VARIATIONS
- Same core value prop and offer
- Different opening hooks
- Different proof points
- Test with a small sample (100-200 sends each)

Step 4: Scale the winner & start optimizing
- ONLY now should you tweak subject lines
- ONLY now should you test send times
- ONLY now should you add personalization

This approach cut our testing cycle from 3 months to 3 weeks. Most teams waste months on microscopic changes when they haven't even validated their core message.

The best part? Tools like Smartlead make this systematic testing simple. Once you've got your winner, then you can go wild with personalization using Clay. But not before.

Cymate 🛠️♠️
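Step 3's 100-200-send samples only prove anything if the gap between variants is big enough to trust. One way to sanity-check that is a plain two-proportion z-test; this is a minimal sketch (the function name and the reply counts are illustrative, not from the post):

```python
import math

def reply_rate_z_test(replies_a, sends_a, replies_b, sends_b):
    """Two-proportion z-test: did variant B reply at a different rate than A?

    Returns (z, p) where p is the two-sided p-value under the
    normal approximation.
    """
    p_a = replies_a / sends_a
    p_b = replies_b / sends_b
    pooled = (replies_a + replies_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p_b - p_a) / se
    # erfc(|z|/sqrt(2)) equals the two-sided tail probability 2*(1 - Phi(|z|)).
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Illustrative numbers: Offer A got 4 replies on 150 sends, Offer B got 14.
z, p = reply_rate_z_test(4, 150, 14, 150)
print(f"z = {z:.2f}, p = {p:.3f}")
```

If p comes out well above 0.05, the "winner" at this sample size may just be noise, and the honest move is to send more before scaling.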
Email Testing Techniques
-
We analyzed over 27 million cold emails. Here are key insights into mailbox activity and performance:

📊 Open rates:
- Across all non-agency campaigns globally, the average open rate is 26%.
- Open rates remain stable across workdays but drop 6-9% on weekends.

📊 Reply rates:
- Smaller, highly targeted campaigns (1-50 prospects) have a reply rate nearly 3x higher than mass campaigns with 1,001-5,000 prospects.
- Campaigns with a bounce rate under 2% enjoy a 60% higher reply rate than those with 20%+ bounce rates.
- Initial emails have the highest reply rates (~1%), with subsequent follow-ups seeing diminishing returns (30% as effective by the 5th or 6th email).

📈 Mailbox volume and performance:
- Mailboxes sending 1-20 emails daily have reply rates four times higher than those sending 101-200 emails daily.
- Mailboxes sending 21-50 emails daily see balanced performance, with over twice the reply rates of high-volume senders.

🔑 Key takeaways
↳ Personalized, concise, and well-segmented campaigns drive higher engagement.
↳ Maintaining a low bounce rate and optimizing email volume per mailbox are critical for maximizing results.
↳ Send no more than 3 messages in a sequence.

Have you noticed similar patterns in your campaigns?
-
Stop running random email A/B tests.

Before you say, "Let's A/B test this," ask yourself:
1. Why are we A/B testing this?
2. What's the hypothesis?
3. What's the goal?

Testing just for the sake of testing is a waste of time, money, and resources, whether you're running it in-house or working with an agency.

Example:
⤷ Swapping out an image in an email? That's not moving the needle.
⤷ Optimizing time delays in an abandonment flow? Now we're talking.

Most brands fall into one of two camps:
1. They don't test at all → Missing out on incremental revenue.
2. They test everything → Poor resource allocation.

The sweet spot? Prioritising high-impact tests.

Here's a smarter A/B testing framework:

1. Welcome flow & popups → Capture and convert traffic efficiently.
⤷ Start with the offer (% off, $ off, free gift).
⤷ Then test email angles and formatting.
⤷ Finally, optimise time delays (most buyers act within 48 hours).

2. Abandonment flows → Recover lost revenue.
⤷ Start with time delays (email 1 matters most).
⤷ Then test offers: discounts, free shipping, urgency tactics.
⤷ Finally, experiment with different ways to address FUDs (fears, uncertainties, doubts).

3. Post-purchase flows → Maximize LTV.
⤷ Test immediate vs. delayed upsells.
⤷ Optimise offer types for repeat purchase behaviour.
⤷ Experiment with content that drives the most return visits.

4. Everything else (winbacks, sunset flows, etc.)
⤷ Only test these once your core flows are fully optimised.
⤷ Expect smaller, incremental gains.

So next time someone suggests an A/B test, challenge them:
→ What's the hypothesis?
→ What's the expected impact?

If they don't have a clear answer, skip it.

What's your approach to A/B testing?
-
Most people never find Message-Market Fit.

Not because their offer sucks. Because they're testing in the dark.

Let me show you the exact framework that's generated $100M+ in pipeline.

What is Message-Market Fit?

It's how well your messaging resonates with your target audience. No resonance = no response. Perfect resonance = floodgates open. Simple as that.

The 5 Core Components:

Every message has 5 parts that matter:
1. Offer → What you're proposing in exchange for their time/money
2. Angle → How you frame it (the perspective that makes it appealing)
3. CTA → The specific action you want them to take
4. Subject Line → What gets them to open
5. Social Proof → Why they should believe you

Here's where people mess up: they test everything at once.

New subject line + new offer + new angle = chaos. You have zero clue what actually moved the needle.

The Framework:

Think of Message-Market Fit like a combination lock. Each component is a dial. You need to crack them ONE at a time.

Here's the order:

Step 1: Test Offers First
→ Create 4 different offers
→ Keep everything else identical
→ Send to equal audience segments
→ Find the winner

Step 2: Test Angles
→ Take your winning offer
→ Create 4 different angles to frame it
→ Same CTA, same subject line, same social proof
→ Find the winner

Step 3: Test CTAs
→ Take your winning offer + angle
→ Test 4 different calls to action
→ Find the winner

Step 4: Test Subject Lines
→ Keep your winning combo
→ Test 4 different subject lines
→ Find the winner

Step 5: Test Social Proof
→ Add different case studies, results, testimonials
→ Find what builds the most trust

Why this order matters: your offer has the biggest impact; your subject line has the smallest. Test high-impact first.

The Key Rule: only test ONE variable at a time. Otherwise you're just wasting data.

Every day you don't have Message-Market Fit, you're losing money. But finding it isn't magic. It's a process.

Test what matters. Test in the right order. Let the data guide you.

The market will tell you when you've nailed it.
_____________________________
P.S. What component are you stuck on right now? Drop it in the comments and I'll help you figure out what to test next.
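The combination-lock idea is essentially a greedy search: lock in each dial's winner before turning the next. A minimal sketch of that loop, where the dial names, variants, and `FAKE_RATES` numbers are all invented for illustration; in practice `measure` would be the observed reply rate from a real send per variant, not a lookup table:

```python
def crack_the_lock(dials, measure):
    """Test one dial at a time, locking in each winner before moving on.

    `dials` maps component name -> variant list, ordered by impact
    (offer first, subject line last). `measure(combo)` returns the
    observed reply rate for a combination of variants.
    """
    locked = {}
    for dial, variants in dials.items():
        # Vary only this dial; every previously locked dial stays fixed.
        locked[dial] = max(variants, key=lambda v: measure({**locked, dial: v}))
    return locked

# Illustrative fake data: pretend each variant contributes a known rate.
FAKE_RATES = {
    "offer": {"demo": 0.8, "case_study": 2.1, "quick_call": 1.2},
    "angle": {"save_time": 1.5, "grow_revenue": 2.4},
    "cta":   {"reply_yes": 2.0, "book_15min": 1.1},
}

def fake_measure(combo):
    return sum(FAKE_RATES[dial][variant] for dial, variant in combo.items())

winner = crack_the_lock({d: list(v) for d, v in FAKE_RATES.items()}, fake_measure)
print(winner)  # the best variant per dial, chosen one dial at a time
```

The design choice this encodes is the post's key rule: because only one dial varies per round, any change in the measured rate is attributable to that dial alone.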
-
36 things I learned about cold email after 3 years and working with 30+ clients:

1. Most cold email problems happen before you hit send.
2. Deliverability first, copy later.
3. If your emails land in spam, nothing else matters.
4. Never send cold emails from your main domain (!!!)
5. Cold email is rarely a copy problem. Your positioning, website, social proof, social presence, list quality... just to mention a few.
6. Warm up your domains for at least 2 weeks before sending.
7. It should be an ongoing process. Keep warming up even while you actively send campaigns.
8. Buying 1 domain and blasting 500 emails/day from it is how you burn your brand.
9. The best subject lines look like they came from a colleague.
10. 3-5 words. Lowercase. No emojis or names. That's the formula behind my best subject lines.
11. Your first line should prove you understand them.
12. Relevance > personalization. Always!
13. Personalization is useless in 85% of cases.
14. List segmentation is your superpower.
15. Write one message per segment, not one message per person.
16. Your ICP definition is probably too broad.
17. The best cold emails are 50-80 words. People have low attention spans.
18. One CTA per email. Make it easy to say yes.
19. "Let me know if you'd like to chat" is the weakest CTA in existence.
20. Follow-ups should add value, not just "bump this to the top of your inbox."
21. Most replies come from emails 2-4, not email 1.
22. If nobody's replying, the problem is usually your list.
23. Data quality is the #1 predictor of campaign success.
24. Waterfall verification > relying on one data provider.
25. Clay changed the game for enrichment workflows.
26. Your tech stack should have fewer tools, not more.
27. Volume should match your TAM. 500 prospects? Go deep. 50,000? Go wide.
28. Sending more emails won't fix a positioning problem.
29. If you can't explain your offer in one sentence, prospects won't read the second one.
30. Your offer clarity matters more than any copywriting trick.
31. Test the boring stuff first: send times, list quality, domain health.
32. Track buying signals: job changes, funding rounds, new hires, tech installs.
33. Cold email is not about selling. It's about starting a conversation.
34. Nobody wants to book a call with a stranger. Give them a reason first.
35. Social proof in your email works better than any persuasion tactic.
36. Cold email isn't dead. It's still the cheapest acquisition channel.

What would you add to the list?
-
I A/B tested 247 different onboarding email approaches.

I used to send 23 onboarding emails per user. Now I send 7.

Here's the breakdown:

EMAIL 1: The "Impossible Promise" Email
Subject: "Get results in [X] minutes or I'll refund you personally"
Don't welcome them. SHOCK them. I tested 47 different first emails. This beat everything by 600%. The secret? Promise something that sounds impossible but deliver it immediately.

EMAIL 2: The "Holy Shit Moment"
Sent [X] minutes after [Email/Action].
"Here's proof you just saved $2,400"
Show them their exact savings. Dollar amount. Time saved. Be stupidly specific. This email has an 89% open rate because people can't believe what just happened.

EMAIL 3: The "Objection Annihilator"
Day 2. Here's where I broke every rule.
"Why this will never work for you (and why that's perfect)"
I literally told people why my product would fail for them. Conversions doubled overnight.

EMAIL 4: The "Competitor Confession"
Day 4. This one made my team panic.
"Why [Competitor] is probably better for you"
I compared myself to competitors and recommended them for specific use cases. Trust went through the roof. Upgrades followed.

EMAIL 5: The "Failure Autopsy"
Day 7. This email shouldn't work but destroys everything else.
"Here's how 67% of users accidentally sabotage themselves"
Teaching failure prevention beats teaching success by 10x. Why? Because people are terrified of screwing up.

EMAIL 6: The "Secret Society"
Day 14. The psychology here is dirty.
"You're now in the top 5% (here's your proof)"
I show them their ranking vs. other users with specific metrics. Creates instant addiction to checking their progress.

EMAIL 7: The "Forbidden Fruit"
Day 21. The final trap.
"The advanced stuff we don't advertise"
I reveal "hidden" features that aren't actually hidden. Makes them feel like insiders getting secret access.

The insight that broke my brain: users don't want to use your product. They want their problem to disappear.

Every email focused on problem elimination vs. product education got 1,847% better results.

The biggest mistake I made for 3 years? Trying to teach people my product instead of solving their problem.

Your onboarding sequence should feel like magic, not homework. Most founders are building email courses.

Stop asking "How do I explain my product?" Start asking "How do I change their life?"
-
I see a lot of misconceptions about A/B testing in outbound.

The problem I see most? People jumping straight into copy variations. When you do this, you don't even have an understanding of which core message resonates with your audience. This creates a foundation of sand, where every subsequent test builds on potentially flawed assumptions.

Here's how we do it at OutboundLeads.

Phase 1: Angle Testing (The Foundation)

Before writing a single email, use ChatGPT or Anthropic's Claude to help identify 3-4 distinct angles based on different pain points your product or service solves. Each angle should represent a fundamentally different value proposition, not just different ways of saying the same thing.

For a sales automation tool, your angles might focus on:
→ Time efficiency ("Save 15 hours/week on manual tasks")
→ Revenue growth ("Companies using automation see 23% more deals")
→ Team scalability ("Scale your sales team without hiring")
→ Competitive advantage ("While competitors do manual work...")

The key to proper angle testing is maintaining consistency across all other variables: same subject lines, same CTA structure, identical sending schedules, same personalization level. The only variable should be the core value proposition and pain point being addressed.

You need a minimum of 200 contacts per angle for statistical significance. Run tests for 2-3 weeks minimum to account for different response patterns. Measure reply rates, positive reply rates, meeting booking rates, and response quality/intent level.

Phase 2: Copy Optimization (The Polish)

Once you've identified your winning angle through systematic testing, then optimize copy elements within that proven framework. THIS is where most teams want to start, but doing it prematurely WASTES time and resources.

Within your winning angle, test email length, personalization depth, social proof placement, CTA formats, and tone variations. The critical rule: test only one element at a time. Change both email length and CTA format simultaneously? You'll never know which change drove the performance difference.

Phase 3: Scaling and Documentation

When you've identified both winning angles and optimal copy variations, scale confidently. You don't need to keep reinventing the wheel. You've already done it.

Document learnings for future campaigns. Angle testing insights often reveal broader market positioning opportunities beyond email.

The compound effect is powerful. Get angle testing right first, and every subsequent optimization builds on a solid foundation. Our campaigns typically see a 300-400% improvement over initial versions using this methodology.
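Whether 200 contacts per angle is actually enough depends on how big a lift you need to detect. A quick sanity check using the standard two-proportion sample-size formula (normal approximation); the 2%/8% and 1%/3% reply rates below are illustrative, not from the post:

```python
import math
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Contacts needed per variant to detect a reply-rate lift from p1 to p2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for two-sided alpha=0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

print(n_per_arm(0.02, 0.08))  # a large lift: roughly 200 contacts per angle
print(n_per_arm(0.01, 0.03))  # a subtler lift: several times more
```

In other words, 200 contacts per angle is enough to catch a big swing (e.g. 2% vs. 8% reply rate) but not a subtle one, so run the test for the full 2-3 weeks rather than calling it early.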
-
We did $55M in email sales last year.

We test everything. Every variable. Every element.

Here's our exact testing framework:

CTA LANGUAGE
- "Shop the Sale" vs. humor vs. emotional/aspirational
- Test what resonates with YOUR audience
- Don't assume. Data decides.

BUTTON DESIGN
- Rounded vs. rectangular edges
- Brand colors vs. high-contrast colors
- What pops more = what converts more

HERO IMAGES
- Static photos vs. looping GIFs
- Product shots vs. lifestyle imagery
- Cozy organic vs. glossy close-ups

SUBJECT LINES (this is huge)
- Emoji placement: beginning vs. end vs. none
- Length: short hooks vs. longer and personal
- Personalization with first names
- Questions vs. strong statements

PRICE INCLUSION
- Upfront pricing vs. no pricing
- Depends on your audience. Test both.

CUSTOMER REVIEWS
- Include them vs. not including them
- Where to place them
- Customer photos vs. text only

EMAIL FLOWS (the money makers)
- Time delays between emails
- Generic vs. product-specific follow-ups
- Review inclusion strategies

Example: We tested a 4-hour vs. a 1.5-hour delay for abandoned cart reminders. Outcome: the longer wait time produced a higher open rate, better click rate, and more orders.

POP-UP OPTIMIZATION
- 15% vs. 20% off incentives
- Timing: 5 seconds vs. 10 seconds vs. scroll triggers
- Design: full screen vs. corner box

Don't guess what your audience wants. Test it. Measure it. Double down on what works.
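When a test has more than two variants (e.g. four subject lines), a chi-square test tells you whether any variant actually differs before you crown a winner. A minimal stdlib-only sketch; the open counts below are invented for illustration:

```python
def chi_square_stat(opens, sends):
    """Chi-square statistic for k variants, given opens out of sends per variant.

    Compares each variant's observed opens/non-opens against what a single
    shared open rate would predict.
    """
    total_opens, total_sends = sum(opens), sum(sends)
    pooled_rate = total_opens / total_sends
    stat = 0.0
    for o, n in zip(opens, sends):
        for observed, expected in ((o, n * pooled_rate),
                                   (n - o, n * (1 - pooled_rate))):
            stat += (observed - expected) ** 2 / expected
    return stat

# Four subject-line variants, 500 sends each (illustrative numbers).
stat = chi_square_stat([120, 135, 98, 160], [500, 500, 500, 500])
# Critical value for df = k - 1 = 3 at alpha = 0.05 is 7.815.
print(stat > 7.815)  # True means at least one variant really differs
```

Only after this global check passes does it make sense to do pairwise comparisons against the leading variant; otherwise "double down on what works" risks doubling down on noise.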