Growth Marketing Best Practices For Companies
Planning
Every new tactic you pursue has a short Google Sheets or Excel model calculating the impact you expect if things go right and highlighting the one or two things you need to test before scaling the tactic.
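As a rough illustration, that model can be a few lines; every input below (budget, CPC, conversion rate, LTV) is a hypothetical assumption you would fill in per tactic.

```python
# Minimal sketch of a pre-test impact model; all inputs are hypothetical
# assumptions to be replaced with your own estimates.
TEST_BUDGET = 10_000          # dollars allocated to the test
EST_CPC = 2.50                # cost per click (one of the things to test)
EST_CONV_RATE = 0.03          # click-to-customer rate (the other thing to test)
EST_LTV = 400                 # lifetime value per customer

clicks = TEST_BUDGET / EST_CPC
customers = clicks * EST_CONV_RATE
cac = TEST_BUDGET / customers
print(f"customers: {customers:.0f}, CAC: ${cac:.0f}, LTV/CAC: {EST_LTV / cac:.1f}")
```

If the LTV/CAC ratio clears your bar under these assumptions, the test is worth running, and the model makes explicit that CPC and conversion rate are the inputs you need to validate before scaling.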
Test budgets and sample sizes are determined mathematically, typically based on the confidence interval and significance level you need in order to decide with confidence whether or not to scale the tactic going forward.
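For example, a standard two-proportion power calculation gives the sample size per variant, which you then convert into a budget via expected cost per user. The sketch below uses only the Python standard library; the baseline rate and detectable lift are assumed numbers.

```python
# Sketch of a sample-size calculation for a two-proportion test.
from statistics import NormalDist

def sample_size_per_variant(p_baseline, p_variant, alpha=0.05, power=0.80):
    """Approximate n per arm to detect a move from p_baseline to p_variant."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_baseline * (1 - p_baseline) + p_variant * (1 - p_variant)
    return (z_alpha + z_beta) ** 2 * variance / (p_baseline - p_variant) ** 2

n = sample_size_per_variant(0.03, 0.039)  # detect a 30% relative lift
print(f"~{n:.0f} users per variant")      # multiply by cost per user for budget
```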
You have a backlog of growth tactics you plan to test, each with an estimated impact model, stack-ranked by expected impact and ease of testing.
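One common convention for that stack rank (just an example, not the only scheme) is an ICE-style score: impact times confidence times ease. The tactics and scores below are made up.

```python
# Hedged sketch: stack-ranking a tactic backlog by an ICE-style score.
backlog = [
    {"tactic": "Podcast sponsorships", "impact": 8, "confidence": 4, "ease": 5},
    {"tactic": "Referral program",     "impact": 6, "confidence": 7, "ease": 6},
    {"tactic": "Exit-intent offer",    "impact": 3, "confidence": 8, "ease": 9},
]
ranked = sorted(backlog,
                key=lambda row: row["impact"] * row["confidence"] * row["ease"],
                reverse=True)
for row in ranked:
    print(row["tactic"])
```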
You have a system for documenting and storing post-mortems of channels or tactics when they don’t work
Analytics
At some point in your funnel or after purchase, you ask users "how did you hear about us?" to get a self-reported checkpoint on attribution that supplements your click tracking.
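The value shows up when you put the two sources side by side. In the made-up numbers below, click tracking undercounts a dark channel like podcasts that the survey surfaces.

```python
# Sketch: self-reported attribution vs. click tracking (all shares made up).
survey_share = {"podcast": 0.25, "paid search": 0.40, "word of mouth": 0.35}
click_share  = {"podcast": 0.05, "paid search": 0.70, "word of mouth": 0.25}

for channel in survey_share:
    gap = survey_share[channel] - click_share[channel]
    print(f"{channel}: survey {survey_share[channel]:.0%}, "
          f"clicks {click_share[channel]:.0%}, gap {gap:+.0%}")
```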
You frequently run lift-based methodologies to understand the incremental impact of channels, particularly branded paid search, remarketing, marketing automation, etc.
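A minimal holdout readout, with hypothetical numbers: expose one group to the channel, hold out a comparable group, and credit the channel only with the difference.

```python
# Sketch of a holdout-based lift readout (all numbers hypothetical).
treated_conv = 420 / 10_000   # conversions / users shown remarketing ads
holdout_conv = 380 / 10_000   # conversions / users held out

lift = (treated_conv - holdout_conv) / holdout_conv
incremental = (treated_conv - holdout_conv) * 10_000
print(f"relative lift: {lift:.1%}, incremental conversions: {incremental:.0f}")
# Only the ~40 incremental conversions, not all 420, should be credited to
# the channel when computing its true CAC.
```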
You don't just uniformly use first-click or last-click methodology as your main source of truth for understanding the performance of your channels. Instead you have channel-by-channel multipliers or some kind of multi-touch attribution (MTA) methodology for understanding the impact of each growth channel.
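In its simplest form, this can be a table of per-channel discounts applied to last-click counts. The multipliers below are illustrative; in practice they would come out of lift tests or an MTA model.

```python
# Sketch: channel-by-channel multipliers on top of last-click conversions.
last_click = {"branded search": 500, "remarketing": 300, "prospecting": 200}
multiplier = {"branded search": 0.4, "remarketing": 0.6, "prospecting": 1.2}

adjusted = {ch: n * multiplier[ch] for ch, n in last_click.items()}
print(adjusted)
# {'branded search': 200.0, 'remarketing': 180.0, 'prospecting': 240.0}
```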
You don't get caught up in false precision when making scaling decisions. Instead you focus on goalposts, i.e. the likely general range of things like CAC and payback period for a channel. This is critical because the key thing you need to understand for most channels and tactics is whether to a) scale it, b) iterate on it more before scaling, or c) reduce or kill it. Being decimal-point specific is not needed for that in most situations.
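A worked example of the goal-post framing, with hypothetical numbers: payback period is just CAC divided by monthly gross margin per customer, so a CAC range maps directly to a payback range.

```python
# Sketch of a goal-post check rather than a decimal-point estimate.
cac_range = (80, 120)       # plausible CAC range implied by the test
monthly_margin = 15         # gross margin per customer per month (assumed)

payback_months = tuple(cac / monthly_margin for cac in cac_range)
print(f"payback: {payback_months[0]:.0f}-{payback_months[1]:.0f} months")
# If even the pessimistic end clears your payback bar, scale; if even the
# optimistic end misses it, iterate or kill. More precision rarely changes
# the decision.
```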
Tactics
Your marketing automation system is almost entirely event-based (so no time-based 14-day onboarding flows where everyone gets the same education regardless of their stage in the funnel).
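Concretely, the trigger logic keys off what the user has and hasn't done, not days since signup. The event names and messages below are invented for illustration.

```python
# Hedged sketch of event-based triggering; events and messages are made up.
def next_message(user_events):
    """Pick the next lifecycle message from what the user has actually done."""
    if "signed_up" in user_events and "created_project" not in user_events:
        return "nudge_create_first_project"
    if "created_project" in user_events and "invited_teammate" not in user_events:
        return "explain_team_features"
    return None  # user is progressing on their own; send nothing

print(next_message({"signed_up"}))                     # nudge_create_first_project
print(next_message({"signed_up", "created_project"}))  # explain_team_features
```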
You have a critical growth infrastructure list (key pixels, events, etc.) that engineering has and understands needs to be part of QA before any deploy that might affect it. Bugs are somewhat inevitable, but if you're not working with your tech team in this way to actively minimize them, you'll have many more than necessary.
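That list can even back an automated pre-deploy check. The sketch below is hypothetical: fire_test_events() is a stand-in for whatever harness drives a test session and collects the analytics events that actually fired.

```python
# Hedged sketch of a pre-deploy smoke test for critical growth infrastructure.
CRITICAL_EVENTS = {"page_view", "signup_started", "purchase_completed"}

def fire_test_events():
    """Hypothetical stand-in: drive a headless session through the funnel
    and return the set of analytics events actually received."""
    return {"page_view", "signup_started", "purchase_completed"}

def test_critical_events_fire():
    missing = CRITICAL_EVENTS - fire_test_events()
    assert not missing, f"critical growth events not firing: {missing}"
```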
If you do heavy outbound emailing, you have redundant senders, domains, and subdomains that sit outside the main domain you use for marketing automation, so that sales outreach gone awry does minimal damage to your core sending reputation.
Hiring / onboarding
New hires usually start with a single project. If you want to be unable to effectively track results and give people feedback, assign them a lot of work off the bat. If instead you start with one workstream, progress is clear and it's straightforward to tell whether the hire meets the company's quality bar.
You have an unbalanced team, where headcount and resource deployment mostly map to revenue and growth drivers, or at least to the areas you think will drive the most impact.
When you hire, you typically prioritize the question: “if I could, would I hire 10 more people like this?” over specific skills and experiences
Everyone you hire has to do a take-home project in which they show their ability to do work similar to what they would actually do in the role.