
I Manage $400M+ in Career Ad Spend. Here Is What I Check First.

When someone asks me to look at their advertising accounts, I do not start with the campaigns. I check five things first. They tell me almost everything I need to know before I open a single ad group.

Over 15 years managing advertising across South Africa, the United States, the United Kingdom, and global markets — at budgets ranging from startup-scale to eight figures per month — I have developed a way of walking into a new account that cuts through the noise quickly. Most advertising problems are not where brands think they are. The campaigns are usually the symptom. The cause is almost always in one of these five places.

Check 1

Conversion Tracking Integrity

Before I look at a single campaign, I verify whether the conversion events being tracked are actually the business outcomes that matter. This sounds obvious. It is rarely done.

The most common problem I find: an account is optimising toward a micro-conversion — a lead form submission, a quote request, an email sign-up — that has almost no correlation with actual revenue. The campaigns look healthy. Revenue is flat. When you trace back from revenue to the conversion events the platform is using to learn and bid, the disconnect becomes obvious.

The second most common problem: duplicate conversions. The same transaction is being counted multiple times — once in the ad platform's pixel, once in GA4, sometimes once more in a third-party attribution tool. Reported ROAS is inflated. Budget decisions are being made based on overcounted data.

I check: Are conversion events firing correctly? Are they deduplicated? Are the events the algorithm is optimising toward actually correlated with the revenue line? In most accounts I see for the first time, at least one of these is broken.

Check 2

Brand vs Non-Brand Separation

Brand search is not acquisition. It is defence. If someone searches your brand name and clicks your paid ad, you have paid for a customer who was already going to find you. The ROAS on branded search is almost always excellent — because the user already has intent, already knows you, and was probably going to convert regardless of whether the paid ad appeared.

When brand and non-brand search run together in the same campaign, brand performance inflates the blended numbers and makes non-brand acquisition look cheaper than it is. Budget follows the reported performance — toward brand, away from actual new customer acquisition. The account keeps reporting green. New customer growth slows.
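The arithmetic behind that inflation is worth seeing once. The spend and revenue figures below are hypothetical, but the shape of the problem is typical.

```python
# Hypothetical month of search spend; figures are illustrative only.
channels = {
    # name:      (spend, attributed_revenue)
    "brand":     (10_000, 120_000),  # defending existing demand
    "non_brand": (40_000,  80_000),  # actual new-customer acquisition
}

spend = sum(s for s, _ in channels.values())
revenue = sum(r for _, r in channels.values())
print(f"blended ROAS: {revenue / spend:.1f}")  # 4.0 -- looks healthy

for name, (s, r) in channels.items():
    print(f"{name} ROAS: {r / s:.1f}")
# brand: 12.0, non_brand: 2.0 -- the acquisition engine is far weaker
# than the blended number suggests
```

A blended 4.0 reads as a strong account. Separated, it is a 12.0 defence campaign propping up a 2.0 acquisition campaign, and budget decisions made against the blended figure will quietly starve growth.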

I check: Are brand and non-brand campaigns completely separated? Does the reporting show non-brand performance independently? Does the team understand the difference between defending existing demand and generating new demand — and are they treating them as separate budget questions?

This structural problem is present in the majority of accounts I audit at brands spending above $50k per month on search. It is almost always invisible to the finance team reviewing the monthly report.

Check 3

Budget Allocation Rationale

I ask to see the budget split by channel, campaign type, and audience stage. Then I ask why it is allocated the way it is. The answer to this question tells me more about the account's health than anything else.

The most revealing answer is: "It has always been split this way." Budget allocation based on history — not on current performance data, not on channel contribution to the funnel, not on where the incremental return is highest — is budget allocation by inertia. It compounds over time. Old campaigns accumulate spend. New channels that could outperform never get the budget to prove it. The account optimises toward its past, not its future.

The second most revealing answer is: "We split it evenly." Equal budget across unequal channels is a sign that no one has done the work to understand relative contribution. Some channels are better at building awareness. Some are better at closing intent. The budget should reflect the funnel role of each channel — and that role changes as the business grows and the competitive landscape shifts.

I check: Can the account manager explain, with data, why each channel receives its current budget allocation? When was the last time the allocation was questioned rather than carried forward? What would happen to revenue if 20% of the highest-spend channel's budget were redirected tomorrow?
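That last question can be made concrete with marginal returns. The sketch below uses hypothetical channel names and marginal-ROAS estimates; in practice those estimates come from geo tests, spend experiments, or media mix modelling, not from the platform's reported averages.

```python
# Hypothetical channels: current spend and estimated marginal ROAS
# (revenue per incremental dollar at the current spend level).
channels = {
    "search_non_brand": {"spend": 60_000, "marginal_roas": 1.8},
    "paid_social":      {"spend": 30_000, "marginal_roas": 2.6},
    "display":          {"spend": 10_000, "marginal_roas": 0.9},
}

biggest = max(channels, key=lambda c: channels[c]["spend"])
best = max(channels, key=lambda c: channels[c]["marginal_roas"])
shift = 0.20 * channels[biggest]["spend"]  # the "redirect 20%" thought experiment

# Estimated monthly revenue change from moving that budget.
delta = shift * (channels[best]["marginal_roas"]
                 - channels[biggest]["marginal_roas"])
print(f"move ${shift:,.0f} from {biggest} to {best}: "
      f"~${delta:,.0f}/month incremental revenue")
```

An account manager who can fill in this table from evidence is allocating strategically. One who cannot is allocating by inertia, whatever the dashboards say.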

Check 4

What Happens When Things Go Wrong

I ask to see the last three months of significant performance drops. Not "what happened this week" — a retrospective of each period when performance fell materially: how it was diagnosed, what changed, what was learned.

This check is less about the incidents themselves and more about the response to them. Accounts managed proactively have a documented history of diagnosis, hypothesis, and outcome. Accounts managed reactively have a history of budget pauses, panicked creative swaps, and structural changes made without a hypothesis — followed by a return to the same problems two months later.

The most common pattern I find: performance drops are attributed to "algorithm changes" or "seasonality" without evidence. The same explanation is used repeatedly across different periods. No one has looked at whether the actual cause — creative fatigue, audience overlap, a bid strategy change that reset the learning phase, a competitor entering the category — is something actionable.

"The algorithm changed" is not a diagnosis. It is a way of describing something as uncontrollable that is often, on closer inspection, very controllable. I check whether the team has the instinct and the methodology to tell the difference.

Check 5

The Question Your Agency Cannot Answer

After the first four checks, I ask one question that reliably reveals the depth of the agency or team's strategic understanding: If I cut your budget by 30% tomorrow, which channels would you cut first and why?

A team that understands the account — channel incrementality, audience stage, funnel contribution — answers this immediately and with confidence. They know which spend is truly incremental and which is maintaining momentum that would continue without the ads. They know where the margin sits.

A team that does not understand the account will either hesitate, give a generic answer ("we would reduce the lowest ROAS channels"), or redirect the question. The generic answer is the most dangerous — it sounds analytical but is actually the wrong framework. ROAS is a reported metric. It is not the same as incrementality. The channel with the highest reported ROAS is often brand search, which, as discussed above, is largely defending demand that already exists.
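The gap between the two metrics is easiest to see in a holdout result. The numbers below are hypothetical: a geo test where one region keeps a brand-search campaign running while a matched control region pauses it.

```python
# Hypothetical geo-holdout result for a brand-search campaign.
test_revenue    = 100_000  # region with brand ads running
control_revenue =  92_000  # matched region with brand ads paused
spend           =  10_000

reported_roas = test_revenue / spend           # 10.0 -- the platform's view
incremental = test_revenue - control_revenue   # revenue the ads actually caused
incremental_roas = incremental / spend         # 0.8

print(f"reported ROAS: {reported_roas:.1f}, "
      f"incremental ROAS: {incremental_roas:.1f}")
```

The platform reports a 10x return; the holdout shows the ads generated 80 cents per dollar, because most of that revenue would have arrived anyway. Cutting "the lowest ROAS channels" first would protect exactly the spend that matters least.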

This question is not a gotcha. It is the question every brand should be able to answer about their own advertising — and most cannot, because their agency has never been asked to explain it, and therefore has never done the work to understand it.

Why These Five?

Problems in the foundation propagate everywhere else

Campaign-level optimisation is valuable, but it operates within the constraints of the account's foundation. If the tracking is broken, you are optimising toward the wrong outcomes. If brand and non-brand are mixed, you are misreading performance. If budget allocation is historical rather than strategic, you are funding yesterday's priorities. If the team cannot diagnose performance drops, the same problems will recur. And if no one can answer the cut-budget question, no one really understands the account. Fix the foundation. The campaigns get easier from there.