The Complete PPC Audit Checklist: 47 Things Your Agency Should Be Checking
I have audited dozens of advertising accounts. The pattern is always the same: about 60% of what should be checked never gets checked. Here is the full list.
Before I touch a budget or change a single campaign setting, I run every new account through a structured audit. Over 15 years and more than $400 million in career ad spend, I have refined this list to 47 checks across six areas. Most agencies will tell you they do this already. Ask them to show you the last audit report they produced for your account. Most cannot.
Account Structure
1. Campaign architecture matches business objectives
Campaigns should be structured to reflect how you actually measure success — not how the agency finds them easiest to manage. Prospecting, retargeting, and brand should never be mixed in a single campaign.
2. Brand and non-brand are separated
This is the most common structural error I find. When brand and non-brand terms run together, brand inflates performance metrics and hides the true cost of acquiring new customers.
3. Audience lists are correctly segmented
Existing customers, website visitors, and cold audiences require different messaging and different bidding. Running them together means you are showing acquisition messaging to people who already bought — and paying for it.
4. Campaign objectives match the buying stage
A conversion campaign targeting cold audiences will underperform not because the creative is wrong, but because the objective is wrong. Awareness objectives for cold, consideration for warm, conversion for hot.
5. Ad group / ad set structure is logical and scalable
Too many ad groups dilute learnings and make the account unmanageable. Too few means you cannot isolate what is driving performance. The right number depends on spend, not preference.
6. Negative keyword lists are comprehensive and maintained
Negative keyword lists degrade over time. Without regular maintenance, spend leaks into irrelevant queries. Check the search terms report — not just the keyword list — for evidence of waste.
7. Match types are used deliberately
Broad match has its place. So does exact. Most accounts I audit use broad match by default, not by strategy, which means the algorithm is making decisions the account manager should be making.
8. Budget is allocated by business priority, not account age
Old campaigns tend to accumulate budget because no one wants to disturb them. Check whether budget allocation reflects current commercial priorities or legacy inertia.
9. Dayparting and device settings are configured and justified
Showing ads at 3am to mobile users who never convert is a slow budget drain. Dayparting and device bid adjustments should be based on actual conversion data, not assumptions.
10. Geographic targeting is deliberate
Default geographic settings often include locations that will never convert. Check the locations report. You will almost always find budget going to places that make no commercial sense.
Tracking & Attribution
11. Conversion tracking is implemented correctly
Fire the tag in a test environment before declaring it live. I regularly find conversion events that fire on the wrong page, fire multiple times per transaction, or never fire at all.
12. Conversion events are deduplicated across platforms
If you are running both Google and Meta, the same purchase will be claimed by both platforms unless you implement deduplication. Most accounts do not. Reported total conversions are therefore overstated.
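If you want to quantify the overstatement, reconcile by transaction ID. A minimal sketch in Python, assuming both platforms' conversion exports can be joined on a shared order ID (the file names and the column name are illustrative, not platform defaults):

```python
import csv

def unique_orders(paths, id_column="order_id"):
    """Count each conversion once across platform exports that share an order ID.

    Assumes each export is a CSV with a column holding the transaction ID
    (column name and file paths are illustrative, not platform defaults).
    """
    seen = set()
    for path in paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                seen.add(row[id_column])
    return len(seen)

# Both platforms claiming the same purchases? Compare the deduplicated
# count against the sum of each platform's self-reported total.
total = unique_orders(["google_ads_conversions.csv", "meta_conversions.csv"])
print(f"Deduplicated conversions: {total}")
```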
13. Attribution windows are appropriate for your purchase cycle
A 7-day click attribution window makes sense for e-commerce. It does not make sense for a B2B product with a 90-day sales cycle. The attribution window should match how customers actually buy.
14. View-through attribution is not inflating results
View-through conversion attribution is the most common cause of inflated Meta performance I find. Users who saw an ad but did not click are being counted as conversions. Check this setting in every active campaign.
15. GA4 and platform data are reconciled
Platform data and GA4 data will never perfectly match, but significant discrepancies (more than 15–20%) indicate a measurement problem, not a platform difference. Investigate before optimising.
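As a working definition of "significant", here is the check as arithmetic. A sketch only, and the choice of GA4 as the denominator is my convention, not a standard:

```python
def discrepancy(platform_conversions: float, ga4_conversions: float) -> float:
    """Relative gap between platform-reported and GA4-reported conversions."""
    return abs(platform_conversions - ga4_conversions) / ga4_conversions

# Example: the platform claims 1,320 conversions, GA4 shows 1,000.
gap = discrepancy(1320, 1000)
print(f"{gap:.0%} gap")  # 32% - well past the 15-20% threshold
if gap > 0.20:
    print("Investigate measurement before optimising.")
```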
16. Server-side tracking is considered for key events
Browser-based tracking is degrading year on year. Safari's Intelligent Tracking Prevention (ITP), ad blockers, and iOS privacy changes mean client-side pixels are under-reporting. Server-side tracking is no longer optional for serious measurement.
17. Offline conversions are imported where relevant
If your business closes deals offline — in-store, by phone, or through a sales team — those conversions should be fed back into the ad platforms to inform bidding. Most accounts I audit are optimising toward online micro-conversions while ignoring the actual business outcome.
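For Google Ads, the lowest-friction route is a GCLID-based click conversion upload from your CRM. A Python sketch of building the import file; the column headings follow the standard upload template as I know it, but verify against the template in your own account before uploading, as formats change:

```python
import csv
from datetime import datetime, timezone

# Hypothetical closed deals from a CRM export: (gclid, value, closed_at).
deals = [
    ("EAIaIQobChMI_example_gclid", 4500.00,
     datetime(2026, 1, 14, 9, 30, tzinfo=timezone.utc)),
]

with open("offline_conversions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Google Click ID", "Conversion Name", "Conversion Time",
                     "Conversion Value", "Conversion Currency"])
    for gclid, value, closed_at in deals:
        writer.writerow([
            gclid,
            "Closed Won",  # the conversion action name defined in your account
            closed_at.strftime("%m/%d/%Y %H:%M:%S"),
            f"{value:.2f}",
            "USD",
        ])
```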
18. Attribution model is appropriate and understood by the team
Last-click, first-click, linear, data-driven — the model you choose determines which channels look good and which do not. Most teams use the default without understanding what it means for budget decisions.
19. UTM parameters are consistent and complete
Missing or inconsistent UTMs make channel attribution in GA4 meaningless. Check every active campaign for correct source, medium, campaign, and content parameters.
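Consistency survives longer as code than as a naming convention in a shared doc. A minimal Python sketch of a tagging and audit helper; the lowercase rule is my convention, though GA4 really does treat "Facebook" and "facebook" as different sources:

```python
from urllib.parse import parse_qs, urlencode, urlsplit, urlunsplit

REQUIRED = ("utm_source", "utm_medium", "utm_campaign", "utm_content")

def tag_url(url: str, source: str, medium: str, campaign: str, content: str) -> str:
    """Append a complete, lowercase UTM set to a landing page URL."""
    params = {
        "utm_source": source.lower(),      # lowercase everything: GA4 is
        "utm_medium": medium.lower(),      # case-sensitive, so mixed casing
        "utm_campaign": campaign.lower(),  # splits one channel into several
        "utm_content": content.lower(),
    }
    scheme, netloc, path, query, frag = urlsplit(url)
    query = f"{query}&{urlencode(params)}" if query else urlencode(params)
    return urlunsplit((scheme, netloc, path, query, frag))

def is_fully_tagged(url: str) -> bool:
    """Audit helper: does a final URL carry every required parameter?"""
    present = parse_qs(urlsplit(url).query)
    return all(p in present for p in REQUIRED)

print(tag_url("https://example.com/offer", "meta", "paid_social",
              "spring_sale", "video_hook_a"))
```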
20. Incrementality testing has been considered
Platform reporting tells you what happened in the platform. It does not tell you what would have happened without the ad. Geo holdout tests or conversion lift studies are the only way to measure true incrementality. Most accounts have never run one.
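The arithmetic behind a geo holdout is simple even if the test design is not. A back-of-envelope difference-in-differences sketch, assuming one test region and one comparable holdout region (all numbers illustrative):

```python
def incremental_lift(test_during, test_before, holdout_during, holdout_before):
    """Difference-in-differences estimate of lift from a geo holdout.

    Scales the test region's pre-period baseline by the holdout's observed
    trend, then compares actuals against that counterfactual. A sketch only:
    real geo tests need matched markets and significance testing.
    """
    trend = holdout_during / holdout_before  # what "no ads" looked like
    expected = test_before * trend           # counterfactual for the test geo
    incremental = test_during - expected
    return incremental, incremental / expected

# Illustrative numbers: conversions per region, before vs during the test.
inc, lift = incremental_lift(test_during=1150, test_before=900,
                             holdout_during=1020, holdout_before=1000)
print(f"Incremental conversions: {inc:.0f} ({lift:.0%} lift)")
```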
Google Ads
21. Smart bidding strategy matches account maturity
Target CPA and Target ROAS require sufficient conversion volume to work effectively. Accounts with fewer than 30–50 conversions per month per campaign should not be on automated bidding — the algorithm lacks enough signal to optimise meaningfully.
22. Search Impression Share is tracked and acted upon
Low impression share due to budget is a budget problem. Low impression share due to rank is a quality score or bid problem. They require different solutions. Most reports do not distinguish between the two.
23. Quality scores are monitored for high-spend keywords
A quality score below 5 on a high-spend keyword means you are paying more per click than competitors with better ad relevance and landing page experience. Fix the score before increasing the bid.
24. RSAs have been properly tested with varied assets
Responsive Search Ads with only minor headline variations are not being tested — they are being ignored. Every RSA should have headlines covering different value propositions, not variations of the same message.
25. Search terms report is reviewed regularly
The search terms report is the most important optimisation tool in Google Ads and the one least often reviewed. Review weekly. Add negatives. Identify new keyword opportunities. This is the work.
26. Performance Max campaigns are correctly configured and monitored
Performance Max can be effective, but it requires careful asset group setup, audience signals, and brand exclusions. Left unconfigured, PMax campaigns frequently cannibalise branded search and inflate performance metrics.
27. Landing pages match the ad message and intent
An ad promising 20% off that lands on the homepage is burning money. Message match between ad and landing page is the single highest-leverage conversion rate lever. Check every high-spend ad group.
28. Ad extensions (assets) are fully utilised
Sitelinks, callouts, structured snippets, call extensions, image extensions — unused assets are free ad space you are leaving on the table. Every account should have at least six active asset types.
29. Shopping campaigns are correctly structured and fed
For e-commerce, Shopping campaign structure and feed quality are more important than bidding strategy. A well-structured feed with accurate titles, descriptions, and categories outperforms bidding optimisation every time.
30. Google Ads recommendations are not auto-applied
Auto-applied recommendations increase Google's revenue. They do not reliably increase yours. Every recommendation that has been auto-applied in the last 90 days should be reviewed and justified — or turned off.
Meta & Social
31. Audience overlap between ad sets is minimised
Overlapping audiences cause your ad sets to compete against each other in the same auction, driving up your own CPMs. Use audience overlap tools to identify and resolve conflicts before launching.
32. Creative fatigue is monitored and actioned
Frequency above 3–4 on a cold audience is a signal of creative fatigue. CPMs rise, CTR falls, and conversions drop — but the campaign continues to spend. No one is watching. Check frequency by placement and audience type.
33. Advantage+ Shopping and Advantage+ Audience are used intentionally
Meta's automated products can work well — but they remove control over audience, placement, and budget allocation. Know what you are giving up before enabling them, and have a way to measure whether the trade-off is worth it.
34. Creative testing is structured, not random
A/B testing one variable at a time — hook, offer, format — produces learnings. Running multiple variations simultaneously produces data that is difficult to interpret. Structured creative testing is the difference between knowing what works and guessing.
35. Video creative meets platform-specific best practices
A hook in the first three seconds, captions on, vertical format, and a clear brand and value proposition without sound. These are not optional. They are the baseline for video that performs on Meta placements in 2026.
36. Retargeting audiences are sized appropriately and segmented
A retargeting audience of fewer than 1,000 people will not get meaningful delivery. An audience of all website visitors from the last 180 days is too broad to message effectively. Segment by recency and engagement level.
37. Pixel events and CAPI are both implemented
Browser-side pixel alone is no longer sufficient. Conversions API (CAPI) complements the pixel by sending server-side events that are not blocked by iOS or ad blockers. Event match quality score should be above 7.
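The mechanism that makes the pairing work is deduplication by event_id: the server event and the browser pixel event for the same purchase must share one. A hedged sketch of the server side against the Graph API events endpoint; the pixel ID, token, and API version are placeholders, and sending richer user_data fields is what lifts event match quality:

```python
import hashlib
import time

import requests

PIXEL_ID = "YOUR_PIXEL_ID"          # placeholders - use your own credentials
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"  # and pin a current Graph API version

def send_purchase(email: str, order_id: str, value: float, currency: str = "USD"):
    """Send a server-side Purchase event to the Conversions API.

    event_id must match the eventID the browser pixel sends for the same
    purchase so Meta can deduplicate the pair. Emails are SHA-256 hashed.
    """
    payload = {
        "data": [{
            "event_name": "Purchase",
            "event_time": int(time.time()),
            "event_id": order_id,  # shared with the pixel event
            "action_source": "website",
            "user_data": {
                "em": [hashlib.sha256(email.strip().lower().encode()).hexdigest()],
            },
            "custom_data": {"value": value, "currency": currency},
        }],
    }
    resp = requests.post(
        f"https://graph.facebook.com/v19.0/{PIXEL_ID}/events",
        params={"access_token": ACCESS_TOKEN},
        json=payload,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```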
38. Campaign learning phase disruptions are minimised
Every significant budget change, audience edit, or creative swap resets the learning phase. Accounts that are edited reactively — in response to daily performance fluctuations — spend most of their time in learning, never stabilising. Resist the urge to optimise daily.
Programmatic
39. Brand safety controls are configured
Programmatic inventory without brand safety controls will place your ads next to content that would concern your board. Category exclusions, keyword blocklists, and verified publisher lists are non-negotiable for any brand with reputation risk.
40. Invalid traffic (IVT) filtering is active
Bot traffic is a persistent problem in programmatic. Without IVT filtering, a meaningful percentage of impressions will never be seen by a human. Ask your DSP or agency to show you IVT rates by publisher and placement.
41. Viewability benchmarks are set and enforced
An impression that appears below the fold or is visible for less than one second has minimal impact. Set a viewability floor (minimum 70% for display, 50% for video) and filter placements that do not meet it.
42. Frequency caps are configured across DSPs
Without cross-channel frequency capping, the same user can be served the same ad dozens of times per day across different DSPs. This is both wasteful and damaging to brand perception. Unified frequency management requires a DMP or dedicated configuration effort.
43. Supply path is audited for intermediary fees
The programmatic supply chain between your budget and a publisher impression involves multiple intermediaries, each taking a fee. Supply path optimisation (SPO) — buying directly from SSPs with strong publisher relationships — reduces intermediary fees and improves working media percentage.
Reporting
44. Reports show business outcomes, not just platform metrics
Impressions, clicks, and platform-reported ROAS are media metrics. Revenue, profit, new customer acquisition cost, and LTV are business metrics. Your agency's report should speak the language of your finance team, not the language of the ad platform.
45. There is a single source of truth for performance data
If you are reconciling numbers from Google Ads, Meta, GA4, and a third-party reporting tool every month, you do not have a single source of truth — you have four competing truths. Agree on one number. Build your reporting around it.
46. Year-over-year comparisons account for seasonality
Comparing this month to last month in a seasonal business is misleading. Performance improvements that coincide with seasonal demand increases are not wins — they are the baseline. Always report against the same period last year where possible.
47. There is a documented optimisation log
If your agency cannot show you a log of every change made to your accounts in the last 90 days — with the rationale for each change and the observed outcome — you have no accountability. An optimisation log is the bare minimum for professional account management. Ask for it. If it does not exist, that is your answer.
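If you need a template to hand over, the schema matters more than the tool. A minimal sketch of the fields I would insist on (example values illustrative):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ChangeLogEntry:
    """One row of an optimisation log: the minimum fields worth insisting on."""
    when: date
    account: str
    campaign: str
    change: str           # what was done
    rationale: str        # why it was done
    expected_effect: str  # what the change was supposed to improve
    observed_outcome: str = ""  # filled in at the next review

log = [
    ChangeLogEntry(date(2026, 1, 8), "Acme UK", "Search - Non-brand",
                   "Added 14 negatives from search terms report",
                   "3.2% of spend going to irrelevant 'free' queries",
                   "Lower wasted spend, steady conversion volume"),
]
```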
Most agencies are checking about 60% of this list
I have yet to audit an account managed by a third-party agency that had all 47 checks in order. The most common gaps are in tracking integrity (Section 2), creative testing discipline (Section 4), and reporting accountability (Section 6). If your agency cannot speak to all of these — or better, produce documentation showing they actively manage them — it is worth asking why not.