David Dittman
Marketing Operations Performance

Managing a $2M Monthly Ad Spend

November 7, 2023


There is a threshold in paid media where the job fundamentally changes. Below a certain spend level, you can manage campaigns with intuition, manual checks, and a solid spreadsheet. Above that level — and for me, the inflection point was somewhere around half a million a month — intuition becomes dangerous, manual checks become impossible, and your spreadsheet starts lying to you because the data is too complex and too fast-moving for a static document to capture.

At two million a month, managing ad spend is an operations problem as much as a marketing problem. The strategic questions are important, but they are table stakes. What separates teams that scale profitably from teams that burn cash is the operational infrastructure around budget allocation, optimization cadence, fraud detection, and decision-quality reporting. I want to walk through how I think about each of these.

Budget Allocation Frameworks

The simplest useful framework I have found for budget allocation at scale is what I call the 70-20-10 model, though the exact ratios shift depending on where you are in your growth cycle.

Seventy percent of budget goes to proven performers — campaigns and channels with established, stable return metrics and at least ninety days of consistent performance data. This is your foundation. It is not exciting, but it pays the bills and funds everything else. The key discipline here is not to chase marginal improvements at the expense of stability. If a campaign is delivering a consistent three-to-one return, resist the urge to tinker with it just because you think you can squeeze out three-point-two. The risk of disrupting a proven performer almost always outweighs the potential upside.

Twenty percent goes to scaling experiments — taking things that have shown promise in small tests and seeing if they hold up at higher spend levels. This is where most of the growth comes from, and it is also where the most judgment is required. Not everything that works at a hundred dollars a day works at a thousand dollars a day. Audience saturation, frequency fatigue, and competitive dynamics all change as you scale, and you need to watch the leading indicators carefully.

Ten percent goes to pure exploration — new platforms, new audience segments, new creative formats, new offers. Most of this will not work. That is the point. You are buying information, not returns. The discipline here is to set clear learning objectives and kill criteria before you start. “We will spend ten thousand dollars on this new channel over three weeks. If cost-per-acquisition is not within forty percent of our target by week two, we pull the budget.” Without those guardrails, exploration budgets have a tendency to become charity.
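
To make the split concrete, here is a minimal sketch of the 70-20-10 allocation expressed in code. The ratios and bucket descriptions come straight from the framework above; the function and variable names are illustrative, not part of any particular tool.

```python
# Illustrative sketch of the 70-20-10 budget split. The ratios mirror the
# framework above; the names and structure are hypothetical.

MONTHLY_BUDGET = 2_000_000

ALLOCATION = {
    "proven_performers": 0.70,    # stable campaigns with 90+ days of consistent data
    "scaling_experiments": 0.20,  # promising small tests pushed to higher spend
    "pure_exploration": 0.10,     # new platforms, audiences, formats, offers
}

def allocate_budget(total: float, allocation: dict[str, float]) -> dict[str, float]:
    """Split a total monthly budget across the three buckets."""
    assert abs(sum(allocation.values()) - 1.0) < 1e-9, "ratios must sum to 1"
    return {bucket: round(total * share, 2) for bucket, share in allocation.items()}

print(allocate_budget(MONTHLY_BUDGET, ALLOCATION))
# {'proven_performers': 1400000.0, 'scaling_experiments': 400000.0, 'pure_exploration': 200000.0}
```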

Real-Time Optimization vs. Strategic Patience

One of the most counterintuitive lessons I have learned about managing large ad budgets is that more frequent optimization is not always better. In fact, at scale, over-optimization is one of the most common and most expensive mistakes.

Here is the dynamic: platform algorithms need data to learn. When you change bids, budgets, or targeting too frequently, you reset the learning process and prevent the algorithm from finding the optimal delivery pattern. I have seen teams cut their effective return by twenty or thirty percent simply by making too many changes too quickly, each change individually rational but collectively destructive.

My rule of thumb is to match optimization frequency to the statistical significance of your data. For campaigns spending over a thousand dollars a day, I want at least forty-eight hours of data before making any change, and I want to see at least a hundred conversions before I trust a performance trend. For smaller campaigns, the required patience window is even longer. Yes, this means watching a campaign spend money at what looks like an unfavorable rate for a couple of days. That is the cost of making decisions based on signal rather than noise.
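
The patience rule is easy to encode as a simple gate before any optimization change. The 48-hour and 100-conversion thresholds are the rule of thumb above; the function itself is a hypothetical sketch, not a platform feature.

```python
def enough_signal(hours_of_data: float, conversions: int,
                  min_hours: float = 48, min_conversions: int = 100) -> bool:
    """Return True only when a campaign has accumulated enough data to
    justify an optimization change (thresholds from the rule of thumb above)."""
    return hours_of_data >= min_hours and conversions >= min_conversions

# 30 hours in with 40 conversions: wait, do not touch the campaign yet.
print(enough_signal(hours_of_data=30, conversions=40))   # False
print(enough_signal(hours_of_data=72, conversions=130))  # True
```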

The exception is anomaly detection. If spend is pacing dramatically above normal, if a campaign suddenly shows zero conversions after consistent performance, or if cost-per-click spikes beyond historical bounds — those warrant immediate investigation regardless of the patience window. That is not optimization. That is damage control.
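
A rough sketch of what those anomaly checks might look like follows. The three conditions match the ones described above; the specific thresholds (1.5x typical spend, three standard deviations on cost-per-click) are illustrative assumptions, not calibrated values.

```python
from statistics import mean, pstdev

def anomaly_flags(spend_today: float, typical_daily_spend: float,
                  conversions_today: int, recent_conversions: list[int],
                  cpc_today: float, historical_cpcs: list[float]) -> list[str]:
    """Flag conditions that warrant immediate investigation regardless of the
    normal patience window. Thresholds are illustrative starting points."""
    flags = []
    if spend_today > 1.5 * typical_daily_spend:
        flags.append("spend pacing well above normal")
    if conversions_today == 0 and mean(recent_conversions) > 0:
        flags.append("zero conversions after consistent performance")
    cpc_mean, cpc_sd = mean(historical_cpcs), pstdev(historical_cpcs)
    if cpc_today > cpc_mean + 3 * cpc_sd:
        flags.append("cost-per-click outside historical bounds")
    return flags
```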

Platform Diversification Strategy

Concentrating your spend on a single platform is comfortable and dangerous. It is comfortable because you build deep expertise, your tooling is optimized, and your benchmarks are well-calibrated. It is dangerous because you are one algorithm change, one policy update, or one account suspension away from losing your entire revenue engine.

I target a maximum of sixty percent of spend on any single platform, and I maintain active campaigns on at least three platforms at all times. “Active” means spending enough to maintain algorithmic learning and generate statistically meaningful data, not just a token presence. If a platform represents less than five percent of your spend, you are not really diversified — you are just wasting the operations overhead of managing another channel.
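
Those concentration rules are mechanical enough to check automatically. Here is a minimal sketch, assuming a simple mapping of platform to monthly spend; the 60 percent cap, 5 percent floor, and three-platform minimum are the targets described above.

```python
def diversification_issues(spend_by_platform: dict[str, float],
                           max_share: float = 0.60,
                           min_share: float = 0.05,
                           min_platforms: int = 3) -> list[str]:
    """Check the concentration rules above: no platform over 60% of spend,
    at least three platforms carrying meaningful (5%+) spend."""
    total = sum(spend_by_platform.values())
    issues, meaningful = [], 0
    for platform, spend in spend_by_platform.items():
        share = spend / total
        if share > max_share:
            issues.append(f"{platform} carries {share:.0%} of spend (over the {max_share:.0%} cap)")
        if share < min_share:
            issues.append(f"{platform} is a token presence at {share:.0%} of spend")
        else:
            meaningful += 1
    if meaningful < min_platforms:
        issues.append(f"only {meaningful} platform(s) carry meaningful spend")
    return issues
```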

The hidden benefit of diversification is not just risk mitigation. Different platforms reach different segments of your audience in different mindsets. Someone scrolling a social feed is in a different mental state than someone actively searching for a solution, who is different from someone watching a video, who is different from someone reading an article. A diversified platform strategy lets you meet the same person at multiple points in their decision journey, and the cross-platform reinforcement effect is real and measurable.

When to Kill Underperforming Campaigns

This is where emotional discipline matters most. Every campaign represents someone’s strategic hypothesis, someone’s creative work, someone’s optimism. Killing a campaign feels like admitting failure. So teams let underperformers linger, hoping they will turn around, slowly bleeding budget that could be deployed more productively elsewhere.

I use a three-strike framework. Strike one: the campaign misses its target metric by more than twenty percent after the minimum data window. We investigate — check creative fatigue, audience saturation, landing page issues, tracking discrepancies. Strike two: after adjustments, the campaign still misses target by more than twenty percent in the next data window. We make one more round of significant changes — new creative, new audience segments, revised offer. Strike three: if it still is not working, we kill it. No more extensions, no more “let’s give it one more week.”
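
As a sketch, the three-strike framework can be reduced to a small evaluation step run at the end of each data window. The 20 percent miss tolerance and the strike actions follow the framework above; the reset-on-target behavior and the function shape are my own simplifying assumptions.

```python
def evaluate_strike(actual_return: float, target_return: float,
                    strikes: int, miss_tolerance: float = 0.20) -> tuple[int, str]:
    """Apply the three-strike framework: a miss of more than 20% against
    target adds a strike; three strikes kills the campaign."""
    missed = actual_return < target_return * (1 - miss_tolerance)
    if not missed:
        return 0, "on target - reset strikes"  # assumption: hitting target clears the count
    strikes += 1
    if strikes == 1:
        return strikes, "strike one - investigate creative, audience, landing page, tracking"
    if strikes == 2:
        return strikes, "strike two - one more round of significant changes"
    return strikes, "strike three - kill the campaign and reallocate the budget"
```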

The math supports this discipline. At scale, the opportunity cost of leaving budget in an underperforming campaign is enormous. If you have fifty thousand dollars a month sitting in campaigns that are delivering a one-point-five-to-one return when your proven performers deliver three-to-one, moving that budget is not just an incremental improvement — it is a transformative one. That reallocation alone could generate an additional seventy-five thousand in monthly revenue.

Fraud Detection and Traffic Quality

At two million a month in spend, you are a target. Click fraud, bot traffic, attribution manipulation — these are not theoretical risks. They are line items that will eat your budget if you do not actively defend against them.

I run a layered defense. The first layer is platform-level fraud detection, which is free and catches the obvious stuff. The second layer is a third-party verification service that provides independent measurement of traffic quality. The third layer — and this is the one most teams skip — is internal anomaly detection built on our own data.

That third layer is critical because it catches the fraud that is sophisticated enough to fool the first two layers. We monitor conversion-to-engagement ratios at the campaign level. If a campaign suddenly shows a spike in clicks but conversion rates drop to near zero, that is a fraud signal. If a new traffic source shows perfect click-through rates but zero downstream engagement, that is a fraud signal. If cost-per-click drops dramatically without any corresponding change in targeting or creative, that is a fraud signal.
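
A minimal sketch of that internal monitoring layer might look like the following. The three signals are the ones described above; the comparison thresholds are illustrative assumptions that would need calibration against your own baselines.

```python
def fraud_signals(clicks: int, baseline_clicks: float,
                  conversion_rate: float, baseline_conversion_rate: float,
                  ctr: float, downstream_engagement_rate: float,
                  cpc: float, baseline_cpc: float) -> list[str]:
    """Internal traffic-quality checks; thresholds are illustrative, not calibrated."""
    signals = []
    if clicks > 2 * baseline_clicks and conversion_rate < 0.1 * baseline_conversion_rate:
        signals.append("click spike with conversions collapsing toward zero")
    if ctr > 0.10 and downstream_engagement_rate == 0:
        signals.append("implausibly high click-through rate with zero downstream engagement")
    if cpc < 0.5 * baseline_cpc:
        signals.append("cost-per-click dropped sharply with no change in targeting or creative")
    return signals
```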

I estimate that our fraud detection efforts save between three and five percent of our total spend, which at our scale means sixty to a hundred thousand dollars a month. That easily justifies the investment in both tools and the engineering time to build our internal monitoring.

Reporting That Drives Decisions

I have seen a lot of ad spend reports, and most of them are useless. They are packed with data, beautifully formatted, and completely disconnected from the decisions that need to be made. A fifty-page weekly report that shows you every metric for every campaign is not a report — it is a data dump with a cover page.

The reporting framework I use at scale has three tiers. The daily pulse is a single page — sometimes a single screen — that answers three questions: Are we on pace for our monthly targets? Are there any anomalies that need immediate attention? What is the single highest-impact action we could take today? This is what the media buying team looks at every morning.

The weekly review is a four-to-five-page document that covers performance by channel, creative performance trends, audience insights from the past week, and the testing roadmap for the coming week. This is the working document for the weekly optimization meeting, and every section ends with a specific recommendation and a decision that needs to be made.

The monthly strategic report goes to leadership and focuses on the big picture: month-over-month trends, progress against quarterly goals, budget reallocation recommendations, and a forward-looking view of risks and opportunities. This is where you zoom out from the daily noise and evaluate whether your overall strategy is working.
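
For the daily pulse in particular, the structure is simple enough to pin down as a data shape: three fields, each mapped to one of the three questions. This is a hypothetical sketch of that structure, not a description of any specific reporting tool.

```python
from dataclasses import dataclass

@dataclass
class DailyPulse:
    """One-screen daily report: every field maps directly to a decision."""
    pacing_vs_monthly_target: float   # e.g. 0.97 means 3% behind pace
    anomalies: list[str]              # anything needing immediate attention
    highest_impact_action: str        # the single action to take today

# Hypothetical example of a morning pulse.
pulse = DailyPulse(
    pacing_vs_monthly_target=0.97,
    anomalies=["Campaign X: cost-per-click spiked 3x overnight"],
    highest_impact_action="Shift daily budget from Campaign Y (1.4:1) to Campaign Z (3.2:1)",
)
```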

The common thread across all three tiers is that every piece of data is attached to a decision. If a metric does not inform a specific action someone needs to take, it does not belong in the report. This sounds obvious, but enforcing it ruthlessly is what separates reporting that drives performance from reporting that just documents it.

Managing spend at this scale is a craft. It requires equal parts analytical rigor, operational discipline, and strategic judgment. The teams that get it right build compounding advantages — better data, faster learning, more efficient allocation — that are extraordinarily difficult for competitors to replicate. That operational moat, more than any single creative insight or targeting trick, is what sustains performance over the long term.