10 Questions App Growth Professionals Should Be Able to Answer
The digital landscape is dynamic, user preferences are constantly changing, and competition is fierce. To successfully navigate this challenging terrain, app professionals need to maintain a deep understanding of the ecosystem and its key success drivers. Today, that success is about more than just downloads; it’s about increasing engagement, sustaining retention, and maximizing revenue.
In this blog post, we will delve into 10 crucial questions that should be at the forefront of every diligent app growth professional’s thoughts, with clear and readily available answers.
1. Which trends are you monitoring?
Staying up to date with the latest trends is crucial for app growth professionals to remain effective in their roles. This can include a wide range of factors such as changing revenue models, new networks, emerging partners, preferred ad formats, innovative targeting options, best practices for engagement and retention, cutting-edge tools and technologies, or evolving campaign and revenue drivers. Essentially, tracking these trends is key to making informed decisions and boosting performance.
The more specifically a trend is defined, the faster you can act on it. By smartly using advanced analytics to tap into data trends, you can quickly capture gains that compound over time.
When it comes to metric-related trends, finding subtle or slow-moving patterns can be tricky. These trends, which gradually unfold over time, can easily go unnoticed or be dismissed, as they might seem insignificant in daily operations. However, even these small trends can build up over time to become quite significant. That’s where top-notch data monitoring solutions come in. They offer automated trend detection for all key metrics, sending alerts so you never miss out on an important trend.
In the example pictured below, a gradual decrease in Chinese ARPDAU (Average Revenue Per Daily Active User) is depicted. Without an alert, this decline could easily have been overlooked for weeks. In this case, it was due to a Rewarded Video Placement that was accidentally removed for Chinese users.
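To make the idea concrete, here's a minimal sketch of automated trend detection: fit a least-squares slope to a daily metric series (ARPDAU, in this case) and alert when the slope dips below a threshold. The threshold and the sample series are illustrative assumptions, not recommendations.

```python
from statistics import mean

def detect_decline(values, slope_threshold=-0.0002):
    """Fit a least-squares slope to a daily metric series and flag a
    gradual decline when the slope falls below the threshold.
    The default threshold is an illustrative assumption."""
    n = len(values)
    xs = range(n)
    x_bar, y_bar = mean(xs), mean(values)
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, values))
    slope /= sum((x - x_bar) ** 2 for x in xs)
    return slope <= slope_threshold, slope

# A slow, steady ARPDAU erosion that is easy to miss day-to-day
series = [0.050 - 0.0005 * day for day in range(30)]
alert, slope = detect_decline(series)
print(alert)  # True: the series loses about 1% of its value per day
```

A production system would add seasonality handling and statistical noise filtering, but even this simple slope check would have caught the weeks-long ARPDAU drift described above.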
2. How do you measure labor productivity?
Labor productivity is a critical metric that measures the efficiency and effectiveness of a workforce. Tracking and managing labor productivity helps app growth teams see which projects match up with expenditure expectations and productivity goals and which don’t.
With more knowledge around which tasks are consuming excessive time and effort, growth professionals can quickly and impactfully make adjustments to improve cost controls. Knowing where improvements are needed makes it possible to implement process enhancements, provide training, or adopt tools & methodologies to boost productivity.
On the flip side, when a given workflow or team exceeds its productivity goals, decision-makers can invest resources with greater awareness and acuity. This information can guide decisions related to hiring, outsourcing, or redistributing tasks among team members to optimize resources.
It can also help in setting more precise timelines, milestones, and budgets for new projects — preventing the team from overcommitting and reducing the likelihood of delays.
Labor productivity is usually calculated with the formula below:
Labor Productivity = Total Output / Total Working Hours
Of course, output can be a little tricky to measure. It’s a softer and more subjective metric. It can be measured in terms of tickets closed, tasks completed, campaigns launched, users gained, inventory created, impressions delivered, revenue generated, or any other quantifiable achievement. Whatever mix of factors you use to measure output — and that mix should vary by team — the key is to have your metrics well-defined and consistently tracked.
Time-tracking tools and project management software can help measure labor productivity — recording time spent on various tasks and projects. In some cases, it’s the same tools that help you measure labor productivity that are also most instrumental to improving it.
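As a minimal sketch with hypothetical numbers, the formula above translates directly into code:

```python
def labor_productivity(output_units, total_hours):
    """Labor Productivity = Total Output / Total Working Hours."""
    if total_hours <= 0:
        raise ValueError("total_hours must be positive")
    return output_units / total_hours

# Hypothetical weekly figures: 120 weighted output units
# (tickets closed, campaigns launched, etc.) across 80 tracked hours
print(labor_productivity(120, 80))  # 1.5 units per hour
```

The hard part, as noted, isn't the division — it's defining and consistently tracking the output units that feed into it.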
oolo AI, for example, improves labor productivity by helping teams detect issues faster and intervene more surgically. That not only ensures they’re acting to maximum impact, but frees up time for them to move through more tasks in a day.
3. Which app/game creatives are gaining traction?
To keep your audience responsive and avoid desensitization to stale ads — especially with such fast-changing consumer preferences and market trends — you need to keep things fresh and experiment with new means of capturing attention, compelling clicks, and generating conversions. Regular creative updates allow mobile app advertisers to A/B test different approaches, identifying what works best and optimizing their campaigns accordingly.
Understanding how your app’s performance varies by creative is a huge opportunity for improvement — not just in knowing where to direct spend, but in shaping new creative strategy. But with so many moving pieces, it can be hard to detect when a given creative gains traction in a specific channel or campaign.
When, for example, a newly trending creative shows a stable Day0 ROAS, the advertiser will know to check budgets to ensure nothing is limiting this newfound performance. This creative-level monitoring frees up the user to spend time analyzing creative elements and helps the creative team design tests to confirm and capitalize on the difference maker(s).
4. What are your current MTTR standards & goals?
MTTR stands for mean time to repair/resolution, and it measures the average time it takes to resolve a problem once detected. In the context of app growth, it is a key performance indicator (KPI) that indicates how efficiently an organization can identify and resolve issues in their marketing and monetization stacks.
The longer it takes to recover from these issues, the more potential UA and revenue is lost. Monitoring and improving MTTR can lead to more efficient incident response processes, reducing manual effort and associated costs. In a tight-margined app economy that translates to a vital and sustainable competitive advantage.
Tracking MTTR over time also allows app developers to identify trends, patterns, and recurring issues. This data can be used to make better informed decisions that result in improving app infrastructure, architecture, and growth practices, rapidly correcting for or even preventing similar incidents going forward.
To effectively manage MTTR, app developers should implement robust incident management practices, including monitoring, alerting, and follow-up procedures. These practices help teams identify issues quickly, diagnose the root cause, and implement fixes promptly.
By analyzing the time it takes to recover from different incidents, app teams can better focus their efforts on the most common and costly problems.
MTTR can be calculated with the formula below:
MTTR = (Total Detection Time + Total Diagnostic Time + Total Redress Time) / Number of Incidents
Measuring MTTR is helpful not only in setting and measuring yearly (or quarterly) goals — but in gauging the business ramifications of efficiency gains/losses. Once you have MTTR standards in place you can begin to look at TTR on a case-by-case basis and tie each minute beyond or below the standard to a real measurable impact.
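A minimal sketch of the formula, assuming each incident's detection, diagnostic, and redress times are logged separately (the field names and figures here are hypothetical):

```python
def mttr(incidents):
    """MTTR = (detection + diagnostic + redress time, summed over
    all incidents) / number of incidents."""
    if not incidents:
        raise ValueError("need at least one incident")
    total = sum(i["detect"] + i["diagnose"] + i["redress"] for i in incidents)
    return total / len(incidents)

# Hypothetical incident log, times in minutes
incidents = [
    {"detect": 30, "diagnose": 45, "redress": 15},
    {"detect": 10, "diagnose": 20, "redress": 30},
]
print(mttr(incidents))  # 75.0 minutes
```

Keeping the three phases separate, as here, is what lets you see whether your bottleneck is detection, diagnosis, or the fix itself.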
For most growth teams, low-hanging MTTR improvements can be captured by automating issue detection, streamlining investigations, and rallying the team’s effort around the most fixable (and impactful) issues first. Using oolo AI, for example, TapNation was able to reduce their MTTR by 54% in just 30 days.
5. What’s your long-tail strategy?
As a general rule, the more successful the app development company, the more titles there are to be managed in the portfolio. But with only so much time in the day and so much talent on the payroll, it’s just not feasible to have eyes on every single data point for every single app.
That’s why it’s important to know exactly how large your long-tail is and how you plan to handle it. Of course, troubleshooting and optimization work have diminishing marginal returns. When you set about fixing problems and making improvements for a given title, you’ll normally see greater uplift from your first hour of work than from your second, and from your second than your third, and so on. So if you can manage it, throwing your long-tail just a few minutes of attention each day or week may be worthwhile.
The problem is that there’s a minimum threshold of effective time spent. If you devote 10 minutes a day to the long-tail and the long-tail includes 32 titles, that’d leave just under 19 seconds for you to work on each title. In reality, you’d need at least a few minutes just to get into the work on each title before you can actually begin doing anything — which is why it’s important to know exactly how large your long-tail is and what options you realistically have for dealing with it.
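The arithmetic above can be sketched as a quick feasibility check; the three-minute minimum is an illustrative assumption:

```python
def long_tail_attention(daily_minutes, num_titles, min_effective_minutes=3):
    """Seconds of attention each long-tail title gets per day, and
    whether that clears a minimum threshold of effective time.
    The 3-minute default is an illustrative assumption."""
    seconds_per_title = daily_minutes * 60 / num_titles
    return seconds_per_title, seconds_per_title >= min_effective_minutes * 60

secs, feasible = long_tail_attention(10, 32)
print(secs, feasible)  # 18.75 False
```

Running your own numbers through a check like this makes it obvious whether a manual long-tail strategy is realistic for your portfolio.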
- First, that means defining exactly what you mean by long-tail. As a standard working definition, you might count the portfolio properties that contribute less than 7% of your revenue and account for less than 7% of your user base
- Second, you’ll want to estimate how much revenue/growth is surrendered to inefficiencies in the long tail
- Finally, you’ll want to decide how much time you’re willing to spend on the long tail
With that information in tow, you’ll know whether your strategy to manage the long-tail is realistic or not. For most successful app companies, it’s not realistic. And the decision is usually made to effectively abandon the long-tail. In those cases, growth teams just focus on maximizing downloads to and revenue from their larger titles — the subset of the portfolio that consistently delivers the greatest value.
And while a Pareto Principle-inspired style of management has its merits, it guarantees losses that will accumulate over time to become significant. Though this can be seen as an acceptable cost of doing business in the short-term, over the long-term, it becomes more difficult to justify. Which is why an increasing number of companies are bucking the trend and eschewing long-tail auto-pilot — the practice of leaving the long-tail to run and perform as it will without any intervention.
More and more companies are turning to automated monitoring and alerting systems to lower the minimum threshold of effective time spent and allow them to up the monitoring and optimization standards across the entire portfolio. In this model, there are no a priori manual health-checks. That’s all automated.
Only when an alert is issued to a specific problem or opportunity at a specific point in the data hierarchy does the growth manager intervene. This ensures that there are no unresolved problems, unseen revenue leaks, or unmet opportunities, slowly chipping away at your business.
6. How many old low-spend campaigns have you left running?
It is very common for advertisers to leave low-spend campaigns running, whether intentionally or because they slip through the cracks (running beneath normal monitoring thresholds/triggers). This “campaign limbo” introduces some risk.
From time to time, specific campaigns will get picked up by the network algorithms and suddenly the volume can start to increase. If the right measures aren’t in place (such as appropriate daily caps), these spikes can quickly become expensive if missed for more than a day or so.
So growth managers need to not only know how many leftover low-spend campaigns they’ve left running, they need to know what purpose is served by keeping them going and how the purpose can be weighed against the risk that they may inadvertently but meaningfully drain resources. Depending on where things stand in that cost-benefit analysis, the growth manager should consider officially ending some of those old campaigns.
Either way, all those campaigns should be on a watch list, so the team can keep a close eye on them.
7. Which active A/B tests can be closed?
This is a bit of a trick question, because the only acceptable answer is ”none.” That said, any growth manager worth their salt should have a running tally of the tests nearing conclusion. But the line between nearing conclusion and concluded is manually drawn and typically subject to a great deal of ambiguity. And that’s the rub.
While A/B tests offer a direct path to iterative improvements, they are tedious and generally managed with less-than-perfect precision and punctuality. That can leave actionable insights to wither and (sometimes) die on the vine — or worse, it can lead to wrong conclusions. Either way, you’re looking at squandered growth.
To keep growth healthy and, well, growing, it’s important to know and to act as soon as tests reach maturity and lessons can be implemented. Of course, it’s no easy task to stay on top of everything at all times, and, looking at both test groups, the durable delta can be hard to pin down with the naked eye and simple analyses.
It’s for this reason that growth teams may look to gain a leg up by tooling up. These teams leverage deep data monitoring technologies to automatically monitor all active tests, normalize distributions, and use sequential probability algorithms to declare the definitive winner at the earliest possible point.
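A full sequential probability algorithm is beyond the scope of a blog sketch, but the core significance check can be illustrated with a standard two-proportion z-test — a fixed-horizon simplification, with made-up conversion numbers:

```python
from math import sqrt, erf

def ab_significant(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: is the conversion-rate delta between
    two test groups statistically significant? A fixed-horizon check,
    not a full sequential probability algorithm."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_value < alpha, p_value

# Hypothetical: 200 conversions in group A, 260 in group B, 10k users each
sig, p = ab_significant(200, 10_000, 260, 10_000)
print(sig)  # True: p is well below 0.05
```

Note that repeatedly peeking at a fixed-horizon test inflates false positives, which is exactly why the sequential methods mentioned above exist.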
oolo AI’s A/B test monitoring, for example, displays the winning test group, a confidence score for that determination, and the monthly revenue impact from implementing changes pursuant to the test conclusions — with the added ability to filter for specific geographies to understand both the macro and micro impact.
8. How much spend is planned for the month?
As important as ARPDAU optimization is, spend-to-download optimization fundamentally must come first.
App growth professionals need to know how much spend they have to work with and what type of deliverables they’re expected to produce with it. This is essential for resource planning and management as well as for setting and tracking goals.
From a planning perspective, when advocating for a budget, the ability to present specific and reliable growth projections is invaluable. When it comes to long-term planning, precise spend and download expectations can help project future growth, guide investments, and shape decision-making around app development and marketing.
On a more day-to-day basis, having a firm grasp of the spend and download plan gives the team something to aim for in terms of overall cost per install — an overall figure that can be broken down and adjusted according to the performance expectations of different channels, formats, days of the week, times of the month, etc.
Converting these monthly goals into more granular benchmarks and milestones makes a really big difference in terms of pacing. It’ll prevent you from falling behind and help you consistently hit your marks. It’ll also give you a deeper understanding of the different network, timing, and creative dynamics at play — an understanding that you’ll ultimately use to your advantage and that’ll allow you to set higher goals.
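As a rough sketch with hypothetical figures, breaking a monthly plan into daily benchmarks and an implied cost per install (assuming even pacing) might look like:

```python
def pacing_targets(monthly_spend, target_installs, days_in_month=30):
    """Break a monthly spend/download plan into daily benchmarks and
    the implied overall cost per install. Even pacing is assumed;
    real plans would weight by day-of-week and channel performance."""
    return (
        monthly_spend / days_in_month,    # daily spend target
        target_installs / days_in_month,  # daily install target
        monthly_spend / target_installs,  # implied overall CPI
    )

daily_spend, daily_installs, cpi = pacing_targets(90_000, 60_000)
print(daily_spend, daily_installs, cpi)  # 3000.0 2000.0 1.5
```

Comparing actuals against these daily marks is what gives you the early tip-off when a campaign starts drifting off plan.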
Knowing how many downloads are expected from allocated spend also makes it easier to evaluate the success of specific marketing efforts and campaigns. If resulting downloads fall short of expectations, you know to kill the campaign, rethink your strategy, or adjust the marketing mix.
Finally, given the dynamism of the app economy, being acutely aware of your spend rate and expected downloads will give you a quick tip-off any time that external factors are influencing performance. This will give you a head-start when adapting to changes in the market, competition, or user behavior.
9. What’s your shrinkage?
Shrinkage refers to the difference in value between what you actually have and what you thought you had. For retail businesses, this is calculated based on inventory (i.e. the difference between the inventory shown in your records vs the physical inventory you count in your possession). But the same formula can be adapted for other business cases. An advertiser might calculate shrinkage according to the following formula.
Shrinkage = (Forecasted ROAS – Actual ROAS) / Forecasted ROAS
Measuring shrinkage is essential to optimize growth & revenue streams while ensuring profitability. Of course, ROAS isn’t a perfect analog for inventory. But it doesn’t need to be. You can replace ROAS with any other KPI and measure shrinkage through a collection of calculations.
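As a sketch of one such calculation: normalizing the gap by the forecast expresses shrinkage as a percentage of expected value. The sign convention (forecast minus actual, so a shortfall reads as positive shrinkage) and the figures are assumptions for illustration:

```python
def shrinkage(forecast, actual):
    """Shrinkage as the fraction of forecasted value that failed to
    materialize. Sign convention (forecast - actual) is an assumption,
    chosen so a shortfall reads as positive shrinkage."""
    if forecast <= 0:
        raise ValueError("forecast must be positive")
    return (forecast - actual) / forecast

# Hypothetical figures: forecasted ROAS of 1.20, actual ROAS of 1.15
print(f"{shrinkage(1.20, 1.15):.1%}")  # 4.2%
```

Swapping in a different KPI (revenue, installs, fill rate) is just a matter of feeding the function a different forecast/actual pair.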
For advertisers and publishers, shrinkage can result from fraud, market volatility, technical issues, mismanagement, discrepancies, user base decline, and faulty forecasting. Regardless, the result is the same: a loss of expected value.
For most businesses, a certain amount of shrinkage is to be expected, but if it gets out of control, you’ll find your earnings reports consistently falling short and your company in peril. As a rule of thumb, you want to keep your shrinkage below 5%.
If you can catch and contain some of those shrinkage factors before they manifest, that’s extra money in the company coffers and/or users in its database. Again, this is an area where best-in-class monitoring technologies can help. With the benefit of predictive modeling and fuller, more context-aware operational observability, you can effectively limit your shrinkage exposure to instances of market volatility.
With AI patrolling their UA and monetization operations for any sign of discrepancy or anomaly, app growth professionals minimize shrinkage, ensuring that no growth obstructions are missed or allowed to linger.
In this way, growth teams can correct every misalignment, misconfiguration, and sub-optimal setup before it ever touches the bottom line — refining their forward-facing strategies as they go.
10. What tasks are you replacing with advanced analytics?
If you can name a handful of specific tasks that have been made significantly more efficient and effective through advanced analytics, you’re in good shape. If you can’t, well, you’re liable to fall behind.
It can be segment construction, placement optimization, A/B assessment, copy analysis, forecasting, funnel optimization, pacing adjustment, ARPDAU tracking, or metric monitoring. Regardless of the task, if you’re still toggling through spreadsheets and reporting dashboards, manually crunching numbers and running analyses, you’re almost certainly spending too much time on oversight that’s too incomplete, imprecise, and error-prone.
Thankfully, as long as you’re in business, it’s not too late to up your analytics game. A good way to start is by making two lists:
- one of your most time-consuming daily tasks, and
- the other of tasks that have the greatest business impact
If any of your tasks appear on both lists, those are your top candidates for improvement. Then you’ll want to research any solutions or technologies that can help you automate, enhance, or streamline those tasks. When you find something that you believe is worth further exploration, you’ll want to start small, prove the value, and expand from there.
Follow the crawl, walk, run methodology. Rinse and repeat. And be sure to track the impact of all your analytics projects at all stages of their rollout.
Long story short
In the ever-evolving world of app growth, adaptation and knowledge are the keys to your app’s success. You need to have a firm grasp of your known knowns and known unknowns; and the answers to these 10 questions better be included among the former.
Put simply, there are certain pieces of information that should always be top of your mind and on the tip of your tongue:
- Keep an eye out for relevant trends
- Maximize labor productivity
- Have a finger on the pulse of your app creatives
- Minimize MTTR
- Maintain a realistic long-tail optimization strategy
- Keep the lid on low-spend campaigns
- Carry out diligent A/B test tracking
- Set spend & download goals
- Keep shrinkage in check
- Have an advanced analytics strategy in place
When it comes to growth operations, the ability to answer these questions quickly, clearly, and comprehensively can be the difference between success and failure. By embracing these questions, you’ll be better equipped to stay ahead of the curve and thrive. In their answers you’ll find a compass to guide you through the complex terrain of app growth.