3 AdTech Blunders and Their Learnings
Tue 24 Nov 2015, 11:23 AM - kanika
This new guest post is brought to us by Kanika Upadhyay, Head of Ad Quality at Bidstalk (now part of AppLift). In this post, Kanika describes 3 AdTech misadventures she has experienced and shares the lessons she learned from them.
Over my six years in adtech, I have come across various functions and operations teams that bask in the glory of how their product and engineering life cycles have evolved. I have met many vendors, all of them boasting about having the best tools, proxy services and crowdsourcing. Their confident salespeople have intimidated me with their terminology, impressed me with their techniques and, at times, surprised me with their pricing.
AdTech has been evolving at an unbelievable pace and the innovations across its various dimensions have been incredible. Unfortunately, as an ad quality operations manager, I still have a few fights that do not seem to have a permanent solution. Ad rendering on proxy- and carrier-based targeting is at the top of that list. I currently head an ad quality team which works 24/7, manually reviewing, testing and verifying every demand source that runs (or intends to run) on our platform. My team is based in India and global delivery is channeled through this center. This means that, sitting in our office in Bangalore, we are required to test how a particular ad will be served on a given device, on a given operator and in a given country, which is the hard part.
From being an Ad Quality executive half a decade ago to leading a team of ad quality professionals today, I have lived through many experiences, both good and bad. All of them ended with lessons I would never trade for anything else.
My first blunder began when a big-budget 'star' advertiser came on board. The initial two days were spent basking in the glory of how happy our supply partners were and how fast the budget was being spent. This is where the horror began…
The third day started with multiple emails from all channels, carrying screenshots and clippings from various users, media, publishers and, last but not least, my manager. Amidst the whole frenzy, it took me a while to realize that our 'star' advertiser was a shady agency redirecting their JS ads to a pornographic website. These ads were placed on our premium inventory, so the damage was huge. One of the key placements where they were served was the military website of a South Asian country; I can only resort to my imagination to understand what a high-ranking military official went through: claims to be the best and brightest and pleas to the nation's youth to join the forces, followed by a pole dancer in skimpy clothes at the end of the page. Horrendous.
My first lesson learnt: big budget does not always mean spotless ads.
During one of my previous startup experiences, we suddenly realized the need to scale up our team. Work was pouring in from everywhere and approvals for new hires became a priority. I scheduled quick interviews and we soon hired a small battery of people. Their training and induction went by like a breeze, and we became a sizable Ad Operations team.
Work resumed and soon the new joiners started reviewing and approving ads independently. In the rush to prove ourselves, we also promoted senior ad quality resources to other teams, trimmed the review process, eliminated sample checks and reduced the frequency of campaign audits. Then, unexpectedly, a giant e-commerce advertiser acted up: their campaign to promote a specific 'new year sale' had not started serving at all. The deal had been made well in advance and everyone had great hopes for this campaign. In the middle of the night, we had to backtrack and work out what could possibly have gone wrong. Every team was shaken awake; the ad serving and technical teams checked everything on their end, from servers to traffic sources.
We finally discovered that the ads had been tagged with the wrong category by a newly hired ad quality executive. They were supposed to be classified as 'Commerce' but were mistakenly tagged as 'Contraceptive' (in the long drop-down list of categories, the latter sat just below the former, hence the error). As a result, the ads were blocked on relevant sites and the campaign failed to deliver. Since the whole point was to promote a sale on one specific day, there was nothing we could do.
Lesson learnt, the hard way: check, confirm and audit. It is worth following the processes and holding elaborate training and assessment sessions, whatever effort it takes…
During yet another adtech startup experience, as we took on additional work with shorter turnaround times, I was keen to adopt new tools and devices to automate and reduce our efforts. We began testing our processes by trial and error until we reached a combination of services and tools that greatly shortened our SLAs (Service Level Agreements, here the minimum time we needed to approve an ad). While some tools worked like a charm, others failed miserably; our luck ran out when we could not judge which ones belonged in which basket. We then bought a fancy piece of software to help us flag a particular format of unapproved ads, in this case 'auto download' ads. As the name implies, these ads download a file directly onto the user's device without their consent. At the time, such ads were all the rage in China and South Korea, and many leading game developers relied on them for their campaigns. In other regions they were less accepted, and brand publishers completely disapproved of their autonomous behaviour.
We started using the new tool immediately after a botched due diligence, discontinuing our older practice of testing these ads manually. As you must have guessed by now, we were soon blocked by a branded supply partner, in this case a leading news app from the US. Their policy clearly stated that they did not accept auto downloads, considered them very bad for the user experience and held them to be a threat to the user's device (cases had been reported earlier where malware was auto-downloaded by such ads and corrupted the mobile device). It turned out that our new tool came with a limited bandwidth quota: once we reached the threshold, we could still enter the URLs of the tags, but they would not be tested and hence not flagged. This was a huge bug in the tool, and we raised it with the vendors in parallel. The terms of violation were clear, and we paid a huge penalty for the damage to our partner (much more than we had ever expected to spend with them).
Lesson learnt, the harder way again…
What are your adtech blunders? Let us know in the comments!
[A version of this article originally appeared on AppLift’s blog.]