
Machine Learning in Adtech with Pawel Godula

Oct 14, 2021
By: Jean Ortiz-Luis

Machine learning has become a crucial tool in many industries, but what does that look like in the adtech space? In the past few years, more advertisers and platforms have been turning to machine learning to optimize campaigns, protect against fraud, and help consumers receive quality content that they will actually enjoy. What’s more, with machine learning taking over more of the technical aspects of ad creation and delivery, advertisers have more room to be creative and develop mobile advertising campaigns that users will want to engage with.

We reached out to Digital Turbine’s very own Pawel Godula, a Senior Director of the Data Science department at Digital Turbine, to better understand how machine learning is used in adtech and how it can benefit campaign development and clients across our Digital Turbine community. What’s more, Pawel has recently become a Kaggle Competitions Grandmaster, so he knows what he’s talking about! Keep reading to discover what that means for Pawel, Digital Turbine, and the adtech space.

What is your role at Digital Turbine?

I am a Senior Director of the Data Science department at Digital Turbine, where I have the privilege to work with a team of brilliant data scientists and engineers on building machine learning algorithms to optimize campaigns for our partners. I am also a manager of the Digital Turbine Warsaw office.

What does it mean to be a Kaggle Competitions Grandmaster? How do machine learning competitions work?

Kaggle can be compared to the world championship in building machine learning and artificial intelligence algorithms. While there are many platforms organizing similar competitions, Kaggle is the most popular. It was acquired by Google in 2017.

To become a Kaggle Competitions Grandmaster, you need to earn five gold medals, and one of them has to be won solo. It is quite a challenging criterion to meet, which is why there are only around 240 Kaggle Competitions Grandmasters worldwide, and only a handful per country; in Poland, for example, I was the second person to achieve this rank.

In Kaggle competitions, the task is to build your machine learning algorithm on a public dataset; its performance is then measured on a separate, private (unseen) dataset. In a way, this helps measure the algorithm’s robustness to changing conditions, something we have to deal with a lot when building models in real life, especially in adtech.
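
To make the mechanics concrete, here is a minimal sketch of that public/private split, using scikit-learn on synthetic data; the dataset, model, and metric are all invented for illustration and are not from an actual competition.

```python
# Minimal sketch of Kaggle-style scoring: train on data you can see,
# get ranked on data you cannot. Everything here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(10_000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=10_000) > 0).astype(int)

# The "public" part is available while competing;
# the "private" part decides the final ranking.
X_public, X_private, y_public, y_private = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = GradientBoostingClassifier().fit(X_public, y_public)

print("public AUC: ", roc_auc_score(y_public, model.predict_proba(X_public)[:, 1]))
print("private AUC:", roc_auc_score(y_private, model.predict_proba(X_private)[:, 1]))
```

A model that only looks good on the public part has effectively overfit, which is exactly the failure mode real-world adtech models face when traffic patterns change.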

How did you become a Kaggle Competitions Grandmaster?

It was a long and eventful journey for me, and I don’t want to overwhelm you with details, but let me just say that it was one of the toughest challenges I have ever faced. On Kaggle you are competing with the entire world. On average, there are a couple of thousand teams participating in every competition, so in order to get five gold medals, you have to consistently outperform a large group of brilliant and super committed engineers, hackers, and mathematicians.

In terms of effort and commitment, it was like having a second full-time job. I frequently devoted weekends and holidays to working on the algorithms, waking up at 6 am or earlier to set up experiments so that I could analyze the results in the evening. You can only do that if you simply like it.

And I have to highlight – I could not have done it without the support of my wife. Many times my passion interfered with family life. I remember one occasion when we were in Venice and the competition was nearing a deadline. I was spending my time running algorithms on my laptop, rather than enjoying our holiday. My wife always understood, but Venice was on the border of understanding. 🙂

Are there any similarities between machine learning competitions on Kaggle and building machine learning models for adtech?

Among the many industries I have worked in, adtech is the most similar to a machine learning competition because both worlds share the “winner takes all” mechanics.

In machine learning competitions, even if the difference between the Top 100 algorithms is very small in absolute terms, only the Top 10 get gold medals. Hence, everyone works hard building complicated models to squeeze out that extra 1% of model accuracy.

Adtech is based on auctions, where only the winner (Top 1) gets to show their impression and the rest get zero. Hence we see a similar phenomenon: even small gains in model accuracy can lead to disproportionate benefits if they shift your position in the auction. Therefore, it actually makes a lot of sense to go the extra mile and build complicated, powerful models just to achieve an additional 1% in model quality.
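
As a back-of-the-envelope illustration (not Digital Turbine’s bidder, just a toy model with made-up numbers), here is how quickly estimation error eats into auction profit when you only earn anything on the auctions you win:

```python
# Toy second-price-style setup: we bid our model's estimate of an
# impression's value and, if we win, pay the competing top bid.
# All distributions and numbers are made up for illustration.
import random

random.seed(0)

def profit_per_1000(noise, n=200_000):
    """Average profit per 1,000 auctions for a model whose value
    estimate has Gaussian error with standard deviation `noise`."""
    total = 0.0
    for _ in range(n):
        value = random.uniform(0.5, 1.5)            # true value of the impression to us
        competitor = random.uniform(0.5, 1.5)       # highest competing bid
        our_bid = value + random.gauss(0.0, noise)  # what our model tells us to bid
        if our_bid > competitor:                    # winner takes the impression...
            total += value - competitor             # ...and the profit (or loss)
    return 1000 * total / n

for noise in (0.30, 0.15, 0.05, 0.00):
    print(f"estimation error {noise:.2f} -> profit per 1k auctions: {profit_per_1000(noise):7.2f}")
```

With a noisy estimate you win unprofitable auctions and miss profitable ones, so even modest accuracy improvements show up directly in the bottom line.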

Another similarity is that there is virtually no limit to the impact you can make with better models. Any improvement in model accuracy translates directly into a profit for the entire ecosystem. To give an example: if we build an algorithm that is better at recommending the next best game to players, they end up installing games they are really interested in, which creates value for the advertisers, the publishers, and, as a consequence, for us.

These two fundamental characteristics convinced me to move into adtech in the first place.

How does being a Kaggle Competitions Grandmaster help Digital Turbine optimize client campaigns?

I lead a team of amazing data scientists and engineers, where we try to impress each other every day with the intellectual rigor and quality of the ideas we bring to the table. Competitions help me constantly benchmark our modeling stack against state-of-the-art solutions and make sure we are pushing the boundaries of what is possible in predictive modeling in adtech.

One example is that our current main model, LightGBM, is a technology that frequently wins Kaggle competitions and was first adopted on a broad scale there. It offers great accuracy and training speed, and it also allows for quick iteration on new ideas because it does not require heavy data preprocessing before training.
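
For a sense of what working with it looks like, here is a minimal LightGBM sketch on synthetic data; the feature names, labels, and parameters are illustrative assumptions, not our production setup.

```python
# Minimal LightGBM sketch on synthetic data; columns, labels, and
# parameters are illustrative only, not a production configuration.
import numpy as np
import lightgbm as lgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 50_000
X = np.column_stack([
    rng.integers(0, 100, n),   # e.g. an encoded placement id
    rng.integers(0, 20, n),    # e.g. an encoded creative id
    rng.random(n),             # e.g. a user engagement score
])
y = (rng.random(n) < 0.02 + 0.10 * X[:, 2]).astype(int)  # synthetic "install" label

X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2, random_state=0)

# Gradient-boosted trees need no scaling or one-hot encoding to get started,
# which is a big part of why iterating on new features is fast.
model = lgb.LGBMClassifier(n_estimators=300, learning_rate=0.05, num_leaves=63)
model.fit(X_train, y_train, eval_set=[(X_valid, y_valid)])

print("validation AUC:", roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1]))
```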

How does Digital Turbine use machine learning/data science in campaign development?

In adtech, machine learning is at the heart of the technology, serving many important use cases. This is especially true for performance advertising, where the quality of the algorithms directly affects outcomes for advertisers and publishers, but it is not limited to that. We have a complex pipeline of models serving various use cases:

  • Install Rate prediction models, which predict install rates individually for all creatives that we have in our portfolio
  • ROAS models, which predict the expected value of in-app purchases individually for every user
  • Dynamic Margin Buffer models, which predict the external winning bid for every request and optimize the trade-off between the likelihood of winning the auction and the expected margin should we win (see the sketch after this list)
  • Exploration models, which address the cold-start problem for new campaigns
  • Fraud models, which are responsible for detecting and banning fraudulent traffic
  • QPS optimization, which serves as a filter at the top of the funnel, optimizing traffic before it is even fed into the models
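
To illustrate just one of these, here is a highly simplified sketch of the trade-off behind the Dynamic Margin Buffer idea: given a predicted value for a request and a predicted external winning bid, choose the bid that maximizes expected margin. The win-probability curve and all numbers are invented for illustration.

```python
# Simplified margin-vs-win-rate trade-off: pick the bid that maximizes
# P(win | bid) * (predicted_value - bid). The win-probability model and
# every number here are invented for illustration.
import math

def win_probability(bid, predicted_clearing_price, steepness=25.0):
    """Hypothetical P(win | bid): a logistic curve centered on the
    predicted external winning bid."""
    return 1.0 / (1.0 + math.exp(-steepness * (bid - predicted_clearing_price)))

def best_bid(predicted_value, predicted_clearing_price):
    """Grid-search candidate bids and keep the one with the best expected margin."""
    candidates = [predicted_value * i / 100 for i in range(1, 101)]
    return max(
        candidates,
        key=lambda b: win_probability(b, predicted_clearing_price) * (predicted_value - b),
    )

# Bidding too low rarely wins; bidding the full predicted value wins often
# but leaves no margin. The optimum sits somewhere in between.
print(f"chosen bid: {best_bid(predicted_value=2.00, predicted_clearing_price=1.20):.2f}")
```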

Adtech is a unique industry in this respect because the majority of processes in the organization can use machine learning to optimize their outcomes.

What is key in building machine learning models in adtech?

I would say knowing what really matters for the general success of the solution and adjusting modeling complexity to the task.

For example, in Install Rate prediction, it makes sense to go for complicated models: to properly assess the likelihood of an install, the model needs to consider many variables, like supply-demand matching, the quality of the creative, the user’s overall affinity for different game genres, the user’s current mood, etc. It makes sense to build a complex feature engineering pipeline and a model that can analyze interactions between those features.
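
As a toy example of the kind of feature engineering this implies (the column names and data are invented, and the real pipeline is far richer), one useful signal is how well a user’s genre history matches the genre of the creative being offered:

```python
# Toy feature-engineering sketch for install-rate prediction: derive a
# user-genre affinity signal from history and attach it to new requests.
# All column names and values are invented for illustration.
import pandas as pd

# Historical impressions with install outcomes.
history = pd.DataFrame({
    "user_id":   [1, 1, 1, 2, 2],
    "genre":     ["rpg", "rpg", "puzzle", "puzzle", "racing"],
    "installed": [1, 1, 0, 1, 0],
})

# Per-user install rate within each genre, estimated from past behavior.
affinity = (
    history.groupby(["user_id", "genre"])["installed"]
    .mean()
    .rename("user_genre_affinity")
    .reset_index()
)

# New requests to score: join the interaction feature before handing the
# table to a model such as LightGBM.
requests = pd.DataFrame({"user_id": [1, 2], "genre": ["rpg", "racing"]})
print(requests.merge(affinity, on=["user_id", "genre"], how="left"))
```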

By contrast, in ROAS modeling, the crucial asset is historical data on transactions. With enough such data, it is possible to be very successful even with relatively simple models. Knowing what matters in particular use cases allows us to quickly prioritize development ideas and build a proper strategy.
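
As a contrast to the install-rate pipeline above, here is the kind of deliberately simple baseline that can already work well for ROAS once enough transaction history is available; the column names and figures are invented for illustration.

```python
# Deliberately simple ROAS baseline: with enough transaction history, a
# grouped historical ratio is already a usable per-segment estimate.
# Column names and figures are invented for illustration.
import pandas as pd

transactions = pd.DataFrame({
    "campaign":   ["A", "A", "A", "B", "B"],
    "country":    ["US", "US", "DE", "US", "DE"],
    "spend":      [10.0, 12.0, 8.0, 20.0, 15.0],
    "revenue_d7": [14.0, 9.0, 6.0, 30.0, 12.0],  # day-7 in-app purchase revenue
})

# Expected 7-day ROAS per campaign/country segment.
segments = transactions.groupby(["campaign", "country"])[["spend", "revenue_d7"]].sum()
segments["roas_d7"] = segments["revenue_d7"] / segments["spend"]
print(segments.reset_index())
```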

How does your becoming a Grandmaster benefit the clients and partners of Digital Turbine?

Digital Turbine is an independent growth and monetization platform, and unlike our competitors, we face no trade-off between using data for the benefit of advertisers and publishers and building the success of our own games from an in-house game studio. Being a Grandmaster and having such a great team to work with guarantees that we can build powerful models and technology, and being independent guarantees that this technology always works directly in the best interest of our partners.

In simple terms, our success is tied in a clean and transparent way to the success of the advertisers and publishers we work with. If I were running a game studio, I would think three times before giving my data to an integrated group that has both a mediation platform and its own in-house game studio. The opportunities for cross-joining data are endless, especially because the potential disadvantages won’t show up immediately in daily install reports but are much more likely to hurt long-term goals like retention and ROAS.

At the Digital Turbine group, any extra 1% that we find through better technology means an extra 1% for our partners, and I particularly enjoy this lack of conflict of interest. We also see that this transparency and clean alignment of objectives is increasingly important to our partners.
