Articles tagged python

  1. Finding a Confidence Interval for Lift

    The motivation for this blog post is simple: I was having trouble searching Google for a simple formula for the confidence interval of lift. Lift is a very important metric in our industry, and after all the work I put into researching it I want to make sure the next person to google ‘confidence interval of lift’ has an easier time.
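The formula itself isn't reproduced in this excerpt. As a hedged sketch of one common approach (not necessarily the one the post derives), lift can be treated as a ratio of two proportions, with the confidence interval built on the log scale via the delta method; the function name and inputs below are illustrative:

```python
import math

def lift_confidence_interval(x1, n1, x2, n2, z=1.96):
    """Approximate CI for lift = p1 / p2, where p1 = x1/n1 (e.g. test
    conversion rate) and p2 = x2/n2 (baseline rate), using the usual
    log-transform for a ratio of proportions. z=1.96 gives ~95%."""
    p1, p2 = x1 / n1, x2 / n2
    lift = p1 / p2
    # Standard error of log(lift) under the normal approximation
    se_log = math.sqrt((1 - p1) / (n1 * p1) + (1 - p2) / (n2 * p2))
    return lift, (lift * math.exp(-z * se_log), lift * math.exp(z * se_log))
```

Because the interval is symmetric on the log scale, it is asymmetric around the lift itself and never dips below zero, which matches how a ratio behaves.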

  2. Distributed Metrics for Conversion Model Evaluation

At Magnetic we use logistic regression and Vowpal Wabbit to determine the probability of a given impression resulting in either a click or a conversion. To decide which variables to include in our models, we need objective metrics that tell us whether we are doing a good job. Of these metrics, only the computation of lift quality (in its exact form) is not easily parallelizable. In this post, I will show how the computation of lift quality can be re-ordered to make it distributable.

  3. Computing Distributed Groupwise Cumulative Sums in PySpark

When we work on modeling projects, we often need to compute the cumulative sum of a given quantity. At Magnetic, we are especially interested in making sure that our advertising campaigns spend their daily budgets evenly throughout the day. To do this, we compute cumulative sums of dollars spent throughout the day in order to identify the moment at which a given campaign has delivered half of its daily budget. Another example where being able to compute a cumulative sum comes in handy is transforming a probability density function into a cumulative distribution function.

Because we deal with large quantities of data, we need to be able to compute cumulative sums in a distributed fashion. Unfortunately, most of the algorithms described in online resources do not work well when groups are either large (in which case we can run out of memory) or unevenly distributed (in which case the largest group becomes the bottleneck).
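The post's actual PySpark implementation isn't shown in this excerpt, but the two-phase idea behind distributed cumulative sums is short enough to sketch in pure Python, for a single group whose data is already globally sorted and split into partitions (the partitioning scheme here is an assumption for illustration):

```python
from itertools import accumulate

def distributed_cumsum(partitions):
    """Two-phase cumulative sum over a list of partitions (lists of
    numbers). Phase 1 computes each partition's total; phase 2 adds the
    running offset of all earlier partitions to each local cumulative sum.
    Both phases parallelize across partitions; only the small list of
    totals is handled centrally."""
    # Phase 1: per-partition totals (one pass over the data)
    totals = [sum(p) for p in partitions]
    # Offsets: sum of everything strictly before each partition
    offsets = [0] + list(accumulate(totals))[:-1]
    # Phase 2: local cumulative sums, shifted by the partition's offset
    return [[off + c for c in accumulate(p)]
            for off, p in zip(offsets, partitions)]
```

Only the totals (one number per partition) move between machines, which is what keeps memory bounded even when a group is very large.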

  4. Demystifying Logistic Regression

For our hackathon this week, I, along with several co-workers, decided to re-implement Vowpal Wabbit (aka “VW”) in Go, both as a chance to learn more about how logistic regression, a common machine learning approach, works and to gain some practical programming experience with Go.

    Though our hackathon project focused on learning Go, in this post I want to spotlight logistic regression, which is far simpler in practice than I had previously thought. I’ll use a very simple (perhaps simplistic?) implementation in pure Python to explain how to train and use a logistic regression model.

  5. VIRBs and Sampling Events from Streams

VIRB (Variable Incoming Rate Biased) reservoir sampling is a streaming sampling algorithm that stores a representative fixed-size sample of events from the recent past (the user specifies the desired mean age of the samples), even when the incoming rate varies. It is heavily inspired by reservoir sampling.
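The VIRB algorithm itself isn't spelled out in this excerpt, but the classic reservoir sampling it builds on (Algorithm R) is short enough to sketch; VIRB modifies the acceptance probability to bias toward recent events:

```python
import random

def reservoir_sample(stream, k, rng=random):
    """Classic reservoir sampling: keep a uniform random sample of k
    items from a stream of unknown length, in a single pass and O(k)
    memory."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            # Item i is kept with probability k / (i + 1); if kept, it
            # evicts a uniformly chosen current resident.
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir
```

Every item seen so far ends up in the reservoir with equal probability, which is exactly the property VIRB relaxes in order to favor the recent past.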

  6. Real Time Facial Recognition in Python

    Last month we had another instance of our quarterly hackathon. I had an urge to experiment a bit with computer vision, despite not having done anything related before.

    Our hackathons are around 48 hours long, which I hoped would be long enough to do some simple facial recognition. My goal ...

  7. One-Pass Distributed Random Sampling

One of the important factors affecting the efficiency of our predictive models is their recency. The earlier our bidders get a new version of the prediction model, the better decisions they can make. Delays in producing the model result in lost money due to incorrect predictions.

The slowest steps in our modeling pipeline are those that require manipulating the full data set — multiple weeks' worth of data. Our sampling process has historically required two full passes over the data set, and so was an obvious target for optimization.

  8. Click Prediction with Vowpal Wabbit

    At the core of our automated campaign optimization algorithms lies a difficult problem: predicting the outcome of an event before it happens. With a good predictor, we can craft algorithms to maximize campaign performance, minimize campaign cost, or balance the two in some way. Without a good predictor, all we can do is hope for the best.

  9. Real-time Ad Targeting with Apache Kafka

Here at Magnetic, a search-retargeting company, our core business is to serve relevant ads to viewers. Our platform performs this task well, matching viewers with related ads through various signals, including page visits, search queries, and the analytics we derive from each. It currently takes about 15 minutes on average for us to react to new events in our core targeting infrastructure. If we could reduce this time, we would make our engineers, product management, ad operations, and our CEO really happy.

  10. Optimize Python with Closures

    Magnetic’s real-time bidding system, written in pure Python, needs to keep up with a tremendous volume of incoming requests. On an ordinary weekday, our application handles about 300,000 requests per second at peak volumes, and responds in under 10 milliseconds. It should be obvious that at this scale optimizing the performance of the hottest sections of our code is of utmost importance. This is the story of the evolution of one such hot section over several performance-improving revisions.
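The post's actual hot section isn't shown in this excerpt. As a toy illustration of the technique the title names (not Magnetic's real code), a closure can bind expensive global and attribute lookups into names resolved once, at function-creation time, rather than on every call:

```python
def make_handler(score_table):
    """Build a hot-path function as a closure. The attribute lookup
    score_table.get is performed once here; inside handle() the name
    resolves through the closure cell, avoiding a repeated global +
    attribute lookup on every request."""
    get_score = score_table.get  # bound once, captured by the closure
    def handle(request_key, default=0.0):
        return get_score(request_key, default)
    return handle
```

At hundreds of thousands of calls per second, shaving a dictionary attribute lookup per call is exactly the kind of micro-optimization that becomes worthwhile.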

  11. Good Test, Bad Test

    A good test suite is a developer’s best friend — it tells you what your code does and what it’s supposed to do. It’s your second set of eyes as you’re working, and your safety net before you go to production.

    By contrast, a bad test suite stands in the way of progress — whenever you make a small change, suddenly fifty tests are failing, and it’s not clear how or why the cases are related to your change.