Statistical techniques used in anomaly detection

Anomaly detection in Analysis Workspace uses a series of advanced statistical techniques to determine whether an observation should be considered anomalous or not.

Depending on the date granularity used in the report, three different statistical techniques are used: one each for hourly, daily, and weekly/monthly anomaly detection. Each technique is outlined below.

Anomaly detection for daily granularity

For daily granularity reports, the algorithm considers several important factors to deliver the most accurate results possible. First, based on the available data, the algorithm determines which of two classes of model to apply: a time-series-based model or an outlier-detection model (called functional filtering).

The time series model selection is based on combinations of error, trend, and seasonality (ETS) types, as described by Hyndman et al. (2008). Specifically, the algorithm tries the following combinations:

  1. ANA (additive error, no trend, additive seasonality)
  2. AAA (additive error, additive trend, additive seasonality)
  3. MNM (multiplicative error, no trend, multiplicative seasonality)
  4. MNA (multiplicative error, no trend, additive seasonality)
  5. AAN (additive error, additive trend, no seasonality)

The algorithm tests the suitability of each of these candidates and selects the one with the lowest mean absolute percentage error (MAPE). However, if the MAPE of the best time series model is greater than 15%, functional filtering is applied instead. Typically, data with a high degree of repetition (e.g., week over week or month over month) fits a time series model best.
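
This selection rule can be illustrated with a short sketch. It uses statsmodels’ ETSModel as a stand-in for the production implementation; the synthetic data, the weekly seasonal period, and all variable names are assumptions for the demo, while the five candidate models and the 15% threshold come from the text above.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.exponential_smoothing.ets import ETSModel

# Illustrative daily metric with a weekly cycle (synthetic data).
rng = np.random.default_rng(0)
idx = pd.date_range("2023-01-01", periods=70, freq="D")
y = pd.Series(1000 + 200 * np.sin(2 * np.pi * np.arange(70) / 7)
              + rng.normal(0, 30, 70), index=idx)

# The five ETS candidates named in the text: (error, trend, seasonality).
candidates = {
    "ANA": dict(error="add", trend=None,  seasonal="add"),
    "AAA": dict(error="add", trend="add", seasonal="add"),
    "MNM": dict(error="mul", trend=None,  seasonal="mul"),
    "MNA": dict(error="mul", trend=None,  seasonal="add"),
    "AAN": dict(error="add", trend="add", seasonal=None),
}

def mape(actual, fitted):
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((actual - fitted) / actual))) * 100

scores = {}
for name, spec in candidates.items():
    periods = 7 if spec["seasonal"] else None  # weekly season for daily data
    fit = ETSModel(y, seasonal_periods=periods, **spec).fit(disp=False)
    scores[name] = mape(y, fit.fittedvalues)

best = min(scores, key=scores.get)
if scores[best] > 15:                          # 15% threshold from the text
    print("Best MAPE > 15%; fall back to functional filtering")
else:
    print(f"Selected {best} (MAPE {scores[best]:.1f}%)")
```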

After model selection, the algorithm adjusts the results for holidays and year-over-year seasonality. For holidays, the algorithm checks whether any of the following holidays are present in the reporting date range:

  • Memorial Day
  • July 4
  • Thanksgiving
  • Black Friday
  • Cyber Monday
  • December 24-26
  • January 1
  • December 31

These holidays were selected based on extensive statistical analysis across many customers’ data to identify the holidays that mattered most to the largest number of customers’ trends. While the list is certainly not exhaustive for all customers or business cycles, we found that applying these holidays significantly improved the overall performance of the algorithm for nearly all customers’ datasets.

Once the model has been selected and holidays have been identified in the reporting date range, the algorithm proceeds in the following manner:

  1. Construct the anomaly reference period. This includes up to 35 days prior to the reporting date range, plus a matching date range 1 year prior (accounting for leap days when required, and inclusive of any applicable holidays that may have fallen on a different calendar day the previous year).

  2. Test whether holidays in the current period (excluding the prior year) are anomalous based on the most recent data.

  3. If the holiday in the current date range is anomalous, adjust the expected value and confidence interval of the current holiday given the previous year’s holiday (considering 2 days before and after). The correction for the current holiday is based on whichever of the following yields the lowest mean absolute percentage error (see the sketch after this list):

    1. Additive effects
    2. Multiplicative effects
    3. YoY difference
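
A minimal sketch of the holiday correction in step 3, assuming illustrative window values; the three adjustment forms below are plausible interpretations of additive effects, multiplicative effects, and YoY difference, not the production formulas. Last year’s holiday window (2 days before and after) is mapped onto this year’s expectation in three candidate ways, and the one minimizing MAPE wins.

```python
import numpy as np

# Hypothetical metric values around the same holiday (2 days before/after);
# index 2 is the holiday itself. Numbers are illustrative only.
prev_year = np.array([980., 1010., 2400., 1050., 995.])
this_year = np.array([1120., 1170., 2950., 1190., 1135.])

def mape(actual, expected):
    return float(np.mean(np.abs((actual - expected) / actual))) * 100

# Three candidate ways to turn last year's holiday shape into this year's
# expected values (assumed forms, for illustration).
candidates = {
    "additive":       prev_year + (this_year.mean() - prev_year.mean()),
    "multiplicative": prev_year * (this_year.mean() / prev_year.mean()),
    "yoy_difference": prev_year,  # reuse last year's values directly
}

best = min(candidates, key=lambda k: mape(this_year, candidates[k]))
print(f"Holiday correction: {best} (MAPE {mape(this_year, candidates[best]):.1f}%)")
# The expected value and confidence interval for the holiday would then be
# adjusted using the winning candidate.
```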

In practice, this holiday correction yields a dramatic improvement in performance on days such as Christmas Day and New Year’s Day.

Anomaly detection for hourly granularity

Hourly data relies on the same time series approach as the daily granularity algorithm. However, it depends heavily on two trend patterns: the 24-hour cycle and the weekend/weekday cycle. To capture these two seasonal effects, the hourly algorithm constructs two separate models, one for weekends and one for weekdays, using the same approach outlined above.

Training for hourly trends relies on a 336-hour (14-day) lookback window.
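
As a sketch of the data handling, the 336-hour lookback can be split into weekday and weekend series, each of which would then be modeled separately (for example with the ETS selection shown earlier and a 24-hour seasonal period). The data and names here are illustrative assumptions.

```python
import numpy as np
import pandas as pd

# Illustrative hourly metric over the 336-hour (14-day) lookback window.
idx = pd.date_range(end="2023-06-30 23:00", periods=336, freq="h")
hourly = pd.Series(np.random.default_rng(1).poisson(500, 336), index=idx)

# Split into the two seasonal regimes described above.
is_weekend = hourly.index.dayofweek >= 5          # Saturday=5, Sunday=6
weekend, weekday = hourly[is_weekend], hourly[~is_weekend]

# Each series would get its own ETS model, e.g. seasonal_periods=24 to
# capture the 24-hour cycle (see the daily-granularity sketch above).
print(len(weekday), "weekday hours;", len(weekend), "weekend hours")
```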

Anomaly detection for weekly and monthly granularities

Weekly and monthly data do not exhibit the same daily or weekly cycles found at finer granularities, so a separate algorithm is used: a two-step outlier detection approach built around the Generalized Extreme Studentized Deviate (GESD) test. This approach uses the adjusted box-plot method (a non-parametric technique for outlier discovery) to determine the maximum number of expected anomalies, which is then passed to the GESD test. The two steps are:

  1. Adjusted box-plot function: This function determines the maximum number of anomalies given the input data.
  2. GESD function: Applied to the input data, using the maximum number of anomalies from step 1 as the number of candidates to test.
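
A hedged sketch of the two steps, assuming toy data and a standard 5% significance level: the adjusted box-plot fences follow Hubert and Vandewalle’s medcouple-based form, and the GESD critical values are the textbook ones; the production thresholds may differ.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.stattools import medcouple

def max_outliers_adjusted_boxplot(x):
    """Step 1: count points outside the skewness-adjusted box-plot fences."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr, mc = q3 - q1, float(medcouple(x))
    if mc >= 0:
        lo = q1 - 1.5 * np.exp(-4 * mc) * iqr
        hi = q3 + 1.5 * np.exp(3 * mc) * iqr
    else:
        lo = q1 - 1.5 * np.exp(-3 * mc) * iqr
        hi = q3 + 1.5 * np.exp(4 * mc) * iqr
    return int(np.sum((x < lo) | (x > hi)))

def gesd(x, max_outliers, alpha=0.05):
    """Step 2: generalized ESD test; returns indices of detected anomalies."""
    x, idx = np.asarray(x, float), np.arange(len(x))
    removed, n_anom = [], 0
    for i in range(1, max_outliers + 1):
        n = len(x)
        z = np.abs(x - x.mean()) / x.std(ddof=1)
        j = int(np.argmax(z))
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
        lam = (n - 1) * t / np.sqrt((n - 2 + t ** 2) * n)  # critical value
        if z[j] > lam:
            n_anom = i          # largest i whose statistic exceeds its lambda
        removed.append(idx[j])
        x, idx = np.delete(x, j), np.delete(idx, j)
    return removed[:n_anom]

# Toy 15-period lookback (e.g. 15 weeks), per the text.
values = np.array([102., 98., 101., 97., 180., 99., 103., 96., 100.,
                   250., 101., 99., 98., 102., 100.])
k = max_outliers_adjusted_boxplot(values)
print(gesd(values, max_outliers=k) if k else "no candidate outliers")
```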

The holiday and YoY seasonality detection step then subtracts last year’s data from this year’s data and runs the two-step process above on the result to verify that anomalies are seasonally appropriate. Each of these date granularities uses a 15-period lookback inclusive of the selected reporting date range (either 15 months or 15 weeks), plus a corresponding date range 1 year prior, for training.
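
Reusing the helpers from the previous sketch, the YoY verification pass could look as follows; the prior-year series is hypothetical. Note how a spike that repeats year over year (index 4) survives the subtraction, while a novel spike (index 9) is still flagged.

```python
# Reuses max_outliers_adjusted_boxplot() and gesd() from the sketch above.
import numpy as np

this_year = np.array([102., 98., 101., 97., 180., 99., 103., 96., 100.,
                      250., 101., 99., 98., 102., 100.])
last_year = np.array([95., 96., 99., 94., 170., 97., 99., 93., 96.,
                      101., 98., 95., 96., 99., 97.])   # hypothetical

yoy = this_year - last_year            # remove last year's seasonality
k = max_outliers_adjusted_boxplot(yoy)
seasonal_anomalies = gesd(yoy, max_outliers=k) if k else []
print("Seasonally verified anomalies at indices:", seasonal_anomalies)
```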

Statistical techniques used in Contribution Analysis

Contribution Analysis is an intensive machine learning process designed to uncover contributors to an observed anomaly in Adobe Analytics. The intent is to assist the user in finding areas of focus or opportunities for additional analysis much more quickly than would otherwise be possible.

Contribution Analysis accomplishes this by applying a two-part algorithm to every dimension item available in the user’s Contribution Analysis report. The algorithm operates in this order:

  1. For each dimension, it computes the Cramer’s V test statistic. As an example, consider a contingency table of page views by country over two time periods:

    In Table 1, Cramer’s V can be used to measure the association between the page views by country for period 1 (e.g., historical) and period 2 (e.g., the day the anomaly occurred). A low value for Cramer’s V implies a low level of association; Cramer’s V ranges from 0 (no association) to 1 (complete association). In its standard form, for an $r \times c$ table with $n$ observations and chi-squared statistic $\chi^2$, the Cramer’s V statistic can be computed as:

    $$V = \sqrt{\frac{\chi^2 / n}{\min(r - 1,\ c - 1)}}$$

  2. For each dimension item, Pearson’s residual (PR) is used to measure the association between the anomalous metric and that dimension item. The PR follows a standard normal distribution, which allows the algorithm to compare the PRs of two random variables even when their deviations are not directly comparable. In practice, the error is not known and is estimated using a finite sample correction.

    In the previous example (Table 1), the PR with finite sample correction for country i and time period 2 compares the observed count against its expected value under independence, scaled by the estimated standard error. (A similar formula can be obtained for time period 1; a standard form is shown in the sketches after this list.)

    For final results, the score for each dimension item is then weighted by the Cramer’s V measure and rescaled to a number between 0 and 1 to provide its contribution score.
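
Both steps can be sketched on an illustrative stand-in for Table 1 (all counts are made up). Step 1 computes Cramer’s V from the chi-squared statistic:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Illustrative stand-in for Table 1: rows are period 1 (historical) and
# period 2 (the anomaly day); columns are countries. Counts are made up.
table = np.array([[4000., 2500., 1200.,  800.],
                  [3900., 2600., 1100., 2400.]])

chi2 = chi2_contingency(table)[0]
n = table.sum()
r, c = table.shape
cramers_v = np.sqrt((chi2 / n) / (min(r, c) - 1))
print(f"Cramer's V = {cramers_v:.3f}")   # 0 = no association, 1 = complete
```

Step 2 is shown below with the textbook standardized Pearson residual, which is asymptotically standard normal; the exact finite-sample correction and the final rescaling used in production are assumptions here. It continues from the variables defined above.

```python
# Expected counts under independence, and standardized Pearson residuals.
row_p = table.sum(axis=1, keepdims=True) / n    # period proportions
col_p = table.sum(axis=0, keepdims=True) / n    # country proportions
expected = n * row_p * col_p
resid = (table - expected) / np.sqrt(expected * (1 - row_p) * (1 - col_p))

# Weight period-2 residuals by Cramer's V and rescale to [0, 1] to get
# per-country contribution scores (rescaling choice is an assumption).
scores = np.abs(resid[1]) * cramers_v
scores /= scores.max()
print(np.round(scores, 3))   # the last country dominates in this toy table
```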
