Lighthouse Studio

Key Driver Analysis Reporting

Using the Results

Lighthouse Studio employs a methodology called Johnson's Relative Weights Analysis (RWA), which measures the proportion of variance in the dependent variable attributable to each independent variable. Among the many driver analysis options available (general regression, Shapley Value, etc.), Johnson's Relative Weights is more robust: it controls for collinearity among the items, handles missing data gracefully, and is fast and scalable. The drawback is that it can only accommodate metric variables for both the dependent variable and the independent variables.
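For readers who want to see the mechanics, Johnson's method can be sketched in a few lines: the predictors are transformed through the square root of their correlation matrix into an orthogonal set, the outcome is regressed on that set, and the squared loadings distribute the R-square back to the original variables. This is a minimal illustration of the published algorithm, not Lighthouse Studio's implementation; the function name and listwise-complete input are assumptions.

```python
import numpy as np

def relative_weights(X, y):
    """Johnson's relative weights for predictors X (n x p) and outcome y (n,)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n = len(y)
    # Standardize predictors and outcome
    X = (X - X.mean(0)) / X.std(0, ddof=1)
    y = (y - y.mean()) / y.std(ddof=1)
    Rxx = np.corrcoef(X, rowvar=False)        # predictor intercorrelations
    rxy = X.T @ y / (n - 1)                   # predictor-outcome correlations
    evals, V = np.linalg.eigh(Rxx)
    Lam = V @ np.diag(np.sqrt(evals)) @ V.T   # Rxx^(1/2): loadings on orthogonal set
    beta = np.linalg.solve(Lam, rxy)          # betas of y on the orthogonal variables
    raw = (Lam ** 2) @ (beta ** 2)            # raw weights; they sum to R-square
    return raw, 100 * raw / raw.sum()         # raw and normalized-to-100 weights
```

The raw weights sum to the model R-square, which is why normalizing them to 100 loses no information about relative impact.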

Importance Weights

The primary output of Key Driver Analysis is the reported Importance Weights. The raw scores apportion the overall R-square value across the variables in proportion to their impact on the outcome variable. We normalize these values to sum to 100 to make them easier for clients to interpret. Importance Weights are ratio-scaled numbers that indicate how much impact each driver has: an item with a score of 0.08 is twice as impactful as an item with a score of 0.04.
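The normalization step and the ratio property can be shown with a small example. The driver names and raw values below are hypothetical, chosen so the raw scores sum to an R-square of 0.40.

```python
# Hypothetical raw Importance Weights; they sum to the model's R-square (0.40 here).
raw = {"Ease of use": 0.08, "Price": 0.04, "Support": 0.12, "Speed": 0.16}

r_square = sum(raw.values())
weights = {k: 100 * v / r_square for k, v in raw.items()}
# Ratios survive normalization: 0.08 vs 0.04 raw becomes 20 vs 10, still a 2:1 ratio.
```

Because the weights are ratio-scaled, dividing any two of them (raw or normalized) gives the same answer about relative impact.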

Correlations and Variance Inflation Factor (VIF)

The next section of the report contains a standard correlation matrix for your independent variables. High correlations among predictors are a symptom of multicollinearity, where two or more predictors carry largely overlapping information. The report also includes a direct measure of multicollinearity, the Variance Inflation Factor (VIF): how much the variance of a coefficient is inflated due to a predictor's correlation with the other predictors. A VIF of 1 indicates no collinearity, while higher numbers indicate more collinearity between your measures; a value above a common threshold (typically 5 or 10) indicates strong multicollinearity. While Johnson's Relative Weights allows us to include correlated variables in the analysis, those correlations may make it difficult to identify the true key drivers. By identifying predictors with high VIFs, you can decide to remove or combine variables to reduce multicollinearity. This refinement helps in obtaining more accurate and interpretable results.
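The VIF definition above translates directly into code: regress each predictor on all the others and compute 1 / (1 - R-square). This is a generic sketch of the standard formula, not the report's internal code.

```python
import numpy as np

def vif(X):
    """VIF_j = 1 / (1 - R^2_j), where R^2_j regresses predictor j on the rest."""
    X = np.asarray(X, float)
    Xc = X - X.mean(0)                        # center so no intercept is needed
    out = []
    for j in range(X.shape[1]):
        yj = Xc[:, j]
        Xo = np.delete(Xc, j, axis=1)         # all other predictors
        b, *_ = np.linalg.lstsq(Xo, yj, rcond=None)
        r2 = 1 - ((yj - Xo @ b) ** 2).sum() / (yj ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)
```

Equivalently, the VIFs are the diagonal of the inverse of the predictors' correlation matrix, `np.diag(np.linalg.inv(np.corrcoef(X, rowvar=False)))`.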

Eigenvectors and Eigenvalues (Optional)

Eigenvectors are vectors that represent the directions of maximum variance in a dataset. In KDA, they help identify the most important underlying factors or components that contribute to the variation in the data.

Eigenvalues measure the amount of variance in the data that is explained by each eigenvector. In KDA, they indicate the relative importance of each eigenvector (and hence of each underlying factor). While not used as often as correlations and VIF in determining model structure, they can be useful in deciding which variables to include in your model and in confirming the importance of your drivers.
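Both quantities come from the eigendecomposition of the predictors' correlation matrix, as in the sketch below (the function name is illustrative). A useful diagnostic: the eigenvalues sum to the number of predictors, and an eigenvalue near zero means one predictor is almost a linear combination of the others, i.e. severe multicollinearity.

```python
import numpy as np

def correlation_eigen(X):
    """Eigenvalues/eigenvectors of the predictors' correlation matrix."""
    R = np.corrcoef(np.asarray(X, float), rowvar=False)
    evals, evecs = np.linalg.eigh(R)     # eigh returns ascending order
    return evals[::-1], evecs[:, ::-1]   # largest component first
```

A dominant first eigenvalue indicates a strong shared factor among the drivers; a trailing eigenvalue close to zero flags a near-redundant variable.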

Reporting Stacked Key Driver Analysis

Reporting for Stacked Key Driver Analysis is the same as for standard Key Driver Analysis, except that the sample size reported represents the number of sets of ratings, not the number of respondents. If 500 respondents each rated two brands, the total stacked sample size would be 1,000, not 500.
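Stacking simply reshapes the data from one row per respondent to one row per respondent-brand combination, so the stacked sample size is respondents times ratings per respondent. The toy data and field names below are hypothetical.

```python
# Hypothetical wide data: one row per respondent, ratings for two brands.
wide = [
    {"resp": 1, "brandA_sat": 7, "brandA_price": 5, "brandB_sat": 4, "brandB_price": 6},
    {"resp": 2, "brandA_sat": 6, "brandA_price": 7, "brandB_sat": 8, "brandB_price": 3},
]

# Stack: one row per respondent-brand pair.
stacked = []
for row in wide:
    for brand in ("brandA", "brandB"):
        stacked.append({
            "resp": row["resp"],
            "brand": brand,
            "sat": row[f"{brand}_sat"],
            "price": row[f"{brand}_price"],
        })
# 2 respondents x 2 brands -> 4 stacked cases, the sample size the report shows
```

With 500 respondents and two brands each, the same reshape yields the 1,000 stacked cases described above.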


Created with Help & Manual 8 and styled with Premium Pack Version 4 © by EC Software