Think of the statistical test for the significance of an interaction between two attributes as something of an omnibus test, one that has to be significant before you look at the details. Only if the interaction between two attributes is significant does it make sense to examine the specific interactions involving pairs of levels.
Moreover, if you're testing a large number of interactions, you run the risk of mistaking experiment-wise error for significant interactions. While the chance of a false positive in a single test run at 95% confidence is 5% (by definition), if you run 10 independent tests the chance of at least one false positive rises to about 40%, because 1 - 0.95^10 is roughly 0.40. You may want to correct for this using an adjustment like the Benjamini-Hochberg procedure for controlling the false discovery rate.
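To make the arithmetic and the correction concrete, here's a minimal pure-Python sketch: the experiment-wise error calculation for 10 tests, followed by a hand-rolled Benjamini-Hochberg step-up procedure. The p-values are hypothetical stand-ins for whatever your aggregate logit interaction tests produce.

```python
# Family-wise error rate: chance of at least one false positive across
# 10 independent tests, each run at 95% confidence.
fwer = 1 - 0.95 ** 10
print(round(fwer, 3))  # about 0.401

def benjamini_hochberg(pvalues, q=0.05):
    """Return indices of hypotheses rejected at false discovery rate q."""
    m = len(pvalues)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k such that p_(k) <= (k / m) * q.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * q:
            k_max = rank
    # Reject the k_max smallest p-values.
    return sorted(order[:k_max])

# 10 hypothetical p-values from candidate interaction tests:
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.368]
print(benjamini_hochberg(pvals, q=0.05))  # [0, 1]
```

Note that a naive cutoff of p < 0.05 would flag five of these ten tests; Benjamini-Hochberg keeps only the two whose p-values survive the rank-scaled thresholds.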
One more thing: a lot of the time, what look like interactions in aggregate logit models turn out to be non-significant when you run the analysis with HB - heterogeneity at the individual level sometimes masquerades as interaction at the more aggregate level. So once you've identified candidate interactions with the aggregate logit tests (suitably corrected for multiple testing), you'll probably want to confirm that they're still present when you estimate the model with HB, using appropriate tests on the respondent-level utilities.