I am running an HB analysis on my CBC data, which I originally collected and then exported from Lighthouse Studio to dual .csv files.
When I run the HB analysis inside Lighthouse Studio (LH), I notice that the number of choices in each response category differs from the standalone program (SA), although the expanded task totals are the same (I have only 2 concepts per task):
LH: concept 1 52.15% vs concept 2 47.85%; 3576 expanded tasks in total & 8.0 on average
SA: concept 1 45.78% vs concept 2 54.22%; 3576 expanded tasks in total & 8.0 on average
Note that the task includes a dual-response none option, but I am excluding it from this analysis. I also have 2 fixed tasks that I excluded from the analysis.
What might be the reason for this discrepancy, and which analysis can I rely on?
By the way, when I opened the Excel file of responses and manually calculated the percentage of choices for concept 1 across all respondents, the number was consistent with the LH results. Also, when I ran the SA program on the .cho file, it gave the same percentages as LH (52.15% vs 47.85%; 3576 expanded tasks in total and 8.0 on average).
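For reference, the manual check I did is essentially the following tally (a minimal sketch in Python; the column name "Response" and the concept codes are placeholders for my actual export layout, not the exact Lighthouse file format):

```python
from collections import Counter

def concept_shares(rows, choice_col="Response"):
    """Tally how often each concept was chosen across all expanded tasks
    and return (share per concept, total number of tasks)."""
    counts = Counter(row[choice_col] for row in rows)
    total = sum(counts.values())
    return {concept: n / total for concept, n in counts.items()}, total

# Tiny synthetic example standing in for rows read from the exported .csv
# (e.g. via csv.DictReader); "1"/"2" are the chosen-concept codes.
rows = [{"Response": "1"}, {"Response": "2"},
        {"Response": "1"}, {"Response": "1"}]
shares, total = concept_shares(rows)
print(total)        # number of expanded tasks
print(shares["1"])  # share of tasks where concept 1 was chosen
```

Running this over all respondents' rows reproduces the 52.15% / 47.85% split that LH reports.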
I wanted to use the .csv file because I am adding an interaction term in the Latent Class analysis; I am analyzing the data with both LCA and HB to meet the comparison requirements of my project.
Also, if I use the SA results, is there a program for analyzing them to compare subgroups, like the MaxDiff Analyzer does for MaxDiff results? Can I import the SA results into LH for that analysis?