lscottmyers opened this issue 2 years ago
Hi @lscottmyers -- thanks for your interest in SEMinR. We should really add some documentation or a notice about this to our results. The test has two tails (e.g., < 0.05 and > 0.95 at the 5% significance level), and both tails imply a significant difference; which tail applies depends on whether the group difference is positive or negative.
From the SmartPLS documentation on PLS-MGA:
This method is a non-parametric significance test for the difference of group-specific results that builds on PLS-SEM bootstrapping results. A result is significant at the 5% probability of error level, if the p-value is smaller than 0.05 or larger than 0.95 for a certain difference of group-specific path coefficients.
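To make the two-tailed decision rule concrete, here is a minimal sketch in R (the helper `is_mga_significant` is hypothetical, not part of SEMinR):

```r
# Hypothetical helper (not part of SEMinR): flag a PLS-MGA p-value as
# significant at a given alpha when it falls in either tail. Which tail
# indicates which group's path is larger depends on the group ordering.
is_mga_significant <- function(p, alpha = 0.05) {
  p < alpha | p > (1 - alpha)
}

is_mga_significant(0.966)  # TRUE:  significant difference (upper tail)
is_mga_significant(0.034)  # TRUE:  significant difference (lower tail)
is_mga_significant(0.560)  # FALSE: no significant difference
```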
Also, please use a much larger `nboot` to get more stable results -- I realize you are likely trying what is in our tests/documentation (deliberately kept small due to the constraints of running tests on CRAN). Our default `nboot` is 2000.
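For example (a sketch reusing the help-page call; `mobi_pls` is the PLS model estimated in the `estimate_pls_mga` help page):

```r
# Sketch: the help-page call, but with the default nboot of 2000 for
# more stable PLS-MGA results (mobi_pls as estimated in the help page).
mobi_mga <- estimate_pls_mga(mobi_pls, mobi$CUEX1 <= 8, nboot = 2000, cores = 2)
```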
Lastly (pending your follow-up questions, of course), please keep this issue open, as I'd like to take the opportunity to think about where to add a note about the significance of PLS-MGA in our documentation or results.
Thanks for your replies and for explaining that this is a two-tailed test. For my real data I am using `nboot = 2000`; I only used 50 in the example to match the CRAN documentation. And thanks for creating, writing, and maintaining everything associated with the SEMinR package; I am finding it very helpful in my work.
I noticed that the p-values for group 1 vs. group 2 are different depending on the order of the filter in the `estimate_pls_mga` function. The examples use the mobi data and are from the R help page for `estimate_pls_mga`:

```r
mobi_mga <- estimate_pls_mga(mobi_pls, mobi$CUEX1 <= 8, nboot = 50, cores = 2)
print(mobi_mga)
```

```
PLS-MGA results:
  from          ->  to             group1    group2    p
  Image         ->  Expectation    0.46982   0.4437    0.560
  Image         ->  Satisfaction   0.25380   0.1280    0.227
  Image         ->  Loyalty        0.22649   0.2188    0.509
  Expectation   ->  Quality        0.49681   0.5565    0.723
  Expectation   ->  Value          0.04030   0.0283    0.517
  Expectation   ->  Satisfaction   0.00196   0.1578    0.858
  Quality       ->  Value          0.53122   0.6060    0.616
  Quality       ->  Satisfaction   0.46512   0.5107    0.569
  Value         ->  Satisfaction   0.21564   0.1727    0.238
  Satisfaction  ->  Complaints     0.51903   0.5326    0.552
  Satisfaction  ->  Loyalty        0.39776   0.6482    0.966
  Complaints    ->  Loyalty        0.08839  -0.0425    0.192
```
Now flip the filter so group 1 and group 2 are switched but have the same members:
```r
mobi_mga2 <- estimate_pls_mga(mobi_pls, mobi$CUEX1 > 8, nboot = 50, cores = 2)
print(mobi_mga2)
```

```
PLS-MGA results:
  from          ->  to             group1    group2    p
  Image         ->  Expectation    0.4437    0.46982   0.3680
  Image         ->  Satisfaction   0.1280    0.25380   0.8508
  Image         ->  Loyalty        0.2188    0.22649   0.4140
  Expectation   ->  Quality        0.5565    0.49681   0.0640
  Expectation   ->  Value          0.0283    0.04030   0.4936
  Expectation   ->  Satisfaction   0.1578    0.00196   0.0636
  Quality       ->  Value          0.6060    0.53122   0.3036
  Quality       ->  Satisfaction   0.5107    0.46512   0.5176
  Value         ->  Satisfaction   0.1727    0.21564   0.6544
  Satisfaction  ->  Complaints     0.5326    0.51903   0.5144
  Satisfaction  ->  Loyalty        0.6482    0.39776   0.0728
  Complaints    ->  Loyalty       -0.0425    0.08839   0.8060
```
The betas for each path match up regardless of the direction of the filter, but the p-values are very different; e.g., the Expectation -> Quality p-value is 0.723 in the first example and 0.064 in the second. Is this expected? It seems odd that the p-value for the difference between the two groups depends on which way the filter is set up. If this is expected, how do we choose the "right" way to set up the filter so we get accurate p-values?
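A sketch of how one might probe this (assuming, unverified, that the reported p is one-sided in the group1-versus-group2 direction, in which case flipping the filter should give roughly 1 - p once bootstrap noise is controlled): fix the RNG seed and raise `nboot` before each run, since with `nboot = 50` the two runs draw independent, noisy bootstrap samples.

```r
# Hedged check (assumes the p-value is directional, so the flipped
# filter should yield approximately 1 - p): fix the seed and use a
# large nboot so the two runs are comparable. cores = 1 so set.seed
# controls the resampling (parallel workers may use separate RNG streams).
set.seed(123)
mga_a <- estimate_pls_mga(mobi_pls, mobi$CUEX1 <= 8, nboot = 2000, cores = 1)
set.seed(123)
mga_b <- estimate_pls_mga(mobi_pls, mobi$CUEX1 > 8,  nboot = 2000, cores = 1)
# Compare each path's p in mga_a against 1 minus the corresponding p in mga_b.
```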