## Answers

Going by the description, I say it is a two-sided test with H0: mu1 = mu2 and H1: mu1 != mu2.

greetings

Steffen

As far as I can see, the following method calculates the p-value of the test: TTestSignificanceTestOperator#getProbability(PerformanceCriterion pc1, PerformanceCriterion pc2). The test itself is ''performed'' in TTestSignificanceTestOperator#TTestSignificanceTestResult#toString(), i.e. in the comparison pvalue < alpha, which is no clear indication of a left-sided test... This is the formula used:

http://en.wikipedia.org/wiki/Student%27s_t-test#Unequal_sample_sizes.2C_equal_variance => two-sided test
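
Not RapidMiner's own code, but for illustration: a minimal sketch of that two-sided, pooled-variance test using Apache Commons Math instead of the operator's internals (the library choice and the sample values are my assumptions):

```java
import org.apache.commons.math3.stat.inference.TTest;

public class PooledTTestSketch {
    public static void main(String[] args) {
        // Hypothetical performance values from two models (made-up data).
        double[] perf1 = {0.81, 0.79, 0.84, 0.80, 0.82};
        double[] perf2 = {0.75, 0.77, 0.74, 0.78, 0.76};

        TTest test = new TTest();

        // Pooled-variance test, i.e. the "unequal sample sizes, equal
        // variance" formula from the link above. The returned p-value
        // is already two-sided.
        double pPooled = test.homoscedasticTTest(perf1, perf2);

        // Welch's test drops the equal-variance assumption.
        double pWelch = test.tTest(perf1, perf2);

        System.out.println("pooled p-value:  " + pPooled);
        System.out.println("Welch's p-value: " + pWelch);
    }
}
```

If the two p-values differ noticeably, the equal-variance assumption is doing real work on that data.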

BUT looking at this formula raises further questions.

First: I have read somewhere that the assumption of equal variances is not a problem as long as the sample sizes are equal. On the other hand, if this does not hold, no one can guarantee anything about the true alpha error. What do you think about that?

Second:

I thought that in the case of a two-sided test the alpha parameter must be divided by 2. Or is this already implied by the test statistic?
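
For context, a sketch of how a two-sided p-value is typically constructed (the t statistic and degrees of freedom below are placeholders, not values from the operator): the tail probability is doubled inside the p-value itself, so the later comparison p < alpha needs no extra division by 2.

```java
import org.apache.commons.math3.distribution.TDistribution;

public class TwoSidedPValueSketch {
    public static void main(String[] args) {
        double t  = 2.31; // placeholder t statistic
        double df = 8.0;  // placeholder degrees of freedom (n1 + n2 - 2)

        // Two-sided p-value: by symmetry P(T <= -|t|) = P(T >= |t|),
        // and doubling it covers both tails, so the caller compares
        // p < alpha directly rather than p < alpha / 2.
        TDistribution dist = new TDistribution(df);
        double p = 2.0 * dist.cumulativeProbability(-Math.abs(t));

        System.out.println("two-sided p-value: " + p);
    }
}
```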

greetings

Steffen

PS: I prefer arguments like this one, backed by class names and line numbers.

I guess everything is clear now.

greetings

Steffen