| author | sotech117 <26747948+sotech117@users.noreply.github.com> | 2024-03-11 15:48:04 -0400 |
|---|---|---|
| committer | GitHub <noreply@github.com> | 2024-03-11 15:48:04 -0400 |
| commit | 9bc70af0d617ada11d0a27b4902183b6f9b017d8 (patch) | |
| tree | 31bca96b06d5d5b996a1ec907a9f8dae446d2a2d | |
| parent | f3b1281d6e690259a36ae4ee8d00e7832d9b9746 (diff) | |
Update README.md
-rw-r--r-- | README.md | 2 |
1 file changed, 1 insertion, 1 deletion
@@ -16,7 +16,7 @@ While the velocity for each dimension in our system (x, y, z) has a normal distr
 Chi^2 is used to test the null hypothesis of "no difference" among categorical variables in A/B testing because it measures generalized, non-directional chaos among all dimensions of the system. If your distributions from the dimensions are similar, the system should converge to be highly chaotic & high-energy, as stated by the second law of thermodynamics. By contrast, if your underlying distributions produce an immensely low-chaos (i.e. low-energy) state, then it's highly likely these underlying distributions are different.
 
-Relating to A/B testing, when you argue that, for chi^2, "if the p-value is less than 0.05, then the null hypothesis is rejected", you are saying: "the probability of finding these distributions at an entropy this low is tiny, yet I found it (in your sampleA vs. sampleB calculations), so it's highly unlikely this state is a coincidence (it would violate the second law of thermodynamics) and the null hypothesis can be rejected (i.e. these distributions are not the same)."
+Relating to A/B testing, when you argue that, for chi^2, "if the p-value is less than 0.05, then the null hypothesis is rejected", you are saying: "the probability of finding these distributions with such low chaos is tiny, yet I found it (in your sampleA vs. sampleB calculations), so it's highly unlikely this state is a coincidence (it would violate the second law of thermodynamics) and the null hypothesis can be rejected (i.e. these distributions are not the same)."
 
 In theory, for performing a hypothesis test with categorical variables, we take each dimension to be the difference between the normal distributions (of differences in observed minus expected) of the samples. This encapsulates the difference between the distributions into a normal curve, which we combine into the chi^2 curve (visuals help this explanation; see video).
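The chi^2 procedure the README describes can be sketched in a few lines of Python. This is a minimal illustration with made-up conversion counts (the counts, variable names, and critical value below are not from the README): each cell contributes (observed - expected)^2 / expected, and the summed statistic is compared against the chi^2 critical value for the table's degrees of freedom.

```python
# Minimal sketch of a chi-squared test of independence on a 2x2 table,
# using hypothetical A/B conversion counts (illustrative only).
observed = [[100, 900],   # sample A: converted, not converted
            [160, 840]]   # sample B: converted, not converted

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# chi^2 = sum over all cells of (observed - expected)^2 / expected,
# where expected assumes the null hypothesis of "no difference".
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (obs - expected) ** 2 / expected

# Critical value for 1 degree of freedom at alpha = 0.05
CRITICAL_1DOF_05 = 3.841
reject_null = chi2 > CRITICAL_1DOF_05
```

In the README's framing, a statistic this far above the critical value means the two samples sit in a state too "low-chaos" to be coincidence, so the null hypothesis of identical distributions is rejected. In practice one would use a library routine such as `scipy.stats.chi2_contingency` rather than hand-rolling the sum.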