The significance level is given by the p-value; "p < 0.001" would mean there is less than a 0.1% chance that the null hypothesis is correct (and so a more than 99.9% chance that the values are different).
...in both cases the chance for the relation to be true was measured at more than 99.9%
Pet peeve alert:
I wish the world were this simple. It's an unfortunate fact that p-values cannot simply be related to the probabilities of a hypothesis being true or false: see Wikipedia. Lindley's paradox describes a situation where the null hypothesis is typically rejected even though it has a high posterior probability of being true.
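To see how stark the divergence can be, here is a minimal sketch of Lindley's paradox in Python, using the birth-rate numbers from the Wikipedia article (98,451 births, 49,581 boys; H0: the probability of a boy is exactly 0.5). The 50/50 prior odds and the uniform prior on theta under H1 are the standard textbook choices for this example, not anything forced by the data:

```python
# Lindley's paradox, birth-rate example: the frequentist test rejects H0 at
# the 5% level, while a Bayesian analysis with equal prior odds on H0 vs H1
# (theta ~ Uniform(0,1) under H1) gives H0 a posterior probability near 95%.
import math
from scipy import stats

n, x = 98_451, 49_581              # total births, boys

# Frequentist two-sided z-test of H0: theta = 0.5
z = (x - 0.5 * n) / math.sqrt(n * 0.25)
p_value = 2 * stats.norm.sf(abs(z))

# Bayesian: marginal likelihoods of the data under each hypothesis
m0 = stats.binom.pmf(x, n, 0.5)    # P(x | H0)
m1 = 1.0 / (n + 1)                 # P(x | H1): binomial pmf integrated over uniform theta
posterior_h0 = m0 / (m0 + m1)      # posterior with equal prior odds

print(f"p-value      = {p_value:.4f}   (rejects H0 at the 5% level)")
print(f"P(H0 | data) = {posterior_h0:.4f}   (strongly favours H0)")
```

The same data thus "rejects" the null at p < 0.05 while simultaneously leaving it with roughly 95% posterior probability.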
In real-world applications, the "true probabilities" of interest can never be found. Statisticians therefore like to use p-values as proxies, since they can be computed unambiguously, but p-values come with quite a few limitations of their own.
In general, the p-value quantifies how probable it would be for results "at least as extreme" as those observed in the survey to occur by chance alone, assuming that the null hypothesis is true. That null hypothesis typically rests on a number of strong, tacit assumptions which may not be satisfied in practice; for instance, the t-test generally assumes that deviations follow a normal distribution. If the actual distribution has high kurtosis, extreme deviations become more common, and a t-test would then be misleading: it would often give small p-values even if the two means being compared are the same.
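As an illustration (my own sketch, not taken from any particular source), the following simulation draws small samples from a lognormal distribution, which has a heavy right tail and high kurtosis, and tests the true population mean with a one-sample t-test. If the normality assumption were harmless, about 5% of runs would come out "significant" at the 5% level; with data like this the empirical rate lands noticeably higher:

```python
# How badly the t-test's nominal error rate can break down when its
# normality assumption fails: test the TRUE mean of lognormal data and
# count how often p < 0.05. Under the test's assumptions, ~5% of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials = 20, 20_000
true_mean = np.exp(0.5)            # exact mean of lognormal(mu=0, sigma=1)

rejections = 0
for _ in range(trials):
    sample = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    _, p = stats.ttest_1samp(sample, popmean=true_mean)
    rejections += (p < 0.05)

print(f"Empirical type I error: {rejections / trials:.3f} (nominal: 0.050)")
# On heavy-tailed, skewed data like this the empirical rate is typically
# well above 5%: "significant" results even though H0 is exactly true.
```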
Regression analysis, as in the second quote, is even hairier (furrier?), in that standard correlation measures generally assume a linear relationship between the two variables, on top of any distributional assumptions. Even if there is a relationship between the quantities being studied, it need not be linear at all; it could be very complex and nonlinear. For instance, a "positive correlation" between x and y (in the sense that y on average seems to increase with x in the data at hand) could reverse itself, with the expected value of y starting to decrease for large x-values. Anscombe's quartet is a good example here.
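Dataset II of the quartet makes the point concretely: its Pearson correlation is about 0.82, identical to the other three datasets, yet the points lie on a parabola and y turns downward once x passes roughly 11. A few lines of Python, using the published quartet values, show both facts:

```python
# Anscombe's quartet, dataset II: a "significant positive correlation"
# that hides a smooth parabola with a turning point inside the data range.
import numpy as np
from scipy import stats

x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10,
              6.13, 3.10, 9.13, 7.26, 4.74])

r, p = stats.pearsonr(x, y)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")   # r ~ 0.816 for all four datasets

# A quadratic fit reveals the reversal that the correlation coefficient hides:
a, b, c = np.polyfit(x, y, deg=2)
print(f"fitted curve: y = {a:.3f} x^2 + {b:.3f} x + {c:.3f}")
print(f"y peaks near x = {-b / (2 * a):.1f} and declines beyond it")
```

Anyone summarising this dataset as "y increases with x (r = 0.82, p < 0.01)" would be making a statement the data flatly contradicts for its largest x-values.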