Not sure which p-values you're referring to (they ran tests on things they were interested in being different), but:
> a p value of 0.30, which indicates that there's a 30% chance that the results are due to sampling errors
This is a common, but serious, misinterpretation of p-values. You'll probably end up with the right conclusion in this case, but it can steer you "disastrously wrong" in others.
A p-value is the probability you'd see at least as extreme a result in a sample as you did if the null hypothesis were true (i.e., if there were actually no difference between the groups). In conditional probability notation, P(D|H) ("the probability of the Data given the Hypothesis").
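To make that definition concrete, here's a small simulation (a sketch, assuming a two-sample z-test with known variance; all the numbers are made up for illustration): when the null hypothesis is true, a result at least as extreme as p = 0.30 occurs about 30% of the time, which is exactly what the p-value promises.

```python
import math
import random

random.seed(1)

def two_sided_p(z):
    # two-sided p-value for a standard-normal test statistic
    return math.erfc(abs(z) / math.sqrt(2))

def null_experiment(n=50):
    # two groups drawn from the SAME distribution: the null is true
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    # z-statistic for the difference in means, known variance 1 per group
    z = (sum(b) / n - sum(a) / n) / math.sqrt(2 / n)
    return two_sided_p(z)

ps = [null_experiment() for _ in range(20000)]
frac = sum(p <= 0.30 for p in ps) / len(ps)
print(frac)  # close to 0.30: P(data at least this extreme | H true)
```

Under the null, p-values are uniformly distributed, so the fraction falling at or below any threshold matches the threshold itself.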
"There's a 30% chance that the results are due to sampling errors" can be restated as a 30% probability that the null hypothesis is true given that you've obtained a result at least as extreme as yours, or P(H|D).
People often don't immediately recognize the important difference between these two. Indeed, taking P(A|B) and P(B|A) to be exactly or even roughly equal is a common fallacy. An intuitive example of how wrong this logic can go may be useful: if you're outdoors, then it's very unlikely that you're being attacked by a bear; therefore, if you're being attacked by a bear, then it's very unlikely that you're outdoors.
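To see how far apart P(D|H) and P(H|D) can be in the p-value setting, here's a toy simulation with made-up numbers (a 90% base rate of true nulls, an effect size of 0.5 when an effect exists, and a z-test with known variance — all assumptions for illustration only): among experiments that produce data at least as extreme as p = 0.30, the null is true far more often than 30% of the time.

```python
import math
import random

random.seed(2)

def two_sided_p(z):
    # two-sided p-value for a standard-normal test statistic
    return math.erfc(abs(z) / math.sqrt(2))

def experiment(true_diff, n=50):
    # z-test for a difference in means, variance 1 known in both groups
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(true_diff, 1) for _ in range(n)]
    z = (sum(b) / n - sum(a) / n) / math.sqrt(2 / n)
    return two_sided_p(z)

# Assumed base rate: 90% of tested hypotheses have no real effect;
# real effects, when present, have size 0.5.
null_outcomes = []
for _ in range(40000):
    h_null = random.random() < 0.9
    p = experiment(0.0 if h_null else 0.5)
    if p <= 0.30:  # condition on data at least as extreme as p = 0.30
        null_outcomes.append(h_null)

posterior = sum(null_outcomes) / len(null_outcomes)
print(posterior)  # roughly 0.74 here — nowhere near 0.30
```

The answer depends entirely on the base rate of true nulls, which the p-value alone tells you nothing about; change the 0.9 to 0.5 and P(H|D) drops to roughly 0.24.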