"Randomisation" here is a causal operation, not a probabilistic one. It cuts off any possible causes which would induce a spurious relationship (including cheating): what's randomised is the distribution of relevant uncontrolled causes. If you consider there to be some small finite set of relevant unknown causal factors, Trait-1...Trait-n, then under randomisation, with sufficient data, these traits should be equally-well distributed. Why? Because assuming the traits occur with "reasonable distributions" then with "reasonably large amounts of data", these traits will be randomly divided between the populations. So if a treatment effect is observed, it is due to the pill rather than an uncontrolled for Trait. This falls down if the traits are heavily skewed/fat-tailed in their distribution (since it would take unfeasibly many data points to well-mix between groups), or if there's not enough data, etc. However, given the above, the argument presented seems incorrect. Randomisation is a weak-but-useful resolution to there being a large number of unknown causes we cannot control for. Ie., my confidence that "Pill-A causes Effect-A" given "Pill-A's use is associated with Effect-A's precence" is higher if this association occurs under randomisation (of the experimental procedure by which relevant populations are partition).
If you KNOW the CAUSE of death using steel and wood crucifixes is the same, there is no point in testing it empirically. So, indeed, it doesn't make sense to test whether 1 == 2. If you don't know that information, you HAVE TO TEST IT:

H0: deadliness of steel crucifixes == deadliness of wood crucifixes
H1: deadliness of steel crucifixes != deadliness of wood crucifixes

And, being a little pedantic, there's no such thing as RANDOM in (normal) computers. "Random" numbers are computed using deterministic formulas - they are PSEUDO-random but, for practical matters, this is enough.
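The pseudo-randomness point is easy to demonstrate: seed the generator twice with the same value and you get byte-for-byte identical "random" sequences. A minimal sketch using Python's standard library:

```python
import random

# Two independent generator instances with the SAME seed.
rng1 = random.Random(42)
rng2 = random.Random(42)

a = [rng1.random() for _ in range(5)]
b = [rng2.random() for _ in range(5)]

# The sequences are identical: the numbers come from a deterministic
# algorithm (the Mersenne Twister in CPython), so they are PSEUDO-random.
print(a == b)
```

This determinism is actually a feature in practice: re-using a seed makes simulations and tests reproducible.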