There is a comical trend these days of labeling data incompatible with one's theories or views as 'just noise' or as 'simply random fluctuations'. The suggestion implicit in statements such as these is that you should somehow discount that data point. This is confirmation bias at work and is utter folly. Just as every data point compatible with a particular hypothesis should increase your degree of belief in the theory, so too should every disconfirming data point decrease that degree of belief. Perhaps not by much, but it must be taken into account and given the same weight as all previous data points.
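The symmetric updating described above is just Bayes' rule at work. Here is a minimal sketch; the particular likelihood values are illustrative assumptions, not anything from the post:

```python
def bayes_update(prior, p_data_given_h, p_data_given_not_h):
    """Return the posterior P(H | data) given a prior P(H) and the
    likelihood of the data under the hypothesis and its negation."""
    numerator = p_data_given_h * prior
    evidence = numerator + p_data_given_not_h * (1 - prior)
    return numerator / evidence

prior = 0.5

# A data point the hypothesis predicts well nudges belief up...
after_confirming = bayes_update(prior, 0.8, 0.5)

# ...and a data point it predicts poorly nudges belief down,
# by the very same rule, with the very same weight.
after_disconfirming = bayes_update(prior, 0.2, 0.5)

print(after_confirming)      # greater than 0.5
print(after_disconfirming)   # less than 0.5
```

The point is that there is no separate rule for inconvenient data: confirming and disconfirming observations pass through the same formula.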
This view is hardly novel, but it is worth repeating. Now that I have you nodding in agreement (and likely questioning the depth of this post), there is one ever so slightly more subtle point I would like to politely make.
IT IS NEVER *JUST* NOISE.
The noise is not just some random fluctuation that obscures whatever signal our theory predicts. No, noise is how we model our ignorance. It may be that there is some fundamental limit to our knowledge about a phenomenon (as in quantum mechanics), but, in general, the physics is in the fluctuations, and the things we should be trying to understand tomorrow are the things we were forced to label as noise today. Even in the course of rational inference, calling something noise and suggesting we forget about it is the equivalent of declaring the scientific process finished, and that is something I hope we never do, regardless of how politically convenient it might be.