Bayesian Non-Parametric Models as the Appropriate Null Hypothesis.
EX: The Cascading Indian Buffet Process
I was doing some casual web surfing when I came across a set of slides Hanna Wallach made regarding a generative model for deep belief networks (link above). I've always liked DBNs, but felt that they were almost too general: add enough layers, give them enough data, and they can do just about anything.
That's when something that is probably obvious to many people doing Bayesian non-parametrics finally occurred to me: the cascading Indian Buffet Process may constitute the Bayesian equivalent of a null hypothesis (at least for directed graphical models of this kind). After all, beyond the directed structure itself, these models have essentially no assumptions built into them. Nonetheless they are quite complex, much more so than the standard null hypothesis of 'no relationship', which is almost surely false. Structured models appropriate to the data should at least beat these assumption-free models. This is a sad statement for data on which these models actually are the best performers, as it suggests that when the cascading IBP wins, we should probably conclude that, in those cases, we really don't understand squat about the mechanism generating the data.
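To make the "assumption-free" flavor of the model concrete, here is a minimal sketch of the generative process: a standard Indian Buffet Process draw, cascaded so that the features of each layer become the customers of the next, terminating when a layer introduces no new features. All function names and the `alpha` parameterization are my own illustration, not code from the slides.

```python
import numpy as np

def sample_ibp(num_customers, alpha, rng):
    """Sample a binary feature matrix from the Indian Buffet Process.

    Rows are customers, columns are dishes (features). Customer n takes
    an existing dish k with probability m_k / n (m_k = prior takers),
    then samples Poisson(alpha / n) brand-new dishes.
    """
    dish_counts = []   # m_k for each dish so far
    rows = []
    for n in range(1, num_customers + 1):
        row = []
        for k, m_k in enumerate(dish_counts):
            take = rng.random() < m_k / n
            row.append(int(take))
            if take:
                dish_counts[k] += 1
        num_new = rng.poisson(alpha / n)
        row.extend([1] * num_new)
        dish_counts.extend([1] * num_new)
        rows.append(row)
    K = len(dish_counts)
    Z = np.zeros((num_customers, K), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

def sample_cascading_ibp(num_visible, alpha, rng, max_layers=50):
    """Cascade IBP draws into a layered directed structure.

    The units of each layer act as customers for the IBP that generates
    the layer above; the cascade stops when a draw yields zero dishes.
    Returns the list of layer-to-layer connectivity matrices.
    """
    layers = []
    n = num_visible
    for _ in range(max_layers):
        Z = sample_ibp(n, alpha, rng)
        if Z.shape[1] == 0:
            break
        layers.append(Z)
        n = Z.shape[1]
    return layers
```

Note that nothing here constrains the depth or width of the network: the data alone must do all the work, which is exactly why it reads like a null model.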
Anyway, just a thought... and a Chardonnay-induced one at that.