I think it's a community thing - yes, 8-bit in the DSLR community is bound to have a 'bad' name compared with 8-bit in broadcast worlds.
I can't really say whether, in my experience, the sampling regime or the bit depth (8-bit vs. 10-bit etc.) makes the bigger difference. There are just too many other elements that play into the mix.
Yes, basically 8-bit gives you very little leeway, even in a professional implementation like DVCProHD (which, for example, Planet Earth was shot on): you need to get the exposure right in camera, very close to what you want in the final grade. In my professional experience (mainly BBC), that exposure accuracy is by far the biggest issue, because it's pretty much impossible to get it right every time in an outdoor, uncontrolled and often rushed filming environment.
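Just to put a rough number on that leeway, here's a tiny back-of-the-envelope sketch (mine, nothing official - it's linear-light, and real cameras apply a gamma curve, so the exact figures shift, but the principle holds). Shoot a stop under in 8-bit, push it back in the grade, and you've roughly halved your usable tonal steps:

```python
import numpy as np

# Simplified linear-light model of an 8-bit exposure mistake:
# a smooth ramp shot one stop under, then pushed back up (gain x2)
# in post, retains only about half the distinct code values.

ramp = np.linspace(0.0, 1.0, 4096)    # ideal scene luminance
correct = np.round(ramp * 255)        # exposed right, 8-bit
under = np.round(ramp * 0.5 * 255)    # one stop under, 8-bit
pushed = np.clip(under * 2, 0, 255)   # "fixed" in the grade

print(np.unique(correct).size)  # 256 tonal levels available
print(np.unique(pushed).size)   # ~129 - coarser steps, hence banding
```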
The big 'killer', I think, is the combination of 4:2:0 chroma subsampling (e.g. typical DSLRs and consumer camcorders) AND savage compression ratios. Of course, as you say, the whole pipeline from sensor to recording media is critical too, but I'm pretty sure that 4:2:0 is a major cause of banding and blocking issues (rough sketch below).
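Here's an equally rough illustration of the 4:2:0 side (again just my sketch - I'm using crude nearest-neighbour decimation, where real encoders filter when they downsample, but it shows the information loss plainly). Chroma is stored at half resolution both ways, so a smooth sky gradient comes back as 2x2 blocks of identical colour - exactly the seed that heavy compression turns into visible banding and blocking:

```python
import numpy as np

# 4:2:0 keeps one chroma sample per 2x2 block of pixels.
# Decimate a smooth blue-difference (Cb) ramp that way, upsample
# it back, and half the chroma gradations are simply gone.

h, w = 8, 16
cb = np.tile(np.linspace(-0.2, 0.2, w), (h, 1))   # smooth Cb gradient

cb_420 = cb[::2, ::2]                              # 1 sample per 2x2 block
cb_back = np.repeat(np.repeat(cb_420, 2, axis=0),  # nearest-neighbour
                    2, axis=1)                     # upsample to full size

print(np.unique(cb).size)       # 16 distinct chroma values
print(np.unique(cb_back).size)  # 8 - stair-stepped 2x2 chroma blocks
```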
For my personal independent films - largely destined for DVD, Blu-ray and VOD/internet delivery - I still shoot 8-bit 4:2:2 140 Mbps I-frame XDCAM HD (EX3 + Nanoflash). It looks great most of the time, partly because I've got 20+ years' experience filming! On the other hand, my 2nd-camera footage - 8-bit 4:2:0 high-bitrate I-frame hacked GH2 - while often almost indistinguishable from the 1st cam, gets caught out by things like banding in blue skies and long-lens blurry backgrounds.
I don't have experience with HDMI connections, but I suspect you're right: just because the HDMI spec allows 4:2:2 doesn't mean that's what a given camera actually outputs - you can only know what the real signal is by checking it directly.