The difference between 8-bit and 16-bit may become apparent when comparing two images of the same resolution, each displayed on a monitor whose resolution and bit depth match the image. The difference would be obvious if the scene contained a gradation subtle enough to cause banding in the 8-bit image but not in the 16-bit image.
However, in such a scenario, if you could continually increase the resolution of the camera sensor and monitor of the 8-bit system, you would find that at some point the banding would disappear from the 8-bit image: the sensor's noise dithers the quantization steps, and the eye averages the ever-smaller pixels into intermediate tones. By increasing the resolution of the 8-bit system, you are also increasing its color depth, yet its bit depth remains fixed at 8 bits.
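This effect can be sketched numerically. The following is a minimal simulation, not a model of any real sensor: the gradient range, the 8× oversampling factor, and the uniform dither noise are all arbitrary assumptions standing in for higher resolution plus sensor noise, with a block average standing in for the eye's blending of small pixels.

```python
import numpy as np

rng = np.random.default_rng(0)

# A subtle gradient spanning only a few 8-bit code values:
# direct quantization produces visible bands (few distinct levels).
width = 1024
gradient = np.linspace(100.0, 104.0, width)       # continuous tones
banded = np.round(gradient).astype(np.uint8)      # plain 8-bit quantization
print("distinct levels without dither:", len(np.unique(banded)))  # -> 5

# Simulate a higher-resolution 8-bit capture: sample the same gradient
# at 8x the resolution, add noise before quantizing (a stand-in for
# sensor noise acting as dither), then average each 8-sample block
# (a stand-in for the eye blending small pixels at a distance).
factor = 8
fine = np.repeat(gradient, factor)                          # 8x resolution
dithered = np.round(fine + rng.uniform(-0.5, 0.5, fine.size))
perceived = dithered.reshape(width, factor).mean(axis=1)

# The block averages land between the original 8-bit steps: effective
# tonal resolution increases even though every stored sample is 8-bit.
print("distinct perceived levels:", len(np.unique(np.round(perceived, 2))))
```

The averaged values are multiples of 1/8 of a code value, so the perceived gradient has many more distinct tones than the five the plain 8-bit version could represent, while the bit depth of the stored samples never changed.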
One can easily observe a similar phenomenon: find a digital image that exhibits slight banding when viewed up close, then move away from it. At some point the banding disappears. By moving away, you are effectively increasing the resolution, since the pixels become smaller in your field of view; the bit depth, however, stays the same regardless of viewing distance.
Such a comparison wouldn't be conclusive, of course, unless each monitor matched the resolution and bit depth of the image it displayed.
Most image makers are unaware that bit depth and color depth are two different properties. In digital imaging, bit depth is a major factor in color depth, but resolution is an equally important factor (in both digital and analog imaging).
Therefore, if one sacrifices resolution while increasing bit depth, the color depth at best remains the same and may even decrease. In other words, trading resolution for more bit depth does not yield an increase in color depth.