An ADC will be specified in terms of how many of its bits are useful, meaning that any bits beyond the useful ones carry more noise than signal.
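One common way this is formalised is the "effective number of bits" rule of thumb, which converts a measured signal-to-noise-and-distortion ratio into useful bits. A rough Python sketch (the 74 dB figure is purely illustrative, not a real device):

```python
# Rule-of-thumb conversion from measured SINAD (in dB) to effective bits.
# The 74 dB input below is a made-up example measurement.
def effective_bits(sinad_db: float) -> float:
    return (sinad_db - 1.76) / 6.02

print(effective_bits(74.0))  # ~12.0: a "14-bit" ADC with ~2 bits of noise
```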
But uncompressed digital data is typically transported in bytes rather than bits, and a byte is 8 bits. So if an ADC is specified as 14 bits (of useful information), a single byte can't encode those 14 bits. Two bytes (16 bits) must be used, and 2 of those bits will be either noise or set to zero.
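A minimal Python sketch of that byte alignment (the function name and sample value are just illustrative), storing one 14-bit sample in a 16-bit container with the top 2 bits zero-padded:

```python
import struct

def pack_14bit(sample: int) -> bytes:
    """Store one 14-bit sample in 2 bytes; the top 2 bits are zero padding."""
    assert 0 <= sample < (1 << 14), "sample exceeds 14 bits"
    return struct.pack("<H", sample)  # little-endian unsigned 16-bit word

raw = pack_14bit(0x2ABC)  # 14 bits of payload
print(raw.hex())          # 'bc2a': 16 bits on the wire, 2 of them unused
```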
But this is by no means a standard. All sorts of encoding schemes exist for uncompressed data streams. For example, three 10-bit samples (30 bits) can be packed uncompressed across 4 bytes (32 bits), leaving 2 bits unused.
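A sketch of that sort of packing, assuming an illustrative layout with the first sample in the low bits (real formats vary):

```python
def pack_3x10(a: int, b: int, c: int) -> bytes:
    """Pack three 10-bit values (30 bits) into 4 bytes; 2 bits go unused."""
    for v in (a, b, c):
        assert 0 <= v < (1 << 10), "value exceeds 10 bits"
    word = a | (b << 10) | (c << 20)  # top 2 bits of the word stay zero
    return word.to_bytes(4, "little")

def unpack_3x10(raw: bytes) -> tuple:
    word = int.from_bytes(raw, "little")
    mask = (1 << 10) - 1
    return (word & mask, (word >> 10) & mask, (word >> 20) & mask)

print(unpack_3x10(pack_3x10(1023, 512, 1)))  # (1023, 512, 1)
```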
Manufacturers will tend to claim all sorts of things about their ADCs. A claim of a 14-bit ADC may mean they realise they can't get away with saying 16 bits (who would believe them?) but can get away with adding an extra bit or two to a sensor that might more correctly be classified as 12 or 13 bits.
However, this is not necessarily fraudulent. It is difficult (or impossible) to know whether a bit is encoding just noise or a signal. Indeed, it is arguable that there is no such thing as pure noise - that there is always a faint trace of signal, no matter how many additional bits the ADC sports. But the data-processing hardware might be ignoring any signal above a certain bit depth, so the limit specified on the ADC might be with respect to the processing hardware rather than the sensor.
And data often undergoes down-conversion in order to remove noise. In other words, the noise is recorded and then filtered (on the fly), and the result may need no more than, say, 10 bits for subsequent transport. One captures more than one needs just to be sure one hasn't captured less than one needs. And of course, once compression is introduced, the whole concept of bits per pixel becomes a meaningless number.
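A toy sketch of that capture-wide, transport-narrow idea (the window length and shift here are arbitrary choices): samples arrive at 14 bits, a simple average knocks down the noise, and the result is requantised to 10 bits:

```python
from statistics import mean

def downconvert(samples_14bit, window=4):
    """Average groups of 14-bit samples, then requantise to 10 bits."""
    out = []
    for i in range(0, len(samples_14bit), window):
        avg = mean(samples_14bit[i:i + window])  # noise drops ~sqrt(window)
        out.append(int(avg) >> 4)                # drop 4 bits: 14 -> 10
    return out

noisy = [8192 + n for n in (-3, 5, -1, 2, 4, -6, 0, 1)]  # jitter in low bits
print(downconvert(noisy))  # [512, 511]: two 10-bit values near 512
```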