14 bits would have meant less audio quality, yes. Otherwise, the 2 extra bits are exactly the same as the first 14: audio data. The error correction and formatting overhead actually comprises about two-thirds of the bits on a CD! (588 channel bits to carry every 192 audio bits.)
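That 588-bit figure can be reconstructed from the frame structure described below, plus one detail from the spec not covered here: 3 "merging" bits are inserted after the sync pattern and after every 14-bit EFM word. A quick tally:

```python
# One CD frame, counted in channel bits (per the Red Book frame structure).
sync = 24 + 3                # 24-bit sync pattern plus 3 merging bits
data = 33 * (14 + 3)         # 33 bytes (24 audio + 8 parity + 1 subcode),
                             # each EFM-coded to 14 bits plus 3 merging bits
frame_bits = sync + data
audio_bits = 12 * 16         # twelve 16-bit samples

print(frame_bits)                                   # 588
print(round(100 * (1 - audio_bits / frame_bits)))   # 67 -- ~2/3 is overhead
```

Note the overhead isn't all error correction: it also counts the subcode, sync, merging bits, and EFM expansion.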
It's called Cross-Interleaved Reed–Solomon Coding, or CIRC. It often gets confused with Cyclic Redundancy Check, or CRC, as in "CRC errors" when talking about a bad disk, but CRC is a separate scheme that only detects errors, while CIRC can actually correct them.
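To make the distinction concrete, here's a toy CRC: a short checksum appended to the data that lets the reader detect (but not fix) corruption. This is a generic CRC-8 sketch, not anything from the CD spec:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Toy CRC-8 (polynomial x^8 + x^2 + x + 1, a common choice).

    A CRC is just a remainder from polynomial division over the data;
    if the stored and recomputed remainders differ, the data is bad.
    """
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 0x80:
                crc = ((crc << 1) ^ poly) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

# A corrupted byte changes the checksum, so the error is detected:
print(crc8(b"hello") == crc8(b"hellp"))  # False
```

Unlike this, the CD's CIRC carries enough parity to reconstruct the damaged bytes, not just flag them.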
First the 16-bit words are split into frames containing twelve 16-bit samples each (six left and six right), for a total of 192 bits of audio data per frame. The error correction then adds 64 bits (8 bytes) of parity data to each frame, and 8 bits of subcode are added for ease in locating the frames. (At this point my Sony training instructor diverged from the Wikipedia page by saying that the entire process was applied again to the resulting data stream, which he explained was the "cyclic" part of the name. That roughly matches how CIRC actually works: the data passes through two Reed–Solomon encoders in succession, with interleaving in between, and the two passes together produce the 8 parity bytes.) All of these 33-byte frames (called "channel-data" frames) get their order scrambled and run through EFM before getting encoded as pits on the disc. The order is mixed up so that a scratch doesn't destroy consecutive audio bits.
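The byte budget per frame can be sketched like this. The parity here is a placeholder (real CIRC parity comes from the two Reed–Solomon encoders with cross-interleaving, which is far more involved); only the sizes and layout are meant to be accurate:

```python
import struct

def build_frame(samples, subcode=0):
    """Pack one frame's worth of audio: twelve 16-bit samples (6 stereo pairs).

    Layout sketch only -- the 8 "parity" bytes here are dummy XOR values
    standing in for the real C1/C2 Reed-Solomon parity.
    """
    assert len(samples) == 12
    audio = struct.pack('<12h', *samples)            # 24 bytes = 192 audio bits
    parity = bytes(b ^ 0xFF for b in audio[:8])      # placeholder 8 parity bytes (64 bits)
    return bytes([subcode]) + audio + parity         # 1 + 24 + 8 = 33 bytes

frame = build_frame(list(range(12)))
print(len(frame), len(frame) * 8)  # 33 264 -- 33 bytes = 264 bits per frame
```

Those 264 bits (192 audio + 64 parity + 8 subcode) are what then get scrambled and fed to EFM.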
EFM means Eight-to-Fourteen Modulation, which is exactly what it sounds like: each 8-bit word is mapped to a 14-bit word chosen to have as few 0-to-1 or 1-to-0 transitions as possible — basically a low-pass filter for the data. They did that to make the pits on the disc larger and easier to read. A Sony engineer explained that the way they designed EFM in the pre-computer days was by having a room full of women look through lists of 14-bit words to find the ones with the longest runs of identical bits, such as 00000001111111 or 11110000011111. The list they made was then mapped into a lookup-table ROM, so that there was a unique 14-bit word for every possible 8-bit word.
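That selection process can be stated formally. In the published code, the constraint is on the channel bits (where each 1 marks a pit/land transition): consecutive 1s must be separated by at least two 0s, and no more than ten 0s may appear in a row. A brute-force enumeration, assuming exactly those two rules, shows why 14 bits was the magic length:

```python
def valid_efm(word: int) -> bool:
    """Run-length limits on a candidate 14-bit EFM codeword:
    at least two 0s between consecutive 1s, no run of more than ten 0s."""
    bits = [(word >> i) & 1 for i in range(13, -1, -1)]
    ones = [i for i, b in enumerate(bits) if b]
    if any(b - a - 1 < 2 for a, b in zip(ones, ones[1:])):
        return False            # two 1s too close together
    run = 0
    for b in bits:
        run = 0 if b else run + 1
        if run > 10:
            return False        # zero run too long
    return True

candidates = [w for w in range(1 << 14) if valid_efm(w)]
print(len(candidates))  # 267
```

Only 267 of the 16,384 possible 14-bit words qualify — just enough to pick 256 of them, one per 8-bit input. A 13-bit code wouldn't yield 256 valid words, which is why the table had to go to 14.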