RGB imaging is well understood by most people. Any imaging system designed to simulate colour as humans see it needs three channels of information, because we have three types of cone cell in our retinas, and RGB stores, roughly speaking, one channel corresponding to the sensitivity of each cone type.
YUV is a term that's commonly (if slightly inaccurately) used to refer to a set of techniques designed to separate colour information from brightness information. Because the human visual system resolves fine detail in brightness far better than in colour, separating the two means the colour information can be stored at lower resolution without obvious image quality problems.
In YUV-type imaging, the Y refers to the brightness information, and the U and V refer to the colour information. Roughly, U is the scaled difference between the blue channel and brightness (B minus Y), and V is the scaled difference between the red channel and brightness (R minus Y). Related schemes go by different names depending on the situation: in digital video and computing, the same idea is usually called YCbCr, while strictly speaking YUV refers to the analogue encoding.
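As a concrete sketch of those difference signals, here is an RGB to YCbCr conversion using the full-range BT.601 coefficients (one common standard; BT.709 and others use different weights, so treat the exact numbers as an example rather than the only correct ones):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to full-range YCbCr using BT.601 coefficients."""
    # Y is a weighted sum of R, G and B, reflecting the eye's differing
    # sensitivity to each primary (green contributes most to brightness).
    y = 0.299 * r + 0.587 * g + 0.114 * b
    # Cb and Cr are scaled "blue minus brightness" and "red minus
    # brightness" differences, offset so that neutral grey sits at 128.
    cb = 128 + 0.564 * (b - y)   # 0.564 = 0.5 / (1 - 0.114)
    cr = 128 + 0.713 * (r - y)   # 0.713 = 0.5 / (1 - 0.299)
    return y, cb, cr

# A pure grey pixel carries no colour difference, so Cb and Cr land
# on the neutral value 128 and Y equals the grey level.
print(rgb_to_ycbcr(128, 128, 128))
```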
Given this information, an RGB result can be calculated from any YUV-type signal. In practical applications, YUV-type signals are often used with the UV colour information at half (4:2:2) or one-quarter (4:2:0) the resolution of the Y brightness information, which saves bandwidth without affecting the apparent sharpness very much.
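To illustrate the quarter-resolution case, here is a minimal sketch of 4:2:0-style chroma subsampling: each chroma plane is averaged over 2x2 blocks (real encoders may use different filters or sample positions), while the Y plane is left at full resolution:

```python
def subsample_420(plane):
    """Average each 2x2 block of a chroma plane (even dimensions assumed),
    halving its resolution in both directions."""
    h, w = len(plane), len(plane[0])
    return [
        [
            (plane[y][x] + plane[y][x + 1]
             + plane[y + 1][x] + plane[y + 1][x + 1]) / 4
            for x in range(0, w, 2)
        ]
        for y in range(0, h, 2)
    ]

# A 4x4 chroma plane becomes 2x2: four samples survive out of sixteen,
# which is why 4:2:0 roughly halves the total data for a YUV image.
chroma = [
    [100, 100, 200, 200],
    [100, 100, 200, 200],
    [ 50,  50,  50,  50],
    [ 50,  50,  50,  50],
]
print(subsample_420(chroma))  # [[100.0, 200.0], [50.0, 50.0]]
```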
There are significant mathematical complexities regarding the range of colours (the gamut) that can be encoded by RGB and YUV systems, as well as practical issues with the standardisation of signal levels, such as full versus limited ("video") range, which can cause contrast and colour shifts if not handled properly.
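One concrete example of the signal-level issue: 8-bit broadcast standards place luma in 16-235 rather than 0-255, so decoding with the wrong range assumption lifts blacks or crushes highlights. A hypothetical sketch of the two mappings:

```python
def full_to_limited_luma(y):
    """Map full-range 8-bit luma (0-255) onto the limited 16-235 range."""
    return 16 + y * 219 / 255

def limited_to_full_luma(y):
    """Inverse mapping; applying the wrong one shifts contrast visibly."""
    return (y - 16) * 255 / 219

print(full_to_limited_luma(255))  # -> 235.0 (peak white in limited range)
print(limited_to_full_luma(16))   # -> 0.0 (black level restored)
```

The same idea applies to chroma, which conventionally occupies 16-240 in limited-range signals.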
Hope this helps