Depth of field is the range of acceptably sharp focus in front of the lens for a given focal length, aperture, and focus setting.
'Acceptable' focus depends on how large the image will be displayed in presentation - for example, if you are projecting a 4K image on a 50' screen, you will see more focus errors than if you display the same image at HD resolution on an iPad. The circle of confusion is the tolerance used to quantify this degree of enlargement.
Basically, the idea is that with any lens there is only ever one true plane of focus. A lens converges the rays of light from each point on the subject into a cone.
The sides of the cone are the rays of light entering the lens, and the tip of the cone is the point where that light comes back into sharp focus. Since the sensor or film on which the image is projected is flat, we only ever see a cross-section of that cone of light. At the tip of the cone (where the image is in focus), the cross-section looks like a point. Any other cross-section of the cone (where the image is out of focus) will look like a circle.
You are probably familiar with the concept of bokeh, or the out of focus areas of the image. Notice that points of light when out of focus appear as circles, or 'bokeh balls.' You are actually seeing a cross-section of the cone of light.
The closer the cross-section is to the tip of the cone, the smaller the circle. When the circle is visually indistinguishable from a point, we call its diameter the circle of confusion. Obviously, the better you can see, the easier it is to tell whether you are actually looking at a circle or a point. And that's why the circle of confusion depends on the resolution and size of the camera/lens/display.
When you stop down the lens aperture, use a wider focal length, or shift the focus toward infinity, the light rays passing through the lens become more parallel and the cone of light becomes skinnier. Thus, more of the cone looks like a point (even though it technically isn't, since the light rays are still converging), and more of the image appears to be in acceptably sharp focus.
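The cone geometry above can be sketched with the simple thin-lens model (a simplification with illustrative numbers, not figures from any real lens):

```python
# Thin-lens sketch of the blur circle described above.
# f = focal length, N = f-stop, distances all in millimetres.
def blur_circle(f, N, focus_dist, subject_dist):
    """Diameter (mm) of the blur circle on the sensor for a point at
    subject_dist when the lens is focused at focus_dist."""
    return (f * f * abs(focus_dist - subject_dist)) / (
        N * subject_dist * (focus_dist - f)
    )

# A 50mm lens focused at 3m, with a point of light at 5m:
wide_open = blur_circle(50, 2.8, 3000, 5000)
stopped_down = blur_circle(50, 5.6, 3000, 5000)
# Stopping down 2 stops (doubling the f-number) halves the blur circle:
print(round(wide_open / stopped_down, 2))  # → 2.0
```

In other words, a higher f-number skinnies the cone, every cross-section shrinks, and more of the scene falls within your tolerance for what still reads as a point.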
This is what depth of field charts represent - how much of the image is acceptably sharp, given the lens parameters and your chosen tolerance, i.e. the maximum diameter of the circle of confusion that we still consider a point rather than a circle.
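For the curious, what those charts tabulate can be approximated with the standard thin-lens depth of field formulas. This is only a sketch; the 0.025mm circle of confusion below is a commonly quoted Super-35 chart value, used here as an assumption:

```python
# Near and far limits of acceptable focus, thin-lens approximation.
# f and c (circle of confusion) in mm, distances in mm.
def dof_limits(f, N, c, focus_dist):
    H = f * f / (N * c) + f          # hyperfocal distance
    near = H * focus_dist / (H + focus_dist - f)
    far = (H * focus_dist / (H - focus_dist + f)
           if focus_dist < H else float("inf"))
    return near, far

# 50mm at T2.8 focused at 3m, with a 0.025mm CoC tolerance:
near, far = dof_limits(50, 2.8, 0.025, 3000)
# Roughly 2.77m to 3.27m of acceptable focus.
```

Change the CoC value (your tolerance) and the same lens setup yields a different depth of field - which is the whole point of the circle of confusion.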
Check the current edition of the American Cinematographer's Manual for the textbook definition and charts for the Super35 format.
When I was shooting 16mm in college (and framing through a dim, fuzzy optical viewfinder) I consulted the charts occasionally to find a hyperfocal distance between two subjects. Can't say I or anyone else I know has actually used these charts professionally since then.
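The hyperfocal distance itself is a one-line calculation once you pick a circle of confusion. The 0.015mm CoC below is my assumed value for the 16mm format, not a chart figure:

```python
# Hyperfocal distance sketch: focus here and everything from roughly
# half this distance out to infinity is acceptably sharp.
f, N, c = 12.0, 8.0, 0.015      # 12mm lens at T8, assumed 16mm-format CoC (mm)
H = f * f / (N * c) + f         # ≈ 1.21m
```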
This would strike me as a pretty silly assignment, except for the fact that it's forcing you to research the topic and ask some questions.
I used printed DoF charts, a SamCine DoF calculator, and later pCAM when I was working as an AC to calculate focus splits and check DoF on certain shots. As DPs began shooting on 4K large sensor digital cameras, frequently shooting wide open without marks or blocking rehearsals, that all kinda fell by the wayside. I still pull out pCAM occasionally for things like product, greenscreen, and motion control shots where there is repeatability and time to check.
For real-world shooting, the DoF these days is often so shallow that it's not worth checking anymore - basically, the shot is either sharp or soft. Luckily, it's easier to see focus on a large high-res monitor these days.
As a DP, I don't use charts to find out whether something was sharp enough when shooting at a wider aperture and the focus distance was off a bit during the take - especially not with digital, where I can see for myself whether the shot was sharp.
The charts are more useful for me when I'm planning ahead for a true deep focus shot and need to know just how far I will have to stop down. They are also useful for comparing formats of different sizes, if only to explain to other people what the issues were.
A crude rule to remember is that the crop factor between formats is also roughly the difference in depth of field, in stops, once you match distance and field of view - so VistaVision / Full Frame has roughly 1.5 stops less depth of field than Super-35 at the same shooting stop (a 36mm-wide format versus a 24mm-wide format is a 1.5X difference). It works out that way because, for example, if you choose a format that is twice as large, then for the same size image presentation you can use half as critical a circle of confusion figure for the larger format (since it will be enlarged less than the smaller format to fill the same sized screen).
So if you look at a chart and compare the depth of field of a 50mm lens versus a 25mm lens, you find that you have to stop down the 50mm lens by 4 stops to match the depth of field of the 25mm lens focused at the same distance.
However, when using that 25mm lens on a 16mm camera to match the field of view of a 50mm lens on a 35mm camera (I'm rounding off to make the math simpler), that 4-stop difference in depth of field gets cut in half, because the circle of confusion figure is twice as critical for the smaller format. The 25mm (on the 16mm camera) then only has a 2-stop difference in depth of field compared with the 50mm (on the 35mm camera). So a format that is 2X larger also has 2 stops less depth of field on average (again, when shooting at the same stop, at the same distance, with the same field of view for the two formats).
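If you want to sanity-check that rule yourself, here's a rough comparison using the standard thin-lens formulas. The CoC values are assumptions (0.025mm for the 35mm-sized format, half that for the 16mm-sized format, since it gets enlarged twice as much):

```python
# Total depth of field (far limit minus near limit), thin-lens approximation.
# f and c in mm, distances in mm.
def total_dof(f, N, c, focus_dist):
    H = f * f / (N * c) + f
    near = H * focus_dist / (H + focus_dist - f)
    far = H * focus_dist / (H - focus_dist + f)
    return far - near

d = 3000  # both cameras focused at 3m, matched field of view
dof_35 = total_dof(50, 5.6, 0.025, d)    # 50mm at T5.6 on the larger format
dof_16 = total_dof(25, 2.8, 0.0125, d)   # 25mm two stops wider on the smaller
# The two totals come out nearly equal: a 2X larger format needs
# about 2 more stops to match depth of field.
```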
However, my explanation is somewhat of a layman's interpretation and is not technically correct.