Just the other day I acquired a Nikon D3300 DSLR from Best Buy. Today I tested it in 24p mode and my goodness, what a shocker. If someone else had shown me the footage I shot and told me they shot it on 35 I would have believed it! Granted, I watched it on a 4K monitor and not on a 30 ft screen, but this is surely a major milestone (I know, people must be wondering where I was all this time, lol).
Yea, but not really. The DSLR age has kinda come and gone as lower-cost digital cinema cameras have become the mainstay. The reason is quite simple... DSLRs are designed to be still cameras first. All the engineering attention goes into creating a still image, and the "video" side is just another feature. So DSLRs suffer from problems like rolling shutter, low dynamic range, severe highlight clipping and a reduced color space from the compressed MPEG capture.
Since you're kinda new to the whole digital thing, let me break it down for ya.
A modern motion picture film scanner captures each frame in high-bit-depth RAW color (typically 10-16 bits per channel), very much like a raw still image. Each one of those images is stored as a file in a folder, generally as a DPX or TIFF sequence. Programs like DaVinci read the image sequence (TIFF, DPX, CinemaDNG), which allows for the application of appropriate color. We use something called a look up table (LUT) to match a specific look for viewing. In the film world these LUTs are based on film stocks, and when applied, they mimic that stock's particular look.
Digital cinema cameras like the RED, Arri and Blackmagic shoot in 12-16 bit RAW (REDCODE, ARRIRAW, CinemaDNG) and deliver a very similar post process. The files are brought into DaVinci (or another coloring tool) and color is applied. Without the color applied, the image looks washed out and very flat, with little apparent contrast, unlike a film image. In the world of digital, the look up table (LUT) is actually built for the specific camera. So the LUT essentially decodes the camera's flat image into something correct for whatever viewing system you have.
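If you're curious what "applying a LUT" actually means under the hood, here's a toy sketch in Python. The curve is one I made up for illustration; real film-emulation LUTs are 3D tables that mix R, G and B together, but the lookup mechanics are the same idea:

```python
import numpy as np

# Toy sketch of applying a LUT: a table that maps each input code value
# to an output value. This is a made-up 1D contrast curve, not a real
# film stock emulation.

def make_contrast_lut(size=256, gamma=0.6):
    """A pretend 'film print' style curve (gamma < 1 lifts the midtones)."""
    x = np.linspace(0.0, 1.0, size)
    return np.power(x, gamma)

def apply_lut(image, lut):
    """image: float array scaled 0-1; look every pixel up in the table."""
    idx = np.clip(np.round(image * (len(lut) - 1)).astype(int), 0, len(lut) - 1)
    return lut[idx]

# A flat, washed-out frame: every pixel parked at middle gray
flat_frame = np.full((4, 4, 3), 0.5)
graded = apply_lut(flat_frame, make_contrast_lut())
# graded keeps the frame's shape, with the midtones pushed brighter
```

Same deal whether the table came from a film stock or a camera profile: the image data doesn't change until the lookup is applied.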
Still cameras, DSLRs and home video cameras (outside of a few rare exceptions) shoot in a highly compressed MPEG format. That format is designed for easy playback on a myriad of computers and software, and for encoding efficiency. Unlike RAW, it's not a frame-by-frame system: every 8-24 frames there is a key frame, and every frame after that only encodes the changes in the shot. So if you don't move the camera, it will look fantastic! The moment there's a lot of movement in the shot, the image falls apart, because the limited-bandwidth MPEG file can't keep up. Plus, these cameras shoot in 8 bit 4:2:0 color, which is the lowest "acceptable" quality format, nearly identical to broadcast TV.

And due to the size and cost of these cameras, the manufacturers can't fit in the fast imagers and powerhouse processors needed to read the whole frame out quickly. The result is the effect commonly known as "rolling shutter": the CMOS imager scans from top to bottom, so if you move the camera fast or an object moves in front of the camera at speed, that object will be in a different part of the frame at the top than at the bottom. On some cameras it's really bad, you'll see objects warping and distorting; on others it's not so bad, but on all of them it's noticeable.
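To see why a top-to-bottom scan skews things, here's a toy simulation in Python (made-up sensor size and timing, just to show the geometry):

```python
import numpy as np

# Toy rolling shutter simulation: the sensor reads one row at a time,
# so a fast-moving vertical bar lands in a different column on each
# row and gets skewed into a diagonal.

HEIGHT, WIDTH = 8, 16  # made-up tiny "sensor"

def scene_at(t):
    """The world at time t: a one-pixel vertical bar sitting at column t."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=int)
    frame[:, t % WIDTH] = 1
    return frame

def capture(row_time=1):
    """CMOS-style scan: row r is read at time r * row_time, top to bottom."""
    out = np.zeros((HEIGHT, WIDTH), dtype=int)
    for r in range(HEIGHT):
        out[r] = scene_at(r * row_time)[r]
    return out

skewed = capture(row_time=1)       # bar drifts one column per row: a diagonal
global_like = capture(row_time=0)  # instant readout: bar stays vertical
```

The slower the row-by-row readout relative to the motion, the worse the skew, which is exactly why the cheap, slow-readout sensors show it the most.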
So because these consumer cameras are built to be light, cheap and easy on batteries, they're very limited in operation. Even if you shoot perfectly with them in perfect conditions, pulling out a reasonable image can be tricky and challenging, and trying to get a certain "look" is nearly impossible without substantial post processing. Even then, MPEG artifacts are a constant battle, especially if the exposure isn't perfect. I mean, if you shoot it and never make any changes in post, you could get away with an MPEG camera, but the moment you start making changes, you open a Pandora's box of unwanted noise and artifacts.
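Here's a made-up numbers experiment showing one reason grading 8-bit footage falls apart: quantize a smooth shadow gradient at 8-bit and 12-bit depth, push the exposure two stops in post, and count how many distinct levels survive. Fewer distinct levels in the pushed shadows means visible banding:

```python
import numpy as np

# Toy experiment (made-up values): how many distinct shadow levels
# survive a +2 stop (x4) exposure push in post, at 8-bit vs 12-bit?

def quantize(signal, bits):
    """Snap a 0-1 signal onto the grid of an n-bit capture."""
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

shadows = np.linspace(0.0, 0.05, 1000)        # the dark end of the frame
push = lambda s: np.clip(s * 4.0, 0.0, 1.0)   # +2 stops applied in post

levels_8 = len(np.unique(push(quantize(shadows, 8))))
levels_12 = len(np.unique(push(quantize(shadows, 12))))
# The 8-bit file has already thrown away the in-between values, so after
# the push only a handful of steps remain; the 12-bit capture keeps far more.
```

And that's before MPEG compression piles its own block artifacts on top of the banding.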
What's the solution? Well, ya gotta start with a camera that shoots RAW (RED, ARRI, Blackmagic, etc.). Then you've gotta get a scanned test pattern from a 35mm source. Take that source and build a LUT from it in DaVinci, then simply apply that LUT to the RAW camera material. It may take a few tries, but if you do that, you will absolutely get the color rendition you're looking for. The best thing about DaVinci is that you can apply a print film LUT on the output as well. So you can say, hmm... today I want X print film on my output, which will tint the colors one more time. The whole workflow works wonderfully, but it doesn't work without a lot of color depth (12-16 bit), full-raster RAW, and a lot of work in DaVinci.
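That chained-LUT idea (camera/film-match curve first, print film curve on the output) is easy to sketch with toy 1D curves. These curves are invented for illustration and have nothing to do with real DaVinci or film-stock LUTs:

```python
import numpy as np

# Chaining two LUTs: a pretend film-match curve, then a pretend "print
# film" curve on the output. Both tables are made up for this sketch.

SIZE = 1024
x = np.linspace(0.0, 1.0, SIZE)
camera_lut = np.power(x, 0.7)   # pretend 35mm-match curve
print_lut = 0.05 + 0.9 * x      # pretend print stock: lifted blacks, softened whites

def lookup(values, lut):
    """Look each value up in a 1D table."""
    idx = np.clip(np.round(values * (SIZE - 1)).astype(int), 0, SIZE - 1)
    return lut[idx]

pixels = np.array([0.18, 0.5, 0.9])  # a few sample code values
out = lookup(lookup(pixels, camera_lut), print_lut)
# Tones stay in order, but everything now lives inside the print curve's
# lifted-black / soft-white range.
```

Swap the second table and you've "printed" the same grade onto a different stock, which is exactly the appeal of doing it on the output.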
Sorry for the long-winded response. I don't mean to pick on the DSLR, but unfortunately it's just a toy. Yes, consumer cameras have come a long way, but they're very restrictive for a filmmaker looking to make major changes to the image. I personally stayed away from DSLRs and went for the newer generation of cinema cameras, and they look other-worldly compared to the DSLRs.