Cropping itself doesn't change the depth of field, other than that it may change the circle-of-confusion figure used to calculate depth of field if the cropped image is then enlarged to match an uncropped one. But cropping does change the field of view, which usually means you then change the focal length to compensate when comparing images -- in other words, as a starting point, you'd want to compare the same composition -- same field of view, same subject size, same distance to subject -- when comparing depth of field differences.
This is only a rough approximation, but generally the crop factor equals the difference in depth of field in terms of f-stop adjustment once you match field of view and distance to subject. In other words, suppose you are comparing two cameras and one has a sensor that is twice as wide (comparing just horizontal width rather than the diagonal, to make the math easier) -- a 2X crop factor. At the same distance to subject, the camera with the narrower sensor would need a lens with half the focal length of the one on the larger-sensor camera to get more or less the same image: say, a 25mm lens on the smaller-sensor camera versus a 50mm lens on the larger-sensor camera. And at that same distance, you'd have to stop the 50mm lens down by two f-stops to match the depth of field of the 25mm lens -- so if the exposure was f/2.8 on the 25mm lens, it would have to be f/5.6 on the 50mm lens on the larger-sensor camera to get a similar depth of field.
Again, these are all ballpark figures, not precise calculations...
So if the difference between a sensor image area that is 24mm wide (Super-35 / APS-C) and one that is 36mm wide (full-frame) is 1.5X, then the practical difference in depth of field is roughly 1.5 stops. Once you matched field of view (by using a focal length 1.5X shorter on the APS-C camera) and distance to subject to get more or less the same composition and subject size, you'd have to open up the lens on the APS-C camera by 1.5 stops to match the depth of field of the full-frame image.
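The rule of thumb above can be sketched in a few lines of Python (hypothetical helper functions, not from any camera library). Strictly speaking, the equivalence scales the f-number by the crop factor, so a 2X crop works out to exactly two stops while 1.5X works out to about 1.2 stops -- in the ballpark of the 1.5-stop figure:

```python
import math

def full_frame_equivalent(focal_mm, f_number, crop):
    """Rough full-frame-equivalent focal length and f-number for matched
    framing and distance to subject, given the sensor's crop factor."""
    return focal_mm * crop, f_number * crop

def stop_difference(f_a, f_b):
    """Number of f-stops between two apertures (each full stop
    multiplies the f-number by sqrt(2))."""
    return 2 * math.log2(f_b / f_a)

# 2X crop example from above: 25mm f/2.8 behaves like 50mm f/5.6 on full frame.
print(full_frame_equivalent(25, 2.8, 2.0))            # (50.0, 5.6)
print(stop_difference(2.8, 5.6))                      # 2.0 stops

# 1.5X (Super-35 / APS-C) example: a 33mm lens standing in for a 50mm.
focal, f = full_frame_equivalent(33, 2.8, 1.5)
print(round(stop_difference(2.8, f), 2))              # ~1.17 stops
```

Again, these are matched-framing ballpark numbers, not a substitute for a proper depth-of-field calculator.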
Now some of these DSLRs crop the sensor when recording HD video, so you'd have to use the actual recorded dimensions to calculate the difference in depth of field -- but APS-H, for example, is obviously smaller than full-frame 35mm.
This isn't rocket science, and I wouldn't get too obsessed over the math of it. The real question is whether the depth of field of an f/1.4 lens on a Super-35 camera is shallow enough for you, or if you need to find the new Vantage f/1.1 lenses. And let's say you went to a full-frame camera like the 5D but could only find f/2.0 lenses for it -- then you are only talking about roughly a 1/2-stop difference in depth of field over an f/1.4 lens on an Alexa or something. Is that really a big deal or not?
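That half-stop figure follows from the rough 1.5-stop rule above; a quick sketch (illustrative numbers only, not tied to any specific camera spec):

```python
import math

def f_number_after_stops(f_number, stops):
    # Stopping down by one full stop multiplies the f-number by sqrt(2).
    return f_number * math.sqrt(2) ** stops

# f/1.4 on Super-35, pushed 1.5 stops by the crop-factor rule of thumb,
# lands around a full-frame-equivalent f/2.4.
equiv = f_number_after_stops(1.4, 1.5)
print(round(equiv, 1))                    # ~2.4

# Gap between that and an f/2.0 lens on the 5D, in stops:
gap = 2 * math.log2(equiv / 2.0)
print(round(gap, 2))                      # ~0.47, about half a stop
```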
Personally, shooting wide open on an f/1.4 lens on the Alexa would give you both shallow focus and a wide dynamic range, so the question is whether you really need even less depth of field by going to a Canon 5D with an f/1.4 lens (probably a still lens), losing the extra dynamic range plus the ease of focus pulling you get with cinema lenses. Of course, the 5D with still lenses is probably a lot cheaper. And as I said, there is a raw hack from Magic Lantern for the Canon 5D that would give you more dynamic range if you want to deal with that.