Two XL2s (one stationary, one roving) recording a weekly 60-75 minute live production (15 events total). Audio will be multitracked to 8 tracks and recorded to a computer. Video and audio will be pulled into Final Cut Pro for post-production.
From most people's reports, it seems that over an hour of footage the three separate clocks involved (two cameras, one audio interface) will drift apart by about 150-300 ms. That could obviously get annoying. Since the XL2 has no sync or timecode ins or outs, I have two options for syncing the audio to the video:
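To put numbers on that, here's a quick back-of-the-envelope check (my own figures, assuming ~200 ms of drift as a midpoint of the reported range, and NTSC rates):

```python
# Convert the reported hourly drift into clock accuracy (ppm)
# and into NTSC video frames.
DRIFT_S = 0.2            # assumed drift over one hour, in seconds
HOUR_S = 3600
FPS = 30000 / 1001       # NTSC frame rate (~29.97 fps)

ppm = DRIFT_S / HOUR_S * 1e6   # clock mismatch in parts per million
frames = DRIFT_S * FPS         # how many video frames that drift spans

print(f"~{ppm:.0f} ppm mismatch, ~{frames:.1f} frames out of sync")
```

So clocks that disagree by only ~50-80 ppm, which is ordinary for consumer gear, are enough to land you several frames out by the end of an hour.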
Set both XL2s to free run (http://dvinfo.net/ca...s/article11.php) and record all three things separately. After everything is pulled into FCP, manually line up the beginnings, then stretch/shrink the two video tracks so everything stays lined up. I've read this is easier if you send one of the 8 audio tracks back to one of the cameras, so you can line up waveform to waveform instead of eyeballing where the audio should fall against a video frame.
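For that waveform lineup, the starting offset between the camera's scratch audio and the multitrack recording can even be found automatically by cross-correlation. A minimal sketch (my own, not a built-in FCP feature; assumes both tracks are mono NumPy arrays at the same sample rate):

```python
import numpy as np

def find_offset(ref, other):
    """Return how many samples later the common material starts
    in `other` than in `ref` (negative if it starts earlier)."""
    corr = np.correlate(other, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)
```

Slide `other` back by that many samples and the two starts line up. Note this only fixes the starting offset, not the gradual drift, and `np.correlate` is slow on long tracks, so in practice you'd run it on a short excerpt.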
Pick up an old Digidesign Video Slave Driver or MOTU MIDI Timepiece AV, which (I think) will take the non-black video output from the stationary XL2 and convert it to wordclock. Send that wordclock to my interface, and the audio should then have exactly the same length, in samples and frames, as the stationary XL2's footage. Sync free-run timecode on the two XL2s with the aforementioned procedure, pull everything into FCP, and sync the audio with the stationary XL2 footage. The lengths should match since the XL2 was (sort of) sending wordclock to my interface. Line up the beginning of the other XL2's footage and adjust its length by stretching/shrinking or resyncing at each video cut.
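One detail on the "same length in samples and frames" claim: at 48 kHz locked to NTSC video, the samples-per-frame count isn't a whole number, but it works out exactly over every 5 frames, so a locked recording has a precisely predictable length. Quick check with exact fractions:

```python
from fractions import Fraction

FPS = Fraction(30000, 1001)   # exact NTSC frame rate
RATE = Fraction(48000)        # audio sample rate in Hz

per_frame = RATE / FPS        # 1601.6 samples per frame
per_five = per_frame * 5      # exactly 8008 samples per 5 frames
hour = RATE * 3600            # 172,800,000 samples in a locked hour
```

That's the whole point of option 2: if the interface is clocked from the camera's video, the audio file length is deterministic instead of drifting.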
Will option 2 even work? Will those devices generate wordclock from the XL2's video output even though it's not blackburst? And will the clock derived from the XL2's video output be stable enough to record audio against? In other words, will it be dependable?
Would stretching or shrinking an hour's footage by 6 frames make a big difference in the video quality? Would it take hours for the computer to chew through the conversion? Or is option 1 just fine and I'm worrying too much?
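For what it's worth, a 6-frame stretch over an hour is a tiny speed change (my own sketch, assuming NTSC rates):

```python
FPS = 30000 / 1001            # NTSC frame rate
total_frames = 3600 * FPS     # ~107,892 frames in an hour
stretch = 6 / total_frames    # fractional speed change needed

print(f"{stretch * 100:.4f}% speed change")
```

That's well under a hundredth of a percent, so in principle the retiming itself is far below anything visible, though the NLE still has to re-render the clip.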
Any additional tips or options you propose would be appreciated as well. Thanks!
External audio sync for a two camera shoot