Apr 26 2012, 6:19am
Interesting commentary, Owain.
Of course compression for the web isn't going to be the same. Enough said there.
I'd go with your last answer: the post-production wasn't done for resolution quality; it's the motion that mattered.
As far as the landscape goes, from casual observation and rookie knowledge picked up in web research (it's not even close to my line of work; I'm just interested in the tech), I can say that certain DSLR cameras shooting high-def short videos have a cutting-edge reputation for low-light conditions.
Something I found interesting was all the wiki entries describing the optical illusions you can create by swapping lens focal lengths or using extenders on the camera. Much like the miniature models they used for sci-fi movies back in the mid-20th century, Peter used the same forced-perspective techniques to give the impression the hobbits were small.
So if you like taking close-ups of something and you want a landscape background (albeit blurry), you basically use a shorter 25mm lens on the DSLR with an extender.
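That forced-perspective trick boils down to simple geometry: the image size on the sensor scales with the subject's real size divided by its distance. A minimal sketch of the idea (the focal length, heights, and distances here are my own illustrative numbers, not figures from any actual production):

```python
# Thin-lens approximation: the height of the image on the sensor is
# roughly focal_length * subject_height / subject_distance.
def apparent_height_mm(focal_mm, height_m, distance_m):
    return focal_mm * height_m / distance_m

# A 1.8 m actor at 8 m projects the same image size as a
# 0.9 m "hobbit" at 4 m: halve the size, halve the distance.
print(apparent_height_mm(25, 1.8, 8.0))  # 5.625 (mm on the sensor)
print(apparent_height_mm(25, 0.9, 4.0))  # 5.625 (mm on the sensor)
```

Same math explains why the two actors read as the same scale on camera even though one is standing much farther back.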
How far you tilt the camera vertically away from the vanishing point makes a difference too. Pretty interesting stuff. So I can follow you pros, up to a point.

Mostly because I want to save my old VCR tape movies, I've put together some basic software and cheap eBay equipment as a project. It was easier back in the day: WinXP actually had it right, and so did the capture cards. Now everything has been dumbed down. I hate Win7, and Win8 is going to be a travesty. Such is their way for tablets and other BS; Microsoft has me confused, it's all aesthetics for consumers nowadays.

My research had me wandering around those world-wide-web RED places in the past, just as a lurker, but I'm too busy these days planting and germinating...to get back to indoor projects. Maybe later.
Something I've noticed is how Intel kept the tick-tock cadence between Sandy Bridge and Ivy Bridge such that the new LGA 2011 socket gave the processor just enough juice to bump it up to enthusiast pricing. The Sandy Bridge-E chip has the workstation performance, but the X79 Platform Controller Hub lacked the tock everyone wanted, or was expecting, at least those who were running the older quad cores with big RAID arrays for videographic work. It's the same reason all the mobo manufacturers had to install their own chips to handle the parallel processing; I even think Intel turned some features off right at the CPU. I've always wondered, because Intel is really pushing its onboard graphics right now, probably for laptop users. The fiber-optic Light Peak/Thunderbolt was what I wanted, but I guess copper is flexible and optical cable not so much.
I guess I started to read things like this...
How to add Mac-like RAW image support to Windows 7, Vista, XP
...then thought it would need a CPU like the Sandy Bridge, shooting straight hi-def RAW off a DSLR, which by my calculations required four 1 TB drives running in RAID 0, straight off the Intel drivers, no hardware RAID card.
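The back-of-envelope math on why uncompressed RAW pushes you toward striped drives goes something like this (the bit depth and frame rate are my own assumptions for illustration, not measured figures from any particular camera):

```python
def raw_rate_mb_per_s(width, height, bits_per_px, fps):
    """Sustained write rate for uncompressed frames, in MB/s."""
    return width * height * bits_per_px / 8 * fps / 1e6

# 1080p, 14-bit sensor data, 24 fps:
rate = raw_rate_mb_per_s(1920, 1080, 14, 24)
print(round(rate, 1))  # 87.1 MB/s, sustained, every second you roll
```

A single spinning disk of that era might manage around 100 MB/s on its outer tracks but considerably less on inner tracks, so striping several drives in RAID 0 is what buys the sustained headroom. Rough numbers only.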
After keeping a backup to a backup drive, plus the OS drive, and all that research trying to figure out the new partition scheme for the bigger drives (think running DiskPart during the Windows 7 install to get it to work, as you load drivers), I ended up with all these drives and no optical drives attached. Then I find out from reviews that my mobo has hardware problems (probably something to do with the Z68 chipset and Win7 64-bit). Great.
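For anyone wondering why the bigger drives need the new partition scheme at all: the old MBR format stores sector counts in 32 bits, and at 512 bytes per sector that tops out just over 2 TB, which (as I understand it) is why DiskPart has to convert large disks to GPT:

```python
# MBR addresses sectors with a 32-bit count; at 512 bytes/sector
# the largest disk it can fully address is:
mbr_limit_bytes = 2**32 * 512
print(mbr_limit_bytes)          # 2199023255552 bytes
print(mbr_limit_bytes / 1e12)   # about 2.2 TB; anything bigger needs GPT
```

So a 3 TB drive partitioned as MBR simply loses the space past that line.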
And then Intel comes out with their new socket. Double great (even though my chip is a third the price, for only a 20% or so hit in processing speed). Can't win for losing. Now, if the mobo keeps screwing things up, I'm just tearing it all apart, parceling these drives out to two separate computers, then building a home server to back up all my computers (cobbled together over the years). Or maybe not. I can tell you this, though: onboard graphics on Intel CPUs or not, that new hybrid SSD caching wasn't impressive.
To get what I'm talking about to work, you'd need a power source to run a full-sized computer and dump the RAW files off the DSLR via tether. As far as I can tell, you might as well be shooting with all this gear in the back of a truck, or some kind of wheeled cart with 12V batteries powering the computer, and that ain't going to work with a small crew of three. All that just to get your RAW footage in high def, because there's so much data, y'know. The other direction, with compression, they just use SSDs in a package mounted next to the camera, like a mini-monitor. Those ain't cheap either, 'cuz then you're up to processing with broadcast-quality monitors, and that takes you beyond a $10k USD investment or so (with a three-man crew), when all you wanted was depth of field from a hi-def DSLR. That's my understanding anyway.
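To put "so much data" into rough numbers, here's how the storage adds up per hour of uncompressed footage (assuming 1080p, 14-bit sensor data, 24 fps; purely illustrative figures of my own, not any camera's spec):

```python
# Bytes for one uncompressed 1080p frame of 14-bit sensor data, packed:
bytes_per_frame = 1920 * 1080 * 14 // 8
print(bytes_per_frame)                      # 3628800 bytes/frame

# An hour of footage at 24 fps:
bytes_per_hour = bytes_per_frame * 24 * 3600
print(bytes_per_hour / 1e9)                 # about 313.5 GB per hour
```

At that rate, four 1 TB drives striped together hold maybe twelve or thirteen hours before you're offloading again, which is why the truck full of gear stops sounding crazy.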
It's like, what do you want? Do you want to play an acoustic 6-string really well, muss your hair up with teeth brightened (ouch); or do you want your electric to sound expressionist, like German-filmmaking nihilism? Wish I had the time to tackle these learning curves on my gear, but right now I'm working on my flower beds. And that's where my head is.
Klaatu... Verata... Necktie. Nectar. Nickel, Noodle...Nikto!
(This post was edited by Night Wolf on Apr 26 2012, 6:28am)