I've just recently edited a video with all cameras at 4K. Before that, all video from the Typhoon was shot at 4K and the final output was 1080 because, at the time, my other cameras could not shoot 4K. The 1080 videos were rendered out of my editor at the maximum file size for YouTube. My CCC and Trim Function videos are both about ten minutes long, and each final rendered video was just under 4 GB at upload.
My recent video, also ten minutes long but at 4K, was rendered out at the maximum data rate, and the final rendered video is about 6 GB for the 10 minutes: 50% more data.
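As a rough sanity check on those numbers, you can convert file size and running time into an average bitrate. This is a sketch using the figures quoted above (4 GB and 6 GB for ten-minute videos); the exact render settings will vary.

```python
def avg_bitrate_mbps(size_gb: float, minutes: float) -> float:
    """Average bitrate in megabits per second for a file of a given size and length."""
    bits = size_gb * 8 * 1000**3        # decimal gigabytes -> bits
    return bits / (minutes * 60) / 1e6  # bits per second -> Mbps

rate_1080 = avg_bitrate_mbps(4, 10)  # the 1080 uploads: ~53 Mbps
rate_4k = avg_bitrate_mbps(6, 10)    # the 4K upload: ~80 Mbps
print(round(rate_1080), round(rate_4k))            # 53 80
print(round(rate_4k / rate_1080 - 1, 2))           # 0.5, i.e. 50% more data
```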
Even though I do not have a 4K screen, my computer has the ability to play 4K, and the difference is clear. Even on YouTube, if you've got a good enough GPU, you can see a clear difference. Scenes where tree leaves are moving against a background, for instance, are where highly compressed video starts to break up and lose quality. At a high enough data rate uploaded to YouTube, these areas can be preserved. It all starts at the source: capturing the biggest image is paramount.
Here are the relative sizes of our 12 MP sensor, a 4K (8 MP) image, and a 1080 (2 MP) image: a 4K video has 4 times the data rate of 1080. This is also why cameras like ours have the ability to shoot high frame rates at smaller resolutions. For instance, shooting 1080 video at 120 FPS (4 times faster than normal) has roughly the same amount of data being written to the card.
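The frame-rate trade-off above is easy to verify with the standard UHD (3840x2160) and HD (1920x1080) resolutions, assuming 30 FPS as the "normal" frame rate:

```python
def pixels_per_second(width: int, height: int, fps: int) -> int:
    """Raw pixel throughput the camera must write per second."""
    return width * height * fps

uhd_30 = pixels_per_second(3840, 2160, 30)   # 4K at a normal frame rate
hd_120 = pixels_per_second(1920, 1080, 120)  # 1080 at 120 FPS
print(uhd_30 == hd_120)  # True: a 4K frame has exactly 4x the pixels of a 1080 frame
```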
4K video properly rendered out at 1080 is always better than native 1080 because there is simply more data in the image, regardless of final size. Also, if you want to know where that extra width of the 4K mode available on the CG03 comes from, it's the extra area on the sides of the 4K frame; it is not a true anamorphic picture.
