Check these out:
View attachment 4828 View attachment 4829
I pulled these stills out of the video files at 30 seconds in. They look pretty much the same to me.
Ok, just for fun, I took these two images and analyzed them by eye. First off, as stills, they actually look pretty good. If your video plays back at this quality, it would be better than the average I have seen from the H, but that is a big caveat.
If you use a Mac, the first thing I want you to do is put the images on top of each other (I think you said you have a 27" iMac 4K, which is VERY true to color and pixel, almost as good as the $5k to $10k monitors used in digital service houses). Press the space bar for Quick Look and flip back and forth, and you will see a difference right off the bat. All you can really ascertain from that is that, as expected, the multiply-compressed H.264 one is darker. Darkening of the video is such a known problem with H.264 encoding that most people who use on-board H.264 encoders (essentially anyone flying a bird) compensate for it, since most cameras worth a nickel don't encode on board. Even the new iPhone 7 has an uncompressed raw option now; I think it might be just on the Plus, but don't quote me.
So if you do that you will see the darkness.
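Besides eyeballing it in Quick Look, you can put a number on the darkening. Here is a minimal Python sketch of the idea; the pixel lists are made-up stand-ins (a real version would pull luma values out of the two stills with an image library), and the 4-level shift is just an illustrative assumption:

```python
# Toy sketch: measure the average-luma drop between an "original" frame
# and a "re-encoded" frame. Real stills would supply the pixel values;
# here two synthetic 8-bit grayscale frames stand in.

def mean_luma(pixels):
    """Average brightness (0-255) of a flat list of 8-bit luma values."""
    return sum(pixels) / len(pixels)

# Hypothetical data: the re-encoded frame is uniformly ~4 levels darker,
# mimicking the luma shift people compensate for with on-board H.264.
original  = [120, 180, 60, 200, 90, 140] * 1000
reencoded = [max(0, p - 4) for p in original]

drop = mean_luma(original) - mean_luma(reencoded)
print(f"average luma drop: {drop:.1f} levels")  # prints 4.0 for this data
```

On real frames the shift won't be perfectly uniform, but a consistent mean-luma drop between generations is exactly the darkening you see flipping back and forth in Quick Look.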
At a glance, and especially if you are looking on a non-client monitor (i.e., a monitor made to make your images look as good as possible), the difference is easy to miss. That actually includes the iMac screen, unless you set it otherwise, which I do, because my job requires me to see what is real, not something prettied up. A client monitor is essentially a reference monitor: just as audio reference monitors don't have bass boosts and the like, while your home entertainment system does.
So anyway, I went ahead and looked at the two images using just my eyes. I have software I could run them through that would spit out all kinds of info, but let's just stick with the eye test.
Again, it's impossible to judge the actual video because you didn't post video, but you can ascertain from the stills roughly what it would look like, and honestly neither is great. It does look better than most, if the blur and lighting quality stay constant during playback, but excuse me if I have my reservations that they would.
Here is the image to analyze. I have zoomed in on the church, as it shows good contrast for comparing not only light but also resolution (pixels). Pixelization is the enemy of video: the more there is, the more artifacts you will see while the video plays (even if you don't see them zoomed all the way out, which I can quite easily, a blessing while working and a curse while enjoying), ESPECIALLY when there is motion. I think it's safe to say most of you know there are more artifacts during motion than when still. This comes down to the encoder and its VERY low bitrate: on the H it's as low as 26, 42, and 50 Mbps in gorgeous mode, and in 1080p, for reasons I will NEVER understand, they lower it to a disgusting 16, 10, and 6 Mbps, which in the world of photography is laughable. I suppose they do it so people recording at lower resolutions can get away with smaller files, or so Yuneec thinks; it's a joke. A 6 Mbps rate is so awful that I am embarrassed for them just typing it. Most half-decent cameras even five years old or more that don't fly measure their rates in gigs, not single digits. It's almost like having 128k of RAM in your iMac; that's how ridiculously low the 1080p bitrates on the H are.
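To put those bitrate numbers in perspective, you can work the arithmetic yourself: bits per pixel is just bitrate divided by (width x height x frame rate). A quick sketch, using the bitrates quoted above and an assumed 30 fps:

```python
# Rough bits-per-pixel for the bitrates discussed above.
# Frame rate of 30 fps is an assumption for the sake of the math.

def bits_per_pixel(mbps, width, height, fps):
    """How many bits the encoder can spend on each pixel of each frame."""
    return (mbps * 1_000_000) / (width * height * fps)

modes = [
    ("4K    @ 50 Mbps", 50, 3840, 2160, 30),
    ("1080p @ 16 Mbps", 16, 1920, 1080, 30),
    ("1080p @  6 Mbps",  6, 1920, 1080, 30),
]
for name, mbps, w, h, fps in modes:
    print(f"{name}: {bits_per_pixel(mbps, w, h, fps):.3f} bits/pixel")
# 4K @ 50 Mbps    -> ~0.201 bits/pixel
# 1080p @ 16 Mbps -> ~0.257 bits/pixel
# 1080p @  6 Mbps -> ~0.096 bits/pixel
```

Under a tenth of a bit per pixel at 6 Mbps: the encoder has almost nothing to spend, which is exactly why motion turns to mush at those settings.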
Sorry, I can get off topic. Back to your image:
Now hopefully you can click on it to make it full size, and you are looking at it on a monitor that isn't trying to clean it up. There is a pretty significant difference in the darkness, although admittedly not as bad as I would expect (the one on the left is the one you are saying is straight out of the bird, and the one on the right is the one you are saying was compressed three times). Can you see the difference in pixelization between the two? Not that the one on the left (the original) is all that good, but you can definitely see a lot more pixelization on the one on the right. You would almost need to blur or somehow soften that to keep the video from showing artifacts, because that kind of pixelization is not easily overcome. Neither would make good video (although both stills are adequate). If you did any sort of color correction or other digital manipulation, it would only get worse. I would see tons of artifacts if these were played at any frame rate, and the one on the right would be worse than the one on the left for the very fact that its pixels are more pronounced.
H.264 is a preview codec, and not a bad one. From an uncompressed file, one generation of H.264 is acceptable for general viewing (it's not a deliverable, unless it was for real estate or something; professional broadcast-quality deliverables are usually ProRes or DNxHD 150, maybe 75, and often just Avid 8- or 10-bit uncompressed). I've explained this before, but in the P4 forum, so I will say it here.
When you encode something, without getting too complex, the codec uses algorithms to compress and then decompress the data. Some algorithms, such as H.264's, are meant to be used only once because of the math.
For example, a codec is just analyzing the 1s and 0s and making decisions about how to encode them. Put very simplistically, one rule that would be non-lossy, and really wouldn't make the file much smaller, could work like this: the COMPRESS half of COmpress-DECompress removes the 1 any time it follows three 0s, and the DECOMPRESS half adds a 1 back whenever it sees three 0s in a row. You then get an exact replica of your image, which is what's called a lossless codec (although technically it could no longer be called uncompressed, as even that step changes the stored bits).
A VERY highly compressed algorithm like H.264, while very convenient and good, uses MUCH more complex math that doesn't survive a second pass. The encode throws information away, so the decode can't unravel the data back to the original; that is why H.264 is a lossy codec. Even on generation 1, you are not getting your original file back. It is also why editing software plays it back so horribly and slows down: it has a lot of reading and unraveling to do during playback.
So while it doesn't get the image back to square one on the first try, each additional pass gets exponentially worse. On a third compression you are not 3x worse; you are hundreds of times or more removed from your original 1s and 0s. And since you are already starting out with a lossy compression at a low bitrate (which is THE #1 contributing factor to image quality), stack that three deep and if it's even watchable, it's impressive. I hope this made some sense.
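You can watch generation loss happen in a toy model. This is not H.264; it's a deliberately lossy stand-in (a small gain loss, like the darkening mentioned earlier, plus coarse quantization, like a real codec's transform step). Each "generation" starts from the previous decode, so the error versus the original only grows:

```python
# Toy model of generation loss. Each pass loses a little level (gain)
# and snaps values to a coarse grid (quantization). Re-encoding a
# decode can never recover what an earlier pass threw away.

def lossy_generation(samples, gain=0.95, q=8):
    """One encode/decode round trip on a list of 8-bit-ish luma values."""
    return [round(v * gain / q) * q for v in samples]

def rms_error(a, b):
    """Root-mean-square difference between two equal-length lists."""
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

original = [10, 200, 30, 180, 50, 160, 70, 140, 90, 120]
frame = original
for gen in range(1, 4):
    frame = lossy_generation(frame)
    print(f"generation {gen}: RMS error = {rms_error(original, frame):.1f}")
# prints roughly 6.4, 12.1, 17.5: each extra generation compounds the damage
```

The exact numbers are only as meaningful as the toy parameters, but the shape of the curve is the point: the loss per generation doesn't reset, it stacks, which is why a third-generation H.264 file looks so much worse than a first-generation one.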
BUT ALL THAT SAID, at the end of the day, if it is only yourself (and your family) you are creating videos for, then by all means do whatever makes you happy. You won't have to stand up to a client monitor or digital-service quality checkers. You just need to do what makes you happy, and if in the end this does, everything I just said is for your info only and doesn't matter at all.
Hope this shed some light on encoding and what is happening underneath the hood.