
Problems with edited 4K footage

It's an FCP issue. The most recent update seems to have changed something in the export presets. The same settings I've used in the past on 4K Typhoon footage now cause the skipping/stuttering you described. Example here:


It's minor, but it's annoying.
You think that stuttering is minor? I believe you guys hold these videos to a lower standard than most as to what constitutes acceptable. This actually looks pretty good coming out of the H (sorry for all the H bashing lately), but it and I have become unfriendly. I have been using mine more than normal lately and I have been unimpressed with the videos I am getting. I have been dialing in my own settings, getting about as good a result from the stock lens as anything I've seen, and I am still beyond unimpressed. My iPhone captures images that would smoke anything that comes out of my CGO3+. I am waiting to see what the lens mod will do when I get it; I have been encouraged by some of the videos from people with lens mods.

I know you are saying it's happening because of a software issue, but I am seeing stuttering on just straight flying, without panning or any other motion, and that's before even talking about the image coming out of the bird, which is only on par with everything else I've seen. I thought (a bit overconfidently) that once I started putting in my own settings and filters it would help. So far the filters help, A LOT, but that's it. To me there is no comparison with the video quality of the DJI competitors. I won't comment on the software yet because I haven't had the time to get used to it, but it feels clunky and not well put together, although since the latest firmware it's a tad better. There are some things I like better, but I do not like not having a VPS. I never realized how much I used it until I didn't have it.

Sorry if it seems I am bashing. I am most certainly not, and I realize I am just getting used to years of using the other guy's product, but to me there is just no comparison on any level at this point. I like the CCC. I won't give my full opinion until I have a good several months on it. Perhaps I'm just tainted by using DJI products for as long as I have, but I feel no benefit flying the H, and my eyes certainly aren't lying to me.
 
Check these out:
View attachment 4828 View attachment 4829
I pulled these stills out of the video files at 30 seconds in. They look pretty much the same to me.
OK, just for fun, I took these two images and analyzed them by eye. First off, as stills, they actually look pretty good. If your video, while playing, stays at this quality, it would be better than the average I have seen from the H, but that is a big caveat.

If you use a Mac, the first thing I want you to do is put the images on top of each other (I think you said you have a 27-inch 4K iMac, which is VERY true to color and pixel, almost as good as the $5k to $10k monitors used in digital service houses). Press the space bar and flip back and forth and you will see a difference right off the bat. All you can really ascertain from that is that, as expected, the multi-generation h264 one is darker. Darkening of the video is such a known problem with h264 encoding that most people who are stuck with on-board h264 encoders (essentially anyone using a bird) compensate for it; most cameras worth a nickel don't encode on board. Even the new iPhone 7 has an uncompressed raw option now. I think it might be just on the Plus, but don't quote me.

So if you do that you will see the darkness.
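If you want to put a number on that darkening instead of eyeballing it, here is a minimal sketch (my own illustration, not something from this thread), assuming the two stills were exported as PNGs under the hypothetical names original_frame.png and reencoded_frame.png, and that the Pillow library is installed:

```python
# A minimal sketch: compare the average brightness of the two stills.
# Assumes Pillow is installed; file names below are placeholders.
from PIL import Image, ImageStat

def mean_luma(path: str) -> float:
    """Average brightness of an image on a 0-255 scale (grayscale mean)."""
    return ImageStat.Stat(Image.open(path).convert("L")).mean[0]

original = mean_luma("original_frame.png")    # still from the camera file
reencoded = mean_luma("reencoded_frame.png")  # still from the re-encoded file
print(f"original {original:.1f}, re-encoded {reencoded:.1f}, "
      f"difference {original - reencoded:.1f}")
```

If the re-encoded frame really is darker, the second number comes out lower than the first.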

At a glance the difference is easy to miss, especially if you are looking on a non-client monitor, i.e. a monitor that is made to try to make your images look as good as possible. That actually includes the iMac screen unless you set it otherwise, which I do, because my job requires me to see what is real, not make it look better than it is. A client monitor is essentially a reference monitor; just as audio reference monitors don't have bass boosts and the like, while your home entertainment system does.

So anyway, I went ahead and looked at the two images using just my eyes. I have software that I could run it through that would spit out all kinds of info but let's just stick with the eye test.

Again, it's impossible to know exactly what the video looks like because you didn't post video, but you can ascertain from the stills roughly what it would look like, and on neither is it great. It does look better than most, if the quality of blur and lighting stays constant while playing, but excuse me if I have my reservations that it would.

Here is the image to analyze. I have zoomed in on the church because it gives good contrast against which to compare not only light but also resolution (pixels). Pixelation is the enemy of video. The more there is, the more artifacts you will see while the video plays (even if you don't see them zoomed all the way out, which I can, quite easily, a blessing while working and a curse while enjoying), ESPECIALLY when there is motion. I think it's safe to say most of you know there are more artifacts during motion than without it. This comes down to the encoder and its VERY low bit rate: on the H it's as low as 26, 42, and 50 Mbps in gorgeous mode, and in 1080p, for some reason I will NEVER understand, they lower it to a disgusting 16, 10, and 6 Mbps, which in the world of photography is laughable. I suppose they do it so people recording at lower resolutions can get away with less storage, but the result, whatever Yuneec thinks, is a joke. A 6 Mbps rate is so awful that I am embarrassed for them typing it. Most half-decent cameras that are even 5 years old or more and don't fly measure in gigs, not single digits. It's almost like having 128k of RAM in your iMac. That's how ridiculously low the 1080p bitrates are on the H.
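For anyone who wants to check what their own clips actually carry rather than take my numbers on faith, ffprobe (part of the free FFmpeg tools) can report the container bitrate. A minimal sketch, assuming ffprobe is installed and "clip.mp4" is a placeholder for your file:

```python
# A minimal sketch: ask ffprobe for a file's overall bitrate and print it
# in Mbps. Assumes ffprobe is on the PATH; "clip.mp4" is a placeholder name.
import subprocess

def bitrate_mbps(path: str) -> float:
    out = subprocess.run(
        ["ffprobe", "-v", "error",
         "-show_entries", "format=bit_rate",
         "-of", "default=noprint_wrappers=1:nokey=1",
         path],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return int(out) / 1_000_000  # ffprobe reports bits per second

print(f"{bitrate_mbps('clip.mp4'):.1f} Mbps")
```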

Sorry, I can get off topic. Back to your image:

[Attached screenshot: zoomed-in crops of the church from both stills, shown side by side]


Hopefully you can click on it to make it full size, and you are looking at it on a monitor that isn't trying to clean it up. There is a pretty significant difference in darkness, although admittedly not as bad as I would expect (the one on the left is the one you say came straight out of the bird; the one on the right is the one you say was compressed three times). Can you see the difference in pixelation between the two? Not that the one on the left (the original) is so good, but you can definitely see a lot more pixelation on the one on the right. You would almost need to blur or soften that to keep the video from showing artifacts, because that kind of pixelation is not easily overcome. Neither of these would make good video (although both stills are adequate). If you started to do any sort of color correction or other digital manipulation, it would only get worse. I would expect tons of artifacts if these were played at any frame rate, and the one on the right would be worse than the one on the left for the very fact that its pixels are more pronounced.

h264 is a preview codec, and it's not a bad one. From an uncompressed file, one generation of h264 is acceptable for general viewing. It's not a deliverable, though: unless the job is real estate or something similar, it doesn't pass as a professional broadcast-quality deliverable; that is usually ProRes or DNxHD 150, maybe 75, and often just Avid 8-bit or 10-bit uncompressed. I've explained this before, but in the P4 forum, so I will say it here.

When you encode something, without getting too complex, the codec uses algorithms to encode and decode the data. Some algorithms, such as h264, are only meant to be used once because of the math.

For example, a codec is just analyzing the 1s and 0s and making decisions about how to encode them. Put very simplistically, one algorithm that would be barely lossy, and really wouldn't make the file much smaller, would work like this: the COMPRESS half of COmpress-DECompress removes the 1 that follows any run of three 0s. The DECOMPRESS half then adds a 1 back after every run of three 0s, and you get an exact replica of your image. That is what we call a lossless codec (although technically it could no longer be called uncompressed, as even that would miss some beats).
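To make the lossless idea concrete, here is a toy sketch (my own illustration, not the three-zeros rule above and certainly not h264): a simple run-length coder where decompressing what you compressed always gives back the exact original bits.

```python
# Toy lossless codec: run-length encode a string of bits and decode it back.
# This only illustrates "lossless"; real codecs are far more sophisticated.
def compress(bits: str) -> list[tuple[str, int]]:
    """Collapse runs of identical bits into (bit, run_length) pairs."""
    runs: list[tuple[str, int]] = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

def decompress(runs: list[tuple[str, int]]) -> str:
    """Expand the pairs back into the original bit string."""
    return "".join(bit * count for bit, count in runs)

data = "000111000010111"
assert decompress(compress(data)) == data  # exact replica: lossless
```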

A VERY highly compressed algorithm like h264, while convenient and good at what it does, uses MUCH more complex math that doesn't work on the second try because of that complexity. It is so complex that it can't even unravel the data back to the original, which is why h264 is a lossy codec. Even on generation 1, you are not getting your original file back. It is also why software editors play it back so poorly and slow down: there is a lot of reading and unraveling to do during playback.

So h264 doesn't get the image back to square one even on the first try, and with each additional generation it gets exponentially worse. On a third compression you are not 3x worse; you are hundreds of times or more away from your original 1s and 0s. And since you are already starting out with a lossy compression at a low bit rate (which is THE #1 contributing factor to the image), multiply that by 3 and, if it's even watchable, that's impressive. I hope this made some sense.
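If you want to see generation loss for yourself, here is a rough sketch of the experiment, assuming FFmpeg is installed; "source.mp4" is a placeholder for your original clip, and the PSNR figures (higher means closer to the original) should drop with each pass:

```python
# A rough sketch of the generation-loss experiment, driving FFmpeg from Python.
# Assumes ffmpeg is installed; "source.mp4" is a placeholder for the original.
import subprocess

src = "source.mp4"
prev = src
for gen in range(1, 4):
    out = f"gen{gen}.mp4"
    # Decode the previous generation and re-encode it with x264 (no audio).
    subprocess.run(["ffmpeg", "-y", "-i", prev, "-c:v", "libx264",
                    "-crf", "23", "-an", out], check=True)
    # Compare this generation against the original; the psnr filter prints
    # its summary on stderr. Higher PSNR means closer to the original.
    report = subprocess.run(
        ["ffmpeg", "-i", out, "-i", src, "-lavfi", "psnr", "-f", "null", "-"],
        capture_output=True, text=True,
    ).stderr
    summary = next((line for line in report.splitlines() if "PSNR" in line),
                   "no PSNR line found")
    print(f"generation {gen}: {summary}")
    prev = out
```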

BUT ALL THAT SAID, at the end of the day, if you are only creating videos for yourself (and your family), then by all means do whatever makes you happy. You won't have to stand up to a client monitor or digital service quality checkers. You just need to do what makes you happy, and if in the end this does, then everything I just said is for your info only and doesn't matter at all.

Hope this sheds some light on encoding and what is happening under the hood.
 
*Also, what do you mean you did zero post-processing?
I just took the original video file into PowerDirector, added it to the timeline, pressed Produce, and rendered a file. I then took that rendered file into a new session of PowerDirector and went through the process again. I repeated that 3 times. For all renders after the first, I selected SVRT (Smart Video Rendering Technology). The re-rendering process with SVRT was faster than the original rendering. No enhancements were applied.

Please post untouched versions of both 1) the original file and 2) the thrice-compressed file.
Since I can't post the original 5-minute 4K video here, I've uploaded two 3-second 4K videos. The first is the 1st pass through the rendering process from the original Yuneec file. The other is the 4th pass. I'm interested in hearing what you think. I'm no expert in video post-production and would like to know why you think this looks like "poop".
View media item 421 View media item 422
 
Thanks for posting them, but they're showing as unplayable. Why not just upload to YouTube or Vimeo? Although that would further compress them. Dropbox?
 
OK. Here are the links to Dropbox. Each video is 30 seconds. The 1st pass is the result of importing and producing the original Yuneec video file; this took 48 seconds to produce (without using the Hardware Video Encoder option). The 2nd, 3rd, and 4th passes were the result of importing the previous pass into a new session of PowerDirector and producing the next pass using SVRT (Smart Video Rendering Technology). Each of these passes took 7 seconds to produce.
Considering that we are working with a relatively inexpensive, consumer-grade flying 4K camera, I think we get pretty good-looking video. It never ceases to amaze me and my friends when I show the videos on our 65" 4K Samsung TV. Granted, this particular video could use a little pepping up, but I want you to see video that has not been enhanced.

Also, the quality of the multi-generation video held up well. Cyberlink PowerDirector does not have as many features as the professional editing software you are used to. But it does this thing well. Perhaps not all editing software behaves this way. It really does not waste time re-rendering video that it recognizes as already rendered. Richk's original question was how to speed up the rendering of a large video file. I think breaking his video into shorter pieces, rendering each and then reassembling the rendered videos in a new editing session is a viable solution, at least if he is using software that behaves like this.
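PowerDirector's SVRT is its own feature, but the general idea, joining already-encoded pieces without putting them through another compression generation, can be sketched with FFmpeg's concat demuxer. Assumptions: ffmpeg is installed, and part1.mp4, part2.mp4, and part3.mp4 are placeholder pieces encoded with the same settings.

```python
# A sketch of joining rendered pieces with stream copy, so no new encoding
# generation is added. Assumes ffmpeg is installed and the parts share the
# same codec settings; all file names here are placeholders.
import subprocess

pieces = ["part1.mp4", "part2.mp4", "part3.mp4"]
with open("pieces.txt", "w") as f:
    f.writelines(f"file '{p}'\n" for p in pieces)

subprocess.run(["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                "-i", "pieces.txt",
                "-c", "copy",          # copy streams as-is, no re-encode
                "joined.mp4"], check=True)
```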
 
I'm out with the wife furniture shopping (ugh!) but I will have a look when I get home. Cheers for posting it.
 
