Output both progressive and interlaced?
What's the best/most efficient way to get a progressive output file while retaining the interlaced info?
My end goal is primarily web use, I suspect, but there may also be need to have footage on DVD or in broadcasts. So it makes sense to go through the reinterlacing after QTGMC when restoring, ending up with a DVD-compliant 25fps file. It also seems like a massive waste to branch off after QTGMC and do everything twice - once for interlaced, once for progressive. Further, I'm not sure if the progressive file also needs to be 25fps. |
By default, QTGMC does not retain the original source fields: it massages the original lines to fit better with its invented lines. There is a lossless mode you can enable, in which case something along the lines of QTGMC(Lossless=2).SeparateFields().SelectEvery(4,0,3).Weave() would return the original, unaltered interlaced video. But the quality of the progressive file will likely worsen as a result.
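As a reference point, a minimal sketch of that round-trip, assuming QTGMC's documented Lossless mode (QTGMC and its dependencies must be loaded; the source is assumed to be 25i TFF PAL):

```avisynth
# Hedged sketch -- not a prescription; see the QTGMC documentation for Lossless modes
AssumeTFF()
QTGMC(Preset="Slower", Lossless=2)     # 50p output that preserves the source fields
# To reconstruct the original interlaced stream from that 50p output:
SeparateFields().SelectEvery(4, 0, 3).Weave()
```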
Quote:
Or do you refer to the HD material you already have? If it's 1080i/25 it could be downscaled pretty well with Avisynth to 720p/50. 1080p/50 is not valid for BluRay and many players will choke on it. |
At present I'm reinterlacing, importing to Premiere, doing a final color correction, then editing out junk (timeouts, that sort of thing), adding chapter points, and exporting - to MPG via AME and/or frameserving out to MeGUI to create a progressive MP4.
I'm referring to the same VHS tapes my other threads are about. |
I wouldn't. I'd either branch off from the 50p file into a 25i file and a 25p file, or just create a 25i file and then turn that into a 25p file.
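As a sketch, branching both deliverables off one restored 50p master might look like this (the filename is hypothetical, and the re-interlacing recipe assumes a TFF master):

```avisynth
master = AviSource("restored_50p.avi")   # hypothetical lossless 50p master
# 25i for DVD: re-interlace by weaving alternate fields of alternate frames
dvd = master.AssumeTFF().SeparateFields().SelectEvery(4, 0, 3).Weave()
# 25p for the web: drop alternate frames (half the temporal resolution)
web = master.SelectEven()
```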
|
If you deinterlace 25i you get 50p. You can only get 25p from 25i by either dropping alternate fields/frames or blending fields. So how are you getting 25p by deinterlacing 25i?
|
Code:
interp=nnedi2(field=1) |
In other words you're discarding the bottom field and keeping only 50% horizontal resolution. Why didn't you just say so?
|
You mean vertical?
I've had that script sitting on my drive as a simple deinterlacer for ages, found it on VH IIRC. If there's a better way, I'm all ears. |
Horizontal resolution -- i.e., temporal resolution: you get uneven horizontal motion. Not for pans and action video, I'd say. That's what msgohan referred to earlier.
|
So what's the best way to do things?
|
Dropping fields or frames is better than blending, by far. If you want 25i to 25p, dropping fields is your only real choice. It will probably work for YouTube or the web, where quality is low and most viewers are clueless anyway and will watch anything.
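A sketch of the two drop-based routes just described, assuming the nnedi3 and QTGMC plugins are installed (illustrative only):

```avisynth
# (a) same-rate deinterlace: keep one field per frame, interpolate the missing lines
nnedi3(field=1)                      # 25i in, 25p out; half the information discarded
# (b) full deinterlace, then drop alternate frames -- same temporal loss, often cleaner
# QTGMC(Preset="Fast").SelectEven()
```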
|
Is there a better way if I work from the 50p version, or no?
|
You have a 25i interlaced original.
If you deinterlace normally you'll have 50p. Your options:

- Deinterlace same-rate, dropping fields: you'll have 25p at 50% temporal resolution. If you want that for the internet, you have to rescale to 640x480 or it will play at 720x576 (5:4 aspect ratio) instead of 4:3.
- Take 25i, deinterlace to 50p at the full frame rate, then drop alternate frames (SelectEven or SelectOdd): the result is the same 25p at 50% temporal resolution, with the same 640x480 rescale needed for the internet.
- Take 25i, deinterlace to 50p, keep all fields and frames and have all your resolution -- but it will be 50fps, and you'll still have to rescale to 640x480 for the 'net, bearing in mind that some sites will accept 50p and some won't.

It won't work for normal broadcast unless you take 25p at 720x576 and encode it as interlaced: the encoder will apply an interlace flag to the progressive content. When it plays it will still have some judder on horizontal motion because of the discarded fields or frames, but a player will see it as interlaced. That might work for broadcast, but for the 'net you'll have to live with 640x480 @ 25p with reduced resolution, or find a site that accepts 640x480 @ 50p.

The hard way is to try for BluRay or AVCHD. Deinterlace to 50p, rescale to 960x720, add black borders of 160 pixels to each side, and you have 1280x720p at full motion resolution -- 16:9 with a 4:3 upscaled image inside the frame. It will be a little blurry due to the upscale from SD VHS. I would suggest that you upscale with Avisynth, not Adobe. Actually PAL motion looks smoother when interlaced -- that's what people say, anyway.

[EDIT] Or, for your own personal use with external players and TV, deinterlace to 50p and encode it as progressive mp4 with a 4:3 display aspect ratio. |
Quote:
If I drop the 50p file into Premiere instead, I need to spit it out, reinterlace, and bring it back in in order to get something for DVD. Unless I try and convert it from 50p to 25i within Premiere, but I'm fairly sure you'd reach through the screen and slap me if I tried that. |
You have the most convoluted workflow I've ever seen.
Why don't you: Deinterlace and denoise with Avisynth/VirtualDub. Save as lossless Lagarith or huffyuv 50p. Import into Adobe for color correction, save as lossless Lagarith or huffyuv. Use the second 50p file for all the other work. Yes, you end up with two 50p files, which is the price you pay for insisting on Adobe for whatever reason. The last 50p is a cleaned-up, color-corrected 50p that you can reinterlace for DVD, or discard alternate frames and resize for the net, or do whatever else you want. |
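A minimal first-stage script for that workflow might look like this (the filename and filter choices are assumptions, not a prescription):

```avisynth
AviSource("capture.avi")     # hypothetical lossless capture from tape
AssumeTFF()
QTGMC(Preset="Slower")       # deinterlace to 50p
# denoiser of your choice goes here
# then save as Lagarith or huffyuv from VirtualDub and import into Adobe
```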
I've actually moved to completely uncompressed files coming into Premiere...I was using Ut but there was a really obvious drop in quality. Not sure if that would be true of Lagarith but I'm also trying to stay Mac-compatible from the earliest possible point in the process.
Color correction in Premiere is vastly easier, and I'm not familiar with any other software that will do the edits I need to get from the tape to the final footage (removing commercials, censoring audio, overlaying a score graphic, adding chapters, etc) and create a DVD-compliant MPG (with GOPs where I want chapter markers). |
Quote:
Code:
### for DVD

Code:
### for square-pixel 25p/mp4/internet

But since no suggestion seems appropriate to your workflow, I wouldn't know what else to say. |
Quote:
Would Spline36Resize be the way to go for 720p50? |
And by the way:
Quote:
Code:
nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=1280, fheight=720) |
I understand that, and you can also encode so that a scene change starts a new key frame or GOP. A chapter won't go there until it's authored.
:smack: Let's change that. You need a 960x720 image in a 1280x720 frame. So you can try it two ways:

Method A: Code:
Spline36Resize(960,720)

Method B: Code:
nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=960, fheight=720) |
I have HD2SD, which also has SD2HD, not sure how that handles things though.
What's the best way to run that lordsmurf script? It expects interlaced at present, but it seems a waste to run that script, go through all my other work, and then do the same thing all over again without that script. Essentially I do one version with the lordsmurf script and one version without, then watch the footage through and if there are bad dropouts or whatever at any point, I drop the LS version over the top. That way I can still get the detail of not having that script (you know, minor things like the ball), but also utilise its cleanup power when necessary. EDIT: And for square-pixel output, why 640x480? Why not 720x526? |
SD2HD is worth a try. HD2SD has been replaced with iResize for better control of line twitter and other downscaling defects. A recent version is posted here: https://forum.videohelp.com/threads/...on#post2368998.
Internet posts usually require square-pixel progressive frames. 720x576 on the internet won't play at 4:3, but at 5:4. If your 720x576 is 16:9 anamorphic instead of 4:3, resize to a square-pixel 16:9 size such as 856x480 or smaller. For your personal use, anamorphic video at 4:3 or 16:9 can be coded into mp4 with the proper display aspect ratio without resizing: mp4 encoding can accept 16:9 and 4:3 display aspect ratios, but anamorphic usually won't work on online media players. |
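For example, square-pixel resizes for the two common anamorphic PAL cases (a sketch, assuming progressive input; 856 is a mod-8 approximation of 480x16/9):

```avisynth
# 4:3 anamorphic 720x576 -> square-pixel for the web
Spline36Resize(640, 480)
# 16:9 anamorphic 720x576 -> square-pixel 16:9
# Spline36Resize(856, 480)
```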
720x526, not 720x576.
Basically, why downscale in both directions rather than just shifting one direction until it'll work as square pixels? |
Quote:
You can keep the height and resize horizontally, of course. To-spec PAL SD at 16:9 would be 1024x576, at 4:3 it would be 768x576. If you want 4:3 at a width of 720, the height would be 536 (which is mod8), not 526, and still wouldn't be exactly 4:3 but close. |
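In other words, keeping the 576-line height and correcting only the width (a sketch, assuming progressive input):

```avisynth
Spline36Resize(768, 576)      # 4:3 square-pixel
# Spline36Resize(1024, 576)   # 16:9 square-pixel
```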
Quote:
If this is just a delivery format for web use, are PAL spec, interlaced chroma and AviSynth filters relevant? |
The internet is square pixel progressive, not anamorphic. Make it any size you want. Web players won't adjust the display aspect ratio. If they don't like it, they'll let you know.
I have no idea how Adobe is resizing. If the original was 788x576, that's 1.37:1, not 1.33:1, and I have no idea where it came from. Mod-2 vertical dimensions won't work for YV12 unless it's progressive. If you want odd frame sizes or mod-2 heights, go ahead. The fact that Adobe lets you do it without throwing errors is another reason why I don't use it. |
I was just resizing a document in Photoshop; it doesn't have a clue why I'm doing it and has no reason to complain.
Photoshop's widescreen square pixel is 1050x576, not 1024. So they're both wider than your stated dimensions. |
More reasons why I don't use Adobe for resizing. You should be setting your own frame dimensions. With 1050x576, what is the aspect ratio? Note: it's not 16:9.
|
First, you have to browse through that script and see what it's doing. It's not necessary to understand every detail, such as exactly how the MAnalyse lines actually work (they're adapted straight out of the mvtools documentation, in case you haven't read it), but some things are obvious. For instance, you'll see "SeparateFields()" at the beginning of the top procedure, then two other functions are called, and at the bottom of that top procedure you'll see "Weave()", which is an operation that has to follow SeparateFields at some point. Comment out those two lines and you can use it on progressive video if necessary.
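Schematically, the structure being described looks like this (the inner filter calls are placeholders, not the actual script):

```avisynth
SeparateFields()     # comment this out for progressive input
# ... the two inner filtering functions are called here ...
Weave()              # comment this out together with SeparateFields()
```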
Why would you have to do everything over again? If you have a filtered script and you still have those ripples and dropouts, run the routine on the results you have. It doesn't change your previous filters or color corrections. The main idea behind lossless codecs is that you can save your work as lossless media.

Otherwise, starting at an earlier point: if you have a video and you see particular problems, you have to map out beforehand what you expect is required and plan accordingly. If you have to intervene with something like Adobe, that will be part of the planning. You might need more than one intermediate stage, and that's not unusual. I sometimes use AfterEffects for color and timeline work -- in that case it requires a lossless intermediate that I convert to RGB in Avisynth with 16-bit dither tools, then open in AE and save out of AE as lossless AVI.

I had a slide show project with hundreds of photos, each sequence planned out in detail with individual resizing and composing, zooming and panning in AE with Ken Burns effects, audio and title overlays, and whatnot. I can't count the number of times I had to redo a simple sequence two or three times before it fit the running script. Then I had to join multiple lossless segments out of AE into an encoder and then author for disc. I know you have some big files, but I had several 6-hour color captures that needed extensive cleanup and edits, requiring more than 250GB of intermediate files before I did the encoding -- and the encoding wasn't done in Adobe. I've had 2-hour VHS movies that needed multiple intermediates, had to be joined in yet another round of intermediates, then were assembled in the encoder and encoded in one shot with pulldown applied for the final output, to avoid pulldown cadence changes in the final version. I've had plenty of long videos that required different scene-by-scene filter changes, taking weeks to complete, that had to be joined for the final encoding.
So what you describe isn't unusual. It's par for the course with problem videos, of which I and others have had plenty to deal with. I've had MPEGs that had to be demuxed into elementary video and audio streams so that pulldown could be applied to 20fps film video in DGPulldown to make it 25fps PAL or 29.97fps NTSC, then remuxed in a smart-rendering editor for edits and authoring. I worked on one truly horrific 3-hour opera transfer from tape directly to DVD that took 14 months to complete (the video didn't even belong to me, but it was my baptism of fire into Avisynth). With that project I saved hundreds of intermediate files and scripts on a USB drive and optical discs, in hopes that one of these days I can do an even better job, as there are still a lot of unsolved problems in the final.

You have to think ahead, and sometimes you have to drop back and rework something that gets blended back into the final. If you're dealing with an NLE that impedes that process, you should alter the workflow accordingly. |
SeparateFields() with RemoveSpots broke the video into smaller segments and separated the fields in which a spot or rip extended over multiple images. RemoveSpotsMC is a temporal filter -- if noise stays the same for 2 or 3 frames, it isn't considered noise. Temporal filters look at the way images change over time: if something doesn't change, it isn't seen as noise. If you break the images into disparate pieces, the same noise would appear in one group of images but not in the other group, so in one of those groups the noise would be treated as a disturbance that doesn't belong there.
Not all filters can be used in this manner. Some filters that require progressive video will distort alternate lines if SeparateFields is used, because alternate lines don't appear in the same place in both images and will be reassembled incorrectly during the weave process. It takes experimentation to tell which method works best with different filters. If you want to break up the frame sequence using deinterlaced full-frame video, separate even and odd frames, process them separately, then interleave them into the original order when filtering is done. There have been examples of using either SeparateFields() or treating alternate frames in these forums. The chroma cleaner chubbyrain2 is one filter that has been used both ways. MCTemporalDenoise is another, although MCTD has a parameter that can be set to work with interlaced video. One filter that's only partially effective with SeparateFields() is dfttest.

[EDIT] With the RemoveSpots example you overlooked the fact that in many cases SeparateFields() was followed by filtering even and odd fields separately, then reassembling them: Code:
SeparateFields() |
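The pattern referred to above -- filtering even and odd fields as separate streams, then reassembling -- might be sketched as follows (RemoveSpotsMC is used as the example filter; the source clip is assumed to be in `last`):

```avisynth
fields = SeparateFields()
even   = fields.SelectEven().RemoveSpotsMC()   # all top fields as one stream
odd    = fields.SelectOdd().RemoveSpotsMC()    # all bottom fields as another
Interleave(even, odd)                          # restore original field order
Weave()                                        # reassemble interlaced frames
```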
Method 1 = Don't.
Method 2 = Deinterlace all.
Method 3 = Interlace all.

If for web streaming, deinterlace the interlaced footage, then merge with the progressive. Broadcasts can actually handle mixed interlaced/progressive, if done correctly with the TS streams. Doing everything twice is indeed the best: interlaced for interlaced, progressive for progressive. Many people would get bored at the triage required when you work for studios. You encode lots of things lots of ways. Sometimes you can automate, sometimes not. 50p, 25p, 50i, 25i ... oh goody. What fun. PTSD flashbacks to studio work. :laugh:

I need more detail on the source. Long thread, but I never saw it. The conversation is too broad for simple interlaced vs. progressive when you start getting into non-25/30 framerates. (Ditto for NTSC/PAL mixing.) |
If uploading to YouTube, you should upscale the content because:
Of course, web streaming quality is bad to begin with, so I don't know how much you care about making it less-bad. |
Attachment 7753 Attachment 7752 |
Site design, images and content © 2002-2024 The Digital FAQ, www.digitalFAQ.com