digitalFAQ.com Forum (https://www.digitalfaq.com/forum/)
-   Restore, Filter, Improve Quality (https://www.digitalfaq.com/forum/video-restore/)
-   -   Output both progressive and interlaced? (https://www.digitalfaq.com/forum/video-restore/8007-output-both-progressive.html)

koberulz 05-15-2017 05:47 AM

Output both progressive and interlaced?
 
What's the best/most efficient way to get a progressive output file while retaining the interlaced info?

My end goal is primarily web use, I suspect, but there may also be need to have footage on DVD or in broadcasts. So it makes sense to go through the reinterlacing after QTGMC when restoring, ending up with a DVD-compliant 25fps file.

It also seems like a massive waste to branch off after QTGMC and do everything twice - once for interlaced, once for progressive. Further, I'm not sure if the progressive file also needs to be 25fps.

msgohan 05-15-2017 09:17 AM

By default, QTGMC does not retain the original source fields. It massages the original lines to fit better with its invented lines. There is a lossless mode you can enable, which means that something along the lines of QTGMC().SeparateFields().SelectEvery(4,0,3) would return the original, unaltered interlaced video. But the quality of the progressive file will likely worsen as a result.
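A sketch of how that might look in a script (the Lossless parameter value and filename here are illustrative, not tested on your footage):

Code:

AviSource("capture.avi")   # hypothetical source file
AssumeTFF()
QTGMC(Preset="Slower", Lossless=2)   # 50p output with the original field data kept intact
# to recover the original, unaltered 25i from the 50p result:
# SeparateFields().SelectEvery(4,0,3).Weave()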

Quote:

Originally Posted by koberulz (Post 49377)
It also seems like a massive waste to branch off after QTGMC and do everything twice - once for interlaced, once for progressive.

I guess what you're saying here is that one or more of your filters require progressive input.

Quote:

Further, I'm not sure if the progressive file also needs to be 25fps.
If you don't keep the deinterlaced file 50fps, you're screwed.

koberulz 05-15-2017 11:50 AM

Quote:

I guess what you're saying here is that one or more of your filters require progressive input.
Yes.

Quote:

If you don't keep the deinterlaced file 50fps, you're screwed.
In what sense?

sanlyn 05-15-2017 03:50 PM

Quote:

Originally Posted by koberulz (Post 49377)
What's the best/most efficient way to get a progressive output file while retaining the interlaced info?

My end goal is primarily web use, I suspect, but there may also be need to have footage on DVD or in broadcasts. So it makes sense to go through the reinterlacing after QTGMC when restoring, ending up with a DVD-compliant 25fps file.

It also seems like a massive waste to branch off after QTGMC and do everything twice - once for interlaced, once for progressive. Further, I'm not sure if the progressive file also needs to be 25fps.

You don't have to repeat the entire process. Keep the filtered, progressive copy and re-interlace it for DVD and broadcast. What you do with the 50fps version remains the question. For web use you'd need progressive media, and not all websites accept 50fps. The only choice there, as msgohan has alluded, is to discard alternate frames. That's not a good choice for action video -- it destroys 50% of the temporal resolution, giving jerky playback during fast action and camera pans. Some websites (not all) will accept standard-definition 50fps. The only usual distribution/disc format for 50fps PAL is 1280x720 BluRay/AVCHD, but you need fairly decent masters to get SD into that format, and Adobe wouldn't be the upscaler of choice.

Or do you refer to the HD material you already have? If it's 1080i/25 it could be downscaled pretty well with Avisynth to 720p/50. 1080p/50 is not valid for BluRay and many players will choke on it.
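A minimal sketch of that downscale (assumes QTGMC is installed; the preset choice is illustrative):

Code:

# 1080i/25 -> 720p/50
AssumeTFF()
QTGMC(Preset="Medium")      # deinterlace to 1080/50p
Spline36Resize(1280, 720)   # downscale to square-pixel 720p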

koberulz 05-15-2017 04:51 PM

At present I'm reinterlacing, importing to Premiere, doing a final color correction, then editing out junk (timeouts, that sort of thing), adding chapter points, and exporting - to MPG via AME and/or frameserving out to MeGUI to create a progressive MP4.

I'm referring to the same VHS tapes my other threads are about.

msgohan 05-15-2017 09:13 PM

Quote:

Originally Posted by koberulz (Post 49384)
In what sense?

If you deinterlace to 25fps, how would you ever reinterlace that from 25p to 25i?

koberulz 05-16-2017 12:10 AM

I wouldn't. I'd either branch off from the 50p file into a 25i file and a 25p file, or just create a 25i file and then turn that into a 25p file.

sanlyn 05-16-2017 07:09 AM

If you deinterlace 25i you have 50p. You can deinterlace 25i to 25p by either dropping alternate fields or blending fields. So how are you getting 25p by deinterlacing 25i?

koberulz 05-16-2017 07:12 AM

Code:

interp = nnedi2(field=1)                             # NNEDI2 builds the interpolated replacement field
yadifmod(order=1, field=-1, mode=0, edeint=interp)   # mode=0: same-rate output, so 25p from 25i


sanlyn 05-16-2017 08:48 AM

In other words you're discarding the bottom field and keeping only 50% horizontal resolution. Why didn't you just say so?

koberulz 05-16-2017 09:31 AM

You mean vertical?

I've had that script sitting on my drive as a simple deinterlacer for ages, found it on VH IIRC. If there's a better way, I'm all ears.

sanlyn 05-16-2017 09:37 AM

Horizontal resolution -- i.e., temporal resolution. Uneven horizontal motion. Not for pans and action video, I'd say. That's what msgohan referred to earlier.

koberulz 05-16-2017 10:21 AM

So what's the best way to do things?

sanlyn 05-16-2017 10:33 AM

Dropping fields or frames is better than blending, by far. If you want 25i to 25p, dropping fields is your only choice. Will probably work for YouTube or the web, where quality is low and most viewers are clueless anyway and will watch anything.

koberulz 05-16-2017 11:01 AM

Is there a better way if I work from the 50p version, or no?

sanlyn 05-16-2017 12:18 PM

You have a 25i interlaced original.
If you deinterlace normally you'll have 50p. If you deinterlace same-rate, dropping fields, you'll have 25p at 50% temporal resolution. If you want that for the internet, you have to rescale to 640x480 or it will play at 720x576 (5:4 aspect ratio) instead of 4:3.

If you take 25i, deinterlace to 50p full frame rate, then drop alternate frames (SelectEven or SelectOdd) you'll have 25p at 50% temporal resolution. If you want that for the internet, you have to rescale to 640x480 or it will play at 720x576 (5:4 aspect ratio) instead of 4:3.

Or take 25i, deinterlace to 50p, keep all fields and frames and have all your resolution -- but it will be 50fps and you'll have to rescale to 640x480p for the 'net, considering that some sites will accept 50p and some won't. It won't work for normal broadcast unless you take 25p at 720x576 and encode it as interlaced. The encoder will apply an interlace flag to the progressive content. When it plays it will still have some judder on horizontal motion because of the discarded fields or frames, but a player will see it as interlaced. That might work for broadcast, but for the 'net you'll have to live with 640x480 @25p with reduced resolution or find a site that accepts 640x480 @50p.

The hard way is to try for BluRay or AVCHD. Deinterlace to 50p, rescale to 960x720p, add 160-pixel black borders to each side, and have 1280x720p at full motion resolution for BluRay or AVCHD, which is 16:9 with a 4:3 upscaled image inside the frame. It will be a little blurry due to the upscale from SD-VHS. I would suggest that you upscale with Avisynth, not Adobe.

Actually PAL motion looks smoother when interlaced -- that's what people say, anyway.
[EDIT] Or for your own personal use with external players and TV, deinterlace to 50p and encode it as progressive mp4 with a 4:3 display aspect ratio.

koberulz 05-16-2017 12:40 PM

Quote:

Originally Posted by sanlyn (Post 49410)
If you want that for the internet, you have to rescale to 640x480 or it will play at 720x576 (5:4 aspect ratio) instead of 4:3.

Treating the pixels as square?

Quote:

Deinterlace to 50p, rescale to 960x720p, add black borders of 160 black pixels to each side, and have 1280x720p at full motion resolution for BluRay or AVCHD which is 16:9 with a 4:3 upscaled image inside the frame. It will be a little blurry due to the upscale for SD-VHS. I would suggest that you upscale with Avisynth, not Adobe.
The issue here, though, is that I reinterlace to go into Premiere...I'd then be redeinterlacing out of Premiere...

If I drop the 50p file into Premiere instead, I need to spit it out, reinterlace, and bring it back in in order to get something for DVD. Unless I try and convert it from 50p to 25i within Premiere, but I'm fairly sure you'd reach through the screen and slap me if I tried that.

sanlyn 05-16-2017 01:12 PM

You have the most convoluted workflow I've ever seen.

Why don't you:
Deinterlace and denoise with Avisynth/VirtualDub, and save as lossless Lagarith or huffyuv 50p. Import into Adobe for color correction, and save that as lossless Lagarith or huffyuv as well. Use the second 50p file for all the other work. Yes, you end up with two 50p files, which is the price you pay for insisting on Adobe for whatever reason. The last 50p is a cleaned-up, color-corrected 50p that you can reinterlace for DVD, or discard alternate frames and resize for the net, or do whatever else you want.

koberulz 05-16-2017 01:24 PM

I've actually moved to completely uncompressed files coming into Premiere...I was using Ut but there was a really obvious drop in quality. Not sure if that would be true of Lagarith but I'm also trying to stay Mac-compatible from the earliest possible point in the process.

Color correction in Premiere is vastly easier, and I'm not familiar with any other software that will do the edits I need to get from the tape to the final footage (removing commercials, censoring audio, overlaying a score graphic, adding chapters, etc) and create a DVD-compliant MPG (with GOPs where I want chapter markers).

sanlyn 05-16-2017 01:45 PM

Quote:

Originally Posted by koberulz (Post 49413)
I've actually moved to completely uncompressed files coming into Premiere...I was using Ut but there was a really obvious drop in quality.

There's no quality loss with lossless codecs, so I have no idea what you mean. What's wrong with doing your edits on a 50p video in Adobe and outputting the edits as 50p? Take that 50p edited Adobe output:

Code:

### for DVD
AviSource(whatever file)
AssumeTFF()
SeparateFields().SelectEvery(4,0,3).Weave()

Code:

### for square-pixel 25p/mp4/internet
AviSource(whatever file)
SelectEven()  # or SelectOdd()
Spline36Resize(640,480)

For 720x576/50p/mp4/4:3, do nothing. Encode it as 4:3 DAR progressive.

But since no suggestion seems appropriate to your workflow, I wouldn't know what else to say.

koberulz 05-16-2017 03:02 PM

Quote:

Originally Posted by sanlyn (Post 49414)
There's no quality loss with lossless codecs, so I have no idea what you mean.

I deleted the Ut file so I can't screengrab, but there was...smudging, I guess would be the best way to describe it. Some sort of motion blur or something. Almost like overly-aggressive denoising, but an uncompressed file with identical settings didn't have the same issue.

Quote:

What's wrong with doing your edits on a 50p video in Adobe and outputting the edits as 50p? Take that 50p edited Adobe output:
Well, I'd then have to reimport that. So regardless of which way I do it, there are inefficiencies. I'm just trying to find the method that has the least. For most tapes I'm also doing a version that's run through lordsmurf's script...it adds up.

Would Spline36Resize be the way to go for 720p50?

sanlyn 05-16-2017 03:08 PM

And by the way:

Quote:

Originally Posted by koberulz (Post 49413)
and create a DVD-compliant MPG (with GOPs where I want chapter markers).

With DVD, the max limit for PAL GOP is 15 frames. Chapter points are an authoring function, not an encoding function.

Quote:

Originally Posted by koberulz (Post 49416)
Would Spline36Resize be the way to go for 720p50?

You can try that by itself, or experiment with this, which many say is better for upscaling from SD:
Code:

nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=1280, fheight=720)

koberulz 05-16-2017 03:44 PM

Quote:

Originally Posted by sanlyn (Post 49417)
With DVD, the max limit for PAL GOP is 15 frames. Chapter points are an authoring function, not an encoding function.

Premiere allows you to insert chapter markers into a sequence, and when it encodes it ensures there's a GOP there so the chapter point in the authoring program can land exactly on that frame, rather than merely being at the nearest randomly-placed GOP.

sanlyn 05-16-2017 04:52 PM

I understand that, and you can also encode so that a scene change starts a new key frame or GOP. A chapter won't go there until it's authored.

Quote:

Originally Posted by sanlyn (Post 49417)
Quote:

Originally Posted by koberulz
Would Spline36Resize be the way to go for 720p50?
You can try that by itself, or experiment with this which many say is better for upscaling from SD:
Code:

nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=1280, fheight=720)

No wait a minute, that won't work for a 4:3 image. Now this crazy workflow has me going in circles!
:smack:

Let's change that. You need a 960x720 image in a 1280x720 frame. So you can try it two ways:

method A:
Code:

Spline36Resize(960,720)
AddBorders(160,0,160,0)

method B:
Code:

nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=960, fheight=720)
AddBorders(160,0,160,0)

I have a feeling that method "A" will have less blur to it, because nnedi3_rpow2 actually resizes twice -- once to get a straight 2x resize the complicated nnedi3 way, then Spline36Resize down to exactly 960x720. I don't have a good sample of your videos around to try it both ways.

koberulz 05-17-2017 04:53 AM

I have HD2SD, which also has SD2HD, not sure how that handles things though.

What's the best way to run that lordsmurf script? It expects interlaced at present, but it seems a waste to run that script, go through all my other work, and then do the same thing all over again without that script.

Essentially I do one version with the lordsmurf script and one version without, then watch the footage through and if there are bad dropouts or whatever at any point, I drop the LS version over the top. That way I can still get the detail of not having that script (you know, minor things like the ball), but also utilise its cleanup power when necessary.

EDIT: And for square-pixel output, why 640x480? Why not 720x526?

sanlyn 05-17-2017 07:14 AM

SD2HD is worth a try. HD2SD has been replaced by iResize for better control of line twitter and other downscaling defects. A recent version is posted here: https://forum.videohelp.com/threads/...on#post2368998.

Internet posts usually require square-pixel progressive frames. 720x576 on the internet won't play at 4:3, but at 5:4. If your 720x576 is 16:9 anamorphic instead of 4:3, resize to square pixel 16:9 such as 856x480 or smaller frames.

For your personal use, anamorphic video at 4:3 or 16:9 can be encoded into mp4 with the proper display aspect ratio without resizing. mp4 encoding can accept 16:9 and 4:3 display aspect ratios, but anamorphic usually won't work in online media players.
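The display aspect ratio is set at encoding time via the sample aspect ratio. A hedged example with the x264 command line (the SAR values are the standard PAL figures, but check them against your encoder's documentation):

Code:

### 720x576 progressive flagged as 4:3 (players display it at 768x576)
x264 --sar 16:15 -o output.264 input.avs
### for 16:9 anamorphic, use --sar 64:45 instead (displays at 1024x576)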

koberulz 05-17-2017 07:18 AM

720x526, not 720x576.

Basically, why downscale in both directions rather than just shifting one direction until it'll work as square pixels?

sanlyn 05-17-2017 07:32 AM

Quote:

Originally Posted by koberulz (Post 49432)
720x526, not 720x576.

Basically, why downscale in both directions rather than just shifting one direction until it'll work as square pixels?

720x526 is neither 4:3 nor 16:9, is not PAL spec, and the height is mod2 only which will be problematic for interlaced video chroma in 4:2:0 or 4:1:1 YV12. It will also be problematic for many Avisynth filters that expect mod8 in all dimensions.

You can keep the height and resize horizontally, of course. To-spec PAL SD at 16:9 would be 1024x576, at 4:3 it would be 768x576.

If you want 4:3 at a width of 720, the height would be 536 (which is mod8), not 526, and still wouldn't be exactly 4:3 but close.

koberulz 05-17-2017 07:38 AM

Quote:

720x526 is neither 4:3 nor 16:9
I created a square-pixel PAL 4:3 document in Photoshop and resized the width back down to 720; it set the height at 526. Although it started at 788x576, not 768...so no idea.

If this is just a delivery format for web use, are PAL spec, interlaced chroma and AviSynth filters relevant?

sanlyn 05-17-2017 07:52 AM

The internet is square pixel progressive, not anamorphic. Make it any size you want. Web players won't adjust the display aspect ratio. If they don't like it, they'll let you know.

I have no idea how Adobe is resizing. If the original was 788x576, that's 1.37:1, not 1.333:1, and I have no idea where it came from. Mod2 vertical dimensions won't work for YV12 unless it's progressive. If you want odd frame sizes or mod2 heights, go ahead. The fact that Adobe lets you do it without throwing errors is another reason why I don't use it.

koberulz 05-17-2017 08:08 AM

I was just resizing a document in Photoshop, it doesn't have a clue why I'm doing it and has no reason to complain.

Photoshop's widescreen square pixel is 1050x576, not 1024. So they're both wider than your stated dimensions.

sanlyn 05-17-2017 08:33 AM

More reasons why I don't use Adobe for resizing. You should be setting your own frame dimensions. With 1050x576, what is the aspect ratio? Note: it's not 16:9.

koberulz 05-18-2017 01:11 AM

Quote:

Originally Posted by koberulz (Post 49427)
What's the best way to run that lordsmurf script? It expects interlaced at present, but it seems a waste to run that script, go through all my other work, and then do the same thing all over again without that script.

I believe this got skipped over.

sanlyn 05-18-2017 08:06 AM

First, you have to browse through that script and see what it's doing. It's not necessary to understand every detail, such as exactly how the MAnalyse lines actually work (they're adapted straight out of the mvtools documentation, in case you haven't read it), but some things are obvious. For instance you'll see "SeparateFields()" at the beginning of the top procedure, then two other functions are called, and at the bottom of that top procedure you'll see "Weave()", which is an operation that has to follow SeparateFields at some point. Comment out those two lines and you can use it on progressive video if necessary.

Why would you have to do everything over again? If you have a filtered script and you still have those ripples and dropouts, run the routine on the results you have. It doesn't change your previous filters or color corrections. The main idea behind lossless codecs is that you can save your work as lossless media.

Otherwise, starting at an earlier point, if you have a video and you see particular problems you have to map out beforehand what you expect is required and plan accordingly. If you have to intervene with something like Adobe, that will be part of the planning. You might need more than one intermediate stage, and that's not unusual. I sometimes use AfterEffects for color and timeline work -- in that case it requires a lossless intermediate that I will convert to RGB in Avisynth with 16-bit dither tools and then open in AE and save out of AE as lossless AVI. I had a slide show project with hundreds of photos, each sequence planned out in detail with individual resizing and composing, zooming and panning in AE with Ken Burns effects, audio and title overlays, and whatnot. I can't count the number of times I had to redo a simple sequence two or three times before it fit the running script. Then I had to join multiple lossless segments out of AE into an encoder and then author for disc.

I know you have some big files, but I had several 6-hour color captures that needed extensive cleanup and edits and required more than 250GB of intermediate files before I did the encoding, and the encoding wasn't done in Adobe. I've had 2-hour VHS movies that needed multiple intermediates, had to be joined in yet another round of intermediates, then were assembled in the encoder and encoded in one shot with pulldown applied for the final output to avoid pulldown cadence changes in the final version. I've had plenty of long videos that required different scene-by-scene filter changes, taking weeks to complete, that had to be joined for the final encoding. So what you describe isn't unusual. It's par for the course with problem videos, of which I and others have had plenty to deal with.

I've had MPEGs that had to be demuxed into elementary video and audio streams so that pulldown could be applied to film-rate video in DGPulldown to make it 25fps PAL or 29.97fps NTSC, then remuxed in a smart-rendering editor for edits and authoring. I worked on one truly horrific 3-hour opera transfer from tape directly to DVD that took 14 months to complete (the video didn't even belong to me, but it was my baptism of fire into Avisynth). With that project I saved hundreds of intermediate files and scripts on a USB drive and optical discs in hopes that one of these days I can do an even better job, as there are still a lot of unsolved problems with the final.

You have to think ahead, and sometimes you have to drop back and rework something that gets blended back into the final. If you're dealing with an NLE that impedes that process, you should alter the workflow accordingly.

koberulz 05-18-2017 09:05 AM

Quote:

For instance you'll see "SeparateFields()" at the beginning of the top procedure, then two other functions are called, and at the bottom of that top procedure you'll see "Weave()", which is an operation that has to follow SeparateFields at some point. Comment-out those two lines and you can use it on progressive video if necessary.
Right, but when we were working through RemoveSpots() previously, you talked about how SeparateFields() got different results from operating on originally-progressive footage, so I wasn't sure if that was the case here.

sanlyn 05-18-2017 09:31 AM

SeparateFields() with RemoveSpots broke the video into smaller segments and separated fields in which a spot or rip extended over multiple images. RemoveSpotsMC is a temporal filter -- if noise stays the same for 2 or 3 frames, it isn't considered noise. Temporal filters look at the way images change over time. If something doesn't change, it isn't seen as noise. If you break the images into disparate pieces, the same noise would appear in one group of images but not in the other group, so in one of those groups the noise would be treated as a disturbance that doesn't belong there.

Not all filters can be used in this manner. Some filters that require progressive video will distort alternate lines if SeparateFields is used, because alternate lines don't appear in the same place in both images and will be reassembled incorrectly during the weave process. It takes experimentation to tell which method works best with different filters. If you want to break up the frame sequence using deinterlaced full-frame video, separate Even and Odd frames, process them separately, then interleave into the original order when filtering is done.

There have been examples of using either SeparateFields() or treating alternate frames in these forums. The chroma cleaner chubbyrain2 is one filter that has been used both ways. MCTemporalDenoise is another, although MCTD has a parameter that can be set to work with interlaced video. One filter that's only partially effective with SeparateFields() is dfttest.
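The full-frame variant mentioned above, for deinterlaced progressive video, might be sketched like this (the filter calls are placeholders for whatever cleanup you need):

Code:

# process alternate frames separately on a progressive clip
e = SelectEven()   # ... filters on even frames ...
o = SelectOdd()    # ... filters on odd frames ...
Interleave(e, o)   # restore the original frame order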

[EDIT] With the RemoveSpots example you overlooked the fact that in many cases SeparateFields() was followed by filtering even and odd fields separately, then reassembling them:
Code:

SeparateFields()
a = last
e = a.SelectEven()   # ... filters on even fields ...
o = a.SelectOdd()    # ... filters on odd fields ...
Interleave(e, o)
Weave()
# ... more processing ...
return last


lordsmurf 05-25-2017 05:02 AM

Method 1 = Don't.
Method 2 = Deinterlace all.
Method 3 = Interlace all.

If for web streaming, deinterlace interlaced footage, then merge with progressive.

Broadcasts can actually handle mixed interlaced/progressive, if done correctly with the TS streams.

Doing everything twice is indeed the best: interlaced for interlaced, progressive for progressive. Many people would get bored at the triage required when you work for studios. You encode lots of things lots of ways. Sometimes you can automate, sometimes not.

50p, 25p, 50i, 25i ... oh goody. What fun. PTSD flashbacks to studio work. :laugh:

I need more detail on the sources. Long thread, but I never saw it. The conversation is too broad for a simple interlaced-vs-progressive answer when you start getting into non-25/30 frame rates. (Ditto for NTSC/PAL mixing.)

koberulz 05-27-2017 10:53 AM

Quote:

Originally Posted by lordsmurf (Post 49536)
Method 1 = Don't.
Method 2 = Deinterlace all.
Method 3 = Interlace all.

I'm not 100% sure but this sounds like you're thinking of a combined progressive/interlaced source project?

Quote:

I need more detailed source details. Long thread, but I never saw it. The conversation is too broad for simple interlaced vs. progressive when you start getting into non25/30 framerates. (Ditto for NTSC/PAL mixing.)
Just my PAL VHS tapes that I've been working with. So there's no need to combine different source frame rates, interlace flags, sizes, or whatever else. Just after the best way of getting those 576i25 sources into a web-compatible format while retaining an interlaced intermediate for DVD encoding (as the required bitrate may vary based on other content used, or different tapes may even be combined into highlights or something, so I don't want to just go straight to an MPEG file).

msgohan 05-28-2017 12:09 PM

If uploading to YouTube, you should upscale the content because:
  1. They don't support 576. It would be downscaled to 480p.
  2. Each quality level they offer uses better-quality encoding than every lower level. Nowadays their best quality is at 2160p or above, but perhaps it's not worth going above 1080p since there are diminishing returns for upscaled video. If uploading at 2160p and then viewing on a 1080p display, you would also have to weigh the impact of double-scaling artifacts vs compression artifacts.

Of course, web streaming quality is bad to begin with, so I don't know how much you care about making it less-bad.
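For what it's worth, such an upscale could be sketched in Avisynth along the same lines as the 720p methods earlier in the thread (dimensions here assume a 4:3 image pillarboxed into a 16:9 1080p frame):

Code:

# 4:3 SD -> 1440x1080 image inside a 1920x1080 frame
nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=1440, fheight=1080)
AddBorders(240, 0, 240, 0)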

koberulz 07-28-2017 10:58 AM

2 Attachment(s)
Quote:

Originally Posted by sanlyn (Post 49419)
I understand that, and you can also encode so that a scene change starts a new key frame or GOP. A chapter won't go there until it's authored.

No wait a minute, that won't work for a 4:3 image. Now this crazy workflow has me going in circles!
:smack:

Let's change that. You need a 960x720 image in a 1280x720 frame. So you can try it two ways:

method A:
Code:

Spline36Resize(960,720)
AddBorders(160,0,160,0)

method B:
Code:

nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=960, fheight=720)
AddBorders(160,0,160,0)

I have a feeling that method "A" will have less blur to it, because nnedi3_rpow2 actually resizes twice -- once to get a straight 2x resize the complicated nnedi3 way, then spline36resize to exactly 960x720. I don't have a good sample of your videos around to try it both ways.

Are you sure you've got that math right? SD2HD has the picture significantly wider than the Spline36Resize script:
Attachment 7753
Attachment 7752



Site design, images and content © 2002-2024 The Digital FAQ, www.digitalFAQ.com
Forum Software by vBulletin · Copyright © 2024 Jelsoft Enterprises Ltd.