Quote:
Quote:
Would Spline36Resize be the way to go for 720p50? |
And by the way:
Quote:
Quote:
Code:
nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=1280, fheight=720) |
I understand that, and you can also encode so that a scene change starts a new key frame or GOP. A chapter won't go there until it's authored.
Quote:
:smack: Let's change that. You need a 960x720 image in a 1280x720 frame. So you can try it two ways. Method A: Code:
Spline36Resize(960,720)
Method B: Code:
nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=960, fheight=720) |
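Note that either resize alone only gets you to 960x720; to fill the 1280x720 frame you would also add side bars. A minimal sketch of the full chain, assuming the 320 leftover pixels are split evenly into black pillarbox bars: Code:
nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=960, fheight=720)
AddBorders(160, 0, 160, 0)  # 160 + 960 + 160 = 1280, black side bars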
I have HD2SD, which also has SD2HD, not sure how that handles things though.
What's the best way to run that lordsmurf script? It expects interlaced at present, but it seems a waste to run that script, go through all my other work, and then do the same thing all over again without that script. Essentially I do one version with the lordsmurf script and one version without, then watch the footage through, and if there are bad dropouts or whatever at any point, I drop the LS version over the top. That way I can still get the detail of not having that script (you know, minor things like the ball), but also utilise its cleanup power when necessary.

EDIT: And for square-pixel output, why 640x480? Why not 720x526? |
SD2HD is worth a try. HDtoSD has been replaced with iResize for better control of line twitter and other downscaling defects. A recent version is posted here: https://forum.videohelp.com/threads/...on#post2368998.
Internet posts usually require square-pixel progressive frames. 720x576 on the internet won't play at 4:3, but at 5:4. If your 720x576 is 16:9 anamorphic instead of 4:3, resize to a square-pixel 16:9 frame such as 856x480 or smaller. For your personal use, anamorphic video at 4:3 or 16:9 can be coded into mp4 for the proper display aspect ratio without resizing. mp4 encoding can accept 16:9 and 4:3 display aspect ratios, but anamorphic won't usually work on online 'net media players. |
720x526, not 720x576.
Basically, why downscale in both directions rather than just shifting one direction until it'll work as square pixels? |
Quote:
You can keep the height and resize horizontally, of course. To-spec PAL SD at 16:9 would be 1024x576; at 4:3 it would be 768x576. If you want 4:3 at a width of 720, the exact height would be 540, which rounds to 536 for mod8, not 526 -- still not exactly 4:3, but close. |
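In AviSynth terms, a rough sketch of those options for a 720x576 source (using Spline36Resize as elsewhere in this thread): Code:
Spline36Resize(1024, 576)   # source is 16:9 anamorphic
# Spline36Resize(768, 576)  # source is 4:3
# Spline36Resize(720, 536)  # 4:3 at width 720, height rounded to mod8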
Quote:
If this is just a delivery format for web use, are PAL spec, interlaced chroma and AviSynth filters relevant? |
The internet is square pixel progressive, not anamorphic. Make it any size you want. Web players won't adjust the display aspect ratio. If they don't like it, they'll let you know.
I have no idea how Adobe is resizing. If the original was 788x576, that's 1.37:1, not 1.333:1, and I have no idea where it came from. Mod2 vertical dimensions won't work for YV12 unless it's progressive. If you want odd frame sizes or mod2 work, go ahead. The fact that Adobe lets you do it without throwing errors is another reason why I don't use it. |
I was just resizing a document in Photoshop; it doesn't have a clue why I'm doing it and has no reason to complain.
Photoshop's widescreen square pixel is 1050x576, not 1024. So they're both wider than your stated dimensions. |
More reasons why I don't use Adobe for resizing. You should be setting your own frame dimensions. With 1050x576, what is the aspect ratio? Note: it's not 16:9 -- 1050/576 is about 1.82:1, where 16:9 is about 1.78:1.
First, you have to browse through that script and see what it's doing. It's not necessary to understand every detail, such as exactly how the MAnalyse lines actually work (they're adapted straight out of the mvtools documentation, in case you haven't read it), but some things are obvious. For instance, you'll see "SeparateFields()" at the beginning of the top procedure, then two other functions are called, and at the bottom of that top procedure you'll see "Weave()", which is an operation that has to follow SeparateFields at some point. Comment out those two lines and you can use it on progressive video if necessary.
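In sketch form the change is just two comment markers; the call in the middle is a placeholder, since the actual functions inside that script will differ: Code:
# SeparateFields()   # disabled for progressive input
RemoveSpotsMC()      # placeholder for the script's own filter calls
# Weave()            # disabled; Weave() must always pair with SeparateFields()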
Why would you have to do everything over again? If you have a filtered script and you still have those ripples and dropouts, run the routine on the results you have. It doesn't change your previous filters or color corrections. The main idea behind lossless codecs is that you can save your work as lossless media. Otherwise, starting at an earlier point, if you have a video and you see particular problems, you have to map out beforehand what you expect is required and plan accordingly. If you have to intervene with something like Adobe, that will be part of the planning.

You might need more than one intermediate stage, and that's not unusual. I sometimes use AfterEffects for color and timeline work -- in that case it requires a lossless intermediate that I convert to RGB in Avisynth with 16-bit dither tools, then open in AE and save out of AE as lossless AVI. I had a slide show project with hundreds of photos, each sequence planned out in detail with individual resizing and composing, zooming and panning in AE with Ken Burns effects, audio and title overlays, and whatnot. I can't count the number of times I had to redo a simple sequence two or three times before it fit the running script. Then I had to join multiple lossless segments out of AE into an encoder and then author for disc.

I know you have some big files, but I had several 6-hour color captures that needed extensive cleanup and edits that required more than 250GB of intermediate files before I did the encoding, and the encoding wasn't done in Adobe. I've had 2-hour VHS movies that needed multiple intermediates, had to be joined in yet another round of intermediates, then were assembled in the encoder and encoded in one shot with pulldown applied for the final output, to avoid pulldown cadence changes in the final version. I've had plenty of long videos that required different scene-by-scene filter changes, taking weeks to complete, that had to be joined for the final encoding. So what you describe isn't unusual. It's par for the course with problem videos, of which I and others have had plenty to deal with.

I've had MPEGs that had to be demuxed into elementary video and audio streams so that pulldown could be applied to 20fps film video in DGPulldown to make it 25fps PAL or 29.97fps NTSC, then remuxed in a smart rendering editor for edits and authoring. I worked on one truly horrific 3-hour opera transfer from tape directly to DVD that took 14 months to complete (the video didn't even belong to me, but it was my baptism of fire into Avisynth). With that project I saved hundreds of intermediate files and scripts on a USB drive and optical discs in hopes that one of these days I can do an even better job, as there are still a lot of unsolved problems with the final.

You have to think ahead, and sometimes you have to drop back and rework something that gets blended back into the final. If you're dealing with an NLE that impedes that process, you should alter the workflow accordingly. |
SeparateFields() with RemoveSpots broke the video into smaller segments and separated fields in which a spot or rip extended over multiple images. RemoveSpotsMC is a temporal filter -- if noise stays the same for 2 or 3 frames, it isn't considered noise. Temporal filters look at the way images change over time. If something doesn't change, it isn't seen as noise. If you break the images into disparate pieces, the same noise would appear in one group of images but not in the other group, so in one of those groups the noise would be treated as a disturbance that doesn't belong there.
Not all filters can be used in this manner. Some filters that require progressive video will distort alternate lines if SeparateFields is used, because alternate lines don't appear in the same place in both images and will be reassembled incorrectly during the weave process. It takes experimentation to tell which method works best with different filters. If you want to break up the frame sequence using deinterlaced full-frame video, separate even and odd frames, process them separately, then interleave into the original order when filtering is done. There have been examples of using either SeparateFields() or treating alternate frames in these forums. The chroma cleaner chubbyrain2 is one filter that has been used both ways. MCTemporalDenoise is another, although MCTD has a parameter that can be set to work with interlaced video. One filter that's only partially effective with SeparateFields() is dfttest.

[EDIT] With the RemoveSpots example you overlooked the fact that in many cases SeparateFields() was followed by filtering even and odd fields separately, then reassembling them: Code:
SeparateFields() |
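Spelled out, the pattern being described looks roughly like this, with RemoveSpotsMC standing in for whatever filtering the original script applied: Code:
SeparateFields()
even = SelectEven().RemoveSpotsMC()  # filter one field stream on its own
odd  = SelectOdd().RemoveSpotsMC()   # filter the other field stream separately
Interleave(even, odd)                # restore the original field order
Weave()                              # reassemble fields into interlaced frames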
Method 1 = Don't.
Method 2 = Deinterlace all.
Method 3 = Interlace all.

If for web streaming, deinterlace the interlaced footage, then merge with the progressive. Broadcasts can actually handle mixed interlaced/progressive, if done correctly with the TS streams. Doing everything twice is indeed the best: interlaced for interlaced, progressive for progressive.

Many people would get bored at the triage required when you work for studios. You encode lots of things lots of ways. Sometimes you can automate, sometimes not. 50p, 25p, 50i, 25i ... oh goody. What fun. PTSD flashbacks to studio work. :laugh:

I need more detail on the source. Long thread, but I never saw it. The conversation is too broad for simple interlaced vs. progressive when you start getting into non-25/30 framerates. (Ditto for NTSC/PAL mixing.)
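For the web-streaming route, a rough AviSynth sketch (the filenames and QTGMC settings here are assumptions, not from this thread): Code:
interlaced  = AviSource("tape_segment.avi").AssumeTFF().QTGMC(preset="Slower")  # 50i -> 50p
progressive = AviSource("camera_segment.avi")  # already 50p
interlaced ++ progressive  # aligned splice; both clips must match size and framerate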
If uploading to YouTube, you should upscale the content because:
Of course, web streaming quality is bad to begin with, so I don't know how much you care about making it less-bad. |
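If you do upscale, one hedged sketch using the same tools discussed above (the 1440x1080 target is my example for a 4:3 source, not from this thread): Code:
nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=1440, fheight=1080)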
Quote:
Attachment 7753 Attachment 7752 |