digitalFAQ.com Forum

digitalFAQ.com Forum (https://www.digitalfaq.com/forum/)
-   Restore, Filter, Improve Quality (https://www.digitalfaq.com/forum/video-restore/)
-   -   VirtualDub and Avisynth filter help? (https://www.digitalfaq.com/forum/video-restore/6033-virtualdub-avisynth-filter.html)

pinheadlarry 08-09-2014 09:02 PM

VirtualDub and Avisynth filter help?
 
Hello again,

This is originally what i wanted to ask during my first thread, but that led to various other topics that helped me understand half of what i was trying to do. But now i'm trying to figure out filters..

I've found a couple tutorials but they have left me confused 1/4 of the way through. There isn't much information for beginners other than personal threads, so i figured i would start a thread for myself.

I'm capturing vhs to prores422 and opening with qtsource in avisynth/virtualdub. I was going through a tutorial that had me open up the avisynth script in virtualdub, but i got an error with qtsource. I tried going through some of the filters in both programs, but to be honest there are just so many options and so much information i feel like this would take me years to even come close to something one might consider a 'good script'.

So i'm hoping I can get some better knowledge here..


Example..
https://www.dropbox.com/s/hifa7eqd7kj5cvz/example.mov

premiumcapture 08-09-2014 10:25 PM

http://sourceforge.net/projects/fcch...time%20Plugin/

Try this. Worked on Windows 7 for me but there's a few alternatives if it doesn't load.

pinheadlarry 08-09-2014 10:56 PM

i'm pretty sure i already installed that plugin because i can open the file fine in virtualdub. but for some reason qtsource won't transfer from as to vd?

premiumcapture 08-09-2014 11:05 PM

LordSmurf is actually working on a script that has the best of what most tapes need. I am not sure when he'll be finished, but when he does, it should simplify a lot.

I like to use AvsP, which can feel a little easier, but depending on the filter I actually use XVID4PSP as it can make applying a single filter or two a lot easier.

sanlyn 08-10-2014 10:29 AM

Quote:

Originally Posted by pinheadlarry (Post 33540)
i'm pretty sure i already installed that plugin because i can open the file fine in virtualdub. but for some reason qtsource won't transfer from as to vd?

That link is a VirtualDub plugin. It has nothing to do with Avisynth. Avisynth has no idea what happens in VirtualDub or which VDub plugins are used. And vice versa: VirtualDub has no idea what Avisynth is doing other than sending out decompressed video frames to be viewed. All Avisynth does is open and decode the named file, run any Avisynth filters specified, then make its output available to whatever app is looking at it. Its output is decoded uncompressed video...and audio, if audio is present and can be decoded.

The VirtualDub QT plugin .vdf file opens various .mov files (if it can), converts the video to RGB (which you might not want, especially with the video sample submitted), and makes certain assumptions about the video's structure that you might not need or want.

I don't know what "transfer from as to vd" means. I guess you mean "avs to virtualdub" ?? If you see VirtualDub or Avisynth error messages, you have to give more detail about what the message says.

Avisynth's qtSource plugin consists of qtSource.dll and an html documentation file. The only file that belongs in the Avisynth plugins folder is the .dll. DO NOT COPY html files to Avisynth's plugins folder. The only files that belong in that folder are those that installed with Avisynth, along with plugins that you add as .dll, .avs, and .avsi . Don't keep your own user-created .avs scripts in the plugins folder, as your own scripts are temporary anyway and you'll soon have a plugins folder the size of the Congressional Library if you keep all your scripts there.

Don't download plugins or .zip, .rar, or .7z packages into a plugins folder. Somewhere on your computer -- and NOT in the program folder for Avisynth or VirtualDub -- make a separate folder for Avisynth downloads and another for Virtualdub downloads. In those folders, make subfolders for each plugin or filter. Why? Because many zip packages contain such things as files with duplicate names ("readme.txt" comes in hundreds of versions, each with different content). Many plugin packages come with subfolders and uncompiled C++ project files. Some plugin packages contain different versions of a .dll or .avsi filter, but the different versions have identical names. Obviously, that won't work. Keep plugin downloads in their own subfolders, or they will be impossible to manage. Once the package is unzipped or opened, look over the contents or instructions if any, and load the plugin itself into the proper plugins folder for Avisynth or Virtualdub.

How plugins are detected for Avisynth and VirtualDub:

VirtualDub recognizes .vdf files as plugins. When VirtualDub opens, it scans the plugins folder and internally makes a list of all its plugins, so they will appear in VDub's filter dialog window.

Avisynth recognizes .dll, .avs, and .avsi files as plugins. A .dll or an .avsi is automatically detected when an Avisynth script runs. An ".avs" plugin does not autoload; it must be explicitly imported using Avisynth's "Import()" function if the script needs it. There are a handful of other .dll's that require a special loading function because of the way they are compiled, but instructions for those plugins always tell you what to do. Two such plugins are yadif.dll and ffms2.dll.
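For example, if ContrastMask.avs (a scripted function that comes up later in this thread) is sitting in a folder on your drive, the top of a script would load it like this. The path shown is made up, so substitute the real location on your system:

```avisynth
# Hypothetical path -- change it to wherever the .avs file actually lives
Import("C:\Avisynth\extras\ContrastMask.avs")
```

After the Import() line runs, the function can be called in the script just like an autoloaded plugin.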

If you look at the html document that came with the qtSource plugin, you'll probably notice that "qtSource" is not shown as a function. The name of the plugin isn't always the name of its main function. For example, the QTGMC deinterlacer downloads as an autoloader script over 200 lines long named "QTGMC-3.32.avsi". If you type that name in a script as QTGMC-3.32.avsi or just QTGMC-3.32, you'll get an error. The name of the main function is simply "QTGMC". Many functions and plugins have a long list of parameters that can be set for different values, but most filters -- that's most, but not 100% of them -- can be run with their default settings. Here is how some familiar plugins would be typed using their default settings:
QTGMC()
LSFmod()
MCTemporalDenoise()
SangNom()

But there are just as many functions and plugins that require at least one parameter to be specified. For example, you don't run these built-in Avisynth functions without setting one or more specific parameters:
ColorYUV()
Tweak()
Trim()
AviSource()

How do you know what to specify? You look over the documentation and you look at the way others use them. True, much documentation is over the heads of newcomers, but the basic stuff is, well, pretty basic.

If you have the qtSource.dll plugin in your Avisynth plugins folder, you can write this script and save it as something named "first run.avs" or whatever name you want, then open it in Virtualdub.

Code:

qtInput("Drive:\path\to\video\example.mov",audio=1)
info()

Note that "Drive:\path\to\video\example.mov" is not valid. I typed the example that way to show how to place the path and name of the input video. On my computer that script reads as follows:

Code:

qtInput("E:\forum\pinheadlarry\Aug09\example.mov", audio=1)
info()

The "Info()" function will display some file data on the output screen. If you don't want to see that info, just delete that line of text. Or you can keep it there, but just put a "#" comment-marker at the start of the line, and Avisynth will ignore it:

Code:

qtInput("Drive:\path\to\video\example.mov",audio=1)
# info()  <-- the starting # makes this line a comment, which will be ignored.

or you can add comments this way, after the initial code statement:

Code:

info()  # <-- info() will run, but it is followed by comment text which is ignored.

The filters to use depend on what you want to do. I'd suggest that you first set some decent, valid luma and chroma levels, as many shots in this clip are unviewable.

Like most editors, VirtualDub converts input to RGB32, but for viewing only. What happens to this file if you view it in VirtualDub and then just close it without doing anything? Nothing. If you use "Save as avi..." to output another copy of it, by default VirtualDub outputs uncompressed RGB24. Otherwise you have to specify a colorspace and compressor for output. Saved with other colorspaces and compressors, your original 244kb example.mov would come out as follows:
(uncompressed RGB24): 1,233 kb
(uncompressed YUY2): 825 kb
(lossless huffyuv YUY2): 255 kb
(lossless Lagarith YUY2): 235 kb
By default audio is saved as uncompressed PCM unless you specify otherwise.

pinheadlarry 08-10-2014 12:35 PM

Thanks sanlyn for the very informative post. My ego hopes i didn't come off as so incompetent as to not know how to do simple tasks like installing or calling a plugin. But I still do appreciate the answer, don't get me wrong.

i double checked and yes, i did have that quicktime plugin installed for virtualdub.

After removing some filters i blindly added to the avs script, i was able to open the script in virtualdub. So obviously that was my fault for just assuming the script wouldn't transfer.

Here is the tutorial i was following. I know it's using a cartoon, but i figured i could adjust the settings to my liking. But half way through i got confused and just stopped. Is the article worth a revisit?

http://www.animemusicvideos.org/guid...spostqual.html

I'll have to reread your post and do some googling before i ask more technical questions. But when you said 'filters depend on what i want to do', i didn't realize there were different spectrums to cleaning up a video? I'm sure knowledge in this field probably leads to some very technical options, but what about just a standard clean up?

I'm not sure i have the right vocabulary to explain what i'd like to do. But similar to the tutorial above, i'd like to just make a better picture. I'm just not sure what that means as far as filters or time spent. I'm not even sure what the stand-out problems are in a video.

sanlyn 08-10-2014 02:23 PM

The AMV site is mostly about toons, but its principles apply to any video. The link you posted is version 2 of the AMV guide. Version 3 is at http://www.animemusicvideos.org/guides/avtech31/. You'll find many of the procedures are about the same. The newer Avisynth sampler is at http://www.animemusicvideos.org/guid...post-qual.html.

Then there's another old guide (old = 2009) from Scintilla at http://www.aquilinestudios.org/avsfi...dex.html#intro. There's an index at the top of the intro page.

Both of the links above are decent sources for samples of what various problems look like. About 90% of the filters can be used on "real" video. Degrainers, anti-alias, smoothers, dot crawl and chroma cleaners work on almost anything, but something like a line darkener....well, that would apply mostly to line art, but you never can tell when some offbeat technique just might be handy.

Many current "official" Avisynth plugins are at http://avisynth.nl/index.php/External_filters, though it hardly covers all. One handy feature is that it lists plugins by category. It also has links to some geeky discussion threads.

Don't discount VirtualDub, either. It's far more extensive than you'd expect. But it's best to work with Avisynth first.

As for a "standard" cleanup script, no one has ever concocted one. No such thing as a standard video problem. There are "common" problems. But no one script could even cover the different problems seen in your sample from scene to scene. Looking over your new sample now. Will try to post some specifics a little later.

pinheadlarry 08-10-2014 03:57 PM

Thanks sanlyn, I'm going to spend a chunk of my night going through those tutorials.

i'm not sure if my first example was the best so i uploaded 2 more short clips that may be better to work with.

https://www.dropbox.com/s/7m1x45xvb1od7qa/example2.mov
https://www.dropbox.com/s/hax4ztlo7bgliok/example3.mov

lordsmurf 08-10-2014 07:04 PM

You need to be careful with the AMV site. Some of the "help" there is terrible.

sanlyn 08-10-2014 08:35 PM

Yeah, some of their other procedures strike me as mysterious, often too simple. Fairly OK page on the plugins, though. They left out a lot of heavy hitters and some details, but I guess one has to start somewhere.

@pinheadlarry, somewhere the AMV site advertises a big download file full of filters. Avoid that one. It's behind the times and overwrites stuff you need.

sanlyn 08-12-2014 01:57 PM

So here are a few simple scripts to show how this stuff works. And to get into analyzing some problems.

Have you had a look at the first short shot in your sample "example.mov" (frames 0 to 130)? Note that in the code below I've used the path where that .mov clip is stored on my PC. You'll have to modify that path to point to the file on your system.

In Avisynth it's easy to make a clip using only frames 0 to 130 and its audio:
Code:

qtInput("E:\forum\pinheadlarry\Aug09\example.mov",audio=1)
Trim(0,130)

Save the small file if you want as YUY2 using the compressor of your choice; I used Lagarith. In VirtualDub, play that file one frame at a time. You will see what appears to be 1 interlaced frame every 2 frames, until the fade to black. If you tried opening that clip with the info() function displaying file info onscreen, you'll note that VirtualDub thinks this clip is bottom field first (BFF). Avisynth usually assumes BFF. The clip is actually Top Field First (TFF). For that reason, as you'll later see, you usually have to specify TFF or BFF in an Avisynth script to keep this matter straight.

This shot was encoded as interlaced. If it's interlaced, we should be able to deinterlace it. Deinterlacing will take the 2 fields in each frame, separate them, and expand each of those fields to full-frame size. It will double the number of frames and double the frame rate. Because each field in an interlaced video represents 2 instants of time in a single frame, deinterlacing should reveal two "frames" for each original field, and each new frame should be a different image when the original object moves.

The simplest and least talented of deinterlacers is the Bob() function. But it's fast and OK for analyzing stuff. So, the code below deinterlaces this clip using Bob():

Code:

qtInput("E:\forum\pinheadlarry\Aug09\example.mov",audio=1)
Trim(0,130)
AssumeTFF().Bob()

If you play the bob() results one frame at a time, each new frame should look different. But that's not what happens here. You'll see that every group of 5 frames shows 2 consecutive interlaced double-images. Why? Well, among the several most damaging ways to deinterlace video, two of the really bad ways are near the top of the list: (1) deinterlace using field blending. (2) Deinterlace film source that is progressive 23.976 fps and uses pulldown (telecine) instead of being interlaced. The space cadets who processed this video made both mistakes, not just one. In a few instances, one might fix it (unblend). Most of the time, however, it can't be fixed.

In this case it's not fixable because the images are actually field-blended progressive video, encoded as interlaced. If you try to use inverse telecine and deblenders (it won't unblend anyway), you'll usually get a blended result that is somewhere around home-camera speed of 18 or 20 FPS. You can use other methods that will get 23.976FPS film speed, but still with blended frames. You can try a whole slew of over 20 de-blend filters found on Doom9, but none of them will restore this clip to its original state. Field-blending is the worst. Now you know. Maybe someone else can come up with a fix.

sanlyn 08-12-2014 02:17 PM

6 Attachment(s)
More scripts, and some plugins you need to learn...

Even if you decide to use that first shot in "example.mov", there's little you can do with it. Too much luma and chroma data is destroyed. I'm guessing, but this shot looks like a special effect applied on purpose? Any histogram or vectorscope will describe the problem. Below is a capture of frame 37 from the original .mov:

http://www.digitalfaq.com/forum/atta...1&d=1407870046

Whether it's a special effect or not, this image has hardly any pixel data. It's washed out, much like a reddish-sepia print. This is evident from the YUV histogram, shown below. You get the YUV histogram in the form of a waveform monitor with this code:

Code:

qtInput("E:\forum\pinheadlarry\Aug09\example.mov",audio=1)
Trim(0,130)
Histogram(mode="Classic")

http://www.digitalfaq.com/forum/atta...1&d=1407870157

The waveform has several sections. The yellow-orange side borders represent the undesirable luma and chroma ranges that are darker than RGB 16 (left side) or brighter than RGB 235 (right side). The desirable area lies in the dark area between those two borders. The green line down the middle represents the middle of the spectrum, or RGB 128. Pixel values would normally populate most of that black area. But we see that 90% of the pixels have been squeezed ("crushed") into a thick vertical line around RGB 200 on the right. There's a small scatter of "dust" around the middle of the black area representing stray pixels that have survived, such as those in the darker shadow areas and hair. There is no other data to work with. You couldn't widen or expand that thick bright line of crushed pixels. Crushed = destroyed.

The two 'scopes below are the way this image displays in RGB. These are an RGB "parade" histogram on the right, and an RGB vectorscope on the left. This VirtualDub histogram filter doesn't work in Win7 or 8.

http://www.digitalfaq.com/forum/atta...1&d=1407870264

RGB histogram (left-hand chart): This histogram shows average luminance (the white section at the top) and has one section each for Red, Green, Blue. Dark values are at the left, bright values on the right. The histogram mirrors the YUV info -- all of the data in this image has been squished into 4 small "spikes" at the right-hand side. There's no other data to work with.

RGB Vectorscope (right-hand chart): Luminance and color are joined in this 'scope. The spread of pixels has the dark values in the center, while brighter pixels radiate outward. The small circle of boxes indicates the limits of the RGB 16-235 range. You can see that the only data is a small blotch of flesh-tone pixels near the center that radiates toward the upper left. Other colors aren't present.

If you want, you can try to add a little pizzazz to frame 37 by using the Avisynth ColorYUV() function and the SmoothLevels function (a function of the SmoothAdjust plugin). The code below attempts to do this:

Code:

qtInput("E:\forum\pinheadlarry\Aug09\example.mov",audio=1)
Trim(0,130)
Crop(0,4,0,-8).AddBorders(0,6,0,6)
ColorYUV(off_y=-40,cont_y=70,off_u=7,off_v=-4)
ConvertToYV12(interlaced=true)
SmoothLevels(16,0.95,255,16,245,chroma=200,limiter=0,tvrange=true,dither=100,protect=6)

The code does several things. The Crop() function cleans up the top and bottom borders and centers the image. If you play this video you'll see head-switching noise along the bottom and a "twittering" or hopping border across the top. These are removed with Crop() and then AddBorders() makes new black borders to center the image and restore the 640x480 frame.

ColorYUV is used to shift that thick white line of pixel data toward the left (darker) part of the spectrum, while luma contrast (cont_y) increases the darks and brights to try to widen the available values. Off_u shifts blue 7 points to the brighter right side, and Off_v shifts red a little to the darker left side. ConvertToYV12() converts the colorspace properly for use by the next plugin. Then SmoothLevels() is used to smooth luma and chroma to prevent hot spots and to make the colors look less banded. The resulting frame 37 is below:

http://www.digitalfaq.com/forum/atta...1&d=1407870565

No, doesn't look so great. Most flesh colors have values in the middle of the spectrum, but there aren't many real midtones around. Almost everything is in that thick vertical line above the midtones. But it does have some dimension to it and looks sharper. Unfortunately, all these fixes look like garbage when you get to the fade to black (frame 54, below):

http://www.digitalfaq.com/forum/atta...1&d=1407870719

The proposed luma and chroma fix results in some bizarre posterization and oversaturation effects during the fade. It looks progressively more gruesome as the fade continues toward black. You can see that dark detail under the wood fixture has turned completely black, although some details were clearly visible in the original frame. So this image "fix" is really impractical and, at the end, it's an ugly fadeout with a wild flurry of huge clumps of shimmering black blobs by the time it's over.

Finally, there's the fade to black in frame 130. Again, the YUV waveform has a single thick white line of data at the far left, around RGB 11 to 16, to indicate that a black screen has hardly more data than the other images.

http://www.digitalfaq.com/forum/atta...1&d=1407870785

Probably better to leave that shot as-is except for some denoising to calm down that fade a little.

pinheadlarry 08-13-2014 07:19 PM

i guess i found what i'm doing for the rest of the night lol. will report back after i go through all this. but thank you in advance :)

sanlyn 08-13-2014 07:44 PM

from my post #12:

Quote:

Originally Posted by sanlyn (Post 33612)
........
The two 'scopes below are the way this image displays in RGB. These are an RGB "parade" histogram on the right, and an RGB vectorscope on the left. This VirtualDub histogram filter doesn't work in Win7 or 8.

http://www.digitalfaq.com/forum/atta...1&d=1407870264

Ooooops! Sorry, folks, my bad. The RGB "parade" histogram is on the left. The vectorscope is on the right. I should know left-right by now. Sometimes at 2:00 AM, though, I forget.
:o :smack:

Here's a tip I saw posted some time ago, even though the content might be so obvious it seems silly:

The code in an Avisynth script is executed line by line in the order that the statements appear. The output from line 1 becomes the input for line 2. Output from line 2 becomes the input for line 3. And so on.

What this means is that you can insert comment markers (the # symbol) to cause a line to be ignored. So you could comment-out the lines and then start un-commenting them one by one to see what accumulated lines do. For example, take this fictional script where all 4 lines will be executed in order:
Code:

line 1
line 2
line 3
line 4

Then comment-out the last 3 lines to run only line #1:
Code:

line 1
#line 2
#line 3
#line 4

You can uncomment the lines one by one, but keep them running in the same sequence. Sometimes if the output from a previous line doesn't run, the next line won't run properly or might not run at all. So don't take away the comment markers at random.

You once remarked that you'd like some sort of "standard script" to use for everything. That might be possible, especially if you have a video with shots that all have the same problems. Most video doesn't work that way -- however, VHS has some fairly common problems that require pretty much the same cleanup. It's possible to have a standard filter set and a standard sequence, but quite often some of the defaults or specific parameter settings might have to change to suit the content.
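For what it's worth, such a "standard sequence" skeleton for VHS might look something like the sketch below. Every name and number in it is a placeholder to illustrate the ordering, not a recommendation -- the actual filters and settings have to be chosen and tuned per tape:

```avisynth
# Illustrative skeleton only: the file name and all settings are hypothetical
AviSource("capture.avi")
AssumeTFF().QTGMC(preset="medium")          # deinterlace for cleanup work
Crop(8, 4, -8, -12).AddBorders(8, 8, 8, 8)  # trim edge noise, restore frame size
# ...levels, color, and denoising filters would go here, per clip...
# ...re-interlace or re-telecine here if the target is DVD...
```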

The filter samples shown in the earlier links to the AMV filter page and Scintilla's discussions are old standbys that people use frequently. As I said, line darkeners are really for use with cartoon line art, but anti-alias filters, denoisers and sharpeners are useful everywhere.

Meanwhile I played with some of the shots in your samples and can try to come up with some sample scripts later. Sometimes the documentation can make things look more complex than necessary. Seeing how it's done in practice and in scripts from other threads will make it look easier, I'm sure.

pinheadlarry 08-14-2014 12:10 PM

Quote:

Originally Posted by sanlyn (Post 33637)
Sometimes the documentation can make things look more complex than necessary. Seeing how it's done in practice and in scripts from other threads will make it look easier, I'm sure.

This. I'm overwhelmed by how dense these programs are.

sanlyn 08-14-2014 03:34 PM

No problem. Most scripts you'll see are fairly short. You only have to learn it once. Most of the time you'll use the same filters for similar videos, just change the settings when needed.

Got very busy around here the last couple of days, but I'm preparing a couple of samples for later. Sorry for the delay.

Yeah, the text of some of the heavy-hitter plugins like QTGMC is really big. Good thing the designer worked all that out for us --you can run that monster with only one line of code in your own script.

pinheadlarry 08-16-2014 01:10 PM

Looking forward to the script, sanlyn.


I've been playing around with different filters you guys have recommended on this thread and the last, specifically QTGMC. But i just can't get it right. I can always find one out of so many frames that are still interlaced. Going to continue more tonight. I'll try and post some screens of what i'm talking about later on.

sanlyn 08-16-2014 02:25 PM

QTGMC deinterlaces completely. If you refer to camera shots such as the one discussed in post #12, that shot was encoded as interlaced but has been field-blend deinterlaced before it got to you. It's rare to be able to restore that kind of fake deinterlace, as deinterlacers can't do it and most unblend plugins wouldn't be able to help very much. You might be encountering a few clips like that one. You might also be looking at telecined shots that should be inverse-telecined, not deinterlaced. The shot in post #12 appears to have been a PAL to NTSC conversion that was telecined to get 25fps up to 29.97 fps, then incorrectly field-blend deinterlaced. Blended means there aren't two separate top-and-bottom fields in the frame that contain two different images: the original two fields were blended into one. Both fields contain the same image, with a blended "ghost" instead of two separate images.

If you run the statement "QTGMC()" as-is, it's the same as running QTGMC with "slow" default settings. The slowest settings run slowest because they make more repairs and do more denoising. The faster presets don't clean up as well, but they're usually adequate for most purposes. I've been using these variations:
Code:

AssumeTFF().QTGMC(preset="medium")
AssumeTFF().QTGMC(preset="fast")
AssumeTFF().QTGMC(preset="very fast")

If you want to add extra denoising and cleanup to any of those statements, do it this way:
Code:

AssumeTFF().QTGMC(preset="medium",EZdenoise=2)
AssumeTFF().QTGMC(preset="fast",EZdenoise=2)
AssumeTFF().QTGMC(preset="very fast",EZdenoise=2)

If you want even more denoising and motion smoothing, try this:
Code:

AssumeTFF().QTGMC(preset="medium",EZdenoise=3,denoiser="dfttest")

There are three sources of QTGMC documentation:
1. The html that comes with the plugin
2. The avsi script itself. Opens best with Windows Notepad. Don't use "wrap text" when viewing it. The first several dozen lines of text describe all the defaults for each of the presets.
3. The doom9 thread on QTGMC: http://forum.doom9.org/showthread.php?t=156028. Don't get into too big a rush trying to get thru that thread. It's over 50 web pages!

Note that for final output, DVD is usually interlaced, and standard-definition BluRay/AVCHD is interlaced for disc output. Interlaced usually displays fast motion and camera pans more smoothly. Deinterlace or inverse telecine are usually used for cleanup purposes that require it, then the video is usually re-interlaced or re-telecined at the end.
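One common way to get back to interlaced after a double-rate deinterlace like QTGMC is the field-weave idiom sketched below. This assumes a TFF source and is only one way to do it; the preset shown is arbitrary:

```avisynth
AssumeTFF().QTGMC(preset="fast")  # double-rate deinterlace for filtering
# ...cleanup filters go here...
SeparateFields()
SelectEvery(4, 0, 3)              # pick the fields that rebuild each original frame
Weave()                           # weave fields back into interlaced frames
```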

sanlyn 08-17-2014 05:17 PM

5 Attachment(s)
Quote:

Originally Posted by pinheadlarry (Post 33549)
when you said 'filters depend on what i want to do', i didn't realize there were different spectrums to cleaning up a video? I'm sure knowledge in this field probably leads to some very technical options, but what about just a standard clean up?

I'm not sure i have the right vocabulary to explain what i'd like to do. But similar to the tutorial above, i'd like to just make a better picture. I'm just not sure what they means as far as filters or time spent. I'm not even sure what the stand out problems are in a video.

Recognizing a few things can help build a vocabulary. Keep in mind that the sources you're working with are great examples of how not to process video. As you'll see, some glitches are impossible to correct and some can be fixed but the fix could look worse. VHS is bad enough without adding bad processing or dubbing to the mix. It makes it really difficult for newcomers. I've been there. Many of us are still there!

I took 3 more camera shots from your example.mov clip. Earlier I posted notes about that clip's first shot. I'm calling that scene "A". The next three shots I'll call B, C, and D. Some sample scripts might help you define what a "better picture" means in terms of cleaning up problems, even if "better" has different meanings for different people. I'll try to focus on problems that are common and obvious (at least, they should be obvious).

To start, for the moment I'll skip scene "B" (the night-time shot) and move to scene C. This is the guy leaping into the scene in early dawn light, or maybe it's late afternoon. If you count the first frame in that shot as number 0, the image below is frame 59 from the original clip (it's still interlaced):

http://www.digitalfaq.com/forum/atta...1&d=1408313284

Minor points: there's the usual head switching noise at the bottom border. The top border is a broken black and white line. A "standard" filter and procedure would be to crop off the noisy border stuff with Crop() and replace it with clean blacks using AddBorders(). The black borders will blend in with any TV screen background, but noisy borders won't.

The sky has some magenta blotches. Not much grainy noise, but clearing the blotches will remove pixel data and cause banding effects where the sky colors gradually change. So a debanding filter (gradfun2DBMod) and a little fine film-like grain (AddGrainC) were used.
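That sky cleanup might be sketched as below. The threshold and grain strengths are invented for illustration; both filters are third-party plugins that have to be installed, and real values need tuning against the actual clip:

```avisynth
GradFun2DBmod(thr=1.8)  # smooth the banding left behind after clearing the blotches
AddGrainC(1.5, 1.5)     # add fine, film-like grain to mask residual banding
```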

Along the right side of the guy's head and on some of the fence posts you'll see a bright edge line called a halo. You'll have to sharpen this scene, but most sharpeners will worsen halo effects. So a de-halo filter will be needed (DeHalo_Alpha) after sharpening with LimitedSharpenFaster. And if you look closer at his arms you'll see a small amount of reddish smear against the sky. Increasing saturation will increase that discoloration, so some chroma cleaners would be needed to control it (FFT3Dfilter in chroma-only mode and CNR2).

Interlace combing is always visible on a computer. On these videos it seems excessive, even with deinterlacing media players. It might look a little worse because (I guess) some of these shots appear to be sharpened while still interlaced (a no-no in anyone's book). Rough sawtooth edges aren't interlace combing -- look at the guy's head, arms, and slacks. Those can be smoothed a bit without totally obscuring the soft detail in the figure. But my guess is that this shot has already been denoised and, again, done while interlaced. Anti-alias filters are nearly as destructive as dot crawl filters. Better to take it easy with those and live with some imperfection in the edges rather than destroy everything that's left from the original processing. I actually used three anti-alias and edge smoothers in filtering this shot, all of them mild. Otherwise, the guy's face would be totally smeared.
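Put together, the sharpen-then-repair order described in the last two paragraphs might look roughly like this. Every setting is a guess for illustration, each filter is a separate plugin to install, and deinterlacing has to come first since several of these filters expect progressive frames:

```avisynth
AssumeTFF().QTGMC(preset="medium")  # deinterlace before spatial filtering
LimitedSharpenFaster()              # sharpen first...
DeHalo_alpha()                      # ...then knock back the halos sharpening worsens
FFT3DFilter(sigma=2, plane=3)       # chroma-only denoising (plane=3 = U and V)
Cnr2()                              # chroma noise reducer for the reddish smear
```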

Which brings up a major problem here. It's really poor lighting, made worse by the camera's autogain and autocolor features. You can't use just a primitive "Bright" control here to reveal darker detail. Well, you could, but you'll soon see that all the "detail" you'll get from the darks is what you already see. Part of the trouble is that the brightest part of the guy's face is at about RGB 30 to 45 -- an extremely narrow range. But most of the background and trees and other stuff are in the same tonal range. A brightness filter would simply gray out everything "down there", making it look like an unreal blur.

Worse than that, the camera's autogain changes levels three times between the start frame and the end frames. If you brighten the darks in this part of the shot, the start and end of the shot will be blown away. What the camera crew needed was light in the shadows to begin with. The contrast and level changes in this shot are far beyond the capacity of video to manage. Lighting did change overall during the shot, but shadow lighting remained the same, and brightening part of the scene will brighten all of it. Below are three YUV histograms of how black levels change between the first frame, the middle frames, and the end. The histograms also show that black levels are already too high to begin with, so this scene looks undersaturated and washed out all the way through.

http://www.digitalfaq.com/forum/atta...1&d=1408313376

Above, the YUV white section shows luma values. Darks are to the left, brights to the right. You can see that blacks at the start and end are rather high, at about RGB 40 to 50, and the middle is depressed in all the frames. With so little midtone data, you won't get clear skin tones. In the right-hand histogram's far right side you see a sharp white spike that indicates bright clipping.
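To see this kind of readout on your own clip, Avisynth's built-in Histogram() in "levels" mode draws the same luma and chroma distributions. This is only a sketch -- the source path and Trim values below are placeholders, not part of any script in this thread:

```avisynth
qtInput("path\and\whatever\example.mov",audio=1)  ## <<-- placeholder path
Trim(246,386)
ConvertToYV12(interlaced=true)  # levels mode wants planar YV12
Histogram(mode="levels")        # luma histogram on top, U and V below
```

Open that in VirtualDub and step through the scene to watch the black level drift between the start, middle, and end frames.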

ColorYUV() and the SmoothAdjust plugin were used to level things out, along with an avs scripted plugin called ContrastMask.avs, which mimics some Photoshop masking techniques. After blacks were lowered to real-world values, this mask raised the darkest stuff just enough to be able to see something.

Deinterlacing was required for some of the plugins. QTGMC was used for that and to get some motion noise reduction. Then the clip was re-interlaced at the end for smooth motion in this fast scene.

The script below looks twice as long as it should because I added comment lines. Notice the first line. It uses the Import() function to copy the ContrastMask.avs scripted function. An avs scripted function that's in your plugins folder doesn't autoload. You have to Import() it. Change the path statement to match the location of your plugins folder. The ContrastMask.avs filter is attached at the bottom of this post. Copy it into your Avisynth plugins. ContrastMask() also requires the VariableBlur plugin, attached.

Code:

Import("D:\AVisynth 2.5\plugins\ContrastMask.avs") ## <<-- Change path to your plugins folder !!
qtInput("E:\forum\pinheadlarry\Aug09\example.mov",audio=1) ## <<-- Change path to your video's folder !!
Trim(246,386)
ColorYUV(cont_y=8,off_y=-12,off_v=1,cont_u=-30,off_u=-1)
ConvertToYV12(interlaced=true)

    # --- 2 edge cleaning filters ---
TComb()
maa()

    # --- deinterlace + decomb ---
AssumeTFF().QTGMC(preset="very fast",border=true)
vInverse()

    # --- clean pink blotches ---
Cnr2("xxx",4,5,255)
MergeChroma(FFT3DFilter(sigma=5,bt=3,plane=3))

    # --- anti-banding ---
GradFun2DBmod(thr=1.8)

    # --- sharpen and edge clean ---
MergeChroma(awarpsharp2(depth=30))
LimitedSharpenFaster(strength=75)
DeHalo_Alpha()

    # --- more edge cleaning ---
Santiag(2,2)

    # --- add mild fake "detail" and "texture" ---
AddGrainC(1.5,1.5)

    # --- clarify shadows, set levels, dither for cleaner color ---
ContrastMask(enhance=2.5)
SmoothLevels(12, 1.1, 255, 16, 250,chroma=200,limiter=1,tvrange=true,dither=100,protect=6)

    # --- add color and depth ---
SmoothTweak(saturation=1.4)

    # --- clean the borders ---
Crop(0,2,0,-6).AddBorders(0,4,0,4)
 
    # --- restore interlacing ---
AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()

Is this much work usually required? Usually, no. But these videos are in bad shape. Wait until we get to scene "B" that I bypassed a while back. An mpeg of scene "C" produced by this script is attached.

The other question is, did this work make a vast difference? Not that much. The shot was incorrectly photographed and improperly over-filtered to begin with. The usual playback dnr didn't help much, either -- look at the foreground as the camera pans and you'll see its details disappear entirely for several frames. At least a TV screen won't blink when the overwrought chroma levels hit it, and it no longer looks like almost-black-and-white.

Scene "D" is next.

sanlyn 08-17-2014 05:30 PM

2 Attachment(s)
Scene D: Daytime jump, public building.

You don't need a histogram to see what's happening with hot red saturation and blown-out highlights. Contrast has been pumped to make this look (I guess) like bright sunlight. In fact it was shot in overcast light -- the end frames have more detail that shows what's missing, and there are no shadows on the ground. Apparently it previously rained, as there appear to be remnants of two rain puddles. The green leaves are almost turning olive with all that red. You can't do anything with the color because Autocolor changed it at the end of the shot, so adding blue will turn sidewalks purple at the end.

This one's easy because there's little you can do. Levels are calmed a bit, then deinterlacing and some decombing with vInverse() are used to clean up some of the bad twitter when the camera pans across the brick wall. The shot was sharpened while interlaced, or during dub or playback, as can be seen from the jaggy edges and dark combing on the figures. Undoing it would just inflict more wreckage.

Notice that the script below uses many of the filters and plugins used earlier. A few specific values change to suit the circumstances:

Code:

qtInput("E:\forum\pinheadlarry\Aug09\example.mov",audio=1) ##<<- Adjust the path to match your system
Trim(368,470)
ColorYUV(gamma_y=-15,cont_v=-120,gain_u=5,cont_u=20)
Crop(2,2,-2,-6).AddBorders(2,4,2,4)
ConvertToYV12(interlaced=true)
SmoothTweak(saturation=1.2)
SmoothLevels(24, 1.0, 255, 16, 245,chroma=200,limiter=2,LMode=3,tvrange=true,dither=100,protect=6)
AssumeTFF().QTGMC(preset="fast")
vInverse()
Cnr2("xxx",4,5,255)
Santiag(2,2)
AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()

If you don't find the motion and edge noise problematic, you could end the above script just before the line that runs QTGMC, by deleting those lines or commenting them out.
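For instance, the tail of that script could be commented out like this, ending the processing just before the deinterlace chain (a sketch of the same script, nothing new added):

```avisynth
SmoothLevels(24, 1.0, 255, 16, 245,chroma=200,limiter=2,LMode=3,tvrange=true,dither=100,protect=6)
# --- skip the deinterlace chain, and the reweave that undoes it ---
# AssumeTFF().QTGMC(preset="fast")
# vInverse()
# Cnr2("xxx",4,5,255)
# Santiag(2,2)
# AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()
```

The filters after QTGMC expect progressive frames, so if QTGMC goes, they go too.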

Fixing this shot poses a problem. When using Trim() to detach this shot from the main video, I left an extra 30 frames at the end to handle the dissolve into the next scene. It's a 1-second 30-frame dissolve that begins at frame 73 and ends at frame 102 of this scene. Do these filters affect the next scene? They sure do, which means that the guys who made this video forced you into one of three choices: (1) Accept this shot as-is and allow it to dissolve into the next shot, which already has depressed levels and a lot of low-light noise, or (2) Correct this scene but don't correct the next one, which will be too dark and noisy, or (3) correct both shots and learn to rework your own dissolve between two corrected scenes, which can be done in Avisynth. That would be another long demo, which I didn't get into at this point.
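For choice (3), the rebuilt transition itself is one line with Avisynth's built-in Dissolve(). The clip names and paths below are hypothetical stand-ins for the two separately corrected scenes:

```avisynth
# Hypothetical sketch: each corrected clip keeps its half of the overlap
sceneD = AviSource("path\and\whatever\sceneD_fixed.avi")
sceneE = AviSource("path\and\whatever\sceneE_fixed.avi")
Dissolve(sceneD, sceneE, 30)   # re-create the 1-second (30-frame) crossfade
```

The hard part isn't this line; it's trimming both corrected clips so the overlap lands on the same frames as the original dissolve.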

The RGB histogram below is taken from frame 55 of the "D" scene. The sharp peaks at the right show serious bright clipping, caused by raising midtones so far to the bright end that brighter detail was destroyed. You can darken the image, but it won't restore anything.

http://www.digitalfaq.com/forum/atta...1&d=1408313991

sanlyn 08-17-2014 05:41 PM

4 Attachment(s)
The two scenes following the D shot have low-light noise. That brings me back to the "B" scene bypassed earlier. The noise in these low-light and night shots is CCD/CMOS noise. It's from using a camera that's unsuitable for low-light shooting. The signal strength of the CMOS residual noise is stronger than the signal strength of the low-light objects. Think of it as audio tape where the hiss is louder than the music. CMOS noise is clumpy, thick, very coarse grain. But it doesn't look like it in these scenes. Below, a 200% blowup from the upper-right section of the 94th frame of scene "B":

http://www.digitalfaq.com/forum/atta...1&d=1408314741

The noise isn't heavy grain any more. It's been hit with a simple blurring filter that converted it from distinct grain into a swirl of simmering sludge. Sludge is tougher to remove than grain. It doesn't help, either, that this is another shot with high IRE (black levels) that looks filtered and sharpened while still interlaced, or perhaps during earlier playback or recording.

Avisynth has a handful of filters to address CMOS stuff, but they don't work so well after it's turned to mush. Having seen this before, IMO it's best to use two heavy-hitters but with more moderate settings, rather than one big brute that tears everything apart. So I made this a two-step process. Two steps, because running both sets of these plugins in the same script slows the script to a crawl.

The first step uses a modification of a favorite cleaner called MDegrain, which is part of the MVTools v2 plugin (Motion Vector Tools). MDegrain comes in two flavors, 2 and 3. Flavor 3 is used here. The function is a separate avs script that can be saved as a plugin. It's modded to use interlaced video without deinterlacing it, using SeparateFields() instead. Use MDegrain before deinterlacing, in this example. Why? To prevent that ugly swirling junk from being interpolated across multiple frames during deinterlace, thus smearing it even more.
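Just to illustrate the idea -- this is NOT the attached MDegrain2i2.avs, and the function name and thSAD value are my own guesses -- a bare-bones field-based degrain wrapper built on the standard MVTools2 calls would look something like this:

```avisynth
# Illustrative sketch only, not the attached MDegrain2i2.avs.
# Degrain each field separately so noise isn't interpolated across fields.
function FieldDegrainSketch(clip c, int "thSAD")
{
    thSAD  = default(thSAD, 400)
    fields = c.AssumeTFF().SeparateFields()
    super  = fields.MSuper(pel=2)
    bv1 = super.MAnalyse(isb=true,  delta=1)
    fv1 = super.MAnalyse(isb=false, delta=1)
    bv2 = super.MAnalyse(isb=true,  delta=2)
    fv2 = super.MAnalyse(isb=false, delta=2)
    fields.MDegrain2(super, bv1, fv1, bv2, fv2, thSAD=thSAD)
    Weave()   # reassemble the cleaned fields into interlaced frames
}
```

The attached script is the one to actually use; the point of the sketch is only that the motion analysis and degraining happen on separated fields, with Weave() restoring the interlaced structure afterwards.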

I attached a copy of the function as a script called "MDegrain2i2.avs". Save that avs attachment as-is in your Avisynth plugins folder. To use an avs scripted function in your own script, you have to import it into your script using the Import() function as shown in the previous post, and as shown in the script below for STEP #1:
Code:

# ----------------------------
# FIRST STEP #1 --------------
# ----------------------------
Import("D:\Avisynth 2.5\plugins\MDegrain2i2.avs") ## <<-- Change path to your plugins folder !!
qtInput("E:\forum\pinheadlarry\Aug09\example.mov",audio=1) ## <<-- Change path to your video's folder !!
Trim(131,265)
ColorYUV(off_y=-7,cont_y=30)
TComb()
ConvertToYV12(interlaced=true)
SmoothTweak(saturation=1.5)
MDegrain2i2(last,8,4,0)

In VirtualDub I saved the output of Step #1 using YV12 color, Lagarith YV12 lossless compression, and the "fast recompress" video processing mode. This file will be the input for STEP #2.

STEP #2 uses some familiar filters you saw earlier. Notice that the video isn't opened with qtsource(), but uses AviSource() instead:

Code:

AviSource("E:\forum\pinheadlarry\Aug09\B\Step1.avi") ## <<-- Change path to your video's folder !!
AssumeTFF().QTGMC(preset="medium",EZDenoise=2)
vInverse()
DeBlock_QED()
Santiag()
GradFun2Dbmod(thr=2)
DeHalo_Alpha()
LimitedSharpenFaster()
AddGrainC(2.5, 2.5)
Crop(0,2,0,-6).AddBorders(0,4,0,4)
AssumeTFF().SeparateFields().SelectEvery(4,0,3).Weave()

The second step should output a clip that looks like an appropriately dark but fairly clear night shot instead of a washed-out foggy what-is-this. The result is attached as "mov_B00mv3_Q.mpg". Yes, you do give up some detail in the process. But, then, you can always leave it as-is or omit MDegrain.

I took another step (call it STEP 3), running NeatVideo at about 1/3 power on the output of step 2. NV made things a little smoother, but visibly so. No script there. NeatVideo is a paid plugin, often abused and bad-mouthed by people who don't read its extensive user guide. Clip attached as "mov_B00mv3_Q_NV.mpg".

I think you can see how much trouble it is to clean up careless work, and how much it's possible to lose in the process.

The cardinal rule is this: "garbage in, garbage out". A good VHS original can look pretty good. But watch out for careless analog work. It's a real pain.

pinheadlarry 08-18-2014 02:44 PM

This is awesome so far and i'm not even half way through. Before i really get started i have to download all of these plugins. I'm having trouble finding a couple of these, such as maa(). Not sure if i should use maa2 or not. Are there any resourceful plugin packs?

sanlyn 08-18-2014 04:45 PM

4 Attachment(s)
maa2 is a special version of maa that requires a bunch of Avisynth_MT plugins. It won't run in other versions of Avisynth. It's aggressive, and not recommended for soft or over-processed video.

The plugins package for QTGMC contains a lot of support files used by many of these filters, including the versions of masktool and mvtools called by many filters. It also includes support files for FFT3dFilter. http://forum.doom9.org/showthread.php?t=156028. You want the "Plugins Package" offered, not the modded package.

Attached:
-LimitedSharpenFaster.zip
-Santiag.zip
-maa.avs
-SangNom.zip (this might come with QTGMC. Used by several anti-alias plugins)

pinheadlarry 08-19-2014 10:43 PM

This is a response to post #19

My content will primarily be seen on a computer screen. In that case, should i ignore the restore interlace and interlace=true steps?

Also a workflow question that may sound noobish, but there aren't many resources out there. I assume you skim through frames throughout the video and find frames to fix. You add filters to one frame, but what do you do when you approach a frame that needs more filtering? Do you add filters on the existing script? Or is there a different way?

I just figure if you keep adding filters on the script for different frames it would do more damage.

sanlyn 08-20-2014 07:07 AM

3 Attachment(s)
Quote:

Originally Posted by pinheadlarry (Post 33785)
This is a response to post #19

My content will primarily be seen on a computer screen. In that case, should i ignore the restore interlace and interlace=true steps?

What does "primarily" mean? If you mean that all this work is being given only to people who watch videos on a PC, you don't have to reinterlace. The results will be larger files with twice as many frames and will run at 59.94fps. You can make it 29.97fps by discarding every second frame, but you'll have to decide for yourself whether the inferior motion handling and lower resolution suits you. One thing's for certain: it wouldn't play smoothly on TV. This is fast-action video, not still photographs.

Since you don't see some of these videos that well, you probably haven't noticed that some of the shots in your samples are not interlaced.

This scene is not interlaced:
http://www.digitalfaq.com/forum/atta...1&d=1408535813

This scene isn't interlaced, either:
http://www.digitalfaq.com/forum/atta...1&d=1408535833

The scene in the upper image is slightly but visibly underexposed. The scene in the lower image has the opposite levels problem: it's over-exposed. Would you apply the same filter to both scenes?

If you deinterlace both scenes and leave them deinterlaced, you will have duplicates of every frame. If you discard duplicate frames, you will have scenes of effectively lower-resolution frames.

For a specific problem that affects only a few specific frames, below is a 2X blowup of part of the 9th frame in one of the scenes. It affects 6 consecutive frames. The glitch can be fixed, but you'd need about 30 more lines of code and two more plugins. You can't use that same code on the entire video. You'd also have to save the audio as a separate file and rejoin the audio after repairing those frames, because the code will affect audio sync. When working with this scene, I decided it's probably better to just leave the problem as-is and live with it. I see several bugs in the scene pictured below, some of which don't exist in some other shots. The image is not interlaced. See if you can spot the problems in the image below:

http://www.digitalfaq.com/forum/atta...1&d=1408535848

Quote:

Originally Posted by pinheadlarry (Post 33785)
Also a workflow question that may sound noobish, but there aren't many resources out there.

I'm not sure what you mean by resources. The restoration threads of this and several other forums have thousands of examples of problem videos and how they were repaired.

Quote:

Originally Posted by pinheadlarry (Post 33785)
I assume you skim through frames throughout the video and find frames to fix. You add filters to one frame, but what do you do when you approach a frame that needs more filtering? Do you add filters on the existing script? Or is there a different way?

I just figure if you keep adding filters on the script for different frames it would do more damage.

Are you referring to "frames", or to "scenes"? If a histogram or sample frame is displayed, it's usually a sample of how things look in general in a scene, not how they look in a specific frame unless that specific frame has a specific problem such as a bad dropout or one-time glitch. The scripts posted are run as separate scripts that address defects throughout a scene and that result in separately filtered video clips, not lines of code added into one huge script and run all at once on the entire video. The scenes that were cut out and filtered are rejoined into a final product, which can be done with FCP or any editor.

Yes, there is a different way. Keep the videos intact, run them through FCP and set up three or four filters for the entire video. Or use FCP to cut and join scenes into a different arrangement, then use the same filters for that entire compilation. Of course, the scenes joined in that way will come from different sources with different problems, but isn't that the way your sample videos were originally produced?

Noise and visual defects are usually more obvious on a computer screen than on TV. That aside, if you see no apparent problem with any of the scenes in your samples, you're wasting your time with Avisynth and VirtualDub. Almost all NLE's in every price range have the same or similar cut and join features as another NLE, similar timeline and audio features, similar simplified color balance and denoisers, and similar scaled-down encoders and authoring tools. If differences between the original scenes aren't a problem in your view, and if basically they all seem to look alike, then why allow others to program everything you're doing? That wouldn't be learning anything, it would just be following rote procedure.

I don't skim through frames looking for something to fix. All you have to do is watch. The boo-boo's jump right off the screen, no searching required. But as I say, if you don't see problems or differences in an assemblage of badly processed video clips, you should be feeding everything intact through a single NLE and not worrying about it. If the friend that you mentioned earlier claims that he uses a single universal Avisynth script to fix all his videos and doesn't need to do any specific repair work, then realize that you, too, have lossless media and can run his script or anyone else's just as well as they can.

You might want to rethink the purpose of fixing up several really bad videos. Or leave them as-is and do a simpler and quicker re-edit into custom scenarios using a single set of a few NLE filters to make them look the way you want.

sanlyn 08-20-2014 08:53 AM

1 Attachment(s)
8 segments from the first example.mov, A thru H, filtered separately and rejoined. Encoder TMPGEnc Plus 2.5, with edit/join/audio sync/AC3 resampling in TMPGEnc MPEG Editor V3. Auditioned with MPC-BE and VLC players (*I hate VLC). Could use more tweaks.

sanlyn 08-20-2014 09:30 AM

Quote:

Originally Posted by pinheadlarry (Post 33785)
should i ignore the restore interlace and interlace=true steps?

Sorry, I forgot to address this.

interlaced=true and interlaced=false are relevant for colorspace conversion, depending on whether or not the video is in an interlaced state when the conversion statement is run. Remember that different colorspaces store chroma information differently. Doing the conversion properly requires that Avisynth is informed about the interlace state. Doing it the wrong way screws up chroma.
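In other words, the flag has to track the state of the clip at the point where each conversion runs. A sketch, with a placeholder path:

```avisynth
qtInput("path\and\whatever\example.mov",audio=1)  ## <<-- placeholder path
# The clip is still interlaced at this point, so say so:
ConvertToYV12(interlaced=true)
AssumeTFF().QTGMC(preset="fast")    # now progressive, double rate
# After deinterlacing, any further conversion uses interlaced=false:
ConvertToRGB32(interlaced=false)    # e.g. before an RGB-only VirtualDub filter
```

Get one of those flags wrong and the chroma for each field gets sampled from the wrong lines.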

Colorspace and interlace factors also affect the Crop() function. Be careful how you crop. The following rule is quoted from the Avisynth online help concerning Crop():

Quote:

In order to preserve the data structure of the different colorspaces, the
following mods should be used. You will not get an error message if they
are not obeyed, but it may create strange artifacts.

In RGB:
width no restriction
height no restriction if video is progressive
height mod-2 if video is interlaced

In YUY2:
width mod-2
height no restriction if video is progressive
height mod-2 if video is interlaced

In YV12:
width mod-2
height mod-2 if video is progressive
height mod-4 if video is interlaced
mod-2 means that a number must be evenly divisible by 2.
mod-4 means that a number must be evenly divisible by 4.
http://avisynth.nl/index.php/Crop
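Applied to interlaced YV12, for example, all four crop values have to respect those mods:

```avisynth
# Interlaced YV12: width values mod-2, height values mod-4
Crop(2, 4, -2, -8)     # fine: 2 is mod-2; 4 and 8 are mod-4
# Crop(2, 2, -2, -6)   # height values 2 and 6 aren't mod-4 -- expect chroma artifacts
```

As the quote says, Avisynth won't throw an error on the bad line; the damage shows up as strange artifacts in the picture.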

lordsmurf 08-22-2014 06:05 AM

I always like Avisynth discussions. I see an item or two I never use. :)

Although themaster1 never attaches scripts to the forum (WHY?), he has some good ones too. He gave me an idea last week, and the script turned out very nice indeed. It's something to add to the repertoire, and onto an "advanced multiscript" when I finish the typical one.

pinheadlarry 08-22-2014 01:05 PM

Quote:

Originally Posted by sanlyn (Post 33802)
What does "primarily" mean? If you mean that all this work is being given only to people who watch videos on a PC, you don't have to reinterlace.

These videos will be for youtube and for download. For the most part they will only be seen on a PC. Unless they download the file and watch it on a tv. But i'm focusing on the PC viewers.


Quote:

Originally Posted by sanlyn (Post 33802)
The scenes that were cut out and filtered are rejoined into a final product, which can be done with FCP or any editor.

This makes much more sense now lol. But i'm not sure if i want to spend that much time with these considering the amount of scenes that need to be addressed. I'd much rather fix the video as a whole. I understand that will now limit me to what i can fix.

The thing with other programs like FCP is that you need to buy additional software filters. And with Avisynth or VirtualDub, you get powerful software for free. There's a high learning curve, but i think it would be worth it in case i want to go scene by scene in the future.

Also, I got ahold of the 3rd party i mentioned before and he sent me his script. Thoughts?

Code:

#SetMemoryMax(1200)  # Optional line. See below for value M
SetMTMode(3, 6)  # See below for value X, could try 5 instead of 3 for non-standard source-filter/avisynth combinations
dgdecode_MPEG2Source("C:\Users\Tim\Desktop\Hoax 2\VTS_01_1.d2v")
SetMTMode(2)
crop(8, 0, -8,-8)
QTGMC( Preset="Very Slow", EdiThreads=1 ).SelectEven()


sanlyn 08-22-2014 02:22 PM

LOL!

Okay, fellas, have it your way. All the script does is create an invalid frame height for anything except PC playback or YouTube (invalid altogether for BluRay), apply some degraining, and perform a frame-decimation deinterlace that plays back at one-half its original vertical resolution. You end up with pretty much what you started with, except for jerky playback on TV -- if you can find a way to play it on TV, anyway. Definitely amateur work. Try it. You'll need Avisynth_MT or the 2.6 mod downloaded from the Doom9 QTGMC thread.

pinheadlarry 08-22-2014 06:37 PM

Not sure why I would try the script after you just put it down like that lol

sanlyn 08-22-2014 08:11 PM

After all, it does work. It just doesn't do very much, and it makes a couple of mistakes. On the other hand, one can improve most of those shots only minimally, while a handful or so could just stay as-is, and perhaps another handful are so bad that even a small effort could make them look, well, noticeably better. But for your vids you'd have to make some adjustments. For one thing, the crop() was obviously designed for 720x480 mpeg's, but it doesn't give a 4:3 image back, leaves the video at a nonstandard height that's invalid for several formats, and doesn't clean the upper border twitter.

You can leave your videos at 640x480 for PC playback or YouTube, but that frame size can't be used for DVD, etc., and DVD/BluRay are usually interlaced formats or at least encoded that way. So you could use that QTGMC statement but reinterlace afterwards to keep your original resolution and to be able to resize for other formats (which should be either 704x480 or 720x480). So, with Avisynth 2.5.8 you could do it this way and customize for your particular set of 640x480 captures:

First, you can't open with dgdecode; you have to open your .mov files with qtInput. Dropping the Avisynth 2.6 requirements (since QTGMC won't run much faster in 2.6 "MT" anyway):

Code:

qtInput("path\and\whatever\your capture.mov",audio=1) ## <<-- open your .mov with qtInput
ConvertToYV12(interlaced=true)
AssumeTFF()
QTGMC(Preset="Very Slow", RT2=2,EZDenoise=2)
crop(0, 2, 0, -8).AddBorders(0, 4, 0, 6)
DeHalo_Alpha()
GradFun2DBMod(thr=1.5)
LimitedSharpenFaster(strength=50)

Then just leave it that way, deinterlaced at double frame rate (59.94) and double frame count, and compress losslessly at YV12 with Lagarith. That's a very slow script that does some basic denoising, debanding, de-halo, and deinterlacing. This would be your cleaned-up archive version from which you could go in multiple directions. For progressive display for PC-only or YouTube, all you'd need to do is run this statement on your cleaned-up version:

Code:

AviSource("path\and\whatever\deinterlaced archive name.avi")
SelectEven()

You could also use that deinterlaced cleaned-up version for anything, including DVD or BluRay. For encoding to those formats, use this on the cleaned-up deinterlaced version:

Code:

AviSource("path\and\whatever\deinterlaced archive name.avi")
Spline64Resize(720,480)
AssumeTFF()
SeparateFields().SelectEvery(4,0,3).Weave()

Encode that with a 4:3 display aspect ratio. For standard-def BluRay you can go up to 8 Mbps variable bitrate with a 12 Mbps max, with a GOP size of 30 or even 15 for better fast-action control, encoded to HD MPEG or h264. For plain old DVD you could use a bitrate of about 6500 VBR for fast action, 2-pass encoding. To be safe, you'd set the max VBR to 8000, and for action DVD you'd use a GOP of 12 (the GOP can't be bigger than 18 frames for DVD).

Whatever you do, don't use low bitrates on noisy action video, and don't use huge GOP sizes if you want clean motion encoding. Noise, fast action, and swift and jumpy camera pans require higher bitrates than clean static scenes with steady cameras and normal motion or less.

pinheadlarry 08-23-2014 08:10 PM

hmm the QTGMC line doesn't work. I'm getting the script error "qtgmc does not have a named argument "rt2"

For the plugins below qtgmc i'm getting "evaluate: system exception - access violation"

:huh1: :hmm: :question:

edit. i believe rt2 was a typo, i changed it to TR2=2 but got the same access violation error :(

sanlyn 08-24-2014 07:11 AM

OOoops. Yes, it's TR2. But that's what I get for being in a rush, because you don't even need to specify it. At "Very slow", TR2 is already set to 2 by default. So this line:

Code:

QTGMC(Preset="Very Slow", RT2=2,EZDenoise=2)
can be:

Code:

QTGMC(Preset="Very Slow", EZDenoise=2)
The access violation message comes from Windows. Try clearing memory by shutting down and rebooting. I get that Windows message now and then if I run a lot of memory intensive stuff for a long time without shutting down occasionally.

lordsmurf 08-24-2014 07:57 AM

Few notes from me:
- I prefer to do all my cropping in VirtualDub.
- I don't much care for the multi-threaded version on Avisynth. It never made a difference in my encoding times, and just made the scripting longer.

NOTE: Remember to use the [code][/code] bbcode to display the scripts. :old:

The # symbol in the quick reply or advanced reply make it easy.
Otherwise forum smiley bbcode can mess up the script. The smileys don't work inside the CODE blocks, however.

pinheadlarry 08-25-2014 11:40 AM

Ok, so after i export to the deinterlaced archive version (first script), i run the second script, which is just SelectEven.

Now, what would be the proper way to get the video into FCP? That program doesn't accept the Lagarith AVI. I'm not sure whether it's the compression or the container that FCP doesn't recognize.

The only solution i know would be to use handbrake on the progressive Youtube copy. Then pull the handbrake version into FCP. Is that acceptable or will that kill the quality?

themaster1 08-25-2014 02:33 PM

I'm not familiar with FCP, but in many pro programs (like Sony Vegas) you can use a little app called avs2avi (google it) which creates a pseudo .avi (only a few kb) that you can import thereafter.
This way you avoid a compression step [always better ;) ]. The downside is that with a complex script it'll be slow as hell to move back and forth on your timeline.

pinheadlarry 08-25-2014 07:08 PM

i realized it doesn't matter if they are compressed because they're going on youtube anyway.

I used the scripts on a full VHS capture and i think it came out well. I still have to compare scenes, but i can definitely notice less interlacing.

But thank you sanlyn, you really came through big time helping me out and explaining all this. I'm going to use the scripts you provided and continue to reference this thread and all the filters you have been mentioning. I'm slowly starting to understand avisynth and hopefully i can start to figure out what all these filters actually do lol.

Thanks again!

edit- Actually one last question..

When i pull the script into VirtualDub and save as AVI. I'm confused on the input and output video it shows in VirtualDub. It doesn't show the changes between the original video and avisynth filters, correct? These two screens only show the differences in VirtualDub filters?

sanlyn 08-25-2014 07:42 PM

Quote:

Originally Posted by pinheadlarry (Post 34071)
When i pull the script into VirtualDub and save as AVI. I'm confused on the input and output video it shows in VirtualDub. It doesn't show the changes between the original video and avisynth filters, correct?

The left-hand "Input" windows shows what it's name says it does: the input, which came from Avisynth. VirtualDub has no way of knowing what the signal looked like before it was processed by Avisynth.

Quote:

Originally Posted by pinheadlarry (Post 34071)
These two screens only show the differences in VirtualDub filters?

The right-hand window ("Output") would be different from the input only if you used some VirtualDub filtering. However, if you saw both windows "playing" the video while it was being saved to AVI, then you probably set VirtualDub processing to "full processing mode". If so, your YUV input from Avisynth was converted to RGB, then on output to a new AVI it was reconverted to YV12 if that's the colorspace you specified. If the right-hand Output window did not change at all while the new file was being saved, then no conversion occurred and you probably had VirtualDub set to "fast recompress", which is where it should have been.

Unless you plan to do some complicated processing, why would you need FCP? I thought you were going to encode the results, a progressive encode for mp4 or something for PC-only or web, and then an interlaced version for DVD, BluRay, or AVCHD. Those two steps can be done in Windows, along with some simple cut/join, using free software.

pinheadlarry 08-25-2014 09:46 PM

I didn't touch anything but compression so i'll have to check the settings.

I only use FCP to split the video and upload it directly to youtube.

I may be overthinking this but if i wanted to save one of these video files on my ps3 and play it on my TV, would i use the interlaced DVD version?

