  #21  
04-17-2018, 10:32 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Family matters have interrupted my day. I apologize for the delay. Will post some workflow notes as soon as I can.
The following users thank sanlyn for this useful post: yukukuhi (04-18-2018)
  #22  
04-18-2018, 12:17 AM
yukukuhi yukukuhi is offline
Free Member
 
Join Date: Apr 2018
Posts: 68
Thanked 0 Times in 0 Posts
You don't have to apologize. Instead, it would be great if you could teach me some of your skills in video restoration whenever you can spare some time.
  #23  
04-18-2018, 03:11 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
I don't believe I'm particularly skilled at working with video restoration. If I do have a skill it's one that's shared by many others -- that skill is patience. I spend time browsing through hundreds of forum posts, especially in the archives of forums like digitalfaq or doom9 where one finds examples of solutions developed by experts who are far more skilled than I. Forum search utilities and Google are a great help. I also browse through forum posts on a weekly and sometimes daily basis to pick up information that may not be immediately useful but comes in handy later. Most of my notes are text copies of forum conversations, Avisynth scripts, and links to other projects.

You can learn a lot from the documentation that comes with many popular Avisynth filters. At first, much of it won't make much sense until you have a better knowledge of the tech lingo, but that comes with reading. Many Avisynth filter wiki pages have additional links to doom9 threads that go into great detail about what the filter can and cannot do. For instance, if you want a lot of techy info and more links about the QTGMC deinterlacer, take a look at its wiki page by using Google to find a string like "Avisynth QTGMC". Google will give you a ton of links, the first usually being QTGMC's home page: http://avisynth.nl/index.php/QTGMC. Another filter whose documentation can teach you a lot about special processing techniques is the Avisynth 16-bit Dither Tools package. Its wiki page has a lot of theoretical info, but even more instructive is the original doom9 post that introduced the reasons for the filter: https://forum.doom9.org/showpost.php...18&postcount=2. Even more, the post that comes right after it posts the filter itself: https://forum.doom9.org/showpost.php...59&postcount=3. That doom9 thread in its entirety has a wealth of information about cleanup work: Color Banding And Noise Removal.

Yes, it can seem like very dry stuff. But things change once you get deeper into it.

For me, knowing more about Avisynth filters was an aid in understanding how many of VirtualDub's filters can be used. Don't underestimate the value of research and browsing through forum threads -- you learn by seeing how other users have solved problems or sought answers.

I've learned that some problems simply can't be repaired. Some videos are so badly damaged that repair isn't worth the effort. Then there's the point of diminishing returns, where additional work or more filtering makes very little improvement or none at all.

The Chinnvar video, aside from the obvious film damage, is hampered from the start by being recorded to low-bitrate MPEG2. There's nothing wrong with MPEG. It's used in broadcasting, DVD, and BluRay. But lossy codecs like MPEG and h.264 are not designed for restoration or edits. They are final delivery codecs. "Final" means that they aren't designed for further modification unless you're willing to live with image and audio degradation through re-processing. Final formats are lossy. A certain amount of data is rounded off and discarded (low bitrates discard more data than high bitrates). This data loss can't be recovered. More processing causes more data loss. The way to avoid loss and degradation is to use lossless codecs for capture and processing until the final encoding step.

Your mp4 samples were multiple-generation versions of the .ts originals. What you should work with are the originals, not lossy re-encodes. I ran the earlier scripts with your .ts originals, and while there was still some lowering of quality because of the lossy encode, the results were cleaner and sharper than with the mp4 samples. With long videos most people break up projects into smaller segments, especially where different processing is needed for different problems in different segments. The lossless segments are rejoined later for the final encode. While many video problems are common, such as VHS tape noise and poor color balance, others require some experimentation with different filters. The optimal filters are arrived at by experience and trial and error. I run scripts in VirtualDub on short segments, cruising back and forth to see filter effects or test for color correction. I have the avs script open in one window and VirtualDub in another window. A handy trick for testing script changes: type the script change you want, then hit F2 or "Reopen" to check the effect.

You have to work with a properly calibrated monitor. A poorly calibrated monitor is one of the biggest impediments to getting the results you want. Without monitor adjustment, the results will look one way on one playback system and another way on another system. Monitors are calibrated to universal standards for uniformity. It's true that another person might be watching videos on an uncalibrated monitor or tv with entirely different results -- but that's their problem, not yours. You can't correct for every monitor or viewing room in existence. Rather you do what the pros do: calibrate to a universal standard. Also, working in a bright environment plays tricks on your eyes and makes many noise and color problems difficult to see. Those problems show up later under different viewing conditions.

My first step with your samples, as with any new video, was to open them with a very basic Avisynth script and check for proper luminance and chroma levels using histograms. Avisynth has YUV histograms and VirtualDub has RGB histograms for this purpose. Major level and color problems are addressed first. Denoising follows. You might have to try different filters or filter settings to get what you want.
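As a rough sketch of what that first look can be (the path is hypothetical, and it assumes a YUY2 lossless capture):

Code:
AviSource("Drive:\path\to\capture.avi")   # hypothetical lossless capture
ConvertToYV12(interlaced=true)            # the "Levels" histogram works only in YV12
Histogram("Levels")                       # draws the YUV levels graph beside the frame
return last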

Sometimes I load VirtualDub filters while running the script, saving myself a processing step. VirtualDub uses RGB for filtering, but I rarely save working files as RGB. Instead, I use VirtualDub's "Video" menu settings to set color depth and compression for output (usually YV12 with lossless Lagarith). Sometimes I have to do a little tech forum research to see how others have solved a particular problem. I don't encode until I have the cleanest lossless working results I can reasonably get. Very often, getting what I want means letting a project rest for a night or two and coming back later for a second look.

Color correction actually took me quite a while to learn. Good color can often mask other problems or make them less noticeable. Lousy color is always a problem in itself and usually turns off many viewers or gets boring. I used free tutorials from internet sites that deal with still photo and Photoshop work -- which sounds irrelevant until you realize that good photo color and good movie color involve the same principles. The final arbiter for color work is to ask yourself the question: in real life, what would this scene look like? The all-time basic guide for learning how color presents itself? Simply look out the window or take a walk outside. Color tech theory is basic and important, and it's easy to pick it up from internet tutorials. But Mother Nature is still the best teacher for color work.

This activity is mostly a learn-and-test process. Fortunately you only have to learn it once, although something new comes along almost every day.
The following users thank sanlyn for this useful post: plaxamate (04-19-2018), yukukuhi (04-19-2018)
  #24  
04-18-2018, 06:55 PM
lordsmurf lordsmurf is online now
Site Staff | Video
 
Join Date: Dec 2002
Posts: 13,508
Thanked 2,449 Times in 2,081 Posts
Quote:
Originally Posted by sanlyn View Post
I don't believe I'm particularly skilled at working with video restoration. If I do have a skill it's one that's shared by many others -- that skill is patience. I spend time browsing through hundreds of forum posts, especially in the archives of forums like digitalfaq or doom9 where one finds examples of solutions developed by experts who are far more skilled than I.
Nah, don't discount yourself too much.

We all learn from each other. Something like Avisynth has so many aspects that nobody can master it. On our own, we dig deeper to resolve a specific issue. I know that there's a lot that I don't know about it, and probably never will. For one thing, some of that math and programming is above my understanding.

I think it's your anal-retentive attention to detail (i.e., hating JVC NR) that is an advantage in your Avisynth work. There are times when my own filtering looks a bit mushy, and I think to myself "I wonder what sanlyn would do here", and look at your past posts where you added scripts. Sometimes I find what I need, sometimes a lead on something that does eventually work. Sometimes nothing.

I think the same of johnmeyer, jagabo, and some others. Each has a different specialty in Avisynth.

I often find Avisynth devs a bit too prickly, doom9 unfriendly, and many are not really widespread video users anyway. So they're even more limited in the scope of their video restoration knowledge than we are. It's not much different from being into VCR repair vs. being the VCR user. They're good at what they can do (making the tools), and we're good at what we can do (using the tools).

  #25  
04-19-2018, 12:02 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
I willingly confess that some of the Avisynth documentation that puts me to sleep happens to be some of the most obtuse, senseless, useless programmer jargon that no one really wants to read. Fortunately a lot of doom9 is very useful. But some of it.... oh, brother.
  #26  
04-19-2018, 12:18 AM
yukukuhi yukukuhi is offline
Free Member
 
Join Date: Apr 2018
Posts: 68
Thanked 0 Times in 0 Posts
Ok I understand sanlyn.

Could you explain your script functions step by step for the Chinnavar video, so that I can start to understand this mind-boggling tech thing? Pretty please.
  #27  
04-20-2018, 09:03 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Sorry for the delay. It's been one emergency after another on the home front all day. Still working on the details for you.

By the way, to partially answer your questions about histograms that you asked in PM:
You can't use histograms unless you know what they're telling you. Years ago I ran across an excellent free tutorial on the 'net that discusses a very common form of histogram. It deals with still photos, but remember that the principles of color, contrast, tonality, etc., are the same for photo and video. After all, a video is just a stream of still images.
Part 1, Understanding Histograms: Tones & Contrast https://www.cambridgeincolour.com/tu...istograms1.htm
Part 2, Understanding Histograms: Luminosity & Color: https://www.cambridgeincolour.com/tu...istograms2.htm

Last edited by sanlyn; 04-20-2018 at 09:26 PM.
The following users thank sanlyn for this useful post: yukukuhi (04-21-2018)
  #28  
04-21-2018, 09:25 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
If you're going to be working with .TS files and that sort of thing you'll need some filters:

If you've downloaded Avisynth plugins, you probably noticed that some of them ship as *.7z files. The decompressor for those files is 7-Zip, which can also decompress zip and rar archives. If you don't have 7-Zip already, I suggest you download the free 32-bit version and install it. 7-Zip 32-bit for Windows: https://www.7-zip.org/a/7z1801.exe.

I decided to rework the earlier avs scripts, especially since I found some of your additional MPEG .TS downloads that had not been re-encoded to h.264/mp4.

The original script in post #12 for the mp4 sample used the FFMS2 utility to open the audio and video, and themaster1's script in post #10 used LSMASH to do the same thing. I'm partial to FFMS2, so I'll explain the code I used in post #12:

Code:
aud=ffaudioSource("Drive:\path\to\Chinnvar Movie Comedy.mp4")
vid=ffvideoSource("Drive:\path\to\Chinnvar Movie Comedy.mp4")
AudioDub(vid,aud)
The functions ffaudiosource and ffvideosource decode audio and video respectively. These are two functions of the FFMS2 plugin. Instructions suggest that the audio statement come before the video statement. The words "aud" and "vid" are names I created. They define places in memory that you can create for saving values that you specify. I simply invented the word "aud" to contain decoded audio data from ffaudiosource, and I invented the word "vid" to contain decoded video data from ffvideosource. You can make up any names you like as long as they aren't the same names used by filters and functions in your script.

In order to join audio and video into one stream I used Avisynth's built-in AudioDub function (http://avisynth.nl/index.php/AudioDub). From this point in the script, audio and video are joined in a new clip in memory for as long as this script is running.

If you want FFMS2 for h.264 and other encodes, get it at https://github.com/FFMS/ffms2/releas...2.23.1-msvc.7z. It comes with two files, FFMS2.dll and FFMS2.avsi, that you copy into your Avisynth plugins. Use the 32-bit (x86) versions that unzip from the 7z file. Documentation comes with the download but is also online at the FFmpegSource wiki page at http://avisynth.nl/index.php/FFmpegSource.

FFMS2 is OK for mp4 and mkv containers, but your latest samples were MPEG containers, which includes MPG, M2V, VOB, and .TS. The gurus will tell you right out that the tool for those MPG files is DGIndex, a function of the free DGMPGDec utility (https://www.videohelp.com/software/DGMPGDec), which is still the frame-accurate standard. Create a folder named DGMPGDec and download the utility into that folder from https://www.videohelp.com/download/dgmpgdec158.zip. Unzip the file and copy DGDecode.dll into the Avisynth plugins folder. Then make yourself a desktop shortcut to DGIndex.exe in your DGMPGDec folder and double-click it to run the utility.

When DGIndex starts, click "File..." -> "Open...". Locate your MPG, VOB, m2V, or .TS file, select it in the Open dialog, and click "Open". You'll see a "File List" window with the name of the selected file. Click "OK" to return to the main window.

You'll see a preview window of the video. Click "File.." -> "Save project..." In the "Save As" dialog window you'll see that DGindex will create a d2v project file in the same folder with your selected video. Accept the default file name (or rename it if you want) and save the .d2v project. A d2v file is a complex index to your selected MPG video. Depending on the video's length, the d2v can be created in a few seconds or a few minutes. A progress panel tells you what's happening. When you see "FINISH", close the DGIndex windows.

In the folder with your video you'll see that DGIndex created a .d2v file, an audio file (in this case an .mp2 file), and a .log file. We will use DGMPGDec's MPEG2Source() function to open the video track. What about the audio track? For mp2, mp3, ac3, and a few other formats, we can use the NicAudio plugin (http://avisynth.nl/index.php/NicAudio). Make a folder or subfolder for the NicAudio 7z download package, unzip it, and copy NicAudio.dll into your Avisynth plugins.

Now for a script that will open one of your MPEG .TS files the preferred way for frame accuracy:

Code:
aud=NicMPG123Source("E:\forum\yukukuhi\VH\" 
   \+ "Chinnvar Movie Comedy_short PID 125 L2 2ch 48 256 DELAY 13ms.mp2",
   \Normalize=false)
vid=MPEG2Source("E:\forum\yukukuhi\VH\Chinnvar Movie Comedy_short.d2v")
AudioDub(vid,aud)
return last
In the above code you can see that I used "aud" and "vid" again to contain the audio and video, but you can use any names you like. I also used the backslash "\" as a line continuation character. You can use it at the end of a line or at the beginning of a line to break long statements into multiple lines. Notice that if you break quoted text into multiple lines, you must enclose each piece in its own quotation marks. The two parts of the broken string are then joined with the "+" concatenation operator. Then, AudioDub() joins audio and video into a single clip.

Note the use of "return last". When this statement occurs, the script returns the results of the last statement that was executed before the return line. Usually this is used to simply end the processing and return whatever has been accomplished so far. It's a handy tool for interrupting a script where you want to see what's happened. When you no longer want to use that return statement you can either delete it or make it a plain comment by starting the line with "#", as shown here:

Code:
#return last
During the run of a script, comments are ignored by Avisynth.

One of the first things I usually do when opening a new video is to check signal levels and other factors with Avisynth's YUV histogram function, described here: http://avisynth.nl/index.php/Histogram. Avisynth also has RGB and CMY histograms at http://avisynth.nl/index.php/Histogr...ogramCMYLevels. But if there are problems in YUV color, those should be addressed first before doing anything else. Note that some YUV histograms are available in various YUV colorspaces, but the "Levels" histogram works only in YV12. If your incoming video is another colorspace, such as YUY2, you might have to convert to YV12 first. In the case of MPEG, the incoming video will have been encoded using YV12. After you finish your YUV histogram checks, you can delete those statements or convert them to comments for re-using later.

Here is MPEG2Source and a NicAudio function (NicMPG123Source for mp2) with another of your samples:

Code:
aud=NicMPG123Source("E:\forum\yukukuhi\" 
   \+ "Chinnvar PID 125 L2 2ch 48 256 DELAY 5ms.mp2",Normalize=false)
vid=MPEG2Source("E:\forum\yukukuhi\Chinnvar Movie Comedy Sample.d2v")
AudioDub(vid,aud)
### Note: incoming video is already YV12 ###
Histogram("Levels")
return last
Notice how "return last" is used to return the results rather than letting the script continue. If you had a longer script with more lines, the processing would stop at this point.

The result of the above code using the named video sample:

[attached image: A - YUV histogram with borders.jpg]

Notice that the original black borders are present in the image. The YUV histogram shows luma values in the top white band. Darks are at the left, midtones in the middle, and brights at the right. The shaded edges on either side of the histogram indicate values that are outside the preferred video range of y=16-235. At the left edge, you can see black values that creep into the shaded, "unsafe" values below y=16. When this image is opened or displayed in RGB, the left-side blacks will be expanded toward and below RGB=0. RGB can't tolerate those dark values, so dark detail below y=16 will be clipped (i.e., destroyed). The same would be true for brights. Brights at y=235 will expand in RGB to RGB 255, but not beyond. Therefore, if brights exceed y=235, the expanded brights greater than RGB 255 will be clipped (destroyed).
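The arithmetic behind that clipping is the standard limited-to-full-range luma expansion, which makes a quick worked check easy:

Code:
# RGB = (Y - 16) * 255 / 219   (Rec.601 luma expansion, chroma ignored)
# Y = 16  ->  RGB 0         Y = 235 ->  RGB 255
# Y = 10  ->  about RGB -7  : impossible in RGB, so it clips to 0 (dark detail lost)
# Y = 245 ->  about RGB 267 : impossible in RGB, so it clips to 255 (bright detail lost)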

The histogram will change if crop() is used to remove the black borders:

Code:
aud=NicMPG123Source("E:\forum\yukukuhi\" 
   \+ "Chinnvar PID 125 L2 2ch 48 256 DELAY 5ms.mp2",Normalize=false)
vid=MPEG2Source("E:\forum\yukukuhi\Chinnvar Movie Comedy Sample.d2v")
AudioDub(vid,aud)
### Note: incoming video is already YV12 ###
Crop(12,8,-12,0)
Histogram("Levels")
return last
The values for Crop depend on the thickness of the borders. Don't worry about cutting slightly into the image at this point, as long as the borders are gone and the crop values are multiples of 4. The final Crop will be changed later.
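For readers new to Avisynth's Crop convention, the four numbers read like this:

Code:
# Crop(left, top, -right, -bottom): negative 3rd and 4th values trim from
# the far edges, so Crop(12,8,-12,0) removes 12 pixels from the left,
# 8 lines from the top, 12 pixels from the right, and nothing from the bottom.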

[attached image: B - YUV histogram - no borders.jpg]

The white band now shows nothing at the dark end lower than about y=64, which is dark grays but no real blacks. That's certainly OK and no black clipping will occur, but the image looks washed out because the darkest black levels are at the edge of the midtones, with no strong darks. We would say that this image has a limited dynamic range. You can extend the blacks if you want by using color functions or VirtualDub, but other scenes in the same video are darker and would be too dark with those changes.

Another way to check the luma and color ranges is with the ColorYUV Analyze function (http://avisynth.nl/index.php/ColorYUV).

Code:
aud=NicMPG123Source("E:\forum\yukukuhi\" 
   \+ "Chinnvar PID 125 L2 2ch 48 256 DELAY 5ms.mp2",Normalize=false)
vid=MPEG2Source("E:\forum\yukukuhi\Chinnvar Movie Comedy Sample.d2v")
AudioDub(vid,aud)
### Note: incoming video is already YV12 ###
Crop(12,8,-12,0)
ColorYUV(Analyze=true)
#Histogram("Levels")
return last
In the code above, the histogram is commented-out but the Analyze statement will execute. Here is the result, with the black border still removed:

[attached image: C - Analyze with borders removed.jpg]

In the grid of numbers overlaid onto the image, the rows we're interested in are the Min and Max values, although the Averages are also useful. In the left-hand column for Luma values, the minimum dark values fall off rapidly to y=20, while some of the bright values go up to y=250 and will be clipped in RGB. We can also see that U and V color values are well within the range of 16-235, so no problem there. But this is, after all, a pale image. It could be that the filmmaker wanted it that way.

Later in the same clip, luma values change considerably. The YUV histogram below, with borders removed, is later in the clip:

[attached image: D - YUV histogram 2 - no borders.jpg]

The histogram tells us that this scene has some "snap" to it because it populates the entire spectrum nicely from dark to light. It's not necessary for every scene to fill the graph (a night scene or a gray, misty one certainly wouldn't). But don't let this one fool you. Check the numbers first with Analyze and you'll see some bright clipping:

[attached image: E - YUV Analyze 2 - no borders.jpg]

The numbers say that luma goes below 16 all the way to zero, and brights exceed 235 and hit 255. Colors are OK. Using a pixel value reader tells us that the 0's come from the black hair, but where does the bright 255 come from? You would think it comes from the bright reflection on the wall or from the scene outside the window. Some of it is in the window. But believe it or not, some is also from the logo in the lower right corner.

These histograms demonstrate the wide variations in levels you can sometimes expect from different parts of a video, even when it's a digital broadcast. The infractions in this case are minor, and you'll see far worse. The histograms also show that if you darken or brighten a scene in one part of a video, other scenes in the same video will be unpleasantly affected. The fix for that is to cut the video into segments as needed, process accordingly, and then rejoin the segments later. The other solution is to live with it as-is. For tweaking the extreme levels, in my opinion, I'd do it with RGB controls. But that's for another thread entirely.

I'm working on a different script for your latest samples. While I'm finishing that, you might want to browse some very detailed posts that get into scripting as well as into details about using VirtualDub filters.

This thread at http://www.digitalfaq.com/forum/vide...g-huffyuv.html covers a range of issues from overexposure to crushed zero-blacks, using curves, ColorMill, pixel readers, gross underexposure, capture levels, and a host of other glitches. The details start in post #7 of that thread.

Now, back to work on the new script versions.....


Attached Images
File Type: jpg A - YUV histogram with borders.jpg (95.1 KB, 155 downloads)
File Type: jpg B - YUV histogram - no borders.jpg (92.4 KB, 154 downloads)
File Type: jpg C - Analyze with borders removed.jpg (97.3 KB, 154 downloads)
File Type: jpg D - YUV histogram 2 - no borders.jpg (124.0 KB, 154 downloads)
File Type: jpg E - YUV Analyze 2 - no borders.jpg (124.9 KB, 153 downloads)

Last edited by sanlyn; 04-21-2018 at 09:43 PM.
The following users thank sanlyn for this useful post: wimvs (04-22-2018), yukukuhi (04-22-2018)
  #29  
04-22-2018, 01:19 AM
yukukuhi yukukuhi is offline
Free Member
 
Join Date: Apr 2018
Posts: 68
Thanked 0 Times in 0 Posts
Oh boy! It's going to take me a while to digest all this info.
Just kidding!

Thanks sanlyn for sharing such excellent links & information.
Looking forward to more valuable posts.
  #30  
04-22-2018, 07:14 PM
steffen42 steffen42 is offline
Premium Member
 
Join Date: Dec 2017
Location: Portland, OR
Posts: 7
Thanked 0 Times in 0 Posts
This has been a great thread to lurk on and get insight into the mindset of how to do video restoration. Thanks much for the detailed info on the thought process and execution. I'm going to be re-reading this and all the links for the next month (or longer).
  #31  
04-24-2018, 08:47 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Sorry for the delay, readers. I finally rebuilt my PC work area and re-mounted my standard PCs (yippee). Only 50 more moving boxes or so to go before the video work station is complete again (!). Meanwhile I'll try working piecemeal on yukukuhi's problem video and posting as I go.
The following users thank sanlyn for this useful post: yukukuhi (04-30-2018)
  #32  
04-28-2018, 07:19 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
The next few posts are examples of what frequently appears in video tech forums -- i.e., Information Overload. It happens when you get videos that are nightmares like the .ts samples that were submitted. Always beware of video transfers from the subcontinent. Most of them are nightmares.

Herewith, some new scripts for your .ts samples and some extra cleanup. These aren't the same scripts and plugins used earlier in post #12 (http://www.digitalfaq.com/forum/vide...html#post53774). The main part in question in the earlier scripts is this routine:

Code:
SeparateFields()
Source1=last
a=source1.SelectEvery(3,0).RemoveDirtMC(50,false).Descratch().TurnRight().DeScratch().TurnLeft().MV2()
b=source1.SelectEvery(3,1).RemoveDirtMC(50,false).Descratch().TurnRight().DeScratch().TurnLeft().MV2()
c=source1.SelectEvery(3,2).RemoveDirtMC(50,false).Descratch().TurnRight().DeScratch().TurnLeft().MV2()
Interleave(a,b,c)
That code was followed by a second script with the following sequence:

Code:
SeparateFields()
Source1=last
e=source1.SelectEven().RemoveSpotsMC3()
o=source1.SelectOdd().RemoveSpotsMC3()
Interleave(e,o)
There were two notions behind all that code. The first notion was to break the images into smaller half-height frames by using SeparateFields(). Smaller frame = faster pixel scanning = less memory swapping = faster running. The second notion was to give some randomness to the noise by using SelectEvery() statements and then weaving all the pieces together again -- although it is true that if you use SeparateFields() on progressive video, the two fields of each progressive frame will be the same image, so each image gets scanned and filtered twice.
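Stripped of the denoisers, the skeleton of that split-filter-reassemble pattern looks like this (a bare sketch, filters omitted):

Code:
SeparateFields()      # each frame becomes two half-height fields
e = SelectEven()      # one stream of alternating fields
o = SelectOdd()       # the other stream
# ...field-based filtering goes here...
Interleave(e, o)      # restore the original field order
Weave()               # reassemble full-height frames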

Some of the posted results look OK, but I think they're over-filtered. You can make up your own mind in that respect. But you should be aware of the damage caused by DeScratch and too much of RemoveSpots. Perhaps other videos might not be affected in the same way, but the video attached and linked below contains three examples of the way the filters are over-cleaning one problem while causing serious artifacts elsewhere. There are three short scenes in the video:

1. A beach sequence with badly split horizontal lines and noisy line twitter along the shoreline, and twitter on the horizontal shadows in the background figure on the sand (left).
2. The "blinking window" in the upper central part of the image.
3. A "blinking" door panel on the left and, on the right, a very "nervous" checkered shirt that looks as if it's being devoured by small black bugs.

Also, take a look at the bottom right logo in this sample video. It's fractured and its mouth "laughs" a lot.

Filter artifact sample.mp4

What I suggest is to worry less about the film scratches -- get some of them, of course, but you'll never get them all. Worry more about overall clarity and avoid creating new problems. As it is, there are more than just spots and scratches. There is very uneven color grading, visible object shimmer in some scenes, unsteady frames, and some flicker. On top of that the movie runs too fast; it's been sped up from 23.976 fps film to 25 fps PAL, and it looks and sounds like it. Many find the latter to be one of the most annoying aspects of film-to-PAL transfers.
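If you ever want to undo PAL speedup, the usual sketch is a simple re-timing (this slows the audio along with the video, which also pulls the pitch back down toward the original):

Code:
AssumeFPS("ntsc_film", sync_audio=true)   # re-time 25fps back to 23.976fps
ResampleAudio(48000)                      # return the slowed audio to a standard rate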

If you haven't already, it's about time to get yourself an arsenal of frequently used Avisynth plugins in one big package. A while back (November '17) the forum posted such a collection, named Avisynth_plugins.zip at http://www.digitalfaq.com/forum/atta...nth_pluginszip. Advice: make a folder for the zip file and unzip into that folder. You might already have some of these plugins. The .zip contains 11 or 12 subfolders and instructions for some Avisynth heavy-hitters, including QTGMC plus assorted big guys and little guys, some of which are used in this post, the next post, and in many other forum threads.

Even if you don't fully understand some parts of the script, at least you'll get a decent start at collecting some essential Avisynth filters.

New general script:

Code:
Import("Drive:\path\to\Avisynth plugins\MDG2.avs")
Import("Drive:\path\to\Avisynth plugins\RemoveDirtMC.avs")

aud=NicMPG123Source("E:\forum\faq\yukukuhi\B\"
  \+"Chinnvar Movie Comedy Sample PID 125 L2 2ch 48 256 DELAY 5ms.mp2",
  \normalize=false)
vid=MPEG2Source("E:\forum\faq\yukukuhi\B\"
  \+"Chinnvar Movie Comedy Sample.d2v")
AudioDub(vid,aud)

Source1=last
a=source1.SelectEvery(3,0).RemoveDirtMC(50,false).RemoveSpotsMC2().MDG2()
b=source1.SelectEvery(3,1).RemoveDirtMC(50,false).RemoveSpotsMC2().MDG2()
c=source1.SelectEvery(3,2).RemoveDirtMC(50,false).RemoveSpotsMC2().MDG2()
Interleave(a,b,c)

LimitedSharpenFaster()
AddGrainC(1.5,1.5)
return last
Details:

Import
The script begins with two Import() statements. Import() is a built-in Avisynth function that evaluates another script and imports the results into your current script (http://avisynth.nl/index.php/Internal_functions#Import). This saves you the trouble of copying and pasting the outside script into yours, although you can still do that if you wish. Two script-formatted plugins are imported. One of them, MDG2, is the code that you previously saw as the "MV2" function near the bottom of the earlier script in post #12. I decided to rename it MDG2, since it's really a doom9-modified version of the MDegrain2 filter from the MVTools plugin documentation.

MDG2 uses MVTools functions to scan and analyze motion for two frames preceding the current frame and two frames after it. The filter then decides what is motion and what is noise. How that works is best known to the geek who designed the filter. MDG2 then uses some clever masking and overlay techniques to retrieve more original detail and restores a little ordered noise (after all, some of the finer detail in video is similar to noise). One disadvantage of temporal filters is that they fail when motion is involved if they read only a frame at a time. Motion analysis with mvtools helps overcome that limitation, but not entirely.

I've attached MDG2.avs as a plugin. It requires either progressive video or video after using SeparateFields(). MDG2 also requires MVTools2.dll (http://www.digitalfaq.com/forum/atta...s2_27_21_22zip) and aWarpSharp2.dll (http://www.digitalfaq.com/forum/atta...sharp2_2015zip). Unzip the download files and copy the dll's into your Avisynth plugins. (Note that aWarpSharp2 and the mvtools2 used by QTGMC are included in the Avisynth_plugins.zip mentioned earlier.)

Plugins named ".dll" or ".avsi" will load automatically when used in a script. Plugins named ".avs" are not autoloaders, so you load them explicitly with an Import() statement. What, then, is the use of an .avs for plugins? You can have many versions of similar code but with different file names. There are many official and unofficial versions of RemoveSpots, RemoveDirt, and others, that are .avs files.

aud=NicMPG123Source("E:\forum\faq\yukukuhi\B\"
\+"Chinnvar Movie Comedy Sample PID 125 L2 2ch 48 256 DELAY 5ms.mp2",
\normalize=false)

The NicAudio plugin is used to decode the MPG Layer2 audio file that was extracted when I ran DGIndex on your .ts file. The audio will be decompressed and run as uncompressed PCM audio. You can recompress it later if you want, but every time you recompress lossy audio it sounds worse. Save audio recompression for your final encode.

vid=MPEG2Source("E:\forum\faq\yukukuhi\B\"
\+"Chinnvar Movie Comedy Sample.d2v")

I used NicMPG123Source to create an audio track named "aud" in memory, and then I used MPEG2Source to create a video track named "vid" in memory. "Aud" and "vid" are names that I invented.

AudioDub(vid,aud)
This built-in function joins video and audio together. http://avisynth.nl/index.php/AudioDub

Source1=last
I invented another object in memory and named it "source1". Source1 will contain the results of the "last" statement that was executed -- in this case, the last statement that was executed was the AudioDub() statement. So "Source1" refers to the video that was created by AudioDub.

a=source1.SelectEvery(3,0).RemoveDirtMC(50,false).RemoveSpotsMC2().MDG2()
b=source1.SelectEvery(3,1).RemoveDirtMC(50,false).RemoveSpotsMC2().MDG2()
c=source1.SelectEvery(3,2).RemoveDirtMC(50,false).RemoveSpotsMC2().MDG2()

The SelectEvery() function selects only specified members from groups of frames or fields. The real name of the function is Select(), with several variants. In this case I first create a place in memory called "a", and I tell Source1 to fill "a" only with certain frames that I specify. Frame numbers start with 0, so if there are 3 frames their frame numbers are 0, 1, and 2. I use SelectEvery(3,0) to read the frames in Source1 and for every group of 3 frames, take only frame 0. After I fill "a" with frame 0's, I create "b" and fill it with the second frame of every group of 3 frames. Then I create "c" and tell Source1 to fill "c" with the third frame of every group of 3 frames. Therefore a, b, and c each contain unique frames from Source1. http://avisynth.nl/index.php/Select

In turn, a, b, and c each are filtered by RemoveDirtMC, then by RemoveSpotsMC2, and then by MDG2. Splitting the frames of source1 three ways throws the spots, scratches and other defects into a more random pattern, in the hope that a spot that persists over the length of 2 or 3 frames will be seen only once by one of the filters, and thus will be seen as "noise" that doesn't repeat in the other frames. If the disturbance persists over several frames and doesn't change shape or position, denoisers won't see it as noise.

Interleave(a,b,c)
This built-in function takes frames from "a", then from "b", then from "c", one at a time and in that order, and returns them to their original order and lineup. It does this until it runs out of frames. http://avisynth.nl/index.php/Interleave
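To make the frame bookkeeping concrete, the three-way split and the Interleave are exact inverses:

Code:
# a = source1.SelectEvery(3,0) takes frames 0, 3, 6, 9, ...
# b = source1.SelectEvery(3,1) takes frames 1, 4, 7, 10, ...
# c = source1.SelectEvery(3,2) takes frames 2, 5, 8, 11, ...
# Interleave(a,b,c) re-emits them as 0, 1, 2, 3, 4, 5, ... -- the original order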

LimitedSharpenFaster()
This avsi plugin is a specialized sharpener that is "limited" in the way it sharpens, which is to do so without creating the usual edge-distorting halos and other bad effects. It requires progressive video. It has many adjustment parameters, but don't think it's perfect: set it strong enough and you'll get the usual halos and plastic effects. But in normal use it can get very sharp before it starts looking strange. http://www.digitalfaq.com/forum/atta...rpenfasterzip. LimitedSharpenFaster requires other plugins: MaskTools2.2.x (included in Avisynth_plugins.zip in the QTGMC subfolder). It also requires the old RemoveGrain package (http://www.digitalfaq.com/forum/atta..._v1_0_fileszip).

These require Microsoft VisualC++ 2015 runtimes (https://www.microsoft.com/en-us/down....aspx?id=52685), which you may as well acquire because many other filters use that system runtime library.

AddGrainC(1.5,1.5)
We've used so many denoisers we risk creating an overly smooth, plastic look. We can make things look more like fine-grain film by adding some ordered, dithered noise in the form of very fine film-like grain, rather than the coarse VHS type of dirty grain. AddGrainC is in the QTGMC subfolder in the Avisynth_Plugins.zip package.

return last
A previous post discussed this standard return statement ("return the last thing you just completed"). In this case the statement isn't optional. The script has invented a bunch of new entities, named "a", "b", "c", "Source1", etc. You have to tell Avisynth which one of these entities you want returned. What you want is the result of the very last operation performed up to this point, which would be "last".

This new general script doesn't clean as many scratches as the earlier scripts did. But it doesn't destroy as much detail or create nearly as many artifacts, either. However, it manages to clean up spots and blemishes just about as well.

The output of this script is YV12 color. I saved it as YV12 using Lagarith lossless codec. If you run this script in VirtualDub and don't specify an output color depth or compressor, VirtualDub by default will save it as uncompressed RGB -- which you don't want. Besides, uncompressed RGB would be 4 times the file size of losslessly compressed YV12.

In the next post I'll suggest more repairs and post some samples.


Attached Files
File Type: mp4 Filter artifact sample.mp4 (14.16 MB, 10 downloads)
File Type: avs MDG2.avs (1.4 KB, 49 downloads)
The following users thank sanlyn for this useful post: yukukuhi (04-30-2018)
  #33  
04-28-2018, 08:01 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
(Continued from the previous post)

Here's a case that requires a fairly common repair coupled with some desperate-case tactics that one doesn't encounter every day. It's a shame that so many film transfers from the subcontinent have horrible problems. But there are worse examples than this one, and most of them can't be repaired at all.

The new script posted above works pretty well for all of your samples to clean a lot of scratches and scrub almost all of the spots and blemishes. But there are other problems, like frames with really bad defects. Here's a sample of a frame with serious damage:

[attached image: original bad frame.jpg]

The same frame after the repair script:

[attached image: replaced frame.jpg]

The attached video "Chinnvar bad frames before_vs_after.mp4" is a side-by-side comparison of the original problem frames and the repaired segment. The original frames are on the left side of the video image, the repaired segment is on the right. Play this video continuously and look at the original frames on the left. Most of the glitches are along the right side of that left image and include two big red "flashes" and some distorted "bulges" in the walls behind the man in the checkered shirt. These are the remains that were unfiltered by the original script. You can play or download the attached mp4 by clicking this link: http://www.digitalfaq.com/forum/atta...1&d=1524919327.

Such repairs aren't always completely successful, if at all. The plugin used was ReplaceFramesMC.avsi (http://www.digitalfaq.com/forum/atta...ceframesmcavsi). It uses functions in MVTools2 to interpolate a new frame from data in adjacent good frames. The first big red flash in the demo video populates two sequential frames, so two frames had to be re-created. It gets tricky when multiple frames are involved; if there is much motion or camera panning, bizarre distortions can be created and objects can disappear. In this case the sequence had very little movement. ReplaceFramesMC requires MVTools2 (see the QTGMC subfolder in Avisynth_Plugins.zip, discussed in the previous post).

In the repair code below, which repairs the leftover defects in that video, the avi being opened is the file that resulted from the new general cleanup in the previous post. That video has already had the basic filtering, but it has a few glitches that the filters aren't designed to handle. The code below extracts a segment of a few hundred frames for special treatment:

Code:
AviSource("E:\forum\faq\yukukuhi\B2\Chinnvar_04D.avi")
Trim(953,1159)
source=last
save_audio=source

source
ReplaceFramesMC(80,2)
ReplaceFramesMC(97,1)
ReplaceFramesMC(113,1)
ReplaceFramesMC(129,1)

v1 = last
b0=v1
b01=v1.ReplaceFramesMC(126,5).Crop(598,0,0,0)
b02=Overlay(b0,b01,x=598,y=0)
v2=ReplaceFramesSimple(v1,b02,mappings="126 130")

b0=v2
b01=v1.ReplaceFramesMC(162,1).Crop(598,0,0,0)
b02=Overlay(b0,b01,x=598,y=0)
v3=ReplaceFramesSimple(v2,b02,mappings="162")

b0=v3
b01=v1.ReplaceFramesMC(32,1).Crop(612,0,0,0)
b02=Overlay(b0,b01,x=612,y=0)
v4=ReplaceFramesSimple(v3,b02,mappings="32")

AudioDub(v4,save_audio)
ConvertToRGB32(interlaced=false)
LoadVirtualDubPlugin("D:\VirtualDub\plugins\deflick.vdf","DeFlicker",1)
DeFlicker(8, 10, 0, 256, 0)
ConvertToYV12(interlaced=false)
Stab()
Crop(12,4,-14,-6).AddBorders(12,6,14,4)
return last
AviSource("E:\forum\faq\yukukuhi\B2\Chinnvar_04D.avi")
AviSource is a built-in function that opens and decodes several forms of AVI. It's used for lossless AVI's compressed with the likes of huffyuv, Lagarith, UT Video codec, and DV, where those codecs are installed in your system. http://avisynth.nl/index.php/AviSource

Trim(953,1159)
The built-in Trim() function (http://avisynth.nl/index.php/Trim) pulls out frames 953 to 1159 from the original file. The desired frame numbers were determined by scrolling through the avi in VirtualDub.

source=last
save_audio=source
We need to create two versions of the video for this repair. One version is called "source" and consists of the last thing the script did, which was to trim the input file down to 207 frames. The second video being created is named "save_audio" -- it is a copy of "source". The script saves it so that its audio track can be used later in the script. "Source" and "save_audio" are names I invented.

source
ReplaceFramesMC(80,2)
ReplaceFramesMC(97,1)
ReplaceFramesMC(113,1)
ReplaceFramesMC(129,1)

This sequence of code begins with the name of the "source" video -- by mentioning it in this way, the script brings the focus of operations to the "source" video. The statements that follow the first line will be applied to that named "source" video. The numbers of the frames being replaced are the frame numbers within that video of 207 frames.

The syntax of the ReplaceFramesMC(80,2) statement means that 2 frames will be replaced starting with frame 80. Therefore, the two "bad" frames being replaced are 80 and 81. The "good" frames that will be used to interpolate new data for frames 80 and 81 will be the preceding frame 79 and the following frame 82. Because there is so little motion in those frames, the results should work pretty well. They do, and the only visible clue to the interpolation is a slight blurring of the moving hand of the man on the left. But that hand is a little blurred in the original anyway, so no great harm is done.

Other frames that need replacing are 97, 113, and 129. The number of frames being replaced is just 1 frame each, not 2 frames as in the first replacement. Again, there is so little movement in those frames that it works well. But....

v1 = last
b0=v1
b01=v1.ReplaceFramesMC(126,5).Crop(598,0,0,0)
b02=Overlay(b0,b01,x=598,y=0)
v2=ReplaceFramesSimple(v1,b02,mappings="126 130")

There is a motion problem with some of the remaining bad frames. There is too much motion from the three people to avoid distorting them badly. But, friends, we are in luck: the bad portion of the images doesn't involve motion. This means that if we repair only the bad portion of the frame, the rest of the frame will remain intact. The patch for only a portion of the frames is what the above code is about.

First, the routine needs to create a new copy of the video as it exists to this point. Therefore, "v1 = last" creates a new working copy of the video and arbitrarily calls it by the made-up name of "v1". v1 is going to be a constant master file from which data will be pulled to create repair patches.

"b0=v1" creates yet another video copy. It's a copy of v1 and is arbitrarily named "b0" (that's b-zero"). In a moment you'll see what b0 is for.

"b01=v1.ReplaceFramesMC(126,5).Crop(598,0,0,0)" does several things. First, it creates a new video named "b01". Then this line of code looks at the v1 master video and uses ReplaceFramesMC to replace 5 bad frames in v1 starting with v1 frame number 126. Then the crop statement "Crop(598,0,0,0)" removes 598 pixels from the left side of v1's frames. Those 598 pixels are where the three people are moving around. That portion of v1's frames will be discarded because we don't want to repair the moving people. What we want to keep are the right-hand 122 repaired pixels in v1's frames. These remaining 122 pixels make up our repair "patches". The results of this replace and crop operation are moved into the new "b01".

Now we need to create a new piece of video onto which we will overlay the repaired 122-pixel patches. So "b02=Overlay(b0,b01,x=598,y=0)" creates a new video named b02. This statement then overlays all the original frames in b0 with all the patches that were created in b01. It places the repair patches at x=598 on the right side of the old frames, thus covering the 122 bad pixels with 122 repaired pixels. All of the overlaid frames from b0 are then kept in the new b02.

Problem: b02 actually contains 207 frames, but all we wanted to repair were 5 frames, 126 thru 130. All the other frames in b02 contain hundreds of oddball repairs and distortions that we don't want. How do we use only the 5 repaired frames we wanted?

Enter the new output video V2. "v2=ReplaceFramesSimple(v1,b02,mappings="126 130")" uses a function called ReplaceFramesSimple to take our saved master v1 video and replace 5 of its frames, numbered 126 thru 130, with the same numbered frames from the repaired patches in b02. The new output video V2 contains all the previously fixed frames of v1 along with 5 new repaired frames from b02.

ReplaceFramesSimple is a function in the plugin RemapFrames.dll, which contains other handy frame replacement functions. Its home page is here: http://avisynth.nl/index.php?title=R...es&redirect=no. Use the 32-bit version of RemapFrames.dll from http://ldesoras.free.fr/src/avs/RemapFrames-0.4.1.zip.

The next two repair routines do the same thing to create new videos V3 and V4. The process is the same:
1. Start with the name of the latest repaired master video. This script started with v1.
2. Create another copy of the master called b0, which can be used without affecting v1.
3. Perform a repair or replace operation on a portion of a frame(s), then use crop() to save the desired patch area. Place the patches in a new video called b01.
4. Overlay the repair patches from b01 onto a new video b02 and place the repair patch in the desired frame area.
5. Replace old master frame(s) with only the desired replacement frame(s) from b02, then place the saved master frames and the new repaired frames in a new master output video. In this case the new master output videos were v2, then v3, and finally v4.

Yes, I know. Very tricky at first. There's another way: buy a really pricey NLE that creates overlays, frame by frame, one at a time, and then create a new video from the repaired frames.

AudioDub(v4,save_audio)
I guess you recall the "save_audio" video created earlier in this script. Now is the time to use it. The frame replacements often create audio problems and interruptions. Dub the saved audio into the last version of the video that was completed (here, the last repaired version was V4).

ConvertToRGB32(interlaced=false)
LoadVirtualDubPlugin("D:\VirtualDub\plugins\deflick.vdf","DeFlicker",1)
DeFlicker(8, 10, 0, 256, 0)

This damaged scene has subtle but visible luma flicker, which you can see in the blue door panels. This code shows how a VirtualDub filter can be executed in an Avisynth script. Not all VDub filters can do this. First, convert the video to RGB32 (and be sure to specify whether or not the video is interlaced. Yes, it matters). LoadVirtualDubPlugin looks the same way for almost every VDub filter. The only things to change in the statement would be the filter's .vdf name and the coded name you want to use (in double-quotes, as here in "DeFlicker"), followed by the pre-roll number. What is the pre-roll number? We'll be here for weeks explaining that one. Take our word for it: the pre-roll number is always 1 until some VDub filter's documentation says otherwise.

Now the question is, where do the numbers in "DeFlicker(8, 10, 0, 256, 0)" come from? If you ever mount a VirtualDub filter and save the settings as a .vcf file, open the .vcf file with Notepad (a .vcf is just plain text) and look for the name of your desired filter in the listing, usually at or near the end. The filter's name will be followed by a line that contains "config" and a whole bunch of values in parentheses. The values in parentheses are what you want for your Avisynth script.

In this case the two affected lines for this filter as tested in the .vcf file looked like this:
Quote:
VirtualDub.video.filters.Add("deflicker (1.3b1)");
VirtualDub.video.filters.instance[0].Config(8, 10, 0, 256, 0);
Of course you have to put two and two together and realize that the filter named "deflicker (1.3b1)" in the .vcf file is really the plugin Deflick.vdf in the VDub plugins folder. No, it shouldn't be that complicated, but they made it that way on purpose just to annoy us.
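Putting those pieces together, the general pattern for any VDub filter that supports this route looks like the sketch below (the filter name and its numbers are placeholders, not a real plugin):

Code:
ConvertToRGB32(interlaced=false)   # VDub filters work in RGB32
LoadVirtualDubPlugin("D:\VirtualDub\plugins\SomeFilter.vdf", "SomeFilter", 1)
SomeFilter(10, 20, 0)              # values copied from the .vcf Config(...) line
ConvertToYV12(interlaced=false)    # back to YUV for further processing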

Stab()
You don't have to do this, but the jumpy frames in this scene and others can get annoying after a while until you apply at least some mild stabilization. This is a job for stab(), a small .avsi script. Stab() is a short name for "stabilizer". Clever, eh? You can get stab.avsi at http://avisynth.nl/index.php/Stab. Be sure to browse that page, especially the stuff in the top half where requirements are listed.

This filter requires non-interlaced video and uses Depan.dll and RgTools.dll. Both are in the Avisynth_plugins.zip file. RgTools is in the QTGMC subfolder inside that zip.

Stab() will shift the image contents perhaps 2 pixels or so in any direction in order to make things look less shaky.

Crop(12,4,-14,-6).AddBorders(12,6,14,4)
Using Stab() in the previous statement will shift all 4 borders a small but visible amount, so they get a mild readjustment here. Slight border changes occur in movies and broadcasts all the time, so these will go unnoticed against dark display backgrounds when the repaired segment is rejoined to the main video.
http://avisynth.nl/index.php/Crop
http://avisynth.nl/index.php/AddBorders
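A quick size check shows that the frame dimensions survive the round trip:

Code:
# Crop(12,4,-14,-6) removes 12+14 = 26 pixels of width and 4+6 = 10 of height;
# AddBorders(12,6,14,4) adds the same 26 and 10 back, so the frame returns to
# its original size with fresh black borders hiding Stab()'s edge shifts.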

return last
By now you know what this statement means and why it's needed here. Many named entities like V1, V2, b0, etc., were invented here, and Avisynth needs to know which of these inventions should be output. The answer is "last", meaning the last thing done.

Allow me to finish with something easier in the next post....


Attached Images
File Type: jpg original bad frame.jpg (136.9 KB, 134 downloads)
File Type: jpg replaced frame.jpg (135.4 KB, 133 downloads)
Attached Files
File Type: mp4 Chinnvar bad frames before_vs_after.mp4 (5.39 MB, 15 downloads)
The following users thank sanlyn for this useful post: Delta (05-23-2021), wimvs (04-29-2018), yukukuhi (04-30-2018)
  #34  
04-28-2018, 08:17 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
(Continued from the previous post)


Something easier to work with is the high black levels and backlighting problem in the starting shot of "Chinnvar Movie Comedy Sample.ts", and the beach scenes in "Chinnvar Movie Comedy.ts" (the latter was uploaded to videohelp.com). It's possible that the filmmaker wanted those scenes to look high-key the way they do. But if you wanted them to look more realistic, here's a script and some VDub filter settings that can change things:

original frame:

[attached image: Levels and contrast - original.jpg]

after corrections:

[attached image: Levels and contrast fix - after.jpg]

Code:
AviSource("E:\forum\faq\yukukuhi\B2\Chinnvar_04A.avi")
Trim(0,559)
ConvertToYV24(interlaced=false)
ColorYUV(off_y=-18,cont_y=25)
Levels(42,1.4,255,16,255,dither=true,coring=false)
Tweak(sat=1.15,dither=true,coring=false)
#ConvertToYV12(interlaced=false)
#Histogram("Levels")
ConvertToRGB32(interlaced=false)
return last
In the images posted above you see YUV histograms on the right side. The histograms are produced by the Histogram("Levels") statement near the bottom of the script. This statement can be turned on to check the effects of filter settings, then turned off with the "#" comment mark when outputting the final results.

The script uses aviSource to open an .avi that was filtered by the new general script discussed earlier, then was saved as Lagarith YV12. The Trim() function then extracts the first 560 frames from the original file (frame numbers 0 to 559 equal 560 frames total).

ConvertToYV24(interlaced=false) converts a 4:2:0 limited-chroma YV12 file to a 4:4:4 colorspace with greater chroma depth. This isn't essential, but it does prevent huge "gaps" in the response spectrum when levels and colors are stretched to populate more of the spectrum. As you'll see in a moment, we almost literally intend to "stretch" some of the values in the original. http://avisynth.nl/index.php/Convert

In the histogram of the top image you can see that luma values (the white band) fall off rapidly at the left-hand side. This indicates that there are very few really dark colors or blacks, if any. We would say that black levels seem rather high, giving a somewhat over-exposed and low-contrast look to the image. ColorYUV(off_y=-18) applies a negative offset to the brightness channel -- that is, it subtracts 18 points from every luma pixel in the frame, thus darkening every pixel from the darkest to the brightest by the same 18-point amount. This gives the image some darker pixels for the hair and mustache. But it also darkens everything else, including some of the midrange facial tones. http://avisynth.nl/index.php/ColorYUV

Levels(42,1.4,255,16,255,dither=true,coring=false) is used to make a few tweaks and corrections. The first numeric value of 42 is a luminance value that the process considers to be the darkest luma value available (i.e., dark input). The 4th number in the sequence is 16, which indicates where we want those 42-level input pixels to be output (i.e., dark output). In other words, we tell the Levels filter to take dark values at about 42 and gradually darken them down to y=16, which will be RGB black. That's fine for the very darkest part of the hair in shadow, but what about the highlights and facial tones? Those are addressed by the 2nd number in the sequence (1.4), which is the desired gamma output. Gamma handles mostly the dark midtones up to the lesser brights -- specifying a value of 1.4 brightens that entire range to give the image some brighter mids and brights and to open up shadow and facial detail.

The third number in the Levels sequence defines the bright input value, while the fifth number in the sequence defines the bright output value. 255 is specified for input and output, only because we don't want the process to change any brights. The brightest values in the image are about y=225, so brights won't get brighter and won't exceed the desired limit of y=235 for YUV. http://avisynth.nl/index.php/Levels
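For reference, the per-pixel mapping that Levels applies (per the Avisynth documentation) is:

Code:
# out = ((in - in_low) / (in_high - in_low)) ^ (1/gamma)
#        * (out_high - out_low) + out_low
# With Levels(42,1.4,255,16,255): y=42 maps to 16, y=255 stays at 255,
# and gamma 1.4 lifts the midtones in between.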

Tweak(sat=1.15,dither=true,coring=false) is used to mildly increase saturation. http://avisynth.nl/index.php/Tweak

#ConvertToYV12(interlaced=false) and #Histogram("Levels"), as you can see, are commented-out or disabled for the final output version. During adjustments they were used to display the YUV Levels histogram, which works only in YV12. http://avisynth.nl/index.php/Histogram

ConvertToRGB32(interlaced=false) prepares the output for VirtualDub color filters. The VDub filters were loaded and adjusted while running and viewing the Avisynth script. You can also run the script and the VDub filters in two separate steps if you want; sometimes a script is so slow that it's impractical to run both at the same time. In the corrected image above, skin tones look more properly exposed and there is actually some blue in the sky.

Exact color balance is largely a matter of personal taste. I prefer the result to look reasonably convincing and realistic -- purple hair might look cool as an effect, for instance, but it would be inappropriate here. The VDub filters used were ColorMill and gradation curves. Attached is the .vcf file "Chinnvar VirtualDub settings.vcf" with the settings that gave the results in the lower image posted above. To load the filters, use "File" -> "Load processing settings", then locate the saved .vcf file and open it. You must have the two named filters in your VDub plugins folder.

Attached is a sample of the general script's results plus some color tweaks, applied to different edited scenes from some of your .ts downloads. While there are several improvements over the lossy originals, gross imperfections remain that don't immediately catch the eye. Put the mp4 in an editor that can step frame-by-frame and look at frames 276 through 297 (at 11.5 to 12.5 seconds).

The mp4 attachment is "Filtered samples 23_976.mp4". The video plays at the original 23.976 fps film speed instead of 25 fps. A few more tweaks are easily possible.

This should give readers plenty to think about before taking on nightmare projects like this. Sometimes it's better to just leave things as-is.


Attached Images
File Type: jpg Levels and contrast - original.jpg (72.4 KB, 133 downloads)
File Type: jpg Levels and contrast fix - after.jpg (80.4 KB, 132 downloads)
Attached Files
File Type: vcf Chinnvar VirtualDub settings.vcf (3.5 KB, 4 downloads)
File Type: mp4 Filtered samples 23_976.mp4 (30.24 MB, 3 downloads)
Reply With Quote
The following users thank sanlyn for this useful post: wimvs (04-29-2018), yukukuhi (04-30-2018)
  #35  
04-29-2018, 09:38 AM
yukukuhi yukukuhi is offline
Free Member
 
Join Date: Apr 2018
Posts: 68
Thanked 0 Times in 0 Posts
This is going to take me some time to keep up with.
Reply With Quote
  #36  
04-29-2018, 09:44 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Fortunately you have to learn it only once.
Thank goodness not all videos and captures are so badly damaged. I have noisy old home-made VHS tapes that looked better.
Reply With Quote
  #37  
04-30-2018, 10:43 AM
yukukuhi yukukuhi is offline
Free Member
 
Join Date: Apr 2018
Posts: 68
Thanked 0 Times in 0 Posts
Hey sanlyn, should I deinterlace before or after the frame repairs when using the stab filter?

And the numbers you come up with for the Levels filter -- is it a matter of personal judgment, or is there some math behind them?
Reply With Quote
  #38  
04-30-2018, 11:02 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Quote:
Originally Posted by yukukuhi View Post
Hey sanlyn, should I deinterlace before or after the frame repairs when using the stab filter?
The .ts files aren't interlaced.
Otherwise, for interlaced files you have to deinterlace, and for telecined video you should use TIVTC (inverse telecine). Most of the time PAL standard def is either interlaced or progressive.
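For reference, here's a minimal sketch of the two cases. The file name, field order, and preset are placeholders you'd match to your own source:

Code:
# Genuinely interlaced source: deinterlace first, then do frame repairs
AviSource("interlaced_sample.avi")    # hypothetical capture
AssumeTFF()                           # or AssumeBFF() -- match the source's field order
QTGMC(preset="Medium")                # deinterlace (double-rate by default)
Stab()                                # stabilize on progressive frames

# Telecined film source: inverse telecine with TIVTC instead
#   TFM()          # field matching
#   TDecimate()    # drop the duplicate frames, restoring the film rate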


Quote:
Originally Posted by yukukuhi View Post
And the numbers you come up with for the Levels filter -- is it a matter of personal judgment, or is there some math behind them?
When adjusting levels I'm watching a YUV histogram in the frame at the same time. The documentation for Levels() is on the net at http://avisynth.nl/index.php/Levels. Documentation for all Avisynth functions is also on your computer: open the Programs or All Programs menu, find the Avisynth program group, expand it, and click on "Avisynth documentation". The installed file that opens the documentation is in the Avisynth program folder at "\docs\English\index.htm". You can also make a desktop shortcut to that menu item and just click on it.

You want to keep the top white luma band inside the unshaded safe range of y=16-235. Luma that flows into the right or left shaded areas indicates clipping in RGB, because RGB display expands 16-235 to 0-255. Remember that black borders throw off the left side: they are usually plain black and show up as a little "spike" at the left-hand edge, so I usually crop the borders off temporarily so the histogram shows levels for the real content only. Look at the images with YUV histograms in post #34 and you'll see that the borders were cropped off. Once you're inside the correct range of 16-235, the rest is eyeball judgment (and you'd best be working with a calibrated monitor, or your video will look entirely different on other displays). After adjustments are made, delete or comment-out the crop and histogram statements so the full frame is restored.
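While adjusting, the temporary statements look something like this (the border sizes here are hypothetical -- measure your own):

Code:
Crop(16, 8, -16, -8)              # cut away black borders so they don't spike the histogram
ConvertToYV12(interlaced=false)
Histogram("Levels")               # keep the white luma band inside the unshaded 16-235 zone
# delete or comment-out these lines before rendering the final full-frame output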

Sometimes further tweaks for specific luma or color ranges are done in RGB in VirtualDub, if needed. For example, if blacks look a little too green or whites are too blue or whatever, it's difficult to restrict operations to narrow ranges in YUV. If you want to adjust darks or skin tones without affecting the rest of the spectrum, RGB filters such as ColorMill or curves will let you do that.
Reply With Quote
  #39  
04-30-2018, 11:57 AM
yukukuhi yukukuhi is offline
Free Member
 
Join Date: Apr 2018
Posts: 68
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by sanlyn View Post
The .ts files aren't interlaced.
But when I run it through DGIndex, it shows up as interlaced video.
Reply With Quote
  #40  
04-30-2018, 12:22 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
All of your samples are progressive video encoded as interlaced. This is often done for compatibility when making authored discs, and HD-PVRs (except gaming editions) almost always record with interlace flags even if the broadcast is progressive or telecined. If you step through motion frames in VirtualDub or in a running script, frame by frame, there is no interlace combing and no double field images. If you deinterlace and play frame by frame, you'll see a duplicate of every frame.

Neuron2_How To Analyze Video Frame Structure
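A quick way to check a source for yourself, using the .d2v project that DGIndex creates (the file name is a placeholder):

Code:
MPEG2Source("sample.d2v")    # DGDecode's source filter for DGIndex projects
Info()                       # overlays clip properties, including field flags
#SeparateFields()            # uncomment and step through fields: on progressive-encoded
                             # material the two fields of a frame show no motion between them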
Reply With Quote