#41  
10-06-2016, 03:42 AM
msgohan msgohan is offline
Free Member
 
Join Date: Feb 2011
Location: Vancouver, Canada
Posts: 1,323
Thanked 334 Times in 276 Posts
Quote:
Originally Posted by lordsmurf View Post
I used to get in heated discussions with plugins devs at VH, because their documentation was obtuse crap -- or simply missing entirely. Making a plugin is great, but worthless if nobody can use it. Several filters are still total unknowns after a decade. Nobody knows to use it. Waste of space, really. Just a tease.

Some are actually just "work in progress" with no progress.
This hits a little close to home.

http://forum.doom9.org/showthread.php?t=149003
http://forum.doom9.org/showthread.php?t=167875
Reply With Quote
The following users thank msgohan for this useful post: sanlyn (10-06-2016)
  #42  
10-06-2016, 04:02 AM
lordsmurf's Avatar
lordsmurf lordsmurf is online now
Site Staff | Video
 
Join Date: Dec 2002
Posts: 13,503
Thanked 2,449 Times in 2,081 Posts
Off-topic for a moment...

Quote:
Originally Posted by msgohan View Post
jmac is the same. He had good momentum, then petered out. I was really looking forward to seeing where that software TBC would go. But that was 5-7 years ago.

At least mirror your files here sometime.

New plugins devs learn from past mistakes, errors and successes. Some of my scripting is based on failures of others. So that old plugin may actually be useful to some.

I'd considered opening a SourceForge/Github-like member area just for Avisynth and VirtualDub plugin developers, but devs are fickle. Nothing ever pleased the ones I talked to, and then they'd always flake out 6 months later anyway. Many outright disappeared. After a year, I gave up. Plugins can/should be added to the forum now, but instead they always use crap like MegaUpload (and we all know what happened to those files!), MediaFire (malware ridden cesspool), etc. Stuff on those sites disappears.

It was actually Doom9 that made me want to attach all files and images to this forum, as the number of 404 links on that site is ridiculous. Then VH upped their size limits, and we were finally able to do the same after upgrades in 2015.

I want to work on that thumbnail / larger image size you've requested. Forum software somewhat limits what's possible. Anything to get more members here, and away from some of those others sites.

- Did my advice help you? Then become a Premium Member and support this site.
- For sale in the marketplace: TBCs, workflows, capture cards, VCRs
Reply With Quote
  #43  
10-06-2016, 06:25 AM
koberulz koberulz is offline
Premium Member
 
Join Date: Feb 2016
Location: Perth, Australia
Posts: 453
Thanked 3 Times in 2 Posts
Quote:
Originally Posted by lordsmurf View Post
Use AvsPmod.
I stumbled across that a couple of weeks ago, and have been, but there are still quite a few things you can't adjust in the GUI. Still, it's certainly progress.

Quote:
Yeah, it's really crappy. This has been my main complaint for 15 years now. I used to get in heated discussions with plugins devs at VH, because their documentation was obtuse crap -- or simply missing entirely. Making a plugin is great, but worthless if nobody can use it. Several filters are still total unknowns after a decade. Nobody knows to use it. Waste of space, really.
Well, true, also that. But for example CNR2, I look it up in the Wiki and it talks about rainbows and 'huge analog chroma activity' and...what? Screenshots or video samples or something, please.

You at least have the advantage of knowing enough terminology and enough about this stuff to spot and identify errors, whereas I don't even have that yet.

Quote:
Just a tease.
Says the guy who's been posting sample restorations with a script nobody else can use.

Quote:
Colors should not be neon. The saturation is "illegal" because it blooms, destroys detail. For example, the "studio" lapel. You can see them. All you can see is neon pink. The value is exceeding the luma that held the primary contrast data. It's not legal, not balanced.

Not in this instance. Again, it chroma blooms beyond the luma. That should not happen.
Detail is about variation within colours as well though, yes? Not just luma?

How do you tell the difference between chroma overwhelming luma and luma just not having any contrast to begin with? Or does it not matter?

Quote:
Pay close attention to the VCR, TBC and capture card for harsh value changes. Process of elimination needed to exclude items. Yes, that means multiple equipment. And I know, not possible for everybody. You just need to extra vigilant in not screwing up values.
I must not have been clear; I was doing rough restorations of MPG files I was given from someone else's digitisation work, and my 'restoration' process, looking at it now, basically consisted of turning the contrast and saturation up way too much.

Quote:
sanlyn and I said the same thing, but with different words. He likes jargon, which is good. That lets me go more jargon-less. So do you understand it better now, with my explanation? If so, use his to better understand the jargon, which you'll see in the software.
Well, I guess I understand the idea, I'm just not sure where the 'clipping' is. Like, I know with luma you get a sharp cutoff instead of a gradual slope down if you're looking at a histogram, I'm just not sure what the chroma equivalent would be.

Quote:
My settings are actually conservative, and would work for that entire tape. Fix noise now. Do color correction in a second pass. It's lossless, after all -- and that's why.
I meant to say, but didn't, that the MP4 was generated via my first restoration attempt for the main game camera angle.

At which stage would you adjust the chroma shift? I know that at least varies between the studio and game segments.
Reply With Quote
  #44  
10-06-2016, 11:45 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Perhaps the following will answer a few questions:

You earlier asked what clipping is, how one would know that the video you're working with has gone through previous processing, what chroma noise is, what invalid values are, etc. So, put on your patience hat for a brief minute or so while I try to explain via the images and histograms below:

The image below is a 4:3 image from frame 9 of the original Studio.avi, unprocessed.



The red jacket is oversaturated to the point of clipping the lapel edges and other details. This is an objectively measurable and viewable error:

Quote:
clipping is a result of capturing or processing an image where the intensity in a certain area falls outside the minimum and maximum intensity which can be represented. It is an instance of signal clipping in the image domain.
- https://en.wikipedia.org/wiki/Clippi...photography%29
With learning video and graphics work, Google is your friend.

Clipping means that luma and/or chroma values beyond the clipping point are compressed into the value at the clipping point: that is, all data beyond the clipping point is destroyed. It's lost. Kaput. Forever. Pumped or poorly interpreted color values have been pushed to the point of distortion. The technology reacts with flashing and flicker.
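As a quick illustration (hypothetical sample numbers, plain Python rather than anything Avisynth does internally), hard clipping collapses every out-of-range value into the limit itself:

```python
# Hard clipping: every value beyond the clip point collapses into the clip point.
# Sample numbers are hypothetical 8-bit luma values.
def clip_luma(samples, lo=16, hi=235):
    """Clamp samples to the legal 16-235 luma range; detail beyond it is destroyed."""
    return [max(lo, min(hi, s)) for s in samples]

original = [10, 16, 128, 235, 242, 255]
clipped = clip_luma(original)   # [16, 16, 128, 235, 235, 235]
# Three distinct brights (235, 242, 255) are now one flat value: that data is gone.
```

Once several different input values map to the same output value, no later filter can tell them apart again; that's why clipped detail is unrecoverable.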

Besides the horizontal dropout and chroma bleed and displacement (chroma shift), there are other annoyances. The red jacket has an unreal neon or glowing effect called "blooming". Blooming is also evident in the upper right logo. The few remaining details in facial tones are exaggerated contours, the shadows under the chin are purple, and there are cyan blotches around the eyes and chin. There are also magenta blotches in the gray background. You can call these annoyances chroma noise. The magenta blotches are classed as rainbows.

The image below has two histograms made from frame 9: A histogram of the original YUV colorspace (left) and a VirtualDub ColorTools histogram of the RGB display (right).



Clipping, previous processing, chroma noise, invalid values, etc.:

The YUV Levels histogram at the above left was made with Avisynth. The darker shaded borders at both sides of the YUV graph indicate unsafe ("invalid") areas whose values lie beyond the allowable range for most digital video formats. The allowable range is 16 to 235 for what geekos call Y-luminance, or the brightness range. When YUV video is interpreted for RGB display, dark values in the area around y=16 are expanded to RGB 0 (black), and bright values in the area around y=235 are expanded to RGB 255 ("bright white").

YUV color systems store luma values separately in the Y channel, blue-yellow separately in the U ("blue") channel, and red-to-green values in the V ("Red") channel. In the YUV histogram shown, the Y luminance channel is the white bar at the top of the graph. U is in the middle, V at the bottom. Dark values are at the left side, bright values are at the right side.

Consider: if YUV 16-235 is expanded to RGB 0-255, what do you think happens in RGB if the YUV video has values that exceed 16-235? In RGB, the out-of-bounds values are clipped, or converted to RGB 0 at the darkest point or RGB 255 at the brightest point. YUV values beyond 16-235 are therefore destroyed when converted in RGB systems. Clipped data cannot be retrieved. YUV video itself can also carry values outside that range, which is common in professional photography and movies. YUV systems that can contain wide-gamut colors are used to avoid clipping data in YUV, and must later be converted into more common narrow-gamut YUV or RGB color matrices for digital video, movies, TV, and printing.
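The studio-swing to full-swing conversion is simple arithmetic. Here is a sketch in Python of the standard Rec.601 luma expansion (my own illustration, not code from any of the tools discussed here):

```python
# Rec.601 luma expansion: studio-range Y (16-235) maps onto full-range RGB (0-255).
def expand_luma(y):
    """Expand one studio-swing luma value to full swing, clipping out-of-range input."""
    y = max(16, min(235, y))            # out-of-range YUV values are clipped first
    return round((y - 16) * 255 / 219)  # 219 steps of Y spread over 256 RGB codes

# y=16 -> RGB 0 (black); y=235 -> RGB 255 (white).
# y=242 is first clipped to 235, so it lands on RGB 255 as well: detail destroyed.
```

This is why brights beyond y=235 can still be rescued in YUV (lower the contrast first, so they slide back under 236) but never after the RGB conversion has flattened them.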

Now to the practical world:

Avisynth's ColorYUV filter was able to display a numerical index of the min and max luma values in this frame. The minimum (darkest) YUV value is 16, with no data below that point. At y=16 you can see that instead of overflow into the unsafe area, there is a small "spike" at the left-hand border where data abruptly stops. That's why I mentioned earlier that the capture card is known to clip super-blacks at y=16. Your original black borders (y=0) and any other original values darker than y=16 were all clipped, or converted to y=16. Thus any values darker than y=16 cannot be retrieved in their original form. You can make those values brighter, but they'll always look solid black or very dark gray, with no details.

ColorYUV also reported that the highest luma value was 242, which lies beyond y=235 and would be clipped in RGB. But it would be possible to retrieve some bright values by lowering contrast in YUV so that the brightest data would be interpreted smoothly back into the realm lower than y=236. Once those values are clipped in RGB, they cannot be retrieved. This is one of several reasons that we've said that this video has been through previous processing. High-contrast brights were clipped before they ever appeared in the video you're working with. The process that performed that clipping resulted in blanking, or flicker.

While this all seems academic, in practical use the YUV histogram and the numbers are clues that mild dark clipping and some hard bright clipping would still occur.

The histogram at the right in the image above is an RGB histogram made in VirtualDub with the ColorTools plugin. RGB stores luminance and chroma data combined in every pixel, not in separate channels. In RGB you can adjust all 3 color channels separately and in discrete regions without affecting the other colors. In YUV, you can adjust luminance and contrast without affecting chroma values. In YUV you can also adjust chroma contrast without affecting color contrast, and vice cersa. In RGB, if you adjust with a "brightness" filter or a "contrast" filter you adjust the luminance and/or contrast of all the colors at the same time. In advanced image control apps such as Premiere Pro, TMPGEnc's color filters, a few VirtualDub filters, AE's Color Finesse, and other color apps, you can make more targeted adjustments in RGB and YUV.

The image below is an RGB vectorscope of frame 9 made with VirtualDub's ColorTools plugin.



This type of graph displays color content and saturation levels. The small rectangles inside the circle indicate the outer limits of the safe RGB 0-255 level. You can see by the huge cluster of red-yellow values that saturation levels exceed safe limits. I've added a white arrow at the upper right to indicate a flat "wedge" that shows hard clipping of reds, oranges, and yellows. The blank area shows how much chroma detail has been previously destroyed.

The RGB histogram pictured earlier shows the Red channel climbing up the right-hand wall of the histogram, which indicates clipping. This oversaturation (invalid contrast levels) isn't a matter of personal taste. It's a processing error, pure and simple. Some TVs will react to errors like this with blanking, blooming, and/or flicker.

The image below is the same frame 9 after running the Avisynth scripts and some VirtualDub filters. Some mild details of the red jacket and lapels were retrieved in Avisynth, skin tone looks more natural, and there were other corrections. The horizontal dropout is gone.



You can sharpen that frame if you want, but you won't create detail that isn't already gone.

The image below has more red saturation if you really want it, although most people on TV are discouraged from wearing overly bright clothing. Red was increased with VirtualDub's HueSaturationIntensity (HSI) filter.



The corrected video will look brighter on TV than it does in a browser or a PC monitor. TV's have a different luma curve. This is one reason why you should be working with a properly calibrated monitor.

The two free tutorials linked below show you how to work from histograms. The principles are the same for still graphics, cameras, and digital video.
http://www.cambridgeincolour.com/tut...istograms1.htm
http://www.cambridgeincolour.com/tut...istograms2.htm

I'll post the Avisynth scripts and a VirtualDub .vcf file in the next post.


Attached Images
File Type: jpg frame 9 original 640.jpg (106.5 KB, 173 downloads)
File Type: png frame 9 orignal YUV-RGB.png (35.6 KB, 141 downloads)
File Type: png frame9 original VScope 2.png (112.1 KB, 141 downloads)
File Type: jpg frame 9 - B version mp4 640.jpg (78.6 KB, 142 downloads)
File Type: jpg frame 9 - B version mp4 360 MidHit.jpg (50.2 KB, 142 downloads)
Reply With Quote
The following users thank sanlyn for this useful post: Angies_Husband (12-24-2019), koberulz (10-06-2016), msgohan (10-06-2016)
  #45  
10-06-2016, 01:54 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
The Avisynth scripts for the earlier MP4 and for the new MP4 attached below.

I had to run Avisynth in two steps, saving the first step as a Lagarith YV12 intermediate working file. I found this necessary because running the RemoveSpots plugin was so bloody slow. You can go without RemoveSpots if you wish -- it softens things a bit -- but all the horizontal rips and ripples will still be there. There's nothing exotic about the Avisynth and VirtualDub plugins I used; they are common mainstays, in popular use everywhere.

Detailed notes to follow. Lots of them.

Code:
#### step 1 - save as Studio_03a, Lagarith YV12 ####
#### -------------------------------------------####

AviSource(vidpath+"Studio.avi")

ColorYUV(cont_v=-60)
ConvertToYV12(interlaced=true)
AssumeTFF()
QTGMC(preset="medium",border=true,EZDenoise=10,denoiser="dfttest")
stabmod()
FixChromaBleeding()
MergeChroma(MCTemporalDenoise(settings="very High"))
ChromaShift(v=10,L=-4)
MergeChroma(awarpsharp2(depth=30))
SmoothUV()
LimitedSharpenFaster(edgemode=2)
AddGrainC(2.0,2.5)
Crop(10,10,-18,-4).AddBorders(14,6,14,8)
SeparateFields().SelectEvery(4,0,3).Weave()
Code:
#### step 2 - save as Studio_03ab, Lagarith YV12 ####
####     (uses the RemoveSpots5.avs plugin)      ####
#### --------------------------------------------####

AviSource(vidpath+"Studio_03a_a.avi")
AssumeTFF()
SeparateFields()
a=last
e=a.SelectEven().RemoveSpotsMC().RemoveSpotsMC3()
o=SelectOdd().RemoveSpotsMC().RemoveSpotsMC3()
Interleave(e,o)
AddGrainC(1.5,1.5)
Weave()
ConvertToRGB32(interlaced=true,matrix="Rec601")
return last
Now some notes on these scripts:

Script #1:

AviSource(vidpath+"Studio.avi")
Adjust the path statement and file name for the name and location of your video.

ColorYUV(cont_v=-60)
The ColorYUV() function is a multi-faceted Avisynth built-in function. This statement lowers V (RED) channel contrast, which lowers saturation, and brings that channel's values to below y=235. Lowering it too much removes red from skin tones, turning the guy's face toward cyan. These colors are further tweaked in RGB in VirtualDub.

ConvertToYV12(interlaced=true)
Most of the filters used below work only in YV12. The nature of the noise and chroma cleanup is such that full-frame deinterlacing is required. You can't fix problems like chroma shift on interlaced or telecined video; you'll destroy the field relationship and screw up the colors. Note that you must state whether the source is interlaced or not.

AssumeTFF()
Avisynth's default is BottomFieldFirst (BFF). This built-in function informs Avisynth that the field order here is TFF (Top Field First).

QTGMC(preset="medium",border=true,EZDenoise=10,denoiser="dfttest")
QTGMC is the most favored and cleanest deinterlacer. Its "medium" preset is a good balance of speed and cleaning power. border=true tells QTGMC not to break up borders or make them look as if they're "rattling". This parameter isn't always needed, but it was needed here because of jitter. EZDenoise=10 adds extra denoising and is fairly strong, but you could go stronger. denoiser="dfttest" specifies the denoiser to be used. Dfttest is a support plugin that comes with QTGMC's plugin package and can be used as a standalone plugin on its own.

stabmod()
This is a vertical and horizontal stabilizer that helps stabilize jittery frames. Stabilizing frame motion helps denoisers that will follow, which are less effective if objects in the frame are hopping around. Will not work with interlaced video.

FixChromaBleeding()
This old standby plugin works on oversaturation and chroma bleed, especially blue and red. Progressive frames required.

MergeChroma(MCTemporalDenoise(settings="very High"))
MergeChroma() is a built-in Avisynth function. It tells Avisynth to merge only the results of filtered chroma with the luma from the preceding steps. Thus, the named filter inside the parentheses effectively works only to stabilize chroma, with less impact on luma sharpness. MCTemporalDenoise (MCTD for short) is a heavy-hitter plugin that runs slower at its strongest "very high" settings. It also cleans up a lot of chroma noise, flutter, and smear.

ChromaShift(v=10,L=-4)
This filter shifts the chroma planes relative to luma to repair chroma offsets. In this case, v=10 shifts the displaced V-channel (red/yellow) chroma 10 pixels to the right, a fairly severe shift. The L=-4 parameter shifts chroma 4 pixels upward.
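Conceptually, a chroma shift correction just slides the plane's samples back into place and pads the vacated edge. A toy sketch in Python (my own illustration, not ChromaShift's actual code):

```python
# Shift one row of chroma samples n pixels to the right,
# repeating the edge sample to fill the vacated gap (a common padding choice).
def shift_right(row, n):
    return [row[0]] * n + row[:-n]

row = [1, 2, 3, 4, 5]
# shift_right(row, 2) -> [1, 1, 1, 2, 3]
```

The luma plane is untouched; only the misaligned chroma samples move, which is why the correction must be done on progressive material where the field geometry is stable.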

MergeChroma(awarpsharp2(depth=30))
MergeChroma() is again used to tell the filter named inside the parentheses to transfer only filtered chroma to the results, leaving luma as-is. This sharpener uses masking to tighten and tidy up edges with less chroma overrun.

SmoothUV()
This is often used as a de-rainbow (chroma blotch) cleaner.

LimitedSharpenFaster(edgemode=2)
Limited sharpening refers to this sharpener's many configuration parameters (I used only one of them) that let it sharpen without creating halos or clay-face effects. edgemode=2 keeps the sharpening away from the edges themselves. Strong edge sharpening on soft video like this often just makes it look phoney.

AddGrainC(2.0,2.5)
This adds very fine film-like grain, especially useful for masking hard edges on areas like skin shadows and other gradient areas. It helps to mask macroblocks in large areas of over-filtered videos such as this one appears to be. There's no fine detail to enhance in these samples. Adding a little ordered noise helps fill in the gaps and makes it look as if there's a little more texture than is really there.

Crop(10,10,-18,-4).AddBorders(14,6,14,8)
This centers the image inside the borders. The stabmod stabilizer run earlier in this script shifts the image during operation and fills the shifted border areas with black pixels, so the image looks uncentered. Crop() removes the old borders without affecting the core image, then AddBorders() creates new border pixels to center the image. Both of these functions are Avisynth built-ins.
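You can sanity-check that this Crop()/AddBorders() pair is dimension-neutral. The arithmetic below assumes a hypothetical 720x576 PAL frame, but it balances out for any frame size:

```python
# Crop(10,10,-18,-4) trims left=10, top=10, right=18, bottom=4.
# AddBorders(14,6,14,8) pads left=14, top=6, right=14, bottom=8.
def crop_then_border(w, h):
    w, h = w - (10 + 18), h - (10 + 4)   # Crop: -28 width, -14 height
    w, h = w + (14 + 14), h + (6 + 8)    # AddBorders: +28 width, +14 height
    return w, h

print(crop_then_border(720, 576))   # (720, 576): same size, image re-centered
```

The crop removes 28 pixels of width and 14 of height, and the borders add exactly the same amounts back, just distributed differently, which is what re-centers the image.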

SeparateFields().SelectEvery(4,0,3).Weave()
These three dot-connected filters restore interlacing. They are all Avisynth built-ins. SeparateFields() separates full-sized progressive frames into half-size fields, two fields for each frame. SelectEvery(4,0,3) takes every four separated fields and selects fields 0 and 3 (the field numbers begin with zero). Weave() then re-weaves the separated fields back into full-sized interlaced frames.
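If the field mechanics are hard to picture, here is the selection pattern in Python (an illustration of the index math only; the field numbers are hypothetical):

```python
# SelectEvery(4,0,3): from every group of 4 fields, keep offsets 0 and 3.
def select_every(fields, n, *offsets):
    out = []
    for start in range(0, len(fields) - n + 1, n):
        out.extend(fields[start + off] for off in offsets)
    return out

fields = list(range(8))                 # 8 separated fields: 0..7
print(select_every(fields, 4, 0, 3))    # [0, 3, 4, 7]
# QTGMC doubled the rate, so this keeps one matching top/bottom pair per
# original frame, ready for Weave() to re-interlace.
```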

The results of this script were saved in VirtualDub using "fast recompress" mode, as a Lagarith YV12 file named "Studio_03a.avi".


Script #2:

AviSource(vidpath+"Studio_03a.avi")
Adjust the path statement and file name for the name and location of the source video.

AssumeTFF()
As was the case in the first script, Avisynth's default is BottomFieldFirst (BFF). This built-in function informs Avisynth that the field order here is TFF (Top Field First). Remember that this source file was saved as YV12, which is required for RemoveSpots.

SeparateFields()
The RemoveSpots filters work only with non-interlaced frames. Deinterlacing isn't always required; you can emulate it by separating the fields into half-sized "frames". Also, it takes RemoveSpots longer to work with full-sized frames than with half-sized fields.

a=last
I have created a new named video in memory that I call "a". Because "a" isn't the name of any filter or function in this script, I can use it to name whatever I want. This in-memory video is a copy of the result of the "last" statement executed before this line.

e=a.SelectEven().RemoveSpotsMC().RemoveSpotsMC3()
Here I'm creating another name, in this case for a copy of all the even-numbered fields in "a". I assign this copy of even-numbered fields the name "e". To all of the fields in "e", I apply RemoveSpotsMC, then RemoveSpotsMC3, which are two different functions in the RemoveSpotsMC5 plugin. Basically this means that the two spot removers, which also work on dropouts and comets, make a total of 4 passes over each field. This is rather drastic, but those rips and tears are ugly and numerous. That's why the script is so slow. Even-numbered fields are numbered 0, 2, 4, 6, 8, etc.

o=SelectOdd().RemoveSpotsMC().RemoveSpotsMC3()
In this case I've given the name "o" to an in-memory copy of all the odd-numbered fields in "a". I then do to "o" the same thing I did to "e" with the spot removers. Why separate the fields and work them out of order? Because some of those rips extend over multiple fields. Taking them out of order hopefully finds some fields where the rip appears only once. If a rip appears in consecutive fields, it's considered to be a "real" part of the image and won't be filtered. Odd-numbered fields are numbered 1, 3, 5, 7, 9, etc.

Interleave(e,o)
This is an Avisynth built-in function. It takes the separated fields from "e" and "o" alternately, one field at a time (one field from "e", then one field from "o", and so on), and rearranges the fields into their original even/odd sequence.
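In list terms (my own illustration; the numbers are just field indices), the round trip looks like this:

```python
# Interleave(e, o): alternate one field from e and one from o,
# restoring the original even/odd field sequence.
def interleave(e, o):
    out = []
    for even_field, odd_field in zip(e, o):
        out.extend([even_field, odd_field])
    return out

evens = [0, 2, 4, 6]     # what SelectEven() kept
odds  = [1, 3, 5, 7]     # what SelectOdd() kept
print(interleave(evens, odds))   # [0, 1, 2, 3, 4, 5, 6, 7]
```

The fields were only reordered for filtering, never discarded, so interleaving the two halves reconstructs the stream exactly.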

AddGrainC(1.5,1.5)
Again the spot removers did some strong filtering so I'm adding a little fine-grain ordered noise to avoid a plastic look.

Weave()
Again, Weave() re-weaves the separated fields into full-sized interlaced frames.

ConvertToRGB32(interlaced=true,matrix="Rec601")
This converts YV12 to RGB32 for VirtualDub work to follow. The VirtualDub filters were mounted in VirtualDub in "full processing mode" while the script was running, so the results get VirtualDub's filtering on output. "Rec601" is the color matrix for standard definition video.

return last
Because I have named more than one video copy (Remember, I made three videos named "a", "e", and "o") I have to tell Avisynth which video's results I want to output. In this case, I want to return the results of the "last" step that was executed, which was the ConvertToRGB32 statement.

The total run is complete, and I no longer need the first intermediate working file from Step 1. The Step 2 file was input to the encoder for an MP4 separately. The total time taken to run these two scripts and the VirtualDub filters on this short clip was less than 1 minute.

The attached .vcf file has the setup and filter names for the VirtualDub filters I used to get the Step 2 file. You must have the named filters installed for the .vcf to work. If you don't have them, let us know. The VirtualDub filters used were, in this order:
CamcorderColorDenoise (v1.6)
Gradation Curves
ColorMill
Hue/saturation/intensity
VHS (Flaxen)


Hopefully you can learn a little Avisynth and color correction along the way. Is it always necessary to get this complicated? With most of my own captures, no.

Thanks for listening, and good luck. Ask for help if needed.


Attached Files
File Type: vcf studio.vcf (4.0 KB, 7 downloads)
File Type: mp4 Studio_New_Trial_03ab.mp4 (983.8 KB, 13 downloads)

Last edited by sanlyn; 10-06-2016 at 02:47 PM.
Reply With Quote
The following users thank sanlyn for this useful post: Delta (05-24-2021), koberulz (10-06-2016)
  #46  
10-06-2016, 09:55 PM
msgohan msgohan is offline
Free Member
 
Join Date: Feb 2011
Location: Vancouver, Canada
Posts: 1,323
Thanked 334 Times in 276 Posts
Quote:
Originally Posted by sanlyn View Post
High-contrast brights were clipped before they ever appeared in the video you're working with. The process that performed that clipping resulted in blanking, or flicker.
...
Some TVs will react to errors like this with blanking, blooming, and/or flicker.
Nice post. I'm unclear what you mean by blanking, though.

Quote:
In YUV you can also adjust chroma contrast without affecting color contrast, and vice cersa.
Typo?
Reply With Quote
  #47  
10-06-2016, 11:44 PM
koberulz koberulz is offline
Premium Member
 
Join Date: Feb 2016
Location: Perth, Australia
Posts: 453
Thanked 3 Times in 2 Posts
Those posts are awesome.

Quote:
Originally Posted by sanlyn View Post
The image below is a 4:3 image from frame 9 of the original Studio.avi, unprocessed.
Is there any trick to picking a good frame for this sort of thing? I happened to have my full capture AVI open in AvsPmod with a blank script, at a frame from the studio segment, so I gave ColorYUV(analyze=true) a whirl and came up with max values of 230, 156 and 224.

When capturing, I first run the VCR's output (via GraphEdit) through an AVS that adds a wave form monitor above the image, in order to adjust the Hauppauge's proc amp settings and prevent clipping.

Obviously adjusting for a frame that doesn't have a maximum or minimum value for the entire footage isn't going to help (as shown by the fact that it's still clipping Y, unless that's just the black borders).


Quote:
Clipping means that...
I understand all that; it's been discussed a bit in my capturing threads here and VH, and I get it when it comes to luma. I just wasn't sure how it worked in chroma, in the sense of what 'more red' meant. Black<>white is an obvious scale, but without seeing those YUV histograms I was trying to figure out how that would look on a scale of red<>...less red (I wiki'd "YUV" and it showed a four-way graph, which was even more confusing)?

Even those appear to show it going through black between red and green...but there's not supposed to be any luma involvement there?

The RGB histograms...black on the left, pure color on the right?

I'm not sure I see any UV clipping there, although there's obviously RGB clipping, which is confusing.

Quote:
The corrected video will look brighter on TV than it does in a browser or a PC monitor. TV's have a different luma curve. This is one reason why you should be working with a properly calibrated monitor.
If a TV functions differently, is it possible to calibrate a computer monitor? Or is using a TV display necessary? How is calibration done?
Reply With Quote
  #48  
10-07-2016, 12:56 AM
koberulz koberulz is offline
Premium Member
 
Join Date: Feb 2016
Location: Perth, Australia
Posts: 453
Thanked 3 Times in 2 Posts
Quote:
Originally Posted by sanlyn View Post
ColorYUV(cont_v=-60)
The ColorYUV() function is a multi-faceted Avisynth built-in function. This statement lowers V (RED) channel contrast, which lowers saturation, and brings that channel's values to below y=235. Lowering it too much removes red from skin tones, turning the guy's face toward cyan. These colors are further tweaked in RGB in VirtualDub.
How does V have a Y value? This is the bit I found really confusing earlier: referring to colors by Y values, which...isn't anything to do with chroma as I understand it.

So, ColorYUV "off" is like adjusting 'hue' in the hue/saturation/intensity filter in VDub, yes?

I don't understand the difference between "gain" and "cont" as explained on the Wiki, nor do I really comprehend all the math that's going on there.

I was going to ask if there was a way to generate something like the ColorTools vectorscope in AviSynth, but Histogram(color2) seems to do the trick (and "color" answers my earlier question about the histograms). Is that the same thing? I assume the circle is the RGB-safe area?

Don't have time to go through the rest of it right now, except that on my first read-through I noticed you're only using ChromaShift on V, not C. There still seems to be some shift in yours (to the left of the box containing the NBL logo, there's a yellowish color), which disappears if you use C. Is there some reasoning behind using V only?
Reply With Quote
  #49  
10-07-2016, 07:01 AM
koberulz koberulz is offline
Premium Member
 
Join Date: Feb 2016
Location: Perth, Australia
Posts: 453
Thanked 3 Times in 2 Posts
Quote:
Originally Posted by sanlyn View Post
QTGMC(preset="medium",border=true,EZDenoise=10,denoiser="dfttest")
QTGMC is the most favored and cleanest deinterlacer. Its "medium" preset is a good balance of speed and cleaning power. border=true tells QTGMC not to break up borders or make them look as if they're "rattling". This parameter isn't always needed, but it was needed here because of jitter. EZDenoise=10 adds extra denoising and is fairly strong, but you could go stronger. denoiser="dfttest" specifies the denoiser to be used. Dfttest is a support plugin that comes with QTGMC's plugin package and can be used as a standalone plugin on its own.
AvsPmod says EZDenoise has values from 1.0-5.0?

What does Borders do that you wouldn't want it turned on all the time?

When I run QTGMC in AvsPmod, I get a weird pattern over the preview, as shown in the attachment.

What exactly am I looking at/for when I set ShowNoise to 'true'?

Quote:
FixChromaBleeding()
This old standby plugin works on over saturation and chroma bleed, especially blue and red. Progressive frames required.
I've always used 'bleeding' to refer to colors leaking into areas they shouldn't via chroma shift (back before I knew what chroma shift - or chroma for that matter - was). So what is it really? This seems to desaturate the jacket without actually affecting the vectorscope, so what's going on there? How does this affect the required cont_v setting, if at all?

While we're back on the ColorYUV() line, why does altering cont_v appear to rotate the content inside the vectorscope clockwise?


Attached Images
File Type: jpg Pattern.jpg (126.2 KB, 5 downloads)
Reply With Quote
  #50  
10-07-2016, 10:39 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
@koberulz, wow, enough questions to keep me working. But that's what the forum is about. Wish I were getting paid, and for overtime as well, LOL!

Quote:
Originally Posted by msgohan View Post
Nice post. I'm unclear what you mean by blanking, though.
I've also seen it called "blink". As one explanation put it, a display's AGC overreacts and lowers output considerably, then recovers. I saw that on a CRT once.

Quote:
Originally Posted by msgohan View Post
Quote:
In YUV you can also adjust chroma contrast without affecting color contrast, and vice cersa.
Typo?
Yep. My bad. That should be "In YUV you can also adjust chroma contrast without affecting luma contrast, and vice versa." Wish we could edit after an hour. Shucks.

Quote:
Originally Posted by koberulz View Post
Is there any trick to picking a good frame for this sort of thing? I happened to have my full capture AVI open in AvsPmod with a blank script, at a frame from the studio segment, so I gave ColorYUV(analyze=true) a whirl and came up with max values of 230, 156 and 224.
Flicker means on/off, up/down, darker/brighter. So you won't get the same numbers for every frame.

Quote:
Originally Posted by koberulz View Post
When capturing, I first run the VCR's output (via GraphEdit) through an AVS that adds a wave form monitor above the image, in order to adjust the Hauppauge's proc amp settings and prevent clipping.
The capture histogram is luma only.

Quote:
Originally Posted by koberulz View Post
Obviously adjusting for a frame that doesn't have a maximum or minimum value for the entire footage isn't going to help (as shown by the fact that it's still clipping Y, unless that's just the black borders).
You adjust by watching a problem scene or video for a few sampling minutes, then adjust for a worst case scenario. If you have wild fluctuations, you'll sometimes have to accept some overrun in one direction or other. With slight overrun into unsafe areas, you can adjust later with various filters to recover some lost dark or bright detail (SmoothAdjust is one way, some ColorYUV settings are another).
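To give a rough idea (the filename and numbers here are made up, not a recipe), a post-capture rescue of slight bright-end overrun might look like:

```avisynth
AviSource("capture.avi")                    # hypothetical filename
Levels(0, 1.0, 255, 16, 235, coring=false)  # squeeze overruns back toward 16-235
Histogram("levels")                         # confirm nothing still clips
```

SmoothLevels from the SmoothAdjust package can do a similar job with less banding, but the built-in Levels shows the idea.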

Looking for outlaw chroma is a problem with luma histograms. You can have a very bright picture but low-intensity chroma. Generally, bad luma readings are a guide to what colors are doing, but most people rely on experience. For real problem videos, some test sequences are captured first and viewed later. I've had to do that with a couple of bad tapes and hated every minute of it.

Remember, in YUV the luma data is stored separately from chroma. You can adjust Y without affecting U or V, and so forth. RGB stores luma and chroma as a single composite value, so there's no separate adjustment -- at least, not with the usual RGB controls. You can do it with advanced image controls such as those I mentioned, in both YUV and RGB, where some very sophisticated programming is employed.

Quote:
Originally Posted by koberulz View Post
I wiki'd "YUV" and it showed a four-way graph, which was even more confusing
Yeah, I see graphs sometimes that make me shudder -- like cubes, for instance. With those multi-sided cubes you have to stop and learn to think in 3D. Think of an LED whose brightness can be turned up or down but the color remains the same hue of blue. Try that in RGB and the blue changes from bright sky blue to black.

Quote:
Originally Posted by koberulz View Post
Even those appear to show it going through black between red and green...but there's not supposed to be any luma involvement there?
I'm not sure what you refer to there, but I think you mean some cubes/graphs that show luma and chroma together. Some don't.

Quote:
Originally Posted by koberulz View Post
The RGB histograms...black on the left, pure color on the right?
Almost all histograms have dark (low-number) values on the left and higher values on the right. From left to right, the height of the scale varies according to how many pixels share the same value. If you have high peaks in the middle, for example, it means you have a lot of middle-value content.

RGB histograms display in the same way (position from left to right indicates bright or dark values), but height indicates how many pixels have that same value. With RGB there is usually a "white" band at the top -- that white band indicates average brightness values for all the colors from dark to bright. Some histograms let you turn off the white band or view one band at a time.

Quote:
Originally Posted by koberulz View Post
I'm not sure I see any UV clipping there, although there's obviously RGB clipping, which is confusing.
It is, but remember that the original video has clipped values before you got it. That's why the scripts and filters applied more effort to chroma than luma.

Quote:
Originally Posted by koberulz View Post
If a TV functions differently, is it possible to calibrate a computer monitor? Or is using a TV display necessary? How is calibration done?
There are various ways to calibrate PC monitors and TV. The best way is to use a software/hardware package that includes a direct optical measuring device (a colorimeter or photometer of some sort) and the software to read the measurements and either make adjustments to the hardware (your graphics card) or tell you how to adjust some controls. If you've never seen it done it's really touchy to explain.

Calibration is a 50-gallon drum of material, too much to get into detail here. Many TV's have submenus for adjusting elements of each color channel. Many PC monitors do, too, but PC monitor adjustments tend to be more non-linear. In any case you need some way to measure the raw signal. Don't trust your eyes for that. There is one website that has some LCD test panels for adjusting things, but they're very limited. I guess you've also seen TV calibration discs -- they work up to a point, but a colorimeter is far more exact, and for PC's the software kits make graphics card adjustments that you could never do with the monitor controls.

You'll be surprised to learn that the first step in calibration is based on gray test patches. That's right, gray. The patches are various brightness levels of gray, from black through several shades of gray to bright white. Why? Because blacks, grays and whites consist of equal portions of all three colors. The RGB values are listed for each color: an R, a G, and a B value. Pure bright white is RGB 255, 255, 255. Middle gray is RGB 128, 128, 128. "Video black" is RGB 16, 16, 16. If you adjust so that each test patch displays with the correct values for each panel, all the other colors fall into place, because the output of all three colors is in balance from dark to bright. Try that manually with just your eyes and you'll soon be ready for therapy.

There are other measurements in the calibration process that adjust for hue purity for each color, which includes secondary and primary colors. Primary colors are red, green, and blue. Secondaries are magenta (red+blue), cyan (blue+green), yellow (green+red). Then there are tertiaries (browns, pinks, orange, etc.). The calibration kit adjusts for up to 42 or more test panels, then for luma and gamma curves, and on and on. You wouldn't want to try that manually.

Fortunately for PC's, the software works automatically for about 10 to 15 minutes while you make coffee or something. Thank goodness! For TV's it's manual and takes much longer.

A TV calibration disc can be had for about $35 USD or so and works well enough to get a TV into some kind of basic shape so that things look more real and sensible. You can't use those discs for a PC.

The TFT Central website in the UK (http://www.tftcentral.co.uk/) is a primary source for monitor info. All kinds of stuff there; you could spend a weekend on that site and not catch all of it. They test monitor calibration kits, and it's more than just a feature list of which buttons to click. Their review of the older i1 Display-2 calibration kit from XRite shows how they work and the results you can get, with lots of great pictures: http://www.tftcentral.co.uk/reviews/...e_display2.htm. There is a whole range of budget to more expensive kits from XRite, Spyder, and Pantone. I had to update to the i1 Display Pro (EODIS3) but managed to beat Amazon's price considerably, thank heaven.

Quote:
Originally Posted by koberulz View Post
How does V have a Y value? This is the bit I found really confusing earlier: referring to colors by Y values, which...isn't anything to do with chroma as I understand it
"V" is YUV chroma, not luma.

Quote:
Originally Posted by koberulz View Post
So, ColorYUV "off" is like adjusting 'hue' in the hue/saturation/intensity filter in VDub, yes?
Not really. A hue control changes two colors at the same time, so that if you adjust hue toward red you increase red and reduce blue and green at the same time, in both YUV and RGB. The ColorYUV "off" parameter adds the same value (positive or negative) to every pixel of Y, U, or V. If you view that operation in a histogram and tell ColorYUV to use an offset value of minus 10 for luma, you'll see the white luma bar move 10 points to the left (all pixels from dark to bright will be 10 points darker). For positive values it moves everything in luma equally 10 points toward the bright side. You can adjust the offset for each YUV channel independently.
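A minimal sketch of that offset behavior (the filename is made up for illustration):

```avisynth
AviSource("capture.avi")  # hypothetical filename
ColorYUV(off_y=-10)       # every luma pixel drops 10 units
Histogram("levels")       # the white luma bar sits 10 points further left
```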

Quote:
Originally Posted by koberulz View Post
I don't understand the difference between "gain" and "cont" as explained on the Wiki, nor do I really comprehend all the math that's going on there.
"gain" applies a multiplier or divisor to the range of dark and bright, such that darks are multiplied less than brights. It stretches values from the low values outward. So if, say, your histogram bar is only 3/4" long, apply some gain and you can stretch it to 1". A negative value for gain applies a progressive divisor, so it shrinks from higher to lower.

"Cont" is a contrast adjustment. It expands values from the middle in both directions, or with negative values shrinks them toward the middle. It seems to me to be more effective at the bright end. The contrast setting in Tweak is different, so consult the ColorYUV documentation on that.
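You can see the difference side by side (the values are arbitrary and the filename is made up):

```avisynth
src = AviSource("capture.avi")                   # hypothetical filename
a = src.ColorYUV(gain_y=30).Histogram("levels")  # stretches from the low end outward
b = src.ColorYUV(cont_y=30).Histogram("levels")  # expands from the middle both ways
StackVertical(a, b)
```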

Quote:
Originally Posted by koberulz View Post
I was going to ask if there was a way to generate something like the ColorTools vectorscope in AviSynth, but Histogram(color2) seems to do the trick (and "color" answers my earlier question about the histograms). Is that the same thing? I assume the circle is the RGB-safe area?
Yes. And Color2 is a chroma-only vectorscope. But it doesn't tell you what's happening in RGB, which will be different because brightness is involved in RGB.

Quote:
Originally Posted by koberulz View Post
I noticed you're only using ChromaShift on V, not C. There still seems to be some shift in yours (to the left of the box containing the NBL logo, there's a yellowish color), which disappears if you use C. Is there some reasoning behind using V only?
I had to make up my mind whether just moving red (V) or all colors (C) looked better. If I recall, using C made a mess in one area or another, but you might like C better.

I worked a bit on the other captures but it's slow going today. More later.
Reply With Quote
  #51  
10-07-2016, 10:41 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Quote:
Originally Posted by koberulz View Post
What does Borders do that you wouldn't want it turned on all the time?
It's an extra step, but you can leave it on all the time if you want. Borders makes QTGMC use PointResize for certain resize operations. You can open the QTGMC script with a formatted text editor like WordPad and read the code (don't dare "save" it when you close!!). It's really an avs script, and a darn big one.
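For reference, the call is just the one parameter (filename made up):

```avisynth
AviSource("capture.avi")             # hypothetical filename
AssumeTFF()                          # match your capture's field order
QTGMC(preset="medium", border=true)  # border=true protects the frame edges
```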

Quote:
Originally Posted by koberulz View Post
AvsPmod says EZDenoise has values from 1.0-5.0?
When I run QTGMC in AvsPmod, I get a weird pattern over the preview, as shown in the attachment.
Some of the several reasons why I don't use AvsPmod.

Quote:
Originally Posted by koberulz View Post
What exactly am I looking at/for when I set ShowNoise to 'true'?
Quote:
Originally Posted by koberulz View Post
I've always used 'bleeding' to refer to colors leaking into areas they shouldn't via chroma shift (back before I knew what chroma shift - or chroma for that matter - was). So what is it really?
It can also mean blooming or glowing, not necessarily shift or displacement.

Quote:
Originally Posted by koberulz View Post
This seems to desaturate the jacket without actually affecting the vectorscope, so what's going on there? How does this affect the required cont_v setting, if at all?
The red is still red, so it might not change at all. Remember, brightness of a color isn't part of the YUV vectorscope.

Quote:
Originally Posted by koberulz View Post
While we're back on the ColorYUV() line, why does altering cont_v appear to rotate the content inside the vectorscope clockwise?
V contrast affects the shape of the V channel in vectorscopes and histograms. YUV chroma is often pictured as a circle rather than a square or cube, which is how a YUV vectorscope displays it. The top of the circle is one color, the lower left is another, the lower right is another. By adjusting a chroma contrast you're effectively moving different amounts of color around the wheel. In RGB the effect is completely different.
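An easy way to watch it happen (the value is arbitrary and the filename is made up):

```avisynth
AviSource("capture.avi")  # hypothetical filename
ColorYUV(cont_v=-40)      # step this up and down and re-check the 'scope
Histogram("color2")       # chroma-only vectorscope
```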
Reply With Quote
  #52  
10-07-2016, 11:24 AM
koberulz koberulz is offline
Premium Member
 
Join Date: Feb 2016
Location: Perth, Australia
Posts: 453
Thanked 3 Times in 2 Posts
Quote:
Originally Posted by sanlyn View Post
Flicker means on/off, up/down, darker/brighter. So you won't get the same numbers for every frame.
Right, but with analyze on it seems like Y is legal 99% of the time. So I'm guessing you picked a frame where the colour fluctuations kicked a couple of pixels up high enough to clip (I did find one that was at 252, I think, but its loose max was still legal).

Quote:
The capture histogram is luma only.
I'm not using VirtualDub's capture histogram, I'm using AviSynth's Histogram() function. Which is still luma only I know, but I wanted to clarify. Is avoiding chroma clipping a concern at the capture stage? I've only ever worried about luma clipping previously.

Quote:
You adjust by watching a problem scene or video for a few sampling minutes, then adjust for a worst case scenario.
In post, right, sure. But I'm talking about setting my levels when capturing. Often I won't have a scene that's clearly supposed to be completely black or completely white, but it can be hard to tell when it's not a graphics overlay. It can be hard to know what to look for.

Quote:
I'm not sure what you refer to there, but I think mean some cubes/graphs that show luma and chroma together. Some don't.
The AviSynth YUV histograms. The second is a gradient from yellow to blue, with black in the middle. The third is a gradient from green to red, with black in the middle.

Quote:
It is, but remember that the original video has clipped values before you got it. That's why the scripts and filters applied more effort to chroma than luma.
But if it has clipped values, why is nothing even extending into the 'unsafe' areas on the YUV histograms?

Quote:
"V" is YUV chroma, not luma.
Right, which is why "lowers V (RED) channel contrast, which...brings that channel's values to below y=235" is confusing. How can the V channel have a value below, above, or in any other way relative to y=235?

Quote:
Yes. And Color2 is a chroma-only vectorscope. But it doesn't tell you what's happening in RGB, which will be different because brightness is involved in RGB.
...what?

I know that's a vague question, but I'm not really sure I even understand enough to ask a more specific one.

Quote:
Originally Posted by sanlyn View Post
Some of the several reasons why I don't use AvsPmod.
Being able to see what I'm doing is tremendously helpful, although I have time to take a nap every time I add a "MergeChroma".

Quote:
The red is still red, so it might not change at all. Remember, brightness of a color isn't part of the YUV vectorscope.
The actual answer, which I just figured out and came here to mention, is that I was adding the vectorscope before the FixChromaBleeding. Adding it after moves it significantly, to the point where it's inside the circle even without any v_off at all. So I'm not sure what that is doing?
Reply With Quote
  #53  
10-07-2016, 11:43 AM
koberulz koberulz is offline
Premium Member
 
Join Date: Feb 2016
Location: Perth, Australia
Posts: 453
Thanked 3 Times in 2 Posts
Quote:
Originally Posted by koberulz View Post
What exactly am I looking at/for when I set ShowNoise to 'true'?
You skipped this question.

Quote:
Originally Posted by sanlyn View Post
MergeChroma(MCTemporalDenoise(settings="very High"))
MergeChroma() is a built-in Avisynth function. It tells Avisynth to merge only the results of filtered chroma with the luma from the preceding steps. Thus, the named filter inside the parentheses effectively works only to stabilize chroma, with less impact on luma sharpness. MCTemporalDenoise (MCTD for short) is a heavy-hitter plugin that runs slower at its strongest "very high" settings. It also cleans up a lot of chroma noise, flutter, and smear.
As mentioned above, this just about kills AvsPmod.

Is there a way to turn the luma off and just look at chroma to see what the denoiser and later sharpener are doing? Even copying the frames to the clipboard, pasting them into a Photoshop document and toggling the top layer I could barely see a difference.

Quote:
SmoothUV()
This is often used as a de-rainbow (chroma blotch) cleaner.

LimitedSharpenFaster(edgemode=2)
Limited sharpening refers to this sharpener's many configuration parameters (I only used one of them) to sharpen without creating halos or clay-face effects. Edgemode=2 limits sharpening to areas away from edges. Strong edge sharpening on soft video like this often just makes it look phoney.
I say I could barely see a difference with the last two, with these two I couldn't see a difference at all. Outside of that, even speaking theoretically I don't understand what 'non-edge sharpening' could be. When I think of sharpening, I think of making edges...edgier.

Is there a thought process behind which filters go in which order, which filters go in which AVS, etc.?
Reply With Quote
  #54  
10-07-2016, 12:56 PM
koberulz koberulz is offline
Premium Member
 
Join Date: Feb 2016
Location: Perth, Australia
Posts: 453
Thanked 3 Times in 2 Posts
Quote:
Originally Posted by sanlyn View Post
The total time taken to run these two scripts and the VirtualDub filters for this short clip was less than 1 minute.
It takes me almost that long just to open the AVS in VDub. Once I get it open, hitting 'play' gets me about one frame every 20-30 seconds.

Kind of makes it difficult to figure out settings and such.

EDIT: Just to test, I threw a number in the 'go to frame' box. It was 44 seconds before it showed up.

EDIT 2: ColorTools seems to cause VirtualDub to crash more often than not. No idea why, seems to happen at random. Either when applying it, or changing it, or opening 'Filters' with it applied...

Last edited by koberulz; 10-07-2016 at 01:49 PM.
Reply With Quote
  #55  
10-07-2016, 02:17 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Hmm, I see I pasted the whole post into Notepad, but somehow left out my answer to that one. Too fast and loose with the delete key, people.

With "ShowNoise" I imagine you would see some noise (that likely refers to chroma noise, which isn't always so easy to see). I've never used that option. Normally it's there when running QTGMC in lossless modes, which is done to return some or all of the original noise. Camcorder Color Denoise has that option too, but I think it shows a lot of garbage that isn't even noise, so I gave up on that one.

It can be troublesome to remember, concerning those 'scopes and histograms, that YUV and RGB histograms behave differently. RGB involves the brightness factor, so increasing one color increases brightness everywhere. Advanced software can compensate for that automatically (which you can disable if you wish). With less sophisticated controls you have to remember what's happening.

SmoothUV is a chroma noise cleaner, and at the point it's called it's likely cleaning up some subtle garbage that earlier filters missed. There was some pink noise in the background and cyan stuff in skin tones that it reduced. To some extent it also smooths color flicker a bit (with these videos, every little bit helps). LimitedSharpenFaster was used in a subtle mode to avoid halos and other edge artifacts. It might not sharpen the edge of a lapel, but it would sharpen "inner" detail between edges if anything exists. If you want to see what it would do at default, just remove the edgemode setting.

Filter priority and sequence: obviously if you have to work with deinterlacing or SeparateFields, you have to do that ahead of other filters. I prefer to use ChromaShift after using filters that fix over-saturation or FixChromaBleeding. Sharpeners are always used after denoising, not before, or you just sharpen noise; usually sharpeners would be among the last used. Reinterlacing or Weave() is usually at the end of the procedure.
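As a skeleton of that order (the filename and settings are illustrative only, not tuned for your tape):

```avisynth
AviSource("capture.avi")   # hypothetical filename
AssumeTFF()
QTGMC(preset="medium")     # deinterlace first
FixChromaBleeding()        # chroma repairs on progressive frames
ChromaShift(c=2)           # shift after the bleed/saturation fixes
MergeChroma(MCTemporalDenoise(settings="medium"))  # denoise before sharpening
LimitedSharpenFaster(edgemode=2)  # sharpeners near the end
```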

My scripts didn't run as slowly as you indicate. You can make it faster by commenting out MCTemporalDenoise or other slow performers temporarily. One thing about MCTD: the more noise, the slower it goes. When you're developing or testing a script you can always comment out statements that don't matter for what you're testing. If you want to end a script early without proceeding to the end, add a "return last" statement where you want it. This stops the remaining statements from running and just outputs what was done up to the "return last" line. I also find, particularly with XP, that restarting and re-running scripts repeatedly for long periods will slow things now and then, so I just do a fast reboot.
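For example (filename made up):

```avisynth
AviSource("capture.avi")  # hypothetical filename
AssumeTFF()
QTGMC(preset="medium")
return last               # stop here while testing; nothing below runs
# MCTemporalDenoise(settings="very high")  # commented out while tuning
# LimitedSharpenFaster(edgemode=2)
```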

And sometimes I run scripts that are so confounded slow and complicated, I just split them into two. Glad to say this doesn't happen often.

It's seldom that I capture a frame and paste it into a graphics app to check for differences. If the differences are that subtle, it's hard to see no matter what you do. In VirtualDub it can be done by changing a script, then pressing F2. The script runs again and displays the current frame.
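When differences are that subtle, it can also help to look at the chroma by itself. The built-in UToY() and VToY() promote a chroma plane to luma so you can actually see it (sketch only, filename made up):

```avisynth
src = AviSource("capture.avi")  # hypothetical filename
u = src.UToY().Subtitle("U")    # U plane shown as a grayscale image
v = src.VToY().Subtitle("V")    # V plane shown as a grayscale image
StackHorizontal(u, v)           # toggle filters on/off and compare these views
```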
Reply With Quote
  #56  
10-07-2016, 02:29 PM
koberulz koberulz is offline
Premium Member
 
Join Date: Feb 2016
Location: Perth, Australia
Posts: 453
Thanked 3 Times in 2 Posts
Well, with a lot of the changes you're making, the differences apparently are that subtle because I'm barely able to see them even then. Plus I'm still at the 'learning what things do' stage, so it helps with that. I don't necessarily know what effect I'm looking for, whereas you do.

Regarding 'show noise', the Wiki suggests using that to determine the value to put in 'ezdenoise'.

Currently encoding your first AVS, and it's running at a steady 0.04fps, estimating 18 hours to encode one and a half minutes of video.

I look forward to encoding the two-hour game...
Reply With Quote
  #57  
10-07-2016, 04:09 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Quote:
Originally Posted by koberulz View Post
Well, with a lot of the changes you're making, the differences apparently are that subtle because I'm barely able to see them even then. Plus I'm still at the 'learning what things do' stage, so it helps with that. I don't necessarily know what effect I'm looking for, whereas you do.
None of these scripts are bible. You can always make changes.

Quote:
Originally Posted by koberulz View Post
Regarding 'show noise', the Wiki suggests using that to determine the value to put in 'ezdenoise'.
Tell the truth, I don't see much of anything with "ShowNoise" either. I've had very little use for such functions. Certainly I can tell the difference if I see noise without the filter, then activate the filter and restart the script with F2. Dfttest is a temporal filter, based on multiple frames, so just a single frame won't show you much. It addresses some of the flicker grunge and floating tape noise. You can see it working in the blue background of the "Intro" avi, which has some grayish horizontal rain, like "drizzle".

Quote:
Originally Posted by koberulz View Post
Currently encoding your first AVS, and it's running at a steady 0.04fps, estimating 18 hours to encode one and a half minutes of video.

I look forward to encoding the two-hour game...
Then something's amiss. Script #1 doesn't run that slow at all. QTGMC at "medium" is fairly fast. Script #2 with SpotRemoverMC3 runs at about 1fps on my Intel i5. Comment out MCTemporalDenoise and see how it goes.
Reply With Quote
  #58  
10-07-2016, 05:48 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Started on Game1 and Game2 earlier today. The more I work with it, the more I keep thinking, brother, this is one strange video! The "Game" vids require different filter settings and other changes, so don't get in a hurry. The flicker is still there, but might be easier to handle on these two vids. But the levels changes beat anything I've seen in a sports broadcast. Holy smokes!

Did the stadium have lighting problems during this game? Either that, or the broadcast crew had equipment problems. I'll post more about this later, but these can be fixed.

And speaking of equipment problems, I think some keys on my keyboard are failing. The typos are getting silly.

Last edited by sanlyn; 10-07-2016 at 06:04 PM.
Reply With Quote
  #59  
10-08-2016, 02:42 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
I used a similar but slightly shorter script for the Intro, Game1 and Game2 samples, eliminating a few plugins. I joined all three samples into one video, attached. An odd assortment of samples, every camera shot looks as if it's from a different project. Brightness levels and color balance had no continuity, which was a hassle to deal with. Par for the course for VHS.

The attached mp4 has 5 camera shots in all. The last shot has a bad case of Hanover bars, which I left untreated so that you would know what they look like.


Attached Files
File Type: mp4 Intro_Game1_Game2_TrialRun.mp4 (8.10 MB, 5 downloads)
Reply With Quote
  #60  
10-08-2016, 03:15 AM
lordsmurf's Avatar
lordsmurf lordsmurf is online now
Site Staff | Video
 
Join Date: Dec 2002
Posts: 13,503
Thanked 2,449 Times in 2,081 Posts
Quote:
Originally Posted by koberulz View Post
Says the guy who's been posting sample restorations with a script nobody else can use.
Being called a tease feels icky. Yuck.
Why? I'm suddenly reminded of high school and college exes. Those girls were teases!

You'll find both scripts attached.
1 was basic stuff, easy.
2 can be unstable. I've removed stabilizing attempts.

After those are used, this project is honestly just a matter of color repair. And that's not really hard, either.
These almost entirely resolve color NR needs.

Don't try to cram these into a single script. This video needs at least 3 passes
- 1 basic chroma NR
- 2 advanced NR
- 3 color levels repairs


Attached Files
File Type: avs studio1a.avs (713 Bytes, 15 downloads)
File Type: avs studio1b.avs (4.0 KB, 14 downloads)

- Did my advice help you? Then become a Premium Member and support this site.
- For sale in the marketplace: TBCs, workflows, capture cards, VCRs
Reply With Quote