Chroma bleed problem, left and right? - digitalFAQ Forum
  #1  
01-06-2019, 06:51 AM
Spotty Spotty is offline
Premium Member
 
Join Date: Dec 2018
Posts: 11
Thanked 0 Times in 0 Posts
The sun seems to be making bright reds go well past the red, see attached image.
I thought it might be chroma shift, but when shifted right, there is still red spilling over both sides.
It seems to be an intensity thing as when the sun is on the right it is also predominantly on the right.
What can be done with this?

Capture setup used
vhs-c tape/vcr/composite/DMR ES15/s-video/VC500/virtualdub/huffyuv


Attached Images
File Type: png Red in Sun.png (826.7 KB, 24 downloads)
  #2  
01-09-2019, 10:59 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,385
Thanked 1,059 Times in 884 Posts
Chroma bleed is a common VHS problem, especially with entry-level VCR's. It looks worse when color is oversaturated, as it is in your posted image. There are techniques you can use with Avisynth to clear up almost all of it, but you have to work in the original YUV captured colorspace using a video sample that hasn't been converted to RGB. There's no way to test or demonstrate the techniques using a still image.


If you captured to YUV (YV12 or YUY2), create a short few seconds of edited video of that same scene in VirtualDub and save it using "direct stream copy" to prevent RGB conversion.
  #3  
01-09-2019, 06:28 PM
Spotty Spotty is offline
Premium Member
 
Join Date: Dec 2018
Posts: 11
Thanked 0 Times in 0 Posts
Here's part of the original video clip as requested.
It was recorded in PAL, and I checked that it didn't exceed the top end of the histogram.
I have since tried to recapture with different proc amp settings (in the VC500 via virtualdub) and it doesn't change the length of the bleed (the saturation setting only changes how colored it is).
Thanks


Attached Files
File Type: avi Red bleed.avi (8.86 MB, 8 downloads)
  #4  
01-09-2019, 10:06 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,385
Thanked 1,059 Times in 884 Posts
Thank you for the sample. Getting late here, but will prepare some notes tomorrow.
Delay due to weather that knocked down a power line here in the mountains for a few hours. I could not figure out what version of huffyuv you used, as Avisynth and VirtualDub refused to read the avi - but I finally decoded it with ffdshow's libavcodec version of huffyuv and recompressed it to Lagarith YUY2. Will post a fixed version tomorrow.
Thanks again.
  #5  
01-09-2019, 10:32 PM
Spotty Spotty is offline
Premium Member
 
Join Date: Dec 2018
Posts: 11
Thanked 0 Times in 0 Posts
I had downloaded and installed a VirtualDub 1.9.11 + filter set uploaded by lordsmurf for the initial captures, but have since come to realise it is very old, so I am now using the newer VirtualDub2 (2018) for processing - but still the same huffyuv 2.1.1 patch 0.2.2 (which I think I downloaded from the same place, but I'm not sure).
The older VirtualDub I was using (1.9.11) seems to predate VirtualDub filters accepting anything but RGB.
Does this mean during capture (assuming no filters) that the capture data (YUY2) is converted to RGB and then back to YUY2 by huffyuv? If so, I need to do my captures again.
Just to check, I used the new VirtualDub2 (2018) to re-capture this scene and there was no noticeable difference.
(NB: all video files supplied were captured with the original setup - VDub 1.9.11, huffyuv 2.1.1 patch 0.2.2, no filters.)
thanks
  #6  
01-09-2019, 10:37 PM
lordsmurf lordsmurf is offline
Site Staff | Video
 
Join Date: Dec 2002
Posts: 8,214
Thanked 1,353 Times in 1,193 Posts
VirtualDub 1.10.x and VirtualDub2 (a fork of 1.10.x) are not suggested for capture; both have known issues.

VirtualDub2 also has filter issues. For example, the resize filter doesn't behave correctly and can skew and corrupt the image.

VirtualDub 1.9.x only for capture.
VirtualDub 1.10.x is fine for processing.
VirtualDub2 depends on the filter used; watch for oddities.

The following users thank lordsmurf for this useful post: Spotty (01-09-2019)
  #7  
01-09-2019, 10:39 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,385
Thanked 1,059 Times in 884 Posts
Quote:
Originally Posted by Spotty View Post
Does this mean during capture (assuming no filters) that the capture data (yuy2) is converted to RGB then back to yuy2 by Huffyuv?
No. If you specify YUY2 during capture, VDub captures to YUY2 and delivers YUY2 to huffyuv, which compresses in YUY2. There is no RGB involved.

VirtualDub2 just has strange behavior in many respects. Anyway, I almost always make intermediate working files using lossless Lagarith, although I still capture to huffyuv using XP and VDub 1.9x.
The following users thank sanlyn for this useful post: Spotty (01-09-2019)
  #8  
01-12-2019, 12:05 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,385
Thanked 1,059 Times in 884 Posts
Again, sorry for the delays. This will be the last time I move to another house. And even if I did it again, it would not be during the rain-and-lightning season, wherever the new house might possibly be located.

Bright levels in the sample are well under control, but darks are crushed. This is par for VHS, where levels vary wildly at times, making it impossible to always get levels just right. A minor run-over into the red at the extremes can usually be recovered; it's when the red data starts climbing the left or right walls that you're really in trouble. Fortunately, this sample isn't one of those cases. I'll demonstrate how you can identify crushed darks by eyeball and in histograms, and how to fix them when possible.

borderless original and YUV levels:


The image above is an original frame from your avi sample, with the borders removed so that they don't affect the histograms. In the frame image, look at the shadows under the automobiles and in the tree and water in the background. The shadows are clumps of black with little or no detail that don't look natural. Notice also how the high-contrast light is shading and dimming facial details; objects like black mouths and black eyeballs are relatively easy to spot by eye. These grim shadows are signs of crushed blacks, which are dark data that lie below Y=16 (or that are "in the red" on the left side of the capture histogram); when converted from YUV 16-235 to RGB 0-255 during display, crushed blacks are destroyed and become zero-black. In the above image, black details that lie in the unsafe area below Y=16 are indicated by the RED-ORANGE arrow in the top section (white band) of the YUV levels histogram. Usually the only "safe" objects lying in that zone against the left wall would be zero-black borders; but the borders were removed from the frame before the histogram was activated, so the only data that will appear in that unsafe area are clipped darks.
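If you want to run the same check on your own captures, here is a minimal sketch of a levels-check script. The filename and crop values are illustrative, not taken from this thread - substitute your own:

```avisynth
# Load the capture and remove the black borders first,
# so they don't pile up against the left wall of the histogram.
AviSource("your_capture.avi")     # hypothetical path
Crop(10,4,-12,-6)                 # illustrative border values; use your own (even numbers only)
ConvertToYV12(interlaced=true)    # Histogram's "levels" mode needs planar YUV
Histogram(mode="levels")          # darks below Y=16 land in the unsafe zone at the left wall
```

Open the script in VirtualDub and any data pressed against the left wall of the Y band (other than borders, which we just cropped away) is crushed darks.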

Below, a before-and-after crop of an original unaltered frame (top image), and the same frame after levels correction (bottom image) but before denoising or other work. Facial and shadow details are at least partially recovered and don't look so grimy, and overall the background and shadows look more convincing despite the high contrast light.

before and after levels correction:


Another problem that's common with consumer-camera home videos is angular distortion and line twitter during motion, whether by objects in motion or by camera motion. Notice the distorted lines in the cars and the edges of their shadows and the garage door, as seen in the 2X blowup in the image below:

bad interlacing:


During video cleanup it's possible to partially calm these distortions, but they aren't eliminated entirely because of the way most VHS consumer camera shutters and circuitry worked. They were designed for the CRT era, not for modern LCD displays, which inherently don't handle motion as well as analog displays or film to begin with. So sometimes you have to play digital tricks to make digital motion look as clean as it did on analog displays, which had a natural film-rate "blink" during display.

I made two versions of the filtered sample. The attached Red_fix_25i.mpg is interlaced and encoded to DVD authoring spec. The changing shapes of vertical lines from frame to frame, and the resulting aliasing and twitter effects, are pretty easy to see. These defects are entirely expected, given the way the source was created in-camera. (But we've seen worse, which I'll demonstrate below.) Meanwhile, the attached Red_fix_25p.mpg is progressive video encoded as interlaced for DVD or for standard-def BluRay. Actually, nowadays many set-top players would play DVD/SD-BluRay as interlaced anyway, even if it's encoded as progressive. After the Avisynth script below I'll show how the interlaced and progressive outputs were generated.

Here is the script I used for the interlaced version:

Code:
AviSource("D:\forum\faq\spotty\Red bleed.avi")

#--- Recover shadow detail from crushed darks ---#
ColorYUV(cont_y=-20,off_y=4)
Levels(20,1.0,255,16,255,dither=true,coring=false)

#--- YUY2 routine to reduce sharpened edge halo --#
AssumeTFF()
SeparateFields()
FixVHSOversharp(20,16,12)
FixVHSOversharpL(20,12,8)
Weave()

#--- YV12 routine to recover some crushed shadow detail. ---#
ConvertToYV12(interlaced=true)
ContrastMask(enhance=5.0)

#--- routine to reduce chroma bleeding & denoise ---#
#--------    (requires clean deinterlace)   --------#
QTGMC(preset="medium",EZDenoise=4,denoiser="dfttest",ChromaNoise=true,\
    ChromaMotion=true,DenoiseMC=true,GrainRestore=0.3,border=true)
vInverse2()
RemoveDirtMC(30,false)

#--- calm red saturation, then warp-sharp U and V chroma only --#
FixChromaBleeding()
ChromaShift(c=2,L=-2)
U = UtoY()
    U = U.BilinearResize(U.width/2, U.height).aWarpSharp(depth=30).\
     nnedi3_rpow2(4, cshift="Spline64Resize", fwidth=U.width, fheight=U.height)

V = VtoY()
    V = V.BilinearResize(V.width/2, V.height).aWarpSharp(depth=30).\
     nnedi3_rpow2(4, cshift="Spline64Resize", fwidth=V.width, fheight=V.height)

#--- Merge original luma with sharpened chroma, ---#
#---  then one more round of chroma-only warp.  ---#
YtoUV(U, V, last)
MergeChroma(aWarpSharp2(depth=30).aWarpSharp2(depth=10))

#--- re-interlace, and replace dirty borders  ---#
SeparateFields().SelectEvery(4,0,3).Weave()
Crop(10,0,-12,-10).AddBorders(10,4,12,6)
return last
The progressive version uses exactly the same script, except for changing part of this statement:

Code:
QTGMC(preset="medium",EZDenoise=4,denoiser="dfttest",ChromaNoise=true,\
    ChromaMotion=true,DenoiseMC=true,GrainRestore=0.3,border=true)
to this:
Code:
QTGMC(preset="medium",EZDenoise=4,denoiser="dfttest",ChromaNoise=true,\
    ChromaMotion=true,DenoiseMC=true,GrainRestore=0.3,border=true,\
    FPSdivisor=2)
and, finally, change the last four statements from this:
Code:
#--- re-interlace, replace dirty borders, and center frame vertically ---#
SeparateFields().SelectEvery(4,0,3).Weave()
Crop(10,0,-12,-10).AddBorders(10,4,12,6)
ConvertToRGB32(interlaced=true,matrix="Rec601")
return last
to this:
Code:
#--- replace dirty borders and center frame vertically ---#
Crop(10,0,-12,-10).AddBorders(10,4,12,6)
ConvertToRGB32(interlaced=false,matrix="Rec601")
return last
which results in progressive video by discarding every second interlaced field. Caution: many NLE editors and some capture cards flatly lie to you when they advertise the "great" quality they output when they deinterlace and discard alternate frames. That's pure balderdash laced with clueless B.S. Their output looks like crap. Avisynth's QTGMC is the best quality you can get for this sort of thing; others simply can't compete. So if you're going to deinterlace in this way, insist on QTGMC.

The last line converts to RGB32 because two VirtualDub RGB filters were applied to Avisynth's output before saving the new file with lossless compression (I used Lagarith, with final output to YV12, because the next step was MPEG encoding). I'll explain the details tomorrow. The two VDub filters I used were ColorCamcorderDenoise and ColorMill.

Also, keep in mind that this method throws away 50% of the original temporal resolution. Motion isn't as smooth, especially in action video and during camera pans. The original temporal resolution for your sample is played as 50 interlaced image fields per second; but after field decimation that rate is reduced to 25 image frames per second. Any way you look at it, it's a choice between poorly interlaced motion or a compromise that's not so annoying to watch.

You can also deinterlace in such a way as to keep all the fields and discard none of them, giving you 50p double-rate deinterlacing. This doesn't always solve the problem so well; the lines still change shape between frames, so a lot of those distortion antics persist during playback -- and some players don't always play nice with double-rate video.
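For reference, a minimal double-rate sketch, assuming the same PAL source; the preset is just an example, and nothing here is taken from the posted scripts:

```avisynth
AviSource("Red bleed.avi")
AssumeTFF()
QTGMC(preset="medium")   # default FPSDivisor=1: 25i in, 50p out, all fields kept
# Encode the result as progressive at 50 fps; no re-interlacing step needed.
```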

Tomorrow I'll explain the line-by-line detail of those scripts and will post links for the filters.

As I said earlier, we've seen worse. One example is a video that displays various parts of the interlaced frames with truly annoying and persistent distortion -- so persistent that, whether throwing away half the frames or keeping them, the distortion remained the same or looked worse! This is how the original distorted edges looked after they were deinterlaced (below):

edge noise -- ripples, stair-stepping, mosquito noise, and sawtooth edges change rhythmically over several frames:


Here, deinterlacing alone didn't help at all, and the distortions changed shape every few frames. I had to deinterlace and add something more dramatic -- the FixRipsP2 median filter. FixRipsP2 averages the motion in 5 frames and tries to guess at corrections. It's a dangerous filter (and verrry slowwww), but in this case it seemed to work OK. It also removed some grainy tape noise. Here is the posted output encoded as progressive mp4 (which can't be used for DVD or BluRay). Notice how the big characters in the right-hand half of the frames and the left-hand seat contours have been smoothed and clarified:

test sample 1a2.mp4

These samples are from the thread titled What are first steps to restoring captured AVI? (with samples).
The scripts used for repair and restoration are explained in post #2
and post #6.

I'll post more tomorrow after I clean up the chaotic mess of notes I made for the .avs scripts.


Attached Images
File Type: jpg bordess original and YUV levels.jpg (101.8 KB, 33 downloads)
File Type: jpg before and after levels correction.jpg (152.0 KB, 33 downloads)
File Type: png bad interlacing.png (468.7 KB, 32 downloads)
Attached Files
File Type: mpg Red_fix_25i.mpg (1.81 MB, 2 downloads)
File Type: mpg Red_fix_25p.mpg (1.82 MB, 3 downloads)
  #9  
01-12-2019, 11:44 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,385
Thanked 1,059 Times in 884 Posts
Here are some line-by-line notes for the Avisynth script in post #8, in order of the lines as they appear.

AviSource() is a versatile Avisynth function that opens, decompresses and decodes AVI containers. It scans your system for the video and audio codecs associated with the video. ColorYUV(cont_y=-20,off_y=4) attempts to recover some shadow detail, reducing contrast to keep brights under control while the "off_y" (offset) subfunction raises all pixel values just enough to pull the darkest details out of the unsafe zone. Later in the script a contrast mask will be used to brighten the darkest data, so the Levels() command is used in anticipation of that filter and adjusts the low gamma values to give a better balance between darks and lower midtones (see the ContrastMask routine later in the script).
AviSource(): http://avisynth.nl/index.php/AviSource
ColorYUV(): http://avisynth.nl/index.php/ColorYUV
Levels(): http://avisynth.nl/index.php/Levels
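A quick way to eyeball what those two commands do is to stack the untouched and corrected clips side by side. The path below is illustrative; the filter values are the ones from the script in post #8:

```avisynth
src   = AviSource("Red bleed.avi")   # hypothetical path to the capture
fixed = src.ColorYUV(cont_y=-20, off_y=4).\
        Levels(20, 1.0, 255, 16, 255, dither=true, coring=false)
StackHorizontal(src, fixed)          # before on the left, after on the right
```

Step through a few shadowy frames and you should see detail reappear under the cars and in the trees on the right-hand (corrected) side.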

AssumeTFF() tells Avisynth that the field order of the video is Top Field First. This overrides Avisynth's default, which is Bottom Field First (BFF). SeparateFields() extracts the two interlaced fields from each interlaced frame and builds a stream of all of the half-height fields in the video. This is a form of deinterlacing that makes no changes to the fields themselves, and it gives the progressive stream required by the FixVHSOversharp.dll filter, which reduces oversharpening halos on right (FixVHSOversharp) and left (FixVHSOversharpL) edges. Then the Weave() command reassembles all of the half-height fields back into their original interlaced state.
SeparateFields(): http://avisynth.nl/index.php/SeparateFields
fixvhsoversharp_25_dll_20030723.zip :
http://www.digitalfaq.com/forum/atta...ll_20030723zip
Weave(): http://avisynth.nl/index.php/Weave

ConvertToYV12(interlaced=true) converts the video from YUY2 to YV12 for use with the filters that follow. Note that colorspace conversions in Avisynth are done with great precision, but they require you to explicitly state the video's interlaced status. Most editors don't convert colorspaces as cleanly as Avisynth; in fact some editors, like Adobe Pro, make a mess of some of it. The ContrastMask.avs plugin is used to bring up shadow detail that's usually dimmed in high-contrast lighting. Its strength is adjusted with the "enhance" parameter, which defaults to 10 but is set here to 5. This was adjusted together with the parameters in the Levels() command described earlier. Sometimes ContrastMask can overly brighten lower midrange gamma, so Levels() was adjusted earlier to make a slight correction there, lowering brightness in upper darks and shadow tones from about Y=40 downward. This was adjusted by eyesight using a calibrated monitor.
Convert() and basic colorspaces: http://avisynth.nl/index.php/Convert
ContrastMask.avs: http://www.digitalfaq.com/forum/atta...ontrastmaskavs.
ContrastMask requires MaskTools2, including Microsoft VisualC++ runtime files. These are included in the QTGMC.zip file described below when discussing the QTGMC plugin.

Contrast Mask also requires the VariableBlur plugin: http://www.digitalfaq.com/forum/atta...bleblur_070zip. VariableBlur itself requires the VisualC++ 2010 runtime, links for which are included in the QTGMC plugin package, below.

QTGMC(preset="medium",EZDenoise=4,denoiser="dfttest",ChromaNoise=true,\
ChromaMotion=true,DenoiseMC=true,GrainRestore=0.3,border=true)
QTGMC deinterlaces the video, producing two full-sized progressive frames from every two interlaced half-height fields. Along the way I enhanced some of QTGMC's denoising parameters. A full deinterlace, rather than simply separating interlaced fields, is required by some of the operations that follow. The individual QTGMC parameters used here beyond their default values are all described in QTGMC's documentation, which comes with the package. The .zip contains subfolders and brief but concise Read-Me instruction files, plus all of the support files, most of which are also popular as standalone filters in their own right. The latest .zip package for QTGMC version 3.32 is http://www.digitalfaq.com/forum/atta...kagenov2017zip.

vInverse2() is used to ease excessive interlace combing effects. http://avisynth.nl/index.php/Vinverse. It requires the 2012 VisualC++ runtime, which is installed with the QTGMC package.

RemoveDirtMC(30,false) removes a few spots and the VHS tape noise that is often referred to as floating grunge. It's used here at a moderate strength of 30. You can use its strength at very high values up to 100, if you're willing to accept a few motion distortions and to have small moving objects disappear for a frame or two. Its usual operating strength is 10 to 40. The "false" parameter is required unless you're working in pure grayscale, in which case "true" would be specified.
RemoveDirtMC.avs is at http://www.digitalfaq.com/forum/atta...emovedirtmcavs. If you are using Win7 through Win10, you'll need the 2008 runtimes, which are not included with Windows versions after XP. The runtimes are explained and linked in the thread Fix for problems running Avisynth's RemoveDirtMC.

RemoveDirtMC requires the original RemoveDirt_v09.zip (http://www.digitalfaq.com/forum/atta...ovedirt_v09zip).

It also requires RemoveGrain_v1_0_files.zip (http://www.digitalfaq.com/forum/atta..._v1_0_fileszip) or the RgTools plugin. All of these RemoveGrain support files are included with the QTGMC package.

We come now to the routines that address chroma bleed directly. FixChromaBleeding() is an old standby that starts by reducing chroma oversaturation, which is a major cause of bleeding and "blooming" effects. FixChromaBleeding() also does some internal resizing to help contain bleed. ChromaShift(c=2,L=-2) shifts chroma displacement 2 pixels to the right and 2 pixels upward.
FixChromaBleeding.zip is at http://www.digitalfaq.com/forum/atta...omableedingzip. Requires ChromaShift, below.
ChromaShift v2.7: http://www.digitalfaq.com/forum/atta...romashift27zip

U = UtoY()
U = U.BilinearResize(U.width/2, U.height).aWarpSharp(depth=30).\
nnedi3_rpow2(4, cshift="Spline64Resize", fwidth=U.width, fheight=U.height)

What in the world does this code do? Actually it's less complicated than it looks. Broadly, it resizes, sharpens, and reconditions the U (blue-yellow) channel to reduce bleed. First, "U = UtoY()" moves the U data into the luminance channel and creates a new clip in memory called "U". Whatever is done to this "U" clip will be saved in memory for use later. The U clip is then resized to one-half its original width using BilinearResize, which reduces the horizontal width of the chroma bleed. Bleed is further reduced with aWarpSharp, which tightens chroma more snugly against edges using warping and thinning techniques. Finally, a special resizer called nnedi3_rpow2 brings the scaled-down "U" clip back to its original size using the Spline64Resize algorithm. Note that this entire operation works only on chroma, not on luma, so the overall perceived "sharpness" of the video is preserved.

V = VtoY()
V = V.BilinearResize(V.width/2, V.height).aWarpSharp(depth=30).\
nnedi3_rpow2(4, cshift="Spline64Resize", fwidth=V.width, fheight=V.height)

This odd-looking bit of code takes the "V" (red-green) channel and does the same things that were done to the "U" channel, but saves the results as a new video called "V".

We now have to return Y, U and V back to where they came from. This is done with the "YtoUV(U, V, last)" function. Basically it re-creates a video in which the saved "U" is configured as a newly filtered U channel, and the saved "V" as a newly filtered V channel. The original Y channel is restored from "last", where "last" is the video that existed before all this UtoY and VtoY business started. After creating this new YUV version of the filtered video, it goes through one more round of warp-sharpening with MergeChroma(aWarpSharp2(depth=30).aWarpSharp2(depth=10)). This works on chroma only; the statement applies two more doses of warp-sharpening using the newer algorithms in aWarpSharp2. The MergeChroma function merges the chroma from the aWarpSharp2 operations with the original Y (luma) channel, which remains unchanged.

UtoY, VtoY, YtoUV: http://avisynth.nl/index.php/Swap
BilinearResize, Spline64Resize: http://avisynth.nl/index.php/Resize
aWarpSharp2 (aka aWarpSharp): http://www.digitalfaq.com/forum/atta...sharp2_2015zip. It requires the 2012 and 2015 VisualC++ runtimes, which come with the QTGMC package.

NNEDI3_rpow2 is a resizer that works in powers of 2 and is part of the original NNEDI3 plugin, which comes with the QTGMC package. NNEDI3 requires VisualC++ runtimes; the runtimes and the documentation all come with the QTGMC package.

MergeChroma: http://avisynth.nl/index.php/Merge

The code now re-interlaces the video by using SeparateFields().SelectEvery(4,0,3).Weave(). Each full-size progressive frame is separated into two half-sized fields ready for interlacing. Because the source frames are progressive, each two half-sized fields contain the same image. We don't want duplicate images, so we use the SelectEvery() function to look at groups of 4 fields. Fields are numbered from 0, so for each group of 4 fields we will pick unique fields numbered 0 and 3 -- that is, we'll take the first (top) field from the group, then take the last (bottom) field from the group. When these two are interlaced with the Weave() function, we'll have a newly re-interlaced TopFieldFirst video.
SeparateFields(): http://avisynth.nl/index.php/SeparateFields
SelectEvery(): http://avisynth.nl/index.php/Select
Weave(): http://avisynth.nl/index.php/Weave

Crop(10,0,-12,-10) removes unwanted bottom-border head-switching noise, and also removes the filtered black borders, which are probably not pure black any more and which contain noise themselves. Crop() removes pixels in this order: 10 pixels from the left border, zero from the top, 12 from the right, and 10 from the bottom. Remember that you can't crop odd numbers of pixels in YUV. AddBorders(10,4,12,6) replaces the removed pixels with brand-new black ones in the same order (left, top, right, bottom), in this case centering the frame vertically. Finally, "return last" outputs "last", the most recent complete video operation, which was the AddBorders() function. Why do we have to specify "last"? Recall that we also created other clips called "U" and "V", so Avisynth wants to know which version we want for output. What we want is the "last" thing that was done.
Crop(): http://avisynth.nl/index.php/Crop
AddBorders(): http://avisynth.nl/index.php/AddBorders

I applied VirtualDub filters to the output from the Avisynth script while running the script and saving the file. The two filters were ColorCamcorderDenoise and ColorMill. ColorCamcorderDenoise (aka "CCD") was used at its default values. The only modification made with ColorMill was to increase saturation of all RGB colors by a modest 5 percent.
ColorCamcorderDenoise ("CCD"): http://www.digitalfaq.com/forum/atta...1&d=1544578132.
ColorMill: http://www.digitalfaq.com/forum/atta...colormill21zip.

Sorry for all the code, but bad and persistent chroma bleed is unfortunately a common defect, especially with lower-tier VCRs, and it always takes multiple filters to subdue. Meanwhile, all Avisynth functions and commands are documented online at the Avisynth wiki. Using Google, just enter "Avisynth" followed by the function you want to search for. The same functions and commands are in your computer in the Avisynth program folder: click the "Start" menu button, find the Avisynth program group in your program listings, open it, and click "Avisynth documentation". The major difference between the wiki version and the PC version is that the wiki often has more examples and updated graphics.
The following users thank sanlyn for this useful post: Spotty (01-13-2019)
  #10  
01-13-2019, 12:23 AM
Spotty Spotty is offline
Premium Member
 
Join Date: Dec 2018
Posts: 11
Thanked 0 Times in 0 Posts
Thanks a lot sanlyn, this forum really needs an "extra thanks" button.