
digitalFAQ.com Forum (https://www.digitalfaq.com/forum/)
-   Restore, Filter, Improve Quality (https://www.digitalfaq.com/forum/video-restore/)
-   -   Fixing color spots and edge lines? (https://www.digitalfaq.com/forum/video-restore/10095-fixing-color-spots.html)

homefire 11-03-2019 07:08 PM

Fixing color spots and edge lines?
 
3 Attachment(s)
I have a video with some areas of bad looking color patches and some pink lines along the edge. This is an example frame, sample video attached below:
http://www.digitalfaq.com/forum/atta...1&d=1572829447

To give some background, this was recorded to a DVD with a low-end DVD/VCR combo. The originals are no longer available, so this is all I have to work with. It was originally captured on a VHS video camera from the early 80's, on a tape that already had something recorded on it. From what I read, this basically means there's very little I could do to fix this. Anyone know of any AviSynth or VirtualDub filters that might help correct this?

keaton 11-03-2019 11:54 PM

There is a wealth of threads in this forum on using VirtualDub filters such as ColorMill, Gradation Curves, and Hue/Saturation/Intensity to adjust video, and on using the ColorTools filter to analyze via histogram (i.e. Red/Green/Blue) or vectorscope (i.e. Hue/Saturation) what problems you need to attempt to fix. Those are included in the VirtualDub pack posted on this forum. If you search by user, sanlyn has put on a few clinics in response to videos posted by users in the past, covering what to analyze via histogram or vectorscope, with example work he's done to try and fix them. Some have been quite nasty samples to try and fix. Avisynth has some plugins as well that can display various kinds of histograms and can be used to adjust hue/saturation, which have also been mentioned on this forum.

Dealing with some home videos myself, I've played with both the color curve or level type adjustments (i.e. Gradation Curves or ColorMill) and Hue/Saturation (i.e. the Hue/Saturation/Intensity filter) to find the best correction to make. I went even further and got an excellent book on color correction by Alexis Van Hurkman (Although it doesn't refer to Virtualdub or Avisynth, the principles of the book still translate to RGB and Hue/Saturation adjustments), which helped me understand what different kinds of adjustments do and what to look for when analyzing video. Sometimes I find the best adjustment is in the Hue/Saturation domain (i.e. a hue adjustment rotates colors on the color wheel to help remove color casts, or a saturation reduction tames a particular color that the camcorder really oversaturated). Other times, an RGB curve or gamma adjustment might be better (i.e. Gradation Curves or ColorMill) to try and get the colors on the vectorscope properly centered so that neutral colors appear neutral.

One of the most important things is to get skin tones accurate. There is a line on the vectorscope which happens to correspond to where most flesh tones should be on or very close to. If you've just got a big blob of Blue, for example, you may have to desaturate the video a lot with, say, the Hue/Saturation/Intensity filter -- maybe it can be done selectively on just Blue and Cyan, so that other colors are not washed out. But for gross errors like that, you may be limited in what you can remove.

In my experience, I was able to get rid of some strong red casts in camcorder video by selectively reducing saturation for the red/magenta hue or by making a strong negative gamma adjustment on the red channel. Using the Vectorscope in Virtualdub's ColorTools filter, I could tell whether the color needed a gamma adjustment because it wasn't centered on the scope; if it was properly centered, then all I could do was reduce the saturation for that particular hue. Sometimes I found all colors just needed a hue adjustment of a few points (i.e. degrees of rotation on the vectorscope). Sometimes correcting for white/grey/black and flesh tones was enough to get all the other colors to fall in line. But when you have an obvious camcorder sensor error that just displays an invalid color, I try to reduce it as much as I can without doing too much harm to the rest of the video by doing a large saturation or gamma reduction for a specific hue, then doing a global boost of all hues as much as I can before any color gets to be too strong.

Sorry I don't have a quick answer. Fixing color in video can get rather involved, and so it would take time I don't have right now to try and solve that puzzle. So, instead, I'm trying to point you to resources that can help educate you on the topic, so you can attempt to solve any issue with color you may face. I spent time reading forum posts, a good book on the topic, and just playing with the videos I have, looking at a histogram and/or vectorscope and seeing how RGB, gamma, hue, and saturation adjustments cause them to change. Using a color picker such as the CSamp program referenced on this forum is also quite handy for seeing what RGB values you have in a given area (i.e. what should be a neutral color that has similar red and green values, but a much larger blue value). Your eyes are also important in cross-checking what any histogram or vectorscope shows you. Looks like you have a challenging video, best of luck to you!
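
If you'd rather do that first look in AviSynth instead of VirtualDub, something like the sketch below stacks a levels histogram next to a vectorscope; the AviSource call and file name are just placeholders for however you load your clip:
Code:

# analysis-only sketch -- the source filter and file name are placeholders
v = AviSource("sample.avi").ConvertToYV12()
StackHorizontal(v.Histogram(mode="Levels"), v.Histogram(mode="Color2"))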

sanlyn 11-04-2019 12:53 AM

Unfortunately few readers could use your video unless they have ATI's YVU12 codec installed, which is extremely unlikely, and which can't be used for a DVD. So I don't think your sample is unaltered from a DVD anyway, which would be YV12/MPEG2.

If your source video is indeed a DVD, then there is a guide in this forum (as well as similar guides in other a/v forums) showing how to make video samples from DVD and other MPEG sources: http://www.digitalfaq.com/forum/news...ad-sample.html.

I realize that a few video players such as VLC can play the video because they ship with a version of ATI's codec, but that codec can't be used by restoration software unless YVU12 is installed on the user's machine. Just by accident I had a machine that could open the video in VirtualDub, but that's not the way to get a lossless decode from a DVD. For restoration or editing purposes, DGIndex is used to extract the MPEG into an unaltered m2v video stream. It would also be a smaller file that you could use for a longer sample; the AVI isn't much to work with.

homefire 11-04-2019 05:20 PM

1 Attachment(s)
Quote:

Originally Posted by keaton (Post 64665)
Sorry I don't have a quick answer. Fixing color in video can get rather involved, and so it would take time I don't have right now to try and solve that puzzle. So, instead, I'm trying to point you to resources that can help educate you on the topic, so you can attempt to solve any issue with color you may face.

I appreciate the insight. The overall coloring I think I have a handle on, or at least I have the resources to figure it out between this site and some books I have. I should have been more specific: I was looking for a filter or plugin that will allow me to adjust specific spots in the video frame, unless the correct answer is that the best fix is to adjust the color of the whole frame. I've seen sanlyn's great tutorials on coloring and editing segments of one video with avisynth. I'm just looking for an approach to take, as in where to start looking?


Quote:

Originally Posted by sanlyn (Post 64666)
If your source video is indeed a DVD, then there is a guide in this forum (as well as similar guides in other a/v forums) showing how to make video samples from DVD and other MPEG sources: http://www.digitalfaq.com/forum/news...ad-sample.html.

:smack: I completely missed that guide. My goal was to convert it from MPEG to AVI, which I thought would be lossless in the same color space, but I was definitely wrong. The original is much better. The correct full clip is attached.

By the way, thanks for all the time you take to post all of those guides and information overloads!

homefire 11-04-2019 09:04 PM

1 Attachment(s)
Quote:

Originally Posted by keaton (Post 64665)
I went even further and got an excellent book on color correction by Alexis Van Hurkman (Although it doesn't refer to Virtualdub or Avisynth, the principles of the book still translate to RGB and Hue/Saturation adjustments), which helped me understand what different kinds of adjustments do and what to look for when analyzing video.

If this is the book you're talking about, I found it amazing. I also got the book "Color Correction for Video" by Steve Hullfish and Jaime Fowler, which really helped me out.

http://www.digitalfaq.com/forum/atta...1&d=1572922981

keaton 11-04-2019 09:32 PM

1 Attachment(s)
Here's a really quick and dirty shot at this with Virtualdub filters hue/saturation/intensity (1.2), ColorMill, Color Camcorder Denoise 1.7, and ColorTools (I think all of these are included in digitalfaq Virtualdub 1.9.11 + filters download). I didn't set the Video Compression or Color Depth settings, so change that for whatever you want to save as. In virtualdub load the attached file via File -> Load Processing Settings

This shows what may be possible, certainly spend more time on it than I did. I tried to come up with something that sort of looked good globally. Naturally, you can split the video up into different shots as the lighting or camera angle changes to get each shot adjusted optimally for the conditions. Your video has lights and darks crushed (i.e. maxed out) so not much can be done to undo that. Some highlights of what I tried were as follows:

After loading this attached file, go to Video -> Filters to see the filters loaded. Notes on each as follows:

Filter 1.) Looking at the clip with the Vectorscope in the ColorTools filter, you see the saturation for blue, green and magenta is huge, and even goes out of gamut of what's possible to display. The square boxes (M, B, C, Y, G, R) make up a ring marking the gamut's boundaries. As you can see going through the video, some colors have values that go well beyond those markers. So, first off, I used hue/saturation/intensity to do a major reduction in saturation globally (i.e. for all of Red, Yellow, Green, Magenta, Cyan, Blue) by moving the saturation slider way down. Naturally, this really dulls the colors, but it also helps to get rid of much of the blue and green splotches you see.

Filter 2.) Using the color picker CSamp and looking at flesh tones (Red greater than Green, and Green greater than Blue) and neutral colors (Red/Green/Blue identical or very close together), I did a combination of RGB adjustments and also some gamma adjustments. Perhaps both were not needed; I was just rushing through it. Also, although the previous adjustment in filter #1 got the colors to at least be in gamut and got rid of much of the green and blue gunk, the colors were not centered properly on the Vectorscope. Colors should "shoot" out from the center. This can be a little hard to describe, so apologies if it's not clear. But when you make some adjustments to Red or Green or Blue gamma, and you watch in the Vectorscope, you see how the color moves away from or towards one of those three axes. So making some adjustments to these gamma values got the color to be more centered in the middle of the scope and to "shoot" outward towards the correct color. This adjustment also helped take out more of the colors that just are not possible, i.e. they don't look like anything in the real world for the images you see. For the midrange (aka R-G-B Middle in ColorMill), I brought the Blue way down to further help with the Blue problem. I also configured ColorTools to Histogram mode and saw that the Blue histogram was still way higher than Red and Green; by sliding this way down, I saw red/blue/green become more aligned in the histogram where I knew there were neutrals at the high end. I also made a bit of an adjustment in the Levels part of ColorMill to increase the midrange contrast a bit, which is a personal choice; it just made things a little less dark.

Filter 3.) I added another hue/saturation/intensity filter to reduce the saturation of Cyan/Blue/Magenta even further to try and get rid of more of that Blue and Magenta gunk, as they were still really much larger on the vectorscope than the other colors when the scene doesn't really call for it.

Filter 4.) I added yet another hue/saturation/intensity filter with a hue adjustment of about -10, this seemed to improve the flesh tones a bit. This is a sensitive one, and can be per individual preference. I think it moved the flesh tones a bit closer to the ideal marker for flesh tones on the Vectorscope (i.e. that line between Red and Yellow on Vectorscope).

Filter 5.) I added Camcorder Color Denoise filter, which is to help get rid of some of the general color noise. Much has already been said about this filter on other threads. My setting here is rather arbitrary, not heavily analyzed, but it seemed to help reduce some of those color flickers.

Filter 6.) Lastly, I added another hue/saturation/intensity filter with a global increase of saturation for all colors. This is also personal preference. You could probably go even further or back off a bit and it would still be acceptable. Looking at the Vectorscope to see how much that pushes the colors out towards the gamut boundary (i.e. those boxes on the Vectorscope), you can increase the saturation so long as none of the frames exceed those boundaries. However, often you go way below this level because some other color seems too exaggerated or just not right. This step is done last, once all other color correction has been done. If the color correction was good at fixing everything so it is fairly accurate, this adjustment should only improve things by making color brighter. However, this adjustment will show the limitations or imperfections of the color correction phases (or things that just couldn't be corrected) from earlier if a certain color looks exaggerated or just wrong. It's basically a color amplifier. So if anything isn't right, this will show it if you push saturation too far. Sometimes it can only be increased a little bit, due to limitations of what can be fixed, before the adjustment can cause harm to what's been fixed already.

Filter 7.) This is ColorTools, enable or disable it by clicking/unclicking check box, then OK on Filters window. When enabled, it is the "after" video you will see, and also will be shown in the Preview window if you click configure on any of the other filters. You can configure ColorTools to show Vectorscope, or a couple different kinds of histograms.

As I say, this is quick and dirty. You could spend a lot more time breaking it up into segments and correcting per segment since the colors and lighting change a lot. You could also configure ColorTools for the Waveform Monitor mode and try doing more corrections that way. I didn't even try using the Gradation Curves filter, but it may be able to do more than ColorMill can for making finer adjustments in RGB. Anyway, this quick and dirty pass shows that much of the yuck you are seeing is just certain hues being way too saturated and also off center (i.e. RGB and/or gamma adjustments help the colors properly emanate from the center of the vectorscope instead of being centered somewhere else). This video is a challenge. I'm not sure it's possible, at least with the type of tools used here, to remove all of it. But this is something that shows much of it can be removed. Perhaps with more work and patience, even more can be done. There are other programs that can highlight a section of a video and color adjust only what's selected, such as DaVinci Resolve, and other editors that professionals use. I experimented briefly in avisynth with some sort of chroma keying/masking feature. It's documented briefly at http://avisynth.nl/index.php/ColorKeyMask. I'm not sure if avisynth can really do it like those other editors can, but I haven't used it enough to say.
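
Just to make that last idea a little more concrete, a masked-overlay sketch in avisynth might look something like the block below. The file name, key color, tolerance, and RGBAdjust value are all made-up placeholders, not values tuned for this clip:
Code:

# selective-fix sketch: blend a correction in only where a keyed color matches
src   = AviSource("sample.avi").ConvertToRGB32(matrix="Rec601", interlaced=true)
fixed = src.RGBAdjust(b=0.8)                       # example: pull blue down
keyed = src.ResetMask().ColorKeyMask($5070C0, 40)  # flag pixels near the splotch color
msk   = keyed.ShowAlpha("RGB32").Invert()          # white where the key color matched
Overlay(src, fixed, mask=msk)
return last

Whether that kind of keying holds up on noisy tape sources is another question, but it shows the general shape of doing a spot correction rather than a global one.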

Yes, that is the book I was speaking of. It takes a lot of reading, practicing, and re-reading, but it is a fantastic resource! Before I was blind, but now I can see! It goes off into so much more than what Virtualdub or Avisynth can do, e.g. DaVinci Resolve, etc., but all the fundamentals are in there, and the principles translate to any tool you use. It's just that some features are not in all tools, such as Virtualdub and Avisynth. It's the only book on the topic I got, but it seems to be one of, if not the, most highly regarded. I don't think I'll need any other book. Wonderful that you have that.

Anyway, best of luck to you. Hope this helps get you started on one possible solution.

sanlyn 11-06-2019 05:11 PM

9 Attachment(s)
Thanks for the m2v sample. It definitely looks like a more accurate version of the DVD original.

Quote:

Originally Posted by homefire (Post 64671)
the goal was to convert it from MPEG to AVI which I though would be lossless in the same color space but I was definitely wrong.

Actually the Avi was lossless and is the same YV12 colorspace as the DVD, although I don't think the way it was converted was without damage. The obsolete ATI codec is lossless but isn't compressed; the size of the Avi using the ATI codec was 22.8mb, but compressed with lossless Lagarith it was only 8mb. https://lags.leetcode.net/codec.html

In a previous post, keaton presented some good notes with color corrections that worked nicely. Rather than go directly to RGB I took a different approach. Usually I start analyzing color problems with Avisynth's YUV tools before going to RGB. YUV is limited in that it's more difficult to target specific color ranges. But it's handy for correcting major hue problems. In this case the major problem was cyan (blue+green) oversaturation in the brights and in parts of the midrange -- likely due to strong bluish daylight from a nearby window and a camera exposure system that went schizo trying to manage it. Tape damage and poor tracking didn't help.

Below, a resized image and YUV histogram are from the m2v sample:

http://www.digitalfaq.com/forum/atta...1&d=1573080353

It shows a YUV "Levels" histogram that demonstrates two of the major problems: luminance (the top white band) is 'way beyond the legal 16-235 range for standard video -- the yellow portions at the graph's side borders show out-of-range values that get clipped in RGB. Darks are crushed on the left, and brights are wiped out on the right. Also, the "u" yellow-green channel is overextended toward the right edge with a contrast range that exceeds the abilities of digital video. Some of the original bright data disappeared when the DVD was encoded.

Below, I used Avisynth functions to reduce the contrast range and to recover some of the shadows and bright clipped data by moving it into the 16-235 range, then restored some red by increasing v-channel contrast and reduced some blue by lowering U-channel contrast:

http://www.digitalfaq.com/forum/atta...1&d=1573080427

The code that accomplished this correction was:
Code:

Levels(0,1.0,255,16,235,dither=true,coring=false)
ColorYUV(off_u=-4,off_v=2,cont_v=50,cont_u=-50)

http://avisynth.nl/index.php/Levels, http://avisynth.nl/index.php/ColorYUV

I then used a YUV vectorscope (Histogram("Color2")) to look at saturation levels, shown below:

http://www.digitalfaq.com/forum/atta...1&d=1573080515

Above, the vectorscope shows that blue chroma crashes beyond the right border of the YUV range, and some pink discoloration in the frame shows up as some "hot" magenta sprinkles.

Below, the out of range chroma saturation is corrected (note how the blue has a sharp cutoff on the right-hand edge, indicating data loss through clipping). When everything started turning green it indicated that blue saturation was getting too low, so I stopped at that point and waited to get to RGB for further corrections. The vectorscope shows the combined result of Levels and saturation correction:

http://www.digitalfaq.com/forum/atta...1&d=1573080587

Saturation for selected start and stop points of hue ranges was adjusted using both decreasing and increasing values, with this code:

Code:

Tweak(sat=0.85,StartHue=230, EndHue=35,dither=true,coring=false)
Tweak(sat=0.75,StartHue=36, EndHue=60,dither=true,coring=false)
Tweak(sat=1.3,StartHue=80, EndHue=190,dither=true,coring=false)
Tweak(sat=1.4,StartHue=100, EndHue=180,dither=true,coring=false)

Tweak() uses the hue values shown below to specify color ranges:

http://www.digitalfaq.com/forum/atta...1&d=1573080652
The YUV color wheel, the value chart, and the Tweak() function are described at http://avisynth.nl/index.php/Tweak.

Below, the frame with Levels correction and denoising is shown with an RGB histogram, but before RGB color correction:

http://www.digitalfaq.com/forum/atta...1&d=1573080739

Below, after denoising and RGB corrections (most of the RGB work targeted more red and yellow for the midtones and brights, and serious reduction of blue in the brights and upper midrange only):

http://www.digitalfaq.com/forum/atta...1&d=1573080797

Some of the picture elements used to determine corrections were the white cuffs of the child's garment, the mother's dark hair, and the (assumed) off-white color of the wallpaper in the frame shown. Skin tones were also important because some of the corrections were giving overly orange or green skin tones. As keaton mentioned earlier, a pixel reader such as the free csamp.exe (http://www.digitalfaq.com/forum/atta...on-dv-csampzip) was used to check pixel color values for objects and skin tones. For example, the slightly dulled off-white wallpaper background measured roughly RGB 144-135-140. The blackish hair seemed to have a small amount of reddish brown in it, so that the darkest parts measured around 23-22-14, which isn't a true black (hair is seldom pure black in video because of specular highlights).

The RGB filters used were Color Camcorder Denoise, Color Mill, and gradation curves. The settings I used for those filters are attached as m2v_RGB settings.vcf.

The Avisynth denoisers were RemoveDirtMC and GradFun2DBmod, although I also made a version using strong QTGMC to work on noise and image shimmer. It wasn't feasible to try to correct the severe dropouts near the end of the clip. There simply isn't enough clean data from which filters would be able to interpolate corrections. A median filter might clear small amounts of it but would leave bizarre distortion that looks much worse.

The ghosting that occurs in the early part of the clip is a fault of the camera's circuitry and of lens flare, not to mention the annoying luma "pumping" of the camera's autogain feature. The pale yellow stain on the right border was simply cropped off and replaced with black border pixels because it basically contains noise (dot crawl) that's difficult to fix without overly softening the half-D1 image, which is 1/2 horizontal resolution and is soft to begin with.

This is the Avisynth script without QTGMC or deinterlacing:

Code:

MPEG2Source("I:\forum\faq\homefire\m2v\sample.demuxed.d2v")
Levels(0,1.0,255,16,235,dither=true,coring=false)
ColorYUV(off_u=-4,off_v=2,cont_v=50,cont_u=-50)
Tweak(sat=0.85,StartHue=230, EndHue=35,dither=true,coring=false)
Tweak(sat=0.75,StartHue=36, EndHue=60,dither=true,coring=false)
Tweak(sat=1.3,StartHue=80, EndHue=190,dither=true,coring=false)
Tweak(sat=1.4,StartHue=100, EndHue=180,dither=true,coring=false)

AssumeTFF()
SeparateFields()
RemoveDirtMC(30,false)
GradFun2DBmod(thr=1.8)
MergeChroma(aWarpSharp2(depth=20))
Weave()
Crop(0,0,-8,0).AddBorders(4,0,4,0)

This is the script for the QTGMC video version that is attached as sample_352x480i_Q.mpg:

Code:

MPEG2Source("I:\forum\faq\homefire\m2v\sample.demuxed.d2v")
Levels(0,1.0,255,16,235,dither=true,coring=false)
ColorYUV(off_u=-4,off_v=2,cont_v=50,cont_u=-50)
Tweak(sat=0.85,StartHue=230, EndHue=35,dither=true,coring=false)
Tweak(sat=0.75,StartHue=36, EndHue=60,dither=true,coring=false)
Tweak(sat=1.3,StartHue=80, EndHue=190,dither=true,coring=false)
Tweak(sat=1.4,StartHue=100, EndHue=180,dither=true,coring=false)

AssumeTFF()
QTGMC()
vInverse()
RemoveDirtMC(30,false)
GradFun2DBmod(thr=1.8)
MergeChroma(aWarpSharp2(depth=20))
SeparateFields().SelectEvery(4,0,3)
Weave()
Crop(0,0,-8,0).AddBorders(4,0,4,0)

When you make a d2v project file from DVD with DGIndex the app also generates an audio file, usually a dolby digital file that ends with ".AC3". The audio can be re-joined to the video in Avisynth using the NicAudio Avisynth plugin (http://avisynth.nl/index.php/NicAudio) and Avisynth's AudioDub() function. An example of using it in a script:

Code:

vid=MPEG2Source("I:\forum\faq\homefire\m2v\sample.demuxed.d2v")
aud=NicAc3Source("I:\forum\faq\homefire\m2v\sample_audio_filename.ac3",channels=2)
AudioDub(vid,aud)
..... processing
..... processing
return last

You need "return last" at the end of the script because the script has created two clips named "vid" and "aud". Avisynth needs to know what you want to return when all processing is completed. What you want to return is the "last" thing that's done in the script, so the term to use is "return last". NicAudio also works with some other audio formats such as mp3, mp2 and DTS.

Keaton's earlier color correction is good. This current post simply demonstrates that you can use more than one approach to get pretty much the same results. I think correcting YUV levels first gives "cleaner" results, but with a lot of damaged video it's difficult to see the difference.

keaton 11-07-2019 06:40 PM

Thanks for the compliment, sanlyn! I'm still pretty new to this stuff, and feel I have a lot to learn, as I've only color corrected a few hours of video so far. So hearing positive feedback from you means quite a lot. I do think there's some pretty significant differences between our results. I see a lot of things improved in your demonstration compared to mine.

I really appreciate sanlyn's totally different take on how to approach this from the YUV colorspace in avisynth first. The Levels command is definitely something to use to fix the out-of-gamut luminance. I also like the finer level of control over saturation for specific hues with the Tweak function, compared to the virtualdub filter. I see a lot of things I like better in sanlyn's demonstration. You can see in the vectorscope of mine that colors are much more muted. Visually, I see things much more muted, especially the reds (such as in the playpen). There's a much stronger yellow presence in my demonstration in the baby's outfit and the flesh tones. The woman's shirt is more muted in mine, and retains more blue hue in sanlyn's.

I guess what this shows is that you can certainly resolve much of what you asked about in the original post of this thread. However, you can get significantly different results when trying different things. I guess it comes down to what are the most important things to get accurate, which is usually neutrals, flesh tones, and just things that cannot possibly be the color originally displayed. Then there can be certain items you have a strong memory of being a certain color/hue, which may also be important to try and match. Color correction is an art, and something I think can take a lifetime to study and improve upon. I would assume this video is of great importance to you, so hopefully that will help with any frustration over the time it may take to get the best possible color correction of this. I would suggest trying several revisions, trying some different methods and/or focusing on different things in each revision to help get more insight into what can be fixed and how you did it. I've seen those that color correct photos take several passes at the same photo for the sake of practice, to learn from each attempt, and have more choices as to which version came out the best.

homefire 11-07-2019 08:32 PM

I just wanted to reply right away and say thank you to the both of you! I love the way people are willing to share their knowledge and experience. I hope to do the same when I get a little less green behind the ears.

I have an issue at work I'm dealing with right now, so my only free time for the past couple of days has been spent sleeping :depressed: but I plan on digging into these responses tomorrow and will update, can't wait!

Quote:

Originally Posted by keaton (Post 64674)
Here's a really quick and dirty shot at this with Virtualdub filters hue/saturation/intensity (1.2), ColorMill, Color Camcorder Denoise 1.7, and ColorTools (I think all of these are included in digitalfaq Virtualdub 1.9.11 + filters download). I didn't set the Video Compression or Color Depth settings, so change that for whatever you want to save as. In virtualdub load the attached file via File -> Load Processing Settings

Thank you for putting that together for me. I can't believe how simple you made it look. Can I ask what was the logic behind the order of filters? Why did you think to break up the hue/sat/int into that arrangement? I figure the last hue/sat/int filter is to touch up after CCD?

Quote:

Originally Posted by sanlyn (Post 64689)
Thanks for the m2v sample. It definitely looks like a more accurate version of the DVD original.


Actually the Avi was lossless and is the same YV12 colorspace as the DVD, although I don't think the way it was converted was without damage. The obsolete ATI codec is lossless but isn't compressed; the size of the Avi using the ATI codec was 22.8mb, but compressed with lossless Lagarith it was only 8mb. https://lags.leetcode.net/codec.html

I'm still trying to figure out how all of these formats, compression and color spaces work together. I should have realized converting formats, even within the same color space, is not lossless. :smack:

Quote:

Originally Posted by sanlyn (Post 64689)

It shows a YUV "Levels" histogram that demonstrates two of the major problems: luminance (the top white band) is 'way beyond the legal 16-235 range for standard video -- the yellow portions at the graph's side borders show out-of-range values that get clipped in RGB. Darks are crushed on the left, and brights are wiped out on the right. Also, The "u" yellow-green channel is overextended toward the right edge with a contrast range that exceeds the abilities of digital video. Some of the original bright data disappeared when the DVD was encoded.

I noticed that the luma is also beyond the legal range in parts of the video after the correction, especially the parts with the light from the window. I'm guessing it is acceptable to allow the luma to be crushed in these areas since there are no details to preserve or should there be a final step to make sure the video is in legal range?

Quote:

Originally Posted by sanlyn (Post 64689)

The RGB filters used were Color Camcorder Denoise, Color Mill, and gradation curves. The settings I used for those filters are attached as m2v_RGB settings.vcf.

What was your logic for the CCD first and then Color Mill and Gradation Curves? I'm guessing the Color Mill is for an overall adjustment, while the Gradation Curves are fine contrast adjustments? Do you have the link to download CCD 1.7, I could only find 1.6?

These are RGB filters but the output was set to YUV 4:2:0. Is there a reason I should convert back to the YUV and not leave it in the RGB space?

What would be the best output format and color space? The goal is to edit it in an NLE for making area based adjustments which are going to be in RGB. And then to export two versions, one for streaming and one for DVD. What would be the typical workflow for this after virtualdub?

For streaming, I would have to deinterlace regardless but it is not necessary for DVD authoring. You've said that you shouldn't deinterlace unless necessary due to the quality loss. Would there be any scenarios where you would deinterlace for a DVD?

Quote:

Originally Posted by sanlyn (Post 64689)

The Avisynth denoisers were RemoveDirtMC and GradFun2DBmod, although I also made a version using strong QTGMC to work on noise and image shimmer. It wasn't feasible to try to correct the severe dropouts near the end of the clip. There simply isn't enough clean data from which filters would be able to interpolate corrections. A median filter might clear small amounts of it but would leave bizarre distortion that looks much worse.

The ghosting that occurs in the early part of the clip is a fault of the camera's circuitry and of lens flare, not to mention the annoying luma "pumping" of the camera's autogain feature. The pale yellow stain on the right border was simply cropped off and replaced with black border pixels because it basically contains noise (dot crawl) that's difficult to fix without overly softening the half-D1 image, which is 1/2 horizontal resolution and is soft to begin with.

This video was in bad shape to begin with and then got even worse with the low range DVD/VCR so I'm grateful for any improvement. I was planning on cutting those dropouts anyway so no need to worry about fixing them.

Also, I could not find the RemoveDirtMC filter. I found RemoveDirtSE, but it seems like there are different arguments for each. I was also unable to find NLMeansCL2, which is required by RemoveDirtSE. I was able to find and use RemoveDirt; would this be a comparable replacement for RemoveDirtMC? There are so many versions of these filters with different requirements floating around.

Quote:

Originally Posted by sanlyn (Post 64689)
Code:

vid=MPEG2Source("I:\forum\faq\homefire\m2v\sample.demuxed.d2v")
aud=NicAc3Source("I:\forum\faq\homefire\m2v\sample_audio_filename.ac3",channels=2)
AudioDub(vid,aud)
..... processing
..... processing
return last

You need "return last" at the end of the script because the script has created two clips named "vid" and "aud". Avisynth needs to know what you want to return when all processing is completed. What you want to return is the "last" thing that's done in the script, so the term to use is "return last". NicAudio also works with some other audio formats such as mp3, mp2 and DTS.

Is it typical to add the audio back on before the filters? I was planning on adding it to the clip in an NLE. Is there some sort of downfall to doing it this way?

Again, thank you Keaton and Sanlyn!!!!! :congrats:

lordsmurf 11-13-2019 11:39 PM

This is sadly common for VHS videos shot in the 80s; the shooting camera is to blame.

sanlyn has done a pretty decent job for you, not much I could improve on here -- though I find the darks still a bit crushed, and overall a bit too dark. That's what I'd do different. Maybe try to tweak out some more of the blue, but that gets tough.

If I were capturing this, I'd apply both the standard proc amp (YUV) from SignVideo, and the RGB from Sima. But it's too late for this; the original tape is gone. But still wanted to mention it. The most ideal color correction always starts analog (in camera, in proc amps), finishes digital.

keaton 11-14-2019 10:53 AM

Sorry, homefire, didn't realize you posted a response a week ago now. I'll try to respond as best I can.

The number of separate filter entries I used was not necessarily required. Not too much thinking about it. I suppose I was trying to fix one thing at a time with each filter, not trying to consolidate too much. I also think I observed I couldn't make all my saturation adjustments in just one step. So I started chaining things together in a bit of a "mad scientist" sort of way. With each step, I disabled all filters that came later on the list, and looked at the vectorscope/histogram to see what needed changing next. I think the large saturation corrections had to be done up front, then more subtle corrections came later, as saturation alone wouldn't solve everything. I used a hue adjustment later to try and improve the skin tone problems I noticed in the vectorscope. That could have been done with color levels or curves, but I have found that a hue adjustment can sometimes do it more easily if you're focused on skin tones. The last saturation adjustment (filter 6) is just a color amplifier. If you slide the saturation up, you'll see all the colors become stronger. I noticed things were still rather muted, and so I thought a color boost might help. If things are corrected well, a color boost should work well, but it can show limitations or issues with the color correction if there's something off. Just remember to check your vectorscope so all saturation is within the gamut limits shown on the scope.

Regarding the levels correction: you should use the Levels command sanlyn specified to get levels into the legal 16 to 235 range. It is true that some adjustments or filters can push things back outside that range again. If so, you can try using it again, or change the initial levels adjustment to pull in a bit further than the 16 to 235 limit.
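
For example (the numbers here are purely illustrative, not tuned to this clip), the output range can be pulled in slightly so later boosts have a little headroom:
Code:

# illustrative values only -- slightly tighter than 16-235 to leave headroom
Levels(0, 1.0, 255, 20, 230, dither=true, coring=false)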

I thought CCD 1.7 was in the VirtualDub 1.9.11 + filters digitalFAQ pack available on this forum (http://www.digitalfaq.com/forum/vide....html#post9485). If not, here's a link sanlyn posted in 2018: http://www.digitalfaq.com/forum/atta...ove-ccd_v17zip

Keeping it in RGB when saving would imply no compression, I think. When saving to HuffYUV (4:2:2) or Lagarith (4:2:0), those color spaces are associated with those compressed formats. When you load an avs file into VirtualDub, it first does everything in that script in whatever colorspace the source is in (i.e. YUV for Huff, YV12 for Lagarith, etc.), then converts to RGB for the Filters processing after all the AviSynth processing. But when you save from VirtualDub back to a format, unless it's wasteful uncompressed RGB, you have to convert back to HuffYUV or Lagarith. We do it all the time! :)

I cannot see a reason to deinterlace for DVD. That said, some use QTGMC (many other forum posts on this) to deinterlace, but mostly because it can do a lot of other cleanup/improvement of the video. After doing that in avisynth, you can convert it back to interlaced for later DVD/MPEG2 file compression. Many other threads cover how to do this. On the other hand, the RemoveDirtMC and RemoveSpotMC scripts can do quite a good job cleaning things up, and you may not choose QTGMC. It's up to you to see what's best. Many threads explain how to split the frames into even and odd fields, run these "MC" scripts on them, then recombine to the original frames. It can be quite amazing how much it can clean up. It has limits, and so does QTGMC. Each video is different.

Regarding RemoveDirtMC: http://www.digitalfaq.com/forum/vide...html#post57869. That post also references attachments for the required avisynth plugins that go with the RemoveDirtMC.avs file.

Depending on how you do it, at least in avisynth and virtualdub, you don't need to separate the audio from video, unless you are going to be restoring the audio and then remuxing the video and audio. If so, you can use the option in virtualdub to set the audio source to be from a separate .wav file (or whatever), then save the video with Direct Stream Copy, and it will remux the audio and video for you. Some operations in avisynth such as DeleteFrame can affect the audio/video sync. But if there is a corresponding DuplicateFrame for each DeleteFrame, i.e. the total number of frames doesn't change, then audio sync wouldn't be affected. If you cut frames in Virtualdub, the audio for those frames is cut with it, i.e. it stays in sync. It's probably best to keep the audio with the video when doing any frame cutting, so you have sync there until you decide to save audio for separate restoration work.
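
As a tiny illustration of that Delete/Duplicate pairing (the frame numbers are made up; also remember that frame numbering shifts after each insertion or deletion):
Code:

# made-up frame numbers -- drop a damaged frame and duplicate a neighbor,
# so the total frame count (and therefore audio sync) is unchanged
DeleteFrame(1205)
DuplicateFrame(1204)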

Best of luck to you.

lordsmurf 11-15-2019 01:13 AM

Excellent post, keaton. :congrats:

sanlyn 11-15-2019 10:02 AM

Although keaton previously answered some of your questions, I'll address one or two others:

Quote:

Originally Posted by homefire (Post 64701)
Quote:

Originally Posted by sanlyn (Post 64689)
Actually the Avi was lossless and is the same YV12 colorspace as the DVD, although I don't think the way it was converted was without damage. The obsolete ATI codec is lossless but isn't compressed; the size of the Avi using the ATI codec was 22.8mb, but compressed with lossless Lagarith it was only 8mb.

I'm still trying to figure out how all of these formats, compression and color spaces work together. I should have realized converting formats, even within the same color space, is not lossless.

It depends on how the conversion is accomplished. If what you're doing is decompressing from one lossless codec to another lossless codec, the conversion is lossless. What 'lossless' means is that 100% of the original data is retained when the clip is compressed, and when decompressed you get back 100% of what went into it. As a comparison, PKZip is a lossless compression codec -- it's mighty slow with video (far too slow for capture!), but then again PKZip is tighter compression than huffyuv or Lagarith. "Lossy" means that when the original is compressed, some portion of the original data is discarded as being "unimportant" (the codec itself makes that decision). When the lossy-compressed video is decompressed or played, the discarded portion of the original is never recovered -- it's just g-o-n-e, period. If you again submit the same video to more stages of lossy compression, as in filtering or editing and then recompressing with lossy codecs again, data loss is cumulative.

ATi's YV12 codec goes a very long way back, to Windows 95 if not earlier. It's lossless but it doesn't "compress", and given its age and original purpose I can't answer for how accurately it encodes the original data.

Converting from one colorspace to another isn't entirely lossless, either, regardless of the compression codec used. Colorspace conversion involves numeric interpolation from one system to another (for instance, YV12>RGB or YV12>YUY2, and so forth). The amounts of data storage differ between color systems (YV12 has only half the chroma resolution of YUY2, and YUY2 has only half the chroma resolution of RGB), and chroma and brightness are stored in an entirely different manner than in RGB. Avisynth can make color conversions with less damage than the typical NLE, and it properly allows you to specify parameters that matter, such as interlace values. Most colorspace conversions aren't especially harmful, but shuttling back and forth over numerous conversions introduces more and more interpolation errors, until pretty soon the colors and resolution look borked.
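
For instance, an interlace-aware round trip in Avisynth might look like this; the matrix and interlaced values shown are the usual ones for SD interlaced material, so adjust them to suit the source:
Code:

# typical SD settings shown -- adjust matrix/interlaced for the actual source
ConvertToRGB32(matrix="Rec601", interlaced=true)
# ... RGB filtering goes here ...
ConvertToYV12(matrix="Rec601", interlaced=true)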

Quote:

Originally Posted by homefire (Post 64701)
I noticed that the luma is also beyond the legal range in parts of the video after the correction, especially the parts with the light from the window. I'm guessing it is acceptable to allow the luma to be crushed in these areas since there are no details to preserve or should there be a final step to make sure the video is in legal range?

The Levels() statement in the script ("Levels(0,1.0,255,16,235,dither=true,coring=false)") keeps output in the 16-235 range. True, some data is previously clipped in the camera. Nothing changes that. Out-of-bounds chroma is corrected with the Tweak() statements.

Quote:

Originally Posted by homefire (Post 64701)
What was your logic for the CCD first and then Color Mill and Gradation Curves? I'm guessing the Color Mill is for an overall adjustment, while the Gradation Curves are fine contrast adjustments?

Your guess is correct. CCD can go anywhere in the chain, but color correction is easier if spurious chroma noise doesn't spoil your view.

Quote:

Originally Posted by homefire (Post 64701)
Do you have the link to download CCD 1.7, I could only find 1.6?

CCD17.zip contains ccd17.vdf (http://www.digitalfaq.com/forum/atta...1&d=1544578132). There is also a newer ccd v1.8 32bit/64bit version at http://acobw.narod.ru/file/ccd.zip.
You can have the 1.7 and the 1.8 32-bit versions in your plugins folder at the same time because the two vdf's have different file names. Both filters will show up in the VDub filter dialog. I keep both versions for compliance with old scripts.

Quote:

Originally Posted by homefire (Post 64701)
These are RGB filters but the output was set to YUV 4:2:0. Is there a reason I should convert back to the YUV and not leave it in the RGB space?

I'm in the habit of saving VirtualDub output as YV12, since the next step would usually be encoding to MPEG or h264 which are YV12. I'd rather have VDub make the final conversion, since I'm not sure how various external encoders are doing it. Anyway, compressed RGB is 3x the file size of compressed YV12.

Quote:

Originally Posted by homefire (Post 64701)
What would be the best output format and color space? The goal is to edit it in an NLE for making area based adjustments which are going to be in RGB. And then to export two versions, one for streaming and one for DVD. What would be the typical workflow for this after virtualdub?

Because you plan to do more RGB work after using VirtualDub, it's better to save VDub's output as RGB and avoid another colorspace conversion in your NLE. The NLE's encoder or another encoder will automatically encode to default YV12. The NLE itself may or may not do the optimal YV12 conversion, or it could very well be perfectly OK, but in any case it's better than multiple conversions in and out of both apps. It's best to stay in the same colorspace when possible rather than switch back and forth.

Quote:

Originally Posted by homefire (Post 64701)
For streaming, I would have to deinterlace regardless but it is not necessary for DVD authoring. You've said that you shouldn't deinterlace unless necessary due to the quality loss. Would there be any scenarios where you would deinterlace for a DVD?

There's no time or space here to cover all the ramifications, but software deinterlacing is a destructive process. Don't assume that it's merely separating two interlaced fields and upsampling them into full-sized frames. It would be great if it were that simple, but it ain't, although many cheapskate deinterlacers and simple bob() functions do exactly that. The results are sloppy and far less than the sum of the original parts. At least QTGMC makes an effort to do cleaner and more precise work. But don't think QTGMC is free of limits and side effects. Fortunately it works as a decent denoiser when its optional parameters are managed, and sometimes full deinterlacing is necessary for heavy-duty filtering jobs that won't work with interlaced frames. But with many filters there are ways around a full deinterlace (good ol' SeparateFields often works).

DVD is allowed to be progressive, even if some set top players will play it as interlaced anyway. But if you deinterlace 29.97 or 25fps video for DVD, the frame rate is doubled and you will have to drop half the fields to maintain 29.97/25fps for DVD. There are times when dumping half your video into the toilet offers so-so advantages for really horrible videos, but most of the time it's just plain butchery that makes trash of the original motion. I've done it on a few home videos that were so poorly produced that decent playback wasn't possible any other way. But it's a shame to have to take drastic measures.
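
As a sketch of that "drop half the fields" case with QTGMC (defaults assumed; this is only one of several options):
Code:

# keep QTGMC's cleanup but return to single-rate 29.97fps progressive
# by discarding every other frame of its double-rate output
AssumeTFF()
QTGMC()
SelectEven()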

Telecined video from film sources shouldn't be deinterlaced but instead is inverse telecined (telecine field removal) for denoising when necessary. This results in progressive video playing at the original film speed of anywhere from 16 to 24 fps, which can't be used for DVD.

The specs for standard definition 4:3 or 16:9 BluRay at 720x480 or 720x576 require playback at 29.97fps (NTSC) or 25fps (PAL) and must be -- repeat, must be -- encoded as interlaced or telecined.

Quote:

Originally Posted by homefire (Post 64701)
Also, I could not find the RemoveDirtMC filter. I found RemoveDirtSE, but it seems like there are different arguments for each. I was also unable to find NLMeansCL2, which is required by RemoveDirtSE. I was able to find and use RemoveDirt; would this be a comparable replacement for RemoveDirtMC? There are so many versions of these filters with different requirements floating around.

RemoveDirtSE is a geek version of RemoveDirt. I'm not fond of it, but there are many other opinions. Its principal advantage is speed rather than better results. It works for a few people with specialized hardware and for the developers who keep up a stream of "improvements" that never seems to end. I suppose it's worth the effort, but I've never known it to best the results from other versions, and it won't work with most mainstream Windows hardware. RemoveDirtMC seems to work well for most users. It yields cleaner results than non-MC RemoveDirt. The MC version does have the annoying occasional habit of removing objects in motion from a frame or two (when someone throws a ball, either the ball or part of an arm disappears for a moment, or when a batter swings a bat the bat evaporates for a frame or two!). Remember, most denoisers are guessing about what is noise and what isn't. Guessing is far from perfect.

The version of RemoveDirtMC that most humans use is posted as a .avs plugin (http://www.digitalfaq.com/forum/atta...emovedirtmcavs). The same post #64 includes notes and attachments about the plugin's additional requirements, which are also contained in the download packages for QTGMC support files. Also, later editions of Windows lack some early dll's and syslibs that RemoveDirt and a few other plugins require, so you should take a look at this thread: http://www.digitalfaq.com/forum/vide...s-running.html, and note post #4 therein.

Quote:

Originally Posted by homefire (Post 64701)
Is it typical to add the audio back on before the filters? I was planning on adding it to the clip in an NLE. Is there some sort of downfall to doing it this way?

I don't know what "typical" is. I work with audio attached. I made a DVD once and forgot to add audio for some segments later, so it was rerun time. Filters like RemapFrames and ReplaceFramesMC screw up audio, which must be restored before moving on, and adding/removing frames also affects sound. Also, I tend to listen with headphones during processing, which makes audio quality and noise more apparent. I don't usually need an NLE. The only timeline app I liked was AfterEffects, which also offered the excellent ColorFinesse plugin. All of the encoders I use are standalone apps.

homefire 11-24-2019 07:24 PM

1 Attachment(s)
Thank you for all of your help; I can't believe how much of this video I was able to salvage! I have a followup question regarding the aspect ratio. In your video, Sanlyn, the display aspect ratio is correct (4:3). However, when I'm working with the video, my frame is squashed as shown below. I believe it has something to do with the pixel aspect ratio, but I cannot figure out where I went wrong with the process. Is there a setting that I am missing? The video in vdub should appear to be 4:3 when it is displayed, even though the frame size is actually 352x480, correct? Should I have converted it to a square aspect ratio somewhere?

http://www.digitalfaq.com/forum/atta...1&d=1574644850

-- merged --

After more reading I think the video is supposed to be encoded in 352x480 and the DVD authoring software set to 4:3 will correct the aspect ratio. Please correct me if I've misunderstood something

lordsmurf 11-25-2019 03:02 AM

352x480 is a valid resolution for DVD.

SAR = storage aspect ratio
DAR = display aspect ratio

SAR is thin for 352x480
DAR appears 4x3 normally, traditional boxy TV image from pre-HDTV era

Every video has both SAR and DAR. ;)

sanlyn 11-25-2019 12:43 PM

Quote:

Originally Posted by homefire (Post 64984)
The video in vdub should appear to be 4:3 when it is displayed, even though the frame size is actually 352x480, correct?

VirtualDub doesn't automatically correct the display aspect ratio. Rather, like most editors it displays the incoming video frame as-is. If you want VirtualDub to correct for the display aspect ratio, right-click on the input or output panel and choose the display aspect ratio you want. This doesn't change the actual frame size, it changes only the way it's displayed. If your incoming video was the usual 720x480, VirtualDub would display it that way unless you set the display otherwise.

As lordsmurf said earlier 352x480 is valid for DVD and is known as "half-D1". It's valid for NTSC. A similar frame for PAL video would be 352x576. You could resize it for 720x480, but hardware resampling to 4:3 display aspect ratio during playback would be much cleaner.

Quote:

Originally Posted by homefire (Post 64984)
Should I have converted it to a square aspect ratio somewhere?

I don't know what you mean by "square" aspect ratio. DVD and standard-def BluRay aren't square-pixel formats, they're anamorphic. The pixels aren't geometrically square, they're rectangles, and they can display at either 4:3 DAR or 16:9 DAR (720x480 frame size is required for 16:9 display ratio in standard-definition BluRay).

The pixel aspect ratio (PAR) describes the physical shape of the pixels. The display aspect ratio (DAR) describes the physical shape of the playback screen. The display screens on editors are not programmed like video players -- that is, editor screens don't automatically track those ratios. If you want your editor to display your video at the playback ratio or as anything else, you have to set it up manually in the editor. Ordinarily you'd want the video frame to display as-is, whether it's square-pixel or not; resizing in editor display panels can distort many of the frame's original elements. You should also note that unencoded AVI has no display aspect ratio data and no pixel aspect ratio data that players or editor screens can track. Because of this, players and editors display the raw frames as-is.
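
To put a number on it for this clip: DAR = (stored width/height) x PAR, so a 352x480 frame displayed at 4:3 implies a pixel aspect ratio of (4/3) / (352/480) = 20/11, roughly 1.82. Each stored pixel is drawn a bit less than twice as wide as it is tall, which is why the raw frame looks squeezed in an editor window.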

homefire 11-25-2019 06:45 PM

1 Attachment(s)
Quote:

Originally Posted by sanlyn (Post 65010)
As lordsmurf said earlier 352x480 is valid for DVD and is known as "half-D1". It's valid for NTSC. A similar frame for PAL video would be 352x576. You could resize it for 720x480, but hardware resampling to 4:3 display aspect ratio during playback would be much cleaner.

I don't know what you mean by "square" aspect ratio. DVD and standard-def BluRay aren't square-pixel formats, they're anamorphic. The pixels aren't geometrically square, they're rectangles and can display at either 4:3 DAR or 16:9 DAR (720x480 frame size is required for 16:9 display ratio in standard-definition BluRay).

The pixel aspect ratio (PAR) describes the physical shape of the pixels. The display aspect ratio (DAR) describes the physical shape of the playback screen. The display screens on editors are not programmed like video players -- that is, editor screens don't automatically track those ratios. If you want your editor to display your video at the playback ratio or as anything else, you have to set it up manually in the editor. Ordinarily you'd want the video frame to display as-is, whether it's square-pixel or not; resizing in editor display panels can distort many of the frame's original elements. You should also note that unencoded AVI has no display aspect ratio data and no pixel aspect ratio data that players or editor screens can track. Because of this, players and editors display the raw frames as-is.


I apologize for the completely ignorant post; reading it back now, it makes no sense to me either :question:. I was getting a little overwhelmed. So I should set the aspect ratio in Tmpgenc Authoring Works to "Display 4:3":
http://www.digitalfaq.com/forum/atta...1&d=1574728953

sanlyn 11-25-2019 07:02 PM

Quote:

Originally Posted by homefire (Post 65013)
So I should set the aspect ratio in Tmpgenc Authoring Works to "Display 4:3"

For this video, yes.

