Advice on my first Avisynth attempt?
4 Attachment(s)
After much trepidation, I finally got around to learning Avisynth. The AfterDawn tutorial got me started, but it was largely due to the posts by Sanlyn, msgohan, themaster1, jwillis84, LS, and others, to whom I am very grateful. Being locked inside for the last week due to the pandemic did not hurt either....
I have attached before and after samples of a clip. It has not gone through QTGMC, but I will add that at the end (although it was not recognizing the dfttest function, but perhaps I do not need it because of the FFT3DFilter). I tried to look into frequency filters like DeFreq for the ringing spots in the background, but did not see a difference. If I thought DeFreq's documentation was bad, that was before trying to figure out FFTQuiver, which resulted in a lot of darkened frames. FanFilter was easier to use successfully. Please let me know if there are better filters I should be using. Should I sharpen more?

I could not get RemoveDirt to work. The MDegrain2/MSuper/MAnalyse script did not seem to be much better than FFT3D, but was slower. I had to use an RGB parade for coloring as I was not as confident using the YUV histograms. Should the U and V channels be aligned on top of each other? Also, the filtered version appears to have a blue bar on the bottom--any reason why? Lastly, due to the file size limits on this site, is it better to upload longer compressed clips for suggestions or very short lossless clips? Thanks for any help. Code:
SetFilterMTMode("QTGMC", 2) |
Since no one else seems to want to take this on.....
I don't usually work with analog-to-DV transfers. They're too troublesome and require strong filtering to clean up compression artifacts and other DV detritus that doesn't normally appear in a purely analog source. That stuff gets added to your capture. Nevertheless I have some tips and suggestions. Overall it's not a bad job of cleanup, but the video does suffer from so much work. It's softened and looks overfiltered.

AssumeBFF()
This is not necessary. BFF is the Avisynth default. Most VHS is TFF, but DV reverses the field order.

Crop(4,0,-8,-8).AddBorders(4,0,8,8)
The new borders leave the frame off-center. I would have used AddBorders(6,4,6,4). Using Crop and AddBorders this early, the borders will be modified and discolored by all the filtering that follows. They won't be black by the time it's over.

FixChromaBleeding()
I think FixChromaBleeding works better with non-interlaced frames. Read its script. It masks edges and uses ChromaShift() internally.

ColorYUV(gain_y=20, off_u=10)
Is this gain necessary? I assume you know what gain does. I would have used a contrast increase in Tweak for the brights, and something like ContrastMask() for the darks. Gain here raises your black levels and makes darks look murky in many setups. off_u=10 does add some needed blue. But you need RGB to fix the color on this one. The skin tones here indicate that all the participants have terminal liver disorders.

Tweak(sat=1.1, dither=true, coring=false)
Actually some of your colors are already nearly over-saturated. Maybe you were trying to compensate for the cooked colors DV imposes on VHS.

Levels(11, 1, 255, 16, 235, coring=false, dither=true)
Again, the "11" here suggests that you don't have a calibrated monitor (?). In any case, your black levels are raised a bit high and the low end looks foggy. Much of the "snap" has gone from the image. It will look malnourished on TV, which has a different luminance curve than PC monitors.
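To make that ordering concrete, here is a rough sketch of how the top of the script could be structured -- crop first so the junk edges don't pollute the filters, add borders last so they stay black. The values are only examples carried over from the numbers above, not tuned for this tape: Code:
# BFF is the Avisynth default, so no AssumeBFF() needed for this DV source
Crop(4,0,-8,-8)        # 720x480 -> 708x472: remove edge junk before any filtering
# ... denoising and color filters go here, on the cropped frame ...
AddBorders(6,4,6,4)    # back to a centered 720x480, with borders that stay black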
FFT3DFilter(sigma=3, plane=0, interlaced=true, bw=16, bh=16, ow=8, oh=8)
FFT3DFilter(sigma=4, plane=3, interlaced=true, bw=16, bh=16, ow=8, oh=8)
FFT3DFilter(bt=-1, sharpen=0.4)
This softens the video, which is why a lot of people don't use FFT3D. But it's up to you. QTGMC uses it for its faster presets.

ConvertToYV12(interlaced=true)
Not necessary. DV is YV12, so your video is already YV12, and it's how you lost 50% of your VHS chroma resolution during capture.

DeSpot(pwidth=25, interlaced=true, show=0, color=true, mthres=25)
I don't think this filter is doing anything. I'd think RemoveSpotsMC() is more effective.

SeparateFields()
This and the MergeChroma routines that follow appear copied from other scripts. The MergeChroma business does indeed "work" with SeparateFields, sort of, but for better results you really need to use this technique on deinterlaced video. Besides, I think you can see that it didn't work all that well; there's obvious blue bleeding and chroma shift in the last shot, and more of it in the "after" video than in the "before" version (look at the couple's blue-stained ears). These routines do nothing for the thick, black edge halos. You might try DeHalo_Alpha or FixVHSOversharp for that.

FixRipsP2()
Be careful with this. It didn't remove all the moire in the record album cover. It visibly softened the video further, and it distorted motion. It's a limited-use filter. The motion smoothing settings in QTGMC might have done almost as well without so much softening or distortion. I'd use it only in shots with the noisy album cover, not on the entire video. Again, it's up to you.

#QTGMC(Preset="Slower", EdiThreads=1, FPSDivisor=2)
Why would you deinterlace, and why at this late point? You've already used SeparateFields. You shouldn't need both. Let's say you didn't have a problem with QTGMC and/or dfttest -- why are you using a slow preset? The video already looks thoroughly scrubbed with the other filters.
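If you do end up deinterlacing, a lighter touch keeps more of your filtering work. Something along these lines -- the preset choice is only an example, not a recommendation for this particular tape: Code:
# double-rate deinterlace: FPSDivisor defaults to 1, so both fields are kept
# as 59.94fps progressive frames instead of throwing half of them away
QTGMC(Preset="Fast")
# QTGMC(Preset="Fast", FPSDivisor=2)   # only if you truly need 29.97fps output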
"Slow" is pretty drastic considering all the other scrubbing. I also mention this as a flaw in logic flow: after you've spent so much time filtering the whole video, why are you using FPSDivisor=2 to throw away 50% of your work? If you have to deinterlace why not use a faster and less destructive preset? I assume you used QTGMC at this point in the script because you hinted that you couldn't use it earlier. I guess you've seen some of the documentation on these plugins, but I invite you to have another look. Yep, it's a pain in the neck and a lot of it is discernible only to the guys that wrote it. But keep looking for other usage examples, and don't leave out RemoveDirt, RemoveSpots, MCTemporalDenoise, and the huge HTML and original text in QTGMC. The latter has a special setting for chroma noise, and something like Bifrost and chubbyrain2 are also useful for rainbows. The final looks smoother and less "disturbed" than the original. DV is awful stuff, very thorny to work with. Keep at it. You're making progress. |
1 Attachment(s)
I appreciate all the detailed advice. This video is from a U-Matic tape that was converted by a technician who does these types of things for a living. I asked him if he could output a lossless version, but he said that his U-Matic setup can only output DV. He also said that since DV has a vertical resolution about double that of U-Matic, it is over-specified for it. Although I understand that every pixel matters when it comes to Avisynth filtering, I have few options when it comes to finding a professional who still converts these tapes. I do have a U-Matic deck, but it broke, and given its weight it would cost a fortune just to ship it somewhere.
Regarding your comments, I used AssumeBFF() just to be safe. I used a minimum of 11 for the Levels filter because that was the value of the loose minimum. Correct me please, but I don't want to affect the whole video if there are only a few stray pixels that are blown out. Contrast in Tweak did help, thanks. But ContrastMask seemed to increase brightness, not darken it, when I played with both positive and negative values. While the Histogram in Avisynth indicated that most values were within the safe range after using Levels, the histogram in VirtualDub indicated that the RGB levels were not.

When I opened the original file in MediaInfo, it said it was YUV 4:1:1, but when I ran Info() right after opening the file in Avisynth, it said YV12. That is why I ran the ConvertToYV12. Which do I trust?

Thank you for the suggestions to replace FFT3D and DeSpot with RemoveDirtMC and RemoveSpotsMC, which made the video less cloudy. I will only use FixRipsP2() when absolutely needed (I left it out here). I do not know how aWarpSharp works, or why MergeChroma works without two clips, but it definitely improved the chroma bleeding more after de-interlacing. FixVHSOversharp, Bifrost, and chubbyrain2 did not seem to do much. I tried playing with a script of yours from another post on chroma bleeding, but it did not seem to make as much of a difference. Code:
U = UtoY()

I attached a compressed sample just for forum purposes--might you know why there is a light blue bar on the bottom? If you have any further suggestions please let me know. It doesn't have to be perfect. If I did something that seemed strange it is probably because I don't know what I am doing. Thank you again. Code:
SetFilterMTMode("QTGMC", 2) |
4 Attachment(s)
Skip this post. -admin
Making another effort... I reformatted the reply to your last post into a Rich Text (.rtf) file. .Rtf is a universal format that can be opened in Word or Wordpad. It is attached as "Reply2.zip". The jpg image that the reply refers to is in "frame 371 before and after.zip". The VirtualDub settings .vcf file is attached as "VirtualDub settings.zip". The video samples mentioned are attached as "video samples.zip" Sorry for this song and dance. I think the forum needs smarter scanning software. |
4 Attachment(s)
Thanks for the new capture.
Quote:
Be that as it may, I spent quite a long time trying to get some convincing color balance. I'm still not satisfied, but the image below is the result: [before & after jpg is sent as a .zip file] (Above) At left is the original frame 371. On the right, frame 371 as seen in the attached 480i mp4.

I made and attached two finished versions; one is 59.94fps 480p, similar to your latest sample's format, the other is an interlaced 480i. There must be 50 ways to denoise the original. Your earlier effort with FFT3D was pretty decent; maybe a lighter sigma would be less soft. Anyway, I used QTGMC's EZDenoise followed by MDegrain2, with some RemoveDirtMC to get rid of some spots and do some more smoothing. Getting cleaner edges is a struggle, and little blue blotches keep popping up no matter what you do. The black edge halos refuse to budge; I got tired of ruining other parts of the image with filters that worked only partially or not at all.

The script below does some chroma cleanup using SeparateFields() before running QTGMC. Seems redundant, but I worked the chroma edges and bleed first as separate fields because QTGMC tended to carry some defects forward across multiple frames when it interpolated new images. Another video might not pose the same problem.

90% of the color work was in VirtualDub and RGB. The filters used were ColorCamcorderDenoise, ColorMill, and further tweaking with gradation curves. I had to be careful adding blue, to avoid adding too much bright blue. In most cases I've increased color; remember that in RGB when you increase color you increase brightness, and when you subtract color you decrease brightness. In gradation curves, for the general "RGB" panel there is a little hook at the bottom of the slanted line to make sure everything at RGB 5 and below is really black, avoiding discolored borders. Meanwhile the line has a slight curve that mildly brightens the range between RGB 10 and RGB 64 or so.
Colors were readjusted many many many times, with eye rest breaks every 15 minutes. I saved the settings in a .vcf file so that you can mount the filters and see how they're set up. The .vcf is attached as comateens_trial_VDub_settings.vcf. Quote:
Quote:
Quote:
Quote:
aWarpSharp and the later-and-better aWarpSharp2 actually do warp lines and edges -- it tends to tighten fuzzy edges. With chroma, it tries to tighten color nearer to the closest edge. https://www.animemusicvideos.org/gui...tml#sharpening The filters in the AMV Guide, by the way, aren't just for anime. Most of them are old standbys for every kind of video. After all, dfttest and LSFMod are components in some very heavy-duty filters (QTGMC and MCTemporalDenoise, for instance). You can also use aWarpSharp2 as a sharpener. My favorite is LimitedSharpenFaster, though I don't always sharpen. Quote:
I don't know where the light blue bar on the sample came from. XviD maybe? I haven't used that in 15 years. Be careful with cropping, though, which can mess up color. http://avisynth.nl/index.php/Crop

RemoveDirtMC: a power of 50 seems like overkill. This filter can remove objects at high powers, so check its results carefully. Sometimes you have no choice. Powers of 20 and 30 are normal. 40 and over need a close look.

Quote:
If I did something that seemed strange it is probably because I don't know what I am doing.

Everyone here has been at that point. We learn something new every day. You're getting there. I picked stuff up by doing what you're doing: examining other work, trying things out, and struggling through the docs. Some of the docs will definitely tell you how much you don't know!

Scripting for this weird video is largely a matter of experimentation and patience. Color balance was difficult: colors are corrupt from frame to frame, and there are no clearly white or gray objects to go by. The dark colors worn by the kid on the left look black, but whenever you change other colors the darks look more like dark olive. I used skin tone as a guide. Skin is mostly red, with green at 70% of red and blue at 60 to 70% of green. If things look too red, people mistakenly add more blue. But blue just makes red look pink. To balance red, add cyan (blue + green). Most VHS isn't this complicated (home camera movies excepted).

I must have tried at least a dozen variations of the following script and it could still use some work -- still a bit grainy and reddish. The following is a suggestion: Code:
AviSource("I:\forum\faq\Windsordawson\B\comateens_before.avi") |
Quote:
Quote:
EDIT: Alright, I see your errors. Thanks for posting the RayID. The rule that was tripped may not be vBulletin friendly, so it's been neutered. We'll still get soft errors on our end, for logging purposes, but it shouldn't be visible to you anymore. If you ever run into CloudFlare issues, post about it in the General forum. Timestamps are most helpful, then RayID next. We can get your IP from the post. EDIT2: Attachments now added fine. Thanks. You may now resume your regularly scheduled Avisynth discussion. :) |
@admin:
Thanks for your attention to this. As it is I later scanned those files myself with Kaspersky and with Malwarebytes. Nothing found amiss. In the future if it happens again (darn!) I'll post in the General area. :salute: |
Thank you so much for the examples and detailed information, especially with regard to how MergeChroma works. You have provided me with a lot to play around with. I think this tape (from 1982) is in fact a 2nd-generation dub, but from one U-Matic tape to another. You are right that the guy who converts these probably only outputs to .dv, I believe to save hard drive space. With regard to why the tape has ringing, he gave the following explanation:
Quote:
Are there certain circumstances that are impossible to color correct in Avisynth with ColorYUV and Tweak, and that require ColorMill and gradation curves in VirtualDub? Is it usually worth it despite the loss from converting to RGB? The faces in the samples you provided appear a bit blown out. Is this just necessary in order to get the colors right, because of the brightness that results when you add color in RGB? Also, is there any reason why you avoided using the Fan filter? Thanks again. I don't expect it to be perfect. I am just trying to get as close as possible until LordSmurf has the availability for me to send the tapes to him. |
Thanks for the info on U-Matic media.
Quote:
In this case I felt chroma smearing would look worse after deinterlacing, so I used SeparateFields instead. It seemed to work OK. Of course it could have made no difference or it could have looked worse, so I tested first. In the end I thought motion compensation in deinterlacing didn't make chroma repair look quite as neat -- at least, not in the frames I looked at. Frankly, I don't think anyone would notice a difference.

You also have to be aware that if interlaced chroma appears shifted vertically by only 2 pixels, you can't use SeparateFields and ChromaShift to fix it, because separating into half-height fields means that in each field the chroma is shifted vertically by only 1 pixel instead of 2 -- you would need a more complex ChromaShiftSP for that, which involves shifting by single or subpixel heights and converting to RGB internally. Other purists would insist on doing this chroma edge cleaning in YUY2. But I dislike jockeying back and forth in multiple colorspace conversions.

Meanwhile there are a few filters that seem to work pretty well with SeparateFields, among them RemoveDirtMC and RemoveSpotsMC. A very popular super-filter is MCTemporalDenoise, which can be used with its "interlaced=true" parameter setting, in which case it uses SeparateFields internally. Popular filters that don't work very well with SeparateFields are dfttest and derainbow filters such as Bifrost and chubbyrain2. Quote:
Some very sophisticated (and expensive) video apps can be somewhat more specific in YUV, but they still can't match the flexibility of RGB. Then again, a lot of video doesn't need such complex correction. As for RGB, if done correctly in Avisynth the work has greater precision than in NLEs, and the damage is insignificant unless one insists on going back and forth again and again between colorspaces. Quote:
Quote:
The scripts in this thread are suggestions for different fixes of different problems. Often there are multiple ways to accomplish the same thing. Other users of Avisynth and VirtualDub are always welcome to contribute to these projects. I don't have an exclusive usage license for this stuff. |
Quote:
Thanks again. |
There is some overlap to these ranges:
RGB 0 to 64 (shadow areas, darker colors, darkest areas on fairly white shirts, deep skin shadows): lower quadrant of a curves filter.
RGB 64 to 192 (skin tones from shadow to highlight; green shrubbery; middle gray = 128): middle two quadrants.
RGB 192 to 255 (brightest areas, bright sky, light grays; RGB 255 can look pretty "hot" sometimes): top quadrant.

A little experience will show you how these ranges operate in real life. |
BTW, you should have a tool that reads pixel values in VirtualDub and other apps. One free no-install pixel reader tool that many users keep in a corner of their desktop is csamp.exe (http://www.digitalfaq.com/forum/atta...on-dv-csampzip). That link is to the old version 1.4. A VirtualDub ColorTools histogram is almost always used in these projects, but be careful that you get the correct version. The older v1.4 won't work in Win7 or later. The new version 1.5 works everywhere and is at https://sourceforge.net/projects/vdf...1.5%20update1/.
NOTE: You can keep ColorTools 1.4 and 1.5 in your plugins folder together, but change one of their names to prevent conflicts. I have version 1.4 installed as "clrtools.vdf" and version 1.5 installed as "clrtools15.vdf".

Below is an image from a Cher video project showing how Csamp was used to read pixel values from a mouse cursor on Cher's nose. The Csamp readout panel is in the middle of the image. I guess you know where Cher's nose is. http://www.digitalfaq.com/forum/atta...ame34_curvejpg The other tool shown in the image is the RGB Blue panel of the gradation curves filter. |
versions 1.4 and 1.5 in the above post refer to the ColorTools vdf, not to the pixel sampler.
Another free tool is ColorPic which can be contracted or expanded on the desktop and reads continuously (http://www.iconico.com/colorpic/help.aspx). |
Thanks. It's hard to tell where Cher's nose is given the plastic surgery. If Csamp only works with an older version of ColorTools, it seems installing ColorPic is the easier solution, since I already have that. I would probably forget at some point later why I have two ColorTools installed. However, what would be the purpose of sampling pixels except in the case of something that you think is pure black or pure white? I would think determining if something in the video is in fact middle gray in real life would be difficult.
The video from above does in fact have color bars at the beginning of the tape. However, you mentioned that the video changes color multiple times throughout even the short clip, so I don't know if the color bars would be a good basis. This old post of yours down the page was also useful for explaining ColorMill: https://forum.videohelp.com/threads/...-as-VirtualDub |
Csamp works with all versions of VirtualDub and Windows. It's ColorTools that requires a new version for Win7 and later. I use ClrTools and ColorPic both, just for a change of pace.
If you measure the pixels in black, gray, or white objects, you'll know if they're off-spec. Don't trust your eyes alone. If you can't use a histogram for information you're at a serious disadvantage. There is more than one kind of histogram in YUV and RGB, and there are vectorscopes that provide different info. Histograms measure the number and brightness of pixels in various parts of the spectrum. Vectorscopes measure saturation, which can tell a very revealing story and can explain a lot about problem videos. Avisynth has both for YUV; ColorTools has both for RGB.

Pick up a book on digital color processing with pro tools and see what they say about histograms, YUV, and RGB. I think you won't find a tutorial anywhere that would agree with you on those tools. Unless you understand the behavior of YUV and RGB in greater detail, you'll be frustrated. The Color Correction Handbook by Van Hurkman is a real eye-opener about both YUV and RGB, and free tutorials about color correction in Photoshop and AfterEffects are excellent. You can adapt the principles for use in Avisynth and VirtualDub.

The color bars on VHS tapes are usually not a good guide for achieving color balance, especially when VHS changes color and levels so frequently. Those bars are very general level setters for bulk tape mastering machines. And you still need a pixel reader and histograms if you want to work with them. I've seen posts by people who used them to fix colors; they're really not that accurate. One of the samples was said to look great -- that is, if you like purple hair, pumpkin-colored skin, and dingy shadows. |
That is good to know that Csamp can be used with all versions, thank you. I know how to use histograms and vectorscopes, but I am sure I can learn a lot from a 600+ page book like the one you linked above. Sadly, my library is closed until further notice. I usually do not trust my eyes.
Regarding the pixel sampler, in theory, it should be easy to identify something that is pure white, if it is in fact pure white, like a piece of paper. But what if something that you color correct to be black in a video is really not completely black (i.e. if it is really RGB 10, 10, 0, which still looks black)? Likewise for middle gray. |
The idea with neutral colors like black, gray, and white is to get close to the mark. Not every gray is exactly middle gray. There are darker and lighter variations of all colors. If there are no neutral colors in the frame, you can often assume the color balance from other, similar scenes. Skin tones vary as well, with skin highlights having more green and blue than other, darker areas. Middle skin values are mostly red, followed by green at about 70% of red, then by blue at about 70% of green (for example, a measured skin red of 180 would suggest green near 126 and blue near 88). Brown hair is mostly red and about 85% green (equals yellow) with varying amounts of blue. Thus, if your dark brown hair is mostly blue, something's wrong. If Robert Redford's yellow hair is green, one or two of the other colors need adjusting, or green needs reducing.
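As a small illustration of the "add cyan, don't add blue" idea from earlier in Avisynth terms -- the value here is a placeholder, not a measured correction; the V channel carries the red-difference signal, so lowering it moves reds toward cyan: Code:
# if skin measures too red, nudge V down slightly rather than pushing U (blue) up
ColorYUV(off_v=-4)
# the RGB equivalent in gradation curves is raising green and blue together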
Night scenes and scenes with odd lighting arrangements are a problem, of course. In that case you do the best you can, which simply takes some experience. A scene lit with blue lights is usually done on purpose (and isn't 100% blue) but fixing overall luminance levels first can be useful. It takes some time to get accustomed to color correction but after a short while of working with known principles something just goes "snap" and it all comes together in your head. In the meantime you'll understand why professional colorists are so expensive. You'll also see just how bad VHS color really is. Don't expect the same consistency and perfection you find in a decent digital source. You might have seen the following in another recent thread: Quote:
|
Thank you for directing me to that other useful thread of yours. By middle skin values, are you referring to Caucasian skin tone being red, green at 70 percent of red, and blue at 70 percent of green? For example, I have read that for a female Caucasian, maximum highlights are 50 to 75 percent on the Waveform monitor, while for a Black male it is 15 to 35%.
|
1 Attachment(s)
Skin tones vary by facial position. There are shadows, midtones, and highlights. Darker areas have more red and blue; the lightest areas approach white, with larger portions of each of the three colors. The guide used to study skin tones is the vectorscope, which measures saturation levels. Below is the standard vectorscope that comes with ColorTools. The slanted line in the upper left indicates the area where skin tones locate, with some slight overrun into adjacent and opposite colors such as R, M, Y, B (Red, Magenta, Yellow, Blue), etc. African Americans have more red and blue, Chinese have less blue, and so forth. Colors that extend beyond the inner circle of letters are oversaturated.
http://www.digitalfaq.com/forum/atta...1&d=1587947541 Photo-oriented websites have free color charts and samples of various skin tones; some are in RGB codes but some are in html or printer color codes, so you'll want a pixel reader for those. One such site is at https://www.schemecolor.com/real-ski...or-palette.php, with many sample patches farther down on the web page. |
1 Attachment(s)
Thank you. I have been able to remove most dropouts successfully with RemoveSpotsMC(). However, I could not remove them in the attached sample with that filter, even after applying it multiple times. I tried using DePulse() but either it does not work or I am doing something wrong. I read that it is a spatio-temporal filter, which means that the even and odd fields must be worked on separately. Before this point in the script, most of the filters mentioned above were applied, along with RemoveSpotsMC() three times. I then trimmed this example to try to apply further filters. Any advice is appreciated.
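One common way to run a temporal filter on interlaced material without mixing field parities is to split the separated fields into two temporally consistent streams, filter each, and re-weave. A sketch, assuming the clip is still interlaced at this point -- shown here with RemoveSpotsMC, but the same framing applies to DePulse or similar filters: Code:
SeparateFields()
even = SelectEven().RemoveSpotsMC()   # all first fields, temporally consecutive
odd  = SelectOdd().RemoveSpotsMC()    # all second fields
Interleave(even, odd)                 # restore original field order
Weave()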
Code:
clip1 =Trim(0, 7680) |
Site design, images and content © 2002-2024 The Digital FAQ, www.digitalFAQ.com
Forum Software by vBulletin · Copyright © 2024 Jelsoft Enterprises Ltd.