Avisynth with Adobe Premiere, AVSInfoTools errors?
I am reacquainting myself with Avisynth after being away for a while. I have a video with a skewed color tone that requires adjusting the hue angle. That cannot be done in Avisynth, but it is an easy fix in Premiere.
Is it recommended to first adjust the hue angle in Premiere and then export in a lossless format to Avisynth, or to first use Avisynth to correct the other problems and then export losslessly to Premiere? Assuming the effect in Premiere works on YUV and not RGB (if not, I would use Premiere last), I prefer using Premiere first, because Avisynth has more options for final output and the current color scheme is not easy on the eyes.

On an unrelated note, I installed Avisynth on a new PC, and AVSInfoTool would detect either the 32-bit or the 64-bit version, but not both. It would throw an error that there were 32-bit plugins in the 64-bit folder, or vice versa. I tried every combination in the registry, but while I could make one work, I could never make both produce no error. Finally I ran the installer with only the 32-bit version selected, then ran it again for the 64-bit version. AVSInfoTool now detects both, as well as the locations of their respective plugins. This might be fine for now, but will I encounter problems down the road? When I load a new script in AvsPmod I get error messages of the sort Code:
Error parsing Func plugin parameters: unknown character 'n'
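For reference, the autoload folders AVSInfoTool checks come from the registry, and on a 64-bit system the 32-bit and 64-bit builds read different registry views, so each needs its own plugin path. A sketch of a typical layout (the exact paths and a side-by-side install are my assumption, not taken from this post):

Code:
; read by 64-bit AviSynth+
HKEY_LOCAL_MACHINE\SOFTWARE\AviSynth
    "plugindir2_5" = "C:\Program Files (x86)\AviSynth+\plugins64"
    "plugindir+"   = "C:\Program Files (x86)\AviSynth+\plugins64+"

; read by 32-bit AviSynth+ (WOW64 registry view)
HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\AviSynth
    "plugindir2_5" = "C:\Program Files (x86)\AviSynth+\plugins"
    "plugindir+"   = "C:\Program Files (x86)\AviSynth+\plugins+"

If both views point at the same folder, one build will always complain about the other's plugins.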
Skewed color, interesting, would like to see a good sample of this. Both still and clip.
It depends on the problem. Premiere > Avisynth, probably, for this exact issue. Avisynth > Premiere is more usual.

While it's important to acknowledge colorspace, too much is made of it in recent years. Even pros don't practice the wacky anal adherence to colorspace that some hobbyists and amateurs do. (It's even more ridiculous when they have low quality everywhere else, but "OMG, not colorspace! The horror!") Obviously you want to maintain proper colorspace, but sometimes it's just not going to be possible. There's no reason to stress over it.

I don't like Avisynth+ 32-bit, nothing but problems. I stick to +64 and official 2.6. I no longer use MT, as it has too many errors, including bad frames in the video output.

I never feel the need to use AVSInfoTool. The one time I tried, some years ago, it insisted I had an error, and yet I did not. I found it completely useless. (I'm not alone here. Search Google for "AVSInfoTool", and on page 1 you'll see summary comments like "Despite the error message in 32-bit AVSInfoTool, QTGMC will now work in both modes" and "I get a Runtime error from AVSInfoTool, but still work fine at AvsPmod". So even without looking for errors, just the name of the tool itself brings up errors. Let that sink in.)
Thanks. I was trying to change the registry entries for my 32-bit and 64-bit plugins to satisfy AVSInfoTool, but after your experience I may reinstall everything and ignore the errors it brings up.
Yes, I'll be sure to post samples once I have done the best I can with it. I prefer Premiere first, since I can color correct like I am in the 21st century, unlike with Avisynth. Plus, it turns out that the color tools in Premiere, such as RGB Curves and the Three-Way Color Corrector, work in YUV space. I'm sure it isn't perfect, but it will do. By the way, does anyone know where to find the SpotLessUV plugin people are talking about? I found a Doom9 thread by StainlessS, but it only had the SpotLess version. Does it work better than the variants of RemoveSpotsMC()?
Something else I recently learned about is this: https://www.videohelp.com/software/A...al-File-System
Not tried it yet. Quote:
Thanks--that is something to look into.
I appear to not be understanding the syntax logic of Avisynth. The syntax below works, but I don't know why. I would think that once you write "a", the last/current clip becomes "a" and as such is being filtered. But in reality, this syntax filters the "b" video, leaving "a" intact. I then applied an overlay on it. Any thoughts? Thank you. Code:
video1
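For what it's worth, the usual explanation is Avisynth's implicit "last" variable: an unassigned filter call reads from and writes to last, while an assignment like a = ... leaves last untouched. A minimal sketch of the behavior (clip names and filters are made up for illustration, not from the original script):

Code:
a = AVISource("tape.avi")      # assignment: "last" is NOT set here
b = a.Crop(0, 0, 320, 0)       # another assignment; "last" is still unset
b                              # bare expression: NOW last = b
Tweak(sat=0.0)                 # no explicit clip, so this filters last (i.e. b)
Overlay(a, last, x=0, y=0)     # "a" was never filtered; the result becomes the new last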
6 Attachment(s)
I might as well post my progress with this. This is a screenshot of the original clip that had to be corrected in Premiere:
SEE original.jpg ATTACHED (images inserted into the post would not show up). It is not from a 50's b-movie....

There are several issues with the whole video (only a snippet is shown here). First is the greenish hue, even after correcting the hue angle for the skin tone. color_before.jpg I tried to remove the green completely from the background, and ColorPic showed that after the correction the background had even, low levels of R, G, and B. But then the dress became reddish. I can't seem to prevent that if I change the background. Maybe try a secondary color correction on the dress? color_after.jpg

Ghosting also exists, as seen here: ghost_before.jpg Since deghosting filters in Avisynth have rarely helped (me, at least), I used a secondary color correction to remove the green. ghost_after.jpg It is not perfect, but less noticeable.

The other issue is the haloing around the woman's shoulder in this photo and in the attached video. This was after using MergeLuma and ChromaShift. Is there a ChromaShift equivalent for luma? luma_issue.jpg MergeLuma works, but of course it destroys the video and makes it look like an impressionist painting.

Here is my script. Since FixRipsP2() is destructive, I only used it on the left edge of the video. If I keep that overlay in my script too long without commenting it out, Avisynth will crash. KNLMeansCL worked well on the remaining dropouts. Unfortunately, it also blurred the detail in the woman's hair, and my preference is always to keep detail rather than fix everything. TemporalDegrain2() on its default settings seems to help without removing detail. Strangely, RemoveSpotsMC2() was far worse than RemoveSpotsMC(). RemoveSpotsMC3() is too slow. Code:
video1 = AVISource("Dance.avi").AssumeBFF
2 Attachment(s)
Here is the before and after.
Just a few comments after a quick check:

- No need to use SeparateFields(); your video seems progressive.
- Give RemoveDirtSMC a try for the horizontal stripe defects, with limit=30/50 and 2 calls.
- TemporalDegrain2 uses MDegrain with temporal radius=1 internally, so there is no need to call MDegrain2 before it. Eventually use TemporalDegrain2(degrainTR=3).
- Not sure if KNLMeansCL is a spatial-only denoiser (to avoid double usage with TD2); if you like it, you may use it to build a prefiltered clip to create better motion vectors. This is not possible with TemporalDegrain2 because it uses the QTGMC approach for motion estimation. It is possible with the old version, i.e. TemporalDegrain(..., denoise=clip.KNLMeansCL(), ...).
- Or you can use KNLMeansCL in post-processing inside TD2 with postFFT=4, avoiding the sometimes destructive dfttest call used with postFFT=3.
- Try LSFmod, SeeSaw or CAS instead of all the aWarpSharp2 calls, eventually including chroma sharpening.

Difficult clip to restore. I did not try all this on your clip; hope it helps...
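A minimal sketch of the two TemporalDegrain variants mentioned above (parameter values are illustrative and untested on this clip; the denoise= argument is taken from the suggestion itself):

Code:
# option A: newer TemporalDegrain2 with a larger temporal radius
filtered = source.TemporalDegrain2(degrainTR=3)

# option B: old TemporalDegrain, feeding KNLMeansCL as the prefiltered
# clip used for motion estimation
filtered = source.TemporalDegrain(denoise=source.KNLMeansCL())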
Thanks lollo2--you have given me a lot to work with! I will report back when I go through all of your suggestions.
I noticed that for some reason my FixRipsP2() mask was not actually applied in the after clip. I will make sure it is in the next update, but do you think it is wise to use if nothing else helps the left side of the clip (to avoid filtering everything)? Just FYI, the original clip is in fact interlaced.
Quote:
If you fail, it may be more appropriate to "mask" the portion having problems with black pixels rather than applying an unneeded filter to the whole video. Quote:
In this case you can also try something like Code:
AssumeBFF().nnedi3(field=-2)

In fact, it is mandatory for spatial-temporal filters, but also recommended for pure temporal filtering.
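If the goal is to run temporal filters on an interlaced source, the full round trip might look like this (a sketch, assuming the nnedi3 plugin is loaded; the re-interlacing tail is the standard SeparateFields/SelectEvery/Weave idiom):

Code:
AVISource("tape.avi")
AssumeBFF()
nnedi3(field=-2)      # bob to double-rate progressive frames
# ... temporal / spatial-temporal filtering here ...
AssumeBFF()           # re-interlace back to the original field rate
SeparateFields()
SelectEvery(4, 0, 3)
Weave()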
Thanks again. I will have to read more about it. Most of the info on Avisynth usage is spread all over the place.
Since you seem knowledgeable, would you know where my misunderstanding lies regarding post #5 above?
1 Attachment(s)
Quote:
Code:
AviSource("intrepretative dance before.avi")
An alternative way to write the previous code is to make an explicit assignment at each step; that is tedious, but it helps a lot to clearly identify all the operations in complex scripts.
I always use this approach. Code:
video_org=AviSource("intrepretative dance before.avi")
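The explicit-assignment style might continue like this (only the first line comes from the original post; the rest of the chain is invented for illustration):

Code:
video_org   = AviSource("intrepretative dance before.avi")
video_bff   = video_org.AssumeBFF()
video_clean = video_bff.RemoveSpotsMC()
video_sharp = video_clean.LSFmod()
return video_sharp        # every intermediate step has its own name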
Thanks for the explanation and photo. It appears you are taking on Sanlyn's role! The only issue with the second method is that, unless Avisynth is different, all of those clip variables would be stored in memory separately. If the filtering is too heavy, Avisynth might crash. It is easier to keep track of, however.
AviSynth is a frame server!
http://avisynth.nl/index.php/The_scr...ence_of_events
http://avisynth.nl/index.php/The_scr...e_filter_graph

For temporal radius > 1, it is actually a "frames" server :wink2:

If you look at the "Frame caching and the effect on splitting filter graph's paths" chapter in http://avisynth.nl/index.php/The_scr...considerations you can see how a "cache" filter is created and used after each call.
Thank you for the informative links. I don't think I make use of runtime scripts, so from the performance page it seems that the best I can do (absent creating an intermediate file for some filters) for reducing the slowdown in my Overlay with FixRipsP2() is to load all of the plugins manually and turn off autoload. Does that seem right?
It won't be a significant improvement in terms of speed, but in a script I only load the necessary plugins and avoid the "autoload" procedure.

You can try some MT modes to speed up your processing if that's an issue (I am not familiar with them, so I cannot help, sorry).
Quote:
In terms of speed, there's not much you can do beyond per-core CPU speed, SSD, and sometimes GPU. Everything else is just slivers of a % of speed. Certain MT/cache functions have huge negatives, such as glitching the video, which is why (for example) I never use QTGMC in the 32-bit version (cached MT is bad, uncached is too slow). I'm all for speeding up Avisynth, but some things simply do not work as claimed.
Quote:
Admittedly, I only use 32-bit, but I would think that would only affect the speed of the process, not the rate of crashes. I avoid 64-bit because not every filter is supported. I think Sanlyn also avoided the 64-bit version.
5 Attachment(s)
I had some time to apply some of the advice here. TemporalDegrain2 with degrainTR=3 did make an improvement. But whilst trying to find RemoveDirtSMC() (it is difficult to find the correct version of videoFred's filters, since my antivirus blocks the download from Doom9), I came across SpotLess, which is quite magical in action (and quick). TemporalDegrain2 did not make a significant improvement after SpotLess, and it is also a bit slow. Neither did two calls to RemoveDirtSMC().
I didn't apply nnedi deinterlacing because I want to preserve as much detail as possible without any interpolation of the other fields. I tried LSFMod and CAS with MergeChroma, but it seemed worse. Do you have a suggestion on what parameters to use? I also found your YT channel useful (assuming it is the same person). Code:
video1 = AVISource("interpretative dance color corrected.avi").AssumeBFF

You will not see it in the example clips, but secondary color correction was successful in removing the type of ghost I encountered in this video. Blacks are a bit milky because the lack of a fill light created harsh shadows. Reducing the shadows too much would affect the skin tones that are in the shadows.

Am I mistaken, or is that a spot on the camera itself in the lower third, left-hand side of the final video? I am surprised Avisynth kept it! Thanks!

Before Attachment 14347 After Attachment 14348
Quote:
TemporalDegrain2 is excellent at denoising/grain removal; it may not be fully suitable for your video, but with a high temporal radius (>3) and dfttest or KNLMeansCL post-processing it should clean the "standard" noise a lot. I experimented with a temporal radius of 16 with SMDegrain (a similar filter) once, and although really, really slow, it was effective for defects where the solution was to "average" across a large number of frames. Quote:
Quote:
Quote:
Quote:
And finally, let me say that your final result is not bad at all. Sure, with a lot of time and trying many filters/parameters/steps you may improve it even further, but do not over-process, and stop once you are satisfied, otherwise it will never end ;)
Quote:
Thanks again. I will try the nnedi with double frame option and see if there is an improvement. You mentioned previously that I can only use KNLMeansCL on an old version of TemporalDegrain2. Do you know which version?
SpotLess has not produced any problems for me so far, perhaps because I chose a low threshold for when not to affect the block. But if it seems problematic, I'll use a different strength on the sides. It surely works much faster than FixRipsP2! You're right that one can go crazy trying to make it perfect, and I am of the "less is more" crowd.

Do you have any suggestions for the glow off the woman's shoulder? MergeLuma removes it, but makes it an oil painting. Perhaps by playing with the aWarpSharp2 parameters? (I should add that my above script and videos make use of a ChromaShift(C=4, L=2) that I forgot to include here.) LSFMod and CAS perform the same function as aWarpSharp2 for MergeChroma/MergeLuma, right? It has been hard to find a proper explanation of the effect, but I assumed it works by sharpening the edges and then merging only those parts into the original. Learning from Doom9 is like going through a garbage can full of shredded notes. :unsure:

I plan to upload a separate, deinterlaced version online, upscaled to HD so it gets a better bitrate from YouTube. Do you recommend that I keep my same script (plus an AddGrain and a sharpening effect, which I also forgot to include above) and just use QTGMC without any denoising? Or throw away the above denoisers and use something from QTGMC? I don't want to go crazy, because I care more about the interlaced version for archiving purposes.

By the way, if you like the show UFO (given your videos), you may also like The Invaders, although the British were usually less corny. :laugh:
Quote:
Quote:
Quote:
Not easy to avoid it; a tune of the filters' parameters, or something like AddGrain (inside the filter if available, or outside it), may help. Some denoisers have an option to re-inject some "new, cleaner noise" based on what has been removed :) MergeLuma itself should not produce a plastic look, except if you do a temporal/spatial smoothing. Quote:
Sometimes, to be sure that chroma is not touched, you force MergeChroma in the flow to use the chroma from the video before sharpening. Quote:
Quote:
You can just use QTGMC() (a real bob deinterlacer) and eventually remove the denoiser, because QTGMC denoises by itself. Then upscale to 1440x1080 (if your DAR is 4:3) with nnedi3_rpow2; doing this, YT should introduce fewer problems while compressing your video. You can save/export your final video with the same lossless codec used for capturing, because YT is able to read it, and this avoids a preliminary lossy compression on your side.

However, what I would experiment with, given the nature of your video, is whether the deinterlacing is more appropriate before or after the filtering (the latter is uncommon). I have the impression that QTGMC may have trouble with the defective frames. option 1: Code:
... Code:
...
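The two orderings being compared might be sketched like this (the filter steps are placeholders; the original code blocks were not preserved in this copy of the thread):

Code:
# option 1: deinterlace first, then filter the progressive frames
AVISource("tape.avi").AssumeBFF()
QTGMC()
# ... defect removal / denoising on full progressive frames ...

# option 2 (uncommon): filter the interlaced stream first, then deinterlace
AVISource("tape.avi").AssumeBFF()
# ... defect removal / denoising, e.g. via SeparateFields(), here ...
QTGMC()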
Thanks--I'll post an update once I try to implement your advice. That change after FixRipsP2() is quite impressive! But lots of detail is lost (like in the ear and hair), and two function calls on each field would grind my computer to a halt. Hopefully the OP used a mask to apply it just to those lines!
Yes, that filtering was quite destructive and a last-resort option! It was just an indication of how to proceed.

You are right: in general you want to apply a dedicated filter for a specific problem only to the concerned segment of the video, and possibly only to a portion of the frame; and also with a "mask" to touch only where needed, but this last part is not that easy.
Yes, that is my goal! Perhaps I am naive, but masking a portion of the frame does not seem too complicated if it is based on a crop (and not on hue, saturation, or luminance).

Also, when you suggested using nnedi3 deinterlacing first because it works better, in what way exactly? Do the spatial-temporal and temporal filters work better, or can they be used with less strength? I ask because some people (like Sanlyn) suggest deinterlacing only when necessary, as any method brings a reduction in quality: half of the frames are removed (but then interpolated, so the overall loss is about 25% on average, according to LordSmurf). But if the 25% loss means a less-filtered look, it may be worth it. I would also think that deinterlacing would be less damaging to a clip like mine that has little movement, bringing the loss down even further.
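Crop-based regional filtering is indeed straightforward; a sketch (the 32-pixel width is a placeholder) that repairs only the left strip and pastes it back, in the spirit of the FixRipsP2 overlay discussed earlier:

Code:
source = AVISource("tape.avi")
strip  = source.Crop(0, 0, 32, 0)     # left 32-pixel strip, full height
fixed  = strip.FixRipsP2()            # heavy repair only on the strip
Overlay(source, fixed, x=0, y=0)      # paste the repaired strip back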
Quote:
Quote:
https://forum.doom9.org/showthread.php?t=86394
http://www.doom9.org/index.html?/cap..._avisynth.html
https://forum.doom9.org/showthread.php?t=167315
https://forum.doom9.org/showthread.php?t=59029
http://forum.doom9.net/showthread.ph...93#post1921993
https://forum.doom9.org/showpost.php...82&postcount=6

Quote:
Whether or not to deinterlace the video for final export is your choice. If you prefer to deinterlace, the previous step is useless, and then it is better to use QTGMC (before or after filtering, in this special case). Deinterlacing (QTGMC) is recommended for YouTube upload. Quote:
interlaced frames video, 25 frames (50 fields) per second (25i)

frame1 frame2 frame3 frame4 frame5 frame6 frame7 frame8
A......C......E......G......   (field 0) even lines
b......d......f......h......   (field 1) odd lines

Bob() / Nnedi3(field=-2) / QTGMC() deinterlaced
[frame count is doubled (relative position of frames in the previous scheme does not match)]

frame1 frame2 frame3 frame4 frame5 frame6 frame7 frame8
A.....B'.....C.....D'.....E.....F'.....G.....H'.....   (field 0) even lines
a'.....b.....c'.....d.....e'.....f.....g'.....h.....   (field 1) odd lines

x' and X' represent scanlines interpolated from X and x
Thank you for the reading material. The deal-breaker for me is that sharpening really shouldn't be done on interlaced material. Also, it seems that JDL_UnfoldFieldsVertical stacks the even and odd fields together, which I would think still suffers in quality because of the lack of information between lines 1 and 3, 3 and 5, etc.
I gather that your method works better because A and a' (interpolated from A) are from the same space and the same time, which helps with spatial-temporal filtering. The original A and b are from a different space and time. SelectEven/SelectOdd provide all of the A lines at once, but because the b lines come in the next field before A appears again, temporal filtering will suffer.

In your experience, do you think using QTGMC with or without denoising is better, since it appears I would only have the choice between dfttest and fft3dfilter, which don't seem to work as well as TD2 and SpotLess? Also, do you recommend QTGMC with NNEDI3 as the interpolation, or something else like "EEDI3+NNEDI3" (EEDI3 with sclip from NNEDI3) to get the benefit of both?
Quote:
Quote:
Quote:
Quote:
If you want your final result to be deinterlaced (YouTube or whatever), use QTGMC(). QTGMC denoises by itself, in a less effective way than TD2 as you said, so you may want to turn off its intrinsic denoising, which can only be done partially, and use TD2 after QTGMC. By doing so, and sharpening after, be careful not to introduce excessive smoothing and a "plastic look". SpotLess is more a "defect removal" tool than a denoiser. It is generally used before denoising and sharpening. Quote:
If you are looking for the absolute best procedure by "merging" eedi3 and nnedi3, I can't answer; whether it is worth it or not depends on your videos. As a general recommendation, always experiment a lot yourself, and do not blindly trust our suggestions :wink2:
Thanks for all the tips--I'll report back once I apply them. Also, how do you tell if a denoiser is temporal, spatial or spatial-temporal if it is not categorized as such on the Avisynth website or where it was posted?
Quote:
- A pure spatial filter is one where the filtering only occurs inside the single frame.
- In general, when you see "motion vector" generation (MVTools), there is a temporal radius involved, so the processing concerns multiple frames (temporal filtering); spatial filtering may or may not be added on top.
- Today, the best denoising filters are spatial-temporal, combining both approaches.

If the filter is a compiled dll, we have to trust the author's documentation (often incomplete) or run some experiments on a reference clip to understand (not easy).
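One crude "reference clip" experiment (my own suggestion, not from the thread): build a clip in which a single frame differs, run the filter, and check whether the neighboring frames change. If they do, the filter has a temporal component. The filter name below is a placeholder for whatever dll is under test:

Code:
# black clip with one white frame in the middle (frame 10 of 21)
black = BlankClip(length=21, width=320, height=240, pixel_type="YV12", color=$000000)
white = BlankClip(length=1,  width=320, height=240, pixel_type="YV12", color=$FFFFFF)
test  = black.Trim(0, 9) + white + black.Trim(11, 20)

# run the filter under investigation, then inspect frames 9 and 11:
# if they are no longer pure black, the filter averaged across time
test.SomeDenoiser()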
Thanks again. I was afraid that there was no way to determine the type of filter if it were a compiled dll besides guess and check. Luckily those cases are rare (after searching the forums for prior users).
Since you have been so helpful, could you explain why UtoY() and VtoY() have to be used? In the link below, someone uses them to reduce chroma banding. I understand how it works, but why couldn't there just be filters that let you directly adjust a chroma channel, as opposed to copying the values to luma, adjusting them, and then copying them back to the U or V channel? Or an argument in a filter that lets you choose the plane? https://forum.videohelp.com/threads/...os#post2536626
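The round-trip pattern in question looks roughly like this (a sketch; SomeLumaFilter is a placeholder for whatever luma-only filter is being borrowed for chroma duty):

Code:
source = AVISource("tape.avi")
u = source.UtoY().SomeLumaFilter()   # U plane promoted to a greyscale clip
v = source.VtoY().SomeLumaFilter()   # same for the V plane
YtoUV(u, v, source)                  # recombine: new U, new V, luma from the original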
My guess is that TTempSmooth (I never used it) processes the chroma/luma planes together when testing for pixel similarity (https://forum.doom9.org/showthread.php?t=77856), while "themaster1" wanted to act only on chroma.

He posts here, so maybe he can explain it better...
2 Attachment(s)
I thought I'd share this neat chroma effect that I came across on a bad part of the tape. This kind of color effect would take some serious masking in an NLE! :D
(Since both sides of the coat are the same color.) Attachment 14426 Attachment 14427
He's a Smurf! :laugh:
Quote:
https://forum.videohelp.com/threads/...on#post2640813

For your specific problem, maybe an NLE is more appropriate. Good luck!
Thanks--I didn't find the tracking error problematic enough to remove; I just thought I'd share it because of the interesting colors. I have taken your advice and tried to export an .AVI from VirtualDub based on the script below, but VirtualDub keeps reporting an out-of-bounds memory error. No bad frames were detected when I scanned the file (which is only a minute long). Do you have any suggestions I could look into (I am using 32-bit Avisynth+)?
To summarize the script: the video was first edited in Premiere Pro, with added segments. Those segments were removed so that they were not affected by the filters, then added back in afterwards. I upscaled and resized to keep the 4:3 ratio and let YouTube add pillarboxes. Removing the resizing solves the issue, so maybe I am doing something wrong there? Code:
video1 = AVISource("VW#1_AsherHada.avi").AssumeBFF

Thank you.
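For comparison, a 4:3 SD-to-HD upscale along the lines suggested earlier in the thread might look like this (a sketch; the 720x576 source size is an assumption, and nnedi3_rpow2 doubles by powers of two before the final snap to 1440x1080):

Code:
# assumes a 720x576 4:3 source, already deinterlaced with QTGMC()
nnedi3_rpow2(rfactor=2)          # neural-net doubling: 720x576 -> 1440x1152
Spline36Resize(1440, 1080)       # snap to the 4:3 HD frame; YouTube adds pillarboxes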
I forgot to add that QTGMC darkened the whole video at first, but from looking online the fix was to set NoiseProcess to 0. The video was still darkened a bit after this change, by about 10 percent, crushing blacks. Is there a way around this besides raising the black level before the filter?
Site design, images and content © 2002-2024 The Digital FAQ, www.digitalFAQ.com
Forum Software by vBulletin · Copyright © 2024 Jelsoft Enterprises Ltd.