digitalFAQ.com Forum

digitalFAQ.com Forum (https://www.digitalfaq.com/forum/)
-   Edit Video, Audio (https://www.digitalfaq.com/forum/video-editing/)
-   -   Avisynth with Adobe Premiere, AVSInfoTools errors? (https://www.digitalfaq.com/forum/video-editing/12111-avisynth-adobe-premiere.html)

Winsordawson 08-22-2021 07:57 PM

Avisynth with Adobe Premiere, AVSInfoTools errors?
 
I am reacquainting myself with Avisynth after being away for a while. There is a video with a skewed color tone that requires adjusting the hue angle. That cannot be done in Avisynth, but it is an easy fix in Premiere.

Is it recommended to first adjust the hue angle in Premiere and then export to a lossless format for Avisynth, or to first use Avisynth to correct the other problems and then export losslessly to Premiere? Assuming that the effect in Premiere works on YUV and not RGB (if not, I would use Premiere last), I prefer using Premiere first because Avisynth has more options for final output, and the current color scheme is not easy on the eyes.
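(For the Premiere-first route, the Avisynth side would just open the lossless intermediate. A minimal sketch; the filename and codec are hypothetical, and the conversion line is only needed if Premiere exported RGB:)

Code:

# Hypothetical lossless intermediate exported from Premiere
# (e.g. Lagarith or UT Video); the name is a placeholder.
AVISource("dance_hue_fixed.avi")
AssumeBFF()                                    # tape sources are usually BFF; verify
# If the export is RGB, convert back to YUV for the restoration filters;
# interlaced=true keeps the fields from being blended during chroma subsampling.
ConvertToYV12(interlaced=true, matrix="Rec601")
return last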

On an unrelated note, I installed Avisynth on a new PC and AVSInfoTools would detect either the 32-bit or the 64-bit version, but not both. It would throw an error that there were 32-bit plugins in the 64-bit folder, or vice versa. I tried every combination in the registry, but while I could make one work, I could never make both produce no errors. Finally I ran the installer with only the 32-bit version selected, then ran it again for the 64-bit version. AVSInfoTools now detects both, as well as the location of their respective plugins.

This might be fine for now, but will I encounter problems down the road? When I load a new script in AvsPmod I get error messages of this sort:

Code:

Error parsing Func plugin parameters: unknown character 'n'
Error parsing ConditionalSelect plugin parameters: unknown character 'n'
Error parsing WriteFileEnd plugin parameters: unknown character 'n'
Error parsing WriteFileEnd plugin parameters: + without preceeding argument
Error parsing propSet plugin parameters: unknown character 'a'
Error parsing propSetInt plugin parameters: unknown character 'n'
Error parsing propSetFloat plugin parameters: unknown character 'n'

However, the video still runs. Honestly, time is not so critical that I need the 64 bit version unless it is worth it. Thanks.

lordsmurf 09-08-2021 09:37 PM

Skewed color, interesting, would like to see good sample of this. Both still and clip.

It depends on the problem.
Premiere > Avisynth, probably, for this exact issue.
Avisynth > Premiere is more usual.

While it is important to acknowledge colorspace, too much is made of it in recent years. Even pros don't do the wacky anal adherence to colorspace that some hobbyists and amateurs do. (It's even more ridiculous when they have low quality everywhere else, but "OMG, not colorspace! The horror!") Obviously you want to maintain proper colorspace, but sometimes it's just not going to be possible. There's no reason to stress over it.

I don't like Avisynth+ 32-bit, nothing but problems.
I stick to +64 and official 2.6.
I no longer use the MT, as it has too many errors, including bad frames in the video output.

I never feel the need to use AvsInfotool. The one time I tried, some years ago, it insisted I had an error, and yet I did not. I found it completely useless. (I'm not alone here. Search Google for "Avsinfotools", and on pg1, you'll see summary comments like "Despite the error message in 32-bit AVSInfoTool, QTGMC will now work in both modes" and "I get a Runtime error from AVSInfoTool, but still work fine at AvsPmod". So even without looking for errors, just the name of the tool itself brings up errors. So let that sink in.)

Winsordawson 09-19-2021 08:08 PM

Thanks. I was trying to change the registry entries for my 32 and 64-bit plugins to satisfy avsinfotools but after your experience I may reinstall everything and ignore the errors it brings up.

Yes, I'll be sure to post samples once I have done the best I can with it. I prefer Premiere first since I can color correct like I am in the 21st century, unlike with Avisynth. Plus, it turns out that the color tools in Premiere such as RGB curves and Three-Way Color Corrector work in the YUV space. I'm sure it isn't perfect, but it will do.

By the way, does anyone know where to find the SpotLessUV plugin people are talking about? I found a Doom thread by StainlessS but it only had the Spotless version. Does it work better than the variants of RemoveSpotsMC()?

lordsmurf 09-19-2021 11:04 PM

Something else I recently learned about is this: https://www.videohelp.com/software/A...al-File-System
Not tried it yet.
Quote:

Some of the scenarios where this is useful include:
- Serving data from 32 bit Avisynth and plugins to 64 bit encoders and media players.
- Serving data through file shares to remote systems, potentially running non Windows operating systems.
- Breaking complex scripts into stages that can be run concurrently on multiple systems.
- Serving data to encoders or players that do not support VFW or DirectShow.
Now then, that's not directly addressing anything here, but an adjacent issue, from the usual menu of Avisynth 32/64 problems. And it still won't change you needing x64 filters for x64 versions, x86 for x86. But when it comes to Premiere x64 seeing an x86 serve, this may do it.

Winsordawson 10-11-2021 09:16 AM

Thanks--that is something to look into.

I don't appear to understand the syntax logic of Avisynth. The syntax below works, but I don't know why. I would think that once you write "a", that clip becomes the last/current clip and as such is what gets filtered. But in reality, this syntax filters the "b" video, leaving "a" intact. I then provided an overlay on it. Any thoughts? Thank you.

Code:

video1


separatefields()
a=last  #sets a to be = to video1

a        #sets a as now the current clip?
filter1  #applies filter1 to a
filter2  #applies filter2 to a
b=last  #sets filtered clip a to = b

overlay(a,b)    #overlays clip b onto clip a
weave()


Winsordawson 10-14-2021 12:56 PM

6 Attachment(s)
I might as well post my progress with this. This is a screenshot of the original clip that had to be corrected in Premiere:

SEE original.jpg ATTACHED (Images inserted into the post would not show up).

It is not from a '50s B-movie... There are several issues with the whole video (only a snippet is shown here). First is the greenish hue, which remains even after correcting the hue angle for the skin tone.

color_before.jpg

I tried to remove the green completely from the background, and ColorPic showed that after the correction the background had evenly low levels of R, G, and B. But then the dress became reddish. I can't seem to prevent that if I change the background. Maybe try a secondary color correction on the dress?

color_after.jpg

Ghosting also exists, as seen here
ghost_before.jpg

Since deghosting filters in Avisynth have rarely helped (me at least), I used a secondary color correction to remove the green.

ghost_after.jpg

It is not perfect, but less noticeable. The other issue is the haloing around the woman's shoulder in this photo and in the attached video. This was after using MergeLuma and colorshift. Is there a colorshift equivalent for luma?

luma_issue.jpg

MergeLuma works, but of course it destroys the video and makes it look like an impressionist painting.

Here is my script. Since FixRipsP2() is destructive, I only used it on the left edge of the video. If I keep that overlay in my script too long without commenting it out, Avisynth will crash. KNLMeansCL worked well on the remaining dropouts. Unfortunately, it also blurred the detail in the woman's hair, and my preference is always to keep detail rather than fix everything. TemporalDegrain2() on its default settings seems to help without removing detail.

Strangely, RemoveSpotsMC2() was far worse than RemoveSpotsMC(). RemoveSpotsMC3() is too slow.

Code:

video1 = AVISource("Dance.avi").AssumeBFF
audio1 = video1
video1

#Applies FixRipsP2() to left edge

separatefields()
a=last

a
crop(20,0,30, 0)

FixRipsP2()
b=last

overlay(a,b,x=20, mode="lighten")
weave()


ChromaShift(C=4, L=2)


SeparateFields()


e=SelectEven().RemoveSpotsMC().RemoveSpotsMC()#.KNLMeansCL(d=1, a=2,s=8,h=3.2)
o=SelectOdd().RemoveSpotsMC().RemoveSpotsMC()#.KNLMeansCL(d=1, a=2,s=8,h=3.2)
Interleave(e, o).AssumeBFF()

FAN(lambda=5)


source = last
super=source.MSuper(pel=2, sharp=2)
backward_vec2 = MAnalyse(super, isb = true, delta=2, overlap=4,blksize=8)
backward_vec1 = MAnalyse(super, isb=true, delta=1, overlap=4,blksize=8)
forward_vec1 = MAnalyse(super, isb=false, delta=1, overlap=4,blksize=8)
forward_vec2 = MAnalyse(super, isb= false, delta=2, overlap=4,blksize=8)
MDegrain2(super, backward_vec1, forward_vec1, backward_vec2, forward_vec2, thSAD=400)

TemporalDegrain2()


FixChromaBleedingMod()

MergeChroma(aWarpSharp2(depth=30).aWarpSharp2(depth=30))
TurnRight()
MergeChroma(aWarpSharp2(depth=40).aWarpSharp2(depth=40))
TurnLeft()
SmoothUV(radius=2, field=false)

Weave()

#TemporalDegrain2(postFFT=3) #too strong

crop(22,0,-6,-8)
AddBorders(14,4,14,4)

trim(2450,2600)
return last

The before video and after videos will be attached to the next posts. Thanks for any suggestions, comments, criticisms, and diatribes. :knock:

Winsordawson 10-14-2021 01:11 PM

2 Attachment(s)
Here is the before and after.

lollo2 10-16-2021 11:27 AM

Just a few comments after a quick check:

- no need to use separatefields(), your video seems progressive

- give RemoveDirtSMC a try for the horizontal-stripe defects, with limit=30/50 and two calls

- TemporalDegrain2 uses MDegrain with temporal radius=1 internally, so there is no need to call MDegrain2 before it. If needed, use TemporalDegrain2(degrainTR=3)

- Not sure whether KNLMeansCL is a spatial-only denoiser (to avoid double usage with TD2); if you like it, you may use it to build a prefiltered clip to create better motion vectors; this is not possible with TemporalDegrain2 because it uses the QTGMC approach for motion estimation. It is possible with the old version, i.e. TemporalDegrain(..., denoise=clip.KNLMeansCL(), ...)

- Or you can use KNLMeansCL in post-processing inside TD2 with postFFT=4, avoiding the sometimes destructive dfttest call used with postFFT=3

- Try LSFmod, SeeSaw, or CAS instead of all the aWarpSharp2 calls, possibly including chroma sharpening.

Difficult clip to restore; I did not try all of this on your clip. Hope it helps...

Winsordawson 10-16-2021 12:17 PM

Thanks lollo2--you have given me a lot to work with! I will report back when I go through all of your suggestions.

I noticed that for some reason my FixRipsP2() mask was not actually applied in the after clip. I will make sure it is in the next update, but do you think it is wise to use it if nothing else helps the left side of the clip (to avoid filtering everything)?

Just FYI, the original clip is in fact interlaced.

lollo2 10-16-2021 01:27 PM

Quote:

do you think it is wise to use if nothing else helps the left side of the clip (to avoid filtering everything)
Absolutely. The best approach is to apply a dedicated filter only to the section of the video where it is required, or, as in your case, to the part of the frame requiring it.
If that fails, it may be more appropriate to "mask" the portion having problems with black pixels rather than applying an unneeded filter to the whole video.

Quote:

Just FYI, the original clip is in fact interlaced.
You are right, I am not familiar with BFF videos and I made a mistake.
In this case you can also try something like
Code:

AssumeBFF().nnedi3(field=-2)
<filtering>
AssumeBFF().SeparateFields().SelectEvery(4,0,3).Weave()

it is more effective than Separatefields().SelectXXX.<filtering>
In fact, it is mandatory for spatial-temporal filters, and also recommended for purely temporal filtering.

Winsordawson 10-16-2021 01:48 PM

Thanks again. I will have to read more about it. Most of the info on Avisynth usage is spread all over the place.

Since you seem knowledgeable, would you know where my misunderstanding was regarding post #5 above?

lollo2 10-16-2021 03:45 PM

1 Attachment(s)
Quote:

where my misunderstanding was regarding post #5 above
last is always the video at the current moment in time of the script:

Code:

AviSource("intrepretative dance before.avi")
# your current video is "intrepretative dance before.avi"

separatefields()
# your current video is now "intrepretative dance before.avi" with separate fields

a=last
# your current video above (=last) is now associated to variable a

# <filtering>
# for example filtering is: add text filtering in blue on the upper right corner
subtitle(a,"filtering",size=60,align=9,text_color=color_blue)
# the filtering is applied to last (=a)
# your current video is a+filter

b=last
# your current video above (=last) is now associated to variable b

# thus a="intrepretative dance before.avi" with separate fields
# and b="intrepretative dance before.avi" with separate fields plus filtering

overlay(a,b)
# with this command you place b on top of a

weave()
# with this command you recombine the fields

c=last
# your current video above (=last) is now associated to variable c

stackhorizontal(\
stackvertical(subtitle(a,"a",size=60,align=2),subtitle(b,"b",size=60,align=2)),\
subtitle(c,"c",size=60,align=2))
# this produces the following image:

Attachment 14282

lollo2 10-17-2021 09:30 AM

An alternative way to write the previous code is to make an explicit assignment at each step; this is tedious but helps a lot to identify all the operations in complex scripts.
I always use this approach.
Code:

video_org=AviSource("intrepretative dance before.avi")

video_org_sep=video_org.separatefields()

video_org_sep_filt=subtitle(video_org_sep,"filtering",size=60,align=9,text_color=color_blue)

video_org_sep_filt_ov=overlay(video_org_sep,video_org_sep_filt)

video_restored=video_org_sep_filt_ov.weave()

return(video_restored)


Winsordawson 10-17-2021 05:56 PM

Thanks for the explanation and photo. It appears you are taking on Sanlyn's role! The only issue with the second method is that, unless Avisynth is different, all of those video variables would be stored in memory separately. If the filtering is too heavy, Avisynth might crash. It is easier to keep track of, however.

lollo2 10-18-2021 04:46 AM

AviSynth is a frame server!

http://avisynth.nl/index.php/The_scr...ence_of_events

http://avisynth.nl/index.php/The_scr...e_filter_graph

For temporal radius > 1, it is actually a frames server :wink2:

If you look to "Frame caching and the effect on splitting filter graph's paths" chapter in http://avisynth.nl/index.php/The_scr...considerations you can see how a "cache" filter is created and used after each call.

Winsordawson 10-18-2021 11:04 AM

Thank you for the informative links. I don't think I make use of runtime scripts, so from the performance page it seems the best I can do (absent creating an intermediate file for some filters) to reduce the slowdown in my Overlay with FixRipsP2() is to load all of the plugins manually and turn off autoload. Does that seem right?
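(For reference, manual loading would look something like the sketch below; the paths and plugin names are placeholders, and it assumes autoload is disabled by emptying or renaming the autoload plugins folder:)

Code:

# Placeholder paths -- adjust to your own install.
LoadPlugin("C:\Avisynth\manual-plugins\masktools2.dll")
LoadPlugin("C:\Avisynth\manual-plugins\mvtools2.dll")
# Script-based filters are brought in with Import instead of LoadPlugin:
Import("C:\Avisynth\manual-plugins\FixRipsP2.avsi")

AVISource("Dance.avi").AssumeBFF()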

lollo2 10-18-2021 12:05 PM

It won't be a significant improvement in terms of speed, but in a script I only load the necessary plugins and avoid "autoload" procedures.

You can try some MT modes to speed up your processing if that's an issue (I am not familiar with them, so I cannot help, sorry)

lordsmurf 10-18-2021 02:05 PM

Quote:

Originally Posted by Winsordawson (Post 80429)
to load all of the plugins manually and turn off autoload. Does that seem right?

I don't think I've ever tested this, at least not that I can remember; in fact, I had completely forgotten about it. I don't really see how it could be an issue or affect speed.

In terms of speed, there's not much you can do beyond per-core CPU speed, SSD, and sometimes GPU. Everything else is just slivers of % of speed. Certain MT/cache functions have huge negatives, such as glitching the video, which is why (for example) I never use QTGMC in the 32-bit version (cached MT is bad, uncached too slow).

I'm all for speeding up Avisynth, but some things simply do not work as claimed.

Winsordawson 10-18-2021 08:22 PM

Quote:

Originally Posted by lordsmurf (Post 80434)
In terms of speed, there's not much you can do beyond per-core CPU speed, SSD, and sometimes GPU. Everything else is just slivers of % of speed. Certain MT/cache functions have huge negatives, such as glitching the video, which is why (for example) I never use QTGMC in the 32-bit version (cached MT is bad, uncached too slow).

I'm all for speeding up Avisynth, but some things simply do not work as claimed.

I have a 2 TB SSD, an 8-core i7, and an RTX 3080. But add an overlay mask with FixRips and a few F5s, and I am looking at a crash :laugh: The speed of the refresh is fine, however.

Admittedly, I am only using 32-bit, but I would think that would only affect the speed of the process, not the rate of crashes. I avoid 64-bit because not every filter is supported. I think Sanlyn also avoided the 64-bit version.

Winsordawson 11-21-2021 09:20 PM

5 Attachment(s)
I had some time to apply some of the advice here. TemporalDegrain2 with degrainTR=3 did make an improvement. But while trying to find RemoveDirtSMC() (it is difficult to find the correct version of videoFred's filters, since my antivirus blocks the download from Doom9), I came across SpotLess, which is quite magical in action (and quick). TemporalDegrain2 did not make a significant improvement after SpotLess, and it is also a bit slow. Neither did two calls to RemoveDirtSMC().

I didn't apply nnedi deinterlacing because I want to preserve as much detail as possible without any interpolation of the other fields.

I tried LSFMod and CAS with MergeChroma, but the result seemed worse. Do you have a suggestion on what parameters to use? I also found your YT channel useful (assuming it is the same person).

Code:

video1 = AVISource("interpretative dance color corrected.avi").AssumeBFF
video1

SeparateFields()

e=SelectEven().SpotLess(RadT=5, ThSAD=1100, Blksz=16).RemoveDirtSMC(25)
o=SelectOdd().SpotLess(RadT=5,ThSAD=1100, Blksz=16).RemoveDirtSMC(25)
Interleave(e, o).AssumeBFF()

FAN(lambda=5)

FixChromaBleedingMod()

MergeChroma(aWarpSharp2(depth=30).aWarpSharp2(depth=30))
TurnRight()
MergeChroma(aWarpSharp2(depth=40).aWarpSharp2(depth=40))
TurnLeft()

SmoothUV(radius=2, field=false)

Weave()

crop(22,0,-6,-8)
AddBorders(14,4,14,4)

trim(2450,2600)
return last

Strangely, large block sizes worked better (from 8 to 12 to 16 saw an improvement), but perhaps I am just not understanding the code correctly. ThSAD over 1100 made no difference, so I kept it as low as possible to prevent unwanted changes.

You will not see it in the example clips, but secondary color correction was successful in removing the type of ghost I encountered in this video. Blacks are a bit milky because the lack of a fill light created harsh shadows. Reducing the shadows too much would affect the skin tones that are in the shadows.

Am I mistaken, or is that a spot on the camera itself in the lower third, left hand side of the final video? I am surprised Avisynth kept it!

Thanks!

Before
Attachment 14347

After
Attachment 14348

lollo2 11-22-2021 06:07 AM

Quote:

But whilst trying to find RemoveDirtSMC() ... I came across SpotLess
Yes, SpotLess is a sort of RemoveDirtSMC evolution, and very effective. Be careful with removing small moving objects: some posts in the doom9 forum explain how to use adaptive masks to solve the problem. Difficult to implement, but very nice results!

TemporalDegrain2 is excellent at denoising/grain removal; it may not be fully suitable for your video, but with a high temporal radius (>3) and dfttest or KNLMeansCL post-processing it should clean "standard" noise a lot. I once experimented with a temporal radius of 16 with SMDegrain (a similar filter), and although really, really slow, it was effective for defects where the solution was to "average" across a large number of frames.

Quote:

I didn't apply nnedi deinterlacing because I want to preserve as much detail as possible without any interpolation of the other fields.
The nnedi3 "deinterlacing" I proposed is lossless, meaning that it just builds the progressive frame from the 2 fields; you then apply the "progressive" filter, and interlace back. No interpolation, no loss of details.

Quote:

I tried LSFMod and CAS with MergeChroma but it seemed worse
Sharpening may not give a significant improvement to the look of your videos. Preset "slow" for LSFMod and defaults for CAS are generally the best options, but you have to experiment a lot. It is really source dependent.

Quote:

I also found your YT channel useful (assuming it is the same person).
That channel was built to share experiences and highlight common problems found in my workflow with some friends working on the same project of digital conversion of old VHS/S-VHS TV series. It is somewhat repetitive, but my captures are very similar to each other.


Quote:

Strangely, large block sizes worked better (from 8 to 12 to 16 saw an improvement), but perhaps I am just not understanding the code correctly. ThSAD over 1100 made no difference, so I kept it as low as possible to prevent unwanted changes.
blocksize is a "static" parameter and, given the characteristics of your source, larger values should be better because your defects cover large parts of the image. thSAD is used for the motion vectors (a parameter related to temporal structure, then) and, again given your defects, should not play a role here. Your findings look coherent to me.

And finally, let me say that your final result is not bad at all. Sure, with a lot of time and trying many filters/parameters/steps you may improve it even further, but do not over-process, and stop once you are satisfied, otherwise it will never end ;)

lollo2 11-22-2021 10:10 AM

Quote:

No interpolation, no loss of details.
Obviously I meant "no loss of details". (there is interpolation)

Winsordawson 11-22-2021 08:01 PM

Thanks again. I will try nnedi3 with the double-frame option and see if there is an improvement. You mentioned previously that I can only use KNLMeansCL with an old version of TemporalDegrain2. Do you know which version?

SpotLess has not produced any problems for me so far, perhaps because I chose a low threshold for when not to affect the block. But if it seems problematic I'll use a different strength on the sides. It surely works much faster than FixRipsP2!

You're right that one can go crazy trying to make it perfect, and I am of the "less is more" crowd. Do you have any suggestions for the glow off the woman's shoulder? MergeLuma removes it, but makes it an oil painting. Perhaps by playing with the aWarpSharp2 parameters? (I should add that my above script and videos make use of a ChromaShift(C=4, L=2) that I forgot to include here).

LSFMod and CAS perform the same function as aWarpSharp2 for MergeChroma/MergeLuma, right? It has been hard to find a proper explanation of the effect, but I assumed it works by sharpening the edges and then merging only those parts back into the original. Learning from Doom9 is like going through a garbage can full of shredded notes. :unsure:

I plan to upload a separate, deinterlaced version online, upscaled to HD so it gets a better bitrate by YouTube. Do you recommend that I keep my same script (plus an AddGrain and sharpening effect, which I also forgot to include above) and just use QTGMC without any denoising? Or throw away the above denoisers and use something from QTGMC? I don't want to go crazy because I care more about the interlaced version for archiving purposes.

By the way, if you like the show UFO (given your videos) you may also like The Invaders, although the British were usually less corny. :laugh:

lollo2 11-23-2021 03:09 AM

Quote:

You mentioned previously that I can only use KNLMeansCL on an old version of TemporalDegrain2. Do you know which version?
TemporalDegrain (without 2)

Quote:

It surely works much faster than FixRipsP2
Sure, but for some defects FixRipsP2 is sometimes necessary: https://forum.videohelp.com/threads/...-Distortion%29

Quote:

MergeLuma removes it, but makes it an oil painting
Oil painting/plastic look and the highlighting of halos are the unwanted side effects of denoising/sharpening/restoring, etc...
Not easy to avoid; tuning the filters' parameters, or something like AddGrain (inside the filter if available, or outside it), may help. Some denoisers have an option to re-inject some "new, cleaner noise" based on what has been removed :)
MergeLuma itself should not produce a plastic look, unless you do a temporal/spatial smoothing.

Quote:

LSFMod and CAS perform the same function as aWarpSharp2 for MergeChroma/MergeLuma, right?
The best sharpeners by default do not sharpen chroma. You do it only in special cases, if needed.
Sometimes, to be sure the chroma is not touched, you force MergeChroma in the flow to use the chroma from the video before sharpening.
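(A sketch of that pattern; the LSFmod preset is just one assumed choice:)

Code:

src = last
sharp = src.LSFmod(preset="slow")   # sharpened clip (or CAS(), etc.)
# MergeChroma(clip1, clip2) takes luma from clip1 and chroma from clip2,
# so the chroma comes from the untouched source and is never sharpened.
MergeChroma(sharp, src)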

Quote:

Learning from Doom9 is like going through a garbage can full of shredded notes.
The advantage of reading there is that the "developers" of the filters participate, but often their documentation is weak and they think everybody "speaks" the same technical language, which is obscure for a beginner. On the other hand, I will always be grateful to them for their "free" releases and their effort in making AviSynth and VapourSynth and their filters the wonderful tools that they are!

Quote:

... upscaled to HD so it gets a better bitrate by YouTube
If you want to output a version for YouTube you need to deinterlace. In this case the nnedi3 fake deinterlacing is not needed.
You can just use QTGMC() (a real bob deinterlacer) and possibly remove the denoiser, because QTGMC denoises by itself.
Then upscale to 1440x1080 (if your DAR is 4:3) with nnedi3_rpow2; doing this, YT should introduce fewer problems while compressing your video.
You can save/export your final video with the same lossless codec used for capturing, because YT is able to read it, and this avoids a preliminary lossy compression on your side.
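(Put together, that chain might look like the sketch below; the preset and rfactor are assumptions, not recommendations:)

Code:

# Real bob deinterlace -> double frame rate (e.g. 29.97i -> 59.94p)
QTGMC(preset="Slower")
# Upscale 4:3 SD toward 1440x1080: rfactor=2 doubles the size with nnedi3,
# then fwidth/fheight resize to the final target with the given kernel.
nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=1440, fheight=1080)
# Export with the same lossless codec used for capture.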

However, given the nature of your video, what I would experiment with is whether deinterlacing is more appropriate before or after the filtering (the latter is uncommon). I have the impression that QTGMC may have trouble with the defective frames.

option 1:
Code:

...
QTGMC
<filtering>
<upscale>

option 2:
Code:

...
nnedi3 fake deinterlacing
<filtering>
QTGMC
<upscale>


Winsordawson 11-23-2021 06:59 PM

Thanks--I'll post an update once I implement your advice. The change after FixRipsP2() is quite impressive! But lots of detail is lost (like in the ear and hair), and two function calls on each field would grind my computer to a halt. Hopefully the OP used a mask to apply it just to those lines!

lollo2 11-24-2021 02:38 AM

Yes, that filtering was quite destructive and a last resort option! It was just an indication on how to proceed.

You are right; in general you want to apply a dedicated filter for a specific problem only to the concerned segment of the video, and possibly to a portion of the frame; and also with a "mask" to touch only where needed, but this last part is not that easy.

Winsordawson 11-26-2021 10:50 AM

Yes, that is my goal! Perhaps I am naive, but masking a portion of the frame does not seem too complicated if it is based on a crop (and not hue, saturation, or luminance).

Also, when you suggested using nnedi3 deinterlacing first because it works better, in what way exactly? Do the spatial-temporal and temporal filters work better, or can they be used with less strength? I ask because some people (like Sanlyn) suggest deinterlacing only when necessary, as any method brings a reduction in quality: half of the frames are removed (but then interpolated, so overall about a 25% loss on average, according to LordSmurf).

But if the 25% loss means a less filtered look, it may be worth it. I would also think that deinterlacing would be less damaging to a clip like mine that has less movement, bringing down the loss even further.

lollo2 11-27-2021 03:14 AM

Quote:

masking a portion of frame does not seem too complicated
I was talking about masks on "elements" of the picture, i.e. edges of objects, gradients, a luma subset, certain colors, etc., not on a portion of the frame, which is trivial.

Quote:

Also, when you suggested to use nnedi3 deinterlacing first because it works better, in what way exactly? Do the spatial-temporal and temporal filters work better or can be used with less strength?
Concerning deinterlacing before filtering, here is some discussion we had in the past, explaining the right procedure better than I did:

https://forum.doom9.org/showthread.php?t=86394

http://www.doom9.org/index.html?/cap..._avisynth.html

https://forum.doom9.org/showthread.php?t=167315

https://forum.doom9.org/showthread.php?t=59029

http://forum.doom9.net/showthread.ph...93#post1921993

https://forum.doom9.org/showpost.php...82&postcount=6

Quote:

I ask because there are some people...
I proposed a lossless deinterlace -> filter -> interlace-back approach.
Whether to deinterlace the video for final export is your choice. If you prefer to deinterlace, the previous approach is unnecessary, and it is then better to use QTGMC (before or after filtering, in this special case).
Deinterlacing (QTGMC) is recommended for YouTube upload.

Quote:

...as any method will bring a reduction in quality as half of the frames are removed (but then interpolated, so overall about a 25% on average...
I am not sure I understand what you mean. To simplify, deinterlacing at double frame rate recreates the full frame from the single field by interpolation (and much more when using QTGMC() instead of a simple Bob(), for example)

interlaced frames video, 25 frames (50 fields) per second (25i)
frame1 frame2 frame3 frame4 frame5 frame6 frame7 frame8
A..............C..............E..............G.............. (field 0) even lines
b..............d..............f..............h.............. (field 1) odd lines

Bob() deinterlaced
Nnedi3(field=-2) deinterlaced
QTGMC() deinterlaced
[frame count is doubled (relative position of frames in previous scheme does not match)]
frame1 frame2 frame3 frame4 frame5 frame6 frame7 frame8
A..........B'.........C..........D'.........E..........F'.........G..........H'........ (field 0) even lines
a'.........b..........c'.........d..........e'.........f..........g'.........h......... (field 1) odd lines
x' and X' represent scanlines interpolated from x and X

Winsordawson 12-02-2021 09:54 AM

Thank you for the reading material. The deal-breaker for me is that sharpening really shouldn't be done on interlaced material. Also, it seems that JDL_UnfoldFieldsVertical stacks the even and odd fields together, which I would think still suffers in quality because of the lack of information between lines 1 and 3, lines 3 and 5, etc.

I gather that your method works better because A and a' (interpolated from A) are from the same space and the same time, which helps with spatial-temporal filtering. The original A and b are from a different space and time. SelectEven/SelectOdd provide all of the A lines at once, but because the b lines come in the field between successive A fields, temporal filtering will still suffer.

In your experience, do you think using QTGMC with or without denoising is better, since it appears that I would only have the choice between dfttest and fft3dfilter that don't seem to work as well as TD2 and SpotLess?

Also, do you recommend QTGMC with NNEDI3 as interpolation or something else like "EEDI3+NNEDI3" (EEDI3 with sclip from NNEDI3) to get the benefit of both?

lollo2 12-04-2021 03:45 AM

Quote:

The deal breaker for me is that sharpening really shouldn't be done on interlaced material
Yes, don't do it; some filters have an "interlaced=true" option, but most of them just do a separateFields() internally, so that is not recommended either.

Quote:

Also, it seems that JDL_UnfoldFieldsVertical stacks the even and odd fields together, which I would think would still suffer in quality because of the lack of information between lines 1 and 3, lines 3 and 5, etc.
Not really, because UnfoldFieldsVertical shifts all the even scanlines to the top half of the frame and all odd scanlines to the bottom. More details here https://forum.doom9.org/showthread.p...834#post354834; however, this method is obsolete.

Quote:

SelectEven/SelectOdd provide all of the A lines at once, but because the b lines come in the next field before A appears again, temporal filtering will suffer
For temporal filtering, field separation is not appropriate, nor does it work well on the original interlaced material. A deinterlace is more effective because the filter "works" with more "data".

Quote:

In your experience, do you think using QTGMC with or without denoising is better, since it appears that I would only have the choice between dfttest and fft3dfilter that don't seem to work as well as TD2 and SpotLess?
If you just want to filter your interlaced video, "full QTGMC power" is not necessary: you can use nnedi3 and re-interlace after the filtering, or QTGMC(lossless), but the first is easier.
If you want your final result to be deinterlaced (youtube or whatever), use QTGMC(). QTGMC denoises by itself, in a less effective way than TD2 as you said, so you may want to turn off its intrinsic denoising (which can be done only partially) and use TD2 after QTGMC. If you do that and sharpen afterwards, be careful not to introduce excessive smoothing and a "plastic look".
SpotLess is more a "defect removal" tool than a denoiser. It is generally used before denoising and sharpening.
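As a sketch of the first suggestion (deinterlace with nnedi3, filter, then re-interlace), assuming a BFF source, with the denoiser call standing in for whatever filtering is wanted:

Code:

AssumeBFF()
nnedi3(field=-2)     # double-rate deinterlace: one frame per field
TemporalDegrain2()   # stand-in for the chosen spatial-temporal denoiser
# restore the original interlaced structure
AssumeBFF().SeparateFields().SelectEvery(4,0,3).Weave()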

Quote:

Also, do you recommend QTGMC with NNEDI3 as interpolation or something else like "EEDI3+NNEDI3" (EEDI3 with sclip from NNEDI3) to get the benefit of both?
As pure interpolation, nnedi3 and QTGMC are equivalent, because nnedi3 is used inside QTGMC.
If you are looking for the absolute best procedure by "merging" eedi3 and nnedi3, I can't answer; it depends on your videos whether it is worth it or not.

As a general recommendation, always experiment a lot yourself, and do not blindly trust our suggestions :wink2:

Winsordawson 12-05-2021 09:06 PM

Thanks for all the tips--I'll report back once I apply them. Also, how do you tell if a denoiser is temporal, spatial or spatial-temporal if it is not categorized as such on the Avisynth website or where it was posted?

lollo2 12-06-2021 04:26 AM

Quote:

how do you tell if a denoiser is temporal, spatial or spatial-temporal
If the filter is an AviSynth script and not a compiled dll you can read the code:
- a pure spatial filter is one where the filtering occurs only within a single frame.
- in general, when you see "motion vector" generation (MVTools), there is a temporal radius involved, so the processing spans multiple frames (temporal filtering); spatial filtering may or may not be added on top.
- today, the best denoising filters are spatial-temporal, combining both approaches.

If the filter is a compiled dll, we have to trust the author's documentation (often incomplete) or run some experiments on a reference clip to find out (not easy).

Winsordawson 12-06-2021 09:17 PM

Thanks again. I was afraid that there was no way to determine the type of filter if it were a compiled dll besides guess and check. Luckily those cases are rare (after searching the forums for prior users).

Since you have been so helpful, could you explain why UtoY() and VtoY() have to be used? In the link below, someone uses it to reduce the chroma banding. I understand how it works, but why couldn't there just be filters that allow you to directly adjust the chroma channel, as opposed to copying the values to luma, adjusting the values, and then copying back to the U or V channels? Or an argument in a filter that lets you choose the plane?

https://forum.videohelp.com/threads/...os#post2536626
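For reference, the round-trip pattern being asked about boils down to something like this (Blur is just a placeholder for whatever chroma-only processing is actually wanted):

Code:

u = UToY().Blur(1.0)      # promote the U plane to a luma-only clip and filter it
v = VToY()                # leave the V plane untouched
MergeChroma(YToUV(u, v))  # rebuild the chroma planes and merge onto the original luma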

lollo2 12-07-2021 05:49 AM

My guess is that ttempsmooth (I never used it) processes the chroma/luma planes together when testing for pixel similarity (https://forum.doom9.org/showthread.php?t=77856), while "themaster1" wanted to act only on chroma.

He writes here, so maybe he can explain better...

Winsordawson 12-07-2021 12:21 PM

Quote:

Originally Posted by lollo2 (Post 81049)
My guess is that ttempsmooth (I never used it) processes the chroma/luma planes together when testing for pixel similarity (https://forum.doom9.org/showthread.php?t=77856), while "themaster1" wanted to act only on chroma.

He writes here, so maybe he can explain better...

Thanks. What I mean is that I see this conversion of chroma to luma happen often with different filters, so is there some reason, in how AviSynth was designed, that the programmers don't simply provide an argument for manipulating the chroma directly, versus converting to luma first and then back? I was just curious, but maybe themaster1 knows something.

Winsordawson 12-17-2021 05:35 PM

2 Attachment(s)
I thought I'd share this neat chroma effect that I came across on a bad part of the tape. This kind of color effect would take some serious masking in an NLE! :D

(Since both sides of the coat are the same color).
Attachment 14426

Attachment 14427

lordsmurf 12-17-2021 05:52 PM

He's a Smurf! :laugh:

lollo2 12-18-2021 10:59 AM

Quote:

This kind of color effect would take some serious masking in an NLE!
In AviSynth/VapourSynth you can have a look at this procedure on how to use masks (although on a different subject):
https://forum.videohelp.com/threads/...on#post2640813

For your specific problem, maybe a NLE is more appropriate.

Good luck!

Winsordawson 12-18-2021 05:11 PM

Thanks--I didn't find the tracking error problematic enough to remove; I just thought to share it because of the interesting colors. I have taken your advice and have tried to export an .AVI from VirtualDub based on the script below, but VirtualDub keeps reporting an out-of-bounds memory problem. There were no bad frames detected when I scanned the file (which is only a minute long). Do you have any suggestions I could look into (I am using 32-bit Avisynth+)?

To summarize the script, it was first edited in Premiere Pro with added segments. Those segments were removed so that they were not affected by the filters, then added in afterwards. I upscaled and resized to keep the 4:3 ratio and let YouTube add pillarboxes. Removing the resizing solves the issue, so maybe I am doing something wrong there?

Code:

video1 = AVISource("VW#1_AsherHada.avi").AssumeBFF()
audio1 = video1
video1

ReplaceFramesMC2(192, 1)
ReplaceFramesMC2(194, 5)
ReplaceFramesMC2(268, 2)
ReplaceFramesMC2(315, 8)
ReplaceFramesMC2(359, 3)
ReplaceFramesMC2(500, 4)
ReplaceFramesMC2(520, 5)
ReplaceFramesMC2(5619, 1)

video2 = AudioDub(last, audio1)

seg1 = video2.Trim(3955,4093)
seg2 = video2.Trim(6264,6412)
seg3 = video2.Trim(10052,10200)
seg4 = video2.Trim(13981,14129)
seg5 = video2.Trim(17598,17746)
seg6 = video2.Trim(21230,21359)

vidEdit = video2.Trim(0, 3944) + video2.Trim(4094, 6263) + video2.Trim(6413, 10051) + video2.Trim(10201, 13980) + video2.Trim(14130,17597) + video2.Trim(17747,21230)

vidEdit = vidEdit.Crop(22,0, 0,0).AddBorders(10,0,12,0)

/*Double-checking correct width*/
# return Subtitle(last, String(vidEdit.Width), size = 32) 
last = vidEdit

AssumeBFF().nnedi3(field=-2)

FAN(lambda=5, plus=1, minus=50)
FAN(lambda=5, plus=50, minus=1)
ChromaShift(C=6, L=2)
SpotLess(RadT=3, ThSAD=1000, Blksz=16).SpotLess(RadT=5, ThSAD=300, Blksz=16).RemoveDirtSMC(20)

FixChromaBleedingMod(thr=7, strength=0.8)

MergeChroma(aWarpSharp2(thresh=200,depth=30, type=1,blur=4, chroma=3).aWarpSharp2(thresh=200,depth=30, type=1,blur=4,chroma=3))
TurnRight()
MergeChroma(aWarpSharp2(thresh=200,depth=30,type=1,blur=4).aWarpSharp2(thresh=200,depth=30,type=1,blur=4))
TurnLeft()

SmoothUV(radius=2, field=false)

AssumeBFF().SeparateFields().SelectEvery(4,0,3).Weave()

vidEditFinal = last

vidFinal = vidEditFinal.Trim(0, 3944) + seg1 + vidEditFinal.Trim(3945, 6113) +seg2 + vidEditFinal.Trim(6114, 9756) +seg3 + \
vidEditFinal.Trim(9757, 13529) + seg4 + vidEditFinal.Trim(13530, 17001) + seg5 + vidEditFinal.Trim(17002, 0) + seg6


last = vidFinal.Trim(0, 1800)


QTGMC( Preset="fast", EZKeepGrain=1.0, NoisePreset="Faster", NoiseProcess=0, FPSDivisor=2)
VInverse2()

nnedi3_rpow2(4, cshift="Spline36Resize", fwidth=1920, fheight=1440)

LSFmod(strength=100, preblur="ON")
AddGrainC(var=2)
ColorMatrix(mode="Rec.601->Rec.709")
return last

Also, I would have liked to keep the FPS at 59, but when I do, there is a line of chroma that appears on every other frame. This goes away when I keep only half the frames, but I do not know why.

Thank you.

Winsordawson 12-18-2021 10:43 PM

I forgot to add that QTGMC darkened the whole video at first, but from looking online, the fix was to set NoiseProcess to 0. The video was still darkened a bit after this change, by about 10 percent, crushing blacks. Is there a way around this besides raising the black level before the filter?
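On the workaround mentioned, lifting the black level before the filter could be sketched like this (ColorYUV is one of several ways to do it, and the offset value is a guess to be tuned by eye):

Code:

ColorYUV(off_y=10)   # hypothetical lift; adjust while watching the blacks
QTGMC(Preset="fast", EZKeepGrain=1.0, NoiseProcess=0, FPSDivisor=2)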



Site design, images and content © 2002-2024 The Digital FAQ, www.digitalFAQ.com
Forum Software by vBulletin · Copyright © 2024 Jelsoft Enterprises Ltd.