#1  
06-18-2018, 09:54 AM
KalterStein KalterStein is offline
Free Member
 
Join Date: May 2018
Posts: 5
Thanked 0 Times in 0 Posts
I am very new to ripping VHS and storing them but here is what I have so far and the issue I am hitting.

I am ripping on an i7 64bit XP Pro machine with 8GB of RAM using an old school Tevion USB Capture device with the USB2800 drivers.

The captured masters are 720x480 @ 29.970 fps with HuffYUV, and stereo audio at 48 kHz.
I don't remember the VCR model (I am not at home right now to check). The master captures look good to me.

I am then using AviSynth scripts to filter/process the capture and pulling the avs into VirtualDub to render the production copy as MKV using x264 with default settings.

Here is where the issue arises...
The MKV plays just fine and looks absolutely brilliant on my 70" in the living room. All is good until around the last 30 minutes of the title, when the video and audio go insane: the image is entirely scrambled and blocky, and the audio turns into static/screeching. I don't have a sample to upload right now, but I have had the same exact result on two conversions so far.

I have no idea what is going wrong. The rendering/dubbing/conversion completes without issue on the machine, no errors reported, etc. It is just that on each attempt (14 hrs running) the end of the title goes nuts.

I will try to upload a sample later but if anyone has any ideas right off the bat, please let me know. I will also attach my avisynth scripts later too.

-- merged --

I haven't been able to upload samples yet. Been tied down with work/family. I did another test converting an AVI encoded with lagarith to x264 and same exact result. The MKV plays but near the end everything goes nuts.

The master avi doesn't have the issue though. Just the conversion.

-- merged --

Here is the AVS Script I am using. I have tried it on 4 different files all with the same result. The movie plays great until about 50 mins or so in then goes nuts. Green blocking, scrambled image, screeching audio etc.

These things look surprisingly good on our living room 70 inch after this script. So I really want this thing to work.

Code:
import("C:\Program Files (x86)\AviSynth\plugins\TemporalDegrain.avs")
AviSource ("C:\Documents and Settings\Administrator\My Documents\beast.avi")
ConvertToYV12(interlaced=true)
ColorYUV(gamma_y=100, off_y=-16, cont_y=-20, cont_u=40, cont_v=40)
ConvertToRGB(interlaced=true)
RGBAdjust(r=0.90, b=1.1)
ConvertToYV12(interlaced=true)

Spline64Resize(width/2, height)
QTGMC(preset="fast")
Dehalo_alpha(rx=2, ry=1)
TemporalDegrain(SAD1=200, SAD2=150, sigma=8)
TurnRight().nnedi3(dh=true).TurnLeft()
aWarpSharp(depth=5)
Sharpen(0.3, 0.0)
Crop(8,0,-10,-10).AddBorders(0,0,0,0)
SelectEven()

function DeHalo_alpha(clip clp, float "rx", float "ry", float "darkstr", float "brightstr", float "lowsens", float "highsens", float "ss")
{
rx        = default( rx,        2.0 )
ry        = default( ry,        2.0 )
darkstr   = default( darkstr,   1.0 )
brightstr = default( brightstr, 1.0 )
lowsens   = default( lowsens,    50 )
highsens  = default( highsens,   50 )
ss        = default( ss,        1.5 )

LOS = string(lowsens)
HIS = string(highsens/100.0)
DRK = string(darkstr)
BRT = string(brightstr)
ox  = clp.width()
oy  = clp.height()
uv  = 1
uv2 = (uv==3) ? 3 : 2

halos  = clp.bicubicresize(m4(ox/rx),m4(oy/ry)).bicubicresize(ox,oy,1,0)
are    = mt_lutxy(clp.mt_expand(U=uv,V=uv),clp.mt_inpand(U=uv,V=uv),"x y -","x y -","x y -",U=uv,V=uv)
ugly   = mt_lutxy(halos.mt_expand(U=uv,V=uv),halos.mt_inpand(U=uv,V=uv),"x y -","x y -","x y -",U=uv,V=uv)
so     = mt_lutxy( ugly, are, "y x - y 0.001 + / 255 * "+LOS+" - y 256 + 512 / "+HIS+" + *" )
lets   = mt_merge(halos,clp,so,U=uv,V=uv)
remove = (ss==1.0) ? clp.repair(lets,1,0) 
          \        : clp.lanczosresize(m4(ox*ss),m4(oy*ss))
          \             .mt_logic(lets.mt_expand(U=uv,V=uv).bicubicresize(m4(ox*ss),m4(oy*ss)),"min",U=uv2,V=uv2)
          \             .mt_logic(lets.mt_inpand(U=uv,V=uv).bicubicresize(m4(ox*ss),m4(oy*ss)),"max",U=uv2,V=uv2)
          \             .lanczosresize(ox,oy)
them   = mt_lutxy(clp,remove,"x y < x x y - "+DRK+" * - x x y - "+BRT+" * - ?",U=2,V=2)

return( them )
}

function m4(float x) {return(x<16?16:int(round(x/4.0)*4))}

I don't have a video sample to upload; out of frustration and habit, I automatically delete any bad production runs.

I am going to do another run tonight but this time I am not changing compression. Maybe something during the change from lagarith to h264 or divx is causing issues. So I am not changing, just going to process the frames with the avs script and see what happens.

-- merged --

So the file plays fine on one PC in both Windows Media Player and VLC, but on my main PC the file cannot play in Windows Media Player, and 2 minutes into VLC playback it messed up. It started going crazy, with the deinterlacing coming undone, bad audio, the video appearing to restart itself, etc.

I opened that same file in Kodi media center and it played without issue in Kodi.
I am totally lost as to what is going on here. Neither computer has any special codecs installed.

I converted this file to h264 and got the same results as above. I now have it as a .mkv with all the modifications from the AviSynth scripts. I just don't know why it only plays right in Kodi.
Reply With Quote
  #2  
06-24-2018, 08:33 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Thanks for the script. It would have been instructive to see why the original video required some of these steps, so at this point no one can say whether the script really "worked" or not. But if it made you happy....

Quote:
Originally Posted by KalterStein View Post
These things look surprisingly good on our living room 70 inch after this script.
Congrats on all your hard work, but I'd take issue with that evaluation -- especially after seeing your script, the extraneous resizing (unexplained), and the mention of Tevion.

Quote:
Originally Posted by KalterStein View Post
Code:
ConvertToYV12(interlaced=true)
ColorYUV(gamma_y=100, off_y=-16, cont_y=-20, cont_u=40, cont_v=40)
ConvertToRGB(interlaced=true)
RGBAdjust(r=0.90, b=1.1)
ConvertToYV12(interlaced=true)

Spline64Resize(width/2, height)
Experts and especially colorists will attest that this kind of work with ColorYUV would be better left to the original YUY2. Getting to RGB from there is far less destructive than from YV12 and back to YV12 again. Resizing in a higher color resolution like YUY2 or RGB rather than YV12 is also less destructive, looks much smoother, and avoids much of the YV12 resampling error as well as avoiding edge damage and things like blocky gradients. There are several discussions about these elements in professional sources like Color Correction Handbook: Professional Techniques (2nd Edition).
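
A minimal sketch of that ordering, reusing the values from the posted script (the file path is shortened, and the numbers are the poster's, not recommendations):

Code:
AviSource("beast.avi")                                              # YUY2 straight from the capture
ColorYUV(gamma_y=100, off_y=-16, cont_y=-20, cont_u=40, cont_v=40)  # YUV adjustments stay in YUY2
ConvertToRGB(interlaced=true)                                       # one trip to RGB for the RGB-only filter
RGBAdjust(r=0.90, b=1.1)
ConvertToYV12(interlaced=true)                                      # convert to YV12 only once, when YV12-only filters need it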

Why resize before deinterlacing? Gratuitous resizing always costs.

I see you later used DeHalo_Alpha to subdue edge halos but first ran Spline64Resize, which accentuates halos and even creates them.

Why wait to place the following operation at the very end of your script?

Quote:
Originally Posted by KalterStein View Post
Code:
SelectEven()
After putting all that work into 100% of your video, you threw away half of it and destroyed 50% of its temporal resolution. It would be interesting to see why that was necessary, but I don't see how it improved performance on your 70" TV. That the script deinterlaced TFF video as default BFF would hardly matter at this point, since you threw away half your frames anyway.

If you really had to discard 50% of your video, why not do it at the outset instead of spending all that time processing the whole business? Just use:

Code:
QTGMC(preset="fast",FPSDivisor=2)
and you can forget about SelectEven().

Your encoding and YV12 chroma storage have some problems here:

Quote:
Originally Posted by KalterStein View Post
Code:
Crop(8,0,-10,-10)
Your final height is only mod2, and your final output frame width is neither mod4 nor mod8; it's only mod2 as well. It's a wonder your encoder didn't barf, much less your playback system. Mod8 in all dimensions is preferred, even for ugly YouTube stuff. You'll never see mod4 or mod2 in the video business. One would hope that your encoder didn't do another resize internally to avoid a non-standard frame size like 702x470. I'm assuming your encoder fixed that for you.
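
One way out is to crop and then pad back to a standard frame size; a quick sketch (this particular border split is only an example):

Code:
Crop(8,0,-10,-10)      # 720x480 becomes 702x470 -- both dimensions only mod2
AddBorders(8,4,10,6)   # pad back out to the standard, mod16-friendly 720x480 frame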

I keep wondering what this statement is for:
Quote:
Originally Posted by KalterStein View Post
Code:
AddBorders(0,0,0,0)
It doesn't do anything. But I guess you know that. Maybe you left it there by mistake? As for why it won't play here or there, well, mod2 could be at fault. No word anywhere on your encoder settings or the actual specs for the encoded files, a suspicious lack of mention of a line tbc, and many missing details that might help. I'm just glad it's not my video.
Reply With Quote
  #3  
06-24-2018, 09:29 PM
KalterStein KalterStein is offline
Free Member
 
Join Date: May 2018
Posts: 5
Thanked 0 Times in 0 Posts
I skimmed through your response until I have time tomorrow to really read and absorb it.

As my initial post says, I am very new to this. I even referred to "ripping" the VHS which I am told is not the correct term. I assume the correct term is "Capturing" since I know in reality I am not ripping digital data from anything in this case.

DVD/BD = digital data on an optical disc
VHS = analog data stored on magnetic tape

That script is a copy paste job from another forum post that I adjusted the crop on and that is about it. I don't know what 90% of it does but it was the first script I ran that actually cleared up the videos I was testing. They looked remarkably better on the main TV (the 70 inch screen) after that script processed them. By "cleared up" I mean removing comb lines and the pixelation.

Also note that I do not have a TBC nor does the VCR have any TBC capability. I know I am not using top of the line equipment and I have more to learn but I am not aiming for studio quality 1080p here. Just decent quality on larger screens.

My overall goal is to capture several VHS mostly animated movies and handful of live action. I want to stream them from my media server here in house.

So with my limited knowledge it appears the best thing is to capture in huffyuv or lagarith on default settings. Then edit and clean them up as best as possible for streaming using Avisynth or VirtualDub or a combination of both etc.

Which is what I have been testing with. Original captures in either compression look horrible on the large screens due to the pixelation and combing. So I am just trying to clear that up.

I will try to attach a few slices of a few different videos for examples. I just don't know when I can get to it. I usually just hit record and walk away because I have an 8-month-old child that takes most of my time. I only have two full captures right now because I deleted the rest to start over.

I have been doing 3-4 captures of each VHS and keeping the one with the least dropped or inserted frames. Some of them drop/insert more than others and my understanding is this can be caused by condition of the tape as well as other equipment.

Thanks for the details you have provided so far. I look forward to reading it further and learning some more.
Reply With Quote
  #4  
06-24-2018, 09:42 PM
lordsmurf's Avatar
lordsmurf lordsmurf is online now
Site Staff | Video
 
Join Date: Dec 2002
Posts: 13,501
Thanked 2,447 Times in 2,079 Posts
Quote:
Originally Posted by KalterStein View Post
an old school Tevion USB Capture device with the USB2800 drivers.
Quote:
Originally Posted by sanlyn View Post
and the mention of Tevion.
There were many Tevions, no model numbers. Some of them are ATI 600 USB clones, while most are not. Tevion is a European brand that mostly rebadges things. Some European stores, like Aldi, sold them in the U.S. for a short time.

I may reply to this thread again later on.

As stated, before/after sample clips are helpful for these restoration conversations.

- Did my advice help you? Then become a Premium Member and support this site.
- For sale in the marketplace: TBCs, workflows, capture cards, VCRs
Reply With Quote
  #5  
06-25-2018, 12:10 PM
KalterStein KalterStein is offline
Free Member
 
Join Date: May 2018
Posts: 5
Thanked 0 Times in 0 Posts
Ok, so here is what I understand from what I have read so far.

Skip:
ConvertToYV12
ConvertToRGB
ConvertToYV12

Skipping these 3 steps entirely, as my master copy is already in standard YUY2 and doesn't need conversion at all, not even to RGB. Do all color work on interlaced YUY2 frames and adjust colors to my own liking.

Crop should ideally be Mod8 (multiple of 8) if I understand that correctly as well.


Ok so I am pretty sure I know how "interlacing" vs "Progressive" works. Correct me if my knowledge is wrong.

640, 720, 1080i is literally an image interlaced on screen hence the "i".
640 = VHS quality
720 = DVD/SD
1080 = BD

Interlacing is a frame or image drawn using even and odd lines of data. I assume this means all the even lines are drawn first, then the odd lines are interlaced, but it can go the other way. Isn't that what Top Field First and Bottom Field First mean? Interlacing can happen with either the top or bottom lines drawn first.

When the viewing device can draw the lines smoothly and quickly enough, you have a full resolution frame without lost data. Otherwise you see the comb lines as the interlacing is done. Typically TV's are able to interlace an image at a speed that makes comb lines invisible to the human eye.

640, 720, 1080p are all progressive, so each line is drawn one by one in order from top to bottom. The faster they are drawn, the smoother the action on screen appears.

Quote:
SelectEven()
So when capturing a VHS with the goal of keeping it digital for streaming. I thought pushing it to progressive was the correct thing to do. That is why you would deinterlace but deinterlacing destroys half the data. So you should do all image/color correction work BEFORE deinterlacing. This way the corrections have all color and data to work with before dropping it. That sounds logical to me but again I am new to this so there could be reasons that this doesn't work.

I think that is why SelectEven() was the last step. It was to essentially go to progressive after all edits/corrections were done on a full set of data.

I mean even if keeping the capture for digital streaming only, does it really have to go to progressive (deinterlace) wouldn't all playback devices/viewing devices be able to properly interlace the video without comb lines anyways?

My ultimate goal is to just capture these tapes and stream them without them looking blocky and pixelated on the main tv.


So below is what I have so far if I understand correctly.

Capture masters at huffyuv yuy2 at 720x480 29.970 FPS (This is good)
Adjust colors to my liking and crop to a mod8 that eliminates bottom noise and any black bars from capturing at 720x480

Then I am stuck on what is next. Do I need to resize to help it look better on larger screens? Can you really re-master a VHS to say 720 and still have it look good?

Why does the script I posted help clear up so much of the pixelation and blockiness?

I know without example videos you guys have no way to determine the level of pixelation. Sorry, I will try to get those here as soon as I can.

Last edited by KalterStein; 06-25-2018 at 12:22 PM.
Reply With Quote
  #6  
06-25-2018, 10:08 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Quote:
Originally Posted by KalterStein View Post
Crop should ideally be Mod8 (multiple of 8) if I understand that correctly as well.
You don't understand it correctly and are simply chopping away at your image real estate. See discussion later, below.

Quote:
Originally Posted by KalterStein View Post
Ok so I am pretty sure I know how "interlacing" vs "Progressive" works. Correct me if my knowledge is wrong.

640, 720, 1080i is literally an image interlaced on screen hence the "i".
640 = VHS quality
720 = DVD/SD
1080 = BD
Wrong in several respects.

Quote:
Originally Posted by KalterStein View Post
Interlacing is a frame or image drawn using even and odd lines of data. I assume this means all the even lines are drawn first, then the odd lines are interlaced, but it can go the other way. Isn't that what Top Field First and Bottom Field First mean? Interlacing can happen with either the top or bottom lines drawn first.
That part is correct.

Quote:
Originally Posted by KalterStein View Post
When the viewing device can draw the lines smoothly and quickly enough, you have a full resolution frame without lost data. Otherwise you see the comb lines as the interlacing is done. Typically TV's are able to interlace an image at a speed that makes comb lines invisible to the human eye.
Wrong. TV's, media players, and external players don't interlace. They're designed to do the reverse.

Quote:
Originally Posted by KalterStein View Post
640, 720, 1080p are all progressive, so each line is drawn one by one in order from top to bottom. The faster they are drawn, the smoother the action on screen appears.
Wrong. Or, well, that's really not quite correct. The "p" is right, but basically frame size does not define interlaced or non-interlaced structure. 640 and 720 can be either, and they can be telecined.

Quote:
Originally Posted by KalterStein View Post
So when capturing a VHS with the goal of keeping it digital for streaming. I thought pushing it to progressive was the correct thing to do. That is why you would deinterlace but deinterlacing destroys half the data. So you should do all image/color correction work BEFORE deinterlacing. This way the corrections have all color and data to work with before dropping it. That sounds logical to me but again I am new to this so there could be reasons that this doesn't work.
It doesn't make sense because so much of it is incorrect.

Quote:
Originally Posted by KalterStein View Post
I think that is why SelectEven() was the last step. It was to essentially go to progressive after all edits/corrections were done on a full set of data.
Your captured video was interlaced. After running it through QTGMC, it is deinterlaced -- i.e., it becomes a progressive video.

Quote:
Originally Posted by KalterStein View Post
I mean even if keeping the capture for digital streaming only, does it really have to go to progressive (deinterlace)
No.

Quote:
Originally Posted by KalterStein View Post
wouldn't all playback devices/viewing devices be able to properly interlace the video without comb lines anyways?
Playback devices don't interlace.

Quote:
Originally Posted by KalterStein View Post
My ultimate goal is to just capture these tapes and stream them without them looking blocky and pixelated on the main tv.
You need a better TV. Analog source isn't blocky. It can't be, and it can't pixelate. Block noise and pixelation are the result of improperly processed pixels. But analog sources like tape don't have pixels. Interlace combing doesn't exist in analog sources either; combing is strictly a digital phenomenon and occurs when an interlaced video is displayed without proper deinterlacing.
For instance, PC monitors don't deinterlace, and neither do video editors when they display the unprocessed interlaced source.

Quote:
Originally Posted by KalterStein View Post
Capture masters at huffyuv yuy2 at 720x480 29.970 FPS (This is good)
That's the way it's usually done. The YUY2 colorspace is used because digitally it most closely resembles the way video on VHS tape is stored as analog YPbPr.
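
If there is any doubt about what a capture file actually contains, a quick sanity check in Avisynth (illustrative only; the file name is a placeholder):

Code:
AviSource("capture.avi")   # the lossless master
Info()                     # overlays the colorspace (should read YUY2), frame size and frame rate on the video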

Quote:
Originally Posted by KalterStein View Post
Adjust colors to my liking and crop to a mod8 that eliminates bottom noise and any black bars from capturing at 720x480
You're not quite there on the cropping. An example of how it would be done in Avisynth: let's say your video is something like the one in your script, with two side borders (8 pixels on the left and 10 pixels on the right), and 10 pixels of bottom-border head switching noise. This is about an average frame population for SMPTE standards from 4:3 VHS tape. Changing levels and colors will affect the black borders, so you would want to create clean new ones, and you don't want the head switching noise. But you do want the original image content, and you still want the proper standard frame size.

Code:
Crop(8,0,-10,-10)
That discards the dirty or discolored left and right borders and chops off the head switching noise. Your frame size is now 702x470, which is the original image content. Let's add new black side borders, and then add 4 pixels at the top and 6 at the bottom to vertically center the image as well as one can with a 4:2:0 colorspace like YV12, which is the way your video will be encoded.

Code:
AddBorders(8,4,10,6)
You now have the original image content in a 720x480 frame. This standard frame size will do for encoding to the following:
- Re-interlaced and encoded with a 4:3 display aspect ratio for DVD (which is an interlaced format),
- Re-interlaced and encoded with a 4:3 display aspect ratio for standard definition BluRay disc (which is required to be interlaced),
- or encoded, progressive or interlaced, with a 4:3 display aspect ratio as h.264 mp4 or mkv, for servers, USB sticks with smart TV's, or external media players.

When you play a 4:3 video on any device, the black border pixels will match any black border pixels that a 4:3 or 16:9 display would add to areas that don't have image content. Haven't you noticed that many movies on TV aren't 16:9? Hollywood uses 4:3, 1.37:1, 1.66:1, 1.85:1, 2.0:1, 2.35:1, 2.4:1, and several other aspect ratios. Those movies appear on cable broadcasts, streamed broadcasts, DVD's, and BluRay. None of those aspect ratios will fill a 16:9 TV screen. Haven't you noticed black border areas on the top and sides of those broadcasts? And by the way, most of those TV broadcasts are interlaced. On most TV's nowadays, thin black borders often are not noticed because TV overscan covers them (yes, HDTV's use overscan by default, and it can be turned off on almost all TV displays).

Quote:
Originally Posted by KalterStein View Post
Why does the script I posted help clear up so much of the pixelation and blockiness?
As I explained earlier, pixelation and blockiness don't exist in analog tape sources. Analog tape doesn't have pixels. You have serious deficiencies in your tape playback, your capture chain, and your processing, and probably your TV isn't doing such a great job given what it has to work with. If you have a media server that can't handle interlace or telecine, that's another serious shortcoming.

Quote:
Originally Posted by KalterStein View Post
I know without example videos you guys have no way to determine the level of pixelation. Sorry, I will try to get those here as soon as I can.
That would be extremely instructive, for us as well as for you. From what you've posted so far, you probably need some tips on how to make a short sample to post here. To maintain the original YUY2 colorspace, open your video in VirtualDub and cut about 8 to 10 seconds of video that contains motion of some kind: people moving, gesturing, etc. 8 to 10 seconds of YUY2 would fall well below the 99MB limit for posted samples. After you've made your edited cuts in VirtualDub, on the app's top main menu click "Video...", then on the drop-down menu click "Direct stream copy". Then save your sample as AVI with a new name and post it here. Give the file time to upload, which will take a few minutes.

Some basics about DVD and BluRay formats:

Standard DVD, usually 720x480 or 704x480, almost always interlaced or telecined, almost always played as interlaced, aspect ratio 4:3 or 16:9 only (720x480 preferred for 16:9), MPEG codec, Dolby AC3 audio.

BluRay standard disc (MPEG-2, h.264, or VC-1 encoding):
- 720x480 (NTSC), interlaced only @ 29.97 (NTSC), 4:3 or 16:9 only (720x480 required for 16:9)
- 720x576 (PAL), interlaced only @ 25fps (PAL), 4:3 or 16:9 only (720x576 required for 16:9)
- 1280x720, progressive only, 59.94 or 50 fps, 16:9 only
- 1280x720, progressive only, 24 fps, 16:9 only
- 1280x720, progressive only, 23.976 fps, 16:9 only
- 1440x1080, interlaced only, 29.97 or 25 fps, 16:9 only
- 1440x1080, progressive only, 24 fps, 16:9 only
- 1440x1080, progressive only, 23.976 fps, 16:9 only
- 1920x1080, interlaced only, 29.97 or 25 fps, 16:9 only
- 1920x1080, progressive only, 24 fps, 16:9 only
- 1920x1080, progressive only, 23.976 fps, 16:9 only

Also note the following:
720 and 704 qualify as mod8 and as mod16
480 qualifies as mod8 and as mod16
1280 qualifies as mod8 and as mod16
1440 qualifies as mod8 and as mod16
1920 qualifies as mod8 and as mod16
1080 qualifies as mod8 but not as mod16

Many Avisynth and VirtualDub filters require mod8 dimensions to operate properly.
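
A small guard can catch a non-mod8 frame before it reaches a picky filter. This helper is hypothetical (not from the posted scripts), but it uses only built-in Avisynth functions:

Code:
function AssertMod8(clip c) {
    Assert(c.Width()  % 8 == 0, "Width "  + String(c.Width())  + " is not mod8")
    Assert(c.Height() % 8 == 0, "Height " + String(c.Height()) + " is not mod8")
    return c
}
# e.g. Crop(8,0,-10,-10).AssertMod8() would stop the script with an error on a 702x470 frame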

==================================
Here's how you could modify your script for output to your server. I take it that your server isn't smart enough to handle interlaced or telecined video. The script's output is progressive at 59.94 fps and maintains the original temporal resolution and detail, which should not be a problem. If it is, get a better server. Encode the results as 720x480 with a 4:3 Display Aspect Ratio (DAR).

The script will not repair dot crawl or the geometric distortion effects of scan-line timing errors due to lack of a line-level tbc. Line timing errors can't be repaired after capture.

Code:
import("C:\Program Files (x86)\AviSynth\plugins\TemporalDegrain.avs")
AviSource ("C:\Documents and Settings\Administrator\My Documents\beast.avi")
ColorYUV(gamma_y=100, off_y=-16, cont_y=-20, cont_u=40, cont_v=40)
Levels(16,1.0,255,16,235,dither=true,coring=false)
ConvertToRGB(interlaced=true)
RGBAdjust(r=0.90, b=1.1)

ConvertToYV12(interlaced=true)
AssumeTFF()
QTGMC(preset="medium",ChromaNoise=true,border=true)
TemporalDegrain(SAD1=200, SAD2=150, sigma=8)
Dehalo_alpha(rx=2, ry=1)
aWarpSharp(depth=5)
Sharpen(0.3, 0.0)
Crop(8,0,-10,-10).AddBorders(8,4,10,6)
return last

function DeHalo_alpha(clip clp, float "rx", float "ry", float "darkstr", float "brightstr", float "lowsens", float "highsens", float "ss")
{
rx        = default( rx,        2.0 )
ry        = default( ry,        2.0 )
darkstr   = default( darkstr,   1.0 )
brightstr = default( brightstr, 1.0 )
lowsens   = default( lowsens,    50 )
highsens  = default( highsens,   50 )
ss        = default( ss,        1.5 )

LOS = string(lowsens)
HIS = string(highsens/100.0)
DRK = string(darkstr)
BRT = string(brightstr)
ox  = clp.width()
oy  = clp.height()
uv  = 1
uv2 = (uv==3) ? 3 : 2

halos  = clp.bicubicresize(m4(ox/rx),m4(oy/ry)).bicubicresize(ox,oy,1,0)
are    = mt_lutxy(clp.mt_expand(U=uv,V=uv),clp.mt_inpand(U=uv,V=uv),"x y -","x y -","x y -",U=uv,V=uv)
ugly   = mt_lutxy(halos.mt_expand(U=uv,V=uv),halos.mt_inpand(U=uv,V=uv),"x y -","x y -","x y -",U=uv,V=uv)
so     = mt_lutxy( ugly, are, "y x - y 0.001 + / 255 * "+LOS+" - y 256 + 512 / "+HIS+" + *" )
lets   = mt_merge(halos,clp,so,U=uv,V=uv)
remove = (ss==1.0) ? clp.repair(lets,1,0) 
          \        : clp.lanczosresize(m4(ox*ss),m4(oy*ss))
          \             .mt_logic(lets.mt_expand(U=uv,V=uv).bicubicresize(m4(ox*ss),m4(oy*ss)),"min",U=uv2,V=uv2)
          \             .mt_logic(lets.mt_inpand(U=uv,V=uv).bicubicresize(m4(ox*ss),m4(oy*ss)),"max",U=uv2,V=uv2)
          \             .lanczosresize(ox,oy)
them   = mt_lutxy(clp,remove,"x y < x x y - "+DRK+" * - x x y - "+BRT+" * - ?",U=2,V=2)

return( them )
}

function m4(float x) {return(x<16?16:int(round(x/4.0)*4))}
The output of this script is YV12 for encoding. It can be saved out of VirtualDub as lossless Lagarith YV12. Encode the results as 720x480 with a 4:3 display aspect ratio (DAR).

That's a very old copy of DeHalo_Alpha. It has long since been encoded as a .dll.

If you'd like to encode for DVD or BluRay, you have to re-interlace the video produced by the above script, which will return the frame rate to 29.97 fps.

Code:
### --- adjust the path and file name to match your system --- ###
AviSource ("C:\Documents and Settings\Administrator\My Documents\video from script.avi")
AssumeTFF()
SeparateFields().SelectEvery(4,0,3).Weave()
You can avoid many of your capture problems by not capturing to the same partition that contains your operating system, and by using line-level and frame-level tbc's. There is no after-capture fix for tbc distortion. Either fix it at capture time, or live with the results.

Last edited by sanlyn; 06-25-2018 at 10:57 PM.
Reply With Quote
  #7  
06-26-2018, 01:00 PM
KalterStein KalterStein is offline
Free Member
 
Join Date: May 2018
Posts: 5
Thanked 0 Times in 0 Posts
Answered in order of your reply

Quote:
Originally Posted by KalterStein View Post
Ok so I am pretty sure I know how "interlacing" vs "Progressive" works. Correct me if my knowledge is wrong.

640, 720, 1080i is literally an image interlaced on screen hence the "i".
640 = VHS quality
720 = DVD/SD
1080 = BD

---- Wrong in several respects.
Sorry, what part is incorrect?
I am saying here that 640,720,1080 can all be "i" which is interlaced. Yes they can also be "p" I know that already.
I can see I got the VHS resolution wrong. It appears that common NTSC VHS is about 333x480
So maybe that is what you are saying is incorrect. I am still trying to educate myself a bit about the resolutions and relation to luma/chroma etc.

Quote:
Wrong. TV's, media players, and external players don't interlace. They're designed to do the reverse.
I was under the impression that the viewing device doesn't "Deinterlace" at all, it simply draws the interlacing so fast that no comb lines are visible. So are you saying that the viewing device/playback device actually does "Deinterlace" so is it forcing it to progressive?

Quote:
640, 720, 1080p are all progressive, so each line is drawn one by one in order from top to bottom. The faster they are drawn, the smoother the action on screen appears.

----Wrong. Or, well, that's really not quite correct. The "p" is right, but basically frame size does not define interlaced or non-interlaced structure. 640 and 720 can be either, and they can be telecined.
Right I knew that bit. They can be either "i" or "p" I also did a quick skim read on telecine but I have a bit more reading there as that seems to be another complex piece.

Quote:
So when capturing a VHS with the goal of keeping it digital for streaming. I thought pushing it to progressive was the correct thing to do. That is why you would deinterlace but deinterlacing destroys half the data. So you should do all image/color correction work BEFORE deinterlacing. This way the corrections have all color and data to work with before dropping it. That sounds logical to me but again I am new to this so there could be reasons that this doesn't work.

----It doesn't make sense because so much of it is incorrect.
Yea, trash this paragraph. After thinking the process through again it doesn't make sense. The minute you deinterlace you have lost half the data anyways. You can adjust the interlaced footage and make it a 100% crystal clear image but the minute you deinterlace it and trash half, you trashed half the image regardless of the correction work done.

Quote:
I think that is why SelectEven() was the last step. It was to essentially go to progressive after all edits/corrections were done on a full set of data.

----Your captured video was interlaced. After running it through QTGMC, it is deinterlaced -- i.e., it becomes a progressive video.
Got it. QTGMC IS a deinterlacing process. I even knew that, and then using SelectEven just dropped the odd frames.
So it trashed half the processed frames.

Quote:
I mean even if keeping the capture for digital streaming only, does it really have to go to progressive (deinterlace)

----No.
So if I upload interlaced video to say "YouTube" you are saying when that video is watched on a TV via a PC or Gaming Console that the playback will be deinterlaced? I don't see how that is possible. The TV wouldn't know to deinterlace the file playing back from YouTube. Wouldn't it be the burden of the player software to deinterlace before sending out to the TV?

Quote:
wouldn't all playback devices/viewing devices be able to properly interlace the video without comb lines anyways?

----Playback devices don't interlace.
Yup, end of sentence, I used the wrong term here. I should have said playback/viewing devices properly deinterlace. As I said above, though, is it truly "deinterlacing" as in going to progressive, or does it just draw the interlace so fast you don't see combing?

Quote:
My ultimate goal is to just capture these tapes and stream them without them looking blocky and pixelated on the main tv.

----You need a better TV. Analog source isn't blocky. It can't be, and it can't pixelate. Block noise and pixelation are the result of improperly processed pixels. But analog sources like tape don't have pixels. Interlace combing doesn't exist in analog sources either; combing is strictly a digital phenomenon and occurs when an interlaced video is displayed without proper deinterlacing.
For instance, PC monitors don't deinterlace, and neither do video editors when they display the unprocessed interlaced source.
I don't suspect the TV itself, I suspect my playback devices. I am using Kodi Media Center, which runs on a PC to stream media to the TV itself. That is why I kept thinking I have to force these to progressive, because in reality it is a computer playing back the files. However, they are being played back on an actual TV via HDMI, not a computer monitor.

So even though playback is via Kodi on PC, does the TV itself still handle the deinterlacing properly or am I forced to go progressive in my use case?

The pixelation and blockiness I was seeing on my PC monitor. I would open the master AVI in VLC or Windows Media Player and put it full screen. The image is blurred and blocky around things like people's faces and such. So I referred to this as pixelation; it seems to be an effect of stretching to full screen, as when the image is kept at the default aspect ratio you cannot see the blurred edges around faces and objects. Hope I explained that better now.

Additional notes/responses

Your responses have been extremely helpful, so thank you for taking the time to write them. Your last response here, especially with cropping/borders, has clarified quite a lot for me. From that response I am gathering that even though I am playing these files from a PC, I can still use interlaced footage with a 4:3 DAR.

I am familiar with broadcasts and the "black bars" you are talking about. "Widescreen" format vs "standard" 16:9 has top and bottom bars even on a widescreen TV, and 4:3 on a widescreen TV will have top/bottom/left/right bars when kept in 4:3 DAR and not stretched out, etc. If I am understanding that correctly.

Quote:
Here's how you could modify your script for output to your server. I take it that your server isn't smart enough to handle interlaced or telecined video. The script's output is progressive at 59.94 fps and maintains the original temporal resolution and detail, which should not be a problem. If it is, get a better server. Encode the results as 720x480 with a 4:3 Display Aspect Ratio (DAR).

The script will not repair dot crawl or the geometric distortion effects of scan-line timing errors due to lack of a line-level tbc. Line timing errors can't be repaired after capture.
The server is literally just serving the file to a PC running Kodi Media Center. Kodi does deinterlace so I am thinking an interlaced encode at 4:3 DAR should work. Honestly I am not sure what dot crawl or geometric distortion is, so I will hit the old google for that in a bit.

In your script since I am already dealing with a master in yuy2, does it need conversion to RGB or later to YV12?
My understanding is YUY2 is better. I also read that RGB24 would provide the best output.

I am not worried about file sizes so if processing is easier to keep it in YUY2 I can go that route. I am just trying to understand the why and how each step works together that is all.

I plan to do more reading soon on YUY2/YV12/RGB etc. to further educate myself on them.

System has 2 hard drives.
C:\ = OS Drive
D:\ = Captures Drive

I capture to D then take the final master capture I want to keep and move it to C
So when encoding for a final production copy, I am reading the master from C and storing the new encoded copy on D
So at no point in time am I writing what I am outputting to the same drive the OS is operating on.

Unfortunately I just don't have funds for TBC equipment.

This has been a fairly educational project. I just want to capture my VHS tapes and have the ability to watch them on the main TV with as clear a picture as possible given my equipment. This way my 8-month-old daughter can watch them. Most of them are animated films (you can easily guess which studio they are from LOL).

I could just hook the VCR right to the TV but where is the fun in that. Plus having these tapes on digital means I can keep tapes stored somewhere safe.
Reply With Quote
  #8  
06-27-2018, 11:41 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Quote:
Originally Posted by KalterStein View Post
Sorry, what part is incorrect?
I am saying here that 640,720,1080 can all be "i" which is interlaced. Yes they can also be "p" I know that already.
I can see I got the VHS resolution wrong. It appears that common NTSC VHS is about 333x480
So maybe that is what you are saying is incorrect. I am still trying to educate myself a bit about the resolutions and relation to luma/chroma etc.
You seem to be making the assumption that some of the frame dimensions you listed are automatically "hi-def" or BluRay or whatever. Official high definition formats like BluRay and AVCHD have very strict and precise encoding requirements far beyond frame size alone. As you noted, the "640" frame width often refers to VHS playback but is also a feature of DVD and standard definition BluRay/AVCHD. On an old fashioned CRT-TV a DVD will display in a 640x480 frame because that is the size of the TV display. But official DVD and SD-BluRay releases are far higher quality than VHS displayed on the same 640x480 CRT, assuming the DVD or BluRay is competently produced. It's not just a matter of "sharpness" and detail rendering. It's also a matter of color consistency, less noise and fewer disturbances such as chroma bleeding, and playback timing and consistency.

Many users attempt to identify quality with frame size. A 1920x1080 video can look like total garbage, as tons of YouTube and other internet posts will quickly demonstrate. The first requirement of high definition is that the source has high definition to begin with, not just a big frame size. Industrial digital video masters are many times larger and more detailed than DVD or BluRay formats, and many are even "bigger" than 4K videos.

As for VHS, yes, it's a lower quality and lower resolution source than even plain vanilla retail DVD. But that's all the more important in the area of capture, restoration and encoding. A good capture and skillful processing can often look as clean as DVD and can create output that improves the source capture many times over. A poor (i.e., average) capture with lesser players, no time base correction, poor signal level control, etc., can't compete with proper hardware, software, and processing optimized for VHS. No, it will not "look like" a 1920x1080 retail BluRay or a digital capture made with an HD-PVR from an HD broadcast (and many newbies think that upscaling VHS to huge frames will give such results, but are always severely disappointed). Still, anyone can make VHS look pretty darn nice with the proper tools. There are many forum examples here and elsewhere. There are also many examples of poor tape condition, poor playback and poor capture that readily reveal their limitations, even with decent processing.

Quote:
Originally Posted by KalterStein View Post
I was under the impression that the viewing device doesn't "Deinterlace" at all, it simply draws the interlacing so fast that no comb lines are visible. So are you saying that the viewing device/playback device actually does "Deinterlace" so is it forcing it to progressive?
Yes. Data embedded in properly encoded digital video contains playback instructions for PC players, external players, and TV's. This doesn't mean that all players and all TV's are equal in handling various frame structures. Some are better at it than others. Usually the worst are so-called smart TV's, the majority of which are downright stupid with many formats. But, then, TV's are basically display devices, not all-encompassing media decoders.

Note that VLC player doesn't deinterlace or handle telecine properly by default. You have to set deinterlacing permanently to Auto using VLC's options menu. VLC can often trip up on many videos, with occasional image breakup, pixelation, stutter, and freezing. Windows Media Player is another story, and a sad one. Each new version of that software is worse than before. Microsoft has disabled most of its classic controls, and the newest version has most of the formerly built-in codecs removed. The free Media Player Classic and MPC-BE are better, even in their older versions.

Quote:
Originally Posted by KalterStein View Post
The minute you deinterlace you have lost half the data anyways.
Not true.

An interlaced frame consists of two images. Each image is from a different instant in time. The two images are stored on alternating horizontal scanlines, so that one image is stored on even-numbered lines and the other is on odd-numbered lines. The two images in the frame are referred to as fields -- there are two 720x240 half-height fields in each interlaced frame. When a 29.97 frames per second interlaced video is played by a proper deinterlacing device, the device interprets the video into its two distinct images and displays both images, one image at a time, at the rate of 59.94 full-sized images (fields) per second. Interlaced PAL video encoded at 25 interlaced frames per second is displayed at 50 fields per second.
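
For anyone who wants to see that field structure directly, a small sketch (the file name is a placeholder and the field order is an assumption):

Code:
AviSource("capture.avi")   # 720x480 interlaced capture at 29.97 fps
AssumeTFF()                # VHS captures are normally top field first
SeparateFields()           # 720x240 fields at 59.94 fields per second -- step through them one at a time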

Similarly, a software deinterlacing filter like QTGMC or yadif (there are several others) interprets 29.97 interlaced frames into 59.94 double-rate half-height fields per second, then uses various algorithms to resample the 59.94 half-height fields into 59.94 full-sized "new" progressive frames for every second of video.

When you deinterlace and discard alternate frames, you return the frame rate to its original 29.97 fps. But you now have only half the number of images that defined motion and other details, so that you lose 50% of the original temporal resolution. When your 29.97 fps decimated video plays on your 60Hz TV or PC device, each 1/29.97-second progressive image is displayed twice as long to maintain the display's 60Hz refresh rate. The resulting motion display is less smooth and less clear than 60 distinct images per second, and the effects become more evident with faster motion and with more image detail to render.
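
In script terms, the trade-off above comes down to a single QTGMC parameter (the preset and file name are placeholders):

Code:
AviSource("capture.avi")
AssumeTFF()
QTGMC(preset="medium")                  # 29.97 fps interlaced in, 59.94 fps progressive out: all temporal resolution kept
#QTGMC(preset="medium", FPSDivisor=2)   # 29.97 fps progressive out: half the images, in place of SelectEven() afterwards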


Quote:
Originally Posted by KalterStein View Post
I also did a quick skim read on telecine but I have a bit more reading there as that seems to be another complex piece.
examples of telecine and interlacing
http://www.infognition.com/tutorials...nterlaced.html

Neuron2_How To Analyze Video Frame Structure.zip
http://www.digitalfaq.com/forum/atta...-analyze-video

Quote:
Originally Posted by KalterStein View Post
So if I upload interlaced video to say "YouTube"
End that sentence right there. If you upload interlaced video to YouTube or anywhere else on the internet it will be deinterlaced (using the quickest, dirtiest, and cheapest means possible), and alternate fields will almost always be discarded. Or, even worse, the interlaced fields will be blended in each frame and the result will be permanent ghosting and double-image pictures. Field blending cannot be repaired. The same is true of various forms of telecine or pulldown, which are not allowed on the internet. What you can do with many websites is deinterlace your video to double-rate progressive video, keep all frames, and submit double-rate video for posting. Your video will likely still be re-encoded to smaller bitrates for streaming, but at least you'll maintain the original pace of content and can perform cleaner QTGMC software deinterlacing than "they" will.

Quote:
Originally Posted by KalterStein View Post
I don't suspect the TV itself, I suspect my playback devices. I am using Kodi Media Center, which runs on a PC to stream media to the TV itself. That is why I kept thinking I have to force these to progressive, because in reality it is a computer playing back the files. However, they are being played back on an actual TV via HDMI, not a computer monitor.

So even though playback is via Kodi on PC, does the TV itself still handle the deinterlacing properly or am I forced to go progressive in my use case?

The pixelation and blockiness I was seeing on my PC monitor. I would open the master AVI in VLC or Windows Media Player and put it full screen. The image is blurred and blocky around things like people's faces and such. So I referred to this as pixelation; it seems to be an effect of stretching to full screen, as when the image is kept at the default aspect ratio you cannot see the blurred edges around faces and objects. Hope I explained that better now.
As I said earlier, all playback and display devices are not equal. I'm not familiar with versions of Kodi specifically, but I have many misgivings about every PC media server I've ever used or seen, and I have even less confidence in the way they transmit video to TV displays. Further, I'm a total "equipment snob" who has returned many a big-name DVD player, DVD recorder, BluRay player, and HDTV for poor performance. Showroom salespeople hate me, especially when I start asking questions that they either can't answer, or are obviously making something up, or else they're just plain incorrect (so I stopped asking and started investigating on my own). Every a/v device I use today has been thoroughly tested and reviewed by trained, experienced people in professional fields who know what they're doing. All of my display devices are calibrated with color probes and calibration software. I'm even down to not using HDMI whenever I can avoid it, and when I do use it it's not the 98-cent or Amazon Basics variety (skeptics can poo-bah that "videophile" stuff all they want, but thank god I don't have to watch their TV's. It would be the equivalent of watching a display visually barfing).

In video processing I've managed to avoid the hooplah and colorful b.s. used to market high-priced "pro" software. Obviously I can't afford the Real Stuff that Disney and Industrial Light & Magic use, but I can afford to learn to use Avisynth and VirtualDub for free, making me neither a "software snob" nor a "price snob". The most expensive software I own is from TMPGenc and an ancient version of AfterEffects released before Adobe screwed up that product. My PC's are home made, and while I have a Win7 laptop everything else is XP. I'm not using $1000 capture cards; I stuck with All in Wonders purchased years ago for XP and with a newer $40 VC500 for Windows 7 capture. I gave up on my pricey AG-1980 VCR for most projects and found that a more mainstream Panasonic SVHS and tbc pass-thru was tracking damaged tapes better, even if it means more Avisynth cleanup after capture. Samples of those initial captures and projects finished with the tools mentioned here, with some of the nightmare problems that came with them, have been posted in this forum. They serve to demonstrate that bad tape can look pretty good with better tools and methods, and that price isn't always the answer.

I don't understand what you mean by stretching a video to fill the screen. 4:3 video and aspect ratios other than 16:9 won't fill a 16:9 screen without distortion. The only aspect ratio that will exactly fill a 16:9 screen completely without distorting the image is 16:9. When I visit people who stretch the image to fit their screen, I usually find something else to do other than watch their TV. The distortion is really annoying, not to mention that watching a good looking actress fattened or squished or zoomed is just plain ugly.

Standard definition from DVD and SD-BluRay and even from properly processed VHS captures looks and performs adequately and is a big improvement (in some respects) over the same thing seen with CRT displays (LCD motion rendering, color accuracy, and contrast range being exceptions). Since all display devices are not equal, some players and displays can upsample better than others, some can handle telecine better than others. You can't judge by what you see in the usual showroom, because those demo videos and showroom display settings are purposely designed to mask defects. There is too much advanced testing of computer displays, playback machines, TV's, and audio gear available elsewhere than to trust big-store showrooms.

Quote:
Originally Posted by KalterStein View Post
Honestly I am not sure what dot crawl or geometric distortion is, so I will hit the old google for that in a bit.
Google will show plenty of examples of dot crawl.

Distortion due to line sync errors is more elusive to find. The image below is an example of one type of distortion, which is frame warping. In this case the scanlines that define the bottom and middle of the frame arrive "earlier" at the capture device than scanlines that define the upper part. The result is left and right borders that are warped toward the left near the top of the frame.



Wonder why that frame looks a bit fuzzy? Obviously the guy running is blurry, but elsewhere one reason is that the edges are affected by scanline timing errors; the lines that define the edges don't all arrive "on time" at the capture device.

Another form of distortion is small "notches" in side borders, as well as wrinkles, notches, and warps in verticals and diagonals that change shape with every frame. These moving distortions make for noisy edges and poor encoder performance.

Here are 4x enlargement samples of line wiggle fixed by line-level tbc devices:
http://forum.videohelp.com/threads/3...=1#post1882662
https://forum.videohelp.com/threads/...hs#post2521850

pictures of line distortion corrected with an ES15 pass-thru tbc.
http://forum.videohelp.com/threads/3...=1#post1983288

Quote:
Originally Posted by KalterStein View Post
In your script since I am already dealing with a master in yuy2, does it need conversion to RGB or later to YV12?
My understanding is YUY2 is better. I also read that RGB24 would provide the best output.
The best strategy with colorspace conversions is to use as few of those conversions as is practicable. If you start working in YUY2, do what you can in that colorspace until a different process or filter requires a different colorspace. Every conversion involves math interpolation errors. Avisynth is probably the cleanest way to run those conversions and is even cleaner than a lot of big-name NLE's like Adobe and Vegas. But more conversions back and forth mean more interpolation errors. You don't have to be overly squeamish about it, just don't get reckless.

RGB is used for color correction when YUV's limits become evident. RGB is also used for display and sometimes for resizing. But I don't know what you mean by RGB being "better". Video isn't normally encoded as RGB but as YV12, although other colorspaces are possible for various reasons. Remember that standard YUV levels of 16-235 for luma (16-240 for chroma) are expanded to 0-255 when converted to RGB. YUV levels that exceed legal limits can be problematic in RGB, when data at the extremes gets clipped (destroyed).
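
A sketch of one way to head that clipping off, reusing the Levels() line from the script posted earlier in the thread (the file name is a placeholder):

Code:
AviSource("capture.avi")                             # YUY2 capture
Levels(16,1.0,255,16,235,dither=true,coring=false)   # compress 16-255 luma into the legal 16-235 range before RGB
ConvertToRGB32(interlaced=true)                      # brights no longer get clipped in the 16-235 to 0-255 expansion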

Quote:
Originally Posted by KalterStein View Post
I capture to D then take the final master capture I want to keep and move it to C
So when encoding for a final production copy, I am reading the master from C and storing the new encoded copy on D
So at no point in time am I writing what I am outputting to the same drive the OS is operating on.
Post-processing doesn't require the CPU or operating system headroom that capture does. There's no need to copy your capture to a different partition for post processing.

Quote:
Originally Posted by KalterStein View Post
Unfortunately I just don't have funds for TBC equipment.
I understand that. Line sync and frame sync errors can't be corrected with software. Therefore they will be a permanent element in your captures.

Many solve the problem by using a pass-thru for a tbc, such as an old used Panasonic DMR-ES10 or ES15, which is tons cheaper than "regular" tbc devices but very effective. Excellent y/c filters, too, for cleaning up dot crawl. https://forum.videohelp.com/threads/...-you-use/page4

Quote:
Originally Posted by KalterStein View Post
I could just hook the VCR right to the TV but where is the fun in that.
When I think how impressed we were in the old days of CRT and VCR, it's embarrassing. The noise and distortion were always there, it just didn't seem so obvious. It was fun then. But LCD's came along, and it was all over.

Good luck in your project and with your family.


Attached Images
File Type: jpg distorted borders.jpg (77.8 KB, 50 downloads)

Last edited by sanlyn; 06-27-2018 at 11:57 PM.
Reply With Quote
  #9  
06-29-2018, 08:30 PM
KalterStein KalterStein is offline
Free Member
 
Join Date: May 2018
Posts: 5
Thanked 0 Times in 0 Posts
Here is a clip from the master capture. I finally got a second to sit down and cut it.
Oscar_master_clip

Not exactly the best capture in the world I am sure but it will suffice if I can just make it look decent on the larger screen.

Below is the copy of the AVS script I have been working on for it. As you can see all I have done is the cropping and borders. I am horrible with math so I haven't tried figuring out how to calculate the crop vs borders to center the image and still arrive at 720x480. I think I have it right as the aspect ratio is correct and the image looks centered to me.


Code:
import("C:\Program Files (x86)\AviSynth\plugins\TemporalDegrain.avs")
AviSource ("C:\Documents and Settings\Administrator\My Documents\oscar_master.avi")
#ConvertToYV12(interlaced=true)

#Spline64Resize(width/2, height)
#QTGMC(preset="fast")
#TemporalDegrain(SAD1=200, SAD2=150, sigma=8)
#Dehalo_alpha(rx=2, ry=1)
#TurnRight().nnedi3(dh=true).TurnLeft()
#aWarpSharp(depth=5.4)
#Sharpen(0.00, 0.13)
Crop(4,0,-16,-12).AddBorders(10,6,10,6, color=$C237CE)

function DeHalo_alpha(clip clp, float "rx", float "ry", float "darkstr", float "brightstr", float "lowsens", float "highsens", float "ss")
{
rx        = default( rx,        2.0 )
ry        = default( ry,        2.0 )
darkstr   = default( darkstr,   1.0 )
brightstr = default( brightstr, 1.0 )
lowsens   = default( lowsens,    50 )
highsens  = default( highsens,   50 )
ss        = default( ss,        1.5 )

LOS = string(lowsens)
HIS = string(highsens/100.0)
DRK = string(darkstr)
BRT = string(brightstr)
ox  = clp.width()
oy  = clp.height()
uv  = 1
uv2 = (uv==3) ? 3 : 2

halos  = clp.bicubicresize(m4(ox/rx),m4(oy/ry)).bicubicresize(ox,oy,1,0)
are    = mt_lutxy(clp.mt_expand(U=uv,V=uv),clp.mt_inpand(U=uv,V=uv),"x y -","x y -","x y -",U=uv,V=uv)
ugly   = mt_lutxy(halos.mt_expand(U=uv,V=uv),halos.mt_inpand(U=uv,V=uv),"x y -","x y -","x y -",U=uv,V=uv)
so     = mt_lutxy( ugly, are, "y x - y 0.001 + / 255 * "+LOS+" - y 256 + 512 / "+HIS+" + *" )
lets   = mt_merge(halos,clp,so,U=uv,V=uv)
remove = (ss==1.0) ? clp.repair(lets,1,0) 
          \        : clp.lanczosresize(m4(ox*ss),m4(oy*ss))
          \             .mt_logic(lets.mt_expand(U=uv,V=uv).bicubicresize(m4(ox*ss),m4(oy*ss)),"min",U=uv2,V=uv2)
          \             .mt_logic(lets.mt_inpand(U=uv,V=uv).bicubicresize(m4(ox*ss),m4(oy*ss)),"max",U=uv2,V=uv2)
          \             .lanczosresize(ox,oy)
them   = mt_lutxy(clp,remove,"x y < x x y - "+DRK+" * - x x y - "+BRT+" * - ?",U=2,V=2)

return( them )
}

function m4(float x) {return(x<16?16:int(round(x/4.0)*4))}


Attached Files
File Type: avi Oscar_Master_Clip.avi (96.49 MB, 6 downloads)
Reply With Quote
  #10  
06-29-2018, 11:21 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,307 Times in 982 Posts
Thank you for your sample.

I can do some work on it in a few days, as I'm traveling now with a laptop that is impossible for video work. But I think you should know that your sample is telecined with 3:2 pulldown. You should not run deinterlace filters on this type of video. You should be using TIVTC (inverse telecine).

-- merged --

Again, thanks for the sample and your script.

First, the version of DeHalo_Alpha you are using downloads as an .avsi script. If an .avsi plugin is in your Avisynth plugins folder, it loads automatically when the script calls it -- there is no need to copy the text of an .avsi plugin into your script. DeHalo_Alpha isn't called in your script anyway, but you should note that the filter should be used only on progressive video.
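For example (a minimal sketch only, assuming DeHalo_Alpha.avsi and its MaskTools/RemoveGrain dependencies sit in the plugins folder), the whole call reduces to:

Code:
# DeHalo_Alpha.avsi auto-loads from the plugins folder; no Import() or pasted function body needed.
AviSource("C:\Documents and Settings\Administrator\My Documents\oscar_master.avi")
TFM().TDecimate()                # inverse telecine first -- the filter belongs on progressive video
ConvertToYV12(interlaced=false)
DeHalo_Alpha(rx=2.0, ry=2.0)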

Obviously your script doesn't do anything except crop and replace borders. But why are you using a pink border, and why are you cropping off so much of the image?

Quote:
Originally Posted by KalterStein View Post
Not exactly the best capture in the world, I am sure, but it will suffice if I can just make it look decent on the larger screen.
Well, yes, the capture says more about the limitations of your capture gear than about what your projects could look like. We've seen much worse. It's a soft, somewhat plastic-looking image with tracking distortion along the top third of the frame, especially visible in the earlier frames, and some luma and chroma smearing. There is a bad horizontal dropout (a white streak or "rip") just after the lieutenant gets out of the car. Given the script, the encoded result will look exactly like the capture, but with a bright pink border. There is also some tape noise, along with object flutter from poor tracking and the lack of a line TBC.

There is a visible difference between taking the time to clean up defects and skipping cleanup altogether. The top image below shows one frame with the horizontal rip visible; the bottom image shows the same frame with some denoising, standard black borders, and color correction. I couldn't clean up all of the original chroma smearing without destroying more detail:

original script


new script


Your video is telecined with 3:2 pulldown. Two of every five frames contain duplicated fields and appear as interlaced; the other three frames appear as progressive. The telecine is hard-coded, meaning the pulldown fields are physically part of the frames rather than data flags that tell a player which fields or frames to repeat to simulate 29.97 fps playback. On playback, a hard-telecined movie plays on most devices as interlaced. Removing the pulldown fields with the TIVTC plugin restores the movie's original frame rate of 23.976 fps. The TIVTC plugin works in YUY2 and YV12.
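In isolation, the inverse-telecine step is just a couple of lines (a minimal sketch, assuming the TIVTC plugin is installed; the full scripts below use the same calls in context):

Code:
# Minimal inverse-telecine sketch (TIVTC plugin assumed to be installed)
AviSource("E:\forum\faq\KalterStein\Oscar_Master_Clip.avi")
AssumeTFF()    # the capture is top field first
TFM()          # field-match the pulldown fields back into whole progressive frames
TDecimate()    # drop the duplicate frame in each group of five: 29.97 fps -> 23.976 fps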

The original color balance is too red. Most people correct red by adding blue, which is not correct: the opposite of red isn't blue, it's cyan (blue-green), and adding blue alone makes red pink. The original video looks dim because the upper mids and brights are subdued and the black levels are a bit high. I made the initial corrections in YUY2 and tweaked mids and brights in RGB with ColorMill in VirtualDub. I avoided adding a lot of cyan and brighter colors, because the movie appears to be trying to simulate the muted, warmish color of early-1930s films, and because the lighting in the scene is not bright daylight (it's overcast light modified with warming filters; you can see there are no strong shadows in any of the shots).
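To put the red-versus-cyan point in YUV terms (a hypothetical snippet for illustration, not the correction actually used for this clip, which was done with Tweak/Levels and ColorMill): adding blue means raising the U channel, while pulling a red cast toward cyan means lowering the V (red-difference) channel.

Code:
# Hypothetical illustration of red vs. cyan in YUV -- not the grading used for this clip.
AviSource("E:\forum\faq\KalterStein\Oscar_Master_Clip.avi")
#ColorYUV(off_u=4)   # adding blue alone: raises U and turns a red cast pink
ColorYUV(off_v=-4)   # pulling red toward cyan: lowers the red-difference (V) channel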

The VirtualDub filters I used were ColorMill and Camcorder Color Denoise. I have attached a .vcf text file that recorded those settings. To use a .vcf, open VirtualDub, click "File..." -> "Load processing settings...", then locate the .vcf file and click "OK" or "Open". The .vcf will load the two filters with the settings I used. ColorMill and Camcorder Color Denoise must be in your VirtualDub plugins folder. If you don't have those filters, they were previously posted in this link http://www.digitalfaq.com/forum/atta...dubfilters4zip packaged with two other VDub filters.

The script below doesn't attempt to fix the horizontal rips, which require special treatment. But it does perform inverse telecine, some denoising, and levels correction. Note that QTGMC is not used here to deinterlace; it is used to denoise progressive video ("InputType=2") and to smooth some of the tracking shimmer in the original capture. The Santiag plugin is used to calm some bad aliasing and line twitter. The result is encoded as the attached "Oscar_23_976.mp4", which plays at 23.976 fps film speed at 4:3 aspect ratio.

Code:
AviSource("E:\forum\faq\KalterStein\Oscar_Master_Clip.avi")
AssumeTFF()

### --- inverse telecine (TIVTC plugin)  ---###
### --- output is 23.976 fps progressive ---###
### --- TIVTC works in YUY2 and in YV12  ---###
TFM().TDecimate()

Tweak(cont=1.12,sat=1.15,dither=true,coring=false)
Levels(22,1.0,255,16,235,dither=true,coring=false)
ConvertToYV12(interlaced=false)

### --- Use QTGMC as a denoiser on progressive video ---###
QTGMC(InputType=2,preset="medium",EZDenoise=8,denoiser="dfttest",ChromaMotion=true,\
   ChromaNoise=true,DenoiseMC=true,GrainRestore=0.3,border=true)
Santiag(2,2)
MergeChroma(aWarpSharp(depth=20))
LSFmod()
AddGrainC(1.2,1.2)

Crop(4,0,-12,-4).AddBorders(8,2,8,2)

### --- RGB32 for VirtualDub filters ---###
ConvertToRGB32(interlaced=false, matrix="Rec601")
return last
Here is a more complete and more complicated script that cleans up the horizontal dropouts. Because the cleanup affects the soundtrack, a trick is used to save the audio early in the script and then dub the original sound back into the movie at the end of the script. The output of the script is 23.976 fps progressive -- the attached "Oscar_pulldown_restored.mp4" had soft 3:2 pulldown flags added when the video was encoded, for 29.97 fps playback at 4:3. The attachment is an mp4 container, but the encoding otherwise follows standard-definition BluRay spec.

I realize that you say you don't have time for this level of restoration, but it does illustrate the possibilities. You will still see some slight remnants of aliasing and line twitter (I think that's what you were referring to as "pixelation"). That's partly a VCR playback problem and partly a broadcast quality problem. It's possible to fix even more, but there wouldn't be much video left to watch. After you're finished, keep the tapes you really want. You might find more time and better players in the future for better captures.

Code:
Import("D:\Avisynth 2.5\plugins\ReplaceFRamesMC2.avs")
Import("D:\Avisynth 2.5\plugins\FixRipsP2.avs")

AviSource("E:\forum\faq\KalterStein\Oscar_Master_Clip.avi")
AssumeTFF()

### --- inverse telecine (TIVTC plugin)  ---###
### --- output is 23.976 fps progressive ---###
### --- TIVTC works in YUY2 and in YV12  ---###
TFM().TDecimate()

### --- save audio for later use ---###
vid=last
save_aud = vid

vid
Tweak(cont=1.12,sat=1.15,dither=true,coring=false)
Levels(22,1.0,255,16,235,dither=true,coring=false)

ConvertToYV12(interlaced=false)
a1=last

### --- Create small patches from filtered frames ---###
### --- and overlay patches only onto bad frames. ---###

b0=a1
b01=a1.ReplaceFramesMC2(185,1).Crop(2,60,-4,-416)
b02=Overlay(b0,b01,x=2,y=60)
a2=ReplaceFramesSimple(a1,b02,mappings="185")

b0=a2
b01=a2.ReplaceFramesMC2(186,1).Crop(2,66,-4,-408)
b02=Overlay(b0,b01,x=2,y=66)
a3=ReplaceFramesSimple(a2,b02,mappings="186")

b0=a3
b01=a3.FixRipsP2().Crop(2,46,-4,-418)
b02=Overlay(b0,b01,x=2,y=46)
a4=ReplaceFramesSimple(a3,b02,mappings="183 184")

### --- Use QTGMC as a denoiser on progressive video ---###
a4
QTGMC(InputType=2,preset="medium",EZDenoise=8,denoiser="dfttest",ChromaMotion=true,\
   ChromaNoise=true,DenoiseMC=true,GrainRestore=0.3,border=true)
Santiag(2,2)
MergeChroma(aWarpSharp(depth=20))
LSFmod()
AddGrainC(1.2,1.2)

Crop(4,0,-12,-4).AddBorders(8,2,8,2)
AudioDub(last,save_aud)

### --- RGB32 for VirtualDub filters ---###
ConvertToRGB32(interlaced=false, matrix="Rec601")
return last


Attached Images
File Type: jpg original script.jpg (99.5 KB, 42 downloads)
File Type: jpg new script.jpg (96.5 KB, 44 downloads)
Attached Files
File Type: vcf VirtualDuib_settings.vcf (983 Bytes, 0 downloads)
File Type: mp4 Oscar_23_976.mp4 (8.82 MB, 1 downloads)
File Type: mp4 Oscar_pulldown_restored.mp4 (15.10 MB, 1 downloads)