
MindrustUK 03-08-2019 06:43 AM

Hi-8 Capture chain quality critique / advice
 
9 Attachment(s)
Hi all,

My first post here; I've done some reading, so hopefully this won't be too daft.

I've recently decided to revisit capturing / converting all of my family's old media. After years and years of having VHS decks etc. on tap (my father was a TV / video repair man for Granada in the UK), we suddenly realized as of last year that none of our extended family has these any more.

I'm trying to do a "final" conversion at the highest reasonable (not costing thousands) quality, as I don't think I'll be revisiting this project again. Previously (~10 years ago) I used a Hauppauge 1212 (HD PVR) to grab Hi8 over composite from the family Sony CCD-V8AF. The quality seemed fine but I'm sure it could be better (this is also the original source machine for most of the recordings).

Since then, I have obtained a Sony DCR-TRV230E as per one of the recommendation threads on this board. I am capturing with TBC and DNR turned on over an S-Video connection (I am not using the MiniDV connection). I am feeding this through to a Blackmagic Design Intensity Pro PCI-E (I did start with an Intensity Shuttle on USB3 but this caused havoc) using some reasonable quality cables.

For comparison I have also tried capturing off the original CCD-V8AF unit, and tried using the DCR-TRV230E in conjunction with a Hauppauge HD PVR 2 Gaming Edition Plus. From all of these sources / combinations I'm struggling to see a marked difference in quality. I'll be the first to say this may be down to my lack of experience / ignorance. I've attached a few still grabs from each of these captures and I'm hoping someone can tell me if there really is negligible difference between them, or point out what I should be looking for?

My plan for next steps is to feed my captures into AviSynth and cut / clean up the footage. I'm not sure if it's worth upscaling to 720p; I'll certainly be trying de-interlacing + frame doubling. This may be the topic of another post, but all advice is welcome.

Thanks.

jwillis84 03-08-2019 08:06 AM

Sanlyn would probably have some good comments for you if you had video clips. Video is a dynamic medium with a large component in the time domain. It's hard to tell much from static snapshots.

The act of making a snapshot throws away a large amount of data and creates something to replace it in a single frame. It's like painting a picture of a scene: it's 2D, not 3D, and a bit of an interpretation.

The problems you'd most likely want help with are in the bits that get thrown away: the de-interlacing, the artifacts from one moment to the next, any geometric distortion or wobbling, shake and jitter, weaving and comb or mosquito lines.

Based on your list of equipment, it sounds like you could deal with a good number of potential problems, but without seeing the video the discussion is mostly theoretical and hard to apply to your specific situation.

If you're not familiar with how to make useful clips, try sending one clip and let people suggest or correct you on how to improve the next uploads. That minimizes wasted time on your part and maximizes the chance that people will look at the one clip instead of five or six, and it reduces confusion over which clip is being discussed or commented on in a message.

sanlyn 03-08-2019 08:34 AM

Quote:

Originally Posted by MindrustUK (Post 59893)
I am feeding this through to a Blackmagic Design Intensity Pro PCI-E

Yes, it does look like typical BlackMagic problems with analog levels and a plasticky look. BM tries to treat an analog source like a digital source and to make analog sources look like digital ones. It doesn't really work.

Quote:

Originally Posted by MindrustUK (Post 59893)
I've attached a few still grabs from each of these captures and I'm hoping someone can tell me if there really is negligible difference in them or point out what I should be looking for?

Because the subject is video and this is a video forum, you can't really tell very much from still photos, can you? Do you see how the noise patterns and motion handling differ in the photos? How about interlace behavior? Still photos can't tell you much in those regards. One thing one can tell from the photos is that there is very poor or nonexistent input level control, and that there are illegal or unsuitable luminance levels resulting in unrecoverable blown-out highlight detail in some pics, unrecoverable clipped blacks in others, and some rather grim-looking dark chroma density in one of the others.

Quote:

Originally Posted by MindrustUK (Post 59893)
I'm not sure if it's worth up scaling to 720p

It isn't. It's a complete waste of time with noisy low resolution sources. High definition is not based on small low resolution sources blown up into big blurry frames. High def is based on high resolution, not on frame size. Besides, can't your playback system upscale? Most likely it can upscale far better than you can with software.

Quote:

Originally Posted by MindrustUK (Post 59893)
I'll certainly be trying de-interlacing + frame doubling.

Why? Once you realize that deinterlacing is a destructive process rife with interpolation errors, you might want to give it more serious thought. In any case, the only really good deinterlacer around is QTGMC. If you use anything else, you're wasting your time.

Perhaps posting some actual, unfiltered video captures will give you much more detailed info.

hodgey 03-08-2019 08:46 AM

Not seeing any labeling on what capture is from what camera. Based on the look I presume the latter two are captures from the newer camera (the colour error on the right edge is a giveaway), with the last one being from the HD-PVR? Ideally you would want to compare the same frame from all captures, but even with slightly different frames there is a noticeable difference between the old/new camera images.

Haven't looked at the levels in detail, but captures from the old camera do look a little blown out, e.g. on the sand heaps in one of the shots. In the same shot you can, for example, see details and individual bricks in the brick wall better in the newer camera captures.

Another thing is the lack of TBC in the old camera. It does look like the TBC function may be active on the capture card though, which can make up for it a bit; otherwise it would look more jaggy. Generally this is easier to see with a video than on a still frame.

There are also some visible dropouts on the old camera captures, e.g. you can see a bit of a horizontal line in the sky in the picture of the road. The newer cameras are generally better at hiding these.

I would echo jwillis84's suggestion about uploading a clip if you are more curious.

As for the different capture devices, I know at least the old HD-PVR and the Blackmagic cards use similar analog-to-digital chips from Analog Devices, so it makes sense that they would look similar. Didn't think you could capture lossless from the HD-PVR USB boxes though.

MindrustUK 03-08-2019 09:25 AM

Thanks for all the feedback, as advised I've taken a short clip from the DCR-TRV230E + Black Magic combo and uploaded it as follows:

https://drive.google.com/open?id=19HoLjWdArX5ct4uEtueYsH_9vATV_3va

(~500 MB .avi file.)

Quote:

Originally Posted by sanlyn (Post 59895)
Yes, it does look like typical BlackMagic problems with analog levels and a plasticky look. BM tries to treat an analog source like a digital source and to make analog sources look like digital ones. It doesn't really work.

I've read that the AJA KONA LHe cards could be better suited to this work, and I could get one fairly cheaply. Would you say it's worth the investment, or is it massive overkill for what I'm doing?

Quote:

Originally Posted by sanlyn (Post 59895)
One thing one can tell from the photos is that there is very poor or nonexistent input level control, and that there are illegal or unsuitable luminance levels resulting in unrecoverable blown-out highlight detail in some pics, unrecoverable clipped blacks in others, and some rather grim-looking dark chroma density in one of the others.

I'll do some reading and try to understand what that all means. I'm guessing these are hardware limitations down to the combination of card and player? What would be the remedy: things that can be fixed in post with software, or additional hardware signal filtering before capture?

Quote:

Originally Posted by sanlyn (Post 59895)
It isn't. It's a complete waste of time with noisy low resolution sources. High definition is not based on small low resolution sources blown up into big blurry frames. High def is based on high resolution, not on frame size. Besides, can't your playback system upscale? Most likely it can upscale far better than you can with software.

Why? Once you realize that deinterlacing is a destructive process rife with interpolation errors, you might want to give it more serious thought. In any case, the only really good deinterlacer around is QTGMC. If you use anything else, you're wasting your time.

Noted; I'm guessing the only thing to do is compress down to a reasonable codec after capture and clean-up (whatever that may entail), since I don't plan on sharing the raw AVI files given their size, and to forget de-interlacing and upscaling, or in general trying to do anything remotely fancy, as it will just end up being detrimental.

Quote:

Originally Posted by hodgey (Post 59896)
Didn't think you could capture lossless from the HD-PVR USB boxes though.

For clarity, the captures on the old HD-PVR went straight to MPEG-2; the current HD PVR 2 is putting out H.264. These captures were conducted as a "smash and grab" to create a primitive backup should tapes get chewed, machines fail, etc. The limited quality was always a known factor, hence the current revisit and post for help.

jwillis84 03-08-2019 10:07 AM

MindrustUK, posting a clip was a good move.

Be sure to pay attention to the "unfiltered" part and do not introduce too many devices between the source and the capture device. That way everyone commenting can give you an accurate assessment of the best moves to make next.

Adding in too many variables at the start makes it hard to separate out cause and effect.

Sanlyn is giving you very good advice; don't take any of it as criticism.

He is being concise and helpful from a long history of experience. I've seen him demonstrate technique on clips made with minimal equipment, with very good results. Don't throw too many extra devices, ideas and money at the problem without understanding the problem first. You may not even be aware of the problems the clip reveals. Let the clip tell the story first.

I can relate to the AJA ("great deal") issue: often you just become aware of a device at a great price and feel you have to move quickly, only to find out later that it does nothing for you. Don't fall into that trap. Be patient and let the 'great deal' go.

Confusingly, AJA is a good name, and Blackmagic is not a great name in consumer-level video capture. BM has a long history of disappointing people, myself included. Worst of all, BM is known for silently dropping frames and random 'resets' in the middle of a capture. You can't fix BM gear by trading up to more expensive BM gear or by changing the connection type from USB to PCI; it's a design issue. They do custom silicon 'with bugs'. Some broadcasters live with it, and BM has a successful business, but it isn't great for the budget-minded consumer who can't chase silicon updates (hardware updates, not firmware).

MindrustUK 03-08-2019 10:24 AM

Thanks jwillis84; with regards to "unfiltered", this has always been my understanding. At the minute it's camera to capture device with nothing in between (cables notwithstanding). I've tried to ensure the cables are "dressed" away from power as much as possible and kept short to reduce noise pickup.

The issue of adding in variables is one I know all too well from other troubleshooting experiences; I intend to keep things as simple as possible. Again, admittedly I have a limited understanding of what I'm doing so far, and that's why I've posted for direction.

No offense taken at Sanlyn's advice or otherwise; both of you seem to be taking me in the right direction, and I can only say thanks for that. I am here very much to learn from what I've been given and to do the right thing.

Noted on your points with regards to hardware. It seems it's not going to make a big difference in this scenario so far.

MindrustUK 03-08-2019 10:36 AM

Sorry all, added the correct sharing settings to the link in the previous post!

sanlyn 03-08-2019 12:19 PM

Thank you for the sample and the sharing correction. Taking a look now, preparing some stats and demo pics/video/graphs to illustrate several pointers, tips, and tricks.
:wink2:

First impression: The original download is 720x576 PAL in Kona's proprietary lossless codec, in YUY2 color and a file size of 540.1 MB. For purposes of storing smaller lossless intermediate working files, I recompressed the download as unaltered YUY2 using the Lagarith lossless codec for a file size of 132.4 MB. This is not a quality issue, but simply a workflow consideration.

jwillis84 03-08-2019 12:32 PM

Quote:

Originally Posted by sanlyn (Post 59905)
... 540.1 MB to 132.4 MB

Yikers, talk about burying the lede!

sanlyn 03-09-2019 09:34 AM

10 Attachment(s)
The sample avi's physical frames have these border characteristics: 10 pixels of left border, a 1-pixel broken border across the top (the top border pixels extend only 65% across the length of the top border), 12 pixels of right border, and 8 pixels of bottom-border head switching noise. For purposes of analyzing the core image content, border pixels were temporarily removed so that zero-black pixels would not affect histograms. The Avisynth command for removing these border pixels was Crop(10,2,-12,-10). Notice that 10 pixels are temporarily removed from the bottom rather than 8, because the analysis tools require a mod4 height.

In the final output workfile, the borders are removed with Crop(10,2,-12,-8). When the frames were reassembled later, new black pixels were added and the image content was more vertically centered using the command AddBorders(10,4,12,6). YUY2 requires pixels in groups of Mod2. Odd-numbered pixel groups cannot be used. This and other limitations are due to the way luma and chroma information is stored in pixels. Some highly regarded NLE's ignore these rules, often with oddball results that belie the cost of the software.
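
Written out as a script, the border handling above looks roughly like this ("capture.avi" is just a placeholder name, not the actual file, and the filtering itself is only indicated by a comment):

Code:

src = AviSource("capture.avi")        # 720x576 YUY2 capture (placeholder filename)

# analysis copy: all border pixels dropped so zero-black edges don't skew the histograms
# (10 pixels off the bottom instead of 8 because the analysis tools want a mod4 height)
analysis = src.Crop(10, 2, -12, -10)

# working copy for the final output: only the genuine junk is removed
work = src.Crop(10, 2, -12, -8)

# ...filtering happens on "work" here...

# re-pad to 720x576 and re-centre the picture vertically
return work.AddBorders(10, 4, 12, 6)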

One of my first steps in checking captures is to open an Avi in VirtualDub directly or by using a simple Avisynth script that allows the use of fairly direct and simple tools. The first direct and simple set of tools is eyeballs, but they're enhanced with more objectivity and precision by the tools shown below.

http://www.digitalfaq.com/forum/atta...1&d=1552144344

Above, the YUV levels histogram is an Avisynth builtin feature that displays horizontal bands describing specific luminance and chroma channels. Luminance (the Y channel) is shown in the white band across the top of the graph. The blue-yellow chroma (U channel) is the middle band, and red-green chroma (V channel) is the lower band. Darker (lower) values are on the left, brighter (higher) values are on the right. The thin vertical line down the center represents the middle of the spectrum between dark and bright.

The two shaded vertical borders along the left and right represent pixel values that lie outside the legal video range of y=16-235. Because YUV 16-235 is expanded in RGB to 0-255, elements inside the shaded borders would be values that lie outside the range of RGB=0 to 255 and thus would display incorrectly or would be clipped (destroyed). In the far upper right-hand corner of the histogram shown above, the bright yellow "peak" in the unsafe right-hand border would expand to values greater than RGB 255, which could not be accepted by most displays and would normally be clipped (rejected) by broadcast equipment. Besides, unsafe video values don't look so great and often have a freaky appearance. Plenty of PoohTube examples abound.
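
A minimal script for pulling up that levels histogram on your own capture might look like the sketch below (again, the filename is a placeholder; the planar conversion is only there because the levels mode requires it):

Code:

AviSource("capture.avi")
Crop(10, 2, -12, -10)              # keep the black borders out of the graph
ConvertToYV12(interlaced=true)     # Histogram's levels mode wants planar colour
Histogram(mode="levels")           # the Y / U / V bands with the shaded unsafe zones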

http://www.digitalfaq.com/forum/atta...1&d=1552144399

The "parade" formatted histogram above is a popular means of displaying RGB values for average sum brightness (the white band), Red chroma (the red band), Green chroma (the green band), and Blue chroma (the blue band). Dark colors are at the left, bright colors at the right. The height of the peaks in the bands indicate intensity and/or the number of pixels in different segments of the spectrum. The RGB histogram above shows somewhat elevated black levels, since the horizontal bands don't extend to the left (dark) border wall. On the right border wall, the amount of Green and Blue slightly surpasses RGB 255 and exceeds the ability of most RGB devices to display properly. The positions of the bands also show a deficit of Red and a predominance of Green and Blue.

http://www.digitalfaq.com/forum/atta...1&d=1552144439

Vectorscopes exist in both RGB (above) and YUV color. The center of the circle represents lower values of the colors shown, while higher values extend to the outer perimeter. Center values indicate lower saturation (grays), outer values indicate greater saturation. The letters along the outer edge of the circle indicate regions of primary and secondary colors: M=Magenta, B=Blue, C=Cyan, G=Green, Y=Yellow, and R=Red. The extended off-center line indicates the usual area for skin tones. Color pixels that extend beyond the small rectangles along the outer perimeter would be oversaturated values that distort and clip in RGB displays. The vectorscope above shows a dominance of magenta and a stronger dominance of bluish Cyan. There is a deficit of Red, Yellow, and pure Green. In the images generated by this vectorscope there would be no clean whites, grays, or blacks. The overall imbalance would be heavily tinted toward Cyan.

Using Avisynth and VirtualDub, I prepared a demo video that shows your sample video (with black borders removed) and simultaneously shows all three histograms during play. They show the color imbalance as well as the way the camera's AGC (Auto Gain) and autowhite act more like defects than "features", seldom doing what they're supposed to do. The AGC does some visible gamma "pumping" at frames 9 to 32, changes three times again shortly thereafter, and undergoes more gamma pumping and shifting at frames 406, 416, 435, and 488. There's really not much you can do about this except live with it. We've seen much worse. In more disruptive scenarios it's sometimes possible to smooth matters to a slight degree using one of several Avisynth configurable autobalance filters, but this involves a great deal of additional manipulation -- and it still doesn't totally solve the problem. You often need the help of something like After Effects to address it.
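
You can approximate that kind of side-by-side layout with Avisynth builtins alone; the RGB parade is not a builtin, so it isn't reproduced in this rough sketch:

Code:

src = AviSource("capture.avi").Crop(10, 2, -12, -10).ConvertToYV12(interlaced=true)
levels = src.Histogram(mode="levels")      # YUV levels bands appended to the right
vector = src.Histogram(mode="color2")      # vectorscope-style U/V plot appended to the right
return StackHorizontal(levels, vector)     # play this in VirtualDub and watch both update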

Below is a reduced-size frame from the histogram video:
http://www.digitalfaq.com/forum/atta...1&d=1552144512

The original frames are unaltered except for removing the side borders. At this stage I ignored the right-hand magenta stain. Note how blue the image looks, with no clean whites or grays. The street surface and sunlit sand are all blue, ground surfaces are grayed-out tan, foliage has suppressed green, the surf's foam in the distance doesn't look white, and the overall scene looks dim rather than lit by a bright midday sun. Shadow detail of people under the trees looks murky. Note that the excessive camera jerking is mercilessly demanding and wasteful of encoding bitrate. It takes higher than average bitrate to properly describe hectic motion. Deinterlacers in software or hardware have to be on their toes to handle such scenes properly, whether it plays as interlaced or progressive. In this case the camera at times recorded a little motion smear and ghosting or shimmer. Or the subtle effect could be coming from the player.

The demo video is attached as Histogram-demo_25i.mp4.

Play it several times and note how the histograms change fitfully with the camera's AGC, often going into unsafe territory in the two middle histograms. It's normal for histograms to change as scene elements change, but they wouldn't change to this degree if a constant manual exposure had been used. Color balance also changes, especially in the beach and surf elements. The color of the foreground roadway changes several times.

Histograms don't reveal much about noise, except some types of chroma noise. I noticed some tape noise in darker areas of shadows and trees, but most bright daylight scenes don't display as much typical noisy junk as do darker videos. I also note that interlace activity looks smooth here with less buzzy edges, aliasing, and/or excess combing than seen with videos from most consumer cameras.

Below, I've selected three frames to illustrate the changes that occurred from the original starting point to the final output mp4's. Each frame is resized to 4:3 proportions. The original frame is on the left, the frame from the 25i mp4 is on the right. From top to bottom the frame numbers are 86, 305, and 493. By comparing frames in this manner you can also see how the camera's autowhite changes colors in many elements from beginning to end.

http://www.digitalfaq.com/forum/atta...1&d=1552144727

http://www.digitalfaq.com/forum/atta...1&d=1552144727

http://www.digitalfaq.com/forum/atta...1&d=1552144727

I prepared three versions of the final output, two of which are presented here.

Each version is characterized by a certain "look" that comes from the camera, which attempted to compress extremely high contrast into the limits of the recording cells and media. The results are a few burned highlights (some of which can be rescued) and a loss of shadow detail in many areas. Most of the people are silhouettes. But within the limits of the gear and media, it's a decent recording and capture. It simply doesn't have the latitude or dynamic range of film. In processing I used contrast masking filters to bring out shadow detail.

The first output version attempted to correct the rightmost border stain using conventional mask and overlay methods. Those methods often work, but sometimes not so well. The problem in this sample is that the magenta stain isn't a tint; it actually replaces or destroys original colors. Anti-rainbow filters can work with tints, but they can't restore colors that no longer exist. The result is that a lot of experimentation and tweaking is involved, and many of the corrections have different results against different backgrounds. I find it barely acceptable and a headache to work with, but you might have a better opinion. The pile of filters used slowed the processing to a measly 2.5 fps. This version of the video runs at 25i interlaced and is attached as Out_25i.mp4.
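
To show what "mask and overlay" means in its simplest form, here is a bare-bones illustration; it is nothing like the actual filter pile used for this sample, which also involved anti-rainbow filters and a lot of per-scene tweaking, and the Tweak values are placeholders only:

Code:

src = AviSource("capture.avi")
strip = src.Crop(694, 0, 0, 0)            # isolate the rightmost 26 columns where the stain lives
strip = strip.Tweak(sat=0.6, hue=10)      # placeholder correction; real values need experimenting
return Overlay(src, strip, x=694, y=0)    # lay the adjusted strip back over the original frame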

The second version took a more consistent approach, even if it does cost a small bit of screen real estate. 26 pixels are removed from the right border, eliminating the stain and about 16 pixels of actual image. Then 2 pixels from the top and 8 pixels from the bottom are removed, none of which is actual image content. This allows resizing the image to fill the frame vertically. But can one extend the image horizontally to fill the missing right border? Sorry, but definitely not. The result would be a horizontally distorted aspect ratio. Mom and Aunt Margaret would gain 30 pounds each and look weird. Doorknobs, wheels, clocks, O's and other round shapes would morph into ovals, and squares would become rectangles, all of which look like foolish mistakes. I did cheat a bit by extending the image horizontally an extra 4 pixels; that's easy enough to get away with. But going wider simply would not work. Using this method, I produced a 25i interlaced video that looks pretty much like the first version above, but without the right-border stain. The left border stain is mild and isn't worth the bother; it would be hidden by overscan on most TV's (yes, folks, today's HDTV's do use overscan, and on many it can't be disabled). Fewer filters were used here, so processing ran at a decent 5 to 6 fps.

The third version is the same as the above but is 50p progressive. It's not quite ready for web mounting -- you'd have to resize the 50p to square-pixel 4:3, something like 640x480. Processing ran at 8 to 9 fps. The 50p version is attached as Out_50p.mp4.

The denoisers used are Avisynth plugins, the deinterlacer is QTGMC, the resizer was Spline36resize. Color filters initially used Avisynth functions and the SmoothAdjust plugin in YUV, but these were very basic corrections that were later tweaked in RGB using VirtualDub plugins that mimic expensive NLE features. RGB correction required different color adjustments for darks, midtones, and brights. One advantage of color correction is that it can mask a multitude of sins.
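
In outline, the skeleton of that chain (the crop-away-the-stain geometry, in its 50p progressive form) looks something like the sketch below. It is a rough reconstruction, not the actual script: the crop and resize figures are approximations, and the denoisers, the SmoothAdjust work, and the VirtualDub RGB corrections are only indicated by comments.

Code:

AviSource("capture.avi")
AssumeTFF()                        # this PAL capture behaves as top field first; change if yours doesn't
ConvertToYV12(interlaced=true)     # QTGMC and most of the plugins want planar colour
QTGMC(Preset="Slower")             # deinterlace to 50p
Crop(0, 2, -26, -8)                # drop the stained right edge plus the top/bottom junk (694x566)
Spline36Resize(704, 576)           # refill the frame height, with only the small width "cheat"
AddBorders(8, 0, 8, 0)             # pad the sides back out to 720x576 with black
# denoising, SmoothAdjust in YUV and the VirtualDub RGB colour corrections go here
# for a web-ready square-pixel 4:3 file, resize the cropped frame to e.g. 640x480 instead of padding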

I can quote all of the Avisynth and VirtualDub details if you wish, but I'll caution that if you've never used these tools you have a learning curve ahead -- not that it's anything like rocket science (if a klutz like myself can do it, any other monkey can do it even better), but it won't happen in an hour. This forum's restoration section is filled with hundreds of project examples if you want to first take a browse.

I'll also caution that if you think you can get the same deinterlace and denoise results with NLE's, you won't. Big name NLE's are editors and encoders and are great as such, but they're not restoration and repair apps. They can be upgraded with plugins if you have a few tens of thousands of euros to spare and 6 months to get the hang of using them. Avisynth is free and you can get there in a weekend.

Quote:

Originally Posted by MindrustUK (Post 59897)
Quote:

Originally Posted by sanlyn (Post 59895)
One thing one can tell from the photos is that there is very poor or nonexistent input level control, and that there are illegal or unsuitable luminance levels resulting in unrecoverable blown-out highlight detail in some pics, unrecoverable clipped blacks in others, and some rather grim-looking dark chroma density in one of the others.

I'll do some reading and try to understand what that all means. I'm guessing these are hardware limitations down to the combination of card and player? What would be the remedy: things that can be fixed in post with software, or additional hardware signal filtering before capture?

Input signal levels are not often controlled by players, but most often by the capture software or with the use of an external proc amp. Good proc amps ain't cheap. My favorite is the SignVideo PA-100. It's no longer made, but you can find info in forum posts at digitalfaq. Most hobbyists and many pros use VirtualDub for lossless capture, which "hooks in" to the capture device's drivers and software proc amp. VDub also has a built-in capture histogram to measure what you think your eyes are seeing. The only required adjustments are brightness and contrast. But many like to torture themselves trying to get "perfect" color during VHS capture, which is simply masochistic. Using VDub's capture histogram is discussed in post #3 in Capturing with VirtualDub [Settings Guide].
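
If a capture has already been made without proper level control, you can at least check the damage after the fact. A minimal example follows (filename is a placeholder); note that it can't bring back detail that was clipped at capture time:

Code:

AviSource("capture.avi")
ConvertToYV12(interlaced=true)
Histogram(mode="levels")       # watch for peaks sitting in the shaded illegal zones
# small luma-only nudges, e.g. Tweak(bright=-4, cont=0.95, coring=false), can pull
# levels back toward y=16-235, but blown highlights and crushed blacks won't come back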

Eric-Jan 03-09-2019 11:10 AM

Some human face samples would make good use of the skin tone vector? I use these functions in DaVinci Resolve.
Footage shot later in the day will also need its own correction, so there's no single setting for everything.

MindrustUK 03-11-2019 08:20 AM

Just a quick post to say thank you ever so much, Sanlyn. I'm a bit all over the place with work at the minute and want to take time to write a proper reply, as there's so much to take in! Up front, though, I'd like to say thanks for putting in so much time / effort. At first glance the changes in colours are really shocking when I see them side by side!

jwillis84 03-11-2019 09:09 AM

yes, AGC seems to be the worst feature ever invented.

I was quite surprised you could remove, or at least minimize, the magenta stain. I assumed that was baked in and hopeless.

