I'm going to digitize a bunch of Hi8 tapes.
This is my setup:
Sony DCR-TRV620E PAL (a Digital8 cam, but it plays Hi8 tapes) as the player, since my older analog Sony Hi8 camcorder no longer works properly
TBC (Time Base Corrector): ON
DNR (Digital Noise Reduction): OFF
Output i.LINK -> IEEE 1394/FireWire, captured with 'dvgrab' on a Linux PC
Output S-Video -> Hauppauge WinTV PVR-250 -> lossless YUV/YV12 capture from /dev/video32 with ffmpeg on the same Linux PC
That way I can compare the two captures side by side in VLC.
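For reference, the two capture pipelines can be sketched as small command builders. The device node, file names, and dvgrab options here are my assumptions about a typical setup, not the poster's exact invocations; check dvgrab(1) and your driver docs before relying on them.

```python
def dv_capture_cmd(basename="tape01"):
    # dvgrab pulls the DV stream over FireWire untouched; "raw" keeps
    # the DIF blocks as-is. Flag spellings per dvgrab(1); verify locally.
    return ["dvgrab", "--format", "raw", basename]

def analog_capture_cmd(device="/dev/video32", outfile="tape01_yuv.mkv"):
    # The ivtv raw YUV node, encoded losslessly with FFV1 for later
    # comparison. Depending on driver/FFmpeg versions, a pixel-format
    # conversion step may be needed for the tiled NV12 layout.
    return ["ffmpeg", "-f", "v4l2", "-i", device, "-c:v", "ffv1", outfile]
```

Running both at once (two shells, or `subprocess.Popen` on each list) gives the simultaneous pair for the VLC comparison.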
I know that VHS->DV is generally discouraged on this forum, but is my DV capture actually sharper, or just noisier?
Is this expected, or is there something wrong with the S-Video out, the cable, or the PVR-250 capture card?
See snapshots & video snippets in attachment.
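The "sharper, or just noisier?" question can be given a crude number before eyeballing: variance of the Laplacian rises with real edge detail, but noise inflates it too, so it only means something alongside a noise estimate from a flat region of the same frame. A pure-Python toy sketch (real frames would use numpy/OpenCV); the toy frames here are made up for illustration:

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian over the interior pixels."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# Toy 8x8 grayscale frames: a hard vertical edge vs the same edge softened.
sharp = [[0]*4 + [255]*4 for _ in range(8)]
soft  = [[0, 0, 0, 64, 128, 192, 255, 255] for _ in range(8)]
print(laplacian_variance(sharp) > laplacian_variance(soft))  # True: crisper edge scores higher
```

If the DV capture scores higher on edges but a flat wall patch also scores higher, the "extra sharpness" is at least partly noise.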
Last edited by iseevisions; 11-29-2023 at 04:55 PM.
Not sure, but the DV does look sharper and more detailed to me. You can see it particularly in the brick path off to the right.
One possible theory: the machine digitizes everything to DV first, then converts the DV stream back to analog for the S-Video output, so there's an extra analog-to-digital, digital-back-to-analog conversion that loses some information. That internal digitizing stage is why some of these devices work for "TBC-like passthrough", and it's essentially what happens in a real TBC too.
If everything really does go through DV first, you're probably better off capturing DV from that particular machine anyway, since the analog output would carry the same DV chroma subsampling regardless.
It would be cool if there were an easy test that let you visualize how different devices do their chroma subsampling. A lot of devices don't specify it, and such a test would tell you right away whether that's what your machine is doing. If the analog output came out 4:1:1, it has definitely been through DV first.
There is this test for chroma subsampling: https://www.rtings.com/tv/learn/chroma-subsampling - but it's tricky to translate that into something you could display in an analog format. It would most likely involve one of the Blackmagic cards that can do SD output somehow.
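You can also reason about the 4:1:1-vs-4:2:2 question numerically: simulate the horizontal chroma averaging on a one-pixel-wide colored stripe. This is a toy model (simple box averaging on one chroma line, not how any specific chip actually filters), but it shows why DV's 4:1:1 smears color detail over 4 pixels where 4:2:2 only smears it over 2:

```python
def subsample_and_restore(chroma_line, factor):
    """Average every `factor` chroma samples, then repeat each back out."""
    out = []
    for i in range(0, len(chroma_line), factor):
        block = chroma_line[i:i + factor]
        avg = sum(block) / len(block)
        out.extend([avg] * len(block))
    return out

# One saturated chroma pixel in a neutral (0) line:
line = [0, 0, 0, 0, 100, 0, 0, 0]
print(subsample_and_restore(line, 2))  # 4:2:2: [0.0, 0.0, 0.0, 0.0, 50.0, 50.0, 0.0, 0.0]
print(subsample_and_restore(line, 4))  # 4:1:1: [0.0, 0.0, 0.0, 0.0, 25.0, 25.0, 25.0, 25.0]
```

A capture of a single-pixel color-stripe test chart, inspected this way, would reveal whether the analog output already carries DV's 4-pixel chroma blur.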
If I were you I would turn DNR on. It works together with the line TBC, so it doesn't over-process the signal; it is applied after the frame is converted to digital in the DSP chip.
The dullness of the analog method comes from either the analog workflow you are using or aging analog components inside the camcorder. Using i.LINK in DV mode bypasses the conversion back to analog. If you want to find out, do one more test with TBC off and DNR off, and post new samples for both the DV and the analog capture.
Thanks for the replies.
I tried with TBC on/DNR on and with TBC off/DNR off; both gave about the same results.
Then I tried another capture card I found: a Hauppauge WinTV-HVR-4000. This one even does YUV 4:2:2 (not that it matters for VHS, right?)
With TBC on, DNR off on the cam.
Now the sharpness even seems a little better than the DV; take a look at the attachment.
So I'm not using that PVR250 anymore for capturing important stuff, that's for sure...
I also have some other cards, like an HVR-1300, that I can test. I already tested a PVR-150 before, and that was even worse than the PVR-250.
Hmm. The color/black levels and (slightly) the aspect ratio are different, but I'm not sure the analog capture is sharper. The HVR-4000 does look like a better overall image with those levels, though, if you weren't planning to adjust levels later.
The DV has a bit more resolution: 768x576 vs the card's 720x576.
While that is a bonus, since I could crop off the green tape-noise band on the right, sharpness matters most to me.
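For the crop-off-the-band idea, the FFmpeg crop filter string is easy to compute. The 16-pixel band width below is a made-up placeholder; measure the real band from a frame grab:

```python
def crop_right(width, height, band):
    # FFmpeg crop filter syntax is crop=out_w:out_h:x:y, where x:y is
    # the top-left corner kept; cutting the right edge keeps x=0, y=0.
    return f"crop={width - band}:{height}:0:0"

print(crop_right(768, 576, 16))  # crop=752:576:0:0
```

That string goes into `-vf` during or after capture, e.g. `ffmpeg -i in.avi -vf crop=752:576:0:0 ...`.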
Meanwhile I managed to get my flaky Sony CCD-TR805E Hi8 cam working again (after cleaning the heads once more), at least for a bit before it dropped out again.
I captured the same scene with the same HVR-4000 settings, but I think it looks worse (see attachment if interested), except there's no green tape-noise band on the right. Interesting, since this is the original recorder after all...
The PVR-250 is a hardware MPEG encoder, period. And it was well known to be soft compared to AIW MPEG at the time. (AIW MPEG was also 4:2:0, so softer than lossless from the AIW. Unlike the Hauppauge, the AIW did hybrid hardware MPEG encoding using Ligos, so the Theatre 100/Rage and 200 were not degraded to 4:2:0 MPEG.)
So that's why. Any "lossless" capture is just expanded back from the lossy MPEG. You may not see MPEG blocks because it internally uses superbit rates, probably maxing out near 20 Mbps. Any MPEG noise will fade into the grain noise from the analog source, or be lost in the overall fuzzy quality.
The DV will be lossy, period. But you can find worse cards and worse loss if you try. The point is to seek better, and it's not even that hard to do.
Most of these Hauppauge PVR cards are really crappy for analog videotape capturing. They were meant for recording off coax from antenna, cable, or satellite. PVR is not capturing; PVR is compressed, often at the hardware level.
According to https://docs.kernel.org/admin-guide/media/ivtv.html , /dev/video32 outputs "the raw YUV video output from the current video input. The YUV format is a 16x16 linear tiled NV12 format (V4L2_PIX_FMT_NV12_16L16)".
And that is what I used, not MPEG from /dev/video0
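For anyone puzzled by that tiled format: the frame isn't in raster order in the buffer, so it has to be de-tiled before viewing. A pure-Python sketch of the de-tiling for one plane, under my reading of the format description (real code would handle the Y plane and the half-height interleaved CbCr plane, ideally in numpy):

```python
TILE = 16  # V4L2_PIX_FMT_NV12_16L16 uses 16x16 tiles, stored linearly

def detile(buf, width, height):
    """Convert a linearly-tiled plane (flat list of bytes) to raster order."""
    assert width % TILE == 0 and height % TILE == 0
    tiles_per_row = width // TILE
    out = [0] * (width * height)
    for i, byte in enumerate(buf):
        tile = i // (TILE * TILE)                  # which 16x16 tile
        ty, tx = divmod(i % (TILE * TILE), TILE)   # position inside the tile
        tile_row, tile_col = divmod(tile, tiles_per_row)
        y = tile_row * TILE + ty
        x = tile_col * TILE + tx
        out[y * width + x] = byte
    return out

# Round-trip check on a tiny 32x32 plane with a recognizable ramp:
w, h = 32, 32
raster = [(y * w + x) % 256 for y in range(h) for x in range(w)]
# Build the tiled buffer the way the driver would lay it out...
tiled = [0] * (w * h)
for y in range(h):
    for x in range(w):
        tile = (y // TILE) * (w // TILE) + (x // TILE)
        off = tile * TILE * TILE + (y % TILE) * TILE + (x % TILE)
        tiled[off] = raster[y * w + x]
print(detile(tiled, w, h) == raster)  # True
```

FFmpeg versions with NV12_16L16 support do this conversion for you; the sketch just shows there's no information loss in the tiling itself.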
In attachment snapshots for comparisons.
First I captured YUV + MPEG simultaneously with a max bitrate of 27 Mbps: no visible difference.
Then YUV + MPEG with a low bitrate of 2 Mbps: visible blocking on the MPEG, but not on the YUV stream.
But what happens inside the Conexant chip's pipeline, and what makes it softer than a newer non-MPEG Hauppauge card, is not clear. Maybe the age of the capacitors (I bought it around 2005 and always used MPEG before).
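That 2 Mbps blocking can also be detected numerically rather than by eye: MPEG macroblock edges fall on an 8-pixel grid, so unusually large luma steps at columns 8, 16, 24... are a blocking fingerprint. A toy one-row sketch with made-up data:

```python
def blockiness(row):
    """Ratio of mean luma step at 8-px boundaries to the mean step elsewhere."""
    steps = [abs(row[x] - row[x - 1]) for x in range(1, len(row))]
    at_boundary = [s for x, s in enumerate(steps, start=1) if x % 8 == 0]
    elsewhere   = [s for x, s in enumerate(steps, start=1) if x % 8 != 0]
    return (sum(at_boundary) / len(at_boundary)) / \
           (sum(elsewhere) / len(elsewhere) + 1e-9)

blocky = [16]*8 + [48]*8 + [16]*8 + [48]*8  # flat 8-px blocks: all change lands on the grid
smooth = list(range(32))                    # gentle ramp: steps everywhere, ratio near 1
print(blockiness(blocky) > blockiness(smooth))  # True
```

Averaged over many rows of a real capture, a ratio well above 1 on the MPEG stream but near 1 on the YUV stream would confirm the YUV path really is pre-encoder.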
I will try some other cards and post updates.
I also have an ATI AIW Pro PCI, but I lost the special A/V cable. It's in the machine now as the graphics card for testing, and I can't get anything from "v4l2-ctl --list-devices"; maybe it's broken, or it needs winblows. I'll check further tomorrow.
And I still have an ATI X800XT AGP with video-in, but I don't have an AGP motherboard anymore.
Last edited by iseevisions; 11-30-2023 at 04:09 PM.
The pinouts are available for those AIW cards - really all you need to do is identify which pin is composite and/or luma/chroma (if using S-Video in); the rest of the pins aren't needed. The one time you really did need the original cable is with the variants whose monitor output connections also go through that socket. Shield ground, I think, is the same as the card's face plate. You could probably 3D print an adapter (something less janky than thin wires stuck into the socket) that just taps those pins, but an appropriate-diameter copper wire soldered to an S-Video plug would work fine for testing, I would think.
I'm still not seeing how non-MPEG lossless output is possible from that card.
The chipset may purposely soften the image to improve MPEG encoding efficiency, as noise is the enemy of MPEG quality (it causes blocks, and then mosquito noise).
At any rate, the 150 was never good, and the 250/350 are simply outdated quality locked in time. Many USB cards do better now.
You're trying to use cards that are too old and too inferior, and they simply will not work as needed. There's a line in the sand between "legacy" (still useful) and "old" (worthless). You've crossed it.