#1 | 08-28-2017, 12:18 AM | jwillis84 (Free Member, College Station, TX)

I'm new to the forum.

I have been lurking for a few days.

My (current) hardware stats are below this message.

I'm looking for someone to review what I have and possibly make some suggestions, or offer precautions to take when using the gear.

I want to capture/archive mostly broadcast recorded VHS and S-VHS tapes to computer files with as much reasonable clarity as possible. Intended playback platform will be a computer. I have not decided on a compression strategy.. so defaulting to uncompressed for now. Some of the tapes are recorded in 8 hr format, and I would like to capture those in one go without stressing the transport device.. unless there is a strategy to deliberately stop and go to allow overlapping "rest stops" along the way.
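
As a rough sanity check on that "uncompressed for now" default, here is a back-of-the-envelope space estimate (a sketch only; it assumes 720x480 YUY2 4:2:2 at 29.97 fps with 16-bit stereo PCM at 48 kHz, none of which I have actually settled on yet):

Code:
# Rough disk-space estimate for uncompressed capture of an 8-hour tape.
# Assumptions (not settled): 720x480 frame, YUY2 4:2:2 (2 bytes/pixel),
# 29.97 fps NTSC, 16-bit stereo PCM audio at 48 kHz.

width, height = 720, 480
bytes_per_pixel = 2          # YUY2 packs 4:2:2 into 16 bits per pixel
fps = 30000 / 1001           # 29.97 NTSC frame rate

video_rate = width * height * bytes_per_pixel * fps   # bytes per second
audio_rate = 48_000 * 2 * 2                            # 48 kHz, 2 ch, 2 bytes

hours = 8
total_bytes = (video_rate + audio_rate) * hours * 3600

print(f"video: {video_rate / 1e6:.1f} MB/s")
print(f"8-hour capture: {total_bytes / 1e9:.0f} GB")
# Prints roughly 20.7 MB/s and about 600 GB -- far more than a 250 GB SSD holds.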

This site has great detail on video hardware, but not a lot of info on what to look for in an obtainable audio capture device. (I am hoping) someone can tell me if the 192 kHz sampling rate of the home theater Asus Xonar DSX using an internal AUX cable is a "fair or poor" choice. It has the Wolfson WM8776 ADC, they have a Xonar DGX with the Cirrus Logic CS4245 ADC (which is closer to a Turtle Beach Santa Cruz) but its sample rate was restricted to 96 kHz and it had a mondo large headphone amp.. that made me think it just wasn't suitable for capture projects. (I had to go with a PCI express sound card because the mb has only one PCI conventional slot).

I am so new to this, and want to learn.

The Interlaced and Telecine topics are both interesting (horrifyingly so) and thoroughly confusing to me. Beyond Odd and Even fields, when it lapsed into 3-2-2 and other things.. it totally baffles me. I think a "Guide" to Interlace and Telecine would be well received.

Thank you very much for putting together this site.

And thank you for keeping a wide open mind about using appropriate operating systems and hardware for specialized purposes.

Sincerely,
John W.

gear stats (follow):

motherboard:
hp ml110 (wistron corp - intel 3420 chipset)

slots:
1 FH FL PCI Express G2.0 x16
2 FH FL PCI Express G1.0 x4
3 FH FL PCI conventional
4 FH HL PCI Express G1.0 x1

cpu:
intel xeon x3440 @ 2.53GHz (4 core, 8 thread)
(codename: lynnfield, socket 1156 LGA)

memory:
DDR3 - dual channel
3064 MB recognized (16 GB actual)

video: (current)
Asus EAH 6450 (ATI Radeon)

transport: (on hand)
JVC HR-S5902U (Super VHS ET)

tbc: (on hand)
datavideo TBC-1000 (4:2:2)

capture: (on hand)
ATI AIW Radeon VE (PCI conventional)

sound: (on order)
Asus Xonar (DSX) (PCIe x1)

software:
virtualdub (VDFM 1.10.5)
active movie cap (V.8.00)(1997-2000)
windows media 9 cap (V.9.00)(1997-2003)
windows media encoder 9 series (V.9.00.00.2980)(1995-2002)

(intended software):
windows XP SP2
ATI mmc 8.8
UNI Xonar 1.81 (using Low DPC Latency)

(intended hardware):
samsung 850 evo (V-NAND SSD, 520 MB/s)(250 GB)

#2 | 08-30-2017, 01:15 PM | jwillis84 (Free Member)

After more reading.

I am starting to think it's a bad idea to have actual system memory above the 32-bit, 3 GB addressable limit of XP SP2. Apparently some poorly written drivers for the iVTV5 (Hauppauge WinTV-PVR-350) would wrap around and behave unstably if more memory than that limit was physically present.. just not generally accessible.

The Theater chips, Rage Theater, Theater 200, and beyond had better programmers writing their drivers.. but I wonder. (pun intended)

There are so many competing lanes of history and evolution at work here. Steering a clear path around all the pot holes seems difficult.

It seems unquestioned that full-frame capture at 720x480 with no encoding first, and then cleanup and encoding later, is the "gold standard"..

Also beginning to think using the capture server in dual-boot mode, (1) as the capture device and (2) as the storage device, might be optimal. When it's not capturing it becomes the NAS serving the VHS files. FreeNAS, since it uses ZFS, is looking like a good NAS operating system. And true NAS hard drives seem like the big outlay of cost.. since the motherboard, CPU and all the other capture components get used twice, once for capture and a second time for NAS service. Since I am using a PCI capture card, it's not limited to older, slower CPUs and memory limits.

#3 | 08-30-2017, 02:26 PM | dinkleberg (Free Member)

Capture machines and Hauppauge cards are difficult and frustrating enough to troubleshoot on their own; I certainly wouldn't want to complicate things with dual booting. Further, NAS performance will suffer with older hardware.

FreeNAS also strongly recommends ECC RAM.

For NAS, I run Windows Home Server on an extremely cheap Intel Celeron build. It uses less than 20 watts, which is way better than my capture build which uses something like 150 watts.

Newer routers also offer great NAS capability.

As far as capturing goes, yes it's quite the daunting task. I am in the same boat as you, still in the learning phase.

#4 | 08-30-2017, 07:12 PM | jwillis84 (Free Member)

Yes, after more study FreeNAS and ZFS look like overkill for the average user, especially for serving an archive or a rarely changing collection of files.

SnapRAID and MergeFS look more interesting

SnapRAID can apparently be run on Windows, Linux or Mac and set up after the disks are already in use. Also, only the disk holding the specific data sought needs to be spun up, so probably low power needs.
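
For illustration, a minimal snapraid.conf sketch of the kind of setup I have in mind (the paths and disk names are hypothetical, and the syntax should be checked against the SnapRAID manual):

Code:
# Hypothetical snapraid.conf -- one parity disk protecting two data disks.
# Paths and disk labels are made up for illustration only.

parity /mnt/parity1/snapraid.parity

# Content files (metadata and checksums); keep copies on more than one disk.
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content

# The data disks already holding the capture files.
data d1 /mnt/disk1
data d2 /mnt/disk2

After that, "snapraid sync" updates the parity and "snapraid scrub" re-checks the stored hashes for bitrot.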

MergeFS seems to be like a volume manager which "presents" multiple disks' storage space as one file path, but behind the scenes it "maps" or "assigns" destinations so actual files get written to different physical disks based on a policy.

I do like the concept of WHS (Windows Home Server) and Storage Spaces.. but got scared when I read the pseudo-reasons Microsoft gave for discontinuing the "SS" feature in their later products. In a lot of ways Microsoft seems to be training "kid" programmers these days and the quality of their software seems to be suffering. I tend to trust the "older" Microsoft products more than anything since the "breakup" between the forced-retirees company and the new-hire cloud company.

I like ZFS for the bitrot "hash" checking on individual files.. but SnapRAID has that feature too.

"Best" of all however is the SnapRAID "disks" can be pulled from an array and read "natively" since their file system is still ntfs or ext3 or hpfs+ its more like a backup solution and a collection of tools with parity than a whole new operating system like FreeNAS.

I don't plan to use a Hauppauge card, but reading up on their history neatly dovetailed with the PVR obsession with compression and explained why MPEG on the fly suddenly became all-consuming. The Hauppauge evolution from BT878 to ivTV5 chips to the Conexant used in ATI Wonder Pro 200 was quite fascinating. I think I'll get one just to see if it can be used with DirectX strictly for uncompressed capture. (I believe it's been stated, however, that the signal handling of the video decoder is not as good as the Theater series of chips), but it is a PCI conventional card.

I may have been confusing when I spoke of dual booting.. I was proposing using it in two modes, one for capture, one for serving.. but.. since SnapRAID will work even on Windows.. no dual booting would be needed.. unless MergeFS were used.. the capture system could serve the files it was creating, and only spin up the drives holding the file being accessed.. kind of like a jukebox.. and therefore be low-power.

#5 | 09-01-2017, 02:43 PM | jwillis84 (Free Member)

I have been buried in thought reading through these forum postings.

The guides are (so) good and (so) well written.. I feel like a child posting anything.

The depth of detailed knowledge and sharing of experience is overwhelming.

I've bought an AG-5710p and I am waiting for that to arrive, and then will see about getting it serviced by someone like Tom Grant.. I didn't know he had refurb units available before my purchase.. which just shows how easy it is to get tripped up by all the information available here. (I don't know what I don't know).

I am on the fence however about my little JVC HR-S5902U.. without a line TBC.. it seems of trivial value. I bought it brand new from B&H way back in 2005.. digitized a few VHS tapes and then stopped.. technology kept improving.. and waiting seemed the right thing to do. Now it's starting to look like the end of the line for this tech, and the men who service these machines are disappearing.. so I guess it's now or never.

#6 | 09-02-2017, 10:40 AM | sanlyn (Premium Member, N. Carolina and NY, USA)

This is a daunting thread, possibly because overthink is at work. I couldn't begin to answer 100% of your concerns, but I'll try with some of them...

Many of your questions can be answered by looking at captures from VHS critically, which should be your starting point. Hands-on is far more direct than fishing through a swamp of conflicting opinions about "quality" and whatnot. A single capture of VHS source into a lossy codec with any capture device is worth many thousands of words on the subject. To put it bluntly, such captures look as bad as the tape and usually -- with added compression artifacts and the way encoders detest analog tape noise -- look somewhat worse than the original. For some people it doesn't matter, for others it's a deal breaker. Each user is left with the task of defining their own standards as well as determining how much post-processing effort is required to meet, or at least partially meet, that target. For starters, your statement:

Quote:
Originally Posted by jwillis84 View Post
I want to capture/archive mostly broadcast recorded VHS and S-VHS tapes to computer files with as much reasonable clarity as possible.
leaves the field wide open. You will have to define "reasonable clarity" for yourself. Frankly I'm rather picky, as are many people, whereas others are more forgiving to the point where ugly YouTube videos are purported to look "great" on playback despite the obvious mistakes and bad processing that go into them. Many come to analog capture with the idea that capturing to any kind of digital media will automatically perform miracles. I guess you already know that it doesn't happen that way. Much depends on the sources, which vary greatly not only from tape to tape but from program to program and often from minute to minute. The big problem is that analog is faulty to begin with -- it was not designed for the rigorous demands of the digital era and has none of digital's consistency (such as it is, which often isn't consistent at all, thanks to sloppy digital processing).

One thing in analog's favor is that analog doesn't have compression artifacts, which by nature are digital, damaging as far as "reasonable clarity" is concerned, and in practice a headache to clean up in post-processing. Many simply accept the combination of analog defects and digital noise as inevitable and will blame the tape and their own lack of cleanup as the villains responsible for mediocre results. They live with this visual junk (don't ask me how) and will proudly display it as indicative of their expertise. After all, it displays a moving picture and makes sounds, so it must be good stuff. For myself, I accept the fact that legacy analog has its faults and limits, but I clean it up as best I can and enjoy the material itself within those limits and without unnecessary noisy annoyances. In the end you can make tape look pretty good as digital media, far better than it started out. Or you can leave it as-is.

Quote:
Originally Posted by jwillis84 View Post
Intended playback platform will be a computer. I have not decided on a compression strategy.. so defaulting to uncompressed for now.
Personally I don't like viewing feature movies on a PC, it's too claustrophobic for me, and PC playback is noisier than TV and incapable of the kind of display calibration that a good TV offers and the smooth playback that a good DVD or BluRay player can eke from optical disc or an external hard drive. I refer to high quality players, not the big box generics from BestBuy. But that's my personal preference, not biblical mandate. Uncompressed capture alone is a waste of space when lossless real-time compression is available with Huffyuv, Lagarith, or UT Video. Of course the player device has to be capable of using those codecs, so my encodes from lossless captures are DVD and BluRay, mostly standard definition but many as original HDTV recordings from my PVR. I can't speak for servers; I have collected a few thousand movies that I'm in no mood to transfer to servers.

Lossless capture is designed for cleanup and further high quality, customized encoding. There is no better method for that purpose, for as you said:

Quote:
Originally Posted by jwillis84 View Post
It seems unquestioned that full-frame capture at 720x480 with no encoding first, and then cleanup and encoding later, is the "gold standard".
Indeed it is, but you also have the choice of capturing to square-pixel formats like 640x480. Square-pixel capture will be a problem for encoding to standard definition formats and even for upsizing to HD (which won't make any improvements at all from VHS tape and is a quality-degrading waste of time). It will do for web mounting, where quality standards are much lower.
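
To illustrate the relationship between those two frame sizes (a sketch using the commonly cited BT.601 figures of a 704-pixel active width and a 10:11 pixel aspect ratio for NTSC 4:3; other conventions give slightly different numbers):

Code:
# Why 640x480 is the square-pixel cousin of a 720x480 NTSC 4:3 capture.
# Assumes the commonly cited BT.601 numbers: 704 active pixels per line,
# pixel aspect ratio 10:11 for 4:3 NTSC.

active_width = 704          # 720 includes roughly 16 pixels of horizontal blanking
par = 10 / 11               # NTSC 4:3 pixel aspect ratio

square_pixel_width = active_width * par
print(square_pixel_width)   # 640.0 -> hence the familiar 640x480

# Display aspect check: 640 / 480 equals 4:3 exactly.
print(640 / 480, 4 / 3)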

Quote:
Originally Posted by jwillis84 View Post
This site has great detail on video hardware, but not a lot of info on what to look for in an obtainable audio capture device. (I am hoping) someone can tell me if the 192 kHz sampling rate of the home theater Asus Xonar DSX using an internal AUX cable is a "fair or poor" choice. It has the Wolfson WM8776 ADC, they have a Xonar DGX with the Cirrus Logic CS4245 ADC (which is closer to a Turtle Beach Santa Cruz) but its sample rate was restricted to 96 kHz and it had a mondo large headphone amp.. that made me think it just wasn't suitable for capture projects. (I had to go with a PCI express sound card because the mb has only one PCI conventional slot).
192 kHz is cleaner than lower sampling rates. Audio is generally captured as uncompressed PCM at 48 kHz and has a high bitrate, perfect for not losing audio quality during post-processing. The final output compression depends on your encoder. DVD and BluRay use Dolby AC3. The internet uses lower quality codecs. My capture audio cards are old Audigy cards from 2001 and 2004, which my Grado headphones tell me sound OK.
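
For a sense of scale, the raw PCM data rates at the sampling rates being discussed (a quick sketch assuming 16-bit stereo; capture cards may use other bit depths):

Code:
# Uncompressed stereo PCM data rates at several sampling rates.
# Assumes 16-bit samples and 2 channels.

bits_per_sample = 16
channels = 2

for rate_khz in (48, 96, 192):
    kbps = rate_khz * 1000 * bits_per_sample * channels / 1000
    mb_per_hour = kbps / 8 * 3600 / 1000
    print(f"{rate_khz} kHz: {kbps:.0f} kbps, ~{mb_per_hour:.0f} MB per hour")

# 48 kHz:  1536 kbps, ~691 MB per hour
# 192 kHz: 6144 kbps, ~2765 MB per hour -- tiny next to the video stream either way.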

Your All In Wonder VE card is perfectly adequate for capture and looks better to me than the other devices you mention. My own captures are with original All In Wonder 7500 and 9600XT AGP cards. I tried more expensive cards, and cheaper cards, and have seen output from other devices. There's no way those two AIW's are going anywhere any time soon.

You mentioned active movie cap (V.8.00)(1997-2000), Windows media 9 cap (V.9.00)(1997-2003), and windows media encoder 9 series (V.9.00.00.2980)(1995-2002). All of these are a far cry from reasonable quality and should be avoided. I wondered why you chose VirtualDub 1.10, which is rather buggy, when most members here have stuck with VirtualDub 1.9.11. Many have gone with 1.10 but I see a lot of complaints. Most guides here recommend lossless capture with VirtualDub, or with AmarecTV if your system doesn't like VDub. A lot of members never install ATI's MMC for capture, and it's not needed for ATI's capture drivers to work properly if the drivers themselves are installed. Installing MMC often overwrites some codec registry entries with ATI's own proprietary YUV codecs, which I didn't like. You won't use those ATI codecs anyway, I just didn't like having the registry meddled with in that way.

Quote:
Originally Posted by jwillis84 View Post
The Interlaced and Telecine topics are both interesting (horrifyingly so) and thoroughly confusing to me. Beyond Odd and Even fields, when it lapsed into 3-2-2 and other things.. it totally baffles me. I think a "Guide" to Interlace and Telecine would be well received.
Broadcast movies and most old-time TV shows were created on film, which is normally telecined for NTSC. This becomes important if you intend to do any cleanup or fancy edits. If you're obsessed about telecine, the best means for converting pulldown schemes are Avisynth's IVTC (Inverse Telecine) and sRestore plugins, which are used to restore telecined video to purely progressive film speed, usually 23.976 fps. Of course if you go for DVD or BluRay you'll have to apply pulldown again when encoding to get 29.97 fps. It's not always necessary to remove telecine. The catch with telecine is that it's not deinterlaced in the usual way, lest you end up with chroma ghosting and duplicate frames, just to mention a few problems with deinterlacing telecine. Pure interlaced material is best worked with QTGMC, or at least with yadif, which converts the two half-height fields of interlaced video into two full-sized progressive frames at twice the original frame rate. That's up to you, but even high-quality deinterlacing has a cost. It depends on how well your media players handle interlacing and telecine. Some are good at it, some are not so great, which is why I don't put much stock in many external playback devices. You'll just have to test and see what you get.
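
As a worked example of the pulldown arithmetic behind those frame rates (a sketch of standard NTSC 3:2 pulldown; real broadcasts can mix patterns):

Code:
from fractions import Fraction

# Standard 3:2 (2:3) pulldown: 4 film frames become 10 fields = 5 video frames.
film_fps = Fraction(24000, 1001)       # 23.976... fps film speed
video_fps = film_fps * Fraction(5, 4)  # 4 film frames -> 5 interlaced frames

print(float(film_fps))    # 23.976023...
print(float(video_fps))   # 29.970029... = NTSC 29.97

# Inverse telecine (IVTC) just undoes this: drop the duplicated fields and you
# are back to 4 progressive film frames out of every 5 video frames.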

Quote:
Originally Posted by jwillis84 View Post
I've bought an AG-5710p and I am waiting for that to arrive, and then will see about getting it serviced by someone like Tom Grant.. I didn't know he had refurb units available before my purchase.. just shows how easy it is to get tripped up by all information available here. (I don't know what I don't know).

I am on the fence however about my little JVC HR-S5902U.. without a line TBC.. it seems of tivial value. I bought it brand new from B&H way back in 2005.. digitized a few VHS tapes and then stopped.. technology kept improving.. and waiting seemed the right thing to do. Now its starting to look like the end of the line for this tech, and the men who service these machines are disappearing.. so I guess its now or never.
Don't discard that JVC. Most experienced members here have multiple VCRs. I have 4, including my AG-1980. You never can tell when even a high-end machine won't track an old tape to your satisfaction. We see this more times than I can count. A player without a line TBC can be used with a DVD-R pass-thru device like a Panasonic ES10 or ES15 for line-level TBC, and it works well. You won't have the denoising capability of the 5710, but at times built-in DNR doesn't do what you want anyway. You did mention 8-hour tapes, for which the JVC would be an inferior performer -- but that's better than something that won't track at all. I've had to work my way through some horrible 8-hour and 6-hour tapes that my AG-1980 didn't like as well as a lesser SVHS Panasonic that I still use. Still, it's a good idea to have that 5710 looked over. In good condition it's hard to beat.

Because we have no samples of your work, I couldn't offer more detail. If you have questions about how to make and post samples to the forum, just ask. We request that you don't post to outside sites or to YouTube. Many readers simply will not go to outside sites, and YouTube reworks samples into something neither you nor we would appreciate.

Welcome to digitalfaq, and good luck with your effort.

#7 | 09-02-2017, 05:13 PM | jwillis84 (Free Member)

Thank you Sanlyn.

That was quite a reply, and very comprehensive. I read every word and I am very grateful for the time you put into it. You made me aware of things I had no clue about. I did not know about VirtualDub 1.9 versus 1.10 or about Huffyuv lossless compression.

The personal comments about what you found acceptable or not in analog versus digital video and how you would approach things or do approach things in playback scenarios for example.. are treasure.

I have a lot of homework to do in order to fully understand everything in your reply. I wasn't even aware of the Panasonic ES10 or ES15; they sound very interesting.

Right now I am frankly obsessed with reading through the material on this site, it's years and years deep and simply fascinating. Learning to post samples will be the next topic I focus on.

It should keep me busy until I can get the 5710 sent out for evaluation and servicing or repair. I am under no illusions that it will be in any condition for immediate use. It seems universal at this late date that getting the caps replaced is just the first step in preparing for a long winter that is coming.

I do gather that having a JVC and a Panasonic of good quality is a fair strategy. I take it my little JVC will serve for now, but if I were to keep an eye out for a better JVC with or without a DNR/TBC option would you have a recommendation.. or perhaps a preference?

Thank you for the warm welcome.

#8 | 09-02-2017, 06:42 PM | dinkleberg (Free Member)

Wait, is capturing at 640x480 or 720x480 recommended for all analog sources?

#9 | 09-03-2017, 05:53 AM | sanlyn (Premium Member)

The question isn't that simple. I refer to capturing to losslessly compressed AVI for post-processing and later encoding.

If your target final output needs to be universally playable and easily distributable as DVD or standard definition BluRay/AVCHD, you should capture 720x480 NTSC / 720x576 PAL. As lossless media it's a frame format that can be converted to other delivery sizes, such as square pixel for web mounting. In addition you have a greater number of defining pixels to work with in post-processing, especially more horizontal information. Resizing to square-pixels for web use does involve a quality hit more or less depending on how it's done, but unless you're using a 40-inch or larger monitor for web or PC viewing the quality demands are less than for conventional viewing. If you don't like working with a 720-width for 4:3 or 16:9 video in an editor, get a better editor that will let you view the video in its intended display ratio -- VirtualDub allows you to do so, and so do most PC media players.

If you never intend to produce anamorphic final formats, go with square-pixel frame sizes. Again, resizing square-pixel for anamorphic formats gets you a quality hit, depending on how it's done. Most editors such as Adobe Premiere Pro and many others don't resize all that well. Use any of several resize algorithms or plugins in Avisynth for best results.

For any resize operation, interlaced video must be deinterlaced and then reinterlaced. For telecined formats, inverse telecine must be used and then telecine reapplied during encoding. Resizing always involves more lossy re-encoding unless you're using lossless media, which retains 100% of what goes into it.

If you are capturing to lossy media such as directly to MPEG or DV-AVI, almost any capture device that does so will capture to anamorphic frame sizes and throw away 50% of NTSC's original color resolution. If you are capturing to h264 in mp4 or mkv containers, you are likely capturing to non-anamorphic square pixel frames for ultimate web use at lower color resolution. In either case, capture to lossy formats is not designed for further processing other than simple cut-and-join edits with smart-rendering editors that don't re-encode the entire video for simple edits. This caution includes DV-AVI, an anamorphic format which was designed as a shoot-and-watch format, not as a restoration format. If you want DV for restoration, repair, post-processing, and further encoding to delivery formats, give up on consumer-level DV and pay up plenty for professional DV codecs, hardware and software that will ensure the integrity of your video from capture through processing to final encoding.

#10 | 09-06-2017, 05:05 PM | dinkleberg (Free Member)

Quote:
Originally Posted by sanlyn View Post
The question isn't that simple. I refer to capturing to losslessly compressed AVI for post-processing and later encoding.

If your target final output needs to be universally playable and easily distributable as DVD or standard definition BluRay/AVCHD, you should capture 720x480 NTSC / 720x576 PAL. As lossless media it's a frame format that can be converted to other delivery sizes, such as square pixel for web mounting. In addition you have a greater number of defining pixels to work with in post-processing, especially more horizontal information. Resizing to square-pixels for web use does involve a quality hit
Gotcha. For archival, 720x480 it is -- can always resize later when encoding to lossy.

Quote:
Originally Posted by sanlyn View Post
If you are capturing to lossy media such as directly to MPEG or DV-AVI, almost any capture device that does so will capture to anamorphic frame sizes and throw away 50% of NTSC's original color resolution. If you are capturing to h264 in mp4 or mkv containers, you are likely capturing to non-anamorphic square pixel frames for ultimate web use at lower color resolution. In either case, capture to lossy formats is not designed for further processing other than simple cut-and-join edits with smart-rendering editors that don't re-encode the entire video for simple edits. This caution includes DV-AVI, an anamorphic format which was designed as a shoot-and-watch format, not as a restoration format. If you want DV for restoration, repair, post-processing, and further encoding to delivery formats, give up on consumer-level DV and pay up plenty for professional DV codecs, hardware and software that will ensure the integrity of your video from capture through processing to final encoding.
You kinda lost me here. Just want to make sure I'm understanding. DV is lossy compression, but WinDV is a direct transfer of DV-encoded media as opposed to an analog-to-digital capture, yes? So as long as I'm not re-encoding anywhere in the workflow, it's a-okay?

#11 | 09-06-2017, 06:05 PM | jwillis84 (Free Member)

I think he is making a distinction between DV (professional) and DV-AVI.

DV (professional) preserves more of the signal while DV-AVI throws away original color resolution.

It seems DV-AVI (which is consumer grade) makes some assumptions about your target viewing environment and decides for you that you don't need all that high color resolution.

#12 | 09-06-2017, 09:45 PM | sanlyn (Premium Member)

True. Consumer DV quality is not the higher quality of professional broadcast-grade DV codecs. On the other hand, pro codecs cost plenty of money, and they demand a lot of skill and experience in handling, not to mention the hardware, software, and other studio gear that few members here can afford. A direct copy of a consumer DV source to a PC is as good as consumer DV will get. Intermediate processing of that material should be done using lossless media all the way up to the final encoding step for the intended delivery format.

But I think in this particular case we're talking about lossless capture and processing anyway.

#13 | 09-06-2017, 10:01 PM | jwillis84 (Free Member)

Quote:
Originally Posted by sanlyn View Post
But I think in this particular case we're talking about lossless capture and processing anyway.
Yes.

Absolutely, and thank you for introducing me to "compressed" lossless encoding.

To clarify one final point.

The mere action of capturing video over a consumer grade DV connection implies that a consumer grade DV codec will be used to write the saved capture file, and thus color resolution will be "lost" immediately because it must conform to the DV-AVI standard, correct? i.e. there is no choice on the part of the person making the capture; they cannot decide to use a DV connection to store in some other "lossless" AVI format?

I don't plan to use DV consumer or DV commercial for anything, but it seems an important point to make clear to anyone contemplating using consumer DV gear they have on hand.

Explaining that there is commercial "broadcast" quality DV equipment which "does" allow choice seems a generous admission, but might mislead someone into thinking they can make that choice on consumer grade DV equipment.. if I'm not mistaken, they don't have that choice.. the "freebie" consumer grade DV codecs .. are free for a "lossy" reason.

#14 | 09-07-2017, 02:53 AM | sanlyn (Premium Member)

If you start with consumer-level DV source and expect to "improve" it by capturing to a new DV file that uses pro-grade codecs, it doesn't happen that way. Taking a lossy source and re-encoding to another lossy (but less lossy) codec is still a lossy re-encode, which throws away data.

Because PAL VHS is not 4:2:2 to begin with, capturing PAL analog to DV-AVI won't cost much as far as color depth is concerned, but it does cost in terms of data loss (compression losses and lossy artifacts), plus the esthetic complaints that analog color captured to DV's color system looks "cooked" or fried, and brights tend to explode into low-detail hot spots. Many say the resulting video looks "etched" or rather plastic and denuded. I agree with that, but others either don't see it that way or don't mind.

#15 | 09-07-2017, 07:53 AM | jwillis84 (Free Member)

Quote:
Originally Posted by sanlyn View Post
If you start with consumer-level DV source and expect to "improve" it by capturing to a new DV file that uses pro-grade codecs, it doesn't happen that way. Taking a lossy source and re-encoding to another lossy (but less lossy) codec is still a lossy re-encode, which throws away data.

Because PAL VHS is not 4:2:2 to begin with, capturing PAL analog to DV-AVI won't cost much as far as color depth is concerned, but it does cost in terms of data loss (compression losses and lossy artifacts), plus the esthetic complaints that analog color captured to DV's color system looks "cooked" or fried, and brights tend to explode into low-detail hot spots. Many say the resulting video looks "etched" or rather plastic and denuded. I agree with that, but others either don't see it that way or don't mind.
I see.

I also found the guide on DV and it clarified my misconceptions.

http://www.digitalfaq.com/guides/video/capture-dv.htm

So we're really talking about a consumer grade DV format called DV25 and a file copy over an IEEE 1394 wire. The capture was actually done in the hardware and it's baked into the "cooked" data.. it "lost" the color data before it was ever even transferred.. so there is nothing to preserve or get back.

I guess that explains why, when I use a DirectX drop-down option setting up a USB capture device, it will offer YUV or I420 data "formats".. while consumer grade DV equipment doesn't offer any choices.. it's done, the video data is cooked and the only choice is to copy the finished file over... even if it's copying a live camcorder event, it's "cooking" the data stream into the DV25 format while it's copying.

I understand the attraction to a user.

Fewer decisions to make, and the assumption that a "professional" already selected the best choices.. DV equipment certainly "seemed" to cost a lot more than "choicy" confusing hardware with so many options. (So) its quality "must" be better.. it's just not true.

NTSC DV is also not as good as PAL DV, since the PAL DV "standard" is 4:2:0, effectively YUV and the same color space "width" as VHS. But for some reason NTSC DV decided to use 4:1:0, which is a lot less color data. So it makes North American DV very different from European DV. DV is not DV, universal the world over.

I see DV with no extra details as meaning "assume a fixed codec, and that the color space is reduced and cooked", with a video stream copied over FireWire IEEE 1394 (a precursor to USB) in a fixed file format. It's more comparable to MPEG2 in that you (could) edit it without re-encoding, but that assumes the video editing software you use is "smart enough" not to re-encode by default and introduce even further "losses"... simpler (dumb) editors don't pester users with choices and default to re-encoding everything.

The simplicity offered by the DV equipment and editors of the late 1990s and early 2000s came with a hefty price and a hefty price tag. They were "lossy" by default and design, and (not) designed to be upgradable or customizable.

A DV "tape" is therefore more like having a photo in jpeg format stored on a computer backup tape. It's lossey by design.

It's a lot easier on the user or the computer technician copying the file if all the decisions are made before the camcorder starts.

It's a lot easier on the user or the computer technician copying the file if all the decisions are made before copying the video from a tape to a computer.

And given the very early date when all those decisions were "fixed in stone" and irrevocably "made".. certain realities of slow computers (USB 1.1 really wasn't an option because it was far too slow) meant they "had" to fix what we call consumer grade DV at some pretty restrictive trade-offs. Even FireWire IEEE 1394 was pretty slow by today's USB 2.0 and 3.0 standards.

If there ever were a DV 2.0 it would probably be more expensive and a lot more flexible -- but again, I guess there was: it was the broadcast version of DV.. a lot harder to use, because of all the choices you could make, and the equipment had to be designed with all those options in all the menus and buttons on the device.

Meanwhile.. plodding along.. generic video decoder capture cards and USB 2.0 then USB 3.0 came along.. faster computers.. and more feature rich computer software became available to the consumer.

DV was great for its time, but appears a relic of the past now.. not much different than a lesser version of VHS.

DV seems the "feature phone" version of video capture (get out of my way and make all the decisions for me)

Capture card seems the "android phone" version of video capture (give me every option and don't make any decisions for me)

#16 | 09-07-2017, 12:01 PM | sanlyn (Premium Member)

I'm in agreement with most of the last post, but a couple of notes:

Quote:
Originally Posted by jwillis84 View Post
NTSC DV is also not as good as PAL DV, since the PAL DV "standard" is 4:2:0, effectively YUV and the same color space "width" as VHS. But for some reason NTSC DV decided to use 4:1:0, which is a lot less color data. So it makes North American DV very different from European DV. DV is not DV, universal the world over.
NTSC DV isn't 4:1:0, it's 4:1:1. For every 4 pieces of luma data, DV has 2 pieces of chroma data (1 piece for U data and 1 piece for V data). Whether that is stored as 4:2:0 or 4:1:1, it's still the same amount of data stored in different locations. The storage format for YV12 used by Avisynth, DVD, and BluRay is 4:2:0, but Avisynth and VirtualDub don't have any problems decoding either setup.

In addition to the DV differences between 4:2:0 and 4:1:1, DV had to carry the maverick act further by insisting on interlace as bottom field first when the rest of the world was top field first.

The chroma loss mentioned with NTSC analog capture to DV arises from the fact that most analog tape formats (VHS, S-VHS, VHS-C, Hi8, analog 8mm) are captured as 4:2:2. Capture 4:2:2 to common 4:2:0 or 4:1:1 DV and you lose two pieces of chroma data, or 50%. It's true that much post-processing and the final encode to 4:2:0 delivery formats ends up having the same chroma data as DV. But the chroma re-sampling and interpolations used in more sophisticated post-processing such as Avisynth are cleaner than simple analog-to-DV capture. The latter simply ignores two pieces of chroma data rather than using clever compensation schemes that help mask the loss.

An example is that resizing is often done by converting 4:2:0 or 4:2:2 to RGB24 and working the operation with a more complete chroma palette, then subsampling chroma down after the fact. The results look better than just throwing away pixel data wholesale. Avisynth offers refinements such as resizing and color conversion using 16-bit data rather than the default 8-bit -- while the result gets resampled back to what we all know as 8-bit, 16-bit dithering makes a difference in the end.
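
A small sketch of the bookkeeping behind those subsampling names, counting raw 8-bit samples per 720x480 frame (it ignores packing details and is only meant to show that 4:2:0 and 4:1:1 carry the same amount of chroma, half of 4:2:2's):

Code:
# Samples per 720x480 frame for common subsampling schemes (8 bits each).
# h, v = chroma resolution divisors (horizontal, vertical) relative to luma.
w, ht = 720, 480
schemes = {
    "4:4:4": (1, 1),
    "4:2:2": (2, 1),   # chroma halved horizontally only (YUY2/UYVY)
    "4:2:0": (2, 2),   # halved both ways (YV12, PAL DV)
    "4:1:1": (4, 1),   # quartered horizontally, full height (NTSC DV)
}
luma = w * ht
for name, (h, v) in schemes.items():
    chroma = 2 * (w // h) * (ht // v)     # U plane + V plane
    total = luma + chroma
    print(f"{name}: {chroma:7d} chroma samples, {total:7d} total")
# 4:2:0 and 4:1:1 both come to 172800 chroma samples -- the same data,
# just distributed differently, and half of 4:2:2's 345600.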

Quote:
Originally Posted by jwillis84 View Post
I see DV with no extra details as meaning "assume a fixed codec, and that the color space is reduced and cooked", with a video stream copied over FireWire IEEE 1394 (a precursor to USB) in a fixed file format. It's more comparable to MPEG2 in that you (could) edit it without re-encoding, but that assumes the video editing software you use is "smart enough" not to re-encode by default and introduce even further "losses"... simpler (dumb) editors don't pester users with choices and default to re-encoding everything.
MPEG2 can't be edited without re-encoding, although smart-rendering MPEG2 editors can cleverly encode only a few surrounding parts of joined or cut GOPs (Groups of Pictures) without encoding the entire video. DV, however, is an intraframe format, meaning that each frame is complete unto itself. So cut-and-join edits in DV don't require smart rendering. The techy way of stating it is that MPEG is an interframe encoding scheme (what appears in each frame depends on what happens in other frames), while DV is intraframe (each frame is independent and complete, and doesn't care what the other frames are doing). But with both formats, any change to the image, frame dimensions, interlacing, colors, overlays, etc., requires re-encoding.
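
A toy illustration of that cut-point difference (purely conceptual, not how any real editor works):

Code:
# Toy model: DV is all "I" (independent) frames; MPEG-2 groups frames into
# GOPs where P and B frames depend on their neighbours.
dv_stream   = ["I"] * 15
mpeg_stream = ["I", "B", "B", "P", "B", "B", "P", "B", "B",
               "P", "B", "B", "P", "B", "B"]

def clean_cut_points(stream):
    # A cut is "clean" (no re-encode needed) only where the frame that starts
    # the kept section does not reference anything before it.
    return [i for i, f in enumerate(stream) if f == "I"]

print(clean_cut_points(dv_stream))    # every frame: 0..14
print(clean_cut_points(mpeg_stream))  # only frame 0 in this 15-frame GOP
# Smart-rendering MPEG editors re-encode just the partial GOP at the cut;
# any change to the picture itself forces re-encoding in either format.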

Quote:
Originally Posted by jwillis84 View Post
DV was great for its time, but appears a relic of the past now.. not much different than a lesser version of VHS.
True, DV is a relic but so is analog tape. Both require some cleanup to some extent or another. Both are better off when post-processed as lossless media before final delivery encoding.

#17 | 09-07-2017, 12:37 PM | jwillis84 (Free Member)

4:2:0 and the whole 4:2:2 or 4:1:1 sounds really interesting. I know they represent color spaces or color storage formats, or both, but they do confuse me. I think I got the 4:2:0 from the guide and assumed that was correct.

I'd like to understand what they represent exactly, but so far.. I only have a general sense.

X:Y:Z

X = luminance
Y = some cross axes differential related to Red
Z = some cross axes differential related to Blue

I know it's not as simple as RGB

but the urge to think in those terms is there

#18 | 09-07-2017, 02:59 PM | sanlyn (Premium Member)

Every YUV colorspace stores data representing three channels: Y = luminance, U (sometimes called "blue") is blue-yellow data, V (sometimes called "red") is red-green data.

For every 4 chunks of Y luminance data, 4:2:0 and 4:1:1 (YV12) store 1 U data chunk and 1 V data chunk.

For every 4 chunks of Y luminance data, 4:2:2 (commonly used as YUY2) stores 2 U data chunks and 2 V data chunks.

The Y, U and V data bits are stored in different pixel locations along the horizontal and vertical axes. Because data is stored in different pixel locations, Y, U and V can be manipulated separately. For example, you can change brightness without changing hue or saturation. You can adjust red contrast without changing brightness, and so forth. You can reduce U and V to zero and still have a complete grayscale image in the Y channel (which is the way your old black and white TV could still receive color broadcasts as monochrome).

RGB stores luminance and chroma information in the same pixel, and even in the same group of data bits. Each pixel contains a hexadecimal number that stores the RGB values for each color: Red, Green, and Blue. Increasing the amount or intensity of one or two colors (or all three) affects the brightness and contrast of the entire formula. Each section of that multi-part value can be changed independently by altering any of the hexadecimal digits. RGB colors are listed numerically as R,G,B numbers. Each chunk of hexadecimal data uses the placement formula RRGGBB for the three RGB colors. Examples of hexadecimal numbers for a few sample RGB color values are:
Pure Red = RGB 255,0,0 = hex FF0000
Pure Dark Red (maroon) = RGB 128,0,0 = hex 800000
Pure Green = RGB 0,255,0 = hex 00FF00
Pure Blue = RGB 0, 0,255 = hex 0000FF
Purple=RGB 128,0,128 = hex 800080
Brown = RGB 165,42,42 = hex A52A2A
U.S. olive drab = RGB 107,142,35 = hex 6B8E23
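
The hex packing itself is just base-16 arithmetic; a quick sketch of the conversion both ways:

Code:
# RGB triplet <-> RRGGBB hex, matching the examples above.
def rgb_to_hex(r, g, b):
    return f"{r:02X}{g:02X}{b:02X}"

def hex_to_rgb(h):
    return tuple(int(h[i:i+2], 16) for i in (0, 2, 4))

print(rgb_to_hex(165, 42, 42))   # A52A2A  (brown)
print(rgb_to_hex(128, 0, 128))   # 800080  (purple)
print(hex_to_rgb("6B8E23"))      # (107, 142, 35)  olive drab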

YUV indicates the way video is stored; RGB indicates the way YUV video is interpreted and displayed. The standard video YUV colorspace prefers a range (in RGB equivalents) of 16-235 for luma and 16-240 for chroma. When displayed as RGB, the 16-235 range is expanded at the extremes at each end and is displayed as RGB 0-255.
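
The expansion itself is a simple linear stretch; a sketch of the usual 8-bit formula (assuming BT.601-style "studio swing" levels):

Code:
# Expand "studio swing" luma (16-235) to full-range 0-255, which is roughly
# what a PC display pipeline does before showing YUV video as RGB.
def expand_luma(y):
    full = round((y - 16) * 255 / 219)   # 219 = 235 - 16
    return max(0, min(255, full))        # clip anything outside 16-235

for y in (16, 125, 235):
    print(y, "->", expand_luma(y))       # 16 -> 0, 125 -> 127, 235 -> 255
# Chroma uses the analogous 16-240 range (divisor 224) around a 128 midpoint.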

In RGB color correction, the reference black, gray, and white points each have equal proportions of Red, Green, and Blue. That is to say:
pure zero black = RGB 0,0,0
dark gray = RGB 64,64,64
middle gray = RGB 128,128,128
light gray = RGB 192, 192, 192
video white = RGB 235,235,235
bright white = RGB 255,255,255
If all or most of those gray points can be balanced with those equivalent color values, all the other colors fall into place because R, G, and B will be in equilibrium.

YUV has similar Y, U and V proportions, but they are stored as separate values in separate pixels.

There is plenty of information on these concepts via Google.

#19 | 09-07-2017, 03:08 PM | jwillis84 (Free Member)

This made the biggest impression on me, but most of it (unfortunately) sounds like word salad to me.

Quote:
Originally Posted by sanlyn View Post
YUV indicates the way video is stored; RGB indicates the way YUV video is interpreted and displayed. The standard video YUV colorspace prefers a range (in RGB equivalents) of 16-235 for luma and 16-240 for chroma. When displayed as RGB, the 16-235 range is expanded at the extremes at each end and is displayed as RGB 0-255.
yeah.. I should Google more about it.

#20 | 09-16-2017, 05:29 PM | jwillis84 (Free Member)

Tackling the meaning of YUV vs YUY2 vs YUYV vs UYVY, and 4:4:4 vs 4:2:2.

This is my attempt; there are formal ways of stating a geometric proof.. but this is my commoner way of stating it.

YUV refers to the ['idea'] of Y - luminance (brightness), U - a color axis (roughly blue-yellow), V - a color axis (roughly red-green)

It comes about because humans are more "sensitive" to fine detail in brightness, or 'luminance', than to fine detail in color

That is to say, in a B&W image, even if there is (no) color, we are still able to (see) clearly

Adding back in color "elements" a little bit at a time is of great value, but we are just as happy with an image whose color information has been greatly reduced.

So YUV lays the groundwork for prioritizing Y over U and V information. We can do without the U and V information but can't do without the Y information.

If you set up a "ratio" of how important Y is relative to U and V, you could say 4:2, or 2:1, meaning Y is twice as important as U and V

But there is another "dimension" to consider, literally: X (horizontal) and Y (vertical)

Since a TV screen is drawn as horizontal lines stacked vertically, 4:2 represents the Y density relative to UV along the horizontal, and the third position in 4:2:2 says the chroma is (not) subsampled any further in the vertical dimension.

The reason it starts with the (number) "4" is that it describes a reference block four samples wide; the following numbers say how much the color information within that block is reduced, down to schemes like 4:2:0 or 4:1:1.

Or Yx:UVx:UVy if that makes any kind of sense.

Computer bits and bytes and words can be organized by handling this information as if it all fell into one long bit string, called (packed) image data

Or

Computer bits and bytes and words can be organized by handling this information in separate luminance and color data "arrays", woven back together as (overlaid planes) of image data, called (planar) image data

Each approach exists because the hardware from years ago was slow, and different silicon and CPU techniques were tried to accelerate the graphics cards.. as a result of that history.. and the papers that sprang up around them.. both techniques are still supported by coding libraries.

So.. RGB could be ARGB (Alpha channel)(Red)(Green)(Blue) or (8+8+8+8 = 32 bit) image data

RGB without the alpha channel is what we think of as VGA (Video Graphics Array or Adapter) (8+8+8 = 24 bit) image data

And 4:2:2 has nothing to do with 16 bit image data

It's more about "interleaving or weaving" more bits of luminance (Y) into the horizontal stream of bits used to make up a line of video.

Thinking of it as a linear stream of data bits in a long string, a video line could be:

8 bits Y, 8 bits U, 8 bits Y, 8 bits V (or YUYV). This is called YUY2, (because) all data formats have a 4-character code, called the [FourCC], to define what the bits mean when lined up ready to be turned into a video signal.

If you reorder the bytes

8 bits U, 8 bits Y, 8 bits V, 8 bits Y (or UYVY). This is an entirely different format called UYVY (a different 4-character code)

Notice how the questions (what does YUV mean?) (what does 4:2:2 mean?) change and disappear.. it's because you start from the assumption that, like with VGA, it means a data bit storage format ready to be turned into video.. but it's not that.

YUV and 4:2:2 are a crutch for organizing your thoughts about which B&W and color video "details" are important and which are not as important, and assigning them relative merits. After that, the "similar" letters Y, U, V are "reused" when coming up with an actual data bit storage format.. but not in the same way that R, G, B are used.. Y gets repeated, for example.. and none of the bits get loaded into a color spray gun of red paint, green paint or blue paint.. the bit stream gets "blended" into a complicated video signal that gets taken apart by whatever receives it and turned back into a picture in many different ways.

The important thing is not YUV, YUY2 or 4:2:2

To most of us computer geeks, what actually matters is the meaning of the data bit stream that will get turned into a video signal.

The FourCC (four-character code) defines how the bits are arranged, and they are always (two) words or four bytes, just like ARGB.. and [Y][U][Y][2] or [U][Y][V][Y] are the same thing, just with the information switched around.

Notice how YUY2 refers to the repeating group [Y0]U followed by [Y1]V: the name keeps the "U" in its position right after the first "Y", and the trailing "2" hints at the second "Y" in every group, emphasizing the extra density of luminance information.

YUY2 is four "digits" or "four characters", so it is a legitimate FourCC name/code
UYVY is four "digits" or "four characters", so it is a legitimate FourCC name/code

they are "opposites" of each other

YUY2 = YUYV,YUYV, etc..
UYVY = UYVY,UYVY, etc..

why they aren't called:

YUY2
UYY2

is probably a matter of visual confusion that might arise from two "Y"s next to each other in the UYY2 name

4:4:4 or 4:2:2 or 4:2:0 refers to 4:X:Y, or [Luma]:[Chroma x]:[Chroma y].. roughly, how the color information is "subsampled" in the x or y directions (the horizontal or the vertical video directions)

You might ask if there are other FourCC formats, and there are, many in fact, but most refer to inline compression of the video bitstream. These formats (YUY2 and UYVY) are not considered "lossy" even though they are subsampled; they are a native format for capturing video at the highest information density offered by the capture device

after a "lossless" capture the video data can be further reformated and reduced championing various factors considered by elements within the image stream or for other reasons different than the limits imposed by the capture hardware itself
