
  #41  
12-29-2016, 06:39 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,322 Times in 991 Posts
If you captured to YUY2 color at 720x480 or smaller using huffyuv or Lagarith compression, about 8 to 10 seconds of edited lossless AVI would be less than the 99MB file limit. The edit scene should include motion of some kind, preferably someone walking or moving across the screen, moving their arms, or similar motion (no kamikaze fast camera pans, if you can avoid them; shots like that are just a blur).

You scroll in VirtualDub using the scrollbar or the navigation icons at the bottom of the VDub window. Note that the two rightmost icons are the start-position and end-position selection markers. They look like two fish hooks, one pointing to the left (start point), one pointing to the right (end point). Scroll to where you want your edit to begin, then press the end-point icon (the right-pointing fish hook). You'll see the scrollbar shaded blue to indicate that the selection extends from the start of the video to the point you just marked. Now click "Edit..." -> "Delete" (or press the Delete key on your keyboard). This removes the front portion of the video.

To mark the end of your sample, you delete everything that follows it. Scroll or navigate to the end of your selection and click the end-point icon (the right-pointing fish hook) again. You'll see the scrollbar shaded blue to indicate the length of your selected sample. Click "Edit..." -> "Crop to selection". This removes the remainder of the video file from view. (There is another way to mark that selection, but it seems to confuse most people when it's described.)

DO NOT SAVE YET. On the top menu click "Video..." -> then in the drop-down menu activate (left click) "Direct Stream Copy". Then click "File..." -> "Save as Avi...", give your selection a name and location, save it, and post here using the "Go advanced" icon at the bottom of a reply window. Below the reply window you'll see an icon for managing files and attachments. Click that to open the dialog window. Files download faster than they upload, so just be patient and wait until the dialog box indicates that the upload is complete.
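
If you already have Avisynth installed, a short script is another way to mark off a sample. This is only a sketch -- the path, file name, and frame numbers below are placeholders, not anything from your capture (at 29.97 fps, about 300 frames is roughly 10 seconds):
Code:
AviSource("D:\captures\my_capture.avi")   # hypothetical path and file name
Trim(1000, 1299)                          # keep only this frame range (~10 seconds)

Open the .avs in VirtualDub, set "Video..." -> "Fast recompress", choose huffyuv or Lagarith under "Video..." -> "Compression", then "File..." -> "Save as AVI". VirtualDub sees a script's output as uncompressed video, so fast recompress with a lossless codec is what keeps the sample lossless and small.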
  #42  
01-02-2017, 05:47 PM
mparade mparade is offline
Free Member
 
Join Date: Dec 2016
Posts: 34
Thanked 0 Times in 0 Posts
Hello sanlyn,

I tried making some samples for you. Please find them attached.
After capturing about 20 tapes from my collection I noticed a lot of quality problems...many of the tapes are in very bad condition.

The attachments are from different scenes on the same tape, recorded in 1988-1989.

Your effort is highly appreciated!


Attached Files
File Type: avi at grandmas.avi (73.30 MB, 63 downloads)
File Type: avi at parcel.avi (90.45 MB, 18 downloads)
File Type: avi bone-yard.avi (89.82 MB, 15 downloads)
File Type: avi carnival.avi (48.45 MB, 20 downloads)
File Type: avi gym.avi (53.50 MB, 27 downloads)
File Type: avi flat.avi (82.54 MB, 19 downloads)
File Type: avi from car.avi (75.60 MB, 21 downloads)
  #43  
01-02-2017, 09:18 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,322 Times in 991 Posts
Thank you for taking the time for the samples. Downloading now and will be able to address comments in the morning.

[EDIT] After a quick look, these appear to be very workable captures. Input levels were well controlled despite some scenes with challenging light problems (which is an accomplishment in itself!). I can prepare more details later. Thank you again.

-- merged --

Again, these are good captures with nice level controls and excellent color. You are shooting under lighting conditions that affect the results in ways that Hollywood film crews take hours to correct before shooting ever begins. Two of these lighting problems are interior light and high-contrast daylight. Overall, however, the exposures work well (except when the camera's autogain feature starts meddling with the light, as you can see).

Digital camera viewfinders show luminance pumping because their image is derived from the camera's digital sensor. Analog cameras simply show what the lens saw -- that image wasn't generated by the camera's sensor. So with older cameras a bright light source in a dim room would cause the camera's autogain electronics to attempt to average the lighting conditions and try to encompass extremes of brightness and darkness. Some cameras were better at this than others, but unfortunately the human eye and brain are looking at an uncorrected image, and the brain is compensating for things such as light falloff. The exposure surprises come when the "corrected" images are played back.

The human eye can take in a contrast range that is far wider than film or video can accept. Film has a wide contrast range; analog and digital YUV video are more limited. Objects whose light values fall outside the range capability of a camera's electronics are simply cut off at the dark end or the bright end (it's called clipping). So, what your eyes and your camera lens see is seldom what ends up in a movie, in an uncontrolled light environment.

But overall, controlling levels during capture can avoid a lot of clipping, of which thankfully there's very little in these captures. Most such captures that we see in forums are dreadful -- highlights emerge as bright discolored smears of glowing-hot exploding suns, while crushed darks are grimy blackish blobs with no discernible detail. Those problems can't be corrected after clipping. I think you did excellent work at avoiding those effects.

All isn't perfect as you know (is it ever perfect? Answer: no, not even with the pros). A little post processing can make improvements, which is what lossless capture is about.

The clip "at grandmas.avi" shows the effects of light falloff, but here they're kept under good control. The exposure looks clean, even if the figure at the start of the clip is moving into dimmer light away from the bright window. Note that you still have good detail in that window, and you still have good shadow detail. In most such captures the window would be washed out completely and the shadow side of the figure's face would be no-detail blackish brown. Below is an original unfiltered image of frame 10 from that avi, as the figure moves into the darker area.

at grandmas frame 10:


I have resized the image and attached two histograms. Note in the above image that I have also removed the black side border and the bottom-border head switching noise so they wouldn't affect the histograms. The histogram in the middle of the image is an Avisynth YUV histogram which graphs the way the video is stored as YUY2. The top white band in that YUV histogram is the luminance level, the two lower bands are the U and V channels. You can see that the white luminance band shows luma levels correctly contained within the "safe" area inside the shaded portions of the graph. At the right side of the image is an RGB histogram of the way YUV data will be displayed on a PC or TV as RGB. You'll note that the darks at the left side and the brights at the right side in RGB are "expanded" at each extreme, and that they populate the entire histogram without climbing up the side walls (if they're climbing up the side walls in RGB, they'll be clipped by the encoder, which will not accept detail beyond the range of RGB 0-255).

The YUV histogram is a built-in Avisynth filter. The RGB "parade" histogram is from VirtualDub. Histograms and similar graphs are essential tools in video. They tell a great deal about images whether they're video or still photos. If you want to know how they work, there's a very good free tutorial at a Photoshop forum with good examples of tone and contrast. The examples are for still cameras, but the principles are the same for photography and video.
Understanding histograms Part 1 and Part 2
http://www.cambridgeincolour.com/tut...istograms1.htm
http://www.cambridgeincolour.com/tut...istograms2.htm
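
For reference, the YUV histogram in these images is Avisynth's built-in Histogram filter in "levels" mode; the RGB parade is a VirtualDub plugin applied to the script's output. A minimal sketch of that kind of levels check (the path and crop values here are placeholders; levels mode wants a planar format, so the display copy is converted to YV12):
Code:
AviSource("D:\captures\at grandmas.avi")   # hypothetical path
Crop(0,0,-20,-10)                          # drop the side border and head-switching noise
ConvertToYV12(interlaced=true)             # "levels" mode needs a planar colorspace
Histogram(mode="levels")                   # Y, U and V graphs with the unsafe zones shaded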

The image below is frame 255 from "at grandmas.avi". It has a more even distribution of available light. When the figure moves into the darker middle portion of the clip you'll see the exposure change to a more evenly lighted area, revealing more shadow detail. Again, the histograms show how YUV video storage and RGB display are different at dark and bright extremes.


The image below is unfiltered (but resized to 640x480) from frame 45 of "at parcel.avi". It's a good exposure; the only problem would be the black dog and the figure standing just behind it. It's good enough as-is, but I think you can see there's not much detail in the black pet because of the contrast range of bright sunlight.


One can get a little more detail out of the dark areas -- not very much, but enough to turn black blobs into something better -- by using an Avisynth filter named ContrastMask.avs. A demonstration of getting just that much more information is shown below (with borders cleaned up and the image centered):


The effect is cleaner in motion than in a still image. The ContrastMask filter retrieves detail from dark areas and at the same time calms bright highlights so that they don't become hot spots -- that would be a problem with the usual brightness and contrast filters. The only glitch with the filter is that brightening extends too far into the midrange and makes those midtones look less saturated. I fixed this by using VirtualDub's ColorMill filter to increase saturation in the dark areas and to lower brightness in the lower midtones.
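
If you're curious what a contrast mask actually does, the idea can be roughed out with Avisynth's built-in filters. This is not the ContrastMask.avs script itself, just an illustration of the principle (the path and opacity are guesses, and a proper script handles interlaced fields more carefully):
Code:
AviSource("D:\captures\at parcel.avi")               # hypothetical path
src = last
m   = src.Greyscale().Invert().Blur(1.5).Blur(1.5)   # soft, inverted luma mask
Overlay(src, m, mode="softlight", opacity=0.4)       # lifts shadows, calms highlights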

Fortunately these shadow areas will look brighter on a TV than on a PC. If you aren't using a calibrated monitor, you'll find that video and pics look very different after calibration. But that's another subject entirely.

Some of your other captures might prove somewhat more difficult. My sister's kamikaze acrobatics with her own camera gave me plenty of headaches while working with her videos, which look far worse. I'll continue in the next post.


Attached Images
File Type: jpg grandmas frame 10 YUV-RGB.jpg (71.6 KB, 304 downloads)
File Type: jpg grandmas frame 255 YUV-RGB.jpg (82.5 KB, 301 downloads)
File Type: jpg at parcel frame 45 original.jpg (116.8 KB, 303 downloads)
File Type: jpg at parcel frame 45 ContrastMask.jpg (132.2 KB, 303 downloads)
  #44  
01-03-2017, 02:32 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,322 Times in 991 Posts
Some of the captured shots have people definitely in the dark, so to speak. The tendency is to overcompensate and try making these dark areas too bright, which will look weird especially on TV. For scenes like this there are several filters, two being Avisynth's HDRagc and AutoLevels plugins -- but sad to say, those two filters and many like them play havoc with most videos and are difficult to use. They're designed for consistently underexposed video. If the lighting changes during a shot, these auto filters cause trouble galore. I decided to use a milder auto gamma filter called AutoAdjust.dll. It's a very mild filter that's not difficult to use. I'll demonstrate:

The image below is frame 30 from "flat.avi". The only processing of the original frame as shown was to resize to 640x480, crop borders and head switching noise, and center the image. In Avisynth this process has no effect on the core image, which remains intact and as-is.

frame 30 original, borders adjusted:


Frame 30 after filtering with AutoAdjust:


Thanks to the camera's autogain, this figure and other details are too dark when the figure is standing with that bright window in the camera's view. As you know, the shot changes when the figure moves into an area that isn't so affected by the window. Had autogain been turned off (if possible), exposure could have been set manually for the light in other parts of the room. I selected the AutoAdjust filter because it has gentle action, doesn't "pump" luma as the light changes, and can be set to have little or no effect on midtones and brights. I didn't want to overly brighten the darks because that would adversely affect everything when the light changes.

You can see that telling the filter not to change brights very much did help to darken the highlights in the window and bring up a little more detail in the curtains. When the figure moves away from the window and into other areas, the lighting looks normal. Together with the Avisynth filter, I used VirtualDub's gradation curves to very slightly brighten darks around the area of RGB 35, which included most of the shadowed area.
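
For reference, the Avisynth side of that adjustment has the general shape sketched below. The values here are only illustrative, not the exact settings I used for this clip (the parameter names are AutoAdjust's own):
Code:
AviSource("D:\captures\flat.avi")          # hypothetical path
ConvertToYV12(interlaced=true)
AutoAdjust(auto_gain=true, gain_mode=0, gamma_limit=8.0, bright_limit=1.0,\
  bright_exclude=1.0, high_quality=true)   # lift darks gently, leave brights alone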

Another tweak that I worked on, although not absolutely necessary, was taming the oversaturated red. The red cap in the image below of frame 294 is taking on a "dayglow" effect and slightly bleeding into the background wall and the coat collar. These things are not easy to correct, but you can improve them:

Frame 294 original:


Frame 294 after:


The fix isn't altogether perfect, but it looks better as the video plays. In order to do this I had to de-interlace the video with Avisynth's yadif.dll (QTGMC might be slightly better, but much slower), apply the chroma bleed filter, then re-interlace. In any event, you can also see that the AutoAdjust filter didn't adversely affect the brighter portion of the image. All you can see of that filter is the slightly more revealing detail of the shadowed woman in the background.
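
The skeleton of that round trip looks something like the sketch below -- the paths are placeholders, the real settings are omitted, and FixChromaBleeding stands in here for "the chroma bleed filter":
Code:
LoadCPlugin("D:\Avisynth\plugins\yadif.dll")   # hypothetical plugin path
AviSource("D:\captures\flat.avi")              # hypothetical path
ConvertToYV12(interlaced=true)
AssumeTFF()
Yadif(mode=1, order=1)                         # double-rate deinterlace, top field first
FixChromaBleeding()                            # work on the progressive frames
SeparateFields().SelectEvery(4,0,3).Weave()    # re-interlace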

I didn't go into Avisynth in detail here, or into VirtualDub. It's the very thought of Avisynth that scares the pants off people, because it is a script-driven tool that has no GUI of its own (the "interface" I used was VirtualDub). That sort of thing would be better placed in the restoration forum. An mp4 video of the filtered clip is attached.

Otherwise, we'll be glad to try to address other issues or complaints you have with these captures. There's really not much noise to worry about except for some very light filtering. The captures are very good.


Attached Images
File Type: jpg sample 01 frame 30 original.jpg (94.2 KB, 302 downloads)
File Type: jpg flat_00 autoadjust frame 30.jpg (97.5 KB, 306 downloads)
File Type: jpg sample_01 frame 294 chroma bleed.jpg (94.2 KB, 304 downloads)
Attached Files
File Type: mp4 flat_post_process.mp4 (7.44 MB, 13 downloads)
  #45  
01-08-2017, 04:18 PM
mparade mparade is offline
Free Member
 
Join Date: Dec 2016
Posts: 34
Thanked 0 Times in 0 Posts
Hello sanlyn,

First of all, thank you very much for your thorough reply! I really do not know how I could ever repay you.
Sorry for my late answer: I have been capturing my VHS collection for two weeks using your "advanced capturing tutorial". I am right in the middle of that and cannot focus on post-processing yet. I wish there were a general "advanced tutorial" for post-processing as well.

After reading your reply I feel I have big "information gaps" in the following areas, in which I would need your expertise, if possible:

- the general stages of post-processing starting from a capture that is a losslessly compressed avi container;
- what do you recommend for easy editing of the video (cut, join, delete)? Is VDub preferred here as well?
- to prepare for filtering, do you cut the capture on a scene basis and process it scene by scene? (I think it depends on whether the filters used for a specific issue in one scene adversely affect other parts of the scene)
- I haven't calibrated my monitor yet (a link to a tutorial would be great as well)
- what is the required order for filter usage in post-processing?
- what do you recommend for very light denoising (in the past I used MVDegrain)
- between filtering stages what kind of rendering do you prefer?
- is it suggested to make a separate avs script file for each avi input file fed into VDub?
- does it make sense to bring back the black borders and center the image at the end of the filter chain as a last command? (If I use an avs script with several filters as the input to VDub and then want to apply further VDub filters, do I have to center the image and restore the black borders at the end of the filter chain so the frame fits the standard size?)

Sorry for the many questions, but a general "advanced post-processing tutorial" would be great as well.
If it is required I can open a new thread at "restoration".

Thank you very much again. Please let me know how I can compensate you for your efforts.

P.S. I will be searching for some more difficult artifacts in the captures above to filter.
  #46  
01-09-2017, 10:59 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,322 Times in 991 Posts
Quote:
Originally Posted by mparade View Post
I wish there were a general "advanced tutorial" for post-processing as well.
So do I. I've never seen one. The multitude of differences in video source quality likely has much to do with that situation.

Quote:
Originally Posted by mparade View Post
- the general stages of post-processing starting from a capture that is a losslessly compressed avi container
There are no strict rules. Everyone has preferences. With advanced users, analysis and correction of luma/chroma levels comes first. This involves tools like histograms and pixel samplers. A pixel sampler is probably a new item for most people. Users pay big bucks for big-name software like Premiere Pro (or better) to get things like pixel color samplers, which are simply tools that read the color values of the pixels under your mouse cursor or other probe. For us mere wage earners there are free desktop tools like the resizeable ColorPic and others (https://www.iconico.com/colorpic/).

Usually one adjusts levels in the original YUV colorspace first, but checks them in both YUV (the way video is stored) and in RGB (the way video is displayed). The main reason is to avoid unsafe luma and/or color values that result in the clipped brights or crushed darks described earlier, and to see how much leeway you have inside those safe levels for correcting things like color balance, saturation, and tonal contrast.

Next comes noise and obvious defects such as dropouts, edge stains or rainbows and other chroma noise, color bleed, edge effects such as halos and color offsets, and things like spots, excessive combing artifacts, "floating" tape noise, excessive grain, etc. There are specific filters for almost everything. Sometimes deinterlacing and re-interlacing are required, sometimes a simpler SeparateFields() will work. Usually color work doesn't care about interlace or telecine. Much depends on the specific filter or defect. You mentioned MVDegrain (these days, with the new MVTools, it's MDegrain1, 2, and 3). Its variations can be used with SeparateFields, although many would deinterlace/reinterlace if it wouldn't prove too destructive.
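
As an example, the usual mvtools2 idiom for MDegrain2 on interlaced material, run on separated fields (the path is a placeholder and the thSAD value is just a conservative starting point):
Code:
AviSource("D:\captures\capture.avi")        # hypothetical path
ConvertToYV12(interlaced=true)
SeparateFields()
super = MSuper(pel=2)
bv1 = MAnalyse(super, isb=true,  delta=1)
fv1 = MAnalyse(super, isb=false, delta=1)
bv2 = MAnalyse(super, isb=true,  delta=2)
fv2 = MAnalyse(super, isb=false, delta=2)
MDegrain2(super, bv1, fv1, bv2, fv2, thSAD=300)
Weave()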

Fixing borders and centering the image can be done at any time. Some people crop off borders during early analysis stages to keep borders from throwing off histogram readings. Colorspace and frame structure dictate certain rules and cautions when cropping (see the bottom of this web page: http://avisynth.nl/index.php/Crop). The Avisynth code that I used to crop the samples in the earlier post was run in YUY2 color with interlaced frames. AddBorders() was used (http://avisynth.nl/index.php/AddBorders) to restore the original frame size and center the image:

Code:
Crop(0,0,-20,-10)
AddBorders(10,4,10,6)
Quote:
Originally Posted by mparade View Post
- does it make sense to bring back the black borders and center the image at the end of the filter chain as a last command? (If I use an avs script with several filters as the input to VDub and then want to apply further VDub filters, do I have to center the image and restore the black borders at the end of the filter chain so the frame fits the standard size?)
As you can see from the code posted above, cropping off borders and restoring the frame size are done at the same time. Don't process with odd frame sizes. Some filters require certain pixel-block dimensions. Standard video frame sizes like 720x576 will meet those requirements. I usually crop borders at the outset only temporarily, remove the crop command after I check levels, then use the actual crop-and-center near the end of processing.

Quote:
Originally Posted by mparade View Post
between filtering stages what kind of rendering do you prefer?
The filters used are in Avisynth and VirtualDub, not in NLEs. Most people know NLEs as Premiere Elements, Premiere Pro, Vegas, Movie Studio, etc., but they are not restoration tools. They fall far short of Avisynth and VirtualDub for denoising and repair, although the "pro" NLEs have very advanced color correction tools -- which, by the way, most casual owners pay high prices for but never use! (The average owner of Premiere Pro uses it like a low-cost budget NLE and is wasting the investment.) So "rendering" between process stages doesn't apply. Lossless video is saved as new lossless working files for the next processing step or for encoding.

Because scripts are run in VirtualDub (and because you can always apply VDub filters to Avisynth's output at the same time), Virtualdub output settings are set to configure output colorspace and lossless compression. "Rendering" usually applies to encoding from an NLE. But in this case I don't use editors to encode. I use programs that are set up primarily as encoders, not primarily as editors.

By "edit" I think you mean cut-and-join or timeline operations, or isolating certain segments for work. That can be done in Avisynth scripts using the Trim command (http://avisynth.nl/index.php/Trim) or with the edit controls in VirtualDub. Or, lossless files can be trimmed in most NLE's and saved as lossless files again. The main point is that "rendering" as used by most NLE programs to mean "encoding" isn't done in intermediate stages. Lossy encoding is the last step, not an intermediate step. Each repeated lossy encoding step involves more and more incremental quality loss, so working files are saved using lossless compression. Lossy means "You get back less than you started with, and you can't go back to get what was lost." Lossless means that everything you compressed into the video is returned 100%.

So if you plan on using an NLE for timeline work and encoding, do your lossless corrections first, then import the pieces into your NLE for the production and encode step, which would be the last step in the workflow.
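
For the cut-and-join itself, the script side is just Trim plus the splice operator. The frame numbers below are placeholders:
Code:
AviSource("D:\captures\capture 01.avi")   # hypothetical lossless working file
a = Trim(100, 2500)                       # first segment to keep
b = Trim(4000, 6800)                      # second segment to keep
a ++ b                                    # unaligned splice joins them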

"Light denoising" could mean several things. MDegrain is one way (it is a spatio-temporal denoiser that works mostly on grain), and there is TemporalSoften, FluxSmooth, MCTemporalDenoise at "Very low" settings, VirtualDub's temporalsmoother at settings of 2 or 3 (never higher than 4), QTGMC at "super fast" settings with its EZDenoise filter set to about 2 or 4 -- there are a great many choices here, and it depends on the kind of "noise" you're talking about. For very light spot removal, try RemoveSpotsMC, which is a version of RemoveDirtMC. Or for a really bad case of white/black spots and dropouts, try RemoveSpotsMC3. If you wanted to smooth out gradients, or large flat areas such as sky where you want to preserve delicate changes in hue rather than see hard, ugly edges and macroblocks, there are denoisers like DeBlock_QED, GradFun2DBmod, or dfttest (the latter is also a grain cleaner). For rainbows and chroma noise you have Cnr2, SmoothUV, CamcorderColorDenoise, and many others. These filters have variable settings.

Sharpening should follow denoising, not precede it. After all, why would you want to sharpen noise first? Basic color correction is usually done during levels and contrast setup, but is almost always tweaked in RGB (usually with VirtualDub) later in processing because filtering often affects contrast and color. Favorite VDub color plugins are ColorMill and gradation curves. I often do advanced color work with ColorFinesse in After Effects, but I save the results as lossless AVI.

The agenda I follow starts with Avisynth correction in the original colorspace (YUY2) as much as possible, then work in YV12 (which is what most of the denoisers want), then final work in RGB. This avoids problems with multiple stages of back-and-forth colorspace conversion. It also gives you more control over when and how the conversions are specified. NLEs are sloppy with many colorspace conversions compared with Avisynth's higher precision.
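
In outline, that order of operations looks like the sketch below. The filters shown are only placeholders for whatever a given clip actually needs, and the values are illustrative:
Code:
AviSource("D:\captures\capture.avi")                 # hypothetical YUY2 capture
# --- stage 1: levels and basic color in the original YUY2 ---
ColorYUV(cont_y=10, off_y=-4)                        # illustrative values only
# --- stage 2: denoising and repair in YV12 ---
ConvertToYV12(interlaced=true)
MCTemporalDenoise(settings="Low", interlaced=true)   # placeholder for the clip's real filters
# --- stage 3: hand off to RGB for VirtualDub filters ---
ConvertToRGB32(matrix="Rec601", interlaced=true)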

If you're running an Avisynth script in VirtualDub and you want to save the output to lossless YUV for more work later, set VDub's output color to YUY2 or YV12, set the compression for your lossless compressor, then set video output to "fast recompress", then save the file. Note that any VDub filters you want to run won't be applied using "fast recompress", so in that case set video output to "full processing mode" instead but keep your colorspace and compressor output settings. VDub's default output is always uncompressed RGB unless you specify otherwise.

If your script is working in YUV but you want to set it up to run VirtualDub RGB filters for work in VirtualDub, use this at the end of the script:

Code:
ConvertToRGB32(interlaced=true,matrix="Rec601")  #<- for interlaced or telecined video
or:
Code:
ConvertToRGB32(interlaced=false,matrix="Rec601")  #<- for purely progressive video
Quote:
Originally Posted by mparade View Post
- to prepare for filtering, do you cut the capture on a scene basis and process it scene by scene? (I think it depends on whether the filters used for a specific issue in one scene adversely affect other parts of the scene)
It depends on the video. Most of the time there is at least one maverick segment that won't behave. On the other hand, full processing start-to-finish for a full hour or more of run-time video in one step can be really tiresome. I generally process as much length as I can using a single script and filter set in Avisynth and VirtualDub before having to use something different for a wayward segment. Those more or less "universal" settings can be re-used for other segments later, so it's not as if you have to repeatedly re-invent the wheel.

I have worked with retail VHS tape editions that defy reasonable treatment. I've had to cut them into small segments of as little as a few seconds, scene after scene, for 2 to 3 months of work to process a 90-minute movie. So home video is no exception regarding scene by scene differences but is often more consistent than retail VHS.

Quote:
Originally Posted by mparade View Post
- is it suggested to make a separate avs script file for each avi input file fed into VDub?
The script has to identify the location and name of each avi input, so in that sense you can use the same script repeatedly but you have to change a few lines to suit each input clip. I don't always save intermediate working files but I save the scripts.

Quote:
Originally Posted by mparade View Post
I haven't calibrated my monitor yet (a link to a tutorial would be great as well)
This subject is a whole can of worms. Essentially, if what you view on your uncalibrated monitor isn't accurate, the results won't be what you think they are. They will look very different on every monitor and every TV. Calibration, or what the pros call grayscale adjustment, sets the monitor to common, well-defined display standards. Video enthusiasts and photographers adjust their equipment to those standards. They aren't "opinions" about what "looks nice" but objective specifications. If someone else's monitor is out of whack, that's their problem and not yours. At least everyone gets the same adjusted input, if not the same output. But there is no sense working with an overly bright or too-blue monitor and setting your videos too dark or too yellow because of it -- that method guarantees your results will look wacky everywhere.

Setting up a TV into the neighborhood of "accuracy" is normally not too difficult if you have a good eye. Typically a budget monitor is way out of spec right from the box and is usually factory-set for showrooms, not for living rooms. They are almost always too bright and too blue, with a high gamma setting that looks snappy but has poor shadow detail and blows out highlights. More expensive monitors are closer to the norm but still have color and gamma errors.

PC monitors operate differently than TVs and are very difficult to calibrate with the user controls provided. The most accurate setup uses an electronic colorimeter and an associated software kit to create a monitor profile that adjusts your monitor settings and your graphics card's output. This can be expensive (as with XRite's very popular EODISC3 i1 kit: https://www.amazon.co.uk/X-Rite-i1Di...keywords=xrite) or easier on the wallet (such as Xrite's fully automated ColorMunki: https://www.amazon.co.uk/X-Rite-Colo...rds=colormunki). There are similar products from an outfit called Spyder.

A set of manual display test patterns is available from the lagom website (http://www.lagom.nl/lcd-test/?). Place yourself and your monitor into a dimly lighted environment and work your way through the test pattern pages. This is not the best way of doing it, but you can put yourself into a better viewing situation than you start with. You will also get an idea of how troublesome this can be manually and how far "off" and limited the typical monitor is.

The best way to illustrate what a monitor should look like and what a calibration does is an older test review of the XRite i1-Display2 calibration kit (now superseded by the EODISC3 at about the same price). This is the way most calibration kits work, and it illustrates how easy and accurate they can be: http://www.tftcentral.co.uk/reviews/...e_display2.htm. Plenty of pictures and graphs illustrate their use and what their goals are.

Quote:
Originally Posted by mparade View Post
I will be searching for some more difficult artifacts in the captures above to filter.
No problem, but a slight correction here. "Artifacts" is one of those terms that is used very loosely. Technically, artifacts are digital phenomena, not analog. Analog definitely has its problems, but no one will complain if you call them artifacts. Rainbows (cyan and magenta color "blotches" or blemishes in clear areas like blue sky or skin tones) are analog chroma problems, along with dot crawl and chroma bleed. Grain and tape noise are analog luminance and chroma glitches. When those glitches are captured to lossy codecs like MPEG or DV and get translated into digital compression glitches, they technically become "artifacts" (mosquito noise, macroblocks, temporal chroma smearing, and so forth). In pure terms, analog doesn't have lossy digital compression artifacts, which is one reason for using lossless capture.

We are allowed to call practically any problem an artifact. It's just that knowing the source and the cause makes it easier to deal with.
The following users thank sanlyn for this useful post: mparade (01-10-2017)
  #47  
01-09-2017, 06:19 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,322 Times in 991 Posts
The earlier posted sample that I neglected was "from car.avi", which has typical problems: the effects of shooting through glass, and the effects of autogain.

The image below is frame 85 of the original clip, reduced to a 4:3 image. Our clever brain compensates for what our eyes see through car windows, but the camera doesn't oblige. The scene through the window is washed out, and reflections off the glass surface corrupt color and remove detail. The camera's autogain makes sure the dark car interior gets good exposure, but it washes out the main point of interest, which is the scenery outside.



frame 85 after corrections, with filter used:


It's possible to make corrections using a filter like VirtualDub's gradation curves, pictured above. Pulling the diagonal line toward the right side of the filter darkens the affected colors. Here, the darkening is greater for brights and midtones than for darks. The dark part of the image isn't the important subject anyway. This very controlled correction isn't possible with the usual brightness and contrast filters.
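
If you'd rather rough out the same move in Avisynth instead of VirtualDub, a gamma/levels tweak works on the same principle, though curves give far finer control. The path and values below are only illustrative:
Code:
AviSource("D:\captures\from car.avi")                       # hypothetical path
Levels(16, 0.85, 255, 16, 230, dither=true, coring=false)   # gamma < 1 darkens mids and brights more than darks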

Of course most viewers likely wouldn't be bothered by the original. So frankly I'd save some work and leave the original video as-is.


Attached Images
File Type: jpg frame 85 from car - original.jpg (83.0 KB, 299 downloads)
File Type: jpg frame 85 from car -color levels.jpg (119.9 KB, 300 downloads)
The following users thank sanlyn for this useful post: mparade (01-10-2017)
  #48  
01-10-2017, 06:01 PM
mparade mparade is offline
Free Member
 
Join Date: Dec 2016
Posts: 34
Thanked 0 Times in 0 Posts
Hello sanlyn,

Thank you very much, I can learn from you more and more. Regarding post-processing I will come back to you shortly (after I have finished with capturing my VHS collection).

Unfortunately, I have just realized that around 10 of my tapes are moldy, each to a different extent. Do you have any experience with moldy tapes? I do not dare to play any of them, even in my consumer-level Panasonic VHS player, to check their content. Can I do anything for such tapes? Is there still any company anywhere specialized in removing mold from tapes? Those tapes hold very valuable videos for me.

Thank you very much for your opinion, as always.
  #49  
01-10-2017, 11:27 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,322 Times in 991 Posts
They should be sent to a professional service for cleaning. And, no, they are not free or cheapo, though likely not as expensive as you imagine. If you're of a mind to do it yourself, you should know that it's not really as simple as YouTube videos present it. For instance, one of the better YouTube presentations at https://www.google.com/url?sa=t&rct=...z99GlnKt22pGxg mentions the use of an old used VCR and 95% isopropyl alcohol, when a visit to a hardware or paint shop for pure ethanol would be a better choice. Near the end of the video the user shows an opened tape cassette for cleaning but doesn't go through the process of disassembly or reassembly, which in itself is a project. There are other videos, but I think you can see from all of them that the process is over-simplified.

Most pro services use a combination of manual and mechanized cleaning. Beware of any service that simply runs the tape through a cheap scrubber machine -- your tape will never be the same.
  #50  
02-06-2017, 06:17 PM
mparade mparade is offline
Free Member
 
Join Date: Dec 2016
Posts: 34
Thanked 0 Times in 0 Posts
Hello sanlyn,

Thanks to your help I have almost finished capturing all of my tapes (there were some real obstacles along the way, e.g. I recently found 40 Video-8 tapes and needed to buy a Sony camera on eBay to be able to play them back correctly through a TBC, and I also found a lot of SECAM tapes, which my Datavideo TBC-5000 could not handle...).

First of all, please find attached several really bad-quality captures from a tape from 1987, for whose post-processing I would need your expertise, if you have some time for that again.

Your efforts are greatly appreciated!

Regards,

mparade


Attached Files
File Type: avi the hill.avi (72.87 MB, 21 downloads)
File Type: avi viewpoint1.avi (77.91 MB, 13 downloads)
File Type: avi riding a bike.avi (74.45 MB, 17 downloads)
File Type: avi kitchen2.avi (88.27 MB, 19 downloads)
File Type: avi kitchen1.avi (68.39 MB, 15 downloads)
File Type: avi just come from soccer world cup.avi (53.20 MB, 14 downloads)
File Type: avi grandma.avi (74.14 MB, 15 downloads)
File Type: avi dishwashing.avi (72.06 MB, 15 downloads)
File Type: avi coming from grandma.avi (94.26 MB, 16 downloads)
File Type: avi bikers2.avi (86.53 MB, 13 downloads)
File Type: avi bikers1.avi (85.09 MB, 17 downloads)
  #51  
02-07-2017, 12:08 AM
lordsmurf's Avatar
lordsmurf lordsmurf is offline
Site Staff | Video
 
Join Date: Dec 2002
Posts: 14,058
Thanked 2,555 Times in 2,173 Posts
Maybe - can probably be fixed or improved in Avisynth
No - not happening
Yes - can definitely be improved/fixed in Avisynth

the hill.avi (72.87 MB) - bad white balance, scene not colorful light - maybe
viewpoint1.avi (77.91 MB) - color shimmer from camera - maybe
riding a bike.avi (74.45 MB) - weak green color (from luma) - yes
kitchen2.avi (88.27 MB) - missing chroma channel, some other odd error - no
kitchen1.avi (68.39 MB) - luma washed/lost - no
just come from soccer world cup.avi (53.20 MB) - wrong WB, maybe missing chroma channel - no
grandma.avi (74.14 MB) - luma washed/lost - no
dishwashing.avi (72.06 MB) - camera noise - yes
coming from grandma.avi (94.26 MB) - tape chroma/tracking error - maybe
bikers2.avi (86.53 MB) - slight washed luma - honestly fine as is, maybe chroma NR at most (CCD in VirtualDub)
bikers1.avi (85.09 MB) - don't really see an issue, aside from quick blip

This is just a mix of issues. And that's typical of a home-shot VHS collection.

- Did my advice help you? Then become a Premium Member and support this site.
- For sale in the marketplace: TBCs, workflows, capture cards, VCRs
  #52  
02-07-2017, 02:48 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,322 Times in 991 Posts
I came to similar conclusions. It appears that December 1987 was a bad era for your camera, mparade. Mostly electronic problems. I can't decide which is worse, the chroma flicker or the corrupted YUV from a bad CMOS. It looks as if AGC and autowhite didn't help, either. I think you did a decent job of controlling luma levels despite the oddball camera recording. Such tribulation comes with home video, but thankfully not always.

Below is frame 75 from the original "The hill.avi" with a YUV histogram attached. Blue dominates the image. The histogram shows no yellow or red in the image.


Below: The U (blue-yellow) channel has been pulled leftward toward the center, and the V (red-green) channel has been pushed rightward. This is not an ideal correction: you can't create color channels that don't exist, so an accurate rendering of the original color isn't possible. I take it that this is either an early morning or dusk exposure, so the image is warmed for that kind of light, with a little tweaking in VirtualDub with ColorMill.


The more difficult part was calming bad chroma flicker and cleaning the bad crosshatch noise in the clouds.

My conclusions on the samples:
A-The hill: CMOS defect, green crippled, U and V displaced, bad chroma flicker, tracking - partial fix
B-Viewpoint: CMOS defect - no fix
C-riding a bike: CMOS defect, chroma flicker, tracking - same as "The hill" - partial fix
D-kitchen2: CMOS defect, AGC/autowhite error, chroma flicker - no fix
E-kitchen1: CMOS defect, AGC/autowhite error, chroma flicker - no fix
F-just come from soccer world cup: same as kitchen2 - no fix
G-grandma: CMOS defect, ruined by AGC, autowhite error - partial fix
H-dishwashing: autowhite error, chroma flicker - partial fix
I-coming from grandma: same as viewpoint.avi - no fix
J-bikers2: chroma flicker, AGC interference - fixed
K-bikers1: chroma flicker, horizontal dropout - fixed

I've attached a few samples of efforts at cleanup. Bad recordings are never going to be perfect, and sometimes not even pretty, but one does what one can. Later I'll post details about what I did with these, but it will take a day to get the details cleaned up into readable text.

Still working on "coming from grandma" but it will be a very partial fix at best.


Attached Images
File Type: jpg frame 75 original YUV.jpg (58.5 KB, 424 downloads)
File Type: jpg frame 75 YUV adjust.jpg (59.3 KB, 287 downloads)
Attached Files
File Type: mp4 A_The Hill.mp4 (5.07 MB, 15 downloads)
File Type: mp4 C_riding a bike.mp4 (4.98 MB, 10 downloads)
File Type: mp4 G_grandma.mp4 (5.15 MB, 8 downloads)
File Type: mp4 H_dishwashing.mp4 (5.05 MB, 8 downloads)
File Type: mp4 J_bikers2.mp4 (5.63 MB, 10 downloads)
File Type: mp4 K_bikers1.mp4 (5.53 MB, 8 downloads)
  #53  
02-08-2017, 11:37 AM
mparade mparade is offline
Free Member
 
Join Date: Dec 2016
Posts: 34
Thanked 0 Times in 0 Posts
Thank you very much for your efforts in advance!
  #54  
02-09-2017, 07:00 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,322 Times in 991 Posts
Damaged or corrupt video is always a pain. Decent video doesn't need the kind of repair described here. Even when drastic measures are applied, it's a rule of thumb that with "bad" video you can't win 'em all. Sometimes the fix can look good. Sometimes it's only partial but offers something one can live with when valuable archives are involved. Sometimes you just archive your losses and move on.

The workflow described here is fairly common but open to many variations. The flow begins with lossless captures that I store on external hard drives. Either Virtualdub or an Avisynth script is used to pull off segments of the capture for repair or restoration. The repaired segments will be reassembled later.

1. In a media player at full screen, observe the video's behavior. This is the easy part, but you'd be amazed how many people skip this step, miss obvious problems, and thus never find a solution.

2. Open the video in an Avisynth script and run the script in VirtualDub. This allows frame-by-frame analysis. Use various deinterlacers or smart bobbers (VirtualDub has a version of yadif built in) to study how fields as well as frames behave, which in turn gives more insight into noise patterns and other problems (a tiny example of this kind of inspection script follows this list).

3. Experiment with histograms to analyze color problems and with various filters for denoising and repair.

4. Run repair scripts or filters and save the results as lossless intermediate files. In many cases, the output from an Avisynth script is further treated with VirtualDub color controls or filters applied to the script's output inside VirtualDub. For future encoding, work files are often saved as YV12 for DVD, BluRay, and web use. Huffyuv can't compress YV12, so I usually save YV12 with the Lagarith lossless codec.

5. Repaired segments are assembled in an editor or encoder, then encoded for final output.
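
A tiny inspection script of the kind used in step 2 (the path is a placeholder):
Code:
AviSource("D:\captures\the hill.avi")   # hypothetical path
AssumeTFF()
SeparateFields()                        # step through the individual fields in VirtualDub
# or use Bob() / a smart bobber to view each field as a full-height frame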


"The Hill.avi" :

ColorYUV was used to restore a more normal histogram pattern in YUY2. Chroma is broken between fields, with the clouds wildly changing shape from field to field within frames. Chroma streaks and blotching mix with herringbone, crosshatch, and horizontal dropout patterns, mostly across the top half of the image and in bright areas. This noise was addressed first with MCTemporalDenoise working in chroma-only mode, then a second script used QTGMC for deinterlacing and for motion and chroma smoothing. Then a strong multi-pass spot remover worked with a more random pattern of disassembled fields. Then the fields were reassembled and interleaved, and the video was reinterlaced.

The script's output was given additional VirtualDub filters for color tweaking. The VDub filters were Camcorder Color Denoise ("CCD") and ColorMill. VDub filter setup and settings were saved in a .vcf file for re-use later.

Because MCTemporalDenoise and QTGMC are CPU-intensive filters and slow runners, and to avoid memory-swap slowdown with slow filters, the script was run in two steps.

Step 1:
Code:
AviSource("Drive:\path\to\video\The Hill.avi")
Crop(0,0,-20,-8).AddBorders(10,4,10,4)
ColorYUV(off_u=-15,gain_v=30)
ConvertToYV12(interlaced=true)
MergeChroma(MCTemporalDenoise(interlaced=true,settings="Very High"))
return last
# ############################################################# 
# In VirtualDub, save this output in "fast recompress" mode as
# "The Hill 01.avi", using YV12 and Labgarith compression.
# This file will be the input file for step 2 of the routines.
# ############################################################

Step 2:

Code:
LoadCPlugin("Drive:\path\to\Avisynth\plugins\yadif.dll")

AviSource("Drive:\path\to\video\The Hill 01.avi")
AssumeTFF()
QTGMC(preset="Ultra Fast",EdiMode="RepYadif",EZDenoise=4,Denoiser="dfttest",\
  NoiseProcess=1,ChromaNoise=true,DenoiseMC=true,border=true)
SeparateFields()
a=last
e=a.SelectEven().RemoveSpotsMC2X()
o=a.SelectOdd().RemoveSpotsMC2X()
Interleave(e,o)
Weave()
RemoveDirtMC(20,false)
LimitedSharpenFaster(edgemode=2)
# ------------- re-interlace ----------------- ###
SeparateFields().SelectEvery(4,0,3).Weave()
ConvertToRGB32(matrix="Rec601",interlaced=true)
return last

"riding a bike.avi":

Problems similar to "the hill.avi", but worse. The disturbed flicker and crosshatch/herringbone/dropout problem was periodic, repeating every 2 frames and every 4 interlaced fields, with the 2nd and 4th fields being the worst. The periodic peak was, therefore, in even-numbered frames and fields. After running MCTD (MCTemporalDenoise) on the chroma flicker, QTGMC was used to retain only odd-numbered fields. Discarding alternate fields gets progressive video but lowers temporal resolution (motion smoothness). This was preferred over the bad peaks in periodic noise and object distortion. Accurate color isn't possible because of YUV color channel data distortion, but a more natural warmth and absence of "purple reds" resulted from ColorYUV functions.

The Virtualdub filter applied to the output was fsn.vdf, which goes by the formal name of "Frequency Suppressor of the Noise". Herringbone on the automobiles in the image isn't entirely clean, but fsn.vdf made it look slightly smoother. Other filters were attempted but resulted in artifacts that were worse.

Again, a two-pass script was required:

Step 1:
Code:
AviSource("Drive:\path\to\video\riding a bike.avi")
Crop(0,0,-20,-8).AddBorders(10,4,10,4)
ColorYUV(off_v=30,off_u=-45)
ConvertToYV12(interlaced=true)
MergeChroma(MCTemporalDenoise(interlaced=true,settings="Very High"))
return last
# ############################################################# 
# In VirtualDub, save this output in "fast recompress" mode as
# "riding a bike 01.avi", using YV12 and Labgarith compression.
# This file will be the input file for step 2 of the routines.
# ############################################################

Step 2:

Code:
AviSource("Drive:\path\to\video\riding a bike 01.avi")
AssumeTFF()
QTGMC(preset="Very fast",EZDenoise=8,Denoiser="dfttest",\
  NoiseProcess=1,ChromaNoise=true,DenoiseMC=true,border=true).SelectOdd()
SeparateFields()
a=last
e=a.SelectEven().RemoveSpotsMC2X()
o=a.SelectOdd().RemoveSpotsMC2X()
Interleave(e,o)
Weave()
FixChromaBleeding()
RemoveDirtMC(20,false)
MergeChroma(awarpsharp2(depth=30))
LimitedSharpenFaster(edgemode=2)
ConvertToRGB32(matrix="Rec601",interlaced=false)
return last

"grandma.avi":

Defective AGC (or other bad circuitry) made the bright window worse than it should have been. I mentioned the AutoAdjust Avisynth plugin earlier. Here it's used to increase shadow detail by a small margin while ignoring the brightest pixels. I also enabled AutoAdjust's auto_balance parameter to warm the interior and get at least a more natural skin tone. No VirtualDub filters were used.

Code:
AviSource("Drive:\path\to\video\grandma.avi")
Crop(0,0,-20,-10).AddBorders(10,4,10,6)
ConvertToYV12(interlaced=true)
AutoAdjust(auto_gain=true, gain_mode=0, gamma_limit=8.0, bright_limit=1.0, bright_exclude=1.0,\
  high_quality=true, auto_balance=true)
Levels(16,0.95,255,16,235,dither=true,coring=false)
return last

"dishwashing.avi":

CMOS and flicker problems similar to other samples. In this case a perfect white or gray on many objects isn't possible because of missing color data in YUV. I chose to correct mostly for flesh tones. The 2-step script is similar to those used earlier, with odd frames selected to avoid the worst peaks of flicker and crosshatch noise in the upper left corner. The VirtualDub filter used to very mildly tweak color was ColorMill.

Step 1:
Code:
AviSource("Drive:\path\to\video\dishwashing.avi")
Crop(0,0,-20,-12).AddBorders(10,6,10,6)
ColorYUV(off_u=-25)
ConvertToYV12(interlaced=true)
MergeChroma(MCTemporalDenoise(interlaced=true,settings="Very High"))
return last
# ############################################################# 
# In VirtualDub, save this output in "fast recompress" mode as
# "dishwashing 01.avi", using YV12 and Labgarith compression.
# This file will be the input file for step 2 of the routines.
# ############################################################
Step 2:
Code:
AviSource("Drive:\path\to\video\dishwashing 01.avi")
AssumeTFF()
QTGMC(preset="Very fast",EZDenoise=8,Denoiser="dfttest",\
  NoiseProcess=1,ChromaNoise=true,DenoiseMC=true,border=true).SelectOdd()
SeparateFields()
a=last
e=a.SelectEven().RemoveSpotsMC2X()
o=a.SelectOdd().RemoveSpotsMC2X()
Interleave(e,o)
Weave()
FixChromaBleeding()
RemoveDirtMC(20,false)
MergeChroma(awarpsharp2(depth=30))
LimitedSharpenFaster(edgemode=2)
ConvertToRGB32(matrix="Rec601",interlaced=false)
return last

"bikers2.avi":

This has flicker and some twittery crosshatching on the wall surface, and has a blue color cast that looks more like autowhite error than auto correction. MCTemporalDenoise ("MCTD") in chroma mode addressed the flicker, while AutoAdjust retrieved some shadow detail and helped to correct the color balance. No VirtualDub filters were used.

Code:
AviSource("Drive:\path\to\video\bikers2.avi")
Crop(0,0,-20,-8).AddBorders(10,4,10,4)
AssumeTFF()
ConvertToYV12(interlaced=true)
MergeChroma(MCTemporalDenoise(settings="Very High",interlaced=true))
AutoAdjust(auto_gain=true,gain_mode=1,gamma_limit=2.0,bright_limit=1.0,bright_exclude=1.0,\
  high_quality=true,auto_balance=true)

"bikers1.avi"

Again, this has some flicker on the wall surface. At least the color balance isn't whacked. But overall it has twittery noise on edges, plus a bad horizontal dropout in frame 192. Actually the dropout is only in the bottom field of that frame (which you would discover if you examine the fields one by one in VirtualDub). There's not much action here, so ReplaceFramesMC2.avs was used to interpolate a replacement field, but deinterlacing was required for that filter (you shouldn't use it on interlaced frames). Deinterlacing with QTGMC was needed anyway to calm the edge buzz, and the vInverse() plugin was used to reduce excessive combing. In the script the field number being replaced is 385, because frame and field numbers are doubled when deinterlaced. ReplaceFramesMC2 can sometimes create bizarre distortions if motion is too complex, so it doesn't always work this well. No VDub filters were used.

Code:
AviSource("Drive:\path\to\video\bikers1.avi")
Crop(0,0,-20,-8).AddBorders(10,4,10,4)
AssumeTFF()
ConvertToYV12(interlaced=true)
QTGMC(preset="medium",EZDenoise=4,Denoiser="dfttest",\
  NoiseProcess=1,ChromaNoise=true,border=true)
vInverse()
ReplaceFramesMC2(385,1)
TemporalSoften(4,4,8,15,2)
# ------------ re-interlace -------------- ###
SeparateFields().SelectEvery(4,0,3).Weave()
  #55  
02-09-2017, 08:15 PM
lordsmurf's Avatar
lordsmurf lordsmurf is offline
Site Staff | Video
 
Join Date: Dec 2002
Posts: 14,058
Thanked 2,555 Times in 2,173 Posts
@sanlyn:
Don't use larger fonts. Font sizing can be messy when viewing on mobile. Use [myhr][/myhr] or [h1hr][/h1hr] bbcode instead. However, please use those sparingly, in cases where separation is of utmost importance.

Tip: Quote your last post, see what I did.

- Did my advice help you? Then become a Premium Member and support this site.
- For sale in the marketplace: TBCs, workflows, capture cards, VCRs
The following users thank lordsmurf for this useful post: sanlyn (02-09-2017)
  #56  
02-09-2017, 08:36 PM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,322 Times in 991 Posts
So noted. Thanks.
  #57  
02-09-2017, 08:43 PM
lordsmurf's Avatar
lordsmurf lordsmurf is offline
Site Staff | Video
 
Join Date: Dec 2002
Posts: 14,058
Thanked 2,555 Times in 2,173 Posts
Quote:
Originally Posted by sanlyn View Post
So noted. Thanks.
Yeah, there's some nifty little-used custom bbcodes that have been added over the years. Everything from simple HR tags (line for spacing) to those before/after sliders for images. And we'll add more when deemed necessary.

- Did my advice help you? Then become a Premium Member and support this site.
- For sale in the marketplace: TBCs, workflows, capture cards, VCRs
  #58  
02-11-2017, 02:42 PM
mparade mparade is offline
Free Member
 
Join Date: Dec 2016
Posts: 34
Thanked 0 Times in 0 Posts
@sanlyn

I am much obliged again for your thorough answer. I have to say I have learned a lot again.

In one of your previous posts you mentioned pixel samplers and ColorFinesse. You also mentioned that color correction is often required later in processing, in the RGB colorspace (final work in RGB).

Could you please describe these steps in more detail so that I can include them in my post-processing workflow, if possible?

Thank you very much!

Regards,

mparade
  #59  
02-12-2017, 08:48 AM
mparade mparade is offline
Free Member
 
Join Date: Dec 2016
Posts: 34
Thanked 0 Times in 0 Posts
Could you please send a link to ReplaceFramesMC2.avs so I can check it?
I haven't found it anywhere on the internet.

Thank you very much.
  #60  
02-12-2017, 10:02 AM
sanlyn sanlyn is offline
Premium Member
 
Join Date: Aug 2009
Location: N. Carolina and NY, USA
Posts: 3,648
Thanked 1,322 Times in 991 Posts
I've been unable to reply to your request for more Virtualdub details (post #58). I've been traveling and using my laptop since yesterday, but I'll be able to reply later today or tomorrow.

Meanwhile my laptop does have copies of the original ReplaceFramesMC and ReplaceFramesMC2, both of which originally appeared at Videohelp and Doom9. Which one is better? The consensus seems to be that MC2 works better when more motion is involved, while the original MC is faster. But most users say you just have to experiment. Both are attached.


Attached Files
File Type: avsi ReplaceFramesMC.avsi (923 Bytes, 42 downloads)
File Type: avsi ReplaceFramesMC2.avsi (1.8 KB, 34 downloads)
The following users thank sanlyn for this useful post: giamis (08-19-2017)