This is what I see in the 10 samples you've posted:
What they did for you is decent, but not what I'd call good. Better quality is easily possible, but sadly most services aren't the ones to give it. That's why it's really important to research the video workflow and the hardware a service uses before going with them. I could do better without really even trying. For NR (noise reduction, i.e. removing grain), the better the source, the better the noise can be attacked. When the footage has already been deinterlaced and carries mosquito noise from compression, the ability to reduce or remove grain is significantly reduced. You're supposed to deinterlace/compress after restoring, not before; a rough sketch of that order follows below. I'm actually more ...
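To make that order concrete, here is a minimal sketch in Python driving FFmpeg (both assumed installed). The filenames and filter settings are illustrative placeholders, not the poster's workflow; serious restoration would use interlace-aware tools such as QTGMC, which comes up later in this thread.

Code:
import subprocess

# Step 1: restore (denoise) the lossless capture, and keep the result
# lossless (FFV1) so later stages aren't fighting compression artifacts.
subprocess.run([
    "ffmpeg", "-i", "capture_lossless.avi",
    "-vf", "hqdn3d=4:3:6:4",          # placeholder denoiser; tune to taste
    "-c:v", "ffv1", "-c:a", "copy",
    "restored_lossless.avi",
], check=True)

# Step 2: only now deinterlace (keeping both fields as frames) and
# compress to the lossy delivery format.
subprocess.run([
    "ffmpeg", "-i", "restored_lossless.avi",
    "-vf", "bwdif=mode=send_field",   # 59.94p out; no fields thrown away
    "-c:v", "libx264", "-crf", "18",
    "-c:a", "aac", "-b:a", "192k", "-ar", "48000",
    "final_delivery.mp4",
], check=True)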
I'd echo that it's a bad transfer with many basic errors. I'd guess that you could probably do just as well with a cheap USB capture card and the generic software that comes with it. I'll give a few reasons why I think so:
A) If you suspect that your original source will require restoration and repair, or other image work such as contrast and color correction, or effects such as fades and dissolves, you cripple the effort at the outset by having the source captured straight to a lossy final-delivery codec. "Final delivery" means that such codecs are not designed for further modification without image degradation. Lossy delivery formats like MPEG and H.264 also contain compression artifacts and signal loss that don't occur with lossless captures. They are interframe encodes as well, so even simple cut-and-join edits entail yet another damaging stage of lossy re-encoding; avoiding that loss on simple edits requires a smart-rendering editor, and almost all free editors and some cheap paid editors cannot smart-render H.264.

B) Your mp4's were captured with an inferior, damaging deinterlace method that dropped alternate fields or frames, and the deinterlacer itself appears to be a poor one (yes, it does look like yadif) that left many blurry, fuzzed edges. The original Hi8 was interlaced; after field dropping it was encoded as progressive, which actually causes playback problems on DVD players that look like bad deinterlacing. Besides losing 50% of your original color by encoding directly to YV12, you lost half of the original fields, i.e. 50% of the original temporal resolution. None of these losses can be repaired, and edges cannot be smoothed or sharpened enough to mask the deinterlacing and resizing artifacts. If the person who made these transfers claims the files are internet-ready because they are progressive mp4's, they're wrong: 720x480 anamorphic frames can't be posted as-is to sites like YouTube or Facebook. No one who makes this many newbie mistakes should be called a "pro" or get paid for it.

C) Further, the audio was encoded as low-bitrate AAC at 44.1kHz. If you wanted DVD or standard-definition BluRay as final output, AAC audio at 96kbps and 44.1kHz cannot be used (those formats require 48kHz audio), so the audio would have to go through another lossy stage of resampling and re-encoding.

D) There was apparently no control over captured input levels. On almost every sample, levels are invalid for standard digital video, meaning they exceed the range of luminance Y=16 to Y=235 and chroma U/V=16 to U/V=240. In practice that means invalid brights and highlights are clipped, and invalid darks (sub-zero blacks) are crushed (i.e., the same thing as clipped). Clipped data is destroyed data: all values beyond the clipping point have been collapsed to a single value, so there is no detail in clipped areas and none can be retrieved afterward. Clipped darks are always zero black and will never be any other color. (A quick numeric check for this is sketched at the end of this post.)

Don't confuse the noise in dim, underexposed, mottled, overly dark areas with "grain". Most of the noise in underexposed camera video is sensor noise, not grain. When the signal level is too dark, the signal strength of the camera's residual sensor noise is greater than the signal strength of the incoming image. Sensor noise differs from grain in that grain still contains image data of various values, while sensor noise contains no usable image information: it becomes zero-black and carries no other data. You can do all the filtering or brightening you want, but black sensor noise will always look like black mottling and won't go away.
CMOS noise patterns after brightening, contrast masking, and filtering: http://www.digitalfaq.com/forum/atta...1&d=1536390039 An example of processing for gross underexposure was posted just days ago: http://www.digitalfaq.com/forum/vide...html#post55882. The same videos appeared in other posts a few years ago, in (I think) a different forum; I believe lossless capture was the advice given then for best results.
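To make point D above concrete, here is a minimal numpy sketch (the function and thresholds are my own illustration, not from the post) that measures how much of a decoded 8-bit Y plane sits at or beyond the legal 16-235 range. A large population pinned at the extremes usually means the capture clipped, and that detail is gone for good.

Code:
import numpy as np

def clipped_fraction(y_plane: np.ndarray, lo: int = 16, hi: int = 235) -> float:
    """Fraction of luma samples at or beyond the legal range ends.

    Many samples pinned exactly at the limits usually means the capture
    device clipped: everything beyond the limit was collapsed to one
    value, and no filter can recover the lost detail.
    """
    crushed = np.count_nonzero(y_plane <= lo)
    blown = np.count_nonzero(y_plane >= hi)
    return (crushed + blown) / y_plane.size

# Synthetic stand-in for a decoded Y plane (720x480, 8-bit):
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 720), dtype=np.uint8)
print(f"{clipped_fraction(frame):.1%} of luma samples out of legal range")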
Quote:
As I understand it, image sensors (typically CCD, later CMOS, in camcorders) have two noise components. The first is thermal noise, random noise generated within the sensor itself: under normal exposure (i.e., good light) it sits below black level and is not visible on screen, but in poor light, with AGC applied, it rises above black level and becomes visible. The second is a grain-like component that is fixed rather than random: it results from the slightly different sensitivities of the individual imaging cells in the sensor. A toy model of both components is sketched below.
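Here is a toy numpy model of those two components (entirely my own sketch, with made-up numbers): random thermal noise plus a fixed per-cell sensitivity pattern. In good light the noise is negligible next to the signal; in a dim scene the AGC amplifies signal and noise together, so the noise floor rises above black and shows up as visible mottling.

Code:
import numpy as np

rng = np.random.default_rng(1)
h, w = 480, 720

# Fixed-pattern component: each cell slightly more/less sensitive than its peers.
sensitivity = rng.normal(1.0, 0.02, (h, w))

def sensor_frame(scene_level: float, agc_gain: float) -> np.ndarray:
    thermal = rng.normal(0.0, 2.0, (h, w))          # random, changes per frame
    signal = scene_level * sensitivity + thermal
    return np.clip(16 + agc_gain * signal, 0, 255)  # 16 = digital black

bright = sensor_frame(scene_level=150, agc_gain=1.0)  # noise buried in signal
dim = sensor_frame(scene_level=4.0, agc_gain=8.0)     # AGC fighting low light

for name, f in (("bright", bright), ("dim+AGC", dim)):
    print(f"{name}: mean level {f.mean():5.1f}, noise spread (std) {f.std():4.1f}")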
Quote:
With QTGMC, however, the default output is 59.94fps: it creates a frame from every field using surrounding data AND advanced processing (anti-aliasing, NR, etc). Very advanced, very powerful. To match the input fps, you must drop every other newly created frame; see the sketch below. And just for mention, to expand on "adaptive": the older adaptive deinterlacers were often just a bob with some weak anti-aliasing applied, sometimes taking nearby frames into account. "Adaptive" was an overused term back in the early 2000s; before methods like NNEDI, yadif, and QTGMC existed, it described almost anything that wasn't basic frame-dropping.
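A minimal VapourSynth (Python) sketch of that, assuming the havsfunc script (which ports QTGMC to VapourSynth) and the ffms2 source plugin are installed; the file name is a placeholder:

Code:
import vapoursynth as vs
import havsfunc as haf  # community script providing the QTGMC port

core = vs.core
clip = core.ffms2.Source("capture_lossless.avi")  # 29.97i interlaced source

# Default QTGMC output: one new frame per field = 59.94fps progressive.
double_rate = haf.QTGMC(clip, Preset="Slower", TFF=True)

# To match the input frame rate, drop every other newly created frame.
# (Passing FPSDivisor=2 to QTGMC does the same thing internally.)
single_rate = double_rate[::2]

single_rate.set_output()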
Quote:
Development of the CCD in the early 1980s led to it replacing the larger, heavier, more power-hungry tubes in video cameras and camcorders. For example, the 1990-vintage Canon A1 Digital Hi8 camcorder used a 400K-pixel 1/2" CCD as its image sensor.
Quote:
I also compared this tape to other tapes and noticed something: either the aspect ratio is a bit off, or there are very thin black vertical bars at the edges of the screen. I actually measured it, and the picture is slightly narrower on each side than on my other home movies, which are also 4:3. Why is this? Also, can your service guarantee the best possible result?