CCE with DVD-Rebuilder Does Not Support KVCD Notch Matrix! - Page 5 - digitalFAQ.com Forums [Archives]

  #81  
07-05-2004, 10:10 PM
The Untouchable The Untouchable is offline
Free Member
 
Join Date: Jan 2004
Location: Little India, British Columbia
Posts: 224
Thanks: 0
Thanked 0 Times in 0 Posts
Shit, Phil, you're right... that sucks then.
You said you were going to try QuEnc with the KVCD matrix and trellis quantization.
Do we use one at a time, or both at the same time?
Reply With Quote
  #82  
07-06-2004, 01:52 AM
The Untouchable The Untouchable is offline
Free Member
 
Join Date: Jan 2004
Location: Little India, British Columbia
Posts: 224
Thanks: 0
Thanked 0 Times in 0 Posts
That would explain a lot. Anyhow, I'm going to do a test KDVD with DVD Rebuilder and QuEnc.
Reply With Quote
  #83  
07-21-2004, 08:11 PM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by Dialhot
In the end, no test is needed with QuEnc. Not only is the picture AWFUL, but I asked it to keep the French subtitles and it kept... the Dutch ones!
Strange. I've backed up two movies (well, one movie and one 4-episode TV series) and got excellent output from CCE. The TV series (The Shield, Season 1, Volume 1, R1) wasn't as good as the movie because it's over 3 hours long, but it's far better than the DVD Shrink version I made. It's perfectly acceptable on my HDTV, which means it would look great on a standard-definition TV. The most noticeable problem was mosquito noise, which I think I could've fixed by adjusting the quality settings a little. I also could've got rid of the director's commentaries for an extra 250 MB or so, and gained more bits by resizing to 704x480 and adding 8 pixels of black on each edge. But it was my first encode and I didn't really know what I was doing.

I used just FluxSmooth for both, though I'm going to try UnDot and Deen on my next encode.
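In script terms, the chain I fed CCE looked roughly like this. It's only a sketch: the plugin paths, the FluxSmooth thresholds and the d2v name are placeholders, and DVD-RB normally generates the source line itself.

Code:
# Rough sketch of my filtering chain (paths and threshold values are placeholders)
LoadPlugin("C:\filters\DGDecode.dll")
LoadPlugin("C:\filters\FluxSmooth.dll")
MPEG2Source("D:\rip\VTS_01.d2v")          # DVD-RB normally writes this part for you
FluxSmooth(temporal_threshold=7, spatial_threshold=7)
LanczosResize(704, 480)                   # encode a 704-wide active picture...
AddBorders(8, 0, 8, 0)                    # ...padded back to 720 with cheap black bars
# (you could also leave the frame at 704x480, which the DVD spec allows)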

As for the subs, that's obviously a bug. You should post a bug report.
Reply With Quote
  #84  
07-21-2004, 11:44 PM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
By the way, you can use custom matrices with DVD-RB and CCE using RBOpt. You can find it in the DVD Rebuilder forum on Doom9.
Reply With Quote
  #85  
07-22-2004, 01:09 AM
Boulder Boulder is offline
Free Member
 
Join Date: Sep 2002
Location: Lahti, Finland
Posts: 1,652
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by SansGrip
I used just FluxSmooth for both, though I'm going to try UnDot and Deen on my next encode.
A bit OT, but did you notice that sh0dan fixed a memory leak in FluxSmooth? He also optimized it a bit; his version is 1.01.

You might also want to try the RemoveGrain + RemoveDirt combination; both can be found on the Doom9 forum. I prefer them to UnDot and Deen.
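For reference, my usual opening looks something like this. Only a sketch: the paths and values are placeholders, and the RemoveDirt() call assumes the wrapper function from the RemoveDirt.avs script in that Doom9 thread (the plugin DLL has to be loaded as well).

Code:
# Sketch of a RemoveGrain + RemoveDirt chain (paths and values are placeholders)
LoadPlugin("C:\filters\DGDecode.dll")
LoadPlugin("C:\filters\RemoveGrain.dll")
LoadPlugin("C:\filters\RemoveDirt.dll")
Import("C:\filters\RemoveDirt.avs")       # wrapper function posted on Doom9
MPEG2Source("D:\rip\movie.d2v")
RemoveGrain(mode=2)                        # mild spatial grain removal
RemoveDirt(limit=6)                        # temporal spot/dirt removal; limit is just an example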
Reply With Quote
  #86  
07-22-2004, 03:21 AM
Dialhot Dialhot is offline
Free Member
 
Join Date: May 2003
Posts: 10,463
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by SansGrip
As for the subs, that's obviously a bug. You should post a bug report.
Actually, the bug has already been corrected, but there's a new release almost every day and I downloaded it one day too early.

As for the awful picture: it's all due to the cell-by-cell process, but we already discussed that. The quality changes a lot between chapters depending on their length, and the overall quality is awful. I'm not saying that every point in the movie was that ugly.
And it seems you used a patch that I didn't have, so you encoded with the Notch matrix where I used the standard one. That makes a big difference.
Reply With Quote
  #87  
07-22-2004, 09:40 AM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by Boulder
A bit OT, but did you notice that sh0dan fixed a memory leak in FluxSmooth? He also optimized it a bit; his version is 1.01.
Yup, I'm using his version.

Quote:
You might also want to try the RemoveGrain + RemoveDirt combination; both can be found on the Doom9 forum. I prefer them to UnDot and Deen.
Thanks for the tip. I'll give them a shot.

I re-encoded The Shield last night: took out the director's commentary (unfortunately -- I find those interesting, but it was 200 MB I needed for the video), resized to 704x480, added the 8-pixel overscan borders (not sure if this does any good, since 8 isn't a multiple of 16), used UnDot and Deen, and increased the VBR bias (maybe I shouldn't have) and the quality precedence. It came out much nicer than my last encode. Perhaps a little soft, but a lot less mosquito noise and hardly any macroblocks. Not bad for three hours of video.
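In script form, last night's chain was roughly the following. Again just a sketch: the paths are placeholders and the Deen arguments are example values, not a recommendation.

Code:
# Sketch of the UnDot + Deen chain from last night (placeholder paths and values)
LoadPlugin("C:\filters\DGDecode.dll")
LoadPlugin("C:\filters\UnDot.dll")
LoadPlugin("C:\filters\Deen.dll")
MPEG2Source("D:\rip\shield_vts01.d2v")
UnDot()                                    # removes single-pixel dots
Deen("a2d", 2, 7, 9)                       # mode, radius, luma/chroma thresholds (example values)
LanczosResize(704, 480)
AddBorders(8, 0, 8, 0)                     # the 8-pixel overscan bars I mentioned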
Reply With Quote
  #88  
07-22-2004, 09:45 AM
Dialhot Dialhot is offline
Free Member
 
Join Date: May 2003
Posts: 10,463
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by SansGrip
but a lot less mosquito noise and hardly any macroblocks. Not bad for three hours of video.
Kwag pointed out that CCE adds noise during encoding, like your "BlockBuster(noise)" filter, and that is why CCE output is almost free of blocks. But mosquito noise increases a lot once the quality is decreased; for my part, above Q=30 I can't keep the encoded video.
But perhaps you encoded in 2-pass?
Reply With Quote
  #89  
07-22-2004, 09:54 AM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by Dialhot
As for the awful picture: it's all due to the cell-by-cell process, but we already discussed that.
I honestly don't believe that doing it cell-by-cell causes that much of a quality drop. Obviously it would be optimal if the encoder could run over the entire movie, but DVD-RB assigns bitrates to each cell in proportion to the bitrates in the original movie. For example, in the encode I did last night it automatically assigned a bitrate of about 3,500 kbps to scenes with fast camera motion, and only 1,000 kbps to the credits. The average came out to ~3,100 kbps.

I've re-encoded two different discs with it so far. The movie (The Station Agent, R1) looks outstanding, indistinguishable from the original. The Shield (R1) looks really, really good since I re-encoded it last night with different parameters, slightly more filtering, and at 704x480 instead of 720x480.

I'm not sure it's wise to dismiss a tool after one test. If we'd done that when designing the filesize prediction method it simply wouldn't exist, because my first 50 or 100 tests were failures. Kwag, I know, ran many more than I did. It took a long time before we came up with something workable.

Trying something once and saying "it doesn't work" without tweaking and testing and retesting and making AVS scripts to do side-by-side compares and taking screenshots and generating 700-post threads is not the KVCD way.

Quote:
And it seems you used a patch that I didn't have, so you encoded with the Notch matrix where I used the standard one. That makes a big difference.
Nope, I used the standard MPEG-2 matrix. I've yet to try the Notch with CCE, though I will next time I need to use a low bitrate.

RBOpt isn't a patch -- it's a standalone program that you can run between the "Prepare" and "Encode" phases in order to adjust bitrates, AVS scripts and CCE parameters. You can alter the bitrate for an entire VTS or right down to the individual cell level. That way if DVD-RB doesn't automatically reduce the bitrate on credits (because the authors of the source DVD didn't) you can do it manually. Or you can pull down the bitrate on cells you know contain only low-motion material, up it on high-action cells, pull it way down on studio logos if you haven't already stripped them out, etc. It's a fantastic tool for fine-tuning the bitrate allocation and I think you'd find it very useful in obtaining great results from DVD-RB...
Reply With Quote
  #90  
07-22-2004, 09:58 AM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by Dialhot
Kwag pointed out that CCE adds noise during encoding, like your "BlockBuster(noise)" filter, and that is why CCE output is almost free of blocks.
I'm not sure it adds noise, but with the default settings it certainly does emphasise the high frequencies. I've not tested much so far but it seems you can adjust that with the "quality precedence" setting -- higher values seem to emphasise the high frequencies less than lower values.

Quote:
But perhaps you encoded in 2-pass?
Yup, always. DVD-RB does two passes by default (an initial pass to create the .vaf file, then another to make the .m2v). I've read that three passes can sometimes improve the quality, especially at low bitrates, but that beyond that one sees little difference. I've not tested this personally yet.
Reply With Quote
  #91  
07-22-2004, 11:11 AM
Boulder Boulder is offline
Free Member
 
Join Date: Sep 2002
Location: Lahti, Finland
Posts: 1,652
Thanks: 0
Thanked 0 Times in 0 Posts
OPV (one-pass VBR), dear Sir, OPV in CCE

-No More Multipass
Reply With Quote
  #92  
07-22-2004, 12:18 PM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by Boulder
OPV, dear Sir, OPV in CCE
Has JDobbs fixed the sizing problem yet? For some reason he has been insisting on coming up with his own formula, when file size prediction has been tested ad nauseam everywhere, with established methods getting it down to 0.5-2% accuracy...

And is the quality as good as 2-pass?
Reply With Quote
  #93  
07-22-2004, 12:36 PM
Dialhot Dialhot is offline
Free Member
 
Join Date: May 2003
Posts: 10,463
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by SansGrip
I'm not sure it's wise to dismiss a tool after one test.
Correct in the average situation, but not really in mine. Let's say I have enough background to "feel" the problems even before they arrive. As soon as I saw that the encoding was cell-by-cell, I knew what the problems would be. I just had to open two encoded parts to see ALL THE PROBLEMS that I suspected. For me that's enough to conclude that the process is bad. And you can't fix a process by tweaking it.
I'm not sure that's a feeling that is easy to explain. It's like cooking a recipe knowing you won't like it because it involves garlic and you don't like garlic: you only need one bite to know whether the taste pleases you or not. Here it's the same thing, except I can't just drop the garlic from the recipe.
Quote:
If we'd done that when designing the filesize prediction method it simply wouldn't exist, because my first 50 or 100 tests were failures.
Yeah, but you did it with the feeling that it should work. If you had had the feeling that it couldn't work, you probably wouldn't have gone past the third attempt.
Quote:
I think you'd find it very useful in obtaining great results from DVD-RB...
I think that's probably the tool I was missing; you're right.
Reply With Quote
  #94  
07-22-2004, 12:51 PM
Boulder Boulder is offline
Free Member
 
Join Date: Sep 2002
Location: Lahti, Finland
Posts: 1,652
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by SansGrip
Has JDobbs fixed the sizing problem yet? For some reason he has been insisting on coming up with his own formula, when file size prediction has been tested ad nauseam everywhere, with established methods getting it down to 0.5-2% accuracy...
I don't know; I use QCCE as the tool for predicting the Q value and I do all my encodes manually. A 3% sample size and a 0.5% error margin have proved to be a good combination.

Quote:
And is the quality as good as 2-pass?
The nice thing is that it's better. It's the same as CQ in TMPGEnc: constant quality with a variable bitrate. The even nicer thing is that the .vaf file is created at the same time, and you can use it to run a second pass, so you'll always hit the target if the Q-mode encode was off by too much! This is basically what the D2SRoBa method is all about.
Reply With Quote
  #95  
07-22-2004, 02:20 PM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by Dialhot
As soon as I saw that the encoding was cell-by-cell, I knew what the problems would be.
When DVDs are professionally mastered they are more often than not encoded a cell at a time. The compressionist will run tests over the entire material, it is true, but after that he builds up a map of which cells will require which amount of compression. The material is, usually, then encoded piece by piece.

Since the professional (and very highly paid) compressionist has already gone to this trouble for us, it makes sense in most cases to use precisely the method that DVD-RB uses: encode cell by cell and reduce the bitrate proportionally to the existing compression structure. That way if the compressionist originally decided to use half the average bitrate on a particular segment of the movie, so will DVD-RB. If he decides a segment needs a much higher bitrate, DVD-RB will use one too. Automatically.
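To put rough numbers on it (made-up figures, just for illustration): if the studio's encode averaged 6,000 kbps over the whole movie but gave a quiet dialogue cell only 3,000 kbps, and your re-encode has to average about 3,100 kbps to fit, DVD-RB will hand that cell roughly 1,550 kbps, keeping the same ratio to the average that the compressionist originally chose.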

Of course, this assumes the compressionist knew what he was doing. Lower-budget productions often can't afford the very best compressionists, and those encodes can often be optimised further. Luckily we can use RBOpt to redistribute the bitrate accordingly.

Quote:
I just had to open two encoded parts to see ALL THE PROBLEMS that I suspected.
If you got such bad results I'd say some setting must've been wrong somewhere. My experience tells me with a little tweaking it's possible to get exceptional quality from cell-by-cell 2-pass.
Reply With Quote
  #96  
07-22-2004, 02:24 PM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by Boulder
I use QCCE as the tool for predicting the Q value and I do all my encodes manually.
Do you keep the menus etc.?

Quote:
A 3% sample size and a 0.5% error margin have proved to be a good combination.
3% makes sense to me, though I still think it would be best to tie it to the GOP length (I think each sample should perhaps be two GOP lengths in, er, length). JDobbs started with 0.1%. No wonder he had problems.
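To make that concrete, the sampling itself is the usual SelectRangeEvery() trick from the prediction method; something like this (a sketch: the 24-frame GOP and the exact numbers are only one way to end up with a ~3% sample made of two-GOP chunks).

Code:
# ~3% sample built from chunks two GOPs long (24-frame GOP assumed for the arithmetic)
MPEG2Source("D:\rip\movie.d2v")
SelectRangeEvery(every=1600, length=48)    # 48/1600 = 3% of the frames, in 48-frame chunks
# Encode this clip, measure the file size, then scale up:
#   predicted full size ~ sample size / 0.03   (plus audio and muxing overhead)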

Quote:
The nice thing is that it's better.
Sounds almost too good to be true. I suppose if it ever does overshoot the target size, one could use DVD Shrink to pull it down a couple of percent...

Quote:
It's the same as CQ in TMPGEnc: constant quality with a variable bitrate.
I know I ran hundreds and hundreds of tests with TMPGEnc way back when, but I've forgotten it all now. Maybe I should read that mammoth file size prediction thread again...

Quote:
The even nicer thing is that the .vaf file is created at the same time, and you can use it to run a second pass, so you'll always hit the target if the Q-mode encode was off by too much!
I smell a tool coming on...
Reply With Quote
  #97  
07-22-2004, 02:30 PM
Dialhot Dialhot is offline
Free Member
 
Join Date: May 2003
Posts: 10,463
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by SansGrip
If you got such bad results I'd say some setting must've been wrong somewhere. My experience tells me with a little tweaking it's possible to get exceptional quality from cell-by-cell 2-pass.
Reading you, I wonder if the source I chose for my test was the best candidate: the bitrate used for each cell didn't differ much from cell to cell, maybe a 5% difference between the lowest and the highest.
I encoded Titanic because it had two things I wanted: a 3-hour length and less than 500 MB of extras (mainly animated menus). I didn't want to use a DVD where half the space is used (wasted) on extras.
I have LOTR: The Return of the King on my "to-do" list. Perhaps it will be a better choice. We'll see.
Reply With Quote
  #98  
07-28-2004, 07:16 PM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
By the way, I just encoded (with RBOpt "SG") a sample of the second volume of the TV show I recently did with DVD-RB and CCE, but this time I used the Notch matrix... I've not watched it on the TV yet, but on the monitor the difference is clear: the Notch matrix is much better at lower bitrates.
Reply With Quote
  #99  
07-28-2004, 07:22 PM
Dialhot Dialhot is offline
Free Member
 
Join Date: May 2003
Posts: 10,463
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by SansGrip
The Notch matrix is much better at lower bitrates.
Set it as the default matrix in RBOpt before you send it back to Doom9.
Reply With Quote
  #100  
07-28-2004, 07:26 PM
SansGrip SansGrip is offline
Free Member
 
Join Date: Nov 2002
Location: Ontario, Canada
Posts: 1,135
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by Dialhot
Quote:
Originally Posted by SansGrip
The Notch matrix is much better at lower bitrates.
Set it as the default matrix in RBOpt before you send it back to Doom9.


I will certainly suggest to robot1 that it be included in the package...
Reply With Quote