What's Next for KVCD as a Format? - Page 2 - digitalFAQ.com Forums [Archives]
  #21  
04-06-2003, 09:05 PM
ovg64 ovg64 is offline
Free Member
 
Join Date: Jan 2003
Location: Puerto Rico
Posts: 423
Thanks: 0
Thanked 0 Times in 0 Posts
I understand. I myself would like to move up from my 1.5 GHz Athlon XP to a dandy 3 GHz Pentium 4. I know TMPGEnc will benefit from it, and if a KVCDx3 2-hr. encode takes me 7-8 hrs., with the Pentium it should take about half as long to encode the same......
  #22  
04-06-2003, 10:54 PM
Reno Reno is offline
Free Member
 
Join Date: Nov 2002
Location: Sunny California
Posts: 242
Thanks: 0
Thanked 0 Times in 0 Posts
I'd really like to see a renewed focus on compatibility. The single biggest complaint I hear about this format is usually "the video looks awesome, but the sound doesn't match up right!" On my player, the KVCD standard works great, but on my friends' players it's a crapshoot.

Maybe a little more research on the multiplexing side would knock that out...
__________________
"There are two rules for ultimate success in life.
1. Never tell everything you know."
  #23  
04-09-2003, 12:13 PM
MrTibs MrTibs is offline
Free Member
 
Join Date: Aug 2002
Location: Canada
Posts: 200
Thanks: 0
Thanked 0 Times in 0 Posts
Personally, I think that a variable CQ/bitrate builder would move KVCD to "the next level".

Create a tool to analyze the video sequence, break it up into different sections to encode at different max bitrates with different CQ levels, and then re-join them at the end. Perhaps I am talking about a better encoder that does a much better job at CQ than the existing encoders.

For example, the "new" encoder could respond to Avisynth filter settings. Filters could be created to change the encoder settings dynamically (by scenes or video segments), or to base dynamic CQ on the presence of mosquito noise or macroblocks. This would give us amazing control over the encoding process, with the ability to put our bits where we want them.
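The section-splitting idea above can be sketched roughly in Python. This is only an illustration, not any existing tool: `plan_segments`, the difficulty threshold, and the CQ values are all made up for the example; a real builder would get its per-frame difficulty scores from an analysis pass over the source.

```python
def plan_segments(difficulty, threshold=0.5, cq_easy=75, cq_hard=60):
    """Split a per-frame difficulty trace into runs of easy/hard frames
    and assign a CQ level to each run (all values are illustrative).
    Returns (first_frame, last_frame, cq) tuples."""
    segments = []
    start = 0
    hard = difficulty[0] > threshold
    for i, d in enumerate(difficulty[1:], 1):
        if (d > threshold) != hard:
            # Difficulty crossed the threshold: close the current run.
            segments.append((start, i - 1, cq_hard if hard else cq_easy))
            start, hard = i, d > threshold
    segments.append((start, len(difficulty) - 1, cq_hard if hard else cq_easy))
    return segments

# Five frames: two easy, two hard, one easy again -> three segments.
print(plan_segments([0.2, 0.3, 0.8, 0.9, 0.1]))
# → [(0, 1, 75), (2, 3, 60), (4, 4, 75)]
```

Each segment would then be encoded with its own CQ/max-bitrate settings and the pieces re-joined, as the post describes.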
  #24  
04-09-2003, 01:40 PM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
Hi MrTibs,

What you are describing would be to use the force picture type setting in TMPEG. A couple of months ago, I had an idea of analyzing the source to be encoded, similar to what Bitrate Viewer does, and then applying a bitrate ratio (compressibility of the source) to the picture type settings in TMPEG. Basically, I thought of making a program that would create a text file in TMPEG's frame-number list format (*.txt) that could be loaded into TMPEG. But because there's no constant relationship between the source's compression and the compression that TMPEG would use on a given film, I discarded the idea. It was a good dream.

-kwag
  #25  
04-09-2003, 04:01 PM
MrTibs MrTibs is offline
Free Member
 
Join Date: Aug 2002
Location: Canada
Posts: 200
Thanks: 0
Thanked 0 Times in 0 Posts
@Kwag

Thanks for the reply but I have some questions.

What if I modified the Blockbuster filter to spit out the .txt file you suggested? I'm sure that you understand the issues better, but if Blockbuster's detection is able to determine areas that are hard to encode (macroblocks), perhaps I could use that engine to control the GOP structure to be produced. This method may be rougher than your idea, but perhaps it would go a long way toward controlling compressibility. As I posted in another area, I have the greatest challenges with dimly lit scenes. If Blockbuster can be used to add noise, perhaps it could be used to detect problem frames.

What do you think? (I'll hack away at Blockbuster as a test if you give me an idea of how the GOP should change for better quality in dark scenes.)
  #26  
04-09-2003, 05:03 PM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by MrTibs
@Kwag

Thanks for the reply but I have some questions.

What if I modified the Blockbuster filter to spit out the .txt file you suggested? I'm sure that you understand the issues better, but if Blockbuster's detection is able to determine areas that are hard to encode (macroblocks), perhaps I could use that engine to control the GOP structure to be produced. This method may be rougher than your idea, but perhaps it would go a long way toward controlling compressibility. As I posted in another area, I have the greatest challenges with dimly lit scenes. If Blockbuster can be used to add noise, perhaps it could be used to detect problem frames.

What do you think? (I'll hack away at Blockbuster as a test if you give me an idea of how the GOP should change for better quality in dark scenes.)
Hi MrTibs,

The idea is great, but still, the problem is that there's no relationship between the source's compression and the compression that TMPEG will be able to achieve on that source. I hope you understand what I mean. You could hack Blockbuster's algorithm (or use it) to determine which frames need more or less bitrate, but it wouldn't be a 1:1 relation to what TMPEG is going to do. However, there is something we could "possibly" try! If we know in advance the compression of the source, or better yet, if we dynamically create a table of high/low bitrates of the source, we could then map this table to a corresponding bitrate table for I, B and P frames, and try to hit the "wanted" average bitrate as given by MovieStacker. This would basically be similar to 2-pass VBR, but we would be making an ultra-fast first pass, like Bitrate Viewer, which can read the complete source in about a minute or so. So after we parse the source and create a bitrate map, we know how to export that data into TMPEG's force picture type .txt format! How does that sound? Maybe just a little standalone program that reads the source file, a la Bitrate Viewer, and creates the .txt file. Then it's just a matter of loading the .avs into TMPEG as usual, going into the GOP settings screen in TMPEG, and loading the exported .txt file created with this utility. No more file prediction, because the .txt file will already have the proper average sum for the complete processed file, so we know the target file size will be bullseye.
I'm all for it. I just need to figure out the fastest way to read the source material on a frame-by-frame basis and create a bitrate map "on-the-fly" as the file is processed (just like Bitrate Viewer). I'll give you an example: say we are going to encode with a MIN of 300 and a MAX of 2,500. After we parse the complete material, say a DVD (via an .avs), we normalize the bitrates. If the MIN bitrate of the material is 4,000 Kbps and the MAX is 8,000 Kbps, we'll "map" that range to 300 and 2,500. Because we know the bitrate for every frame, we can interpolate the 4,000-8,000 range to 300-2,500 and maintain the average bitrate wanted by MovieStacker (or any VBR calculator). How does that sound?? Any ideas??
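The interpolation step kwag describes can be sketched in a few lines of Python. A rough illustration only: `map_bitrates` is a made-up name, and the optional average correction is a simple rescale-and-clamp, not necessarily what a finished tool would do.

```python
def map_bitrates(src_kbps, out_min=300, out_max=2500, target_avg=None):
    """Linearly map per-frame source bitrates into [out_min, out_max]."""
    lo, hi = min(src_kbps), max(src_kbps)
    span = hi - lo or 1  # avoid division by zero for a constant-bitrate source
    mapped = [out_min + (b - lo) * (out_max - out_min) / span for b in src_kbps]
    if target_avg is not None:
        # Rescale so the mean hits the wanted average, clamping to the range.
        # (Clamping means the final average is only approximate.)
        scale = target_avg / (sum(mapped) / len(mapped))
        mapped = [min(out_max, max(out_min, m * scale)) for m in mapped]
    return mapped

# kwag's example: source frames spanning 4,000-8,000 Kbps map onto 300-2,500.
print(map_bitrates([4000, 5000, 8000, 6000]))
# → [300.0, 850.0, 2500.0, 1400.0]
```

The endpoints of the source range land exactly on MIN and MAX, and everything between is interpolated, which is the "normalize" step before exporting a per-frame table.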

-kwag
  #27  
04-10-2003, 06:25 AM
GFR GFR is offline
Free Member
 
Join Date: May 2002
Posts: 438
Thanks: 0
Thanked 0 Times in 0 Posts
I think you've got something like that in some 2-pass DivX encoding tools: after you run the first pass, you end up with a "log" that you can use to manually fine-tune the bitrate allocation.... I think it's the "advanced" binary of XviD... I'll try to find out exactly what I'm talking about. We could use the 1st-pass DivX log as a guide to the bit allocation in our KVCD encodes.
  #28  
04-10-2003, 08:26 AM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by GFR
I think you've got something like that in some 2-pass DivX encoding tools: after you run the first pass, you end up with a "log" that you can use to manually fine-tune the bitrate allocation.... I think it's the "advanced" binary of XviD... I'll try to find out exactly what I'm talking about. We could use the 1st-pass DivX log as a guide to the bit allocation in our KVCD encodes.
Hi GFR,

If the data in that log is detailed for every frame of the source, then it's a piece of cake to write a small utility program to import it, analyze/normalize it, and export it in TMPEG's format.

-kwag
  #29  
04-10-2003, 08:47 AM
bman bman is offline
Free Member
 
Join Date: Apr 2002
Posts: 356
Thanks: 0
Thanked 0 Times in 0 Posts
@ KWAG !
What about the LOG file that TMPGEnc produces during encoding? It has all the needed info: bitrates, compressibility, avg. bitrate!
Maybe we can first run at a low resolution like 352x240 (or even lower) just to get the LOG file as fast as possible, and after that run the real encoding at a higher resolution and manually adjust all the wanted values???
It's possible
bman
  #30  
04-10-2003, 08:56 AM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by bman
@ KWAG !
What about the LOG file that TMPGEnc produces during encoding? It has all the needed info: bitrates, compressibility, avg. bitrate!
Yes it does, but we can't use that, because the information it contains is the output of TMPEG. We need the input information from the source.

-kwag
  #31  
04-10-2003, 10:16 AM
MrTibs MrTibs is offline
Free Member
 
Join Date: Aug 2002
Location: Canada
Posts: 200
Thanks: 0
Thanked 0 Times in 0 Posts
It appears that we do know where KVCD will go next...

OK, this is where I embarrass myself. In earlier discussions, Kwag suggested we didn't know the compressibility of the source frames. In order to find the compressibility of the frames, will we need to run each frame through a DCT-Q-IDCT filter, then make a bitrate log for each frame?

Clearly, I don't understand the relationship between the Q value and the GOP structure needed to maintain a certain CQ. Perhaps one of you out there could sketch out how the Q values and GOP structures are adjusted while the encoders are compressing the source.

Please forgive my ignorance; I understand the theory of MPEG-1, but the CQ/VBR process is still a mystery.
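The DCT-Q-IDCT idea can be illustrated without a full encoder: transform an 8x8 block, quantize it, and count how many coefficients survive. A toy sketch only — the naive DCT below is far too slow for real video, and the flat quantizer `q` is a stand-in for a real Q matrix; both function names are invented for this example.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of an 8x8 block (written for clarity, not speed)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            cu = math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N)
            cv = math.sqrt(1 / N) if v == 0 else math.sqrt(2 / N)
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = cu * cv * s
    return out

def compressibility(block, q=16):
    """Fraction of DCT coefficients that quantize to zero: a rough
    per-block 'how easy is this to compress' score (1.0 = easiest)."""
    coeffs = dct2(block)
    zeros = sum(1 for row in coeffs for c in row if round(c / q) == 0)
    return zeros / 64

# A flat block keeps only its DC coefficient: 63 of 64 quantize to zero.
flat = [[128] * 8 for _ in range(8)]
print(compressibility(flat))
# → 0.984375
```

Running this per block over a frame, and averaging, would give the kind of per-frame "hardness" log the question asks about; a dark noisy block would keep many more coefficients and score much lower.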
  #32  
04-10-2003, 05:26 PM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by MrTibs
In earlier discussions, Kwag suggested we didn't know the compressibility of the source frames. In order to find the compressibility of the frames, will we need to run each frame through a DCT-Q-IDCT filter, then make a bitrate log for each frame?
Hi MrTibs,

Actually, we can tell the compressibility of the source. The problem is that we can't tell the compressibility of the MPEG that TMPEG will create.
That is why I suggested a "scan" of bitrate per frame on the source, then scaling that to MIN/MAX/AVG and feeding TMPEG the data directly as picture types. This way, TMPEG will encode each frame at the specified bitrate, which it does provide for in the advanced/force picture type settings. I'm currently hacking into the DVD2AVI source, as I believe it's the ideal program to generate the raw frame/bitrate data. This way, after we finish saving DVD2AVI's project file, we'll have the .d2v, the AC3 or MP2, and a "someName.txt" file that we can process with a "soon-to-be-made" program that will analyze the data, normalize it, guarantee that the average bitrate will be the one we tell it to be, and generate a TMPEG .txt file ready to be processed. I already have the frame numbers identified in DVD2AVI, and I'm looking at the source to find out where/if the bitrate information is available on a per-frame basis. If anyone (sh0dan, canman, sansgrip) or any developer here has worked a lot with the mpeg2dec sources (the DVD2AVI sources), I'd appreciate help identifying the module, or the proper call, to get the bitrate information for the frame currently being processed.
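The export step of that pipeline could look something like this. Important caveat: the real layout of TMPEG's force picture type .txt file is not reproduced here — `export_tmpeg_txt` and its tab-separated frame/bitrate lines are an invented stand-in showing only the shape of such a utility.

```python
import os
import tempfile

def export_tmpeg_txt(frame_kbps, path):
    """Write one line per frame: frame number and assigned bitrate.
    NOTE: a stand-in format for illustration -- NOT the actual layout
    of TMPEG's force picture type .txt file."""
    with open(path, "w") as f:
        for n, kbps in enumerate(frame_kbps):
            f.write(f"{n}\t{int(kbps)}\n")

# Demo: dump normalized bitrates for three frames to a temp file.
path = os.path.join(tempfile.gettempdir(), "bitrate_map.txt")
export_tmpeg_txt([300, 850, 2500], path)
print(open(path).read())
```

The "soon-to-be-made" analyzer would sit between the DVD2AVI scan and this exporter: read the raw per-frame data, normalize it to the wanted range and average, then write the file TMPEG loads.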

-kwag
  #33  
04-11-2003, 07:29 AM
GFR GFR is offline
Free Member
 
Join Date: May 2002
Posts: 438
Thanks: 0
Thanked 0 Times in 0 Posts
What I was talking about was manual "curve compression" in XviD. Maybe we can use a similar strategy in MPEG encoding???

http://www.animemusicvideos.org/guides/avtech/xvid.html
  #34  
04-12-2003, 07:25 AM
Latexxx Latexxx is offline
Free Member
 
Join Date: Jun 2002
Location: Tampere, Finland
Posts: 65
Thanks: 0
Thanked 0 Times in 0 Posts
One option would be reading the bitrate/Q/etc. from the original .vob files. The only problem is that filters change the compressibility. But maybe what the filters do to compressibility could be predicted. Or is that already what you mean?
  #35  
04-12-2003, 11:20 AM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
Quote:
Originally Posted by Latexxx
Or is that already what you mean?
Yes, that's exactly what I mean.
Then we scale the VOB bitrate to the 300-2,300 range we use on KVCDs, and normalize the data so the total average will be the wanted average.
When this is fed to TMPEG, it will encode with strict per-frame parameters, using the data just as if it had done a 2-pass.

-kwag
  #36  
04-16-2003, 02:38 PM
MrTibs MrTibs is offline
Free Member
 
Join Date: Aug 2002
Location: Canada
Posts: 200
Thanks: 0
Thanked 0 Times in 0 Posts
Hey Kwag, how's your project going?

I realized after the other posts that you and I were talking about different things. I am mostly working with uncompressed HuffYUV sources, so some kind of first-pass scan is required.

To test the idea (very roughly), I did an encode at CQ@100, then re-encoded in TMPGEnc (2-pass) using the values I got out of Bitrate Viewer. The result was a smaller file size and better quality than encodes at lower CQ levels, but still not as good as the CQ@100. Of course, brightly lit frames looked excellent, while dimly lit frames still posed a problem. In fact, I've noticed that even with CQ@100, dimly lit frames don't look all that great. I'm still doing research, but I suspect that my resolution of 352x240 is the biggest factor.

This brings me to my question: has anyone found that different Q matrices produce better results (not considering compressibility) for different kinds of scenes (i.e. action, dimly lit, bright, static, skin tones)? I'm wondering about using multiple Q matrices in a single movie.
  #37  
04-16-2003, 02:58 PM
kwag kwag is offline
Free Member
 
Join Date: Apr 2002
Location: Puerto Rico, USA
Posts: 13,537
Thanks: 0
Thanked 0 Times in 0 Posts
Hi MrTibs,

Changing the matrix at the GOP level is possible with TMPEG, also selectable on the force picture type screen. I have not had more time to play with the DVD2AVI source code, and I haven't found where to pick out the bitrate at the frame level. I already have hooks to write the information to a "stats" file, but I can't find the functions to report the bitrate of each frame. I need help from someone who has worked more with these sources, because I haven't! And it's been 3 days now that I've had to put it aside, because of work.
Hopefully this weekend I can fire up Visual Studio and give it a whirl again.

-kwag
  #38  
04-20-2003, 05:21 AM
baker baker is offline
Free Member
 
Join Date: May 2002
Posts: 26
Thanks: 0
Thanked 0 Times in 0 Posts
First off, can I say the homepage has improved greatly since I was last here!!

Next up, I know kwag's going to get very, very angry when I start to mention CCE, as he's actually got a wee personal vendandatta (how close was that spelling ) going against it.

But really, with all these filters and all, I have managed to mess about with the CCE matrix and the GOP structure a bit. I don't understand MPEG matrices, so I can't play about with them until I find a good'un. However, I can increase the GOP structure, which has been interesting....

Also, if you're on a slow computer, don't complain; until recently I was on a 450 MHz!!

Baker
  #39  
04-20-2003, 06:07 AM
Jellygoose Jellygoose is offline
Free Member
 
Join Date: Jun 2002
Location: Germany
Posts: 1,288
Thanks: 0
Thanked 0 Times in 0 Posts
20 hours?? What CPU do you have? Mine takes about 8 hours maximum...
__________________
j3llyG0053
  #40  
04-20-2003, 09:30 AM
vhelp vhelp is offline
Free Member
 
Join Date: Jan 2003
Posts: 1,009
Thanks: 0
Thanked 0 Times in 0 Posts
And a good morning to you all

Who's still doing 20 hours??

...unless you've got some NR filtering going, I can't understand this.

On another note... what's next for KVCD??

For me, I'm working with my Canon ZR-10 DV cam to see how much quality I can get from an encode, with all the noise that DV has when you shoot footage in low light. I did a few while de-interlacing (working some more on de-interlacing DV). Also, I'm using the KDVD template on my DV source. Now, encoding "low light" source material is proving to be a pain. You can get by with 352x480, but I think that either the Q will have to be raised, or the bitrate. I'm working on this now. Everyone knows that the ZR-10 is not the best under low-light conditions, but when this is all you've got, it's better than nothing.

Filtering a DV source is my last resort, and so far I haven't had to, but I might try pixelDut() out today.

Kwag, curious... do you have a DV cam yet??
Can't say I remember you talking about this. But it's fun working with one. Now, depending on my footage and light source, it can sometimes remind me of DVD quality (of course, I'm awake).

Anyone here using KVCD on their DV footage?? I'm using KDVD for obvious reasons.

I'll post whatever clips I can of my progress when I get the chance, though.
SAMPLE encodes: KDVD, KVCD, CVD, SVCD, VCD and "x"... in case anybody's interested.

So, I can see what's next for KVCD?? Perhaps being adjusted or tuned somehow for DV sources.

Everyone have a good day
-vhelp