digitalFAQ.com Forums [Archives] (http://www.digitalfaq.com/archives/)
-   Computers (http://www.digitalfaq.com/archives/computers/)
-   -   Linux: Building Encoding Cluster (http://www.digitalfaq.com/archives/computers/9090-linux-building-encoding.html)

ak47 04-15-2004 10:29 PM

don't give up
http://howto.ipng.be/openMosixWiki/i...ork%20smoothly

venkatk 04-16-2004 05:25 AM

Quote:

Originally Posted by ak47

As MEncoder does all filtering and encoding in one single process, it can't be migrated across cluster nodes. I am exploring other encoders that can work in a cluster. Right now I am evaluating Transcode and FFmpeg.

I am not backing off from setting up the cluster. I was waiting for motherboards and RAM for the two cluster nodes. I have already received the motherboards. Over here, motherboards like the Intel 865GBFL are not well stocked, as the volumes are not there.

venkatk 04-19-2004 12:04 AM

Hi,

I have one quick question.

Can we encode with MEncoder in multiple chunks and later merge them into a single m2v? I think that's the only way to use MEncoder in an openMosix cluster environment.

kwag 04-19-2004 12:27 AM

Quote:

Originally Posted by venkatk
Hi,

I have one quick question.

Can we encode with MEncoder in multiple chunks and later merge them into a single m2v? I think that's the only way to use MEncoder in an openMosix cluster environment.

Yes, but you should keep a few things in mind.
If you plan to encode a set of frame ranges, one per CPU, you have to consider things like scene detection, so that the last frame of each segment is the final frame just before a scene change, and the next frame, which starts a completely different scene, becomes the start frame for the next CPU. This way, you'll maximize compression per segment.
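
Roughly like this, as an untested Python sketch. The scene-change frame list would come from whatever detection pass you run first; the frame counts and names here are made up:

    # Sketch: split a movie into per-CPU frame ranges, cutting only at
    # scene changes so each segment ends right before a new scene begins.
    # "scene_changes" is assumed to come from a separate detection pass.

    def split_on_scenes(total_frames, scene_changes, num_cpus):
        """Return (start, end) frame ranges, one per CPU, with each cut
        placed on the scene-change frame nearest the ideal cut point."""
        ideal = total_frames / float(num_cpus)   # target segment length
        cuts = [0]
        for i in range(1, num_cpus):
            target = i * ideal
            # pick the scene change closest to the ideal cut point
            cut = min(scene_changes, key=lambda f: abs(f - target))
            if cut > cuts[-1]:
                cuts.append(cut)
        cuts.append(total_frames)
        # each segment ends on the frame just before the next scene starts
        return [(cuts[i], cuts[i + 1] - 1) for i in range(len(cuts) - 1)]

    # Example: 200000 frames, scene changes found earlier, 4 cluster nodes
    ranges = split_on_scenes(200000, [4321, 49876, 50122, 99870, 148765], 4)
    for start, end in ranges:
        print("encode frames %d-%d on one node" % (start, end))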
Hope you understand what I mean :roll:

-kwag

venkatk 04-19-2004 12:43 AM

Quote:

Originally Posted by kwag
Yes, but you should keep a few things in mind.
If you plan to encode a set of frame ranges, one per CPU, you have to consider things like scene detection, so that the last frame of each segment is the final frame just before a scene change, and the next frame, which starts a completely different scene, becomes the start frame for the next CPU. This way, you'll maximize compression per segment.
Hope you understand what I mean :roll:

-kwag

Of course, I knew there were some issues involved, but I wanted to hear it from the experts.

What if we split the movie into different chunks based on chapters?

Here is my plan: write a small script that takes the VOB(s) and a chapter list as arguments and spawns multiple encodes, depending on the number of nodes available. Systems like Condor can do the scheduling for you :)
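
Something along these lines (a rough, untested sketch; the DVD title number, chapter count, and filenames are placeholders, and blindly cat-ing the .m2v chunks together assumes the per-chunk sequence-end codes won't upset your muxer):

    # Sketch: spawn one mencoder job per DVD chapter, then join the
    # resulting MPEG-2 elementary streams in order. Under openMosix,
    # each spawned process can migrate to an idle node on its own.
    import subprocess

    CHAPTERS = 8                     # assumed chapter count for this title
    jobs = []
    for ch in range(1, CHAPTERS + 1):
        out = "chunk%02d.m2v" % ch
        cmd = ["mencoder", "dvd://1",
               "-chapter", "%d-%d" % (ch, ch),   # encode just this chapter
               "-nosound",
               "-ovc", "lavc", "-lavcopts", "vcodec=mpeg2video",
               "-of", "rawvideo",                # raw MPEG-2 video stream
               "-o", out]
        jobs.append((out, subprocess.Popen(cmd)))  # all run in parallel

    # wait for each encode in order, appending its chunk to the final file
    with open("movie.m2v", "wb") as merged:
        for out, proc in jobs:
            proc.wait()
            with open(out, "rb") as chunk:
                merged.write(chunk.read())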

