  #21  
11-22-2021, 06:07 AM
lollo2 lollo2 is offline
Free Member
 
Join Date: Mar 2013
Location: Italy
Posts: 673
Thanked 189 Times in 163 Posts
Quote:
But whilst trying to find RemoveDirtSMC() ... I came across SpotLess
Yes, SpotLess is a sort of RemoveDirtSMC evolution, and very effective. Be careful when removing small moving objects: some posts on the doom9 forum explain how to use adaptive masks to solve the problem. Difficult to implement, but very nice results!

TemporalDegrain2 is excellent at denoising/grain removal; it may not be fully suitable for your video, but with a high temporal radius (>3) and dfttest or KNLMeansCL post-processing it should clean up the "standard" noise a lot. I once experimented with a temporal radius of 16 in SMDegrain (a similar filter), and although really, really slow it was effective for defects where the solution was to "average" across a large number of frames.

Quote:
I didn't apply nnedi deinterlacing because I want to preserve as much detail as possible without any interpolation of the other fields.
The nnedi3 "deinterlacing" I proposed is lossless, meaning that it just builds progressive frames from the 2 fields; you then apply the "progressive" filters, and interlace back. No interpolation, no loss of detail.
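As a minimal sketch of that round trip (assuming a bottom-field-first source; the filtering step is a placeholder):

```avisynth
AssumeBFF()
nnedi3(field=-2)        # build one progressive frame per field (double rate)

# ... your "progressive" filtering here ...

# interlace back: keep only the original scanlines of each field
AssumeBFF()
SeparateFields()
SelectEvery(4, 0, 3)
Weave()
```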

Quote:
I tried LSFMod and CAS with MergeChroma but it seemed worse
Sharpening may not give a significant improvement to the look of your videos. Preset "slow" for LSFMod and the defaults for CAS are generally the best options, but you have to experiment a lot. It is really source-dependent.

Quote:
I also found your YT channel useful (assuming it is the same person).
That channel was built to share experiences and highlight common problems found in my workflow with some friends working on the same project of digitally converting old VHS/S-VHS TV series. It is somewhat repetitive, as my captures are very similar to each other.


Quote:
Strangely, large block sizes worked better (from 8 to 12 to 16 saw an improvement), but perhaps I am just not understanding the code correctly. ThSAD over 1100 made no difference, so I kept it as low as possible to prevent unwanted changes.
blocksize is a "static" parameter and, given the characteristics of your source, larger values should be better, because your defects cover large parts of the image. thSAD is used for the motion vectors (a parameter related to the temporal structure) and, again given your defects, it should not play a role here. Your findings look coherent to me.
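For context, here is roughly where those two parameters sit in a bare MVTools chain (a hedged sketch using the values discussed above, not a tuned script):

```avisynth
super = MSuper(last)
# blksize: spatial size of the blocks matched between frames ("static" parameter)
bv = MAnalyse(super, isb=true,  blksize=16)
fv = MAnalyse(super, isb=false, blksize=16)
# thSAD: blocks whose match error exceeds this threshold are left untouched
MDegrain1(super, bv, fv, thSAD=1100)
```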

And finally, let me say that your final result is not bad at all. Sure, with a lot of time and by trying many filters/parameters/steps you may improve it even further, but do not over-process, and stop once you are satisfied, otherwise it will never end
The following users thank lollo2 for this useful post: Winsordawson (11-22-2021)
  #22  
11-22-2021, 10:10 AM
lollo2 lollo2 is offline
Free Member
 
Join Date: Mar 2013
Location: Italy
Posts: 673
Thanked 189 Times in 163 Posts
Quote:
No interpolation, no loss of details.
Obviously I meant "no loss of detail" (there is interpolation).

A channel on S-VHS / VHS capture and AviSynth restoration https://bit.ly/3mHWbkN
  #23  
11-22-2021, 08:01 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
Thanks again. I will try the nnedi with double frame option and see if there is an improvement. You mentioned previously that I can only use KNLMeansCL on an old version of TemporalDegrain2. Do you know which version?

SpotLess has not produced any problems for me so far, perhaps because I chose a low threshold for when not to affect the block. But if it seems problematic I'll use a different strength on the sides. It surely works much faster than FixRipsP2!

You're right that one can go crazy trying to make it perfect, and I am of the "less is more" crowd. Do you have any suggestions for the glow off the woman's shoulder? MergeLuma removes it, but makes it an oil painting. Perhaps by playing with the aWarpSharp2 parameters? (I should add that my above script and videos make use of a ChromaShift(C=4, L=2) that I forgot to include here).

LSFMod and CAS perform the same function as aWarpSharp2 for MergeChroma/MergeLuma, right? It has been hard to find a proper explanation of the effect, but I assumed it was by sharpening the edges and then merging only those parts to the original. Learning from Doom9 is like going through a garbage can full of shredded notes.

I plan to upload a separate, deinterlaced version online, upscaled to HD so it gets a better bitrate by YouTube. Do you recommend that I keep my same script (plus an AddGrain and sharpening effect, which I also forgot to include above) and just use QTGMC without any denoising? Or throw away the above denoisers and use something from QTGMC? I don't want to go crazy because I care more about the interlaced version for archiving purposes.

By the way, if you like the show UFO (given your videos) you may also like The Invaders, although the British were usually less corny.
  #24  
11-23-2021, 03:09 AM
lollo2 lollo2 is offline
Free Member
 
Join Date: Mar 2013
Location: Italy
Posts: 673
Thanked 189 Times in 163 Posts
Quote:
You mentioned previously that I can only use KNLMeansCL on an old version of TemporalDegrain2. Do you know which version?
TemporalDegrain (without the 2)

Quote:
It surely works much faster than FixRipsP2
Sure, but for some defects FixRipsP2 is sometimes necessary: https://forum.videohelp.com/threads/...-Distortion%29

Quote:
MergeLuma removes it, but makes it an oil painting
Oil painting/plastic look and highlighted halos are the unwanted side effects of denoising/sharpening/restoring, etc.
They are not easy to avoid; tuning the filters' parameters, or something like AddGrain (inside the filter if available, or outside it), may help. Some denoisers have an option to re-inject some "new, cleaner noise" based on what has been removed.
MergeLuma itself should not produce a plastic look, unless you do temporal/spatial smoothing.

Quote:
LSFMod and CAS perform the same function as aWarpSharp2 for MergeChroma/MergeLuma, right?
The best sharpeners do not sharpen chroma by default. You only do it in special cases, if needed.
Sometimes, to be sure that chroma is not touched, you force the flow with MergeChroma to use the chroma from the video before sharpening.
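In script terms, that MergeChroma trick can be sketched like this (LSFmod is used here purely as an example sharpener):

```avisynth
orig  = last
sharp = orig.LSFmod(preset="slow")  # sharpen (may also touch chroma)
MergeChroma(sharp, orig)            # luma from the sharpened clip, chroma from the untouched original
```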

Quote:
Learning from Doom9 is like going through a garbage can full of shredded notes.
The advantage of reading there is that the "developers" of the filters participate, but often their documentation is weak and they assume everybody "speaks" their technical language, which is obscure for a beginner. On the other hand, I will always be grateful to them for their free releases and their effort in making AviSynth and VapourSynth and their filters the wonderful tools that they are!

Quote:
... upscaled to HD so it gets a better bitrate by YouTube
If you want to output a version for YouTube you need to deinterlace. In this case the nnedi3 fake deinterlacing is not needed.
You can just use QTGMC() (a real bob deinterlacer) and possibly remove the denoiser, because QTGMC denoises by itself.
Then upscale to 1440x1080 (if your DAR is 4:3) with nnedi3_rpow2; this way, YT should introduce fewer problems while compressing your video.
You can save/export your final video with the same lossless codec used for capturing, because YT is able to read it, and this avoids a preliminary lossy compression on your side.

However, what I would experiment with, given the nature of your video, is whether deinterlacing is more appropriate before or after the filtering (the latter is uncommon). I have the impression that QTGMC may have trouble with the defective frames.

option 1:
Code:
...
QTGMC
<filtering>
<upscale>
option 2:
Code:
...
nnedi3 fake deinterlacing
<filtering>
QTGMC
<upscale>
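Option 1 could be fleshed out roughly like this for a 4:3 PAL capture (parameter values are illustrative, not tuned):

```avisynth
QTGMC(Preset="Slower")   # real bob deinterlace
# ... filtering here ...
# upscale to 1440x1080 for a 4:3 DAR before YouTube upload
nnedi3_rpow2(2, cshift="Spline36Resize", fwidth=1440, fheight=1080)
```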
The following users thank lollo2 for this useful post: Winsordawson (11-23-2021)
  #25  
11-23-2021, 06:59 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
Thanks--I'll post an update once I try to implement your advice. That change after FixRipsP2() is quite impressive! But a lot of detail is lost (like in the ear and hair), and two function calls on each field would grind my computer to a halt. Hopefully the OP used a mask to apply it just to those lines!
  #26  
11-24-2021, 02:38 AM
lollo2 lollo2 is offline
Free Member
 
Join Date: Mar 2013
Location: Italy
Posts: 673
Thanked 189 Times in 163 Posts
Yes, that filtering was quite destructive, a last-resort option! It was just an indication of how to proceed.

You are right: in general you want to apply a dedicated filter for a specific problem only to the concerned segment of the video, and possibly to a portion of the frame; and also with a "mask" to touch only where needed, but this last part is not that easy.
  #27  
11-26-2021, 10:50 AM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
Yes, that is my goal! Perhaps I am naive, but masking a portion of the frame does not seem too complicated if it is based on a crop (and not hue, saturation, or luminance).

Also, when you suggested using nnedi3 deinterlacing first because it works better, in what way exactly? Do the spatial-temporal and temporal filters work better, or can they be used with less strength? I ask because some people (like Sanlyn) suggest deinterlacing only when necessary, as any method brings a reduction in quality, since half of the frames are removed (but then interpolated, so overall about a 25% loss on average, according to LordSmurf).

But if the 25% loss means a less filtered look, it may be worth it. I would also think that deinterlacing would be less damaging to a clip like mine that has little movement, bringing the loss down even further.
  #28  
11-27-2021, 03:14 AM
lollo2 lollo2 is offline
Free Member
 
Join Date: Mar 2013
Location: Italy
Posts: 673
Thanked 189 Times in 163 Posts
Quote:
masking a portion of frame does not seem too complicated
I was talking about masks on "elements" of the picture, i.e. edges of objects, gradients, a luma subset, certain colors, etc., not on a portion of the frame, which is trivial.

Quote:
Also, when you suggested to use nnedi3 deinterlacing first because it works better, in what way exactly? Do the spatial-temporal and temporal filters work better or can be used with less strength?
Concerning deinterlacing before filtering, here are some discussions we had in the past, explaining the right procedure better than I did:

https://forum.doom9.org/showthread.php?t=86394

http://www.doom9.org/index.html?/cap..._avisynth.html

https://forum.doom9.org/showthread.php?t=167315

https://forum.doom9.org/showthread.php?t=59029

http://forum.doom9.net/showthread.ph...93#post1921993

https://forum.doom9.org/showpost.php...82&postcount=6

Quote:
I ask because there are some people...
I proposed a lossless deinterlace -> filter -> interlace back approach.
Deinterlacing the video or not for the final export is your choice. If you prefer to deinterlace, the previous approach is useless, and it is then better to use QTGMC (before or after filtering in this special case).
Deinterlacing (QTGMC) is recommended for YouTube upload.

Quote:
...as any method will bring a reduction in quality as half of the frames are removed (but then interpolated, so overall about a 25% on average...
I am not sure I understand what you mean. To simplify: deinterlacing at double frame rate recreates by interpolation the full frame from the single field (and much more when using QTGMC() instead of a simple Bob(), for example).

interlaced video, 25 frames (50 fields) per second (25i)
frame1 frame2 frame3 frame4
A......C......E......G...... (field 0) even lines
b......d......f......h...... (field 1) odd lines

Bob() deinterlaced
nnedi3(field=-2) deinterlaced
QTGMC() deinterlaced
[frame count is doubled (relative position of frames in the scheme above does not match)]
frame1 frame2 frame3 frame4 frame5 frame6 frame7 frame8
A......B'.....C......D'.....E......F'.....G......H'..... (field 0) even lines
a'.....b......c'.....d......e'.....f......g'.....h...... (field 1) odd lines

x' and X' represent scanlines interpolated from X and x, respectively
The following users thank lollo2 for this useful post: Winsordawson (12-02-2021)
  #29  
12-02-2021, 09:54 AM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
Thank you for the reading material. The deal breaker for me is that sharpening really shouldn't be done on interlaced material. Also, it seems that JDL_UnfoldFieldsVertical stacks the even and odd fields together, which I would think still suffers in quality because of the lack of information between lines 1 and 3, lines 3 and 5, etc.

I am gathering that your method works better because A and a' (interpolated A) are from the same space and the same time, which helps with spatial-temporal filtering. The original A and b are from a different space and time. SelectEven/SelectOdd provide all of the A lines at once, but because the b lines come in the next field before A again, temporal filtering will suffer.

In your experience, do you think using QTGMC with or without denoising is better, since it appears that I would only have a choice between dfttest and fft3dfilter, which don't seem to work as well as TD2 and SpotLess?

Also, do you recommend QTGMC with NNEDI3 as interpolation or something else like "EEDI3+NNEDI3" (EEDI3 with sclip from NNEDI3) to get the benefit of both?
  #30  
12-04-2021, 03:45 AM
lollo2 lollo2 is offline
Free Member
 
Join Date: Mar 2013
Location: Italy
Posts: 673
Thanked 189 Times in 163 Posts
Quote:
The deal breaker for me is that sharpening really shouldn't be done on interlaced material
Yes, don't do it; some filters have an "interlaced=true" option, but most of them just do a SeparateFields() internally, so that is not recommended either.

Quote:
Also, it seems that JDL_UnfoldFieldsVertical stacks the even and odd fields together, which I would think still suffers in quality because of the lack of information between lines 1 and 3, lines 3 and 5, etc.
Not really, because UnfoldFieldsVertical shifts all the even scanlines to the top half of the frame and all the odd scanlines to the bottom. More details here: https://forum.doom9.org/showthread.p...834#post354834; however, this method is obsolete.

Quote:
SelectEven/selectOdd provide all of the A lines at once but because the b lines come in the next field before A again temporal filtering will suffer
For temporal filtering, field separation is not appropriate; it works well on the original interlaced material. A deinterlace is more effective because the filter "works" with more "data".

Quote:
In your experience, do you think using QTGMC with or without denoising is better, since it appears that I would only have the choice between dfttest and fft3dfilter that don't seem to work as well as TD2 and SpotLess?
If you just want to filter your interlaced video, "full QTGMC power" is not necessary: you can use nnedi3 and interlace back after the filtering, or QTGMC in Lossless mode, but the first is easier.
If you want your final result to be deinterlaced (YouTube or whatever), use QTGMC(). QTGMC denoises by itself, in a less effective way than TD2 as you said, so you may want to turn off its intrinsic denoising, which can be done only partially, and use TD2 after QTGMC. By doing so, and sharpening afterwards, be careful not to introduce excessive smoothing and a "plastic look".
SpotLess is more a "defect remover" than a denoiser. It is generally used before denoising and sharpening.
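That ordering could be sketched as follows (NoiseProcess=0 limits QTGMC's intrinsic noise processing, as in the script later in the thread; the other values are illustrative):

```avisynth
QTGMC(Preset="Slower", NoiseProcess=0)  # deinterlace, intrinsic denoising limited
SpotLess(RadT=3)                        # defect removal before denoising
TemporalDegrain2()                      # main denoise on the progressive clip
LSFmod(preset="slow")                   # gentle sharpening last, to limit the "plastic look"
```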

Quote:
Also, do you recommend QTGMC with NNEDI3 as interpolation or something else like "EEDI3+NNEDI3" (EEDI3 with sclip from NNEDI3) to get the benefit of both?
As pure interpolation, nnedi3 and QTGMC are equivalent, because nnedi3 is used inside QTGMC.
If you are looking for the absolute best procedure by "merging" eedi3 and nnedi3, I can't answer; it depends on your videos whether it is worth it or not.

As a general recommendation, always experiment a lot yourself, and do not blindly trust our suggestions
The following users thank lollo2 for this useful post: Winsordawson (12-05-2021)
  #31  
12-05-2021, 09:06 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
Thanks for all the tips--I'll report back once I apply them. Also, how do you tell if a denoiser is temporal, spatial or spatial-temporal if it is not categorized as such on the Avisynth website or where it was posted?
  #32  
12-06-2021, 04:26 AM
lollo2 lollo2 is offline
Free Member
 
Join Date: Mar 2013
Location: Italy
Posts: 673
Thanked 189 Times in 163 Posts
Quote:
how do you tell if a denoiser is temporal, spatial or spatial-temporal
If the filter is an AviSynth script and not a compiled DLL, you can read the code:
- a pure spatial filter is one where the filtering occurs only within a single frame.
- in general, when you see "motion vector" generation (MVTools), there is a temporal radius involved, so the processing spans multiple frames (temporal filtering); spatial filtering may or may not be added on top.
- today, the best denoising filters are spatial-temporal, combining both approaches.

If the filter is a compiled DLL, we have to trust the author's documentation (often incomplete) or run some experiments on a reference clip to figure it out (not easy).
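As a toy illustration of the difference when reading a script:

```avisynth
# purely spatial: each output pixel depends only on its own frame
spatial = last.RemoveGrain(1)

# temporal: radius 2 means each output frame draws on 5 neighbouring frames
temporal = last.TemporalSoften(2, 4, 8)
```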
The following users thank lollo2 for this useful post: Winsordawson (12-06-2021)
  #33  
12-06-2021, 09:17 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
Thanks again. I was afraid that there was no way to determine the type of filter if it were a compiled dll besides guess and check. Luckily those cases are rare (after searching the forums for prior users).

Since you have been so helpful, could you explain why UToY() and VToY() have to be used? In the link below, someone uses them to reduce chroma banding. I understand how it works, but why couldn't there just be filters that let you directly adjust the chroma channels, as opposed to copying the values to luma, adjusting them, and then copying them back to the U or V channels? Or an argument in a filter that lets you choose the plane?

https://forum.videohelp.com/threads/...os#post2536626
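For reference, the pattern being asked about typically looks like this (RemoveGrain is just an illustrative stand-in for whatever chroma processing is wanted):

```avisynth
orig = last
u = orig.UToY().RemoveGrain(1)  # treat the U plane as a luma-only clip and filter it
v = orig.VToY().RemoveGrain(1)  # same for V
YToUV(u, v, orig)               # recombine: filtered chroma, original luma
```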
  #34  
12-07-2021, 05:49 AM
lollo2 lollo2 is offline
Free Member
 
Join Date: Mar 2013
Location: Italy
Posts: 673
Thanked 189 Times in 163 Posts
My guess is that ttempsmooth (I never used it) processes the chroma/luma planes together when testing for pixel similarity (https://forum.doom9.org/showthread.php?t=77856), while "themaster1" wanted to act only on chroma.

He writes here, so maybe he can explain better...
  #35  
12-07-2021, 12:21 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
Quote:
Originally Posted by lollo2 View Post
My guess is that ttempsmooth (I never used it) processes the chroma/luma planes together when testing for pixel similarity (https://forum.doom9.org/showthread.php?t=77856), while "themaster1" wanted to act only on chroma.

He writes here, so maybe he can explain better...
Thanks. What I mean is that I see this conversion of chroma to luma happen often with different filters, so is there some reason, in how AviSynth was designed, that the programmers don't simply provide an argument to allow manipulating the chroma directly, versus converting to luma first and then back? I was just curious, but maybe themaster1 knows something.
  #36  
12-17-2021, 05:35 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
I thought I'd share this neat chroma effect that I came across on a bad part of the tape. This kind of color effect would take some serious masking in an NLE!

(Since both sides of the coat are the same color.)
BEFORE.jpg

Last edited by Winsordawson; 12-17-2021 at 05:46 PM.
  #37  
12-17-2021, 05:52 PM
lordsmurf's Avatar
lordsmurf lordsmurf is online now
Site Staff | Video
 
Join Date: Dec 2002
Posts: 13,501
Thanked 2,447 Times in 2,079 Posts
He's a Smurf!

The following users thank lordsmurf for this useful post: Winsordawson (12-17-2021)
  #38  
12-18-2021, 10:59 AM
lollo2 lollo2 is offline
Free Member
 
Join Date: Mar 2013
Location: Italy
Posts: 673
Thanked 189 Times in 163 Posts
Quote:
This kind of color effect would take some serious masking in a NLE!
In AviSynth/VapourSynth you can have a look at this procedure on how to use masks (although on a different subject):
https://forum.videohelp.com/threads/...on#post2640813

For your specific problem, maybe an NLE is more appropriate.

Good luck!
The following users thank lollo2 for this useful post: Winsordawson (12-18-2021)
  #39  
12-18-2021, 05:11 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
Thanks--I didn't find the tracking error problematic enough to remove; I just thought to share it because of the interesting colors. I have taken your advice and tried to export an .AVI from VirtualDub based on the script below, but VirtualDub keeps reporting an out-of-bounds memory error. No bad frames were detected when I scanned the file (which is only a minute long). Do you have any suggestions I could look into (I am using 32-bit AviSynth+)?

To summarize the script: it was first edited in Premiere Pro with added segments. Those segments were removed so that they were not affected by the filters, then added back in afterwards. I upscaled and resized to keep the 4:3 ratio and let YouTube add pillarboxes. Removing the resizing solves the issue, so maybe I am doing something wrong there?

Code:
video1 = AVISource("VW#1_AsherHada.avi").AssumeBFF
audio1 = video1
video1

ReplaceFramesMC2(192, 1)
ReplaceFramesMC2(194, 5)
ReplaceFramesMC2(268, 2)
ReplaceFramesMC2(315, 8)
ReplaceFramesMC2(359, 3)
ReplaceFramesMC2(500, 4)
ReplaceFramesMC2(520, 5)
ReplaceFramesMC2(5619, 1)

video2 = AudioDub(last, audio1)

seg1 = video2.Trim(3955,4093)
seg2 = video2.Trim(6264,6412)
seg3 = video2.Trim(10052,10200)
seg4 = video2.Trim(13981,14129)
seg5 = video2.Trim(17598,17746)
seg6 = video2.Trim(21230,21359)

vidEdit = video2.Trim(0, 3944) + video2.Trim(4094, 6263) + video2.Trim(6413, 10051) + video2.Trim(10201, 13980) + video2.Trim(14130,17597) + video2.Trim(17747,21230)

vidEdit = vidEdit.Crop(22,0, 0,0).AddBorders(10,0,12,0)

/*Double-checking correct width*/
# return Subtitle(last, String(vidEdit.Width), size = 32)   
last = vidEdit

AssumeBFF().nnedi3(field=-2)

FAN(lambda=5, plus=1, minus=50)
FAN(lambda=5, plus=50, minus=1)
ChromaShift(C=6, L=2)
SpotLess(RadT=3, ThSAD=1000, Blksz=16).SpotLess(RadT=5, ThSAD=300, Blksz=16).RemoveDirtSMC(20)

FixChromaBleedingMod(thr=7, strength=0.8)

MergeChroma(aWarpSharp2(thresh=200,depth=30, type=1,blur=4, chroma=3).aWarpSharp2(thresh=200,depth=30, type=1,blur=4,chroma=3))
TurnRight()
MergeChroma(aWarpSharp2(thresh=200,depth=30,type=1,blur=4).aWarpSharp2(thresh=200,depth=30,type=1,blur=4))
TurnLeft()

SmoothUV(radius=2, field=false)

AssumeBFF().SeparateFields().SelectEvery(4,0,3).Weave()

vidEditFinal = last

vidFinal = vidEditFinal.Trim(0, 3944) + seg1 + vidEditFinal.Trim(3945, 6113) +seg2 + vidEditFinal.Trim(6114, 9756) +seg3 + \
vidEditFinal.Trim(9757, 13529) + seg4 + vidEditFinal.Trim(13530, 17001) + seg5 + vidEditFinal.Trim(17002, 0) + seg6


last = vidFinal.Trim(0, 1800)


QTGMC( Preset="fast", EZKeepGrain=1.0, NoisePreset="Faster", NoiseProcess=0, FPSDivisor=2)
VInverse2()

nnedi3_rpow2(4, cshift="Spline36Resize", fwidth=1920, fheight=1440)

LSFmod(strength=100, preblur="ON")
AddGrainC(var=2)
ColorMatrix(mode="Rec.601->Rec.709")
return last
Also, I would have liked to keep the FPS at 59, but when I do, a line of chroma appears on every other frame. This goes away when I keep only half the frames, but I do not know why.

Thank you.
  #40  
12-18-2021, 10:43 PM
Winsordawson Winsordawson is offline
Free Member
 
Join Date: Sep 2010
Location: Behind you
Posts: 473
Thanked 29 Times in 25 Posts
I forgot to add that QTGMC darkened the whole video at first, but from looking online the fix was to set NoiseProcess to 0. The video was still darkened a bit after this change, by about 10 percent, crushing the blacks. Is there a way around this besides raising the black level before the filter?