  #1  
12-28-2010, 06:18 PM
jmac698 jmac698 is offline
Free Member
 
Join Date: Dec 2010
Posts: 387
Thanked 73 Times in 56 Posts
Hello,
In several threads I refer to such things as software TBC and other tricks for repairing video. I thought a visual demonstration of the technique behind this would illustrate the concepts better. What I'm about to show you has *amazing* utility to fix all kinds of problems, as well as enhance video. If you understand this concept, you will be able to generate your own ideas of how things could be done, then it's just a matter of sitting down and doing some scripting.

It's the power of motion tracking. See the three pictures below; better yet, download them and flip through them quickly like a slideshow.
They are 3 frames which combine the widescreen and full-frame versions of a video. Using motion tracking, I am able to place the full-frame version on top of the matching spot in the widescreen version. By doing this, we can watch in real time as the director pans and scans the video frame to create the full-frame version. In this case, a second actor joins the scene, and the director pans the image to include his face.
In the upper-left corner you will notice some numbers. These are the detected coordinates of where the widescreen matches the fullscreen. You can see the number dx increasing. The sign is actually backwards because of the order of operations I was using, so take the negative of that number: it is decreasing from 41 to 2 to -68. Here, 41 means 41 pixels to the right and -68 means 68 pixels to the left, so the full frame is moving to the left relative to the widescreen.
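As a rough illustration of the matching step, here is a sketch in Python/NumPy (this is not the script actually used; it assumes both frames are grayscale arrays already resampled to the same scale, and it uses a plain exhaustive SAD search, which is just one way to do block matching):
Code:
# Rough sketch only: find where the pan-and-scan (fullscreen) frame sits
# inside the widescreen frame by exhaustive sum-of-absolute-differences.
# Assumes grayscale float NumPy arrays at the same scale, `full` smaller
# than `wide`. Names are illustrative.
import numpy as np

def find_offset(wide, full):
    """Return (dy, dx): top-left position in `wide` where `full` matches best."""
    fh, fw = full.shape
    wh, ww = wide.shape
    best, best_err = (0, 0), np.inf
    for dy in range(wh - fh + 1):
        for dx in range(ww - fw + 1):
            err = np.abs(wide[dy:dy + fh, dx:dx + fw] - full).sum()
            if err < best_err:
                best_err, best = err, (dy, dx)
    return best

The (dy, dx) it returns for each frame is the spot where the full-frame picture sits inside the widescreen picture; tracked over time, that is the pan-and-scan path shown by the numbers in the corner.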

What is the use of this? Well, in this example, there is a significant increase in quality in the combined video. The widescreen version, even though it's anamorphic, is only 364 pixels high, while the full-frame version is 480 pixels high, so there's about a 30% increase in resolution here. More than that, the compression is a little better, and you can really notice that the edges of objects look better. I can actually create the equivalent of a 720p HD video just from one DVD! That was just an experiment though.

Imagine you have two captures of the same video, but each capture contains different noise. Let's say it's from a VHS, and there are white lines and general noise. If you split the video into single horizontal lines and apply motion tracking, you can align the corresponding lines of the two captures to the same spot. If you do this you will see that a typical VCR has a dx of ±3 for most of the lines; in other words, between two captures, each line has shifted left or right by up to 3 pixels. This is exactly the type of error that a 'line TBC' is supposed to fix.
Here's an illustration:
ABCDE line 1 of capture 1
CDEFG line 1 of capture 2
MNOPQ line 2 of capture 1
KLMNO line 2 of capture 2

What I call a 'relative TBC' is this motion-tracked comparison of lines between two captures. It does *not* fix the alignment of one line relative to the next (that inter-line jitter remains). Here's what it does:
abCDEfg line 1 of all captures
klMNOpq line 2 of all captures
I've put some letters in lowercase; that means we only have one copy of that letter. Where there are capital letters, we have two copies of the letter, and we can start to use this for denoising. See my post on averaging and median for more information on that denoising technique.
This alignment is the key to improving the technique and reducing blur.
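To make the line-by-line alignment concrete, here is a rough sketch in Python/NumPy (illustrative only; the real processing was done with AviSynth scripting, the ±3 search range and the SAD criterion are just examples, and np.roll wraps pixels around the edge where a real script would pad or crop):
Code:
# Rough sketch of the per-line "relative TBC": for every scanline, find the
# horizontal shift (within +/- max_shift pixels) that makes capture B best
# match capture A, then shift B's line by that amount.
import numpy as np

def line_dx(a, b, max_shift=3):
    """Shift (in pixels) to apply to line b so it best matches line a."""
    best_dx, best_err = 0, np.inf
    for dx in range(-max_shift, max_shift + 1):
        err = np.abs(a - np.roll(b, dx)).mean()
        if err < best_err:
            best_err, best_dx = err, dx
    return best_dx

def align_to(cap_a, cap_b, max_shift=3):
    """Return cap_b with each scanline shifted onto the same line of cap_a."""
    aligned = np.empty_like(cap_b)
    for y in range(cap_b.shape[0]):
        aligned[y] = np.roll(cap_b[y], line_dx(cap_a[y], cap_b[y], max_shift))
    return aligned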

This also gives us an idea for doing a 'real' TBC. Noise in general always averages to 0 - remember that! In our case, the jitter (or dx, or pixel shift) always averages to 0 as well. So what if we take a large number of captures, line them up in 'relative TBC', and then cut out all the lowercase letters? We should be left with a jitter of 0, in other words each line will be in a stable position. In fact, instead of just cutting out the lowercase letters, we'll just use them to align everything and extend our borders a little.
Because of jitter, we can peer a little beyond our capture window and see a few pixels that drift in from outside the border.
The funny thing is, with the power of this technique, a horribly jittery video is better! That's because we will eventually be exposed to every part of every line and can digitize a full capture window, no matter what part we see of it - even if we can capture just a dozen pixels per line, given enough time and enough jitter, we could reconstruct the entire video.
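Here is a sketch of how that averaging could work, assuming the per-line offsets of every capture against one reference capture have already been measured (with something like the line_dx sketch above); the sign convention and the names are illustrative, not a finished implementation:
Code:
# Rough sketch of turning relative alignment into an absolute TBC.
# dx[c, y] = how many pixels line y of capture c sits to the right of the same
# line in an arbitrarily chosen reference capture. Because jitter averages to
# zero, the per-line mean of dx over many captures is (minus) the reference's
# own jitter, so shifting every capture toward that mean centres all of them
# on the true line position.
import numpy as np

def stabilize(captures, dx):
    """captures: (n, height, width) stack; dx: (n, height) measured offsets."""
    mean_dx = dx.mean(axis=0)          # estimate of -1 * the reference's jitter
    out = np.empty_like(captures)
    for c in range(captures.shape[0]):
        for y in range(captures.shape[1]):
            # this capture's own jitter is dx[c, y] - mean_dx[y]; undo it
            out[c, y] = np.roll(captures[c, y], int(round(mean_dx[y] - dx[c, y])))
    return out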

Here's a solution to another problem. A video is stable and fine except for 4 lines in the middle that have a large dx (they jump to the left by a large amount). Now let's compare the current frame to the previous frame looking for motion changes. In most cases, the last frame is quite similar to the current frame. If we look at the detected dx line-by-line, we can see a huge jump in dx values in the shifted lines. If you look for this pattern: large dx for 4 lines surrounded by smaller dx, then that is your detection of the bad lines.
Now how to fix those lines? We will fill them in with a prediction of what they should have been. Instead of the large dx values, we will artificially put in an average of the surrounding small dx values, with the result that those 4 lines shift at the same relative speed as the lines above and below them. This is called motion compensation. It's a very good way to fill in missing lines.
We didn't need a TBC to do this.
A hardware TBC is a brute-force, align-all-lines-to-one-standard method. We are using a 'smart TBC', where we align small bits as needed, with the information we have at hand. A real TBC can only work with the sync signals that the hardware can see. We can't see the sync signals, so we have to rely on motion tracking of some kind.
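Here is one way the detect-and-replace step could look. This is a sketch only; the threshold and window size are arbitrary, and I've used interpolation between the nearest good lines, which for a short run comes to roughly the same thing as averaging the surrounding good dx values:
Code:
# Rough sketch of detecting and repairing a short run of damaged lines.
# dx[y] is the per-line shift measured between the current and previous frame.
# Lines whose dx jumps far from the local median are treated as bad; their dx
# is filled in from the nearest good lines, and the corrected dx would then be
# used to re-shift those lines.
import numpy as np

def repair_dx(dx, jump=8.0, win=9):
    dx = dx.astype(float)
    pad = win // 2
    padded = np.pad(dx, pad, mode='edge')
    local_med = np.array([np.median(padded[i:i + win]) for i in range(len(dx))])
    bad = np.abs(dx - local_med) > jump          # lines whose shift is an outlier
    good = np.flatnonzero(~bad)
    fixed = dx.copy()
    fixed[bad] = np.interp(np.flatnonzero(bad), good, dx[good])
    return fixed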

We can also denoise with just one capture; it's really the same idea, but we artificially create two captures. Let's say a room is panned from left to right. If we align two frames, we get 2/3 of the room in common between the two frames. This is now like our multiple-capture technique. The noise will be different on each frame, which means we can use a median to denoise it. Instead of using 3 captures, we use 3 frames which hopefully show the same pixels in common somewhere on the screen. Of course this is only coincidental, so it fails sometimes, but overall it can be a very effective technique.
For example, if the camera is on a tripod and some people are talking (interview with grandma...), you should be able to perfectly clean up the background and grandma's body, except the little bits where her face and hands move. It will certainly look a lot better! That's how RemoveDirt works.
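A sketch of that single-capture version: estimate the global pan between neighbouring frames, shift them onto the current frame, and take a pixel-wise median of the three. Phase correlation is used here only because it is a standard way to get a global shift; it is not necessarily how the alignment is done in practice, and the frames are assumed to be grayscale float arrays:
Code:
# Rough sketch: align the previous and next frames onto the current one with a
# global shift estimated by phase correlation, then take a pixel-wise median.
import numpy as np

def global_shift(ref, mov):
    """(dy, dx) to roll `mov` by so it lines up with `ref` (phase correlation)."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def temporal_median(prev, cur, nxt):
    """Median of three motion-aligned frames; static areas come out cleanly denoised."""
    moved = [np.roll(f, global_shift(cur, f), axis=(0, 1)) for f in (prev, nxt)]
    return np.median(np.stack([moved[0], cur, moved[1]]), axis=0)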


Attached Images
File Type: jpg 9341-sm.jpg (27.1 KB, 36 downloads)
File Type: jpg 9356-sm.jpg (30.2 KB, 33 downloads)
File Type: jpg 9371-sm.jpg (30.9 KB, 30 downloads)
The following users thank jmac698 for this useful post: 16mmJunkie (04-08-2011), admin (12-29-2010), kpmedia (12-29-2010)
  #2  
12-29-2010, 10:01 AM
admin's Avatar
admin admin is offline
Site Staff | Web Development
 
Join Date: Jul 2003
Posts: 4,310
Thanked 654 Times in 457 Posts
This comes back to the DSLR comparison. Motion tracking is a vital part of sports autofocus systems, MPEG-2 GOP encoding, and a number of other areas. As the tracking technology gets better -- almost "AI-like" (artificial intelligence) -- we'll see more and more abilities come from it.

The key with "software TBC" comes from the judgment algorithms, I'd think.
How to decide when something is in the "wrong" place?

  #3  
12-29-2010, 01:51 PM
jmac698 jmac698 is offline
Free Member
 
Join Date: Dec 2010
Posts: 387
Thanked 73 Times in 56 Posts
I think I forgot I wasn't on doom9. I don't think anyone here is interested in the technical stuff.
  #4  
12-29-2010, 01:54 PM
admin's Avatar
admin admin is offline
Site Staff | Web Development
 
Join Date: Jul 2003
Posts: 4,310
Thanked 654 Times in 457 Posts
Oh, I'm interested.

Maybe I won't understand it all immediately, but it still makes for a good read to stumble through. And others will surely feel the same, so don't hold back. doom9 is indeed more into coding, but this is by no means an anti-coding type of site.

  #5  
04-05-2011, 09:34 AM
deadhead deadhead is offline
Free Member
 
Join Date: Feb 2011
Posts: 3
Thanked 0 Times in 0 Posts
Your idea is reinventing the wheel. Some advanced denoising algorithms use much more sophisticated techniques to do the same thing, but they search whole frames for pieces to find, compare and use, not just single lines.
But the problem is that there is no additional information (or only a very small amount) in the next or previous frames, so the only effect is denoising of the original frame.

A TBC does not denoise the image. It corrects the sync pulses in the recorded video during playback, so your capture card or TV gets correct information about when the next line / frame starts.
There are no denoising stages in a classic TBC. TBC doesn't work with video data, only with sync data.
  #6  
04-05-2011, 01:12 PM
admin's Avatar
admin admin is offline
Site Staff | Web Development
 
Join Date: Jul 2003
Posts: 4,310
Thanked 654 Times in 457 Posts
Quote:
TBC doesn't work with video data, only with sync data
You make an excellent point.

  #7  
04-05-2011, 01:26 PM
jmac698 jmac698 is offline
Free Member
 
Join Date: Dec 2010
Posts: 387
Thanked 73 Times in 56 Posts
Which I already noted:
>A real TBC can only work with the sync signals that the hardware can see.

Yes, I agree with you there - but you didn't fully read or understand my post.
It sounds like you are referring to NLMeans, which is one of the best-performing image denoising techniques today. Coincidentally, I just tried the TNLMeans avisynth plugin on my "garbage standard" dropout video, and while it removes the white lines, it's extremely blurry, and enabling the temporal aspect just creates motion blur. My technique, on the other hand, produces a perfect image.
I've also been working on a standalone version of my technique. Just cursing at .NET right now, which is not really that powerful for my purposes (did you know it can't even plot a pixel? What an oversight!).
I've just tested an idea I had for measuring jitter, and it works very well, even in the face of cropping or a bit of clipping. (Oops, I guess no one is going to know what that means, but anyway...)
Could be some more news soon...
  #8  
04-05-2011, 01:42 PM
admin's Avatar
admin admin is offline
Site Staff | Web Development
 
Join Date: Jul 2003
Posts: 4,310
Thanked 654 Times in 457 Posts
Quote:
I've just tested an idea I had for measuring jitter, and it works very well, even in the face of cropping or a bit of clipping. (Oops, I guess no one is going to know what that means, but anyway...)
I do!
Clipping and cropping actually have several definitions. Some are video, some are not, and they can apply to various topics within video editing or video production.

I'll assume clipping = masking within the frame. (Premiere defines this as "clipping", too.)
And cropping = pixel removal, from outside edges inward.
Yes?

  #9  
04-05-2011, 02:12 PM
deadhead deadhead is offline
Free Member
 
Join Date: Feb 2011
Posts: 3
Thanked 0 Times in 0 Posts
jmac698
Your post is quite long, but if you explain your idea clearly, maybe I will see the difference between a simplified NLMeans and your idea.

About Temporal NLMeans.

If you understand the NLMeans method correctly, it CAN'T create motion blur, because its idea is very simple: it doesn't matter where a similar piece comes from (the current frame or the current+100 frame); if it is similar to the current piece, great, we use it. Otherwise we throw it away.

And you can be sure that TNLMeans is not correctly implemented if it adds motion blur.

One more point: there's a BIG difference between analogue and digitized video. The problem is that your digitized, jittered video doesn't contain one analogue line digitized to exactly one digital line. That is a big problem.

Another point: frames digitized twice (or more), or even neighbouring frames, can be ABSOLUTELY different in terms of their pixel sequences.

Maybe I just don't understand your idea clearly, so try to explain it again.

About line shifting (i.e. jitter): you need to understand that you don't know the correct position of the n-th line, the shift could be the same in every subsequent capture, and you need to know the shift direction and the dependencies between lines. Otherwise you will get uglier artefacts than the time-base errors themselves.

Anyway, try it on real jittered video and let us see the difference in quality, because right now I only see three completely different images with some numbers overlaid.

And you should remember that there are dependencies between frames, and if you start shifting lines (or will you not?) you will get awful effects in the following frames.

Show us two frames from a video sequence: an original jittered frame and the resulting dejittered frame. That would be the best demonstration that this idea works.

  #10  
04-05-2011, 03:44 PM
jmac698 jmac698 is offline
Free Member
 
Join Date: Dec 2010
Posts: 387
Thanked 73 Times in 56 Posts
admin
Clipping could be like a "newspaper clipping", but I mean it in the electronic sense, like the distortion when the VU meter lights up red on a mixing board. In video this would be washed out, blown highlights, overbright. The detail is lost in these regions. My test worked even though this detail was lost, by relying only on the surrounding pixels which still had detail.

deadhead
My writing wasn't very good; I was listing all the possibilities derived from an abstract idea. I will explain it a different way for you, with a specific application.

You have a worn video tape. It has white lines all over it. This is caused by dropouts. If you play it twice, you notice that the lines are in a different spot. You have an idea: take the good pixels from each playback and combine them. Your first thought is to search for the white lines to form a mask. Then you combine the unmasked areas and you have recovered your video without the white lines. This removes 95% of the white lines even with just two passes. But there is a simpler way.

Make 3 recordings of the video, then take the median of them. I mean to take pixel 1 of recording 1, pixel 1 of recording 2, and pixel 1 of recording 3, then sort them, then take the middle value. If one of the pixels is white, it gets replaced by a good pixel from one of the other two clean copies.

If two of the pixels were white, you still have a bad pixel; this sometimes happens. Since you can't take a median of an even number of samples, the next step up is 5 copies, which is enough to perfectly restore the video.
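The median step itself is trivial once the recordings are frame- and line-aligned. The real processing here was done with AviSynth, but the idea, in Python/NumPy, is just:
Code:
# `caps` is a stack of 3 or 5 aligned captures of the same frame.
import numpy as np

def median_of_captures(caps):
    """caps: (n_captures, height, width), n_captures odd."""
    return np.median(caps, axis=0)

For example, if the three plays give 170, 65 and 72 for the same pixel, the median keeps 72 and the dropout value 170 is thrown away.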

But there are practical problems with this. First, it's hard to get 5 perfect digitizations of a video. Sometimes there are missing frames, so the captures end up out of sync. I fixed this in two ways: first I tried different drivers and programs to reduce the problem, then I used software to automatically compare the captures frame by frame and put each video in sync.

The other problem is jitter. I mean jitter in the electronic term, as in "clock jitter", meaning a randomized timing. In a VCR, the motors don't run at a perfect speed, which makes each line of the picture shifted to the left or to the right.

When I try to restore the pixels, my "good" pixel is not coming from the right spot (what I think is pixel 1 is really, say, pixel 5). The overall effect is a horizontal blurriness.

This can be solved with a hardware TBC, which lines up each line of each copy. But maybe I don't have a TBC, or the original tape is gone, or there is only a copy of a copy with "burnt-in" jitter, so this is a practical problem which comes up in restoration services.

How do I solve that? I use a software TBC. This technique lines up each copy relatively. Here's an example:
Quote:
is jit*** and noise.
____This is jitter *** noi
_T*** is jitter and noise
And after a relative TBC:
Quote:
____ is jit*** and noise.
This is jitter *** noi
T*** is jitter and noise
And after the median of 3:
Quote:
This is jitter and noise.
A relative TBC lines up the same line in each copy, but comparing one line to the next, there is still the same jitter. I've only partially lined up the video, enough to compare the same lines together for the median. However, I can do even better. The jitter in my example was (-5,4,1). Notice the average is 0. So if I shift everything to the average of the jitters, I get a lined-up picture. The more copies I have, the better they will average out. I can do the same as a hardware TBC, but in software, at the same time as removing all the dropouts!

There is still a third practical problem. The brightness of each copy can be slightly different. It's a minor problem, but I can correct for this too. You also have to make sure that none of the copies have clipping (blown-out highlights or areas that are too dark).

That was a specific example of the abstract idea of motion tracking, which is what allows me to line up each line. It is computed with a formula called a Correlation.
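For reference, the correlation score could look something like this (normalized cross-correlation is one common choice; the exact variant used in the scripts isn't stated here). The best per-line shift is whichever d in the search range gives the highest score:
Code:
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length pixel rows."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def best_shift(a, b, max_shift=3):
    """Shift of row b (in pixels) that best lines it up with row a."""
    return max(range(-max_shift, max_shift + 1), key=lambda d: ncc(a, np.roll(b, d)))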

The attached example shows the dropouts removed by taking the median of 5 copies of the VHS. The noise is much reduced as well. There was no other denoising applied.


Attached Images
File Type: jpg sample.jpg (88.1 KB, 20 downloads)
File Type: jpg sample-demedian5.jpg (71.7 KB, 20 downloads)

  #11  
04-05-2011, 04:15 PM
jmac698 jmac698 is offline
Free Member
 
Join Date: Dec 2010
Posts: 387
Thanked 73 Times in 56 Posts
Further steps

Not only have you removed the dropouts, but the median tends to cancel out other noise as well (you can see the black lines, color patches, and general noise are gone too). I've removed all the noise due to the tape itself; what's left is due to the original camera. For that noise I use further denoising techniques, including NLMeans. I can then apply sharpening, lens-distortion removal, superresolution, and deinterlacing, and end up with a 720p 60 Hz video that looks a lot better. I can even make hi-res panoramas from the video, and you can even remove handheld camera shake.
  #12  
04-05-2011, 04:24 PM
admin's Avatar
admin admin is offline
Site Staff | Web Development
 
Join Date: Jul 2003
Posts: 4,310
Thanked 654 Times in 457 Posts
Quote:
Clipping could be like a "newspaper clipping", but I mean it in the electronic sense, like the distortion when the VU meter lights up red on a mixing board. In video this would be washed out, blown highlights, overbright. The detail is lost in these regions. My test worked even though this detail was lost, by relying only on the surrounding pixels which still had detail.
Ah, gotcha.

The sample images from two posts back almost look like head noise, from a bad playback head. When the tape is available, simply using another VCR often corrects this. However, once it's committed to digital, that's when a solution like yours would come in handy. I'm not even sure if something like Ikena or dTective could clarify to this level. I have the ability to visit a lab with dTective, and may do so in the near future, armed with ugly test clips on a flash drive. I know Ikena can clarify footage better than what your sample shows, but that's only (to my knowledge, at least) for still extraction for ID purposes -- not continuous frames (i.e., video). Pretty sure the Snell box would also do little for this specific error, since it's made for film work.

I've had some success on stabilization with "Ikena lite", formally known as vReveal.



  #13  
04-05-2011, 05:01 PM
jmac698 jmac698 is offline
Free Member
 
Join Date: Dec 2010
Posts: 387
Thanked 73 Times in 56 Posts
The Nature of the Noise

I hesitate to place a specific cause on the noise, but I've seen a number of typical explanations given, such as dropouts or dirty heads. What I think it is, is four separate things. The black lines appear to the right of hard edges transitioning from dark to light. I can recreate this with perfectly working VCRs, so it's some kind of purely electronic problem. It follows a pattern, and it should be possible to filter it out.
The white lines I like to call comets, because they are bright white for a few pixels and trail off to the right as light grey. If this were a physical flake of magnetic coating missing, it would always occur in the same spot. I think it has to do with a weak signal at the edge of detection: combined with noise, the dropouts occur at random because the FM signal was right at the detection threshold. The comet appearance would be the response of the electronics trying to transition from pure white back to a normal level. The color bands have to do with a nearby edit; I forget the full explanation right now. The random noise, I guess, is just tape noise.

As for this sample, I tried it on 5 different VCRs and this was the best result. It's definitely not a dirty head. I even have one of the recommended VCRs (a JVC SVHS), but my RCA provided a better picture. Tracking was also weak on this tape, and I adjusted it manually for each scene.

My example has no further processing. You haven't seen how much further I can clean that up.
I don't have a still handy right now.

I've played with some forensic programs; they were nothing special, just well-known techniques you read about in mathematical papers in journals, many of which you can get for free in avisynth or even matlab form. While I'm sure they have some good results, don't let the marketing fool you: calling it forensic and for security use doesn't add any mystique to it for me. That's just one of their biggest markets. I was just looking at some interesting software for film restoration; I'll try to find the link.
  #14  
04-06-2011, 08:01 AM
deadhead deadhead is offline
Free Member
 
Join Date: Feb 2011
Posts: 3
Thanked 0 Times in 0 Posts
To be honest, I still don't clearly see how you estimate which pixel is correct and which one is not.
Do you search through 3 pixel subsequences?

How can you estimate the jitter if you know nothing about the correct line position?

It would be great to see some math or an algorithm.

Thanks and sorry for my bad English.
  #15  
04-06-2011, 10:58 AM
jmac698 jmac698 is offline
Free Member
 
Join Date: Dec 2010
Posts: 387
Thanked 73 Times in 56 Posts
The math is easy:
play 1, pixel 1=170
play 2, pixel 1=65
play 3, pixel 1=72
sort=(65,72,170), median=72
The bad pixel was 170.

The TBC is easy. Random values are good! Not knowing is good! In statistics, random values tend toward a Gaussian distribution, which has a mean. I search for that expected value; with enough samples, my jitter is reduced.
  #16  
04-07-2011, 12:40 AM
admin's Avatar
admin admin is offline
Site Staff | Web Development
 
Join Date: Jul 2003
Posts: 4,310
Thanked 654 Times in 457 Posts
Do you have enough samples for testing and creating this?
If not, I know I can come up with more.

  #17  
04-07-2011, 02:11 AM
jmac698 jmac698 is offline
Free Member
 
Join Date: Dec 2010
Posts: 387
Thanked 73 Times in 56 Posts
Do you have a sample you couldn't do much with? Make 5 recordings as a test, I'll process it.
We can compare techniques.
  #18  
07-17-2012, 07:16 PM
jmac698 jmac698 is offline
Free Member
 
Join Date: Dec 2010
Posts: 387
Thanked 73 Times in 56 Posts
I'm back to researching this again. I've just written new plugins to help with the processing http://forum.doom9.org/showthread.ph...13#post1581813
Just a review: I have a semi-working software TBC that can be applied only in these situations:
-a 2nd-generation tape, where a hardware TBC can't fix the embedded line jitter
-a digitized jittered recording where the original analog is gone
-you don't happen to have a hardware TBC but do have one of the supported capture cards
-in all cases you have definite black borders that the software can line up

Obviously this is a big limitation; sometimes we don't have clear black borders and certainly we don't have the few obsolete capture cards.

The solution is to use any lines which appear in the video. Below is an example of a video which I fixed because it happened to have internal lines.

I could generalize and automate this, but there would still have to be clear edges at a high (near-vertical) angle. First of all, to find the edges you can use a standard algorithm such as Sobel. Next, you can find lines of a certain angle using a Gabor filter. Finally, you can force the found lines to be straight. This will only line up the horizontal chunks of the video which have some reference lines; you can extend this area by using motion compensation. All of this depends on what happens to be in the picture, but it extends the number of scenes where the technique can work. You could also extend it to various mathematical shapes like conic sections. The big question with intrinsic dejitter is finding some measure of "normality" between lines to make the image look natural. This is an unanswered question.
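A much-simplified sketch of the idea, in Python/NumPy: instead of the full Sobel + Gabor pipeline, just locate one strong, near-vertical edge on each scanline inside a window known to contain it, then shift each line so that edge becomes straight. The window bounds and the gradient-peak test are illustrative only:
Code:
import numpy as np

def dejitter_by_edge(frame, x0, x1):
    """frame: grayscale 2-D array; [x0, x1) is a window containing one
    near-vertical reference edge on every line."""
    grad = np.abs(np.diff(frame.astype(float), axis=1))   # horizontal gradient
    edge_x = x0 + np.argmax(grad[:, x0:x1 - 1], axis=1)   # edge column per line
    target = int(np.median(edge_x))                       # the "straight" position
    out = np.empty_like(frame)
    for y in range(frame.shape[0]):
        out[y] = np.roll(frame[y], target - int(edge_x[y]))
    return out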


Attached Images
File Type: jpg intrinsic TBC test.jpg (46.5 KB, 26 downloads)

  #19  
07-19-2012, 01:42 AM
lordsmurf's Avatar
lordsmurf lordsmurf is offline
Site Staff | Video
 
Join Date: Dec 2002
Posts: 13,633
Thanked 2,458 Times in 2,090 Posts
Quote:
The big question with intrinsic dejitter is finding some measure of "normality" between lines to make the image look natural. This is an unanswered question.
I almost wonder if Bayesian methods could be used, with sample images to train the processing. It's how teachable spam filters work. And that's sort of how high-end DSLRs work; for example, the Nikon D3 has a library of thousands of images to check against when calculating exposure values.

Ideally, a person would
- grab a few reference frames,
- use a quick auto+manual correction tool,
- let the algo auto-correct,
- then manually adjust the lines as needed,
- and then save it as a TIFF and feed it back into the processor.

That would necessitate both an image program, as well as a video filter.
^ The two could be integrated, but let's not make this more complicated than it already is.

It has to be content-aware, and have an ability to learn.

  #20  
07-19-2012, 03:05 AM
jmac698 jmac698 is offline
Free Member
 
Join Date: Dec 2010
Posts: 387
Thanked 73 Times in 56 Posts
Good thoughts! The problem of automatically finding lines in an image has been covered before; for example, it's used to find what should be 'straight' lines and then use those to figure out the barrel distortion of a lens. Or you can just click two points that should have a straight line between them and let the software line up all the points in between.

Another way to adapt would be to train a neural net, and there's already an Avisynth plugin for this. I know how to generate real training images for it too.

Btw, I think I can fix that whole scene from the example; the only reason it screwed up is that it was hard to see the lines on top of the background.