Overclockers Australia Forums

Old 12th January 2017, 8:44 AM   #166
ae00711
Member
 
ae00711's Avatar
 
Join Date: Apr 2013
Posts: 970
Default

Quote:
Originally Posted by NSanity View Post
Yes.

Use LSI/Avago HBA's + Storage Spaces + ReFS (w/ Integrity streams turned on) + Cluster Storage Volume.
Where do I check this?
I've just made my first storage space/pool a couple of days ago.
Do I have to have CSV enabled to combat bitrot? (I thought it was just non-RAID HBA + Storage Spaces + ReFS?)
__________________
I LOVE CHING LIU
SMOKING IS UN-AUSTRALIAN
I prefer email to PM!

ae00711 is offline   Reply With Quote

Old 12th January 2017, 10:36 AM   #167
wwwww
Member
 
wwwww's Avatar
 
Join Date: Aug 2005
Location: Melbourne
Posts: 4,091
Default

Quote:
Originally Posted by Doc-of-FC View Post
Fuck RAM, people are still unknowingly passing data to non-BBU write caches on disks. With 128MB caches these days on large tubs, that's insanity right there.

SLOG to your heart's content with SSDs that don't do power loss protection.

It's why I've got an Intel DC series SSD as my bcache caching device, with cache-disabled backing devices, all layered with btrfs on LUKS.

I invested in an E3 Xeon a few years back and built an all-in-one.
A cheaper option would be to just get a Samsung Pro and disable the disk cache; they perform remarkably well without it.
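For anyone who wants to try that, a minimal sketch of turning the drive's volatile write cache off and checking it stuck (assuming a Linux box with sdparm/hdparm installed; /dev/sdX is a placeholder for the target SSD):
Code:
# clear the Write Cache Enable (WCE) bit on the drive, then confirm it
sdparm --clear=WCE /dev/sdX
sdparm --get=WCE /dev/sdX      # should now report WCE 0
# hdparm offers the same toggle on ATA devices
hdparm -W 0 /dev/sdX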
__________________
wPrime 2.10 | Super PI 1.9

Quote:
Originally Posted by NSanity View Post
This is literally the worst advice on the internet, ever.
wwwww is online now   Reply With Quote
Old 12th January 2017, 3:29 PM   #168
Doc-of-FC
Member
 
Doc-of-FC's Avatar
 
Join Date: Aug 2001
Location: Canberra
Posts: 2,577
Default

Quote:
Originally Posted by wwwww View Post
A cheaper option would be to just get a Samsung Pro and disable the disk cache; they perform remarkably well without it.
The Intel DC S3500 was a remnant of a ZFS build, so its cost was $0. It has on-PCB capacitors for power loss protection, hence its use. The Samsung 850 Pro with write cache disabled probably wouldn't hit as hard as the Intel, and can't guarantee safe writes on unexpected power loss: http://www.storagereview.com/images/...-Board-Top.jpg

The reason for it is as follows: I host several KVM virtual machines on my [AIO] computer, pfSense, Windows (many editions) and lab servers. These compete for IO, especially during cold boots / restarts (a Linux kernel update, for example), and things start to get a bit messy with limited IO even on WD Blacks. With the write cache on the disks disabled as well, things would grind to a halt.

So, armed with an SSD that does power loss protection and safely writes its memory contents back to flash, I've got a fast disk cache (using bcache writeback for the VMs) on top of sync writes to the backing disks; once those complete, bcache clears the blocks from the SSD.

Now this setup isn't immune to every scenario under the sun, but it's a marked upgrade on the standard on-disk cache approach. I get massively reduced boot times because all the VMs' frequently hit random blocks are served by the SSD (bcache LRU policy), even with a cold block cache in Linux memory, and writeback gobbles data from SQL quite nicely. SQL ain't ACID-compliant if the design underneath it is flimsy.

Now if I wanted paranoid-level storage, I'd use ZFS on FreeBSD with mirrored SLOGs, which is where the DC S3500 came from.
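For reference, a rough sketch of building that kind of stack (bcache writeback on an SSD over a cache-disabled backing disk, with LUKS and btrfs on top). Device names, layering order and naming are assumptions, not Doc-of-FC's exact config:
Code:
# backing disk: disable its volatile write cache, then format it for bcache
hdparm -W 0 /dev/sdb
make-bcache -B /dev/sdb
# caching device: the power-loss-protected SSD
make-bcache -C /dev/sda
# attach the cache set to the backing device and switch it to writeback
# (the cache set UUID comes from bcache-super-show /dev/sda)
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach
echo writeback > /sys/block/bcache0/bcache/cache_mode
# layer LUKS and btrfs over the cached device
cryptsetup luksFormat /dev/bcache0
cryptsetup open /dev/bcache0 cached_data
mkfs.btrfs /dev/mapper/cached_data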
Doc-of-FC is offline   Reply With Quote
Old 12th January 2017, 3:43 PM   #169
wwwww
Member
 
wwwww's Avatar
 
Join Date: Aug 2005
Location: Melbourne
Posts: 4,091
Default

Quote:
Originally Posted by Doc-of-FC View Post
The Intel DC S3500 was a remnant of a ZFS build, so its cost was $0. It has on-PCB capacitors for power loss protection, hence its use. The Samsung 850 Pro with write cache disabled probably wouldn't hit as hard as the Intel, and can't guarantee safe writes on unexpected power loss: http://www.storagereview.com/images/...-Board-Top.jpg

The reason for it is as follows: I host several KVM virtual machines on my [AIO] computer, pfSense, Windows (many editions) and lab servers. These compete for IO, especially during cold boots / restarts (a Linux kernel update, for example), and things start to get a bit messy with limited IO even on WD Blacks. With the write cache on the disks disabled as well, things would grind to a halt.

So, armed with an SSD that does power loss protection and safely writes its memory contents back to flash, I've got a fast disk cache (using bcache writeback for the VMs) on top of sync writes to the backing disks; once those complete, bcache clears the blocks from the SSD.

Now this setup isn't immune to every scenario under the sun, but it's a marked upgrade on the standard on-disk cache approach. I get massively reduced boot times because all the VMs' frequently hit random blocks are served by the SSD (bcache LRU policy), even with a cold block cache in Linux memory, and writeback gobbles data from SQL quite nicely. SQL ain't ACID-compliant if the design underneath it is flimsy.

Now if I wanted paranoid-level storage, I'd use ZFS on FreeBSD with mirrored SLOGs, which is where the DC S3500 came from.
With the write cache disabled it won't report data as written until it has actually been written to the flash, so it does guarantee data retention as well as any capacitor/battery/capacitor+flash backed unit. The DC probably has more memory dedicated to parity (the 850 is a consumer drive) and so has a lower bit error rate, but the whole point of this thread is using filesystems that account for drive bit errors, isn't it?

We use 850 Pros with the disk cache disabled in a production environment (though with separate flash-backed DRAM) hosting many VM servers. We put the boot drives on mechanical disks; the SSDs are just for caches and databases, and they perform remarkably well.

Of course the DC with its cache is superior, but we're talking an order of magnitude more in cost per gigabyte. If you got it for free, then that's all good.
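If anyone wants to see what that acknowledgement path costs, a quick way to measure sync 4k write latency with the cache on versus off is fio. A sketch only; /dev/sdX is a placeholder, and note this writes directly to the device:
Code:
# random 4k writes, each followed by an fsync, so every write must be durable
fio --name=syncwrite --filename=/dev/sdX --rw=randwrite --bs=4k \
    --direct=1 --fsync=1 --runtime=30 --time_based --group_reporting
# run once with the write cache enabled and once with it disabled to compare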
__________________
wPrime 2.10 | Super PI 1.9

Quote:
Originally Posted by NSanity View Post
This is literally the worst advice on the internet, ever.
wwwww is online now   Reply With Quote
Old 12th January 2017, 7:05 PM   #170
CirCit
Member
 
Join Date: Apr 2002
Posts: 116
Default

Quote:
Originally Posted by NSanity View Post
Use LSI/Avago HBA's + Storage Spaces + ReFS (w/ Integrity streams turned on) + Cluster Storage Volume.
I thought the whole idea of Storage Spaces was to get away from vendor tie-ins?

I'm sure I'll have to work out in VMs what cluster and scale-out do differently, as I thought they did essentially the same thing.

Scrapping the Storage Spaces replication across servers, does ReFS interact with SMB3 at all? Like, does the hash make it to the other end so it can be checked that the data got there safely? If not, robocopy would probably still be my best bet.
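As far as I know the ReFS checksum stays on the local volume and isn't carried across SMB3, so the usual belt-and-braces approach is an end-to-end checksum pass after the copy. A minimal sketch with generic tools (paths are placeholders; on Windows, certutil or Get-FileHash do the same job):
Code:
# on the source: record a checksum for every file in the tree
cd /mnt/source_share && find . -type f -exec sha256sum {} + > /tmp/manifest.sha256
# copy with whatever tool you like (robocopy, rsync, smbclient, ...)
# on the destination: verify everything arrived intact
cd /mnt/dest_share && sha256sum -c /tmp/manifest.sha256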
__________________
Mi Goreng Noodle Club
CirCit is offline   Reply With Quote
Old 12th January 2017, 7:09 PM   #171
ae00711
Member
 
ae00711's Avatar
 
Join Date: Apr 2013
Posts: 970
Default

Quote:
Originally Posted by CirCit View Post
I thought the whole idea of Storage Spaces was to get away from vendor tie-ins?

I'm sure I'll have to work out in VMs what cluster and scale-out do differently, as I thought they did essentially the same thing.

Scrapping the Storage Spaces replication across servers, does ReFS interact with SMB3 at all? Like, does the hash make it to the other end so it can be checked that the data got there safely? If not, robocopy would probably still be my best bet.
I'm quite certain NSanity just used LSI as an example, as LSI is by far the most popular non-RAID ('IT' mode pass-through) HBA out there, particularly for home server enthusiasts.
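If you want to confirm what a given LSI card is actually running, LSI's sas2flash utility will list the controller and its firmware; IT (initiator-target) firmware is the pass-through one. A quick sketch, assuming a SAS2-generation card and the sas2flash binary installed:
Code:
lspci | grep -i lsi     # confirm the controller is visible
sas2flash -listall      # lists each LSI SAS2 controller and its firmware version
sas2flash -list -c 0    # detailed info for controller 0, including the firmware product ID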
__________________
I LOVE CHING LIU
SMOKING IS UN-AUSTRALIAN
I prefer email to PM!

ae00711 is offline   Reply With Quote
Old 12th January 2017, 7:49 PM   #172
NSanity
Member
 
NSanity's Avatar
 
Join Date: Mar 2002
Location: Canberra
Posts: 15,813
Default

Quote:
Originally Posted by CirCit View Post
I thought the whole idea of Storage Spaces was to get away from vendor tie-ins?
HAITCH BEEE AYYYY

Not a RAID card.

LSI's shit works. Those IBM-rebadged, cross-flashed cards all the kids pick up on eBay for $15?

LSI.
NSanity is offline   Reply With Quote
Old 13th January 2017, 7:48 AM   #173
elvis Thread Starter
Old school old fool
 
elvis's Avatar
 
Join Date: Jun 2001
Location: Brisbane
Posts: 28,122
Default

Quote:
Originally Posted by NSanity View Post
LSI's shit works.
Being the pedantic dick I am for just a moment, LSI's shit started working when they bought out 3WARE, and stole all their tech. Prior to that everything LSI was a big proprietary mess, and Solaris/BSD/Linux support was next to zero (which is why all we *nix admins bought only 3WARE cards back in the day). But I digress...

Avago have since bought LSI, and now I see Broadcom have bought Avago. Round and round we go.
__________________
Play old games with me!
elvis is offline   Reply With Quote
Old 13th January 2017, 7:55 PM   #174
Doc-of-FC
Member
 
Doc-of-FC's Avatar
 
Join Date: Aug 2001
Location: Canberra
Posts: 2,577
Default

Quote:
Originally Posted by wwwww View Post
Of course the DC with its cache is superior, but we're talking an order of magnitude more in cost per gigabyte. If you got it for free, then that's all good.
Not really. I consider the SSD $0 because it's already served its purpose; admittedly it wasn't cheap at the time. The 160GB DC S3500 was about $1.50 a GB, whereas my 256GB 850 Pro was about $1.00 a GB.

Out of interest I grabbed my spare 160GB S3500, which has taken a thrashing as a SLOG in its past life and hasn't been trimmed, took my frequently trimmed 256GB Windows 850 Pro, and ran them both through dd and flashbench on the same HBA.

Results, tests in order: dd (850 Pro, then DC S3500), followed by drive identification, then flashbench (850 Pro, then DC S3500).
Code:
P9D-M flashbench-dev # dd if=/dev/urandom of=/dev/sde bs=4k count=100 conv=fdatasync
100+0 records in
100+0 records out
409600 bytes (410 kB, 400 KiB) copied, 0.00882489 s, 46.4 MB/s
P9D-M flashbench-dev # dd if=/dev/urandom of=/dev/sde bs=4k count=100 conv=fdatasync
100+0 records in
100+0 records out
409600 bytes (410 kB, 400 KiB) copied, 0.0111258 s, 36.8 MB/s
P9D-M flashbench-dev # dd if=/dev/urandom of=/dev/sde bs=4k count=100 conv=fdatasync
100+0 records in
100+0 records out
409600 bytes (410 kB, 400 KiB) copied, 0.0104273 s, 39.3 MB/s
P9D-M flashbench-dev # dd if=/dev/urandom of=/dev/sde bs=4k count=100 conv=fdatasync
100+0 records in
100+0 records out
409600 bytes (410 kB, 400 KiB) copied, 0.011564 s, 35.4 MB/s
P9D-M flashbench-dev # dd if=/dev/urandom of=/dev/sde bs=4k count=100 conv=fdatasync
100+0 records in
100+0 records out
409600 bytes (410 kB, 400 KiB) copied, 0.0111702 s, 36.7 MB/s
P9D-M flashbench-dev # dd if=/dev/urandom of=/dev/sde bs=4k count=100 conv=fdatasync
100+0 records in
100+0 records out
409600 bytes (410 kB, 400 KiB) copied, 0.0148941 s, 27.5 MB/s
Code:
P9D-M flashbench-dev # dd if=/dev/urandom of=/dev/sda bs=4k count=100 conv=fdatasync
100+0 records in
100+0 records out
409600 bytes (410 kB, 400 KiB) copied, 0.00313566 s, 131 MB/s
P9D-M flashbench-dev # dd if=/dev/urandom of=/dev/sda bs=4k count=100 conv=fdatasync
100+0 records in
100+0 records out
409600 bytes (410 kB, 400 KiB) copied, 0.00897152 s, 45.7 MB/s
P9D-M flashbench-dev # dd if=/dev/urandom of=/dev/sda bs=4k count=100 conv=fdatasync
100+0 records in
100+0 records out
409600 bytes (410 kB, 400 KiB) copied, 0.00332635 s, 123 MB/s
P9D-M flashbench-dev # dd if=/dev/urandom of=/dev/sda bs=4k count=100 conv=fdatasync
100+0 records in
100+0 records out
409600 bytes (410 kB, 400 KiB) copied, 0.00638372 s, 64.2 MB/s
P9D-M flashbench-dev # dd if=/dev/urandom of=/dev/sda bs=4k count=100 conv=fdatasync
100+0 records in
100+0 records out
409600 bytes (410 kB, 400 KiB) copied, 0.0076655 s, 53.4 MB/s
P9D-M flashbench-dev # dd if=/dev/urandom of=/dev/sda bs=4k count=100 conv=fdatasync
100+0 records in
100+0 records out
409600 bytes (410 kB, 400 KiB) copied, 0.00888064 s, 46.1 MB/s
P9D-M flashbench-dev # dd if=/dev/urandom of=/dev/sda bs=4k count=100 conv=fdatasync
100+0 records in
100+0 records out
409600 bytes (410 kB, 400 KiB) copied, 0.00399582 s, 103 MB/s
Code:
P9D-M nas # sdparm -i /dev/sde && sdparm --get=WCE /dev/sde
    /dev/sde: ATA       Samsung SSD 850   1B6Q
Device identification VPD page:
  Addressed logical unit:
    designator type: NAA,  code set: Binary
      0x50025388a06dfbe6
    /dev/sde: ATA       Samsung SSD 850   1B6Q
WCE         1  [cha: y]
P9D-M nas # sdparm -i /dev/sda && sdparm --get=WCE /dev/sda
    /dev/sda: ATA       INTEL SSDSC2BB16  2370
Device identification VPD page:
  Addressed logical unit:
    designator type: vendor specific [0x0],  code set: ASCII
      vendor specific: BTWL342306Q6160MGN  
    designator type: T10 vendor identification,  code set: ASCII
      vendor id: ATA     
      vendor specific: INTEL SSDSC2BB160G4                     BTWL342306Q6160MGN  
    designator type: NAA,  code set: Binary
      0x55cd2e404b4f23b7
    /dev/sda: ATA       INTEL SSDSC2BB16  2370
WCE         1  [cha: y, def:  1]
850 PRO 4k flash bench
Code:
P9D-M flashbench-dev # ./flashbench -a --blocksize=4096 /dev/sde
align 68719476736   pre 88.7µs   on 90.5µs   post 84µs     diff 4.13µs
align 34359738368   pre 95.5µs   on 101µs    post 95.9µs   diff 5.33µs
align 17179869184   pre 93.9µs   on 101µs    post 96.4µs   diff 6.32µs
align 8589934592    pre 97.6µs   on 100µs    post 94.4µs   diff 4.51µs
align 4294967296    pre 83.9µs   on 89µs     post 84µs     diff 5.06µs
align 2147483648    pre 99.3µs   on 109µs    post 107µs    diff 6.53µs
align 1073741824    pre 92.2µs   on 99.8µs   post 96.3µs   diff 5.58µs
align 536870912     pre 93.5µs   on 101µs    post 96.2µs   diff 5.76µs
align 268435456     pre 92.3µs   on 98.7µs   post 95.8µs   diff 4.68µs
align 134217728     pre 96.9µs   on 101µs    post 95.4µs   diff 5.12µs
align 67108864      pre 85.2µs   on 89.8µs   post 85.2µs   diff 4.6µs
align 33554432      pre 85.3µs   on 89.6µs   post 84.5µs   diff 4.74µs
align 16777216      pre 77.5µs   on 80.6µs   post 74.1µs   diff 4.79µs
align 8388608       pre 83.3µs   on 89.5µs   post 86.4µs   diff 4.59µs
align 4194304       pre 86.7µs   on 91.6µs   post 83.9µs   diff 6.25µs
align 2097152       pre 85.9µs   on 90.9µs   post 85.1µs   diff 5.42µs
align 1048576       pre 86.5µs   on 88.4µs   post 82.6µs   diff 3.86µs
align 524288        pre 85.8µs   on 89.6µs   post 85.2µs   diff 4.11µs
align 262144        pre 85.9µs   on 92.3µs   post 87.2µs   diff 5.79µs
align 131072        pre 84.7µs   on 90.2µs   post 85.1µs   diff 5.25µs
align 65536         pre 85.9µs   on 89.5µs   post 84.6µs   diff 4.33µs
align 32768         pre 82.6µs   on 88.9µs   post 85.8µs   diff 4.7µs
align 16384         pre 83.9µs   on 88.9µs   post 84.6µs   diff 4.64µs
align 8192          pre 85.3µs   on 89µs     post 84.5µs   diff 4.05µs
DC S3500 4k flash bench
Code:
P9D-M flashbench-dev # ./flashbench -a --blocksize=4096 /dev/sda
align 34359738368   pre 32.5µs   on 33.6µs   post 33.3µs   diff 718ns
align 17179869184   pre 33.8µs   on 32.9µs   post 33.1µs   diff -568ns
align 8589934592    pre 33.2µs   on 33.9µs   post 32.9µs   diff 866ns
align 4294967296    pre 43.9µs   on 42.4µs   post 34.3µs   diff 3.32µs
align 2147483648    pre 42.1µs   on 39.7µs   post 40.6µs   diff -1684ns
align 1073741824    pre 27.7µs   on 28.7µs   post 28.1µs   diff 797ns
align 536870912     pre 29.1µs   on 29.6µs   post 29.8µs   diff 94ns
align 268435456     pre 42.4µs   on 41.1µs   post 42.7µs   diff -1495ns
align 134217728     pre 40.5µs   on 40.7µs   post 42.3µs   diff -708ns
align 67108864      pre 41.7µs   on 40.4µs   post 41.6µs   diff -1233ns
align 33554432      pre 42.2µs   on 40.1µs   post 41.7µs   diff -1876ns
align 16777216      pre 43.1µs   on 41µs     post 40.8µs   diff -966ns
align 8388608       pre 40.2µs   on 39.9µs   post 40µs     diff -244ns
align 4194304       pre 41µs     on 44.3µs   post 44.9µs   diff 1.36µs
align 2097152       pre 46.2µs   on 43.9µs   post 41.8µs   diff -72ns
align 1048576       pre 45µs     on 44.1µs   post 42.3µs   diff 404ns
align 524288        pre 40.9µs   on 42.3µs   post 42.4µs   diff 592ns
align 262144        pre 39.7µs   on 37.3µs   post 38.7µs   diff -1812ns
align 131072        pre 40µs     on 37.8µs   post 38.6µs   diff -1528ns
align 65536         pre 44.4µs   on 43.8µs   post 40µs     diff 1.62µs
align 32768         pre 39.8µs   on 37.7µs   post 40.6µs   diff -2508ns
align 16384         pre 39.7µs   on 38.9µs   post 40.3µs   diff -1120ns
align 8192          pre 39.5µs   on 37.3µs   post 40.4µs   diff -2694ns

What's interesting is that the 4k write throughput is quite volatile on the DC S3500, more than I believed it would be. The 850 Pro, by comparison, was quite stable on that metric through the quick and dirty 4k test.

The flashbench test shows the real value of the S3500 architecture: using a lookup index for blocks rather than an internal B-tree, it delivers data with half the latency of the 850 Pro.
Doc-of-FC is offline   Reply With Quote
Old 14th January 2017, 8:05 AM   #175
Perko
Member
 
Perko's Avatar
 
Join Date: Aug 2011
Location: NW Tasmania
Posts: 1,736
Default

Quote:
Originally Posted by NSanity View Post
HAITCH BEEE AYYYY

Not a RAID card.

LSI's shit works. Those IBM-rebadged, cross-flashed cards all the kids pick up on eBay for $15?

LSI.
Aitch*

Quote:
Originally Posted by elvis View Post
Being the pedantic dick I am for just a moment, LSI's shit started working when they bought out 3WARE, and stole all their tech. Prior to that everything LSI was a big proprietary mess, and Solaris/BSD/Linux support was next to zero (which is why all we *nix admins bought only 3WARE cards back in the day). But I digress...
I remember the first server that I got to re-purpose, running an old version of SuSE with one of these, or something very similar in it. It had been powered down for six months, and when I fired it up, four out of the twelve old SCSI drives were clicking like champions, and the old AT full tower gave me a tingle just to say hi. Fun times.
__________________
Main: Phanteks Enthoo Primo/Enermax Platimax 850/MSI X99A SLI Plus/i7-5820k @ 4.4GHz/Noctua NH-D15/Corsair Vengeance 3000MHz Low Profile/Galax GTX 1080 EXOC/ASUS Xonar STU + Beyer T70/Samsung 950 Pro 512GB + 1TB Caviar Black/Win 10 Pro/Cherry Compuregister - MX Clears/Mionix Naos 3200/X-Star DP2710 Glossy
Notebook: ASUS ROG G53SW w/ Win 10 + Ubuntu 16.04
Perko is offline   Reply With Quote
Old 16th January 2017, 2:15 PM   #176
rainwulf
Member
 
Join Date: Jan 2002
Location: bris.qld.aus
Posts: 3,879
Default

Quote:
Originally Posted by Diode View Post
Fantastic... out of 29,000 photos it found 257 with mismatched hashes. Fun times going through, manually opening each photo and checking which ones are not damaged.
This is the main reason I went to ZFS. NTFS bitrot started hitting me back when I started using 2TB disks on hardware RAID 5.
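For anyone new to this, the ZFS equivalent of that manual hash check is a scrub: it reads every block, verifies it against its stored checksum and repairs from redundancy where it can. A minimal sketch (the pool name 'tank' is a placeholder):
Code:
zpool scrub tank        # walk the whole pool, verifying every block's checksum
zpool status -v tank    # CKSUM column shows errors found/repaired; any unrecoverable files are listed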
__________________
derp
rainwulf is offline   Reply With Quote
Old 16th January 2017, 2:31 PM   #177
rainwulf
Member
 
Join Date: Jan 2002
Location: bris.qld.aus
Posts: 3,879
Default

Quote:
Originally Posted by MUTMAN View Post
I thought it might take some head thrashing out of seeding lots of blocks?
As a person with an over-100TB file server using 8TB disks for just movies and stuff, the L2ARC is an utter waste of time. It doesn't make any difference at all for media serving, and that's with 8TB archive SMR drives, not even standard drives.

The archive drives still read as fast as a normal hard drive, so 16 8TB drives have plenty of performance to fill a 1Gb connection without wasting an SSD.
__________________
derp
rainwulf is offline   Reply With Quote
Old 16th January 2017, 2:47 PM   #178
rainwulf
Member
 
Join Date: Jan 2002
Location: bris.qld.aus
Posts: 3,879
Default

Quote:
Originally Posted by Perko View Post
Aitch*



I remember the first server that I got to re-purpose, running an old version of SuSE with one of these, or something very similar in it. It had been powered down for six months, and when I fired it up, four out of the twelve old SCSI drives were clicking like champions, and the old AT full tower gave me a tingle just to say hi. Fun times.
When I first saw that card I needed some time to myself....

ahem *cough*

That was back in the day when I was JBODing 200GB disks with Silicon Image PATA cards.

BTW, you posted a PATA card; a SCSI RAID setup wouldn't have used something like that. It would have been one single card, and a relatively long one due to the RAM cache.
__________________
derp
rainwulf is offline   Reply With Quote
Old 16th January 2017, 4:26 PM   #179
MUTMAN
Member
 
MUTMAN's Avatar
 
Join Date: Jun 2001
Location: brisvegas
Posts: 4,110
Default

Quote:
Originally Posted by rainwulf View Post
As a person with an over-100TB file server using 8TB disks for just movies and stuff, the L2ARC is an utter waste of time. It doesn't make any difference at all for media serving, and that's with 8TB archive SMR drives, not even standard drives.

The archive drives still read as fast as a normal hard drive, so 16 8TB drives have plenty of performance to fill a 1Gb connection without wasting an SSD.
Yep, it's been noted by others also that for just serving up media it's a waste.
But I have the SSD here doing nothing anyway.
And if I can spin down a mechanical drive and let the torrent seeding hit a low-power SSD, then that's a win for me.
Heat and power are higher on the priority list for me than for most others, I'd say.
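If it helps anyone doing the same, a rough sketch of letting the mechanical drive spin down with hdparm (the timeout value and device node are assumptions; standby support varies by drive):
Code:
hdparm -S 120 /dev/sdX    # set the standby timeout (120 x 5s = 10 minutes idle)
hdparm -y /dev/sdX        # put the drive into standby right now
hdparm -C /dev/sdX        # check the current power state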
MUTMAN is offline   Reply With Quote
Old 16th January 2017, 5:52 PM   #180
NSanity
Member
 
NSanity's Avatar
 
Join Date: Mar 2002
Location: Canberra
Posts: 15,813
Default

Quote:
Originally Posted by rainwulf View Post
As a person with an over-100TB file server using 8TB disks for just movies and stuff, the L2ARC is an utter waste of time. It doesn't make any difference at all for media serving, and that's with 8TB archive SMR drives, not even standard drives.

The archive drives still read as fast as a normal hard drive, so 16 8TB drives have plenty of performance to fill a 1Gb connection without wasting an SSD.
Not really sure of the use case for L2ARC.

My L2 hit rate on a virtual host was ~4% (ARC was ~60%).

An awful lot of $ for fuck-all effectiveness.
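For anyone wanting to check their own numbers, the raw counters are easy to pull. A sketch for ZFS on Linux (FreeBSD exposes the same stats via sysctl kstat.zfs.misc.arcstats, and arc_summary prints them nicely if it's installed):
Code:
# ARC and L2ARC hit/miss counters from the kernel stats
awk '$1 ~ /^(hits|misses|l2_hits|l2_misses)$/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats
# or, with the ZFS utilities installed:
arc_summary | grep -i "hit ratio"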
NSanity is offline   Reply With Quote