
Suggestions for a high performance system ?

Discussion in 'General Hardware' started by Shackley, 14 Jul 2006.

  1. Shackley


    Joined: 4 Aug 2003

    Posts: 3,054

    Hi, I would appreciate suggestions for a CPU and motherboard for a workstation to be used for analysing geological survey data. The data sets are huge - tens of GB. Processing this data can take in excess of a day, so processing performance is important. The software is NOT multi-threaded. The system will not be used for other tasks while running the analysis, and it will NOT be used to run games, play music, watch videos, etc.

    I have been in touch with the software developers, who say that I should look for a fast CPU with lots of fast RAM. I was thinking along the lines of an FX-60 with 3GB of PC3200. I have heard suggestions that AM2 is not yet 'proven' and is no longer ahead of Intel CPUs anyhow. I plan to run 3 x 19" TFT screens using two PCI-E graphics cards - NOT in SLI mode. HDDs will be some PATA disk for OS & software and a 500GB SATA disk for data. None of these items are an issue. The OS will probably be Windows 2000 SP4. Price is less of an issue than performance. Water cooling is out.

    Can anyone help? Many thanks in advance.
  2. semi-pro waster

    Man of Honour

    Joined: 27 Sep 2004

    Posts: 25,829

    Location: Glasgow

    Why PATA for the OS drive? You are probably better off going with Raptors, or maybe some sort of RAID system (RAID 5 with an aftermarket card?) for the whole system - that would also give you some form of data security.

    An FX-60 is a dual-core chip, which you say you don't need/want, and it is still pretty expensive when something like the X2 4400+ (also dual-core) will offer 90% of the performance at about half the price, plus it could be overclocked. If the software truly is single-threaded, then for around a fifth of the price of an FX-60 you could go for an A64 4000+ San Diego.

    I'd probably opt for 4GB of RAM as it is an easier configuration (or at least neater, to my mind, than 2x1GB and 2x512MB), although running 4x1GB might mean you have to run the RAM at 2T rather than 1T. If you can get 2x2GB, that might be best.
  3. Tigjaw


    Joined: 6 Oct 2005

    Posts: 659

    Location: West Midlands

    You may want to think about faster RAM; PC3200 doesn't really take full advantage of that much RAM. I'm getting G.Skill's PC2-6400 2GB dual-channel kit, and I doubt my computer will be used anywhere near as heavily. Think about at least PC2-5300.

    Also, I'm pretty sure that if you have more than 2GB of RAM you'll need the XP Professional operating system - Home doesn't recognise more than that amount of RAM.
  4. gamesaregood


    Joined: 17 Sep 2003

    Posts: 2,809

    Location: Suffolk


    Why go for the FX-60 if it's a single-threaded app?

    I would pick up a second-hand Opteron, which will do 2.8-3GHz on air, or an FX-57 if you can find one.

    Use a SATA drive for boot, maybe even a nice Raptor?

  5. mosfet

    Wise Guy

    Joined: 28 Sep 2004

    Posts: 1,152

    Location: London

    If you're serious about this then go for server equipment. Something like a Tyan mainboard, lots of RAM and a fast server chip.

    Even though you say the application is single-threaded, a dual-core system will allow you to fully utilise one core without impacting general OS performance. I believe Woodcrest (if you can get it) also utilises the ALU of an idle core to boost branch prediction performance for the loaded core, so single-threaded apps benefit.

    For hard drives, I'd recommend a Raptor (36GB or 74GB) for the OS drive and Seagate perpendicular (7200.10) drives for data - either 2x250GB in RAID 0 or 1x500GB. If you need redundancy, use another level of RAID (5, 6 or 10).
  6. Shackley


    Joined: 4 Aug 2003

    Posts: 3,054

    I am serious about this. However, I can't really see the benefit of a 'server' MoBo. I have previously used Tyan & Gigabyte boards with Opterons. Personally, I am not impressed with Tyan as a company - if you are using hundreds or thousands of their boards a year they may be OK, but not if you are using fewer than ten.

    It seems that mainstream MoBos offer better overclocking options and are a whole lot cheaper - they may have stuff that you don't need/want, e.g. sound, USB & FireWire, but who cares?

    'Lots of RAM' - great if Windows 2k (or XP Pro) were actually capable of using more than about 3.5GB - they don't seem to be able to, though. If I am wrong here (which I may well be), I would like to hear from anyone about using 4GB or more with Windows 2k or XP Pro.

    Fast server chip - what did you have in mind?

    Accepted and anyhow, replacement single-core CPUs may not be available for much longer.

    Boot time and O/S access isn't really an issue. PATA seems a whole lot more 'predictable' at the moment (no [F6], etc.).

    I had considered RAID 5 for the data disk but tbh, again disk I/O time probably isn't a real issue. As I understand it, CPU and RAM performance seem to be absolutely key in this environment.

    'FX-60' - for the reasons mentioned above.
    'Second-hand or FX-57' - basically for reliability and future 'repair' reasons.
  7. mosfet

    Wise Guy

    Joined: 28 Sep 2004

    Posts: 1,152

    Location: London

    I don't understand - is this a serious application? If so, you should not be overclocking. Running a chip at stock, slightly overvolted for stability and well cooled, is what you should be doing.

    I recommended a server board so you can use ECC Registered memory and the built-in video, and benefit from the throughput of a server-class chipset.

    32-bit Windows, at least XP, can see up to 4GB AFAIK, but only 2GB can be assigned to any single running process. 64-bit Windows (XP x64, Server 2003, etc.) can address multiple TBs of RAM, so no restrictions there.
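
    A quick back-of-envelope check of those figures (the 4GB ceiling is simply 2^32 bytes; the 2GB-per-process figure assumes the default 32-bit Windows user/kernel address split - a boot-time /3GB switch can raise the user share, but that's not considered here):

    ```python
    # Back-of-envelope check of 32-bit Windows memory limits.
    # Assumes the default 2GB user / 2GB kernel virtual address split.

    GIB = 1024 ** 3

    addressable = 2 ** 32                  # total 32-bit address space, in bytes
    default_user_space = addressable // 2  # user-mode share of one process's space

    print(addressable // GIB)          # → 4 (GiB of total address space)
    print(default_user_space // GIB)   # → 2 (GiB usable by one process by default)
    ```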

    I don't exactly know your application, but by lots of RAM I simply meant more than the standard 1GB, should've made myself a little clearer.

    As for the Raptor - or any other small, fast OS drive, for that matter - it was recommended to eliminate a potential bottleneck in the form of a slow pagefile. It depends on the amount of data you're going to be regularly swapping, I suppose.

    If you don't want to go down this path, then select a motherboard with a good reputation for reliability, quality RAM, a basic graphics card and a solid dual core CPU.

    Initial thoughts:

    ASUS A8N family
    AMD Opteron 180
    Corsair 2GB DDR XMS3200PT TwinX
    nVidia 6200LE or similar
    Western Digital WD800JB (fast, reliable PATA OS drive)
    Seagate 7200.10 500GB
    Zalman 7700 or 9500 CPU HSF

    Just to clear things up, there's no problem installing Windows on a SATA drive. You don't need to press F6 unless you're running a RAID setup. I've never had to install SATA drivers for any Windows install on a SATA drive.
  8. HicRic


    Joined: 13 May 2006

    Posts: 287

    Location: england

    When it comes to performance CPUs these days, surely Conroe should be a big consideration? Just thought I'd throw that out there since nobody's mentioned it yet. :)
  9. Shackley


    Joined: 4 Aug 2003

    Posts: 3,054

    Thanks for all that, I understand better now.

    I'm not sure what your concern with overclocking is - reliability, I guess?

    I take your point about ECC memory, but the built-in video isn't of any use (I need to run 3 screens).

    I haven't used XP 64-bit and had therefore not considered that it can use more than 4GB.

    I had planned to use a ‘fast’ data drive and to place the swapfile on that. Likewise, I hadn’t realised that Windows XP could automatically ‘pick up’ a SATA drive.

    For what it is worth, the application is STRATA from Hampson Russell - http://www.hampson-russell.com/vhr/bins/content_page.asp?cid=1886-1899-2102

    Again, many thanks – much food for thought there.

    It should, it’s just that I have enough problems trying to keep some sort of track of AMD CPUs without getting into Intel – my limitation, sorry.
  10. helpimcrap


    Joined: 13 Jan 2004

    Posts: 12,548

    Location: Leicestershire

    Surely you wouldn't be OCing though, would you? Even a slight hiccup and you could lose a day's worth of work.
  11. Shackley


    Joined: 4 Aug 2003

    Posts: 3,054

    Yes, and that does cause me concern.

    This suggests that you (and others) feel that OCing significantly reduces stability. I don't have much experience in this area, certainly none with massive OCing. However, I had got the idea that it need not have a detrimental impact on stability?

    You are absolutely right, I do want a stable, reliable system. However, if I can squeeze extra performance from it by changing voltages, FSB speed, CPU multiplier, memory timings AND cooling, I am keen to do whatever I can.
  12. BillytheImpaler

    Man of Honour

    Joined: 2 Aug 2005

    Posts: 8,741

    Location: Cleveland, Ohio, USA

    OC'ing is a bad idea when you're working with mission-critical data. Any minor hiccup that you would never notice playing a game or surfing the web may be disastrous for your calculation. Imagine that the processor is adding 1000 numbers together and the sum ends up being one number off. That could seriously mangle your work, and you don't want that.

    You should be looking at the new Woodcrest chips. A good, fast one would do nicely. Beyond that you might want to consider a Core 2 Extreme, the fastest Conroe. Both those chips should be able to beat an AMD FX chip at the same clockspeed.

    How much memory does your analysis program want to use? 4 GiB of quick DDR2 might not be a bad idea.
  13. Shackley


    Joined: 4 Aug 2003

    Posts: 3,054

    Yup, but the Conroe chips aren't even available yet, and lead times are likely to be long? And the X6800 Core 2 Extreme is likely to cost an absolute fortune.

    My understanding is that it will use whatever it can - within the constraints of Windows 2k or XP 32-bit (or Solaris, IRIX or RedHat). In a Windows environment, this probably means somewhere between 3.5 and 4GB.

    I hear everything everyone seems to be saying about Overclocking - basically, "don't do it" if you value your data. :(
  14. BillytheImpaler

    Man of Honour

    Joined: 2 Aug 2005

    Posts: 8,741

    Location: Cleveland, Ohio, USA

    The Conroes from OcUK should ship in the next three or so days according to their web site. Woodcrests shipped a month ago. :)

    For what OS is the analysis program? 32-bit memory addressing allows for up to 4096 MiB of main memory, so anything lost below that is the fault of the motherboard, IIRC.

    You say it uses as much memory as it can get. How much virtual memory is it using, then? If it's using a ton, you should move up to a 64-bit OS and address a ton of memory. If even that is not enough to hold it all, you will need to consider a fast disk array. RAID 5 with 10k or 15k SCSI drives would work nicely.
  15. Stelly


    Joined: 5 Oct 2005

    Posts: 11,043

    Location: Liverpool

    I say Conroe as well :)

  16. brightside_


    Joined: 22 Jan 2006

    Posts: 494

    Location: Lincoln

    When I started overclocking, being new to it, I used to pick up quite a few errors in Prime95 - rounding errors mainly. With important data, that could affect things big time. It might not even show itself during the preliminary stability tests, and you'd never find out.
  17. Psycho Sonny


    Joined: 21 Jun 2006

    Posts: 36,137

    Yeah, you really need a Conroe for that amount of processing.
  18. drunkenmaster


    Joined: 18 Oct 2002

    Posts: 33,194

    OK, server chipsets aren't particularly "faster" - they don't offer more throughput. The Woodcrest does have a 1333MHz FSB, but that will make minimal difference, and if, as usual, server clock speeds are a grade or two behind desktop speeds, the extra CPU speed will matter more.

    The main thing: if these are 10GB files, hard drive access is most likely going to be the single biggest bottleneck. It depends entirely on how the data is being processed. For instance, if you are repairing a 4GB DVD image with parity files, you ideally want the entire file in memory, as the CPU needs constant access to multiple parts across the whole 4GB file; with only 2GB of memory you constantly page to the hard disk and end up waiting almost entirely on the drives. Parity files have to access the whole file - your program might not need such constant access to the whole 10GB, but merely part of it at a time. Run the app and tell us how much memory and virtual memory it uses (you can go to View > Columns in Task Manager to show virtual memory per process), and also check the Performance tab of Task Manager: committed memory and page file usage. If these are basically maxing you out, you want as much memory and as fast HDDs as you can get.
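
    The paging concern above can be put in rough numbers. This sketch uses purely illustrative figures (the 1.5GB of "usable" RAM after the OS and application code is an assumption, not a measurement):

    ```python
    # Rough sketch of the working-set vs. RAM problem described above.
    # All figures are illustrative assumptions, not measurements.

    GB = 10 ** 9

    def paged_fraction(dataset_bytes, usable_ram_bytes):
        """Fraction of the dataset that cannot stay resident in RAM at once."""
        if dataset_bytes <= usable_ram_bytes:
            return 0.0
        return (dataset_bytes - usable_ram_bytes) / dataset_bytes

    # A 10GB survey file vs. ~1.5GB of RAM left over for data:
    print(round(paged_fraction(10 * GB, 1.5 * GB), 2))  # → 0.85
    ```

    If the program really does need random access across the whole file, that 85% is what keeps getting swapped in and out - which is why the memory and Task Manager figures matter so much here.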

    The OS drive won't make much of a difference here - go with the PATA. Get 2x 320GB Seagate 7200.10s and RAID 1 them, with a third drive to back up all the data; or run 4-5 drives in RAID 5. With RAID 0 you'd have to back up your data manually, but it is basically the fastest/easiest way to use RAID for speed.

    For the CPU: how much cash can you spend on this computer? Assuming your CPU is running at 100% all the time, grab an X6800 Conroe, or drop down a speed grade based on your available budget. It's got the strongest FPU and should outperform everything else quite easily. The latest motherboards (I assume AM2 ones as well, but I haven't been looking) can support 8GB of memory. That would mean Windows XP Pro x64, or a server edition, but you can definitely use all the memory. 2GB sticks are far, far more expensive than 1GB sticks. So again, check the performance info in Task Manager - we need to know if you are CPU-limited, HDD-limited, memory-limited, or all three. If the program is mainly using CPU and accessing data slowly, then more memory and a killer RAID 0 setup might not help much. You might need all three maxed out, in which case, if it fits your budget, 4x2GB sticks will be your very best option.

    Definitely let us know your budget and what the CPU usage, page file and physical memory usage figures are telling you. That will give us a clear definition of where you need to upgrade and where you really might not gain anything.

    As for graphics, there's the Matrox triple-head card, which would give you three screens from one card - not sure how much they go for any more (or, tbh, if they are even still available). Or a couple of Matrox G550s (I think that's what they're called). I can only find one PCI-E Matrox card right now, and it's not triple-head. There's also an external box you can connect up outside the computer - to any graphics card, I think, or maybe just Matrox's - to give an extra display. I think if you have two nVidia cards and don't hit the "enable SLI" function you can run three displays, but in SLI you can only run one - though that might be two displays normally and only one in SLI; I haven't used SLI in a long time.

    OK, a PCI dual-head card can be had for £60-ish, a PCI-E dual-head for around £80. Mix and match, or two of either - it shouldn't really matter either way, I don't think.
    Last edited: 15 Jul 2006
  19. Shackley


    Joined: 4 Aug 2003

    Posts: 3,054

    Very sensible comments and questions. To be honest, I don't really know the answers to most of them, I will have to research and speak again to the developers and will post a reply when I know more.

    As to budget, the user is currently using a Windows version of Strata. They approached me about converting to use a Linux version. Hampson Russell (HR - the developers of Strata) said that the cost just to migrate from the Windows version to the Linux version would be US$ 4,200 - this is NOT for a new license! Additionally, the user wants an additional, faster workstation. He also uses another application (2d/3dPAK from Seismic Micro-Technology - http://www.seismicmicro.com/Product/Product-2d3dPAK.htm) that 'benefits from' multiple, high resolution screens.

    Hampson Russell also said that, on the same hardware, they wouldn't expect a Linux version to be any faster than the Windows version currently in use. The application is currently not multi-threaded and is not a 64-bit implementation - these changes are currently in a 'transitional phase', whatever that may mean in practice - probably initially for Solaris and Linux, then for Windows.

    It is my understanding that Strata is CPU/memory intensive, not disk I/O bound. However, if the data runs to 10GB, it must get from the hard disk(s) to RAM and back somehow :confused:
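
    A back-of-envelope estimate suggests that one sequential pass over the data is cheap compared with a day of CPU time. The 60 MB/s sustained read figure below is an assumed typical number for a 7200rpm SATA drive of this era, not a measurement:

    ```python
    # Rough transfer-time estimate for reading the survey data off disk once.
    # 60 MB/s sustained read is an assumed figure, not a measured one.

    def read_time_minutes(dataset_gb, throughput_mb_s=60):
        """Minutes to stream dataset_gb gigabytes at the given throughput."""
        return dataset_gb * 1000 / throughput_mb_s / 60

    print(round(read_time_minutes(10)))  # → 3 (minutes for one pass over 10GB)
    ```

    So if Strata mostly streams the data rather than thrashing back and forth across it, disk I/O really should be a rounding error next to the day-long compute.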