PC spec for Microsoft Access

After the thread about Vista I wondered if I could pick your brains on a variation.

We need new PCs at work for running some rather hefty Microsoft Access databases, and I am trying to find out what makes Access work quickly. We are connected to a network, and obviously running with any linked databases on the local drive rather than over the network speeds things up, but this is not always suitable.

The databases contain tables, or links to tables, with a million or more records in them, and when compacted they are in the 0.5 to 1.5 GB region.

We are quite used to them being slow, but we are about to embark on building some new models and would like something quicker, particularly while we are at the stage of writing queries and macros. Should we concentrate the expense on memory, the processor, or something else?

Which of the newer generation of processors are most suitable, and which should be avoided for this purpose?

Thanks
 
Sorry Lee, don't know specifically what elements would boost performance for an Access system.

BUT

If new PCs are in order and you have a server / shared database, how about a rewrite using SQL Server? It would give you much more scope for large database development and is generally better suited to multi-user work...
(IMHO)
 
I agree with Paul; you should be using SQL Server rather than Access for a database of that size. The performance should be massively improved.
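
If you did go that way, the existing Access front end could keep its queries and forms and simply relink its tables to SQL Server over ODBC. A rough sketch of the idea in VBA, purely as an illustration (the server, database and table names are made up, and it assumes a DSN-less connection with the standard SQL Server ODBC driver):

Sub LinkSqlServerTable()
    ' Hypothetical names: change the server, database and table to suit your setup.
    Const CONNECT_STR As String = _
        "ODBC;Driver={SQL Server};Server=MYSERVER;Database=SalesDb;Trusted_Connection=Yes;"

    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb

    ' Create a linked table in the Access front end that points at the server table.
    Set tdf = db.CreateTableDef("tblSales")
    tdf.Connect = CONNECT_STR
    tdf.SourceTableName = "dbo.tblSales"
    db.TableDefs.Append tdf
    db.TableDefs.Refresh
End Sub

Anything already written against the linked table name should carry on working; the heavy lifting just moves to the server.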

Paul
 
From a hardware perspective I would ensure that the system designated as the server has a gigabit LAN card handling 1000 Mbit/s network traffic, an Intel 6600 or better or an AMD 4400 (both dual core), SATA hard drives with a BIOS RAID/mirror facility, and 2 GB of RAM. One other consideration would be a decent network switch like an HP ProCurve; it has two gigabit LAN ports and is managed, so you can spot network bottlenecks.
IBM do some nice ThinkCentre workstations with XP Pro and a 3-year warranty from £289; PC World Business or Insight stock them.

All of the above should be fine with a flat-file, table-based system; migrating to SQL Server, or even MySQL, would improve performance.

If the server environment is an issue, Samba on Linux is awesome for sharing data on an MS network, plus there are no licensing issues :pepper:


Good Luck
 

Lee,

In general, databases are only CPU hogs if you are doing a lot of change/add/delete work. If that is the case, get the fastest CPU you can afford. Otherwise they are disk intensive, so get the fastest disks you can afford. Putting the data on a striped array of disks will help as well. (Don't get SATA unless you can afford at least double the capacity you need, and then ONLY use the drives in a fault-tolerant RAID array. That will protect you from data loss with these cheap but failure-prone disks.)

I agree with software recommendations already made.

Lynn
 
Lee,

I'm a computer consultant working with large databases, and I would recommend Visual FoxPro (v9.0). It is Microsoft's second-tier database: faster and more robust than Access, but not overkill like SQL Server 2005. I routinely use it to manage databases with 3+ million records and it is lightning fast.

The only real limitation is that your tables cannot exceed 2 GB each.

John
:pepper:
 
Thanks for the info. I think we are stuck with Access, and I think that is OK. Maybe I stressed the network too much. The models are being built by users who understand the business and can use the Access interface, rather than by support guys writing in code. The databases will not require multiple users, and will in general be based on one desktop only, but some will have one or two links to tables on the network.

We don't want to be installing and learning new software, unless it can be learnt very, very quickly.

It sounds like we will go for the best all-round desktops we can afford, and we will have to try to optimize the databases in various ways to speed up the processing.
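
For example, one cheap thing we can probably do ourselves is make sure the fields we join and filter on are indexed. A rough sketch in VBA of what I have in mind (the table and field names are just placeholders for our own):

Sub AddIndexes()
    ' Placeholder names: index the columns that appear in joins and WHERE clauses.
    Dim db As DAO.Database
    Set db = CurrentDb

    ' Single-field index on a column used for filtering.
    db.Execute "CREATE INDEX idxCustomerID ON tblSales (CustomerID)", dbFailOnError

    ' Index on the column used to join to the lookup table.
    db.Execute "CREATE INDEX idxSaleDate ON tblSales (SaleDate)", dbFailOnError
End Sub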

Thanks
 
The CPU generally is not going to be the limitation with Access, but I can't say what the application is doing with the data, so that could make a huge difference. Access is more than likely I/O bound over the network. Sorting and merging data is generally a CPU hog, but since most work in Access's case is I/O limited (network/disk subsystems), a faster CPU likely won't make things smokin' fast; it can't hurt either, as CPU is cheap these days. Memory can also help, but I'm not sure how well Access can use it, since it is not super smart. Adds/edits/deletes are I/O intensive in almost all cases, more so with a simple database (I use the term database loosely with MS Access).

We have used a couple of Access applications for business back-end stuff where I work, and all have failed for multi-user work; they are dodgy at best, and data integrity was poor. If you are sharing any table in Access ... back up often (a rough sketch of one way to script that is below). As others have suggested, if you can use a real database instead of Access you would be better off (free: MySQL, Firebird, PostgreSQL; or paid: SQL Server, Sybase, Oracle, etc.), but to make the most effective use of one you would need to redo your applications, which doesn't sound like it's possible.
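
On the back-up habit, even a crude scheduled copy of the shared .mdb is better than nothing. A minimal sketch in VBA, assuming the paths are placeholders and nobody has the file open when it runs (FileCopy will fail on a locked database):

Sub BackUpSharedDb()
    ' Hypothetical paths: copy the shared back end with a timestamp in the name.
    Const SRC As String = "\\server\share\SalesData.mdb"
    Const DEST_DIR As String = "\\server\share\backups\"

    FileCopy SRC, DEST_DIR & "SalesData_" & Format(Now, "yyyymmdd_hhnn") & ".mdb"
End Sub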

One other thing: a badly architected application can take down a database server or a network with excessive data shuffling. Loading the entire dataset into the Access grid just so you can navigate it is a common example, and it is especially a problem for non-client/server applications that were typically written around the paradigm of local dataset access (Access, Paradox, dBase/FoxPro, and a host of others). A sketch of the alternative is below.
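
To illustrate, instead of binding a form straight to a million-row table, hand it only the rows the user actually asked for. A rough sketch of form code in VBA (the form, control, table and field names are made up):

Private Sub cmdFind_Click()
    ' Only pull the rows for the customer the user typed in,
    ' rather than dragging the whole table across the network.
    Dim sql As String
    sql = "SELECT * FROM tblSales WHERE CustomerID = " & Me.txtCustomerID
    Me.RecordSource = sql   ' assigning the RecordSource requeries the form
End Sub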

As Joe mentioned, if you don't have a gigabit network, upgrading from 100BaseT will be a huge improvement. For the fun of it, you can run the Windows Task Manager and watch your CPU and network utilization while running the application; that might also give you some clues as to what you could improve without too much speculation.

And finally, do check out the link that Richard sent. It is amusing as well as very informative; too bad the folks who wrote the Access apps here didn't read it.

Sandy
 