So - we have an internal company database, the usual kind of stuff: manages clients, phone calls, sales deals and client agreements/schemes.
It's an Access 2000 front-end, and an SQL Server 2000 Standard back-end. Single server, dual Xeon 3.2GHz, 2GB RAM, Windows Server 2003, gets about 40% CPU load all day, spread across the 4 cores visible to the OS (HT).
The back-end database is poorly designed, and has organically grown over 10+ years, maintained by less-than-skilled individuals. It is badly normalised, and some of the obvious problems include tables with tens of thousands of rows with no primary key or index, which are also used heavily in multi-table joins for some of the most heavily used parts of the system (e.g. a call manager application that sits on everyone's second monitor for 8 hours a day and runs a big inefficient query every few seconds).
The front-end is not much better, it's the typical mess of hundreds of forms, nested saved queries, poorly written embedded SQL in the VBA code, dozens of "quirks" etc, and whenever a change is made something unrelated seems to break. We have settled on one MDB that works "well enough" and now have a no-change policy on that as we have no Access heavyweights in-house (and no plans to hire one either).
The company is now slowly growing (increasing numbers of clients, calls, etc., plus a modest increase in the number of concurrent users), and performance has been getting noticeably worse just recently (waiting to move between forms, waiting for lists to populate, etc.).
Perfmon says:
- Disk transfers per second: between 0 and 30, average 4.
- Current disk queue length: hovers around 1
SQL Server's profiler sees hundreds of thousands of queries every minute. CPU usage on the clients is pretty much zero, indicating it's waiting on server-side queries to execute. I have put this workload through the DB Engine Tuning Advisor, applied its suggestions to a test backup, but this hasn't really made much difference.
By the way, we have a mix of 100Mb and gigabit Ethernet, all on one subnet, with 40-ish users across two floors.
To the question.
As I see it we have two choices to resolve/improve this situation.
- We can scrap it and replace it with an entirely new CRM system, either bespoke or part bespoke
- We can extend the life of this system by chucking hardware at it.
We can build an Intel i7 system with crazy performance numbers for an order of magnitude less cost than replacing the software.
When a new system is eventually developed, it can be hosted on this box, so there's no wasted hardware. A new CRM system keeps getting put off, and off, and off - I don't see that happening for at least a year.
Any thoughts on this situation, especially if you've been here yourself, would be most appreciated.
Thanks
I'm going to disagree with everyone here. Chuck some hardware at it. It's cheap, fast, easy, and will buy you the time needed to implement a proper CRM solution. The reason I'm advocating something that is anathema to just about everyone on not only this board but stackoverflow as well is that I've been a project manager/manager and have been on the "business" side for a while (business is in quotes due to my hatred for the word). Based on your description of the software, it will take close to a year to rebuild something else. Just discovering/documenting the business rules/quirks will probably take 2 months. It will also be unbelievably expensive to develop, especially when compared to the cost of a tricked-out server.
I'm actually about to host a set of web apps for a company for just that reason. The internal IT department will not move them to better hardware because they want to redevelop them on a new platform. That redevelopment will cost approximately triple what it would cost to move to new hardware. Not to mention that the company might not have the contract renewed in a year.
You might not need to do either. My suggestion is to simply add some indexes/keys to the tables.
> tables with tens of thousands of rows with no primary key or index, which are also used heavily in multi-table joins
Before spending a lot of money or time, take a couple of hours and add indexes (or primary keys if you can) to any tables involved in those joins ... particularly for columns used in a WHERE clause. You could easily improve the performance by a factor of 10 in just a few hours.
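For example (the table and column names here are made up, so adjust to your actual schema), something along these lines per problem table:

```sql
-- Hypothetical: tblCalls has no primary key and gets joined on ClientID.
ALTER TABLE dbo.tblCalls
    ADD CONSTRAINT PK_tblCalls PRIMARY KEY CLUSTERED (CallID);

-- Index the join / WHERE column as well.
CREATE NONCLUSTERED INDEX IX_tblCalls_ClientID
    ON dbo.tblCalls (ClientID);
```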
The lack of disk I/O implies that queries are fed mostly out of RAM. If your hot tables 'suddenly' no longer fit in RAM and the server starts working the disks, you may be in for a bad ride. 2GB of RAM isn't very much these days, but back in the SQL 2000 era it would have been sizeable. I'm guessing that the amount of data the application normally manipulates is smaller than the RAM you have. You might want to look at the amount of used space in the data files; that will give you an idea of how much RAM the database might consume, worst-case. SQL Server doesn't keep data it doesn't need in RAM, but it can be difficult to know which tables are used and when.
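If you want a rough idea of how the data volume compares with that 2GB, something like this (SQL 2000 system tables, so treat the numbers as approximations) will do:

```sql
-- Database totals, then the largest user tables by reserved space.
EXEC sp_spaceused;

SELECT TOP 10
       OBJECT_NAME(id)   AS TableName,
       SUM(reserved) * 8 AS ReservedKB   -- pages are 8KB each
FROM   sysindexes
WHERE  OBJECTPROPERTY(id, 'IsUserTable') = 1
GROUP BY OBJECT_NAME(id)
ORDER BY ReservedKB DESC;
```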
Hyperthreading isn't always helpful with SQL Server; you may get better performance turning it off. It's hard to test, because flipping it off and on requires a reboot, and that's a big hassle on a production server.
"Hundreds of thousands of queries a minute" translates to thousands of queries a second. That sounds pretty busy, but much of that traffic may just be cursor fetches by Access. Access is particularly bad at efficiently retrieving result sets from SQL. You may get better performance by turning the SQL Server parallelization setting off.
You also want to look for blocking. Throwing hardware at a blocking problem does not always produce the hoped-for dramatic improvement. If there is not much blocking and queries are satisfied from RAM rather than disk, you are basically relying on the processors' grunt and their ability to pull data across the memory channels. In that case, faster hardware ought to provide a good improvement. If you are in a hurry (to get past this issue) and growing slowly, that might be good enough.
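A quick-and-dirty way to spot blocking on SQL Server 2000 is to poll sysprocesses while users are complaining, for example:

```sql
-- Any row returned here is a session waiting on the spid named in "blocked".
SELECT spid, blocked, waittime, lastwaittype, cmd, hostname, program_name
FROM   master..sysprocesses
WHERE  blocked <> 0;
```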
As a solution, adding hardware doesn't scale as well as database improvements. If you get a surge in growth, you may find your new hardware struggling. Another thought is that successful applications draw users. If the application becomes more responsive, users may be more likely to run more reports and such on it than they would if they needed to go for coffee while waiting for the report to finish.
If the database schema is really bad, you may be able to get some performance wins simply by looking at the indexing on the tables. Concentrate on tables that you know get queried often. You can use Profiler to watch queries running against the server, just tell it to look for queries that read a lot of data (like 100,000 pages) and then work down towards queries that don't read much. You mentioned that some of the tables don't have keys. Are there natural keys in the data, just not enforced by constraints or unique indexes?
Do the tables have clustered indexes? Lack of clustered indexing can cause all sorts of secondary effects.
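A rough way to check on SQL 2000 is to list the heaps (tables with no clustered index) straight from sysindexes:

```sql
-- indid = 0 rows in sysindexes are heaps; start with the biggest ones.
SELECT OBJECT_NAME(id) AS HeapTable, rows
FROM   sysindexes
WHERE  indid = 0
  AND  OBJECTPROPERTY(id, 'IsUserTable') = 1
ORDER BY rows DESC;
```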
Are there lots of nonclustered indexes, with many columns? This is often an attempt to build many covering indexes, rather than implementing a more effective indexing strategy. SQL Server can effectively build covering indexes on the fly during a query, if it makes sense to do so and there are supporting nonclustered and clustered indexes.
Lastly, it's worth asking: Is maintenance (reindexing and/or update statistics) being done on the tables?
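If not, the SQL 2000-era tools are DBCC DBREINDEX and UPDATE STATISTICS. A minimal sketch for one hot table (the name is illustrative; run it out of hours, since DBREINDEX takes locks):

```sql
DBCC DBREINDEX ('dbo.tblCalls');
UPDATE STATISTICS dbo.tblCalls WITH FULLSCAN;

-- Or refresh statistics across the whole database:
EXEC sp_updatestats;
```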
This is a business question, not a technical question.
As a business owner: How strategic is the system to the business? The less strategic it is, the less I care about fixing it, & any money spent on it is money I could be using elsewhere to grow my business.
Computer folk scare me, as they all get in a big room & argue about design & cost me a fortune. Keep the system going! Whether that means performance tuning (without re-architecting) or throwing more hardware at it, it's only a priority if it stops working.
As an IT consultant: Your system is legacy and has hidden operational costs. We can design a system that's right for you, that will scale and provide a platform for future growth & strategic advantage. Sign here & all your dreams will come true.
As an IT employee: I can be the superhero here & save the company from imminent disaster by optimizing the hell out of this thing! My manager will shower me with gifts & praise, as I'll have saved the company thousands.
Well... this is a while ago now, but I thought I'd record the outcome here.
In the end, I stepped through the VBA line by line to deal with another problem. It was then that I realised that some calls to fetch rowsets were blocking for 20-30+ seconds.
When I dug down into them, I found that the rowset was based on an MS Access query.
That was selecting data from another Access query.
That was selecting data from yet another Access query.
All of which looked like they'd been dragged and dropped together using the query designer.
I went through the top half dozen user pain points and found that without fail they were all exactly the same as this.
So I removed the stacks of chained queries entirely, and replaced each of them with a single pass-through query which could be written in T-SQL and executed directly on the server.
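To give a flavour of it (the schema below is invented, not our real one): each stack of three or four nested Access queries boiled down to a single T-SQL statement, saved in Access as a pass-through query so it runs entirely on the server:

```sql
-- One server-side statement instead of Access joining query-on-query-on-query.
SELECT c.ClientName, ca.CallDate, ca.Outcome, d.DealValue
FROM   dbo.Clients AS c
       JOIN dbo.Calls AS ca ON ca.ClientID = c.ClientID
       LEFT JOIN dbo.Deals AS d ON d.ClientID = c.ClientID
WHERE  ca.CallDate >= DATEADD(day, -7, GETDATE())
ORDER BY ca.CallDate DESC;
```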
The improvement was vast in every case, without fail, and nobody was left waiting on queries any more.
And then I left the company. No idea whether it's still there... but I don't miss it.
I say do both.
Right now you're at 40% or so CPU, you said? Are your users complaining (a lot) yet? If not, you still have breathing room. More memory might be just enough to tide you over for a while.
As for which way to go: do you have in-house software developers? If the answer is NO, then don't even attempt to redo it. You'll end up exactly where you are now.
Assuming you do have in-house developers, do they have the ability to do a project properly? I'm talking fully spec'd out, with a (realistic) timeline, basically the same as if it were a customer's project. If not, then don't bother, or it will end up back where you are now.
Until companies realize they are also clients of themselves, and need to give the same resources to internal projects, you'll end up exactly where you are now. Been there, done that, got an entire dresser of t-shirts.
So if you can't do it properly, your two choices are: an out-of-the-box turnkey system, which your staff will hate because now they have to fit the mould of the system you buy; or something customizable, in which case you'll still have to spend PROJECT time customizing it.
OR refactor what you do have. Remember, people will expect all the same functionality when the new one comes in, which is why any other route forces you to do everything at once. If you refactor it, you have a chance to figure out how it works, and then, rather than making ad-hoc changes, you can plan the work as many small sub-projects.
Without seeing the system, I'd likely look at normalizing as much as I can in the back-end and moving as much of the SQL as possible into stored procs. Then build a new front end, either as C# WinForms or a web app. If you can get your business logic and SQL out of the front end, it will be easier to redo later. And by keeping the work to small projects, if it gets pushed aside or stopped at any time, you'll still have made progress that gets used.
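As a sketch of what "moving SQL into stored procs" looks like (the names are illustrative, not from the actual system), each piece of embedded front-end SQL becomes something like:

```sql
CREATE PROCEDURE dbo.usp_GetClientCalls
    @ClientID int
AS
BEGIN
    SET NOCOUNT ON;
    -- The front end (Access now, C# or web later) just calls this by name.
    SELECT CallID, CallDate, Outcome
    FROM   dbo.Calls
    WHERE  ClientID = @ClientID
    ORDER BY CallDate DESC;
END
```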
Some good replies here already- but might I just point out that (assuming there are in house developers) a relatively small amount of work will have a big impact- add primary keys (you shouldn't even have to change all of your queries to use them), add indexes to fields you do use, and tune your queries a bit and you could see an absolutely huge increase. Buy yourself some RAM for it now to buy the time and headroom you need to fix it, and then get to work.
On the subject of "fix it or ditch it": if the system's features basically work for you and do what you need, don't reinvent the wheel. If your users are having to do gymnastics to use the thing because it does not fit your needs, then there's no point putting effort into it.
Technical answer:
You've got a number of suggestions already stating that primary keys and indexing should be thoroughly reviewed. Access also really likes having a SQL Server timestamp (a.k.a. rowversion) column on each table, as this greatly reduces the time Access spends deciding whether a record has changed when it comes to updating records.
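Adding one is a one-liner per table (the table name here is hypothetical), and SQL Server maintains the value automatically:

```sql
-- Access uses this column to detect changed rows instead of comparing every field.
ALTER TABLE dbo.tblClients
    ADD RowVer timestamp;
```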
Business answer:
A new CRM solution is a lot of work in training folks and you'll never end up with a system that exactly suits your business requirements.
I'd find a good Access person who is also very knowledgeable about SQL Server and get them to spend 3 to 6 months normalizing the tables and fixing up user pain points. Ensure that person works on the same floors as your users, although in a quiet space, and is accessible. Although not too accessible. Developers don't like interruptions.
I'm posting a separate answer instead of just appending to Dayton's answer because there's a cost that's not being taken into account by the first few people to post an answer: The cost of retraining users and the cost of changing your business procedures to fit into a new software program. Otherwise, what he said.
One of the main reasons that companies develop their own software is that they have business procedures that don't match something that's on the market. Which is great -- a company's individual business procedures are a significant part of the value that a company brings to the table, and ARE the competitive advantage that a company has over the rest of its market. To replace the software with something generic, you would either have to retrain your people and sacrifice that competitive advantage, or customize the solution to match your business processes. Both are expensive and time-consuming. As a business consultant as well as a sysadmin, I've seen these costs kill small companies by themselves.
From your statements, it looks like you're pretty much processor/software bound. I'd do two things -- add indexes (within limits), especially to columns that don't currently have them, and chuck the fastest set of processors you can at it, because it looks like that's where you're bound if you don't have that many drive reads 'cept at peak.
(Also, I'd upgrade the server edition as far as possible -- by the time you get this into place, Access 2000 and SQL Server 2000 will be ten years old. That's OLD in computer years!)
This needs a total restructuring (re-architecting). Rebuild the system from the ground up; it will save you a lot in the long run (maintenance overhead). For the meantime, chuck hardware at it. I think this question is more of a "business case" than a technical inquiry. Technical-wise, the answer is an outright "chuck more power at it". Business-wise, build a new system!