I am in charge of a smallish DB: 300+ megs, 100-ish tables, with about 45 users hitting it throughout the work day. Mostly reads, but a fair number of updates and inserts. I have been slowly learning the structure of the DB in order to get some performance out of it. I have heard that taking a look at indexes is a good place to start. Almost all of the indexes on the tables in said DB are clustered; a few of them are non-clustered.
Is there any speed advantage of clustered vs. non-clustered? I have a maintenance plan (yes yes, I know) that reorganizes and rebuilds the indexes nightly before diff backups. Is this good enough for now, until I get a better grip on index formation and utilization?
Is/are there (a) script(s) that will help me view the 'performance' of the various indexes? Just how big a can of worms did I get myself into?
A clustered index determines the physical order of data in a table and is particularly efficient on columns that are often searched for ranges of values. They are also efficient for finding a specific row when the indexed value is unique.
Normally (there are exceptions), the clustered index should be on a unique column whose values increase monotonically - such as an identity column, or some other column where each new value is larger than the last. In many cases, the primary key is the ideal column for a clustered index (but don't put a clustered index on a uniqueidentifier/GUID column - random GUIDs cause page splits and fragmentation).
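To make that concrete, here's a minimal sketch (table and column names are made up for illustration) of a table whose primary key is an ever-increasing IDENTITY column; declaring the PK as `CLUSTERED` puts the clustered index on it, so new rows are always appended at the logical end of the table:

```sql
-- Hypothetical table: the monotonically increasing IDENTITY column
-- is both the primary key and the clustered index key.
CREATE TABLE dbo.Orders
(
    OrderID     int IDENTITY(1,1) NOT NULL,
    CustomerID  int               NOT NULL,
    OrderDate   datetime          NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);
```

(A PRIMARY KEY constraint is clustered by default in SQL Server when no other clustered index exists; the explicit `CLUSTERED` keyword just makes the intent obvious.)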
(The description of clustered indexes above is from this MSDN article.)
SQLServerpedia.com has some nice articles/tutorials for index tuning: Index Related DMV Queries and Using the Right Indexes for Optimal Performance.
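As a starting point in the spirit of those DMV articles, a query like the following (a rough sketch - adjust to taste) shows how often each index is actually read versus how often it has to be maintained on writes. Note the stats are cumulative since the last SQL Server service restart:

```sql
-- Index usage per index in the current database:
-- many user_updates with few seeks/scans suggests a candidate for removal.
SELECT  OBJECT_NAME(s.object_id)  AS table_name,
        i.name                    AS index_name,
        s.user_seeks,
        s.user_scans,
        s.user_lookups,
        s.user_updates            -- maintenance cost on INSERT/UPDATE/DELETE
FROM    sys.dm_db_index_usage_stats AS s
JOIN    sys.indexes AS i
        ON  i.object_id = s.object_id
        AND i.index_id  = s.index_id
WHERE   s.database_id = DB_ID()
ORDER BY s.user_updates DESC;
```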
I've read that it's very good practice to use a surrogate key and put the clustered index on that column. Typically, this will be an auto-incrementing int column (IDENTITY), or a uniqueidentifier (make it a sequential GUID via NEWSEQUENTIALID() to avoid fragmentation issues later on!).
In doing so, your queries will do JOINs on those narrow surrogate keys across tables, which gives you good performance and scalability.
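For example (hypothetical `Orders`/`Customers` schema), with both tables clustered on their int surrogate keys, a join like this is a narrow key-to-key operation rather than a comparison on wide natural-key columns:

```sql
-- Both sides join on small int surrogate keys, not on wide
-- natural keys such as customer name + address.
SELECT  o.OrderID,
        o.OrderDate,
        c.CustomerName
FROM    dbo.Orders    AS o
JOIN    dbo.Customers AS c
        ON c.CustomerID = o.CustomerID;
```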
As far as other (non-clustered) indexes go, that choice depends on how your customers use your application. Too many indexes spell disaster for inserts/updates; too few slow down reads. You'll need to find a balance between the two. Columns that are frequently used in search predicates are logical candidates for indexing, including composite (multi-column) indexes (mind your column order, in that case).
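On the column-order point, here's an illustrative composite index (names are hypothetical). The index can only be seeked when the query filters on its leading column, so put the column your queries filter on most often (and most selectively) first:

```sql
-- Supports searches that filter on CustomerID, optionally with a date range.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
    ON dbo.Orders (CustomerID, OrderDate);

-- Can seek on this index (leading column is filtered):
--   WHERE CustomerID = 42 AND OrderDate >= '20100101'
-- Cannot seek, only scan (leading column missing):
--   WHERE OrderDate >= '20100101'
```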
If you want to get fancy, have a separate OLAP database for reporting on historical data.