From your experience, what's the upper limit on the number of rows in a MyISAM table that MySQL can handle efficiently on a server with a Q9650 CPU (4 cores, 3.0 GHz) and 8 GB RAM?
I currently have a table with 15 million rows. It's pretty fast. If the scale increases to 1 billion rows, do I need to partition it into 10 tables with 100 million rows each?
I would not worry about application performance with 1 billion rows on a machine that can keep the indexes in memory. If you are serious about reaching 1 billion rows, you first have to do some math: estimate your row size and index size, and work out whether the indexes you actually query will fit in your 8 GB of RAM.
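As a sketch of that math, here is a back-of-envelope estimate in Python. The row size, key width, and overhead factor below are assumptions for illustration; measure your real schema instead.

```python
# Back-of-envelope sizing for a 1-billion-row MyISAM table.
# ROW_BYTES, KEY_BYTES, and the 1.4 overhead factor are assumed values.
ROWS = 1_000_000_000
ROW_BYTES = 100          # assumed average row length
KEY_BYTES = 8            # assumed indexed column width (e.g. BIGINT)
POINTER_BYTES = 6        # MyISAM data-file pointer, 6 bytes by default

data_gb = ROWS * ROW_BYTES / 1024**3
# Rough index size: key + row pointer per entry, plus ~40% B-tree overhead
index_gb = ROWS * (KEY_BYTES + POINTER_BYTES) * 1.4 / 1024**3

print(f"data:  ~{data_gb:.0f} GB")
print(f"index: ~{index_gb:.0f} GB")
print(f"index fits in 8 GB of RAM: {index_gb < 8}")
```

Even under these modest assumptions the index alone is around 18 GB, so on an 8 GB box it will not stay in memory.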
Next, work through your application's uptime requirements.
I would worry more about the data lifecycle and data management of table files of that magnitude than about performance; with replication, you can make up a lot of the performance. Keeping the data sane and recovering from even small disasters (like corruption induced by bad RAM) is more likely to trouble you first.
I would also encourage you to take the table you have and add 1 billion rows of test data to it. Watching what happens to your system is extremely insightful. Run some EXPLAINs on your queries against this new, huge dataset, and time how long it takes to back up and restore. You might need to adjust some requirements.
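To generate that test data, a minimal sketch: write rows out as CSV and bulk-load them. The (id, ts, value) column layout here is a hypothetical example; adapt it to your table's actual schema.

```python
# Generate bulk test rows as CSV, suitable for MySQL's LOAD DATA INFILE.
# The (id, ts, value) layout is an assumption for illustration only.
import csv
import random
import time

def write_rows(path, n):
    """Write n synthetic rows: sequential id, increasing timestamp, random value."""
    t0 = int(time.time())
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        for i in range(n):
            w.writerow([i, t0 + i, random.random()])

write_rows("test_rows.csv", 100_000)  # scale n up toward 1e9 in batches
```

You would then load each batch with something like `LOAD DATA INFILE 'test_rows.csv' INTO TABLE t FIELDS TERMINATED BY ','` and run your EXPLAINs and backup timings against the result.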
This is an interesting article about 1 billion rows in MySQL.
Just to add to some of the comments above: I've run billion-row tables before on quad Xeons, although with 32 GB of RAM, not just 8.
To keep performance good, the tables were simplified and normalised as much as possible to keep them thin, with just a couple of indexes on them. The main point of those really large tables, for me, was just to write down time-series data: lots of writes, all in order, and very few reads. The reads that were necessary always searched for specific times against another column or two, so the index could take care of that.
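A sketch of that "thin" time-series layout, using SQLite as a stand-in for MySQL (the table and column names are invented for illustration; the principle is the same: narrow rows, one composite index covering the only read pattern):

```python
# Thin time-series table: ordered writes, rare reads by (sensor, time range).
# SQLite is used here purely as a stand-in; the layout applies to MySQL too.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE readings (
        ts     INTEGER NOT NULL,   -- unix timestamp, written in order
        sensor INTEGER NOT NULL,
        value  REAL    NOT NULL
    )
""")
# One composite index covering the read pattern described above:
# specific times against another column.
conn.execute("CREATE INDEX idx_sensor_ts ON readings (sensor, ts)")

# Lots of writes, all in order.
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [(t, t % 3, t * 0.5) for t in range(1000)],
)

# The rare read: a time-range lookup the index can fully serve.
rows = conn.execute(
    "SELECT ts, value FROM readings WHERE sensor = 1 AND ts BETWEEN 100 AND 110"
).fetchall()
```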
The tables held on the SAN were backed up automatically by SRDF, and on the occasions things did go wrong (disk full, etc.) it took about 4 hours to repair.
Depends on the queries you're running. If you're doing
SELECT * FROM table
it's generally going to run a lot faster than a query with ten JOINs.

Depends on your hardware, your data, the queries you run, and what you consider fast. For simple (
"select * from table where foo='bla'"
) queries, the calculation is easy: if your query uses an index and that index fits into your OS's filesystem cache, it'll be fast. If it doesn't fit, the query runs slower (how much slower depends on the amount of data MySQL has to read and the speed of your disks). However, I would use an ACID-compliant database like Postgres for tables like this; you don't want to repair a table with a billion rows.
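The "uses an index or not" distinction is easy to see in a query plan. A small illustration using SQLite's `EXPLAIN QUERY PLAN` as a stand-in (MySQL's `EXPLAIN` gives the equivalent information; the table and index names here are invented):

```python
# Show the planner choosing a full scan without an index, then an index search.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (foo TEXT, bar TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [(f"f{i}", f"b{i}") for i in range(1000)])

# No index yet: the planner must scan the whole table.
plan_scan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE foo='bla'"
).fetchall()
print(plan_scan)   # expect a full-table SCAN

conn.execute("CREATE INDEX idx_foo ON t (foo)")

# With the index: the planner can search idx_foo instead of scanning.
plan_index = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM t WHERE foo='bla'"
).fetchall()
print(plan_index)  # expect a SEARCH using idx_foo
```

The exact plan strings vary by version, but the scan-versus-index-search distinction is what determines whether the billion-row case stays fast.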