I've got a log parser written in Perl, and it runs a MySQL INSERT or UPDATE command for every line. Like:
Log line 1: 2012-12-01 12:12 [perld] Hello world #1
....
Log line N: 2012-12-01 12:NN [perld] Hello world #N
It parses these lines, extracts the timestamp and "#N", and inserts them into the MySQL DB (the lines are read with the perl::tail function).
I've got about 100 lines/second, so that means roughly 100 inserts per second into MySQL.
Is there any solution/algorithm to optimize this? Or should I use something like a buffer and batch the inserts?
There are several points you can check. You can try prepared statements, created in Perl with DBI's prepare method. Preparing the query once and executing it repeatedly is generally faster than sending the full query each time, because only the parameters have to be sent on each execute.
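A minimal sketch of that approach, assuming a hypothetical log_entries(ts, n) table and placeholder connection details:

    use strict;
    use warnings;
    use DBI;

    # Hypothetical DSN and credentials; the table log_entries(ts, n) is assumed.
    my $dbh = DBI->connect('DBI:mysql:database=logs;host=localhost',
                           'user', 'password', { RaiseError => 1 });

    # Prepare once, outside the parsing loop...
    my $sth = $dbh->prepare('INSERT INTO log_entries (ts, n) VALUES (?, ?)');

    # ...then send only the parameters for each parsed line.
    while (my $line = <STDIN>) {
        if (my ($ts, $n) = $line =~ /^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}) \[perld\] .* #(\d+)/) {
            $sth->execute($ts, $n);
        }
    }
    $dbh->disconnect;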
Another thing you can try is to combine multiple insert statements into one and execute them as a single statement, as sketched below.
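For example, a buffered flush along these lines (reusing the hypothetical $dbh and log_entries table from the sketch above) turns 100 single-row inserts into one statement:

    # Buffer parsed rows and flush them as one multi-row INSERT.
    my $batch_size = 100;    # tune by experiment
    my @rows;

    sub flush_rows {
        return unless @rows;
        my $placeholders = join ', ', ('(?, ?)') x scalar @rows;
        $dbh->do("INSERT INTO log_entries (ts, n) VALUES $placeholders",
                 undef, map { @$_ } @rows);
        @rows = ();
    }

    # Inside the parsing loop:
    push @rows, [ $ts, $n ];
    flush_rows() if @rows >= $batch_size;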
Also, you can try MyISAM if you are not already using it; it can be faster than InnoDB for this kind of write-heavy load, but it has no transaction support, so only consider it if you don't use transactions.
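Switching the storage engine (again using the hypothetical table name from above) is a one-off statement:

    ALTER TABLE log_entries ENGINE=MyISAM;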
Check the indexes on your table: if you have indexes you don't need, your inserts may be taking longer than necessary.
Also, are you sure the SQL is what's taking the time? As Khaled says, you might benefit from pre-preparing the query, but there may be other places where things are slow. For example, you say you parse a date; that may be easy to do with Date::Manip, but it isn't quick! Make sure you know what needs optimising before you start :)
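As a quick way to see where the time actually goes, Perl's Benchmark module can compare the Date::Manip route against a plain regex capture, here using the sample line from the question:

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);
    use Date::Manip;

    my $line = '2012-12-01 12:12 [perld] Hello world #1';

    # Run each sub for about one CPU-second and print relative speeds.
    cmpthese(-1, {
        date_manip  => sub {
            my ($ts) = $line =~ /^(\S+ \S+)/;
            ParseDate($ts);    # full date parsing
        },
        plain_regex => sub {
            my ($ts) = $line =~ /^(\d{4}-\d{2}-\d{2} \d{2}:\d{2})/;    # just capture the string
        },
    });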
MySQL supports multi-row INSERT, sketched here against the hypothetical log_entries table used above:
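    -- table and column names are assumptions carried over from the earlier sketches
    INSERT INTO log_entries (ts, n) VALUES
        ('2012-12-01 12:12', 1),
        ('2012-12-01 12:13', 2),
        ('2012-12-01 12:14', 3);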
etc. You can do one insert per second, after parsing all the lines for that second, or (better) make the batch threshold configurable at runtime so you can experiment to find the best-performing setting.