Well, the easiest way to speed up a database (at the cost of some disk space or memory) is to make sure you have indexes.
They mainly speed up read operations, but also updates, since an update needs a read first. New inserts are sped up only if there are constraints (uniqueness, etc.) that require a read.
Reading via an index can be a thousand times faster.
Most databases let you see the query execution plan, so you can compare before/after adding an index.
In MySQL use EXPLAIN SELECT; in MS SQL Server Express there is a button to show the plan.
Basically, if you perform searches on a column (or several columns), then an index on those column(s) might help.
Yet too many indexes will eat a lot of space and slow down inserts.
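A minimal sketch of the before/after comparison, using SQLite as a stand-in (table and index names are made up; SQLite's EXPLAIN QUERY PLAN plays the role of MySQL's EXPLAIN SELECT):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (id INTEGER, email TEXT)")
con.executemany("INSERT INTO users VALUES (?, ?)",
                [(i, f"user{i}@example.com") for i in range(1000)])

# Before: the plan shows a full table scan for a search on `email`.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
    ("user500@example.com",)).fetchall()
print(plan[0][3])  # e.g. "SCAN users"

# Add an index on the searched column, then compare the plan again.
con.execute("CREATE INDEX idx_users_email ON users(email)")
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?",
    ("user500@example.com",)).fetchall()
print(plan[0][3])  # e.g. "SEARCH users USING INDEX idx_users_email (email=?)"
```

The exact plan wording differs per database (and per version), but the pattern is the same: scan before, index search after.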
On top of that: partitioning (vertical and/or horizontal), if your DB engine supports it. But you will only need this if you have really huge tables.
Vertical partitioning can be simulated by hand via table design.
If you have a table with several small fields/columns (int, char(10), ...) and one big text field (10-20 kB per row), it might be good to move the text to a partition/table of its own.
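A sketch of that hand-rolled vertical split, again with SQLite and made-up table names: the small columns stay in a narrow table, and the big text moves to its own table joined by id, so it is only fetched when actually needed.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Narrow table with the small, frequently scanned columns.
con.execute("""CREATE TABLE articles (
    id INTEGER PRIMARY KEY, author CHAR(10), created INTEGER)""")
# The big text field lives in its own table, linked by article id.
con.execute("""CREATE TABLE article_bodies (
    article_id INTEGER PRIMARY KEY REFERENCES articles(id),
    body TEXT)""")

con.execute("INSERT INTO articles VALUES (1, 'alice', 1700000000)")
con.execute("INSERT INTO article_bodies VALUES (1, 'a 10-20kb text ...')")

# Listing queries touch only the narrow table ...
rows = con.execute("SELECT id, author FROM articles").fetchall()
# ... and the big text is pulled in only for the detail view.
body, = con.execute(
    "SELECT body FROM article_bodies WHERE article_id = ?", (1,)).fetchone()
```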
Some databases perform better on tables with a fixed record length.
- varchar(50) is not fixed, as it only takes the space required by the data.
- char(50) takes enough space to store 50 chars in every row, even if an empty string is stored.
So if your DB falls into that category, it may be useful to split all non-fixed columns into their own space (or have a table for them).
BUT that only makes sense if you have queries that access ONLY the fixed-length columns => otherwise it is a loss.
Maintain your tables. E.g. some databases require that you optimize tables with non-fixed-length columns, so fragmentation is reduced.
Some databases allow you to pre-sort a table, which can in some cases gain speed.
If you have thousands of clients that do reads only, you can have several DB slaves replicating from one master, and the reading can be distributed between the slaves. Otherwise a cluster can help.
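The read distribution is usually done in the application or a proxy. A minimal sketch, with hypothetical connection names standing in for real replicated database connections:

```python
import itertools

class Router:
    """Send all writes to the master, spread reads over the slaves."""

    def __init__(self, master, slaves):
        self.master = master
        self._slaves = itertools.cycle(slaves)  # round-robin over replicas

    def for_write(self):
        return self.master           # the single master takes every write

    def for_read(self):
        return next(self._slaves)    # each read goes to the next slave

# Hypothetical host names, for illustration only.
router = Router(master="master-db", slaves=["slave-1", "slave-2", "slave-3"])
reads = [router.for_read() for _ in range(6)]
print(reads)           # slaves in rotation
print(router.for_write())
```

Note that slaves lag the master slightly, so this only fits reads that tolerate slightly stale data.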