In fact, I have a file of about 150 MB to distribute to clients.
Transferring such a big file over a poor WAN connection is very time-consuming, say over 10 minutes.
Yikes
There is your problem.
I was wondering why even one record would be slow.
But you say over WAN (internet).
You SHOULD have mentioned that from the beginning.
The problem is not the record size of 150 MB (although that is large) but the fact that you're accessing the database over the internet. Database communication protocols are inherently chatty: for
every packet of 64 KB there are multiple messages over the line, and each one is subject to a large transfer time (look at the ping time of each message). One record of 150 MB means tens of thousands of messages, each with its own ping delay.
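To see where the time goes, here is a back-of-envelope calculation. The messages-per-packet count and the ping time are illustrative assumptions, not measurements of any particular protocol:

```python
# Rough estimate of round-trip overhead when a chatty database protocol
# fetches a 150 MB record over a WAN link. The per-message figures below
# are assumptions for illustration only.

RECORD_SIZE = 150 * 1024 * 1024   # 150 MB record
PACKET_SIZE = 64 * 1024           # 64 KB per protocol packet
MSGS_PER_PACKET = 4               # assumed messages exchanged per packet
PING_MS = 50                      # assumed WAN round-trip time in ms

packets = RECORD_SIZE // PACKET_SIZE            # 2400 packets
messages = packets * MSGS_PER_PACKET            # 9600 messages
latency_overhead_s = messages * PING_MS / 1000  # seconds lost to latency

print(f"{packets} packets, {messages} messages, "
      f"~{latency_overhead_s:.0f} s lost to round trips alone")
```

Even before counting the actual data transfer, the round trips by themselves add minutes; the worse the ping time, the worse it gets.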
Databases are not meant to be accessed directly over the internet. It's just too slow. And if you have your MySQL server exposed to the full internet (without a VPN etc.)...
Oh, don't get me started about security. Just don't do it.
You are better off creating a multi-tier solution. Create a middleman (for example a web server) that does the actual SQL query. The web server is on the same network as the MySQL server and only sends JSON messages with the result back to the client. Only when you really need the actual record do you transfer the data (and 150 MB should transfer in about 12 seconds at 100 Mbps, or 2 minutes at 10 Mbps). With this solution it would even be possible to zip the resulting file before transfer, or send multiple files in one zip. Because plain file transfer over the internet is much faster than the database protocol (really, the DB protocol is really, really chatty), you'll speed up your client considerably.
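A minimal sketch of that middle tier, using only the Python standard library: a tiny HTTP server that runs the query on the local network, zips the result, and sends the client just the compressed bytes. `run_query()` is a placeholder here; a real server would call MySQL at that point:

```python
# Sketch of a middle-tier server: query locally, zip, send over HTTP.
# run_query() is a stand-in for the real MySQL query, which would run
# on the same network as the database server.

import io
import threading
import zipfile
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

def run_query() -> bytes:
    # Placeholder result; a real implementation queries MySQL here.
    return b"row-data," * 100_000

class ResultHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Zip the query result in memory before sending it to the client.
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
            zf.writestr("result.csv", run_query())
        body = buf.getvalue()
        self.send_response(200)
        self.send_header("Content-Type", "application/zip")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Demo: start the server on a free local port and fetch the result once.
server = HTTPServer(("127.0.0.1", 0), ResultHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
data = urlopen(f"http://127.0.0.1:{port}/").read()
print(f"received {len(data)} compressed bytes "
      f"for {len(run_query())} raw bytes")
server.shutdown()
```

The client makes one plain HTTP request instead of thousands of chatty database messages, and repetitive result data compresses well on top of that.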
Really, databases are not meant to be accessed directly over the internet, and especially not for file transfer.