There are too many answers to quote individually, but I read them all.
I have always preferred to use a DBMS. Indexing the tables helped me a lot with filtering data. However, deleting and updating some records took too long, and my database got corrupted twice because of my impatience.
When I decided to save the data directly to disk, I was very satisfied with the performance (reads took about 0.2 seconds on a hard disk). Still, I wanted to know whether there was a better way to read this data, which is why I opened this topic.
My queries are not very advanced. Basically, I search for records within a date range, and I also look at a few records prior to the first date of the filter. As I mentioned before, with a DBMS the biggest problem was updating/deleting data.
At the moment, splitting the data into several SQLite databases is working very well.
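To make the discussion concrete, here is a minimal sketch of what my setup looks like: one SQLite file per instrument, queried by date range plus a small lookback before the first filter date. The table and column names (`quotes`, `ts`, `price`) are placeholders, not my real schema.

```python
import sqlite3
from datetime import date

def query_period(instrument, start, end, lookback=5):
    """Return records in [start, end], plus a few rows just before start.

    One SQLite file per instrument; `ts` is an ISO date string, so
    string comparison matches chronological order and the PRIMARY KEY
    index makes the range scan fast.
    """
    conn = sqlite3.connect(f"{instrument}.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS quotes (
                        ts    TEXT PRIMARY KEY,
                        price REAL)""")
    # Records inside the requested date range, in chronological order.
    in_range = conn.execute(
        "SELECT ts, price FROM quotes WHERE ts BETWEEN ? AND ? ORDER BY ts",
        (start.isoformat(), end.isoformat())).fetchall()
    # A few records prior to the first filter date (fetched newest-first,
    # then reversed back into chronological order).
    before = conn.execute(
        "SELECT ts, price FROM quotes WHERE ts < ? ORDER BY ts DESC LIMIT ?",
        (start.isoformat(), lookback)).fetchall()[::-1]
    conn.close()
    return before + in_range
```

Keeping each instrument in its own file also means a delete or rewrite only ever touches one small database, which was exactly the pain point with the single big DBMS.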
Someone suggested having an auxiliary table whose primary key would be a date, with another field pointing to the "true record". I thought about this hypothesis a lot, but at the moment I don't think it is necessary.
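For anyone else weighing that suggestion, the auxiliary table could look roughly like this: a small, date-keyed index whose second column points at the "true record" (here, a byte offset into a flat data file). All names (`record_index`, `day`, `offset`) are hypothetical.

```python
import sqlite3

# Sketch of the suggested auxiliary table: date -> location of the true record.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE record_index (
                    day    TEXT PRIMARY KEY,  -- one entry per date
                    offset INTEGER NOT NULL   -- where the true record lives
                )""")
conn.executemany("INSERT INTO record_index VALUES (?, ?)",
                 [("2024-01-01", 0), ("2024-01-02", 4096)])
conn.commit()

# Look up where the records for a given day start, then seek() there
# in the underlying data file.
(offset,) = conn.execute(
    "SELECT offset FROM record_index WHERE day = ?",
    ("2024-01-02",)).fetchone()
```

Since my queries already hit only one small per-instrument database, this extra indirection did not seem worth the bookkeeping, but it might pay off with a single large data file.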
Despite having ~25,000 databases, my tables have at most 300,000 records. When I run a query, I look at only one instrument/DB (although it would be much more comfortable to have a single database, either with one table and millions of records, or with 25,000 tables of about 200,000 records each).
I honestly don't know whether the solution I adopted counts as "good programming practice". I would be very happy to read more opinions about it.