A custom approach to "improving" select queries in SQLite


First of all, I would like to note two points:
- this article describes the algorithm of the approach, not how to optimize specific SQL queries;
- the approach described here is not relevant for "production-grade" hardware; we will talk about how we managed to improve the "responsiveness" of the interface of an application that works intensively with the database on budget smartphones running Android.

Briefly about the application

To make it clear what is at stake, a few words about the application itself: it was a system for organizing mobile trading, and most of its users are trade agents (TAs). When a TA arrives at a point of sale, they need to find the required goods very quickly, see their stock balances in the warehouses (as of the last synchronization), and enter the quantity the client needs. In other words, the TA's workflow looks like this:
- find product 1;
- enter the number;
- find product 2;
- enter the number;

And they do this very quickly :)

Search Requirements

- Search by product name (which can be in both Russian and English);
- Each client can have their "own" price list;
- You need to be able to see both all goods and only those in stock;
- Have the ability to filter the product list by a selected category, including all of its nested subcategories.

Initial data

The figures I quote here are real; otherwise I would not have had to invent any of this.
Number of clients: 500
Number of goods: 5,000
Assigning an individual price to every client for every product gives: 5,000 * 500 = 2,500,000 rows in the price table.

Despite the fact that SQLite is very fast, the problem could not be solved "head on". Or rather, everything worked, but not fast enough.


The first obvious step is to generate the SQL query dynamically, so that unnecessary joins and conditions are excluded when they are not needed. Experiments also showed that joining tables in the form:
select * from Table1, Table2 where Table1.Table2Id = Table2.Id

works a little faster than:
select * from Table1 join Table2 on Table1.Table2Id = Table2.Id
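The difference is easy to measure yourself. Below is a minimal sketch using Python's built-in sqlite3 module (the table names follow the article's placeholders; the data and timing loop are my own illustration). Note that in current SQLite versions the planner usually compiles both forms to the same plan, so measure on your target device before relying on one form over the other.

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Table2 (Id INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Table1 (Id INTEGER PRIMARY KEY, Table2Id INTEGER, Value TEXT);
""")
con.executemany("INSERT INTO Table2 VALUES (?, ?)",
                [(i, f"cat{i}") for i in range(100)])
con.executemany("INSERT INTO Table1 VALUES (?, ?, ?)",
                [(i, i % 100, f"item{i}") for i in range(10000)])
con.commit()

def bench(sql, repeats=50):
    """Run the query `repeats` times and return the total elapsed time."""
    start = time.perf_counter()
    for _ in range(repeats):
        con.execute(sql).fetchall()
    return time.perf_counter() - start

comma = bench("SELECT * FROM Table1, Table2 WHERE Table1.Table2Id = Table2.Id")
joined = bench("SELECT * FROM Table1 JOIN Table2 ON Table1.Table2Id = Table2.Id")
print(f"comma-style: {comma:.3f}s  explicit join: {joined:.3f}s")
```

Both queries return the same rows; only the syntax differs.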

And one more, quite natural point (although it did not occur to me immediately): a query of the form:
select Table1.Id from Table1, Table2 ... TableN
where Table1.Table2Id = Table2.Id ...

will run much faster than:
select Table1.Id, ... TableN.FieldN from Table1, Table2 ... TableN
where Table1.Table2Id = Table2.Id ...

That is, the same query that returns only one field executes much faster than one that tries to "pull out" all the required fields at once.

But what about the rest of the data?

We get the rest of the data with a series of simple queries of the form:
select * from Table1 where Id = ?

This query is very fast because Id is the primary key.
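The two-step pattern described above can be sketched with Python's built-in sqlite3. The schema, table names, and data here are illustrative assumptions, not the article's actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Goods  (Id INTEGER PRIMARY KEY, Name TEXT, CategoryId INTEGER);
CREATE TABLE Prices (GoodsId INTEGER, ClientId INTEGER, Price REAL,
                     PRIMARY KEY (GoodsId, ClientId));
""")
con.executemany("INSERT INTO Goods VALUES (?, ?, ?)",
                [(i, f"product {i}", i % 10) for i in range(1, 5001)])
con.executemany("INSERT INTO Prices VALUES (?, ?, ?)",
                [(i, 1, 100.0 + i) for i in range(1, 5001)])

# Step 1: a narrow query that returns ONLY the identifiers.
ids = [row[0] for row in con.execute(
    "SELECT Goods.Id FROM Goods, Prices "
    "WHERE Prices.GoodsId = Goods.Id AND Prices.ClientId = ? "
    "AND Goods.CategoryId = ? ORDER BY Goods.Id", (1, 3))]

# Step 2: fetch full rows one at a time by primary key --
# each lookup is a cheap B-tree search on the rowid.
def load_row(goods_id):
    return con.execute("SELECT * FROM Goods WHERE Id = ?",
                       (goods_id,)).fetchone()

print(len(ids), load_row(ids[0]))
```

The id list stays small and cheap to produce even when the filtering query joins several tables, while each per-row lookup touches only one table by its primary key.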


The main optimization was splitting data retrieval into several queries: one returns the list of identifiers, and the remaining data is fetched by much simpler per-row queries.

As I have already said, this approach significantly improved the speed of the application on budget smartphones. When scrolling through the list of goods (several "simple" queries are executed for each list item as it appears), no slowdowns are visible to the eye.
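Since the per-row queries run as the list scrolls, a small cache keeps rows that were already shown from being re-queried. The following sketch of that idea is my own assumption (the cache size and names are illustrative, not from the article):

```python
import sqlite3
from functools import lru_cache

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Goods (Id INTEGER PRIMARY KEY, Name TEXT)")
con.executemany("INSERT INTO Goods VALUES (?, ?)",
                [(i, f"product {i}") for i in range(1, 101)])

@lru_cache(maxsize=256)  # cache rows already bound while scrolling
def row_for(goods_id):
    return con.execute("SELECT Id, Name FROM Goods WHERE Id = ?",
                       (goods_id,)).fetchone()

# Simulate a viewport of ten list items scrolling into view.
for gid in range(10, 20):
    print(row_for(gid))
```

On Android the same idea is usually expressed with an in-memory map keyed by row id inside the list adapter; the cache only has to be invalidated after the next synchronization.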
