SQLite is amazing
Most developers already know SQLite is great. If you have not tried it, you are missing out.
SQLite is extremely fast, featureful, reliable, and easy to use. When your use case fits, you should probably pick it.
It enables very low latency applications and lets you do things that would be impractical with other databases.
For example it is often fine to issue large numbers of small queries because N+1 Queries Are Not A Problem With SQLite.
I’ve scaled some very large applications on SQLite and can highly recommend it.
Using SQLite with Go is easy and fast
Using Go with SQLite is straightforward. There are multiple high quality libraries with different tradeoffs in performance, features, memory use, and portability.
Some libraries are database/sql compatible and some are not. Some require CGO_ENABLED=1 and others do not.
The most widely used library is github.com/mattn/go-sqlite3. It calls the SQLite C code, so it needs CGO_ENABLED=1, and it is database/sql compatible.
Most of the time I use this library because I rarely have a reason to pick another, and nothing else is as well tested or as featureful (extensions, user defined functions, highly configurable, easy to compile with custom options, etc).
Following SQLite best practices
But even after many years, one thing that still trips me up on a new project is remembering which best practices to apply.
I have rewritten the same helper code multiple times, rechecking, tweaking, benchmarking.
So I decided to finally build a proper library to make it easy to do the right thing and harder to do the wrong thing.
I called it sqlitebp, for SQLite “Best Practices”. It provides a more type-safe API and better defaults when opening a SQLite database from Go.
```go
package main

import (
	"fmt"
	"log"

	"github.com/jacob2161/sqlitebp"
)

func main() {
	// Creates or opens the database with best practice defaults
	// (WAL, foreign keys, busy timeout, NORMAL synchronous, private cache, etc.)
	db, err := sqlitebp.OpenReadWriteCreate("app.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Create a table (STRICT for stronger type enforcement)
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT NOT NULL) STRICT`); err != nil {
		log.Fatal(err)
	}

	// Insert a row
	if _, err := db.Exec(`INSERT INTO users (name) VALUES (?)`, "Alice"); err != nil {
		log.Fatal(err)
	}

	// Query a value
	var count int
	if err := db.QueryRow(`SELECT COUNT(*) FROM users`).Scan(&count); err != nil {
		log.Fatal(err)
	}
	fmt.Println("User rows:", count)
}
```
Current features and best practices
WAL mode
Enabled by default because most use cases call for it. WAL mode is what enables high concurrency: readers do not block writers (and writers do not block readers), because writers append to the write-ahead log, which is later checkpointed back into the main database file.
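For reference, this corresponds to the standard SQLite pragma you would otherwise issue by hand (the setting persists in the database file once applied):

```sql
-- Switch the database to write-ahead logging; returns "wal" on success.
PRAGMA journal_mode = WAL;
```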
Foreign keys
Foreign key enforcement is explicitly enabled so relational constraints actually apply. It’s easy to forget this: SQLite itself cannot enable it by default for backward compatibility reasons.
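A small illustration of why this matters, using plain SQLite (table names here are just for the example):

```sql
PRAGMA foreign_keys = ON;

CREATE TABLE authors (id INTEGER PRIMARY KEY);
CREATE TABLE books (
  id INTEGER PRIMARY KEY,
  author_id INTEGER NOT NULL REFERENCES authors(id)
);

-- Fails with "FOREIGN KEY constraint failed" because author 42 does not exist.
-- Without the pragma, SQLite would silently accept this row.
INSERT INTO books (author_id) VALUES (42);
```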
Busy timeout
A default busy timeout of ten seconds prevents brief periods of write contention from returning SQLITE_BUSY immediately. Instead, the query blocks until the write lock is released. This is much simpler than implementing retries and achieves approximately the same thing.
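The equivalent manual setting is the standard busy_timeout pragma, which takes milliseconds:

```sql
-- Wait up to 10 seconds for a competing writer before giving up with SQLITE_BUSY.
PRAGMA busy_timeout = 10000;
```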
Synchronous NORMAL
NORMAL synchronous balances safety and throughput. You can lose the most recent transactions on sudden power loss, but you avoid corruption, and it has lower overhead than FULL, which fsyncs in more cases. It can be set to FULL when required.
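Again, this maps to a standard pragma if you need to set it yourself:

```sql
PRAGMA synchronous = NORMAL;  -- or FULL when durability of the very last transaction matters
```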
Private cache
A private page cache is enforced. Shared cache is deliberately avoided because it’s a legacy SQLite feature even though it seems like what you’d want at first glance.
Page cache size
A target page cache of 32 MiB is small enough to be fine in most applications, yet it reduces disk reads for many working sets while keeping memory bounded.
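In standard SQLite terms, a negative cache_size is interpreted as a size in KiB rather than a page count, so a 32 MiB cache would be expressed as:

```sql
PRAGMA cache_size = -32768;  -- abs(N) KiB, i.e. 32 MiB
```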
Small dynamic connection pool
Limits the connection count within a small range scaled by GOMAXPROCS (2-8). Extra connections usually degrade throughput due to file level locking and duplicated caches. Perhaps this should have a higher upper bound, but I need to do more benchmarking to be sure.
PRAGMA optimize
Runs PRAGMA optimize when each new connection is created, unless you disable it. This lets SQLite update its analysis and query-planner statistics. This may be changed to instead run periodically in a configurable way, which is probably the right thing to do, but this works well enough in practice.
Temp storage
Sets PRAGMA temp_store = MEMORY so temporary tables and sorting use memory instead of disk by default. This improves performance significantly in many cases. You can force file-backed temp storage to prevent excessive memory use if required.
Standard disclaimer
This is a new library and I have not marked it v1 yet because there could be bugs or I might need to adjust the API. That said, it is simple and well-tested enough that I hope there are no serious issues and the API is likely to remain stable.
Cunningham’s Law
Cunningham’s Law says “the best way to get the right answer on the Internet is not to ask a question, but to post the wrong answer”.
That is part of the reason I am publishing this as a Go module for others. Even though I have used SQLite for 10+ years, and this stuff is pretty simple, things change and I could be mistaken about something.
Hopefully this library will nerdsnipe others into pointing out where I’m wrong!
Chat with me on Bluesky
I am @jacob.gold on Bluesky. I’m always happy to chat about Go or SQLite!