API¶
cysqlite APIs roughly follow the standard library sqlite3 module.
Module¶
- connect(database, flags=None, timeout=5.0, vfs=None, uri=False, cached_statements=100, extensions=True, row_factory=None, autoconnect=True, pragmas=None)¶
Open a Connection to the provided database.

- Parameters:
  - database (str, pathlib.Path) – database filename, or ':memory:' for an in-memory database.
  - flags (int) – control how the database is opened. See Sqlite Connection Flags.
  - timeout (float) – seconds to retry acquiring the write lock before raising an OperationalError when a table is locked.
  - vfs (str) – VFS to use, optional.
  - uri (bool) – Allow connecting using a URI.
  - cached_statements (int) – Size of the statement cache.
  - extensions (bool) – Support run-time loadable extensions.
  - row_factory – Factory implementation for constructing rows, e.g. Row.
  - autoconnect (bool) – Open the connection when instantiated.
  - pragmas (dict) – Optional mapping of pragmas to apply when the connection is opened, e.g. {'journal_mode': 'wal'}.
- Returns:
  Connection to the database.
- Return type:
  Connection

Default flags are SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE. If uri is set or the database name begins with 'file:', SQLITE_OPEN_URI will be included.

Example:

    # Create a connection with a 1s timeout and use the cysqlite Row for
    # query result tuples.
    conn = connect('app.db', timeout=1.0, row_factory=Row)
- status(flag)¶
Read the current and highwater values for the given status flag.
- Parameters:
  - flag (int) – status values to read.
- Returns:
  (current, highwater)
- Return type:
  tuple
Example:
    from cysqlite import status, SQLITE_STATUS_MEMORY_USED

    # How much memory has SQLite allocated across all connections?
    current, highwater = status(SQLITE_STATUS_MEMORY_USED)
    print(f'SQLite is using {current} bytes (peak: {highwater})')
- sqlite_version¶
Version number of the runtime SQLite library as a string.
- sqlite_version_info¶
Version number of the runtime SQLite library as a tuple of integers.
- threadsafety¶
Integer constant required by the DB-API 2.0, stating the level of thread safety the cysqlite module supports. This attribute is set based on the default threading mode the underlying SQLite library is compiled with. The SQLite threading modes are:
Single-thread: In this mode, all mutexes are disabled and SQLite is unsafe to use in more than a single thread at once.
Multi-thread: In this mode, SQLite can be safely used by multiple threads provided that no single database connection is used simultaneously in two or more threads.
Serialized: In serialized mode, SQLite can be safely used by multiple threads with no restriction.
The value of this constant is determined by sqlite3_threadsafe(), and will be one of three values:

0 = SQLite is not thread-safe.
1 = The module may be used in a multi-threaded application, but connections cannot be shared between threads.
3 = Connections may be shared between threads, with no restrictions.
Connection¶
- class Connection(database, flags=None, timeout=5.0, vfs=None, uri=False, cached_statements=100, extensions=True, row_factory=None, autoconnect=True, pragmas=None)¶
Open a Connection to the provided database.

- Parameters:
  - database (str, pathlib.Path) – database filename, or ':memory:' for an in-memory database.
  - flags (int) – control how the database is opened. See Sqlite Connection Flags.
  - timeout (float) – seconds to retry acquiring the write lock before giving up.
  - vfs (str) – VFS to use, optional.
  - uri (bool) – Allow connecting using a URI.
  - cached_statements (int) – Size of the statement cache.
  - extensions (bool) – Support run-time loadable extensions.
  - row_factory – Factory implementation for constructing rows, e.g. Row.
  - autoconnect (bool) – Open the connection when instantiated.
  - pragmas (dict) – Optional mapping of pragmas to apply when the connection is opened, e.g. {'journal_mode': 'wal'}.
Default flags are SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE. If uri is set or the database name begins with 'file:', SQLITE_OPEN_URI will be included.

Examples:

    # Simple connection
    conn = cysqlite.Connection('app.db')

    # In-memory database
    conn = cysqlite.Connection(':memory:')

    # Read-only connection
    conn = cysqlite.Connection('data.db', flags=cysqlite.SQLITE_OPEN_READONLY)

    # Custom row factory
    conn = cysqlite.Connection('app.db', row_factory=cysqlite.Row)
- row_factory¶
Factory for creating row instances from query results, e.g. Row. By default rows are returned as tuple instances.
- print_callback_tracebacks¶
Print tracebacks for exceptions raised in user-defined callbacks. Since these callbacks are executed by SQLite, exceptions may not propagate in cases where the SQLite callback specifies no return code (e.g. rollback hooks).
- property callback_error¶
Return the most-recent error raised inside a user-defined callback. Upon reading (or when the database is closed) this value is cleared.
- register_adapter(python_type, fn)¶
Register an adapter for Python data-types used as bound parameters, e.g. to automatically serialize dict as JSON.

The fn function accepts a single value and converts it to the appropriate type supported natively by SQLite (see SQLite Notes).

- Parameters:
  - python_type – python data-type to adapt, e.g. dict.
  - fn – callable that accepts a single value and adapts it.

Data-type matching is exact; inheritance is not checked.
Example:
    db = cysqlite.connect(':memory:')

    # Automatically serialize dict as JSON.
    db.register_adapter(dict, json.dumps)

    # Or use the `adapter()` decorator.
    @db.adapter(datetime.datetime)
    def adapt_datetime(value):
        # Converts datetimes to a UNIX timestamp.
        return value.timestamp()

    # Store Decimal as TEXT to avoid floating-point precision errors.
    @db.adapter(Decimal)
    def adapt_decimal(value):
        return str(value)

    vals = [
        {'key': 'value'},
        datetime.datetime(2026, 1, 2, 3, 4, 5),
        Decimal('1.3'),
    ]
    for val in vals:
        res = db.execute_scalar('select ?', (val,))
        print(f'{res} ({type(res)})')

    # {"key": "value"} <class 'str'>
    # 1767344645.0 <class 'float'>
    # 1.3 <class 'str'>
- adapter(python_type)¶
Decorator for registering a user-defined adapter.

- Parameters:
  - python_type – python data-type to adapt, e.g. dict.

Example:

    db = cysqlite.connect(':memory:')

    @db.adapter(dict)
    def adapt_dict(value):
        return json.dumps(value)
- unregister_adapter(python_type)¶
Unregister adapter for the given Python type.
- Parameters:
  - python_type – python type to unregister.
- Returns:
  True if the adapter was found and removed, False otherwise.
- register_converter(data_type, fn)¶
Register a converter for non-standard data-types declared in SQLite, e.g. DATETIME or NUMERIC(x, y). SQLite can only natively store a handful of data-types (see SQLite Notes for details).

The fn function accepts a single value and converts it. fn(value) will not be called if the value is None, so you do not need to test for value is None in your converter functions.

- Parameters:
  - data_type (str) – declared SQLite data-type to apply conversion to.
  - fn – callable that accepts a single value and converts it.
The converter data_type uses the following rules for matching what SQLite tells us:

Matching is case-insensitive, e.g. JSON or json is fine.
Split on the first whitespace or "(" character, e.g. if SQLite tells us the data-type is NUMERIC(10, 2), cysqlite will attempt to find a converter named numeric.

See also

Connection.converter() decorator.

Example:

    db = cysqlite.connect(':memory:')

    # Automatically parse and load data declared as JSON.
    db.register_converter('json', json.loads)

    # Or use the `converter()` decorator.
    @db.converter('datetime')
    def convert_datetime(value):
        # Converts our ISO-formatted date string into a python datetime.
        return datetime.datetime.fromisoformat(value)

    # Handle decimal data, matches NUMERIC or NUMERIC(10, 2).
    @db.converter('numeric')
    def convert_numeric(value):
        return Decimal(value).quantize(Decimal('1.00'))

    db.execute('create table vals (ts datetime, js json, dec numeric(10, 2))')

    # Create a TZ-aware datetime, a JSON object and a Decimal.
    ts = datetime.datetime(2026, 1, 2, 3, 4, 5).astimezone(datetime.timezone.utc)
    js = {'key': {'nested': 'value'}, 'arr': ['i0', 1, 2., None]}
    dec = Decimal('1.3')

    # When we INSERT the JSON, note that we need to dump it to string.
    db.execute('insert into vals (ts, js, dec) values (?, ?, ?)',
               (ts, json.dumps(js), dec))

    # When reading back the data, it is converted automatically based on
    # the declared column types.
    row = db.execute_one('select * from vals')
    assert row == (ts, js, dec)
NULL values from SQLite are not converted, so your converter function does not need to handle None:

    db.execute('insert into vals (ts, js, dec) values (?, ?, ?)',
               (None, None, None))

    # Converters are not applied when value is NULL/None.
    row = db.execute_one('select * from vals')
    assert row == (None, None, None)
See also
SQLite Notes for more information on SQLite data-types.
- converter(data_type)¶
Decorator for registering user-defined converters.

- Parameters:
  - data_type (str) – declared SQLite data-type to apply conversion to.

Example:

    db = cysqlite.connect(':memory:')

    @db.converter('datetime')
    def convert_datetime(value):
        # Converts our ISO-formatted date string into a python datetime.
        return datetime.datetime.fromisoformat(value)
- unregister_converter(data_type)¶
Unregister converter for the given data type.
- Parameters:
  - data_type (str) – declared SQLite data-type the converter was registered for.
- Returns:
  True if the converter was found and removed, False otherwise.
- register_type(data_type=None, converter=None, python_type=None, adapter=None)¶
- Parameters:
  - data_type (str) – declared SQLite data-type to apply conversion to.
  - converter – callable that accepts a single value and converts it.
  - python_type – python data-type to adapt, e.g. dict.
  - adapter – callable that accepts a single value and adapts it.

Short-hand method to register a converter and an adapter at once.

Example:

    db.register_type('json', json.loads, dict, json.dumps)
- connect()¶
Open connection to the database.
- Returns:
  True if the database was previously closed and is now open. If the database was already open, returns False.
- Return type:
  bool
- Raises:
  OperationalError if opening the database fails.
- close(force=False)¶
Close the database, finalizing all cursors and other hooks and handles associated with the active connection.
- Parameters:
  - force (bool) – Force close, even if a transaction is open (any open transaction will be rolled back).
- Returns:
  True if the database was previously open and is now closed. If the database was already closed, returns False.
- Return type:
  bool
- Raises:
  OperationalError if a transaction is still active (and force was not specified), or if an error occurs while closing the connection.
- is_closed()¶
- Returns:
  whether the database is currently closed.
- Return type:
  bool
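The boolean return values of connect() and close() make the open/closed life-cycle easy to assert on. A sketch, assuming autoconnect=False leaves the connection closed until connect() is called, as documented above:

```python
import cysqlite

# autoconnect=False defers opening until connect() is called.
db = cysqlite.Connection(':memory:', autoconnect=False)
assert db.is_closed()

assert db.connect()      # Was closed, now open: True.
assert not db.connect()  # Already open: False.

assert db.close()        # Was open, now closed: True.
assert db.is_closed()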
- cursor()¶
Create a reusable cursor for executing queries.
- Returns:
  a cursor for executing queries.
- Return type:
  Cursor

Note

The use of cursor() is optional. There is no performance difference between creating a cursor and calling Cursor.execute() versus calling Connection.execute(), as the latter will create and return a Cursor.
- execute(sql, params=None)¶
Create a new Cursor and call execute() with the given sql and params.

- Parameters:
  - sql (str) – SQL query to execute.
  - params (tuple, list, sequence, dict, or None) – parameters for the query (optional).
- Returns:
  cursor object.
- Return type:
  Cursor

Example:

    db.execute('create table kv ("id" integer primary key, "key", "value")')

    # Iterate over results from a bulk-insert.
    curs = db.execute('insert into kv (key, value) values (?, ?), (?, ?) '
                      'returning id, key', ('k1', 'v1', 'k2', 'v2'))
    for (i, k) in curs:
        print(f'inserted {k} with id={i}')

    # Retrieve a single row result. Use a named parameter.
    curs = db.execute('select * from kv where id = :pk', {'pk': 1})
    row = curs.fetchone()
    print(f'retrieved row 1: {row}')

    # Empty result set.
    curs = db.execute('select * from kv where id = 0')
    assert curs.fetchone() is None
    assert curs.fetchall() == []
- executemany(sql, seq_of_params)¶
Create a new Cursor and call executemany() with the given sql and seq_of_params.

Queries executed by executemany() must not return any result rows, or an OperationalError will be raised.

- Parameters:
  - sql (str) – SQL query to execute.
  - seq_of_params (sequence of: tuple, list, sequence, or dict) – iterable of parameters to repeatedly execute the query with.
- Returns:
  cursor object.
- Return type:
  Cursor

Example:

    db.execute('create table kv ("id" integer primary key, "key", "value")')

    curs = db.executemany('insert into kv (key, value) values (?, ?)',
                          [('k1', 'v1'), ('k2', 'v2'), ('k3', 'v3')])
    print(curs.lastrowid)  # 3.
    print(curs.rowcount)   # 3.

    curs = db.executemany('insert into kv (key, value) values (:k, :v)',
                          [{'k': 'k4', 'v': 'v4'}, {'k': 'k5', 'v': 'v5'}])
    print(curs.lastrowid)  # 5.
    print(curs.rowcount)   # 2.
- executescript(sql)¶
Execute the SQL statement(s) in sql.
- Parameters:
  - sql (str) – One or more SQL statements to execute as a script.
- Returns:
  cursor object.
- Return type:
  Cursor

Example:

    db.executescript("""
        begin;
        create table users (
            id integer not null primary key,
            name text not null,
            email text not null);
        create index users_email ON users (email);
        create table tweets (
            id integer not null primary key,
            content text not null,
            user_id integer not null references users (id),
            timestamp integer not null);
        commit;
    """)
- execute_one(sql, params=None)¶
Create a new Cursor and call execute() with the given sql and params. Returns the first result row, if one exists.

- Parameters:
  - sql (str) – SQL query to execute.
  - params (tuple, list, sequence, dict, or None) – parameters for the query (optional).
- Returns:
  first row, or None if there are no results.
- Return type:
  tuple, Row, or None

Example:

    row = db.execute_one('select * from users where id = ?', (1,))

    row = db.execute_one('select * from users where name = :username',
                         {'username': 'charlie'})
- execute_scalar(sql, params=None)¶
Create a new Cursor and call execute() with the given sql and params. Returns the first value of the first result row, if one exists. Useful for aggregates or queries that only return a single value.

- Parameters:
  - sql (str) – SQL query to execute.
  - params (tuple, list, sequence, dict, or None) – parameters for the query (optional).
- Returns:
  a single value, or None if there is no result.

Example:

    count = db.execute_scalar('select count(*) from users')
- begin(lock=None)¶
Begin a transaction.
If a transaction is already active, raises OperationalError.

- Parameters:
  - lock (str) – type of SQLite lock to acquire: DEFERRED (default), IMMEDIATE, or EXCLUSIVE.

Note

SQLite does not support nesting transactions using BEGIN/COMMIT. To nest transactions, SAVEPOINTs must be used within a transaction. SAVEPOINTs are safe to nest.

To avoid having to do this book-keeping yourself, use Connection.atomic() to decorate or wrap blocks which need to be executed atomically within a transactional context. The appropriate wrapper (transaction or savepoint) will automatically be used.

See Connection.atomic() for details.

Example:

    db = connect(':memory:')
    assert db.autocommit()  # Autocommit mode by default.

    db.begin()
    # Now we are in a transaction.
    assert not db.autocommit()

    db.commit()
    assert db.autocommit()  # Back in autocommit mode.
- commit()¶
Commit the current transaction.
If no transaction is active, raises OperationalError.
- rollback()¶
Roll-back the currently-active transaction.
If no transaction is active, raises OperationalError.
- autocommit()¶
Returns whether the database is in autocommit mode (no transaction is currently active).
- Return type:
bool
    db = connect(':memory:')

    # We are in autocommit-mode by default.
    assert db.autocommit()

    db.begin()
    assert not db.autocommit()  # We are in a transaction.
    db.commit()

    with db.atomic():
        assert not db.autocommit()  # In a transaction.

    assert db.autocommit()  # Back in autocommit mode.
- property in_transaction¶
Returns whether a transaction is currently active.
- Return type:
bool
- atomic(lock=None)¶
Create a context-manager which runs any queries in the wrapped block in a transaction (or savepoint when blocks are nested).
Calls to atomic() can be nested. atomic() can also be used as a decorator.

- Parameters:
  - lock (str) – type of SQLite lock to acquire: DEFERRED (default), IMMEDIATE, or EXCLUSIVE (only effective for the outermost transaction).

Example:

    # All queries within the wrapped blocks are run inside a transaction
    # (or savepoint, when nested). When the block exits, the block will be
    # committed. If an unhandled exception occurs, the block will be
    # rolled back.
    with db.atomic() as txn:
        db.execute('...')
        with db.atomic() as nested_txn:
            db.execute('...')
        # nested_txn is committed now.
    # Both blocks committed.

    @db.atomic
    def atomic_function():
        # All queries run inside a transaction (or savepoint) and commit
        # when the function returns, or roll-back if an exception occurs.
        db.execute('...')
Transactions and save-points can be explicitly committed or rolled-back within the wrapped block. If this occurs, a new transaction or savepoint is begun immediately after the commit/rollback.
Example:
    with db.atomic() as txn:
        db.execute('insert into users (username) values (?)', ('alice',))
        txn.commit()  # Changes are saved and a new transaction begins.

        db.execute('insert into users (username) values (?)', ('bob',))
        txn.rollback()  # "bob" will not be saved.

        with db.atomic() as nested:
            db.execute('insert into users (username) values (?)', ('carl',))
            nested.rollback()  # Rollback occurs, new savepoint begins.

            db.execute('insert into users (username) values (?)', ('dale',))

    # Block has exited - transactions implicitly committed.

    # Print the usernames of all users.
    print(db.execute('select username from users').fetchall())
    # Prints [("alice",), ("dale",)]
- transaction(lock=None)¶
Create a context-manager that runs all queries in the wrapped block in a transaction.
Calls to transaction() can be nested, but only the outer-most transaction is used for implicit commits. transaction() can also be used as a decorator.

- Parameters:
  - lock (str) – type of SQLite lock to acquire: DEFERRED (default), IMMEDIATE, or EXCLUSIVE.

Example:

    # Use as a context-manager. Commits on success, rolls back on error.
    with db.transaction() as txn:
        db.execute('insert into users (name) values (?)', ('alice',))
        db.execute('insert into users (name) values (?)', ('bob',))

    # Use as a decorator.
    @db.transaction()
    def create_user(name, email):
        db.execute('insert into users (name, email) values (?, ?)',
                   (name, email))

    # Acquire an EXCLUSIVE lock immediately to prevent other writers.
    with db.transaction('EXCLUSIVE') as txn:
        db.execute('update counters set n = n + 1 where name = ?', ('hits',))
Note
Most applications should prefer Connection.atomic(), which automatically uses a transaction at the outermost level and savepoints for nested calls.
- savepoint(sid=None)¶
Create a context-manager that runs all queries in the wrapped block in a savepoint. Savepoints can only be used within an active transaction.
Calls to savepoint() can be nested. savepoint() can also be used as a decorator.

- Parameters:
  - sid (str) – savepoint id (optional).

Example:

    with db.transaction():
        db.execute('insert into users (name) values (?)', ('alice',))

        # Nested savepoint - can be rolled back independently.
        with db.savepoint() as sp:
            db.execute('insert into users (name) values (?)', ('bob',))
            sp.rollback()  # Only 'bob' is rolled back.

    # 'alice' is still present when the transaction commits.
Note
Most applications should prefer Connection.atomic(), which automatically uses a transaction at the outermost level and savepoints for nested calls.
- changes()¶
Return the number of rows modified, inserted or deleted by the most recently completed INSERT, UPDATE or DELETE statement on the database connection.
See sqlite3_changes for details on what operations are counted.
- Return type:
int
- total_changes()¶
Return the total number of rows inserted, modified or deleted by all INSERT, UPDATE or DELETE statements completed since the database connection was opened, including those executed as part of triggers.
See sqlite3_total_changes for details on what operations are counted.
- Return type:
int
- last_insert_rowid()¶
Return the rowid of the most recent successful INSERT into a rowid table or virtual table on the database connection. Inserts into WITHOUT ROWID tables are not recorded. If no successful INSERTs into rowid tables have ever occurred on the database connection, returns zero.
See sqlite3_last_insert_rowid for more details.
- Return type:
int
- interrupt()¶
Cause any pending database operation to abort and return at its earliest opportunity. This routine is typically called in response to a user action such as pressing “Cancel” or Ctrl-C where the user wants a long query operation to halt immediately.
Example:
    import signal

    def on_ctrl_c(signum, frame):
        db.interrupt()

    signal.signal(signal.SIGINT, on_ctrl_c)

    try:
        db.execute('select * from big_a, big_b, big_c').fetchall()
    except OperationalError:
        print('Query was interrupted.')
- status(flag)¶
Read the current and highwater values for the given db status flag.
- Parameters:
  - flag (int) – status values to read.
- Returns:
  (current, highwater)
- Return type:
  tuple
Example:

    db = connect('app.db')

    # Execute a query.
    db.execute('select * from register').fetchall()

    # Get the amount of memory used by cached data (pages, schema, etc).
    current, highwater = db.status(SQLITE_DBSTATUS_CACHE_USED)
    print(f'Cache: {current} bytes (peak: {highwater})')
- pragma(key, value=SENTINEL, database=None, multi=False, permanent=False)¶
Read or write a SQLite pragma to check or adjust run-time behavior.
- Parameters:
  - key (str) – pragma name.
  - value – value to use (when setting the pragma).
  - database (str) – database name to apply the pragma to, optional.
  - multi (bool) – when True, return all results. Useful for pragmas that return multiple rows of data, e.g. table_list.
  - permanent (bool) – Restore this pragma if the connection is closed and re-opened with connect().
- Returns:
  the value of the specified pragma, or a Cursor if multi was specified.

Example:

    db = connect('app.db')

    # Set database to WAL-mode, which allows multiple readers to co-exist
    # with a single writer.
    db.pragma('journal_mode', 'wal')

    # Log the cache size.
    cache_size = db.pragma('cache_size')
    if cache_size < 0:
        # Negative values are KiB.
        size_in_kb = abs(cache_size)
    else:
        # Positive values are a page count - convert pages to KiB.
        size_in_kb = cache_size * db.pragma('page_size') // 1024
    print(f'Cache size: {size_in_kb}')

    # List all tables in the database. table_list returns multiple rows,
    # so specify multi=True to get a cursor over all of them.
    for row in db.pragma('table_list', multi=True):
        print(row[1])  # Print just the table name.
- get_tables(database=None)¶
List all tables in the database.
- Parameters:
database (str) – database name, optional.
- Returns:
  list of table names.
- get_views(database=None)¶
List all VIEWs in the database.
- Parameters:
  - database (str) – database name, optional.
- Returns:
  list of View tuples.

Example:

    print(db.get_views())

    [View(name='entries_public',
          sql='CREATE VIEW entries_public AS SELECT ... '),
     View(...), ...]
- get_indexes(table, database=None)¶
List all INDEXES on the table.
- Parameters:
  - table (str) – table name.
  - database (str) – database name, optional.
- Returns:
  list of Index tuples.

Example:

    print(db.get_indexes('entry'))

    [Index(
         name='entry_public_list',
         sql='CREATE INDEX "entry_public_list" ...',
         columns=['timestamp'],
         unique=False,
         table='entry'),
     Index(
         name='entry_slug',
         sql='CREATE UNIQUE INDEX "entry_slug" ON "entry" ("slug")',
         columns=['slug'],
         unique=True,
         table='entry')]
- get_columns(table, database=None)¶
List of columns in the table.
- Parameters:
  - table (str) – table name.
  - database (str) – database name to query, optional.
- Returns:
  list of Column tuples.

Example:

    print(db.get_columns('entry'))

    [Column(
         name='id',
         data_type='INTEGER',
         null=False,
         primary_key=True,
         table='entry'),
     Column(
         name='title',
         data_type='TEXT',
         null=False,
         primary_key=False,
         table='entry'),
     ...]
- get_primary_keys(table, database=None)¶
Return a list of column names that comprise the table primary key.
- Parameters:
  - table (str) – table name.
  - database (str) – database name to query, optional.
- Returns:
  list of key column names.

Example:

    print(db.get_primary_keys('entry'))

    ['id']
- get_foreign_keys(table, database=None)¶
List of foreign-keys in the table.
- Parameters:
  - table (str) – table name.
  - database (str) – database name to query, optional.
- Returns:
  list of ForeignKey tuples.

Example:

    print(db.get_foreign_keys('entrytag'))

    [ForeignKeyMetadata(
         column='entry_id',
         dest_table='entry',
         dest_column='id',
         table='entrytag'),
     ...]
- table_column_metadata(table, column, database=None)¶
Provide full metadata for a column in a table.
- Parameters:
  - table (str) – table name.
  - column (str) – column name.
  - database (str) – database name to query, optional.
- Returns:
  metadata for the column.
- Return type:
  ColumnMetadata
Example:
    db.table_column_metadata('entry', 'title')

    ColumnMetadata(
        table='entry',
        column='title',
        datatype='TEXT',
        collation='BINARY',
        not_null=True,
        primary_key=False,
        auto_increment=False)
- backup(dest, pages=None, name=None, progress=None, src_name=None)¶
Perform an online backup to the given destination Connection.

- Parameters:
  - dest (Connection) – database to serve as destination for the backup.
  - pages (int) – Number of pages per iteration. The default of -1 indicates all pages should be backed-up in a single step.
  - name (str) – Name of the source database (may differ if you used ATTACH DATABASE to load multiple databases). Defaults to "main".
  - progress – Progress callback, called with three parameters: the number of pages remaining, the total page count, and whether the backup is complete.
Example:
    master = connect('master.db')
    replica = connect('replica.db')

    # Backup the contents of master to replica.
    master.backup(replica)
Progress example:
    master = connect('master.db')
    replica = connect('replica.db')

    def progress(remaining, total, is_complete):
        print(f'{remaining}/{total} pages remaining')

    # Backup the contents of master to replica.
    master.backup(replica, pages=10, progress=progress)
- backup_to_file(filename, pages=None, name=None, progress=None, src_name=None)¶
Perform an online backup to the given destination file.
- Parameters:
  - filename (str) – database file to serve as destination for the backup.
  - pages (int) – Number of pages per iteration. The default of -1 indicates all pages should be backed-up in a single step.
  - name (str) – Name of the source database (may differ if you used ATTACH DATABASE to load multiple databases). Defaults to "main".
  - progress – Progress callback, called with three parameters: the number of pages remaining, the total page count, and whether the backup is complete.
Example:
    master = connect('master.db')

    # Backup the contents of master to replica.db.
    master.backup_to_file('replica.db')
- blob_open(table, column, rowid, read_only=False, database=None)¶
Open a Blob handle to an existing BLOB.

- Parameters:
  - table (str) – table where the blob is stored.
  - column (str) – column where the blob is stored.
  - rowid (int) – id of the row to open.
  - read_only (bool) – open the blob in read-only mode.
  - database (str) – database name, optional.
- Returns:
  a handle to access the BLOB data.
- Return type:
  Blob
Note
The blob size cannot be changed using the Blob class. Use the SQL function zeroblob to create a blob with a fixed size.
Example:
    db.execute('create table register ('
               'id integer primary key, '
               'data blob not null)')

    # Create a row with a 16-byte empty blob.
    db.execute('insert into register (data) values (zeroblob(?))', (16,))
    rowid = db.last_insert_rowid()

    # Obtain a handle to access the BLOB.
    blob = db.blob_open('register', 'data', rowid)
    blob.write(b'abcdefgh')
    assert blob.tell() == 8

    blob.seek(2)
    print(blob.read(4))  # b'cdef'

    blob.close()  # Release the blob handle.
- load_extension(name)¶
Load an extension module from a shared library.
- Parameters:
  - name (str) – the path to the SQLite extension.
- Raises:
  OperationalError if the extension could not be loaded.
Example:
    # Create connection and indicate we want to support loadable
    # extensions.
    db = connect(':memory:', extensions=True)

    # Alternately, we can enable loadable extensions later.
    db.set_load_extension(True)

    # Load the closure table extension.
    db.load_extension('./closure.so')
- create_function(fn, name=None, nargs=-1, deterministic=True)¶
Create or remove a user-defined SQL function.
- Parameters:
  - fn – A callable that is called when the SQL function is invoked. The callable must return a type natively supported by SQLite. Set to None to remove an existing SQL function.
  - name (str) – name of the SQL function. If unspecified, the name of the Python function will be used.
  - nargs (int) – The number of arguments the SQL function can accept. If -1, it may take any number of arguments.
  - deterministic (bool) – If True (the default), the created SQL function is marked as deterministic, which allows SQLite to perform additional optimizations: the result can be reused across references within a single row, and the function can be used inside indexes on expressions, CHECK constraints, generated columns, and partial-index WHERE clauses.
Example:
    def title_case(s):
        return s.title() if s else ''

    # Register our custom function.
    db.create_function(title_case)

    db.execute('select title_case(?)', ('heLLo wOrLd',)).fetchone()
    # ('Hello World',)
- create_aggregate(agg, name=None, nargs=-1, deterministic=True)¶
Create or remove a user-defined SQL aggregate function.
- Parameters:
  - agg (class | None) – A class implementing the aggregate API. Must implement the following methods:

    - step(): Add a row to the aggregate.
    - finalize(): Return the final result of the aggregate as a type natively supported by SQLite.

    The number of arguments that the step() method must accept is controlled by nargs. Set to None to remove an existing SQL aggregate function.
  - name (str) – name of the SQL aggregate function. If unspecified, the name of the Python class will be used.
  - nargs (int) – The number of arguments the SQL aggregate function can accept. If -1, it may take any number of arguments.
  - deterministic (bool) – If True (the default), the created SQL function is marked as deterministic, which allows SQLite to perform additional optimizations: the result can be reused across references within a single row, and the function can be used inside indexes on expressions, CHECK constraints, generated columns, and partial-index WHERE clauses.
Examples:
    import hashlib

    class MD5(object):
        def __init__(self):
            self.md5 = hashlib.md5()
        def step(self, value):
            self.md5.update(value)
        def finalize(self):
            return self.md5.hexdigest()

    class Product(object):
        '''Like SUM() except calculates cumulative product.'''
        def __init__(self):
            self.product = 1
        def step(self, value):
            self.product *= value
        def finalize(self):
            return self.product

    # Register our custom aggregates.
    db.create_aggregate(MD5, 'md5', 1)
    db.create_aggregate(Product)
- create_window_function(agg, name=None, nargs=-1, deterministic=True)¶
Create or remove a user-defined SQL window function.
- Parameters:
  - agg (class | None) – A class implementing the window function API. Must implement the following methods:

    - step(): Add a row to the aggregate.
    - inverse(): Inverse of the step method.
    - value(): Return the current value of the window function.
    - finalize(): Return the final result of the window function as a type natively supported by SQLite.

    The number of arguments that the step() and inverse() methods must accept is controlled by nargs. Set to None to remove an existing SQL window function.
  - name (str) – name of the SQL window function. If unspecified, the name of the Python class will be used.
  - nargs (int) – The number of arguments the SQL window function can accept. If -1, it may take any number of arguments.
  - deterministic (bool) – If True (the default), the created SQL function is marked as deterministic, which allows SQLite to perform additional optimizations: the result can be reused across references within a single row, and the function can be used inside indexes on expressions, CHECK constraints, generated columns, and partial-index WHERE clauses.

Example:
    class MySum(object):
        def __init__(self):
            self._value = 0
        def step(self, value):
            self._value += value
        def inverse(self, value):
            self._value -= value
        def value(self):
            return self._value
        def finalize(self):
            return self._value

    # Register our custom window function (roughly identical to SUM).
    db.create_window_function(MySum, 'mysum', 1, True)
- create_collation(fn, name)¶
Create a collation named name using the collating function fn. fn is passed two string arguments, and it should return an integer:
1 if the first is ordered higher than the second
-1 if the first is ordered lower than the second
0 if they are ordered equal
- Parameters:
fn (callback | None) – A callable that is called when comparing values for ordering. Set to None to remove an existing collation.
name (str) – name of the SQL collation. If unspecified, the name of the Python function will be used.
The following example shows a reverse sorting collation:
def collate_reverse(string1, string2):
    if string1 == string2:
        return 0
    elif string1 < string2:
        return 1
    else:
        return -1

db.create_collation(collate_reverse, 'reverse')
- commit_hook(fn)¶
Register a callback to be executed whenever a transaction is committed on the current connection. The callback accepts no parameters and the return value is ignored.
However, if the callback raises a ValueError, the transaction will be aborted and rolled back.
- Parameters:
fn (callable | None) – callable or None to clear the current hook.
Example:
db = connect(':memory:')

# Example that aborts all COMMITs between midnight and 12:59:59am.
def on_commit():
    if datetime.now().hour == 0:
        raise ValueError('No changes allowed this late.')

db.commit_hook(on_commit)

# Clear the commit hook.
db.commit_hook(None)
- rollback_hook(fn)¶
Register a callback to be executed whenever a transaction is rolled back on the current connection. The callback accepts no parameters and the return value is ignored.
- Parameters:
fn (callable | None) – callable or None to clear the current hook.
Example:
def on_rollback():
    logger.info('Rolling back changes')

db.rollback_hook(on_rollback)

# Clear the rollback hook.
db.rollback_hook(None)
- update_hook(fn)¶
- Parameters:
fn (callable | None) – callable or None to clear the current hook.
Register a callback to be executed whenever the database is written to (via an UPDATE, INSERT or DELETE query). The callback should accept the following parameters:
query - the type of query, either INSERT, UPDATE or DELETE.
database name - the default database is named main.
table name - name of table being modified.
rowid - the rowid of the row being modified.
The callback’s return value is ignored.
Example:
def on_update(query_type, db, table, rowid):
    # e.g. INSERT row 3 into table users.
    logger.info('%s row %s into table %s', query_type, rowid, table)

db.update_hook(on_update)
- authorizer(fn)¶
- Parameters:
fn (callable | None) – callable or None to clear the current authorizer.
- Returns:
one of SQLITE_OK, SQLITE_IGNORE or SQLITE_DENY.
See also
Register an authorizer callback. Authorizer callbacks must accept 5 parameters, which vary depending on the operation being checked.
op: operation code, e.g. SQLITE_INSERT.
p1: operation-specific value, e.g. table name for SQLITE_INSERT.
p2: operation-specific value.
p3: database name, e.g. "main".
p4: inner-most trigger or view responsible for the access attempt if applicable, else None.
See sqlite authorizer documentation for description of authorizer codes and values for parameters p1 and p2.
The authorizer callback must return one of:
SQLITE_OK: allow operation.
SQLITE_IGNORE: allow statement compilation but prevent the operation from occurring.
SQLITE_DENY: prevent statement compilation.
Only a single authorizer can be in place on a database connection at a time.
Example:
# Prevent any updates to the log table.
def prevent_updating_log(op, p1, p2, p3, p4):
    if op == SQLITE_UPDATE and p1 == 'log':
        return SQLITE_DENY
    return SQLITE_OK

db.authorizer(prevent_updating_log)

# raises OperationalError - not authorized (code=23).
db.execute('update log set status=? where id=?', (0, 1))
- trace(fn, mask=2, expand_sql=True)¶
- Parameters:
fn – callable or None to clear the current trace hook.
mask (int) – mask of what types of events to trace. Default value corresponds to SQLITE_TRACE_PROFILE.
expand_sql (bool) – Pass callback the sqlite3_expanded_sql() from the sqlite3_stmt (expands bound parameters).
Mask must consist of one or more of the following constants combined with bitwise-or: SQLITE_TRACE_STMT, SQLITE_TRACE_PROFILE, SQLITE_TRACE_ROW, SQLITE_TRACE_CLOSE.
Register a trace hook. Trace callback must accept 4 parameters, which vary depending on the operation being traced:
event: type of event, e.g. SQLITE_TRACE_PROFILE.
sid: memory address of statement (for SQLITE_TRACE_CLOSE, sid=-1).
sql: SQL string. If expand_sql then bound parameters will be expanded (for SQLITE_TRACE_CLOSE, sql=None).
ns: estimated number of nanoseconds the statement took to run (only SQLITE_TRACE_PROFILE), else -1.
See sqlite3_trace_v2 for more details on trace modes and parameters.
Any return value from callback is ignored.
Example:
# Log every statement and how long it took.
def log_query(event, sid, sql, ns):
    if event == SQLITE_TRACE_PROFILE:
        ms = ns / 1_000_000
        print(f'[{ms:.1f}ms] {sql}')

# SQLITE_TRACE_PROFILE = 2 (the default mask).
db.trace(log_query)

db.execute('select * from users')
# [0.1ms] select * from users

# Remove the trace hook.
db.trace(None)
- progress(fn, n=1)¶
- Parameters:
fn (callable | None) – callable or None to clear the current progress handler.
n (int) – approximate number of VM instructions to execute between calls to the progress handler.
Register a progress handler (sqlite3_progress_handler).
Callback takes zero arguments and returns 0 to allow progress to continue or any non-zero value to interrupt progress.
Example:
def allow_interrupt():
    if halt_requested:
        return 1  # Triggers an interrupt.
    return SQLITE_OK

# Install our progress handler.
db.progress(allow_interrupt)

# Begin a very long database operation, allow the user to interrupt.
try:
    with db.atomic():
        db.execute(...)
except OperationalError:
    # e.g. OperationalError: interrupted, code=9
    print('Query was interrupted.')

# Remove the progress handler.
db.progress(None)
- set_busy_handler(timeout=5.0)¶
- Parameters:
timeout (float) – seconds to retry acquiring write lock before raising an OperationalError when table is locked.
Register a custom busy-handler that introduces jitter to help prevent multiple concurrent executions from sleeping and blocking in lock-step. Also retries more frequently than the standard implementation.
This is recommended over the default busy timeout for applications with high write concurrency.
Example:
db = connect('app.db')
db.pragma('journal_mode', 'wal')

# Replace the default busy timeout with the aggressive busy handler.
db.set_busy_handler(timeout=10.0)

# Now concurrent writers will use jittered retries instead of a
# simple flat sleep, reducing lock contention.
- optimize(debug=False, run_tables=True, set_limit=True, check_table_sizes=False, dry_run=False)¶
- Parameters:
debug (bool) – debug-mode, do not actually perform any optimizations, instead print one line of text for each optimization that would have been done.
run_tables (bool) – run ANALYZE on tables that might benefit from it.
set_limit (bool) – when running ANALYZE set a temporary analysis limit to prevent excess run-time.
check_table_sizes (bool) – check size of all tables, not just those that have changed recently, to see if any would benefit from ANALYZE.
dry_run (bool) – see all optimizations that would have been performed without doing them.
Wrapper around PRAGMA optimize.
Example:
db = connect('app.db')

# Preview what optimizations would be performed.
for row in db.optimize(dry_run=True):
    print(row)

# Run optimizations (typically called before closing the db).
db.optimize()
db.close()
- attach(filename, name)¶
- Parameters:
filename (str) – database filename.
name (str) – name for attached database.
- Raises:
OperationalError if database could not be attached.
Example:
db.attach('/path/to/secondary.db', 'secondary')
curs = db.execute('select * from secondary.some_table')
- detach(name)¶
- Parameters:
name (str) – name of attached database to disconnect.
- Raises:
OperationalError if database not attached.
Example:
db.detach('secondary')
- database_list()¶
- Returns:
list of (name, filename) tuples for all databases active on the connection.
Example:
db = connect('app.db')
print(db.database_list())
# [('main', '/path/to/app.db')]

db.attach('/path/to/logs.db', 'logs')
print(db.database_list())
# [('main', '/path/to/app.db'), ('logs', '/path/to/logs.db')]
- set_main_db_name(name)¶
- Parameters:
name (str) – new name for main database.
Rename the main database schema. After calling this, queries must use the new name when qualifying tables.
Example:
db = connect('app.db')
db.set_main_db_name('appdb')

# The main database is now called 'appdb' instead of 'main'.
print(db.database_list())
# [('appdb', '/path/to/app.db')]

# Qualified queries use the new name.
db.execute('select * from appdb.users')
- set_autocheckpoint(n)¶
- Parameters:
n (int) – number of WAL frames between auto-checkpoints. Set to 0 to disable automatic checkpointing.
Configure the WAL auto-checkpoint interval. After every n frames are written to the WAL, a passive checkpoint is run automatically.
Example:
db = connect('app.db')
db.pragma('journal_mode', 'wal')

# Checkpoint every 500 frames instead of the default 1000.
db.set_autocheckpoint(500)

# Disable auto-checkpointing (manage it manually).
db.set_autocheckpoint(0)
- checkpoint(full=False, truncate=False, restart=False, name=None)¶
Perform a WAL checkpoint. By default will use SQLITE_CHECKPOINT_PASSIVE.
- Parameters:
full (bool) – use SQLITE_CHECKPOINT_FULL - wait for writers then checkpoint.
truncate (bool) – use SQLITE_CHECKPOINT_TRUNCATE - like restart, but also truncates the WAL file.
restart (bool) – use SQLITE_CHECKPOINT_RESTART - like full, but also waits for readers.
name (str) – database name to checkpoint, optional.
- Returns:
a tuple of (wal_size, checkpointed_pages).
- Raises:
ValueError if more than one of full, truncate, or restart is specified.
See sqlite3_wal_checkpoint_v2 for details on the parameters and their behavior.
Example:
db = connect('app.db')
db.pragma('journal_mode', 'wal')

# ... perform writes ...

# Passive checkpoint (does not block readers or writers).
wal_size, checkpointed = db.checkpoint()

# Full checkpoint (waits for writers, then checkpoints all frames).
wal_size, checkpointed = db.checkpoint(full=True)

# Truncate (checkpoint and then truncate the WAL file to zero bytes).
db.checkpoint(truncate=True)
- set_load_extension(enabled)¶
- get_load_extension()¶
Enable or disable run-time loadable extensions.
- set_foreign_keys_enabled(enabled)¶
- get_foreign_keys_enabled()¶
Enable or disable foreign-key constraint enforcement. For historical reasons, many builds of SQLite do not ship with foreign-key constraints enabled by default.
Example:
db = connect(':memory:')

# Check current state.
print(db.get_foreign_keys_enabled())  # 0 (disabled by default)

# Enable foreign key enforcement.
db.set_foreign_keys_enabled(1)

db.execute('create table parent (id integer primary key)')
db.execute('create table child (id integer primary key, '
           'pid references parent(id))')
db.execute('insert into parent (id) values (1)')
db.execute('insert into child (id, pid) values (1, 1)')  # OK.
db.execute('insert into child (id, pid) values (2, 99)')
# IntegrityError: FOREIGN KEY constraint failed.
- set_triggers_enabled(enabled)¶
- get_triggers_enabled()¶
Enable or disable triggers from being executed.
- db_config(op, setting=None)¶
- Parameters:
op (int) – SQLITE_DBCONFIG_ constant.
setting (int) – New value for setting or None to query the current value without changing it.
- Returns:
current value of the setting after the call.
- Return type:
int
Interface to sqlite3_db_config().
Example:
# Query current state of foreign keys without changing it.
current = db.db_config(SQLITE_DBCONFIG_ENABLE_FKEY)

# Enable defensive mode (prevents direct writes to internal
# schema tables).
db.db_config(SQLITE_DBCONFIG_DEFENSIVE, 1)
- setlimit(category, limit)¶
- getlimit(category)¶
Set or get the value of a run-time limit. Limits are defined by the Limit Constants.
See sqlite3 run-time limit categories for details.
Example:
# Get the current maximum SQL length.
max_sql = db.getlimit(SQLITE_LIMIT_SQL_LENGTH)
print(f'Max SQL length: {max_sql}')

# Restrict the maximum number of attached databases to 3.
old = db.setlimit(SQLITE_LIMIT_ATTACHED, 3)
print(f'Previous limit was: {old}')
- file_control(op, val, name=None)¶
- Parameters:
op (int) – SQLITE_FCNTL_ constant.
val (int) – Value for operation.
name (str) – optional name for database to apply fcntl to.
- Returns:
value returned by the file-control operation.
- Return type:
int
Interface to sqlite3_file_control().
Example:
# Enable persistent WAL mode (WAL file is not deleted on close).
db.file_control(SQLITE_FCNTL_PERSIST_WAL, 1)

# Get the data version (changes whenever the db is modified).
version = db.file_control(SQLITE_FCNTL_DATA_VERSION, 0)
print(f'Data version: {version}')
Cursor¶
- class Cursor(conn)¶
Cursors are created by calling Connection.cursor() or by executing a query with Connection.execute().
- Parameters:
conn (Connection) – connection the cursor is bound to.
- row_factory¶
Factory for creating row instances from query results, e.g. Row. Uses the value from the Connection.row_factory by default.
Example:
cursor = db.cursor()
cursor.row_factory = cysqlite.Row

cursor.execute('SELECT * FROM users')
for row in cursor:
    print(row['name'])  # Access by column name
- execute(sql, params=None)¶
Execute the given sql and params.
- Parameters:
sql (str) – SQL query to execute.
params (tuple, list, sequence, dict, or None) – parameters for query (optional).
- Returns:
self
- Return type:
Example:
curs = db.cursor()
curs.execute('create table kv ("id" integer primary key, "key", "value")')

# Iterate over results from a bulk-insert.
curs.execute('insert into kv (key, value) values (?, ?), (?, ?) '
             'returning id, key', ('k1', 'v1', 'k2', 'v2'))
for (i, k) in curs:
    print(f'inserted {k} with id={i}')

# Retrieve a single row result.
curs.execute('select * from kv where id = :pk', {'pk': 1})
row = curs.fetchone()
print(f'retrieved row 1: {row}')

# Empty result set.
curs.execute('select * from kv where id = 0')
assert curs.fetchone() is None
assert curs.fetchall() == []
- executemany(sql, seq_of_params)¶
Execute the given sql repeatedly for each parameter group in seq_of_params.
Queries executed by executemany() must not return any result rows, or this will result in an OperationalError.
- Parameters:
sql (str) – SQL query to execute.
seq_of_params (sequence of tuple, list, sequence, dict, or None) – iterable of parameters.
- Returns:
self
- Return type:
Example:
curs = db.cursor()
curs.execute('create table kv ("id" integer primary key, "key", "value")')

curs.executemany('insert into kv (key, value) values (?, ?)',
                 [('k1', 'v1'), ('k2', 'v2'), ('k3', 'v3')])
print(curs.lastrowid)  # 3.
print(curs.rowcount)   # 3.

curs.executemany('insert into kv (key, value) values (:k, :v)',
                 [{'k': 'k4', 'v': 'v4'}, {'k': 'k5', 'v': 'v5'}])
print(curs.lastrowid)  # 5.
print(curs.rowcount)   # 2.
- executescript(sql)¶
Execute the SQL statement(s) in sql.
- Parameters:
sql (str) – One or more SQL statements to execute in a script.
- Returns:
self
- Return type:
Example:
curs = db.cursor()
curs.executescript("""
    begin;
    create table users (
        id integer not null primary key,
        name text not null,
        email text not null);
    create index users_email ON users (email);
    create table tweets (
        id integer not null primary key,
        content text not null,
        user_id integer not null references users (id),
        timestamp integer not null);
    commit;
""")
- __iter__()¶
- __next__()¶
Cursors support the iterator protocol.
Example:
curs = db.execute('select username from users')
for row in curs:
    print(row[0])
- fetchone()¶
Return the next row from the query result set. By default rows are returned as tuple, but row type can be controlled by setting Connection.row_factory.
If no results are available or cursor has been consumed returns None.
Example:
curs = db.execute('select 1')
print(curs.fetchone())  # (1,)
print(curs.fetchone())  # None
- fetchmany(size)¶
Fetch up to size rows from the query result set. By default rows are returned as tuple, but row type can be controlled by setting Connection.row_factory.
If no results are available or cursor has been consumed returns [].
Example:
curs = db.execute('select 1 union select 2')
print(curs.fetchmany(10))  # [(1,), (2,)]
print(curs.fetchmany(10))  # []
- fetchall()¶
Fetch all rows from the query result set. By default rows are returned as tuple, but row type can be controlled by setting Connection.row_factory.
If no results are available or cursor has been consumed returns [].
Example:
curs = db.execute('select 1 union select 2')
print(curs.fetchall())  # [(1,), (2,)]
print(curs.fetchall())  # []
- scalar()¶
Fetch a single scalar value from the query result set. If no results are available or cursor has been consumed returns None.
Example:
curs = db.execute('select 1')
print(curs.scalar())  # 1
print(curs.scalar())  # None
- close()¶
Close the cursor and release associated resources.
Note
It is not necessary to explicitly close a cursor. Cursors will be closed during garbage collection automatically.
- columns()¶
Return a list of the names of columns in the result data.
- description¶
Return a DB-API style description of the row-data for the current query. cysqlite returns a list of tuples containing the individual column names for the query.
- lastrowid¶
Return the last-inserted rowid for the cursor.
- rowcount¶
Return the count of rows modified by the last operation. Returns -1 for queries that do not modify data.
For DML statements with a RETURNING clause, rowcount is updated incrementally as the result set is stepped. It reflects the final affected-row count only after the cursor has been fully drained (via fetchall(), iteration to completion, or exhausting fetchmany()). Reading rowcount mid-iteration is safe but may report a partial count; the exact pre-drain value depends on the SQLite query plan and should not be relied on.
Row Factories¶
- class Row(cursor, data)¶
- Parameters:
cursor (Cursor)
data (tuple)
Highly-optimized row class compatible with sqlite3.Row, intended to be used with Connection.row_factory or Cursor.row_factory. Supports the following access patterns:
Item lookup using column indices.
Dict lookup using column names.
Attribute access using attribute names.
Additionally supports dict-like interface methods.
- __len__()¶
- __eq__()¶
- __hash__()¶
- __contains__()¶
- get(key, default=None)¶
- keys()¶
- values()¶
- items()¶
- as_dict()¶
Example:
db = cysqlite.connect(':memory:')
db.row_factory = cysqlite.Row
db.execute('create table users (id integer primary key, '
           'username text, active integer)')
db.execute('insert into users values (?, ?, ?)', (1, 'charles', 1))

row = db.execute('select * from users').fetchone()

# repr shows all columns and values.
print(row)  # <Row(id=1, username='charles', active=1)>

# Access by attribute name, column name, or index.
print(row.username)   # 'charles'
print(row['active'])  # 1
print(row[0])         # 1

# Iterate over values.
print(list(row))  # [1, 'charles', 1]

# Dict-like interface.
print(row.keys())     # ['id', 'username', 'active']
print(row.as_dict())  # {'id': 1, 'username': 'charles', 'active': 1}

# Test membership and safe access.
print('username' in row)        # True
print(row.get('email', 'n/a'))  # 'n/a'

# Rows are usable in sets and as dict keys.
seen = set()
for row in db.execute('select * from users'):
    seen.add(row)
- dict_factory(cursor, data)¶
- Parameters:
cursor (Cursor)
data (tuple)
Row factory that returns rows as dicts, to be used with Connection.row_factory or Cursor.row_factory.
Blob¶
- class Blob(conn, table, column, rowid, read_only=False, database=None)¶
Open a blob handle, stored in the given table/column/row, for incremental I/O. To allocate storage for new data, you can use the SQL zeroblob(n) function, where n is the desired size in bytes.
- Parameters:
conn – Connection instance.
table (str) – Name of table being accessed.
column (str) – Name of column being accessed.
rowid (int) – Primary-key of row being accessed.
read_only (bool) – Prevent any modifications to the blob data.
database (str) – Name of database containing table, optional.
Note
Blob implements io.RawIOBase.
Example:
db = connect(':memory:')
db.execute('create table raw_data (id integer primary key, data blob)')

# Allocate 100MB of space for writing a large file incrementally:
db.execute('insert into raw_data (data) values (zeroblob(?))',
           (1024 * 1024 * 100,))
rowid = db.last_insert_rowid()

# Now we can open the row for incremental I/O:
blob = Blob(db, 'raw_data', 'data', rowid)

# Read from the file and write to the blob in chunks of 4096 bytes.
while True:
    data = file_handle.read(4096)
    if not data:
        break
    blob.write(data)

bytes_written = blob.tell()
blob.close()
- property closed¶
Whether the blob handle is closed.
- read(size=-1)¶
- Parameters:
size (int) – Only read up to size bytes from current position in file.
- Returns:
Blob data
- Return type:
bytes
Read up to size bytes from the current position in the blob file. If size is less than zero, the entire blob will be read.
- seek(offset, whence=0)¶
- Parameters:
offset (int) – Seek to the given offset in the file.
whence (int) – Seek relative to the specified frame of reference.
Values for whence:
0: beginning of file
1: current position
2: end of file
Attempts to seek beyond the start or end of the file raise ValueError.
- tell()¶
Return current offset within the file.
- write(data)¶
- Parameters:
data (buffer) – Data to be written: buffer (e.g. bytes, bytearray, memoryview) or str (will be encoded as UTF8).
- Returns:
Number of bytes written.
- Return type:
int
Writes the given data, starting at the current position in the file. If the data is too large for the blob to store, raises ValueError.
- __len__()¶
Return the length of the blob.
- close()¶
Close the file and free associated resources.
- reopen(rowid)¶
- Parameters:
rowid (int) – Primary key of row to open.
If a blob has already been opened for a given table/column, you can use the reopen() method to re-use the same Blob object for accessing multiple rows in the table.
Example:
db.execute('create table chunks (id integer primary key, data blob)')
for i in range(3):
    db.execute('insert into chunks (data) values (zeroblob(16))')

blob = db.blob_open('chunks', 'data', 1)
blob.write(b'data for row 1!!')

# Re-use the same handle for a different row.
blob.reopen(2)
blob.write(b'data for row 2!!')

blob.reopen(3)
blob.write(b'data for row 3!!')

blob.close()
- readline(size=-1)¶
- Parameters:
size (int) – Maximum number of bytes to read.
- Returns:
Line of data from the current offset.
- Return type:
bytes
- readlines(hint=-1)¶
- Parameters:
hint (int) – Maximum number of bytes to read.
- Returns:
Lines of data from the current offset.
- Return type:
list
- readall()¶
Read entire blob from the current position to the end.
- readinto(b)¶
- Parameters:
b (buffer) – Read contents of blob into the given writeable buffer.
- Returns:
Number of bytes read.
- Return type:
int
- writelines(lines)¶
- Parameters:
lines – any sequence of lines to write.
- Returns:
Number of bytes written
- Return type:
int
- __iter__()¶
Allow iteration over the lines of data in the blob.
- __enter__()¶
- __exit__(exc_type, exc_val, exc_tb)¶
Allow the blob to be used as a context-manager, closing when the wrapped block exits.
Example:
db.execute('insert into raw_data (data) values (zeroblob(?))', (64,))
rowid = db.last_insert_rowid()

with db.blob_open('raw_data', 'data', rowid) as blob:
    blob.write(b'hello ')
    blob.write(b'world!')

# blob is now closed.
- __getitem__(key)¶
- __setitem__(key, value)¶
- Parameters:
key – an integer index or a slice.
Read or write to the given index/slice. Indexing returns a single byte as an integer (0-255). Slicing returns bytes.
Example:
blob = db.blob_open('raw_data', 'data', rowid)
blob.write(b'ABCDEFGH')

# Read a single byte (returns an int).
print(blob[0])  # 65 (ord('A'))

# Read a slice (returns bytes).
print(blob[0:4])  # b'ABCD'

# Write a single byte.
blob[0] = ord('Z')

# Write a slice (must be exact same length).
blob[4:8] = b'wxyz'

print(blob[0:8])  # b'ZBCDwxyz'
blob.close()
Transaction Wrappers¶
- class Atomic(conn, lock=None)¶
Context-manager or decorator implementation for Connection.atomic(). Uses a transaction at the outermost level and savepoints for nested calls.
- __enter__()¶
Begin the transaction or savepoint.
- __exit__(exc_type, exc_val, exc_tb)¶
Commit the transaction or savepoint if exiting cleanly. If an unhandled exception occurred, roll back.
- __call__(fn)¶
Decorate the wrapped function with a transaction or savepoint. Equivalent to:
@db.atomic()
def modify_data():
    ...

# ... is the same as:
def modify_data():
    with db.atomic():
        ...
- commit()¶
Explicitly commit the transaction/savepoint. A new transaction/savepoint will begin automatically.
- rollback()¶
Explicitly roll back the transaction/savepoint. A new transaction/savepoint will begin automatically.
- class Transaction(conn, lock=None)¶
Context-manager or decorator implementation for Connection.transaction(). Same API as Atomic.
- class Savepoint(conn, sid=None)¶
Context-manager or decorator implementation for Connection.savepoint(). Same API as Atomic.
TableFunction¶
Implement a user-defined table-valued function. Unlike Connection.create_function()
or Connection.create_aggregate(), which return a single scalar value, a
table-valued function can return any number of rows of tabular data.
Example read-only TableFunction:
from cysqlite import TableFunction
class Series(TableFunction):
name = 'series' # Specify name - if empty class name is used.
# Name of columns in each row of generated data.
columns = ['value']
# Name of parameters the function may be called with.
params = ['start', 'stop', 'step']
def initialize(self, start=0, stop=None, step=1):
"""
Table-functions declare an initialize() method, which is
called with whatever arguments the user has called the
function with.
"""
self.start = self.current = start
self.stop = stop or float('Inf')
self.step = step
def iterate(self, idx):
"""
Iterate is called repeatedly by the SQLite database engine
until the required number of rows has been read **or** the
function raises a `StopIteration` signalling no more rows
are available.
"""
if self.current > self.stop:
raise StopIteration
ret, self.current = self.current, self.current + self.step
return (ret,)
# Register the table-function with our database.
db = connect(':memory:')
Series.register(db)
# Usage:
cursor = db.execute('select * from series(?, ?, ?)', (0, 5, 2))
for row in cursor:
print(row)
# (0,)
# (2,)
# (4,)
Example TableFunction that supports INSERT/UPDATE/DELETE queries:
class MemStore(TableFunction):
"""
In-memory key/value store exposed as a virtual-table.
"""
name = 'memstore'
columns = [
('id', 'INTEGER'),
('key', 'TEXT'),
('value', 'TEXT')]
params = []
_data = {}
_next_id = 1
def initialize(self, **filters):
pass
def iterate(self, idx):
keys = sorted(self._data)
if idx >= len(keys):
raise StopIteration
rowid = keys[idx]
row = self._data[rowid]
# Return a 2-tuple of (rowid, row-data) to specify rowid explicitly.
return (rowid, (row['id'], row['key'], row['value']))
def insert(self, rowid, values):
# rowid might be None, so we auto-generate
if rowid is None:
rowid = self._next_id
MemStore._next_id += 1
else:
rowid = int(rowid)
if rowid >= MemStore._next_id:
MemStore._next_id = rowid + 1
if len(values) < 3:
raise ValueError('Expected 3 values, got %s' % len(values))
u_rowid, key, value = values
self._data[rowid] = {
'id': int(u_rowid) if u_rowid is not None else rowid,
'key': str(key) if key is not None else key,
'value': str(value) if value is not None else value,
}
return rowid
def update(self, old_rowid, new_rowid, values):
old_rowid = int(old_rowid)
new_rowid = int(new_rowid)
if old_rowid not in self._data:
raise ValueError('Row %s not found' % old_rowid)
if len(values) < 3:
raise ValueError('Expected 3 values, got %s' % len(values))
uid, key, value = values
if uid:
new_rowid = uid
if old_rowid != new_rowid:
# User updated rowid, move data.
self._data[new_rowid] = self._data.pop(old_rowid)
rowid = new_rowid
else:
rowid = old_rowid
if uid:
self._data[rowid]['id'] = int(uid)
if key is not None:
self._data[rowid]['key'] = key
if value is not None:
self._data[rowid]['value'] = value
def delete(self, rowid):
rowid = int(rowid)
if rowid not in self._data:
raise ValueError('Row %s not found' % rowid)
del self._data[rowid]
# Register the table-function with our database.
db = connect(':memory:')
MemStore.register(db)
# Usage:
db.execute('insert into memstore (id, key, value) values (?, ?, ?)',
(1, 'k1', 'v1'))
db.execute('insert into memstore (key, value) values (?, ?), (?, ?)',
('k2', 'v2', 'k3', 'v3'))
assert db.last_insert_rowid() == 3
for row in db.execute('select * from memstore where value != ?', ('v2',)):
print(row)
# (1, 'k1', 'v1')
# (3, 'k3', 'v3')
db.execute('update memstore set value = ? where key = ?', ('v2y', 'k2'))
db.execute('update memstore set value = value || ?', ('z',))
for row in db.execute('select * from memstore'):
print(row)
# (1, 'k1', 'v1z')
# (2, 'k2', 'v2yz')
# (3, 'k3', 'v3z')
db.execute('delete from memstore where key = ?', ('k2',))
assert db.changes() == 1
db.execute('update memstore set id = ? where id = ?', (4, 3))
assert db.changes() == 1
db.execute('delete from memstore where key = ?', ('not-here',))
assert db.changes() == 0
for row in db.execute('select * from memstore'):
print(row)
# (1, 'k1', 'v1z')
# (4, 'k3', 'v3z')
print(MemStore._data)
# {1: {'id': 1, 'key': 'k1', 'value': 'v1z'},
# 4: {'id': 4, 'key': 'k3', 'value': 'v3z'}}
Note
A TableFunction must be registered with a database connection
before it can be used.
- class TableFunction¶
TableFunction implementations must provide two attributes and implement two methods, described below.
- columns¶
A list containing the names of the columns for the data returned by the function. For example, a function that is used to split a string on a delimiter might specify 3 columns: [substring, start_idx, end_idx].
To specify data-types for your columns, you can specify a list of 2-tuples of (column name, type), e.g.:
columns = [('key', 'text'), ('data', 'blob')]
- params¶
The names of the parameters the function may be called with. All parameters, including optional parameters, should be listed. For example, a function that is used to split a string on a delimiter might specify 2 params: [string, delimiter].
- name¶
Optional - specify the name for the table function. If not provided, name will be taken from the class name.
- print_tracebacks = True¶
Print a full traceback for any errors that occur in the table-function’s callback methods. When set to False, only the generic OperationalError will be visible.
- initialize(**parameter_values)¶
- Parameters:
parameter_values – Parameters the function was called with.
- Returns:
No return value.
The initialize() method is called to initialize the table function with the parameters the user specified when calling the function.
- iterate(idx)¶
- Parameters:
idx (int) – current iteration step, or more specifically rowid.
- Returns:
A tuple of row data corresponding to the columns named in the columns attribute.
Implementations may explicitly specify a rowid by returning a 2-tuple of (rowid, (row, data, here)).
- Raises:
StopIteration – To signal that no more rows are available.
This function is called repeatedly and returns successive rows of data. The function may terminate before all rows are consumed (especially if the user specified a LIMIT on the results). Alternatively, the function can signal that no more data is available by raising a StopIteration exception.
- insert(rowid, values)¶
- Parameters:
rowid (int) – rowid for inserted data, may be None.
values (list) – values to be inserted.
- Returns:
rowid of new data.
- Return type:
int
Handle INSERT into the virtual table.
If omitted, the virtual table will not support INSERT queries.
- update(old_rowid, new_rowid, values)¶
- Parameters:
old_rowid (int) – rowid for data being updated.
new_rowid (int) – new rowid for data (usually same as old_rowid).
values (list) – values to be updated.
- Returns:
no return value.
Handle UPDATE of a row in the virtual table.
If omitted, the virtual table will not support UPDATE queries.
- delete(rowid)¶
- Parameters:
rowid (int) – rowid to be deleted.
- Returns:
no return value.
Handle DELETE of a row in the virtual table.
If omitted, the virtual table will not support DELETE queries.
- classmethod register(conn)¶
- Parameters:
conn (Connection) – Connection to register table function.
Register the table function with the Connection. Table-valued functions must be registered before they can be used in a query.
Example:
class MyTableFunction(TableFunction):
    name = 'my_func'
    # ... other attributes and methods ...

db = connect(':memory:')
MyTableFunction.register(db)
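A fuller sketch, assuming the callback protocol documented above: a hypothetical generate_series-style function. The name series, its parameters, and the stubbed base class are illustrative, not part of cysqlite; the base class is stubbed when cysqlite is unavailable so the sketch stays self-contained.

```python
# Hypothetical table function yielding a range of integers, sketched
# against the TableFunction protocol described above.
try:
    from cysqlite import TableFunction
except ImportError:
    class TableFunction:  # minimal stand-in for illustration only
        @classmethod
        def register(cls, conn):
            raise NotImplementedError('cysqlite is required to register')


class Series(TableFunction):
    columns = [('value', 'integer')]
    params = ['start', 'stop', 'step']
    name = 'series'

    def initialize(self, start=0, stop=None, step=1):
        # Called once per invocation with the user-supplied parameters.
        self.current = start
        self.stop = stop
        self.step = step

    def iterate(self, idx):
        # Called repeatedly; raise StopIteration when no rows remain.
        if self.stop is not None and self.current >= self.stop:
            raise StopIteration
        value = self.current
        self.current += self.step
        return (value,)
```

Once registered with Series.register(db), such a function could be queried as select value from series(0, 10, 2).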
Helpers¶
cysqlite includes a handful of miscellaneous helpers for various things that may come up when you’re using SQLite.
- rank_bm25(...)¶
Okapi BM25 ranking algorithm for use with SQLite full-text search extensions (FTS4 and FTS5 only).
Parameters are opaque, as the function is intended to be called with the output of SQLite's matchinfo(table, 'pcnalx') function, which provides the metadata required to rank the search results.
Warning
You must specify 'pcnalx' as the output format for matchinfo() to ensure that the statistics are read from the index correctly.
Usage:
# Create an FTS4 virtual table to search.
db.execute('create virtual table search using fts4 '
           '(title, content, tokenize="porter")')

# Insert sample data into search index.
db.execute(
    'insert into search (docid, title, content) values (?, ?, ?)',
    (1, 'some title', 'text I wish to search'))

# Register the bm25 rank function.
db.create_function(rank_bm25)

# Perform search and rank results using BM25. The title will be
# weighted as 2x more important than the body content, for example.
# Search results are returned sorted most-relevant first.
curs = db.execute("""
    select docid, title, rank_bm25(matchinfo(search, ?), ?, ?) as score
    from search
    where search match ?
    order by score""", ('pcnalx', 2.0, 1.0, search_query))
for docid, title, score in curs:
    print('Document "%s" matched - score %s' % (title, score))

# No weighting, both columns equally scored. Results returned
# most-relevant first.
curs = db.execute("""
    select docid, title, rank_bm25(matchinfo(search, ?)) as score
    from search
    where search match ?
    order by score""", ('pcnalx', search_query))
for docid, title, score in curs:
    print('Document "%s" matched - score %s' % (title, score))
- rank_lucene(...)¶
Works similarly to rank_bm25() but uses a slightly different algorithm derived from Lucene. See the section on rank_bm25() above for a usage example.
- levenshtein_dist(a, b)¶
C implementation of the Levenshtein distance algorithm for comparing the similarity of two strings. Useful for fuzzy string matching.
Usage:
db = connect(':memory:')
db.create_function(levenshtein_dist, 'levdist')

print(db.execute_scalar(
    'select levdist(?, ?)', ('cysqlite', 'cyqslite')))
# 2
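For reference, the distance this function computes can be sketched in pure Python using the standard dynamic-programming recurrence (this is not cysqlite's actual C code):

```python
# Pure-Python sketch of the Levenshtein distance: the minimum number
# of single-character insertions, deletions, and substitutions needed
# to transform one string into the other.
def levenshtein(a, b):
    # prev[j] holds the distance between the processed prefix of a
    # and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein('cysqlite', 'cyqslite'))  # 2
```

Note that a transposition of adjacent characters costs two edits here (two substitutions); the Damerau-Levenshtein variant below counts it as one.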
- damerau_levenshtein_dist(a, b)¶
C implementation of the Damerau-Levenshtein distance algorithm for comparing the similarity of two strings. Useful for fuzzy string matching.
Usage:
db = connect(':memory:')
db.create_function(damerau_levenshtein_dist, 'dlevdist')

print(db.execute_scalar(
    'select dlevdist(?, ?)', ('cysqlite', 'cyqslite')))
# 1
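Damerau-Levenshtein extends the plain Levenshtein recurrence with one extra case that counts a transposition of adjacent characters as a single edit. A pure-Python sketch of the optimal string alignment variant (assuming that is the variant implemented; this is not cysqlite's actual C code):

```python
# Pure-Python sketch of the Damerau-Levenshtein (optimal string
# alignment) distance: like Levenshtein, plus adjacent-character
# transposition as a single edit.
def damerau_levenshtein(a, b):
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                # Transposition of two adjacent characters.
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[-1][-1]

print(damerau_levenshtein('cysqlite', 'cyqslite'))  # 1
```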
- class median¶
C implementation of an aggregate and window function that calculates the median of a sequence of values.
Usage:
db = connect(':memory:')
db.create_aggregate(median)
db.create_window_function(median)

med = db.execute('select median(salary) from employees').scalar()

# Get the employee salaries along w/median salary for their dept.
curs = db.execute("""
    select department, employee, salary,
           median(salary) over (partition by department)
    from employees
    order by department""")
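To illustrate the aggregate protocol the C class implements, here is a pure-Python equivalent registered through the standard library's sqlite3 module, so the sketch runs anywhere; cysqlite's create_aggregate is analogous.

```python
import sqlite3
import statistics

# Pure-Python equivalent of the median aggregate: collect values in
# step(), compute the median in finalize().
class Median:
    def __init__(self):
        self.values = []

    def step(self, value):
        if value is not None:
            self.values.append(value)

    def finalize(self):
        return statistics.median(self.values) if self.values else None

db = sqlite3.connect(':memory:')
db.create_aggregate('median', 1, Median)
row = db.execute(
    'select median(x) from (select 1 as x union all '
    'select 9 union all select 4)').fetchone()
print(row[0])  # 4
```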
Exceptions¶
Exceptions follow the DB-API 2.0 exception hierarchy.
- class SqliteError¶
Exception that is the base class of all other error exceptions. You can use this to catch all errors with a single except statement.
- class InterfaceError¶
Exception raised for errors that are related to the database interface rather than the database itself.
- class DatabaseError¶
Exception raised for errors that are related to the database. The following exception classes are all derived from DatabaseError.
- class DataError¶
Exception raised for errors that are due to problems with the processed data like division by zero, numeric value out of range, etc.
Currently not used.
- class OperationalError¶
Exception raised for errors that are related to the database’s operation. Raised in most cases for errors when working with the database.
Corresponds to most SQLite error codes.
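For a runnable illustration of the pattern, the standard library's sqlite3 module raises the same DB-API exception class for an analogous failure; cysqlite's OperationalError is used the same way.

```python
import sqlite3

# Querying a nonexistent table is a typical operational error.
db = sqlite3.connect(':memory:')
try:
    db.execute('select * from missing_table')
except sqlite3.OperationalError as exc:
    print('operational error:', exc)  # no such table: missing_table
```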
- class IntegrityError¶
Exception raised when the relational integrity of the database is affected, e.g. a foreign key check fails or a unique constraint fails.
Corresponds to SQLITE_CONSTRAINT errors.
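A runnable illustration of a constraint failure, shown here with the standard library's sqlite3 module whose DB-API exception hierarchy mirrors cysqlite's:

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute('create table users (id integer primary key, email text unique)')
db.execute("insert into users (email) values ('a@example.com')")
try:
    # Second insert violates the unique constraint on email.
    db.execute("insert into users (email) values ('a@example.com')")
except sqlite3.IntegrityError as exc:
    print('constraint failed:', exc)
```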
- class InternalError¶
Exception raised when the database encounters an internal error.
Corresponds to SQLITE_INTERNAL errors.
- class ProgrammingError¶
Exception raised for programming errors.
- class NotSupportedError¶
Exception raised in case a method or database API was used which is not supported by the database.
Currently not used.
IntegrityError Subclasses¶
OperationalError Subclasses¶
DatabaseError Subclasses¶
- class DatabaseCorruptError¶
Constants¶
- SQLITE_OK: int = 0¶
General success response code used by SQLite.
Error codes¶
- SQLITE_ERROR¶
- SQLITE_INTERNAL¶
- SQLITE_PERM¶
- SQLITE_ABORT¶
- SQLITE_BUSY¶
- SQLITE_LOCKED¶
- SQLITE_NOMEM¶
- SQLITE_READONLY¶
- SQLITE_INTERRUPT¶
- SQLITE_IOERR¶
- SQLITE_CORRUPT¶
- SQLITE_NOTFOUND¶
- SQLITE_FULL¶
- SQLITE_CANTOPEN¶
- SQLITE_PROTOCOL¶
- SQLITE_EMPTY¶
- SQLITE_SCHEMA¶
- SQLITE_TOOBIG¶
- SQLITE_CONSTRAINT¶
- SQLITE_MISMATCH¶
- SQLITE_MISUSE¶
- SQLITE_NOLFS¶
- SQLITE_AUTH¶
- SQLITE_FORMAT¶
- SQLITE_RANGE¶
- SQLITE_NOTADB¶
Sqlite Status Flags¶
See also
status()
Read SQLite status; uses sqlite3_status internally.
- SQLITE_STATUS_MEMORY_USED¶
- SQLITE_STATUS_PAGECACHE_USED¶
- SQLITE_STATUS_PAGECACHE_OVERFLOW¶
- SQLITE_STATUS_SCRATCH_USED¶
- SQLITE_STATUS_SCRATCH_OVERFLOW¶
- SQLITE_STATUS_MALLOC_SIZE¶
- SQLITE_STATUS_PARSER_STACK¶
- SQLITE_STATUS_PAGECACHE_SIZE¶
- SQLITE_STATUS_SCRATCH_SIZE¶
- SQLITE_STATUS_MALLOC_COUNT¶
See also
Connection.status()
Read SQLite database status; uses sqlite3_db_status internally.
- SQLITE_DBSTATUS_LOOKASIDE_USED¶
- SQLITE_DBSTATUS_CACHE_USED¶
- SQLITE_DBSTATUS_SCHEMA_USED¶
- SQLITE_DBSTATUS_STMT_USED¶
- SQLITE_DBSTATUS_LOOKASIDE_HIT¶
- SQLITE_DBSTATUS_LOOKASIDE_MISS_SIZE¶
- SQLITE_DBSTATUS_LOOKASIDE_MISS_FULL¶
- SQLITE_DBSTATUS_CACHE_HIT¶
- SQLITE_DBSTATUS_CACHE_MISS¶
- SQLITE_DBSTATUS_CACHE_WRITE¶
- SQLITE_DBSTATUS_DEFERRED_FKS¶
Sqlite Connection Flags¶
See also
Connection and connect()
Flags for controlling how the connection is opened.
VFS-only Connection Flags¶
Support for these flags is dependent on the VFS providing an implementation.
Trace Constants¶
See also
- Setting a trace callback.
Limit Constants¶
See also
- Setting and getting limits.
- SQLITE_LIMIT_LENGTH¶
- SQLITE_LIMIT_SQL_LENGTH¶
- SQLITE_LIMIT_COLUMN¶
- SQLITE_LIMIT_EXPR_DEPTH¶
- SQLITE_LIMIT_COMPOUND_SELECT¶
- SQLITE_LIMIT_VDBE_OP¶
- SQLITE_LIMIT_FUNCTION_ARG¶
- SQLITE_LIMIT_ATTACHED¶
- SQLITE_LIMIT_LIKE_PATTERN_LENGTH¶
- SQLITE_LIMIT_VARIABLE_NUMBER¶
- SQLITE_LIMIT_TRIGGER_DEPTH¶
- SQLITE_LIMIT_WORKER_THREADS¶