PgManager
=========

PgManager is a higher-level database adapter for Python and PostgreSQL. There is also
MyManager for MySQL. Both are part of `pgtoolkit <http://hg.devl.cz/pgtoolkit>`_.

PgManager offers the following convenience functionality over psycopg:

* **Save and reuse database connection parameters**

  - parameters are linked to a uniquely named ConnectionInfo structure
  - using this identifier, you can retrieve a Connection or Cursor object at any time
    (a new database connection is established when needed)

* **Connection pooling**

  - connections with the same identifier are pooled and reused

* **Easy queries using the with statement**

  - just use the cursor() method in a with statement
  - PgManager will get a connection from the pool, create a cursor,
    and clean up when you're finished

* **Dictionary rows**

  - the cursor has additional methods like fetchall_dict(), which
    return OrderedDict rows instead of ordinary list-like rows

Basic usage
-----------

::

    from pgtoolkit import pgmanager

    pgm = pgmanager.get_instance()
    pgm.create_conn(hostaddr='127.0.0.1', dbname='postgres', isolation_level='autocommit')

    with pgm.cursor() as curs:
        curs.execute('SELECT now() AS now')
        row = curs.fetchone_dict()
        print(row.now)
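
The dictionary-style rows work for whole result sets as well. A small sketch, assuming
fetchall_dict() behaves as described in the feature list above (the ``users`` table and
its columns are only placeholders)::

    with pgm.cursor() as curs:
        curs.execute('SELECT id, name FROM users')
        for row in curs.fetchall_dict():
            # each row behaves like an OrderedDict, so key access works
            print(row['id'], row['name'])
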
Advanced usage
--------------

Study this section when basic usage is not enough and you need to know what actually
happens. Also look into the source code; many details are described in the docstrings::

    # create new connection info, this will not connect yet
    pgm.create_conn(name='thatdb', hostaddr='127.0.0.1', dbname='postgres')

    # borrow a connection from the pool (it will connect to the DB here and possibly fail)
    conn = pgm.get_conn('thatdb')

    try:
        # do stuff on that connection
        curs = conn.cursor()
        curs.execute('SELECT now() AS now')
        row = curs.fetchone_dict()
        # don't forget to commit when not in autocommit mode
        curs.connection.commit()
        print(row.now)
    finally:
        # return the connection back to the pool
        # Beware: the name must be correct here, it will not be checked!
        pgm.put_conn(conn, 'thatdb')

    # close all connections in the pool
    pgm.close_conn('thatdb')
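
If the get_conn()/put_conn() pairing becomes repetitive, you can wrap it in a small
context manager of your own. The helper below is just a sketch built on the calls shown
above; ``borrowed_conn`` is a hypothetical name, not part of PgManager::

    from contextlib import contextmanager

    @contextmanager
    def borrowed_conn(name):
        # hypothetical helper, not part of PgManager
        conn = pgm.get_conn(name)
        try:
            yield conn
        finally:
            # always return the connection to the pool, even on error
            pgm.put_conn(conn, name)

    with borrowed_conn('thatdb') as conn:
        curs = conn.cursor()
        curs.execute('SELECT now() AS now')
        curs.connection.commit()
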
Logging
-------

PgManager logs all queries before they are executed, exceptions raised while executing
queries, and also server notices.

Queries and exceptions are logged to the 'pgmanager_sql' logger, notices to
'pgmanager_notices'. You need to set up logging for these loggers to send the messages
to a file or elsewhere.

Notices are logged only from the cursor() method. If you use get_conn() instead, you
must call log_notices() yourself.
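
To actually see these messages, configure the two loggers somewhere in your application
startup. A minimal sketch using only the standard ``logging`` module; the file name and
log levels are just an illustration::

    import logging

    # queries and exceptions go to a file
    sql_log = logging.getLogger('pgmanager_sql')
    sql_log.setLevel(logging.DEBUG)
    sql_log.addHandler(logging.FileHandler('sql.log'))

    # notices go to the console
    notice_log = logging.getLogger('pgmanager_notices')
    notice_log.setLevel(logging.DEBUG)
    notice_log.addHandler(logging.StreamHandler())
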
Listen - Notify
---------------

PostgreSQL supports asynchronous messages from the server to clients. This adds a little
more complexity to client programs, but saves a lot of polling.

You should not use a connection info created by create_conn() for LISTEN queries. It will
work in the basic case, but it will break once you add any kind of parallelism. Use
create_conn_listen() instead. It allows you to copy connection parameters directly from
another connection info, or to set up a new one::

    # create connection info without a pool and with initial query 'LISTEN CHAT;'
    pgm.create_conn_listen(name='listenconn', channel='CHAT', copy_dsn='thatdb')

    done = False  # set to True elsewhere (e.g. from a signal handler) to stop the loop
    while not done:
        notify = pgm.wait_for_notify('listenconn', timeout=5.0)
        if notify:
            print('NOTIFY:', notify)
        else:
            print('timeout')

    pgm.close_conn('listenconn')
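
The sending side is an ordinary NOTIFY statement executed on any regular connection, for
example the 'thatdb' connection from the previous section. Note that outside autocommit
mode the notification is only delivered once the sending transaction commits; the payload
string here is arbitrary::

    conn = pgm.get_conn('thatdb')
    try:
        curs = conn.cursor()
        # send a message with a payload on the CHAT channel
        curs.execute("NOTIFY CHAT, 'hello'")
        curs.connection.commit()
    finally:
        pgm.put_conn(conn, 'thatdb')
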
Timeouts
--------

In production, it is very important to set reasonable timeouts for both connection and
query execution. The application should not freeze when the database server becomes
unreachable. Both timeouts can be set using :meth:`create_conn`::

    pgm.create_conn(host='127.0.0.1', dbname='test',
                    init_statement='SET statement_timeout TO 20000;',  # in ms, so it's 20s
                    connect_timeout=20,  # also 20s
                    isolation_level='autocommit')
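
When statement_timeout expires, the server cancels the query and the database driver
raises an exception. Assuming the underlying driver is psycopg2, the cancellation shows
up as QueryCanceledError (a subclass of OperationalError), which you can handle
explicitly::

    import psycopg2.extensions

    with pgm.cursor() as curs:
        try:
            # pg_sleep(60) will run past the 20 s statement_timeout set above
            curs.execute('SELECT pg_sleep(60)')
        except psycopg2.extensions.QueryCanceledError:
            print('query cancelled by statement_timeout')
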
Methods
-------

.. autoclass:: pgtoolkit.pgmanager.PgManager
    :members:
    :undoc-members: