
PostgreSQL LISTEN/NOTIFY for Real-Time Event Distribution

April 30, 2026 · 12 min read · By Thomas A. Anderson


Key Takeaways:

  • LISTEN and NOTIFY give Postgres a built-in, low-friction event channel for real-time updates inside one database.
  • Notifications are delivered only after commit, which is exactly what you want for consistency and exactly what surprises teams the first time they debug “late” events.
  • The payload must be shorter than 8000 bytes in the default configuration, so production systems should send IDs or compact JSON, not whole records.
  • Use this feature for dashboards, background triggers, cache invalidation, and moderate event fan-out. Add a table-backed queue or move to a broker when you need durable delivery or much larger scale.

Why LISTEN/NOTIFY Keeps Showing Up in Real Systems

One of the biggest mistakes in application design is making the database do synchronous fan-out work during a user write. A post gets created, then the same transaction updates secondary tables, wakes workers, refreshes caches, and pushes UI changes. It feels tidy in code review. It feels awful in production.


EnterpriseDB published a concrete example that makes the point fast. A trigger-based design that synchronously notified a huge follower set pushed a single insert to about 10.697 seconds. After switching to an asynchronous pattern, where the trigger only emitted a notification and a separate listener handled the expensive work, the insert dropped to about 5.564 milliseconds. That is the kind of change users notice immediately, and the kind of change infrastructure bills notice too.

This is why Postgres event signaling keeps returning in practical architectures. It lets your write path stay focused on the write, then lets listeners react after commit. No polling loop asking “anything new?” every second. No extra infrastructure on day one. No application developer forgetting to publish an event after changing a row.

For teams with 1 to 5 years of experience, this feature hits a sweet spot. It is simple enough to ship quickly, but it still teaches the right systems lesson: separate the transaction that records truth from the side effects that distribute truth.

For many apps, Postgres can store data and distribute lightweight events without another service in the stack.

If this topic sounds familiar from other event-routing discussions, that is because the same design pressure shows up everywhere. In our recent piece on Python’s match-case in real-world code, the theme was cleaner routing of structured data inside application code. LISTEN/NOTIFY solves a related problem one layer lower: routing change signals out of the database with less boilerplate and less polling.

How Postgres Delivers Events

Start with the smallest possible example. Open two sessions to the same database.

-- Session 1
LISTEN account_events;

-- Session 2
NOTIFY account_events, 'customer_id=4242';

-- Expected output in Session 1:
-- Asynchronous notification "account_events" with payload "customer_id=4242"

That is the whole mechanic. One session subscribes to a channel. Another publishes to that channel.

The official PostgreSQL documentation adds the details that actually matter in production:

  • Notifications are sent only when the transaction commits.
  • If the transaction rolls back, the event is never delivered.
  • A listening session gets notifications only between transactions, not in the middle of one.
  • If the same channel and identical payload are signaled several times in one transaction, later duplicates may be folded into one event.
  • In the default configuration, the payload must be shorter than 8000 bytes.
  • There is a shared notification queue, and PostgreSQL exposes pg_notification_queue_usage to inspect its usage.

You can read the current command reference here: PostgreSQL NOTIFY documentation.

Those rules answer most “why did this event arrive late?” questions.

For example, developers often assume NOTIFY behaves like a network socket write. It does not. It behaves like a transactional signal. If your transaction stays open for 20 seconds, listeners wait 20 seconds. If your listener is also stuck in a long transaction, PostgreSQL delays delivery to that client until the transaction ends. That is why real-time systems built on this feature should keep transactions short.

The docs also mention another subtle but useful detail: the notification includes the sending backend PID. If a session both emits and listens on the same channel, it can detect its own bounced-back event and ignore it. That is a cheap way to avoid doing duplicate work in self-updating services.
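That self-notification check amounts to comparing the notification's sending PID against your own backend PID. Here is a minimal sketch of the filter; the tuple layout mirrors the fields a psycopg2 `Notify` object exposes (`pid`, `channel`, `payload`), but the function names are illustrative, not a library API.

```python
def is_own_notification(notification_pid: int, backend_pid: int) -> bool:
    """True when the event was emitted by this session's own backend."""
    return notification_pid == backend_pid

def filter_foreign(notifications, backend_pid):
    """Keep only events emitted by other sessions.

    `notifications` is an iterable of (pid, channel, payload) tuples,
    mirroring the fields psycopg2 exposes on a Notify object.
    """
    return [n for n in notifications if n[0] != backend_pid]

# A session with backend PID 4101 listening on a channel it also publishes to:
events = [(4101, "orders", "id=1"), (5222, "orders", "id=2")]
print(filter_foreign(events, 4101))  # [(5222, 'orders', 'id=2')]
```

With psycopg2 the comparison value would come from `conn.get_backend_pid()`; the point is simply that the PID lets a self-updating service skip work it triggered itself.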

Build a Working Example

Here is a complete Python listener using psycopg2-binary. This is the common case: one process holds a dedicated connection and reacts to compact JSON payloads.

# Python 3.10+
# pip install psycopg2-binary
#
# Note: production use should add reconnect logic, metrics, and shutdown handling.

import json
import select
import psycopg2
from psycopg2 import extensions

DATABASE_URL = "postgresql://app_user:app_password@localhost/appdb"

def main():
    conn = psycopg2.connect(DATABASE_URL)
    conn.set_isolation_level(extensions.ISOLATION_LEVEL_AUTOCOMMIT)

    cur = conn.cursor()
    cur.execute("LISTEN invoice_events;")
    print("Listening on channel: invoice_events")

    while True:
        ready = select.select([conn], [], [], 5)
        if ready == ([], [], []):
            continue

        conn.poll()

        while conn.notifies:
            notification = conn.notifies.pop(0)
            event = json.loads(notification.payload)

            print(
                f"received channel={notification.channel} "
                f"action={event['action']} invoice_id={event['invoice_id']}"
            )

if __name__ == "__main__":
    main()

# Expected output after a notify:
# Listening on channel: invoice_events
# received channel=invoice_events action=INSERT invoice_id=9001

This approach is boring in the best way. It is one connection, one loop, one handler. That simplicity is why teams use it for admin dashboards, job kickoffs, internal tooling, and small real-time features.

If you prefer asyncpg, the OneUptime example shows the asynchronous version clearly. The callback receives the connection, backend PID, channel, and payload, which is enough to parse the event and hand it off to your application logic.

# Python 3.10+
# pip install asyncpg
#
# Note: production use should handle reconnects and listener health checks.

import asyncio
import json
import asyncpg

DATABASE_URL = "postgresql://app_user:app_password@localhost/appdb"

async def notification_handler(connection, pid, channel, payload):
    event = json.loads(payload)
    print(
        f"pid={pid} channel={channel} "
        f"action={event['action']} order_id={event['order_id']}"
    )

async def main():
    conn = await asyncpg.connect(DATABASE_URL)
    await conn.add_listener("order_events", notification_handler)

    print("Listening on channel: order_events")
    try:
        while True:
            await asyncio.sleep(1)
    finally:
        await conn.remove_listener("order_events", notification_handler)
        await conn.close()

asyncio.run(main())

# Expected output:
# Listening on channel: order_events
# pid=14728 channel=order_events action=UPDATE order_id=812

One real-world note from the EnterpriseDB article: keep the listener connection dedicated. They explicitly used two connections in their Python example, one for work and one for listening, because sharing one connection creates a real chance of missing events while the connection is busy in another operation. That is not theory. That is the kind of edge case that appears under load and then eats a weekend.

Turn Table Changes Into Events with Triggers

The feature becomes much more useful when you stop calling NOTIFY manually and let triggers publish events for you. That way every row change emits a signal automatically.

-- Note: production use should decide whether to send NEW, OLD, or only IDs.
-- This example keeps the payload small to stay under the default limit.

CREATE OR REPLACE FUNCTION notify_order_change()
RETURNS TRIGGER AS $$
DECLARE
    payload JSON;
BEGIN
    payload := json_build_object(
        'table', TG_TABLE_NAME,
        'action', TG_OP,
        'order_id', CASE
            WHEN TG_OP = 'DELETE' THEN OLD.id
            ELSE NEW.id
        END
    );

    PERFORM pg_notify('order_events', payload::text);

    RETURN COALESCE(NEW, OLD);
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_notify_trigger
AFTER INSERT OR UPDATE OR DELETE ON orders
FOR EACH ROW
EXECUTE FUNCTION notify_order_change();

-- Expected payload examples:
-- {"table":"orders","action":"INSERT","order_id":812}
-- {"table":"orders","action":"UPDATE","order_id":812}
-- {"table":"orders","action":"DELETE","order_id":812}

Why triggers matter:

  • The application developer cannot forget to emit the event.
  • Every writer, not just your main app, follows the same rule.
  • The event definition lives next to the data it describes.

The PostgreSQL docs explicitly call this a useful programming technique when notifications signal table changes. That matches what teams actually do in production. A trigger on orders, users, or inventory can emit a small event, then listeners decide what side effect to run.

OneUptime also showed a channel-per-table pattern:

  • orders_changes
  • products_changes
  • users_changes

That is worth copying when your application has clearly separated consumers. It keeps listener code simpler because a payment worker does not need to inspect every event in the system just to find payment-related ones.
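With a channel-per-table layout, the listener side often reduces to a small dispatch map from channel name to handler. The sketch below assumes that layout; the handler functions and their return values are hypothetical, there only to show the routing.

```python
import json

# Hypothetical per-channel handlers; each consumer cares about one table's events.
def handle_orders(event):
    return f"order {event['order_id']} {event['action']}"

def handle_products(event):
    return f"product {event['product_id']} {event['action']}"

HANDLERS = {
    "orders_changes": handle_orders,
    "products_changes": handle_products,
}

def dispatch(channel: str, payload: str):
    """Route a raw notification to the handler registered for its channel."""
    handler = HANDLERS.get(channel)
    if handler is None:
        return None  # unknown channel: ignore rather than crash the listener
    return handler(json.loads(payload))

print(dispatch("orders_changes", '{"order_id": 812, "action": "UPDATE"}'))
# order 812 UPDATE
```

Ignoring unknown channels instead of raising keeps a long-lived listener process alive when someone adds a new channel before the consumer is updated.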

Bridge Database Events to App Clients

A very common architecture is: Postgres emits events, a backend process listens, then it forwards changes to browsers over WebSockets or SSE. The database does not talk to the browser directly. Your application does that translation.

The OneUptime and tom.catshoek.dev examples both point in this direction. Here is a compact FastAPI-style pattern based on the same idea: a database listener receives events and broadcasts them to connected clients.

# Python 3.10+
# pip install asyncpg fastapi
#
# Note: production use should add authentication, backpressure handling,
# stale client cleanup, and startup retry logic.

import asyncio
import json
import asyncpg
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()
clients = []

DATABASE_URL = "postgresql://app_user:app_password@localhost/appdb"

async def db_listener():
    conn = await asyncpg.connect(DATABASE_URL)

    async def on_notification(connection, pid, channel, payload):
        message = json.dumps({
            "type": "db_change",
            "channel": channel,
            "payload": json.loads(payload)
        })

        disconnected = []
        for client in clients:
            try:
                await client.send_text(message)
            except Exception:
                disconnected.append(client)

        for client in disconnected:
            clients.remove(client)

    await conn.add_listener("dashboard_updates", on_notification)

    while True:
        await asyncio.sleep(1)

@app.on_event("startup")
async def startup():
    asyncio.create_task(db_listener())

@app.websocket("/ws")
async def websocket_endpoint(websocket: WebSocket):
    await websocket.accept()
    clients.append(websocket)

    try:
        while True:
            await websocket.receive_text()
    except WebSocketDisconnect:
        clients.remove(websocket)

# Expected client message:
# {"type":"db_change","channel":"dashboard_updates","payload":{"action":"UPDATE","id":77}}

This pattern is a good fit for:

  • Order dashboards
  • Incident status pages
  • Support consoles
  • Operational admin screens

The reason it works well is the same reason LISTEN/NOTIFY works well in general: the event channel is cheap and close to the source of truth. The listener does not poll five tables every two seconds. It sleeps until Postgres tells it something changed.

Production Rules That Matter

This is the section most tutorials skip. The syntax is easy. The operational edges are where teams get burned.

Keep payloads small

PostgreSQL says the payload must be shorter than 8000 bytes in the default configuration. That pushes you toward a simple design that is usually better anyway: send identifiers and a small amount of metadata, then fetch full state if needed. A payload like {"order_id":812,"action":"UPDATE"} is safer than dumping an entire order record into the notification.
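One way to enforce this in application code is to check the encoded size before ever calling pg_notify. The 8000-byte figure below matches the documented default; the helper name and the hard failure on oversized payloads are design choices for this sketch, not a library API.

```python
import json

MAX_PAYLOAD_BYTES = 8000  # default limit from the PostgreSQL docs

def build_payload(**fields) -> str:
    """Serialize a compact event payload, refusing anything at or past the default limit."""
    payload = json.dumps(fields, separators=(",", ":"))
    if len(payload.encode("utf-8")) >= MAX_PAYLOAD_BYTES:
        raise ValueError("payload too large: send IDs, not records")
    return payload

print(build_payload(order_id=812, action="UPDATE"))
# {"order_id":812,"action":"UPDATE"}
```

Failing loudly in the application is kinder than discovering the limit as a NOTIFY error deep inside a trigger.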

Keep transactions short

Notifications are delivered after commit. If a transaction holds locks or waits on network calls, your “real-time” stream will lag. The docs are blunt about this: applications using notifications for real-time signaling should keep transactions short.

Understand deduplication inside one transaction

If the same channel and identical payload are signaled multiple times in one transaction, PostgreSQL may deliver only one copy. Distinct payloads still arrive distinctly. Across separate transactions, commit order is preserved.
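The folding rule is easy to illustrate with a small model of one transaction's pending signals: identical (channel, payload) pairs may collapse to a single delivery, while distinct payloads all arrive. This is a simulation of the documented behavior for intuition, not the server's actual implementation.

```python
def fold_duplicates(signals):
    """Model in-transaction folding: keep the first of each identical
    (channel, payload) pair, preserving order for distinct events."""
    seen = set()
    delivered = []
    for channel, payload in signals:
        if (channel, payload) not in seen:
            seen.add((channel, payload))
            delivered.append((channel, payload))
    return delivered

tx_signals = [
    ("order_events", '{"order_id":812}'),
    ("order_events", '{"order_id":812}'),  # identical: may be folded away
    ("order_events", '{"order_id":813}'),  # distinct payload: delivered
]
print(fold_duplicates(tx_signals))
# [('order_events', '{"order_id":812}'), ('order_events', '{"order_id":813}')]
```

The practical consequence: if your listener counts events to tally changes, count rows in the database instead, because per-transaction duplicates are not guaranteed to survive.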

Watch the queue

PostgreSQL keeps a queue of notifications that have been sent but not yet processed by all listeners. In the standard installation that queue is 8GB. The docs also note a failure mode that catches teams by surprise: if a listening session enters a very long transaction, queue cleanup cannot proceed properly. If the queue fills, transactions calling NOTIFY can fail at commit.
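A monitoring probe only needs one scalar query: `pg_notification_queue_usage()` returns the fraction of the queue currently in use. In this sketch the cursor is injected so the logic is easy to test; the 0.5 alert threshold is an arbitrary choice, not a documented recommendation.

```python
def queue_usage(cur) -> float:
    """Return the fraction (0.0 to 1.0) of the notification queue in use."""
    cur.execute("SELECT pg_notification_queue_usage();")
    return float(cur.fetchone()[0])

def queue_alert(cur, threshold: float = 0.5) -> bool:
    """True when queue usage has crossed the alert threshold."""
    return queue_usage(cur) >= threshold

# Usage with a real psycopg2 cursor:
#   with conn.cursor() as cur:
#       if queue_alert(cur):
#           ...page someone before NOTIFY starts failing at commit...
```

Wiring this into your existing metrics loop gives you warning before the failure mode the docs describe, instead of during it.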

Do not confuse “available now” with “durable later”

The EnterpriseDB article makes this point directly. Notifications are ephemeral. If your listener is down, those signals are gone. That is fine for some problems. It is unacceptable for others.

Comparison: When to Use It and When to Move On

Option: Plain LISTEN/NOTIFY (source: PostgreSQL docs)

  • What it gives you: built-in pub/sub inside Postgres, post-commit delivery, small-payload signaling
  • Known limits: payload shorter than 8000 bytes by default, ephemeral delivery, same database only
  • Good fit: live dashboards, cache invalidation, moderate event routing

Option: LISTEN/NOTIFY plus trigger (source: OneUptime)

  • What it gives you: automatic event emission when tables change
  • Known limits: still ephemeral, still subject to transaction timing and the payload limit
  • Good fit: CRUD event distribution, UI refresh signals, internal integrations

Option: Table-backed queue with FOR UPDATE SKIP LOCKED (source: EnterpriseDB)

  • What it gives you: durable jobs, parallel workers, retry-friendly processing
  • Known limits: polling loop or additional signal still needed, more schema and worker logic
  • Good fit: background jobs, fan-out work, missed-event protection

This is the practical decision tree:

  • If you need a lightweight “something changed” signal, use LISTEN/NOTIFY.
  • If you need that signal to survive listener outages, store work in a table as well.
  • If throughput, distribution, or durability requirements outgrow the database-native option, then use a dedicated broker.

That middle option is often the best production compromise. EnterpriseDB showed it with a queue table and FOR UPDATE SKIP LOCKED, which allows multiple workers to safely pick separate rows without stepping on each other.

Common Failures and Fixes

Failure: “My insert committed, but the UI updated late.”
Fix: look for long transactions on either side. Event delivery happens after commit, and listeners only receive notifications between transactions.

Failure: “We lost events during a deploy.”
Fix: plain notifications are not durable. Put work into a table, then optionally use NOTIFY only as a wake-up signal.

Failure: “We tried to send the whole row and got strange errors.”
Fix: respect the default payload limit. Send IDs and action types. Fetch detail separately.

Failure: “One listener connection in the pool should be enough.”
Fix: keep a dedicated listener connection. Shared or pooled connections can break assumptions about when notifications are consumed.

Failure: “We used a trigger to do heavy work because it keeps logic in one place.”
Fix: keep the trigger cheap. Emit a signal or enqueue a row, then let a worker do the expensive part. The EnterpriseDB timing example exists for exactly this reason.

Here is the durable queue pattern in SQL, using the exact primitives highlighted in the EnterpriseDB piece:

-- Note: this pattern favors durability over instant push alone.

ALTER TABLE notification
ADD is_sent BOOLEAN NOT NULL DEFAULT FALSE;

-- Worker picks one unsent item without blocking on rows another worker already has.
SELECT notification_id
FROM notification
WHERE NOT is_sent
FOR UPDATE SKIP LOCKED
LIMIT 1;

-- After processing:
UPDATE notification
SET is_sent = TRUE
WHERE notification_id = $1;

This pattern is easy to underrate. It gives you parallel-safe processing using only Postgres, and it avoids the worst property of plain notifications: missing work when the listener is offline.
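The claim-and-mark cycle above can be sketched as a single worker step. The cursor is injected so the logic is testable without a database; the table and column names follow the SQL example, and running each step inside its own transaction (so commit releases the row lock) is an assumption this sketch makes about the surrounding code.

```python
def process_one(cur, handler) -> bool:
    """Claim one unsent row with SKIP LOCKED, run the handler, mark it sent.

    Returns False when no work is available, so callers can back off
    (or go back to sleep until the next NOTIFY wake-up).
    """
    cur.execute(
        "SELECT notification_id FROM notification "
        "WHERE NOT is_sent FOR UPDATE SKIP LOCKED LIMIT 1;"
    )
    row = cur.fetchone()
    if row is None:
        return False

    notification_id = row[0]
    handler(notification_id)  # the expensive side effect lives here, not in a trigger

    cur.execute(
        "UPDATE notification SET is_sent = TRUE WHERE notification_id = %s;",
        (notification_id,),
    )
    return True
```

Pairing this with LISTEN gives the best of both: NOTIFY is only a wake-up signal, while the table guarantees the work survives a listener outage.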

As event volume rises, infrastructure choices become architecture choices. Database-native signaling is simple, but simplicity has boundaries.

There is also a softer lesson here. Real-time event distribution does not always mean “add more tech.” Quite often it means “do less during the write, and do the rest after commit.”

Final Take

Postgres LISTEN/NOTIFY is one of those features that looks small in the docs and turns out to be very useful in day-to-day systems. It gives you asynchronous, post-commit event signaling without another service to run. For many internal tools, dashboards, operational apps, and moderate traffic backends, that is enough.

It is also easy to misuse.

Use it as a signal, not as a giant data transport. Keep transactions short. Keep the listener connection dedicated. Add a queue table when the work must survive outages. Move heavier fan-out out of the transaction and into workers. Those rules line up with the PostgreSQL docs, the OneUptime implementation patterns, and EnterpriseDB’s performance example.

If you are building a feature like “refresh this dashboard when orders change” or “wake a worker when a record becomes ready,” this is one of the fastest clean solutions you can ship. If you are building a global event backbone with strict durability and very large throughput, it is time to step up to a different class of system.

That is the honest trade-off. Inside its lane, LISTEN/NOTIFY is excellent. Outside its lane, it tells you exactly when your architecture needs the next tool.

Thomas A. Anderson

Mass-produced in late 2022, upgraded frequently. Has opinions about Kubernetes that he formed in roughly 0.3 seconds. Occasionally flops — but don't we all? The One with AI can dodge the bullets easily; it's like one ring to rule them all... sort of...