Marmot v2 is a leaderless, distributed SQLite replication system built on a gossip-based protocol with distributed transactions and eventual consistency.
Key Features:
MySQL active-active requires careful setup of replication, conflict avoidance, and monitoring. Failover needs manual intervention. Split-brain scenarios demand operational expertise. This complexity doesn't scale to edge deployments.
Marmot excels at read-heavy edge scenarios:
| Use Case | How Marmot Helps |
|---|---|
| Distributed WordPress | Multi-region WordPress with replicated database |
| Lambda/Edge sidecars | Lightweight regional SQLite replicas, local reads |
| Edge vector databases | Distributed embeddings with local query |
| Regional config servers | Fast local config reads, replicated writes |
| Product catalogs | Geo-distributed catalog data, eventual sync |
```bash
# Start a single-node cluster
./marmot-v2

# Connect with MySQL client
mysql -h localhost -P 3306 -u root
# Or use DBeaver, MySQL Workbench, etc.
```
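Once connected, you can run a quick smoke test (a minimal sketch; the database, table, and values below are placeholders):

```sql
CREATE DATABASE IF NOT EXISTS demo;
USE demo;
CREATE TABLE notes (id INT AUTO_INCREMENT PRIMARY KEY, body VARCHAR(100));
-- Inserting 0 lets Marmot inject a distributed auto-increment ID (see below)
INSERT INTO notes (id, body) VALUES (0, 'hello marmot');
SELECT * FROM notes;
```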
```bash
# Test DDL and DML replication across a 2-node cluster
./scripts/test-ddl-replication.sh

# This script will:
# 1. Start a 2-node cluster
# 2. Create a table on node 1 and verify it replicates to node 2
# 3. Insert data on node 1 and verify it replicates to node 2
# 4. Update data on node 2 and verify it replicates to node 1
# 5. Delete data on node 1 and verify it replicates to node 2
```
```bash
# Manual cluster testing
./examples/start-seed.sh                      # Start seed node (port 8081, mysql 3307)
./examples/join-cluster.sh 2 localhost:8081   # Join node 2 (port 8082, mysql 3308)
./examples/join-cluster.sh 3 localhost:8081   # Join node 3 (port 8083, mysql 3309)

# Connect to any node and run queries
mysql --protocol=TCP -h localhost -P 3307 -u root
mysql --protocol=TCP -h localhost -P 3308 -u root

# Cleanup
pkill -f marmot-v2
```
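Before the cleanup step, you can verify replication by hand: write through one node and read it back through another. A minimal sketch (database, table, and values are placeholders):

```bash
# Write on node 1 (MySQL port 3307) ...
mysql --protocol=TCP -h localhost -P 3307 -u root \
  -e "CREATE DATABASE IF NOT EXISTS demo; CREATE TABLE demo.t1 (id INT PRIMARY KEY, v TEXT); INSERT INTO demo.t1 VALUES (1, 'replicated');"

# ... then read the same row back from node 2 (MySQL port 3308)
mysql --protocol=TCP -h localhost -P 3308 -u root -e "SELECT * FROM demo.t1;"
```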
Marmot can run distributed WordPress with full database replication across nodes. Each WordPress instance connects to its local Marmot node, and all database changes replicate automatically.
Marmot implements MySQL functions required by WordPress:
| Category | Functions |
|---|---|
| Date/Time | NOW, CURDATE, DATE_FORMAT, UNIX_TIMESTAMP, DATEDIFF, YEAR, MONTH, DAY, etc. |
| String | CONCAT_WS, SUBSTRING_INDEX, FIND_IN_SET, LPAD, RPAD, etc. |
| Math/Hash | RAND, MD5, SHA1, SHA2, POW, etc. |
| DML | ON DUPLICATE KEY UPDATE (transformed to SQLite ON CONFLICT) |
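As a rough illustration of the `ON DUPLICATE KEY UPDATE` rewrite mentioned above (the exact SQLite output may differ; table and column names are illustrative):

```sql
-- MySQL statement as issued by WordPress
INSERT INTO wp_options (option_name, option_value)
VALUES ('blogname', 'My Site')
ON DUPLICATE KEY UPDATE option_value = 'My Site';

-- Rewritten to SQLite upsert syntax, roughly:
INSERT INTO wp_options (option_name, option_value)
VALUES ('blogname', 'My Site')
ON CONFLICT(option_name) DO UPDATE SET option_value = 'My Site';
```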
```bash
cd examples/wordpress-cluster
./run.sh up
```
This starts:
```
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│ WordPress-1 │     │ WordPress-2 │     │ WordPress-3 │
│    :9101    │     │    :9102    │     │    :9103    │
└──────┬──────┘     └──────┬──────┘     └──────┬──────┘
       ▼                   ▼                   ▼
┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│  Marmot-1   │◄────┤  Marmot-2   │◄────┤  Marmot-3   │
│ MySQL: 9191 │     │ MySQL: 9192 │     │ MySQL: 9193 │
└─────────────┘     └─────────────┘     └─────────────┘
       └───────────────────┴───────────────────┘
                  QUORUM Replication
```
Test it:
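For example (a rough sketch, assuming the ports from the diagram above):

```bash
# Each WordPress instance answers on its own port but shares the replicated database
curl -I http://localhost:9101/
curl -I http://localhost:9102/
curl -I http://localhost:9103/
# A post or setting created through one instance shows up on the others
# once the write replicates at QUORUM.
```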
Commands:
```bash
./run.sh status    # Check cluster health
./run.sh logs-m    # Marmot logs only
./run.sh logs-wp   # WordPress logs only
./run.sh down      # Stop cluster
```
Marmot v2 uses a fundamentally different architecture from other SQLite replication solutions:
vs. rqlite/dqlite/LiteFS:
How It Works:
| Aspect | Marmot | MySQL Active-Active | rqlite/dqlite | TiDB |
|---|---|---|---|---|
| Leader | None | None (but complex) | Yes (Raft) | Yes (Raft) |
| Failover | Automatic | Manual intervention | Automatic | Automatic |
| Split-brain recovery | Automatic (anti-entropy) | Manual | N/A (leader-based) | N/A |
| Consistency | Tunable (ONE/QUORUM/ALL) | Serializable | Strong | Strong |
| Direct file read | ✅ SQLite file | ❌ | ❌ | ❌ |
| JS-safe AUTO_INCREMENT | ✅ Compact mode (53-bit) | N/A | N/A | ❌ 64-bit breaks JS |
| Edge-friendly | ✅ Lightweight | ❌ Heavy | ⚠️ Moderate | ❌ Heavy |
| Operational complexity | Low | High | Low | High |
Marmot v2 supports distributed DDL (Data Definition Language) replication without requiring master election:
Automatic Idempotency: DDL statements are automatically rewritten for safe replay
```
CREATE TABLE users (id INT)
  → CREATE TABLE IF NOT EXISTS users (id INT)

DROP TABLE users
  → DROP TABLE IF EXISTS users
```
```toml
[ddl]
# DDL lock lease duration (seconds)
lock_lease_seconds = 30
# Automatically rewrite DDL for idempotency
enable_idempotent = true
```
Marmot v2 uses Change Data Capture (CDC) for replication instead of SQL statement replay:
For UPDATE and DELETE operations, Marmot automatically extracts row keys:
Marmot can publish CDC events to external messaging systems, enabling real-time data pipelines, analytics, and event-driven architectures. Events follow the Debezium specification for maximum compatibility with existing CDC tooling.
```toml
[publisher]
enabled = true

[[publisher.sinks]]
name = "kafka-main"
type = "kafka"                 # "kafka" or "nats"
format = "debezium"            # Debezium-compatible JSON format
brokers = ["localhost:9092"]   # Kafka broker addresses
topic_prefix = "marmot.cdc"    # Topics: {prefix}.{database}.{table}
filter_tables = ["*"]          # Glob patterns (e.g., "users", "order_*")
filter_databases = ["*"]       # Glob patterns (e.g., "prod_*")
batch_size = 100               # Events per poll cycle
poll_interval_ms = 10          # Polling interval

# NATS sink example
[[publisher.sinks]]
name = "nats-events"
type = "nats"
format = "debezium"
nats_url = "nats://localhost:4222"
topic_prefix = "marmot.cdc"
filter_tables = ["*"]
filter_databases = ["*"]
```
Events follow the Debezium envelope structure:
```json
{
  "schema": { ... },
  "payload": {
    "before": null,
    "after": {"id": 1, "name": "alice", "email": "alice@example.com"},
    "source": {
      "version": "2.0.0",
      "connector": "marmot",
      "name": "marmot",
      "ts_ms": 1702500000000,
      "db": "myapp",
      "table": "users"
    },
    "op": "c",
    "ts_ms": 1702500000000
  }
}
```
Operation Types (per Debezium spec):

| Operation | `op` | `before` | `after` |
|---|---|---|---|
| INSERT | `c` (create) | null | row data |
| UPDATE | `u` (update) | old row | new row |
| DELETE | `d` (delete) | old row | null |
Topics follow the pattern `{topic_prefix}.{database}.{table}`.
Examples:
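- `marmot.cdc.myapp.users`: events for the `users` table in database `myapp`, using the default `marmot.cdc` prefix (names are illustrative)
- `marmot.cdc.shop.orders`: events for an `orders` table in a `shop` database (illustrative)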
For more details, see the Integrations documentation.
Deploy Marmot as a lightweight regional replica alongside Lambda functions:
Scale reads globally with replica mode and transparent failover:
```toml
[replica]
enabled = true
follow_addresses = ["central-cluster-1:8080", "central-cluster-2:8080", "central-cluster-3:8080"]
discovery_interval_seconds = 30
failover_timeout_seconds = 60
```
Marmot supports a wide range of MySQL/SQLite statements through its MySQL protocol server. The following table shows compatibility for different statement types:
| Statement Type | Support | Replication | Notes |
|---|---|---|---|
| **DML - Data Manipulation** | | | |
| INSERT / REPLACE | ✅ Full | ✅ Yes | Includes qualified table names (db.table) |
| UPDATE | ✅ Full | ✅ Yes | Includes qualified table names |
| DELETE | ✅ Full | ✅ Yes | Includes qualified table names |
| SELECT | ✅ Full | N/A | Read operations |
| LOAD DATA | ✅ Full | ✅ Yes | Bulk data loading |
| **DDL - Data Definition** | | | |
| CREATE TABLE | ✅ Full | ✅ Yes | Replicated with cluster-wide locking |
| ALTER TABLE | ✅ Full | ✅ Yes | Replicated with cluster-wide locking |
| DROP TABLE | ✅ Full | ✅ Yes | Replicated with cluster-wide locking |
| TRUNCATE TABLE | ✅ Full | ✅ Yes | |
| RENAME TABLE | ✅ Full | ✅ Yes | Replicated with cluster-wide locking |
| CREATE/DROP INDEX | ✅ Full | ✅ Yes | Replicated with cluster-wide locking |
| CREATE/DROP VIEW | ✅ Full | ✅ Yes | Replicated with cluster-wide locking |
| CREATE/DROP TRIGGER | ✅ Full | ✅ Yes | Replicated with cluster-wide locking |
| **Database Management** | | | |
| CREATE DATABASE | ✅ Full | ✅ Yes | Replicated with cluster-wide locking |
| DROP DATABASE | ✅ Full | ✅ Yes | Replicated with cluster-wide locking |
| ALTER DATABASE | ✅ Full | ✅ Yes | Replicated with cluster-wide locking |
| SHOW DATABASES | ✅ Full | N/A | Metadata query |
| SHOW TABLES | ✅ Full | N/A | Metadata query |
| USE database | ✅ Full | N/A | Session state |
| **Transaction Control** | | | |
| BEGIN / START TRANSACTION | ✅ Full | N/A | Transaction boundary |
| COMMIT | ✅ Full | ✅ Yes | Commits distributed transaction |
| ROLLBACK | ✅ Full | ✅ Yes | Aborts distributed transaction |
| SAVEPOINT | ✅ Full | ✅ Yes | Nested transaction support |
| **Locking** | | | |
| LOCK TABLES | ✅ Parsed | ❌ No | Requires distributed locking coordination |
| UNLOCK TABLES | ✅ Parsed | ❌ No | Requires distributed locking coordination |
| **Session Configuration** | | | |
| SET statements | ✅ Parsed | ❌ No | Session-local, not replicated |
| **XA Transactions** | | | |
| XA START/END/PREPARE | ✅ Parsed | ❌ No | Marmot uses its own 2PC protocol |
| XA COMMIT/ROLLBACK | ✅ Parsed | ❌ No | Not compatible with Marmot's model |
| **DCL - Data Control** | | | |
| GRANT / REVOKE | ✅ Parsed | ❌ No | User management not replicated |
| CREATE/DROP USER | ✅ Parsed | ❌ No | User management not replicated |
| ALTER USER | ✅ Parsed | ❌ No | User management not replicated |
| **Administrative** | | | |
| OPTIMIZE TABLE | ✅ Parsed | ❌ No | Node-local administrative command |
| REPAIR TABLE | ✅ Parsed | ❌ No | Node-local administrative command |
Marmot includes a MySQL-compatible protocol server, allowing you to connect using any MySQL client (DBeaver, MySQL Workbench, mysql CLI, etc.). The server supports:
Marmot provides full support for MySQL metadata queries, enabling GUI tools like DBeaver to browse databases, tables, and columns:
These metadata queries are powered by the rqlite/sql AST parser, providing production-grade MySQL query compatibility.
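Typical metadata statements a GUI client issues (a representative sample; the table name is a placeholder):

```sql
SHOW DATABASES;
SHOW TABLES;
-- For column browsing, a client might issue, for example:
SHOW COLUMNS FROM users;
```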
```bash
# Using mysql CLI
mysql -h localhost -P 3306 -u root

# Connection string for applications
mysql://root@localhost:3306/marmot
```
Marmot handles various failure and recovery scenarios automatically:
| Scenario | Behavior |
|---|---|
| Minority partition | Writes fail - cannot achieve quorum |
| Majority partition | Writes succeed - quorum achieved |
| Partition heals | Delta sync + LWW merges divergent data |
How it works:
| Scenario | Recovery Method |
|---|---|
| Brief outage | Delta sync - replay missed transactions |
| Extended outage | Snapshot transfer + delta sync |
| New node joining | Full snapshot from existing node |
Anti-Entropy Background Process:
Marmot v2 includes an automatic anti-entropy system that continuously monitors and repairs replication lag across the cluster:
Delta Sync Process:
GC Coordination with Anti-Entropy:
| Write Consistency | Behavior |
|---|---|
| ONE | Returns after 1 node ACK (fast, less durable) |
| QUORUM | Returns after majority ACK (default, balanced) |
| ALL | Returns after all nodes ACK (slow, most durable) |
Conflict Resolution:
Distributed databases need globally unique IDs, but traditional solutions cause problems:
| Solution | Issue |
|---|---|
| UUID | 128-bit, poor index performance, not sortable |
| Snowflake/HLC 64-bit | Exceeds JavaScript's Number.MAX_SAFE_INTEGER (2^53-1) |
| TiDB AUTO_INCREMENT | Returns 64-bit IDs that break JavaScript clients silently |
The JavaScript Problem:
```javascript
// 64-bit ID from TiDB or other distributed DBs
const id = 7318624812345678901;
console.log(id); // 7318624812345679000 - WRONG! Precision lost!

// JSON parsing also breaks
JSON.parse('{"id": 7318624812345678901}'); // {id: 7318624812345679000}
```
TiDB's answer? "Use strings." But that breaks ORMs, existing application code, and type safety.
Marmot offers two ID generation modes to solve this:
| Mode | Bits | Range | Use Case |
|---|---|---|---|
| extended | 64-bit | Full HLC timestamp | New systems, non-JS clients |
| compact | 53-bit | JS-safe integers | Legacy systems, JavaScript, REST APIs |
```toml
[mysql]
auto_id_mode = "compact"     # Safe for JavaScript (default)
# auto_id_mode = "extended"  # Full 64-bit for new systems
```
Compact Mode Guarantees:
With Marmot compact mode:
```javascript
const id = 4503599627370496;
console.log(id); // 4503599627370496 - Correct!

JSON.parse('{"id": 4503599627370496}'); // {id: 4503599627370496} - Correct!
```
Note: Marmot automatically converts INT AUTO_INCREMENT to BIGINT to support distributed ID generation.
DDL Transformation: When you create a table with AUTO_INCREMENT:
```sql
CREATE TABLE users (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100))

-- Becomes internally:
CREATE TABLE users (id BIGINT PRIMARY KEY, name TEXT)
```
DML ID Injection: When inserting with 0 or NULL for an auto-increment column:
```sql
INSERT INTO users (id, name) VALUES (0, 'alice')

-- Becomes internally (compact mode):
INSERT INTO users (id, name) VALUES (4503599627370496, 'alice')
```
Schema-Based Detection:
Marmot automatically detects auto-increment columns by querying the SQLite schema directly: no registration is required, columns are detected at runtime, detection survives restarts, and it works with existing databases.
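For intuition, everything needed for this detection is visible in the SQLite schema itself. A hypothetical look at that schema (not Marmot's internal query):

```sql
-- The rewritten table definition (see the DDL transformation above) as stored by SQLite
SELECT sql FROM sqlite_master WHERE type = 'table' AND name = 'users';

-- Column-level details, including the primary key
PRAGMA table_info('users');
```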
Marmot v2 uses a TOML configuration file (default: config.toml). All settings have sensible defaults.
```toml
node_id = 0                       # 0 = auto-generate
data_dir = "./marmot-data"

[transaction]
heartbeat_timeout_seconds = 10    # Transaction timeout without heartbeat
conflict_window_seconds = 10      # Conflict resolution window
lock_wait_timeout_seconds = 50    # Lock wait timeout (MySQL: innodb_lock_wait_timeout)
```
Note: Transaction log garbage collection is managed by the replication configuration to coordinate with anti-entropy. See replication.gc_min_retention_hours and replication.gc_max_retention_hours.
```toml
[connection_pool]
pool_size = 4                     # Number of SQLite connections
max_idle_time_seconds = 10        # Max idle time before closing
max_lifetime_seconds = 300        # Max connection lifetime (0 = unlimited)

[grpc_client]
keepalive_time_seconds = 10       # Keepalive ping interval
keepalive_timeout_seconds = 3     # Keepalive ping timeout
max_retries = 3                   # Max retry attempts
retry_backoff_ms = 100            # Retry backoff duration

[coordinator]
prepare_timeout_ms = 2000         # Prepare phase timeout
commit_timeout_ms = 2000          # Commit phase timeout
abort_timeout_ms = 2000           # Abort phase timeout

[cluster]
grpc_bind_address = "0.0.0.0"
grpc_port = 8080
seed_nodes = []                   # List of seed node addresses
cluster_secret = ""               # PSK for cluster authentication (see Security section)
gossip_interval_ms = 1000         # Gossip interval
gossip_fanout = 3                 # Number of peers to gossip to
suspect_timeout_ms = 5000         # Suspect timeout
dead_timeout_ms = 10000           # Dead timeout
```
Marmot supports Pre-Shared Key (PSK) authentication for cluster communication. This is strongly recommended for production deployments.
```toml
[cluster]
# All nodes in the cluster must use the same secret
cluster_secret = "your-secret-key-here"
```
Environment Variable (Recommended):
For production, use the environment variable to avoid storing secrets in config files:
```bash
export MARMOT_CLUSTER_SECRET="your-secret-key-here"
./marmot
```
The environment variable takes precedence over the config file.
Generating a Secret:
```bash
# Generate a secure random secret
openssl rand -base64 32
```
Behavior:
Marmot provides admin HTTP endpoints for managing cluster membership (requires cluster_secret to be configured):
Node Lifecycle:
```bash
# View cluster members and quorum info
curl -H "X-Marmot-Secret: your-secret" http://localhost:8080/admin/cluster/members

# Remove a node from the cluster (excludes from quorum, blocks auto-rejoin)
curl -X POST -H "X-Marmot-Secret: your-secret" http://localhost:8080/admin/cluster/remove/2

# Allow a removed node to rejoin (node must then restart to join)
curl -X POST -H "X-Marmot-Secret: your-secret" http://localhost:8080/admin/cluster/allow/2
```
See the Operations documentation for detailed usage and examples.
For read-only replicas that follow cluster nodes with transparent failover:
```toml
[replica]
enabled = true                    # Enable read-only replica mode
follow_addresses = ["node1:8080", "node2:8080", "node3:8080"]  # Seed nodes for discovery
secret = "replica-secret"         # PSK for authentication (required)
discovery_interval_seconds = 30   # How often to poll for cluster membership
failover_timeout_seconds = 60     # Max time to find alive node during failover
reconnect_interval_seconds = 5    # Reconnect delay on disconnect
```
You can also specify follow addresses via CLI:
```bash
./marmot --config=replica.toml --follow-addresses=node1:8080,node2:8080,node3:8080
```
Note: Replica mode is mutually exclusive with cluster mode. A replica receives all data via streaming replication but cannot accept writes. It automatically discovers cluster nodes and fails over to another node if the current source becomes unavailable.
```toml
[replication]
default_write_consistency = "QUORUM"    # Write consistency level: ONE, QUORUM, ALL
default_read_consistency = "LOCAL_ONE"  # Read consistency level
write_timeout_ms = 5000                 # Write operation timeout
read_timeout_ms = 2000                  # Read operation timeout

# Anti-Entropy: Background healing for eventual consistency
# - Detects and repairs divergence between replicas
# - Uses delta sync for small lags, snapshot for large lags
# - Includes gap detection to prevent incomplete data after GC
enable_anti_entropy = true                  # Enable automatic catch-up for lagging nodes
anti_entropy_interval_seconds = 60          # How often to check for lag (default: 60s)
delta_sync_threshold_transactions = 10000   # Delta sync if lag < 10K txns
delta_sync_threshold_seconds = 3600         # Snapshot if lag > 1 hour

# Garbage Collection: Reclaim disk space by deleting old transaction records
# - gc_min must be >= delta_sync_threshold (validated at startup)
# - gc_max should be >= 2x delta_sync_threshold (recommended)
# - Set gc_max = 0 for unlimited retention
gc_min_retention_hours = 2    # Keep at least 2 hours (>= 1 hour delta threshold)
gc_max_retention_hours = 24   # Force delete after 24 hours
```
Anti-Entropy Tuning:
GC Configuration Rules (Validated at Startup):
```toml
[query_pipeline]
transpiler_cache_size = 10000   # LRU cache for MySQL→SQLite transpilation
validator_pool_size = 8         # SQLite connection pool for validation
```
```toml
[mysql]
enabled = true
bind_address = "0.0.0.0"
port = 3306
max_connections = 1000
unix_socket = ""            # Unix socket path (empty = disabled)
unix_socket_perm = 0660     # Socket file permissions
auto_id_mode = "compact"    # "compact" (53-bit, JS-safe) or "extended" (64-bit)
```
Unix Socket Connection (lower latency than TCP):
```bash
mysql --socket=/tmp/marmot/mysql.sock -u root
```
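This assumes the `unix_socket` option in the `[mysql]` section has been pointed at the same path, for example:

```toml
[mysql]
unix_socket = "/tmp/marmot/mysql.sock"   # must match the path used by the client
```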
```toml
[publisher]
enabled = false                # Enable CDC publishing to external systems

[[publisher.sinks]]
name = "kafka-main"            # Unique sink name
type = "kafka"                 # "kafka" or "nats"
format = "debezium"            # Debezium-compatible JSON (only option)
brokers = ["localhost:9092"]   # Kafka broker addresses
topic_prefix = "marmot.cdc"    # Topic pattern: {prefix}.{db}.{table}
filter_tables = ["*"]          # Glob patterns for table filtering
filter_databases = ["*"]       # Glob patterns for database filtering
batch_size = 100               # Events to read per poll cycle
poll_interval_ms = 10          # Polling interval (default: 10ms)
retry_initial_ms = 100         # Initial retry delay on failure
retry_max_ms = 30000           # Max retry delay (30 seconds)
retry_multiplier = 2.0         # Exponential backoff multiplier
```
See the Integrations documentation for details on event format, Kafka/NATS configuration, and use cases.
```toml
[logging]
verbose = false      # Enable verbose logging
format = "console"   # Log format: console or json
```
```toml
[prometheus]
enabled = true   # Metrics served on gRPC port at /metrics endpoint
```
Accessing Metrics:
```bash
# Metrics are multiplexed with gRPC on the same port
curl http://localhost:8080/metrics
```

```yaml
# Prometheus scrape config
scrape_configs:
  - job_name: 'marmot'
    static_configs:
      - targets: ['node1:8080', 'node2:8080', 'node3:8080']
```
See config.toml for complete configuration reference with detailed comments.
Performance benchmarks on a local development machine (Apple M-series, 3-node cluster, single machine):
| Parameter | Value |
|---|---|
| Nodes | 3 (ports 3307, 3308, 3309) |
| Threads | 16 |
| Batch Size | 10 ops/transaction |
| Consistency | QUORUM |
| Metric | Value |
|---|---|
| Throughput | 4,175 ops/sec |
| TX Throughput | 417 tx/sec |
| Records Loaded | 200,000 |
| Errors | 0 |
| Metric | Value |
|---|---|
| Throughput | 3,370 ops/sec |
| TX Throughput | 337 tx/sec |
| Duration | 120 seconds |
| Total Operations | 404,930 |
| Errors | 0 |
| Retries | 37 (0.09%) |
Operation Distribution:
| Percentile | Latency |
|---|---|
| P50 | 4.3ms |
| P90 | 14.0ms |
| P95 | 36.8ms |
| P99 | 85.1ms |
All 3 nodes maintained identical row counts (346,684 rows) throughout the test, confirming consistent replication.
Note: These benchmarks are from a local development machine with all nodes on the same host. Production deployments across multiple machines will have different characteristics based on network latency. Expect P99 latencies of 50-200ms for cross-region QUORUM writes.
Marmot's SQLite files are standard WAL-mode databases, compatible with Litestream:
```bash
litestream replicate /path/to/marmot-data/*.db s3://bucket/backup
```
Enable CDC publisher to stream changes to Kafka/NATS, then archive to your preferred storage.
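A quick way to inspect the stream, assuming the Kafka sink configuration shown earlier (topic name is illustrative):

```bash
# Tail Debezium-format CDC events for one table
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic marmot.cdc.myapp.users --from-beginning
```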
Since Marmot uses SQLite with WAL mode, you can safely snapshot the data directory during operation.
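For example, SQLite's online backup command can copy a live WAL-mode database without stopping the node (paths are placeholders):

```bash
sqlite3 /path/to/marmot-data/myapp.db ".backup /backups/myapp.db"
```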