How to Set Up ProxySQL Read/Write Split
Read/Write split is one of the most common query routing patterns. ProxySQL can route queries based on multiple criteria without requiring application modifications.
This guide assumes a primary MySQL server in hostgroup 1 and replicas in hostgroup 2.
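If the backend servers are not yet registered, a minimal setup might look like the following sketch. The hostnames are placeholders; replace them with your own topology:

```sql
-- Register the primary in hostgroup 1 and two replicas in hostgroup 2.
-- Hostnames below are examples only.
INSERT INTO mysql_servers (hostgroup_id, hostname, port) VALUES
(1, 'primary.example.com', 3306),
(2, 'replica1.example.com', 3306),
(2, 'replica2.example.com', 3306);
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```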
Option 1: Port-Based Routing
Separate listening ports act as endpoints — the incoming port determines the target hostgroup.
Configure two listening ports (mysql-interfaces cannot be changed at runtime, so a restart is required):
SET mysql-interfaces='0.0.0.0:6401;0.0.0.0:6402';
SAVE MYSQL VARIABLES TO DISK;
PROXYSQL RESTART;
Add routing rules:
INSERT INTO mysql_query_rules (rule_id,active,proxy_port,destination_hostgroup,apply)
VALUES
(1,1,6401,1,1),
(2,1,6402,2,1);
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
Port 6401 routes to hostgroup 1 (primary); port 6402 routes to hostgroup 2 (replicas).
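To confirm the rules are active, you can inspect the runtime table from the admin interface:

```sql
-- Verify that both port-based rules were loaded to runtime.
SELECT rule_id, active, proxy_port, destination_hostgroup
FROM runtime_mysql_query_rules
ORDER BY rule_id;
```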
Limitation: The application must already distinguish between reads and writes and connect to the appropriate port. ProxySQL's real power is doing this transparently via query analysis.
Option 2: Generic Regex-Based Routing
Route all SELECT statements to replicas and all other queries to the primary:
UPDATE mysql_users SET default_hostgroup=1;
LOAD MYSQL USERS TO RUNTIME;
INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply)
VALUES
(1,1,'^SELECT.*FOR UPDATE',1,1),
(2,1,'^SELECT',2,1);
LOAD MYSQL QUERY RULES TO RUNTIME;
Generic SELECT routing rules cause problems with transactions, session state, and consistency. Analyse your workload first and create targeted rules instead.
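One common mitigation for the transaction problem is transaction_persistent: once a transaction starts, all of its statements stay on the hostgroup where it began, regardless of routing rules. It is enabled per user (and may already be the default in your version):

```sql
-- Keep every statement of an open transaction on the same hostgroup.
UPDATE mysql_users SET transaction_persistent=1;
LOAD MYSQL USERS TO RUNTIME;
SAVE MYSQL USERS TO DISK;
```

This prevents a SELECT inside a read-write transaction from being sent to a replica, but it does not solve session-state or replication-lag consistency issues on its own.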
Option 3: Intelligent Routing via Query Analysis (Recommended)
- Route all traffic to the primary initially
- Analyse stats_mysql_query_digest to identify expensive SELECT statements
- Determine which statements can safely run on replicas
- Add targeted routing rules
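Before collecting data, it can help to clear previously accumulated digests so the statistics reflect only the current workload. Selecting from stats_mysql_query_digest_reset returns the current stats and atomically resets them:

```sql
-- Read-and-reset: returns the accumulated digest stats and clears them.
SELECT COUNT(*) FROM stats_mysql_query_digest_reset;
```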
Analyse Query Performance
Top 5 queries by total execution time:
SELECT digest, SUBSTR(digest_text,0,25), count_star, sum_time
FROM stats_mysql_query_digest
WHERE digest_text LIKE 'SELECT%'
ORDER BY sum_time DESC LIMIT 5;
Top 5 by execution count:
SELECT digest, SUBSTR(digest_text,0,25), count_star, sum_time
FROM stats_mysql_query_digest
WHERE digest_text LIKE 'SELECT%'
ORDER BY count_star DESC LIMIT 5;
By maximum execution time:
SELECT digest, SUBSTR(digest_text,0,25), count_star, sum_time,
sum_time/count_star avg_time, min_time, max_time
FROM stats_mysql_query_digest
WHERE digest_text LIKE 'SELECT%'
ORDER BY max_time DESC LIMIT 5;
By total time, minimum individual execution > 1 ms:
SELECT digest, SUBSTR(digest_text,0,20), count_star, sum_time,
sum_time/count_star avg_time, min_time, max_time
FROM stats_mysql_query_digest
WHERE digest_text LIKE 'SELECT%' AND min_time > 1000
ORDER BY sum_time DESC LIMIT 5;
By total time, average execution over 1 second (with percentage of total workload):
SELECT digest, SUBSTR(digest_text,0,25), count_star, sum_time,
sum_time/count_star avg_time,
ROUND(sum_time*100.00/(SELECT SUM(sum_time) FROM stats_mysql_query_digest),3) pct
FROM stats_mysql_query_digest
WHERE digest_text LIKE 'SELECT%' AND sum_time/count_star > 1000000
ORDER BY sum_time DESC LIMIT 5;
By total time, average execution over 15 ms (with percentage of SELECT workload):
SELECT digest, SUBSTR(digest_text,0,25), count_star, sum_time,
sum_time/count_star avg_time,
ROUND(sum_time*100.00/(
SELECT SUM(sum_time) FROM stats_mysql_query_digest WHERE digest_text LIKE 'SELECT%'
),3) pct
FROM stats_mysql_query_digest
WHERE digest_text LIKE 'SELECT%' AND sum_time/count_star > 15000
ORDER BY sum_time DESC LIMIT 5;
Create Targeted Rules
Route a specific query (by digest hash) to replicas:
INSERT INTO mysql_query_rules (rule_id,active,digest,destination_hostgroup,apply)
VALUES (1,1,'0x38BE36BDFFDBE638',2,1);
Route all COUNT(*) queries to replicas:
INSERT INTO mysql_query_rules (rule_id,active,match_digest,destination_hostgroup,apply)
VALUES (1,1,'^SELECT COUNT\(\*\)',2,1);
Apply and persist:
LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
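After loading, verify that the new rules actually match traffic by watching their hit counters:

```sql
-- Each matched query increments hits on the first matching rule.
SELECT rule_id, hits FROM stats_mysql_query_rules ORDER BY hits DESC;
```

A rule with zero hits after representative traffic has run usually means the digest or pattern does not match what the application actually sends.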
Best Practices
- Re-evaluate query patterns regularly as workloads evolve
- Prefer match_digest over match_pattern: digests are smaller and faster to evaluate
- Use digest-based rules for high-value individual queries, regex rules for broad patterns
- Always test rules against a representative workload before applying in production