I noticed that the dockerd and docker-proxy processes were using more CPU than expected (15% and 24%, respectively), so I decided to switch to the "host" network to avoid that overhead. However, the results I got were much worse. How can this be explained?
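(For anyone reproducing this: the per-process CPU figures came from sampling during the benchmark. One way to do that, assuming pidstat from the sysstat package is installed, is:)

$ pidstat -C 'dockerd|docker-proxy' 1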
The same does not happen with an nginx container: performance increases from 43k req/sec to 48k req/sec with the "host" network.
Scenario #1 - "bridge" network
Start CouchDB container: docker run -d -p 5984:5984 couchdb
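(The target database and document have to exist before running wrk. Something like the following works, where the JSON body is just an arbitrary small payload:)

$ curl -X PUT http://localhost:5984/mydb
$ curl -X PUT -H 'Content-Type: application/json' -d '{"hello": "world"}' http://localhost:5984/mydb/mydoc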
$ wrk -d 60 http://localhost:5984/mydb/mydoc
Running 1m test @ http://localhost:5984/mydb/mydoc
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.51ms  197.54us   7.81ms   75.74%
    Req/Sec     3.32k   111.59     3.60k    71.58%
  396492 requests in 1.00m, 698.02MB read
Requests/sec:   6608.12
Transfer/sec:     11.63MB
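In this scenario the published port is serviced by docker-proxy (with the default userland-proxy setting), which is why that process shows up in the CPU stats at all. That it owns the listening socket can be confirmed with something like:

$ sudo ss -ltnp | grep 5984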
Scenario #2 - "host" network
Start CouchDB container: docker run --net=host -d -p 5984:5984 couchdb
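(Note: with --net=host the -p flag is effectively ignored, since the container shares the host's network namespace and CouchDB binds its port directly. That host networking is actually in effect can be verified with, e.g.:)

$ docker inspect -f '{{.HostConfig.NetworkMode}}' <container-id>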
$ wrk -d 60 http://localhost:5984/mydb/mydoc
Running 1m test @ http://localhost:5984/mydb/mydoc
  2 threads and 10 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    42.98ms    1.17ms  54.99ms   96.96%
    Req/Sec     116.79     13.73   151.00    56.00%
  13966 requests in 1.00m, 24.59MB read
Requests/sec:    232.57
Transfer/sec:    419.29KB
Environment
- Hardware: i7-6700K, 16GB, SSD
- OS: Fedora 25 (kernel 4.10.13)
- Database: CouchDB 1.6.1