I used docker-bench-security on one of the kube-nodes to check best practices, and it raised a warning that no PID limit is set on containers. How do I set a PID limit for containers in Kubernetes?
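From what I've read so far, one approach might be setting podPidsLimit in the kubelet configuration on each node (a sketch only; 4096 is an example value, and the SupportPodPidsLimit feature gate is enabled by default since Kubernetes 1.14), but I'd like to confirm whether this is the right way:

# /var/lib/kubelet/config.yaml -- caps the number of PIDs per pod
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
podPidsLimit: 4096

As I understand it, the kubelet must be restarted after this change, and the limit applies only to newly created pods.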
I am facing a weird issue with Docker networks. I am using an external bridge network named extenal_network in my Docker containers with auto-restart enabled. Whenever any of the containers restarts due to an error (code- or infra-related), I can no longer access my host network.
Please refer to the attached screenshot for more clarity.
I've tried the links below, but with no luck.
- https://superuser.com/questions/1336567/installing-docker-ce-in-ubuntu-18-04-breaks-internet-connectivity-of-host
- https://success.docker.com/article/how-do-i-influence-which-network-address-ranges-docker-chooses-during-a-docker-network-create
- https://forums.docker.com/t/cant-access-internet-after-installing-docker-in-a-fresh-ubuntu-18-04-machine/53416
Dockerfile
FROM node:10.20.1-alpine
RUN apk add --no-cache python make g++
WORKDIR /home/app
COPY package.json package-lock.json* ./
RUN npm install
COPY . .
Docker-Compose
version: "3"
services:
  app:
    container_name: app
    build:
      context: ./
      dockerfile: Dockerfile
    image: "network_poc:latest"
    ports:
      - "8080:8080"
    deploy:
      resources:
        limits:
          memory: 2G
    networks:
      - shared_network
    restart: always
    command: node index.js
networks:
  shared_network:
    external:
      name: extenal_network
docker inspect extenal_network
[
    {
        "Name": "extenal_network",
        "Id": "96476c227ddc14aa23d376392d380b2674fcbad109c90e7436c0cddd5c0a9ac5",
        "Created": "2020-04-14T00:17:10.89980675+05:30",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "7621a2b5a7e460a905bf86c427aea38b6374ac621c0c1a2b9eca4b671aea4dfe": {
                "Name": "app",
                "EndpointID": "04e9d14a17af05eb7a2b478526365cbce7f726a62f5e2cd315244c2639891b1e",
                "MacAddress": "**:**:**:**:**:**",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
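For reference, I believe the network was created with something along these lines (reconstructed from the inspect output above; the exact flags are an assumption):

# recreate a bridge network matching the inspect output
docker network create \
  --driver bridge \
  --subnet 172.18.0.0/16 \
  --gateway 172.18.0.1 \
  extenal_network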
Any help or suggestions are highly appreciated.
In short, everything works until I try to access my apache server in a docker container from the internet. The incoming packets reach the docker container, but the outgoing packets get dropped by the NAT interface on docker-machine.
My setup is
- typical home LAN setup with a router connected to the internet
- an OS X machine running Docker (i.e. docker-machine, aka boot2docker)
- a docker container running apache (exposing port 80 as 11111)
- I have apache running on osx (my host machine) as well which will serve as a troubleshooting tool.
- I added a bridged interface to docker-machine which connects docker-machine directly to my LAN, and it gets an IP from my router.
- on my router (192.168.1.1) I've port forwarded 11111 to my docker-machine IP (192.168.1.102)
what I can do
- I can connect to Apache from my host and from another computer on my LAN (all 192.168.1.X) thanks to the bridge interface on docker-machine, using both 192.168.1.102:11111 and http://:11111 - so obviously my domain resolves its IP properly and port forwarding on my router works fine.
- I can also easily access the web server on my host machine (osx) from the internet
what I've tried
- port forwarding on my host (osx) using pfctl
- using --net=host on my container
I've narrowed down the problem (using tcpdump and other experimentation): when I connect to the web server in the container from a computer on my LAN, the packets flow through the bridged interface (192.168.1.102) in docker-machine. When I connect from the internet, the incoming packet flows through 192.168.1.102, but the return packet does not. Instead it goes through the NAT interface on docker-machine and gets dropped.

I've proved this by doing "ifconfig eth0 down" on the NAT interface from inside docker-machine. Obviously this screws up my docker-machine, because I can no longer run docker commands and it kills my current ssh session. But when I did this, connectivity to the web server in the container from the internet worked! So I proved that the return packet is the problem.
Now, can I use iptables inside docker-machine to route the packet properly so that it goes out the bridged interface instead of the NAT interface? I've tried it without success. Here are my iptables rules on docker-machine. I want to match any packets coming from my container with a source port of 11111 and route them to eth1 (the bridged interface).
sudo iptables -I FORWARD 1 -i docker0 -o eth1 -p tcp --syn --sport 11111 -m conntrack --ctstate NEW -j ACCEPT
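A source-based policy-routing variant I've also been considering (a sketch only; the table number 100 is arbitrary, and 192.168.1.1 / eth1 come from my setup above) would force replies from the bridged address out through eth1 instead of the NAT interface:

# send any traffic sourced from the bridged IP through its own routing table
sudo ip rule add from 192.168.1.102 table 100
# that table's default route goes out the bridged interface to my router
sudo ip route add default via 192.168.1.1 dev eth1 table 100

Would something like this be the right direction, or is there a cleaner way?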
docker@default:~$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere tcp spt:11111 flags:FIN,SYN,RST,ACK/SYN ctstate NEW
DOCKER-ISOLATION all -- anywhere anywhere
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
Am I going about this all wrong? Is there a standard way to set this up that I haven't explored?
After reading about some people's issue with Docker hosts' time going out of sync, I realized that my Docker host on Digital Ocean (created via docker-machine) may want ntp running on it, and that got me thinking about system updates to the Docker hosts in general.
There has already been a good discussion on applying updates to the actual docker services -- with general agreement that rebuilding images from updated base images is a good solution -- but I haven't seen much focus on the Docker hosts themselves.
For those using Docker in a production environment, are you even bothering with docker-machine, or are you building and maintaining your Docker hosts with traditional tools like Chef/Puppet/etc?
Is it possible to wire up a small zero downtime deployment (*1) with two Amazon EC2 instances? I'd like to roll my services regularly to a new EC2 instance to avoid manual OS updates on the instances itself.
EC2-1: application services
EC2-2: database, consul registry for docker networking
EC2-1 would be the only public instance (bound to an Amazon Elastic IP). It shouldn't be a problem to replicate this one and point the Elastic IP to the new EC2 instance, right?
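The Elastic IP swap itself seems scriptable (a sketch; the allocation and instance IDs are placeholders for your own values):

# re-point the Elastic IP at the freshly provisioned replacement instance;
# --allow-reassociation moves it off the old instance without erroring
aws ec2 associate-address \
  --allocation-id eipalloc-xxxxxxxx \
  --instance-id i-xxxxxxxx \
  --allow-reassociation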
However, I don't know if it is possible to switch EC2-2 as docker stores the docker-networking settings in the consul database. Can I start a replica of that instance and tell docker that it should now use the new consul instance for networking?
(*1) you can't guarantee zero-downtime in case of instance failures etc. with two instances. I mean zero-downtime while moving to new EC2 instances :)