I'm creating a minimalistic Ubuntu OS image for Azure, and in the Dockerfile I'm trying to pull the tarball and build it from scratch ("FROM scratch .."). I see there are various tarballs available here and I'm not sure which one I should use: http://cloud-images.ubuntu.com/minimal/releases/bionic/release-20201210/. I need some advice on this.
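A minimal sketch of what that Dockerfile could look like, assuming the amd64 root-filesystem tarball from that release directory (the -root.tar.xz artifact is the one meant for FROM scratch; the other files in the listing are disk images — verify the exact filename against the index):

```dockerfile
# Build from the Ubuntu minimal cloud-image root tarball.
# ADD auto-extracts a local tar archive into the image filesystem.
FROM scratch
ADD ubuntu-18.04-minimal-cloudimg-amd64-root.tar.xz /
CMD ["/bin/bash"]
```

Download the tarball next to the Dockerfile first (e.g. with wget from the URL above); note that ADD only auto-extracts local archives, not remote URLs.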
We are currently using Ubuntu Server 18.04 LTS for our self-hosted Azure VMs and are looking into upgrading them to Ubuntu Server 20.04 LTS. We used the URN Canonical:UbuntuServer:18.04-LTS:latest to create our existing self-hosted Azure VMs.
This webpage (https://az-vm-image.info/?cmd=--all+--publisher+Canonical) lists the URNs for the various Azure VM images that the az vm image list --output table command would list from the Azure CLI.
If you collapse the groups on that webpage, I would expect a Canonical - UbuntuServer - 20.04-LTS entry (i.e. Canonical:UbuntuServer:20.04-LTS:<version>) near the end of that list, but there isn't one. I know the Azure VM image exists, though, because the marketplace has it: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/canonical.0001-com-ubuntu-server-focal?tab=Overview
I'm new to this, so I'm not sure how to get the URN out of that marketplace link, or which URN at the az-vm-image link above is the correct one to get Ubuntu Server 20.04 LTS for our self-hosted Azure VMs.
For reference, I am using the Azure CLI command az vm create .. --image Canonical:UbuntuServer:18.04-LTS:latest .. to create our VMs. I tried Canonical:UbuntuServer:20.04-LTS:latest, and that VM image doesn't exist.
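A sketch of why the old URN fails: Ubuntu 20.04 images are published under a new offer name, "0001-com-ubuntu-server-focal" (visible in the marketplace link above), rather than under the old "UbuntuServer" offer, so the URN's middle components change:

```shell
# URN format is publisher:offer:sku:version.
# 20.04 ("focal") moved to a new offer, so Canonical:UbuntuServer:20.04-LTS:latest
# does not exist; the Focal offer and SKU are used instead.
publisher="Canonical"
offer="0001-com-ubuntu-server-focal"   # "focal" = Ubuntu 20.04
sku="20_04-lts"
urn="${publisher}:${offer}:${sku}:latest"
echo "$urn"   # -> Canonical:0001-com-ubuntu-server-focal:20_04-lts:latest

# With a logged-in Azure CLI you could verify the SKUs and create the VM:
#   az vm image list --publisher "$publisher" --offer "$offer" --all --output table
#   az vm create ... --image "$urn" ...
```

The commented az commands assume a logged-in Azure CLI session; az vm image list confirms the SKUs available under that offer before you commit to one.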
$ sudo netstat -plnt | grep rdp
tcp 0 0 127.0.0.1:3350 0.0.0.0:* LISTEN 83971/xrdp-sesman
As you can see, xrdp is not listening on port 3389.
$ tail -f /var/log/syslog
May 3 04:19:36 vmName systemd[1]: Starting LSB: disk temperature monitoring daemon...
May 3 04:19:36 vmName systemd[1]: Started LSB: disk temperature monitoring daemon.
May 3 04:19:37 vmName systemd[1]: Reloading.
May 3 04:19:37 vmName systemd[1]: Started ACPI event daemon.
May 3 04:19:37 vmName systemd[1]: Reloading.
May 3 04:19:37 vmName systemd[1]: Started ACPI event daemon.
May 3 04:19:37 vmName systemd[1]: Reloading.
May 3 04:19:37 vmName systemd[1]: Started ACPI event daemon.
May 3 04:24:08 vmName start_jupyterhub.sh[2210]: 04:24:08.613 [ConfigProxy] #033[32minfo#033[39m: 200 GET /api/routes
May 3 04:24:08 vmName start_jupyterhub.sh[2210]: [I 2020-05-03 04:24:08.613 JupyterHub proxy:319] Checking routes
May 3 04:29:08 vmName start_jupyterhub.sh[2210]: 04:29:08.613 [ConfigProxy] #033[32minfo#033[39m: 200 GET /api/routes
May 3 04:29:08 vmName start_jupyterhub.sh[2210]: [I 2020-05-03 04:29:08.613 JupyterHub proxy:319] Checking routes
$ telnet PUBLIC_IP 3389
Trying PUBLIC_IP...
telnet: Unable to connect to remote host: Connection refused
$ sudo systemctl status xrdp
● xrdp.service - LSB: Start xrdp and sesman daemons
Loaded: loaded (/etc/init.d/xrdp; bad; vendor preset: enabled)
Active: active (running) since Sun 2020-05-03 04:05:37 UTC; 14min ago
Docs: man:systemd-sysv-generator(8)
CGroup: /system.slice/xrdp.service
└─83971 /usr/sbin/xrdp-sesman
May 03 04:05:37 vmName systemd[1]: Starting LSB: Start xrdp and sesman daemons..
May 03 04:05:37 vmName xrdp[83956]: * Starting Remote Desktop Protocol server
May 03 04:05:37 vmName xrdp[83956]: ...done.
May 03 04:05:37 vmName systemd[1]: Started LSB: Start xrdp and sesman daemons.
I am trying to make an RDP connection to my Linux VM on the Azure cloud. I am able to connect through SSH, as you can see above. The NSG (firewall) allows port 3389, but xrdp is not listening on it.
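A diagnostic sketch for this situation (the config path assumes the stock Ubuntu xrdp package; adjust if yours differs): confirm which port xrdp is configured to use, then restart the service and re-check the listeners.

```shell
# Check the configured xrdp port (default: port=3389), then restart
# the service and look for listeners on 3389 (xrdp) and 3350 (sesman).
conf=/etc/xrdp/xrdp.ini
if [ -f "$conf" ]; then
    grep -i '^port=' "$conf"
    sudo systemctl restart xrdp
    sudo ss -plnt | grep -E ':(3389|3350)'
else
    echo "xrdp config not found at $conf"
fi
```

If only xrdp-sesman (3350) ever appears, the main xrdp daemon is failing to start or bind; its own log (commonly /var/log/xrdp.log) would be the next place to look.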
Here is part of the output of:
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere multiport dports 3853
ACCEPT tcp -- anywhere anywhere multiport dports 3853
...
ACCEPT tcp -- anywhere anywhere multiport dports ldap
ACCEPT tcp -- anywhere anywhere multiport dports 3389
ACCEPT tcp -- anywhere anywhere multiport dports 3389
...
ACCEPT tcp -- anywhere anywhere multiport dports ldap
ACCEPT tcp -- anywhere anywhere multiport dports ldap
...
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
ACCEPT tcp -- anywhere anywhere multiport dports 3853
Chain DOCKER (1 references)
target prot opt source destination
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
target prot opt source destination
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
The situation is that a simple docker command:
docker run -d -p 3128:3128 my_squid_container
works fine at home. However, when using the same Docker container on the Azure Ubuntu server, I get:
$ curl --proxy http://localhost:3128 http://google.com
curl: (56) Recv failure: Connection reset by peer
After a lot of troubleshooting, it turns out that the Docker default network (of the Azure Ubuntu server) is blocked, by some kind of firewall I suppose, but I couldn't figure out or confirm the real source.
There has been suspicion of UFW, as in:
- UFW not blocking connections to docker instance
- Uncomplicated Firewall (UFW) is not blocking anything when using Docker
Moreover, my UFW status is inactive:
$ sudo ufw status
Status: inactive
So is it true that UFW is blocking my Docker network connection?
Trying to answer this myself, I checked "How do I know if my firewall is on?", and here is the relevant information that might help:
$ sudo ufw status
Status: inactive
$ sudo iptables -v -x -n -L
Chain INPUT (policy ACCEPT 186 packets, 67614 bytes)
pkts bytes target prot opt in out source destination
Chain FORWARD (policy DROP 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination
83321 462267984 DOCKER-USER all -- * * 0.0.0.0/0 0.0.0.0/0
83321 462267984 DOCKER-ISOLATION-STAGE-1 all -- * * 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- * docker0 0.0.0.0/0 0.0.0.0/0 ctstate RELATED,ESTABLISHED
0 0 DOCKER all -- * docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
0 0 ACCEPT all -- docker0 docker0 0.0.0.0/0 0.0.0.0/0
Chain OUTPUT (policy ACCEPT 244 packets, 55542 bytes)
pkts bytes target prot opt in out source destination
Chain DOCKER (1 references)
pkts bytes target prot opt in out source destination
0 0 ACCEPT tcp -- !docker0 docker0 0.0.0.0/0 172.17.0.2 tcp dpt:3128
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
pkts bytes target prot opt in out source destination
0 0 DOCKER-ISOLATION-STAGE-2 all -- docker0 !docker0 0.0.0.0/0 0.0.0.0/0
83729 466271977 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-ISOLATION-STAGE-2 (1 references)
pkts bytes target prot opt in out source destination
0 0 DROP all -- * docker0 0.0.0.0/0 0.0.0.0/0
44567 231463994 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
Chain DOCKER-USER (1 references)
pkts bytes target prot opt in out source destination
245275 2311152470 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0
# NAT setting
$ sudo iptables -t nat -v -x -n -L
Chain PREROUTING (policy ACCEPT 60056 packets, 2443714 bytes)
pkts bytes target prot opt in out source destination
80820 3320327 DOCKER all -- * * 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT 60053 packets, 2443525 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 217217 packets, 13050882 bytes)
pkts bytes target prot opt in out source destination
0 0 DOCKER all -- * * 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT 217217 packets, 13050882 bytes)
pkts bytes target prot opt in out source destination
3 189 MASQUERADE all -- * !docker0 172.17.0.0/16 0.0.0.0/0
0 0 MASQUERADE tcp -- * * 172.17.0.2 172.17.0.2 tcp dpt:3128
Chain DOCKER (2 references)
pkts bytes target prot opt in out source destination
0 0 RETURN all -- docker0 * 0.0.0.0/0 0.0.0.0/0
0 0 DNAT tcp -- !docker0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:3128 to:172.17.0.2:3128
I do have fail2ban installed and running, by the way, but I doubt it is blocking my internal port usage.
All in all, who is blocking my Docker default network? Thanks.
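One way to narrow this down is a before/after comparison of the FORWARD counters (a sketch, to be run as root on the VM): list the rules with packet counters, retry the proxied request, then list again and see which rule's counters moved.

```shell
# Compare FORWARD counters before and after a proxied request to see
# which rule (e.g. a DOCKER-ISOLATION-STAGE-2 DROP) the traffic hits.
if command -v iptables >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    iptables -v -x -n -L FORWARD --line-numbers
    curl -s --proxy http://localhost:3128 http://google.com >/dev/null || true
    iptables -v -x -n -L FORWARD --line-numbers
else
    echo "run this as root on the VM"
fi
```

The rule whose counters increment between the two listings is the one matching the connection; if it is an ACCEPT, the block is happening elsewhere (e.g. inside the container or at the NSG).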
UPDATE2:
The Docker default network was blocked initially, then it suddenly worked for no reason (as explained below), and now it is blocked again. It's good that I listed the output of the iptables commands, including the NAT setting, so I can compare then and now. It turns out that for the two iptables commands above, the rules are still the same -- the outputs differ only in the packet and byte counts.
- iptables -v -x -n -L: https://paste.pics/7918fba5e040d63cfb0fc28d9f233835
- iptables -t nat -v -x -n -L: https://paste.pics/ada83fb1ae4e933b2511a827094ce788
So UFW should be ruled out as the cause. If confirmed, I'll remove its tag.
UPDATE: (it suddenly worked, for no reason, for a very short while)
I don't know what happened; I was about to add more info, and this is what I got, word for word, nothing more, nothing less:
$ curl --proxy http://localhost:3128 https://google.com
curl: (56) Proxy CONNECT aborted
$ curl -v --proxy http://localhost:3128 http://google.com
* Rebuilt URL to: http://google.com/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 3128 (#0)
> GET http://google.com/ HTTP/1.1
> Host: google.com
> User-Agent: curl/7.58.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 301 Moved Permanently
< Location: http://www.google.com/
< Content-Type: text/html; charset=UTF-8
< Date: Sat, 10 Aug 2019 18:06:51 GMT
< Expires: Mon, 09 Sep 2019 18:06:51 GMT
< Cache-Control: public, max-age=2592000
< Server: gws
< Content-Length: 219
< X-XSS-Protection: 0
< X-Frame-Options: SAMEORIGIN
< X-Cache: MISS from 5c83ae696b4c
< X-Cache-Lookup: MISS from 5c83ae696b4c:3128
< Via: 1.1 5c83ae696b4c (squid/4.8)
< Connection: keep-alive
<
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
* Connection #0 to host localhost left intact
$ curl --proxy http://localhost:3128 https://google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="https://www.google.com/">here</A>.
</BODY></HTML>
Honestly, I don't know what's going on. You can see that it didn't work before, even today, and then it suddenly started working right before my eyes.
So let me include again what I posted in the OP (although with no further details than that), and what happened before:
$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy DROP)
target prot opt source destination
DOCKER-USER all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-1 all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere ctstate RELATED,ESTABLISHED
DOCKER all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT all -- anywhere anywhere
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
Chain DOCKER (2 references)
target prot opt source destination
ACCEPT tcp -- anywhere 172.19.0.2 tcp dpt:3128
Chain DOCKER-ISOLATION-STAGE-1 (1 references)
target prot opt source destination
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
DOCKER-ISOLATION-STAGE-2 all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-ISOLATION-STAGE-2 (2 references)
target prot opt source destination
DROP all -- anywhere anywhere
DROP all -- anywhere anywhere
RETURN all -- anywhere anywhere
Chain DOCKER-USER (1 references)
target prot opt source destination
RETURN all -- anywhere anywhere
$ curl -v --proxy http://localhost:3128 http://google.com
* Rebuilt URL to: http://google.com/
* Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 3128 (#0)
> GET http://google.com/ HTTP/1.1
> Host: google.com
> User-Agent: curl/7.58.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
* Recv failure: Connection reset by peer
* stopped the pause stream!
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
Although there are many answers to this problem, the scenario is different here:
I'm using Ubuntu 18.04 on Azure.
By mistake, I made the sudoers file world-writable (sudo chmod o+w /etc/sudoers). There is an appropriate way to fix this, but for it I need the ubuntu user's password.
Output:
ubuntu@azurevm:~$ ls -la /etc/sudoers
-r--r---w- 1 root root 755 Jan 18 2018 /etc/sudoers
ubuntu@azurevm:~$ pkexec chmod 0755 /etc/sudoers
==== AUTHENTICATING FOR org.freedesktop.policykit.exec ===
Authentication is needed to run `/bin/chmod' as the super user
Authenticating as: Ubuntu (ubuntu)
Password:
polkit-agent-helper-1: pam_authenticate failed: Authentication failure
==== AUTHENTICATION FAILED ===
Error executing command as another user: Not authorized
This incident has been reported.
ubuntu@azurevm:~$ uname -a
Linux azurevm 4.18.0-1018-azure #18~18.04.1-Ubuntu SMP Tue May 7 18:09:35 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
whereas the authentication mode chosen while deploying the VM was SSH-based. There are other ways to fix this, but I can neither reset the password for ubuntu nor access the Azure CLI.
Let me know how I can fix this, or if you need further information.