I run a private docker registry v2.7.0 as a Kubernetes pod with a service and a persistent volume, set up by following the Varun Kumar G tutorial, which has been the only method that worked on my setup for Kubernetes to pull from my private registry on my 3-node on-premises cluster (Ubuntu 20.04 LTS KVMs).
The problem is deleting images from the Kubernetes pod running docker registry v2.7.0 (I had to use the previous version because the latest, v2.7.1, does not work with htpasswd). I have also read lots of similar threads like this, this and this.
With docker registry v2.7.1 run as a docker container, I had no problems deleting images, but with docker registry v2.7.0 run as a Kubernetes pod, the usual deletion steps result in being unable to push the deleted image again, even after successfully deleting blobs and manually deleting image folders under /var/lib/registry/docker/registry/v2/repositories/.
Below is the registry pod yaml
apiVersion: v1
kind: Pod
metadata:
  name: dockreg-pod
  labels:
    app: mregistry
spec:
  containers:
  - name: registry
    image: registry:2.7.0
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: repo-vol
      mountPath: "/var/lib/registry"
    - name: certs-vol
      mountPath: "/certs"
      readOnly: true
    - name: auth-vol
      mountPath: "/auth"
      readOnly: true
    env:
    - name: REGISTRY_AUTH
      value: "htpasswd"
    - name: REGISTRY_AUTH_HTPASSWD_REALM
      value: "Registry Realm"
    - name: REGISTRY_AUTH_HTPASSWD_PATH
      value: "/auth/htpasswd"
    - name: REGISTRY_HTTP_TLS_CERTIFICATE
      value: "/certs/tls.crt"
    - name: REGISTRY_HTTP_TLS_KEY
      value: "/certs/tls.key"
    - name: REGISTRY_STORAGE_DELETE_ENABLED
      value: "true"
  volumes:
  - name: repo-vol
    persistentVolumeClaim:
      claimName: repo-pvc
  - name: certs-vol
    secret:
      secretName: certs-secret
  - name: auth-vol
    secret:
      secretName: auth-secret
  restartPolicy: Always
  nodeName: spring
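For context, secrets compatible with the certs-vol and auth-vol mounts above can be created along these lines (a sketch only; the local file paths are placeholders, not my exact ones):

# kubectl create secret tls produces the keys tls.crt / tls.key that the
# REGISTRY_HTTP_TLS_* variables point at; --from-file=htpasswd=... produces
# the key htpasswd mounted at /auth/htpasswd.
kubectl create secret tls certs-secret \
  --cert=./certs/tls.crt --key=./certs/tls.key
kubectl create secret generic auth-secret \
  --from-file=htpasswd=./auth/htpasswd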
The following is the persistent volume yaml, together with its claim:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: repo-pv
  labels:
    type: prstore
spec:
  capacity:
    storage: 7Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    fsType: ext4
    path: /root/repo
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - spring
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: repo-pvc
  labels:
    type: prstore
spec:
  selector:
    matchLabels:
      type: prstore
  volumeMode: Filesystem
  storageClassName: local-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 7Gi
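The manifests are applied and checked in the usual way, for example (file names are just illustrative):

kubectl apply -f repo-pv.yaml
kubectl apply -f dockreg-pod.yaml
kubectl get pv,pvc          # repo-pvc should show as Bound to repo-pv
kubectl get pod dockreg-pod -o wide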
Say I push an image to a brand-new registry pod, having also wiped the persistent storage beforehand.
root@sea:scripts# docker push dockreg:5000/mubu4:v4
The push refers to repository [dockreg:5000/mubu4]
9f54eef41275: Pushed
v4: digest: sha256:7bd0d9a9821815dccb5c53c18cea04591ec633e2e529c5cdd39681169589c17f size: 529
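As a sanity check (assuming the registry API is reachable the same way as for the DELETE call further down), the pushed tag and its manifest digest can be confirmed with:

# List repositories and tags, then read the manifest digest from the
# Docker-Content-Digest response header of a HEAD request.
curl -u alexander:sofianos -k https://dockreg:5000/v2/_catalog
curl -u alexander:sofianos -k https://dockreg:5000/v2/mubu4/tags/list
curl -u alexander:sofianos -k -I \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://dockreg:5000/v2/mubu4/manifests/v4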
Deletion of images appears to work as intended until I try to push the deleted image again, at which point I get the dreaded Layer already exists error.
As you might have seen above, I have included the following in the registry pod environment,

- name: REGISTRY_STORAGE_DELETE_ENABLED
  value: "true"

otherwise I would get an unsupported error from the curl -X DELETE call, even after adding

delete:
  enabled: true

in /etc/docker/registry/config.yml within the pod:
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
but that change seems to make no difference in my use case.
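As a further sanity check that the setting is actually in effect inside the pod, the environment and the config file can be inspected like this:

kubectl exec dockreg-pod -- env | grep REGISTRY_STORAGE_DELETE_ENABLED
kubectl exec dockreg-pod -- cat /etc/docker/registry/config.yml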
Following are the deletion steps.
curl -u alexander:sofianos \
> -vsk -H "Accept: \
> application/vnd.docker.distribution.manifest.v2+json" \
> -X DELETE \
> https://dockreg:5000/v2/mubu4/manifests/sha256:\
> 7bd0d9a9821815dccb5c53c18cea04591ec633e2e529c5cdd39681169589c17f
The above prints, among other things, the following
> DELETE /v2/mubu4/manifests/sha256:7bd0d9a9821815dccb5c53c18cea04591ec633e2e529c5cdd39681169589c17f HTTP/2
> Host: dockreg:5000
> authorization: Basic YWxleGFuZGVyOnNvZmlhbm9z
> user-agent: curl/7.68.0
> accept: application/vnd.docker.distribution.manifest.v2+json
>
* Connection state changed (MAX_CONCURRENT_STREAMS == 250)!
< HTTP/2 202
< docker-distribution-api-version: registry/2.0
< x-content-type-options: nosniff
< content-length: 0
< date: Sat, 30 Oct 2021 13:25:53 GMT
<
* Connection #0 to host dockreg left intact
which seems to be in order.
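At this point, if I understand the API correctly, fetching the deleted manifest by tag should fail, which can be checked with:

curl -u alexander:sofianos -k \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://dockreg:5000/v2/mubu4/manifests/v4
# expected after the DELETE: an errors payload with code MANIFEST_UNKNOWN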
Below, deleting blobs from within the registry pod
root@sea:scripts# kubectl exec -it dockreg-pod -- sh
/ # bin/registry garbage-collect /etc/docker/registry/config.yml
mubu4
0 blobs marked, 3 blobs and 0 manifests eligible for deletion
blob eligible for deletion: sha256:7b1a6ab2e44dbac178598dabe7cff59bd67233dba0b27e4fbd1f9d4b3c877a54
INFO[0000] Deleting blob: /docker/registry/v2/blobs/sha256/7b/7b1a6ab2e44dbac178598dabe7cff59bd67233dba0b27e4fbd1f9d4b3c877a54 go.version=go1.11.2 instance.id=82a101ee-47f4-4f4f-bc79-76d774b0924b service=registry
blob eligible for deletion: sha256:7bd0d9a9821815dccb5c53c18cea04591ec633e2e529c5cdd39681169589c17f
INFO[0000] Deleting blob: /docker/registry/v2/blobs/sha256/7b/7bd0d9a9821815dccb5c53c18cea04591ec633e2e529c5cdd39681169589c17f go.version=go1.11.2 instance.id=82a101ee-47f4-4f4f-bc79-76d774b0924b service=registry
blob eligible for deletion: sha256:ecb35fc8715f5ab1d9053ecb2f2d9ebbec4a59c0a0615d98de53bc29f7285085
INFO[0000] Deleting blob: /docker/registry/v2/blobs/sha256/ec/ecb35fc8715f5ab1d9053ecb2f2d9ebbec4a59c0a0615d98de53bc29f7285085 go.version=go1.11.2 instance.id=82a101ee-47f4-4f4f-bc79-76d774b0924b service=registry
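As far as I understand, "Layer already exists" on push means the client received a successful HEAD response for the layer blob, so after garbage collection I would expect a check like the following (using one of the blob digests reported above) to return 404 rather than 200:

curl -u alexander:sofianos -k -I \
  https://dockreg:5000/v2/mubu4/blobs/sha256:ecb35fc8715f5ab1d9053ecb2f2d9ebbec4a59c0a0615d98de53bc29f7285085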
Lastly, manually deleting the repository image
/ # rm -rf /var/lib/registry/docker/registry/v2/repositories/mubu4
On my persistent storage, the registry now looks like this
root@spring:repo# tree
.
└── docker
    └── registry
        └── v2
            ├── blobs
            │   └── sha256
            │       ├── 7b
            │       └── ec
            └── repositories

8 directories, 0 files
But when I try to push the deleted image again, I get
root@sea:scripts# docker push dockreg:5000/mubu4:v4
The push refers to repository [dockreg:5000/mubu4]
9f54eef41275: Layer already exists
v4: digest: sha256:7bd0d9a9821815dccb5c53c18cea04591ec633e2e529c5cdd39681169589c17f size: 529
and in my registry, the mubu4 image folder I previously deleted has been mysteriously recreated by the above push command.
root@spring:repo# tree
.
└── docker
    └── registry
        └── v2
            ├── blobs
            │   └── sha256
            │       ├── 7b
            │       └── ec
            └── repositories
                └── mubu4
                    └── _manifests
                        ├── revisions
                        │   └── sha256
                        │       └── 7bd0d9a9821815dccb5c53c18cea04591ec633e2e529c5cdd39681169589c17f
                        │           └── link
                        └── tags
                            └── v4
                                ├── current
                                │   └── link
                                └── index
                                    └── sha256
                                        └── 7bd0d9a9821815dccb5c53c18cea04591ec633e2e529c5cdd39681169589c17f
                                            └── link

19 directories, 3 files
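To see what the registry itself answers during that push, the pod logs can be tailed while pushing (the registry logs each request to stdout):

kubectl logs -f dockreg-pod
# watch for the HEAD /v2/mubu4/blobs/sha256:... entries and their response
# codes while docker push runs in another terminal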
I also tried wiping out the persistent storage with

root@spring:repo# rm -rf *

to no avail. Trying to push the deleted image afterwards still outputs the exact same Layer already exists error, and the registry tree is again auto-recreated, looking exactly as it does in the above tree output.
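To rule out a stale mount, what the pod sees under /var/lib/registry can also be compared with what is on the node:

kubectl exec dockreg-pod -- ls -R /var/lib/registry
kubectl exec dockreg-pod -- sh -c 'mount | grep /var/lib/registry'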
The question is: what else can I try to make this work? Alternatively, it follows from the above testing that, inside the docker registry Kubernetes pod, there must be other files holding state in which deleted images are not in fact deleted, and that state triggers the recreation of the deleted image on a docker push call. Where should I look, apart from the tree under

/var/lib/registry/docker/registry/v2/

so that I can delete all references to deleted images?
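In case it is useful, this is a sketch of how I would enumerate every file inside the pod that still references the manifest digest (it only covers the filesystem, so it would not reveal any in-memory state such as the inmemory blob descriptor cache from config.yml):

# Grep the whole storage root for the manifest digest; link files are
# plain text, so any remaining reference should show up.
kubectl exec dockreg-pod -- sh -c \
  'grep -rl 7bd0d9a9821815dccb5c53c18cea04591ec633e2e529c5cdd39681169589c17f /var/lib/registry'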