I am running a StatefulSet in my cluster and am trying to give each of the replicas (3 in total) its own PVC. Unfortunately, I can only provide one name as claimName. How do I make sure each starting pod grabs its own PVC? The pods are currently called test and so are the PVCs, so when I start the containers I get pods test-0, test-1 and test-2. My PVCs are called claim-0, claim-1 and claim-2. What options do I have to tell pod test-0 to grab the claim-0 PVC, test-1 to grab claim-1, etc.? Thanks for your input.
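For reference, this is roughly what the StatefulSet looks like at the moment, with a single claimName shared by all replicas (a simplified sketch; the image and mount path are hypothetical placeholders):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: test
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: nginx              # hypothetical image, just for illustration
          volumeMounts:
            - name: data
              mountPath: /data      # hypothetical mount path
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: claim-0      # only one claimName can be given here, so every replica gets the same PVC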
realShadow's questions
I am trying to run postfix as a container in k8s. The container starts (including the services) but my config maps and secrets don't want to play nice. I tried the following:
- Set up the config map with the user and password in clear text, then query it with
postmap -q someuser@localhost mysql:./virtual_mailbox.cf
RESULT: WORKS.
- Base64-encode the username and password (as per the k8s instructions), read these encoded values into the environment variables of the container via
envFrom:
  - secretRef:
      name: postfix-db-access
then try to connect to the database with postmap.
For this scenario the config map looks like the following:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postfix-db-configs
  namespace: mailserver
data:
  virtual_mailbox.cf: |
    user=$(echo ${POSTFIX_USER} | base64 -d)
    password=$(echo ${POSTFIX_PASS} | base64 -d)
    hosts=database.default.svc.cluster.local
    dbname=postfix
    query=SELECT mail FROM generic_map WHERE local_mail='%s' AND active=1;
RESULT: FAILS. User '$(echo ${POSTFIX_USER} | base64 -d)' has no access to the database.
- Store the username and password for the postfix user in clear text in the secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: postfix-db-access
  namespace: mailserver
type: Opaque
stringData:
  POSTFIX_USER: PostfixUser
  POSTFIX_PASS: somePassword
and change the corresponding line in the config map to
user=$(echo ${POSTFIX_USER})
RESULT: FAILS with "user '$(echo ${POSTFIX_USER})' has no access to the database". The query does not expand the environment variable, even though it is set correctly inside the container.
Connecting to the database and querying manually works fine with the command mysql -h database.default.svc.cluster.local -u postfix -p -e 'use postfix;SELECT mail FROM generic_map WHERE local_mail='someuser@localhost' AND active=1;'. I get all the results I need and expect.
The question is: how do I set up the secret and the config map so this process works and the connection to the database is established as intended?
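For completeness, this is roughly how the secret is wired into the postfix container at the moment (a simplified excerpt; only the envFrom and resource names come from above, the rest of the names, image and mount path are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: postfix                       # hypothetical, my real workload is managed by a controller
  namespace: mailserver
spec:
  containers:
    - name: postfix
      image: my-postfix:latest        # hypothetical image
      envFrom:
        - secretRef:
            name: postfix-db-access   # the secret shown above
      volumeMounts:
        - name: db-configs
          mountPath: /etc/postfix/sql # hypothetical mount path for virtual_mailbox.cf
  volumes:
    - name: db-configs
      configMap:
        name: postfix-db-configs      # the config map shown above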
realshadow
I have 2 VMs, both running Debian Buster. One is a test VM, the other my production file server. On both I installed the NFS server package through "apt install". On the test VM I created a share called nfs under /mnt/nfs. This folder is owned by nobody:nogroup. In my exports file I have the following content:
/mnt/nfs *(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
There is no volume or hard drive attached to this "mountpoint"; it is just a directory. When I mount this folder on another machine through sudo mount -t nfs testVM:/mnt/nfs /mnt/disk, the volume is mounted as NFSv4! This is exactly what I want and it is awesome. (Obviously the "other" machine has the nfs-common package installed so I can mount the share in the first place.)
Now to my issue. On my production file server, on which the same packages are installed, 6 volumes (RAID partitions) are mounted. They are managed by the host (a Proxmox virtualization environment) and passed through to the VM. I added the folders I wanted to share to the exports file and exported them through exportfs -rav. As an example, I have shared the following folder:
/srv/test *(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
This was meant as a test to see what happens when I share a folder with nothing externally attached. Unfortunately it does not work: every time I try to mount an NFS share from this server as NFSv4, the mount falls back to NFSv3.
Whatever I try, nothing from the production server gets mounted on any other VM or machine as an NFSv4 share. Everything is shared as NFSv3 only, which I don't want, as NFSv4 supports additional features that I need to make other things in my network work (especially file locking).
Does anybody have an idea why NFSv4 works from my test machine but not from my production file server?
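In case it helps, these are the checks I can run to pin down where the version negotiation goes wrong. A small sketch using standard tools; the hostname fileserver and the mount points are placeholders for my setup:
# on the client: ask for NFSv4 explicitly instead of letting mount negotiate
sudo mount -t nfs -o vers=4 fileserver:/srv/test /mnt/disk
# on the client: show which NFS version an existing mount actually negotiated
nfsstat -m
# on the server: show which protocol versions the kernel NFS daemon has enabled
cat /proc/fs/nfsd/versions
# on the server: list what is registered with rpcbind (NFSv4 only needs port 2049, NFSv3 needs the extra helper services)
rpcinfo -p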
I am trying to set up a mariadb-galera cluster through the Bitnami Helm chart in my Kubernetes cluster (1 master, 3 nodes). I have modified my values.yaml to include existingClaim: dbstorage and storageClass: "nfs-storage". The image tag is 10.5.9-debian-10-r52. I added a root password and commented out the accessModes: and size: entries, as those are defined by the existing persistent volume claim. I did not define anything in the db: section and left it at the defaults, and I also left the galera.mariabackup section (e.g. the password) at its defaults.
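The relevant persistence override in my values file boils down to this excerpt (indentation restored; the same values appear in the full file below):
persistence:
  enabled: true
  # Enable persistence using an existing PVC
  existingClaim: dbstorage
  mountPath: /bitnami/mariadb
  ## Persistent Volume Storage Class
  storageClass: "nfs-storage"
  ## accessModes and size stay commented out, they come from the existing claim
  # accessModes:
  #   - ReadWriteOnce
  # size: 8Gi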
As soon as I install the chart with helm install mariadb-galera-cluster -f mariadb-galera.values.yaml bitnami/mariadb-galera --namespace database and describe the pod, I get the error message:
Readiness probe failed: mysqladmin: connect to server at 'localhost' failed error: 'Can't connect to local MySQL server through socket '/opt/bitnami/mariadb/tmp/mysql.sock' (2)'
Check that mysqld is running and that the socket: '/opt/bitnami/mariadb/tmp/mysql.sock' exists!
Checking inside the container confirms that mysql.sock really is not present at that location.
I am using an NFS provisioner for the persistent storage, which works fine: on my NFS server I can see the directory being created and data being written to it. The directory used by the container is mounted via NFSv3.
When I access the container and try to run the scripts "run.sh" or "entrypoint.sh" in the folder /opt/bitnami/scripts/mariadb-galera, I get the error
The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable or does not exist. Configurations based on environment variables will not be applied for this file.
but the file is right there in the folder where it should be.
As far as I can tell, all components such as the StatefulSet are created and started properly; only the container (in my case obviously called mariadb-galera-cluster-0) never finishes starting up because of the socket it cannot find.
Version of Helm:
version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"clean", GoVersion:"go1.15.11"}
Version of Kubernetes:
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:31:21Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.0", GitCommit:"cb303e613a121a29364f75cc67d3d580833a7479", GitTreeState:"clean", BuildDate:"2021-04-08T16:25:06Z", GoVersion:"go1.16.1", Compiler:"gc", Platform:"linux/amd64"}
Contents of my values.yaml file:
## Please, note that this will override the image parameters, including dependencies, configured to use the global v$
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
# global:
# imageRegistry: myRegistryName
# imagePullSecrets:
# - myRegistryKeySecretName
# storageClass: myStorageClass
## Bitnami MariaDB Galera image
## ref: https://hub.docker.com/r/bitnami/mariadb-galera/tags/
##
image:
registry: docker.io
repository: bitnami/mariadb-galera
tag: 10.5.9-debian-10-r52
## Specify a imagePullPolicy
## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## Set to true if you would like to see extra information on logs
##
debug: false
## String to partially override common.names.fullname template (will maintain the release name)
##
# nameOverride:
## String to fully override common.names.fullname template
##
# fullnameOverride:
## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:
## Specifies the Kubernetes Cluster's Domain Name.
##
clusterDomain: cluster.local
## StatefulSet controller supports relax its ordering guarantees while preserving its uniqueness and identity guaran$
## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#pod-management-policy
##
podManagementPolicy: OrderedReady
## Deployment pod host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
hostAliases: []
## MariaDB Gallera K8s svc properties
##
service:
## Kubernetes service type and port number
##
type: ClusterIP
port: 3306
# clusterIP: None
## Specify the nodePort value for the LoadBalancer and NodePort service types.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
##
# nodePort: 30001
## Specify the externalIP value ClusterIP service type.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-ips
##
# externalIPs: []
## Set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
# loadBalancerIP:
## Load Balancer sources
## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-$
##
# loadBalancerSourceRanges:
# - 10.10.10.0/24
## Provide any additional annotations which may be required. This can be used to
## set the LoadBalancer service type to internal only.
## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
##
annotations: {}
## Headless service properties
##
headless:
## Additional annotations for headless service.
## Can be useful in case peer-finder is used in a sidecar,
## e.g.: service.alpha.kubernetes.io/tolerate-unready-endpoints="true"
##
annotations: {}
## Pods Service Account
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
## Specifies whether a ServiceAccount should be created
##
create: false
## The name of the ServiceAccount to use.
## If not set and create is true, a name is generated using the common.names.fullname template
##
name: ""
## An array to add extra environment variables
## For example:
## extraEnvVars:
## - name: TZ
## value: "Europe/Paris"
##
extraEnvVars:
## ConfigMap with extra env vars:
##
extraEnvVarsCM:
## Secret with extra env vars:
##
extraEnvVarsSecret:
## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
## Specifies whether RBAC rules should be created
##
create: false
## Pod Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
##
securityContext:
enabled: true
fsGroup: 1001
runAsUser: 1001
## Database credentials for root (admin) user
##
rootUser:
## MariaDB admin user
##
user: root
## MariaDB admin password
## Password is ignored if existingSecret is specified.
## ref: https://github.com/bitnami/bitnami-docker-mariadb-galera#setting-the-root-password-on-first-run
##
password: "ObviouslyIChangedThis"
## Option to force users to specify a password. That is required for 'helm upgrade' to work properly.
## If it is not force, a random password will be generated.
##
forcePassword: false
## Use existing secret (ignores rootUser.password, db.password, and galera.mariabackup.password)
##
# existingSecret:
## Custom db configuration
##
db:
## MariaDB username and password
## Password is ignored if existingSecret is specified.
## ref: https://github.com/bitnami/bitnami-docker-mariadb-galera#creating-a-database-user-on-first-run
##
user: ""
password: ""
## Database to create
## ref: https://github.com/bitnami/bitnami-docker-mariadb-galera#creating-a-database-on-first-run
##
name: my_database
## Option to force users to specify a password. That is required for 'helm upgrade' to work properly.
## If it is not force, a random password will be generated.
##
forcePassword: false
## Galera configuration
##
galera:
## Galera cluster name
##
name: galera
## Bootstraping options
## ref: https://github.com/bitnami/bitnami-docker-mariadb-galera#bootstraping
##
bootstrap:
## Node to bootstrap from, you will need to change this parameter in case you want to bootstrap from other node
##
bootstrapFromNode:
## Force safe_to_bootstrap in grastate.date file.
## This will set safe_to_bootstrap=1 in the node indicated by bootstrapFromNode.
##
forceSafeToBootstrap: false
## Credentials to perform backups
##
mariabackup:
## MariaBackup username and password
## Password is ignored if existingSecret is specified.
## ref: https://github.com/bitnami/bitnami-docker-mariadb-galera#setting-up-a-multi-master-cluster
##
user: mariabackup
password: ""
## Option to force users to specify a password. That is required for 'helm upgrade' to work properly.
## If it is not force, a random password will be generated.
##
forcePassword: false
## LDAP configuration
##
ldap:
## Enable LDAP support
##
enabled: false
uri: ""
base: ""
binddn: ""
bindpw: ""
bslookup:
filter:
map:
nss_initgroups_ignoreusers: root,nslcd
scope:
tls_reqcert:
## TLS configuration
##
tls:
## Enable TLS
##
enabled: false
## Name of the secret that contains the certificates
##
# certificatesSecret:
## Certificate filename
##
# certFilename:
## Certificate Key filename
##
# certKeyFilename:
## CA Certificate filename
##
# certCAFilename:
## Configure MariaDB with a custom my.cnf file
## ref: https://mysql.com/kb/en/mysql/configuring-mysql-with-mycnf/#example-of-configuration-file
## Alternatively, you can put your my.cnf under the files/ directory
mariadbConfiguration: |-
[client]
port=3306
socket=/opt/bitnami/mariadb/tmp/mysql.sock
plugin_dir=/opt/bitnami/mariadb/plugin
[mysqld]
default_storage_engine=InnoDB
basedir=/opt/bitnami/mariadb
datadir=/bitnami/mariadb/data
plugin_dir=/opt/bitnami/mariadb/plugin
tmpdir=/opt/bitnami/mariadb/tmp
socket=/opt/bitnami/mariadb/tmp/mysql.sock
pid_file=/opt/bitnami/mariadb/tmp/mysqld.pid
bind_address=0.0.0.0
## Character set
##
collation_server=utf8_unicode_ci
init_connect='SET NAMES utf8'
character_set_server=utf8
## MyISAM
##
key_buffer_size=32M
myisam_recover_options=FORCE,BACKUP
## Safety
##
skip_host_cache
skip_name_resolve
max_allowed_packet=16M
max_connect_errors=1000000
sql_mode=STRICT_TRANS_TABLES,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_AUTO_VALUE_ON_ZERO,NO_ENGINE_SUBSTI$
sysdate_is_now=1
## Binary Logging
##
log_bin=mysql-bin
expire_logs_days=14
# Disabling for performance per http://severalnines.com/blog/9-tips-going-production-galera-cluster-mysql
sync_binlog=0
# Required for Galera
binlog_format=row
## Caches and Limits
##
tmp_table_size=32M
max_heap_table_size=32M
# Re-enabling as now works with Maria 10.1.2
query_cache_type=1
query_cache_limit=4M
query_cache_size=256M
max_connections=500
thread_cache_size=50
open_files_limit=65535
table_definition_cache=4096
table_open_cache=4096
## InnoDB
##
innodb=FORCE
innodb_strict_mode=1
# Mandatory per https://github.com/codership/documentation/issues/25
innodb_autoinc_lock_mode=2
# Per https://www.percona.com/blog/2006/08/04/innodb-double-write/
innodb_doublewrite=1
innodb_flush_method=O_DIRECT
innodb_log_files_in_group=2
innodb_log_file_size=128M
innodb_flush_log_at_trx_commit=1
innodb_file_per_table=1
# 80% Memory is default reco.
# Need to re-evaluate when DB size grows
innodb_buffer_pool_size=2G
innodb_file_format=Barracuda
## Logging
##
log_error=/opt/bitnami/mariadb/logs/mysqld.log
slow_query_log_file=/opt/bitnami/mariadb/logs/mysqld.log
log_queries_not_using_indexes=1
slow_query_log=1
## SSL
## Use extraVolumes and extraVolumeMounts to mount /certs filesystem
# ssl_ca=/certs/ca.pem
# ssl_cert=/certs/server-cert.pem
# ssl_key=/certs/server-key.pem
[galera]
wsrep_on=ON
wsrep_provider=/opt/bitnami/mariadb/lib/libgalera_smm.so
wsrep_sst_method=mariabackup
wsrep_slave_threads=4
wsrep_cluster_address=gcomm://
wsrep_cluster_name=galera
wsrep_sst_auth="root:"
# Enabled for performance per https://mariadb.com/kb/en/innodb-system-variables/#innodb_flush_log_at_trx_commit
innodb_flush_log_at_trx_commit=2
# MYISAM REPLICATION SUPPORT #
wsrep_replicate_myisam=ON
[mariadb]
plugin_load_add=auth_pam
## Data-at-Rest Encryption
## Use extraVolumes and extraVolumeMounts to mount /encryption filesystem
# plugin_load_add=file_key_management
# file_key_management_filename=/encryption/keyfile.enc
# file_key_management_filekey=FILE:/encryption/keyfile.key
# file_key_management_encryption_algorithm=AES_CTR
# encrypt_binlog=ON
# encrypt_tmp_files=ON
## InnoDB/XtraDB Encryption
# innodb_encrypt_tables=ON
# innodb_encrypt_temporary_tables=ON
# innodb_encrypt_log=ON
# innodb_encryption_threads=4
# innodb_encryption_rotate_key_age=1
## Aria Encryption
# aria_encrypt_tables=ON
# encrypt_tmp_disk_tables=ON
## ConfigMap with MariaDB configuration
## NOTE: This will override mariadbConfiguration
##
# configurationConfigMap:
## Specify dictionary of scripts to be run at first boot
## Alternatively, you can put your scripts under the files/docker-entrypoint-initdb.d directory
##
# initdbScripts:
# my_init_script.sh: |
# #!/bin/sh
# echo "Do something."
## ConfigMap with scripts to be run at first boot
## Note: This will override initdbScripts
##
# initdbScriptsConfigMap:
## MariaDB additional command line flags
## Can be used to specify command line flags, for example:
##
## extraFlags: "--max-connect-errors=1000 --max_connections=155"
##
## Desired number of cluster nodes
##
replicaCount: 3
## updateStrategy for MariaDB Master StatefulSet
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
##
updateStrategy:
type: RollingUpdate
## Additional labels for MariaDB Galera pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}
## Additional annotations for MariaDB Galera pods
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
## Pod affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
##
podAffinityPreset: ""
## Pod anti-affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
##
podAntiAffinityPreset: soft
## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
## Allowed values: soft, hard
##
nodeAffinityPreset:
## Node affinity type
## Node affinity type
## Allowed values: soft, hard
##
type: ""
## Node label key to match
## E.g.
## key: "kubernetes.io/e2e-az-name"
##
key: ""
## Node label values to match
## E.g.
## values:
## - e2e-az1
## - e2e-az2
##
values: []
## Affinity for pod assignment. Evaluated as a template.
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
##
affinity: {}
## Node labels for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}
## Tolerations for pod assignment. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
tolerations: []
## Enable persistence using Persistent Volume Claims
## ref: http://kubernetes.io/docs/user-guide/persistent-volumes/
##
persistence:
## If true, use a Persistent Volume Claim, If false, use emptyDir
##
enabled: true
# Enable persistence using an existing PVC
existingClaim: dbstorage
# Subdirectory of the volume to mount
# subPath:
mountPath: /bitnami/mariadb
## selector can be used to match an existing PersistentVolume
## selector:
## matchLabels:
## app: my-app
##
selector: {}
## Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: "nfs-storage"
## Persistent Volume Claim annotations
##
annotations:
## Persistent Volume Access Mode
##
#accessModes:
# - ReadWriteOnce
## Persistent Volume size
##
#size: 8Gi
## Priority Class Name
#
# priorityClassName: 'priorityClass'
## Additional init containers
##
extraInitContainers: []
# - name: do-something
# image: bitnami/minideb
# command: ['do', 'something']
## Additional containers
##
extraContainers: []
## extraVolumes and extraVolumeMounts allows you to mount other volumes
## Example Use Cases:
## mount certificates to enable data-in-transit encryption
## mount keys for data-at-rest encryption using file plugin
# extraVolumes:
# - name: mariadb-certs
# secret:
# defaultMode: 288
# secretName: mariadb-certs
# - name: mariadb-encryption
# secret:
# defaultMode: 288
# secretName: mariadb-encryption
# extraVolumeMounts:
# - name: mariadb-certs
# mountPath: /certs
# readOnly: true
# - name: mariadb-encryption
# mountPath: /encryption
# readOnly: true
## MariaDB Galera containers' resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 0.5
# memory: 256Mi
requests: {}
# cpu: 0.5
# memory: 256Mi
## MariaDB Galera containers' liveness and readiness probes
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
enabled: true
## Initializing the database could take some time
##
initialDelaySeconds: 120
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
startupProbe:
enabled: false
## Initializing the database could take some time
##
initialDelaySeconds: 120
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
## Let's wait 600 seconds by default, it should give enough time in any cluster for mysql to init
##
failureThreshold: 48
## Pod disruption budget configuration
##
podDisruptionBudget:
## Specifies whether a Pod disruption budget should be created
##
create: false
minAvailable: 1
# maxUnavailable: 1
## Prometheus exporter configuration
##
metrics:
enabled: false
## Bitnami MySQL Prometheus exporter image
## ref: https://hub.docker.com/r/bitnami/mysqld-exporter/tags/
##
image:
registry: docker.io
repository: bitnami/mysqld-exporter
tag: 0.12.1-debian-10-r416
pullPolicy: IfNotPresent
## Optionally specify an array of imagePullSecrets.
## Secrets must be manually created in the namespace.
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
##
# pullSecrets:
# - myRegistryKeySecretName
## MySQL exporter additional command line flags
## Can be used to specify command line flags
## E.g.:
## extraFlags:
## - --collect.binlog_size
##
extraFlags: []
## MySQL Prometheus exporter containers' resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
limits: {}
# cpu: 0.5
# memory: 256Mi
requests: {}
# cpu: 0.5
# memory: 256Mi
## MySQL Prometheus exporter service parameters
##
service:
type: ClusterIP
port: 9104
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "9104"
## Prometheus Operator ServiceMonitor configuration
##
serviceMonitor:
enabled: false
## Namespace in which Prometheus is running
##
# namespace: monitoring
## Interval at which metrics should be scraped.
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
##
# interval: 10s
## Timeout after which the scrape is ended
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
##
# scrapeTimeout: 10s
## ServiceMonitor selector labels
## ref: https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#prometheus-configuration
##
selector:
prometheus: kube-prometheus
## RelabelConfigs to apply to samples before scraping
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#relabelconfig
## Value is evalued as a template
##
relabelings: []
## MetricRelabelConfigs to apply to samples before ingestion
## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#relabelconfig
## Value is evalued as a template
##
metricRelabelings: []
# - sourceLabels:
# - "__name__"
# targetLabel: "__name__"
# action: replace
# regex: '(.*)'
# replacement: 'example_prefix_$1'
## Prometheus Operator PrometheusRule configuration
##
prometheusRules:
enabled: false
## Additional labels to add to the PrometheusRule so it is picked up by the operator.
## If using the [Helm Chart](https://github.com/helm/charts/tree/master/stable/prometheus-operator) this is the $
##
selector:
app: prometheus-operator
release: prometheus
## Rules as a map.
##
rules: {}
# - alert: MariaDB-Down
# annotations:
# message: 'MariaDB instance {{ $labels.instance }} is down'
# summary: MariaDB instance is down
# expr: absent(up{job="mariadb-galera"} == 1)
# labels:
# severity: warning
# service: mariadb-galera
# for: 5m
EDIT: adding the log from the container initialization:
mariadb 13:59:39.44
mariadb 13:59:39.44 Welcome to the Bitnami mariadb-galera container
mariadb 13:59:39.45 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb-galera
mariadb 13:59:39.45 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb-galera/issues
mariadb 13:59:39.46
mariadb 13:59:39.47 INFO ==> ** Starting MariaDB setup **
mariadb 13:59:40.30 INFO ==> Validating settings in MYSQL_*/MARIADB_* env vars
mariadb 13:59:40.41 DEBUG ==> Set Galera cluster address to gcomm://
mariadb 13:59:40.42 INFO ==> Initializing mariadb database
mariadb 13:59:40.43 DEBUG ==> Ensuring expected directories/files exist
mariadb 13:59:40.46 WARN ==> The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable or does not exist. Configurations based on environment variables will not be applied for this file.
mariadb 13:59:40.49 DEBUG ==> Cleaning data directory to ensure successfully initialization
Installing MariaDB/MySQL system tables in '/bitnami/mariadb/data' ...
Unfortunately, that is it. The container is now in a "CrashLoopBackOff" state; it has already restarted 7 times and will not finish. On my NFS server I can see that the "data" directory is created and contains 2 files, "aria_log_control" and "mysql-bin.index". The my.cnf appears to be owned by root, and as far as I can tell from the documentation the container runs as a non-root user, so the file would not be writable. But it is definitely there; I checked that several times. If I describe the pod I get the error message mentioned above, "Readiness probe failed".
END EDIT
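To double-check the ownership and permissions mentioned above, this is the kind of check I can run against the pod from outside (a sketch; the pod name and namespace come from the commands above, the paths from the error messages):
# which user the container actually runs as
kubectl exec -n database mariadb-galera-cluster-0 -- id
# ownership and permissions of the configuration file the init script complains about
kubectl exec -n database mariadb-galera-cluster-0 -- ls -l /opt/bitnami/mariadb/conf/my.cnf
# ownership of the socket directory and the NFS-backed data directory
kubectl exec -n database mariadb-galera-cluster-0 -- ls -ld /opt/bitnami/mariadb/tmp /bitnami/mariadb/data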
EDIT 2: here are the details of the readiness and liveness probes.
## MariaDB Galera containers' liveness and readiness probes
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
enabled: true
## Initializing the database could take some time
##
initialDelaySeconds: 120
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
enabled: true
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
startupProbe:
enabled: false
## Initializing the database could take some time
##
initialDelaySeconds: 120
periodSeconds: 10
timeoutSeconds: 1
successThreshold: 1
## Let's wait 600 seconds by default, it should give enough time in any cluster for mysql to init
##
failureThreshold: 48
END EDIT
EDIT 3: I have no choice but to respond by editing my original post; I would otherwise exceed the character limit of Stack Overflow. END EDIT
I would be grateful for any help or hint you can provide to get the cluster up and running.
realshadow
So I followed your advice @WytrzymałyWiktor and disabled the readiness and liveness probes. That way all 3 instances are spun up right away. The problem is that they are not communicating with each other. So despite the pods running, they are not actually working or talking to each other.
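For reference, disabling the probes boils down to this in the values file, following the structure already shown above (everything else left unchanged):
livenessProbe:
  enabled: false
readinessProbe:
  enabled: false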
I am working on setting up the firewall on my server right now and it is driving me crazy. I am using nftables and have the following ruleset:
table inet filter {
    map whitelist {
        type ipv4_addr . inet_service : verdict
        elements = { 192.168.1.x . ssh : accept,
                     192.168.1.y . ssh : accept,
                     192.168.1.z . ssh : accept }
    }
    chain input {
        type filter hook input priority 0; policy accept;
        ct state established,related accept
        iifname "lo" accept
        tcp dport http ip saddr { 192.168.1.0/24 } accept comment "Accept HTTP traffic on PORT 80"
        tcp dport netbios-ns ip saddr { 192.168.1.0/24 } accept comment "Accept NetBIOS Name Service (nmbd) on PORT 137"
        tcp dport netbios-dgm ip saddr { 192.168.1.0/24 } accept comment "Accept NetBIOS Datagram Service (nmbd) on PORT 138"
        tcp dport netbios-ssn ip saddr { 192.168.1.0/24 } accept comment "Accept NetBIOS Session Service (smbd) on PORT 139"
        tcp dport https ip saddr { 192.168.1.0/24 } accept comment "Accept HTTPS traffic on PORT 443"
        tcp dport microsoft-ds ip saddr { 192.168.1.0/24 } accept comment "Accept Microsoft Directory Services (smbd) on PORT 445"
        tcp dport webmin ip saddr { 192.168.1.0/24 } accept comment "Accept traffic for WebMin Interface on PORT 10000"
        udp dport netbios-ns ip saddr { 192.168.1.0/24 } accept comment "Accept NetBIOS Name Service (nmdb) on PORT 137"
        udp dport netbios-dgm ip saddr { 192.168.1.0/24 } accept comment "Accept NetBIOS Datagram Service (nmbd) on PORT 138"
        udp dport netbios-ssn ip saddr { 192.168.1.0/24 } accept comment "Accept NetBIOS Session Service (nmdb) on PORT 139"
        udp dport microsoft-ds ip saddr { 192.168.1.0/24 } accept comment "Accept Microsoft Directory Service (smbd) on PORT 445"
        meta nfproto ipv4 ip saddr . tcp dport vmap @whitelist
        drop
    }
    chain output {
        type filter hook output priority 0; policy accept;
    }
    chain forward {
        type filter hook forward priority 0; policy drop;
    }
}
I made sure the network range defined above is correct. The range covers 254 addresses, so all my machines should be covered. I have no issues with my main machine or one other machine; these two have the IPs 192.168.1.42 and 192.168.1.181. But one other machine drives me crazy: as soon as the drop rule is added, that machine with the IP 192.168.1.115 cannot access the server anymore.
My question: since I just can't figure out why this one machine can no longer access the data on the server, is there anything obvious that would explain why this access does not work? What am I missing?
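To narrow this down, one change I could make for debugging, assuming the rest of the ruleset stays the same, is to replace the bare drop at the end of the input chain with a logging and counting variant, so that packets dropped from 192.168.1.115 show up in the kernel log:
chain input {
    type filter hook input priority 0; policy accept;
    # ... existing rules unchanged ...
    meta nfproto ipv4 ip saddr . tcp dport vmap @whitelist
    # log and count whatever is about to be dropped, then drop it
    log prefix "nft input drop: " counter drop
}
The logged packets then appear in dmesg / journalctl -k, which should reveal which port or protocol the .115 machine is being blocked on.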
thanks
realshadow
I have a question and I hope you can help. I have an email server set up and running. A few days ago the following updates were installed:
2019-08-15 22:23:40 upgrade php-common:all 2:69+0~20190303094804.15+stretch~1.gbp0f7465 2:70+0~20190814.17+debian9~1.gbp1e7da2
2019-08-15 22:23:44 upgrade php-igbinary:amd64 3.0.1+2.0.8-1+0~20190503122633.10+stretch~1.gbp63e2f2 3.0.1+2.0.8-2+0~20190814.12+debian9~1.gbpaafd11
2019-08-15 22:23:46 upgrade php-imagick:amd64 3.4.4-1+0~20190808.10+debian9~1.gbpc5da26 3.4.4-1+0~20190814.12+debian9~1.gbpc5da26
2019-08-15 22:23:48 upgrade php-msgpack:amd64 2.0.3+0.5.7-1+0~20190220080019.9+stretch~1.gbp75b3fa 2.0.3+0.5.7-2+0~20190814.11+debian9~1.gbpb26058
2019-08-15 22:23:49 upgrade php-memcached:amd64 3.1.3+2.2.0-1+0~20190606080312.9+stretch~1.gbpb26597 3.1.3+2.2.0-2+0~20190814.14+debian9~1.gbp5d60d1
2019-08-15 22:23:51 upgrade tzdata:all 2019a-0+deb9u1 2019b-0+deb9u1
2019-08-15 22:23:53 upgrade php:all 2:7.3+69+0~20190303094804.15+stretch~1.gbp0f7465 2:7.3+70+0~20190814.17+debian9~1.gbp1e7da2
2019-08-15 22:23:54 upgrade php-mysql:all 2:7.3+69+0~20190303094804.15+stretch~1.gbp0f7465 2:7.3+70+0~20190814.17+debian9~1.gbp1e7da2
After these updates I realized I had trouble accessing email, calendar and contacts, which are synchronized through z-push (ActiveSync) to my devices. Before these updates everything was working fine. I uninstalled PHP completely from the system and reinstalled it, as suggested in the post here on the Nextcloud forum. The updated system has only been running for a few minutes now, but I already ran into the same issue again:
dovecot: auth-worker(16752): Error: mysql(127.0.0.1): Connect failed to database (postfix): Can't connect to MySQL server on '127.0.0.1' (111 "Connection refused") - waiting for 1 seconds before retry
My database processes between 1,000 and 5,000 queries a second, which, if I read my logs right, stem mostly from Nextcloud.
I would appreciate any feedback on how I can optimize my system so my email functionality is not impacted by any activity going on in the database.
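Since the error is a plain "Connection refused", the first thing I would check is whether MySQL/MariaDB is actually up and listening on 127.0.0.1:3306 at the moment dovecot complains, and whether it is being restarted or running out of connections. A sketch of standard checks; the service may be called mysql or mariadb depending on the install:
# is the database server process up?
systemctl status mariadb
# is anything listening on the MySQL port on localhost?
ss -lntp | grep 3306
# was the database restarted (e.g. by the package upgrade) around the time of the errors?
journalctl -u mariadb --since "1 hour ago"
# current connections vs. the configured maximum
mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Threads_connected'; SHOW GLOBAL VARIABLES LIKE 'max_connections';"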