I can add users to the cluster-role "cluster-admin" with:
oc adm policy add-cluster-role-to-user cluster-admin <user>
But how can I list all users with the role cluster-admin?
Environment: OpenShift 3.x
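For reference, a minimal sketch of what listing those users can look like on OpenShift 3.x (exact binding names and output columns vary by version):

# Show the subjects (users, groups, service accounts) bound to cluster-admin
oc describe clusterrolebinding cluster-admin

# Or scan all cluster role bindings for the role name
oc get clusterrolebindings | grep cluster-admin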
I want to integrate my existing Jenkins pipelines (on an external server) with OpenShift. I have found many commands for manipulating OpenShift from Jenkins, but what I want to achieve is what the following photo shows: Jenkins pipeline output displayed inside OpenShift. Any ideas?
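One common approach on OpenShift 3.x is a JenkinsPipelineStrategy BuildConfig, which the OpenShift web console can render with per-stage pipeline output; a minimal sketch, where the BuildConfig name and the Jenkinsfile contents are placeholders:

# Hypothetical example: a pipeline build whose stages the
# OpenShift console displays alongside the Jenkins output
oc create -f - <<EOF
apiVersion: v1
kind: BuildConfig
metadata:
  name: my-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node {
          stage('build') {
            echo 'building...'
          }
        }
EOF

Wiring this up to an external Jenkins (rather than the in-cluster template) is a separate configuration step, so treat the above only as an illustration of the mechanism.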
I'm trying to create a production-ready OpenShift Origin environment in AWS. I have experience with Kubernetes and CoreOS, and kube-aws makes things easy: you generate assets, run the CloudFormation template, and you are all set. Nodes are set up in an auto-scaling group with user data. Now, if I want to do something similar with OpenShift Origin, how do I do that? I want HA as well. Any working guides to get an idea? Running Ansible every time to provision a new node just doesn't work for me; a node should bootstrap itself at boot time. Thanks
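For illustration only, the self-bootstrapping the question asks for usually means baking a node image ahead of time and having user data join the cluster at boot; a rough sketch under those assumptions (the S3 bucket and the pre-baked AMI are hypothetical):

#!/bin/bash
# Hypothetical EC2 user data: configure and start the node at boot
# instead of running Ansible from a control host on every scale-up.
set -euo pipefail

# Fetch node configuration prepared ahead of time (bucket name is a placeholder)
aws s3 cp s3://my-openshift-config/node-config.yaml /etc/origin/node/node-config.yaml

# Start the pre-installed origin-node service (assumes the AMI already
# has the OpenShift Origin node packages baked in)
systemctl enable --now origin-node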
I have tried to run the guestbook example from the Kubernetes GitHub repository, but I can't reach the service from my local host. My test environment consists of two virtual machines (CentOS 7) provisioned by CloudStack, with OpenShift Origin installed on them. Here is the services list:
[root@openshift-master amd64]# ./oc get svc
NAME              CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry   172.30.39.251    <none>        5000/TCP                  1d
guestbook         172.30.55.125    nodes         3000/TCP                  56m
kubernetes        172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     1d
redis-master      172.30.24.94     <none>        6379/TCP                  1h
redis-slave       172.30.132.250   <none>        6379/TCP                  1h
router            172.30.33.117    <none>        80/TCP,443/TCP,1936/TCP   1d
The exposed service is guestbook. Here is its description:
[root@openshift-master amd64]# ./oc describe svc guestbook
Name:             guestbook
Namespace:        default
Labels:           app=guestbook
Selector:         app=guestbook
Type:             NodePort
IP:               172.30.55.125
Port:             <unset> 3000/TCP
NodePort:         <unset> 30642/TCP
Endpoints:        172.17.0.6:3000,172.17.0.7:3000,172.17.0.8:3000
Session Affinity: None
No events.
If I do:
curl 172.30.55.125:3000
It works only from the node that hosts the guestbook pod; from the other nodes in the cluster and from my host machine (192.168.1.2) it doesn't work.
I opened all ports in CloudStack (otherwise I can't SSH into the nodes), and on the node I set this firewall rule:
firewall-cmd --permanent --zone=public --add-port=30642/tcp
30642 is the NodePort, which is what must be used to reach the service from outside the cluster. Any idea how to resolve this? Thanks in advance.
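As a sketch of what a working test usually looks like (the node IP is a placeholder), note that a --permanent firewalld rule only takes effect after a reload, and that a NodePort is served on every node's own IP rather than on the cluster IP:

# Apply the permanent rule to the running firewall
firewall-cmd --reload

# A NodePort listens on each node's IP, so from outside the cluster
# this should work against any node (replace <node-ip> accordingly)
curl http://<node-ip>:30642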
I'm a bit confused about persistent volumes in OpenShift. What happens if a pod with a persistent volume fails: is that volume lost forever? Is it possible to migrate a volume to another pod when the original pod fails? If so, which kinds of persistent volumes support migration?
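For context, a persistent volume claim has a lifecycle independent of any pod, so a replacement pod can mount the same claim; a minimal sketch, where the claim name is a placeholder and the backing storage must be reachable from whichever node the new pod lands on (network storage such as NFS or EBS, rather than a hostPath tied to one node):

# Hypothetical example: a claim that outlives the pods using it
oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# Any replacement pod can then reference the same claim by name
# in its volumes section:
#   volumes:
#   - name: data
#     persistentVolumeClaim:
#       claimName: my-data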