I'm trying to create a production-ready OpenShift Origin environment in AWS. I have experience with Kubernetes and CoreOS, and kube-aws makes things easy: you generate assets, run the CloudFormation template, and you are all set. Nodes are set up in an auto-scaling group with user data. Now, if I want to do something similar with OpenShift Origin, how do I do that? I want HA as well, of course. Are there any working guides to get an idea? Running Ansible every time to provision a new node just doesn't work for me; a node should bootstrap itself at boot time. Thanks
Dmytro Leonenko's questions
I have a cluster of 8 nodes in EC2, 4 per AZ, with cluster.routing.allocation.awareness.attributes: aws_availability_zone. I want to migrate all the shards to a different node type, so I set up two new nodes and added them to the cluster. Some shards are now moving to the new nodes. The end goal is to shut down all the old nodes, with the shards split between the two new nodes. What's the best way to do that without shutting down the old nodes one by one and having shards placed on nodes that are about to be shut down anyway?
EDIT: I suppose "cluster.routing.allocation.exclude._ip": "x.x.x.x, y.y.y.y, z.z.z.z" should work for me?
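For reference, a hedged sketch of how that exclude setting is usually applied through the cluster settings API (assuming the cluster listens on localhost:9200; the x.x.x.x placeholders stand for the old nodes' IPs, comma-separated without spaces):

```shell
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.exclude._ip": "x.x.x.x,y.y.y.y,z.z.z.z"
  }
}'
```

Once relocation finishes and the excluded nodes hold no shards, they can be shut down without triggering another round of recovery.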
Here's a challenging question for you. I have a Linux box and need to create a directory where users can create files but remove/modify only the files they created themselves. Simple enough: set the sticky bit and that's it. But we also want a particular admin user to be able to remove files from this directory without being root. How can that be done? NFSv4 ACLs are available there, but I'm fairly sure they won't help. Ideas? Users: user1:uploaders, user2:uploaders, admin1:admins <--- should be able to manage files in the group dir
Setting setgid on the dir makes it possible to protect files from being edited by other users, but nothing stops a user from deleting other users' files. That's the problem.
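For concreteness, a minimal sketch of the baseline setup described above (setgid so new files inherit the group, sticky so users can delete only their own files); the open problem is layering admin delete rights on top of this. The directory path is created on the fly here, and the chgrp to the `uploaders` group from the question is left commented out as it assumes that group exists:

```shell
demo_dir=$(mktemp -d)
# chgrp uploaders "$demo_dir"   # 'uploaders' group from the question
chmod 3770 "$demo_dir"          # 2000 setgid + 1000 sticky + 0770 rwx for owner/group
stat -c '%a' "$demo_dir"        # prints 3770
```

With this in place, root (or a CAP_FOWNER process) can still delete anything, which is exactly why the non-root admin case is the hard part.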
UPDATE 1:
The question is about FS permissions and nfs4_acls because the users work with the files over SFTP, so sudo and other scripted workarounds are not possible. What is possible is to use LD_PRELOAD with sftp-server and override the unlink syscall, or something like that, so it comes down to OpenSSH and sftp-server.
UPDATE 2:
The users are chrooted by OpenSSH to the directory in question, which must be owned root:root for the chroot to work. All the files are put into this directory without any structure (app-specific). The admin is actually not a single user but a group of admin users who manage the uploaded files.
How can one explain multiple occurrences of the /dev/root device, containing different filesystems, in /proc/mounts?
# cat /proc/mounts
rootfs / rootfs rw 0 0
/dev/root / ext3 rw,data=ordered 0 0
/dev /dev tmpfs rw 0 0
/proc /proc proc rw 0 0
/sys /sys sysfs rw 0 0
/proc/bus/usb /proc/bus/usb usbfs rw 0 0
devpts /dev/pts devpts rw 0 0
/dev/hda1 /boot ext3 rw,data=ordered 0 0
tmpfs /dev/shm tmpfs rw 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw 0 0
/etc/auto.misc /misc autofs rw,fd=7,pgrp=2161,timeout=300,minproto=5,maxproto=5,indirect 0 0
-hosts /net autofs rw,fd=13,pgrp=2161,timeout=300,minproto=5,maxproto=5,indirect 0 0
nfsd /proc/fs/nfsd nfsd rw 0 0
/dev/root /var/lib/nfs ext3 rw,data=ordered 0 0
OK, here we have two: one for the / partition (/proc/cmdline: ro root=/dev/VolGroup00/LogVol00) and one for /var/lib/nfs. BUT:
# ls -la /var/lib/nfs
total 68
drwxr-xr-x 8 root root 4096 Aug 7 09:45 .
drwxr-xr-x 32 root root 4096 Aug 3 09:32 ..
-rw-r--r-- 1 root root 0 Feb 25 16:26 etab
-rw-r--r-- 1 root root 0 Feb 25 16:26 rmtab
drwxr-xr-x 2 root root 4096 Feb 25 16:26 rpc_pipefs
drwxr-xr-x 2 root root 4096 Aug 7 08:03 sm
drwxr-xr-x 2 root root 4096 Aug 7 08:03 sm.bak
drwxr-xr-x 4 root root 4096 Aug 7 09:45 sm.ha
drwx------ 4 rpcuser rpcuser 4096 Aug 3 06:52 statd
-rw------- 1 root root 0 Feb 25 16:26 state
drwxr-xr-x 2 root root 4096 Feb 25 16:26 v4recovery
-rw-r--r-- 1 root root 0 Feb 25 16:26 xtab
The contents of the root filesystem and of /var/lib/nfs are different. How can one block device, /dev/root, reflect different filesystems? And why is /dev/root used for /var/lib/nfs at all?
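One quick sanity check here is to compare the device numbers stat reports for the two mount points; if /var/lib/nfs were, say, a bind mount of a directory on the root filesystem, the numbers would match even though the visible contents differ. A sketch (using / and /proc as stand-in paths that exist everywhere; substitute /var/lib/nfs from the listing above):

```shell
# Print each path with the hex device number of the filesystem backing it:
stat -c '%n %D' / /proc
```

Identical %D values mean the same underlying filesystem; different values mean /proc/mounts is showing two genuinely distinct filesystems under the same /dev/root name.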
I have two subnets routed to my server by my ISP, and only one gateway IP. The gateway is on the same VLAN as my IP address. For example, network 1 is 1.0.0.0/24 and network 2 is 2.0.0.0/24; both are routed to eth0 by my ISP. The gateway is 1.0.0.1 and my host IP is 2.0.0.1/24 (eth0). So I can configure the default gateway manually with
ip route add default dev eth0
ip route add default via 1.0.0.1
and then the internet connection works properly. How do I configure this in /etc/sysconfig/network-scripts/ifcfg-eth0?
I tried setting GATEWAY=1.0.0.1, but it doesn't work. I also tried setting GATEWAY and GATEWAYDEV in /etc/sysconfig/network, but that only does what the first command in the listing above does.
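As a hedged sketch (RHEL-style initscripts; exact file support may vary by release): instead of GATEWAY=, the two manual routes above can usually be made persistent with a route-eth0 file using ip route syntax, applied in order by ifup, with an on-link route to the gateway first since 1.0.0.1 is outside the host's 2.0.0.1/24 subnet:

```shell
# /etc/sysconfig/network-scripts/route-eth0
1.0.0.1 dev eth0              # make the gateway reachable on-link first
default via 1.0.0.1 dev eth0
```

This reproduces the effect of the two manual ip route commands at every boot.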
I have RHEL 6.1 with PHP. I've installed libmemcached-1.0.2 from the src.rpm and compiled php-memcached 2.0.0b2. If I call setSaslAuthData('user', 'pass') and write something to memcached (a Couchbase server), I always end up with return code 5 (WRITE FAILURE). Commenting setSaslAuthData out makes it work with the default bucket, but I need to get it working with SASL. Any ideas on what's wrong with my setup?
P.S. The binary protocol is, of course, ON.
I need to configure a 3-node cluster with a shared GFS2 filesystem; GFS2 is needed as a common DocumentRoot for Apache behind the load balancer. Can you suggest a guide on how to configure corosync + pacemaker + GFS2 on RHEL/SL 6.1? BTW, I don't need DRBD, as I have iSCSI as the shared block device.
I have a mostly simple nginx configuration, but I can't get caching to work:
http {
...
server_tokens off;
proxy_hide_header X-Powered-By;
fastcgi_hide_header X-Powered-By;
client_header_timeout 1024;
client_body_timeout 1024;
send_timeout 9000;
proxy_read_timeout 4000;
connection_pool_size 256;
client_header_buffer_size 1k;
client_max_body_size 10m;
large_client_header_buffers 2 4k;
request_pool_size 4k;
proxy_buffers 8 32k;
proxy_buffering off;
proxy_buffer_size 32k;
server_names_hash_bucket_size 64;
output_buffers 3 16k;
postpone_output 1460;
sendfile off;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 30 100;
ignore_invalid_headers off;
log_format custom '$host $uri $remote_addr [$time_local] $status $bytes_sent [$request]';
proxy_cache_path /var/cache/nginx/cache levels=1:2 keys_zone=melco:500m inactive=15m max_size=1000m;
proxy_temp_path /var/cache/nginx/temp;
...
server {
.....
location = /rss.php {
access_log /var/log/nginx/rss.php.log custom;
proxy_cache melco;
proxy_cache_key "$host$request_uri$args";
proxy_ignore_headers "Cache-Control" "Expires";
proxy_cache_min_uses 1;
proxy_cache_valid 200 302 304 5m;
proxy_cache_use_stale http_502 http_503 http_504;
proxy_hide_header Set-Cookie;
proxy_pass http://192.168.10.102;
proxy_redirect off;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header Host $host;
}
...
The headers are:
# curl -I http://mysite.com/rss.php
HTTP/1.1 200 OK
Server: nginx
Date: Sun, 10 Apr 2011 15:45:54 GMT
Content-Type: text/xml; charset=windows-1251
Connection: keep-alive
Keep-Alive: timeout=100
X-Powered-By: PHP/5.3.3-7+squeeze1
Pragma: no-cache
Generator: Nucleus CMS
Etag: "f263dc8eb016ffcb6d34b317b8d5a315"
Vary: Accept-Encoding
I can see requests in /var/log/nginx/rss.php.log, but /var/cache/nginx/cache is always empty. Permissions on /var/cache/nginx/cache are set to www-data:www-data (the nginx user:group). Any ideas? Also, how can I see in the access log whether a request was served from the cache?
P.S. nginx version:
# nginx -V
nginx version: nginx/0.7.67
TLS SNI support enabled
configure arguments: --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-debug --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-ipv6 --with-mail --with-mail_ssl_module --add-module=/tmp/buildd/nginx-0.7.67/modules/nginx-upstream-fair
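On the logging part of the question: later nginx releases (0.8.3 and up, so not the 0.7.67 build shown here) provide the $upstream_cache_status variable, which could be appended to the custom log format from the config above. A sketch:

```shell
# nginx.conf (http block), on nginx >= 0.8.3:
log_format custom '$host $uri $remote_addr [$time_local] $status '
                  '$bytes_sent [$request] cache:$upstream_cache_status';
```

It logs values such as MISS, HIT, EXPIRED, or BYPASS per request.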
I want to know if it's possible to configure Apache to authenticate against Active Directory so that users get single sign-on. I know Internet Explorer can use it; it's called Windows Authentication.
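For what it's worth, the usual approach on the Apache side is Kerberos/SPNEGO via a module such as mod_auth_kerb, which is what Internet Explorer's Windows Authentication negotiates. A minimal, hedged sketch (the realm, keytab path, and protected location are placeholders, and the Apache host needs a service principal in AD):

```shell
# Apache vhost fragment, with mod_auth_kerb loaded:
<Location /secure>
    AuthType Kerberos
    AuthName "AD Single Sign-On"
    KrbAuthRealms EXAMPLE.COM
    KrbServiceName HTTP
    Krb5Keytab /etc/apache2/http.keytab
    KrbMethodNegotiate On
    KrbMethodK5Passwd Off
    Require valid-user
</Location>
```

With KrbMethodNegotiate on, a domain-joined browser that trusts the site can authenticate silently; non-Kerberos clients get a 401.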
I've enabled the Apache DAV module on my site and configured digest authentication for it. Now I'm trying to map a Windows drive to it. The command is:
net use z: http://dav.mysite.com/Files /user:username *
It then asks for the password, after which the drive appears to be connected, except for one detail: in the server logs I can see strange 401 errors:
xx.xx.xx.xx - - [22/Mar/2011:23:05:04 +0000] "PROPFIND /Files HTTP/1.0" 401 751
xx.xx.xx.xx - username [22/Mar/2011:23:05:04 +0000] "PROPFIND /Files HTTP/1.0" 301 495
xx.xx.xx.xx - - [22/Mar/2011:23:05:04 +0000] "PROPFIND /Files/ HTTP/1.0" 401 751
xx.xx.xx.xx - username [22/Mar/2011:23:05:04 +0000] "PROPFIND /Files/ HTTP/1.0" 207 1175
xx.xx.xx.xx - - [22/Mar/2011:23:05:07 +0000] "PROPFIND /Files HTTP/1.0" 401 751
xx.xx.xx.xx - username [22/Mar/2011:23:05:07 +0000] "PROPFIND /Files HTTP/1.0" 301 495
xx.xx.xx.xx - - [22/Mar/2011:23:05:07 +0000] "PROPFIND /Files/ HTTP/1.0" 401 751
xx.xx.xx.xx - username [22/Mar/2011:23:05:07 +0000] "PROPFIND /Files/ HTTP/1.0" 207 1175
As you can see, for every properly digest-authenticated request it first sends one unauthenticated request.
My apache config:
<VirtualHost xx.xx.xx.xx:80>
ServerAdmin [email protected]
ServerName dav.dav.mysite.com
DocumentRoot /var/www/dav.mysite.com/
UseCanonicalName Off
Alias /Files "/var/www/dav.mysite.com/"
BrowserMatch "Microsoft Data Access Internet Publishing Provider" redirect-carefully
BrowserMatch "MS FrontPage" redirect-carefully
BrowserMatch "^WebDrive" redirect-carefully
BrowserMatch "^WebDAVFS/1.[0123]" redirect-carefully
BrowserMatch "^gnome-vfs/1.0" redirect-carefully
BrowserMatch "^XML Spy" redirect-carefully
BrowserMatch "^Dreamweaver-WebDAV-SCM1" redirect-carefully
BrowserMatch "MSIE" AuthDigestEnableQueryStringHack=On
<Directory "/var/www/dav.mysite.com">
Dav On
Order allow,deny
Allow from all
AuthType Digest
AuthName "DAV-upload"
AuthDigestDomain /Files/
AuthDigestProvider file
AuthUserFile /var/www/webdav.passwd
Require valid-user
</Directory>
# Possible values include: debug, info, notice, warn, error, crit,
# alert, emerg.
LogLevel error
ErrorLog /var/log/apache2/dav.dav.mysite.com-error.log
CustomLog /var/log/apache2/dav.dav.mysite.com.log common
ServerSignature Off
</VirtualHost>
And it works very slowly. Why do you think it sends requests without authentication? BTW, other WebDAV clients work properly.
P.S. nginx is sitting in front of Apache and passing ALL the traffic through to it.
I have a Debian Linux server that serves several PHP sites. Today I received mail from the datacenter saying my server is sending spam, with the spam message attached. It really was a message from my server, and I managed to find it in the exim4 mainlog. Question: how do I identify where the vulnerability in the PHP code is and which of my sites (I have 3) sent this mail? I've already chrooted one site and disabled PHP mail() and all exec, system, etc. functions for it, but I'm not sure that it is the site sending the mail. Is there any way to log the message body for all outgoing mail?
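One commonly used way to trace this, assuming PHP >= 5.3 and that the spam goes out through mail(): enable PHP's built-in mail logging in php.ini, which records the calling script for every message (the log path is a placeholder and must be writable by the web server user):

```shell
; php.ini -- tag and log every mail() call
mail.add_x_header = On             ; adds an X-PHP-Originating-Script header to each mail
mail.log = /var/log/php-mail.log   ; logs the script path and line for each mail() call
```

The X-PHP-Originating-Script header in a captured spam message, or the script paths in the log, then point directly at the compromised site and file.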
I installed Debian Squeeze on my server several days ago. During the install, the installer asked me to provide a USB flash drive with the firmware aic94xx-seq.fw (the file was dropped from the installer, likely due to licensing; you can find it on the Adaptec site: http://www.adaptec.com/en-us/speed/scsi/linux/aic94xx-seq-30-1_tar_gz.htm). All went fine. Today I installed all updates to my system with "U" in aptitude. Aptitude installed the 2.6.32-5 kernel update and created the initrd accordingly. But now my system can't boot, because it can't find the LVM volumes on the hard drive connected to the Adaptec RAID card. How can I boot my system now? I have the USB stick with the firmware and a netboot CD. Unfortunately, when I tried to edit the boot entries in GRUB, I found that my old kernel is gone; the only kernel GRUB sees is the new vmlinuz with the new initrd. How can I get my server back up?
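A hedged sketch of the usual recovery path (boot the netboot CD in rescue mode and chroot into the installed system; the mount points, USB path, and exact kernel version string are assumptions):

```shell
# From the rescue shell, after the rescue mode has mounted the root FS:
chroot /target /bin/bash                 # /target is where Debian rescue mounts the root FS
mount /boot                              # only if /boot is a separate partition
cp /media/usb/aic94xx-seq.fw /lib/firmware/   # firmware from the USB stick
update-initramfs -u -k 2.6.32-5-amd64    # rebuild the initrd, now with the firmware present
exit
```

With the firmware inside the regenerated initrd, the aic94xx driver can bring up the controller early enough for LVM to find its volumes.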
I've installed a Solaris 11 Express VM and installed VMware Tools on it. ESX reports "VMware Tools: OK", but I can't see the vmxnet3 adapter in "ifconfig -a".
Ideas?
I want to set up multiple similar containers and have de-duplication. The only solution I can think of is to use OpenSolaris ZFS, share a ZFS volume via NFSv4 or an iSCSI target, and create a regular ext3 fs on it to use as OpenVZ VE storage.
Are there any other solutions for de-duplication?
What are your thoughts on this approach? Pros/cons?
P.S. I've tried ZFS-FUSE, and it consumes a lot of CPU even without significant container usage; a bad idea anyway, and certainly bad for production. The native Linux ZFS port is very unstable right now.
I'm setting up OpenVZ to isolate several of my sites from each other. I have one external IP address which I use to serve my sites. I'm using nginx to proxy HTTP requests to the nginx instance in each container, which then proxies them to Apache or FastCGI and serves its static content. I want something similar to nginx for FTP: a proxy that forwards requests to the corresponding FTP server inside a container based on hostname. Is virtual hosting based on the DNS hostname possible in the FTP protocol?
P.S. Of course, I could point the FTP server's configuration in the global zone at the corresponding path under /vz/private/..., but that's not a very elegant solution. What are the best practices for shared hosting?