We are building security boundaries for our internal teams and would like to limit their ability to deploy services in public subnets. I can build a boundary policy that keeps EC2 instances out of public subnets, but this only covers the EC2 service. Is there a way to block all services, existing or future, from being deployed in a specific subnet?
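For reference, the EC2-only boundary I have today is roughly this SCP (a minimal sketch; the subnet IDs are placeholders for our public subnets):
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyRunInstancesInPublicSubnets",
    "Effect": "Deny",
    "Action": "ec2:RunInstances",
    "Resource": "arn:aws:ec2:*:*:subnet/*",
    "Condition": {
      "StringEquals": { "ec2:SubnetID": ["subnet-PUBLIC1", "subnet-PUBLIC2"] }
    }
  }]
}
The problem is that every other service would need its own deny statement with its own condition keys, which is exactly what I am trying to avoid.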
Sergei's questions
We have a Windows domain spread across multiple sites and we are using Ansible to orchestrate the Windows rebuild process. During rebuilds we observe some Kerberos-related issues that we suspect may be due to the way our workflow works.
The rebuild process works as follows:
- Since this is a rebuild, the computer object already exists in AD
- A Kerberos ticket is created
- The rebuild starts: disks are wiped, Windows is installed and the computer is rejoined to Active Directory
- Once the computer is up and running, Ansible generates a new Kerberos ticket to connect to it
In some cases, however, Ansible fails to connect to the rebuilt server.
I am trying to understand what happens during this phase that may cause the issue. I see the process as follows:
- We obtain a TGT at the beginning of the Ansible play
- The server is rebuilt and rejoins the domain
- AD replication is still in progress, so the newly reset computer account has not yet replicated to all KDCs (DCs)
- Ansible connects to a KDC that has not yet received the update about the rejoin and uses the TGT to request a service ticket for WinRM on the new server. As a result, it gets a WinRM service ticket encrypted with the old computer account's password
- Ansible tries to connect to the new server using this ticket and the connection fails with the error 'WINRM CONNECTION ERROR: the specified credentials were rejected by the server'
To isolate the replication issue, we are configuring the Ansible Kerberos client to use the client site's DC as its KDC. This did improve the process, but we still see the error occasionally.
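On the control host we pin the KDC roughly like this in /etc/krb5.conf (a sketch; realm and DC names are placeholders):
[libdefaults]
    default_realm = EXAMPLE.COM
    dns_lookup_kdc = false    # stop MIT Kerberos from picking an arbitrary DC via DNS SRV lookups
[realms]
    EXAMPLE.COM = {
        kdc = dc1.site1.example.com           # the client site's DC
        admin_server = dc1.site1.example.com
    }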
Can someone comment on whether our assumptions and fix are correct?
Newly installed Windows Server 2016 Core, trying to change a service's startup type.
It seems that the default permissions on this service won't let me do it, even with domain admin privileges.
[servername]PS C:\tmp> subinacl /service "WinHttpAutoProxySvc" /display
WinHttpAutoProxySvc - OpenService Error : 5 Access is denied.
In the Services applet, all properties of the service are greyed out. Other services are affected too. Is it something to do with Core?
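For what it's worth, the next thing I was planning to try is dumping the service's security descriptor with the built-in sc.exe (my guess at a subinacl-free way to read the ACL; output omitted):
[servername]PS C:\tmp> sc.exe sdshow WinHttpAutoProxySvc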
I can see a different response time when using a large ping buffer size for one of our HP blade servers. When using the default buffer size, the response time is the same on all servers.
As a first troubleshooting step, I compared two servers in the same c7000 enclosure, where one server has the issue and the other does not.
Both servers are Windows 2008 R2 BL490c G7 servers in the same c7000 enclosure, using the same enclosure network module (HP 1/10Gb VC-Enet module).
When pinging with a large ping buffer, server1 gives a consistent 1-2 ms response time, while server2 gives 3-4 ms:
ping -l 65500 server1
Pinging server1 [10.100.100.2] with 65500 bytes of data:
Reply from 10.100.100.2: bytes=65500 time=1ms TTL=127
Reply from 10.100.100.2: bytes=65500 time=2ms TTL=127
Reply from 10.100.100.2: bytes=65500 time=1ms TTL=127
Reply from 10.100.100.2: bytes=65500 time=1ms TTL=127
ping -l 65500 server2
Pinging server2 [10.100.100.3] with 65500 bytes of data:
Reply from 10.100.100.3: bytes=65500 time=3ms TTL=127
Reply from 10.100.100.3: bytes=65500 time=4ms TTL=127
Reply from 10.100.100.3: bytes=65500 time=4ms TTL=127
Reply from 10.100.100.3: bytes=65500 time=4ms TTL=127
This is consistent if I ping from different sources, e.g. from blades in other enclosures.
Both servers use 2x bonded HP NC553i Dual Port FlexFabric 10Gb adapters.
I can see that the blade and network adapter firmware on the two blades differs slightly - it is actually newer on server2.
The other difference is that they are on different VLANs; however, we don't see any such delay for blades in the same VLAN on other enclosures.
I have checked that the adapters are set to autonegotiate on both servers.
Where should I look first to troubleshoot? I would like to avoid a firmware update as much as I can at this point.
I have a simple setup:
- Ubuntu 13.04 on an internal USB thumbdrive
- 4x SATA drives in a raidz zpool with some volumes, /tank/vol1 and /tank/vol2
Where does ZFS store its configuration data - is it on the member drives of the zpool? What would happen if the USB thumbdrive dies and I need to reinstall the OS and then access the data on the zpool?
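My assumption is that the pool configuration lives on the member disks themselves, so after reinstalling the OS on a new thumbdrive I would expect recovery to look roughly like this (a sketch, not yet tested):
sudo zpool import          # scan attached disks for importable pools and list them
sudo zpool import tank     # import the pool found above by name
zfs list                   # /tank/vol1 and /tank/vol2 should then mount again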
This seems like a simple problem but I am unable to fix it.
Running the dfsutil command in cmd.exe returns a result:
C:\Windows\system32>dfsutil link "\server.domain.com\DFSRootname\Sharename"Link Name="Sharename" State="OK" Timeout="1800" Target="\server1\sharename" State="ONLINE" [Site: site1] Target="\server2\sharename" State="OFFLINE" [Site: site2]
Done processing this command.
Trying to do the same in PowerShell:
PS C:\Windows\system32> $path = "\\server.domain.com\DFSRootname\Sharename"
PS C:\Windows\system32> $dfsutil = "dfsutil"
PS C:\Windows\system32> $option = "link"
PS C:\Windows\system32> dfsutil link $path
DFS Utility Version 5.2 (built on 5.2.3790.3959)
Copyright (c) Microsoft Corporation. All rights reserved.
Unrecognized option "ink"
Same when using Invoke-Expression:
PS C:\Windows\system32> Invoke-Expression "$dfsutil $option $path"
DFS Utility Version 5.2 (built on 5.2.3790.3959)
Copyright (c) Microsoft Corporation. All rights reserved.
Unrecognized option "ink"
We are planning to introduce version control for our servers team so we can keep our config files and code neat.
The server base is mostly Windows with some Linux, spread across several continents.
The main purpose of the project is to take control (no pun intended) of the sprawl of config scripts and to keep config files tidy.
I am wondering if there are existing best practices for structuring the repository. Unfortunately, my google-fu fails me here. Apologies if this has been asked here already.
I could start with two repositories, 'scripts' and 'config files', then create subdirectories as I go - something like the layout sketched below. However, I am convinced this has been done many, many times before and I would rather not repeat the mistakes of others. Are there any good rules for organising data in the 'scripts' and 'config files' directories?
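To make the question concrete, this is the kind of layout I would start with (purely a first guess, names are placeholders):
scripts/
    windows/            # PowerShell, batch
    linux/              # bash, perl
    shared/             # cross-platform tooling
config-files/
    <role-or-hostname>/
        <application>/  # one directory per managed application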
As for the choice of source control system, I am leaning towards a distributed VCS (git, Mercurial), as they have built-in resilience for multi-site deployment. Some other features are important too, e.g. authentication using groups from multiple LDAP servers (i.e. AD domains) and a nice Windows GUI client to please the Windows users.
We have the following setup:
- mountserver - debian linux
- fileserver1 - Windows 2008 R2 Storage server
- fileserver2 - Celerra NS20 exporting CIFS share
- workstation - windows 7 with mapped drive to share on fileserver2
What we are doing:
- mounted share from fileserver1 on mountserver, e.g. /shared/fileserver1
- mounted share from fileserver2 on mountserver, e.g. /shared/fileserver2
- ran rsync on mountserver to sync data from fileserver1 to fileserver2, using atime to select only data not older than X (see the sketch after this list)
- after a while, tried to delete data older than Y on /shared/fileserver2
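The sync step is roughly the following sketch (the age window X is a placeholder, here 30 days, and I select on atime with find, since rsync itself has no atime filter):
cd /shared/fileserver1
# list files accessed within the last 30 days, NUL-separated to
# survive odd filenames, and feed the list to rsync
find . -type f -atime -30 -print0 | rsync -a --files-from=- --from0 . /shared/fileserver2/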
From what I see, the Linux stat command on mountserver returns the following when querying a file on /shared/fileserver2:
At the same time, when I open Properties for the same file via the mapped drive connected to fileserver2, I see the following:
As you can see, the Created date of 12 August shown in Windows Explorer is nowhere to be seen in the stat output.
Am I missing something here?
Joining a Linux host to Windows AD is widely documented. However, I struggle to find any guides or best practices on how to join Linux clones whose source image was already a member of the domain.
Naturally, things start to break due to the identical SIDs. I cannot use 'net ads unjoin' as this would remove the original machine's SID from the domain...
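What I think the rejoin should look like for a clone, and would like to validate, is roughly this (a sketch for a Samba/winbind setup; the hostname is an example and the secrets.tdb path varies by distribution):
# give the clone its own identity before it ever talks to AD
hostname linuxclone01
echo linuxclone01 > /etc/hostname
# drop the machine account secret inherited from the master image
rm /var/lib/samba/private/secrets.tdb
# join under the new name; this creates a fresh computer object and
# leaves the original machine's account untouched
net ads join -U Administrator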
One of my clients has a small office in a major city that used to be the main office for the company. Due to organizational changes, the main office was moved to a different country, but all the equipment and office space are still in place. Now the CEO is trying to get value from this office, and their IT guy suggested using it for online storage.
So you have an idea, this is what they have (list as given to me):
- 100Mb/100Mb fiber optic dedicated Internet
- ADSL Backup line
- one onsite rack
- x5 ProLiant DL360 G5, each with 6GB RAM and a 250GB SAS array
- x2 NAS 4TB LACIE QUAD Drives
- QNAP 8TB ISCSI Array - main file storage pool
- 1Gb managed Netgear switch
- fibre optic transcoder (router)
The IT guy has written an in-house PHP/MySQL webapp that gives clients secure online backup based on SFTP/WinSCP uploads. No marketing was done and no business plan is in place.
The CEO just wants to keep down the costs of a nearly empty office and would like to use this infrastructure as a revenue source.
In my opinion this is a sure way to waste money, considering the existing offers for secure online backup storage and how much established companies already provide.
From what I can see, getting it to work would require serious investment in offsite storage replication, removing single points of failure, certification, marketing, etc.
Is there a better way to reuse this equipment/office?
We have a reverse web proxy on Apache2: requests to the URL http://server1 get content proxied from http://realserver1.
Now I am trying to add another site to the web proxy that does the same; the difference is that the remote server already runs Apache as a reverse proxy itself (for a Java application on the same host but a different port).
I.e. the web proxy serves the URL http://server2, which gets content proxied from http://realserver2, which in turn gets its content from http://localhost:someport.
I would expect this setup to just work, however it does not.
Instead, I am being redirected to http://realserver2. Any idea what I am doing wrong?
Thank you!
Code can be seen in snipt.net/search?q=apache+reverse+proxies+chain
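For context, the new site's config has roughly this shape (a sketch; hostnames as above):
<VirtualHost *:80>
    ServerName server2
    ProxyRequests Off
    # forward requests to the remote reverse proxy...
    ProxyPass        / http://realserver2/
    # ...and rewrite Location headers in its responses, so that
    # clients should not end up redirected to realserver2
    ProxyPassReverse / http://realserver2/
</VirtualHost>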
Can anyone suggest good log reporting software for Proftpd?
I am looking for something at least as good as http://xferlogdb.sourceforge.net where log is fed into the database and dynamic web pages are built to retrieve historical data and statistics per user, time period and so on.
Xferlogdb is very helpful, but unfortunately the latest release dates from 2004.
We are planning to move our production servers to the datacenter and virtualize the remaining servers in the process. The datacenter will have HP blades with vSphere on top. Currently we are using a Celerra NS20 as our fileserver. Since the datacenter uses HP kit and an EVA 4400 as the SAN, we cannot have the Celerra there, as EMC support for Celerra does not apply with a non-EMC array.
I have searched for possible options and one of them was to have an HP NAS blade X3800sb instead of the Celerra. However, this seems like overkill to me. We are only using the Celerra for about 100 users and 50 servers, and I think the X3800sb could be a waste of resources.
The other option would be a virtual fileserver as part of the VMware environment in the datacenter. We only need CIFS to be provided. The only option I can think of is Windows Storage Server. We had bad experiences with Windows servers used as fileservers in the past (memory leaks, for one thing), and this was one of the reasons we moved to the Celerra.
What are the other options? We need something as reliable as the Celerra with as many features as possible. For example, the Celerra has per-folder quotas, deduplication, dynamic volume allocation, automatic failover, VTLU and replication.
Also, we would need to replicate the NAS data to the failover site. We could use block-level, SAN-to-SAN replication, but this would waste bandwidth, as only a subset of folders needs to be replicated. We used CA XOsoft for Windows servers in the past, and the Celerra has its own Celerra Replicator option.
Thank you very much in advance,
Please ask me if I missed any details!
We use an FTP server to distribute a lot of files. Hard links are used extensively, as we have many identical files named differently.
We also have a secondary FTP/WWW server as a failover for the primary one. Files are rsynced to it over ssh from the primary FTP server. Both servers have local RAID 1 disks.
Unfortunately, the secondary server has less storage than the primary one, and this has resulted in it running out of space. This is a production server and I need as little intervention as possible to fix this problem.
I have two ideas for how to rectify it:
1. Create a LUN on the iSCSI SAN and attach it to the secondary server. However, this server has never been connected to iSCSI, and I would need to arrange downtime to be on the safe side. There is also a risk that iSCSI may not work well on this server, as there is no time to test it.
2. Open up NFS on the primary server (both servers are on the same Gb switch) and serve /home, where all the files live, via NFS - see the sketch after this list. The problem here is that I don't know how NFS would perform in this setup and whether there are any protocol limitations - we have never used NFS in our shop.
3. Any other setup?
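For option 2, the export would be a one-liner on the primary - a sketch, assuming the secondary sits on a subnet like 192.168.1.0/24 behind the shared switch:
# /etc/exports on the primary; read-only would be my starting
# point, since the secondary only needs to serve these files
/home  192.168.1.0/24(ro,no_subtree_check)
Running exportfs -ra on the primary would then apply it without a restart.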
What would you do? This is more or less a temporary solution, as we will be relocating to the datacenter within a year.
We use an EMC Celerra filer with our EMC array and it supports NDMP. At the remote site we have an HP EVA4400 SAN, and it seems that our only NAS option there is Windows Storage Server. What if we want to use NDMP? Surely HP has something that matches the Celerra?
We are planning to move some of our infrastructure to the datacenter. Most of our hardware is EMC/Dell, while the datacenter uses HP.
One of the components we use is an EMC Celerra NS20 filer connected to an EMC CX3-10 storage array. This is the primary device for users to access CIFS shares.
Since I am moving some infrastructure to the datacenter, some files will need to be served there as well.
The idea was to buy a standalone Celerra (without any integrated storage) and connect it to the EVA 4400 that is already available in the datacenter. This would also let us use Celerra Replicator to keep the two Celerras in sync if needed.
I have asked EMC support whether it is possible to use the EVA 4400 as storage for the Celerra, and their answer was "we don't support 3rd party storage", which is perfectly understandable.
Now I have two options:
- Buy a Celerra with integrated storage. This will cost a lot more than initially planned but solves the problem completely.
- Buy another filer that is supported with the EVA 4400. Keeping the Celerra and a non-EMC filer in sync would be the next challenge in this case.
I have checked the HP site and could not find any device similar to the Celerra. What other filers would work with HP storage? Do I have any options for replication other than EMC software?
Thank you very much in advance!
We have 20+ physical Windows 2003 servers that are used for heavy calculation jobs a few times a week. Users log on to them via RDP and run the jobs. Once the jobs are complete, users save the results to the local hard drives on these servers, which are shared. The resulting files can total a few gigabytes, with an average size of 100MB per file. Once the files are ready, a script on the script server connects to each server's share and syncs the files to a fileshare on a Celerra NS20 NAS. Once this syncing is done, the files are sent to customers from the filer via an FTP server.
This setup has been in place for many years, and now that we are virtualizing our infrastructure I am thinking about getting rid of these servers and replacing them with VMs to save on power, space and hardware support. The servers do not need a high-availability setup, but they do need a lot of memory, and the application they run is not multithreaded.
Current infrastructure that can be used:
- vSphere infrastructure on Dell PowerEdge M600 blades. We may buy 2 more blades to accommodate these servers
- CX3-10 Fibre Channel SAN. We may buy an extra disk tray to accommodate these servers. I am inclined to persuade management to go for FC disks.
- Celerra NS20 filer connected to SATA disk tray on this SAN
- Cisco Catalyst 3560 gigabit switches
My main concern is how to reorganise storage. As all servers will be on the same SAN, all this fiddling with shares can go away. I am thinking about mapping drives to a location on the NAS filer and then syncing files to the same location; however, this seems like duplicating data on the filer.
Maybe there is a more elegant way to rearrange storage in this setup, and someone has been in a similar situation?
Are there any major faults in my plan? What pitfalls should I expect?
I am trying to enable AD authentication on Debian stable servers so that users can log on via ssh authenticating against Windows AD. It all works fine and I can ssh to the server using my Windows credentials, but I have noticed this message on remote ssh logon when logging on as root:
Your account has been locked. Please contact your System administrator
Your account has been locked. Please contact your System administrator
Your account has been locked. Please contact your System administrator
Last login: Sat Jun 13 14:15:14 2009 from workstation1
server1:~#
I have checked whether I can log in as root via the local console and, oops, I cannot. The same error pops up. This could kick me painfully in the future. At the same time, I have tried the same setup on Red Hat and I don't have this problem there. I believe the problem is somewhere in my PAM configuration but I can't see where; googling for the error does not get me anywhere either.
Below are the details of the corresponding PAM files on Debian and Red Hat...
Debian Version
common-account
account sufficient pam_winbind.so require_membership_of=S-1-5-21-602162358-1844823847-725345543-XXXXXX
account sufficient pam_winbind.so require_membership_of=S-1-5-21-602162358-1844823847-725345543-XXXXXX
account sufficient pam_winbind.so require_membership_of=S-1-5-21-602162358-1844823847-725345543-XXXXXX
account required pam_unix.so
common-auth
auth sufficient pam_winbind.so require_membership_of=S-1-5-21-602162358-1844823847-725345543-XXXXXX
auth sufficient pam_winbind.so require_membership_of=S-1-5-21-602162358-1844823847-725345543-XXXXXX
auth sufficient pam_winbind.so require_membership_of=S-1-5-21-602162358-1844823847-725345543-XXXXXX
auth required pam_unix.so nullok_secure
common-session
session required pam_mkhomedir.so skel=/etc/skel/ umask=0022
session sufficient pam_winbind.so require_membership_of=S-1-5-21-602162358-1844823847-725345543-XXXXXX
session sufficient pam_winbind.so require_membership_of=S-1-5-21-602162358-1844823847-725345543-XXXXXX
session sufficient pam_winbind.so require_membership_of=S-1-5-21-602162358-1844823847-725345543-XXXXX
session required pam_unix.so
RedHat system-auth file:
auth required pam_env.so
auth sufficient pam_unix.so nullok try_first_pass
auth sufficient pam_winbind.so use_first_pass
auth requisite pam_succeed_if.so uid >= 500 quiet
auth required pam_deny.so
account required pam_unix.so
account sufficient pam_succeed_if.so uid < 500 quiet
account sufficient pam_winbind.so use_first_pass
account required pam_permit.so
password requisite pam_cracklib.so try_first_pass retry=3
password sufficient pam_unix.so md5 shadow nullok try_first_pass use_authtok
password sufficient pam_winbind.so use_first_pass
password required pam_deny.so
session optional pam_keyinit.so revoke
session required pam_limits.so
session required pam_winbind.so use_first_pass
session [success=1 default=ignore] pam_succeed_if.so service in crond quiet use_uid
session required pam_unix.so
session optional pam_mkhomedir.so skel=etc/skel/ umask=0027
/etc/pam.d/sshd
# PAM configuration for the Secure Shell service
# Read environment variables from /etc/environment and
# /etc/security/pam_env.conf.
auth required pam_env.so # [1]
# In Debian 4.0 (etch), locale-related environment variables were moved to
# /etc/default/locale, so read that as well.
auth required pam_env.so envfile=/etc/default/locale
# Standard Un*x authentication.
@include common-auth
# Disallow non-root logins when /etc/nologin exists.
account required pam_nologin.so
# Uncomment and edit /etc/security/access.conf if you need to set complex
# access limits that are hard to express in sshd_config.
# account required pam_access.so
# Standard Un*x authorization.
@include common-account
# Standard Un*x session setup and teardown.
@include common-session
# Print the message of the day upon successful login.
session optional pam_motd.so # [1]
# Print the status of the user's mailbox upon successful login.
session optional pam_mail.so standard noenv # [1]
# Set up user limits from /etc/security/limits.conf.
session required pam_limits.so
# Set up SELinux capabilities (need modified pam)
# session required pam_selinux.so multiple
# Standard Un*x password updating.
@include common-password
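Based on the Red Hat system-auth above, my current guess is ordering: on Red Hat, pam_succeed_if lets local accounts (uid < 500, root included) pass the account stack before pam_winbind is ever consulted. The change I am considering for Debian's common-account is roughly this (an untested sketch, assuming pam_succeed_if.so is available on my Debian release):
# let local accounts, root included, pass before winbind is asked
account sufficient pam_succeed_if.so uid < 500 quiet
account sufficient pam_winbind.so require_membership_of=S-1-5-21-602162358-1844823847-725345543-XXXXXX
account required pam_unix.so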
We have a few customer-facing servers in the DMZ that also have user accounts; all accounts are in the shadow password file. I am trying to consolidate user logons and am thinking about letting LAN users authenticate against Active Directory. The services needing authentication are Apache, Proftpd and ssh. After consulting the security team, I have set up an authentication DMZ with an LDAPS proxy that in turn contacts another LDAPS proxy (proxy2) in the LAN, and this one passes authentication info via LDAP (as an LDAP bind) to the AD controller. The second LDAP proxy is only needed because the AD server refuses to speak TLS with our secure LDAP implementation. This works for Apache using the appropriate module. At a later stage I may try to move the customer accounts from the servers to the LDAP proxy so they are not scattered around the servers.
For SSH, I joined proxy2 to the Windows domain so users can log on using their Windows credentials. Then I created ssh keys and copied them to the DMZ servers using ssh-copy-id, to enable passwordless logon once users are authenticated.
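The key distribution step, per user on proxy2, was roughly this (the hostname is an example):
ssh-keygen -t rsa                 # generate the user's keypair on proxy2
ssh-copy-id user1@dmz-server1     # append the public key to authorized_keys on the DMZ server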
Is this a good way to implement this kind of SSO? Did I miss any security issues here, or maybe there is a better way of achieving my goal?
What is the best way to share a filesystem between Linux and Windows servers via SAN? We have frontend RHEL servers and backend Windows 2003 servers that pass files via a database, which is not the best solution. Am I correct in assuming that a clustered filesystem is the answer? If so, which one would be best to use?