Is there an equivalent of AWS EC2 tags for Azure? For example, could an account assign name=value
pairs to an Azure machine and then query them later?
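To illustrate what I mean, here is the sort of thing EC2 tags allow (using the aws CLI; the instance ID and the tag name/value are just examples):

$ aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=role,Value=dataserver
$ aws ec2 describe-instances --filters "Name=tag:role,Values=dataserver"

Is there a comparable way to tag and filter on the Azure side?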
Aitch's questions
I've recently got my puppetmaster and a client up and running; the client has been correctly signed, and has requested and applied simple changes. All good.
I have a growing number of machines (>100). They are not consistently named (historical reasons). They fall into a handful of categories (think of it like: dataserver_type1, dataserver_type2, webserver_type1, webserver_type2....). New instances of these types of machines are added weekly.
I can't yet see how I can declare a "generic" node of (say) "dataserver_type1" that includes whatever modules it needs, and then set something in the client's puppet.conf that says "I am a dataserver_type1", without relying on the hostname/FQDN.
If I declare a node in the manifest named (say) "my-data-server-type1" - the certified hostname - the client picks it up and it works. I know you can use patterns for hostnames, but as I said, my server names are not consistent and I can't change them.
It seems wrong to have to edit a file on the master and manually add a node for each server when their number keeps growing.
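To make that concrete, the only thing that works for me at the moment is a per-host node block in the manifest, something like this (the class name is just illustrative):

# one node block per certified hostname - this is exactly what I want to avoid repeating
node 'my-data-server-type1' {
  include dataserver_type1
}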
Edit:
Digging deeper, it seems roles may be what I want, but there still seems to be a requirement for the master to hold a list of which roles a specific named server should have. Perhaps what I am really asking is: how can a client say "I want to be this role" without the master having to be updated?
My domain is already with Amazon Route 53 and I can use the cli53 command line tool quite happily to maintain it.
We are moving to Amazon SES and I would like to add an SPF record to the domain, as per the docs; we have no existing SPF record. This is just for automated emails to customers, not internal user accounts.
I can't for the life of me figure out the command line to do this. Can anyone provide an example? Here is what I tried, and the error it produces:
$ cli53 rrcreate -x 3600 -r --wait mydomain.com '' 'SPF' 'v=spf1 include:amazonses.com ?all'
Traceback (most recent call last):
...
boto.route53.exception.DNSServerError: DNSServerError: 400 Bad Request
...
Sender
InvalidChangeBatch
Invalid Resource Record: FATAL problem: InvalidTXTRDATA encountered at v=spf1 include:amazonses.com ?all
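The only guess I have is that Route 53 might want the record data itself wrapped in an extra set of double quotes, i.e. something like the following, but that is untested speculation on my part:

$ cli53 rrcreate -x 3600 -r --wait mydomain.com '' 'SPF' '"v=spf1 include:amazonses.com ?all"'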
I have configured a working openvpn server (ubuntu 10.04) and client, no problems.
I generated certificates and key files and such like for a client machine.
We deploy a number of these generic client machines for data capture, anywhere between 10 and 20 a month. The reason for the VPN is to allow us to remote login for occasional support and monitoring. They send their data home via other means (not the VPN)
I am considering making the client config files generic and using them on all deployed machines (the "duplicate-cn" option on the server side).
My reasoning is this:
- The vpn server explicitly disallows ssh login from anywhere except our office, so a connected client cannot ssh into the server
- In addition, login to the openvpn server requires the X509 .pem keyfile (it's an Amazon EC2 instance)
- The server does not allow clients to see each other ("client-to-client" is commented out), and there is no access to any other networks; the VPN is purely so we can ssh in to the client
- We are lazy and don't want the admin overhead of generating certs and applying them to each machine (the machines would then no longer be generic or hot-swappable, etc.), and people will get confused and get it wrong
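For reference, the server-side setup described above amounts to something like this excerpt from server.conf (the port/protocol lines are just the usual defaults; the rest of the file is omitted):

port 1194
proto udp
dev tun
# all deployed machines share one client certificate / common name
duplicate-cn
# clients cannot see each other - deliberately left commented out
;client-to-client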
The main drawback seems to be:
- It's difficult to tell which machine is which when there are many connections (I haven't found a solution for this yet)
The client machines are installed at "untrusted" sites; that is, I cannot guarantee their physical security.
So my question is: what is the worst that could happen in this scenario? If a machine got compromised directly, the worst it could do is open a VPN tunnel (which it does automatically anyway!), but it could not really get anywhere beyond that. We could just block that IP at the firewall level once detected.
Is my thought process correct here or have I missed anything?
Edit:
I should perhaps have said that the client machines are headless (no video/keyboard) and are not accessed directly by the client sites (although you cannot outright guarantee that!). This is a machine-to-machine (M2M) environment; these are not, for example, laptops carried by sales folks.
I have a growing number of remote machines that ssh home and set up a connection to allow subsequent access via a reverse tunnel, so I can get into them for maintenance.
Currently, I must manually configure each machine with a unique forwarded ssh port by editing a script before it is installed at the remote location. The rest of the install is automated (PXE). Setting this port manually is tiresome, risks error, and prevents me confidently handing the full process off to a tech.
Question: given a clean Debian install, is it feasible to write a numeric hash of (say) the MAC address on eth0 that is deterministic, falls in some range (say 30000-60000), and could reasonably be expected to be unique(++)? I guess I've got bash, awk etc. to play with. I would prefer to stick to shell-related tools if possible but could use Python if pushed.
(++) I would amend the tunnelling script to increment the port if it had problems on the first port tried.
Example suggested input: ifconfig eth0 | grep HWaddr | awk '{ print $5 }' gives 08:00:27:aa:bb:cc
Example required output: 34567
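To show the sort of thing I have in mind, here is a rough sketch (it assumes bash and md5sum are available; the exact range mapping is arbitrary):

#!/bin/bash
# derive a deterministic port in the range 30000-60000 from the eth0 MAC address
mac=$(ifconfig eth0 | grep HWaddr | awk '{ print $5 }')   # e.g. 08:00:27:aa:bb:cc
# hash the MAC and keep the first 8 hex digits
h=$(echo -n "$mac" | md5sum | cut -c1-8)
# map that 32-bit value onto 30000-60000 (30001 possible ports)
port=$(( 0x$h % 30001 + 30000 ))
echo "$port"

Is something along these lines sane, or is there a better-established trick for this?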
I've been playing about with Amazon S3 a little for the first time and like what I see for various reasons relating to my potential use case.
We have multiple (online) remote server boxes harvesting sensor data that is regularly uploaded every hour or so (rsync'ed) to a VPS server. The number of remote server boxes is growing regularly and forecast to keep growing (hundreds). The servers are geographically dispersed. The servers are also automatically built, therefore generic with standard tools and not bespoke per location. The data is many hundreds of files per day.
I want to avoid a situation where I need to provision more VPS storage, or additional servers every time we hit the VPS capacity limit, after every N server deployments, whatever N might be.
The remote servers can never be considered fully secure, since we don't know what might happen to them when we are not looking. Our current solution is a bit naive: it simply restricts inbound rsync to ssh only, into per-MAC-address directories, with a known public key. There are plenty of holes to pick in this, I know.
Let's say I write or use a script like s3cmd/s3sync to potentially push up the files.
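For example, each machine might run something like this from cron (the bucket name, prefix scheme and local path are all made up for illustration):

# hourly push of captured files, using the machine's MAC address as the key prefix
MAC=$(cat /sys/class/net/eth0/address)
s3cmd sync /var/spool/sensor-data/ "s3://sensor-archive/${MAC}/"

My questions then become: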
Would I need to manage hundreds of access keys and have each server customized to include its own (doable, but key management becomes nightmarish)?
Could I restrict inbound connections somehow (e.g. by mac address), or just allow write-only access to any client running the script? (I could deal with a flood of data if someone got into a system.)
Having a bucket per remote machine does not seem feasible due to bucket limits?
I don't think I want to use a single shared key: if one machine were breached then, potentially, a malicious party could get hold of the filestore key and start deleting data for all clients, correct?
I hope my inexperience has not blinded me to some other solution that might be suggested!
I've read lots of examples of people using S3 for backup, but can't really find anything about this sort of data collection, unless my google terminology is wrong...
I've written more than I should here; perhaps it can be summarised thus: in a perfect world I just want one of our techs to install a new remote server at a location and have it automagically start sending files home with little or no intervention, while minimising risk. Pipe dream or feasible?
TIA, Aitch
Edit 1: Perhaps bad form to answer one's own question, but...
After much further googling and browsing, it appears that the (new?) Identity and Access Management (IAM) service might be what I need. It says "...IAM eliminates the need to share passwords or access keys, and makes it easy to enable or disable a User’s access as appropriate..." I may start thinking about using the hardware MAC address as some sort of unique user and a hash of some form as the password, so access can be set programmatically.
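The sort of per-machine provisioning I have in mind would be roughly this (a sketch using the aws CLI purely for illustration; the user-naming scheme is made up, and in practice IAM hands back access keys rather than passwords for programmatic use):

# create one IAM user per machine, keyed on its MAC address, and issue it an access key
MAC=$(cat /sys/class/net/eth0/address | tr -d ':')
aws iam create-user --user-name "sensor-${MAC}"
aws iam create-access-key --user-name "sensor-${MAC}"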
I admin a handful of cloud-based (VPS) servers for the company I work for.
The servers are minimal Ubuntu installs that run bits of LAMP stacks / inbound data collection (rsync). The data is large but not personal, financial or anything like that (i.e. not that interesting).
Clearly on here people are forever asking about configuring firewalls and such like.
I use a bunch of approaches to secure the servers, for example (but not restricted to)
- ssh on non standard ports; no password typing, only known ssh keys from known ips for login etc
- https, and restricted shells (rssh) generally only from known keys/ips
- servers are minimal, up to date and patched regularly
- use things like rkhunter, cfengine, lynis, denyhosts etc. for monitoring
I have extensive experience of Unix sysadmin work and I'm confident I know what I'm doing in my setups; I configure the /etc files directly. I have never felt a compelling need to install firewall tooling such as iptables.
Put aside for a moment the issues of physical security of the VPS.
Question: I can't decide whether I am being naive, or whether the incremental protection a firewall might offer is worth the effort of learning and installing it, and the additional complexity (packages, config files, possible support etc.) it adds to the servers.
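For concreteness, the sort of minimal ruleset I would be weighing up is something like this (2222 stands in for the non-standard ssh port):

# default-deny inbound; allow loopback, established traffic, ssh and https
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 2222 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT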
To date (touch wood) I've never had any problems with security but I am not complacent about it either.