Is it possible to have different screensaver lock times for local vs. remote (RDP) logins on Windows? I don't know much about Windows admin, just that this config is pushed through GPO.
I'm very new to Oracle. It looks like all the Oracle DBAs use binary-only dumps. This sometimes causes problems and seems rather pointless to me (any performance gain is bound to be negligible), but what do I know.
Is there a good reason that escapes me?
Is there a tool for Oracle, like pg_dump for Postgres, that can generate SQL statements from a database?
One use case would be moving from one version to another, or converting the data to another DB.
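For comparison, here's a minimal sketch of both sides, assuming an Oracle 10g+ Data Pump setup; the credentials, schema name and directory object are placeholders, and note that impdp's SQLFILE mode only extracts DDL, not INSERT statements the way pg_dump does.
# PostgreSQL: a plain-SQL dump you can read and replay anywhere
pg_dump --format=plain mydb > mydb.sql
# Oracle: the export itself is binary, but impdp can render a dump
# file as DDL statements instead of executing it (no row data, though)
expdp system/secret schemas=MYAPP directory=DATA_PUMP_DIR dumpfile=myapp.dmp
impdp system/secret directory=DATA_PUMP_DIR dumpfile=myapp.dmp sqlfile=myapp_ddl.sql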
I'm looking to deploy an application based on Oracle DB, and I was assuming it would be possible to easily do active/passive clustering (with RH Cluster or Heartbeat) and synchronous replication à la DRBD, but all the hosting providers I'm talking to are looking at me funny. Some have offered ghetto replication they call "log shipping," whereby files are asynchronously sync'd over the network, but that means we could potentially lose up to an hour of data.
The alternative is to pay millions of dollars (well, tens to hundreds of thousands) for Oracle Data Guard or some such.
I'm puzzled because I've worked for years on a very demanding system (tens to hundreds of GB of payment transactions) that did what I'm asking for close to $0, using PostgreSQL over DRBD over a Metropolitan Area Network.
I'm assuming here that SAN replication does the same thing as DRBD, i.e. synchronous replication where written blocks are ACK'd only after they've been written remotely. Am I wrong?
Am I missing something here?
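For reference, this is the kind of setup I mean; a minimal DRBD sketch with made-up hostnames, devices and addresses. Protocol C is what makes it synchronous: the local write isn't acknowledged until the peer has it on disk.
# illustrative only -- adapt names, disks and addresses to your hosts
cat > /etc/drbd.d/oradata.res <<'EOF'
resource oradata {
  protocol C;    # synchronous: local write ACKed only after the peer has it
  on db1 { device /dev/drbd0; disk /dev/sdb1; address 10.0.0.1:7788; meta-disk internal; }
  on db2 { device /dev/drbd0; disk /dev/sdb1; address 10.0.1.1:7788; meta-disk internal; }
}
EOF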
I'm looking for a tool to analyze the traffic between two proprietary apps. It's HTTPS, and I control the certs and the proxy settings.
I can't seem to find a free/open-source tool to do that, so before I roll my own, any recommendations?
Edit: the few I've seen do not look maintained anymore (Paros?)
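Before rolling my own, the crudest workable option I can think of is socat as a TLS-terminating relay; this is just a sketch, and proxy.pem / the backend hostname are placeholders. The client has to trust proxy.pem, which is fine here since I control the certs.
# terminate TLS locally, dump the decrypted traffic (-v goes to stderr),
# then re-encrypt towards the real server
socat -v \
    openssl-listen:8443,cert=proxy.pem,verify=0,reuseaddr,fork \
    openssl:app-backend.example.com:443,verify=0 \
    2> decrypted-traffic.log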
I can't seem to get my iSCSI targets working without starting the PowerPath module, even though I don't actually use it.
Is it at all possible?
Edit: To clarify, I'm not even using multipathing at the moment. But if I don't start /etc/init.d/PowerPath, accessing the devices fails with an I/O error. Once I start this non-LSB-compliant script, everything begins to work.
I'm at a loss.
We manage hundreds of RedHat Enterprise Linux servers happily with Puppet. One of the cool side effects is that we can go to /var/lib/puppet/yaml/facts and look at the output of the "facter" utility (part of Puppet).
Now I would like the same kind of convenience for more information, such as which services are up and running or deactivated, or the list of packages installed. I'm not quite talking about monitoring, since I'm not so much interested in generating alerts or graphs on this, but more in having the information centralized for analysis.
I see two parts to doing this:
first, a mechanism for connecting the central repository to the clients. I remember that net-snmp already exposes the RPM database if allowed to do so, and I guess it might also expose, or could be made to expose, chkconfig state (see the sketch after this list).
second a tool to store said information.
Which tool could help with this? I'm looking for something that stores data in a convenient way, whether SQL, YAML, XML or consistently formatted text files, and can easily be told which hosts to talk to.
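As a starting point for the first part, here's a rough sketch of pulling the package list over SNMP and filing it away as one plain-text file per host; the community string, host list and output directory are assumptions. net-snmp publishes the RPM database under HOST-RESOURCES-MIB::hrSWInstalled when that module is enabled.
#!/bin/bash
# collect installed-package lists from each client, one file per host
outdir=/var/lib/inventory/packages
mkdir -p "$outdir"
while read -r host; do
    snmpwalk -v2c -c public -Oqv "$host" HOST-RESOURCES-MIB::hrSWInstalledName \
        > "$outdir/$host.txt" \
        || echo "failed to query $host" >&2
done < /etc/inventory/hosts.txt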
Is there a robust way of scripting (Unix shell) LUN provisioning for an EMC Clariion? Navicli doesn't look very reliable: its output is not easy to parse (and just plain weird), and it does not appear to return useful error codes.
I want something I could use like LVM, if that exists, e.g.:
if ! lvcreate -n "$lunname" -L "$size" "$volumegroup"
then
    echo "Failed" >&2
    exit 1
fi
A consultant told me that EMC writes terrible software on purpose so that they can sell very expensive add-ons, but I cannot believe that's true.
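What I've ended up sketching in the meantime is a defensive wrapper along these lines; the bind syntax shown is purely illustrative (check it against your FLARE/Navisphere docs), the point being to distrust both the exit code and the output.
#!/bin/bash
# wrapper sketch: treat anything that isn't a clean, silent success as a failure
lun=22
if ! out=$(navicli -h spa bind r5 "$lun" -rg 0 -cap 10 -sq gb 2>&1); then
    echo "navicli exited non-zero for LUN $lun: $out" >&2
    exit 1
fi
# navicli has been seen reporting errors on stdout with exit code 0,
# so scan the output as well
if printf '%s\n' "$out" | grep -qiE 'error|invalid|unable'; then
    echo "navicli reported a problem for LUN $lun: $out" >&2
    exit 1
fi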
Here is a very simple question that, being entirely new to Postfix, I wasn't able to find an easy answer to:
I want all mail coming in over SMTP (or via local delivery, for that matter) to be dropped into a single (special-purpose) user's Maildir, for later pick-up by a web app over IMAP (Dovecot).
I already have it configured to find the destination through
local_recipient_maps = unix:passwd.byname
and created a user for that single purpose, but I'd rather Postfix didn't even try to look up the user named in the incoming mail and just used the one user I specify in the config.
What's the simplest and most secure way to do this?
Addendum: this shows how to use virtual_alias_maps, but I don't want virtual domains; I want all incoming mail, regardless of the (multiple) domains I've set up, to go into the catch-all.
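For what it's worth, the closest I've come to a config-only sketch is the luser_relay route. This is untested and the mailbox name is an assumption: with local_recipient_maps emptied, Postfix accepts any recipient and hands unknown ones to the one real user.
# sketch only -- adjust the user name and review before reloading
postconf -e 'local_recipient_maps ='       # accept mail for any recipient
postconf -e 'luser_relay = webapp-inbox'   # unknown users all go to this account
postconf -e 'home_mailbox = Maildir/'      # Maildir delivery so Dovecot can read it
postfix reload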
I don't think there is a right way to do the following (yet) in Puppet, but this strikes me as desirable.
I want my classes to be able to influence the content of a templated file from another class so that we can both avoid duplicating information, and put the info where it belongs.
For instance we have an "iptables" class, and various service classes, such as "postfix", "webappfoo" etc.
class webappfoo {
    $myfwrules +> [ "-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT" ]
    include apache
}

node webappserver {
    include webappfoo
    include iptables
}
But this does not work; the $myfwrules array only contains that line within webappfoo.
Note that a class can read variables from another one; here, for instance, iptables could just read $webappfoo::myfwrules, but I don't want iptables to have to know about webappfoo.
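The closest workaround I can think of is a fragment directory: each service class ships its own file into /etc/iptables.d/ and the iptables class runs an assembly step (an exec notified by those File resources). The assembly itself is just shell, roughly like this; the paths and the header/footer fragments are assumptions.
#!/bin/bash
# rebuild the ruleset from per-class fragments, then apply it
fragdir=/etc/iptables.d
target=/etc/sysconfig/iptables
{
    cat "$fragdir/00-header"                  # *filter block and default policies
    for f in "$fragdir"/[0-9][0-9]-*.rules; do
        [ -f "$f" ] && cat "$f"               # one fragment per service class
    done
    cat "$fragdir/99-footer"                  # final REJECT rules and COMMIT
} > "$target.new" && mv "$target.new" "$target"
/sbin/service iptables restart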
If I understand correctly, to join a Windows domain, a machine needs to have an account on Active Directory, and have the password to authenticate. Such a password is renewed automatically every 30 days.
Now, I have a Linux machine on the corporate network, with the IT dept's blessing, and Samba can join a domain and would let me do so, but they don't want to add a new machine account. I want to reuse the account of a Windows machine I'm going to reformat anyway, but I couldn't find a way to decode the machine-account key stored in its registry.
I'm working on a setup with two datacenters linked by a MAN (bridged), where everything is duplicated between them in fail-over mode with RedHat Cluster, DRBD and that kind of thing.
I have one DNS server for each location, but it turns out that having both in /etc/resolv.conf doesn't help much; if one goes down, the client waits 10s or so half of the time. In other words, it's using them for load balancing, not fail-over. So I configured the two servers to use a VIP with ucarp (≈VRRP).
Is there a way to have my two DNS servers both be up and, for example, respond on the same IP, all the time? It's no big deal if one NS request gets two answers.
Is there a way to do this with Anycast / Multicast and so on?
Edit: it turns out anycast won't do me any good in my scenario; I only have static routes, and most of the traffic actually goes through a bridge.
What would be interesting would be a way to have two DNS servers answer requests on the same IP, if that's somehow possible.
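Meanwhile, the thing that actually shrank the pain on the client side was tightening the resolver options; a sketch, with placeholder addresses (timeout:1 caps the wait for a dead server at one second, rotate spreads queries across both servers).
# illustrative addresses; keep the ucarp VIP as the first entry if you use one
cat > /etc/resolv.conf <<'EOF'
nameserver 10.0.0.53
nameserver 10.0.1.53
options timeout:1 attempts:2 rotate
EOF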