We have some tests we need to run on a server with 32 GB of RAM.
All the servers we have access to have 64 GB, and we can't physically change this.
Is there some way of telling RHEL to use only a fixed amount of RAM, less than what is installed?
Yes, I know this is generally a bad idea, but we have a short-term need to do it.
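One standard approach (a sketch, not RHEL-version-specific advice) is the mem= kernel boot parameter, which caps how much RAM the kernel will use; it takes effect on reboot. Where you set it depends on the RHEL version:

```shell
# RHEL 6: append mem=32G to the kernel line in /boot/grub/grub.conf
# RHEL 7+: add mem=32G to GRUB_CMDLINE_LINUX in /etc/default/grub,
#          then regenerate: grub2-mkconfig -o /boot/grub2/grub.cfg
# or update every installed kernel in one step with grubby:
grubby --update-kernel=ALL --args="mem=32G"
```

Note that the cap includes memory reserved by firmware and the kernel itself, so usable RAM will come out slightly under 32 GB.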
Following this: https://docs.oracle.com/middleware/1221/wls/SECMG/ssl_version.htm#SECMG637
We have set
-Dweblogic.security.SSL.minimumProtocolVersion=SSLv3 (originally set to TLSv1.2)
-Dweblogic.security.SSL.protocolVersion=ALL
(the second shouldn't be needed, but should also be harmless)
This didn't work. According to this: https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html#enable-sslv3
JDK 8 doesn't support SSLv3 out of the box, but it can be re-enabled:
If SSLv3 is absolutely required, the protocol can be reactivated at JRE level by removing "SSLv3" from the jdk.tls.disabledAlgorithms property in the java.security file or by dynamically setting this Security property before JSSE is initialized.
To enable SSLv3 protocol at deploy level, after following the above steps, edit the deployment.properties file and add the following:
deployment.security.SSLv3=true
We've done the first of these changes, but it's not clear what "enable... at deploy level" means for WebLogic, and we can't find a deployment.properties file.
Do we need to do this step? And if so, where is the deployment.properties file (or equivalent) for WebLogic?
Alternatively, has anyone successfully re-enabled SSLv3 support in WebLogic 12 and can tell us what needs to be changed?
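For concreteness, the first change amounts to editing jdk.tls.disabledAlgorithms in $JAVA_HOME/jre/lib/security/java.security (the exact default list varies between JDK 8 updates; the values below are illustrative):

```
# before (illustrative JDK 8 default):
#   jdk.tls.disabledAlgorithms=SSLv3, RC4, MD5withRSA, DH keySize < 768
# after - SSLv3 removed, everything else left alone:
jdk.tls.disabledAlgorithms=RC4, MD5withRSA, DH keySize < 768
```

The documented alternative is to set the same Security property programmatically before JSSE initializes.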
We have an Apache forward proxy sending requests to thousands of back end servers.
The servers are faulty, in that they require both the initial challenge request (without authentication) and the next request (with authentication) to be on the same http connection. We cannot get this fixed in any reasonable timescale.
So we would like the forward proxy to use the same connection, possibly via connection pooling?
Apache forward proxies, by default, close the connection as soon as they get a response. This is by design:
"The default worker for forward proxying does not use connection pooling in the naive sense. It closes each connection after each request.
What Ryujiro Shibuya was observing was that Apache signals it would keep the connection open even in forward proxy mode, but then actually closes the connection. We are discussing a fix to this, namely always signalling "Connection: close" from the beginning for the default forward and reverse proxy workers."
There is some suggestion it can be worked around:
"You can define explicit workers though (e.g. using ProxyPass for reverse and as Rüdiger wrote likely also in forward proxy mode, which then will use HTTP Keep-Alive (by default, depending on several config options)"
But I don't know how to do this. Something with ProxySet, possibly? The issue with that is that I need to somehow specify the URLs, but this is a forward proxy - there are many possible origin servers and I cannot enumerate them up front.
How should we configure things to get this connection re-use?
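Based on that mailing-list suggestion, an explicit worker might be declared as sketched below. This is an untested assumption on our part - whether mod_proxy will actually match forward-proxy traffic against such a worker is exactly what's in question, and it only helps for origins you can enumerate (the hostname is illustrative):

```apache
# Hypothetical: declare an explicit worker for one origin so mod_proxy
# pools and reuses its connections (keepalive, ttl and max are standard
# worker parameters)
<Proxy "http://backend.example.com">
    ProxySet keepalive=On ttl=120 max=20
</Proxy>
```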
We need to proxy a request to a server. The sender has an autogenerated username and password, and will use basic authentication. We can't change how these are generated by the sender.
The server will accept only usernames and passwords of at most 32 characters. If the autogenerated ones are longer (and they often are), then the request will succeed if we truncate both to 32 characters.
So I need to examine the Authorization header, base64-decode the username and password, truncate each to at most 32 characters, and re-assemble the Authorization header before passing the request on.
Is there any way of doing this? I've perused the mod_headers doc and I can see how to do quite a lot of manipulation, but I can't see how to get this done...
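As far as I know, mod_headers can only do textual edits and can't base64-decode, so this probably needs a small piece of code running in the proxy (mod_lua, mod_perl, a WSGI shim - hypothetical options). The transformation itself is simple; a sketch in Python:

```python
import base64

def truncate_basic_auth(header, max_len=32):
    """Decode a Basic Authorization header, truncate the username and
    password to max_len characters each, and rebuild the header."""
    scheme, _, token = header.partition(' ')
    if scheme.lower() != 'basic' or not token:
        return header  # not Basic auth; pass through untouched
    # the username cannot contain ':', so split on the first one
    user, _, pwd = base64.b64decode(token).decode('utf-8').partition(':')
    creds = f"{user[:max_len]}:{pwd[:max_len]}"
    return f"{scheme} {base64.b64encode(creds.encode('utf-8')).decode('ascii')}"
```

Wiring this into the request path is the Apache-specific part that remains.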
We would like to send the request received by an Apache proxy to all of a set of downstream servers (in fact, also proxies, but I don't think this matters).
We know that all but at most one of these requests will fail, for a variety of reasons (server at that IP doesn't exist, is not listening on the right port, or the credentials are wrong).
We know that for one server, the request should work (but may not - the server may be powered off, not working correctly, overloaded etc). We don't know which of the servers this will be for any one request.
So we'd like to return the one correct response, if it happens, or if not any of the error responses (or a fixed failure response) should be returned.
Any ideas? It's not the most complex app to write if we need to do it from scratch, but we'd prefer to use Apache (which is already in place in our solution) if we can.
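For the from-scratch fallback, the core really is small; here is a hedged sketch (the request function and server names are placeholders) that fans the request out in parallel and applies the selection rule described above:

```python
from concurrent.futures import ThreadPoolExecutor

def choose_response(responses):
    """Return the single successful (2xx) response if any upstream produced
    one; otherwise fall back to the first error, or a fixed 502."""
    for status, body in responses:
        if 200 <= status < 300:
            return status, body
    return responses[0] if responses else (502, "no upstream reachable")

def fan_out(request_fn, servers):
    """Send the same request to every server in parallel; connection
    failures become 502s so choose_response always has a candidate."""
    def attempt(server):
        try:
            return request_fn(server)
        except OSError as exc:
            return 502, str(exc)
    with ThreadPoolExecutor(max_workers=max(1, len(servers))) as pool:
        return choose_response(list(pool.map(attempt, servers)))
```

request_fn would wrap whatever HTTP client you use; the design keeps the "which answer wins" rule in one testable function.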
... in vCloud. There's a script here to do it for vSphere. Can a similar (or entirely different, I don't care!) approach be used for vCloud?
(The underlying problem is a lot of seldom- or never-used VMs consuming resources on our vCloud. We'd like to find the barely-used ones and work with their creators to remove them.)
We have a new(ish) WLS 9.2 installation.
Over the last day or so, after restarting WLS, the admin server unexpectedly shuts down after a while, preceded in the logs by many (hundreds of) occurrences of exceptions like the one below. This may be connected with having done an EAR redeployment, but if so the adverse effect seems to persist across a WLS restart.
We have tried restoring LDAP (from a backup taken months ago, which we are pretty sure is OK).
It seems to shut down in a semi-orderly way after being told to do so by the JVM - see the other excerpt from the log below. Googling suggests that's because System.exit has been called, but there's no indication I can see in the log of what caused that to happen (other than the stream of exceptions).
Any ideas on causes or fixes?
####<Aug 13, 2014 6:50:12 AM BST> <Critical> <EmbeddedLDAP> <csrpth01-omch> <AdminServer_9002> <[ACTIVE] ExecuteThread: '4' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1407909012546> <000000> <java.lang.NullPointerException
at weblogic.socket.SocketMuxer.deliverExceptionAndCleanup(SocketMuxer.java:715)
at weblogic.socket.SocketMuxer.deliverEndOfStream(SocketMuxer.java:684)
at weblogic.ldap.MuxableSocketLDAP$LDAPSocket.close(MuxableSocketLDAP.java:118)
at com.octetstring.vde.Connection.close(Connection.java:166)
at com.octetstring.vde.WorkThread.executeWorkQueueItem(WorkThread.java:89)
at weblogic.ldap.LDAPExecuteRequest.run(LDAPExecuteRequest.java:50)
at weblogic.work.ServerWorkManagerImpl$WorkAdapterImpl.run(ServerWorkManagerImpl.java:518)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:181)
And
####<Aug 13, 2014 7:06:23 AM BST> <Notice> <WebLogicServer> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983789> <BEA-000388> <JVM called WLS shutdown hook. The server will force shutdown now>
####<Aug 13, 2014 7:06:23 AM BST> <Alert> <WebLogicServer> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983802> <BEA-000396> <Server shutdown has been requested by <WLS Kernel>>
####<Aug 13, 2014 7:06:23 AM BST> <Notice> <WebLogicServer> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983807> <BEA-000365> <Server state changed to FORCE_SUSPENDING>
####<Aug 13, 2014 7:06:23 AM BST> <Notice> <Server> <csrpth01-omch> <AdminServer_9002> <DynamicSSLListenThread[DefaultSecure]> <<WLS Kernel>> <> <> <1407909983813> <BEA-002607> <Channel "DefaultSecure" listening on 10.204.1.232:7002 was shutdown.>
####<Aug 13, 2014 7:06:23 AM BST> <Info> <Deployer> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983822> <BEA-149059> <Module hdm-dashboard-admin.war of application hdm-dashboard-admin is transitioning from STATE_ACTIVE to STATE_ADMIN on server AdminServer_9002.>
####<Aug 13, 2014 7:06:23 AM BST> <Info> <Deployer> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983830> <BEA-149060> <Module hdm-dashboard-admin.war of application hdm-dashboard-admin successfully transitioned from STATE_ACTIVE to STATE_ADMIN on server AdminServer_9002.>
####<Aug 13, 2014 7:06:23 AM BST> <Notice> <WebLogicServer> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983838> <BEA-000365> <Server state changed to ADMIN>
####<Aug 13, 2014 7:06:23 AM BST> <Notice> <WebLogicServer> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983839> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN>
####<Aug 13, 2014 7:06:23 AM BST> <Info> <JMX> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983857> <BEA-149513> <JMX Connector Server stopped at service:jmx:iiop://staginghdm.vfl.vodafone:9002/jndi/weblogic.management.mbeanservers.domainruntime .>
####<Aug 13, 2014 7:06:23 AM BST> <Info> <Diagnostics> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983859> <BEA-320002> <The Diagnostics subsystem is stopping on Server AdminServer_9002.>
####<Aug 13, 2014 7:06:23 AM BST> <Info> <JMX> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983925> <BEA-149513> <JMX Connector Server stopped at service:jmx:iiop://staginghdm.vfl.vodafone:9002/jndi/weblogic.management.mbeanservers.edit .>
####<Aug 13, 2014 7:06:23 AM BST> <Info> <JMX> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983926> <BEA-149513> <JMX Connector Server stopped at service:jmx:iiop://staginghdm.vfl.vodafone:9002/jndi/weblogic.management.mbeanservers.runtime .>
####<Aug 13, 2014 7:06:23 AM BST> <Info> <WebService> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983927> <BEA-220028> <Web Service reliable agents are suspended.>
####<Aug 13, 2014 7:06:23 AM BST> <Info> <WebService> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983937> <BEA-220029> <Web Service reliable agents are shut down.>
####<Aug 13, 2014 7:06:23 AM BST> <Info> <SAFService> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983943> <BEA-281004> <SAF Service has been suspended.>
####<Aug 13, 2014 7:06:23 AM BST> <Info> <SAFService> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983950> <BEA-281005> <SAF Service has been shut down.>
####<Aug 13, 2014 7:06:23 AM BST> <Info> <Deployer> <csrpth01-omch> <AdminServer_9002> <Thread-1> <<WLS Kernel>> <> <> <1407909983964> <BEA-149059> <Module hdm-
There's probably an existing question (or guide somewhere on the web) about this, but I couldn't find it.
We want to slowly migrate our user base from one implementation of the back end server to a new implementation on different servers.
There's already an Apache2 reverse proxy in front of the back end server.
So we'd like to proxy some source IP ranges/subnets to the new server, leaving all others redirecting to the original server. Then add to the IP ranges that proxy to the new server until they all do. Then remove the old server.
Can someone give me some pointers to how this is done in Apache?
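One way this is commonly done in Apache 2.4 is mod_rewrite's [P] (proxy) flag together with an expression-based condition on the client address, since ProxyPass itself can't be made conditional on source IP. A sketch, with hypothetical subnets and hostnames:

```apache
RewriteEngine On
# Clients in 10.1.0.0/16 go to the new back end...
RewriteCond expr "-R '10.1.0.0/16'"
RewriteRule "^/(.*)" "http://new-backend.example.com/$1" [P]
# ...everyone else continues to the old one
ProxyPass        "/" "http://old-backend.example.com/"
ProxyPassReverse "/" "http://old-backend.example.com/"
```

Growing the migration is then just adding further RewriteCond lines (combined with [OR]) for additional ranges, until the fallback ProxyPass can point at the new server too.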
The Apache 2.4 new features page says: "The source address used for proxy requests is now configurable."
I can't find out how. The words "Source address" aren't mentioned on the 2.4 mod_proxy page....
Googling suggests ProxySourceAddress as a relevant parameter, but the discussion seems to be about a patch to 2.2 not what was done in 2.4...
So can someone point me at the documentation I may have missed?
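For what it's worth, the directive that appears to implement this is ProxySourceAddress, added to mod_proxy in 2.3.9 and therefore present in 2.4; it may simply have been missing from the mod_proxy page you looked at. Assuming that's the one (the address below is illustrative):

```apache
# Untested sketch: bind outgoing proxy connections to a specific local IP
ProxySourceAddress 192.0.2.10
```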
We have a moderately complex solution for which we need to construct a production environment.
There are around a dozen components (and here I'm using a definition of "component" which means "can fail independently of other components" - e.g. an Apache server, a WebLogic web app, an ftp server, an ejabberd server, etc). There are a number of WebLogic web apps - and one thing we need to decide is how many WebLogic containers to run these web apps in.
The system needs to be highly available, and communications in and out of the system are typically secured by SSL.
Our datacentre team will handle things like VLAN design, racking, server specification and build.
So the kinds of decisions we still need to make are:
- How to map components to physical servers (and WebLogic containers).
- Identify all communication paths, and ensure each is either resilient itself or has a resilient "upstream" path whose failover covers all single points of failure "downstream".
- Decide where to terminate SSL (on load balancers, or on Apache servers, for instance).
My question isn't really about how to make the decisions, but whether there are any standards for documenting (especially in diagrams) the design questions and the design decisions. It seems odd, for instance, that Visio doesn't have a template for something like this - it has templates for more physical layout, and for more logical /software architecture diagrams.
So right now I'm using a basic Visio diagram to represent each component and the comms between them, with plans to augment this with hostnames, ports, whether each comms link is resilient, etc.
This all feels like something that must have been done many times before. Are there standards for documenting this?
I need to provide a high-availability ftp/http file repository. Upload will happen to one server, but the uploaded file must be immediately visible on all other servers.
I can handle the failover of the servers themselves using load balancers. But in the event of failure of one server, the other servers must see the same contents of the repository. Normally I'd use a SAN for this, but in this case the data centre standards do not allow SAN/external storage - all storage will be local to the servers.
Can I use Veritas Storage Manager (or any other product) to manage mirroring the contents between servers in this way? Or does that require a SAN? I couldn't tell either way from a quick look at the data sheets etc.
Is there an ftp server that behaves as a 'distribution front end' to multiple other servers? So that when I upload a file, it accepts the contents, puts them on all of a list of other ftp servers and (importantly) does not confirm success of the upload until it's on all of the other servers?
Alternatively, if it could wait until (say) rsync had replicated the uploaded file to all the other servers before returning success (or, more generically, wait for some external command to complete before returning success).
Background:
We have an app that uploads files to a repository (using ftp or sftp), then immediately instructs a device to download the file (via http).
We need the repository to be load-balanced/highly-available/resilient. Our corporate hosting standards do not permit shared storage.
What we do with other related apps is have several ftp/http servers, and manually upload files to all of them before telling the app (and then the device) to use them. A load balancer distributes download requests. This works because those apps do not do the uploading; instead we configure them to use the URL of the previously uploaded files. The problem app doesn't do this - it does the upload itself.
We could use rsync or similar to replicate the files uploaded by the problem app to the multiple servers, but the use of these files is immediate, so they may not have replicated to the other servers when a request for them is received. The app cannot be configured to have a delay here.
But if the ftp server didn't return until the file had been replicated (either by the server itself doing all the replication/upload to other servers, or by it waiting for an external command to complete), then the app wouldn't tell the device to use the files until we knew they were everywhere. And it would all work.
Any pointers to suitable servers? Other ideas for solving the problem? (altering the app isn't possible in the timescales, unfortunately)
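I don't know of an ftp server with this behaviour built in, but the "don't confirm until it's everywhere" logic is tiny if your server can run a post-upload hook (a hypothetical integration point - it depends on the server chosen). A sketch, with local directories standing in for the remote mirrors:

```python
import shutil

def replicate(src, mirrors):
    """Copy an uploaded file to every mirror before reporting success.
    'mirrors' are local directories here purely for illustration; in a real
    deployment each entry would be a remote target, e.g.
    subprocess.run(["rsync", "-a", src, "user@host:/repo/"], check=True).
    Raises on the first failed copy, so returning at all means the file
    is on every mirror."""
    for dest in mirrors:
        shutil.copy2(src, dest)
```

The key design point is that the upload acknowledgement is only sent after replicate() returns, so the device can never be told about a file that isn't yet everywhere.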
I have a requirement where I need a number of clients to connect to a VPN. The clients should only be able to connect to other clients on the VPN - no other traffic should pass over the VPN. In particular, no traffic should pass through the server to non-VPN endpoints.
Can I set up openVPN this way?
Even better would be to have two classes of clients (Actresses and Bishops, say). Only Actresses should be able to connect to Bishops. Bishops can't connect to Actresses or other Bishops, and Actresses shouldn't be able to connect to other Actresses.
Is this possible too?
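A sketch of how this might look, under these assumptions: subnet topology, classes pinned to halves of the VPN subnet via client-config-dir, and filtering done with the server's host firewall. The crucial point is to omit client-to-client, so inter-client packets traverse the server's tun interface where iptables can see them. All addresses are illustrative:

```
# server.conf (excerpt) - no "client-to-client", so client traffic
# is routed via tun0 and subject to the host firewall
topology subnet
server 10.8.0.0 255.255.255.0
client-config-dir ccd   # pin Actresses to 10.8.0.0/25 and Bishops to
                        # 10.8.0.128/25 via per-client ifconfig-push

# iptables on the VPN server:
# allow replies to established flows, then Actress -> Bishop only
iptables -A FORWARD -i tun0 -o tun0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i tun0 -o tun0 -s 10.8.0.0/25 -d 10.8.0.128/25 -j ACCEPT
# drop everything else from the VPN - including traffic to non-VPN endpoints
iptables -A FORWARD -i tun0 -j DROP
```

For the simpler first requirement (any client to any client, nothing else), replace the middle ACCEPT with `iptables -A FORWARD -i tun0 -o tun0 -j ACCEPT` and keep the final DROP.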
We have an application that cares about the order of cookie headers. It shouldn't, since this isn't mandated by the standards, and indeed we're receiving the headers in various different orders.
So we would like to rewrite the headers in Apache so that the cookie headers always appear in a specific order. Is there any way of doing this?
An ideal solution would be specifically about cookie headers, but something that lets us mess with the header order more generally would do too.
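As far as I can tell, mod_headers can edit and merge headers but has no sort primitive, so the reordering itself would probably have to live in a small filter (mod_lua, a WSGI shim, or similar - all hypothetical options here). As a sketch of the core transformation, assuming the pairs arrive merged into a single Cookie header:

```python
def reorder_cookies(cookie_header, priority):
    """Sort the 'name=value' pairs of a Cookie header so that names listed
    in 'priority' come first (in that order); all other cookies follow,
    alphabetically, so the result is deterministic."""
    pairs = [p.strip() for p in cookie_header.split(';') if p.strip()]
    def sort_key(pair):
        name = pair.split('=', 1)[0]
        rank = priority.index(name) if name in priority else len(priority)
        return (rank, name)
    return '; '.join(sorted(pairs, key=sort_key))
```

If the client instead sends several separate Cookie header lines, they would need joining with "; " before this runs - which mod_headers' merge behaviour may already do for you.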