Does running ifconfig in Linux to change the IP and the VIP require root privileges? Or is it also possible to do this from a non-root account?
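To make the question concrete, these are the kinds of commands I mean (the interface name and addresses are just examples):

```shell
# Change the primary IP on an interface (example values):
ifconfig eth0 192.168.1.10 netmask 255.255.255.0
# Bring up the VIP as an alias on the same interface:
ifconfig eth0:0 192.168.1.100 netmask 255.255.255.0 up
```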
user76678's questions
Is it possible to configure both a VIP and a (different) private IP on the same NIC and have both enabled/accessible?
If yes, is there a guarantee that the communication path for these 2 IPs will be exactly the same?
Update: If this is possible, and I run e.g. a ping
(or traceroute
) from a specific machine, will I always get the same route for both of these 2 IPs?
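In other words, something like this run from the client machine (the two addresses are made-up examples of the private IP and the VIP):

```shell
# Do these two traces always follow exactly the same hops?
traceroute 10.0.0.5     # the private IP (example)
traceroute 10.0.0.100   # the VIP on the same NIC (example)
```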
I am reading about High Availability and I cannot understand the following statement: on failover the primary IP migrates to the backup server, BUT so must the MAC address.
Specifically, I read that every machine has a unique MAC address that can be used by all interfaces in the machine. I don't get this part. Doesn't the MAC belong to the NIC? What is meant by "interfaces" in this sentence?
Also, on failover the clients must update their IP/MAC mapping. I found 3 ways to do this, one of which is to use a custom MAC address and move it from the primary to the backup along with the public IP. How is this possible? Does high-availability software, e.g. Pacemaker, do this? How?
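My (possibly wrong) mental model of what the backup server would have to do on failover is roughly the following; the interface, MAC and address are made up, and I don't know whether this is what Pacemaker actually does:

```shell
ip link set eth0 down
ip link set eth0 address 02:00:00:aa:bb:cc   # take over the "custom" MAC
ip link set eth0 up
ip addr add 203.0.113.10/24 dev eth0         # take over the public IP
# Gratuitous ARP so that clients update their IP/MAC mapping:
arping -U -I eth0 203.0.113.10
```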
I read somewhere that using shared disks for heartbeats is reliable even if the network fails completely.
I read this in the context of remedies for split-brain.
Is this correct? I cannot understand it: if the network fails, doesn't that also affect the shared disks?
On an MS-Win 2008 SP2 system I am trying to find the group CERTSVC_DCOM_ACCESS.
I followed this link (Error in MS-CA request instructions) but I cannot seem to find that group.
Where is it? Does it have a different name?
This question is related to my SQL Data type size
question. A varchar(max)
seems to be able to store up to 2 GB.
What I cannot understand, though, is that in this link (sql row overflow) MS says (my emphasis):
A table can contain a maximum of 8,060 bytes per row. In SQL Server 2008, this restriction is relaxed for tables that contain varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns. The length of each one of these columns must still fall within the limit of 8,000 bytes; however, their combined widths can exceed the 8,060-byte limit. This applies to varchar, nvarchar, varbinary, sql_variant, or CLR user-defined type columns when they are created and modified, and also to when data is updated or inserted.
I don't understand this statement.
They say that a varchar(max)
can hold up to 2 GB, but then in the above link they say that the column length cannot be more than 8 KB.
Isn't this contradictory, or am I missing something here?
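To illustrate the two statements side by side (the table and column names are made up):

```sql
-- Limited to 8,000 bytes per the row-size discussion:
CREATE TABLE t1 (c1 varchar(8000));
-- Supposedly able to hold up to 2 GB:
CREATE TABLE t2 (c2 varchar(max));
```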
It is not clear to me how a varchar(max)
is declared or used in SQL Server, e.g. 2005.
Is a column declared as varchar(300000)
, for example, considered a varchar(max)
?
E.g. in a DB I see a table column declared as varchar(8000)
.
Can I simply increase it to varchar(300000)
?
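What I am tempted to do is the following (table and column names are made up), but I don't know whether the first statement is even valid or whether varchar(max) is the only "large" declaration:

```sql
ALTER TABLE mytable ALTER COLUMN mycol varchar(300000);
-- ...or:
ALTER TABLE mytable ALTER COLUMN mycol varchar(max);
```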
Thanks
I have 2 systems running MS Windows Server 2003 and 2008 respectively.
I have applied all the updates.
I am interested in securing them, e.g. closing all unneeded services.
My concern is that if I start shutting down services that I consider unneeded, a needed service may depend on them
and have problems.
Is there a tool that helps with this?
E.g. one that gives a report and shuts everything down while taking care of the dependencies? Or is it a manual effort?
I am using an MS-SQL Server 2008 instance as back-end DB server for a project. I did some security tests on the machine hosting my project and SQL Server and got the following report on SQL Server:
On port 1853/TCP
a database server is running (specifically MSSQL, with a version number), and this information was available in the pre-login response.
How can I hide this information, and ideally hide SQL Server altogether?
I can't find a way to create a (logon) trigger that allows access only from specific IPs, and I am not sure whether that would address my problem properly anyway.
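The only other thing I have come up with so far is blocking the port at the Windows firewall; the port number comes from the report and the rule name is made up, and I am not sure whether this hides the version information or simply breaks connectivity for legitimate clients too:

```shell
netsh advfirewall firewall add rule name="Block SQL pre-login" dir=in action=block protocol=TCP localport=1853
```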
I recently discovered that MS-SQL Server 2008 imposes an upper limit of 8,000 bytes on a column (for character data).
I need to store data that could occasionally exceed this limit.
Is there a way to do this?
The data are character strings.
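For example, is a declaration like the following (the names are made up) the intended way around the 8,000-byte limit?

```sql
CREATE TABLE long_text (id int, body varchar(max));
```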
Thanks
During SSL communication, the server sends its certificate to the client for authentication.
Optionally, the client could send its certificate too, for client authentication.
My question is: does the server (or client) send the entire chain to the other party (i.e. including the signing certificates) or only its own certificate?
I have noticed that usually only the party's own certificate is sent, but I was wondering whether this is configurable, or whether it simply does not make sense to send the entire chain.
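For reference, this is how I have been checking what the server actually sends (the hostname is an example):

```shell
# -showcerts prints every certificate the server includes in its Certificate
# message, so I can see whether intermediates are sent or only the leaf:
openssl s_client -connect example.com:443 -showcerts
```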
Thanks
For me it is trivial to configure Tomcat for client authentication, but trying to do it on an IIS 7 server (running on a Win2008R2 Server) seems impossible.
In Tomcat all I have to do is configure the container with my truststore. How is this done in IIS?
All I can find is the option in the SSL settings to require client authentication, but I cannot see how to install the certificates my server will trust. What I want to do is configure IIS to trust specific (client) certificates (not created by the domain controller, i.e. they can belong to any user).
How can I do this?
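What I have tried so far, without success, is roughly the following (the site name and file name are examples, and I am not sure this is even the right mechanism):

```shell
REM Import the CA that issued the client certificates into the machine store:
certutil -addstore Root my-client-ca.cer
REM Require client certificates for the site:
appcmd set config "Default Web Site" /section:system.webServer/security/access /sslFlags:"Ssl,SslNegotiateCert,SslRequireCert" /commit:apphost
```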
UPDATE
I followed the links but could not get it to work. Is there somewhere I can post where IIS gurus could help?