Where are the pfSense log files located, and how can they be viewed?
I have searched the documentation and it doesn't indicate the log file locations for the various pfSense components.
I am running pfSense 2.0.3 (nanobsd 4g, i386) in VirtualBox. The VM is configured with 4 GB of RAM (the host has 8 GB total) and two network interfaces set to host-only. This will eventually go on an SSD mini ATX box, but for now I am just running it in a VM to learn pfSense.
I assigned the interfaces: em0 to WAN and em1 to LAN. From the Windows host (which runs the VM) I brought up a browser and tried to connect to the LAN IP. I was intermittently getting timeouts, so I would reboot the server or use the "reboot web configurator" option. Sometimes I could get the login screen, but after logging in with the default user/pass I'd get a blank page, with absolutely no error messages or feedback of any kind. I typed the password carefully, thinking maybe it was treating me as anonymous, which according to their documentation produces a blank page by design.
After many tries and reboots I finally got the wizard screen. I completed the wizard, and the final page said it would redirect after a few moments; after a few minutes it redirected but failed to retrieve the next page. From there the web configurator was unresponsive again, timing out. I rebooted and got the same result.
How do you troubleshoot something that gives you absolutely no feedback or error messages?
Any ideas about what might be wrong are welcome, but primarily: how do I troubleshoot failures in the web configurator? Are there logs specific to the web configurator, or do I need to poke around in the web server logs, pfSense logs, etc.? Is there any documentation on the directory structure that would help me find these? I've found from distribution to distribution that each has its own idea of where user programs, logs, etc. are stored.
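In case it helps show where I'm at, this is the kind of thing I've resorted to: a minimal sketch that just walks a directory tree looking for recently modified files. The /var/log starting point is my own assumption, not anything I found in the pfSense documentation.

```python
# Minimal sketch: hunt for recently written log files when the layout is unfamiliar.
# Assumes a POSIX-style filesystem; /var/log is only my guess at where the logs live.
import os
import time

ROOT = "/var/log"          # assumption, not a documented pfSense location
MAX_AGE = 60 * 60 * 24     # "touched within the last day"

now = time.time()
candidates = []
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for name in filenames:
        path = os.path.join(dirpath, name)
        try:
            st = os.stat(path)
        except OSError:
            continue                    # unreadable or vanished; skip it
        if now - st.st_mtime < MAX_AGE:
            candidates.append((st.st_mtime, st.st_size, path))

for mtime, size, path in sorted(candidates, reverse=True):
    print(f"{time.ctime(mtime)}  {size:>10}  {path}")
```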
So my understanding of one scenario that ZFS addresses is this: a RAID 5 drive fails, and during the rebuild some corrupt blocks are encountered, so that data cannot be restored. From Googling around I don't see this failure scenario demonstrated; I find articles on disk failure, or articles on healing data corruption, but not both together.
1) Is ZFS with a 3-drive raidz1 susceptible to this problem? I.e., if one drive is lost and replaced, and data corruption is encountered while reading/rebuilding, there is no redundancy left to repair that data. My understanding is that the corrupted data will be lost, correct? (I do understand that periodic scrubbing will minimize the risk, but let's assume some tiny amount of corruption occurred on one disk since the last scrub, a different disk then failed, and the corruption is only detected during the rebuild.)
2) Does a 4-drive raidz2 setup protect against this scenario?
3) Would a two-drive mirrored setup with copies=2 protect against this scenario? I.e., one drive fails, but the other drive contains 2 copies of all data, so if corruption is encountered during the rebuild there is a redundant copy on that disk to restore from? This appeals to me because it uses half as many disks as the raidz2 setup, even though I'd need larger disks. (I try to make this arithmetic concrete in the sketch right after this list.)
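To make my own reasoning concrete, here is the arithmetic I'm doing in my head, expressed as a tiny sketch. It's a simplification of my mental model, not anything authoritative about ZFS internals, and it assumes copies=2 really does leave two copies on the surviving mirror side (ditto-block placement is best effort, as far as I understand it).

```python
# Rough tally of the redundancy in questions 1-3 above.
layouts = {
    "3-disk raidz1":           {"whole_disk_tolerance": 1, "extra_block_copies": 0},
    "4-disk raidz2":           {"whole_disk_tolerance": 2, "extra_block_copies": 0},
    "2-disk mirror, copies=2": {"whole_disk_tolerance": 1, "extra_block_copies": 1},
}

for name, info in layouts.items():
    # Spend one unit of tolerance on the failed disk, then see whether anything
    # is left to absorb a corrupt block discovered mid-rebuild.
    remaining = info["whole_disk_tolerance"] - 1 + info["extra_block_copies"]
    verdict = "can still heal a corrupt block" if remaining >= 1 else "that block is lost"
    print(f"{name:25s} after losing one disk: {verdict}")
```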
I am not committed to ZFS, but it is what I've read the most about off and on for a couple years now.
It would be really nice if there were something similar to PAR archives / Reed-Solomon that generates some amount of parity protecting against up to 10% data corruption, using an amount of space proportional to however much X% protection you want. Then I'd just use a mirror setup, and each disk in the mirror would contain a copy of that parity, which would be relatively small compared to option #3 above. Unfortunately, I don't think Reed-Solomon fits this scenario very well. I've been reading an old NASA document on implementing Reed-Solomon (the only comprehensive explanation I could find that didn't require buying a journal article), and as far as my understanding goes, the parity data would need to be completely regenerated for each incremental change to the source data; i.e., there's no easy way to make incremental changes to the Reed-Solomon parity in response to small incremental changes to the source data. I'm wondering, though, if there is something similar in concept (a proportionally small amount of parity protecting against X% corruption anywhere in the source data) that someone is aware of, but I think that's probably a pipe dream.
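For what it's worth, the overhead arithmetic I'm working from, assuming classic RS(n, k) codewords, is below; the factor of two when the corrupted positions aren't known in advance is part of why I suspect this is a pipe dream.

```python
# Back-of-the-envelope for "X% parity protects against X% corruption".
# For a classic Reed-Solomon RS(n, k) codeword, the n - k parity symbols correct
#   * up to (n - k) // 2 corrupted symbols at unknown positions, or
#   * up to (n - k) erased symbols whose positions are already known
#     (the ZFS-like case, where checksums flag *which* block is bad).

def parity_fraction_needed(corrupt_fraction, locations_known):
    """Fraction of each codeword that must be parity to ride out corrupt_fraction."""
    return corrupt_fraction if locations_known else 2 * corrupt_fraction

for frac in (0.05, 0.10):
    flagged = parity_fraction_needed(frac, locations_known=True)
    unflagged = parity_fraction_needed(frac, locations_known=False)
    print(f"{frac:.0%} corruption: {flagged:.0%} parity if bad spots are flagged, "
          f"{unflagged:.0%} if they are not")
```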
I have seen people recommend RAID 10 over RAID 5 for databases due to RAID 10 giving better performance and a better chance of recovering from a hardware failure.
This confuses me, as I thought the point of RAID 5 was more that the parity allows write errors to be detected and corrected, to ensure the integrity of the data. My understanding was that RAID 10 cannot recover from write errors: if a bit has an error, it will be the opposite of the bit on the mirrored drive, and thus it will be impossible to tell which bit is the one with the error and which is the correct one.
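To make sure I'm even asking the right question, here is a toy sketch of my mental model. It's just XOR arithmetic on single-byte "blocks", not a real RAID implementation: parity lets you rebuild a block when you already know which device is bad, but a bare mismatch (parity or mirror) doesn't by itself say which block is wrong.

```python
from functools import reduce
from operator import xor

data = [0x11, 0x22, 0x33]            # data blocks in one RAID 5 stripe
parity = reduce(xor, data)           # XOR of all data blocks

# Case 1: disk 1 is known to have failed -> rebuild its block from the rest.
survivors = [b for i, b in enumerate(data) if i != 1]
rebuilt = reduce(xor, survivors) ^ parity
assert rebuilt == data[1]            # works because the *location* of the loss is known

# Case 2: a silent bit flip on disk 1 -- parity only says "something is inconsistent".
corrupted = data.copy()
corrupted[1] ^= 0x04
print("stripe consistent?", reduce(xor, corrupted) == parity)   # False, but which block is bad?

# RAID 1/10 analogue: the two copies disagree and nothing arbitrates between them.
copy_a, copy_b = 0x22, 0x22 ^ 0x04
print("mirror copies agree?", copy_a == copy_b)                 # False, but which copy is right?
```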
However, I tried googling along the lines of detect "write error" with raid 5 vs raid 10 to see if anyone covered this point, and came up empty handed.
Am I making this all up in my head?
Can a RAID 5 array detect and recover from write errors using its parity? Or does the detection not occur until much later, when the data is read and the parity indicates an error?
If a RAID 10 array has a write error, will it be able to determine which of the mirrored bits is the one in error? I.e., does the drive report a read failure for that particular bit, or does the array just see that the bits do not match, and since there is no parity it can't determine which is in error?
I see some discussion of rebuilds being triggered by a read error. Do write errors not get detected until later, when the data is read? In other words, does the write error occur but the erroneous data just sits there, possibly until much later when the data is read and the parity indicates an error? Is that why you are at risk of additional read errors during a rebuild: you could have written a large amount of data with errors, but those errors will not be detected until the next time the data is read?
I would like to clarify that tape backups do not address the above question. If you have a scenario where data integrity is very important, and you can't detect write errors, then all the tape backups in the world won't help you if the data you are backing up already has errors.
Scenario: an Excel file with an SSAS data source connection.
Pivot tables/charts with filters and slicers.
Published to SharePoint 2010, such that users access the report as an Excel Web Access Web Part and can't break/change the pivot table other than by changing filters and slicers.
Note that I am NOT talking about PowerPivot; this is just a regular data source connection.
As users access the report, I would like the most current data (within the last day) from SSAS to be reflected in the report. Assume that the SSAS database is already refreshed daily.
1) Does Excel Services and/or the web part automatically refresh the data when the report is opened/viewed through the Excel Web Access Web Part? And/or can it be configured to refresh the data periodically?
2) What are the server software requirements to support this? Does it require SQL Server Enterprise edition, or is Standard enough? Does it require SharePoint 2010 Enterprise, or is Standard enough? When I say "support this" I mean both the Excel Web Access Web Part and the refreshing of the data from SSAS into Excel.
3) Does the Web Part allow users to interact only with the filters/slicers, or will they be able to mess with/break the pivot table? If so, will the changes/breakage they make be persisted only for their session, and not affect future sessions or other users?
I would test this myself, but I have had difficulty getting the SharePoint trial set up on my home computer (ultimately I won't even be the person setting it up in production anyway), and I just want to know which edition will support these features. Thanks.
If I have a mirrored pair of 250 GB drives in a pool, and I later buy two more drives and add another mirrored pair to the same pool, can that second mirrored pair be 500 GB, such that my total usable space would be 750 GB?
Or do all the mirrored pairs in a pool need to be the same size?
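The arithmetic I'm hoping holds, assuming the pool stripes across vdevs and each mirror vdev contributes the capacity of its smaller member:

```python
# Hoped-for capacity math for the mixed-size pool described above.
mirror_vdevs = [(250, 250), (500, 500)]              # GB per drive in each pair
usable_gb = sum(min(pair) for pair in mirror_vdevs)  # each vdev gives its smaller member
print(f"usable space: {usable_gb} GB")               # 750 GB, if my assumption is right
```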
We are doing nightly full backups and noon differential backups. We use the Full recovery model with SQL Server 2005, but the logs are never backed up and are truncated (TRUNCATE_ONLY) after the full backup.
Restoring to a point in time is not a requirement; restoring to one of the nightly or noon backups is sufficient (not my decision).
So the question at hand is: since they are throwing away the logs every night, is there any reason not to use the Simple recovery model? Is there any benefit to using the Full recovery model if we are throwing out the logs every night?
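For concreteness, this is the change we'd be making if the answer is "just switch" (a sketch via pyodbc; "MyDb" and the connection details are placeholders, not our real setup):

```python
# Hedged sketch of switching a database to the Simple recovery model via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=.;DATABASE=master;Trusted_Connection=yes",
    autocommit=True,
)
cur = conn.cursor()

# Check the current model, then flip to SIMPLE so the log truncates on checkpoint
# and the nightly TRUNCATE_ONLY step would no longer be needed.
cur.execute("SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'MyDb'")
print(cur.fetchone())
cur.execute("ALTER DATABASE [MyDb] SET RECOVERY SIMPLE")
conn.close()
```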
Thanks.
I installed SQL Server 2008 and typed in an instance name of sql2008, but it seems to have been installed as a default instance. Trying to connect using .\sql2008 fails, but using just the computer name succeeds. The service is listed with (MSSQLSERVER) as if it were the default instance, yet the data directories all have the .sql2008 instance-name suffix. So not only is it not what I wanted, but the data directories and service names have inconsistent suffixes. Was there something else I needed to do besides specifying an instance name during installation? Maybe a checkbox I missed?
Is there a way to change from a default instance to a named instance?
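To show what I'm actually observing, here's roughly how I've been testing connectivity (a pyodbc sketch; the server names are just the shorthand from above, and the driver/auth details are placeholders):

```python
# Sketch of the default-vs-named-instance distinction I'm bumping into.
# A default instance answers to just the machine name; a named instance needs
# the machine\instance form.
import pyodbc

CONNECTIONS = {
    "default instance (.)": r"DRIVER={SQL Server};SERVER=.;Trusted_Connection=yes",
    r"named instance (.\sql2008)": r"DRIVER={SQL Server};SERVER=.\sql2008;Trusted_Connection=yes",
}

for label, conn_str in CONNECTIONS.items():
    try:
        pyodbc.connect(conn_str, timeout=5).close()
        print(f"{label}: connected")
    except pyodbc.Error as exc:
        print(f"{label}: failed ({exc})")
```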
Forgive me if I use any of these terms incorrectly.
I am wondering if there is any kind of software that would allow me to "join" two computers together such that a cluster-unaware application could utilize their combined computing resources. By "cluster unaware" I mean an application that isn't designed to share work across multiple servers. My understanding is that clustering is normally enabled by the specific application's own architecture, with messaging between multiple instances of the application coordinating the sharing of work. Instead, I am looking for something that enables clustering at the OS or virtualization level, so that any application could essentially be clustered.
Failing that, I am also wondering about the following scenario: we have three applications, call them A, B, and C, and two single-core computers. At any given time, any combination of those applications may be CPU intensive. In cases where only two of the apps are very active, I'd like one of them moved over to the other server. In a nutshell, I'm after some sort of dynamic, automatic shuffling of the applications' load.
I have heard of virtual machines that can be migrated across physical machines while live, but I am wondering if this can be done automatically in response to an application's or VM's CPU activity?
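To pin down what I mean by "dynamic automatic shuffling", here's a toy sketch. The app names, host names, and the migrate() call are entirely hypothetical; psutil just supplies CPU numbers, and a real version would call into whatever live-migration API the hypervisor exposes.

```python
# Toy sketch of load-driven shuffling between two single-core hosts.
# Everything named here (apps A/B/C, host1/host2, migrate) is hypothetical.
import time
import psutil

PLACEMENT = {"A": "host1", "B": "host1", "C": "host2"}   # current app -> host map
HOSTS = ["host1", "host2"]
BUSY_PCT = 80.0                                          # CPU% that counts as "very active"

def app_cpu_percent(process_name, sample_seconds=1.0):
    """Total CPU% over a short sample for all processes with this name."""
    procs = [p for p in psutil.process_iter(["name"]) if p.info["name"] == process_name]
    for p in procs:
        p.cpu_percent(None)            # prime the per-process counters
    time.sleep(sample_seconds)
    total = 0.0
    for p in procs:
        try:
            total += p.cpu_percent(None)
        except psutil.NoSuchProcess:
            pass
    return total

def migrate(app, host):
    """Placeholder for a real live-migration call into the hypervisor."""
    print(f"would move {app} to {host}")
    PLACEMENT[app] = host

while True:
    busy = {a for a in PLACEMENT if app_cpu_percent(a) > BUSY_PCT}
    per_host = {h: [a for a in busy if PLACEMENT[a] == h] for h in HOSTS}
    crowded = [h for h in HOSTS if len(per_host[h]) >= 2]
    idle = [h for h in HOSTS if not per_host[h]]
    if crowded and idle:
        # Two busy apps are fighting over one core while the other box sits idle.
        migrate(per_host[crowded[0]][0], idle[0])
    time.sleep(30)
```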