In Linux I have no problem dumping tons of data into a domain socket, but the same code on OS X 10.6.2 blows up after about 65 records. The socket reader code looks like
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket;
unlink "foo";
my $sock = IO::Socket::UNIX->new(
    Local   => 'foo',
    Type    => SOCK_DGRAM,
    Timeout => 600,
) or die "Could not create socket: $!\n";
while (<$sock>) {
    chomp;
    print "[$_]\n";
}
And the client code looks like
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket;
my $sock = IO::Socket::UNIX->new(
    Peer    => 'foo',
    Type    => SOCK_DGRAM,
    Timeout => 600,
) or die "Could not create socket: $!\n";
for my $i (1 .. 1_000_000) {
    print $sock "$i\n" or die $!;
}
close $sock;
The error message I get is "No buffer space available at write.pl line 15." It seems fairly obvious that there is a difference in buffer size between Linux and OS X, but I don't know how to set it on OS X (or what the possible negative side effects might be).
This is really not good code: it sends packets as fast as it can, so of course it runs out of buffer space. I don't know why Linux doesn't run out, but that is an oddity, not something to rely on. Increasing the buffer space won't help; it will just hide the bad code.
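If you want to see whether a larger buffer makes any difference at all, the sketch below (mine, not code from this answer) asks for a bigger per-socket send buffer on the client. It assumes that SO_SNDBUF is honored for UNIX-domain datagram sockets on OS X, which may not be the case; the effective limits for local datagram sockets are reportedly kernel sysctls (for example net.local.dgram.recvspace, a name you should verify on your own system) rather than anything the program can set.

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket;
use Socket qw(SOL_SOCKET SO_SNDBUF);

my $sock = IO::Socket::UNIX->new(
    Peer    => 'foo',
    Type    => SOCK_DGRAM,
    Timeout => 600,
) or die "Could not create socket: $!\n";

# Ask for a larger send buffer; the 256 KB figure is purely illustrative.
my $wanted = 262_144;
setsockopt($sock, SOL_SOCKET, SO_SNDBUF, $wanted)
    or warn "could not set SO_SNDBUF: $!";

# getsockopt returns a packed int; unpack it to see what the kernel granted.
if (defined(my $packed = getsockopt($sock, SOL_SOCKET, SO_SNDBUF))) {
    printf "SO_SNDBUF is now %d bytes\n", unpack "i", $packed;
}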
You can try that, but I would heed the suggestion from Michael Graff that you should include some backoff and retry logic in your application code instead; one way that could look is sketched below.
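Here is a minimal sketch of that backoff-and-retry idea (my own illustration, not Michael Graff's code). It assumes a failed send sets $! to ENOBUFS, which is the errno behind "No buffer space available", and it uses Time::HiRes::usleep for the delays; the specific delay values are arbitrary.

#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket;
use Errno qw(ENOBUFS);
use Time::HiRes qw(usleep);

my $sock = IO::Socket::UNIX->new(
    Peer    => 'foo',
    Type    => SOCK_DGRAM,
    Timeout => 600,
) or die "Could not create socket: $!\n";

for my $i (1 .. 1_000_000) {
    my $delay = 1_000;    # microseconds; initial pause before a retry
    until (defined $sock->send("$i\n")) {
        # "No buffer space available" is ENOBUFS; anything else is fatal.
        die "send failed: $!\n" unless $! == ENOBUFS;
        usleep($delay);
        $delay *= 2 if $delay < 500_000;    # back off, capped around half a second
    }
}

close $sock;

The exact constants don't matter much; the point is to stop hammering the socket as soon as the kernel reports that its buffers are full, instead of dying.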