Let's say a (very) large process is crashing and dumping core, and we know the cause from other information (possibly an assert message, maybe something else).
Is there a way to stop the core dump from being completely generated, since it's a waste in this case?
For instance, would kill -9 of a core dumping process interrupt the corefile generation?
Obviously, if we knew ahead of time that we don't want core dumps, we could set the ulimit appropriately or use the OS's various core file control utilities.
But this question is about the "core dump already in progress" stage...
(For instance, imagine I'm the requestor in https://stackoverflow.com/questions/18368242/how-to-bypass-a-2tb-core-dump-file-system-limit and don't want to waste 5-6 TB of disk space :) )
Generally: no, there is no way to reliably kill a core dump once it has started.
That said, there is a possibility on Linux at least; on commercial *NIX there is most likely no way.
The possibility lies in the fact that the 3.x kernel series is able to interrupt file writing, so one option is to find the process that is doing the dumping and repeatedly send SIGKILL to it until the kill takes effect.
There is a kernel patch series that fixes this issue to some extent.
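As a rough sketch of that approach (assuming a kernel where a fatal signal can interrupt the dump writer, and assuming you already know the PID of the dumping process), a retry loop along these lines could be used:

```python
import os
import signal
import sys
import time

# PID of the process currently writing the core file, taken for example
# from ps output or from the assert message; passed on the command line here.
pid = int(sys.argv[1])

while True:
    try:
        # SIGKILL cannot be blocked, but a process busy in kernel-side dump
        # I/O may only honour it between writes, hence the retry loop.
        os.kill(pid, signal.SIGKILL)
    except ProcessLookupError:
        # The process is gone, so the dump has stopped (or finished).
        break
    time.sleep(0.1)
```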
Another possibility is to use the alternate syntax for kernel.core_pattern. The manual (core(5)) says that since 2.6.19, instead of a filename pattern you can specify a pipe and a program (with parameters) that will handle the dump. That gives you control over which dumps get written where (/dev/null being the obvious destination for your useless cores).
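A minimal sketch of such a pipe handler, assuming it lives at the made-up path /usr/local/bin/discard_core.py and is registered (as root) with something like: sysctl -w kernel.core_pattern='|/usr/local/bin/discard_core.py %e %p'

```python
#!/usr/bin/env python3
# Sketch of a kernel.core_pattern pipe handler: the kernel feeds the core
# image to stdin, and this script simply drains and discards it.
import sys

def main():
    exe = sys.argv[1] if len(sys.argv) > 1 else "unknown"   # %e: executable name
    pid = sys.argv[2] if len(sys.argv) > 2 else "?"         # %p: PID of dumping process

    # The handler is expected to consume the dump from stdin; read it in
    # 1 MiB chunks and throw it away instead of filling the filesystem.
    dropped = 0
    chunk = sys.stdin.buffer.read(1 << 20)
    while chunk:
        dropped += len(chunk)
        chunk = sys.stdin.buffer.read(1 << 20)

    sys.stderr.write("discarded %d bytes of core from %s (pid %s)\n"
                     % (dropped, exe, pid))

if __name__ == "__main__":
    main()
```

Note that a handler like this only helps for the next crash; it does not abort a dump that is already being written to a plain file.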
This patch also deserves a bit of attention: http://linux.derkeiler.com/Mailing-Lists/Kernel/2010-06/msg00918.html
Check this link out, it may be helpful:
https://publib.boulder.ibm.com/httpserv/ihsdiag/coredumps.html
It looks like you could run ulimit -c (assuming you're using bash) to limit the core dump size; a programmatic equivalent is sketched after the links below.
See: https://askubuntu.com/questions/220905/how-to-remove-limit-on-core-dump-file-size
and
http://ss64.com/bash/ulimit.html
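For completeness, the same limit can be set from inside a process before it crashes; a small sketch using Python's resource module (this is preventive only and does not stop a dump that is already in progress):

```python
import resource

# Equivalent to running "ulimit -c 0" in the launching shell: set both the
# soft and hard RLIMIT_CORE limits to zero so no core file is written.
resource.setrlimit(resource.RLIMIT_CORE, (0, 0))

# Any later fatal signal in this process will now skip core file generation.
```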