Simply: is there any quick alternative to the iptables -F
command (one that just "deletes everything") for nftables?
Such a thing wouldn't have much theoretical purpose, but it's usually a lifesaver when administering bad/gone-wrong setups.
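As far as I know, nftables does provide a single "delete everything" command, `nft flush ruleset`, which wipes all tables, chains, and rules across every address family at once:

```shell
# Show the whole ruleset first (worth doing before nuking a live box)
nft list ruleset

# Delete every table, chain, and rule in all address families at once
nft flush ruleset
```

Both commands need root; `nft list ruleset > backup.nft` beforehand gives you something to restore with `nft -f backup.nft` if the flush turns out to be a mistake.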
For traffic shaping I'm currently using a setup that looks exactly like the setup from LARTC, on this page:
http://lartc.org/howto/lartc.adv-filter.hashing.html
I have a simple problem with that: every time I want to modify something in the hash table (like assigning an IP to a different flowid), I need to delete the whole filter table and re-add it filter by filter. (I don't actually do it by hand, I have a nice program that does it for me... but still...) The problem is that I have roughly 10k filters allocated this way, and deleting and refilling the whole filter table can get pretty lengthy, which is not exactly good for traffic shaping. My program could easily delete only the rules that need to be deleted (reducing the whole problem to a few commands and milliseconds), but I simply don't know the command that deletes just one hashing rule.
My tc filter show:
filter parent 1: protocol ip pref 1 u32
filter parent 1: protocol ip pref 1 u32 fh 2: ht divisor 256
filter parent 1: protocol ip pref 1 u32 fh 2:a:800 order 2048 key ht 2 bkt a flowid 1:101
match 0a0a0a0a/ffffffff at 16
filter parent 1: protocol ip pref 1 u32 fh 2:c:800 order 2048 key ht 2 bkt c flowid 1:102
match 0a0a0a0c/ffffffff at 16
filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 link 2:
match 00000000/00000000 at 16
hash mask 000000ff at 16
The wish: a 'tc filter del ...' command that removes only one specific filter (for example the 0a0a0a0a match, i.e. IP address 10.10.10.10). Removing some small subgroup would also be acceptable; for example, I could still recreate one bucket (bkt a) pretty fast.
My attempts: I tried to number all the filters using prio, but that didn't help; it just creates something unusable (but deletable) below, and the bucketed filters remain there after that gets deleted.
Any ideas?
edit - I'm adding a simplified tl;dr description of the problem:
I created a hash filter on some interface just like in this LARTC example: http://lartc.org/howto/lartc.adv-filter.hashing.html
I want to find a command that deletes one rule (e.g. the one for 1.2.1.123) from the table, leaving the rest untouched and working.
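For what it's worth, here is what I'd expect to work, using the handles that `tc filter show` prints (the device name is from my example; treat the exact syntax as an assumption to verify rather than a tested recipe). The handle has the form ht:bucket:item, so 2:a:800 is the 10.10.10.10 entry above:

```shell
# Hypothetical: delete a single u32 entry by its handle (ht:bucket:item),
# as printed by 'tc filter show'; 2:a:800 is the 10.10.10.10 match here
tc filter del dev eth0 parent 1: protocol ip prio 1 handle 2:a:800 u32

# Re-adding an entry into a specific bucket of hash table 2 should look like:
tc filter add dev eth0 parent 1: protocol ip prio 1 handle 2:a:801 u32 \
    ht 2:a: match ip src 10.10.10.10/32 flowid 1:101
```

The key point is that deletion goes by handle, not by match, so the program generating the filters would need to remember (or recompute) which handle each IP landed in.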
I'm currently working on a traffic-shaping solution for ISP-level companies, and I've come to an interesting (kind of philosophical) problem.
Looking at the number of endpoints the system should handle (around ~20k), I got a little worried about what would happen when I need to police/shape traffic for more users. As I am currently using an HFSC shaping tree (see tc-hfsc; mostly the same-but-cooler thing as the better-known HTB) for the whole network, I'd need to use more ClassIDs (obviously at least one for each user on the network). The problem I found is that TC ClassIDs are kind of limited: they are 16-bit numbers, which gives me a possible maximum of 64k users shaped by this solution.
Similarly, if I want to manage TC filters efficiently (i.e. not using the 'flush all' technique), I need to be able to delete or modify individual filter entries. (I'm using something similar to the hash table from LARTC [1].) Again, the only method that seems to work for this is to number all the filters using individual priorities (tc filter add dev ... prio 1). There's no other parameter that could be used for this purpose, and, regrettably, prio is 16-bit-only as well.
My question is this: does there exist some good method for enlarging the available "identifier space", such as 32-bit classids for the 'tc class' command, and 32-bit priorities (or any other modification handle) for the 'tc filter' command?
Thanks very much,
-mk
(btw I hope this will not go to "64k users should be enough for everyone" scenario...)
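One workaround I can think of (an assumption on my part, not a documented way to raise the limit): split users across several shaping trees, each attached to its own IFB device, so that every device contributes a separate 16-bit classid/prio space. A rough sketch, with the subnet split being an example of mine:

```shell
# Hypothetical sketch: give each slice of users its own IFB device, so each
# slice gets a full 16-bit classid/prio space of its own
ip link add ifb0 type ifb
ip link set ifb0 up

tc qdisc add dev ifb0 root handle 1: hfsc default 1

# Steer one slice of the network (here 10.0.0.0/17, an example subnet)
# from the ingress side of eth0 into ifb0's shaping tree
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: protocol ip u32 \
    match ip src 10.0.0.0/17 \
    action mirred egress redirect dev ifb0
```

The cost is that users on different IFB devices can no longer borrow bandwidth from a common parent class, so the split only makes sense along lines where that sharing isn't needed anyway.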
So I have an enormously large file (around 10GB) and need to sort it, just like with the 'sort' utility, but more efficiently.
The problem is that I don't have the memory, CPU power, time, or free swap space to power a full sort.
The good thing is that the file is already partially ordered (I can say that every line's distance from its final position is less than some value N). This kind of reminds me of the classic computer-class example of using heapsort with a heap of size N for exactly this purpose.
Question: Is there some unix tool that already does that effectively, or do I need to code one myself?
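I don't know of a stock tool flag for this, but the bounded-buffer idea is small enough to sketch in awk. The buffer size (n=1000) is a placeholder for N, and the linear min-scan is my simplification; a real heap would make each extraction O(log N) instead of O(N):

```shell
# Streaming "window sort": keep a buffer of up to n pending lines and
# always emit the smallest one. Correct only if no line is more than n
# positions away from its sorted place.
awk -v n=1000 '
function emit_min(   i, mi) {
    mi = ""
    for (i in buf) if (mi == "" || buf[i] < buf[mi]) mi = i
    print buf[mi]; delete buf[mi]; cnt--
}
{ buf[NR] = $0; cnt++; if (cnt > n) emit_min() }
END { while (cnt > 0) emit_min() }
' bigfile > bigfile.sorted
```

Memory stays at O(N) lines regardless of file size, and the file is read in a single pass, which is the part a full `sort` can't promise. Note the comparison is awk's default string ordering, so it matches `sort`'s byte-wise order only in the C locale.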
Thanks -mk
So I'm trying to do roughly this
http://wiki.mikrotik.com/wiki/PCC
on Linux.
To explain a little further: PCC just takes, say, source address of the packet, hashes it, divides the hash by some number, and if the remainder is equal to some other number, it makes a rule match.
I'm actually using this to divide my network, mostly randomly, into several almost-equally-sized groups. More specifically, six such groups would look like this:
Group 1: pcc_hash(source IP) % 6 = 0
Group 2: pcc_hash(source IP) % 6 = 1
... etc
The groups are then given some kind of common resource to share (say, bandwidth, or a public IP address) that shouldn't change very often (especially the public IP address).
My question: is there some good method to divide the network into any number of stochastically equal groups using some similar, preferably simple, iptables rules?
I've succeeded in splitting the network into powers of two using u32 (2^n subnets just by matching the last n bits of the source IP address). But some randomness would be great too, and splitting the network into anything like exact thirds is impossible that way. Moreover, Mikrotiks are essentially Linux-based, so there has to be a way to do it :D
Is someone here aware of a good method, or at least of some good u32 documentation that would make this possible?
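The closest thing I'm aware of on Linux (treat this as a pointer to verify, not a tested recipe) is the iptables HMARK target, which hashes chosen header fields, applies a modulus, and writes the result into the packet mark, which is essentially the PCC recipe:

```shell
# Hypothetical sketch: hash the source address, take it mod 6, and store
# the result (offset by 1, so marks come out as 1..6) in the packet mark
iptables -t mangle -A PREROUTING -j HMARK \
    --hmark-tuple src \
    --hmark-mod 6 --hmark-offset 1 \
    --hmark-rnd 0xfeedcafe

# Each group can then be matched by its mark, e.g. '-m mark --mark 1'
```

Because the mark is a pure function of the source address (and the fixed seed), a given host always lands in the same group, which is exactly the stickiness wanted for the public-IP case; the --hmark-rnd value here is just an example seed.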
Thanks in advance
-mk
This isn't a real problem, but I guess it can point to something more serious: I recently upgraded to the 2.6.36 Linux kernel, and the load average doesn't go below 1.0, no matter how few tasks I have, no matter that CPU load is 0% and there are no processes waking up.
I wonder what could be causing this, and whether there's some nice way to debug this "problem".
I'm hoping it won't lead to anything more serious (like some silent piece of kernel code causing wakeups). The only problem it causes now is that the 1.0 'floor' doesn't look very healthy on graphs.
Can this be caused by tickless kernel?
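One first check that costs nothing (assuming nothing about the actual cause): on Linux the load average counts not only runnable tasks but also tasks in uninterruptible sleep (state D), so a single permanently stuck kernel thread can pin the load at 1.0 even with 0% CPU:

```shell
# Current load averages (1/5/15 min) plus running/total task counts
cat /proc/loadavg

# List tasks in uninterruptible sleep (state D); one such task stuck
# forever would explain a load average that never drops below 1.0
ps -eo state,pid,comm | awk '$1 ~ /^D/ {print $2, $3}'
```

If the same PID shows up there across many samples, `cat /proc/<pid>/stack` (as root) should show where in the kernel it is blocked.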