Using ansible with iptables

Using ansible to manage iptables is a bit clunky, particularly with rule ordering and duplication. The ansible iptables module does not check for existing rules before plopping new ones in, and on older systems (without the -C flag) it can be tricky to check whether a rule already exists. This post explores using ansible to update the firewall on a system that isn't 100% orchestrated – a system that had a standard starter firewall applied, but may have since diverged through custom or unexpected rules applied manually. On such systems we can't simply flush the tables to avoid rule duplication, but we found a way to trim duplicates with awk.
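For context, on systems new enough to have it, the -C flag lets a script test for a rule before appending it. A minimal sketch of that idiom, using a placeholder rule:

# Check-then-add: -C exits 0 if the rule exists, non-zero otherwise,
# so the -A only runs when the rule is missing. The rule itself is
# just an illustrative placeholder.
iptables -C INPUT -s 203.0.113.113/32 -j ACCEPT 2>/dev/null \
  || iptables -A INPUT -s 203.0.113.113/32 -j ACCEPT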
Given the limits of ansible's iptables module (it does not check for rule existence first), it's worth considering lineinfile or blockinfile to update an iptables file produced by iptables-save. For managing multiple rules, blockinfile makes sense, and we can use its marker option to try to avoid duplicates. But because the systems aren't 100% orchestrated, if an admin ever runs iptables-save to manually update the firewall, any ansible markers would be lost and subsequent playbook runs would result in rule duplication. Perhaps we could get creative with our markers and use actual firewall rules rather than file comments.
# This file demonstrates a creative use of markers. By using comments
# on actual firewall rules, the markers will persist through an
# iptables-save, whether performed by orchestration or by a
# sysadmin manually SSH'd into the box.
- name: Add new rules to firewall
  blockinfile:
    path: /etc/sysconfig/iptables
    marker: '-A INPUT -i eth99 -j DROP -m comment --comment "{mark} AWX Rules"'
    insertafter: '-A INPUT -i lo -j ACCEPT'
    block: |
      -A INPUT -i eth1 -s 203.0.113.113/32 -j ACCEPT -m comment --comment "Host XYX"
      -A INPUT -i eth1 -s 198.51.100.98/32 -j ACCEPT -m comment --comment "Host QRS"
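Editing the saved file only changes what's on disk; the kernel won't see the new rules until they're loaded back in. A sketch of a follow-up task, assuming the RHEL-style /etc/sysconfig/iptables path carried over from above (adjust for other distributions):

# Load the edited ruleset back into the kernel. Reading from
# stdin keeps this compatible with older iptables-restore builds
# that don't accept a file argument.
- name: Reload firewall rules after editing the saved file
  shell: iptables-restore < /etc/sysconfig/iptables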
Another solution to rule duplication is to filter the duplicates out. Both sort and uniq have limitations here: uniq can only detect a duplicate if the copies appear on adjacent lines, and sort will group duplicates together but destroys the original ordering in the process. In a block of firewall rules, where order is significant, the duplicated rules will rarely be adjacent.
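To see the limitation concretely, here's uniq leaving a non-adjacent duplicate untouched (the rule text is placeholder data):

$ printf 'rule A\nrule B\nrule A\n' | uniq
rule A
rule B
rule A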
There is an inventive solution that uses sort to filter non-adjacent duplicates by temporarily inserting a line index, using that index to restore the original order after filtering, and then removing the index again.
cat -n file_name | sort -uk2 | sort -n | cut -f2-
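Running the same placeholder data through that pipeline shows the duplicate removed while the original order survives:

$ printf 'rule A\nrule B\nrule A\n' | cat -n | sort -uk2 | sort -n | cut -f2-
rule A
rule B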
This won't work for firewalls, however. If we're trimming an iptables-save file, we need to retain the duplicate COMMIT lines, which is how I stumbled onto a grand-daddy of linux utilities: awk.
With the magic of awk, we can filter out non-adjacent duplicates without messing about with re-ordering the stream.
awk '!a[$0]++'
Just pipe or cat a file or stream through that, and the result will be de-duplicated. How it works is a bit subtle because parts of the program are implied: the expression is a pattern with no action, so awk falls back to the default action of printing the line whenever the pattern is true. The array a gains a key for every unique line; a[$0]++ evaluates to 0 the first time a line is seen (and increments afterwards), so !a[$0]++ is true only on that first sight, and duplicates are rejected as the stream is processed. We can extend this to ignore comment lines, COMMIT lines, and whatever else needs to be allowed as a duplicated line.
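One way to spell out those implied parts long-hand is an equivalent program with the pattern and action written explicitly:

# Equivalent long-hand: an uninitialized array element compares
# equal to 0, so the line prints only the first time it is seen.
awk '{ if (a[$0] == 0) print; a[$0]++ }'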
# Accept duplicated comments or COMMIT lines, but flush all
# other duplicated lines.
awk '/^#/ || /COMMIT/ || !a[$0]++'
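One way to put that to work is a single pipeline that de-duplicates the live ruleset, though you may prefer to inspect the output before restoring it:

# Dump the current rules, trim the duplicates, and load the
# result straight back into the kernel.
iptables-save | awk '/^#/ || /COMMIT/ || !a[$0]++' | iptables-restore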