Every developer or operator who has spent any appreciable time in the field has a closet full of lists like this. So, here's my list of technologies I have deployed, studied, supported, developed against, documented, or contributed to. Of course, this does not represent the complete set of software to which I have been exposed over the years; not only would capturing it all be impractical, but much of it was proprietary anyway.
- scamper for massively parallel traceroute, including the RFC 4950 ICMP extensions for MPLS.
- iproute2 for link and address management as well as policy routing on Linux
- The Linux tc tool and traffic control subsystem.
- High availability tooling heartbeat and keepalived.
- BGP/MRT tools such as dpkt, mrtparse, and custom, proprietary libraries built for handling serialized forms of RFC 6396 MRT data.
- tcpdump, libpcap, and derived tools like Wireshark for diagnostic packet tracing and visualization
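The MRT framing mentioned above is simple enough to walk with nothing beyond ``struct`` and ``seek()``. A minimal sketch of the RFC 6396 common header (the two records below are synthetic; type 16 is BGP4MP, subtype 4 is BGP4MP_MESSAGE_AS4):

```python
import io
import struct

# RFC 6396 MRT common header: 32-bit timestamp, 16-bit type,
# 16-bit subtype, 32-bit message length; all fields big-endian.
MRT_HEADER = struct.Struct(">IHHI")

def walk_mrt_records(fh):
    """Yield (timestamp, type, subtype, length) for each record,
    seeking past each message body instead of reading it."""
    while True:
        header = fh.read(MRT_HEADER.size)
        if len(header) < MRT_HEADER.size:
            break  # clean EOF (or a truncated header)
        timestamp, mtype, subtype, length = MRT_HEADER.unpack(header)
        yield timestamp, mtype, subtype, length
        fh.seek(length, io.SEEK_CUR)  # skip the message body

# Two synthetic records standing in for a real MRT dump file.
buf = io.BytesIO(
    MRT_HEADER.pack(1700000000, 16, 4, 3) + b"\x00" * 3 +
    MRT_HEADER.pack(1700000060, 16, 4, 0)
)
records = list(walk_mrt_records(buf))
```

Seeking past the bodies is what makes this pattern cheap on multi-GB dump files: only the 12-byte headers are ever read for records you don't care about.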
- RPM Package Manager: for C with Autotools and CMake; for shell with atelerix; for Perl with cpan2rpm; and for Python with setuptools
- Travis CI for software build and automated tests
- Open Build Service (OBS) to perform Continuous Build, resulting in consistent, rebuildable, operationally installable software distributions; (see also thoughts on continuous build and microservices)
- PostgreSQL; 8.x and 9.x series, mostly DDL and DML; experience with dozens of data sets ranging from tiny (which fit in memory) to multi-TB; psql and SQLAlchemy tools; PostgreSQL is my preferred SQL engine.
- SQLite; prototyping and embedding in other projects that were not data-intensive
- MySQL; mostly 3.x and 4.x series, including building a high availability MySQL server on top of shared block storage and a pair of cooperating heartbeat/STONITH nodes. (I would not do that again; there are better ways, especially now; and I also prefer PostgreSQL.)
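The "prototyping" use of SQLite above usually looks like this: an in-memory database spun up in a few lines to exercise a schema before committing to a heavier engine. A hypothetical sketch (the ``prefix`` table and its columns are illustrative, not from any real project):

```python
import sqlite3

# An in-memory database: nothing to install, nothing to clean up.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE prefix (network TEXT PRIMARY KEY, origin_asn INTEGER)"
)
conn.executemany(
    "INSERT INTO prefix VALUES (?, ?)",
    [("192.0.2.0/24", 64500), ("198.51.100.0/24", 64501)],
)
(count,) = conn.execute("SELECT count(*) FROM prefix").fetchone()
```

Swapping ``":memory:"`` for a filename gives a persistent single-file database, which is most of what an embedded, non-data-intensive project needs.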
- pandas, a great tool for in-memory data analysis, especially in concert with Jupyter Notebook.
- Core Hadoop and HDFS management tooling on a ~100TB data set; many map/reduce applications
- Supported custom binary data formats using syscall primitives: seek() and read()
- radix / patricia tries for modeling network prefix relationships (arbitrary node contents, for programmer flexibility)
- directed acyclic graphs (and the usual array of trees)
- serialized adjacency matrices to capture weighted graphs bigger than memory
- standard deviations on sliding windows for anomaly detection
- exponential weighted moving averages
- data set indexing and partitioning (for SQL and other tooling)
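The prefix-trie item above can be sketched concretely. This is a plain binary trie rather than a path-compressed PATRICIA trie, but it shows the essential property: longest-prefix match with arbitrary payloads on the nodes (the class and payload values here are illustrative):

```python
import ipaddress

class PrefixTrie:
    """Binary trie keyed by prefix bits; each node may carry an
    arbitrary payload, found again by longest-prefix match."""

    def __init__(self):
        self.root = {}  # children keyed by bit 0/1; payload under "payload"

    def insert(self, prefix, payload):
        net = ipaddress.ip_network(prefix)
        bits = int(net.network_address)
        node = self.root
        for i in range(net.prefixlen):
            bit = (bits >> (net.max_prefixlen - 1 - i)) & 1
            node = node.setdefault(bit, {})
        node["payload"] = payload

    def lookup(self, address):
        addr_obj = ipaddress.ip_address(address)
        addr, maxlen = int(addr_obj), addr_obj.max_prefixlen
        node, best = self.root, None
        for i in range(maxlen):
            if "payload" in node:
                best = node["payload"]  # remember the longest match so far
            bit = (addr >> (maxlen - 1 - i)) & 1
            if bit not in node:
                break
            node = node[bit]
        else:
            if "payload" in node:  # exact host-route match
                best = node["payload"]
        return best

t = PrefixTrie()
t.insert("10.0.0.0/8", "rfc1918")
t.insert("10.1.0.0/16", "site-1")
```

A lookup for ``10.1.2.3`` returns the /16 payload while ``10.9.9.9`` falls back to the covering /8, which is exactly the containment relationship the bullet above alludes to.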
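The serialized adjacency matrix works because cell ``(u, v)`` has a computable byte offset, so a single edge weight can be fetched with one ``seek()`` and one ``read()`` no matter how large the matrix is. A sketch of the offset arithmetic (``io.BytesIO`` stands in here for a real on-disk file, where the matrix genuinely would not fit in memory):

```python
import io
import struct

EDGE = struct.Struct(">d")  # one big-endian 8-byte float per cell

def write_matrix(fh, n, edges):
    """Serialize an n-by-n weighted adjacency matrix row-major:
    cell (u, v) lives at byte offset (u * n + v) * 8."""
    fh.write(b"\x00" * (n * n * EDGE.size))  # zero-fill: 0.0 everywhere
    for u, v, w in edges:
        fh.seek((u * n + v) * EDGE.size)
        fh.write(EDGE.pack(w))

def read_weight(fh, n, u, v):
    """Fetch one edge weight with a single seek + read."""
    fh.seek((u * n + v) * EDGE.size)
    (w,) = EDGE.unpack(fh.read(EDGE.size))
    return w

buf = io.BytesIO()
write_matrix(buf, 1000, [(2, 7, 1.5), (7, 2, 0.25)])
```

Storing the two directions separately, as above, is what captures a weighted *directed* graph; an undirected graph would write each weight at both offsets.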
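The two streaming statistics above are small enough to sketch together. The window and alpha values are illustrative; the anomaly rule (flagging values several standard deviations from the window mean) is the usual one, not a specific production threshold:

```python
import math
from collections import deque

def sliding_stddev(samples, window):
    """Yield (value, window mean, window stddev) once the window
    fills; a value far from the mean (say, > 3 sigma) is a
    candidate anomaly."""
    buf = deque(maxlen=window)  # deque drops the oldest sample itself
    for x in samples:
        buf.append(x)
        if len(buf) == window:
            mean = sum(buf) / window
            var = sum((v - mean) ** 2 for v in buf) / window
            yield x, mean, math.sqrt(var)

def ewma(samples, alpha=0.2):
    """Yield the exponentially weighted moving average after each
    sample; larger alpha weights recent samples more heavily."""
    avg = None
    for x in samples:
        avg = x if avg is None else alpha * x + (1 - alpha) * avg
        yield avg

# A flat series with one spike: the spike dominates its window.
trace = [10.0] * 9 + [100.0]
value, mean, stddev = list(sliding_stddev(trace, 5))[-1]
```

The two complement each other: the windowed standard deviation answers "is this point unusual?", while the EWMA tracks the level itself with O(1) state per series.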
- various versions of OpenSSL and Certificate Authority management for small (customer) organizations
- supporting end-user and site-to-site VPNs with OpenVPN, and the (mess of) IPsec interoperability with FreeS/WAN [now a defunct project]
- stunnel, an SSL wrapper for arbitrary sockets
- Apache, Boa, and other webservers
- the (now quite old) qmail MTA and several IMAP servers (Cyrus IMAP, Courier IMAP)
- daemontools and runit for process management; connected to packaging and configuration management systems for supporting microservices
- systemd and, of course, the conventional SysVinit system
- interpreting strace for process and system analysis
- writing many Nagios plugins
- DocBook 4.x (e.g. my choice for the linux-ip guide) and DocBook 5.x using libxml2 tools xmllint, xsltproc and the docbook-xsl-stylesheets
- DocBook 3.x and 4.x SGML, and Linuxdoc, a simpler SGML DTD created for Linux, using openjade.
- Asciidoc
- reStructuredText (the choice of source for this document)
I have written variously in bash (proficient), python (proficient), C (novice), Perl (proficient) and SQL (competent) depending on performance goals, desired behaviour and system needs. Non-proprietary software I have written is listed in my list of publicly available software.