Squid : Build/Filter/Monitoring/Optimising


Squid is a well-known and reliable open source proxy server, and it can be used to provide a very secure Internet connection with a long track record of stability.

"Tentacles on the network, ink in the cache!"

I have already written other articles about installing Squid, but this one is more thorough: it brings many of the concepts together and applies a bit of automation through bash scripting.

Squid is open-source

Squid is the unsung hero, even in places where you do not expect it. As an open source solution, the only support you get is forum based, which at times can be slow because it comes down to people’s spare time. However, as with many solutions that start out open source, companies take products like this and charge corporations a large amount of money for support that wraps around the community support.

Squid is Stable and Secure (depending on setup)

Squid has become quite a stable solution over the years, and at the time of this blog we are on version 3. Obviously, because of its job - Internet acceleration, caching and proxying - it can be quite memory intensive, and it can also make heavy use of the locally attached disk.

Squid in the N + x scenario

If you are looking to use a Squid proxy server, I would highly recommend you have more than one behind a load balancer with affinity enabled. If you are looking to virtualize these servers, I would strongly recommend that all the memory allocated is directly assigned to the virtual machine and not shared with other virtual machines. If it is virtual, once again I would recommend not sharing the hyper-threaded cores with other machines on the same host. The final optimization I would make is to ensure you are using at least an SSD - I would not recommend you put the cache directory on a slow, magnetic, spinning disk.
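
If you are not sure what kind of disk sits behind the cache directory, a quick check like the sketch below should tell you - this assumes the cache lives under /var/spool/squid, as it does later in this guide; a ROTA value of 0 indicates a non-rotational disk such as an SSD or NVMe drive.

# Find the block device backing the Squid cache directory (path is an assumption)
findmnt -n -o SOURCE --target /var/spool/squid
# ROTA = 0 means non-rotational (SSD/NVMe); 1 means a spinning disk
lsblk -d -o NAME,ROTA,TYPE,SIZE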

Physical v Virtual v Cloud

If you are looking at this guide, then you are obviously looking to deploy a Squid server, and whether the machine is physical, virtual or in the cloud does not matter - Squid is more than capable of running in all those environments. Just remember N+1; in the case of a proxy server I would rather have N+3, but that is my personal preference.

Squid : Operating Systems

Now, let’s talk operating systems. Squid can be run on Windows, but I do not recommend it - experience has taught me this in the past. As I said earlier, Squid can be quite memory intensive and, ironically, so can Windows depending on its configuration.

Squid :  Windows is a poor choice

Windows also requires regular updates that need a full reboot to apply. Furthermore, some of those updates negatively affect operating system performance and functionality, sometimes you can get a black screen where the server does not boot at all, and occasionally Microsoft will release hotfixes that break their own functionality within Windows and your applications.

You can also get into the habit of rebooting Windows to solve problems with the applications running on it, when you should really be diagnosing why you are rebooting, which is usually down to a memory leak or a resource drain.

Squid : If you must use Windows use Windows Core

If you do choose to run Squid on Windows, please ensure you are running Windows Core, which, while it lacks a graphical user interface, is actually quite a streamlined, low-footprint operating system - but you will need to be competent in PowerShell and remote management to administer the server. The scripts provided in this article will not work if you are using Windows.

Squid really runs best on Linux. The distribution it runs on is not that important - my go-to distributions are CentOS and Kali, but choose whichever distribution you are happy supporting. And before you ask: Kali has more uses than just hacking and penetration testing.

Mission Control : Prepare for Ignition

Right, enough background and outlining what’s what. Let’s get on with what we are attempting to accomplish in this article; simply put, this is what we will go through:

  1. Install Squid if not installed
  2. Configure Squid with interactive script
  3. Apply filtering with DNSMasq/OpenDNS
  4. Monitor the Squid server with a script and an HTML report delivered via email
The first order of business is the configuration script for Squid, which will detect whether Squid is installed, install it if it is not, and then continue on with the configuration.

Prepare for Interactive questions for Squid installation

During the running of this bash script you will be asked certain key questions. How you answer them forms the basis of how Squid is configured, so it is best to get these right before you run the script. The questions you will be asked are as follows:
  1. Get cache admin contact

    The cache admin contact is the email address of the person responsible for this Squid proxy.
    Enter cache admin contact (e.g., webmaster@example.com): cache_admin@bear.local

  2. Get trusted network ranges

    Trusted network ranges are IP ranges that are allowed to use this proxy; addresses not in this list will get "Access Denied".

    Enter trusted network ranges (one per line, press Ctrl+D when finished): 10.168.1.0/24

  3. Get additional safe ports

    By default, this configuration allows ports 80 (HTTP) and 443 (HTTPS).

    Enter any additional safe ports you want to allow (space-separated, or press Enter for none): 80 443 22 21

  4. Get Squid listening port

    Squid needs a port to listen on for incoming connections; this is what clients will connect to.

    Enter the port for Squid to listen on (press Enter for default 3128):  3129
Before we dive into the script, remember that you will need to make it executable with the "chmod" command before you can run it. If you run the script as a normal user, or on an unsupported Linux distribution, it will let you know, as below:


First, we need to create a new file containing the script shown below. To do that, run this command from your shell:

nano optimize_squid.sh

This will start the nano text editor. From here, paste the script below into the file, then press CTRL + X and confirm that you want to save the buffer. Keep the file name you gave at the start, so press Enter here, and that should return you to the shell.

Script : optimize_squid.sh

#!/bin/bash

# Interactive Squid Optimization Script
# This script checks for Squid installation, installs if necessary, and then optimizes both the Linux system and Squid proxy server
# Compatible with CentOS and other major distributions

# Ensure the script is run as root
if [[ $EUID -ne 0 ]]; then
   echo "This script must be run as root" 
   exit 1
fi

# Function to check if a command exists
command_exists() {
    command -v "$1" >/dev/null 2>&1
}

# Function to get user input
get_input() {
    local prompt="$1"
    local variable="$2"
    read -p "$prompt" $variable
}

# Detect the Linux distribution
if [ -f /etc/os-release ]; then
    . /etc/os-release
    OS=$NAME
elif [ -f /etc/redhat-release ]; then
    OS=$(cat /etc/redhat-release | awk '{print $1}')
else
    OS=$(uname -s)
fi

# Check if Squid is installed
if command_exists squid; then
    echo "Squid is already installed. Moving on to optimization options."
else
    echo "Squid is not installed. Installing Squid..."
    
    case $OS in
        "CentOS Linux" | "Red Hat Enterprise Linux" | "Rocky Linux" | "AlmaLinux")
            yum install -y epel-release
            yum install -y squid
            ;;
        "Ubuntu" | "Debian GNU/Linux")
            apt-get update
            apt-get install -y squid
            ;;
        "Fedora")
            dnf install -y squid
            ;;
        *)
            echo "Unsupported operating system: $OS. Please install Squid manually and run this script again."
            exit 1
            ;;
    esac
    
    # Check if installation was successful
    if command_exists squid; then
        echo "Squid has been successfully installed."
    else
        echo "Failed to install Squid. Please install it manually and run this script again."
        exit 1
    fi
fi

echo "Proceeding with Squid optimization..."

# Get cache admin contact
echo "The cache admin contact is the email address of the person responsible for this Squid proxy."
echo "It will be displayed in error messages to users and can be used for abuse reports."
get_input "Enter cache admin contact (e.g., webmaster@example.com): " cache_admin

# Get trusted network ranges
echo "Trusted network ranges are IP ranges that are allowed to use this proxy."
echo "These should be internal networks or VPNs that you trust."
echo "Enter trusted network ranges (one per line, press Ctrl+D when finished):"
echo "Example format: 192.168.1.0/24"
mapfile -t trusted_ranges

# Get additional safe ports
echo "By default, this configuration allows ports 80 (HTTP) and 443 (HTTPS)."
echo "Enter any additional safe ports you want to allow (space-separated, or press Enter for none):"
read -a additional_safe_ports

# Get Squid listening port
echo "Squid needs a port to listen on for incoming connections."
echo "The default port is 3128, but you can change it if needed."
get_input "Enter the port for Squid to listen on (press Enter for default 3128): " squid_port
squid_port=${squid_port:-3128}

# Get total system memory
total_mem=$(free -m | awk '/^Mem:/{print $2}')

# Calculate Squid cache_mem (50% of total memory, max 4GB)
cache_mem=$((total_mem / 2))
if [ $cache_mem -gt 4096 ]; then
    cache_mem=4096
fi

# Calculate maximum_object_size_in_memory (1% of cache_mem)
max_obj_size_mem=$((cache_mem / 100))

echo "Based on your system's total memory of ${total_mem}MB:"
echo "- Setting Squid's cache_mem to ${cache_mem}MB"
echo "- Setting maximum_object_size_in_memory to ${max_obj_size_mem}MB"

# System-level optimizations
echo "Performing system-level optimizations..."

# Increase the maximum number of open file descriptors
echo "fs.file-max = 65535" >> /etc/sysctl.conf

# Increase the maximum number of concurrent connections
echo "net.core.somaxconn = 65535" >> /etc/sysctl.conf

# Increase the TCP backlog queue size
echo "net.ipv4.tcp_max_syn_backlog = 65535" >> /etc/sysctl.conf

# Disable TCP window scaling
echo "net.ipv4.tcp_window_scaling = 0" >> /etc/sysctl.conf

# Increase the maximum TCP write buffer space
echo "net.core.wmem_max = 12582912" >> /etc/sysctl.conf

# Increase the maximum TCP read buffer space
echo "net.core.rmem_max = 12582912" >> /etc/sysctl.conf

# Apply sysctl changes
sysctl -p

# Squid-specific optimizations
echo "Performing Squid-specific optimizations..."

# Determine Squid configuration file location
if [ -f /etc/squid/squid.conf ]; then
    SQUID_CONF="/etc/squid/squid.conf"
elif [ -f /etc/squid3/squid.conf ]; then
    SQUID_CONF="/etc/squid3/squid.conf"
else
    echo "Unable to find Squid configuration file. Please check your Squid installation."
    exit 1
fi

# Backup the original squid.conf
cp $SQUID_CONF ${SQUID_CONF}.bak

# Optimize Squid configuration
cat << EOF > $SQUID_CONF
# Squid optimized configuration

# Set cache admin contact
cache_mgr $cache_admin

# Performance optimizations
cache_mem ${cache_mem} MB
maximum_object_size_in_memory ${max_obj_size_mem} MB
maximum_object_size 512 MB
minimum_object_size 0 KB
cache_swap_low 90
cache_swap_high 95

# Memory optimization
memory_pools on
memory_pools_limit 50 MB
memory_cache_mode always

# Increase the number of file descriptors
max_filedesc 65535

# Adjust the number of children processes based on CPU cores
workers $(nproc)

# Tune cache replacement policy
cache_replacement_policy heap LFUDA

# Minimize disk IO
cache_dir ufs /var/spool/squid 10000 16 256

# Optimize client-side performance
client_lifetime 1 hour
half_closed_clients off

# Optimize server-side performance
pconn_timeout 5 minutes

# Enable collapsed forwarding
collapsed_forwarding on

# ACL definitions
acl SSL_ports port 443
acl Safe_ports port 80      # http
acl Safe_ports port 443     # https
acl CONNECT method CONNECT

EOF

# Add user-defined additional safe ports to the configuration
for port in "${additional_safe_ports[@]}"; do
    echo "acl Safe_ports port $port" >> $SQUID_CONF
done

# Continue with the rest of the configuration
cat << EOF >> $SQUID_CONF

# Access control
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access deny manager

# Add user-defined trusted ranges
EOF

# Add user-defined trusted ranges to the configuration
for range in "${trusted_ranges[@]}"; do
    echo "acl trusted_clients src $range" >> $SQUID_CONF
    echo "http_access allow trusted_clients" >> $SQUID_CONF
done

# Finalize the configuration
cat << EOF >> $SQUID_CONF
http_access allow localhost
http_access deny all

# Port configuration
http_port $squid_port

# Recommended minimum configuration:
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
EOF

# Restart Squid to apply changes
if command_exists systemctl; then
    systemctl restart squid
elif command_exists service; then
    service squid restart
else
    echo "Unable to restart Squid. Please restart it manually."
fi

echo "Optimization complete. Squid is now listening on port $squid_port."
echo "Please test your Squid configuration thoroughly."


Pre-Flight Checks

You should now have a file named optimize_squid.sh in the directory where you saved it. This file should contain the script shown directly above in yellow.

The next step is to make the script executable, which by default, for security, it will not be. This means that if you try to run the script now, you will simply get a permission denied error.

Make script Executable

The command below will make the script executable by setting the execute bit (depending on your permissions, you may need to prefix this command with sudo):

chmod +x optimize_squid.sh

If this command has run successfully, a directory listing of the folder containing the file should show it in green, as below:


Finally, we need to run the script, which can be done with the command below - and yes, this one will definitely need the sudo prefix:

sudo ./optimize_squid.sh

This will execute the script, ask you the questions we spoke about earlier (which you will need to answer correctly), and then configure Squid automatically, performing the memory, file system and Squid performance optimizations I have been using over the years. Once the script is complete, it should start the Squid service and ask you to thoroughly check the configuration to make sure you are happy with what has been set.
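
Before trusting the generated configuration, it is worth a quick sanity check - a minimal sketch, assuming the config landed in /etc/squid/squid.conf (adjust if your install uses /etc/squid3):

# Parse the configuration and report any syntax errors without restarting the service
sudo squid -k parse
# If the parse is clean, reload the running service without a full restart
sudo squid -k reconfigure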

Post-Flight : Quick Check

If you wish to perform a quick check on the squid service, you can simply run:

Note : you may need to substitute squid with squid2 or squid3, depending on which version has been installed

systemctl status squid

This should confirm that Squid is running, along with additional information that can be interesting to review.
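
You can also confirm the proxy is actually answering requests from a client’s point of view - a minimal sketch, assuming you kept the example listening port of 3129 and are testing from the proxy server itself:

# Send a request through the proxy and show only the response headers
curl -x http://127.0.0.1:3129 -I https://www.example.com

A 200 or 30x status in the response suggests Squid is forwarding traffic as expected.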

Traffic Filtering Solutions - The Options

You have two options here: one of them is free and the other is paid. It goes without saying that the paid option will be more effective at filtering threats, but the free option blocks listed websites just as effectively.

DNSMasq with dynamic block list

This solution uses DNSMasq, which is a DNS caching service. It works quite nicely with a dynamically generated block list that blocks sites inappropriate for corporate environments, including malware, adult content and other legally questionable sites you may not want loading at work.

This solution is fully scripted and will automatically set up a scheduled task using crontab to ensure the known block lists are regularly updated, so your Squid server continues to block websites dynamically as they are added to the lists I have used in the past. However, if you have your own custom block list locations, you will need to add them to the script manually (an example is shown after the script below).

This automation will also create a clean, minimalistic HTML block page, so your users receive a professional block message rather than the default Squid error page, which is not very friendly on the eye - but more on that later.

First, we need to create the script on the local file system, so once again we will call on nano to create a new file:

nano squid_blocklist.sh

Then we need to paste the contents of the script below (shown in yellow) into this file and, as before, save it with CTRL + X, remembering to save the buffer and confirm the file name.

Script : squid_blocklist.sh

#!/bin/bash
# Ensure the script is run as root
if [ "$EUID" -ne 0 ]; then
  echo "Please run as root"
  exit
fi

# Install necessary packages
apt-get update
apt-get install -y squid dnsmasq wget

# Create a temporary directory
TEMP_DIR=$(mktemp -d)
cd "$TEMP_DIR"

# Download blocklists
echo "Downloading blocklists..."
wget -O malicious_sites.txt https://urlhaus.abuse.ch/downloads/text/
wget -O adult_sites.txt https://raw.githubusercontent.com/StevenBlack/hosts/master/alternates/porn/hosts
wget -O shalla_adult.tar.gz http://www.shallalist.de/Downloads/shallalist.tar.gz
wget -O openphish.txt https://openphish.com/feed.txt
wget -O dshield.txt https://www.dshield.org/feeds/suspiciousdomains_Low.txt

# Process blocklists
echo "Processing blocklists..."
cat malicious_sites.txt | grep -v "#" | awk '{print $2}' > blocked_domains.txt
cat adult_sites.txt | grep -v "#" | awk '{print $2}' >> blocked_domains.txt
tar -xzf shalla_adult.tar.gz
cat BL/porn/domains >> blocked_domains.txt
cat openphish.txt | awk -F/ '{print $3}' >> blocked_domains.txt
cat dshield.txt | grep -v "#" >> blocked_domains.txt

# Remove duplicates and sort
sort -u blocked_domains.txt -o /etc/squid/blocked_domains.txt

# Configure dnsmasq
cat << EOF > /etc/dnsmasq.conf
no-resolv
server=8.8.8.8
server=8.8.4.4
EOF

# Add blocked domains to dnsmasq
while read domain; do
    echo "address=/$domain/127.0.0.1" >> /etc/dnsmasq.conf
done < /etc/squid/blocked_domains.txt

# Create custom error page

cat << EOF > /etc/squid/block_page.html

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Access Denied</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            background-color: #f0f0f0;
            display: flex;
            justify-content: center;
            align-items: center;
            height: 100vh;
            margin: 0;
        }
        .container {
            background-color: white;
            padding: 2rem;
            border-radius: 10px;
            box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);
            text-align: center;
            max-width: 600px;
        }
        h1 {
            color: #d32f2f;
        }
        p {

            color: #333;
            line-height: 1.6;
        }
        .details {
            background-color: #f5f5f5;
            padding: 1rem;
            border-radius: 5px;
            margin-top: 1rem;
        }
    </style>
</head>
<body>
    <div class="container">
        <h1>Access Denied</h1>
        <p>The website you are trying to access has been blocked due to policy restrictions.</p>
        <div class="details">
            <p><strong>URL:</strong> %u</p>
            <p><strong>Category:</strong> %o</p>
            <p><strong>Client IP:</strong> %a</p>
        </div>
        <p>If you believe this is an error, please contact your system administrator.</p>
    </div>
</body>
</html>
EOF

# Configure Squid
cat << EOF > /etc/squid/squid.conf
http_port 3128
dns_nameservers 127.0.0.1
acl blocked_sites dstdomain "/etc/squid/blocked_domains.txt"
http_access deny blocked_sites

# Custom error page
deny_info ERR_ACCESS_DENIED blocked_sites
error_directory /etc/squid/
error_page 403 /etc/squid/block_page.html
http_access allow all

# Refresh pattern for better caching
refresh_pattern ^ftp:           1440    20%     10080
refresh_pattern ^gopher:        1440    0%      1440
refresh_pattern -i (/cgi-bin/|\?) 0     0%      0
refresh_pattern .               0       20%     4320
EOF

# Clean up
cd ~
rm -rf "$TEMP_DIR"

# Restart services
systemctl restart dnsmasq
systemctl restart squid
echo "DNS filtering, blocklists, and custom block page have been set up. Please configure your devices to use this server as a proxy."

# Create update script
cat << EOF > /usr/local/bin/update_blocklists.sh
#!/bin/bash
# Create a temporary directory
TEMP_DIR=\$(mktemp -d)
cd "\$TEMP_DIR"

# Download and process blocklists
wget -O malicious_sites.txt https://urlhaus.abuse.ch/downloads/text/
wget -O adult_sites.txt https://raw.githubusercontent.com/StevenBlack/hosts/master/alternates/porn/hosts
wget -O shalla_adult.tar.gz http://www.shallalist.de/Downloads/shallalist.tar.gz
wget -O openphish.txt https://openphish.com/feed.txt
wget -O dshield.txt https://www.dshield.org/feeds/suspiciousdomains_Low.txt

cat malicious_sites.txt | grep -v "#" | awk '{print \$2}' > blocked_domains.txt
cat adult_sites.txt | grep -v "#" | awk '{print \$2}' >> blocked_domains.txt
tar -xzf shalla_adult.tar.gz
cat BL/porn/domains >> blocked_domains.txt
cat openphish.txt | awk -F/ '{print \$3}' >> blocked_domains.txt
cat dshield.txt | grep -v "#" >> blocked_domains.txt

# Remove duplicates and sort
sort -u blocked_domains.txt -o /etc/squid/blocked_domains.txt

# Update dnsmasq configuration
sed -i '/^address=/d' /etc/dnsmasq.conf
while read domain; do
    echo "address=/\$domain/127.0.0.1" >> /etc/dnsmasq.conf
done < /etc/squid/blocked_domains.txt

# Clean up and restart services
cd ~
rm -rf "\$TEMP_DIR"
systemctl restart dnsmasq
systemctl restart squid
echo "Blocklists updated successfully."
EOF
chmod +x /usr/local/bin/update_blocklists.sh

# Set up cron job to update blocklists weekly
(crontab -l 2>/dev/null; echo "0 1 * * 0 /usr/local/bin/update_blocklists.sh") | crontab -
echo "Setup complete. A cron job has been set to update blocklists weekly."

Once the contents of the script above have been pasted into the file and it has been saved, we need to make the file executable, which can be done with this command, as before in the setup guide:

chmod +x squid_blocklist.sh

Then, finally, we need to execute the script as an elevated user with this command:

sudo ./squid_blocklist.sh

This will execute the script, which will go off to the Internet, download the common block list files, format them correctly, install the required components and then integrate those components with Squid, all automatically. It will also set up a scheduled task so this happens weekly, meaning you do not need to worry about running the script again.
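
If you have your own custom block list locations, as mentioned earlier, the pattern to add to both squid_blocklist.sh and /usr/local/bin/update_blocklists.sh is sketched below - the URL is hypothetical and the list is assumed to be one plain domain per line; place these lines before the sort/deduplicate step:

# Hypothetical example - replace with your own block list URL
wget -O custom_sites.txt https://example.com/my_blocklist.txt
# Append the plain domain list to the combined block list
cat custom_sites.txt | grep -v "#" >> blocked_domains.txt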

If you remember, earlier I mentioned I would go over the error message. When a blocked website is requested by a user and Squid attempts to serve it, rather than displaying the website it will display the custom HTML page, which looks like this:

Remember, if you are using the free solution, Squid will only block websites once they have been added to these block lists and parsed into the running configuration, so do not expect quick responses to security incidents or brand-new websites - the sources need to update their lists before Squid will start blocking them.
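
If you need to block a specific domain immediately rather than waiting for the feeds to catch up, a minimal sketch like this should do it, assuming the block list path used by the script above (the domain is a placeholder):

# Append the domain to the list referenced by the Squid ACL
echo "bad-site.example" | sudo tee -a /etc/squid/blocked_domains.txt
# Tell Squid to re-read its configuration and ACL files
sudo squid -k reconfigure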

OpenDNS filtering and protection

This particular filtering solution requires a paid-for OpenDNS business account; if you do not have a business account, this will not work for a corporation. However, if you are using this at home, you are more than welcome to follow along with an OpenDNS Home account you have registered.

You will need access to the OpenDNS dashboard to complete any of these steps, and then link that configuration to the Squid proxy server(s).

Obtain External IP Address

The first action on the list is to familiarize yourself with your external IP address, more specifically your external IPv4 address - this is usually the address on the public side of your NAT translation rule.

Obtain External IP from public-facing NAT configuration

NAT stands for Network Address Translation, and it is a way of reaching an internal service on specified ports from a single public, Internet-facing address. That means if you are using a 10.x network internally, your external address will not be one of those addresses.

The correct external IP is critical for this solution to work, and it should be a static address that will not change. If you only have a dynamic address available, you will have to handle that a different way, by publishing a dynamic DNS record (DDNS).

Obtain your External IP

If you wish to find out what your external IP address is and you are not sure, you can simply visit this website:

https://www.whatismyip.com/

When this website loads, you are interested in the IPv4 address; you can see an example below, circled in green:

Note : This is my dynamically assigned IP address, which changes quite frequently, which is why there is really no need to redact it - IP addresses on mobile networks are probably shared with quite a few other mobile phones.
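
If you would rather stay in the shell, a quick alternative is to ask a resolver or a web service directly - fittingly, OpenDNS runs a resolver that echoes back the public address it sees you coming from:

# Ask the OpenDNS resolver which public address your requests arrive from
dig +short myip.opendns.com @resolver1.opendns.com
# Or use a simple web service, forcing IPv4
curl -4 https://ifconfig.me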

Perform Magic in OpenDNS

Now we have our IPv4 address, we can head over to the website shown below and log in. Obviously, if you don’t have an account, you will need to create one - it only takes a couple of minutes, but be sure to enable MFA as an extra security layer.

https://dashboard.opendns.com/

If you already have an account, you can log in with it, and then we can move on to configuring the OpenDNS network and filtering settings before we configure Squid to use them.


When you are successfully logged in, you should see a screen similar to this one. We are interested in the Settings tab, which is in the orange bar towards the top of the page.


That will take you to the screen shown above, where you get the option to add a new network. It needs the IPv4 address we obtained earlier, so enter that into the boxes (as you can see above).

When you click the Add network button, you will be prompted for a network name - give it something meaningful; here I have used “Blog Demo”. You will also notice another network here, but don’t worry about that, it’s my personal network.


OpenDNS will then confirm the network has been added with a nice green banner, and you should see it under My Networks beneath the Add a network section, as you can see above.

Note : if you get an error here saying the network cannot be added because it already exists, it means the external IP address you have given OpenDNS is already in use with another person’s subscription - an external IP address can only be registered to one subscription.


Now we need to configure the settings for the new network you just created, so from the top of the screen find the “settings for” section, click the drop-down box and choose the network you just created, as above.


That will drop you straight into the web filtering settings, which by default are set to “None” - this means your filter is essentially disabled and does nothing, so that’s the first thing we need to change. I have chosen High protection, which gives you a description of what it blocks; if you wish to be more granular, you can always choose the Customize option and manually select the categories you wish to block.

Once you have made your protection level and category selection, don’t forget to save your options, otherwise they won’t apply.


Then we move on to the security settings. Here you will notice that all the boxes are ticked except one: “Block internal IP addresses”. I would highly recommend you tick this as well, because it will protect you from DNS rebinding attacks. Once you have checked the box, don’t forget to save those options.


Next up is customization. This is where you can add a company logo and change the message on the block page. I am more than happy with the default block page, so I have no reason to change it and will leave the default settings.


Next up is statistics and logging. By default, for some reason, this is disabled - I think it might have something to do with protecting your privacy - but handing all your DNS traffic to OpenDNS without being able to look at what is going on sounds a little foolish to me. So I always enable statistics and logging, so you can later look at reports covering websites allowed, websites visited, websites blocked and the types of records requested.

Finally, once you have got to this position, you have successfully configured the OpenDNS side. Now you need to tell Squid to use this protection by updating the name servers it uses; remember, this will need to be done on every Squid server, as the default DNS resolver is otherwise set by the operating system.

Configure Squid to use the new DNS servers

We now have to locate the Squid configuration file so we can edit it. There is a high chance the configuration file will be in one of two locations, depending on the version that has been installed by the script; these are the two locations:

SQUID_CONF="/etc/squid/squid.conf"

SQUID_CONF="/etc/squid3/squid.conf"


If you are not sure which one, run the command below; you may or may not need to append the 3 to the end of the service name:


systemctl status squid(3)


Now we need to edit that file as an elevated user:


sudo nano /etc/squid/squid.conf


When you run this command, you should see the active Squid configuration on your screen. Find an empty line and add the following line to the file:


dns_nameservers 208.67.220.220 208.67.222.222


Save the file and return to the shell, then restart Squid for those settings to take effect:


systemctl restart squid


This completes the DNS filtering setup for your Squid proxy servers. Remember, you need to add this line to each proxy server that is running Squid.
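
To confirm the change has taken effect, a minimal hedged check is to resolve a test name directly against the OpenDNS resolvers and then request a site from a blocked category through the proxy - the domain below is a placeholder, and the port assumes the default of 3128 (use whatever you chose earlier):

# Resolve a domain from a category you have blocked (placeholder) directly against OpenDNS
dig +short blocked-category-site.example @208.67.220.220
# Request it through the proxy; a block page rather than the real site indicates filtering is working
curl -x http://127.0.0.1:3128 -I http://blocked-category-site.example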


OpenDNS FamilyShield


If you are not interested in paying for OpenDNS but you still wish to leverage some of the protection, then the other option is FamilyShield. This gives you zero control over categories, security and customization, but it does quite easily filter adult content and other categories deemed not suitable for work.


Warning : if you opt for this method, you have zero control over allowing or blocking websites or changing categorization, as it is a generic DNS filtering service designed more for the personal setting.


This method does not require a paid account and can simply be enforced by adding the following configuration line to your squid configuration and then restarting the service:


dns_nameservers 208.67.220.123 208.67.222.123


Quad9 Web Filtering


This is another option for the security-minded that is free for everybody to use. As with FamilyShield, you get no control over what is blocked - you are leaving that to the company providing the DNS service - but it is quite a reliable service, albeit there is only one DNS server in this configuration.


While it might be highly available, I always like a backup DNS server, because sometimes infrastructure has problems nobody foresees, and with no DNS you have no Internet. I am sure this address is highly load-balanced and resilient, but I would still rather specify two addresses, as a single address is not very N+1.


dns_nameservers 9.9.9.9
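
For what it is worth, Quad9 also publishes a secondary resolver address - 149.112.112.112 at the time of writing, so do check their documentation - which would go some way to addressing the N+1 concern above:

dns_nameservers 9.9.9.9 149.112.112.112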


Squid Monitoring


If you have made it to this point, you have set up Squid (or technically the script has), you have set up your DNS filtering with whichever method you decided to use, and now you need to monitor your installation.


This is arguably one of the most important parts of the solution as you need to know when it’s not healthy before the end users notice. 


This is another script I have developed over the years that focuses on all the variables you need in order to assess whether Squid is in a healthy state or not. During its development I have added a health card at the top that will even give you a healthy or unhealthy status, green or red respectively.


It’s all about the CPU, RAM and Disk (and the Squid processes)

The script primarily focuses on system variables, including CPU and memory, and it also reports on Squid-specific processes, memory and CPU utilization. The health card at the top gives a quick, at-a-glance view of whether the service is healthy or whether you need to read further into the report to see why it is not.


The image below shows you the overview of what the report looks like when it’s in your inbox:



I have also included the key processes for all the users utilizing the server, so the top report shows the root account; then, as you can see from the image below, it also reports local accounts and the Squid process, not to mention the network connection health, which focuses on the connections that have been detected - handy if you suspect TCP exhaustion.



If you wish to replicate the script, the original code is shown below:

Script: squid_monitor.sh

Note : If you choose to use the code, don’t forget to make it executable with the command “chmod +x”

#!/bin/bash

# Directory where the script is executed
REPORT_DIR=$(pwd)

# HTML report file path
HTML_FILE="${REPORT_DIR}/squid_stats_report_$(date +%Y-%m-%d).html"

# Email settings
SENDER="squid@a6n.co.uk"
RECIPIENT="lee@a6n.co.uk"
SUBJECT="Squid Process Statistics Report - $(date +%Y-%m-%d)"
SMTP_SERVER="smtp.bear.local"
SMTP_PORT="25"

# Function to start the HTML file and add a title
start_html_report() {
    echo "<html>" > $HTML_FILE
    echo "<head><title>Detailed Squid Process Statistics - $(date)</title>" >> $HTML_FILE
    echo "<style>
            body { font-family: Arial, sans-serif; margin: 20px; }
            h1 { color: #4CAF50; }
            h2 { color: #2196F3; }
            h3 { color: #FFC107; }
            pre { background-color: #f4f4f4; padding: 10px; border-radius: 5px; }
            p.status { font-weight: bold; color: #f44336; }
            .card { border-radius: 10px; padding: 20px; margin-bottom: 20px; text-align: center; font-size: 20px; }
            .green-card { background-color: #4CAF50; color: white; }
            .red-card { background-color: #f44336; color: white; }
          </style>" >> $HTML_FILE
    echo "</head>" >> $HTML_FILE
    echo "<body>" >> $HTML_FILE
    echo "<h1>Squid Process Statistics Report - $(date)</h1>" >> $HTML_FILE
}

# Function to display the health card at the top of the report
display_health_card() {
    if [ "$1" = "healthy" ]; then
        echo "<div class='card green-card'>Service Status: Healthy</div>" >> $HTML_FILE
    else
        echo "<div class='card red-card'>Service Status: Unhealthy</div>" >> $HTML_FILE
    fi
}

# Function to end the HTML file
end_html_report() {
    echo "</body>" >> $HTML_FILE
    echo "</html>" >> $HTML_FILE
}

# Function to capture detailed Squid process statistics
capture_squid_stats_html() {
    # Calculate health status
    local health_status="unhealthy"

    # Get all PIDs for Squid processes
    SQUID_PIDS=$(pgrep squid)
    if [ -n "$SQUID_PIDS" ]; then
        # Number of open and closed network connections
        OPEN_CONNECTIONS=$(ss -s | grep 'TCP:' | awk '{print $4}')
        CLOSED_CONNECTIONS=$(ss -s | grep 'TCP:' | awk '{print $6}')

        # Strip any non-numeric characters from the open connections value
        OPEN_CONNECTIONS_CLEAN=$(echo "$OPEN_CONNECTIONS" | sed 's/[^0-9]//g')

        # If there are open connections, mark the service as healthy
        if [ "$OPEN_CONNECTIONS_CLEAN" -gt 0 ]; then
            health_status="healthy"
        fi

        # Add health card at the top after determining the status
        display_health_card "$health_status"

        # Capture Squid statistics
        echo "<h2>Squid Process Details</h2>" >> $HTML_FILE

        # CPU and memory usage of Squid (handles multiple PIDs)
        echo "<h3>CPU and Memory Usage:</h3>" >> $HTML_FILE
        for PID in $SQUID_PIDS; do
            echo "<p><strong>PID $PID:</strong></p>" >> $HTML_FILE
            echo "<pre>$(ps -p $PID -o pid,ppid,cmd,%mem,%cpu --sort=-%mem)</pre>" >> $HTML_FILE
        done

        # More detailed stats using top for each PID
        echo "<h3>Detailed Process Statistics (from top):</h3>" >> $HTML_FILE
        for PID in $SQUID_PIDS; do
            echo "<p><strong>PID $PID:</strong></p>" >> $HTML_FILE
            echo "<pre>$(top -b -n 1 -p $PID | head -n 15)</pre>" >> $HTML_FILE
        done

        # Network statistics
        echo "<h3>Network Statistics (Open and Closed Connections):</h3>" >> $HTML_FILE
        echo "<p><strong>Open Connections:</strong> $OPEN_CONNECTIONS_CLEAN</p>" >> $HTML_FILE
        echo "<p><strong>Closed Connections:</strong> $CLOSED_CONNECTIONS</p>" >> $HTML_FILE
    else
        echo "<p class='status'>Status: Squid is not running.</p>" >> $HTML_FILE
    fi
}

# Function to send email
send_email() {
    (
        echo "From: $SENDER"
        echo "To: $RECIPIENT"
        echo "Subject: $SUBJECT"
        echo "MIME-Version: 1.0"
        echo "Content-Type: text/html"
        echo ""
        cat $HTML_FILE
    ) | sendmail -t -S "$SMTP_SERVER:$SMTP_PORT" -f "$SENDER"
}

# Start HTML report
start_html_report

# Capture Squid statistics and display health card
capture_squid_stats_html

# End HTML report
end_html_report

# Send email
send_email

# Clean up the HTML file
rm $HTML_FILE


System Monitoring with Charts (in colour Zones)


If you wish to get a report with nice, colourful charts, then you can either do this for the server alone or for the server including Squid statistics - I would recommend whole-system charting that includes the Squid data inside the chart, as it is more accurate and informative.


Note : You will need to install the required prerequisite components for this to run, which can be done with this command:

pip install psutil matplotlib
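
On newer distributions, pip may refuse to install into the system Python, so a hedged alternative is to install the modules into a virtual environment and run the script with that interpreter (the paths here are an assumption):

# Create a virtual environment and install the dependencies into it (hypothetical path)
python3 -m venv /opt/squid-monitor-venv
/opt/squid-monitor-venv/bin/pip install psutil matplotlib
# Run the monitoring script using that interpreter
/opt/squid-monitor-venv/bin/python3 squid_systemperformance.sh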


Smaller Chart Sample?

The script below will give you 24 hours of data, sampling every 15 seconds; however, if you only want a 5-minute sample, update these variables to get a 5-minute chart rather than a 24-hour chart:

# Constants
SAMPLE_INTERVAL = 1  # seconds
CHART_DURATION = 5 * 60  # 5 minutes in seconds


Script : squid_systemperformance.sh


#!/usr/bin/env python3

import time
import datetime
from collections import deque

# Check for required modules
required_modules = ['psutil', 'matplotlib']
missing_modules = []

for module in required_modules:
    try:
        __import__(module)
    except ImportError:
        missing_modules.append(module)

if missing_modules:
    print("Error: The following required modules are missing:")
    for module in missing_modules:
        print(f"  - {module}")
    print("\nPlease install them using pip:")
    print(f"pip install {' '.join(missing_modules)}")
    exit(1)

# If all modules are available, import them
import psutil
import matplotlib.pyplot as plt

# Constants
SAMPLE_INTERVAL = 15  # seconds
CHART_DURATION = 24 * 60 * 60  # 24 hours in seconds
MAX_SAMPLES = CHART_DURATION // SAMPLE_INTERVAL

def get_system_metrics():
    cpu_percent = psutil.cpu_percent(interval=1)
    mem = psutil.virtual_memory()
    disk = psutil.disk_usage('/')
    net_io = psutil.net_io_counters()
    return cpu_percent, mem.percent, disk.percent, net_io.bytes_sent, net_io.bytes_recv

def plot_metrics(timestamps, cpu_history, mem_history, disk_history, net_send_history, net_recv_history):
    fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(20, 15))

    # Define thresholds
    warning_threshold = 70
    critical_threshold = 90

    # Plot CPU, Memory, and Disk Usage
    ax1.axhspan(0, warning_threshold, facecolor='green', alpha=0.3)
    ax1.axhspan(warning_threshold, critical_threshold, facecolor='yellow', alpha=0.3)
    ax1.axhspan(critical_threshold, 100, facecolor='red', alpha=0.3)

    ax1.plot(timestamps, cpu_history, label='CPU', color='blue')
    ax1.plot(timestamps, mem_history, label='Memory', color='purple')
    ax1.plot(timestamps, disk_history, label='Disk', color='orange')

    ax1.set_title('System Resource Usage (Last 24 Hours)')
    ax1.set_xlabel('Time')
    ax1.set_ylabel('Usage (%)')
    ax1.legend()
    ax1.set_ylim(0, 100)

    # Add threshold labels
    ax1.text(timestamps[-1], warning_threshold, 'Warning', verticalalignment='bottom', horizontalalignment='right')
    ax1.text(timestamps[-1], critical_threshold, 'Critical', verticalalignment='bottom', horizontalalignment='right')

    # Plot Network Traffic
    ax2.plot(timestamps, net_send_history, label='Sent', color='green')
    ax2.plot(timestamps, net_recv_history, label='Received', color='red')

    ax2.set_title('Network Traffic (Last 24 Hours)')
    ax2.set_xlabel('Time')
    ax2.set_ylabel('Traffic (MB/s)')
    ax2.legend()

    # Format x-axis to show time
    fig.autofmt_xdate()

    plt.tight_layout()
    plt.savefig('server_performance_24h.png')
    plt.close()

def main():
    cpu_history = deque(maxlen=MAX_SAMPLES)
    mem_history = deque(maxlen=MAX_SAMPLES)
    disk_history = deque(maxlen=MAX_SAMPLES)
    net_send_history = deque(maxlen=MAX_SAMPLES)
    net_recv_history = deque(maxlen=MAX_SAMPLES)
    timestamps = deque(maxlen=MAX_SAMPLES)

    print("Server Performance Monitoring Started. Press Ctrl+C to exit.")
    print(f"Collecting data every {SAMPLE_INTERVAL} seconds for a 24-hour chart...")

    last_report_time = time.time()
    last_net_io = psutil.net_io_counters()
    last_time = time.time()

    try:
        while True:
            start_time = time.time()

            cpu, mem, disk, net_sent, net_recv = get_system_metrics()

            # Calculate network speed
            time_elapsed = start_time - last_time
            net_send_speed = (net_sent - last_net_io.bytes_sent) / time_elapsed / 1_000_000  # MB/s
            net_recv_speed = (net_recv - last_net_io.bytes_recv) / time_elapsed / 1_000_000  # MB/s

            cpu_history.append(cpu)
            mem_history.append(mem)
            disk_history.append(disk)
            net_send_history.append(net_send_speed)
            net_recv_history.append(net_recv_speed)
            timestamps.append(datetime.datetime.now())

            # Generate report and plot every hour
            if time.time() - last_report_time >= 3600:  # 3600 seconds = 1 hour
                print("\n--- Hourly Report ---")
                print(f"Time: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
                print(f"Current CPU Usage: {cpu:.2f}%")
                print(f"Current Memory Usage: {mem:.2f}%")
                print(f"Current Disk Usage: {disk:.2f}%")
                print(f"Current Network Send Speed: {net_send_speed:.2f} MB/s")
                print(f"Current Network Receive Speed: {net_recv_speed:.2f} MB/s")

                plot_metrics(timestamps, cpu_history, mem_history, disk_history, net_send_history, net_recv_history)
                print("\n24-hour chart updated: server_performance_24h.png")

                last_report_time = time.time()

            last_net_io = psutil.net_io_counters()
            last_time = start_time

            # Sleep for the remainder of the interval
            time_elapsed = time.time() - start_time
            time.sleep(max(SAMPLE_INTERVAL - time_elapsed, 0))

    except KeyboardInterrupt:
        print("\nMonitoring stopped.")

if __name__ == "__main__":
    main()


You will then get a chart that looks like this: system performance on the top and network utilisation on the bottom, with colour bandings that make it very easy to assess the server’s health. This is a maintenance node, so the traffic is low.


Viewpoint : Human Interaction


This particular section could be a bit controversial for many people. Squid is a product that is pretty self-sufficient; once it is running, it stays running until it is interfered with by human commands, or until the server restarts and Squid has not been enabled to start on boot.


If you find this happens to you on a reboot outside of following this guide, then you simply need to configure Squid to start on boot, which can be accomplished with the following command:


systemctl enable squid


This command may ask you to authenticate with a privileged account, and from then on, when you reboot your server, Squid will start automatically.
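
If you want to confirm the unit really is set to start on boot, a quick check is shown below - substitute squid3 if that is your service name:

# Should print "enabled" if Squid will start automatically after a reboot
systemctl is-enabled squid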


That being said, if it has been set up correctly to start with, it should not need people logging into it daily to check variables and system services. You have a script that does that for you weekly, and if weekly is not frequent enough, you can quite easily change it to a daily task with crontab.


If you do change the monitoring to a daily task, please do not fall into the trap of confirmation bias, where you regularly see an email that is green and so, when the email is not green, you automatically assume everything is OK and do no investigation - this is the reason I like weekly reports.
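
For reference, scheduling the monitoring script is a one-line crontab entry, following the same pattern the blocklist script uses - a minimal sketch, assuming you have placed the script at /usr/local/bin/squid_monitor.sh (adjust the path and schedule to taste):

# Weekly: 07:00 every Monday
(crontab -l 2>/dev/null; echo "0 7 * * 1 /usr/local/bin/squid_monitor.sh") | crontab -
# Daily alternative: 0 7 * * * /usr/local/bin/squid_monitor.sh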


Limit the people with shell access!

I would also recommend that you limit the set of people who can log into your Squid servers via the shell and manage them. You do not particularly need to worry about people who prefer a graphical user interface, because Squid does not have one - you need to learn the shell, otherwise you cannot manage it at all.


Graphical user interfaces (GUI) are a must?


If you are only able to administer servers by clicking on pictures in a graphical user interface, then that is a shame, because you have limited yourself severely in what data you can get from the services and applications running on those servers - there is only so much information you can put on a web page or in the graphical side of the operating system.


If a graphical management interface is the priority here, then I would highly suggest you purchase an Apple device that runs macOS.


A Mac mini or Mac Studio would be the perfect device; if you have high workloads, you may wish to go with the Mac Studio as it has more resources available to it - or, more accurately, it can be specified with more resources.


If you wish to go down this route, you simply need to install macOS and then get an application called Squidman.


https://squidman.net/squidman/


If you navigate to this page you will find it is regularly updated, and it is a very stable product that gives you an interface you can manage with a mouse and keyboard, obviously.



That concludes this guide to installing, filtering, monitoring and maintaining Squid.
