I have written a couple of posts about the Mac Content Cache service, but wouldn't it be nice to get a report that shows the "served" content in a simple chart, comparing data fetched from the internet with data served internally?
Yes, I thought so too, and the end result looks like this: you can see that 15GB has been served to clients (shown in blue) while the data fetched from the Internet (shown in red) is insignificant by comparison, which demonstrates the performance of the caching server:
To get there we need a daemon that runs on the server and collects the data; the command we will use to gather it is:
AssetCacheManagerUtil status
From the output of this command we are interested in the following fields:
TotalBytesReturnedToClients: 514.69 GB
TotalBytesStoredFromOrigin: 54.21 GB
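If you want to see the field extraction on its own before reading the full script, here is a minimal sketch; the status output is hard-coded from the sample above so the snippet runs anywhere, without the Content Cache service:

```shell
# Hard-coded sample of what `AssetCacheManagerUtil status` prints for the two
# fields we care about (so this snippet runs without the caching server)
CACHE_DATA="TotalBytesReturnedToClients: 514.69 GB
TotalBytesStoredFromOrigin: 54.21 GB"

# grep picks out the line, awk prints the second field (the number, dropping the unit)
TO_CLIENTS=$(echo "$CACHE_DATA" | grep "TotalBytesReturnedToClients:" | awk '{print $2}')
FROM_ORIGIN=$(echo "$CACHE_DATA" | grep "TotalBytesStoredFromOrigin:" | awk '{print $2}')

echo "$TO_CLIENTS"    # 514.69
echo "$FROM_ORIGIN"   # 54.21
```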
We need the daemon to log this data every 30 minutes to a log file, which will be located here:
/var/log/cache_stats.csv
The script then needs to do some maths on the numbers being logged, comparing the previous value with the new value once the interval has elapsed. This gives you the "difference" in data between sample 1 and sample 2, so the log file will look like this:
timestamp,to_clients_gb,from_origin_gb,to_clients_since_last_gb,from_origin_since_last_gb
2025-02-05 07:02,488.06,53.84,15.45,.61
2025-02-05 07:05:14,488.88,53.84,.82,0
2025-02-05 07:06:14,488.99,53.84,.11,0
2025-02-05 07:07:14,489.09,53.84,.10,0
The last two columns show the difference between the new value and the previous value; this is what we will use later on for the charting.
Script : cache_monitor.sh
#!/bin/bash

LOG_FILE="/var/log/cache_stats.csv"
PID_FILE="/var/run/cache_monitor.pid"
DEBUG_LOG="/var/log/cache_monitor_debug.log"

log_debug() {
    echo "$(date): $1" >> "$DEBUG_LOG"
}

get_stats() {
    local TIME=$(date '+%Y-%m-%d %H:%M:%S')
    local CACHE_DATA=$(AssetCacheManagerUtil status)
    # Note: this keeps only the number and discards the unit suffix, so it
    # assumes both fields are reported in GB
    local TO_CLIENTS=$(echo "$CACHE_DATA" | grep "TotalBytesReturnedToClients:" | awk '{print $2}')
    local FROM_ORIGIN=$(echo "$CACHE_DATA" | grep "TotalBytesStoredFromOrigin:" | awk '{print $2}')
    if [ ! -f "$LOG_FILE" ]; then
        echo "timestamp,to_clients_gb,from_origin_gb,to_clients_since_last_gb,from_origin_since_last_gb" > "$LOG_FILE"
        echo "$TIME,$TO_CLIENTS,$FROM_ORIGIN,0,0" >> "$LOG_FILE"
    else
        local LAST_VALUES=$(tail -n 1 "$LOG_FILE" | cut -d',' -f2,3)
        local LAST_TO_CLIENTS=$(echo $LAST_VALUES | cut -d',' -f1)
        local LAST_FROM_ORIGIN=$(echo $LAST_VALUES | cut -d',' -f2)
        local TO_CLIENTS_DIFF=$(echo "$TO_CLIENTS - $LAST_TO_CLIENTS" | bc)
        local FROM_ORIGIN_DIFF=$(echo "$FROM_ORIGIN - $LAST_FROM_ORIGIN" | bc)
        echo "$TIME,$TO_CLIENTS,$FROM_ORIGIN,$TO_CLIENTS_DIFF,$FROM_ORIGIN_DIFF" >> "$LOG_FILE"
    fi
    log_debug "Stats updated"
}

check_time_for_update() {
    # Convert minutes to a number without leading zeros
    local minutes=$(date '+%M')
    minutes=$((10#$minutes)) # Force base 10
    # Check if we're at 0 or 30 minutes past the hour
    if [ $minutes -eq 0 ] || [ $minutes -eq 30 ]; then
        return 0
    fi
    return 1
}

monitor_loop() {
    trap 'log_debug "Trapped error $? in monitor loop"; exit 1' ERR
    while true; do
        log_debug "Check cycle"
        if check_time_for_update; then
            get_stats
        fi
        # Verify we can still write to the logs
        if ! touch "$DEBUG_LOG" 2>/dev/null || ! touch "$LOG_FILE" 2>/dev/null; then
            log_debug "Lost write permissions to log files"
            exit 1
        fi
        sleep 60 & wait $!
    done
}

start() {
    if [ -f "$PID_FILE" ]; then
        if ps -p $(cat "$PID_FILE") > /dev/null 2>&1; then
            echo "Service already running"
            exit 1
        else
            echo "Stale PID file found, removing"
            rm "$PID_FILE"
        fi
    fi
    # Clear and set up logs
    echo "Starting service at $(date)" > "$DEBUG_LOG"
    chmod 666 "$DEBUG_LOG"
    touch "$LOG_FILE"
    chmod 666 "$LOG_FILE"
    # Start monitoring in the background and disown
    monitor_loop &
    local daemon_pid=$!
    disown $daemon_pid
    # Save PID
    echo $daemon_pid > "$PID_FILE"
    chmod 644 "$PID_FILE"
    echo "Service started with PID $daemon_pid"
    log_debug "Service started with PID $daemon_pid"
}

stop() {
    if [ -f "$PID_FILE" ]; then
        local PID=$(cat "$PID_FILE")
        if ps -p $PID > /dev/null 2>&1; then
            echo "Stopping service PID: $PID"
            kill $PID
            sleep 1
            if ps -p $PID > /dev/null 2>&1; then
                echo "Force stopping service"
                kill -9 $PID
            fi
        else
            echo "Process $PID not running"
        fi
        rm "$PID_FILE"
        log_debug "Service stopped"
        echo "Service stopped"
    else
        echo "Service not running"
    fi
}

restart() {
    stop
    sleep 2
    start
}

case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        restart
        ;;
    *)
        echo "Usage: $0 {start|stop|restart}"
        exit 1
        ;;
esac

exit 0
Now we have the data being logged by the daemon, we need to control that daemon, which can be done with the commands below:
sudo ./cache_monitor.sh start # Start daemon
sudo ./cache_monitor.sh stop # Stop daemon
sudo ./cache_monitor.sh restart # Restart daemon
Obviously you need to ensure the daemon is running for the log to be populated, which then provides the data for the charts.
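If you want the daemon to start automatically at boot, one option is a LaunchDaemon that calls the script's start action. This is a sketch, not something from the original setup: the label `com.example.cachemonitor` and the `/usr/local/bin/cache_monitor.sh` path are placeholders for wherever you saved the script.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Placeholder label; pick a reverse-DNS name of your own -->
    <key>Label</key>
    <string>com.example.cachemonitor</string>
    <key>ProgramArguments</key>
    <array>
        <!-- Assumed install path for the monitor script -->
        <string>/usr/local/bin/cache_monitor.sh</string>
        <string>start</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>
```

Save it as /Library/LaunchDaemons/com.example.cachemonitor.plist and load it with `sudo launchctl load /Library/LaunchDaemons/com.example.cachemonitor.plist`. Because the script backgrounds and disowns its own monitor loop, launchd only needs to fire it once at load, so KeepAlive is deliberately omitted.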
Now we can move on to the HTML generation, which reads the log file and produces the web page dynamically from the data.
Script : generate_report.sh
#!/bin/bash

# Create a temp file for data processing
TEMP_FILE=$(mktemp)

# Get the last 30 lines of data (excluding the header) and save to the temp file
tail -n 30 /var/log/cache_stats.csv | grep -v "timestamp" > "$TEMP_FILE"

# Initialize arrays
DATES="["
TO_CLIENTS="["
FROM_ORIGIN="["

# Process each line of the data
while IFS=, read -r timestamp _ _ to_clients from_origin; do
    # Use the full timestamp
    DATES="$DATES\"$timestamp\","
    TO_CLIENTS="$TO_CLIENTS$to_clients,"
    FROM_ORIGIN="$FROM_ORIGIN$from_origin,"
done < "$TEMP_FILE"

# Remove trailing commas and close the arrays
DATES="${DATES%,}]"
TO_CLIENTS="${TO_CLIENTS%,}]"
FROM_ORIGIN="${FROM_ORIGIN%,}]"

# Create the HTML file
cat > cache_report.html << EOF
<!DOCTYPE html>
<html>
<head>
    <title>Cache Statistics</title>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/3.9.1/chart.min.js"></script>
    <style>
        body {
            font-family: -apple-system, BlinkMacSystemFont, sans-serif;
            margin: 0;
            padding: 20px;
            background: #f5f5f7;
        }
        .container {
            max-width: 1200px;
            margin: 0 auto;
            background: white;
            padding: 20px;
            border-radius: 10px;
            box-shadow: 0 1px 3px rgba(0,0,0,0.1);
        }
        .chart-container {
            width: 100%;
            height: 500px;
            position: relative;
        }
    </style>
</head>
<body>
    <div class="container">
        <h1>Cache Statistics Report</h1>
        <div class="chart-container">
            <canvas id="myChart"></canvas>
        </div>
    </div>
    <script>
        // Data arrays from the CSV
        const chartData = {
            labels: $DATES,
            datasets: [
                {
                    label: 'To Clients (GB)',
                    data: $TO_CLIENTS,
                    borderColor: '#007AFF',
                    backgroundColor: 'rgba(0, 122, 255, 0.1)',
                    tension: 0.1
                },
                {
                    label: 'From Origin (GB)',
                    data: $FROM_ORIGIN,
                    borderColor: '#FF3B30',
                    backgroundColor: 'rgba(255, 59, 48, 0.1)',
                    tension: 0.1
                }
            ]
        };
        // Create the chart
        const ctx = document.getElementById('myChart');
        new Chart(ctx, {
            type: 'line',
            data: chartData,
            options: {
                responsive: true,
                maintainAspectRatio: false,
                plugins: {
                    title: {
                        display: true,
                        text: 'Cache Transfer Statistics'
                    }
                },
                scales: {
                    y: {
                        beginAtZero: true,
                        title: {
                            display: true,
                            text: 'Data Transfer (GB)'
                        }
                    },
                    x: {
                        ticks: {
                            maxRotation: 45,
                            minRotation: 45
                        }
                    }
                }
            }
        });
    </script>
</body>
</html>
EOF

# Clean up
rm "$TEMP_FILE"

# Debug output
echo "Data arrays created:"
echo "Dates: $DATES"
echo "To Clients: $TO_CLIENTS"
echo "From Origin: $FROM_ORIGIN"
echo "Report generated as cache_report.html"
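To see what the array-building loop actually produces, here is the same technique run against two hard-coded sample rows, so the snippet is self-contained; the timestamps become quoted JavaScript strings while the numbers stay bare:

```shell
# Two hard-coded sample rows in the cache_stats.csv format
TEMP_FILE=$(mktemp)
cat > "$TEMP_FILE" <<'EOF'
2025-02-05 07:05:14,488.88,53.84,.82,0
2025-02-05 07:06:14,488.99,53.84,.11,0
EOF

# Same loop as the report script: splitting on commas only, so the
# space inside the timestamp is preserved
DATES="["
TO_CLIENTS="["
while IFS=, read -r timestamp _ _ to_clients _; do
    DATES="$DATES\"$timestamp\","
    TO_CLIENTS="$TO_CLIENTS$to_clients,"
done < "$TEMP_FILE"
DATES="${DATES%,}]"
TO_CLIENTS="${TO_CLIENTS%,}]"

echo "$DATES"        # ["2025-02-05 07:05:14","2025-02-05 07:06:14"]
echo "$TO_CLIENTS"   # [.82,.11]
rm "$TEMP_FILE"
```

Values such as `.82` are still valid JavaScript numeric literals, so Chart.js consumes them as-is.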