KOCER.DEV

SNIPPETS

Useful commands I keep coming back to.

Tree without junk

FIND: List project files while skipping build artifacts and dependency folders.
# List project files, skip build artifacts
$ find . \( -name "node_modules" \
    -o -name "dist" \
    -o -name ".next" \
    -o -name "__pycache__" \) \
    -prune -o -print
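
If tree is installed, its ignore pattern gets the same result more directly (same folder names as the find version above):
# Same idea with tree's ignore flag
$ tree -I 'node_modules|dist|.next|__pycache__'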

Docker nuke

DOCKER: Reclaim disk space by removing all unused Docker resources.
# Reclaim disk from Docker
$ docker system prune -af --volumes
# Check what's eating space
$ docker system df
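
A gentler variant if the full nuke feels risky: Docker's until filter prunes only images unused for a given period.
# Only prune images unused for 24h
$ docker image prune -a --filter "until=24h"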

Watch logs live

LOGS: Tail application and service logs in real time, with filtering.
# Tail multiple PM2 apps with color
$ pm2 logs --lines 50
# Or filter systemd service logs
$ journalctl -fu caddy --no-pager | \
    grep --color=auto "err"
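
To jump back in time instead of tailing from now, journalctl takes relative --since expressions (caddy is just the example unit from above):
# Replay the last 10 minutes, then follow
$ journalctl -u caddy --since "10 min ago" -f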

Quick disk audit

DISK: Find what's consuming disk space on the server.
# Top 10 largest dirs from root
$ du -h --max-depth=1 / 2>/dev/null | \
    sort -rh | head -10
# Find files over 100MB
$ find / -size +100M -type f 2>/dev/null
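
Filesystem-level usage is worth a glance first, and ncdu (if installed) makes the du sweep interactive:
# Per-filesystem usage
$ df -h
# Interactive version of the du sweep
$ ncdu /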

SSH tunnel to DB

SSH: Forward a remote database port through SSH for local access.
# Forward remote Postgres to local
$ ssh -L 5432:localhost:5432 \
    -N -f user@remote-host
# Then connect locally
$ psql -h localhost -U admin mydb
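
Because -f backgrounds the tunnel, it outlives the terminal; one way to find and stop it, assuming the pattern matches the command above:
# Kill the backgrounded tunnel
$ pkill -f "ssh -L 5432"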

Git search & clean

GIT: Search commit history for strings and clean up merged branches.
# Find commits that added or removed a string
$ git log -p -S "DATABASE_URL" \
    --all -- "*.ts"
# Nuke merged branches
$ git branch --merged main | \
    grep -v main | xargs git branch -d
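
Remote-tracking refs go stale the same way; pruning them is a separate step (origin and old-feature below are placeholder names):
# Drop stale remote-tracking branches
$ git fetch --prune
# Delete a branch on the remote too
$ git push origin --delete old-feature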

Process detective

PROC: Find what's using a port or eating resources.
# What's on port 3000?
$ lsof -i :3000
# Top memory hogs
$ ps aux --sort=-%mem | head -10
# Watch a process tree (-o: oldest match, since pstree takes one PID)
$ pstree -p "$(pgrep -of node)"
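
ss answers the port question without lsof, if you prefer iproute2; the sport filter is standard ss syntax:
# Alternative: listening sockets on 3000
$ ss -ltnp 'sport = :3000'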

Quick Postgres

PSQL: Common psql one-liners for database inspection.
# Table sizes
$ psql -c "SELECT relname, \
$ pg_size_pretty(pg_total_relation_size(oid)) \
$ FROM pg_class ORDER BY \
$ pg_total_relation_size(oid) DESC LIMIT 10;"
# Kill idle connections
$ psql -c "SELECT pg_terminate_backend(pid) \
$ FROM pg_stat_activity \
$ WHERE state = 'idle' \
$ AND query_start < now() - '5 min'::interval;"
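
Before terminating anything, it can be worth a look at what the non-idle backends are actually running:
# What's running right now?
$ psql -c "SELECT pid, state, query \
    FROM pg_stat_activity \
    WHERE state <> 'idle';"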

Spark submit

SPARK: Submit a PySpark job with common configurations.
# Submit with dynamic allocation
$ spark-submit \
    --master yarn \
    --deploy-mode cluster \
    --conf spark.dynamicAllocation.enabled=true \
    --conf spark.shuffle.service.enabled=true \
    --py-files deps.zip \
    main.py
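
Once submitted, the YARN CLI tracks the job; <app-id> below is whatever id the list command prints:
# Find the app id, then pull its logs
$ yarn application -list
$ yarn logs -applicationId <app-id>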

Kafka quick ops

KAFKA: Common Kafka CLI commands for topic management and debugging.
# List topics
$ kafka-topics.sh --bootstrap-server \
    localhost:9092 --list
# Peek at messages
$ kafka-console-consumer.sh \
    --bootstrap-server localhost:9092 \
    --topic my-topic \
    --from-beginning --max-messages 5
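
The usual follow-up is consumer lag; the groups tool reports it per partition (my-group is a placeholder):
# Check consumer group lag per partition
$ kafka-consumer-groups.sh \
    --bootstrap-server localhost:9092 \
    --describe --group my-group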

AWS S3 sync

AWS: Sync data to/from S3 with common flags.
# Sync local to S3 (dry run first)
$ aws s3 sync ./data s3://bucket/data \
    --exclude "*.tmp" --dryrun
# Download with parallel transfers
$ aws configure set \
    s3.max_concurrent_requests 20
$ aws s3 cp s3://bucket/big.parquet .
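
Sizing a prefix before a big transfer helps set expectations; these ls flags are standard:
# How big is the prefix?
$ aws s3 ls s3://bucket/data \
    --recursive --human-readable --summarize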

Systemd service

SYSTEMD: Manage and debug systemd services.
# Check why a service failed
$ systemctl status myapp.service
$ journalctl -u myapp -n 50 --no-pager
# Reload after editing unit file
$ systemctl daemon-reload
$ systemctl restart myapp
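
Two related habits: enable-and-start in one step, and a quick sweep for anything else that's down.
# Enable at boot and start immediately
$ systemctl enable --now myapp
# List anything that has failed
$ systemctl --failed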