[{"categories":["System Administration"],"content":"Problem When SSH’ing from Ghostty terminal emulator to RHEL 8.10 and Nutanix CVMs, the Return key behaved unexpectedly:\nPressing Return added a visual space character The last character typed was deleted (e.g., ’s’ in ’ls') A second Return press was needed to send the incomplete command Example:\n[user@host ~]# ls [user@host ~]# l # Character deleted, space added [user@host ~]# bash: l: command not found This issue only occurred on RHEL 8.10 and Nutanix CVMs. Other systems (RHEL 9.4, RHEL 7.9, Ubuntu 24.04, Debian 13) worked perfectly fine.\nRoot Cause The issue was a terminal type incompatibility between Ghostty and RHEL 8.10’s terminfo database.\nWhat happens:\nGhostty advertises itself as TERM=xterm-ghostty when connecting via SSH RHEL 8.10’s terminfo database doesn’t have a definition for xterm-ghostty, so it falls back to a generic or incompatible terminal type This fallback causes readline (bash’s line editing library) to misinterpret the Return key escape sequence as a backspace command Newer systems (RHEL 9.4+, Ubuntu 24.04, Debian 13) have updated terminfo databases that handle xterm-ghostty correctly, or have better fallback behavior Solution The fix is simple: force the terminal type to linux, which has a stable, widely-compatible escape sequence set.\nOption 1: SSH Config (Recommended - Client-side) Edit or create ~/.ssh/config and add:\nHost * SetEnv TERM=linux This applies the fix to all SSH connections without requiring changes on any servers.\nAdvantages:\nOne-time setup No server changes needed Works with any terminal emulator (not just Ghostty) Survives Ghostty updates Option 2: Server-side (If SSH config not available) Add to ~/.bashrc on affected servers:\nexport TERM=linux This forces the shell to use the linux terminal type on each login.\nVerification After applying the fix, SSH back to your RHEL 8.10 or Nutanix system:\nssh root@your-rhel-8-system Type a command and press Return - it 
should now execute normally without the delete/space behavior:\n[root@host ~]# ls anaconda-ks.cfg Desktop Documents Downloads [root@host ~]# pwd /root [root@host ~]# Why TERM=linux Works The linux terminal type:\nUses simpler, more stable escape sequences Is compatible across all RHEL/Linux versions Works perfectly with Ghostty’s implementation Does not break any functionality The xterm-ghostty definition is not available in RHEL 8.10’s terminfo database, so it either falls back to a generic type or uses an incompatible definition, causing readline to misinterpret Return as Backspace.\nAlternative: Ghostty Configuration Ghostty doesn’t currently have a built-in config option to change the TERM announcement (it’s hardcoded to xterm-ghostty). However, the SSH config method above provides a clean, terminal-agnostic solution.\nIf you want a Ghostty-specific approach, you would need to:\nModify Ghostty’s source code to support TERM override (not practical) Use SSH config (recommended) Apply server-side fixes Summary Approach Pros Cons SSH Config One-time setup, applies globally, no server changes None Server-side ~/.bashrc Works if SSH config unavailable Requires changes on each server Ghostty config Would be terminal-centric Not currently supported Recommended: Use SSH config - it’s the most elegant and centralized solution.\nAdditional Notes This issue is specific to RHEL 8.10 and Nutanix CVMs (which likely run RHEL 8.x) RHEL 7.9 and RHEL 9.4+ do not have this issue The fix applies globally to all SSH connections, not just Ghostty The linux terminal type is widely supported and stable across all Linux distributions ","summary":null,"tags":["Ghostty","Terminal","RHEL 8","SSH","Troubleshooting"],"title":"Fixing Ghostty Return Key Issue on RHEL 8.10 and Nutanix CVMs","uri":"/post/2026-04-08-ghostty-rhel-8-terminal-issue/"},{"categories":["webserver","linux","firewall"],"content":"This document describes:\nBuilding a custom Caddy container image with the GeoIP plugin (so 
Caddy can enrich access logs with country code/name). Configuring Caddy JSON access logs to include GeoIP fields. Setting up Fail2Ban to parse Caddy logs and send Pushover notifications with GeoIP info via mmdblookup. Optional “SOC dashboard” style fields (severity, jail type, ban time, until). Assumptions: Debian-based host, Docker + Compose, Caddy running in a container, Fail2Ban running on the host.\n1) Why a custom Caddy build? Stock Caddy does not include third‑party modules. If your Caddyfile contains a directive from a plugin (e.g., geoip), Caddy will fail to start with:\nunrecognized directive: geoip So we build Caddy with the GeoIP module compiled in.\n2) Build a custom Caddy image with GeoIP support We use xcaddy to compile Caddy with the plugin:\ngithub.com/IT-Hock/caddy-geoip Example Dockerfile # ---- builder stage ---- FROM caddy:builder AS builder RUN xcaddy build --with github.com/IT-Hock/caddy-geoip # ---- runtime stage ---- FROM caddy:latest COPY --from=builder /usr/bin/caddy /usr/bin/caddy Example docker-compose.yaml file services: webserver: image: nginx:alpine container_name: webserver command: \u003e sh -c \"printf 'Hello, world!\\n' \u003e /usr/share/nginx/html/index.html \u0026\u0026 nginx -g 'daemon off;'\" expose: - \"80\" restart: unless-stopped caddy: build: context: . container_name: caddy restart: unless-stopped ports: - \"80:80\" - \"443:443\" volumes: - caddy_data:/data - caddy_config:/config - /var/log/caddy:/var/log/caddy - ./Caddyfile:/etc/caddy/Caddyfile:ro - ./GeoLite2-Country.mmdb:/data/GeoLite2-Country.mmdb:ro volumes: caddy_data: caddy_config: Build and run with Docker Compose docker compose build 
docker compose up -d Verify modules are present Inside the container:\ndocker exec -it caddy caddy list-modules | grep -i geo Expected (or similar):\nhttp.handlers.geoip caddy.logging.encoders.filter.geoip If you see those, the plugin is compiled in.\n3) Configure Caddy to enrich access logs with GeoIP Once the plugin exists, Caddy can add GeoIP information into access logs (JSON logs).\nAdd GeoIP fields to JSON access log In your log format / encoder, ensure you output these fields:\ngeoip_country_code geoip_country_name Example: (conceptual; actual stanza depends on your Caddyfile layout)\nlog { output file /var/log/caddy/my-caddy-access.log format json } Caddyfile (final) This is the Caddyfile we ended up with (GeoIP runs first, adds country code/name into the JSON access log, drops common scanner noise early, and reverse-proxies the rest to our webserver):\n{ order geoip first } hostname.example.com { route { geoip * /data/GeoLite2-Country.mmdb # Drop common web-scanner noise early (adjust as you like) @scanners { path_regexp bad ^/(?:\\.env|.*\\.env|wp-|wp/|wordpress|actuator|phpmyadmin|\\.vscode|cgi-bin|vendor/|src/|config/|\\.git|\\.DS_Store) } respond @scanners 404 # OPTIONAL: add GeoIP fields into the access log entries # (these placeholders are provided by the GeoIP plugin) log_append geoip_country_code {geoip_country_code} log_append geoip_country_name {geoip_country_name} # Everything else goes to webserver reverse_proxy http://webserver:80 { header_up Host {host} header_up X-Forwarded-Proto {scheme} header_up X-Forwarded-Host {host} header_up X-Forwarded-For {remote} } } log { output file /var/log/caddy/my-caddy-access.log format json } } Notes order geoip first ensures the GeoIP handler runs early enough that placeholders like {geoip_country_code} / {geoip_country_name} are available for later handlers (like log_append). The @scanners matcher + respond 404 is purely “noise reduction” so common mass-scans don’t hit our Webserver at all. 
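The scanner regex can be sanity-checked outside Caddy. A quick sketch using grep -E with the same pattern (the non-capturing ?: group is dropped because POSIX ERE does not support it; the request paths below are made-up examples):

```shell
# Test sample request paths against the @scanners pattern; matching paths are printed
pat='^/(\.env|.*\.env|wp-|wp/|wordpress|actuator|phpmyadmin|\.vscode|cgi-bin|vendor/|src/|config/|\.git|\.DS_Store)'
printf '%s\n' '/.env' '/wp-login.php' '/index.html' '/vendor/phpunit' | grep -E "$pat"
```

Here /.env, /wp-login.php and /vendor/phpunit match (and would get the 404), while /index.html passes through to the reverse proxy.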
log_append adds extra top-level fields into each JSON access log entry, which is why Fail2Ban can later enrich notifications without doing GeoIP itself (but we chose mmdblookup in the notification path instead). The header_up lines are optional: Caddy’s reverse_proxy already forwards most of these by default; keeping Host is sometimes useful when upstream apps care about it. Validate Caddyfile You can validate without starting Caddy:\ndocker exec -it caddy caddy validate --config /etc/caddy/Caddyfile --adapter caddyfile Then ensure the GeoIP handler/filter is enabled in the relevant request path so those fields get added.\nConfirm GeoIP fields appear sudo tail -n 1 /var/log/caddy/my-caddy-access.log | jq . Example output includes:\n\"geoip_country_name\": \"Denmark\", \"geoip_country_code\": \"DK\" 4) Fail2Ban setup for Caddy logs Fail2Ban watches the Caddy access log and bans abusive IPs according to your jail/filter rules.\nJail (example idea) Your jail points at:\n/var/log/caddy/my-caddy-access.log and uses your chosen filter and ban time settings.\n5) Pushover notifications + GeoIP via mmdblookup (Option A) Instead of scraping GeoIP from Caddy logs, we can resolve GeoIP directly from a local MaxMind DB using mmdblookup.\nInstall mmdblookup On Debian:\nsudo apt-get update sudo apt-get install -y mmdb-bin Place the MaxMind DB Example path used below:\n/data/GeoLite2-Country.mmdb (Adjust to your real location, and ensure Fail2Ban can read it.)\n6) Pushover sender script File: /usr/local/bin/fail2ban-pushover\nSends a message to Pushover using env vars.\n#!/usr/bin/env bash set -euo pipefail TITLE=\"${1:-Fail2Ban}\" MESSAGE=\"${2:-}\" PRIORITY=\"${3:-0}\" : \"${PUSHOVER_USER_KEY:?missing PUSHOVER_USER_KEY}\" : \"${PUSHOVER_APP_TOKEN:?missing PUSHOVER_APP_TOKEN}\" DEVICE=\"${PUSHOVER_DEVICE:-}\" SOUND=\"${PUSHOVER_SOUND:-}\" URL=\"${PUSHOVER_URL:-}\" URL_TITLE=\"${PUSHOVER_URL_TITLE:-}\" args=( -fsS --retry 2 --retry-delay 1 -X POST -d 
\"token=${PUSHOVER_APP_TOKEN}\" -d \"user=${PUSHOVER_USER_KEY}\" -d \"title=${TITLE}\" --data-urlencode \"message=${MESSAGE}\" -d \"priority=${PRIORITY}\" https://api.pushover.net/1/messages.json ) [[ -n \"$DEVICE\" ]] \u0026\u0026 args+=( -d \"device=${DEVICE}\" ) [[ -n \"$SOUND\" ]] \u0026\u0026 args+=( -d \"sound=${SOUND}\" ) [[ -n \"$URL\" ]] \u0026\u0026 args+=( -d \"url=${URL}\" ) [[ -n \"$URL_TITLE\" ]] \u0026\u0026 args+=( -d \"url_title=${URL_TITLE}\" ) timeout 10s curl --connect-timeout 3 --max-time 8 \"${args[@]}\" \u003e/dev/null || true Make executable:\nsudo chmod +x /usr/local/bin/fail2ban-pushover Provide secrets to Fail2Ban environment Fail2Ban runs as a service; it may not inherit your shell env. Provide secrets in a root‑only env file and source it in the script or service.\nOne pattern:\n/etc/fail2ban/pushover.env (permissions 600) PUSHOVER_USER_KEY=\"user-token\" PUSHOVER_APP_TOKEN=\"app-token\" Then in /usr/local/bin/fail2ban-pushover, add near the top:\n# Optional: source env for systemd services if [[ -r /etc/fail2ban/pushover.env ]]; then # shellcheck disable=SC1091 source /etc/fail2ban/pushover.env fi 7) SOC-style wrapper: Fail2Ban -\u003e Pushover with GeoIP, severity, jail type File: /usr/local/bin/fail2ban-pushover-soc\nThis wrapper:\nlooks up GeoIP via mmdblookup formats a clean message optionally adds “SOC dashboard” fields: Severity emoji based on attempts Jail type (auth vs scan vs other) Ban time + until (if passed by Fail2Ban action) Wrapper script (example) #!/usr/bin/env bash set -euo pipefail # Optional debugging: # exec 2\u003e\u003e/var/log/fail2ban-pushover-soc.err # set -x mmdb_str() { sed -n 's/^[[:space:]]*\"\\([^\"]*\\)\".*/\\1/p' | head -n1 } JAIL=\"${1:?jail}\" IP=\"${2:?ip}\" FAILURES_RAW=\"${3:-}\" LOGPATH=\"${4:-}\" # ignore provided hostname (can be \"\u003chostname\u003e\"); compute reliable one: HOST=\"$(hostname -f 2\u003e/dev/null || hostname)\" PRIO=\"${6:-0}\" # Optional extra args if you decide to pass them 
from the action: EVENT=\"${7:-ban}\" # ban | unban BAN_EPOCH=\"${8:-}\" TIMEOUT_S=\"${9:-}\" # Normalize failures to integer FAILURES=0 if [[ \"${FAILURES_RAW}\" =~ ^[0-9]+$ ]]; then FAILURES=\"${FAILURES_RAW}\" fi # --- GeoIP from MaxMind DB (mmdblookup) --- DB=\"/data/GeoLite2-Country.mmdb\" GEO=\"\" if command -v mmdblookup \u003e/dev/null 2\u003e\u00261 \u0026\u0026 [[ -r \"${DB}\" ]]; then CC=\"$(mmdblookup --file \"${DB}\" --ip \"${IP}\" country iso_code 2\u003e/dev/null | mmdb_str || true)\" CN=\"$(mmdblookup --file \"${DB}\" --ip \"${IP}\" country names en 2\u003e/dev/null | mmdb_str || true)\" GEO=\"${CC}${CC:+ - }${CN}\" fi # Jail type (simple mapping; extend as you like) JAIL_TYPE=\"other\" case \"${JAIL,,}\" in *scan*) JAIL_TYPE=\"scan\" ;; *auth*|*login*) JAIL_TYPE=\"auth\" ;; esac # Severity based on attempts (tune thresholds) SEV=\"🟢\" if (( FAILURES \u003e= 25 )); then SEV=\"🔴\" elif (( FAILURES \u003e= 10 )); then SEV=\"🟠\" elif (( FAILURES \u003e= 1 )); then SEV=\"🟡\" fi # Cosmetic ban time / until (requires passing banEpoch/timeout from the action) BANTIME_LINE=\"\" UNTIL_LINE=\"\" if [[ \"${EVENT}\" == \"ban\" \u0026\u0026 \"${BAN_EPOCH}\" =~ ^[0-9]+$ \u0026\u0026 \"${TIMEOUT_S}\" =~ ^[0-9]+$ ]]; then if (( TIMEOUT_S \u003e= 86400 )); then days=$(( TIMEOUT_S / 86400 )) BANTIME_LINE=\"Ban time: ${days}d\" elif (( TIMEOUT_S \u003e= 3600 )); then hrs=$(( TIMEOUT_S / 3600 )) BANTIME_LINE=\"Ban time: ${hrs}h\" elif (( TIMEOUT_S \u003e= 60 )); then mins=$(( TIMEOUT_S / 60 )) BANTIME_LINE=\"Ban time: ${mins}m\" else BANTIME_LINE=\"Ban time: ${TIMEOUT_S}s\" fi until_epoch=$(( BAN_EPOCH + TIMEOUT_S )) UNTIL_LINE=\"Until: $(date -d \"@${until_epoch}\" '+%Y-%m-%d %H:%M:%S %Z' 2\u003e/dev/null || true)\" fi # Title + message if [[ \"${EVENT}\" == \"unban\" ]]; then TITLE=\"✅ Fail2Ban — Unbanned (${JAIL})\" ACTION_LINE=\"Action: IP unbanned\" else TITLE=\"${SEV} Fail2Ban — Banned (${JAIL_TYPE})\" ACTION_LINE=\"Action: IP banned\" fi MSG=$( printf \"%s\\n\" 
\\ \"Host: ${HOST}\" \\ \"Service: ${JAIL}\" \\ \"${ACTION_LINE}\" \\ \"Source IP: ${IP}\" \\ \"${GEO:+GeoIP: ${GEO}}\" \\ \"Attempts: ${FAILURES}\" \\ \"${BANTIME_LINE}\" \\ \"${UNTIL_LINE}\" \\ \"${LOGPATH:+Log file: ${LOGPATH}}\" ) # IMPORTANT: never block Fail2Ban; swallow failures /usr/local/bin/fail2ban-pushover \"${TITLE}\" \"${MSG}\" \"${PRIO}\" || true exit 0 Make executable:\nsudo chmod +x /usr/local/bin/fail2ban-pushover-soc 8) Fail2Ban action definition (pushover) File: /etc/fail2ban/action.d/pushover.conf\nBasic version (6 args):\n[Definition] actionstart = actionstop = actionban = /usr/local/bin/fail2ban-pushover-soc \"\u003cname\u003e\" \"\u003cip\u003e\" \"\u003cfailures\u003e\" \"\u003clogpath\u003e\" \"\u003chostname\u003e\" \"0\" actionunban= /usr/local/bin/fail2ban-pushover-soc \"\u003cname\u003e\" \"\u003cip\u003e\" \"\" \"\u003clogpath\u003e\" \"\u003chostname\u003e\" \"-1\" Optional: pass banEpoch/timeout for cosmetic ban time + until If your Fail2Ban version supports these action properties (common), you can pass:\n\u003cbanEpoch\u003e \u003ctimeout\u003e [Definition] actionstart = actionstop = actionban = /usr/local/bin/fail2ban-pushover-soc \"\u003cname\u003e\" \"\u003cip\u003e\" \"\u003cfailures\u003e\" \"\u003clogpath\u003e\" \"\u003chostname\u003e\" \"0\" \"ban\" \"\u003cbanEpoch\u003e\" \"\u003ctimeout\u003e\" actionunban= /usr/local/bin/fail2ban-pushover-soc \"\u003cname\u003e\" \"\u003cip\u003e\" \"\u003cfailures\u003e\" \"\u003clogpath\u003e\" \"\u003chostname\u003e\" \"-1\" \"unban\" \"\u003cbanEpoch\u003e\" \"\u003ctimeout\u003e\" Reload Fail2Ban after changes:\nsudo fail2ban-client reload 9) Testing \u0026 troubleshooting Trigger a manual ban sudo fail2ban-client set caddy banip 1.2.3.4 Check Fail2Ban log sudo tail -n 100 /var/log/fail2ban.log If your action times out Fail2Ban kills actions that run too long (commonly 60s). 
Causes include:\nmissing env vars causing scripts to block/hang network issues to Pushover API script attempting slow external commands Fix by:\nmaking Pushover sender robust (curl --retry, -fsS) ensuring secrets are available to the service ensuring the wrapper never blocks Fail2Ban (|| true + exit 0) Why did “Host: ” show up? Fail2Ban sometimes passes the literal placeholder text \"\u003chostname\u003e\" depending on config/version.\nThe wrapper script computes a reliable hostname using:\nHOST=\"$(hostname -f 2\u003e/dev/null || hostname)\" So the “Host:” line shows a real FQDN like hostname.example.com.\n10) Summary Caddy: custom-built with GeoIP module using xcaddy, verified via caddy list-modules. Logs: JSON access log enriched with geoip_country_code and geoip_country_name. Fail2Ban: watches Caddy access log, bans offenders. Notifications: a custom wrapper uses mmdblookup to generate GeoIP and sends clean messages to Pushover. SOC polish: severity, jail type, ban time/until can be included without breaking Fail2Ban. ","summary":" This document describes:\n- Building a **custom Caddy container image** with the **GeoIP plugin** (so Caddy can enrich access logs with country code/name). - Configuring **Caddy JSON access logs** to include GeoIP fields. - Setting up **Fail2Ban** to parse Caddy logs and send **Pushover notifications** with GeoIP info via `mmdblookup`. - Optional “SOC dashboard” style fields (severity, jail type, ban time, until). ","tags":["caddy","fail2ban","pushover"],"title":"Caddy + GeoIP + Fail2Ban (Pushover) — Setup Notes","uri":"/post/2026-01-27-caddy-geoip-fail2ban/"},{"categories":["Hugo"],"content":"Managing images in Hugo doesn’t have to be a mess of \u003cimg\u003e tags and custom styling. 
With the hugo-img-lightbox module, you can add responsive images, lazy loading, automatic resizing, figcaptions, and Lightbox2 support — all through a simple shortcode.\nIn this post, I’ll show you how the module works, how to install it, and how to use it in your content.\n🔧 What the Module Does This module provides:\nA shortcode: {{\u003c img src=“image.png” alt=\"…\" caption=\"…\" \u003e}} for inserting images Automatic resizing into responsive sizes (500w, 800w, 1200w) Lazy loading \u003cimg\u003e elements Semantic \u003cfigure\u003e and optional \u003cfigcaption\u003e Lightbox2 integration for full-size preview on click Conditional asset loading (JS/CSS is only injected if the shortcode is used) 📦 Installation (via Hugo Modules) Make sure you have Go installed (brew install go or visit go.dev)\nIn the root of your Hugo site, initialize Hugo Modules (if not already):\nhugo mod init mysite.local In your config.toml, add the module:\n[module] [[module.imports]] path = \"github.com/kholmqvist/hugo-img-lightbox\" Run Hugo to fetch it:\nhugo mod get github.com/kholmqvist/hugo-img-lightbox In your layout (usually layouts/_default/baseof.html), load the Lightbox assets if needed:\n{{ if .Scratch.Get \"usesLightbox\" }} {{ partial \"lightbox.html\" . }} {{ end }} 🖼️ Usage in Markdown Once set up, you can use the shortcode like this:\n{{\u003c img src=“Sophos-DHCP-over-IPSec.png” alt=“Sophos UI” caption=“Sophos DHCP configuration over IPSec” \u003e}}\nThis will:\nFind the image in the same folder as your index.md (i.e., a page bundle) Resize it to multiple widths Wrap it in a \u003cfigure\u003e with a caption Enable Lightbox2 when clicked 🧪 How It Works Internally The img.html shortcode uses .Page.Resources.Match to find the image in the current page bundle. It resizes the image to multiple sizes and builds a srcset. It sets a flag: .Page.Scratch.Set \"usesLightbox\" true, so your layout knows to load the Lightbox assets. 
If a caption is given, it is rendered as a \u003cfigcaption\u003e. The Lightbox partial is only loaded if at least one image on the page uses the shortcode. ✅ Benefits Keeps your Markdown clean Respects page bundles and Hugo image processing Avoids loading unused JS or CSS Easily extendable if you want to support static images or external links 👋 Final Thoughts This module helps keep your images clean, performant, and modern — and integrates beautifully with Hugo’s existing features like page bundles and image processing.\nIf you want to add features like fallback to static images, thumbnail-only lightboxes, or even gallery support, this module is a great foundation.\nYou can find the source and instructions on GitHub:\n👉 github.com/kholmqvist/hugo-img-lightbox Happy Hugo-ing!\n","summary":"How to use the hugo-img-lightbox module to easily add responsive images with captions and Lightbox support in Hugo.","tags":["hugo","images","lightbox","shortcodes","modules"],"title":"Responsive Images and Lightbox with Hugo Modules","uri":"/post/2025-03-26-my-first-hugo-module/"},{"categories":["firewall"],"content":" Configure the Sophos Firewall to function as a DHCP relay agent, enabling it to forward DHCP Discover and Request packets from local clients to a centralized DHCP server located behind the head office firewall. 
Ensure that the relay traffic is routed over an established route-based IPsec VPN tunnel for secure transmission.\nHeadquarters DHCP Create a DHCP Scope in the DHCP Server 10.20.30.2 DHCP Range 172.30.39.200-254 Subnet mask 255.255.255.0 Default Gateway 172.30.39.1 DNS Servers 10.20.30.2, 172.30.39.1 IPSec Create an IPSec tunnel with VTI interfaces Name Headquarters IP Version IPv4 Connection Type Tunnel interface Gateway type Responder Profile IKEv2 Authentication Type RSA Key — — Listening interface Port2 - 1.1.1.1 Local ID Type IP address Local ID 1.1.1.1 Local Subnet Any — — Remote Gateway address 2.2.2.2 Remote ID Type IP address Remote ID 2.2.2.2 Remote Subnet Any XFRM Interface IPv4 10.254.0.1 Route Traffic over IPsec I will configure two SD-WAN routes on the HQ firewall. The first route will direct traffic from the internal DHCP server to the Branch Office firewall. The second route (optional) will route internet-bound traffic destined for the Branch Office network. This setup is intended to enable centralized internet breakout, allowing Branch Office traffic to be backhauled through the HQ firewall for internet access.\nName BranchOffice DHCP — — Source Networks DHCP Server (10.20.30.2) Destination Networks BranchOffice_Relay (172.30.39.1) Services DHCP — — Link selection settings Primary and Backup gateways Primary gateway BranchOffice - 10.254.0.2 Route only through specified gateways Checked Name BranchOffice Internet — — Source Networks Internet IPv4 (This is a hostgroup in Sophos Firewall) Destination Networks BranchOffice (172.30.39.0/24) Services Any — — Link selection settings Primary and Backup gateways Primary gateway BranchOffice - 10.254.0.2 Route only through specified gateways Checked HQ Firewall Rules We will configure four firewall rules on the HQ firewall:\nRule 1: Permits DHCP relay traffic from the Branch Office firewall to the DHCP server located in the HQ network, allowing DHCP Discover and Request messages to be forwarded 
appropriately.\nRule 2: Allows the Branch Office network to make DNS queries against the DHCP server.\nRule 3: Allows internet-bound traffic originating from the Branch Office network to traverse the HQ firewall for centralized internet access.\nRule 4: Enables return traffic from the internet to reach the Branch Office network, completing the flow for NAT and stateful inspection.\nBranch Office IPSec Branch Office Name Branch Office IP Version IPv4 Connection Type Tunnel interface Gateway type Initiate the connection Profile IKEv2 Authentication Type RSA Key — — Listening interface Port2 - 2.2.2.2 Local ID Type IP address Local ID 2.2.2.2 Local Subnet Any — — Remote Gateway address 1.1.1.1 Remote ID Type IP address Remote ID 1.1.1.1 Remote Subnet Any XFRM Interface IPv4 10.254.0.2 Branch Office DHCP Configure a DHCP relay on the Branch Office router. Specify the LAN interface (BranchOffice - 172.30.39.1) as the source interface for relay operations. Set the DHCP server IP address to 10.20.30.2. Ensure the option ‘Relay through IPsec’ is enabled to forward DHCP packets securely over the IPsec tunnel.\nConfigure IPsec Route and Source NAT for System-Generated Traffic to DHCP Server On the Branch Office firewall, configure an IPsec route to ensure system-generated traffic (e.g., DHCP relay packets) is forwarded to the DHCP server located at the Head Office via the IPsec tunnel. Additionally, apply source NAT so that system-generated traffic destined for the DHCP server at the Head Office leaves with the Branch Office LAN interface IP as its source, ensuring proper routing and response.\nAccess the Device Console:\nFrom the CLI menu, select option 4 for the Device Console.\nConfigure System Traffic Source NAT:\nApply source NAT to translate the firewall’s LAN interface IP (used by the DHCP relay agent) to the destination DHCP server IP. 
This ensures the return traffic is correctly routed.\nset advanced-firewall sys-traffic-nat add destination \u003cDHCP_Server_IP\u003e snatip \u003cBranch_LAN_Interface_IP\u003e Example: set advanced-firewall sys-traffic-nat add destination 10.20.30.2 snatip 172.30.39.1 Note: These commands are essential for relayed DHCP packets initiated by the firewall to be transmitted over the IPsec tunnel and correctly processed by the remote DHCP server.\nBranch Office SD-WAN Route I will configure two SD-WAN routes on the Branch Office firewall. The first route will forward DHCP relay traffic from the Branch Office to the DHCP server located at the Head Office. The second route (optional) will direct internet-bound traffic from the Branch Office network through the IPsec tunnel to the Head Office, enabling centralized internet breakout. This setup ensures that all Branch Office internet traffic is backhauled via the HQ firewall for unified security and policy enforcement.\nName HQ DHCP — — Source Networks Any Destination Networks HQ DHCP Server (10.20.30.2) Services DHCP — — Link selection settings Primary and Backup gateways Primary gateway HQ - 10.254.0.1 Route only through specified gateways Checked Name HQ Internet — — Source Networks BranchOffice (172.30.39.0/24) Destination Networks Any Services Any — — Link selection settings Primary and Backup gateways Primary gateway HQ - 10.254.0.1 Route only through specified gateways Checked Branch Office Firewall Rules Inbound Access to Branch Office Network:\nCreate a firewall rule to allow inbound traffic destined for the Branch Office LAN subnet 172.30.39.0/24. This rule should permit traffic arriving over the IPsec tunnel or other trusted interfaces, based on your topology and security policies.\nOutbound Access from Branch Office to HQ:\nDefine a firewall rule to allow outbound traffic originating from the Branch Office subnet 172.30.39.0/24 towards the Head Office network. 
This rule enables inter-site communication over the IPsec VPN tunnel and ensures proper routing of internal services such as DHCP, DNS, or centralized internet breakout.\n","summary":"Set up Sophos Firewall as a DHCP relay to forward client requests to a central DHCP server via a route-based IPsec VPN.","tags":["Sophos","XG","DHCP","IPSec","VPN"],"title":"DHCP over Route-Based IPSec in Sophos Firewall","uri":"/post/2025-03-19-sophos-firewall-dhcp-over-wan/"},{"categories":["config","apple","ssh","vim"],"content":"##.mackup.cfg\n[storage] engine = icloud [applications_to_sync] bash brave curl git microsoft-remote-desktop p10k spotify ssh vim vscode tmux wireshark zsh .mackup/ssh.cfg [application] name = SSH [configuration_files] .ssh ","summary":"backup configuration files on mac","tags":["macOS"],"title":"Mackup","uri":"/post/2024-12-16-mackup/"},{"categories":["dns","linux","firewall"],"content":"Overview I have 5 DNS servers; each announces the anycast IP with exabgp and runs knot-resolver as the DNS resolver. I use pfSense as my router and have installed the FRR package on it.\nAnycast IP: 172.16.0.1\nDNS1: 172.30.31.31\nDNS2: 172.30.31.32\nDNS3: 172.30.31.33\nDNS4: 172.30.31.34\nDNS5: 172.30.31.35\nBGP Router: 172.30.31.1\nLocal AS: 65000\nBind anycast IP on loopback interface Start by binding the anycast IP to the loopback interface on each DNS server. 
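Before committing the netplan change, the binding can be tried out non-persistently with iproute2 (same anycast address, standard lo interface). This is just a sketch of the equivalent one-off commands:

```shell
# Temporarily bind the anycast /32 on the loopback interface (root required)
ip addr add 172.16.0.1/32 dev lo
# Verify it is present
ip -4 addr show dev lo
# Remove it again if you only wanted to test
ip addr del 172.16.0.1/32 dev lo
```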
/etc/netplan/00-installer-config.yaml\nuser@dns5:~$ sudo vim /etc/netplan/00-installer-config.yaml # This is the network config written by 'subiquity' network: ethernets: lo: match: name: lo addresses: [ 172.16.0.1/32 ] ens18: addresses: - 172.30.31.35/24 nameservers: addresses: - 127.0.0.1 routes: - to: default via: 172.30.31.1 version: 2 user@dns5:~$ sudo netplan apply As shown below, the anycast ip is now bound to the loopback interface.\nuser@dns5:~$ ip a 1: lo: \u003cLOOPBACK,UP,LOWER_UP\u003e mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 172.16.0.1/32 scope global lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: ens18: \u003cBROADCAST,MULTICAST,UP,LOWER_UP\u003e mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether bc:24:11:a2:58:87 brd ff:ff:ff:ff:ff:ff altname enp0s18 inet 172.30.31.35/24 brd 172.30.31.255 scope global ens18 valid_lft forever preferred_lft forever inet6 fe80::be24:11ff:fea2:5887/64 scope link valid_lft forever preferred_lft forever Configure DNS software Install and configure knot-resolver, so it listens on both the anycast ip (172.16.0.1) and the local ip (172.30.31.35)\nuser@dns5:~$ wget https://secure.nic.cz/files/knot-resolver/knot-resolver-release.deb user@dns5:~$ sudo dpkg -i knot-resolver-release.deb user@dns5:~$ sudo apt update ; sudo apt install -y knot-resolver /etc/knot-resolver/kresd.conf\n-- SPDX-License-Identifier: CC0-1.0 -- vim:syntax=lua:set ts=4 sw=4: -- Refer to manual: https://knot-resolver.readthedocs.org/en/stable/ -- Network interface configuration net.listen('127.0.0.1', 53, { kind = 'dns' }) net.listen('172.16.0.1', 53, { kind = 'dns' }) net.listen('172.30.31.35', 53, { kind = 'dns' }) -- Load useful modules modules = { 'hints \u003e iterate', -- Allow loading /etc/hosts or custom root hints 
'stats', -- Track internal statistics 'predict', -- Prefetch expiring/frequent records 'prefill', -- Prefill Cache } -- Prefill prefill.config({ ['.'] = { url = 'https://www.internic.net/domain/root.zone', interval = 86400, -- seconds ca_file = '/etc/pki/tls/certs/ca-bundle.crt', -- optional } }) -- Cache size cache.size = 100 * MB -- set downstream bufsize to 4096 and upstream bufsize to 1232 net.bufsize(4096, 1232) user@dns5:~$ sudo systemctl enable --now kresd@1.service Configure Exabgp First install exabgp and then configure the exabgp service\nuser@dns5:~$ sudo apt install -y exabgp user@dns5:~$ sudo vim /etc/exabgp/exabgp.conf Configure pfSense as BGP neighbor and tell exabgp to use script dns-check.sh to announce the route\nprocess announce-routes { run /etc/exabgp/dns-check.sh; encoder text; } neighbor 172.30.31.1 { local-address 172.30.31.35; local-as 65000; peer-as 65000; api { processes [ announce-routes ]; } } dns-check.sh Now create a script /etc/exabgp/dns-check.sh that checks that the local DNS resolver is working and announces the route while it is.\n#!/usr/bin/bash ANYCAST_IP=\"172.16.0.1\" LOCAL_IP=\"172.30.31.35\" while true; do /usr/bin/dig google.com @127.0.0.1 \u003e /dev/null; if [ \"$?\" == 0 ]; then echo \"announce route ${ANYCAST_IP} next-hop ${LOCAL_IP}\" else echo \"withdraw route ${ANYCAST_IP} next-hop ${LOCAL_IP}\" fi sleep 1 done Make the script executable\nuser@dns5:~$ sudo chmod +x /etc/exabgp/dns-check.sh Create exabgp.service Create a systemd service for exabgp so it starts automatically /etc/systemd/system/exabgp.service\n[Unit] Description=ExaBGP Documentation=man:exabgp(1) Documentation=man:exabgp.conf(5) Documentation=https://github.com/Exa-Networks/exabgp/wiki After=network.target ConditionPathExists=/etc/exabgp/exabgp.conf [Service] User=exabgp Group=exabgp RuntimeDirectory=exabgp RuntimeDirectoryMode=0750 ExecStartPre=-/usr/bin/mkfifo /run/exabgp/exabgp.in ExecStartPre=-/usr/bin/mkfifo /run/exabgp/exabgp.out 
Environment=exabgp_daemon_daemonize=false Environment=ETC=/etc ExecStart=/usr/sbin/exabgp /etc/exabgp/exabgp.conf ExecReload=/bin/kill -USR1 $MAINPID Restart=always CapabilityBoundingSet=CAP_NET_ADMIN CAP_NET_BIND_SERVICE AmbientCapabilities=CAP_NET_ADMIN CAP_NET_BIND_SERVICE [Install] WantedBy=multi-user.target user@dns5:~$ sudo systemctl daemon-reload user@dns5:~$ sudo systemctl enable --now exabgp.service Create neighbor in pfSense Let’s check whether the BGP route is working\n","summary":"Ubuntu Anycast DNS Server with BGP announcement to pfSense","tags":["pfsense","knot-resolver","bgp","anycast","dns"],"title":"Anycast DNS PfSense","uri":"/post/2024-04-15-anycast-dns-pfsense/"},{"categories":["linux","firewall"],"content":"I have recently bought a GL-iNet AXT-1800 Travel router. However, the AXT-1800 ships with Tailscale 1.32.2, which is really outdated compared to the current 1.60.1 version of Tailscale.\nThis is how I updated it to version 1.60.1. Step 1. Start by installing UPX (Note: I used a Mac)\nbrew install upx # Download the latest version of tailscale curl https://pkgs.tailscale.com/stable/tailscale_1.60.1_arm.tgz -o tailscale_1.60.1_arm.tgz # Untar the compressed file tar -xvzf tailscale_1.60.1_arm.tgz # Go to the tailscale_1.60.1 directory cd tailscale_1.60.1 # Compress tailscale and tailscaled upx --best tailscale upx --best tailscaled Step 2. Upload the new version to the GL-iNet router. 
I initially tried using SCP, but that didn’t work so I ended up uploading my upx compressed files to my website tailscale tailscaled ssh root@192.168.8.1 # Login to the router # Stop the tailscale service /etc/init.d/tailscale stop # Take a backup of tailscale and tailscaled cp /usr/sbin/tailscale /tmp/tailscale.bak cp /usr/sbin/tailscaled /tmp/tailscaled.bak # Download the new files from my website curl https://holmq.dk/files/tailscale_1.60.1_arm/tailscale -o /usr/sbin/tailscale curl https://holmq.dk/files/tailscale_1.60.1_arm/tailscaled -o /usr/sbin/tailscaled # Make the files executable chmod +x /usr/sbin/tailscale chmod +x /usr/sbin/tailscaled # Start Tailscale /etc/init.d/tailscale start ","summary":"How to upgrade tailscale on GL-iNet AXT-1800","tags":["Tailscale","VPN","router","wireguard"],"title":"Update Tailscale on GLiNet AXT-1800","uri":"/post/2024-03-10-tailscale-glinet-atx1800/"},{"categories":["linux","firewall"],"content":"## Set default policies iptables -P INPUT DROP iptables -P FORWARD DROP iptables -P OUTPUT DROP ## Allow traffic to and from the loopback interface iptables -A INPUT -i lo -j ACCEPT iptables -A OUTPUT -o lo -j ACCEPT ## Allow outbound connections iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A OUTPUT -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT ## Allow others to ping this machine iptables -A INPUT -p icmp --icmp-type 8 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT ## Rate-limit incoming SSH connections iptables -A INPUT -p tcp --dport ssh -m state --state NEW -m recent --update --seconds 60 --hitcount 4 -j DROP iptables -A INPUT -p tcp --dport ssh -m state --state NEW -m recent --set iptables -A INPUT -p tcp --dport ssh -m state --state NEW -j ACCEPT ## Save rules on Debian/Ubuntu apt install iptables-persistent netfilter-persistent save ## Save rules on RHEL chkconfig iptables on service iptables save General network settings ## Drop ICMP echo-request messages.
Setting net.ipv4.icmp_echo_ignore_broadcasts to 1 will cause the system to ignore all ICMP echo and timestamp requests sent to broadcast and multicast addresses sysctl -w net.ipv4.icmp_echo_ignore_broadcasts=1 ## Drop source routed packets. Source routing allows a sender to partially or fully specify the route packets take through a network. In contrast, non-source routed packets travel a path determined by routers in the network. sysctl -w net.ipv4.conf.all.accept_source_route=0 sysctl -w net.ipv6.conf.all.accept_source_route=0 ## Enable TCP SYN cookie protection from SYN floods. Attackers use SYN flood attacks to perform a denial-of-service attack on a system by sending many SYN packets without completing the three-way handshake sysctl -w net.ipv4.tcp_syncookies=1 ## Don't accept ICMP redirect messages. Attackers could use bogus ICMP redirect messages to maliciously alter the system routing tables, getting the system to send packets to incorrect networks and allowing its traffic to be captured sysctl -w net.ipv4.conf.all.accept_redirects=0 sysctl -w net.ipv6.conf.all.accept_redirects=0 ## Don't send ICMP redirect messages. sysctl -w net.ipv4.conf.all.send_redirects=0 ## Enable Reverse Path Filtering.
Essentially, with reverse path filtering, if the return packet does not go out the same interface that the corresponding source packet came from, the packet is dropped (and logged if log_martians is set) sysctl -w net.ipv4.conf.all.rp_filter=1 ## Log packets with wrong source addresses sysctl -w net.ipv4.conf.all.log_martians=1 ","summary":"","tags":["iptables","firewall","netfilter"],"title":"IPtables","uri":"/post/2024-02-06-iptables/"},{"categories":["linux"],"content":"Minisign is used by VyOS developers as a tool to sign files and verify signatures.\nSign files minisign -Sm vyos-LTS-1.3.4-amd64.iso Verify signature minisign -Vm vyos-LTS-1.3.4-amd64.iso -P RWTHO8ibvoCdIvZWhNLftqRJpN25VCVHNjh4feXXO0Gi7b6wKwlZ2MMS Signature and comment signature verified Trusted comment: timestamp:1697622624\tfile:vyos-LTS-1.3.4-amd64.iso\thashed Sign with a comment minisign -Sm vyos-LTS-1.3.4-amd64.iso -c \"build by Kenneth\" My public key RWTHO8ibvoCdIvZWhNLftqRJpN25VCVHNjh4feXXO0Gi7b6wKwlZ2MMS ","summary":"Sign and verify signatures with minisign","tags":["Minisign","VyOS"],"title":"Minisign","uri":"/post/2023-10-18-minisign/"},{"categories":["firewall"],"content":"I recently needed to create a new site to site VPN, but there were a few challenges. First of all, the router of the new site is behind NAT and it would be moved to other physical locations every now and then. I needed something that works both behind NAT and initiates the connection, that’s when I started to think about wireguard. I have used wireguard in the past, so it wasn’t exactly new to me.\nThe other challenge was overlapping networks. The new site used 192.168.1.0/24 for its network and I already had that network connected to my site.
So I need to use Network Address Translation to rewrite the source/destination address of the packets.\nSite A LAN Network 1: 192.168.92.0/24\nLAN Network 2: 192.168.1.0/24\nTranslated network for Site B: 192.168.10.0/24\nWireguard Interface: 10.10.10.1/30\nSite B LAN Network: 192.168.1.0/24\nWireguard Interface: 10.10.10.2/30\nHere’s how it works. Without NAT, a packet from Client A (192.168.92.2) to 192.168.1.100 would arrive at Server A100 (192.168.1.100) since network 192.168.1.0/24 is physically connected to Router A, so how do we get the traffic to Server B100?\nThis is where 1:1 NAT comes in.\nOn Router A, create a static route for network 192.168.10.0/24 destined to an interface on Router B (Wireguard 10.10.10.2/30 in my case). On Router B NAT rules are created so packets destined to network 192.168.10.0/24 get rewritten to 192.168.1.0/24 which is physically connected at Router B. This means whenever Client A needs to communicate with Server B100, it needs to use IP 192.168.10.100 instead. Read more about NAT here Source Address Destination Address Rewritten Destination Address 192.168.92.2 192.168.10.100 192.168.1.100 I won’t go into details about how Wireguard works or is set up, so if you need help with that, look at the documentation HERE Configuration Site A Allow access to the opposite network in wireguard. Please note that I’m using 192.168.10.0/24 as my translated network for site B\n1. Wireguard interface assignment and settings 2. Firewall Rules Create a firewall rule that allows all traffic over the wireguard tunnel. You can always make it more strict later on, when you know it’s working\n3. Create a static route to Site B Configuration Site B Allow access to the opposite network in wireguard\n1. Assign wireguard to an interface Find your wireguard network port in the dropdown list and add it as an interface\nGive the interface a static IPv4 address. I’m using 10.10.10.2/30 on Site B\n2.
Firewall Rules Create a firewall rule that allows all traffic over the wireguard tunnel\n3. Outbound NAT Traffic destined to 192.168.92.0/24 needs its source to be rewritten to an address in the 192.168.10.0/24 network. I’m using Bitmask to keep the last portion of the address identical during translation, it makes it a lot easier when looking at firewall logs.\nSet Outbound NAT Mode to Hybrid\n3.1. Create a 1:1 NAT rule Create a rule so incoming traffic on the wireguard interface destined to network 192.168.10.0/24 is translated to 192.168.1.0/24\nHere is an overview after the rule has been created\n","summary":"","tags":["pfSense","vpn","wireguard","NAT","1:1"],"title":"VPN with overlapping networks","uri":"/post/2023-08-28-pfsense-vpn-overlapping-networks/"},{"categories":["linux","webserver"],"content":"Download files curl -O https://test.example.com/madplan.json curl -O -L http://test.example.com/madplan.json # Follows redirects. In this example the http request will be redirected to https curl -o test.json https://test.example.com/test.json # Saves the file as test.json Send host header Useful when the server is hosting multiple domains\ncurl -H \"host: test.example.com\" http://172.16.0.150 Show response headers curl -I http://test.example.com HTTP/2 200 server: openresty date: Fri, 18 Aug 2023 11:03:58 GMT content-type: text/html content-length: 1543 last-modified: Fri, 18 Aug 2023 10:54:36 GMT etag: \"64df4dec-607\" accept-ranges: bytes Force dns resolution This can be really useful when testing configuration before making any changes to DNS.\ncurl https://test.example.com --resolve test.example.com:443:172.16.0.150 ","summary":"","tags":["curl"],"title":"Tips and trick using curl","uri":"/post/2023-08-18-curl-tricks/"},{"categories":["linux","firewall"],"content":"Sometimes you just need to do a speedtest and doing so either requires a browser or installing speedtest-cli, but I don’t like doing that on a firewall, so I recently found a way to test my internet
speed with curl and python.\ncurl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python - # With Python3 curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python3 - Output curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python - Retrieving speedtest.net configuration... Testing from Hiper A/S... Retrieving speedtest.net server list... Selecting best server based on ping... Hosted by TDC Group (Copenhagen) [1.41 km]: 4.779 ms Testing download speed...................................................................... Download: 918.19 Mbit/s Testing upload speed........................................................................ Upload: 859.04 Mbit/s ","summary":"Run a speedtest using CLI","tags":["speedtest","curl","python"],"title":"Speedtest using curl","uri":"/post/2023-07-14-speedtest-using-curl/"},{"categories":["firewall"],"content":"I recently decided to play around with VyOS and got completely into it when I figured out that it could run containers using podman - Read more about it in one of my other blog posts.
I have now replaced my pfSense firewall with VyOS and now it’s time to set up IPv6 on it.\nInterfaces As shown below, eth0 is my LAN interface and eth1.101 is my WAN (Remember Hiper uses vlan 101)\nkho@fw3:~$ show interfaces ethernet Codes: S - State, L - Link, u - Up, D - Down, A - Admin Down Interface IP Address S/L Description --------- ---------- --- ----------- eth0 10.10.10.1/24 u/u LAN eth1 - u/u eth1.101 185.50.xxx.xxx/22 u/u Hiper WAN eth2 - u/D eth3 - u/D eth4 - u/D eth5 - u/D eth6 - u/D eth7 - u/D Create Firewall Rules IPv6 relies on ICMP, so we need to create a few firewall rules\nconfigure edit firewall ipv6-name WAN-IN-IPv6 set default-action drop set rule 10 action accept set rule 10 description \"allow established\" set rule 10 protocol all set rule 10 state established enable set rule 10 state related enable set rule 20 action drop set rule 20 description \"drop invalid packets\" set rule 20 protocol all set rule 20 state invalid enable set rule 30 action accept set rule 30 description \"allow ICMPv6\" set rule 30 protocol icmpv6 edit firewall ipv6-name WAN-LOCAL-IPv6 set default-action drop set rule 10 action accept set rule 10 description \"allow established\" set rule 10 protocol all set rule 10 state established enable set rule 10 state related enable set rule 20 action drop set rule 20 description \"drop invalid packets\" set rule 20 protocol all set rule 20 state invalid enable set rule 30 action accept set rule 30 description \"allow ICMPv6\" set rule 30 protocol icmpv6 set rule 40 action accept set rule 40 description \"allow DHCPv6 client/server\" set rule 40 destination port 546 set rule 40 source port 547 set rule 40 protocol udp commit Configure WAN Interface configure set interfaces ethernet eth1 vif 101 address dhcpv6 set interfaces ethernet eth1 vif 101 dhcpv6-options rapid-commit set interfaces ethernet eth1 vif 101 dhcpv6-options pd 0 interface eth0 sla-id 1 set interfaces ethernet eth1 vif 101 dhcpv6-options pd 0 interface eth0
address 1 set interfaces ethernet eth1 vif 101 dhcpv6-options pd 0 length 48 set interfaces ethernet eth1 vif 101 ipv6 address autoconf set service router-advert interface eth0 default-lifetime 300 set service router-advert interface eth0 default-preference high set service router-advert interface eth0 hop-limit 64 set service router-advert interface eth0 interval max 30 set service router-advert interface eth0 link-mtu 1500 set service router-advert interface eth0 managed-flag set service router-advert interface eth0 other-config-flag set service router-advert interface eth0 prefix ::/64 preferred-lifetime 300 set service router-advert interface eth0 prefix ::/64 valid-lifetime 900 set service router-advert interface eth0 reachable-time 900000 set service router-advert interface eth0 retrans-timer 0 set interfaces ethernet eth1 vif 101 firewall in ipv6-name WAN-IN-IPv6 set interfaces ethernet eth1 vif 101 firewall local ipv6-name WAN-LOCAL-IPv6 commit save We now have IPv6 kho@fw3:~$ show interfaces ethernet Codes: S - State, L - Link, u - Up, D - Down, A - Admin Down Interface IP Address S/L Description --------- ---------- --- ----------- eth0 10.10.10.1/24 u/u LAN 2a05:f6c7:xxxx:x::1/64 eth1 - u/u eth1.101 185.50.xxx.xxx/22 u/u Hiper WAN 2a05:f6c7:x:xxxx::/128 eth2 - u/D eth3 - u/D eth4 - u/D eth5 - u/D eth6 - u/D eth7 - u/D ","summary":"IPv6 on VyOS with Danish ISP Hiper","tags":["VyOS","IPv6","Hiper"],"title":"VyOS - Hiper IPv6","uri":"/post/2023-07-14-vyos-hiper-ipv6/"},{"categories":["firewall"],"content":"I used to add the debian repository from https://pkg.cloudflare.com and install the package locally on my VyOS firewall, but found that VyOS can run docker containers, so I decided to give it a try. I like the idea of not having to install any third-party software on my firewall and getting a simpler, more portable configuration.\nI’m using cloudflared for two things. First of all I’m using it to access my VyOS from the Internet by using Cloudflare Zero Trust.
Secondly I’m using it as a DoH (DNS over HTTPS) client to get my DNS queries encrypted.\nAdd container image to VyOS add container image cloudflare/cloudflared:latest 1. Create the Cloudflared container. configure set container name cloudflared set container name cloudflared allow-host-networks set container name cloudflared restart always set container name cloudflared cap-add net-raw set container name cloudflared memory 128 # Decrease memory limit from 512MB to 128MB. set container name cloudflared image cloudflare/cloudflared:latest set container name cloudflared command 'tunnel --no-autoupdate run' set container name cloudflared environment TUNNEL_TOKEN value 'YOUR_TOKEN' set container name cloudflared environment TZ value 'Europe/Berlin' commit save Create the rest of your configuration from the Cloudflare Zero Trust Dashboard\n2. Cloudflared DNS Proxy as DoH client configure set container name cloudflared-dns-proxy set container name cloudflared-dns-proxy allow-host-networks set container name cloudflared-dns-proxy restart always set container name cloudflared-dns-proxy image cloudflare/cloudflared:latest set container name cloudflared-dns-proxy cap-add net-bind-service set container name cloudflared-dns-proxy memory 64 # Set memory limit to 64MB. 
Default is 512MB set container name cloudflared-dns-proxy command 'proxy-dns' set container name cloudflared-dns-proxy environment TUNNEL_DNS_ADDRESS value '127.0.0.1' set container name cloudflared-dns-proxy environment TUNNEL_DNS_PORT value '53' set container name cloudflared-dns-proxy environment TUNNEL_DNS_UPSTREAM value 'https://1.1.1.1/dns-query, https://dns.google/dns-query' # Use cloudflare dns as primary and google as backup resolver set system name-server 1.1.1.1 # Set VyOS to use Cloudflare DNS as resolver for all system lookups set service dns forwarding listen-address 192.168.1.1 # Listen on LAN IP set service dns forwarding allow-from 192.168.1.0/24 # Allow local network clients to query the firewall set service dns forwarding name-server 127.0.0.1 # forward the query from local clients to cloudflared-dns-proxy container commit ; save VyOS Container documentation ","summary":"Run cloudflared as a container in VyOS","tags":["VyOS","docker","cloudflare","cloudflared"],"title":"Running Cloudflared on VyOS","uri":"/post/2023-07-10-vyos-cloudflared-container/"},{"categories":["programmering"],"content":"I have started learning javascript and will use this post as notes for later.\nJavaScript uses camelCase where CSS uses kebab-case.\nCSS\tJavaScript\nbackground-color\tbackgroundColor\ncolor\tcolor\nfont-size\tfontSize\nz-index\tzIndex\n[ ] = Array\n{ } = Object\n1_000_000 = 1,000,000 it is just more readable\nCallback = A callback is a function passed as an argument to another function\nConstructor = A function that is run automatically in a class\n#varNavn = private instance variable. These cannot be used in a constructor without declaring the variable first.\n#privateMethod() = private instance method\nJSON.parse(string) = converts a JSON string into a JSON object\nJSON.stringify(object) = converts a JSON object into a JSON string\nNullish coalescing The nullish coalescing (??)
operator is a logical operator that returns its right-hand side operand when its left-hand side operand is null or undefined, and otherwise returns its left-hand side operand\nconst foo = { someFooProp: \"hi\" }; console.log(foo.someFooProp?.toUpperCase() ?? \"not available\"); // \"HI\" console.log(foo.someBarProp?.toUpperCase() ?? \"not available\"); // \"not available\" Fetch() This .json() method on response is almost exactly the same as JSON.parse(string) that you used in the previous chapter. The only difference, however, is that response.json() is non-blocking and asynchronous, meaning that it returns a promise\nImplicit return\nWhenever you have a function that has a body of only 1 line and returns the result of that one line, you can omit the return and write it in a shorter syntax\nJust like how fetch(URL) returns a promise, the response.json() method also returns a promise. This means that we cannot read its result directly. Instead, we have to resolve the promise with .then(callback).\nIt’s extremely important to add a console.log(data) the first time you work with a URL so that you can see, visualize \u0026 understand what kind of data this API (and this particular URL) is returning.\nHTML \u0026 Javascript You can access the DOM in JavaScript with the document variable.\nThe document.querySelector() (note the capital S character) method expects a CSS selector. That’s the same as the selectors you’d write in your CSS file.\ndocument.querySelector(\"CSS-selector\") returns an object which is an instance of HTMLElement. HTMLElement is the parent class that every single HTML element in your page inherits from.
This means that every element on your page is an instance of a single class which is HTMLElement\n## Type selectors \u003ch1\u003eBig Headline\u003c/h1\u003e \u003cscript\u003e const title = document.querySelector(\"h1\"); \u003c/script\u003e ## ID selector \u003cdiv id=\"navbar\"\u003e\u003c/div\u003e \u003cscript\u003e const navbar = document.querySelector(\"#navbar\"); \u003c/script\u003e ## Class selector \u003cdiv class=\"item\"\u003e\u003c/div\u003e \u003cscript\u003e const item = document.querySelector(\".item\"); \u003c/script\u003e ## Descendant selector \u003cdiv id=\"banner\"\u003e \u003cdiv class=\"item\"\u003e\u003c/div\u003e \u003c/div\u003e \u003cscript\u003e // \"space character\" ( ) for descendant const item = document.querySelector(\"#banner .item\"); \u003c/script\u003e ## Attribute selector \u003cinput type=\"text\" placeholder=\"Your name here\" disabled\u003e \u003cscript\u003e // find the element with the disabled attribute document.querySelector(\"[disabled]\"); \u003c/script\u003e Element.textContent The textContent property returns the text between the element’s opening and closing tags.\nFinding multiple elements document.querySelectorAll(\"CSS-selector\");\nWhile document.querySelector() might return null (when no items are found), the document.querySelectorAll() will always return a NodeList. This is an important difference.\n\u003cp id=\"first\"\u003eFirst paragraph\u003c/p\u003e \u003cp id=\"second\"\u003eSecond paragraph\u003c/p\u003e document.querySelectorAll(\"p\"); // NodeList(2) [p#first, p#second] const items = [...document.querySelectorAll(\"div\")]; // Array As you can see, you can convert a NodeList into an array using the array spread syntax (…) which spreads every single item of the NodeList, into a new array.\nElement.innerHTML innerHTML will return the HTML string inside of the element (it will not strip out HTML tags).
textContent, by contrast, will return the text with all HTML tags removed.\nIf the string that you’re rendering is coming from your users (for example, a string coming from a comment box that the user can fill), then you should avoid using innerHTML as your users will be able to write HTML \u0026 JavaScript code inside of your page which may lead to security issues. This is called a Cross-Site Scripting (XSS) attack.\nElement.value To read the written content of an input element, you have to use the value property:\nElement.classList element.classList returns an object containing methods that let you manage the classes of an element.\nelement.classList.add(className) will add the class.\nelement.classList.remove(className) will remove the class.\nelement.classList.contains(className) returns true when the element has the class and false otherwise.\nelement.classList.toggle(className) will add it when it’s not already present and remove it otherwise.\nelement.classList.replace(oldClassName, newClassName) will replace the oldClassName with the newClassName.\nelement.classList.add() can be used to add multiple classes at the same time.\nelement.classList.remove() can be used to remove multiple classes at the same time.\nelement.getAttribute(key) gets the value of a certain attribute by its key.\nelement.removeAttribute(key) removes an attribute.\nelement.setAttribute(key, value) writes a new attribute (or updates the value of an old one that already exists).\nelement.hasAttribute(key) checks whether an attribute exists or not.
It always returns a boolean.\nElement.remove() Completely removes the element from the DOM, whereas emptying innerHTML only clears the element’s content.\ndocument.body If you need to access the body element of the page, instead of finding it with querySelector, you can access it with document.body directly\ndocument.documentElement Access the root html element directly with document.documentElement\nDataset element.dataset returns an object containing all the data- attributes on that element.\nData attribute names are converted from kebab-case to camelCase.\nData values are always saved as a string. value === \"true\" allows you to convert \"true\" and \"false\" into a boolean.\nelement.parentElement The element.parentElement property returns the parent element of the current element.\nelement.closest(\"CSS-selector\") The element.closest(\"CSS-selector\") method returns the closest parent that matches the CSS-selector you specified. It searches for parent elements and goes up one by one.\nelement.insertAdjacentHTML(position, htmlString) The element.insertAdjacentHTML will place the htmlString without having to reconstruct the remaining HTML inside the element. It could either prepend or append depending on the position that you provide.\ninnerHTML += … is inefficient because it recreates the entire HTML.
This could also remove existing event listeners.\nInstead, when you want to add a piece of HTML, you should use the insertAdjacentHTML method.\nelement.insertAdjacentHTML(position, htmlString) will prepend/append the htmlString depending on the position.\nA position of beforeend will append (add at the end).\nA position of afterbegin will prepend (add at the beginning).\ndocument.createElement For example, instead of writing the htmlString \u003cp\u003eHello World\u003c/p\u003e by hand, you can construct it with the document.createElement() method:\nconst paragraph = document.createElement(\"p\"); paragraph.classList.add(\"text-center\"); paragraph.textContent = \"Hello World\"; console.log(paragraph); // \u003cp class=\"text-center\"\u003eHello World\u003c/p\u003e (as an element not as a string) You can then use the element.appendChild() method to append it somewhere in the DOM. For example: document.body.appendChild(paragraph); document.addEventListener The element.addEventListener(eventType, callback) method allows you to wait for an event to happen on an element.
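Since addEventListener is inherited from EventTarget, the interface every DOM element implements, its mechanics can be sketched in plain JavaScript with a bare EventTarget standing in for a button; this is an illustration of the event flow, not DOM-specific code:

```javascript
// Sketch: addEventListener/dispatchEvent semantics on a bare EventTarget.
// In a browser you would use a real element, e.g. document.querySelector("button").
const button = new EventTarget();
let clicks = 0;

button.addEventListener("click", (event) => {
  // event.type tells you which event fired; the callback runs once per dispatch
  if (event.type === "click") clicks += 1;
});

button.dispatchEvent(new Event("click")); // simulates one click
button.dispatchEvent(new Event("click")); // and another
console.log(clicks); // 2
```

In a real page the browser dispatches the events for you; you only register the callback.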
Once that event occurs (the user clicks on the button), the callback function will execute.\nThe event.currentTarget refers to the element to which the event listener has been attached.\nevent.preventDefault(); disables the page reload on form submit\nform.addEventListener(\"submit\", event =\u003e { event.preventDefault(); // the form will not reload anymore }); -focus is triggered when the user enters focus (the cursor) in a textbox.\n-blur is triggered when the user removes focus (the cursor) from a textbox.\n-DOMContentLoaded is fired when the browser has finished loading \u0026 constructing the entire HTML on your page.\n-scroll is triggered every time the user scrolls.\n-change is used to know when a select element has a new option chosen.\n-keydown and keyup are used to know when the user has typed a character on the keyboard.\nEvent bubbling \u003ca class=\"card\"\u003e \u003cbutton\u003eclose\u003c/button\u003e \u003c/a\u003e document.querySelector(\".card\").addEventListener(\"click\", event =\u003e { console.log(\"Card clicked\"); }); document.querySelector(\".card button\").addEventListener(\"click\", event =\u003e { console.log(\"Close clicked\"); }); Clicking on the close button will make both events fire due to event bubbling. You can disable that by calling event.stopPropagation():\n","summary":"I have started learning javascript and will use this post as notes for later","tags":["javascript"],"title":"JavaScript notes","uri":"/post/2023-06-02-javascript/"},{"categories":["Firewall"],"content":"From the console of Sophos XG, select option 4.
Device Console\nType the following command:\nsystem diagnostics utilities ping interface Port2 1.1.1.1 # This is useful if you have multiple wan connections There are a lot of utilities there, like the bandwidth monitor, connection list and so on\n","summary":"","tags":["Sophos","XG","Troubleshooting","diagnostics"],"title":"Sophos XG CLI Diagnostics","uri":"/post/2023-05-04-sophos-xg-diagnostics/"},{"categories":["hcl"],"content":"Here’s the systemd script I’m using to get my sametime containers created and running after the parent OS has been rebooted.\nCreate the docker-sametime.service in /etc/systemd/system/\n[Unit] Description=Docker Sametime Service Requires=docker.service After=docker.service [Service] Type=oneshot RemainAfterExit=yes WorkingDirectory=/opt/sametime ExecStart=/usr/local/bin/docker-compose up -d --scale jibri=5 ExecStop=/usr/local/bin/docker-compose down ExecStopPost=/opt/sametime/CleanUpMultiJibri.sh ExecReload=/usr/local/bin/docker-compose restart [Install] WantedBy=multi-user.target Run systemctl daemon-reload to make systemd aware of the new service\nRun systemctl enable docker-sametime.service to enable the service in systemd\n","summary":"systemd script to get sametime containers created after the parent OS is restarted","tags":["docker","sametime","docker-compose","linux","systemd"],"title":"Sametime 12 autostart with Systemd","uri":"/post/2023-04-19-sametime-12-autostart/"},{"categories":["cloudflare"],"content":"During the process of adding a domain to Cloudflare, they scan the current dns records and create them for you, which is very nice. However, this can also be annoying. I have a case where we bought a domain just to own it for future use, moved it to cloudflare and they created 60+ dns records for me. We’re not going to use this domain right now, so I just wanted to delete the records and add a few SPF and DMARC records to prevent the domain from being used for email.
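For reference, the lock-down records for a parked domain look roughly like this; a sketch in zone-file notation, where example.com stands in for the parked domain: an SPF record authorizing no senders, a DMARC policy rejecting anything that fails, and a null MX (RFC 7505) signalling the domain accepts no mail.

```text
example.com.         IN TXT "v=spf1 -all"                   ; no host may send mail as this domain
_dmarc.example.com.  IN TXT "v=DMARC1; p=reject; sp=reject" ; reject failing mail, subdomains included
example.com.         IN MX  0 .                             ; null MX: domain accepts no mail at all
```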
Apparently there is no way to do a bulk deletion from their web interface and I’m lazy, so fortunately this can be done by using their REST API, so I created the script below.\nFeel free to use my script, you just need to do the following:\nCreate an API Token for the specified domain with DNS edit permissions Copy the Zone ID for the specific domain #!/bin/bash # Author: Kenneth # Date: 17/3/2023 # token=\"\u003cAPI Token\u003e\" # Replace \u003cAPI Token\u003e with your token zone_id=\"\u003cZone ID\u003e\" # Replace \u003cZone ID\u003e with your domain's Zone ID # Test Token: function test_token() { curl -X GET \"https://api.cloudflare.com/client/v4/user/tokens/verify\" \\ -H \"Authorization: Bearer $token\" \\ -H \"Content-Type:application/json\" } # List Records in zoneid function list_records() { curl -X GET \"https://api.cloudflare.com/client/v4/zones/$zone_id/dns_records\" \\ -H \"Authorization: Bearer $token\" \\ -H \"Content-Type: application/json\" } # Delete Records in zone function delete_records() { printf \"\\n $(curl -X DELETE \"https://api.cloudflare.com/client/v4/zones/$zone_id/dns_records/$1\" \\ -H \"Authorization: Bearer $token\" \\ -H \"Content-Type: application/json\") \" } # Loop through records function main(){ list_records | sed -e 's/[{}]/''/g' | awk -v RS=',\"' -F: '/^id/ {print $2}' | sed 's/^.//' | sed 's/.$//' | while read record_id ; do delete_records $record_id done } main ","summary":"This is how I'm using the cloudflare api to bulk delete dns records","tags":["dns","api","cloudflare","curl"],"title":"Cloudflare delete dns records in bulk","uri":"/post/2023-03-17-cloudflare-bulk-delete-dns-records/"},{"categories":["cloudflare","firewall"],"content":"In my previous post about installation of cloudflared on pfSense I configured my tunnel using config.yaml and started the tunnel using my cf.sh shell script.
A lot has happened since I wrote that post and it’s now possible to configure the tunnel directly from Cloudflare’s Zero Trust dashboard. This post shows how the tunnel can be configured to connect to a default pfSense installation.\nCreate a new tunnel\nThe cloudflared service install command is not supported on FreeBSD at the time of writing, so please press next\nConfigure your tunnel. In this example the web interface on my pfsense is using the self-signed certificate on port 443\nThe tunnel is now created. Copy the Tunnel-ID\nRun the tunnel from the pfSense to see if it works and the tunnel gets active. The command can be copied below. Remember to replace Tunnel-ID with your actual ID from step 4\n/usr/local/bin/cloudflared tunnel run \u003cTunnel-ID\u003e I’m still using the pfSense Cron package to make sure the tunnel is being started after a reboot\n","summary":"This is an update to my original post about cloudflared installation on pfSense","tags":["cloudflared","cloudflare","pfSense","argo","tunnel"],"title":"Cloudflared on pfSense - Part 2","uri":"/post/2023-03-08-cloudflared-on-pfsense-part2/"},{"categories":["webserver"],"content":"I have recently created my own nginx cluster. I use nginx as a reverse proxy and needed a high-availability solution. There’s already support for this in Nginx Plus, but I’m compiling my own version of the Open Source version of nginx, so I looked at their documentation as inspiration and created my own scripts. I have two servers, a Primary (Node A - Active) and a Secondary (Node B - Passive); my Primary node is where I edit/update my nginx configuration and then it synchronizes the changes to my secondary node.
Here’s what you need to get started\nPrerequisites 3 IP addresses - Each server needs an IP address and the last is used as the floating IP address which is the one users should access 2 RHEL/CentOS Servers Nginx - Webserver Keepalived - VRRP software Rsync - Synchronization software Install software Install the following packages on both servers\nyum install -y nginx keepalived rsync Firewall Configuration This should be applied to both servers. If you are serving websites on non-standard ports, then remember to open them as well\nfirewall-cmd --add-service=http --permanent firewall-cmd --add-service=https --permanent firewall-cmd --add-rich-rule='rule protocol value=\"vrrp\" accept' --permanent firewall-cmd --reload Tweak the Linux Kernel The Nginx and Keepalived processes need the ability to bind to a non-local IP address. This is done by creating a file in /etc/sysctl.d/\necho \"net.ipv4.ip_nonlocal_bind=1\" \u003e /etc/sysctl.d/90-keepalived.conf NOTE. The above needs to be done on both servers\nNode A - Primary Nginx Server - 192.168.1.11 /etc/keepalived/keepalived.conf ! Configuration File for keepalived global_defs { enable_script_security\t# Enable Script Security script_user YOUR-USERNAME\t# Run script as this user.
For security reasons, don't use root } vrrp_script track_nginx {	# Tracking script to determine if the service is healthy script \"/usr/sbin/pidof nginx\"	# Checks if nginx is running interval 2	# Checks every 2 seconds timeout 1 } vrrp_instance VI_1 { state MASTER	# MASTER/BACKUP interface ens192	# Name of interface to be used for VRRP virtual_router_id 51	# Router ID needs to match on all nodes priority 200	# A higher number has higher priority advert_int 1 authentication { auth_type PASS auth_pass YOUR-PASSWORD	# Maximum 8 characters } unicast_peer { 192.168.1.12	# IP address of our secondary node } virtual_ipaddress { 192.168.1.10/24 dev ens192	# Floating/Virtual IP and the interface name to bind it to } track_script { track_nginx	# Tracking script defined above } } Start Keepalived and Nginx systemctl enable --now keepalived.service systemctl enable --now nginx.service Sync Nginx files Now we need to detect changes in /etc/nginx and synchronize them to our secondary server. This is a two-step process: first we need a script to do the actual synchronization, and then we need to run it when a change has been made.\nCreate a script called NginxSync.sh #!/bin/bash NodeB=\"192.168.1.12\" # IP address of the Node B/Secondary nginx server ## Check if nginx configuration is valid before synchronization if out=$(nginx -t 2\u003e\u00261); then ## Sync Files rsync -a --delete /etc/nginx/ $NodeB:/etc/nginx ## Restart Nginx ssh $NodeB \"systemctl restart nginx.service\" echo \"Success\" else echo \"Failure, because $out\" fi	1a. Make the script executable ```bash chmod +x NginxSync.sh 2. We need to create an SSH key and copy the public key to Node B ```bash ssh-keygen -t rsa -b 4096 -C \"Nginx Primary\" -f ~/.ssh/id_NodeA_rsa -N \"\" ## Copy public key to Node B ssh-copy-id -i ~/.ssh/id_NodeA_rsa.pub root@192.168.1.12 Now we need to create the monitor script. 
It’s made of two files placed in /etc/systemd/system/\nnginxFileChange.service [Unit] Description = Starts the synchronization job from Node A to Node B Documentation = man:systemd.service [Service] Type=oneshot ExecStart=/YOUR-PATH-TO/NginxSync.sh	# Remember to change this line to your needs 2. **nginxFileChange.path** ```bash [Unit] Description = Triggers the nginxFileChange.service which synchronizes changes Documentation = man:systemd.path [Path] PathModified=/etc/nginx/	# Path to the nginx config folder Unit=nginxFileChange.service [Install] WantedBy=multi-user.target	# Requires at least runlevel 3, otherwise our NginxSync.sh script won't work 3. Start the nginxFileChange.path service ```bash systemctl daemon-reload systemctl enable --now nginxFileChange.path	# Enables the file monitor check systemctl status nginxFileChange.service	# Shows status of the sync service ### Node B - Secondary Nginx Server - 192.168.1.12 1. /etc/keepalived/keepalived.conf ```bash ! Configuration File for keepalived global_defs { enable_script_security script_user root } vrrp_script track_nginx { script \"/usr/bin/killall -0 nginx\" interval 2 timeout 1 } vrrp_instance VI_1 { state BACKUP interface ens192 virtual_router_id 51 priority 100 advert_int 1 authentication { auth_type PASS auth_pass YOUR-PASSWORD } unicast_peer { 192.168.1.11 } virtual_ipaddress { 192.168.1.10/24 dev ens192 } track_script { track_nginx } } Start Keepalived and Nginx systemctl enable --now keepalived.service systemctl enable --now nginx.service Troubleshooting Here are a few useful commands to see if it’s working\nip address	# Shows network interfaces and IP journalctl -r -u keepalived	# Shows keepalived systemd log journalctl -r -u nginx	# Shows nginx systemd log systemctl enable service-name.service	# Auto starts the service at boot systemctl enable --now service-name.service	# Equal to systemctl enable + systemctl start systemctl status nginx.service	# Shows service status systemctl status 
keepalived.service systemctl status nginxFileChange.path systemctl start nginx.service	# Start Service systemctl start keepalived.service systemctl start nginxFileChange.path systemctl stop nginx.service	# Stops Service systemctl stop keepalived.service systemctl stop nginxFileChange.path ","summary":"How I built my own nginx cluster","tags":["nginx","active","passive","cluster","keepalived","vrrp"],"title":"Nginx Active-Passive Cluster","uri":"/post/2023-02-21-active-passive-nginx-cluster/"},{"categories":["Outdoor"],"content":"Here is my packing list of the things I like to bring on a trip\nClothing Wool socks (remember an extra pair) Wool base layer Fleece or other mid layer Buff Beanie Gloves Shell trousers Shell jacket Misc First aid kit Blister plasters Hand warmer Plastic bags Fire Flint and steel Fatwood Char cloth Stearin fire starters Bushbox Food Eating utensils Oil Salt \u0026 pepper Dough Steak Potatoes Freeze-dried breakfast Coffee / tea Frying pan Pot Shelter Tarp Pegs Rope \u0026 cord Kerosene lamp Kerosene Navigation Pencil Compass Map Notepad Weather forecast Water Water bottle Personal care Hand sanitizer Toothbrush Toothpaste Toilet paper Earplugs Sleeping gear Pillow Sleeping pad Sleeping bag Tools Folding spade Knife Saw Axe ","summary":"My packing list of the gear I bring on trips","tags":["Pakkeliste","Spejder","Bushbox","fatwood","spejder"],"title":"The Scout’s Packing List","uri":"/post/2023-01-13-spejder-pakkeliste/"},{"categories":["food"],"content":"After decades as a closely guarded secret, Mor Ulla’s recipe for brunsviger has been released. It has made the internet boil over with pure love for brunsviger.\nThe dough 50g yeast 100g margarine 650g flour 60g sugar 4 dl milk 1 tsp cardamom 1 tsp salt Filling 4 tbsp 
syrup 400g brown sugar (farin) 200g margarine How to: Melt the margarine over low heat and add the milk; warm the mixture until it is lukewarm (the temperature must NOT exceed 37°C, or you will kill the yeast cells). Stir the yeast into the mixture and then add the rest of the ingredients. Let the dough rise for 30 minutes, then spread it out in a baking tray for a second rise of 20 minutes. Bake the cake at 200°C for 15 minutes. While the cake is in the oven, you can make the filling by melting all of its ingredients in a saucepan\n","summary":"Mor Ulla’s legendary brunsviger recipe","tags":["brunsviger","kagemand","kagekone","kageperson","kageindivid","kage"],"title":"Mor Ulla’s Legendary Brunsviger","uri":"/post/2023-01-12-brunsviger/"},{"categories":["kubernetes"],"content":"I recently had an issue with my kube-apiserver restarting all the time, which meant I couldn’t use kubectl. My issue was related to expired certificates, which explains why it happened out of the blue. It turns out the certificates have a one-year validity period. This shouldn’t be an issue, since you’re expected to upgrade your cluster every now and then. This is however not the case for me, because my deployed software needs a specific version of Kubernetes and Docker with Helm 2, so I’m not able to update my cluster until my software vendor supports a newer Kubernetes version.\nCheck your certificate expiration kubeadm alpha certs check-expiration # Old versions of kubeadm kubeadm certs check-expiration # Newer versions of kubeadm [check-expiration] Reading configuration from the cluster... 
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml' CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED admin.conf Jan 02, 2024 00:31 UTC 362d no apiserver Jan 02, 2024 00:31 UTC 362d ca no apiserver-etcd-client Jan 02, 2024 00:31 UTC 362d etcd-ca no apiserver-kubelet-client Jan 02, 2024 00:31 UTC 362d ca no controller-manager.conf Jan 02, 2024 00:31 UTC 362d no etcd-healthcheck-client Jan 02, 2024 00:31 UTC 362d etcd-ca no etcd-peer Jan 02, 2024 00:31 UTC 362d etcd-ca no etcd-server Jan 02, 2024 00:31 UTC 362d etcd-ca no front-proxy-client Jan 02, 2024 00:31 UTC 362d front-proxy-ca no scheduler.conf Jan 02, 2024 00:31 UTC 362d no CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED ca Dec 20, 2031 09:22 UTC 8y no etcd-ca Dec 20, 2031 09:22 UTC 8y no front-proxy-ca Dec 20, 2031 09:22 UTC 8y no Renew your certificates Use the following command to renew the certificates. This has to be done on all master nodes in your cluster. Wait a few minutes after the certificates have been renewed\nkubeadm alpha certs renew all kubeadm certs renew all \u003c--- Newer versions of kubeadm Update your config file so kubectl can connect using the new certificates sudo cp /etc/kubernetes/admin.conf ~/.kube/config sudo chown $(id -u):$(id -g) ~/.kube/config ","summary":"","tags":["certificate","kubeadm","kube-apiserver","restarting","docker"],"title":"Expired Kubernetes Certificates","uri":"/post/2023-01-04-expired-kubernetes-certificates/"},{"categories":["cloudflare"],"content":"You can use a Cloudflare Tunnel to securely access your Windows machine remotely.\nEnable Remote Desktop Make sure RDP is enabled in Windows\nCreate a new Tunnel from your Cloudflare Zero Trust dashboard\nInstall cloudflared on your Windows machine and connect it to your new tunnel. 
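The Windows-side step above can be sketched as follows. This is a minimal sketch, not taken from the original post: the winget package ID is an assumption (a direct MSI download from Cloudflare works as well), and \u003cYOUR-CONNECTOR-TOKEN\u003e is a placeholder for the connector token shown for your tunnel in the Zero Trust dashboard.

```shell
# Run from an elevated PowerShell or cmd prompt on the Windows machine.
# Install cloudflared (package ID Cloudflare.cloudflared is an assumption).
winget install --id Cloudflare.cloudflared

# Register cloudflared as a Windows service connected to your tunnel.
# Replace the placeholder with the token from the Zero Trust dashboard.
cloudflared.exe service install \u003cYOUR-CONNECTOR-TOKEN\u003e
```

Once the service is running, the tunnel should show as healthy in the dashboard, and RDP traffic can be routed through it.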
We’re now done on the “Server” side of the configuration.\nConfigure your client Install cloudflared on your client Connect to the RDP tunnel by running the following command cloudflared access rdp --hostname \u003cYOUR HOSTNAME\u003e --url rdp://localhost:3389 Configure your RDP Client\nYou’re now ready to connect\nFor more help see here ","summary":"","tags":["cloudfared","tunnel","access","argo"],"title":"Secure RDP with Cloudflare Zero Trust","uri":"/post/2022-10-24-cloudflared-access-rdp/"},{"categories":["firewall"],"content":"This is a short post that shows how you can build your own VyOS LTS iso image. I have built mine on Debian Buster (Debian 10)!\n1. Install docker $ sudo apt-get update $ sudo apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common $ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add - $ sudo add-apt-repository \"deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable\" $ sudo apt-get update $ sudo apt-get install -y docker-ce $ sudo usermod -aG docker \u003cYour username\u003e 2. Download and Build iso. At the time of writing, version 1.3.2 is the latest LTS release\n1. git clone -b 1.3.2 --single-branch https://github.com/vyos/vyos-build 2. docker run --rm -it --privileged -v $(pwd):/vyos -w /vyos vyos/vyos-build:current bash 3. ./configure --architecture amd64 --build-by \"j.randomhacker@vyos.io\" --build-type \"release\" --version \"LTS 1.3.2\" 4. sudo make iso More information can be read here ","summary":"Brief demonstration on how to build VyOS from source","tags":["VyOS","docker"],"title":"Build VyOS From Source","uri":"/post/2022-10-03-build-vyos-from-source/"},{"categories":["dns","cloudflare","firewall"],"content":"VyOS is using PowerDNS Recursor for DNS forwarding. Unfortunately it’s not possible to make encrypted DNS queries from it, so here’s a workaround with cloudflared tunnel as a DNS proxy\n1. 
Log in to vyos as root and create a directory in /etc for cloudflared ssh vyos@192.168.1.1 # Change the IP to your router's IP vyos@vyos:~$ conf vyos@vyos# sudo -s root@vyos# mkdir /etc/cloudflared 2. Install cloudflared root@vyos# wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb root@vyos# dpkg -i cloudflared-linux-amd64.deb 3. Configure cloudflared root@vyos# cloudflared tunnel login root@vyos# cloudflared tunnel create vyos # This will create a tunnel-id.json file with your Cloudflare credentials. echo \" tunnel: \u003cYOUR-TUNNEL-ID\u003e credentials-file: /etc/cloudflared/\u003cYOUR-TUNNEL-ID\u003e.json proxy-dns: true proxy-dns-port: 53 proxy-dns-address: 127.0.0.1 proxy-dns-upstream: - https://cloudflare-dns.com/dns-query - https://security.cloudflare-dns.com/dns-query # Blocks Malware - https://family.cloudflare-dns.com/dns-query # Blocks Malware and Adult Content - https://dns.quad9.net/dns-query - https://dns.google/dns-query - https://doh.opendns.com/dns-query - https://doh.familyshield.opendns.com/dns-query # This is their familyshield with adult content filtering \" \u003e /etc/cloudflared/config.yml root@vyos# cloudflared service install root@vyos# systemctl enable --now cloudflared.service You can choose the proxy-dns-upstream servers of your liking. I have listed a few of the public resolvers with support for DoH (RFC 8484)\n4. Configure VyOS to use the dns-proxy vyos@vyos# set system name-server 127.0.0.1 vyos@vyos# delete system name-server x.x.x.x # This is optional and is needed if your system is already configured to use a dns resolver vyos@vyos# commit vyos@vyos# save 5. 
Verify it’s working vyos@vyos# dig google.com ; \u003c\u003c\u003e\u003e DiG 9.16.27-Debian \u003c\u003c\u003e\u003e google.com ;; global options: +cmd ;; Got answer: ;; -\u003e\u003eHEADER\u003c\u003c- opcode: QUERY, status: NOERROR, id: 444 ;; flags: qr rd ra; QUERY: 1, ANSWER: 6, AUTHORITY: 0, ADDITIONAL: 1 ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: 91031139f2dce048 (echoed) ;; QUESTION SECTION: ;google.com.	IN	A ;; ANSWER SECTION: google.com.	217	IN	A	142.251.9.101 google.com.	217	IN	A	142.251.9.102 google.com.	217	IN	A	142.251.9.100 google.com.	217	IN	A	142.251.9.139 google.com.	217	IN	A	142.251.9.138 google.com.	217	IN	A	142.251.9.113 ;; Query time: 43 msec ;; SERVER: 127.0.0.1#53(127.0.0.1) ;; WHEN: Mon Oct 03 10:45:01 CEST 2022 ;; MSG SIZE rcvd: 207 As shown above, VyOS is now using port 53 on localhost for DNS resolution\n","summary":"These are instructions for running DNS over HTTPS on your VyOS router, by using Cloudflare Tunnel","tags":["VyOS","cloudflared","dns","https","argo","tunnel","proxy"],"title":"DNS over HTTPS in VyOS","uri":"/post/2022-10-03-vyos-dns-over-https/"},{"categories":["cloudflare","firewall"],"content":"NOTE: Remember to create a backup before you proceed! Start by editing /usr/local/etc/pkg/repos/pfsense.repo and change the first line so it looks like this FreeBSD: { url: \"pkg+http://pkg.FreeBSD.org/${ABI}/latest\", mirror_type: \"srv\", signature_type: \"fingerprints\", fingerprints: \"/usr/share/keys/pkg\", enabled: yes } Install Cloudflared pkg install cloudflared Output: [22.05-RELEASE]/root: pkg install cloudflared Updating FreeBSD repository catalogue... FreeBSD repository is up to date. Updating pfSense-core repository catalogue... pfSense-core repository is up to date. Updating pfSense repository catalogue... pfSense repository is up to date. All repositories are up to date. New version of pkg detected; it needs to be installed first. 
The following 1 package(s) will be affected (of 0 checked): Installed packages to be UPGRADED: pkg: 1.17.5_2 -\u003e 1.18.4 [FreeBSD] Number of packages to be upgraded: 1 The process will require 15 MiB more space. 7 MiB to be downloaded. Proceed with this action? [y/N]: y [1/1] Fetching pkg-1.18.4.pkg: 100% 7 MiB 7.7MB/s 00:01 Checking integrity... done (0 conflicting) [1/1] Upgrading pkg from 1.17.5_2 to 1.18.4... [1/1] Extracting pkg-1.18.4: 100% You may need to manually remove /usr/local/etc/pkg.conf if it is no longer needed. Updating FreeBSD repository catalogue... FreeBSD repository is up to date. Updating pfSense-core repository catalogue... pfSense-core repository is up to date. Updating pfSense repository catalogue... pfSense repository is up to date. All repositories are up to date. The following 22 package(s) will be affected (of 0 checked): New packages to be INSTALLED: brotli: 1.0.9,1 [FreeBSD] cloudflared: 2022.7.1_2 [FreeBSD] fontconfig: 2.14.0,1 [FreeBSD] freetype2: 2.12.1_2 [FreeBSD] gdbm: 1.23 [FreeBSD] giflib: 5.2.1 [FreeBSD] gmp: 6.2.1 [FreeBSD] graphite2: 1.3.14 [FreeBSD] jbigkit: 2.1_1 [FreeBSD] jpeg-turbo: 2.1.4 [FreeBSD] libdeflate: 1.13 [FreeBSD] libfontenc: 1.1.4 [FreeBSD] libsodium: 1.0.18 [FreeBSD] libunwind: 20211201_1 [FreeBSD] libyaml: 0.2.5 [FreeBSD] lua53: 5.3.6 [FreeBSD] nettle: 3.8.1 [FreeBSD] pixman: 0.40.0_1 [FreeBSD] png: 1.6.37_1 [FreeBSD] tcl86: 8.6.12 [FreeBSD] tiff: 4.4.0 [FreeBSD] zstd: 1.5.2_1 [FreeBSD] Number of packages to be installed: 22 The process will require 71 MiB more space. 16 MiB to be downloaded. Proceed with this action? 
[y/N]: y [1/22] Fetching freetype2-2.12.1_2.pkg: 100% 1 MiB 1.1MB/s 00:01 [2/22] Fetching nettle-3.8.1.pkg: 100% 1 MiB 1.4MB/s 00:01 [3/22] Fetching giflib-5.2.1.pkg: 100% 232 KiB 237.5kB/s 00:01 [4/22] Fetching graphite2-1.3.14.pkg: 100% 100 KiB 102.2kB/s 00:01 [5/22] Fetching gdbm-1.23.pkg: 100% 208 KiB 212.9kB/s 00:01 [6/22] Fetching libunwind-20211201_1.pkg: 100% 127 KiB 130.3kB/s 00:01 [7/22] Fetching zstd-1.5.2_1.pkg: 100% 587 KiB 601.3kB/s 00:01 [8/22] Fetching brotli-1.0.9,1.pkg: 100% 355 KiB 363.7kB/s 00:01 [9/22] Fetching fontconfig-2.14.0,1.pkg: 100% 455 KiB 465.7kB/s 00:01 [10/22] Fetching jbigkit-2.1_1.pkg: 100% 73 KiB 74.6kB/s 00:01 [11/22] Fetching tiff-4.4.0.pkg: 100% 854 KiB 874.4kB/s 00:01 [12/22] Fetching tcl86-8.6.12.pkg: 100% 2 MiB 2.5MB/s 00:01 [13/22] Fetching png-1.6.37_1.pkg: 100% 290 KiB 297.2kB/s 00:01 [14/22] Fetching jpeg-turbo-2.1.4.pkg: 100% 366 KiB 374.6kB/s 00:01 [15/22] Fetching libyaml-0.2.5.pkg: 100% 71 KiB 72.4kB/s 00:01 [16/22] Fetching libdeflate-1.13.pkg: 100% 74 KiB 75.7kB/s 00:01 [17/22] Fetching lua53-5.3.6.pkg: 100% 281 KiB 288.1kB/s 00:01 [18/22] Fetching cloudflared-2022.7.1_2.pkg: 100% 6 MiB 6.3MB/s 00:01 [19/22] Fetching libfontenc-1.1.4.pkg: 100% 19 KiB 19.9kB/s 00:01 [20/22] Fetching gmp-6.2.1.pkg: 100% 477 KiB 488.4kB/s 00:01 [21/22] Fetching pixman-0.40.0_1.pkg: 100% 324 KiB 331.6kB/s 00:01 [22/22] Fetching libsodium-1.0.18.pkg: 100% 263 KiB 268.9kB/s 00:01 Checking integrity... done (0 conflicting) [1/22] Installing brotli-1.0.9,1... [1/22] Extracting brotli-1.0.9,1: 100% [2/22] Installing png-1.6.37_1... [2/22] Extracting png-1.6.37_1: 100% [3/22] Installing freetype2-2.12.1_2... [3/22] Extracting freetype2-2.12.1_2: 100% [4/22] Installing zstd-1.5.2_1... [4/22] Extracting zstd-1.5.2_1: 100% [5/22] Installing jbigkit-2.1_1... [5/22] Extracting jbigkit-2.1_1: 100% [6/22] Installing jpeg-turbo-2.1.4... [6/22] Extracting jpeg-turbo-2.1.4: 100% [7/22] Installing libdeflate-1.13... 
[7/22] Extracting libdeflate-1.13: 100% [8/22] Installing gmp-6.2.1... [8/22] Extracting gmp-6.2.1: 100% [9/22] Installing nettle-3.8.1... [9/22] Extracting nettle-3.8.1: 100% [10/22] Installing giflib-5.2.1... [10/22] Extracting giflib-5.2.1: 100% [11/22] Installing graphite2-1.3.14... [11/22] Extracting graphite2-1.3.14: 100% [12/22] Installing gdbm-1.23... [12/22] Extracting gdbm-1.23: 100% [13/22] Installing libunwind-20211201_1... [13/22] Extracting libunwind-20211201_1: 100% [14/22] Installing fontconfig-2.14.0,1... [14/22] Extracting fontconfig-2.14.0,1: 100% [15/22] Installing tiff-4.4.0... [15/22] Extracting tiff-4.4.0: 100% [16/22] Installing tcl86-8.6.12... [16/22] Extracting tcl86-8.6.12: 100% [17/22] Installing libyaml-0.2.5... [17/22] Extracting libyaml-0.2.5: 100% [18/22] Installing lua53-5.3.6... [18/22] Extracting lua53-5.3.6: 100% [19/22] Installing cloudflared-2022.7.1_2... [19/22] Extracting cloudflared-2022.7.1_2: 100% [20/22] Installing libfontenc-1.1.4... [20/22] Extracting libfontenc-1.1.4: 100% [21/22] Installing pixman-0.40.0_1... [21/22] Extracting pixman-0.40.0_1: 100% [22/22] Installing libsodium-1.0.18... [22/22] Extracting libsodium-1.0.18: 100% Running fc-cache to build fontconfig cache... ===== Message from freetype2-2.12.1_2: -- The 2.7.x series now uses the new subpixel hinting mode (V40 port's option) as the default, emulating a modern version of ClearType. This change inevitably leads to different rendering results, and you might change port's options to adapt it to your taste (or use the new \"FREETYPE_PROPERTIES\" environment variable). The environment variable \"FREETYPE_PROPERTIES\" can be used to control the driver properties. Example: FREETYPE_PROPERTIES=truetype:interpreter-version=35 \\ cff:no-stem-darkening=1 \\ autofitter:warping=1 This allows to select, say, the subpixel hinting mode at runtime for a given application. 
If LONG_PCF_NAMES port's option was enabled, the PCF family names may include the foundry and information whether they contain wide characters. For example, \"Sony Fixed\" or \"Misc Fixed Wide\", instead of \"Fixed\". This can be disabled at run time with using pcf:no-long-family-names property, if needed. Example: FREETYPE_PROPERTIES=pcf:no-long-family-names=1 How to recreate fontconfig cache with using such environment variable, if needed: # env FREETYPE_PROPERTIES=pcf:no-long-family-names=1 fc-cache -fsv The controllable properties are listed in the section \"Controlling FreeType Modules\" in the reference's table of contents (/usr/local/share/doc/freetype2/reference/index.html, if documentation was installed). Disable the FreeBSD repo again by setting enabled to no in /usr/local/etc/pkg/repos/pfsense.repo FreeBSD: { url: \"pkg+http://pkg.FreeBSD.org/${ABI}/latest\", mirror_type: \"srv\", signature_type: \"fingerprints\", fingerprints: \"/usr/share/keys/pkg\", enabled: no } Revert some of the packages to the pfSense maintained version pkg upgrade Output: [22.05-RELEASE]/root: pkg update Updating pfSense-core repository catalogue... pfSense-core repository is up to date. Updating pfSense repository catalogue... pfSense repository is up to date. All repositories are up to date. [22.05-RELEASE]/root: pkg upgrade Updating pfSense-core repository catalogue... pfSense-core repository is up to date. Updating pfSense repository catalogue... pfSense repository is up to date. All repositories are up to date. Checking for upgrades (13 candidates): 100% Processing candidates (13 candidates): 100% The following 5 package(s) will be affected (of 0 checked): Installed packages to be REINSTALLED: brotli-1.0.9,1 [pfSense] (options changed) giflib-5.2.1 [pfSense] (options changed) jbigkit-2.1_1 [pfSense] (options changed) libsodium-1.0.18 [pfSense] (options changed) lua53-5.3.6 [pfSense] (options changed) Number of packages to be reinstalled: 5 The operation will free 1 MiB. 
899 KiB to be downloaded. Proceed with this action? [y/N]: y [1/5] Fetching lua53-5.3.6.pkg: 100% 196 KiB 200.4kB/s 00:01 [2/5] Fetching giflib-5.2.1.pkg: 100% 71 KiB 73.1kB/s 00:01 [3/5] Fetching brotli-1.0.9,1.pkg: 100% 352 KiB 360.7kB/s 00:01 [4/5] Fetching libsodium-1.0.18.pkg: 100% 215 KiB 220.1kB/s 00:01 [5/5] Fetching jbigkit-2.1_1.pkg: 100% 65 KiB 66.1kB/s 00:01 Checking integrity... done (0 conflicting) [1/5] Reinstalling lua53-5.3.6... [1/5] Extracting lua53-5.3.6: 100% [2/5] Reinstalling giflib-5.2.1... [2/5] Extracting giflib-5.2.1: 100% [3/5] Reinstalling brotli-1.0.9,1... [3/5] Extracting brotli-1.0.9,1: 100% [4/5] Reinstalling libsodium-1.0.18... [4/5] Extracting libsodium-1.0.18: 100% [5/5] Reinstalling jbigkit-2.1_1... [5/5] Extracting jbigkit-2.1_1: 100% cloudflared can be found in /usr/local/bin/cloudflared\nI won’t go into details about how you use cloudflared, but here’s the documentation https://developers.cloudflare.com/cloudflare-one/setup to get you started\nCloudflared config.yaml file\ntunnel: \u003ctunnel-id\u003e credentials-file: /root/.cloudflared/\u003ctunnel-id-file\u003e.json warp-routing: enabled: true ingress: - hostname: pfsense.example.com service: https://localhost:443 originRequest: # Only needed if protocol is https and the certificate hostname differs from the hostname in Cloudflare originServerName: \"pfsense1.example.com\" # - hostname: pfsense-ssh.example.com service: ssh://localhost:22 - service: http_status:404 Create a DNS record cloudflared tunnel route dns \u003ctunnel id\u003e pfsense.example.com Create a persistent startup service in /usr/local/etc/rc.d/ REMEMBER to use your tunnel id from above echo \" #!/bin/sh # PROVIDE: cf # REQUIRE: cleanvar SERVERS # Options to configure cf(cloudflared) via /etc/rc.conf: # # cf_enable (bool)\tEnable service on boot #\tDefault: NO # # cf_conf (str)\tConfig file to use #\tDefault: /usr/local/etc/cloudflared/config.yml # # cf_mode (str)\tMode to run cloudflared as (e.g. 
'tunnel', 'tunnel run' #	or 'proxy-dns'). Should you use the default, a free #	tunnel is set up for you. #	Default: \"tunnel\" # # cf_origin_cert (str) path to origin certificate # Default: \"/etc/cloudflared/cert.pem\" # # cf_tunnel_id (str) Your Cloudflared Tunnel ID # . /etc/rc.subr name=\"cf\" rcvar=\"${name}_enable\" : ${cf_enable:=\"YES\"} : ${cf_conf:=\"/root/.cloudflared/config.yml\"} : ${cf_origin_cert:=\"/root/.cloudflared/cert.pem\"} : ${cf_mode:=\"tunnel run\"} : ${cf_tunnel_id=\"\u003cYOUR TUNNEL ID\u003e\"} logfile=\"/var/log/cloudflared.log\" pidfile=\"/var/run/cloudflared.pid\" procname=\"/usr/local/bin/cloudflared\" command=\"/usr/sbin/daemon\" command_args=\"-o ${logfile} -p ${pidfile} -f ${procname} --origincert ${cf_origin_cert} --config ${cf_conf} ${cf_mode} ${cf_tunnel_id}\" stop_postcmd=\"killall cloudflared\" load_rc_config $name run_rc_command \"$1\" \" \u003e /usr/local/etc/rc.d/cf.sh chmod +x /usr/local/etc/rc.d/cf.sh Install the cron package in pfSense and configure it to start the script on boot\n@reboot root /usr/local/etc/rc.d/cf.sh start ","summary":"","tags":["cloudflared","cloudflare","pfSense","argo","tunnel"],"title":"Install Cloudflared on pfSense","uri":"/post/2022-09-16-cloudflared-on-pfsense/"},{"categories":["hcl"],"content":"Customizing Sametime 12 In file custom.env add the following\nREACT_APP_PRODUCT_LOGO=/images/branding/my-logo.jpg # Change login logo REACT_APP_MEETING_BANNER_IMAGE=/images/branding/my-logo.jpg # Change meetings logo REACT_APP_MEETING_BACKGROUND_IMAGE=/images/branding/my-theme.jpg # Change meetings background The image files need to be placed in sametime-config/web/branding/. You can place your favicon.ico file there as well\nSametime 12 awareness in HCL Verse Adding your own company branding in Sametime 12\nIn file custom.env set STI__ST_BB_NAMES__ST_AUTH_TOKEN=Fork:Jwt,Ltpa Export your LTPA token from WebSphere. 
See here. Copy the LTPA token file from WebSphere to your Sametime server\nIn file .env set\nENABLE_LTPA=true LTPA_KEYS_FILE_PATH=/PATH_TO_FILE_OUTSIDE_OF_SAMETIME_FOLDER/ltpa.keys LTPA_KEYS=/ltpa-config/ltpa.keys LTPA_KEYS_PASSWORD=YOUR_EXPORT_PASSWORD In file docker-compose.yml set SAMETIME_EXTERNAL_WARINTEGRATION=true Sametime 12 Start/Stop Script I have written a script for starting and stopping the Sametime containers. The script is for Sametime 12 on Docker.\nIt can be found here:\ngit clone https://github.com/kholmqvist/sametime.git Please note that my sametime folder is placed in /opt. So if yours is placed elsewhere, then change the sametime_dir=\"/opt/sametime\" variable to your needs\n","summary":"","tags":["Sametime","Docker","Linux","RHEL","RedHat","MongoDB","Verse"],"title":"Sametime 12","uri":"/post/2022-07-07-sametime12/"},{"categories":["linux"],"content":"This is just a brief overview of the options I’m using every now and then.\nSSH examples ssh 192.168.1.2\n– SSH to IP 192.168.1.2 as your current user ssh root@192.168.1.2 – SSH to IP 192.168.1.2 as the root user ssh 192.168.1.2 -p 2222\n– SSH defaults to port 22, but by using -p you can connect to ssh on other ports ssh -i ~/.ssh/id_rsa\n– Use this specific private key for authentication. This is useful when you have multiple key pairs ssh -A\n– Enables forwarding of connections from an authentication agent such as ssh-agent ssh -L 1234:localhost:80 192.168.1.2\n– TCP Port or socket forwarding. In this example I’m forwarding port 80 from the server 192.168.1.2 to port 1234 on my local machine. Opening http://localhost:1234 in a browser will now show the webpage running on server 192.168.1.2 ssh -o “VerifyHostKeyDNS ask” abc.example.com\n– Specifies whether to verify the remote key using DNS and SSHFP resource records. If this option is set to yes, the client will implicitly trust keys that match a secure fingerprint from DNS. Insecure fingerprints will be handled as if this option was set to ask. 
If this option is set to ask, information on fingerprint match will be displayed, scp admin@serverA:/myfile.txt admin@serverB:/myfile.txt\n– Copies myfile.txt from Server A to Server B, using my machine as an intermediate scp -r admin@serverA:/myfolder admin@serverB:/\n– Copies myfolder and its content from Server A to Server B ssh -J user@ServerA user@ServerB\n– Use server A as a jump host to reach server B. SSH Keygen examples ssh-keygen\n– Generates an RSA key. This is the default setting ssh-keygen -C – Provides a comment, otherwise your username@localmachine will be used. Using a comment is useful if you have a specific keypair for a single host or customer ssh-keygen -f – Specifies filename of keypair ssh-keygen -H\n– In the .ssh/known_hosts file, the hostnames and addresses will be shown as hashed values so the file’s values won’t be revealed ssh-keygen -N – Adds a passphrase to the private key ssh-keygen -f ~/.ssh/id_rsa -p\n– Changes passphrase of the private key id_rsa ssh-keygen -r abc.example.com -f /etc/ssh/ssh_host_ed25519_key.pub\n– Print the SSHFP fingerprint record for DNS fingerprint verification. Note this command is run from server abc.example.com so we can get the fingerprint from its host key. The records can be added to DNS ssh-keygen -R abc.example.com\n– Removes abc.example.com from the ~/.ssh/known_hosts file. This is useful to delete hashed hosts ssh-keygen -t – used for specifying the key type you want to create. 
Supported values are “dsa”, “ecdsa”, “ecdsa-sk”, “ed25519”, “ed25519-sk”, or “rsa” SSH Add Adds private key identities to the OpenSSH authentication agent\nssh-add ~/.ssh/id_rsa\n– Adds my private key to ssh-agent ssh-add -l\n– Shows a summary of the keys added to ssh-agent ssh-add -L\n– Shows a detailed view of keys added to ssh-agent ssh-add -d ~/.ssh/id_rsa\n– Removes the specified private key from ssh-agent ssh-add -D\n– Removes all keys from ssh-agent ssh-add -K\n– Load resident keys from a FIDO authenticator Sign files with SSH Signing a file is a way to show the file hasn’t been tampered with\nStart by creating the file test.txt with the following content: This is my document and I want the receiver to verify that it hasn't been tampered with. Let’s sign test.txt ssh-keygen -Y sign -f ~/.ssh/id_rsa -n file test.txt Signing file test.txt Write signature to test.txt.sig -Y sign # Tells ssh to use the sign function -f # Sign with my private key file -n file # File is the namespace; it could be email as well if I were signing an email Let’s see the contents of test.txt.sig -----BEGIN SSH SIGNATURE----- U1NIU0lHAAAAAQAAAEoAAAAac2stc3NoLWVkMjU1MTlAb3BlbnNzaC5jb20AAAAgWbOshU iG4m+k8aBY4J21ofo4yjnIxZAjNBzqFqxFYmgAAAAEc3NoOgAAAARmaWxlAAAAAAAAAAZz aGE1MTIAAABnAAAAGnNrLXNzaC1lZDI1NTE5QG9wZW5zc2guY29tAAAAQEPM2iXUIlP+UO sdPR6icOa1KurqI31tuzfzaJiiTcNE52UEHkQmJGOtN2sZ9YPD+1m6E2QhkM10EqZzXK8+ BwMBAAACAg== -----END SSH SIGNATURE----- Before we can verify the signatures, we need a file with the public keys of the signer. 
I will create a file called signers.txt and add the public keys from John and Bob john@example.com sk-ssh-ed25519@openssh.com AAAAGnNrLXNzaC1lZDI1NTE5QG9wZW5zc2guY29tAAAAIFmzrIVIhuJvpPGgWOCdtaH6OMo5yMWQIzQc6hasRWJoAAAABHNzaDo= bob@example.com sk-ssh-ed25519@openssh.com AAAAGnNrLXNzaC1lZDI1NTE5QG9wZW5zc2guY29tAAAAIEaH+d2/cPolLrvFjsE0orogMUOPgkq5oCaP+boNCGcQAAAABHNzaDo= Verify the signatures ssh-keygen -Y verify -f signers.txt -I john@example.com -n file -s test.txt.sig \u003c test.txt Good \"file\" signature for john@example.com with ED25519-SK key SHA256:BgPMRhYf1AgUdACHH6hNwwIsDomxXal9awV7IhqLfIs -Y verify # Tells ssh to use the verify function -f # The signers file with all the public keys from my trusted signers -I # The username/email of the person who signed the file. This has to match the public key in the signers file -n # File is the namespace; it could be email as well if I were signing an email -s # The .sig file \u003c orgfile # the original file Common issues Unable to negotiate with x.x.x.x port 22: no matching host key type found. Their offer: ssh-rsa,ssh-dss\nThe ssh server is offering authentication over ssh-rsa or ssh-dss. ssh-rsa (RSA/SHA1) has been deprecated since OpenSSH 8.2. You can get around this error by adding -o “HostKeyAlgorithms +ssh-rsa”\nExample: ssh x.x.x.x -o “HostKeyAlgorithms +ssh-rsa”\n","summary":"","tags":["SSH","Public Key","Private Key"],"title":"SSH Tips and Tricks","uri":"/post/2022-06-03-ssh-tips-and-tricks/"},{"categories":["linux"],"content":"Let’s say you can access Server A with SSH from your local pc, but you can’t access Server B. Server A, however, can access it at the IP level, and Server B only has your public key in its .ssh/authorized_keys file, so how do you access it? The answer is SSH Agent Forwarding. SSH-Agent will keep your key in memory so you won’t have to type in your passphrase every time the key is used.\nssh-agent zsh - (zsh is the shell I’m using. 
It could also be bash or whatever shell you’re using) ssh-add ~/.ssh/id_rsa - Adds my private key to ssh-agent ssh-add -l - Shows a summary of the keys added to ssh-agent. Use ssh-add -L for a detailed view ssh -A user@server_A - Enables forwarding of connections from an authentication agent You are now connected to Server A and can now SSH to Server B without having the private key on Server A\nCommands Summary ssh-agent YOUR-SHELL # (zsh is the shell I'm using. It could also be bash or whatever shell you're using) ssh-add ~/.ssh/id_rsa # Adds my private key to ssh-agent ssh-add -l # Shows a summary of the keys added to ssh-agent ssh-add -L # Shows a detailed view of keys added to ssh-agent ssh-add -d ~/.ssh/id_rsa # Removes the specified private key from ssh-agent ssh-add -D # Removes all keys from ssh-agent ssh-add -K # Load resident keys from a FIDO authenticator ssh -A user@ip # Enables forwarding of connections from an authentication agent Detailed examples are shown in this guide ","summary":"A simple way to connect to a server or PC without having the private key on the jump server","tags":["SSH","Agent","Forwarding"],"title":"SSH Agent Forwarding","uri":"/post/2022-06-02-ssh-agent-forwarding/"},{"categories":["DNS","Cloudflare"],"content":"I had an issue with the public IP changing a few times a day. The firewall I’m using has some built-in DDNS services, but Cloudflare isn’t one of them. So I decided to create my own. 
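The whole script below boils down to one decision: fetch the current public IP, compare it with the existing DNS record, and only call the update when they differ. A minimal sketch of that decision logic (hypothetical hard-coded addresses stand in for the live curl and dig lookups):

```shell
#!/bin/sh
# Sketch of the update decision only; in the full script these two values
# come from `curl -s https://checkip.amazonaws.com` and `dig +short`.
ip="203.0.113.10"     # hypothetical current public IP
dnsip="203.0.113.9"   # hypothetical value of the existing A record
if [ "$dnsip" = "$ip" ]; then
  echo "record is up to date; no changes needed"
else
  echo "record mismatch; updating to $ip"
fi
```

Doing the comparison first means the Cloudflare API is only touched when something actually changed, which keeps the hourly cron run cheap.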
The script uses Amazon’s checkip service to get the current public IP; if it doesn’t match the DNS record, the record will be updated.\nI use cron to run the script every hour and log the output\n0 * * * * sh /path/to/cloudflare-ddns-update.sh \u003e\u003e /var/log/cloudflare-ddns-update.log The script can be downloaded here: cloudflare-ddns-update.sh or you can see the content below\n#!/bin/bash # Update a Cloudflare DNS A record with the Public IP of the source machine # Prerequisites: # - DNS Record has to be created manually at Cloudflare # - Cloudflare API Token with edit dns zone permissions https://dash.cloudflare.com/profile/api-tokens # - curl and jq need to be installed # Proxy - uncomment and provide details if using a proxy #export https_proxy=http://\u003cproxyuser\u003e:\u003cproxypassword\u003e@\u003cproxyip\u003e:\u003cproxyport\u003e # Cloudflare zone is the zone which holds the record zone=\"mydomain.com\" # dnsrecord is the A record which will be updated dnsrecord=\"example.mydomain.com\" ## Cloudflare authentication details ## keep these private cloudflare_api_token=\"my-super-secret-api-token\" function update_ip() { # get the zone id for the requested zone zoneid=$(curl -s -X GET \"https://api.cloudflare.com/client/v4/zones?name=$zone\u0026status=active\" \\ -H \"Content-Type: application/json\" \\ -H \"Authorization: Bearer $cloudflare_api_token\" | jq -r '{\"result\"}[] | .[0] | .id') echo \"Zoneid for $zone is $zoneid\" # get the dns record id dnsrecordid=$(curl -s -X GET \"https://api.cloudflare.com/client/v4/zones/$zoneid/dns_records?type=A\u0026name=$dnsrecord\" \\ -H \"Content-Type: application/json\" \\ -H \"Authorization: Bearer $cloudflare_api_token\" | jq -r '{\"result\"}[] | .[0] | .id') echo \"DNSrecordid for $dnsrecord is $dnsrecordid\" # update the record curl -s -X PUT \"https://api.cloudflare.com/client/v4/zones/$zoneid/dns_records/$dnsrecordid\" \\ -H \"Content-Type: application/json\" \\ -H \"Authorization: Bearer 
$cloudflare_api_token\" \\ --data \"{\\\"type\\\":\\\"A\\\",\\\"name\\\":\\\"$dnsrecord\\\",\\\"content\\\":\\\"$ip\\\",\\\"ttl\\\":1,\\\"proxied\\\":false}\" | jq } function get_ip() { # Get the public IP address ip=$(curl -s -X GET https://checkip.amazonaws.com) dnsip=$(dig $dnsrecord +short @1.1.1.1) echo \"Public IP is $ip\" if [[ \"$dnsip\" == \"$ip\" ]]; then echo \"$dnsrecord is currently set to $ip; no changes needed\" exit fi update_ip } echo \"\" echo \"-- $(date '+%d/%m/%Y %H:%M:%S') --\" # Run function get_ip get_ip ","summary":"Bash script to update a dns record to the public ip","tags":["Cloudflare","DNS","DDNS","Bash","Shell","Script"],"title":"Cloudflare DDNS Script","uri":"/post/2022-04-08-cloudflare-ddns-bash-script/"},{"categories":["Linux"],"content":"Here’s some configuration examples from a VRRP(Virtual Router Redundancy Protocol) experiment i did. This is used to create a high available DNS resolver with Unbound . I used RHEL 8 as my distribution of choice, but I’m sure this can be used on any RHEL deviate or linux distribution\nSoftware requirements:\nkeepalived unbound net-tools Topology: I have 2 VMs within the same network.\nHost A (172.16.0.90) Host B (172.16.0.91) VIP(Virtual IP Address) 172.16.0.92 1. Support Floating IP in kernel Create /etc/sysctl.d/01-vrrp.conf\nnet.ipv4.ip_nonlocal_bind=1 Append it by rebooting or running sysctl -p\n2. Unbound.conf Create /etc/unbound/unbound.conf\nserver: username: \"unbound\" directory: \"/etc/unbound\" chroot: \"/etc/unbound\" pidfile: \"unbound.pid\" do-daemonize: no # Set to no when use-systemd is enabled use-systemd: yes #module-config: \"validator iterator\" # General Settings port: 53 do-ip4: yes do-ip6: no do-udp: yes do-tcp: yes interface: 0.0.0.0 interface: ::0 interface-automatic: yes hide-identity: no hide-version: no version: \"\" edns-buffer-size: 1232\t# Prevent IP fragmentation. 
DNS Flag Day 2020 so-rcvbuf: 2m\t#4m so-sndbuf: 2m\t#4m so-reuseport: yes\t# Faster UDP with multithreading (Linux only) # TCP incoming-num-tcp: 10 outgoing-num-tcp: 10 # Performance Tuning num-threads: 2\t# number of cores. Threading is disabled if set to 1 num-queries-per-thread: 4096 # Caching cache-min-ttl: 7200 cache-max-ttl: 86400 msg-buffer-size: 8192 # Default Value 65552 msg-cache-size: 50m msg-cache-slabs: 4 # power of 2 to num-threads rrset-cache-size: 100m # rrset=msg*2 rrset-cache-slabs: 4 infra-cache-slabs: 4 infra-cache-numhosts: 10000 infra-cache-min-rtt: 120 key-cache-size: 100k key-cache-slabs: 1 neg-cache-size: 10k prefetch: yes prefetch-key: yes #serve-expired: yes #serve-expired-ttl: 86400 # Query localhost do-not-query-localhost: no\t# Default is yes. If no, then localhost can be used to send queries to. # Private Addresses RFC1918 # DNS Rebinding Prevention private-address: 10.0.0.0/8 private-address: 169.254.0.0/16 private-address: 172.16.0.0/12 private-address: 192.168.0.0/16 private-address: fd00::/8 private-address: fe80::/10 # Access List access-control: 172.16.0.0/24 allow # Forward DNS Requests to public resolvers forward-zone: name: \".\" forward-tls-upstream: no #forward-addr: 1.1.1.1\t# Cloudflare DNS Primary #forward-addr: 1.0.0.1\t# Cloudflare DNS Secondary #forward-addr: 1.1.1.2\t# Cloudflare DNS Malware Filtering #forward-addr: 1.0.0.2\t# Cloudflare DNS Malware Filtering Secondary forward-addr: 1.1.1.3\t# Cloudflare DNS Malware + Adult Filtering forward-addr: 1.0.0.3\t# Cloudflare DNS Malware + Adult Filtering Secondary #forward-addr: 8.8.8.8\t# Google DNS Primary #forward-addr: 8.8.4.4\t# Google DNS Secondary #forward-addr: 9.9.9.9\t# Quad9 DNS 3. KeepAlived config Primary Host (172.16.0.90) Add this to /etc/keepalived/keepalived.conf You need to change some of the parameters. I use the pidfile to check if unbound is running in the chk_unbound script\n! 
Configuration File for keepalived global_defs { notification_email { YOU@YOURDOMAIN.com } notification_email_from YOU@YOURDOMAIN.com smtp_server SMTP_SERVER_IP OR FQDN smtp_connect_timeout 30 } vrrp_script chk_unbound { script \"/usr/sbin/pidof unbound\" interval 5 } vrrp_instance VI_1 { state MASTER interface ens192 virtual_router_id 51 priority 101 advert_int 1 authentication { auth_type AH auth_pass P@ssw0rd } unicast_src_ip 172.16.0.90 unicast_peer { 172.16.0.91 } virtual_ipaddress { 172.16.0.92 dev ens192 label ens192:vip } track_script { chk_unbound } } 4. KeepAlived config Secondary Host (172.16.0.91) ! Configuration File for keepalived global_defs { notification_email { YOU@YOURDOMAIN.com } notification_email_from YOU@YOURDOMAIN.com smtp_server SMTP_SERVER_IP or FQDN smtp_connect_timeout 30 } vrrp_script chk_unbound { script \"/usr/sbin/pidof unbound\" interval 5 } vrrp_instance VI_1 { state BACKUP interface ens192 virtual_router_id 51 priority 100 advert_int 1 authentication { auth_type AH auth_pass P@ssw0rd } unicast_src_ip 172.16.0.91 unicast_peer { 172.16.0.90 } virtual_ipaddress { 172.16.0.92 dev ens192 label ens192:vip } track_script { chk_unbound } } 5. Enable and start KeepAlived systemctl enable --now keepalived.service 6. 
Firewall Create a firewall rule so the keepalived instances can communicate and get health status\nfirewall-cmd --add-rich-rule='rule protocol value=\"vrrp\" accept' --permanent firewall-cmd --reload ","summary":"","tags":["VRRP","DNS","Unbound"],"title":"Linux VRRP","uri":"/post/2022-03-25-linux-vrrp/"},{"categories":["Windows"],"content":"Step 1: Run DISM (Deployment Image Servicing and Management) # check for corrupted files dism /online /cleanup-image /scanhealth # repair corrupted files dism /online /cleanup-image /restorehealth Step 2: Run SFC (System File Checker) SFC /SCANNOW ","summary":"Ways to repair a broken Windows 10 installation","tags":["Microsoft","Windows","SFC","DISM"],"title":"Repair Windows 10","uri":"/post/2022-03-10-repair-windows-10/"},{"categories":["linux"],"content":"I have a few CentOS machines that need to be converted to RHEL, and that can be done using the convert2rhel script. However I’m running CentOS 8 Stream, which can’t be converted to RHEL 8, so I have to do a downgrade to CentOS 8 first.\nRemove the centos-stream package. dnf remove centos-stream-release I got the message “Problem: The operation would result in removing the following protected packages: setup” So I had to do this first before attempting step 1 again. mv /etc/yum/protected.d/setup.conf /etc/yum/protected.d/setup.conf.backup Remember to move it back once centos-stream-release is removed\nCopy the repository files from a new CentOS 8 installation to /etc/yum.repos.d/.\nRun a distro-sync.\ndnf distro-sync --releasever 8 Reboot and Welcome to CentOS 8. cat /etc/centos-release; CentOS Linux release 8.5.2111 Convert it to RHEL 8 as mentioned above. ","summary":"","tags":["Centos","8","Stream","RHEL"],"title":"Downgrade Centos 8 Stream to Centos 8","uri":"/post/2021-12-13-downgrade-centos-8-stream-to-centos-8/"},{"categories":["HCL"],"content":" The Problem I have recently had an issue with preview from HCL Docs not working in my HCL Verse webmail. 
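The vrrp_script mechanism used in the chk_unbound check earlier only cares about the exit status of its command: 0 keeps the node healthy, anything else drops its priority so the VIP can move to the peer. That mechanic can be tried by hand — a sketch using kill -0 against our own shell (always alive) and a deliberately bogus PID, since the real /usr/sbin/pidof unbound check is only meaningful on the VRRP hosts:

```shell
#!/bin/sh
# keepalived's vrrp_script only inspects the exit status of the check: 0 = healthy.
# The real config runs "/usr/sbin/pidof unbound"; kill -0 emulates the idea here.
if kill -0 $$ 2>/dev/null; then
  echo "check passed: node keeps its priority and stays MASTER"
fi
if ! kill -0 999999 2>/dev/null; then
  echo "check failed: priority drops and the VIP moves to the peer"
fi
```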
The error I got in my Google Chrome console was “Fail to transfer attachment to Docs Viewer: 404”. The HCL Docs version I’m running is: “HCL Connections Docs 2.0.1”\nI did some Google searching on the matter and one of the causes of the problem could be if “iNotes_WA_Security_NonceCheck=0” was set in notes.ini; mine didn’t contain that. I was back to scratching my head since this had worked previously. I took a look in the documentation Integrating with HCL Connections Docs to see if I had missed something.\nEnable Debugging Debugging was enabled by adding the following parameters in \u003cDomino\u003e/data/domino/workspace/.config/rcpinstall.properties and restarting the HTTP task.\ncom.ibm.domino.attachservice.servlets.attachment.AbstractAttachmentAccessor.level=FINEST com.ibm.domino.attachservice.servlets.level=FINEST Logs will be made in \u003cDomino\u003e/data/domino/workspace/logs\nThe issue \u003cCommonBaseEvent creationTime=“2021-12-03T13:57:39.970+01:00” globalInstanceId=“EL7f00000200017d805e0b0a00000073” msg=“Certificate with subject CN=verse.example.com, issued by CN=R3, O=Let's Encrypt, C=US, is not trusted. Validation failed with error 3659.” severity=“30” version=“1.0.1”\u003e\"\nThis error message occurs when cross certification is not properly configured. Please ensure that the certificate has been imported and cross certified into names.nsf. Here is the file I used Let’s Encrypt ROOT Certificate Import the internet certificate:\nLaunch Domino Administrator, connect to the service manager Open names.nsf of the service manager, navigate to Configuration -\u003e Security -\u003e Certificates On the menu bar click Actions -\u003e Import Internet Certificates Locate the Internet certificate file, click the “accept all” button in the dialog (if required), click on the import success message Create cross certificate:\nLocate the imported certificate in the Internet Certifiers section. 
Double click to open it On the menu bar click Action -\u003e Create Cross Certificate Click OK on the pop-up dialog Choose the correct cert ID and server Input the certifier password and click the cross certify button You should be able to find the cross certificate in the Internet Cross Certificates section That did the trick and now preview is working again. All of this happened because I started using Let's Encrypt certificates in Domino 12 ","summary":"Fail to transfer attachment to Docs Viewer: 404","tags":["Verse","Connections","Docs","Let's encrypt","certificate","Cross Certify","Domino","Preview","404"],"title":"HCL Verse: Docs Viewer 404","uri":"/post/2021-12-8-hcl-verse-preview-404-docs/"},{"categories":["Kubernetes"],"content":"Show cluster status Check deployment status: kubectl cluster-info kubectl get namespace kubectl get deployments # Show deployments in default namespace kubectl get deployments -n \u003cnamespace\u003e # Show deployment status in specific namespace kubectl get pods kubectl get pods -n \u003cnamespace\u003e kubectl get pods --all-namespaces -o wide # Shows all pods across all namespaces kubectl get service kubectl get service -n \u003cnamespace\u003e kubeadm token create --print-join-command # Show the join command for adding worker nodes kubectl scale --replicas=0 deployment/\u003cyour-deployment\u003e kubectl label node \u003cnodename\u003e \u003clabelname\u003e=allow # Add label to node kubectl label node \u003cnodename\u003e \u003clabelname\u003e- # Remove label from node Delete a pod and namespace Method 1: To delete everything from the current namespace (which is normally the default namespace) using kubectl delete:\nkubectl delete all --all all refers to all resource types such as pods, deployments, services, etc. 
–all is used to delete every object of that resource type instead of specifying it using its name or label.\nTo delete everything from a certain namespace you use the -n flag:\nkubectl delete all --all -n \u003cnamespace\u003e Method 2: You can also delete a namespace and re-create it. This will delete everything that belongs to it:\nkubectl delete namespace \u003cnamespace\u003e kubectl create namespace \u003cnamespace\u003e Edit CoreDNS kubectl -n kube-system edit configmap/coredns Delete the current coredns pod and let a new one respawn kubectl get pod -A kubectl delete pod -n kube-system core-dns-######### Delete multiple pods kubectl delete --all pods --namespace=\u003cnamespace\u003e Updating Pod image You need image: app-name:latest and imagePullPolicy: Always in your deployment and Kubernetes 1.15 or newer\nkubectl rollout restart deployment/DEPLOYMENTNAME ","summary":"Useful commands to get you started with Kubernetes","tags":["docker","kubectl","kubeadm","Kubernetes","containerd","pods","k8s"],"title":"Kubernetes Tips and Tricks","uri":"/post/2021-10-06-kubernetes/"},{"categories":["Webserver"],"content":" To enable Syntax Highlight in Vim start by downloading nginx.vim and placing it in your ~/.vim/syntax/ folder\nCreate ~/.vimrc and add the following to it\nset number \" Shows line numbers syntax on \" Enables syntax highlighting colorscheme default \" Default color scheme set tabstop=2 \" Set tabs to 2 spaces set autoindent \" Enables auto indent You now have to choose how you want your syntax highlighting to work. It can be done either by telling Vim where the Nginx configuration files are located or by adding a line in each nginx configuration file that tells Vim which syntax highlighting to use. Option A. Create a file called filetype.vim in ~/.vim/ and add the following to it au BufRead,BufNewFile /etc/nginx/*,/etc/nginx/conf.d/*,/usr/local/nginx/conf/* if \u0026ft == '' | setfiletype nginx | endif - Option B. 
Add # vim: syntax=nginx to each nginx configuration file - Pro: This method works for Nginx files outside of */etc/nginx/conf.d/, /etc/nginx/, /usr/local/src/nginx/conf/* - Con: You need to add this line to the file ","summary":"Make your vim colourful when editing nginx configuration files","tags":["Nginx","Vim","Syntax","Highlight"],"title":"Nginx Syntax Highlight in Vim","uri":"/post/2021-09-22-vim-syntax-highlight-nginx/"},{"categories":["Firewall"],"content":"From the console of Sophos XG.\n5. Device Management \u003e 3. Advanced Shell\nType the following command:\nservice sslvpn:restart -ds nosync ","summary":"","tags":["Sophos","XG","SSH","SSL","VPN","restart"],"title":"Restart Sophos XG SSL VPN","uri":"/post/2021-09-14-sophos-xg-restart-sslvpn/"},{"categories":["Outdoor"],"content":" Warbonnet Ridgerunner (2020 Model with updated corner configuration) Double layer Color: Fern Green Warbonnet Ridgerunner with Robens Trace Underquilt\n","summary":null,"tags":["Warbonnet Outdoors","Ridgerunner","Bridge","Hammock"],"title":"Warbonnet Ridgerunner","uri":"/post/2021-09-13-warbonnet-ridgerunner/"},{"categories":["Programmering"],"content":" Start by deleting all the .DS_Store files in your current directory and subdirectories. find . -name .DS_Store -print0 | xargs -0 git rm -f --ignore-unmatch Note: this should be done in your project folder\nAdd .DS_Store to .gitignore echo \".DS_Store\" \u003e\u003e .gitignore Commit the .gitignore file git add .gitignore git commit -m \".DS_Store deleted\" git push ","summary":"A simple method to ignore .DS_Store files during git commits","tags":["git",".gitignore",".DS_Store"],"title":"Ignore .DS_Store in git commits","uri":"/post/2021-09-09-.gitignore/"},{"categories":["Outdoor"],"content":" Warbonnet Blackbird XLC (Updated 2018 model) Light weight 2 layer Color: Dark Foliage Green Warbonnet Wookie -17C Underquilt in Bushwack camo. 
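Before committing, the ignore rule from the git post above can be confirmed with git check-ignore — a quick sketch in a throwaway repository (assumes git is installed; the repository path is a temporary directory):

```shell
#!/bin/sh
# Throwaway repo demonstrating that the .gitignore rule matches .DS_Store.
tmp=$(mktemp -d)
cd "$tmp" || exit 1
git init -q
echo ".DS_Store" >> .gitignore
touch .DS_Store
if git check-ignore -q .DS_Store; then
  echo ".DS_Store is ignored"
fi
```

check-ignore exits 0 when a path matches an ignore rule, so it works nicely as a pre-commit sanity check.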
The stuff sack color is called Fern Green\n","summary":null,"tags":["Warbonnet Outdoors","Blackbird","XLC","GE","Hammock","Wookie","Underquilt","UQ","Bushwack Camo","Dark Foliage Green","Super Fly","Tarp"],"title":"Warbonnet Blackbird XLC","uri":"/post/2021-09-09-warbonnet-blackbird-xlc/"},{"categories":["Food"],"content":"Pizzas: 10 pieces of 280g dough\nBiga:\n500ml water 3g dry yeast 1000g tipo00 flour Pizza dough:\nBiga 1000g tipo00 flour 1000ml water 60g salt 3g dry yeast Start by making your Biga. Stir the dry yeast into the water, then pour in all the flour, put a lid on the bowl and shake it for about 3 minutes until it looks like scrambled eggs. Place your Biga in the fridge for 24 to 48 hours.\n","summary":null,"tags":["BBQ","Grill","Pizza"],"title":"Pizzadej med Polish","uri":"/post/2021-09-03-polish-pizza-dej/"},{"categories":["Webserver"],"content":"This is how I have set up automatic certificate renewal on my Linux web server. I’m using Cloudflare as a DNS provider and am using their API Tokens to verify ownership of my domain when requesting a certificate from Let’s Encrypt\nPrerequisites:\nCentOS/RHEL DNS hosted by Cloudflare Software: git nginx curl SSL Folder: create folder ssl in /etc/nginx/ Step 1 - Download and install acme.sh Note. 
ZeroSSL is the default CA in acme.sh version 3.0 and above, so this has to be changed to Let’s Encrypt\ngit clone https://github.com/acmesh-official/acme.sh.git cd .acme.sh ./acme.sh --install -m yourmail@domain.com --server letsencrypt ./acme.sh --set-default-ca --server letsencrypt Step 2 - Verify domain ownership using Cloudflare API export CF_Token=\"xxxxx\" export CF_Account_ID=\"xxxxx\" export CF_Zone_ID=\"xxxxx\" # This is optional and can be used if you have more than one DNS zone The variables are saved in the account.conf file in your .acme.sh folder if you need to modify them later on.\nStep 3 - Issuing a certificate ./acme.sh --issue --dns dns_cf -d YOURDOMAIN.com --ocsp-must-staple # Creates RSA Certificate ./acme.sh --issue --dns dns_cf -d YOURDOMAIN.com --keylength ec-384 --ocsp-must-staple # Optional. Create ECDSA Certificate Step 4 - Configure Nginx Add these SSL settings to your server{} block. Change YOURDOMAIN.com to fit your needs\nserver { listen 443 ssl http2; server_name YOURDOMAIN.com; ## RSA Certificates ssl_certificate ssl/YOURDOMAIN.fullchain.cer; ssl_certificate_key ssl/YOURDOMAIN.key; ## ECC/ECDSA Certificates ssl_certificate ssl/YOURDOMAIN.fullchain.ecc.cer; ssl_certificate_key ssl/YOURDOMAIN.ecc.key; ## Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits or higher ## Run this command to create the File ## openssl dhparam -dsaparam -out /etc/nginx/ssl/dhparam.pem 4096 ssl_dhparam ssl/dhparam.pem; ## disable SSLv3 (enabled by default since nginx 0.8.19) since it's less secure than TLS http://en.wikipedia.org/wiki/Secure_Sockets_Layer#SSL_3.0 ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers \"CHACHA20+POLY1305:EECDH+AESGCM:EDH+AESGCM\"; ssl_prefer_server_ciphers off; ## Verify chain of trust of OCSP response using Root CA and Intermediate certs ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate ssl/YOURDOMAIN.fullchain.cer; ## maximum number and size of buffers for large headers to read from client request 
large_client_header_buffers 16 256k; client_body_buffer_size 128k; } Step 5 - Install certificate(s) in /etc/nginx/ssl/ export YOURDOMAIN=\"YOURDOMAIN.com\" export NGINX_SSL=\"/etc/nginx/ssl\" ./acme.sh -d \"$YOURDOMAIN\" --install-cert --reloadcmd \"systemctl reload nginx\" --fullchain-file \"${NGINX_SSL}/$YOURDOMAIN.fullchain.cer\" --key-file \"${NGINX_SSL}/$YOURDOMAIN.key\" --cert-file \"${NGINX_SSL}/$YOURDOMAIN.cer\" ./acme.sh -d \"$YOURDOMAIN\" --ecc --install-cert --reloadcmd \"systemctl reload nginx\" --fullchain-file \"${NGINX_SSL}/$YOURDOMAIN.fullchain.ecc.cer\" --key-file \"${NGINX_SSL}/$YOURDOMAIN.ecc.key\" --cert-file \"${NGINX_SSL}/$YOURDOMAIN.ecc.cer\" Step 6 - Enable acme.sh autoupgrade and update it to latest release ./acme.sh --upgrade --auto-upgrade Step 7 - Configure e-mail notifications # These are required: export SMTP_FROM=\"from@example.com\" # just the email address (no display names) export SMTP_TO=\"to@example.com,to2@example.net\" # just the email address, use commas between multiple emails export SMTP_HOST=\"smtp.example.com\" export SMTP_SECURE=\"tls\" # one of \"none\", \"ssl\" (implicit TLS, TLS Wrapper), \"tls\" (explicit TLS, STARTTLS) # The default port depends on SMTP_SECURE: none=25, ssl=465, tls=587. # If your SMTP server uses a different port, set it: export SMTP_PORT=\"2525\" # If your SMTP server requires AUTH (login), set: export SMTP_USERNAME=\"\u003cusername\u003e\" export SMTP_PASSWORD=\"\u003cpassword\u003e\" # acme.sh will try to use the python3, python2.7, or curl found on the PATH. 
# If it can't find one, or to run a specific command, set: export SMTP_BIN=\"/path/to/python_or_curl\" # If your SMTP server is very slow to respond, you may need to set: export SMTP_TIMEOUT=\"30\" # seconds for SMTP operations to timeout, default 30 ./acme.sh --set-notify --notify-hook smtp For more information visit acme.sh ","summary":null,"tags":["Security","Nginx","Cloudflare","TLS","SSL"],"title":"Let's Encrypt certificates + Nginx + Cloudflare","uri":"/post/2021-08-20-lets-encrypt-nginx/"},{"categories":["HCL"],"content":"I just upgraded my HCL Sametime community server from 11.5 to 11.6. The upgrade ran successfully, but Sametime didn’t load when Domino 11.0.1 FP3 was started.\nI tried load staddin from the server console and it showed Sametime: Server startup successful… well that wasn’t the case when I looked at the console on my Linux server.\nThe console showed the following message:\n/opt/hcl/domino/notes/11000100/linux/staddin: error while loading shared libraries: libcrypto.so.1.1: cannot open shared object file: No such file or directory\n/opt/hcl/domino/notes/11000100/linux/staddin: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory\nHowever there was a libcrypto-domino.so.1.1 and a libssl-domino.so.1.1 in the /opt/hcl/domino/notes/11000100/linux folder, so I created a symbolic link and that did the trick; now my Sametime server actually started.\nI used these commands to create the symbolic links\nln -s libcrypto-domino.so.1.1 libcrypto.so.1.1 ln -s libssl-domino.so.1.1 libssl.so.1.1 ","summary":null,"tags":["HCL","Sametime","Domino","Linux","CentOS"],"title":"HCL Sametime 11.6 failed to load","uri":"/post/2021-06-03-hcl-sametime/"},{"categories":["Firewall"],"content":"5. Device Management \u003e 3. 
Advanced Shell\nip route show table 220\t# Prints the kernel IPsec routes route -n\t# Prints routing table service sslvpn:restart -ds nosync # Restart SSL VPN service Link: Sophos XG drop-packet-capture ","summary":null,"tags":["Security","Sophos","XG","VPN","SSH","Route","Kernel","IPSec"],"title":"Sophos XG v18 CLI Commands","uri":"/post/2021-04-13-sophos-xg-cli/"},{"categories":["DNS"],"content":"I have created a Compile-Unbound.sh script #!/bin/bash # Variables BASEDIR=$(dirname \"$0\") localsrc=\"/usr/local/src\" ub_log=\"/var/log/unbound\" ub=\"unbound-1.12.0\" opnssl=\"openssl-1.1.1h\" libmnl=\"libmnl-1.0.4\" libnghttp2=\"nghttp2-1.41.0\" # Download required software function dwnlsw() { ubsrc=\"https://nlnetlabs.nl/downloads/unbound/$ub.tar.gz\" opensslsrc=\"https://www.openssl.org/source/$opnssl.tar.gz\" libmnlsrc=\"https://www.netfilter.org/projects/libmnl/files/$libmnl.tar.bz2\" libnghttp2src=\"https://github.com/nghttp2/nghttp2/releases/download/v1.41.0/$libnghttp2.tar.gz\" wget -P $localsrc $ubsrc $opensslsrc $libmnlsrc $libnghttp2src } # Unpack software function extractsw() { tar -xvf $localsrc/$ub.tar.gz -C $localsrc tar -xvf $localsrc/$opnssl.tar.gz -C $localsrc tar -xvf $localsrc/$libmnl.tar.bz2 -C $localsrc tar -xvf $localsrc/$libnghttp2.tar.gz -C $localsrc } # Install needed software from repo function installfromrepo() { yum install -y epel-release ; yum install -y expat-devel libmnl libevent-devel openssl-devel systemd-devel hiredis-devel python3 python3-devel swig systemd-timesyncd ; yum groupinstall -y \"Development Tools\" ; yum erase -y unbound alternatives --set python /usr/bin/python3 } # Add unbound user and group function adduser() { useradd -M unbound usermod -L unbound groupadd unbound usermod -a -G unbound unbound } # Compile OpenSSL function compileopenssl() { cd $localsrc/$opnssl ; ./config ; make ; make install } # Compile libmnl function compilelibmnl() { cd $localsrc/$libmnl ; ./configure ; make ; make install } # Compile 
libnghttp2 function compilelibnghttp2() { cd $localsrc/$libnghttp2 ; ./configure ; make ; make install } # Compile Unbound function compileub() { cd $localsrc/$ub ; ./configure --prefix=/usr --sysconfdir=/etc --disable-static --with-pidfile=/etc/unbound/unbound.pid --with-username=unbound --with-ssl --with-libexpat=/usr --with-libmnl --with-libevent --with-pthreads --with-libhiredis --with-libnghttp2 --with-pyunbound --with-pythonmodule --enable-cachedb --enable-checking --enable-subnet --enable-ipset ; make; make install } # Install systemd function function ubsystemd() { cp unbound.service /usr/lib/systemd/system/unbound.service systemctl daemon-reload systemctl stop systemd-resolved.service systemctl disable systemd-resolved.service systemctl enable --now systemd-timesyncd.service systemctl enable unbound.service systemctl start unbound.service } # Create logfile function ublogfile() { touch /var/log/unbound/unbound.log chown unbound:unbound /var/log/unbound/unbound.log } # Setup function. 
Runs the above functions function setup() { mkdir $ub_log dwnlsw extractsw | tee $ub_log/untar_software.log installfromrepo | tee $ub_log/install_dependencies.log compileopenssl | tee $ub_log/compile_openssl.log compilelibmnl | tee $ub_log/compile_limnl.log compilelibnghttp2 | tee $ub_log/compile_libnghttp2.log adduser compileub | tee $ub_log/compile_unbound.log ublogfile ubsystemd echo \"\" echo \"logs can be found in $ub_log!!\" echo \"\" } # Run setup function if [ -e /etc/centos-release ]; then if [ $(whoami) != \"root\" ]; then echo \"please run as root\" else setup fi else echo \"Your distribution is not supported!\" echo \"This script is only supported on CentOS 8\" fi Create unbound.service and place it in /usr/lib/systemd/system/\n[Unit] Description=Unbound DNS server After=network-online.target Before=nss-lookup.target Wants=network-online.target nss-lookup.target [Install] WantedBy=multi-user.target [Service] Type=simple PIDFile=/etc/unbound/unbound.pid ExecStart=/usr/sbin/unbound -c /etc/unbound/unbound.conf ExecReload=+/bin/kill -HUP $MAINPID ExecStop=+/bin/kill -TERM $MAINPID #KillMode=process #Restart=on-failure ","summary":null,"tags":["DNS","Unbound","Linux"],"title":"Compile Unbound DNS Resolver","uri":"/post/2020-10-22-unbound-dns-resolver/"},{"categories":["Firewall"],"content":"The Danish ISP Hiper offers its DSL and fiber customers a /48 (65536 subnets, which should be plenty) of native IPv6 addresses. Hiper also lets you use your own router instead of the Zyxel router they hand out. To use your own router, you have to configure your WAN interface with VLAN 101 tagged. See more here https://www.hiper.dk/bredbaand/fiber I use a Netgate XG-7100 with pfSense as my router, but I had trouble getting it to receive an IPv6 address from Hiper. So I had to dig out their equipment and start from the settings configured on it. The challenge was that there was not much help to be found and information was scarce. 
So here are the settings I have found to work.\nSet IPv6 on WAN to DHCPv6\nUnder DHCP6 Client Configuration I set DHCPv6 Prefix Delegation Size to 48, which Hiper states they give their customers. I have also found that “Do not wait for RA” must be checked to make it work\nOn my LAN interface I set IPv6 Configuration to Track Interface\nFurther down the page, Track IPv6 Interface is set to WAN, and I leave IPv6 Prefix ID at 0.\nNB. In my experience you have to go back to the WAN interface in pfSense and do a Save/Apply before WAN and LAN receive IPv6 addresses\nFor IPv6 Neighbour Discovery to work, you need a firewall rule that allows ICMPv6 on the WAN interface\nSet up the DHCPv6 Server on the LAN interface. Here I assign a /64 to the LAN\nUnder Router Advertisements, Router Mode is set to Assisted\nNow I create two firewall rules: the first allows my clients to make DNS requests to my router, and the second allows IPv6 traffic from LAN to any destination; it is kept a bit loose for testing\nFinally I test my IPv6 connection at https://test-ipv6.com to make sure everything works\nUPDATE: pfSense WAN 15/12/2022 ","summary":null,"tags":["IPv6","Hiper","pfSense","ISP","Fiber"],"title":"Hiper IPv6 configuration on pfSense","uri":"/Hiper-IPv6-pfSense/"},{"categories":["Webserver"],"content":"Insert this code into your Nginx server {} block\nserver { listen 80; server_name YOURDOMAIN.com; return 301 https://$host$request_uri; } ","summary":null,"tags":["Security","Nginx","Redirect","301","HTTPS"],"title":"Nginx - Redirect to https","uri":"/post/2019-11-18-nginx-redirect-to-https/"},{"categories":["Firewall"],"content":"Sophos recently hosted a webinar with some updates to their version 18 EAP.\nIt’s exciting to see them move forward and I’ll be looking forward to trying out v.18 EAP2. 
EAP1 has been a bit rough around the edges for me, so hopefully it has been fixed in the next version :)\nHere’s a link to the original blog post https://news.sophos.com/en-us/2019/11/07/webcast-xg-firewall-v18-innovations-live/ ","summary":null,"tags":["Security","Sophos","XG"],"title":"Sophos XG v18 Webinar","uri":"/post/2019-11-18-sophox-xg-v18-webinar/"},{"categories":["Webserver"],"content":"Insert this code into your Nginx server {} block\n#Hotlink protection for filetype .js .css .png .jpg .jpeg .gif .ico .svg .webp location ~* \\.(js|css|png|jpg|jpeg|gif|ico|svg|webp)$ { #YOURDOMAIN.COM is the only domain allowed as a referrer valid_referers none blocked .YOURDOMAIN.com; #Change .YOURDOMAIN.com or use the server_names variable if ($invalid_referer) { rewrite (.*) /images/padlock.jpg redirect; } } #End hotlink loop location = /images/padlock.jpg { } Test your configuration by creating a file on another domain with one of your images as the source in an img tag\n\u003chtml\u003e \u003chead\u003e \u003ctitle\u003ehotlink test\u003c/title\u003e \u003c/head\u003e \u003cbody\u003e \u003cimg src=\"http://YOURDOMAIN.com/someimage.jpg\"\u003e \u003c/body\u003e \u003c/html\u003e The bookmarks picture is requested, but a padlock is shown\nThe referrer field in the request header isn’t YOURDOMAIN.com, which is what the nginx code accepts, so this triggers the redirect to the padlock image\nHere’s a link to the padlock image ","summary":null,"tags":["Security","Nginx"],"title":"Hotlink Protection with Nginx","uri":"/post/2019-11-15-nginx-hotlink-protection/"},{"categories":["Food"],"content":" 370ml lukewarm water 1 packet dry yeast 60ml olive oil 12g salt 100g durum flour 500g tipo00 flour Mix the ingredients together in the order listed. This yields enough for 4 pizzas, but I only make 3 out of it so I can get somewhat larger pizzas than usual. I let the dough rise in a bowl at room temperature. 
2 hours before the pizzas are to be made, the dough is divided into the number of pizzas I want and left for a second rise on baking paper in a switched-off oven.\nThe grill is lit with an amount of briquettes corresponding to a full, heaped chimney starter. When the briquettes are ready, they are poured into the grill and the heat deflector plate is placed on top for indirect heat. The grill is then allowed to heat up to over 300C while I get the pizzas ready.\nWhen the pizzas and the grill are ready, the pizza stone is put on and heated for 10 minutes before the first pizza goes on. In my experience the stone should not get much more than 10 minutes, otherwise it absorbs so much heat that the bases burn long before the topping is done.\n","summary":null,"tags":["BBQ","Grill","Pizza"],"title":"Pizza on the Weber Summit Charcoal","uri":"/post/2017-05-14-pizza/"},{"categories":["Food"],"content":"When I make ribs I use the 3-2-1 method. Put simply, the ribs first get 3 hours on the grill, then 2 hours in foil, and then 1 hour on the grill again.\nI have bought 3 racks of loin ribs at Slagter Kolberg. They are frozen when I buy them, so I usually let them thaw in the fridge. If they are needed right away, I have put them in a bucket of lukewarm water\nThe first thing I do is remove the membrane on the underside of the ribs\nOnce the membrane is removed, the top side of the ribs is coated in olive oil to help the rub stick better\nIn this case I used a rub from Famous Dave\nI only apply rub to the top side of the ribs. The ribs go back in the fridge for 3-4 hours before they go on the grill\nThe grill is set up for smoking at roughly 100-110 degrees Celsius. I have filled it with Heat Beads and some lump charcoal, as I think that gives a really good smoke. For smoking wood I use a mix of beech, plum, and apple wood. 
I use larger pieces of wood, which I place along the outer edge of the lit coals\nUnder the grate I have placed a tray that I fill with boiling water, partly to keep the meat from drying out and partly to help the smoke adhere\nNow the grill just needs to hold 110C for 3 hours\nAlong the way the ribs are sprayed with apple juice from a spray bottle, roughly every hour\nAfter the ribs have been on the grill for 3 hours, they are taken off. You can see that the meat has slowly started to pull back from the bones\nThe ribs are placed on aluminium foil and brushed with melted butter, then I sprinkle some “magic dust” over them before they are wrapped tightly in the foil and put back on the grill. The magic dust consists of brown sugar and cayenne pepper, which adds sweetness and a kick of heat\nNow they stay on the grill for 2 hours. During this period the meat pulls back even further from the bones and becomes incredibly tender from being “steamed” inside the foil\nAfter the two hours, the ribs are unwrapped again and brushed with a homemade BBQ sauce (VM-Ribs) from the Danish national BBQ team. The ribs then get 1 more hour on the grill before they are taken off to rest. As the picture shows, it is hard to get a whole rack out of the foil without it falling apart\nAfter resting for 15 minutes, the ribs are served with homemade french fries made on the rotisserie in Weber’s fine-mesh basket.\n","summary":null,"tags":["BBQ","Grill","Ribs"],"title":"Ribs on the Weber Summit Charcoal","uri":"/post/2017-03-12-ribs/"},{"categories":["Programmering"],"content":"Here’s a bit of Python code I have written to help make difficult decisions in life.. 
:-)\n#!/usr/bin/python\n# Kenneth 2014\n# http://www.holmq.dk\n# Version 0.1\n##########################\nimport random, time\n\nclass magickanswer:\n    def magic(self):\n        # Pick a random index into the list of possible answers\n        return random.randrange(0, 3)\n\ndef answer():\n    generate = magickanswer()\n    n = generate.magic()\n    options = [\"Yes\", \"No\", \"Maybe\"]\n    print \"Kenneth says: %s\" % options[n]\n\ncount = 0\nwhile count \u003c 3:\n    answer()\n    time.sleep(1)\n    count = count + 1\n","summary":null,"tags":["Programmering","Python"],"title":"Python - Decision Generator","uri":"/post/2015-02-09-python-decision-generator/"}]