Nix Config
A set of configs for my machines using the Pure Module Pattern:
- powerhouse (desktop) - NixOS with Plasma, NVIDIA, Pro Audio
- capacitor (server) - NixOS headless server with monitoring stack
- turbine (laptop) - macOS with nix-darwin (Intel)
- battery (future) - NixOS
Features
- Pure Module Architecture: 20+ modules using `mkEnableOption` - import to make available, enable to activate
- Bundle-Based Organization: Clean imports via `modules/desktop`, `modules/hardware`, `modules/services`
- Declarative API: Hosts declare capabilities via `desktop.plasma.enable = true`, not file imports
- Cross-platform: Supports both NixOS and macOS with shared configurations
- Home Manager integration: User-specific configs with modular component imports
- Unified GPG/SSH: Integrated authentication and encryption strategy
- Homebrew support (macOS): GUI applications and Mac-specific software
- Cross-compilation: Build and validate nix-darwin configs from Linux
- ISO Installers: Custom NixOS installer ISOs with pre-configured flakes
Documentation
- Complete Documentation - Full documentation site with detailed guides
- Migration Guide - Complete Arch Linux to NixOS migration guide with Windows dual-boot
- Secret Management - Ultra-secure, declarative secret management with sops and age
- GPG/SSH Strategy - Unified authentication and encryption across all systems
- Homebrew Integration - Managing GUI apps and Mac-specific software
- Cross-Platform Development - Building nix-darwin configs from Linux
- GitHub Copilot Agent - Development environment for Copilot coding agent
- Restic Backup Configuration - Secure, declarative backups with Restic
- Module Architecture - Understanding the modular system
- Current Setup Status - Current configuration status and checklist
Repository Structure
├── docs/ # Documentation
├── hosts/ # Host-specific configurations
│ ├── capacitor/ # NixOS homelab server
│ ├── powerhouse/ # NixOS desktop
│ └── turbine/ # macOS laptop
├── lib/ # Shared library functions (mkHost abstraction)
├── modules/ # Modular components (20+ Pure Modules)
│ ├── desktop/ # Desktop environments bundle (plasma, sddm)
│ ├── hardware/ # Hardware bundle (nvidia, bluetooth)
│ ├── media/ # Media bundle (audio)
│ ├── services/ # Services bundle (backup, monitoring, etc.)
│ └── virtualization/ # Virtualization bundle (podman, qemu)
├── users/ # User-specific configurations
├── flake.nix # Main flake configuration with mkHost
└── mise.toml # Development commands and tasks
The Pure Module Pattern
This repository uses Pure Modules with explicit enable options. Unlike traditional Nix configs where importing = activating, here you:
- Import the bundle to make options available
- Explicitly enable what you want
Example - Old Pattern (import = activate):
# Problem: Just importing activates the feature
imports = [
../../modules/desktop/plasma.nix # Always enables Plasma
];
Example - New Pattern (Pure Module):
# Import makes options available
imports = [
../../modules/desktop # Bundle: plasma, sddm available
];
# Explicitly choose what to enable
desktop.plasma.enable = true;
desktop.sddm.enable = true;
# Other options available but inactive:
# desktop.hyprland.enable = false; # Implicit - not enabled
Declarative API Examples
Desktop/Workstation (powerhouse):
# Desktop Environment
desktop.plasma.enable = true;
desktop.sddm.enable = true;
themes.stylix.enable = true;
# Hardware
modules.hardware.nvidia.enable = true;
modules.hardware.bluetooth.enable = true;
media.audio.enable = true;
media.audio.lowLatency = true;
# Services
services.backup.enable = true;
services.monitoring.enable = true;
services.monitoring.exporters.enable = true;
# Virtualization
virtualization.podman.enable = true;
virtualization.hypervisor.enable = true; # Run VMs
Server (capacitor):
# No desktop environment - headless server
# services.backup.enable = true; # Already enabled via service config
services.monitoring.enable = true;
services.monitoring.server.enable = true; # Full Prometheus/Grafana
# Virtualization for containers only
virtualization.podman.enable = true;
virtualization.hypervisor.enable = false; # Don't run VMs on server
Available Module Bundles
Desktop Bundle (modules/desktop/):
- `desktop.plasma.enable` - KDE Plasma 6 desktop
- `desktop.sddm.enable` - SDDM display manager
- `desktop.sddm.theme` - Optional theme
Hardware Bundle (modules/hardware/):
- `modules.hardware.nvidia.enable` - NVIDIA GPU drivers
- `modules.hardware.nvidia.open` - Use open-source kernel modules
- `modules.hardware.bluetooth.enable` - Bluetooth subsystem
- `modules.hardware.bluetooth.powerOnBoot` - Auto-power behavior
Media Bundle (modules/media/):
- `media.audio.enable` - PipeWire audio stack
- `media.audio.lowLatency` - Zen kernel for pro audio (⚠️ changes kernel)
- `media.audio.proAudio` - JACK support, Bitwig Studio
Services Bundle (modules/services/):
- `services.backup.enable` - Restic backup
- `services.monitoring.enable` - Prometheus/Grafana stack
- `services.monitoring.exporters.enable` - Lightweight node exporters
- `services.monitoring.server.enable` - Heavy monitoring server
- Plus: download, git, media, ollama, opencode, storage
Virtualization Bundle (modules/virtualization/):
- `virtualization.podman.enable` - Container engine
- `virtualization.podman.dockerCompat` - Docker alias
- `virtualization.hypervisor.enable` - Run VMs (libvirtd, virt-manager)
- `virtualization.guest.enable` - QEMU guest agent (when this IS a VM)
Adding a New Host
The lib.mkHost function provides the abstraction:
# flake.nix
my-new-host = lib.mkHost {
hostname = "my-new-host";
system = "x86_64-linux";
user = "brancengregory";
builder = nixpkgs.lib.nixosSystem;
homeManagerModule = home-manager.nixosModules.home-manager;
sopsModule = sops-nix.nixosModules.sops;
isDesktop = true; # or false for server
extraModules = [
# Additional modules specific to this host
];
};
Then in hosts/my-new-host/config.nix:
{
imports = [
../../modules/desktop
../../modules/hardware
../../modules/services
];
# Declare capabilities
desktop.plasma.enable = true;
modules.hardware.nvidia.enable = true;
services.backup.enable = true;
system.stateVersion = "25.11";
}
Quick Start
Prerequisites
- Install Nix with flakes enabled
- Clone this repository
- Enter the development environment
Development Environment
# Enter development shell
mise dev
# or: nix develop
# View available commands
mise help
Building Configurations
# Build NixOS configurations
mise build-powerhouse
mise build-capacitor
# Cross-compile nix-darwin config from Linux
mise build-turbine
# Validate nix-darwin config (faster)
mise check-darwin
# Build all configurations
mise build-all
# Build ISO installers
mise build-powerhouse-iso
mise build-capacitor-iso
Design Philosophy
- Pure Modules: All modules use `mkEnableOption` with `default = false`
- Explicit over Implicit: No accidental activation via imports
- Safe Defaults: Hardware defaults to disabled (prevents kernel panics)
- Clear Separation: Server, Desktop, and Laptop have distinct capabilities
- Bundle Organization: Related modules grouped for clean imports
- Declarative Intent: Host configs read like a capability manifest
Status: ✅ Refactor Complete - All system modules use Pure Module pattern
Module Architecture
This configuration uses the Pure Module Pattern with 20+ modules organized across 10 categories. Each module defines options that can be explicitly enabled, following the NixOS module system best practices.
Design Philosophy
Unlike traditional Nix configurations where importing a module immediately activates it, this repository uses Pure Modules with explicit mkEnableOption:
- Import the bundle to make options available
- Explicitly enable what you want with `enable = true`
- Safe defaults - everything defaults to disabled
Example - Pure Module Pattern
# Import the bundle (makes options available)
imports = [
../../modules/desktop # plasma, sddm available but inactive
];
# Explicitly choose what to enable
desktop.plasma.enable = true;
desktop.sddm.enable = true;
desktop.sddm.theme = "sugar-dark"; # Optional configuration
# Other options available but NOT active:
# desktop.hyprland.enable = false; # Implicit - not enabled
Module Categories
Desktop Bundle (modules/desktop/)
Available options:
- `desktop.plasma.enable` - KDE Plasma 6 desktop environment
- `desktop.plasma.lookAndFeel` - Theme selection
- `desktop.sddm.enable` - SDDM display manager
- `desktop.sddm.theme` - Optional theme (e.g., "sugar-dark")
User-level (modules/home/desktop/):
- `home.desktop.plasma.enable` - Plasma user settings via plasma-manager
- `home.desktop.hyprland.enable` - Hyprland window manager
- `home.desktop.hyprland.enableNvidiaPatches` - GPU-specific fixes
Hardware Bundle (modules/hardware/)
Available options:
- `modules.hardware.nvidia.enable` - NVIDIA GPU drivers
- `modules.hardware.nvidia.open` - Use open-source kernel modules (Turing+)
- `modules.hardware.nvidia.powerManagement.enable` - VRAM save on sleep (experimental)
- `modules.hardware.bluetooth.enable` - Bluetooth subsystem
- `modules.hardware.bluetooth.powerOnBoot` - Auto-power behavior (default: false)
- `modules.hardware.bluetooth.guiManager` - Enable blueman GUI
Media Bundle (modules/media/)
Available options:
- `media.audio.enable` - PipeWire audio stack
- `media.audio.server` - Choose: "pipewire" | "pulse" | "alsa" | "none"
- `media.audio.lowLatency` - Use Zen kernel (⚠️ changes kernel!)
- `media.audio.proAudio` - JACK support, real-time limits, Bitwig Studio
- `media.audio.user` - User to add to audio group (default: "brancengregory")
Network Modules (modules/network/)
Already using Pure Module pattern:
- `networking.wireguard-mesh.enable` - WireGuard VPN mesh
- `services.caddy.enable` - Reverse proxy
- `services.netbird.enable` - Netbird VPN
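For illustration, the reverse proxy is enabled like any other Pure Module. The virtual-host snippet below uses the upstream NixOS `services.caddy` options (the repo's module may layer its own settings on top), and the hostname and port are hypothetical:

```nix
{ ... }: {
  imports = [ ../../modules/network ];  # wireguard-mesh, caddy, netbird available

  services.caddy = {
    enable = true;
    # Hypothetical site: proxy Grafana on its default port
    virtualHosts."grafana.example.com".extraConfig = ''
      reverse_proxy 127.0.0.1:3000
    '';
  };
}
```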
OS Modules (modules/os/)
Base system modules (always-on):
- `common.nix` - Universal settings (flakes, experimental flags)
- `nixos.nix` - NixOS-specific system configuration
- `darwin.nix` - macOS-specific system configuration
These are imported directly by lib.mkHost, not via bundles.
Security Modules (modules/security/)
Already using Pure Module pattern:
- `security.gpg.enable` - Declarative GPG key import
- `security.ssh.hostKeysDeclarative.enable` - SSH host key management
- `sops` (via sops-nix module) - Secret management
Services Bundle (modules/services/)
Available options (all using Pure Module pattern):
- `services.backup.enable` - Restic backup with configurable repository, paths
- `services.monitoring.enable` - Prometheus/Grafana stack
- `services.monitoring.exporters.enable` - Lightweight node exporters (all nodes)
- `services.monitoring.server.enable` - Heavy Prometheus/Grafana server (monitoring host only)
- `services.download-stack.enable` - qBittorrent, SABnzbd
- `services.git-server.enable` - Forgejo Git server
- `services.media.enable` - Jellyfin, *arr apps
- `services.ollama-server.enable` - Ollama LLM server
- `services.opencode-server.enable` - OpenCode server
- `services.storage.enable` - Minio, NFS, mergerfs, SnapRAID
Virtualization Bundle (modules/virtualization/)
Available options:
- `virtualization.podman.enable` - Container engine (Docker replacement)
- `virtualization.podman.dockerCompat` - Create 'docker' alias
- `virtualization.podman.dnsEnabled` - Container-to-container DNS
- `virtualization.hypervisor.enable` - Run VMs (libvirtd, virt-manager, QEMU)
- `virtualization.hypervisor.virtManager` - Enable virt-manager GUI
- `virtualization.hypervisor.swtpm` - Software TPM (for Windows 11 VMs)
- `virtualization.hypervisor.spice` - SPICE protocol support
- `virtualization.guest.enable` - QEMU guest agent (for when this machine IS a VM)
Critical: Use hypervisor when running VMs, guest when this is a VM. Never enable hypervisor inside a VM!
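To make the hypervisor/guest distinction concrete, a VM guest host would enable only the guest side (hypothetical host; option names from the bundle above):

```nix
# hosts/some-vm/config.nix (hypothetical VM guest)
{
  imports = [ ../../modules/virtualization ];

  # This machine IS a VM: run the QEMU guest agent...
  virtualization.guest.enable = true;
  virtualization.guest.spice = true;  # clipboard/display integration

  # ...and never the hypervisor stack
  virtualization.hypervisor.enable = false;
}
```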
Themes Bundle (modules/themes/)
Available options:
- `themes.stylix.enable` - Unified theming system
- `themes.stylix.image` - Wallpaper path
- `themes.stylix.base16Scheme` - Color scheme file
- `themes.stylix.autoEnable` - Auto-enable all targets (default: false)
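A minimal sketch of enabling the theming bundle on a host. The wallpaper path and color-scheme file are hypothetical, and the `base16-schemes` package path follows common Stylix usage rather than anything confirmed by this repo:

```nix
{ pkgs, ... }: {
  imports = [ ../../modules/themes ];  # Stylix options available

  themes.stylix = {
    enable = true;
    image = ./wallpapers/mountains.png;  # hypothetical wallpaper path
    base16Scheme =
      "${pkgs.base16-schemes}/share/themes/gruvbox-dark-hard.yaml";
    autoEnable = false;  # keep the default: opt in to targets explicitly
  };
}
```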
Host Configuration Pattern
Modern host configurations use the lib.mkHost abstraction:
Example: Server (capacitor)
# flake.nix
capacitor = lib.mkHost {
hostname = "capacitor";
system = "x86_64-linux";
user = "brancengregory";
builder = nixpkgs.lib.nixosSystem;
homeManagerModule = home-manager.nixosModules.home-manager;
sopsModule = sops-nix.nixosModules.sops;
isDesktop = false; # Headless server
extraModules = [ inputs.disko.nixosModules.disko ];
};
# hosts/capacitor/config.nix
{ config, pkgs, ... }: {
imports = [
../../modules/services # All services available
../../modules/virtualization # Podman available
# No desktop, hardware, or media bundles - this is a server!
];
# Enable only what the server needs
services.backup.enable = true;
services.monitoring = {
enable = true;
exporters.enable = true;
server.enable = true; # This is the monitoring host
};
virtualization.podman.enable = true;
virtualization.hypervisor.enable = false; # Don't run VMs on server
system.stateVersion = "25.11";
}
Example: Desktop (powerhouse)
# flake.nix
powerhouse = lib.mkHost {
hostname = "powerhouse";
system = "x86_64-linux";
user = "brancengregory";
builder = nixpkgs.lib.nixosSystem;
homeManagerModule = home-manager.nixosModules.home-manager;
sopsModule = sops-nix.nixosModules.sops;
isDesktop = true;
extraModules = [
inputs.disko.nixosModules.disko
inputs.plasma-manager.homeModules.plasma-manager
];
};
# hosts/powerhouse/config.nix
{ config, pkgs, ... }: {
imports = [
../../modules/desktop # Plasma, SDDM available
../../modules/hardware # NVIDIA, Bluetooth available
../../modules/media # Audio available
../../modules/services # Backup, monitoring available
../../modules/virtualization # Podman, QEMU available
../../modules/themes # Stylix available
];
# Desktop Environment
desktop.plasma.enable = true;
desktop.sddm.enable = true;
themes.stylix.enable = true;
# Hardware
modules.hardware.nvidia.enable = true;
modules.hardware.bluetooth.enable = true;
# Audio (Pro audio setup)
media.audio.enable = true;
media.audio.lowLatency = true;
media.audio.proAudio = true;
# Services
services.backup.enable = true;
services.monitoring = {
enable = true;
exporters.enable = true; # Just the lightweight exporter
server.enable = false; # Don't run heavy server on desktop
};
# Virtualization (Run VMs, not a VM)
virtualization.podman.enable = true;
virtualization.hypervisor = {
enable = true;
virtManager = true;
swtpm = true; # For Windows 11 VMs
};
system.stateVersion = "25.11";
}
User Configuration Pattern
User configurations (users/*/home.nix) are imported automatically by lib.mkHost. They should use the standard home-manager module pattern:
{ config, pkgs, lib, isLinux, isDesktop, ... }: {
imports = [
../../modules/home/desktop/plasma.nix # If using plasma
];
# Desktop-specific settings
home.desktop.plasma = lib.mkIf isDesktop {
enable = true;
virtualDesktops = 4;
};
# Program configurations
programs.git.enable = true;
programs.zsh.enable = true;
}
Adding New Modules
1. Create the Module
# modules/category/my-module.nix
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.category.myModule;
in {
options.category.myModule = {
enable = mkEnableOption "my module description";
someOption = mkOption {
type = types.str;
default = "default-value";
description = "Description of this option";
};
};
config = mkIf cfg.enable {
# Configuration only applied when enabled
services.myService.enable = true;
services.myService.setting = cfg.someOption;
};
}
2. Add to Bundle (if applicable)
# modules/category/default.nix
{ lib, ... }: {
imports = [
./my-module.nix
./existing-module.nix
];
}
3. Import Bundle in Host Config
# hosts/my-host/config.nix
{
imports = [
../../modules/category # Import the bundle
];
# Enable the specific module
category.myModule.enable = true;
category.myModule.someOption = "custom-value";
}
4. Test
# Validate syntax
nix flake check
# Test specific host
mise build-my-host
Module Dependencies
Modules can reference other modules' options:
{ config, pkgs, lib, ... }:
with lib;
let
cfg = config.services.myService;
in {
options.services.myService = {
enable = mkEnableOption "my service";
};
config = mkIf cfg.enable {
# Reference another module's option
services.dependency.enable = mkDefault true;
# Or check if another module is enabled
environment.systemPackages = mkIf config.services.backup.enable [
pkgs.restic
];
};
}
Best Practices
- Always use `mkEnableOption` with `default = false`
- Namespace options to avoid conflicts (e.g., `modules.hardware.nvidia`, not just `nvidia`)
- Guard with `mkIf cfg.enable` - don't apply config when disabled
- Use bundles - group related modules in `default.nix`
- Explicit over implicit - no "magic" activation via imports
- Document options - provide clear descriptions and examples
Complete Option Reference
Desktop
- `desktop.plasma.enable`
- `desktop.plasma.lookAndFeel`
- `desktop.sddm.enable`
- `desktop.sddm.theme`
Hardware
- `modules.hardware.nvidia.enable`
- `modules.hardware.nvidia.open`
- `modules.hardware.nvidia.powerManagement.enable`
- `modules.hardware.nvidia.powerManagement.finegrained`
- `modules.hardware.nvidia.nvidiaSettings`
- `modules.hardware.bluetooth.enable`
- `modules.hardware.bluetooth.powerOnBoot`
- `modules.hardware.bluetooth.guiManager`
Media
- `media.audio.enable`
- `media.audio.server`
- `media.audio.lowLatency`
- `media.audio.proAudio`
- `media.audio.user`
Services
- `services.backup.enable`
- `services.monitoring.enable`
- `services.monitoring.exporters.enable`
- `services.monitoring.exporters.port`
- `services.monitoring.exporters.collectors`
- `services.monitoring.server.enable`
- `services.monitoring.server.prometheusPort`
- `services.monitoring.server.grafanaPort`
- `services.monitoring.server.grafanaBind`
- `services.download-stack.enable`
- `services.git-server.enable`
- `services.media.enable`
- `services.ollama-server.enable`
- `services.opencode-server.enable`
- `services.storage.enable`
Virtualization
- `virtualization.podman.enable`
- `virtualization.podman.dockerCompat`
- `virtualization.podman.dnsEnabled`
- `virtualization.podman.extraPackages`
- `virtualization.hypervisor.enable`
- `virtualization.hypervisor.virtManager`
- `virtualization.hypervisor.swtpm`
- `virtualization.hypervisor.spice`
- `virtualization.guest.enable`
- `virtualization.guest.spice`
Themes
- `themes.stylix.enable`
- `themes.stylix.image`
- `themes.stylix.base16Scheme`
- `themes.stylix.autoEnable`
Home Desktop
- `home.desktop.plasma.enable`
- `home.desktop.plasma.lookAndFeel`
- `home.desktop.plasma.virtualDesktops`
- `home.desktop.hyprland.enable`
- `home.desktop.hyprland.enableNvidiaPatches`
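To show how the reference composes, here is a hypothetical laptop host ("battery", listed earlier as a future machine) enabling a cross-section of these options; every attribute below comes from the lists above:

```nix
# hosts/battery/config.nix (hypothetical future laptop)
{
  imports = [
    ../../modules/desktop
    ../../modules/hardware
    ../../modules/media
    ../../modules/services
  ];

  desktop.plasma.enable = true;
  desktop.sddm.enable = true;

  modules.hardware.bluetooth.enable = true;
  modules.hardware.bluetooth.powerOnBoot = true;

  media.audio.enable = true;  # PipeWire; lowLatency left off on a laptop

  services.backup.enable = true;
  services.monitoring.enable = true;
  services.monitoring.exporters.enable = true;

  system.stateVersion = "25.11";
}
```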
Secret Management Guide
This document describes the ultra-secure, fully declarative secret management system for the NixOS infrastructure using sops-nix.
Quick Start Guide
This section provides a quickstart for adding sops-nix to your Nix configuration. For comprehensive secret generation and management, see the sections below.
Why sops-nix?
- Atomic & Declarative: Secrets are deployed alongside configuration
- GitOps Friendly: Encrypted secrets stored directly in Git (YAML/JSON)
- Native Integration: NixOS and Home Manager modules
- Flexible Keys: Works with Age and SSH keys
Prerequisites
Tools are available via nix develop or:
nix-shell -p sops age ssh-to-age
Basic Setup
1. Add Flake Input:
inputs.sops-nix = {
url = "github:Mic92/sops-nix";
inputs.nixpkgs.follows = "nixpkgs";
};
2. Configure .sops.yaml:
keys:
- &user_brancen age1... # Your age public key
- &host_powerhouse age1... # Convert from SSH: ssh-to-age
creation_rules:
- path_regex: secrets/[^/]+\.(yaml|json)$
key_groups:
- age:
- *user_brancen
- *host_powerhouse
3. Create Secrets:
mkdir secrets
sops secrets/general.yaml
# Add secrets in the editor, save and close
4. Configure System:
sops.defaultSopsFile = ../../secrets/general.yaml;
sops.secrets."my_secret" = {};
5. Configure Home Manager (Optional):
sops.secrets.api_key = {
path = "${config.home.homeDirectory}/.api_key";
mode = "0600";
};
Workflow
- Edit: `sops secrets/general.yaml`
- Commit: `git add secrets/ && git commit`
- Apply: `nixos-rebuild switch` or `home-manager switch`
- Rotate: `sops updatekeys secrets/general.yaml`
Infrastructure Secret Management
For comprehensive infrastructure secrets (GPG keys, WireGuard, SSH host keys, etc.), see below.
Architecture Overview
The secret management system follows these principles:
- Air-Gapped Generation: All cryptographic secrets are generated in a secure, offline environment
- Encrypted Storage: Secrets are encrypted with sops using age
- Version Control: Encrypted secrets are committed to git, providing audit trail and backup
- Declarative Deployment: Secrets are deployed via NixOS modules, no manual copying required
- Reproducibility: Complete infrastructure can be rebuilt from git repository
Secret Types
1. GPG Keys
⚠️ IMPORTANT: GPG Secret Keys Are NOT Stored in SOPS
GPG secret keys live exclusively on Nitrokey 3 hardware tokens. They are NEVER stored in SOPS or any filesystem.
- Storage: Hardware tokens only (Nitrokey 3)
- Public Key: Available in `keys/brancen-gregory-public.asc` and on keys.openpgp.org
- Provisioning: Manual via `gpg --card-edit` after inserting hardware token
- Documentation: See Hardware Token Guide and GPG/SSH Strategy
Key Information:
- Fingerprint: `0A8C406B92CEFC33A51EC4933D9E0666449B886D`
- Key ID: `3D9E0666449B886D`
- Keyserver: https://keys.openpgp.org
2. WireGuard Keys
- Hub (Capacitor): Server keys with listening port
- Spokes: Client keys with assigned IPs in 10.0.0.0/24
- Preshared Keys: Per-pair PSKs for additional security
- Storage: `secrets/secrets.yaml` under `wireguard:` tree
3. SSH Host Keys
- Ed25519: Primary host key for each machine
- RSA (optional): Legacy compatibility
- Storage: `secrets/secrets.yaml` under `ssh:` tree
4. Age Keys
- Per-Host Keys: Each host has its own age key for sops decryption
- Your Master Key: Your personal age key for editing secrets
- Storage: `secrets/secrets.yaml` under `age:` tree
5. Application Secrets
- Restic: Backup repository passwords
- Database: Connection strings and credentials
- API Keys: External service credentials
- Storage: `secrets/secrets.yaml` under application-specific trees
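As a sketch, an application secret such as the Restic repository password is declared with sops-nix and handed to the consuming module by file path. The `sops.secrets` declaration is standard sops-nix; the exact backup-module attribute that consumes it is an assumption here, not confirmed by this repo:

```nix
{ config, ... }: {
  # Standard sops-nix declaration: decrypts restic/password from
  # secrets/secrets.yaml into a root-owned file under /run/secrets
  sops.secrets."restic/password" = {};

  services.backup.enable = true;

  # Hypothetical attribute name - check modules/services/backup for the
  # real option that accepts the password file path:
  # services.backup.passwordFile = config.sops.secrets."restic/password".path;
}
```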
Directory Structure
.
├── secrets/
│ ├── secrets.yaml # Main encrypted secrets file
│ └── vm_host_key # VM-specific SSH key (development)
├── keys/
│ └── brancen-gregory-public.asc # GPG public key backup (not secret)
├── .sops.yaml # SOPS configuration with age recipients
└── scripts/
├── generate-all-secrets.sh # Generate infrastructure secrets
└── generate-host-secrets.sh # Generate secrets for single host
Note: GPG secret keys are NOT stored in this repository. They reside exclusively on Nitrokey 3 hardware tokens.
Workflow
Initial Setup (One-Time)
1. Enter Development Shell
# Clone repository
git clone https://github.com/brancengregory/nix-config.git
cd nix-config
# Enter development shell (provides all required tools)
nix develop
# Verify tools are available
sops --version
age --version
wg --version
2. Create Your Age Key
# Create directory for sops keys
mkdir -p ~/.config/sops/age
# Generate age key
age-keygen -o ~/.config/sops/age/keys.txt
# Note the public key - you'll add it to .sops.yaml
export SOPS_AGE_KEY_FILE=~/.config/sops/age/keys.txt
3. Generate All Secrets
# Run the master generation script
./scripts/generate-all-secrets.sh
# This will:
# - Generate WireGuard keys for all hosts
# - Generate SSH host keys
# - Generate age keys for all hosts
# - Generate application secrets (restic, etc.)
# - Update .sops.yaml with new recipients
#
# NOTE: GPG keys are NOT generated - they are on Nitrokey hardware tokens
4. Review and Commit
# Review the generated secrets
sops secrets/secrets.yaml
# Check .sops.yaml
sops -d .sops.yaml
# Commit everything
git add secrets/ .sops.yaml
git commit -m "feat: generate all infrastructure secrets"
git push
Adding a New Host
# Generate secrets for new host
./scripts/generate-host-secrets.sh battery
# This will:
# - Assign next available IP (10.0.0.x)
# - Generate WireGuard keys
# - Generate SSH host keys
# - Generate age key
# - Update .sops.yaml
# Review and commit
git add secrets/ .sops.yaml
git commit -m "feat: add battery host secrets"
git push
# Create host configuration
mkdir -p hosts/battery
cp hosts/powerhouse/config.nix hosts/battery/
# ... customize for battery ...
# Deploy
nixos-install --flake .#battery
Editing Secrets
# Edit secrets file
sops secrets/secrets.yaml
# Edit specific key
sops --set '["wireguard"]["powerhouse"]["private_key"] "new-value"' secrets/secrets.yaml
# Extract value
sops -d --extract '["wireguard"]["powerhouse"]["public_key"]' secrets/secrets.yaml
Rotating Secrets
Rotate WireGuard Keys
# Generate new key
NEW_KEY=$(wg genkey)
# Update in sops
sops --set '["wireguard"]["powerhouse"]["private_key"] "'$NEW_KEY'"' secrets/secrets.yaml
# Update public key
NEW_PUB=$(echo "$NEW_KEY" | wg pubkey)
sops --set '["wireguard"]["powerhouse"]["public_key"] "'$NEW_PUB'"' secrets/secrets.yaml
# Commit and deploy
git commit -am "security: rotate powerhouse WireGuard keys"
nixos-rebuild switch --flake .#powerhouse
Rotate GPG Subkeys
⚠️ Hardware Token Procedure
GPG keys are stored on Nitrokey hardware tokens. To rotate:
- Use offline master key backup to generate new subkeys
- Move new subkeys to Nitrokey 3 (both primary and backup tokens)
- Revoke old keys on keyserver
- Update all authorized_keys files
See Hardware Token Guide for detailed procedures.
Rotate Age Keys
# Generate new age key
NEW_KEY=$(age-keygen 2>&1)
NEW_PUB=$(echo "$NEW_KEY" | grep "Public key" | cut -d: -f2 | tr -d ' ')
# Update in secrets
sops --set '["age"]["powerhouse"]["public"] "'$NEW_PUB'"' secrets/secrets.yaml
# Update .sops.yaml (replace old key)
# ... edit .sops.yaml ...
# Re-encrypt all secrets with new recipients
sops updatekeys secrets/secrets.yaml
Host Configuration
Using Declarative Secrets in NixOS
WireGuard Hub Configuration (Capacitor)
{ config, ... }:
{
imports = [ ../../modules/network ]; # Bundle: wireguard available
networking.wireguard-mesh = {
enable = true;
nodeName = "capacitor";
hubNodeName = "capacitor"; # This is the hub
nodes = {
capacitor = {
ip = "10.0.0.1";
publicKey = "CAPACITOR_PUBLIC_KEY";
isServer = true;
};
powerhouse = {
ip = "10.0.0.2";
publicKey = "POWERHOUSE_PUBLIC_KEY";
};
# ... other nodes
};
privateKeyFile = config.sops.secrets."wireguard/capacitor/private_key".path;
};
# Declare secrets
sops.secrets."wireguard/capacitor/private_key" = {};
}
WireGuard Spoke Configuration (Powerhouse)
{ config, ... }:
{
imports = [ ../../modules/network ]; # Bundle: wireguard available
networking.wireguard-mesh = {
enable = true;
nodeName = "powerhouse";
hubNodeName = "capacitor"; # Connects to capacitor
nodes = {
capacitor = {
ip = "10.0.0.1";
publicKey = "CAPACITOR_PUBLIC_KEY";
isServer = true;
endpoint = "capacitor.example.com:51820";
};
powerhouse = {
ip = "10.0.0.2";
publicKey = "POWERHOUSE_PUBLIC_KEY";
};
# ... other nodes
};
privateKeyFile = config.sops.secrets."wireguard/powerhouse/private_key".path;
presharedKeyFile = config.sops.secrets."wireguard/powerhouse/preshared_key".path;
};
# Declare secrets
sops.secrets."wireguard/powerhouse/private_key" = {};
sops.secrets."wireguard/powerhouse/preshared_key" = {};
}
GPG Hardware Token Support
⚠️ GPG Secret Keys Are On Hardware Tokens, Not in SOPS
{ config, ... }:
{
imports = [ ../../modules/security ]; # Bundle: gpg, ssh available
# Enable hardware token support (scdaemon, pcscd)
# Secret keys are NOT imported - they remain on Nitrokey
security.gpg = {
enable = true; # Enables smart card daemon support only
};
# After deployment, provision manually:
# 1. Insert Nitrokey
# 2. gpg --card-edit -> fetch -> quit
# 3. gpg-connect-agent "scd serialno" "learn --force" /bye
# 4. Test: ssh-add -L && git commit --allow-empty -m "Test"
}
Note: See Hardware Token Guide for complete provisioning procedures.
SSH Host Keys
{ config, ... }:
{
imports = [ ../../modules/security ]; # Bundle: gpg, ssh available
services.openssh.hostKeysDeclarative = {
enable = true;
ed25519 = {
privateKeyFile = config.sops.secrets."ssh/powerhouse/host_key".path;
publicKeyFile = config.sops.secrets."ssh/powerhouse/host_key_pub".path;
};
extraAuthorizedKeys = [
"ssh-ed25519 AAAAC3... brancengregory@turbine"
];
};
# Declare secrets
sops.secrets."ssh/powerhouse/host_key" = {};
sops.secrets."ssh/powerhouse/host_key_pub" = {};
}
Security Considerations
Threat Model
Protected Against:
- ✅ Secrets stored in plain text
- ✅ Secrets transmitted over network
- ✅ Accidental secret exposure in git
- ✅ Single point of failure (distributed keys)
- ✅ Lost laptop (per-device revocation)
Requires Protection:
- ⚠️ Your age master key (`~/.config/sops/age/keys.txt`)
- ⚠️ Machine running secret generation (air-gapped preferred)
- ⚠️ Git repository access (encrypted, but still sensitive)
Best Practices
- Generate in Secure Environment
  - Use air-gapped machine or live USB
  - No network connection during generation
  - Wipe temporary files securely
- Backup Your Age Master Key
  # Print key for backup
  cat ~/.config/sops/age/keys.txt
  # Store in:
  # - Password manager
  # - Offline backup (USB in safe)
  # - Paper backup (write it down)
- Regular Rotation
  - WireGuard keys: Every 6-12 months
  - GPG subkeys: Every 1-2 years (via hardware token re-flash)
  - Age keys: Every 2-3 years
  - Application passwords: As needed
- Hardware Token Security
  - GPG keys never leave Nitrokey 3 hardware
  - Keep backup token in secure offline location
  - Never export or backup secret keys to files
- Access Control
  - Limit who can decrypt secrets.yaml
  - Use separate age keys per admin
  - Document who has access in .sops.yaml
- Audit Trail
  - Review git history for secret changes
  - Monitor for unauthorized modifications
  - Use signed commits for sensitive changes
Recovery Scenarios
Lost Age Master Key
# You can still recover if you have access to any host's age key
# Extract from host:
sops -d --extract '["age"]["powerhouse"]["private"]' secrets/secrets.yaml > ~/.config/sops/age/keys.txt
# Then generate new master key and re-encrypt
age-keygen -o ~/.config/sops/age/keys.txt.new
# Update .sops.yaml with new public key
# Re-encrypt all secrets
sops updatekeys secrets/secrets.yaml
Lost Git Repository
# Clone from remote (secrets are encrypted)
git clone https://github.com/brancengregory/nix-config.git
# Decrypt with your age key
export SOPS_AGE_KEY_FILE=~/.config/sops/age/keys.txt
sops -d secrets/secrets.yaml
# Infrastructure can be fully restored
Compromised Host
# Revoke the compromised host's keys
# 1. Generate new keys for the host
./scripts/generate-host-secrets.sh compromised-host
# 2. Rotate WireGuard keys for all peers
# (since PSK is compromised)
for host in powerhouse turbine capacitor battery; do
if [ "$host" != "compromised-host" ]; then
NEW_PSK=$(wg genpsk)
sops --set '["wireguard"]["'$host'"]["preshared_key"] "'$NEW_PSK'"' secrets/secrets.yaml
fi
done
# 3. Commit and deploy everywhere
git commit -am "security: rotate keys after compromise"
for host in powerhouse turbine capacitor battery; do
nixos-rebuild switch --flake .#$host &
done
wait
Troubleshooting
SOPS "config file not found"
# Ensure .sops.yaml exists in repo root
ls -la .sops.yaml
# Check its content
sops -d .sops.yaml
"Failed to decrypt"
# Verify age key is available
echo $SOPS_AGE_KEY_FILE
cat $SOPS_AGE_KEY_FILE
# Check if your key is in .sops.yaml recipients
sops -d .sops.yaml | grep "age:"
# Re-encrypt with your key
sops updatekeys secrets/secrets.yaml
"Failed to write to secrets.yaml"
# Check file permissions
ls -la secrets/
# Ensure directory is writable
chmod u+w secrets/
# Check if file is locked by another process
lsof secrets/secrets.yaml
Hardware Token Not Working
Issue: GPG operations fail, gpg --card-status shows no card
Solution:
# Check if Nitrokey is detected
gpg --card-status
# If not detected, check USB connection
lsusb | grep -i nitro
# Restart scdaemon
gpgconf --kill scdaemon
gpg-connect-agent /bye
# Fetch public key from keyserver
gpg --card-edit
# gpg/card> fetch
# gpg/card> quit
# Create stubs
gpg-connect-agent "scd serialno" "learn --force" /bye
# Verify
gpg --list-secret-keys
ssh-add -L | grep cardno
See Hardware Token Guide for complete troubleshooting.
Migration from energize.sh
The old energize.sh script generated secrets manually on each host. The new system is fully declarative:
| Aspect | energize.sh (Old) | New System |
|---|---|---|
| Generation | Per-host manual | Centralized, air-gapped |
| Storage | Plain text files | Encrypted in git |
| Distribution | Manual copy | Declarative NixOS |
| Backup | None | Git history + age keys |
| Rotation | Manual | Scripted |
| Reproducibility | None | Complete from git |
Migration Steps
1. Backup Existing Keys
# On each host
tar czf ~/keys-backup.tar.gz ~/.ssh /etc/ssh
# NOTE: Do NOT backup ~/.gnupg secret keys - they are on hardware tokens
2. Generate New Declarative Secrets
# In secure environment
./scripts/generate-all-secrets.sh
3. Update Host Configurations
- Add new modules to each host
- Reference new secret paths
- Remove old key references
4. Deploy
# Deploy to each host
nixos-rebuild switch --flake .#powerhouse
# ... repeat for each host
5. Verify
- Check hardware token: gpg --card-status
- Check WireGuard: wg show
- Check SSH: ssh-keygen -lf /etc/ssh/ssh_host_ed25519_key.pub
See Also
- Hardware Token Guide - Nitrokey 3 setup and daily operation
- GPG/SSH Strategy - Hardware-first authentication workflow
- Deployment Guide - New host provisioning with hardware tokens
- SOPS - Secrets OPerationS tool
- Age - Modern encryption tool
Last Updated: 2026-03-04
Hardware Token Model - No Software GPG Keys
Cross-Platform Development
This flake supports building and validating nix-darwin configurations from Linux systems.
Features
Cross-Compilation
Build nix-darwin configurations on Linux without requiring a macOS system:
# Build the full darwin configuration from Linux
nix build .#turbine-darwin
# Or use the convenience command
mise build-turbine
# The result will be in ./result/
Configuration Validation
Validate nix-darwin configurations without performing a full build:
# Check if the darwin configuration is valid
nix build .#turbine-check
# Or use the convenience command
mise check-darwin
# This is faster than a full build and useful for CI/testing
Note: Some darwin packages with system-specific dependencies may fail during cross-compilation. The validation target helps catch syntax and basic configuration errors without requiring full package builds.
Development Environment
Use the provided development shell for cross-platform work:
# Enter the development environment
nix develop
# Or
mise dev
# This provides tools like nixos-rebuild, nix-output-monitor, alejandra, and mise
Linux Builder Setup
For optimal performance when building nix-darwin configurations from Linux, you should set up nix-darwin's linux-builder feature. This enables remote building and improves cross-compilation performance.
What is nix-darwin's linux-builder?
The linux-builder is a virtual machine that runs on macOS and allows you to build Linux packages remotely. However, in our case, we're doing the reverse - building macOS packages from Linux. The linux-builder configuration in nix-darwin can be adapted for this purpose.
Setting up Remote Building
- Configure a macOS builder (if you have access to a macOS machine):
Add to your /etc/nix/nix.conf on the Linux system:
builders = ssh://username@macos-host x86_64-darwin /path/to/ssh-key 2 1 big-parallel,benchmark
builders-use-substitutes = true
- Alternative: Use GitHub Actions macOS runners for CI/CD:
name: Build Darwin Config
on: [push, pull_request]
jobs:
build-darwin:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v27
- name: Validate darwin config
run: mise check-darwin
build-darwin-native:
runs-on: macos-latest
steps:
- uses: actions/checkout@v4
- uses: cachix/install-nix-action@v27
- name: Build darwin config natively
run: mise build-turbine
- Local cross-compilation (current setup):
For development and testing, you can use the cross-compilation support directly:
# Quick validation (recommended for most cases)
mise check-darwin
# Full cross-compilation (may be slower without remote builder)
mise build-turbine
Performance Tips
- Use mise check-darwin for quick validation during development
- Set up remote builders for production deployments
- Use binary caches to avoid rebuilding common packages:
# Add to your nix configuration
nix.settings.substituters = [
"https://cache.nixos.org/"
"https://nix-community.cachix.org"
];
Troubleshooting
If you encounter issues with cross-compilation:
1. Check available builders:
nix show-config | grep builders
2. Verify cross-compilation support:
nix-instantiate --eval -E 'builtins.currentSystem'
3. Use verbose output for debugging:
nix build .#turbine-darwin --verbose
Use Cases
CI/CD Validation
In continuous integration, you can validate both NixOS and nix-darwin configurations on Linux runners:
# Example GitHub Actions workflow
- name: Validate NixOS configuration
run: mise build-powerhouse
- name: Validate nix-darwin configuration
run: mise check-darwin
- name: Cross-compile darwin configuration
run: mise build-turbine
Development Workflow
When developing on Linux but targeting macOS:
- Edit configurations in your preferred Linux environment
- Validate syntax with mise check-darwin
- Cross-compile with mise build-turbine to catch platform-specific issues
- Deploy the configuration on actual macOS hardware when ready
Requirements
- Nix with flakes enabled
- Linux system with sufficient disk space for cross-compilation
- Network access to download dependencies
Troubleshooting
Build Errors
If cross-compilation fails:
- Check that all dependencies support the target platform
- Some macOS-specific packages may not cross-compile successfully
- Consider using remote builders for problematic packages
Performance
Cross-compilation can be resource-intensive:
- Use nix build --max-jobs auto to utilize all CPU cores
- Consider using a remote darwin builder for better performance
- The validation target (turbine-check) is much faster than full builds
Remote Builders
For better performance, you can set up a remote macOS builder:
# In your nix configuration
nix.buildMachines = [{
hostName = "mac-builder.example.com";
system = "x86_64-darwin";
maxJobs = 4;
speedFactor = 2;
supportedFeatures = [ "nixos-test" "benchmark" "big-parallel" ];
mandatoryFeatures = [ ];
}];
This allows Nix to automatically use the remote macOS system for darwin-specific builds.
Homebrew Integration
This nix-darwin configuration includes Homebrew support for managing Mac applications. Here's how to use it:
Philosophy
- Prefer nixpkgs: Use packages from nixpkgs whenever possible (managed in users/brancengregory/home.nix)
- Use Homebrew for GUI apps: GUI applications and Mac-specific software that aren't available or don't work well in nixpkgs
- Minimal Homebrew usage: Only use Homebrew when nixpkgs isn't sufficient
Configuration Files
- modules/os/darwin.nix: Contains Homebrew configuration (casks, brews, taps, masApps)
- users/brancengregory/home.nix: Contains nixpkgs packages (CLI tools, fonts, etc.)
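The Homebrew portion of modules/os/darwin.nix follows the shape of nix-darwin's homebrew module. A minimal sketch (option names from nix-darwin; list contents are illustrative, not the repository's actual lists):

```nix
homebrew = {
  enable = true;
  onActivation = {
    cleanup = "zap";  # uninstall anything not declared below
    upgrade = true;   # upgrade formulae/casks on activation
  };
  taps = [ ];
  brews = [ ];            # CLI tools only when nixpkgs isn't sufficient
  casks = [ "ghostty" ];  # illustrative GUI app
  masApps = { };          # Mac App Store apps by numeric ID
};
```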
Adding Applications
GUI Applications (Recommended for Homebrew)
Edit modules/os/darwin.nix and add applications to the casks list:
casks = [
"visual-studio-code"
"docker"
"slack"
"1password"
];
CLI Tools (Recommended for nixpkgs)
Edit users/brancengregory/home.nix and add packages to the home.packages list:
home.packages = with pkgs; [
git
neovim
bat
# ... other packages
];
When to Use Homebrew for CLI Tools
Only use Homebrew brews for CLI tools that:
- Aren't available in nixpkgs
- Don't work properly when installed via nixpkgs
- Require system integration that nixpkgs can't provide
Mac App Store Apps
For apps from the Mac App Store, add them to masApps with their app ID:
masApps = {
"Xcode" = 497799835;
"Pages" = 409201541;
};
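App Store IDs can be looked up with the mas CLI, assuming mas is installed (it is available in both nixpkgs and Homebrew):

```shell
# Search the Mac App Store for an app's numeric ID
mas search Xcode
# List IDs of apps already installed from the App Store
mas list
```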
Migration from Existing Homebrew
If you already have Homebrew installations:
- Inventory existing apps: Run brew list and brew list --cask
- Add to nix-darwin: Add the applications you want to keep to the appropriate lists
- Apply configuration: Run darwin-rebuild switch --flake .#turbine
- Cleanup: The cleanup = "zap" setting will remove unmanaged packages
Applying Changes
After modifying the configuration:
darwin-rebuild switch --flake .#turbine
This will:
- Install new packages/casks
- Remove packages not in the configuration (due to cleanup = "zap")
- Update existing packages (due to upgrade = true)
Unified GPG/SSH Strategy (Hardware Token Edition)
Hardware-backed authentication using Nitrokey 3 tokens with automatic stub management
This document outlines the hardware-first GPG/SSH configuration where all secret keys reside exclusively on Nitrokey 3 hardware tokens, and hosts use lightweight "stubs" that reference keys on the hardware.
Overview
Architecture
┌─────────────────────────────────────────────────────────────────┐
│ USER APPLICATIONS │
│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────┐│
│ │ GPG Client │ │ SSH Client │ │ Git (signed)││
│ └─────────┬───────┘ └─────────┬───────┘ └──────┬──────┘│
│ │ │ │ │
│ └──────────────────────┼───────────────────┘ │
│ │ │
│ ┌─────────────▼──────────────┐ │
│ │ GPG Agent │ │
│ │ ┌─────────────────────┐ │ │
│ │ │ Stub Management │ │ │
│ │ │ SSH Auth Bridge │ │ │
│ │ │ Smart Card Daemon │ │ │
│ │ └─────────────────────┘ │ │
│ └───────────┬──────────────┘ │
└──────────────────────────────────┼────────────────────────────┘
│
┌──────────────▼───────────────┐
│ HARDWARE TOKEN │
│ ┌──────────────────────┐ │
│ │ Nitrokey 3 │ │
│ │ ├─ Signing Subkey │ │
│ │ ├─ Encryption Subkey │ │
│ │ └─ Auth Subkey (SSH) │ │
│ └──────────────────────┘ │
└───────────────────────────────┘
Key Principles
- Hardware-First: All secret keys live exclusively on Nitrokey 3 tokens
- Stub Model: Hosts have lightweight references (stubs), not actual keys
- Automatic Discovery: Stubs created automatically when hardware key is used
- Cross-Platform: Identical workflow on Linux (powerhouse/capacitor) and macOS (turbine)
- Manual Provisioning: No automated scripts - fully documented procedures
What Changed from Per-Host Model
| Aspect | Old Model | New Model |
|---|---|---|
| Key Storage | Per-host keys in SOPS | Single hardware key pair |
| Secret Material | Filesystem + SOPS | Hardware token only |
| Host Keys | Different per host | Same keys everywhere |
| Provisioning | Import from SOPS | Stubs from hardware |
| Backup Strategy | SOPS backups | Identical backup token |
How Stubs Work
What is a Stub?
A stub is a lightweight reference file stored in ~/.gnupg/private-keys-v1.d/ that:
- Points to a key on the hardware token (by serial number)
- Contains key metadata (algorithm, keygrip)
- Does NOT contain secret key material
Stub vs. Full Key
Full Key (NOT used in this model):
~/.gnupg/private-keys-v1.d/XXXXXXX.key:
- Secret key material (encrypted)
- Can decrypt/sign without hardware
- Size: ~1-2 KB
Stub (what we use):
~/.gnupg/private-keys-v1.d/XXXXXXX.key:
- Keygrip reference
- Hardware token serial
- Algorithm info
- No secret material
- Size: ~200 bytes
- Points to hardware for actual operations
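You can confirm that an entry is a stub rather than a full key by asking the agent directly; card-backed keys are flagged with a T followed by the token serial:

```shell
# List keygrips known to gpg-agent; 'T' marks keys that live on a token
gpg-connect-agent 'keyinfo --list' /bye
# Example line for a card-backed stub (keygrip and serial illustrative):
# S KEYINFO 1234ABCD... T D2760001240103... OPENPGP.1 - - - - -
```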
Creating Stubs
Automatic (Preferred):
# 1. Insert Nitrokey
# 2. Any GPG operation creates stubs
gpg --card-status # View token info (creates stubs)
ssh-add -L # View SSH key (creates auth stub)
git commit -m "test" # Sign (creates signing stub)
# 3. Stubs now exist
gpg --list-secret-keys
# Shows: 'ssb>' notation (secret subkey stub)
Manual (if automatic fails):
# Fetch public key from keyserver
gpg --card-edit
# gpg/card> fetch
# gpg/card> quit
# Force stub creation
gpg-connect-agent "scd serialno" "learn --force" /bye
Magic Recovery (New Machine Setup)
Overview
Setting up GPG/SSH on a new machine with existing hardware keys:
Before: Needed to import secret keys from backup/SOPS
After: Just plug in token and create stubs
Procedure
Step 1: Install System
# Install NixOS or home-manager as normal
# See docs/DEPLOYMENT.md for full procedure
Step 2: Insert Nitrokey
# Physically insert token into USB port
# Wait for LED to stabilize (steady light)
Step 3: Fetch Public Key
# Download from keyserver
gpg --card-edit
# gpg/card> fetch # Automatically fetches from keys.openpgp.org
# gpg/card> quit
# Alternative: Import from local copy
gpg --import keys/brancen-gregory-public.asc
Step 4: Create Stubs
# Link hardware to GPG (creates stubs)
gpg-connect-agent "scd serialno" "learn --force" /bye
# Or simply use any GPG operation
gpg --list-secret-keys # Shows stubs created
Step 5: Verify SSH
# Check SSH key is available
ssh-add -L
# Expected output:
# ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH... cardno:000F_XXXXXXXX
Step 6: Test
# Test SSH authentication
ssh git@github.com
# Should show: Hi brancengregory! You've successfully authenticated...
# Test Git signing
git commit --allow-empty -m "Test hardware key signing"
git log --show-signature -1
# Should show: Good signature from "Brancen Gregory"
Result
- ✅ No secret key import needed
- ✅ No SOPS secrets for GPG
- ✅ Stubs created automatically
- ✅ Hardware provides all secret operations
- ✅ Both Nitrokeys work identically
Daily Workflow
SSH Authentication
Automatic (once stubs exist):
ssh user@server
# System prompts for PIN (if cache expired)
# LED blinks - touch token
# Authentication complete
With Tmux:
tmux new-session -s work
ssh user@server # Works seamlessly in tmux
# If issues:
refresh_gpg # Updates GPG_TTY for current pane
Git Commit Signing
Automatic (enabled by default):
git commit -m "Update configuration"
# Automatically signed with hardware key
# PIN prompt if cache expired
# Touch token when LED blinks
Verify Signature:
git log --show-signature
# Shows: Good signature from "Brancen Gregory <brancengregory@gmail.com>"
GPG Operations
Encrypt File:
gpg --encrypt --recipient brancengregory@gmail.com file.txt
# PIN prompt
# Touch token
# Encrypted file created
Sign File:
gpg --sign file.txt
# PIN prompt
# Touch token
# Signed file created
Key Management
Your Keys
Hardware Token Keys:
- Fingerprint: 0A8C406B92CEFC33A51EC4933D9E0666449B886D
- Key ID: 3D9E0666449B886D
- Keyserver: https://keys.openpgp.org
Key Structure:
Master Key (Certify only)
├─ Signing Subkey → Git commit signing
├─ Encryption Subkey → File/email encryption
└─ Authentication Subkey → SSH authentication
Keyserver Integration
Publish Key:
# If you update/extend keys
gpg --keyserver hkps://keys.openpgp.org --send-keys 3D9E0666449B886D
Fetch on New Machine:
gpg --card-edit
# gpg/card> fetch
# gpg/card> quit
# Or directly:
curl https://keys.openpgp.org/vks/v1/by-fingerprint/0A8C406B92CEFC33A51EC4933D9E0666449B886D | gpg --import
Local Backup
Public Key in Repository:
# Located at: keys/brancen-gregory-public.asc
# Use if keyserver unavailable:
gpg --import keys/brancen-gregory-public.asc
SSH Public Key Management
The SSH public key is derived from the GPG authentication subkey on your Nitrokey 3. Unlike traditional SSH keys stored in ~/.ssh/, this key lives exclusively on the hardware token and is exposed through the GPG agent.
How It Works
Authentication Flow:
- SSH client requests authentication
- GPG agent forwards request to Nitrokey
- Nitrokey performs cryptographic operation
- Private key never leaves the hardware token
Key Storage:
- ❌ Not stored in ~/.ssh/id_* files
- ❌ No secret key material on filesystem
- ✅ Key provided by GPG agent via SSH_AUTH_SOCK
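A quick sanity check that SSH is actually talking to the GPG agent (and not a stock ssh-agent) is to compare the two socket paths:

```shell
# Both paths should match; if not, your shell init isn't exporting SSH_AUTH_SOCK
echo "$SSH_AUTH_SOCK"
gpgconf --list-dirs agent-ssh-socket
```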
Viewing Your SSH Key
From GPG Agent:
# Shows key with cardno notation
ssh-add -L | grep cardno
# Expected output:
# ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI... cardno:000F_9D1F273F0000
From GPG:
# Export SSH public key from GPG
gpg --export-ssh-key 3D9E0666449B886D
# Expected output:
# ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI... openpgp:0xBCF5F8B3
Repository Backup
SSH Public Key File:
- Location: keys/id_nitrokey.pub
- Format: OpenSSH single-line public key
- Source: Exported from Nitrokey authentication subkey
Adding to GitHub/GitLab
Method 1: From Repository:
# Copy to clipboard (macOS)
cat keys/id_nitrokey.pub | pbcopy
# Or display and copy manually
cat keys/id_nitrokey.pub
Method 2: From Agent (Live):
# Copy current key from GPG agent
ssh-add -L | grep cardno | pbcopy
Then:
- Go to GitHub Settings → SSH and GPG keys
- Click "New SSH key"
- Paste the key
- Save
Adding to Servers
Direct from Agent:
# Append to remote authorized_keys
ssh-add -L | grep cardno | ssh user@server "cat >> ~/.ssh/authorized_keys"
Using Exported File:
# Copy key to local .ssh first
gpg --export-ssh-key 3D9E0666449B886D > ~/.ssh/id_nitrokey.pub
chmod 644 ~/.ssh/id_nitrokey.pub
# Then use ssh-copy-id
ssh-copy-id -i ~/.ssh/id_nitrokey.pub user@server
Manual Method:
# On server, edit authorized_keys and add:
# ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI... cardno:000F_9D1F273F0000
No IdentityFile Required
Important: You do NOT need IdentityFile in ~/.ssh/config. The GPG agent provides the key automatically via SSH_AUTH_SOCK.
Correct Configuration:
Host github.com
HostName github.com
User git
PreferredAuthentications publickey
# No IdentityFile - key comes from GPG agent
Why No IdentityFile:
- GPG agent intercepts SSH authentication requests
- Forwards to Nitrokey for cryptographic operations
- Hardware token never exposes private key
- More secure than file-based keys
Testing SSH Authentication
Test with GitHub:
ssh git@github.com
# Expected output:
# Hi brancengregory! You've successfully authenticated...
Test with Any Server:
ssh user@server
# Should prompt for PIN if not cached
# Then prompt for touch confirmation on token
Hardware Token Details
Device Information
Primary Token:
- Model: Nitrokey 3
- Serial: [Check with gpg --card-status]
- Location: [Your secure location]
- Usage: Daily operations
Backup Token:
- Model: Nitrokey 3
- Serial: [Check with gpg --card-status]
- Location: [Your backup secure location]
- Usage: Emergency/backup (identical keys)
PIN Management
User PIN: 6-8 digits
- Daily operations (sign, encrypt, auth)
- 3 attempts before temporary lock
- Unlocked with Admin PIN
Admin PIN: (change from default during setup!)
- Card administration
- Reset User PIN
- Never for daily use
Change PINs:
gpg --card-edit
# gpg/card> admin
# gpg/card> passwd
# Follow prompts
# gpg/card> quit
Touch Confirmation (UIF)
User Interface Flags (UIF):
- Configured: All subkeys require touch
- LED blinks when touch needed
- Press token surface to confirm
Verify:
gpg --card-edit
# gpg/card> uif
# Shows current UIF status
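If touch confirmation is not yet enabled, it can be turned on per subkey from the card's admin menu (a sketch; the uif command requires admin mode and a token that supports UIF):

```shell
gpg --card-edit
# gpg/card> admin
# gpg/card> uif 1 on   # signing subkey requires touch
# gpg/card> uif 2 on   # encryption subkey requires touch
# gpg/card> uif 3 on   # authentication subkey requires touch
# gpg/card> quit
```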
Configuration
Current Configuration Files
GPG Agent (modules/home/gpg.nix):
services.gpg-agent = {
enable = true;
enableSshSupport = true;
enableScDaemon = true; # Smart card daemon for hardware tokens
# Cache settings
defaultCacheTtl = 28800; # 8 hours
defaultCacheTtlSsh = 28800; # 8 hours
maxCacheTtl = 86400; # 24 hours
maxCacheTtlSsh = 86400; # 24 hours
};
Git Signing (modules/home/programs/git.nix):
signing = {
key = "3D9E0666449B886D"; # Hardware token subkey
signByDefault = true;
};
ZSH Integration (modules/home/terminal/zsh.nix):
# Environment setup
export GPG_TTY=$(tty)
export SSH_AUTH_SOCK=$(gpgconf --list-dirs agent-ssh-socket)
# Hardware token aliases
nitro-status = "gpg --card-status";
nitro-fetch = "gpg --card-edit";
nitro-learn = "gpg-connect-agent 'scd serialno' 'learn --force' /bye";
Tmux Integration
Configuration (modules/home/terminal/tmux.nix):
set-option -g update-environment "DISPLAY SSH_ASKPASS SSH_AGENT_PID SSH_CONNECTION SSH_AUTH_SOCK WINDOWID XAUTHORITY GPG_TTY"
# Refresh GPG_TTY on session events
set-hook -g session-created 'run-shell "export GPG_TTY=$(tty) && gpg-connect-agent updatestartuptty /bye >/dev/null 2>&1 || true"'
set-hook -g client-attached 'run-shell "export GPG_TTY=$(tty) && gpg-connect-agent updatestartuptty /bye >/dev/null 2>&1 || true"'
Troubleshooting
Hardware Token Not Detected
Symptom: gpg --card-status shows "No such device"
Solutions:
# Check USB
lsusb | grep -i nitro
# Check scdaemon
gpgconf --check-programs
ps aux | grep scdaemon
# Restart
gpgconf --kill scdaemon
gpg-connect-agent /bye
# Try a different USB port
SSH Key Not Available
Symptom: ssh-add -L doesn't show cardno key
Solutions:
# Create stubs
gpg-connect-agent "scd serialno" "learn --force" /bye
# Check GPG agent
echo $SSH_AUTH_SOCK
gpgconf --list-dirs agent-ssh-socket
# Restart agent
gpgconf --kill gpg-agent && gpgconf --launch gpg-agent
Git Signing Fails
Symptom: git commit fails with GPG error
Solutions:
# Check signing key
git config user.signingkey
# Should be: 3D9E0666449B886D
# Test GPG directly
echo "test" | gpg --clearsign
# Check stubs
gpg --list-secret-keys 3D9E0666449B886D
PIN Entry Issues
Symptom: PIN dialog doesn't appear
Solutions:
# Set GPG_TTY
export GPG_TTY=$(tty)
gpg-connect-agent updatestartuptty /bye
# Test pinentry
echo "GETPIN" | pinentry-curses
# In tmux:
refresh_gpg
Security Model
Threat Protection
Hardware Token Provides:
- ✅ Physical possession requirement
- ✅ Keys never leave hardware (even during operations)
- ✅ Protection against key extraction attacks
- ✅ 5th Amendment protection (can't be compelled to reveal PIN)
- ✅ PIN + touch dual authentication
SSH Host Keys Provide:
- ✅ Server identity verification
- ✅ Protection against man-in-the-middle attacks
- ✅ Pre-distributed trust (no TOFU)
SOPS Provides:
- ✅ Encrypted secret distribution
- ✅ Host-specific credentials
- ✅ Declarative configuration
Trust Model
┌─────────────────────────────────────────┐
│ TRUST HIERARCHY │
├─────────────────────────────────────────┤
│ 1. Hardware Token (Root of Trust) │
│ └─ Physical possession + PIN │
│ │
│ 2. SSH Host Keys (Server Identity) │
│ └─ Pre-verified in SOPS │
│ │
│ 3. SOPS + Age (Secret Distribution) │
│ └─ Host-specific decryption │
└─────────────────────────────────────────┘
Best Practices
Daily Use
- Insert token when starting work
- Remove when done (optional but good practice)
- Verify LED behavior:
- Steady = ready
- Blinking = touch needed
- Use aliases for common operations
Security
- Never export secret keys (they stay on hardware)
- Backup token kept offline (emergency only)
- Public key published (keyserver + repo backup)
- PIN not written down (memorize only)
- Touch required (UIF enabled for all operations)
Maintenance
- Monthly: Test backup token
- Quarterly: Verify keyserver publication
- Annually: Consider subkey rotation
Migration from Old Model
For Existing Installations
If you have per-host GPG keys in SOPS:
1. Backup existing GPG directory:
cp -r ~/.gnupg ~/.gnupg.backup.$(date +%Y%m%d)
2. Remove old keys:
rm -rf ~/.gnupg/private-keys-v1.d/*
rm ~/.gnupg/pubring.kbx*
3. Provision with hardware token:
gpg --card-edit
# gpg/card> fetch
# gpg/card> quit
gpg-connect-agent "scd serialno" "learn --force" /bye
4. Verify:
gpg --list-secret-keys     # Should show stubs
ssh-add -L                 # Should show cardno
git commit --allow-empty -m "Test"
5. Clean SOPS:
# Remove GPG sections from secrets/secrets.yaml
# See docs/DEPLOYMENT.md
Related Documentation
- Hardware Tokens: docs/HARDWARE-KEYS.md - Detailed token management
- Deployment: docs/DEPLOYMENT.md - New machine provisioning
- Secret Management: docs/SECRET_MANAGEMENT.md - SOPS and age keys
- Security: docs/SECURITY.md - Threat model and practices
- FIDO2 (Future): docs/FIDO2-RESIDENT-KEYS.md - Alternative SSH method
- GPG Public Key: keys/brancen-gregory-public.asc - Local backup
- SSH Public Key: keys/id_nitrokey.pub - SSH authentication key
Quick Reference Commands
# Check token
gpg --card-status
# Create stubs
gpg-connect-agent "scd serialno" "learn --force" /bye
# List keys with stubs
gpg --list-secret-keys
# Show SSH key
ssh-add -L | grep cardno
# Export SSH key
gpg --export-ssh-key 3D9E0666449B886D
# Copy SSH key to clipboard (macOS)
gpg --export-ssh-key 3D9E0666449B886D | pbcopy
# Fetch from keyserver
gpg --card-edit -> fetch -> quit
# Restart agent
gpgconf --kill gpg-agent && gpgconf --launch gpg-agent
# Refresh in tmux
refresh_gpg
# Hardware token aliases
nitro-status # gpg --card-status
nitro-fetch # gpg --card-edit
nitro-learn # Create stubs
Last Updated: 2026-03-04
Hardware Token Model - No Per-Host Keys
Security Guidelines
This document outlines security best practices and considerations for this nix-config repository.
Security Review Summary
✅ Repository Status: Ready for public transition
Security Audit Results
✅ Clean: No hardcoded secrets, API tokens, or private keys found
✅ Clean: No sensitive credentials committed to repository
✅ Clean: Strong security configurations implemented
✅ Fixed: Removed hardcoded password from user configuration
Security Configurations
SSH Security
- Password authentication disabled: PasswordAuthentication no
- Strong ciphers: Modern encryption algorithms only
- Host key verification: StrictHostKeyChecking ask
- No agent forwarding: ForwardAgent no and ForwardX11 no
- Key algorithms: Ed25519 and strong RSA/ECDSA only
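On the NixOS hosts, the server-side policies above map onto the OpenSSH module roughly as follows (a sketch using standard services.openssh options; the client-side settings live in ~/.ssh/config via home-manager):

```nix
services.openssh = {
  enable = true;
  settings = {
    PasswordAuthentication = false;       # key-based auth only
    KbdInteractiveAuthentication = false; # no challenge-response fallback
    PermitRootLogin = "no";
  };
};
```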
GPG Security
- Modern algorithms: SHA512, AES256, strong key preferences
- Secure defaults: No weak ciphers, strong S2K settings
- Key management: Proper keyserver configuration
- Hardware token support: Nitrokey 3 hardware tokens with GPG agent integration
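In plain gpg.conf terms, those preferences look roughly like this (illustrative; the repository manages them declaratively via home-manager):

```
# ~/.gnupg/gpg.conf (sketch)
personal-cipher-preferences AES256 AES192 AES
personal-digest-preferences SHA512 SHA384 SHA256
cert-digest-algo SHA512
s2k-digest-algo SHA512
s2k-cipher-algo AES256
```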
User Account Security
- No hardcoded passwords: Users must set passwords during setup
- Wheel group: Administrative access properly configured
- Shell security: Zsh with proper configuration
Best Practices for Users
Initial Setup
- Set secure password: Run sudo passwd brancengregory after first boot
- Generate GPG keys: Follow GPG-SSH-STRATEGY.md
- Configure SSH keys: Add your public keys to services (GitHub, etc.)
- Review configurations: Customize settings for your environment
Ongoing Security
- Regular updates: Keep system packages updated
- Key rotation: Follow GPG key rotation schedule
- Monitor access: Review SSH and GPG logs periodically
- Backup strategy: Secure backup of GPG keys and important data
Before Contributing
- No secrets: Never commit passwords, private keys, or API tokens
- Personal info: Remove personal email/username if contributing upstream
- Configuration review: Ensure changes don't introduce security vulnerabilities
Security Contact
For security-related questions or to report vulnerabilities:
- Open an issue with the "security" label
- Follow responsible disclosure practices
See Also
- GPG/SSH Strategy - Detailed authentication and encryption configuration
- Secret Management - Secure credential management with sops-nix
- Contributing - Development workflow and security guidelines
References
- GPG/SSH Strategy - Detailed security configuration
- NixOS Security - Official security documentation
- Home Manager Security - User-level security considerations
Unified Theming with Stylix
This repository uses Stylix to provide a unified, system-wide aesthetic across both NixOS (powerhouse) and macOS (turbine).
Overview
Stylix allows us to define a single source of truth for our system's visual identity, including:
- Color Scheme: Tokyo Night Dark
- Wallpaper: NixOS Dracula (master artwork)
- Fonts: Fira Code Nerd Font (Monospace), DejaVu (Sans/Serif)
- Cursor: Breeze Snow (24px)
- Opacity: 95% terminal transparency
Architecture
The styling configuration is centralized in modules/themes/stylix.nix.
Selective Targeting
To avoid conflicts with the KDE Plasma 6 desktop environment (which is managed authoritatively via plasma-manager), Stylix is configured with autoEnable = false.
We selectively enable Stylix for specific, safe targets:
- System Level: Console and GTK.
- User Level: Starship and Ghostty (configured in users/brancengregory/home.nix).
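At the system level, the "safe targets" selection looks something like this (a sketch; target names follow Stylix's per-application options):

```nix
# In modules/themes/stylix.nix (sketch)
stylix.targets = {
  console.enable = true;  # virtual console colors
  gtk.enable = true;      # GTK apps outside Plasma's control
};
```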
Configuration
Global Settings (modules/themes/stylix.nix)
stylix = {
enable = true;
autoEnable = false; # Prevents interference with Plasma 6
base16Scheme = "${pkgs.base16-schemes}/share/themes/tokyo-night-dark.yaml";
image = pkgs.fetchurl {
url = "https://raw.githubusercontent.com/NixOS/nixos-artwork/master/wallpapers/nix-wallpaper-dracula.png";
sha256 = "...";
};
fonts.monospace = {
package = pkgs.nerd-fonts.fira-code;
name = "FiraCode Nerd Font Mono";
};
# ... cursor and opacity settings
};
User Overrides (users/brancengregory/home.nix)
In the Home Manager profile, we explicitly enable Stylix for terminal tools:
stylix.targets = {
starship.enable = true;
ghostty.enable = true;
};
Changing the Theme
To change the entire system's look, you only need to modify modules/themes/stylix.nix. For example, to switch to Gruvbox:
- Update base16Scheme to point to a Gruvbox YAML file.
- Update the image URL and hash to a matching wallpaper.
- Run mise build-powerhouse or mise build-capacitor (or run nixos-rebuild switch on the target system) to apply changes.
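Concretely, the Gruvbox switch amounts to two edits in modules/themes/stylix.nix (a sketch; the scheme filename assumes base16-schemes ships gruvbox-dark-hard.yaml, the wallpaper URL is illustrative, and the hash must be replaced with the real one):

```nix
base16Scheme = "${pkgs.base16-schemes}/share/themes/gruvbox-dark-hard.yaml";
image = pkgs.fetchurl {
  url = "https://example.com/gruvbox-wallpaper.png";  # illustrative URL
  sha256 = "";  # leave empty; Nix reports the correct hash on first build
};
```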
Troubleshooting
Version Mismatches
Warnings about Stylix and NixOS/Home Manager version mismatches are suppressed via stylix.enableReleaseChecks = false in home-manager.sharedModules.
UI Artifacts
If a specific application looks broken with Stylix colors, you can disable theming for just that app in the targets block of either the system or user configuration.
Restic Backup Configuration
This project uses Restic for backups, configured declaratively via NixOS. The configuration expects specific credential files to exist on the system to authenticate with the backup repository (e.g., Google Cloud Storage).
1. Quick Start (Manual Setup)
For the service to start correctly, you need to manually create the secret files on the target machine.
Step 1: Create the secrets directory
sudo mkdir -p /etc/nixos/secrets
sudo chmod 700 /etc/nixos/secrets
Step 2: Create the Password File
This file contains only the repository password.
sudo touch /etc/nixos/secrets/restic-password
sudo chmod 600 /etc/nixos/secrets/restic-password
sudo nano /etc/nixos/secrets/restic-password
# Paste your repository password (no newlines)
Step 3: Create the Environment File
This file contains environment variables for the backend (e.g., GCS credentials).
sudo touch /etc/nixos/secrets/restic-env
sudo chmod 600 /etc/nixos/secrets/restic-env
sudo nano /etc/nixos/secrets/restic-env
Content for Google Cloud Storage:
GOOGLE_PROJECT_ID=your-project-id
GOOGLE_APPLICATION_CREDENTIALS=/etc/nixos/secrets/gcs-key.json
Step 4: GCS Key File (If using GCS)
If you defined GOOGLE_APPLICATION_CREDENTIALS above, you need that file too.
# Copy your JSON key file to the server
sudo cp /path/to/your-key.json /etc/nixos/secrets/gcs-key.json
sudo chmod 600 /etc/nixos/secrets/gcs-key.json
Step 5: Test the Service
Rebuild your system or restart the service:
sudo systemctl restart restic-backups-daily-home
sudo systemctl status restic-backups-daily-home
2. Production (Recommended: sops-nix)
For a fully declarative and secure production setup, avoid manually placing files. Instead, use sops-nix to encrypt secrets within this git repository.
- Install sops: Add sops to your environment.
- Generate Keys: Create an SSH or age key for your host.
- Encrypt: Create a secrets.yaml file encrypted with that key containing the file contents.
- Configure NixOS: Use the sops-nix module to decrypt secrets.yaml at runtime and place the files in /run/secrets/.
Example sops-nix config:
sops.secrets.restic-password = {};
sops.secrets.restic-env = {};
services.restic.backups.daily-home = {
passwordFile = config.sops.secrets.restic-password.path;
environmentFile = config.sops.secrets.restic-env.path;
};
NixOS Migration Guide: Arch Linux to NixOS
This guide documents the migration from Arch Linux to NixOS on powerhouse, with Windows dual-boot.
Overview
- Source System: Arch Linux with LVM+LUKS on 2x 1.8TB NVMe drives
- Target System: NixOS with LUKS+Btrfs on nvme1n1, Windows on nvme0n1
- Dual-boot: systemd-boot with Windows chainloading
- Backup Strategy: Restic to Google Cloud Storage (pre-requisite)
Pre-Migration Checklist
1. Backup Current System
Before proceeding, ensure complete backups:
# Verify restic backup is up to date
restic -r gs:powerhouse-backup:/ snapshots
# Run fresh backup
restic -r gs:powerhouse-backup:/ backup /home/brancengregory \
--exclude="/home/brancengregory/.cache" \
--exclude="/home/brancengregory/Downloads" \
--exclude="/home/brancengregory/.local/share/Trash"
# Verify backup integrity
restic -r gs:powerhouse-backup:/ check
2. Export Configuration Data
# Create export directory
mkdir -p ~/migration-exports
# SSH keys
cp -r ~/.ssh ~/migration-exports/
# GPG keys - NOTE: Secret keys are on Nitrokey hardware tokens
# Only export public keys and trust database (no private keys)
gpg --export --armor > ~/migration-exports/gpg-public-keys.asc
cp ~/.gnupg/trustdb.gpg ~/migration-exports/ 2>/dev/null || true
# DO NOT export secret keys - they remain on hardware token
# Browser profiles (bookmarks, extensions, etc.)
# Firefox
cp -r ~/.mozilla ~/migration-exports/ 2>/dev/null || true
# Chrome/Chromium
cp -r ~/.config/google-chrome ~/migration-exports/ 2>/dev/null || true
cp -r ~/.config/chromium ~/migration-exports/ 2>/dev/null || true
# Document any manually installed packages
pacman -Qe > ~/migration-exports/explicit-packages.txt
pacman -Qm > ~/migration-exports/aur-packages.txt
# Copy exports to external storage or restic backup
restic -r gs:powerhouse-backup:/ backup ~/migration-exports
3. Gather System Information
# Save current partition layout
lsblk -f > ~/migration-exports/lsblk-layout.txt
fdisk -l > ~/migration-exports/fdisk-layout.txt
# Save network configuration
ip addr > ~/migration-exports/network-config.txt
cat /etc/resolv.conf > ~/migration-exports/resolv.conf
# Save custom systemd services
ls -la /etc/systemd/system/ > ~/migration-exports/custom-services.txt
# Save cron jobs
crontab -l > ~/migration-exports/crontab.txt 2>/dev/null || true
sudo crontab -l > ~/migration-exports/root-crontab.txt 2>/dev/null || true
Phase 1: Windows Installation
1.1 Prepare Windows Installation
- Download Windows 11 ISO from Microsoft
- Create bootable USB with Ventoy or Rufus
- Backup Windows product key (if applicable)
1.2 Install Windows on nvme0n1
Boot from Windows USB:
- Boot into Windows installer
- Select "Custom: Install Windows only (advanced)"
- Important: Only select nvme0n1 (Drive 0), NOT nvme1n1
Partition Layout for nvme0n1:
Disk 0 (nvme0n1) - 1.8TB
├── System Partition (100MB) - EFI
├── MSR (16MB)
├── Windows (1024GB) - C: drive
└── Unallocated (~780GB) - Future use
Installation Steps:
- Delete all partitions on Disk 0
- Select unallocated space, click "New"
- Windows will create necessary partitions
- Adjust C: drive to 1024000 MB (1TB)
- Leave remaining ~780GB unallocated
- Install Windows to the 1TB partition
1.3 Post-Windows Setup
- Complete Windows setup (skip Microsoft account if desired)
- Install drivers (GPU, chipset, etc.)
- Enable BitLocker if desired (on Windows drive only)
- IMPORTANT: Note the PARTUUID of the Windows ESP partition
# In Windows PowerShell (Admin)
diskpart
list disk
select disk 0
list partition
select partition 1 # Usually the EFI partition
detail partition
# Note the "Partition UUID" value
Phase 2: NixOS Installation Preparation
2.1 Generate SSH Host Key
Before installation, generate the powerhouse SSH host key:
# On your current Arch system or another Linux machine
mkdir -p ~/powerhouse-ssh-key
cd ~/powerhouse-ssh-key
# Generate host key
ssh-keygen -t ed25519 -f ssh_host_ed25519_key -N "" -C "powerhouse-host-key"
# The files will be:
# - ssh_host_ed25519_key (private)
# - ssh_host_ed25519_key.pub (public)
2.2 Add SSH Key to Secrets
# Copy public key to this repo's secrets directory
cp ~/powerhouse-ssh-key/ssh_host_ed25519_key.pub /path/to/nix-config/secrets/powerhouse_host_key.pub
# Add private key to secrets.yaml using sops
cd /path/to/nix-config
# Edit secrets.yaml
sops secrets/secrets.yaml
# Add new entries:
# ssh:
# host_key: |
# -----BEGIN OPENSSH PRIVATE KEY-----
# ... (paste contents of ssh_host_ed25519_key)
# -----END OPENSSH PRIVATE KEY-----
# host_key_pub: ssh-ed25519 AAAAC3NzaC... brancengregory@powerhouse
2.3 Update Configuration
Update the Windows ESP PARTUUID in hosts/powerhouse/config.nix:
fileSystems."/boot/efi-windows" = {
device = "/dev/disk/by-partuuid/XXXX-XXXX"; # Replace with actual PARTUUID
fsType = "vfat";
options = ["noauto" "x-systemd.automount"];
};
2.4 Build NixOS Installer
Create a custom NixOS installer with your flake:
cd /path/to/nix-config
# Build the installer ISO
nix build .#nixosConfigurations.powerhouse.config.system.build.isoImage
# Or download standard NixOS ISO and use manually
Phase 3: NixOS Installation
3.1 Boot NixOS Installer
- Boot from NixOS USB
- Select "NixOS Installer" from boot menu
- Connect to internet (WiFi or Ethernet)
3.2 Prepare Installation Environment
# Set up networking (if WiFi)
nmtui
# Clone your flake (or use local copy)
git clone https://github.com/brancengregory/nix-config.git /mnt/nix-config
cd /mnt/nix-config
# Or copy from USB/mounted drive
cp -r /path/to/nix-config /mnt/
3.3 Run Disko Partitioning
WARNING: This will DESTROY all data on nvme1n1!
cd /mnt/nix-config
# Dry run first to verify layout
sudo nix run github:nix-community/disko -- --mode disko hosts/powerhouse/disks/main.nix --dry-run
# If layout looks correct, run for real
sudo nix run github:nix-community/disko -- --mode disko hosts/powerhouse/disks/main.nix
# Verify partitions
lsblk
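For orientation, here is a hedged sketch of what a LUKS+Btrfs disko layout like `hosts/powerhouse/disks/main.nix` might declare. The actual file in this repo may differ; the ESP size, subvolume names, and mount options below are assumptions, not the repository's exact layout.

```nix
# Illustrative sketch of a disko declaration for nvme1n1 (not the repo's exact file).
{
  disko.devices.disk.main = {
    device = "/dev/nvme1n1";
    type = "disk";
    content = {
      type = "gpt";
      partitions = {
        ESP = {
          size = "512M"; # assumption
          type = "EF00"; # EFI System Partition
          content = {
            type = "filesystem";
            format = "vfat";
            mountpoint = "/boot";
          };
        };
        luks = {
          size = "100%";
          content = {
            type = "luks";
            name = "crypted"; # unlocks as /dev/mapper/crypted, matching the mounts below
            content = {
              type = "btrfs";
              subvolumes = {
                "@" = { mountpoint = "/"; };
                "@home" = {
                  mountpoint = "/home";
                  mountOptions = [ "compress=zstd" "noatime" ];
                };
              };
            };
          };
        };
      };
    };
  };
}
```

The `name = "crypted"` attribute is what makes the later `mount /dev/mapper/crypted /mnt` steps line up with the disko run.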
3.4 Generate Hardware Configuration
# Mount the new root
sudo mount /dev/mapper/crypted /mnt
sudo mount /dev/nvme1n1p1 /mnt/boot
# Generate hardware config
sudo nixos-generate-config --root /mnt
# Copy generated hardware config to your flake
cp /mnt/etc/nixos/hardware-configuration.nix hosts/powerhouse/hardware-generated.nix
# Compare with existing and merge any new modules
# Then update hardware.nix if needed
3.5 Install NixOS
# Install NixOS with your flake
sudo nixos-install --flake .#powerhouse
# Set root password
sudo passwd
# Reboot
sudo reboot
Phase 4: Post-Installation
4.1 Initial Setup
After first boot:
# Set user password
sudo passwd brancengregory
# Verify boot entries
sudo bootctl list
# Check all filesystems are mounted correctly
df -h
lsblk
4.2 Configure Bootloader for Windows
If Windows doesn't appear in boot menu:
# Mount Windows ESP
sudo mkdir -p /boot/efi-windows
sudo mount /dev/nvme0n1p1 /boot/efi-windows # Adjust partition number
# Copy Windows bootloader
sudo mkdir -p /boot/EFI/Microsoft/Boot
sudo cp -r /boot/efi-windows/EFI/Microsoft/Boot/* /boot/EFI/Microsoft/Boot/
# Update systemd-boot
sudo bootctl update
# Reboot and check
sudo reboot
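On the declarative side, the bootloader options that make this work are a small fragment of the NixOS config. A minimal sketch, assuming systemd-boot on the NixOS ESP (systemd-boot auto-detects Windows once `bootmgfw.efi` is present on the ESP it manages):

```nix
boot.loader = {
  systemd-boot.enable = true;        # systemd-boot as the boot manager
  efi.canTouchEfiVariables = true;   # let NixOS register itself in EFI NVRAM
};
```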
4.3 Restore Data from Backup
# Install restic if not already present
# (should be in your home.nix)
# Restore home directory
restic -r gs:powerhouse-backup:/ restore latest --target /tmp/restore --include "/home/brancengregory"
# Copy files to proper location
sudo cp -r /tmp/restore/home/brancengregory/* /home/brancengregory/
sudo chown -R brancengregory:users /home/brancengregory
# Restore migration exports
restic -r gs:powerhouse-backup:/ restore latest --target /tmp/migration --include "/migration-exports"
4.4 Restore SSH Keys and Provision GPG
# Restore SSH keys
mkdir -p ~/.ssh
chmod 700 ~/.ssh
cp /tmp/migration/migration-exports/.ssh/* ~/.ssh/
chmod 600 ~/.ssh/*
chmod 644 ~/.ssh/*.pub 2>/dev/null || true
# GPG: Insert Nitrokey hardware token
# Stubs are created automatically - no key import needed
gpg --card-edit
# gpg/card> fetch # Downloads public key from keyserver
# gpg/card> quit
# Link hardware (creates stubs)
gpg-connect-agent "scd serialno" "learn --force" /bye
# Verify
gpg --list-secret-keys # Should show 'ssb>' (stub notation)
ssh-add -L | grep cardno # Should show SSH key
4.5 Verify Configuration
# Check snapper is working
snapper list
snapper list --config home
# Verify NVIDIA drivers
nvidia-smi
# Check Btrfs subvolumes
sudo btrfs subvolume list /
# Verify swap
swapon -s
free -h
# Test boot entries
sudo bootctl list
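The snapper configs checked above would typically be declared in the NixOS config rather than created imperatively. A minimal hedged sketch for the `home` config (option values are illustrative, not necessarily this repo's settings):

```nix
services.snapper.configs.home = {
  SUBVOLUME = "/home";                 # the @home subvolume mount
  ALLOW_USERS = [ "brancengregory" ];  # allow non-root `snapper -c home list`
  TIMELINE_CREATE = true;              # periodic timeline snapshots
  TIMELINE_CLEANUP = true;             # prune old snapshots automatically
};
```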
Phase 5: Final Configuration
5.1 Update Flake with Real Hardware Config
After installation, update the repository:
cd ~/nix-config # or wherever you cloned it
# Update hardware.nix with any changes from install
# Update secrets with powerhouse SSH host key
# Commit changes
git add .
git commit -m "Update powerhouse config after installation"
git push
5.2 Enable Automatic Backups
Ensure restic backup service is working:
# Check service status
systemctl status restic-backups-daily-home
# Test backup manually
sudo systemctl start restic-backups-daily-home
sudo journalctl -u restic-backups-daily-home -f
# Verify backup in cloud
restic -r gs:powerhouse-backup:/ snapshots
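The `restic-backups-daily-home` unit is generated from a `services.restic.backups` entry in the NixOS config. A hedged sketch of what such a declaration might look like, combining the sops-nix secrets from earlier with the exclude list used in the pre-migration backup (schedule and retention values are assumptions):

```nix
services.restic.backups.daily-home = {
  repository = "gs:powerhouse-backup:/";
  paths = [ "/home/brancengregory" ];
  passwordFile = config.sops.secrets.restic-password.path;
  environmentFile = config.sops.secrets.restic-env.path;
  exclude = [
    "/home/brancengregory/.cache"
    "/home/brancengregory/Downloads"
    "/home/brancengregory/.local/share/Trash"
  ];
  timerConfig = {
    OnCalendar = "daily";
    Persistent = true; # run a missed backup after the machine was off
  };
  pruneOpts = [ "--keep-daily 7" "--keep-weekly 4" "--keep-monthly 6" ]; # assumed retention
};
```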
Troubleshooting
Windows Not Showing in Boot Menu
# Manually add Windows entry
sudo mkdir -p /boot/loader/entries
sudo tee /boot/loader/entries/windows.conf <<EOF
title Windows 11
efi /EFI/Microsoft/Boot/bootmgfw.efi
EOF
# Or copy bootloader files
sudo mkdir -p /boot/EFI/Microsoft/Boot
sudo cp /boot/efi-windows/EFI/Microsoft/Boot/bootmgfw.efi /boot/EFI/Microsoft/Boot/
sudo cp /boot/efi-windows/EFI/Microsoft/Boot/*.efi /boot/EFI/Microsoft/Boot/
LUKS Boot Issues
If system doesn't prompt for LUKS password:
# From NixOS installer, chroot into system
sudo mount /dev/mapper/crypted /mnt
sudo mount /dev/nvme1n1p1 /mnt/boot
sudo nixos-enter
# Check initrd configuration
cat /etc/nixos/configuration.nix | grep -A5 "initrd"
# Ensure cryptd is in initrd
boot.initrd.luks.devices."crypted".device = "/dev/disk/by-uuid/...";
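A slightly fuller hedged sketch of that initrd declaration, with the UUID left elided as in the line above (`allowDiscards` is an assumption for SSDs, not necessarily this repo's setting):

```nix
boot.initrd.luks.devices."crypted" = {
  device = "/dev/disk/by-uuid/..."; # fill in from blkid output
  allowDiscards = true;             # assumption: pass TRIM through LUKS on NVMe
};
```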
NVIDIA Driver Issues
If graphical session fails to start:
# Boot to multi-user.target (no GUI)
systemctl isolate multi-user.target
# Check NVIDIA module
modinfo nvidia
# Check X11 logs
journalctl -u display-manager -b
# Try disabling modesetting temporarily
# Edit config to set: hardware.nvidia.modesetting.enable = false;
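For reference, a hedged sketch of a typical NVIDIA declaration on NixOS, showing where that modesetting toggle lives (the `open` and `package` choices are assumptions; this repo's actual NVIDIA module may differ):

```nix
services.xserver.videoDrivers = [ "nvidia" ];
hardware.nvidia = {
  modesetting.enable = true; # set to false to test the workaround described above
  open = false;              # assumption: proprietary kernel module, not the open one
  package = config.boot.kernelPackages.nvidiaPackages.stable;
};
```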
BTRFS Subvolume Issues
# Check subvolumes
sudo btrfs subvolume list /
# If @home not mounted correctly
sudo umount /home
sudo mount -o subvol=@home,compress=zstd,noatime /dev/mapper/crypted /home
# Update /etc/fstab or fix in configuration.nix
Rollback Plan
If migration fails catastrophically:
- Boot Arch Linux live USB
- Unlock LUKS volumes: `cryptsetup open /dev/nvme0n1p2 cryptkeeper`
- Mount and access data
- Restore from restic backup if needed
- Reinstall GRUB if needed:
  mount /dev/mapper/cryptkeeper-root /mnt
  mount /dev/nvme0n1p1 /mnt/boot
  arch-chroot /mnt
  grub-install --target=x86_64-efi --efi-directory=/boot --bootloader-id=GRUB
  grub-mkconfig -o /boot/grub/grub.cfg
Post-Migration Checklist
- System boots successfully
- Windows boots from systemd-boot menu
- LUKS password prompt works
- All Btrfs subvolumes mounted correctly
- NVIDIA drivers loaded
- Plasma desktop starts
- Snapper snapshots working (test: `snapper create -d "test"`)
- Home directory restored with correct permissions
- SSH keys working (test: `ssh -T git@github.com`)
- GPG hardware token working (test: `gpg --card-status`, `ssh-add -L`)
- Restic backup configured and tested
- All critical applications installed and working
- Firefox/Chrome profiles restored
- Network configuration working (WiFi, VPN, etc.)
- Printer/scanner working (if applicable)
- Bluetooth devices paired
Notes
- Keep Arch installation USB accessible during first week
- Monitor disk usage on new Btrfs layout: `btrfs filesystem df /`
- Test snapper rollback: `sudo snapper rollback`
- Document any manual tweaks needed after install
- Update this guide with lessons learned
Resources
GitHub Copilot Coding Agent Environment
This repository is configured with a GitHub Copilot coding agent environment that provides seamless access to all essential Nix development tools and commands.
Features
The Copilot agent environment automatically provides:
- Nix with flakes support - Reliable installation using DeterminateSystems/nix-installer-action
- Development shell - Access to all tools via `nix develop`
- Command runner - `mise` for convenient development commands
- Code formatting - `alejandra` Nix formatter
- Testing framework - Automated validation and testing
- Binary caching - Fast builds with magic-nix-cache-action
Available Commands
Core Nix Commands
nix flake check # Check flake syntax and evaluate all outputs
nix develop # Enter development shell with all tools
nix build .#<package> # Build specific packages or configurations
Mise Task Shortcuts
mise help # Show all available development commands
mise check # Check flake syntax (alias for nix flake check)
mise check-darwin # Validate nix-darwin config (fast validation)
mise build-turbine # Cross-compile full nix-darwin config from Linux
mise build-powerhouse # Build powerhouse NixOS configuration
mise build-capacitor # Build capacitor NixOS configuration
mise format # Format Nix files using alejandra
mise test # Run validation tests
mise clean # Clean build results and artifacts
mise dev # Enter development shell
Environment Setup
The environment uses GitHub Copilot's official setup workflow approach:
- Copilot Setup Steps - `.github/workflows/copilot-setup-steps.yml` follows GitHub's official documentation and uses:
  - `DeterminateSystems/nix-installer-action@main` for reliable, fast Nix installation
  - `DeterminateSystems/magic-nix-cache-action@main` for automatic binary caching
  - Proper flakes configuration and validation
- Development Shell - The agent enters the Nix development shell defined in `flake.nix`
- Tool Access - All development tools become available (mise, alejandra, etc.)
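Under GitHub's documented convention, this workflow must contain a job named `copilot-setup-steps`. A hedged sketch of what the file might contain (the individual steps are assumptions, not the repo's exact workflow):

```yaml
# Illustrative sketch of .github/workflows/copilot-setup-steps.yml
name: Copilot Setup Steps
on: workflow_dispatch
jobs:
  copilot-setup-steps:   # job name required by GitHub's Copilot convention
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: DeterminateSystems/nix-installer-action@main
      - uses: DeterminateSystems/magic-nix-cache-action@main
      - name: Validate flake and dev shell
        run: |
          nix flake check
          nix develop --command mise help
```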
Benefits of GitHub Copilot Setup Approach
- Official - Follows GitHub's official documentation for Copilot setup steps
- Simple - Single workflow file instead of multiple configuration files
- Reliable - Uses proven, well-tested installation methods
- Speed - Fast installation (~4s on Linux, ~20s on macOS)
- Caching - Binary cache integration for faster builds
- Maintenance - Minimal configuration to maintain
- Compatibility - Works around firewall restrictions
Configuration Files
Primary Configuration
- `.github/workflows/copilot-setup-steps.yml` - GitHub Copilot setup workflow following official documentation
Supporting Files
- `flake.nix` - Nix flake with development shell definition
- `mise.toml` - Development task configuration
The setup provides:
- Automatic Nix installation - Uses DeterminateSystems actions for reliable setup
- Development shell access - All tools available via `nix develop`
- Flake validation - Ensures configuration syntax is correct
Validation
The copilot-setup-steps.yml workflow automatically validates the environment by:
- ✅ Installing Nix with flakes support
- ✅ Checking flake syntax with `nix flake check`
- ✅ Entering development shell to verify tools are available
You can manually run the workflow to test the setup:
gh workflow run copilot-setup-steps.yml
Troubleshooting
Common Issues
"nix: command not found"
- The copilot-setup-steps.yml workflow automatically installs Nix
- Or manually install: `curl -L https://nixos.org/nix/install | sh`
- Source environment: `source ~/.nix-profile/etc/profile.d/nix.sh`
"experimental features not enabled"
- The copilot-setup-steps.yml workflow handles this automatically
- Manual fix: `echo "experimental-features = nix-command flakes" >> ~/.config/nix/nix.conf`
"mise: command not found"
- Install mise: `curl https://mise.run | sh`
- Or enter dev shell: `nix develop`
Flake evaluation errors
- Check syntax: `nix flake check --no-build`
- Update inputs: `nix flake update`
GitHub Actions setup issues
- Check workflow logs for detailed error messages
- Ensure `DeterminateSystems/nix-installer-action@main` is used
- Verify GitHub token permissions for private repositories
- Magic Nix Cache provides automatic caching for faster builds
Build failures
- Clean previous builds: `mise clean`
- Check available packages: `nix flake show`
Development Workflow
The typical development workflow with the Copilot agent:
- Validate configuration: `nix flake check`
- Make changes to Nix files
- Format code: `mise format`
- Test changes: `mise test` or specific validation commands
- Build configurations: `mise build-turbine` or `mise build-powerhouse`
Cross-Platform Support
The environment supports cross-platform development:
- Linux: Full support for NixOS configurations and cross-compilation to macOS
- macOS: Native nix-darwin support with Linux VM building capabilities
- Validation: Quick config validation without full builds using `mise check-darwin`
Integration with Repository
The Copilot agent environment is fully integrated with this repository's structure:
- Flake-based: Uses the `devShells` defined in `flake.nix`
- Mise integration: All `mise.toml` tasks are available
- Cross-compilation: Supports the existing cross-platform workflow
- Testing: Integrates with existing test scripts
This provides the Copilot agent with full access to all development capabilities of this Nix configuration repository.
Contributing
This page outlines how to contribute to the Nix configuration and documentation.
Development Environment
- Install Nix with flakes enabled
- Clone the repository
- Enter the development environment:
mise dev
# or: nix develop
Making Changes
Configuration Changes
- Make your changes to the appropriate files in `hosts/`, `modules/`, or `users/`
- Test your changes:
  mise check-darwin      # for nix-darwin configs
  mise build-powerhouse  # for NixOS configs
- Format your code: `mise format`
Documentation Changes
- Edit the markdown files in the `docs/` directory
- Build and preview the documentation:
  mise docs-build
  mise docs-serve
- The documentation site will be automatically deployed when changes are merged to main
Available Commands
Run `mise help` to see all available development commands.