What is Mad Mode?
My day job is writing software to support research at the KU Med Center informatics division, though I'm best known for my work on HTML and Web Architecture at W3C.
I'm a family man, which gives a certain perspective on the KC area, America, and the world we live in.
Between all that, I like to tinker. The bane of my existence is doing things I know the computer could do for me. Have you ever had one of those ideas that won't let go, not even to eat or sleep? My mom said the first time she saw me like that was after they gave me tinker-toys for my 3rd or 4th Christmas. She said I was in "mad scientist mode," just like my father, a chemistry professor.
My git annex is a mess of tiny files and commits; I'm inclined to start over, but I can't just declare bankruptcy; there's some stuff that I think is nowhere else:
See also:
progress:
2026-01-18 17:14 a245f27 feat(ertp-ledgerguise): add escrow layer and mint charting
resolved Dec 2024
see Dec 2024 resolution
feeling like this falls under "I need another project like I need a hole in the head."
rust : Hardened JS :: Formal Verification : Capability Security
Capability security and formal verification are the best tools I see for managing the complexity in modern digital infrastructure. Rust is more of a formal verification tool: the Rust compiler absolutely guarantees certain properties of programs. Until runtime, that is -- no matter how correct your code is, it's vulnerable to code that you link with. Capability platforms such as Hardened JS take a different approach: even if some components are faulty or malicious, your code can defend itself against them.
Even better is when they are combined, as in the Rust cap-std library. CHERI processors provide capability security in hardware. Apple's Memory Integrity Enforcement (MIE) and Android's Arm Memory Tagging Extension (MTE) are getting very close!
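To make the Hardened JS side concrete, here's a minimal sketch using the ses shim (the plugin source and the `logLine` endowment are made up for illustration): after `lockdown()`, a Compartment sees only what we explicitly pass in, so even a faulty or malicious component has no ambient route to the filesystem or network.

```js
// Minimal sketch: confining third-party code with Hardened JS (the ses shim).
import 'ses';

lockdown(); // freeze the shared primordials; no more ambient mutation of globals

// The only authority this plugin receives is one narrow logging function.
const logLine = (line) => console.log(`[plugin] ${line}`);

const compartment = new Compartment({ log: logLine });

// Hypothetical untrusted code: it can compute and call `log`, but there is no
// `require`, no `process`, no `fetch`; nothing we didn't pass in explicitly.
const untrustedPluginSrc = `
  (function run() {
    log('hello from inside the compartment');
    return typeof fetch; // "undefined": no ambient network authority
  })()
`;

console.log(compartment.evaluate(untrustedPluginSrc)); // -> "undefined"
```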
error: you need to load the kernel first
failure mode: when I boot from the hard disk, a UEFI shell starts but doesn't pass control to the next step. I can do so manually; then grub comes up...
I got a grub boot menu, but when I choose the 1st item, it just comes back. When I choose it again, I get "error: you need to load the kernel first"
full aider/gemini chat session: ocap-osdev-aider-chat-history.md
OCaps for AI: Dependencies
The AI crank in this meme does a great job of expressing my concerns about the destabilizing impact of AI. But just as the Rust rocket is poised to counterbalance that impact, capability-based security, which provides scalable support for the principle of least authority (POLA), could be just as important, if not more so.
ack: outerheaven
p.s. Is this the ultimate XKCD “Dependency” derivative? « The Wiert Corner – irregular stream of stuff does a nice job of explaining the cartoon.
introduction diagram taken from Bringing Object-orientation to Security Programming
Isolated DHCP Server / segment on a Raspberry Pi class device?
🎯 Overview and Technical Implementation
The Isolated DHCP Server strategy is the chosen path to overcome the limitations of consumer mesh Wi-Fi systems (like Google Wi-Fi) which do not allow the configuration of custom DHCP options 66 (next-server) and 67 (filename). This guide serves as the definitive technical implementation plan for this strategy, consolidating all configuration and workflow details.
This approach involves setting up a dedicated server (like a Raspberry Pi or Mini-PC) to run the DHCP and TFTP/HTTP services on a separate, isolated network segment.
The "Isolation" Principle
By running the PXE service on its own subnet, we ensure:
🛠️ Configuration Details (ISC DHCP Server)
The configuration below uses isc-dhcp-server to manage the isolated subnet 192.168.86.0/24. This setup is crucial as it directs the PXE client to the correct server IP and the appropriate bootloader file based on its architecture.
Critical Parameters
dhcpd.conf Configuration Snippet
The following excerpt from dhcpd.conf implements the conditional boot loading, which is a best practice for modern PXE environments supporting both legacy and UEFI systems.
```
# tftpd stuff from netboot.xyz
option arch code 93 = unsigned integer 16;

subnet 192.168.86.0 netmask 255.255.255.0 {
  range 192.168.86.10 192.168.86.200;        # IP range for PXE clients
  next-server 192.168.86.62;                 # The static IP of the PXE/TFTP host
  option subnet-mask 255.255.255.0;
  option routers 192.168.86.1;               # The gateway for the isolated subnet
  option broadcast-address 192.168.86.255;
  option domain-name-servers 1.1.1.1;

  # Conditional booting logic based on DHCP Option 93 (client architecture)
  if exists user-class and ( option user-class = "iPXE" ) {
    # Clients that report as iPXE get the netboot.xyz menu via HTTP
    filename "http://boot.netboot.xyz/menu.ipxe";
  } elsif option arch = encode-int ( 16, 16 ) {
    # UEFI clients (arch 16, 64-bit) get the UEFI bootloader via HTTP (Best Practice)
    filename "http://boot.netboot.xyz/ipxe/netboot.xyz.efi";
    option vendor-class-identifier "HTTPClient";
  } elsif option arch = 00:07 {
    # Other UEFI/x64 clients get the standard UEFI bootloader
    filename "netboot.xyz.efi";
  } else {
    # Legacy BIOS clients get the legacy bootloader
    filename "netboot.xyz.kpxe";
  }
}
```
🚀 OSDev and Docker Integration
Once the DHCP server is running on the isolated segment, the netboot.xyz Docker container will serve the actual boot files.
The next critical step is ensuring the local custom OSDev images are accessible:
Replace Google Wi-Fi devices with OpenWRT device?
This document details the requirements and trade-offs for Strategy D: Replace Main Router, which involves upgrading the entire network backbone to a prosumer-grade device (the GL.iNet GL-MT6000, a.k.a. Flint 2) to natively support PXE/DHCP options 66/67.
Key Comparison: Google Wi-Fi (Current) vs. GL.iNet GL-MT6000 (Proposed)
Replacing the router solves the underlying DHCP issue (Strategy B failure) while significantly upgrading network performance (the Flint 2 is much faster).
💡 Rationale for Strategy D
The primary driver for considering the GL-MT6000 is its native support for OpenWrt, which grants full administrative control over the DHCP server.
The main obstacle is the High Effort involved: replacing the main router requires configuring and migrating all existing network settings (Port Forwarding, Static IPs, Wi-Fi SSIDs, etc.) to the new device, resulting in full network disruption during the transition.
Replace Router? Isolated Server?
1. Networking Infrastructure (The PXE Prerequisite)
The primary obstacle is ensuring the client machine (T430) receives a DHCP offer correctly pointing it to the boot server.
misc notes
2. Boot Service Content (The Content)
3. Critical Remaining Steps
Did I get a vm working 3 years ago?
I'm struggling with how to sneakernet? again today; in preparation for taking that question upstream, I'm committing the recent progress:
2025-11-29 12:06 300bf48 chore(t430): genode persistent config with wm
In doing so, I find notes on getting a vm actually working:
2023-01-29 01:05 d1c6d3a feat(t430): boot linux from vbox6 in genode
persistent config puzzle solved using Ventoy boot tool
For a while, I had a very slow debug cycle:
- `nios-minimal-25.05.xyz.iso` from USB
- `w3m`
- `sudo dd if=sculpt-25-10.img of=/dev/sda bs=1M conv=fsync`

Connecting to wifi
per Networking in the installer docs:
Then I discovered Ventoy, which lets me put a whole bunch of boot images in one partition of the USB stick and choose among them at boot time.
Ironically, using one of the images as a boot disk prohibits mounting the partition to see the others! (`can't open blockdev`)

Fortunately, Ventoy has an option to leave some unused space when setting up a device. So I put `ventoy-1.1.07-linux.tar.gz` on `/dev/sdb3`, booted a small linux distro, and tried to use that to set up the ssd. No joy: `mkexfatfs` not found. So I booted a beefier distro (gparted? ubuntu mate?) and used that to run `bash Ventoy2Disk.sh -i /dev/sda -g -r 100000`.

Then I grabbed a genode image (again) and put it in the `Ventoy` volume. I booted that, and used inspect to achieve a persistent configuration.

My Ventoy thumb drive now subsumes a whole collection of thumb drives that I used to keep:
p.s. Oops; the provenance of Ventoy is not exactly squeaky clean.
Sculpt 25-10 SSD install: linux required
Dividing attention with a Chiefs game (a downer until they pulled off the win in overtime!), I reminded myself how to install genode on the thinkpad's SSD. After I went through all the trouble to add a connector to the downstairs ethernet cable that was severed in my office move, I discovered that Genode / Sculpt supports wifi on this thinkpad now. Is that new as of 25.10?
I was surprised to learn that Genode isn't as self-replicating as I'm used to: after I had booted from a USB stick, it's more or less infeasible to set up the internal SSD to boot genode.
Had to boot linux and copy the boot image from there. (The usual "disk image writer" or just `dd` works.)

no progress on setting up a vm
sculpt-25-10.img (35 MiB)
SHA256
0530fe9b464e717c1b6114d57893783c00946f3fe53a18721b5560ae1fd247ad

Obsidian Web Clipper looks like a contender!
I don't see support for importing highlights, but the source is open (MIT license).
Highlighting isn't as convenient as Diigo, but then: sometimes having the Diigo widget pop up every time I select text is annoying.
In contrast, the Obsidian Web Clipper delighted me with the properties it gleaned:
I tried it on an item where I'd expect great markup: Thoughts on the Resiliency of Web Projects • Aaron Parecki. It got some false positives: the authors it picked up are actually authors of comments.
Ah... I see
EC signature date: 16 November 2022
Start date: 1 January 2023
A project timeline would be in the future.
I'm talking about recent past events.
When was the NLnet award announcement?
ah... that's particularly handy, @tarcieri. It pinpoints my unease with WASI: I'm 100% fine with Runtime capabilities.
But references to so-called Link-time capabilities are ambient authority, no?
The OCap Discipline definition I use includes:
and by "global namespace" I also mean module namespaces.
how about dated items - blog updates, releases, etc.
When was the NLNet thing?
It seems that in cases such as Activities, Binder stuff is used to implement capability security.
But for app permissions, not so much. An LLM summarized it this way:
The Two-Step Security Model
The Android system treats the process of getting and using a service like a two-step authentication:
Step 1: Discovery (The Forgeable Part)
The client process asks the Service Manager for the Binder object.
Request: "Give me the Binder object for 'media.camera'."
Result: The Service Manager simply returns the object reference (the IBinder proxy) to any caller. This is intentional; it allows any app to try to use the service.
Forgeability Status: Forgeable. You can ask for any service name, and if it exists, you'll get the object. At this point, you have the capability reference, but no actual capability to invoke it.
Step 2: Enforcement (The Unforgeable Part)
The client now attempts to call a method on the Binder object (e.g., takePicture()). This is where the security check occurs.
Binder Driver Action: When the call hits the kernel, the Binder driver automatically attaches the caller's kernel-level, unforgeable PID/UID credentials to the transaction data.
Camera Service Action: The remote Camera Service (the server) receives the transaction, and before executing the method, it calls the enforcePermission() system API.
Security Check: The Camera Service asks the core Android security system: "Does the UID attached to this transaction have the android.permission.CAMERA permission?"
Unforgeability Status: Unforgeable. The UID is a kernel property of the calling process, assigned at app launch; it cannot be manipulated by the unprivileged app.
In this model, the Binder object reference is not a security token; it's just a routing handle. The UID is the unforgeable security identity.
-- https://stackoverflow.com/a/10590957
A Microformats Translator for Zotero?
pls excuse LLM gorp. here's hoping for time to refine this to be more in my own voice
When adding a well-marked blog post using h-entry to Zotero, the primary translator often misses key data. Zotero favors machine-first standards like COinS and Schema.org (JSON-LD), overlooking the clean, non-redundant nature of microformats. A dedicated translator is the solution to reliably harvest metadata directly from the visible HTML structure.
Quick-n-Dirty Implementation Sketch
An 80% solution for a custom Zotero translator only needs simple CSS selectors to pull the key data points directly from the h-entry classes. This avoids the fragility of full HTML scraping while leveraging the non-redundant nature of microformats.
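A minimal sketch of what that translator core could look like (standard Zotero detectWeb/doWeb scaffolding; the selectors come from the microformats2 h-entry vocabulary, and the rest is illustrative rather than a finished translator):

```js
// Sketch of an h-entry scraper in the usual Zotero translator shape.

function detectWeb(doc, url) {
  // Treat any page that carries an h-entry as a blog post.
  return doc.querySelector('.h-entry') ? 'blogPost' : false;
}

function doWeb(doc, url) {
  const entry = doc.querySelector('.h-entry');
  const item = new Zotero.Item('blogPost');

  // The three key fields: title, author, and publication date.
  const title = entry.querySelector('.p-name');
  if (title) item.title = title.textContent.trim();

  const author = entry.querySelector('.p-author');
  if (author) {
    item.creators.push(Zotero.Utilities.cleanAuthor(author.textContent.trim(), 'author'));
  }

  const published = entry.querySelector('.dt-published');
  if (published) {
    item.date = published.getAttribute('datetime') || published.textContent.trim();
  }

  item.url = url;
  item.complete();
}
```

A real translator would also want fallbacks (the page title, meta tags) for posts with only partial h-entry markup.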
The simplicity of using `doc.querySelector()` for these three fields keeps the build cost negligible. This principle is proven by the successful HTML-to-LaTeX translator I wrote, which uses h-entry as a single source for content conversion. The Zotero development focus on COinS [e.g., See Zotero Translator documentation on COinS priority] is the practical reason this small, custom tool remains necessary.

Zotero, Diigo, and Hypothesis: annotation, archiving, tagging
In a Zotero 7 blog item, I learned it does webpage snapshots. Nice! Cached Pages (Webpage Backup) is one of the main features that keeps me paying for Diigo premium. As noted above, Diigo search flakes out at an alarming rate. Zotero is a mature organization.
Could Zotero subsume my Diigo usage entirely? Unfortunately, no; the bookmarking / annotation UX is just too klunky for something that I use many times a day, and it's missing support for reverse links:
Despite the nice support for annotation standards in Hypothesis, I have yet to get past the kicking-the-tires stage with it.
Meanwhile, since archiving is what brought on this recent episode, we should look at my archive.org usage too:
For both personal knowledge capture and archival access, Governance / Stability is important:
Zotero Governance
Handy:
Diigo UX backed by Zotero storage?
I have already scripted a tool to export my Diigo history to JSON. Can Zotero's SQLite schema subsume the Diigo JSON format? It has two main parts: item metadata (URL, tags, title) and a nested `annotations` array.

1. Metadata: A Straightforward Mapping
This part is a simple data transformation exercise. Zotero's relational database has clear tables for this data, and Diigo JSON maps cleanly to it.
- `url`, `title`, `tags`, and timestamps (`created_at`).
- `items` table: for the core "Web Page" item type.
- `itemData` and `itemDataValues`: for the URL.
- `itemTags` and `tags`: for a proper relational representation of the tags.
- `dateAdded` and `dateModified`: for the timestamps.
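As a sketch of that metadata mapping (the Diigo-side field names are assumptions based on my export format; rather than writing SQL against those tables directly, this targets Zotero's import-friendly JSON item shape, which Zotero stores into items/itemData/itemTags on import):

```js
// Sketch: map one exported Diigo bookmark to a Zotero "webpage" item in the
// web-API JSON shape. Diigo field names here (url, title, tags, created_at)
// are assumptions about my export format.

function diigoToZoteroItem(diigo) {
  return {
    itemType: 'webpage',
    title: diigo.title,
    url: diigo.url,
    // Assuming tags arrive as a comma-separated string, as in Diigo's API export.
    tags: (diigo.tags || '')
      .split(',')
      .map((t) => t.trim())
      .filter(Boolean)
      .map((tag) => ({ tag })),
    dateAdded: new Date(diigo.created_at).toISOString(),
  };
}

// Example:
// diigoToZoteroItem({
//   url: 'https://example.com/post',
//   title: 'Example post',
//   tags: 'ocap,zotero',
//   created_at: '2024-05-01T12:00:00Z',
// });
// -> { itemType: 'webpage', title: 'Example post', ..., tags: [{ tag: 'ocap' }, { tag: 'zotero' }] }
```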
2. Annotations: Content vs. Position

The Diigo JSON format for annotations is structured as a simple array of objects. Each object has `content` (the highlighted text) and `comments`. The critical missing piece is any positional data: there are no coordinates, no character offsets, nothing that says where on the page the highlight was.

Zotero's annotation system is built around a local, archived HTML file (the "snapshot"). The annotations in Zotero's database are linked to a specific position within that file. Without the positional data from Diigo, re-creating the highlights in Zotero's reader isn't straightforward.
fixed in cce1954