Permanent Notes
The Systems That Watch Back

A colleague saw me online late last week. She said I shouldn't feel like I have to stay late.

She wasn't complaining. It was kind. And it cracked open something I hadn't examined. I was online because I wanted to be — continuing on things I genuinely enjoy building, self-motivated, on my own time. But she didn't see that. She saw the signal.

The dual-frame trick

I think in two languages. Not bilingually in the way most people imagine, translating back and forth, but more like running two cultural operating systems simultaneously. English gives me structure and boundaries. Chinese gives me intuition and depth. A multicultural background makes switching between mindsets natural: not just words, but entire ways of seeing.

When I review code, the English pass says: this logic handles the edge case correctly. The Chinese pass says: wait — if the caller doesn't know this assumption exists, this code is actually a trap.

They're not saying the same thing. And the gap between them is where I catch what a single frame misses.

This week I realized that dual-frame thinking isn't just a language trick. It's a general pattern: any time you hold two models of the same situation, you create a checking mechanism. The interesting question is what happens when you point that mechanism at yourself.

Progressive disclosure and the loading decision

I've spent more than six years building a personal knowledge system. The biggest lesson isn't about capturing more — it's about loading less.

I focus on daily detail: the granular work, the immediate context. And then I regularly step back: weekly, monthly, quarterly, yearly. Each altitude compares against the goals I set at that level. The reviews don't tell me what to do. They set the ground and the goals, a floor rather than a ceiling on what I can do.

The pattern I landed on for the system itself: progressive disclosure through naming and metadata. A good filename is self-classifying; you know whether to open it without reading anything else. When the filename alone isn't enough, a description kicks in, not as a summary but as a trigger: one line that, once you've done the deeper reading, fires your memory of the full content and lets you use the details without re-reading them.

The chain: filename (classification) → description (memory trigger) → section headers (navigation) → body (detail) → linked references (depth).

Each layer earns its place only when the layer above it can't orient you alone. This is architecture — putting the right information at the right layer.
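The layering chain can be sketched as a tiny loader. This is a minimal sketch, not the actual system: the names (`Note`, `load`) and the layer labels are hypothetical stand-ins for what is really just files, filenames, and front-matter. The point it demonstrates is the load decision: return the shallowest layer that can orient you, and only open the body when nothing shallower suffices.

```python
from dataclasses import dataclass, field

@dataclass
class Note:
    filename: str                                  # layer 1: classification
    description: str = ""                          # layer 2: memory trigger
    headers: list = field(default_factory=list)    # layer 3: navigation
    body: str = ""                                 # layer 4: detail

def load(note: Note, need: str) -> str:
    """Return the shallowest layer that satisfies the current need.

    `need` names how deep the task requires you to go:
    'classify', 'recall', 'navigate', or 'detail'.
    Each deeper layer is consulted only if the shallower one
    is missing or the task demands more.
    """
    if need == "classify":
        return note.filename
    if need == "recall" and note.description:
        return note.description
    if need == "navigate" and note.headers:
        return " / ".join(note.headers)
    # Only open the body when nothing shallower can orient you.
    return note.body

# Hypothetical example note for illustration.
n = Note(
    filename="2024-review-checklist.md",
    description="the five questions I ask before approving a PR",
    headers=["assumptions", "edge cases", "caller contract"],
    body="(full text...)",
)
```

Calling `load(n, "classify")` returns just the filename; asking for `"detail"` is the only path that opens the body, which is the whole design: each call pays only for the depth it actually needs.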

But here's what I noticed: the same principle applies to how we present ourselves. The "filename" people see (your visible behavior, your work hours, your pace) is their load decision about you. They don't open the "body" of your intentions and reasons; they decide based on what's immediately visible. Only when every visible layer signals quality do people start to want to know you more. That's when they open the body.

The mirror turns inward

I'd always thought of my overtime as a personal choice. It's the time when I pursue my passion: the things I genuinely enjoy, am self-motivated to do, and want to build. But I never modeled the externalities.

Engineers think in systems, but we often draw the boundary wrong. If your model of "productivity" assumes the people around you are unaffected by your pace, the model is incomplete. You've optimized one node while generating heat that throttles adjacent nodes.

There's a name for this: local versus global optimization. You can make yourself run fast, but if that speed creates invisible pressure on the people beside you, the system as a whole degrades.
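The local-versus-global point can be made concrete with a toy model. Everything in it is invented for illustration: the `pressure_coeff` value and the drag formula are assumptions, not measurements of anything. It only shows the shape of the argument, that raising one node's speed can lower the group's total output once induced pressure is in the model.

```python
def throughput(speeds, pressure_coeff=0.6):
    """Toy model of group output (made-up dynamics).

    Each node contributes its own speed, minus a drag term:
    nodes running below the group average absorb pressure
    proportional to the gap and to the group size.
    """
    avg = sum(speeds) / len(speeds)
    total = 0.0
    for s in speeds:
        # Slower-than-average nodes pay a penalty for the gap.
        drag = pressure_coeff * max(0.0, avg - s) * len(speeds)
        total += s - drag
    return total

balanced = throughput([1.0, 1.0, 1.0])  # everyone at the same pace
lopsided = throughput([2.0, 1.0, 1.0])  # one node locally optimized
```

With these made-up numbers, the lopsided configuration comes out below the balanced one: the sped-up node gained 1.0, but the drag it induced on the other two cost more than that. Local gain, global loss.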

The examined life, as I'm learning to practice it, isn't just "am I doing the right thing?" It's: "what effects am I creating that I haven't included in my model?"

More than six years of this practice, and what I love most is the ability to talk back to myself. Not journaling as venting. Journaling as a genuine interlocutor — the version of you that asks harder questions than anyone else would.

This week I found something I knew (dual-frame thinking catches blind spots), applied it to something I built (progressive disclosure puts right info at right layer), watched someone act on what they knew (writing 15 comments on a PR that wasn't their responsibility), and then — finally — pointed all of it inward.

The systems you build to think better eventually turn their lens back on you. That's not a bug. That's the examined life working as intended.


This is part of my ongoing exploration of what happens when you treat your life as a system worth engineering and a question worth examining.