A Theory Of Dynamic Social Systems Modeling

by Tim Post

12 min read

I have a weird little theory to share; something that first struck me a long time ago, when I realized that social reactions would be almost as fun to simulate as physical ones in games like Angry Birds. I've always felt that physics mapped onto our social behavior as something more than a great analogy, but I couldn't really figure out how.

It wasn't until I was coaxing a meter stick to make interesting noises as it oscillated off the side of a table, while CNN was covering a war, that a light bulb went on. Damped oscillation immediately jumped into my brain, but it seemed like an oversimplification. Then, weirdly, it just kept applying, at least anecdotally. It actually fits and works.

Why weird? Because social systems and cultures don't exactly have mass or objectively-quantifiable inputs; they just often behave as though they do as they change in reaction to their own unrest. That makes modeling them with physics methodology tedious, but doable, as long as I don't start pulling needed quantities out of thin air just to make the math work.

And, that took a while to figure out. If you're making up "controls" to make a physics model happy for unintended use, what you have is a metaphor, not a physics model.

generated fractal image

People frequently, and very loosely, also use words like friction to describe resistance to ideas in social systems across cultures, but that use is almost always metaphorical. It's a "curiously strong" Altoids-grade metaphor, but that's all it is: a metaphor.

I have something way more concrete than that, but in this model it's not people that have mass, it's their behavior in relation to everything happening around them. A culture's mass is simply how easily it's disturbed.

That's not a metaphor or hand-waving anymore; that's actually testable!
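To make the mapping concrete, here's a minimal sketch (my own illustrative code, not from the full post) of the underdamped oscillator the analogy rests on. The function names and parameters are hypothetical; the point is only that "mass" sets how hard the system is to disturb, and an impulse produces a ringing that decays:

```python
import math

def damped_response(mass, damping, stiffness, impulse, t):
    """Displacement at time t of an underdamped oscillator hit by an impulse.

    In the analogy, 'mass' stands in for a culture's inertia: how hard it
    is to disturb, and how long the disturbance rings afterward.
    """
    omega0 = math.sqrt(stiffness / mass)                 # natural frequency
    zeta = damping / (2 * math.sqrt(stiffness * mass))   # damping ratio
    if zeta >= 1:
        raise ValueError("sketch only covers the underdamped case")
    omega_d = omega0 * math.sqrt(1 - zeta ** 2)          # ringing frequency
    # Standard impulse response of an underdamped second-order system.
    return (impulse / (mass * omega_d)) * math.exp(-zeta * omega0 * t) * math.sin(omega_d * t)
```

Hit a "heavier" culture with the same impulse and the peak displacement is smaller — which is exactly the testable claim: mass is how easily the system is disturbed.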

Continue Reading →

AGI Maps Weirdly Well to Christianity's Book of Revelation

by Tim Post

5 min read

While written with a light heart and touch, this is a serious post. I went through a bunch of permutations of the title and there's just no way that it doesn't come off as click-bait, but that's honestly not deliberate; it's just part of the nature of the topic and why I decided to research and write about it.

Very to-the-point-ly: AGI has the same pattern matrix as the Book of Revelation: a prophecy, an event, a lot of discussion and argument about whether the event happened, is still happening, has already happened, or is yet to come, and a nearly-identical pattern of social distortion surrounding it.

What's also noteworthy is its meaning to people, which can be extremely visceral, even juxtaposed with their spiritual identity. I'm not going to really comment on that other than to point out that it's happening, which makes the time that we live in really significant in a historical sense.

What you believe in, if anything, isn't questioned in this piece. My goal is only to point out the multitude of similarities between now and then in an objective and (hopefully) somewhat entertaining way, with a data-over-dogma approach to both Revelation and the fury surrounding artificial general intelligence (AGI).

A cartoon version of Jesus next to an AI-powered robot next to a pit of apocalyptic beasts

I'd also like to point out that the two (AGI & Revelation) map onto each other extremely well conceptually, but are incapable of mapping physically, because LLMs are real and not metaphysical beings.

Continue Reading →

When KPIs Turn Maladaptive: "Carrots Unrestrained"

by Tim Post

16 min read

Do you have opinions regarding the usefulness of key performance indicators (KPIs), good or bad? I do too, mostly not-good, and all of them are strongly-held. Somewhere around ten-ish years ago, I was in 'metrics hell', and I promised myself I'd find time to write this once I finally got out of it.

It took ten years of almost writing the rant I wanted to write until I finally zoomed out enough on the history around them to grasp how we ended up in this seventh level of performance-optimized hell. I knew that if I understood how, I'd be closer to understanding why, and possibly closer to helping leave the work world a little better than I found it.

So, I researched, and researched, and finally feel like I found the answer I was seeking. And, well, I'll do my best to share it in a way that doesn't pack in any more boring business history than necessary, and I'll do my best to keep it upbeat and witty while I poke the very existence of these life-monopolizing little ass hats with very sharp sticks! Let's begin with a simple challenge:

If given a goal (A), how do we objectively define KPIs that keep us progressing toward A as we gain momentum? What keeps the entropy generated by momentum alone from becoming what actually drives us to complete the goal?

Do we need to have that conversation for relatively simple goals? If not, then when should we objectively define what those KPIs should be? That sounds like heavy lifting, so maybe we can save some effort by first asking: can we objectively define what KPIs should be?

Looking back on 30 years in tech, from the first dot-com boom of the 90s to now working on open source alternatives to big AI, I thought back on the many wonderful ways companies turned from gold to poop through KPI idolization. In my observation, the top comorbidity in failed developer-facing startups has been, by far, blind worship of the almighty key performance indicator (KPI). This is a call to spot and stop the madness if and when you see it.

We need some historical context to illustrate how we ended up with these sadistic little bastards ruling our professional existence, so let's go back in time to 1954 and get a malt shake. That year, a rather revolutionary book called The Practice Of Management (archive link) first hit the shelves. It lowered the altitude of business reviews by focusing on objectives rather than gut feelings and intuition, which is how businesses had sailed by the metaphorical stars until then, with very little quantitative data to steer by.

Peter Drucker's book planted the seeds that became the modern KPI, years before anyone was using the term metric outside of engineering.


We need to adjust our perspectives a bit, so they're not alien to 1954 business culture. For instance, in 2025, most people would consider decision-making to be a process. In 1954, it was a practice, and one best suited for a sole practitioner with a set of gut feelings that could be trusted for the job. Experience could, and often did, come later, for those who could prove their instincts trustworthy. This is one of the reasons that executive teams were primarily made up of people from nearly-identical backgrounds.

Continue Reading →

Squelch LLM Hallucination Via Structured Uncertainty Handling

by Tim Post

15 min read

Sometimes great conversations start with someone sharing an unpopular opinion. I'll put one out there that I think might just resonate with anyone who's used a large language model (LLM) to produce some kind of structured outcome:

90% or more of what's wrong with people's perception of AI has little to do with the technology itself, but more to do with their expectations of what it's supposed to be able to do.

LLMs are great at getting a wide variety of things most of the way there with a rather convenient rate of reliability. They don't do such a good job of putting human touches on things, but I don't expect them to, because, well, they're not human. We can also deduce that they're not psychic, by reason alone (or, failing that, by their not being human), so they can't read our minds when they're not sure how to proceed.

However, commercial LLMs are marketed as being capable of replicating your human touches, combined with superhuman accuracy and attention to detail. In many preconceived and well-tested situations, they do an okay and predictable job. But the real product commercial providers want you to have isn't the product they've built so far, at least as far as accuracy goes, and they don't do a good job of warning you about that aside from their terms of service.

And unfortunately, since most LLM companies never provide guidance on how to talk to models about their uncertainty, users are left to assume, right in the space where they write prompts, that models are also psychic. This is not helpful of them, and not always entirely accidental.

Research is shedding more light on why hallucinations happen, and it's pointing more directly at model uncertainty, both in generation and in accepting input. It's something that has to be addressed during model training to get hallucination down to a guaranteed rate of ~0.01% (it's mathematically impossible to avoid entirely). Companies put all their effort into avoiding it so they don't need to warn you about it, which means there's little instrumentation to see it happen and correct it.

The good news? There are not-hard things you can do right now to mitigate uncertainty while the industry resolves it through better training, and in some cases, a few extra prompt input tokens can be all you need to squelch many common hallucination opportunities.
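Here's a minimal sketch of what "a few extra prompt tokens" can look like. The function name and the exact wording are my own illustration, not a vendor-documented feature; the idea is simply to give the model a sanctioned escape hatch so guessing confidently isn't its only option:

```python
def with_uncertainty_escape_hatch(task_prompt: str) -> str:
    """Prepend instructions that give the model an explicit way out of guessing.

    The extra tokens offer a sanctioned alternative to inventing specifics:
    flag low-confidence claims and ask when the request is ambiguous.
    """
    preamble = (
        "If you are not sure about any fact, name, number, or API detail, "
        "say so explicitly rather than guessing. Prefix uncertain claims "
        "with [UNSURE]. If the request is ambiguous, ask a clarifying "
        "question before answering.\n\n"
    )
    return preamble + task_prompt
```

Wrapping a prompt this way costs a few dozen tokens, and in my experience it trades a confident fabrication for either a flagged guess or a question — both of which you can actually act on.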

Cartoonish indecisive LLM

Uncertainty, at least as it relates to LLMs, is something that I think about a lot, maybe even a little too much.

Continue Reading →

Copyright © 2024—2025 Tim Post. Thanks Lume and Open Dyslexic