A Theory Of Dynamic Social Systems Modeling
by Tim Post
12 min read
I have a weird little theory to share; something that I noticed a long time ago, when I noticed that social reactions would be almost as fun to simulate as physical ones in games like Angry Birds. I've always felt that physics mapped to our social behavior way more than just being a great analogy, but couldn't really figure out how.
It wasn't until I was coaxing a meter stick to make interesting noises as it oscillated off the side of a table, while CNN was covering a war, that a light bulb went on. Damped oscillation immediately jumped into my brain, but it seemed like an oversimplification. Then, weirdly, it just kept applying, at least anecdotally. It actually fits and works.
Why weird? Because social systems and cultures don't exactly have mass and objectively-quantifiable inputs, they just often behave as though they do as they change in reaction to their own unrest. That makes modeling them with physics methodology tedious, but doable, as long as I don't start pulling needed things out of thin air just to make the math work.
And, that took a while to figure out. If you're making up "controls" to make a physics model happy for unintended use, what you have is a metaphor, not a physics model.
People frequently, and very loosely, also use words like friction to describe resistance to ideas in social systems across cultures, but that use is almost always metaphorical. It's a "curiously strong" Altoids-grade metaphor, but that's all it is: a metaphor.
I have something way more concrete than that, but in this model it's not people that have mass, it's their behavior in relation to everything happening around them. A culture's mass is simply how easily it's disturbed.
That's not a metaphor or gesturing anymore; that's actually testable!
DSSM (Dynamic Social Systems Modeling): the theory!
Dynamic Social Systems Modeling (DSSM) applies physics-based modeling to understand long-term social and historical dynamics.
By treating social disharmony as a latent (a hidden, underlying measure that we can infer from observable effects) quantity that behaves like a damped oscillator, DSSM provides a mathematical framework for analyzing how societies respond to disruptions, as well as how they respond to equilibrium.
By measuring the value of the latent quantity and the resulting, sometimes inverse, response, we can:
- Understand when social dynamics approach or exceed the threshold of past breaks in equilibrium, pinpoint one's location within the oscillation damping cycle, and predict (with average weather-report accuracy) when upcoming events are likely to occur based on trends.
- Pinpoint voids where all indicators point toward something major having occurred in a region's history, but where no recorded history explaining it exists. This points to either a lack of information in the model's corpus or, theoretically, suppression of history.
- Remove causative factors from history to produce realistic simulations of "what would have happened if [event] never took place?" for research and academic thought experiments.
- Better understand diasporas.
I'd really love to study my own family's dispersal throughout the diaspora that was last impacted by WWI; more on that later.
DSSM's core falsifiable theory - version 0.1.0:
This is a stab in the dark with a flashlight, meant simply to share what I think I've figured out; it is not a formal paper or submission. Math was never in my stronger skills drawer, but fortunately, this isn't terribly complicated.
I'm writing this for anyone who sees what I see and wants to see how it shakes out: historians, developers, physicists, anthropologists. This could be a very useful tool, but without some help, this is as far as I can take it.
The (T) <-> (P) <-> (E) Framework
The T/P/E framework models a society’s baseline tension (T), external perturbations (P), and elastic response (E).
If the theory validates, it means every social system can also be described through three tightly-interacting parameters:
- T (Tension): The system's baseline systemic tension.
- P (Perturbations, i.e. catalysts): Events that push society from equilibrium, regardless of cause or origin, even self-observation.
- E (Elasticity): What resists deformation during perturbations and exerts a bias toward equilibrium.
When a major event like war displaces equilibrium, the system doesn’t glide back smoothly; it rings like a struck bell, losing amplitude as frictional mechanisms (laws, culture, fatigue) dissipate energy.
The simple validation mathematical model
Like a meter stick oscillating after being whacked while hanging partly off the edge of a table, the fundamental equation of DSSM practically mirrors a textbook damped, driven oscillator:
$$ \ddot{x}(t) + 2\zeta\omega\dot{x}(t) + \omega^2 x(t) = \frac{1}{M_s}F(t) $$
Where:
- $$(x(t))$$: normalized social displacement; a latent quantity representing tension
- $$(\zeta)$$: social damping ratio; stability mechanisms
- $$(\omega)$$: natural frequency; the rate of systemic oscillation
- $$(M_s)$$: effective social mass; inertial resistance to change
- $$(F(t))$$: social forcing function; cumulative shocks, trends, and background
The physics are literal; what’s “quasi” is only the mass term, which measures how much integrated force is required to alter a system’s trajectory in any given state.
I was correct that, to make this work, I absolutely could not "just assert mass as a needed constant"; I had to scope it so that it acts as the variable the model expects. It doesn't have to be physical mass; it just has to be measurable, proportional, and reproducibly predictable.
In practice:
$$ (M_s) $$
behaves like inertial affinity: a quantifiable measure of how resistant a culture or polity is to acceleration. We treat it as a phenomenological parameter.
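The equation above is easy to sanity-check numerically. The sketch below is a minimal Python simulation, with every name and parameter value purely illustrative (nothing here is calibrated to real data): it integrates the damped, driven oscillator with semi-implicit Euler, treating each delta-function impulse of amplitude A as an instantaneous velocity kick of A divided by the effective mass, which is the standard discrete treatment of an impulsive force.

```python
import math

def simulate_dssm(zeta, omega, M_s, impulses, dt=0.01, t_max=20.0):
    """Integrate x'' + 2*zeta*omega*x' + omega^2 * x = F(t) / M_s
    with semi-implicit Euler. `impulses` maps event time -> amplitude A;
    a delta-function force A*delta(t - t_k) is applied as an
    instantaneous velocity kick of A / M_s at the nearest time step."""
    kick_at = {int(round(t_k / dt)): A for t_k, A in impulses.items()}
    x, v, xs = 0.0, 0.0, []
    for i in range(int(t_max / dt)):
        if i in kick_at:
            v += kick_at[i] / M_s
        a = -2.0 * zeta * omega * v - omega ** 2 * x  # restoring + damping
        v += a * dt
        x += v * dt
        xs.append(x)
    return xs

# One "war-sized" shock at t = 1 (all numbers purely illustrative);
# an underdamped system rings and decays rather than gliding back.
trace = simulate_dssm(zeta=0.1, omega=2 * math.pi / 3.0, M_s=1.0, impulses={1.0: 5.0})
```

With these made-up values the trace rings at roughly a three-year period while the envelope decays; rerunning with a larger `M_s` shows the inertia claim directly, as the identical shock produces a proportionally smaller response.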
The Forcing Function
Catalysts (driver classes) enter the model through the forcing term:
$$ (F(t)) $$
Initially, we treat all catalysts as impulses:
$$ F(t) = \sum_{k} \sum_{i} A_{k,i}\,\delta(t - t_{k,i}) $$
This captures discrete, high-impact events — wars, depressions, revolutions, pandemics. And, while living through modern history might feel like one extreme shock after another, it's really more granular than that, just in ways that have gotten almost too obscure to even notice anymore. Once proven through pulses, DSSM must then extend to include continuous forcing if it's going to be practically useful for real research:
$$ F(t) = F_{\text{impulse}}(t) + F_{\text{drift}}(t) $$
... where:
$$ (F_{\text{drift}}(t)) $$
accounts for slow-moving drivers such as policy drift, technological diffusion, or generational turnover. There's a lot more to be understood about quantifying drift; period historians (especially those with sociological, medical and political expertise) are required to differentiate signal from noise.
Again, the impulse-only model acts as a falsification test: if reactivity patterns across all known dimensions (political, economic, cultural) align as output from the model during periods of known historical convergence peaks, the theory holds up under its simplest conditions.
Above, k indexes driver classes (types of perturbations) and i indexes individual impulse events. Care should be taken not to expect dynamic equilibrium; in fact, independent phase shifts should be expected around events, since some driver classes had advance warning of catastrophic drivers by virtue of just being literate, as just one example.
The passing of these phases (like watching traffic) is of great interest if "voids" are going to be surfaced.
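A minimal sketch of the combined forcing term, again in Python with purely illustrative names and numbers: true delta functions can't be evaluated on a discrete time grid, so each impulse is approximated here by a narrow unit-area Gaussian. That regularization is an assumption of this sketch, not part of the theory.

```python
import math

def forcing(t, impulses, drift, width=0.05):
    """F(t) = F_impulse(t) + F_drift(t).
    Each Dirac delta is approximated by a narrow Gaussian of unit area
    so the forcing stays integrable on a discrete grid. `impulses` is a
    flat list of (t_ki, A_ki) pairs across all driver classes; `drift`
    is any slowly varying callable (policy drift, generational turnover)."""
    f = drift(t)
    norm = width * math.sqrt(2 * math.pi)  # makes each pulse integrate to A
    for t_ki, A in impulses:
        f += A * math.exp(-((t - t_ki) ** 2) / (2 * width ** 2)) / norm
    return f

# Illustrative only: two shocks plus a slow linear "policy drift" baseline.
events = [(1914.5, 3.0), (1929.8, 2.0)]
F_1915 = forcing(1915.0, events, drift=lambda t: 0.01 * (t - 1900))
```

Away from any impulse the function returns just the drift term, so the impulse-only falsification test described above amounts to running the same model with `drift=lambda t: 0.0`.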
Parameters and Estimation
This is the foggy part, because it's a chicken-and-egg sort of speculation. I don't yet have a reference for magnitude, so it's hard to say what exact numerical outcomes I expect parameters to conform to. They'll be determined empirically, when accurate embeddings become available.
Natural frequency: $$(\omega)$$
Inferred from observed recurrence periods of upheaval; for yearly data:
$$ (\omega = 2\pi / T) $$
(T here is the recurrence period, not the Tension parameter from earlier; did I mention this is a draft?)
I think the smallest meaningful chunk, where there's at least a 33% probability of driver classes overlapping, is probably somewhere around 3.xx years, but that guess is about as much wild as educated at this point.
Damping ratio:
$$ (\zeta) $$
Estimated empirically from successive peak amplitudes using the logarithmic decrement method (a standard way of measuring how quickly oscillations dampen):
$$ \delta = \ln\!\left(\frac{x_n}{x_{n+1}}\right), \quad \zeta \approx \frac{\delta}{\sqrt{4\pi^2 + \delta^2}} $$
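The logarithmic decrement estimate is straightforward to compute from a list of successive peak amplitudes. A minimal Python sketch follows; the function name and the synthetic peak values are illustrative assumptions, not project code.

```python
import math

def zeta_from_peaks(peaks):
    """Estimate the damping ratio from successive peak amplitudes
    using the logarithmic decrement: delta = ln(x_n / x_{n+1}),
    averaged over every adjacent pair of peaks available."""
    decrements = [math.log(a / b) for a, b in zip(peaks, peaks[1:])]
    delta = sum(decrements) / len(decrements)
    return delta / math.sqrt(4 * math.pi ** 2 + delta ** 2)

# Synthetic ring-down whose peaks shrink by a factor e^0.5 each cycle,
# i.e. a known decrement of 0.5 (illustrative numbers only).
peaks = [1.0, math.exp(-0.5), math.exp(-1.0), math.exp(-1.5)]
z = zeta_from_peaks(peaks)  # ~0.0793 = 0.5 / sqrt(4*pi^2 + 0.25)
```

Averaging over all adjacent peak pairs, rather than using a single pair, damps measurement noise in the amplitudes, which matters when the "peaks" come from messy historical proxies rather than a meter stick.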
Effective social mass:
$$ (M_s) $$
A fitted parameter describing systemic inertia. Higher = slower reactions, lower = volatile. Systems with millions of members and very low systemic inertia would be a "highly-reactive regime." Viscosity could also be a good term.
Worth noting: mass is relative to driver classes in this equation; a system may be extremely resilient to political and economic upheaval, but easily scattered by climate or geotectonic impulses; it shouldn't be mistaken as invariant even if only one driver class is used.
However, rigor demands that the same classes driving the same impulses with the same seed produce the same mass (more in key principles below).
Key principles in theory validation
Going back to the beginning, the intent here isn't to produce a strong metaphor, but to create a functional modeling tool. That means, certain assumptions about what will be visible must be met in order for any data to be considered as valid:
- Oscillatory Responses: Societies don't move linearly from one state to another; they oscillate around equilibrium with decreasing amplitude, across dimensions and domains. If we're getting nothing but upside-down thumbtacks and the entire time series has been exhausted, we can't continue.
- Multiple Timescales: Different social systems (economic, political, cultural) oscillate at different frequencies. History is not univocal, and neither is any negative representation of it. If all driver classes produce the same magnitudes, we very likely have a problem with how it's modeled.
- Coupled Dynamics: Driver classes interact but aren't strictly causally chained. They may influence, but don't necessarily cause, each other. We must investigate and understand every new relationship, and the success of the project depends on finding these very early.
- Directional Asymmetry: Social "friction" may differ based on the direction of change. Not every +5 is accompanied by a -5, not every reaction is proportional, and some should be very disproportional in that time slice (one of the reasons it was chosen for validation).
- Self-Observation Effect: Increased collective awareness affects system behavior (the system observing itself changes damping characteristics). We don't have to deal with the advent of social media just for validation, but we will eventually need to hone in on this quite keenly. We must be able to measure it.
Proposed sources (1860-1930 US focus):
- Climate and weather data
- Personal diaries across social strata
- Newspaper articles and classifieds
- Economic indicators
- Crime and justice records
- Geological/tectonic data
- Cultural artifacts (songs, literature, art)
- Oppressed and suppressed voices from the eras, obtained and understood with the help of academic partners
Temporal Embedding Model Required:
The project needs to run embeddings using language as it was used at the time. That means, at a minimum, it must be able to run faithful embeddings that can encompass:
- Late antebellum English
- Reconstruction & Gilded Age vernacular
- Progressive era industrial speech
- Pre-radio Americana slang
- Early proto-modern "talkies"-era conversational tone (old war instruction reels)
This requires a great deal of care and a unique mixture of human experts to validate the training pairs; it's not a task that can be taken lightly. The voices from our past deserve that care, because successfully creating truly accurate embeddings means being able to bring those voices back, generatively, with stunning realism, in a league way ahead of what we expect from LLMs now in 2025.
I have a note about important next steps, near the end.
1-3 billion parameter investigative LLMs:
Trained on the same sets as the embedding model, the investigative model will be able to answer questions about specific context it is given, or questions surrounding context it was trained on, when it comes to exploring the data. The model will be essential to understand event variance from as many vantage points as possible.
The investigative LLM will not be trained on modern language; its training will
cut off on 01/01/1930. A second version of it which will also be trained on
modern conversational languages will serve as a good interpreter, but we must
take care to not taint the historical model with modern language.
Neither model should be made open source, because of the dangers to the most marginalized voices and fragments of consciousness in history; but source-available access for established academic curators would seem reasonable, along with a very fast patch lane if necessary.
Analysis plan:
Yes, there is one! While I have broken this down into phases, the sequence will often cycle between phases 2 and 1 at first, then 2-1-3, and ultimately 2-3-4. Put simply, time is just needed to fumble around a bit until all loaders "just work" and simulations are just a matter of queuing up a run.
- Calibration Phase: Start with causally independent variables (weather/climate) to establish baseline oscillation patterns. This somewhat resembles how one would calibrate a scope.
- Enrichment Phase: Layer in dependent variables (diaries, economics, social movements) while observing how they stand apart from and interact with each other. Slowly train in more and more artifacts while conducting thorough verification tests (outlined previously).
- Validation Phase: Test whether observed historical dynamics match predicted oscillation patterns: spikes and lulls, in both directions, that align correctly with times we believe stoked instability.
- Discovery Phase: Identify "void history": momentum vectors that left minimal direct historical evidence but can be inferred from their effects. The result is a picture that superimposes onto recorded history, but with plenty of data in times where history records no events.
- Narration Phase: Be able to explain all of this the way an oncologist explains a contrast MRI.
Papers (upcoming)
I'm mulling over writing these in order to help attract attention to the idea, and possibly enlist some help getting access to the training data needed. If the abstracts to these aren't obvious by now, I've done a terrible job on this post:
- Dynamic Social Systems Modeling: A Physics-Based Framework for Understanding Historical Change
- Detecting Dark History Like Dark Matter: Inferring Undocumented Social Movements Through Oscillation Analysis
- Directional Asymmetry in Social Friction: Why Equal and Opposite Forces Produce Unequal Outcomes
It's hard to write more without validating the theory, so basically, I'm kind of confined to papers that make people want to help me validate the theory :D If you have ideas or want to give one a shot, please let me know (just informally, an email is fine).
And that brings me to where I want to leave off with this:
Why hand it off?
Mostly, because of the potential applications if proven, and because I'm way too late in my career to see something of this size through to completion. It's not a huge undertaking, but it's a long one. It could take a few years to build the embedding model alone, given the scarcity of training material and need to digitize it manually.
But, if proven ... wow. Here's just a few ways it could be impactful:
- Identify hidden social movements and suppressed resistance
- Understand why certain periods were more volatile than others
- Predict "near-miss" historical outcomes that almost occurred
- Offer a methodologically rigorous (discovery-focused, not interpretation-driven) framework for understanding change over time
- Provide predictive mathematical rigor to social dynamics
- Look at cause and effect in terms of decades, centuries, and longer
It's also valuable if disproven. The worst case is that this goes back to being just a strong metaphor that almost holds up when describing social systems. If it fails to validate, we will know exactly why, and that's very useful for not getting stuck on it in the future.
It's never been more important to take a long, hard, dispassionate, unbiased look at how we got where we've arrived. This is a step in that direction. Nothing is truly unbiased; what I'm suggesting is we limit bias to strictly what was, by not training on modern corpus at all. This lets us learn about biases back then, without observation affecting it.
Important near-future steps to take
I write this as a Jewish person whose family was profoundly impacted by having to flee Europe in a hurry, after fleeing Spain centuries earlier. As someone who built a career partly around fostering trust and safety in the digital age, I can think of a bunch of questions right off the top of my head that I'd want answered before this moved forward:
- Who has the right to synthesize the speech of enslaved people or genocide survivors? What assurances do I have that the model I help make isn't used to mock concentration camp victims and survivors?
- How do, or can, we prevent this from dehumanizing suffering by reducing it to oscillations? An entire tragic event, or a crowning achievement of humanity, reduced to a few numbers? Callousness could set in fairly unnoticed.
- What safeguards prevent misuse to minimize atrocities or generate harmful content? Moderation layers must be composed of temporal embeddings as well, which makes them even more difficult to train. Given that, how should access to the generative model be structured? There's no easy way to protect it through safeguards, as the safeguards have to be written with so many different sets of period vectors.
- How do descendant communities participate in the governance of the models they help create, not just consult on them?
I don't have all the answers to these. I have a start on some of them, but I'm not an ethicist, I'm not a descendant of an enslaved person; I am not so many things that are needed to fill the gaps here. I'm happy to help even more, but this is definitely a group activity.
These are not at all concerns unique to this project, but due to its nature, the real potential for harm is greater than usual. This is even more reason that I worked hard to get what I've got here out of my head and into a blog post, where sunlight can hit it and others can (hopefully) see what I see in systems.
But it's way, way, way too big to hold in one old dude's head anymore. Especially one that's had as much radiation and chemo as mine.
Now, it's in yours; I hope you can do something useful with it. I hope I'll get to see it realized! While the math and mappings, as far as I can tell, really do work out for this, knowing is something else entirely.
Contact
Me : Tim Post (retired software engineer) <timthepost@protonmail.com>