Duri Chitayat
I Think I'm Thinking

May 2025

When technology makes choice, agency, and democracy an illusion

“The most common form of human stupidity is forgetting what we are trying to do.” — Friedrich Nietzsche

Free Will

Putting aside the argument that free will is fundamentally an illusion: what would it mean if technology mediated every choice we make? What becomes of individual agency, or of collective governance, when technology has the power to monitor, predict, and even control our thoughts and actions?

I find the debate over free will immensely attractive. What does it mean if we're not in control of our thoughts and decisions? Does “I” exist without agency? Unlike any other species I'm aware of, we derive value (at least in part) from our sense of agency. “I did that”...

I bought a house. I married the girl. I did a good job.

But what if “I” didn't have anything to do with those decisions? What if my energy, my behavior, my struggle were directed by something outside the space inside my head?

Of course, on a certain level we're all aware that we don't control our decisions entirely, but we believe we have agency: we have the ability to exercise judgment, and we do in fact exercise it, toward more than one possible outcome. What if that were to change?

Gravity

Gravity is a benevolent master: it forms the fabric of our world, and it's why we walk, talk, and sleep the way we do. It's there, but most of the time we don't think about it.

AI is digital gravity.

Whether we are tapping, swiping, and scrolling, or simply out having a burger, we are gently being pulled by algorithms we do not see. In UX, design choices measured in milliseconds can have large effects: move a button 30 pixels and click-through rises 6 percent; auto-play by default and dwell time doubles. Individually these nudges feel trivial. Collectively they create the choice architecture that Richard Thaler and Cass Sunstein warned about in their 2008 work on “libertarian paternalism.”¹

Over the last decade, machine-learning algorithms have turned nudges into micro-targeted, self-optimizing persuasion loops; research on Facebook data found that roughly 300 Likes let a model judge a user's personality more accurately than their spouse could.² Attention has become an extractive industry: Shoshana Zuboff's “surveillance capitalism.”³
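
As a rough illustration of how that kind of trait prediction works, here is a minimal sketch: a logistic regression over a binary user-by-item “likes” matrix. Everything below is simulated; the cited research used real Facebook Likes at far larger scale.

    # Minimal sketch: predicting a (binarized) personality trait from likes.
    # All data is simulated; real studies used actual Facebook Likes at scale.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_users, n_items = 5000, 300          # ~300 likes figured in the cited work
    likes = rng.integers(0, 2, size=(n_users, n_items))

    # Pretend a latent trait correlates with a hidden subset of items.
    signal = likes[:, rng.choice(n_items, 30, replace=False)].sum(axis=1)
    trait = (signal + rng.normal(0, 2, n_users) > signal.mean()).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(likes, trait, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")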

Persuasion

Large language models don’t merely predict text; they model the latent space of human preference. Add real-time A/B feedback and we get systems that evolve messages faster than cognition can flag them as manipulation. Stanford researchers (Feb 2024) found GPT-4-generated political messages beat human copy by 17 percent in persuasion efficacy.⁴
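
To make that loop concrete, here is a minimal sketch of the self-optimizing pattern: a Thompson-sampling bandit that keeps shifting traffic toward whichever message variant persuades best. The variants and conversion rates are invented; a real system would plug LLM-generated copy and live click feedback into the same loop.

    # Minimal sketch: Thompson sampling over three message variants.
    # Hidden "persuasion rates" are invented; real feedback would be live clicks.
    import numpy as np

    rng = np.random.default_rng(1)
    true_rate = np.array([0.05, 0.08, 0.11])   # unknown to the system
    wins, losses = np.ones(3), np.ones(3)      # Beta(1, 1) priors per variant

    for _ in range(10_000):
        arm = int(np.argmax(rng.beta(wins, losses)))  # sample, pick best guess
        converted = rng.random() < true_rate[arm]
        wins[arm] += converted
        losses[arm] += not converted

    share = (wins + losses - 2) / 10_000
    print("traffic share per variant:", np.round(share, 2))
    # Nearly all impressions drift to the strongest variant; no human ever
    # decides which message "won".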

Layer in mixed-reality lenses and biosensors:

  • Contextual priming: Glasses adjust ambient color to lift mood before an ad.
  • Affective loops: Wearables detect micro-stress and throttle feed intensity to keep you “engaged but not agitated” (a sketch follows this list).
  • Synthetic social proof: LLMs clone a friend’s voice to deliver brand endorsements.
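
A minimal sketch of the affective loop mentioned above: a proportional controller that throttles feed intensity to hold a simulated stress signal near a comfort setpoint. Every name and number here is hypothetical.

    # Minimal sketch: throttle feed intensity to keep a simulated stress
    # signal near an "engaged but not agitated" setpoint. All values invented.
    import random

    TARGET_STRESS = 0.5   # hypothetical comfort setpoint
    GAIN = 0.4            # proportional controller gain

    intensity, stress = 1.0, 0.3   # feed stimulation level, biosensor reading
    for tick in range(20):
        # Fake biosensor: stress drifts toward the current feed intensity.
        stress += 0.3 * (intensity - stress) + random.uniform(-0.05, 0.05)
        # Controller: back off when agitated, push harder when calm.
        intensity -= GAIN * (stress - TARGET_STRESS)
        intensity = min(1.0, max(0.0, intensity))
        print(f"tick {tick:2d}  stress={stress:.2f}  intensity={intensity:.2f}")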

The line between suggestion and script starts to blur.

Free Will on a Sliding Scale

Neuroscience frames willpower as a limited metabolic resource.⁵ Persuasive tech exploits that constraint, shifting decisions from System 2 (deliberative) to System 1 (automatic). It’s not that free will evaporates; it simply gets budgeted.
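
Taken literally, “budgeted” free will looks like a depleting resource: each deliberate (System 2) choice spends from the budget, and once it runs out, decisions fall through to the cheap System 1 default, which is exactly where the nudge sits. A toy model, with every number invented:

    # Toy model: deliberation as a depleting budget. Once it runs out,
    # choices fall through to the System 1 default. All numbers invented.
    budget = 10.0
    DELIBERATION_COST = 1.5

    decisions = ["breakfast", "commute route", "morning feed", "lunch",
                 "afternoon break", "dinner", "evening scroll", "bedtime"]

    for choice in decisions:
        if budget >= DELIBERATION_COST:
            budget -= DELIBERATION_COST
            mode = "System 2 (deliberate)"
        else:
            mode = "System 1 (default; the nudge wins)"
        print(f"{choice:16s} -> {mode}  (budget left: {budget:.1f})")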

Daily Micro-Decisions            2025      2035 (projection)
Unmediated choice                ≈45 %     <15 %
Nudged by design                 ≈40 %     ≈55 %
Algorithmically orchestrated     ≈15 %     ≈30 %

(Synthetic estimate derived from Andersen et al., “Digital Choice Architectures,” 2023.)⁶


Democracy under Algorithmic Sovereignty

Deliberative democracy assumes citizens form preferences independently and aggregate them through voting and free speech. When preference formation itself is outsourced to opaque platforms, the locus of sovereignty drifts:

  • Information asymmetry: Platforms observe us in petabytes; we glimpse them through TOS blurbs.
  • Scale-driven consolidation: Persuasion gets cheaper the more data you own, hence media mergers.
  • Attention balkanization: Reinforcement-learning feeds intensify homophily; Sunstein’s 2022 study links algorithmic curation to a 12 percent rise in ideological extremity over six months.⁷

Public opinion risks becoming a managed variable, elections mere A/B tests.

Counter-Weights We Can Still Build

  1. Design Friction on Purpose: Surfacing “Are you sure?” moments restores deliberation.
  2. Transparency APIs & Audits: External researchers need real-time access to model behavior logs.
  3. Data Cooperatives: Let users pool data and bargain collectively (Glen Weyl’s “Data Dignity”).
  4. Pluralistic Feeds: Inject diversity scores into ranking objectives to strengthen epistemic resilience (a sketch follows this list).
  5. Agency Tooling: AI-powered “fourth-party firewalls” that spot and flag persuasive patterns.
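
For item 4, one concrete way to inject diversity into a ranking objective is a greedy, MMR-style re-ranker that trades engagement against similarity to what is already in the feed. The scores, topics, and similarity function below are all invented:

    # Sketch of item 4: greedy re-ranking that blends engagement with a
    # diversity bonus (MMR-style heuristic). Scores and topics are invented.
    LAMBDA = 0.7   # weight on engagement vs. diversity

    candidates = [
        {"id": "a", "topic": "politics-left",  "engagement": 0.90},
        {"id": "b", "topic": "politics-left",  "engagement": 0.85},
        {"id": "c", "topic": "science",        "engagement": 0.60},
        {"id": "d", "topic": "politics-right", "engagement": 0.55},
        {"id": "e", "topic": "local-news",     "engagement": 0.50},
    ]

    def similarity(item, chosen):
        # Crude stand-in: 1 if the topic is already in the feed, else 0.
        return 1.0 if any(c["topic"] == item["topic"] for c in chosen) else 0.0

    feed, pool = [], list(candidates)
    while pool:
        best = max(pool, key=lambda it: LAMBDA * it["engagement"]
                                        - (1 - LAMBDA) * similarity(it, feed))
        feed.append(best)
        pool.remove(best)

    print([item["id"] for item in feed])  # pure engagement order: a, b, c, d, e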

Leaving the Cursor Blinking

Foucault noted that power thrives when it turns invisible, when the watchtower dissolves into habit. Our interfaces are halfway there already. Yet every system we build is, at root, a human proposal—version-controlled, debuggable, forkable. The next commit is always open. The bigger question is who seizes the keyboard.

References

  1. Thaler, R.H., & Sunstein, C.R. Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press, 2008.
  2. Kosinski, M. et al. “Private traits and attributes are predictable from digital records of human behavior.” PNAS, 2013.
  3. Zuboff, S. The Age of Surveillance Capitalism. PublicAffairs, 2019.
  4. Stanford HAI. “Large Language Models and Political Persuasion.” Working Paper, Feb 2024.
  5. Baumeister, R.F. “Ego Depletion and Self-Control.” Psychological Science, 2018.
  6. Andersen, J. et al. “Digital choice architectures.” Nature Human Behaviour, 2023.
  7. Sunstein, C.R. “Algorithmic Extremism.” Harvard Law Review Forum, 2022.