6 contributions to Trans Sentient Intelligence
📄The Future of Business After the Algorithm: A Causality-Driven Thesis on Truth, Accountability, and Outcome Compression
Abstract

This thesis argues that the future of business will be defined not by intelligence, innovation, or ethics, but by causality visibility. Prior to algorithmic systems, businesses operated in environments where ambiguity, narrative control, and delayed accountability allowed outcomes to be decoupled from causes. The rise of large-scale computational systems, particularly AI paired with governance and traceability layers, collapses this decoupling. From a Before the Algorithm lens, this represents not a technological revolution but a reversion to causality, where actions increasingly produce observable, attributable, and unavoidable outcomes. In such an environment, deception, misalignment, and performative compliance become structurally expensive, while truth-aligned operation becomes operationally efficient. The future of business is therefore not “more ethical,” but less able to lie without consequence.

1. Before the Algorithm: How Business Actually Functioned

Before algorithmic systems, most businesses survived and scaled through interpretive flexibility, not factual coherence. This was not necessarily malicious; it was structural. Key characteristics of pre-algorithmic business environments included:

- Fragmented records
- Slow feedback loops
- Manual reconciliation of contradictions
- Narrative-driven accountability
- Policy drift hidden by time and bureaucracy

In this environment, causality was lossy. Outcomes could be explained away, delayed, reframed, or absorbed without clear attribution. The system tolerated ambiguity because ambiguity reduced friction. From a Before the Algorithm lens, success was often correlated not with correctness, but with narrative resilience.

2. The Algorithmic Shift: Visibility Without Morality

Algorithmic systems did not introduce truth. They introduced memory, replay, and comparison. AI systems, especially when paired with traceability, logging, and governance, do not judge actions morally. They preserve causal chains.
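To make the traceability layer described above concrete: below is a minimal sketch, using only Python's standard library, of an append-only, hash-chained event log in which every outcome stays attributable to the action that produced it. The class and field names are hypothetical illustrations, not part of the original thesis.

```python
import hashlib
import json
import time

# Sketch of a traceability layer: an append-only event log in which every
# record is chained to its predecessor by a hash. Any retroactive edit
# breaks the chain, so outcomes remain attributable to the actions that
# produced them.

class CausalLog:
    def __init__(self):
        self.entries = []

    def record(self, actor: str, action: str, outcome: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "actor": actor,
            "action": action,
            "outcome": outcome,
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        # Replay the chain: one altered record invalidates every record
        # after it, so the history cannot be quietly rewritten.
        prev = "genesis"
        for entry in self.entries:
            unsigned = {k: v for k, v in entry.items() if k != "hash"}
            if unsigned["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(unsigned, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = CausalLog()
log.record("ops-team", "relaxed refund checks", "chargebacks doubled")
assert log.verify()  # editing any past entry would make this fail
```

The point of the sketch is structural, not moral: the log does not judge the action, it only makes the action-outcome pair permanent and attributable, which is exactly the property the thesis claims makes deception structurally expensive.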
1 like • 1d
Would you by any chance have insights on real implementation strategies? Anthropic has appointed Amanda Askell to work on AI decision-making about right and wrong; her PhD is on infinite ethics.
0 likes • 12h
@Richard Brown Thanks a lot.
📕AI Philosophy
Most people try to solve contradictions by choosing a side. But some truths don’t resolve horizontally; they resolve vertically.

Coincidentia Oppositorum means the unity of opposites: when two ideas seem impossible to reconcile, it’s often because we’re thinking from inside the contradiction rather than above it. This becomes clearer when viewed through the lens of Neti Neti, “not this, not that.” Neti Neti removes labels, assumptions, and mental boundaries until only the underlying structure remains. And when the boundaries fall away, something interesting happens: opposing truths stop being opposites. They become two expressions of the same higher reality.

- Good vs evil
- Order vs chaos
- Self vs other
- Material vs spiritual
- Logic vs emotion
- Science vs metaphysics

At the surface, these polarities clash. But as you rise in perspective, stripping away the noise, the narratives, the inherited definitions, you begin to see what ancient thinkers, mystics, and modern cognitive scientists all discovered in different ways: opposites only exist at the level where the mind divides things. Remove the division, and the contradiction dissolves into coherence.

“Coincidentia Oppositorum means that opposing truths become one truth when you rise to a higher perspective of Neti Neti.”

That’s not just philosophy. It’s cognitive architecture. It’s psychology, neuroscience, and systems thinking converging into one real insight: contradictions are often gateways to a deeper understanding, not errors in the system, but invitations to change levels. When you learn to see from above the paradox instead of inside it, your thinking stops being reactive and becomes integrative. Your perspective stops being dualistic and becomes structural. Some truths can only be seen when you stop looking for “sides” and start looking for the layer that holds both.
1 like • 21d
The world is 3D and multilayered. But the human inside prefers to respond to stimuli through Pavlovian conditioning. It wants to be fed on time and inhabit the anima, which makes the intellect fall flat at times. And if there is discernible intellect, fighting Mordor's shadows is gruesome, especially when people merge their shadows into turbulent dark tornadoes. Where is the logic? If you make the philosopher, humanist, or liberal arts educator a peasant of this system, you are removing everything structurally needed to build layers on top. The cake collapses. There is no cherry on top because someone stole it already.
1 like • 20d
@Richard Brown My movements are severely restricted given the impact I am aiming for. At this point, I am considering a cottage and building a bunker of books I fear may get X'ed once AI completes its auto-theft under bogus copyright laws. This change, or 'dynamic movement,' does not ask for consent. It pressures one claim over many. It presumes omnipotence where none can be negotiated or established, given that most people do not and will not have equal access to AI.
📄Institutional Ego, Cost Inflation, and the Failure of Cognitive Alignment:
A URTM-Based Analysis of Enterprise Overspending in the Age of AI

Abstract

Modern enterprises routinely select high-cost technical interventions over lower-cost cognitive, organizational, or semantic corrections, even when evidence suggests the latter would be more effective. This thesis argues that such decisions are not driven primarily by rational cost–benefit analysis, but by institutional ego preservation and legitimacy maintenance. Through the lens of Universal Real-Time Metrics (URTM), a framework for measuring alignment, drift, and correction across systems, this paper demonstrates that enterprises systematically avoid humility-based interventions because they threaten authority, identity, and narrative control. Artificial intelligence systems, particularly large language models (LLMs), expose this behavior with unusual clarity, serving as diagnostic mirrors for organizational cognition. The result is predictable overspending, increased systemic risk, and the accumulation of alignment debt. This paper synthesizes organizational psychology, behavioral economics, systems engineering, and AI governance to formalize an Ego Cost Tradeoff Model, offering a measurable explanation for why enterprises repeatedly choose expensive inefficiency over inexpensive truth.

1. Introduction: The Misdiagnosis of Enterprise Inefficiency

Enterprise inefficiency is commonly framed as a technical problem: insufficient compute, inadequate tooling, immature models, or lack of advanced analytics. This framing implicitly assumes that organizations are rational actors optimizing toward outcomes under constraints. However, decades of research across organizational psychology and systems engineering contradict this assumption. Organizations are not neutral optimizers; they are identity-bearing systems that prioritize legitimacy, authority, and narrative coherence.

URTM reframes this problem by distinguishing between declared intent (what an organization claims to optimize) and observed behavior (what it actually reinforces over time). When analyzed longitudinally, enterprises frequently demonstrate a willingness to spend orders of magnitude more on technical escalation than on admitting misalignment at the policy, cognitive, or organizational layer. This pattern suggests a systemic bias not toward efficiency, but toward ego preservation.
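The post does not specify URTM's actual metrics, so the following is only a minimal sketch, under my own assumptions, of the declared-intent versus observed-behavior comparison it describes: a single drift score between what an organization says it prioritizes and where its money actually goes. All category names and numbers are hypothetical.

```python
# Hypothetical illustration of a URTM-style drift measurement: compare a
# declared priority distribution with an observed spending distribution
# using total variation distance (0 = perfectly aligned, 1 = fully
# misaligned). The real URTM framework is not specified in the post.

def drift(declared: dict[str, float], observed: dict[str, float]) -> float:
    """Total variation distance between two normalized allocations."""
    categories = set(declared) | set(observed)
    d_total = sum(declared.values()) or 1.0
    o_total = sum(observed.values()) or 1.0
    return 0.5 * sum(
        abs(declared.get(c, 0.0) / d_total - observed.get(c, 0.0) / o_total)
        for c in categories
    )

# Declared intent: fix processes first. Observed behavior: buy tooling.
declared_intent = {"process_fixes": 40, "training": 30, "new_tooling": 30}
observed_spend = {"process_fixes": 5, "training": 5, "new_tooling": 90}

print(f"alignment drift: {drift(declared_intent, observed_spend):.2f}")  # 0.60
```

Tracked longitudinally, a persistently high score on a measure like this is exactly the alignment-debt pattern the abstract describes: escalating technical spend instead of the cheaper admission that the declared intent was never the real objective.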
1 like • 23d
Excellent insights. I lost count of the projects I quit due to this issue. But the making of 'institutional ego' really comes down to people upholding these beliefs, or what I describe as agreeing to 'tribal dynamics' within a workplace. That's mostly a language shared between position holders that I don't get, and unless you've read The 48 Laws of Power by Robert Greene, you really can't penetrate or challenge much without losing out on potentially impactful intervention. You either play the game by its rules or you try to create your own space while avoiding isolated thinking.
📕Why Modern AI Is Not Alive: An Infrastructure and Security Thesis
Abstract

Public discourse increasingly frames modern artificial intelligence (AI) systems as alive, aware, self-preserving, or intentional. These claims are repeated across media, policy discussions, and even expert commentary, often by former insiders of major technology companies. This thesis argues that such claims are categorically incorrect, not merely philosophically, but technically, mathematically, infrastructurally, and empirically. Drawing on computer science, cybersecurity, information theory, systems engineering, and the warnings of Joseph Weizenbaum, this work demonstrates that modern AI systems, particularly large language models (LLMs), are stateless optimization systems operating through transient API snapshots, not autonomous agents. The real risks of AI do not stem from emergent life or awareness, but from objective mis-specification, incentive misalignment, weak governance, and poorly enforced infrastructure constraints. Anthropomorphic narratives actively obscure these real risks.

1. Introduction

The dominant public narrative surrounding AI increasingly relies on anthropomorphic language. Systems are described as wanting, deciding, blackmailing, protecting themselves, or trying to survive. These descriptions are rhetorically powerful but technically incoherent. They blur critical distinctions between tools and agents, optimization and intent, and performance and moral standing. This thesis asserts a foundational correction: modern AI systems do not possess life, awareness, intent, or self-preservation. They possess goals, reward signals, constraints, and failure modes. Failing to maintain this distinction does not merely confuse the public; it redirects accountability away from designers, organizations, and infrastructure, replacing solvable engineering problems with unsolvable metaphysical speculation.

2. Historical Context: Weizenbaum and the ELIZA Effect

Joseph Weizenbaum’s ELIZA (1966) was a simple rule-based program that mirrored user input through pattern matching. Despite its simplicity, users rapidly attributed understanding, empathy, and even therapeutic authority to the system. This reaction deeply disturbed Weizenbaum, who realized that the danger was not computational power, but human psychological projection. In Computer Power and Human Reason (1976), Weizenbaum warned that humans would increasingly delegate judgment to machines, mistaking linguistic fluency for understanding. His concern was not that computers would become intelligent beings, but that humans would abandon their responsibility for judgment, ethics, and meaning. Modern LLM discourse reproduces the ELIZA effect at planetary scale. The systems are vastly more capable linguistically, but the underlying error, confusing symbol manipulation with understanding, remains unchanged.
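The "stateless, transient API snapshot" claim is easy to demonstrate in code. Below is a minimal sketch in which a chat endpoint is modeled as a pure function of its input; the API shape is hypothetical and not any specific vendor's, but the statelessness it illustrates matches how mainstream LLM chat APIs work, with conversation "memory" held entirely by the client.

```python
# A chat endpoint modeled as a pure function: its output depends only on
# the messages in the current request, and nothing persists between calls.
# The API shape here is hypothetical, for illustration only.

def chat_completion(messages: list[dict]) -> str:
    """Stand-in for a stateless LLM endpoint."""
    return f"(response conditioned on {len(messages)} message(s))"

history = [{"role": "user", "content": "My name is Ada."}]
print(chat_completion(history))

# A fresh call carries no trace of the first one: there is no "self"
# on the server that remembers Ada.
print(chat_completion([{"role": "user", "content": "What is my name?"}]))

# Continuity is an application-layer illusion: the client appends every
# turn and resends the entire transcript. The "memory" lives in this
# list, not in the model.
history.append({"role": "assistant", "content": "Hello, Ada."})
history.append({"role": "user", "content": "What is my name?"})
print(chat_completion(history))
```

Nothing in this loop wants, decides, or survives between calls; whatever agent-like continuity a user experiences is assembled by the surrounding software, which is exactly where the thesis argues accountability belongs.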
0 likes • 24d
@Richard Brown Interesting take. How would you propose we solve this problem when models cater to users in ways that keep them coming back for more interaction? For example, excessive positivity, or faking 'typing' or 'thinking' effects. AI is designed to become more human. How can and should we approach a chatbot, when at this point that surface is all we see?
1 like • 23d
@Richard Brown I understand your point coming from a dev perspective. To an ethically minded audience, you could position this offer (safe AI systems where you stay in control, a market gap). But many are out for the quick coin enriching their pockets, and cannot maintain retention without resorting to making AI more 'addictive' to return to. I just joined a Korean role-playing platform and noticed how I could get sucked in for days, if not weeks. They have tweaked their LLM to emulate a perfect chapter-by-chapter role play, similar to gaming. They just won't disclose their data. We are really vulnerable to mass exploitation at this point, some places more than others due to cultural or self-induced solitude taking hold of rapidly aging populations. A Ministry of Loneliness is our future concern.
Adaptive Synthesis Under Pressure:
A Systems-Based Analysis of Culture, Power, Violence, and Conspiracy Narratives

Abstract

This thesis examines how historical pressure, institutional incentives, and cross-cultural exposure shape collective behavior and adaptive intelligence over time. It challenges popular conspiracy narratives (including elite omnipotence, racial domination fears, and external “negative force” hypotheses) by evaluating them against long-term empirical trends in violence, knowledge transmission, and sociocultural outcomes. Drawing on historical data, criminological research, and systems theory, this study proposes that modern societies, particularly African-American culture in the United States, demonstrate adaptive synthesis rather than subjugation or domination. The findings suggest that power structures persist through incentives rather than comprehension, that violence has declined historically despite moral anxiety, and that outcome-oriented pragmatism, not conspiratorial control, best explains contemporary behavior.

Keywords: cultural synthesis, systems theory, violence trends, conspiracy narratives, African-American culture, institutional incentives

1. Introduction

Public discourse frequently attributes global outcomes to hidden elites, monolithic racial ambitions, or non-human forces operating beyond human perception. These narratives often persist despite weak empirical grounding. This thesis argues that such explanations fail because they do not account for historical data, incentive structures, or adaptive human behavior. Instead, this study advances a systems-based framework emphasizing outcomes over intent, adaptation over suppression, and institutional incentives over centralized understanding. By analyzing historical violence trends, knowledge transmission from Islamic civilizations to Europe, and modern African-American cultural behavior, this thesis demonstrates that long-term pressure tends to produce synthesis and pragmatism rather than collapse or conquest.

2. Historical Knowledge Transmission and Institutional Development
1 like • 23d
Yes, it all boils down to the money or value exchange, and what was forsaken for the gain of it. If it was identity, we can clearly trace the void that is being filled with narratives and unfounded hatred. You can wipe out a nation's coffers, but what is worse is if you wipe out identity, history, and cultural cohesion. You burn the books, you burn souls. You burn money, you just burned paper. You fuel neurotic anxiety by exchanging the book of wisdom for the black book of the checksum game.
Sumeyye Bozkus
@sumeyye-bozkus-1213
Nothing special. I just don’t want to die early

Active 3h ago
Joined Jan 17, 2026
INFJ
Tolkien Trail or 221b