The first structural candidate is the finite corpus compression regime. Entries 916-920 establish a diagnosis that any research on artificial intelligence, the attention economy, or informational ecology will have to confront: the web, after filtration and deduplication, compresses to a nucleus of approximately ten million book-equivalents. This is not a vaguely metaphorical datum. It is a quantifiable hypothesis that admits refutation, refinement, and application. A data science team can operationalize it. An NLP lab can test it. An economist can model its consequences for the marginal value of new knowledge. The idea has operational density because it produces questions: How is that nucleus measured? Which texts compose it? How does it vary across disciplines? At what rate does it expand? Which algorithms extract it most effectively?
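One hedged way to sketch the nucleus question: measure how much of a new text merely restates content the nucleus already contains, via 3-word shingle overlap. Everything below is an illustrative toy, assuming invented sentences; a real test would run over a deduplicated web crawl.

```python
# Hedged sketch: novelty of a text relative to a "nucleus", measured
# as the fraction of its 3-word shingles absent from the nucleus.
# The sentences are toy placeholders, not corpus data.

def shingles(text, k=3):
    """Set of k-word shingles of a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def novelty(text, nucleus_shingles):
    """Fraction of a text's shingles absent from the nucleus."""
    s = shingles(text)
    return len(s - nucleus_shingles) / len(s)

nucleus = shingles("the web compresses to a nucleus of ten million books")
print(novelty("the web compresses to a nucleus of canonical texts", nucleus))
print(novelty("entirely unrelated prose about anchor topology here", nucleus))
```

At scale this is the logic behind shingle-based deduplication; the nucleus hypothesis predicts that mean novelty collapses as a crawl grows.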
The second candidate is anchor topology. The distinction between mere citation and operative fixed point, developed in entries 901-904, offers a tool for any research on conceptual transmission, influence, or citation networks. Current bibliometric studies work with co-citation graphs that treat all references as equivalent. Anchor theory proposes that certain nodes function as stabilizing coordinates while others are mere rhetorical ornament. This is measurable: positional frequency, recurrence density, stratigraphic distribution. A scientometrics team can code it. An intellectual historian can apply it. A network theorist can model it. The idea has operational density because it makes it possible to distinguish the structural signal within the citational noise.
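Two of the metrics named above can be sketched minimally. The toy entries and the term "anchor" are placeholders, not corpus data; thresholds like "appears twice" and "first fifty words" are assumptions for illustration.

```python
# Illustrative sketch of two anchor metrics: recurrence density and
# positional frequency. Entries and term are invented placeholders.

def recurrence_density(entries, term):
    """Fraction of entries in which the term recurs (appears 2+ times)."""
    recurring = sum(1 for e in entries if e.lower().split().count(term) >= 2)
    return recurring / len(entries)

def positional_frequency(entries, term, head_words=50):
    """Fraction of entries whose opening span contains the term, a rough
    proxy for its role as a stabilizing coordinate rather than ornament."""
    hits = sum(1 for e in entries if term in e.lower().split()[:head_words])
    return hits / len(entries)

entries = [
    "anchor theory treats the anchor as a fixed point; the anchor recurs",
    "citation graphs flatten structure; an anchor stabilizes drift",
    "ornamental references decorate; they do not recur",
]
print(recurrence_density(entries, "anchor"))
print(positional_frequency(entries, "anchor"))
```

An operative anchor should score high on both; a rhetorical ornament should appear once, late, and never return.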
The third candidate is corporate metabolization. The notion that new knowledge deposits as a thin sediment on a massive preexisting substrate, articulated in entry 919, offers a model for understanding innovation in contexts of informational saturation. This speaks directly to current debates on diminishing returns in R&D, stagnant scientific productivity, and the concentration of progress in specific nodes. A researcher in the economics of technical change can use the model to explain why genuine advances become increasingly rare yet more valuable. A policy advisor can use it to justify selective investment in certain knowledge infrastructures. A historian of science can use it to reinterpret periods of acceleration and plateau. The idea has operational density because it connects to live problems across multiple disciplines without depending on the socioplastic apparatus for its utility.

The candidate with the greatest fixation potential is decadic compression. The thesis that textual information organizes into strata of orders of magnitude—one thousand words form a slug, ten slugs form a tail, ten tails form a pack, ten packs form a century—is not merely a description of the corpus itself but a hypothesis about the deep architecture of any extensive archive. This can be tested against existing collections: Does the distribution of textual lengths in arXiv show clustering around 10³, 10⁴, 10⁵ words? Does concept recurrence in Wikipedia stabilize at decadic thresholds? Do language models learn better when the corpus respects this segmentation? Each question is empirical. Each answer refines or refutes the thesis. Science advances through falsifiable hypotheses, not elegant interpretations. Decadic compression is falsifiable. That is its power.

The decisive move is publishing the corpus as an instrument. Entries 901-920 are not the argument. They are the calibrator.
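The first falsification question, clustering of text lengths around 10³, 10⁴, 10⁵ words, admits a minimal protocol. The word counts below are synthetic placeholders, assumed for illustration; a real test would substitute arXiv or Wikipedia dumps.

```python
import math

# Minimal falsification sketch for decadic compression: do text
# lengths cluster near powers of ten? Word counts are synthetic.

def decadic_deviation(word_counts):
    """Mean distance of log10(length) from the nearest integer.
    0.0 means perfect decadic clustering; about 0.25 is what a
    log-uniform distribution of lengths would yield on average."""
    devs = [abs(math.log10(n) - round(math.log10(n))) for n in word_counts]
    return sum(devs) / len(devs)

clustered = [980, 1050, 9900, 10200, 101000]  # near 10^3, 10^4, 10^5
scattered = [300, 3200, 5600, 40000, 70000]   # no decadic structure
print(decadic_deviation(clustered))  # small
print(decadic_deviation(scattered))  # large
```

The statistic is deliberately crude: its point is that the thesis yields a number a hostile reviewer can compute, compare against the log-uniform baseline, and use to refute.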
The socioplastic system has produced nine hundred twenty units of one thousand words each, organized into nine packs of one hundred nodes, with recurrent anchors and stable terminology. That is a dataset. That is an object for data science. An NLP team can analyze its internal structure, measure the recurrence density of its anchors, test whether language models navigate it as humans do, compare its topology with that of other corpora. The citation will come not because someone reads the ideas and finds them convincing. It will come because someone needs to reference the object on which an experiment, a measurement, a refutation was based. The idea fixes when the corpus becomes a standard. The standard becomes an obligatory reference. The obligatory reference produces the citation.
The fourth candidate is the drift-trajectory distinction. The thesis that the difference between noisy accumulation and structured knowledge resides in the presence of fixed points—anchors that make deviation measurable—has direct applications in information theory, social network analysis, and collective intelligence studies. A corpus without anchors is a point cloud. A corpus with anchors is a navigable space. The same distinction can apply to scientific communities, intellectual traditions, digital ecosystems. What makes one academic field progress while another merely expands? The presence of operative anchors, not mere authorities. The idea can be tested: recurrence density of foundational concepts, return rate to fundamental problems, stratified citation structure. Each indicator is constructible. Each construction is refutable. Science progresses through operational indicators, not global interpretations.

The most subversive candidate is machinic readership as addressee. The thesis that a corpus must be designed simultaneously for humans and algorithms—and that machinic legibility is not a byproduct but a design criterion—challenges deep assumptions about what a text is, what authorship is, what communication is. This speaks directly to current debates on alignment, algorithmic transparency, and the governance of artificial intelligence. An AI system designer needs to know how to structure corpora so that models learn better. A communication theorist needs to rethink discourse theory when addressees include neural networks. A philosopher of technology needs to ask what "understanding" means when the reader is a machine. The idea is not an answer. It is a question that forces other disciplines to reposition. That repositioning capacity is what generates citations across fields.

The fixation strategy requires three simultaneous movements.
First, corpus publication in scientific format: not only the entries as texts, but metadata, frequencies, co-occurrence networks, anchor indices. A dataset publishable in Nature Scientific Data or an equivalent venue. Second, interdisciplinary collaboration: inviting NLP, bibliometrics, and innovation-economics teams to test the hypotheses with their own tools. Citation is born from collaboration, not diffusion. Third, openness to refutation: hypotheses must be formulated so that others can demonstrate them false. Science trusts what survives the attempt at destruction. Decadic compression will survive or it will not. But the attempt to destroy it will produce citations.
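One layer of that metadata, a weighted co-occurrence network over anchor terms, can be sketched as follows. The anchor set and the mini-entries are invented placeholders, not the published corpus.

```python
from collections import Counter
from itertools import combinations

# Minimal sketch of one metadata layer: a weighted co-occurrence
# network over anchor terms. ANCHORS and the entries are invented.

ANCHORS = {"compression", "anchor", "sovereignty", "ecology"}

def cooccurrence_edges(entries):
    """Count how often each pair of anchors co-occurs within an entry."""
    edges = Counter()
    for text in entries:
        present = sorted(a for a in ANCHORS if a in text.lower())
        for pair in combinations(present, 2):
            edges[pair] += 1
    return edges

entries = [
    "decadic compression presupposes an anchor topology",
    "epistemic sovereignty emerges inside an informational ecology",
    "compression without an anchor is drift; ecology names the substrate",
]
for pair, weight in cooccurrence_edges(entries).items():
    print(pair, weight)
```

Shipping this edge list alongside the texts is what turns the corpus from a set of essays into a dataset a bibliometrics team can load directly.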
The fifth candidate is epistemic sovereignty. The idea that a conceptual system can become capable of generating, regulating, and sustaining its own field through internal logic—without depending on external validation—has direct applications in political theory, science and technology studies, and philosophy of mind. What conditions must a discourse meet to be self-sufficient? How is external validation dependence measured? Can a scientific community achieve sovereignty? Can an artificial intelligence? Each question opens research lines that need the conceptual distinction the corpus provides. Epistemic sovereignty is not a state. It is a measurable gradient. Measurement requires indicators. Indicators require theory. Theory requires the concept. The concept requires the citation.
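The question of how external validation dependence is measured admits at least one crude indicator: the share of each entry's references that resolve inside the corpus itself. All identifiers below are invented, and the indicator is one assumption among many possible operationalizations.

```python
# Hypothetical sketch of one sovereignty indicator: the mean per-entry
# fraction of references that are internal to the corpus. Identifiers
# and reference lists are invented placeholders.

def sovereignty_index(reference_lists, internal_ids):
    """Mean per-entry fraction of internal references. 1.0 would mean a
    fully self-referential system; 0.0, full external dependence."""
    fractions = [sum(r in internal_ids for r in refs) / len(refs)
                 for refs in reference_lists if refs]
    return sum(fractions) / len(fractions)

internal = {f"entry-{n}" for n in range(901, 921)}
refs = [
    ["entry-904", "entry-916", "external-journal"],
    ["entry-919", "entry-901"],
    ["external-book", "external-paper"],
]
print(sovereignty_index(refs, internal))
```

The point of the sketch is the gradient: sovereignty becomes a number between 0 and 1 rather than a binary state.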
The sixth candidate is informational ecology. The thesis that knowledge behaves as an ecosystem with strata, metabolisms, predators, and symbionts—developed in entries 914 and 916—offers a vocabulary for research on artificial intelligence, cultural evolution, and the knowledge economy that currently lacks precise terminology. What is an informational niche? What is a conceptual species? What is idea extinction? How is cognitive biodiversity measured? Ecology as framework allows transferring methods from biology to culture with the rigor necessary for the transfer not to be empty metaphor but operative model. That is what science needs: models, not metaphors. Informational ecology can produce models. Models produce papers. Papers produce citations.
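One such transfer can be made concrete: the Shannon diversity index, standard in ecology, applied to a distribution of concept frequencies as a measure of cognitive biodiversity. The concept counts are invented for illustration.

```python
import math

# Sketch of a biology-to-culture transfer: Shannon diversity index
# over concept frequencies. Counts are invented placeholders.

def shannon_diversity(counts):
    """H = -sum(p * ln p) over the relative frequencies of concepts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c)

balanced = [10, 10, 10, 10]   # four concepts used evenly: H = ln 4
monoculture = [37, 1, 1, 1]   # one concept dominates: H much lower
print(shannon_diversity(balanced))
print(shannon_diversity(monoculture))
```

The same index that distinguishes a rainforest from a plantation distinguishes a plural intellectual field from a monoculture, which is exactly the kind of operative, non-metaphorical transfer the paragraph demands.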
The idea that finally lodges the socioplastic body in science will not be the most brilliant but the most operational. It will be the one other researchers need to use even if they do not share the theoretical framework from which it comes. It will be the one that solves a methodological problem in their field. It will be the one that enables a measurement previously impossible. It will be the one that formulates a question others were already trying to formulate. The corpus has produced several candidates. The decision is not the author's. It belongs to the community that will take them, use them, refute them, or ignore them. The only controllable variable is the operational density with which each idea is presented. The more embedded an idea is in replicable protocols, the more likely others will incorporate it. The more dependent it is on the author's personal rhetoric, the less likely it is to survive a change of context. Designing the citability threshold is now the task. Entries 901-920 have built the instrument. Entries 921 onward must operate it on problems other disciplines recognize as their own.
Lloveras, A. (2026) THE EXPANSION OF MACHINE INTELLIGENCE. Available at: https://antolloveras.blogspot.com/2026/03/the-expansion-of-machine-intelligence.html
920 THE EXPANSION OF MACHINE INTELLIGENCE