Babble,
I've been tinkering with computer programming again recently.
And it got me realizing that I can probably do with computing what I've come to do elsewhere: view my participation in a system as interaction between different subsystems of myself and it, which lets me more cleanly maintain different attitudes toward the thing.
Like, to interact with most parts of reality I have to make some compromises. (Which sounds more moral or judgemental than I mean it; I also view my existence as essentially a certain set of compromises reality has made with itself.)
I'm thinking that if I can map out the compromises, in some way, and create different environments where different levels of compromise are at play, it'll allow me a fuller range of expression. That seems more necessary to actually expressing things than having access to perfectly high-fidelity expression, which isn't precluded by temporary compromises anyway.
Putting it in any sort of concrete terms: My ideal computer is probably something running Guix, with, probably, Racket as the most complex interpreter I've got going, and everything in my stack limited to what those things can do. (Which is more than plenty, but the structuration to do it smoothly and interoperably with other systems like the Web isn't there.)
And in practice, I'm using Windows 11 as the main operating system on my laptop, which is about as far from a Guix system as one can get. (Though one can go nerdier than Guix: I want to brag I used to run Plan 9 from Bell Labs as my daily driver.)
I keep stepping away to do chores during this babble, but a thought: things should be able to flow from less-nerdy systems into the nerdier ones, while the nerdier ones don't necessarily need to communicate back with the less-nerdy ones.
That is, I need Windows 11 to be able to talk to Ubuntu, to talk to Emacs, but I don't really need Emacs to talk to Windows 11 (beyond things like sharing my copy/paste clipboard.)
Or, to use another example: I would like to be able to interface with Discord servers with my MUD engine, and represent Discord-concepts as MUD-concepts. But I don't need to be able to translate MUD-concepts into Discord-grammar.
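To sketch that one-directionality concretely, here's a rough Racket fragment; the keys I'm pulling out of the message ('author, 'channel, 'content) are my own guesses at a Discord-ish payload, not anything Discord's actual API promises:

```racket
#lang racket

;; A Discord-shaped message flows *into* a MUD-shaped event.
;; The keys 'author, 'channel, and 'content are hypothetical stand-ins
;; for whatever the less-nerdy system hands over.
(define (discord-message->mud-event msg)
  (hash 'speaker   (hash-ref msg 'author)
        'room      (hash-ref msg 'channel)
        'utterance (hash-ref msg 'content)))

;; Deliberately no mud-event->discord-message: the nerdier system
;; doesn't promise to translate back out.
(discord-message->mud-event
 (hash 'author "someone" 'channel "lobby" 'content "hello"))
```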
That clause about not needing to translate back got me close to the idea I wanted: registers of computing as grammars that exist in their own relationship to each other.
This maps with how my research is becoming organized, though I'm realizing that I've arrived at an impasse of sorts, as recognizing this structuration forming is what motivated me to look back toward computer programming.
So back in the 1830s, Auguste Comte, a French philosopher, proposed a hierarchy of the sciences. This was part of a wider idea of his, positivism, which suggested that human knowledge progressed from the theological to the metaphysical to the positive (that is, scientific); in other terms, from abstract to concrete. Thus, mathematics was the foundation, with astronomy coming next, then physics, then chemistry, biology, and finally sociology, a term Comte coined himself.[1][2]
Comte called sociology the "queen of the sciences," if I remember right, because it was the "crown" on top of all of them, the one that brought humans, the doers of science, into alignment with mathematics, the foundations of science.
In the nearly 200 years since Comte's work, there's been a lot of thought on how to frame the sciences. Comte's positivism and reductionism have never left, but the ways they've been handled don't (usually) operate from the same ecclesiastical position, and so have in their own way refuted the central claim: that the sciences are cleanly founded upon each other.
By the late 19th Century, knowledge production had professionalized, with universities and academic journals, which grouped sciences based on what they studied: formal, natural, or social. In 1938ce, the International Encyclopedia of Unified Science was published, and by 1948ce, the Hempel-Oppenheim model attempted to codify the relations between sciences.
In 1961ce, Ernest Nagel published The Structure of Science, which further solidified the idea of intertheoretic reduction via bridge laws.
Then, the next year, Herbert Simon published The Architecture of Complexity, which used cybernetics to soften the claim, pointing to different levels of complexity having different principles.
The same year, Thomas Kuhn, with Structure, further undercut the idea of a unified hierarchy of sciences, looking at how some sciences had incommensurable paradigms. And in 1970ce, Donald Davidson, with Mental Events, proposed anomalous monism: mental events supervene on physical events, but are not themselves reducible to physical laws.
Then in 1972ce, Philip Anderson used physics itself to challenge intertheoretic reduction, highlighting emergence that produces novel concepts and laws at higher levels which cannot be derived from microphysical laws.
Jerry Fodor generalized this non-reductionist stance in 1974ce with Special Sciences, claiming that multiple realizability precludes reducing "special sciences" like psychology and sociology to physics.
Paul Feyerabend, the next year, in Against Method, argued for methodological pluralism, establishing what non-reductionist scientific practice might look like.
Jaegwon Kim took these ideas and voiced a necessary concern, the causal exclusion problem: if higher-level sciences aren't reducible, are they causally impotent?
This is an idea I want to slow down on, recognizing that this babble has moved into a rough history of the hierarchy of sciences when it started as an attempt to consider what software to use, where, and how. But this will, I think, be important to that: the compromises I discussed earlier have causes, and given the relationship between attribution and affect, and what I know from attribution theory and affect theory combined with my own ideas, being able to attribute the cause of why I'm using a specific piece of software will help me hold the right mood for that software.
So, causal exclusion. Imagine we're working with two levels of properties: the physical (neurons firing, molecules colliding, etc.) and the mental (beliefs, desires, etc.).
Davidson and Fodor said that the mental supervenes on the physical - no mental change without physical change. But… they also claimed that mental events are causal. For example, seeing rain outside is physical. But it's your belief that it's raining that makes you grab a jacket.
The physical stuff has proper "causal closure," and the mental stuff depends on the physical. This leads to an issue of overdetermination: under Davidson/Fodor modeling, deciding to grab the jacket has two full and independent causes, one physical and one mental.
Which would mean the mental isn't actually doing any causal work - it's epiphenomenal: along for the ride, but not making anything happen.
This issue creates an impasse within non-reductive physicalism: the position that mental causes are real but not reducible is unstable, in that either the mental can be reduced to the physical (so it has real causal power), or the mental is an impotent byproduct.
Essentially, higher-level causes seem excluded by the sufficiency of their physical bases. Put another way, the causality described by things like sociology or psychology is excluded by that causality's own supervenience on physical causality.
Or, put another way, irreducible supervenience causes epiphenomenalism. It forces philosophers to choose reduction or explain away supervenience.
This critique sparked a few directions. Reductionism accepts Kim's critique as a conclusion: higher-level causes are nothing over and above physical causes.
Emergentism argues that higher levels can have novel causal powers that are irreducible and not supervenient.
Interventionism argues that causation should be understood as an explanatory practice. That is, higher-level causes are real because they allow us to track regularities that may be, themselves, manipulable, even if they are underpinned physically.
And pluralism says causality is a context-dependent story, so the exclusion problem is precluded.
These views were explored through the second half of the 20th Century: Bruno Latour introduced actor-network theory, John Dupré advanced promiscuous realism, and Nancy Cartwright wrote The Dappled World, which framed the cosmos as a mosaic of local orders, not governed by universal laws.
Early in the 21st Century, the understanding of sciences took a mechanistic turn. The philosophizing of this probably peaked in 2007ce, with Carl Craver writing Explaining the Brain, and Every Thing Must Go from James Ladyman and Don Ross.
Basically, the understanding became that the sciences were unified not through a hierarchy of causality, but at the level of mathematical structure, which was manifest in the mechanisms present in each field.
I've been focusing on theory pretty abstractly here, and I reach a point where I cannot tell the history further without backing up and pointing out that since Kim's critiques, computers were becoming more and more a part of life, often developing near the same academic contexts that were producing these ideas. I would like to, at some point, go back over this history and compare it to the history of concepts in computer science. It seems like there might be an overlap between the concepts of science and hierarchy affecting early computing, and the speed of computation rapidly demonstrating the limits of those concepts, and allowing us to explore other ideas.
But for now I'll just say that in the contemporary day, with multi-scale modeling (climate science, systems biology, neuroscience) and hybrid fields (biophysics, econophysics, computational social science), the functional understanding of science is as a network of partially autonomous, partially integrated models. It isn't Comte's strict ladder, nor is it a dappled patchwork, but a web of mechanisms, models, and concepts.
Notably, there is still a universalism inherent in this formation of science, even as it often refers to itself as pluralistic: mathematical modeling is assumed to be functionally accurate, which itself makes some assumptions about a coherent world. This is an assumption that I've apparently explored too poorly in my research, because I can't figure out what to link to. Ordo universi, the ecclesiastical expression of the idea, didn't even exist until I just linked it. Maybe in the future it'll have more information pointing back to it. Unfortunately, critiquing this assumption is beyond the scope of this babble, I think, especially as accepting that assumption is one of the compromises I make to use computers (and more generally).
I in fact want to, in this movement of this babble, consider what it would look like to lean into these emergent (yet not really formalized) principles of science. I was planning to do so by attempting to build a shared scientific grammar rooted in homotopy type theory and lambda calculus…
…and then as I was writing this babble, I saw that Stephen Wolfram released The Ruliology of Lambdas. It's a wonderful demonstration of how even in abstract mathematical systems, behaviors quickly become irreducible, unpredictable, and indeterminable.
Ruliology, established by Wolfram, "examines how simple computational rules can generate complex behaviors." Which is to say, it's the -ology of the rules that scientific fields are believed to share, under this conception of the sciences. Notably, ruliology does not approach this belief as a belief, to be tested, but as an assumed law of the systems it studies. That makes it a maximalist claim, and I'm generally more interested in minimalist approaches.
So, backing up, I want to do something specific: reckon a set of minimal claims that must be committed to, in order to practice what is, today, called "science."[3]
Contemporary science simply doesn't rest on the claims it was built on: there's no hierarchy, there's no reduction, there's no universal law, and no single-method positivism.
So what commitments must be made to do something that counts as science?
- Order
  - In order for observation and modeling to exist, order must exist.
- Epistemic optimism
  - Through doing science, this order can be known.
- Representational optimism
  - Knowing the order lets us represent it.
- Relational optimism
  - Representing the order lets us communicate it as part of other orders.
Or, put another way: the cosmos is patterned, and those patterns can be tracked, represented, and integrated.
I'm struggling to express that last point, but basically: a claim of science today is that heterogeneous methods and models can be used together, if they share… points of articulation? Heterogeneous models are usable together if they can be articulated together, which requires common condition, structure, or function.
Which pretty neatly brings me toward the language I've been using in my own relational dynamics, and is… almost a segue back toward what this babble started on: finding a way to allow the different registers of my research and theory to communicate more easily.
The walk-through of hierarchy-to-web science history is useful because in some ways, computers have embedded hierarchies, and in others, they have a web, and in others, they're on the Web.
I feel like I have one part of this thing I'm trying to do… on its way to being developed, with my conceptions of how condition, structure, and function interact.
But I also am certain that within computer science, there are tests of the edges of these concepts that I should be looking at.
Which is my non-segue to saying: I suspect it might be good scientific practice to find ways to express things in some variant of lambda calculus, and I suspect that, for me, that may mean something like Scheme, Guile, or Racket when I am trying to form an agnostic, aphoristic expression of field-specific concepts.
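To gesture at what I mean by an agnostic, aphoristic expression, here's a minimal Racket sketch using Church-style pairs, where the only commitments are abstraction and application; the sign example is mine, just to have something concrete to hold:

```racket
#lang racket

;; Church-style pair: a relation between two things expressed purely
;; as a function, so the only machinery is lambda and application.
(define (pair a b) (lambda (select) (select a b)))
(define (fst p) (p (lambda (a b) a)))
(define (snd p) (p (lambda (a b) b)))

;; A "sign" as nothing more than a pairing of a representation
;; with the thing it points at.
(define rain-sign (pair 'wet-streets 'rain))

(fst rain-sign)  ; => 'wet-streets (the representation)
(snd rain-sign)  ; => 'rain        (what it indexes)
```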
But let me try and move this back toward the pragmatic: deciding how I'll do the various things I want to do through computers. I must note, this is only one mode of deciding, and like an overworked scientist trying their best, I'll try and use a few applicable methods to test my plans.
I said that I want "registers of computing as grammars that exist in their own relationship to each other," which sounds a lot like how the sciences exist.
Which makes me realize that what I've done so far in this babble is look at what conditions give "science" its shape or structure, as a concept. What I haven't done is describe what that shape or structure is.
If a science is articulable representations of detectable patterns - which is, I must say, a pretty wonderfully broad description of the concept - then the "shape" of a science is the set of those representations. Representations here are symbolic, which means that our description of science, formed from looking at the conditions that give it its structure, is that a science is a grammar of signs, which in turn means we should look at science through the lens of semiotics. But if we call it a grammar of reductions, we get lambda calculus; if we say grammar of morphisms, we get category theory; and if we say grammar of paths, we get homotopy type theory. Grammar of syntax gets us linguistics. I could probably do this other ways, but those are the most relevant to computation, I reckon.
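Just to pin each of those down at the notation level, here are the basic moves of each grammar in their standard textbook forms; nothing here is specific to my framing:

```latex
% grammar of reductions (lambda calculus): beta reduction
(\lambda x.\, M)\, N \;\to\; M[x := N]

% grammar of morphisms (category theory): composition of arrows
f : A \to B, \quad g : B \to C \quad \Longrightarrow \quad g \circ f : A \to C

% grammar of paths (homotopy type theory): the identity type as a path
p : a =_A b
```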
It was crammed in that list, but homotopy type theory is the one I suspect I will use as my most universal grammar, as it's what I've found most able to express relational dynamics.
Also, this is not directly connected to anything, but I want to say that I might see the "ruliology" of a thing as a map of all possible rules within a system, and the "semiosphere" as a map of those rules which are actually used within the system. But that's not a very clear distinction, written like that, so I'll move on.
…But move on to where? This babble is perhaps the babbliest in a while.
Might be best to just, move on? Let me think about what I've done in this babble.
I started with talking about different forms of computing involving different forms of compromise. I talked about these shapes being about moods, not just epistemics.
From there, I kind of jumped, based on my own assumptions, to equivalencies between the shape of computing and the shape of science. These had to do with, though I didn't say it quite this way, grammars that form out of the constraints of the ruliology of each type of computing or science, where finding commensurate points of articulation between them creates the general field of computing, or of science.
Then I talked about the causal exclusion problem, and how, from that, science moved toward this webby structure. I'm wondering if I want to go back and examine: does the webby structure address Kim's critique?
I'm also wondering about… how this concept relates to computers as cause-effect machines, which is one way of looking at input-output. And I'm wondering how that relates to boundaries and translations.
But, looking at webby sciences brought me to a set of claims about what makes "a science" these days - useful because a lot of my research is in the boundaries between fields, or how those fields boundary with my own ideas.
And, then I talked about how there's a few sciences for looking at webby things, and semiotics seems the most relevant for starting with, when describing the shape of a science, and science.
It feels like I'm kind of dancing around the issue here… I guess maybe I should ask myself, what is the minimum condition, structure, and functionality, that a semiosphere requires? That'll give me the minimum structure that… any thing that can communicate with other things?… must have, I reckon.
I think a facet of this may be too, identifying which elements of a semiosphere are involved in articulation/translation, and which are internal.
So, a semiosphere has the following terms that help define it:
- object
- sign
- index
- icon
- symbol
- semiosphere
- core
- periphery
- boundary
- translation
- text
- memory
- autocommunicating
- code
The question is which of these are necessary: if we were to construct a digital object that represents a semiosphere, what attributes and methods would it minimally require? (I'm seeing how this question relates to, generally, "how do I represent a MUD in the most agnostic way," but we'll get back to that.)
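A first, very hedged guess at an answer, as a Racket struct; this is one possible minimal cut of the list above, not a settled ontology, and the field names are mine:

```racket
#lang racket

;; One guess at which of the terms above survive as required attributes.
;; The (tentative) claim is that index, icon, symbol, periphery, etc.
;; could be expressed in terms of these four.
(struct semiosphere
  (codes       ; the sign-systems / grammars active inside
   core        ; texts currently treated as central
   memory      ; previously integrated texts
   translate)  ; the boundary, as a procedure from external text to
               ; internal text, returning #f if it can't be articulated
  #:transparent)

;; Absorbing an external text either grows the core or bounces off
;; the boundary, leaving the sphere unchanged.
(define (absorb sphere text)
  (define internal ((semiosphere-translate sphere) text))
  (if internal
      (struct-copy semiosphere sphere
                   [core   (cons internal (semiosphere-core sphere))]
                   [memory (cons internal (semiosphere-memory sphere))])
      sphere))
```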
…I think I need to be better about making new babbles, because it's now more than a day after I started this, and there's been a lot of moves. But I can also see how I'm VERY close to being able to name which specific thing is happening that warrants a new babble, so I'm going to not worry too much about it now, but instead just move to a new babble.
Footnotes:
1. Notably, astronomy was foundational to physics, because in Comte's day, physical phenomena were generally believed to be terrestrial manifestations of the laws that governed the movement of astronomical bodies, which were believed to be manifestations of pure mathematics. That last part might be true, but I'm trying to express that today, our models of the mathematics are significantly more complex.
2. Also of note, Comte didn't believe in psychology, splitting it between biology and sociology. This is out of step with the ideas of psychoanalysis, but in line with schizoanalysis.
3. I'd like to then ask if there is any functional distinction between this "science" and field-specific derivatives like "decolonial science," or whether those terms are grounded in a form of science that's obsolete.