Blog
October 21, 2024
Finite state automata (FSA) can be modeled with reactions of the form $A + B \to C + D.$ FSAs recognize exactly the regular languages, and all FSA formalisms are equivalent in power, so our job is to find some reaction which corresponds to determining whether or not a given "word" (sequence of inputs to the reaction) is a member of a representative regular language.
Generally, we will be associating languages of various grammars in the Chomsky hierarchy with certain combinations of "aliquots" added to a one-pot reaction, and in this case we want our aliquots to be potassium iodate and silver nitrate. Take the language over the alphabet $\{a,b\}$ consisting of all words with at least one $a$ and one $b.$ Now associate $a$ with some amount of $\text{KIO}_3$ and $b$ with some amount of $\text{AgNO}_3.$ Then, the reaction
$$
\text{KIO}_3 + \text{AgNO}_3 \to \text{AgIO}_3 (\text{s}) + \text{KNO}_3
$$
only occurs when both of the reactants are present in solution, so the word is in the language if and only if silver iodate precipitates (or, equivalently, heat is released).
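In code, the whole machine collapses to a two-bit check (a toy sketch of the acceptance condition only, not of the chemistry):

```python
# toy model of the one-pot FSA: the word is accepted iff both aliquots
# ('a' = KIO3, 'b' = AgNO3) ever enter the solution, since only then does
# the precipitate form.
def accepts(word):
    return "a" in word and "b" in word

for w in ("aaa", "b", "abba"):
    print(w, "->", "precipitate (accept)" if accepts(w) else "no precipitate (reject)")
```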
Type-2 grammars consist of languages that can be modeled with pushdown automata (PDAs), which differ from FSAs in that they have a stack that can store strings of arbitrary size. We call these languages "context-free languages," and the reactions we associate to context-free languages are those with intermediates. Again, because of automata equivalence, we can consider the simple case of the Dyck language: the collection of parenthesis-strings that never contain more closing parentheses than opening parentheses at any prefix length $i$ and contain exactly equal amounts of both at $i=n.$
We associate this with the $pH$ reaction of sodium hydroxide and acetic acid (vinegar), with the amounts of each aliquot normalized to create identical disturbances in the $pH$ of the solution. Note that as the $pH$ indicator aliquot is present at the beginning and end of the reaction (we associate it with the start-and-end token), the aliquot serves as the intermediate (the "stack," if you will). So, if $pH \geq \text{midpoint } pH$ throughout the reaction and is exactly $\text{midpoint } pH$ at the end, the reactor accepts the word. If not, it rejects.
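In code, the accept condition is just a running counter (a sketch of my own; actual aliquot volumes, buffering, and indicator chemistry are abstracted into a unit $pH$ step):

```python
# toy pH-based Dyck recognizer: '(' = NaOH aliquot raises the pH one
# normalized unit, ')' = acetic acid lowers it. accept iff the pH never
# dips below the midpoint and returns exactly to it at the end token.
def accepts_dyck(word, midpoint=7.0, unit=1.0):
    ph = midpoint
    for symbol in word:
        ph += unit if symbol == "(" else -unit
        if ph < midpoint:        # more ')' than '(' so far: reject
            return False
    return ph == midpoint        # balanced at the end

for w in ("(())", "(()", ")("):
    print(w, accepts_dyck(w))
```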
Incidentally, you can interpret acceptance in terms of the enthalpy yield $Y_{\Delta H} (\%)$ of the computation, defined as
$$
Y_{\Delta H} (\%) = \frac{\text{reaction heat during computation}}{\text{formation heat of input string}} \times 100.
$$
Dyck words maximize the enthalpy yield, whereas all other input sequences with imbalanced numbers of parentheses have lower enthalpy yields. Implication: all PDAs are doing something like "enthalpy maximization" in their computation. Couldn't find a good reference or exposition here, but something to look into.
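A toy calculation of that yield, under made-up units of my own (each matched open/close pair releases one unit of neutralization heat; each symbol carries half a unit of formation enthalpy):

```python
# enthalpy yield of a parenthesis word under the toy units above:
# heat released = number of matched pairs, formation heat = len(word) / 2.
def enthalpy_yield(word):
    unmatched, matched = 0, 0
    for s in word:
        if s == "(":
            unmatched += 1
        elif unmatched > 0:
            unmatched -= 1
            matched += 1
    return 100 * 2 * matched / len(word)   # Y_dH as a percentage

for w in ("(())", "()()", "(()(", "))(("):
    print(w, enthalpy_yield(w))
# balanced (Dyck) words hit 100%; imbalanced words fall short
```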
How do we model Turing machines? You can think of a Turing machine as a "two-stack" PDA, where each stack corresponds to moving left or right on the tape. Physically, this implies that we want to model TMs with a reaction with at least two interdependent intermediates, and we want it to be "expressive" enough to model "non-linearities." Oscillatory redox reactions are a natural choice, of which the Belousov-Zhabotinsky (BZ) reaction is the most famous.
A typical BZ reaction involves the combination of sodium bromate and malonic acid, with the main structure as follows:
$$
3\text{BrO}_3^- + 5\text{CH}_2(\text{COOH})_2 + 3\text{H}^+ \to 3\text{BrCH}(\text{COOH})_2 + 4\text{CO}_2 + 2\text{HCOOH} + 5\text{H}_2\text{O}.
$$
BZ reactions have a ton of macro-structure. Color changes as a function of the amount of oxidized catalyst, the proportions of the reactants and products fluctuate periodically, and even spatial patterns emerge from micro-heterogeneity in concentrations (e.g. reaction-diffusion waves, Pack patterns). These properties are incredibly interesting in and of themselves, but all we need for modeling TMs is that the reaction is sensitive to the addition of small amounts of aliquot.
Consider the language $L_3 = \{a^nb^nc^n \mid n \geq 0\}.$ Dueñas-Díez and Pérez-Mercader associate the letter $a$ with sodium bromate and $b$ with malonic acid. $c$ must somehow be dependent on the concentrations of $a$ and $b,$ so we associate $c$ with the $pH$ of the one-pot reactor, which we can adjust with sodium hydroxide. An aliquot of the ruthenium catalyst maps to the start-and-end token.
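For reference, here is the membership question the reactor is deciding, written as a plain recognizer (a standard string check, nothing chemical about it):

```python
# L3 = { a^n b^n c^n : n >= 0 }: all a's, then b's, then c's, in equal numbers
import re

def in_L3(word):
    m = re.fullmatch(r"(a*)(b*)(c*)", word)
    return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))

for w in ("", "abc", "aabbcc", "acb", "aabc"):
    print(repr(w), in_L3(w))
```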
Oscillation frequency $f$ is proportional to $[\text{BrO}_3^-]^\alpha \times [\text{CH}_2(\text{COOH})_2]^{\beta} \times [\text{NaOH}]^{-\gamma},$ but it can also be modeled as a nonlinear function of the difference between the maximum redox value of the reaction and the mean redox value of a given oscillation, that is:
$$
D = V_{\text{max}} - \left( V_T + \frac{V_P - V_T}{2}\right),
$$
where $V_T$ and $V_P$ are the trough and peak potentials, respectively, and $V_\text{max}$ is the maximum potential. Ultimately, the final frequency of the reaction can be modeled to high precision as a quadratic in $D_{\#}$ ($\#$ denotes the start-and-end token, so $D_{\#}$ is evaluated at the last timestep in reaction coordinates).
What actually allows word-by-word identification, though, is the sensitivity of the oscillatory patterns to the concentrations of specific intermediates:
The various "out-of-order" signatures for words not in $L_3$ can be explained as follows. Each symbol has an associated distinct pathway in the reaction network. Hence, if the aliquot added is for the same symbol as the previous one, the pathway is not changed but reinforced. In contrast, when the aliquot is different, the reaction is shifted from one dominant pathway to another pathway, thus reconfiguring the key intermediate concentrations and, in turn, leading to distinctive changes in the oscillatory patterns. The change from one pathway, say 1, to say pathway 2 impacts the oscillations differently than going from pathway 2 to pathway 1. This is what allows the machine to give unique distinctive behaviors for out-of-order substrings.
Thermodynamically, characterizing word acceptance is a little more involved than for PDAs or FSAs, but it can still be done. Define the area of a word as
$$
A^{\text{Word}} = V_{\text{max}} \, \tau' - \int_{t_{\#} + 30}^{t_{\#} + \tau} V_\text{osc}(t) \, dt,
$$
where $t_{\#}$ is the time in reaction coordinates at which the end token is added, $\tau'$ is the time interval between symbols, $V_\text{max}$ is the maximum redox potential, and $V_\text{osc}$ is the redox potential as measured via the Nernst equation
$$
V_\text{osc} = V_0 + \frac{RT}{nF} \ln \left( \frac{[\text{Ru(bpy)}_3^{3+}]}{[\text{Ru(bpy)}_3^{2+}]} \right),
$$
where $[\text{Ru(bpy)}_3^{3+}]$ and $[\text{Ru(bpy)}_3^{2+}]$ are the concentrations of the oxidized and reduced forms of the BZ catalyst, respectively. Now, the Gibbs free energy can be related to the redox potential as follows:
$$
\Delta G_\text{osc} = -nFV_\text{osc},
$$
so the area of a word can be rewritten in terms of the free energy as
$$
A^{\text{Word}} = - \frac{1}{n_eF} \left( \Delta G' \times \tau' - \int_{t_{\#} + 30}^{t_{\#} + \tau} \Delta G_\text{osc}(t) \, dt\right).
$$
Accepted words all share one constant value of $A^{\text{Word}},$ while rejected words have a value that depends on the word.
$L_3$ is a context-sensitive language, so it sits at Type-1 in the hierarchy; recognizing it does not require the full power of a Type-0 recognizer. However, for our purposes (realizing some practical implementation of a TM) this is roughly equivalent, as any TM can be realized as a two-stack PDA, and this reaction models a two-stack PDA quite well.
October 17, 2024
A species is a reproductively isolated population. In essence, it consists of organisms which can only breed with each other, so its ability to self-replicate is entirely self-contained. In practice, the abstraction only applies well to macroflora and macrofauna, which is still enough to inform our intuitions of super-organismal interaction.
Interspecific interactions can frequently be modeled by considering the relevant species as agents in their own right: agents motivated by self-sustenance to acquire resources, preserve the health of their subagents, and bargain or compete with others on the same playing field as themselves. Parasitism, predation, pollination—all are organismal interactions that generalize to super-organismal interactions.
Optimization of the genome does not occur at the level of the organism, nor does it occur at the level of the tribe. It occurs at the level of the gene, and selects for genes which encode traits which are more fit. From this perspective, it makes sense to question whether "species" is a natural abstraction at all. Yet, I claim there are properties which species have that make them particularly nice examples of super-organisms in action. Namely:
- Boundaries between species are clear and well-defined, due to reproductive isolation;
- Competitive dynamics between species are natural to consider, rather than having to move up or down a vertical hierarchy;
- The "intentional stance", when applied to species, is simple: reproduction.
However, it is precisely because species have such nice properties that we should be incredibly cautious when using them as intuition pumps for other kinds of super-organisms, such as nation-states, companies, or egregores. For instance:
- Boundaries between nation-states and companies are relatively straightforward to define (determined by citizenship or residency and employment, respectively). Boundaries between egregores are . . . complicated, to say the least.
- Company competition is generally modelable with agent-agent dynamics, and so is nation-state competition. But the act of "merging" (via acquisition, immigration, etc.) is available to them in a way that it is not to species. (Again, egregores are complicated . . .)
- The goal of a company is to maximize shareholder value. The goal of a nation-state is . . . to provide value to its citizens? The "goal" of an egregore is ostensibly to self-perpetuate and . . . fulfill whichever values it wants to fulfill.
These "issues" are downstream from horizontal boundaries between other super-organisms we want to consider being less strong than the divides between idealized species. While Schelling was able to develop doctines of mutually-assured destruction for Soviet-American relations, many other nation-state interactions are heavily mediated by immigration and economic intertwinement. It makes less sense to separate China and America than it does to separate foxes and rabbits.
Don't species run into the same issues as well? Humans are all members of one species, and we manage to have absurd amounts of intraspecific conflict. Similarly, tribal dynamics in various populations are often net negative for the population as a whole. Why should we uphold species as the canonical referent for superorganisms?
Species are self-sustaining and isolated. The platonic ideal of a species would be not only reproductively isolated but also resource-isolated, in that the resources its organisms need to thrive would be unusable for any other purpose. Horizontal differentiation is necessary to generalize agent modeling to systems larger than ourselves, and species possess a kind of horizontal differentiation which is important and powerful.
A corollary of this observation is that insofar as our intuitions for "superorganismal interaction" are based on species-to-species interaction, they should be weighted by the extent to which the superorganisms we have in mind are similar to species. AI-human interaction in worlds where AIs run on completely different hardware substrates from humans is notably distinct from interaction in worlds where humans have high-bandwidth implants and absurd cognitive enhancement, and can thus engage in more symbiotic relationships.
I would be interested in fleshing out these ideas more rigorously, either in the form of case studies or via a debate. If you are interested, feel free to reach out.
Crossposted to LessWrong.
June 30, 2024
The Probabilistic Payor's Lemma implies the following cooperation strategy:
Let $A_{1}, \ldots, A_{n}$ be agents in a multiplayer Prisoner's Dilemma, with the ability to return either 'Cooperate' or 'Defect' (which we model as the agents being logical statements resolving to either 'True' or 'False'). Each $A_{i}$ behaves as follows:
$$
\vdash \Box_{p_{i}} \left( \Box_{\max \{p_{1},\ldots, p_{n}\}}\bigwedge_{k=1}^n A_{k} \to \bigwedge_{k=1}^n A_{k} \right) \to A_{i}
$$
where $p_i$ represents each individual agent's threshold for cooperation (as a probability in $[0,1]$), $\Box_p\, \phi$ returns True if credence in the statement $\phi$ is greater than $p,$ and the conjunction of $A_{1}, \ldots, A_{n}$ represents 'everyone cooperates'. Then, by the PPL, all agents cooperate, provided that all $\mathbb{P}_{A_{i}}$ give credence to the cooperation statement greater than each and every $A_{i}$'s individual threshold for cooperation.
This formulation is desirable for a number of reasons: first, the Payor's Lemma is much simpler to prove than Lob's Theorem, and doesn't carry with it the same strange consequences of asserting an arbitrary modal fixedpoint; second, when we relax the necessitation requirement from 'provability' to 'belief', this gives us behavior much closer to how agents actually reason. I read it as emphasizing the importance of 'evidence'.
However, the consistency of this 'p-belief' modal operator rests on the self-referential probabilistic logic proposed by Christiano 2012, which, while being consistent, has a few undesirable properties: the distribution over sentences automatically assigns probability 1 to all True statements and 0 to all False ones (meaning it can only really model uncertainty for statements not provable within the system).
I propose that we can transfer the intuitions we have from probabilistic modal logic to a setting where 'p-belief' is analogous to calling a 'reflective oracle', and this system gets us similar (or identical) properties of cooperation.
Oracles
A probabilistic oracle $O$ is a function $\mathbb{N} \to [0,1],$ where its domain is meant to represent an indexing of probabilistic oracle machines, which are simply Turing machines allowed to call an oracle for input. An oracle can be queried with tuples of the form $(M, p),$ where $M$ is a probabilistic oracle machine and $p$ is a rational number between 0 and 1. By Fallenstein et al. 2015, there exists a reflective oracle on each set of queries such that $O(M,p) = 1$ if $\mathbb{P}(M() = 1) > p,$ and $O(M,p) = 0$ if $\mathbb{P}(M() = 0) > 1-p.$
Notice that a reflective oracle has similar properties to the $Bel$ operator in self-referential probabilistic logic: it has a coherent probability distribution over probabilistic oracle machines (as opposed to sentences), and it only gives information about the probability to arbitrary precision via queries ($O(M,p)$ vs. $Bel(\phi)$). So, it would be great if there were a canonical method of relating the two.
Peano Arithmetic is Turing-complete: there exist methods of embedding arbitrary Turing machines in statements of predicate logic, and likewise of embedding Turing machines in PA. We can form a correspondence where implications are preserved: notably, $x\to y$ simply represents the program if TM(x), then TM(y), and negations make the original TM output 1 where it outputted 0 and vice versa.
(Specifically, we're identifying non-halting Turing machines with propositions, and operations on those propositions with ways of composing the associated Turing machines. Roughly, a Turing machine outputting 1 on an input is equivalent to the corresponding sentence being true on that input.)
CDT, expected-utility-maximizing agents with access to the same reflective oracle will reach Nash equilibria, because reflective oracles can model other oracles, including oracles called by other probabilistic oracle machines---so, at least in the unbounded setting, we don't have to worry about infinite regresses, because every oracle call is guaranteed to return.
So, we can consider the following bot:
$$
A_{i} := O_{i} \left( O_{\bigcap i} \left( \bigwedge_{k=1}^n A_{k}\right) \to \bigwedge_{k=1}^n A_{k},\; p_{i}\right),
$$
where $A_i$ is an agent represented by an oracle machine, $O_i$ is the probabilistic oracle affiliated with the agent, $O_{\bigcap i}$ is the closure of all agents' oracles, and $p_{i} \in \mathbb{Q} \cap [0,1]$ is an individual probability threshold set by each agent.
How do we get these closures? Well, ideally $O_{\bigcap i}$ returns $0$ for queries $(M,p)$ if $p < \min\{p_{M_1}, \ldots, p_{M_n}\}$ and $1$ if $p > \max \{p_{M_1}, \ldots, p_{M_n}\},$ and randomizes for queries in the middle---for the purposes of this cooperation strategy, this turns out to work.
I claim this set of agents has the same behavior as those acting in accordance with the PPL: they will all cooperate if the 'evidence' for cooperating is above each agent's individual threshold $p_i.$ In the previous case, the 'evidence' was the statement $\Box_{\max \{p_{1},\ldots, p_{n}\}}\bigwedge_{k=1}^n A_{k} \to \bigwedge_{k=1}^n A_{k}.$ Here, the evidence is the statement $O_{\bigcap i} \left( \bigwedge_{k=1}^n A_{k}\right) \to \bigwedge_{k=1}^n A_{k}.$
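As a sanity check on that threshold behavior, here's a toy stub (mine; it collapses all of the self-referential oracle machinery into a single exogenous credence, which is precisely the hard part the real construction handles):

```python
# `evidence` stands in for the credence assigned to "the whole group
# cooperates"; agent i cooperates iff evidence > p_i. full cooperation
# happens exactly when evidence clears max(p_i).
def group_outcome(evidence, thresholds):
    return all(evidence > p for p in thresholds)

thresholds = [0.6, 0.7, 0.9]
for evidence in (0.5, 0.8, 0.95):
    result = "all cooperate" if group_outcome(evidence, thresholds) else "someone defects"
    print(evidence, "->", result)
# 0.5 and 0.8 fail (0.8 < 0.9 = max threshold); 0.95 clears every threshold
```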
To flesh out the correspondence further, we can show that the relevant properties of the $p$-belief operator are found in reflective oracles as well: namely, that instances of the weak distribution axiom schema are coherent and that necessitation holds.
For necessitation, $\vdash \phi \implies \vdash \Box_{p}\phi$ turns into $M_{\phi}() = 1$ implying that $O(M_{\phi},p)=1,$ which is true by the properties of reflective oracles. For weak distributivity, $\vdash \phi \to \psi \implies \vdash \Box_{p} \phi \to \Box_{p}\psi$ can be analogized to: if the Turing machine associated with $\phi$ outputting 1 implies that the Turing machine associated with $\psi$ outputs 1, then you should be at least as confident that $M_{\psi}$ outputs 1 as that $M_{\phi}$ does, so $O(M_{\phi},p)=1$ should imply $O(M_{\psi}, p)=1$ in all cases (because oracles represent true properties of probabilistic oracle machines, which Turing machines can be embedded into).
Models
Moreover, we can consider oracles to be a rough model of the p-belief modal language in which the probabilistic Payor's Lemma holds. We can get an explicit model to ensure consistency (see the links with Christiano's system, as well as its interpretation in neighborhood semantics), but oracles seem like a good intuition pump because they actively admit queries of the same form as $Bel(\phi)>p,$ and they are a nice computable analog.
They're a bit like the probabilistic logic in the sense that a typical reflective oracle has full information about what the output of a Turing machine will be if it halts, just as the probabilistic logic gives $\mathbb{P}(\phi)=1$ to all sentences which are deducible from the set of tautologies in the language. So the correspondence has some meat.
Crossposted to LessWrong.
June 17, 2024
Intro
We start with a typical Kripke frame $\mathcal{F} = (\mathcal{W}, R)$ where $\mathcal{W}$ is the set of possible worlds (think possible truth assignments to your set of propositional variables) and $R$ is a binary relation on those worlds.
A Kripke model is just some triple $(\mathcal{W}, R, \Vdash)$ where $\Vdash$ is the forcing relation which determines the relationship between various worlds $w\in\mathcal{W}$ and modal formulas. It's characterized by the following rules:
- $w \Vdash \neg A$ if and only if $w \not\Vdash A$
- $w \Vdash A \to B$ if and only if $w \Vdash \neg A$ or $w \Vdash B$
- $w \Vdash \Box A$ if and only if for all $u$ satisfying $wRu$ we have that $u \Vdash A.$
(this also subsequently implies that $A$ is possible in world $w$ if and only if it is true in at least one world $u$ adjacent to $w,$ because it's the dual of necessity)
Different modal logics are generated by changing the assumptions on $R$. For instance, forcing $R$ to be reflexive guarantees that $\textbf{T}: \Box A \to A$ will be satisfied. However, all Kripke models satisfy axiom $\textbf{K}: \Box (A \to B) \to (\Box A \to \Box B),$ which makes Kripke models very good for representing normal modal logics and very bad at representing non-normal modal logics.
The necessity operator has become popular for representing agents' "beliefs". E.g., you can take an agent, represent their policy in some modal logic, and $\Box \phi$ expresses their belief in $\phi$ while $w \Vdash \phi$ determines whether or not $\phi$ is true in that world. And this is nice for a lot of things, and Kripke models are great for modeling provability logic, for instance.
But if we want necessity to be some sort of probabilistic operator---$\mathbb{P}(\phi) > p$, for instance---then this becomes unsatisfactory. Probabilities don't distribute over conditionals like that. So we want to relax this somehow.
Enter neighborhood semantics. We define a neighborhood frame to be a pair $(\mathcal{W}, \mathcal{N}),$ where again $\mathcal{W}$ is the set of possible worlds and $\mathcal{N}: \mathcal{W} \to 2 ^ {2 ^ \mathcal{W}}$ is a "neighborhood function" which maps each world $w \in \mathcal{W}$ to its set of neighborhoods.
Necessity here is defined differently: $w \Vdash \Box \phi$ if and only if the truth set of $\phi$ is a neighborhood of $w.$ That is, there has to be a neighborhood of $w$ that contains exactly all worlds in which $\phi$ is true, and then $w \Vdash \Box \phi.$
It's easy to see that this is a generalization of a Kripke frame. You can convert every Kripke frame into a neighborhood frame by making the neighborhoods of each world exactly the truth sets of all $\phi$ that are necessitated by that world.
However, you can't go in reverse, because having different neighborhoods attached to the same world means that the criteria for necessity can be different for each $\phi.$ For instance, $\Box \phi$ might be satisfied by a neighborhood $U$ which is disjoint from a neighborhood $V$ which satisfies $\Box \psi.$ This is not representable in Kripke models---there's effectively only one neighborhood per world.
You can pretty easily construct a model with neighborhood semantics that doesn't satisfy $\mathbf{K}.$ Namely, the neighborhood associated with $\Box(A \to B)$ doesn't have to be the same one associated with $\Box A,$ and as a result the correspondence behind $\mathbf{K}$ no longer holds in general.
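To make the definitions concrete, here's a tiny model checker (a toy of my own construction, on an arbitrary three-world frame):

```python
# neighborhood semantics in miniature: worlds are 0..2, N maps each world
# to a set of neighborhoods (frozensets of worlds), and box(phi) holds at
# w iff the truth set of phi is one of w's neighborhoods.
W = {0, 1, 2}
N = {
    0: {frozenset({0, 1})},
    1: {frozenset({1}), frozenset({2})},
    2: set(),                          # world 2 necessitates nothing
}
val = {"A": {0, 1}, "B": {1, 2}}       # truth sets of the atoms

def truth_set(phi):
    # phi is an atom name, or ("not", p), ("and", p, q), ("box", p)
    if isinstance(phi, str):
        return val[phi]
    op = phi[0]
    if op == "not":
        return W - truth_set(phi[1])
    if op == "and":
        return truth_set(phi[1]) & truth_set(phi[2])
    if op == "box":
        ts = frozenset(truth_set(phi[1]))
        return {w for w in W if ts in N[w]}

print(truth_set(("box", "A")))   # {0}: only world 0 has {0, 1} as a neighborhood
```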
Applications to Mathematical Reality
Ok, so why were Kripke semantics useful in the first place? They provide a very clean structure for modeling logics, one that makes verifying properties like completeness, consistency, and decidability tractable (all Kripke models are just graphs, in essence). Partially this is because they have some formal grounding in set theory. But also because formalizing logical structures as directed graphs is just really powerful, and a really useful abstraction for reasoning about various systems.
In the same vein, reducing other systems to neighborhood semantics allows you to reason about their relevant properties simply and somewhat easily. Intuitively, we have a lot of structures that deal with collections of sets cough topologies cough and it'd be nice to bring some of them to the study of modal logic.
Then there's the question of what modal logics can simulate. Games feature prominently. Provability logic is quite famous. But more generally, modal logics are ways of modeling systems and structures that allow you to abstract away a bunch of irrelevant features.
(They're also all fragments of FOL (first-order-logic), and as a result don't necessarily have the same trials and tribulations that come with additional complexity)
April 20, 2024
tl;dr—you can improve performance (really, fitting to physics-like models) by placing your activation functions (nonlinearities) on the edges between your nodes and making them learnable rather than fixed. your nodes then just pool the activations from each of the edges pointing to it
this differs from MLPs, which have activation functions on nodes (they do the aggregation and the nonlinearity), while applying a linear transformation in between (weights and biases).
KANs have no analogous linear transformation; instead, they approximate good activation functions with B-splines. Apparently, this allows them to escape the curse of dimensionality because the function can be “approximated well with a residue rate independent of the dimension” (emphasis theirs)
i get the vibe that this is pretty dependent on the underlying generating process being approximable by smooth functions, and I also don’t think that GPU kernels have been optimized well for KANs? plausibly we can just write better kernels in CUDA/Triton for this. I think I’ll do this
ah, also the reason why they work: the Kolmogorov-Arnold representation theorem guarantees that any multivariate continuous function can be represented as a (two-layer) superposition of univariate continuous functions and addition. MLPs satisfy a different universal approximation theorem that applies to Borel-measurable functions.
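here's a tiny numpy sketch of the edges-carry-learnable-splines idea (my own toy, not the paper's code: the knot layout and init are arbitrary choices, and i've dropped the paper's residual base activation entirely):

```python
# each edge (i, j) gets its own 1-d function: a learnable linear combination
# of fixed B-spline basis functions. nodes just sum their incoming edges.
import numpy as np
from scipy.interpolate import BSpline

class KANLayer:
    def __init__(self, in_dim, out_dim, n_basis=8, k=3, lo=-1.0, hi=1.0):
        self.k = k
        # open-uniform knot vector, shared across every edge
        self.t = np.concatenate([
            np.full(k, lo),
            np.linspace(lo, hi, n_basis - k + 1),
            np.full(k, hi),
        ])
        # the learnable parameters: one coefficient vector per edge
        self.coef = 0.1 * np.random.randn(out_dim, in_dim, n_basis)

    def __call__(self, x):
        out = np.zeros(self.coef.shape[0])
        for j in range(self.coef.shape[0]):       # output node j ...
            for i in range(self.coef.shape[1]):   # ... pools every edge (i, j)
                phi = BSpline(self.t, self.coef[j, i], self.k)
                out[j] += phi(x[i])
        return out

layer = KANLayer(in_dim=2, out_dim=3)
print(layer(np.array([0.3, -0.5])))
```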
April 10, 2024
Neuron activations have a privileged basis. The residual stream does not. This is because the nonlinearities are applied to the activations on an element-wise basis, which means that representing the non-linear computation under a change of basis gets messy and results in the element-wise computation no longer acting on elements independently. Another way of thinking about this is that the residual stream is rotation-independent because all interventions (reading/writing) to it are done via linear maps. Multiplying the writes by some rotation matrix $R$ and the reads by some rotation matrix $R^{-1}$ will leave the info & computation done by the stream unchanged. IIRC this is also empirically shown in the grokking paper, where the algorithm is found up to rotation.
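A quick numerical illustration of the rotation point (my own toy): an orthogonal change of basis is invisible to linear reads and writes, but not to an element-wise nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
x = rng.standard_normal(d)                          # a "residual stream" vector
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))    # random orthogonal matrix
W_read = rng.standard_normal((d, d))                # a linear "read" off the stream

# rotate the stream and un-rotate the read: the computation is unchanged
print(np.allclose(W_read @ x, (W_read @ Q.T) @ (Q @ x)))   # True

# element-wise ReLU in the rotated basis is a genuinely different function
relu = lambda v: np.maximum(0, v)
print(np.allclose(Q.T @ relu(Q @ x), relu(x)))             # False (generically)
```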
"Features" is just the term we use for the units of a transformer's ontology. Alternative definitions include "a property of an input to a model, or some subset of it" (Neel) or "[a] scalar function of the input" (Chris Olah). Sometimes 'meaningful feature' is used to refer to features which are interpretable. However, Anthropic's toy model of bottleneck superposition just takes a 5-dim input $x$ and considers each dimension a feature in and of itself. This is more in line with Olah's definition? Maybe this confusion will resolve itself with practice.
Superposition itself is the phenomenon where the model represents more features than it has dimensions. In the Toy Models paper, Anthropic found that when features are dense, models learn something like PCA: the two most important features get orthogonal representations and the rest are not learned. But as sparsity increases, the model starts packing additional features into the same two dimensions, in superposition.
Sparsity here is defined as the probability that the element of $x$ corresponding to the feature is zero (i.e., roughly, the probability that the feature is absent from the input). We could also use other sensible thresholds, e.g. counting a feature as absent when its element is below ~0.1 rather than exactly zero. For instance, a setting has 80% sparsity when 4/5 of the features are absent from a typical input (assuming features are distributed evenly across the input set). In fact, it often makes sense (to me!) to think in terms of feature probability, $1 -$ sparsity, rather than the other way around.
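For concreteness, here's the forward pass of that toy setup as I understand it (the $x' = \text{ReLU}(W^TWx + b)$ form is the paper's; the random untrained weights and the sampling code are my own stand-ins):

```python
# bottleneck toy model: 5 "features" squeezed through 2 hidden dimensions
# and reconstructed.
import numpy as np

rng = np.random.default_rng(0)
n_features, d_hidden, sparsity = 5, 2, 0.8

def sample_features():
    # each feature is present (uniform in [0, 1]) with probability 1 - sparsity
    present = rng.random(n_features) > sparsity
    return present * rng.random(n_features)

W = rng.standard_normal((d_hidden, n_features))
b = np.zeros(n_features)

x = sample_features()
x_hat = np.maximum(0, W.T @ (W @ x) + b)   # x' = ReLU(W^T W x + b)
print(x)
print(x_hat)
```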
This is possible because, by the Johnson-Lindenstrauss lemma, sets of points in high-dimensional space can be embedded into a lower-dimensional space in such a way that the distances between the points are nearly preserved. (I think this also implies that there can be exponentially many almost-orthogonal vectors in an $n$-dimensional space, exponential in the number of dimensions.)
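A quick numerical check of the almost-orthogonality point (my own toy, not from the papers):

```python
# sample many random unit vectors in a much lower-dimensional space and
# measure their pairwise interference.
import numpy as np

rng = np.random.default_rng(0)
d, n = 256, 2048                       # 8x more "features" than dimensions
V = rng.standard_normal((n, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)

cos = V @ V.T
off = np.abs(cos[~np.eye(n, dtype=bool)])
print(f"max |cos| = {off.max():.2f}, mean |cos| = {off.mean():.3f}")
# typically max |cos| ~ 0.3 and mean ~ 0.05: thousands of directions coexist
# in 256 dimensions with only mild interference
```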
Neel proposes two kinds of superposition: bottleneck superposition and neuron superposition:
- bottleneck superposition occurs in places like keys, queries, and the residual stream---places where a linear map goes from a large dimension to a relatively smaller dimension, and therefore information must be stored more efficiently/compactly
- neuron superposition occurs when dim(activation_space) < num_features, and concerns computation rather than storage. Computation occurs when info gets passed through nonlinearities; here, $n$ non-linearities attend to more than $n$ features.
Consider a given neuron. A neuron is more likely to be polysemantic if the features it represents are sparse w.r.t. each other, and higher sparsity corresponds to more superposition in the general case. One reason we should expect superposition to occur naturally is that it will almost always make sense for networks to try to represent ever-more-sparse yet useful features, and these can be packed in alongside popular features at almost zero cost.
March 20, 2024
A post-apocalyptic fever dream. The oldest civilized metropolis. Where sons are pathetic in the eyes of their father, and both are pathetic in the eyes of their grandfathers—all while wearing blackened sunglasses and leather jackets. Grown, not made.
Rome is, perhaps, the first place I recognized as solely for visiting, never living. Unlike Tokyo, one feels this immediately. Japan’s undesirability stems primarily from its inordinate, sprawling bureaucracy that is, for the most part, hidden from the typical visitor. Rome’s undesirability is apparent for all to see—it’s loud, stifling, unmaintained, and requires arduous traversals.
Population c. 100 C.E.: 1 million people. Population c. 1000 C.E.: 35,000. Population c. 2024 C.E.: 2.8 million people. Rebound? Not so fast—the global population in the year 100 was just 200 million.
And this is obvious. The city center is still dominated by the Colosseum, the Imperial fora, and Trajan’s Market. Only the Vittoriano holds a candle to their extant glory. Yet the hordes of tourists still walk down the Via dei Fori Imperiali and congregate in stupidly long lines at the ticket booth to see ruins!
I walked across the city from east to west, passing by a secondary school, flea market, and various patisseries (is that the correct wording?). The pastries were incredible. The flea market reminded me of Mexico, interestingly enough. Felt very Catholic.
(All the buses and trams run late in Rome. This too, is very Catholic, as Orwell picked up on during his time in Catalonia and as anyone visiting a Mexican house would know. Plausibly also Irish?)
Rome’s ivies pervade its structures. Villas, monuments, churches (all 900 of them), and fountains all fall victim to these creepers. It gives the perception of a ruined city, that Roman glory has come and gone—and when one is aware of Italian history, it is very, very hard to perceive Rome as anything other than an overgrown, still-surviving bastion against the continuing spirit of the Vandals.
Roman pines, too, are fungiform. Respighi’s tone poem doesn’t do justice to them. Perhaps this is just a Mediterranean vibe? But amongst the monumental Classical, Romanesque, and Neoclassical structures of the Piazza Venezia, these pines are punctual. Don’t really know how else to convey it.
It is difficult to comprehend how the animalistic, gladiatorial Roman society became the seat of the Catholic Church. This city is clearly Gaian, and clearly ruled by the very same gods its Pantheon pays homage to. The Christian God is not of nature; it is apart from nature. It is not Dionysian, and it is not even Apollonian (because it cannot recognize or respect the Dionysian, and as such cannot exist in context of it, only apart from it). And yet, the Pope persists.
I have not seen the Vatican yet. I want to. I will be back.
November 28, 2023
In summary: A probabilistic version of the Payor's Lemma holds under the logic proposed in the Definability of Truth in Probabilistic Logic. This gives us modal fixed-point-esque group cooperation even under probabilistic guarantees.
Background
Payor's Lemma: If $\vdash \Box (\Box x \to x) \to x,$ then $\vdash x.$
We assume two rules of inference:
- Necessitation: $\vdash x \implies \vdash \Box x,$
- Distributivity: $\vdash \Box(x \to y) \implies \vdash \Box x \to \Box y.$
Proof:
- $\vdash x \to (\Box x \to x),$ by tautology;
- $\vdash \Box x \to \Box (\Box x \to x),$ by 1 via necessitation and distributivity;
- $\vdash \Box (\Box x \to x) \to x$, by assumption;
- $\vdash \Box x \to x,$ from 2 and 3 by modus ponens;
- $\vdash \Box (\Box x \to x),$ from 4 by necessitation;
- $\vdash x,$ from 5 and 3 by modus ponens.
The Payor's Lemma is provable in all normal modal logics (as it can be proved in $K,$ the weakest, because it only uses necessitation and distributivity). Its proof sidesteps the assertion of an arbitrary modal fixedpoint, does not require internal necessitation ($\vdash \Box x \implies \vdash \Box \Box x$), and provides the groundwork for Lobian handshake-based cooperation without Lob's theorem.
It is known that Lob's theorem fails to hold in reflective theories of logical uncertainty. However, a proof of a probabilistic Payor's lemma has been proposed, which modifies the rules of inference necessary to be:
- Necessitation: $\vdash x \implies \vdash \Box_p\, x,$
- Weak Distributivity: $\vdash x \to y \implies \vdash \Box_p\, x \to \Box_p\, y,$
where we take $\Box_p$ to be an operator which returns True if the internal credence in $x$ is greater than $p$ and False if not. (Formalisms incoming.)
The question is then: does there exist a consistent formalism under which these rules of inference hold? The answer is yes, and it is provided by Christiano 2012.
Setup
(Regurgitation and rewording of the relevant parts of the Definability of Truth)
Let $L$ be some language and $T$ be a theory over that language. Assume that $L$ is powerful enough to admit a Godel encoding and that it contains terms which correspond to the rational numbers $\mathbb{Q}.$ Let $\phi_1, \phi_{2} \ldots$ be some fixed enumeration of all sentences in $L.$ Let $\ulcorner \phi \urcorner$ represent the Godel encoding of $\phi.$
We are interested in the existence and behavior of a function $\mathbb{P}: L \to [0,1]^L,$ which assigns a real-valued probability in $[0,1]$ to each well-formed sentence of $L.$ We are guaranteed the coherency of $\mathbb{P}$ with the following assumptions:
- For all $\phi, \psi \in L$ we have that $\mathbb{P}(\phi) = \mathbb{P}(\phi \land \psi) + \mathbb{P}(\phi \land \neg \psi).$
- For each tautology $\phi,$ we have $\mathbb{P}(\phi) = 1.$
- For each contradiction $\phi,$ we have $\mathbb{P}(\phi) = 0.$
Note: I think that 2 & 3 are redundant (as John Baez notes), and that these axioms do not necessarily constrain $\mathbb{P}$ to $[0,1]$ in and of themselves (hence the extra restriction). However, neither concern is relevant to the result.
A coherent $\mathbb{P}$ corresponds to a distribution $\mu$ over models of $L.$ A coherent $\mathbb{P}$ which gives probability 1 to $T$ corresponds to a distribution $\mu$ over models of $T$. We denote a function which generates a distribution over models of a given theory $T$ as $\mathbb{P}_T.$
Syntactic-Probabilistic Correspondence: Observe that $\mathbb{P}_T(\phi) =1 \iff T \vdash \phi.$ This allows us to interchange the notions of syntactic consequence and probabilistic certainty.
Now, we want $\mathbb{P}$ to give sane probabilities to sentences which talk about the probability $\mathbb{P}$ gives them. This means that we need some way of giving $L$ the ability to talk about itself.
Consider the formula $Bel.$ $Bel$ takes as input the Godel encodings of sentences. $Bel(\ulcorner \phi \urcorner)$ contains arbitrarily precise information about $\mathbb{P}(\phi).$ In other words, if $\mathbb{P}(\phi) = p,$ then the statement $Bel(\ulcorner \phi \urcorner) > a$ is True for all $a < p,$ and the statement $Bel(\ulcorner \phi \urcorner) > b$ is False for all $b > p.$ $Bel$ is fundamentally a part of the system, as opposed to being some metalanguage concept.
(These are identical properties to those represented in Christiano 2012 by $\mathbb{P}(\ulcorner \phi \urcorner).$ I simply choose to represent $\mathbb{P}(\ulcorner \phi \urcorner)$ with $Bel(\ulcorner \phi \urcorner)$ as it (1) reduces notational uncertainty and (2) seems to be more in the spirit of Godel's $Bew$ for provability logic).
Let $L'$ denote the language created by affixing $Bel$ to $L.$ Then, there exists a coherent $\mathbb{P}_T$ for a given consistent theory $T$ over $L$ such that the following reflection principle is satisfied:
$$\forall \phi \in L',\ \forall a,b \in \mathbb{Q} : (a < \mathbb{P}_{T}(\phi)<b) \implies \mathbb{P}_{T}(a < Bel(\ulcorner \phi \urcorner) < b) = 1.$$
In other words, $a < \mathbb{P}_T(\phi) < b$ implies $T \vdash a < Bel(\ulcorner \phi \urcorner) < b.$
Proof
(From now, for simplicity, we use $\mathbb{P}$ to refer to $\mathbb{P}_T$ and $\vdash$ to refer to $T \vdash.$ You can think of this as fixing some theory $T$ and operating within it).
Let $\Box_p(\phi)$ represent the sentence $Bel(\ulcorner \phi \urcorner) > p,$ for some $p \in \mathbb{Q}.$ We abbreviate $\Box_p(\phi)$ as $\Box_p\, \phi.$ Then, we have the following:
Probabilistic Payor's Lemma: If $\vdash \Box_p\, (\Box_p\, x \to x) \to x,$ then $\vdash x.$
Proof as per Demski:
- $\vdash x \to (\Box_{p}\, x \to x),$ by tautology;
- $\vdash \Box_{p}\, x \to \Box_{p}\, (\Box_{p}\, x \to x),$ by 1 via weak distributivity;
- $\vdash \Box_{p}\, (\Box_{p}\, x \to x) \to x$, by assumption;
- $\vdash \Box_{p}\, x \to x,$ from 2 and 3 by modus ponens;
- $\vdash \Box_{p}\, (\Box_{p}\, x \to x),$ from 4 by necessitation;
- $\vdash x,$ from 5 and 3 by modus ponens.
Rules of Inference
Necessitation: $\vdash x \implies \vdash \Box_p\, x.$ If $\vdash x,$ then $\mathbb{P}(x) = 1$ by syntactic-probabilistic correspondence, so by the reflection principle we have $\mathbb{P}(\Box_p\, x) = 1,$ and as such $\vdash \Box_p\, x$ by syntactic-probabilistic correspondence.
Weak Distributivity: $\vdash x \to y \implies \vdash \Box_p\, x \to \Box_p\, y.$ The proof of this is slightly more involved.
From $\vdash x \to y$ we have (via correspondence) that $\mathbb{P}(x \to y) = 1,$ so $\mathbb{P}(\neg x \lor y) = 1.$ We want to prove from this that $\mathbb{P}(\Box_p\, x \to \Box_p\, y) = 1,$ i.e. $\mathbb{P}((Bel(\ulcorner x \urcorner) \leq p) \lor (Bel(\ulcorner y \urcorner) > p)) = 1.$ We can do casework on $x$. If $\mathbb{P}(x) \leq p,$ then weak distributivity follows from vacuousness. If $\mathbb{P}(x) > p,$ then as
$$
\begin{align*}
\mathbb{P}(\neg x \lor y) &= \mathbb{P}(x \land(\neg x \lor y)) + \mathbb{P}(\neg x \land (\neg x \lor y)), \\
1 &= \mathbb{P}(x \land y) + \mathbb{P}(\neg x \lor (\neg x \land y)), \\
1 &= \mathbb{P}(x \land y) + \mathbb{P}(\neg x),
\end{align*}
$$
$\mathbb{P}(\neg x) < 1-p,$ so $\mathbb{P}(x \land y) > p,$ and therefore $\mathbb{P}(y) > p.$ (For instance, with $p = 0.8$ and $\mathbb{P}(x) = 0.9$: $\mathbb{P}(\neg x) = 0.1,$ so $\mathbb{P}(x \land y) = 0.9 > p.$) Then, $Bel(\ulcorner y \urcorner) > p$ is True by reflection, so by correspondence it follows that $\vdash \Box_p\, x \to \Box_p\, y.$
(I'm pretty sure this modal logic, following necessitation and weak distributivity, is not normal (it's weaker than $K$). This may have some implications? But in the 'agent' context I don't think that restricting ourselves to normal modal logics makes sense).
Bots
Consider agents $A,B,C$ which return True to signify cooperation in a multi-agent Prisoner's Dilemma and False to signify defection. (Similar setup to Critch's.) Each agent has 'beliefs' $\mathbb{P}_A, \mathbb{P}_B, \mathbb{P}_C : L \to [0,1],$ representing their credences over all formal statements in their respective languages (we are assuming they share the same language; this is unnecessary).
Each agent has the ability to reason about their own 'beliefs' about the world arbitrarily precisely, and this allows them full knowledge of their utility function (if they are VNM agents, and up to the complexity of the world-states they can internally represent). Then, these agents can be modeled with Christiano's probabilistic logic! And I would argue it is natural to do so (you could easily imagine an agent having access to its own beliefs with arbitrary precision by, say, repeatedly querying its own preferences).
Then, if $A,B,C$ each behave in the following manner:
- $\vdash \Box_a\, (\Box_e\, E \to E) \to A,$
- $\vdash \Box_b\, (\Box_e\, E \to E) \to B,$
- $\vdash \Box_c\, (\Box_e\, E \to E) \to C,$
where $E = A \land B \land C$ and $e = \max \{a,b,c\},$ they will cooperate by the probabilistic Payor's lemma.
Proof:
- $\vdash \Box_a\, (\Box_e\, E \to E) \land \Box_b\, (\Box_e\, E \to E) \land \Box_c\, (\Box_e\, E \to E) \to A \land B \land C,$ via conjunction;
- $\vdash \Box_e\, (\Box_e\, E \to E) \to E,$ as if the $e$-threshold is satisfied, all others are as well;
- $\vdash E,$ by probabilistic Payor.
This can be extended to arbitrarily many agents. Moreover, the valuable insight here is that cooperation is achieved when the evidence that the group cooperates exceeds each and every member's individual threshold for cooperation. A formalism of the intuitive strategy 'I will only cooperate if there are no defectors' (or perhaps 'we will only cooperate if there are no defectors').
It is important to note that any $\mathbb{P}$ is going to be uncomputable. However, I think modeling agents as having arbitrary access to their beliefs is in line with existing 'ideal' models (think VNM -- I suspect that this formalism closely maps to VNM agents that have access to arbitrary information about their utility function, at least in the form of preferences), and these agents play well with modal fixedpoint cooperation.
Acknowledgements
This work was done while I was a 2023 Summer Research Fellow at the Center on Long-Term Risk. Many thanks to Abram Demski, my mentor who got me started on this project, as well as Sam Eisenstat for some helpful conversations. CLR was a great place to work! Would highly recommend if you're interested in s-risk reduction.
Crossposted to the AI Alignment Forum.
September 13, 2023
Geneva is evil.
It's overpriced, loud, and dirty. Paying ten francs for a mediocre street taco is no way to live life. God forbid you visit the city center during the day, and stay as far away from Geneva station as you can. I thought the air was supposed to be good in the Alps?
But above all, it reeks of fakeness.
It calls itself the "Peace Capital", claims it's too good to have twin cities, and prides itself on its cosmopolitanism. On what grounds? Before Hitler's fall, Geneva's only claim to facilitating international diplomacy was hosting the League of Nations -- admittedly the best international governing body we've had thus far, but still. After, every international organization and their shadow backers clamored to have their headquarters (or at least their European headquarters) in Geneva. The UN, WHO, UNHCR, Red Cross, WTO, WIPO, WMO, ILO, ...
Did you know that the largest non-financial-services industry in Geneva is watchmaking? Rolex, Patek Philippe, etc. have factories just outside of Geneva proper. To be fair, 'financial services' also excludes commodity trading, of which Geneva is to oil, sugar, grains, and coffee as Rotterdam is to metals. Vitol & Trafigura both have their headquarters in Geneva (and one must wonder whether this is for convenience or to take advantage of lax Swiss banking laws... remember Marc Rich?)
Two-thirds of the corporate tax in Geneva comes from commodity trading, banking, and watchmaking. These international organizations? Don't contribute to the economy. (Yes, they bring people & these people use services & this allows Geneva natives to benefit from the overwhelming amount of NGOs and international bodies in their city. Still.)
Tragically, Geneva once had a soul. The 'Protestant Rome' which once served as the birthplace of the Calvinist Revolution was annexed by Catholic France & revolted as a response. The city had opinions that informed its identity -- not a pseudo-identity formed from undeserved arrogance & globalism.
Demographic shifts (mostly French immigration to French-speaking Switzerland) have led to Catholics forming the largest religious group in Geneva today, followed by atheists. (I am not blaming immigration for Geneva's soullessness! It is just another piece of the puzzle.) This, along with its absurd emphasis on being a truly international city, undergirds the sense that Geneva has lost its way.
And the Jet d'Eau... really? Geneva really had to take the self-masturbatory imagery to another level...
September 12, 2023
One recounts that Washington Irving, who was traveling in Spain at the time, suggested the name to his brother, a local resident; this explanation ignores the fact that Irving returned to the United States in 1832. Others award the honor to Two Stickney, son of the major who quaintly numbered his sons and named his daughters after States. The most popular version attributes the naming to Willard J. Daniels, a merchant, who reportedly suggested Toledo because it 'is easy to pronounce, is pleasant in sound, and there is no other city of that name on the American continent.'
- The Ohio Guide on the naming of Toledo, Ohio
I found myself in Toledo one night.
I was trying to get from Ann Arbor to Boston via Amtrak (I had no working photo ID, and at the time I didn't realize that TSA takes IDs up to one year expired as valid for domestic travel) and for some reason my connection was in Toledo. Bus from Ann Arbor to Toledo, train from Toledo to Boston. Simple.
(it actually was quite simple --- this won't be some sort of beginning to a trashy horror story. Pinky promise)
Abandoned Factories
1967: Super Bowl 1, Apollo 1 blows up, Ali fights the draft, Thurgood Marshall rises to the court, and Detroit dies.
Woe befell America's automotive capital with the long, hot summer of '67 and some of the bloodiest race riots in American history. Eventually LBJ used the Insurrection Act to send the National Guard to quell the riots, but it left the west of Lake Erie a shell of its former self.
Today, Detroit is almost a ghost town. It's defaulted on its debt (and gone bankrupt!), has the 4th-highest murder rate among major US cities, and its former mayor was convicted on 24 felony counts and sentenced to 28 years in prison.
Luckily, I wasn't in Detroit! So you can imagine how surprised I was to find a ramshackle paper mill right next to the train station. And next to that was a junkyard, and next to that was another unused factory, and next to that was... you get the picture.
If you were in front of an abandoned factory at 3AM I would certainly hope you at least took a look around inside. Not that I would ever do such a thing, but it seems like such a missed opportunity...
[pictures incoming! in future updates :D]
Apparently the dereliction of Detroit's manufacturing capacity took Toledo (and eventually, the rest of the Midwest) with it.
Fellow Travelers
Mainstays on the Amtrak: Mennonites, punks, and wannabe vagabonds.
Mennonites & the Amish are often mistaken for each other. Both practice a certain amount of technological asceticism (more extreme in the conservative branches), both are Anabaptist-derived, and both use the Amtrak as a form of transportation. But the ones you'll spot are probably conservative Mennonites (given their distinctive dress -- 1850s cowboy vibes), especially given that large numbers of them settled in the Midwest.
Punks are interchangeable. Spiky pink hair, silver chains, black skinny jeans, relationships with inappropriate age gaps -- always the same. Entertaining, in small doses.
Wannabe vagabonds: me! :D
Highly, highly recommend talking to the person sitting next to (or in front of, or behind) you on the train. Americans like talking to people, and you probably have nothing better to do. The WiFi is atrocious.
Why Is The Station So BIG
MLK Plaza is Toledo's station. It is massive.
Four floors. Gothic. Nearly a century and a half old. A multimillion dollar investment in the mid-20th century. To serve an area that is now dead.
At least National Train Day is a week early in Toledo. The city probably needs the consolation.
Microcosm
Walk around Toledo at night. See the emptiness. Feel the emptiness. Get in touch with the dying Rust Belt. And maybe visit the first ever hippoquarium exhibit in a zoo.
Would rec.
May 10, 2023
Epistemic status: Vaguely confused and probably lacking a sufficient technical background to get all the terms right. Is very cool though, so I figured I'd write this.
And what are these Fluxions? The Velocities of evanescent Increments? And what are these same evanescent Increments? They are neither finite Quantities nor Quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?
George Berkeley, The Analyst
When calculus was invented, it didn't make sense. Newton and Leibniz played fast and loose with mathematical rigor to develop methods that arrived at the correct answers, but no one knew why. It took another century and a half for Cauchy and Weierstrass to develop analysis, and in the meantime people like Berkeley refused to accept the methods utilizing these "ghosts of departed quantities."
Cauchy's and Weierstrass's solution to the crisis of calculus was to define infinitesimals in terms of limits. In other words, to not describe the behavior of functions directly acting on infinitesimals, but rather to frame the entire endeavour as studying the behaviors of certain operations in the limit, in that weird superposition of being arbitrarily close to something yet not it.
(And here I realize that math is better shown, not told)
The limit of a function $f(x)$ at $x=a$ is $L$ if for any $\epsilon>0$ there exists some $\delta > 0$ such that if
$$|x-a|<\delta,$$
then
$$|f(x)-L|<\epsilon.$$
Essentially, the limit exists if there's some value $\delta$ that forces $f(x)$ to be within $\epsilon$ of $L$ if $x$ is within $\delta$ of $a$. Note that this has to hold true for all $\epsilon$, and you choose $\epsilon$ first!
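(A worked micro-example, standard fare: for $f(x) = 2x$ at $a = 1$ with $L = 2$, given any $\epsilon > 0$ take $\delta = \epsilon/2$; then $|x - 1| < \delta$ implies $|f(x) - 2| = 2|x - 1| < \epsilon,$ so the limit is $2.$)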
From this we get the well-known definition of the derivative:
$$f'(x) = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h}$$
and you can define the integral similarly.
The limit solved calculus's rigor problem. From the limit the entire field of analysis was invented and placed on solid ground, and this foundation has stood to this day.
Yet, it seems like we lose something important when we replace the idea of the "infinitesimally small" with the "arbitrarily close to." Could we actually make numbers that were infinitely small?
The Sequence Construction
Imagine some mathematical object that had all the relevant properties of the real numbers (addition and multiplication are associative and commutative, it is closed under them, etc.) but also had infinitely small and infinitely large numbers. What would this object look like?
We can take the set of all infinite sequences of real numbers $\mathbb{R}^\mathbb{N}$ as a starting point. A typical element $a\in\mathbb{R}^\mathbb{N}$ would be
$$a = (a_0, a_1, a_2, \ldots)$$
where $a_0, a_1, a_2, \ldots$ is some infinite sequence of real numbers.
We can define addition and multiplication element-wise as:
$$a + b = (a_{0} + b_{0}, a_{1} + b_{1}, a_{2} + b_{2}, \ldots),$$
$$a \cdot b = (a_0 \cdot b_0, a_1 \cdot b_1, a_2 \cdot b_2, \ldots).$$
You can verify that this is a commutative ring, which means that these operations behave nicely. Yet, being a commutative ring is not the same thing as being an ordered field, which is what we eventually want if our desired object is to have the same properties as the reals.
To get from $\mathbb{R}^\mathbb{N}$ to a field structure, we have to modify it to accommodate well-defined division. The typical way of doing this is to introduce the zero product property: i.e. ensuring that for $a,b \in \mathbb{R}^\mathbb{N},$ if $ab = 0$ then at least one of $a,b$ is $0.$
If we let $0$ be the sequence of all zeros $(0,0,0,\ldots)$ in $\mathbb{R}^\mathbb{N},$ then it is clear that we can have two non-zero elements multiply to get zero. If we have
$$a = (1, 0, 0, 0, \ldots),$$
and
$$b = (0,1,1,1, \ldots),$$
then neither of these are the zero element, yet their product is zero.
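You can check this concretely with truncated sequences (a finite toy of mine; the actual construction of course works with full infinite sequences):

```python
# element-wise product of (truncated) sequences: two sequences, neither of
# which is the zero sequence, multiplying to the zero sequence.
def seq_mul(a, b):
    return [x * y for x, y in zip(a, b)]

a = [1, 0, 0, 0, 0, 0]   # non-zero only in position 0
b = [0, 1, 1, 1, 1, 1]   # zero only in position 0
print(seq_mul(a, b))     # [0, 0, 0, 0, 0, 0]
```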
How do we fix this? Equivalence classes!
Our problem is that there are too many distinct "zero-like" things in the ring of real numbered sequences. Intuitively, we should expect the sequence $(0,1,0,0,\ldots)$ to be basically zero, and we want to find a good condensation of $\mathbb{R}^\mathbb{N}$ that allows for this.
In other words, how do we make all the sequences with "almost all" their elements as zero to be equal to zero?
Almost All Agreement ft. Ultrafilters
Taken from "five ways to say "Almost Always" and actually mean it":
A filter $\mathcal{F}$ on an arbitrary set $I$ is a collection of subsets of $I$ that is closed under set intersections and supersets. (Note that this means the smallest filter on $I$ is $\{I\}$.)
An ultrafilter is a filter which, for every $A \subseteq I$, contains either $A$ or its complement. A principal ultrafilter contains a finite set.
A nonprincipal ultrafilter does not.
This turns out to be an incredibly powerful mathematical tool, and can be used to generalize the concept of "almost all" to esoteric mathematical objects that might not have well-defined or intuitive properties.
Let's say we define some nonprincipal ultrafilter $\mathcal{U}$ on the natural numbers. This will contain all cofinite sets, and will exclude all finite sets. Now, let's take two sequences $a,b \in \mathbb{R}^\mathbb{N},$ and define their agreement set $I$ to be the indices on which $a,b$ are identical (have the same real number in the same position).
Observe that $I$ is a set of natural numbers. If $I \in \mathcal{U},$ then $I$ cannot be finite, and the ultrafilter deems the agreement set "large," so it seems reasonable to say that $a,b$ agree almost everywhere. Conversely, if $I \not\in \mathcal{U},$ this implies that $\mathbb{N}\setminus I \in \mathcal{U},$ which means that $a,b$ disagree at almost all positions, so they probably shouldn't be equal.
Voila! We have a suitable definition of "almost all agreement": if the agreement set $I$ is contained in some arbitrary nonprincipal ultrafilter $\mathcal{U}$.
Let $^*\mathbb{R}$ be the quotient set of $\mathbb{R}^\mathbb{N}$ under this equivalence relation (essentially, the set of all distinct equivalence classes of $\mathbb{R}^\mathbb{N}$). Does this satisfy the zero product property?
(Notation note: we will let $(a)$ denote the infinite sequence of the real number $a$, and $[a]$ the equivalence class of the sequence $(a)$ in $^* \mathbb{R}$.)
Yes, This Behaves Like The Real Numbers
Let $a,b \in \mathbb{R}^\mathbb{N}$ such that $ab = (0)$. Let's break this down element-wise: for each $n \in \mathbb{N},$ at least one of $a_n, b_n$ must be zero. As one of the ultrafilter axioms is that it must contain every set or its complement, either the index set of the zero elements in $a$ or the index set of the zero elements in $b$ will be in any nonprincipal ultrafilter on $\mathbb{N}.$ Therefore, either $a$ or $b$ is equivalent to $(0)$ in $^* \mathbb{R},$ so $^* \mathbb{R}$ satisfies the zero product property.
Therefore, division is well defined on $^*\mathbb{R}$! Now all we need is an ordering, and luckily almost-all agreement saves the day again. We can say for $a,b \in {}^*\mathbb{R}$ that $a>b$ if almost all elements in $a$ are greater than the elements in $b$ at the same positions (using the same ultrafilter criterion).
So, $^*\mathbb{R}$ is an ordered field!
Infinitesimals and Infinitely Large Numbers
We have the following hyperreal:
$$\epsilon = \left( 1, \frac{1}{2}, \frac{1}{3}, \ldots, \frac{1}{n}, \ldots \right).$$
Recall that we embed the real numbers into the hyperreals by assigning every real number $a$ to the equivalence class $[a]$. Now observe that $\epsilon$ is smaller than every real number embedded into the hyperreals this way.
Pick some arbitrary positive real number $a$. There exists $p \in \mathbb{N}$ such that $\frac{1}{p}< a$. All but finitely many of the fractions $\frac{1}{n}$ (those with $n > p$) are smaller than $a$, so $\epsilon$ is smaller than $(a)$ at almost all positions, and thus smaller than $[a]$.
This is an infinitesimal! This is a rigorously defined, coherently defined, infinitesimal number smaller than all real numbers! In a number system which shares all of the important properties of the real numbers! (except the Archimedean one, as we will shortly see, but that doesn't really matter).
Consider the following
$$\Omega = (1,2,3, \ldots).$$
By a similar argument this is larger than all possible real numbers. I encourage you to try to prove this for yourself!
(The Archimedean principle is that which guarantees that if you have any two real numbers, you can multiply the smaller by some natural number to become greater than the other. This is not true in the hyperreals. Why? (Hint: $\Omega$ breaks this if you consider a real number.))
How does this tie into calculus, exactly?
Well, we have a coherent way of defining infinitesimals!
The short answer is that we can define the standard part operator $\text{st}(x),$ which maps any finite hyperreal to the unique real number infinitely close to it. Then, the definition of a derivative becomes
$$f'(x) = \text{st}\left( \frac{^*f(x+\Delta x)- ^*f(x)}{\Delta x}\right)$$
where $\Delta x$ is some infinitesimal, and $^*f$ is the natural extension of $f$ to the hyperreals.
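To see this machinery in action, take $f(x) = x^2$ and any infinitesimal $\Delta x$:
$$f'(x) = \text{st}\left( \frac{(x+\Delta x)^2 - x^2}{\Delta x}\right) = \text{st}\left( 2x + \Delta x \right) = 2x,$$
since adding an infinitesimal to $2x$ doesn't change its standard part. No limits required! More on this in a future blog post!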
It also turns out the hyperreals have a bunch of really cool applications in fields far removed from analysis. Check out my expository paper on the intersection of nonstandard analysis and Ramsey theory for an example!
Yet, the biggest effect I think this will have is pedagogical. I've always found the epsilon-delta definition of a limit kind of unintuitive, and it was specifically invented to add post hoc rigor to calculus after it had been invented and used widely. I suspect that formulating introductory calculus via infinitesimals would go a long way toward making it more intuitive.
April 23, 2023
In English
A boring, colloquial way.
"Almost all the eggs are gone!" (as half a dozen remain)
Not-Finite-ness
A slightly-less-boring mathy way.
"Almost all prime numbers are odd!"
There are infinitely many primes. There is exactly one even prime number (2). Infinity minus one is... infinity.
"Almost all natural numbers are larger than one-hundred thousand quadrillion quadrillion vigintillion! (10^83)"
There are infinitely many natural numbers. There are infinitely many natural numbers larger than one-hundred thousand quadrillion quadrillion vigintillion. No practical difference between 10^83 and 1 (other than that one is an upper bound on the number of atoms in the universe).
You formalize this with sets: if you have some (infinite) set, a subset whose complement has a finite cardinality encompasses almost everything in the set. We call this cofiniteness.
Probability ~One
A way to quantify surety.
"Almost all the people will lose money at the casino!"
I'm not sure what the exact rates are, but I'd bet money on this being true.
"Almost all numbers are composite!"
It is well known that the number of primes below $N$ is approximately $N / \ln N$. In the limit as $N$ goes to infinity, the ratio of primes to composites goes to zero. So if you choose a random positive integer, I would bet my life savings that it's composite.
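If you want to watch this happen, here's a quick sieve-based sketch (plain Python, standard library only; the function name is mine): count the primes below growing values of $N$ and watch the fraction shrink like $1/\ln N$.

```python
import math

def prime_count(n: int) -> int:
    """Count primes below n with a sieve of Eratosthenes."""
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for p in range(2, math.isqrt(n) + 1):
        if sieve[p]:
            # Cross off every multiple of p starting at p^2.
            sieve[p * p :: p] = [False] * len(range(p * p, n, p))
    return sum(sieve)

for exp in range(2, 8):
    n = 10 ** exp
    frac = prime_count(n) / n
    print(f"N = 10^{exp}: {frac:.4f} of numbers below N are prime (1/ln N = {1 / math.log(n):.4f})")
```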
"Almost all graphs are asymmetric."
An intuitive explanation: if you take all possible combinations of vertices and edges and let the number of each tend towards infinity, you should expect chaos to triumph over order (sorry, Ramsey). This does depend on a certain definition of symmetry, however, and a clearer statement would be "almost all graphs have only one automorphism" (the identity); see the sketch below.
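Here's a brute-force sketch of that claim (plain Python, standard library only; the graph representation and helper names are mine): sample graphs from $G(n, 1/2)$ and check every vertex permutation. Expect roughly half of the samples at $n = 7$ to be asymmetric, with the fraction tending to 1 as $n$ grows; the check is factorial in $n$, so keep $n$ small.

```python
import itertools
import random

def random_graph(n: int) -> frozenset:
    """Sample G(n, 1/2): each possible edge is present independently with probability 1/2."""
    return frozenset(
        (i, j) for i in range(n) for j in range(i + 1, n) if random.random() < 0.5
    )

def is_asymmetric(n: int, edges: frozenset) -> bool:
    """Brute-force check that the identity is the graph's only automorphism."""
    identity = tuple(range(n))
    for perm in itertools.permutations(range(n)):
        if perm == identity:
            continue
        permuted = frozenset(tuple(sorted((perm[i], perm[j]))) for (i, j) in edges)
        if permuted == edges:
            return False  # found a nontrivial automorphism
    return True

n, trials = 7, 200
hits = sum(is_asymmetric(n, random_graph(n)) for _ in range(trials))
print(f"{hits}/{trials} random graphs on {n} vertices had no nontrivial symmetry")
```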
The Complement Has Lebesgue Measure Zero
A straightforward-yet-strange way for real numbers.
The Lebesgue measure is what you get when you try to generalize length, area, and volume to $n$-dimensions. It forms the basis for our current understanding of integration, and helps us figure out how big stuff is.
Something with zero volume basically doesn't exist anyway.
"Almost all real numbers are irrational!"
Well yes, all the rationals have measure zero. All countable sets have measure zero, and the rationals are countable.
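(The one-line proof is a classic: enumerate the rationals as $q_1, q_2, q_3, \ldots$ and cover $q_n$ with an interval of length $\varepsilon/2^n.$ The total length of the cover is at most
$$\sum_{n=1}^\infty \frac{\varepsilon}{2^n} = \varepsilon,$$
and $\varepsilon$ was arbitrary, so the rationals fit inside covers of arbitrarily small total length. The same argument disposes of any countable set.)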
(Measures are nice: they're a neat generalization of physical scales (mass, volume, etc.) to arbitrary mathematical objects. They're particularly useful for dealing with continuous things ~we love Lebesgue measures~)
"Almost all real numbers are noncomputable!"
Well yes? Of course an arbitrary real number can't be computed to arbitrary precision by a finite algorithm? Computable numbers are countable, remember? Nevermind that basically all the numbers we deal with are computable, this is obvious.
(Measure zero stuff basically doesn't exist, even if they're the only things we use on a daily basis)
"Almost all real numbers aren't in the Cantor set!"
Well yes, of course! Even though the Cantor set is uncountably infinite, it still has measure zero! It's a weird pseudo-fractal embedding of the real line that somehow manages to lose everything in translation but still keep all the relevant information.
(Idk, the Cantor set is weird)
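(Weird, but the measure-zero claim at least is a quick computation: stage $n$ of the construction deletes $2^{n-1}$ middle thirds of length $3^{-n}$ each, so the deleted intervals have total length
$$\sum_{n=1}^\infty \frac{2^{n-1}}{3^n} = \frac{1}{3} \cdot \frac{1}{1 - \frac{2}{3}} = 1,$$
which is the entire length of $[0,1].$ Whatever survives has measure zero.)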
It is Contained in a Nonprincipal Ultrafilter
A filter $\mathcal{F}$ on an arbitrary set $I$ is a nonempty collection of subsets of $I$ that is closed under set intersections and supersets, and that does not contain the empty set. (Note that this means that the smallest filter on $I$ is $\{I\}$, the collection containing just $I$ itself.)
An ultrafilter is a filter which, for every $A \subseteq I$, contains either $A$ or its complement. A principal ultrafilter contains a finite set (and hence, by repeatedly splitting it with the complement axiom, some singleton along with every set containing it).
A nonprincipal ultrafilter does not.
This turns out to be an incredibly powerful mathematical tool, and can be used to generalize the concept of "almost all" to esoteric mathematical objects that might not have well-defined or intuitive properties.
(One of the coolest uses of nonprincipal ultrafilters is in the construction of the hyperreals, post forthcoming).
Let $\mathcal{U}$ be a nonprincipal ultrafilter over the natural numbers. It obviously contains no finite sets, but we run into a slight issue when we take the set
$$E = \{2,4,6,8, \ldots \}$$
and its complement
$$O = \{1,3,5,7, \ldots \}.$$
By the filter axioms, only one of these can be in $\mathcal{U}$ ($E$ and $O$ are disjoint, so containing both would put the empty intersection in $\mathcal{U}$), and by the ultrafilter axiom one of them has to be. And thus, depending on which ultrafilter we picked, we can safely say:
"Almost all natural numbers are even."
February 01, 2023
At a rationality workshop I ran an activity called “A Thing.” Not only because I didn’t know what to call it, but because I didn’t know what to expect. In retrospect, I've decided to christen it "Directed Babbling."
It was borne out of a naive hope that if two individuals trusted each other enough, they’d be able to lower their social inhibitions and have a conversation with zero brain-to-mouth filter. I thought this would lead to great conversations, and perhaps act as a pseudo-therapeutic tool to resolve disputes, disagreements over emotionally charged topics, and the like. However, it turns out this isn’t necessarily the best use case for a conversation where you simply say the first thing that comes into your head.
As with any writing trying to describe social dynamics, this may be somewhat inscrutable. However, I will try my best to explain exactly what I claim to be a useful conversational tool, for use-cases from “solving hard technical problems with a partner”, to “diving off the insanity deep end”.
Background
Alice and Bob are having a conversation. Alice says X, which Bob responds to with Y, in the context of the conversation (the previous things that Alice and Bob have said to each other) and the context of the world (Bob’s priors). Typically, Bob’s System 1 formulates Y and Bob’s System 2 “edits” it (for lack of a better term) - in most cases, the final output has more to do with System 1 than System 2. However, most of the time in discussion is spent with these System 2 “add-ons” - formulating ideas into sentences, making sure that the vocabulary is appropriate for the conversation, etc.
Hypothesis: if you intentionally remove the System 2 filters from the conversation between Alice and Bob, then you get a rapid feedback loop where the System 1 responses are simultaneously much faster and shorter than the original, which lets the conversation have a much higher idea density.
Setup
We paired participants and asked them to come up with a topic to start their conversation on. Following that, their instructions were to say “the first thing that came into their head” after hearing their partner, and see where this led. After fifteen minutes of this, we checked in and had a discussion on how this went. Repeat about 6 times.
Observations
Individuals did not report a loss in the ease of communication or a lack of nuance - rather, they reported being much more tired than normal and that their perception of time was quite dilated. Someone likened it to “having a 40 minute conversation but feeling that only five minutes had passed.” Generally, sentiment was extremely positive at this being an alternative method of communication.
A few concrete things that some individuals did:
- Consciously refused to talk in sentences, and only in key words.
- Chose a “spiciness” level beforehand (think hotseat 1-10).
- Other variations included choosing to talk about either technical or emotional topics, focusing on responding to the last thing the person said vs. the first thing that came into their head, etc.
Use Cases
The most surprising outcome was that this method of conversation seems to be quite useful for technical discussions when both individuals have similar levels of intuition on the subject. It was counterintuitive for me when I tried this: I expected technical conversations to be driven much more by System 2 than System 1, especially when compared to other types of conversations. But when discussing some mathematical proof, it turns out the System 1 responses convey the motivations behind the logic more than the logic itself, and this is what helps the partner understand. See here.
As an introspection tool, it also seems quite useful. If both you and your partner are interested or confused by some social phenomenon, lowering your System 2 filters removes a lot of the implicit restrictions we place on our speech with regards to social contexts, and it opens the door for more valuable conversations.
Failure Modes
The obvious failure mode is a conversation in which Alice and Bob decide on a topic they both feel strongly about, disagree, and then one or both leaves feeling hurt/ having a worse opinion of the other. While you can’t eliminate this risk entirely, some safeguards make it much less likely:
- Setting a “spiciness” level for the conversation before it begins. Mutually arriving at what exactly a level “7” means is probably necessary.
- Only talking about emotionally charged topics with individuals you trust to handle it maturely.
- Having extremely low barriers to exit the conversation. Making it a social norm to get up and leave one of these at any moment is the bare minimum.

A subtler failure mode is a conversation which waffles between topics without any substance being exchanged between the participants. Such as:
Alice: "Do you prefer London or New York?"
Bob: "Purple."
Alice: "Clouds."
Bob: "Steak."
… and so on. I personally find these to be very entertaining, but it is a good idea to set expectations beforehand of exactly how unhinged you would like the conversation to be (some calibration is necessary).
Directed Babbling seems to have much higher idea density and not much information loss compared to typical conversation. I would recommend that you try this with someone sometime, especially if you’re stuck on a technical problem with a partner. If you do end up trying this, please let me know how it went! My sample size currently is quite small, and more data is always great!
Crossposted to LessWrong.
January 05, 2023
Hydrogen is the fuel of the future. It’s the most abundant element in the universe, is incredibly mass-efficient, and can be produced without much fuss. It can be used for both large and small scale energy production (think fusion and fuel cells respectively), and has virtually no emissions, carbon or otherwise. However, there’s just one problem.
It’s absurdly volume-inefficient.
One liter of hydrogen gas can produce about 0.01 MJ of energy at STP (standard temperature and pressure, 273K and 1 atm) compared to 34 MJ/L for gasoline. That’s roughly 0.03% of the energy output per liter. Granted, things improve drastically for liquid hydrogen, where the comparison is 8 MJ/L vs. 34, but this requires maintaining temperatures below −252.8°C, about twenty degrees above absolute zero.
Gaseous hydrogen isn’t that easy to store either: it requires containers pressurizable up to 700 bar (700x atmospheric pressure). That’s about 10,000 PSI. And even then, it takes over 5L of compressed hydrogen to match 1L of gasoline. The Department of Energy estimates that to meet most lightweight vehicular driving ranges, between 5-13 kg of hydrogen need to be carried onboard the vehicle. If we do some quick calculations (liquid hydrogen has a density of about 71 g/L), that means that vehicles need to carry roughly 70-185 L of liquid hydrogen (some multiple of this for gaseous hydrogen) to go anywhere. For all you Americans, this is roughly 19-49 gallons.
(For reference, 5-13 kg of hydrogen carries the energy of roughly 20-50 liters of gasoline, depending on whether you use hydrogen’s lower or higher heating value. I make no claim as to the specific accuracy of the numbers - these are napkin calculations - and this doesn’t take into account energy efficiencies created by recycling hydrogen or fuel cells being more efficient than combustion engines, etc.)
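Here’s the napkin math in code, so you can poke at the assumptions yourself. The constants (hydrogen’s lower heating value of ~120 MJ/kg, liquid hydrogen density of ~70.8 g/L, gasoline at ~34 MJ/L) are textbook ballpark figures, not precise claims:

```python
# Napkin math: how much volume does 5-13 kg of hydrogen actually occupy?
H2_LHV_MJ_PER_KG = 120.0        # lower heating value of hydrogen (~120 MJ/kg)
LH2_DENSITY_KG_PER_L = 0.0708   # density of liquid hydrogen
GASOLINE_MJ_PER_L = 34.0        # volumetric energy density of gasoline
LITERS_PER_GALLON = 3.785

for h2_kg in (5, 13):
    lh2_liters = h2_kg / LH2_DENSITY_KG_PER_L
    gasoline_equiv_liters = h2_kg * H2_LHV_MJ_PER_KG / GASOLINE_MJ_PER_L
    print(
        f"{h2_kg} kg H2 -> {lh2_liters:.0f} L of liquid hydrogen "
        f"({lh2_liters / LITERS_PER_GALLON:.0f} gal), "
        f"same energy as ~{gasoline_equiv_liters:.0f} L of gasoline"
    )
```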
It would not be feasible to have stereotypical sedans with fifty gallon gas tanks in a world of solely hydrogen fuel. Additionally, the extreme conditions under which liquid hydrogen must be stored bottleneck its production and transportation.
The Department of Energy set standards for hydrogen storage to be met by 2020 in order for hydrogen fuel to become feasible for portable power and light-duty vehicular applications. These were:
- 1.5 kWh/kg (overall gravimetric system performance)
- 1.0 kWh/L (overall volumetric system performance)
- $10/kWh (translates to $333/kg for stored hydrogen)

So far, none of these goals have been met, and I have doubts about the scalability of present research (although I’m open to criticism on this take). Let’s take a look at what’s being done to fix the issue.
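(The $/kWh figure converts to $/kg via hydrogen’s specific energy: a kilogram of hydrogen stores about $120 \ \text{MJ} = 33.3 \ \text{kWh},$ so
$$\$10/\text{kWh} \times 33.3 \ \text{kWh/kg} \approx \$333/\text{kg}.$$
)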
Short-term Solutions
The majority of short-term solutions to hydrogen’s issues as a widespread fuel consist of creating inexpensive, high pressure storage solutions for H2 gas. From the Department of Energy’s website again, this means developing fiber-reinforced composites that can be produced cheaply and withstand 700 bar pressures. As these don’t necessarily address the underlying volume-inefficiency problem, we can move on to the long-term solutions.
Long-term Solutions
Long-term solutions take two forms: higher-density gaseous storage and materials-based hydrogen storage. The former develops vessels that can compress H2 gas more, while the latter seeks to manufacture materials that have better volumetric hydrogen ratios. This post will mainly focus on the materials-based approach.
The main research avenues for materials-based hydrogen storage are metal hydrides, adsorbents, and chemical hydrogen storage materials. Let’s look at each.
Metal Hydrides
Metal hydrides are compounds in which hydrogen is bonded to metal atoms (in complex hydrides, the hydrogen effectively acts as a ligand). The strength and nature of these bonds vary widely with the metal, but they allow the compounds to act as hydrogen carriers without being decomposed themselves, which is useful for materials cycling (and drastically improves efficiency).
The issue is, the sorts of complex hydrides with the most promising properties cannot yet be produced cheaply at any scale, let alone in the quantities required for hydrogen to become a serious competitor to gasoline. However, if these compounds were able to scale, I would be very excited about our energy future.
Adsorbents
Adsorption is the process by which molecules stick to the internal or external surface of a material. Adsorbing hydrogen gas onto some high-surface-area compound lets it retain its original molecular form while packing it more densely than the free gas (which, as we’ve seen, is quite volume-inefficient). The issue is that the density gain over simple compression is modest, and the adsorbent materials are expensive to make.
Chemical Hydrogen Storage Materials
These are perhaps the most straightforward: find materials that you can either hydrolyze or pyrolyze to release hydrogen, and hope that they take up less volume than liquid hydrogen and are easier to store. Well, lo and behold, it turns out there’s an entire class of molecules like this: amine-boranes.
Amine-boranes are essentially ammonia molecules complexed to borane (BH3). The simplest amine-borane is ammonia borane, or borazane, with the chemical formula NH3BH3. It hydrolyzes and pyrolyzes well, has a higher volumetric density of hydrogen than liquid hydrogen itself, and is a stable solid at room temperature. What more could you ask for?
Well, it’s absurdly expensive to synthesize. I suspect the cost can be reduced at scale, but it does not seem feasible at the moment.
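As a sanity check on the density claim above (using textbook ballpark figures: molar mass of NH3BH3 about 30.9 g/mol, solid density about 0.78 g/cm³, liquid hydrogen at about 71 g/L): the six hydrogens contribute 6.05 g/mol, i.e. about 19.6% of the mass, so
$$0.196 \times 780 \ \text{g/L} \approx 153 \ \text{g of hydrogen per liter},$$
more than double liquid hydrogen’s 71 g/L. Not all of that hydrogen is recoverable in practice, but the headline number really does beat the cryogenic liquid.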