This is my basic, no-frills blog. It contains posts about both math and other topics. Lately, they've mostly been about linux and vim. I eventually intend to add a tag- and category-based word cloud so that the blog is easier to navigate.



Tracy-Widom universality for polymers in intermediate disorder

Jeremy Quastel and I started working on this project when I visited the Fields Institute in June 2014. The most recent version of the paper fixes some serious errors, but it's still under review and we have not yet posted the updated version on the arXiv.

When I was a graduate student, I had been reading about the Lindeberg-style argument that Terry Tao and Van Vu used in their random matrix universality papers from 2010. It’s a simple argument originally introduced by Lindeberg in 1922 to prove the central limit theorem. Davar Khoshnevisan believes that the method is due to Lyapunov; I struggled through parts of Lyapunov’s original paper in French to see if I could verify this, but I couldn’t quite get through it. The method was strengthened with a truncation argument and repopularized by Trotter in 1959. More recently, it has been applied quite successfully to more modern models by Chatterjee (2005).
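In case it is helpful, here is the replacement step in its simplest form; this is my summary of the textbook argument, not anything specific to the papers above. Take independent mean-zero, variance-one random variables $X_1, \ldots, X_n$ with finite third moments (the truncation mentioned above is what removes the third-moment assumption), independent standard Gaussians $Y_1, \ldots, Y_n$, and a smooth test function $f$ with bounded third derivative. Swapping the $X_i$ for the $Y_i$ one at a time and Taylor expanding to third order around the untouched part of the sum, the first- and second-order terms cancel in expectation because the first two moments match, so each swap costs $O(n^{-3/2})$ and

$$ \Bigl\lvert \mathbb{E}\, f\Bigl( \tfrac{X_1 + \cdots + X_n}{\sqrt{n}} \Bigr) - \mathbb{E}\, f\Bigl( \tfrac{Y_1 + \cdots + Y_n}{\sqrt{n}} \Bigr) \Bigr\rvert \;\leq\; \sum_{i=1}^{n} \frac{\lVert f''' \rVert_\infty}{6\, n^{3/2}} \bigl( \mathbb{E} \lvert X_i \rvert^3 + \mathbb{E} \lvert Y_i \rvert^3 \bigr) \;=\; O\bigl(n^{-1/2}\bigr). $$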

I was trying to apply the method to first-passage percolation, and mentioned this to Jeremy when I started at the Fields Institute. But it’s clear that a naive perturbation like this is not going to work: if one tries to use the method to prove that the fluctuations of the passage time are universal, one necessarily proves that the time-constant should be universal as well. This is not generally true (see Cox-Durrett 1981 for a first-passage example), so at the moment, this does seem like a serious obstruction. Luckily, Jeremy had co-discovered the intermediate disorder regime in directed polymers a few years earlier, and he suggested that we try to apply it here.

The analog of the time-constant in the $d+1$ dimensional polymer model is the free-energy. There are two parameters here: the length of the polymer $N$ and the inverse temperature $\beta$. The fluctuations of the free energy are thought to be given by the Tracy-Widom GUE (TW) distribution even in the intermediate disorder regime $O(1) \gg \beta \geq O(N^{-1/4})$. A non-rigorous argument for this appears in the physics paper by Alberts, Khanin and Quastel (2014). In the special regime when $\beta = \tilde\beta N^{-1/4}$, Alberts-Khanin-Quastel (2015) proved that the partition function of the polymer scales to the solution of the stochastic heat equation. The log of the solution to the stochastic heat equation has long-time $(\tilde\beta \to \infty)$ TW fluctuations (Amir-Corwin-Quastel 2011). So in the special intermediate-disorder double limit $N \to \infty$ and then $\tilde\beta \to \infty$, it’s known quite generally that the polymer has TW fluctuations. However, when $O(1) \gg \beta \gg O(N^{-1/4})$, the centered and rescaled log partition function of the polymer should converge directly to a TW random variable without having to take this double limit. But this had not been proved for any standard discrete polymer.
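To fix notation, here is the standard $1+1$ dimensional setup; this is my paraphrase, and the conventions (point-to-line versus point-to-point, the exact centering) may differ from the paper. With an iid environment $\{\omega(i,x)\}$ and a simple random walk $S$ started at the origin, the partition function is

$$ Z_N(\beta) \;=\; \mathbb{E}_S\Bigl[ \exp\Bigl( \beta \sum_{i=1}^{N} \omega(i, S_i) \Bigr) \Bigr], $$

and the statement above is that for $N^{-1/4} \ll \beta \ll 1$ there should be deterministic $c_N(\beta)$ and $\sigma_N(\beta)$ with

$$ \frac{\log Z_N(\beta) - c_N(\beta)}{\sigma_N(\beta)} \;\Longrightarrow\; \mathrm{TW}_{\mathrm{GUE}} \qquad \text{as } N \to \infty, $$

with no second limit in $\beta$ required.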

The free-energy does indeed become “universal” in the intermediate disorder regime. So there is some hope of using the Lindeberg replacement strategy. Moreover, since $\beta \to 0$ in this regime at a given rate, we can use this to control the error in a Taylor expansion of the log partition function. For example, this $\beta \to 0$ property is implicit in the Sherrington-Kirkpatrick spin glass, and hence the Lindeberg strategy has been quite successful there.

The situation is complicated by the fact that no standard polymer is known to be in the TW universality class, and so an “invariance theorem” is a little unsatisfactory. Seppalainen’s log-gamma polymer is not quite a standard polymer, but it’s solvable and can be shown to have TW fluctuations, at least for $\beta \geq \beta_* > 0$ (O’Connell-Seppalainen-Zygouras 2014, Borodin-Corwin-Remenik 2013, Borodin-Corwin-Ferrari-Veto 2015). We first show that the log-gamma polymer has TW fluctuations in intermediate disorder; then we use the Lindeberg strategy to prove that polymers that are close to the log-gamma polymer also have TW fluctuations.

Weak and strong mixing

I had a useful conversation with Jon Chaika today where he explained some elementary ergodic theory to me. I really ought to know these things better, so I decided to write it down and work some of the details out.

Given any dynamical system $(\OFP,T)$, we can define an $L^2$ unitary operator

$$ Uf = f \circ T. $$

The unitarity follows from the fact that $T$ preserves measure. We express the ergodic average as

$$ S_n f = \frac{1}{n} \sum_{k=0}^{n-1} f \circ T^k = \frac{1}{n} \sum_{k=0}^{n-1} U^k f. $$

The ergodic theorem says that for any $L^1$ function, $S_nf \to \overline{f}$ almost surely and in $L^1$. Here, $\overline{f}$ is the conditional expectation with respect to the invariant $\sigma$-algebra of $T$. We will assume ergodicity; i.e., the invariant $\sigma$-algebra is trivial.

The usual notions of ergodicity and mixing can be described in terms of the spectrum of the unitary operator $U$. For example, we know that $T$ is ergodic if and only if the eigenvalue $1$ is simple. In other words, the only eigenfunctions with eigenvalue $1$ are the constant functions.

A system is weak-mixing if it has no eigenvalues other than $1$. In terms of the operator $U$, a system is weak-mixing iff for each $f \in L^2$ and all $g \in L^2$,

$$ \frac{1}{n} \sum_{k=0}^{n-1} \left\lvert \langle U^k f, g \rangle - \langle f, 1 \rangle \langle 1, g \rangle \right\rvert \to 0. $$

From this it is easy to argue as follows. Suppose there is a $\lambda \neq 1$ (but $\lvert\lambda\rvert = 1$) such that

$$ Uf = \lambda f \quad \text{for some nonzero } f \in L^2. $$

Then from the characterization of weak-mixing above, we get

$$ \frac{1}{n} \sum_{k=0}^{n-1} \left\lvert \langle U^k f, g \rangle - \langle f, 1 \rangle \langle 1, g \rangle \right\rvert = \lvert \langle f, g \rangle \rvert \to 0, $$

since pairing $Uf = \lambda f$ with the constant function $1$ forces $\langle f, 1 \rangle = 0$ when $\lambda \neq 1$, and $\lvert \langle U^k f, g \rangle \rvert = \lvert \lambda \rvert^k \lvert \langle f, g \rangle \rvert = \lvert \langle f, g \rangle \rvert$. Hence $\langle f, g \rangle = 0$.

Since $g$ can be any $L^2$ function, this completes the proof.

There is a spectral description of strong mixing as well, but it’s in terms of the spectral measure of $U$. Apparently, the requirement is that the spectral measure is Rajchman. I cannot reproduce the condition because I can’t quite remember it, but it seems like it’s saying something about the decay of the Fourier coefficients of the spectral measure.
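For what it’s worth, here is the version of the condition I believe is meant; this is my reconstruction from the standard definitions, not the statement from the conversation. Strong mixing is equivalent to

$$ \langle U^n f, g \rangle \;\to\; \langle f, 1 \rangle \langle 1, g \rangle \qquad \text{for all } f, g \in L^2, $$

and for mean-zero $f$ the spectral theorem gives $\langle U^n f, f \rangle = \int e^{2\pi i n \theta}\, d\sigma_f(\theta)$, so the condition says that the Fourier coefficients of every spectral measure $\sigma_f$ tend to zero; that is, each $\sigma_f$ is a Rajchman measure.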

The dyadic machine

One frequently hears that the obstructions to weak-mixing are compact group rotations. My impression was that one should think of systems that are not weak-mixing as containing a copy of the standard rotation on the unit circle, or a product of such rotations. This is not correct, and systems that are not weak-mixing can contain really complicated compact group rotations.

A canonical example of a compact group rotation is the so-called dyadic machine. Dyadic machines can have really complicated spectra.

The system is defined on $\W = \{0,1\}^{\mathbb{N}}$. Suppose $x = (1,1,0,0,\ldots) \in \W$. Then, $Tx = (0,0,1,0,\ldots)$. That is, you add $1 \bmod 2$ to the first coordinate and carry over the remainder to the next coordinate. Precisely,

$$ (Tx)_i = x_i + x'_{i-1} \pmod 2, \qquad x'_0 = 1, \qquad x'_i = \begin{cases} 1 & \text{if } x_i + x'_{i-1} = 2, \\ 0 & \text{otherwise}, \end{cases} $$

where the $x'_i$ is simply used to track the “carrying over” operation. You can think of $\W$ as the unit interval by mapping each point to its binary expansion. In this case, the map acts quite weirdly on the unit interval.
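To make “weirdly” a little more concrete, here is the interval picture under the convention that $x$ corresponds to $\sum_{i \geq 1} x_i 2^{-i}$; this computation is mine, not from the original post. Adding one to the first digit with carrying translates each dyadic interval:

$$ Tx \;=\; x - 1 + 2^{-n} + 2^{-n-1} \qquad \text{for } x \in \bigl[\, 1 - 2^{-n},\; 1 - 2^{-n-1} \,\bigr), \quad n = 0, 1, 2, \ldots, $$

so each interval $[1 - 2^{-n}, 1 - 2^{-n-1})$ is translated onto $[2^{-n-1}, 2^{-n})$. This piecewise translation is sometimes called the von Neumann-Kakutani transformation.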

This map is uniquely ergodic, and the unique invariant measure is the iid uniform (Bernoulli) measure on $\W$. The unique ergodicity of this iid measure is quite surprising to me. It means that there are no bad sets to consider for the ergodic theorem; things converge “surely” and not just almost surely.

Now, one notices that the function $f_1(x) = (-1)^{x_1}$ satisfies

$$ f_1(Tx) = (-1)^{x_1 + 1} = -f_1(x), $$

and therefore has eigenvalue $-1$. Similarly,

$$ f_2(x) = i^{\,x_1 + 2x_2} \quad \text{satisfies} \quad f_2(Tx) = i\, f_2(x), $$

since $x_1 + 2x_2$ increases by $1 \bmod 4$ under $T$. This implies that, for example,

$$ h(x) = s^{\,x_1 + 2x_2 + 4x_3 + 8x_4 + 16x_5} $$

is an eigenvector with eigenvalue $s$ whenever $s^{2^5} = 1$, i.e. whenever $s$ is a $2^5$-th root of unity! There are a whole host of such eigenvectors and corresponding eigenvalues. One can change the space to $\W = \{0,\ldots,k-1\}^{\mathbb{N}}$ and change the operation to addition mod $k$ with carryover to get other spectra. Despite its complexity, the dyadic machine is not weak-mixing. Moreover, it’s a compact group rotation.

This note is clearly incomplete: I haven’t really specified what a compact group rotation is, and I haven’t proved that the dyadic machine is a compact group rotation. I intend to revisit this at some point in the future. Some of these concepts can be found in T. Austin’s notes. The notes also prove that the obstructions to weak mixing are such compact group rotations.

Fall 2018 Activity

This was my first ever semester at the University of Rochester. I had only one class to teach, and thankfully, it was one I’d done before: MTH 201, Introduction to Probability. I spent most of September getting settled, and buying books and computers for my office.

I visited Ido Ben-Ari at the University of Connecticut on Sep 8 2017. I spoke about my joint work with Jon Chaika (University of Utah) that I’m quite excited about. It’s a generalized description of the collective behavior of random, semi-infinite walks on the lattice. The major assumption on the walks is that they have to coalesce if they meet. Under some very mild additional assumptions, we prove a striking dichotomy theorem: either all the walks coalesce, or they form bi-infinite trajectories with an asymptotic velocity that foliate the space. We built this theory to understand the behavior of geodesics in first- and last-passage percolation. We use a lot of tools from ergodic theory, and I learned a lot from Jon. We also construct a bunch of examples. Our main example is a first-passage model with non-crossing geodesics that do have an asymptotic direction.

It was almost by accident that I met Jon. I had this general framework that I thought would be a useful way to understand the behavior of generalized ‘Busemann functions’ in first- and last-passage percolation. Generalized Busemann functions appear as minimizers in the newly discovered variational formulas for first- and last-passage percolation (Georgiou, Rassoul-Agha, Seppalainen, Comm. Math. Phys 2016, and Krishnan, Comm. Pure Appl. Math., 2016). Luckily, the first ergodic theorist I randomly approached at Utah was an amazing mathematician. There is a typo-ridden draft on the arXiv, but we have a new version that we’re almost ready to send out for review.

I spent October putting together my NSF grant. It was on the properties of the Busemann functions I mentioned. It was my first time writing a grant about this project, and it took a while to put together.

I spent a part of November working on a project with an undergraduate student, Scott, whom I advised at the University of Utah. We spent some time refining our writeup and sending it out to people. Scott came up with the idea for the project: it’s about a connection between colored non-crossing partitions (see Marberg, Eric (2013) ‘Crossings and nestings…’, MR3139391), longest increasing subsequences constrained to be smaller than a certain number, and certain special Kostka numbers. Kostka numbers appear in the theory of symmetric functions in a seemingly unrelated context. We worked on cleaning up and writing down Scott’s bijection between the Kostka numbers and these constrained longest increasing subsequences. I’m mainly interested in the combinatorics because of the connection between longest increasing subsequences and several classical models in probability.

We wrote to a few experts about the results, and it seems that some parts of it ought to be well-known. However, some of our generalizations appear to be new. The most interesting thing, for us at least, is to connect the Kostka numbers to colored non-crossing partitions. We’ve made some progress, but we’re not really close to anything. So it’s not quite ready to publish, but it’s certainly a very good start for Scott. He’s just applied to grad schools this year. He won the prestigious Churchill scholarship to spend a year at Cambridge. He intends to learn more about analytic and algebraic combinatorics there.

I also attended the Northeast Probability Seminar at Columbia in mid-November. I really enjoyed Hugo Duminil-Copin’s talk, and want to look into the paper of Schramm that he mentioned. It has a cool new inequality in it. I also had a chance to speak to LP Arguin, and we’re collaborating on a project on log-correlated fields and spin glasses.

Jeremy Quastel and I discovered a small hiccup in a paper that is under review. We were using a theorem from Borodin, Corwin, Ferrari and Veto 2013 that isn’t quite right. We spent a bit of time fixing it and incorporating the referees’ suggestions. We hope to be done in the next few weeks.

Securely erase a server

I recently had to securely erase a server that I had only ssh access to. I was worried that running

rm -rf --no-preserve-root /

would cause a kernel panic and not erase the drive properly. One way to overcome this problem is to create a ramdisk and work from it. Since ramdisks are stored in memory, a shell running inside one can overwrite the root filesystem without destroying the tools it is using. First create the ramdisk directories and copy over a few binaries:

mkdir -p /dev/shm/ramdisk/bin /dev/shm/ramdisk/dev /dev/shm/ramdisk/lib64 /dev/shm/ramdisk/usr/lib
cp /bin/dd /bin/bash /bin/ls /dev/shm/ramdisk/bin
ldd /bin/bash
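The ldd line lists the shared libraries that bash needs; dd and ls have their own dependencies as well. If you want to automate the copying, a loop along these lines should work. This is a sketch, not exactly what I ran at the time; it assumes GNU cp, whose --parents flag recreates each library's directory path under the ramdisk.

# copy every library path that ldd reports, keeping the original directory layout
for lib in $(ldd /bin/bash /bin/dd /bin/ls | grep -oE '/[^ :)]+'); do
    cp --parents "$lib" /dev/shm/ramdisk/
done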

If you prefer to do it by hand, copy each library that ldd lists into

/dev/shm/ramdisk/lib

or the appropriate directory like lib64 or usr/lib. Then bind-mount your dev directory

mount -o bind /dev/ /dev/shm/ramdisk/dev

Chroot into the ramdisk

chroot /dev/shm/ramdisk /bin/bash

Then simply run

dd if=/dev/zero of=/dev/sda bs=1M

pv shows progress and is often faster than dd with its default block size, so you can use that instead.
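For example, something along these lines should work if pv is installed; it reports throughput as it writes and exits with a “no space left on device” error once the disk is full, just like dd does.

pv /dev/zero > /dev/sda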

Some people say that it's necessary to go over old hardware with bytes other than zeros, and that you would have to make multiple passes. But I do not think it is necessary. See this discussion:

https://security.stackexchange.com/questions/10464/why-is-writing-zeros-or-random-data-over-a-hard-drive-multiple-times-better-th

The answers there essentially say to just write it with zeros and forget it. Some storage media have a secure-erase feature that you can trigger with hdparm; you first set a drive security password with --security-set-pass, and then run something like

hdparm --security-erase <password> /dev/sdX

On SSDs, secure erasing is a lot harder. One would have to do this quite carefully, I would imagine.
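If the drive and the kernel support it, one thing to try (my suggestion, not something I have tested on this setup, and it relies on the drive actually honoring the command) is to discard every block with blkdiscard from util-linux:

blkdiscard /dev/sdX    # /dev/sdX is a placeholder for the SSD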

Vim scrolls very slowly

My vim version

VIM - Vi IMproved 8.0 (2016 Sep 12, compiled Sep  5 2017 11:27:38)
Included patches: 1-987
Compiled by Arch Linux

has been hit by a weird scrolling bug when

:set cursorline

or

:set relativenumber

is set. This is some regexp computation bug, according to Stack Overflow. I've simply disabled these things unless I need to use them.
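For what it's worth, the workaround I've seen suggested most often on Stack Overflow for this kind of slowdown (I haven't verified that it fixes this particular build) is to force the old regexp engine:

:set regexpengine=1

after which cursorline and relativenumber are supposedly usable again.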