Dan (and others), Is the problem that the densest packings are not unique?
Not exactly. The problem is that under the prevalent notion of "densest", the densest packings aren't unique (up to isometry) in those *special* dimensions (2, 8, and 24) in which they morally "ought" to be unique.
I don't see this as a problem per se. For example: in the Kepler problem of packing spheres in 3D, the packings obtained from any sequence of what are sometimes called the "a", "b", and "c" layers — as long as no two successive layers are given by the same letter — are all tied for densest. That gives uncountably many ties. This may be surprising, but it's just the way things are, and I see no need to arrange things so that this is not the case.
I agree. However, in the theory I envision, it would be a theorem that these uncountably many packings, AND NO OTHERS, are the densest packings. So there'd be an explicit parametrization of the densest packings.
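The explicit parametrization mentioned above is by infinite sequences over {a, b, c} in which no two consecutive letters agree. A short Python sketch (the function name is my own) enumerates the length-n prefixes of such sequences; there are 3 · 2^(n-1) of them, which is why the full family is uncountable:

```python
from itertools import product

def barlow_sequences(n):
    """All layer sequences of length n over {'a','b','c'} with no two
    consecutive layers equal -- the parametrization of the Barlow
    packings, truncated to n layers."""
    return [s for s in product("abc", repeat=n)
            if all(s[i] != s[i + 1] for i in range(n - 1))]

# The count is 3 * 2**(n-1): three choices for the first layer,
# two for each layer after that.
for n in range(1, 6):
    print(n, len(barlow_sequences(n)), 3 * 2 ** (n - 1))
```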
On the other hand, if asymptotic density is the only thing to maximize, I find it somewhat unpleasant that starting with, say, the fcc packing and removing all (unit) spheres lying closer to the origin than a distance of 10^10^10^10^10^10^10^10^10^10 (exponents grouped from the top down) results in yet another packing tied for densest.
That's part of my motivation too.
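A one-dimensional analogue of the fcc example makes the point concrete: delete everything within some bounded distance of the origin from the packing {0, 1, 2, ...} and the asymptotic density is still 1. A minimal numerical sketch (the window sizes and the deleted segment 0–999 are illustrative choices, standing in for the enormous distance above):

```python
def density(indicator, M):
    """Fraction of the integers in [0, M) that belong to the packing."""
    return sum(indicator(n) for n in range(M)) / M

# The full packing of the nonnegative integers, and the same packing
# with everything at distance less than 1000 from the origin removed.
full = lambda n: 1
gappy = lambda n: 1 if n >= 1000 else 0

# Both densities tend to 1 as the window M grows, so the two packings
# are tied under asymptotic density alone.
for M in (10**4, 10**5, 10**6):
    print(M, density(full, M), density(gappy, M))
```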
Jim: Before we get to how your theory might be constructed, I would like to understand much better what it is you are aiming to achieve or avoid.
That's a great question, and one that I'll probably be able to answer correctly only after giving a few incorrect answers. But let me make a start. One characteristic of the theory should be that in two dimensions, the densest ways to pack disks of equal size are precisely the highly symmetrical six-around-one packings and nothing else. Similar uniqueness claims should prevail in dimensions 8 and 24. (I also suspect that in three dimensions, the densest packings are the Barlow packings and nothing else.) I wrote something a few years back about this, which I've lightly edited and uploaded to http://jamespropp.org/thoughts-about-packings.pdf . There I use a quadratic-exponential integration kernel to regularize divergent integrals (which in this case are really divergent sums, since the functions are infinite sums of delta functions). In the toy version of this theory that I've been exploring lately, where I'm just packing {0,1,2,3,...}, I can use a linear-exponential summation kernel, and life is simpler (since rational functions are nicer than theta functions!), but I still can't prove the uniqueness results that I sense are true.

Jim
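In the toy setting described above, a linear-exponential kernel assigns weight e^(-εn) to a packed point n. The following sketch (the truncation cutoff, the ε values, and the choice of the shifted packing {1, 2, 3, ...} as the comparison are all mine, for illustration) shows how the regularized sums can break a tie in asymptotic density:

```python
from math import exp

def regularized_mass(points, eps, cutoff=10**5):
    """Sum of the kernel weights e^(-eps*n) over the packed points n.
    The sum is truncated at `cutoff`; for the eps values used below,
    the discarded tail is negligible."""
    return sum(exp(-eps * n) for n in points(cutoff))

full = lambda N: range(0, N)     # pack every nonnegative integer
shifted = lambda N: range(1, N)  # the same packing with the point 0 removed

for eps in (0.1, 0.01, 0.001):
    f = regularized_mass(full, eps)
    s = regularized_mass(shifted, eps)
    # Both masses diverge like 1/eps as eps -> 0, but their difference
    # stays 1, so the kernel separates the two packings even though
    # both have asymptotic density 1.
    print(eps, f, s, f - s)
```

The full packing's regularized mass has the closed form 1/(1 − e^(−ε)), a rational function of q = e^(−ε), which is one way to read the remark that rational functions are nicer than theta functions.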