Is the problem that the densest packings are not unique? I don't see this as a problem per se. For example: in the Kepler problem of packing spheres in 3D, the packing obtained from any sequence of what are sometimes called the "a", "b", and "c" layers — as long as no two successive layers are given the same letter — is equally dense. That gives uncountably many ties. This may be surprising, but it's just the way things are, and I see no need to arrange things so that it isn't the case.

On the other hand, if asymptotic density is the only thing to maximize, I find it somewhat unpleasant that starting with, say, the fcc packing and removing all (unit) spheres lying closer to the origin than a distance of 10^10^10^10^10^10^10^10^10^10 (exponents grouped from the top down) yields yet another packing tied for densest. One could avoid such "cheater" configurations by requiring that no sufficiently large region of the configuration can be made denser by adding more spheres. I'm not sure whether it is known that some densest configuration always satisfies this condition. Certainly there are densest configurations in dimensions 1, 2, 8, and 24 that *do* satisfy it, and I believe there is no dimension known (i.e., not dimension 3, either) in which every densest configuration fails it. But in any case this cuts down on some of the unpleasantness.

Jim: Before we get to how your theory might be constructed, I would like to understand much better what it is you are aiming to achieve or avoid.

—Dan

-----

Here's some background on where my recent question (so quickly resolved by Andy Latto) came from: the itch I'm trying to scratch is the fact that even though the sphere-packing problems in dimensions 2, 8, and 24 have beautiful canonical solutions, these solutions are only known to be "optimal" in ways I find unsatisfactory.
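A one-dimensional toy version of the "cheater" phenomenon above (my illustration, not from either message): delete a huge initial segment from the evens, and the asymptotic density is still 1/2, even though the mutilated set is plainly "smaller" and nowhere near saturated near the origin. The hole size 10**4 below is an arbitrary stand-in for the tower of exponents.

```python
# 1-D analogue of the "cheater" packing: deleting a large initial segment
# from the evens leaves the asymptotic density unchanged at 1/2.
# Illustrative sketch only; HOLE = 10**4 is an arbitrary choice.

HOLE = 10**4

def density_upto(member, N):
    """Fraction of {0, ..., N-1} belonging to the set given by `member`."""
    return sum(1 for n in range(N) if member(n)) / N

def evens(n):
    return n % 2 == 0

def cheater(n):
    return n % 2 == 0 and n >= HOLE  # evens with the start removed

for N in (10**4, 10**5, 10**6):
    print(f"N = {N}: evens {density_upto(evens, N)}, cheater {density_upto(cheater, N)}")
```

The cheater's partial densities climb back toward 1/2 as N grows, so density alone cannot distinguish it from the untouched evens.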
We say, for instance, that the six-around-one packing in 2D "achieves the maximal packing density" --- but then so do lots of other packings (for instance, any perturbed version of the six-around-one packing). Or we say that the six-around-one packing is, up to isometry, the "unique lattice packing", but the lattice condition seems extraneous (and indeed it becomes problematic in higher dimensions, where it's not even known whether densest lattice packings exist).

My diagnosis of the problem is that density is too coarse a way to measure packings. We need a non-Archimedean measure (or rather valuation) that can "see" both the density of a lattice packing and the finite perturbations we can apply to it, even though the two are separated by infinite orders of magnitude. I have various ideas for how such a theory could be constructed, but none of them feels quite right, and there are significant technical challenges to applying them.

So, rather than attack the sphere-packing problem head-on, I'm taking a detour through one-dimensional packing problems. And, to make things even simpler, I'm starting by focussing on discrete packing problems. Furthermore, instead of packing Z, I'm just packing the right half of it. So I want a way to measure subsets of {0,1,2,...} that includes cardinality and density as special cases.

One approach is to measure a set S by the limiting behavior of its generating function \sum_{n \in S} q^n as q approaches 1; you can see some low-hanging fruit of this idea at http://mathoverflow.net/questions/248994/comparing-sizes-of-sets-of-natural-... . I was trying to apply this approach to packing problems and related combinatorial optimization problems. For instance: consider subsets S of {0,1,2,...} with the property that no two elements of S differ by 3 or 5. In what sense is S_0 := {0,2,4,...} the unique "biggest" set of this kind? It has density 1/2, but so does S_1 := {1,3,5,...}.
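To make the q-series comparison concrete, here is a small numerical sketch (mine, not from the thread). It computes the truncated generating function |S|_q = \sum_{n \in S} q^n for the two density-1/2 sets above and for a third set built from blocks {0,1,2} mod 8 (density 3/8), which starts out looking denser:

```python
# Numerical sketch of the q-series measure |S|_q = sum over n in S of q^n,
# truncated at N terms. S2 is the blocks-of-three pattern {0,1,2} mod 8;
# it avoids differences 3 and 5 but has density only 3/8.

N = 20000  # truncation: q**N is negligible for the q values below

def q_size(S, q):
    """Truncated generating function sum_{n in S, n < N} q^n."""
    return sum(q**n for n in S)

S0 = range(0, N, 2)                                       # evens, density 1/2
S1 = range(1, N, 2)                                       # odds, density 1/2
S2 = [8*k + r for k in range(N // 8) for r in (0, 1, 2)]  # density 3/8

for q in (0.95, 0.99, 0.999):
    a, b, c = q_size(S0, q), q_size(S1, q), q_size(S2, q)
    print(f"q = {q}:  |S0|_q = {a:.3f}  |S1|_q = {b:.3f}  |S2|_q = {c:.3f}")
    assert a > b > c  # the ordering predicted as q -> 1
```

Note that "sufficiently close to 1" matters: at q = 0.9, for example, the truncated |S2|_q actually exceeds |S1|_q, and the ordering only locks in nearer to 1.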
And the set S_2 := {0,1,2, 8,9,10, 16,17,18, ...} starts out looking denser than S_0. But examining the associated q-series as q approaches 1 settles the matter: using the notation of the MathOverflow article, we have |S_0|_q > |S_1|_q > |S_2|_q for all q < 1 sufficiently close to 1. So in this sense S_0 is "bigger" than the other two sets.

I'd like to prove that for every finite set D of positive integers, not just D = {3,5}, there is a unique biggest set S of natural numbers such that no two elements of S differ by an element of D, where "biggest" means comparison of generating functions as q goes to 1. Naturally, I expect this unique maximal S to be eventually periodic. (In calling a set eventually periodic I mean, of course, that its indicator function is eventually periodic.) If I could prove this, a simple coding would give results about packings of the set of natural numbers by translates of a finite set.

Anyone have any thoughts about this?

-----
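As a small experiment toward the conjecture (my sketch; the helper name densest_periodic is made up), one can brute-force the densest purely periodic set avoiding a finite difference set D over small periods. A set of the form {n : n mod m in R} avoids every difference in D exactly when no d in D is congruent mod m to a difference of two residues in R; in particular m must divide no d, since otherwise n and n + d share a residue.

```python
# Brute-force sketch: densest purely periodic set avoiding differences in D,
# searching periods m up to max_period. Hypothetical helper, not from the thread.

from itertools import combinations

def densest_periodic(D, max_period=12):
    best = (0, 1, [])  # (|R|, m, R) with |R|/m the best density found so far
    for m in range(1, max_period + 1):
        if any(d % m == 0 for d in D):
            continue  # m divides some forbidden difference: no residue set works
        targets = {d % m for d in D}
        for k in range(m, 0, -1):  # try residue sets from largest size down
            hit = next((R for R in combinations(range(m), k)
                        if all((r - s) % m not in targets
                               for r in R for s in R)), None)
            if hit:
                if k * best[1] > best[0] * m:  # i.e. k/m > best density
                    best = (k, m, list(hit))
                break  # k was maximal for this m
    return best

k, m, R = densest_periodic({3, 5})
print(f"best density {k}/{m}, residues {R} mod {m}")
```

For D = {3, 5} the search over periods up to 12 recovers density 1/2 via the residue {0} mod 2, i.e. the evens S_0 — consistent with the claim above, though of course it says nothing about eventually periodic sets or the q-series comparison itself.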