This is a fancy way to amplify noise.
--Right!... but you may not yet appreciate why you don't want a direct approach; my "fancy" way actually is a lot better and simpler than the so-called "non-fancy" ways would be! WRONG WAY TO GO: connect a very high-gain amplifier (gain = 1 million or so) to a tiny noise source. If you do that, then the amp is expensive, prone to oscillation, and slow, and your tiny noise source will be contaminated by nonrandom noise (crosstalk from deterministic signals and such), giving you bad randomness. My right way involves only a low-gain, fast, cheap amp, and if there is contamination from nonrandom noise, that's no problem: any amount of true-random noise mixed in, no matter how small a fraction, still gets amplified exponentially and still gives you perfect randomness out. And the design is "pipelined," hence elegant.
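To see that exponential amplification numerically, here is a minimal simulation sketch (not the circuit itself; it borrows the slope +3, -3, +3 zig-zag map from the notes further down, and the 1e-12 offset is an illustrative stand-in for the tiny physical noise):

```python
def f(x):
    # Piecewise-linear map of [-1, 1] into itself with slopes +3, -3, +3,
    # so any small perturbation grows by roughly 3x per iteration.
    if x < -1.0/3.0:
        return 3.0*x + 2.0
    if x <= 1.0/3.0:
        return -3.0*x
    return 3.0*x - 2.0

a = 0.1234567
b = a + 1e-12           # a tiny "noise" difference between two trajectories
for step in range(30):
    print(step, abs(a - b))   # separation triples each step until it saturates
    a, b = f(a), f(b)
```

After about 25 iterations the picoscale difference has grown to the full size of the interval, which is exactly why a low-gain stage iterated in a loop beats one huge-gain amplifier.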
The problems with analog multipliers (inaccuracy and small bandwidth) can be avoided by just amplifying the noise directly, with AC coupling so as not to rail the amplifier.
--wrong approach, see above! And we don't have to rail the amp, because [-1, 1] volts could be a subrange of the rail-to-rail range.
Also, placing the sample-and-hold within the feedback loop makes the system sensitive to S/H offsets. In addition, what prevents the voltage from exceeding the ±1 volt range, and thus locking the circuit into a fixed state?
--there will always be offsets & such, all of which we can just consider as computing with a slightly wrong iteration function. Now there are some notes & caveats (sketched in code after this list):

1. Really, map [-1,1] not into itself but into a slight subinterval of itself, thus making the endpoints repellors. Also, the definition of f really needs to be stated on a domain that is a superset of the interval, and there too it had better map into the interval, to avoid this lock-up issue. (It is possible, using op-amps and diodes, to provide "clipping" to keep the signal within a subinterval.)

2. The zig-zag-zig piecewise-linear function with slopes +3, -3, +3 can be used as an alternative iterator; it is also easy to build with diodes and an ideal op-amp, and it generates 1 random trit per iteration. The T3 Chebyshev polynomial can likewise serve as the iterator to get 1 trit per iteration, but it seems less easy to build a circuit for.

3. The advantage of trits is that you can still use them for bits, but with a safe amount of elbow room: you get lg 3 ≈ 1.585 bits of entropy per iteration, which is more than enough, so even with a slightly inaccurate f-computation each iteration you remain safely above 1 bit of entropy.

4. With a slightly inaccurately computed f we get less than the ideal amount of entropy per bit, and the bits will be neither exactly 50-50 nor exactly uncorrelated. However, the correlations should fall off exponentially with time. Hence, if you pipe these bits into a digital post-processor that XORs them with bits from the past (say, from 100 iterations ago, using a mod-2 linear feedback shift register of length about 100), the resulting bits will be extremely uncorrelated, especially if the post-processor then outputs only a fraction of the bits it inputs, as a safety measure to make sure we get enough entropy. An old trick to get rid of bias in bits: take two bits X, Y; if they differ, output 0 if X < Y, else 1; if they are equal, discard them. This yields exactly-50-50 bits out from biased bits in, but wastes slightly over half of them. There are other tricks, based on e.g. data-compression algorithms, which also get rid of bias with less wastage but more computing.
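Here is a minimal numerical sketch of notes 1-3 (a simulation, not the circuit; the `noise` and `clip` values are illustrative assumptions standing in for the physical randomness and the diode clipper):

```python
import random

def zigzag(x):
    # The slope +3, -3, +3 piecewise-linear iterator on [-1, 1] from note 2.
    if x < -1.0/3.0:
        return 3.0*x + 2.0
    if x <= 1.0/3.0:
        return -3.0*x
    return 3.0*x - 2.0

def branch(x):
    # Which of the three monotone branches x lies in: the emitted trit.
    if x < -1.0/3.0:
        return 0
    if x <= 1.0/3.0:
        return 1
    return 2

def generate_trits(n, seed=0.2718, noise=1e-6, clip=0.999):
    # One trit (ideally lg 3 ~ 1.585 bits of entropy) per iteration.
    # `noise` stands in for the physical randomness re-injected each cycle;
    # `clip` confines the state to a slight subinterval, per note 1.
    x = seed
    out = []
    for _ in range(n):
        out.append(branch(x))
        x = zigzag(x) + random.uniform(-noise, noise)
        x = max(-clip, min(clip, x))
    return out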
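And a matching sketch of the post-processing in note 4. The lag-XOR below is a deliberate simplification of the length-100 LFSR idea (not the same circuit), and `generate_trits` comes from the sketch above:

```python
def lag_xor(bits, lag=100):
    # XOR each bit with the bit from `lag` iterations ago. Since the raw
    # stream's correlations fall off exponentially with time, the lagged
    # bit is nearly independent and the XOR cancels most of the bias.
    return [bits[i] ^ bits[i - lag] for i in range(lag, len(bits))]

def von_neumann(bits):
    # The old debiasing trick: take bits pairwise; if they differ, output
    # 0 when X < Y (i.e. X=0, Y=1) else 1; if equal, discard the pair.
    # Exactly 50-50 output for independent biased input, at the cost of
    # discarding slightly over half the bits.
    out = []
    for x, y in zip(bits[0::2], bits[1::2]):
        if x != y:
            out.append(0 if x < y else 1)
    return out

raw = [t & 1 for t in generate_trits(20000)]  # crude trit->bit map, biased ~2/3-1/3
clean = von_neumann(lag_xor(raw))             # whitened, then exactly debiased
```

The trit-to-bit squeeze wastes some of the lg 3 bits of entropy, which is precisely the "elbow room" note 3 is buying.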
Yet another hardware random bit generator, one that directly uses quantum randomness, consists of an LED and two photodetectors capable of detecting single photons. One detector generates the zeros, the other the ones. Such detectors are not cheap, and it is helpful to use the shortest-wavelength LED you can get.
--yes, but this sucks for several reasons: it's not cheap, it's still vulnerable to nonrandom noise and needs a huge-gain amp (problems with that already discussed), and you will often get more than 1 photon, or sometimes 0 photons, which would require digital post-processing to (mostly) correct. My way STILL uses quantum noise and thermal noise as the ultimate source, but with no need to worry about single photons; it's way simpler than that, and the whole ball of wax is entirely implementable on 1 chip.
None of these proposed methods is totally free of bias in the random output bits, and I don't believe any physical process can generate bits that are precisely 50/50.
--you are correct. In principle my method would generate exactly 50-50 and exactly zero correlation, but only with exact f-computation each iteration, which is not going to happen in reality due to manufacturing errors. With a slightly-wrong f, my method will come close to 50-50 and zero. However, I believe these defects will be correctable to exponentially near to true with appropriate digital post-processing of the imperfect random bits.

--I should also mention that Intel, in a recent processor, provides hardware random bits. Their method bugged me. It was basically to make a static RAM cell to store a bit; you turn it on, and the result is a random bit. With almost all commercial static RAMs this would actually provide a hugely biased bit (e.g. 90-10) due to built-in biases. Intel attempted to null out those biases. However, it seems to me that in principle this whole approach sucks. The reason is: with a perfect idealized static RAM circuit, no matter how well they fiddoodle with trimming bias resistors and such, the circuit would in principle provide a 1 bit with probability 100% (or a 0 with probability 100%; the point is, the more you idealize, the more you get 100-0 bias). The only reason they avoid that fate is that they trim it so accurately that the bias is actually a lot less than the initial noise level. That's high accuracy. With my scheme, low accuracy suffices, and there is no 100%-probability failure mode looming behind the scenes. Even with an idealized circuit and poor accuracy, my circuit still comes close to 50-50 and zero correlation; Intel's cannot say that. On the bright side, I suspect Intel's scheme is very fast. (I would not trust Intel bits without digital post-processing.)
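For what "exponentially near" means quantitatively: for independent bits, the standard piling-up lemma says that XORing k bits, each with bias delta (i.e. P(1) = 1/2 + delta), leaves bias 2^(k-1) * delta^k. A two-line check (only an approximation for the real stream, whose bits are merely nearly independent):

```python
# Piling-up lemma: XOR of k independent bits with P(1) = 1/2 + delta
# has P(1) = 1/2 + 2**(k-1) * delta**k, so bias dies exponentially in k.
def xor_bias(delta, k):
    return 2**(k - 1) * delta**k

for k in (1, 2, 4, 8):
    print(k, xor_bias(0.1, k))   # 0.1, 0.02, 0.0008, 1.28e-06
```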