Density of States of the 2D Ising Model

Introduction

The probability that a system in equilibrium with a heat bath at temperature T has energy E is given by

P(E, β) = g(E) e^(-βE)/Z,

where Z is the partition function and g(E) is the number of states with energy E. If g(E) is known, we can calculate the mean energy and other thermodynamic quantities at any temperature from the relation

<E> = (1/Z) ∑_E E g(E) e^(-βE).

Hence, the quantity g(E) is of much interest.
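
As a concrete illustration, the following Java sketch computes <E> from a tabulated density of states at temperature T with Boltzmann's constant set to unity; the class name and the arrays energies and g are placeholders, for example filled with the values produced by the algorithm described below.

    // Sketch: mean energy <E> from a tabulated density of states (k_B = 1).
    // The arrays energies[] and g[] are hypothetical inputs.
    public class MeanEnergyFromDOS {
        static double meanEnergy(double[] energies, double[] g, double T) {
            double beta = 1.0 / T;
            double z = 0, sumE = 0;
            for (int k = 0; k < energies.length; k++) {
                double w = g[k] * Math.exp(-beta * energies[k]); // g(E) e^(-βE)
                z += w;                                          // partition function Z
                sumE += energies[k] * w;                         // sum over E of E g(E) e^(-βE)
            }
            return sumE / z;                                     // <E> = sum/Z
        }
    }

For all but the smallest lattices g(E) is far too large to store as a double, so in practice the sums are evaluated using ln g(E), as discussed in the algorithm below.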

    In the following we discuss an algorithm for directly computing g(E). To be concrete we apply the algorithm to the two-dimensional Ising model. In this model, the energy is a discrete variable and hence the quantity we wish to compute is the number of spin configurations with the same energy. If we were to apply the algorithm to a system for which the energy is continuous, we would compute the number of states with energy between E and E + ΔE, that is g(E)ΔE. In this case, g(E) would be the density of states. It is common to refer to g(E) as the density of states even when we really mean the number of states.

    Suppose that we were to try to compute g(E) by doing a random walk in energy space by flipping the spins at random and accepting all configurations that we obtain in this way. The histogram of the energy, H(E), the number of visits to each possible energy E of the system, would converge to g(E) if the walk visited all possible configurations. In practice, it would be impossible to realize such a long random walk given the extremely large number of configurations. For example, the Ising model on an L = 10 square lattice has 2^100 ≅ 1.3 × 10^30 spin configurations.

    The main difficulty of doing a simple random walk to determine g(E) is that the walk would spend most of its time visiting the same energy values over and over again and would not reach the values of E that are less probable. The idea of the Wang-Landau algorithm is to do a random walk in energy space by flipping single spins at random and accepting the changes with a probability that is proportional to the reciprocal of the density of states. In this way energy values that would be visited often using a simple random walk would be visited less often because they have a larger density of states. There is only one problem -- we don't know the density of states. We will see that the Wang-Landau algorithm estimates the density of states at the same time that it does a random walk in phase space.

Algorithm

  1. Start with an initial arbitrary configuration of spins and a guess for the density of states. The simplest guess is to set g(E) = 1 for all possible energies E. (We should really use a different symbol for the density of states – one for the exact result and one for the approximate density of states that we will estimate while implementing the Wang-Landau algorithm.)
  2. Choose a spin at random and make a trial flip. Compute the energy before, E1, and after the flip, E2, and accept the change with probability

    p(E1 → E2) = min(g(E1)/g(E2), 1).

    This equation implies that if g(E2) ≤ g(E1), the state with energy E2 is always accepted; otherwise, it is accepted with probability g(E1)/g(E2). That is, the state with energy E2 is accepted if a random number r ≤ g(E1)/g(E2).
  3. After step 2 the energy of the system is E, where E = E2 if the trial flip was accepted and E = E1 if it was not.
  4. The other part of the Wang-Landau algorithm is to multiply the current value of g(E) by the modification factor f > 1

    g(E) → f g(E).

    We also update the existing entry for H(E) in the energy histogram:

    H(E) → H(E) + 1.

    Because g(E) becomes very large, in practice we must work with the logarithm of the density of states, so that ln(g(E)) will fit into double precision numbers. Therefore, each update of the density of states is implemented as

    ln(g(E)) → ln(g(E)) + ln(f),

    and the ratio g(E1)/g(E2) needed in step 2 is computed as exp[ln(g(E1)) - ln(g(E2))].
  5. A reasonable choice of the initial modification factor is f = f0 = e ≅ 2.71828 ... If f0 is too small, the random walk will need a very long time to reach all possible energies. Too large a choice of f0 will lead to large statistical errors.
  6. Proceed with the random walk in energy space until a "flat" histogram H(E) is obtained, that is, until all the possible energy values are visited an approximately equal number of times. Of course, it is impossible to obtain a perfectly flat histogram, and we will say that H(E) is "flat" when H(E) for all possible E is not less than p times the average histogram <H(E)>, where 0 < p < 1 is chosen according to the size and the complexity of the system and the desired accuracy of the density of states. For the two-dimensional Ising model on small lattices, p can be chosen to be as high as 0.95, but for large systems the criterion for flatness may never be satisfied if p is too close to unity.
  7. Once the flatness criterion has been satisfied, reduce the modification factor f using a function such as f1 = (f0)^(1/2), reset the histogram to H(E) = 0 for all values of E, and begin the next iteration of the random walk during which the density of states is modified by f1 at each trial flip. The density of states is not reset during the simulation. We continue performing the random walk until the histogram H(E) is again flat.
  8. Reduce the modification factor, fi+1 = (fi)^(1/2), reset the histogram to H(E) = 0 for all values of E, and continue the random walk. Stop the simulation when f is smaller than a predefined value (such as ffinal = exp(10^(-8)) ≅ 1.00000001). The modification factor acts as a control parameter for the accuracy of the density of states during the simulation and also determines how many Monte Carlo sweeps are necessary for the entire simulation. (A minimal code sketch of steps 1-8 is given after this list.)
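
The following is a minimal Java sketch of steps 1-8 for an L × L lattice with periodic boundary conditions and J = 1. The class and all names in it are illustrative rather than a published implementation; the lattice size, the flatness parameter p = 0.8, and the final value of ln f are example choices, not part of the algorithm itself.

    import java.util.Arrays;
    import java.util.Random;

    // Minimal sketch of the Wang-Landau algorithm for the two-dimensional Ising
    // model on an L x L square lattice with periodic boundary conditions (J = 1).
    // All names and parameter values here are illustrative choices.
    public class WangLandauIsing {
        static final int L = 8;                   // linear lattice size
        static final int N = L * L;               // number of spins
        static final double P_FLAT = 0.8;         // flatness parameter p
        static final double LN_F_FINAL = 1.0e-8;  // stop when ln f <= 10^(-8)

        int[][] spin = new int[L][L];
        // Possible energies are E = -2N, -2N+4, ..., 2N; index k = (E + 2N)/4.
        // The levels E = -2N+4 and E = 2N-4 (k = 1 and k = N-1) never occur.
        double[] lnG = new double[N + 1];         // ln g(E); initial guess g(E) = 1
        long[] hist = new long[N + 1];            // histogram H(E)
        Random rnd = new Random();

        int totalEnergy() {                       // E = -(sum over bonds of s_i s_j)
            int e = 0;
            for (int i = 0; i < L; i++)
                for (int j = 0; j < L; j++)
                    e -= spin[i][j] * (spin[(i + 1) % L][j] + spin[i][(j + 1) % L]);
            return e;
        }

        boolean allowed(int k) {                  // skip the two nonexistent levels
            return k != 1 && k != N - 1;
        }

        boolean isFlat() {                        // H(E) >= p <H(E)> for all allowed E
            double sum = 0;
            int count = 0;
            for (int k = 0; k <= N; k++) {
                if (!allowed(k)) continue;
                if (hist[k] == 0) return false;   // some allowed energy not yet visited
                sum += hist[k];
                count++;
            }
            double average = sum / count;
            for (int k = 0; k <= N; k++)
                if (allowed(k) && hist[k] < P_FLAT * average) return false;
            return true;
        }

        void run() {
            for (int i = 0; i < L; i++)           // step 1: arbitrary initial spins
                for (int j = 0; j < L; j++)
                    spin[i][j] = rnd.nextBoolean() ? 1 : -1;
            int e = totalEnergy();
            double lnF = 1.0;                     // f0 = e, so ln f0 = 1
            while (lnF > LN_F_FINAL) {
                Arrays.fill(hist, 0L);            // reset H(E) for each new f
                do {
                    for (int step = 0; step < N; step++) {       // one Monte Carlo sweep
                        int i = rnd.nextInt(L), j = rnd.nextInt(L);
                        int nn = spin[(i + 1) % L][j] + spin[(i + L - 1) % L][j]
                               + spin[i][(j + 1) % L] + spin[i][(j + L - 1) % L];
                        int de = 2 * spin[i][j] * nn;            // energy change of trial flip
                        int k1 = (e + 2 * N) / 4;
                        int k2 = (e + de + 2 * N) / 4;
                        // step 2: accept with probability min(g(E1)/g(E2), 1)
                        if (rnd.nextDouble() <= Math.exp(lnG[k1] - lnG[k2])) {
                            spin[i][j] = -spin[i][j];
                            e += de;
                        }
                        int k = (e + 2 * N) / 4;                 // step 3: current energy
                        lnG[k] += lnF;                           // step 4: ln g(E) += ln f
                        hist[k]++;                               //         H(E) += 1
                    }
                } while (!isFlat());              // steps 6-7: repeat until H(E) is flat
                lnF *= 0.5;                       // step 8: f -> f^(1/2)
            }
            double shift = lnG[0] - Math.log(2.0); // normalize so that g(-2N) = 2
            for (int k = 0; k <= N; k++)
                if (allowed(k)) lnG[k] -= shift;
        }

        public static void main(String[] args) {
            WangLandauIsing wl = new WangLandauIsing();
            wl.run();
            for (int k = 0; k <= N; k++)          // print E and ln g(E)
                if (wl.allowed(k)) System.out.println((-2 * N + 4 * k) + " " + wl.lnG[k]);
        }
    }

Checking the flatness after every sweep, as done here, is convenient for small lattices; for larger systems it is common to check only every few thousand sweeps.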

At the end of the simulation, the algorithm provides only a relative density of states. To determine the normalized density of states we can either use the fact that the total number of states for the Ising model is

∑_E g(E) = 2^N,

or that the number of ground states (for which E = -2N) is 2. The latter normalization guarantees the accuracy of the density of states at low energies which is important in the calculation of thermodynamic quantities at low temperature. If we apply the former condition, we cannot guarantee the accuracy of g(E) for energies at or near the ground state, because the rescaling factor is dominated by the maximum density of states. We can use one of these two normalization conditions to obtain the absolute density of states, and use the other normalization condition to check the accuracy of our result.
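
Both conditions can be applied directly to ln g(E). As an illustration, the following method, written for the lnG array and energy indexing of the sketch after the algorithm (our own illustrative conventions), evaluates ln ∑_E g(E) by factoring out the largest term so that nothing overflows; comparing the result with N ln 2 provides the consistency check when the ground-state condition has been used for the normalization.

    // Consistency check: ln(sum over E of g(E)) should be close to N ln 2.
    // Uses the lnG array and index convention of the sketch above; the levels
    // k = 1 and k = N-1 do not exist and are skipped.
    double lnTotalStates() {
        double lnMax = Double.NEGATIVE_INFINITY;
        for (int k = 0; k <= N; k++)
            if (allowed(k)) lnMax = Math.max(lnMax, lnG[k]);
        double sum = 0;
        for (int k = 0; k <= N; k++)
            if (allowed(k)) sum += Math.exp(lnG[k] - lnMax);   // every term is <= 1
        return lnMax + Math.log(sum);                          // compare with N*Math.log(2)
    }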

Problems

  1. First choose L = 2. How many states are there for each value of E? Run the simulation and verify that the computed density of states is close to your exact answer.
  2. Choose larger values of L, for example, L = 16. Describe the qualitative energy dependence of g(E).
  3. Compute the specific heat as a function of temperature and describe its qualitative temperature dependence. Is there any evidence of a phase transition? (A sketch of how to compute C(T) from ln g(E) is given after this problem set.)
  4. *Determine the value of the specific heat C at T = Tc = 2/ln(1 + √2) ≅ 2.269 as a function of L and make a log-log plot of C versus L. If your data for C is sufficiently accurate, you will find that the log-log plot of C versus L is not a straight line but shows curvature. The reason is that the critical exponent α equals zero for the two-dimensional Ising model, and hence

    C ≅ C0 ln L.

    Is your data for C consistent with this form? The constant C0 is approximately 0.4995.
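
One way to do Problems 3 and 4 is to add a method such as the following to the class sketched after the algorithm (the names and the energy indexing E = -2N + 4k are the same illustrative choices made there). Because the simulation stores ln g(E), the Boltzmann-weighted sums are computed after factoring out the largest exponent, and the specific heat per spin follows from C/N = (<E^2> - <E>^2)/(N T^2) with k_B = 1.

    // Specific heat per spin C(T)/N from ln g(E), with k_B = 1.
    double specificHeatPerSpin(double T) {
        double beta = 1.0 / T;
        double maxExp = Double.NEGATIVE_INFINITY;
        for (int k = 0; k <= N; k++) {                 // largest exponent ln g(E) - βE
            if (!allowed(k)) continue;
            double energy = -2.0 * N + 4.0 * k;
            maxExp = Math.max(maxExp, lnG[k] - beta * energy);
        }
        double z = 0, sumE = 0, sumE2 = 0;
        for (int k = 0; k <= N; k++) {
            if (!allowed(k)) continue;
            double energy = -2.0 * N + 4.0 * k;
            double w = Math.exp(lnG[k] - beta * energy - maxExp); // relative weight
            z += w;
            sumE += energy * w;
            sumE2 += energy * energy * w;
        }
        double meanE = sumE / z;                       // <E>
        double meanE2 = sumE2 / z;                     // <E^2>
        return (meanE2 - meanE * meanE) / (T * T * N);
    }

Evaluating this method over a range of temperatures and several lattice sizes provides the data for Problems 3 and 4.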
