- Part I Introduction to programming and numerical methods
- 1 Introduction
- 1.1 Choice of programming language
- 1.2 Designing programs
- 2 Introduction to C++ and Fortran.
- 2.1 Getting Started
- 2.1.1 Scientific hello world
- 2.2 Representation of Integer Numbers
- 2.2.1 Fortran codes
- 2.3 Real Numbers and Numerical Precision
- 2.3.1 Representation of real numbers.
- 2.3.2 Machine numbers
- 2.4 Programming Examples on Loss of Precision and Round-off Errors
- 2.4.1 Algorithms for e^{-x}.
- 2.4.2 Fortran codes
- 2.4.3 Further examples
- 2.5 Additional Features of C++ and Fortran
- 2.5.1 Operators in C++
- 2.5.2 Pointers and arrays in C++.
- 2.5.3 Macros in C++
- 2.5.4 Structures in C++ and TYPE in Fortran
- 2.6 Exercises.
- 3 Numerical differentiation and interpolation.
- 3.1 Numerical Differentiation
- 3.1.1 The second derivative of exp(x).
- 3.1.2 Error analysis
- 3.2 Numerical Interpolation and Extrapolation
- 3.2.1 Interpolation
- 3.2.2 Richardson’s deferred extrapolation method
- 3.3 Classes in C++
- 3.3.1 The Complex class
- 3.3.2 The vector class.
- 3.4 Modules in Fortran
- 3.5 How to make Figures with Gnuplot.
- 3.6 Exercises.
- 4 Non-linear Equations
- 4.1 Particle in a Box Potential
- 4.2 Iterative Methods
- 4.3 Bisection
- 4.4 Newton-Raphson’s Method
- 4.5 The Secant Method.
- 4.5.1 Broyden’s Method
- 4.6 Exercises.
- 5 Numerical Integration.
- 5.1 Newton-Cotes Quadrature
- 5.2 Adaptive Integration.
- 5.3 Gaussian Quadrature
- 5.3.1 Orthogonal polynomials, Legendre
- 5.3.2 Integration points and weights with orthogonal polynomials
- 5.3.3 Application to the case N =
- 5.3.4 General integration intervals for Gauss-Legendre
- 5.3.5 Other orthogonal polynomials
- 5.3.6 Applications to selected integrals
- 5.4 Treatment of Singular Integrals.
- 5.5 Parallel Computing
- 5.5.1 Brief survey of supercomputing concepts and terminologies
- 5.5.2 Parallelism
- 5.5.3 MPI with simple examples
- 5.5.4 Numerical integration with MPI
- 5.6 An Integration Class
- 5.7 Exercises.
- Part II Linear Algebra and Eigenvalues
- 6 Linear Algebra
- 6.1 Introduction
- 6.2 Mathematical Intermezzo
- 6.3 Programming Details
- 6.3.1 Declaration of fixed-sized vectors and matrices
- 6.3.2 Runtime Declarations of Vectors and Matrices in C++.
- 6.3.3 Matrix Operations and C++ and Fortran Features of Matrix handling
- 6.4 Linear Systems
- 6.4.1 Gaussian Elimination
- 6.4.2 LU Decomposition of a Matrix
- 6.4.3 Solution of Linear Systems of Equations
- 6.4.4 Inverse of a Matrix and the Determinant
- 6.4.5 Tridiagonal Systems of Linear Equations.
- 6.5 Spline Interpolation
- 6.6 Iterative Methods
- 6.6.1 Jacobi’s method
- 6.6.2 Gauss-Seidel.
- 6.6.3 Successive over-relaxation
- 6.6.4 Conjugate Gradient Method
- 6.7 A vector and matrix class
- 6.7.1 How to construct your own matrix-vector class
- 6.8 Exercises.
- 6.8.1 Solution
- 7 Eigensystems.
- 7.1 Introduction
- 7.2 Eigenvalue problems
- 7.3 Similarity transformations
- 7.4 Jacobi’s method
- 7.5 Similarity Transformations with Householder’s method.
- 7.5.1 The Householder’s method for tridiagonalization
- 7.5.2 Diagonalization of a Tridiagonal Matrix via Francis’ Algorithm
- 7.6 Power Methods
- 7.7 Iterative methods: Lanczos’ algorithm
- 7.8 Schrödinger’s Equation Through Diagonalization
- 7.8.1 Numerical solution of the Schrödinger equation by diagonalization
- 7.8.2 Program example and results for the one-dimensional harmonic oscillator
- 7.9 Exercises.
- Part III Differential Equations
- 8 Differential equations
- 8.1 Introduction
- 8.2 Ordinary differential equations
- 8.3 Finite difference methods
- 8.3.1 Improvements of Euler’s algorithm, higher-order methods
- 8.3.2 Verlet and Leapfrog algorithms
- 8.3.3 Predictor-Corrector methods
- 8.4 More on finite difference methods, Runge-Kutta methods
- 8.5 Adaptive Runge-Kutta and multistep methods
- 8.6 Physics examples
- 8.6.1 Ideal harmonic oscillations
- 8.6.2 Damping of harmonic oscillations and external forces.
- 8.6.3 The pendulum, a nonlinear differential equation
- 8.7 Physics Project: the pendulum
- 8.7.1 Analytic results for the pendulum
- 8.7.2 The pendulum code
- 8.8 Exercises.
- 9 Two point boundary value problems.
- 9.1 Introduction
- 9.2 Shooting methods
- 9.2.1 Improved approximation to the second derivative, Numerov’s method.
- 9.2.2 Wave equation with constant acceleration.
- 9.2.3 Schrödinger equation for spherical potentials
- 9.3 Numerical procedure, shooting and matching
- 9.3.1 Algorithm for solving Schrödinger’s equation.
- 9.4 Green’s function approach
- 9.5 Exercises.
- 10 Partial Differential Equations.
- 10.1 Introduction
- 10.2 Diffusion equation.
- 10.2.1 Explicit Scheme
- 10.2.2 Implicit Scheme.
- 10.2.3 Crank-Nicolson scheme
- 10.2.4 Solution for the One-dimensional Diffusion Equation
- 10.2.5 Explicit scheme for the diffusion equation in two dimensions
- 10.3 Laplace’s and Poisson’s Equations
- 10.3.1 Scheme for solving Laplace’s (Poisson’s) equation
- 10.3.2 Jacobi Algorithm for solving Laplace’s Equation
- 10.3.3 Jacobi’s algorithm extended to the diffusion equation in two dimensions.
- 10.4 Wave Equation in two Dimensions.
- 10.4.1 Closed-form Solution
- 10.5 Exercises.
- Part IV Monte Carlo Methods
- 11 Outline of the Monte Carlo Strategy
- 11.1 Introduction
- 11.1.1 Definitions
- 11.1.2 First Illustration of the Use of Monte-Carlo Methods.
- 11.1.3 Second Illustration, Particles in a Box
- 11.1.4 Radioactive Decay
- 11.1.5 Program Example for Radioactive Decay
- 11.1.6 Brief Summary.
- 11.2 Probability Distribution Functions.
- 11.2.1 Multivariable Expectation Values
- 11.2.2 The Central Limit Theorem
- 11.2.3 Definition of Correlation Functions and Standard Deviation
- 11.3 Random Numbers
- 11.3.1 Properties of Selected Random Number Generators.
- 11.4 Improved Monte Carlo Integration
- 11.4.1 Change of Variables
- 11.4.2 Importance Sampling
- 11.4.3 Acceptance-Rejection Method
- 11.5 Monte Carlo Integration of Multidimensional Integrals
- 11.5.1 Brute Force Integration
- 11.5.2 Importance Sampling
- 11.6 Classes for Random Number Generators
- 11.7 Exercises.
- 12 Random walks and the Metropolis algorithm.
- 12.1 Motivation
- 12.2 Diffusion Equation and Random Walks
- 12.2.1 Diffusion Equation
- 12.2.2 Random Walks
- 12.3 Microscopic Derivation of the Diffusion Equation
- 12.3.1 Discretized Diffusion Equation and Markov Chains.
- 12.3.2 Continuous Equations
- 12.3.3 Numerical Simulation
- 12.4 Entropy and Equilibrium Features
- 12.5 The Metropolis Algorithm and Detailed Balance
- 12.5.1 Brief Summary.
- 12.6 Langevin and Fokker-Planck Equations
- 12.6.1 Fokker-Planck Equation.
- 12.6.2 Langevin Equation
- 12.7 Exercises.
- 13 Monte Carlo Methods in Statistical Physics.
- 13.1 Introduction and Motivation
- 13.2 Review of Statistical Physics
- 13.2.1 Microcanonical Ensemble
- 13.2.2 Canonical Ensemble
- 13.2.3 Grand Canonical and Pressure Canonical
- 13.3 Ising Model and Phase Transitions in Magnetic Systems
- 13.3.1 Theoretical Background
- 13.4 Phase Transitions and Critical Phenomena
- 13.4.1 The Ising Model and Phase Transitions
- 13.4.2 Critical Exponents and Phase Transitions from Mean-field Models
- 13.5 The Metropolis Algorithm and the Two-dimensional Ising Model
- 13.5.1 Parallelization of the Ising Model
- 13.6 Selected Results for the Ising Model
- 13.7 Correlation Functions and Further Analysis of the Ising Model
- 13.7.1 Thermalization.
- 13.7.2 Time-correlation Function.
- 13.8 The Potts’ model
- 13.9 Exercises.
- 14 Quantum Monte Carlo Methods.
- 14.1 Introduction
- 14.2 Postulates of Quantum Mechanics.
- 14.2.1 Mathematical Properties of the Wave Functions
- 14.2.2 Important Postulates
- 14.3 First Encounter with the Variational Monte Carlo Method
- 14.4 Variational Monte Carlo for Quantum Mechanical Systems
- 14.4.1 First illustration of Variational Monte Carlo Methods
- 14.5 Variational Monte Carlo for atoms
- 14.5.1 The Born-Oppenheimer Approximation
- 14.5.2 The Hydrogen Atom
- 14.5.3 Metropolis sampling for the hydrogen atom and the harmonic oscillator
- 14.5.4 The Helium Atom
- 14.5.5 Program Example for Atomic Systems
- 14.5.6 Importance sampling
- 14.6 Exercises.
- 15 Many-body approaches to studies of electronic systems: Hartree-Fock theory and Density Functional Theory
- 15.1 Introduction
- 15.2 Hartree-Fock theory
- 15.3 Expectation value of the Hamiltonian with a given Slater determinant
- 15.4 Derivation of the Hartree-Fock equations
- 15.4.1 Reminder on calculus of variations
- 15.4.2 Varying the single-particle wave functions
- 15.4.3 Detailed solution of the Hartree-Fock equations
- 15.4.4 Hartree-Fock by variation of basis function coefficients
- 15.5 Density Functional Theory
- 15.5.1 Hohenberg-Kohn Theorem
- 15.5.2 Derivation of the Kohn-Sham Equations.
- 15.5.3 The Local Density Approximation and the Electron Gas.
- 15.5.4 Applications and Code Examples
- 15.6 Exercises.
- 16 Improved Monte Carlo Approaches to Systems of Fermions.
- 16.1 Introduction
- 16.2 Splitting the Slater Determinant
- 16.3 Computational Optimization of the Metropolis/Hastings Ratio
- 16.3.1 Evaluating the Determinant-determinant Ratio
- 16.4 Optimizing the ∇Ψ_T/Ψ_T Ratio
- 16.4.1 Evaluating the Gradient-determinant-to-determinant Ratio
- 16.5 Optimizing the ∇²Ψ_T/Ψ_T Ratio
- 16.6 Updating the Inverse of the Slater Matrix
- 16.7 Reducing the Computational Cost of the Correlation Form
- 16.8 Computing the Correlation-to-correlation Ratio
- 16.9 Evaluating the ∇Ψ_C/Ψ_C Ratio
- 16.9.1 Special Case: Correlation Functions Depending on the Relative Distance
- 16.10 Computing the ∇²Ψ_C/Ψ_C Ratio
- 16.11 Efficient Optimization of the Trial Wave Function
- 16.12 Exercises.
- 17 Bose-Einstein condensation and Diffusion Monte Carlo.
- 17.1 Diffusion Monte Carlo
- 17.1.1 Importance Sampling
- 17.2 Bose-Einstein Condensation in Atoms.
- 17.3 Exercises.
- References.