Introduction
This document serves as a scientific and technical guide for the powder5 web application, a tool for the analysis of powder X-ray diffraction (PXRD) data via whole-pattern fitting. This technique, also known as pattern decomposition, is a crucial method in materials science and crystallography for refining structural and microstructural parameters when a complete structural model is either unknown or unnecessary.
The application facilitates the extraction of precise lattice parameters, peak profile information, and integrated intensities of Bragg reflections. It implements two principal decomposition algorithms: the iterative Le Bail method for rapid and stable convergence, and the simultaneous Pawley method for rigorous, unbiased intensity extraction. Peak profiles are modeled using phenomenological functions, including a versatile Simple pseudo-Voigt, an Asymmetric Split pseudo-Voigt, and a physically rigorous Anisotropic model (TCH) based on the Thompson-Cox-Hastings formulation with a Stephens model for anisotropic line broadening. The background is modeled using a monotonic cubic spline interpolation between user-defined points.
Getting Started: Data Input
Analysis commences with the loading of a powder diffraction data file. The application is designed to automatically parse numerous common ASCII-based file formats from major instrument manufacturers and standard crystallographic software.
- Supported Formats: Built-in parsers are included for Bruker (.brml, .uxd), PANalytical (.xrdml), Rigaku (.ras, .rasx), Philips (.udf, .rd, .sd), GSAS (.esd, .gsa, .xra), and Jade (.mdi).
- Generic Data: Standard two-column ASCII files (.xy, .csv, .txt, .dat, .asc, etc.) containing $2\theta$ and intensity values are also supported. The parser accommodates space, comma, or semicolon delimiters. Comment lines prefixed with #, !, ; or // are ignored, as are non-numeric header lines. A minimal parser sketch is shown after this list.
- Metadata Parsing: For many instrument-specific formats, instrument parameters such as the X-ray wavelength for Kα1 are read from the file's metadata and used to populate the relevant fields in the user interface. It is incumbent upon the user to verify the correctness of these automatically populated values.
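For clarity, the listing below is a minimal TypeScript sketch of a generic two-column reader following the conventions described above (delimiters, comment prefixes, skipped headers). It is illustrative only; the function name parseXY and the XYPoint type are hypothetical and do not correspond to the application's actual parser.

```typescript
// Minimal generic two-column reader (illustrative sketch; parseXY and
// XYPoint are hypothetical names, not the application's actual parser).
interface XYPoint { twoTheta: number; intensity: number; }

function parseXY(text: string): XYPoint[] {
  const points: XYPoint[] = [];
  for (const raw of text.split(/\r?\n/)) {
    const line = raw.trim();
    // Skip blank lines and comment lines prefixed with #, !, ; or //.
    if (!line || /^(#|!|;|\/\/)/.test(line)) continue;
    const fields = line.split(/[\s,;]+/); // space, comma, or semicolon
    const x = Number(fields[0]);
    const y = Number(fields[1]);
    // Non-numeric header lines fail this check and are simply skipped.
    if (Number.isFinite(x) && Number.isFinite(y)) {
      points.push({ twoTheta: x, intensity: y });
    }
  }
  return points;
}
```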
Interactive Data Visualization
The diffraction pattern is rendered in an interactive plot to facilitate detailed inspection of the experimental data and the quality of the model fit. Any plotted curve can be hidden or shown by clicking its entry in the legend at the top of the chart.
- Pan: Click and drag on the chart to translate the view along the $2\theta$ and intensity axes.
- Zoom: Use the mouse wheel to adjust the zoom level. The zoom behavior is context-dependent:
- Over the plot area: Zooms both axes isotropically.
- Over the Intensity (Y) axis: Zooms the vertical axis exclusively.
- Over the $2\theta$ (X) axis: Zooms the horizontal axis exclusively.
- Reset View: A right-click on the chart resets the zoom and pan to the range currently defined by the 2θ Min/Max sliders.
- Reflection Data: Hovering the cursor near a Bragg reflection marker (tick mark) displays a tooltip containing the corresponding Miller indices ($hkl$).
- Add Background Spline Point: To add a point defining the background shape, hold down the Ctrl key and click on a point in the pattern. This adds the nearest experimental data point to the background spline list. Points cannot be added outside the current 2θ Min/Max slider range, nor can they be added exactly at the edge point indices.
Pattern Decomposition Methodologies
Pattern decomposition enables the fitting of a powder diffraction pattern based on a unit cell and space group, without requiring a full structural model (atomic coordinates). This is essential for the precise determination of lattice parameters and the extraction of integrated intensities, which are requisite for ab initio structure determination.
The Le Bail Method
The Le Bail method is an iterative, sequential algorithm known for its computational efficiency and robust convergence. The process is as follows:
- Initialization: A theoretical pattern is calculated from the user-supplied lattice, profile, and background parameters (defined by the spline points). The heights ($I_{hkl}$) of all reflections are initially assumed to be equal (e.g., 1000).
- Intensity Extraction (Height Partitioning): The observed net intensity ($y_{i,obs} - y_{i,bkg}$) at each data point is partitioned among the calculated Bragg peaks contributing to that point. The contribution of each peak is proportional to its profile function value at that point. Summing these partitioned intensities for each reflection yields a new set of "observed" integrated intensities. These are then converted back to estimated peak heights using the current profile function's area. A schematic sketch of this partitioning step follows the list.
- Parameter Refinement: These newly extracted peak heights are held constant, and a non-linear least-squares refinement is performed on the lattice and profile parameters (the background is fixed by the spline points) to minimize the difference between the calculated and observed patterns.
- Iteration: The refined parameters from Step 3 are used to restart the process from Step 2 (Intensity Extraction). This cycle repeats until the parameters and R-factors converge (typically 4 cycles).
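The following sketch illustrates the partitioning step (Step 2). It uses a plain Gaussian as a stand-in for the actual pseudo-Voigt profile, and all names (Peak, profileValue, extractIntensities) are hypothetical; the real implementation additionally handles the Kα1/Kα2 doublet and the conversion of the extracted areas back to peak heights.

```typescript
interface Peak { center: number; fwhm: number; height: number; }

// Gaussian stand-in for the real pseudo-Voigt profile (hypothetical helper).
function profileValue(p: Peak, x: number): number {
  const sigma = p.fwhm / (2 * Math.sqrt(2 * Math.log(2)));
  const t = (x - p.center) / sigma;
  return p.height * Math.exp(-0.5 * t * t);
}

// One extraction pass: partition the net observed intensity among the peaks
// in proportion to their current calculated profiles (Step 2 above).
function extractIntensities(
  twoTheta: number[], yObs: number[], yBkg: number[], peaks: Peak[],
): number[] {
  const iObs: number[] = new Array(peaks.length).fill(0);
  for (let i = 0; i < twoTheta.length; i++) {
    const contrib = peaks.map(p => profileValue(p, twoTheta[i]));
    const total = contrib.reduce((a, b) => a + b, 0);
    if (total <= 0) continue;              // no peak contributes here
    const net = yObs[i] - yBkg[i];         // background-subtracted signal
    for (let k = 0; k < peaks.length; k++) {
      iObs[k] += net * (contrib[k] / total); // proportional share
    }
  }
  return iObs; // new "observed" intensities, later converted back to heights
}
```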
The Pawley Method
The Pawley method employs a simultaneous, non-iterative approach. It treats the peak height ($I_{hkl}$) of each Bragg reflection as an independent, refinable variable within a single, large-scale least-squares minimization.
This means that all parameters—lattice, profile, and all individual peak heights—are refined concurrently. The background shape is fixed by the user-defined spline points and is not refined.
- Advantages: The Pawley method is considered more rigorous as it avoids the iterative bias of the Le Bail method, particularly in cases of severe peak overlap. It can yield more accurate and statistically sound integrated intensities (reported as areas) and parameter uncertainties.
- Disadvantages: The inclusion of hundreds or thousands of intensity variables significantly increases the computational complexity and memory requirements. The refinement can be susceptible to instability and parameter correlation, especially if the initial model is poor or if stochastic optimization algorithms are used with insufficient iterations.
Due to the high dimensionality and potential for parameter correlation when refining individual intensities, the Pawley method is generally most stable and efficient when used with the Levenberg-Marquardt (LM) algorithm. While the Parallel Tempering (PT) algorithm can also be used, it may require significantly more iterations to achieve comparable convergence due to the complexity of the intensity parameter space. It is generally recommended to start with the LM algorithm for Pawley refinements, especially after obtaining a good initial model using the Le Bail method.
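To make the contrast with the Le Bail scheme concrete, the sketch below (with hypothetical names) shows how a Pawley refinement assembles a single flat parameter vector in which every reflection height is an independent variable alongside the lattice and profile parameters; the least-squares engine then refines all of them simultaneously.

```typescript
interface PawleyModel {
  lattice: number[]; // e.g. [a] for cubic, [a, c] for tetragonal, ...
  profile: number[]; // e.g. [U, V, W, X, Y] for the TCH function
  heights: number[]; // one refinable I_hkl per Bragg reflection
}

// Flatten the model into the single vector seen by the minimizer.
function toParameterVector(m: PawleyModel): number[] {
  return [...m.lattice, ...m.profile, ...m.heights];
}

// Inverse mapping, so a trial vector from the minimizer can be evaluated.
function fromParameterVector(
  v: number[], nLat: number, nProf: number,
): PawleyModel {
  return {
    lattice: v.slice(0, nLat),
    profile: v.slice(nLat, nLat + nProf),
    heights: v.slice(nLat + nProf), // often hundreds of entries
  };
}
```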
Minimization Algorithms
The goal of refinement is to minimize the sum-of-squares objective function, $\chi^2 = \sum w_i (y_{i,obs} - y_{i,calc})^2$, where $w_i$ is the statistical weight of each data point (typically $w_i = 1/y_{i,obs}$). This application provides several algorithms to navigate the complex parameter space and find the minimum of this function.
Levenberg-Marquardt (LM)
The LM algorithm is a standard gradient-based method for non-linear least-squares problems. It effectively interpolates between the Gauss-Newton algorithm and the method of gradient descent. By calculating the Jacobian matrix (the matrix of first partial derivatives of the calculated pattern with respect to the parameters), it determines the most efficient path toward the nearest local minimum.
- Characteristics: LM is a local minimizer, exhibiting rapid quadratic convergence when the initial parameters are close to the true minimum. It is the preferred method for final, high-precision refinement and is the only algorithm here that can calculate valid estimated standard deviations (ESDs) for the refined parameters from the covariance matrix.
- Limitations: It is susceptible to converging to a local minimum if the starting model is far from the global solution.
- Pawley Mode: Generally the recommended algorithm for Pawley refinements due to stability and efficiency.
Parallel Tempering (Replica Exchange)
Parallel Tempering, also known as Replica Exchange MCMC, is an advanced stochastic algorithm designed to overcome the slow convergence of traditional search methods on complex landscapes. Instead of a single system, Parallel Tempering simulates multiple copies (or "replicas") of the system simultaneously, each at a different, fixed temperature in a predefined ladder ($T_1 < T_2 < ... < T_N$).
- Mechanism: Each replica evolves independently according to a standard Monte Carlo or Simulated Annealing-like algorithm at its respective temperature. The high-temperature replicas explore the parameter space broadly (high mobility, escaping local minima), while the low-temperature replicas perform a fine-grained search of local minima (high precision).
- The Swap Move: Periodically, the algorithm attempts to swap the entire set of parameters between adjacent replicas (e.g., between replica $i$ at temperature $T_i$ and replica $i+1$ at $T_{i+1}$). The swap is accepted with a Metropolis-like probability that depends on the energies (costs) and temperatures of the two replicas. This crucial step allows a good solution discovered by a high-temperature replica in a distant valley to "percolate down" to the low-temperature replicas, dramatically improving the efficiency of finding the global minimum compared to single-temperature methods. A minimal sketch of this acceptance rule follows the list.
- Advantages: Significantly more efficient at global exploration than simpler stochastic methods, making it a robust choice for complex problems or when the starting model is highly uncertain (primarily in Le Bail mode).
- Pawley Mode: Can be used, but may require substantially more iterations than LM to converge reliably due to the large number of intensity parameters.
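The swap move reduces to a few lines of code. In this sketch, cost plays the role of $\chi^2$ and the acceptance rule is the standard replica-exchange criterion; the Replica type and trySwap function are illustrative, not the application's internals.

```typescript
interface Replica { temperature: number; params: number[]; cost: number; }

// Attempt one swap between adjacent replicas i and i+1 using the standard
// replica-exchange rule: accept with probability min(1, exp(dBeta * dCost)).
function trySwap(replicas: Replica[], i: number): boolean {
  const a = replicas[i], b = replicas[i + 1];
  const dBeta = 1 / a.temperature - 1 / b.temperature; // beta = 1/T
  const dCost = a.cost - b.cost;
  if (Math.random() < Math.exp(dBeta * dCost)) {
    [a.params, b.params] = [b.params, a.params]; // exchange configurations
    [a.cost, b.cost] = [b.cost, a.cost];
    return true;
  }
  return false;
}
```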
Guide to Refinable Parameters
This section provides a detailed breakdown of the parameters you can control and refine.
A Note on Parameter Scaling & GSAS Comparison
Following a long-standing convention in crystallographic software like GSAS, some refinable parameters in this program are internally scaled. This is done for user convenience, allowing you to work with manageable numbers (e.g., 1.0) instead of very small decimals (e.g., 1.0e-4). The documentation below provides the exact formulas used, allowing for direct comparison with physical models and values from other software.
Crystal System & Space Group
These parameters define the crystallographic symmetry of the material.
- The System selection imposes metrical constraints on the lattice parameters (e.g., for Cubic, $a=b=c$, $\alpha=\beta=\gamma=90^\circ$).
- The Space Group selection determines the systematic reflection conditions ($hkl$ absences) used to generate the list of Bragg peaks. The underlying logic for these conditions is consistent with established crystallographic libraries and was adapted from the Computational Crystallography Toolbox (cctbx).
Instrumental Parameters
Found under the "Sample" tab, these parameters model the diffractometer configuration.
- Radiation 1/2 (Å) & Ratio: Defines the X-ray source. For divergent-beam laboratory instruments, a Kα1/Kα2 doublet is typically used. For synchrotron radiation, the Ratio is set to 0.
- Zero: A refinable parameter that corrects for instrumental zero-point error in the $2\theta$ axis. It is highly correlated with the lattice parameters and must be refined with caution.
- 2θ Min / Max: These sliders define the refinement range. It is standard practice to exclude regions of low signal-to-noise or known artifacts from the calculation.
Background Modeling (Spline Interpolation)
The background contribution is modeled using a monotonic cubic Hermite spline interpolation between a series of user-defined points (spline points or knots). This approach provides flexibility and ensures a smooth, physically realistic background shape without introducing refinable background parameters into the least-squares minimization. The background shape is therefore considered fixed during the refinement process based on the current spline points.
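For reference, the listing below is a compact sketch of monotone piecewise-cubic (Fritsch & Carlson, 1980) interpolation in its simplest single-pass form. It illustrates the technique; it is not the application's exact implementation.

```typescript
// Monotone cubic Hermite interpolation (Fritsch & Carlson, 1980) - a
// minimal single-pass sketch. xs must be strictly increasing.
function monotoneCubic(xs: number[], ys: number[]): (x: number) => number {
  const n = xs.length;
  const h: number[] = [], delta: number[] = [];
  for (let i = 0; i < n - 1; i++) {
    h.push(xs[i + 1] - xs[i]);
    delta.push((ys[i + 1] - ys[i]) / h[i]); // secant slopes
  }
  // Initial tangents: one-sided at the ends, secant average inside,
  // zero where the secants change sign (local extremum).
  const m: number[] = new Array(n);
  m[0] = delta[0];
  m[n - 1] = delta[n - 2];
  for (let i = 1; i < n - 1; i++) {
    m[i] = delta[i - 1] * delta[i] <= 0 ? 0 : (delta[i - 1] + delta[i]) / 2;
  }
  // Fritsch-Carlson limiter: rescale tangents so each interval stays monotone.
  for (let i = 0; i < n - 1; i++) {
    if (delta[i] === 0) { m[i] = 0; m[i + 1] = 0; continue; }
    const a = m[i] / delta[i], b = m[i + 1] / delta[i];
    const s = a * a + b * b;
    if (s > 9) {
      const tau = 3 / Math.sqrt(s);
      m[i] = tau * a * delta[i];
      m[i + 1] = tau * b * delta[i];
    }
  }
  // Evaluate the cubic Hermite polynomial on the interval containing x.
  return (x: number): number => {
    let i = xs.findIndex(v => v > x) - 1; // -2 when x is past the last knot
    if (i < 0) i = x <= xs[0] ? 0 : n - 2;
    const t = (x - xs[i]) / h[i], t2 = t * t, t3 = t2 * t;
    return (2 * t3 - 3 * t2 + 1) * ys[i] + (t3 - 2 * t2 + t) * h[i] * m[i]
         + (-2 * t3 + 3 * t2) * ys[i + 1] + (t3 - t2) * h[i] * m[i + 1];
  };
}
```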
Control of the background spline is located under the "Background" tab:
- Auto-generation: Use the Auto-points slider (10-40 points) and the Generate Spline Points button to automatically populate the list. The algorithm selects points corresponding to local intensity minima within intervals distributed across the current 2θ Min/Max slider range, applying some averaging around the minimum. The points at the exact 2θ Min and Max slider positions are always included and fixed to these $2\theta$ values.
- Manual Addition: Add individual points by holding Ctrl and clicking on the chart. The closest experimental point will be added to the list, provided it's within the current slider range and not an edge point.
- Editable List: The generated and manually added points appear in a list below the controls.
- You can directly edit the $2\theta$ and Intensity values for any point, except for the $2\theta$ values of the first (Min) and last (Max) points, which are fixed by the sliders. Edits trigger recalculation of the spline.
- Points can be deleted using the × button, except for the first and last points.
- Chart Display: The spline points can be toggled on/off on the chart using the "Show Points on Chart" checkbox. The calculated spline curve is always shown.
Simple pVoigt
This function models the peak shape as a linear combination of a Gaussian and a Lorentzian function: $pV(x) = \eta L(x) + (1-\eta)G(x)$. The angular dependence of the Full Width at Half Maximum (FWHM) for each component is modeled empirically. $$H_G^2 = GU \tan^2\theta + GV \tan\theta + GW + GP / \cos^2\theta$$ $$H_L = LX / \cos\theta$$
- GU, GV, GW, GP: Parameters describing the Gaussian FWHM ($H_G$). The terms are associated with strain ($GU$), instrumental factors ($GV$, $GW$), and particle size effects ($GP$). Note that $GU$, $GW$, and $GP$ should physically be non-negative.
- LX: Describes the Lorentzian FWHM ($H_L$), primarily associated with crystallite size broadening ($LX > 0$).
- eta: A simple linear mixing parameter ($0 \le \eta \le 1$; $\eta=0$ for pure Gaussian, $\eta=1$ for pure Lorentzian).
- shft & trns: Correct for peak position shifts due to sample displacement and transparency, respectively.
Unit and Scaling for the shft parameter: The refined shft parameter is a dimensionless, scaled coefficient, not a direct physical length. Its relationship to the physical specimen displacement ($s$) and the goniometer radius ($R$) is defined as follows:
- The physical peak shift in radians is: $\Delta(2\theta)_{\text{rad}} = -2 \frac{s}{R} \cos(\theta)$
- The program calculates this shift (in degrees) using the formula: $\Delta(2\theta)_{\text{deg}} = -(\text{shft} / 1000) \times \cos(\theta) \times (180 / \pi)$
- Therefore, the relationship is: $\frac{\text{shft}}{1000} = \frac{2s}{R}$
- To find the physical displacement $s$ from the refined parameter, use: $s = R \times (\text{shft} / 2000)$.
Example: For a typical instrument with $R=240$ mm, a refined shft value of 1.0 corresponds to a physical displacement $s$ of $240 \times (1/2000) = 0.12$ mm.
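These relations are straightforward to verify numerically. The small sketch below (hypothetical helper names) converts a refined shft value to a physical displacement and to the resulting peak shift at a given Bragg angle.

```typescript
// Physical specimen displacement (mm) from the refined, scaled shft value.
function displacementFromShft(shft: number, radiusMm: number): number {
  return radiusMm * shft / 2000; // s = R * shft / 2000
}

// Peak shift in degrees 2-theta produced by shft at Bragg angle theta (deg).
function peakShiftDeg(shft: number, thetaDeg: number): number {
  const theta = (thetaDeg * Math.PI) / 180;
  return -(shft / 1000) * Math.cos(theta) * (180 / Math.PI);
}

// Example from the text: R = 240 mm, shft = 1.0 -> s = 0.12 mm.
console.log(displacementFromShft(1.0, 240)); // 0.12
```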
TCH (Size/Strain/Aniso)
This is a physically rigorous profile function adapted from GSAS, convoluting a Thompson-Cox-Hastings (TCH) pseudo-Voigt with a Stephens model for anisotropic strain broadening.
Isotropic Broadening (TCH Model)
The TCH formulation models the FWHM of the Gaussian ($H_G$) and Lorentzian ($H_L$) components based on physical contributions to line broadening: $$H_G^2 = U \tan^2\theta + V \tan\theta + W$$ $$H_L = X \tan\theta + Y / \cos\theta$$
The total FWHM ($H$) and pseudo-Voigt mixing parameter ($\eta$) are then derived from these components using polynomial approximations. The final shape is $pV(x) = \eta L(x, H) + (1-\eta)G(x, H)$, where both functions share the same convoluted FWHM.
- U, V, W: Gaussian broadening parameters related to strain ($U$, $V$) and instrumental resolution ($W$). Physically, $U$ and $W$ should be non-negative.
- X, Y: Lorentzian broadening parameters related to strain ($X$) and crystallite size ($Y$). Physically, $X$ and $Y$ should be non-negative.
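The polynomial approximations referred to above are those published by Thompson, Cox & Hastings (1987). The sketch below applies them; the function name and argument layout are illustrative, and all widths share the same angular units.

```typescript
// TCH pseudo-Voigt: combine the Gaussian and Lorentzian FWHM components into
// a single width H and mixing parameter eta (Thompson, Cox & Hastings, 1987).
function tchWidths(thetaDeg: number, U: number, V: number, W: number,
                   X: number, Y: number): { H: number; eta: number } {
  const t = Math.tan((thetaDeg * Math.PI) / 180);
  const c = Math.cos((thetaDeg * Math.PI) / 180);
  const hG = Math.sqrt(Math.max(0, U * t * t + V * t + W)); // Gaussian FWHM
  const hL = X * t + Y / c;                                 // Lorentzian FWHM
  // Fifth-order polynomial approximation for the convoluted FWHM.
  const H = Math.pow(
    hG ** 5 + 2.69269 * hG ** 4 * hL + 2.42843 * hG ** 3 * hL ** 2 +
    4.47163 * hG ** 2 * hL ** 3 + 0.07842 * hG * hL ** 4 + hL ** 5,
    0.2,
  );
  // Mixing parameter from the Lorentzian fraction q = hL / H.
  const q = H > 0 ? hL / H : 0;
  const eta = 1.36603 * q - 0.47719 * q * q + 0.11116 * q ** 3;
  return { H, eta };
}
```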
Peak Asymmetry
The S/L and H/L parameters introduce an angle-dependent asymmetry, primarily correcting for axial divergence effects at low $2\theta$.
Anisotropic Broadening (Stephens Model)
Anisotropic microstrain, where broadening varies with crystallographic direction, is modeled by adding terms to the Lorentzian component ($H_L$) that are dependent on the Miller indices ($hkl$). The model is a fourth-order polynomial in the reciprocal lattice vectors.
The refinable parameters (S400, S040, etc.) are the non-zero, symmetry-unique coefficients of this polynomial. The application automatically applies symmetry constraints based on the Laue class of the selected space group (e.g., for cubic, $S400=S040=S004$).
S_hkl parameters: The user-supplied S_hkl parameters are scaled for convenience. The dimensionless anisotropic broadening term ($H_{aniso}$) is calculated from these parameters, and its contribution to the total Lorentzian width (in degrees $2\theta$) is scaled by a factor of 1000.
$$H_L(\text{total}) = H_L(\text{isotropic}) + \frac{|H_{aniso}|}{1000}$$
This scaling allows the user to refine values in a manageable range (e.g., -10 to +10) rather than requiring input of very small decimals (e.g., 1e-4), a convention common in other refinement software.
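As an illustration of how the S_hkl coefficients enter the width, the sketch below evaluates the fourth-order invariant for the cubic Laue class, where symmetry leaves two independent terms (S400 and S220). The factor of 3 on the cross terms is the commonly used cubic form, but the program's internal normalization may differ; all names here are illustrative.

```typescript
// Stephens anisotropic microstrain term for the cubic Laue class:
// sigma^2(M_hkl) = S400*(h^4+k^4+l^4) + 3*S220*(h^2k^2 + k^2l^2 + l^2h^2).
// (Commonly used cubic form; normalization conventions vary by program.)
function cubicStephensVariance(h: number, k: number, l: number,
                               S400: number, S220: number): number {
  const quartic = h ** 4 + k ** 4 + l ** 4;
  const cross = h * h * k * k + k * k * l * l + l * l * h * h;
  return S400 * quartic + 3 * S220 * cross;
}

// Total Lorentzian FWHM in degrees 2-theta, applying the factor-of-1000
// user scaling described above (hAniso stands in for |H_aniso|).
function totalLorentzianWidth(hLIso: number, hAniso: number): number {
  return hLIso + Math.abs(hAniso) / 1000;
}
```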
Split pVoigt (Asymmetric)
This profile function is a modification of the Simple pseudo-Voigt designed to model asymmetric peaks (e.g., from axial divergence or stacking faults). It achieves this by defining independent sets of profile width parameters for the left side (at $2\theta$ values less than the peak center) and the right side of the peak.
The shape is still a linear combination $pV(x) = \eta L(x) + (1-\eta)G(x)$, but the $H_G$ and $H_L$ parameters used in the calculation depend on whether $x$ is to the left or right of the peak center.
Gaussian Broadening (Left & Right)
The Gaussian FWHM ($H_G$) for each side is modeled as:
$$H_{G, \text{side}}^2 = GU_{\text{side}} \tan^2\theta + GV_{\text{side}} \tan\theta + GW_{\text{side}}$$
(Note: This model does not use the $GP$ term used in the Simple pVoigt profile.)
- GU-L, GV-L, GW-L: Parameters describing the Gaussian FWHM for the left side of the peak.
- GU-R, GV-R, GW-R: Parameters describing the Gaussian FWHM for the right side of the peak.
Lorentzian Broadening (Left & Right)
The Lorentzian FWHM ($H_L$) for each side is modeled as:
$$H_{L, \text{side}} = LX_{\text{side}} / \cos\theta$$
- LX-L: Describes the Lorentzian FWHM for the left side of the peak, primarily associated with size broadening.
- LX-R: Describes the Lorentzian FWHM for the right side of the peak, primarily associated with size broadening.
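The defining feature of the split profile is this side-dependent width selection, sketched below with hypothetical names: each evaluation point simply selects the left or right (GU, GV, GW, LX) set according to its position relative to the peak center.

```typescript
interface SplitSide { GU: number; GV: number; GW: number; LX: number; }

// Side-dependent widths for the Split pVoigt: pick the left or right
// parameter set depending on whether x lies below or above the peak center.
function splitWidths(xDeg: number, centerDeg: number,
                     left: SplitSide, right: SplitSide,
): { hG: number; hL: number } {
  const side = xDeg < centerDeg ? left : right;
  const theta = (centerDeg / 2) * (Math.PI / 180); // Bragg angle of the peak
  const t = Math.tan(theta), c = Math.cos(theta);
  const hG = Math.sqrt(Math.max(0, side.GU * t * t + side.GV * t + side.GW));
  const hL = side.LX / c;
  return { hG, hL };
}
```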
Peak Shape & Position
- eta (Mixing) (param: eta_split): A simple linear mixing parameter ($0 \le \eta \le 1$; $\eta=0$ for pure Gaussian, $\eta=1$ for pure Lorentzian). This single value is used for both sides of the peak.
- shft (Displ.) (param: shft_split): Corrects for peak position shifts due to sample displacement.
- trns (Transp.) (param: trns_split): Corrects for peak position shifts due to transparency.
Unit and Scaling for the shft_split parameter: This parameter is a dimensionless, scaled coefficient, identical in function to the shft parameter in the Simple pVoigt profile.
- The physical peak shift in radians is: $\Delta(2\theta)_{\text{rad}} = -2 \frac{s}{R} \cos(\theta)$
- The program calculates this shift (in degrees) using the formula: $\Delta(2\theta)_{\text{deg}} = -(\text{shft\_split} / 1000) \times \cos(\theta) \times (180 / \pi)$
- Therefore, the relationship is: $\frac{\text{shft\_split}}{1000} = \frac{2s}{R}$
- To find the physical displacement $s$ from the refined parameter, use: $s = R \times (\text{shft\_split} / 2000)$.
Recommended Refinement Strategy
A sequential and hierarchical refinement strategy is crucial for achieving a stable and physically meaningful solution. Attempting to refine all parameters simultaneously from a poor starting model will likely lead to divergence or convergence to a false minimum.
Phase 1: Initial Model Setup
- Define the Model: Load data, select the crystal system and space group, and define the refinement range using the $2\theta$ sliders.
- Set Background Points: Use the controls under the "Background" tab (Generate Spline Points, Ctrl+Click on chart, or manual editing of the list) to define a set of points that reasonably follow the experimental background. This background shape is fixed during refinement.
- Peak Position Refinement: Using the Le Bail and LM algorithms, refine only the Lattice Parameter(s) and, if necessary, the instrumental Zero Shift. The goal is to align the calculated Bragg positions with the observed peak maxima.
Phase 2: Peak Profile Refinement
- Isotropic Broadening: Once positions are correct, refine the primary isotropic peak shape parameters (e.g., W, Y, U, X in TCH, or GW, LX in Simple/Split pVoigt). This will account for the dominant size and strain contributions.
- Asymmetry and Shape: Introduce asymmetry parameters (S/L in TCH, shft/trns in Simple/Split pVoigt) if there is a clear misfit, particularly at low angles. Refine the mixing parameter (eta) if needed.
- Anisotropic Broadening (TCH only): If systematic misfits remain (e.g., some peaks are consistently broader than the model), introduce the anisotropic Stephens parameters (e.g., S400). Refine only the symmetry-independent terms.
Phase 3: Finalization and Intensity Extraction
- Global Optimization (Optional): If the LM algorithm converges to a poor solution during the Le Bail steps, switch to Parallel Tempering (PT) for one or more Le Bail cycles to perform a global search. Afterwards, switch back to LM for a final, precise local minimization.
- Pawley Refinement: With a stable and well-refined model from the Le Bail method, perform a final refinement using the Pawley method. It is generally recommended to use the Levenberg-Marquardt (LM) algorithm for stability, though Parallel Tempering (PT) can also be used (potentially requiring more iterations). This will provide the most statistically robust set of integrated intensities, suitable for subsequent structure solution.
Technical Note on Calculation Parameters
The theoretical pattern is calculated using a hybrid approach for performance and accuracy. A search window (defined by CALCULATION_WINDOW_MULTIPLIER, currently 8.0 times the peak FWHM) is used to find all potential contributions to a given data point. Within that window, only contributions whose calculated profile value exceeds a defined fraction of the peak maximum (PEAK_HEIGHT_CUTOFF, currently 0.1%) are included in the final sum.
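In pseudocode terms, the two constants interact roughly as sketched below. This is a simplified, hypothetical rendering of the described logic, assuming a symmetric window and using the refined peak height as the reference for the relative cutoff.

```typescript
interface CalcPeak { center: number; fwhm: number; height: number; }

const CALCULATION_WINDOW_MULTIPLIER = 8.0; // search window, in FWHM units
const PEAK_HEIGHT_CUTOFF = 0.001;          // 0.1% of the peak maximum

// Sum peak contributions at one data point, skipping peaks outside the
// search window or below the relative height cutoff (simplified sketch).
function calcPointIntensity(
  x: number, peaks: CalcPeak[],
  profile: (p: CalcPeak, x: number) => number,
): number {
  let y = 0;
  for (const p of peaks) {
    // Assumed symmetric window of +/- MULTIPLIER * FWHM around the center.
    if (Math.abs(x - p.center) > CALCULATION_WINDOW_MULTIPLIER * p.fwhm) {
      continue;
    }
    const v = profile(p, x);
    if (v < PEAK_HEIGHT_CUTOFF * p.height) continue; // negligible contribution
    y += v;
  }
  return y;
}
```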
Interpretation of Results
Assessing the quality of a refinement requires both statistical analysis and critical visual inspection of the fit.
Figures of Merit
Standard crystallographic R-factors are provided to quantify the quality of the fit.
- R-pattern ($R_p$): The unweighted residual error based on net intensities, sensitive primarily to the fit of high-intensity reflections. $$R_p = \frac{\sum |(y_{i,obs} - y_{i,bkg}) - (y_{i,calc} - y_{i,bkg})|}{\sum |y_{i,obs} - y_{i,bkg}|} \times 100\%$$
- Weighted R-pattern ($R_{wp}$): The primary figure of merit, weighted by the inverse of the observed gross intensities ($w_i = 1/y_{i,obs}$), which properly accounts for the counting statistics across the entire pattern. $$R_{wp} = \left[ \frac{\sum w_i (y_{i,obs} - y_{i,calc})^2}{\sum w_i y_{i,obs}^2} \right]^{1/2} \times 100\%$$
- Reduced Chi-squared ($\chi^2$, Goodness of Fit): The most statistically rigorous indicator. For a statistically perfect fit where the model correctly describes the data and the weights are accurate, $\chi^2$ should approach 1.0. $$\chi^2 = \frac{1}{N - P} \sum w_i (y_{i,obs} - y_{i,calc})^2 = \left(\frac{R_{wp}}{R_{exp}}\right)^2$$ where $N$ is the number of data points, $P$ is the number of refined parameters, and $R_{exp}$ is the statistically expected minimum $R_{wp}$.
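These figures of merit translate directly into code. The sketch below computes all three from the observed, calculated, and background arrays, using the counting-statistics weights $w_i = 1/y_{i,obs}$ defined above; the function name is illustrative.

```typescript
// Compute Rp, Rwp (in %) and reduced chi-squared from the pattern arrays.
// nParams is the number of refined parameters P.
function figuresOfMerit(yObs: number[], yCalc: number[], yBkg: number[],
                        nParams: number) {
  let numP = 0, denP = 0, numW = 0, denW = 0;
  for (let i = 0; i < yObs.length; i++) {
    const w = yObs[i] > 0 ? 1 / yObs[i] : 0; // counting-statistics weight
    numP += Math.abs(yObs[i] - yCalc[i]);    // backgrounds cancel in Rp's numerator
    denP += Math.abs(yObs[i] - yBkg[i]);     // net observed intensity
    numW += w * (yObs[i] - yCalc[i]) ** 2;
    denW += w * yObs[i] ** 2;
  }
  const Rp = (numP / denP) * 100;
  const Rwp = Math.sqrt(numW / denW) * 100;
  const chi2 = numW / (yObs.length - nParams); // reduced chi-squared
  return { Rp, Rwp, chi2 };
}
```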
Calculating Observed Intensities ($I_{obs}$) for Overlapping Peaks
A simple numerical integration over a fixed angular range is insufficient for accurately determining the observed integrated intensity ($I_{obs}$) of overlapping peaks. This tool employs a more robust intensity partitioning method.
At each point in the diffraction pattern, the net observed intensity ($y_{obs} - y_{bkg}$) is distributed among all contributing Bragg reflections. This distribution is proportional to the value of each peak's calculated profile function (including both Kα1 and Kα2 components, scaled by the refined peak height) at that specific point. By integrating these partitioned "slices" of intensity for each reflection across the entire pattern (using the trapezoidal rule), the method yields a reliable $I_{obs}$ value (reported as integrated area) that correctly deconvolutes contributions from neighboring peaks.
Visual Inspection
Numerical indicators can be misleading. Visual inspection of the difference plot (observed minus calculated) is the most critical step in evaluating the fit.
- The Difference Plot: A successful refinement should yield a difference plot that consists of random, uncorrelated noise centered on zero. The plot is scaled relative to the main pattern for visibility.
- Systematic Residuals: The presence of structured, non-random features in the difference plot (e.g., "M-shaped" residuals around peaks, broad humps where the spline is inadequate, or un-indexed peaks) is a clear indication of systematic errors in the model. These may arise from an incorrect peak shape, unmodeled anisotropy or asymmetry, an inadequate background model (requiring adjustment of spline points), or the presence of an unaccounted-for impurity phase.
Williamson-Hall Size-Strain Analysis
For refinements utilizing Profile Function #3 (Anisotropic TCH), the application can automatically perform a Williamson-Hall analysis to extract approximate microstructural information. This method separates the contributions of crystallite size and microstrain to the total peak broadening by analyzing their different dependencies on the diffraction angle, $\theta$.
The analysis is based on the linear Williamson-Hall equation, where $\beta$ is the total physical peak breadth (FWHM) in radians derived from the refined sample-only broadening parameters (U, X, Y), excluding instrumental contributions (V, W): $$\beta \cos(\theta) = \frac{K\lambda}{L} + 4\epsilon \sin(\theta)$$
This equation describes a straight line when plotting $\beta \cos(\theta)$ against $4\sin(\theta)$. The software performs a linear least-squares fit on the relevant data points (within the fitted 2θ range) to determine the y-intercept (related to the crystallite size, $L$) and the slope (related to the microstrain, $\epsilon$).
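The fit itself is ordinary least squares on the transformed coordinates $(4\sin\theta, \beta\cos\theta)$. The sketch below (hypothetical names, $K = 0.9$ as reported) returns the size and strain estimates together with the fit's $R^2$.

```typescript
// Williamson-Hall fit: beta*cos(theta) = K*lambda/L + 4*eps*sin(theta).
// betaRad: physical FWHMs in radians; thetaRad: Bragg angles in radians;
// lambdaNm: wavelength in nm. Returns size L (nm), strain eps, and R^2.
function williamsonHall(betaRad: number[], thetaRad: number[],
                        lambdaNm: number, K = 0.9) {
  const xs = thetaRad.map(t => 4 * Math.sin(t));
  const ys = thetaRad.map((t, i) => betaRad[i] * Math.cos(t));
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b, 0) / n;
  const my = ys.reduce((a, b) => a + b, 0) / n;
  let sxy = 0, sxx = 0, syy = 0;
  for (let i = 0; i < n; i++) {
    sxy += (xs[i] - mx) * (ys[i] - my);
    sxx += (xs[i] - mx) ** 2;
    syy += (ys[i] - my) ** 2;
  }
  const slope = sxy / sxx;              // = eps (microstrain, as a fraction)
  const intercept = my - slope * mx;    // = K*lambda/L
  const r2 = (sxy * sxy) / (sxx * syy); // coefficient of determination
  return { sizeNm: (K * lambdaNm) / intercept, strain: slope, r2 };
}
```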
Reported Values
- Apparent Crystallite Size (nm): An estimate of the average size of the coherently scattering domains, calculated from the y-intercept of the Williamson-Hall plot (using $K=0.9$).
- Apparent Microstrain (%): An estimate of the root-mean-square strain within the crystallites, calculated from the slope of the plot.
- Linear Fit R²: The coefficient of determination for the linear regression. A value close to 1.0 indicates that the isotropic size/strain model is a good fit for the observed peak broadening. Values significantly less than 1.0 may suggest that broadening is anisotropic or that the model is otherwise inadequate.
Note: This Williamson-Hall analysis provides an approximation based only on the isotropic TCH parameters (U, X, Y). It does not account for anisotropic broadening effects (Stephens parameters) or instrumental contributions (V, W). For rigorous quantitative analysis, dedicated size/strain analysis software should be employed.
Data Export
- Save Report: Generates a comprehensive ASCII text file containing all statistical indicators, refined parameter values with their ESDs (for LM refinements), the Williamson-Hall analysis results (if applicable), the list of background spline points used, a list of reflections with calculated and observed integrated intensities (areas), and a point-by-point list of observed, calculated, background, and difference intensities across the fitted range.
- Generate PDF: Creates a summary PDF document containing a high-resolution plot and tables of final parameters and statistics, suitable for archival or reporting.
References & Further Reading
Pawley, G. S. (1981). "Unit-cell refinement from powder diffraction scans". Journal of Applied Crystallography, 14(6), 357-361.
Le Bail, A., Duroy, H. & Fourquet, J.L. (1988). "Ab-initio structure determination of LiSbWO6 by X-ray powder diffraction". Materials Research Bulletin, 23(3), 447-452.
Larson, A. C. & Von Dreele, R. B. (2004). "General Structure Analysis System (GSAS)". Los Alamos National Laboratory Report LAUR 86-748.
Thompson, P., Cox, D. E. & Hastings, J. B. (1987). "Rietveld refinement of Debye-Scherrer synchrotron X-ray data from Al2O3". Journal of Applied Crystallography, 20(2), 79-83.
Stephens, P. W. (1999). "Phenomenological model of anisotropic peak broadening in powder diffraction". Journal of Applied Crystallography, 32(2), 281-289.
Swendsen, R. H., & Wang, J. S. (1986). "Replica Monte Carlo simulation of spin-glasses". Physical Review Letters, 57(21), 2607.
Fritsch, F. N., & Carlson, R. E. (1980). "Monotone Piecewise Cubic Interpolation". SIAM Journal on Numerical Analysis, 17(2), 238–246.
Grosse-Kunstleve, R. W., Sauter, N. K., Moriarty, N. W., & Adams, P. D. (2002). "The Computational Crystallography Toolbox: crystallographic algorithms in a reusable software framework". Journal of Applied Crystallography, 35(1), 126-136. (Used for space group systematic absence rules).
About This Tool
The powder5 toolkit was developed by Nita Dragoe from Université Paris-Saclay as a simple browser-based implementation of powder pattern decomposition methods. It is the successor to PowderV2 (Dragoe, N. (2001). J. Appl. Cryst., 34, 535) and has been updated to include the Pawley method and modern global optimization algorithms.
Numerical calculations, including matrix operations and optimization routines, are performed using the math.js library, which is distributed under the Apache 2.0 license. PDF generation uses jsPDF and html2canvas. Charting uses Chart.js with the zoom plugin.
Version: 1.15 | Last Updated: 27 October 2025.
This document was updated with the assistance of an AI.