Technical Overview & Methodology

This document is a technical reference for the Brutus powder indexing software. It explains the main algorithms, search parameters, and methodology, and is intended for users familiar with powder X-ray diffraction. As the name suggests, this is a brute-force method — the search is exhaustive rather than heuristic, and is not necessarily fast.

Core Goal

The aim of ab initio powder indexing is to determine the unit-cell parameters ($a, b, c, \alpha, \beta, \gamma$) from a list of observed diffraction peak positions ($2\theta$). Brutus performs this task using a symmetry-specific, exhaustive search algorithm.

The central assumption is that a small subset of the most intense, low-angle reflections corresponds to simple crystal planes with low-integer Miller indices $(hkl)$. For a given crystal system, the program chooses exactly as many observed peaks as there are unknown lattice parameters, and solves the resulting system of linear equations.

Q-Space Formulation

All peak positions are first converted from $2\theta$ to Q-space, where $Q = 1/d^2$. The general quadratic relationship between $Q$, the Miller indices, and the reciprocal cell parameters ($A, B, C, D, E, F$) is:

$$ Q_{hkl} = Ah^2 + Bk^2 + Cl^2 + Dkl + Ehl + Fhk $$

where $A = a^{*2}$, $B = b^{*2}$, $C = c^{*2}$, and the cross-terms involve the reciprocal angles. Working in Q-space linearises the problem: for a given assignment of Miller indices to observed peaks, the unknown parameters $\{A, B, C, \ldots\}$ appear as the solution to a system of linear equations.

Brutus solves for these reciprocal parameters (or the subset relevant to the current crystal system), then converts them back to real-space cell parameters. Each candidate cell is then refined with a weighted least-squares fit and scored against the full observed peak list via figures of merit.
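As a concrete sketch of this formulation (illustrative helper names, not the actual Brutus source), converting an observed peak to Q-space and evaluating the quadratic form might look like:

```javascript
// Bragg's law: lambda = 2 d sin(theta)  =>  Q = 1/d^2 = 4 sin^2(theta) / lambda^2
function twoThetaToQ(twoThetaDeg, lambda) {
  const theta = (twoThetaDeg * Math.PI) / 360; // theta = (2-theta)/2, in radians
  const s = Math.sin(theta);
  return (4 * s * s) / (lambda * lambda);
}

// General quadratic form: Q_hkl = A h^2 + B k^2 + C l^2 + D kl + E hl + F hk
function qFromHKL([h, k, l], [A, B, C, D, E, F]) {
  return A * h * h + B * k * k + C * l * l + D * k * l + E * h * l + F * h * k;
}
```

For a cubic cell with $a = 4$ Å, $A = 1/a^2 = 0.0625$ and the (100) reflection gives $Q = 0.0625$; once the Miller indices are fixed, $Q_{hkl}$ is linear in the parameters $A \ldots F$, which is exactly what makes the linear-system solve possible.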

System-Specific Parameterisation

The quadratic form simplifies for each crystal system, reducing the number of free parameters. The design-matrix row for a reflection, as built by getLSDesignRow(hkl), depends on the crystal system:

| System | Free parameters ($k$) | Design row getLSDesignRow(hkl) |
|---|---|---|
| Cubic | 1 | $[h^2+k^2+l^2]$ |
| Tetragonal | 2 | $[h^2+k^2,\; l^2]$ |
| Hexagonal | 2 | $[\tfrac{4}{3}(h^2+hk+k^2),\; l^2]$ |
| Orthorhombic | 3 | $[h^2,\; k^2,\; l^2]$ |
| Monoclinic | 4 | $[h^2,\; k^2,\; l^2,\; hl]$ |
| Triclinic | 6 | $[h^2,\; k^2,\; l^2,\; kl,\; hl,\; hk]$ |

The solved parameters relate to reciprocal lattice constants. For example, in orthorhombic, the three fitted parameters are $A=1/a^2$, $B=1/b^2$, $C=1/c^2$, giving $a=1/\sqrt{A}$ etc. For monoclinic, the four parameters are $A=1/(a^2\sin^2\beta)$, $B=1/b^2$, $C=1/(c^2\sin^2\beta)$, $D=-2\cos\beta/(ac\sin^2\beta)$, from which $\beta$ is extracted as $\cos\beta = -D/(2\sqrt{AC})$.
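The monoclinic back-conversion above can be written out directly (a sketch with illustrative names; the production code lives in worker-logic.js):

```javascript
// Fitted reciprocal parameters [A, B, C, D] -> real-space a, b, c, beta,
// using A = 1/(a^2 sin^2 beta), B = 1/b^2, C = 1/(c^2 sin^2 beta),
// D = -2 cos(beta) / (a c sin^2 beta).
function monoclinicCellFromParams([A, B, C, D]) {
  const cosBeta = -D / (2 * Math.sqrt(A * C));
  const sin2Beta = 1 - cosBeta * cosBeta;      // sin^2(beta)
  const a = 1 / Math.sqrt(A * sin2Beta);
  const b = 1 / Math.sqrt(B);
  const c = 1 / Math.sqrt(C * sin2Beta);
  const betaDeg = (Math.acos(cosBeta) * 180) / Math.PI;
  return { a, b, c, betaDeg };
}
```

Feeding in parameters generated from a known cell (e.g. $a=5$, $b=6$, $c=7$ Å, $\beta=100°$) recovers that cell, which is a useful sanity check on the sign convention of $D$.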

Architecture

Brutus runs entirely in the browser, requiring no installation. The program is split across several files:

Quick Start Guide

Use the following workflow for a typical single-phase powder pattern.

  1. Load the data file. Click Select Data File. Supported formats include .xy, .xrdml, .ras, .xra, and plain text two-column data.
  2. Detect peaks. On the Peaks tab, adjust the Min peak (%), Radius (pts), and Points sliders until the automatically detected peaks match the visual pattern.
  3. Curate the peak list. Carefully review all peaks:
    • Edit $2\theta$ positions for accuracy.
    • Delete spurious peaks (noise, impurities, Kα2 if stripping is off).
    • Add important missing peaks, e.g. low-angle reflections, using Ctrl + Click on the chart.
    A clean list of about 15–20 peaks, free of impurities at low angle, is ideal.
  4. Set parameters. On the Parameters tab:
    • Select the correct X-ray Radiation Preset (e.g. Cu Kα).
    • Choose whether to enable Strip K-alpha2. When enabled, the Kα1 wavelength is used; when disabled, the average Kα wavelength is used. The default is ON (Kα1 wavelength).
    • Set a chemically reasonable Max Volume (ų) to limit the search space.
    • Set 2θ Error (°) according to your data quality (e.g. ≈0.02° for synchrotron, ≈0.05° for a typical lab diffractometer).
    • Leave Refine Zero-Point Error enabled unless you have a specific reason to turn it off.
    • Select the crystal systems to search. Enabling Orthorhombic, Monoclinic, or Triclinic activates GPU-accelerated searches.
  5. (Optional) Tune GPU Parameters. If searching Orthorhombic, Monoclinic, or Triclinic, a new box appears.
    • HKL Basis Size: Number of simple HKLs to use for the search (Defaults: Ortho 300, Mono 100, Tri 40).
    • Peaks to Combine: Number of observed peaks used to form combinations (Defaults: Ortho 7, Mono 7, Tri 9).
    • Leave these at their defaults unless you have trouble finding a solution.
  6. Start indexing. Click Start Indexing. Progress is shown on the main progress bar.
  7. Inspect solutions. On the Solutions tab:
    • Sort solutions by M(20).
    • Click a row to display calculated (blue) and observed (red) tick marks on the chart.
    • A plausible solution will show excellent alignment and reasonable space-group suggestions.

The User Interface

The application window is divided into a Controls Panel (left) and a Results Area (right), separated by a draggable divider that allows you to resize both panels.

Controls Panel

The controls are organized into three main tabs.

1. Peaks Tab

2. Parameters Tab

GPU Search Parameters

When Orthorhombic, Monoclinic, or Triclinic is selected, a dedicated panel appears with four parameters controlling the scope, speed, and memory usage of the GPU search.

Note: The status text at the bottom of the panel updates in real-time as you adjust these values, showing the total estimated number of cells to be tested. Ensure this number remains reasonable (e.g. under 500 Billion for a result within a few minutes). For a regular GPU card (e.g. Nvidia T1000, 8 GB) the effective search speed is typically in the range of tens of millions of cells per second.

3. Solutions Tab

Results Area

Peak Finding in Detail

Accurate peak positions are the single most important input for successful indexing. Brutus uses a multi-step procedure to detect peaks from raw intensity data.

Algorithm Steps

  1. Kα2 stripping (optional): If Strip K-alpha2 is enabled, the Rachinger (1948) correction is applied to the raw intensities. The correction may not perfectly preserve peak-base shapes, but this is unimportant here: only peak positions are used for indexing, and those remain highly accurate.
  2. Background subtraction: A rolling-ball style algorithm estimates and removes the background. The Radius slider controls the ball radius in data points. A larger radius smooths over broader background humps.
  3. Data smoothing: A Savitzky–Golay filter is applied to the background-subtracted signal to reduce noise while preserving peak shapes. The Points slider sets the window width.
  4. Initial peak detection: Local maxima above the Min peak (%) threshold are identified.
  5. Position refinement: For each detected peak, a five-point least-squares quadratic fit (Savitzky–Golay coefficients) is carried out around the local maximum to obtain a sub-step precision position. If the peak is too close to the data edge, the algorithm falls back to a three-point fit.
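Step 5 can be sketched with the standard 5-point Savitzky–Golay quadratic-fit coefficients (the exact coefficients and the edge-fallback logic are assumptions, not copied from the source):

```javascript
// Least-squares fit of y = c0 + c1*x + c2*x^2 over x = -2..2 using the
// standard SG convolution weights, then return the parabola vertex offset
// (in units of the data step, relative to the centre point y[2]).
function parabolaVertexOffset5(y) {
  const c1 = (-2 * y[0] - y[1] + y[3] + 2 * y[4]) / 10; // slope at centre
  const c2 = (2 * y[0] - y[1] - 2 * y[2] - y[3] + 2 * y[4]) / 14; // curvature/2... fitted x^2 coefficient
  return -c1 / (2 * c2);
}
```

For data sampled exactly from a parabola with its apex 0.3 steps right of the centre point, the function returns 0.3, i.e. the refined position is exact for locally quadratic peak tops.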

Suspected Kα2 Peaks & Wavelength Treatment

If your data has not been stripped of Kα2, Brutus automatically scans the peak list to flag suspected Kα2 ghosts:

Practical Recommendations

Indexing Algorithm and Search Strategy

Brutus uses an exhaustive, symmetry-specific trial-and-refine indexing algorithm. For each crystal system, the program generates trial solutions from combinations of low-angle observed peaks and low-index Miller indices, solves the corresponding linear system in reciprocal space, rejects unphysical cells, and then performs a full weighted least-squares refinement on surviving candidates.

Linear System Formulation

The search is formulated as a system of linear equations based on the quadratic form. Given a trial assignment of Miller indices $\{(hkl)_i\}$ to observed peaks $\{Q_{obs,i}\}$:

$$ Q_{obs,i} = \sum_j P_j \cdot H_{j,i} $$

where $H_{j,i}$ is the $j$-th element of the design row for $(hkl)_i$ (see the table in the Overview), and $P_j$ are the unknown reciprocal-lattice parameters. Stacking $k$ such equations gives a $k \times k$ square system that is solved exactly (or in the over-determined case, by weighted least-squares).

CPU Searches (High Symmetry)

High-symmetry systems have fewer degrees of freedom ($k \leq 2$) and are solved on the CPU in a background Web Worker. Recent updates have significantly expanded the search scope to ensure large unit cells are not missed.

GPU-Accelerated Searches (Low Symmetry)

For Orthorhombic, Monoclinic, and Triclinic systems, the combinatorial explosion makes CPU searching impractical. Brutus offloads these tasks to the WebGPU API, allowing billions of trial cells to be evaluated in minutes.

The Combinatorial Search Space

The total number of cells tested depends on two user-configurable parameters.

Total Trials = Peak Combinations × HKL Combinations × Permutations

$$ N_{total} = C(N_{peaks},\, k) \;\times\; C(N_{hkl},\, k) \;\times\; k! $$

System Specifics

| System | $k$ | Default $N_{hkl}$ | Default $N_{peaks}$ | Estimated trials (defaults) |
|---|---|---|---|---|
| Orthorhombic | 3 | 300 | 7 | $C(7,3) \times C(300,3) \times 6 \approx 936\;\text{Million}$ |
| Monoclinic | 4 | 100 | 7 | $C(7,4) \times C(100,4) \times 24 \approx 3.3\;\text{Billion}$ |
| Triclinic | 6 | 40 | 9 | $C(9,6) \times C(40,6) \times 720 \approx 232\;\text{Billion}$ |

These numbers can grow very rapidly. For example, increasing the Monoclinic $N_{hkl}$ from 100 to 150 multiplies the HKL combinations by roughly $(150/100)^4 \approx 5\times$, and increasing $N_{peaks}$ from 7 to 9 multiplies the peak combinations by $C(9,4)/C(7,4) = 126/35 \approx 3.6\times$.
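The trial counts above can be reproduced with a small binomial-coefficient helper (illustrative code, not part of the application):

```javascript
// Binomial coefficient C(n, k), computed incrementally to stay in range.
function choose(n, k) {
  let r = 1;
  for (let i = 1; i <= k; i++) r = (r * (n - k + i)) / i;
  return Math.round(r);
}

function factorial(k) {
  let r = 1;
  for (let i = 2; i <= k; i++) r *= i;
  return r;
}

// Total trials = C(N_peaks, k) * C(N_hkl, k) * k!
function totalTrials(nPeaks, nHKL, k) {
  return choose(nPeaks, k) * choose(nHKL, k) * factorial(k);
}
```

With the defaults, totalTrials(7, 300, 3) gives 935,571,000 (the ≈936 Million orthorhombic figure in the table), and the monoclinic and triclinic defaults reproduce the ≈3.3 Billion and ≈232 Billion entries.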

GPU Memory Management & Safe Batching

Testing hundreds of billions of cells generates massive amounts of data. To prevent crashing your browser or freezing your computer, Brutus uses a Safe Batching architecture:

  1. Hardware Limit (VRAM): The program detects your GPU's available memory. On mobile devices or laptops with integrated graphics, the batch size is automatically reduced to fit within the available VRAM.
  2. Time Limit (TDR Prevention): On Windows, if a GPU computation takes longer than ~2 seconds, the OS assumes the driver has frozen and resets it (DXGI_ERROR_DEVICE_HUNG). Brutus enforces a strict "Speed Limit" per batch (e.g. max 100,000 trials per GPU dispatch for Triclinic) to keep the GPU responsive.
What this means for you: You do not need to configure memory settings. If you see the progress bar moving in many small steps, the safety system is working correctly.

If the search is too slow: Adjusting internal GPU memory or chunk parameters is not useful. Instead, curate the peak list, lower the HKL Basis Size, or reduce Peaks to Combine.

Two-Stage GPU Filtering

Most candidate cells are invalid and are discarded on the GPU to save CPU time. Two filters are applied:

  1. Geometric Filter: Cells are rejected immediately if they are singular (near-zero determinant), have unphysical edge lengths ($< 2\;\text{Å}$ or $> 50\;\text{Å}$), have invalid angles ($< 20°$ or $> 160°$ for monoclinic/triclinic), or exceed the user's Max Volume.
  2. Figure of Merit Pre-filter: For surviving cells, the GPU calculates a fast internal score: the average absolute deviation $\langle|\Delta Q|\rangle$ over the first $\max(N_{peaks}, 10)$ observed peaks. If this average exceeds FoM Tolerance × $\Delta Q_{user}$ (where $\Delta Q_{user}$ is the Q tolerance derived from the user's 2θ Error), the cell is discarded. Only cells passing this filter are stored in the GPU buffer for CPU refinement.
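The geometric filter in step 1 amounts to a few threshold checks, sketched here with illustrative names (the exact inclusive/exclusive boundary behaviour is an assumption):

```javascript
// Reject cells with unphysical edges (< 2 or > 50 Angstroms), invalid
// angles (< 20 or > 160 degrees), or a volume outside (0, maxVolume].
// A non-positive volume also covers the singular (near-zero determinant) case.
function passesGeometry(cell, maxVolume) {
  const edgesOk = [cell.a, cell.b, cell.c].every((x) => x >= 2 && x <= 50);
  const anglesOk = [cell.alpha, cell.beta, cell.gamma].every(
    (x) => x >= 20 && x <= 160
  );
  return edgesOk && anglesOk && cell.volume > 0 && cell.volume <= maxVolume;
}
```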

Least-Squares Refinement

Every candidate cell passing the GPU (or CPU) filters is subjected to a full weighted least-squares refinement before being scored and reported. This section details the weighting scheme, zero-point correction, and error propagation implemented in worker-logic.js.

Q-Space Tolerance

The matching tolerance used during peak assignment is not a fixed ΔQ but a peak-angle-dependent quantity derived from the user's stated $\sigma_{2\theta}$. From $Q = (4/\lambda^2)\sin^2\theta$, differentiating:

$$ \Delta Q_i = \frac{8\,\sin\theta_i\,\cos\theta_i}{\lambda^2}\,\frac{\sigma_{2\theta}\,\pi}{360} = \frac{2\,\sin(2\theta_i)}{\lambda^2}\,\sigma_{2\theta}^{(\text{rad})} $$

This means the Q tolerance is largest at intermediate angles (where $\sin 2\theta$ is largest) and converges to zero at $2\theta = 0°$ and $2\theta = 180°$. The implementation adds a small constant ($10^{-9}$) to prevent division by zero.
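In code, the angle-dependent tolerance (including the small guard constant) might look like this (an illustrative sketch, not the literal implementation):

```javascript
// Delta-Q matching tolerance for a peak at twoThetaDeg, given the user's
// 2-theta uncertainty in degrees: dQ = (2 sin(2theta) / lambda^2) * sigma_rad.
// The 1e-9 guard keeps the tolerance non-zero at 2theta = 0 or 180 degrees.
function deltaQTolerance(twoThetaDeg, sigma2ThetaDeg, lambda) {
  const twoThetaRad = (twoThetaDeg * Math.PI) / 180;
  const sigmaRad = (sigma2ThetaDeg * Math.PI) / 180;
  return (2 * Math.sin(twoThetaRad) * sigmaRad) / (lambda * lambda) + 1e-9;
}
```

As expected from the $\sin 2\theta$ factor, the tolerance peaks at $2\theta = 90°$ and shrinks toward both ends of the angular range.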

Least-Squares Weighting Scheme

For a measurement with constant $2\theta$ uncertainty $\sigma_{2\theta}$, the uncertainty in $Q_{obs}$ is:

$$ \sigma_{Q,i} = \frac{2\,\sin(2\theta_i)}{\lambda^2}\,\sigma_{2\theta} $$

The statistically optimal weight for each peak in the least-squares system is therefore:

$$ w_i = \frac{1}{\sigma_{Q,i}^2} \propto \frac{1}{\sin^2(2\theta_i)} $$

This scheme gives high weight to low-angle peaks (which have small $\sigma_Q$) and low weight to high-angle peaks (which have large $\sigma_Q$). This is the opposite of using $w_i = Q_{obs,i}$ (which would over-weight high-angle peaks), and it is particularly important for the zero-point correction: the historical un-weighted approach significantly biased both the cell parameters and the zero shift. Weights are capped at $1/\sin^2(178°)$ to prevent them from blowing up for hypothetical peaks near $2\theta = 0°$ or $180°$.

The least-squares normal equations solved are therefore:

$$ (M^T W M)\,\mathbf{x} = M^T W\,\mathbf{q}_{obs} $$

where $M$ is the design matrix (rows = indexed peaks, columns = reciprocal-lattice parameters), $W = \text{diag}(w_1, \ldots, w_n)$, and $\mathbf{x}$ is the vector of reciprocal parameters. This system is solved numerically via LU decomposition with partial pivoting (implemented in luDecomposition / luSolve).
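A minimal sketch of forming and solving the weighted normal equations — using plain Gaussian elimination with partial pivoting in place of the LU routine, and illustrative names throughout:

```javascript
// Solve (M^T W M) x = M^T W q for x, with W = diag(w).
// M: n-by-k design matrix (array of rows), w: n weights, q: n observed Q values.
function solveWeightedLS(M, w, q) {
  const n = M.length, k = M[0].length;
  // Build the augmented normal-equation matrix [M^T W M | M^T W q].
  const N = Array.from({ length: k }, () => new Array(k + 1).fill(0));
  for (let i = 0; i < n; i++) {
    for (let a = 0; a < k; a++) {
      for (let b = 0; b < k; b++) N[a][b] += w[i] * M[i][a] * M[i][b];
      N[a][k] += w[i] * M[i][a] * q[i];
    }
  }
  // Forward elimination with partial pivoting.
  for (let col = 0; col < k; col++) {
    let p = col;
    for (let r = col + 1; r < k; r++) {
      if (Math.abs(N[r][col]) > Math.abs(N[p][col])) p = r;
    }
    [N[col], N[p]] = [N[p], N[col]];
    for (let r = col + 1; r < k; r++) {
      const f = N[r][col] / N[col][col];
      for (let c = col; c <= k; c++) N[r][c] -= f * N[col][c];
    }
  }
  // Back substitution.
  const x = new Array(k).fill(0);
  for (let r = k - 1; r >= 0; r--) {
    let s = N[r][k];
    for (let c = r + 1; c < k; c++) s -= N[r][c] * x[c];
    x[r] = s / N[r][r];
  }
  return x;
}
```

For a noise-free tetragonal test case (rows $[h^2+k^2,\; l^2]$, true parameters $A = 0.04$, $C = 0.02$) the over-determined solve recovers the parameters exactly.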

Zero-Point Error Refinement (Two-Round Strategy)

When Refine Zero-Point Error is enabled, a zero-point parameter $\Delta_{zero}$ (in degrees) is added as an extra column to the design matrix. The physical model is that the true $2\theta$ of a peak is $2\theta_{true} = 2\theta_{obs} - \Delta_{zero}$, which shifts the observed Q as:

$$ Q_{true} = \frac{4\sin^2[(\theta_{obs} - \Delta_{zero}/2)]}{\lambda^2} $$

Linearising gives the zero-error design-matrix column element:

$$ \frac{\partial Q}{\partial \Delta_{zero}} \approx \frac{2}{\lambda^2}\sin(2\theta_{obs}) $$

A two-round strategy is used to handle non-trivial zero shifts robustly:

  1. Round 1: Peak assignment is done assuming $\Delta_{zero}=0$. The full weighted LS system (cell + zero) is solved to obtain a first estimate $\Delta_{zero}^{(1)}$.
  2. Round 2: Peak assignment is repeated, this time correcting the observed $Q$ values by $\Delta_{zero}^{(1)}$. The LS system is solved again on the new peak set. This dramatically improves robustness when the zero shift is large enough to cause mis-assignments in Round 1.

The same two-round strategy is applied to both GPU candidates (monoclinic/triclinic) and CPU candidates (cubic/tetragonal/hexagonal/orthorhombic).
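The zero-error design-column element from the linearisation above is a one-liner (illustrative name; it follows the approximate expression as written, leaving any degree-to-radian scaling of $\Delta_{zero}$ inside the $\approx$):

```javascript
// Extra design-matrix column for the zero-point parameter:
// dQ/dDelta_zero ~ (2 / lambda^2) * sin(2*theta_obs).
function zeroPointColumn(twoThetaDeg, lambda) {
  return (2 / (lambda * lambda)) * Math.sin((twoThetaDeg * Math.PI) / 180);
}
```

Note that this column is proportional to $\sin 2\theta$ — the same angular factor as $\sigma_{Q,i}$ — which is why the weighting scheme matters so much for a stable zero-point refinement.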

Minimum Peaks Required for Acceptance

A solution is accepted only if a minimum number of observed peaks can be indexed. These thresholds are hard-coded and slightly exceed the number of free parameters to ensure the fit is over-determined:

| System | Free parameters | Min indexed peaks |
|---|---|---|
| Cubic | 1 | 4 |
| Tetragonal / Hexagonal | 2 | 5 |
| Orthorhombic | 3 | 6 |
| Monoclinic | 4 | 7 |
| Triclinic | 6 | 7 |

Standard Deviation Propagation

The covariance matrix $V = \sigma^2 (M^T W M)^{-1}$ (with $\sigma^2 = SSR/\nu$, $\nu$ = degrees of freedom) is propagated to standard deviations of the cell parameters using analytical derivatives:

Computational Limits & The Scale of Search

Because Brutus performs a brute-force combinatorial search, the number of trial cells can become astronomical. While modern GPUs are incredibly fast, there are hard limits defined by both hardware (WebGPU specifications) and practical time constraints.

The Hierarchy of Limits

The "Total Trials" displayed in the application is:

$$ \text{Trials} = \underbrace{C(N_{hkl}, k)}_{\text{Limit: } \approx 4.29 \times 10^9} \times \underbrace{C(N_{peaks}, k)}_{\text{User dependent}} \times \underbrace{k!}_{\text{Constant per system}} $$

The HKL Combos count is the critical hardware constraint. It maps directly to the number of GPU threads dispatched. Since WebGPU uses 32-bit unsigned integers (u32) for thread indexing, this value must remain below $2^{32}-1 \approx 4.29 \times 10^9$. Brutus automatically calculates this limit based on your HKL Basis Size and prevents the search from starting if you exceed it.

System-Specific Breakdown

1. Triclinic ($k=6$)

2. Monoclinic ($k=4$)

3. Orthorhombic ($k=3$)

Practical Implications

Is the "Trials" number real?
Yes. It represents the total number of mathematical systems the GPU would solve. However, most invalid cells are rejected after checking only the first 2 or 3 peaks ("Fail-Fast" optimisation), meaning the GPU rarely performs the full calculation for every trial. The effective throughput is substantially higher than the raw theoretical number implies.

Can you actually run 120 Quadrillion trials?
Technically yes — the Safe Batching system prevents your computer from crashing. However, even at 200 Million trials/sec, 120 Quadrillion trials would take ~19 years to finish.

Practical Limit: For a search to finish in a "coffee break" time frame (5–10 minutes), keep the Total Trials count under 100–500 Billion. For modern desktop GPUs (e.g. Nvidia RTX series), speeds of 500 M – 2 B cells/sec are typical for Orthorhombic; Triclinic is slower due to the larger $k$ and the more complex 6×6 linear solve.

If you obtain no solutions, or too many, consider the following adjustments.

If no solutions (or only poor ones) are found

If you get too many solutions (GPU buffer fills)

The GPU buffer is limited (default 50,000). If it fills up, the search stops early ("Buffer Full"). In that case:

  1. Tighten the FoM Tolerance (e.g. from 3 to 1.5). This discards more candidates on the GPU, so fewer reach the CPU buffer.
  2. Lower the Max Volume if you can constrain the cell size further.
  3. Increase the Buffer size (e.g. from 50 to 200 kCells) if you suspect the correct solution is present but ranked below the buffer cut-off. Note that larger buffers mean more CPU work during post-processing.
  4. Tighten 2θ Error if your data quality warrants it — this also tightens the Q-tolerance used in matching.

Evaluating Solutions

The indexing search usually produces several candidate cells. Brutus keeps at most the best 50, ranked by M(20), after the search. Selecting the correct one requires interpreting figures of merit and checking the refined fit visually.

de Wolff Figure of Merit: M(20) / M(N)

The primary ranking indicator is the de Wolff Figure of Merit, M(N), calculated from the first $N$ observed reflections (typically $N = 20$ or the full peak list if fewer than 20 peaks are available). It quantifies both positional accuracy and completeness. The implementation uses:

$$ M(N) = \frac{Q_N}{2\,\langle|\Delta Q|\rangle\,N_{calc}} $$

A high M(N) therefore requires both a small average positional error (small $\langle|\Delta Q|\rangle$) and high completeness (small $N_{calc}$ relative to $N$). Both M(20) (from the first 20 peaks) and M(all) (from the full peak list) are computed and stored. The M(20) value is used for ranking; M(all) appears in the PDF report.

| M(20) value | Interpretation |
|---|---|
| > 20 | Very likely correct. A cell with both high M(20) and plausible chemistry is almost certainly the true solution. |
| > 10 | Likely correct, provided the cell volume is chemically plausible. |
| 5–10 | Plausible; requires further inspection. Check all peaks visually. |
| < 5 | Probably spurious; treat with great caution. |

A minimum M(20) of 2.0 is required for a solution to be retained at all. Solutions below this threshold are silently discarded.

F(N) Figure of Merit

As a complementary metric, Brutus computes the F(N) figure of merit (Smith & Snyder, 1979). While M(N) is expressed in Q-space, F(N) uses $2\theta$ directly:

$$ F(N) = \frac{N}{\langle|\Delta(2\theta)|\rangle \cdot N_{calc}} $$

A high F(20) indicates a precise fit with low average angular error. A solution with both high M(20) and high F(20) is highly reliable. Both F(20) and F(all) are reported in the PDF.
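Both figures of merit are simple ratios once the averages are known (a sketch with illustrative names; $Q_N$, the mean $|\Delta Q|$, the mean $|\Delta(2\theta)|$ and $N_{calc}$ are assumed to be computed elsewhere):

```javascript
// de Wolff M(N) = Q_N / (2 * <|dQ|> * N_calc)
function deWolffM(qN, dQavg, nCalc) {
  return qN / (2 * dQavg * nCalc);
}

// Smith & Snyder F(N) = N / (<|d(2theta)|> * N_calc)
function smithSnyderF(n, d2ThetaAvg, nCalc) {
  return n / (d2ThetaAvg * nCalc);
}
```

For example, $Q_{20} = 0.4$, $\langle|\Delta Q|\rangle = 5\times10^{-4}$ and $N_{calc} = 25$ give M(20) = 16 — both the small average error and the modest number of unobserved calculated lines contribute to the score.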

Least-Squares Refinement and Standard Deviations

For each promising candidate, Brutus performs the two-round weighted least-squares refinement described in the Refinement section. The resulting cell parameters are reported with standard deviations propagated from the covariance matrix.

When Refine Zero-Point Error is enabled, the reported zero correction ($\Delta_{zero}$) represents a systematic misalignment of the diffractometer. A large value (e.g. >0.05° for a lab instrument) may indicate a calibration issue and should be investigated independently.

Duplicate Suppression

To avoid reporting multiple equivalent cells, Brutus generates a canonical "key" for each refined cell: the system name plus the three axis lengths rounded to two decimal places, and (for monoclinic/triclinic) the angles. If a new solution matches an existing key, it replaces the existing entry only if its M(20) is higher.
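A canonical key of this kind can be sketched as follows (the exact string format is an assumption):

```javascript
// Canonical duplicate-suppression key: system name plus axis lengths
// (and, for low symmetry, angles) rounded to two decimal places.
function cellKey(system, a, b, c, angles = []) {
  const parts = [system, a.toFixed(2), b.toFixed(2), c.toFixed(2),
                 ...angles.map((x) => x.toFixed(2))];
  return parts.join("|");
}
```

Two refinements of the same lattice that round to the same two-decimal values then collide on one key, and only the higher-M(20) entry survives.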

Space Group Analysis

After a high-quality unit cell is obtained, Brutus can suggest likely space groups based on systematic absences. This serves as input for subsequent structure solution or Rietveld refinement.

Note: For some space groups that have different origin choices but identical extinction rules (e.g. Pmmn), both settings appear in the list. This is intentional — they are not distinguishable by powder diffraction alone, and the reminder is useful for subsequent Rietveld refinement.

Method

  1. Generate unique reflections. Using the refined cell, Brutus computes a complete list of theoretical reflections up to the maximum observed $2\theta$, applying Friedel's law to keep only symmetry-unique $(hkl)$ indices (e.g. $l > 0$, or $l=0,\,k > 0$, or $l=k=0,\,h > 0$ for triclinic).
  2. Index observed peaks. All peaks in the curated list are matched against the theoretical pattern using the Q-space tolerance.
  3. Build a high-confidence subset. To avoid ambiguity from overlapping reflections, Brutus retains only peaks for which no other theoretical $(hkl)$ lies within the 2θ Error window. This "unambiguous" subset is used exclusively for extinction analysis.
  4. Determine centering and extinctions. The unambiguous $(hkl)$ set is compared against the extinction rules (centering conditions and glide/screw symmetry conditions) for all candidate space groups. Brutus classifies violations as:
    • Hard violations: an unambiguous, real (non-Kα2) reflection breaks the rule.
    • Soft violations: only Kα2-suspect peaks break the rule. Space groups with only soft violations are retained in the list with a warning.
    The zone assignment for each $(hkl)$ (e.g. h00, 0k0, hk0, hkl etc.) is used to apply the correct subset of conditions for that zone.
  5. Rank space groups. Based on the crystal system and detected centering, the internal space-group database is filtered. Each space group is assigned a (hard) violation count and ranked accordingly. The conventional name of the space group (consistent with the orientation found by the program) is displayed.

Interpreting the Output

Advanced Topics: Enhanced Search and Sieving

Beyond the core indexing routine, Brutus applies several "fishing" strategies and reduction steps to improve robustness and simplify the set of final solutions. These are applied in findTransformedSolutions() after the main search completes.

1. Niggli Cell Reduction & Symmetry Upgrade

For each solution, Brutus computes the Niggli reduced cell — the standardised, most compact primitive cell for the lattice. The conventional (possibly centred) cell is first reduced to primitive using the detected centering (I, F, C, etc.), and then the Niggli algorithm is applied.

After reduction, the code calls getSymmetry() with a slightly loose tolerance (0.05 Å / 0.05°) to detect pseudo-symmetry: a cell found as monoclinic might, after Niggli reduction, reveal all angles equal to 90° and two equal axes — i.e. it is actually tetragonal. If the detected symmetry is higher than the search symmetry (and the user has selected that higher-symmetry system), a new trial cell is constructed with the correct symmetry and re-refined and re-scored.

Niggli cells are also used for:

2. Matrix-Based Cell Transformations

A set of seven crystallographic transformation matrices $P$ is applied to each candidate cell. Each transforms the real-space metric tensor $G \to P^T G P$, generating a new lattice description. The matrices include:

The resulting cell is classified by symmetry and, if the system is among those selected by the user, sent to refinement and scoring.

3. HKL Divisor Analysis (Sub-cell Detection)

The list of indexed $(hkl)$ values for each solution is inspected for common divisors. The GCD of all $|h|$ values, all $|k|$ values, and all $|l|$ values are computed separately. If any GCD exceeds 1 (e.g. all $h$ values are even), the program tests a shrunken cell with the corresponding axis divided by that GCD. This detects cases where the indexing algorithm found a super-cell of the true cell.
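The divisor analysis reduces to a GCD over each index column (illustrative sketch):

```javascript
// Greatest common divisor of two non-negative integers.
const gcd2 = (x, y) => (y === 0 ? x : gcd2(y, x % y));

// GCD over |h|, |k| and |l| separately for an indexed (hkl) list.
// A result > 1 on one axis suggests that axis can be divided by it.
function axisDivisors(hklList) {
  return [0, 1, 2].map((axis) =>
    hklList.reduce((g, hkl) => gcd2(g, Math.abs(hkl[axis])), 0)
  );
}
```

For example, an indexed list in which every $h$ is even yields a divisor of 2 on the first axis, flagging a probable doubled $a$ axis.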

4. Orthorhombic–Hexagonal Relationship

A hexagonal lattice can sometimes be described as C-centred orthorhombic with $b/a \approx \sqrt{3}$ (or permutations). All orthorhombic solutions are checked against this condition (within 3% tolerance), and potential hexagonal equivalents are generated and evaluated.
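The $b/a \approx \sqrt{3}$ test itself is a one-liner (a sketch; the 3% tolerance matches the text, while the handling of axis permutations is simplified here):

```javascript
// True if the ratio of the two in-plane axes is within `tolerance`
// (relative) of sqrt(3), i.e. the C-centred orthorhombic cell may
// actually describe a hexagonal lattice.
function looksHexagonal(a, b, tolerance = 0.03) {
  const ratio = Math.max(a, b) / Math.min(a, b);
  return Math.abs(ratio / Math.sqrt(3) - 1) < tolerance;
}
```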

5. Swap Fishing for Ambiguity

For each high-ranking solution, Brutus re-examines the first few low-angle peaks. Pairs of peaks that are very close in angle (the most common source of mis-assignment in the initial brute-force step) are identified, and a new hypothesis is created by swapping their HKL labels. The swapped cell is fully refined and rescored. If it yields a higher M(20), it is retained.

6. Final De-Duplication (Sieving)

After all transformations and re-refinements, Brutus applies a final de-duplication step. If two solutions have volumes within 1% of each other:

The final retained set contains at most 50 solutions, ranked by M(20).

Troubleshooting & FAQ

Error: "WebGPU not supported in this browser"

This error usually indicates a software or configuration issue rather than a lack of hardware capability. Even integrated graphics (e.g. Intel UHD) support WebGPU if configured correctly.

To verify: Type chrome://gpu in your address bar and look for "WebGPU". If it says "Software only" or "Disabled", the GPU search will not function.

Why were no solutions found?

Why is M(20) low although the fit looks good?

The progress bar is stuck / search seems frozen

Test Files

Found a Bug?

If you suspect you have found a bug (e.g. the interface freezes unexpectedly or valid data crashes the GPU search), please report it.

Report a Bug on GitHub

Note: You will need a free GitHub account. The form will ask for your browser version and GPU model to help reproduce the issue.

References

Brutus was developed by Nita Dragoe at Université Paris-Saclay (2024–2025) as a successor to the earlier program Powder (1999–2000). If you use Brutus in your work, please cite:
https://doi.org/10.13140/rg.2.2.18182.84806

For further background on the methodology, the following references are recommended:

  1. M(N) figure of merit
    de Wolff, P. M. (1968). "A Simplified Criterion for the Reliability of a Powder Pattern Indexing." Journal of Applied Crystallography 1, 108–113.
  2. F(N) figure of merit
    Smith, G. S. & Snyder, R. L. (1979). "F(N): A Criterion for Rating Powder Diffraction Patterns and Evaluating the Reliability of Powder-Pattern Indexing." Journal of Applied Crystallography 12, 60–65.
  3. General powder diffraction text
    Klug, H. P. & Alexander, L. E. (1974). X-Ray Diffraction Procedures for Polycrystalline and Amorphous Materials, 2nd ed. New York: Wiley-Interscience.
  4. Kα2 Stripping Algorithm
    Rachinger, W. A. (1948). "A Correction for the α1 α2 Doublet in the Measurement of Widths of X-ray Diffraction Lines." Journal of Scientific Instruments 25, 254.
  5. Savitzky–Golay smoothing
    Savitzky, A. & Golay, M. J. E. (1964). "Smoothing and Differentiation of Data by Simplified Least Squares Procedures." Analytical Chemistry 36(8), 1627–1639.
  6. Alternative indexing approaches
    Ito, T. (1949). "A General Powder X-ray Photography." Nature 164, 755–756.
    Werner, P.-E., Eriksson, L. & Westdahl, M. (1985). "TREOR, a Semi-exhaustive Trial-and-Error Powder Indexing Program for All Symmetries." Journal of Applied Crystallography 18, 367–370.
    Visser, J. W. (1969). "A Fully Automatic Program for Finding the Unit Cell from Powder Data." Journal of Applied Crystallography 2, 89–95.
    Le Bail, A. (2004). "Monte Carlo Indexing with McMaille." Powder Diffraction 19(3), 249–254.
    Boultif, A. & Louër, D. (2004). "Powder Pattern Indexing with the Dichotomy Method." Journal of Applied Crystallography 37, 724–731.
  7. Previous software
    Dragoe, N. (2001). "PowderV2: A Suite of Applications for Powder X-Ray Diffraction Calculations." Journal of Applied Crystallography 34, 535.