# Solutions To Common Problems

This page documents some of the error messages that can appear in the Questaal suite, and their resolution.

Fatal errors typically begin with a message  Exit -1 routine-name message,  indicating where and why the program failed. Sometimes non-fatal warning messages are given; usually such a message contains  "(warning)"  or something similar.

If the warning is severe, there will be an accompanying exclamation mark, e.g.  warning!.  Such warnings are logged and indicated on exit (see severe warnings below).

This page discusses problems that may arise, and also error messages that may appear. Error messages are also summarized in the Error Messages page, with solutions. On this page the discussion is more discursive.

### General

Generally, your directory should be cleaned after every complete simulation. For example, if you run a simulation with extension .si, then edit the input files and rerun the simulation, you may get conflicts because old information from the previous .si runs still exists in the directory.

It is thus good practice to clean your directory of unnecessary files before a new simulation (only files related to the material in question need be cleaned), especially the rst and mixm files.
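For instance, before rerunning a simulation with the hypothetical extension .si, the stale state files could be removed like this (a sketch; the exact file names depend on your calculation, and your input files such as ctrl.si should of course be kept):

```shell
# Remove stale state files from a previous run with extension .si
# (file names are illustrative; rst and mixm are the most important to clean)
rm -f rst.si mixm.si
```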

Also, you should inspect the standard output! In particular, look out for a severe warning message when the code exits.

#### Severe Warnings

If a Questaal program includes a  [severe warning]  in the Exit message:

Exit n [1 severe warning] message

it means that a significant problem was encountered which should be investigated. A warning of this type is logged to indicate that you should look for a message somewhere in the standard output containing the string  warning!.

Note: less severe warnings can also appear in the text. These are not indicated when the program exits.

If you get an error similar to the following:

unexpected value # for file rmt ... expected #


it is probably because you have a mismatched restart file rst.ext, which contains augmentation radii from a prior run. Should you change your input files, this information can become invalid for the current simulation and thus cause errors. See Error messages.

#### 2. Problems with the radial Schrödinger equation solver

A message like this may appear:

RSEQ : nit gt999 and bad nodes for l=2.  Sought 0 but found 1.  e=-2.0811D-01


This can occur for a myriad of reasons, usually because something is amiss in the potential (issue [3]), or because the linearization energy is too far out of range (issue [4]).

#### 3. Small errors in radial integration, ASA

The Questaal codes integrate partial waves on a radial mesh.

It turns out that the algorithm can become unstable in the ASA when GGA functionals are used. This is because very small discontinuities can appear in the second energy derivative $\ddot\phi(\varepsilon,r)$ of the partial wave at the point where the inward and outward radial integrations meet. This tiny error gets amplified by the GGA, since the potential involves gradients and Laplacians of the density.

The resolution is to add 4 to whatever value you have selected so far (if any) for the tag HAM_QASA.
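For example, if you had previously set QASA=1 (a hypothetical starting value), the modified entry in the input file would be:

```
HAM  QASA=5
```

since 1 + 4 = 5. If you had not set HAM_QASA at all, check its default value in the input-file documentation before adding 4.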

This switch causes the integrator to integrate only outwards when making $\dot\phi_l(\varepsilon,r)$ and $\ddot\phi_l(\varepsilon,r)$. (The standard procedure is to integrate inwards and outwards to a middle point and join the solutions.) Outward-only integration can be slightly erroneous for deep, core-like states (where the wave function decays exponentially away from the nucleus), but in the ASA you are not likely to have a valence state for which this is a significant issue.

#### 4. Bad logarithmic derivative parameters

The linear method approximates the energy-dependent partial wave $\phi_l(\varepsilon,r)$ with a first-order Taylor series about a linearization energy, traditionally called $\varepsilon_\nu(l)$. Normally you do not specify $\varepsilon_\nu(l)$ in the Questaal package, but rather the continuously varying principal quantum number Pl. You can specify Pl yourself (indeed, on getting started an initial estimate is necessary), but normally the codes will float this quantity in the course of a self-consistency cycle.

##### 4.1 P is too small

It may happen that Pl is floated to a value that is too small (referring to the fractional part of Pl). This can happen if the natural band center is far above the Fermi level, e.g. the Ga $4d$ state. lmf mostly protects against this by not allowing Pl to fall below the free-electron value, but the ASA codes do not. This is intentional, because the acceptable value depends on whether the state is folded down or not. (lmf does not have this downfolding capability yet.)

By default lmf uses the free-electron value as a lower bound for Pl. This is usually adequate. If HAM_AUTOBAS_GW is true, the lower bound is set a little higher. This is because in the GW case the unoccupied states also affect the potential, so there is a higher demand on the precision of states above the Fermi level.

You can also freeze Pl with the token SPEC_ATOM_IDMOD.
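As an illustrative sketch (species name and values hypothetical), freezing P in the d channel of a species while letting the s and p channels float might look like:

```
SPEC  ATOM=Ga  IDMOD=0,0,1
```

Here IDMOD takes one value per l channel; this assumes the usual convention that a value of 1 freezes Pl for that channel, while 0 lets it float. Consult the input-file documentation for the full set of IDMOD modes.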

##### 4.2 P is too large

It may happen that Pl is floated to a value that is too large (referring to the fractional part of Pl), i.e. too close to 1. This can happen if the natural band center is very deep, which can occur when using lmf with local orbitals. Usually it is not an issue, but if it is, you can set PZ to a fixed value and freeze it with SPEC_ATOM_IDMOD.

It can also happen that the floating algorithm goes haywire and puts Pl unrealistically high. This is rare, but it does occur occasionally. A classic example appears when carrying out a QSGW calculation for Ni, where the usual floating algorithm tries to set Pd=3.96 (it should be about 3.85).

The resolution is to revert to lmf's traditional floating algorithm. By default lmf uses a newer algorithm introduced by Takao Kotani with the advent of version 7.0. We made the newer scheme the default because it should in principle be slightly better (in practice there is little difference), but for reasons unknown it occasionally develops problems.

The choice of floating algorithm is buried in the second argument to the pfloat tag, input through either express_autobas_pfloat or HAM_AUTOBAS_PFLOAT. You can find a brief description of the tag by running

\$ lmf --input


To select the traditional floating algorithm, set the second element of pfloat to zero in the input file, e.g.

express autobas[... pfloat=2,0]


Note: If you build the input file with the blm utility, you can cause it to autoselect this algorithm by invoking blm with the switch  --clfloat.

Alternatively, you can freeze Pl with token SPEC_ATOM_IDMOD, but usually the floating algorithm will make a better choice.

##### 4.3 PZ is too large

The local orbital’s version of P (called PZ) can occasionally become too large for deep-lying states. If you freeze it with SPEC_ATOM_IDMOD, both PZ and the usual valence P are frozen. For an example, see this tutorial.

#### 5. Inconsistent treatment of local orbitals

##### 5.1 Inconsistent contents of local orbitals in the restart file

When reading the restart file, lmf may produce a message like this:

         site   1:Se      :file pz  is  0.00  0.00  0.00  0.00
                           given pz  is  0.00  0.00  3.90  0.00
         warning!  local orbital mismatch


lmf has found that the local orbitals specified in the input file are not consistent with those in the restart file. This mismatch flags a severe warning, and may also cause lmf to stop with an error, failing to find the eigenvalues.

For the solution, see Error Messages.

##### 5.2 Error in reading the atm file

When reading the file atm.ext, lmf may abort with an error message like this:

 is=1 qsc0=10 qsca=10 qc=28 qc0=28
Exit -1 problem in locpot -- possibly low LMXA, or orbital mismatch, species Se


It can happen for more than one reason, but the most common is that lmfa generated the atm file with a valence-core partitioning different from what lmf expects. Usually this means you invoked lmfa with a different set of local orbitals than the current input conditions specify. For example, lmfa creates the atm file; you then create a new basp file with a different set of local orbitals, but do not re-run lmfa to regenerate the altered valence-core partitioning.

The solution is to run lmfa again with the same input conditions as lmf expects.

#### 6 Non-integral number of electrons

You may encounter this message, especially when using the tetrahedron method:

 (warning): non-integral number of electrons --- possible band crossing at E_f


In finding a Fermi level the integrator assigns weights to each state. This message is printed when the sum of weights does not add up to an integral number of electrons. It most likely occurs when using the tetrahedron integration scheme and two bands cross near the Fermi level. This confuses the tetrahedron integrator because it does not know how to smoothly interpolate the bands. The larger the system and the denser the mesh of bands, the more likely this problem is to appear.

It can also appear if you use a non-integral nuclear charge, or add background charge to the system. This is not an error, and you can disregard the warning.

The resolution is to change the number of k divisions. It can happen that the problem resolves itself in the course of the self-consistency cycle, as the potential changes.
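For example, if the k-mesh is specified through BZ_NKABC, changing a hypothetical 8×8×8 mesh to 10×10×10 would read:

```
BZ  NKABC=10 10 10
```

The values here are purely illustrative; any change that shifts the mesh away from the problematic band crossing may suffice.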

#### 7 Inexact Inverse Bloch transform

This error may appear when the (static) self-energy Σ0 is read from disk, causing the program to stop:

 Oops!  Bloch sum deviates more than specified tolerance:
i   j      diff              bloch sum                file value
1   1    0.000020     -0.020464    0.000000     -0.020443    0.000000


This occurs when the inverse Bloch sum of $\Sigma^0_{\mathbf{R}L,\mathbf{R'}L'}(\mathbf{k})$ to the real-space $\Sigma^0_{\mathbf{R+T}L,\mathbf{R'}L'}$ is inexact. (Here T is a lattice translation vector.) The reader performs the inverse Bloch transform using FFT techniques. This is followed by a forward Bloch sum, which is compared against the original $\Sigma^0_{\mathbf{R}L,\mathbf{R'}L'}(\mathbf{k})$.

For the inverse sum, a cluster of points around each atom is generated. The radius of the cluster is printed out in a line similar to

 hft2rs: make neighbor table for r.s. hamiltonian using range = 4.9237 * alat


The number of pairs (connecting vectors) found within spheres of radius  range  around each atom is printed in lines like these:

 hft2rs: found 1020 connecting vectors out of 1024 possible for FFT
symiax: enlarged neighbor table from 1020 to 1452 pairs (48 symops)


The first line tells you how many pairs it found, and also the number of pairs required for the FFT to be exact. (The second tells you how many pairs it “padded” by adding equivalent boundary points to the original list, but that is not important here.) The backward-forward process should be exact provided enough pairs are available. In the lines above, only 1020 pairs were found, while 1024 are needed so as not to lose any information in the process.

To resolve this problem, increase  HAM_RSRNGE. You can use as large a number as you like, but the larger the number, the slower the calculation. It is best to increase  HAM_RSRNGE  just enough (say 4.9237→6) that the error does not appear. Alternatively, you can increase the acceptable tolerance in the error (HAM_RSSTOL).
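For the example above (range = 4.9237), the fix might look like this in the input file (the value 6 is illustrative; use the smallest value that makes the error disappear):

```
HAM  RSRNGE=6
```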

#### 8 Failure to find all eigenvalues

The diagonalizer is unable to calculate all of the eigenvalues. lmf aborts with a message similar to

 Exit -1 zhev: zhegv cannot find all evals

The ASA code lm may abort with the message

 DIAGNO: tinvit cannot find all evecs

This can happen for several reasons.

• The diagonalizer sometimes uses inverse iteration to diagonalize the tridiagonal form of the matrix after the Householder transformation, and inverse iteration can occasionally fail to converge.

Solution: Set [BZ_INVIT] to false; another algorithm will be used. If this is the problem, it will usually disappear with some tiny change, e.g. after the density is updated.

• Another common reason for this error is that the overlap matrix is not positive definite.

• Especially in the ASA, this can happen if spheres overlap too much or the potential is very poor. Change the input conditions.

• If this occurs when using the lmf code, it may be that convergence parameters are too loose. In particular the PMT basis can produce nearly singular overlap matrices when both the LMTO and APW parts of the basis are sizeable. This is because the two span nearly the same Hilbert space (this is the primary drawback of the method).

• This can also happen with lmf if your restart file has a valence-core partitioning inconsistent with the ctrl file, also explained here.

Solutions:

• Tighten the Ewald tolerance (EWALD_TOL) and the tolerance in the plane-wave expansion of envelope functions (HAM_TOL).
• Reduce HAM_PWEMAX.
• Remove orbitals from the LMTO basis, or set EH more negative.
• Set HAM_OVEPS to a small number, e.g. 1e-6.
• Remove rst.ext.
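As an illustration, the HAM_OVEPS setting would appear in the input file as (value illustrative):

```
HAM  OVEPS=1e-6
```

With OVEPS set, combinations of basis functions whose overlap eigenvalues fall below the given threshold are projected out, removing the near-singularity from the overlap matrix.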

#### G1 Error message ecore>evalence encountered

hsfp0 may produce this error:

 ---- hsfp0 ixc=3: ecore>evalence
---- ERROR EXIT!


This means that some core state is higher than the lowest valence state.

If you look near the bottom of the output you should find a message similar to the following:

 hsfp0 core level ecore(  4,1) =  -2.3631 lies above bottom of valence band = -21.4956


It says that core level #4, spin 1 is the offender. Look at the core table in the GWinput file. In this instance it reads

  atom   l    n  occ unocc   ForX0 ForSxc :CoreState(1=yes, 0=no)
     1   0    1    0    0      0    0    ! 1S *
     1   0    2    0    0      0    0    ! 2S
     1   0    3    0    0      0    0    ! 3S
     1   0    4    1    0      0    0    ! 4S


The fourth core level is the 4s state (the atom is Sr).

To fix this problem, include the 4s in the valence as a local orbital. (You will also have to include the 4p state, since it is higher than the 4s.) If you are using a basp file, do something like

 Sr RSMH= ... PZ= 14.9338 14.9148


lmfa should find this state automatically if HAM_AUTOBAS_LOC is set.