We have gathered some information from computational chemistry forums, which may be helpful for you. It should be noted that the following text should not be considered a professional guide; such guides will possibly appear in the future.
1. The choice of a DFT functional
First of all, you should read this manual:
One of the theses in this book is that GGA functionals are usually more universal (let’s say, closer to “ab initio”) than hybrid functionals (this statement, however, has some weak points). This also means that the errors of these functionals are more systematic: for example, the PBE functional usually overestimates bond lengths and underestimates vibrational frequencies. In our opinion, if you choose between, e.g., the PBE and B3LYP functionals, you should note that the latter should be more accurate for most organic molecules but less accurate in some problematic cases; so, the PBE functional is more reliable. Because of that, on one forum we found the following advice (written in 2010): always use the PBE functional and don’t worry. It is a GGA, so it must be more universal than, e.g., the B3LYP functional.
It is written in this manual that the B3LYP functional has shown good results for organic molecules, but it is worse for transition metal compounds and for large molecules. TPSSh is probably a good functional for transition metal compounds (according to this manual).
The B3LYP functional is commonly used in chemistry, while the PBE and PBE0 functionals are commonly used in applications to extended systems (materials) [13].
We highly recommend reading this paper:
Markus Bursch, Jan-Michael Mewes, Andreas Hansen, Stefan Grimme. Best Practice DFT Protocols for Basic Molecular Computational Chemistry. DOI: 10.26434/chemrxiv-2022-n304h
Here is another compilation on the subject:
Here is a
screenshot from this paper:
It is not clear from this list whether the dispersion correction should always be used. However, on other forums we found advice to always use the dispersion correction if possible. In Ref. [1] you can see that ωB97X-D is the best single-component functional, while PBE0-D3 performs almost as well. Besides that, on the CCL list one can read that B3LYP-D3 is usually better than B3LYP.
The dispersion correction accounts for the attraction between induced dipoles (London dispersion). This interaction becomes important when, for example, two parallel benzene rings interact (stacking). So, the dispersion correction is important for computing such molecules as tetraphenylporphyrin, bilirubin, etc.; a sample input requesting it is sketched below.
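Here is a minimal sketch of how Grimme’s D3 correction can be requested in a Gaussian input (as far as we know, the EmpiricalDispersion=GD3 keyword is available in later Gaussian 09 revisions and in Gaussian 16; the filename and molecule are placeholders; note that ωB97X-D already includes its own dispersion term, so no extra keyword is needed with it):
%CHK=stack.chk
#P B3LYP/6-311G(d,p) EmpiricalDispersion=GD3 OPT

benzene dimer, stacked (placeholder title)

0,1
...coordinates...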
Here is a list of favorable and unfavorable DFT functionals from the DFT 2015 poll for computing particular properties:
With all due respect to the creators of the above list (the computational chemistry community), we must mention that we tried to compute the properties of the bilirubin molecule (which has intramolecular H-bonds) using the PBE, B3LYP and wB97XD functionals, and we found that the PBE functional is the worst at describing these H-bonds (the PMR spectra computed using the PBE/6-311G(D,P) method are in poorer agreement with the experimental ones than the PMR spectra computed using the B3LYP/6-311G(D,P) or wB97XD/6-311G(D,P) methods). So, we found that the PBE functional is not good at describing H-bonds, in contrast to the conclusions drawn above. So, we think that you should not fully trust these tables.
Another
post from CCL states the following:
- Recommended GGA methods: revPBE-D3, B97-D3
- Recommended meta-GGA methods: oTPSS-D3,
TPSS-D3
- Hybrid functionals: PW6B95-D3, M062X-D3
- Double-hybrids are the most accurate DFT
methods on the market: DSD-BLYP-D3, DSD-PBEP86-D3, PWPB95-D3
In Ref. [2], a thorough energy benchmark
study of various density functionals (DFs) was carried out. The authors write:
“In
summary, we recommend on the GGA level the B97-D3 and revPBE-D3 functionals.
The best meta-GGA is oTPSS-D3 although meta-GGAs represent in general no clear
improvement compared to numerically simpler GGAs. Notably, the widely used
B3LYP functional performs worse than the average of all tested hybrids and is
also very sensitive to the application of dispersion corrections.”
“The
ωB97X-D functional seems to be a promising method. The most robust hybrid is
Zhao and Truhlar's PW6B95 functional in combination with DFT-D3”.
“If higher
accuracy is required, double-hybrids should be applied. The corresponding
DSD-BLYP-D3 and PWPB95-D3 variants are the most accurate and robust functionals
of the entire study.”
The tests in this paper were performed on the GMTKN30 set; this set covers mainly molecules containing main-group elements, mostly organic (link).
So, the double-hybrids seem to be the best DFT methods at the moment. This is illustrated by the following chart from the aforementioned paper:
Another
advantage of PBE is that this functional is “cheap”.
Note that the PBE and PBE0 methods are quite different: PBE is a GGA, while PBE0 is a hybrid method. However, if one compares, e.g., the BP86, BLYP, and BPW91 functionals (GGAs) with PBE0, one finds that PBE0 is "less semi-empirical".
Here is another comparison of DFT functionals. In Ref. [3], a few DFT functionals were benchmarked on 14 compounds (calculation of vertical excitation energies by TDDFT and comparison to experiment). Here are two pictures from this paper:
Ref. [4] reports that the CAM-B3LYP and BHandH functionals yield the best agreement between computed and experimental vertical absorption energies for a set of simple organic molecules (involving first- and second-row atoms).
We have performed some benchmark NMR computations with different functionals. The 1H NMR spectra of 26 simple organic molecules (not containing internal hydrogen bonds) were computed at the PCM wB97XD/6-31G(D,P), PCM B3LYP/6-31G(D,P), PCM B3LYP-D3/6-31G(D,P), PCM B3LYP-D3/aug-cc-pVTZ and some other levels, and the following conclusions were made:
1) The methods PCM wB97XD/6-31G(D,P) and PCM B3LYP-D3/6-31G(D,P) yield very similar standard deviations (SD) from experiment, 0.1411 ppm and 0.13005 ppm, respectively (note that the signals of protons not attached to carbon do not fit into the common correlation). This, however, does not mean that these two functionals produce similar results (the correlation of the values computed by them has an SD of 0.06849 ppm);
2) Switching from PCM B3LYP-D3/6-31G(D,P) to PCM B3LYP-D3/aug-cc-pVTZ does not improve the agreement with the experimental data: the SD is 0.13005 ppm for the former and 0.13539 ppm for the latter. It is rather strange to us that enlarging the basis set does not improve the agreement with experiment; perhaps the main source of disagreement is experimental error, or some fundamental problem of the NMR computation algorithms.
Note that we have performed some benchmark IR spectra computations (mentioned in a previous post in this blog), and we found that switching from the wB97XD/6-31G(D,P) to the wB97XD/aug-cc-pVTZ method improves the agreement with experiment by approximately a factor of 1.2;
3) It is of no real importance whether one performs a full geometry optimization at the PCM B3LYP-D3/aug-cc-pVTZ level of theory, or just performs the geometry optimization at the PCM B3LYP-D3/6-31G(D,P) level and then does a single point with the aug-cc-pVTZ basis set. The NMR shift values obtained by these two approaches correlate with SD = 0.01442 ppm (a sketch of such a two-step input is given after this list);
4) Taking into account solvation effects with the PCM model improves the agreement with experiment at an almost negligible increase in computational cost;
5) Switching from PCM B3LYP/6-31G(D,P) to PCM B3LYP-D3/6-31G(D,P) improves the agreement with experiment by only 0.4% (the SD changes from 0.13056 to 0.13005 ppm). This is because only such molecules as phenol, biphenyl, anthracene, hexane, etc. were computed. If we had computed such a system as the benzene dimer, the dispersion correction would have become important. Besides that, for our molecules the energies differ rather significantly between the B3LYP and B3LYP-D3 methods.
6) Two NMR computation schemes, GIAO and CSGT, produce almost identical results (the SD between them is 0.016 ppm); the scheme is selected in the input sketched below.
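For illustration, here is a minimal sketch of the two-step protocol from point 3 as a Gaussian input, with the NMR scheme from point 6 selected explicitly (the filename, solvent, and molecule are placeholders; we assume a Gaussian revision in which EmpiricalDispersion=GD3 is available; GIAO is the default NMR scheme, and NMR=CSGT would request the alternative):
%CHK=mol.chk
#P B3LYP/6-31G(d,p) EmpiricalDispersion=GD3 SCRF=(PCM,Solvent=Chloroform) OPT

geometry optimization with the small basis set (placeholder title)

0,1
...coordinates...

--Link1--
%CHK=mol.chk
#P B3LYP/aug-cc-pVTZ EmpiricalDispersion=GD3 SCRF=(PCM,Solvent=Chloroform) GUESS(READ) GEOM(ALLCHECK) NMR=GIAO
The second job reads the optimized geometry from the checkpoint file and computes the NMR shieldings with the larger basis set.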
The combination of all this advice can confuse an inexperienced user. As for us, we decided to use PBE-D3 for inorganic molecules and ωB97X-D or B3LYP-D3 for organic ones, since we deal with the Gaussian09A package. Such advice should be useful only for “amateurs” who are unable to gather more information.
Anyway, it is better to use several functionals to ensure that they produce similar results. MP2 should also not be forgotten (SCS-MP2 seems to be better than conventional MP2, as written in the paper above; as far as we know, SOS-MP2 is better too).
Recently, the B3LYP/6-31G(D,P) method has been quite popular. We think that using this method for computing organic molecules (not containing d and f elements) is still rather adequate, but snobs can interpret the use of this method as a sign of amateurishness (at least, if you don’t employ different functionals and/or basis sets in the same study). See, for example, this and this post on the CCL list.
The flaws of this famous B3LYP/6-31G* model
chemistry are discussed in Ref. [5]:
The authors write that the relatively good performance of B3LYP/6-31G*, which made it so popular, is caused by hidden error cancellation. The B3LYP-gCP-D3/6-31G* method, according to the authors, is much better (it removes the two major deficiencies: missing London dispersion effects and basis set superposition error). The B3LYP-D3/6-31G* method is slightly worse, as it does not provide BSSE elimination. This picture illustrates the aforesaid:
As far as we know, the density fitting / RI (resolution of the identity) approximation is usually a good thing, as it speeds up your calculations without significant loss of accuracy (at least, this is what is written in the ORCA manual). However, in some cases it can lead to bad SCF convergence or give an error of 1-2 kcal/mol in energies.
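As far as we know, Gaussian can also apply density fitting for pure (non-hybrid) functionals; here is a minimal sketch using an automatically generated fitting basis (the /Auto suffix; treat this syntax as our assumption and check the manual of your program version):
#P BLYP/6-311G(d,p)/Auto OPT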
Here is a picture from Ref. [13] illustrating the availability of DFT functionals:
2. The choice of basis set
As far as we know, at the moment the optimal basis sets for high-accuracy computations are the Dunning family sets: cc-pVnZ, aug-cc-pVnZ, cc-pCVnZ, cc-pwCVnZ (n = 2, 3, 4, 5, etc.). These basis sets are correlation consistent; this means that they were optimized using correlated methods, unlike the 6-31G** basis sets. In Ref. [6] the following is stated:
"One of the primary reasons for the cc basis set family’s lasting
popularity is due to a series of empirical observations that as
the cardinal number (n in cc-pVnZ) of the basis set is increased,
energies and various properties converge smoothly toward the
complete basis set (CBS) limit."
"One of the primary reasons for the cc basis set family’s lasting
popularity is due to a series of empirical observations that as
the cardinal number (n in cc-pVnZ) of the basis set is increased,
energies and various properties converge smoothly toward the
complete basis set (CBS) limit."
The so-called complete basis set (CBS) limit means that you compute first with cc-pVDZ, then cc-pVTZ, then cc-pVQZ, then cc-pV5Z, etc., and the energy should converge to a hypothetical “complete” basis set limit. At the same time, there are more than 10 extrapolation schemes which give nearly the same result after performing only 2-3 computations (however, these extrapolation schemes are empirical to some extent); one common scheme is sketched below.
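For example, one widely used two-point scheme (due to Helgaker and co-workers) assumes that the correlation energy converges as the inverse cube of the cardinal number n, which gives

\[
E_{\mathrm{corr}}^{\mathrm{CBS}} \approx \frac{n^{3}\,E_{\mathrm{corr}}^{(n)} - (n-1)^{3}\,E_{\mathrm{corr}}^{(n-1)}}{n^{3} - (n-1)^{3}},
\]

where \(E_{\mathrm{corr}}^{(n)}\) is the correlation energy obtained with the cc-pVnZ basis set; the Hartree-Fock part is usually extrapolated separately (often with an exponential form).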
For heavy elements (Z > 29), relativistic effects are strong and must be taken into account, either using methods like ZORA or DKH, or using effective core potentials (ECPs, PPs); a sample ECP input is sketched below. The main relativistic effects include relativistic contraction and spin-orbit interaction. For many tasks, even such elements as Fe, Co, Ni do not require including relativistic effects in the computation (you will have plenty of problems besides relativity with these atoms).
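For illustration, here is a minimal sketch of a Gaussian route using the Stuttgart/Dresden ECPs via the SDD keyword (the molecule and filename are placeholder examples; for systems mixing light and heavy elements, basis sets are often specified per element with Gen/GenECP instead):
%CHK=PhI.chk
#P B3LYP/SDD OPT

iodobenzene (placeholder title)

0,1
...coordinates...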
Ref. [6] provides an overview of the development of Gaussian basis sets for molecular calculations, with a focus on four popular families of modern bases ("Gaussian basis set" means any basis set built from Gaussian (not Slater) functions, not a specific set for the GAUSSIAN program). The authors write about the cases when using ECPs is not advisable (in particular, electron paramagnetic resonance), and it is written that using the DFT-based ZORA or DKH models with segmented all-electron relativistically contracted (SARC) basis sets produces good agreement with experiment and higher-level ab initio computations.
One interesting point is mentioned in Ref. [7]: the authors report that computations with the 6-311++G** basis set gave better molecular geometries than the more costly aug-cc-pVDZ (the methods used were MP2 and CCSD). In addition, the smaller 6-311++G** invariably leads to lower calculated total energies than aug-cc-pVDZ. So, it seems that aug-cc-pVDZ can be worse than the 6-311++G** set (nevertheless, we suppose that if you need an expensive basis set or a CBS (complete basis set) extrapolation, you should use cc-pVTZ, cc-pVQZ, cc-pV5Z, etc.).
Some people say that there is no point in using basis sets larger than cc-pVTZ with DFT. However, in Ref. [14] the authors performed energy computations on 211 small first- and second-row compounds (mostly organic), and they concluded that the 5Z basis set (aug-cc-pV5Z) is required to get the MAE of atomization energies below 1 kcal/mol. See this blog for more information.
The same is written in this handbook, “Practical Advice for Quantum Chemistry Computations”:
For some small organic molecules, we have found that the basis sets 6-31++G(D,P) and aug-cc-pVDZ give almost identical results (protonation energies of 16 amide-containing molecules computed with the wB97XD/6-31++G(D,P) and wB97XD/aug-cc-pVDZ methods correlate with R = 0.99966; this difference is almost negligible for our applied tasks). In contrast to the results reported in the aforementioned paper, the total energies computed with the wB97XD/aug-cc-pVDZ method are 3-30 kJ/mol lower than the energies computed with wB97XD/6-31++G(D,P). At the same time, with the aug-cc-pVDZ basis set the computation time was 3-6 times longer than with the 6-31++G(D,P) basis set. So, the 6-31++G** basis set should still be considered good enough.
It is usually considered that the computation of anions or strongly electronegative atoms (which carry a large negative Mulliken charge) requires the use of diffuse functions (“++” for 6-31G or “aug-” for cc-pVnZ). However, in Ref. [8] this conclusion is criticized to a significant extent. The authors write:
“We conclude that the use of diffuse functions for calculating geometrical parameters for PAH anions in general is unnecessary and does not improve the calculated results significantly. Energy calculations are affected in much the same way.”
As the authors write, the only case when diffuse functions are important is the computation of absolute values of chemical shifts; however, in most cases, when experimental data are available, it is not necessary to obtain absolute values, as correlations between the computed and experimental values can be built instead.
On the other hand, D. Truhlar, who investigated the use of diffuse functions, writes here:
"How should one add diffuse functions to the basis set? Diffuse functions are known to be critical in describing the electron distribution of anions (as discussed in my book), but they are also quite important in describing weak interactions, like hydrogen bonds, and can be critical in evaluating activation barriers and other properties."
The Truhlar group recommends using the "jun-" basis sets (see below).
One more source of information is the review "Basis sets in quantum chemistry" by C. David Sherrill. The author writes in this review about the diffuse functions:
Our knowledge of the subject and our personal experience say that diffuse functions indeed should be used when calculating anions. We have computed the deprotonation energies of 12 carbon acids (with the PCM solvation model), both with diffuse functions and without them (the wB97XD/6-31++G(D,P) and wB97XD/6-31G(D,P) methods), and the values calculated by the first method correlate much better with experimental pKa values than the values computed without diffuse functions (the correlation coefficients R are 0.99522 for wB97XD/6-31++G(D,P) and 0.98884 for wB97XD/6-31G(D,P), respectively). A sketch of an anion input is given below.
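For illustration, here is a minimal sketch of the anion job with diffuse functions and PCM (the solvent, filename, and molecule are placeholder assumptions; note the charge of -1 on the charge/multiplicity line):
%CHK=anion.chk
#P wB97XD/6-31++G(d,p) SCRF=(PCM,Solvent=Water) OPT

deprotonated carbon acid (placeholder title)

-1,1
...coordinates...
The deprotonation energy is then obtained as the difference between the energies of the anion and the neutral acid.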
Some recommendations concerning the choice of basis sets can be found in the ORCA input library:
These recommendations are:
- Rule of thumb: Energies and geometries are usually fairly converged at the DFT level when using a balanced polarized triple-zeta basis set (such as def2-TZVP), while MP2 and other post-HF methods converge more slowly with respect to the basis set. Ab initio methods are much more basis-set sensitive than DFT methods.
- Stick with one family of basis sets that is
available for all the elements of your system. Mixing and matching basis sets
from different families can lead to problems.
- Calculations on heavy elements can either be
performed using an all-electron approach or effective core potentials (ECPs).
Here is a picture from the Orca input library:
So, it
seems that diffuse functions are really important for computing electron affinities.
As far as we know, usually there is no need to use a basis set larger than cc-pVTZ with DFT: further increasing the basis set size will not improve the accuracy of the computation. This is not true for ab initio computations, which will benefit from using larger basis sets, such as cc-pVQZ, cc-pV5Z, etc.
Some papers, in which the results of DFT computations are compared to those of ab initio methods and to experimental data, conclude that DFT performs no worse (or even slightly better) [10, 11, 12]. This is caused by the modest basis sets (not larger than cc-pVTZ) employed in these papers.
So, the choice between DFT or ab initio
methods depends on which properties are calculated and what accuracy is
required.
The larger the basis set, the more difficult the SCF convergence (especially if diffuse-augmented basis sets are used). We recommend always specifying SCF=XQC in Gaussian input files; a sample route line is shown below. With this keyword, the SCF iterations first use the default DIIS algorithm, and if convergence is not achieved, Gaussian switches to the more reliable but costlier quadratically convergent SCF procedure.
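A minimal sketch of a route line with this keyword (the rest of the input is as usual; the method and basis set are placeholders):
#P wB97XD/6-31++G(d,p) SCF=XQC OPT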
Ref. [9] describes the role of diffuse functions in computations. It is known that for many tasks the use of diffuse functions will not lead to a significant increase in computational accuracy, but will increase the cost of the calculation; besides that, using diffuse functions can lead to SCF convergence problems and can increase the basis set superposition error (BSSE). The authors write: “We conclude that much current practice includes more diffuse functions than are needed. Often, better accuracy could be achieved if the additional cost were invested in a higher-ζ basis set or more polarization functions.”
The popular cc-pVnZ basis set family (of Dunning and co-workers) includes diffuse functions when the “aug-” prefix is used. The authors notice that chemists usually utilize “fully augmented” basis sets, and this may not be optimal for large molecules. For example, the cc-pVTZ basis set for methane has s, p, d, and f functions on C and s, p, and d functions on H; aug-cc-pVTZ contains diffuse s, p, d, and f functions on C and diffuse s, p, and d functions on H atoms.
In contrast, the earlier “plus” basis sets
originally systematized by Pople and co-workers contained only diffuse s and p functions
on non-hydrogen atoms and no diffuse functions on hydrogen atoms. In Ref. [9]
this is called “minimal augmentation”. The maug-cc-pVTZ basis set retains the
diffuse s and p functions on carbon with the exponential parameters optimized
for the aug case but deletes all other diffuse functions.
So, the authors (Truhlar et al.) conclude that minimal augmentation is usually preferable to full augmentation (particularly with DFT). The authors recommend the so-called “calendar” basis sets, in particular the “jun” level of augmentation; for example, the jun-cc-pVTZ set is recommended over aug-cc-pVDZ or cc-pVTZ. When the zeta number in Dunning basis sets is increased (i.e., switching from cc-pVDZ to cc-pVTZ, then to cc-pVQZ, etc.), augmentation becomes less important, and the “calendar” basis sets provide a more efficient sequence of basis sets (than unaugmented, minimally augmented, or fully augmented sets) for extrapolation to the complete basis set limit. We know, however, that many researchers have criticized the approach proposed by the authors.
3. DFT quackery
Anyway,
density functional theory is a “black box”. Look at this picture from Ref. [13]:
Our comment
on this picture:
First and second points: In contrast to ab initio methods, DFT is not hierarchical. Ab initio (non-empirical) methods are hierarchical: this means that if we increase the basis set size, the level of accounting for electron correlation (excitation rank), and possibly the level of accounting for relativistic effects (for heavy elements), we approach the exact solution (within the Born-Oppenheimer approximation). More specifically, if we go, e.g., through CCSD/cc-pVDZ -> CCSDT/cc-pVDZ -> CCSDTQ/cc-pVDZ -> CCSDTQ5/cc-pVDZ, etc., the results of the computation systematically approach some limit; if we go through CCSD/cc-pVDZ -> CCSD/cc-pVTZ -> CCSD/cc-pVQZ -> CCSD/cc-pV5Z -> CCSD/cc-pV6Z, etc., the results systematically approach the complete basis set (CBS) limit. For the first sequence, the improvement can be non-monotonic, while for the second, the improvement seems to always be monotonic.
So, we can verify the accuracy of an ab initio method by comparing its results with the results of a higher level computation. For DFT, this possibility is much less available.
The points mentioned below are mostly our private opinion and may not be fully correct.
As far as we know, DFT is often used to “confirm” an experiment. This means that if the experiment and a DFT computation lead to similar conclusions, this increases the reliability of the investigation. On the contrary, if the experiment and the DFT calculation give different results, this can be either a discovery or a failure (an inaccuracy of the computation, or maybe of the experiment).
Speaking of “confirming” an experiment, it
should be noted that this approach is only good with an independent experiment.
We know some cases when the experiment was “adjusted” for better agreement with
the computation (both at DFT and ab initio levels).
As mentioned above, it is good practice to perform the computation with several different DFT functionals, to ensure that they all give the same results. And as far as we know, some researchers, being not honest enough, deliberately avoid using more than one functional, because if different functionals gave contradictory results in their work, this would make the whole work less “publishable”.
Here you can read an ironic essay, “Obituary: Density Functional Theory. 1927-1993”:
The author claims that density functional theory in its current implementation is not a mathematically correct approach:
“The Hohenberg-Kohn argument is what mathematicians call an existence proof, as opposed to a constructive proof. That is, although we now know that, in theory, DFT can extract as much information from ρ(r) as her brother can from Ψ(r1, r2, ..., rn), no-one knew how to dress her so that she could achieve this in practice. All quantum mechanical theories are created equal, but some are more equal than others.”
The hybrid functionals, which appeared in 1993, are even more unreliable and not correct from the theoretical point of view; in other words, using such functionals may be a kind of “shamanism”, or maybe even “scientific charlatanism”. The author thinks that density functional theory finally died (we should add: died as a well-grounded scientific theory) in 1993, with the spread of hybrid functionals.
On the other hand, in Ref. [13] the author states the following:
"I believe that a fundamental principle underlies the success
of DFT, which is that local approximations are a peculiar type of semiclassical approximation to the many-electron problem. For the last 6 years, with both my group and many collaborators, I have been trying to uncover this connection, and make use of it. The underlying math is very challenging, and some must be invented."
The "DFT shamanism" can exist in the following form: if different functionals are applied to the same object, the user may select any results consistent with experimental data (even the latter are invalid or erroneous) and explain them. We suggest calling such practive "DFT quackery".
In Ref. [13] the following is proposed: "Users should stick to the standard functionals (as most do, according to Fig. 1), or explain very carefully why not."
4. DFT future
Here is a picture from Ref. [13]:
A fragment of the paper [13]:
"XII. THE FUTURE
So, where does this leave us? It is clearly both the best and worst of times for DFT. More calculations, both good and bad, are being performed than ever. One of the most frequently asked questions of developers of traditional approaches to electronic structure is: “When will DFT go away?.” Judging from Fig. 1, the answer is clearly no time soon. Although based on exact theorems, as shown in Fig. 2, these theorems give no simple prescription for constructing approximations. This leads to the many frustrations of the now manifold users listed in Table I.Without such guidance, the swarm of available approximations of Fig. 3 will continue to evolve and reproduce, perhaps ultimately undermining the entire field. But I expect that some of the many excellent ideas being developed by the DFT community will come to fruition, i.e., produce new and more general standard approximations, well before that happens."
References
[3] S. S. Leang, F. Zahariev, M. S. Gordon, J. Chem. Phys. 136, 104101 (2012).
[4] G. García, C. Adamo, I. Ciofini, Phys. Chem. Chem. Phys. 15, 20210-20219 (2013).
[8] N. Treitel, R. Shenhar, I. Aprahamian, T. Sheradsky, M. Rabinovitz. Calculations of PAH anions: When are diffuse functions necessary? Phys. Chem. Chem. Phys. 6, 1113-1121 (2004).
[10] Do Practical Standard Coupled Cluster Calculations Agree Better than Kohn–Sham Calculations with Currently Available Functionals When Compared to the Best Available Experimental Da... J. Chem. Theory Comput., May 2015. DOI: 10.1021/acs.jctc.5b00081.
[11] A. Irigoras, J. M. Ugalde, X. Lopez, C. Sarasola. On the dissociation energy of Ti(OH2)+: An MCSCF, CCSD(T), and DFT study. Can. J. Chem. 74, 1824-1829 (1996).
[13] K. Burke. Perspective on density functional theory. J. Chem. Phys. 136, 150901 (2012).
[14] S. R. Jensen, S. Saha, J. A. Flores-Livas, W. Huhn, V. Blum, S. Goedecker, L. Frediani. "GGA-PBE and hybrid-PBE0 energies and dipole moments with MRChem, FHI-aims, NWChem and ELK" (2017). doi:10.18710/0EM0EL, UiT Open Research Data Dataverse, V3.
1)
Did you ask for permission to reproduce these figures, tables, and data?
2)
Second of all, please supply proper references before you smear density functionals for one (!!) particular molecule: https://en.m.wikipedia.org/wiki/Bilirubin
3)
I would be surprised to see this result; what methodology was used (geometry optimization, molecular dynamics simulations, conformational sampling, etc.), and what was used as reference data?
4)
Finally, please read the whole document, not just one table, and you will see that the "creator of the list" is the computational chemistry community, i.e. your peers and good colleagues.
5)
Sincerely, the main organizer of "the list"
Sorry for a late reply.
1) No. I suppose my way is acceptable, because I show only a small part of the information from the articles, and the readers need to find the originals for more details. My posts advertise these articles.
2) I have not yet published my computations of bilirubin, so I cannot cite this work.
3) I can show you a sample Gaussian input file (coordinates removed):
%CHK=BR.chk
#P wB97XD/6-311G(D,P) OPT

BR

0,1
6 4.890395000 0.501244000 1.264059000
6 5.342501000 -0.580110000 2.155534000

--Link1--
%CHK=BR.chk
#P wB97XD/6-311G(d,p) GUESS(READ) GEOM(ALLCHECK) NMR
gfinput
4) OK, I replaced the text with the following: “With all due respect to the creators of the above list (computational chemistry community), ...”
Really nice text comparing and contrasting methodology. Visualizing the results gives us a clear view of the functional differences. I'll have a look at the bibliography cited later for more information.
Sorry, I don't appreciate comments without any suggestions or criticism.
You are not the first person from whom I hear criticism of Swart's poll. I think that in such a matter as the choice of a functional, the opinion of experts should be taken into account, but you should not be guided by it alone. Otherwise, some kind of circular error is obtained.
And about "the enlargement of the basis set does not lead to the improvement of the agreement with experiment": this always puzzles me. I had the same feeling, for example, when reading the works of Kendall Houk. He writes that when predicting Diels-Alder reaction barriers, it is sufficient to use a small basis set. And if you look at the numbers, the big basis set often spoils the agreement with the experimental activation energy. And this is not the only example.