Making gmsnbo.i8.a and NBO6.i8.exe

Post by david » Mon Feb 02, 2015 10:20 pm

Hi
I compiled and linked GAMESS (1 May 2013) without any errors and ran it with rungms. I used Intel Cluster Edition 2015, with ifort and Intel MPI.
I configured NBO6 in make.config, setting ifort as the Fortran compiler and both LAPACK and BLAS to false.
I can build the library (gmsnbo.i8.a) and the executable (NBO6.i8.exe), and both apparently link to GAMESS. However, when I submit an NBO NEDA calculation, GAMESS calls them, but it stops when it tries to run the SCF after the deletion. Although the "top" command shows gamess.01.x running, the code does not print any DFT energy.
Can anybody help?
david
 
Posts: 15
Joined: Mon Feb 02, 2015 10:04 pm

Re: Making gmsnbo.i8.a and NBO6.i8.exe

Post by ericg » Tue Feb 03, 2015 5:31 pm

Does the NBO program (without NEDA) execute appropriately with GAMESS? Also, when you attempt the NEDA calculation, do you request a parallel or a sequential calculation?

It might be helpful if you would include the last few lines of the NBO output from your calculation.

Eric
ericg
 
Posts: 270
Joined: Sat Dec 29, 2012 9:31 am

Re: Making gmsnbo.i8.a and NBO6.i8.exe

Post by david » Wed Feb 04, 2015 1:52 am

Dear Eric,
Thanks for your reply. Here is some more information:
I installed Intel Composer XE Cluster Edition 2015, with ifort version 15, icc version 15, and Intel MPI version 5.
I compiled GAMESS after changing GMS_IFORT_VERNO to 12, because the same compiler options are used for both version 12 and version 15.
The DDI layer in GAMESS uses gcc.

For the GAMESS 1 May 2013 build, I used the following settings:
============================================
setenv GMS_PATH /home/gamess_nbo
setenv GMS_BUILD_DIR /home/gamess_nbo
# machine type
setenv GMS_TARGET linux64
# FORTRAN compiler setup
setenv GMS_FORTRAN ifort
setenv GMS_IFORT_VERNO 12
# mathematical library setup
setenv GMS_MATHLIB mkl
setenv GMS_MATHLIB_PATH /usr/mkl/lib/intel64
setenv GMS_MKL_VERNO 12
# parallel message passing model setup
setenv GMS_DDI_COMM mpi
setenv GMS_MPI_LIB impi
setenv GMS_MPI_PATH /usr/impi/5.0.1.035
# LIBCCHEM CPU/GPU code interface
setenv GMS_LIBCCHEM false
=============================================
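For reference, a stock GAMESS source tree configured with the settings above is typically built with the distribution's own scripts. The sequence below is a sketch only; the step names come from a standard GAMESS distribution, and the paths assume the directories shown in the settings:

```shell
# Hypothetical build sequence for GAMESS with the setenv values above.
# Step names are from a stock GAMESS distribution; paths are assumptions.
cd /home/gamess_nbo
./config                 # interactive setup; answers mirror the GMS_* values
cd ddi && ./compddi && cd ..   # build the DDI layer (here: MPI, compiled with gcc)
./compall                # compile all GAMESS source files
./lked gamess 01         # link the executable gamess.01.x
```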

For NBO I used the following parameters to build gmsnbo.i8.a and nbo6.i8.exe:
=============================================
FC=ifort
CC=gcc
INT=8
os=linux
NBODIR = /home/gamess_nbo/nbo6
LAPACK = false
BLAS = false
STATIC = true
FTNCHEK = false
FLUSH=false
FAST = true
TIME =false
==============================================
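With that make.config in place, the build itself is a make in the NBO directory. A sketch, assuming the directory layout from the settings above (the bin/ output path is an assumption, not from this thread):

```shell
# Hypothetical: build NBO6 using the make.config shown above, then
# verify that the two artifacts the GAMESS interface needs exist.
cd /home/gamess_nbo/nbo6
make
ls -l bin/gmsnbo.i8.a bin/nbo6.i8.exe   # output location is an assumption
```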
For NBO6 I didn't use MKL, because I couldn't build it against either the static or the dynamic MKL libraries.
I suspect MKL is not required here, since GAMESS itself links the static MKL libraries (I am not sure).
When I delete the NEDA section, everything is fine, even for 36 cores distributed over three SMP systems, and the program terminates normally.
But when I use NEDA:

$nbo plot $END
$del
neda end
$end

GAMESS calls NBO6 and prints output normally until it reaches the SCF step.
Here is the final section of the output:

=========================================================
.
.
.
NEXT STEP: Perform one SCF cycle to evaluate the energy of the new density
matrix constructed from the deleted NBO Fock matrix.

------------------------------------------------------------------------------


--------------------------
R-B3LYP SCF CALCULATION
--------------------------
DENSITY MATRIX CONVERGENCE THRESHOLD= 2.00E-05
COARSE -> FINE DFT GRID SWITCH THRESHOLD= 3.00E-04 (SWITCH IN $DFT)
HF -> DFT SWITCH THRESHOLD= 0.00E+00 (SWOFF IN $DFT)

DIRECT SCF CALCULATION, SCHWRZ=T FDIFF=T, DIRTHR= 0.00E+00 NITDIR=10

NONZERO BLOCKS
ITER EX DEM TOTAL ENERGY E CHANGE DENSITY CHANGE DIIS ERROR INTEGRALS SKIPPED
==========================================================

The program stops at the last line, "ITER EX DEM TOTAL ENERGY...".
I guess the absence of BLAS and LAPACK in the NBO6 build may be the reason (I am not sure).

For a sequential run, when I use only one core, the following error appears:
Error: Expecting an even number of MPI processes (cp:ds::1:1).
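The cp:ds::1:1 message reflects DDI's process model under MPI: each compute process (cp) is paired with a data server (ds), so the MPI job must always be launched with twice the number of compute cores, and the rank count is therefore always even. A minimal sketch of the arithmetic (variable names are illustrative, not taken from rungms):

```shell
# DDI over MPI pairs every compute process with a data server,
# so the launched MPI rank count is 2 * NCORES and is always even.
NCORES=1                  # a "sequential" run: one compute core
NRANKS=$((2 * NCORES))    # ranks that mpirun must actually start
echo "$NRANKS"            # prints 2
```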

When I compile GAMESS with sockets instead of Intel MPI, everything is ok for a sequential run (one CPU).
But when I run the sockets build in parallel, the failure is the same as with Intel MPI.
david
 

Re: Making gmsnbo.i8.a and NBO6.i8.exe

Post by ericg » Wed Feb 04, 2015 7:54 am

You write: "everything is ok for sequential run (one cpu)".

Does this include NEDA? Does NEDA complete successfully for sequential runs?

The issue certainly isn't BLAS or LAPACK.

Eric
ericg
 

Re: Making gmsnbo.i8.a and NBO6.i8.exe

Post by david » Wed Feb 04, 2015 10:14 am

Yes, for a sequential run (sockets only), NEDA works, and all parts of the binding interactions are printed out. With Intel MPI, I couldn't run the code sequentially (I reported the error in my previous post).
I remember that with version 5G, we could only run the NBO program in sequential mode.
david
 

Re: Making gmsnbo.i8.a and NBO6.i8.exe

Post by ericg » Wed Feb 04, 2015 11:38 am

I doubt whether I can help you here.

NEDA was originally implemented for sequential GAMESS calculations. Mike Schmidt modified the GAMESS/NBO interface about two years ago so that NEDA could be performed with NBO6 in parallel calculations. My only experience has been with sockets. You might ask Mike for some advice here.

Eric
ericg
 

