This is the resource listing page for [http://www.mcs.anl.gov/~mmin/nekcem.html NekCEM].

==Resource Links==
*[https://wiki.mcs.anl.gov/nekcem/index.php/Main_Page/PIO Parallel I/O of NekCEM]
*[https://wiki.mcs.anl.gov/nekcem/index.php/Main_Page/hybrid_prog Hybrid programming (proposed work)]
*[https://wiki.mcs.anl.gov/nekcem/index.php/Main_Page/aio Sync/Async Blocking/non-blocking I/O]
*[https://wiki.mcs.anl.gov/nekcem/index.php/Main_Page/bgQ Blue Gene/Q and Mira]
*[https://wiki.mcs.anl.gov/nekcem/index.php/Main_Page/C_Fortran C/Fortran mixed programming]
*[https://wiki.mcs.anl.gov/nekcem/index.php/Main_Page/Meeting_Notes Meeting Notes]
== Implementation ==

=== I/O code ===
* I/O is initiated from the cem_out function in cem_dg.F (and cem_dg2.F).
* The parallel I/O routines are implemented in vtkbin.c and rbIO_nekcem.c.
* vtkcommon.c and vtkcommon.h hold the common functions and global variables.
* cem_out_fields3 (in cem_dg.F) call sequence (see the C sketch after this list):
** openfile3(dumpno, nid) !vtkbin.c
** vtk_dump_header3
*** writeheader3() !vtkbin.c
*** writenodes3() !vtkbin.c
*** write2dcells3 or write3dcells3 !vtkbin.c
** vtk_dump_field3
*** writefield3 !vtkbin.c
** close_file3 !vtkbin.c
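
A runnable C sketch of that call order, purely illustrative: except for openfile3(dumpno, nid), the argument lists and stub bodies below are placeholders rather than the real vtkbin.c signatures (in NekCEM the driver is the Fortran routine cem_out_fields3).

<pre>
#include <stdio.h>

/* Stub implementations, purely illustrative: the real routines live in
 * vtkbin.c, and their signatures (apart from openfile3's dumpno/nid
 * arguments) are not given in the notes, so everything here is a
 * placeholder that only echoes the call order. */
static void openfile3(int dumpno, int nid) { printf("openfile3(%d, %d)\n", dumpno, nid); }
static void writeheader3(void)  { printf("writeheader3\n"); }
static void writenodes3(void)   { printf("writenodes3\n"); }
static void write3dcells3(void) { printf("write3dcells3\n"); }  /* write2dcells3 in 2D */
static void writefield3(void)   { printf("writefield3\n"); }
static void close_file3(void)   { printf("close_file3\n"); }

/* Hypothetical driver mirroring the cem_out_fields3 call order above. */
int main(void)
{
    int dumpno = 0, nid = 0;   /* dump index and rank id (example values) */

    openfile3(dumpno, nid);    /* open the output file for this dump */
    writeheader3();            /* vtk_dump_header3: header,          */
    writenodes3();             /*   node coordinates,                */
    write3dcells3();           /*   and cell connectivity            */
    writefield3();             /* vtk_dump_field3: field data        */
    close_file3();             /* close the file                     */
    return 0;
}
</pre>
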
* Binary file → ASCII file: convert the double/float/int values read from the binary file to character strings, then write them out (see the C sketch below):
** float (4 bytes) → %18.8E
** int (4 bytes) → %10d
** long long (8 bytes) → %18lld
** elemType → %4d
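
A minimal C sketch of that conversion, using illustrative variable names and sample values; only the printf format specifiers come from the list above.

<pre>
#include <stdio.h>

/* Illustrative only: write one value of each type with the format
 * specifiers listed above.  In NekCEM the values would come from the
 * binary dump; here they are sample literals with made-up names. */
int main(void)
{
    float     coord    = 1.2345678f;   /* node coordinate or field value */
    int       cellId   = 42;           /* cell connectivity entry        */
    long long offset   = 123456789LL;  /* large count / offset           */
    int       elemType = 9;            /* element type code              */

    FILE *out = fopen("dump.ascii", "w");
    if (out == NULL) return 1;

    fprintf(out, "%18.8E\n", coord);    /* float  (4 bytes)    */
    fprintf(out, "%10d\n",   cellId);   /* int    (4 bytes)    */
    fprintf(out, "%18lld\n", offset);   /* long long (8 bytes) */
    fprintf(out, "%4d\n",    elemType); /* elemType            */

    fclose(out);
    return 0;
}
</pre>
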
=== NekCEM notes ===
* scaling (see the efficiency formulas below):
** strong scaling: how the solution time varies with the number of processors for a fixed total problem size.
** weak scaling: how the solution time varies with the number of processors for a fixed problem size per processor.
* pre-compute file size (see the C sketch below):
** #grid points = nx * ny * nz * nelt; size = #grid points * 3 * sizeof(float)
** cell type: 2D → 4 * #cells * sizeof(int) + 1 * #cells * sizeof(int) (3D → 9)
** #field values = nfields * 3 * #grid points; size = #field values * sizeof(float)
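
For reference, the corresponding parallel efficiencies are commonly written as follows (standard definitions, not specific to NekCEM; <math>T_N</math> is the solution time on <math>N</math> processors):

<math>E_{\mathrm{strong}}(N) = \frac{T_1}{N\,T_N}, \qquad E_{\mathrm{weak}}(N) = \frac{T_1}{T_N},</math>

with the total problem size held fixed for strong scaling and the per-processor problem size held fixed for weak scaling.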
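
A small C sketch of the file-size pre-computation, using example values; the "one cell record per element" choice and the 3D record length of 9 + 1 ints (reading "3D → 9" as replacing the 4) are assumptions noted in the comments, while the remaining formulas follow the list above.

<pre>
#include <stdio.h>

/* Estimate the dump size from the formulas above.  The numeric values are
 * examples only; "one cell record per element" and the 3D record length of
 * 9 + 1 ints are assumptions, not taken from the notes. */
int main(void)
{
    long long nx = 16, ny = 16, nz = 16;  /* grid points per element in each direction (example) */
    long long nelt    = 512;              /* number of elements (example)                         */
    long long nfields = 6;                /* number of output fields (example)                    */
    int       dim     = 3;                /* 2 or 3                                               */

    long long npts  = nx * ny * nz * nelt;                  /* #grid points                */
    long long coord = npts * 3 * (long long)sizeof(float);  /* coordinates: 3 floats/point */

    long long ncells      = nelt;                           /* assumption: one cell record per element */
    long long intsPerCell = (dim == 2) ? 4 + 1 : 9 + 1;     /* connectivity + cell-type ints           */
    long long cells       = ncells * intsPerCell * (long long)sizeof(int);

    long long field = nfields * 3 * npts * (long long)sizeof(float);  /* field data */

    printf("estimated dump size: %lld bytes\n", coord + cells + field);
    return 0;
}
</pre>
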
* .box → number of elements in x, y, z
* .rea → input data
* SIZEu → SIZE parameters:
** lxi ?
** lp = #proc
** lelx = 20 in each dimension
** lelv = allocated max number of elements per processor
* .usr → subuser.F
* cem() in cem_dg.F is the main solver and the application entry point

* Only CELL and point data need to be re-computed.

* Compile and run NekCEM:
** From a specific case directory, run ../../bin/cleanall, then ../../bin/makenek, then ../../bin/nek "case_name" #proc (e.g. in cylwave: ../../bin/nek cylwave 4).
== To-do List ==
* More tests on BG/P for config with ng = M and 1 < nf < M
* Tests on Kraken and Jaguar
* Pthread + MPI for I/O
* OpenMP/Pthread + MPI for NekCEM computation
* Parallel I/O for reading the .rea file
== Miscellaneous notes ==
* A Fortran-generated binary file may not be read correctly in C (see the sketch below).
* Link with -lstdc++.
* libF77 and libI77
* common.h and common_c.h
* write() in Fortran: unit 6 refers to the screen, and unit * also writes to the screen.
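
One common cause, sketched below: Fortran unformatted sequential writes frame each record with a length marker before and after it, so a C reader has to consume and check those markers. The sketch assumes 4-byte markers, matching endianness, and a hypothetical file name.

<pre>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative only (not NekCEM code): read one Fortran unformatted
 * sequential record from C.  Assumes 4-byte record-length markers and
 * the same endianness as the writer; the file name is hypothetical. */
int main(void)
{
    FILE *fp = fopen("fort.10", "rb");
    if (fp == NULL) return 1;

    int head, tail;
    if (fread(&head, sizeof head, 1, fp) != 1) return 1;       /* leading length marker */

    char *buf = malloc((size_t)head);                           /* record payload        */
    if (buf == NULL || fread(buf, 1, (size_t)head, fp) != (size_t)head) return 1;

    if (fread(&tail, sizeof tail, 1, fp) != 1 || tail != head) return 1;  /* trailing marker must match */

    printf("read one record of %d bytes\n", head);
    free(buf);
    fclose(fp);
    return 0;
}
</pre>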