Main Page/Meeting Notes

This page records meeting notes about NekCEM.

== Notes ==
 
=== 02/17/2011, Misun, Jing Fu ===

* Paper
** will keep iterating the NekCEM I/O paper in accordance with the review feedback from ICS
** Chris suggests [http://www.ppam.pl/ PPAM] (submission: 4/30, notification: 6/15) over ICPP and Cluster
** Results on Jaguar are going to take a while (allocation application turnaround time, compiling NekCEM on Cray machines, tuning performance, and summarizing the results for the paper)
** Results from Jaguar/Lustre can go into a follow-up journal paper (most likely by the end of summer)

* Hybrid model
** I/O with pthreads is coming along on SMP machines; will try Intrepid soon
** OpenMP is expected to have strong support (according to Bronis et al.) and will likely be used for the computation frame
** pthreads should be acceptable for the I/O tasks (see the hybrid sketch after this list)
** MPI task?
** test collective performance degradation on sub-communicators, for a potential split of the full communicator (see the benchmark sketch after this list)
*** for allreduce, going through a sub-communicator forces the routine onto the torus network rather than the collective tree network, causing a 10x-30x performance drop for double and integer allreduce compared to MPI_COMM_WORLD; tested on 32k and 64k
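
Below is a minimal sketch of the hybrid threading split discussed above, assuming a time-stepping loop with periodic checkpoints; the routine names, checkpoint interval, and array sizes are illustrative, not NekCEM's actual code. The OpenMP loop plays the computation frame, while a plain pthread writes out a copy of the previous snapshot so the file write overlaps the following compute steps.

<source lang="c">
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N (1u << 20)                   /* field size: placeholder */

/* Work handed to the I/O thread: a private copy of the field. */
struct io_arg {
    double *snapshot;
    size_t  n;
    int     step;
};

/* I/O task, run on a plain pthread while the compute loop goes on. */
static void *write_snapshot(void *p)
{
    struct io_arg *a = p;
    char name[64];
    snprintf(name, sizeof name, "field.%06d.dat", a->step);
    FILE *f = fopen(name, "wb");
    if (f) {
        fwrite(a->snapshot, sizeof(double), a->n, f);
        fclose(f);
    }
    free(a->snapshot);
    free(a);
    return NULL;
}

int main(void)
{
    double *field = malloc(N * sizeof *field);
    for (size_t i = 0; i < N; i++)
        field[i] = (double)i;

    pthread_t io;
    int io_pending = 0;

    for (int step = 0; step < 20; step++) {
        /* Computation frame: OpenMP parallel loop (compile with -fopenmp).
           The update itself is a stand-in for the real solver step. */
        #pragma omp parallel for
        for (long i = 0; i < (long)N; i++)
            field[i] = 0.5 * field[i] + 1.0;

        if (step % 5 == 0) {            /* checkpoint interval: made up */
            if (io_pending)             /* don't overlap two writes */
                pthread_join(io, NULL);
            struct io_arg *a = malloc(sizeof *a);
            a->snapshot = malloc(N * sizeof *a->snapshot);
            memcpy(a->snapshot, field, N * sizeof *a->snapshot);
            a->n = N;
            a->step = step;
            pthread_create(&io, NULL, write_snapshot, a);
            io_pending = 1;
        }
    }
    if (io_pending)
        pthread_join(io, NULL);
    free(field);
    return 0;
}
</source>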
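
And a sketch of the kind of micro-benchmark the sub-communicator item calls for: it times MPI_Allreduce on MPI_COMM_WORLD against the same operation on a communicator obtained from MPI_Comm_split, which on Blue Gene can push the collective off the tree network onto the torus. The split policy, message size, and repetition count are placeholders.

<source lang="c">
#include <mpi.h>
#include <stdio.h>

enum { COUNT = 1024, REPS = 100 };  /* message size and repetitions: placeholders */

/* Average time of REPS allreduces of COUNT doubles on 'comm'. */
static double time_allreduce(MPI_Comm comm, double *in, double *out)
{
    MPI_Barrier(comm);
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++)
        MPI_Allreduce(in, out, COUNT, MPI_DOUBLE, MPI_SUM, comm);
    return (MPI_Wtime() - t0) / REPS;
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double in[COUNT], out[COUNT];
    for (int i = 0; i < COUNT; i++)
        in[i] = rank + i;

    /* Placeholder split policy: halve the job into two sub-communicators,
       standing in for a compute/IO split of MPI_COMM_WORLD. */
    int color = (rank < size / 2) ? 0 : 1;
    MPI_Comm sub;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &sub);

    double t_world = time_allreduce(MPI_COMM_WORLD, in, out);
    double t_sub   = time_allreduce(sub, in, out);

    if (rank == 0)
        printf("avg MPI_Allreduce: world %.3e s, subcomm %.3e s (%.1fx)\n",
               t_world, t_sub, t_sub / t_world);

    MPI_Comm_free(&sub);
    MPI_Finalize();
    return 0;
}
</source>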

* Summer time frame
** Flexible; Misun is only out July 26-29
