
    University of Missouri Cluster (Lewis) : about

    Date
    2019
    Format
    Other
    Abstract
    Description: The Lewis HPC cluster is a shared cluster located in Columbia, Missouri, accessible to researchers and their collaborators at the University of Missouri and across the University of Missouri System. Lewis currently runs CentOS 7 and uses SLURM to schedule jobs. The cluster consists of 54 Haswell nodes with 24 cores and 128GB or 256GB of RAM, 81 Broadwell nodes with 28 cores and 256GB of RAM, and 37 Broadwell nodes with 512GB of RAM, connected with 10 Gigabit Ethernet and 40 Gigabit QDR InfiniBand. It also includes 35 Skylake nodes with 40 cores and 384GB of RAM, connected with 25 Gigabit Ethernet and 100 Gigabit EDR InfiniBand, for a total of 6200 cores. There are also 144 Sandy Bridge cores for interactive jobs and testing. There are currently 17 GPU nodes with 8 NVIDIA K20m GPUs, 2 NVIDIA K40 GPUs, 1 NVIDIA P100, 16 GTX 1080 Tis, and 6 NVIDIA V100s, for a total of 32 GPUs. The system is connected to 595TB of shared high-speed parallel HPC storage (Lustre) for computation and 1240TB of HTC storage (ZFS) for large, economical, low-computational-intensity project storage.
    URI
    https://hdl.handle.net/10355/69891
    https://doi.org/10.32469/10355/69802
    Collections
    • University of Missouri Cluster (Lewis) documentation
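
    The abstract above notes that Lewis uses SLURM to schedule jobs. Below is a minimal sketch, in Python, of how a batch job might be submitted to such a SLURM-managed cluster by invoking the standard sbatch command; the job name, resource requests, and wrapped command are illustrative assumptions, not documented Lewis settings or policies.

        import subprocess

        # Minimal sketch: submit a batch job to a SLURM scheduler such as the
        # one described in the record above. The job name, resource limits,
        # and wrapped command are illustrative assumptions, not Lewis defaults.
        sbatch_cmd = [
            "sbatch",
            "--job-name=lewis-example",
            "--ntasks=1",            # single task / single core
            "--mem=4G",              # small request; the nodes above have 128GB+ RAM
            "--time=00:10:00",       # ten-minute wall-clock limit
            "--wrap=hostname",       # command the job runs on the compute node
        ]

        result = subprocess.run(sbatch_cmd, capture_output=True, text=True, check=True)
        # On success sbatch prints a line such as "Submitted batch job 123456"
        print(result.stdout.strip())

    Calling sbatch through subprocess is only one option; the same request could equally be made from a shell session on a login node or from a job script with #SBATCH directives.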

    hosted by University of Missouri Library Systems