dizcza / docker-hashcat

Latest hashcat docker for CUDA, OpenCL, and POCL. Deployed on Vast.ai
MIT License

Trivial bump to newer CUDA image #13

Closed · BillWeiss closed this 1 year ago

BillWeiss commented 1 year ago

Starting a container from the current latest CUDA image prints this:

==========
== CUDA ==
==========

CUDA Version 10.2.89

Container image Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

*************************
** DEPRECATION NOTICE! **
*************************
THIS IMAGE IS DEPRECATED and is scheduled for DELETION.
    https://gitlab.com/nvidia/container-images/cuda/blob/master/doc/support-policy.md

I bumped to the current latest, 12.1.0 (latest per https://hub.docker.com/r/nvidia/cuda/tags), rebuilt, and was able to run hashcat.
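
For reference, the change itself is just the base-image tag in the Dockerfile's FROM line. A rough sketch of the kind of diff involved (the exact tags below are assumptions; the repo's Dockerfile may use a different image flavor or Ubuntu base):

# before (deprecated, scheduled for deletion)
FROM nvidia/cuda:10.2-devel-ubuntu18.04
# after (latest on Docker Hub at the time of writing)
FROM nvidia/cuda:12.1.0-devel-ubuntu22.04

Rebuild as usual, e.g. docker build -t hashcat-cuda . (the image name here is just an example). Here is the rebuilt image (b691bec2c210) running on a Tesla T4: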

root@ip-172-31-83-232:~/asdf# docker run --gpus all --rm -it b691bec2c210 /bin/bash

==========
== CUDA ==
==========

CUDA Version 12.1.0

Container image Copyright (c) 2016-2022, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

A copy of this license is made available in this container at /NGC-DL-CONTAINER-LICENSE for your convenience.

root@8c59c2f930ef:~# hashcat -I
hashcat (v6.2.6) starting in backend information mode

CUDA Info:
==========

CUDA.Version.: 12.1

Backend Device ID #1 (Alias: #2)
  Name...........: Tesla T4
  Processor(s)...: 40
  Clock..........: 1590
  Memory.Total...: 15101 MB
  Memory.Free....: 14998 MB
  Local.Memory...: 64 KB
  PCI.Addr.BDFe..: 0000:00:03.6

OpenCL Info:
============

OpenCL Platform ID #1
  Vendor..: NVIDIA Corporation
  Name....: NVIDIA CUDA
  Version.: OpenCL 3.0 CUDA 12.0.139

  Backend Device ID #2 (Alias: #1)
    Type...........: GPU
    Vendor.ID......: 32
    Vendor.........: NVIDIA Corporation
    Name...........: Tesla T4
    Version........: OpenCL 3.0 CUDA
    Processor(s)...: 40
    Clock..........: 1590
    Memory.Total...: 15101 MB (limited to 3775 MB allocatable in one block)
    Memory.Free....: 14976 MB
    Local.Memory...: 48 KB
    OpenCL.Version.: OpenCL C 1.2 
    Driver.Version.: 525.85.12
    PCI.Addr.BDF...: 00:03.6

Benchmarks look the same, so there's no performance gain; this just gets rid of the deprecation warning :)
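
In case anyone wants to reproduce the comparison: by "benchmarks" I mean hashcat's built-in benchmark mode run against both the old and the new image and compared by eye, along the lines of the command below (the image tag is a placeholder, and -m 1000 / NTLM is just one representative mode):

docker run --gpus all --rm -it <image> hashcat -b -m 1000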

dizcza commented 1 year ago

Thanks! I keep the master branch on a legacy CUDA version, while the cuda branch is supposed to track more recent CUDA versions. Let it be in master, then.

BillWeiss commented 1 year ago

Thanks!