
Google Colab - Colaboratory #1056

junxnone opened this issue 6 years ago

junxnone commented 6 years ago

Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud.

Reference

Index

Open a notebook from GitHub in Colab and commit changes back to GitHub

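As a rough sketch of what the GitHub integration looks like in practice: a notebook hosted on GitHub can be opened in Colab by rewriting its URL, and committing back is done from the Colab menu via File -> Save a copy in GitHub. The user, repo, branch, and file names below are placeholders, not a real repository.

# Sketch: build the colab.research.google.com link that opens a GitHub-hosted notebook.
def colab_url(user: str, repo: str, branch: str, path: str) -> str:
    """Return the Colab URL for a notebook stored on GitHub."""
    return (f"https://colab.research.google.com/github/"
            f"{user}/{repo}/blob/{branch}/{path}")

print(colab_url("someuser", "somerepo", "master", "notebooks/demo.ipynb"))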

junxnone commented 6 years ago

Hardware

CPU

# !lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              2
On-line CPU(s) list: 0,1
Thread(s) per core:  2
Core(s) per socket:  1
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               63
Model name:          Intel(R) Xeon(R) CPU @ 2.30GHz
Stepping:            0
CPU MHz:             2300.000
BogoMIPS:            4600.00
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            256K
L3 cache:            46080K
NUMA node0 CPU(s):   0,1
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms xsaveopt
# List the devices visible to TensorFlow (CPU only on the default runtime)
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
[name: "/device:CPU:0"
 device_type: "CPU"
 memory_limit: 268435456
 locality {
 }
 incarnation: 4639597472285534396]
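For reference, the same CPU details can be read directly from Python with the standard library; a minimal sketch (the comments note the values seen on this runtime):

# Sketch: query CPU information from Python instead of lscpu.
import os
import platform

print("Logical CPUs:", os.cpu_count())        # 2 on this runtime
print("Architecture:", platform.machine())    # x86_64
with open("/proc/cpuinfo") as f:
    model = next(line for line in f if line.startswith("model name"))
print(model.strip())                          # Intel(R) Xeon(R) CPU @ 2.30GHz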

GPU

With the GPU runtime enabled, device_lib.list_local_devices() also reports the GPU:

[name: "/device:CPU:0"
 device_type: "CPU"
 memory_limit: 268435456
 locality {
 }
 incarnation: 1798544216336249911, name: "/device:GPU:0"
 device_type: "GPU"
 memory_limit: 11281989632
 locality {
   bus_id: 1
   links {
   }
 }
 incarnation: 3877716453063618870
 physical_device_desc: "device: 0, name: Tesla K80, pci bus id: 0000:00:04.0, compute capability: 3.7"]

Tesla K80
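A minimal sketch for confirming that the GPU runtime is active (Runtime -> Change runtime type -> GPU), assuming TensorFlow is available as it is on the default Colab image; !nvidia-smi from a shell cell reports the same device:

# Sketch: check whether TensorFlow sees a GPU on this runtime.
import tensorflow as tf

gpu_name = tf.test.gpu_device_name()  # e.g. "/device:GPU:0", or "" on a CPU-only runtime
print("GPU device:", gpu_name or "none")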

Memory

# !free -mh
              total        used        free      shared  buff/cache   available
Mem:            12G        1.6G        1.2G        253M        9.9G         10G
Swap:            0B          0B          0B

# !cat /proc/meminfo
MemTotal:       13335188 kB
MemFree:         1139128 kB
MemAvailable:   11574896 kB
Buffers:          163676 kB
Cached:          9637268 kB
SwapCached:            0 kB
Active:          1447600 kB
Inactive:        9898520 kB
Active(anon):    1117832 kB
Inactive(anon):    89564 kB
Active(file):     329768 kB
Inactive(file):  9808956 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:              4072 kB
Writeback:             0 kB
AnonPages:       1545208 kB
Mapped:           449556 kB
Shmem:            260088 kB
Slab:             668712 kB
SReclaimable:     623264 kB
SUnreclaim:        45448 kB
KernelStack:        3760 kB
PageTables:         8408 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     6667592 kB
Committed_AS:    3311164 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:      184268 kB
DirectMap2M:     8204288 kB
DirectMap1G:     7340032 kB
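The same figures can also be read from Python; a minimal sketch using psutil (preinstalled on the standard Colab image at the time of writing):

# Sketch: report total and available memory from Python rather than free/meminfo.
import psutil

vm = psutil.virtual_memory()
print(f"Total:     {vm.total / 2**30:.1f} GiB")      # ~12.7 GiB on this runtime
print(f"Available: {vm.available / 2**30:.1f} GiB")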
junxnone commented 4 years ago

The more you use Colab, the better the GPU it tends to allocate: this session was assigned a P100.