lartpang / Machine-Deep-Learning

:wave: ML/DL study notes (fundamentals + papers)

Python Tips #45

Open ghost opened 5 years ago

ghost commented 5 years ago
import os

path = 'd:/assist'  # some path to inspect
if os.path.isdir(path):
    print("it's a directory")
elif os.path.isfile(path):
    print("it's a normal file")
else:
    print("it's a special file (socket, FIFO, device file)")

>>> os.path.exists('d:/assist')
 True
>>> os.path.exists('d:/assist/getTeacherList.py')
 True

os.path.getsize(path)  # size of path in bytes
ghost commented 5 years ago
# List all files under a directory
g = os.walk(r"e:\test")

for path, dir_list, file_list in g:
    for file_name in file_list:
        print(os.path.join(path, file_name))

# List all sub-directories
g = os.walk(r"e:\test")

for path, dir_list, file_list in g:
    for dir_name in dir_list:
        print(os.path.join(path, dir_name))
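The same traversal can be written with pathlib (Python 3.4+); a minimal sketch using the directory above:

from pathlib import Path

for p in Path(r"e:\test").rglob("*"):   # recursive glob over the whole tree
    if p.is_file():
        print(p)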
ghost commented 5 years ago

import os, glob, fnmatch

For these kinds of operations, the official documentation recommends the modules imported above.

This module provides a portable way of using operating system dependent functionality.

Environment variables

# All environment variables defined by the operating system are stored in os.environ and can be inspected directly:

>>> os.environ
environ({...'LD_LIBRARY_PATH': '/usr/local/cuda-9.0/lib64:/usr/local/cuda-9.0/lib64', ..., 'LC_IDENTIFICATION': 'zh_CN.UTF-8', ...})

# To get the value of a specific environment variable:
>>> os.environ['MANPATH']
'/home/lart/texlive/2018/texmf-dist/doc/man:/usr/local/man:'
>>> os.environ.get('MANPATH')
'/home/lart/texlive/2018/texmf-dist/doc/man:/usr/local/man:'
>>> os.environ.get('MANPATH', 'not found')
'/home/lart/texlive/2018/texmf-dist/doc/man:/usr/local/man:'
>>> os.environ.get('MAINPATH', 'not found')
'not found'
>>> os.environ('MAINPATH')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: '_Environ' object is not callable

>>> os.getenv('MANPATH')
'/home/lart/texlive/2018/texmf-dist/doc/man:/usr/local/man:'
>>> os.getenv('MAINPATH', "not found")
'not found'
# The main difference of os.getenv is that it never raises an exception when the variable does not exist
>>> os.getenv('MAINPATH')
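os.environ can also be written to; the change only affects the current process and any child processes it spawns. A small sketch (the variable name is made up):

>>> os.environ['MY_FLAG'] = '1'
>>> os.environ.get('MY_FLAG')
'1'
>>> del os.environ['MY_FLAG']
>>> os.environ.get('MY_FLAG', 'not found')
'not found'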

System commands

There are two ways for Python to invoke shell commands: `os.system(command)` and `os.popen(command)`. The former returns the exit status code of the command, while the latter returns the output produced while it runs. Choose according to what you need.

# os.popen() returns a file-like object; calling read() on it gives the command's output.
output = os.popen('cat /proc/cpuinfo')
print(output.read())
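For new code, the subprocess module is generally preferred over os.system/os.popen; a minimal sketch (capture_output requires Python 3.7+):

import subprocess

result = subprocess.run(['cat', '/proc/cpuinfo'], capture_output=True, text=True)
print(result.returncode)    # exit status, like os.system
print(result.stdout[:200])  # captured output, like os.popen(...).read()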

System information

# Get the OS type
>>> os.name
'posix'
>>> os.uname()
posix.uname_result(sysname='Linux', nodename='lart', release='4.15.0-43-generic', version='#46-Ubuntu SMP Thu Dec 6 14:45:28 UTC 2018', machine='x86_64')
>>> os.uname()[0]
'Linux'
>>> os.uname()[4]
'x86_64'
>>> import sys
>>> sys.platform
'linux'

# The symbols for the current / parent directory
>>> os.curdir
'.'
>>> os.pardir
'..'
# The path separator and the line separator
>>> os.sep
'/'
>>> os.linesep
'\n'
# The separator used in environment variables such as PATH
>>> os.pathsep
':'

# Number of CPU cores in the system
# This is not the number of CPUs the current process may use; use `len(os.sched_getaffinity(0))` for that
>>> os.cpu_count()
12
>>> len(os.sched_getaffinity(0))
12
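The platform module exposes similar information; a small sketch (the outputs shown are illustrative):

>>> import platform
>>> platform.system()
'Linux'
>>> platform.machine()
'x86_64'
>>> platform.python_version()
'3.6.7'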

File and directory operations

Listing paths

######################################################################################
# Return the current working directory of the process
>>> os.getcwd()
'/home/lart/md/python总结'

######################################################################################
# Return the full names (with extensions) of the files and directories under path, excluding the special entries '.' and '..'
>>> os.listdir('.')
['converter.py', 'face++.py', 'Face++.ipynb', 'person-young-man-beard-emotions-157966.png', '.ipynb_checkpoints', '.idea', 'supervisely.py', 'facepp-python-sdk-master']

######################################################################################
# Use a context manager around the os.scandir iterator; it yields `os.DirEntry` objects, which expose more information than `os.listdir`
# `os.DirEntry`: object yielded by scandir() to expose the file path and other file attributes of a directory entry
# Attributes and methods: `name, path, inode(), is_dir(*, follow_symlinks=True), is_file(*, follow_symlinks=True), is_symlink(), stat(*, follow_symlinks=True)`
>>> with os.scandir('.') as it:
...     for entry in it:
...         if not entry.name.startswith('.') and entry.is_file():
...             print(entry.name)
...         if entry.is_dir():
...             print(f"{entry.name} is a directory")
...
converter.py
face++.py
Face++.ipynb
person-young-man-beard-emotions-157966.png
.ipynb_checkpoints is a directory
.idea is a directory
supervisely.py
facepp-python-sdk-master is a directory

Given the directory tree:

➜  tool_scripts tree
.
├── converter.py
├── Face++.ipynb
├── facepp-python-sdk-master
│   ├── call_four_task.ipynb
│   ├── call.py
│   ├── facepp_custom.py
│   ├── imgResource
│   │   ├── demo.jpeg
│   │   ├── gray_image.png
│   │   ├── merge.jpg
│   │   ├── resultImg.jpg
│   │   ├── resultImg.png
│   │   ├── search.png
│   │   ├── segment.b64
│   │   └── segment.jpg
│   ├── PythonSDK
│   │   ├── compat.py
│   │   ├── facepp.py
│   │   ├── ImagePro.py
│   │   ├── __pycache__
│   │   │   ├── compat.cpython-36.pyc
│   │   │   ├── facepp.cpython-36.pyc
│   │   │   ├── ImagePro.cpython-36.pyc
│   │   │   └── structures.cpython-36.pyc
│   │   └── structures.py
│   ├── Python SDK demo 使用文档.pdf
│   └── README.md
├── face++.py
├── person-young-man-beard-emotions-157966.png
└── supervisely.py

4 directories, 26 files
######################################################################################
# os.walk(top, topdown=True, onerror=None, followlinks=False)
# top: the directory to traverse, i.e. the top of the tree
# topdown: if True (the default), a parent directory is visited before its sub-directories (all of its
#   files and folders are listed first, then the sub-folders are descended into); otherwise the
#   sub-directories of top are visited first (roughly breadth-first vs. depth-first output order)
# onerror: a callable invoked when walk encounters an error
# followlinks: if True, directories pointed to by shortcuts (`symbolic link`s on Linux) are also
#   traversed (default False)
#
# os.walk returns a generator; iterating over it yields the whole tree.
# Each iteration yields a 3-tuple `(dirpath, dirnames, filenames)`
# dirpath (string): the path of the directory currently being visited
# dirnames (list): the names of the *sub-directories* directly inside that directory (excluding '.' and '..')
# filenames (list): the names of the *files* directly inside that directory
# Notes:
#   - The name lists do not contain full paths; to build one, use `os.path.join(dirpath, name)`.
#   - With `followlinks=True`, a link pointing back to a parent directory can cause infinite recursion,
#     because `walk()` does not keep track of the directories it has already visited.
#   - If you pass a relative path, do not change the current working directory between resumptions of
#     `walk()`; `walk()` never changes the current directory and assumes its caller does not either.

###############################################################################
# The following uses `topdown=True` (the default)
>>> for dirpath, dirnames, filenames in os.walk('.'):
...     print(dirpath)
...     print(dirnames)
...     print(filenames)
...
.
['.ipynb_checkpoints', '.idea', 'facepp-python-sdk-master']
['converter.py', 'face++.py', 'Face++.ipynb', 'person-young-man-beard-emotions-157966.png', 'supervisely.py']
./.ipynb_checkpoints
[]
['Face++-checkpoint.ipynb']
./.idea
[]
['misc.xml', 'modules.xml', 'workspace.xml', 'tool_scripts.iml', 'encodings.xml']
./facepp-python-sdk-master
['imgResource', 'PythonSDK', '.ipynb_checkpoints', '.idea']
['call.py', 'facepp_custom.py', 'call_four_task.ipynb', 'Python SDK demo 使用文档.pdf', 'README.md', '.gitignore']
./facepp-python-sdk-master/imgResource
[]
['demo.jpeg', 'resultImg.jpg', 'merge.jpg', 'gray_image.png', 'segment.jpg', 'search.png', 'segment.b64', 'resultImg.png']
./facepp-python-sdk-master/PythonSDK
['__pycache__']
['compat.py', 'facepp.py', 'ImagePro.py', 'structures.py']
./facepp-python-sdk-master/PythonSDK/__pycache__
[]
['compat.cpython-36.pyc', 'facepp.cpython-36.pyc', 'ImagePro.cpython-36.pyc', 'structures.cpython-36.pyc']
./facepp-python-sdk-master/.ipynb_checkpoints
[]
['call_four_task-checkpoint.ipynb']
./facepp-python-sdk-master/.idea
[]
['misc.xml', 'modules.xml', 'facepp-python-sdk-master.iml', 'workspace.xml', 'encodings.xml']

###############################################################################
# The following uses `topdown=False`
>>> for dirpath, dirnames, filenames in os.walk('.', topdown=False):
...     print(dirpath)
...     print(dirnames)
...     print(filenames)
...
./.ipynb_checkpoints
[]
['Face++-checkpoint.ipynb']
./.idea
[]
['misc.xml', 'modules.xml', 'workspace.xml', 'tool_scripts.iml', 'encodings.xml']
./facepp-python-sdk-master/imgResource
[]
['demo.jpeg', 'resultImg.jpg', 'merge.jpg', 'gray_image.png', 'segment.jpg', 'search.png', 'segment.b64', 'resultImg.png']
./facepp-python-sdk-master/PythonSDK/__pycache__
[]
['compat.cpython-36.pyc', 'facepp.cpython-36.pyc', 'ImagePro.cpython-36.pyc', 'structures.cpython-36.pyc']
./facepp-python-sdk-master/PythonSDK
['__pycache__']
['compat.py', 'facepp.py', 'ImagePro.py', 'structures.py']
./facepp-python-sdk-master/.ipynb_checkpoints
[]
['call_four_task-checkpoint.ipynb']
./facepp-python-sdk-master/.idea
[]
['misc.xml', 'modules.xml', 'facepp-python-sdk-master.iml', 'workspace.xml', 'encodings.xml']
./facepp-python-sdk-master
['imgResource', 'PythonSDK', '.ipynb_checkpoints', '.idea']
['call.py', 'facepp_custom.py', 'call_four_task.ipynb', 'Python SDK demo 使用文档.pdf', 'README.md', '.gitignore']
.
['.ipynb_checkpoints', '.idea', 'facepp-python-sdk-master']
['converter.py', 'face++.py', 'Face++.ipynb', 'person-young-man-beard-emotions-157966.png', 'supervisely.py']
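A common use of os.walk is aggregating over a whole tree; a sketch that sums file sizes while skipping hidden directories:

import os

total = 0
for dirpath, dirnames, filenames in os.walk('.'):
    # prune hidden directories in place so walk() does not descend into them (topdown mode only)
    dirnames[:] = [d for d in dirnames if not d.startswith('.')]
    for name in filenames:
        total += os.path.getsize(os.path.join(dirpath, name))
print(total, 'bytes')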

os.path

Path inspection

>>> os.path.commonprefix(['/usr/lib', '/usr/local/lib'])
'/usr/l'
>>> os.path.commonpath(['/usr/lib', '/usr/local/lib'])
'/usr'
>>> path = '/home/lart/Datasets/tool_scripts'
>>> os.path.dirname(path)
'/home/lart/Datasets'
>>> path = '/home/lart/Datasets/tool_scripts/converter.py'
>>> os.path.dirname(path)
'/home/lart/Datasets/tool_scripts'
>>> os.path.exists(path)
True
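A few related os.path helpers for taking paths apart (same illustrative path as above):

>>> os.path.basename(path)
'converter.py'
>>> os.path.split(path)
('/home/lart/Datasets/tool_scripts', 'converter.py')
>>> os.path.splitext(path)
('/home/lart/Datasets/tool_scripts/converter', '.py')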

Path expansion

>>> path = '~/Datasets/tool_scripts/converter.py'
>>> os.path.expanduser(path)
'/home/lart/Datasets/tool_scripts/converter.py'
>>> path = '~/Datasets/tool_scripts'
>>> os.path.expanduser(path)
'/home/lart/Datasets/tool_scripts'

Path timestamps

>>> path = '/home/lart/Datasets/tool_scripts'
>>> os.path.getctime(path)
1547694633.120985
>>> os.path.getctime(path + '/converter.py')
1546766886.501085

Path size

>>> path = '/home/lart/Datasets/tool_scripts'
>>> os.path.getsize(path)
4096
>>> os.path.getsize(path + '/converter.py')
857

Path normalization
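A minimal sketch of the usual normalization helpers (the paths are illustrative):

>>> os.path.normpath('/usr//local/../lib/')
'/usr/lib'
>>> os.path.join('/usr', 'local', 'lib')
'/usr/local/lib'
>>> os.path.abspath('converter.py')
'/home/lart/Datasets/tool_scripts/converter.py'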

glob

The glob module finds all pathnames matching a specified pattern according to the rules used by the Unix shell, although the results are returned in arbitrary order.

It provides `glob.glob()`, `glob.iglob()` and `glob.escape()`.

The pattern rules in glob are not regular expressions; they follow the standard Unix path expansion rules. Shell variables and the tilde (~) are not expanded; only a few special characters are supported: the wildcards `*` and `?`, and character ranges `[]`. For ~ and shell-variable expansion, use os.path.expanduser() and os.path.expandvars(). The rules apply to individual name segments (delimited by /, i.e. a wildcard only matches text between / separators), but the path in the pattern may be relative or absolute.

This is done by using os.scandir() and fnmatch.fnmatch() consistently, not by actually invoking a subshell. Note that, unlike fnmatch.fnmatch(), glob treats filenames beginning with a dot (.) as a special case: wildcards will not match them. See the examples below.

>>> import glob
>>> path
'/home/lart/Datasets/tool_scripts'
>>> glob.iglob(path + '/*')
<generator object _iglob at 0x7fdca3773930>
# '/*' only returns the next level
>>> glob.glob(path + '/*', recursive=True)
['/home/lart/Datasets/tool_scripts/converter.py', '/home/lart/Datasets/tool_scripts/face++.py', '/home/lart/Datasets/tool_scripts/Face++.ipynb', '/home/lart/Datasets/tool_scripts/person-young-man-beard-emotions-157966.png', '/home/lart/Datasets/tool_scripts/supervisely.py', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master']
# '/**' with recursive=True walks all sub-directories and files
>>> glob.glob(path + '/**', recursive=True)
['/home/lart/Datasets/tool_scripts/', '/home/lart/Datasets/tool_scripts/converter.py', '/home/lart/Datasets/tool_scripts/face++.py', '/home/lart/Datasets/tool_scripts/Face++.ipynb', '/home/lart/Datasets/tool_scripts/person-young-man-beard-emotions-157966.png', '/home/lart/Datasets/tool_scripts/supervisely.py', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/call.py', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/imgResource', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/imgResource/demo.jpeg', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/imgResource/resultImg.jpg', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/imgResource/merge.jpg', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/imgResource/gray_image.png', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/imgResource/segment.jpg', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/imgResource/search.png', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/imgResource/segment.b64', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/imgResource/resultImg.png', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/PythonSDK', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/PythonSDK/compat.py', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/PythonSDK/facepp.py', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/PythonSDK/__pycache__', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/PythonSDK/__pycache__/compat.cpython-36.pyc', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/PythonSDK/__pycache__/facepp.cpython-36.pyc', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/PythonSDK/__pycache__/ImagePro.cpython-36.pyc', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/PythonSDK/__pycache__/structures.cpython-36.pyc', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/PythonSDK/ImagePro.py', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/PythonSDK/structures.py', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/facepp_custom.py','/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/call_four_task.ipynb', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/Python SDK demo 使用文档.pdf', '/home/lart/Datasets/tool_scripts/facepp-python-sdk-master/README.md']

>>> glob.glob('./[0-9].*')
['./1.gif', './2.txt']
>>> glob.glob('*.gif')
['1.gif', 'card.gif']
>>> glob.glob('?.gif')
['1.gif']
>>> glob.glob('**/*.txt', recursive=True)
['2.txt', 'sub/3.txt']
>>> glob.glob('./**/', recursive=True)
['./', './sub/']

If the directory contains files starting with `.`, they won't be matched by default; the pattern itself must start with a `.` to match them.

For example, consider a directory containing card.gif and .card.gif:

>>> import glob
>>> glob.glob('*.gif')
['card.gif']
>>> glob.glob('.c*')
['.card.gif']
# https://hk.saowen.com/a/83de58e8a7e060d8ace69c912cbd209948d6c4fe533aa2746201d96a1b45a8bf
import glob

specials = '?*['

for char in specials:
    pattern = 'dir/*' + glob.escape(char) + '.txt'
    print('Searching for:  {!r}'.format(pattern))
    for name in sorted(glob.glob(pattern)):
        print(name)
    print()

# Output:
Searching for:  'dir/*[?].txt'
dir/file?.txt

Searching for:  'dir/*[*].txt'
dir/file*.txt

Searching for:  'dir/*[[].txt'
dir/file[.txt
# https://www.jianshu.com/p/b1f24d56d73b
>>> glob.escape('./**.?.*.[a-z]')
'./[*][*].[?].[*].[[]a-z]'

fnmatch

This module provides support for Unix shell-style wildcards, which are not the same as regular expressions (documented in the re module). The special characters used in shell-style wildcards are `*` (matches everything), `?` (matches any single character), `[seq]` (matches any character in seq) and `[!seq]` (matches any character not in seq).

Note that fnmatch compares names as plain strings: the path separator is not special to this module, and (unlike glob) names beginning with a dot are not treated specially either.

The main functions are `fnmatch.fnmatch()`, `fnmatch.fnmatchcase()`, `fnmatch.filter()` and `fnmatch.translate()`.

For the following directory:

>>> print(os.popen('tree -a -L 2').read())
.
├── converter.py
├── Face++.ipynb
├── facepp-python-sdk-master
│   ├── call_four_task.ipynb
│   ├── call.py
│   ├── facepp_custom.py
│   ├── .gitignore
│   ├── .idea
│   ├── imgResource
│   ├── .ipynb_checkpoints
│   ├── PythonSDK
│   ├── Python SDK demo 使用文档.pdf
│   └── README.md
├── face++.py
├── .idea
│   ├── encodings.xml
│   ├── misc.xml
│   ├── modules.xml
│   ├── tool_scripts.iml
│   └── workspace.xml
├── .ipynb_checkpoints
│   └── Face++-checkpoint.ipynb
├── person-young-man-beard-emotions-157966.png
├── supervisely.py
└── .test

As the tests below show, glob is usually the more convenient choice in practice: the fnmatch module does not ignore files and directories whose names start with '.', which can be an unnecessary nuisance, since most of the time you do not want to search those directories.

>>> import fnmatch
>>> for file in os.listdir('.'):
...     if fnmatch.fnmatch(file, '*'):
...         print(file)
...
converter.py
face++.py
.test
Face++.ipynb
person-young-man-beard-emotions-157966.png
.ipynb_checkpoints
.idea
supervisely.py
facepp-python-sdk-master

>>> glob.glob('./*')
['./converter.py', './face++.py', './Face++.ipynb', './person-young-man-beard-emotions-157966.png', './supervisely.py', './facepp-python-sdk-master']
>>> import fnmatch, re
>>>
>>> regex = fnmatch.translate('*.txt')
>>> regex
'(?s:.*\\.txt)\\Z'
>>> reobj = re.compile(regex)
>>> reobj.match('foobar.txt')
<re.Match object; span=(0, 10), match='foobar.txt'>
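fnmatch.filter() applies a pattern to a whole list at once; a small sketch (the names are illustrative):

>>> names = ['converter.py', 'face++.py', 'README.md', '.gitignore']
>>> fnmatch.filter(names, '*.py')
['converter.py', 'face++.py']
>>> fnmatch.filter(names, '*ignore')  # unlike glob, names starting with '.' are not special here
['.gitignore']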


ghost commented 5 years ago

One-hot encoding with numpy

import numpy as np

def encode_onehot(labels):
    return (np.unique(labels) == labels[:, None]).astype(np.int64)
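A quick usage check of the function above (the labels are illustrative): each row is the one-hot vector of the corresponding label against the sorted unique labels.

labels = np.array([0, 2, 1, 2])
print(encode_onehot(labels))
# [[1 0 0]
#  [0 0 1]
#  [0 1 0]
#  [0 0 1]]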
ghost commented 5 years ago

Multiprocessing

import argparse
import os
import time
from multiprocessing import Pool

import numpy as np
import torch
from PIL import Image
from torch.autograd import Variable
from torch.backends import cudnn
from torch.utils.data import DataLoader
from torchvision import transforms
from tqdm import tqdm

from datasets_test import ImageFolder
from misc import check_mkdir
from model_R2 import R3Net

begin = time.time()

parser = argparse.ArgumentParser(description='Sailor Moon ==>> start processing images')
parser.add_argument('--data_dir', type=str, default='/home/lart/Datasets/ECSSD',
                    help='directory with the test data')
args = parser.parse_args()

torch.manual_seed(2019)
torch.cuda.set_device(0)
cudnn.benchmark = True

img_transform = transforms.Compose([
    transforms.Resize((480, 480)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

to_pil = transforms.ToPILImage()

train_set = ImageFolder(args.data_dir,
                        None,
                        img_transform)
train_loader = DataLoader(train_set,
                          batch_size=16,
                          num_workers=8,
                          shuffle=False,
                          pin_memory=True)

# 65.92232346534729
pth_path = './pth'
snapshot = '32650'
save_path = './segmentation_result'

# Load a previously trained model for testing; which checkpoint to use is set by 'snapshot'
print('Loading the trained model')
net = R3Net().cuda()
net.load_state_dict(torch.load(os.path.join(pth_path, snapshot + '.pth')))
net.eval()

mid_1 = time.time() - begin

############################# everything above is shared setup

with torch.no_grad():
    # img_path = os.path.join(data_dir, 'image_test_640')
    check_mkdir(save_path)
    for data in tqdm(train_loader):
        inputs, img_paths, w, h = data
        batch_size = inputs.size(0)
        img_var = Variable(inputs).cuda()
        prediction = net(img_var)

        for i_item in range(batch_size):
            img_name = (img_paths[i_item].split(os.sep)[-1]).split('.')[0]
            pre_cpu = prediction[i_item].cpu()
            pre_cpu = to_pil(pre_cpu)
            pre_cpu = pre_cpu.resize((w[i_item], h[i_item]), Image.BILINEAR)
            out = np.array(pre_cpu)
            out[out >= 211] = 255  # empirically chosen threshold
            out[out < 255] = 0
            Image.fromarray(out).save(
                os.path.join(save_path, img_name + '.png'))

mid_2 = time.time()
print("原始计时", mid_2 - begin)

# del inputs, img_paths, w, h, img_name
# -----------------------------------------------------------------------------

# pool.map 48s
# pool.apply_async 43s

def img_resize(pre_cpu, w, h, img_name):
    pre_cpu = to_pil(pre_cpu)
    pre_cpu = pre_cpu.resize((w, h), Image.BILINEAR)
    out = np.array(pre_cpu)
    out[out >= 211] = 255  # empirically chosen threshold
    out[out < 255] = 0
    Image.fromarray(out).save(
        os.path.join(save_path, img_name + '.png'))

pools_num = 8
pool = Pool(pools_num)
with torch.no_grad():
    check_mkdir(save_path)

    for data in tqdm(train_loader):
        inputs, img_paths, w, h = data
        batch_size = inputs.size(0)
        img_var = inputs.cuda()
        prediction = net(img_var)
        pre_cpu = np.array(prediction.cpu()).astype('uint8')

        # iter_item = iter([[pre_cpu[i], w[i], h[i], img_names[i]]
        #                   for i in range(batch_size)])
        # pool.map(img_resize, iter_item)

        for i_item in range(batch_size):
            img_name = (img_paths[i_item].split(os.sep)[-1]).split('.')[0]
            pool.apply_async(
                img_resize,
                args=(pre_cpu[i_item], w[i_item], h[i_item], img_name))

pool.close()
pool.join()

end = time.time()

################################## everything below is shared
print("多进程计时", end - mid_2 + mid_1, end - begin)

# slower than using to_pil
# pre_cpu = np.array(
#     prediction[i].squeeze(0).cpu()).astype('uint8')
# pre_cpu = pre_cpu[:, :, np.newaxis]
# pre_cpu = np.repeat(pre_cpu, 3, axis=2)
# pre_cpu = Image.fromarray(pre_cpu)
# print(prediction.type(), pre_cpu.type())
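Stripped of the model-specific parts, the Pool.apply_async pattern above reduces to the following sketch; the worker and the data here are hypothetical placeholders.

from multiprocessing import Pool

import numpy as np
from PIL import Image


def save_mask(arr, size, name):
    # Hypothetical worker: resize a uint8 mask and save it as a PNG.
    Image.fromarray(arr).resize(size, Image.BILINEAR).save(name + '.png')


if __name__ == '__main__':
    with Pool(8) as pool:
        results = []
        for i in range(4):
            arr = np.zeros((480, 480), dtype=np.uint8)  # placeholder prediction
            results.append(pool.apply_async(save_mask, args=(arr, (640, 480), f'mask_{i}')))
        # get() re-raises any exception from the worker, which apply_async otherwise swallows silently
        for r in results:
            r.get()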
ghost commented 5 years ago

Image-related Python libraries

Image processing

Numpy

Common NumPy operations on a multi-dimensional array A include:
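A minimal sketch of a few of them (the array here is arbitrary):

import numpy as np

A = np.arange(24).reshape(2, 3, 4)
print(A.shape, A.dtype, A.ndim)    # (2, 3, 4), e.g. int64, 3
print(A.transpose(2, 0, 1).shape)  # reorder axes -> (4, 2, 3)
print(A.reshape(6, 4).shape)       # reshape -> (6, 4)
print(A[..., ::-1].shape)          # slice / reverse the last axis -> (2, 3, 4)
print(A.sum(axis=0).shape)         # reduce along an axis -> (3, 4)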

CV2

PIL, Pillow, Pillow-SIMD

Matplotlib

Skimage

Reading

CV2
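A minimal sketch for OpenCV (note that it loads images in BGR channel order, and returns None instead of raising when the file is missing):

import cv2

img = cv2.imread('examples.png')                             # numpy array in BGR order
img_gray = cv2.imread('examples.png', cv2.IMREAD_GRAYSCALE)  # single-channel grayscale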

PIL, Pillow, Pillow-SIMD

img = Image.open('examples.png')

Matplotlib

img = plt.imread('examples.png')

Display

Matplotlib

# Matplotlib is mainly for plotting: it displays RGB images stored as numpy arrays; float images should be in the range 0-1, uint8 images in the range 0-255
img = plt.imread('examples.png')
plt.imshow(img)
plt.show()

CV2

img = cv2.imread('examples.png')
plt.imshow(img[..., -1::-1])  # OpenCV reads images as BGR while imshow expects RGB, so reverse the channel order first; plt.imshow(img[:, :, ::-1]) also works
plt.show()

PIL

# PIL images can be displayed directly
plt.imshow(Image.open('examples.png'))  # plt.imshow can display PIL images as-is
plt.show()
# or convert to a numpy array first
img = Image.open('examples.png')
img_gray = img.convert('L')  # convert to grayscale
img = np.array(img)
img_gray = np.array(img_gray)
plt.imshow(img)  # or plt.imshow(img / 255.0)
plt.show()
plt.imshow(img_gray, cmap=plt.gray())  # a cmap must be set when displaying a grayscale image
plt.show()

Conversion

Channel-order and layout conversions are mostly plain numpy operations: reverse the channel axis to switch between RGB and BGR, and transpose to switch between HWC and CHW layouts.

For example:
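A minimal sketch of both conversions (the image here is a placeholder array):

import numpy as np

img_bgr = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder HWC image in BGR order
img_rgb = img_bgr[:, :, ::-1]                      # BGR -> RGB: reverse the channel axis
img_chw = np.transpose(img_rgb, (2, 0, 1))         # HWC -> CHW (e.g. for PyTorch tensors)
print(img_chw.shape)                               # (3, 480, 640)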

OpenCV image operations

img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # BGR to grayscale

img_bgr = cv2.cvtColor(img_gray, cv2.COLOR_GRAY2BGR)  # grayscale to BGR

img_rgb = cv2.cvtColor(img_gray, cv2.COLOR_GRAY2RGB)  # grayscale to RGB

b, g, r = cv2.split(img)     # split the BGR image into its three channels
img2 = cv2.merge([r, g, b])  # merge them back as an RGB image

PIL image operations

img = Image.open('examples.png')
img_gray = img.convert('L')
img_color = img_gray.convert('RGB')

PIL and numpy conversion

numpy.asarray()     # PIL image -> numpy array
Image.fromarray()   # numpy array -> PIL image

When converting from PIL to OpenCV, remember to take a copy of the array before applying cv2 operations; otherwise there can be bugs. Keep this in mind.

Size

# For PIL images, size information comes from the .size attribute and is ordered (W, H)
print(image.size)  # width, height
# resize also takes (width, height)
img2_resize = img2.resize((960, 540))
img2_resize.save('test1.jpg')
# cv2.resize likewise takes (width, height)
img3_resize = cv2.resize(img3, (960, 540))
cv2.imwrite('test2.jpg', img3_resize)
# but once the image is a numpy array, .shape gives (H, W, C)

Saving

PIL

# just call the save() method
img = Image.open('examples.png')
img.save('examples2.png')
img_gray = img.convert('L')
img_gray.save('examples_gray.png')  # grayscale or color, save() works the same; note that only PIL images have a save() method

CV2

import cv2
img = cv2.imread('examples.png')  # this is a BGR image
cv2.imwrite('examples2.png', img)  # imwrite also expects BGR; be careful: images read with pylab or PIL are RGB and must be converted before saving with OpenCV
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.imwrite('examples_gray.png', img_gray)
ghost commented 5 years ago

Class attributes and methods

class pub():
    _name = 'a protected-style attribute'
    __info = 'a private-style attribute'
    def _func(self):
        print("a protected-style method")
    def __func2(self):
        print('a private-style method')
    def get(self):
        return self.__info

Note: strictly speaking, this does not define truly protected or private members; Python only applies a naming convention and name mangling, as discussed here:

https://segmentfault.com/a/1190000002611411
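A quick check of the class above (a sketch) shows that the double underscore only triggers name mangling rather than real access control:

p = pub()
print(p._name)        # the single underscore is just a convention; access still works
print(p.get())        # the intended accessor for the "private" attribute
print(p._pub__info)   # the mangled name is reachable from outside
# print(p.__info)     # this, however, raises AttributeError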

lartpang commented 5 years ago

Python module path issues