DeepDream (Deep Learning with Python)

2021/7/22 22:11:00

This post walks through the DeepDream example from Deep Learning with Python; if you are running into problems with this code, it may serve as a useful reference.

Parts of the code have been modified, because the code as printed in the book would not run (probably due to library version differences).

Source code:

# -*- coding: utf-8 -*-
# @Time : 2021/7/22
# @Author : pistachio
# @File : p24.py
# @Software : PyCharm
from keras.applications import inception_v3
from keras import backend as K
import numpy as np
import scipy
from keras.preprocessing import image
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
import imageio
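
# K.gradients and K.function below rely on the graph-mode Keras backend,
# which is why eager execution is disabled above when running on TensorFlow 2.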

K.set_learning_phase(0)

model = inception_v3.InceptionV3(weights='imagenet',
                                 include_top=False)
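
# include_top=False loads the pretrained InceptionV3 without its dense
# classification head; DeepDream only needs the convolutional feature maps.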

# set DeepDream config

layer_contributions = {
    'mixed2': 0.2,
    'mixed3': 3.,
    'mixed4': 2.,
    'mixed5': 1.5,
}
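
# Each coefficient weights how much that layer's activations contribute to the
# loss being maximized: lower layers tend to yield geometric patterns, higher
# layers yield more complex, object-like textures.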

# define the loss to be maximized

layer_dict = dict([(layer.name, layer) for layer in model.layers])

loss = K.variable(0.)

for layer_name in layer_contributions:
    coeff = layer_contributions[layer_name]
    activation = layer_dict[layer_name].output

    # Add the scaled L2 norm of this layer's activations to the loss
    # (borders are excluded to avoid edge artifacts).
    scaling = K.prod(K.cast(K.shape(activation), 'float32'))
    loss = loss + coeff * K.sum(K.square(activation[:, 2: -2, 2: -2, :])) / scaling
    
# set up the gradient-ascent process
dream = model.input
grads = K.gradients(loss, dream)[0]
grads /= K.maximum(K.mean(K.abs(grads)), 1e-7)
outputs = [loss, grads]
fetch_loss_and_grads = K.function([dream], outputs)
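
# fetch_loss_and_grads is a compiled backend function that maps an input image
# to the current loss value and the gradient of the loss w.r.t. that image.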

def eval_loss_and_grads(x):
    outs = fetch_loss_and_grads([x])
    loss_value = outs[0]
    grad_values = outs[1]
    return loss_value, grad_values

def gradient_ascent(x, iterations, step, max_loss=None):
    for i in range(iterations):
        loss_value, grad_values = eval_loss_and_grads(x)
        if max_loss is not None and loss_value > max_loss:
            break
        print('...Loss value at', i, ':', loss_value)
        x += step * grad_values
    return x

def resize_img(img, size):
    img = np.copy(img)
    factors = ( 1,
                float(size[0]) / img.shape[1],
                float(size[1]) / img.shape[2],
                1)
    return scipy.ndimage.zoom(img, factors, order=1)

def save_img(img, fname):
    pil_img = deprocess_image(np.copy(img))
    imageio.imwrite(fname, pil_img)
    
def preprocess_image(image_path):
    img = image.load_img(image_path)
    img = image.img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = inception_v3.preprocess_input(img)
    return img
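
# preprocess_image above maps pixels into [-1, 1] via
# inception_v3.preprocess_input; deprocess_image below reverses that,
# converting the tensor back to a displayable uint8 array.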

def deprocess_image(x):
    if K.image_data_format() == 'channels_first':
        x = x.reshape((3, x.shape[2], x.shape[3]))
        x = x.transpose((1, 2, 0))
    else:
        x = x.reshape((x.shape[1], x.shape[2], 3))
    x /= 2.
    x += 0.5
    x *= 255.
    x = np.clip(x, 0, 255).astype('uint8')
    return x
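
# Hyperparameters for the multi-scale ("octave") gradient ascent:
#   step         - step size for each gradient-ascent update
#   num_octave   - number of scales at which to run gradient ascent
#   octave_scale - size ratio between successive scales
#   iterations   - ascent steps per scale
#   max_loss     - stop ascending once the loss exceeds this value,
#                  to avoid over-processed artifacts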

step = 0.01
num_octave = 3
octave_scale = 1.4
iterations = 20
max_loss = 10.
base_image_path = r'D:\PYCHARMprojects\Dailypractise\data\images\zgh.png'
img = preprocess_image(base_image_path)
original_shape = img.shape[1:3]
successive_shapes = [original_shape]

for i in range(1, num_octave):
    shape = tuple([int(dim / (octave_scale ** i))
                   for dim in original_shape])
    successive_shapes.append(shape)

# Process scales from smallest to largest, re-injecting the detail lost by
# downscaling after each pass of gradient ascent.
successive_shapes = successive_shapes[::-1]
original_img = np.copy(img)
shrunk_original_img = resize_img(img, successive_shapes[0])

for shape in successive_shapes:
    print('Processing image shape', shape)
    img = resize_img(img, shape)
    img = gradient_ascent(img,
                          iterations=iterations,
                          step=step,
                          max_loss=max_loss)
    upscaled_shrunk_original_img = resize_img(shrunk_original_img, shape)
    same_size_original = resize_img(original_img, shape)
    lost_detail = same_size_original - upscaled_shrunk_original_img
    img += lost_detail
    shrunk_original_img = resize_img(original_img, shape)
    save_img(img, fname='dream_at_scale_' + str(shape) + '.png')

save_img(img, fname='final_dream.png')
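
For anyone on a newer TensorFlow who would rather not disable eager execution, here is a minimal, untested sketch of how the same loss-and-gradient computation could be written with tf.GradientTape. The names feature_extractor and compute_loss_and_grads are illustrative, not part of the original script, and the run log that follows was produced by the compat-mode script above, not by this sketch.

# Sketch: eager-mode (TF2) loss and gradients for DeepDream.
import tensorflow as tf

layer_contributions = {'mixed2': 0.2, 'mixed3': 3., 'mixed4': 2., 'mixed5': 1.5}

model = tf.keras.applications.inception_v3.InceptionV3(weights='imagenet',
                                                       include_top=False)
outputs_dict = {name: model.get_layer(name).output
                for name in layer_contributions}
feature_extractor = tf.keras.Model(inputs=model.inputs, outputs=outputs_dict)

def compute_loss_and_grads(img):
    # img: float32 tensor of shape (1, height, width, 3), already preprocessed
    with tf.GradientTape() as tape:
        tape.watch(img)
        features = feature_extractor(img)
        loss = tf.zeros(shape=())
        for name, coeff in layer_contributions.items():
            activation = features[name]
            scaling = tf.reduce_prod(tf.cast(tf.shape(activation), 'float32'))
            # Same loss as above: scaled L2 norm of the activations, borders excluded
            loss += coeff * tf.reduce_sum(
                tf.square(activation[:, 2:-2, 2:-2, :])) / scaling
    grads = tape.gradient(loss, img)
    # Normalize the gradients, as in the graph-mode version
    grads /= tf.maximum(tf.reduce_mean(tf.abs(grads)), 1e-7)
    return loss, grads

A gradient-ascent step then becomes img = img + step * grads inside an ordinary Python loop, with the same octave and detail-reinjection logic as above.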

Run output:

D:\Anaconda\envs\tensorflow\python.exe D:/PYCHARMprojects/Dailypractise/p24.py
WARNING:tensorflow:From D:/PYCHARMprojects/Dailypractise/p24.py:15: set_learning_phase (from tensorflow.python.keras.backend) is deprecated and will be removed after 2020-10-11.
Instructions for updating:
Simply pass a True/False value to the `training` argument of the `__call__` method of your layer or model.
2021-07-22 19:16:20.610261: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
Processing image shape (214, 380)
...Loss value at 0 : 0.23929241
...Loss value at 1 : 0.31458685
...Loss value at 2 : 0.45655105
...Loss value at 3 : 0.63419825
...Loss value at 4 : 0.80707663
...Loss value at 5 : 1.005498
...Loss value at 6 : 1.2241814
...Loss value at 7 : 1.4203368
...Loss value at 8 : 1.6686953
...Loss value at 9 : 1.887331
...Loss value at 10 : 2.1674018
...Loss value at 11 : 2.398844
...Loss value at 12 : 2.6251698
...Loss value at 13 : 2.799604
...Loss value at 14 : 3.0620894
...Loss value at 15 : 3.2514527
...Loss value at 16 : 3.5478892
...Loss value at 17 : 3.772508
...Loss value at 18 : 3.997301
...Loss value at 19 : 4.1313596
Processing image shape (300, 533)
...Loss value at 0 : 0.7091144
...Loss value at 1 : 1.2117096
...Loss value at 2 : 1.6464589
...Loss value at 3 : 2.0503352
...Loss value at 4 : 2.4257367
...Loss value at 5 : 2.7572763
...Loss value at 6 : 3.0393498
...Loss value at 7 : 3.3627818
...Loss value at 8 : 3.6663146
...Loss value at 9 : 4.0352416
...Loss value at 10 : 4.2996826
...Loss value at 11 : 4.5605884
...Loss value at 12 : 4.8835526
...Loss value at 13 : 5.144657
...Loss value at 14 : 5.4622097
...Loss value at 15 : 5.707896
...Loss value at 16 : 5.9486456
...Loss value at 17 : 6.131893
...Loss value at 18 : 6.5231385
...Loss value at 19 : 6.7280445
Processing image shape (153, 271)
...Loss value at 0 : 0.23545246
...Loss value at 1 : 0.5281047
...Loss value at 2 : 0.872685
...Loss value at 3 : 1.2010163
...Loss value at 4 : 1.4984994
...Loss value at 5 : 1.751119
...Loss value at 6 : 1.9609538
...Loss value at 7 : 2.2237175
...Loss value at 8 : 2.4595366
...Loss value at 9 : 2.66988
...Loss value at 10 : 2.9131498
...Loss value at 11 : 3.121789
...Loss value at 12 : 3.3527956
...Loss value at 13 : 3.5521648
...Loss value at 14 : 3.6582441
...Loss value at 15 : 3.8582535
...Loss value at 16 : 4.0456595
...Loss value at 17 : 4.2001696
...Loss value at 18 : 4.428154
...Loss value at 19 : 4.5263395
Processing image shape (300, 533)
...Loss value at 0 : 0.55826104
...Loss value at 1 : 1.1834915
...Loss value at 2 : 1.87849
...Loss value at 3 : 2.476308
...Loss value at 4 : 3.0957234
...Loss value at 5 : 3.5740178
...Loss value at 6 : 4.0973105
...Loss value at 7 : 4.489731
...Loss value at 8 : 4.8811307
...Loss value at 9 : 5.1922107
...Loss value at 10 : 5.632334
...Loss value at 11 : 5.935752
...Loss value at 12 : 6.3485394
...Loss value at 13 : 6.619316
...Loss value at 14 : 6.9723473
...Loss value at 15 : 7.2498198
...Loss value at 16 : 7.6495433
...Loss value at 17 : 7.848679
...Loss value at 18 : 8.2405
...Loss value at 19 : 8.46919
Processing image shape (214, 380)
...Loss value at 0 : 1.8388493
...Loss value at 1 : 3.1717606
...Loss value at 2 : 4.071284
...Loss value at 3 : 4.5586343
...Loss value at 4 : 5.116611
...Loss value at 5 : 5.6414633
...Loss value at 6 : 6.085823
...Loss value at 7 : 6.494198
...Loss value at 8 : 6.8460126
...Loss value at 9 : 7.2819858
...Loss value at 10 : 7.7683487
...Loss value at 11 : 8.084449
...Loss value at 12 : 8.521009
...Loss value at 13 : 8.910183
...Loss value at 14 : 9.201728
...Loss value at 15 : 9.539077
...Loss value at 16 : 9.833274

Process finished with exit code 0

Original image:

(image)

Result image:

(image)

Be sure to put the original image in place before running; don't forget.




