Fixing plt.imshow(generate_pattern('block3_conv1', 0))
Original code:
plt.imshow(generate_pattern('block3_conv1', 0))
plt.show()
Error raised:

RuntimeError                              Traceback (most recent call last)
Cell In[39], line 1
----> 1 plt.imshow(generate_pattern('block3_conv1', 0))
      2 plt.show()

Cell In[36], line 7, in generate_pattern(layer_name, filter_index, size)
      4 loss = K.mean(layer_output[:, :, :, filter_index])
      6 # Compute the gradient of this loss with respect to the input image
----> 7 grads = K.gradients(loss, model.input)[0]
      9 # Normalize the gradient
     10 grads /= (K.sqrt(K.mean(K.square(grads))) + 1e-5)

File C:\ProgramData\Anaconda3\lib\site-packages\keras\src\backend.py:4695, in gradients(loss, variables)
-> 4695 return tf.compat.v1.gradients(
   4696     loss, variables, colocate_gradients_with_ops=True
   4697 )

File C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gradients_impl.py:165, in gradients(ys, xs, grad_ys, name, colocate_gradients_with_ops, gate_gradients, aggregation_method, stop_gradients, unconnected_gradients)
--> 165 return gradients_util._GradientsHelper(
    166     ys, xs, grad_ys, name, colocate_gradients_with_ops,
    167     gate_gradients, aggregation_method, stop_gradients,
    168     unconnected_gradients)

File C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gradients_util.py:476, in _GradientsHelper(ys, xs, grad_ys, name, colocate_gradients_with_ops, gate_gradients, aggregation_method, stop_gradients, unconnected_gradients, src_graph)
    475 if context.executing_eagerly():
--> 476     raise RuntimeError("tf.gradients is not supported when eager execution "
    477                        "is enabled. Use tf.GradientTape instead.")

RuntimeError: tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead.
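The root cause: TensorFlow 2.x runs eagerly by default, so the graph-mode `K.gradients` / `tf.gradients` API is unavailable, and gradients must instead be recorded with `tf.GradientTape`. A minimal sketch of the replacement pattern:

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x  # y = x^2, computed while the tape records operations
grad = tape.gradient(y, x)  # dy/dx = 2x evaluated at x = 3.0
print(float(grad))  # 6.0
```

The same pattern applies to the filter-visualization code: compute the loss inside the tape's context, then ask the tape for its gradient with respect to the input variable.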
The following fix resolves it:
import numpy as np
import tensorflow as tf

def generate_pattern(layer_name, filter_index, size=150):
    # Build a sub-model that maps the input image to the target layer's activation
    layer_output = model.get_layer(layer_name).output
    feature_extractor = tf.keras.Model(inputs=model.input, outputs=layer_output)

    # Start from a gray image with random noise
    input_img_data = tf.Variable(
        np.random.random((1, size, size, 3)) * 20 + 128., dtype=tf.float32)

    # Run gradient ascent for 40 steps
    step = 1.
    for i in range(40):
        with tf.GradientTape() as tape:
            activation = feature_extractor(input_img_data)
            # Loss that maximizes the activation of the nth filter in the layer
            loss = tf.reduce_mean(activation[:, :, :, filter_index])
        # Compute the gradient of the loss with respect to the input image
        grads = tape.gradient(loss, input_img_data)
        # Normalize the gradient
        grads /= (tf.sqrt(tf.reduce_mean(tf.square(grads))) + 1e-5)
        input_img_data.assign_add(grads * step)

    img = input_img_data.numpy()[0]
    return deprocess_image(img)
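The function above assumes a `deprocess_image` helper that converts the raw float tensor into a displayable image. If it is not already defined in your notebook, a minimal sketch (following the standard normalize-and-clip recipe; exact scaling constants are a convention, not part of the fix) looks like this:

```python
import numpy as np

def deprocess_image(x):
    # Center on 0 and scale to a small standard deviation
    x = x - x.mean()
    x = x / (x.std() + 1e-5)
    x = x * 0.1
    # Shift into [0, 1] and clip
    x = x + 0.5
    x = np.clip(x, 0, 1)
    # Convert to an 8-bit RGB image
    x = (x * 255).astype('uint8')
    return x
```

With both functions defined, `plt.imshow(generate_pattern('block3_conv1', 0))` followed by `plt.show()` displays the pattern that maximally activates the chosen filter.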