Fixing grads = K.gradients(loss, model.input)[0]

Original code: grads = K.gradients(loss, model.input)[0]

The error that appears:
RuntimeError                              Traceback (most recent call last)
Cell In[19], line 4
      1 # K.gradients returns a list of tensors (of size 1 in this case),
      2 # so we keep only the first element, which is a tensor.
----> 4 grads = K.gradients(loss, model.input)[0]

File C:\ProgramData\Anaconda3\lib\site-packages\keras\src\backend.py:4695, in gradients(loss, variables)
   4683 @keras_export("keras.backend.gradients")
   4684 @doc_controls.do_not_generate_docs
   4685 def gradients(loss, variables):
   4686     """Returns the gradients of `loss` w.r.t. `variables`.
   4687
   4688     Args:
   (...)
   4693       A gradients tensor.
   4694     """
-> 4695     return tf.compat.v1.gradients(
   4696         loss, variables, colocate_gradients_with_ops=True
   4697     )

File C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gradients_impl.py:165, in gradients(ys, xs, grad_ys, name, colocate_gradients_with_ops, gate_gradients, aggregation_method, stop_gradients, unconnected_gradients)
    160 # Creating the gradient graph for control flow mutates Operations.
    161 # _mutation_lock ensures a Session.run call cannot occur between creating and
    162 # mutating new ops.
    163 # pylint: disable=protected-access
    164 with ops.get_default_graph()._mutation_lock():
--> 165     return gradients_util._GradientsHelper(
    166         ys, xs, grad_ys, name, colocate_gradients_with_ops,
    167         gate_gradients, aggregation_method, stop_gradients,
    168         unconnected_gradients)

File C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gradients_util.py:476, in _GradientsHelper(ys, xs, grad_ys, name, colocate_gradients_with_ops, gate_gradients, aggregation_method, stop_gradients, unconnected_gradients, src_graph)
    474 """Implementation of gradients()."""
    475 if context.executing_eagerly():
--> 476     raise RuntimeError("tf.gradients is not supported when eager execution "
    477                        "is enabled. Use tf.GradientTape instead.")
    478 ys = variable_utils.convert_variables_to_tensors(_AsList(ys))
    479 xs = [
    480     x.handle if resource_variable_ops.is_resource_variable(x) else x
    481     for x in _AsList(xs)
    482 ]

RuntimeError: tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead.
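As the last frame of the traceback shows, tf.gradients refuses to run whenever eager execution is active, and eager execution is on by default in TensorFlow 2.x. A minimal check (assuming a default TF 2.x installation):

```python
import tensorflow as tf

# Eager execution is enabled by default in TensorFlow 2.x, which is
# exactly the condition _GradientsHelper checks before raising the
# RuntimeError shown above.
print(tf.executing_eagerly())  # True under default TF 2.x settings
```

That is why the old K.gradients call, which worked in graph-mode TF 1.x code, now fails even though nothing else in the script changed.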

Changing it to the following fixes the problem:
import numpy as np
import tensorflow as tf
from tensorflow.keras import optimizers

# The rest of your code remains the same.

# Compile your model before using it in the gradient computation.
model.compile(loss='binary_crossentropy',
              optimizer=optimizers.RMSprop(learning_rate=1e-5),
              metrics=['acc'])

# Dummy input for the example gradient computation (replace with actual input).
input_height = 128    # replace with the height of your input images
input_width = 128     # replace with the width of your input images
input_channels = 3    # replace with the number of channels in your input images
dummy_input = np.random.random((1, input_height, input_width, input_channels))

# Compute the loss and gradients using GradientTape.
with tf.GradientTape() as tape:
    inputs = tf.convert_to_tensor(dummy_input)
    tape.watch(inputs)  # plain tensors must be watched explicitly,
                        # otherwise tape.gradient(loss, inputs) returns None
    predictions = model(inputs)
    loss = tf.reduce_mean(predictions)

# Compute gradients with respect to the model's input.
grads = tape.gradient(loss, inputs)

# Now you can use the gradients for further processing or analysis.
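The one subtlety worth calling out is tape.watch: a GradientTape only records operations on trainable variables automatically, so the gradient with respect to a plain input tensor comes back as None unless the tensor is watched explicitly. A tiny standalone sketch, using a toy function in place of the model:

```python
import tensorflow as tf

# Toy stand-in for the model: y = sum(x^2), so dy/dx = 2x.
x = tf.constant([[1.0, 2.0]])

# Without watch(): plain tensors are not tracked, so the gradient is None.
with tf.GradientTape() as tape:
    y = tf.reduce_sum(x * x)
g_unwatched = tape.gradient(y, x)

# With watch(): the tape records the operations on x.
with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.reduce_sum(x * x)
g_watched = tape.gradient(y, x)

print(g_unwatched)        # None
print(g_watched.numpy())  # [[2. 4.]]
```

Model weights (tf.Variable objects) do not need this, which is why training loops written with GradientTape usually omit watch() entirely.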
