The model looks at many examples (training) and computes, as probabilities, which digit each image represents.
The computation uses softmax regression. The resulting test accuracy is about 92%.
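As a quick illustration of what softmax itself does (not part of the original tutorial, a standalone numpy sketch with made-up scores): it turns an arbitrary vector of class scores into a probability distribution that sums to 1.

import numpy as np

def softmax(z):
    # subtract the max score for numerical stability; the result is unchanged
    e = np.exp(z - np.max(z))
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1])   # hypothetical scores for three classes
print(softmax(scores))               # ~ [0.659, 0.242, 0.099]

The full MNIST version of this idea is the code below.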
import tensorflow as tf

# download the MNIST data set (labels are one-hot encoded)
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# tf Graph input: each image is a flattened 28x28 = 784 vector
x = tf.placeholder(tf.float32, [None, 784])

# weight, bias
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

# softmax regression model and the true labels
y = tf.nn.softmax(tf.matmul(x, W) + b)
y_ = tf.placeholder(tf.float32, [None, 10])

# cross-entropy loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), axis=1))

# on how the data and variables are shaped: https://stackoverflow.com/questions/40088132/tensorflow-mnist-weight-and-bias-variables
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

init = tf.global_variables_initializer()

sess = tf.Session()
sess.run(init)

batch_size = 100
for i in range(1000):
    # train batch_size examples at a time
    batch_xs, batch_ys = mnist.train.next_batch(batch_size)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# fraction of test images whose predicted digit matches the label
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
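After training, the same graph can classify a single image: run the softmax output y and take the most probable class. A minimal sketch, assuming the sess, x, y, and mnist objects defined above are still alive:

import numpy as np

image = mnist.test.images[0].reshape(1, 784)   # one 28x28 test image, flattened
probs = sess.run(y, feed_dict={x: image})      # softmax probabilities, shape (1, 10)
print("predicted digit:", np.argmax(probs, axis=1)[0])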