[AI #6] Applying multinomial classification
[ Applying multinomial classification ]
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
# Lab 6 Softmax Classifier
import tensorflow as tf
tf.set_random_seed(777) # for reproducibility
x_data = [[1, 2, 1, 1],
          [2, 1, 3, 2],
          [3, 1, 3, 4],
          [4, 1, 5, 5],
          [1, 7, 5, 5],
          [1, 2, 5, 6],
          [1, 6, 6, 6],
          [1, 7, 7, 7]]
y_data = [[0, 0, 1],  # class 2
          [0, 0, 1],  # class 2
          [0, 0, 1],  # class 2
          [0, 1, 0],  # class 1
          [0, 1, 0],  # class 1
          [0, 1, 0],  # class 1
          [1, 0, 0],  # class 0
          [1, 0, 0]]  # class 0
X = tf.placeholder("float", [None, 4])
Y = tf.placeholder("float", [None, 3])
nb_classes = 3
W = tf.Variable(tf.random_normal([4, nb_classes]), name='weight')
b = tf.Variable(tf.random_normal([nb_classes]), name='bias')
# tf.nn.softmax computes softmax activations
# ==> converts the scores into probabilities between 0 and 1
# softmax = exp(logits) / reduce_sum(exp(logits), dim)
hypothesis = tf.nn.softmax(tf.matmul(X, W) + b)
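# A quick side check (not part of the lab): the softmax formula in the comment
# above can be reproduced with plain NumPy; the logit values here are made up
# purely for illustration.
import numpy as np
demo_logits = np.array([2.0, 1.0, 0.1])   # hypothetical scores (XW + b)
demo_probs = np.exp(demo_logits) / np.sum(np.exp(demo_logits))
print(demo_probs)        # ~[0.659 0.242 0.099]
print(demo_probs.sum())  # 1.0 -- softmax outputs always sum to 1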
# Cross entropy cost/loss
cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(hypothesis), axis=1))
# This is the cross-entropy function D(S, L) = -sum_i( L_i * log(S_i) );
# the two expressions give the same result, so they can be considered the
# same function. It is constructed so that the cost is small when the
# prediction is right and large when it is wrong, and it fulfills that
# role faithfully.
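# A numeric illustration of that behavior (hypothetical numbers): with the
# label L = [0, 0, 1],
#   a good prediction S = [0.1, 0.1, 0.8] costs -log(0.8) ~= 0.22 (small),
#   a bad prediction  S = [0.8, 0.1, 0.1] costs -log(0.1) ~= 2.30 (large).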
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)
# runs gradient descent to minimize the cost
# Launch graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for step in range(2001):
        sess.run(optimizer, feed_dict={X: x_data, Y: y_data})
        if step % 200 == 0:
            print(step, sess.run(cost, feed_dict={X: x_data, Y: y_data}))
    print('--------------')
    # Testing & One-hot encoding
    a = sess.run(hypothesis, feed_dict={X: [[1, 11, 7, 9]]})
    print(a, sess.run(tf.argmax(a, 1)))
    # ==> prints the three predicted probabilities plus the one-hot (argmax)
    #     index, i.e. it decides which class the input belongs to
    # ==> example: express the features of a handwritten character as x values
    #     and produce three prediction outputs; the class with the highest
    #     probability is taken as that character.
    # ==> example: express the features of each class (dog, cat, ...) with
    #     several x values, predict y, and select the highest probability;
    #     the largest value determines the class (cat, dog, etc.).
    print('--------------')
    b = sess.run(hypothesis, feed_dict={X: [[1, 3, 4, 3]]})
    print(b, sess.run(tf.argmax(b, 1)))

    print('--------------')
    c = sess.run(hypothesis, feed_dict={X: [[1, 1, 0, 1]]})
    print(c, sess.run(tf.argmax(c, 1)))

    print('--------------')
    all = sess.run(hypothesis, feed_dict={
        X: [[1, 11, 7, 9], [1, 3, 4, 3], [1, 1, 0, 1]]})
    print(all, sess.run(tf.argmax(all, 1)))
'''
--------------
[[ 1.38904958e-03 9.98601854e-01 9.06129117e-06]] [1] ==> each row sums to 1
--------------
[[ 0.93119204 0.06290206 0.0059059 ]] [0]
--------------
[[ 1.27327668e-08 3.34112905e-04 9.99665856e-01]] [2]
--------------
[[ 1.38904958e-03 9.98601854e-01 9.06129117e-06]
[ 9.31192040e-01 6.29020557e-02 5.90589503e-03]
[ 1.27327668e-08 3.34112905e-04 9.99665856e-01]] [1 0 2]
'''
[ Implementing fancy softmax classification ]
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
# Lab 6 Softmax Classifier
import tensorflow as tf
import numpy as np
tf.set_random_seed(777) # for reproducibility
# Predicting animal type based on various features
xy = np.loadtxt('data-04-zoo.csv', delimiter=',', dtype=np.float32)
x_data = xy[:, 0:-1]
y_data = xy[:, [-1]]
print(x_data.shape, y_data.shape)
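# Slicing note: xy[:, 0:-1] keeps all 16 feature columns, while xy[:, [-1]]
# keeps the last column as a 2-D (N, 1) array (xy[:, -1] would give a 1-D
# (N,) array instead). For the zoo data this should print (101, 16) (101, 1).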
nb_classes = 7 # 0 ~ 6
X = tf.placeholder(tf.float32, [None, 16])
Y = tf.placeholder(tf.int32, [None, 1]) # 0 ~ 6
Y_one_hot = tf.one_hot(Y, nb_classes) # one hot
print("one_hot", Y_one_hot)
Y_one_hot = tf.reshape(Y_one_hot, [-1, nb_classes])
# one_hot adds one rank (dimension); reshape is what brings it back down again
print("reshape", Y_one_hot)
W = tf.Variable(tf.random_normal([16, nb_classes]), name='weight')
b = tf.Variable(tf.random_normal([nb_classes]), name='bias')
# tf.nn.softmax computes softmax activations
# softmax = exp(logits) / reduce_sum(exp(logits), dim)
logits = tf.matmul(X, W) + b
hypothesis = tf.nn.softmax(logits)
# Cross entropy cost/loss
cost_i = tf.nn.softmax_cross_entropy_with_logits(logits=logits,
                                                 labels=Y_one_hot)
cost = tf.reduce_mean(cost_i)
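# Note: this built-in op computes the same cross entropy as the manual form in
# the first example, i.e. tf.reduce_mean(-tf.reduce_sum(
# Y_one_hot * tf.log(hypothesis), axis=1)), but in a numerically safer way.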
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)
prediction = tf.argmax(hypothesis, 1)
correct_prediction = tf.equal(prediction, tf.argmax(Y_one_hot, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
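# How the accuracy works, on hypothetical values: predictions [0, 3, 3] vs.
# labels [0, 3, 5] -> equal gives [True, True, False] -> cast gives
# [1., 1., 0.] -> reduce_mean gives 0.667.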
# Launch graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for step in range(2000):
        sess.run(optimizer, feed_dict={X: x_data, Y: y_data})
        if step % 100 == 0:
            loss, acc = sess.run([cost, accuracy], feed_dict={
                X: x_data, Y: y_data})
            print("Step: {:5}\tLoss: {:.3f}\tAcc: {:.2%}".format(
                step, loss, acc))

    # Let's see if we can predict
    pred = sess.run(prediction, feed_dict={X: x_data})
    # y_data: (N, 1) = flatten => (N, ) matches pred.shape
    for p, y in zip(pred, y_data.flatten()):
        print("[{}] Prediction: {} True Y: {}".format(p == int(y), p, int(y)))
'''
Step: 0 Loss: 5.106 Acc: 37.62%
Step: 100 Loss: 0.800 Acc: 79.21%
Step: 200 Loss: 0.486 Acc: 88.12%
Step: 300 Loss: 0.349 Acc: 90.10%
Step: 400 Loss: 0.272 Acc: 94.06%
Step: 500 Loss: 0.222 Acc: 95.05%
Step: 600 Loss: 0.187 Acc: 97.03%
Step: 700 Loss: 0.161 Acc: 97.03%
Step: 800 Loss: 0.140 Acc: 97.03%
Step: 900 Loss: 0.124 Acc: 97.03%
Step: 1000 Loss: 0.111 Acc: 97.03%
Step: 1100 Loss: 0.101 Acc: 99.01%
Step: 1200 Loss: 0.092 Acc: 100.00%
'''
[ References ]
https://www.inflearn.com/course/기본적인-머신러닝-딥러닝-강좌/
http://agiantmind.tistory.com/176
https://www.tensorflow.org/install/
https://github.com/hunkim/deeplearningzerotoall
http://www.derivative-calculator.net/
http://terms.naver.com/entry.nhn?docId=3350391&cid=58247&categoryId=58247 ==> derivative rules/formulas
http://matplotlib.org/users/installing.html ==> installing matplotlib
https://www.kaggle.com/ ==> a site where sample data sets can be found