Lab1 : TensorFlow Basics
TensorFlow Installation
- https://www.tensorflow.org/install/
- After installing Anaconda, launch the Anaconda Prompt and run the remaining steps there
- For the GPU version, create a separate conda environment first and run inside it
>>> import tensorflow as tf
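To confirm the install, check the version from the same prompt (the exact version string depends on your setup):
>>> print(tf.__version__)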
TensorFlow Execution Example
import tensorflow as tf
node1 = tf.constant(3.0)          # build graph nodes (constants)
node2 = tf.constant(4.0)
node3 = tf.add(node1, node2)      # op node that adds the two constants
sess = tf.Session()               # launch the graph in a session
print(sess.run([node1, node2]))   # [3.0, 4.0]
print(sess.run(node3))            # 7.0
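Note: these examples use the TensorFlow 1.x graph/session API. If TensorFlow 2.x is installed instead, one commonly used way to keep running them is the v1 compatibility module (an assumption about your setup, not part of the original notes):
import tensorflow.compat.v1 as tf   # 1.x-style API shipped with TF 2.x
tf.disable_v2_behavior()            # disable eager execution so tf.Session() works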
TensorFlow Execution Steps
- Build a graph using TensorFlow operations
- Run the graph (operations) in a session: sess.run(op)
- This returns the results and updates any variables in the graph (see the short sketch below)
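A minimal sketch of the build -> run -> update cycle (my own illustration, not code from the lecture):
import tensorflow as tf
counter = tf.Variable(0, name='counter')      # build: a variable node in the graph
increment = tf.assign_add(counter, 1)         # build: an op that updates the variable
sess = tf.Session()
sess.run(tf.global_variables_initializer())   # variables must be initialized before use
print(sess.run(increment))                    # run the op -> 1
print(sess.run(increment))                    # each run updates the variable -> 2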
Creating nodes with a placeholder: the actual data is supplied later at run time via feed_dict, much like arguments to a function.
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b                                             # shortcut for tf.add(a, b)
print(sess.run(adder_node, feed_dict={a: 3, b: 4.5}))          # 7.5
print(sess.run(adder_node, feed_dict={a: [1, 3], b: [2, 4]}))  # [3. 7.]
Tensor : an n-dimensional array (see the short sketch after this list)
- Rank - number of dimensions: 0: scalar, 1: vector, 2: matrix, 3: 3-tensor (cube)
- Shape - e.g. [[1,2], [3,4], [4,5]] has shape [3, 2]
- Type - DT_FLOAT, DT_DOUBLE, DT_INT8, DT_INT16, DT_INT32, ...
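A small illustration of rank, shape, and dtype (my own sketch, using the same TF 1.x setup as above):
t0 = tf.constant(3.0)                       # rank 0 (scalar), shape (),     dtype float32
t1 = tf.constant([1., 2., 3.])              # rank 1 (vector), shape (3,),   dtype float32
t2 = tf.constant([[1, 2], [3, 4], [4, 5]])  # rank 2 (matrix), shape (3, 2), dtype int32
print(t2.shape, t2.dtype)                   # (3, 2) <dtype: 'int32'>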
Lab2 : Linear Regression
import tensorflow as tf
# H(x) = Wx + b
x_train = [1, 2, 3, 4, 5]
y_train = [1, 2, 3, 4, 5]
W = tf.Variable(tf.random_normal([1]), name='weight') # trainable Variable
b = tf.Variable(tf.random_normal([1]), name='bias')
hypothesis = x_train * W + b
# cost(W,b)...
cost = tf.reduce_mean(tf.square(hypothesis - y_train))
# Gradient descent optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train = optimizer.minimize(cost)
# Launch the graph
sess = tf.Session()
# Initialize global variables in the graph
sess.run(tf.global_variables_initializer())
# Fitting
for step in range(2001):
    sess.run(train)
    if step % 50 == 0:
        print(step, sess.run(cost), sess.run(W), sess.run(b))
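A common variation (my own sketch, not part of the notes above): feed the training data through placeholders so the same graph can also predict on new inputs.
import tensorflow as tf
X = tf.placeholder(tf.float32, shape=[None])     # inputs fed at run time
Y = tf.placeholder(tf.float32, shape=[None])     # targets fed at run time
W = tf.Variable(tf.random_normal([1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')
hypothesis = X * W + b                            # same model H(x) = Wx + b
cost = tf.reduce_mean(tf.square(hypothesis - Y))
train = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)
sess = tf.Session()
sess.run(tf.global_variables_initializer())
for step in range(2001):
    sess.run(train, feed_dict={X: [1, 2, 3, 4, 5], Y: [1, 2, 3, 4, 5]})
print(sess.run(hypothesis, feed_dict={X: [6, 7]}))   # predictions for unseen inputs, close to [6. 7.]
shape=[None] lets each placeholder accept a 1-D input of any length.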
Lab3 : Minimizing Cost