Deep Learning/Theory (6)
“Domain” and “Task”: a domain relates to the feature space of a specific dataset and the marginal probability distribution of its features; a task relates to the label space of a dataset and an objective predictive function. The goal of transfer learning is to transfer the knowledge learned from task (a) on domain A to task (b) on domain B. It is common to update only the last several layers of the pre-trained network ..
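
To make the fine-tuning idea concrete, here is a minimal PyTorch sketch of freezing a pre-trained network and updating only its last layer; the backbone choice (ResNet-18), the ImageNet weights, and `num_classes_b` are assumptions for illustration, not details from the post.

```python
import torch.nn as nn
import torchvision.models as models

# Load a network pre-trained on Domain A (ImageNet here, as an assumption;
# requires a recent torchvision with the weights enum API)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all pre-trained parameters
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for Task (b) on Domain B
# (num_classes_b is a hypothetical target label count)
num_classes_b = 10
model.fc = nn.Linear(model.fc.in_features, num_classes_b)

# Only the new head's parameters remain trainable
trainable = [p for p in model.parameters() if p.requires_grad]
```
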
Entropy is generally an indicator of uncertainty. In deep learning it can be viewed as the amount of information. Rationale: Information(X = x_i) = -log P(X = x_i). The degree of information delivered by an event x_i is low if P(X = x_i) is close to 1 and high if P(X = x_i) is close to 0; that is, the higher the probability, the lower the information content (logarithm: -log x is small when x is close to 1). Expectation: E[X] = ∑ x · P(X = x), the summation of (a given outcome x) × (the probability of that outcome). Entropy: H[X] = E[Information(X = x)] = -∑ P(X = x) · log P(X = x) ..
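
A small NumPy sketch of the entropy formula above (NumPy itself is an assumption; the post only states the math):

```python
import numpy as np

def entropy(p):
    """H[X] = -sum(P(X=x) * log P(X=x)) over outcomes with nonzero probability."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]  # convention: 0 * log 0 = 0, so drop zero-probability outcomes
    return -np.sum(p * np.log(p))

# A near-certain event carries little information, so entropy is low;
# a uniform distribution is maximally uncertain, so entropy is high.
print(entropy([0.99, 0.01]))  # ~0.056 nats
print(entropy([0.5, 0.5]))    # ~0.693 nats (= log 2)
```
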

What is the normalization formula used for? Normalization is useful in statistics for creating a common scale to compare data sets with very different values. From a deep learning view: it stabilizes training (mitigates the gradient vanishing/exploding problem), shortens training time (allows a larger learning rate), and improves performance (escapes local optima faster). Min-Max Normalization Method: the normalization formula to [0, 1] is x_normalized = (x - x_min) / (x_max - x_min) i..
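
A minimal NumPy sketch of min-max normalization as defined above; the sample values are hypothetical:

```python
import numpy as np

def min_max_normalize(x):
    # Rescale to [0, 1]: x_normalized = (x - x_min) / (x_max - x_min)
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

scores = np.array([20.0, 50.0, 80.0, 110.0])
print(min_max_normalize(scores))  # [0.     0.333  0.667  1.   ]
```
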

Prior Probability: probability derived by deductive reasoning; it is calculated from existing information regarding a situation, and thus may vary depending on the given situation. E.g., the probability that a given fish image belongs to sea bream (S1). Class-conditional Probability / Likelihood: the probability density function for X (the feature), given that the corresponding class is S_i; also called the likelihood of S_i with respect to X. E.g., given sea bream (S1), the probability that the body width (X) is ..
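
Prior and likelihood are typically combined via Bayes' rule into the posterior used for classification; a toy NumPy sketch with hypothetical numbers (not from the post):

```python
import numpy as np

prior = np.array([0.6, 0.4])       # P(S1), P(S2): e.g. sea bream vs. another class
likelihood = np.array([0.3, 0.1])  # P(X = x | S1), P(X = x | S2) for an observed body width x

# Posterior: P(Si | X = x) = P(X = x | Si) * P(Si) / P(X = x)
joint = likelihood * prior
posterior = joint / joint.sum()
print(posterior)  # [0.818... 0.181...] -> classify this fish as S1
```
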

To improve the performance of a deep learning model, the goal is to minimize or maximize the objective function. For regression and classification problems, the objective function minimizes the difference between predictions and ground truths; therefore, the objective function is also called a loss function. Regression Loss Functions: Squared Error Loss, Absolute Error Loss, Huber ..
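
Minimal NumPy sketches of the three regression losses named above; the Huber delta and the sample arrays are illustrative assumptions:

```python
import numpy as np

def squared_error(y_true, y_pred):
    # Mean squared error: penalizes large residuals heavily
    return np.mean((y_true - y_pred) ** 2)

def absolute_error(y_true, y_pred):
    # Mean absolute error: more robust to outliers than squared error
    return np.mean(np.abs(y_true - y_pred))

def huber(y_true, y_pred, delta=1.0):
    # Quadratic for small residuals, linear beyond delta
    r = np.abs(y_true - y_pred)
    return np.mean(np.where(r <= delta, 0.5 * r ** 2, delta * (r - 0.5 * delta)))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 5.0])
print(squared_error(y_true, y_pred), absolute_error(y_true, y_pred), huber(y_true, y_pred))
```
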

Variance & Bias. Bias: the difference between the average prediction of the model and the correct value (the center of the target). Variance: the variability of the model's predictions (how scattered the predictions are). An underfitting model usually has high bias and low variance; this happens when we have very little data or try to fit a linear model to nonlinear data. An overfitting model usually has low bias and high variance; this happens when our mo..
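
A quick way to see the trade-off is to fit polynomials of increasing degree to nonlinear data; the degrees, sample size, and noise level below are arbitrary illustration choices, not from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 20)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # nonlinear data + noise

for degree in (1, 3, 9):
    coeffs = np.polyfit(x, y, degree)   # fit a polynomial of this degree
    y_fit = np.polyval(coeffs, x)
    train_mse = np.mean((y - y_fit) ** 2)
    # degree 1: high bias (a line underfits the sine); degree 9: low bias but
    # high variance (chases the noise); degree 3: a reasonable middle ground
    print(degree, round(train_mse, 4))
```
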