- model-free control
- Policy Gradient
- pulloff
- resample
- loss functions
- sidleup
- shadowing
- sample rows
- checkitout
- REINFORCE
- thresholding
- normalization
- non-parametric softmax
- Inorder Traversal
- fastapi
- scowl
- freebooze
- Excel
- clip intensity values
- MRI
- rest-api
- Knowledge Distillation
- domain adaptation
- objective functions for machine learning
- noise contrastive estimation
- data structures
- straightup
- Actor-Critic
- 3d medical image
- remove outliers
Let's Run Jinyeah
Python location: where python >> C:\Users\samsung\Anaconda3\python.exe [Anaconda3] Python virtual environment location: C:\Users\samsung\Anaconda3\envs (conda envs) [Anaconda3] Check the Python version: python --version >> Python 3.6.8 :: Anaconda, Inc. [Anaconda3] Check installed packages: conda list
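As a cross-check, the same information is available from inside Python itself; a minimal sketch using only the standard `sys` module (an alternative to `where python` / `python --version`, not taken from the post):

```python
# Minimal sketch: inspecting the active interpreter from inside Python.
import sys

print(sys.executable)   # full path of the running interpreter, e.g. ...\Anaconda3\python.exe
print(sys.version)      # version string, e.g. "3.6.8 ..."
print(sys.prefix)       # root of the active (conda) environment
```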

Outer product uvᵀ: output is a matrix, time complexity O(n²). Inner product uᵀv: output is a scalar, time complexity O(n). For a = [a0, a1, a2, ..., aN-1] and b = [b0, b1, b2, ..., bN-1], the inner product is a0*b0 + a1*b1 + ... + aN-1*bN-1. Assuming that multiplication and addition are constant-time operations, the time complexity is O(n): multiply O(n) + add O(n) = O(n).
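A quick NumPy check of the two products and their output shapes (a minimal sketch; the vectors are arbitrary examples):

```python
# Inner vs. outer product of two length-n vectors with NumPy.
import numpy as np

n = 4
a = np.arange(n)            # [0, 1, 2, 3]
b = np.arange(n) + 1        # [1, 2, 3, 4]

inner = np.dot(a, b)        # scalar: 0*1 + 1*2 + 2*3 + 3*4 = 20, O(n) work
outer = np.outer(a, b)      # (n, n) matrix: outer[i, j] = a[i] * b[j], O(n^2) entries

print(inner)        # 20
print(outer.shape)  # (4, 4)
```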
The Ellen Show, "The Unbelievably Hilarious Amy Schumer" 1. I don't fit in here -- just straight up body type. Straight up - strong agreement; completely, absolutely, totally / frankly, honestly. ex) I'm telling you straight up, I never saw him with her. ex) Straight up, I didn't think you'd make it this far in the competition, but you've done really well. 2. my arms register as legs. register - to put information (your na..
“Domain” and “Task”: a Domain relates to the feature space of a specific dataset and the marginal probability distribution of the features; a Task relates to the label space of a dataset and an objective predictive function. The goal of Transfer Learning is to transfer the knowledge learned from Task (a) on Domain A to Task (b) on Domain B; it is common to update only the last several layers of the pre-trained network ..
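A minimal transfer-learning sketch with PyTorch/torchvision, freezing the pre-trained backbone and updating only the last layer (ResNet-18 and the 10-class head are illustrative assumptions, not details from the post):

```python
# Freeze a pre-trained backbone and train only a new head for the target task.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # knowledge learned on the source domain

for param in model.parameters():                   # freeze all pre-trained layers
    param.requires_grad = False

num_target_classes = 10                            # hypothetical label space of Task (b)
model.fc = nn.Linear(model.fc.in_features, num_target_classes)  # new, trainable head

# Only the new head's parameters would be passed to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
```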

Display all of the drives on a Linux system: sudo fdisk -l, lsblk. Two disks (nvme0n1 and nvme1n1) are mounted: nvme0n1 is mounted at /, nvme1n1 is mounted at /data1. Display the size of all drives on a Linux system: df -h. Check the file system usage of a specific directory: du -sh # total size, du -h # size of every subdirectory, du -h --max-depth=1 # size of the first-level subdirectories
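For scripting, roughly the same numbers can be read from Python's standard library; a minimal sketch (not the commands above, just an alternative way to get at `df -h` / `du -sh`-style figures):

```python
# Disk usage of a mount point and rough size of a directory tree.
import os
import shutil

# Free/used/total space of the file system a path lives on (like `df -h /`).
total, used, free = shutil.disk_usage("/")
print(f"total={total / 2**30:.1f} GiB, used={used / 2**30:.1f} GiB, free={free / 2**30:.1f} GiB")

# Rough size of one directory tree (like `du -sh <dir>`), summing regular files only.
def dir_size(path):
    return sum(
        os.path.getsize(os.path.join(root, name))
        for root, _, files in os.walk(path)
        for name in files
        if os.path.isfile(os.path.join(root, name))
    )
```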
Entropy is generally an indicator of uncertainty; in deep learning it can be viewed as an amount of information. Rationale: Information(X = xi) = -log P(X = xi). The amount of information delivered by an event xi is low if P(X = xi) is close to 1 and high if P(X = xi) is close to 0; that is, the more probable an outcome, the less information it carries (-log x is small when x is close to 1). Expectation: E[X] = ∑ x · P(X = x), the sum over outcomes x of x times the probability of that outcome. Entropy: H[X] = E[Information(X = x)] = -∑ P(X = x) · log P(..
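A small numeric check of these definitions (log base 2, so the result is in bits; the distribution is an illustrative assumption):

```python
# Information and entropy of a discrete distribution with NumPy.
import numpy as np

p = np.array([0.5, 0.25, 0.25])     # P(X = x) for three outcomes

information = -np.log2(p)           # -log P(X = x): rare outcomes carry more bits
entropy = np.sum(p * information)   # H[X] = -sum P(X = x) * log P(X = x)

print(information)  # [1. 2. 2.]
print(entropy)      # 1.5 bits
```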
When to Resample? Any time we use two datasets with different sized voxels. DICOM attributes to use: 1. SliceThickness 2. PixelSpacing (width, height). Calculate the new size: out_size = [ int(np.round(original_size[0] * (original_spacing[0] / out_spacing[0]))), int(np.round(original_size[1] * (original_spacing[1] / out_spacing[1]))), int(np.round(original_size[2] * (original_spacing[2] / out_spacing[2]))..
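A minimal resampling sketch with SciPy, following the same size/spacing ratio; this is an assumption for illustration (the post itself may use SimpleITK or another library):

```python
# Resample a 3D volume from its original voxel spacing to a target spacing.
import numpy as np
from scipy import ndimage

def resample_volume(volume, original_spacing, out_spacing=(1.0, 1.0, 1.0)):
    """Resample a 3D array from original_spacing (mm/voxel) to out_spacing."""
    zoom_factors = np.asarray(original_spacing) / np.asarray(out_spacing)
    # order=1 -> trilinear interpolation; new shape ~= original_size * spacing ratio.
    return ndimage.zoom(volume, zoom_factors, order=1)

volume = np.zeros((120, 512, 512), dtype=np.float32)          # dummy CT/MRI volume
resampled = resample_volume(volume, original_spacing=(2.5, 0.7, 0.7))
print(resampled.shape)  # roughly (300, 358, 358)
```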

What is the normalization formula used for? Normalization is useful in statistics for creating a common scale to compare data sets with very different values. From a deep learning view: it stabilizes training (helps with the gradient vanishing/exploding problem), shortens training time (a larger learning rate can be used), and improves performance (the model can escape local optima faster). Min-Max Normalization Method: the normalization formula to [0, 1] is x_normalized = (x - x_min) / (x_max - x_min) i..
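A minimal sketch of that min-max formula in NumPy (the small eps term is an added assumption to guard against a constant array):

```python
# Min-max normalization of an array to the [0, 1] range.
import numpy as np

def min_max_normalize(x, eps=1e-8):
    x = np.asarray(x, dtype=np.float64)
    return (x - x.min()) / (x.max() - x.min() + eps)

print(min_max_normalize([10, 20, 40]))  # approximately [0.0, 0.333, 1.0]
```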