<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Activation Memory on AI Tech Blog</title>
    <link>https://jesamkim.github.io/ai-tech-blog/tags/activation-memory/</link>
    <description>Recent content in Activation Memory on AI Tech Blog</description>
    <generator>Hugo -- 0.147.6</generator>
    <language>en</language>
    <lastBuildDate>Wed, 15 Apr 2026 10:00:00 +0900</lastBuildDate>
    <atom:link href="https://jesamkim.github.io/ai-tech-blog/tags/activation-memory/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Understanding Distributed Training, Part 1 - GPU Memory Analysis: Parameter vs Activation</title>
      <link>https://jesamkim.github.io/ai-tech-blog/posts/2026-04-16-distributed-training-memory-analysis/</link>
      <pubDate>Wed, 15 Apr 2026 10:00:00 +0900</pubDate>
      <guid>https://jesamkim.github.io/ai-tech-blog/posts/2026-04-16-distributed-training-memory-analysis/</guid>
      <description>Analyzes how GPU memory is consumed at each stage of the neural network training loop. Covers per-optimizer memory formulas from SGD to Adam, the proportional relationship between activation memory and batch size, and strategies for handling OOM errors.</description>
    </item>
  </channel>
</rss>
