<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Data Parallelism on AI Tech Blog</title>
    <link>https://jesamkim.github.io/ai-tech-blog/tags/data-parallelism/</link>
    <description>Recent content in Data Parallelism on AI Tech Blog</description>
    <generator>Hugo -- 0.147.6</generator>
    <language>ko</language>
    <lastBuildDate>Wed, 15 Apr 2026 11:00:00 +0900</lastBuildDate>
    <atom:link href="https://jesamkim.github.io/ai-tech-blog/tags/data-parallelism/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Understanding Distributed Training Part 2 - Data Parallelism: Splitting Data to Reduce Memory</title>
      <link>https://jesamkim.github.io/ai-tech-blog/posts/2026-04-16-data-parallelism-deep-dive/</link>
      <pubDate>Wed, 15 Apr 2026 11:00:00 +0900</pubDate>
      <guid>https://jesamkim.github.io/ai-tech-blog/posts/2026-04-16-data-parallelism-deep-dive/</guid>
      <description>Covers how the Parameter Server architecture works, the four steps of the training loop, the mathematical equivalence to Centralized Training, a memory analysis, and the fundamental limitations of DP. Computes the actual memory savings using a ResNet-18 ImageNet example.</description>
    </item>
  </channel>
</rss>
