<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Edge Intelligence | About Hyeongheon</title><link>https://chahh9808.github.io/tags/edge-intelligence/</link><atom:link href="https://chahh9808.github.io/tags/edge-intelligence/index.xml" rel="self" type="application/rss+xml"/><description>Edge Intelligence</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Wed, 01 Oct 2025 00:00:00 +0000</lastBuildDate><image><url>https://chahh9808.github.io/media/icon_hu68170e94a17a2a43d6dcb45cf0e8e589_3079_512x512_fill_lanczos_center_3.png</url><title>Edge Intelligence</title><link>https://chahh9808.github.io/tags/edge-intelligence/</link></image><item><title>SNAP: Low-Latency Test-Time Adaptation with Sparse Updates</title><link>https://chahh9808.github.io/post/snap/</link><pubDate>Wed, 01 Oct 2025 00:00:00 +0000</pubDate><guid>https://chahh9808.github.io/post/snap/</guid><description>&lt;h2 id="position">Position&lt;/h2>
&lt;p>First-author project in the KAIST Mobile Intelligence &amp;amp; Interaction Lab&lt;/p>
&lt;h2 id="project-goals--works">Project Goals &amp;amp; Works&lt;/h2>
&lt;ol>
&lt;li>Designed SNAP, a sparse Test-Time Adaptation (TTA) framework for latency-sensitive edge applications.&lt;/li>
&lt;li>Proposed Class and Domain Representative Memory (CnDRM) to select a compact, informative subset of target samples for adaptation.&lt;/li>
&lt;li>Proposed Inference-only Batch-aware Memory Normalization (IoBMN) to align normalization statistics at inference time with minimal overhead.&lt;/li>
&lt;li>Integrated SNAP with five state-of-the-art TTA methods and validated consistent speedups with limited accuracy loss.&lt;/li>
&lt;/ol>
&lt;h2 id="key-results">Key Results&lt;/h2>
&lt;ol>
&lt;li>Reduced adaptation latency by up to 93.12% compared with baseline TTA pipelines.&lt;/li>
&lt;li>Maintained competitive accuracy, with a drop of less than 3.3% across adaptation rates from 1% to 50%.&lt;/li>
&lt;li>Demonstrated strong practicality for resource-constrained, real-time on-device inference scenarios.&lt;/li>
&lt;/ol>
&lt;h2 id="links">Links&lt;/h2>
&lt;ul>
&lt;li>Website: &lt;a href="https://miil.kaist.ac.kr/projects/snap">https://miil.kaist.ac.kr/projects/snap&lt;/a>&lt;/li>
&lt;li>arXiv: &lt;a href="https://arxiv.org/abs/2511.15276">https://arxiv.org/abs/2511.15276&lt;/a>&lt;/li>
&lt;li>Code: &lt;a href="https://github.com/chahh9808/SNAP">https://github.com/chahh9808/SNAP&lt;/a>&lt;/li>
&lt;/ul></description></item></channel></rss>