Google’s announcement of TurboQuant is weighing on the share prices of memory companies, as the technology is expected to cut artificial intelligence (AI) models’ memory usage to about one-sixth of current levels. Analysts, however, say concerns about memory chip demand may be overblown: even if memory demand per model declines because of TurboQuant, overall demand for AI continues to grow at a faster pace, keeping the broader memory market on a solid growth trajectory.

Announced on Tuesday by Google Research, TurboQuant is a compression technology designed to maximize AI efficiency. At its core, it compresses an AI model’s key-value cache (KV cache) to just 3 bits per value, cutting its size by more than sixfold. The KV cache is an AI model’s short-term memory, where it stores the keys and values it has already computed so it can generate the next words faster. A sixfold reduction in KV cache size effectively lowers memory usage to about one-sixth of current levels, making comparable performance possible with only one-sixth of the memory. As AI services
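To make the arithmetic behind that claim concrete, here is a back-of-the-envelope sketch of KV cache sizing for a hypothetical transformer model. The model shape (32 layers, 32 attention heads of dimension 128, a 4,096-token context) is an illustrative assumption, not a figure from the announcement, and the article does not detail TurboQuant’s actual quantization scheme beyond the 3-bit figure.

```python
# Back-of-the-envelope KV cache sizing. The model shape used below is an
# assumption for illustration only; the article specifies just that
# TurboQuant quantizes the KV cache to 3 bits per value.

def kv_cache_bits(layers: int, heads: int, head_dim: int,
                  seq_len: int, bits_per_value: int) -> int:
    """Total KV cache size in bits for one sequence.

    Both keys and values are cached at every layer, hence the factor of 2.
    """
    return 2 * layers * heads * head_dim * seq_len * bits_per_value

# Hypothetical 7B-class model shape: 32 layers, 32 heads of dim 128.
baseline = kv_cache_bits(32, 32, 128, seq_len=4096, bits_per_value=16)
quantized = kv_cache_bits(32, 32, 128, seq_len=4096, bits_per_value=3)

print(f"16-bit KV cache: {baseline / 8 / 2**30:.2f} GiB")   # 2.00 GiB
print(f" 3-bit KV cache: {quantized / 8 / 2**30:.2f} GiB")  # 0.38 GiB
print(f"reduction: {baseline / quantized:.1f}x")            # 5.3x
```

Note that the raw 16-bit-to-3-bit ratio works out to about 5.3x; the article’s “more than sixfold” figure presumably reflects a different baseline or additional savings not described in the announcement.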