Articles

A Ladder of Reasoning: Testing the power of imagination in LLMs 

July 23, 2025

Given a language model, can we tell whether it is truly reasoning, or whether its performance owes only to pattern recognition and memorization?

Figure: KBLaM flowchart. Documents are used to construct and summarize a knowledge base (KB) offline; the summarized KB is encoded, and a prompt passes through a tokenizer and rectangular attention into the LLM, which retrieves from the encoded KB to generate an answer.
Microsoft Research Blog

Introducing KBLaM: Bringing plug-and-play external knowledge to LLMs 

March 18, 2025 | Taketomo Isazawa, Xi Wang, Liana Mikaelyan, Mathew Salvaris, and James Hensman

Introducing KBLaM, an approach that encodes and stores structured knowledge within an LLM itself. By integrating knowledge without retraining, it offers a scalable alternative to traditional methods.
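The flowchart above describes an architecture in which prompt tokens attend over encoded knowledge tokens via "rectangular attention." As a rough illustration only (not KBLaM's actual implementation), the following minimal NumPy sketch shows that idea: prompt tokens attend over the concatenation of knowledge tokens and earlier prompt tokens, while the knowledge tokens themselves never attend to anything. The function name `rectangular_attention` and all shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def rectangular_attention(prompt_q, prompt_k, prompt_v, kb_k, kb_v):
    """Prompt tokens attend over [KB tokens + prompt tokens].
    KB tokens are attended to but never attend, so the score
    matrix is rectangular rather than square.
    Hypothetical shapes: prompt_* -> (n, d), kb_* -> (m, d)."""
    n, d = prompt_q.shape
    m = kb_k.shape[0]
    keys = np.concatenate([kb_k, prompt_k], axis=0)    # (m + n, d)
    values = np.concatenate([kb_v, prompt_v], axis=0)  # (m + n, d)
    scores = prompt_q @ keys.T / np.sqrt(d)            # (n, m + n), rectangular
    # Causal mask applies only to the prompt-to-prompt block.
    causal = np.triu(np.ones((n, n), dtype=bool), k=1)
    scores[:, m:][causal] = -np.inf
    return softmax(scores, axis=-1) @ values           # (n, d)

# Toy usage: 4 prompt tokens, 3 knowledge tokens, dimension 8.
rng = np.random.default_rng(0)
n, m, d = 4, 3, 8
out = rectangular_attention(rng.normal(size=(n, d)), rng.normal(size=(n, d)),
                            rng.normal(size=(n, d)), rng.normal(size=(m, d)),
                            rng.normal(size=(m, d)))
print(out.shape)  # (4, 8)
```

The score matrix here has shape (prompt tokens) x (knowledge tokens + prompt tokens), which is the "rectangular" shape the flowchart refers to; adding or removing knowledge only changes the number of columns, not the model weights.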

In the news | Windows Experience Blog

Phi Silica, small but mighty on-device SLM 

December 6, 2024

Today we will share how the Applied Sciences team used a multidisciplinary approach to achieve a breakthrough in power efficiency, inference speed, and memory efficiency for a state-of-the-art small language model (SLM), Phi Silica.