"Multi-Candidate Needle Prompting" for large context LLMs (Gemini 1.5)
Gemini 1.5's 1M-token context window is a remarkable advancement in LLMs, offering capabilities no other currently available model matches. With it, Gemini 1.5 can ingest the equivalent of roughly 10 Harry Potter books in a single prompt. However, this enormous context window is not without its limitations. In my experience, Gemini 1.5 often struggles to retrieve the most relevant information from the vast amount of context it has access to.
The "Needle in a Haystack" benchmark is a well-known challenge for LLMs, which tests their ability to find specific information within a large corpus of text. This benchmark is particularly relevant for models with large context windows, as they must efficiently search through vast amounts of data to locate the most pertinent information.
To address this issue, I have developed a prompting technique that I call "Multi-Candidate Needle Prompting," which aims to improve the model's ability to accurately retrieve key information from within its large context window. The technique prompts the LLM to first identify 10 relevant sentences from different parts of the input text, then consider which of these sentences (i.e., the candidate needles) is the most pertinent to the question at hand, and only then provide its final answer.
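To make the idea concrete, here is one way the prompt could be constructed. The exact wording below is my own illustration of the pattern, not the author's verbatim prompt:

```python
def build_needle_prompt(document: str, question: str, n_candidates: int = 10) -> str:
    """Build a Multi-Candidate Needle prompt.

    The model is instructed to (1) quote candidate sentences from
    different parts of the document, (2) pick the most pertinent one,
    and (3) answer based on that candidate.
    """
    return (
        f"{document}\n\n"
        f"Question: {question}\n\n"
        f"Before answering, quote {n_candidates} sentences from different "
        "parts of the text above that could be relevant to the question "
        "(the candidate needles). Then state which candidate is the most "
        "pertinent and why. Finally, answer the question based on that "
        "candidate."
    )
```

The resulting string is then sent to Gemini 1.5 as a single prompt, for example through AI Studio or the API.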
This process bears some resemblance to Retrieval Augmented Generation (RAG), but the key difference is that the entire process is carried out by the LLM itself, without relying on a separate retrieval mechanism.
By prompting the model to consider multiple relevant sentences from various parts of the text, "Multi-Candidate Needle Prompting" promotes a more thorough search of the available information and minimizes the chances of overlooking crucial details. Moreover, requiring the model to explicitly write out the relevant sentences serves as a form of intermediate reasoning, providing insights into the model's thought process.
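Because the candidates are written out explicitly, they can also be pulled out of the response and inspected. A minimal sketch, assuming the model formats its candidates as a numbered list (a formatting assumption, not something the technique guarantees):

```python
import re

def extract_candidates(response: str) -> list[str]:
    """Extract numbered candidate sentences from a model response.

    Assumes lines like '1. ...' or '2) ...'; adjust the pattern if
    your prompt requests a different format.
    """
    return [
        m.group(1).strip()
        for m in re.finditer(r"^\s*\d+[.)]\s*(.+)$", response, re.MULTILINE)
    ]
```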
The attached screenshot anecdotally demonstrates the effectiveness of my approach.
Many of my colleagues, especially those outside the United States, have not yet had the opportunity to experience Gemini 1.5 firsthand. I highly recommend trying it out now while it is still available for free, as Google plans to introduce charges in the near future.
For those of you located in Europe, accessing Gemini 1.5 will require the use of a VPN. I have attached an instructional video that guides you through the process of setting this up.
For best performance, go into settings and disable all safety settings, as Gemini tends to be quite overzealous in blocking content (see attached screenshot). Only leave these guardrails enabled if you need them for your particular application.
Benjamin Bush