News
Check out our recent work, Attention in Large Language Models Yields Efficient Zero-Shot Re-Rankers, in which we propose In-Context Re-ranking (ICR), a method that re-ranks documents with LLMs using only O(1) forward passes!