1. Meta releases Llama 3.1: Meta has unveiled its latest open-source AI model, Llama 3.1, with sizes up to 405 billion parameters. The model rivals other leading AI models, with a context length of 128K tokens and support for eight languages. Llama 3.1 is free and open source. It took months to train on 16,000 NVIDIA H100 GPUs, which likely cost hundreds of millions of dollars and used enough electricity to power a small country. According to benchmarks, the model beats GPT-4o and Claude 3.5 Sonnet. It comes in three sizes: 8B, 70B, and 405B, where B refers to billions of parameters (the number of variables the model uses to make predictions). Architecturally, it is a relatively simple decoder-only transformer. The code used to train the model is only about 300 lines of Python and PyTorch, along with a library called FairScale used to distribute training across multiple GPUs. We will have a detailed post about the implementation and how the model works on the AI Quest. You can try the model for free on platforms like meta.ai, Groq, or NVIDIA's playground.

2. OpenAI launched "SearchGPT": a new search engine aiming to revolutionize how we find and synthesize information online. Unlike traditional search engines that return a list of links, SearchGPT offers a more interactive and streamlined experience.

Key features of SearchGPT:
- Chat Interface: Similar to ChatGPT, SearchGPT presents search results within a chat window, allowing for a more conversational and intuitive interaction with the search engine.
- Multimedia Search: In addition to text-based queries, SearchGPT supports image searches and includes widgets for weather, calculators, sports updates, financial information, and time zones.
- Summarization Capabilities: The search engine can summarize web pages in up to 300 characters, making it easier for users to quickly grasp the essential information without clicking through multiple links.
- Advanced Language Models: SearchGPT is powered by language models such as GPT-4 Lite, GPT-4, or GPT-3.5, ensuring high-quality responses and improved comprehension of complex queries.
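To give a feel for the "relatively simple decoder-only transformer" mentioned in the Llama item above, here is a minimal, toy-scale sketch of one decoder block in plain NumPy. Everything here (single-head attention, ReLU MLP, standard LayerNorm, the tiny dimensions) is an illustrative assumption, not Meta's actual code: real Llama 3.1 uses grouped-query attention, rotary position embeddings, RMSNorm, and a SwiGLU MLP. The one idea this sketch does show faithfully is the causal mask, which is what makes the model a "decoder": each token can only attend to itself and earlier tokens.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def layer_norm(x, eps=1e-5):
    # Standard LayerNorm (Llama actually uses RMSNorm; this is a simplification).
    return (x - x.mean(-1, keepdims=True)) / np.sqrt(x.var(-1, keepdims=True) + eps)

def causal_self_attention(x, w_q, w_k, w_v, w_o):
    """Single-head attention with a causal mask: token i may only attend
    to tokens 0..i, never to future positions."""
    seq_len, d = x.shape
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = (q @ k.T) / np.sqrt(d)
    future = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
    scores[future] = -1e9  # block attention to future positions
    return softmax(scores) @ v @ w_o

def decoder_block(x, params):
    """Pre-norm transformer block: attention then MLP, each wrapped
    in a residual connection. Real models stack dozens of these."""
    h = x + causal_self_attention(layer_norm(x), *params["attn"])
    w1, w2 = params["mlp"]
    return h + np.maximum(layer_norm(h) @ w1, 0.0) @ w2  # ReLU MLP

# Toy configuration: 4 tokens, model width 8 (Llama 3.1 405B uses width 16384).
rng = np.random.default_rng(0)
d, d_ff, seq_len = 8, 16, 4
params = {
    "attn": [rng.normal(0, 0.1, (d, d)) for _ in range(4)],
    "mlp": [rng.normal(0, 0.1, (d, d_ff)), rng.normal(0, 0.1, (d_ff, d))],
}
x = rng.normal(size=(seq_len, d))
y = decoder_block(x, params)
print(y.shape)  # same shape in and out, so blocks can be stacked
```

Because the block maps a (seq_len, d) array to the same shape, stacking it N times and adding an embedding layer at the bottom and an output projection at the top gives the whole architecture; this is why the training loop itself can stay so short, with FairScale handling the multi-GPU distribution separately.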