<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Leniolabs_ - Medium]]></title>
        <description><![CDATA[Discover how our unique approach can help you solve your biggest technology challenges. Leniolabs_ is now part of Improving. - Medium]]></description>
        <link>https://medium.com/leniolabs?source=rss----c133038ee589---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Leniolabs_ - Medium</title>
            <link>https://medium.com/leniolabs?source=rss----c133038ee589---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Thu, 07 May 2026 11:00:55 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/leniolabs" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Last Week on AI — no. 49]]></title>
            <link>https://medium.com/leniolabs/last-week-on-ai-no-49-4be6d28ff84f?source=rss----c133038ee589---4</link>
            <guid isPermaLink="false">https://medium.com/p/4be6d28ff84f</guid>
            <category><![CDATA[openai]]></category>
            <category><![CDATA[ai-news]]></category>
            <category><![CDATA[gemini]]></category>
            <category><![CDATA[qwen]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Leniolabs_]]></dc:creator>
            <pubDate>Tue, 26 Nov 2024 14:17:57 GMT</pubDate>
            <atom:updated>2024-11-26T14:17:56.967Z</atom:updated>
            <content:encoded><![CDATA[<h3>Last Week on AI — no. 49</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KmtHxtwOF2EmEbyJN0OyUA.png" /></figure><p>🗞️ + 🤖 Exciting tech updates this week!</p><p>Google’s Gemini 2 AI is expected in December with big improvements, and Gemini Advanced now personalizes responses. Microsoft 365 adds smart SharePoint agents, while Qwen2.5-Turbo boosts token capacity to 1M for better AI. Niantic leverages player data for robotics, ElevenLabs introduces AI agents, and Figure X’s robots make leaps in speed and success!</p><p>👇 Dive in for details.</p><p><a href="https://www.tomsguide.com/ai/google-gemini/google-gemini-2-just-tipped-for-december-launch-heres-what-we-know">Google Gemini 2 just tipped for December launch - here&#39;s what we know</a></p><p>Google is poised to launch the next generation of its Gemini AI models in December, promising substantial improvements over Gemini 1.5. With their recent improvements in model performance, we’re more than excited.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/GeminiApp/status/1858929151476199591&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/6dcf510ac1ed38e027d9a2781a2bd506/href">https://medium.com/media/6dcf510ac1ed38e027d9a2781a2bd506/href</a></iframe><p>Since last week, Gemini Advanced can remember user preferences to provide tailored responses, with options to view, edit, or delete shared information. 
OpenAI did it first, but it’s nice to see Google catching up with the current state of AI experiences.</p><p><a href="https://techcommunity.microsoft.com/blog/microsoft365copilotblog/introducing-new-agents-in-microsoft-365/4296918">Introducing new agents in Microsoft 365 | Microsoft Community Hub</a></p><p>Microsoft 365 Introduces SharePoint AI Agents: New agents integrated into SharePoint provide instant insights grounded in site content, with customization options for specific projects or tasks.</p><p><a href="http://qwenlm.github.io/blog/qwen2.5-turbo/">Extending the Context Length to 1M Tokens!</a></p><p>Qwen2.5-Turbo Supports 1M Tokens: The updated Qwen2.5-Turbo model extends context length to 1M tokens, enabling faster inference, reduced costs, and competitive performance with GPT-4 on benchmarks.</p><p><a href="https://www.404media.co/pokemon-go-players-have-unwittingly-trained-ai-to-navigate-the-world/">Pokémon Go Players Have Unwittingly Trained AI to Navigate the World</a></p><p>Niantic, the company behind Pokémon Go, is leveraging players’ data to develop real-world navigation AI, hinting at broader applications in robotics.</p><p><a href="https://techcrunch.com/2024/11/18/elevenlabs-now-offers-ability-to-build-conversational-ai-agents/">ElevenLabs now offers ability to build conversational AI agents | TechCrunch</a></p><p>ElevenLabs Launches Conversational AI Tool: ElevenLabs now allows developers to create conversational AI agents with customizable tone and response length.</p><p>👇🏽 Leniolabs_ is now part of <a href="https://www.linkedin.com/company/improving-enterprises/">Improving</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*txp4metTReb4EYzqht_VJA.png" /></figure><p>Learn more: <a 
href="https://www.improving.com/thoughts/improving-expands-footprint-with-leniolabs-acquisition/">https://www.improving.com/thoughts/improving-expands-footprint-with-leniolabs-acquisition/</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4be6d28ff84f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/leniolabs/last-week-on-ai-no-49-4be6d28ff84f">Last Week on AI — no. 49</a> was originally published in <a href="https://medium.com/leniolabs">Leniolabs_</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Last Week on AI — no.47]]></title>
            <link>https://medium.com/leniolabs/last-week-on-ai-no-47-e50cf64ce686?source=rss----c133038ee589---4</link>
            <guid isPermaLink="false">https://medium.com/p/e50cf64ce686</guid>
            <category><![CDATA[ai-news]]></category>
            <category><![CDATA[openai]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[ai-newsletter]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Leniolabs_]]></dc:creator>
            <pubDate>Mon, 21 Oct 2024 17:23:40 GMT</pubDate>
            <atom:updated>2024-10-21T17:23:32.658Z</atom:updated>
            <content:encoded><![CDATA[<h3>Last Week on AI — no.47</h3><p>by the AI team</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CYmkeB7v79LPOoMZ0pDS0g.png" /></figure><p>From Microsoft announcements in their AI Tour to Nvidia’s new model beating GPT-4o, we’ve got the most exciting updates on AI advancements.</p><p>Don’t miss out on the latest in AI and tech innovation!</p><p><a href="https://cointelegraph.com/news/nvidia-open-source-ai-nemotron-surpasses-open-ai-gpt-4o">https://cointelegraph.com/news/nvidia-open-source-ai-nemotron-surpasses-open-ai-gpt-4o</a></p><p><strong>Nvidia’s New Open-Source AI Model Beats GPT-4o</strong>: Nvidia quietly launched Llama-3.1-Nemotron-70B-Instruct, an AI model outperforming GPT-4o and Claude-3 on several benchmarks. This open-source model appears to be fine-tuned from Meta’s Llama-70b.</p><p><a href="https://arxiv.org/abs/2410.05258">Differential Transformer</a></p><p><strong>Diff Transformer Introduced</strong>: A new architecture, Diff Transformer, enhances large language models by improving focus on relevant context and reducing hallucination. 
It uses a differential attention mechanism for improved accuracy in long-context and question-answering tasks.</p><p><a href="https://neuralmagic.com/blog/we-ran-over-half-a-million-evaluations-on-quantized-llms-heres-what-we-found/">500K+ Evaluations Show Quantized LLMs Retain Accuracy</a></p><p><strong>Quantized LLMs Maintain Performance</strong>: Neural Magic’s evaluations show that 8-bit and 4-bit quantized models offer competitive accuracy with negligible performance trade-offs for larger models (70B, 405B), delivering faster inference speeds and computational savings.</p><p><a href="https://www.forbes.com/sites/lanceeliot/2024/10/15/openai-newly-released-ai-product-swarm-swiftly-brings-agentic-ai-into-the-real-world/">OpenAI Newly Released AI Product &#39;Swarm&#39; Swiftly Brings Agentic AI Into The Real World</a></p><p><strong>OpenAI’s ‘Swarm’ Brings Multi-Agent AI to Developers</strong>: OpenAI’s Swarm enables developers to explore lightweight, multi-agent AI orchestration. With a focus on educational and experimental use, Swarm allows AI agents to work together on complex tasks.</p><p><a href="https://www.cnbc.com/2024/10/21/microsoft-to-allow-autonomous-ai-agent-development-next-month.html">https://www.cnbc.com/2024/10/21/microsoft-to-allow-autonomous-ai-agent-development-next-month.html</a></p><p><strong>Microsoft Introduces Autonomous AI Agents in Copilot Studio</strong>: Starting next month, Microsoft will let organizations develop custom AI agents in Copilot Studio. 
This move responds to growing competition, especially after Salesforce launched a similar product last month.</p><p><a href="https://finance.yahoo.com/news/microsoft-launches-copilot-ai-features-as-investors-look-for-signs-artificial-intelligence-is-paying-off-093044133.html">Microsoft launches Copilot AI features as investors look for signs artificial intelligence is paying off</a></p><p><strong>Microsoft Expands AI Offerings Amid Investor Scrutiny</strong>: As part of its AI push, Microsoft introduces autonomous agents during its AI Tour event. These agents help streamline enterprise workflows, allowing users to automate tasks with low-code instructions.</p><p><a href="https://finance.yahoo.com/news/ibm-expands-open-source-ai-120038644.html">IBM Expands Open-Source AI with Granite 3.0, Empowering Enterprise Flexibility</a></p><p><strong>IBM Launches Granite 3.0</strong>: IBM unveils its Granite 3.0 AI models, focusing on transparency, safety, and performance in enterprise environments. These open-source models aim to empower businesses with flexible, robust AI solutions.</p><p><strong>👇🏽 Learn what we can do for you:</strong></p><p><a href="https://www.leniolabs.com/?utm_source=linkedin&amp;utm_medium=newsletter&amp;utm_campaign=ai-news">Leniolabs_ | Frontend team augmentation that works</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e50cf64ce686" width="1" height="1" alt=""><hr><p><a href="https://medium.com/leniolabs/last-week-on-ai-no-47-e50cf64ce686">Last Week on AI — no.47</a> was originally published in <a href="https://medium.com/leniolabs">Leniolabs_</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Exploring the Learning Secrets of Neural Networks Using Entropy and Complexity]]></title>
            <link>https://medium.com/leniolabs/exploring-the-learning-secrets-of-neural-networks-using-entropy-and-complexity-1f023f1378b2?source=rss----c133038ee589---4</link>
            <guid isPermaLink="false">https://medium.com/p/1f023f1378b2</guid>
            <category><![CDATA[machine-learning]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[deep-learning]]></category>
            <category><![CDATA[neural-networks]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[Leniolabs_]]></dc:creator>
            <pubDate>Wed, 16 Oct 2024 19:16:49 GMT</pubDate>
            <atom:updated>2024-10-16T15:36:27.103Z</atom:updated>
            <content:encoded><![CDATA[<p>by Adrian Jiménez and the AI team at <a href="https://medium.com/u/c38e678e05e">Leniolabs_</a></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*AzZznxSw9JqjIUyK1-nPRw.png" /></figure><p>Neural networks have revolutionized machine learning, achieving remarkable performance in tasks ranging from image recognition to natural language processing. However, the inner workings of these powerful models often remain a mystery, earning them the moniker “black boxes.” In this blog post, we’ll shed light on the learning process of neural networks by examining the evolution of weights in a statistical space during training.</p><p>Using a simple yet revealing experiment with MNIST and fMNIST classification, we’ll explore how concepts from information theory — namely entropy and statistical complexity — can provide insights into the dynamics of neural network learning. By the end, you’ll have a new perspective on how these artificial “brains” organize information and adapt to solve complex problems, along with insights into how neural networks adjust their parameters to enhance performance and generalization.</p><h3>Introduction</h3><p>In the field of artificial intelligence (AI), a deep understanding of neural network (NN) models is crucial for improving their performance and applicability. This year’s Nobel Prize in Physics, awarded to John Hopfield and Geoffrey Hinton for their groundbreaking work in neural networks, underscores the importance of this technology in transforming both science and society. One of the less explored but highly significant aspects is how the weights of the NN change throughout the training process. 
These weight adjustments, much like the concepts pioneered by Hopfield and Hinton, not only reflect the model’s learning but also provide valuable insights into its behavior and generalization capability.</p><p>With this in mind, we look into how the weights of a neural network change during training. We trained MNIST and fMNIST image classifiers built on a simple three-layer neural network: a 28×28 input (the flattened image), two hidden layers of 512 neurons each, and a final layer of 10 neurons representing the classifier’s labels. This approach allows us to analyze how the weights of one of the simplest networks adjust throughout the training and how these adjustments affect the model’s performance.</p><h3>Methodology</h3><p>The first step was to store the weights of the three layers at 10 selected training steps (out of 60,000 training batches) over 10 epochs. If we plot the first layer as a 784×512 matrix, where each column holds the 28×28 flattened input weights of one of the 512 neurons, we can observe that the weights are initialized as uniformly distributed. If we compare this with the same image after training, we can clearly see band-like patterns appearing along the columns.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*VQeUImbt6_bANxaC.jpg" /></figure><p>It is precisely this dynamic that we aim to quantify in order to better understand it. One of the best tools for describing these states is the Shannon entropy, which roughly measures the dispersion of a distribution, assigning a single numerical value (a macrostate) to the many microstates consistent with it. Thus, Shannon entropy is a measure of uncertainty in a probability distribution. Mathematically, it is defined as:</p><p>S(X) = −∑ᵢ p(xᵢ) log p(xᵢ)</p><p>where p(xᵢ) is the probability of event xᵢ and the sum runs over the N possible events. In the context of neural networks, entropy can be interpreted as a measure of dispersion in the distribution of synaptic weights. 
A high entropy value indicates a more dispersed distribution, while a low value suggests a more defined one.</p><p>However, for each entropy value, we have many distributions with the same level of dispersion. It would be extremely difficult to interpret what may appear to be noise, so we opted to use tools designed to understand systems with these characteristics. For this, we use <strong>statistical complexity</strong>, as defined by Lopez-Ruiz (1995): the product of an information measure “H” and a distance measure “D” between distributions. In this case, we use Shannon entropy as the measure of information and the <strong>Jensen-Shannon divergence</strong> as the distance. The complexity C(X) is then mathematically expressed as:</p><p>C(X) = H(X) × D(X||U)</p><p>where H is the normalized Shannon entropy, U is the uniform distribution, and D(X||U) measures the distance between the weight distribution and U. This concept allows us to quantify the balance between disorder and order in the neural network, thus reflecting the network’s ability to efficiently organize information.</p><p>At the beginning of the training, the weight histograms show a dispersed distribution, indicating greater variability in the weight values. However, as training progresses, we observe that the weight distribution becomes more ordered and concentrated. The decrease in entropy suggests that the weights are being organized more efficiently. On the other hand, the increase in complexity indicates an improvement in the network’s ability to represent information more accurately.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/0*id2MJxXLs-bxPn_y.gif" /></figure><p>These findings highlight the process of weight adjustment during training. The transition from a dispersed distribution to a more organized one reflects how the model improves its classification and generalization capabilities. 
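As an illustrative sketch (not the exact code used in this experiment), the normalized Shannon entropy of a layer’s weights can be estimated by histogramming the flattened weight matrix; the bin count below is an assumed choice:

```python
import numpy as np

def weight_entropy(weights, bins=100):
    """Normalized Shannon entropy of a weight distribution.

    Histograms the flattened weights into `bins` buckets and computes
    S = -sum(p * log p), divided by log(bins) so the result lies in
    [0, 1]. The bin count of 100 is an illustrative choice.
    """
    counts, _ = np.histogram(np.ravel(weights), bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]  # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)) / np.log(bins))

rng = np.random.default_rng(0)
print(weight_entropy(rng.uniform(-1, 1, 100_000)))  # near 1: dispersed
print(weight_entropy(np.zeros(100_000)))            # 0: fully concentrated
```

A near-uniform weight distribution gives a value close to 1, while a fully concentrated one gives 0, matching the qualitative behavior described here.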
Understanding these changes is crucial for optimizing neural networks and developing more effective models.</p><p>To evaluate the learning in the neural network’s weights, we use entropy as a fundamental metric. As mentioned earlier, in the context of neural network weights, entropy helps quantify the variability in the weight distribution over time.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/0*q6_AkhyFvxLGdDKS.png" /></figure><h3>Introduction to Complexity:</h3><p>Along with entropy, statistical complexity provides a more complete view of the organization of the weights. Complexity is calculated as the product of Shannon entropy and Jensen-Shannon divergence. The Jensen-Shannon divergence is defined as:</p><p>D_JS(W||U) = q × (H[(W+U)/2] − (H[W] + H[U])/2)</p><p>Here, q is a normalization constant, and the uniform distribution U is used as the reference for the divergence.</p><h3>Application of Complexity:</h3><p>In our analysis, entropy measures the “randomness” in the weight distribution, while complexity combines this measure with the divergence between distributions. As we can see in Fig. 4, at the beginning of training, entropy is high and complexity is low, indicating a low concentration in weight values. As training progresses, entropy decreases and complexity increases, reflecting a more balanced organization in the weight distribution.</p><p>Something really interesting is that in training on both datasets, the third layer makes a stop and correction: it reaches an appropriate entropy level but keeps refining the weights, changing the distribution even while the entropy stays the same.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/640/0*3yBLp8ejtqIUVzxh.gif" /></figure><h3>Results</h3><p>In analyzing the evolution of neural network weights, we have observed how the weight distribution changes over time, transitioning from an initial sparse configuration to a more ordered distribution. 
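This H × D construction can be sketched as follows. This is an assumed implementation, not the experiment’s exact code: the constant q is chosen here so that the divergence of the most concentrated (delta-like) histogram from the uniform one equals 1.

```python
import numpy as np

def _shannon(p):
    """Unnormalized Shannon entropy of a probability vector."""
    p = p[p > 0]  # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)))

def statistical_complexity(weights, bins=100):
    """C = H * D_JS(W||U): normalized Shannon entropy of the weight
    histogram W times its normalized Jensen-Shannon divergence from
    the uniform distribution U."""
    counts, _ = np.histogram(np.ravel(weights), bins=bins)
    W = counts / counts.sum()
    U = np.full(bins, 1.0 / bins)
    H = _shannon(W) / np.log(bins)  # normalized entropy in [0, 1]
    js = _shannon((W + U) / 2) - (_shannon(W) + _shannon(U)) / 2
    # Normalize by the divergence of a delta distribution (most ordered
    # state) from uniform -- an assumed choice for the constant q.
    delta = np.zeros(bins)
    delta[0] = 1.0
    js_max = _shannon((delta + U) / 2) - (_shannon(delta) + _shannon(U)) / 2
    return H * js / js_max

rng = np.random.default_rng(0)
print(statistical_complexity(rng.uniform(-1, 1, 100_000)))  # near 0: H high, D near 0
print(statistical_complexity(np.zeros(100_000)))            # 0: D high, but H = 0
print(statistical_complexity(rng.normal(size=100_000)))     # in between: C > 0
```

As the examples suggest, complexity vanishes at both extremes (fully uniform and fully concentrated) and peaks for distributions that balance order and disorder, which is exactly the behavior tracked during training.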
The reduction in entropy and increase in statistical complexity reflect a process of model adjustment and optimization during training.</p><p>These results have several important implications for the design and implementation of neural networks:</p><ol><li><strong>Model Performance Improvement:</strong> The transition towards a more ordered distribution in weights suggests that the model is learning to better classify images. The model’s ability to adjust its weights more efficiently is crucial for improving its performance in classification tasks.</li><li><strong>Parameter Optimization:</strong> Understanding how entropy and complexity change during training can help developers adjust model parameters, such as the number of layers or learning rates, to obtain more accurate and efficient results.</li><li><strong>Interpretation of Weight Changes:</strong> The analysis of entropy and complexity provides a useful tool for interpreting changes in weights and understanding how the model is learning and adapting. 
This is especially important for developing models that can generalize well to unseen data.</li></ol><h3>Real-World Applications:</h3><p>The techniques of entropy and complexity analysis are not only relevant for adjusting neural networks in image classification tasks but also have applications in a wide range of real-world problems:</p><ul><li><strong>Anomaly Detection:</strong> In anomaly detection systems, high entropy can indicate greater variability in the data, which could be useful for identifying unusual behaviors or errors in the data.</li><li><strong>Optimization of Production Models:</strong> For models in production, such as recommendation systems or natural language processing applications, understanding the evolution of weights can help improve model stability and efficiency.</li><li><strong>Development of New Architectures:</strong> Insights gained from these analyses can guide the development of new neural network architectures that are more robust and capable of handling variability in the data.</li></ul><h3>Conclusion</h3><p>The analysis of weight evolution, entropy, and complexity offers a better understanding of how neural network models learn and adjust during training. These techniques provide valuable tools for optimizing model performance and applying this knowledge to a variety of practical problems. 
Effectively interpreting and adjusting weights is essential for developing more accurate and efficient artificial intelligence models.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1f023f1378b2" width="1" height="1" alt=""><hr><p><a href="https://medium.com/leniolabs/exploring-the-learning-secrets-of-neural-networks-using-entropy-and-complexity-1f023f1378b2">Exploring the Learning Secrets of Neural Networks Using Entropy and Complexity</a> was originally published in <a href="https://medium.com/leniolabs">Leniolabs_</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Last Week on AI — no.43]]></title>
            <link>https://medium.com/leniolabs/last-week-on-ai-no-43-2c1f2f4ec2fd?source=rss----c133038ee589---4</link>
            <guid isPermaLink="false">https://medium.com/p/2c1f2f4ec2fd</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[openai]]></category>
            <category><![CDATA[chatgpt]]></category>
            <category><![CDATA[ai-newsletter]]></category>
            <category><![CDATA[ai-news]]></category>
            <dc:creator><![CDATA[Leniolabs_]]></dc:creator>
            <pubDate>Tue, 27 Aug 2024 15:35:55 GMT</pubDate>
            <atom:updated>2024-08-27T15:35:14.974Z</atom:updated>
            <content:encoded><![CDATA[<h3>Last Week on AI — no.43</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*qhcfq4Zo9Nr5l6StZIeodA.png" /></figure><blockquote><em>Every week, new technologies and strategic innovations appear in the ever-evolving AI scene, and it’s our duty to report on those we consider essential. So, let’s quickly dive into what made the list in the past few weeks!</em></blockquote><p><a href="https://www.cursor.com/">Cursor - The AI Code Editor</a></p><p><strong>Cursor’s Cutting-Edge Editor</strong>: Cursor recently unveiled its AI-powered code editor, designed to transform how developers write, navigate, and refactor code by making AI assistance a native part of the editing experience.</p><p><a href="https://blackforestlabs.ai/announcing-black-forest-labs/">Announcing Black Forest Labs</a></p><p><strong>Introducing Black Forest Labs and the Flux Model</strong>: Black Forest Labs has officially announced its launch along with its groundbreaking Flux model, a new suite of text-to-image generation models.</p><p><a href="https://siliconangle.com/2024/08/13/sakana-ai-creates-ai-scientist-automate-scientific-research-discovery/">Sakana AI creates an &#39;AI Scientist&#39; to automate scientific research and discovery - SiliconANGLE</a></p><p><strong>Sakana AI Develops AI Scientist</strong>: Sakana AI has introduced an “AI scientist” capable of automating the scientific research and discovery processes. 
This innovation could dramatically alter the landscape of research by increasing efficiency and reducing the time required for scientific advancements.</p><p><a href="https://openai.com/index/gpt-4o-fine-tuning/">https://openai.com/index/gpt-4o-fine-tuning/</a></p><p><strong>GPT-4o Fine-Tuning Introduced by OpenAI</strong>: OpenAI has rolled out fine-tuning capabilities for GPT-4o, allowing users to tailor AI responses to better suit specific industry needs and challenges.</p><p><a href="https://fortune.com/2024/08/20/meta-external-agent-new-web-crawler-bot-scrape-data-train-ai-models-llama/">A new web crawler launched by Meta last month is quietly scraping the web for AI training data</a></p><p><strong>Meta’s New External Agent for Data Scraping</strong>: Meta has launched a new web crawler bot, named External Agent, which is specifically designed to scrape data for training its Llama AI models. This bot aims to enhance the efficiency and accuracy of data collection for AI training.</p><p><a href="https://arxiv.org/abs/2408.12570">Jamba-1.5: Hybrid Transformer-Mamba Models at Scale</a></p><p><strong>Advances in AI Efficiency Research</strong>: A new research paper published on arXiv introduces methods for increasing the efficiency of AI algorithms, potentially leading to more sustainable AI practices in the future.</p><blockquote><strong><em>Last week on AI is a weekly recap of the most significant #ai news from the past two weeks, curated by the team at </em></strong><a href="https://www.linkedin.com/company/leniolabs/"><strong><em>Leniolabs_</em></strong></a></blockquote><p>👇🏽 Learn what we can do for you:</p><p><a href="https://www.leniolabs.com/?utm_source=medium&amp;utm_medium=newsletter&amp;utm_campaign=ai-news">Leniolabs_ | Frontend team augmentation that works</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2c1f2f4ec2fd" width="1" height="1" alt=""><hr><p><a 
href="https://medium.com/leniolabs/last-week-on-ai-no-43-2c1f2f4ec2fd">Last Week on AI — no.43</a> was originally published in <a href="https://medium.com/leniolabs">Leniolabs_</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Last Week on AI — no. 41]]></title>
            <link>https://medium.com/leniolabs/last-week-on-ai-no-41-4a6a261a8677?source=rss----c133038ee589---4</link>
            <guid isPermaLink="false">https://medium.com/p/4a6a261a8677</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[ai-news]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[openai]]></category>
            <dc:creator><![CDATA[Leniolabs_]]></dc:creator>
            <pubDate>Mon, 29 Jul 2024 15:47:50 GMT</pubDate>
            <atom:updated>2024-07-29T15:47:37.655Z</atom:updated>
            <content:encoded><![CDATA[<h3>Last Week on AI — no. 41</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oF_6vD_muFVlL8O6tmv4Nw.png" /></figure><blockquote><em>We’ve seen the introduction of several exciting new models like Llama3.1, Mistral Large 2 and ChatGPT-4o mini, marking an exciting week filled with advancements in AI.</em></blockquote><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;schema=twitter&amp;url=https%3A//x.com/karpathy/status/1815842603377779140&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/5a57814753d75721f82d9eb1d61e5190/href">https://medium.com/media/5a57814753d75721f82d9eb1d61e5190/href</a></iframe><p><strong><em>Llama 3.1 unveiled</em>:</strong> Meta AI’s new iteration of the Llama model has been released, boasting improvements and new features.</p><p><a href="https://mistral.ai/news/mistral-large-2407/">Large Enough</a></p><p><strong><em>Mistral Large 2 debuted</em>:</strong> The latest from Mistral AI, Mistral Large 2, promises enhanced capabilities and performance</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/StabilityAI/status/1815402275986387410&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/852d95402fcaec0843cf39fd63aa860f/href">https://medium.com/media/852d95402fcaec0843cf39fd63aa860f/href</a></iframe><p><strong><em>Stable Audio Open goes public</em>:</strong> Stability AI has released Stable Audio Open, expanding the landscape of AI-generated audio.</p><p><a href="https://www.tomsguide.com/ai/chatgpt/openai-to-make-gpt-4o-advanced-voice-available-by-the-end-of-the-month-to-select-group-of-users">OpenAI to make GPT-4o Advanced Voice available by the end of the month to select group of 
users</a></p><p><strong><em>GPT-4o Audio coming soon</em>:</strong> OpenAI announces that GPT-4o Audio will be available to a select group of users by the end of the month, promising significant enhancements in voice synthesis technology.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/sama/status/1815878155619754185&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/9a18c07616a926d00b0b3b04aca666a5/href">https://medium.com/media/9a18c07616a926d00b0b3b04aca666a5/href</a></iframe><p><strong><em>GPT-4o mini finetune released</em>:</strong> OpenAI has also introduced a finetuned version of the GPT-4o mini to provide more tailored responses.</p><p><a href="https://www.theverge.com/2024/7/18/24201041/meta-multimodal-llama-ai-model-launch-eu-regulations">Meta won&#39;t release its multimodal Llama AI model in the EU</a></p><p><strong><em>Meta halts multimodal Llama in EU</em>: </strong>Due to stringent EU regulations, Meta has decided not to release its multimodal Llama AI model in the European Union.</p><blockquote><strong><em>Last week on AI is a weekly recap of the most significant Artificial Intelligence news from the past two weeks, curated by a team of developers from </em></strong><a href="https://www.linkedin.com/company/leniolabs/"><strong><em>Leniolabs_</em></strong></a></blockquote><p><strong>👇🏽 Learn what we can do for you:</strong></p><p><a href="https://www.leniolabs.com/?utm_source=linkedin&amp;utm_medium=newsletter&amp;utm_campaign=ai-news">Leniolabs_ | Frontend team augmentation that works</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=4a6a261a8677" width="1" height="1" alt=""><hr><p><a href="https://medium.com/leniolabs/last-week-on-ai-no-41-4a6a261a8677">Last Week on AI — no. 
41</a> was originally published in <a href="https://medium.com/leniolabs">Leniolabs_</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Last Week on AI — no.40]]></title>
            <link>https://medium.com/leniolabs/last-week-on-ai-no-40-12bcfdef6541?source=rss----c133038ee589---4</link>
            <guid isPermaLink="false">https://medium.com/p/12bcfdef6541</guid>
            <category><![CDATA[ai-news]]></category>
            <category><![CDATA[ai-newsletter]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[newsletter]]></category>
            <category><![CDATA[technews]]></category>
            <dc:creator><![CDATA[Leniolabs_]]></dc:creator>
            <pubDate>Mon, 15 Jul 2024 16:10:32 GMT</pubDate>
            <atom:updated>2024-07-15T16:10:19.867Z</atom:updated>
            <content:encoded><![CDATA[<h3>Last Week on AI — no.40</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zjnZUsEhnTC4Ne5kBV4w3w.png" /></figure><p><a href="https://arxiv.org/abs/2309.03883">DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models</a></p><p>A new chapter in AI factuality: DoLa, a decoding strategy that improves factual generation by contrasting layer outputs, is detailed in a newly published paper.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/ammaar/status/1808155555879403765&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/121060a331dde8890c91d59cc23148d9/href">https://medium.com/media/121060a331dde8890c91d59cc23148d9/href</a></iframe><p>Voice technology leaps forward: ElevenLabs has unveiled its new famous voices feature, promising to revolutionize voice synthesis.</p><iframe src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/AIatMeta/status/1808157832497488201&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/f530c318bfdd9272af7c370604e4114a/href">https://medium.com/media/f530c318bfdd9272af7c370604e4114a/href</a></iframe><p>Revolutionizing 3D creation: Meta introduces Meta 3D Gen, a groundbreaking system that generates 3D assets from text in less than a minute.</p><p><a href="https://www.theverge.com/2024/7/10/24195528/microsoft-apple-openai-board-observer-seat-drop-regulator-scrutiny">Microsoft and Apple ditch OpenAI board seats amid regulatory scrutiny</a></p><p>Strategic alliances shape the future: Apple and Microsoft have dropped their pursuit of observer roles on OpenAI’s board due to increased regulatory scrutiny, following their recent partnership developments.</p><iframe 
src="https://cdn.embedly.com/widgets/media.html?type=text%2Fhtml&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;schema=twitter&amp;url=https%3A//x.com/AIatMeta/status/1808579885499363598&amp;image=" width="500" height="281" frameborder="0" scrolling="no"><a href="https://medium.com/media/92656108e3a97861bf176e1ee50eec0e/href">https://medium.com/media/92656108e3a97861bf176e1ee50eec0e/href</a></iframe><p>Expanding AI’s language capabilities: The latest advancements in multi-token prediction for LLMs promise to enhance how machines understand and generate human language.</p><p><a href="https://huggingface.co/blog/winning-aimo-progress-prize">How NuminaMath Won the 1st AIMO Progress Prize</a></p><p>Celebrating AI milestones: NuminaMath celebrates its victory in the 1st AIMO Progress Prize, an award recognizing significant advancements in AI.</p><blockquote><em>Every week, we delve deep into the most important news from the AI and Data field. </em><strong><em>Follow the latest technical news, handpicked by a team of developers and engineers at </em></strong><a href="https://www.linkedin.com/company/leniolabs/"><strong><em>Leniolabs_</em></strong></a><em>, and free from the hype and drama that often dominate the field — </em><strong><em>with our newsletter!</em></strong></blockquote><p><strong>👇🏽 Learn what we can do for you:</strong></p><p><a href="https://www.leniolabs.com/?utm_source=linkedin&amp;utm_medium=newsletter&amp;utm_campaign=ai-news">Leniolabs_ | Frontend team augmentation that works</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=12bcfdef6541" width="1" height="1" alt=""><hr><p><a href="https://medium.com/leniolabs/last-week-on-ai-no-40-12bcfdef6541">Last Week on AI — no.40</a> was originally published in <a href="https://medium.com/leniolabs">Leniolabs_</a> on Medium, where people are continuing the conversation by highlighting and responding to this 
story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Last Week on AI — no. 39]]></title>
            <link>https://medium.com/leniolabs/last-week-on-ai-no-39-a3584098cafd?source=rss----c133038ee589---4</link>
            <guid isPermaLink="false">https://medium.com/p/a3584098cafd</guid>
            <category><![CDATA[openai]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[tech]]></category>
            <category><![CDATA[innovation]]></category>
            <dc:creator><![CDATA[Leniolabs_]]></dc:creator>
            <pubDate>Tue, 30 Apr 2024 17:29:32 GMT</pubDate>
            <atom:updated>2024-04-30T17:29:15.883Z</atom:updated>
            <content:encoded><![CDATA[<h3>Last Week on AI — no. 39</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ThOb8iG686sBpjIZYNX4lQ.png" /></figure><p><strong>Last week in AI:</strong> Alibaba releases Qwen 1.5, a powerful multilingual LLM. DHS forms an AI Safety Board with industry leaders. Astribot introduces advanced robotics — see their demo! Plus, OpenAI’s new research enhances LLM security, and Apple unveils efficient OpenELM models.</p><p>👇🏽🚀 Let’s dive into these innovations!</p><p><strong>Alibaba open-sourced the 110B-parameter version of Qwen 1.5.</strong></p><p><a href="https://qwenlm.github.io/blog/qwen1.5-110b/">Qwen1.5-110B: The First 100B+ Model of the Qwen1.5 Series</a></p><p>Context length of 32K tokens — multilingual support, including English, Chinese, French, Spanish, Japanese, Korean, Vietnamese, and more.</p><p>👇🏽 Try the Demo on <strong>Hugging Face:</strong></p><p><a href="https://huggingface.co/spaces/Qwen/Qwen1.5-110B-Chat-demo">Qwen1.5 110B Chat Demo - a Hugging Face Space by Qwen</a></p><p>It performs similarly to, or slightly better than, Llama-3 across a range of LLM evaluations.</p><p><strong>The Department of Homeland Security (DHS) has created the Artificial Intelligence Safety and Security Board.</strong></p><p><a href="https://www.dhs.gov/ai/promoting-ai-safety-and-security">Promoting AI Safety and Security | Homeland Security</a></p><p>The 22-member board includes tech leaders OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, Nvidia CEO Jensen Huang, IBM CEO Arvind Krishna, Adobe CEO Shantanu Narayen, Microsoft CEO Satya Nadella, Alphabet CEO Sundar Pichai, Cisco CEO Chuck Robbins, Amazon Web Services CEO Adam Selipsky, and Advanced Micro Devices CEO Lisa Su.</p><ul><li><a href="https://www.reuters.com/technology/us-homeland-security-names-ai-safety-security-advisory-board-2024-04-26/">US Homeland Security names AI safety, security advisory 
board</a></li><li><a href="https://interestingengineering.com/innovation/chinese-robot-shows-human-like-speed">China&#39;s S1 robot displays &#39;human-like&#39; speed and precision</a></li></ul><p>Another Chinese firm makes its way into the AI market:<strong> Astribot.</strong> The Shenzhen-based subsidiary of Stardust Intelligence (founded in December 2022) is a robotics firm focused on developing AI robot assistants.</p><p>Unparalleled agility, dexterity, and accuracy for the S1. Check out their demo video!</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FAePEcHIIk9s%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DAePEcHIIk9s&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FAePEcHIIk9s%2Fhqdefault.jpg&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/831fd16134b8ff7fd0ee6de227354cfc/href">https://medium.com/media/831fd16134b8ff7fd0ee6de227354cfc/href</a></iframe><p>📝 <strong>A new paper from OpenAI </strong>on prompt injection has been released: <strong>The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions</strong>.</p><p><a href="https://arxiv.org/abs/2404.13208">The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions</a></p><p>It advances robustness against prompt injection and other ways of tricking LLMs into executing unsafe actions. 
</p><p>Last week,<strong> Apple quietly published OpenELM, a family of small, open, and on-device models</strong>, designed to run efficiently on iPhones and Macs.</p><p>They come in four sizes: OpenELM-270M, OpenELM-450M, OpenELM-1.1B, OpenELM-3B.</p><p><a href="https://machinelearning.apple.com/research/openelm">OpenELM: An Efficient Language Model Family with Open Training and Inference Framework</a></p><p>📝 <strong>OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework</strong></p><p><a href="https://arxiv.org/abs/2404.14619">OpenELM: An Efficient Language Model Family with Open-source Training and Inference Framework</a></p><p>They also presented <strong>CoreNet,</strong> a training library used to train <strong>OpenELM.</strong></p><p><a href="https://github.com/apple/corenet">GitHub - apple/corenet: CoreNet: A library for training deep neural networks</a></p><p>OpenAI partnered with the Financial Times in a licensing agreement to enhance ChatGPT with attributed content and improve OpenAI’s models.</p><blockquote><strong><em>Last week on AI is a weekly recap of the most significant #ai news from the past week, curated by the team at </em></strong><a href="https://www.linkedin.com/company/leniolabs/"><strong><em>Leniolabs_</em></strong></a></blockquote><p><strong>👇🏽 Learn what we can do for you:</strong></p><p><a href="https://www.leniolabs.com/?utm_source=linkedin&amp;utm_medium=newsletter&amp;utm_campaign=ai-news">Leniolabs_ | Frontend team augmentation that works</a></p><p>🚀 Stay up-to-date on new AI advancements with<strong> Last Week on AI!</strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a3584098cafd" width="1" height="1" alt=""><hr><p><a href="https://medium.com/leniolabs/last-week-on-ai-no-39-a3584098cafd">Last Week on AI — no. 
39</a> was originally published in <a href="https://medium.com/leniolabs">Leniolabs_</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Last Week on AI — no. 38]]></title>
            <link>https://medium.com/leniolabs/last-week-on-ai-no-38-534c0a786708?source=rss----c133038ee589---4</link>
            <guid isPermaLink="false">https://medium.com/p/534c0a786708</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[technews]]></category>
            <category><![CDATA[ai-newsletter]]></category>
            <category><![CDATA[aidigest]]></category>
            <dc:creator><![CDATA[Leniolabs_]]></dc:creator>
            <pubDate>Tue, 23 Apr 2024 16:49:35 GMT</pubDate>
            <atom:updated>2024-04-23T16:49:35.736Z</atom:updated>
            <content:encoded><![CDATA[<h3>Last Week on AI — no. 38</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wYI8E6mUcQwyTHmgR7RXxg.png" /></figure><p><strong>This week’s AI newsletter is packed with breakthroughs!</strong></p><p>Meta AI has launched <strong>Llama-3</strong>, hailed as the most advanced open-source model to date, less than a week after the release of <strong>Mixtral-8x22B.</strong> Microsoft swiftly followed up with the announcement of<strong> Phi-3</strong>, their open lightweight models. On top of that, Research at Microsoft revealed <strong>VASA-1</strong>, a new step in deepfake technology. Meanwhile, Boston Dynamics has introduced an <strong>all-electric version of their Atlas robot</strong>, and Adobe has presented <strong>VideoGigaGAN</strong>, capable of upscaling videos by 8x.</p><p>🚀👇🏽 Dive into our weekly selected AI news!</p><p><strong>MetaAI has released Llama-3</strong>, and it made resounding news in the open-source space. They say it is “the most capable openly available LLM to date”. It comes in two formats: 8B &amp; 70B models.</p><p><a href="https://llama.meta.com/llama3/">Meta Llama 3</a></p><p><strong>Microsoft research released VASA-1:</strong> Lifelike Audio-Driven Talking Faces Generated in Real Time. 
Read the paper 👇🏽</p><p><a href="https://arxiv.org/abs/2404.10667v1">VASA-1: Lifelike Audio-Driven Talking Faces Generated in Real Time</a></p><p><strong>TL;DR: single portrait photo + speech audio = hyper-realistic talking face video with precise lip-audio sync, lifelike facial behavior, and naturalistic head movements, generated in real time.</strong></p><p><a href="https://www.microsoft.com/en-us/research/project/vasa-1/">https://www.microsoft.com/en-us/research/project/vasa-1/</a></p><p>Last week, <strong>Boston Dynamics retired their hydraulic Atlas and unveiled a fully electric Atlas robot.</strong></p><p><a href="https://bostondynamics.com/blog/electric-new-era-for-atlas/">An Electric New Era for Atlas | Boston Dynamics</a></p><p><strong>Adobe research dropped VideoGigaGAN: Towards Detail-rich Video Super-Resolution</strong></p><p>Read the paper 👇🏽</p><p><a href="https://arxiv.org/html/2404.12388v1">VideoGigaGAN: Towards Detail-rich Video Super-Resolution</a></p><p>It allows you to upscale video by 8x with enhanced details. Video super-resolution (VSR) approaches have shown impressive temporal consistency in upsampled videos.</p><p><a href="https://videogigagan.github.io/">VideoGigaGAN</a></p><p><strong>Microsoft</strong> <strong>just dropped Phi-3,</strong> less than a week after the release of Llama-3 from Meta. 
It comes in 3 different sizes: mini (3.8B), small (7B) &amp; medium (14B).</p><p><a href="https://huggingface.co/microsoft/Phi-3-mini-4k-instruct">microsoft/Phi-3-mini-4k-instruct · Hugging Face</a></p><p><strong>It is trained on 3.3 trillion tokens and is reported to rival Mixtral 8x7B and GPT-3.5.</strong> It has a default context length of 4K tokens but also includes a version extended to 128K.</p><p><strong>Mistral AI released their latest model: The Mixtral-8x22B </strong>LLM is a pretrained generative Sparse Mixture-of-Experts model.</p><p><a href="https://mistral.ai/news/mixtral-8x22b/">Cheaper, Better, Faster, Stronger</a></p><p>Try it out on <strong>HuggingFace:</strong></p><p><a href="https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1">mistral-community/Mixtral-8x22B-v0.1 · Hugging Face</a></p><blockquote><em>Last week on AI is a weekly recap of the most significant #ai news from the past week, curated by the team at </em><a href="https://www.linkedin.com/company/leniolabs/"><em>Leniolabs_</em></a></blockquote><p>👇🏽 Learn what we can do for you:</p><p><a href="https://www.leniolabs.com/?utm_source=linkedin&amp;utm_medium=newsletter&amp;utm_campaign=ai-news">Leniolabs_ | Frontend team augmentation that works</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=534c0a786708" width="1" height="1" alt=""><hr><p><a href="https://medium.com/leniolabs/last-week-on-ai-no-38-534c0a786708">Last Week on AI — no. 38</a> was originally published in <a href="https://medium.com/leniolabs">Leniolabs_</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Last Week on AI — no.37]]></title>
            <link>https://medium.com/leniolabs/last-week-on-ai-no-37-5774a172dce8?source=rss----c133038ee589---4</link>
            <guid isPermaLink="false">https://medium.com/p/5774a172dce8</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[innovation]]></category>
            <category><![CDATA[ai-news]]></category>
            <category><![CDATA[technews]]></category>
            <dc:creator><![CDATA[Leniolabs_]]></dc:creator>
            <pubDate>Tue, 16 Apr 2024 16:45:18 GMT</pubDate>
            <atom:updated>2024-04-16T16:45:18.334Z</atom:updated>
            <content:encoded><![CDATA[<h3>Last Week on AI — no.37</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*aVuMvqVH3k2gGCGtP8em_w.png" /></figure><p>🚀 <strong>Last week in AI brought us some crucial updates!</strong> From Google’s DeepMind seeing another team branch out, to Apple boosting Siri’s smarts with its new ReALM model. Also buzzing: UDIO’s innovative AI-powered music creation platform, OpenAI making GPT-4 Turbo with Vision generally available in the API, and the introduction of the limited-access Voice Engine to create synthetic voices. Adobe isn’t staying behind either, teasing revolutionary Generative AI features in Premiere Pro for 2024. <strong>Let’s dive in:</strong></p><p><a href="https://www.theinformation.com/articles/more-google-deepmind-staff-depart-to-launch-an-ai-startup">https://www.theinformation.com/articles/more-google-deepmind-staff-depart-to-launch-an-ai-startup</a></p><p>Due to bureaucratic delays and venture capital interest, a<strong> group of researchers from Google’s DeepMind left to start their own AI startup, Uncharted Labs.</strong> This is part of a larger trend of AI talent leaving DeepMind, with 16 researchers beginning their own ventures in the past year. Reasons for leaving include concerns about the commercial impact and pace of innovation, as well as opportunities for faster iteration and feedback outside of Google.</p><p><a href="https://arxiv.org/html/2403.20329v1">ReALM: Reference Resolution As Language Modeling</a></p><p><strong>Apple’s new AI model, ReALM,</strong> aims to improve Siri’s intelligence and contextual understanding in iOS 18. 
This update could make Siri more intuitive and personalized, enhancing user experience.</p><ul><li><a href="https://medium.com/macoclock/apples-new-realm-ai-model-is-about-to-make-your-iphone-a-genius-d0f422e9c533">Apple’s New ReALM AI Model Is About to Make Your iPhone a Genius</a></li><li><a href="https://www.udio.com/">Udio | AI Music Generator - Official Website</a></li></ul><p><a href="https://udio.com"><strong>UDIO</strong></a><strong> offers a unique music creation experience by harnessing the power of AI.</strong> Users can input text prompts to generate music in various styles, from electronic to Broadway musicals, with the option to add custom intros and outros. The platform showcases AI-generated parody/comedy songs and offers a referral program for purchasing music technology products, including Soundpaint, a music composition tool.</p><p><a href="https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4">https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4</a></p><p><strong>GPT-4 Turbo with Vision is now generally available in the API</strong>. Vision requests can now also use JSON mode and function calling.</p><p><a href="https://openai.com/blog/navigating-the-challenges-and-opportunities-of-synthetic-voices">Navigating the Challenges and Opportunities of Synthetic Voices</a></p><p><strong>OpenAI has introduced a text-to-voice generation platform named Voice Engine that can create synthetic voices based on a 15-second sample of a person’s voice. </strong>The AI-generated voice can read text prompts in various languages. The technology aims to assist with translation and reading, and to aid individuals who have lost their ability to speak, while emphasizing the importance of ethical use and safeguards to prevent misuse.</p><p><a href="https://x.ai/blog/grok-1.5v">Grok-1.5 Vision Preview</a></p><p><strong>Introducing Grok-1.5V, X.AI’s first-generation multimodal model. </strong>Grok-1.5V will be available soon to early testers and existing Grok users.</p><p><a href="https://www.adobe.com/products/premiere/ai-video-editing.html">AI Video Editing - Adobe Premiere Pro</a></p><p><strong>Premiere Pro to add generative AI features like Object Addition, Object Removal, and Generative Extend</strong> — powered by the Adobe Firefly video model.</p><p>They also said they are working on “Early research explorations with our friends at Open AI, Runway, and Pika Labs…” Let’s see where this is heading.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F6de4akFiNYM%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D6de4akFiNYM&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F6de4akFiNYM%2Fhqdefault.jpg&amp;key=d04bfffea46d4aeda930ec88cc64b87c&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/973cbc05d020ec7f5a2dec7b6bef46d4/href">https://medium.com/media/973cbc05d020ec7f5a2dec7b6bef46d4/href</a></iframe><blockquote><em>Last week on AI is a weekly recap of the most significant #ai news from the past 
week, curated by the team at </em><a href="https://medium.com/u/c38e678e05e"><em>Leniolabs_</em></a></blockquote><p><strong>👇🏽 Learn what we can do for you:</strong></p><p><a href="https://www.leniolabs.com/?utm_source=linkedin&amp;utm_medium=newsletter&amp;utm_campaign=ai-news">Leniolabs_ | Frontend team augmentation that works</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5774a172dce8" width="1" height="1" alt=""><hr><p><a href="https://medium.com/leniolabs/last-week-on-ai-no-37-5774a172dce8">Last Week on AI — no.37</a> was originally published in <a href="https://medium.com/leniolabs">Leniolabs_</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Last Week on AI — no. 36]]></title>
            <link>https://medium.com/leniolabs/last-week-on-ai-no-36-dd12708d0df5?source=rss----c133038ee589---4</link>
            <guid isPermaLink="false">https://medium.com/p/dd12708d0df5</guid>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[ai-newsletter]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[openai]]></category>
            <category><![CDATA[ai-news]]></category>
            <dc:creator><![CDATA[Leniolabs_]]></dc:creator>
            <pubDate>Mon, 08 Apr 2024 15:53:19 GMT</pubDate>
            <atom:updated>2024-04-08T15:53:19.596Z</atom:updated>
            <content:encoded><![CDATA[<h3>Last Week on AI — no. 36</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*v5QhZA-Gj2haD1XJLintTQ.png" /></figure><h4>This week’s AI News compilation:</h4><p><strong>Microsoft and OpenAI are working on plans for a data center project</strong> that could cost as much as $100 billion and include an artificial intelligence supercomputer called<strong> “Stargate” </strong>set to launch in 2028, The Information reported on Friday.</p><p><a href="https://www.reuters.com/technology/microsoft-openai-planning-100-billion-data-center-project-information-reports-2024-03-29/">Microsoft, OpenAI plan $100 billion data-center project, media report says</a></p><p><strong>DALL·E now supports inpainting</strong> within your created images. Available on every platform (web, iOS, Android).</p><p><strong>Introducing Stable Audio 2.0</strong>, which adds audio-to-audio generation, allowing users to upload and transform samples using natural language prompts.</p><p><a href="https://stability.ai/news/stable-audio-2-0">Introducing Stable Audio 2.0 - Stability AI</a></p><p><strong>ObjectDrop from GoogleAI </strong>achieves photorealistic object removal and insertion when editing AI-generated images.</p><p><a href="https://objectdrop.github.io/">ObjectDrop: Bootstrapping Counterfactuals for Photorealistic Object Removal and Insertion</a></p><p>👉🏽 The released paper: <strong>ObjectDrop: Bootstrapping Counterfactuals for Photorealistic Object Removal and Insertion: </strong><a href="https://arxiv.org/abs/2403.18818">https://arxiv.org/abs/2403.18818</a></p><p><strong>EVI, the world’s first emotionally intelligent AI, by HumeAI.</strong></p><p>Hume’s Empathic Voice Interface (EVI) is the world’s first emotionally intelligent voice AI. It accepts live audio input and returns both generated audio and transcripts augmented with measures of vocal expression.</p><p><a href="https://demo.hume.ai/">Voice-to-Voice Demo * Hume AI</a></p><p><strong>NVIDIA launched ChatRTX,</strong> a free AI-boosted chatbot that can assist you in your daily tasks. 
It lets you personalize a GPT model connected to your own content — docs, notes, or other data.</p><p><a href="https://www.nvidia.com/en-us/ai-on-rtx/chatrtx/">NVIDIA ChatRTX</a></p><p><strong>TSMC gets $6.6 billion in chipmaking cash from Biden </strong>while pledging to build a third Arizona plant.</p><p><a href="https://finance.yahoo.com/news/tsmc-gets-66-billion-in-chipmaking-cash-from-biden-while-pledging-to-build-a-third-arizona-plant-090026550.html?guccounter=1">TSMC gets $6.6 billion in chipmaking cash from Biden while pledging to build a third Arizona plant</a></p><blockquote><em>Last week on AI is a weekly recap of the most significant #ai news from the past week, curated by the team at Leniolabs_</em></blockquote><p>👇🏽 Learn what we can do for you:</p><p><a href="https://www.leniolabs.com/?utm_source=linkedin&amp;utm_medium=newsletter&amp;utm_campaign=ai-news">Leniolabs_ | Frontend team augmentation that works</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=dd12708d0df5" width="1" height="1" alt=""><hr><p><a href="https://medium.com/leniolabs/last-week-on-ai-no-36-dd12708d0df5">Last Week on AI — no. 36</a> was originally published in <a href="https://medium.com/leniolabs">Leniolabs_</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>