Innovation

7 posts with the tag “Innovation”

Understanding Low-Rank Adaptation (LoRA): Revolutionizing Fine-Tuning for Large Language Models

Exploring Low-Rank Adaptation for Efficient Model Fine-Tuning

This article explores Low-Rank Adaptation (LoRA), a technique for efficiently fine-tuning large pretrained models such as GPT-4-class language models and diffusion models like Stable Diffusion. By sharply reducing the number of trainable parameters needed to adapt a model, LoRA makes fine-tuning faster and more cost-effective. We cover its principles, advantages, and practical applications, and provide a hands-on implementation guide using Python libraries.
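The full implementation guide lives in the article itself; as a rough sketch of the core idea, the snippet below adds a trainable low-rank update on top of a frozen linear layer in PyTorch. The class name LoRALinear and the rank/alpha values are illustrative choices, not taken from the article.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer with a trainable low-rank update: W x + (alpha/r) * B A x."""

    def __init__(self, base_linear: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base_linear
        self.base.weight.requires_grad_(False)  # freeze the pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(r, base_linear.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base_linear.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank update; only A and B receive gradients.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Wrap a pretrained projection layer; training now touches only the small A and B matrices.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
out = layer(torch.randn(2, 768))
```

Libraries such as Hugging Face's peft package wire this pattern into full models automatically; the sketch above only shows the per-layer math.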

Apple Intelligence Foundation Language Models

Apple's foundation language models for powering Apple Intelligence across iOS, iPadOS, and macOS

Apple has developed foundation language models to power Apple Intelligence across iOS, iPadOS, and macOS. These consist of a 3-billion-parameter on-device model and a more powerful server-based variant, both designed for efficiency and versatility. Training proceeds in stages: core pre-training on 6.3 trillion tokens, continued pre-training at a longer sequence length, and a final context-lengthening stage. Post-training combines supervised fine-tuning with reinforcement learning from human feedback (RLHF), using techniques such as the iterative teaching committee (iTeC) and mirror descent with leave-one-out estimation (MDLOO). The models are then specialized with LoRA adapters, which keeps task-specific adaptation lightweight enough for on-device use. Benchmark results indicate that the AFM-on-device model outperforms larger open-source models, while the AFM-server model is competitive with GPT-3.5. Both models perform strongly in safety evaluations, underscoring Apple's commitment to responsible AI practices.
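Apple's adapter tooling is not shown in this summary; purely to illustrate the pattern of specializing one frozen base model with small, swappable task adapters, here is a minimal PyTorch sketch. The MultiAdapterLinear class and the adapter names are invented for illustration and do not reflect Apple's implementation.

```python
from typing import Optional

import torch
import torch.nn as nn

class MultiAdapterLinear(nn.Module):
    """One frozen base projection shared by several named low-rank adapters."""

    def __init__(self, in_features: int, out_features: int, r: int = 8):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.requires_grad_(False)   # shared base weights stay frozen
        self.adapters = nn.ModuleDict()   # task name -> low-rank adapter
        self.r = r
        self.active: Optional[str] = None

    def add_adapter(self, name: str) -> None:
        down = nn.Linear(self.base.in_features, self.r, bias=False)
        up = nn.Linear(self.r, self.base.out_features, bias=False)
        nn.init.zeros_(up.weight)         # adapter starts as a no-op
        self.adapters[name] = nn.Sequential(down, up)

    def set_adapter(self, name: str) -> None:
        self.active = name                # switch tasks without touching the base

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base(x)
        if self.active is not None:
            y = y + self.adapters[self.active](x)
        return y

# One shared layer, two small task adapters that can be swapped at runtime.
layer = MultiAdapterLinear(512, 512)
layer.add_adapter("summarization")
layer.add_adapter("mail_reply")
layer.set_adapter("summarization")
out = layer(torch.randn(1, 512))
```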

Building Large Language Models for Multimodal Understanding and Generation

Dive into the latest advancements in multimodal Large Language Models (LLMs), exploring their capabilities to process and generate content across text, images, and audio. Learn about the challenges, methodologies, and applications driving this cutting-edge technology.