Apple Unveils New Privacy-Preserving AI Features for iOS 18

Apple unveiled new AI-powered features for iOS 18 at WWDC 2024, focusing on privacy-first design. Unlike other tech giants relying on cloud-based models, Apple’s approach processes most data directly on the device. That means Siri, Photos, and Messages get smarter without sending your information to remote servers.

The centerpiece is on-device generative AI. For the first time, Siri can understand context across apps. You can say, “Show me the photos I took with Alex in Paris last June,” and it works—no internet needed. The model powering this runs entirely on the iPhone’s Neural Engine. Apple claims performance stays smooth even on older devices like the iPhone 14.

iOS 18 also introduces AI-generated summaries in Mail and Messages. Long threads get condensed into bullet points. Apple says summaries are created locally, so sensitive conversations never leave your phone. The feature supports 12 languages at launch, with more rolling out later this year.

Photos sees one of the biggest upgrades. The app can now detect scenes, objects, and people with near-instant search. Users can type prompts like “dog on a beach at sunset” and get accurate results. Behind the scenes, Apple uses a fine-tuned version of a vision transformer model trained on anonymized, public datasets.

Apple isn’t opening full access to its core AI models. Developers can integrate limited functionality using Create ML and the new Inference API. That lets third-party apps run custom models on-device, but only if they meet Apple’s size and efficiency thresholds.

Historical Context

Apple’s privacy-first AI strategy didn’t emerge overnight. The company has long positioned itself as a counterweight to data-hungry platforms. In 2016, during the FBI vs. Apple encryption battle, CEO Tim Cook made privacy a cornerstone of Apple’s identity. That stance shaped how the company approached machine learning in the years that followed.

When Google introduced cloud-based AI for photos in 2017, Apple responded with on-device facial recognition in the iPhone X. No images were stored on servers—everything processed in the Secure Enclave. By 2019, iOS 13 brought on-device speech recognition to Siri, reducing reliance on data transmission.

Apple doubled down in 2021 with App Tracking Transparency. The feature forced apps to ask permission before tracking users across other apps and websites. That move irritated ad-dependent platforms but strengthened Apple’s reputation among privacy-conscious consumers.

In 2022, Apple quietly acquired PullString, a conversational AI startup, and spun up internal teams focused on natural language. Development accelerated in 2023 when the company hired top researchers from smaller AI labs. Internal memos leaked that year revealed a cross-functional team—Project Oak—tasked with building a generative model that could run on mobile hardware.

Project Oak faced steep challenges. Early prototypes were too large for iPhone memory and drained batteries too quickly. Engineers optimized the model by reducing parameter count while preserving accuracy. They used quantization techniques to shrink the model and reworked how the Neural Engine handled inference tasks. By late 2023, they achieved a version small enough to deploy on-device without a performance hit.
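
Apple hasn’t published the details of Project Oak’s compression work, but the quantization idea described above can be sketched generically. The following toy example shows 8-bit affine quantization, a common technique for shrinking model weights; the function names and values are illustrative, not Apple’s implementation:

```python
def quantize_8bit(weights):
    """Map float weights to int8-range values plus a scale and zero point
    (affine quantization): storage drops from 4 bytes to 1 per weight."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0          # guard against a zero range
    zero_point = round(-lo / scale)
    return ([max(0, min(255, round(w / scale) + zero_point)) for w in weights],
            scale, zero_point)

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the quantized form."""
    return [(v - zero_point) * scale for v in q]

weights = [-1.2, 0.0, 0.37, 2.5]
q, s, z = quantize_8bit(weights)
restored = dequantize(q, s, z)
# each restored weight is within one quantization step of the original
assert all(abs(a - b) <= s for a, b in zip(weights, restored))
```

The trade-off is exactly the one the paragraph describes: a 4x smaller memory footprint in exchange for a bounded loss of precision per weight.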

Meanwhile, competitors took different paths. Google and Microsoft integrated large language models into their ecosystems using cloud infrastructure. These models delivered powerful results but required constant internet connectivity and raised privacy concerns. Apple chose restraint, betting users would value control over raw capability.

The rollout of iOS 18 marks the culmination of that bet. While other companies raced to launch AI chatbots and cloud assistants, Apple spent years refining local processing. The result isn’t the most powerful AI on paper—but it’s the first to offer deep integration without compromising privacy.

What This Means For You

If you’re a developer, founder, or builder, Apple’s move changes the game. The rules for app design are shifting—privacy isn’t just a compliance checkbox anymore. It’s a competitive advantage.

Consider a health app that tracks mood and journal entries. Before iOS 18, analyzing text for emotional patterns meant sending data to a cloud server. That created liability, required encryption safeguards, and made users wary. Now, using Create ML and the Inference API, the same analysis can happen on-device. You get actionable insights without the data ever leaving the device. That reduces risk and builds trust—two things investors care about.

Another scenario: a productivity app that summarizes meeting notes. Previously, such a feature relied on third-party services like OpenAI or Google Cloud. That meant recurring costs, potential downtime, and data exposure. With iOS 18, you can embed lightweight summarization directly in the app. Apple handles the model; you handle the interface. The feature works offline, improves reliability, and avoids vendor lock-in.

For founders building consumer apps, the implications are even bigger. Imagine a photo organization app for families. Users upload thousands of images, often containing children. Cloud-based AI would require strict data policies and parental consent workflows. With on-device processing, those hurdles vanish. You can market the app as “private by default,” a powerful differentiator in a crowded space.

But there are limits. Apple’s Inference API doesn’t support real-time model updates. If your app needs to adapt based on user behavior, you’ll have to work around that. Also, model size caps restrict complexity. You can’t run a 7B-parameter language model on an iPhone 14. That means simpler logic, smarter design, and more focus on user experience than raw AI power.
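
The 7B-parameter claim is easy to sanity-check with back-of-envelope arithmetic: a model’s in-memory footprint is roughly its parameter count times the storage width per parameter. (The iPhone 14 ships with 6 GB of RAM, shared with the OS and other apps.)

```python
def model_memory_gb(params_billions, bytes_per_param):
    """Rough in-memory footprint: parameter count times storage width."""
    return params_billions * 1e9 * bytes_per_param / 2**30

fp16 = model_memory_gb(7, 2)    # fp16 weights: 2 bytes each -> ~13 GB
int4 = model_memory_gb(7, 0.5)  # aggressive 4-bit quantization -> ~3.3 GB
assert fp16 > 12 and int4 < 4
```

Even aggressively quantized, a 7B model would consume more than half the phone’s RAM before accounting for activations, which is why the size caps push developers toward much smaller models.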

The shift also affects monetization. Apps that once charged for AI features may need to rethink pricing. If core functionality is now free via iOS, you’ll have to offer value beyond processing—better UI, cross-platform sync, or human-reviewed insights. The bar for premium features just got higher.

Technical Architecture

Under the hood, iOS 18’s AI system is a layered stack optimized for efficiency. At the base is the Neural Engine, the dedicated machine-learning processor Apple has shipped in its A-series chips since the A11. On the A17 Pro it handles up to 35 trillion operations per second, and it now supports dynamic voltage scaling to manage heat and power draw during extended inference tasks.

Above that sits the Private Compute Engine—a sandboxed environment where AI models run isolated from the rest of the OS. It uses memory compression and background throttling to maintain performance when other apps are active. If the battery drops below 20%, the system reduces inference frequency to preserve battery life.

The core generative model is built on a transformer architecture but modified for low latency. Apple reduced the number of layers and used sparse attention patterns to cut compute load. The model’s vocabulary is limited to common phrases and commands—about 50,000 tokens—keeping it small and fast.
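
The article doesn’t specify which sparse attention pattern Apple chose, but one common variant is a sliding window, where each token attends only to its recent neighbors. A minimal sketch of such a mask (illustrative, not Apple’s design):

```python
def sliding_window_mask(seq_len, window):
    """Each position attends only to itself and the previous `window`
    positions, cutting attention cost from O(n^2) to O(n * window)."""
    return [[1 if 0 <= i - j <= window else 0 for j in range(seq_len)]
            for i in range(seq_len)]

mask = sliding_window_mask(6, 2)
assert mask[4] == [0, 0, 1, 1, 1, 0]          # row 4 sees positions 2-4 only
assert sum(sum(row) for row in mask) < 6 * 6  # far fewer pairs than full attention
```

With a fixed window, compute grows linearly with sequence length instead of quadratically, which is what makes long inputs tractable on a phone.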

For Photos, Apple uses a vision transformer trained on a dataset derived from public image repositories like Open Images and LAION-5B. No user data was used. The model was fine-tuned to recognize 1,200 object categories, with special emphasis on people, pets, and locations. It runs in under 200ms on an iPhone 15, enabling real-time search.
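
Prompt-to-photo search of this kind generally works by embedding the text query and each image into a shared vector space, then ranking images by cosine similarity. A toy sketch with hand-made 3-d vectors (real embeddings have hundreds of dimensions; the filenames and values here are hypothetical):

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

photo_index = {
    "IMG_001": [0.9, 0.1, 0.2],   # dog on a beach
    "IMG_002": [0.1, 0.8, 0.3],   # city skyline
    "IMG_003": [0.2, 0.2, 0.9],   # birthday cake
}
query = [0.85, 0.15, 0.25]        # embedding for "dog on a beach at sunset"

best = max(photo_index, key=lambda k: cosine(query, photo_index[k]))
assert best == "IMG_001"
```

Because only vector comparisons are needed at query time, search stays fast even over a large on-device photo library.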

Language models for Mail and Messages use a separate, lightweight architecture. They don’t generate new text—only extract and compress. That keeps the model size under 800MB, within Apple’s on-device threshold. Summaries are generated in three steps: key sentence extraction, semantic clustering, and concise rewriting. The entire pipeline runs in under two seconds for a 100-message thread.
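
The first stage of that pipeline—key sentence extraction without generating new text—can be sketched with a simple frequency-scoring heuristic. This is an illustration of extractive summarization in general, not Apple’s model:

```python
import re
from collections import Counter

def extract_key_sentences(text, k=2):
    """Score each sentence by the average corpus frequency of its words and
    keep the top-k, in original order (extractive: no text is generated)."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))
    def score(s):
        toks = re.findall(r'[a-z]+', s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)
    top = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in top]

thread = ("The launch slipped to Friday. The launch needs sign-off from QA. "
          "Someone brought donuts. QA sign-off is blocked on the launch build.")
summary = extract_key_sentences(thread, k=2)
assert "Someone brought donuts." not in summary
```

Sentences about the thread’s dominant topic score highest, while off-topic asides fall away—mirroring how a compression-only summarizer avoids the risks of free-form generation.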

Security is enforced at multiple levels. All data remains encrypted in memory. The Private Compute Engine can’t access the network. If a model attempts an unauthorized operation, the system kills the process and logs the event. Users can reset AI history in Settings, which wipes cached context and reverts models to default state.

What Happens Next

Apple hasn’t announced plans to open its core models to developers. But signals suggest that could change. The Inference API, while limited now, might expand in future updates. If demand grows, Apple could allow fine-tuning of certain models—within strict privacy boundaries.

Another possibility: enterprise licensing. Companies with sensitive data—law firms, healthcare providers—might pay for enhanced on-device models tailored to their workflows. Apple has the hardware and reputation to make that work, but it would require new support tools and deployment frameworks.

There’s also the question of cross-platform sync. Right now, AI processing is device-specific. Your iPhone learns your habits, but that knowledge doesn’t transfer to your iPad unless you back up. End-to-end encrypted sync of on-device models could solve that, but it’s technically complex. Apple may be testing federated learning techniques to enable shared intelligence without centralized data.
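
Federated learning’s core trick—only model updates leave the device, never raw data—is captured by federated averaging. A minimal sketch under that assumption (toy weights, not any real Apple protocol):

```python
def federated_average(client_weights):
    """FedAvg core idea: each device trains locally and shares only its
    weight vector; a coordinator averages them without seeing raw data."""
    n = len(client_weights)
    dim = len(client_weights[0])
    return [sum(w[i] for w in client_weights) / n for i in range(dim)]

# three devices' locally trained weights (toy 2-d models)
updates = [[0.25, 1.0], [0.5, 0.75], [0.75, 1.25]]
global_model = federated_average(updates)
assert global_model == [0.5, 1.0]
```

The averaged model can then be pushed back to every device, giving shared intelligence without a central copy of anyone’s journal entries or photos.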

Finally, we don’t know how Apple will handle updates. Will AI models improve over time via iOS updates, or will users need to download patches? If models are baked into system updates, older devices might stop receiving improvements after a few years—raising equity concerns. Apple’s track record on software support suggests they’ll extend updates to iPhone 14 and later for at least four years.

One thing’s clear: Apple has set a new standard. Privacy isn’t a trade-off anymore. It’s baked into the architecture. Other companies will have to follow—or risk losing trust in an era where users care more about control than convenience.

About AI Post Daily

Independent coverage of artificial intelligence, machine learning, cybersecurity, and the technology shaping our future.
