Streaming is available in most browsers, and in the Developer app.
Platforms State of the Union
Discover the newest advancements on Apple platforms.
Chapters
- 0:00:21 - Introduction
- 0:01:35 - Apple Intelligence
- 0:09:14 - Generative Tools
- 0:21:10 - Xcode
- 0:23:58 - Swift Assist
- 0:27:44 - Swift
- 0:33:01 - Swift Testing
- 0:35:23 - SwiftUI
- 0:41:58 - RealityKit
- 0:44:17 - iOS
- 0:47:24 - iPadOS
- 0:49:30 - watchOS
- 0:52:11 - macOS
- 0:56:26 - visionOS
- 1:02:15 - Native Experiences
- 1:04:23 - Conclusion
Resources
- Download
♪ ♪ Susan Prescott: Welcome to the WWDC 24 Platforms State of the Union! WWDC is always an exciting time of year when we dig into the technical details of what we've been working on, share how it works, and help you understand what's possible in your apps and games. Before we get started, we want to take a moment to recognize your hard work and dedication-- you are the creators, designers, and developers of the amazing apps and games that people use every day to learn, play, work, and change the world. Thank you! We love connecting with you around the world at Apple Developer Centers, Developer Academies, and Apple Vision Pro labs where you created the first spatial computing apps and games! And the next generation of developers blew us away in the Swift Student Challenge with clever, creative playgrounds that tackled topics from social impact to security. We're continually amazed by your creativity, enthusiasm, and new ideas. Thank you for making this such an incredible and exciting ecosystem. Now, let's dive into the details of our biggest developer announcements, starting with Seb to tell you about Apple Intelligence.
Sebastien Marineau-Mes: This is going to be an incredible year for developers. There are so many innovations happening right now with Generative AI, and today marks the beginning of an exciting chapter with Apple Intelligence. Apple Intelligence is the personal intelligence system, bringing powerful generative models to our platforms. iOS, iPadOS, and macOS get powerful new capabilities for understanding and generating language and images, and helping users take actions, all with rich awareness of users' personal context. It's deeply integrated into features and apps across the system, and built with privacy from the ground up. Before we get into all of the new ways that you can integrate with the new features and bring them into your apps, let's take a look behind the scenes at how Apple Intelligence was developed. For years, our platforms have been at the forefront of running machine learning tasks on-device, taking full advantage of the power of Apple silicon. We want to run as much as we can on-device because it delivers low latency and a better user experience. And, of course, it helps keep users' personal data and activity private. New generative AI models are incredibly exciting and powerful, and they are really pushing the limits of what can be run locally. Apple Intelligence starts with our on-device foundation model, a highly capable Large Language Model. We were looking for the sweet spot: powerful enough for the experiences that we wanted, and yet small enough to run on a device. Starting from that foundation model, there were three key challenges we needed to solve: specializing it to be great for the many tasks and features that we wanted to run, making it small enough to fit on devices like an iPhone, and delivering the best possible inference performance and energy efficiency. The first technique we used is fine-tuning. This involves running different training passes on our model, each teaching it to be great for a given task, such as text summarization, proofreading, or generating mail replies. This process results in a set of distinct models, each trained to be great at one thing but not quite as good at the others. A more efficient approach to fine-tuning leverages a new technique called adapters. Adapters are small collections of model weights that are overlaid onto the common base foundation model. They can be dynamically loaded and swapped, giving the foundation model the ability to specialize itself on the fly for the task at hand. Apple Intelligence includes a broad set of adapters, each fine-tuned for a specific feature. It's an efficient way to scale the capabilities of our foundation model. The next step we took is compressing the model. We leveraged state-of-the-art quantization techniques to take a 16-bit per parameter model down to an average of less than 4 bits per parameter to fit on Apple Intelligence-supported devices, all while maintaining model quality. Lastly, we focused on inference performance and efficiency, optimizing models to get the shortest time to process a prompt and produce a response. We adopted a range of technologies, such as speculative decoding, context pruning, and group query attention, all tuned to get the most out of the Neural Engine. We also applied a similar process for a diffusion model that generates images, using adapters for different styles and Genmoji.
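To make the effect of that compression step concrete, here is a rough back-of-the-envelope memory estimate. The parameter count below is purely an illustrative assumption (the talk doesn't state one); only the bit widths come from the description above.

```text
memory ≈ number of parameters × bytes per parameter

16-bit weights:  3 × 10^9 parameters × 2 bytes    ≈ 6.0 GB
~4-bit weights:  3 × 10^9 parameters × 0.5 bytes  ≈ 1.5 GB
```

In other words, quantizing from 16 bits to an average of under 4 bits per parameter cuts the model's memory footprint to roughly a quarter or less, which is what makes a model of this class practical to keep resident on a phone.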
This is Apple Intelligence on-device: powerful, intuitive, and integrated language and diffusion models that deliver great performance and run on a device small enough to fit in your hand. Still, there are some more advanced features that require larger models to reason over more complex data. So we've extended Apple Intelligence to the cloud with Private Cloud Compute to run those larger foundation models. Because these models process users' personal information, we needed to rethink Cloud Compute and extend the privacy approach of our devices to servers. Private Cloud Compute is designed specifically for processing AI, privately. It runs on a new OS using a hardened subset of the foundations of iOS, based on our industry-leading operating system security work. To mitigate entire classes of privacy risks, we have omitted features that are not strictly necessary in a dedicated AI server, such as persistent data storage. On top of this secure foundation, we have completely replaced the tools normally used to manage servers. Our tooling is designed to prevent privileged access, such as via remote shell, that could allow access to user data. And finally, Private Cloud Compute includes a full machine learning stack that powers intelligence. The result is an unprecedented cloud security foundation based on Apple silicon. It starts with the Secure Enclave to protect critical encryption keys. Secure Boot ensures the OS is signed and verified just like on iOS, and the Trusted Execution Monitor makes sure that only signed and verified code runs. And attestation enables a user's device to securely verify the identity and configuration of a Private Cloud Compute cluster before ever sending a request. For each request, a user's device establishes an end-to-end encrypted connection with a Private Cloud Compute cluster. Only the chosen cluster can decrypt the request data, which is not retained after the response is returned and is never accessible to Apple. But we're going even further. First, we're committing to making virtual images of every production build of Private Cloud Compute publicly available for inspection by security researchers, so they can verify the promises that we're making, and findings will be rewarded through the Apple Security Bounty. Second, we're making sure a user's device will only communicate with Private Cloud Compute clusters that are running a signed build that has been publicly logged for inspection. This is verified with the strong cryptographic attestation mechanisms in Apple silicon. We believe this is the most advanced security architecture ever deployed for cloud AI compute at scale. Apple Intelligence is the personal intelligence system that brings this all together. It includes an on-device semantic index that can organize personal information from across apps as well as an App Intents Toolbox that can understand capabilities of apps and tap into them on a user's behalf. When a user makes a request, Apple Intelligence orchestrates how it's handled either through its on-device intelligence stack or using Private Cloud Compute. And it draws on its semantic index to ground each request in the relevant personal context and uses its App Intents Toolbox to take actions for the user. It's specialized to be absolutely great at the features it enables. It's built with the best possible performance and energy efficiency, and of course, it's designed around privacy and security from the ground up. And that's Apple Intelligence.
We have new APIs to bring these features into your apps, and new ways for your apps to expose their capabilities to Apple Intelligence for deeper integration into system experiences. Let's start with our language and image features: Writing Tools, Genmoji, and Image Playground. Here's Leslie to tell you more. Leslie Ikemoto: Our system-wide Writing Tools use the models Seb just talked about, to help users rewrite, proofread, and summarize text. If you're using any of the standard UI frameworks to render text fields, your app will automatically get Writing Tools! And using our new TextView delegate API, you can customize how you want your app to behave while Writing Tools is active, for example, by pausing syncing to avoid conflicts while Apple Intelligence is processing text. So for an app like Weebly, Writing Tools can help a small business owner find just the right words for their new website! And when it comes to images, Genmoji opens up entirely new ways to communicate, letting users create a new emoji to match any moment. If you're already using our standard text systems with inline images, you're in great shape! All you need to do is set just one property, and now your text views accept Genmoji from the keyboard. Under the hood, Genmoji are handled differently from emoji. While emoji are just text, Genmoji are handled using AttributedString, which is a data type that's been with us for many years, for representing rich text with graphics. Apple Intelligence also delivers amazing new systemwide capabilities for creating fun, original images across apps. The new Image Playground API delivers a consistent, playful, and easy-to-use experience. When you adopt it, you'll get access to the same experience that users are familiar with from Messages, Keynote, and the new Image Playground app. And since the images are created on the user's device, they have the power to experiment and create as many images as they want. And you don't have to worry about setting up or paying for your own text-to-image models or servers to deliver this experience in your own apps. Let's take a look at how easy it is to get started.
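(One quick aside before the demo: for the Genmoji support just mentioned, the "one property" change looks roughly like the sketch below on a UIKit text view. The property name reflects my understanding of the iOS 18 text APIs and should be treated as an assumption rather than code from the session.)

```swift
import UIKit

final class ComposeViewController: UIViewController {
    private let textView = UITextView()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Opting in lets the keyboard insert Genmoji (adaptive image glyphs)
        // alongside plain text; the content is stored as an attributed string.
        textView.supportsAdaptiveImageGlyph = true
        textView.frame = view.bounds
        view.addSubview(textView)
    }
}
```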
Here I am in Xcode, working on an app called Destination Video. I want to use the new Image Playground API to let users add fun avatar pictures to their user profiles. I'm going to do that by adding a quick bit of SwiftUI code to my profile button to set up the Image Playground sheet. Now I'm going to add some code to store the new image. And finally, I'm going to add a text description to give users a default avatar to work with. Now let's tap build and run, and take a look at what we've got over here on my iPad. Here's my profile button. Now when I tap on it, the Image Playground view pops up, and just like that, I get a whimsical avatar I can use for my profile. And users can tap on the bubbles, or edit the prompt we supplied them, to create anything they want.
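A minimal SwiftUI sketch of the kind of integration shown in that demo. The view and state names are placeholders, and the exact `imagePlaygroundSheet` parameters reflect my reading of the ImagePlayground framework rather than the code written on stage.

```swift
import SwiftUI
import ImagePlayground

struct ProfileButton: View {
    @State private var showingPlayground = false
    @State private var avatarURL: URL?   // the generated image is returned as a file URL

    var body: some View {
        Button("Edit Avatar") {
            showingPlayground = true
        }
        // Presents the system Image Playground sheet with a default concept
        // that the user can edit or replace.
        .imagePlaygroundSheet(isPresented: $showingPlayground,
                              concept: "friendly explorer avatar",
                              onCompletion: { url in
            avatarURL = url              // store the new image for the profile
        })
    }
}
```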
Writing Tools, Genmoji, and Image Playground are three powerful new Apple Intelligence features, and you'll delight your users by integrating these into your apps. There's another way to bring Apple Intelligence into your app: Siri! This year, with Apple Intelligence, Siri will be able to take hundreds of new actions in and across apps, including some that leverage the new writing and image generation capabilities we just talked about. This is made possible through significant enhancements we're making to App Intents. App Intents is a framework that lets you define a set of actions for Siri, Shortcuts, and other system experiences, and now, App Intents are a powerful way for your app to tap into Apple Intelligence. We're starting with support for these domains, and over time, we'll add even more. And if you have an app that fits in an existing SiriKit domain, it can also benefit from Siri's enhanced conversational capabilities, like responding correctly even if you stumble over your words, and understanding references to earlier parts of a conversation. There are also two new Siri capabilities that apps can benefit from with no additional work. First, Siri will be able to invoke any item from your app's menus. So when a user who's reviewing a slide deck says, "Show presenter notes," or perhaps more conversationally says, "I need to see my speaker notes," Siri will know just what to do. Second, Siri will be able to access text displayed in any app that uses our standard text systems. This will allow users to directly reference and act on text visible on screen. So when a user is looking at a reminder to wish Grandpa a happy birthday, they can just say, "FaceTime him." So that's taking action through Siri. Now let's talk about how Apple Intelligence equips Siri with personal context understanding. Apple Intelligence can now access a semantic index of things like photos, messages, files, calendar events, and much more, to help it find and understand things it never could before. Siri will now also be able to search data from your app, with a new Spotlight API that enables App Entities to be included in its index. And when App Intents and App Entities come together, they unlock new ways for your users to connect content from across the system to the actions supported by your app. For example, a user can bring a summary from a note they just took in the Notes app into an email they've drafted in Superhuman. With the drafted email defined as an app entity for the index, the user can refer to it conversationally. And Siri can bring content from their note right to where the user wants it in the Superhuman app. Whether you build new App Intents and App Entities, or have existing SiriKit integrations, Apple Intelligence will enable Siri to expose much deeper and more natural access to your app's data and capabilities than ever before. Exposing your app's capability using App Intents is the key to this integration, and you can start working on it today. Users will be able to use your App Intents with the Shortcuts app immediately, and over time, Siri will gain the ability to call the App Intents that fall into supported domains. App intent schemas for two domains are available now, with more coming later this year. And in software updates, we'll be rolling out the in-app actions and personal context understanding you just heard about. Back to Seb. Seb: So that's our language and image features, as well as the new Siri, all powered by Apple Intelligence. 
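As a concrete picture of the App Intents shape described above, here is a small hypothetical intent an email app might expose; the type and parameter names are illustrative, not from the session.

```swift
import AppIntents

// A hypothetical action an email app could expose to Siri, Shortcuts,
// and Apple Intelligence through the App Intents framework.
struct OpenDraftIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Draft"
    static var description = IntentDescription("Opens a draft email for editing.")

    // Parameters let the system and the user pass values into the action.
    @Parameter(title: "Draft Title")
    var draftTitle: String

    @MainActor
    func perform() async throws -> some IntentResult {
        // App-specific navigation to the matching draft would go here.
        return .result()
    }
}
```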
And if you're running your own models and looking for lower-level access to the stack to take advantage of AI-accelerated hardware, there are many more ways to use machine learning and AI on-device within your app. Here's Manasi to tell you more. Manasi Joshi: Our built-in machine learning frameworks offer intelligence capabilities across a number of categories. These include APIs for natural language processing, sound analysis, speech understanding, and vision intelligence. The Vision framework is getting a whole new Swift API this year. There are so many capabilities you can tap into across these frameworks. And you can extend them by using Create ML to bring in additional data for training. For example, if you have a unique data set of images, you can augment our image models with your data to improve classification and object detection. And beyond our frameworks, you can also import and run on-device AI models, such as large language or diffusion models developed and trained elsewhere. You can run a wide array of models across our devices including Whisper, Stable Diffusion, and Mistral. It just takes a couple steps to get a model ready to run in your app. You can start with any PyTorch model. You then use Core ML Tools and convert it into the Core ML format. Core ML Tools offer you a number of ways to optimize your model, leveraging many of the techniques used in Apple Intelligence, such as quantization and efficient key value caching in LLMs. You then run your model within your app using the Core ML framework. Core ML optimizes hardware-accelerated execution across the CPU, GPU, and Neural Engine and includes a number of new tools for you to further optimize performance of your model. Core ML is the most commonly used framework for running AI models as part of apps on iOS, iPadOS, and macOS. When your app has heavy workloads for non-machine learning tasks, you may want more control over when machine learning tasks are executed to manage overall performance. For example, if your app has significant graphics workloads, Metal provides you ways to sequence your machine learning tasks with those other workloads using Metal Performance Shaders so you can get the best GPU performance. And if you are running real-time signal processing on the CPU, the Accelerate framework includes BNNS Graph for your machine learning tasks to maintain stricter control over latency and memory management. Now, let's show you this in action and see how new optimizations can boost model execution. We are using the Mistral 7B parameter model from Mistral's Hugging Face Space. It's been converted into the Core ML Format, and is running in a test-app built using the Swift Transformers package. On macOS Sonoma, this is running as a 16-bit model. For macOS Sequoia, we've applied the latest 4-bit quantization and stateful KV cache techniques in Core ML. We've given the model a simple question: what is ML model quantization in three sentences? You can see that with these optimizations, the model can produce a response over 5 times faster with almost 9 times peak memory savings. Quantization is a powerful technique but can affect the output quality, so we recommend additional testing and tuning afterwards. For those of you experimenting with the latest advancements including training models, there is no better place to do this than the Mac! Whether you are using PyTorch, TensorFlow, JAX, or MLX, you can take full advantage of Apple silicon's hardware acceleration and unified memory when you're training models. 
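Returning to the run-in-your-app step described above, here is a minimal Core ML loading-and-prediction sketch. The model file name and input feature name are placeholders, and the conversion itself happens beforehand with the Python coremltools package.

```swift
import CoreML

func runConvertedModel() throws {
    // Let Core ML schedule work across the CPU, GPU, and Neural Engine.
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all

    // Load a compiled Core ML model bundled with the app (placeholder name).
    guard let modelURL = Bundle.main.url(forResource: "MyConvertedModel",
                                         withExtension: "mlmodelc") else { return }
    let model = try MLModel(contentsOf: modelURL, configuration: configuration)

    // Run a prediction; the feature names depend on the converted model.
    let input = try MLDictionaryFeatureProvider(dictionary: ["prompt": "Hello"])
    let output = try model.prediction(from: input)
    print(output.featureNames)
}
```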
Our work on machine learning and AI is a collaborative effort. We are partnering with others in the research community to advance the state of the art together, and you can find the resulting research online. We've published hundreds of papers with novel approaches to AI models and on-device optimizations, with many including sample code and data sets. And we have shared many of the tools we use for research as open source. There are so many ways for you to tap into on-device AI with Apple platforms. And with machine learning technology advancing rapidly, Apple continues to push on cutting-edge research in this space. Now back to Seb. Seb: This is the beginning of a journey. Apple Intelligence truly is powerful intelligence for your most personal devices. We'll continue to bring generative intelligence into the core of Apple platforms and expose new capabilities to integrate into your apps. Generative Intelligence is also transforming the way that we all write code, and we're adding some amazing new intelligence capabilities to our developer tools. Here's Ken to tell you more.
Ken Orr: Every day, millions of developers around the world use Xcode to bring their ideas to life on all Apple platforms. Xcode helps you write great Swift code and create stunning experiences with SwiftUI, rapidly experiment in Simulators for Apple devices and OSes, get detailed performance insight using Instruments, and test and distribute to all your users with Xcode Cloud. All of this works together seamlessly so you can stay focused, work fast, and unleash your creativity. Xcode 16 begins a whole new chapter for development, as we infuse our tools with the power of Generative Models. Building on Apple's Foundation model, we've created specialized coding models that capture expertise only Apple can provide, like the latest APIs, language features, documentation, and sample code, and best practices distilled from decades of building software for all our platforms. It starts with a major leap forward in a core feature that you use every day, code completion, with an innovative new engine that can predict the code you need. The engine is powered by a unique model specifically trained for Swift and Apple SDKs. It uses your project symbols to customize suggestions. And it runs locally on your Mac, keeping your code private, giving you super-fast results, and even works when you're offline. Let's take a look. I'm working on an app that features fun videos from around the world. Next on my to-do list is adding code to represent a collection of videos. I want a name for the collection. As soon as I start typing, I get a great suggestion. I'll press tab to accept that. I also want a property to hold an array of videos. Again, Xcode finishes the line using a type from my own project. Next, I want a function that returns a sorted list of videos. I'll start typing the name, and I get a suggestion for a function that returns an array of videos sorted by release date. As soon as I accept it, I get a suggested implementation, too. In addition to code in my project, Xcode can use the comments I write as context. I'll add a comment for a function that will return a list of videos for a given director. And with just a couple characters, I get a suggestion for exactly what I had in mind. Next, I'll add the start of a function to get a cached thumbnail for a video. As I'm typing and pick one of the suggestions, I now get all the parameters filled out for my selection. Finally, I'll add the body for my view. I can hold Option to see multiple lines of predicted code, and then press tab to accept them all. Xcode's completion model is powered by Apple silicon and machine learning enhancements in macOS Sequoia. And Xcode automatically downloads and updates the model with the latest SDK and language changes. Building an app is more than just typing code. It's about transforming entire ideas into code. And the easiest way to do that is with natural language. So we created a larger and more powerful model that runs in the cloud. And crafted a unique experience in Xcode that only Apple could deliver. Introducing Swift Assist, a companion for all your coding tasks. And whether you know exactly what you're after, or want help writing the best Swift code, Swift Assist can answer your coding questions and help with tasks like experimenting with new APIs. Swift Assist is a great new way to code. Let me show you. I've always wanted to build an app to catalog the classic Macs in my garage. I'll start with an empty project, and bring up Swift Assist, and ask for what I want. Swift Assist is great for prototyping. 
It can help me quickly visualize an idea. Here, it creates a struct that represents a classic Mac, with its name and details. And I even got some realistic sample data that I can visualize in the preview. This is a great start. Next, how about adding some images? I'll ask to "add images next to each Mac." I already had some images in my asset catalog. And Swift Assist can reference those in the code that it creates. My ClassicMac struct is updated to include an imageName property, and then the sample data is updated with image names from my asset catalog. So far, this is looking great! One of my favorite things about classic Macs is their startup sound. I recorded some and added them to my project, so let's do something fun. I'll ask, "Play sound when I tap on a Mac." And like code completion, Swift Assist uses details from my project, including symbols and the relationships between them, to create personalized code. Let's see how that sounds.
That brings back some great memories! Finally, I want to try out a different layout. Let's see what this looks like when I use a grid instead of a list.
What was just an idea a few minutes ago is something I can run on my device. How cool is that? Swift Assist knows Apple's latest SDKs and Swift language features, so you'll always get up-to-date and modern code that blends perfectly into your project. So now, tasks like exploring new frameworks and experimenting with new ideas are just one request away. Like all Apple developer services, Swift Assist is built with your privacy and security in mind. Your code is never stored on the server. It's only used for processing your request, and most importantly, Apple doesn't use it for training machine learning models. Swift Assist and the new predictive completions will turbocharge the way you work in Xcode. This marks the beginning of a journey to add extraordinary intelligence into our tools. What an exciting time to be developing for Apple platforms. Xcode 16 also has many other new features to make you more productive and improve the quality of your apps, things like a single view of your backtraces, showing relevant code from all stack frames together, a "flame graph" of your profiling data in Instruments, giving you deeper insight into your app's performance, and enhancements to localization catalogs, so you can bring your app to even more people around the world. The first beta of Xcode 16 is available now, including the new predictive completion for Apple silicon Macs. And Swift Assist will be available later this year. Next, let's talk about all the exciting changes in Swift. And here to tell you all about it is Ted. Ted Kremenek: Swift is the revolutionary programming language that's approachable for newcomers and powerful for experts. It's fast, modern, safe, and a joy to write. This year, Swift celebrates its 10th birthday! It's a good time to reflect on the journey so far and set a course for the next decade of Swift. Before Swift, the software on Apple's devices was primarily written using C, C++, and Objective-C. We created Swift to be an expressive and safer programming language that would simplify the process of writing software. Swift is an ideal language for app development, and is used by almost one million apps. But Swift is great for more than just apps. Apple uses Swift throughout our software stack, from apps and system services, to frameworks, all the way down to firmware like the Secure Enclave. And it's also used for network services, like Private Cloud Compute. As Swift continues to evolve, it's becoming a compelling choice for even the most performance-sensitive and secure code. Swift's safety, speed, and approachability, combined with built-in C and C++ interoperability, mean Swift is the best choice to succeed C++. Apple is committed to adopting Swift in our C++ codebases, and moving to Swift will improve the quality of software, both at Apple, and throughout the industry. Looking ahead to the next ten years of Swift, we're working with the open source community to bring Swift to more platforms and domains. First, to meet developers where they are, we're investing in support for Swift in Visual Studio Code and other editors that leverage the Language Server Protocol. We're also expanding our Linux support to include Debian and Fedora and improving support for Windows. And Swift.org hosts a variety of community-authored guides on using Swift across different domains, like creating a web service using Vapor. Swift's community has been key to its success.
Open source libraries and tools underpin so many of the things you build using Swift throughout the wider software ecosystem. To further support the community and foster greater collaboration, we're excited to announce a new GitHub organization, dedicated to Swift at github.com/swiftlang. This new organization will host a number of critical projects for the Swift ecosystem, including the Swift compiler, Foundation, and other key libraries. And this year also brings an exciting new release, with the launch of Swift 6. Swift 6 makes concurrent programming dramatically easier by introducing data-race safety. A data race happens when different parts of code try to modify and access the same data simultaneously. Swift 6 eliminates these kinds of bugs by diagnosing them at compile time. Since the introduction of async/await, structured concurrency, and actors, Swift has progressively gained the building blocks necessary to provide full data-race safety, culminating in the new Swift 6 language mode, which enables compile-time data-race safety. Because data-race safety may involve changes to your code, the new Swift 6 language mode is opt-in. You can take advantage of it whenever you are ready to tackle data races in your code. When you turn on the Swift 6 language mode, the compiler will diagnose concurrent access to memory across your project, and you can fix many data-race safety errors with narrow changes to your code. You can migrate to Swift 6 incrementally, one module at a time. You don't need to wait for your dependencies to migrate either, and when they do, you don't need to make any changes to your code until you decide to use the new language mode. Every module that migrates to Swift 6 contributes to the community-wide transition to bring data-race safety to the Swift software ecosystem. You can help by updating your open source packages to Swift 6, and everyone can follow along on the adoption of Swift 6 in popular packages on SwiftPackageIndex.com. Swift.org also has a migration guide with insights and patterns on how best to modify your code to eliminate data races. Compile-time data-race safety in Swift 6 will further elevate your code's safety and help ensure its maintainability in the future. And there are many other exciting developments in Swift 6, along with improvements to concurrency, generics, and a new "Embedded Swift" subset for targeting highly-constrained environments like operating system kernels and microcontrollers. Another important aspect of software development is writing tests. We're excited to introduce an all-new testing framework, built from the ground up for Swift. It's appropriately named Swift Testing. Swift Testing has expressive APIs that make it simple to write tests. It is easy to learn. And it's cross-platform, so you can use it to write tests for a variety of platforms and domains. Swift Testing is also developed as an open source package. It was introduced nine months ago, and community feedback has been invaluable. Writing a test starts simply by adding a function with the Test attribute to your test suite. You can provide a friendly title and use macros like #expect to evaluate the result of any Swift expression, making it easy to write complex checks. Swift Testing also includes a flexible tagging system to help you organize your tests and test plans. With tags, you can selectively run tests across your test suite, like tests that use a certain module, or that run on a specific device. 
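Here's a small illustrative example of that style; the `Video` type and tag name are placeholders rather than code from the session.

```swift
import Testing

struct Video {
    let title: String
    var isFavorite = false
}

@Test("Marking a video as a favorite")
func favoriteVideo() {
    var video = Video(title: "By the Lake")
    video.isFavorite = true
    #expect(video.isFavorite)
}

// Tags let you group and selectively run related tests.
extension Tag {
    @Tag static var metadata: Self
}

@Test("Titles are preserved", .tags(.metadata))
func titleIsStored() {
    #expect(Video(title: "Hidden Creek").title == "Hidden Creek")
}
```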
And with just a little bit of code, you can easily parameterize tests so they can be reused multiple times, repeating the same logic over a sequence of values. Xcode 16 has full support for Swift Testing. The test navigator organizes tests by tag, and shows parameterized tests, and the source editor has a rich inline presentation to help diagnose what went wrong when a test fails. Swift Testing takes full advantage of the power of concurrency in Swift, running all the tests safely in parallel. It's designed for all of Swift's use cases, with integrated support in both Xcode and Visual Studio Code. Swift has helped us all write safer, better code, and we're excited to see Swift continuing to transform software around the world. By moving Swift and Apple's frameworks together, we unlock new levels of productivity and expressivity, and nowhere is that more powerful than with SwiftUI. Here's Josh to tell you more. Josh Shaffer: SwiftUI is the best way to build apps for any Apple device. Like the Swift language, SwiftUI is easy to learn and yet packed with advanced features. Its design is informed by deep experience building apps that work across devices and integrate seamlessly with the underlying OS. When you write apps with SwiftUI, you focus on describing the UI that you want, and SwiftUI takes care of details like dark mode, dynamic type, and tracking changes in your model. By enabling you to express what you want, not how to build it, SwiftUI lets you share more of your code across more devices. Of course, you can always customize and fine-tune the look and feel provided by SwiftUI using a large set of modifiers and protocols, so you can achieve exactly the result that you want. Whether you're building a brand-new app, or building a new feature in an existing app, SwiftUI is the right tool to use. This is exactly what we've been doing with SwiftUI adoption at Apple. There are brand-new apps built from the ground up with SwiftUI, like Image Playground, an all-new app with a stylish, custom interface. And the new Passwords app, which has a more standard look and feel built around familiar forms and controls. And SwiftUI is used for new features in existing apps as well, like the all-new design of Photos, where redesigned elements built using SwiftUI run side-by-side with pre-existing views like the photo grid. SwiftUI has also helped us share more code across platforms, like with Music, which first adopted SwiftUI for visionOS and tvOS, and is now using it to consolidate and simplify their codebase across iOS and iPadOS as well. Across our platforms, there have been a huge number of apps and experiences adopting SwiftUI in recent years. SwiftUI is used in apps like Xcode, Pages, and Music and core system experiences like Control Center, Notification Center, and Finder. On watchOS, SwiftUI is used extensively, including in key apps like Workout, Activity, and Sleep. And on visionOS, SwiftUI is the perfect choice for building a spatial app. With SwiftUI used in more and more places, we're continuing to make multi-year investments to the developer experience. This year, we focused on previews, customizations, and interoperability. First, Xcode Previews has a new dynamic linking architecture that uses the same build artifacts for previews and when you build-and-run. This avoids rebuilding your project when switching between the two, making for a dramatically smoother and more productive workflow. And it's now easier to set up Previews too. 
A new @Previewable macro makes it possible to use dynamic properties like @State directly in an Xcode Preview, reducing the amount of code that you have to write. SwiftUI has also gained a number of customizations to fine-tune the look and feel of your apps, like custom hover effects for visionOS, which give your users additional context when interacting with UI elements, new options to customize window behavior and styling in macOS, giving control over things like the window's toolbar and background, and a new text renderer API that enables a whole new level of visual effects and playful animations. Many apps adopting SwiftUI also use views written with UIKit and AppKit, so great interoperability with these frameworks is critical. Achieving this requires deep integration with the frameworks themselves. This year, all our UI frameworks share more common foundations. Gesture recognition has been factored out of UIKit, enabling you to take any built-in or custom UIGestureRecognizer and use it in your SwiftUI view hierarchy. This works even on SwiftUI views that aren't directly backed by UIKit, like those in a Metal-accelerated drawing group. And Animations have been factored out of SwiftUI so you can now set up animations on UIKit or AppKit views and then drive them with SwiftUI, including fully custom animations. And of course, there are many more exciting and useful features in SwiftUI this year, like custom containers, mesh gradients, scrolling customizations, and much more. If you aren't already using SwiftUI in your apps, there's no reason to wait. SwiftUI is ready to help you build any user interface that you want, using less code and better code. The Swift programming language started a revolution in how productive and expressive APIs could be. From the standard library and Foundation to SwiftUI and the new Swift Testing, APIs designed for Swift are dramatically easier to use, and make you more productive. Just last year, we added SwiftData to this list, helping you model and persist your app's information using a lightweight API that feels totally natural in Swift. You can define your schema with just a few additions to a normal Swift class starting with applying the @Model macro. That's actually all you need, but you can further refine it using @Attribute to specify behaviors on properties and @Relationship to describe how models relate to one another. This year, we continued to build on SwiftData's simple syntax and modeling capabilities with the addition of #Index and #Unique. #Index makes your queries more efficient by telling the underlying storage which properties are commonly queried together, so they can be stored and retrieved quickly. And #Unique indicates that a set of properties can have no duplicate entries. And the new @Previewable macro also works great with SwiftData, making it easier to work with your queries while iterating on views. Beyond these syntax additions, SwiftData also has expanded capabilities around how your data is stored and how changes are recorded, starting with custom data stores. Today's apps are built using a variety of storage backends. And by default, SwiftData uses Core Data to store information. With a custom data store, you have the ability to store data using an alternative backend of your choosing. This makes it possible to use SwiftData's API with backends like SQLite, a remote web service, or even just a mapped collection of JSON files. It's really flexible. And SwiftData now provides access to the history of changes in a data store. 
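Before turning to history, here is a minimal sketch of the schema syntax described above. The model is hypothetical, and the `#Unique`/`#Index` spellings reflect my reading of the new macros rather than code from the session.

```swift
import SwiftData
import Foundation

@Model
final class Trip {
    // Assumed spellings of the new schema macros described above.
    #Unique<Trip>([\.name, \.startDate])
    #Index<Trip>([\.name])

    var name: String
    var startDate: Date

    // Store large binary data outside the main store.
    @Attribute(.externalStorage)
    var coverPhoto: Data?

    // Deleting a trip also deletes its stops.
    @Relationship(deleteRule: .cascade)
    var stops: [Stop] = []

    init(name: String, startDate: Date) {
        self.name = name
        self.startDate = startDate
    }
}

@Model
final class Stop {
    var title: String
    init(title: String) { self.title = title }
}
```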
A data store's history keeps track of all of the changes that have occurred in the underlying data, making it easy to inspect all the changes that were recently made. This is useful for tracking local changes that need to be synced to a remote web service. These new features make working with data easier than ever before, all using an API that feels incredibly natural in Swift. Now, let's talk about a key framework that helps you create compelling 3D and spatial experiences, and that's RealityKit. RealityKit simplifies the process of rendering 3D models with a variety of styles, like realistic, cel-shaded, or cartoon. RealityKit first shipped on iPhone, iPad, and Mac. With the launch of Vision Pro, it gained significant new capabilities, alongside a brand-new tool, Reality Composer Pro, which simplified the development of spatial apps but only supported visionOS. This year, these APIs and tools are now aligned across macOS, iOS, and iPadOS as well with RealityKit 4 so you can easily build for all of these platforms at once! Everything you expect, including MaterialX, Portals, and Particles, is now available to be used with RealityView on all four of these platforms. This includes APIs for rich materials and virtual lighting, providing creative control over how your 3D objects appear and how they interact with the user's environment. As well as brand-new APIs and tools like BlendShapes, Inverse Kinematics, and animation timelines which bring expanded character animation capabilities, enabling interactions that are dynamic and responsive to the surrounding environment and to user behavior. RealityKit 4 also provides more direct access to rendering with new low-level mesh and texture APIs, which work with Metal compute shaders to offer improved control over how your apps look, enabling fully dynamic models and textures in every frame. On visionOS, these features work in both the Shared Space and an app's Full Space. It's also easier to inspect your RealityKit content, as Xcode's view debugging now supports introspecting your 3D scene content. You can investigate your scene's object hierarchy and inspect every element's properties, both those built into RealityKit as well as the custom components that you create. With Swift and frameworks like SwiftUI, SwiftData, and RealityKit, Apple's SDKs make it easy to build beautiful and immersive apps. They also include powerful APIs that extend your apps' reach into system spaces, enabling deep integration with the underlying platforms. Let's take a look at some of the latest capabilities in this year's OS releases. Here's Jonathan to start with iOS. Jonathan Thomassian: In addition to Apple Intelligence, there are lots of new APIs across all of our platforms to give you even more possibilities. Let's begin with iOS. This year, iOS is more customizable than ever, and it starts with Controls. They make getting to frequent tasks from your apps faster and easier and are a great way to engage with your app from more places across the system. Controls can toggle a setting, execute an action, or deep link right to a specific experience. Using the new Controls API, you can create a control by specifying the type, a symbol, and an App Intent. Once defined, your control will be available to users in the new Controls Gallery, where they can add it into their Control Center for easy access. Users can also assign your control to the Action button on their iPhone 15 Pro or, for the first time, to appear as one of the controls on their Lock Screen.
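A rough sketch of that control shape, assuming the WidgetKit control types shown this year; the kind string and the intent are hypothetical.

```swift
import WidgetKit
import SwiftUI
import AppIntents

// A hypothetical App Intent the control runs when tapped.
struct StartTimerIntent: AppIntent {
    static var title: LocalizedStringResource = "Start Timer"
    func perform() async throws -> some IntentResult {
        // App-specific work would go here.
        return .result()
    }
}

// A control is defined by a kind, a symbol, and an App Intent.
struct StartTimerControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(kind: "com.example.timer.start") {
            ControlWidgetButton(action: StartTimerIntent()) {
                Label("Start Timer", systemImage: "timer")
            }
        }
    }
}
```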
And for apps that leverage the camera, the new LockedCameraCapture framework enables captures even while the device is locked. Now, let's talk about another way iOS is becoming more customizable, on the Home Screen. App icons and widgets can now appear Light, Dark, or with a Tint. To get you started, a tinted version of your app icon will automatically be available to your users after they upgrade to iOS 18. This treatment is applied to all app icons, and is crafted intelligently to preserve your design intent and maintain legibility. This results in a consistent visual experience across the Home Screen. And no matter how your icon is rendered, you can make sure that it always looks great by customizing each version. The Human Interface Guidelines have updated icon templates and best practices for adapting your icons to these new appearances. And when you're ready, Xcode now supports Dark and Tinted app icon variants that you can drop right into your asset catalog. From extending your controls across the system to making sure your app icons and widgets look their very best, iOS 18 gives you amazing new ways to customize how your users experience your apps. Now, let's talk about security. Two years ago, iOS added support for passkeys. Passkeys are a replacement for passwords that are more secure, easier to use, and can't be phished. They offer faster sign-in, fewer password resets, and reduced support costs. This year, we've created a seamless way to transition more of your users to passkeys with a new registration API. It creates passkeys automatically for eligible users the next time they sign in to your app, so future sign-ins will be faster and stronger than before. After the passkey is created, users are informed that a passkey has been saved without interrupting their flow. If you've already adopted passkeys, adding automatic passkey registration just requires a single new parameter. There's never been a better time to transition to passkeys in your apps. And of course, all these features are also available on iPadOS. This year, iPadOS delivers big updates to the ways that your users interact with your apps, starting with the redesigned tab bar. It floats at the top of your app and makes it easy to jump to your favorite tabs. And it turns into a sidebar for those moments when you want to dive deeper. Like when you want to explore your channels in Apple TV. There's a new API that simplifies building important interactions like customization, menus, and drag and drop. So you can accomplish more, with less code. If your app has a simple hierarchy, then you can just adopt the tab bar. If your app has a deeper hierarchy, you can implement both the tab bar and sidebar with the same API. And you can even enable your users to customize what's in their tab bars. You may have noticed that the tab bar elegantly morphs into the sidebar. These kinds of refined animations are also available for your app. For example, users have loved the zoom transition in Photos. It's precisely controlled by touch, and can even be interrupted as it's happening. Interruptible animations keep your app feeling responsive as users navigate your UI, because they don't have to wait for animations to complete before their next interaction. You can take advantage of the same interruptible, fluid zoom transition in your apps on iOS and iPadOS. This is great in apps like Notes, where notes now beautifully animate open from the Gallery view, and can even be pinched closed. 
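A minimal sketch of the zoom transition just described, in SwiftUI; the note model and view content are placeholders, and the modifier names reflect my understanding of the iOS 18 APIs.

```swift
import SwiftUI

struct Note: Identifiable {
    let id = UUID()
    let title: String
}

struct NotesGalleryView: View {
    @Namespace private var zoomNamespace
    let notes = [Note(title: "Travel ideas"), Note(title: "Reading list")]

    var body: some View {
        NavigationStack {
            List(notes) { note in
                NavigationLink {
                    Text(note.title)   // placeholder detail view
                        // The detail view zooms out of its source row.
                        .navigationTransition(.zoom(sourceID: note.id, in: zoomNamespace))
                } label: {
                    Text(note.title)
                }
                // Marks this row as the element the zoom grows from and shrinks back into.
                .matchedTransitionSource(id: note.id, in: zoomNamespace)
            }
        }
    }
}
```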
And you can also use the new zoom transition with an updated Document Launch View. It allows you to create a bespoke look for your app, and connect your launch experience to your brand. You can customize actions, change the background, and add fun animated assets to bring the header to life. These are just a few of the API improvements you can leverage to better integrate your apps into the system and enhance your customers' experience on iOS and iPadOS. Now, moving on to watchOS, here's Lori to tell you more. Lori Hylan-Cho: Apple Watch is the easiest way for people to access the most important information for their day at a glance. From any watch face, relevant insights and actions are just a scroll of the digital crown away. This year, watchOS 11 gives you even more opportunities to bring compelling experiences from your app into the Smart Stack with some new APIs and by leveraging code you've already written. In fact, one of the coolest new features on watchOS this year actually starts on iOS: Live Activities. If you've adopted Live Activities in your iOS app, the system will leverage the work you've already done to support the Dynamic Island to offer a Live Activity experience on Apple Watch. Watch wearers will see your compact leading and trailing views automatically in the Smart Stack, as well as when significant event notifications occur. You can use the All Variants Preview in Xcode 16 to see how your Live Activity will appear on watchOS with your current Live Activity Widget Configuration, and you can even add a custom watchOS presentation by indicating that you support the small supplemental activity family. Giving your customers a great experience on watchOS is as simple as using the @Environment to further customize your Live Activity view when it appears on Apple Watch. For those of you who already have a watchOS app or are planning to build one, you can make your experiences even more timely, contextual, and functional by taking advantage of the expanded capabilities of widgets in watchOS 11. You can now bring your interactive widgets to watchOS, using the same APIs you're currently using on iOS and macOS. App Intents let you create widgets with multiple interactive areas that perform actions and update state directly in the widget. The new accessory WidgetGroup layout is one way to provide more information and interactivity to your customers. It accommodates three separate views, and supports both deep linking to different parts of your app and Button and Toggle initializers to perform actions without leaving the widget. To make sure that your informative and interactive widgets appear when they'd be most useful, you can now specify one or more RelevantContexts, such as time of day, AirPods connection, location, and routine, so that the system can insert them into the Smart Stack at just the right time. And for those of you eager to integrate Double Tap into your apps, handGestureShortcut is the modifier you've been looking for. Use this modifier to identify a Button or Toggle as the primary action in your app, widget, or Live Activity to give your customers quick, one-handed control. Live Activities, interactive and contextual widgets, and Double Tap support are just a few of the new opportunities coming to watchOS 11 this year. Next, Eric will dive into some exciting updates for macOS. 
Eric Sunalp: This year, macOS supports Apple Intelligence with features like Writing Tools, Genmoji, and Image Playground that you can integrate right into your apps to create engaging experiences. It also introduces productivity features like easier window tiling and iPhone mirroring, and delivers new APIs including user-space file system support and major improvements to MapKit. And now, we'd like to focus on one area that's growing incredibly fast. And that's gaming. With the incredible pace of innovation in Metal and Apple silicon, there's a fundamental shift taking place. Every Apple silicon Mac, every iPad with an M-series chip, and even the latest iPhone 15 Pro can play the type of games that previously required dedicated gaming systems with power-hungry discrete GPUs. These "console-class" Apple devices create a unified gaming platform that's built with tightly integrated graphics software and a scalable hardware architecture. And every year, this rapidly growing platform delivers new advancements in Apple silicon and Metal to further improve the gaming experience. To bring your high-end games to this platform and reach even more players, one of the best places to start is the Game Porting Toolkit. And it's been so great seeing the positive reception so far. Developers like Ubisoft could bring their games to Apple devices faster than ever. Gaming enthusiasts were able to evaluate demanding Windows games like "Cyberpunk 2077" for the first time on their Mac. And we're so excited to see community projects like Whisky and Homebrew and products like CrossOver use Game Porting Toolkit to provide even more options to get started with the evaluation process. This year, we're excited to announce Game Porting Toolkit 2, with big updates based on your feedback that help accelerate your timeline whether you're bringing an existing game or one that's currently in development. New additions to the toolkit help you bring more advanced games to Mac, bring Mac games to iPad and iPhone, and deliver a great user experience. First, let's take a look at bringing advanced games to Mac. With Game Porting Toolkit 2, you can now evaluate more Windows games thanks to improved compatibility with technologies like AVX2 and advanced gaming features like ray tracing, giving you a more complete picture of your game's potential on Mac. And Metal adds highly requested API improvements that help DirectX developers port their existing graphics code. For instance, managing Metal resources should feel much more familiar. And Xcode adds another highly requested feature, enabling you to debug and profile the source of your original HLSL shaders. And you can do this at any stage in the development process, whether you're evaluating your original Windows binary or debugging the Metal version of your game. Once you have an optimized Mac game, it's now much more straightforward to bring Mac games to iPad and iPhone, setting the stage for a huge opportunity to bring your games to even more players. In fact, there's no other gaming platform in the world that enables developers to bring games like "Death Stranding: Director's Cut" to well over a hundred million console-level devices, spanning phones to personal computers. To help you do the same, Game Porting Toolkit 2 includes helpful code examples to accelerate your development by taking you through essential steps, like how to convert your various gaming subsystems, and how to build your shaders once and deploy everywhere.
Game Porting Toolkit 2 also helps you deliver a great user experience with an expanded set of human interface guidelines to take full advantage of Apple hardware. It also covers important topics, such as how to best streamline downloads and installation, adapting your game's UI for various display sizes, and creating easy-to-use touch controls that best fit the style of your game. With key improvements to Metal and an updated collection of tools across the most important phases of game development, Game Porting Toolkit 2 makes it easier than ever to create amazing games on Apple platforms. All of the latest OS releases are packed with so many new features to help you create amazing experiences that are only possible with Apple devices. And Vision Pro has taken this to a whole new level. Here's En to cover what's new in visionOS. En Kelly: Since Apple Vision Pro was announced at WWDC last year, we have been thrilled with the enthusiasm from the developer community! Developers from across the world have been building incredible spatial apps for visionOS. Within the first few days of availability, there were already over 1,000 apps on the App Store! Some of these developers started by recompiling their iOS and iPadOS apps to quickly get the visionOS spatial UI layout and then built upon it. For example, djay started with their SwiftUI-based iPad app, easily recompiled to run on visionOS, and then extended it for spatial computing, and it is a great experience! Whether you already have a visionOS app or are new to the platform, spatial computing provides incredible opportunities for your apps. visionOS is built on the foundation of the decades of engineering innovation in macOS, iOS, and iPadOS. The tools and many of the frameworks you use are common across these platforms, which means you can write a line of code just once and use it on all platforms! SwiftUI, RealityKit, and ARKit are at the core of developing the best spatial apps for visionOS. ARKit is a core framework for AR experiences across platforms, and powers spatial experiences that have an even deeper interaction and understanding of the world around the user. And this is powerful, because if your app uses one of these frameworks today, you are already well on your way to a great spatial computing app! We are excited to share details about how visionOS 2 allows creation of apps with richer spatial experiences that take full advantage of depth and space. With the introduction of Spatial Computing, we created a new type of SwiftUI scene, called a volume. Volumes are great for adding 3D objects and scenes with rich interactions to an app. Apps can be run side-by-side, and give you a real sense of the size and scale of the virtual objects.
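As a minimal sketch of declaring a volume in SwiftUI (the app name and 3D asset are placeholders):

```swift
import SwiftUI
import RealityKit

@main
struct GlobeApp: App {
    var body: some Scene {
        // A volume is a window scene with the volumetric window style.
        WindowGroup(id: "globe") {
            Model3D(named: "Globe")   // placeholder 3D asset in the app bundle
        }
        .windowStyle(.volumetric)
        // Give the volume a physical default size in the user's space.
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)
    }
}
```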
This is a core part of spatial computing, taking Vision Pro's groundbreaking spatial multitasking to the next level. With visionOS 2, it is now easy to resize your volume. Just like with a window, you use the SwiftUI scene modifier windowResizability to set your volume's size to be just right for your content. And if it's not just right, users can now resize volumes themselves. You'll also have the ability to choose if you want your volume to have a fixed or dynamic scale, so that when the 3D object moves away from the user, it either appears constant in size or gets smaller as it moves away, just as a real world object would appear. visionOS allows ornaments to be affixed to volumes. This is great for controls, additional information, or really any UI. You now have even more flexibility to place ornaments anywhere along the edge of your volume, giving you the freedom to create your volume's interface in new and clever ways. These ornaments, as well as your app's chrome and content can also dynamically move, adjusting to face the user as they walk around the space. You can see the ornaments move, as well as the character, in response to the position of the user! We've given some developers an early look at these Volumetric APIs like "1640" that leverages volume resizing APIs, "Dear Reality" that takes advantage of how ornaments change position based on the user's location, and "Rezzil" which shows how these new volumetric features can come together to provide an incredible experience for real-time game analysis, complete with other apps running side by side. Apple Vision Pro helps users connect with their friends and family through shared experiences together with SharePlay apps. To facilitate even more shared apps, we made TabletopKit, a framework for easy development of collaborative experiences centered around a table. TabletopKit does the heavy lifting that handles manipulating cards and pieces, placement and layout, and defining game boards. And it also works seamlessly with spatial Personas and SharePlay, enabling social games for users to play together! TabletopKit integrates with familiar frameworks like GroupActivities, RealityKit, and SwiftUI, allowing you to quickly get an experience up and running. The developer behind "Checkmate Chronicles" did just that, creating a compelling game board experience using this new framework! visionOS 2 doesn't stop there. There are brand-new Enterprise APIs that provide access to spatial barcode scanning, low latency external camera streaming, and more. These will enable specific workflow use cases to take advantage of spatial computing. We have made updates to input on Vision Pro. You can now decide if you want the user's hands to appear in front of or behind the content, giving you even more creativity in your app experience. We have significantly extended the fidelity of our Scene Understanding capabilities. Planes can now be detected in all orientations, and allow anchoring objects on surfaces in your surroundings. We have added the concept of Room Anchors that consider the user's surroundings on a per room basis. You can even detect a user's movement across rooms. And we have a new Object Tracking API for visionOS that allows you to attach content to individual objects found around the user. This new functionality allows you to attach virtual content like instructions to a physical object for new dimensions of interactivity. 
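For the ornament placement mentioned a moment ago, here's a rough SwiftUI sketch; the views are placeholders and the anchor choice is just one example.

```swift
import SwiftUI

struct PlayerView: View {
    var body: some View {
        Color.black                    // placeholder for the app's main content
            // Attach playback controls as an ornament along the bottom edge.
            .ornament(attachmentAnchor: .scene(.bottom)) {
                HStack {
                    Button("Back 10 seconds", systemImage: "gobackward.10") { }
                    Button("Play", systemImage: "play.fill") { }
                }
                .labelStyle(.iconOnly)
                .padding()
                .glassBackgroundEffect()
            }
    }
}
```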
We have made it even easier to get started with spatial computing by providing more samples and documentation for the visionOS platform. The future of spatial computing is exciting, and we encourage you to be a part of the story with your own visionOS app. Now, over to Josh. Josh: Apple builds products that bring together hardware, software, and services to create truly delightful experiences for our users. Our consistent motivation is a simple one: we want to build the most advanced and innovative platforms so that you can create the best apps in the world. We want to help you build highly integrated apps that feel like natural extensions of the user's device, harnessing its hardware and software to the fullest. And with every release, we enhance our platforms with this motivation in mind. When you build your apps with Apple's SDKs, you get the fastest possible performance. You're using the same tools and frameworks that we use, with APIs that have been fine-tuned and optimized for our platforms and products. Those same APIs give you direct access to integrate with all the platform features that users love, like interactive widgets and controls, accessibility and dynamic type, and of course Apple Intelligence. Integrating deeply with the platform means making your app available in more places, so users can interact with it in the ways that work best for them. Apple's SDKs also make it easy to build apps with a familiar look and feel that is shared across the platforms. This benefits your users, since they can reuse the interactions they already know from other apps. The SDKs also share many common frameworks, so most code can express common behaviors across devices, while still enabling you to tailor the results when needed. And Apple's SDKs give you all-in-one tooling, providing everything you need to build your apps. You can use Swift, SwiftUI, and Xcode to build any experience you want, no matter which devices you're targeting. All of these benefits are available when you use Apple's native SDKs directly. Or, put more simply, the best apps are built with native SDKs. We're so excited to have our SDKs in your hands, and in the hands of millions of other developers, so that you can build on top of the most advanced and innovative platforms in the world, and build the best possible apps. Now back to Susan. Susan: So that's a closer look at some of our biggest developer announcements. Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac, with new features for creating language and images in apps, along with major updates to Siri. It draws on personal context to deliver intelligence that's helpful and relevant, and it's built with privacy from the ground up. Swift is expanding to Windows and Linux, and Swift Assist can transform your ideas into code so you can focus on higher-level problems. visionOS 2 elevates spatial computing even further with enhanced support for volumetric apps, powerful new enterprise APIs, and new technologies that make it easier to build shared apps and games. And Game Porting Toolkit 2 makes it possible to bring your most intensive games to Mac, iPad, and iPhone. Explore these topics and more with over 100 in-depth sessions releasing throughout the week of WWDC. You can get your questions answered by Apple engineers and other Apple experts all week in online labs and consultations and in the Apple Developer Forums, where you can also connect with the developer community!
You can access all this via the Developer app and Developer website. We are grateful that you are part of the Apple developer community, and we're excited to see you bring these technologies to life in your amazing apps and games. Have a great week!