Tech News

How will Apple support the future of its five major OSs?


At 10 a.m. on June 3, US time, the keynote of WWDC 2019, Apple's Worldwide Developers Conference, opened at the McEnery Convention Center in downtown San Jose. In addition to hardware such as the Mac Pro and the Pro Display XDR, Apple announced a series of developer tools, including ARKit 3, RealityKit, Core ML 3, and SiriKit.

What are the highlights of these developer tools? Let's take a look.

AR: More diverse capabilities

Looking back at the past two WWDCs, Apple's emphasis on AR has only grown. At WWDC 2019, in addition to upgrading ARKit, Apple announced RealityKit, a new high-level AR framework, and Reality Composer, a new application for easily building AR experiences.

ARKit launched in 2017 as Apple's first step into AR. In 2018, Apple upgraded it to ARKit 2 with two major additions: the USDZ file format, developed with Pixar, and multi-user shared AR. Now ARKit has been upgraded again, to ARKit 3.

ARKit 3 introduces real-time people occlusion: it understands where people and AR objects are, so virtual content can correctly pass in front of or behind people in the scene. It also adds motion capture, tracking human movement as input to an AR scene. In addition, ARKit 3 can use a device's front and rear cameras simultaneously, so the user's facial expressions can also become part of the AR experience.

Beyond the simultaneous use of both cameras, multi-face tracking and real-time collaboration between multiple users are also highlights of ARKit 3, giving users a more varied AR experience.
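Enabling people occlusion in an app might look something like the following sketch. This assumes an existing `ARView`-style session named `arView`; the feature requires recent hardware, so the availability check matters.

```swift
import ARKit

// A minimal sketch: turn on people occlusion in an existing AR session.
// `arView` is assumed to be an ARView already set up elsewhere in the app.
let configuration = ARWorldTrackingConfiguration()

// People occlusion with depth is only supported on recent devices,
// so check support before opting in.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}

arView.session.run(configuration)
```

With the frame semantics enabled, the framework segments people in the camera feed and occludes virtual content automatically; the app does not need to do any per-frame masking itself.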

ARKit 3 is an upgrade of the original ARKit; by contrast, RealityKit and Reality Composer, announced for the first time this year, are entirely new.

RealityKit is a new high-level framework offering photorealistic rendering, camera effects, animation, physics, and more. It is built specifically for augmented reality and handles networking for multi-user AR applications, which means developers no longer need to be networking engineers to build a shared AR experience.
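A minimal RealityKit scene can be set up in a few lines. The sketch below places a simple box on a detected horizontal surface; `arView` is again assumed to be an existing `ARView`.

```swift
import RealityKit

// A minimal sketch: anchor a small blue box to a horizontal plane.
// `arView` is assumed to be an ARView already set up elsewhere in the app.
let box = ModelEntity(
    mesh: .generateBox(size: 0.1),  // 10 cm cube
    materials: [SimpleMaterial(color: .blue, isMetallic: false)]
)

// AnchorEntity ties the content to a real-world surface.
let anchor = AnchorEntity(plane: .horizontal)
anchor.addChild(box)
arView.scene.addAnchor(anchor)
```

The entity/anchor model is deliberately simple: developers describe what should appear and where, and RealityKit takes care of rendering, lighting, and tracking.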

Reality Composer is a new development tool available for both iOS and macOS. It lets developers compose AR scenes visually and add animations such as movement, scaling, and rotation. Developers can also attach triggers to these animations: when the user taps an object, when the user approaches an object, and so on.

Core ML 3: Support for advanced neural networks

At WWDC 2019, Apple introduced Core ML 3, the latest version of its machine learning model framework.

Core ML is a high-performance machine learning framework for Apple platforms that helps developers quickly integrate machine learning models into their apps. It launched in 2017 and was upgraded to Core ML 2 in 2018, with a 30% increase in processing speed.

Now Core ML has been upgraded to Core ML 3, which for the first time supports training machine learning models on the device. Because a model can be updated with the device user's own data, Core ML 3 helps keep a model relevant to user behavior without compromising privacy.

Core ML 3 also supports advanced neural networks with more than 100 layer types, giving better performance in image and audio recognition. In addition, it uses the CPU, GPU, and Neural Engine seamlessly to deliver maximum performance and efficiency.
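The on-device personalization described above runs through an update task. The sketch below assumes a hypothetical updatable model on disk and a prepared batch of training examples; the model URL and data source are illustrative, not from the article.

```swift
import CoreML

// A hedged sketch of Core ML 3 on-device model personalization.
// `modelURL` points to a compiled, updatable model; `trainingData`
// is a hypothetical batch of labeled examples prepared by the app.
func personalize(modelURL: URL, trainingData: MLBatchProvider) throws {
    let task = try MLUpdateTask(
        forModelAt: modelURL,
        trainingData: trainingData,
        configuration: nil,
        completionHandler: { context in
            // Persist the updated model so future predictions use it.
            try? context.model.write(to: modelURL)
        }
    )
    task.resume()  // Training runs on-device; data never leaves the phone.
}
```

Because both the data and the gradient updates stay on the device, this is the mechanism behind the privacy claim: the model adapts to the user without any user data being uploaded.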

SiriKit: Better application extension

Siri is Apple's first AI application and one of the most popular voice assistants in the world; Siri, too, was upgraded at WWDC 2019.

The most noticeable change is that on iOS 13, Apple uses Neural Text-to-Speech (Neural TTS) technology to make Siri's voice sound more natural, which means Siri's voice is no longer stitched together from recorded human speech samples.

The combination of Siri and AirPods is another highlight. When the user receives a message, Siri can read it aloud directly through AirPods, and the user can reply quickly through AirPods as well. In addition, the Siri experience on HomePod has been greatly enhanced and personalized. For example, HomePod can recognize different members of the household: when a member's phone is near the HomePod, it knows that person's favorite podcasts and music.

It is also worth noting SiriKit, which Apple highlighted this year. SiriKit includes the Intents and Intents UI frameworks, which developers can use to extend their applications; once an app adopts SiriKit, users can reach its functionality through Siri even when the app itself is not running.
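An Intents extension is the piece that lets Siri invoke an app's functionality without launching the app. The sketch below handles a message-sending intent; the class name and the choice of intent are illustrative.

```swift
import Intents

// A minimal sketch of a SiriKit Intents extension handler.
// The extension runs outside the main app, so Siri can fulfil
// requests even when the app itself is not open.
class IntentHandler: INExtension, INSendMessageIntentHandling {

    // Resolve the message text before handling; ask Siri to prompt
    // the user if it is missing.
    func resolveContent(for intent: INSendMessageIntent,
                        with completion: @escaping (INStringResolutionResult) -> Void) {
        if let text = intent.content, !text.isEmpty {
            completion(.success(with: text))
        } else {
            completion(.needsValue())
        }
    }

    // Perform the action and report the outcome back to Siri.
    func handle(intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // The app's own messaging code would run here.
        completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
    }
}
```

The resolve/handle split is the core of the Intents model: Siri gathers and confirms parameters first, and only then asks the extension to act.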

HomeKit: Enhance privacy protection

HomeKit is Apple's smart home framework, introduced with iOS 8, for communicating with and controlling connected devices in the user's home.
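From a developer's point of view, HomeKit exposes the home's accessories through a home manager. The sketch below simply lists them; the class name is illustrative, and the app would need the HomeKit entitlement and user permission.

```swift
import HomeKit

// A minimal sketch: enumerate the accessories in the user's primary home.
// Requires the HomeKit entitlement and the user's permission.
class HomeLister: NSObject, HMHomeManagerDelegate {
    let manager = HMHomeManager()

    override init() {
        super.init()
        manager.delegate = self  // Homes load asynchronously.
    }

    func homeManagerDidUpdateHomes(_ manager: HMHomeManager) {
        guard let home = manager.primaryHome else { return }
        for accessory in home.accessories {
            print(accessory.name)
        }
    }
}
```

All of this traffic stays within Apple's permission model, which is the foundation the privacy features below build on.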

At this WWDC, Apple mainly emphasized HomeKit's protection of user privacy. For example, Apple introduced HomeKit Secure Video, which analyzes footage from smart home devices (such as security cameras) locally and then encrypts it before uploading it to iCloud.

Alongside HomeKit Secure Video, HomeKit-enabled routers, which have already gained support from a series of third parties, can isolate devices and protect the entire home network from attack.

The privacy protections of HomeKit routers go well beyond home security cameras. They automatically firewall connected HomeKit accessories, so even if one accessory is compromised, the attacker cannot reach the other devices, preventing personal information from leaking.

SwiftUI: from 100 lines of code to a dozen

Apple also released SwiftUI, a framework built on its Swift development language.

Swift is Apple's development language, unveiled at WWDC 2014. It can be used alongside Objective-C on macOS and iOS to build applications for Apple platforms. Swift is designed with safety in mind to avoid common programming errors; Apple open-sourced Swift in 2015.

SwiftUI, built on the Swift language, uses a single set of tools and APIs to provide a unified UI framework across all Apple platforms, including watchOS, tvOS, and macOS, with automatic support for Dynamic Type, Dark Mode, localization, and accessibility.

The new SwiftUI framework also brings a new interactive developer experience: as the developer edits code, the live preview updates immediately. UI elements are expressed directly in code, and when one is selected, drop-down menus make it easy for the developer to change its parameters. With a single click, the developer can switch to the simulator, and the app can be pushed to real hardware almost immediately.

Craig Federighi also demonstrated how SwiftUI reduces roughly 100 lines of code to about a dozen, greatly streamlining the development process.
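The brevity claim is easy to illustrate. The sketch below is a complete SwiftUI view that renders a scrollable list; the sample data is invented for the example, and an equivalent UIKit table view would need a data source, delegate, and cell registration boilerplate.

```swift
import SwiftUI

// A minimal sketch of a declarative SwiftUI view: a full,
// scrollable list in roughly a dozen lines.
struct ContentView: View {
    // Illustrative sample data.
    let frameworks = ["ARKit", "RealityKit", "Core ML", "SiriKit", "HomeKit"]

    var body: some View {
        List(frameworks, id: \.self) { name in
            Text(name)
        }
    }
}
```

Because the view is a plain description of state, features like Dark Mode, Dynamic Type, and localization come along automatically rather than being wired up by hand.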

It's worth mentioning that SwiftUI also integrates with other APIs such as ARKit; it is optimized for languages written from right to left as well, and of course SwiftUI natively supports Dark Mode.

Summary

Judging from the development kits unveiled at WWDC, Apple is focusing on two things: advancing its AR and AI technology, and building a cross-system development experience across its ecosystem, covering macOS, watchOS, iOS, tvOS, and iPadOS. This not only delivers a good user experience, but also ties the parts of Apple's operating system ecosystem more closely together and makes the whole more attractive.

It is fair to say that through this WWDC, one can glimpse the future of the entire Apple application ecosystem.

(Source: Apple)

Shivam Singh
Founder of TechGrits. Connected to the world of technology from an early age, this is literally his dream job.
