Launched: September 12, 2017
Discontinued: April 15, 2020
Designed by: Apple Inc.
Max. CPU clock rate: up to 2.39 GHz
L1 cache: 64 KB instruction, 64 KB data
L2 cache: 8 MB
Architecture and classification
Min. feature size: 10 nm
Microarchitecture: "Monsoon" and "Mistral"
Instruction set: A64 – ARMv8.2-A
GPU(s): Apple-designed 3-core
The Apple A11 Bionic is a 64-bit ARM-based system on a chip (SoC), designed by Apple Inc. and manufactured by TSMC. It first appeared in the iPhone 8, iPhone 8 Plus, and iPhone X, which were introduced on September 12, 2017. Apple states that the two high-performance cores are 25% faster than the Apple A10's and that the four high-efficiency cores are up to 70% faster than the two corresponding cores in the A10.
The A11 features an Apple-designed 64-bit ARMv8-A six-core CPU, with two high-performance cores at 2.39 GHz, called Monsoon, and four energy-efficient cores, called Mistral. The Monsoon cores are a 7-wide decode out-of-order superscalar design, while the Mistral cores are a 3-wide decode out-of-order superscalar design. The Mistral cores are based on Apple's Swift cores from the Apple A6. The A11 uses a new second-generation performance controller, which permits the A11 to use all six cores simultaneously, unlike its predecessor, the A10.
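As a rough illustration of why running all six cores simultaneously matters, the sketch below models aggregate throughput under cluster-exclusive scheduling (one cluster at a time, as on the A10) versus all-cores scheduling. The 0.3 efficiency-core throughput factor is an arbitrary assumption for illustration, not a published Apple figure:

```python
# Illustrative model of aggregate CPU throughput (arbitrary units).
# The 0.3 efficiency-core factor is an assumed value, not a spec.

PERF_CORES = 2     # "Monsoon" high-performance cores
EFF_CORES = 4      # "Mistral" energy-efficient cores
EFF_FACTOR = 0.3   # assumed throughput of one Mistral vs. one Monsoon

# Cluster-exclusive scheduling (A10-style, applied to the A11's
# core counts): peak throughput is the better of the two clusters.
one_cluster = max(PERF_CORES * 1.0, EFF_CORES * EFF_FACTOR)

# A11 second-generation performance controller: all six cores at once.
all_cores = PERF_CORES * 1.0 + EFF_CORES * EFF_FACTOR

print(one_cluster)  # 2.0
print(all_cores)    # ~3.2 under these assumptions
```

Under these assumed numbers, letting the efficiency cluster run alongside the performance cluster adds roughly 60% aggregate throughput; the real gain depends on workload and the actual relative core performance.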
The A11 also integrates an Apple-designed three-core graphics processing unit (GPU) with 30% faster graphics performance than the A10. Embedded in the A11 is the M11 motion coprocessor. The A11 includes a new image processor which supports computational photography functions such as lighting estimation, wide color capture, and advanced pixel processing.
The A11 is manufactured by TSMC using a 10 nm FinFET process and contains 4.3 billion transistors on a die 87.66 mm2 in size, 30% smaller than the A10. It is manufactured in a package on package (PoP) together with 2 GB of LPDDR4X memory in the iPhone 8 and 3 GB of LPDDR4X memory in the iPhone 8 Plus and iPhone X.
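The quoted figures can be cross-checked with simple arithmetic. The sketch below derives the approximate transistor density and verifies the "30% smaller" claim against the A10's published die area (125 mm², assumed here from public die-size reports rather than stated in this article):

```python
# Back-of-the-envelope checks on the figures quoted above.
transistors = 4.3e9   # A11 transistor count
a11_die_mm2 = 87.66   # A11 die area
a10_die_mm2 = 125.0   # A10 die area (assumed from public reports)

density = transistors / a11_die_mm2   # transistors per mm^2
print(round(density / 1e6, 1))        # ~49.1 million transistors/mm^2

shrink = 1 - a11_die_mm2 / a10_die_mm2
print(round(shrink * 100))            # ~30% smaller, matching the text
```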
Die area breakdown, A11 (10 nm): CPU complex (incl. cores) 14.48 mm².
The A11 also includes dedicated neural network hardware that Apple calls a "Neural Engine". It can perform up to 600 billion operations per second and is used for Face ID, Animoji, and other machine learning tasks. The Neural Engine allows Apple to run neural networks and other machine learning workloads more energy-efficiently than on either the main CPU or the GPU. However, third-party apps cannot use the Neural Engine, so their neural network performance is similar to that of older iPhones.
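To put the 600 billion operations per second in context, a quick calculation gives the per-frame compute budget at a 60 fps camera rate (the frame rate here is an illustrative assumption, not from the source):

```python
# Per-frame compute budget of the Neural Engine at an assumed 60 fps.
OPS_PER_SECOND = 600e9   # Apple's quoted peak for the A11 Neural Engine
FPS = 60                 # assumed frame rate, for illustration only

ops_per_frame = OPS_PER_SECOND / FPS
print(ops_per_frame)     # 10 billion operations available per frame
```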
Bloomberg reports that the Neural Engine is the fruit of Apple's effort to build up its AI team, after a 2015 Bloomberg report found that Apple's secretive nature made it difficult to attract AI research scientists. Apple has since hired researchers, acquired multiple companies working on AI, and published papers related to AI research. In October 2016, Apple hired Russ Salakhutdinov as its director of AI research.