QuickLogic has introduced a triple core sensor hub called EOS.
The justification for sensor hubs, according to QuickLogic vice-president Brian Faith, is that “it is power-prohibitive to do sensor processing on the host processor.”
The three cores are: an ARM Cortex M4F MCU, a front-end sensor manager and a QuickLogic proprietary core which it calls Flexible Fusion Engine (FFE). A fourth core could be integrated into the hub’s FPGA fabric.
The FFE and sensor manager handle the bulk of the algorithm processing, which minimises the duty cycle of the floating-point MCU.
This approach lowers aggregate power consumption and lets designers of mobile, wearable and IoT devices bring next-generation sensor-driven applications, such as pedestrian dead reckoning (PDR), indoor navigation, motion-compensated heart rate monitoring and other advanced biological applications, within their power budgets.
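In its simplest form, pedestrian dead reckoning advances a 2-D position estimate by one stride length along the current heading each time a step is detected. As a rough illustration only (this is not QuickLogic's algorithm, and the stride length is a made-up placeholder that a real system would calibrate per user), the core update can be sketched as:

```python
import math

def pdr_update(x, y, heading_rad, stride_m=0.7):
    """Advance a 2-D position estimate by one detected step.

    heading_rad: heading from the fused compass/gyro estimate, in radians
    stride_m:    assumed stride length (hypothetical; calibrated in practice)
    """
    x += stride_m * math.cos(heading_rad)
    y += stride_m * math.sin(heading_rad)
    return x, y

# Walk four steps east, then four steps north.
pos = (0.0, 0.0)
for _ in range(4):
    pos = pdr_update(*pos, heading_rad=0.0)          # east
for _ in range(4):
    pos = pdr_update(*pos, heading_rad=math.pi / 2)  # north
```

A production PDR pipeline would add step detection from accelerometer peaks and per-step stride estimation; the sketch shows only the position update those feed into.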
The EOS platform includes a hardened subsystem specifically designed for always-listening voice applications.
With its dedicated PDM-to-PCM conversion block and Sensory’s Low Power Sound Detector (LPSD) technology, the EOS system enables always-on voice triggering and recognition while drawing less than 350µA.
“It solves the problem of doing voice recognition at low power,” says Faith.
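PDM microphones output a heavily oversampled 1-bit stream, so PDM-to-PCM conversion is essentially low-pass filtering followed by decimation. A minimal sketch of the principle, assuming a simple moving-average decimator (real hardware blocks use cascaded CIC/FIR filter stages, which this deliberately does not reproduce):

```python
def pdm_to_pcm(bits, decimation=64):
    """Convert a 1-bit PDM stream (sequence of 0/1) to PCM samples.

    Averages each block of `decimation` bits, then rescales the 0..1
    pulse density to a signed sample in [-1.0, 1.0]. Illustrative only:
    a hardware converter uses proper multi-stage filtering.
    """
    pcm = []
    for i in range(0, len(bits) - decimation + 1, decimation):
        density = sum(bits[i:i + decimation]) / decimation
        pcm.append(2.0 * density - 1.0)
    return pcm

# An all-ones stream decodes to full-scale positive;
# a 50% density stream decodes to silence.
print(pdm_to_pcm([1] * 64))     # [1.0]
print(pdm_to_pcm([1, 0] * 32))  # [0.0]
```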
EOS has 2,800 effective logic cells of in-system reprogrammable logic that can be used for an additional FFE or for customer-specific, differentiated hardware features.
The EOS S3 platform and QuickLogic’s SenseMe library are compliant with Android Lollipop (5.0+) as well as various RTOSes.
Because the platform is sensor- and algorithm-agnostic, it can support third-party and customer-developed algorithms through QuickLogic’s industry-standard Eclipse Integrated Development Environment (IDE) plugin.
The IDE provides optimised and proven code generation tools as well as a feature-rich debugging environment to ensure quick porting of existing code into both the FFE and the ARM M4F MCU of the EOS S3 platform.
Applications include:
- Always-on, always-listening voice recognition and triggering
- Pedometry, pedestrian dead reckoning, and indoor navigation
- Sports and activity monitoring
- Biological and environmental sensor applications
- Sensor fusion including gestures and context awareness
- Augmented reality
- Gaming
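The sensor fusion workloads listed above typically blend a drift-free but noisy accelerometer tilt estimate with a smooth but drifting gyroscope integral. A one-axis complementary filter is the textbook example of this; the sketch below is illustrative only and is not the SenseMe implementation:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse a gyro rate (deg/s) with an accelerometer angle (deg).

    The gyro term tracks fast motion; the small accelerometer term
    slowly corrects the gyro's drift. alpha sets the crossover.
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# With the device held still (gyro reads zero), the fused estimate
# converges toward the accelerometer's tilt angle.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_angle=10.0, dt=0.01)
```

This kind of fixed-point-friendly, multiply-accumulate-heavy loop is exactly the work the announcement describes offloading from the M4F to the FFE.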
Processor Cores
- 180DMIPS of aggregate processing capability
- 578KB of aggregate SRAM for code and data storage
QuickLogic Proprietary microDSP Flexible Fusion Engine
- 50KB SRAM for Code
- 16KB SRAM for Data
- Very long instruction word (VLIW) microDSP architecture
- 50µW/MHz
- As low as 12.5µW/DMIPS
ARM Cortex M4F
- Up to 80MHz
- Up to 512KB SRAM
- 32-bit, includes floating point unit
- 100µW/MHz; ~80µW/DMIPS
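These efficiency figures show why offloading pays: at the quoted µW/DMIPS numbers, running the same sustained workload on the FFE rather than the M4F cuts compute power by roughly 6×. A quick check of that arithmetic (the 10-DMIPS workload is a hypothetical example, and the FFE figure is the spec's best case):

```python
FFE_UW_PER_DMIPS = 12.5   # "as low as" figure from the spec
M4F_UW_PER_DMIPS = 80.0   # approximate figure from the spec

def compute_power_uw(dmips, uw_per_dmips):
    """Power in microwatts for a sustained workload of `dmips`."""
    return dmips * uw_per_dmips

# A hypothetical 10-DMIPS always-on fusion workload:
ffe = compute_power_uw(10, FFE_UW_PER_DMIPS)   # 125 µW
m4f = compute_power_uw(10, M4F_UW_PER_DMIPS)   # 800 µW
print(f"FFE: {ffe} µW, M4F: {m4f} µW, ratio: {m4f / ffe:.1f}x")
```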
Programmable Logic
- 2,800 effective logic cells
- Capable of implementing an additional FFE and customer-specific functionality
Package Configurations
Ball grid array (BGA)
- 3.5×3.5×0.8mm, 0.40mm ball pitch
- 49-ball, 34 user I/Os
Wafer Level Chip Scale Package (WLCSP)
- 2.5×2.3×0.7mm, 0.35mm ball pitch
- 36-ball, 28 user I/Os
Integrated Voice
- Always-on voice trigger and phrase recognition capability, in conjunction with Sensory
- I2S and PDM microphone input with support for mono and stereo configurations
- Integrated hardware PDM to PCM conversion
- Sensory low power sound detector (LPSD)
Interface Support
- To host – SPI slave
- To sensors and peripherals – SPI master (2X), I2C, UART
- To microphones – PDM and I2S
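Because the hub presents itself to the host as an SPI slave, the host driver's job is largely framing and parsing fixed-size sensor records. As an illustration only, with an invented packet layout that is not QuickLogic's actual protocol, fused samples might be exchanged as little-endian records like this:

```python
import struct

# Hypothetical record layout: sensor ID, 32-bit timestamp, three
# 16-bit axes. "<BIhhh" = little-endian, no padding (11 bytes).
RECORD_FMT = "<BIhhh"
RECORD_SIZE = struct.calcsize(RECORD_FMT)

def pack_sample(sensor_id, timestamp_us, x, y, z):
    """Serialise one fused sensor sample for an SPI transfer."""
    return struct.pack(RECORD_FMT, sensor_id, timestamp_us, x, y, z)

def unpack_sample(payload):
    """Parse the bytes clocked out of the hub back into fields."""
    return struct.unpack(RECORD_FMT, payload)

raw = pack_sample(0x01, 123456, 100, -200, 300)
print(unpack_sample(raw))  # (1, 123456, 100, -200, 300)
```

Fixed-size, explicitly little-endian records keep the slave-side framing trivial, which matters when the device, not the host, owns the transfer timing.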
Additional Components
- ADC
- 12-bit sigma delta
- Regulator – low drop-out (LDO), with 1.8V to 3.6V input support
- System clock – integrated 32kHz and high speed oscillator
from News http://ift.tt/1LYZlZF
via Yuichun