Thursday, May 24, 2018

AEye Introduces Dynamic Vixels

PRNewswire: AEye introduces a new sensor data type called Dynamic Vixels. In simple terms, Dynamic Vixels combine pixels from digital 2D cameras with voxels from AEye's Agile 3D LiDAR sensor into a single super-resolution sensor data type.

"There is an ongoing argument about whether camera-based vision systems or LiDAR-based sensor systems are better," said Luis Dussan, Founder and CEO of AEye. "Our answer is that both are required – they complement each other and provide a more complete sensor array for artificial perception systems. We know from experience that when you fuse a camera and LiDAR mechanically at the sensor, the integration delivers data faster, more efficiently and more accurately than trying to register and align pixels and voxels in post-processing. The difference is significantly better performance."

"There are three best practices we have adopted at AEye," said Blair LaCorte, Chief of Staff. "First: never miss anything; second: not all objects are equal; and third: speed matters. Dynamic Vixels enables iDAR to acquire a target faster, assess a target more accurately and completely, and track a target more efficiently – at ranges of greater than 230m with 10% reflectivity."

Qualcomm Snapdragon 710 Supports 6 Cameras, ToF Sensing, More

The Qualcomm 10nm Snapdragon 710 processor features a number of advanced imaging capabilities:
  • Qualcomm Spectra 250 ISP
  • 2nd Generation Spectra architecture
  • 14-bit image signal processing
  • Up to 32MP single camera
  • Up to 20MP dual camera
  • Can connect up to 6 different cameras (many configurations possible)
  • Multi-Frame Noise Reduction (MFNR) with accelerated image stabilization (see the sketch after this list)
  • Hybrid Autofocus with support for dual phase detection (2PD) sensors
  • Ultra HD video capture (4K at 30 fps) with Motion Compensated Temporal Filtering (MCTF)
  • Captures 4K Ultra HD video at up to 40% lower power
  • 3D structured light and time of flight active depth sensing
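
The list above does not explain how MFNR works; in its simplest form, multi-frame noise reduction averages several aligned exposures, which suppresses random temporal noise by roughly the square root of the frame count. The sketch below is a generic illustration of that idea, not the Spectra 250 implementation:

    import numpy as np

    def mfnr(frames):
        # frames: list of aligned exposures (equal-shape arrays).
        # Averaging N frames reduces random temporal noise by about sqrt(N);
        # a real ISP would also align frames and reject ghosted pixels.
        stack = np.stack([f.astype(np.float64) for f in frames])
        return stack.mean(axis=0)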

Mobileye Autonomous Car Fails in Demo

EETimes' Junko Yoshida publishes an explanation of the Mobileye self-driving car demo in which the car drives through a junction on a red light:

"The public AV demo in Jerusalem inadvertently allowed a local TV station’s video camera to capture Mobileye’s car running a red light. (Fast-forward the video to 4:28 for said scene.)

According to Mobileye, the incident was not a software bug in the car. Instead, it was triggered by electromagnetic interference (EMI) between a wireless camera used by the TV crew and the traffic light’s wireless transponder. Mobileye had equipped the traffic light with a wireless transponder — for extra safety — on the route that the AV was scheduled to drive in the demo. As a result, crossed signals from the two wireless sources befuddled the car. The AV actually slowed down at the sight of a red light, but then zipped on through."

On a similar theme, the NTSB publishes a preliminary report on the Uber self-driving car crash that killed a woman in Arizona in March 2018:

According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about 6 seconds before impact, when the vehicle was traveling at 43 mph. As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle, and then as a bicycle with varying expectations of future travel path. At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision. According to Uber, emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior. The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.
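
For scale, the report's timeline can be converted into distances. The 7 m/s² deceleration below is an assumed hard-braking figure, not a number from the NTSB report:

    MPH_TO_MS = 0.44704
    v = 43 * MPH_TO_MS            # ~19.2 m/s

    d_first_detection = v * 6.0   # ~115 m from the impact point at first detection
    d_braking_decision = v * 1.3  # ~25 m remaining when braking was deemed necessary
    d_stop = v ** 2 / (2 * 7.0)   # ~26 m needed to stop at the assumed 7 m/s^2

    print(round(d_first_detection), round(d_braking_decision), round(d_stop))

Under this assumption, a braking command issued at the 1.3-second mark leaves roughly the car's own stopping distance, so at best the impact speed would have been sharply reduced, whereas a command issued shortly after first detection would have left ample margin.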

SystemPlus on iPhone X Color Sensor

SystemPlus reverse engineering shows a difference between the iPhone X color sensor and other AMS spectral sensors:

Wednesday, May 23, 2018

Nice Animations

Lucid Vision Labs publishes nice animations explaining how Sony's sensor readout with dual analog/digital CDS works:

  • Sony Exmor rolling shutter sensor
  • Same pipeline, but with global shutter (Pregius)
  • 1st stage - analog domain CDS
  • 2nd stage - digital domain CDS

There are a few more animations on the company's site, as well as a pictorial image sensor tutorial.
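
The digital-domain stage amounts to subtracting a digitized reset sample from the digitized signal sample, which cancels offsets common to both. A minimal sketch of that second stage (variable names are illustrative; this is not Sony's actual pipeline):

    import numpy as np

    def digital_cds(reset_frame, signal_frame):
        # Both inputs are ADC outputs of equal shape; subtracting the reset
        # sample from the signal sample cancels offsets present in both,
        # leaving the photo-generated signal.
        return signal_frame.astype(np.int32) - reset_frame.astype(np.int32)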

Tuesday, May 22, 2018

NHK Presents 8K Selenium Sensor

The NHK Open House, to be held on May 24-27, exhibits an 8K avalanche-multiplying crystalline selenium image sensor:

"Electric charge generated by incident light are increased by avalanche multiplication phenomenon inside the photoelectric conversion film. The film can be overlaid on a CMOS circuit with a low breakdown voltage because avalanche multiplication occurs at low voltage in crystalline selenium, which can absorb a sufficient amount of light even when thin."

A paper on the crystalline selenium-based image sensor was published in 2015.

Pixelligent Raises $7.6M for Nanoparticle Microlens

BusinessWire: Baltimore, MD-based Pixelligent's capped zirconium oxide (ZrO2) nanoparticles, a high-refractive-index inorganic material with a sub-10 nm diameter and a functionalized surface, are said to have the potential to improve the sensitivity of CMOS image sensors. The company announces $7.6M in new funding to help further drive product commercialization and accelerate global customer adoption.

Although Pixelligent lenses for image sensor applications were announced a couple of years ago, there is no such product on the market yet, to the best of my knowledge. In 2013, the company's President & CEO Craig Bandes said: "During the past 12 months we have seen a tremendous increase in demand for our nanocrystal dispersions spanning the CMOS Image Sensor, ITO, LED, OLED and Flat Panel Display markets. This demand is coming from customers around the globe, with the fastest growth being realized in Asia. In the first quarter of 2013, we began shipping our first commercial orders and currently have more than 30 customers at various stages of product qualification."

Sony Image Sensor Business Strategy

Sony IR Day 2018, held on May 22 (today), includes quite a detailed presentation on the company's semiconductor business targets and strategy. From the official Sony PR:

"In the area of CMOS image sensors that capture the real world in which we all live, and are vital to KANDO content creation, aim to maintain Sony’s global number one position in imaging applications, and become the global leader in sensing.

Through the key themes of KANDO - to move people emotionally - and "getting closer to people," Sony will aim to sustainably generate societal value and high profitability across its three primary business areas of electronics, entertainment, and financial services. It will pursue this strategy based on the following basic principles.

CMOS image sensors are key component devices in growth industries such as the Internet of Things, artificial intelligence, autonomous vehicles, and more. Sony's competitive strength in this area is based on its wealth of technological expertise in analog semiconductors, cultivated over many years from the charge-coupled device (CCD) era. Sony aims to maintain its global number one position in imaging and in the longer term become the number one in sensing applications. To this end, Sony will extend its development of sensing applications beyond the area of smartphones, into new domains such as automotive use.

...based on its desire to contribute to safety in the self-driving car era, Sony will work to further develop its imaging and sensing technologies."

Monday, May 21, 2018

Hamamatsu Sensors in Automotive Applications

Hamamatsu publishes a nice article "Photonics for advanced car technologies" showing many applications for its light sensors:

Samsung Presentation

The Samsung System LSI Investor Presentation dated April 30, 2018 shows the company's success in the image sensor business:

  • 1/3 of global smartphones use ISOCELL image sensors
  • 4.6 out of 10 Chinese smartphones use ISOCELL sensors
  • 28nm image sensor process