Wednesday, June 29, 2011

OmniVision Acquires Visera Wafer-Level Lens Manufacturing

PR Newswire: OmniVision is acquiring VisEra's wafer-level lens production operations. OmniVision currently outsources the wafer-level lens production and assembly processes associated with its CameraCube technology to VisEra. The cash consideration for the operations is $45M. As VisEra is currently jointly owned by OmniVision and TSMC, it looks like OmniVision is going to buy out the TSMC part.

"This transaction will enable us to further streamline the production process, consolidate the supply chain, expand production capacity, and reduce the cost to meet customers' growing demand for our CameraCube products," said Shaw Hong, Omnivision's CEO.

OmniVision anticipates that the parties will close the transaction in the second quarter of its fiscal year 2012.

Toshiba Iwate Image Sensor Fab Info

I've just discovered that Toshiba Iwate has its own web site. This image sensor and camera module fab is located close to the earthquake area and was able to restore partial production only a month after the disaster, on April 18.


Talking about Toshiba Iwate milestones, the first image sensor was shipped in March 2010, just a year before the great earthquake. The message from the Toshiba Iwate president says that nowadays the fab produces both CCD and CMOS sensors and "provide[s] foundry services to non-Toshiba companies using our specialized technologies and processes.

For imaging devices for example, chip scale camera module (CSCM) products installed in mobile phone cameras are manufactured using one of our core technologies called the TCV technology. This technology allows reduction of the camera module product volume to 64%, and we believe it will allow a wider selection of applications to be installed.
"

Tuesday, June 28, 2011

Chipworks Overviews Small Pixel Presentations at IISW 2011

Chipworks' analyst Ray Fontaine published his impressions from small pixel presentations at IISW 2011. Actually, one of the most interesting presentations came from Ray himself and talked about 1.4um pixel reverse engineering revelations. The presentation and the paper are kindly made accessible on-line by Chipworks. The most impressive part for me was ST managing to fit color filters in cavities etched between the pixel metals (in mass production):


Another impressive job by ST is deep trench isolation between the pixels, while maintaining a 50e-/s dark current at 60C (mass produced too, found in RIM handsets and tablets):


Many other interesting reverse engineering pictures are inside - highly recommended.

Thanks to RF for sending me the link!

Monday, June 27, 2011

3D HDR Imaging

Germany-based Automation Technology (AT) announced the C4-2040-GigE camera, which comes with a resolution of 2048 x 1088 pixels and delivers more than 66 million 3D points per second at a profile frequency of up to 32 kHz. The camera is also available in a C4-2040-4M-GigE version with a resolution of 2048 x 2048 pixels.

The 3D measurements come from active triangulation based on laser scanning:


The new 3D sensor uses a global shutter and supports HDR-3D technology, allowing the scanning of materials and surfaces with inhomogeneous reflectivity. To cope with 3D HDR scenes of up to 90 dB, AT combines two approaches:

Multiple exposure (up to 3 exposures):


and non-destructive readout (up to 7 readouts), which makes it possible to read out highly reflective objects shortly after the start of the exposure, while darker areas are read at the end of the integration time:


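To make the combination concrete, below is a minimal merging sketch (my own toy illustration, not AT's algorithm) of how readouts with different effective exposures could be fused into one linear HDR frame; the 12-bit full scale and the exposure times are illustrative assumptions:

import numpy as np

FULL_SCALE = 4095                 # assumed 12-bit ADC, not specified by AT
SATURATION = 0.9 * FULL_SCALE     # treat the top ~10% of the range as unreliable

def merge_hdr(frames, exposures):
    """Merge readouts taken at increasing integration times into one linear
    HDR frame. For every pixel the longest exposure that is still below
    saturation is kept and scaled to a common unit (DN per second)."""
    frames = [f.astype(np.float64) for f in frames]
    hdr = frames[0] / exposures[0]                 # shortest exposure as fallback
    for frame, t in zip(frames[1:], exposures[1:]):
        usable = frame < SATURATION                # longer exposure not yet saturated
        hdr[usable] = frame[usable] / t
    return hdr

# toy usage: three non-destructive readouts of one integration at 1, 3 and 9 ms
rng = np.random.default_rng(0)
scene = rng.uniform(10.0, 3e6, size=(4, 4))        # "true" scene in DN per second
exposures = [1e-3, 3e-3, 9e-3]
frames = [np.clip(scene * t, 0, FULL_SCALE) for t in exposures]
print(merge_hdr(frames, exposures))

A 90 dB scene corresponds to a brightness ratio of roughly 30,000:1, which is why no single readout can cover it on its own.
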
Thanks to TK for sending me the link!

Image Sensors in Cellular Phones: Market Proportions

Forward Concepts published its Cellular Handsets and Chip Markets 2011 report, giving image sensors about a 7% slice of the whole cellphone components market (probably meant to be camera modules rather than sensors):


Also, Aptina and OmniVision each take about 2% of the whole component market - quite nice numbers next to Infineon, Samsung, Broadcom and other juggernauts:


That said, the image sensor share went down from 8% in the 2010 report:

Sunday, June 26, 2011

PTC and Color, Part 2

Albert Theuwissen published the second part of his PTC and color article, showing that the PTC is largely color independent, so many kinds of mix and match are possible in the measurements.
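
For readers who want to try this kind of measurement themselves, here is a minimal mean-variance PTC sketch (my own toy version, not Albert's procedure); the frame size, the signal levels and the 2 e-/DN gain of the simulated sensor are illustrative assumptions:

import numpy as np

def ptc_point(flat_a, flat_b, dark):
    """One photon-transfer-curve point from a pair of flat-field frames taken
    at the same illumination, plus a dark frame. Differencing the pair removes
    fixed-pattern noise; the variance of the difference is twice the per-frame
    temporal variance."""
    signal = 0.5 * (flat_a + flat_b).mean() - dark.mean()          # mean signal, DN
    temporal_var = np.var(flat_a.astype(np.float64) - flat_b) / 2.0
    return signal, temporal_var

def conversion_gain(signals, variances):
    """In the shot-noise-limited region var_DN = mean_DN / K, so the slope of
    variance vs. mean gives 1/K, with K the conversion gain in e-/DN."""
    slope = np.polyfit(signals, variances, 1)[0]
    return 1.0 / slope

# toy usage with a simulated 2 e-/DN sensor
rng = np.random.default_rng(1)
K = 2.0
signals, variances = [], []
for electrons in (500, 1000, 2000, 4000, 8000):
    a = rng.poisson(electrons, (128, 128)) / K
    b = rng.poisson(electrons, (128, 128)) / K
    s, v = ptc_point(a, b, np.zeros((128, 128)))
    signals.append(s)
    variances.append(v)
print("estimated conversion gain: %.2f e-/DN" % conversion_gain(signals, variances))

In the shot-noise-limited region the variance grows linearly with the mean regardless of the color channel, which is exactly why the PTC comes out color independent.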

Saturday, June 25, 2011

e2v and Thales Alenia Space Sign Two Contracts Totalling €4M

e2v signed two contracts worth a total of €4M with Thales Alenia Space for the supply of CCDs to equip the High-Resolution (HR) optical imaging instruments of two Earth observation satellites: the Göktürk satellite system for the Turkish Ministry of Defence and Seosat-Ingenio, the first Spanish Earth observation satellite.

The sensors will be delivered over a 2 year period from July 2011.

Friday, June 24, 2011

Lytro Puts Ren Ng's 2006 PhD Thesis On-Line

Lytro kindly allows everybody to get a taste of its technology from Ren Ng's 2006 PhD thesis here. The thesis is titled "Digital Light Field Photography" and explains the technology in pretty much plain words. I'm not sure how well this 5-year-old work reflects Lytro's current state, but it's an interesting read anyway.

Light field camera from the dissertation:


This appears to be the raw image output (probably after de-mosaicing):


The modified DSLR sensor assembly used in the thesis:


I'd guess it took a big effort to transfer the technology from a DSLR with relatively clean sensor output to a modern small-pixel sensor with all its artifacts.

Update: Optics.org has a nice article discussing the pros and cons of the Lytro approach.

SensL Signs €1m Contract with ESA

SensL has signed a three-year, multi-project contract with the European Space Agency (ESA) that will generate over €1m in the initial program phases. The ESA contract will involve the deployment of SensL’s silicon photomultipliers in a number of applications: in range-finding LIDAR cameras that will determine the location of suitable terrain for lunar landings, and in gamma-ray spectroscopy for element and isotope analysis.

Thursday, June 23, 2011

Siemens AG Exclusively Licenses Imaging Technology to NikkoIA SAS

Newly created Grenoble, France-based startup NikkoIA becomes a worldwide exclusive licensee of NIR patents and associated know-how developed by Siemens AG.

Created in May 2011, NikkoIA aims at designing and manufacturing photodetectors and multispectral image sensors in the visible and near infrared spectrum (0.7 - 3μm). It takes advantage of technology developed by Siemens Corporate, which is said to provide a significant cost benefit over existing near infrared sensors.

The Siemens technology combines organic and inorganic materials, sensitive to specific wavelengths, deposited as thin-film layers onto industry-standard electronic substrates. The benefits are versatility in sensor size (up to several hundred square inches), shape, sensitivity and resolution, a simplified manufacturing process, and compatibility with various readout substrates (CMOS, amorphous-silicon thin-film transistor backplanes or, in the future, printed electronics). Functional sensor prototypes have already been built and tested.

By settling in the Grenoble area in France, NikkoIA and its future production facilities reinforce and benefit from the rich local environment in electronics, semiconductor and imaging competencies, as well as the world-renowned local infrared industry.


Thanks to DR for sending me the news.

Lytro Widely Announces its Technology

Techcrunch, NY Times, Wall Street Journal, Techland, CNET and a dozen other sources ran articles on Lytro and its re-focus technology. This Youtube video shows how it works from a user perspective:



Lytro's site hints at how it works:


Light Field Capture
How does a light field camera capture the light rays?

Recording light fields requires an innovative, entirely new kind of sensor called a light field sensor. The light field sensor captures the color, intensity and vector direction of the rays of light. This directional information is completely lost with traditional camera sensors, which simply add up all the light rays and record them as a single amount of light.


NYT writes:

"The Lytro camera captures far more light data, from many angles, than is possible with a conventional camera. It accomplishes that with a special sensor called a microlens array, which puts the equivalent of many lenses into a small space. “That is the heart of the breakthrough,” said Pat Hanrahan, a Stanford professor."

WSJ writes:

"A key to Lytro's strategy is to use the increasing resolution found in the image sensors in conventional digital cameras. The company developed a special array of lenses that fits in front of image sensors and helps break the image apart into individual rays, along with software to help reassemble and manipulate it.

Lytro lists other benefits. For one thing, since images are focused after the fact, users don't have to spend time focusing before shooting. Nor do they have to worry if they wound up focusing on the wrong thing.

The technology works in very low light without a flash, Lytro said, while 3-D glasses can add a particularly vivid effect—simulated three-dimensional images that users can adjust to show different perspectives.
"

Lytro founder and CEO Ren Ng, 31, explained the concept in 2006 in his Ph.D. thesis at Stanford University, which won that year's Association for Computing Machinery worldwide competition for the best doctoral dissertation in computer science. Leading Lytro's technology team are Kurt Akeley, formerly of Silicon Graphics, and Adam Fineberg, formerly chief architect for the WebOS software developed by Palm, which is now part of HP.
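
For those curious what "reassembling the rays" amounts to in practice, here is a toy shift-and-add refocusing sketch (a simplified version of the idea in Ng's thesis, not Lytro's actual pipeline); the sub-aperture array layout and the alpha parameter are illustrative assumptions:

import numpy as np

def refocus(subapertures, alpha):
    """Toy synthetic refocus by shift-and-add. subapertures is a 4D array
    [u, v, y, x]: one low-resolution image per viewpoint (u, v) on the main
    lens aperture, as extracted from under the microlens array. Shifting each
    view proportionally to its aperture offset and averaging brings a chosen
    depth plane into focus; the shift scale alpha selects that plane."""
    U, V, H, W = subapertures.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - U // 2)))
            dx = int(round(alpha * (v - V // 2)))
            out += np.roll(subapertures[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# toy usage: a 5x5 grid of 64x64 sub-aperture views, refocused at two depths
views = np.random.default_rng(2).random((5, 5, 64, 64))
near = refocus(views, alpha=1.0)
far = refocus(views, alpha=-1.0)
print(near.shape, far.shape)

Each microlens sorts the arriving rays by direction, so a low-resolution image can be extracted per viewpoint; shifting those views against each other and summing is equivalent to focusing on a different plane after the fact.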

So far Lytro has raised $50M from NEA, K9 Ventures, Greylock Partners and Andreessen Horowitz. Lytro isn't disclosing details before releasing its first cameras later this year, but Ng says their pricing will be competitive with today's consumer cameras. Ng gave a 15-minute interview to TechCrunchTV:



WSJ points to Lytro's possible competition: "Adobe Systems Inc., which has developed prototype light field cameras for research purposes. Besides the technology departments of big camera companies, other startups are pursuing related technology... One is Pelican Imaging Corp., which in February announced a prototype of what it calls an array camera for use in mobile devices."

Update: PR Newswire: Sequence, a creative development agency based in San Francisco, announced that it is Lytro's branding and user-experience partner and has helped the company with all aspects of its brand.

"Sequence has been a valuable partner," said Ren Ng, CEO and founder of Lytro. "They quickly understood the complexity and potential impact of our new technology and have helped us create a powerful yet simple brand experience that really resonates with our target audience."

Wednesday, June 22, 2011

Complete List of I3A Vision 2020 Awards

Business Wire: I3A has announced the 2011 winners of its VISION 2020 Imaging Innovation Awards, presented at the 6Sight Future of Imaging Summit:

Gold: InVisage for its QuantumFilm image sensors, see the earlier announcement from the company.

Silver: 36Pix of Montreal, for its ChromaStar chroma keying engine technology. Photographers take their photos using a green background and send them online to 36Pix to remove the green backgrounds. The photos go back over the web to the photographers (or photo labs) so that they can then insert their own backgrounds.

Bronze: Aptina for its HDR technology, including the first system-on-a-chip HDR product, which enables camera phones to capture high quality images and video in diverse illumination conditions.

In-Stat: 3D Mobile Devices to Increase Demand for Image Sensors 130% by 2015

Market Wire: "The image sensor segment stands to gain the most from the use of 3D technology in mobile devices, because unlike the processor or the display, true 3D requires at least two image sensors, one for each imaging solution," says Jim McGregor, In-Stat's Chief Technology Strategist. "That means four image sensors, two front facing and two rear facing, are required for a full 3D experience. Several mobile devices with four image sensors have already been introduced and many more are slated for introduction throughout 2011 and 2012."

In-Stat findings include:
  • Total annual shipments of 3D mobile devices will surpass 148 million units in 2015.
  • Nearly 30% of all handheld game consoles will be 3D by 2015.
  • In 2011, handheld game consoles will be the first 3D-enabled mobile device to surpass 1 million units annually.
  • By 2014, 18% of all tablets will be 3D.
  • Media consumption devices including handheld game consoles, smartphones, and tablets will drive demand for image sensors up by over 130% by 2015.

InVisage Receives I3A VISION 2020 Imaging Innovation Gold Award

Marketwire: InVisage announced that its QuantumFilm technology has received the top (gold) VISION 2020 Imaging Innovation Award from the International Imaging Industry Association (I3A). The PR says:

"QuantumFilm is the world's first quantum dot-based material for image sensors and enables four times the amount of light to be captured. This allows camera phones and other small form factors to take photographs of unprecedented quality.

This award is particularly important because the VISION 2020 judges are the world's top imaging experts who have an intimate understanding of the technology and the market. The announcement was made during the 6Sight Mobile Imaging Summit event taking place in San Jose this week.
"

"The QuantumFilm technology is clearly a technological advance that will have significant impact on the industry for many years to come," said I3A President Lisa Walker.

Caeleste Publications On-Line

Caeleste has an all-new web site including its recent presentations at IISW 2011:

Color X-ray photon counting image sensor

Backside thinned, 1.5 e-RMS, BSI, 700fps, 1760×1760 pixels wave-front image sensor for natural and laser guide stars

Thanks to BD for updating me on that!

268MP Astrophotography Sensor

Popphoto shows pictures of the ESO VST (European Southern Observatory's VLT Survey Telescope) camera with its 268MP sensor array.

The VST uses the 720kg OmegaCAM camera with 32 CCDs arranged in an 8x4 array. Four extra CCDs are used to control the guiding and active optics systems, and the camera also houses many huge color filters:


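As a quick sanity check of the 268MP figure, assuming each of the 32 science CCDs is a 2k x 4k device (the per-CCD format is my assumption, not stated in the article):

# back-of-the-envelope check of the quoted 268MP total
ccds = 32
pixels_per_ccd = 2048 * 4096           # assumed 2k x 4k science CCDs, ~8.4MP each
print(ccds * pixels_per_ccd / 1e6)     # ~268.4 MP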

Monday, June 20, 2011

Aptina Launches 1.75um Pixel-based 2MP Sensor

Aptina announced the MT9D015, a 1/5-inch 2MP sensor for primary and front-facing cameras for still image and video capture in smartphones. The sensor captures 720p/30fps video as well as full-resolution snapshots at 30fps for reduced shutter lag. The 720p/30fps video recording relies on high-quality scaling that maintains the full field of view. The MT9D015 uses the latest generation of 1.75um pixels, and its low z-height enables thinner phones. The sensor integrates some ISP features such as defect correction, scaling and lens-shading correction.

Currently sampling, the MT9D015 is being evaluated by several leading mobile manufacturers. Mass production is scheduled for calendar Q4 2011.

Friday, June 17, 2011

Swiss Image and Vision Sensors Workshop 2011

The IEEE Swiss Image and Vision Sensors Workshop 2011 (SIVS 2011) is to be held on September 8, 2011 at the University of Zurich, Switzerland. The preliminary program covers a lot of interesting stuff:

How biological retinas compute many views of the world
Botond Roska, FMI Basel

Opportunities and lessons from building event-based digital silicon retinas
Tobi Delbruck, UZH-ETH Zurich, Inst. of Neuroinformatics

Low cost and low power vision systems
Pierre-Francois Ruedi, CSEM Neuchatel

leanXcam: Ideas and lessons learned from an open source business model
Johannes Gassner, SCS AG

Single-photon detection: Facts and myths. What should we expect from integrated SPAD imaging?
Eduardo Charbon, TU Delft and EPFL

Single-photon integrating CMOS image sensors
Thomas Baechler, Section Head Image Sensing, CSEM Zurich, with Nicolas Blanc

World record sub-electron resolution room temperature CMOS image sensors based on open-loop amplification
Christian Lotto, Heliotis AG and CSEM Photonics Division

High-speed imaging – Historical review, state-of-the-art sensors, and future trends
Thomas Baechler, Section Head Image Sensing, CSEM Zurich, with Nicolas Blanc

Photosensors for optical 3D imaging
Peter Seitz, CSEM

World record high dynamic range imaging: Opportunities and challenges
Peter Schwider, Photofocus AG

Outstanding properties of the Espros CMOS/CCD technology and consequences for image sensors
Beat De Coi & Martin Popp, Espros Photonics Corporation

Raw image conversion and processing
Aboubakr Bekkali with Urs Krebs, Seitz Phototechnik AG

Forum discussion: What features are needed for future sensors?

Via electronsandholes

Thursday, June 16, 2011

Sony View-DR HDR Technology

I seem to have missed this news from almost a year ago: Sony has announced View-DR HDR technology for security cameras. A fast sensor simultaneously (!) captures four exposures that are combined into a single image:


There is a nice Youtube video demoing the HDR capabilities:

Wednesday, June 15, 2011

Pixim Acquires Advasense

PRWeb, VentureBeat: Pixim announced the acquisition of Advasense. Advasense was founded with the vision of dramatically improving the image quality of cameras used in mobile applications.

Currently, Pixim is experiencing exponential sales growth driven by the release of Seawolf, its latest chip product. The Advasense team ideally complements Pixim in the areas of image sensor-specific product design and will be immediately integrated into Pixim’s existing product development organization. This is expected to allow the company to further accelerate its strong sales growth.

The technology of the two companies is highly complementary. For instance, the current push for higher resolution cameras in the video security market demands higher-performance small pixels that do not compromise image quality. Advasense’s development of deep photodiodes significantly enhances pixel well capacity, improving image quality in all lighting conditions even as pixel sizes get smaller.

Hynix News

In what looks like a flashback to the early 2000s, Hynix announced the Hi-QD1, a 1/13-inch QVGA SoC based on 3.2um pixels. Customer samples are available now, volume production will start in July, and validation tests at major chipset makers are reportedly going well. The article says: "With launching this Hi-QD1, Hynix steps into a good business opportunity of CMOS image sensor market." To me this sounds like production of one of SiliconFile's old products has been moved from the Dongbu to the Hynix fab.

Sys-Con Media: In unrelated news, Scalado announced that Hynix has licensed its SpeedTags technology. It will now be integrated with Hynix's image sensor products in order to help manage the larger files produced by the latest high-resolution image sensors.

Tuesday, June 14, 2011

Albert Theuwissen Completes IISW 2011 Reviews

Albert Theuwissen has published day 2, day 3 and day 4 of his IISW 2011 overview. Albert writes: "Overall this was a great workshop ! An high-level technical program, superb organization and great service."

Confessions, Confessions

ISCAS 2011 had a confession session on design mistakes. There are a few confessions about image sensor mistakes:

Confession 10: A bipolar imager with one giant pixel
Tobi Delbruck, University of Zurich and ETH Zurich

In 1996 at National Semiconductor and Synaptics in an enterprise that was later to become Foveon we were trying to build a new type of image sensor which Carver Mead invented. It was based on pulsed bipolar phototransistors (Fig. 7, Delbruck et. al, 1997). These pixels required developing our own “poly emitter” bipolar process with vertical NPN bipolar transistors and poly emitters, where the base region was self-aligned by the thin oxide regions. After we got the sensor back from fab, we tried to make it work for many weeks, but could never see any image! The pressure was on. All the circuits seemed to be working but all we could see was that the "picture" changed brightness depending on the light intensity. Dick Merrill finally figured it out. He examined the hundreds of lines of detailed process specifications and noticed that the base implant had been set to 400keV rather than 40keV: The base implant was penetrating right through the field oxide so that we had built one single giant photodiode! I still remember the meeting in a center. Dick said something like “Any idiot would know that 400keV penetrates through FOX!” Well, at that point I certainly didn’t know it, but nodded my head wisely as if I did. Anyhow, instead of an image sensor with 4 million pixels, we had one giant pixel measuring 4mm by 4mm! Eventually we got it all to work, but after many rounds of silicon we concluded that the FPN in the bipolar gain and the image lag (because the base is never fully reset) was a killer and Foveon went in their storied direction of vertical color separation (Gilder 2005).

Moral? Even an experienced team can be tripped up by a typo.


Confession 11: Metal density rules are there for a reason
Tobi Delbruck, University of Zurich and ETH Zurich

During the early days of imager company Foveon we were working closely with National Semiconductor on process development and were turning a new wafer lot at least once a month (Gilder, 2005). This was in the days of 250nm fab development and the fab guys were using us as testers for their process development. We kept having problems with reliable metal in the pixel arrays. It seemed as though the wires were just not making it across the array, or shorting to each other. Finally, a meeting with the group doing the fab development cleared up the mystery: Our pixel arrays just didn't have enough metal in them. As a result, the chemical mechanical polishing (CMP) was leaving the surface of the array at a different height than the periphery, so that the lithography equipment, which focused using the alignment markers at a corner of the chip, was putting the array out of focus. This defocus blurred out the resist exposure, leading to bad metal. Even the very experienced crew of professionals didn’t consider the reason for this metal density rule. CMP was pretty new then, but it didn’t help that (as usual) the design rule documentation gave no reasons for any of the rules.

Moral – Design rules have a physical basis. DR documents should provide a bit of motivation to the designers for following the rules.

Confession 15: Don't cough up your core technology
Tobi Delbruck, University of Zurich and ETH Zurich

In forming a development agreement with an industry partner, we had the bright idea that they might be able to help us pay for chip fabrication. That would have been fine except that we also let them manage the actual submission and direct payment of the run. As a result, they got the chips and the full layout of our sensor and were in a position that they could re-fabricate the design, even without asking us - which they did. We were left in a position of having to buy our own design from another party instead of having them buy it from us.

Moral – Consider how your technology will be used if you let someone else have it.

Confession 9: Beware of parasitic photodiodes in CMOS image sensor design
M. K. Law and A. Bermak, Hong Kong University of Science and Technology

We designed an image sensor array with a fixed-pattern-noise (FPN) reduction scheme that required no calibration current source. We modeled the expected photocurrents in each pixel photodiode during simulation and everything worked great. We expected the FPN after correlated double sampling (CDS) should be improved by a factor of 15 to 20. However, measurement results showed that there was only an approximately 2 or 3 times improvement. After a long and tedious debugging process, we finally realized the problem was caused by the fact that the pixel output was also light dependent even during calibration. What we overlooked is that there are also PN-junctions in other pixel transistors (Fig. 6). In that case, they are also photodiodes when illuminated with light, but we did not include this effect in the simulation!! We should have noticed this effect and put dummy metal over the transistors to shield incoming light. Fortunately we can use some post processing techniques at the sensor output to improve the overall FPN.

Moral – You have to fully understand your circuit before running simulations. Most importantly, never blindly believe in simulation results.


Confession 20: Address Decoding Glitches Reset Pixel
Shoushun Chen, Nanyang Technological University

In 2008 we designed and fabricated a motion detection image sensor using an address counter and decoder. The motivation to use an address counter instead of a scanner is to obtain flexibility in reading out regions of interest. However, we found that in the captured image there were a few rows of pixels having abnormal brightness, or row based mismatch. We also found that the error increased with integration time. At the beginning we thought the problem was due to power supply noise. Finally we realized that the error came from glitches of the decoder, which resets the pixel in the middle of integration (Fig. 10). We finally confirmed the mistake in post layout simulation.

Moral – Firstly, without special delay balancing techniques, the decoder always produces glitches. Secondly, the reset node of the pixel is highly sensitive. Any glitch applied to this node will destroy the integration signal. The decoded row/column reset signal should be resynchronized using a register to filter the glitch.


If anyone wants to confess to his/her mistakes too, please do so in the comments. In case a figure needs to be added, email me and I'll publish your confession in a separate post.

PerkinElmer Acquires Dexela

Business Wire: PerkinElmer announced that it has acquired Dexela, a London, UK-based provider of flat-panel CMOS X-ray detection technologies and services. Founded in 2005, Dexela develops and commercializes its CMOS technology portfolio for fast, low-dose X-ray imaging. The company's CMOS products offer high spatial resolution, high frame rate and reliability, low noise and an absence of image lag.

"Dexela has been a pioneer in the development of CMOS X-ray detection solutions and has achieved award-winning recognition in the imaging market for its design strengths and ease of manufacturability – which are major advantages for OEM customers," said Brian Giambattista, President, Medical Imaging, PerkinElmer.

Monday, June 13, 2011

Ambarella Files for IPO

EETimes, Business Wire: Ambarella announced that it has filed a registration statement for a proposed IPO. The number of shares to be offered and the price range for the offering have not yet been determined. Morgan Stanley and Deutsche Bank are acting as the joint book-running managers for the offering. Stifel, Nicolaus & Company and Needham & Company are acting as co-managers for the offering.

So, soon we might have a second publicly traded company specializing in image processors. The first one was Mtekvision, traded on KOSDAQ.

Thanks to S.S. for sending me the news!

Saturday, June 11, 2011

News from IISW 2011

I received these announcements from Eric Fossum:

June 10, 2011

News from the 2011 International Image Sensor Workshop in Hokkaido, Japan

The parent organization of the 2011 IISW is changing its name to the International Image Sensor Society, Inc., from ImageSensors, Inc. The renamed organization, IISS, also has added two directors to its Board – Junichi Nakamura and Johannes Solhusvik. They join Eric Fossum, Albert Theuwissen and Nobukazu Teranishi.

The IISS will become a member-based professional society and will solicit members in the coming months. In addition to sponsoring the biennial IISW and the Walter Kosonocky Award, plans for an on-line, open-access, peer-reviewed IISS Journal of Image Sensors are being developed by Dr. Solhusvik. It is hoped that cooperation with other professional societies such as IEEE, SPIE or OSA can be implemented.

At the 2011 IISW, it was announced that the winner of the 2011 Walter Kosonocky Award is Hayato Wakabayashi and his co-authors from Sony Corporation for the paper titled “A 1/2.3-inch 10.3Mpixel 50frame/s Back-Illuminated CMOS Image Sensor” which was presented at the 2010 IEEE International Solid-State Circuits Conference.

A new award has been established by the International Image Sensor Society for Exceptional Service. The 2011 IISS Exceptional Service Award was presented to Vladimir Koifman for the creation and editorship of the Image Sensors World blog, which has proved to be valuable for many in the image sensor community. Mr. Koifman is currently with AdvaSense in Israel.

The International Image Sensor Society congratulates its award winners on achieving excellence and contributing to the image sensor community.

Cambridge Mechatronics Proposes OIS Performance Measure

Cambridge Mechatronics (CML) proposes a performance measure for image stabilization systems, described in the company's whitepaper here. Ben Brown, the paper's author, presents nice graphs of measured handshake:


One can see that long-term handshake fits within a +/-0.5 deg range. I seem to recall some old Kodak data saying it's more like +/-2 deg, but this should be very camera dependent. In any case, the Cambridge Mechatronics actuator shows an impressive capability to reduce it, even when compared with a Canon 70-200mm f/4L SLR lens.
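
To get a feel for what a +/-0.5 deg handshake means on a phone camera, here is a back-of-the-envelope conversion from shake angle to pixel blur; the 4mm focal length and 1.4um pixel pitch are illustrative assumptions, not numbers from the whitepaper:

import math

def blur_pixels(shake_deg, focal_mm, pixel_um):
    """Image blur in pixels caused by a camera rotation of shake_deg degrees,
    for a lens of focal length focal_mm and a sensor with pixel_um pixel
    pitch - a small-angle, thin-lens approximation."""
    shift_mm = focal_mm * math.tan(math.radians(shake_deg))
    return shift_mm * 1000.0 / pixel_um

# illustrative numbers: 4mm phone lens, 1.4um pixels
print(blur_pixels(0.5, 4.0, 1.4))   # ~25 pixels of blur for a 0.5 deg shake

Even a half-degree rotation smears the image over tens of pixels, so long handheld exposures are hopeless without stabilization.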

The paper concludes:

"The performance of CML’s 8.5mm x 8.5mm SMA OIS camera-Tilt (SOT) actuator offers around 2 stops more suppression than the Canon 70-200mm lens. This is most likely due to the high resonant frequency afforded by the high stiffness of the SMA actuator and the small mass of miniature cameras. This is an advantage that miniature cameras have over larger cameras which will allow OIS to have a very significant performance impact in smartphones."

CML SMA Camera Module, 8.5mm x 8.5mm, AF + OIS

Thanks to DW for sending me the link!

Friday, June 10, 2011

Albert Theuwissen Overviews IISW, Day 1

Albert Theuwissen published a nice overview of the first day of IISW 2011.

TYZX Demo Shows Passive Stereo 3D Technology in Action

The Youtube video below possibly shows a passive stereo 3D vision limitation - only high-contrast objects are visible on the depth map. Probably the flat panels of the cubicles and the walls do not have sufficient contrast for matching:
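
As a reminder of why passive stereo struggles on flat, low-contrast surfaces, here is a toy SAD block-matching sketch (my own illustration, not TYZX's algorithm); the block size, disparity range and test images are illustrative assumptions:

import numpy as np

def sad_disparity(left, right, block=7, max_disp=16):
    """Toy SAD block-matching stereo. For each pixel, the window around it in
    the left image is compared against horizontally shifted windows in the
    right image, and the best-matching shift is taken as the disparity. On
    textureless surfaces every shift scores almost the same, so the estimate
    becomes unreliable - the effect visible in the demo."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# toy usage: a well-textured random scene shifted by 4 pixels between the views
rng = np.random.default_rng(3)
left = rng.random((48, 80))
right = np.roll(left, -4, axis=1)
d = sad_disparity(left, right)
print(np.bincount(d[3:-3, 19:-3].ravel()).argmax())   # -> 4, the simulated shift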

Optilux-Varioptic Separation Explained

CNET published an article on Optilux, founded by Varioptic's former CEO Hamid Farzaneh. "Farzaneh has an exclusive license to use the Varioptic technology in consumer products, and funding to develop a manufacturing process to scale up to consumer product run-rates (Varioptic can ship about 100,000 lenses a month, a far cry from the millions that Farzaneh thinks the market will want). Over time, as the new company digs into the consumer market, Optilux technology will diverge from Varioptic.

Optilux is an American company; Varioptic and its acquirer Parrot are both French.
[Farzaneh is planning] to set up both R&D and manufacturing in the U.S.

Farzaneh said that having manufacturing next door to U.S. engineers will keep development moving faster. Finally, he says, he needs an automated system for high-volume production, and automated plants are easier to set up next to U.S.-based engineers, compared with the current liquid lens construction techniques, which are based on hand-assembled lenses that benefit from lower (overseas) labor costs.

The timing of all this: We should, Farzaneh says, finally start seeing Optilux liquid lenses in cameras in 2013. These will be the auto-focusing and image-stabilized lenses. The zoom packages could show up in consumer products in 2014.
"

The original article also has a nice video demo of Optilux focus and image stabilization capabilities.

Thursday, June 09, 2011

1-bit Gigapixel Imager Mathematics

A group of authors from Harvard University published a paper on 1-bit gigapixel image processing:

"Gigapixel Binary Sensing: Image Acquisition Using Oversampled One-Bit Poisson Statistics"
Feng Yang, Yue M. Lu, Luciano Sbaiz, Martin Vetterli

The paper can be freely downloaded here.
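
The core idea is simple to demonstrate: a one-bit pixel with a one-photon threshold fires with probability 1 - exp(-lambda), so the light level can be recovered from many binary samples by inverting that relation. Below is a toy maximum-likelihood sketch (a simplified illustration of the oversampling principle, not the paper's full reconstruction algorithm):

import numpy as np

def ml_intensity(bits):
    """Maximum-likelihood estimate of the mean photon count per binary pixel
    from K one-bit measurements. With a one-photon threshold each binary pixel
    reads 1 with probability p = 1 - exp(-lam), so lam_hat = -ln(1 - K1/K),
    where K1 is the number of ones."""
    K = bits.size
    K1 = bits.sum()
    if K1 == K:                      # all ones: the estimate would diverge
        K1 = K - 0.5
    return -np.log(1.0 - K1 / K)

# toy usage: 1024 binary sub-pixels sharing a true rate of 2.5 photons each
rng = np.random.default_rng(4)
bits = (rng.poisson(2.5, 1024) >= 1).astype(np.uint8)
print(ml_intensity(bits))            # close to 2.5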

Tuesday, June 07, 2011

Digitimes: Omnivision to Supply 90% CIS for iPhone 5, Sony to Supply the Rest

Digitimes reports that about 90% of the CIS orders for Apple's new iPhone 5 will be supplied by OmniVision, while Sony takes up the remainder, according to the newspaper's sources. Thanks to the Apple orders, OmniVision is expected to increase its total wafer starts at TSMC to almost 260,000 8-inch equivalent units in the third quarter, up more than 40% sequentially, the sources indicated.

Pixart to Supply Sensors for Next Generation Wii

Digitimes: Pixart is to supply sensors for the next-generation Nintendo Wii consoles, according to the newspaper's sources. Nintendo is expected to announce the launch date of the new Wii consoles during the ongoing E3 trade fair, with Digitimes' sources believing the new consoles will be available at the end of 2011 or early in 2012.

The next-generation Wii supposedly has the internal name Project Cafe and is rumored to have a game controller with a camera for self-portraits and other photo-related features.

Aptina Announces Design Win in Pantech Vega Racer Smartphone

Business Wire: Aptina announced that its CCS8140 imaging solution has been chosen by Pantech as the primary camera in the newly released Vega Racer smartphone. The CCS8140 solution combines the MT9E013 8MP sensor with the MT9E311 imaging co-processor and requires only minimal tuning and sensor adjustments to complete the final camera design. The fully tuned sensor and co-processor combination provides flexibility and easy integration for OEMs.

Sunday, June 05, 2011

Aptina Proposes Ion Implantation to Modify Passivation Refractive Index

Aptina's patent application US20110127628 proposes to use ion implantation to change the passivation layer's refractive index:

"FIGS. 2A illustrates the transmission of incident light 202 through CFA 104, passivation layer 106 (106′) and insulation layer 108, when passivation layer 106 (106′) includes impurities. In particular, FIG. 2A illustrates passivation layer 106 having a single refractive index neff.

In conventional image sensors, a passivation layer typically has a refractive index of about 2.0. In contrast, CFA 104 typically includes a refractive index (n1) of between about 1.5-1.7, whereas insulation layer 108 typically includes a refractive index (n2) of about 1.46. Accordingly, the refractive indices n1 and n2 of CFA 104 and insulation layer 108 are typically similar. Because the refractive index of the passivation layer in conventional devices is different from the refractive indices n1 and n2, incident light 202 may be reflected at an interface 214 (between CFA 104 and a conventional passivation layer) and/or at an interface 216 (between a conventional passivation layer and insulation layer 108).
"

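The underlying optics is plain Fresnel reflection: the bigger the index step at an interface, the more light bounces back instead of reaching the photodiode. A quick normal-incidence calculation (my illustration using the index values quoted in the application, not the patent's own analysis) shows what index-matching the passivation layer buys:

def fresnel_reflectance(n1, n2):
    """Normal-incidence power reflectance at an interface between two
    non-absorbing media with refractive indices n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# conventional stack: CFA (n ~1.6) / passivation (n ~2.0) / oxide (n ~1.46)
print(fresnel_reflectance(1.6, 2.0))    # ~1.2% reflected at the CFA interface
print(fresnel_reflectance(2.0, 1.46))   # ~2.4% reflected at the oxide interface
# passivation index shifted to ~1.55, between the CFA and oxide values
print(fresnel_reflectance(1.6, 1.55))   # ~0.03%
print(fresnel_reflectance(1.55, 1.46))  # ~0.09%

If the implantation brings the passivation index from ~2.0 down toward the 1.5-1.7 of the CFA and the 1.46 of the oxide (which is how I read the application), the reflection losses at both interfaces drop by more than an order of magnitude.
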
Saturday, June 04, 2011

InVisage Holds Promise for Machine Vision

Test & Measurement World published an article about the InVisage QuantumFilm sensor with QuantumShutter. The article seems to be based on an interview with Michael Hepp, InVisage's director of marketing. A few advantages for the machine vision market are presented:

  1. Adjustable bandgap of QuantumFilm tunes the sensitivity for specific wavelength range
  2. QuantumShutter (a nice word for global shutter) eliminates rolling shutter artifacts
  3. Improved QE in range of 80 to 90%
  4. Process is simpler than BSI

Hepp said that the QuantumFilm sensors designed for camera phones will have a 1.4um pixel size, and next-generation devices are planned at 1.1um. InVisage's 1.1um pixels can use a 110nm process. "In a 1.4-micron pixel, we can achieve a 12,000-electron well," says Hepp.

TSMC is expected to sample chips made with the process this summer, said Hepp.
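
A quick back-of-the-envelope consequence of the quoted 12,000-electron well: the peak, shot-noise-limited SNR such a pixel can reach (read noise and dark current ignored; this is my arithmetic, not InVisage's claim):

import math

full_well = 12000                        # electrons, as quoted by Hepp
peak_snr = math.sqrt(full_well)          # shot-noise-limited SNR at saturation
print("%.0f:1, or %.1f dB" % (peak_snr, 20 * math.log10(peak_snr)))
# -> about 110:1, or roughly 41 dB, before any read-noise or dark-current penalty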

Friday, June 03, 2011

Agilent Announces MIPI M-PHY Testing Solution

EETimes-Europe: Agilent Technologies has announced a MIPI M-PHY test solution for debugging and validating all layers of M-PHY devices, including physical and protocol layers, at speeds up to 5.8 Gb/sec. The Agilent solution consists of oscilloscopes, protocol analyzers and exercisers, and bit error-rate testers (BERTs) using custom M-PHY stimulus software. Each instrument comes with custom M-PHY-ready software to support design teams through the entire product design process.

The solution is not cheap:

  • The N4903B J-BERT high-performance serial BERT is available now, starting at $102,000 for a 7-Gb/sec pattern generator configuration.
  • The N5990A-165 test automation software will be available later this month and is priced at $10,500.
  • The M-PHY-based DigRF v4 exerciser/analyzers are currently available starting at $30,000.
  • Agilent 90000A and 90000X-Series oscilloscopes with bandwidths up to 32 GHz are available now. Prices start at $60,000 for 6-GHz oscilloscopes.
  • M-PHY TX test automation will be available in September.

Sensors and Image Forensics

Hany Farid from Dartmouth published a tutorial on image forensics. Some image sensor artifacts help to recognize a forgery:

"a CFA interpolated image can be detected (in the absence of noise) by noticing, for example, that every other sample in every other row or column is perfectly correlated to its neighbors. At the same time, the non-interpolated samples are less likely to be correlated in precisely the same manner. Furthermore, it is likely that tampering will destroy these correlations, or that the splicing together of two images from different cameras will create inconsistent correlations across the composite image. As such, the lack of correlations produced by CFA interpolation can be used to expose it as a forgery."

Also, PRNU can be used to determine whether an image originated from a specific camera, or whether a portion of an image has been altered.
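
As an illustration of the PRNU idea, here is a toy camera-fingerprint sketch (my simplified illustration, not Farid's implementation); the Gaussian denoiser, the 2% PRNU level and the flat-field calibration frames are all illustrative assumptions:

import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=1.5):
    """High-frequency residual: the image minus a (very crude) denoised
    version. Scene content is largely removed while the multiplicative PRNU
    pattern, scaled by the local brightness, survives."""
    img = img.astype(np.float64)
    return img - gaussian_filter(img, sigma)

def camera_fingerprint(images):
    """Average many residuals from the same camera; random shot noise averages
    out while the fixed PRNU pattern remains."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlate_with_fingerprint(img, fingerprint):
    """Normalized correlation between a query image's residual and the
    fingerprint; a clearly positive value suggests the same camera."""
    return np.corrcoef(noise_residual(img).ravel(), fingerprint.ravel())[0, 1]

# toy usage: a simulated sensor with a 2% PRNU pattern
rng = np.random.default_rng(5)
prnu = 1.0 + 0.02 * rng.standard_normal((128, 128))
flats = [rng.poisson(1000 * prnu).astype(float) for _ in range(30)]
fingerprint = camera_fingerprint(flats)

same_camera = rng.poisson(500 * prnu).astype(float)
other_camera = rng.poisson(500 * np.ones((128, 128))).astype(float)
print(correlate_with_fingerprint(same_camera, fingerprint))    # clearly positive
print(correlate_with_fingerprint(other_camera, fingerprint))   # near zero

A real implementation would use a much better denoiser and a statistical detection threshold, but the principle is the same: the per-pixel gain pattern acts as a camera "serial number" embedded in every photo.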

Wednesday, June 01, 2011

PTC and Color

Albert Theuwissen continues his PTC article series, this time covering the PTC of color sensors. The article talks about color-specific artifacts, such as these nice curves: