• JULY 28 – 30 2023
  • Madison, US
  • Computational Photography

International Conference on Computational Photography 2023 (ICCP)

The International Conference on Computational Photography is the premier venue for research broadly spanning computational photography. Computational photography is a vibrant area at the intersection of optics, imaging, sensors, signal processing, computer vision, and computer graphics. It seeks to create new photographic and imaging functionalities and experiences that go beyond what separate camera and image-processing technologies can achieve on their own. This cross-disciplinary field further seeks a better understanding of imaging models, perception, and limits through holistic analysis.
We look forward to this year's exciting sponsorship and exhibition opportunities, featuring a variety of ways to connect with participants in person. Sony will exhibit and participate as a Platinum sponsor.

Recruiting information for ICCP 2023

We look forward to working with highly motivated individuals to fill the world with emotion and to pioneer future innovation through dreams and curiosity. With us, you will be welcomed onto diverse, innovative, and creative teams that set out to inspire the world.

At this time, the full-time and internship roles previously listed on this page are closed.

In the meantime, please see our other open positions via the links below.

Sony AI: https://ai.sony/joinus/jobroles/
Japan-based Positions: Sony Group Portal - Global Careers - Careers in Japan
Sony Computer Science Laboratories (CSL): https://www.sonycsl.co.jp/kyoto/careers_en/

NOTE: For those interested in Japan-based full-time and internship opportunities, the following points and benefits apply:

・Japanese language skills are NOT required, as your work will be conducted in English. However, willingness to learn Japanese may widen opportunities and/or expedite career advancement.
・For internships, in addition to a daily allowance, we cover round-trip flights, travel insurance, visas, commuting fees, and accommodation expenses as part of our support package.
・For full-time roles, in addition to your compensation and benefits package, we cover onboarding expenses such as your flight to Japan, shipment of your belongings to Japan, visas, commuting fees, and more!


Industry Consortium Talk

For the first time, ICCP 2023 is introducing an Industry Consortium to foster stronger connections between academia and industry. Many students and postdocs have little experience outside of academia, yet most will go on to careers in industry. The Industry Consortium is a targeted networking event that helps graduate students and postdocs prepare for collaboration with, or careers in, industry.

Computational Image Sensing at Sony

We introduce development activities on computational image sensing at Sony. Sony may be publicly regarded as a company for consumer electronics, pictures, music, and games, but the imaging & sensing solutions business is also one of the major business segments of the Sony Group. In the first part, we show the image-sensor device development capability that underpins our work on computational image sensing. In the second part, the engineers leading several of these developments introduce concrete examples of computational image sensing in practice.

Date & Time: July 29 (Saturday), 15:30-16:00 (CDT)
Venue: Monona Terrace, Madison, WI
Event Type: Talk & Presentation

Tomoo Mitsunaga

Tomoo Mitsunaga received his B.E. and M.E. degrees in biophysical engineering from Osaka University, Japan, in 1989 and 1991, respectively. He has been with Sony Corporation since 1991. He studied computer vision and computational photography as a visiting scholar with Prof. Shree Nayar at Columbia University from 1997 to 1999. For the past 10 years, he has worked on signal processing algorithms in and near image sensors, covering not only RGB image sensors but also non-RGB sensors such as depth image sensors and event-based vision sensors.


Technologies & Business use case

Technology 01 Real-time 3D sensing with EVS and projector

An event-based vision sensor (EVS) and a projector are synchronized at 1 kHz, achieving real-time, high-quality sensing. The EVS latches the ambient-illuminated image as a background; each projected binary pattern is then detected as changes from the latched image.
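The latch-and-compare idea above can be sketched in a few lines. This is a toy illustration only, not Sony's implementation: the function name, the log-contrast threshold, and the bit-plane encoding of the projected patterns are all assumptions made for the sketch.

```python
import numpy as np

def decode_binary_patterns(ambient, frames, threshold=0.15):
    """Toy sketch of EVS-style structured-light decoding.

    ambient: HxW latched background image (ambient illumination only).
    frames:  HxW images, each showing ambient light plus one projected
             binary pattern (one bit plane of a per-pixel code).
    Returns the per-pixel integer code assembled from the detected bits.
    """
    code = np.zeros(ambient.shape, dtype=np.int64)
    for bit, frame in enumerate(frames):
        # An EVS pixel fires where log intensity changes beyond a contrast
        # threshold; we approximate that with a simple log-difference test
        # against the latched background.
        events = np.log1p(frame) - np.log1p(ambient) > threshold
        code |= events.astype(np.int64) << bit
    return code
```

Because each pattern is detected as a *change* from the latched ambient image, the decoding is largely insensitive to the static background illumination, which is one reason the scheme can run robustly at high rates.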


Technology 02 Sparse Polarization Sensor

Polarization sensors can simultaneously acquire RGB images and polarization information. However, a trade-off exists between the quality of the RGB image and the polarization information: fewer polarization pixels minimize the degradation of the RGB image quality but decrease the resolution of the polarization information. To address this issue, we propose a strategic arrangement of polarization pixels on the sensor, effectively resolving this trade-off. Our proposed solution is further supported by a network architecture that includes an RGB image refinement network and a polarization information compensation network. By implementing this innovative approach, we provide a compelling solution for acquiring high-quality RGB images and accurate polarization information.


Publications

Publication 01 Learning to Synthesize Photorealistic Dual-pixel Images from RGBD frames

Authors
Feiran Li (Sony AI), Heng Guo, Hiroaki Santo, Fumio Okura, Yasuyuki Matsushita
Abstract
Recent advances in data-driven dual-pixel (DP) research are bottlenecked by the difficulty of collecting large-scale DP datasets, and a photorealistic image synthesis approach appears to be a credible solution. To benchmark the accuracy of various existing DP image simulators and facilitate data-driven DP image synthesis, this work presents a real-world DP dataset consisting of approximately 5,000 high-quality pairs of sharp images, DP defocus blur images, detailed imaging parameters, and accurate depth maps. Based on this large-scale dataset, we also propose a holistic data-driven framework to synthesize photorealistic DP images, where a neural network replaces conventional handcrafted imaging models. Experiments show that our neural DP simulator can generate more photorealistic DP images than existing state-of-the-art methods and effectively benefit data-driven DP-related tasks.
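To make "conventional handcrafted imaging models" concrete: a common caricature of DP image formation blurs a sharp image with mirrored left/right half-aperture PSFs, since each of the two photodiodes under a DP pixel sees roughly half the lens aperture. The sketch below is a deliberately simplified single-depth-plane version under that assumption; it is not the paper's neural simulator, and the function name and half-disc PSF shape are illustrative choices.

```python
import numpy as np
from scipy.signal import convolve2d

def dp_pair_from_sharp(img, coc_radius):
    """Toy handcrafted dual-pixel model for a single fronto-parallel depth:
    blur a sharp image with left/right half-disc PSFs, i.e. each photodiode
    integrates over half of the circle of confusion, mirrored between the
    two DP sub-images."""
    r = int(np.ceil(coc_radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    disc = (x**2 + y**2) <= coc_radius**2
    left = (disc & (x <= 0)).astype(float)   # left half-aperture PSF
    right = (disc & (x >= 0)).astype(float)  # right half-aperture PSF
    left /= left.sum()                       # normalize to preserve energy
    right /= right.sum()
    return (convolve2d(img, left, mode="same"),
            convolve2d(img, right, mode="same"))
```

Real DP simulators must additionally handle spatially varying depth, occlusion boundaries, and lens-specific PSF asymmetries; those effects are exactly where handcrafted models break down and where a learned simulator trained on the dataset above can help.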

Publication 02 Programmable Spectral Filter Arrays using Phase Spatial Light Modulators

Authors
Vishwanath Saragadam (Rice University); Vijay Rengarajan (Meta); Ryuichi Tadano (Sony); Tuo Zhuong (Sony); Hideki Oyaizu (Sony); Jun Murayama (Sony); Aswin Sankaranarayanan (Carnegie Mellon University)
Abstract
Spatially varying spectral modulation can be implemented using a liquid crystal spatial light modulator (SLM) since it provides an array of liquid crystal cells, each of which can be purposed to act as a programmable spectral filter array. However, such an optical setup suffers from strong optical aberrations due to the unintended phase modulation, precluding spectral modulation at high spatial resolutions. In this work, we propose a novel computational approach for the practical implementation of phase SLMs for implementing spatially varying spectral filters. We provide a careful and systematic analysis of the aberrations arising out of phase SLMs for the purposes of spatially varying spectral modulation. The analysis naturally leads us to a set of “good patterns” that minimize the optical aberrations. We then train a deep network that overcomes any residual aberrations, thereby achieving ideal spectral modulation at high spatial resolution. We show a number of unique operating points with our prototype including dynamic spectral filtering, material classification, and single- and multi-image hyperspectral imaging.