• AUGUST 19–25, 2023
  • Macao, S.A.R.

IJCAI 2023

The 32nd International Joint Conference on Artificial Intelligence

IJCAI has been a premier international gathering of AI researchers and practitioners since 1969.

We look forward to this year's exciting sponsorship and exhibition opportunities, featuring a variety of ways to connect with participants in person. Sony will exhibit and participate as a Gold sponsor.

Recruiting information for IJCAI 2023

We look forward to working with highly motivated individuals to fill the world with emotion and to pioneer future innovation through dreams and curiosity. With us, you will be welcomed onto diverse, innovative, and creative teams that set out to inspire the world.

At this time, the full-time and internship roles previously listed on this page are closed.
Please see our other open positions through the links below.

Sony AI: https://ai.sony/joinus/jobroles/
Japan-based Positions: Sony Group Portal - Global Careers - Careers in Japan

NOTE: For those interested in Japan-based full-time and internship opportunities, the following points and benefits apply:

・Japanese language skills are NOT required, as your work will be conducted in English. However, willingness to learn Japanese may widen opportunities and/or expedite career advancement.
・For internships, in addition to your daily allowance, we cover round-trip flights, travel insurance, visas, commuting fees, and accommodation expenses as part of our support package.
・For full-time roles, in addition to your compensation and benefits package, we cover onboarding expenses such as your flight to Japan, shipment of your belongings to Japan, visas, commuting fees, and more!


Keynote

Date & Time
August 24 (Thursday), 14:00 (MOT)
Venue
Main Hall
Event Type
Invited Talk
Title
Mirror, Mirror, on the Wall, Who’s the Fairest of Them All?
Abstract
Debates in AI ethics often hinge on comparisons between AI and humans: which is more beneficial, which is more harmful, which is more biased, the human or the machine? This question, however, is often a red herring. It ignores what is most interesting and important about AI ethics: AI is a mirror. AI reflects patterns in our society, just and unjust, and the worldviews of its human creators, fair or biased. The question then is not which is fairer, the human or the machine, but what can we learn from this reflection of our society and how can we make AI fairer? This talk will discuss three major intervention points—data curation, algorithmic methods, and policies around appropriate use—and how challenges to developing fairer AI in practice stem from this reflective property of AI.

Alice Xiang

Sony AI

Alice Xiang is the Global Head of AI Ethics at Sony. As the VP leading AI ethics initiatives across Sony Group, she manages the team responsible for conducting AI ethics assessments across Sony’s business units and implementing Sony’s AI Ethics Guidelines. Sony is one of the world’s largest manufacturers of consumer and professional electronics products, the largest video game console company and publisher, and one of the largest music companies and film studios. In addition, as the Lead Research Scientist for AI ethics at Sony AI, Alice leads a lab of AI researchers working on cutting-edge research to enable the development of more responsible AI solutions. Alice also recently served as a General Chair for the ACM Conference on Fairness, Accountability, and Transparency (FAccT), the premier multidisciplinary research conference on these topics. Alice previously served on the leadership team of the Partnership on AI. As the Head of Fairness, Transparency, and Accountability Research, she led a team of interdisciplinary researchers and a portfolio of multi-stakeholder research initiatives. She also served as a Visiting Scholar at Tsinghua University’s Yau Mathematical Sciences Center, where she taught a course on Algorithmic Fairness, Causal Inference, and the Law. She has been quoted in the Wall Street Journal, MIT Tech Review, Fortune, Yahoo Finance, and VentureBeat, among others. She has given guest lectures at the Simons Institute at Berkeley, USC, Harvard, and SNU Law School, among other institutions. Her research has been published in top machine learning conferences, journals, and law reviews. Alice is both a lawyer and a statistician, with experience developing machine learning models and serving as legal counsel for technology companies. Alice holds a Juris Doctor from Yale Law School, a Master’s in Development Economics from Oxford, a Master’s in Statistics from Harvard, and a Bachelor’s in Economics from Harvard.

Invited Speaker


Technologies & Business use case

Technology 01 Federated Learning

Traditional machine learning requires centralizing large amounts of data from diverse sources on a single server. However, growing concern over data privacy, particularly for applications that involve sensitive personal information (PI), has made this training paradigm increasingly problematic. Federated learning (FL) revolutionizes the traditional centralized approach by enabling model training on decentralized data without any data sharing with a central server.

Meet the Sony AI Privacy and Security Team to learn how we have built extensive experience in FL research and application development. We have published numerous papers in top-tier AI conferences and journals (e.g., NeurIPS, ICLR, ICML, and Nature Communications).
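For readers new to the paradigm, here is a minimal federated-averaging sketch in the spirit of FedAvg: clients fit toy linear models on private data and share only model weights. The model, client counts, and hyperparameters are illustrative stand-ins, not Sony AI's actual systems.

```python
# Minimal federated-averaging sketch: clients train locally on private
# data and only share model weights, never raw examples.
import numpy as np

rng = np.random.default_rng(0)

def local_step(w, X, y, lr=0.1, epochs=5):
    """A few epochs of least-squares gradient descent on one client's data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Simulated private datasets for 3 clients (never sent to the server).
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for rnd in range(10):
    # Each client starts from the current global model and trains locally.
    local_ws = [local_step(w_global.copy(), X, y) for X, y in clients]
    # The server aggregates weights only (here: weighted by dataset size).
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("recovered weights:", w_global)  # approaches [2, -1]
```

Each round, only the weight vectors cross the network; the raw (X, y) pairs never leave their client.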

Federated Learning Publications:

Technology 02 IP Protection, Responsible Generative AI and Foundation Model Development

Concerns have been raised about the potential misuse and intellectual property (IP) infringement associated with image generation models. It is therefore necessary to analyze the origin of images by inferring whether a specific image was generated by a particular model, i.e., origin attribution. Similar concerns have arisen over illegal data scraping from the Internet and unauthorized use of data during training. Meanwhile, well-trained commercial foundation models offered as pay-as-you-use services (in the form of proprietary APIs) on companies’ cloud platforms can be stolen through extraction/imitation attacks, compromising their IP.
Meet the Sony AI Privacy and Security Team to see how we safeguard our customers and products and build generative AI and foundation models in a responsible manner!
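As a toy illustration of what origin attribution can look like, the sketch below uses a reconstruction-based test: a generator tends to reconstruct its own outputs with unusually low error. This is one common formulation, offered as our assumption rather than the exact method in the publications below; generator_invert, generator, and the threshold are hypothetical stand-ins.

```python
# Hypothetical reconstruction-based origin attribution sketch.
# Assumption: a generator reconstructs its own outputs with low error,
# so a small reconstruction loss suggests the image came from that model.
import numpy as np

def reconstruction_error(image, generator_invert, generator):
    """Invert the image to latent space, regenerate, and measure the gap.
    `generator_invert` and `generator` are stand-ins for a real model's
    inversion routine and forward pass."""
    z = generator_invert(image)
    return float(np.mean((generator(z) - image) ** 2))

def attribute_origin(image, generator_invert, generator, threshold=0.01):
    """Flag the image as generated by this model if the error is small.
    The threshold would be calibrated on held-out generated/real images."""
    return reconstruction_error(image, generator_invert, generator) < threshold

# Toy demo with an identity "generator" so the sketch is runnable.
gen = lambda z: z
inv = lambda x: x
img = np.random.rand(8, 8)
print(attribute_origin(img, inv, gen))  # True: identity reconstructs exactly
```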

IP Protection, Responsible Generative AI and Foundation Models Publications:

Technology 03 Enhancing games with cutting-edge AI to unlock new possibilities for game developers and players

We are evolving game AI beyond rule-based systems by using deep reinforcement learning to train robust and challenging AI agents in gaming ecosystems. This technology enables game developers to design and deliver richer experiences for players. The recent demonstration of Gran Turismo Sophy™, a trained AI agent that beat world champions in the PlayStation™ game Gran Turismo™ SPORT, embodies the excitement and possibilities that emerge when modern AI is deployed in a rich gaming environment. As AI technology continues to evolve and mature, we believe it will help spark the imagination and creativity of game designers and players alike.

Can an AI outrace the best human Gran Turismo drivers in the world? Meet Gran Turismo Sophy and find out how the teams at Sony AI, Polyphony Digital Inc., and Sony Interactive Entertainment worked together to create this breakthrough technology. Gran Turismo Sophy is a groundbreaking achievement for AI, but there’s more: it demonstrates the power of AI to deliver new gaming and entertainment experiences.
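At its core, the deep reinforcement learning behind agents like Sophy is an interaction loop: act, observe reward, nudge the policy toward higher return. The toy REINFORCE sketch below shows only that generic loop on a made-up two-action environment; it is not Sophy's actual algorithm, which is a far more sophisticated actor-critic system trained inside the game itself.

```python
# Minimal REINFORCE sketch: the agent-environment loop at the heart of
# deep RL, shown on a toy 2-action bandit (NOT Gran Turismo Sophy's
# actual algorithm; purely illustrative).
import numpy as np

rng = np.random.default_rng(0)
prefs = np.zeros(2)          # policy parameters (action preferences)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def env_step(action):
    """Stand-in environment: action 1 pays off more on average."""
    return rng.normal(loc=[0.0, 1.0][action], scale=0.5)

baseline, lr = 0.0, 0.1
for episode in range(500):
    probs = softmax(prefs)
    a = rng.choice(2, p=probs)
    r = env_step(a)
    baseline += 0.01 * (r - baseline)        # running reward baseline
    # Policy-gradient update: push probability of rewarded actions up.
    grad = -probs
    grad[a] += 1.0                           # d log pi(a) / d prefs
    prefs += lr * (r - baseline) * grad

print("learned action probabilities:", softmax(prefs))  # favors action 1
```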


Publications

Publication 01 BRExIt: On Opponent Modelling in Expert Iteration

Authors
Daniel Hernandez (Sony AI), Hendrik Baier, Michael Kaisers
Abstract
Finding a best response policy is a central objective in game theory and multi-agent learning, with modern population-based training approaches employing reinforcement learning algorithms as best response oracles to improve play against candidate opponents (typically previously learnt policies). We propose Best Response Expert Iteration (BRExIt), which accelerates learning in games by incorporating opponent models into the state-of-the-art learning algorithm Expert Iteration (ExIt). BRExIt aims to (1) improve feature shaping in the apprentice, with a policy head predicting opponent policies as an auxiliary task, and (2) bias opponent moves in planning towards the given or learnt opponent model, to generate apprentice targets that better approximate a best response. In an empirical ablation on BRExIt’s algorithmic variants against a set of fixed test agents, we provide statistical evidence that BRExIt learns better-performing policies than ExIt.
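To make point (1) concrete, a minimal sketch of an apprentice network with an auxiliary opponent-policy head might look like the following; the architecture, layer sizes, and aux_weight are our illustrative guesses, not the paper's actual implementation.

```python
# Sketch of the auxiliary-head idea in BRExIt: alongside the usual policy
# and value heads, the apprentice network predicts the opponent's policy,
# which shapes the shared features. Layer sizes are illustrative guesses.
import torch
import torch.nn as nn

class ApprenticeWithOpponentModel(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=128):
        super().__init__()
        self.torso = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.policy_head = nn.Linear(hidden, n_actions)    # apprentice policy
        self.value_head = nn.Linear(hidden, 1)             # state value
        self.opponent_head = nn.Linear(hidden, n_actions)  # auxiliary task

    def forward(self, obs):
        h = self.torso(obs)
        return self.policy_head(h), self.value_head(h), self.opponent_head(h)

def loss_fn(policy_logits, value, opp_logits,
            target_pi, target_v, target_opp_pi, aux_weight=0.5):
    """Search targets supervise the policy head; the (given or learnt)
    opponent model supervises the auxiliary head. `aux_weight` is a
    made-up constant for this sketch."""
    ce = lambda logits, pi: -(pi * torch.log_softmax(logits, -1)).sum(-1).mean()
    return (ce(policy_logits, target_pi)
            + nn.functional.mse_loss(value.squeeze(-1), target_v)
            + aux_weight * ce(opp_logits, target_opp_pi))
```

The auxiliary cross-entropy term forces the shared torso to encode features predictive of the opponent, which is the feature-shaping effect the abstract describes.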

Publication 02 Reducing Communication for Split Learning by Randomized Top-k Sparsification

Authors
Fei Zheng, Chaochao Chen, Lingjuan Lyu (Sony AI), Binhui Yao
Abstract
Split learning is a simple solution for Vertical Federated Learning (VFL) that has drawn substantial attention in both research and application due to its simplicity and efficiency. However, communication efficiency is still a crucial issue for split learning. In this paper, we investigate multiple communication-reduction methods for split learning, including cut layer size reduction, top-k sparsification, quantization, and L1 regularization. Through analysis of cut layer size reduction and top-k sparsification, we further propose randomized top-k sparsification to make the model generalize and converge better. This is done by selecting top-k elements with a large probability while also having a small probability of selecting non-top-k elements. Empirical results show that, compared with other communication-reduction methods, our proposed randomized top-k sparsification achieves better model performance under the same compression level.
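A minimal sketch of the randomized selection idea, assuming a simple fixed split of probability mass between top-k and non-top-k entries (the paper's actual probability schedule may differ):

```python
# Sketch of randomized top-k sparsification: keep mostly the k largest
# activations, but occasionally keep non-top-k ones so every element has
# a nonzero chance of being transmitted. The 0.9/uniform split is an
# illustrative choice, not the paper's exact probability schedule.
import numpy as np

def randomized_topk(x, k, p_topk=0.9, rng=np.random.default_rng()):
    """Return a sparse copy of 1-D array x with k surviving entries."""
    probs = np.full(len(x), (1 - p_topk) / max(len(x) - k, 1))
    topk_idx = np.argsort(np.abs(x))[-k:]
    probs[topk_idx] = p_topk / k
    probs /= probs.sum()
    keep = rng.choice(len(x), size=k, replace=False, p=probs)
    out = np.zeros_like(x)
    out[keep] = x[keep]
    return out

x = np.random.default_rng(0).normal(size=10)
print(randomized_topk(x, k=3))  # mostly the 3 largest, sometimes others
```

Because every entry keeps a nonzero selection probability, the compressed activations are no longer a deterministic function of the largest values, which is what helps generalization and convergence per the abstract.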

Publication 03 FedSampling: A Better Sampling Strategy for Federated Learning

Authors
Tao Qi, Fangzhao Wu, Lingjuan Lyu (Sony AI), Yongfeng Huang, Xing Xie
Abstract
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way. Existing FL methods usually sample clients uniformly for local model learning in each round. However, different clients may have significantly different data sizes, and clients with more data do not get more opportunities to contribute to model training, which may lead to inferior performance. In this paper, instead of uniform client sampling, we propose a novel data-uniform sampling strategy for federated learning (FedSampling), which can effectively improve the performance of federated learning, especially when the data size distribution is highly imbalanced across clients. In each federated learning round, local data on each client is randomly sampled for local model learning according to a probability based on the server's desired sample size and the total sample size on all available clients. Since the data size on each client is privacy-sensitive, we propose a privacy-preserving way to estimate the total sample size with a differential privacy guarantee. Experiments on four benchmark datasets show that FedSampling can effectively improve the performance of federated learning.
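A rough sketch of the sampling rule, with a plain Laplace-noise estimate standing in for the paper's privacy-preserving size estimator (which we have simplified; the exact mechanism differs):

```python
# Sketch of data-uniform sampling as in FedSampling: each example is
# included with probability (desired sample size) / (estimated total data
# size), so clients with more data contribute proportionally more.
import numpy as np

rng = np.random.default_rng(0)
client_sizes = [100, 1000, 10]        # highly imbalanced clients
desired_total = 200                   # samples the server wants per round

# Each client reports a noisy size (simplified epsilon-DP Laplace noise).
epsilon = 1.0
noisy_total = sum(n + rng.laplace(scale=1 / epsilon) for n in client_sizes)

p = min(desired_total / max(noisy_total, 1.0), 1.0)
for i, n in enumerate(client_sizes):
    mask = rng.random(n) < p          # sample each example independently
    print(f"client {i}: {mask.sum()} of {n} examples used this round")
```

With p = desired / total, every example is equally likely to participate regardless of which client holds it, which is the data-uniform property the abstract describes.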

Publication 04 RAIN: RegulArization on Input and Network for Black-Box Domain Adaptation

Authors
Qucheng Peng, Zhengming Ding, Lingjuan Lyu (Sony AI), Lichao Sun, Chen Chen
Abstract
Source-free domain adaptation transfers the source-trained model to the target domain without exposing the source data, aiming to dispel concerns about data privacy and security. However, this paradigm is still at risk of data leakage due to adversarial attacks on the source model. Hence, the black-box setting allows the use of only the source model's outputs, but it suffers more severely from overfitting on the source domain because the source model's weights are unseen. In this paper, we propose a novel approach named RAIN (RegulArization on Input and Network) for black-box domain adaptation with both input-level and network-level regularization. At the input level, we design a new data augmentation technique, Phase MixUp, which highlights task-relevant objects in the interpolations, thus enhancing input-level regularization and class consistency for target models. At the network level, we develop a Subnetwork Distillation mechanism to transfer knowledge from the target subnetwork to the full target network via knowledge distillation, which alleviates overfitting on the source domain by learning diverse target representations. Extensive experiments show that our method achieves state-of-the-art performance on several cross-domain benchmarks under both single- and multi-source black-box domain adaptation.
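In the Fourier-domain augmentation literature, phase is commonly treated as carrying task-relevant content and amplitude as style. The sketch below is our hedged reading of a phase-preserving mixup in that spirit; the actual Phase MixUp recipe and coefficients in the paper may differ.

```python
# Hedged sketch of a Fourier phase-preserving mixup, in the spirit of
# Phase MixUp: phase is widely treated as carrying task-relevant content,
# amplitude as style. This is our reading of the idea, not the paper's
# exact recipe or coefficients.
import numpy as np

def phase_mixup(x1, x2, lam=0.7):
    """Mix x1 with an image built from x1's phase and x2's amplitude."""
    f1, f2 = np.fft.fft2(x1), np.fft.fft2(x2)
    swapped = np.abs(f2) * np.exp(1j * np.angle(f1))  # x2 style, x1 content
    x_swapped = np.real(np.fft.ifft2(swapped))
    return lam * x1 + (1 - lam) * x_swapped           # standard mixup blend

rng = np.random.default_rng(0)
a, b = rng.random((32, 32)), rng.random((32, 32))
mixed = phase_mixup(a, b)
print(mixed.shape)  # (32, 32): content from `a`, style drawn from `b`
```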

Publication 05 A Pathway Towards Responsible AI Generated Content

Authors
Lingjuan Lyu (Sony AI)
Abstract
AI Generated Content (AIGC) has received tremendous attention within the past few years, with content ranging from image and text to audio and video. Meanwhile, AIGC has become a double-edged sword and has recently received much criticism regarding its responsible usage. In this article, we focus on three main concerns that may hinder the healthy development and deployment of AIGC in practice: risks from privacy; bias, toxicity, and misinformation; and intellectual property (IP). By documenting known and potential risks, as well as possible misuse scenarios of AIGC, we aim to sound the alarm about potential risks and misuse, help society eliminate obstacles, and promote the more ethical and secure deployment of AIGC.