Hi! PARIS Reading groups “Transfer Learning”
The Hi! PARIS reading groups propose to study a topic through scientific articles, from both a theoretical and a practical point of view. The reading groups are opportunities for interaction between our corporate donors and our affiliated academic teams around selected topics of interest.
Each edition is planned for 2-4 sessions presenting one topic by means of 3-4 research papers. Each session combines a presentation of mathematical models and theoretical advances by a researcher with simulations in a Python notebook by an engineer.
Registration
Please register for the event using your professional email address to receive your personal conference link. Please do not share your personalised link with others; it is unique to you. You will receive an email regarding your registration status.
Transfer Learning
The Transfer Learning Reading Group focuses on understanding how machine learning models can transfer knowledge from one task or dataset to improve performance on another. Each session combines an introduction to the principles and applications of transfer learning, a technical discussion on domain adaptation to address distribution shifts between datasets, and a review of practical research papers. Topics include reducing divergence in latent spaces using techniques like covariance distances and Optimal Transport, as well as use cases such as enriching small datasets with larger, biased ones for improved model performance. The group aims to provide a clear and practical understanding of transfer learning for researchers and practitioners in various fields.
Session 1/3
Tuesday 11 February, 2025 – 2.00-3.30pm (Online)
- Speaker: Hélène Halconruy, Télécom SudParis – IP Paris
- Title: Transfer Learning: Unlocking Efficiency in Data-Scarce Scenarios
- Abstract: Transfer learning is a machine learning technique that adapts a model trained on a source task to a related target task, leveraging prior knowledge to save time and computational resources and to handle limited data effectively. Its versatility addresses classical machine learning limitations, with applications in areas such as activity recognition, image processing, and natural language processing. After outlining its principles, methods, and applications, we will discuss "Data Enriched Linear Regression" by Chen, Owen, and Shi, which explores transfer learning for linear regression, enriching a small target dataset with a larger, potentially biased source dataset.
- Paper: A. Chen, A. B. Owen, M. Shi. Data enriched linear regression. Electronic Journal of Statistics, 9(1):1078–1112, 2015.
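The data-enrichment idea can be sketched numerically. In the sketch below (notation assumed, not taken from the paper), the target data follow coefficients beta, the source data follow beta + gamma, and a ridge-style penalty lam·||gamma||² shrinks the possibly biased source toward the target model; the penalized least-squares problem is solved as one stacked system. This is a minimal illustration of the setup, not the paper's full estimator or its tuning rules.

```python
import numpy as np

def data_enriched_ls(X_t, y_t, X_s, y_s, lam):
    """Data-enrichment sketch: target responses follow X_t @ beta,
    source responses follow X_s @ (beta + gamma).  A penalty
    lam * ||gamma||^2 controls how much the source is trusted:
    a huge lam forces gamma -> 0 (pooled fit), lam = 0 lets the
    source follow its own coefficients (target-only fit for beta)."""
    p = X_t.shape[1]
    # Stack the two regressions and the penalty into one least-squares system
    # over theta = [beta; gamma].
    Z = np.block([
        [X_t,                    np.zeros((X_t.shape[0], p))],
        [X_s,                    X_s],
        [np.zeros((p, p)),       np.sqrt(lam) * np.eye(p)],
    ])
    r = np.concatenate([y_t, y_s, np.zeros(p)])
    theta, *_ = np.linalg.lstsq(Z, r, rcond=None)
    return theta[:p], theta[p:]   # beta (shared part), gamma (source bias)
```

Varying `lam` traces a path between the target-only fit and the pooled fit, which is the trade-off the session discusses.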
Session 2/3
Tuesday 12 March, 2025 – 2.00-3.30pm (Online)
- Speaker: Théo Gnassounou, Inria Saclay.
- Title: Domain Adaptation: Tackling Distribution Shift Without Access to Target Labels
- Abstract: Machine learning models are typically trained on a labelled dataset (the source domain) and then used to predict on a new dataset whose labels are unavailable (the target domain). But in real life, shift happens. Domain Adaptation (DA) aims to tackle this shift and avoid the drop in performance at test time. In deep learning, a common approach is to reduce the divergence between source and target representations in the latent space by adding a DA loss during training. Depending on the method, the divergence can be a covariance distance, an Optimal Transport-based distance, or an adversarial one.
The presentation will explore the goal of DA and explain the most impactful methods in the field and their applications.
- Paper:
– Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., … & Lempitsky, V. (2016). Domain-adversarial training of neural networks. Journal of Machine Learning Research, 17(59), 1-35.
– Damodaran, B. B., Kellenberger, B., Flamary, R., Tuia, D., & Courty, N. (2018). DeepJDOT: Deep joint distribution optimal transport for unsupervised domain adaptation. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 447-463).
– Sun, B., & Saenko, K. (2016). Deep CORAL: Correlation alignment for deep domain adaptation. In Computer Vision–ECCV 2016 Workshops, Proceedings, Part III (pp. 443-450). Springer International Publishing.
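Of the three divergences mentioned in the abstract, the covariance distance is the simplest to write down. The sketch below shows a CORAL-style loss: the squared Frobenius distance between the feature covariances of a source batch and a target batch, which a network would minimize alongside its classification loss. It is a minimal numpy illustration under assumed batch shapes, not the papers' training code.

```python
import numpy as np

def coral_loss(Zs, Zt):
    """CORAL-style divergence sketch between source features Zs and
    target features Zt (each a batch-by-dim array): the squared
    Frobenius norm of the difference of their covariance matrices,
    scaled by 1 / (4 d^2) as in the Deep CORAL formulation."""
    d = Zs.shape[1]

    def cov(Z):
        Zc = Z - Z.mean(axis=0, keepdims=True)   # center the batch
        return Zc.T @ Zc / (Z.shape[0] - 1)      # unbiased covariance

    return np.sum((cov(Zs) - cov(Zt)) ** 2) / (4 * d * d)
```

The loss is zero when the two batches share second-order statistics and grows as their covariances diverge, which is exactly the signal used to align the latent spaces; the OT-based and adversarial losses from the other two papers replace this term with richer divergences.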
Session 3/3
Tuesday 8 April, 2025 – 2.00-3.30pm (Online)
- Speaker: Hattay Anas, CEA–List.
- Title: More info to come
- Abstract: More info to come

This work has benefited from a government grant managed by the ANR under France 2030 with the reference “ANR-22-CMAS-0002”.