no code implementations • 18 Apr 2024 • Insoo Kim, Jae Seok Choi, Geonseok Seo, Kinam Kwon, Jinwoo Shin, Hyong-Euk Lee
As recent advances in mobile camera technology have enabled the capability to capture high-resolution images, such as 4K images, the demand for an efficient deblurring model handling large motion has increased.
no code implementations • 17 Apr 2024 • Jaehyung Kim, Jaehyun Nam, Sangwoo Mo, Jongjin Park, Sang-Woo Lee, Minjoon Seo, Jung-Woo Ha, Jinwoo Shin
While incorporating new information with the retrieval of relevant passages is a promising way to improve QA with LLMs, the existing methods often require additional fine-tuning which becomes infeasible with recent LLMs.
1 code implementation • 16 Apr 2024 • Woomin Song, Seunghyuk Oh, Sangwoo Mo, Jaehyung Kim, Sukmin Yun, Jung-Woo Ha, Jinwoo Shin
Large language models (LLMs) have shown remarkable performance in various natural language processing tasks.
1 code implementation • 2 Apr 2024 • KyuYoung Kim, Jongheon Jeong, Minyong An, Mohammad Ghavamzadeh, Krishnamurthy Dvijotham, Jinwoo Shin, Kimin Lee
To investigate this issue in depth, we introduce the Text-Image Alignment Assessment (TIA2) benchmark, which comprises a diverse collection of text prompts, images, and human annotations.
no code implementations • 22 Mar 2024 • Kyungmin Lee, Kihyuk Sohn, Jinwoo Shin
Recent progress in text-to-3D generation has been achieved through score distillation methods: they leverage pre-trained text-to-image (T2I) diffusion models by distilling through the diffusion model training objective.
no code implementations • 21 Mar 2024 • Sihyun Yu, Weili Nie, De-An Huang, Boyi Li, Jinwoo Shin, Anima Anandkumar
To tackle this issue, we propose content-motion latent diffusion model (CMD), a novel efficient extension of pretrained image diffusion models for video generation.
1 code implementation • 8 Mar 2024 • Yisol Choi, Sangkyung Kwak, Kyungmin Lee, Hyungwon Choi, Jinwoo Shin
Finally, we present a customization method using a pair of person-garment images, which significantly improves fidelity and authenticity.
Ranked #1 on Virtual Try-on on VITON-HD
1 code implementation • 7 Mar 2024 • Jihoon Tack, Jaehyung Kim, Eric Mitchell, Jinwoo Shin, Yee Whye Teh, Jonathan Richard Schwarz
We propose an amortized feature extraction and memory-augmentation approach to compress and extract information from new documents into compact modulations stored in a memory bank.
no code implementations • 19 Feb 2024 • Kyungmin Lee, Sangkyung Kwak, Kihyuk Sohn, Jinwoo Shin
In particular, our method results in a superior Pareto frontier to the baselines.
no code implementations • 18 Jan 2024 • Seong Jin Cho, Gwangsu Kim, Junghyun Lee, Jinwoo Shin, Chang D. Yoo
Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.
no code implementations • 15 Aug 2023 • Honggu Kang, Seohyeon Cha, Jinwoo Shin, Jongmyeong Lee, Joonhyuk Kang
Previous studies tackle system heterogeneity by splitting a model into submodels, but with fewer degrees of freedom in terms of model architecture.
1 code implementation • 12 Jul 2023 • Sanghyun Kim, Seohyeon Jung, Balhae Kim, Moonseok Choi, Jinwoo Shin, Juho Lee
Large-scale image generation models, with impressive quality made possible by the vast amount of data available on the Internet, raise social concerns that these models may generate harmful or copyrighted content.
no code implementations • 4 Jul 2023 • Subin Kim, Kyungmin Lee, June Suk Choi, Jongheon Jeong, Kihyuk Sohn, Jinwoo Shin
Generative priors of large-scale text-to-image diffusion models enable a wide range of new generation and editing applications on diverse visual modalities.
1 code implementation • 8 Jun 2023 • Jaehyung Kim, Jinwoo Shin, Dongyeop Kang
In this paper, we investigate task-specific preferences between pairs of input texts as a new alternative way for such auxiliary data annotation.
1 code implementation • 30 May 2023 • Jaehyung Kim, Yekyung Kim, Karin de Langis, Jinwoo Shin, Dongyeop Kang
However, not all samples in these datasets are equally valuable for learning, as some may be redundant or noisy.
1 code implementation • NeurIPS 2023 • Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jinwoo Shin
By combining these objectives, S-CLIP significantly enhances the training of CLIP using only a few image-text pairs, as demonstrated in various specialist domains, including remote sensing, fashion, scientific figures, and comics.
1 code implementation • CVPR 2023 • Sukmin Yun, Seong Hyeon Park, Paul Hongsuck Seo, Jinwoo Shin
In this paper, we introduce a novel image-free segmentation task where the goal is to perform semantic segmentation given only a set of the target semantic categories, but without any task-specific images and annotations.
1 code implementation • CVPR 2023 • Jongheon Jeong, Sihyun Yu, Hankook Lee, Jinwoo Shin
In practical scenarios where training data is limited, many predictive signals in the data can arise from biases in data acquisition (i.e., be less generalizable), so one cannot prevent a model from co-adapting to such (so-called) "shortcut" signals; this makes the model fragile under various distribution shifts.
1 code implementation • 20 Mar 2023 • Junsu Kim, Younggyo Seo, Sungsoo Ahn, Kyunghwan Son, Jinwoo Shin
Recently, graph-based planning algorithms have gained much attention for solving goal-conditioned reinforcement learning (RL) tasks: they provide a sequence of subgoals to reach the target goal, and the agents learn to execute subgoal-conditioned policies.
1 code implementation • 6 Mar 2023 • Hankook Lee, Jongheon Jeong, Sejun Park, Jinwoo Shin
To enable the joint training of EBM and CRL, we also design a new class of latent-variable EBMs for learning the joint density of data and the contrastive latent variable.
1 code implementation • 2 Mar 2023 • Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
In this paper, we present Preference Transformer, a neural architecture that models human preferences using transformers.
1 code implementation • 2 Mar 2023 • Jaehyun Nam, Jihoon Tack, Kyungmin Lee, Hankook Lee, Jinwoo Shin
Learning with few labeled tabular samples is often an essential requirement for industrial machine learning applications, as many varieties of tabular data suffer from high annotation costs or difficulties in collecting new samples for novel tasks.
1 code implementation • 6th Workshop on Meta-Learning at NeurIPS 2022 • Huiwon Jang, Hankook Lee, Jinwoo Shin
Unsupervised meta-learning aims to learn generalizable knowledge across a distribution of tasks constructed from unlabeled data.
1 code implementation • CVPR 2023 • Sihyun Yu, Kihyuk Sohn, Subin Kim, Jinwoo Shin
Specifically, PVDM is composed of two components: (a) an autoencoder that projects a given video as 2D-shaped latent vectors that factorize the complex cubic structure of video pixels and (b) a diffusion model architecture specialized for our new factorized latent space and the training/sampling procedure to synthesize videos of arbitrary length with a single model.
1 code implementation • 5 Feb 2023 • Younggyo Seo, Junsu Kim, Stephen James, Kimin Lee, Jinwoo Shin, Pieter Abbeel
In this paper, we investigate how to learn good representations with multi-view data and utilize them for visual robotic manipulation.
1 code implementation • 26 Jan 2023 • Younghyun Kim, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jaeho Lee, Jinwoo Shin
The keyword explanation form of visual bias offers several advantages, such as clear group naming for bias discovery and a natural extension to debiasing using these group names.
no code implementations • 23 Jan 2023 • Jonathan Richard Schwarz, Jihoon Tack, Yee Whye Teh, Jaeho Lee, Jinwoo Shin
We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).
no code implementations • CVPR 2023 • Jongin Lim, Youngdong Kim, Byungjai Kim, Chanho Ahn, Jinwoo Shin, Eunho Yang, Seungju Han
Our key idea is that an adversarial attack on a biased model that makes decisions based on spurious correlations may generate synthetic bias-conflicting samples, which can then be used as augmented training data for learning a debiased model.
1 code implementation • 18 Dec 2022 • Jongheon Jeong, Seojin Kim, Jinwoo Shin
For smoothed classifiers, the fundamental trade-off between accuracy and (adversarial) robustness is well evidenced in the literature: increasing the robustness of a classifier on one input can come at the expense of decreased accuracy on other inputs.
2 code implementations • 13 Dec 2022 • Hyunwoo Kang, Sangwoo Mo, Jinwoo Shin
Using the object labels, OAMixer computes a reweighting mask with a learnable scale parameter that intensifies the interaction of patches containing similar objects and applies the mask to the patch mixing layers.
no code implementations • 5 Dec 2022 • Junhyun Nam, Sangwoo Mo, Jaeho Lee, Jinwoo Shin
(a) Fairness Intervention (FI): emphasize the minority samples that are hard to generate due to the spurious correlation in the training dataset.
1 code implementation • 13 Oct 2022 • Subin Kim, Sihyun Yu, Jaeho Lee, Jinwoo Shin
Succinct representation of complex signals using coordinate-based neural representations (CNRs) has seen great progress, and several recent efforts focus on extending them for handling videos.
1 code implementation • 11 Oct 2022 • Jihoon Tack, Jongjin Park, Hankook Lee, Jaeho Lee, Jinwoo Shin
The idea of using a separately trained target model (or teacher) to improve the performance of the student model has been increasingly popular in various machine learning domains, and meta-learning is no exception; a recent discovery shows that utilizing task-wise target models can significantly boost the generalization performance.
no code implementations • 23 Aug 2022 • Kisoo Kwon, Kuhwan Jung, Junghyun Park, Hwidong Na, Jinwoo Shin
In this paper, we investigate the problem of string-based molecular generation via variational autoencoders (VAEs), which have served as a popular generative approach for various tasks in artificial intelligence.
1 code implementation • 12 Aug 2022 • Kyungmin Lee, Jinwoo Shin
Here, the choice of data augmentation is sensitive to the quality of the learned representations: the harder the applied data augmentations, the more task-relevant information the views share, but also more task-irrelevant information that can hinder the generalization capability of the representation.
1 code implementation • 10 Aug 2022 • Taesik Gong, Jongheon Jeong, Taewon Kim, Yewon Kim, Jinwoo Shin, Sung-Ju Lee
Test-time adaptation (TTA) is an emerging paradigm that addresses distributional shifts between training and testing phases without additional data acquisition or labeling cost; only unlabeled test data streams are used for continual model adaptation.
1 code implementation • 19 Jul 2022 • Sukmin Yun, Jaehyung Kim, Dongyoon Han, Hwanjun Song, Jung-Woo Ha, Jinwoo Shin
Understanding temporal dynamics of video is an essential aspect of learning better video representations.
1 code implementation • ICML 2022 • Hwijoon Lim, Yechan Kim, Sukmin Yun, Jinwoo Shin, Dongsu Han
The teacher-student (TS) framework, training a (student) network by utilizing an auxiliary superior (teacher) network, has been adopted as a popular training paradigm in many machine learning schemes since the seminal work on knowledge distillation (KD) for model compression and transfer learning.
1 code implementation • CVPR 2022 • Sukmin Yun, Hankook Lee, Jaehyung Kim, Jinwoo Shin
Despite its simplicity, we demonstrate that it can significantly improve the performance of existing SSL methods for various visual tasks, including object detection and semantic segmentation.
no code implementations • 5 Apr 2022 • Chaewon Kim, Jaeho Lee, Jinwoo Shin
Recent denoising algorithms based on the "blind-spot" strategy show impressive blind image denoising performances, without utilizing any external dataset.
no code implementations • ICLR 2022 • Junhyun Nam, Jaehyung Kim, Jaeho Lee, Jinwoo Shin
The paradigm of worst-group loss minimization has shown promise in avoiding the learning of spurious correlations, but it requires costly additional supervision on spurious attributes.
no code implementations • ICLR 2022 • Jongjin Park, Younggyo Seo, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
In order to leverage unlabeled samples for reward learning, we infer pseudo-labels of the unlabeled samples based on the confidence of the preference predictor.
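This style of confidence-based pseudo-labeling can be sketched in a few lines; the threshold, the predictor outputs, and the function name below are illustrative stand-ins, not the paper's actual implementation:

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Assign pseudo-labels to unlabeled samples whose predicted
    preference confidence exceeds a threshold; the rest are skipped."""
    confident = probs.max(axis=1) >= threshold
    labels = probs.argmax(axis=1)
    return labels[confident], confident

# Toy predictor outputs for 4 unlabeled preference pairs.
probs = np.array([[0.98, 0.02],
                  [0.60, 0.40],
                  [0.05, 0.95],
                  [0.50, 0.50]])
labels, mask = pseudo_label(probs)
print(labels.tolist())  # [0, 1]
print(mask.tolist())    # [True, False, True, False]
```

Only the two high-confidence pairs receive pseudo-labels; the ambiguous ones are left out of the reward-learning update.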
1 code implementation • ICLR 2022 • Sihyun Yu, Jihoon Tack, Sangwoo Mo, Hyunsu Kim, Junho Kim, Jung-Woo Ha, Jinwoo Shin
In this paper, we find that the recently emerging paradigm of implicit neural representations (INRs), which encodes a continuous signal into a parameterized neural network, effectively mitigates the issue.
Ranked #25 on Video Generation on UCF-101
no code implementations • CVPR 2022 • Minsu Ko, Eunju Cha, Sungjoo Suh, Huijin Lee, Jae-Joon Han, Jinwoo Shin, Bohyung Han
Unsupervised image-to-image translation has gained considerable attention due to the recent impressive progress based on generative adversarial networks (GANs).
no code implementations • CVPR 2022 • Jian Meng, Li Yang, Jinwoo Shin, Deliang Fan, Jae-sun Seo
Contrastive learning (or its variants) has recently become a promising direction in the self-supervised learning domain, achieving performance similar to supervised learning with minimal fine-tuning.
no code implementations • 16 Dec 2021 • Joonhyung Park, June Yong Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang
However, they now suffer from a lack of sample diversification, as they always deterministically select regions of maximum saliency, injecting bias into the augmented data.
no code implementations • 22 Nov 2021 • Taesik Gong, Yewon Kim, Adiba Orzikulova, Yunxin Liu, Sung Ju Hwang, Jinwoo Shin, Sung-Ju Lee
However, various factors such as different users, devices, and environments impact the performance of such applications, thus making the domain shift (i.e., distributional shift between the training domain and the target domain) a critical issue in mobile sensing.
2 code implementations • NeurIPS 2021 • Hankook Lee, Kibok Lee, Kimin Lee, Honglak Lee, Jinwoo Shin
Recent unsupervised representation learning methods have been shown to be effective in a range of vision tasks by learning representations invariant to data augmentations such as random cropping and color jittering.
1 code implementation • NeurIPS 2021 • Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin
Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.
1 code implementation • NeurIPS 2021 • Jongjin Park, Younggyo Seo, Chang Liu, Li Zhao, Tao Qin, Jinwoo Shin, Tie-Yan Liu
Behavioral cloning has proven to be effective for learning sequential decision-making policies from expert demonstrations.
no code implementations • NeurIPS 2021 • Sihyun Yu, Sungsoo Ahn, Le Song, Jinwoo Shin
We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries.
1 code implementation • NeurIPS 2021 • Jaeho Lee, Jihoon Tack, Namhoon Lee, Jinwoo Shin
Implicit neural representations are a promising new avenue of representing general signals by learning a continuous function that, parameterized as a neural network, maps the domain of a signal to its codomain; the mapping from spatial coordinates of an image to its pixel values, for example.
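A minimal sketch of this idea, with a tiny randomly initialized MLP standing in for a trained INR (all layer sizes and names here are illustrative; in practice the weights are fit to one signal, e.g., a single image, by gradient descent):

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal implicit neural representation: an MLP f(x, y) -> RGB.
W1 = rng.normal(size=(2, 64)); b1 = np.zeros(64)
W2 = rng.normal(size=(64, 3)); b2 = np.zeros(3)

def inr(coords):
    h = np.tanh(coords @ W1 + b1)  # hidden features per coordinate
    return h @ W2 + b2             # one RGB value per coordinate

# Query the representation on a 4x4 coordinate grid in [-1, 1]^2.
xs = np.linspace(-1, 1, 4)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)
image = inr(grid).reshape(4, 4, 3)
print(image.shape)  # (4, 4, 3)
```

The signal is stored entirely in the network weights, so resolution is decoupled from storage: querying a denser grid yields a higher-resolution decoding of the same representation.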
1 code implementation • NeurIPS 2021 • Junsu Kim, Younggyo Seo, Jinwoo Shin
In this paper, we present HIerarchical reinforcement learning Guided by Landmarks (HIGL), a novel framework for training a high-level policy with a reduced action space guided by landmarks, i.e., promising states to explore.
no code implementations • 29 Sep 2021 • Sukmin Yun, Hankook Lee, Jaehyung Kim, Jinwoo Shin
This paper aims to improve their performance further by utilizing the architectural advantages of the underlying neural network, as the current state-of-the-art visual pretext tasks for self-supervised learning do not enjoy this benefit, i.e., they are architecture-agnostic.
no code implementations • ICLR 2022 • Jaehyung Kim, Dongyeop Kang, Sungsoo Ahn, Jinwoo Shin
Remarkably, our method is more effective in the challenging low-data and class-imbalanced regimes, and the learned augmentation policy transfers well to different tasks and models.
no code implementations • ICLR 2022 • Youngmin Oh, Jinwoo Shin, Eunho Yang, Sung Ju Hwang
Experience replay is an essential component in off-policy model-free reinforcement learning (MfRL).
no code implementations • 29 Sep 2021 • Kyunghwan Son, Junsu Kim, Yung Yi, Jinwoo Shin
Although these two sources are both important factors for learning robust policies of agents, prior works do not separate them or deal with only a single risk source, which could lead to suboptimal equilibria.
Ranked #1 on SMAC+ on Off_Near_parallel
1 code implementation • NeurIPS 2021 • Sangwoo Mo, Hyunwoo Kang, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin
Contrastive self-supervised learning has shown impressive results in learning visual representations from unlabeled images by enforcing invariance against different data augmentations.
no code implementations • 22 Jul 2021 • Sihyun Yu, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
Abstract reasoning, i.e., inferring complicated patterns from given observations, is a central building block of artificial general intelligence.
1 code implementation • 1 Jul 2021 • SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin
Recent advances in deep offline reinforcement learning (RL) have made it possible to train strong robotic agents from offline datasets.
1 code implementation • 29 Jun 2021 • Jongjin Park, Sukmin Yun, Jongheon Jeong, Jinwoo Shin
Semi-supervised learning (SSL) has been a powerful strategy for incorporating a small number of labels to learn better representations.
2 code implementations • 28 Jun 2021 • Hyuntak Cha, Jaeho Lee, Jinwoo Shin
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks than joint-training methods relying on task-specific supervision.
1 code implementation • CVPR 2021 • Insoo Kim, Seungju Han, Ji-won Baek, Seong-Jin Park, Jae-Joon Han, Jinwoo Shin
Our two-stage scheme allows the network to produce clean-like and robust features from any quality images, by reconstructing their clean images via the invertible decoder.
Ranked #17 on Domain Generalization on ImageNet-C
no code implementations • ICML Workshop AML 2021 • Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin
Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.
no code implementations • ICML Workshop AML 2021 • Minseon Kim, Jihoon Tack, Jinwoo Shin, Sung Ju Hwang
Adversarial training methods, which minimize the loss of adversarially perturbed training examples, have been extensively studied as a solution for improving the robustness of deep neural networks.
1 code implementation • NeurIPS 2021 • Amir Zandieh, Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin
To accelerate learning with NTK, we design a near input-sparsity time approximation algorithm for NTK, by sketching the polynomial expansions of arc-cosine kernels: our sketch for the convolutional counterpart of NTK (CNTK) can transform any image using a linear runtime in the number of pixels.
1 code implementation • 9 Jun 2021 • Junsu Kim, Sungsoo Ahn, Hankook Lee, Jinwoo Shin
Our main idea is based on a self-improving procedure that trains the model to imitate successful trajectories found by itself.
Ranked #4 on Multi-step retrosynthesis on USPTO-190
no code implementations • 3 May 2021 • Hankook Lee, Sungsoo Ahn, Seung-Woo Seo, You Young Song, Eunho Yang, Sung-Ju Hwang, Jinwoo Shin
Retrosynthesis, of which the goal is to find a set of reactants for synthesizing a target product, is an emerging research area of deep learning.
no code implementations • 3 May 2021 • Seewoo Lee, Youngduck Choi, Juneyoung Park, Byungsoo Kim, Jinwoo Shin
Knowledge Tracing (KT), tracking a human's knowledge acquisition, is a central component in online learning and AI in Education.
no code implementations • 3 Apr 2021 • Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin
We combine random features of the arc-cosine kernels with a sketching-based algorithm that runs in time linear in both the number of data points and the input dimension.
1 code implementation • ICLR 2021 • Jongheon Jeong, Jinwoo Shin
Recent works in Generative Adversarial Networks (GANs) are actively revisiting various data augmentation techniques as an effective way to prevent discriminator overfitting.
1 code implementation • ICML Workshop AML 2021 • Jihoon Tack, Sihyun Yu, Jongheon Jeong, Minseon Kim, Sung Ju Hwang, Jinwoo Shin
Adversarial training (AT) is currently one of the most successful methods to obtain the adversarial robustness of deep neural networks.
2 code implementations • ICLR Workshop SSL-RL 2021 • Younggyo Seo, Lili Chen, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee
Recent exploration methods have proven to be a recipe for improving sample-efficiency in deep reinforcement learning (RL).
no code implementations • 7 Feb 2021 • Youngmin Oh, Jinwoo Shin, Eunho Yang, Sung Ju Hwang
We show that the proposed scheme, called Model-augmented $Q$-learning (MQL), obtains a policy-invariant solution which is identical to the solution obtained by learning with true reward.
no code implementations • 1 Jan 2021 • SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin
As it turns out, fine-tuning offline RL agents is a non-trivial challenge, due to distribution shift – the agent encounters out-of-distribution samples during online interaction, which may cause bootstrapping error in Q-learning and instability during fine-tuning.
1 code implementation • ICCV 2021 • Hyuntak Cha, Jaeho Lee, Jinwoo Shin
Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks than cross-entropy based methods which rely on task-specific supervision.
1 code implementation • 17 Dec 2020 • Seung Jun Moon, Sangwoo Mo, Kimin Lee, Jaeho Lee, Jinwoo Shin
We claim that one central obstacle to the reliability is the over-reliance of the model on a limited number of keywords, instead of looking at the whole context.
no code implementations • NeurIPS 2020 • Junhyun Nam, Hyuntak Cha, Sung-Soo Ahn, Jaeho Lee, Jinwoo Shin
Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased.
no code implementations • Asian Conference on Computer Vision (ACCV) 2020 • Insoo Kim, Seungju Han, Seong-Jin Park, Ji-won Baek, Jinwoo Shin, Jae-Joon Han, Changkyu Choi
Softmax-based learning methods have shown state-of-the-art performances on large-scale face recognition tasks.
Ranked #1 on Face Verification on CALFW
1 code implementation • NeurIPS 2020 • Younggyo Seo, Kimin Lee, Ignasi Clavera, Thanard Kurutach, Jinwoo Shin, Pieter Abbeel
Model-based reinforcement learning (RL) has shown great potential in various control tasks in terms of both sample-efficiency and final performance.
no code implementations • 26 Oct 2020 • Sejun Park, Jaeho Lee, Chulhee Yun, Jinwoo Shin
It is known that $O(N)$ parameters are sufficient for neural networks to memorize arbitrary $N$ input-label pairs.
3 code implementations • ICLR 2021 • Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee
Contrastive representation learning has been shown to be effective for learning representations from unlabeled data.
1 code implementation • ICLR 2021 • Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin
Recent discoveries on neural network pruning reveal that, with a carefully chosen layerwise sparsity, a simple magnitude-based pruning achieves state-of-the-art tradeoff between sparsity and performance.
no code implementations • NeurIPS 2020 • Youngsung Kim, Jinwoo Shin, Eunho Yang, Sung Ju Hwang
While humans can solve a visual puzzle that requires logical reasoning by observing only a few samples, state-of-the-art deep reasoning models would require training over large amounts of data to obtain similar performance on the same task.
1 code implementation • NeurIPS 2020 • In Huh, Eunho Yang, Sung Ju Hwang, Jinwoo Shin
Time-reversal symmetry, which requires that the dynamics of a system should not change with the reversal of time axis, is a fundamental property that frequently holds in classical and quantum mechanics.
1 code implementation • NeurIPS 2020 • Jaehyung Kim, Youngbum Hur, Sejun Park, Eunho Yang, Sung Ju Hwang, Jinwoo Shin
While semi-supervised learning (SSL) has proven to be a promising way for leveraging unlabeled data when labeled data is scarce, the existing SSL algorithms typically assume that training class distributions are balanced.
1 code implementation • NeurIPS 2020 • Jihoon Tack, Sangwoo Mo, Jongheon Jeong, Jinwoo Shin
Based on this, we propose a new detection score that is specific to the proposed training scheme.
no code implementations • ICLR 2021 • Youngmin Oh, Kimin Lee, Jinwoo Shin, Eunho Yang, Sung Ju Hwang
Experience replay, which enables the agents to remember and reuse experience from the past, has played a significant role in the success of off-policy reinforcement learning (RL).
2 code implementations • 6 Jul 2020 • Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, Jinwoo Shin
Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased.
Ranked #1 on Out-of-Distribution Generalization on ImageNet-W
2 code implementations • NeurIPS 2020 • Sungsoo Ahn, Junsu Kim, Hankook Lee, Jinwoo Shin
De novo molecular design attempts to search over the chemical space for molecules with the desired property.
no code implementations • 22 Jun 2020 • Kyunghwan Son, Sung-Soo Ahn, Roben Delos Reyes, Jinwoo Shin, Yung Yi
QTRAN is a multi-agent reinforcement learning (MARL) algorithm capable of learning the largest class of joint-action value functions to date.
1 code implementation • 22 Jun 2020 • Divyam Madaan, Jinwoo Shin, Sung Ju Hwang
Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods against adversarial perturbations.
1 code implementation • ICML 2020 • Sungsoo Ahn, Younggyo Seo, Jinwoo Shin
Designing efficient algorithms for combinatorial optimization appears ubiquitously in various scientific fields.
no code implementations • ICLR 2021 • Sejun Park, Chulhee Yun, Jaeho Lee, Jinwoo Shin
In this work, we provide the first definitive result in this direction for networks using the ReLU activation functions: The minimum width required for the universal approximation of the $L^p$ functions is exactly $\max\{d_x+1, d_y\}$.
1 code implementation • NeurIPS 2020 • Jaeho Lee, Sejun Park, Jinwoo Shin
The second result, based on a novel variance-based characterization of OCE, gives an expected loss guarantee with a suppressed dependence on the smoothness of the selected OCE.
1 code implementation • NeurIPS 2020 • Jongheon Jeong, Jinwoo Shin
A recent technique of randomized smoothing has shown that the worst-case (adversarial) $\ell_2$-robustness can be transformed into the average-case Gaussian-robustness by "smoothing" a classifier, i.e., by considering the averaged prediction over Gaussian noise.
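The averaged prediction in randomized smoothing is typically estimated by Monte Carlo sampling; a minimal sketch, where the base classifier, noise level, and sample count are illustrative stand-ins rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def base_classifier(x):
    # Stand-in base classifier: class 1 iff the feature sum is positive.
    return int(x.sum() > 0)

def smoothed_predict(x, sigma=0.5, n=1000, num_classes=2):
    """Monte Carlo estimate of the smoothed classifier g(x):
    the majority vote of the base classifier under Gaussian noise."""
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(n):
        counts[base_classifier(x + sigma * rng.normal(size=x.shape))] += 1
    return counts.argmax()

x = np.array([0.3, 0.4])  # feature sum clearly positive
print(smoothed_predict(x))  # 1
```

The fraction of noisy votes for the winning class also yields a certified $\ell_2$-radius in the standard analysis, which is what makes the smoothed classifier certifiably robust.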
2 code implementations • ICML 2020 • Kimin Lee, Younggyo Seo, Seung-Hyun Lee, Honglak Lee, Jinwoo Shin
Model-based reinforcement learning (RL) enjoys several benefits, such as data-efficiency and planning, by learning a model of the environment's dynamics.
1 code implementation • CVPR 2020 • Jaehyung Kim, Jongheon Jeong, Jinwoo Shin
In most real-world scenarios, labeled training datasets are highly class-imbalanced, where deep neural networks suffer from generalizing to a balanced testing criterion.
Ranked #43 on Long-tail Learning on CIFAR-10-LT (ρ=10)
1 code implementation • CVPR 2020 • Sukmin Yun, Jongjin Park, Kimin Lee, Jinwoo Shin
Deep neural networks with millions of parameters may suffer from poor generalization due to overfitting.
4 code implementations • 25 Feb 2020 • Sangwoo Mo, Minsu Cho, Jinwoo Shin
Generative adversarial networks (GANs) have shown outstanding performance on a wide range of problems in computer vision, graphics, and machine learning, but often require large amounts of training data and heavy computational resources.
Ranked #5 on 10-shot image generation on Babies
1 code implementation • ICLR 2020 • Sejun Park, Jaeho Lee, Sangwoo Mo, Jinwoo Shin
Magnitude-based pruning is one of the simplest methods for pruning neural networks.
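Magnitude-based pruning simply zeroes the smallest-magnitude weights; a minimal global-pruning sketch (layerwise variants instead choose a per-layer sparsity, and the names here are illustrative):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of weights (global pruning).
    Ties at the threshold are also zeroed."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    return np.where(np.abs(weights) <= threshold, 0.0, weights)

W = np.array([[0.1, -0.5], [2.0, -0.05]])
pruned = magnitude_prune(W, sparsity=0.5)
print(pruned)  # the two smallest-magnitude entries (0.1 and -0.05) become 0
```

In practice the pruned model is then retrained (or the surviving weights rewound) so accuracy can recover at the chosen sparsity.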
1 code implementation • NeurIPS 2019 • Sangwoo Mo, Chiheon Kim, Sungwoong Kim, Minsu Cho, Jinwoo Shin
Conditional generative adversarial networks (cGANs) have gained considerable attention in recent years due to their class-wise controllability and superior quality on complex generation tasks.
1 code implementation • ICML 2020 • Hankook Lee, Sung Ju Hwang, Jinwoo Shin
Our main idea is to learn a single unified task with respect to the joint distribution of the original and self-supervised labels, i.e., we augment original labels via self-supervision of input transformation.
2 code implementations • ICLR 2020 • Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee
Deep reinforcement learning (RL) agents often fail to generalize to unseen environments (even ones semantically similar to those seen during training), particularly when they are trained on high-dimensional state spaces such as images.
no code implementations • 25 Sep 2019 • Sungsoo Ahn, Younggyo Seo, Jinwoo Shin
Designing efficient algorithms for combinatorial optimization appears ubiquitously in various scientific fields.
1 code implementation • ICML 2020 • Divyam Madaan, Jinwoo Shin, Sung Ju Hwang
Despite the remarkable performance of deep neural networks on various computer vision tasks, they are known to be susceptible to adversarial perturbations, which makes it challenging to deploy them in real-world safety-critical applications.
1 code implementation • ICML 2020 • Insu Han, Haim Avron, Jinwoo Shin
This paper studies how to sketch element-wise functions of low-rank matrices.
4 code implementations • 15 May 2019 • Yunhun Jang, Hankook Lee, Sung Ju Hwang, Jinwoo Shin
To address the issue, we propose a novel transfer learning approach based on meta-learning that can automatically learn what knowledge to transfer from the source network to where in the target network.
no code implementations • 14 May 2019 • Sejun Park, Eunho Yang, Se-Young Yun, Jinwoo Shin
Our contribution is two-fold: (a) we first propose a fully polynomial-time approximation scheme (FPTAS) for approximating the partition function of a GM associated with a low-rank coupling matrix; (b) for general high-rank GMs, we design a spectral mean-field scheme utilizing (a) as a subroutine, which approximates a high-rank GM as a product of rank-1 GMs for efficient approximation of the partition function.
no code implementations • 11 May 2019 • Jongheon Jeong, Jinwoo Shin
Recent progress in deep convolutional neural networks (CNNs) has enabled a simple paradigm of architecture design: larger models typically achieve better accuracy.
no code implementations • ICLR 2019 • Jongheon Jeong, Jinwoo Shin
Bottleneck structures with identity (e.g., residual) connections have emerged as a popular paradigm for designing deep convolutional neural networks (CNNs) that process large-scale features efficiently.
no code implementations • ICLR 2019 • Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin
For instance, on the CIFAR-10 dataset containing 45% noisy training labels, we improve the test accuracy of a deep model optimized by the state-of-the-art noise-handling training method from 33.34% to 43.02%.
1 code implementation • ICLR 2019 • Sangwoo Mo, Minsu Cho, Jinwoo Shin
Unsupervised image-to-image translation has gained considerable attention due to the recent impressive progress based on generative adversarial networks (GANs).
1 code implementation • ICCV 2019 • Kibok Lee, Kimin Lee, Jinwoo Shin, Honglak Lee
Lifelong learning with deep neural networks is well-known to suffer from catastrophic forgetting: the performance on previous tasks drastically degrades when learning a new task.
1 code implementation • 31 Jan 2019 • Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin
Large-scale datasets may contain significant proportions of noisy (incorrect) class labels, and it is well-known that modern deep neural networks (DNNs) poorly generalize from such noisy training datasets.
1 code implementation • 28 Dec 2018 • Sangwoo Mo, Minsu Cho, Jinwoo Shin
Our comparative evaluation demonstrates the effectiveness of the proposed method on different image datasets, in particular, in the aforementioned challenging cases.
no code implementations • NeurIPS 2018 • Jonghwan Mun, Kimin Lee, Jinwoo Shin, Bohyung Han
The proposed framework is model-agnostic and applicable to any tasks other than VQA, e.g., image classification with a large number of labels but few per-class examples, which is known to be difficult under existing MCL schemes.
4 code implementations • NeurIPS 2018 • Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin
Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.
Ranked #2 on Out-of-Distribution Detection on MS-1M vs. IJB-C
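The framework above builds on class-conditional Gaussian modeling of deep features. As a rough illustration only (not the paper's implementation, which operates on intermediate features of a trained network), the sketch below assumes feature vectors are already extracted and scores a sample by its negative Mahalanobis distance to the closest class mean under a tied covariance; all names are illustrative.

```python
import numpy as np

def fit_gaussian_stats(features, labels):
    """Estimate per-class means and a tied (shared) precision matrix."""
    classes = np.unique(labels)
    means = {c: features[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([features[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(features)
    # Small ridge keeps the inverse well-defined for near-singular covariances.
    precision = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return means, precision

def mahalanobis_confidence(x, means, precision):
    """Confidence score: negative Mahalanobis distance to the closest class mean.
    In-distribution samples should receive higher scores than far-away samples."""
    dists = [(x - mu) @ precision @ (x - mu) for mu in means.values()]
    return -min(dists)
```

Thresholding this score then gives a simple detector: samples whose confidence falls below a validation-chosen threshold are flagged as out-of-distribution or adversarial.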
1 code implementation • 7 Jul 2018 • Hankook Lee, Jinwoo Shin
This is remarkable given their simplicity and effectiveness, but jointly training many thin sub-networks poses a new challenge in training complexity.
no code implementations • CVPR 2018 • Kibok Lee, Kimin Lee, Kyle Min, Yuting Zhang, Jinwoo Shin, Honglak Lee
The essential ingredients of our methods are confidence-calibrated classifiers, data relabeling, and the leave-one-out strategy for modeling novel classes under the hierarchical taxonomy.
no code implementations • ICML 2018 • Sungsoo Ahn, Michael Chertkov, Adrian Weller, Jinwoo Shin
Probabilistic graphical models are a key tool in machine learning applications.
1 code implementation • NeurIPS 2018 • Insu Han, Haim Avron, Jinwoo Shin
A large class of machine learning techniques requires the solution of optimization problems involving spectral functions of parametric matrices, e.g., the log-determinant and the nuclear norm.
no code implementations • 5 Jan 2018 • Sungsoo Ahn, Michael Chertkov, Jinwoo Shin, Adrian Weller
Recently, so-called gauge transformations were used to improve variational lower bounds on $Z$.
3 code implementations • ICLR 2018 • Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin
The problem of detecting whether a test sample comes from the in-distribution (i.e., the training distribution of a classifier) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications.
2 code implementations • ICML 2017 • Kimin Lee, Changho Hwang, KyoungSoo Park, Jinwoo Shin
Ensemble methods are arguably the most trustworthy techniques for boosting the performance of machine learning models.
no code implementations • 11 Apr 2017 • Kimin Lee, Jaehyung Kim, Song Chong, Jinwoo Shin
In this paper, we aim at developing efficient training methods for SFNN, in particular using known architectures and pre-trained parameters of DNN.
no code implementations • 6 Apr 2017 • Sejun Park, Yunhun Jang, Andreas Galanis, Jinwoo Shin, Daniel Stefankovic, Eric Vigoda
The Gibbs sampler is a particularly popular Markov chain used for learning and inference problems in Graphical Models (GMs).
no code implementations • 12 Mar 2017 • Sejun Park, Eunho Yang, Jinwoo Shin
Learning the parameters of latent graphical models (GMs) is inherently much harder than learning those of fully observed ones, since the latent variables make the corresponding log-likelihood non-concave.
1 code implementation • ICML 2017 • Insu Han, Prabhanjan Kambadur, KyoungSoo Park, Jinwoo Shin
Determinantal point processes (DPPs) are popular probabilistic models that arise in many machine learning tasks, where distributions of diverse sets are characterized by matrix determinants.
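The determinantal structure mentioned here is concrete: in an L-ensemble DPP, the probability of a subset S is proportional to the determinant of the corresponding principal submatrix, with normalizer det(L + I). A minimal sketch of that definition (not the paper's fast summarization algorithm):

```python
import numpy as np
from itertools import combinations

def dpp_probability(L, subset):
    """P(S) = det(L_S) / det(L + I) for an L-ensemble DPP with PSD kernel L."""
    n = L.shape[0]
    idx = list(subset)
    # det of the empty principal submatrix is 1 by convention.
    sub_det = np.linalg.det(L[np.ix_(idx, idx)]) if idx else 1.0
    return sub_det / np.linalg.det(L + np.eye(n))
```

The identity sum_S det(L_S) = det(L + I) guarantees these probabilities sum to one over all subsets, which is easy to verify by enumeration on a small kernel.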
no code implementations • NeurIPS 2017 • Sungsoo Ahn, Michael Chertkov, Jinwoo Shin
Computing partition function is the most important statistical inference task arising in applications of Graphical Models (GM).
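To make the task concrete: for a pairwise binary (Ising) GM, the partition function is a sum of exponentially many terms, which is why exact computation is intractable and approximation schemes like the one in this paper matter. A brute-force sketch for tiny models (illustrative only; names are ours):

```python
import itertools
import math

def ising_partition(J, h):
    """Exact partition function Z = sum over spin configurations of
    exp(sum_ij J_ij s_i s_j + sum_i h_i s_i), spins s_i in {-1, +1}.
    J: dict {(i, j): coupling}, h: list of fields. Cost is O(2^n) -- only
    feasible for very small n, which is the point."""
    n = len(h)
    Z = 0.0
    for spins in itertools.product([-1, 1], repeat=n):
        energy = sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
        energy += sum(h[i] * spins[i] for i in range(n))
        Z += math.exp(energy)
    return Z
```

For two independent spins (no couplings, no fields) every configuration has weight 1, so Z = 4; with a single coupling J = 1, Z = 2(e + e^-1).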
1 code implementation • 3 Mar 2017 • Jung-hun Kim, Se-Young Yun, Minchan Jeong, Jun Hyun Nam, Jinwoo Shin, Richard Combes
This implies that classical approaches cannot guarantee a non-trivial regret bound.
no code implementations • 28 Feb 2017 • Jungseul Ok, Sewoong Oh, Yunhun Jang, Jinwoo Shin, Yung Yi
Crowdsourcing platforms have emerged as popular venues for purchasing human intelligence at low cost for large volumes of tasks.
no code implementations • NeurIPS 2016 • Sung-Soo Ahn, Michael Chertkov, Jinwoo Shin
In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC.
1 code implementation • 3 Jun 2016 • Insu Han, Dmitry Malioutov, Haim Avron, Jinwoo Shin
Computation of the trace of a matrix function plays an important role in many scientific computing applications, including machine learning, computational physics (e.g., lattice quantum chromodynamics), network analysis, and computational biology (e.g., protein folding).
no code implementations • 29 May 2016 • Sungsoo Ahn, Michael Chertkov, Jinwoo Shin
Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series.
no code implementations • 26 May 2016 • Hyeryung Jang, Hyungwon Choi, Yung Yi, Jinwoo Shin
This paper studies the problem of parameter learning in probabilistic graphical models with latent variables, where the standard approach is the expectation-maximization (EM) algorithm, which alternates expectation (E) and maximization (M) steps.
no code implementations • 11 Feb 2016 • Jungseul Ok, Sewoong Oh, Jinwoo Shin, Yung Yi
Crowdsourcing systems are popular for solving large-scale labelling tasks with low-paid workers.
no code implementations • NeurIPS 2015 • Sungsoo Ahn, Sejun Park, Michael Chertkov, Jinwoo Shin
Max-product Belief Propagation (BP) is a popular message-passing algorithm for computing a Maximum-A-Posteriori (MAP) assignment over a distribution represented by a Graphical Model (GM).
1 code implementation • 22 Mar 2015 • Insu Han, Dmitry Malioutov, Jinwoo Shin
Logarithms of determinants of large positive definite matrices appear ubiquitously in machine learning applications including Gaussian graphical and Gaussian process models, partition functions of discrete graphical models, minimum-volume ellipsoids, metric learning and kernel learning.
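The paper's approach combines stochastic trace estimation with Chebyshev expansions of the logarithm. The sketch below conveys the same idea with simpler machinery: Hutchinson probes plus a truncated Taylor series of log, valid after scaling the matrix so its eigenvalues lie in (0, 1]. This is our simplification for illustration, not the paper's Chebyshev method, and it degrades for ill-conditioned matrices.

```python
import numpy as np

def logdet_hutchinson(A, num_probes=200, num_terms=40, seed=0):
    """Estimate log det(A) = tr(log A) for a symmetric positive definite A,
    using Hutchinson's trace estimator and the series
    log(B) = -sum_{k>=1} (I - B)^k / k for B = A / ||A||_2.
    Only matrix-vector products with A are needed."""
    n = A.shape[0]
    scale = np.linalg.norm(A, 2)   # upper bound on the largest eigenvalue
    B = A / scale                  # eigenvalues of B now lie in (0, 1]
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        w = v.copy()
        acc = 0.0
        for k in range(1, num_terms + 1):
            w = w - B @ w                      # w = (I - B)^k v via matvecs
            acc -= (v @ w) / k                 # accumulate v^T log(B) v
        total += acc
    # log det(A) = tr(log B) + n * log(scale)
    return total / num_probes + n * np.log(scale)
```

Against `np.linalg.slogdet` on a well-conditioned test matrix, the estimate is accurate to a few percent with a couple hundred probes; the number of probes controls the variance and the number of series terms controls the truncation bias.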
no code implementations • 5 Mar 2015 • Sanghyuk Chun, Yung-Kyun Noh, Jinwoo Shin
Subspace clustering (SC) is a popular method for dimensionality reduction of high-dimensional data, where it generalizes Principal Component Analysis (PCA).
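For reference, the PCA baseline that subspace clustering generalizes fits a single low-dimensional subspace to all the data; SC instead fits a union of subspaces. A minimal PCA sketch via SVD (standard technique, names ours):

```python
import numpy as np

def pca(X, k):
    """Project the rows of X onto the top-k principal components.
    X: (n_samples, n_features). Returns the (n_samples, k) projections."""
    Xc = X - X.mean(axis=0)                      # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # coordinates in the PC basis
```

When the centered data are exactly rank-1, a single principal component captures all of the variance, which makes for an easy sanity check.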
no code implementations • 16 Dec 2014 • Sejun Park, Jinwoo Shin
The max-product belief propagation (BP) is a popular message-passing heuristic for approximating a maximum-a-posteriori (MAP) assignment in a joint distribution represented by a graphical model (GM).
no code implementations • NeurIPS 2013 • Jinwoo Shin, Andrew E. Gelfand, Misha Chertkov
It was recently shown that BP converges to the correct MAP assignment for a class of loopy GMs with the following common feature: the Linear Programming (LP) relaxation to the MAP problem is tight (has no integrality gap).
no code implementations • 5 Jun 2013 • Michael Chertkov, Andrew Gelfand, Jinwoo Shin
This manuscript discusses computation of the Partition Function (PF) and the Minimum Weight Perfect Matching (MWPM) on arbitrary, non-bipartite graphs.
no code implementations • 17 May 2013 • Andrew Gelfand, Jinwoo Shin, Michael Chertkov
For this class of problems, MAP inference can be stated as an integer LP with an LP relaxation that coincides with minimization of the BFE at "zero temperature".