Search Results for author: Jinwoo Shin

Found 144 papers, 81 papers with code

Real-World Efficient Blind Motion Deblurring via Blur Pixel Discretization

no code implementations • 18 Apr 2024 • Insoo Kim, Jae Seok Choi, Geonseok Seo, Kinam Kwon, Jinwoo Shin, Hyong-Euk Lee

As recent advances in mobile camera technology have enabled the capability to capture high-resolution images, such as 4K images, the demand for an efficient deblurring model handling large motion has increased.

4k Deblurring +1

SuRe: Summarizing Retrievals using Answer Candidates for Open-domain QA of LLMs

no code implementations • 17 Apr 2024 • Jaehyung Kim, Jaehyun Nam, Sangwoo Mo, Jongjin Park, Sang-Woo Lee, Minjoon Seo, Jung-Woo Ha, Jinwoo Shin

While incorporating new information with the retrieval of relevant passages is a promising way to improve QA with LLMs, the existing methods often require additional fine-tuning which becomes infeasible with recent LLMs.

Question Answering Retrieval

Confidence-aware Reward Optimization for Fine-tuning Text-to-Image Models

1 code implementation • 2 Apr 2024 • KyuYoung Kim, Jongheon Jeong, Minyong An, Mohammad Ghavamzadeh, Krishnamurthy Dvijotham, Jinwoo Shin, Kimin Lee

To investigate this issue in depth, we introduce the Text-Image Alignment Assessment (TIA2) benchmark, which comprises a diverse collection of text prompts, images, and human annotations.

DreamFlow: High-Quality Text-to-3D Generation by Approximating Probability Flow

no code implementations • 22 Mar 2024 • Kyungmin Lee, Kihyuk Sohn, Jinwoo Shin

Recent progress in text-to-3D generation has been achieved through score distillation methods: they make use of pre-trained text-to-image (T2I) diffusion models by distilling via the diffusion model training objective.

3D Generation Image-to-Image Translation +1

Efficient Video Diffusion Models via Content-Frame Motion-Latent Decomposition

no code implementations • 21 Mar 2024 • Sihyun Yu, Weili Nie, De-An Huang, Boyi Li, Jinwoo Shin, Anima Anandkumar

To tackle this issue, we propose content-motion latent diffusion model (CMD), a novel efficient extension of pretrained image diffusion models for video generation.

Video Generation

Improving Diffusion Models for Virtual Try-on

1 code implementation • 8 Mar 2024 • Yisol Choi, Sangkyung Kwak, Kyungmin Lee, Hyungwon Choi, Jinwoo Shin

Finally, we present a customization method using a pair of person-garment images, which significantly improves fidelity and authenticity.

Virtual Try-on

Online Adaptation of Language Models with a Memory of Amortized Contexts

1 code implementation • 7 Mar 2024 • Jihoon Tack, Jaehyung Kim, Eric Mitchell, Jinwoo Shin, Yee Whye Teh, Jonathan Richard Schwarz

We propose an amortized feature extraction and memory-augmentation approach to compress and extract information from new documents into compact modulations stored in a memory bank.

Language Modelling Meta-Learning

Direct Consistency Optimization for Compositional Text-to-Image Personalization

no code implementations • 19 Feb 2024 • Kyungmin Lee, Sangkyung Kwak, Kihyuk Sohn, Jinwoo Shin

In particular, our method results in a superior Pareto frontier to the baselines.

Querying Easily Flip-flopped Samples for Deep Active Learning

no code implementations • 18 Jan 2024 • Seong Jin Cho, Gwangsu Kim, Junghyun Lee, Jinwoo Shin, Chang D. Yoo

Active learning is a machine learning paradigm that aims to improve the performance of a model by strategically selecting and querying unlabeled data.

Active Learning

NeFL: Nested Federated Learning for Heterogeneous Clients

no code implementations • 15 Aug 2023 • Honggu Kang, Seohyeon Cha, Jinwoo Shin, Jongmyeong Lee, Joonhyuk Kang

Previous studies tackle system heterogeneity by splitting a model into submodels, but with fewer degrees of freedom in terms of model architecture.

Federated Learning

Towards Safe Self-Distillation of Internet-Scale Text-to-Image Diffusion Models

1 code implementation • 12 Jul 2023 • Sanghyun Kim, Seohyeon Jung, Balhae Kim, Moonseok Choi, Jinwoo Shin, Juho Lee

Large-scale image generation models, with impressive quality made possible by the vast amount of data available on the Internet, raise social concerns that these models may generate harmful or copyrighted content.

Image Generation

Collaborative Score Distillation for Consistent Visual Synthesis

no code implementations • 4 Jul 2023 • Subin Kim, Kyungmin Lee, June Suk Choi, Jongheon Jeong, Kihyuk Sohn, Jinwoo Shin

Generative priors of large-scale text-to-image diffusion models enable a wide range of new generation and editing applications on diverse visual modalities.

Prefer to Classify: Improving Text Classifiers via Auxiliary Preference Learning

1 code implementation • 8 Jun 2023 • Jaehyung Kim, Jinwoo Shin, Dongyeop Kang

In this paper, we investigate task-specific preferences between pairs of input texts as a new alternative way for such auxiliary data annotation.

Multi-Task Learning

S-CLIP: Semi-supervised Vision-Language Learning using Few Specialist Captions

1 code implementation NeurIPS 2023 Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jinwoo Shin

By combining these objectives, S-CLIP significantly enhances the training of CLIP using only a few image-text pairs, as demonstrated in various specialist domains, including remote sensing, fashion, scientific figures, and comics.

Contrastive Learning Partial Label Learning +3

IFSeg: Image-free Semantic Segmentation via Vision-Language Model

1 code implementation CVPR 2023 Sukmin Yun, Seong Hyeon Park, Paul Hongsuck Seo, Jinwoo Shin

In this paper, we introduce a novel image-free segmentation task where the goal is to perform semantic segmentation given only a set of the target semantic categories, but without any task-specific images and annotations.

Image Segmentation Language Modelling +3

Enhancing Multiple Reliability Measures via Nuisance-extended Information Bottleneck

1 code implementation CVPR 2023 Jongheon Jeong, Sihyun Yu, Hankook Lee, Jinwoo Shin

In practical scenarios where training data is limited, many predictive signals in the data can rather stem from biases in data acquisition (i.e., be less generalizable), so one cannot prevent a model from co-adapting to such (so-called) "shortcut" signals: this makes the model fragile under various distribution shifts.

Adversarial Robustness Novelty Detection

Imitating Graph-Based Planning with Goal-Conditioned Policies

1 code implementation • 20 Mar 2023 • Junsu Kim, Younggyo Seo, Sungsoo Ahn, Kyunghwan Son, Jinwoo Shin

Recently, graph-based planning algorithms have gained much attention for solving goal-conditioned reinforcement learning (RL) tasks: they provide a sequence of subgoals to reach the target goal, and the agents learn to execute subgoal-conditioned policies.

Reinforcement Learning (RL)

Guiding Energy-based Models via Contrastive Latent Variables

1 code implementation • 6 Mar 2023 • Hankook Lee, Jongheon Jeong, Sejun Park, Jinwoo Shin

To enable the joint training of EBM and CRL, we also design a new class of latent-variable EBMs for learning the joint density of data and the contrastive latent variable.

Representation Learning

Preference Transformer: Modeling Human Preferences using Transformers for RL

1 code implementation • 2 Mar 2023 • Changyeon Kim, Jongjin Park, Jinwoo Shin, Honglak Lee, Pieter Abbeel, Kimin Lee

In this paper, we present Preference Transformer, a neural architecture that models human preferences using transformers.

Decision Making Reinforcement Learning (RL)

STUNT: Few-shot Tabular Learning with Self-generated Tasks from Unlabeled Tables

1 code implementation • 2 Mar 2023 • Jaehyun Nam, Jihoon Tack, Kyungmin Lee, Hankook Lee, Jinwoo Shin

Learning with few labeled tabular samples is often an essential requirement for industrial machine learning applications, as many varieties of tabular data suffer from high annotation costs or difficulties in collecting new samples for novel tasks.

Few-Shot Learning

Video Probabilistic Diffusion Models in Projected Latent Space

1 code implementation CVPR 2023 Sihyun Yu, Kihyuk Sohn, Subin Kim, Jinwoo Shin

Specifically, PVDM is composed of two components: (a) an autoencoder that projects a given video as 2D-shaped latent vectors that factorize the complex cubic structure of video pixels and (b) a diffusion model architecture specialized for our new factorized latent space and the training/sampling procedure to synthesize videos of arbitrary length with a single model.

Video Generation

Multi-View Masked World Models for Visual Robotic Manipulation

1 code implementation • 5 Feb 2023 • Younggyo Seo, Junsu Kim, Stephen James, Kimin Lee, Jinwoo Shin, Pieter Abbeel

In this paper, we investigate how to learn good representations with multi-view data and utilize them for visual robotic manipulation.

Camera Calibration Representation Learning

Discovering and Mitigating Visual Biases through Keyword Explanation

1 code implementation • 26 Jan 2023 • Younghyun Kim, Sangwoo Mo, Minkyu Kim, Kyungmin Lee, Jaeho Lee, Jinwoo Shin

The keyword explanation form of visual bias offers several advantages, such as a clear group naming for bias discovery and a natural extension for debiasing using these group names.

Image Classification Image Generation

Modality-Agnostic Variational Compression of Implicit Neural Representations

no code implementations • 23 Jan 2023 • Jonathan Richard Schwarz, Jihoon Tack, Yee Whye Teh, Jaeho Lee, Jinwoo Shin

We introduce a modality-agnostic neural compression algorithm based on a functional view of data and parameterised as an Implicit Neural Representation (INR).

Data Compression

BiasAdv: Bias-Adversarial Augmentation for Model Debiasing

no code implementations CVPR 2023 Jongin Lim, Youngdong Kim, Byungjai Kim, Chanho Ahn, Jinwoo Shin, Eunho Yang, Seungju Han

Our key idea is that an adversarial attack on a biased model that makes decisions based on spurious correlations may generate synthetic bias-conflicting samples, which can then be used as augmented training data for learning a debiased model.

Adversarial Attack Data Augmentation

Confidence-aware Training of Smoothed Classifiers for Certified Robustness

1 code implementation • 18 Dec 2022 • Jongheon Jeong, Seojin Kim, Jinwoo Shin

Under smoothed classifiers, the fundamental trade-off between accuracy and (adversarial) robustness has been well evidenced in the literature: increasing the robustness of a classifier for one input can come at the expense of decreased accuracy on other inputs.

Adversarial Robustness

OAMixer: Object-aware Mixing Layer for Vision Transformers

2 code implementations • 13 Dec 2022 • Hyunwoo Kang, Sangwoo Mo, Jinwoo Shin

Using the object labels, OAMixer computes a reweighting mask with a learnable scale parameter that intensifies the interaction of patches containing similar objects and applies the mask to the patch mixing layers.

Inductive Bias Object +2

Breaking the Spurious Causality of Conditional Generation via Fairness Intervention with Corrective Sampling

no code implementations • 5 Dec 2022 • Junhyun Nam, Sangwoo Mo, Jaeho Lee, Jinwoo Shin

(a) Fairness Intervention (FI): emphasize the minority samples that are hard to generate due to the spurious correlation in the training dataset.

Attribute Fairness

Scalable Neural Video Representations with Learnable Positional Features

1 code implementation • 13 Oct 2022 • Subin Kim, Sihyun Yu, Jaeho Lee, Jinwoo Shin

Succinct representation of complex signals using coordinate-based neural representations (CNRs) has seen great progress, and several recent efforts focus on extending them for handling videos.

Video Compression Video Frame Interpolation +2

Meta-Learning with Self-Improving Momentum Target

1 code implementation • 11 Oct 2022 • Jihoon Tack, Jongjin Park, Hankook Lee, Jaeho Lee, Jinwoo Shin

The idea of using a separately trained target model (or teacher) to improve the performance of the student model has been increasingly popular in various machine learning domains, and meta-learning is no exception; a recent discovery shows that utilizing task-wise target models can significantly boost the generalization performance.

Knowledge Distillation Meta-Learning +1

String-based Molecule Generation via Multi-decoder VAE

no code implementations • 23 Aug 2022 • Kisoo Kwon, Kuhwan Jung, Junghyun Park, Hwidong Na, Jinwoo Shin

In this paper, we investigate the problem of string-based molecular generation via variational autoencoders (VAEs), which have served as a popular generative approach for various tasks in artificial intelligence.

RenyiCL: Contrastive Representation Learning with Skew Renyi Divergence

1 code implementation • 12 Aug 2022 • Kyungmin Lee, Jinwoo Shin

Here, the quality of learned representations is sensitive to the choice of data augmentation: as harder data augmentations are applied, the views share more task-relevant information, but also more task-irrelevant information that can hinder the generalization capability of the representation.

Contrastive Learning Data Augmentation +1

NOTE: Robust Continual Test-time Adaptation Against Temporal Correlation

1 code implementation • 10 Aug 2022 • Taesik Gong, Jongheon Jeong, Taewon Kim, Yewon Kim, Jinwoo Shin, Sung-Ju Lee

Test-time adaptation (TTA) is an emerging paradigm that addresses distributional shifts between training and testing phases without additional data acquisition or labeling cost; only unlabeled test data streams are used for continual model adaptation.

Autonomous Driving Test-time Adaptation

TSPipe: Learn from Teacher Faster with Pipelines

1 code implementation ICML 2022 Hwijoon Lim, Yechan Kim, Sukmin Yun, Jinwoo Shin, Dongsu Han

The teacher-student (TS) framework, which trains a (student) network by utilizing an auxiliary superior (teacher) network, has been adopted as a popular training paradigm in many machine learning schemes since the seminal work on knowledge distillation (KD) for model compression and transfer learning.

Knowledge Distillation Self-Supervised Learning +1

Patch-level Representation Learning for Self-supervised Vision Transformers

1 code implementation CVPR 2022 Sukmin Yun, Hankook Lee, Jaehyung Kim, Jinwoo Shin

Despite its simplicity, we demonstrate that it can significantly improve the performance of existing SSL methods for various visual tasks, including object detection and semantic segmentation.

Instance Segmentation object-detection +5

Zero-shot Blind Image Denoising via Implicit Neural Representations

no code implementations • 5 Apr 2022 • Chaewon Kim, Jaeho Lee, Jinwoo Shin

Recent denoising algorithms based on the "blind-spot" strategy show impressive blind image denoising performance without utilizing any external dataset.

Image Denoising Inductive Bias

Spread Spurious Attribute: Improving Worst-group Accuracy with Spurious Attribute Estimation

no code implementations ICLR 2022 Junhyun Nam, Jaehyung Kim, Jaeho Lee, Jinwoo Shin

The paradigm of worst-group loss minimization has shown promise in avoiding learning spurious correlations, but it requires costly additional supervision on spurious attributes.

Attribute

Generating Videos with Dynamics-aware Implicit Generative Adversarial Networks

1 code implementation ICLR 2022 Sihyun Yu, Jihoon Tack, Sangwoo Mo, Hyunsu Kim, Junho Kim, Jung-Woo Ha, Jinwoo Shin

In this paper, we find that the recently emerging paradigm of implicit neural representations (INRs), which encode a continuous signal into a parameterized neural network, effectively mitigates this issue.

Generative Adversarial Network Video Generation

Self-Supervised Dense Consistency Regularization for Image-to-Image Translation

no code implementations CVPR 2022 Minsu Ko, Eunju Cha, Sungjoo Suh, Huijin Lee, Jae-Joon Han, Jinwoo Shin, Bohyung Han

Unsupervised image-to-image translation has gained considerable attention due to the recent impressive progress based on generative adversarial networks (GANs).

Translation Unsupervised Image-To-Image Translation

Contrastive Dual Gating: Learning Sparse Features With Contrastive Learning

no code implementations CVPR 2022 Jian Meng, Li Yang, Jinwoo Shin, Deliang Fan, Jae-sun Seo

Contrastive learning (or its variants) has recently become a promising direction in the self-supervised learning domain, achieving performance similar to supervised learning with minimal fine-tuning.

Contrastive Learning Self-Supervised Learning

Saliency Grafting: Innocuous Attribution-Guided Mixup with Calibrated Label Mixing

no code implementations • 16 Dec 2021 • Joonhyung Park, June Yong Yang, Jinwoo Shin, Sung Ju Hwang, Eunho Yang

However, they suffer from a lack of sample diversity, as they always deterministically select regions with maximum saliency, injecting bias into the augmented data.

DAPPER: Label-Free Performance Estimation after Personalization for Heterogeneous Mobile Sensing

no code implementations • 22 Nov 2021 • Taesik Gong, Yewon Kim, Adiba Orzikulova, Yunxin Liu, Sung Ju Hwang, Jinwoo Shin, Sung-Ju Lee

However, various factors such as different users, devices, and environments impact the performance of such applications, thus making the domain shift (i.e., distributional shift between the training domain and the target domain) a critical issue in mobile sensing.

Domain Adaptation

Improving Transferability of Representations via Augmentation-Aware Self-Supervision

2 code implementations NeurIPS 2021 Hankook Lee, Kibok Lee, Kimin Lee, Honglak Lee, Jinwoo Shin

Recent unsupervised representation learning methods have been shown to be effective in a range of vision tasks by learning representations invariant to data augmentations such as random cropping and color jittering.

Representation Learning Transfer Learning

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Robustness

1 code implementation NeurIPS 2021 Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin

Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.

RoMA: Robust Model Adaptation for Offline Model-based Optimization

no code implementations NeurIPS 2021 Sihyun Yu, Sungsoo Ahn, Le Song, Jinwoo Shin

We consider the problem of searching an input maximizing a black-box objective function given a static dataset of input-output queries.

Meta-Learning Sparse Implicit Neural Representations

1 code implementation NeurIPS 2021 Jaeho Lee, Jihoon Tack, Namhoon Lee, Jinwoo Shin

Implicit neural representations are a promising new avenue for representing general signals by learning a continuous function that, parameterized as a neural network, maps the domain of a signal to its codomain; for example, the mapping from the spatial coordinates of an image to its pixel values.

Meta-Learning

Landmark-Guided Subgoal Generation in Hierarchical Reinforcement Learning

1 code implementation NeurIPS 2021 Junsu Kim, Younggyo Seo, Jinwoo Shin

In this paper, we present HIerarchical reinforcement learning Guided by Landmarks (HIGL), a novel framework for training a high-level policy with a reduced action space guided by landmarks, i.e., promising states to explore.

Efficient Exploration Hierarchical Reinforcement Learning +2

PASS: Patch-Aware Self-Supervision for Vision Transformer

no code implementations • 29 Sep 2021 • Sukmin Yun, Hankook Lee, Jaehyung Kim, Jinwoo Shin

This paper aims to improve their performance further by utilizing the architectural advantages of the underlying neural network, as the current state-of-the-art visual pretext tasks for self-supervised learning do not enjoy this benefit, i.e., they are architecture-agnostic.

object-detection Object Detection +3

What Makes Better Augmentation Strategies? Augment Difficult but Not too Different

no code implementations ICLR 2022 Jaehyung Kim, Dongyeop Kang, Sungsoo Ahn, Jinwoo Shin

Remarkably, our method is more effective on the challenging low-data and class-imbalanced regimes, and the learned augmentation policy is well-transferable to the different tasks and models.

Data Augmentation Semantic Similarity +3

Model-augmented Prioritized Experience Replay

no code implementations ICLR 2022 Youngmin Oh, Jinwoo Shin, Eunho Yang, Sung Ju Hwang

Experience replay is an essential component in off-policy model-free reinforcement learning (MfRL).

Disentangling Sources of Risk for Distributional Multi-Agent Reinforcement Learning

no code implementations • 29 Sep 2021 • Kyunghwan Son, Junsu Kim, Yung Yi, Jinwoo Shin

Although these two sources are both important factors for learning robust policies of agents, prior works do not separate them or deal with only a single risk source, which could lead to suboptimal equilibria.

Multi-agent Reinforcement Learning reinforcement-learning +3

Object-aware Contrastive Learning for Debiased Scene Representation

1 code implementation NeurIPS 2021 Sangwoo Mo, Hyunwoo Kang, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin

Contrastive self-supervised learning has shown impressive results in learning visual representations from unlabeled images by enforcing invariance against different data augmentations.

Contrastive Learning Object +2

Abstract Reasoning via Logic-guided Generation

no code implementations • 22 Jul 2021 • Sihyun Yu, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin

Abstract reasoning, i.e., inferring complicated patterns from given observations, is a central building block of artificial general intelligence.

Offline-to-Online Reinforcement Learning via Balanced Replay and Pessimistic Q-Ensemble

1 code implementation • 1 Jul 2021 • SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin

Recent advances in deep offline reinforcement learning (RL) have made it possible to train strong robotic agents from offline datasets.

Offline RL reinforcement-learning +1

OpenCoS: Contrastive Semi-supervised Learning for Handling Open-set Unlabeled Data

1 code implementation • 29 Jun 2021 • Jongjin Park, Sukmin Yun, Jongheon Jeong, Jinwoo Shin

Semi-supervised learning (SSL) has been a powerful strategy for incorporating a few labels to learn better representations.

Contrastive Learning Representation Learning

Co$^2$L: Contrastive Continual Learning

2 code implementations • 28 Jun 2021 • Hyuntak Cha, Jaeho Lee, Jinwoo Shin

Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks than joint-training methods relying on task-specific supervision.

Continual Learning Contrastive Learning +2

Quality-Agnostic Image Recognition via Invertible Decoder

1 code implementation CVPR 2021 Insoo Kim, Seungju Han, Ji-won Baek, Seong-Jin Park, Jae-Joon Han, Jinwoo Shin

Our two-stage scheme allows the network to produce clean-like and robust features from any quality images, by reconstructing their clean images via the invertible decoder.

Data Augmentation Domain Generalization +2

SmoothMix: Training Confidence-calibrated Smoothed Classifiers for Certified Adversarial Robustness

no code implementations ICML Workshop AML 2021 Jongheon Jeong, Sejun Park, Minkyu Kim, Heung-Chang Lee, DoGuk Kim, Jinwoo Shin

Randomized smoothing is currently a state-of-the-art method to construct a certifiably robust classifier from neural networks against $\ell_2$-adversarial perturbations.

Adversarial Robustness

Entropy Weighted Adversarial Training

no code implementations ICML Workshop AML 2021 Minseon Kim, Jihoon Tack, Jinwoo Shin, Sung Ju Hwang

Adversarial training methods, which minimize the loss of adversarially perturbed training examples, have been extensively studied as a solution for improving the robustness of deep neural networks.

Scaling Neural Tangent Kernels via Sketching and Random Features

1 code implementation NeurIPS 2021 Amir Zandieh, Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin

To accelerate learning with NTK, we design a near input-sparsity time approximation algorithm for NTK, by sketching the polynomial expansions of arc-cosine kernels: our sketch for the convolutional counterpart of NTK (CNTK) can transform any image using a linear runtime in the number of pixels.

regression

Self-Improved Retrosynthetic Planning

1 code implementation • 9 Jun 2021 • Junsu Kim, Sungsoo Ahn, Hankook Lee, Jinwoo Shin

Our main idea is based on a self-improving procedure that trains the model to imitate successful trajectories found by itself.

Multi-step retrosynthesis valid

RetCL: A Selection-based Approach for Retrosynthesis via Contrastive Learning

no code implementations • 3 May 2021 • Hankook Lee, Sungsoo Ahn, Seung-Woo Seo, You Young Song, Eunho Yang, Sung-Ju Hwang, Jinwoo Shin

Retrosynthesis, whose goal is to find a set of reactants for synthesizing a target product, is an emerging research area of deep learning.

Contrastive Learning Retrosynthesis

Consistency and Monotonicity Regularization for Neural Knowledge Tracing

no code implementations • 3 May 2021 • Seewoo Lee, Youngduck Choi, Juneyoung Park, Byungsoo Kim, Jinwoo Shin

Knowledge Tracing (KT), tracking a human's knowledge acquisition, is a central component in online learning and AI in Education.

Data Augmentation Knowledge Tracing

Random Features for the Neural Tangent Kernel

no code implementations • 3 Apr 2021 • Insu Han, Haim Avron, Neta Shoham, Chaewon Kim, Jinwoo Shin

We combine random features of the arc-cosine kernels with a sketching-based algorithm that runs in time linear in both the number of data points and the input dimension.

Training GANs with Stronger Augmentations via Contrastive Discriminator

1 code implementation ICLR 2021 Jongheon Jeong, Jinwoo Shin

Recent works in Generative Adversarial Networks (GANs) are actively revisiting various data augmentation techniques as an effective way to prevent discriminator overfitting.

Contrastive Learning Data Augmentation +1

Consistency Regularization for Adversarial Robustness

1 code implementation ICML Workshop AML 2021 Jihoon Tack, Sihyun Yu, Jongheon Jeong, Minseon Kim, Sung Ju Hwang, Jinwoo Shin

Adversarial training (AT) is currently one of the most successful methods to obtain the adversarial robustness of deep neural networks.

Adversarial Robustness Data Augmentation

Model-Augmented Q-learning

no code implementations • 7 Feb 2021 • Youngmin Oh, Jinwoo Shin, Eunho Yang, Sung Ju Hwang

We show that the proposed scheme, called Model-augmented $Q$-learning (MQL), obtains a policy-invariant solution which is identical to the solution obtained by learning with true reward.

Q-Learning

Addressing Distribution Shift in Online Reinforcement Learning with Offline Datasets

no code implementations • 1 Jan 2021 • SeungHyun Lee, Younggyo Seo, Kimin Lee, Pieter Abbeel, Jinwoo Shin

As it turns out, fine-tuning offline RL agents is a non-trivial challenge due to distribution shift: the agent encounters out-of-distribution samples during online interaction, which may cause bootstrapping error in Q-learning and instability during fine-tuning.

D4RL Offline RL +3

Co2L: Contrastive Continual Learning

1 code implementation ICCV 2021 Hyuntak Cha, Jaeho Lee, Jinwoo Shin

Recent breakthroughs in self-supervised learning show that such algorithms learn visual representations that can be transferred better to unseen tasks than cross-entropy based methods which rely on task-specific supervision.

Continual Learning Contrastive Learning +2

MASKER: Masked Keyword Regularization for Reliable Text Classification

1 code implementation • 17 Dec 2020 • Seung Jun Moon, Sangwoo Mo, Kimin Lee, Jaeho Lee, Jinwoo Shin

We claim that one central obstacle to the reliability is the over-reliance of the model on a limited number of keywords, instead of looking at the whole context.

Domain Generalization General Classification +6

Learning from Failure: De-biasing Classifier from Biased Classifier

no code implementations NeurIPS 2020 Junhyun Nam, Hyuntak Cha, Sung-Soo Ahn, Jaeho Lee, Jinwoo Shin

Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased.

Provable Memorization via Deep Neural Networks using Sub-linear Parameters

no code implementations • 26 Oct 2020 • Sejun Park, Jaeho Lee, Chulhee Yun, Jinwoo Shin

It is known that $O(N)$ parameters are sufficient for neural networks to memorize arbitrary $N$ input-label pairs.

Memorization

Layer-adaptive sparsity for the Magnitude-based Pruning

1 code implementation ICLR 2021 Jaeho Lee, Sejun Park, Sangwoo Mo, Sungsoo Ahn, Jinwoo Shin

Recent discoveries on neural network pruning reveal that, with a carefully chosen layerwise sparsity, a simple magnitude-based pruning achieves state-of-the-art tradeoff between sparsity and performance.

Image Classification Network Pruning

Few-shot Visual Reasoning with Meta-analogical Contrastive Learning

no code implementations NeurIPS 2020 Youngsung Kim, Jinwoo Shin, Eunho Yang, Sung Ju Hwang

While humans can solve a visual puzzle that requires logical reasoning by observing only a few samples, state-of-the-art deep reasoning models would require training over a large amount of data to obtain similar performance on the same task.

Contrastive Learning Logical Reasoning +1

Time-Reversal Symmetric ODE Network

1 code implementation NeurIPS 2020 In Huh, Eunho Yang, Sung Ju Hwang, Jinwoo Shin

Time-reversal symmetry, which requires that the dynamics of a system should not change with the reversal of time axis, is a fundamental property that frequently holds in classical and quantum mechanics.

Distribution Aligning Refinery of Pseudo-label for Imbalanced Semi-supervised Learning

1 code implementation NeurIPS 2020 Jaehyung Kim, Youngbum Hur, Sejun Park, Eunho Yang, Sung Ju Hwang, Jinwoo Shin

While semi-supervised learning (SSL) has proven to be a promising way for leveraging unlabeled data when labeled data is scarce, the existing SSL algorithms typically assume that training class distributions are balanced.

Pseudo Label

Learning to Sample with Local and Global Contexts in Experience Replay Buffer

no code implementations ICLR 2021 Youngmin Oh, Kimin Lee, Jinwoo Shin, Eunho Yang, Sung Ju Hwang

Experience replay, which enables the agents to remember and reuse experience from the past, has played a significant role in the success of off-policy reinforcement learning (RL).

Reinforcement Learning (RL)

Learning from Failure: Training Debiased Classifier from Biased Classifier

2 code implementations • 6 Jul 2020 • Junhyun Nam, Hyuntak Cha, Sungsoo Ahn, Jaeho Lee, Jinwoo Shin

Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased.

Action Recognition Facial Attribute Classification +1

Guiding Deep Molecular Optimization with Genetic Exploration

2 code implementations NeurIPS 2020 Sungsoo Ahn, Junsu Kim, Hankook Lee, Jinwoo Shin

De novo molecular design attempts to search over the chemical space for molecules with the desired property.

Imitation Learning

QTRAN++: Improved Value Transformation for Cooperative Multi-Agent Reinforcement Learning

no code implementations • 22 Jun 2020 • Kyunghwan Son, Sung-Soo Ahn, Roben Delos Reyes, Jinwoo Shin, Yung Yi

QTRAN is a multi-agent reinforcement learning (MARL) algorithm capable of learning the largest class of joint-action value functions to date.

reinforcement-learning Reinforcement Learning (RL) +2

Learning to Generate Noise for Multi-Attack Robustness

1 code implementation • 22 Jun 2020 • Divyam Madaan, Jinwoo Shin, Sung Ju Hwang

Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods against adversarial perturbations.

Meta-Learning

Learning What to Defer for Maximum Independent Sets

1 code implementation ICML 2020 Sungsoo Ahn, Younggyo Seo, Jinwoo Shin

Designing efficient algorithms for combinatorial optimization appears ubiquitously in various scientific fields.

Combinatorial Optimization

Minimum Width for Universal Approximation

no code implementations ICLR 2021 Sejun Park, Chulhee Yun, Jaeho Lee, Jinwoo Shin

In this work, we provide the first definitive result in this direction for networks using the ReLU activation function: the minimum width required for the universal approximation of the $L^p$ functions is exactly $\max\{d_x+1, d_y\}$.
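
The stated bound is easy to instantiate; a tiny helper (the function name is mine, not from the paper) makes the formula concrete:

```python
def min_universal_width(d_x: int, d_y: int) -> int:
    """Minimum width for ReLU networks to universally approximate
    L^p functions from R^{d_x} to R^{d_y}: max(d_x + 1, d_y)."""
    return max(d_x + 1, d_y)
```

For example, approximating maps from R^3 to R^2 requires width at least 4, while maps from R to R^5 require width 5.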

Learning Bounds for Risk-sensitive Learning

1 code implementation NeurIPS 2020 Jaeho Lee, Sejun Park, Jinwoo Shin

The second result, based on a novel variance-based characterization of OCE, gives an expected loss guarantee with a suppressed dependence on the smoothness of the selected OCE.

Consistency Regularization for Certified Robustness of Smoothed Classifiers

1 code implementation NeurIPS 2020 Jongheon Jeong, Jinwoo Shin

A recent technique of randomized smoothing has shown that the worst-case (adversarial) $\ell_2$-robustness can be transformed into the average-case Gaussian-robustness by "smoothing" a classifier, i.e., by considering the averaged prediction over Gaussian noise.

Adversarial Robustness
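
A minimal Monte Carlo sketch of the smoothed prediction described above (majority class under Gaussian input perturbations); actual robustness certification additionally requires statistical confidence bounds, which this omits:

```python
import numpy as np

def smoothed_predict(classifier, x, sigma, num_samples=1000, num_classes=10, rng=None):
    """Estimate the randomized-smoothing prediction: the class the base
    classifier outputs most often under Gaussian perturbations of x."""
    rng = np.random.default_rng(rng)
    counts = np.zeros(num_classes, dtype=int)
    for _ in range(num_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        counts[classifier(noisy)] += 1
    return int(np.argmax(counts))
```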

Context-aware Dynamics Model for Generalization in Model-Based Reinforcement Learning

2 code implementations ICML 2020 Kimin Lee, Younggyo Seo, Seung-Hyun Lee, Honglak Lee, Jinwoo Shin

Model-based reinforcement learning (RL) enjoys several benefits, such as data-efficiency and planning, by learning a model of the environment's dynamics.

Model-based Reinforcement Learning reinforcement-learning +1

M2m: Imbalanced Classification via Major-to-minor Translation

1 code implementation CVPR 2020 Jaehyung Kim, Jongheon Jeong, Jinwoo Shin

In most real-world scenarios, labeled training datasets are highly class-imbalanced, and deep neural networks trained on them struggle to generalize to a balanced testing criterion.

Classification General Classification +3

Freeze the Discriminator: a Simple Baseline for Fine-Tuning GANs

4 code implementations25 Feb 2020 Sangwoo Mo, Minsu Cho, Jinwoo Shin

Generative adversarial networks (GANs) have shown outstanding performance on a wide range of problems in computer vision, graphics, and machine learning, but often require numerous training data and heavy computational resources.

10-shot image generation Image Generation +1

Lookahead: a Far-Sighted Alternative of Magnitude-based Pruning

1 code implementation ICLR 2020 Sejun Park, Jaeho Lee, Sangwoo Mo, Jinwoo Shin

Magnitude-based pruning is one of the simplest methods for pruning neural networks.

Mining GOLD Samples for Conditional GANs

1 code implementation NeurIPS 2019 Sangwoo Mo, Chiheon Kim, Sungwoong Kim, Minsu Cho, Jinwoo Shin

Conditional generative adversarial networks (cGANs) have gained considerable attention in recent years due to their class-wise controllability and superior quality for complex generation tasks.

Active Learning

Self-supervised Label Augmentation via Input Transformations

1 code implementation ICML 2020 Hankook Lee, Sung Ju Hwang, Jinwoo Shin

Our main idea is to learn a single unified task with respect to the joint distribution of the original and self-supervised labels, i.e., we augment original labels via self-supervision of input transformation.

Data Augmentation imbalanced classification +2
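
The joint-label idea can be sketched in a few lines; assuming class labels y and self-supervised transformation indices t (e.g., one of four rotations), with per-class scores recovered by aggregating over transformations (function names are mine):

```python
import numpy as np

def joint_label(y, t, num_transforms):
    """Fold an original label y and a transformation label t into a
    single joint label over all (class, transformation) combinations."""
    return y * num_transforms + t

def aggregate_scores(joint_logits, num_classes, num_transforms):
    """Recover per-class scores by summing joint logits over the
    transformation axis."""
    return joint_logits.reshape(num_classes, num_transforms).sum(axis=1)
```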

Network Randomization: A Simple Technique for Generalization in Deep Reinforcement Learning

2 code implementations ICLR 2020 Kimin Lee, Kibok Lee, Jinwoo Shin, Honglak Lee

Deep reinforcement learning (RL) agents often fail to generalize to unseen environments (even ones semantically similar to their training environments), particularly when they are trained on high-dimensional state spaces, such as images.

Data Augmentation reinforcement-learning +1

Deep Auto-Deferring Policy for Combinatorial Optimization

no code implementations25 Sep 2019 Sungsoo Ahn, Younggyo Seo, Jinwoo Shin

Designing efficient algorithms for combinatorial optimization appears ubiquitously in various scientific fields.

Combinatorial Optimization Computational Efficiency

Adversarial Neural Pruning with Latent Vulnerability Suppression

1 code implementation ICML 2020 Divyam Madaan, Jinwoo Shin, Sung Ju Hwang

Despite the remarkable performance of deep neural networks on various computer vision tasks, they are known to be susceptible to adversarial perturbations, which makes it challenging to deploy them in real-world safety-critical applications.

Adversarial Robustness

Learning What and Where to Transfer

4 code implementations15 May 2019 Yunhun Jang, Hankook Lee, Sung Ju Hwang, Jinwoo Shin

To address the issue, we propose a novel transfer learning approach based on meta-learning that can automatically learn what knowledge to transfer from the source network to where in the target network.

Meta-Learning Small Data Image Classification +1

Spectral Approximate Inference

no code implementations14 May 2019 Sejun Park, Eunho Yang, Se-Young Yun, Jinwoo Shin

Our contribution is two-fold: (a) we first propose a fully polynomial-time approximation scheme (FPTAS) for approximating the partition function of a GM associated with a low-rank coupling matrix; (b) for general high-rank GMs, we design a spectral mean-field scheme utilizing (a) as a subroutine, which approximates a high-rank GM by a product of rank-1 GMs for an efficient approximation of the partition function.

Training CNNs with Selective Allocation of Channels

no code implementations11 May 2019 Jongheon Jeong, Jinwoo Shin

Recent progress in deep convolutional neural networks (CNNs) has enabled a simple paradigm of architecture design: larger models typically achieve better accuracy.

Selective Convolutional Units: Improving CNNs via Channel Selectivity

no code implementations ICLR 2019 Jongheon Jeong, Jinwoo Shin

Bottleneck structures with identity (e.g., residual) connections have emerged as a popular paradigm for designing deep convolutional neural networks (CNNs) that process large-scale features efficiently.

Model Compression

Robust Determinantal Generative Classifier for Noisy Labels and Adversarial Attacks

no code implementations ICLR 2019 Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin

For instance, on the CIFAR-10 dataset containing 45% noisy training labels, we improve the test accuracy of a deep model optimized by the state-of-the-art noise-handling training method from 33.34% to 43.02%.

Instance-aware Image-to-Image Translation

1 code implementation ICLR 2019 Sangwoo Mo, Minsu Cho, Jinwoo Shin

Unsupervised image-to-image translation has gained considerable attention due to the recent impressive progress based on generative adversarial networks (GANs).

Semantic Segmentation Translation +1

Overcoming Catastrophic Forgetting with Unlabeled Data in the Wild

1 code implementation ICCV 2019 Kibok Lee, Kimin Lee, Jinwoo Shin, Honglak Lee

Lifelong learning with deep neural networks is well-known to suffer from catastrophic forgetting: the performance on previous tasks drastically degrades when learning a new task.

Class Incremental Learning Incremental Learning

Robust Inference via Generative Classifiers for Handling Noisy Labels

1 code implementation31 Jan 2019 Kimin Lee, Sukmin Yun, Kibok Lee, Honglak Lee, Bo Li, Jinwoo Shin

Large-scale datasets may contain significant proportions of noisy (incorrect) class labels, and it is well-known that modern deep neural networks (DNNs) poorly generalize from such noisy training datasets.

InstaGAN: Instance-aware Image-to-Image Translation

1 code implementation28 Dec 2018 Sangwoo Mo, Minsu Cho, Jinwoo Shin

Our comparative evaluation demonstrates the effectiveness of the proposed method on different image datasets, in particular, in the aforementioned challenging cases.

Semantic Segmentation Translation +1

Learning to Specialize with Knowledge Distillation for Visual Question Answering

no code implementations NeurIPS 2018 Jonghwan Mun, Kimin Lee, Jinwoo Shin, Bohyung Han

The proposed framework is model-agnostic and applicable to any tasks other than VQA, e.g., image classification with a large number of labels but few per-class examples, which is known to be difficult under existing MCL schemes.

General Classification General Knowledge +5

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

4 code implementations NeurIPS 2018 Kimin Lee, Kibok Lee, Honglak Lee, Jinwoo Shin

Detecting test samples drawn sufficiently far away from the training distribution statistically or adversarially is a fundamental requirement for deploying a good classifier in many real-world machine learning applications.

Class Incremental Learning Incremental Learning +1

Anytime Neural Prediction via Slicing Networks Vertically

1 code implementation7 Jul 2018 Hankook Lee, Jinwoo Shin

This is remarkable due to their simplicity and effectiveness, but training many thin sub-networks jointly faces a new challenge on training complexity.

Hierarchical Novelty Detection for Visual Object Recognition

no code implementations CVPR 2018 Kibok Lee, Kimin Lee, Kyle Min, Yuting Zhang, Jinwoo Shin, Honglak Lee

The essential ingredients of our methods are confidence-calibrated classifiers, data relabeling, and the leave-one-out strategy for modeling novel classes under the hierarchical taxonomy.

Generalized Zero-Shot Learning Novelty Detection +2

Stochastic Chebyshev Gradient Descent for Spectral Optimization

1 code implementation NeurIPS 2018 Insu Han, Haim Avron, Jinwoo Shin

A large class of machine learning techniques requires the solution of optimization problems involving spectral functions of parametric matrices, e.g., log-determinant and nuclear norm.

Gauged Mini-Bucket Elimination for Approximate Inference

no code implementations5 Jan 2018 Sungsoo Ahn, Michael Chertkov, Jinwoo Shin, Adrian Weller

Recently, so-called gauge transformations were used to improve variational lower bounds on $Z$.

Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples

3 code implementations ICLR 2018 Kimin Lee, Honglak Lee, Kibok Lee, Jinwoo Shin

The problem of detecting whether a test sample is from in-distribution (i.e., the training distribution of a classifier) or out-of-distribution (i.e., sufficiently different from it) arises in many real-world machine learning applications.

Confident Multiple Choice Learning

2 code implementations ICML 2017 Kimin Lee, Changho Hwang, KyoungSoo Park, Jinwoo Shin

Ensemble methods are arguably the most trustworthy techniques for boosting the performance of machine learning models.

General Classification Image Classification +1

Simplified Stochastic Feedforward Neural Networks

no code implementations11 Apr 2017 Kimin Lee, Jaehyung Kim, Song Chong, Jinwoo Shin

In this paper, we aim at developing efficient training methods for SFNN, in particular using known architectures and pre-trained parameters of DNN.

Rapid Mixing Swendsen-Wang Sampler for Stochastic Partitioned Attractive Models

no code implementations6 Apr 2017 Sejun Park, Yunhun Jang, Andreas Galanis, Jinwoo Shin, Daniel Stefankovic, Eric Vigoda

The Gibbs sampler is a particularly popular Markov chain used for learning and inference problems in Graphical Models (GMs).
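
For reference, the single-site Gibbs sampler for a pairwise binary (Ising-type) model, against which Swendsen-Wang's cluster updates are compared, can be sketched as follows (a minimal illustration, not the paper's sampler; assumes a symmetric coupling matrix with zero diagonal):

```python
import numpy as np

def gibbs_ising(J, h, steps, rng=None):
    """Single-site Gibbs sampler for the Ising-type model
    p(x) ∝ exp(sum_{i<j} J_ij x_i x_j + sum_i h_i x_i), x_i in {-1,+1}."""
    rng = np.random.default_rng(rng)
    n = len(h)
    x = rng.choice([-1, 1], size=n)          # random initial configuration
    for _ in range(steps):
        for i in range(n):
            field = h[i] + J[i] @ x          # local field at site i
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[i] = 1 if rng.random() < p_plus else -1
    return x
```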

Sequential Local Learning for Latent Graphical Models

no code implementations12 Mar 2017 Sejun Park, Eunho Yang, Jinwoo Shin

Learning the parameters of latent graphical models (GMs) is inherently much harder than learning those of non-latent ones, since the latent variables make the corresponding log-likelihood non-concave.

Novel Concepts

Faster Greedy MAP Inference for Determinantal Point Processes

1 code implementation ICML 2017 Insu Han, Prabhanjan Kambadur, KyoungSoo Park, Jinwoo Shin

Determinantal point processes (DPPs) are popular probabilistic models that arise in many machine learning tasks, where distributions of diverse sets are characterized by matrix determinants.

Point Processes
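
The greedy baseline that the paper accelerates can be written naively as follows (O(k·n) determinant evaluations; the paper's contribution is making each greedy step cheap, which this sketch does not attempt):

```python
import numpy as np

def greedy_dpp_map(L, k):
    """Naive greedy MAP inference for a DPP with PSD kernel L: repeatedly
    add the item that maximizes the determinant of the selected submatrix."""
    n = L.shape[0]
    selected = []
    for _ in range(k):
        best_i, best_det = None, 0.0
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            d = np.linalg.det(L[np.ix_(idx, idx)])
            if d > best_det:
                best_i, best_det = i, d
        if best_i is None:        # no item keeps the determinant positive
            break
        selected.append(best_i)
    return selected
```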

Gauging Variational Inference

no code implementations NeurIPS 2017 Sungsoo Ahn, Michael Chertkov, Jinwoo Shin

Computing partition function is the most important statistical inference task arising in applications of Graphical Models (GM).

Variational Inference

Iterative Bayesian Learning for Crowdsourced Regression

no code implementations28 Feb 2017 Jungseul Ok, Sewoong Oh, Yunhun Jang, Jinwoo Shin, Yung Yi

Crowdsourcing platforms have emerged as popular venues for purchasing human intelligence at low cost for large volumes of tasks.

regression

Synthesis of MCMC and Belief Propagation

no code implementations NeurIPS 2016 Sung-Soo Ahn, Michael Chertkov, Jinwoo Shin

In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC.

Approximating the Spectral Sums of Large-scale Matrices using Chebyshev Approximations

1 code implementation3 Jun 2016 Insu Han, Dmitry Malioutov, Haim Avron, Jinwoo Shin

Computation of the trace of a matrix function plays an important role in many scientific computing applications, including applications in machine learning, computational physics (e.g., lattice quantum chromodynamics), network analysis and computational biology (e.g., protein folding), just to name a few application areas.

Data Structures and Algorithms

MCMC assisted by Belief Propagation

no code implementations29 May 2016 Sungsoo Ahn, Michael Chertkov, Jinwoo Shin

Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series.

Adiabatic Persistent Contrastive Divergence Learning

no code implementations26 May 2016 Hyeryung Jang, Hyungwon Choi, Yung Yi, Jinwoo Shin

This paper studies the problem of parameter learning in probabilistic graphical models having latent variables, where the standard approach is the expectation maximization algorithm alternating expectation (E) and maximization (M) steps.

Minimum Weight Perfect Matching via Blossom Belief Propagation

no code implementations NeurIPS 2015 Sungsoo Ahn, Sejun Park, Michael Chertkov, Jinwoo Shin

Max-product Belief Propagation (BP) is a popular message-passing algorithm for computing a Maximum-A-Posteriori (MAP) assignment over a distribution represented by a Graphical Model (GM).

Combinatorial Optimization

Large-scale Log-determinant Computation through Stochastic Chebyshev Expansions

1 code implementation22 Mar 2015 Insu Han, Dmitry Malioutov, Jinwoo Shin

Logarithms of determinants of large positive definite matrices appear ubiquitously in machine learning applications including Gaussian graphical and Gaussian process models, partition functions of discrete graphical models, minimum-volume ellipsoids, metric learning and kernel learning.

Metric Learning
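
The core technique, Hutchinson-style trace estimation combined with a Chebyshev expansion of the logarithm, can be sketched as follows (a dense NumPy illustration; the eigenvalue bounds here are computed exactly for simplicity, whereas a large-scale implementation would estimate them, e.g., via Lanczos):

```python
import numpy as np

def logdet_chebyshev(A, degree=30, num_probes=30, rng=None):
    """Estimate log det(A) for symmetric positive definite A as
    trace(log A), via Rademacher probes and a Chebyshev expansion."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    evals = np.linalg.eigvalsh(A)            # exact bounds, illustration only
    lo, hi = 0.9 * evals[0], 1.1 * evals[-1]
    # Chebyshev coefficients of t -> log((hi-lo)/2 * t + (hi+lo)/2) on [-1, 1]
    N = degree + 1
    k = np.arange(N)
    nodes = np.cos(np.pi * (k + 0.5) / N)
    f = np.log((hi - lo) / 2 * nodes + (hi + lo) / 2)
    c = np.array([(2.0 / N) * np.sum(f * np.cos(np.pi * j * (k + 0.5) / N))
                  for j in range(N)])
    c[0] /= 2.0
    # rescale A so its spectrum lies in [-1, 1]
    B = (2.0 * A - (hi + lo) * np.eye(n)) / (hi - lo)
    total = 0.0
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        w0, w1 = v, B @ v                    # T_0(B) v and T_1(B) v
        s = c[0] * w0 + c[1] * w1
        for j in range(2, N):
            w0, w1 = w1, 2.0 * B @ w1 - w0   # three-term Chebyshev recurrence
            s += c[j] * w1
        total += v @ s                       # v^T (approx. log A) v
    return total / num_probes
```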

Scalable Iterative Algorithm for Robust Subspace Clustering

no code implementations5 Mar 2015 Sanghyuk Chun, Yung-Kyun Noh, Jinwoo Shin

Subspace clustering (SC) is a popular method for dimensionality reduction of high-dimensional data, as it generalizes Principal Component Analysis (PCA).

Clustering Dimensionality Reduction

Max-Product Belief Propagation for Linear Programming: Applications to Combinatorial Optimization

no code implementations16 Dec 2014 Sejun Park, Jinwoo Shin

Max-product belief propagation (BP) is a popular message-passing heuristic for approximating a maximum-a-posteriori (MAP) assignment in a joint distribution represented by a graphical model (GM).

Combinatorial Optimization

A Graphical Transformation for Belief Propagation: Maximum Weight Matchings and Odd-Sized Cycles

no code implementations NeurIPS 2013 Jinwoo Shin, Andrew E. Gelfand, Misha Chertkov

It was recently shown that BP converges to the correct MAP assignment for a class of loopy GMs with the following common feature: the Linear Programming (LP) relaxation to the MAP problem is tight (has no integrality gap).

Loop Calculus and Bootstrap-Belief Propagation for Perfect Matchings on Arbitrary Graphs

no code implementations5 Jun 2013 Michael Chertkov, Andrew Gelfand, Jinwoo Shin

This manuscript discusses computation of the Partition Function (PF) and the Minimum Weight Perfect Matching (MWPM) on arbitrary, non-bipartite graphs.

Belief Propagation for Linear Programming

no code implementations17 May 2013 Andrew Gelfand, Jinwoo Shin, Michael Chertkov

For this class of problems, MAP inference can be stated as an integer LP with an LP relaxation that coincides with minimization of the BFE at "zero temperature".
