Registration for the CVPR 2019 UG2+ Challenge opened on January 3, 2019, and the competition offers a total prize pool of USD 60,000. CVPR 2019 UG2+ is now recruiting teams from companies, research institutes, and universities worldwide; the registration deadline is April 1, 2019.
Challenge website: http://www.ug2challenge.org/
Call for Workshop Papers & Prize Challenge Participation
UG2+: Bridging the Gap between Computational Photography and Visual Recognition
in conjunction with
The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019)
Long Beach, CA, USA, June 16th-21st, 2019
Topic Description:
What is the current state-of-the-art for image restoration and enhancement applied to degraded images acquired under less than ideal circumstances? Can the application of such algorithms as a pre-processing step improve image interpretability for automatic visual recognition?
Building on the success of the 1st UG2 Prize Challenge workshop held at CVPR 2018, we have significantly expanded the workshop's scope this year to provide an integrated forum for researchers to review recent progress on handling various adverse visual conditions in real-world scenes in robust, effective, and task-oriented ways.
Beyond restoration driven by human viewing, we pay particular attention to degradation models and the related inverse recovery processes that may benefit downstream machine vision tasks. We embrace the most advanced deep learning systems, but remain open to classical, physically grounded models, as well as well-motivated combinations of the two streams.
The workshop will consist of four invited talks, together with peer-reviewed regular papers (oral and poster), and talks associated with winning prize challenge contributions. Original high-quality contributions are solicited on the following topics:
• Novel algorithms for robust object detection, segmentation or recognition on outdoor mobility platforms, such as UAVs, gliders, autonomous cars, outdoor robots, etc.
• Novel algorithms for robust visual understanding in the presence of one or more real-world adverse conditions, such as haze, rain, snow, hail, dust, underwater environments, low illumination, low resolution, etc.
• Novel algorithms for dehazing, deraining, low-light enhancement, or mitigating other real-world adverse conditions
• Models and theories for explaining, quantifying, and optimizing the mutual influence between low-level computational photography tasks (image reconstruction, restoration, or enhancement) and various high-level computer vision tasks.
• Novel physically grounded and/or explanatory models of the degradation and recovery processes underlying real-world images captured under complicated adverse visual conditions.
• Novel evaluation methods and metrics for image restoration and enhancement algorithms, with a particular emphasis on no-reference metrics, since clean “ground truth” for comparison is rarely available for real outdoor images captured under adverse visual conditions (see the sketch after this list).
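As an illustration of the full-reference vs. no-reference distinction in the last bullet, below is a minimal Python sketch, assuming OpenCV and NumPy are installed. It contrasts PSNR, which requires a clean ground-truth image, with a toy no-reference score computed from the restored image alone; the file name and the variance-of-Laplacian proxy are illustrative assumptions, not part of the challenge's evaluation protocol.

    import cv2
    import numpy as np

    def psnr(reference, restored):
        # Full-reference metric: needs a clean ground-truth image.
        mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

    def laplacian_sharpness(image_bgr):
        # Toy no-reference proxy: variance of the Laplacian response.
        # Higher values loosely indicate more high-frequency detail;
        # this is NOT one of the challenge's official metrics.
        gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
        return cv2.Laplacian(gray, cv2.CV_64F).var()

    if __name__ == "__main__":
        # "restored_hazy_frame.png" is a hypothetical output of a dehazing model.
        restored = cv2.imread("restored_hazy_frame.png")
        print("no-reference sharpness:", laplacian_sharpness(restored))
        # psnr() would additionally require the clean scene, which is rarely
        # available for real outdoor footage captured in adverse conditions.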
Submission Instructions:
All submissions will be assessed on their relevance to the workshop theme, novelty, technical quality, clarity, and reproducibility. For each accepted submission, at least one author must attend the workshop and present the paper.
All submissions must follow the standard CVPR format requirements and will undergo double-blind peer review. Accepted papers will be presented at the poster session, with selected papers also presented in an oral session. All accepted papers will be published in the CVPR workshop proceedings.
Best paper awards (a total of $1,000) will be given to the highest-quality original submission(s).
Challenge Description:
We will announce two challenges built on our collected large-scale benchmarks. Teams will be ranked by accuracy on the test sets. All final winners will be required to open-source their code, and winning teams will be invited to submit papers describing their methods to the workshop.
The organizers gratefully acknowledge generous sponsorship from IARPA, NEC Labs, Walmart, Kuaishou, Meitu, and Brain-Inspired Technology, which makes possible a total cash prize of $60,000 for challenge winners. The competition is structured as two challenges, divided further into five sub-challenge tracks.
Challenge 1: Video Object Classification and Detection from Unconstrained Mobility Platforms
It consists of two sub-challenges: (1.1) restoration and enhancement to improve UAV-based object detection; and (1.2) restoration and enhancement to improve UAV-based classification of objects in videos.
Challenge 2: Object Detection in Poor Visibility Environments
It consists of three sub-challenges: (2.1) (semi-)supervised object detection in haze; (2.2) (semi-)supervised face detection in low-light conditions; and (2.3) zero-shot object detection with raindrop occlusions.
More challenge and dataset details can be found on the challenge website.
Important Dates:
1. Paper Submission
May 1, 2019: Paper submission deadline
May 10, 2019: Paper decision notification
May 17, 2019: Paper camera ready
2. Challenge Participation
January 31, 2019: Development kit and registration made available
March 15 – April 15, 2019: Dry run period
April 1, 2019: Registration deadline
May 1, 2019: Challenge submission deadline
May 20, 2019: Challenge results released
June 18, 2019: Most successful and innovative teams present at CVPR 2019 workshop
Organization Committee:
– Walter Scheirer, Assistant Professor, University of Notre Dame, USA
– Zhangyang (Atlas) Wang, Assistant Professor, Texas A&M University, USA
– Jiaying Liu, Associate Professor, Peking University, China
– Wenqi Ren, Assistant Professor, Chinese Academy of Sciences, China
– Wenhan Yang, Postdoctoral Researcher, City University of Hong Kong, Hong Kong, China
– Kevin Bowyer, Schubmehl-Prein Family Professor, University of Notre Dame, USA
– Thomas S. Huang, Maybelle Leland Swanlund Endowed Chair Emeritus, University of Illinois at Urbana-Champaign, USA
– Sreya Banerjee, Graduate Student, University of Notre Dame, USA
– Rosaura Vidal-Mata, Graduate Student, University of Notre Dame, USA
– Ye Yuan, Graduate Student, Texas A&M University, USA
For further questions please contact: Walter Scheirer walter.scheirer@nd.edu, Zhangyang (Atlas) Wang atlaswang@tamu.edu.