Welcome to MReaL! (Machine Reasoning and Learning, pronounced "Me Real"). Current AI differs from human intelligence in crucial ways because our mind is bicameral: the right hemisphere handles perception, much like today's deep learning systems; the left hemisphere handles logical reasoning; and the two operate differently yet collaboratively to produce creative intelligence. To this end, at MReaL we are seeking in-principle reasoning algorithms that combine the complementary advantages of modern deep neural networks for learning representations and old-school symbolic operations for reasoning.

News

Three Papers (One Oral) Accepted by NeurIPS 2020

  • Causal Intervention for Weakly-Supervised Semantic Segmentation (Oral)
  • Interventional Few-Shot Learning
  • Long-Tailed Classification by Keeping the Good and Removing the Bad Momentum Causal Effect

One Paper Accepted by ACM-MM 2020

  • Hierarchical Scene Graph Encoder-Decoder for Image Paragraph Captioning

Two Papers Accepted by ECCV and TMM

  • Feature Pyramid Transformer, ECCV
  • Self-Adaptive Neural Module Transformer for Visual Question Answering, TMM

Eight Papers (Two Oral) Accepted by CVPR 2020

  • Iterative Context-Aware Graph Inference for Visual Dialog (Oral)
  • Unbiased Scene Graph Generation from Biased Training (Oral)
  • Visual Commonsense R-CNN
  • Learning to Segment the Tail
  • Two Causal Principles for Improving Visual Dialog
  • More Grounded Image Captioning by Distilling Image-Text Matching Model
  • Counterfactual Samples Synthesizing for Robust Visual Question Answering
  • Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration

PREMIA Best Student Paper, Silver Award

  • Our paper "Learning to Compose Dynamic Tree Structures for Visual Contexts" received the PREMIA Best Student Paper, Silver Award (2nd Place)

Four Papers (Two Oral) Accepted by ICCV 2019

  • Counterfactual Critic Multi-Agent Training for Scene Graph Generation (Oral)
  • Learning to Assemble Neural Module Tree Networks for Visual Grounding (Oral)
  • Making History Matter: History-Advantage Sequence Training for Visual Dialog
  • Learning to Collocate Neural Modules for Image Captioning

One Paper Accepted by TPAMI

Three Papers Accepted by ACM MM 2019

CVPR 2019 Conference

  • Our team MReaL-BDAI won first place in the Visual Dialogue Challenge
  • Our paper "Learning to Compose Dynamic Tree Structures for Visual Contexts" was selected as a Best Paper Finalist

Two Papers Accepted by TPAMI

Four Papers (Three Oral) Accepted by CVPR 2019

Two Papers Accepted by AAAI 2019

Contact Us