
Category: College & University

General Information

Locality: Berkeley, California



Address: Berkeley Way West, Berkeley, CA 94704, US

Website: bair.berkeley.edu/

Likes: 13679


Facebook Blog





Berkeley AI Research 10.02.2021

Congratulations to David Gaddy and Dan Klein for the Best Paper Award at EMNLP 2020! Paper: https://www.aclweb.org/anthology/2020.emnlp-main.445/ Talk: https://slideslive.com/38939073 https://syncedreview.com//emnlp-2020-best-paper-award-goe/

Berkeley AI Research 27.01.2021

It's thrilling to see how Ray has gone from a research project to production! Join us at #RaySummit to hear the latest in scalable ML and distributed systems. It starts tomorrow!

Berkeley AI Research 12.01.2021

Congratulations to Shiry Ginosar for being selected as a CIFellow!

Berkeley AI Research 04.11.2020

Congratulations to BAIR faculty Peter Bartlett, Bin Yu, Shankar Sastry, Yi Ma, and @JacobSteinhardt, all appointed to @NSF collaborations through their work with the @SimonsFdn! https://t.co/Y5VCVK4ZJv (via Twitter)

Berkeley AI Research 01.11.2020

"Explaining what Explainable AI Did Not": @lvinwan describes how to make models as accurate as neural networks but with an interpretable decision process in the new BAIR blog! https://t.co/nvJois1H8j #xai https://t.co/COqSPqQzgH (via Twitter)

Berkeley AI Research 19.10.2020

RT @realNingYu: New paper on improving minority mode coverage of StyleGAN2: we combine GAN and IMLE objectives to get the best of both worlds. Joint work with @KL_Div, Peng Zhou, Jitendra Malik, Larry Davis, and Mario Fritz. More details in https://t.co/H2pOLGAWCy. Code and models coming soon. https://t.co/jAXt4LlQGN (via Twitter)

Berkeley AI Research 02.10.2020

This NYT article on self-supervised learning also features BAIR faculty @pabbeel and @svlevine! https://t.co/YfTUhPas6g (via Twitter)

Berkeley AI Research 21.09.2020

Congratulations to @andreaa7b, Dexter, and co-authors on being awarded Best Paper at @HRI_2020! https://t.co/WWu1o2db8G https://t.co/WA6NMzujmG (via Twitter)

Berkeley AI Research 18.09.2020

Check out this BAIR work on detecting and exposing general artifacts in images generated by convolutional neural networks (CNNs)! Project website at https://t.co/qofq5ISYIh! https://t.co/Njq022wa09 (via Twitter)

Berkeley AI Research 30.08.2020

The https://t.co/vjvHEGa7tr Digital Transformation Institute is a collaboration between https://t.co/vjvHEGa7tr, @Microsoft, and top universities to research #COVID19 and will be co-directed by BAIR professor Shankar Sastry. https://t.co/nVuBSzkIAa (via Twitter)

Berkeley AI Research 11.08.2020

Representing scenes as neural radiance fields for view synthesis: a BAIR / Google Research / UCSD collaboration! https://t.co/DOHZPoBYhi (via Twitter)

Berkeley AI Research 22.07.2020

RT @shengs1123: Ever wondered why BN is not used in NLP? We found that NLP batch statistics exhibit *large variance* throughout training, which leads to poor BN performance. To address this, we propose Power Norm that achieves SOTA vs. LN/BN. [1/5] https://t.co/lpSJker4wV (via Twitter)
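The thread's core claim, that batch statistics computed over padded, variable-length token batches are much noisier than those computed over fixed-size inputs, can be illustrated with a toy experiment. The NumPy sketch below is purely hypothetical and is not Power Norm or code from the paper; the batch sizes, sequence lengths, and distributions are made-up assumptions chosen only to show the effect.

```python
# Hypothetical illustration (not from the Power Norm paper): per-batch feature
# means fluctuate more for padded, variable-length "NLP-like" batches than for
# fixed-size "image-like" batches, which is the failure mode attributed to BN.
import numpy as np

rng = np.random.default_rng(0)

def variance_of_batch_means(batches):
    """Variance, across batches, of the per-batch mean that BN would use."""
    return np.var([b.mean() for b in batches])

# Image-like: every batch contributes the same number of activations.
image_batches = [rng.normal(size=32 * 1024) for _ in range(200)]

# NLP-like: sentence lengths vary, and padded positions contribute zeros, so the
# effective sample behind each batch statistic changes from batch to batch.
nlp_batches = []
for _ in range(200):
    lengths = rng.integers(5, 128, size=32)        # 32 sentences per batch
    tokens = np.concatenate([rng.normal(size=n) for n in lengths])
    padding = np.zeros(32 * 128 - tokens.size)     # pad to a fixed shape
    nlp_batches.append(np.concatenate([tokens, padding]))

print("variance of batch means, image-like:", variance_of_batch_means(image_batches))
print("variance of batch means, NLP-like:  ", variance_of_batch_means(nlp_batches))
```

Under these made-up settings the NLP-like batches show a noticeably larger variance of batch means, which is consistent with the thread's argument against relying on BN's batch statistics in NLP.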

Berkeley AI Research 20.07.2020

RT @svlevine: In deep RL, the training distr. plays a crucial role, and "standard" on-policy collection is *not* the best choice! It can produce non-convergence. Our method DisCor corrects this, substantially improving results. arxiv: https://t.co/9Cam6b89vo Blog: https://t.co/aU6EGeG3zs https://t.co/vtCHWBgU0J (via Twitter)

Berkeley AI Research 10.07.2020

RT @NVIDIAEmbedded: Meet #BADGR: @berkeley_ai's autonomous self-supervised learning-based navigation system, running on NVIDIA's #JetsonTX2. BADGR can be trained with data gathered in the real world, without any simulation or human supervision. https://t.co/HDfAauPlEF (via Twitter)

Berkeley AI Research 30.06.2020

RT @svlevine: Greg Kahn's blog post describing BADGR, our real-world reinforcement learning system for robotic navigation that gets better and better with experience, is now out! https://t.co/aLQhR25AV3 https://t.co/nj6IfrIWz8 (via Twitter)

Berkeley AI Research 25.06.2020

RT @Eric_Wallace_: Not everyone can afford to train huge neural models. So, we typically *reduce* model size to train/test faster. However, you should actually *increase* model size to speed up training and inference for transformers. Why? [1/6] https://t.co/GcjytCEmox https://t.co/HatYO5GfhP https://t.co/ivKyNo1ve0 (via Twitter)