Profile

Dipl.-Ing. Lucas Beyer
Room 129
Phone: +492418020773
Fax: +492418022731
Email: beyer@vision.rwth-aachen.de
Office hours: write me an email.

Research In Progress :)

About

Busy doing too many cool things. See my homepage for more. Mostly working on these things:

[Image: Lucas' wordcloud]

Note for HiWi applicants: Due to my busy schedule, I cannot currently supervise any HiWi/Master students who do not have hands-on experience with deep learning as well as solid Python and C or C++ coding skills. If you do have both, get in touch!

Students

Current

  • Vitaly Kurin - student assistant (head-orientation, re-ID) and master thesis (Speed up deep RL)

Past

  • Dian Tsai - master thesis (Unsupervised re-ID and continuous clustering)
  • Iaroslava Grinchenko - master thesis (CNNs on head classification)
  • Diego Gomez - student assistant (Tooling for the robot)
  • Vojtek Novak - student assistant (Tooling for the robot)


Publications


DROW: Real-Time Deep Learning based Wheelchair Detection in 2D Range Data
Lucas Beyer, Alexander Hermans, Bastian Leibe
IEEE Robotics and Automation Letters (RA-L) and IEEE Int. Conference on Robotics and Automation (ICRA'17)

TL;DR: Collected & annotated laser detection dataset. Use window around each point to cast vote on detection center.

We introduce the DROW detector, a deep learning based detector for 2D range data. Laser scanners are lighting invariant, provide accurate range data, and typically cover a large field of view, making them interesting sensors for robotics applications. So far, research on detection in laser range data has been dominated by hand-crafted features and boosted classifiers, potentially losing performance due to suboptimal design choices. We propose a Convolutional Neural Network (CNN) based detector for this task. We show how to effectively apply CNNs for detection in 2D range data, and propose a depth preprocessing step and voting scheme that significantly improve CNN performance. We demonstrate our approach on wheelchairs and walkers, obtaining state of the art detection results. Apart from the training data, none of our design choices limits the detector to these two classes, though. We provide a ROS node for our detector and release our dataset containing 464k laser scans, out of which 24k were annotated.

BibTeX:
@article{BeyerHermans2016RAL, title = {{DROW: Real-Time Deep Learning based Wheelchair Detection in 2D Range Data}}, author = {Beyer*, Lucas and Hermans*, Alexander and Leibe, Bastian}, journal = {{IEEE Robotics and Automation Letters (RA-L)}}, year = {2016} }
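
For illustration, here is a minimal numpy sketch of such a per-point voting scheme: cut a window of range readings around each scan point, let a classifier/regressor vote for an object center, and accumulate the votes in a grid whose maxima are the detections. The cnn(window) callable, the window size, and the grid resolution are placeholders for illustration, not the published detector's settings or preprocessing.

    import numpy as np

    def scan_to_xy(ranges, angles):
        # Polar laser readings -> Cartesian points in the sensor frame.
        return np.stack([ranges * np.cos(angles), ranges * np.sin(angles)], axis=1)

    def vote_grid(ranges, angles, cnn, win=41, cell=0.05, extent=15.0):
        # Accumulate per-point votes for object centers in a coarse 2D grid.
        half = win // 2
        n_cells = int(2 * extent / cell)
        grid = np.zeros((n_cells, n_cells))
        xy = scan_to_xy(ranges, angles)
        for i in range(half, len(ranges) - half):
            window = ranges[i - half:i + half + 1] - ranges[i]  # depth-centered cutout
            p_object, dx, dy = cnn(window)                      # class prob. + vote offset
            if p_object < 0.5:
                continue
            cx, cy = xy[i, 0] + dx, xy[i, 1] + dy               # voted center location
            u = int((cx + extent) / cell)
            v = int((cy + extent) / cell)
            if 0 <= u < n_cells and 0 <= v < n_cells:
                grid[v, u] += p_object                          # soft vote
        return grid  # detections are the local maxima of this grid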





In Defense of the Triplet Loss for Person Re-Identification
Alexander Hermans, Lucas Beyer, Bastian Leibe
arXiv:1703.07737

TL;DR: Use triplet loss, hard-mining inside mini-batch performs great, is similar to offline semi-hard mining but much more efficient.

In the past few years, the field of computer vision has gone through a revolution fueled mainly by the advent of large datasets and the adoption of deep convolutional neural networks for end-to-end learning. The person re-identification subfield is no exception to this, thanks to the notable publication of the Market-1501 and MARS datasets and several strong deep learning approaches. Unfortunately, a prevailing belief in the community seems to be that the triplet loss is inferior to using surrogate losses (classification, verification) followed by a separate metric learning step. We show that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms any other published method by a large margin.

BibTeX:
@article{HermansBeyer2017Arxiv, title = {{In Defense of the Triplet Loss for Person Re-Identification}}, author = {Hermans*, Alexander and Beyer*, Lucas and Leibe, Bastian}, journal = {arXiv preprint arXiv:1703.07737}, year = {2017} }
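
As a rough illustration of the batch-hard mining idea from the TL;DR, here is a small numpy sketch that, for every anchor in a mini-batch, picks the largest same-identity distance and the smallest different-identity distance and applies a hinge. The margin value and the plain hinge formulation are assumptions for illustration, not the exact published training setup.

    import numpy as np

    def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
        # Pairwise Euclidean distances between all embeddings in the batch.
        diff = embeddings[:, None, :] - embeddings[None, :, :]
        dist = np.sqrt((diff ** 2).sum(-1) + 1e-12)

        same = labels[:, None] == labels[None, :]                  # same-identity mask
        hardest_pos = np.where(same, dist, -np.inf).max(axis=1)    # farthest positive
        hardest_neg = np.where(~same, dist, np.inf).min(axis=1)    # closest negative

        # Hinge on the hardest pairs of each anchor within the mini-batch.
        return np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean()

Because the mining happens inside the mini-batch, no separate offline hard-example search over the whole dataset is needed, which is the efficiency argument made in the TL;DR above.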





The STRANDS Project: Long-Term Autonomy in Everyday Environments
Nick Hawes, Chris Burbridge, Ferdian Jovan, Lars Kunze, Bruno Lacerda, Lenka Mudrová, Jay Young, Jeremy Wyatt, Denise Hebesberger, Tobias Körtner, Rares Ambrus, Nils Bore, John Folkesson, Patric Jensfelt, Lucas Beyer, Alexander Hermans, Bastian Leibe, Aitor Aldoma, Thomas Fäulhammer, Michael Zillich, Markus Vincze, Muhannad Al-Omari, Eris Chinellato, Paul Duckworth, Yiannis Gatsoulis, David Hogg, Anthony Cohn, Christian Dondrup, Jaime Fentanes, Tomas Krajník, João Santos, Tom Duckett, Marc Hanheide
IEEE Robotics and Automation Magazine

Thanks to the efforts of our community, autonomous robots are becoming capable of ever more complex and impressive feats. There is also an increasing demand for, perhaps even an expectation of, autonomous capabilities from end-users. However, much research into autonomous robots rarely makes it past the stage of a demonstration or experimental system in a controlled environment. If we don't confront the challenges presented by the complexity and dynamics of real end-user environments, we run the risk of our research becoming irrelevant or ignored by the industries who will ultimately drive its uptake. In the STRANDS project we are tackling this challenge head-on. We are creating novel autonomous systems, integrating state-of-the-art research in artificial intelligence and robotics into robust mobile service robots, and deploying these systems for long-term installations in security and care environments. To date, over four deployments, our robots have been operational for a combined duration of 2545 hours (or a little over 106 days), covering 116km while autonomously performing end-user defined tasks. In this article we present an overview of the motivation and approach of the STRANDS project, describe the technology we use to enable long, robust autonomous runs in challenging environments, and describe how our robots are able to use these long runs to improve their own performance through various forms of learning.





Biternion Nets: Continuous Head Pose Regression from Discrete Training Labels
Lucas Beyer, Alexander Hermans, Bastian Leibe
German Conference on Pattern Recognition (GCPR'15) - Oral

TL;DR: By doing the obvious thing of encoding an angle φ as (cos φ, sin φ), we can do cool things and simplify data labeling requirements.

While head pose estimation has been studied for some time, continuous head pose estimation is still an open problem. Most approaches either cannot deal with the periodicity of angular data or require very fine-grained regression labels. We introduce biternion nets, a CNN-based approach that can be trained on very coarse regression labels and still estimate fully continuous 360° head poses. We show state-of-the-art results on several publicly available datasets. Finally, we demonstrate how easy it is to record and annotate a new dataset with coarse orientation labels in order to obtain continuous head pose estimates using our biternion nets.

BibTeX:
@inproceedings{Beyer2015BiternionNets, author = {Lucas Beyer and Alexander Hermans and Bastian Leibe}, title = {Biternion Nets: Continuous Head Pose Regression from Discrete Training Labels}, booktitle = {Pattern Recognition}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, volume = {9358}, pages = {157-168}, year = {2015}, isbn = {978-3-319-24946-9}, doi = {10.1007/978-3-319-24947-6_13}, ee = {http://lucasb.eyer.be/academic/biternions/biternions_gcpr15.pdf}, }
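
The encoding itself is easy to sketch: an angle φ becomes the unit vector (cos φ, sin φ), the network's 2D output is L2-normalized and trained with a cosine loss, and atan2 maps predictions back to angles. The minimal numpy sketch below only illustrates this representation, not the paper's CNN architecture or training details.

    import numpy as np

    def to_biternion(phi):
        # Encode an angle (radians) as a point on the unit circle.
        return np.stack([np.cos(phi), np.sin(phi)], axis=-1)

    def normalize(v, eps=1e-8):
        return v / (np.linalg.norm(v, axis=-1, keepdims=True) + eps)

    def cosine_loss(pred, phi_target):
        # 1 - cos(angular difference); periodicity is handled for free
        # because prediction and target both live on the unit circle.
        return (1.0 - (normalize(pred) * to_biternion(phi_target)).sum(-1)).mean()

    def to_angle(b):
        # Decode a (possibly unnormalized) biternion back to an angle.
        return np.arctan2(b[..., 1], b[..., 0])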





SPENCER: A Socially Aware Service Robot for Passenger Guidance and Help in Busy Airports
Rudolph Triebel, Kai Oliver Arras, Rachid Alami, Lucas Beyer, Stefan Breuers, Raja Chatila, Mohamed Chetouani, Daniel Cremers, Vanessa Evers, Michelangelo Fiore, Hayley Hung, Omar A. Ramírez Islas, Michiel Joosse, Harmish Khambhaita, Tomasz Kucner, Bastian Leibe, Achim Lilienthal, Timm Linder, Manja Lohse, Martin Magnusson, Billy Okal, Luigi Palmieri, Umer Rafi, Marieke van Rooij, Lu Zhang
Field and Service Robotics (FSR'15)

We present an ample description of a socially compliant mobile robotic platform, which is developed in the EU-funded project SPENCER. The purpose of this robot is to assist, inform and guide passengers in large and busy airports. One particular aim is to bring travellers of connecting flights conveniently and efficiently from their arrival gate to the passport control. The uniqueness of the project stems from the strong demand of service robots for this application with a large potential impact for the aviation industry on one side, and on the other side from the scientific advancements in social robotics, brought forward and achieved in SPENCER. The main contributions of SPENCER are novel methods to perceive, learn, and model human social behavior and to use this knowledge to plan appropriate actions in real-time for mobile platforms. In this paper, we describe how the project advances the fields of detection and tracking of individuals and groups, recognition of human social relations and activities, normative human behavior learning, socially-aware task and motion planning, learning socially annotated maps, and conducting empirical experiments to assess socio-psychological effects of normative robot behaviors.

BibTeX:
@article{triebel2015spencer, title={SPENCER: a socially aware service robot for passenger guidance and help in busy airports}, author={Triebel, Rudolph and Arras, Kai and Alami, Rachid and Beyer, Lucas and Breuers, Stefan and Chatila, Raja and Chetouani, Mohamed and Cremers, Daniel and Evers, Vanessa and Fiore, Michelangelo and Hung, Hayley and Islas Ramírez, Omar A. and Joosse, Michiel and Khambhaita, Harmish and Kucner, Tomasz and Leibe, Bastian and Lilienthal, Achim J. and Linder, Timm and Lohse, Manja and Magnusson, Martin and Okal, Billy and Palmieri, Luigi and Rafi, Umer and Rooij, Marieke van and Zhang, Lu}, journal={Field and Service Robotics (FSR)}, year={2015}, publisher={University of Toronto} }





Streaming Data from HDD to GPUs for Sustained Peak Performance
Lucas Beyer, Paolo Bientinesi
International European Conference on Parallel and Distributed Computing (Euro-Par'13) - Oral

In the context of the genome-wide association studies (GWAS), one has to solve long sequences of generalized least-squares problems; such a task has two limiting factors: execution time --often in the range of days or weeks-- and data management --data sets in the order of Terabytes. We present an algorithm that obviates both issues. By pipelining the computation, and thanks to a sophisticated transfer strategy, we stream data from hard disk to main memory to GPUs and achieve sustained peak performance; with respect to a highly-optimized CPU implementation, our algorithm shows a speedup of 2.6x. Moreover, the approach lends itself to multiple GPUs and attains almost perfect scalability. When using 4 GPUs, we observe speedups of 9x over the aforementioned implementation, and 488x over a widespread biology library.

BibTeX:
@inproceedings{Beyer2013GWAS, author = {Lucas Beyer and Paolo Bientinesi}, title = {Streaming Data from HDD to GPUs for Sustained Peak Performance}, booktitle = {Euro-Par}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, volume = {8097}, pages = {788-799}, year = {2013}, isbn = {3642400477}, ee = {http://arxiv.org/abs/1302.4332}, }
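
The pipelining idea can be sketched as double buffering: while one chunk of the data set is being processed (on the GPU in the paper), the next chunk is already being read from disk, so transfer time hides behind compute. The toy Python sketch below uses a background thread and placeholder load/solve routines; the file layout, chunk sizes, and the generalized least-squares solve itself are assumptions, not the actual implementation.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def load_chunk(path, i, chunk_rows, cols):
        # Read one contiguous block of a large row-major float64 matrix on disk.
        with open(path, "rb") as f:
            f.seek(i * chunk_rows * cols * 8)
            buf = np.fromfile(f, dtype=np.float64, count=chunk_rows * cols)
        return buf.reshape(-1, cols)

    def stream_compute(path, n_chunks, chunk_rows, cols, solve):
        # Double buffering: chunk i+1 is read while chunk i is being processed.
        results = []
        with ThreadPoolExecutor(max_workers=1) as io:
            future = io.submit(load_chunk, path, 0, chunk_rows, cols)
            for i in range(n_chunks):
                chunk = future.result()                          # prefetched chunk
                if i + 1 < n_chunks:
                    future = io.submit(load_chunk, path, i + 1, chunk_rows, cols)
                results.append(solve(chunk))                     # overlaps the next read
        return results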





Exploiting Graphics Adapters for Computational Biology
Lucas Beyer
Diploma Thesis (2012)

Accelerate Genome-Wide Association Studies (GWAS) by performing the most demanding computation on the GPU in a batched, streamed fashion. Involves huge data size (terabytes), streaming, asynchronicity, parallel computation and some more buzzwords.

BibTeX:
@MastersThesis{Beyer2012GWAS, author = {Lucas Beyer}, title = {{Exploiting Graphics Adapters for Computational Biology}}, school = {RWTH Aachen (AICES)}, address = {Aachen, Germany}, year = {2012}, }



