
Luke Roberto

PhD Student


Office Location

805 Columbus Avenue
5th Floor, Interdisciplinary Science & Engineering Complex (ISEC)
Boston, MA 02120


Luke Roberto is a PhD student at Northeastern’s College of Computer and Information Science focusing on robotics and artificial intelligence, advised by Professor Robert Platt. Roberto earned his BS in Mechanical Engineering at the Massachusetts Institute of Technology. He comes from Cleves, Ohio.

Roberto’s interests lie at the intersection of robotic control, planning, and perception. Using data-driven approaches such as reinforcement learning, he studies policies that leverage knowledge from each of these disciplines. One problem he hopes to solve is reducing the amount of data needed to train deep reinforcement learning models, using transfer and hierarchical learning approaches. The advancement of robotic dexterity that is robust to real-world noise and model error fascinates him and motivates him to contribute to robotics.


  • BS in Mechanical Engineering, Massachusetts Institute of Technology

About Me

  • Hometown: Cleves, OH
  • Field of Study: Robotics
  • PhD Advisor: Robert Platt

What are the specifics of your graduate education (thus far)?

I will be starting my PhD this coming fall under the supervision of Robert Platt. My focus will be on robotics and artificial intelligence.

What are your research interests in a bit more detail? Is your current academic/research path what you always had in mind for yourself, or has it evolved somewhat? If so, how/why?

Currently, my interests lie at the intersection of robotic control, planning, and perception. These fields have historically been quite separate, but data-driven approaches like reinforcement learning now give us the ability to learn policies that leverage knowledge from each of these disciplines.

What’s one problem you’d like to solve with your research/work?

One problem that I would like to solve with my research is reducing the amount of data needed to train deep reinforcement learning models. I believe transfer and hierarchical learning approaches can help leverage the large data sets we currently have to generalize to broader use cases.

What aspect of what you do is most interesting/fascinating to you? What aspects of your research (findings, angles, problems you’re solving) might surprise others?

Currently, the most fascinating thing to me is that we are finally reaching a time where robots can actually perform dexterous tasks that are quite robust to all the noise and model error we see in the real world. Rather than rigidly learning how to do specific tasks with precision in structured environments, we are starting to be able to have them perform higher-level, unstructured tasks in noisy and sometimes unknown environments.

What are your research/career goals, going forward?

Being relatively new to the field, I would like to learn as much as I possibly can and find the ways in which I can best contribute to robotics.